Science.gov

Sample records for field method based

  1. Footstep Planning Based on Univector Field Method for Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Hong, Youngdae; Kim, Jong-Hwan

    This paper proposes a footstep planning algorithm, based on the univector field method and optimized by evolutionary programming, that enables a humanoid robot to reach a target point in a dynamic environment. The univector field method is employed to determine the moving direction of the humanoid robot at every footstep. A modifiable walking pattern generator, which extends the conventional 3D-LIPM method by allowing ZMP variation during the single support phase, is used to generate the joint trajectories that realize the planned footsteps. The proposed algorithm enables the humanoid robot not only to avoid static or moving obstacles but also to step over static obstacles. The performance of the proposed algorithm is demonstrated by computer simulations using a model of the small-sized humanoid robot HanSaRam (HSR)-VIII.
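
    The abstract does not give the univector field equations, so the following Python sketch only illustrates the general idea: a unit heading vector attracted toward the goal and deflected around a circular obstacle. The blending rule, the gain `k_rep`, and the activation radius are assumptions, not the paper's values.

    ```python
    import numpy as np

    def univector_direction(pos, goal, obstacle, obstacle_radius, k_rep=1.5):
        """Illustrative univector-field heading: attraction toward the goal blended
        with repulsion near a circular obstacle (gains and blending assumed)."""
        to_goal = goal - pos
        attract = to_goal / (np.linalg.norm(to_goal) + 1e-9)      # unit vector toward goal

        to_obst = pos - obstacle
        d = np.linalg.norm(to_obst)
        if d < 3.0 * obstacle_radius:                              # only act near the obstacle
            repel = to_obst / (d + 1e-9)
            w = np.exp(-(d - obstacle_radius) / (k_rep * obstacle_radius))
            blended = (1.0 - w) * attract + w * repel              # blend attraction and repulsion
        else:
            blended = attract
        return blended / (np.linalg.norm(blended) + 1e-9)          # unit "univector" heading

    # Example: robot at the origin, goal at (5, 0), obstacle at (2.5, 0.5)
    heading = univector_direction(np.array([0.0, 0.0]), np.array([5.0, 0.0]),
                                  np.array([2.5, 0.5]), obstacle_radius=0.5)
    print(heading)
    ```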

  2. DO TIE LABORATORY BASED METHODS REALLY REFLECT FIELD CONDITIONS

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...

  3. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  4. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution of the resulting matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence": the optimal regularized solution is obtained after a few iterations, and if the iteration is not stopped, the method converges to a solution that is generally corrupted by the measurement errors. For these methods the number of iterations plays the role of the regularization parameter. We focus on the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR (LSQR), and the recently proposed hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results. PMID:15759691
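
    A minimal sketch of the semi-convergence idea described above: conjugate gradients applied to the normal equations (CGLS) of a noisy ill-conditioned system, with the iteration count acting as the regularization parameter and a discrepancy-principle stopping rule. The test matrix and noise level are synthetic assumptions, not the holography transfer matrices of the paper.

    ```python
    import numpy as np

    def cgls(A, b, n_iter, noise_level=None):
        """Conjugate gradients on the normal equations A^T A x = A^T b.
        Stops early (discrepancy principle) once ||A x - b|| <= noise_level."""
        x = np.zeros(A.shape[1])
        r = b - A @ x
        s = A.T @ r
        p = s.copy()
        norm_s_old = s @ s
        for k in range(n_iter):
            q = A @ p
            alpha = norm_s_old / (q @ q)
            x += alpha * p
            r -= alpha * q
            if noise_level is not None and np.linalg.norm(r) <= noise_level:
                break                         # semi-convergence: stop before fitting the noise
            s = A.T @ r
            norm_s_new = s @ s
            p = s + (norm_s_new / norm_s_old) * p
            norm_s_old = norm_s_new
        return x, k + 1

    # Synthetic ill-conditioned example (assumption, for illustration only)
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 100)) @ np.diag(np.logspace(0, -6, 100))
    x_true = rng.standard_normal(100)
    noise = 1e-3 * rng.standard_normal(200)
    b = A @ x_true + noise
    x_reg, iters_used = cgls(A, b, n_iter=100, noise_level=np.linalg.norm(noise))
    ```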

  5. VSP wave field separation: An optimization method based on block relaxation and singular value thresholding

    NASA Astrophysics Data System (ADS)

    Gao, Lei; Chen, Wenchao; Wang, Baoli; Gao, Jinghuai

    2014-05-01

    In this paper, we present a new high-fidelity method for wave field separation of vertical seismic profiling (VSP) data. The method preserves the waveform characteristics and the amplitude variation along the propagation path. The basic assumption is that, after each event is flattened, the data of a regular wave form a low-rank matrix. We then construct an optimization problem that formulates the VSP wave field separation. To solve it, we combine block relaxation (BR) with singular value thresholding (SVT) in a new algorithm. We apply the proposed method to both synthetic and real data and compare the results with those of the median-filter-based method, which is widely used in engineering practice. We conclude that the proposed method offers wave field separation with higher fidelity and a higher signal-to-noise ratio (SNR).
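
    The block-relaxation details are not given in the abstract; the sketch below only shows the singular value thresholding (SVT) building block, i.e. the low-rank step applied to an event-flattened data panel. The synthetic panel and threshold value are assumptions for illustration.

    ```python
    import numpy as np

    def singular_value_threshold(D, tau):
        """Soft-threshold the singular values of D (proximal operator of the nuclear
        norm), used as the low-rank step on an event-flattened wave panel."""
        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        s_thresh = np.maximum(s - tau, 0.0)
        return (U * s_thresh) @ Vt

    # Synthetic "flattened wave" panel: a rank-1 event plus noise (illustrative only)
    rng = np.random.default_rng(1)
    n_traces, n_samples = 48, 512
    wavelet = np.sin(np.linspace(0, 8 * np.pi, n_samples)) * np.hanning(n_samples)
    panel = np.outer(np.ones(n_traces), wavelet) + 0.2 * rng.standard_normal((n_traces, n_samples))
    separated = singular_value_threshold(panel, tau=5.0)   # threshold assumed
    ```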

  6. A comparison of field-based similarity searching methods: CatShape, FBSS, and ROCS.

    PubMed

    Moffat, Kirstin; Gillet, Valerie J; Whittle, Martin; Bravi, Gianpaolo; Leach, Andrew R

    2008-04-01

    Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their respective rigid methods; however, the increased performance did not justify the additional computational cost required. PMID:18351728

  7. A new gradient shimming method based on undistorted field map of B0 inhomogeneity.

    PubMed

    Bao, Qingjia; Chen, Fang; Chen, Li; Song, Kan; Liu, Zao; Liu, Chaoyang

    2016-04-01

    Most existing gradient shimming methods for NMR spectrometers estimate field maps that resolve the B0 inhomogeneity spatially from dual gradient-echo (GRE) images acquired at different echo times. However, the distortions induced by the B0 inhomogeneity that is always present in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate shimming. This work proposes a new gradient shimming method based on an undistorted field map of the B0 inhomogeneity obtained by a more accurate field map estimation technique. Compared to the traditional field map estimation method, the new method exploits both the positive and negative polarities of the frequency-encoding gradient to eliminate the distortions caused by B0 inhomogeneity in the field map. A corresponding automatic post-processing procedure is then introduced to obtain the undistorted B0 field map, based on the invariance of the B0 inhomogeneity and the changing polarity of the encoding gradient. Experimental results on both simulated and real gradient shimming tests demonstrate the high performance of the new method. PMID:26851711
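
    The traditional dual-echo field map that the abstract refers to can be written down directly: the off-resonance frequency follows from the phase evolution between the two echo times. The sketch below assumes complex GRE images and echo times as inputs; it does not include the paper's distortion correction with reversed gradient polarity.

    ```python
    import numpy as np

    def dual_echo_field_map(img_te1, img_te2, te1, te2):
        """Conventional dual gradient-echo field map: off-resonance frequency (Hz)
        from the phase difference between two echo times (no distortion correction)."""
        phase_diff = np.angle(img_te2 * np.conj(img_te1))      # wrapped phase difference
        return phase_diff / (2.0 * np.pi * (te2 - te1))        # Hz, valid while unaliased

    # Illustrative single-voxel check: 50 Hz off-resonance, echo times assumed
    te1, te2 = 2.0e-3, 4.0e-3                                  # seconds
    true_hz = 50.0
    s1 = np.exp(2j * np.pi * true_hz * te1)
    s2 = np.exp(2j * np.pi * true_hz * te2)
    print(dual_echo_field_map(np.array([s1]), np.array([s2]), te1, te2))  # ~50 Hz
    ```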

  8. A new gradient shimming method based on undistorted field map of B0 inhomogeneity

    NASA Astrophysics Data System (ADS)

    Bao, Qingjia; Chen, Fang; Chen, Li; Song, Kan; Liu, Zao; Liu, Chaoyang

    2016-04-01

    Most existing gradient shimming methods for NMR spectrometers estimate field maps that resolve the B0 inhomogeneity spatially from dual gradient-echo (GRE) images acquired at different echo times. However, the distortions induced by the B0 inhomogeneity that is always present in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate shimming. This work proposes a new gradient shimming method based on an undistorted field map of the B0 inhomogeneity obtained by a more accurate field map estimation technique. Compared to the traditional field map estimation method, the new method exploits both the positive and negative polarities of the frequency-encoding gradient to eliminate the distortions caused by B0 inhomogeneity in the field map. A corresponding automatic post-processing procedure is then introduced to obtain the undistorted B0 field map, based on the invariance of the B0 inhomogeneity and the changing polarity of the encoding gradient. Experimental results on both simulated and real gradient shimming tests demonstrate the high performance of the new method.

  9. A flat-field correction method for photon-counting-detector-based micro-CT

    NASA Astrophysics Data System (ADS)

    Park, So E.; Kim, Jae G.; Hegazy, M. A. A.; Cho, Min H.; Lee, Soo Y.

    2014-03-01

    As low-dose computed tomography becomes a hot issue in clinical x-ray imaging, photon-counting detectors have drawn great attention as alternative x-ray image sensors. Even though photon-counting image sensors have several advantages over integration-type sensors, such as low noise and high DQE, they are known to be more sensitive to experimental conditions such as temperature and electronic drift. In particular, a time-varying detector response during the CT scan is troublesome in photon-counting-detector-based CT. To overcome the time-varying behavior of the image sensor during the scan, we developed a flat-field correction method together with an automated scanning mechanism. We acquired flat-field images and projection data alternately at every view. When taking the flat-field image, we moved the imaging sample out of the field of view with the aid of a computer-controlled linear positioning stage. We then corrected the flat-field effects view by view with the flat-field image taken at the corresponding view. With a CdTe photon-counting image sensor (XRI-UNO, IMATEK), we took CT images of small bugs. The CT images reconstructed with the proposed flat-field correction method were much superior to those reconstructed with the conventional flat-field correction method.
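
    As an illustration of the view-by-view correction described above, the sketch below normalizes each projection by the flat-field frame acquired at the same view and takes the negative log to obtain line integrals. Array names, detector dimensions, and the dark-frame handling are assumptions, not details from the paper.

    ```python
    import numpy as np

    def view_by_view_flat_field(projections, flats, dark=None, eps=1e-6):
        """Normalize each projection by the flat-field frame of the same view,
        then take -log to obtain line integrals for CT reconstruction."""
        if dark is None:
            dark = np.zeros_like(projections[0])
        corrected = np.empty_like(projections, dtype=float)
        for v, (proj, flat) in enumerate(zip(projections, flats)):
            ratio = (proj - dark) / np.maximum(flat - dark, eps)
            corrected[v] = -np.log(np.clip(ratio, eps, None))
        return corrected

    # Illustrative shapes (assumed): 360 views of a 128 x 128 detector
    views, rows, cols = 360, 128, 128
    projections = np.random.poisson(800.0, size=(views, rows, cols)).astype(float)
    flats = np.random.poisson(1000.0, size=(views, rows, cols)).astype(float)
    sinogram = view_by_view_flat_field(projections, flats)
    ```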

  10. Systems and Methods for Implementing Robust Carbon Nanotube-Based Field Emitters

    NASA Technical Reports Server (NTRS)

    Manohara, Harish (Inventor); Kristof, Valerie (Inventor); Toda, Risaku (Inventor)

    2015-01-01

    Systems and methods in accordance with embodiments of the invention implement carbon nanotube-based field emitters. In one embodiment, a method of fabricating a carbon nanotube field emitter includes: patterning a substrate with a catalyst, where the substrate has thereon disposed a diffusion barrier layer; growing a plurality of carbon nanotubes on at least a portion of the patterned catalyst; and heating the substrate to an extent where it begins to soften such that at least a portion of at least one carbon nanotube becomes enveloped by the softened substrate.

  11. Time-domain incident-field extrapolation technique based on the singularity-expansion method

    SciTech Connect

    Klaasen, J.J.

    1991-05-01

    In this report, a method is presented to extrapolate measurements from Nuclear Electromagnetic Pulse (NEMP) assessments directly in the time domain. The method is based on a time-domain extrapolation function obtained from the Singularity Expansion Method representation of the measured incident field of the NEMP simulator. Once the time-domain extrapolation function is determined, the responses recorded during an assessment can be extrapolated simply by convolving them with it. It is found that, to obtain useful extrapolated responses, the incident field measurement needs to be made minimum phase; otherwise, unbounded results can be obtained. Results obtained with this technique are presented, using data from actual assessments.

  12. A Fully Automatic Method for Gridding Bright Field Images of Bead-Based Microarrays.

    PubMed

    Datta, Abhik; Wai-Kin Kong, Adams; Yow, Kin-Choong

    2016-07-01

    In this paper, a fully automatic method for gridding bright field images of bead-based microarrays is proposed. There have been numerous techniques developed for gridding fluorescence images of traditional spotted microarrays but, to the best of our knowledge, no algorithm has yet been developed for gridding bright field images of bead-based microarrays. The proposed gridding method is designed for automatic quality control during fabrication and assembly of bead-based microarrays. The method begins by estimating the grid parameters using an evolutionary algorithm. This is followed by a grid-fitting step that rigidly aligns an ideal grid with the image. Finally, a grid refinement step deforms the ideal grid to better fit the image. The grid fitting and refinement are performed locally and the final grid is a nonlinear (piecewise affine) grid. To deal with extreme corruptions in the image, the initial grid parameter estimation and grid-fitting steps employ robust search techniques. The proposed method does not have any free parameters that need tuning. The method is capable of identifying the grid structure even in the presence of extreme amounts of artifacts and distortions. Evaluation results on a variety of images are presented. PMID:26011899

  13. Nonlinear force-free extrapolation of the coronal magnetic field based on the magnetohydrodynamic relaxation method

    SciTech Connect

    Inoue, S.; Magara, T.; Choe, G. S.; Kim, K. S.; Pandey, V. S.; Shiota, D.; Kusano, K.

    2014-01-01

    We develop a nonlinear force-free field (NLFFF) extrapolation code based on the magnetohydrodynamic (MHD) relaxation method. We extend the classical MHD relaxation method in two important ways. First, we introduce an algorithm initially proposed by Dedner et al. to effectively clean the numerical errors associated with ∇ · B. Second, a multigrid-type method is implemented in our NLFFF code to permit direct analysis of high-resolution magnetogram data. As a result of these two implementations, we successfully extrapolated the high-resolution force-free field introduced by Low and Lou with better accuracy and in a drastically shorter time. We also applied our extrapolation method to the MHD solution obtained from the flux-emergence simulation by Magara. We found that the NLFFF extrapolation may be less effective at heights above about half the domain, where some magnetic loops are in a state of continuous upward expansion. However, the inverse S-shaped structure consisting of the sheared and twisted loops formed in the lower region is captured well by our NLFFF extrapolation method. We further discuss how well these sheared and twisted fields are reconstructed by quantitatively estimating the magnetic topology and twist.

  14. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model; the array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more loudspeaker channels than microphones. Multichannel inverse filtering with regularization is exploited to deal with the ill-posedness of the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrate that all upmixing methods improve the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yields the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality. PMID:24437767
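
    The abstract mentions multichannel inverse filtering with regularization; a common way to realize this is a per-frequency Tikhonov-regularized least-squares solve for the loudspeaker driving signals given a propagation matrix and target pressures. The sketch below assumes that formulation, with all dimensions and the regularization constant chosen for illustration.

    ```python
    import numpy as np

    def regularized_inverse_filter(G, p_target, beta=1e-2):
        """Per-frequency Tikhonov solve: driving signals q minimizing
        ||G q - p_target||^2 + beta ||q||^2, with G mapping sources to field points."""
        GH = G.conj().T
        return np.linalg.solve(GH @ G + beta * np.eye(G.shape[1]), GH @ p_target)

    # Illustrative dimensions (assumed): 16 loudspeakers, 24 control points, one frequency bin
    rng = np.random.default_rng(2)
    G = rng.standard_normal((24, 16)) + 1j * rng.standard_normal((24, 16))
    p_target = rng.standard_normal(24) + 1j * rng.standard_normal(24)
    q = regularized_inverse_filter(G, p_target)
    print(np.linalg.norm(G @ q - p_target))   # residual reproduction error
    ```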

  15. A novel autonomous real-time position method based on polarized light and geomagnetic field

    PubMed Central

    Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Wang, Lu; Wang, Zhiwen

    2015-01-01

    Many animals exploit polarized light in order to calibrate their magnetic compasses for navigation. For example, some birds are equipped with biological magnetic and celestial compasses enabling them to migrate between the Western and Eastern Hemispheres. The Vikings' ability to derive true direction from polarized light is also widely accepted, although their amazing navigational capabilities are still not completely understood. Inspired by these ancient navigational skills, we present here a combined real-time positioning method based on the use of polarized light and the geomagnetic field. The new method works independently of any artificial signal source, accumulates no errors, and can obtain the position and orientation directly. The device simply consists of two polarized light sensors, a 3-axis compass and a computer. Field experiments demonstrate the device's performance. PMID:25851793

  16. A novel autonomous real-time position method based on polarized light and geomagnetic field

    NASA Astrophysics Data System (ADS)

    Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Wang, Lu; Wang, Zhiwen

    2015-04-01

    Many animals exploit polarized light in order to calibrate their magnetic compasses for navigation. For example, some birds are equipped with biological magnetic and celestial compasses enabling them to migrate between the Western and Eastern Hemispheres. The Vikings' ability to derive true direction from polarized light is also widely accepted, although their amazing navigational capabilities are still not completely understood. Inspired by these ancient navigational skills, we present here a combined real-time positioning method based on the use of polarized light and the geomagnetic field. The new method works independently of any artificial signal source, accumulates no errors, and can obtain the position and orientation directly. The device simply consists of two polarized light sensors, a 3-axis compass and a computer. Field experiments demonstrate the device's performance.

  17. A novel autonomous real-time position method based on polarized light and geomagnetic field.

    PubMed

    Wang, Yinlong; Chu, Jinkui; Zhang, Ran; Wang, Lu; Wang, Zhiwen

    2015-01-01

    Many animals exploit polarized light in order to calibrate their magnetic compasses for navigation. For example, some birds are equipped with biological magnetic and celestial compasses enabling them to migrate between the Western and Eastern Hemispheres. The Vikings' ability to derive true direction from polarized light is also widely accepted, although their amazing navigational capabilities are still not completely understood. Inspired by these ancient navigational skills, we present here a combined real-time positioning method based on the use of polarized light and the geomagnetic field. The new method works independently of any artificial signal source, accumulates no errors, and can obtain the position and orientation directly. The device simply consists of two polarized light sensors, a 3-axis compass and a computer. Field experiments demonstrate the device's performance. PMID:25851793

  18. Identifying protein interaction subnetworks by a bagging Markov random field-based method

    PubMed Central

    Chen, Li; Xuan, Jianhua; Riggins, Rebecca B.; Wang, Yue; Clarke, Robert

    2013-01-01

    Identification of differentially expressed subnetworks from protein–protein interaction (PPI) networks has become increasingly important to our global understanding of the molecular mechanisms that drive cancer. Several methods have been proposed for PPI subnetwork identification, but the dependency among network member genes is not explicitly considered, leaving many important hub genes largely unidentified. We present a new method, based on a bagging Markov random field (BMRF) framework, to improve subnetwork identification for mechanistic studies of breast cancer. The method follows a maximum a posteriori principle to form a novel network score that explicitly considers pairwise gene interactions in PPI networks, and it searches for subnetworks with maximal network scores. To improve robustness across data sets, a bagging scheme based on bootstrapping samples is implemented to statistically select high-confidence subnetworks. We first compared the BMRF-based method with existing methods on simulation data to demonstrate its improved performance. We then applied our method to breast cancer data to identify PPI subnetworks associated with breast cancer progression and/or tamoxifen resistance. The experimental results show that the BMRF approach not only achieves improved prediction performance when tested on independent data sets, but also reveals biologically meaningful subnetworks relevant to breast cancer and tamoxifen resistance. PMID:23161673

  19. A Novel Microaneurysms Detection Method Based on Local Applying of Markov Random Field.

    PubMed

    Ganjee, Razieh; Azmi, Reza; Moghadam, Mohsen Ebrahimi

    2016-03-01

    Diabetic Retinopathy (DR) is one of the most common complications of long-term diabetes. It is a progressive disease and, by damaging the retina, it can finally result in blindness. Since microaneurysms (MAs) appear as a first sign of DR in the retina, early detection of this lesion is an essential step in automatic detection of DR. In this paper, a new MA detection method is presented. The proposed approach consists of two main steps. In the first step, MA candidates are detected by locally applying a Markov random field model (MRF). In the second step, these candidate regions are categorized to identify the correct MAs using 23 features based on shape, intensity and the Gaussian distribution of MA intensity. The proposed method is evaluated on DIARETDB1, a standard and publicly available database in this field. Evaluation on this database resulted in an average sensitivity of 0.82 for a confidence level of 75 as the ground truth. The results show that our method is able to detect MAs with low contrast against the background while its performance remains comparable to other state-of-the-art approaches. PMID:26779642

  20. A novel prediction method about single components of analog circuits based on complex field modeling.

    PubMed

    Zhou, Jingyu; Tian, Shulin; Yang, Chenglin

    2014-01-01

    Little research has addressed prognostics for analog circuits. The few existing methods lack a connection with circuit analysis when extracting and calculating features, so the fault indicator (FI) calculation often lacks a rational basis, which degrades prognostic performance. To solve this problem, this paper proposes a novel prediction method for single components of analog circuits based on complex-field modeling. Because single-component faults are the most numerous in analog circuits, the method starts from the circuit structure, analyzes the transfer function of the circuit, and performs complex-field modeling. Then, through an established parameter-scanning model in the complex field, it analyzes the relationship between parameter variation and the degeneration of single components in order to obtain a more reasonable FI feature set. From the obtained FI feature set, it establishes a novel model of the degeneration trend of single components in analog circuits. Finally, it uses a particle filter (PF) to update the model parameters and predicts the remaining useful performance (RUP) of the single components. Since the calculation of the FI feature set is more reasonable, prediction accuracy is improved to some extent. The conclusions are verified by experiments. PMID:25147853
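
    The abstract states that a particle filter updates the degradation-model parameters. Purely as an illustration of that step, the sketch below runs one generic bootstrap particle-filter update on a scalar degradation state with an assumed drift model and Gaussian measurement noise; none of the numbers come from the paper.

    ```python
    import numpy as np

    def particle_filter_step(particles, weights, measurement, drift=0.01,
                             process_std=0.005, meas_std=0.02, rng=None):
        """One bootstrap particle-filter step: propagate an assumed drift model,
        reweight by the measurement likelihood, and resample."""
        rng = rng or np.random.default_rng()
        particles = particles + drift + process_std * rng.standard_normal(len(particles))
        likelihood = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
        weights = weights * likelihood
        weights /= weights.sum()
        idx = rng.choice(len(particles), size=len(particles), p=weights)   # resample
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    # Illustrative degradation track (assumed data, not from the paper)
    rng = np.random.default_rng(3)
    particles = rng.normal(1.0, 0.05, size=500)            # e.g. a normalized FI value
    weights = np.full(500, 1.0 / 500)
    for fi_measured in [1.01, 1.03, 1.04, 1.06]:
        particles, weights = particle_filter_step(particles, weights, fi_measured, rng=rng)
    print(particles.mean())                                 # filtered FI estimate
    ```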

  1. A descriptive geometry based method for total and common cameras fields of view optimization

    NASA Astrophysics Data System (ADS)

    Salmane, H.; Ruichek, Y.; Khoudour, L.

    2011-07-01

    The presented work is conducted in the framework of the ANR-VTT PANsafer project (Towards a safer level crossing). One of the objectives of the project is to develop a video surveillance system able to detect and recognize potentially dangerous situations around level crossings. This paper addresses the problem of camera positioning and orientation in order to optimally view the monitored scene. In general, adjusting camera position and orientation is done experimentally and empirically by considering different geometrical configurations. This step requires a lot of time to even approximately adjust the total and common fields of view of the cameras, especially in constrained environments such as level crossings. In order to simplify this task and obtain more precise camera positioning and orientation, we propose a method that automatically optimizes the total and common camera fields of view with respect to the desired scene. Based on descriptive geometry, the method estimates the best camera positions and orientations by optimizing the surfaces of 2D domains obtained by projecting/intersecting the field of view of each camera on/with horizontal and vertical planes. The proposed method is evaluated and tested to demonstrate its effectiveness.

  2. Analytical solution based on the wavenumber integration method for the acoustic field in a Pekeris waveguide

    NASA Astrophysics Data System (ADS)

    Wen-Yu, Luo; Xiao-Lin, Yu; Xue-Feng, Yang; Ren-He, Zhang

    2016-04-01

    An exact solution based on the wavenumber integration method is proposed and implemented in a numerical model for the acoustic field in a Pekeris waveguide excited by either a point source in cylindrical geometry or a line source in plane geometry. In addition, an unconditionally stable numerical solution is presented, which entirely resolves the stability problem of previous methods. Generally the branch line integral contributes to the total field only at short ranges, and hence is usually ignored in traditional normal mode models. However, for the special case where a mode lies near the branch cut, the branch line integral can contribute to the total field significantly at all ranges. The wavenumber integration method is well suited for such problems. Numerical results are also provided, which show that the present model can serve as a benchmark for sound propagation in a Pekeris waveguide. Project supported by the National Natural Science Foundation of China (Grant No. 11125420), the Knowledge Innovation Program of the Chinese Academy of Sciences, the China Postdoctoral Science Foundation (Grant No. 2014M561882), and the Doctoral Fund of Shandong Province, China (Grant No. BS2012HZ015).

  3. Transparent Conductive Coating Based on Carbon Nanotubes Using Electric Field Deposition Method

    SciTech Connect

    Latununuwe, Altje; Hattu, Nikmans; Setiawan, Andhy; Winata, Toto; Abdullah, Mikrajuddin; Darma, Yudi

    2010-10-24

    A transparent conductive coating based on carbon nanotubes (CNTs) was fabricated using the electric-field deposition method. Scanning electron microscope (SEM) results show quite uniform CNT coverage on Corning glass substrates. Moreover, X-ray diffraction (XRD) results show a peak at around 25 deg., which confirms the presence of CNT material. CNT thin films obtained with different deposition times have different transmittance coefficients at a wavelength of 550 nm. I-V measurements show that a higher sheet resistance corresponds to a higher transmittance coefficient, and vice versa.

  4. A new method for direction finding based on Markov random field model

    NASA Astrophysics Data System (ADS)

    Ota, Mamoru; Kasahara, Yoshiya; Goto, Yoshitaka

    2015-07-01

    Investigating the characteristics of plasma waves observed by scientific satellites in the Earth's plasmasphere/magnetosphere is effective for understanding the mechanisms for generating waves and the plasma environment that influences wave generation and propagation. In particular, finding the propagation directions of waves is important for understanding mechanisms of VLF/ELF waves. To find these directions, the wave distribution function (WDF) method has been proposed. This method is based on the idea that observed signals consist of a number of elementary plane waves that define the wave energy density distribution. However, the resulting equations constitute an ill-posed problem in which a solution is not determined uniquely; hence, an adequate model must be assumed for a solution. Although many models have been proposed, the most suitable model has to be selected for the given situation because each model has its own advantages and disadvantages. In the present study, we propose a new method for direction finding of the plasma waves measured by plasma wave receivers. Our method is based on the assumption that the WDF can be represented by a Markov random field model, with inference of the model parameters performed using a variational Bayesian learning algorithm. Using computer-generated spectral matrices, we evaluated the performance of the model and compared the results with those obtained from two conventional methods.

  5. Electrolocation-based underwater obstacle avoidance using wide-field integration methods.

    PubMed

    Dimble, Kedar D; Faddy, James M; Humbert, J Sean

    2014-03-01

    Weakly electric fish are capable of efficiently performing obstacle avoidance in dark and navigationally challenging aquatic environments using electrosensory information. This sensory modality enables extraction of relevant proximity information about surrounding obstacles by interpretation of perturbations induced in the fish's self-generated electric field. In this paper, reflexive obstacle avoidance is demonstrated by extracting relative proximity information using spatial decompositions of the perturbation signal, also called an electric image. Electrostatic equations were formulated to express mathematically the electric images produced by a straight tunnel in the field generated by a planar electro-sensor model. These equations were further used to design a wide-field integration based static output feedback controller. The controller was implemented in quasi-static simulations of environments with complicated geometries modelled using finite element methods to demonstrate sense-and-avoid behaviours. The simulation results were confirmed by experiments using a computer-operated gantry system in environments lined with either conductive or non-conductive objects acting as global stimuli to the field of the electro-sensor. The proposed approach is computationally inexpensive and readily implementable, making real-time underwater autonomous navigation feasible. PMID:24451219

  6. A Five-Parameter Wind Field Estimation Method Based on Spherical Upwind Lidar Measurements

    NASA Astrophysics Data System (ADS)

    Kapp, S.; Kühn, M.

    2014-12-01

    Turbine-mounted scanning lidar systems of the focused continuous-wave type are considered for sensing approaching wind fields. The quality of the wind information depends on the lidar technology itself but also substantially on the scanning technique and reconstruction algorithm. In this paper a five-parameter wind field model comprising mean wind speed, vertical and horizontal linear shear, and homogeneous direction angles is introduced. A corresponding parameter estimation method is developed based on the assumption of upwind lidar measurements scanned over spherical segments. As a main advantage of this method, all parameters relevant to wind turbine control can be provided. Moreover, the ability to distinguish between shear and skew potentially increases the quality of the resulting feedforward pitch angles when compared to three-parameter methods. It is shown that a minimum of three measurements, each taken in turn from two independent directions, is necessary for the application of the algorithm, whereas simpler measurements taken from only one direction are not sufficient.
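
    The paper's exact parameterization is not given in the abstract, so the sketch below only illustrates how a five-parameter wind model can be fitted to line-of-sight lidar velocities by linear least squares. The assumed parameters are mean speed, horizontal and vertical linear shear of the along-wind component, and two uniform transverse components standing in for the direction angles; beam geometry and values are synthetic.

    ```python
    import numpy as np

    def estimate_wind_parameters(directions, points, v_los):
        """Least-squares fit of a linearized five-parameter wind field to line-of-sight
        velocities: u = U0 + a*y + b*z, plus uniform transverse components V and W,
        with v_los_i = n_i . [u(y_i, z_i), V, W] (linear in U0, a, b, V, W)."""
        nx, ny, nz = directions[:, 0], directions[:, 1], directions[:, 2]
        y, z = points[:, 1], points[:, 2]
        A = np.column_stack([nx, nx * y, nx * z, ny, nz])
        params, *_ = np.linalg.lstsq(A, v_los, rcond=None)
        return params                                    # U0, a, b, V, W

    # Illustrative scan: beam directions and focus points on a spherical segment (assumed)
    rng = np.random.default_rng(4)
    directions = rng.standard_normal((30, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)
    points = 80.0 * directions                           # focus points 80 m along each beam
    true = np.array([10.0, 0.02, 0.08, 1.0, 0.2])        # U0, shears, V, W
    u = true[0] + true[1] * points[:, 1] + true[2] * points[:, 2]
    v_los = directions[:, 0] * u + directions[:, 1] * true[3] + directions[:, 2] * true[4]
    print(estimate_wind_parameters(directions, points, v_los))   # recovers ~true
    ```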

  7. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    NASA Astrophysics Data System (ADS)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method for potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information is generally a set of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography requires neither a priori information nor large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ along their own directions, and this characteristic is also present in their probability tomography results. We therefore use a set of rules to combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result from which the a priori information is extracted, and then incorporate that information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples inverted with and without the a priori information extracted from the probability tomography results were compared; the results with the information are more concentrated and resolve the source body edges with higher resolution. The method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well. References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M

  8. Numerical focusing methods for full field OCT: a comparison based on a common signal model.

    PubMed

    Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A

    2014-06-30

    In this paper a theoretical model of the full-field swept-source (FF SS) OCT signal is presented, based on the angular spectrum wave propagation approach, which accounts for the defocus error with imaging depth. It is shown that, using the same theoretical model of the signal, numerical defocus correction methods based on a simple forward model (FM) and on inverse scattering (IS), the latter being similar to interferometric synthetic aperture microscopy (ISAM), can be derived. Both FM and IS are compared quantitatively with sub-aperture based digital adaptive optics (DAO). FM has the lowest numerical complexity and is the fastest of the three. An SNR improvement of more than 10 dB is shown for all three methods over a sample depth of 1.5 mm. For a sample whose refractive index is non-uniform with depth, FM and IS both improved the depth of focus (DOF) by a factor of 7x for an imaging NA of 0.1. DAO performs best in the case of a non-uniform refractive index, improving the DOF by a factor of 11x. PMID:24977860
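
    As a minimal sketch of the angular spectrum propagation that underlies such numerical refocusing, the code below propagates a complex en-face field by a defocus distance using an FFT-based transfer function. Wavelength, pixel pitch, and the defocus distance are assumed values; this is not the paper's FM or IS algorithm.

    ```python
    import numpy as np

    def refocus_angular_spectrum(field, dz, wavelength, pixel_pitch):
        """Propagate a complex en-face field by a distance dz with the angular
        spectrum method, compensating a defocus of the same magnitude."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pixel_pitch)
        fy = np.fft.fftfreq(ny, d=pixel_pitch)
        FX, FY = np.meshgrid(fx, fy)
        k = 2.0 * np.pi / wavelength
        kz_sq = k**2 - (2.0 * np.pi * FX)**2 - (2.0 * np.pi * FY)**2
        kz = np.sqrt(np.maximum(kz_sq, 0.0))              # evanescent components dropped
        H = np.exp(1j * kz * dz)                          # propagation transfer function
        return np.fft.ifft2(np.fft.fft2(field) * H)

    # Illustrative values (assumed): 1060 nm source, 5 um pixels, 200 um defocus
    field = np.exp(1j * np.random.default_rng(5).uniform(0, 2 * np.pi, (256, 256)))
    refocused = refocus_angular_spectrum(field, dz=200e-6, wavelength=1.06e-6, pixel_pitch=5e-6)
    ```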

  9. Noticing and Naming as Social Practice: Examining the Relevance of a Contextualized Field-Based Early Childhood Literacy Methods Course

    ERIC Educational Resources Information Center

    Laman, Tasha Tropp; Miller, Erin T.; Lopez-Robertson, Julia

    2012-01-01

    This qualitative study examines what early childhood preservice teachers enrolled in a field-based literacy methods course deemed relevant regarding teaching, literacy, and learning. This study is based on postcourse interviews with 7 early childhood preservice teachers. Findings suggest that "contextualized field experiences" facilitate…

  10. Using geotypes for landslide hazard assessment and mapping: a coupled field and GIS-based method

    NASA Astrophysics Data System (ADS)

    Bilgot, S.; Parriaux, A.

    2009-04-01

    Switzerland is exceptionally prone to landslides; indeed, about 10% of its area is considered unstable. In response to this observation, its Department of the Environment (BAFU) introduced in 1997 a method for producing landslide hazard maps. It is routinely used but, like most of the methods applied in Europe to map unstable areas, it is mainly based on the signs of previous or current phenomena (geomorphologic mapping, archive consultation, etc.), even though instabilities can appear where there is nothing to show that they existed earlier. Furthermore, the transcription from the geomorphologic map to the hazard map can vary according to the geologist or geographer who performs it, so the method suffers from a certain lack of transparency. The aim of this project is to lay the groundwork for a new landslide hazard mapping method based on instability predisposition assessment; it involves designating the main factors for landslide susceptibility, integrating them in a GIS to calculate a landslide predisposition index, and implementing new methods to evaluate these factors; to be competitive, these processes have to be both cheap and quick. To identify the most important parameters for assessing slope stability, we chose a large panel of topographic, geomechanical and hydraulic parameters and tested their importance by calculating safety factors on theoretical landslides using Geostudio 2007®; we could thus determine that slope, cohesion, hydraulic conductivity and saturation play an important role in soil stability. After showing that the cohesion and hydraulic conductivity of loose materials are strongly linked to their granulometry and plasticity index, we implemented two new field tests, one based on remote sensing and one a coupled sedimentometric and methylene blue test, to evaluate these parameters. From these data, we could deduce approximate values of maximum cohesion and saturated hydraulic conductivity. The hydraulic conductivity of

  11. A comparison of instrumentation methods to estimate thoracolumbar motion in field-based occupational studies.

    PubMed

    Schall, Mark C; Fethke, Nathan B; Chen, Howard; Gerr, Fred

    2015-05-01

    The performance of an inertial measurement unit (IMU) system for directly measuring thoracolumbar trunk motion was compared to that of the Lumbar Motion Monitor (LMM). Thirty-six male participants completed a simulated material handling task with both systems deployed simultaneously. Estimates of thoracolumbar trunk motion obtained with the IMU system were processed using five common methods for estimating trunk motion characteristics. Measurements obtained from IMUs secured to the sternum and pelvis showed smaller root-mean-square differences and mean bias estimates relative to the LMM than measurements obtained solely from a sternum-mounted IMU. Fusion of IMU accelerometer measurements with IMU gyroscope and/or magnetometer measurements was observed to increase comparability to the LMM. The results suggest that investigators should consider computing thoracolumbar trunk motion from multiple IMUs using fusion algorithms rather than from a single accelerometer secured to the sternum in field-based studies. PMID:25683549

  12. a Method to Estimate Temporal Interaction in a Conditional Random Field Based Approach for Crop Recognition

    NASA Astrophysics Data System (ADS)

    Diaz, P. M. A.; Feitosa, R. Q.; Sanches, I. D.; Costa, G. A. O. P.

    2016-06-01

    This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. The approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of the temporal interaction parameters is treated as an optimization problem whose goal is to find the transition matrix that maximizes the CRF performance over a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out on a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrate that the proposed method substantially outperforms estimates based on joint or conditional class transition probabilities, which rely on training samples.

  13. Study on Two Methods for Nonlinear Force-Free Extrapolation Based on Semi-Analytical Field

    NASA Astrophysics Data System (ADS)

    Liu, S.; Zhang, H. Q.; Su, J. T.; Song, M. T.

    2011-03-01

    In this paper, two semi-analytical solutions of force-free fields (Low and Lou, Astrophys. J. 352, 343, 1990) have been used to test two nonlinear force-free extrapolation methods. One is the boundary integral equation (BIE) method developed by Yan and Sakurai (Solar Phys. 195, 89, 2000), and the other is the approximate vertical integration (AVI) method developed by Song et al. (Astrophys. J. 649, 1084, 2006). Some improvements have been made to the AVI method to avoid singular points in the calculation. It is found that the correlation coefficients between the first semi-analytical field and the field extrapolated with the BIE method, and likewise with the improved AVI method, are greater than 90% below a height of 10 grid points above the 64×64 lower boundary. For the second semi-analytical field, these correlation coefficients are greater than 80% below the same relative height. Although differences between the semi-analytical solutions and the extrapolated fields exist for both the BIE and AVI methods, the two methods can give reliable results up to heights of about 15% of the extent of the lower boundary.

  14. Defects evaluation system for spherical optical surfaces based on microscopic scattering dark-field imaging method.

    PubMed

    Zhang, Yihui; Yang, Yongying; Li, Chen; Wu, Fan; Chai, Huiting; Yan, Kai; Zhou, Lin; Li, Yang; Liu, Dong; Bai, Jian; Shen, Yibing

    2016-08-10

    In the field of automatic optical inspection, it is imperative to measure defects on spherical optical surfaces, so a novel spherical surface defect evaluation system is established in this paper to evaluate defects on optical spheres. In order to ensure microscopic scattering dark-field imaging of optical spheres with different surface shapes and radii of curvature, illumination with a variable aperture angle is employed. In addition, a subaperture scanning path along the parallels and meridians is planned so that large optical spheres can be inspected. Since analysis shows that spherical defect information can be lost in the optical imaging, a three-dimensional correction based on a pinhole model is proposed to recover the actual spherical defects from the captured two-dimensional images. Given the difficulty of subaperture stitching and defect feature extraction in three-dimensional (3D) space after the correction, the 3D subapertures are transformed into a plane and spliced through geometric projection. Then, surface integral and calibration methods are applied to quantitatively evaluate the spherical defects. Furthermore, the 3D panorama of the defect distribution on the spherical optical components can be displayed through inverse projective reconstruction. Finally, the evaluation results are compared with those from an OLYMPUS microscope, confirming micrometer resolution, and the detection error is less than 5%. PMID:27534456

  15. Estimation of Field-scale Aquifer Hydraulic and Sorption Parameters Based on Borehole Spectral Gamma Methods

    NASA Astrophysics Data System (ADS)

    Ward, A. L.; Draper, K.; Hasan, N.

    2010-12-01

    Knowledge of spatially variable aquifer hydraulic and sorption parameters is a prerequisite for an improved understanding of the transport and spreading of sorbing solutes and for the development of effective strategies for remediation. Local-scale estimates of these parameters are often derived from core measurements but are typically not representative of field values. Field-scale estimates are typically derived from pump and tracer tests but often lack the spatial resolution necessary to deconvolve the effects of fine-scale heterogeneities. Geophysical methods have the potential to bridge this gap both in terms of coverage and resolution, provided meaningful petrophysical relationships can be developed. The objective of this study was to develop a petrophysical relationship between soil textural attributes and the gamma-energy response of natural sediments. Measurements from Hanford's 300 Area show the best model to be a linear relationship between 232Th concentration and clay content (R2 = 94%). This relationship was used to generate a 3-D distribution of clay mass fraction based on borehole spectral gamma logs. The distribution of clay was then used to predict distributions of permeability, porosity, bubbling pressure, and the pore-size distribution index, all of which are required for predicting variably saturated flow, as well as the specific surface area and cation exchange capacity needed for reactive transport predictions. With this approach, it is possible to obtain reliable estimates of hydraulic properties in zones that could not be characterized by field or laboratory measurements. The spatial distribution of flow properties is consistent with lithologic transitions inferred from geologists' logs. A preferential flow path, identified from solute and heat tracer experiments and attributed to an erosional incision in the low-permeability Ringold Formation, is also evident. The resulting distributions can be used as a starting model for the

  16. Simplified method of clinical phenotyping for older men and women using established field-based measures.

    PubMed

    Fukuda, David H; Smith-Ryan, Abbie E; Kendall, Kristina L; Moon, Jordan R; Stout, Jeffrey R

    2013-12-01

    The purpose of this investigation was to determine body composition classification using field-based testing measurements in healthy elderly men and women. The use of isoperformance curves is presented as a method for this determination. Baseline values from 107 healthy Caucasian men and women over the age of 65 years, who participated in a separate longitudinal study, were used for this investigation. Field-based measurements of age, height, weight, body mass index (BMI), and handgrip strength were recorded for each individual. Relative skeletal muscle index (RSMI) and body fat percentage (FAT%) were determined by dual-energy X-ray absorptiometry (DXA) for each participant. Sarcopenia cut-off values for RSMI of 7.26 kg·m−2 for men and 5.45 kg·m−2 for women and elderly obesity cut-off values for FAT% of 27% for men and 38% for women were used. Individuals above the RSMI cut-off and below the FAT% cut-off were classified in the normal phenotype category, while individuals below the RSMI cut-off and above the FAT% cut-off were classified in the sarcopenic-obese phenotype category. Prediction equations for RSMI and FAT% from sex, BMI, and handgrip strength were developed using multiple regression analysis and validated using double cross-validation. The final regression equation developed to predict FAT% from sex, BMI, and handgrip strength showed a strong relationship (adjusted R2 = 0.741) to DXA values with a low standard error of the estimate (SEE = 3.994%). The final regression equation developed to predict RSMI from the field-based testing measures also showed a strong relationship (adjusted R2 = 0.841) to DXA values with a low standard error of the estimate (SEE = 0.544 kg·m−2). Isoperformance curves were developed from the relationship between BMI and handgrip strength for men and women with the aforementioned clinical phenotype classification criteria. These visual representations were used to aid in the
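
    Using only the cut-off values quoted in the abstract, the sketch below classifies an individual into the normal or sarcopenic-obese phenotype from RSMI and FAT% values. The regression coefficients are not given in the abstract, so predicted (or measured) values are passed in directly, and the "intermediate" label for other combinations is an assumption.

    ```python
    def classify_phenotype(sex, rsmi, fat_pct):
        """Phenotype classification from the cut-offs quoted in the abstract:
        RSMI 7.26 (men) / 5.45 (women) kg·m^-2 and FAT% 27 (men) / 38 (women)."""
        rsmi_cut = 7.26 if sex == "male" else 5.45
        fat_cut = 27.0 if sex == "male" else 38.0
        if rsmi >= rsmi_cut and fat_pct < fat_cut:
            return "normal"
        if rsmi < rsmi_cut and fat_pct >= fat_cut:
            return "sarcopenic-obese"
        return "intermediate"   # combinations outside the two named phenotypes (assumed label)

    print(classify_phenotype("female", rsmi=5.1, fat_pct=40.0))   # sarcopenic-obese
    ```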

  17. [Research on the temperature field detection method of large cylinder forgings during heat treatment process based on infrared spectra].

    PubMed

    Zhang, Yu-Cun; Fu, Xian-Bin; Liu, Bin; Qi, Yan-De; Zhou, Shan

    2013-01-01

    In order to track the changes of the forging's temperature field during heat treatment, a temperature field detection method based on infrared spectra is proposed for large cylindrical forgings. On the basis of heat transfer theory, a temperature field model of large cylindrical forgings was established by the method of separation of variables. Using infrared spectroscopy, a temperature measurement system for large forgings was built based on a three-level interference filter. Temperature field detection of the forging during heat treatment was realized by combining the temperature data with the forging temperature field model. Finally, a simulation experiment shows that the method is feasible. The method can provide a theoretical basis for the correct implementation of the heat treatment process. PMID:23586224

  18. Correlation-based methods in calibrating an FBG sensor with strain field non-uniformity

    NASA Astrophysics Data System (ADS)

    Cieszczyk, S.

    2015-12-01

    Fibre Bragg gratings have many sensing applications, mainly for measuring strain and temperature. A physical quantity that influences the grating uniformly along its length causes a corresponding shift of the Bragg wavelength. Many peak detection algorithms have been proposed, among which the most popular are maximum intensity detection, centroid detection, the least squares method, cross-correlation, auto-correlation and fast phase correlation. Non-uniform grating elongation deforms the spectrum; such non-uniformity can be introduced intentionally or appear as an unintended effect of placing the sensing elements in the tested structure. Non-uniform loading of the grating may therefore cause additional errors and make it difficult to track the Bragg wavelength from a distorted spectrum. This paper presents the application of correlation methods for estimating peak wavelength shifts under non-uniform Bragg grating elongation. The autocorrelation, cross-correlation and fast phase correlation algorithms are considered, and experimental spectra measured for an axisymmetric strain field along the Bragg grating are analyzed. The strain profile consists of constant and variable components. The results of this study indicate the properties of correlation algorithms applied to moderately non-uniform elongation of an FBG sensor.
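
    A minimal sketch of the cross-correlation variant mentioned above: the Bragg wavelength shift is taken as the lag that maximizes the cross-correlation between a reference spectrum and a shifted (possibly distorted) one sampled on a uniform wavelength grid. The Gaussian-like spectra and grid step are assumed test values.

    ```python
    import numpy as np

    def cross_correlation_shift(ref_spectrum, meas_spectrum, wl_step):
        """Estimate the Bragg wavelength shift as the lag maximizing the
        cross-correlation between reference and measured spectra."""
        ref = ref_spectrum - ref_spectrum.mean()
        meas = meas_spectrum - meas_spectrum.mean()
        corr = np.correlate(meas, ref, mode="full")
        lag = np.argmax(corr) - (len(ref) - 1)
        return lag * wl_step

    # Illustrative FBG reflection peak shifted by 0.12 nm and slightly broadened (assumed)
    wl = np.arange(1549.0, 1551.0, 0.001)                     # nm grid, 1 pm step
    ref = np.exp(-((wl - 1550.0) / 0.05) ** 2)
    meas = np.exp(-((wl - 1550.12) / 0.06) ** 2)
    print(cross_correlation_shift(ref, meas, wl_step=0.001))  # ~0.12 nm
    ```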

  19. An automatic detection method to the field wheat based on image processing

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Cao, Zhiguo; Bai, Xiaodong; Yu, Zhenghong; Li, Yanan

    2013-10-01

    Automatic observation of field crops has attracted increasing attention recently. Replacing the existing manual observation method with image processing technology allows timely observation and consistent management. Extracting the wheat from field wheat images is the basis of such a system. In order to improve the accuracy of wheat segmentation, a novel two-stage wheat image segmentation method is proposed. The training stage adjusts several key thresholds, which will be used in the segmentation stage, to achieve the best segmentation results, and records these thresholds. The segmentation stage compares the values of a color index to determine the class of each pixel. To verify the superiority of the proposed algorithm, we compared our method with other crop segmentation methods. Experimental results show that the proposed method has the best performance.
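
    The abstract does not state which colour index is used, so the sketch below only illustrates colour-index thresholding in general, using the common excess-green (ExG) index with a threshold that would be tuned in the training stage. The index choice, threshold value, and the random test image are assumptions.

    ```python
    import numpy as np

    def excess_green_mask(rgb_image, threshold=0.05):
        """Label pixels as vegetation where the excess-green index
        ExG = 2g - r - b (on chromatic coordinates) exceeds a trained threshold."""
        rgb = rgb_image.astype(float)
        total = rgb.sum(axis=2) + 1e-9
        r, g, b = rgb[..., 0] / total, rgb[..., 1] / total, rgb[..., 2] / total
        exg = 2.0 * g - r - b
        return exg > threshold

    # Illustrative call on a random image (a real field image would be loaded instead)
    image = np.random.default_rng(6).integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
    mask = excess_green_mask(image, threshold=0.05)   # threshold assumed, set in training stage
    ```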

  20. A new resonance based method for the measurement of magnetic field intensity

    NASA Astrophysics Data System (ADS)

    Kaluvan, Suresh; Park, Jinhyuk; Zhang, Haifeng; Umapathy, Mangalanathan; Choi, Seung-Bok

    2016-04-01

    A new magnetic field intensity measurement method using the resonance principle is proposed in this paper. The proposed magnetic field sensor consists of magnetorheological (MR) fluid placed between two collocated, piezo-bonded, metallic circular discs mounted face to face along the z-axis. The resonant frequency of the discs is changed by the magnetic-field-dependent viscosity of the MR fluid. The key enabling concept in this work is stiffening the circular metal discs using the rheological effect of the MR fluid, i.e., the resonant frequency varies with the magnetic field strength. The change in resonant frequency is measured using simple closed-loop electronics connected between the two piezo crystals. An analytical model of the vibrating circular discs with MR fluid placed at the center is derived, and the results are validated experimentally. The proposed magnetic flux density measurement concept is novel and is found to have better sensitivity and linearity.

  1. A new method for matched field localization based on two-hydrophone

    NASA Astrophysics Data System (ADS)

    Li, Kun; Fang, Shi-liang

    2015-03-01

    Conventional matched field processing (MFP) uses large vertical arrays to locate an underwater acoustic target. However, large vertical arrays increase equipment and computational cost and suffer from problems such as element failures and array tilt that degrade localization performance. In this paper, a matched field localization method using two hydrophones is proposed for underwater acoustic pulse signals with an unknown emitted waveform. Using the received signals of the hydrophones and the ocean channel impulse response, which can be calculated from an acoustic propagation model, the spectral matrix of the emitted signal for different candidate source locations is estimated by frequency-domain least squares. The resulting spectral matrix of the emitted signal for every grid region is then multiplied by the ocean channel frequency response matrix to generate the spectral matrix of the replica signal. Finally, the source location is estimated by comparing the spectral matrices of the received signal and the replica signal. Simulation results for broadband signals in a shallow water environment demonstrate the localization performance of the proposed method. In addition, the localization accuracy in five different cases is analyzed in the simulation trials; the results show that the proposed method produces a sharp peak and low sidelobes, overcoming the high-sidelobe problem that conventional MFP exhibits when the number of elements is insufficient.
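
    A schematic of the matching step only, under assumptions: for each candidate source location, estimate the source spectrum by frequency-domain least squares from the two received spectra and modelled channel responses, form the replica, and score the residual. The channel responses would come from a propagation model in practice; here they are passed in as arrays, and the synthetic check is illustrative.

    ```python
    import numpy as np

    def localization_cost(received, channel_responses):
        """For one candidate location: least-squares estimate of the source spectrum
        from two hydrophone spectra, then the residual between the received spectra
        and the replica (channel response x estimated source spectrum)."""
        cost = 0.0
        for f in range(received.shape[1]):                 # loop over frequency bins
            h = channel_responses[:, f]                    # 2-element channel vector
            y = received[:, f]
            s_hat = np.vdot(h, y) / (np.vdot(h, h) + 1e-12)    # LS source spectrum estimate
            cost += np.sum(np.abs(y - h * s_hat) ** 2)     # mismatch to the replica
        return cost

    # Tiny synthetic check (assumed data): 2 hydrophones, 64 frequency bins
    rng = np.random.default_rng(7)
    h_true = rng.standard_normal((2, 64)) + 1j * rng.standard_normal((2, 64))
    source = rng.standard_normal(64) + 1j * rng.standard_normal(64)
    received = h_true * source
    print(localization_cost(received, h_true))             # ~0 at the true location
    ```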

  2. Evaluation of Three Field-Based Methods for Quantifying Soil Carbon

    SciTech Connect

    Izaurralde, Roberto C.; Rice, Charles W.; Wielopolski, Lucien; Ebinger, Michael H.; Reeves, James B.; Thomson, Allison M.; Harris, Ron; Francis, Barry; Mitra, S.; Rappaport, Aaron; Etchevers, Jorge; Sayre, Ken D.; Govaerts, Bram; McCarty, G. W.

    2013-01-31

    Three advanced technologies to measure soil carbon (C) density (g C m−2) are deployed in the field and the results compared against those obtained by the dry combustion (DC) method. The advanced methods are: a) Laser Induced Breakdown Spectroscopy (LIBS), b) Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS), and c) Inelastic Neutron Scattering (INS). The measurements and soil samples were acquired at Beltsville, MD, USA and at the Centro International para el Mejoramiento del Maíz y el Trigo (CIMMYT) at El Batán, Mexico. At Beltsville, soil samples were extracted at three depth intervals (0–5, 5–15, and 15–30 cm) and processed for analysis in the field with the LIBS and DRIFTS instruments. The INS instrument determined soil C density to a depth of 30 cm via scanning and stationary measurements. Subsequently, soil core samples were analyzed in the laboratory for soil bulk density (kg m−3), C concentration (g kg−1) by DC, and results reported as soil C density (kg m−2). Results from each technique were derived independently and contributed to a blind test against results from the reference (DC) method. A similar procedure was employed at CIMMYT in Mexico, but only with the LIBS and DRIFTS instruments. Following conversion to common units, we found that the LIBS, DRIFTS, and INS results can be compared directly with those obtained by the DC method. The first two methods and the standard DC require soil sampling and need soil bulk density information to convert soil C concentrations to soil C densities, while the INS method does not require soil sampling. We conclude that, in comparison with the DC method, the three instruments (a) showed acceptable performance, although further work is needed to improve calibration techniques, and (b) demonstrated their portability and their capacity to perform under field conditions.

  3. Evaluation of Three Field-Based Methods for Quantifying Soil Carbon

    PubMed Central

    Izaurralde, Roberto C.; Rice, Charles W.; Wielopolski, Lucian; Ebinger, Michael H.; Reeves, James B.; Thomson, Allison M.; Francis, Barry; Mitra, Sudeep; Rappaport, Aaron G.; Etchevers, Jorge D.; Sayre, Kenneth D.; Govaerts, Bram; McCarty, Gregory W.

    2013-01-01

    Three advanced technologies to measure soil carbon (C) density (g C m−2) are deployed in the field and the results compared against those obtained by the dry combustion (DC) method. The advanced methods are: a) Laser Induced Breakdown Spectroscopy (LIBS), b) Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS), and c) Inelastic Neutron Scattering (INS). The measurements and soil samples were acquired at Beltsville, MD, USA and at Centro International para el Mejoramiento del Maíz y el Trigo (CIMMYT) at El Batán, Mexico. At Beltsville, soil samples were extracted at three depth intervals (0–5, 5–15, and 15–30 cm) and processed for analysis in the field with the LIBS and DRIFTS instruments. The INS instrument determined soil C density to a depth of 30 cm via scanning and stationary measurements. Subsequently, soil core samples were analyzed in the laboratory for soil bulk density (kg m−3), C concentration (g kg−1) by DC, and results reported as soil C density (kg m−2). Results from each technique were derived independently and contributed to a blind test against results from the reference (DC) method. A similar procedure was employed at CIMMYT in Mexico, but only with the LIBS and DRIFTS instruments. Following conversion to common units, we found that the LIBS, DRIFTS, and INS results can be compared directly with those obtained by the DC method. The first two methods and the standard DC require soil sampling and need soil bulk density information to convert soil C concentrations to soil C densities, while the INS method does not require soil sampling. We conclude that, in comparison with the DC method, the three instruments (a) showed acceptable performances although further work is needed to improve calibration techniques and (b) demonstrated their portability and their capacity to perform under field conditions. PMID:23383225

  4. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and the matched-field inversion method is applied to find the source position by comparing the spatial correlation between the measured and replica pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances the correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
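    A minimal sketch of the broadband step described above, incoherently averaging normalized single-frequency Bartlett correlations over frequency. The array shapes and names are assumptions; the replica pressures would come from an acoustic model of the tunnel or hull, which is not shown.

        import numpy as np

        def broadband_bartlett(p_meas, p_rep):
            """Incoherent broadband Bartlett correlation for one candidate source position.

            p_meas : (F, N) measured complex pressures, F frequencies, N sensors
            p_rep  : (F, N) replica pressures for the candidate position
            Returns the frequency-averaged correlation (0..1).
            """
            corr = 0.0
            for f in range(p_meas.shape[0]):
                d = p_meas[f] / (np.linalg.norm(p_meas[f]) + 1e-12)
                w = p_rep[f] / (np.linalg.norm(p_rep[f]) + 1e-12)
                corr += np.abs(np.vdot(w, d)) ** 2      # single-frequency Bartlett power
            return corr / p_meas.shape[0]

    The candidate position that maximizes this averaged correlation over the search grid is taken as the estimate of the cavitation source location.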

  5. GPU-based parallel method of temperature field analysis in a floor heater with a controller

    NASA Astrophysics Data System (ADS)

    Forenc, Jaroslaw

    2016-06-01

    A parallel method enabling acceleration of the numerical analysis of the transient temperature field in an air floor heating system is presented in this paper. An initial-boundary value problem of the heater regulated by an on/off controller is formulated. The analogue model is discretized using the implicit finite difference method. The BiCGStab method is used to solve the obtained system of equations. A computer program implementing simultaneous computations on the CPU and GPU (GPGPU technology) was developed. The CUDA environment and linear algebra libraries (CUBLAS and CUSPARSE) are used by this program. The time of computations was reduced eight times in comparison with a program executed on the CPU only. Results of computations are presented in the form of time profiles and temperature field distributions. The influence of the model of the heat transfer coefficient on the simulation of the system operation was examined. The physical interpretation of the obtained results is also presented. Results of computations were verified by comparing them with solutions obtained with the commercial program COMSOL Multiphysics.
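    The linear-algebra step named above is a BiCGStab solve of the implicit finite-difference system at each time step. A CPU-only sketch of that step using SciPy is shown below; the GPU version in the paper uses CUBLAS/CUSPARSE, which is not reproduced here, and the matrix and right-hand side are placeholders rather than the heater model.

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import bicgstab

        n = 1000                                       # number of grid nodes (assumed)
        # Placeholder sparse system standing in for the implicit finite-difference matrix
        A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)                                 # right-hand side for one time step

        T, info = bicgstab(A, b)                       # default tolerances for portability
        assert info == 0, "BiCGStab did not converge"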

  6. Refraction-based X-ray Computed Tomography for Biomedical Purpose Using Dark Field Imaging Method

    NASA Astrophysics Data System (ADS)

    Sunaguchi, Naoki; Yuasa, Tetsuya; Huo, Qingkai; Ichihara, Shu; Ando, Masami

    We have proposed a tomographic x-ray imaging system using DFI (dark field imaging) optics along with a data-processing method to extract information on refraction from the measured intensities, and a reconstruction algorithm to reconstruct a refractive-index field from the projections generated from the extracted refraction information. The DFI imaging system consists of a tandem optical system of Bragg- and Laue-case crystals, a positioning device system for a sample, and two CCD (charge coupled device) cameras. Then, we developed a software code to simulate the data-acquisition, data-processing, and reconstruction methods to investigate the feasibility of the proposed methods. Finally, in order to demonstrate its efficacy, we imaged a sample with DCIS (ductal carcinoma in situ) excised from a breast cancer patient using a system constructed at the vertical wiggler beamline BL-14C in KEK-PF. Its CT images depicted a variety of fine histological structures, such as milk ducts, duct walls, secretions, adipose and fibrous tissue. They correlate well with histological sections.

  7. Microlens assembly error analysis for light field camera based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping

    2016-08-01

    This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate images affected by the coupling distance error, movement error and rotation error that can appear during microlens installation. By examining these raw images, sub-aperture images and refocused images, we found that the raw images present different degrees of blurring and deformation for different microlens assembly errors, while the sub-aperture images present aliasing, occlusion and other distortions that result in unclear refocused images.
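    A minimal sketch of the Monte Carlo sampling implied above: random coupling-distance, decenter and rotation errors are drawn and would be passed, one set at a time, to an image-formation routine. The tolerance values, parameter names and the rendering routine are assumptions for illustration, not values from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def sample_assembly_errors(n_samples, dz_sigma=5e-6, dxy_sigma=2e-6, rot_sigma=0.1):
            """Draw random microlens assembly errors (all tolerances are assumed values).

            dz_sigma  : std. dev. of coupling distance error (m)
            dxy_sigma : std. dev. of in-plane movement error (m)
            rot_sigma : std. dev. of rotation error (degrees)
            """
            for _ in range(n_samples):
                yield {
                    "dz": rng.normal(0.0, dz_sigma),
                    "dx": rng.normal(0.0, dxy_sigma),
                    "dy": rng.normal(0.0, dxy_sigma),
                    "rot_deg": rng.normal(0.0, rot_sigma),
                }

        # Each drawn error set would be fed to a (hypothetical) simulator that renders the
        # raw, sub-aperture and refocused images for comparison with the error-free case.
        for errors in sample_assembly_errors(3):
            print(errors)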

  8. Hyperspectral image clustering method based on artificial bee colony algorithm and Markov random fields

    NASA Astrophysics Data System (ADS)

    Sun, Xu; Yang, Lina; Gao, Lianru; Zhang, Bing; Li, Shanshan; Li, Jun

    2015-01-01

    Center-oriented hyperspectral image clustering methods have been widely applied to hyperspectral remote sensing image processing; however, their drawbacks are obvious, including over-simplified computing models and underutilized spatial information. In recent years, some studies have been conducted to improve this situation. We introduce the artificial bee colony (ABC) and Markov random field (MRF) algorithms and propose an ABC-MRF-cluster model to solve the problems mentioned above. In this model, a typical ABC algorithm framework is adopted in which the cluster centers and the results of the iterated conditional modes algorithm are treated as the feasible solutions and the objective function, respectively, and MRF is modified to be capable of dealing with the clustering problem. Finally, four datasets and two indices are used to show that the application of the ABC-cluster and ABC-MRF-cluster methods can achieve better image accuracy than conventional methods. Specifically, the ABC-cluster method is superior in terms of the power of spectral discrimination, whereas the ABC-MRF-cluster method provides better results in terms of the adjusted Rand index. In experiments on simulated images with different signal-to-noise ratios, ABC-cluster and ABC-MRF-cluster showed good stability.

  9. Virtual local target method for avoiding local minimum in potential field based robot navigation.

    PubMed

    Zou, Xi-Yong; Zhu, Jing

    2003-01-01

    A novel robot navigation algorithm with global path generation capability is presented. The local minimum is one of the most intractable yet frequently encountered problems in potential field based robot navigation. By appropriately appointing virtual local targets along the journey, it can be solved effectively. The key concept employed in this algorithm is the set of rules that govern when and how to appoint these virtual local targets. When the robot finds itself in danger of a local minimum, a virtual local target is appointed to replace the global goal temporarily according to the rules. After the virtual target is reached, the robot continues on its journey by heading towards the global goal. The algorithm prevents the robot from becoming trapped in local minima. Simulation results showed that it is very effective in complex obstacle environments. PMID:12765277
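    A minimal sketch of the idea described above, assuming a classic attractive/repulsive potential field. The local-minimum test (near-zero net force away from the goal) and the rule for placing the virtual target (offset perpendicular to the goal direction) are simplified placeholders, not the authors' exact rules.

        import numpy as np

        def potential_force(pos, target, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
            """Classic potential-field force: attraction to target, repulsion from obstacles."""
            force = k_att * (target - pos)
            for obs in obstacles:
                diff = pos - obs
                d = np.linalg.norm(diff)
                if d < d0:
                    force += k_rep * (1.0 / d - 1.0 / d0) * diff / d**3
            return force

        def next_step(pos, goal, obstacles, virtual_target, step=0.05, eps=1e-3):
            """One navigation step; appoint a virtual local target when a local minimum is near."""
            target = goal if virtual_target is None else virtual_target
            f = potential_force(pos, target, obstacles)
            if np.linalg.norm(f) < eps and np.linalg.norm(pos - goal) > eps:
                # Local minimum detected: place a virtual target sideways from the goal direction
                to_goal = (goal - pos) / np.linalg.norm(goal - pos)
                virtual_target = pos + 3.0 * np.array([-to_goal[1], to_goal[0]])
                f = potential_force(pos, virtual_target, obstacles)
            if virtual_target is not None and np.linalg.norm(pos - virtual_target) < 0.2:
                virtual_target = None    # virtual target reached, head for the global goal again
            return pos + step * f / (np.linalg.norm(f) + 1e-9), virtual_target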

  10. Controls on Nitrogen Fluxes from Agricultural Fields: Differing Conclusions Based on Choice of Sensitivity Analysis Method

    NASA Astrophysics Data System (ADS)

    Ahrens, T.; Matson, P.; Lobell, D.

    2006-12-01

    Sensitivity analyses (SA) of biogeochemical and agricultural models are often used to identify the importance of input variables for variance in model outputs, such as crop yield or nitrate leaching. Identification of these factors can aid in prioritizing efforts in research or decision support. Many types of sensitivity analyses are available, ranging from simple One-At-A-Time (OAT) screening exercises to more complex local and global variance-based methods (see Saltelli et al 2004). The purpose of this study was to determine the influence of the type of SA on factor prioritization in the Yaqui Valley, Mexico, using the Water and Nitrogen Management Model (WNMM; Chen et al 2005). WNMM, a coupled plant-growth and biogeochemistry simulation model, was calibrated to reproduce crop growth, soil moisture, and gaseous N emission dynamics in experimental plots of irrigated wheat in the Yaqui Valley, Mexico, from 1994-1997. Three types of SA were carried out using 16 input variables, including parameters related to weather, soil properties and crop management. Methods used for SA were local OAT, Monte Carlo (MC), and a global variance-based method (orthogonal input; OI). Results of the SA were based on typical interpretations used for each test: maximum absolute ratio of variation (MAROV) for OAT analyses; first- and second-order regressions for MC analyses; and a total effects index for OI. The three most important factors identified by the MC and OI methods were generally in agreement, although the order of importance was not always consistent and there was little agreement for variables of lesser importance. OAT over-estimated the importance of two factors (planting date and pH) for many outputs. The biggest differences between the OAT results and those from MC and OI were likely due to the inability of OAT methods to account for non-linearity (e.g., pH and ammonia volatilization), interactions among variables (e.g., pH and timing of fertilization) and an over-reliance on baseline
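    A minimal illustration of the one-at-a-time screening named above, computing the maximum absolute ratio of variation (MAROV) for each input of a generic model function. The model, baseline values and perturbation size are placeholders, not WNMM.

        import numpy as np

        def oat_marov(model, baseline, rel_step=0.1):
            """One-at-a-time sensitivity: max absolute ratio of variation per input.

            model    : callable mapping a parameter vector to a scalar output
            baseline : 1-D array of baseline parameter values (non-zero output assumed)
            """
            y0 = model(baseline)
            scores = {}
            for i, x0 in enumerate(baseline):
                ratios = []
                for sign in (-1.0, 1.0):
                    x = baseline.copy()
                    x[i] = x0 * (1.0 + sign * rel_step)
                    dy = (model(x) - y0) / y0          # relative output change
                    dx = sign * rel_step               # relative input change
                    ratios.append(abs(dy / dx))
                scores[i] = max(ratios)
            return scores

        # Toy model standing in for WNMM: output depends non-linearly on the second input
        toy = lambda p: p[0] + p[1] ** 2 + 0.1 * p[2]
        print(oat_marov(toy, np.array([1.0, 2.0, 3.0])))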

  11. Image restoration method based on Hilbert transform for full-field optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Na, Jihoon; Choi, Woo June; Choi, Eun Seo; Ryu, Seon Young; Lee, Byeong Ha

    2008-01-01

    A full-field optical coherence tomography (FF-OCT) system utilizing a simple but novel image restoration method suitable for a high-speed system is demonstrated. An en-face image is retrieved from only two phase-shifted interference fringe images by using the mathematical Hilbert transform. With a thermal light source, a high-resolution FF-OCT system having axial and transverse resolutions of 1 and 2.2 μm, respectively, was implemented. The feasibility of the proposed scheme is confirmed by presenting the obtained en-face images of biological samples such as a piece of garlic and a gold beetle. The proposed method is robust to errors in the amount of the phase shift and does not leave residual fringes. The use of just two interference images and the strong immunity to phase errors provide great advantages in the imaging speed and the system design flexibility of a high-speed high-resolution FF-OCT system.
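    A minimal sketch of a two-frame, Hilbert-transform-based envelope retrieval in the spirit of the method above: the difference of two phase-shifted fringe images removes the background, and the Hilbert transform of that difference supplies the quadrature component needed to recover the en-face amplitude. It assumes the fringe carrier varies along one image axis, which simplifies and may differ from the authors' exact procedure.

        import numpy as np
        from scipy.signal import hilbert

        def enface_amplitude(I1, I2):
            """Recover an en-face (envelope) image from two phase-shifted fringe images.

            I1, I2 : 2-D arrays, interference images acquired with a nominal pi phase
                     shift between them; fringes assumed to run along axis 1.
            """
            diff = I1.astype(float) - I2.astype(float)   # background-free fringe signal
            analytic = hilbert(diff, axis=1)             # analytic signal along the fringe axis
            return np.abs(analytic) / 2.0                # fringe envelope (en-face amplitude)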

  12. Groundwater contamination field methods

    NASA Astrophysics Data System (ADS)

    Johnson, Ivan

    Half of the drinking water in the United States comes from groundwater; 75% of the nation's cities obtain all or part of their supplies from groundwater; and the rural areas are 95% dependent upon groundwater. Therefore it is imperative that every possible precaution be taken to protect the purity of the groundwater. Because of the increasing interest in prevention of groundwater contamination and the need for nationally recognized methods for investigation of contamination, a symposium entitled “Field Methods for Groundwater Contamination Studies and Their Standardization” was held February 2-7, 1986, in Cocoa Beach, Fla. The symposium was sponsored and organized by the American Society for Testing and Materials (ASTM) Committee D18 on Soil and Rock and Committee D19 on Water. Gene Collins of the National Institute for Petroleum and Energy Research (Bartlesville, Okla.) was symposium chair, and Ivan Johnson (A. Ivan Johnson, Inc., Consulting, Arvada, Colo.) was vice chair.

  13. Field screening for hexavalent chromium in soil: A fast-turnaround field method based on water extraction

    SciTech Connect

    McCain, R.G.; Baechler, M.A.

    1994-01-01

    Sodium dichromate has been identified as a contaminant of concern at several waste sites on the Hanford Site. Although chromium standards for soil are typically stated in terms of total chrome, much of the toxicity and carcinogenicity are attributed to the hexavalent state, which typically exists as a relatively mobile anion. Investigation and removal of crushed drums potentially containing residual sodium dichromate required a field test for hexavalent chromium to support characterization and remediation activities. Previous experience with a commercially available field test kit had been unsuccessful. This stimulated an effort to determine potential sources of error in the field test and led to a number of modifications that significantly improved the reliability of the test.

  14. A biomolecular detection method based on charge pumping in a nanogap embedded field-effect-transistor biosensor

    NASA Astrophysics Data System (ADS)

    Kim, Sungho; Ahn, Jae-Hyuk; Park, Tae Jung; Lee, Sang Yup; Choi, Yang-Kyu

    2009-06-01

    A unique direct electrical detection method for biomolecules, charge pumping, was demonstrated using a nanogap embedded field-effect-transistor (FET). With the aid of the charge pumping method, the sensitivity can reach below the 1 ng/ml concentration regime for antigen-antibody binding in an avian influenza case. Biomolecules immobilized in the nanogap are mainly responsible for the acute changes of the interface trap density due to modulation of the energy level of the trap. This finding is supported by a numerical simulation. The proposed detection method for biomolecules using a nanogap embedded FET represents a foundation for a chip-based biosensor capable of high sensitivity.

  15. First detection of the presence of naturally occurring grapevine downy mildew in the field by a fluorescence-based method.

    PubMed

    Latouche, Gwendal; Debord, Christian; Raynal, Marc; Milhade, Charlotte; Cerovic, Zoran G

    2015-10-01

    Early detection of fungal pathogen presence in the field would help to better time or avoid some of the fungicide treatments used to prevent crop production losses. We recently introduced a new phytoalexin-based method for a non-invasive detection of crop diseases using their fluorescence. The causal agent of grapevine downy mildew, Plasmopara viticola, induces the synthesis of stilbenoid phytoalexins by the host, Vitis vinifera, early upon infection. These stilbenoids emit violet-blue fluorescence under UV light. A hand-held solid-state UV-LED-based field fluorimeter, named Multiplex 330, was used to measure stilbenoid phytoalexins in a vineyard. It allowed us to non-destructively detect and monitor the naturally occurring downy mildew infections on leaves in the field. PMID:26293623

  16. Testing Allele Transmission of an SNP Set Using a Family-Based Generalized Genetic Random Field Method.

    PubMed

    Li, Ming; Li, Jingyun; He, Zihuai; Lu, Qing; Witte, John S; Macleod, Stewart L; Hobbs, Charlotte A; Cleves, Mario A

    2016-05-01

    Family-based association studies are commonly used in genetic research because they can be robust to population stratification (PS). Recent advances in high-throughput genotyping technologies have produced a massive amount of genomic data in family-based studies. However, current family-based association tests are mainly focused on evaluating individual variants one at a time. In this article, we introduce a family-based generalized genetic random field (FB-GGRF) method to test the joint association between a set of autosomal SNPs (i.e., single-nucleotide polymorphisms) and disease phenotypes. The proposed method is a natural extension of a recently developed GGRF method for population-based case-control studies. It models offspring genotypes conditional on parental genotypes, and, thus, is robust to PS. Through simulations, we show that under various disease scenarios the FB-GGRF has improved power over a commonly used family-based sequence kernel association test (FB-SKAT). Further, similar to GGRF, the proposed FB-GGRF method is asymptotically well-behaved, and does not require empirical adjustment of the type I error rates. We illustrate the proposed method using a study of congenital heart defects with family trios from the National Birth Defects Prevention Study (NBDPS). PMID:27061818

  17. Separation of non-stationary multi-source sound field based on the interpolated time-domain equivalent source method

    NASA Astrophysics Data System (ADS)

    Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng

    2016-05-01

    In the sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of every source alone. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to every source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative solving process; then, the corresponding equivalent source strengths of one interested source are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can be effective in separating the non-stationary pressure generated by every source alone in both time and space domains. An experiment with two speakers in a semi-anechoic chamber further evidences the effectiveness of the proposed method.

  18. A novel homogenization method for phase field approaches based on partial rank-one relaxation

    NASA Astrophysics Data System (ADS)

    Mosler, J.; Shchyglo, O.; Montazer Hojjat, H.

    2014-08-01

    This paper deals with the analysis of homogenization assumptions within phase field theories in a finite strain setting. Such homogenization assumptions define the average bulk energy within the diffusive interface region where more than one phase co-exists. From a physical point of view, a correct computation of these energies is essential, since they define the driving force at material interfaces between different phases. The three homogenization assumptions considered in this paper are: (a) the Voigt/Taylor model, (b) the Reuss/Sachs model, and (c) the Khachaturyan model. It is shown that these assumptions indeed share some similarities and sometimes lead to the same results. However, they are not equivalent. Only two of them allow the computation of the individual energies of the co-existing phases even within the aforementioned diffusive interface region: the Voigt/Taylor and the Reuss/Sachs model. Such a localization of the averaged energy is important in order to determine and subsequently interpret the driving force at the interface. Since the Voigt/Taylor and the Reuss/Sachs model are known to be relatively restrictive in terms of kinematics (Voigt/Taylor) and linear momentum (Reuss/Sachs), a novel homogenization approach is advocated. Within a variational setting based on (incremental) energy minimization, the results predicted by the novel approach are bounded by those corresponding to the Voigt/Taylor and the Reuss/Sachs model. The new approach fulfills equilibrium at material interfaces (continuity of the stress vector) and is kinematically compatible. In sharp contrast to existing approaches, it naturally defines the mismatch energy at incoherent material interfaces. From a mathematical point of view, it can be interpreted as a partial rank-one convexification.
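    For orientation, the two classical assumptions named above can be summarized as follows for two phases with phase-field (volume) fractions phi and 1-phi; this is a standard textbook statement in our own notation, not the paper's exact finite-strain formulation:

        % Voigt/Taylor: both phases share the same strain; the energies average by volume fraction
        \Psi_{\mathrm{VT}}(\boldsymbol{\varepsilon}, \phi)
            = \phi\, \Psi_1(\boldsymbol{\varepsilon}) + (1 - \phi)\, \Psi_2(\boldsymbol{\varepsilon})

        % Reuss/Sachs: both phases share the same stress; the complementary energies average instead
        \Psi^{*}_{\mathrm{RS}}(\boldsymbol{\sigma}, \phi)
            = \phi\, \Psi^{*}_1(\boldsymbol{\sigma}) + (1 - \phi)\, \Psi^{*}_2(\boldsymbol{\sigma})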

  19. Time-dependent multiconfiguration self-consistent-field method based on the occupation-restricted multiple-active-space model for multielectron dynamics in intense laser fields

    NASA Astrophysics Data System (ADS)

    Sato, Takeshi; Ishikawa, Kenichi L.

    2015-02-01

    The time-dependent multiconfiguration self-consistent-field method based on the occupation-restricted multiple-active-space model is proposed (TD-ORMAS) for multielectron dynamics in intense laser fields. Extending the previously proposed time-dependent complete-active-space self-consistent-field method [TD-CASSCF; Phys. Rev. A 88, 023402 (2013), 10.1103/PhysRevA.88.023402], which divides the occupied orbitals into core and active orbitals, the TD-ORMAS method further subdivides the active orbitals into an arbitrary number of subgroups and poses the occupation restriction by giving the minimum and maximum number of electrons distributed in each subgroup. This enables highly flexible construction of the configuration-interaction (CI) space, allowing a large-active-space simulation of dynamics, e.g., the core excitation or ionization. The equations of motion for both CI coefficients and spatial orbitals are derived based on the time-dependent variational principle, and an efficient algorithm is proposed to solve for the orbital time derivatives. In-depth descriptions of the computational implementation are given in a readily programmable manner. The numerical application to the one-dimensional lithium hydride cluster models demonstrates that the high flexibility of the TD-ORMAS framework allows for the cost-effective simulations of multielectron dynamics by exploiting systematic series of approximations to the TD-CASSCF method.

  20. Method of depositing multi-layer carbon-based coatings for field emission

    DOEpatents

    Sullivan, J.P.; Friedmann, T.A.

    1999-08-10

    A novel field emitter device is disclosed for cold cathode field emission applications, comprising a multi-layer resistive carbon film. The multi-layered film of the present invention is comprised of at least two layers of a resistive carbon material, preferably amorphous-tetrahedrally coordinated carbon, such that the resistivities of adjacent layers differ. For electron emission from the surface, the preferred structure comprises a top layer having a lower resistivity than the bottom layer. For edge emitting structures, the preferred structure of the film comprises a plurality of carbon layers, wherein adjacent layers have different resistivities. Through selection of deposition conditions, including the energy of the depositing carbon species, the presence or absence of certain elements such as H, N, inert gases or boron, carbon layers having desired resistivities can be produced. Field emitters made according to the present invention display improved electron emission characteristics in comparison to conventional field emitter materials. 8 figs.

  1. Method of depositing multi-layer carbon-based coatings for field emission

    DOEpatents

    Sullivan, John P.; Friedmann, Thomas A.

    1999-01-01

    A novel field emitter device for cold cathode field emission applications, comprising a multi-layer resistive carbon film. The multi-layered film of the present invention is comprised of at least two layers of a resistive carbon material, preferably amorphous-tetrahedrally coordinated carbon, such that the resistivities of adjacent layers differ. For electron emission from the surface, the preferred structure comprises a top layer having a lower resistivity than the bottom layer. For edge emitting structures, the preferred structure of the film comprises a plurality of carbon layers, wherein adjacent layers have different resistivities. Through selection of deposition conditions, including the energy of the depositing carbon species, the presence or absence of certain elements such as H, N, inert gases or boron, carbon layers having desired resistivities can be produced. Field emitters made according to the present invention display improved electron emission characteristics in comparison to conventional field emitter materials.

  2. Do Toxicity Identification and Evaluation Laboratory-Based Methods Reflect Causes of Field Impairment?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...

  3. SELFI: an object-based, Bayesian method for faint emission line source detection in MUSE deep field data cubes

    NASA Astrophysics Data System (ADS)

    Meillier, Céline; Chatelain, Florent; Michel, Olivier; Bacon, Roland; Piqueras, Laure; Bacher, Raphael; Ayasso, Hacheme

    2016-04-01

    We present SELFI, the Source Emission Line FInder, a new Bayesian method optimized for detection of faint galaxies in Multi Unit Spectroscopic Explorer (MUSE) deep fields. MUSE is the new panoramic integral field spectrograph at the Very Large Telescope (VLT) that has unique capabilities for spectroscopic investigation of the deep sky. It has provided data cubes with 324 million voxels over a single 1 arcmin2 field of view. To address the challenge of faint-galaxy detection in these large data cubes, we developed a new method that processes 3D data either for modeling or for estimation and extraction of source configurations. This object-based approach yields a natural sparse representation of the sources in massive data fields, such as MUSE data cubes. In the Bayesian framework, the parameters that describe the observed sources are considered random variables. The Bayesian model leads to a general and robust algorithm where the parameters are estimated in a fully data-driven way. This detection algorithm was applied to the MUSE observation of Hubble Deep Field-South. With 27 h total integration time, these observations provide a catalog of 189 sources of various categories and with secured redshift. The algorithm retrieved 91% of the galaxies with only 9% false detection. This method also allowed the discovery of three new Lyα emitters and one [OII] emitter, all without any Hubble Space Telescope counterpart. We analyzed the reasons for failure for some targets, and found that the most important limitation of the method is when faint sources are located in the vicinity of bright spatially resolved galaxies that cannot be approximated by the Sérsic elliptical profile. The software and its documentation are available on the MUSE science web service (muse-vlt.eu/science).

  4. Consistent simulation of droplet evaporation based on the phase-field multiphase lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Safari, Hesameddin; Rahimian, Mohammad Hassan; Krafczyk, Manfred

    2014-09-01

    In the present article, we extend and generalize our previous article [H. Safari, M. H. Rahimian, and M. Krafczyk, Phys. Rev. E 88, 013304 (2013), 10.1103/PhysRevE.88.013304] to include the gradient of the vapor concentration at the liquid-vapor interface as the driving force for vaporization allowing the evaporation from the phase interface to work for arbitrary temperatures. The lattice Boltzmann phase-field multiphase modeling approach with a suitable source term, accounting for the effect of the phase change on the velocity field, is used to solve the two-phase flow field. The modified convective Cahn-Hilliard equation is employed to reconstruct the dynamics of the interface topology. The coupling between the vapor concentration and temperature field at the interface is modeled by the well-known Clausius-Clapeyron correlation. Numerous validation tests including one-dimensional and two-dimensional cases are carried out to demonstrate the consistency of the presented model. Results show that the model is able to predict the flow features around and inside an evaporating droplet quantitatively in quiescent as well as convective environments.
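    For context, the Clausius-Clapeyron correlation referred to above ties the saturated vapor pressure at the interface to the local temperature; a commonly used integrated form (our notation, assuming a constant latent heat, and not necessarily the exact expression used in the paper) is:

        \frac{\mathrm{d} p_{\mathrm{sat}}}{\mathrm{d} T} = \frac{h_{fg}\, p_{\mathrm{sat}}}{R_v T^2}
        \qquad\Longrightarrow\qquad
        p_{\mathrm{sat}}(T) = p_0 \exp\!\left[ \frac{h_{fg}}{R_v} \left( \frac{1}{T_0} - \frac{1}{T} \right) \right]

    where h_fg is the latent heat of vaporization, R_v the specific gas constant of the vapor, and (T_0, p_0) a reference saturation state; the interface vapor concentration then follows from p_sat through the gas law.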

  5. Comparison of bridge load rating based on analytical and field testing methods

    NASA Astrophysics Data System (ADS)

    Cai, Chun S.; Shahawy, Mohsen A.; El-Saad, Adnan

    1999-02-01

    Unit H4 of the ACOSTA bridge in Jacksonville, Florida, is composed of steel with a composite concrete deck. The bridge, designed for AASHTO HS20 loading, was built in 1993 on a horizontal curve. However, it was later discovered that the designer neglected to consider the effect of curvature in the original design. Considering the effect of curvature in the analytical load rating resulted in a load rating of HS4. To resolve the concerns about the low load rating of this essentially new bridge and to establish the actual load capacity, field load testing was conducted in 1996. Critical sections along the span were instrumented with strain and deflection gauges. The bridge was incrementally loaded and all measurements were recorded at each load step. The results were used to study the behavior of the bridge. The field test results, combined with analysis, resulted in a higher load rating than the original analytical rating that considered the curvature effect.

  6. Variational methods for field theories

    SciTech Connect

    Ben-Menahem, S.

    1986-09-01

    Four field theory models are studied: Periodic Quantum Electrodynamics (PQED) in (2 + 1) dimensions, free scalar field theory in (1 + 1) dimensions, the Quantum XY model in (1 + 1) dimensions, and the (1 + 1) dimensional Ising model in a transverse magnetic field. The last three parts deal exclusively with variational methods; the PQED part involves mainly the path-integral approach. The PQED calculation results in a better understanding of the connection between electric confinement through monopole screening, and confinement through tunneling between degenerate vacua. This includes a better quantitative agreement for the string tensions in the two approaches. Free field theory is used as a laboratory for a new variational blocking-truncation approximation, in which the high-frequency modes in a block are truncated to wave functions that depend on the slower background modes (Born-Oppenheimer approximation). This "adiabatic truncation" method gives very accurate results for ground-state energy density and correlation functions. Various adiabatic schemes, with one variable kept per site and then two variables per site, are used. For the XY model, several trial wave functions for the ground state are explored, with an emphasis on the periodic Gaussian. A connection is established with the vortex Coulomb gas of the Euclidean path integral approach. The approximations used are taken from the realms of statistical mechanics (mean field approximation, transfer-matrix methods) and of quantum mechanics (iterative blocking schemes). In developing blocking schemes based on continuous variables, problems due to the periodicity of the model were solved. Our results exhibit an order-disorder phase transition. The transfer-matrix method is used to find a good (non-blocking) trial ground state for the Ising model in a transverse magnetic field in (1 + 1) dimensions.

  7. Preservice Teachers' Developing Understandings about Culturally Responsive Teaching in a Field-Based Writing Methods Course

    ERIC Educational Resources Information Center

    Bennett, Susan V.

    2010-01-01

    I investigated eight preservice teachers' understandings about culturally responsive pedagogy as they participated in a writing methods course in which they tutored children from different ethnic, socioeconomic, cultural, and linguistic backgrounds in an afterschool program at a local community center. I also investigated how these preservice…

  8. Field emitters with nanoscale tips based on Mo oxide fabricated by electrochemical methods

    NASA Astrophysics Data System (ADS)

    Tsukamoto, Takeo; Sato, Takahiro; Kitamura, Shin; Kitao, Akiko; Kubota, Oichi; Ozaki, Eiji; Motoi, Taiko

    2016-04-01

    Field emitters with nanoscale tips and a fabrication technique using a nanoscale gap are described. Each fabrication technique makes it possible to form emitters on a meter-scale glass substrate. The emitter has a configuration with a single side gate to reduce the electron scattering losses at the counter electrode and improve the emission efficiency. All thin film layers constituting the emitter are fabricated by plasma-enhanced chemical vapor deposition and sputtering deposition. Nanoscale tips are formed within a shallow gap less than 7 nm deep by Joule heating of a Mo complex oxide, which is produced by the electrochemical etching of a deposited Mo layer. To our knowledge, this is the first work that shows a uniform efficiency of 5% or more achieved at an anode voltage of 10 kV and an operation voltage of 23 V.

  9. [Research on the temperature field detection method of hot forging based on long-wavelength infrared spectrum].

    PubMed

    Zhang, Yu-Cun; Wei, Bin; Fu, Xian-Bin

    2014-02-01

    A temperature field detection method based on the long-wavelength infrared spectrum for hot forging is proposed in the present paper. This method combines primary spectrum pyrometry and a three-stage FP-cavity LCTF. By optimizing the solutions of three groups of nonlinear equations in the mathematical model of temperature detection, the errors are reduced, so the measuring results are more objective and accurate. The three-stage FP-cavity LCTF system was then designed on the principle of crystal birefringence. The system realizes rapid selection of any wavelength in a certain wavelength range, which makes the response of the temperature measuring system rapid and accurate. As a result, without requiring the emissivity of the hot forging, the method can acquire exact information on the temperature field and effectively suppress the background radiation around the hot forging and the ambient light that impact the temperature detection accuracy. Finally, MATLAB results showed that the infrared spectroscopy through the three-stage FP-cavity LCTF could meet the design requirements, and experiments verified the feasibility of the temperature measuring method. Compared with a traditional single-band thermal infrared imager, the accuracy of the measuring result was improved. PMID:24822408

  10. Simulation of the reduction process of solid oxide fuel cell composite anode based on phase field method

    NASA Astrophysics Data System (ADS)

    Jiao, Zhenjun; Shikazono, Naoki

    2016-02-01

    It is known that the reduction process influences the initial performance and durability of the nickel-yttria-stabilized zirconia composite anode of the solid oxide fuel cell. In the present study, the reduction process of a nickel-yttria stabilized zirconia composite anode is simulated based on the phase field method. A three-dimensional reconstructed microstructure of the nickel oxide-yttria stabilized zirconia composite obtained by focused ion beam-scanning electron microscopy is used as the initial microstructure for the simulation. Both the reduction of nickel oxide and the nickel sintering mechanisms are considered in the model. The reduction rates of nickel oxide at different interfaces are defined based on literature data. Simulation results are qualitatively compared to experimental anode microstructures obtained at different reduction temperatures.

  11. Creating long-term gridded fields of reference evapotranspiration in Alpine terrain based on a recalibrated Hargreaves method

    NASA Astrophysics Data System (ADS)

    Haslinger, Klaus; Bartsch, Annett

    2016-03-01

    A new approach for the construction of high-resolution gridded fields of reference evapotranspiration for the Austrian domain on a daily time step is presented. Gridded data of minimum and maximum temperatures are used to estimate reference evapotranspiration based on the formulation of Hargreaves. The calibration constant in the Hargreaves equation is recalibrated to the Penman-Monteith equation in a monthly and station-wise assessment. This ensures, on one hand, eliminated biases of the Hargreaves approach compared to the formulation of Penman-Monteith and, on the other hand, also reduced root mean square errors and relative errors on a daily timescale. The resulting new calibration parameters are interpolated over time to a daily temporal resolution for a standard year of 365 days. The overall novelty of the approach is the use of surface elevation as the only predictor to estimate the recalibrated Hargreaves parameter in space. A third-order polynomial is fitted to the recalibrated parameters against elevation at every station, which yields a statistical model for assessing these new parameters in space by using the underlying digital elevation model of the temperature fields. With these newly calibrated parameters for every day of the year and every grid point, the Hargreaves method is applied to the temperature fields, yielding reference evapotranspiration for the entire grid and the time period from 1961-2013. This approach opens opportunities to create high-resolution reference evapotranspiration fields based on temperature observations only, while staying as close as possible to the estimates of the Penman-Monteith approach.
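    A minimal sketch of the Hargreaves estimate that underlies the gridding described above, with the calibration constant exposed so it can be replaced by the recalibrated, elevation- and day-of-year-dependent value. The default 0.0023 is the standard Hargreaves coefficient; the extraterrestrial radiation Ra must be supplied (e.g., from the FAO-56 formulas, which are not reproduced here), and the example numbers are placeholders.

        def hargreaves_et0(t_min, t_max, ra_mj_m2_day, c=0.0023):
            """Daily reference evapotranspiration (mm/day) after Hargreaves.

            t_min, t_max : daily minimum / maximum air temperature (deg C)
            ra_mj_m2_day : extraterrestrial radiation (MJ m-2 day-1)
            c            : Hargreaves coefficient; in the paper this is replaced by the
                           recalibrated, elevation-dependent parameter for each day of year
            """
            t_mean = 0.5 * (t_min + t_max)
            ra_mm = ra_mj_m2_day * 0.408          # radiation expressed as evaporation equivalent
            return c * ra_mm * (t_mean + 17.8) * (t_max - t_min) ** 0.5

        print(hargreaves_et0(8.0, 22.0, 30.0))    # example values only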

  12. Creating long term gridded fields of reference evapotranspiration in Alpine terrain based on a re-calibrated Hargreaves method

    NASA Astrophysics Data System (ADS)

    Haslinger, K.; Bartsch, A.

    2015-05-01

    A new approach for the construction of high resolution gridded fields of reference evapotranspiration for the Austrian domain on a daily time step is presented. Forcing fields of gridded minimum and maximum temperatures are used to estimate reference evapotranspiration based on the formulation of Hargreaves. The calibration constant in the Hargreaves equation is recalibrated to the Penman-Monteith equation, which is recommended by the FAO, in a monthly and station-wise assessment. This ensures, on one hand, eliminated biases of the Hargreaves approach compared to the formulation of Penman-Monteith and, on the other hand, also reduced root mean square errors and relative errors on a daily time scale. The resulting new calibration parameters are interpolated in time to a daily temporal resolution for a standard year of 365 days. The overall novelty of the approach is the use of surface elevation as a predictor to estimate the re-calibrated Hargreaves parameter in space. A third order spline is fitted to the re-calibrated parameters against elevation at every station and yields the statistical model for assessing these new parameters in space by using the underlying digital elevation model of the temperature fields. With newly calibrated parameters for every day of the year and every grid point, the Hargreaves method is applied to the temperature fields, yielding reference evapotranspiration for the entire grid and the time period from 1961-2013. With this approach it is possible to generate high resolution reference evapotranspiration fields when only temperature observations are available, while still meeting the requirements of the recommendations defined by the FAO.

  13. A comparison of hydroponic and soil-based screening methods to identify salt tolerance in the field in barley.

    PubMed

    Tavakkoli, Ehsan; Fatehi, Foad; Rengasamy, Pichu; McDonald, Glenn K

    2012-06-01

    Success in breeding crops for yield and other quantitative traits depends on the use of methods to evaluate genotypes accurately under field conditions. Although many screening criteria have been suggested to distinguish between genotypes for their salt tolerance under controlled environmental conditions, there is a need to test these criteria in the field. In this study, the salt tolerance, ion concentrations, and accumulation of compatible solutes of genotypes of barley with a range of putative salt tolerance were investigated using three growing conditions (hydroponics, soil in pots, and natural saline field). Initially, 60 genotypes of barley were screened for their salt tolerance and uptake of Na+, Cl–, and K+ at 150 mM NaCl and, based on this, a subset of 15 genotypes was selected for testing in pots and in the field. Expression of salt tolerance in saline solution culture was not a reliable indicator of the differences in salt tolerance between barley plants that were evident in saline soil-based comparisons. Significant correlations were observed in the rankings of genotypes on the basis of their grain yield production at a moderately saline field site and their relative shoot growth in pots at ECe 7.2 [Spearman's rank correlation (rs)=0.79] and ECe 15.3 (rs=0.82) and the crucial parameter of leaf Na+ (rs=0.72) and Cl– (rs=0.82) concentrations at ECe 7.2 dS m−1. This work has established screening procedures that correlated well with grain yield at sites with moderate levels of soil salinity. This study also showed that both salt exclusion and osmotic tolerance are involved in salt tolerance and that the relative importance of these traits may differ with the severity of the salt stress. In soil, ion exclusion tended to be more important at low to moderate levels of stress but osmotic stress became more important at higher stress levels. Salt exclusion coupled with a synthesis of organic solutes were shown to be important components of

  14. A comparison of hydroponic and soil-based screening methods to identify salt tolerance in the field in barley

    PubMed Central

    Tavakkoli, Ehsan; Fatehi, Foad; Rengasamy, Pichu; McDonald, Glenn K.

    2012-01-01

    Success in breeding crops for yield and other quantitative traits depends on the use of methods to evaluate genotypes accurately under field conditions. Although many screening criteria have been suggested to distinguish between genotypes for their salt tolerance under controlled environmental conditions, there is a need to test these criteria in the field. In this study, the salt tolerance, ion concentrations, and accumulation of compatible solutes of genotypes of barley with a range of putative salt tolerance were investigated using three growing conditions (hydroponics, soil in pots, and natural saline field). Initially, 60 genotypes of barley were screened for their salt tolerance and uptake of Na+, Cl–, and K+ at 150 mM NaCl and, based on this, a subset of 15 genotypes was selected for testing in pots and in the field. Expression of salt tolerance in saline solution culture was not a reliable indicator of the differences in salt tolerance between barley plants that were evident in saline soil-based comparisons. Significant correlations were observed in the rankings of genotypes on the basis of their grain yield production at a moderately saline field site and their relative shoot growth in pots at ECe 7.2 [Spearman’s rank correlation (rs)=0.79] and ECe 15.3 (rs=0.82) and the crucial parameter of leaf Na+ (rs=0.72) and Cl– (rs=0.82) concentrations at ECe 7.2 dS m−1. This work has established screening procedures that correlated well with grain yield at sites with moderate levels of soil salinity. This study also showed that both salt exclusion and osmotic tolerance are involved in salt tolerance and that the relative importance of these traits may differ with the severity of the salt stress. In soil, ion exclusion tended to be more important at low to moderate levels of stress but osmotic stress became more important at higher stress levels. Salt exclusion coupled with a synthesis of organic solutes were shown to be important components of salt

  15. An inversion method of 2D NMR relaxation spectra in low fields based on LSQR and L-curve

    NASA Astrophysics Data System (ADS)

    Su, Guanqun; Zhou, Xiaolong; Wang, Lijia; Wang, Yuanjun; Nie, Shengdong

    2016-04-01

    The low-field nuclear magnetic resonance (NMR) inversion method based on the traditional least-squares QR decomposition (LSQR) always produces some oscillating spectra. Moreover, the solution obtained by the traditional LSQR algorithm often cannot reflect the true distribution of all the components. Hence, a good solution requires some manual intervention, especially for low signal-to-noise ratio (SNR) data. An approach based on the LSQR algorithm and the L-curve is presented to solve this problem. The L-curve method is applied to obtain an improved initial optimal solution by balancing the residual and the complexity of the solutions, instead of manually adjusting the smoothing parameters. First, the traditional LSQR algorithm is used on 2D NMR T1-T2 data to obtain its resultant spectra and corresponding residuals, whose norms are utilized to plot the L-curve. Second, the corner of the L-curve is located and taken as the initial optimal solution for the non-negative constraint. Finally, a 2D map is corrected and calculated iteratively based on the initial optimal solution. The proposed approach is tested on both simulated and measured data. The results show that this algorithm is robust, accurate and promising for NMR analysis.
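    A minimal sketch of the two ingredients named above: SciPy's damped LSQR solver run over a grid of damping values, and the L-curve built from the resulting residual and solution norms, whose corner selects the regularization level. The corner is found here by a crude maximum-curvature heuristic, and the kernel matrix and data are random placeholders rather than a T1-T2 inversion kernel.

        import numpy as np
        from scipy.sparse.linalg import lsqr

        def l_curve_corner(K, b, damps):
            """Run damped LSQR over a grid of damping parameters and pick the L-curve corner."""
            res_norms, sol_norms, sols = [], [], []
            for d in damps:
                x = lsqr(K, b, damp=d)[0]
                sols.append(x)
                res_norms.append(np.linalg.norm(K @ x - b))
                sol_norms.append(np.linalg.norm(x))
            # L-curve in log-log space; corner approximated by the point of maximum curvature
            rho, eta = np.log(res_norms), np.log(sol_norms)
            d1r, d1e = np.gradient(rho), np.gradient(eta)
            d2r, d2e = np.gradient(d1r), np.gradient(d1e)
            curvature = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
            i = int(np.argmax(curvature))
            return damps[i], sols[i]

        # Placeholder kernel and noisy data standing in for the T1-T2 inversion problem
        rng = np.random.default_rng(1)
        K = rng.random((200, 50))
        x_true = np.maximum(rng.normal(0.0, 1.0, 50), 0.0)
        b = K @ x_true + 0.01 * rng.normal(size=200)
        damp_opt, x0 = l_curve_corner(K, b, np.logspace(-4, 1, 30))
        print(damp_opt)

    In the full method, x0 would serve as the initial solution that is then corrected iteratively under the non-negativity constraint.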

  16. Lanczos-based Low-Rank Correction Method for Solving the Dyson Equation in Inhomogenous Dynamical Mean-Field Theory

    NASA Astrophysics Data System (ADS)

    Carrier, Pierre; Tang, Jok M.; Saad, Yousef; Freericks, James K.

    Inhomogeneous dynamical mean-field theory has been employed to solve many interesting strongly interacting problems, from transport in multilayered devices to the properties of ultracold atoms in a trap. The main computational step, especially for large systems, is the problem of calculating the inverse of a large sparse matrix to solve Dyson's equation and determine the local Green's function at each lattice site from the corresponding local self-energy. We present a new efficient algorithm, the Lanczos-based low-rank algorithm, for the calculation of the inverse of a large sparse matrix which yields this local (imaginary time) Green's function. The Lanczos-based low-rank algorithm is based on a domain decomposition viewpoint, but avoids explicit calculation of Schur complements and relies instead on low-rank matrix approximations derived from the Lanczos algorithm for solving the Dyson equation. We report at least a 25-fold improvement in performance compared to explicit decomposition (such as sparse LU) of the matrix inverse. We also report that the scaling with matrix size of the low-rank correction method on the one hand and domain decomposition methods on the other is comparable.

  17. Optimization of the Homogenization Heat Treatment of Nickel-Based Superalloys Based on Phase-Field Simulations: Numerical Methods and Experimental Validation

    NASA Astrophysics Data System (ADS)

    Rettig, Ralf; Ritter, Nils C.; Müller, Frank; Franke, Martin M.; Singer, Robert F.

    2015-12-01

    A method for predicting the fastest possible homogenization treatment of the as-cast microstructure of nickel-based superalloys is presented and compared with experimental results for the single-crystal superalloy ERBO/1. The computational prediction method is based on phase-field simulations. Experimentally determined compositional fields of the as-cast microstructure from microprobe measurements are used as input data. The software program MICRESS is employed to account for multicomponent diffusion, dissolution of the eutectic phases, and nucleation and growth of the liquid phase (incipient melting). The optimization itself is performed using an iterative algorithm that increases the temperature in such a way that the microstructural state is always very close to the incipient melting limit. Maps are derived that describe the dissolution of primary γ/γ'-islands and the elimination of residual segregation with respect to temperature and time.

  18. Methods for rapid frequency-domain characterization of leakage currents in silicon nanowire-based field-effect transistors.

    PubMed

    Roinila, Tomi; Yu, Xiao; Verho, Jarmo; Li, Tie; Kallio, Pasi; Vilkko, Matti; Gao, Anran; Wang, Yuelin

    2014-01-01

    Silicon nanowire-based field-effect transistors (SiNW FETs) have demonstrated the ability of ultrasensitive detection of a wide range of biological and chemical targets. The detection is based on the variation of the conductance of the nanowire channel, which is caused by the target substance and is seen in the voltage-current behavior between the drain and source. Some current, known as leakage current, flows between the gate and drain and affects the current between the drain and source. Studies have shown that the leakage current is frequency dependent. Measurements of such frequency characteristics can provide valuable tools for validating the functionality of the used transistor. The measurements can also be an advantage in developing new detection technologies utilizing SiNW FETs. The frequency-domain responses can be measured by using a commercial sine-sweep-based network analyzer. However, because the analyzer takes a long time, it effectively prevents the development of most practical applications. Another problem with the method is that, in order to produce sinusoids, the signal generator has to cope with a large number of signal levels, which may become challenging in developing low-cost applications. This paper presents fast, cost-effective frequency-domain methods with which to obtain the responses within seconds. The inverse-repeat binary sequence (IRS) is applied and the admittance spectroscopy between the drain and source is computed through Fourier methods. The methods are verified by experimental measurements from an n-type SiNW FET. PMID:25161832
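    A minimal sketch of the excitation and analysis described above: a maximum-length binary sequence is turned into an inverse-repeat sequence (one MLS period followed by its inverted copy), and a frequency response is estimated from the FFTs of the injected excitation and the measured response. The MLS generator uses scipy.signal.max_len_seq; the device measurement itself is replaced by a synthetic placeholder response.

        import numpy as np
        from scipy.signal import max_len_seq

        def inverse_repeat_sequence(nbits=10):
            """Inverse-repeat binary sequence: an MLS period followed by its inverse."""
            mls = 2.0 * max_len_seq(nbits)[0] - 1.0      # map {0, 1} -> {-1, +1}
            return np.concatenate([mls, -mls])

        def frequency_response(excitation, response, fs):
            """Estimate the frequency response from one excitation/response record."""
            X = np.fft.rfft(excitation)
            Y = np.fft.rfft(response)
            H = Y / (X + 1e-15)
            freqs = np.fft.rfftfreq(len(excitation), d=1.0 / fs)
            return freqs, H

        irs = inverse_repeat_sequence()
        # 'response' would be the measured device current; simulated here by a simple filter
        response = np.convolve(irs, np.exp(-np.arange(50) / 10.0), mode="same")
        freqs, H = frequency_response(irs, response, fs=1e4)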

  19. Morphological evolution and migration of void in bi-piezoelectric interface based on nonlocal phase field method

    NASA Astrophysics Data System (ADS)

    Li, H. B.; Wang, X.

    2016-05-01

    This paper reports the results of an investigation into the morphological evolution and migration of a void in a bi-piezoelectric material interface by utilizing a nonlocal phase field model and the finite element method (FEM), where the small scale effect containing the long-range forces among atoms is considered. The nonlocal elastic strain energy and the nonlocal electric energy around the void are first calculated by the finite element method. Then, based on the finite difference method (FDM), the thermodynamic equilibrium equation containing the surface energy and anisotropic diffusivity is solved to simulate the morphological evolution and migration of an elliptical void in the bi-piezoelectric film interface. Results show that the loading condition plays a significant role in the evolution process, and the boundary of the void's long axis gradually collapses toward the center of the ellipse. In addition, the evolutionary speed of the left boundary gradually decreases as the scale effect coefficient grows. This work can provide references for the safety evaluation of piezoelectric materials in micro-electro-mechanical systems.

  20. Self-consistent Green's function embedding for advanced electronic structure methods based on a dynamical mean-field concept

    NASA Astrophysics Data System (ADS)

    Chibani, Wael; Ren, Xinguo; Scheffler, Matthias; Rinke, Patrick

    2016-04-01

    We present an embedding scheme for periodic systems that facilitates the treatment of the physically important part (here a unit cell or a supercell) with advanced electronic structure methods, that are computationally too expensive for periodic systems. The rest of the periodic system is treated with computationally less demanding approaches, e.g., Kohn-Sham density-functional theory, in a self-consistent manner. Our scheme is based on the concept of dynamical mean-field theory formulated in terms of Green's functions. Our real-space dynamical mean-field embedding scheme features two nested Dyson equations, one for the embedded cluster and another for the periodic surrounding. The total energy is computed from the resulting Green's functions. The performance of our scheme is demonstrated by treating the embedded region with hybrid functionals and many-body perturbation theory in the GW approach for simple bulk systems. The total energy and the density of states converge rapidly with respect to the computational parameters and approach their bulk limit with increasing cluster (i.e., computational supercell) size.

  1. Microcalcification detection in full-field digital mammograms with PFCM clustering and weighted SVM-based method

    NASA Astrophysics Data System (ADS)

    Liu, Xiaoming; Mei, Ming; Liu, Jun; Hu, Wei

    2015-12-01

    Clustered microcalcifications (MCs) in mammograms are an important early sign of breast cancer in women, and their accurate detection is important in computer-aided detection (CADe). In this paper, we integrated the possibilistic fuzzy c-means (PFCM) clustering algorithm and a weighted support vector machine (WSVM) for the detection of MC clusters in full-field digital mammograms (FFDM). For each image, suspicious MC regions are extracted with region growing and active contour segmentation. Then geometry and texture features are extracted for each suspicious MC, a mutual information-based supervised criterion is used to select important features, and PFCM is applied to cluster the samples into two clusters. Weights of the samples are calculated based on the possibility and typicality values from the PFCM and the ground truth labels, and a weighted nonlinear SVM is trained. During the test process, when an unknown image is presented, suspicious regions are located with the segmentation step, the selected features are extracted, and the suspicious MC regions are classified as containing MC or not by the trained weighted nonlinear SVM. Finally, the MC regions are analyzed with spatial information to locate MC clusters. The proposed method is evaluated using a database of 410 clinical mammograms and compared with a standard unweighted support vector machine (SVM) classifier. The detection performance is evaluated using receiver operating characteristic (ROC) curves and free-response receiver operating characteristic (FROC) curves. The proposed method obtained an area under the ROC curve of 0.8676, while the standard SVM obtained an area of 0.8268 for MC detection. For MC cluster detection, the proposed method obtained a high sensitivity of 92% with a false-positive rate of 2.3 clusters/image, and it is also better than the standard SVM with 4.7 false-positive clusters/image at the same sensitivity.
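    A minimal sketch of the weighted-SVM training step described above, using scikit-learn's per-sample weights. The features, labels and weights, which in the paper come from the extracted geometry/texture features and the PFCM possibility/typicality values combined with the ground truth, are random placeholders here.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))            # selected features of suspicious MC regions
        y = rng.integers(0, 2, size=200)          # 1 = true microcalcification, 0 = false positive
        w = rng.uniform(0.2, 1.0, size=200)       # per-sample weights, e.g. from PFCM memberships

        clf = SVC(kernel="rbf", C=10.0, gamma="scale", probability=True)
        clf.fit(X, y, sample_weight=w)            # weighted nonlinear SVM

        # At test time, suspicious regions from a new mammogram would be scored like this
        scores = clf.predict_proba(X[:5])[:, 1]
        print(scores)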

  2. Teaching Geographic Field Methods Using Paleoecology

    ERIC Educational Resources Information Center

    Walsh, Megan K.

    2014-01-01

    Field-based undergraduate geography courses provide numerous pedagogical benefits including an opportunity for students to acquire employable skills in an applied context. This article presents one unique approach to teaching geographic field methods using paleoecological research. The goals of this course are to teach students key geographic…

  3. Spatial methods for nonstationary fields

    NASA Astrophysics Data System (ADS)

    Nychka, D. W.

    2012-12-01

    Kriging is a non-parametric regression method used in geostatistics for estimating curves and surfaces and forms the core of most statistical methods for spatial data. In climate science these methods are very useful for estimating how climate varies over a geographic region when the observational data are sparse or the computer model runs are limited. A statistical challenge is to implement spatial methods for large sample sizes and also to handle the heterogeneity in the physical fields, both common features of many geophysical problems. Equally important is to provide companion measures of uncertainty so that the estimated surfaces can be compared and interpreted in an objective way. Here we present a new statistical method that can represent nonstationary structure in a field and also scale to large numbers of spatial locations. A practical example is also presented for a subset of the North American Regional Climate Change Assessment Program model data.
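
    A minimal sketch of the kriging prediction step described above, under simplifying assumptions (simple kriging with a stationary exponential covariance and a constant mean; the nonstationary, large-sample machinery of the record is not reproduced, and all values are synthetic):

      # Simple-kriging sketch: predict a surface value and its variance at new locations from sparse observations.
      import numpy as np

      def exp_cov(d, sill=1.0, rng_=50.0):
          return sill * np.exp(-d / rng_)            # stationary exponential covariance (illustrative)

      def krige(obs_xy, obs_z, pred_xy, nugget=1e-6):
          d_oo = np.linalg.norm(obs_xy[:, None] - obs_xy[None, :], axis=-1)
          d_po = np.linalg.norm(pred_xy[:, None] - obs_xy[None, :], axis=-1)
          K = exp_cov(d_oo) + nugget * np.eye(len(obs_xy))
          k = exp_cov(d_po)
          w = np.linalg.solve(K, k.T)                 # kriging weights
          mean = obs_z.mean()
          pred = mean + w.T @ (obs_z - mean)
          var = exp_cov(0.0) - np.einsum("ij,ji->i", k, w)   # companion measure of uncertainty
          return pred, var

      rng = np.random.default_rng(1)
      obs_xy = rng.uniform(0, 100, size=(30, 2))
      obs_z = np.sin(obs_xy[:, 0] / 20.0) + 0.1 * rng.normal(size=30)
      grid = np.array([[10.0, 10.0], [50.0, 50.0], [90.0, 90.0]])
      print(krige(obs_xy, obs_z, grid))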

  4. Field method for sulfide determination

    SciTech Connect

    Wilson, B L; Schwarser, R R; Chukwuenye, C O

    1982-01-01

    A simple and rapid method was developed for determining the total sulfide concentration in water in the field. Direct measurements were made using a silver/sulfide ion selective electrode in conjunction with a double junction reference electrode connected to an Orion Model 407A/F Specific Ion Meter. The method also made use of a sulfide anti-oxidant buffer (SAOB II) which consists of ascorbic acid, sodium hydroxide, and disodium EDTA. Preweighed sodium sulfide crystals were sealed in airtight plastic volumetric flasks which were used in the standardization process in the field. Field standards were prepared by adding SAOB II to the flask containing the sulfide crystals and diluting it to the mark with deionized deaerated water. Serial dilutions of the standards were used to prepare standards of lower concentrations. Concentrations as low as 6 ppb were obtained on lake samples with a reproducibility better than ±10%.
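
    A small sketch of how the field standardization described above becomes a working curve (the readings below are illustrative numbers, not data from the record): the electrode potential varies linearly with the logarithm of sulfide concentration, so a line fitted to the standards is inverted to read sample concentrations.

      # Ion-selective-electrode calibration sketch: fit E = E0 + S*log10(C) and invert for samples.
      import numpy as np

      std_conc_ppm = np.array([0.1, 1.0, 10.0, 100.0])          # serial dilutions of the field standards
      std_mv       = np.array([-620.0, -648.0, -677.0, -705.0]) # illustrative electrode readings (mV)

      slope, intercept = np.polyfit(np.log10(std_conc_ppm), std_mv, 1)  # S and E0

      def sulfide_ppm(reading_mv):
          """Invert the calibration line to get a concentration from a field reading."""
          return 10.0 ** ((reading_mv - intercept) / slope)

      print(round(slope, 1), round(sulfide_ppm(-660.0), 2))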

  5. Assessment of sit-to-stand movement in nonspecific low back pain: a comparison study for psychometric properties of field-based and laboratory-based methods.

    PubMed

    Kahraman, Turhan; Ozcan Kahraman, Buse; Salik Sengul, Yesim; Kalemci, Orhan

    2016-06-01

    One of the most difficult tasks associated with the management of nonspecific low back pain (LBP) is its clinical assessment. Objective functional methods have been developed for assessment. However, few studies have used daily activities such as sit-to-stand (STS). The aim was to compare the psychometric properties of two commonly used STS assessment methods. A test-retest reliability study design was used. Participants with nonspecific LBP performed the 30-s chair stand test (30CST) and the STS test in Balance Master, which measures weight transfer, rising index and centre of gravity sway velocity. The same tests were reperformed after 48-72 h. The intraclass correlation coefficient (ICC), standard error of measurement (SEM), minimal detectable change and coefficient of variation were calculated to compare the reliability. The correlations between the tests, the Oswestry Disability Index and pain intensity were examined for validation. The 30CST had very high intrarater reliability (ICC=0.94). The variables of the STS test in Balance Master had moderate intrarater reliability (ICC=0.62-0.69). There were significant correlations between the 30CST, Oswestry Disability Index and pain intensity at activity (P<0.01). The rising index was the only variable that was significantly correlated with pain intensity at activity (P<0.05). The 30CST, as the field-based method to measure STS movement, was better than the laboratory-based method in terms of psychometric properties. Moreover, the 30CST was associated with disability and pain related to LBP. The 30CST is a simple, cheap, less time-consuming and psychometrically appropriate method to use in individuals with nonspecific LBP. PMID:27031182
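
    For readers who want to reproduce the reliability statistics named above, here is a compact sketch (a two-way random-effects single-measure ICC(2,1) is shown as one common choice; the record does not state which ICC model was used, and the test-retest scores below are synthetic):

      # Test-retest reliability sketch: ICC(2,1), SEM and MDC95 from an n_subjects x n_sessions score matrix.
      import numpy as np

      def icc_2_1(scores):
          n, k = scores.shape
          grand = scores.mean()
          row_m = scores.mean(axis=1)      # subject means
          col_m = scores.mean(axis=0)      # session means
          ss_rows = k * ((row_m - grand) ** 2).sum()
          ss_cols = n * ((col_m - grand) ** 2).sum()
          ss_tot  = ((scores - grand) ** 2).sum()
          ss_err  = ss_tot - ss_rows - ss_cols
          msr, msc = ss_rows / (n - 1), ss_cols / (k - 1)
          mse = ss_err / ((n - 1) * (k - 1))
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      rng = np.random.default_rng(0)
      true = rng.normal(12, 3, size=25)                       # illustrative 30CST repetition counts
      scores = np.column_stack([true + rng.normal(0, 1, 25),  # session 1
                                true + rng.normal(0, 1, 25)]) # session 2 (48-72 h later)
      icc = icc_2_1(scores)
      sem = scores.std(ddof=1) * np.sqrt(1 - icc)             # standard error of measurement
      mdc95 = 1.96 * np.sqrt(2) * sem                         # minimal detectable change
      print(round(icc, 2), round(sem, 2), round(mdc95, 2))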

  6. An image-based reaction field method for electrostatic interactions in molecular dynamics simulations of aqueous solutions

    NASA Astrophysics Data System (ADS)

    Lin, Yuchun; Baumketner, Andrij; Deng, Shaozhong; Xu, Zhenli; Jacobs, Donald; Cai, Wei

    2009-10-01

    In this paper, a new solvation model is proposed for simulations of biomolecules in aqueous solutions that combines the strengths of explicit and implicit solvent representations. Solute molecules are placed in a spherical cavity filled with explicit water, thus providing microscopic detail where it is most needed. Solvent outside of the cavity is modeled as a dielectric continuum whose effect on the solute is treated through the reaction field corrections. With this explicit/implicit model, the electrostatic potential represents a solute molecule in an infinite bath of solvent, thus avoiding unphysical interactions between periodic images of the solute commonly used in the lattice-sum explicit solvent simulations. For improved computational efficiency, our model employs an accurate and efficient multiple-image charge method to compute reaction fields together with the fast multipole method for the direct Coulomb interactions. To minimize the surface effects, periodic boundary conditions are employed for nonelectrostatic interactions. The proposed model is applied to study liquid water. The effect of model parameters, which include the size of the cavity, the number of image charges used to compute reaction field, and the thickness of the buffer layer, is investigated in comparison with the particle-mesh Ewald simulations as a reference. An optimal set of parameters is obtained that allows for a faithful representation of many structural, dielectric, and dynamic properties of the simulated water, while maintaining manageable computational cost. With controlled and adjustable accuracy of the multiple-image charge representation of the reaction field, it is concluded that the employed model achieves convergence with only one image charge in the case of pure water. Future applications to pKa calculations, conformational sampling of solvated biomolecules and electrolyte solutions are briefly discussed.
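
    The single-image limit mentioned in the conclusion has a simple closed form in the classic Kelvin/Friedman picture, sketched below (this is the textbook single-image approximation with illustrative parameter values, not the authors' multiple-image expansion): a source charge q at distance s from the center of a spherical cavity of radius a, with dielectric constants eps_in inside and eps_out outside, acquires an image charge at the Kelvin point a**2/s.

      # Kelvin/Friedman single-image sketch for the reaction field of a charge inside a dielectric sphere.
      def image_charge(q, s, a, eps_in=1.0, eps_out=80.0):
          """Return (image charge, image distance from center) for a charge q at distance s < a."""
          gamma = (eps_out - eps_in) / (eps_out + eps_in)
          q_img = -q * gamma * a / s          # Friedman image-charge magnitude (approximation)
          r_img = a * a / s                   # Kelvin image point, outside the cavity
          return q_img, r_img

      def reaction_potential(q, s, a, r_obs, eps_in=1.0, eps_out=80.0):
          """Reaction-field potential at a point r_obs on the line through the charge (Gaussian units)."""
          q_img, r_img = image_charge(q, s, a, eps_in, eps_out)
          return q_img / (eps_in * abs(r_img - r_obs))

      print(image_charge(q=1.0, s=6.0, a=10.0))
      print(reaction_potential(q=1.0, s=6.0, a=10.0, r_obs=6.0))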

  7. 3D photo mosaicing of Tagiri shallow vent field by an autonomous underwater vehicle (3rd report) - Mosaicing method based on navigation data and visual features -

    NASA Astrophysics Data System (ADS)

    Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi

    Large-area seafloor imaging will bring significant benefits to various fields such as academics, resource survey, marine development, security, and search-and-rescue. The authors have proposed a navigation method of an autonomous underwater vehicle for seafloor imaging, and verified its performance through mapping tubeworm colonies over an area of 3,000 square meters using the AUV Tri-Dog 1 at Tagiri vent field, Kagoshima bay in Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a number of pictures taken by an underwater platform. The method first removes lens distortion and inconsistencies of color and lighting from each image, and then ortho-rectification is performed based on the camera pose and seafloor geometry estimated from navigation data. The image alignment is based on both navigation data and visual characteristics, implemented as an expansion of the image-based method (Pizarro et al., 2003). Using the two types of information realizes an image alignment that is consistent both globally and locally, as well as making the method applicable to data sets with few visual features. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in Sep. 2009. A seamless, uniformly illuminated photo mosaic covering an area of around 500 square meters was created from 391 pictures, which covers unique features of the field such as bacteria mats and tubeworm colonies.

  8. An atomic orbital-based formulation of the complete active space self-consistent field method on graphical processing units

    SciTech Connect

    Hohenstein, Edward G.; Luehr, Nathan; Ufimtsev, Ivan S.; Martínez, Todd J.

    2015-06-14

    Despite its importance, state-of-the-art algorithms for performing complete active space self-consistent field (CASSCF) computations have lagged far behind those for single reference methods. We develop an algorithm for the CASSCF orbital optimization that uses sparsity in the atomic orbital (AO) basis set to increase the applicability of CASSCF. Our implementation of this algorithm uses graphical processing units (GPUs) and has allowed us to perform CASSCF computations on molecular systems containing more than one thousand atoms. Additionally, we have implemented analytic gradients of the CASSCF energy; the gradients also benefit from GPU acceleration as well as sparsity in the AO basis.

  9. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    SciTech Connect

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    2015-12-07

    The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.

  10. Field testing of component-level model-based fault detection methods for mixing boxes and VAV fan systems

    SciTech Connect

    Xu, Peng; Haves, Philip

    2002-05-16

    An automated fault detection and diagnosis tool for HVAC systems is being developed, based on an integrated, life-cycle approach to commissioning and performance monitoring. The tool uses component-level HVAC equipment models implemented in the SPARK equation-based simulation environment. The models are configured using design information and component manufacturers' data and then fine-tuned to match the actual performance of the equipment by using data measured during functional tests of the sort used in commissioning. This paper presents the results of field tests of mixing box and VAV fan system models in an experimental facility and a commercial office building. The models were found to be capable of representing the performance of correctly operating mixing box and VAV fan systems and detecting several types of incorrect operation.
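
    A minimal sketch of the component-level, model-based fault detection logic described above (the ideal mixing-box model, threshold and persistence rule are simplified placeholders, not the SPARK component models of the record): the measured output is compared with the calibrated model prediction, and a sustained residual beyond an uncertainty threshold flags a fault.

      # Model-based fault detection sketch: compare measurements against a calibrated component model.
      import numpy as np

      def predicted_mixed_air_temp(t_outdoor, t_return, damper_fraction):
          """Ideal mixing-box model: mixed-air temperature from outdoor/return temperatures and damper position."""
          return damper_fraction * t_outdoor + (1.0 - damper_fraction) * t_return

      def detect_fault(measured, predicted, threshold=1.5, min_consecutive=5):
          """Flag a fault when the residual exceeds the threshold for several consecutive samples."""
          residual = np.abs(np.asarray(measured) - np.asarray(predicted))
          run = 0
          for flag in residual > threshold:
              run = run + 1 if flag else 0
              if run >= min_consecutive:
                  return True
          return False

      t_out, t_ret, damper = np.full(60, 10.0), np.full(60, 22.0), np.full(60, 0.3)
      predicted = predicted_mixed_air_temp(t_out, t_ret, damper)
      measured = predicted + 3.0 * (np.arange(60) > 30)   # e.g. a stuck damper after sample 30
      print(detect_fault(measured, predicted))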

  11. Lattice Boltzmann method for binary fluids based on mass-conserving quasi-incompressible phase-field theory

    NASA Astrophysics Data System (ADS)

    Yang, Kang; Guo, Zhaoli

    2016-04-01

    In this paper, a lattice Boltzmann equation (LBE) model is proposed for binary fluids based on a quasi-incompressible phase-field model [J. Shen et al., Commun. Comput. Phys. 13, 1045 (2013), 10.4208/cicp.300711.160212a]. Compared with other incompressible LBE models based on the incompressible phase-field theory, the quasi-incompressible model conserves mass locally. A series of numerical simulations are performed to validate the proposed model, and comparisons with an incompressible LBE model [H. Liang et al., Phys. Rev. E 89, 053320 (2014), 10.1103/PhysRevE.89.053320] are also carried out. It is shown that the proposed model can track the interface accurately. For the stationary droplet and rising bubble problems, the quasi-incompressible LBE gives nearly the same predictions as the incompressible model, but the compressible effect in the present model plays a significant role in the phase separation problem. Therefore, in general cases the present mass-conserving model should be adopted.

  12. Multi-scale modeling of microstructure dependent intergranular brittle fracture using a quantitative phase-field based method

    DOE PAGESBeta

    Chakraborty, Pritam; Zhang, Yongfeng; Tonks, Michael R.

    2015-12-07

    The fracture behavior of brittle materials is strongly influenced by their underlying microstructure, which needs explicit consideration for accurate prediction of fracture properties and the associated scatter. In this work, a hierarchical multi-scale approach is pursued to model microstructure-sensitive brittle fracture. A quantitative phase-field based fracture model is utilized to capture the complex crack growth behavior in the microstructure, and the related parameters are calibrated from lower-length-scale atomistic simulations instead of engineering-scale experimental data. The workability of this approach is demonstrated by performing porosity-dependent intergranular fracture simulations in UO2 and comparing the predictions with experiments.

  13. Modified methods of stellar magnetic field measurements

    NASA Astrophysics Data System (ADS)

    Kholtygin, A. F.

    2014-12-01

    The standard methods of magnetic field measurement, based on an analysis of the relation between the Stokes V-parameter and the first derivative of the total line profile intensity, were modified by applying a linear integral operator L̂ to both sides of this relation. As the operator L̂, the operator of the wavelet transform with DOG-wavelets is used. The key advantage of the proposed method is an effective suppression of the noise contribution to the line profile and the Stokes parameter V. The efficiency of the method has been studied using model line profiles with various noise contributions. To test the proposed method, the spectropolarimetric observations of the A0 star α2 CVn, the Of?p star HD 148937, and the A0 supergiant HD 92207 were used. The longitudinal magnetic field strengths calculated by our method appeared to be in good agreement with those determined by other methods.
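
    A compact sketch of the weak-field estimate underlying this approach (the constants, line parameters and filter are illustrative, and a plain Gaussian smoother stands in for the DOG-wavelet operator of the record): in the weak-field approximation the Stokes V profile is proportional to the wavelength derivative of the intensity profile, so the longitudinal field follows from a least-squares regression after the same smoothing operator is applied to both sides.

      # Weak-field longitudinal-field sketch: regress Stokes V on dI/dlambda after common smoothing.
      import numpy as np

      C = 4.67e-13          # standard weak-field constant (wavelengths in Angstrom, field in Gauss)
      g_eff = 1.2           # effective Lande factor (illustrative)
      lam0 = 5000.0         # line wavelength in Angstrom (illustrative)

      def smooth(signal, width=3.0):
          """Gaussian smoothing; a simple stand-in for the DOG-wavelet filtering of the record."""
          x = np.arange(-4 * width, 4 * width + 1)
          kern = np.exp(-0.5 * (x / width) ** 2)
          return np.convolve(signal, kern / kern.sum(), mode="same")

      def longitudinal_field(lam, stokes_i, stokes_v):
          didl = np.gradient(smooth(stokes_i), lam)
          v = smooth(stokes_v)
          x = -C * g_eff * lam0 ** 2 * didl
          return np.sum(x * v) / np.sum(x * x)        # least-squares slope, i.e. B_l in Gauss

      lam = np.linspace(4995.0, 5005.0, 401)
      prof = 1.0 - 0.5 * np.exp(-0.5 * ((lam - lam0) / 0.3) ** 2)
      b_true = 800.0
      noise = 2e-5 * np.random.default_rng(2).normal(size=lam.size)
      v_obs = -C * g_eff * lam0 ** 2 * b_true * np.gradient(prof, lam) + noise
      print(round(longitudinal_field(lam, prof, v_obs), 1))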

  14. [Calculation and analysis of arc temperature field of pulsed TIG welding based on Fowler-Milne method].

    PubMed

    Xiao, Xiao; Hua, Xue-Ming; Wu, Yi-Xiong; Li, Fang

    2012-09-01

    Pulsed TIG welding is widely used in industry due to its superior properties, and the measurement of arc temperature is important to the analysis of the welding process. The relationship between the particle densities of Ar and temperature was calculated based on spectral theory, the relationship between the emission coefficient of the spectral line at 794.8 nm and temperature was calculated, an arc image at the 794.8 nm spectral line was captured by a high-speed camera, and both Abel inversion and the Fowler-Milne method were used to calculate the temperature distribution of pulsed TIG welding. PMID:23240389
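
    A minimal sketch of the Abel-inversion step mentioned above, using the simple onion-peeling discretization (the Fowler-Milne temperature lookup itself is not reproduced, and the emission profile is synthetic): the line-of-sight projection of an axisymmetric emitter is written as a triangular system over annular shells and solved for the local emission coefficients.

      # Onion-peeling Abel inversion sketch: recover radial emission coefficients from a side-on projection.
      import numpy as np

      def path_matrix(n, dr):
          """L[i, j] = chord length of the ray at y_i = i*dr through annular shell j of width dr."""
          L = np.zeros((n, n))
          for i in range(n):
              y2 = (i * dr) ** 2
              for j in range(i, n):
                  outer = np.sqrt(max(((j + 1) * dr) ** 2 - y2, 0.0))
                  inner = np.sqrt(max((j * dr) ** 2 - y2, 0.0))
                  L[i, j] = 2.0 * (outer - inner)
          return L

      n, dr = 50, 0.1
      r = (np.arange(n) + 0.5) * dr
      eps_true = np.exp(-(r / 2.0) ** 2)          # illustrative radial emission profile
      L = path_matrix(n, dr)
      projection = L @ eps_true                   # what a side-on camera line of sight would record
      eps_recovered = np.linalg.solve(L, projection)
      print(np.max(np.abs(eps_recovered - eps_true)))   # ~0 for noise-free data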

  15. A Band Selection Method for Sub-Pixel Target Detection in Hyperspectral Images Based on Laboratory and Field Reflectance Spectral Comparison

    NASA Astrophysics Data System (ADS)

    Sharifi hashjin, S.; Darvishi, A.; Khazai, S.; Hatami, F.; Jafari houtki, M.

    2016-06-01

    In recent years, developing target detection algorithms has received growing interest in hyperspectral images. In comparison to the classification field, few studies have been done on dimension reduction or band selection for target detection in hyperspectral images. This study presents a simple method to remove bad bands from the images in a supervised manner for sub-pixel target detection. The proposed method is based on comparing field and laboratory spectra of the target of interest for detecting bad bands. For evaluation, the target detection blind test dataset is used in this study. Experimental results show that the proposed method can improve efficiency of the two well-known target detection methods, ACE and CEM.
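
    A minimal sketch of the supervised band-removal idea described above (the tolerance and the spectra are placeholders): bands where the field reflectance of the target deviates from its laboratory reflectance by more than a tolerance are flagged as bad and excluded before target detection.

      # Band-selection sketch: keep only bands where lab and field target spectra agree.
      import numpy as np

      def select_bands(lab_spectrum, field_spectrum, tol=0.05):
          """Return indices of bands whose absolute lab/field reflectance difference is within tol."""
          lab = np.asarray(lab_spectrum, dtype=float)
          field = np.asarray(field_spectrum, dtype=float)
          return np.flatnonzero(np.abs(lab - field) <= tol)

      rng = np.random.default_rng(3)
      lab = rng.uniform(0.1, 0.6, size=200)                 # laboratory reflectance of the target
      field = lab + rng.normal(0.0, 0.01, size=200)
      field[80:95] += 0.3                                   # e.g. atmosphere-affected bands
      good = select_bands(lab, field)
      print(len(good), "bands kept of", lab.size)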

  16. Assessing the performance of reactant transport layers and flow fields towards oxygen transport: A new imaging method based on chemiluminescence

    NASA Astrophysics Data System (ADS)

    Lopes, Thiago; Ho, Matthew; Kakati, Biraj K.; Kucernak, Anthony R. J.

    2015-01-01

    A new, simple and precise ex-situ optical imaging method is developed which allows indirect measurement of the partial pressure of oxygen (as ozone) within fuel cell components. Images of oxygen distribution are recorded with high spatial (∼20 μm) and temporal (40 ms) resolution. This approach is applied to assess oxygen concentration across the face of a pseudo polymer electrolyte fuel cell (PEFC), with a serpentine design flow field. We show that the amount of light produced is directly proportional to the partial pressure of ozone, in the same way as the local current density in a PEFC is proportional to the partial pressure of bimolecular oxygen. Hence the simulated system provides information relevant to a PEFC with the same geometry operating at the same stoichiometric ratio. This new approach allows direct imaging of flow under lands due to pressure gradients between the adjacent channels and non-laminar flow effects due to secondary flow around U-turns. These are major discoveries of fundamental importance in guiding materials development and in validating modelling studies. We find that, contrary to many simulation papers, advection is an important mechanism in both the gas diffusion layer (more properly "reactant transport layer") and the microporous layer. Models which do not include these effects may underestimate reactant transport to the catalyst layer.

  17. Two-dimensional electrostatic force field measurements with simultaneous topography measurement on embedded interdigitated nanoelectrodes using a force distance curve based method

    NASA Astrophysics Data System (ADS)

    Jenke, Martin Günter; Santschi, Christian; Hoffmann, Patrik

    2008-02-01

    Accurate simultaneous measurements of the topography and electrostatic force field of 500 nm pitch interdigitated electrodes embedded in a thin SiO2 layer in a plane perpendicular to the orientation of the electrodes are shown for the first time. A static force distance curve (FDC) based method has been developed, which allows a lateral and vertical resolution of 25 and 2 nm, respectively. The measured force field distribution remains stable as a result of the well-controlled fabrication procedure of Pt cantilever tips, which allows thousands of FDC measurements. A numerical model is established as well, which demonstrates good agreement with the experimental results.

  18. Field evaluation of a VOST sampling method

    SciTech Connect

    Jackson, M.D.; Johnson, L.D.; Fuerst, R.G.; McGaughey, J.F.; Bursey, J.T.; Merrill, R.G.

    1994-12-31

    The VOST (SW-846 Method 0030) specifies the use of Tenax® and a particular petroleum-based charcoal (SKC Lot 104, or its equivalent), which is no longer commercially available. In field evaluation studies of VOST methodology, a replacement petroleum-based charcoal has been used: candidate replacement sorbents for charcoal were studied, and Anasorb® 747, a carbon-based sorbent, was selected for field testing. The sampling train was modified to use only Anasorb® in the back tube and Tenax® in the two front tubes to avoid analytical difficulties associated with the analysis of the sequential bed back tube used in the standard VOST train. The standard (SW-846 Method 0030) and the modified VOST methods were evaluated at a chemical manufacturing facility using a quadruple probe system with quadruple trains. In this field test, known concentrations of the halogenated volatile organic compounds that are listed in the Clean Air Act Amendments of 1990, Title 3, were introduced into the VOST train and the modified VOST train, using the same certified gas cylinder as a source of test compounds. Statistical tests of the comparability of methods were performed on a compound-by-compound basis. For most compounds, the VOST and modified VOST methods were found to be statistically equivalent.
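
    A sketch of a compound-by-compound equivalence check of the kind described above (the record does not state which statistical test was used; a paired t-test on co-located quadruple-train recoveries is shown as one plausible choice, and the recovery data are synthetic):

      # Paired comparison sketch: test whether VOST and modified-VOST recoveries differ per compound.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(4)
      compounds = ["chloroform", "1,2-dichloroethane", "carbon tetrachloride"]
      for name in compounds:
          vost = rng.normal(100.0, 8.0, size=12)              # percent recovery, standard train
          modified = vost + rng.normal(0.0, 5.0, size=12)     # percent recovery, modified train
          t_stat, p_val = stats.ttest_rel(vost, modified)     # paired t-test across the quad runs
          verdict = "equivalent" if p_val > 0.05 else "different"
          print(f"{name}: p = {p_val:.3f} -> {verdict}")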

  19. [A Method to Measure the Velocity of Fragments of Large Equivalence Explosion Field Based on Explosion Flame Spectral Analysis].

    PubMed

    Liu, Ji; Yu, Li-xia; Zhang, Bin; Zhao Dong-e; Liij, Xiao-yan; Wang, Heng-fei

    2016-03-01

    The deflagration fireball that persists for a long time and covers a large area during a large-equivalent explosion makes it difficult to obtain the velocity of fragments in the near field. To solve this problem, this paper proposes an integrated photoelectric transceiver method that uses a laser screen as the sensing area. Analysis of the explosion flame spectra of three different types of warhead shows that the radiation intensity within the 0.3 to 1.0 μm band is relatively low. On this basis, the optical system applies the principle of measuring the time over a fixed distance together with reflector technology, and consists of a single-longitudinal-mode laser, a cylindrical Fresnel lens, narrow-band filters, and high-speed optical sensors. The integrated transceiver gives the system a compact structure, and the combination of the narrow-band filter and the single-longitudinal-mode laser effectively suppresses interference from the fireball spectrum and background light. A large number of experiments with different warhead models and explosive equivalents were conducted to measure the velocities of different kinds of warhead fragments, and waveform signals with a high signal-to-noise ratio were obtained after de-noising and recognition using an NI data acquisition and recording system. The experimental results show that the method can accurately measure the velocity of fragments near the center of the explosion. Specifically, the minimum fragment size that can be measured is 4 mm, velocities of up to 1 200 m x s(-1) can be obtained, and the capture rate is better than 95% compared with target-plate results. At the same time, the system uses Fresnel lenses to form a rectangular light screen whose intensity distribution is uniform in the vertical direction, with a light intensity uniformity of more than 80% in the horizontal direction. Consequently, the system can
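
    A small sketch of the time-over-fixed-distance principle used above (the sampling rate, screen spacing and threshold are placeholders): the fragment's crossing of each light screen produces a pulse in the photodetector signal, and the velocity is the screen separation divided by the time between the two pulses.

      # Two-screen velocity sketch: v = screen separation / time between detected crossings.
      import numpy as np

      def crossing_time(signal, t, threshold):
          """Return the time of the first sample where the signal exceeds the threshold."""
          return t[np.argmax(signal > threshold)]

      fs = 1.0e6                         # 1 MHz sampling (illustrative)
      t = np.arange(0, 0.01, 1.0 / fs)
      d = 2.0                            # distance between the two laser screens, metres
      v_true = 1000.0                    # m/s, used only to synthesize the test pulses
      sig1 = np.where(np.abs(t - 0.002) < 5e-6, 1.0, 0.02)                 # pulse at screen 1
      sig2 = np.where(np.abs(t - (0.002 + d / v_true)) < 5e-6, 1.0, 0.02)  # pulse at screen 2
      dt = crossing_time(sig2, t, 0.5) - crossing_time(sig1, t, 0.5)
      print(round(d / dt, 1), "m/s")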

  20. Computer Based Virtual Field Trips.

    ERIC Educational Resources Information Center

    Clark, Kenneth F.; Hosticka, Alice; Schriver, Martha; Bedell, Jackie

    This paper discusses computer based virtual field trips that use technologies commonly found in public schools in the United States. The discussion focuses on the advantages of both using and creating these field trips for an instructional situation. A virtual field trip to Cumberland Island National Seashore, St. Marys, Georgia is used as a point…

  1. An Adjoint-based Method for the Inversion of the Juno and Cassini Gravity Measurements into Wind Fields

    NASA Astrophysics Data System (ADS)

    Galanti, Eli; Kaspi, Yohai

    2016-04-01

    During 2016-17, the Juno and Cassini spacecraft will both perform close eccentric orbits of Jupiter and Saturn, respectively, obtaining high-precision gravity measurements for these planets. These data will be used to estimate the depth of the observed surface flows on these planets. All models to date, relating the winds to the gravity field, have been in the forward direction, thus only allowing the calculation of the gravity field from given wind models. However, there is a need to do the inverse problem since the new observations will be of the gravity field. Here, an inverse dynamical model is developed to relate the expected measurable gravity field, to perturbations of the density and wind fields, and therefore to the observed cloud-level winds. In order to invert the gravity field into the 3D circulation, an adjoint model is constructed for the dynamical model, thus allowing backward integration. This tool is used for the examination of various scenarios, simulating cases in which the depth of the wind depends on latitude. We show that it is possible to use the gravity measurements to derive the depth of the winds, both on Jupiter and Saturn, also taking into account measurement errors. Calculating the solution uncertainties, we show that the wind depth can be determined more precisely in the low-to-mid-latitudes. In addition, the gravitational moments are found to be particularly sensitive to flows at the equatorial intermediate depths. Therefore, we expect that if deep winds exist on these planets they will have a measurable signature by Juno and Cassini.

  2. New Methods of Magnetic Field Measurements

    NASA Astrophysics Data System (ADS)

    Kholtygin, A. F.

    2015-04-01

    The standard methods of magnetic field measurements, based on the relation between the Stokes V parameter and the first derivative of the line profile intensity were modified by applying a linear integral transform to both sides of this relation. We used the wavelet integral transform with the DOG wavelets. The key advantage of the proposed method is the effective suppression of the noise contribution both to the line profile and the Stokes V parameter. To test the proposed method, spectropolarimetric observations of the young O star θ1 Ori C were used. We also demonstrate that the smoothed Time Variation Spectra (smTVS) can be used as a tool for detecting the local stellar magnetic fields.

  3. A field-based method to derive macroinvertebrate benchmark for specific conductivity adapted for small data sets and demonstrated in the Hun-Tai River Basin, Northeast China.

    PubMed

    Zhao, Qian; Jia, Xiaobo; Xia, Rui; Lin, Jianing; Zhang, Yuan

    2016-09-01

    Ionic mixtures, measured as specific conductivity, have raised increasing concern because of their toxicity to aquatic organisms. However, identifying protective values of specific conductivity for aquatic organisms is challenging given that laboratory test systems cannot examine the more salt-intolerant species or effects occurring in streams. Large data sets used for deriving field-based benchmarks are rarely available. In this study, a field-based method for small data sets was used to derive a specific conductivity benchmark, which is expected to prevent the extirpation of 95% of local taxa from circum-neutral to alkaline waters dominated by a mixture of SO4(2-) and HCO3(-) anions and other dissolved ions. To compensate for the smaller sample size, species-level analyses were combined with genus-level analyses. The benchmark is based on extirpation concentration (XC95) values of specific conductivity for 60 macroinvertebrate genera estimated from 296 sampling sites in the Hun-Tai River Basin. We derived the specific conductivity benchmark by using a 2-point interpolation method, which yielded a benchmark of 249 μS/cm. Our study tailored the method developed by the USEPA for deriving aquatic-life benchmarks for specific conductivity to basin-scale application, and may provide useful information for water pollution control and management. PMID:27389551
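
    A small sketch of the 2-point interpolation described above (the XC95 values are hypothetical): the genus-level XC95 values are ranked, the cumulative proportion of genera extirpated is computed, and the benchmark is read off at the 5th percentile by linear interpolation between the two bracketing points.

      # HC05-style benchmark sketch: 5th percentile of ranked genus XC95 values by 2-point interpolation.
      import numpy as np

      def benchmark_hc05(xc95_values, p=0.05):
          x = np.sort(np.asarray(xc95_values, dtype=float))
          n = x.size
          props = np.arange(1, n + 1) / n                 # cumulative proportion of genera extirpated
          return float(np.interp(p, props, x))            # linear interpolation between bracketing points

      xc95 = np.random.default_rng(5).lognormal(mean=6.0, sigma=0.5, size=60)  # hypothetical, in uS/cm
      print(round(benchmark_hc05(xc95), 1), "uS/cm")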

  4. Human exposure assessment in the near field of GSM base-station antennas using a hybrid finite element/method of moments technique.

    PubMed

    Meyer, Frans J C; Davidson, David B; Jakobus, Ulrich; Stuchly, Maria A

    2003-02-01

    A hybrid finite-element method (FEM)/method of moments (MoM) technique is employed for specific absorption rate (SAR) calculations in a human phantom in the near field of a typical group special mobile (GSM) base-station antenna. The MoM is used to model the metallic surfaces and wires of the base-station antenna, and the FEM is used to model the heterogeneous human phantom. The advantages of each of these frequency domain techniques are, thus, exploited, leading to a highly efficient and robust numerical method for addressing this type of bioelectromagnetic problem. The basic mathematical formulation of the hybrid technique is presented. This is followed by a discussion of important implementation details-in particular, the linear algebra routines for sparse, complex FEM matrices combined with dense MoM matrices. The implementation is validated by comparing results to MoM (surface equivalence principle implementation) and finite-difference time-domain (FDTD) solutions of human exposure problems. A comparison of the computational efficiency of the different techniques is presented. The FEM/MoM implementation is then used for whole-body and critical-organ SAR calculations in a phantom at different positions in the near field of a base-station antenna. This problem cannot, in general, be solved using the MoM or FDTD due to computational limitations. This paper shows that the specific hybrid FEM/MoM implementation is an efficient numerical tool for accurate assessment of human exposure in the near field of base-station antennas. PMID:12665036

  5. An improved method for estimation of Jupiter's gravity field using the Juno expected measurements, a trajectory estimation model, and an adjoint based thermal wind model

    NASA Astrophysics Data System (ADS)

    Galanti, E.; Finocchiaro, S.; Kaspi, Y.; Iess, L.

    2013-12-01

    The upcoming high precision measurements of the Juno flybys around Jupiter, have the potential of improving the estimation of Jupiter's gravity field. The analysis of the Juno Doppler data will provide a very accurate reconstruction of spatial gravity variations, but these measurements will be over a limited latitudinal and longitudinal range. In order to deduce the full gravity field of Jupiter, additional information needs to be incorporated into the analysis, especially with regards to the Jovian wind structure and its depth at high latitudes. In this work we propose a new iterative method for the estimation of Jupiter's gravity field, using the Juno expected measurements, a trajectory estimation model, and an adjoint based inverse thermal wind model. Beginning with an artificial gravitational field, the trajectory estimation model together with an optimization procedure is used to obtain an initial solution of the gravitational moments. As upper limit constraints, the model applies the gravity harmonics obtained from a thermal wind model in which the winds are assumed to penetrate barotropically along the direction of the spin axis. The solution from the trajectory model is then used as an initial guess for the thermal wind model, and together with an adjoint optimization method, the optimal penetration depth of the winds is computed. As a final step, the gravity harmonics solution from the thermal wind model is given back to the trajectory model, along with an uncertainty estimate, to be used as constraints for a new calculation of the gravity field. We test this method for several cases, some with zonal harmonics only, and some with the full gravity field including longitudinal variations that include the tesseral harmonics as well. The results show that using this method some of the gravitational moments are fitted better to the 'observed' ones, mainly due to the fact that the thermal wind model is taking into consideration the wind structure and depth

  6. Apparatuses and methods for generating electric fields

    SciTech Connect

    Scott, Jill R; McJunkin, Timothy R; Tremblay, Paul L

    2013-08-06

    Apparatuses and methods relating to generating an electric field are disclosed. An electric field generator may include a semiconductive material configured in a physical shape substantially different from a shape of an electric field to be generated thereby. The electric field is generated when a voltage drop exists across the semiconductive material. A method for generating an electric field may include applying a voltage to a shaped semiconductive material to generate a complex, substantially nonlinear electric field. The shape of the complex, substantially nonlinear electric field may be configured for directing charged particles to a desired location. Other apparatuses and methods are disclosed.

  7. Numerical evolutions of fields on the 2-sphere using a spectral method based on spin-weighted spherical harmonics

    NASA Astrophysics Data System (ADS)

    Beyer, Florian; Daszuta, Boris; Frauendiener, Jörg; Whale, Ben

    2014-04-01

    Many applications in science call for the numerical simulation of systems on manifolds with spherical topology. Through the use of integer spin-weighted spherical harmonics, we present a method which allows for the implementation of arbitrary tensorial evolution equations. Our method combines two numerical techniques that were originally developed with different applications in mind. The first is Huffenberger and Wandelt’s spectral decomposition algorithm to perform the mapping from physical to spectral space. The second is the application of Luscombe and Luban’s method, to convert numerically divergent linear recursions into stable nonlinear recursions, to the calculation of reduced Wigner d-functions. We give a detailed discussion of the theory and numerical implementation of our algorithm. The properties of our method are investigated by solving the scalar and vectorial advection equation on the sphere, as well as the 2 + 1 Maxwell equations on a deformed sphere.

  8. Third-order aberrations in GRIN crystalline lens: A new method based on axial and field rays

    PubMed Central

    Río, Arturo Díaz del; Gómez-Reino, Carlos; Flores-Arias, M. Teresa

    2014-01-01

    This paper presents a new procedure for calculating the third-order aberrations of gradient-index (GRIN) lenses that combines an iterative numerical method with the Hamiltonian theory of aberrations, formulated in terms of two paraxial rays with boundary conditions on general curved end surfaces, followed by a second algebraic step. Application of this new method to a GRIN human lens is analyzed in the framework of the bi-elliptical model. The different third-order aberrations are determined, except those whose calculation requires skew rays, since the study considers only meridional rays. PMID:25444647

  9. Third-order aberrations in GRIN crystalline lens: a new method based on axial and field rays.

    PubMed

    Río, Arturo Díaz Del; Gómez-Reino, Carlos; Flores-Arias, M Teresa

    2015-01-01

    This paper presents a new procedure for calculating the third-order aberrations of gradient-index (GRIN) lenses that combines an iterative numerical method with the Hamiltonian theory of aberrations, formulated in terms of two paraxial rays with boundary conditions on general curved end surfaces, followed by a second algebraic step. Application of this new method to a GRIN human lens is analyzed in the framework of the bi-elliptical model. The different third-order aberrations are determined, except those whose calculation requires skew rays, since the study considers only meridional rays. PMID:25444647

  10. Coupling the Phase Field Method for diffusive transformations with dislocation density-based crystal plasticity: Application to Ni-based superalloys

    NASA Astrophysics Data System (ADS)

    Cottura, M.; Appolaire, B.; Finel, A.; Le Bouar, Y.

    2016-09-01

    A phase field model is coupled to strain gradient crystal plasticity based on dislocation densities. The resulting model includes anisotropic plasticity and the size-dependence of plastic activity, required when plasticity is confined in regions below a few microns in size. These two features are important for handling microstructure evolutions during diffusive phase transformations that involve plastic deformation occurring in confined areas such as Ni-based superalloys undergoing rafting. The model also uses a storage-recovery law for the evolution of the dislocation density of each glide system and a hardening matrix to account for the short-range interactions between dislocations. First, it is shown that the unstable modes during the morphological destabilization of a growing misfitting circular precipitate are selected by the anisotropy of plasticity. Then, the rafting of γ′ precipitates in a Ni-based superalloy is investigated during [100] creep loadings. Our model includes most of the important physical phenomena at play during the microstructure evolution, such as the presence of different crystallographic γ′ variants, their misfit with the γ matrix, the elastic inhomogeneity and anisotropy, the hardening, anisotropy and viscosity of plasticity. In agreement with experiments, the model predicts that rafting proceeds perpendicularly to the tensile loading axis, and it is shown that plasticity significantly slows down the evolution of the rafts.

  11. A new laser vibrometry-based 2D selective intensity method for source identification in reverberant fields: part I. Development of the technique and preliminary validation

    NASA Astrophysics Data System (ADS)

    Revel, G. M.; Martarelli, M.; Chiariotti, P.

    2010-07-01

    The selective intensity technique is a powerful tool for the localization of acoustic sources and for the identification of the structural contribution to the acoustic emission. In practice, the selective intensity method is based on simultaneous measurements of acoustic intensity, by means of a couple of matched microphones, and structural vibration of the emitting object. In this paper high spatial density multi-point vibration data, acquired by using a scanning laser Doppler vibrometer, have been used for the first time. Therefore, by applying the selective intensity algorithm, the contribution of a large number of structural sources to the acoustic field radiated by the vibrating object can be estimated. The selective intensity represents the distribution of the acoustic monopole sources on the emitting surface, as if each monopole acted separately from the others. This innovative selective intensity approach can be very helpful when the measurement is performed on large panels in highly reverberating environments, such as aircraft cabins. In this case the separation of the direct acoustic field (radiated by the vibrating panels of the fuselage) and the reverberant one is difficult by traditional techniques. The first aim of this work is to develop and validate the technique in reverberating environments where the location and the quantification of each source are difficult by traditional techniques. The reverberant field is clearly challenging also for the proposed technique, affecting the achievable accuracy, mainly due to the fact that coherence between radiated and reverberated fields is often unknown and may be relevant. Secondly, the applicability of the method to real cases is demonstrated. A laboratory test case has been developed using a large wooden panel. The measurement is performed both in anechoic environment and under simulated reverberating conditions, for testing the ability of the selective intensity method to remove the reverberation.

  12. Development of an emission factor for ammonia emissions from US swine farms based on field tests and application of a mass balance method

    NASA Astrophysics Data System (ADS)

    Doorn, M. R. J.; Natschke, D. F.; Thorneloe, S. A.; Southerland, J.

    This paper discusses and summarizes post-1994 US and European information on ammonia (NH3) emissions from swine farms and assesses the applicability for general use in the United States. The emission rates for the houses calculated by various methods show good agreement and suggest that the houses are a more significant source than previously thought. A general emission factor for houses of 3.7±1.0 kg NH3/year/finisher pig or 59±10 g NH3/kg live weight/year is recommended. For lagoons, it was found that there is good similarity between the field test results and the number calculated by a mass balance method. The suggested annual NH3 emission factor for lagoons based on field tests at one swine farm lagoon in North Carolina is 2.4 kg/year/pig. Emission rates from sprayfields were estimated using a total mass balance approach, while subtracting the house and lagoon emissions. The total emission rates for finishing pigs at the test farm compared well to the total rate established by a mass balance approach based on nitrogen intake and volatilization. Therefore, it was concluded that a mass balance approach can be helpful in estimating NH3 emissions from swine farms. A general emission factor of 7±2 kg NH3/pig/year could be developed, which is comparable to general European emission factors, which varied from 4.8 to 6.4 kg NH3/pig/year.

  13. Brain source localization: A new method based on MUltiple SIgnal Classification algorithm and spatial sparsity of the field signal for electroencephalogram measurements

    NASA Astrophysics Data System (ADS)

    Vergallo, P.; Lay-Ekuakille, A.

    2013-08-01

    Brain activity can be recorded by means of EEG (electroencephalogram) electrodes placed on the scalp of the patient. The EEG reflects the activity of groups of neurons located in the head, and the fundamental problem in neurophysiology is the identification of the sources responsible for brain activity, which becomes especially important when a seizure occurs. The studies conducted to formalize the relationship between the electromagnetic activity in the head and the recording of the generated external field allow patterns of brain activity to be characterized. The inverse problem, i.e., determining the underlying sources given the field sampled at the different electrodes, is more difficult because it may not have a unique solution, or the search for the solution is hampered by a low spatial resolution that may not allow activities involving sources close to each other to be distinguished. Thus, sources of interest may be obscured or go undetected, and a well-known source localization method such as MUSIC (MUltiple SIgnal Classification) can fail. Many advanced source localization techniques achieve better resolution by exploiting sparsity: if the number of sources is small, the neural power as a function of location is sparse. In this work a solution based on the spatial sparsity of the field signal is presented and analyzed to improve the MUSIC method. For this purpose, it is necessary to provide a priori information about the sparsity of the signal. The problem is formulated and solved using a regularization method such as Tikhonov regularization, which computes a solution as the best compromise between two cost functions to be minimized, one related to fitting the data and another enforcing the sparsity of the signal. First, the method is tested on simulated EEG signals obtained by solving the forward problem. For the model considered for the head and brain sources, the result obtained allows to
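
    A minimal sketch of the Tikhonov-regularized inverse step described above (the lead-field matrix, regularization weight and dimensions are placeholders, and a true sparsity-promoting penalty would replace the quadratic term used here): the source amplitudes minimize a data-fit term plus a penalty term, which in the quadratic case has the closed form x = (A^T A + lam*I)^{-1} A^T b.

      # Tikhonov-regularized EEG source estimate sketch: x = argmin ||A x - b||^2 + lam ||x||^2.
      import numpy as np

      def tikhonov(A, b, lam):
          n = A.shape[1]
          return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

      rng = np.random.default_rng(6)
      n_electrodes, n_sources = 32, 200
      A = rng.normal(size=(n_electrodes, n_sources))   # placeholder lead-field (forward) matrix
      x_true = np.zeros(n_sources)
      x_true[[40, 120]] = [1.0, -0.7]                  # two active sources (sparse ground truth)
      b = A @ x_true + 0.01 * rng.normal(size=n_electrodes)
      x_hat = tikhonov(A, b, lam=1.0)
      print(np.argsort(np.abs(x_hat))[-2:])            # indices of the two strongest estimated sources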

  14. Developing new extension of GafChromic RTQA2 film to patient quality assurance field using a plan-based calibration method

    NASA Astrophysics Data System (ADS)

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang

    2015-10-01

    GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to the film relative dose image. The dose agreement between calculated and film dose images were analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested using this method. Moreover, three MLC gap errors and two MLC transmission errors were introduced into the eight RapidArc cases, respectively, to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to get a good 2D relative dose calibration result. The mean gamma passing rate of eight patients was 97.90% ± 1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, which means some dose error in the film would be falsely corrected to keep the dose in film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative curve of the dose calibration curve would be non-monotonic, which would expose the dose abnormality. By using the PBC method, we extended the application of more economical RTQA2 film to patient specific QA. The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the
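
    A minimal sketch of the plan-based calibration logic described above (the array shapes, the polynomial form and the synthetic film response are illustrative, not the authors' exact model): paired (calculated dose, film pixel value) samples are taken from the registered images, a smooth dose-versus-pixel-value curve is fitted, the monotonicity of its derivative is checked as a sanity test, and the curve then maps the film grayscale image to a relative dose image.

      # Plan-based calibration sketch: build a dose-vs-pixel-value curve from registered image pairs.
      import numpy as np

      def fit_calibration(calc_dose, film_pixel, degree=2):
          """Fit dose as a polynomial of pixel value from co-registered images (flattened samples)."""
          coeffs = np.polyfit(film_pixel.ravel(), calc_dose.ravel(), degree)
          return np.poly1d(coeffs)

      def derivative_is_monotonic(cal, pv_min, pv_max, n=200):
          """Check monotonicity of the calibration-curve derivative (non-monotonic -> suspect error)."""
          deriv = cal.deriv()(np.linspace(pv_min, pv_max, n))
          return bool(np.all(np.diff(deriv) <= 0) or np.all(np.diff(deriv) >= 0))

      rng = np.random.default_rng(7)
      calc_dose = rng.uniform(0.0, 2.0, size=(64, 64))                    # Gy, from the planning system
      film_pixel = 200.0 - 60.0 * calc_dose + rng.normal(0, 1, (64, 64))  # darker film at higher dose
      cal = fit_calibration(calc_dose, film_pixel)
      film_dose = cal(film_pixel)                                         # relative dose image from the film
      print(derivative_is_monotonic(cal, film_pixel.min(), film_pixel.max()))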

  15. Developing new extension of GafChromic RTQA2 film to patient quality assurance field using a plan-based calibration method.

    PubMed

    Peng, Jiayuan; Zhang, Zhen; Wang, Jiazhou; Xie, Jiang; Chen, Junchao; Hu, Weigang

    2015-10-01

    GafChromic RTQA2 film is a type of radiochromic film designed for light field and radiation field alignment. The aim of this study is to extend the application of RTQA2 film to the measurement of patient specific quality assurance (QA) fields as a 2D relative dosimeter. Pre-irradiated and post-irradiated RTQA2 films were scanned in reflection mode using a flatbed scanner. A plan-based calibration (PBC) method utilized the mapping information of the calculated dose image and film grayscale image to create a dose versus pixel value calibration model. This model was used to calibrate the film grayscale image to the film relative dose image. The dose agreement between calculated and film dose images were analyzed by gamma analysis. To evaluate the feasibility of this method, eight clinically approved RapidArc cases (one abdomen cancer and seven head-and-neck cancer patients) were tested using this method. Moreover, three MLC gap errors and two MLC transmission errors were introduced to eight RapidArc cases respectively to test the robustness of this method. The PBC method could overcome the film lot and post-exposure time variations of RTQA2 film to get a good 2D relative dose calibration result. The mean gamma passing rate of eight patients was 97.90% ± 1.7%, which showed good dose consistency between calculated and film dose images. In the error test, the PBC method could over-calibrate the film, which means some dose error in the film would be falsely corrected to keep the dose in film consistent with the dose in the calculated dose image. This would then lead to a false negative result in the gamma analysis. In these cases, the derivative curve of the dose calibration curve would be non-monotonic which would expose the dose abnormality. By using the PBC method, we extended the application of more economical RTQA2 film to patient specific QA. The robustness of the PBC method has been improved by analyzing the monotonicity of the derivative of the calibration

  16. Study on the installation method for the long-term observatory based on the field test during Chikyu Expedition 319

    NASA Astrophysics Data System (ADS)

    Kitada, K.; Araki, E.; Kimura, T.; Saffer, D. M.; Byrne, T.; McNeill, L. C.; Toczko, S.; Eguchi, N. O.; Takahashi, K.

    2009-12-01

    environmental conditions, such as shock, acceleration, and vibration during installation; and (2) to confirm sensor installation operational procedures, such as onboard assembly of the sensor tree, ship maneuvers to reenter the sensor tree, and entry into the hole. Acceleration and tilt data were recorded at 500 Hz and recovered after the dummy run test. Preliminary results from vibration analysis show that strong vibration due to the high Kuroshio Current (~5 knots) occurred during the test. Spectral analysis of the collected acceleration data reveals the drill pipe vibration and the resonance of the instrument carrier. The resonance was much larger than the drill pipe vibration, and its magnitude may depend on the structure of the instrument carrier. Preliminary results on the vibration mode and its amplitude, and comparisons with current speed and direction, ship's speed and depth of the sensor assembly are also shown to elucidate the cause of the vibration. These results give us an opportunity to establish installation methods, and to develop and refine sensors for the future long-term observatory emplacement.

  17. Study on copper phthalocyanine and perylene-based ambipolar organic light-emitting field-effect transistors produced using neutral beam deposition method

    SciTech Connect

    Kim, Dae-Kyu; Oh, Jeong-Do; Shin, Eun-Sol; Seo, Hoon-Seok; Choi, Jong-Ho

    2014-04-28

    The neutral cluster beam deposition (NCBD) method has been applied to the production and characterization of ambipolar, heterojunction-based organic light-emitting field-effect transistors (OLEFETs) with a top-contact, multi-digitated, long-channel geometry. Organic thin films of n-type N,N′-ditridecylperylene-3,4,9,10-tetracarboxylic diimide and p-type copper phthalocyanine were successively deposited on the hydroxyl-free polymethyl-methacrylate (PMMA)-coated SiO2 dielectrics using the NCBD method. Characterization of the morphological and structural properties of the organic active layers was performed using atomic force microscopy and X-ray diffraction. Various device parameters such as hole- and electron-carrier mobilities, threshold voltages, and electroluminescence (EL) were derived from the fits of the observed current-voltage and current-voltage-light emission characteristics of OLEFETs. The OLEFETs demonstrated good field-effect characteristics, well-balanced ambipolarity, and substantial EL under ambient conditions. The device performance, which is strongly correlated with the surface morphology and the structural properties of the organic active layers, is discussed along with the operating conduction mechanism.

  18. Primary combination of phase-field and discrete dislocation dynamics methods for investigating athermal plastic deformation in various realistic Ni-base single crystal superalloy microstructures

    NASA Astrophysics Data System (ADS)

    Gao, Siwen; Rajendran, Mohan Kumar; Fivel, Marc; Ma, Anxin; Shchyglo, Oleg; Hartmaier, Alexander; Steinbach, Ingo

    2015-10-01

    Three-dimensional discrete dislocation dynamics (DDD) simulations in combination with the phase-field method are performed to investigate the influence of different realistic Ni-base single crystal superalloy microstructures with the same volume fraction of γ′ precipitates on plastic deformation at room temperature. The phase-field method is used to generate realistic microstructures as the boundary conditions for DDD simulations in which a constant high uniaxial tensile load is applied along different crystallographic directions. In addition, the lattice mismatch between the γ and γ′ phases is taken into account as a source of internal stresses. Due to the high antiphase boundary energy and the rare formation of superdislocations, precipitate cutting is not observed in the present simulations. Therefore, the plastic deformation is mainly caused by dislocation motion in γ matrix channels. From a comparison of the macroscopic mechanical response and the dislocation evolution for different microstructures in each loading direction, we found that, for a given γ′ phase volume fraction, the optimal microstructure should possess narrow and homogeneous γ matrix channels.

  19. SU-E-J-246: A Deformation-Field Map Based Liver 4D CBCT Reconstruction Method Using Gold Nanoparticles as Constraints

    SciTech Connect

    Harris, W; Zhang, Y; Ren, L; Yin, F

    2014-06-01

    Purpose: To investigate the feasibility of using nanoparticle markers to validate liver tumor motion together with a deformation field map-based four dimensional (4D) cone-beam computed tomography (CBCT) reconstruction method. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In this method, each phase of the 4D-CBCT is considered as a deformation of a prior CT volume. The DFM is solved by a motion modeling and free-form deformation (MM-FD) technique, using a data fidelity constraint and the deformation energy minimization. For liver imaging, the contrast of a liver tumor in on-board projections is low. A validation of liver tumor motion using implanted gold nanoparticles, along with the MM-FD deformation technique, is implemented to reconstruct on-board 4D-CBCT liver radiotherapy images. These nanoparticles were placed around the liver tumor to reflect the tumor positions in both CT simulation and on-board image acquisition. When reconstructing each phase of the 4D-CBCT, the migrations of the gold nanoparticles act as a constraint to regularize the deformation field, along with the data fidelity and the energy minimization constraints. In this study, multiple tumor diameters and positions were simulated within the liver for on-board 4D-CBCT imaging. The on-board 4D-CBCT reconstructed by the proposed method was compared with the “ground truth” image. Results: The preliminary data, based on reconstructions for lung radiotherapy, suggest that the advanced reconstruction algorithm including the gold nanoparticle constraint will result in volume percentage differences (VPD) between lesions in reconstructed images by MM-FD and “ground truth” on-board images of 11.5% (± 9.4%) and a center of mass shift of 1.3 mm (± 1.3 mm) for liver radiotherapy. Conclusion: The advanced MM-FD technique, enforcing the additional constraints from gold nanoparticles, results in improved accuracy

  20. Strongyloides stercoralis: a field-based survey of mothers and their preschool children using ELISA, Baermann and Koga plate methods reveals low endemicity in western Uganda.

    PubMed

    Stothard, J R; Pleasant, J; Oguttu, D; Adriko, M; Galimaka, R; Ruggiana, A; Kazibwe, F; Kabatereine, N B

    2008-09-01

    To ascertain the current status of strongyloidiasis in mothers and their preschool children, a field-based survey was conducted in western Uganda using a combination of diagnostic methods: ELISA, Baermann concentration and Koga agar plate. The prevalence of other soil-transmitted helminthiasis and intestinal schistosomiasis were also determined. In total, 158 mothers and 143 children were examined from five villages within Kabale, Hoima and Masindi districts. In mothers and children, the general prevalence of strongyloidiasis inferred by ELISA was approximately 4% and approximately 2%, respectively. Using the Baermann concentration method, two parasitologically proven cases were encountered in an unrelated mother and child, both of whom were sero-negative for strongyloidiasis. No infections were detected by Koga agar plate method. The general level of awareness of strongyloidiasis was very poor ( < 5%) in comparison to schistosomiasis (51%) and ascariasis (36%). Strongyloidiasis is presently at insufficient levels to justify inclusion within a community treatment programme targeting maternal and child health. Better epidemiological screening is needed, however, especially identifying infections in HIV-positive women of childbearing age. In the rural clinic setting, further use of the Baermann concentration method would appear to be the most immediate and pragmatic option for disease diagnosis. PMID:18416881

  1. Assessment of real-time PCR based methods for quantification of pollen-mediated gene flow from GM to conventional maize in a field study.

    PubMed

    Pla, Maria; La Paz, José-Luis; Peñas, Gisela; García, Nora; Palaudelmàs, Montserrat; Esteve, Teresa; Messeguer, Joaquima; Melé, Enric

    2006-04-01

    Maize is one of the main crops worldwide, and an increasing number of genetically modified (GM) maize varieties are cultivated and commercialized in many countries in parallel with conventional crops. Given the labeling rules established e.g. in the European Union and the necessary coexistence between GM and non-GM crops, it is important to determine the extent of pollen dissemination from transgenic maize to other cultivars under field conditions. The most widely used methods for quantitative detection of GMO are based on real-time PCR, which implies that the results are expressed as genome percentages (in contrast to seed or grain percentages). Our objective was to assess the ability of real-time PCR based assays to accurately quantify the content of transgenic grains in non-GM fields, in comparison with the real cross-fertilization rate as determined by phenotypic analysis. We performed this study in a region where both GM and conventional maize are normally cultivated and used the predominant transgenic maize Mon810 in combination with a conventional maize variety that has white grains (therefore allowing cross-pollination to be quantified as the percentage of yellow grains). Our results indicated an excellent correlation between real-time PCR results and the number of cross-fertilized grains at Mon810 levels of 0.1-10%. In contrast, the Mon810 percentage estimated by weight of grains produced less accurate results. Finally, we present and discuss the pattern of pollen-mediated gene flow from GM to conventional maize in an example case under field conditions. PMID:16604462
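    The distinction drawn above between genome percentages and grain percentages is worth making concrete. The following arithmetic sketch uses made-up counts; it is not data from the study.

```python
# Illustrative arithmetic only (made-up counts): phenotypic cross-fertilization rate
# from grain colour versus a qPCR-style genome percentage on the same sample.
yellow_grains, total_grains = 37, 5000      # yellow = cross-fertilized grains on white-grain ears
grain_pct = 100.0 * yellow_grains / total_grains
print(f"cross-fertilized grains: {grain_pct:.2f} %")

# A real-time PCR assay instead reports the percentage of transgenic genome copies in
# the bulked DNA, which need not equal the grain percentage, since cross-fertilized
# kernels carry the transgene in only part of their genome and per-kernel DNA yields vary.
```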

  2. Use of Finite-Difference Time-Domain Method with an Anatomically Based Model of a Human for Exposures to Far and Near Fields and Electromagnetic Pulse

    NASA Astrophysics Data System (ADS)

    Chen, Jinyuan

    The three-dimensional finite-difference time-domain (FDTD) method has been used to calculate local, layer-averaged and whole-body averaged specific absorption rates (SARs) and internal radio-frequency (RF) currents in an anatomically based model of a human for plane-wave (far-field) exposures from 20 to 100 MHz and for spatially variable electromagnetic fields of a parallel-plate applicator representative of RF dielectric heaters used in industry (near-field). The calculated results are in agreement with the experimental data of Hill and others. While the existence of large foot currents has been known previously, substantial RF currents (600-800 mA) induced over much of the body are obtained for E-polarized fields at the levels suggested in the 1982 ANSI RF safety guideline. The FDTD method has also been used for simulating an Annular Phased Array (APA) of dipole antennas for hyperthermia of deep-seated tumors. Anatomically based models of two different regions of the human body (14,417 and 13,133 cells) were used to calculate the SAR distributions with a resolution of 1.31 cm. Annular phased arrays of eight dipole antennas couple to the human body through either a homogeneous or a tapered water bolus, with air assumed outside the ring of dipoles. The objective of the calculations was to focus the energy on a couple of assumed tumor sites in the liver or the prostate. The geometrical optics approximation and the principle of focused arrays were used to estimate the phases of the individual dipoles to focus the electromagnetic energy into the tumor and its surroundings. Considerably focused power distributions with SARs on the order of 100 W/kg for input powers of 400-700 W have been obtained for assumed tumor sites in the liver and the prostate using tapered boluses and optimized magnitudes and phases of power to the various dipoles. Lastly, the FDTD technique is used to calculate the internal fields and the induced current densities in anatomically based models of a human using 5
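    The FDTD scheme referred to above advances the electric and magnetic fields on a staggered grid with a leapfrog update. A minimal one-dimensional vacuum sketch is given below; it only illustrates the update structure and makes no attempt at the anatomically based tissue grid, SAR evaluation, or antenna sources used in the study.

```python
import numpy as np

# Minimal 1-D FDTD leapfrog update in vacuum (illustrative sketch only).
nx, nt = 400, 600
c, dx = 3.0e8, 1.0e-3
dt = dx / (2 * c)                       # satisfies the 1-D Courant condition
eps0, mu0 = 8.854e-12, 4e-7 * np.pi
ez = np.zeros(nx)                       # electric field nodes
hy = np.zeros(nx - 1)                   # magnetic field on the staggered grid

for n in range(nt):
    hy += dt / (mu0 * dx) * (ez[1:] - ez[:-1])          # update H from the curl of E
    ez[1:-1] += dt / (eps0 * dx) * (hy[1:] - hy[:-1])   # update E from the curl of H
    ez[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)      # soft Gaussian source
```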

  3. Low Field Squid MRI Devices, Components and Methods

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor); Hahn, Inseob (Inventor)

    2013-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  4. Low field SQUID MRI devices, components and methods

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin I. (Inventor); Eom, Byeong H (Inventor); Hahn, Inseob (Inventor)

    2010-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  5. Low Field Squid MRI Devices, Components and Methods

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor); Hahn, Inseob (Inventor)

    2014-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  6. Low field SQUID MRI devices, components and methods

    NASA Technical Reports Server (NTRS)

    Penanen, Konstantin I. (Inventor); Eom, Byeong H. (Inventor); Hahn, Inseob (Inventor)

    2011-01-01

    Low field SQUID MRI devices, components and methods are disclosed. They include a portable low field (SQUID)-based MRI instrument and a portable low field SQUID-based MRI system to be operated under a bed where a subject is adapted to be located. Also disclosed is a method of distributing wires on an image encoding coil system adapted to be used with an NMR or MRI device for analyzing a sample or subject and a second order superconducting gradiometer adapted to be used with a low field SQUID-based MRI device as a sensing component for an MRI signal related to a subject or sample.

  7. A comparative study of spin coated and floating film transfer method coated poly (3-hexylthiophene)/poly (3-hexylthiophene)-nanofibers based field effect transistors

    NASA Astrophysics Data System (ADS)

    Tiwari, Shashi; Takashima, Wataru; Nagamatsu, S.; Balasubramanian, S. K.; Prakash, Rajiv

    2014-09-01

    A comparative study of the electrical performance, optical properties, and surface morphology of poly(3-hexylthiophene) (P3HT) and P3HT-nanofiber based "normally on" type p-channel field effect transistors (FETs), fabricated by two different coating techniques, is reported here. The nanofibers are prepared in the laboratory by self-assembly of P3HT molecules in an appropriate solvent. P3HT (0.3 wt. %) and P3HT-nanofibers (˜0.25 wt. %) are used as semiconducting transport materials, deposited over the FET channel by spin coating as well as by our recently developed floating film transfer method (FTM). FETs fabricated using FTM show superior performance compared to spin-coated devices; however, the mobility of FTM-film based FETs is comparable to that of the spin-coated ones. The devices based on P3HT-nanofibers (using both techniques) show much better performance than the P3HT FETs. The best performance among all the fabricated organic field effect transistors is observed for the FTM-coated P3HT-nanofiber FETs. This improved performance of the nanofiber FETs is due to the ordering of the fibers, which provide excellent point-to-point charge transport. The optical properties and structural morphologies (P3HT and P3HT-nanofibers) are studied using a UV-visible absorption spectrophotometer and atomic force microscopy, respectively. The coating techniques and the effect of fiber formation on organic conductors provide guidance for the fabrication of organic devices with improved performance.

  8. A comparative study of spin coated and floating film transfer method coated poly (3-hexylthiophene)/poly (3-hexylthiophene)-nanofibers based field effect transistors

    SciTech Connect

    Tiwari, Shashi; Balasubramanian, S. K.; Takashima, Wataru; Nagamatsu, S.; Prakash, Rajiv

    2014-09-07

    A comparative study of the electrical performance, optical properties, and surface morphology of poly(3-hexylthiophene) (P3HT) and P3HT-nanofiber based “normally on” type p-channel field effect transistors (FETs), fabricated by two different coating techniques, is reported here. The nanofibers are prepared in the laboratory by self-assembly of P3HT molecules in an appropriate solvent. P3HT (0.3 wt. %) and P3HT-nanofibers (∼0.25 wt. %) are used as semiconducting transport materials, deposited over the FET channel by spin coating as well as by our recently developed floating film transfer method (FTM). FETs fabricated using FTM show superior performance compared to spin-coated devices; however, the mobility of FTM-film based FETs is comparable to that of the spin-coated ones. The devices based on P3HT-nanofibers (using both techniques) show much better performance than the P3HT FETs. The best performance among all the fabricated organic field effect transistors is observed for the FTM-coated P3HT-nanofiber FETs. This improved performance of the nanofiber FETs is due to the ordering of the fibers, which provide excellent point-to-point charge transport. The optical properties and structural morphologies (P3HT and P3HT-nanofibers) are studied using a UV-visible absorption spectrophotometer and atomic force microscopy, respectively. The coating techniques and the effect of fiber formation on organic conductors provide guidance for the fabrication of organic devices with improved performance.

  9. Magnetic space-based field measurements

    NASA Technical Reports Server (NTRS)

    Langel, R. A.

    1981-01-01

    Satellite measurements of the geomagnetic field began with the launch of Sputnik 3 in May 1958 and have continued sporadically in the intervening years. A list of spacecraft that have made significant contributions to an understanding of the near-earth geomagnetic field is presented. A new era in near-earth magnetic field measurements began with NASA's launch of Magsat in October 1979. Attention is given to geomagnetic field modeling, crustal magnetic anomaly studies, and investigations of the inner earth. It is concluded that satellite-based magnetic field measurements make global surveys practical for both field modeling and for the mapping of large-scale crustal anomalies. They are the only practical method of accurately modeling the global secular variation. Magsat is providing a significant contribution, both because of the timeliness of the survey and because its vector measurement capability represents an advance in the technology of such measurements.

  10. A new laser vibrometry-based 2D selective intensity method for source identification in reverberant fields: part II. Application to an aircraft cabin

    NASA Astrophysics Data System (ADS)

    Revel, G. M.; Martarelli, M.; Chiariotti, P.

    2010-07-01

    The selective intensity technique is a powerful tool for the localization of acoustic sources and for the identification of the structural contribution to the acoustic emission. In practice, the selective intensity method is based on simultaneous measurements of the acoustic intensity, by means of a pair of matched microphones, and of the structural vibration of the emitting object. In this paper, high spatial density multi-point vibration data, acquired using a scanning laser Doppler vibrometer, have been used for the first time. By applying the selective intensity algorithm, the contribution of a large number of structural sources to the acoustic field radiated by the vibrating object can therefore be estimated. The selective intensity represents the distribution of the acoustic monopole sources on the emitting surface, as if each monopole acted separately from the others. This innovative selective intensity approach can be very helpful when the measurement is performed on large panels in highly reverberant environments, such as aircraft cabins, where the separation of the direct acoustic field (radiated by the vibrating panels of the fuselage) from the reverberant one is difficult with traditional techniques. This work presents part of the results of the European project CREDO (Cabin Noise Reduction by Experimental and Numerical Design Optimization), carried out within the EU framework; its aim is to illustrate a real application of the method to the interior acoustic characterization of an Alenia Aeronautica ATR42 ground test facility, Alenia Aeronautica being a partner of the CREDO project.

  11. Historic Methods for Capturing Magnetic Field Images

    NASA Astrophysics Data System (ADS)

    Kwan, Alistair

    2016-03-01

    I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection processes.

  12. Historic Methods for Capturing Magnetic Field Images

    ERIC Educational Resources Information Center

    Kwan, Alistair

    2016-01-01

    I investigated two late 19th-century methods for capturing magnetic field images from iron filings for historical insight into the pedagogy of hands-on physics education methods, and to flesh out teaching and learning practicalities tacit in the historical record. Both methods offer opportunities for close sensory engagement in data-collection…

  13. Human Biology, A Guide to Field Methods.

    ERIC Educational Resources Information Center

    Weiner, J. S.; Lourie, J. A.

    The aim of this handbook is to provide, in a form suitable for use in the field, instructions on the whole range of methods required for the fulfillment of human biological studies on a comparative basis. Certain of these methods can be used to carry out the rapid surveys on growth, physique, and genetic constitution. They are also appropriate for…

  14. Uncertainties of the Gravity Recovery and Climate Experiment time-variable gravity-field solutions based on three-cornered hat method

    NASA Astrophysics Data System (ADS)

    Ferreira, Vagner G.; Montecino, Henry D. C.; Yakubu, Caleb I.; Heck, Bernhard

    2016-01-01

    Currently, various satellite processing centers produce extensive data, with different solutions of the same field being available. For instance, the Gravity Recovery and Climate Experiment (GRACE) has been monitoring terrestrial water storage (TWS) since April 2002, while the Center for Space Research (CSR), the Jet Propulsion Laboratory (JPL), the GeoForschungsZentrum (GFZ), and the Groupe de Recherche de Géodésie Spatiale (GRGS) provide individual monthly solutions in the form of Stokes coefficients. The inverted TWS maps (or the regionally averaged values) from these coefficients are being used in many applications; however, as no ground truth data exist, the uncertainties are unknown. Consequently, the purpose of this work is to assess the quality of each processing center by estimating their uncertainties using a generalized formulation of the three-cornered hat (TCH) method. Overall, the TCH results for the study period of August 2002 to June 2014 indicate that at a global scale, the CSR, GFZ, GRGS, and JPL presented uncertainties of 9.4, 13.7, 14.8, and 13.2 mm, respectively. At a basin scale, the overall good performance of the CSR was observed at 91 river basins. The TCH-based results were confirmed by a comparison with an ensemble solution from the four GRACE processing centers.
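    For three series of the same signal with mutually uncorrelated errors, the classical three-cornered hat estimate follows directly from the variances of the pairwise differences; the generalized formulation used in the paper extends this to more members. The sketch below shows the basic identity only and is not the authors' code; the synthetic series merely reuse the uncertainty magnitudes quoted above.

```python
import numpy as np

def three_cornered_hat(a, b, c):
    """Classical TCH: error variances of three series with uncorrelated errors."""
    v_ab = np.var(a - b, ddof=1)
    v_ac = np.var(a - c, ddof=1)
    v_bc = np.var(b - c, ddof=1)
    return (0.5 * (v_ab + v_ac - v_bc),      # variance of the errors in a
            0.5 * (v_ab + v_bc - v_ac),      # variance of the errors in b
            0.5 * (v_ac + v_bc - v_ab))      # variance of the errors in c

# Synthetic monthly TWS-like series sharing a common signal (noise sigmas in mm):
rng = np.random.default_rng(0)
signal = 50 * np.sin(np.linspace(0, 24 * np.pi, 144))
series = [signal + rng.normal(0, s, signal.size) for s in (9.4, 13.7, 14.8)]
print(three_cornered_hat(*series))           # approximately (9.4**2, 13.7**2, 14.8**2)
```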

  15. Soil Identification using Field Electrical Resistivity Method

    NASA Astrophysics Data System (ADS)

    Hazreek, Z. A. M.; Rosli, S.; Chitral, W. D.; Fauziah, A.; Azhar, A. T. S.; Aziman, M.; Ismail, B.

    2015-06-01

    Geotechnical site investigation, with particular reference to soil identification, is important in civil engineering works since it reports the soil conditions needed to relate the design and construction of the proposed works. In the past, the electrical resistivity method (ERM) has been widely used in soil characterization, but it has suffered from several 'black box' issues related to its results and interpretation. Hence, this study performed a field electrical resistivity survey using an ABEM SAS (4000) at two different types of soils (Gravelly SAND and Silty SAND) in order to examine the behavior of the electrical resistivity values (ERV) for the soil types studied. Basic soil physical properties were determined through density (ρ), moisture content (w) and particle size distribution (d) in order to verify the ERV obtained from each type of soil investigated. It was found that the ERV of the Gravelly SAND (278 Ωm & 285 Ωm) were slightly higher than those of the Silty SAND (223 Ωm & 199 Ωm), owing to the variable nature of soils. This finding shows that the results obtained from ERM need to be interpreted with strong supporting data, such as direct tests on laboratory soil samples. Furthermore, this study demonstrates that ERM can be established as an alternative tool in soil identification, provided it is verified through other relevant information such as geotechnical properties.

  16. A simple method based on laboratory inoculum and field inoculum for evaluating potato resistance to black scurf caused by Rhizoctonia solani

    PubMed Central

    Zhang, Xiao-Yu; Yu, Xiao-Xia; Yu, Zhuo; Xue, Yu-Feng; Qi, Li-Peng

    2014-01-01

    A two-step method was developed to evaluate potato resistance to black scurf caused by Rhizoctonia solani. Tuber-piece inoculation was first conducted in the laboratory; this approach is also reported here for the first time. After inoculation with pathogen discs and culture for 48 h, the necrotic spots generated on the inoculated potato pieces were measured by the crossing method. Further evaluation was conducted through field experiments using a wheat bran inoculum method, in which the wheat bran inoculum was dispersed into the planting pit around the seed tubers. Each cultivar or line was subjected to five treatments of 0-, 2-, 3-, 4-, and 5-g soil inoculum. The results showed that 2–4 g of wheat bran inoculum was the optimum for identifying tuber black scurf resistance. The laboratory scores correlated positively with the incidence and severity of black scurf in the field. According to the laboratory results, relatively resistant cultivars could be selected for further estimation of tuber black scurf resistance in field experiments. This is a practical and effective screening method for rapid identification of resistant potato germplasm, which can reduce the workload in the field and shorten the time required for identification. PMID:24987302

  17. Method for making field-structured memory materials

    DOEpatents

    Martin, James E.; Anderson, Robert A.; Tigges, Chris P.

    2002-01-01

    A method of forming a dual-level memory material using field-structured materials. The field-structured materials are formed from a dispersion of ferromagnetic particles in a polymerizable liquid medium, such as a urethane acrylate-based photopolymer, which are applied as a film to a support and then exposed in selected portions of the film to an applied magnetic or electric field. The field can be applied either uniaxially or biaxially at field strengths up to 150 G or higher to form the field-structured materials. After polymerizing the field-structured materials, a magnetic field can be applied to selected portions of the polymerized field-structured material to yield a dual-level memory material on the support, wherein the dual-level memory material supports read-and-write binary data memory and write-once, read-many memory.

  18. Field-theory methods in coagulation theory

    SciTech Connect

    Lushnikov, A. A.

    2011-08-15

    Coagulating systems are systems of chaotically moving particles that collide and coalesce, producing daughter particles of mass equal to the sum of the masses involved in the respective collision event. The present article puts forth basic ideas underlying the application of methods of quantum-field theory to the theory of coagulating systems. Instead of the generally accepted treatment based on the use of a standard kinetic equation that describes the time evolution of concentrations of particles consisting of a preset number of identical objects (monomers in the following), one introduces the probability W(Q, t) to find the system in some state Q at an instant t for a specific rate of transitions between various states. Each state Q is characterized by a set of occupation numbers Q = (n_1, n_2, ..., n_g, ...), where n_g is the total number of particles containing precisely g monomers. Thereupon, one introduces the generating functional Ψ for the probability W(Q, t). The time evolution of Ψ is described by an equation that is similar to the Schrödinger equation for a one-dimensional Bose field. This equation is solved exactly for transition rates proportional to the product of the masses of colliding particles. It is shown that, within a finite time interval, which is independent of the total mass of the entire system, a giant particle of mass about the mass of the entire system may appear in this system. The particle in question is unobservable in the thermodynamic limit, and this explains the well-known paradox of mass-concentration nonconservation in classical kinetic theory. The theory described in the present article is successfully applied in studying the time evolution of random graphs.

  19. Field-theory methods in coagulation theory

    NASA Astrophysics Data System (ADS)

    Lushnikov, A. A.

    2011-08-01

    Coagulating systems are systems of chaotically moving particles that collide and coalesce, producing daughter particles of mass equal to the sum of the masses involved in the respective collision event. The present article puts forth basic ideas underlying the application of methods of quantum-field theory to the theory of coagulating systems. Instead of the generally accepted treatment based on the use of a standard kinetic equation that describes the time evolution of concentrations of particles consisting of a preset number of identical objects (monomers in the following), one introduces the probability W(Q, t) to find the system in some state Q at an instant t for a specific rate of transitions between various states. Each state Q is characterized by a set of occupation numbers Q = {n_1, n_2, ..., n_g, ...}, where n_g is the total number of particles containing precisely g monomers. Thereupon, one introduces the generating functional Ψ for the probability W(Q, t). The time evolution of Ψ is described by an equation that is similar to the Schrödinger equation for a one-dimensional Bose field. This equation is solved exactly for transition rates proportional to the product of the masses of colliding particles. It is shown that, within a finite time interval, which is independent of the total mass of the entire system, a giant particle of mass about the mass of the entire system may appear in this system. The particle in question is unobservable in the thermodynamic limit, and this explains the well-known paradox of mass-concentration nonconservation in classical kinetic theory. The theory described in the present article is successfully applied in studying the time evolution of random graphs.
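    For orientation, the 'standard kinetic equation' that this field-theoretic treatment replaces is the Smoluchowski coagulation equation; in the exactly solvable case mentioned above the kernel is proportional to the product of the colliding masses. The form quoted below is the textbook discrete version, not an equation reproduced from the article:

```latex
\frac{\partial c_g(t)}{\partial t}
  = \frac{1}{2}\sum_{l=1}^{g-1} K(l,\, g-l)\, c_l(t)\, c_{g-l}(t)
  \;-\; c_g(t) \sum_{l=1}^{\infty} K(g,\, l)\, c_l(t),
\qquad K(g, l) \propto g\, l ,
```

    where c_g(t) is the concentration of particles containing g monomers and K is the coagulation kernel.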

  20. Preliminary Evaluation of a Field and Non-Field Based Social Studies Preservice Teacher Education Program

    ERIC Educational Resources Information Center

    Napier, John D.; Vansickle, Ronald L.

    1978-01-01

    Comparison of pre-service social studies teachers in field and non-field based methods courses indicated no significant differences with regard to teaching skills, attitudes, or behaviors teachers should exhibit in the classroom. (Author/DB)

  1. Got Mud? Field-based Learning in Wetland Ecology.

    ERIC Educational Resources Information Center

    Baldwin, Andrew H.

    2001-01-01

    Describes methods for teaching wetland ecology classes based mainly on direct, hands-on field experiences for students. Makes the case that classroom lectures are necessary but there is no substitute for field and laboratory experiences. (Author/MM)

  2. Assessing and monitoring the ecotoxicity of pulp and paper wastewater for irrigating reed fields using the polyurethane foam unit method based on monitoring protozoal communities.

    PubMed

    Ding, Cheng; Chen, Tianming; Li, Zhaoxia; Yan, Jinlong

    2015-05-01

    Using the standardized polyurethane foam unit (PFU) method, a preliminary investigation was carried out on the bioaccumulation and ecotoxic effects of pulp and paper wastewater used for irrigating reed fields. Static ecotoxicity tests showed that protozoal communities were very sensitive to variations in exposure time and effective concentration (EC) of the pulp and paper wastewater. The Shannon-Wiener diversity index (H) was a more suitable indicator of the extent of water pollution than the Gleason and Margalef diversity index (d), Simpson's diversity index (D), and Pielou's index (J). The regression equation between S_eq and EC was S_eq = -0.118·EC + 18.554. The relatively safe concentration and maximum acceptable toxicant concentration (MATC) of the wastewater for the protozoal communities were about 20% and 42%, respectively. To safely use this wastewater for irrigation, more than 58% of the toxins must be removed or diluted by further processing. Monitoring of the wastewater in representative irrigated reed fields showed that the protozoal colonization process followed a pattern similar to that observed in the static ecotoxicity tests, indicating that the toxicity of the irrigating pulp and paper wastewater was not lethal to protozoal communities in the reed fields. This study demonstrated the applicability of the PFU method in monitoring the ecotoxic effects of pulp and paper wastewater at the level of microbial communities and may guide the supervision and control of pulp and paper wastewater irrigation within the reed fields ecological system (RFES). PMID:25772871
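    As a worked illustration of the regression reported above (assuming, as in the standard PFU colonization model, that S_eq denotes the equilibrium number of protozoan species), it can be evaluated at the concentrations discussed in the abstract. The function name and the chosen EC values are illustrative only.

```python
# Worked example using the regression reported above: S_eq = -0.118 * EC + 18.554
def s_eq(ec_percent):
    """Equilibrium species number predicted for a given effective concentration (%)."""
    return -0.118 * ec_percent + 18.554

for ec in (0, 20, 42, 100):      # control, 'relatively safe' level, MATC, raw wastewater
    print(f"EC = {ec:3d} %  ->  S_eq = {s_eq(ec):.1f}")
```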

  3. Stochastic mean-field polycrystal plasticity methods

    NASA Astrophysics Data System (ADS)

    Tonks, Michael R.

    To accommodate multiple length scales, mean-field polycrystal plasticity models treat each material point as an aggregate of N crystals. The crystal velocity gradients L_c are approximated and then used to evaluate the crystal stresses T_c. The T_c are averaged to determine the material point stress T. Commonly, the L_c are approximated with the fully constrained model (FCM) based on the Taylor hypothesis, which equates L_c to the macro-scale velocity gradient L. Herein, we present two stochastic models that relax the FCM constraint. Through various applications we show that these computationally efficient stochastic models provide realistic response predictions. We first investigate the texture evolution in a planar polycrystal with our stochastic Taylor model (STM), in which we define L_c as a realization of a normal distribution with mean equal to L. Our STM predictions agree with crystal plasticity finite element method (CPFEM) predictions, demonstrating the development of a steady-state texture that is not predicted by the FCM. The computational cost of the STM is comparable to the FCM, i.e. substantially less than the CPFEM. We develop the STM for 3-D polycrystals based on CPFEM analysis results, which show that L_c follows a normal distribution. In addition to the STM, we develop the stochastic no-constraints model (SNCM), which differs from the STM in the manner in which the L_c distribution means are determined. Calibration and validation of the models are performed using tantalum compression experiment data. Both models predict the compression textures more accurately than the FCM, and the SNCM predicts them more accurately than the STM. The STM is slightly more computationally expensive than the FCM, while the SNCM is three times more expensive. Finally, we incorporate the STM in a finite element simulation of the Taylor impact of two tantalum specimens. Our simulation predictions mimic the texture and deformation data measured from a powder metallurgy
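    The stochastic Taylor model described above replaces the single Taylor assumption L_c = L by drawing each crystal's velocity gradient from a normal distribution centred on L and averaging the resulting crystal stresses. The sketch below shows only that sampling-and-averaging step; crystal_stress() is a toy placeholder for a real crystal-plasticity constitutive update, and the spread parameter and modulus are assumptions.

```python
import numpy as np

def crystal_stress(l_c):
    """Toy placeholder for a crystal-plasticity constitutive update (illustrative only)."""
    d_c = 0.5 * (l_c + l_c.T)              # rate of deformation (symmetric part)
    return 2.0 * 50.0e9 * d_c              # toy linear viscous response, assumed modulus

def stochastic_taylor_stress(l_macro, n_crystals=200, spread=0.1, seed=0):
    """Sample crystal velocity gradients around the macro L and average the crystal stresses."""
    rng = np.random.default_rng(seed)
    scale = spread * np.abs(l_macro).max()
    stresses = [crystal_stress(rng.normal(loc=l_macro, scale=scale))
                for _ in range(n_crystals)]
    return np.mean(stresses, axis=0)       # material-point stress T

l_macro = np.array([[1e-3, 0, 0], [0, -5e-4, 0], [0, 0, -5e-4]])   # uniaxial-type L
print(stochastic_taylor_stress(l_macro))
```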

  4. A method for characterizing photon radiation fields

    SciTech Connect

    Whicker, J.J.; Hsu, H.H.; Hsieh, F.H.; Borak, T.B.

    1999-04-01

    Uncertainty in dosimetric and exposure rate measurements can increase in areas where multi-directional and low-energy photons (< 100 keV) exist because of variations in energy and angular measurement response. Also, accurate measurement of external exposures in spatially non-uniform fields may require multiple dosimetry. Therefore, knowledge of the photon fields in the workplace is required for full understanding of the accuracy of dosimeters and instruments, and for determining the need for multiple dosimeters. This project was designed to develop methods to characterize photon radiation fields in the workplace, and to test the methods in a plutonium facility. The photon field at selected work locations was characterized using TLDs and a collimated NaI(Tl) detector from which spatial variations in photon energy distributions were calculated from measured spectra. Laboratory results showed the accuracy and utility of the method. Field measurement results combined with observed work patterns suggested the following: (1) workers are exposed from all directions, but not isotropically, (2) photon energy distributions were directionally dependent, (3) stuffing nearby gloves into the glovebox reduced exposure rates significantly, (4) dosimeter placement on the front of the chest provided for a reasonable estimate of the average dose equivalent to workers' torsos, (5) justifiable conclusions regarding the need for multiple dosimetry can be made using this quantitative method, and (6) measurements of the exposure rates with ionization chambers pointed with open beta windows toward the glovebox provided the highest measured rates, although absolute accuracy of the field measurements still needs to be assessed.

  5. A new signal restoration method based on deconvolution of the Point Spread Function (PSF) for the Flat-Field Holographic Concave Grating UV spectrometer system

    NASA Astrophysics Data System (ADS)

    Dai, Honglin; Luo, Yongdao

    2013-12-01

    In recent years, with the development of the flat-field holographic concave grating, such gratings have been adopted in many kinds of UV spectrometers. By means of a single optical surface, the flat-field holographic concave grating can implement both dispersion and imaging, which makes the UV spectrometer system design quite compact. However, the calibration of the flat-field holographic concave grating is very difficult, and various factors make its imaging quality difficult to guarantee. The spectrum signal therefore has to be restored before use. Guided by the theory of signals and systems, and after a series of experiments, we found that our UV spectrometer system is a linear space-variant system. This means that the PSF of every pixel of the system, which contains thousands of pixels, would have to be measured, which is clearly a large amount of calculation. To deal with this problem, we propose a novel signal restoration method. The method divides the system into several linear space-invariant subsystems and then performs signal restoration with the corresponding PSFs. Our experiments show that this method is effective and inexpensive.
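    The idea of splitting the detector into approximately space-invariant segments and deconvolving each with its own PSF can be sketched as follows. The Wiener filter, the segment splitting, and all names below are illustrative assumptions, not the method's actual implementation details.

```python
import numpy as np

def wiener_deconvolve(segment, psf, noise_power=1e-3):
    """Deconvolve a 1-D spectrum segment with its PSF using a simple Wiener filter."""
    n = segment.size
    kernel = np.zeros(n)
    kernel[:psf.size] = psf / psf.sum()
    H = np.fft.rfft(np.roll(kernel, -(psf.size // 2)))     # centred PSF transfer function
    G = np.conj(H) / (np.abs(H) ** 2 + noise_power)        # Wiener inverse filter
    return np.fft.irfft(G * np.fft.rfft(segment), n)

def restore_spectrum(spectrum, segment_psfs):
    """Treat each segment as approximately space-invariant and deconvolve it separately."""
    segments = np.array_split(spectrum, len(segment_psfs))
    return np.concatenate([wiener_deconvolve(seg, psf)
                           for seg, psf in zip(segments, segment_psfs)])
```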

  6. A New Method for Coronal Magnetic Field Reconstruction

    NASA Astrophysics Data System (ADS)

    Yi, Sibaek; Choe, Gwangson; Lim, Daye

    2015-08-01

    We present a new, simple, variational method for the reconstruction of coronal force-free magnetic fields based on vector magnetogram data. Our method employs vector potentials for the magnetic field description in order to ensure the divergence-free condition. As boundary conditions, it only requires the normal components of the magnetic field and current density, so that the boundary conditions are not over-specified as in many other methods. The boundary normal current distribution is fixed once and for all at the start and does not need continual adjustment as in stress-and-relax type methods. We have tested the computational code based on our new method in problems with known solutions and in those with actual photospheric data. When solutions are fully given at all boundaries, the accuracy of our method is almost comparable to the best-performing methods available. When magnetic field data are given only at the photospheric boundary, our method outperforms other methods in most “figures of merit” devised by Schrijver et al. (2006). Furthermore, the residual force in the solution is at least an order of magnitude smaller than that of any other method. It can also accommodate the source-surface boundary condition at the top boundary. Our method is expected to contribute to the real-time monitoring of the Sun required for future space weather forecasts.
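    For orientation, the vector-potential description and the force-free condition that such reconstructions target can be summarized by the standard relations below; these are quoted as textbook identities, not as the authors' specific variational functional:

```latex
\mathbf{B} = \nabla \times \mathbf{A}
\;\;\Rightarrow\;\; \nabla \cdot \mathbf{B} = 0 \ \text{identically},
\qquad
(\nabla \times \mathbf{B}) \times \mathbf{B} = \mathbf{0}
\;\;\Leftrightarrow\;\; \nabla \times \mathbf{B} = \alpha \mathbf{B},
\quad \mathbf{B} \cdot \nabla \alpha = 0 .
```

    Working with A rather than B is what guarantees the divergence-free condition by construction, while the boundary normal field and normal current supply the data that constrain the force-free parameter on field lines threading the photosphere.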

  7. Electric Field Quantitative Measurement System and Method

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2016-01-01

    A method and system are provided for making a quantitative measurement of an electric field. A plurality of antennas separated from one another by known distances are arrayed in a region that extends in at least one dimension. A voltage difference between at least one selected pair of antennas is measured. Each voltage difference is divided by the known distance associated with the selected pair of antennas corresponding thereto to generate a resulting quantity. The plurality of resulting quantities defined over the region quantitatively describe an electric field therein.
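    The measurement principle is simply a finite-difference estimate of the field from voltage readings at known antenna positions. The sketch below uses made-up numbers and a single axis; the positions and readings are illustrative assumptions.

```python
import numpy as np

# Each field estimate is a measured voltage difference divided by the known antenna
# separation (up to the sign convention E = -dV/dx). Numbers below are made up.
antenna_positions = np.array([0.00, 0.05, 0.10, 0.15])    # metres along one axis
antenna_voltages = np.array([0.00, 0.12, 0.25, 0.36])     # measured volts

e_field = np.diff(antenna_voltages) / np.diff(antenna_positions)   # V/m between neighbours
print(e_field)    # quantitative description of the field along the array
```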

  8. Advanced operator splitting-based semi-implicit spectral method to solve the binary phase-field crystal equations with variable coefficients

    NASA Astrophysics Data System (ADS)

    Tegze, György; Bansel, Gurvinder; Tóth, Gyula I.; Pusztai, Tamás; Fan, Zhongyun; Gránásy, László

    2009-03-01

    We present an efficient method to solve numerically the equations of dissipative dynamics of the binary phase-field crystal model proposed by Elder et al. [K.R. Elder, M. Katakowski, M. Haataja, M. Grant, Phys. Rev. B 75 (2007) 064107], characterized by variable coefficients. Using the operator splitting method, the problem is decomposed into sub-problems that can be solved more efficiently. A combination of non-trivial splitting with a spectral semi-implicit solution leads to sets of algebraic equations of diagonal matrix form. Extensive testing of the method has been carried out to find the optimum balance among the errors associated with time integration, spatial discretization, and splitting. We show that our method speeds up the computations by orders of magnitude relative to the conventional explicit finite difference scheme, while the cost of the pointwise implicit solution per timestep remains low. We also show that, due to its numerical dissipation, finite differencing cannot compete with spectral differencing in terms of accuracy. In addition, we demonstrate that our method can efficiently be parallelized for distributed memory systems, where excellent scalability with the number of CPUs is observed.
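    The key numerical idea, treating the stiff linear operator implicitly in Fourier space while keeping the nonlinear terms explicit, can be illustrated on a much simpler conserved gradient flow. The sketch below applies it to a constant-coefficient Cahn-Hilliard equation, not to the binary phase-field crystal model itself; all parameter values are arbitrary.

```python
import numpy as np

# Semi-implicit spectral stepping for du/dt = Lap(u**3 - u) - eps2 * Lap^2(u)
# (a simple conserved gradient flow used here only to illustrate the scheme).
n, L, eps2, dt, steps = 128, 64.0, 1.0, 0.1, 500
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2

rng = np.random.default_rng(1)
u = 0.05 * rng.standard_normal((n, n))            # small random initial condition

for _ in range(steps):
    nonlinear_hat = np.fft.fft2(u**3 - u)         # nonlinear term treated explicitly
    u_hat = (np.fft.fft2(u) - dt * k2 * nonlinear_hat) / (1.0 + dt * eps2 * k2**2)
    u = np.real(np.fft.ifft2(u_hat))              # stiff biharmonic term treated implicitly
```

    Because the implicit part is diagonal in Fourier space, each time step reduces to pointwise algebraic updates, which is the property the splitting in the paper exploits for the variable-coefficient case.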

  9. RAPID COMMUNICATION: Large improvement in high-field critical current densities of Nb3Al conductors by the transformation-heat-based up-quenching method

    NASA Astrophysics Data System (ADS)

    Takeuchi, T.; Banno, N.; Fukuzaki, T.; Wada, H.

    2000-10-01

    The bcc supersaturated solid solution Nb(Al)ss obtained by rapid heating and quenching of a multifilamentary Nb/Al composite wire has shown a crystal structure change from a disordered to an ordered structure before transforming to the A15 Nb3Al phase. Such ordering of the bcc phase seems to be responsible for the A15 phase stacking faults that depress the critical temperature (Tc), the upper critical magnetic field (Bc2) and, hence, the critical current density (Jc) of Nb3Al in high fields. A heat treatment around 1000 °C, higher than conventional transformation temperatures by about 200 °C, suppresses the ordering and yields a new phenomenon termed the `transformation-heat-based up-quenching' (TRUQ). TRUQ is characterized by the self-heating of the bcc phase by the transformation heat, which propagates through the whole length of a composite wire and transforms it to Nb3Al. A subsequent annealing at 800 °C enhances the long-range ordering of the Nb3Al phase and drastically improves the high-field critical current densities of the Nb3Al conductors.

  10. A field day of soil regulation methods

    NASA Astrophysics Data System (ADS)

    Kempter, Axel; Kempter, Carmen

    2015-04-01

    The subject of soil plays an important role in school geography. In the upper classes in particular, it is expected that knowledge from the area of soil can also be applied in other subjects; an assessment of economic and agricultural development and its potential, for example, requires the interweaving of natural-geographic and human-geographic factors. The treatment of the subject of soil requires the integration of results from different fields such as physics, chemistry and biology. Accordingly, the subject lends itself to cross-disciplinary lessons and offers the opportunity for practical work as well as excursions. Besides conveying specialist knowledge and supporting methodological and action competences, the field excursion should place special emphasis on independent learning and practical work, using stimulating exercises oriented towards solving problems and mastering the methods. This aim should be achieved by the interdisciplinary treatment of the subject of soil in a task-oriented learning process during the field day. The methods and experiments should be sensibly selected with both time and material constraints in mind. During the field day the pupils had to categorize soil texture, soil colour, soil profile, soil skeleton, lime content, ion exchange capacity (soils as filter materials), pH value, water retention capacity and the presence of different ions such as Fe3+, Mg2+, Cl- and NO3-. The pupils worked at stations and evaluated the data to obtain an overall picture of the soil at the end. Depending on the number of locations, the amount of time and the group size, different procedures can be offered: either expert groups carry out the same experiment at all locations and then split into different groups for the evaluation, or each group runs through all stations. The results were compared and discussed at the end.

  11. Wave field restoration using three-dimensional Fourier filtering method.

    PubMed

    Kawasaki, T; Takai, Y; Ikuta, T; Shimizu, R

    2001-11-01

    A wave field restoration method in transmission electron microscopy (TEM) was mathematically derived based on a three-dimensional (3D) image formation theory. Wave field restoration using this method together with spherical aberration correction was experimentally confirmed in through-focus images of amorphous tungsten thin film, and the resolution of the reconstructed phase image was successfully improved from the Scherzer resolution limit to the information limit. In an application of this method to a crystalline sample, the surface structure of Au(110) was observed in a profile-imaging mode. The processed phase image showed quantitatively the atomic relaxation of the topmost layer. PMID:11794629

  12. Inverse field-based approach for simultaneous B₁ mapping at high fields - a phantom based study.

    PubMed

    Jin, Jin; Liu, Feng; Zuo, Zhentao; Xue, Rong; Li, Mingyan; Li, Yu; Weber, Ewald; Crozier, Stuart

    2012-04-01

    Based on computational electromagnetics and multi-level optimization, an inverse approach to attaining accurate mapping of both the transmit and receive sensitivity of radiofrequency coils is presented. This paper extends our previous study of inverse methods for receptivity mapping at low fields to allow accurate mapping of RF magnetic fields (B(1)) for high-field applications. Accurate receive sensitivity mapping is essential for image-domain parallel imaging methods, such as sensitivity encoding (SENSE), to reconstruct high quality images. Accurate transmit sensitivity mapping will facilitate RF shimming and parallel transmission techniques that directly address the RF inhomogeneity issue, arguably the most challenging issue of high-field magnetic resonance imaging (MRI). The inverse field-based approach proposed herein is based on computational electromagnetics and iterative optimization. It fits an experimental image to the numerically calculated signal intensity by iteratively optimizing the coil-subject geometry to better resemble the experiments. Accurate transmit and receive sensitivities are derived as intermediate results of the optimization process. The method is validated by imaging studies using a homogeneous saline phantom at 7 T. A simulation study at 300 MHz demonstrates that the proposed method is able to obtain receptivity mapping with errors an order of magnitude smaller than those of the conventional method. The more accurate receptivity mapping and simultaneously obtained transmit sensitivity mapping could enable artefact-reduced and intensity-corrected image reconstructions. It is hoped that, by providing an approach to the accurate mapping of both transmit and receive sensitivity, the proposed method will facilitate a range of applications in high-field MRI and parallel imaging. PMID:22391489

  13. An analytical method to calculate equivalent fields to irregular symmetric and asymmetric photon fields

    SciTech Connect

    Tahmasebi Birgani, Mohamad J.; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima

    2014-04-01

    Equivalent fields are frequently used for central-axis depth-dose calculations of rectangular and irregularly shaped photon beams. As most of the proposed models for calculating the equivalent square field are dosimetry based, a simple physically based method to calculate the equivalent square field size was used as the basis of this study. A table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables of the BJR and of Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, the percentage depth doses (PDDs) were measured for several irregular symmetric and asymmetric treatment fields and for their equivalent squares on a Siemens Primus Plus linear accelerator for both energies, 6 and 18 MV. The mean relative difference in the PDD measurements between these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field.
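    A commonly used physically based rule for the side of the equivalent square of a rectangular field is the area-to-perimeter relation. Whether this is exactly the rule adopted in the study is not stated in the abstract, so the sketch below is a generic illustration only.

```python
def equivalent_square_side(a_cm, b_cm):
    """Area-to-perimeter rule: side of the square equivalent to an a x b rectangular field."""
    return 4.0 * (a_cm * b_cm) / (2.0 * (a_cm + b_cm))   # equals 2ab/(a+b)

# Example: a 5 cm x 20 cm field maps to an 8 cm x 8 cm equivalent square.
print(equivalent_square_side(5, 20))   # 8.0
```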

  14. Gravity field determination using boundary element methods

    NASA Astrophysics Data System (ADS)

    Klees, Roland

    1993-09-01

    The Boundary Element Method (BEM), a numerical technique for solving boundary integral equations, is introduced to determine the earth's gravity field. After a short survey of its main principles, we apply this method to the fixed gravimetric boundary value problem (BVP), i.e. the determination of the earth's gravitational potential from measurements of the intensity of the gravity field at points on the earth's surface. We show how to linearize this nonlinear BVP using an implicit function theorem and how to transform the linearized BVP into a boundary integral equation using the single layer representation. A Galerkin method is used to transform the boundary integral equation into a linear system of equations. We discuss the major problems of this approach for setting up and solving the linear system. The BVP is numerically solved for a bounded part of the earth's surface using a high resolution reference gravity model, measured gravity values of high density, and a 50 × 50 m² digital terrain model to describe the earth's surface. We obtain a gravity field resolution of 1 × 1 km² with an accuracy of the order 10⁻³ to 10⁻⁴ in about 1 CPU-hour on a Siemens/Fujitsu SIMD vector pipeline machine using highly sophisticated numerical integration techniques and fast equation solvers. We conclude that BEM is a powerful numerical tool for solving boundary value problems and may be an alternative to classical geodetic techniques.

  15. Non-destructive observation of intact bacteria and viruses in water by the highly sensitive frequency transmission electric-field method based on SEM

    SciTech Connect

    Ogura, Toshihiko

    2014-08-08

    Highlights: • We developed a highly sensitive frequency transmission electric-field (FTE) system. • The output signal was greatly enhanced by applying a voltage to a metal layer on the SiN film. • The spatial resolution of the new FTE method is 41 nm. • The new FTE system enables observation of intact bacteria and viruses in water. - Abstract: The high-resolution structural analysis of biological specimens by scanning electron microscopy (SEM) presents several advantages. Until now, wet bacterial specimens have been examined using atmospheric sample holders. However, images of unstained specimens in water obtained with these holders exhibit very poor contrast and heavy radiation damage. Recently, we developed the frequency transmission electric-field (FTE) method, which facilitates the SEM observation of biological specimens in water without radiation damage. However, its signal detection system has low sensitivity; a high EB current is therefore required to generate clear images, which reduces the spatial resolution and induces thermal damage to the samples. Here, a high-sensitivity detection system is developed for the FTE method, which enhances the output signal amplitude a hundredfold. The detection signal was strongly enhanced when a voltage was applied to the metal layer on the silicon nitride thin film. This enhancement reduced the EB current and improved the spatial resolution as well as the signal-to-noise ratio. The spatial resolution of the high-sensitivity FTE system is 41 nm, considerably better than that of the previous FTE system. The new FTE system can easily be utilised to examine various unstained biological specimens in water, such as living bacteria and viruses.

  16. A fast tree-based method for estimating column densities in adaptive mesh refinement codes. Influence of UV radiation field on the structure of molecular clouds

    NASA Astrophysics Data System (ADS)

    Valdivia, Valeska; Hennebelle, Patrick

    2014-11-01

    Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled, and their interplay influences the physical and chemical properties of the gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results: We find that the accuracy of the tree-based method for the extinction is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of the gas and the associated temperature per density bin, as well as the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations, since no communication is needed between CPUs when using a fully threaded tree. It is thus suitable for parallel computing. We show that the screening of far-UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We

  17. Improved methods for fan sound field determination

    NASA Technical Reports Server (NTRS)

    Cicon, D. E.; Sofrin, T. G.; Mathews, D. C.

    1981-01-01

    Several methods for determining acoustic mode structure in aircraft turbofan engines using wall microphone data were studied. A method for reducing data was devised and implemented which makes the definition of discrete coherent sound fields measured in the presence of engine speed fluctuation more accurate. For the analytical methods, algorithms were developed to define the dominant circumferential modes from full and partial circumferential arrays of microphones. Axial arrays were explored to define mode structure as a function of cutoff ratio, and the use of data taken at several constant speeds was also evaluated in an attempt to reduce instrumentation requirements. Sensitivities of the various methods to microphone density, array size and measurement error were evaluated and results of these studies showed these new methods to be impractical. The data reduction method used to reduce the effects of engine speed variation consisted of an electronic circuit which windowed the data so that signal enhancement could occur only when the speed was within a narrow range.

  18. Trap Profiling Based on Frequency Varied Charge Pumping Method for Hot Carrier Stressed Thin Gate Oxide Metal Oxide Semiconductors Field Effect Transistors.

    PubMed

    Choi, Pyungho; Kim, Hyunjin; Kim, Sangsub; Kim, Soonkon; Javadi, Reza; Park, Hyoungsun; Choi, Byoungdeog

    2016-05-01

    In this study, the pulse frequency and reverse bias voltage are modified in charge pumping, and an advanced technique is presented to extract the oxide trap profile in hot-carrier-stressed thin gate oxide metal oxide semiconductor field effect transistors (MOSFETs). Carrier trapping and detrapping in the gate oxide were analyzed after hot carrier stress, and the relationship between trapping depth and frequency was investigated. Hot-carrier-induced interface traps appear over the whole channel area, but induced border traps appear mainly in the region above pinch-off near the drain and gradually decrease toward the center of the channel. Thus, hot carrier stress causes interface trap generation over the whole channel area, while most border trap generation occurs in the drain region under the gate. Ultimately, the modified charge pumping method was used to obtain the trap density distribution of hot-carrier-stressed MOSFET devices, and the trapping-detrapping mechanism is also analyzed. PMID:27483833

  19. A reduced-scaling density matrix-based method for the computation of the vibrational Hessian matrix at the self-consistent field level

    SciTech Connect

    Kussmann, Jörg; Luenser, Arne; Beer, Matthias; Ochsenfeld, Christian

    2015-03-07

    An analytical method to calculate the molecular vibrational Hessian matrix at the self-consistent field level is presented. By analysis of the multipole expansions of the relevant derivatives of Coulomb-type two-electron integral contractions, we show that the effect of the perturbation on the electronic structure due to the displacement of nuclei decays at least as r⁻² instead of r⁻¹. The perturbation is asymptotically local, and the computation of the Hessian matrix can, in principle, be performed with O(N) complexity. Our implementation exhibits linear scaling in all time-determining steps, with some rapid but quadratic-complexity steps remaining. Sample calculations illustrate linear or near-linear scaling in the construction of the complete nuclear Hessian matrix for sparse systems. For more demanding systems, scaling is still considerably sub-quadratic to quadratic, depending on the density of the underlying electronic structure.

  20. Duality relations in the auxiliary field method

    SciTech Connect

    Silvestre-Brac, Bernard

    2011-05-15

    The eigenenergies ε^(N)(m; {n_i, l_i}) of a system of N identical particles with a mass m are functions of the various radial quantum numbers n_i and orbital quantum numbers l_i. Approximations E^(N)(m; Q) of these eigenenergies, depending on a principal quantum number Q({n_i, l_i}), can be obtained in the framework of the auxiliary field method. We demonstrate the existence of numerous exact duality relations linking quantities E^(N)(m; Q) and E^(p)(m'; Q') for various forms of the potentials (independent of m and N) and for both nonrelativistic and semirelativistic kinematics. As the approximations computed with the auxiliary field method can be very close to the exact results, we show with several examples that these duality relations still hold, sometimes with good accuracy, for the exact eigenenergies ε^(N)(m; {n_i, l_i}).

  1. Narrow field electromagnetic sensor system and method

    DOEpatents

    McEwan, T.E.

    1996-11-19

    A narrow field electromagnetic sensor system and method of sensing a characteristic of an object provide the capability to realize a characteristic of an object such as density, thickness, or presence, for any desired coordinate position on the object. One application is imaging. The sensor can also be used as an obstruction detector or an electronic trip wire with a narrow field without the disadvantages of impaired performance when exposed to dirt, snow, rain, or sunlight. The sensor employs a transmitter for transmitting a sequence of electromagnetic signals in response to a transmit timing signal, a receiver for sampling only the initial direct RF path of the electromagnetic signal while excluding all other electromagnetic signals in response to a receive timing signal, and a signal processor for processing the sampled direct RF path electromagnetic signal and providing an indication of the characteristic of an object. Usually, the electromagnetic signal is a short RF burst and the obstruction must provide a substantially complete eclipse of the direct RF path. By employing time-of-flight techniques, a timing circuit controls the receiver to sample only the initial direct RF path of the electromagnetic signal while not sampling indirect path electromagnetic signals. The sensor system also incorporates circuitry for ultra-wideband spread spectrum operation that reduces interference to and from other RF services while allowing co-location of multiple electronic sensors without the need for frequency assignments. 12 figs.

  2. Narrow field electromagnetic sensor system and method

    DOEpatents

    McEwan, Thomas E.

    1996-01-01

    A narrow field electromagnetic sensor system and method of sensing a characteristic of an object provide the capability to realize a characteristic of an object such as density, thickness, or presence, for any desired coordinate position on the object. One application is imaging. The sensor can also be used as an obstruction detector or an electronic trip wire with a narrow field without the disadvantages of impaired performance when exposed to dirt, snow, rain, or sunlight. The sensor employs a transmitter for transmitting a sequence of electromagnetic signals in response to a transmit timing signal, a receiver for sampling only the initial direct RF path of the electromagnetic signal while excluding all other electromagnetic signals in response to a receive timing signal, and a signal processor for processing the sampled direct RF path electromagnetic signal and providing an indication of the characteristic of an object. Usually, the electromagnetic signal is a short RF burst and the obstruction must provide a substantially complete eclipse of the direct RF path. By employing time-of-flight techniques, a timing circuit controls the receiver to sample only the initial direct RF path of the electromagnetic signal while not sampling indirect path electromagnetic signals. The sensor system also incorporates circuitry for ultra-wideband spread spectrum operation that reduces interference to and from other RF services while allowing co-location of multiple electronic sensors without the need for frequency assignments.

  3. A new method of field MRTD test

    NASA Astrophysics Data System (ADS)

    Chen, Zhibin; Song, Yan; Liu, Xianhong; Xiao, Wenjian

    2014-09-01

    MRTD is an important indicator of the imaging performance of an infrared camera. In the traditional laboratory test, a blackbody is used as the simulated heat source, which is not only expensive and bulky but also ill suited to field testing and online automatic measurement of infrared camera MRTD. To solve this problem, this paper introduces a new MRTD detection device that uses an LED as the simulated heat source and a four-bar target engraved in coated zinc sulfide glass as the simulated target. Using a Cassegrain collimation system adapted to high temperatures, the target is projected to infinity so that it can be observed by the human eye to complete the subjective test, or captured for objective measurement by image processing. The method replaces the blackbody with an LED whose color temperature is calibrated against a thermal imager; from this calibration, the relation curve between the LED drive current and the simulated blackbody temperature difference is established, giving accurate temperature control of the infrared target. Experimental results show that the accuracy of the device for field testing of thermal imager MRTD is within 0.1 K, which greatly reduces the cost while meeting project requirements, giving the approach wide application value.

  4. Dynamic mean field theory for lattice gas models of fluids confined in porous materials: Higher order theory based on the Bethe-Peierls and path probability method approximations

    SciTech Connect

    Edison, John R.; Monson, Peter A.

    2014-07-14

    Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from the PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.

  5. Forward Modeling Method of Gravity and Magnetic Fields and Their Gradient Tensors Based on 3-D Delaunay Discretization in Cartesian and Spherical Coordinate Systems

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Chen, C.; Du, J.; Sun, S.; Liang, Q.

    2015-12-01

    In the study of the inversion of gravity and magnetic data, the discretization of underground space is usually achieved with structured grids, for instance using the regular block as the module unit to divide the model space in a Cartesian coordinate system and the tesseroid in a spherical coordinate system. Structured grids have clear spatial structures and mathematical properties. However, the block can only provide a rough approximation to a given terrain, and using the tesseroid to approximate the terrain is hardly practicable. These shape-determination errors reduce the precision of forward modeling. Moreover, the precision decreases further when using tesseroids, since no analytical solution is available. On the other hand, since most terrain data have limited resolution, unstructured grids based on polyhedra or tetrahedra can fill the space completely, which allows us to reduce shape-determination errors to a minimum. In addition, analytical algorithms for the polyhedron have been proposed. In our study, we use the tetrahedron as the module unit to divide the underground space. Building on previous research, we supply new analytical algorithms for the tetrahedron to forward model gravity and magnetic fields and their gradient tensors in both Cartesian and spherical coordinate systems. The algorithm is verified by comparing the forward gravity and magnetic data of a block with the data obtained using existing algorithms; the absolute difference between the two is under 10⁻⁹ mGal. Our approach is suitable for the inversion of gravity and magnetic data in both Cartesian and spherical coordinate systems. This study is supported by the Natural Science Fund of Hubei Province (Grant No. 2015CFB361) and the International Cooperation Project in Science and Technology of China (Grant No. 2010DFA24580).

  6. A field method for measurement of infiltration

    USGS Publications Warehouse

    Johnson, A.I.

    1963-01-01

    The determination of infiltration--the downward entry of water into a soil (or sediment)--is receiving increasing attention in hydrologic studies because of the need for more quantitative data on all phases of the hydrologic cycle. A measure of infiltration, the infiltration rate, is usually determined in the field by flooding basins or furrows, sprinkling, or measuring water entry from cylinders (infiltrometer rings). Rates determined by ponding in large areas are considered most reliable, but the high cost usually dictates that infiltrometer rings, preferably 2 feet in diameter or larger, be used. The hydrology of subsurface materials is critical in the study of infiltration. The zone controlling the rate of infiltration is usually the least permeable zone. Many other factors affect infiltration rate--the sediment (soil) structure, the condition of the sediment surface, the distribution of soil moisture or soil-moisture tension, the chemical and physical nature of the sediments, the head of applied water, the depth to ground water, the chemical quality and the turbidity of the applied water, the temperature of the water and the sediments, the percentage of entrapped air in the sediments, the atmospheric pressure, the length of time of application of water, the biological activity in the sediments, and the type of equipment or method used. It is concluded that specific values of the infiltration rate for a particular type of sediment are probably nonexistent and that measured rates are primarily for comparative use. A standard field-test method for determining infiltration rates by means of single- or double-ring infiltrometers is described and the construction, installation, and operation of the infiltrometers are discussed in detail.
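
    As a simple worked example of the quantity being measured (the numbers below are illustrative, not values from the report), the infiltration rate from a ring infiltrometer is the volume of water entering the soil per unit ring area per unit time:

        f = \frac{\Delta V}{A\,\Delta t}
          = \frac{2.8\ \text{L}}{\pi\,(0.30\ \text{m})^{2}\times 0.5\ \text{h}}
          \approx 19.8\ \text{mm h}^{-1},
        \qquad A = \pi r^{2},\quad r \approx 0.30\ \text{m for a 2-ft-diameter ring}.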

  7. Field methods for measuring concentrated flow erosion

    NASA Astrophysics Data System (ADS)

    Castillo, C.; Pérez, R.; James, M. R.; Quinton, J. N.; Taguas, E. V.; Gómez, J. A.

    2012-04-01

    Many studies have stressed the importance of gully erosion in the overall soil loss and sediment yield of agricultural catchments, for instance in recent years (Vandaele and Poesen, 1995; De Santisteban et al., 2006; Wu et al., 2008). Several techniques have been used for determining gully erosion in field studies. The conventional techniques involved the use of different devices (i.e. ruler, pole, tape, micro-topographic profilers, total station) to calculate rill and gully volumes through the determination of cross-sectional areas and length of reaches (Casalí et al., 1999; Hessel and van Asch, 2003). Optical devices (i.e. laser profilemeters) have also been designed for the purpose of rapid and detailed assessment of cross-sectional areas in gully networks (Giménez et al., 2009). These conventional 2D methods provide a simple and inexpensive approach for erosion evaluation, but are time consuming to carry out if good accuracy is required. On the other hand, remote sensing techniques are being increasingly applied to gully erosion investigation, such as aerial photography used for large-scale, long-term investigations (e.g. Martínez-Casasnovas et al., 2004; Ionita, 2006), airborne and terrestrial LiDAR datasets for gully volume evaluation (James et al., 2007; Evans and Lindsay, 2010) and, recently, major advances in 3D photo-reconstruction techniques (Welty et al. 2010, James et al., 2011). Despite their interest, few studies simultaneously compare the accuracies of the range of conventional and remote sensing techniques used, or define the most suitable method for a particular scale, given time and cost constraints. That was the reason behind the International Workshop "Innovations in the evaluation and measurement of rill and gully erosion", held in Cordoba in May 2011, from which part of the materials presented in this abstract derive. The main aim of this work was to compare the accuracy and time requirements of traditional (2D) and recently developed 3D techniques.

  8. New electric field methods in chemical relaxation spectrometry.

    PubMed Central

    Persoons, A; Hellemans, L

    1978-01-01

    New stationary relaxation methods for the investigation of ionic and dipolar equilibria are presented. The methods are based on the measurement of non-linearities in conductance and permittivity under high electric field conditions. The chemical contributions to the nonlinear effects are discussed in their static as well as their dynamic behavior. A sampling of experimental results shows the potential and range of possible applications of the new techniques. It is shown that these methods will become useful in the study of nonlinear responses to perturbation, in view of the general applicability of the experimental principles involved. PMID:708817

  9. Knowledge-based flow field zoning

    NASA Technical Reports Server (NTRS)

    Andrews, Alison E.

    1988-01-01

    Automating flow field zoning in two dimensions is an important step towards easing the three-dimensional grid generation bottleneck in computational fluid dynamics. A knowledge-based approach works well, but certain aspects of flow field zoning make the use of such an approach challenging. A knowledge-based flow field zoner, called EZGrid, was implemented and tested on representative two-dimensional aerodynamic configurations. Results are shown which illustrate the way in which EZGrid incorporates the effects of physics, shape description, position, and user bias in flow field zoning.

  10. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    NASA Astrophysics Data System (ADS)

    Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David

    2015-03-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.

  11. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    SciTech Connect

    Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David

    2015-03-15

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.

  12. Field testing method for photovaltaic modules

    NASA Astrophysics Data System (ADS)

    Ramos, Gerber N.

    For remote areas where solar photovoltaic modules are the only source of power, it is essential to perform preventive maintenance to ensure that the PV system works properly; unfortunately, prices for PV testers range from $1,700 to $8,000. To address this issue, a portable, inexpensive tester and an analysis methodology have been developed. Assembling a simple tester, which costs $530 and weighs about 5 pounds, and using the four-parameter PV model, we characterized the current-voltage (I-V) curve at environmental testing conditions; then, employing radiation, temperature, and age-degradation sensitivity equations, we extrapolated the I-V curve to standard testing conditions. After applying the methodology to three kinds of silicon modules (mono-crystalline, multi-crystalline, and thin-film), we obtained maximum power points up to 97% of the manufacturer's specifications. Based on these results, it is reasonably accurate and affordable to verify the performance of solar modules in the field.
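
    The abstract does not spell out the model equations, so the following is only a minimal sketch of one common four-parameter single-diode formulation (photocurrent I_L, saturation current I_0, series resistance R_s, ideality factor n); the parameter values and the helper solve_i are illustrative assumptions, not the author's calibration.

        import numpy as np
        from scipy.optimize import brentq

        K_B, Q = 1.380649e-23, 1.602176634e-19

        def solve_i(v, i_l, i_0, r_s, n, n_cells=36, t_cell=298.15):
            """Solve the implicit single-diode equation I = I_L - I_0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1)."""
            vt = n * n_cells * K_B * t_cell / Q      # thermal voltage of the whole module
            f = lambda i: i_l - i_0 * (np.exp((v + i * r_s) / vt) - 1.0) - i
            return brentq(f, 0.0, i_l)               # the operating current lies between 0 and I_L

        # Illustrative (assumed) parameters and a coarse I-V sweep below open circuit
        i_l, i_0, r_s, n = 5.2, 1e-9, 0.35, 1.3
        voltages = np.linspace(0.0, 21.0, 50)
        currents = np.array([solve_i(v, i_l, i_0, r_s, n) for v in voltages])
        print(f"Estimated maximum power point: {np.max(voltages * currents):.1f} W")

    Scaling the fitted photocurrent with irradiance and shifting the curve with temperature coefficients, as the abstract describes, would then translate such a field-measured curve to standard testing conditions.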

  13. Potential theoretic methods for far field sound radiation calculations

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Stenger, Edward J.; Scott, J. R.

    1995-01-01

    In the area of computational acoustics, procedures which accurately predict the far-field sound radiation are much sought after. A systematic development of such procedures is found in a sequence of papers by Atassi. The method presented here is an alternate approach to predicting far-field sound based on simple layer potential theoretic methods. The main advantages of this method are: it requires only a simple free-space Green's function, it can accommodate arbitrary shapes of Kirchhoff surfaces, and it is readily extendable to three-dimensional problems. Moreover, the procedure presented here, though tested for unsteady lifting airfoil problems, can easily be adapted to other areas of interest, such as jet noise radiation problems. Results are presented for lifting airfoil problems and comparisons are made with the results reported by Atassi. Direct comparisons are also made for the flat plate case.

  14. School's IN for Summer: An Alternative Field Experience for Elementary Science Methods Students

    ERIC Educational Resources Information Center

    Hanuscin, Deborah L.; Musikul, Kusalin

    2007-01-01

    Field experiences are critical to teacher learning and enhance the effectiveness of methods courses; however, when methods courses are offered in the summer, traditional school-based field experiences are not possible. This article describes an alternative campus-based experience created as part of an elementary science methods course. The Summer…

  15. Filter-based method for background removal in high-sensitivity wide-field-surface-enhanced Raman scattering imaging in vivo.

    PubMed

    Mallia, Rupananda J; McVeigh, Patrick Z; Veilleux, Israel; Wilson, Brian C

    2012-07-01

    As molecular imaging moves towards lower detection limits, the elimination of endogenous background signals becomes imperative. We present a facile background-suppression technique that specifically segregates the signal from surface-enhanced Raman scattering (SERS)-active nanoparticles (NPs) from the tissue autofluorescence background in vivo. SERS NPs have extremely narrow spectral peaks that do not overlap significantly with endogenous Raman signals. This can be exploited, using specific narrow-band filters, to image picomolar (pM) concentrations of NPs against a broad tissue autofluorescence background in wide-field mode, with short integration times that compare favorably with point-by-point mapping typically used in SERS imaging. This advance will facilitate the potential applications of SERS NPs as contrast agents in wide-field multiplexed biomarker-targeted imaging in vivo. PMID:22894500

  16. From documents to datasets: A MediaWiki-based method of annotating and extracting species observations in century-old field notebooks.

    PubMed

    Thomer, Andrea; Vaidya, Gaurav; Guralnick, Robert; Bloom, David; Russell, Laura

    2012-01-01

    Part diary, part scientific record, biological field notebooks often contain details necessary to understanding the location and environmental conditions existent during collecting events. Despite their clear value for (and recent use in) global change studies, the text-mining outputs from field notebooks have been idiosyncratic to specific research projects, and impossible to discover or re-use. Best practices and workflows for digitization, transcription, extraction, and integration with other sources are nascent or non-existent. In this paper, we demonstrate a workflow to generate structured outputs while also maintaining links to the original texts. The first step in this workflow was to place already digitized and transcribed field notebooks from the University of Colorado Museum of Natural History founder, Junius Henderson, on Wikisource, an open text transcription platform. Next, we created Wikisource templates to document places, dates, and taxa to facilitate annotation and wiki-linking. We then requested help from the public, through social media tools, to take advantage of volunteer efforts and energy. After three notebooks were fully annotated, content was converted into XML and annotations were extracted and cross-walked into Darwin Core compliant record sets. Finally, these recordsets were vetted, to provide valid taxon names, via a process we call "taxonomic referencing." The result is identification and mobilization of 1,068 observations from three of Henderson's thirteen notebooks and a publishable Darwin Core record set for use in other analyses. Although challenges remain, this work demonstrates a feasible approach to unlock observations from field notebooks that enhances their discovery and interoperability without losing the narrative context from which those observations are drawn. "Compose your notes as if you were writing a letter to someone a century in the future." Perrine and Patton (2011). PMID:22859891

  17. From documents to datasets: A MediaWiki-based method of annotating and extracting species observations in century-old field notebooks

    PubMed Central

    Thomer, Andrea; Vaidya, Gaurav; Guralnick, Robert; Bloom, David; Russell, Laura

    2012-01-01

    Abstract Part diary, part scientific record, biological field notebooks often contain details necessary to understanding the location and environmental conditions existent during collecting events. Despite their clear value for (and recent use in) global change studies, the text-mining outputs from field notebooks have been idiosyncratic to specific research projects, and impossible to discover or re-use. Best practices and workflows for digitization, transcription, extraction, and integration with other sources are nascent or non-existent. In this paper, we demonstrate a workflow to generate structured outputs while also maintaining links to the original texts. The first step in this workflow was to place already digitized and transcribed field notebooks from the University of Colorado Museum of Natural History founder, Junius Henderson, on Wikisource, an open text transcription platform. Next, we created Wikisource templates to document places, dates, and taxa to facilitate annotation and wiki-linking. We then requested help from the public, through social media tools, to take advantage of volunteer efforts and energy. After three notebooks were fully annotated, content was converted into XML and annotations were extracted and cross-walked into Darwin Core compliant record sets. Finally, these recordsets were vetted, to provide valid taxon names, via a process we call “taxonomic referencing.” The result is identification and mobilization of 1,068 observations from three of Henderson’s thirteen notebooks and a publishable Darwin Core record set for use in other analyses. Although challenges remain, this work demonstrates a feasible approach to unlock observations from field notebooks that enhances their discovery and interoperability without losing the narrative context from which those observations are drawn. “Compose your notes as if you were writing a letter to someone a century in the future.” Perrine and Patton (2011) PMID:22859891

  18. Gravitational collapse of scalar fields via spectral methods

    SciTech Connect

    Oliveira, H. P. de; Rodrigues, E. L.; Skea, J. E. F.

    2010-11-15

    In this paper we present a new numerical code based on the Galerkin method to integrate the field equations for the spherical collapse of massive and massless scalar fields. By using a spectral decomposition in terms of the radial coordinate, the field equations were reduced to a finite set of ordinary differential equations in the space of modes associated with the Galerkin expansion of the scalar field, together with algebraic sets of equations connecting modes associated with the metric functions. The set of ordinary differential equations with respect to the null coordinate is then integrated using an eighth-order Runge-Kutta method. The numerical tests have confirmed the high accuracy and fast convergence of the code. As an application we have evaluated the whole spectrum of black hole masses which ranges from infinitesimal to large values obtained after varying the amplitude of the initial scalar field distribution. We have found strong numerical evidence that this spectrum is described by a nonextensive distribution law.
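
    As a much simpler illustration of the same workflow (a spectral Galerkin expansion turns the PDE into ordinary differential equations for the mode amplitudes, which are then integrated with a high-order Runge-Kutta method), the sketch below treats a 1D diffusion equation with a sine basis; it is a toy problem, not the spherically symmetric Einstein-scalar system of the paper, and SciPy's 'DOP853' integrator stands in for the eighth-order Runge-Kutta scheme used by the authors.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy PDE: u_t = u_xx on [0, pi] with u(0) = u(pi) = 0.
        # Galerkin basis phi_n(x) = sin(n x); projecting the PDE onto the basis
        # gives the mode-space ODEs da_n/dt = -n^2 a_n.
        N = 8                                    # number of retained modes
        x = np.linspace(0.0, np.pi, 201)

        def rhs(t, a):
            n = np.arange(1, N + 1)
            return -(n ** 2) * a

        a0 = np.zeros(N)
        a0[0], a0[2] = 1.0, 0.3                  # initial data expressed in the basis
        sol = solve_ivp(rhs, (0.0, 0.5), a0, method="DOP853", rtol=1e-10, atol=1e-12)

        # Reconstruct u(x, t_final) from the evolved mode amplitudes
        u_final = sum(sol.y[k - 1, -1] * np.sin(k * x) for k in range(1, N + 1))
        print("max |u| at t = 0.5:", float(np.max(np.abs(u_final))))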

  19. Process system and method for fabricating submicron field emission cathodes

    DOEpatents

    Jankowski, A.F.; Hayes, J.P.

    1998-05-05

    A process method and system for making field emission cathodes exists. The deposition source divergence is controlled to produce field emission cathodes with height-to-base aspect ratios that are uniform over large substrate surface areas while using very short source-to-substrate distances. The rate of hole closure is controlled from the cone source. The substrate surface is coated in well defined increments. The deposition source is apertured to coat pixel areas on the substrate. The entire substrate is coated using a manipulator to incrementally move the whole substrate surface past the deposition source. Either collimated sputtering or evaporative deposition sources can be used. The position of the aperture and its size and shape are used to control the field emission cathode size and shape. 3 figs.

  20. Process system and method for fabricating submicron field emission cathodes

    DOEpatents

    Jankowski, Alan F.; Hayes, Jeffrey P.

    1998-01-01

    A process method and system for making field emission cathodes exists. The deposition source divergence is controlled to produce field emission cathodes with height-to-base aspect ratios that are uniform over large substrate surface areas while using very short source-to-substrate distances. The rate of hole closure is controlled from the cone source. The substrate surface is coated in well defined increments. The deposition source is apertured to coat pixel areas on the substrate. The entire substrate is coated using a manipulator to incrementally move the whole substrate surface past the deposition source. Either collimated sputtering or evaporative deposition sources can be used. The position of the aperture and its size and shape are used to control the field emission cathode size and shape.

  1. Junction-based field emission structure for field emission display

    DOEpatents

    Dinh, Long N.; Balooch, Mehdi; McLean, II, William; Schildbach, Marcus A.

    2002-01-01

    A junction-based field emission display, wherein the junctions are formed by depositing a semiconducting or dielectric, low work function, negative electron affinity (NEA) silicon-based compound film (SBCF) onto a metal or n-type semiconductor substrate. The SBCF can be doped to become a p-type semiconductor. A small forward bias voltage is applied across the junction so that electron transport is from the substrate into the SBCF region. Upon entering into this NEA region, many electrons are released into the vacuum level above the SBCF surface and accelerated toward a positively biased phosphor screen anode, hence lighting up the phosphor screen for display. To turn off, simply switch off the applied potential across the SBCF/substrate. May be used for field emission flat panel displays.

  2. Magnetic field transfer device and method

    DOEpatents

    Wipf, S.L.

    1990-02-13

    A magnetic field transfer device includes a pair of oppositely wound inner coils which each include at least one winding around an inner coil axis, and an outer coil which includes at least one winding around an outer coil axis. The windings may be formed of superconductors. The axes of the two inner coils are parallel and laterally spaced from each other so that the inner coils are positioned in side-by-side relation. The outer coil is outwardly positioned from the inner coils and rotatable relative to the inner coils about a rotational axis substantially perpendicular to the inner coil axes to generate a hypothetical surface which substantially encloses the inner coils. The outer coil rotates relative to the inner coils between a first position in which the outer coil axis is substantially parallel to the inner coil axes and the outer coil augments the magnetic field formed in one of the inner coils, and a second position 180° from the first position, in which the augmented magnetic field is transferred into the other inner coil and reoriented 180° from the original magnetic field. The magnetic field transfer device allows a magnetic field to be transferred between volumes with negligible work being required to rotate the outer coil with respect to the inner coils. 16 figs.

  3. Magnetic field transfer device and method

    DOEpatents

    Wipf, Stefan L.

    1990-01-01

    A magnetic field transfer device includes a pair of oppositely wound inner coils which each include at least one winding around an inner coil axis, and an outer coil which includes at least one winding around an outer coil axis. The windings may be formed of superconductors. The axes of the two inner coils are parallel and laterally spaced from each other so that the inner coils are positioned in side-by-side relation. The outer coil is outwardly positioned from the inner coils and rotatable relative to the inner coils about a rotational axis substantially perpendicular to the inner coil axes to generate a hypothetical surface which substantially encloses the inner coils. The outer coil rotates relative to the inner coils between a first position in which the outer coil axis is substantially parallel to the inner coil axes and the outer coil augments the magnetic field formed in one of the inner coils, and a second position 180° from the first position, in which the augmented magnetic field is transferred into the other inner coil and reoriented 180° from the original magnetic field. The magnetic field transfer device allows a magnetic field to be transferred between volumes with negligible work being required to rotate the outer coil with respect to the inner coils.

  4. Inter-comparison of four remote sensing based surface energy balance methods to retrieve surface evapotranspiration and water stress of irrigated fields in semi-arid climate

    NASA Astrophysics Data System (ADS)

    Chirouze, J.; Boulet, G.; Jarlan, L.; Fieuzal, R.; Rodriguez, J. C.; Ezzahar, J.; Er-Raki, S.; Bigeard, G.; Merlin, O.; Garatuza-Payan, J.; Watts, C.; Chehbouni, G.

    2013-01-01

    Remotely sensed surface temperature can provide a good proxy for water stress level and is therefore particularly useful to estimate spatially distributed evapotranspiration. Instantaneous stress levels or instantaneous latent heat flux are deduced from the surface energy balance equation constrained by this equilibrium temperature. Pixel average surface temperature depends on two main factors: stress and vegetation fraction cover. Methods estimating stress vary according to the way they treat each factor. Two families of methods can be defined: the contextual methods, where stress levels are scaled on a given image between hot/dry and cool/wet pixels for a particular vegetation cover, and single-pixel methods which evaluate latent heat as the residual of the surface energy balance for one pixel independently from the others. Four models, two contextual (S-SEBI and a triangle method, inspired by Moran et al., 1994) and two single-pixel (TSEB, SEBS) are applied at seasonal scale over a 4 km by 4 km irrigated agricultural area in semi-arid northern Mexico. Their performance, from both local and spatial standpoints, is compared with energy balance data acquired at seven locations within the area, as well as with a more complex soil-vegetation-atmosphere transfer model forced with true irrigation and rainfall data. Stress levels are not always well retrieved by most models, but S-SEBI as well as TSEB, although slightly biased, show good performance. A drop in model performance is observed when vegetation is senescent, mostly due to a poor partitioning both between turbulent fluxes and between the soil/plant components of the latent heat flux and the available energy. As expected, contextual methods perform well when extreme hydric and vegetation conditions are encountered in the same image (therefore, esp. in spring and early summer) while they tend to exaggerate the spread in water status in more homogeneous conditions (esp. in winter).
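
    For orientation, a single-pixel residual method closes the surface energy balance per pixel as LE = Rn - G - H, with the sensible heat flux H obtained from the surface-air temperature difference and an aerodynamic resistance; the sketch below is only a minimal illustration of that residual closure with assumed input values, not an implementation of TSEB or SEBS.

        import numpy as np

        RHO_AIR, CP_AIR = 1.2, 1004.0            # air density (kg/m3) and specific heat (J/kg/K)

        def latent_heat_residual(rn, g, ts, ta, ra):
            """Residual latent heat flux LE = Rn - G - H, with H = rho*cp*(Ts - Ta)/ra (W/m2)."""
            h = RHO_AIR * CP_AIR * (ts - ta) / ra
            return rn - g - h, h

        # Illustrative per-pixel inputs (assumptions): net radiation, soil heat flux,
        # radiometric surface and air temperatures (K), aerodynamic resistance (s/m)
        le, h = latent_heat_residual(rn=550.0, g=80.0, ts=305.0, ta=300.0, ra=40.0)
        evaporative_fraction = le / (550.0 - 80.0)   # a common instantaneous stress proxy
        print(f"H = {h:.0f} W/m2, LE = {le:.0f} W/m2, EF = {evaporative_fraction:.2f}")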

  5. DEVELOPMENT OF AN EMISSION FACTOR FOR AMMONIA EMISSIONS FROM U.S. SWINE FARMS BASED ON FIELD TESTS AND APPLICATION OF A MASS BALANCE METHOD

    EPA Science Inventory

    This paper summarizes and discusses recent available U.S. and European information on ammonia (NH3) emissions from swine farms and assesses the applicability for general use in the United States. The emission rates for the swine barns calculated by various methods show g...

  6. Intercomparison of four remote-sensing-based energy balance methods to retrieve surface evapotranspiration and water stress of irrigated fields in semi-arid climate

    NASA Astrophysics Data System (ADS)

    Chirouze, J.; Boulet, G.; Jarlan, L.; Fieuzal, R.; Rodriguez, J. C.; Ezzahar, J.; Er-Raki, S.; Bigeard, G.; Merlin, O.; Garatuza-Payan, J.; Watts, C.; Chehbouni, G.

    2014-03-01

    Instantaneous evapotranspiration rates and surface water stress levels can be deduced from remotely sensed surface temperature data through the surface energy budget. Two families of methods can be defined: the contextual methods, where stress levels are scaled on a given image between hot/dry and cool/wet pixels for a particular vegetation cover, and single-pixel methods, which evaluate latent heat as the residual of the surface energy balance for one pixel independently from the others. Four models, two contextual (S-SEBI and a modified triangle method, named VIT) and two single-pixel (TSEB, SEBS) are applied over one growing season (December-May) for a 4 km × 4 km irrigated agricultural area in semi-arid northern Mexico. Their performance, from both local and spatial standpoints, is compared with energy balance data acquired at seven locations within the area, as well as with an uncalibrated soil-vegetation-atmosphere transfer (SVAT) model forced with local in situ data including observed irrigation and rainfall amounts. Stress levels are not always well retrieved by most models, but S-SEBI as well as TSEB, although slightly biased, show good performance. A drop in model performance is observed for all models when vegetation is senescent, mostly due to a poor partitioning both between turbulent fluxes and between the soil/plant components of the latent heat flux and the available energy. As expected, contextual methods perform well when contrasted soil moisture and vegetation conditions are encountered in the same image (therefore, especially in spring and early summer) while they tend to exaggerate the spread in water status in more homogeneous conditions (especially in winter). Surface energy balance models run with available remotely sensed products prove to be nearly as accurate as the uncalibrated SVAT model forced with in situ data.

  7. Dispersion-Corrected Mean-Field Electronic Structure Methods.

    PubMed

    Grimme, Stefan; Hansen, Andreas; Brandenburg, Jan Gerit; Bannwarth, Christoph

    2016-05-11

    Mean-field electronic structure methods like Hartree-Fock, semilocal density functional approximations, or semiempirical molecular orbital (MO) theories do not account for long-range electron correlation (London dispersion interaction). Inclusion of these effects is mandatory for realistic calculations on large or condensed chemical systems and for various intramolecular phenomena (thermochemistry). This Review describes the recent developments (including some historical aspects) of dispersion corrections with an emphasis on methods that can be employed routinely with reasonable accuracy in large-scale applications. The most prominent correction schemes are classified into three groups: (i) nonlocal, density-based functionals, (ii) semiclassical C6-based, and (iii) one-electron effective potentials. The properties as well as pros and cons of these methods are critically discussed, and typical examples and benchmarks on molecular complexes and crystals are provided. Although there are some areas for further improvement (robustness, many-body and short-range effects), the situation regarding the overall accuracy is clear. Various approaches yield long-range dispersion energies with a typical relative error of 5%. For many chemical problems, this accuracy is higher compared to that of the underlying mean-field method (i.e., a typical semilocal (hybrid) functional like B3LYP). PMID:27077966
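
    As a concrete illustration of group (ii), a semiclassical C6-based scheme adds a damped pairwise term to the mean-field energy; the sketch below is a generic D2-style sum with made-up coefficients and a Fermi-type damping function, not the specific parameterizations reviewed in the article.

        import numpy as np

        def dispersion_energy(coords, c6, r0, s6=1.0, d=20.0):
            """Damped pairwise dispersion: E = -s6 * sum_ij f(R_ij) * C6_ij / R_ij^6,
            with C6_ij = sqrt(C6_i * C6_j) and damping f = 1 / (1 + exp(-d*(R/R0 - 1)))."""
            e = 0.0
            for i in range(len(coords)):
                for j in range(i + 1, len(coords)):
                    r = np.linalg.norm(coords[i] - coords[j])
                    c6_ij = np.sqrt(c6[i] * c6[j])
                    r0_ij = r0[i] + r0[j]
                    f_damp = 1.0 / (1.0 + np.exp(-d * (r / r0_ij - 1.0)))
                    e -= s6 * f_damp * c6_ij / r ** 6
            return e

        # Two "atoms" with illustrative, non-physical parameters (consistent length/energy units)
        coords = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 3.8]])
        print(dispersion_energy(coords, c6=[15.0, 15.0], r0=[1.5, 1.5]))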

  8. Field-based gunfire location systems

    NASA Astrophysics Data System (ADS)

    Uzes, Charles A.

    2009-05-01

    A new approach to gunfire location, coupling antenna design to field models and signal processing procedures, enables direction finding and ranging of projectile sources in spectrally competitive environments, with ranging possible in certain circumstances. The approach is based upon the notion that data collection should enable mathematical models for incident acoustic fields in antenna neighborhoods, permitting utilization of systems having high resolving power. Theory, procedures, and design are outlined and gunfire location field test results incorporating multiple shooters, echoes, and reverberation are presented. *Technology protected by US Patents 7,423,934; 7,394,724; 7,372,774; 7,123,548; and patents pending.

  9. Forest health monitoring: Field methods guide

    SciTech Connect

    Tallent-Halsell, N.G.

    1994-10-01

    This guide is intended to instruct Forest Health Monitors when collecting data on forest health indicators: site condition, growth and regeneration, crown condition, tree damage and mortality assessment, photosynthetically active radiation, vegetation structure, ozone bioindicator species, lichen community structure and field logistics. This guide contains information on measuring, observing and recording data related to the above-listed forest health indicators. Pertinent quality assurance information is also included.

  10. FOREST HEALTH MONITORING FIELD METHODS GUIDE

    EPA Science Inventory

    This EMAP-FHM Methods Guide is intended to instruct Forest Health Monitors when collecting data on forest health indicators: site condition, growth and regeneration, crown condition, tree damage and mortality assessment, photosynthetically active radiation, vegetation structure, ...

  11. Dispersion Method Using Focused Ultrasonic Field

    NASA Astrophysics Data System (ADS)

    Kim, Jungsoon; Kim, Moojoon; Ha, Kanglyel; Chu, Minchul

    2010-07-01

    The dispersion of powders into liquids has become one of the most important techniques in high-tech industries and is a common process in the formulation of various products, such as paint, ink, shampoo, beverages, and polishing media. In this study, an ultrasonic system with a cylindrical transducer is newly introduced for pure nanoparticle dispersion. The acoustic pressure field and the characteristics of the shock pulse caused by cavitation are investigated. The frequency spectrum of the pulse from the collapse of air bubbles in the cavitation is analyzed theoretically. It was confirmed that a TiO2-water suspension can be dispersed effectively using the suggested system.

  12. Determination of traces of cobalt in soils: A field method

    USGS Publications Warehouse

    Almond, H.

    1953-01-01

    The growing use of geochemical prospecting methods in the search for ore deposits has led to the development of a field method for the determination of cobalt in soils. The determination is based on the fact that cobalt reacts with 2-nitroso-1-naphthol to yield a pink compound that is soluble in carbon tetrachloride. The carbon tetrachloride extract is shaken with dilute cyanide to complex interfering elements and to remove excess reagent. The cobalt content is estimated by comparing the pink color in the carbon tetrachloride with a standard series prepared from standard solutions. The cobalt 2-nitroso-1-naphtholate system in carbon tetrachloride follows Beer's law. As little as 1 p.p.m. can be determined in a 0.1-gram sample. The method is simple and fast and requires only simple equipment. More than 40 samples can be analyzed per man-day with an accuracy within 30% or better.

  13. Light-field-based phase imaging

    NASA Astrophysics Data System (ADS)

    Liu, Jingdan; Xu, Tingfa; Yue, Weirui; Situ, Guohai

    2014-10-01

    Phase contains important information about the diffraction or scattering properties of an object, and therefore phase imaging is vital to many applications, including biomedicine and metrology, to name just a few. However, due to the limited bandwidth of image sensors, it is not possible to directly detect the phase of an optical field. Many methods, including the Transport of Intensity Equation (TIE), have been well demonstrated for quantitative, non-interferometric phase imaging. The TIE offers an experimentally simple technique for computing phase quantitatively from two or more defocused images. Usually, the defocused images are obtained experimentally by shifting the camera along the optical axis in small intervals. Note that light field imaging can produce an image stack focused at different depths by digitally refocusing the captured light field of a scene. In this paper, we propose to combine Light Field Microscopy and the TIE method for phase imaging, taking advantage of the digital refocusing capability of Light Field Microscopy. We demonstrate the proposed technique with simulation results. Compared with the traditional camera-shifting technique, light field imaging allows the defocused images to be captured without any mechanical instability and therefore offers an advantage in practical applications.
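
    The TIE itself reads k dI/dz = -div(I grad(phi)); for a nearly uniform intensity I0 it reduces to a Poisson equation for the phase, which can be inverted in the Fourier domain. The sketch below is a minimal uniform-intensity solver under that assumption, with a hypothetical regularization constant eps; it is not the authors' light-field pipeline.

        import numpy as np

        def tie_phase(i_plus, i_minus, i0, dz, wavelength, pixel, eps=1e-6):
            """Recover phase from two defocused images via the uniform-intensity TIE:
            laplacian(phi) = -(k / I0) * dI/dz, solved as a Poisson equation in Fourier space."""
            k = 2.0 * np.pi / wavelength
            didz = (i_plus - i_minus) / (2.0 * dz)                # axial intensity derivative
            ny, nx = didz.shape
            fy = 2.0 * np.pi * np.fft.fftfreq(ny, d=pixel)
            fx = 2.0 * np.pi * np.fft.fftfreq(nx, d=pixel)
            k2 = fx[np.newaxis, :] ** 2 + fy[:, np.newaxis] ** 2  # squared spatial frequency
            phi_hat = np.fft.fft2((k / i0) * didz) / (k2 + eps)   # regularized inverse Laplacian
            phi_hat[0, 0] = 0.0                                   # enforce zero-mean phase
            return np.real(np.fft.ifft2(phi_hat))

        # Illustrative call, assuming two refocused images are available as numpy arrays:
        # phi = tie_phase(i_plus, i_minus, i0=1.0, dz=2e-6, wavelength=532e-9, pixel=1e-6)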

  14. Field-Based Evaluation of Two Herbaceous Plant Community Composition Sampling Methods for Long-Term Monitoring in Northern Great Plains National Parks

    USGS Publications Warehouse

    Symstad, Amy J.; Wienk, Cody L.; Thorstenson, Andy

    2006-01-01

    The Northern Great Plains Inventory & Monitoring (I&M) Network (Network) of the National Park Service (NPS) consists of 13 NPS units in North Dakota, South Dakota, Nebraska, and eastern Wyoming. The Network is in the planning phase of a long-term program to monitor the health of park ecosystems. Plant community composition is one of the 'Vital Signs,' or indicators, that will be monitored as part of this program for three main reasons. First, plant community composition is information-rich; a single sampling protocol can provide information on the diversity of native and non-native species, the abundance of individual dominant species, and the abundance of groups of plants. Second, plant community composition is of specific management concern. The abundance and diversity of exotic plants, both absolute and relative to native species, is one of the greatest management concerns in almost all Network parks (Symstad 2004). Finally, plant community composition reflects the effects of a variety of current or anticipated stressors on ecosystem health in the Network parks including invasive exotic plants, large ungulate grazing, lack of fire in a fire-adapted system, chemical exotic plant control, nitrogen deposition, increased atmospheric carbon dioxide concentrations, and climate change. Before the Network begins its Vital Signs monitoring, a detailed plan describing specific protocols used for each of the Vital Signs must go through rigorous development and review. The pilot study on which we report here is one of the components of this protocol development. The goal of the work we report on here was to determine a specific method to use for monitoring plant community composition of the herb layer (< 2 m tall).

  15. BLOCK DISPLACEMENT METHOD FIELD DEMONSTRATION AND SPECIFICATIONS

    EPA Science Inventory

    The Block Displacement technique has been developed as a remedial action method for isolating large tracts of ground contaminated by hazardous waste. The technique places a low-permeability barrier around and under a large block of contaminated earth. The Block Displacement proce...

  16. Methods of approximation of reference fields of different classes

    NASA Astrophysics Data System (ADS)

    Kolesova, Valentina I.

    1993-11-01

    The summary geomagnetic field on the surface of the Earth consists of the following components: F0 = Fm + Fim + Fr + Fl + Fe (1), where F0 is the observed geomagnetic field, Fm the main geomagnetic field, Fim the field of the intermediate anomalies, Fr the field of the regional anomalies, Fl the field of the local anomalies, and Fe the external geomagnetic field. The reference field for the regional anomalies is the sum of the main geomagnetic field and the intermediate anomalies. Since the components mentioned above have different space-spectral characteristics, different methods are used for their analytical description. The main geomagnetic field, being the global reference field, is optimally approximated by a spherical harmonic series [1]: X = Σ_{n=1}^{N} Σ_{m=0}^{n} (g_n^m cos mλ + …

  17. Approximate iterative operator method for potential-field downward continuation

    NASA Astrophysics Data System (ADS)

    Tai, Zhenhua; Zhang, Fengxu; Zhang, Fengqin; Hao, Mengcheng

    2016-05-01

    An approximate iterative operator method in the wavenumber domain is proposed to improve the stability and accuracy of downward continuation of potential fields measured on the ground surface, at sea, or from the air. Firstly, the generalized iterative formula for downward continuation is derived in the wavenumber domain; then, the transformational relationship between horizontal second-order partial derivatives and continuation is derived based on the Taylor series and the Laplace equation to obtain an approximate operator. By introducing this operator into the generalized iterative formula, a rapid algorithm for downward continuation is developed. The filtering and convergence characteristics of this method are analyzed in order to estimate the optimal range for the number of iterations. We demonstrate the proposed method on synthetic data, and the results validate its flexibility. Finally, we apply the proposed method to real data, and the results show that it can enhance gravity anomalies generated by concealed orebodies; when the downward-continued results are continued back upward to the measurement level, they have approximately the same distribution and amplitude as the original anomalies.
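
    For context, downward continuation in the wavenumber domain inverts the upward-continuation operator exp(-|k|h), and a common way to stabilize this is to iterate a correction rather than divide by the operator directly. The sketch below is a generic Landweber-style iteration on a regular grid with an assumed spacing dx; it illustrates the role of the iteration count as a regularization parameter, not the specific approximate-operator scheme of the paper.

        import numpy as np

        def upward_continue(field, dx, h):
            """Continue a gridded potential field upward by height h via exp(-|k| h)."""
            ny, nx = field.shape
            ky = 2.0 * np.pi * np.fft.fftfreq(ny, d=dx)
            kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
            kmag = np.sqrt(kx[np.newaxis, :] ** 2 + ky[:, np.newaxis] ** 2)
            return np.real(np.fft.ifft2(np.fft.fft2(field) * np.exp(-kmag * h)))

        def downward_continue_iterative(observed, dx, h, n_iter=20):
            """Iteratively correct the estimate so that its upward continuation matches
            the observation; too many iterations amplify high-wavenumber noise."""
            estimate = observed.copy()
            for _ in range(n_iter):
                estimate = estimate + (observed - upward_continue(estimate, dx, h))
            return estimate

        # Illustrative usage: anomalies on a 100 m grid continued 200 m downward
        # deep_field = downward_continue_iterative(observed_grid, dx=100.0, h=200.0, n_iter=15)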

  18. Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance

    PubMed Central

    Park, Jong-Wook; Kwak, Hwan-Joo; Kang, Young-Chang; Kim, Dong W.

    2016-01-01

    An advanced fuzzy potential field method for mobile robot obstacle avoidance is proposed. The potential field method primarily deals with the repulsive forces surrounding obstacles, while fuzzy control logic focuses on fuzzy rules that handle linguistic variables and describe the knowledge of experts. The design of a fuzzy controller—advanced fuzzy potential field method (AFPFM)—that models and enhances the conventional potential field method is proposed and discussed. This study also examines the rule-explosion problem of conventional fuzzy logic and assesses the performance of our proposed AFPFM through simulations carried out using a mobile robot. PMID:27123001

  19. Advanced Fuzzy Potential Field Method for Mobile Robot Obstacle Avoidance.

    PubMed

    Park, Jong-Wook; Kwak, Hwan-Joo; Kang, Young-Chang; Kim, Dong W

    2016-01-01

    An advanced fuzzy potential field method for mobile robot obstacle avoidance is proposed. The potential field method primarily deals with the repulsive forces surrounding obstacles, while fuzzy control logic focuses on fuzzy rules that handle linguistic variables and describe the knowledge of experts. The design of a fuzzy controller--advanced fuzzy potential field method (AFPFM)--that models and enhances the conventional potential field method is proposed and discussed. This study also examines the rule-explosion problem of conventional fuzzy logic and assesses the performance of our proposed AFPFM through simulations carried out using a mobile robot. PMID:27123001
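
    For reference, the conventional potential field method that AFPFM builds on sums an attractive force toward the goal and repulsive forces from nearby obstacles; the sketch below implements only that classical baseline with illustrative gains, not the fuzzy-rule enhancement proposed in the paper.

        import numpy as np

        def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
            """One step of the classical potential field method: move along the combined
            attractive force toward the goal and repulsive forces from obstacles inside radius d0."""
            force = k_att * (goal - pos)
            for obs in obstacles:
                diff = pos - obs
                d = np.linalg.norm(diff)
                if 1e-9 < d < d0:
                    force += k_rep * (1.0 / d - 1.0 / d0) / d ** 2 * (diff / d)
            return pos + step * force / (np.linalg.norm(force) + 1e-9)

        # Illustrative run: robot at the origin, goal at (10, 10), two point obstacles
        pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
        obstacles = [np.array([4.0, 4.2]), np.array([7.0, 6.5])]
        for _ in range(400):
            pos = potential_field_step(pos, goal, obstacles)
        print("final position:", np.round(pos, 2))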

  20. Ocean Wave Simulation Based on Wind Field.

    PubMed

    Li, Zhongyi; Wang, Hao

    2016-01-01

    Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy source for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them construct the ocean surface height field from the perspective of wind force driving ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results show that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates. PMID:26808718
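
    The abstract does not give the height-field construction itself, so the following is only a simplified stand-in: a height field built by superposing sinusoidal wave components whose travel directions, wavelengths and amplitudes are derived from a wind vector. The amplitude law, the spread of directions and the component count are illustrative assumptions, not the wave-particle control strategy of the paper.

        import numpy as np

        def ocean_height_field(grid, t, wind, n_waves=64, g=9.81, seed=0):
            """Sum wind-aligned sinusoidal wave components over an (H, W, 2) grid of
            surface positions; 'wind' is a 2D wind vector in m/s."""
            rng = np.random.default_rng(seed)
            speed = np.linalg.norm(wind)
            wind_dir = wind / (speed + 1e-9)
            h = np.zeros(grid.shape[:2])
            for _ in range(n_waves):
                angle = rng.normal(0.0, 0.35)            # scatter directions around the wind
                c, s = np.cos(angle), np.sin(angle)
                d = np.array([c * wind_dir[0] - s * wind_dir[1],
                              s * wind_dir[0] + c * wind_dir[1]])
                wavelength = rng.uniform(0.2, 1.0) * speed ** 2 / g
                k = 2.0 * np.pi / wavelength
                omega = np.sqrt(g * k)                   # deep-water dispersion relation
                amp = 0.002 * wavelength                 # illustrative amplitude law
                phase = k * (grid[..., 0] * d[0] + grid[..., 1] * d[1]) - omega * t \
                        + rng.uniform(0.0, 2.0 * np.pi)
                h += amp * np.cos(phase)
            return h

        # Illustrative usage: a 128 x 128 patch of a 50 m x 50 m surface under an 8 m/s wind
        xs = np.linspace(0.0, 50.0, 128)
        grid = np.stack(np.meshgrid(xs, xs, indexing="xy"), axis=-1)
        print(ocean_height_field(grid, t=0.0, wind=np.array([8.0, 2.0])).shape)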

  1. Ocean Wave Simulation Based on Wind Field

    PubMed Central

    2016-01-01

    Ocean wave simulation has a wide range of applications in movies, video games and training systems. Wind force is the main energy source for generating ocean waves, which are the result of the interaction between wind and the ocean surface. While numerous methods for simulating oceans and other fluid phenomena have undergone rapid development during the past years in the field of computer graphics, few of them construct the ocean surface height field from the perspective of wind force driving ocean waves. We introduce wind force into the construction of the ocean surface height field by applying wind field data and wind-driven wave particles. Continual and realistic ocean waves result from the overlap of wind-driven wave particles, and a strategy is proposed to control these discrete wave particles and simulate an endless ocean surface. The results show that the new method is capable of producing a realistic ocean scene under the influence of wind fields at real-time rates. PMID:26808718

  2. IR photodetector based on rectangular quantum wire in magnetic field

    SciTech Connect

    Jha, Nandan

    2014-04-24

    In this paper we study a rectangular-quantum-wire-based IR detector with a magnetic field applied along the wire. The energy spectrum of a particle in a rectangular box shows level repulsions and crossings when an external magnetic field is applied. Due to these complex level dynamics, the spacing between any two levels can be tuned by varying the magnetic field, which allows the user to change the detector parameters according to his or her requirements. In this paper, we numerically calculate the energy sub-band levels of a square quantum wire in a constant magnetic field along the wire and quantify the operating wavelength range that can be obtained by varying the magnetic field. We also calculate the photon absorption probability at different magnetic fields and give the efficiency for different wavelengths when the transition is assumed to be between the two lowest levels.

  3. On a spectral method for forward gravity field modelling

    NASA Astrophysics Data System (ADS)

    Root, B. C.; Novák, P.; Dirkx, D.; Kaban, M.; van der Wal, W.; Vermeersen, L. L. A.

    2016-07-01

    This article reviews a spectral forward gravity field modelling method that was initially designed for topographic/isostatic mass reduction of gravity data. The method transforms 3D spherical density models into gravitational potential fields using a spherical harmonic representation. The binomial series approximation in the approach, which is crucial for its computational efficiency, is examined and an error analysis is performed. It is shown that this method cannot be used for density layers in crustal and upper mantle regions, because it results in large errors in the modelled potential field. Here, a correction is proposed to mitigate this erroneous behaviour. The improved method is benchmarked against a tesseroid gravity field modelling method and is shown to be accurate within ±4 mGal for a layer representing the Moho density interface, which is below other errors in gravity field studies. After the proposed adjustment, the method can be used for global gravity modelling of the complete Earth's density structure.
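
    For orientation, the efficiency of the spectral approach comes from expanding the radial factor of each layer binomially about the reference sphere. A generic form of such an expansion is sketched below; the exponent n + 3 (from the radial volume integral) and the truncation order K are assumptions for illustration, not the paper's specific choices, but they show that the truncation error grows with the harmonic degree n for a given relative depth h/R, which is one way such an expansion can break down for particular density layers.

        \left(1 + \frac{h}{R}\right)^{n+3}
          = \sum_{k=0}^{\infty} \binom{n+3}{k} \left(\frac{h}{R}\right)^{k}
          \approx \sum_{k=0}^{K} \binom{n+3}{k} \left(\frac{h}{R}\right)^{k},
        \qquad \text{truncation error} \sim \binom{n+3}{K+1}\left(\frac{h}{R}\right)^{K+1}.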

  4. Identification of heterogeneous elastic material characteristics by virtual fields method

    NASA Astrophysics Data System (ADS)

    Sato, Yuya; Arikawa, Shuichi; Yoneyama, Satoru

    2015-03-01

    In this study, a method for identifying the elastic material characteristics of a heterogeneous material from measured displacements is proposed. The virtual fields method is employed to determine the elastic material characteristics. A solid propellant is considered as the heterogeneous test material. An equation representing the distribution of the material properties of the solid propellant is obtained from Fick's law, and this distribution is applied in the virtual fields method. The effectiveness of the proposed method is demonstrated by applying it to displacement fields obtained using finite element analysis. Results show that the heterogeneous material properties can be obtained by the proposed method.

  5. Bootstrapping conformal field theories with the extremal functional method.

    PubMed

    El-Showk, Sheer; Paulos, Miguel F

    2013-12-13

    The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future. PMID:24483643

  6. Some equivalences between the auxiliary field method and envelope theory

    SciTech Connect

    Buisseret, Fabien; Semay, Claude; Silvestre-Brac, Bernard

    2009-03-15

    The auxiliary field method has been recently proposed as an efficient technique to compute analytical approximate solutions of eigenequations in quantum mechanics. We show that the auxiliary field method is completely equivalent to the envelope theory, which is another well-known procedure to analytically solve eigenequations, although relying on different principles a priori. This equivalence leads to a deeper understanding of both frameworks.

  7. DISPLACEMENT BASED SEISMIC DESIGN METHODS.

    SciTech Connect

    Hofmayer, C.; Miller, C.; Wang, Y.; Costello, J.

    2003-07-15

    A research effort was undertaken to determine the need for any changes to USNRC's seismic regulatory practice to reflect the move, in the earthquake engineering community, toward using expected displacement rather than force (or stress) as the basis for assessing design adequacy. The research explored the extent to which displacement-based seismic design methods, such as those given in FEMA 273, could be useful for reviewing nuclear power stations. Two structures common to nuclear power plants were chosen to compare the results of the analysis models used. The first structure is a four-story frame structure with shear walls providing the primary lateral load system, referred to herein as the shear wall model. The second structure is the turbine building of the Diablo Canyon nuclear power plant. The models were analyzed using both displacement-based (pushover) analysis and nonlinear dynamic analysis. In addition, for the shear wall model an elastic analysis with ductility factors applied was also performed. The objectives of the work were to compare the results between the analyses, and to develop insights regarding the work that would be needed before the displacement-based analysis methodology could be considered applicable to facilities licensed by the NRC. A summary of the research results, which were published in NUREG/CR-6719 in July 2001, is presented in this paper.

  8. Field olfactometry assessment of dairy manure land application methods.

    PubMed

    Brandt, R C; Elliott, H A; Adviento-Borbe, M A A; Wheeler, E F; Kleinman, P J A; Beegle, D B

    2011-01-01

    Surface application of manure in reduced tillage systems generates nuisance odors, but their management is hindered by a lack of standardized field quantification methods. An investigation was undertaken to evaluate odor emissions associated with various technologies that incorporate manure with minimal soil disturbance. Dairy manure slurry was applied by five methods in a 3.5-m swath to grassland in 61-m-inside-diameter rings. Nasal Ranger Field Olfactometer (NRO) instruments were used to collect dilutions-to-threshold (D/T) observations from the center of each ring using a panel of four odor assessors taking four readings each over a 10-min period. The Best Estimate Threshold D/T (BET10) was calculated for each application method and an untreated control based on readings taken before application and at <1 h, 2 to 4 h, and approximately 24 h after spreading. Whole-air samples were simultaneously collected for laboratory dynamic olfactometer evaluation using the triangular forced-choice (TFC) method. The BET10 of NRO data composited for all measurement times showed D/T decreased in the following order (α = 0.05): surface broadcast > aeration infiltration > surface + chisel incorporation > direct ground injection ≈ shallow disk injection > control, which closely followed laboratory TFC odor panel results (r = 0.83). At 24 h, odor reduction benefits relative to broadcasting persisted for all methods except aeration infiltration, and odors associated with direct ground injection were not different from the untreated control. Shallow disk injection provided substantial odor reduction with familiar toolbar equipment that is well adapted to regional soil conditions and conservation tillage operations. PMID:21520750
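
    As a small computational aside (an assumption about the statistic, not taken from the paper): a best-estimate threshold is commonly computed as a geometric mean of the pooled dilution-to-threshold readings. A minimal Python sketch with made-up readings:

        import math

        def bet(dt_readings):
            """Geometric mean of dilution-to-threshold (D/T) readings.

            Assumption: BET10 is taken here as the geometric mean of the pooled
            panelist observations; consult the relevant standard (e.g., ASTM E679)
            for the exact protocol."""
            return math.exp(sum(math.log(x) for x in dt_readings) / len(dt_readings))

        # Hypothetical readings from four assessors, four readings each
        readings = [7, 15, 15, 30, 7, 7, 15, 15, 30, 15, 7, 15, 15, 30, 15, 7]
        print(round(bet(readings), 1))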

  9. Light Field Imaging Based Accurate Image Specular Highlight Removal

    PubMed Central

    Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo

    2016-01-01

    Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity by the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation, comparing with existing methods on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083

  10. New light field camera based on physical based rendering tracing

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations imposed by the computational technology of the time. With the rapid advancement of computer technology over the last decade, these limitations have been lifted and light field technology has quickly returned to the research spotlight. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of using the traditional optical simulation approach to study light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distributions and typically lacks the capability to render pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated the optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided us with a way to establish a link between the virtual scene and the real measurement results. Several images developed based on the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT based light field camera technology. It will be shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. Operational constraints, performance metrics, required computation resources, and other details associated with this newly developed light field camera technique are presented.

  11. An improved reconstruction method for cosmological density fields

    NASA Technical Reports Server (NTRS)

    Gramann, Mirt

    1993-01-01

    This paper proposes some improvements to existing reconstruction methods for recovering the initial linear density and velocity fields of the universe from the present large-scale density distribution. We derive the Eulerian continuity equation in the Zel'dovich approximation and show that, by applying this equation, we can trace the evolution of the gravitational potential of the universe more exactly than is possible with previous approaches based on the Zel'dovich-Bernoulli equation. The improved reconstruction method is tested using N-body simulations. When the Zel'dovich-Bernoulli equation describes the formation of filaments, then the Zel'dovich continuity equation also follows the clustering of clumps inside the filaments. Our reconstruction method recovers the true initial gravitational potential with an rms error about 3 times smaller than previous methods. We examine the recovery of the initial distribution of Fourier components and find the scale at which the recovered phases are scrambled with respect to their true initial values. Integrating the Zel'dovich continuity equation back in time, we can improve the spatial resolution of the reconstruction by a factor of about 2.
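
    For orientation, the Eulerian continuity equation referred to above can be written in comoving coordinates (with delta the density contrast, v the peculiar velocity, and a the scale factor; the paper closes the system with the Zel'dovich approximation, in which v is proportional to the gradient of the gravitational potential). A minimal statement of these relations, written here as a sketch rather than the paper's exact formulation:

        \frac{\partial\delta}{\partial t} + \frac{1}{a}\,\nabla\cdot\bigl[(1+\delta)\,\mathbf{v}\bigr] = 0,
        \qquad
        \mathbf{v} \propto \nabla\Phi \quad \text{(Zel'dovich approximation)}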

  12. Individual SWCNT based ionic field effect transistor

    NASA Astrophysics Data System (ADS)

    Pang, Pei; He, Jin; Park, Jae Hyun; Krstic, Predrag; Lindsay, Stuart

    2011-03-01

    Here we report that the ionic current through a single-walled carbon nanotube (SWCNT) can be effectively gated by a perpendicular electrical field from a top gate electrode, working as an ionic field effect transistor. Both our experiment and simulation confirm that the electroosmotic current (EOF) is the main component in the ionic current through the SWCNT and is responsible for the gating effect. We also studied the gating efficiency as a function of solution concentration and pH and demonstrated that the device can work effectively under physiologically relevant conditions. This work opens the door to using CNT based nanofluidics for ion and molecule manipulation. This work was supported by the DNA Sequencing Technology Program of the National Human Genome Research Institute (1RC2HG005625-01, 1R21HG004770-01), Arizona Technology Enterprises and the Biodesign Institute.

  13. Method of using triaxial magnetic fields for making particle structures

    DOEpatents

    Martin, James E.; Anderson, Robert A.; Williamson, Rodney L.

    2005-01-18

    A method of producing three-dimensional particle structures with enhanced magnetic susceptibility in three dimensions by applying a triaxial energetic field to a magnetic particle suspension and subsequently stabilizing said particle structure. Combinations of direct current and alternating current fields in three dimensions produce particle gel structures, honeycomb structures, and foam-like structures.

  14. COMPARABILITY BETWEEN VARIOUS FIELD AND LABORATORY WOODSTOVE EMISSION MEASUREMENT METHODS

    EPA Science Inventory

    The paper compares various field and laboratory woodstove emission measurement methods. In 1988, the U.S. EPA promulgated performance standards for residential wood heaters (woodstoves). Over the past several years, a number of field studies have been undertaken to determine the ac...

  15. FIELD AND LABORATORY METHODS APPLICABLE TO OVERBURDENS AND MINESOIL

    EPA Science Inventory

    Incorporated within this manual are step-by-step procedures on field identification of common rocks and minerals; field sampling techniques; processing of rock and soil samples; and chemical, mineralogical, microbiological, and physical analyses of the samples. The method can be ...

  16. FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining pentachlorophenol (PCP) contamination in soil and wa...

  17. FIELD ANALYTICAL SCREENING PROGRAM PCB METHOD: INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...

  18. FIELD ANALYTICAL SCREENING PROGRAM: PCB METHOD - INNOVATIVE TECHNOLOGY REPORT

    EPA Science Inventory

    This innovative technology evaluation report (ITER) presents information on the demonstration of the U.S. Environmental Protection Agency (EPA) Region 7 Superfund Field Analytical Screening Program (FASP) method for determining polychlorinated biphenyl (PCB) contamination in soil...

  19. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)

    2011-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
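
    As an illustration of the encode/reconstruct idea (a minimal sketch, not the patented implementation; the wavelet choice, grid size, and level count are assumptions), the snippet below uses the PyWavelets package to wavelet-encode a synthetic raster height field and rebuild a coarse level of detail from the approximation coefficients alone.

        import numpy as np
        import pywt

        # Synthetic raster height field (e.g., terrain elevations on a grid)
        x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 256), np.linspace(0, 4 * np.pi, 256))
        heights = np.sin(x) * np.cos(y) + 0.1 * np.random.rand(256, 256)

        # Encode: multi-level 2D discrete wavelet transform
        coeffs = pywt.wavedec2(heights, wavelet='haar', level=4)

        # Coarse level of detail: keep the approximation band, zero the detail bands
        coarse = [coeffs[0]] + [tuple(np.zeros_like(d) for d in band) for band in coeffs[1:]]
        low_res = pywt.waverec2(coarse, wavelet='haar')

        print(heights.shape, low_res.shape)  # shapes match for this power-of-two grid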

  20. Method and system for progressive mesh storage and reconstruction using wavelet-encoded height fields

    NASA Technical Reports Server (NTRS)

    Baxes, Gregory A. (Inventor)

    2010-01-01

    Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.

  1. A Method for Fast Computation of FTLE Fields

    NASA Astrophysics Data System (ADS)

    Brunton, Steven; Rowley, Clarence

    2008-11-01

    An efficient method for computing finite time Lyapunov exponent (FTLE) fields is investigated. FTLE fields, which measure the stretching between nearby particles, are important in determining transport mechanisms in unsteady flows. Ridges of the FTLE field are Lagrangian Coherent Structures (LCS) and provide an unsteady analogue of invariant manifolds from dynamical systems theory. FTLE field computations are expensive because of the large number of particle trajectories which must be integrated. However, when computing a time series of fields, it is possible to use the integrated trajectories at a previous time to compute an approximation of the integrated trajectories initialized at a later time, resulting in significant computational savings. This work provides analytic estimates for accumulated error and computation time as well as simulations comparing exact results with the approximate method for a number of interesting flows.
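
    To make the baseline computation concrete: given the flow map obtained by integrating particle trajectories over an interval of length T, the FTLE at each grid point is the logarithm of the largest singular value of the flow-map gradient divided by |T|. The sketch below is a minimal Python illustration of that standard (unaccelerated) computation, not the speed-up described in the talk; the velocity field, grid, and RK4 settings are illustrative choices.

        import numpy as np

        def velocity(t, x, y):
            # Illustrative steady velocity field (a stand-in, not from the talk)
            u = -np.pi * np.sin(np.pi * x) * np.cos(np.pi * y)
            v = np.pi * np.cos(np.pi * x) * np.sin(np.pi * y)
            return u, v

        def flow_map(x0, y0, t0, T, steps=100):
            """Advect a grid of particles from t0 to t0 + T with RK4."""
            x, y, t = x0.copy(), y0.copy(), t0
            dt = T / steps
            for _ in range(steps):
                k1x, k1y = velocity(t, x, y)
                k2x, k2y = velocity(t + dt/2, x + dt/2*k1x, y + dt/2*k1y)
                k3x, k3y = velocity(t + dt/2, x + dt/2*k2x, y + dt/2*k2y)
                k4x, k4y = velocity(t + dt, x + dt*k3x, y + dt*k3y)
                x = x + dt/6*(k1x + 2*k2x + 2*k3x + k4x)
                y = y + dt/6*(k1y + 2*k2y + 2*k3y + k4y)
                t += dt
            return x, y

        # Grid of initial conditions and the integrated flow map
        X0, Y0 = np.meshgrid(np.linspace(0, 2, 201), np.linspace(0, 1, 101))
        T = 5.0
        XT, YT = flow_map(X0, Y0, 0.0, T)

        # Flow-map gradient by centred differences, then the Cauchy-Green tensor
        dxdX, dxdY = np.gradient(XT, X0[0, :], Y0[:, 0], axis=(1, 0))
        dydX, dydY = np.gradient(YT, X0[0, :], Y0[:, 0], axis=(1, 0))
        ftle = np.zeros_like(X0)
        for i in range(X0.shape[0]):
            for j in range(X0.shape[1]):
                F = np.array([[dxdX[i, j], dxdY[i, j]], [dydX[i, j], dydY[i, j]]])
                lam_max = np.linalg.eigvalsh(F.T @ F)[-1]
                ftle[i, j] = 0.5 * np.log(max(lam_max, 1e-12)) / abs(T)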

  2. Lagrangian based methods for coherent structure detection

    SciTech Connect

    Allshouse, Michael R.; Peacock, Thomas

    2015-09-15

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.
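
    For readers who want to reproduce such comparisons, the canonical double-gyre flow mentioned above is straightforward to code. A minimal sketch follows; the parameter values (A = 0.1, epsilon = 0.25, omega = 2*pi/10) are the commonly used choices and are an assumption here, not taken from the paper.

        import numpy as np

        A, EPS, OMEGA = 0.1, 0.25, 2 * np.pi / 10  # common parameter choices

        def double_gyre(t, x, y):
            """Time-periodic double-gyre velocity on the domain [0, 2] x [0, 1]."""
            f = EPS * np.sin(OMEGA * t) * x**2 + (1 - 2 * EPS * np.sin(OMEGA * t)) * x
            dfdx = 2 * EPS * np.sin(OMEGA * t) * x + (1 - 2 * EPS * np.sin(OMEGA * t))
            u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
            v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
            return u, v

        # Example: velocity at the centre of the domain at t = 2.5
        print(double_gyre(2.5, 1.0, 0.5))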

  3. Lagrangian based methods for coherent structure detection.

    PubMed

    Allshouse, Michael R; Peacock, Thomas

    2015-09-01

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows. PMID:26428570

  4. Force-free magnetic fields - The magneto-frictional method

    NASA Technical Reports Server (NTRS)

    Yang, W. H.; Sturrock, P. A.; Antiochos, S. K.

    1986-01-01

    The problem under discussion is that of calculating magnetic field configurations in which the Lorentz force j x B is everywhere zero, subject to specified boundary conditions. We choose to represent the magnetic field in terms of Clebsch variables in the form B = grad alpha x grad beta. These variables are constant on any field line so that each field line is labeled by the corresponding values of alpha and beta. When the field is described in this way, the most appropriate choice of boundary conditions is to specify the values of alpha and beta on the bounding surface. We show that such field configurations may be calculated by a magneto-frictional method. We imagine that the field lines move through a stationary medium, and that each element of magnetic field is subject to a frictional force parallel to and opposing the velocity of the field line. This concept leads to an iteration procedure for modifying the variables alpha and beta, that tends asymptotically towards the force-free state. We apply the method first to a simple problem in two rectangular dimensions, and then to a problem of cylindrical symmetry that was previously discussed by Barnes and Sturrock (1972). In one important respect, our new results differ from the earlier results of Barnes and Sturrock, and we conclude that the earlier article was in error.
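
    Schematically, and in modern vector notation rather than the Clebsch variables used in the paper, the magneto-frictional idea advances the field with a fictitious velocity proportional to the residual Lorentz force, so the configuration relaxes until j x B vanishes. One common way to write the relaxation (with nu a friction coefficient) is the sketch below; the paper implements the same idea as an iteration on alpha and beta.

        \mathbf{v} = \frac{1}{\nu}\,\mathbf{j}\times\mathbf{B}, \qquad
        \mathbf{j} = \frac{1}{\mu_0}\nabla\times\mathbf{B}, \qquad
        \frac{\partial\mathbf{B}}{\partial t} = \nabla\times\left(\mathbf{v}\times\mathbf{B}\right)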

  5. Field calibration of binocular stereo vision based on fast reconstruction of 3D control field

    NASA Astrophysics Data System (ADS)

    Zhang, Haijun; Liu, Changjie; Fu, Luhua; Guo, Yin

    2015-08-01

    Construction of high-speed railway in China has entered a period of rapid growth. Accurately and quickly obtaining the dynamic envelope curve of a high-speed vehicle is an important guarantee for safe operation. The measuring system is based on binocular stereo vision. Considering the difficulties of field calibration, such as environmental changes and time limits, a field calibration method based on fast reconstruction of a three-dimensional control field was developed. With the rapid assembly of a pre-calibrated three-dimensional control field, whose coordinate accuracy is guaranteed by manufacturing accuracy and verified by V-STARS, two cameras take a quick shot of it at the same time. The field calibration parameters are then solved by a method combining a linear solution with nonlinear optimization. Experimental results showed that the measurement accuracy can reach +/- 0.5 mm and, more importantly, that while guaranteeing accuracy, the speed of the calibration and the portability of the devices have been improved considerably.

  6. FIELD ANALYTICAL SCREENING PROGRAM: PCP METHOD - INNOVATIVE TECHNOLOGY EVALUATION REPORT

    EPA Science Inventory

    The Field Analytical Screening Program (FASP) pentachlorophenol (PCP) method uses a gas chromatograph (GC) equipped with a megabore capillary column and flame ionization detector (FID) and electron capture detector (ECD) to identify and quantify PCP. The FASP PCP method is design...

  7. Field tests of carbon monitoring methods in forestry projects

    SciTech Connect

    1999-07-01

    In response to the emerging scientific consensus on the facts of global climate change, the international Joint Implementation (JI) program provided a pilot phase in which utilities and other industries could finance, among other activities, international efforts to sequester carbon dioxide, a major greenhouse gas. To make JI and its successor mechanisms workable, however, cost-effective methods are needed for monitoring progress in the reduction of greenhouse gas emissions. The papers in this volume describe field test experiences with methods for measuring carbon storage by three types of land use: natural forest, plantation forest, and agroforestry. Each test, in a slightly different land-use situation, contributes to the knowledge of carbon-monitoring methods as experienced in the field. The field tests of the agroforestry guidelines in Guatemala and the Philippines, for example, suggested adaptations in terms of plot size and method of delineating the total area for sampling.

  8. Copula-Based Interpolation and Simulation of Precipitation Fields

    NASA Astrophysics Data System (ADS)

    Haese, Barbara; Hörning, Sebastian; Schalge, Bernd; Kunstmann, Harald

    2016-04-01

    The knowledge of the spatio-temporal distribution of precipitation is crucial to improve the understanding of the regional water cycle. So far, precipitation fields derived from atmospheric models still suffer from large errors when it comes to reproducing the correct spatio-temporal distribution of rainfall fields. Usually, stochastic precipitation fields conditioned on observations are more reliable. In our approach we derive precipitation fields with the copula-based method of random mixing. In a first step we generate different observation types, here rain gauge and microwave link measurements, from a virtual reality of the Neckar catchment (VR). These virtual observations mimic the advantages and disadvantages of the real observations. Rain gauges provide high-quality information for a specific measurement point, but their spatial representativeness is often limited. Microwave links, e.g. from commercial cellular operators, on the other hand can be used to estimate line integrals of near-surface rainfall and provide a very dense observational system. The precipitation fields from this stochastic interpolation (or simulation) are conditioned on both the point and the line information. By using the virtual observations instead of real ones, we are able to compare the interpolated fields with the original fields. This allows us to evaluate the statistical precipitation fields in a very detailed manner with respect to the spatial and temporal resolution. In a further step we will use this method to simulate precipitation fields conditioned on real observations, which could be used for example as input data for surface-subsurface models or hydrological models.

  9. Geochemical field method for determination of nickel in plants

    USGS Publications Warehouse

    Reichen, L.E.

    1951-01-01

    The use of biogeochemical data in prospecting for nickel emphasizes the need for a simple, moderately accurate field method for the determination of nickel in plants. In order to follow leads provided by plants of unusual nickel content without loss of time, the plants should be analyzed and the results given to the field geologist promptly. The method reported in this paper was developed to meet this need. Speed is acquired by elimination of the customary drying and controlled ashing; the fresh vegetation is ashed in an open dish over a gasoline stove. The ash is put into solution with hydrochloric acid and the solution buffered. A chromograph is used to make a confined spot with an aliquot of the ash solution on dimethylglyoxime reagent paper. As little as 0.025% nickel in plant ash can be determined. With a simple modification, 0.003% can be detected. Data are given comparing the results obtained by an accepted laboratory procedure. Results by the field method are within 30% of the laboratory values. The field method for nickel in plants meets the requirements of biogeochemical prospecting with respect to accuracy, simplicity, speed, and ease of performance in the field. With experience, an analyst can make 30 determinations in an 8-hour work day in the field.

  10. Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.

    PubMed

    Xiao Yang; Jianjiang Feng; Jie Zhou

    2014-05-01

    The dictionary-based orientation field estimation approach has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Recognizing that ridge orientations at different locations of a fingerprint have different characteristics, we propose a localized-dictionaries-based orientation field estimation algorithm, in which a noisy orientation patch at a given location, output by a local estimation approach, is replaced by a real orientation patch from the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint must be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about the fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method outperforms previous ones markedly. PMID:26353229

  11. Method of electric field flow fractionation wherein the polarity of the electric field is periodically reversed

    DOEpatents

    Stevens, Fred J.

    1992-01-01

    A novel method of electric field flow fractionation for separating solute molecules from a carrier solution is disclosed. The method of the invention utilizes an electric field that is periodically reversed in polarity, in a time-dependent, wave-like manner. The parameters of the waveform, including amplitude, frequency and wave shape may be varied to optimize separation of solute species. The waveform may further include discontinuities to enhance separation.

  12. Method of determining interwell oil field fluid saturation distribution

    DOEpatents

    Donaldson, Erle C.; Sutterfield, F. Dexter

    1981-01-01

    A method of determining the oil and brine saturation distribution in an oil field by taking electrical current and potential measurements among a plurality of open-hole wells geometrically distributed throughout the oil field. Poisson's equation is utilized to develop fluid saturation distributions from the electrical current and potential measurement. Both signal generating equipment and chemical means are used to develop current flow among the several open-hole wells.
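
    As a hedged sketch of the kind of relation involved (the record above does not spell it out), steady current flow from a point source in a medium whose conductivity sigma(r) depends on the local oil/brine saturation obeys a Poisson-type equation, which is what links the measured currents and potentials at the open-hole wells to the saturation distribution:

        \nabla\cdot\left[\sigma(\mathbf{r})\,\nabla V(\mathbf{r})\right] = -I\,\delta(\mathbf{r}-\mathbf{r}_s)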

  13. Non-perturbative methods in relativistic field theory

    SciTech Connect

    Franz Gross

    2013-03-01

    This talk reviews relativistic methods used to compute bound and low energy scattering states in field theory, with emphasis on approaches that John Tjon and I discussed (and argued about) together. I compare the Bethe–Salpeter and Covariant Spectator equations, show some applications, and then report on some of the things we have learned from the beautiful Feynman–Schwinger technique for calculating the exact sum of all ladder and crossed ladder diagrams in field theory.

  14. Design of traveling wave tubes based on field theory

    SciTech Connect

    Vanderplaats, N.R.; Kodis, M.A. . Vacuum Electronics Branch); Freund, H.P. )

    1994-07-01

    A method is described for the design of helix traveling wave tubes (TWT) which is based on the linear field analysis of the coupled beam-wave system. The dispersion relations are obtained by matching of radial admittances at boundaries instead of the individual field components. This approach provides flexibility in modeling various beam and circuit configurations with relative ease by choosing the appropriate admittance functions for each case. The method is illustrated for the case of a solid beam inside a sheath helix which is loaded externally by lossy dielectric material, a conducting cylinder, and axial vanes. Extension of the analysis to include a thin tape helix model is anticipated in the near future. The TWT model may be divided into axial regions to include velocity tapers, lossy materials and severs, with the helix geometry in each region varied arbitrarily. The relations between the ac velocities, current densities, and axial electric fields are used to derive a general expression for the new amplitudes of the three forward waves at each axial boundary. The sum of the fields for the three forward waves (two waves in a drift region) is followed to the circuit output. Numerical results of the field analysis are compared with the coupled-mode Pierce theory. A method is suggested for applying the field analysis to accurate design of practical TWT's that have a more complex circuit geometry, which starts with a simple measurement of the dispersion of the helix circuit. The field analysis may then be used to generate a circuit having properties very nearly equivalent to those of the actual circuit.

  15. E-field extraction from Hx- and Hy- near field values by using plane wave spectrum method

    NASA Astrophysics Data System (ADS)

    Ravelo, B.; Riah, Z.; Baudry, D.; Mazari, B.

    2011-01-01

    This paper deals with a technique for calculating the 3D E-field components knowing only the two components (Hx and Hy) of the H-field in the near zone. The originality of the technique under study lies in its ability to take the influence of evanescent waves into account. The presented E-field extraction process is based on the Maxwell-Ampere relation combined with the plane wave spectrum (PWS) method. The efficiency of the proposed technique is demonstrated by comparing the E-field deduced from the H-field with the E-field radiated directly by combinations of elementary electric and magnetic dipoles in different configurations, using the Matlab text programming environment. In addition, as a concrete demonstrator, the concept was also validated through the computation of the EM wave radiated by an open-ended microstrip transmission line. Very good agreement between the exact E-field and the one extracted from the H-field was obtained for near fields scanned at heights z = 5 mm and 8 mm above the structure under test at the operating frequency f = 1 GHz. The presented technique can alleviate the difficulties of E-near-field measurement in EMC applications.
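
    In outline (the conventions are an assumption here, since they vary between authors), the extraction combines the source-free Maxwell-Ampere law with the plane wave spectrum: for an exp(j*omega*t) time dependence, each spectral component of the E-field follows from the transformed H-field as

        \nabla\times\mathbf{H} = j\omega\varepsilon_0\,\mathbf{E}
        \;\;\Longrightarrow\;\;
        \tilde{\mathbf{E}}(k_x,k_y,z) = -\frac{\mathbf{k}\times\tilde{\mathbf{H}}(k_x,k_y,z)}{\omega\varepsilon_0},
        \qquad k_z = \sqrt{k_0^2 - k_x^2 - k_y^2},

    where k_z becomes imaginary for the evanescent components (k_x^2 + k_y^2 > k_0^2), which is how their influence is retained.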

  16. A comprehensive method of estimating electric fields from vector magnetic field and Doppler measurements

    SciTech Connect

    Kazachenko, Maria D.; Fisher, George H.; Welsch, Brian T.

    2014-11-01

    Photospheric electric fields, estimated from sequences of vector magnetic field and Doppler measurements, can be used to estimate the flux of magnetic energy (the Poynting flux) into the corona and as time-dependent boundary conditions for dynamic models of the coronal magnetic field. We have modified and extended an existing method to estimate photospheric electric fields that combines a poloidal-toroidal decomposition (PTD) of the evolving magnetic field vector with Doppler and horizontal plasma velocities. Our current, more comprehensive method, which we dub the 'PTD-Doppler-FLCT Ideal' (PDFI) technique, can now incorporate Doppler velocities from non-normal viewing angles. It uses the FISHPACK software package to solve several two-dimensional Poisson equations, a faster and more robust approach than our previous implementations. Here, we describe systematic, quantitative tests of the accuracy and robustness of the PDFI technique using synthetic data from anelastic MHD (ANMHD) simulations, which have been used in similar tests in the past. We find that the PDFI method has less than 1% error in the total Poynting flux and a 10% error in the helicity flux rate at a normal viewing angle (θ = 0) and less than 25% and 10% errors, respectively, at large viewing angles (θ < 60°). We compare our results with other inversion methods at zero viewing angle and find that our method's estimates of the fluxes of magnetic energy and helicity are comparable to or more accurate than other methods. We also discuss the limitations of the PDFI method and its uncertainties.
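
    For context, the quantity being benchmarked, the flux of magnetic energy through the photosphere, follows from the estimated electric field via the vertical Poynting flux; a standard statement of that relation (a general formula, not the PDFI implementation itself) is

        S_z = \frac{1}{\mu_0}\left(\mathbf{E}\times\mathbf{B}\right)_z,
        \qquad
        \frac{dE_{\mathrm{mag}}}{dt} = \int_{\mathrm{photosphere}} S_z \, dA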

  17. A Multifunctional Interface Method for Coupling Finite Element and Finite Difference Methods: Two-Dimensional Scalar-Field Problems

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    2002-01-01

    A multifunctional interface method with capabilities for variable-fidelity modeling and multiple method analysis is presented. The methodology provides an effective capability by which domains with diverse idealizations can be modeled independently to exploit the advantages of one approach over another. The multifunctional method is used to couple independently discretized subdomains, and it is used to couple the finite element and the finite difference methods. The method is based on a weighted residual variational method and is presented for two-dimensional scalar-field problems. A verification test problem and a benchmark application are presented, and the computational implications are discussed.

  18. Method of improving field emission characteristics of diamond thin films

    DOEpatents

    Krauss, Alan R.; Gruen, Dieter M.

    1999-01-01

    A method of preparing diamond thin films with improved field emission properties. The method includes preparing a diamond thin film on a substrate, such as Mo, W, Si and Ni. An atmosphere of hydrogen (molecular or atomic) can be provided above the already deposited film to form absorbed hydrogen to reduce the work function and enhance field emission properties of the diamond film. In addition, hydrogen can be absorbed on intergranular surfaces to enhance electrical conductivity of the diamond film. The treated diamond film can be part of a microtip array in a flat panel display.

  19. Method of improving field emission characteristics of diamond thin films

    DOEpatents

    Krauss, A.R.; Gruen, D.M.

    1999-05-11

    A method of preparing diamond thin films with improved field emission properties is disclosed. The method includes preparing a diamond thin film on a substrate, such as Mo, W, Si and Ni. An atmosphere of hydrogen (molecular or atomic) can be provided above the already deposited film to form absorbed hydrogen to reduce the work function and enhance field emission properties of the diamond film. In addition, hydrogen can be absorbed on intergranular surfaces to enhance electrical conductivity of the diamond film. The treated diamond film can be part of a microtip array in a flat panel display. 3 figs.

  20. DC-based magnetic field controller

    DOEpatents

    Kotter, D.K.; Rankin, R.A.; Morgan, J.P.

    1994-05-31

    A magnetic field controller is described for laboratory devices and in particular to dc operated magnetic field controllers for mass spectrometers, comprising a dc power supply in combination with improvements to a Hall probe subsystem, display subsystem, preamplifier, field control subsystem, and an output stage. 1 fig.

  1. DC-based magnetic field controller

    DOEpatents

    Kotter, Dale K.; Rankin, Richard A.; Morgan, John P.

    1994-01-01

    A magnetic field controller for laboratory devices and in particular to dc operated magnetic field controllers for mass spectrometers, comprising a dc power supply in combination with improvements to a hall probe subsystem, display subsystem, preamplifier, field control subsystem, and an output stage.

  2. [Sub-field imaging spectrometer design based on Offner structure].

    PubMed

    Wu, Cong-Jun; Yan, Chang-Xiang; Liu, Wei; Dai, Hu

    2013-08-01

    To satisfy imaging spectrometers' requirements for miniaturization, light weight, and large field of view in space applications, the current optical design of imaging spectrometers with the Offner structure was analyzed, and a simple method to design an imaging spectrometer with a concave grating based on current approaches is given. Using the method offered, a sub-field imaging spectrometer with a 400 km orbital altitude, a 0.4-1.0 µm wavelength range, an F-number of 5, a 720 mm focal length, and a 4.3 degree total field was designed. Optical fiber is used to transfer the image in the telescope's focal plane to three slits arranged in the same plane so as to achieve sub-field imaging. A CCD detector with 1024 x 1024 pixels of 18 µm x 18 µm is used to receive the image of the three slits after dispersion. Using ZEMAX software for optimization and tolerance analysis, the system satisfies a 5 nm spectral resolution and a 5 m field resolution, and the MTF is over 0.62 at 28 lp/mm. The field of view of the system is almost 3 times that of similar instruments used in space probes. PMID:24159892

  3. Evanescent Field Based Photoacoustics: Optical Property Evaluation at Surfaces.

    PubMed

    Goldschmidt, Benjamin S; Rudy, Anna M; Nowak, Charissa A; Tsay, Yowting; Whiteside, Paul J D; Hunt, Heather K

    2016-01-01

    Here, we present a protocol to estimate material and surface optical properties using the photoacoustic effect combined with total internal reflection. Optical property evaluation of thin films and the surfaces of bulk materials is an important step in understanding new optical material systems and their applications. The method presented can estimate thickness, refractive index, and use absorptive properties of materials for detection. This metrology system uses evanescent field-based photoacoustics (EFPA), a field of research based upon the interaction of an evanescent field with the photoacoustic effect. This interaction and its resulting family of techniques allow the technique to probe optical properties within a few hundred nanometers of the sample surface. This optical near field allows for the highly accurate estimation of material properties on the same scale as the field itself such as refractive index and film thickness. With the use of EFPA and its sub techniques such as total internal reflection photoacoustic spectroscopy (TIRPAS) and optical tunneling photoacoustic spectroscopy (OTPAS), it is possible to evaluate a material at the nanoscale in a consolidated instrument without the need for many instruments and experiments that may be cost prohibitive. PMID:27500652

  4. Prediction of sound fields in acoustical cavities using the boundary element method. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Kipp, C. R.; Bernhard, R. J.

    1985-01-01

    A method was developed to predict sound fields in acoustical cavities. The method is based on the indirect boundary element method. An isoparametric quadratic boundary element is incorporated. Pressure, velocity and/or impedance boundary conditions may be applied to a cavity by using this method. The capability to include acoustic point sources within the cavity is implemented. The method is applied to the prediction of sound fields in spherical and rectangular cavities. All three boundary condition types are verified. Cases with a point source within the cavity domain are also studied. Numerically determined cavity pressure distributions and responses are presented. The numerical results correlate well with available analytical results.

  5. A Topologically-Informed Hyperstreamline Seeding Method for Alignment Tensor Fields.

    PubMed

    Fu, Fred; Abukhdeir, Nasser Mohieddin

    2015-03-01

    A topologically-informed hyperstreamline seeding method is presented for visualization of alignment tensor fields. The method is inspired by and applied to visualization of nematic liquid crystal (LC) orientation dynamics simulations. The method distributes hyperstreamlines along domain boundaries and edges of a nearest-neighbor graph whose vertices are degenerate regions of the alignment tensor field, which correspond to orientational defects in a nematic LC domain. This is accomplished without iteration while conforming to a user-specified spacing between hyperstreamlines and avoids possible failure modes associated with hyperstreamline integration in the vicinity of degeneracies in alignment (orientational defects). It is shown that the presented seeding method enables automated hyperstreamline-based visualization of a broad range of alignment tensor fields which enhances the ability of researchers to interpret these fields and provides an alternative to using glyph-based techniques. PMID:26357072

  6. Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.

    PubMed

    Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M

    2016-06-21

    We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy. PMID:27230942

  7. Fast Field Calibration of MIMU Based on the Powell Algorithm

    PubMed Central

    Ma, Lin; Chen, Wanwan; Li, Bin; You, Zheng; Chen, Zhigang

    2014-01-01

    The calibration of micro inertial measurement units is important in ensuring the precision of navigation systems, which are equipped with microelectromechanical system sensors that suffer from various errors. However, traditional calibration methods cannot meet the demand for fast field calibration. This paper presents a fast field calibration method based on the Powell algorithm. The key constraints of this calibration are that the norm of the accelerometer measurement vector equals the gravity magnitude and that the norm of the gyro measurement vector equals the rotational velocity input. To resolve the error parameters, a mathematical error model of the calibration is established and the Powell algorithm is applied, with convergence of the nonlinear equations used as the stopping criterion. All parameters can then be obtained in this manner. A comparison of the proposed method with the traditional calibration method through navigation tests demonstrates the performance of the proposed calibration method. The proposed calibration method also requires less time than the traditional calibration method. PMID:25177801
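
    A minimal sketch of the accelerometer half of such a calibration (illustrative only, not the authors' code; the error model, parameter names, and synthetic data are assumptions): the cost function penalizes deviations of the corrected static-measurement norms from the gravity magnitude, and SciPy's Powell minimizer searches the scale-factor and bias parameters.

        import numpy as np
        from scipy.optimize import minimize

        G = 9.80665  # gravity magnitude, m/s^2

        def cost(params, static_meas):
            """Sum of squared deviations of corrected measurement norms from g.

            params = [sx, sy, sz, bx, by, bz] (scale factors and biases; a full
            calibration would also include misalignment terms)."""
            s, b = params[:3], params[3:]
            corrected = (static_meas - b) * s
            return np.sum((np.linalg.norm(corrected, axis=1) - G) ** 2)

        # Hypothetical static measurements taken in several orientations
        rng = np.random.default_rng(0)
        true_s, true_b = np.array([1.02, 0.98, 1.01]), np.array([0.05, -0.03, 0.10])
        dirs = rng.normal(size=(20, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        meas = (G * dirs) / true_s + true_b + 0.001 * rng.normal(size=(20, 3))

        res = minimize(cost, x0=[1, 1, 1, 0, 0, 0], args=(meas,), method='Powell')
        print(res.x)  # recovered scale factors and biases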

  8. Evolutionary Based Techniques for Fault Tolerant Field Programmable Gate Arrays

    NASA Technical Reports Server (NTRS)

    Larchev, Gregory V.; Lohn, Jason D.

    2006-01-01

    The use of SRAM-based Field Programmable Gate Arrays (FPGAs) is becoming more and more prevalent in space applications. Commercial-grade FPGAs are potentially susceptible to permanently debilitating Single-Event Latchups (SELs). Repair methods based on Evolutionary Algorithms may be applied to FPGA circuits to enable successful fault recovery. This paper presents the experimental results of applying such methods to repair four commonly used circuits (quadrature decoder, 3-by-3-bit multiplier, 3-by-3-bit adder, 440-7 decoder) into which a number of simulated faults have been introduced. The results suggest that evolutionary repair techniques can improve the process of fault recovery when used instead of or as a supplement to Triple Modular Redundancy (TMR), which is currently the predominant method for mitigating FPGA faults.

  9. DOM Based XSS Detecting Method Based on Phantomjs

    NASA Astrophysics Data System (ADS)

    Dong, Ri-Zhan; Ling, Jie; Liu, Yi

    Because malicious code does not appear in the HTML source code, DOM based XSS cannot be detected by traditional methods. By analyzing the causes of DOM based XSS, this paper proposes a detection method for DOM based XSS based on PhantomJS. The method uses function hijacking to detect dangerous operations, and a prototype system is implemented. Comparison with existing tools shows that the system improves the detection rate and that the method is effective at detecting DOM based XSS.

  10. Accurate wavelength calibration method for flat-field grating spectrometers.

    PubMed

    Du, Xuewei; Li, Chaoyang; Xu, Zhe; Wang, Qiuping

    2011-09-01

    A portable spectrometer prototype is built to study wavelength calibration for flat-field grating spectrometers. An accurate calibration method called parameter fitting is presented. Both optical and structural parameters of the spectrometer are included in the wavelength calibration model, which accurately describes the relationship between wavelength and pixel position. Along with higher calibration accuracy, the proposed calibration method can provide information about errors in the installation of the optical components, which will be helpful for spectrometer alignment. PMID:21929865
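
    As a simplified stand-in for the parameter-fitting idea (the real model ties wavelength to pixel position through the spectrometer's optical and structural parameters; the cubic form, pixel positions, and lamp-line pairings below are illustrative assumptions), the sketch fits a pixel-to-wavelength map to known emission lines with SciPy.

        import numpy as np
        from scipy.optimize import curve_fit

        def pixel_to_wavelength(p, a0, a1, a2, a3):
            """Cubic dispersion model; a real parameter fit would instead use the
            optical and structural parameters of the spectrometer."""
            q = p / 1000.0  # scale pixel index to keep the fit well conditioned
            return a0 + a1 * q + a2 * q**2 + a3 * q**3

        # Hypothetical calibration data: (pixel position, known line wavelength in nm)
        pixels = np.array([113.2, 402.7, 780.4, 1205.9, 1648.3])
        lams = np.array([253.65, 313.16, 404.66, 546.07, 696.54])

        popt, pcov = curve_fit(pixel_to_wavelength, pixels, lams)
        print(popt)                               # fitted coefficients
        print(pixel_to_wavelength(900.0, *popt))  # wavelength at an arbitrary pixel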

  11. Background field method and the cohomology of renormalization

    NASA Astrophysics Data System (ADS)

    Anselmi, Damiano

    2016-03-01

    Using the background field method and the Batalin-Vilkovisky formalism, we prove a key theorem on the cohomology of perturbatively local functionals of arbitrary ghost numbers in renormalizable and nonrenormalizable quantum field theories whose gauge symmetries are general covariance, local Lorentz symmetry, non-Abelian Yang-Mills symmetries and Abelian gauge symmetries. Interpolating between the background field approach and the usual, nonbackground approach by means of a canonical transformation, we take advantage of the properties of both approaches and prove that a closed functional is the sum of an exact functional plus a functional that depends only on the physical fields and possibly the ghosts. The assumptions of the theorem are the mathematical versions of general properties that characterize the counterterms and the local contributions to the potential anomalies. This makes the outcome a theorem on the cohomology of renormalization, rather than the whole local cohomology. The result supersedes numerous involved arguments that are available in the literature.

  12. Field Science Ethnography: Methods For Systematic Observation on an Expedition

    NASA Technical Reports Server (NTRS)

    Clancey, William J.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    The Haughton-Mars expedition is a multidisciplinary project, exploring an impact crater in an extreme environment to determine how people might live and work on Mars. The expedition seeks to understand and field test Mars facilities, crew roles, operations, and computer tools. I combine an ethnographic approach to establish a baseline understanding of how scientists prefer to live and work when relatively unencumbered, with a participatory design approach of experimenting with procedures and tools in the context of use. This paper focuses on field methods for systematically recording and analyzing the expedition's activities. Systematic photography and time-lapse video are combined with concept mapping to organize and present information. This hybrid approach is generally applicable to the study of modern field expeditions having a dozen or more multidisciplinary participants, spread over a large terrain during multiple field seasons.

  13. Hyperspectral Imaging and Related Field Methods: Building the Science

    NASA Technical Reports Server (NTRS)

    Goetz, Alexander F. H.; Steffen, Konrad; Wessman, Carol

    1999-01-01

    The proposal requested funds for the computing power to bring hyperspectral image processing into undergraduate and graduate remote sensing courses. This upgrade made it possible to handle more students in these oversubscribed courses and to enhance CSES' summer short course entitled "Hyperspectral Imaging and Data Analysis" provided for government, industry, university and military. Funds were also requested to build field measurement capabilities through the purchase of spectroradiometers, canopy radiation sensors and a differential GPS system. These instruments provided systematic and complete sets of field data for the analysis of hyperspectral data with the appropriate radiometric and wavelength calibration as well as atmospheric data needed for application of radiative transfer models. The proposed field equipment made it possible to team-teach a new field methods course, unique in the country, that took advantage of the expertise of the investigators rostered in three different departments, Geology, Geography and Biology.

  14. Field Deployable Method for Arsenic Speciation in Water.

    PubMed

    Voice, Thomas C; Flores Del Pino, Lisveth V; Havezov, Ivan; Long, David T

    2011-01-01

    Contamination of drinking water supplies by arsenic is a world-wide problem. Total arsenic measurements are commonly used to investigate and regulate arsenic in water, but it is well understood that arsenic occurs in several chemical forms, and these exhibit different toxicities. It is problematic to use laboratory-based speciation techniques to assess exposure as it has been suggested that the distribution of species is not stable during transport in some types of samples. A method was developed in this study for the on-site speciation of the most toxic dissolved arsenic species: As (III), As (V), monomethylarsonic acid (MMA) and dimethylarsenic acid (DMA). Development criteria included ease of use under field conditions, applicability at levels of concern for drinking water, and analytical performance. The approach is based on selective retention of arsenic species on specific ion-exchange chromatography cartridges followed by selective elution and quantification using graphite furnace atomic absorption spectroscopy. Water samples can be delivered to a set of three cartridges using either syringes or peristaltic pumps. Species distribution is stable at this point, and the cartridges can be transported to the laboratory for elution and quantitative analysis. A set of ten replicate spiked samples of each compound, having concentrations between 1 and 60 µg/L, were analyzed. Arsenic recoveries ranged from 78-112 % and relative standard deviations were generally below 10%. Resolution between species was shown to be outstanding, with the only limitation being that the capacity for As (V) was limited to approximately 50 µg/L. This could be easily remedied by changes in either cartridge design, or the extraction procedure. Recoveries were similar for two spiked hard groundwater samples indicating that dissolved minerals are not likely to be problematic. These results suggest that this methodology can be used for analysis of the four primary arsenic species of concern in

  15. Field Deployable Method for Arsenic Speciation in Water

    PubMed Central

    Voice, Thomas C.; Flores del Pino, Lisveth V.; Havezov, Ivan; Long, David T.

    2010-01-01

    Contamination of drinking water supplies by arsenic is a world-wide problem. Total arsenic measurements are commonly used to investigate and regulate arsenic in water, but it is well understood that arsenic occurs in several chemical forms, and these exhibit different toxicities. It is problematic to use laboratory-based speciation techniques to assess exposure as it has been suggested that the distribution of species is not stable during transport in some types of samples. A method was developed in this study for the on-site speciation of the most toxic dissolved arsenic species: As (III), As (V), monomethylarsonic acid (MMA) and dimethylarsenic acid (DMA). Development criteria included ease of use under field conditions, applicability at levels of concern for drinking water, and analytical performance. The approach is based on selective retention of arsenic species on specific ion-exchange chromatography cartridges followed by selective elution and quantification using graphite furnace atomic absorption spectroscopy. Water samples can be delivered to a set of three cartridges using either syringes or peristaltic pumps. Species distribution is stable at this point, and the cartridges can be transported to the laboratory for elution and quantitative analysis. A set of ten replicate spiked samples of each compound, having concentrations between 1 and 60 µg/L, were analyzed. Arsenic recoveries ranged from 78–112 % and relative standard deviations were generally below 10%. Resolution between species was shown to be outstanding, with the only limitation being that the capacity for As (V) was limited to approximately 50 µg/L. This could be easily remedied by changes in either cartridge design, or the extraction procedure. Recoveries were similar for two spiked hard groundwater samples indicating that dissolved minerals are not likely to be problematic. These results suggest that this methodology can be used for analysis of the four primary arsenic species of concern in

  16. FIELD SCREENING METHODS FOR HAZARDOUS WASTES AND TOXIC CHEMICALS

    EPA Science Inventory

    The purpose of this document is to present the technical papers that were presented at the Second International Symposium on Field Screening Methods for Hazardous Wastes and Toxic Chemicals. Sixty platform presentations were made and included in one of ten sessions: chemical sensor...

  17. Work function measurements by the field emission retarding potential method

    NASA Technical Reports Server (NTRS)

    Swanson, L. W.; Strayer, R. W.; Mackie, W. A.

    1971-01-01

    Using the field emission retarding potential method true work functions have been measured for the following monocrystalline substrates: W(110), W(111), W(100), Nb(100), Ni(100), Cu(100), Ir(110) and Ir(111). The electron elastic and inelastic reflection coefficients from several of these surfaces have also been examined near zero primary beam energy.

  18. Unsaturated soil hydraulic conductivity: The field infiltrometer method

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Theory: Field methods to measure the unsaturated soil hydraulic conductivity assume the presence of steady-state water flow. Soil infiltrometers are designed to apply water onto the soil surface at a constant negative pressure. Water is applied to the soil from a Mariotte device through a porous membrane...

  19. FIELD SCREENING METHOD FOR POLYCHLORINATED BIPHENYL COMPOUNDS IN WATER

    EPA Science Inventory

    The U.S. Environmental Protection Agency has been exploring the complexation of silver ions with certain organic pollutants as part of a search for alternative low-cost, rapid, field screening methods. The result is a rapid, easy, and inexpensive procedure for determining polychlo...

  20. A study on the discrete image method for calculation of transient electromagnetic fields in geological media

    NASA Astrophysics Data System (ADS)

    Meng, Qing-Xin; Pan, He-Ping; Luo, Miao

    2015-12-01

    We conducted a study on the numerical calculation and response analysis of a transient electromagnetic field generated by a ground source in geological media. One solution method, the traditional discrete image method, involves complex operations, and its digital filtering algorithm requires a large number of calculations. To address these problems, we propose an improved discrete image method in which the electromagnetic field solution is obtained in the real domain using the Gaver-Stehfest algorithm for approximate inverse Laplace transformation, the objective kernel function is approximated by exponentials using the Prony method, the transient electromagnetic field is constructed according to discrete image theory, and a closed-form solution for the approximation coefficients is derived. To verify the method, we calculated the transient electromagnetic field in a homogeneous model and compared it with the results obtained from the Hankel-transform digital filtering method. The results show that the method has considerable accuracy and good applicability. We then used this method to calculate the transient electromagnetic field generated by a ground magnetic dipole source in a typical geoelectric model and analyzed the horizontal component response of the induced magnetic field obtained from the "ground excitation-stratum measurement" configuration. We conclude that the horizontal component response of a transient field is related to the geoelectric structure, the observation time, the spatial location, and other factors. The horizontal component response of the induced magnetic field reflects the eddy current field distribution and its vertical gradient variation. During the detection of abnormal objects, positions with zero or comparatively large offset should be selected for drillhole measurements, or a comparatively long observation delay should be adopted, to reduce the influence of the ambient field on the survey results. The discrete image method and forward calculation results in this paper
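
    For reference, a compact implementation of the Gaver-Stehfest step used for the approximate inverse Laplace transform is sketched below (illustrative Python, not the authors' code); the Prony-based kernel approximation and the discrete-image summation would be built on top of such a routine.

        import math

        def stehfest_coefficients(N):
            """Gaver-Stehfest weights V_k for an even number of terms N."""
            V = []
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, N // 2) + 1):
                    s += (j ** (N // 2) * math.factorial(2 * j)) / (
                        math.factorial(N // 2 - j) * math.factorial(j)
                        * math.factorial(j - 1) * math.factorial(k - j)
                        * math.factorial(2 * j - k))
                V.append((-1) ** (k + N // 2) * s)
            return V

        def gaver_stehfest(F, t, N=12):
            """Approximate f(t) from its Laplace transform F(s)."""
            ln2_t = math.log(2.0) / t
            V = stehfest_coefficients(N)
            return ln2_t * sum(Vk * F((k + 1) * ln2_t) for k, Vk in enumerate(V))

        # Quick check against a known pair: F(s) = 1/(s + 1)  <->  f(t) = exp(-t)
        print(gaver_stehfest(lambda s: 1.0 / (s + 1.0), t=1.0), math.exp(-1.0))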

  1. DNA-based methods of geochemical prospecting

    DOEpatents

    Ashby, Matthew

    2011-12-06

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  2. Real-Time Tracking Method for a Magnetic Target Using Total Geomagnetic Field Intensity

    NASA Astrophysics Data System (ADS)

    Fan, Liming; Kang, Chong; Zhang, Xiaojun; Wan, Shengwei

    2016-06-01

    We propose an efficient and effective method for real-time tracking of a long-range magnetic target using total geomagnetic field intensity. This method is based on a scalar magnetometer sensor array and an improved particle swarm optimization algorithm. Because of geomagnetic field variations, the detection range of methods based on the gradient tensor is short. To increase the detection range, the geomagnetic field variations must be eliminated in the method. In this paper, the geomagnetic quasi-gradient calculated from the total geomagnetic field intensity in the sensor array is used. We design a sensor array with five magnetometers and use the geomagnetic quasi-gradient to eliminate the geomagnetic field variations. The improved particle swarm optimization (IPSO) algorithm, which minimizes the errors between measured and calculated total geomagnetic field values, is applied in this real-time tracking method to track the position of a long-range magnetic target. The principle of the method and the steps of the IPSO algorithm are described in detail. The method is validated with a numerical simulation. The results show that the average relative error of position is less than 2% and the execution time is less than 1.5 s.
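
    As a rough illustration of the optimization step, the following is a plain particle swarm optimizer (not the paper's improved IPSO variant) applied to a placeholder misfit function; in the paper the objective is the mismatch between measured and calculated total-field values at the array.

      import numpy as np

      def pso_minimize(objective, bounds, n_particles=30, n_iter=200,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
          """Plain particle swarm optimization (the paper uses an improved variant)."""
          rng = np.random.default_rng(seed)
          bounds = np.asarray(bounds, dtype=float)
          lo, hi = bounds[:, 0], bounds[:, 1]
          x = rng.uniform(lo, hi, size=(n_particles, lo.size))   # particle positions
          v = np.zeros_like(x)                                   # particle velocities
          pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
          gbest = pbest[np.argmin(pbest_f)].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, *x.shape))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              f = np.array([objective(p) for p in x])
              better = f < pbest_f
              pbest[better], pbest_f[better] = x[better], f[better]
              gbest = pbest[np.argmin(pbest_f)].copy()
          return gbest, pbest_f.min()

      # Placeholder misfit: squared error between "measured" values and a toy model;
      # a real tracker would model the target's magnetic field at each sensor.
      measured = np.array([1.0, 0.8, 1.2, 0.9, 1.1])
      model = lambda p: p[0] + p[1] * np.arange(5)
      best, err = pso_minimize(lambda p: np.sum((model(p) - measured) ** 2),
                               bounds=[(-2.0, 2.0), (-2.0, 2.0)])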

  3. A field expansions method for scattering by periodic multilayered media.

    PubMed

    Malcolm, Alison; Nicholls, David P

    2011-04-01

    The interaction of acoustic and electromagnetic waves with periodic structures plays an important role in a wide range of problems of scientific and technological interest. This contribution focuses upon the robust and high-order numerical simulation of a model for the interaction of pressure waves generated within the earth incident upon layers of sediment near the surface. Herein described is a boundary perturbation method for the numerical simulation of scattering returns from irregularly shaped periodic layered media. The method requires only the discretization of the layer interfaces (so that the number of unknowns is an order of magnitude smaller than finite difference and finite element simulations), while it avoids not only the need for specialized quadrature rules but also the dense linear systems characteristic of boundary integral/element methods. The approach is a generalization to multiple layers of Bruno and Reitich's "Method of Field Expansions" for dielectric structures with two layers. By simply considering the entire structure simultaneously, rather than solving in individual layers separately, the full field can be recovered in time proportional to the number of interfaces. As with the original field expansions method, this approach is extremely efficient and spectrally accurate. PMID:21476635

  4. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.

  5. Magnetic space-based field measurements

    NASA Technical Reports Server (NTRS)

    Langel, R. A.

    1981-01-01

    Because the near-Earth magnetic field is a complex combination of fields from outside the Earth, fields from its core, and fields from its crust, measurements from space prove to be the only practical way to obtain timely, global surveys. Due to the difficulty of making accurate vector measurements, early satellites such as Sputnik and Vanguard measured only the field magnitude. The attitude accuracy was 20 arc sec. Both the Earth's core fields and the fields arising from its crust were mapped from satellite data. The standard model of the core consists of a scalar potential represented by a spherical harmonic series. Models of the crustal field are relatively new. Mathematical representation is achieved in localized areas by arrays of dipoles appropriately located in the Earth's crust. Measurements of the Earth's field are used in navigation, to map charged particles in the magnetosphere, to study fluid properties in the Earth's core, to infer the conductivity of the upper mantle, and to delineate regional-scale geological features.

  6. Interferometric methods for mapping static electric and magnetic fields

    NASA Astrophysics Data System (ADS)

    Pozzi, Giulio; Beleggia, Marco; Kasama, Takeshi; Dunin-Borkowski, Rafal E.

    2014-02-01

    The mapping of static electric and magnetic fields using electron probes with a resolution and sensitivity that are sufficient to reveal nanoscale features in materials requires the use of phase-sensitive methods such as the shadow technique, coherent Foucault imaging and the Transport of Intensity Equation. Among these approaches, image-plane off-axis electron holography in the transmission electron microscope has acquired a prominent role thanks to its quantitative capabilities and broad range of applicability. After a brief overview of the main ideas and methods behind field mapping, we focus on theoretical models that form the basis of the quantitative interpretation of electron holographic data. We review the application of electron holography to a variety of samples (including electric fields associated with p-n junctions in semiconductors, quantized magnetic flux in superconductors and magnetization topographies in nanoparticles and other magnetic materials) and electron-optical geometries (including multiple biprism, amplitude and mixed-type set-ups). We conclude by highlighting the emerging perspectives of (i) three-dimensional field mapping using electron holographic tomography and (ii) the model-independent determination of the locations and magnitudes of field sources (electric charges and magnetic dipoles) directly from electron holographic data.

  7. Extending methods: using Bourdieu's field analysis to further investigate taste

    NASA Astrophysics Data System (ADS)

    Schindel Dimick, Alexandra

    2015-06-01

    In this commentary on Per Anderhag, Per-Olof Wickman and Karim Hamza's article Signs of taste for science, I consider how their study is situated within the concern for the role of science education in the social and cultural production of inequality. Their article provides a finely detailed methodology for analyzing the constitution of taste within science education classrooms. Nevertheless, because the authors' socially situated methodology draws upon Bourdieu's theories, it seems equally important to extend these methods to consider how and why students make particular distinctions within a relational context—a key aspect of Bourdieu's theory of cultural production. By situating the constitution of taste within Bourdieu's field analysis, researchers can explore the ways in which students' tastes and social positionings are established and transformed through time, space, place, and their ability to navigate the field. I describe the process of field analysis in relation to the authors' paper and suggest that combining the authors' methods with a field analysis can provide a strong methodological and analytical framework in which theory and methods combine to create a detailed understanding of students' interest in relation to their context.

  8. Generalized theoretical method for the interaction between arbitrary nonuniform electric field and molecular vibrations: Toward near-field infrared spectroscopy and microscopy.

    PubMed

    Iwasa, Takeshi; Takenaka, Masato; Taketsugu, Tetsuya

    2016-03-28

    A theoretical method to compute infrared absorption spectra when a molecule is interacting with an arbitrary nonuniform electric field such as near-fields is developed and numerically applied to simple model systems. The method is based on the multipolar Hamiltonian where the light-matter interaction is described by a spatial integral of the inner product of the molecular polarization and applied electric field. The computation scheme is developed under the harmonic approximation for the molecular vibrations and the framework of modern electronic structure calculations such as the density functional theory. Infrared reflection absorption and near-field infrared absorption are considered as model systems. The obtained IR spectra successfully reflect the spatial structure of the applied electric field and corresponding vibrational modes, demonstrating applicability of the present method to analyze modern nanovibrational spectroscopy using near-fields. The present method can use arbitrary electric fields and thus can integrate the two fields of computational chemistry and electromagnetics. PMID:27036436

  9. Method of recovering oil-based fluid

    SciTech Connect

    Brinkley, H.E.

    1993-07-13

    A method is described of recovering oil-based fluid, said method comprising the steps of: applying an oil-based fluid absorbent cloth of man-made fiber to an oil-based fluid, the cloth having at least a portion thereof that is napped so as to raise ends and loops of the man-made fibers and define voids; and absorbing the oil-based fluid into the napped portion of the cloth.

  10. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGESBeta

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.
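
    The StOMP extension described above belongs to the greedy matching-pursuit family. The sketch below shows plain orthogonal matching pursuit on a random dictionary, not the authors' StOMP variant with priors and non-negativity; it is only meant to illustrate the basic select-then-refit loop.

      import numpy as np

      def omp(A, y, n_nonzero):
          """Plain orthogonal matching pursuit: solve y ~ A @ x with a sparse x.
          (StOMP, used in the paper, selects several atoms per stage instead of one.)"""
          x = np.zeros(A.shape[1])
          residual = y.copy()
          support = []
          for _ in range(n_nonzero):
              # pick the column most correlated with the current residual
              idx = int(np.argmax(np.abs(A.T @ residual)))
              if idx not in support:
                  support.append(idx)
              # least-squares refit on the current support, then update the residual
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x[support] = coef
          return x

      # Tiny demo with a random dictionary and a 3-sparse signal
      rng = np.random.default_rng(1)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100)
      x_true[[5, 37, 80]] = [1.0, -2.0, 0.5]
      x_hat = omp(A, A @ x_true, n_nonzero=3)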

  11. Field methods to measure surface displacement and strain with the Video Image Correlation method

    NASA Technical Reports Server (NTRS)

    Maddux, Gary A.; Horton, Charles M.; Mcneill, Stephen R.; Lansing, Matthew D.

    1994-01-01

    The objective of this project was to develop methods and application procedures to measure displacement and strain fields during the structural testing of aerospace components using paint speckle in conjunction with the Video Image Correlation (VIC) system.

  12. Multi-electron systems in strong magnetic fields II: A fixed-phase diffusion quantum Monte Carlo application based on trial functions from a Hartree-Fock-Roothaan method

    NASA Astrophysics Data System (ADS)

    Boblest, S.; Meyer, D.; Wunner, G.

    2014-11-01

    We present a quantum Monte Carlo application for the computation of energy eigenvalues for atoms and ions in strong magnetic fields. The required guiding wave functions are obtained with the Hartree-Fock-Roothaan code described in the accompanying publication (Schimeczek and Wunner, 2014). Our method yields highly accurate results for the binding energies of symmetry subspace ground states and at the same time provides a means for quantifying the quality of the results obtained with the above-mentioned Hartree-Fock-Roothaan method. Catalogue identifier: AETV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AETV_v1_0.html. Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 72 284. No. of bytes in distributed program, including test data, etc.: 604 948. Distribution format: tar.gz. Programming language: C++. Computer: Cluster of 1 to ~500 HP Compaq dc5750. Operating system: Linux. Has the code been vectorized or parallelized?: Yes. Code includes MPI directives. RAM: 500 MB per node. Classification: 2.1. External routines: Boost::Serialization, Boost::MPI, LAPACK, BLAS. Nature of problem: Quantitative modeling of features observed in the X-ray spectra of isolated neutron stars is hampered by the lack of sufficiently large and accurate databases for atoms and ions up to the last fusion product iron, at high magnetic field strengths. The predominant amount of line data in the literature has been calculated with Hartree-Fock methods, which are intrinsically restricted in precision. Our code is intended to provide a powerful tool for calculating very accurate energy values from, and thereby improving the quality of, existing Hartree-Fock results. Solution method: The fixed-phase quantum Monte Carlo method is used in combination with guiding functions obtained in Hartree

  13. An Efficient Method for Far-field Tsunami Forecasting

    NASA Astrophysics Data System (ADS)

    Hossen, M. J.; Cummins, P. R.; Dettmer, J.; Baba, T.

    2015-12-01

    We have developed a hybrid method to forecast far-field tsunamis by combining traditional, least-squares inversion for initial sea surface displacement (LSQ) and time reverse imaging (TRI). This method has the same source representation as LSQ, which involves dividing the source region into a grid of "point" sources. For each of these, a tsunami Green's function (GF) is computed using a basis function for sea surface displacement whose support is concentrated near the grid point. Instead of solving the linear inverse problem for initial sea surface displacement using regularized least-squares, we apply the TRI method to estimate initial sea surface displacement at each source grid point by convolving GFs with time-reversed observed waveforms recorded near the source region. This tsunami-source estimate is then used to forecast tsunami waveforms at greater distance. We apply this method to the 2011 Tohoku, Japan tsunami because of the availability of an extensive set of high-quality tsunami waveform recordings. The results show that the method can predict tsunami waveforms having good agreement with observed waveforms at near-field stations not part of the source estimation, and excellent agreement with far-field waveforms. The spatial distribution of cumulative sea surface displacement agrees well with other models obtained in more sophisticated inversions, but the temporal resolution of this method does not resolve source kinematics. The method has potential for application in tsunami warning systems, as it is computationally efficient and can be applied to estimate the initial source model by applying precomputed Green's functions in order to provide more accurate and realistic tsunami predictions.
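
    The core of the time reverse imaging step described above is a zero-lag correlation of each grid point's Green's functions with the time-reversed records, stacked over stations. A schematic sketch under assumed array shapes (not the authors' implementation):

      import numpy as np

      def tri_source_estimate(G, d, dt):
          """Schematic time-reverse imaging estimate of initial sea surface displacement.

          G  : array (n_src, n_sta, n_t), Green's functions for a unit source at each grid point
          d  : array (n_sta, n_t), observed waveforms recorded near the source region
          dt : sample interval
          The value at grid point i stacks, over stations j, the zero-lag correlation of
          G[i, j, :] with the time-reversed record d[j, ::-1].
          """
          n_src = G.shape[0]
          est = np.zeros(n_src)
          for i in range(n_src):
              est[i] = dt * np.sum(G[i] * d[:, ::-1])
          return est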

  14. Estimation method of a separatrix profile of field-reversed configuration plasma with the deconvolution concept

    NASA Astrophysics Data System (ADS)

    Yamanaka, Koji; Suzuki, Yukihisa; Kitano, Katsuhisa; Ito, Shoji; Okada, Shigefumi; Goto, Seiichi

    1999-01-01

    A method to analyze the separatrix profile of a field-reversed configuration is presented that is based on a multichannel excluded flux measurement. In the method, the plasma current is represented by current filaments. This current code includes all the magnetic sources (e.g., a vacuum conducting vessel, coils for the confinement field, search coils, and coils for additional fields) as inputs to estimate the separatrix profile. With the aid of a numerically calculated function, experimental data are deconvolved to determine the current filament. The influence of measurement error included in the raw data on the calculated profiles is also discussed.

  15. New method of asymmetric flow field measurement in hypersonic shock tunnel.

    PubMed

    Yan, D P; He, A Z; Ni, X W

    1991-03-01

    In this paper, a large-aperture (⌀500 mm), high-sensitivity moiré deflectometry method is used to obtain multidirectional deflectograms of the asymmetric flow field in a hypersonic (M = 10.29) shock tunnel. At the same time, a 3-D reconstruction method for the asymmetric flow field is presented that is based on the integration of the moiré deflection angle and double-cubic many-knot interpolating splines; it is used to calculate the 3-D density distribution of the asymmetric flow field. PMID:20582058

  16. General flow field analysis methods for helicopter rotor aeroacoustics

    NASA Technical Reports Server (NTRS)

    Quackenbush, Todd R.; Lam, C. Gordon; Bliss, Donald B.

    1991-01-01

    Previous work in the analysis of rotor flow fields for aeroacoustic applications involved the preliminary development of an efficient and accurate Lagrangian simulation of the unsteady vorticity field in the vicinity of a helicopter main rotor that could analyze a limited class of rotor/wake interactions. The capabilities of this analysis have subsequently been considerably enhanced to allow it to serve as the foundation for a general analysis of rotor/wake interaction noise. This paper presents the details of these enhancements, which focus on the expansion of the reconstruction approach developed previously to handle arbitrary vortex wake interactions within three-dimensional regions located near or within the rotor disk. Also, the development of near-field velocity corrections appropriate for the analysis of such interactions is described, as is a preliminary study of methods for using the new high-resolution flow field analysis for noise predictions. The results show that by employing this novel flow field reconstruction technique it is possible to employ full-span free wake analyses with temporal and spatial resolution suitable for acoustic applications while reducing the computation time required by one to two orders of magnitude relative to traditional methods.

  17. The reduced basis method for the electric field integral equation

    SciTech Connect

    Fares, M.; Hesthaven, J.S.; Maday, Y.; Stamm, B.

    2011-06-20

    We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics in which field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation that results in a two-step procedure. The first step consists of a computationally intense assembly of the reduced basis, which needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave, and its polarization.

  18. Method for imaging with low frequency electromagnetic fields

    DOEpatents

    Lee, Ki H.; Xie, Gan Q.

    1994-01-01

    A method for imaging with low frequency electromagnetic fields, and for interpreting the electromagnetic data using ray tomography, in order to determine the earth conductivity with high accuracy and resolution. The imaging method includes the steps of placing one or more transmitters, at various positions in a plurality of transmitter holes, and placing a plurality of receivers in a plurality of receiver holes. The transmitters generate electromagnetic signals which diffuse through a medium, such as earth, toward the receivers. The measured diffusion field data H is then transformed into wavefield data U. The traveltimes corresponding to the wavefield data U are then obtained by charting the wavefield data U, using a different regularization parameter α for each transform. The desired property of the medium, such as conductivity, is then derived from the velocity, which in turn is constructed from the wavefield data U using ray tomography.

  19. Method for imaging with low frequency electromagnetic fields

    DOEpatents

    Lee, K.H.; Xie, G.Q.

    1994-12-13

    A method is described for imaging with low frequency electromagnetic fields, and for interpreting the electromagnetic data using ray tomography, in order to determine the earth conductivity with high accuracy and resolution. The imaging method includes the steps of placing one or more transmitters, at various positions in a plurality of transmitter holes, and placing a plurality of receivers in a plurality of receiver holes. The transmitters generate electromagnetic signals which diffuse through a medium, such as earth, toward the receivers. The measured diffusion field data H is then transformed into wavefield data U. The travel times corresponding to the wavefield data U are then obtained by charting the wavefield data U, using a different regularization parameter α for each transform. The desired property of the medium, such as conductivity, is then derived from the velocity, which in turn is constructed from the wavefield data U using ray tomography. 13 figures.

  20. A self-consistent field method for galactic dynamics

    NASA Technical Reports Server (NTRS)

    Hernquist, Lars; Ostriker, Jeremiah P.

    1992-01-01

    The present study describes an algorithm for evolving collisionless stellar systems in order to investigate the evolution of systems with density profiles like the R^(1/4) law, using only a few terms in the expansions. A good fit is obtained for a truncated isothermal distribution, which renders the method appropriate for galaxies with flat rotation curves. Calculations employing N of about 10^6-10^7 are straightforward on existing supercomputers, making possible simulations having significantly smoother fields than with direct methods such as tree codes. Orbits are found in a given static or time-dependent gravitational field; the potential, phi(r, t), is revised from the resultant density, rho(r, t). Possible scientific uses of this technique are discussed, including tidal perturbations of dwarf galaxies, the adiabatic growth of central masses in spheroidal galaxies, instabilities in realistic galaxy models, and secular processes in galactic evolution.

  1. A Method for Evaluating Volt-VAR Optimization Field Demonstrations

    SciTech Connect

    Schneider, Kevin P.; Weaver, T. F.

    2014-08-31

    In a regulated business environment, a utility must be able to validate that deployed technologies provide quantifiable benefits to the end-use customers. For traditional technologies there are well-established procedures for determining what benefits will be derived from the deployment. But for many emerging technologies, procedures for determining benefits are less clear, and completely absent in some cases. Volt-VAR Optimization is a technology that is being deployed across the nation, but there are still numerous discussions about potential benefits and how they are achieved. This paper will present a method for the evaluation and quantification of benefits for field deployments of Volt-VAR Optimization technologies. In addition to the basic methodology, the paper will present a summary of results and observations from two separate Volt-VAR Optimization field evaluations using the proposed method.

  2. Vision Sensor-Based Road Detection for Field Robot Navigation

    PubMed Central

    Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen

    2015-01-01

    Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514

  3. Vision Sensor-Based Road Detection for Field Robot Navigation.

    PubMed

    Lu, Keyu; Li, Jian; An, Xiangjing; He, Hangen

    2015-01-01

    Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection for their great potential in environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Then, according to the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors. After convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. PMID:26610514

  4. Full-Field Strain Measurement On Titanium Welds And Local Elasto-Plastic Identification With The Virtual Fields Method

    SciTech Connect

    Tattoli, F.; Casavola, C.; Pierron, F.; Rotinat, R.; Pappalettere, C.

    2011-01-17

    One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat affected zones is often associated with changes in local material properties. The present work deals with the identification of material parameters governing the elasto-plastic behaviour of the fused and heat affected zones as well as the base material for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work and it has been shown to be useful and much less time-consuming than classical finite element model updating approaches applied to similar problems. The paper will present results and discuss the problem of selection of the weld zones for the identification.

  5. Correlation theory-based signal processing method for CMF signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-lin; Tu, Ya-qing

    2016-06-01

    The signal processing precision of Coriolis mass flowmeter (CMF) signals directly affects the measurement accuracy of Coriolis mass flowmeters. To improve the measurement accuracy of CMFs, a correlation theory-based signal processing method for CMF signals is proposed, which comprises a correlation theory-based frequency estimation method and a phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integer-period sampling on frequency and phase difference estimation. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation. It has better estimation performance than the adaptive notch filter, discrete Fourier transform, and autocorrelation methods for frequency estimation, and than the data extension-based correlation, Hilbert transform, quadrature delay estimator, and discrete Fourier transform methods for phase difference estimation, which contributes to improving the measurement accuracy of Coriolis mass flowmeters.
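
    For two noise-free tones of the same frequency, a correlation-based estimate of the phase difference follows from zero-lag cross- and auto-correlations, since R12 = (A·B/2)·cos(φ) while R11 = A²/2 and R22 = B²/2 over an integer number of periods. A minimal sketch of that idea (ignoring the paper's treatment of non-integer-period sampling and noise):

      import numpy as np

      def phase_difference(s1, s2):
          """Estimate |phase difference| between two same-frequency sinusoids from
          zero-lag correlations: cos(phi) = R12 / sqrt(R11 * R22)."""
          r12 = np.mean(s1 * s2)
          r11 = np.mean(s1 * s1)
          r22 = np.mean(s2 * s2)
          return np.arccos(np.clip(r12 / np.sqrt(r11 * r22), -1.0, 1.0))

      # Demo: 100 Hz signals sampled at 10 kHz with a 0.2 rad phase shift
      fs, f, phi = 10_000.0, 100.0, 0.2
      t = np.arange(0, 0.1, 1.0 / fs)            # exactly 10 full periods
      s1 = np.sin(2 * np.pi * f * t)
      s2 = 0.8 * np.sin(2 * np.pi * f * t + phi)
      print(phase_difference(s1, s2))            # ~0.2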

  6. Regularization methods for Nuclear Lattice Effective Field Theory

    NASA Astrophysics Data System (ADS)

    Klein, Nico; Lee, Dean; Liu, Weitao; Meißner, Ulf-G.

    2015-07-01

    We investigate Nuclear Lattice Effective Field Theory for the two-body system for several lattice spacings at lowest order in the pionless as well as in the pionful theory. We discuss issues of regularization and predictions for the effective range expansion. In the pionless case, a simple Gaussian smearing allows us to demonstrate lattice spacing independence over a wide range of lattice spacings. We show that regularization methods known from the continuum formulation are necessary as well as feasible for the pionful approach.

  7. Lidar Tracking of Multiple Fluorescent Tracers: Method and Field Test

    NASA Technical Reports Server (NTRS)

    Eberhard, Wynn L.; Willis, Ron J.

    1992-01-01

    Past research and applications have demonstrated the advantages and usefulness of lidar detection of a single fluorescent tracer to track air motions. Earlier researchers performed an analytical study that showed good potential for lidar discrimination and tracking of two or three different fluorescent tracers at the same time. The present paper summarizes the multiple fluorescent tracer method, discusses its expected advantages and problems, and describes our field test of this new technique.

  8. Work function measurements by the field emission retarding potential method.

    NASA Technical Reports Server (NTRS)

    Strayer, R. W.; Mackie, W.; Swanson, L. W.

    1973-01-01

    Description of the theoretical foundation of the field electron retarding potential method, and review of its experimental application to the measurement of single crystal face work functions. The results obtained from several substrates are discussed. An interesting and useful fallout from the experimental approach described is the ability to accurately measure the elastic and inelastic reflection coefficient for impinging electrons to near zero-volt energy.

  9. Bringing the Field into the Classroom: A Field Methods Course on Saudi Arabian Sign Language

    ERIC Educational Resources Information Center

    Stephen, Anika; Mathur, Gaurav

    2012-01-01

    The methodology used in one graduate-level linguistics field methods classroom is examined through the lens of the students' experiences. Four male Deaf individuals from the Kingdom of Saudi Arabia served as the consultants for the course. After brief background information about their country and its practices surrounding deaf education, both…

  10. Integration of Multiple Field Methods in Characterizing a Field Site with Bayesian Inverse Modeling

    NASA Astrophysics Data System (ADS)

    Savoy, H.; Dietrich, P.; Osorio-Murillo, C. A.; Kalbacher, T.; Kolditz, O.; Ames, D. P.; Rubin, Y.

    2014-12-01

    A hydraulic property of a field can be expressed as a space random function (SRF), and the parameters of that SRF can be constrained by the Method of Anchored Distributions (MAD). MAD is a general Bayesian inverse modeling technique that quantifies the uncertainty of SRF parameters by integrating various direct local data along with indirect non-local data. An example is given with a high-resolution 3D aquifer analog with known hydraulic conductivity (K) and porosity (n) at every location. MAD is applied using different combinations of simulated measurements of K, n, and different scales of hydraulic head that represent different field methods. The ln(K) and n SRF parameters are characterized with each of the method combinations to assess the influence of the methods on the SRFs and their implications. The forward modeling equations are solved by the numerical modeling software OpenGeoSys (opengeosys.org) and MAD is applied with the software MAD# (mad.codeplex.com). The inverse modeling results are compared to the aquifer analog for success evaluation. The goal of the study is to show how integrating combinations of multi-scale and multi-type measurements from the field via MAD can be used to reduce the uncertainty in field-scale SRFs, as well as point values, of hydraulic properties.

  11. Simulating recrystallization in titanium using the phase field method

    NASA Astrophysics Data System (ADS)

    Gentry, S. P.; Thornton, K.

    2015-08-01

    Integrated computational materials engineering (ICME) links physics-based models to predict performance of materials based on their processing history. The recrystallization phase field model is developed and parameterized for commercially pure titanium. Stored energy and nucleation of dislocation-free grains are added into a phase field grain-growth model. A two-dimensional simulation of recrystallization in titanium at 800°C was performed; the recrystallized volume fraction was measured from the simulated microstructures. Fitting the recrystallized volume fraction to the Avrami equation gives the time exponent n as 1.8 and the annealing time to reach 50% recrystallization (t0.5) as 71 s. As expected, the microstructure evolves faster when driven by stored energy than when driven by grain boundary energy.
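
    The Avrami fit quoted above uses X(t) = 1 - exp(-k·t^n), which linearizes to ln(-ln(1-X)) = ln(k) + n·ln(t). A minimal fitting sketch with synthetic data generated from the abstract's quoted values (n = 1.8, t0.5 = 71 s), not the paper's simulation output:

      import numpy as np

      def fit_avrami(t, X):
          """Fit X(t) = 1 - exp(-k * t**n) by linearizing:
          ln(-ln(1 - X)) = ln(k) + n * ln(t)."""
          y = np.log(-np.log(1.0 - np.asarray(X)))
          n, ln_k = np.polyfit(np.log(t), y, 1)
          k = np.exp(ln_k)
          t_half = (np.log(2.0) / k) ** (1.0 / n)   # time to 50% recrystallization
          return n, k, t_half

      # Synthetic curve built from the values quoted in the abstract
      n_true = 1.8
      k_true = np.log(2.0) / 71.0 ** n_true
      t = np.linspace(10.0, 200.0, 20)
      X = 1.0 - np.exp(-k_true * t ** n_true)
      print(fit_avrami(t, X))    # recovers ~ (1.8, k_true, 71)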

  12. Camera array based light field microscopy.

    PubMed

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-09-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-aperture images formed by corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490

  13. Camera array based light field microscopy

    PubMed Central

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-01-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-aperture images formed by corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490

  14. Relaxation method and TCLE method of linear response in terms of thermo-field dynamics

    NASA Astrophysics Data System (ADS)

    Saeki, Mizuhiko

    2008-03-01

    The general formulae of the dynamic susceptibility are derived using the relaxation method and the TCLE method for the linear response by introducing the renormalized hat-operator in terms of thermo-field dynamics (TFD). In the former method, the Kubo formula is calculated for systems with no external driving fields, while in the latter method the admittance is directly calculated from time-convolutionless equations with external driving terms. The relation between the two methods is analytically investigated, and also the fluctuation-dissipation theorem is examined for the two methods in terms of TFD. The TCLE method is applied to an interacting spin system, and a formula of the transverse magnetic susceptibility is derived for such a system. The transverse magnetic susceptibility of an interacting spin system with S = 1/2 spins is obtained up to the first order in powers of the spin-spin interaction.

  15. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the method often yields excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential. In practice, truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g; the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.

  16. ALTERNATIVE FIELD METHODS TO TREAT MERCURY IN SOIL

    SciTech Connect

    Ernest F. Stine Jr; Steven T. Downey

    2002-08-14

    The U.S. Department of Energy (DOE) used large quantities of mercury in the uranium separation process from the 1950s until the late 1980s in support of national defense. Some of this mercury, as well as other hazardous metals and radionuclides, found its way into and under several buildings, into soils and subsurface soils, and into some of the surface waters. Several of these areas may pose potential health or environmental risks and must be dealt with under current environmental regulations. DOE's National Energy Technology Laboratory (NETL) awarded a contract, "Alternative Field Methods to Treat Mercury in Soil," to IT Group, Knoxville, TN (IT) and its subcontractor NFS, Erwin, TN, to identify remedial methods to clean up mercury-contaminated, high-clay-content soils using proven treatment chemistries. The sites of interest were the Y-12 National Security Complex located in Oak Ridge, Tennessee, the David Witherspoon properties located in Knoxville, Tennessee, and other similarly contaminated sites. The primary laboratory-scale contract objectives were (1) to safely retrieve and test samples of contaminated soil in an approved laboratory and (2) to determine an acceptable treatment method to ensure that the mercury does not leach from the soil above regulatory levels. The leaching requirements were to meet the TC (0.2 mg/l) and UTS (0.025 mg/l) TCLP criteria. In-situ treatments were preferred to control potential mercury vapor emissions and liquid mercury spills associated with ex-situ treatments. All laboratory work was conducted in IT's and NFS's laboratories. Mercury-contaminated, nonradioactive soil from under the Alpha 2 building in the Y-12 complex was used. This soil contained insufficient levels of leachable mercury and resulted in TCLP mercury concentrations that were similar to the applicable LDR limits. The soil was spiked at multiple levels with metallic (up to 6000 mg/l) and soluble mercury compounds (up to 500 mg/kg) to simulate expected ranges of mercury

  17. Size-extensive vibrational self-consistent field method

    NASA Astrophysics Data System (ADS)

    Keçeli, Murat; Hirata, So

    2011-10-01

    The vibrational self-consistent field (VSCF) method is a mean-field approach to solve the vibrational Schrödinger equation and serves as a basis of vibrational perturbation and coupled-cluster methods. Together they account for anharmonic effects on vibrational transition frequencies and vibrationally averaged properties. This article reports the definition, programmable equations, and corresponding initial implementation of a diagrammatically size-extensive modification of VSCF, from which numerous terms with nonphysical size dependence in the original VSCF equations have been eliminated. When combined with a quartic force field (QFF), this compact and strictly size-extensive VSCF (XVSCF) method requires only quartic force constants of the ∂⁴V/∂Q_i²∂Q_j² type, where V is the electronic energy and Q_i is the ith normal coordinate. Consequently, the cost of an XVSCF calculation with a QFF increases only quadratically with the number of modes, while that of a VSCF calculation grows quartically. The effective (mean-field) potential of XVSCF felt by each mode is shown to be harmonic, making the XVSCF equations subject to a self-consistent analytical solution without matrix diagonalization or a basis-set expansion, which are necessary in VSCF. Even when the same set of force constants is used, XVSCF is nearly three orders of magnitude faster than VSCF implemented similarly. Yet, the results of XVSCF and VSCF are shown to approach each other as the molecular size is increased, implicating the inclusion of unnecessary, nonphysical terms in VSCF. The diagrams of the XVSCF energy expression and their evaluation rules are also proposed, underscoring their connected structures.

  18. An Empirical Method for Fast Prediction of Rarefied Flow Field around a Vertical Plate

    NASA Astrophysics Data System (ADS)

    He, Tao; Wang, Jiang-Feng

    2016-06-01

    A numerical study is conducted to investigate the effects of free-stream Knudsen (Kn) number on the rarefied flow field around a vertical plate employing an unstructured DSMC method, and an empirical method for fast prediction of the flow-field structure at different Kn numbers for a given inflow velocity is proposed. First, the flow at a velocity of 7500 m/s is simulated using a perfect-gas model with free-stream Kn changing from 0.035 to 13.36. The flow-field characteristics in these cases with varying Kn numbers are analyzed, and a linear-expansion phenomenon as a function of the square of Kn is discovered. An empirical method based on least-squares fitting is proposed for fast flow-field prediction at different Kn. Further, the effects of chemical reactions on the flow field are investigated to verify the applicability of the empirical method under real-gas conditions. Three of the cases in perfect-gas flow are simulated again by introducing a five-species air chemistry module. The flow properties with and without chemical reactions are compared. In the end, the variation of the chemical-reaction flow field as a function of Kn is analyzed, and it is shown that the empirical method is also suitable when chemical reactions are considered.
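
    To illustrate the kind of fast prediction described above (a flow-field quantity varying approximately linearly with the square of Kn at a fixed inflow velocity), the sketch below fits a placeholder data set by ordinary least squares; the numbers are illustrative only and are not taken from the paper.

      import numpy as np

      # Placeholder samples of some flow-field quantity q at several Knudsen numbers
      Kn = np.array([0.035, 0.1, 0.5, 1.0, 5.0, 13.36])
      q = np.array([1.02, 1.05, 1.6, 3.1, 53.0, 370.0])   # made-up values

      # Fit q ~ a * Kn**2 + b (the "linear in Kn^2" expansion described above)
      a, b = np.polyfit(Kn ** 2, q, 1)
      q_pred = a * Kn ** 2 + b   # fast prediction at any Kn within the fitted range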

  19. A General Assignment Method for Oriented Sample (OS) Solid-state NMR of Proteins Based on The Correlation of Resonances through Heteronuclear Dipolar Couplings in Samples Aligned Parallel and Perpendicular to the Magnetic Field

    PubMed Central

    Lu, George J.; Son, Woo Sung; Opella, Stanley J.

    2011-01-01

    A general method for assigning oriented sample (OS) solid-state NMR spectra of proteins is demonstrated. In principle, this method requires only a single sample of a uniformly 15N-labeled membrane protein in magnetically aligned bilayers, and a previously assigned isotropic chemical shift spectrum obtained either from solution NMR on micelle or isotropic bicelle samples or from magic angle spinning (MAS) solid-state NMR on unoriented proteoliposomes. The sequential isotropic resonance assignments are transferred to the OS solid-state NMR spectra of aligned samples by correlating signals from the same residue observed in protein-containing bilayers aligned with their normals parallel and perpendicular to the magnetic field. The underlying principle is that the resonances from the same residue have heteronuclear dipolar couplings that differ by exactly a factor of two between parallel and perpendicular alignments. The method is demonstrated on the membrane-bound form of Pf1 coat protein in phospholipid bilayers, whose assignments have been previously made using an earlier generation of methods that relied on the preparation of many selectively labeled (by residue type) samples. The new method provides the correct resonance assignments using only a single uniformly 15N-labeled sample, two solid-state NMR spectra, and a previously assigned isotropic spectrum. Significantly, this approach is equally applicable to residues in alpha helices, beta sheets, loops, and any other elements of tertiary structure. Moreover, the strategy bridges between OS solid-state NMR of aligned samples and solution NMR or MAS solid-state NMR of unoriented samples. In combination with the development of complementary experimental methods, it provides a step towards unifying these apparently different NMR approaches. PMID:21316275

  20. Camera self-calibration method based on two vanishing points

    NASA Astrophysics Data System (ADS)

    Duan, Shaoli; Zang, Huaping; Xu, Mengmeng; Zhang, Xiaofang; Gong, Qiaoxia; Tian, Yongzhi; Liang, Erjun; Liu, Xiaomin

    2015-10-01

    Camera calibration is one of the indispensable processes for obtaining 3D depth information from 2D images in the field of computer vision. Camera self-calibration is more convenient and flexible, especially in applications with large depths of field, wide fields of view, and scene changes, as well as on other occasions such as zooming. In this paper, a self-calibration method based on two vanishing points is proposed; the geometric characteristics of vanishing points formed by two groups of orthogonal parallel lines are applied to camera self-calibration. By using the orthogonality of the vectors connecting the optical center to the vanishing points, constraint equations on the camera intrinsic parameters are established. With this method, four internal parameters of the camera can be solved using only four images taken from different viewpoints in a scene. Compared with two other self-calibration methods, based on the absolute quadric and on a calibration plate, the method based on two vanishing points requires no calibration objects, no camera movement, and no information on the size and location of the parallel lines; it needs no strict experimental equipment and offers a convenient calibration process and a simple algorithm. Through comparison with the experimental results of the calibration-plate method and of self-calibration using the machine vision software Halcon, the practicability and effectiveness of the method proposed in this paper are verified.
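
    The constraint exploited above can be written as v1ᵀ·ω·v2 = 0 for each pair of vanishing points v1, v2 of orthogonal directions, where ω ∝ (K·Kᵀ)⁻¹ is the image of the absolute conic. The sketch below assumes a zero-skew camera with four intrinsics, stacks one such equation per image, and recovers K from the null space; it uses synthetic data and is not the authors' implementation.

      import numpy as np

      def calibrate_from_vanishing_points(vp_pairs):
          """Recover K = [[fx,0,u0],[0,fy,v0],[0,0,1]] (zero skew) from >= 4 pairs of
          vanishing points of orthogonal directions, via v1^T w v2 = 0 with
          w = [[w1,0,w3],[0,w2,w4],[w3,w4,w5]] ~ (K K^T)^-1."""
          rows = []
          for v1, v2 in vp_pairs:
              x1, y1, z1 = v1
              x2, y2, z2 = v2
              rows.append([x1 * x2, y1 * y2, x1 * z2 + z1 * x2, y1 * z2 + z1 * y2, z1 * z2])
          _, _, Vt = np.linalg.svd(np.asarray(rows))
          w1, w2, w3, w4, w5 = Vt[-1]
          if w1 < 0:                                   # fix the arbitrary sign of the null vector
              w1, w2, w3, w4, w5 = -w1, -w2, -w3, -w4, -w5
          u0, v0 = -w3 / w1, -w4 / w2
          lam = w5 - (w3 ** 2 / w1 + w4 ** 2 / w2)      # overall scale of w
          fx, fy = np.sqrt(lam / w1), np.sqrt(lam / w2)
          return fx, fy, u0, v0

      # Synthetic check: vanishing points of two orthogonal directions, K*R*e1 and K*R*e2
      K = np.array([[800.0, 0.0, 320.0], [0.0, 780.0, 240.0], [0.0, 0.0, 1.0]])
      rng = np.random.default_rng(2)
      pairs = []
      for _ in range(4):
          Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random rotation-like orthonormal basis
          pairs.append((K @ Q[:, 0], K @ Q[:, 1]))
      print(calibrate_from_vanishing_points(pairs))          # should recover ~ (800, 780, 320, 240)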

  1. METHOD AND APPARATUS FOR TRAPPING IONS IN A MAGNETIC FIELD

    DOEpatents

    Luce, J.S.

    1962-04-17

    A method and apparatus are described for trapping ions within an evacuated container and within a magnetic field utilizing dissociation and/or ionization of molecular ions to form atomic ions and energetic neutral particles. The atomic ions are magnetically trapped as a result of a change of charge-to- mass ratio. The molecular ions are injected into the container and into the path of an energetic carbon arc discharge which dissociates and/or ionizes a portion of the molecular ions into atomic ions and energetic neutrals. The resulting atomic ions are trapped by the magnetic field to form a circulating beam of atomic ions, and the energetic neutrals pass out of the system and may be utilized in a particle accelerator. (AEC)

  2. Magnetic Field Configuration Models and Reconstruction Methods: a comparative study

    NASA Astrophysics Data System (ADS)

    Al-haddad, Nada; Möstl, Christian; Roussev, Ilia; Nieves-Chinchilla, Teresa; Poedts, Stefaan; Hidalgo, Miguel Angel; Marubashi, Katsuhide; Savani, Neel

    2012-07-01

    This study aims to provide a reference for different magnetic field models and reconstruction methods. In order to understand the dissimilarities among these models and codes, we analyze 59 events from the CDAW list using four different magnetic field models and reconstruction techniques: force-free reconstruction (Lepping et al. (1990); Lynch et al. (2003)); magnetostatic reconstruction, referred to as Grad-Shafranov (Hu & Sonnerup (2001); Mostl et al. (2009)); cylinder reconstruction (Marubashi & Lepping (2007)); and elliptical, non-force-free reconstruction (Hidalgo et al. (2002)). The resulting parameters of the reconstructions for the 59 events are compared statistically, as well as in more detail for some cases. The differences between the reconstruction codes are discussed, and suggestions are provided on how to enhance them. Finally, we look at two unique cases under the microscope to provide a comprehensive idea of the different aspects of how the fitting codes work.

  3. Magnetic field adjustment structure and method for a tapered wiggler

    DOEpatents

    Halbach, Klaus

    1988-01-01

    An improved method and structure is disclosed for adjusting the magnetic field generated by a group of electromagnet poles spaced along the path of a charged particle beam to compensate for energy losses in the charged particles which comprises providing more than one winding on at least some of the electromagnet poles; connecting one respective winding on each of several consecutive adjacent electromagnet poles to a first power supply, and the other respective winding on the electromagnet pole to a different power supply in staggered order; and independently adjusting one power supply to independently vary the current in one winding on each electromagnet pole in a group whereby the magnetic field strength of each of a group of electromagnet poles may be changed in smaller increments.

  4. Magnetic field adjustment structure and method for a tapered wiggler

    SciTech Connect

    Halbach, Klaus

    1988-03-01

    An improved method and structure is disclosed for adjusting the magnetic field generated by a group of electromagnet poles spaced along the path of a charged particle beam to compensate for energy losses in the charged particles which comprises providing more than one winding on at least some of the electromagnet poles; connecting one respective winding on each of several consecutive adjacent electromagnet poles to a first power supply, and the other respective winding on the electromagnet pole to a different power supply in staggered order; and independently adjusting one power supply to independently vary the current in one winding on each electromagnet pole in a group whereby the magnetic field strength of each of a group of electromagnet poles may be changed in smaller increments.

  5. Reconstruction of radiating sound fields using minimum energy method.

    PubMed

    Bader, Rolf

    2010-01-01

    A method for reconstructing the pressure field at the surface of a radiating body or source is presented using recording data from a microphone array. The radiation is assumed to consist of as many spherical radiators as there are microphone positions in the array. These monopoles are weighted using a parameter alpha, which broadens or narrows the overall radiation directivity as an effective and highly intuitive parameter of the radiation characteristics. A radiation matrix is built out of these weighted monopole radiators, and for different assumed values of alpha, a linear equation solver reconstructs the pressure field at the body's surface. It appears that, among these many arbitrary reconstructions, the correct one minimizes the reconstruction energy. The method is tested by localizing the radiation points of a Balinese suling flute, reconstructing complex radiation from a duff frame drum, and determining the radiation directivity for the first seven modes of an Uzbek tambourine. Stability with respect to measurement noise is demonstrated for the plain method, and an additional, highly effective algorithm is added for noise levels up to 0 dB. The stability of alpha in terms of minimal reconstruction energy is shown over the whole range of possible values of alpha. Additionally, the treatment of unwanted room reflections is discussed, still leading to satisfactory results in many cases. PMID:20058977
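
    The selection rule described above can be sketched as follows: for each candidate alpha, build the alpha-dependent radiation matrix, solve the linear system in the least-squares sense, and keep the alpha whose reconstruction has the smallest energy. The radiation-matrix builder below is a user-supplied stand-in, not the paper's weighted-monopole formulation, and the energy is taken here as the squared norm of the reconstructed surface pressure.

      import numpy as np

      def pick_alpha(build_matrix, p_measured, alphas):
          """Return (alpha, energy, reconstruction) minimizing the reconstruction
          energy ||x||^2 over the candidate values of alpha."""
          best = None
          for alpha in alphas:
              A = build_matrix(alpha)                            # radiation matrix for this alpha
              x, *_ = np.linalg.lstsq(A, p_measured, rcond=None)
              energy = float(np.sum(np.abs(x) ** 2))
              if best is None or energy < best[1]:
                  best = (alpha, energy, x)
          return best

      # Toy usage with a random, alpha-dependent matrix standing in for the
      # weighted-monopole radiation matrix.
      rng = np.random.default_rng(0)
      G = rng.standard_normal((64, 64))
      build = lambda alpha: G + alpha * np.eye(64)
      alpha_opt, energy, x = pick_alpha(build, rng.standard_normal(64),
                                        np.linspace(0.1, 2.0, 20))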

  6. Iterative Methods to Solve Linear RF Fields in Hot Plasma

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2014-10-01

    Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of performance analysis of such devices and a predictive tool aiding the design and development of future devices. Prior attempts at this modeling have mostly used direct solvers to solve the formulated linear equations. Full-wave modeling of RF fields in hot plasma with 3D nonuniformities is mostly prohibitive, with the memory demands of a direct solver placing a significant limitation on spatial resolution. Iterative methods can significantly increase spatial resolution. We explore the feasibility of using iterative methods in 3D full-wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test particle orbits. The wave equation is discretized using a finite difference approach. The initial guess is important in iterative methods, and we examine different initial guesses, including the solution to the cold plasma wave equation. Work is supported by the U.S. DOE SBIR program.
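
    As a generic illustration of the approach (a Krylov solver started from a physically motivated initial guess), the sketch below calls SciPy's GMRES with an x0 built from a cheap approximate solve; the tridiagonal system is only a stand-in for the discretized wave equation and is not the authors' formulation.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Stand-in sparse system; the real problem is a discretized 3D wave equation.
      n = 2000
      A = sp.diags([np.full(n - 1, -1.0), np.full(n, 4.0), np.full(n - 1, -1.0)],
                   offsets=[-1, 0, 1], format="csr")
      b = np.ones(n)

      # Cheap approximate solve used as the initial guess (playing the role of the
      # cold-plasma solution mentioned in the abstract).
      x0 = b / A.diagonal()

      x, info = spla.gmres(A, b, x0=x0, restart=50, maxiter=500)
      assert info == 0   # info == 0 signals convergence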

  7. Field estimates of gravity terrain corrections and Y2K-compatible method to convert from gravity readings with multiple base stations to tide- and long-term drift-corrected observations

    USGS Publications Warehouse

    Plouff, Donald

    2000-01-01

    Gravity observations are directly made or are obtained from other sources by the U.S. Geological Survey in order to prepare maps of the anomalous gravity field and consequently to interpret the subsurface distribution of rock densities and associated lithologic or geologic units. Observations are made in the field with gravity meters at new locations and at reoccupations of previously established gravity "stations." This report illustrates an interactively-prompted series of steps needed to convert gravity "readings" to values that are tied to established gravity datums and includes computer programs to implement those steps. Inasmuch as individual gravity readings have small variations, gravity-meter (instrument) drift may not be smoothly variable, and accommodations may be needed for ties to previously established stations, the reduction process is iterative. Decision-making by the program user is prompted by lists of best values and graphical displays. Notes about irregularities of topography, which affect the value of observed gravity but are not shown in sufficient detail on topographic maps, must be recorded in the field. This report illustrates ways to record field notes (distances, heights, and slope angles) and includes computer programs to convert field notes to gravity terrain corrections. This report includes approaches that may serve as models for other applications, for example: portrayal of system flow; style of quality control to document and validate computer applications; lack of dependence on proprietary software except source code compilation; method of file-searching with a dwindling list; interactive prompting; computer code to write directly in the PostScript (Adobe Systems Incorporated) printer language; and highlighting the four-digit year on the first line of time-dependent data sets for assured Y2K compatibility. Computer source codes provided are written in the Fortran scientific language. In order for the programs to operate, they first

  8. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGESBeta

    Ray, J.; Lee, J.; Yadav, V.; Lefantzi, S.; Michalak, A. M.; van Bloemen Waanders, B.

    2015-04-29

    Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, which are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method, and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties to the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
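
    The sketch below shows a StOMP-style stagewise loop with a non-negativity constraint enforced by a non-negative least-squares refit on the active set; the sensitivity matrix H, observations y, and the thresholding rule are illustrative assumptions rather than the authors' code.

```python
import numpy as np
from scipy.optimize import nnls

def stomp_nonneg(H, y, n_stages=10, t=2.0):
    """H: (n_obs, n_coeffs) sensitivity matrix; y: observation vector."""
    m, n = H.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    for _ in range(n_stages):
        r = y - H @ x                                  # current residual
        c = H.T @ r                                    # matched-filter correlations
        thresh = t * np.linalg.norm(r) / np.sqrt(m)    # stagewise threshold
        new = np.abs(c) > thresh
        if not new.any():
            break
        support |= new
        coef, _ = nnls(H[:, support], y)               # non-negative refit on support
        x = np.zeros(n)
        x[support] = coef
    return x
```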

  9. Nondestructive acoustic electric field probe apparatus and method

    DOEpatents

    Migliori, Albert

    1982-01-01

    The disclosure relates to a nondestructive acoustic electric field probe and its method of use. A source of acoustic pulses of arbitrary but selected shape is placed in an oil bath along with material to be tested across which a voltage is disposed and means for receiving acoustic pulses after they have passed through the material. The received pulses are compared with voltage changes across the material occurring while acoustic pulses pass through it and analysis is made thereof to determine preselected characteristics of the material.

  10. A method to localize RF B₁ field in high-field magnetic resonance imaging systems.

    PubMed

    Yoo, Hyoungsuk; Gopinath, Anand; Vaughan, J Thomas

    2012-12-01

    In high-field magnetic resonance imaging (MRI) systems, with B₀ fields of 7 and 9.4 T, the RF field shows greater inhomogeneity compared to clinical MRI systems with B₀ fields of 1.5 and 3.0 T. In multichannel RF coils, the magnitude and phase of the input to each coil element can be controlled independently to reduce the nonuniformity of the RF field. The convex optimization technique has been used to obtain the optimum excitation parameters with iterative solutions for homogeneity in a selected region of interest. The pseudoinverse method has also been used to find a solution. The simulation results for 9.4- and 7-T MRI systems are discussed in detail for the head model. Variations of the simulation results in a 9.4-T system with the number of RF coil elements, for different positions of the regions of interest in a spherical phantom, are also discussed. Experimental results were obtained in a phantom in the 9.4-T system and are compared to the simulation results, and the specific absorption rate has been evaluated. PMID:22929360
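
    The pseudoinverse step mentioned above amounts to a linear least-squares fit of per-channel complex drive weights to a uniform target field; the sketch below assumes the complex B1 map of each coil element over the region of interest is available as a matrix A (an assumed input, not data from the paper).

```python
import numpy as np

def pseudoinverse_shim(A, target=1.0):
    """A: (n_voxels, n_channels) complex B1 maps; returns drive weights."""
    b = np.full(A.shape[0], target, dtype=complex)
    w = np.linalg.pinv(A) @ b                        # least-squares magnitudes/phases
    residual = np.linalg.norm(A @ w - b) / np.linalg.norm(b)
    return w, residual
```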

  11. Magnetic irreversibility: An important amendment in the zero-field-cooling and field-cooling method

    NASA Astrophysics Data System (ADS)

    Teixeira Dias, Fábio; das Neves Vieira, Valdemar; Esperança Nunes, Sabrina; Pureur, Paulo; Schaf, Jacob; Fernanda Farinela da Silva, Graziele; de Paiva Gouvêa, Cristol; Wolff-Fabris, Frederik; Kampert, Erik; Obradors, Xavier; Puig, Teresa; Roa Rovira, Joan Josep

    2016-02-01

    The present work reports on experimental procedures to correct significant deviations of magnetization data, caused by magnetic relaxation, due to small field cycling by sample transport in the inhomogeneous applied magnetic field of commercial magnetometers. The extensively used method for measuring the magnetic irreversibility by first cooling the sample in zero field, switching on a constant applied magnetic field and measuring the magnetization M(T) while slowly warming the sample, and subsequently measuring M(T) while slowly cooling it back in the same field, is very sensitive even to small displacement of the magnetization curve. In our melt-processed YBaCuO superconducting sample we observed displacements of the irreversibility limit up to 7 K in high fields. Such displacements are detected only on confronting the magnetic irreversibility limit with other measurements, like for instance zero resistance, in which the sample remains fixed and so is not affected by such relaxation. We measured the magnetic irreversibility, Tirr(H), using a vibrating sample magnetometer (VSM) from Quantum Design. The zero resistance data, Tc0(H), were obtained using a PPMS from Quantum Design. On confronting our irreversibility lines with those of zero resistance, we observed that the Tc0(H) data fell several degrees K above the Tirr(H) data, which obviously contradicts the well-known properties of superconductivity. In order to get consistent Tirr(H) data in the H-T plane, it was necessary to perform a number of additional measurements as a function of the amplitude of the sample transport and extrapolate the Tirr(H) data for each applied field to zero amplitude.

  12. Field method for rapid quantification of labile organic carbon in hyper-arid desert soils validated by two thermal methods

    NASA Astrophysics Data System (ADS)

    Fletcher, Lauren E.; Valdivia-Silva, Julio E.; Perez-Montaño, Saul; Condori-Apaza, Renee M.; Conley, Catharine A.; Navarro-Gonzalez, Rafael; McKay, Christopher P.

    2014-03-01

    The objective of this work was to develop a field method for the determination of labile organic carbon in hyper-arid desert soils. Industry standard methods rely on expensive analytical equipment that cannot be taken into the field, while scientific challenges require fast turn-around of large numbers of samples in order to characterize the soils throughout this region. Here we present a method utilizing acid-hydrolysis extraction of the labile fraction of organic carbon followed by potassium permanganate oxidation, which provides a quick and inexpensive approach to investigate samples in the field. Strict reagent standardization and calibration steps within this method allowed the determination of very low levels of organic carbon in hyper-arid soils, with results similar to those determined by the alternative methods of calcination and pyrolysis-gas chromatography-mass spectrometry. Field testing of this protocol increased the understanding of the role of organic materials in hyper-arid environments and allowed real-time, strategic decision making for planning for more detailed laboratory-based analysis.

  13. On the Methods for Constructing Meson-Baryon Reaction Models within Relativistic Quantum Field Theory

    SciTech Connect

    B. Julia-Diaz, H. Kamano, T.-S. H. Lee, A. Matsuyama, T. Sato, N. Suzuki

    2009-04-01

    Within the relativistic quantum field theory, we analyze the differences between the $\pi N$ reaction models constructed from using (1) three-dimensional reductions of the Bethe-Salpeter equation, (2) the method of unitary transformation, and (3) time-ordered perturbation theory. Their relations with the approach based on the dispersion relations of S-matrix theory are discussed.

  14. Feature Surfaces in Symmetric Tensor Fields Based on Eigenvalue Manifold.

    PubMed

    Palacios, Jonathan; Yeh, Harry; Wang, Wenping; Zhang, Yue; Laramee, Robert S; Sharma, Ritesh; Schultz, Thomas; Zhang, Eugene

    2016-03-01

    Three-dimensional symmetric tensor fields have a wide range of applications in solid and fluid mechanics. Recent advances in the (topological) analysis of 3D symmetric tensor fields focus on degenerate tensors which form curves. In this paper, we introduce a number of feature surfaces, such as neutral surfaces and traceless surfaces, into tensor field analysis, based on the notion of eigenvalue manifold. Neutral surfaces are the boundary between linear tensors and planar tensors, and the traceless surfaces are the boundary between tensors of positive traces and those of negative traces. Degenerate curves, neutral surfaces, and traceless surfaces together form a partition of the eigenvalue manifold, which provides a more complete tensor field analysis than degenerate curves alone. We also extract and visualize the isosurfaces of tensor modes, tensor isotropy, and tensor magnitude, which we have found useful for domain applications in fluid and solid mechanics. Extracting neutral and traceless surfaces using the Marching Tetrahedra method can cause the loss of geometric and topological details, which can lead to false physical interpretation. To robustly extract neutral surfaces and traceless surfaces, we develop a polynomial description of them which enables us to borrow techniques from algebraic surface extraction, a topic well-researched by the computer-aided design (CAD) community as well as the algebraic geometry community. In addition, we adapt the surface extraction technique, called A-patches, to improve the speed of finding degenerate curves. Finally, we apply our analysis to data from solid and fluid mechanics as well as scalar field analysis. PMID:26441450
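
    As a pointwise illustration of the quantities behind these feature surfaces, the sketch below computes, for a symmetric 3x3 tensor, the trace (zero on traceless surfaces), a neutral-surface function (zero where linear and planar anisotropy balance, taken here as 2*l2 = l1 + l3), the tensor mode, and the magnitude; the exact definitions used in the paper may differ.

```python
import numpy as np

def tensor_features(T):
    """T: symmetric 3x3 tensor; returns (neutral_fn, trace, mode, magnitude)."""
    w = np.sort(np.linalg.eigvalsh(T))[::-1]          # eigenvalues l1 >= l2 >= l3
    trace = w.sum()                                   # zero on traceless surfaces
    dev = w - trace / 3.0                             # deviatoric eigenvalues
    magnitude = np.linalg.norm(w)
    norm_dev = np.linalg.norm(dev)
    mode = 3.0 * np.sqrt(6.0) * np.prod(dev / norm_dev) if norm_dev > 0 else 0.0
    neutral_fn = 2.0 * w[1] - w[0] - w[2]             # zero on the neutral surface
    return neutral_fn, trace, mode, magnitude
```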

  15. Phase field approaches of bone remodeling based on TIP

    NASA Astrophysics Data System (ADS)

    Ganghoffer, Jean-François; Rahouadj, Rachid; Boisse, Julien; Forest, Samuel

    2016-01-01

    The process of bone remodeling includes a cycle of repair, renewal, and optimization. This adaptation process, in response to variations in external loads and chemical driving factors, involves three main types of bone cells: osteoclasts, which remove the old pre-existing bone; osteoblasts, which form the new bone in a second phase; osteocytes, which are sensing cells embedded into the bone matrix, trigger the aforementioned sequence of events. The remodeling process involves mineralization of the bone in the diffuse interface separating the marrow, which contains all specialized cells, from the newly formed bone. The main objective advocated in this contribution is the setting up of a modeling and simulation framework relying on the phase field method to capture the evolution of the diffuse interface between the new bone and the marrow at the scale of individual trabeculae. The phase field describes the degree of mineralization of this diffuse interface; it varies continuously between the lower value (no mineral) and unity (fully mineralized phase, e.g. new bone), allowing the consideration of a diffuse moving interface. The modeling framework is the theory of continuous media, for which field equations for the mechanical, chemical, and interfacial phenomena are written, based on the thermodynamics of irreversible processes. Additional models for the cellular activity are formulated to describe the coupling of the cell activity responsible for bone production/resorption to the kinetics of the internal variables. Kinetic equations for the internal variables are obtained from a pseudo-potential of dissipation. The combination of the balance equations for the microforce associated to the phase field and the kinetic equations lead to the Ginzburg-Landau equation satisfied by the phase field with a source term accounting for the dissipative microforce. Simulations illustrating the proposed framework are performed in a one-dimensional situation showing the evolution of
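
    A one-dimensional, explicit-time sketch of a Ginzburg-Landau (Allen-Cahn type) evolution for a mineralization phase field phi in [0, 1] is given below; the double-well potential, the constant source term standing in for the cell-activity coupling, and all parameter values are illustrative assumptions, not the model of the paper.

```python
import numpy as np

def evolve_phase_field(n=200, L=1.0, dt=1e-5, steps=20000,
                       mobility=1.0, eps=0.02, source=0.0):
    dx = L / n
    x = np.linspace(0.0, L, n)
    phi = 0.5 * (1.0 + np.tanh((x - 0.5 * L) / (4 * eps)))  # initial diffuse interface
    for _ in range(steps):
        # periodic boundaries for brevity
        lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2
        dW = 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)    # derivative of phi^2 (1-phi)^2
        phi += dt * mobility * (eps**2 * lap - dW + source)
    return x, phi
```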

  16. In-house validation of a method for determination of silver nanoparticles in chicken meat based on asymmetric flow field-flow fractionation and inductively coupled plasma mass spectrometric detection.

    PubMed

    Loeschner, Katrin; Navratilova, Jana; Grombe, Ringo; Linsinger, Thomas P J; Købler, Carsten; Mølhave, Kristian; Larsen, Erik H

    2015-08-15

    Nanomaterials are increasingly used in food production and packaging, and validated methods for detection of nanoparticles (NPs) in foodstuffs need to be developed both for regulatory purposes and product development. Asymmetric flow field-flow fractionation with inductively coupled plasma mass spectrometric detection (AF(4)-ICP-MS) was applied for quantitative analysis of silver nanoparticles (AgNPs) in a chicken meat matrix following enzymatic sample preparation. For the first time an analytical validation of nanoparticle detection in a food matrix by AF(4)-ICP-MS has been carried out and the results showed repeatable and intermediately reproducible determination of AgNP mass fraction and size. The findings demonstrated the potential of AF(4)-ICP-MS for quantitative analysis of NPs in complex food matrices for use in food monitoring and control. The accurate determination of AgNP size distribution remained challenging due to the lack of certified size standards. PMID:25794724

  17. Accurate, efficient, and (iso)geometrically flexible collocation methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Gomez, Hector; Reali, Alessandro; Sangalli, Giancarlo

    2014-04-01

    We propose new collocation methods for phase-field models. Our algorithms are based on isogeometric analysis, a new technology that makes use of functions from computational geometry, such as, for example, Non-Uniform Rational B-Splines (NURBS). NURBS exhibit excellent approximability and controllable global smoothness, and can represent exactly most geometries encapsulated in Computer Aided Design (CAD) models. These attributes permitted us to derive accurate, efficient, and geometrically flexible collocation methods for phase-field models. The performance of our method is demonstrated by several numerical examples of phase separation modeled by the Cahn-Hilliard equation. We feel that our method successfully combines the geometrical flexibility of finite elements with the accuracy and simplicity of pseudo-spectral collocation methods, and is a viable alternative to classical collocation methods.

  18. Field-Based Teacher Education: Past, Present, and Future.

    ERIC Educational Resources Information Center

    Bruce, William C.; And Others

    This monograph consists of five papers originating from a 1974 conference entitled, "Field-Based Teacher Education for the '80's." The first paper, "Public School-College Cooperation in the Field-Based Education of Teachers (FBTE)--A Historical Perspective," by James L. Slay, focuses on how the historical development of public school cooperation…

  19. How to Plan a Theme Based Field Day

    ERIC Educational Resources Information Center

    Shea, Scott A.; Fagala, Lisa M.

    2006-01-01

    Having a theme-based field day is a great way to get away from doing the traditional track-and-field type events, such as the softball throw, 50 yard dash, and sack race, year after year. In a theme-based field day format all stations or events are planned around a particular theme. This allows the teacher to be creative while also adding…

  20. Design for validation, based on formal methods

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1990-01-01

    Validation of ultra-reliable systems decomposes into two subproblems: (1) quantification of the probability of system failure due to physical failure; (2) establishing that design errors are not present. Methods of design, testing, and analysis of ultra-reliable software are discussed. It is concluded that a design-for-validation approach based on formal methods is needed for the digital flight control systems problem, and also that formal methods will play a major role in the development of future high reliability digital systems.

  1. Vocabulary Teaching Based on Semantic-Field

    ERIC Educational Resources Information Center

    Wangru, Cao

    2016-01-01

    Vocabulary is an indispensable part of language and it is of vital importance for second language learners. Wilkins (1972) points out: "without grammar very little can be conveyed, without vocabulary nothing can be conveyed." Vocabulary teaching has experienced several stages characterized by grammatical-translation method, audio-lingual…

  2. Methodical problems of magnetic field measurements in umbra of sunspots

    NASA Astrophysics Data System (ADS)

    Lozitska, N. I.; Lozitsky, V. G.; Andryeyeva, O. A.; Akhtemov, Z. S.; Malashchuk, V. M.; Perebeynos, V. A.; Stepanyan, N. N.; Shtertser, N. I.

    2015-02-01

    Visual measurements of magnetic field strengths in sunspot umbra provide data on the magnetic field strength modulus directly, i.e., irrespective of any solar atmosphere model assumptions. In order to increase the accuracy of calculation of the solar magnetic indexes, such as B̄max or Bsp, the inclusion of all available data from different observatories is needed. In such measurements some methodical problems arise, which bring about inconsistency of the data samples combined from different sources; this work describes the problems at hand and proposes solutions for eliminating the inconsistencies. Data sets of sunspot magnetic field strength visual measurements from the Mt. Wilson, Crimea and Kyiv observatories in 2010-2012 have been processed. It is found that two measurement modes of Zeeman splitting, σ → σ and σ → π, yield almost the same results, if the data rows are long enough (over ∼100 sunspots in the central area of the Sun, r < 0.7 R). It is generally held that the most reliable measurement results are obtained for magnetic fields that exceed 2400 G. However, the empirical comparison of the internal data consistency of the samples produced by different observers shows that for reliable results this limit can be lowered to 1100 G. To increase the precision of measurements, empirical calibration of the line-shifter is required by using closely positioned telluric lines. Such calibrations have been performed at Kyiv and Crimea, but as far as we know, it has not been carried out at the Mt. Wilson observatory after its diffraction grating was replaced in 1994. Taking into consideration the highest quality and coverage of the Mt. Wilson sunspot observational data, the authors are convinced that reliable calibration of its instrument by narrow telluric lines is definitely required.

  3. An acoustic intensity-based method and its aeroacoustic applications

    NASA Astrophysics Data System (ADS)

    Yu, Chao

    Aircraft noise prediction and control is one of the most urgent and challenging tasks worldwide. A hybrid approach is usually considered for predicting the aerodynamic noise. The approach separates the field into aerodynamic source and acoustic propagation regions. Conventional CFD solvers are typically used to evaluate the flow field in the source region. Once the sound source is predicted, the linearized Euler equations (LEE) can be used to extend the near-field CFD solution to the mid-field acoustic radiation. However, the far-field extension is very time consuming and is often prohibited by excessive computer memory requirements. The FW-H method, instead, predicts the far-field radiation using the flow-field quantities on a closed control surface (that encloses the entire aerodynamic source region) if the wave equation is assumed outside. The surface integration, however, has to be carried out for each far-field location. This would still be computationally intensive for a practical 3D problem, even though the required CPU time is much lower than that of the LEE methods. Another difficulty of using the FW-H method for accurate far-field prediction is that a complete control surface may be infeasible to construct for most practical applications. Motivated by the need for accurate and efficient far-field prediction techniques, an Acoustic Intensity-Based Method (AIBM) has been developed based on an acoustic input from an open control surface. The AIBM assumes that the sound propagation is governed by the modified Helmholtz equation on and outside a control surface that encloses all the nonlinear effects and noise sources. The prediction of the acoustic radiation field is carried out by the inverse method with an input of the acoustic pressure derivative and the simultaneous, co-located acoustic pressure. The reconstructed acoustic radiation field using the AIBM is unique due to the unique continuation theory

  4. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).

  5. Defeaturing CAD models using a geometry-based size field and facet-based reduction operators.

    SciTech Connect

    Quadros, William Roshan; Owen, Steven James

    2010-04-01

    We propose a method to automatically defeature a CAD model by detecting irrelevant features using a geometry-based size field and a method to remove the irrelevant features via facet-based operations on a discrete representation. A discrete B-Rep model is first created by obtaining a faceted representation of the CAD entities. The candidate facet entities are then marked for reduction by using a geometry-based size field. This is accomplished by estimating local mesh sizes based on geometric criteria. If the field value at a facet entity goes below a user specified threshold value then it is identified as an irrelevant feature and is marked for reduction. The reduction of marked facet entities is primarily performed using an edge collapse operator. Care is taken to retain a valid geometry and topology of the discrete model throughout the procedure. The original model is not altered as the defeaturing is performed on a separate discrete model. Associativity between the entities of the discrete model and that of original CAD model is maintained in order to decode the attributes and boundary conditions applied on the original CAD entities onto the mesh via the entities of the discrete model. Example models are presented to illustrate the effectiveness of the proposed approach.

  6. Field-induced phase transitions in chiral smectic liquid crystals studied by the constant current method

    NASA Astrophysics Data System (ADS)

    Dhaouadi, H.; Zgueb, R.; Riahi, O.; Trabelsi, F.; Othman, T.

    2016-05-01

    In ferroelectric liquid crystals, phase transitions can be induced by an electric field. The constant current method allows these transitions to be quickly localized, and thus the (E,T) phase diagram of the studied product can be obtained. In this work, we make a slight modification to the measurement principles based on this method. This modification allows the characteristic parameters of the ferroelectric liquid crystal to be quantitatively measured. The use of a square current signal reveals ferroelectric hysteresis with remnant polarization at zero field, which indicates a memory effect in this compound.

  7. Enzyme catalysis enhanced dark-field imaging as a novel immunohistochemical method.

    PubMed

    Fan, Lin; Tian, Yanyan; Yin, Rong; Lou, Doudou; Zhang, Xizhi; Wang, Meng; Ma, Ming; Luo, Shouhua; Li, Suyi; Gu, Ning; Zhang, Yu

    2016-04-28

    Conventional immunohistochemistry is limited to subjective judgment based on human experience, and thus a quantitative immunohistochemical detection method is clinically required. 3,3'-Diaminobenzidine (DAB) aggregates, a type of staining product formed by conventional immunohistochemistry, were found for the first time to have a special optical property under dark-field imaging, and the mechanism was explored. On this basis, a novel immunohistochemical method based on dark-field imaging for detecting HER2 overexpressed in breast cancer was established, and a quantitative analysis standard and relevant software for measuring the scattering intensity were developed. In order to achieve a more sensitive detection, HRP (horseradish peroxidase)-labeled secondary antibodies conjugated to gold nanoparticles were constructed as nanoprobes to load more HRP enzymes, resulting in an enhanced DAB deposition as a dark-field label. Simultaneously, the gold nanoparticles also act as a synergistic enhancing agent due to their enzyme-mimicking catalysis and dark-field scattering properties. PMID:26786242

  8. Bi-color near infrared thermoreflectometry: a method for true temperature field measurement.

    PubMed

    Sentenac, Thierry; Gilblas, Rémi; Hernandez, Daniel; Le Maoult, Yannick

    2012-12-01

    In the context of radiative temperature field measurement, this paper deals with an innovative method, called bi-color near infrared thermoreflectometry, for the measurement of true temperature fields without prior knowledge of the emissivity field of an opaque material. This method is achieved by a simultaneous measurement, in the near infrared spectral band, of the radiance temperature fields and of the emissivity fields measured indirectly by reflectometry. The theoretical framework of the method is introduced and the principle of the measurements at two wavelengths is detailed. The crucial features of the indirect measurement of emissivity are the measurement of bidirectional reflectivities in a single direction and the introduction of an unknown variable, called the "diffusion factor." Radiance temperature and bidirectional reflectivities are then merged into a bichromatic system based on Kirchhoff's laws. The assumption of the system, based on the invariance of the diffusion factor for two near wavelengths, and the value of the chosen wavelengths, are then discussed in relation to a database of several material properties. A thermoreflectometer prototype was developed, dimensioned, and evaluated. Experiments were carried out to demonstrate its trueness in challenging cases. First, experiments were performed on a metallic sample with a high emissivity value. The bidirectional reflectivity was then measured from low signals. The results on erbium oxide demonstrate the power of the method with materials with high emissivity variations in the near infrared spectral band. PMID:23278013

  9. Surface profile and stress field evaluation using digital gradient sensing method

    NASA Astrophysics Data System (ADS)

    Miao, C.; Sundaram, B. M.; Huang, L.; Tippur, H. V.

    2016-09-01

    Shape and surface topography evaluation from measured orthogonal slope/gradient data is of considerable engineering significance since many full-field optical sensors and interferometers readily output such data accurately. This has applications ranging from metrology of optical and electronic elements (lenses, silicon wafers, thin film coatings), surface profile estimation, and wave front and shape reconstruction, to name a few. In this context, a new methodology for surface profile and stress field determination is advanced here, based on a recently introduced non-contact, full-field optical method called digital gradient sensing (DGS), capable of measuring small angular deflections of light rays, coupled with a robust finite-difference-based least-squares integration (HFLI) scheme in the Southwell configuration. The method is demonstrated by evaluating (a) surface profiles of mechanically warped silicon wafers and (b) stress gradients near growing cracks in planar phase objects.
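
    The least-squares integration step can be pictured with the plain first-order Southwell scheme below: each pair of neighbouring grid points contributes one finite-difference equation linking heights to the averaged measured slopes, and the sparse system is solved in the least-squares sense. This is a simplified stand-in for the HFLI scheme, with grid spacing and variable names assumed.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def integrate_slopes(sx, sy, dx=1.0):
    """sx, sy: measured dz/dx and dz/dy on an (ny, nx) grid with spacing dx."""
    ny, nx = sx.shape
    idx = np.arange(ny * nx).reshape(ny, nx)
    rows, cols, vals, rhs = [], [], [], []

    def add_eq(i_a, i_b, slope_avg):                 # z[i_b] - z[i_a] = slope_avg * dx
        k = len(rhs)
        rows += [k, k]; cols += [i_b, i_a]; vals += [1.0, -1.0]
        rhs.append(slope_avg * dx)

    for j in range(ny):
        for i in range(nx - 1):
            add_eq(idx[j, i], idx[j, i + 1], 0.5 * (sx[j, i] + sx[j, i + 1]))
    for j in range(ny - 1):
        for i in range(nx):
            add_eq(idx[j, i], idx[j + 1, i], 0.5 * (sy[j, i] + sy[j + 1, i]))

    A = sp.csr_matrix((vals, (rows, cols)), shape=(len(rhs), ny * nx))
    z = lsqr(A, np.array(rhs))[0]
    return z.reshape(ny, nx) - z.mean()              # height map up to a constant
```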

  10. Spectroscopic method for measuring plasma magnetic fields having arbitrary distributions of direction and amplitude.

    PubMed

    Stambulchik, E; Tsigutkin, K; Maron, Y

    2007-06-01

    An approach for measurements of magnetic fields, based on the comparison of the magnetic-field-induced contributions to the line shapes of different fine-structure components of an atomic multiplet, is proposed and experimentally demonstrated. Contrary to the methods based on detecting an anisotropy in either the emitted radiation or in the dispersion properties of the medium, the present method is applicable when the field direction or amplitude vary significantly in the region viewed or during the time of observation. The technique can be used even when the line shapes are Stark or Doppler dominated. It has potential applications in laser-matter interactions, plasmas driven by high-current pulses, and astrophysics. PMID:17677852

  11. Silica microwire-based interferometric electric field sensor.

    PubMed

    Han, Chunyang; Lv, Fangxing; Sun, Chen; Ding, Hui

    2015-08-15

    Silica microwire, as an optical waveguide whose diameter is close to or smaller than the wavelength of the guided light, is of great interest because it exhibits a number of excellent properties such as tight confinement, large evanescent fields, and great configurability. Here, we report a silica microwire-based compact photonic sensor for real-time detection of high electric fields. This device contains an interferometer with propylene carbonate cladding. Based on the Kerr electro-optic effect of propylene carbonate, an applied intense transient electric field changes the refractive index of propylene carbonate, which shifts the interferometric fringe. Therefore, the electric field can be demodulated by monitoring the fringe shift. The sensor was successfully used to detect an alternating electric field with a frequency of 50 Hz and an impulse electric field with a duration of 200 μs. This work lays a foundation for future applications in electric field sensing. PMID:26274634

  12. Tls Field Data Based Intensity Correction for Forest Environments

    NASA Astrophysics Data System (ADS)

    Heinzel, J.; Huber, M. O.

    2016-06-01

    Terrestrial laser scanning (TLS) is increasingly used for forestry applications. Besides the three dimensional point coordinates, the 'intensity' of the reflected signal plays an important role in forestry and vegetation studies. The usefulness of the signal intensity stems from the wavelength of the laser, which lies in the near infrared (NIR) for most scanners. The NIR is highly indicative of various vegetation characteristics. However, the intensity as recorded by most terrestrial scanners is distorted by both external and scanner-specific factors. Since details about the system-internal alteration of the signal are often unknown to the user, model-driven approaches are impractical. On the other hand, existing data-driven calibration procedures require laborious acquisition of separate reference datasets or areas of homogeneous reflection characteristics from the field data. In order to fill this gap, the present study introduces an approach to correct unwanted intensity variations directly from the point cloud of the field data. The focus is on the variation over range and on sensor-specific distortions. Instead of an absolute calibration of the values, a relative correction within the dataset is sufficient for most forestry applications. Finally, a method similar to time series detrending is presented, with the only precondition being a relatively equal distribution of forest objects and materials over range. Our test data cover 50 terrestrial scans captured with a FARO Focus 3D S120 scanner using a laser wavelength of 905 nm. Practical tests demonstrate that our correction method removes range- and scanner-based alterations of the intensity.
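
    One simple, purely data-driven way to realize the relative correction over range (not necessarily the procedure used in the study) is to estimate a binned-median trend of intensity versus range from the point cloud itself and divide it out, as sketched below; it assumes the mix of materials is roughly constant over range.

```python
import numpy as np

def detrend_intensity(ranges, intensities, n_bins=100):
    """Relative range correction: divide by a binned-median intensity trend."""
    bins = np.linspace(ranges.min(), ranges.max(), n_bins + 1)
    centers = 0.5 * (bins[:-1] + bins[1:])
    which = np.clip(np.digitize(ranges, bins) - 1, 0, n_bins - 1)
    trend = np.array([np.median(intensities[which == b]) if np.any(which == b)
                      else np.nan for b in range(n_bins)])
    ok = ~np.isnan(trend)
    trend_at_pts = np.interp(ranges, centers[ok], trend[ok])
    return intensities / trend_at_pts                # range-corrected intensity
```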

  13. Hardware implementation of N-LUT method using field programmable gate array technology

    NASA Astrophysics Data System (ADS)

    Kwon, Do-woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2011-02-01

    Hardware implementation for holographic 3D display applications has been investigated by many researchers. In this paper, we propose a hardware implementation method for the novel look-up table (N-LUT) method using Field Programmable Gate Array (FPGA) technology. In the proposed method, the calculation process is divided into segment blocks for fast parallel processing of the N-LUT computation. That is, by processing these segmented blocks in parallel on the FPGA, the CGH calculation speed can be increased.

  14. [Family planning methods based on fertility awareness].

    PubMed

    Haghenbeck-Altamirano, Francisco Javier; Ayala-Yáñez, Rodrigo; Herrera-Meillón, Héctor

    2012-04-01

    The desire to limit fertility is recognized both by individuals and by nations. The concept of family planning is based on the right of individuals and couples to regulate their fertility and is grounded in the areas of health, human rights and population. Despite the changes in policies and family planning programs worldwide, there are large geographic areas that have not yet met the minimum requirements in this regard; the reasons are multiple, including economic reasons but also ideological or religious ones. Knowledge of the physiology of the menstrual cycle, specifically the ovulation process, has been further enhanced due to advances in reproductive medicine research. The series of events around ovulation is used to detect the "fertile window", so that women can either postpone a pregnancy or actively seek one. The aim of this article is to review the current methods of family planning based on fertility awareness, from the historical methods like core temperature determination and the rhythm method, to the most popular ones like the Billings ovulation method and the sympto-thermal method, and current methods like the two-day and the standard days method. Methods that require electronic devices or purpose-designed computer programs to detect this "window of fertility" are also mentioned. The spread and popularity of these methods is low and knowledge of them among physicians, including gynecologists, is also quite scarce. The effectiveness of these methods has been difficult to quantify due to the lack of well-designed, randomized studies and the small populations of patients using these methods. The publications report high effectiveness with proper use, but not with typical use, which indicates the need for increased awareness among medical practitioners and trainers, leading to better use and understanding of the methods and reducing these discrepancies. PMID:22808858

  15. 63,65Cu NMR Method in a Local Field for Investigation of Copper Ore Concentrates

    NASA Astrophysics Data System (ADS)

    Gavrilenko, A. N.; Starykh, R. V.; Khabibullin, I. Kh.; Matukhin, V. L.

    2015-01-01

    To choose the most efficient method and ore beneficiation flow diagram, it is important to know the physical and chemical properties of ore concentrates. The feasibility of applying the 63,65Cu nuclear magnetic resonance (NMR) method in a local field to study the properties of copper ore concentrates in the copper-iron-sulfur system is demonstrated. The 63,65Cu NMR spectrum is measured in a local field for a copper concentrate sample and the relaxation parameters (times T1 and T2) are obtained. The spectrum obtained was used to identify a mineral (chalcopyrite) contained in the concentrate. Based on the experimental data, comparative characteristics of natural chalcopyrite and beneficiated copper concentrate are given. The feasibility of applying the NMR method in a local field to explore mineral deposits is analyzed.

  16. Performance of climate field reconstruction methods over multiple seasons and climate variables

    NASA Astrophysics Data System (ADS)

    Dannenberg, Matthew P.; Wise, Erika K.

    2013-09-01

    Studies of climate variability require long time series of data but are limited by the absence of preindustrial instrumental records. For such studies, proxy-based climate reconstructions, such as those produced from tree-ring widths, provide the opportunity to extend climatic records into preindustrial periods. Climate field reconstruction (CFR) methods are capable of producing spatially-resolved reconstructions of climate fields. We assessed the performance of three commonly used CFR methods (canonical correlation analysis, point-by-point regression, and regularized expectation maximization) over spatially-resolved fields using multiple seasons and climate variables. Warm- and cool-season geopotential height, precipitable water, and surface temperature were tested for each method using tree-ring chronologies. Spatial patterns of reconstructive skill were found to be generally consistent across each of the methods, but the robustness of the validation metrics varied by CFR method, season, and climate variable. The most robust validation metrics were achieved with geopotential height, the October through March temporal composite, and the Regularized Expectation Maximization method. While our study is limited to assessment of skill over multidecadal (rather than multi-centennial) time scales, our findings suggest that the climate variable of interest, seasonality, and spatial domain of the target field should be considered when assessing potential CFR methods for real-world applications.

  17. A Multipole Expansion Method for Analyzing Lightning Field Changes

    NASA Technical Reports Server (NTRS)

    Koshak, William J.; Krider, E. Philip; Murphy, Martin J.

    1998-01-01

    Changes in the surface electric field are frequently used to infer the locations and magnitudes of lightning-caused changes in thundercloud charge distributions. The traditional procedure is to assume that the charges that are effectively deposited by the flash can be modeled either as a single point charge (the Q-model) or a point dipole (the P-model). The Q-model has 4 unknown parameters and provides a good description of many cloud-to-ground (CG) flashes. The P-model has 6 unknown parameters and describes many intracloud (IC) discharges. In this paper, we introduce a new analysis method that assumes that the change in the cloud charge can be described by a truncated multipole expansion, i.e., there are both monopole and dipole terms in the unknown source distribution, and both terms are applied simultaneously. This method can be used to analyze CG flashes that are accompanied by large changes in the cloud dipole moment and complex IC discharges. If there is enough information content in the measurements, the model can also be generalized to include quadrupole and higher order terms. The parameters of the charge moments are determined using a 3-dimensional grid search in combination with a linear inversion, and because of this, local minima in the error function and the associated solution ambiguities are avoided. The multipole method has been tested on computer simulated sources and on natural lightning at the NASA Kennedy Space Center and USAF Eastern Range.
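
    A toy version of the "grid search plus linear inversion" strategy is sketched below: for each candidate source location, the design matrix is linear in the unknown charge moments, so a least-squares fit gives the monopole and (here, vertical-only) dipole terms, and the location with the smallest misfit is kept. The image-charge field expression, sensor layout, and the restriction to a vertical dipole are simplifying assumptions.

```python
import numpy as np

EPS0 = 8.854e-12

def monopole_col(sensors, src):
    """Vertical field change at ground sensors per unit charge at src = (x, y, H),
    using the image-charge expression for a perfectly conducting ground."""
    d2 = (sensors[:, 0] - src[0])**2 + (sensors[:, 1] - src[1])**2
    return src[2] / (2 * np.pi * EPS0 * (src[2]**2 + d2) ** 1.5)

def fit_source(sensors, dE, candidates, dh=1.0):
    best = None
    for src in candidates:                           # grid search over locations
        col_q = monopole_col(sensors, src)
        # vertical-dipole column as a numerical derivative w.r.t. source height
        col_p = (monopole_col(sensors, (src[0], src[1], src[2] + dh)) - col_q) / dh
        A = np.column_stack([col_q, col_p])
        m, *_ = np.linalg.lstsq(A, dE, rcond=None)   # linear inversion for (Q, p_z)
        misfit = np.linalg.norm(A @ m - dE)
        if best is None or misfit < best[0]:
            best = (misfit, src, m)
    return best      # (residual, location, [charge, vertical dipole moment])
```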

  18. A Multipole Expansion Method for Analyzing Lightning Field Changes

    NASA Technical Reports Server (NTRS)

    Koshak, William J.; Krider, E. Philip; Murphy, Martin J.

    1999-01-01

    Changes in the surface electric field are frequently used to infer the locations and magnitudes of lightning-caused changes in thundercloud charge distributions. The traditional procedure is to assume that the charges that are effectively deposited by the flash can be modeled either as a single point charge (the Q model) or a point dipole (the P model). The Q model has four unknown parameters and provides a good description of many cloud-to-ground (CG) flashes. The P model has six unknown parameters and describes many intracloud (IC) discharges. In this paper we introduce a new analysis method that assumes that the change in the cloud charge can be described by a truncated multipole expansion, i.e., there are both monopole and dipole terms in the unknown source distribution, and both terms are applied simultaneously. This method can be used to analyze CG flashes that are accompanied by large changes in the cloud dipole moment and complex IC discharges. If there is enough information content in the measurements, the model can also be generalized to include quadrupole and higher order terms. The parameters of the charge moments are determined using a three-dimensional grid search in combination with a linear inversion, and because of this, local minima in the error function and the associated solution ambiguities are avoided. The multipole method has been tested on computer-simulated sources and on natural lightning at the NASA Kennedy Space Center and U.S. Air Force Eastern Range.

  19. A copula-based downscaling methodology of RCM precipitation fields

    NASA Astrophysics Data System (ADS)

    Lorenz, Manuel

    2016-04-01

    Many hydrological studies require long term precipitation time series at a fine spatial resolution. While regional climate models are nowadays capable of simulating reasonable high-resolution precipitation fields, the long computing time makes the generation of such long term time series often infeasible for practical purposes. We introduce a comparatively fast stochastic approach to simulate precipitation fields which resemble the spatial dependencies and density distributions of the dynamic model. Nested RCM simulations at two different spatial resolutions serve as a training set to derive the statistics which will then be used in a random path simulation where fine scale precipitation values are simulated from a multivariate Gaussian Copula. The chosen RCM is the Weather Research and Forecasting Model (WRF). Simulated daily precipitation fields of the RCM are based on ERA-Interim reanalysis data from 1971 to 2000 and are available at a spatial resolution of 42 km (Europe) and 7 km (Germany). In order to evaluate the method, the stochastic algorithm is applied to the nested German domain and the resulting spatial dependencies and density distributions are compared to the original 30 years long 7 km WRF simulations. Preliminary evaluations based on QQ-plots for one year indicate that the distributions of the downscaled values are very similar to the original values for most cells. In this presentation, a detailed overview of the stochastic downscaling algorithm and the evaluation of the long term simulations are given. Additionally, an outlook for a 5 km and 1 km downscaling experiment for urban hydrology studies is presented.
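
    The core sampling step of such a Gaussian-copula simulation can be pictured as below: a latent multivariate normal draw is converted to uniform scores and then mapped through per-cell inverse empirical CDFs. The correlation matrix and the per-cell training samples are assumed to have been estimated beforehand from the nested WRF runs; the random-path conditioning used in the study is omitted.

```python
import numpy as np
from scipy import stats

def sample_copula_field(corr, marginal_samples, rng=None):
    """corr: (n_cells, n_cells) copula correlation matrix.
    marginal_samples: list of 1-D arrays of training precipitation per cell."""
    rng = np.random.default_rng(rng)
    z = rng.multivariate_normal(np.zeros(corr.shape[0]), corr)  # latent Gaussian draw
    u = stats.norm.cdf(z)                                       # uniform scores
    # inverse empirical CDF per cell maps the scores back to precipitation
    return np.array([np.quantile(m, ui) for m, ui in zip(marginal_samples, u)])
```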

  20. New method of applying conformal group to quantum fields

    NASA Astrophysics Data System (ADS)

    Han, Lei; Wang, Hai-Jun

    2015-09-01

    Most previous work on applying the conformal group to quantum fields has emphasized its invariant aspects, whereas in this paper we find that the conformal group can give us running quantum fields, with some constants, vertices and Green functions running, compatible with the scaling properties of the renormalization group method (RGM). We start with the renormalization group equation (RGE), in which the differential operator happens to be a generator of the conformal group, named the dilatation operator. In addition we link the operator/spatial representation and unitary/spinor representation of the conformal group by examining a conformal-invariant interaction vertex, mimicking the similar process of the Lorentz transformation applied to the Dirac equation. By this kind of application, we find that quite a few interaction vertices are separately invariant under certain transformations (generators) of the conformal group. The significance of these transformations and vertices is explained. Using a particular generator of the conformal group, we suggest a new equation analogous to the RGE which may lead a system to evolve from the asymptotic regime to the nonperturbative regime, in contrast to the effect of the conventional RGE, which runs from the nonperturbative regime to the asymptotic regime. Supported by NSFC (91227114)

  1. Hybrid star structure with the Field Correlator Method

    NASA Astrophysics Data System (ADS)

    Burgio, G. F.; Zappalà, D.

    2016-03-01

    We explore the relevance of the color-flavor locking phase in the equation of state (EoS) built with the Field Correlator Method (FCM) for the description of the quark matter core of hybrid stars. For the hadronic phase, we use the microscopic Brueckner-Hartree-Fock (BHF) many-body theory, and its relativistic counterpart, i.e. the Dirac-Brueckner (DBHF). We find that the main features of the phase transition are directly related to the values of the quark-antiquark potential V1, the gluon condensate G2 and the color-flavor superconducting gap Δ. We confirm that the mapping between the FCM and the CSS (constant speed of sound) parameterization holds true even in the case of paired quark matter. The inclusion of hyperons in the hadronic phase and its effect on the mass-radius relation of hybrid stars is also investigated.

  2. Methods for Quantitative Interpretation of Retarding Field Analyzer Data

    SciTech Connect

    Calvey, J.R.; Crittenden, J.A.; Dugan, G.F.; Palmer, M.A.; Furman, M.; Harkay, K.

    2011-03-28

    Over the course of the CesrTA program at Cornell, over 30 Retarding Field Analyzers (RFAs) have been installed in the CESR storage ring, and a great deal of data has been taken with them. These devices measure the local electron cloud density and energy distribution, and can be used to evaluate the efficacy of different cloud mitigation techniques. Obtaining a quantitative understanding of RFA data requires use of cloud simulation programs, as well as a detailed model of the detector itself. In a drift region, the RFA can be modeled by postprocessing the output of a simulation code, and one can obtain best fit values for important simulation parameters with a chi-square minimization method.

  3. Apparatus and method for producing an artificial gravitational field

    NASA Technical Reports Server (NTRS)

    Mccanna, Jason (Inventor)

    1993-01-01

    An apparatus and method is disclosed for producing an artificial gravitational field in a spacecraft by rotating the same around a spin axis. The centrifugal force thereby created acts as an artificial gravitational force. The apparatus includes an engine which produces a drive force offset from the spin axis to drive the spacecraft towards a destination. The engine is also used as a counterbalance for a crew cabin for rotation of the spacecraft. Mass of the spacecraft, which may include either the engine or crew cabin, is shifted such that the centrifugal force acting on that mass is no longer directed through the center of mass of the craft. This off-center centrifugal force creates a moment that counterbalances the moment produced by the off-center drive force to eliminate unwanted rotation which would otherwise be precipitated by the offset drive force.

  4. Full field imaging based instantaneous hyperspectral absolute refractive index measurement

    SciTech Connect

    Baba, Justin S; Boudreaux, Philip R

    2012-01-01

    Multispectral refractometers typically measure refractive index (RI) at discrete monochromatic wavelengths via a serial process. We report on the demonstration of a white light, full field imaging based refractometer capable of instantaneous multispectral measurement of the absolute RI of clear liquid/gel samples across the entire visible light spectrum. The broad optical bandwidth refractometer is capable of hyperspectral measurement of RI in the range 1.30-1.70 between 400 nm and 700 nm, with a maximum error of 0.0036 units (0.24% of actual) at 414 nm for an RI = 1.50 sample. We present system design and calibration method details as well as results from a system validation sample.

  5. Using Field Trips and Field-Based Laboratories to Teach Undergraduate Soil Science

    NASA Astrophysics Data System (ADS)

    Brevik, Eric C.; Steffan, Joshua; Hopkins, David

    2015-04-01

    Classroom activities can provide important background information allowing students to understand soils. However, soils are formed in nature; therefore, understanding their properties and spatial relationships in the field is a critical component for gaining a comprehensive and holistic understanding of soils. Field trips and field-based laboratories provide students with the field experiences and skills needed to gain this understanding. Field studies can 1) teach students the fundamentals of soil descriptions, 2) expose students to features (e.g., structure, redoximorphic features, clay accumulation, etc.) discussed in the classroom, and 3) allow students to verify for themselves concepts discussed in the more theoretical setting of the classroom. In each case, actually observing these aspects of soils in the field reinforces and improves upon classroom learning and comprehension. In addition, the United States Department of Agriculture's Natural Resources Conservation Service has identified a lack of fundamental field skills as a problem when they hire recent soil science graduates, thereby demonstrating the need for increased field experiences for the modern soil science student. In this presentation we will provide examples of field trips and field-based laboratories that we have designed for our undergraduate soil science classes, discuss the learning objectives, and provide several examples of comments our students have made in response to these field experiences.

  6. A Property Restriction Based Knowledge Merging Method

    NASA Astrophysics Data System (ADS)

    Che, Haiyan; Chen, Wei; Feng, Tie; Zhang, Jiachen

    Merging new instance knowledge extracted from the Web according to certain domain ontology into the knowledge base (KB for short) is essential for knowledge management and should be processed carefully, since this may introduce redundant or contradictory knowledge, and the quality of the knowledge in the KB, which is very important for a knowledge-based system to provide users high quality services, will suffer from such "bad" knowledge. This paper advocates a property restriction based knowledge merging method; it can identify equivalent instances and redundant or contradictory knowledge according to the property restrictions defined in the domain ontology, and can consolidate the knowledge about equivalent instances and discard the redundancy and conflict to keep the KB compact and consistent. This knowledge merging method has been used in a semantic-based search engine project, CRAB, and the effect is satisfactory.

  7. Acoustic spectroscopy: A powerful analytical method for the pharmaceutical field?

    PubMed

    Bonacucina, Giulia; Perinelli, Diego R; Cespi, Marco; Casettari, Luca; Cossi, Riccardo; Blasi, Paolo; Palmieri, Giovanni F

    2016-04-30

    Acoustics is one of the emerging technologies developed to minimize processing, maximize quality and ensure the safety of pharmaceutical, food and chemical products. The operating principle of acoustic spectroscopy is the measurement of the ultrasound pulse intensity and phase after its propagation through a sample. The main goal of this technique is to characterise concentrated colloidal dispersions without dilution, in such a way as to be able to analyse non-transparent and even highly structured systems. This review presents the state of the art of ultrasound-based techniques in pharmaceutical pre-formulation and formulation steps, showing their potential, applicability and limits. It reports in a simplified version the theory behind acoustic spectroscopy, describes the most common equipment on the market, and finally overviews different studies performed on systems and materials used in the pharmaceutical or related fields. PMID:26976503

  8. Recommendation advertising method based on behavior retargeting

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce. Ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting which can avoid missed ad clicks caused by objective factors and can track changes in the user's interests over time. Experiments show that our new method has a significant effect and can be further applied to online systems.

  9. Polyfluorene-based organic field-effect transistors

    NASA Astrophysics Data System (ADS)

    Hamilton, Michael C.

    and demonstrated the combination of several physical phenomena, including slow carrier transport and the existence of few reversible and many irreversible trap states. A relatively low (65°C) optimal operating temperature of organic-based devices was observed. The trap states were further characterized using the photodischarge method to investigate the kinetics and distribution of trap states. A narrow distribution of trap states at 0.3eV above the valence band was found, which is consistent with field-effect mobility and bias temperature stress results.

  10. Singular boundary method for global gravity field modelling

    NASA Astrophysics Data System (ADS)

    Cunderlik, Robert

    2014-05-01

    The singular boundary method (SBM) and method of fundamental solutions (MFS) are meshless boundary collocation techniques that use the fundamental solution of a governing partial differential equation (e.g. the Laplace equation) as their basis functions. They have been developed to avoid singular numerical integration as well as mesh generation in the traditional boundary element method (BEM). SBM has been proposed to overcome a main drawback of MFS - its controversial fictitious boundary outside the domain. The key idea of SBM is to introduce origin intensity factors that isolate the singularities of the fundamental solution and its derivatives using appropriate regularization techniques. Consequently, the source points can be placed directly on the real boundary and coincide with the collocation nodes. In this study we deal with SBM applied to high-resolution global gravity field modelling. The first numerical experiment presents a numerical solution to the fixed gravimetric boundary value problem. The achieved results are compared with the numerical solutions obtained by MFS or the direct BEM, indicating the efficiency of all methods. In the second numerical experiment, SBM is used to derive the geopotential and its first derivatives from the Tzz components of the gravity disturbing tensor observed by the GOCE satellite mission. A determination of the origin intensity factors allows the disturbing potential and gravity disturbances to be evaluated directly on the Earth's surface where the source points are located. To achieve high-resolution numerical solutions, large-scale parallel computations are performed on a cluster with 1 TB of distributed memory, and an iterative elimination of far-zone contributions is applied.
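
    To illustrate the collocation idea behind such meshless boundary methods, the following sketch solves a Dirichlet problem for the 2D Laplace equation with the classical method of fundamental solutions on a fictitious boundary; it is not the paper's SBM with origin intensity factors, and the geometry, boundary datum, and point counts are illustrative assumptions.

      import numpy as np

      # Sketch of the method of fundamental solutions (MFS) for the 2D Laplace
      # equation on the unit disk.  The SBM above places sources on the real
      # boundary and regularizes the singularity; here the simpler
      # fictitious-boundary MFS illustrates the collocation idea.
      n = 200                                   # collocation points on the boundary
      t = 2 * np.pi * np.arange(n) / n
      xb = np.c_[np.cos(t), np.sin(t)]          # boundary points (unit circle)
      xs = 1.5 * xb                             # sources on a fictitious circle

      def G(p, q):
          """Fundamental solution of the 2D Laplace equation, -ln(r)/(2*pi)."""
          r = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=2)
          return -np.log(r) / (2 * np.pi)

      g = xb[:, 0] * xb[:, 1]                   # assumed Dirichlet data u = x*y (harmonic)
      A = G(xb, xs)                             # collocation matrix
      coef, *_ = np.linalg.lstsq(A, g, rcond=None)

      # Evaluate the harmonic approximation at an interior point, compare with x*y.
      p = np.array([[0.3, 0.2]])
      u = G(p, xs) @ coef
      print(u[0], 0.3 * 0.2)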

  11. Matched field localization based on CS-MUSIC algorithm

    NASA Astrophysics Data System (ADS)

    Guo, Shuangle; Tang, Ruichun; Peng, Linhui; Ji, Xiaopeng

    2016-04-01

    The problem caused by too few or too many snapshots and by coherent sources in underwater acoustic positioning is considered. A matched field localization algorithm based on CS-MUSIC (Compressive Sensing Multiple Signal Classification) is proposed based on the sparse mathematical model of underwater positioning. The signal matrix is calculated through the SVD (Singular Value Decomposition) of the observation matrix. The observation matrix in the sparse mathematical model is replaced by the signal matrix, and a new, more concise sparse mathematical model is obtained, which reduces both the scale of the localization problem and the noise level; the new sparse mathematical model is then solved by the CS-MUSIC algorithm, which is a combination of the CS (Compressive Sensing) method and the MUSIC (Multiple Signal Classification) method. The algorithm proposed in this paper can effectively overcome the difficulties caused by correlated sources and a shortage of snapshots, and it can also reduce the time complexity and noise level of the localization problem by using the SVD of the observation matrix when the number of snapshots is large, as is proved in this paper.
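
    A minimal sketch of the SVD preprocessing step described above, in which the sensor-by-snapshot observation matrix is replaced by a much smaller signal matrix spanned by the leading singular vectors; the array, source, and noise parameters are illustrative and the CS-MUSIC localization step itself is omitted.

      import numpy as np

      rng = np.random.default_rng(0)
      m, t, k = 32, 200, 2                      # sensors, snapshots, assumed sources

      # Illustrative observation matrix: two fully coherent plane-wave-like
      # sources plus noise, sampled by an m-element array over t snapshots.
      steering = np.exp(1j * np.outer(np.arange(m), [0.3, 0.7]))
      s1 = rng.standard_normal(t)
      signals = np.vstack([s1, 0.8 * s1])       # coherent source pair
      Y = steering @ signals + 0.1 * (rng.standard_normal((m, t))
                                      + 1j * rng.standard_normal((m, t)))

      # SVD of the observation matrix; the k leading left singular vectors,
      # scaled by their singular values, form the reduced "signal matrix".
      U, s, _ = np.linalg.svd(Y, full_matrices=False)
      Y_sig = U[:, :k] * s[:k]                  # m x k instead of m x t
      print(Y.shape, "->", Y_sig.shape)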

  12. FieldChopper, a new tool for automatic model generation and virtual screening based on molecular fields.

    PubMed

    Kalliokoski, Tuomo; Ronkko, Toni; Poso, Antti

    2008-06-01

    Algorithms were developed for ligand-based virtual screening of molecular databases. FieldChopper (FC) is based on the discretization of the electrostatic and van der Waals fields into three classes. A model is built from a set of superimposed active molecules. The similarity of the compounds in the database to the model is then calculated using matrices that define scores for comparing field values of different categories. The method was validated on 12 publicly available data sets by comparing it to the electrostatic similarity comparison program EON. The results suggest that FC is competitive with more complex descriptors and could be used as a molecular sieve in virtual screening experiments when multiple active ligands are known. PMID:18489083

  13. Evidence-Based Practice: Integrating Classroom Curriculum and Field Education

    ERIC Educational Resources Information Center

    Tuchman, Ellen; Lalane, Monique

    2011-01-01

    This article describes the use of problem-based learning to teach the scope and consequences of evidence-based practices in mental health through an innovative assignment that integrates classroom and field learning. The authors illustrate the planning and implementation of the Evidence-Based Practice: Integrating Classroom Curriculum and Field…

  14. Optimal assignment methods for ligand-based virtual screening

    PubMed Central

    2009-01-01

    Background Ligand-based virtual screening experiments are an important task in the early drug discovery stage. An ambitious aim in each experiment is to disclose active structures based on new scaffolds. To perform these "scaffold-hoppings" for individual problems and targets, a plethora of different similarity methods based on diverse techniques have been published in recent years. The optimal assignment approach on molecular graphs, a successful method in the field of quantitative structure-activity relationships, has not been tested as a ligand-based virtual screening method so far. Results We evaluated two already published and two new optimal assignment methods on various data sets. To emphasize the "scaffold-hopping" ability, we used the information of chemotype clustering analyses in our evaluation metrics. Comparisons with literature results show an improved early recognition performance and comparable results over the complete data set. A new method based on two different assignment steps shows an increased "scaffold-hopping" behavior together with a good early recognition performance. Conclusion The presented methods show a good combination of chemotype discovery and enrichment of active structures. Additionally, the optimal assignment on molecular graphs has the advantage that the mappings can be investigated and interpreted, allowing precise modifications of internal parameters of the similarity measure for specific targets. All methods have low computation times, which makes them applicable to screening large data sets. PMID:20150995

  15. Integrating Field-Based Research into the Classroom: An Environmental Sampling Exercise

    ERIC Educational Resources Information Center

    DeSutter, T.; Viall, E.; Rijal, I.; Murdoff, M.; Guy, A.; Pang, X.; Koltes, S.; Luciano, R.; Bai, X.; Zitnick, K.; Wang, S.; Podrebarac, F.; Casey, F.; Hopkins, D.

    2010-01-01

    A field-based, soil methods, and instrumentation course was developed to expose graduate students to numerous strategies for measuring soil parameters. Given the northern latitude of North Dakota State University and the rapid onset of winter, this course met once per week for the first 8 weeks of the fall semester and centered on the field as a…

  16. Comparison of aquatic macroinvertebrate samples collected using different field methods

    USGS Publications Warehouse

    Lenz, Bernard N.; Miller, Michael A.

    1996-01-01

    Government agencies, academic institutions, and volunteer monitoring groups in the State of Wisconsin collect aquatic macroinvertebrate data to assess water quality. Sampling methods differ among agencies, reflecting the differences in the sampling objectives of each agency. Lack of information about data comparability impedes data sharing among agencies, which can result in duplicated sampling efforts or the underutilization of available information. To address these concerns, comparisons were made of macroinvertebrate samples collected from wadeable streams in Wisconsin by personnel from the U.S. Geological Survey-National Water Quality Assessment Program (USGS-NAWQA), the Wisconsin Department of Natural Resources (WDNR), the U.S. Department of Agriculture-Forest Service (USDA-FS), and volunteers from the Water Action Volunteer-Water Quality Monitoring Program (WAV). This project was part of the Intergovernmental Task Force on Monitoring Water Quality (ITFM) Wisconsin Water Resources Coordination Project. The numbers, types, and environmental tolerances of the organisms collected were analyzed to determine if the four different field methods that were used by the different agencies and volunteer groups provide comparable results. Additionally, this study compared the results of samples taken from different locations and habitats within the same streams.

  17. Generalized method of eigenoscillations for near-field optical microscopy

    NASA Astrophysics Data System (ADS)

    Jiang, Bor-Yuan; Zhang, Lingfeng; Castro Neto, Antonio; Basov, Dimitri; Fogler, Michael

    2015-03-01

    Electromagnetic interaction between a sub-wavelength particle (the "probe") and a material surface (the "sample") is studied theoretically. The interaction is shown to be governed by a series of resonances (eigenoscillations), corresponding to surface polariton modes localized near the probe. The resonance parameters depend on the dielectric function and geometry of the probe, as well as the surface reflectivity of the material. Calculation of such resonances is carried out for several axisymmetric particle shapes (spherical, spheroidal, and pear-shaped). For spheroids an efficient numerical method is proposed, capable of handling cases of large or strongly momentum-dependent surface reflectivity. The method is applied to modeling near-field spectroscopy studies of various materials. For highly resonant materials such as aluminum oxide (by itself or covered with graphene) a rich structure of the simulated signal is found, including multi-peak spectra and nonmonotonic approach curves. These features have a strong dependence on physical parameters, e.g., the probe shape. For less resonant materials such as silicon oxide the dependence is weaker, and the spheroid model is generally applicable.

  18. Cultivating Kuumba: Applying Art Based Strategies to Any Field

    ERIC Educational Resources Information Center

    Ellis, Auburn Elizabeth

    2015-01-01

    There are many contemporary issues to address in adult education. This paper explores art-based strategies and the utilization of creativity (Kuumba) to expand learning for global communities in any field of practice. Benefits of culturally grounded approaches to adult education are discussed. Images from ongoing field research can be viewed at…

  19. Utilizing Field-Based Instruction as an Effective Teaching Strategy

    ERIC Educational Resources Information Center

    Kozar, Joy M.; Marcketti, Sara B.

    2008-01-01

    The purpose of this study was to examine the effectiveness of field-based instruction on student learning outcomes. Researchers in the past have noted the importance of engaging students on a deeper level through the use of active course designs. To investigate the outcomes of active learning, two field assignments created for two separate…

  20. A Calibration Method for Wide-Field Multicolor Photometric Systems

    NASA Astrophysics Data System (ADS)

    Zhou, Xu; Chen, Jiansheng; Xu, Wen; Zhang, Mei; Jiang, Zhaoji; Zheng, Zhongyuan; Zhu, Jin

    1999-07-01

    The purpose of this paper is to present a method to self-calibrate the spectral energy distribution (SED) of objects in a survey based on the fitting of a SED library to observed multicolor photometry. We adopt, for illustrative purposes, the Vilnius and Gunn & Stryker SED libraries. The self-calibration technique can improve the quality of observations which are not taken under perfectly photometric conditions. The more passbands used for the photometry, the better the results. This technique has been applied to the BATC 15 passband CCD survey.

  1. Assessment of density functional theory based ΔSCF (self-consistent field) and linear response methods for longest wavelength excited states of extended π-conjugated molecular systems

    SciTech Connect

    Filatov, Michael; Huix-Rotllant, Miquel

    2014-07-14

    Computational investigation of the longest wavelength excitations in a series of cyanines and linear n-acenes is undertaken with the use of standard spin-conserving linear response time-dependent density functional theory (TD-DFT) as well as its spin-flip variant and a ΔSCF method based on the ensemble DFT. The spin-conserving linear response TD-DFT fails to accurately reproduce the lowest excitation energy in these π-conjugated systems by strongly overestimating the excitation energies of cyanines and underestimating the excitation energies of n-acenes. The spin-flip TD-DFT is capable of correcting the underestimation of excitation energies of n-acenes by bringing in the non-dynamic electron correlation into the ground state; however, it does not fully correct for the overestimation of the excitation energies of cyanines, for which the non-dynamic correlation does not seem to play a role. The ensemble DFT method employed in this work is capable of correcting for the effect of missing non-dynamic correlation in the ground state of n-acenes and for the deficient description of differential correlation effects between the ground and excited states of cyanines and yields the excitation energies of both types of extended π-conjugated systems with the accuracy matching high-level ab initio multireference calculations.

  2. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    1996-01-01

    We first report on our current progress in the area of explicit methods for tangent curve computation. The basic idea of this method is to decompose the domain into a collection of triangles (or tetrahedra) and assume linear variation of the vector field over each cell. With this assumption, the equations which define a tangent curve become a system of linear, constant-coefficient ODEs which can be solved explicitly. There are five different representations of the solution depending on the eigenvalues of the Jacobian. The analysis of these five cases is somewhat similar to the phase plane analysis often associated with critical point classification within the context of topological methods, but it is not exactly the same; there are some critical differences. Moving from one cell to the next as a tangent curve is tracked requires the computation of the exit point, which is an intersection of the solution of the constant-coefficient ODE and the edge of a triangle. There are two possible approaches to this root computation problem. We can express the tangent curve in parametric form and substitute into an implicit form for the edge, or we can express the edge in parametric form and substitute into an implicit form of the tangent curve. Normally the solution of a system of ODEs is given in parametric form, and so the first approach is the most accessible and straightforward. The second approach requires the 'implicitization' of these parametric curves. The implicitization of parametric curves can often be rather difficult, but in this case we have been successful and have been able to develop algorithms and subsequent computer programs for both approaches. We will give these details along with some comparisons in a forthcoming research paper on this topic.
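
    A minimal sketch of the per-cell closed-form solution assumed above: with a linear field v(x) = Jx + b inside a cell, the tangent curve through an entry point can be evaluated exactly with a matrix exponential; the Jacobian, offset, and entry point are illustrative, and the exit-point root finding against the cell edges is omitted.

      import numpy as np
      from scipy.linalg import expm

      # Inside one triangle the vector field is assumed linear, v(x) = J @ x + b.
      # The tangent curve through x0 solves x' = J x + b exactly; an augmented
      # matrix exponential handles the constant term (and singular J) cleanly.
      J = np.array([[0.0, -1.0],
                    [1.0, -0.2]])               # illustrative Jacobian of the cell
      b = np.array([0.5, 0.0])                  # illustrative constant part
      x0 = np.array([1.0, 0.0])                 # entry point into the cell

      def tangent_curve(t):
          """Exact solution of x' = J x + b, x(0) = x0, via an augmented exponential."""
          M = np.zeros((3, 3))
          M[:2, :2] = J
          M[:2, 2] = b
          phi = expm(M * t)
          return phi[:2, :2] @ x0 + phi[:2, 2]

      # Sample the curve; the full algorithm would instead solve for the
      # parameter t at which the curve crosses a triangle edge (the exit point).
      for t in np.linspace(0.0, 1.0, 5):
          print(t, tangent_curve(t))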

  3. Bayesian individualization via sampling-based methods.

    PubMed

    Wakefield, J

    1996-02-01

    We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis then a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
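
    A minimal sketch of the Monte Carlo decision step under a deliberately simplified one-compartment model: posterior samples of a single pharmacokinetic parameter are used to estimate the expected quadratic loss around a target steady-state concentration, which is then minimized over a grid of candidate doses; the model, prior, loss, and numbers are illustrative, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Assumed posterior samples of the patient's clearance (L/h), e.g. obtained
      # from sparse concentration measurements; here simply drawn from a log-normal.
      cl_samples = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=2000)

      target = 10.0        # desired average steady-state concentration (mg/L)
      tau = 12.0           # dosing interval (h)

      def expected_loss(dose):
          """Monte Carlo estimate of E[(Css - target)^2] for a candidate dose (mg)."""
          css = dose / (cl_samples * tau)       # average steady-state concentration
          return np.mean((css - target) ** 2)

      doses = np.linspace(100.0, 2000.0, 200)
      best = doses[np.argmin([expected_loss(d) for d in doses])]
      print(f"dose minimizing expected loss: {best:.0f} mg every {tau:.0f} h")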

  4. Melamine sensing based on evanescent field enhanced optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Luo, Ji; Yao, Jun; Wang, Wei-min; Zhuang, Xu-ye; Ma, Wen-ying; Lin, Qiao

    2013-08-01

    Melamine is a harmful chemical that has frequently been added to milk products illegally to make them appear more protein-rich. It can cause various diseases, such as kidney stones and bladder cancer. In this paper, a novel optical fiber sensor with high sensitivity based on absorption of the evanescent field is proposed and developed for melamine detection. Different concentrations of melamine, ranging from 0 to 10 mg/mL, have been detected using a micro/nano sensing fiber decorated with a silver nanoparticle cluster layer. As the concentration increases, the sensing fiber's output intensity gradually decreases and the absorption of the analyte becomes larger. A concentration change of 1 mg/mL causes an absorbance variation of 0.664, and the detection limit for melamine is 1 µg/mL. In addition, the coupling properties between silver nanoparticles have been analyzed by the FDTD method. Overall, this evanescent-field-enhanced optical fiber sensor has the potential to be used in oligo-analyte detection and will promote the development of biomolecular and chemical sensing applications.

  5. Partial homogeneity based high-resolution nuclear magnetic resonance spectra under inhomogeneous magnetic fields

    SciTech Connect

    Wei, Zhiliang; Lin, Liangjie; Lin, Yanqin E-mail: chenz@xmu.edu.cn; Chen, Zhong E-mail: chenz@xmu.edu.cn; Chen, Youhe

    2014-09-29

    In nuclear magnetic resonance (NMR), it is both necessary and important to obtain high-resolution spectra, especially under inhomogeneous magnetic fields. In this study, a method based on partial homogeneity is proposed for retrieving high-resolution one-dimensional NMR spectra under inhomogeneous fields. Signals from a series of small voxels, which provide high resolution because of their small size, are recorded simultaneously. Then, an inhomogeneity correction algorithm based on pattern recognition is developed to automatically correct the influence of field inhomogeneity, thus yielding high-resolution information. Experiments on chemical solutions and fish spawn were carried out to demonstrate the performance of the proposed method. The proposed method serves as a single-radiofrequency-pulse high-resolution NMR spectroscopy under inhomogeneous fields and may provide an alternative for obtaining high-resolution spectra of in vivo living systems or chemical-reaction systems, where the performance of conventional techniques is usually degraded by field inhomogeneity.

  6. A T-EOF Based Prediction Method.

    NASA Astrophysics Data System (ADS)

    Lee, Yung-An

    2002-01-01

    A new statistical time series prediction method based on temporal empirical orthogonal function (T-EOF) is introduced in this study. This method first applies singular spectrum analysis (SSA) to extract dominant T-EOFs from historical data. Then, the most recent data are projected onto an optimal subset of the T-EOFs to estimate the corresponding temporal principal components (T-PCs). Finally, a forecast is constructed from these T-EOFs and T-PCs. Results from forecast experiments on the El Niño sea surface temperature (SST) indices from 1993 to 2000 showed that this method consistently yielded better correlation skill than autoregressive models for a lead time longer than 6 months. Furthermore, the correlation skills of this method in predicting Niño-3 index remained above 0.5 for a lead time up to 36 months during this period. However, this method still encountered the `spring barrier' problem. Because the 1990s exhibited relatively weak spring barrier, these results indicate that the T-EOF based prediction method has certain extended forecasting capability in the period when the spring barrier is weak. They also suggest that the potential predictability of ENSO in a certain period may be longer than previously thought.
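
    A minimal sketch of T-EOF (SSA-based) recurrent forecasting: the series is embedded in a trajectory matrix, the leading temporal EOFs are extracted by SVD, and the forecast is propagated with the linear recurrence they imply; the window length, number of retained T-EOFs, and toy series are illustrative assumptions.

      import numpy as np

      def ssa_forecast(x, L=40, r=4, steps=12):
          """Sketch of SSA (T-EOF) recurrent forecasting.

          x: 1-D series, L: window length, r: number of leading T-EOFs,
          steps: number of values to forecast.  Parameters are illustrative.
          """
          x = np.asarray(x, dtype=float)
          N = x.size
          K = N - L + 1
          # Trajectory (Hankel) matrix whose columns are lagged windows of the series.
          X = np.column_stack([x[i:i + L] for i in range(K)])
          U, s, _ = np.linalg.svd(X, full_matrices=False)
          P = U[:, :r]                          # leading temporal EOFs (T-EOFs)
          pi = P[-1, :]                         # last components of each T-EOF
          nu2 = np.sum(pi ** 2)
          R = (P[:-1, :] @ pi) / (1.0 - nu2)    # linear recurrence coefficients
          series = list(x)
          for _ in range(steps):
              series.append(float(R @ np.array(series[-(L - 1):])))
          return np.array(series[N:])

      # Toy example: a noisy oscillation standing in for an SST index.
      t = np.arange(240)
      x = np.sin(2 * np.pi * t / 36) + 0.1 * np.random.default_rng(2).standard_normal(t.size)
      print(ssa_forecast(x))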

  7. A FIELD CANCELATION SIGNAL EXTRACTION METHOD FOR MAGNETIC PARTICLE IMAGING

    PubMed Central

    Mahlke, Max; Hubertus, Simon; Lammers, Twan; Kiessling, Fabian

    2014-01-01

    In present-day Magnetic Particle Imaging (MPI), signal detection and excitation happen at the same time. This concept, however, leads to strong coupling of the drive (excitation) field (DF) with the receive chain. As the induced DF signal is several orders of magnitude higher, special measures have to be taken to suppress this signal portion within the receive signal in order to keep the required dynamic range of the subsequent analog-to-digital conversion in a technically feasible range. For “frequency space MPI”, high-order band-stop filters have been used successfully to remove the DF signals, but unfortunately they also remove the fundamental harmonic components of the signal of the magnetic nanoparticles (MNP). According to the Langevin theory, the fundamental harmonic component has a large signal contribution and is important for direct reconstruction of the particle concentration. In order to separate the fundamental harmonic component of the MNP from the induced DF signal, different concepts have been proposed using signal cancelation based on additional DF signals, also in combination with additional filtering. In this paper, we propose a field-cancelation (FC) concept in which a receive coil (RC) consists of a series connection of a primary coil and an additional cancelation coil. The geometry of the primary coil was chosen to be sensitive to the MNP signal, while the cancelation coil was chosen to minimize the overall inductive coupling of the FC-RC with the DF. Sensitivity plots and mutual coupling coefficients were calculated using a thin-wire approximation. A prototype FC-RC was manufactured, and the effectiveness of the reduction of the mutual inductive coupling (d) was tested in an existing mouse MPI scanner. The difference between the simulations (d_s = 70 dB) and the measurements (d_ms = 55 dB) indicated the feasibility of the approach as well as the need for further investigation. PMID:25892745

  8. EXAMINATION OF AUTOMATIC DATA REDUCTION METHODS FOR PARTICLE FIELD HOLOGRAMS

    EPA Science Inventory

    Holographic recording techniques provide one of the most powerful particle field diagnostic tools in existence. A hologram can provide a frozen three-dimensional image of a particle field through which detailed microscopic examination of individual particles is possible. Frequent...

  9. Spatiotemporal multiplexing method for visual field of view extension in holographic displays with naked eye observation

    NASA Astrophysics Data System (ADS)

    Finke, G.; Kujawińska, M.; Kozacki, T.; Zaperty, W.

    2016-09-01

    In this paper we propose a method to overcome the basic functional problems of holographic displays with naked-eye observation, which deliver images that are too small and visible only within narrow viewing angles. The solution is based on combining the spatiotemporal multiplexing method with a 4f optical system. It makes it possible to increase the aperture of a holographic display and to extend the angular visual field of view. The applicability of the modified display is evidenced by a Wigner distribution analysis of holographic imaging with the spatiotemporal multiplexing method and by experiments performed on the display demonstrator.

  10. Geophysics-based method of locating a stationary earth object

    DOEpatents

    Daily, Michael R.; Rohde, Steven B.; Novak, James L.

    2008-05-20

    A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the gravity vector of the earth caused by the sun- and moon-orbits. Because the local gravity field is highly irregular over a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.

  11. Quantum memory for nonstationary light fields based on controlled reversible inhomogeneous broadening

    SciTech Connect

    Kraus, B.; Tittel, W.; Gisin, N.; Nilsson, M.; Kroell, S.; Cirac, J. I.

    2006-02-15

    We propose a method for efficient storage and recall of arbitrary nonstationary light fields, such as, for instance, single photon time-bin qubits or intense fields, in optically dense atomic ensembles. Our approach to quantum memory is based on controlled, reversible, inhomogeneous broadening and relies on a hidden time-reversal symmetry of the optical Bloch equations describing the propagation of the light field. We briefly discuss experimental realizations of our proposal.

  12. Teaching Geographic Field Methods to Cultural Resource Management Technicians

    ERIC Educational Resources Information Center

    Mires, Peter B.

    2004-01-01

    There are perhaps 10,000 technicians in the United States who work in the field known as cultural resource management (CRM). The typical field technician possesses a bachelor's degree in anthropology, geography, or a closely allied discipline. The author's experience has been that few CRM field technicians receive adequate undergraduate training…

  13. Graphene-based field effect transistors for radiation-induced field sensing

    NASA Astrophysics Data System (ADS)

    Di Gaspare, Alessandra; Valletta, Antonio; Fortunato, Guglielmo; Larciprete, Rosanna; Mariucci, Luigi; Notargiacomo, Andrea; Cimino, Roberto

    2016-07-01

    We propose the implementation of a graphene-based field effect transistor (FET) as a radiation sensor. In the proposed detector, graphene obtained via chemical vapor deposition is integrated into a Si-based field effect device as the gate readout electrode, able to sense any change in the field distribution induced by ionization in the underlying absorber, because of the strong variation in the graphene conductivity close to the charge neutrality point. Different 2-dimensional layered materials can be envisaged for this kind of device.

  14. Graphical Methods for Quantifying Macromolecules through Bright Field Imaging

    SciTech Connect

    Chang, Hang; DeFilippis, Rosa Anna; Tlsty, Thea D.; Parvin, Bahram

    2008-08-14

    Bright field imaging of biological samples stained with antibodies and/or special stains provides a rapid protocol for visualizing various macromolecules. However, this method of sample staining and imaging is rarely employed for direct quantitative analysis due to variations in sample fixation, ambiguities introduced by color composition, and the limited dynamic range of imaging instruments. We demonstrate that, through the decomposition of color signals, staining can be scored on a cell-by-cell basis. We have applied our method to fibroblasts grown from histologically normal breast tissue biopsies obtained from two distinct populations. Initially, nuclear regions are segmented through conversion of color images into gray scale and detection of dark elliptic features. Subsequently, the strength of staining is quantified by a color decomposition model that is optimized by a graph cut algorithm. In rare cases where the nuclear signal is significantly altered as a result of sample preparation, nuclear segmentation can be validated and corrected. Finally, segmented stained patterns are associated with each nuclear region following region-based tessellation. Compared to classical non-negative matrix factorization, the proposed method (i) improves color decomposition, (ii) has better noise immunity, (iii) is more invariant to initial conditions, and (iv) has superior computing performance.
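
    The following sketch shows the classical non-negative matrix factorization baseline mentioned above (not the paper's graph-cut color decomposition model) applied to optical densities of a bright field image; the synthetic image and the number of stains are illustrative.

      import numpy as np
      from sklearn.decomposition import NMF

      # Sketch: decompose a bright field RGB image into two stain concentration
      # maps by factorizing optical densities (Beer-Lambert) with non-negative
      # matrix factorization.  This is the NMF baseline, not the paper's
      # graph-cut model; the image here is synthetic.
      rng = np.random.default_rng(3)
      h, w = 64, 64
      rgb = rng.integers(30, 250, size=(h, w, 3)).astype(float)

      od = -np.log((rgb + 1.0) / 255.0)         # optical density per channel
      V = od.reshape(-1, 3)                     # pixels x channels, non-negative

      model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
      W = model.fit_transform(V)                # per-pixel stain concentrations
      H = model.components_                     # stain "color" vectors in OD space

      stain_maps = W.reshape(h, w, 2)           # one concentration map per stain
      print(stain_maps.shape, H)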

  15. BTTB-RRCG method for downward continuation of potential field data

    NASA Astrophysics Data System (ADS)

    Zhang, Yile; Wong, Yau Shu; Lin, Yuanfang

    2016-03-01

    This paper presents a conjugate gradient (CG) method for accurate and robust downward continuation of potential field data. Utilizing the Block-Toeplitz Toeplitz-Block (BTTB) structure, the storage requirement and the computational complexity can be significantly reduced. Unlike wavenumber domain regularization methods based on the fast Fourier transform, the BTTB-based conjugate gradient method introduces few artifacts near the boundary. The application of a re-weighted regularization in the space domain significantly improves the stability of the CG scheme for noisy data. Synthetic data with different levels of added noise and real field data are used to validate the effectiveness of the proposed scheme, and the computed results are compared with those based on recently proposed wavenumber domain methods and the Taylor series method. The simulation results verify that the proposed scheme is superior to the existing methods considered in this study in terms of accuracy and robustness. The proposed scheme is a powerful computational tool capable of handling large-scale data with modest computational cost.
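
    A one-dimensional sketch of the structural trick exploited above: a Toeplitz matrix-vector product can be embedded in a circulant one and applied with the FFT in O(n log n), which is what keeps the BTTB-based CG iterations cheap; the test matrix here is random and illustrative.

      import numpy as np
      from scipy.linalg import toeplitz

      def toeplitz_matvec(c, r, x):
          """Multiply a Toeplitz matrix (first column c, first row r) by x using FFT.

          The Toeplitz matrix is embedded in a circulant matrix of size 2n, whose
          action is a circular convolution and can be applied in O(n log n).
          """
          n = len(x)
          # First column of the circulant embedding of size 2n.
          col = np.concatenate([c, [0.0], r[:0:-1]])
          fx = np.fft.fft(np.concatenate([x, np.zeros(n)]))
          y = np.fft.ifft(np.fft.fft(col) * fx)
          return y[:n].real

      # Small check against a dense Toeplitz multiplication.
      rng = np.random.default_rng(4)
      c, r = rng.standard_normal(6), rng.standard_normal(6)
      r[0] = c[0]
      x = rng.standard_normal(6)
      print(np.allclose(toeplitz_matvec(c, r, x), toeplitz(c, r) @ x))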

  16. Estimating attenuation of ultraviolet radiation in streams: field and laboratory methods.

    PubMed

    Belmont, Patrick; Hargreaves, Bruce R; Morris, Donald P; Williamson, Craig E

    2007-01-01

    We adapted and tested a laboratory quantitative filter pad method and a field-based microcosm method for estimating diffuse attenuation coefficients (K_d) of ultraviolet radiation (UVR) for a wide range of stream optical environments (K_d(320) = 3-44 m^-1). Logistical difficulties of direct measurements of UVR attenuation have inhibited widespread monitoring of this important parameter in streams. Suspended sediment concentrations were manipulated in a microcosm, which was used to obtain direct measurements of diffuse attenuation. Dissolved and particulate absorption measurements of samples from the microcosm experiments were used to calibrate the laboratory method. Conditions sampled cover a range of suspended sediment (0-50 mg L^-1) and dissolved organic carbon concentrations (1-4 mg L^-1). We evaluated four models for precision and reproducibility in calculating particulate absorption and the optimal model was used in an empirical approach to estimate diffuse attenuation coefficients from total absorption coefficients. We field-tested the laboratory method by comparing laboratory-estimated and field-measured diffuse attenuation coefficients for seven sites on the main stem and 10 tributaries of the Lehigh River, eastern Pennsylvania, USA. The laboratory-based method described here affords widespread application, which will further our understanding of how stream optical environments vary spatially and temporally and consequently influence ecological processes in streams. PMID:18028207
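
    A minimal sketch of how a diffuse attenuation coefficient such as K_d(320) can be estimated from a depth profile of irradiance, assuming Beer-Lambert decay E(z) = E(0) exp(-K_d z) and fitting by linear regression of ln E on depth; the profile values are illustrative, not the paper's data.

      import numpy as np

      # Estimate the diffuse attenuation coefficient K_d (m^-1) from a depth
      # profile of downwelling UV irradiance, assuming exponential decay.
      # Depths and irradiances below are illustrative placeholders.
      depth = np.array([0.05, 0.10, 0.20, 0.30, 0.50])        # m
      irradiance = np.array([12.0, 8.1, 3.9, 1.8, 0.42])       # uW cm^-2 at 320 nm

      slope, intercept = np.polyfit(depth, np.log(irradiance), 1)
      kd = -slope
      print(f"K_d(320) ~ {kd:.1f} m^-1, surface irradiance ~ {np.exp(intercept):.1f}")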

  17. Integrated atom detector based on field ionization near carbon nanotubes

    SciTech Connect

    Gruener, B.; Jag, M.; Stibor, A.; Visanescu, G.; Haeffner, M.; Kern, D.; Guenther, A.; Fortagh, J.

    2009-12-15

    We demonstrate an atom detector based on field ionization and subsequent ion counting. We make use of field enhancement near the tips of carbon nanotubes to reach extreme electrostatic field values of up to 9×10^9 V/m, which ionize ground-state rubidium atoms. The detector is based on a carpet of multiwall carbon nanotubes grown on a substrate and used for field ionization, and a channel electron multiplier used for ion counting. We measure the field enhancement at the tips of carbon nanotubes by field emission of electrons. We demonstrate the operation of the field ionization detector by counting atoms from a thermal beam of a rubidium dispenser source. By measuring the ionization rate of rubidium as a function of the applied detector voltage we identify the field ionization distance, which is below a few tens of nanometers in front of the nanotube tips. We deduce from the experimental data that field ionization of rubidium near nanotube tips takes place on a time scale faster than 10^-10 s. This property is particularly interesting for the development of fast atom detectors suitable for measuring correlations in ultracold quantum gases. We also describe an application of the detector as a partial pressure gauge.

  18. Evaluation of different field methods for measuring soil water infiltration

    NASA Astrophysics Data System (ADS)

    Pla-Sentís, Ildefonso; Fonseca, Francisco

    2010-05-01

    Soil infiltrability, together with rainfall characteristics, is the most important hydrological parameter for the evaluation and diagnosis of the soil water balance and soil moisture regime. Those balances and regimes are the main regulating factors of the on-site water supply to plants and other soil organisms and of other important processes like runoff, surface and mass erosion, and drainage, which in turn affect sedimentation, flooding, soil and water pollution, and water supply for different purposes (population, agriculture, industries, hydroelectricity). Therefore the direct measurement of water infiltration rates, or its indirect deduction from other soil characteristics or properties, has become indispensable for the evaluation and modelling of the previously mentioned processes. Indirect deductions from other soil characteristics measured under laboratory conditions in the same soils, or in other soils, through the so-called "pedo-transfer" functions have proved to be of limited value in most cases. Direct "in situ" field evaluations are to be preferred in any case. In this contribution we present the results of past experience in the measurement of soil water infiltration rates in many different soils and land conditions, and their use for deducing soil water balances under variable climates. We also present and discuss recent results obtained by comparing different methods, using double- and single-ring infiltrometers, rainfall simulators, and disc permeameters of different sizes, in soils with very contrasting surface and profile characteristics and conditions, including stony soils and steeply sloping lands. It is concluded that no method is universally applicable to every soil and land condition, and that in many cases the results are significantly influenced by the way a particular method or instrument is used, and by the alterations in soil conditions caused by land management, but also due to the manipulation of the surface

  19. On the no-field method for void time determination in flow field-flow fractionation.

    PubMed

    Martin, Michel; Hoyos, Mauricio

    2011-07-01

    Elution time measurements of colloidal particles injected in a symmetrical flow field-flow fractionation (flow FFF) system when the inlet and outlet cross-flow connections are closed have been performed. This no-field method was proposed earlier for void time (and void volume) determination in flow FFF by Giddings et al. (1977). The elution times observed were much larger than expected on the basis of the channel geometrical volume and the flow rate. In order to explain these discrepancies, a flow model was developed that allows the carrier liquid to flow through the porous walls toward the reservoirs located behind the porous elements and along these reservoirs. The ratio between the observed and expected elution times is found to depend only on a parameter which is a function of the effective permeability and thickness of the porous elements and of the channel thickness and length. The permeabilities of the frits used in the system were measured. Their values lead to predicted elution times in reasonable agreement with the experimental ones, taking into account likely membrane protrusion inside the channel on system assembly. They support the basic feature of the flow model in the no-field case: the carrier liquid mostly bypasses the channel and flows along the system mainly in the reservoir. It flows through the porous walls toward the reservoirs near the channel inlet and again through the porous walls from the reservoirs to the channel near the channel outlet before exiting the system. In order to estimate the extent of this bypassing process, it is desirable that the hydrodynamic characteristics of the permeable elements (permeability and thickness) be provided by flow FFF manufacturers. The model applies to symmetrical as well as asymmetrical flow FFF systems. PMID:21256498

  20. Stochastic variational method as quantization scheme: Field quantization of the complex Klein-Gordon equation

    NASA Astrophysics Data System (ADS)

    Koide, T.; Kodama, T.

    2015-09-01

    The stochastic variational method (SVM) is the generalization of the variational approach to systems described by stochastic variables. In this paper, we investigate the applicability of SVM as an alternative field-quantization scheme, by considering the complex Klein-Gordon equation. There, the Euler-Lagrange equation for the stochastic field variables leads to the functional Schrödinger equation, which can be interpreted as the Euler (ideal fluid) equation in the functional space. The present formulation is a quantization scheme based on commutable variables, so that no ambiguity arises in association with the ordering of operators, e.g., in the definition of Noether charges.

  1. Bare PCB test method based on AI

    NASA Astrophysics Data System (ADS)

    Li, Aihua; Zhou, Huiyang; Wan, Nianhong; Qu, Liangsheng

    1995-08-01

    Conventional methods for developing test sets on current automated printed circuit board (PCB) test machines overlook information from CAD, historical test data, and expert knowledge. Thus, the generated test sets and the proposed test sequence may be sub-optimal and inefficient. This paper presents a weighted bare PCB test method based on the analysis and utilization of CAD information. AI techniques are applied to fault statistics and fault identification. The generation of test sets and the planning of the test procedure are also discussed. A faster and more efficient test system is achieved.

  2. The application of strain field intensity method in the steel bridge fatigue life evaluation

    NASA Astrophysics Data System (ADS)

    Zhao, Xuefeng; Wang, Yanhong; Cui, Yanjun; Cao, Kaisheng

    2012-04-01

    ASCE's survey shows that 80%-90% of bridge damage is associated with fatigue and fracture problems. With vehicle weights and traffic volumes constantly increasing, fatigue of welded steel bridges has become more and more serious in recent years. A large number of studies show that the parts of a steel bridge most prone to fatigue damage are the welded joints. It is therefore important to find a more precise method to assess the fatigue life of steel bridges. Three kinds of fatigue analysis method are commonly used in engineering practice: the nominal stress method, the local stress-strain method and the field intensity method. The first two are frequently used for fatigue life assessment of steel bridges, whereas the field intensity method is used less often, although it is widely applied in fatigue life assessment in the aerospace and mechanical fields. The nominal stress method and the local stress-strain method have been widely applied in engineering, but since they do not consider stress gradient and multiaxial stress effects, the accuracy and stability of their calculations are relatively poor, and it is difficult for them to fully explain the fatigue damage mechanism. Therefore, the strain field intensity method was used to evaluate the fatigue life of a steel bridge. Fatigue life was studied based on the strain field intensity method, and the fatigue life of an I-section plate girder was analyzed. Elastoplastic finite element analysis in ANSYS was used to determine the critical part of the structure and to obtain the stress-strain history of the critical point. At the same time, sub-structure technology was introduced in order to refine the element mesh. Finally, the K. N. Smith damage equation was applied to calculate the fatigue life at the critical point. In order to better simulate actual welding defects, small holes were introduced in the welded parts at different locations and orientations, and the same load was applied to calculate the fatigue life in each case. Comparing the results showed that the welding
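
    A minimal sketch of the Smith (SWT) damage equation step mentioned above, solving sigma_max * eps_a = (sf'^2/E)(2N)^(2b) + sf' ef' (2N)^(b+c) for the fatigue life N at the critical point; the material constants and the stress/strain amplitudes are illustrative assumptions, not values from the paper.

      import numpy as np
      from scipy.optimize import brentq

      # Smith-Watson-Topper (SWT) damage equation, solved for fatigue life N.
      # Material constants below are generic steel-like placeholders.
      E, sf, b, ef, c = 206e3, 952.0, -0.089, 0.26, -0.445   # MPa, strain-life constants

      def swt_rhs(n):
          return sf ** 2 / E * (2 * n) ** (2 * b) + sf * ef * (2 * n) ** (b + c)

      sigma_max, eps_a = 400.0, 2.0e-3          # max stress (MPa), strain amplitude at hot spot
      life = brentq(lambda n: swt_rhs(n) - sigma_max * eps_a, 1e1, 1e9)
      print(f"estimated fatigue life: {life:.3g} cycles")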

  3. Design method for the flow field and drag of bodies of revolution in incompressible flow

    SciTech Connect

    Wolfe, W.P.; Oberkampf, W.L.

    1982-01-01

    A design method has been developed for determining the flow field, pressure distribution, boundary layer separation point, and drag of bodies of revolution at zero angle of attack in incompressible flow. The approach taken is the classical coupling of potential-flow and boundary-layer solutions to obtain the flow field about the body. The potential solution is obtained by modeling the body with an axial distribution of source/sink elements whose strengths vary linearly along their length. The laminar and turbulent boundary layer solutions are obtained from conventional solutions of the momentum integral equation. An approximate method is used to estimate the boundary layer transition point on the body. An empirical base pressure correlation is used to determine the base drag. Body surface pressure distributions and drag predictions are compared with experimental measurements.
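
    A simplified sketch of the potential-flow building block described above, using point sources on the body axis (rather than the paper's linearly varying line elements): source strengths are chosen so that the flow is tangent to the body surface at control points, after which the surface pressure coefficient follows from Bernoulli; the body shape, discretization, and freestream are illustrative.

      import numpy as np

      U = 1.0                                    # freestream speed
      x = np.linspace(0.05, 0.95, 25)            # control-point axial stations
      r = 0.3 * np.sqrt(x * (1.0 - x))           # illustrative body radius r(x)
      drdx = np.gradient(r, x)                   # surface slope dr/dx
      xs = np.linspace(0.1, 0.9, 25)             # axial source locations

      def induced(xp, rp, xsrc):
          """Axial and radial velocity at (xp, rp) per unit point-source strength."""
          d3 = ((xp - xsrc) ** 2 + rp ** 2) ** 1.5
          return (xp - xsrc) / (4 * np.pi * d3), rp / (4 * np.pi * d3)

      # Flow-tangency condition  (U + sum q u_x)(-dr/dx) + sum q u_r = 0.
      A = np.zeros((x.size, xs.size))
      for j, xsrc in enumerate(xs):
          ux, ur = induced(x, r, xsrc)
          A[:, j] = ur - drdx * ux
      q, *_ = np.linalg.lstsq(A, U * drdx, rcond=None)

      # Surface speed and pressure coefficient from the solved source strengths.
      Vx = U + sum(q[j] * induced(x, r, xs[j])[0] for j in range(xs.size))
      Vr = sum(q[j] * induced(x, r, xs[j])[1] for j in range(xs.size))
      cp = 1.0 - (Vx ** 2 + Vr ** 2) / U ** 2
      print(np.round(cp, 3))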

  4. Using Educational Data Mining Methods to Assess Field-Dependent and Field-Independent Learners' Complex Problem Solving

    ERIC Educational Resources Information Center

    Angeli, Charoula; Valanides, Nicos

    2013-01-01

    The present study investigated the problem-solving performance of 101 university students and their interactions with a computer modeling tool in order to solve a complex problem. Based on their performance on the hidden figures test, students were assigned to three groups of field-dependent (FD), field-mixed (FM), and field-independent (FI)…

  5. New Method For Static and Temporal Gravity Field Recovery Using Grace

    NASA Astrophysics Data System (ADS)

    Han, S.-C.; Jekeli, C.; Shum, C. K.

    Gravity-field-dedicated satellite missions like CHAMP, GRACE, and GOCE are expected to map the Earth's global gravity field with unprecedented accuracy and resolution. New models of the Earth's static and time-variable gravity field will be available every month as one of the science products from GRACE. Here we present an alternative method [Jekeli, 1999] to estimate the gravity field efficiently using the in situ satellite-to-satellite observations at satellite altitude. Considering the energy relation between the kinetic energy of the satellite and the gravitational potential, the disturbing potential observations can be computed from the specific force observations and the state vector in the inertial frame, using the high-low GPS-LEO GPS tracking data, the low-low satellite-to-satellite GRACE measurement, and data from 3-axis accelerometers. The disturbing potential observation is the sum of a linear combination of other potentials due to tides, atmosphere, other modeled signals (e.g., N-body) and signals (hydrological and oceanic mass variations). The advantage of the method is its potential ability to efficiently replace corrections (e.g., atmosphere and tides) from different models. The inverse solution method is based on conjugate gradients [Han et al., 2001] and has been demonstrated to be able to efficiently recover gravity field solutions up to degree and order 120. An appropriate pre-conditioner, like the block-diagonal part of the full normal matrix, is used to accelerate the convergence rate. The method is applicable to CHAMP and GOCE. The CHAMP RSO orbit products and STAR accelerometer data are used to compute the in situ potentials and the corresponding gravity field is recovered. The synthetic potential difference observations are computed with the expected error of GRACE range-rate measurements and the monthly gravity field is recovered in the presence of systematic errors such as atmosphere and tides.
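
    A minimal sketch of the preconditioned conjugate gradient inversion step: a diagonal (Jacobi) preconditioner stands in for the block-diagonal preconditioner mentioned above, and a random symmetric positive-definite matrix stands in for the GRACE normal matrix; everything here is an illustrative placeholder.

      import numpy as np
      from scipy.sparse.linalg import cg, LinearOperator

      # Preconditioned conjugate gradients on a stand-in normal-equation system
      # N x = b.  A Jacobi (diagonal) preconditioner replaces the block-diagonal
      # one mentioned above; N is random SPD, not a real GRACE normal matrix.
      rng = np.random.default_rng(5)
      n = 500
      A = rng.standard_normal((n, n))
      N = A.T @ A + n * np.eye(n)               # SPD stand-in for the normal matrix
      b = rng.standard_normal(n)

      d = np.diag(N)
      M = LinearOperator((n, n), matvec=lambda v: v / d)   # Jacobi preconditioner
      x, info = cg(N, b, M=M, atol=1e-10)
      print(info, np.linalg.norm(N @ x - b))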

  6. A probability density function method for acoustic field uncertainty analysis

    NASA Astrophysics Data System (ADS)

    James, Kevin R.; Dowling, David R.

    2005-11-01

    Acoustic field predictions, whether analytical or computational, rely on knowledge of the environmental, boundary, and initial conditions. When knowledge of these conditions is uncertain, acoustic field predictions will also be uncertain, even if the techniques for field prediction are perfect. Quantifying acoustic field uncertainty is important for applications that require accurate field amplitude and phase predictions, like matched-field techniques for sonar, nondestructive evaluation, bio-medical ultrasound, and atmospheric remote sensing. Drawing on prior turbulence research, this paper describes how an evolution equation for the probability density function (PDF) of the predicted acoustic field can be derived and used to quantify predicted-acoustic-field uncertainties arising from uncertain environmental, boundary, or initial conditions. Example calculations are presented in one and two spatial dimensions for the one-point PDF for the real and imaginary parts of a harmonic field, and show that predicted field uncertainty increases with increasing range and frequency. In particular, at 500 Hz in an ideal 100 m deep underwater sound channel with a 1 m root-mean-square depth uncertainty, the PDF results presented here indicate that at a range of 5 km, all phases and a 10 dB range of amplitudes will have non-negligible probability. Evolution equations for the two-point PDF are also derived.

  7. An image mosaic method based on corner

    NASA Astrophysics Data System (ADS)

    Jiang, Zetao; Nie, Heting

    2015-08-01

    In view of the shortcomings of traditional image mosaicking, this paper describes a new image mosaic algorithm based on Harris corners. Firstly, the Harris operator, combined with a constructed low-pass smoothing filter based on spline functions and a circular window search, is applied to detect image corners, which gives better localisation performance and effectively avoids corner clustering. Secondly, correlation-based feature registration is used to find registration pairs, and false registrations are removed using random sample consensus (RANSAC). Finally, a weighted trigonometric interpolation function is used for image fusion. Experiments show that this method can effectively remove splicing ghosting and improve the accuracy of image mosaicking.
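
    A sketch of a standard corner-based mosaicking pipeline of the kind described above: Harris-style corner detection, descriptor matching (ORB descriptors are used here in place of the paper's correlation-based registration), RANSAC removal of false matches, and a homography warp; the file names and parameters are placeholders.

      import cv2
      import numpy as np

      img1 = cv2.imread("left.jpg")              # placeholder input images
      img2 = cv2.imread("right.jpg")
      g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
      g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

      def harris_keypoints(gray, n=500):
          """Detect Harris-style corners and wrap them as OpenCV keypoints."""
          pts = cv2.goodFeaturesToTrack(gray, n, 0.01, 10, useHarrisDetector=True)
          return [cv2.KeyPoint(float(x), float(y), 7.0) for [[x, y]] in pts]

      orb = cv2.ORB_create()
      k1, d1 = orb.compute(g1, harris_keypoints(g1))
      k2, d2 = orb.compute(g2, harris_keypoints(g2))

      matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
      src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
      dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)

      # RANSAC removes false matches before the homography is estimated.
      H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

      # Warp the second image into the first image's frame and paste the first on top.
      h, w = img1.shape[:2]
      mosaic = cv2.warpPerspective(img2, H, (w * 2, h))
      mosaic[:h, :w] = img1
      cv2.imwrite("mosaic.jpg", mosaic)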

  8. Ecologically-Based Invasive Plant Management Field School Workbook 2009

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A curriculum developed for a field-based course of study for ecologically-based invasive plant management. This curriculum is presented in a modular format with specific exercises to emphasize the important aspects to applying this decision tool to land management....

  9. Tail Lobe Revisited: Magnetic Field Modeling Based on Plasma Data

    NASA Technical Reports Server (NTRS)

    Karlsson, S. B. P.; Tsyganenko, N. A.

    1999-01-01

    Plasma data from the ISEE-1 and -2 spacecraft during 1977-1980 have been used to determine the distribution of data points in the magnetotail in the range of distances -20 < X_GSM < -15, i.e., which of the records were located in the current sheet, in the tail lobe, in the magnetosheath, and in the boundary layers, respectively. The ISEE-1 and -2 magnetic field data for the records in the tail lobe were then used to model the dependence of the tail lobe magnetic field on the solar wind dynamic pressure, on the Interplanetary Magnetic Field (IMF), and on the Dst index. The tail lobe magnetic field was assumed to depend on the square root of the dynamic pressure, based on the balance between the total magnetic pressure in the tail lobes and the dynamic pressure of the solar wind. The IMF-dependent terms, added to the pressure term, were sought in many different forms, while the Dst dependence of the tail lobe magnetic field was assumed to be linear. The field shows a strong dependence on the square root of the dynamic pressure, and the different IMF-dependent terms all constitute a significant contribution to the total field. However, the dependence on the Dst index turned out to be very weak at those down-tail distances. The results of this study are intended to be used for parameterizing future versions of the data-based models of the global magnetospheric magnetic field.

  10. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.

  11. A multicore based parallel image registration method.

    PubMed

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm was shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search, which is often computationally expensive. We introduce a non-regular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
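
    A minimal sketch of the partitioning idea: landmarks are grouped with K-means into as many clusters as there are cores, and each cluster's correspondence search runs in a separate worker process; the landmark data and the per-cluster matching function are illustrative placeholders rather than the paper's registration code.

      import numpy as np
      from sklearn.cluster import KMeans
      from multiprocessing import Pool

      def match_cluster(landmarks):
          # Placeholder for the (expensive) correspondence search within one cluster.
          return [(i, np.linalg.norm(p)) for i, p in enumerate(landmarks)]

      if __name__ == "__main__":
          rng = np.random.default_rng(6)
          landmarks = rng.uniform(0, 512, size=(1000, 2))   # landmark coordinates
          n_cores = 4

          # One cluster per core, so the workload is balanced spatially.
          labels = KMeans(n_clusters=n_cores, n_init=10, random_state=0).fit_predict(landmarks)
          clusters = [landmarks[labels == c] for c in range(n_cores)]

          with Pool(n_cores) as pool:
              results = pool.map(match_cluster, clusters)
          print([len(r) for r in results])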

  12. Developing Preservice Teachers' Self-Efficacy through Field-Based Science Teaching Practice with Elementary Students

    ERIC Educational Resources Information Center

    Flores, Ingrid M.

    2015-01-01

    Thirty preservice teachers enrolled in a field-based science methods course were placed at a public elementary school for coursework and for teaching practice with elementary students. Candidates focused on building conceptual understanding of science content and pedagogical methods through innovative curriculum development and other course…

  13. Coal Field Fire Fighting - Practiced methods, strategies and tactics

    NASA Astrophysics Data System (ADS)

    Wündrich, T.; Korten, A. A.; Barth, U. H.

    2009-04-01

    achieved. For effective and efficient fire fighting, optimal tactics are required; they can be divided into four fundamental tactics to control fire hazards: - Defense (digging away the coal so that it cannot begin to burn, or forming a barrier so that the fire cannot reach the unburnt coal), - Rescue the coal (mining coal from a seam that is not burning), - Attack (active and direct cooling of a burning seam), - Retreat (monitoring only, until self-extinction of a burning seam). The last is used when a fire exceeds the organizational and/or technical scope of a mission. In other words, "to control a coal fire" does not automatically and in all situations mean "to extinguish a coal fire". Best-practice tactics, or a combination of them, can be selected for the control of a particular coal fire. For the extinguishing work, different extinguishing agents are available. They can be applied with different application techniques and at widely varying operating expense. One application method may be the drilling of boreholes from the surface, or covering the surface with low-permeability soils. The most commonly used extinguishing agents for coal field fires are: water (with or without additives), slurry, foaming mud/slurry, inert gases, dry chemicals and materials, and cryogenic agents. Because of its tremendous dimensions and its complexity, the worldwide challenge of coal fires is absolutely unique - it can only be met with functional application methods, best-fitting strategies and tactics, organisation and research, as well as the dedication of the involved fire fighters, who work under extreme individual risk on the burning coal fields.

  14. Field-Based Teacher Education in Literacy: Preparing Teachers in Real Classroom Contexts

    ERIC Educational Resources Information Center

    DeGraff, Tricia L.; Schmidt, Cynthia M.; Waddell, Jennifer H.

    2015-01-01

    For the past two decades, scholars have advocated for reforms in teacher education that emphasize relevant connections between theory and practice in university coursework and focus on clinical experiences. This paper is based on our experiences in designing and implementing an integrated literacy methods course in a field-based teacher education…

  15. Matrix-based image reconstruction methods for tomography

    SciTech Connect

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use the system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
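
    A minimal sketch of a Maximum-Likelihood (MLEM) reconstruction using an explicit system matrix, of the kind referred to above: the multiplicative update needs only matrix-vector products and no matrix inversion; the system matrix and measured counts are random placeholders.

      import numpy as np

      rng = np.random.default_rng(7)
      n_bins, n_pixels = 200, 100
      A = rng.uniform(0.0, 1.0, size=(n_bins, n_pixels))    # system (projection) matrix
      x_true = rng.uniform(0.5, 2.0, size=n_pixels)
      y = rng.poisson(A @ x_true)                           # measured counts

      x = np.ones(n_pixels)                                 # non-negative starting image
      sens = A.sum(axis=0)                                  # sensitivity image, A^T 1
      for _ in range(50):
          proj = A @ x                                      # forward projection
          x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens # multiplicative MLEM update

      print(np.corrcoef(x, x_true)[0, 1])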

  16. FIELD COMPARISON OF PORTABLE GAS CHROMATOGRAPHS WITH METHOD TO-14

    EPA Science Inventory

    A field-deployable prototype fast gas chromatograph (FGC) and two commercially available portable gas chromatographs (PGC) were evaluated by measuring organic vapors in ambient air at a field monitoring site in metropolitan San Juan, Puerto Rico. The data were compared with simult...

  17. Using Problem Fields as a Method of Change.

    ERIC Educational Resources Information Center

    Pehkonen, Erkki

    1992-01-01

    Discusses the rationale and use of problem fields which are sets of related and/or connected open-ended problem-solving tasks within mathematics instruction. Polygons with matchsticks and the number triangle are two examples of problem fields presented along with variations in conditions that promote other matchstick puzzles. (11 references) (JJK)

  18. Mobility Measurement Based on Visualized Electric Field Migration in Organic Field-Effect Transistors

    NASA Astrophysics Data System (ADS)

    Manaka, Takaaki; Liu, Fei; Weis, Martin; Iwamoto, Mitsumasa

    2009-06-01

    Based on the transient electric field migration directly probed by the time-resolved microscopic optical second-harmonic generation (TRM-SHG) technique, we developed a “dynamic” approach to evaluate the carrier mobility in organic field-effect transistor (OFET). The accuracy of this potential- and current-independent approach was well confirmed by computational simulations and a comparison with the conventional mobility measured by OFET transfer characteristics.

  19. Magnetic Field Measurements Based on Terfenol Coated Photonic Crystal Fibers

    PubMed Central

    Quintero, Sully M. M.; Martelli, Cicero; Braga, Arthur M. B.; Valente, Luiz C. G.; Kato, Carla C.

    2011-01-01

    A magnetic field sensor based on the integration of a high birefringence photonic crystal fiber and a composite material made of Terfenol particles and an epoxy resin is proposed. An in-fiber modal interferometer is assembled by evenly exciting both eigenmodes of the HiBi fiber. Changes in the cavity length as well as the effective refractive index are induced by exposing the sensor head to magnetic fields. The magnetic field sensor has a sensitivity of 0.006 (nm/mT) over a range from 0 to 300 mT with a resolution of about ±1 mT. A fiber Bragg grating magnetic field sensor is also fabricated and employed to characterize the response of the Terfenol composite to the magnetic field. PMID:22247655

  20. Fast and stable explicit operator splitting methods for phase-field models

    NASA Astrophysics Data System (ADS)

    Cheng, Yuanzhen; Kurganov, Alexander; Qu, Zhuolin; Tang, Tao

    2015-12-01

    Numerical simulations of phase-field models require long time computations and therefore it is necessary to develop efficient and highly accurate numerical methods. In this paper, we propose fast and stable explicit operator splitting methods for both one- and two-dimensional nonlinear diffusion equations for thin film epitaxy with slope selection and the Cahn-Hilliard equation. The equations are split into nonlinear and linear parts. The nonlinear part is solved using a method of lines together with an efficient large stability domain explicit ODE solver. The linear part is solved by a pseudo-spectral method, which is based on the exact solution and thus has no stability restriction on the time-step size. We demonstrate the performance of the proposed methods on a number of one- and two-dimensional numerical examples, where different stages of coarsening such as the initial preparation, alternating rapid structural transition and slow motion can be clearly observed.
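
    As a rough illustration of the splitting idea (not the authors' large-stability-domain ODE solver), the sketch below applies Lie splitting to a toy 1D Cahn-Hilliard equation: the nonlinear substep is advanced with explicit Euler, and the linear substep is solved exactly in Fourier space. Grid size, time step, and the interface parameter eps are arbitrary illustrative choices.

        import numpy as np

        # Toy 1D Cahn-Hilliard equation u_t = (u^3)_xx - u_xx - eps^2*u_xxxx, split into
        # a nonlinear part (u^3)_xx (explicit Euler) and a linear part solved exactly in
        # Fourier space; periodic boundary conditions, illustrative parameters only.
        N, L, eps, dt = 128, 2 * np.pi, 0.1, 5e-5
        k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)            # wavenumbers
        u = 0.05 * np.random.default_rng(0).standard_normal(N)

        def nonlinear_step(u, dt):
            # explicit Euler for u_t = (u^3)_xx, second derivative taken spectrally
            return u + dt * np.fft.ifft(-(k ** 2) * np.fft.fft(u ** 3)).real

        def linear_step(u, dt):
            # exact pseudo-spectral solve of u_t = -u_xx - eps^2*u_xxxx
            return np.fft.ifft(np.exp((k ** 2 - eps ** 2 * k ** 4) * dt) * np.fft.fft(u)).real

        for _ in range(6000):                                  # integrate to t = 0.3
            u = linear_step(nonlinear_step(u, dt), dt)
        print(float(u.min()), float(u.max()))                  # domains approach u = -1 and u = +1

    The linear substep carries no stability restriction because it is solved exactly in Fourier space; only the (mild) explicit step for the nonlinear part limits the time step, which is the point of the splitting.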

  1. Chapter 11. Community analysis-based methods

    SciTech Connect

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  2. Field-emission-induced electromigration method for the integration of single-electron transistors

    NASA Astrophysics Data System (ADS)

    Ueno, Shunsuke; Tomoda, Yusuke; Kume, Watari; Hanada, Michinobu; Takiya, Kazutoshi; Shirakashi, Jun-ichi

    2012-01-01

    We report a simple and easy method for the integration of planar-type single-electron transistors (SETs). This method is based on electromigration induced by a field emission current, the so-called “activation”. The integration of two SETs was achieved by applying the activation to series-connected initial nanogaps. In both simultaneously activated devices, the current-voltage (ID-VD) curves displayed Coulomb blockade properties, and the Coulomb blockade voltage was clearly modulated by the gate voltage at 16 K. Moreover, the charging energy of both SETs was well controlled by the preset current used in the activation.

  3. A gearbox fault diagnosis scheme based on near-field acoustic holography and spatial distribution features of sound field

    NASA Astrophysics Data System (ADS)

    Lu, Wenbo; Jiang, Weikang; Yuan, Guoqing; Yan, Li

    2013-05-01

    Vibration signal analysis is the main technique in machine condition monitoring or fault diagnosis, whereas in some cases vibration-based diagnosis is limited because it requires contact measurement. Acoustic-based diagnosis (ABD) with non-contact measurement has received little attention, although the sound field may contain abundant information related to the fault pattern. A new scheme of ABD for gearboxes based on near-field acoustic holography (NAH) and spatial distribution features of the sound field is presented in this paper. It focuses on applying distribution information of the sound field to gearbox fault diagnosis. A two-stage industrial helical gearbox is experimentally studied in a semi-anechoic chamber and a lab workshop, respectively. Firstly, multi-class faults (mild pitting, moderate pitting, severe pitting and tooth breakage) are simulated. Secondly, sound fields and corresponding acoustic images in different gearbox running conditions are obtained by fast Fourier transform (FFT) based NAH. Thirdly, by introducing texture analysis to fault diagnosis, spatial distribution features are extracted from the acoustic images to capture fault patterns underlying the sound field. Finally, the features are fed into a multi-class support vector machine for fault pattern identification. The feasibility and effectiveness of the proposed scheme are demonstrated by the good experimental results and by comparison with a traditional ABD method. Even with strong noise interference, spatial distribution features of the sound field can reliably reveal the fault patterns of the gearbox, and thus satisfactory accuracy can be obtained. The combination of histogram features and gray level gradient co-occurrence matrix features is suggested for good diagnosis accuracy and low time cost.
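
    The paper's descriptors (histogram and gray level gradient co-occurrence matrix features) differ from the standard gray-level co-occurrence matrix used below, so treat this only as a sketch of the general pipeline - texture features extracted from acoustic images and fed to a multi-class SVM - with random placeholder images and labels:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.svm import SVC

        def texture_features(img):
            """Gray-level co-occurrence features from an 8-bit acoustic image."""
            glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            props = ["contrast", "homogeneity", "energy", "correlation"]
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

        # placeholder data: 40 acoustic "images" (uint8) with fault labels 0..3
        rng = np.random.default_rng(0)
        images = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)
        labels = rng.integers(0, 4, size=40)

        X = np.array([texture_features(im) for im in images])
        clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, labels)
        print(clf.predict(X[:5]))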

  4. Detection of Inorganic Arsenic in Rice Using a Field Test Kit: A Screening Method.

    PubMed

    Bralatei, Edi; Lacan, Severine; Krupp, Eva M; Feldmann, Jörg

    2015-11-17

    Rice is a staple food eaten by more than 50% of the world's population and is a daily dietary constituent in most South East Asian countries, where 70% of rice exports come from and where there is a high level of arsenic contamination in the groundwater used for irrigation. Research shows that rice can take up and store inorganic arsenic during cultivation, and rice is considered to be one of the major routes of exposure to inorganic arsenic, a class I carcinogen for humans. Here, we report the use of a screening method based on the Gutzeit methodology to detect inorganic arsenic (iAs) in rice within 1 h. After optimization, 30 rice commodities from the United Kingdom market were tested with the field method and were compared to the reference method (high-performance liquid chromatography-inductively coupled plasma-mass spectrometry, HPLC-ICP-MS). In all but three rice samples, the iAs content could be determined. The results show no bias for iAs using the field method. The quantification limit is about 50 μg kg(-1) and the reproducibility of ±12% is good for a field method; only a few false positives and negatives (<10%) were recorded at the 2015 European Commission (EC) guideline for baby rice of 100 μg kg(-1), and none were recorded at the maximum level suggested by the World Health Organization (WHO) and implemented by the EC for polished and white rice of 200 μg kg(-1). The method is reliable, fast, and inexpensive; hence, it is suggested to be used as a screening method in the field for preselection of rice that violates legislative guidelines. PMID:26506262

  5. Calorimetric method of ac loss measurement in a rotating magnetic field.

    PubMed

    Ghoshal, P K; Coombs, T A; Campbell, A M

    2010-07-01

    A method is described for calorimetric ac-loss measurements of high-T(c) superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on the ac loss measurement setup and its calibration in a rotating magnetic field. The experimental setup demonstrates loss measurement using a temperature-rise method under the influence of a rotating magnetic field: the slight temperature increase of the sample in an ac field is used as a measure of the losses. The aim is to simulate the loss in rotating machines using HTS. This is a unique technique to measure total ac loss in HTS at power frequencies. The sample is mounted onto a cold finger extended from a liquid nitrogen heat exchanger (HEX). The thermal insulation between the HEX and the sample is provided by a sample holder of low thermal conductivity and low eddy-current heating, housed in a vacuum vessel. A temperature sensor and a noninductive heater have been incorporated in the sample holder, allowing a rapid sample change. The main part of the data obtained in the calorimetric measurement is used for calibration. The focus is on the accuracy and calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under the influence of a continuously moving field, as experienced in any rotating machine. PMID:20687748

  6. Calorimetric method of ac loss measurement in a rotating magnetic field

    NASA Astrophysics Data System (ADS)

    Ghoshal, P. K.; Coombs, T. A.; Campbell, A. M.

    2010-07-01

    A method is described for calorimetric ac-loss measurements of high-Tc superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on the ac loss measurement setup and its calibration in a rotating magnetic field. The experimental setup demonstrates loss measurement using a temperature-rise method under the influence of a rotating magnetic field: the slight temperature increase of the sample in an ac field is used as a measure of the losses. The aim is to simulate the loss in rotating machines using HTS. This is a unique technique to measure total ac loss in HTS at power frequencies. The sample is mounted onto a cold finger extended from a liquid nitrogen heat exchanger (HEX). The thermal insulation between the HEX and the sample is provided by a sample holder of low thermal conductivity and low eddy-current heating, housed in a vacuum vessel. A temperature sensor and a noninductive heater have been incorporated in the sample holder, allowing a rapid sample change. The main part of the data obtained in the calorimetric measurement is used for calibration. The focus is on the accuracy and calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under the influence of a continuously moving field, as experienced in any rotating machine.

  7. Calorimetric method of ac loss measurement in a rotating magnetic field

    SciTech Connect

    Ghoshal, P. K.; Coombs, T. A.; Campbell, A. M.

    2010-07-15

    A method is described for calorimetric ac-loss measurements of high-T{sub c} superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on the ac loss measurement setup and its calibration in a rotating magnetic field. The experimental setup demonstrates loss measurement using a temperature-rise method under the influence of a rotating magnetic field: the slight temperature increase of the sample in an ac field is used as a measure of the losses. The aim is to simulate the loss in rotating machines using HTS. This is a unique technique to measure total ac loss in HTS at power frequencies. The sample is mounted onto a cold finger extended from a liquid nitrogen heat exchanger (HEX). The thermal insulation between the HEX and the sample is provided by a sample holder of low thermal conductivity and low eddy-current heating, housed in a vacuum vessel. A temperature sensor and a noninductive heater have been incorporated in the sample holder, allowing a rapid sample change. The main part of the data obtained in the calorimetric measurement is used for calibration. The focus is on the accuracy and calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under the influence of a continuously moving field, as experienced in any rotating machine.

  8. Optimal grid-based methods for thin film micromagnetics simulations

    NASA Astrophysics Data System (ADS)

    Muratov, C. B.; Osipov, V. V.

    2006-08-01

    Thin film micromagnetics are a broad class of materials with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to the inhomogeneities in the magnetization which presents the chief bottleneck for the simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps and has an advantage of being able to use non-uniform discretization in the film plane, as well as an efficient way of dealing with the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance for studying domain wall structures compared to the conventional FFT-based methods.

  9. UAV path planning using artificial potential field method updated by optimal control theory

    NASA Astrophysics Data System (ADS)

    Chen, Yong-bo; Luo, Guan-chen; Mei, Yue-song; Yu, Jian-qiao; Su, Xiao-long

    2016-04-01

    The unmanned aerial vehicle (UAV) path planning problem is an important assignment in UAV mission planning. Starting from the artificial potential field (APF) UAV path planning method, the problem is reconstructed as a constrained optimisation problem by introducing an additional control force. The constrained optimisation problem is then translated into an unconstrained optimisation problem with the help of slack variables. The functional optimisation method is applied to reformulate this problem as an optimal control problem. The whole transformation process is deduced in detail, based on a discrete UAV dynamic model. Then, the path planning problem is solved with the help of the optimal control method. A path-following process based on a six-degrees-of-freedom simulation model of a quadrotor helicopter is introduced to verify the practicability of this method. Finally, the simulation results show that the improved method is more effective in path planning. In the planning space, the calculated path is shorter and smoother than that obtained with the traditional APF method. In addition, the improved method can solve the dead point problem effectively.
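
    For orientation, a bare-bones 2D version of the baseline APF planner that the paper improves on (attractive pull toward the goal, repulsive push from obstacles, gradient descent on the resulting field) can be sketched as follows; all positions, gains, and radii are arbitrary. A configuration in which the attractive and repulsive forces cancel is exactly the dead point problem the improved method addresses.

        import numpy as np

        def apf_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0, step=0.05):
            """One gradient-descent step on a classic artificial potential field."""
            force = k_att * (goal - pos)                       # attractive pull toward the goal
            for obs in obstacles:
                diff = pos - obs
                d = np.linalg.norm(diff)
                if d < d0:                                     # repulsion only within range d0
                    force += k_rep * (1.0 / d - 1.0 / d0) / d ** 3 * diff
            return pos + step * force / (np.linalg.norm(force) + 1e-9)

        pos, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
        obstacles = [np.array([4.8, 5.3]), np.array([7.5, 8.0])]
        path = [pos]
        for _ in range(500):
            pos = apf_step(pos, goal, obstacles)
            path.append(pos)
            if np.linalg.norm(goal - pos) < 0.1:               # close enough to the goal
                break
        print(len(path), pos)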

  10. Method for extruding pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A method and apparatus for extruding pitch based foam is disclosed. The method includes the steps of: forming a viscous pitch foam; passing the precursor through an extrusion tube; and subjecting the precursor in said extrusion tube to a temperature gradient which varies along the length of the extrusion tube to form an extruded carbon foam. The apparatus includes an extrusion tube having a passageway communicatively connected to a chamber in which a viscous pitch foam is formed, the foam passing from the chamber through the extrusion tube, and a heating mechanism in thermal communication with the tube for heating the viscous pitch foam along the length of the tube in accordance with a predetermined temperature gradient.

  11. The generalized active space concept in multiconfigurational self-consistent field methods.

    PubMed

    Ma, Dongxia; Li Manni, Giovanni; Gagliardi, Laura

    2011-07-28

    A multiconfigurational self-consistent field method based on the concept of generalized active space (GAS) is presented. GAS wave functions are obtained by defining an arbitrary number of active spaces with arbitrary occupation constraints. By a suitable choice of the GAS spaces, numerous ineffective configurations present in a large complete active space (CAS) can be removed, while keeping the important ones in the CI space. As a consequence, the GAS self-consistent field approach retains the accuracy of the CAS self-consistent field (CASSCF) ansatz and, at the same time, can deal with larger active spaces, which would be unaffordable at the CASSCF level. Test calculations on the Gd atom, Gd(2) molecule, and oxoMn(salen) complex are presented. They show that GAS wave functions achieve the same accuracy as CAS wave functions on systems that would be prohibitive at the CAS level. PMID:21806111

  12. Homogenization method based on the inverse problem

    SciTech Connect

    Tota, A.; Makai, M.

    2013-07-01

    We present a method for deriving homogeneous multi-group cross sections to replace a heterogeneous region's multi-group cross sections, provided that the fluxes and the currents on the external boundary, and the region-averaged fluxes, are preserved. The method is developed using the diffusion approximation to the neutron transport equation in a symmetrical slab geometry. Assuming that the boundary fluxes are given, two response matrices (RMs) can be defined. The first derives the boundary current from the boundary flux, the second derives the flux integral over the region from the boundary flux. Assuming that these RMs are known, we present a formula which reconstructs the multi-group cross-section matrix and the diffusion coefficients from the RMs of a homogeneous slab. Applying this formula to the RMs of a slab with multiple homogeneous regions yields a homogenization method that produces homogenized multi-group cross sections and homogenized diffusion coefficients such that the fluxes and the currents on the external boundary, and the region-averaged fluxes, are preserved. The method is based on the determination of the eigenvalues and the eigenvectors of the RMs. We reproduce the four-group cross-section matrix and the diffusion constants from the RMs in numerical examples. We give conditions for replacing a heterogeneous region by a homogeneous one so that the boundary current and the region-averaged flux are preserved for a given boundary flux. (authors)

  13. Alternative Methods for Field Corrections in Helical Solenoids

    SciTech Connect

    Lopes, M. L.; Krave, S. T.; Tompkins, J. C.; Yonehara, K.; Flanagan, G.; Kahn, S. A.; Melconian, K.

    2015-05-01

    Helical cooling channels have been proposed for highly efficient 6D muon cooling. Helical solenoids produce solenoidal, helical dipole, and helical gradient field components. Previous studies explored the geometric tunability limits on these main field components. In this paper we present two alternative correction schemes, tilting the solenoids and the addition of helical lines, to reduce the required strength of the anti-solenoid and add an additional tuning knob.

  14. Enzyme catalysis enhanced dark-field imaging as a novel immunohistochemical method

    NASA Astrophysics Data System (ADS)

    Fan, Lin; Tian, Yanyan; Yin, Rong; Lou, Doudou; Zhang, Xizhi; Wang, Meng; Ma, Ming; Luo, Shouhua; Li, Suyi; Gu, Ning; Zhang, Yu

    2016-04-01

    Conventional immunohistochemistry is limited to subjective judgment based on human experience and thus it is clinically required to develop a quantitative immunohistochemical detection. 3,3'-Diaminobenzidin (DAB) aggregates, a type of staining product formed by conventional immunohistochemistry, were found to have a special optical property of dark-field imaging for the first time, and the mechanism was explored. On this basis, a novel immunohistochemical method based on dark-field imaging for detecting HER2 overexpressed in breast cancer was established, and the quantitative analysis standard and relevant software for measuring the scattering intensity was developed. In order to achieve a more sensitive detection, the HRP (horseradish peroxidase)-labeled secondary antibodies conjugated gold nanoparticles were constructed as nanoprobes to load more HRP enzymes, resulting in an enhanced DAB deposition as a dark-field label. Simultaneously, gold nanoparticles also act as a synergistically enhanced agent due to their mimicry of enzyme catalysis and dark-field scattering properties.

  15. Borehole-to-surface electromagnetic methods -- System design and field examples

    SciTech Connect

    Bartel, L.C.; Wilt, M.J.; Tseng, H.W.

    1995-05-01

    Borehole-to-surface electromagnetic (EM) methods are an attractive alternative to Surface-based EM methods for a variety of environmental and engineering applications. They have improved sensitivity to the subsurface resistivity distribution because of the closer proximity to the area of interest offered by the borehole for the source or the receiver. For the borehole-to-surface measurements the source is in the borehole and the receivers are on the surface. On the other hand, for the surface-to-borehole methods, the source is on the surface and the receiver is in a borehole. The surface-to-borehole method has an added advantage since measurements are often more accurate due to the lower noise environment for the receiver. For these methods, the source can be a grounded electric dipole or a vertical magnetic dipole source. An added benefit of these techniques is field measurements are made using a variety of arrays where the system is tailored to the application and where one can take advantage of some new imaging methods. In this short paper the authors describe the application of the borehole-to-surface method, discuss benefits and shortcomings, and give two field examples where they have been used for underground imaging. The examples were the monitoring of a salt water flooding of an oil well and the characterization of a fuel oil spill.

  16. Tuning of random lasers by means of external magnetic fields based on the Voigt effect

    NASA Astrophysics Data System (ADS)

    Ghasempour Ardakani, Abbas; Mahdavi, Seyed Mohammad; Bahrampour, Ali Reza

    2013-04-01

    It has been proposed that the emission spectrum of random lasers with magnetically active semiconductor constituents can be made tunable by external magnetic fields. By employing the FDTD method, the spectral intensity and spatial distribution of the electric field are calculated in the presence of an external magnetic field. It is numerically shown that, due to the magneto-optical Voigt effect, the emission spectrum of a semiconductor-based random laser can be tuned by adjusting the external magnetic field. The effect of the magnetic field on the localization length of the laser modes is investigated. It is also shown that the spatial distribution of the electric field exhibits remarkable modification as the magnetic field is varied.

  17. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.

  18. A new method for calculating the scattered field by an arbitrary cross-sectional conducting cylinder

    NASA Astrophysics Data System (ADS)

    Ragheb, Hassan A.

    2011-04-01

    The calculation of the scattering of a plane electromagnetic wave by a perfectly conducting cylinder of arbitrary cross-section must be performed numerically. This article presents a new approach to this problem, based on simulating the arbitrary cross-sectional perfectly conducting cylinder by perfectly conducting strips of narrow width. The problem then reduces to calculating the scattered electromagnetic field from N conducting strips. The technique for solving such a problem uses an asymptotic method, based on an approximate technique introduced by Karp and Russek (Karp, S.N., and Russek, A. (1956), 'Diffraction by a Wide Slit', Journal of Applied Physics, 27, 886-894) for scattering by a wide slit. The method is applied here to calculate the far-zone scattered field for E-polarised incident waves (transverse magnetic (TM) with respect to the z-axis) on a perfectly conducting cylinder with arbitrary cross-section. Numerical examples are introduced first for comparison, to show the accuracy of the method. Other examples of well-known scattering by conducting cylinders are then introduced, followed by new examples which can only be solved by numerical methods.

  19. Characteristic-based time domain method for antenna analysis

    NASA Astrophysics Data System (ADS)

    Jiao, Dan; Jin, Jian-Ming; Shang, J. S.

    2001-01-01

    The characteristic-based time domain method, developed in the computational fluid dynamics community for solving the Euler equations, is applied to the antenna radiation problem. Based on the principle of the characteristic-based algorithm, a governing equation in the cylindrical coordinate system is formulated directly to facilitate the analysis of body-of-revolution antennas and also to achieve the exact Riemann problem. A finite difference scheme with second-order accuracy in both time and space is constructed from the eigenvalue and eigenvector analysis of the derived governing equation. Rigorous boundary conditions for all the field components are formulated to improve the accuracy of the characteristic-based finite difference scheme. Numerical results demonstrate the validity and accuracy of the proposed technique.

  20. A New Method for Radar Rainfall Estimation Using Merged Radar and Gauge Derived Fields

    NASA Astrophysics Data System (ADS)

    Hasan, M. M.; Sharma, A.; Johnson, F.; Mariethoz, G.; Seed, A.

    2014-12-01

    Accurate estimation of rainfall is critical for any hydrological analysis. The advantage of radar rainfall measurements is their ability to cover large areas. However, the uncertainties in the parameters of the power law, that links reflectivity to rainfall intensity, have to date precluded the widespread use of radars for quantitative rainfall estimates for hydrological studies. There is therefore considerable interest in methods that can combine the strengths of radar and gauge measurements by merging the two data sources. In this work, we propose two new developments to advance this area of research. The first contribution is a non-parametric radar rainfall estimation method (NPZR) which is based on kernel density estimation. Instead of using a traditional Z-R relationship, the NPZR accounts for the uncertainty in the relationship between reflectivity and rainfall intensity. More importantly, this uncertainty can vary for different values of reflectivity. The NPZR method reduces the Mean Square Error (MSE) of the estimated rainfall by 16 % compared to a traditionally fitted Z-R relation. Rainfall estimates are improved at 90% of the gauge locations when the method is applied to the densely gauged Sydney Terrey Hills radar region. A copula based spatial interpolation method (SIR) is used to estimate rainfall from gauge observations at the radar pixel locations. The gauge-based SIR estimates have low uncertainty in areas with good gauge density, whilst the NPZR method provides more reliable rainfall estimates than the SIR method, particularly in the areas of low gauge density. The second contribution of the work is to merge the radar rainfall field with spatially interpolated gauge rainfall estimates. The two rainfall fields are combined using a temporally and spatially varying weighting scheme that can account for the strengths of each method. The weight for each time period at each location is calculated based on the expected estimation error of each method
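
    The excerpt describes a spatially and temporally varying weighting of the radar and gauge fields based on the expected estimation error of each, without giving the exact formula. One common choice consistent with that description is inverse error-variance weighting per pixel; the sketch below uses that assumption with entirely synthetic fields:

        import numpy as np

        def merge_fields(radar, gauge, radar_err, gauge_err):
            """Pixel-wise merge of two rainfall fields, weighted by inverse expected
            error variance (an illustrative choice, not necessarily the paper's scheme)."""
            w_r = 1.0 / np.maximum(radar_err ** 2, 1e-12)
            w_g = 1.0 / np.maximum(gauge_err ** 2, 1e-12)
            return (w_r * radar + w_g * gauge) / (w_r + w_g)

        rng = np.random.default_rng(1)
        truth = rng.gamma(2.0, 2.0, size=(50, 50))             # synthetic "true" rainfall field
        radar = truth + rng.normal(0.0, 1.0, truth.shape)      # noisier radar estimate
        gauge = truth + rng.normal(0.0, 0.5, truth.shape)      # gauge-interpolated estimate
        merged = merge_fields(radar, gauge,
                              radar_err=np.full(truth.shape, 1.0),
                              gauge_err=np.full(truth.shape, 0.5))
        print(np.mean((radar - truth) ** 2), np.mean((merged - truth) ** 2))   # merged MSE is lower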

  1. Hazard surveillance for workplace magnetic fields. 1: Walkaround sampling method for measuring ambient field magnitude; 2: Field characteristics from waveform measurements

    SciTech Connect

    Methner, M.M.; Bowman, J.D.

    1998-03-01

    Recent epidemiologic research has suggested that exposure to extremely low frequency (ELF) magnetic fields (MF) may be associated with leukemia, brain cancer, spontaneous abortions, and Alzheimer's disease. A walkaround sampling method for measuring ambient ELF-MF levels was developed for use in conducting occupational hazard surveillance. This survey was designed to determine the range of MF levels at different industrial facilities so they could be categorized by MF levels and identified for possible subsequent personal exposure assessments. Industries were selected based on their annual electric power consumption in accordance with the hypothesis that large power consumers would have higher ambient MFs when compared with lower power consumers. Sixty-two facilities within thirteen 2-digit Standard Industrial Classifications (SIC) were selected based on their willingness to participate. A traditional industrial hygiene walkaround survey was conducted to identify MF sources, with a special emphasis on work stations.

  2. Characterizing the complex permittivity of high-κ dielectrics using enhanced field method

    SciTech Connect

    Chao, Hsien-Wen; Wong, Wei-Syuan; Chang, Tsun-Hsu

    2015-11-15

    This paper proposed a method to characterize the complex permittivities of samples based on the enhancement of the electric field strength. The enhanced field method significantly improves the measuring range and accuracy of the samples’ electrical properties. Full-wave simulations reveal that the resonant frequency is closely related to the dielectric constant of the sample. In addition, the loss tangent can be determined from the measured quality factor and the just obtained dielectric constant. Materials with low dielectric constant and very low loss tangent are measured for benchmarking and the measured results agree well with previous understanding. Interestingly, materials with extremely high dielectric constants (ε{sub r} > 50), such as titanium dioxide, calcium titanate, and strontium titanate, differ greatly as expected.

  3. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.

  4. General purpose, field-portable cell-based biosensor platform.

    PubMed

    Gilchrist, K H; Barker, V N; Fletcher, L E; DeBusschere, B D; Ghanouni, P; Giovangrandi, L; Kovacs, G T

    2001-09-01

    There are several groups of researchers developing cell-based biosensors for chemical and biological warfare agents based on electrophysiologic monitoring of cells. In order to transition such sensors from the laboratory to the field, a general-purpose hardware and software platform is required. This paper describes the design, implementation, and field-testing of such a system, consisting of cell-transport and data acquisition instruments. The cell-transport module is a self-contained, battery-powered instrument that allows various types of cell-based modules to be maintained at a preset temperature and ambient CO(2) level while in transit or in the field. The data acquisition module provides 32 channels of action potential amplification, filtering, and real-time data streaming to a laptop computer. At present, detailed analysis of the data acquired is carried out off-line, but sufficient computing power is available in the data acquisition module to enable the most useful algorithms to eventually be run real-time in the field. Both modules have sufficient internal power to permit realistic field-testing, such as the example presented in this paper. PMID:11544049

  5. Multi-field Pattern Matching based on Sparse Feature Sampling.

    PubMed

    Wang, Zhongjie; Seidel, Hans-Peter; Weinkauf, Tino

    2016-01-01

    We present an approach to pattern matching in 3D multi-field scalar data. Existing pattern matching algorithms work on single scalar or vector fields only, yet many numerical simulations output multi-field data where only a joint analysis of multiple fields describes the underlying phenomenon fully. Our method takes this into account by bundling information from multiple fields into the description of a pattern. First, we extract a sparse set of features for each 3D scalar field using the 3D SIFT algorithm (Scale-Invariant Feature Transform). This allows for a memory-saving description of prominent features in the data with invariance to translation, rotation, and scaling. Second, the user defines a pattern as a set of SIFT features in multiple fields by e.g. brushing a region of interest. Third, we locate and rank matching patterns in the entire data set. Experiments show that our algorithm is efficient in terms of required memory and computational efforts. PMID:26390479
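
    The 3D SIFT extraction step needs a dedicated implementation, but the bundling-and-matching step can be sketched with plain NumPy: descriptors from several fields are concatenated per feature, and candidate regions are ranked by nearest-neighbour descriptor distance. All arrays below are random stand-ins, not real SIFT output.

        import numpy as np

        def match_score(pattern_desc, region_desc):
            """Score a candidate region: mean distance from each pattern descriptor
            to its nearest neighbour among the region's descriptors (lower is better)."""
            d = np.linalg.norm(pattern_desc[:, None, :] - region_desc[None, :, :], axis=-1)
            return d.min(axis=1).mean()

        rng = np.random.default_rng(2)
        # bundled descriptors: per feature, the field-1 and field-2 descriptors are concatenated
        pattern = rng.normal(size=(5, 256))                    # 5 features x (2 fields x 128 dims)
        regions = [rng.normal(size=(20, 256)) for _ in range(10)]
        regions[3][:5] = pattern + 0.01 * rng.normal(size=pattern.shape)   # hide the pattern in region 3

        scores = [match_score(pattern, r) for r in regions]
        print(int(np.argmin(scores)))                          # -> 3, the region containing the pattern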

  6. Near Field Communication-based telemonitoring with integrated ECG recordings

    PubMed Central

    Morak, J.; Kumpusch, H.; Hayn, D.; Leitner, M.; Scherr, D.; Fruhwald, F.M.; Schreier, G.

    2011-01-01

    Objectives Telemonitoring of vital signs is an established option in the treatment of patients with chronic heart failure (CHF). In order to allow for early detection of atrial fibrillation (AF), which is highly prevalent in the CHF population, telemonitoring programs should include electrocardiogram (ECG) signals. Our aim was therefore to extend our current home monitoring system, based on mobile phones and Near Field Communication (NFC) technology, to enable patients to acquire their ECG signals autonomously in an easy-to-use way. Methods We developed a prototype sensing device for the concurrent acquisition of blood pressure and ECG signals. The design of the device, equipped with NFC technology and Bluetooth, allowed for intuitive interaction with a mobile-phone-based patient terminal. This ECG monitoring system was evaluated in the course of a clinical pilot trial to assess the system’s technical feasibility, usability and patients’ adherence to twice-daily usage. Results 21 patients (4f, 54 ± 14 years) suffering from CHF were included in the study and were asked to transmit two ECG recordings per day via the telemonitoring system autonomously over a monitoring period of seven days. One patient dropped out of the study. 211 data sets were transmitted over a cumulative monitoring period of 140 days (overall adherence rate 82.2%). 55% and 8% of the transmitted ECG signals were sufficient for ventricular and atrial rhythm assessment, respectively. Conclusions Although ECG signal quality has to be improved for better AF detection, the developed communication design joining Bluetooth and NFC technology in our telemonitoring system allows for ambulatory ECG acquisition with high adherence rates and system usability in heart failure patients. PMID:23616890

  7. An atomic orbital-based formulation of analytical gradients and nonadiabatic coupling vector elements for the state-averaged complete active space self-consistent field method on graphical processing units.

    PubMed

    Snyder, James W; Hohenstein, Edward G; Luehr, Nathan; Martínez, Todd J

    2015-10-21

    We recently presented an algorithm for state-averaged complete active space self-consistent field (SA-CASSCF) orbital optimization that capitalizes on sparsity in the atomic orbital basis set to reduce the scaling of computational effort with respect to molecular size. Here, we extend those algorithms to calculate the analytic gradient and nonadiabatic coupling vectors for SA-CASSCF. Combining the low computational scaling with acceleration from graphical processing units allows us to perform SA-CASSCF geometry optimizations for molecules with more than 1000 atoms. The new approach will make minimal energy conical intersection searches and nonadiabatic dynamics routine for molecular systems with O(10(2)) atoms. PMID:26493897

  8. An atomic orbital-based formulation of analytical gradients and nonadiabatic coupling vector elements for the state-averaged complete active space self-consistent field method on graphical processing units

    SciTech Connect

    Snyder, James W.; Hohenstein, Edward G.; Luehr, Nathan; Martínez, Todd J.

    2015-10-21

    We recently presented an algorithm for state-averaged complete active space self-consistent field (SA-CASSCF) orbital optimization that capitalizes on sparsity in the atomic orbital basis set to reduce the scaling of computational effort with respect to molecular size. Here, we extend those algorithms to calculate the analytic gradient and nonadiabatic coupling vectors for SA-CASSCF. Combining the low computational scaling with acceleration from graphical processing units allows us to perform SA-CASSCF geometry optimizations for molecules with more than 1000 atoms. The new approach will make minimal energy conical intersection searches and nonadiabatic dynamics routine for molecular systems with O(10{sup 2}) atoms.

  9. Perspectives on the simulation of protein-surface interactions using empirical force field methods.

    PubMed

    Latour, Robert A

    2014-12-01

    Protein-surface interactions are of fundamental importance for a broad range of applications in the fields of biomaterials and biotechnology. Present experimental methods are limited in their ability to provide a comprehensive depiction of these interactions at the atomistic level. In contrast, empirical force field based simulation methods inherently provide the ability to predict and visualize protein-surface interactions with full atomistic detail. These methods, however, must be carefully developed, validated, and properly applied before confidence can be placed in results from the simulations. In this perspectives paper, I provide an overview of the critical aspects that I consider being of greatest importance for the development of these methods, with a focus on the research that my combined experimental and molecular simulation groups have conducted over the past decade to address these issues. These critical issues include the tuning of interfacial force field parameters to accurately represent the thermodynamics of interfacial behavior, adequate sampling of these types of complex molecular systems to generate results that can be comparable with experimental data, and the generation of experimental data that can be used for simulation results evaluation and validation. PMID:25028242

  10. Identifying work related injuries: comparison of methods for interrogating text fields

    PubMed Central

    2010-01-01

    Background Work-related injuries in Australia are estimated to cost around $57.5 billion annually; however, there are currently insufficient surveillance data available to support an evidence-based public health response. Emergency departments (EDs) in Australia are a potential source of information on work-related injuries, though most EDs do not have an 'Activity Code' to identify work-related cases, with information about the presenting problem recorded in a short free-text field. This study compared methods for interrogating text fields for identifying work-related injuries presenting at emergency departments to inform approaches to surveillance of work-related injury. Methods Three approaches were used to interrogate an injury description text field to classify cases as work-related: keyword search, index search, and content analytic text mining. Sensitivity and specificity were examined by comparing cases flagged by each approach to cases coded with an Activity code during triage. Methods to improve the sensitivity and/or specificity of each approach were explored by adjusting the classification techniques within each broad approach. Results The basic keyword search detected 58% of cases (Specificity 0.99), an index search detected 62% of cases (Specificity 0.87), and the content analytic text mining (using adjusted probabilities) approach detected 77% of cases (Specificity 0.95). Conclusions The findings of this study provide strong support for continued development of text searching methods to obtain information from routine emergency department data, to improve the capacity for comprehensive injury surveillance. PMID:20374657
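
    A minimal version of the keyword approach, together with the sensitivity/specificity comparison against triage Activity codes, could look like the sketch below; the keyword list and records are illustrative toys, not the study's actual search terms or ED data.

        import re

        KEYWORDS = ["at work", "work-related", "forklift", "employer", "on the job"]   # illustrative only

        def flag_work_related(text):
            """Keyword search over the free-text injury description field."""
            text = text.lower()
            return any(re.search(r"\b" + re.escape(k) + r"\b", text) for k in KEYWORDS)

        # (description, activity_code_says_work_related) pairs -- toy records, not real ED data
        records = [
            ("crushed finger by forklift at warehouse", True),
            ("fell from ladder while painting at work", True),
            ("twisted ankle playing football", False),
            ("burn from kettle at home", False),
        ]
        flags = [flag_work_related(text) for text, _ in records]
        truth = [code for _, code in records]
        tp = sum(f and c for f, c in zip(flags, truth))
        tn = sum((not f) and (not c) for f, c in zip(flags, truth))
        sensitivity = tp / sum(truth)
        specificity = tn / (len(truth) - sum(truth))
        print(sensitivity, specificity)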

  11. Method of using an electric field controlled emulsion phase contactor

    DOEpatents

    Scott, T.C.

    1993-11-16

    A system is described for contacting liquid phases comprising a column for transporting a liquid phase contacting system, the column having upper and lower regions. The upper region has a nozzle for introducing a dispersed phase and means for applying thereto a vertically oriented high intensity pulsed electric field. This electric field allows improved flow rates while shattering the dispersed phase into many micro-droplets upon exiting the nozzle to form a dispersion within a continuous phase. The lower region employs means for applying to the dispersed phase a horizontally oriented high intensity pulsed electric field so that the dispersed phase undergoes continuous coalescence and redispersion while being urged from side to side as it progresses through the system, increasing greatly the mass transfer opportunity. 5 figures.

  12. Method of using an electric field controlled emulsion phase contactor

    DOEpatents

    Scott, Timothy C.

    1993-01-01

    A system for contacting liquid phases comprising a column for transporting a liquid phase contacting system, the column having upper and lower regions. The upper region has a nozzle for introducing a dispersed phase and means for applying thereto a vertically oriented high intensity pulsed electric field. This electric field allows improved flow rates while shattering the dispersed phase into many micro-droplets upon exiting the nozzle to form a dispersion within a continuous phase. The lower region employs means for applying to the dispersed phase a horizontally oriented high intensity pulsed electric field so that the dispersed phase undergoes continuous coalescence and redispersion while being urged from side to side as it progresses through the system, increasing greatly the mass transfer opportunity.

  13. Fuzzy logic based ELF magnetic field estimation in substations.

    PubMed

    Kosalay, Ilhan

    2008-01-01

    This paper examines estimation of the extremely low frequency magnetic fields (MF) in the power substation. First, the results of the previous relevant research studies and the MF measurements in a sample power substation are presented. Then, a fuzzy logic model based on the geometric definitions in order to estimate the MF distribution is explained. Visual software, which has a three-dimensional screening unit, based on the fuzzy logic technique, has been developed. PMID:18440967

  14. High field magnetic resonance imaging-based gel dosimetry for small radiation fields

    NASA Astrophysics Data System (ADS)

    Ding, Xuanfeng

    Small megavoltage photon radiation fields (<3 cm diameter) are used in advanced radiation therapy techniques, such as intensity modulated radiotherapy and stereotactic radiosurgery, as well as for cellular and preclinical radiobiology studies (very small fields, <1 mm diameter). Radiation dose characteristics for these small fields are difficult to determine in multiple dimensions because of steep dose gradients (30--40% per mm) and conditions of electronic disequilibrium. Conventional radiation dosimetry techniques have limitations for small fields because detector size may be large compared to radiation field size and/or dose acquisition may be restricted to one or two dimensions. Polymer gel dosimetry is a three-dimensional (3D) dosimetry technique based on radiation-induced polymerization of tissue equivalent gelatin. Polymer gel dosimeters can be read using magnetic resonance imaging (MRI), which detects changes in relaxivity due to gel polymerization. Spatial resolution for dose readout is limited to 0.25--0.5 mm pixel size because of the available magnetic field strengths (1.5T and 3T) and the stability of polymer gelatin at room temperature. A reliable glucose-based MAGIC (methacrylic and ascorbic acid in gelatine initiated by copper) gel dosimeter was formulated and evaluated for small field 3D dosimetry using 3T and 7T high field MRI for dose readout. The melting point of the original recipe MAGIC gel was increased by 4°C by adding 10% glucose to improve gel stability. Excellent spatial resolution of 79 μm (1.5 hr scan) and 39 μm (12 hr scan) was achieved using 7T MRI, demonstrating gel stability over long scan times and high-resolution 3D dosimetry.

  15. Occupation numbers of spherical orbits in self-consistent beyond-mean-field methods

    NASA Astrophysics Data System (ADS)

    Rodríguez, Tomás R.; Poves, Alfredo; Nowacki, Frédéric

    2016-05-01

    We present a method to compute the number of particles occupying spherical single-particle (SSP) levels within the energy density functional (EDF) framework. These SSP levels are defined for each nucleus by performing self-consistent mean-field calculations. The nuclear many-body states, in which the occupation numbers are evaluated, are obtained with a symmetry conserving configuration mixing (SCCM) method based on the Gogny EDF. The method allows a closer comparison between EDF and shell model with configuration mixing in large valence spaces (SM-CI) results, and can serve as guidance to define physically sound valence spaces for SM-CI calculations. As a first application of the method, we analyze the onset of deformation in neutron-rich N = 40 isotones and the role of the SSP levels around this harmonic oscillator magic number, with particular emphasis on the structure of 64Cr.

  16. Gradient shimming based on regularized estimation for B0-field and shim functions

    NASA Astrophysics Data System (ADS)

    Song, Kan; Bao, Qingjia; Chen, Fang; Huang, Chongyang; Feng, Jiwen; Liu, Chaoyang

    2016-07-01

    Mapping the B0-field and shim functions spatially is a crucial step in gradient shimming. The conventional estimation method used in the phase-difference imaging technique takes no account of noise and T2∗ effects, and is prone to creating noisy and distorted field maps. This paper describes a new gradient shimming method based on regularized estimation of the B0-field and shim functions. Based on a statistical model, the B0-field and shim function maps are estimated by a Penalized Maximum Likelihood method that minimizes two regularized least-squares cost functions, respectively. The first cost function, for the B0-field, exploits the two facts that the noise in the phase-difference measurements is Gaussian and that B0-field maps tend to be smooth. The other adds the additional fact that each shim function corresponds to a given spherical harmonic of the magnetic field. Significant improvements in the quality of field mapping and in the final shimming results are demonstrated through computer simulations as well as experiments, especially when the magnetic field homogeneity is poor.

  17. Gradient shimming based on regularized estimation for B0-field and shim functions.

    PubMed

    Song, Kan; Bao, Qingjia; Chen, Fang; Huang, Chongyang; Feng, Jiwen; Liu, Chaoyang

    2016-07-01

    Mapping the B0-field and shim functions spatially is a crucial step in gradient shimming. The conventional estimation method used in the phase-difference imaging technique takes no account of noise and T2(∗) effects, and is prone to creating noisy and distorted field maps. This paper describes a new gradient shimming method based on regularized estimation of the B0-field and shim functions. Based on a statistical model, the B0-field and shim function maps are estimated by a Penalized Maximum Likelihood method that minimizes two regularized least-squares cost functions, respectively. The first cost function, for the B0-field, exploits the two facts that the noise in the phase-difference measurements is Gaussian and that B0-field maps tend to be smooth. The other adds the additional fact that each shim function corresponds to a given spherical harmonic of the magnetic field. Significant improvements in the quality of field mapping and in the final shimming results are demonstrated through computer simulations as well as experiments, especially when the magnetic field homogeneity is poor. PMID:27131476
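
    The paper's exact cost functions are not reproduced in the record, but the general shape it describes - a data-fit term plus a smoothness penalty, minimized as a regularized least-squares problem - can be sketched in 1D with a generic first-difference smoothness prior standing in for the paper's statistical model:

        import numpy as np

        def smooth_field_map(b, weights, lam=50.0):
            """Penalized least-squares estimate of a 1D field map:
            minimize sum_i w_i*(f_i - b_i)^2 + lam*||D f||^2, with D the first-difference
            operator (a generic smoothness prior, not the paper's exact cost function)."""
            n = len(b)
            W = np.diag(weights)
            D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]           # (n-1) x n first differences
            return np.linalg.solve(W + lam * D.T @ D, W @ b)

        rng = np.random.default_rng(3)
        true_map = np.linspace(0.0, 5.0, 100) ** 2 / 5.0       # smooth "B0 field" profile
        noisy = true_map + rng.normal(0.0, 0.5, 100)           # noisy phase-difference estimate
        est = smooth_field_map(noisy, weights=np.ones(100))
        print(np.mean((noisy - true_map) ** 2), np.mean((est - true_map) ** 2))   # regularized map is closer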

  18. Anisotropic Upper Critical Field of Iron-Based Superconductors

    NASA Astrophysics Data System (ADS)

    Huang, Ruiqi; She, Weilong

    2016-09-01

    The upper critical field and its anisotropy are among the easiest properties to examine in research on iron-based superconductors. Based on warped cylindrical Fermi surface models, we investigate the temperature and angle dependence of the upper critical field in detail by employing the quasi-classical formalism of the Werthamer-Helfand-Hohenberg (WHH) theory. Our numerical results reveal the anisotropy of the upper critical field, which may be caused by an anisotropic gap function (e.g., d-wave pairing) or by an anisotropic Fermi surface. Further, according to our analysis, this anisotropy can be modulated by the deformation of the Fermi surface and will be strongly suppressed by the Pauli paramagnetic effect.

  19. Iron-based superconductors in high magnetic fields

    NASA Astrophysics Data System (ADS)

    Coldea, Amalia I.; Braithwaite, Daniel; Carrington, Antony

    2013-01-01

    Here we review measurements of the normal and superconducting state properties of iron-based superconductors using high magnetic fields. We discuss the various physical mechanisms that limit superconductivity in high fields, and the information on the superconducting state that can be extracted from the upper critical field, but also how thermal fluctuations affect its determination by resistivity and specific heat measurements. We also discuss measurements of the normal state electronic structure focusing on measurement of quantum oscillations, particularly the de Haas-van Alphen effect. These results have determined very accurately, the topology of the Fermi surface and the quasi-particle masses in a number of different iron-based superconductors, from the 1111, 122 and 111 families.

  20. Cellular automata based byte error correcting codes over finite fields

    NASA Astrophysics Data System (ADS)

    Köroğlu, Mehmet E.; Şiap, İrfan; Akın, Hasan

    2012-08-01

    Reed-Solomon codes are very convenient for burst error correction, which occurs frequently in applications, but as the number of errors increases, the circuit structure for implementing Reed-Solomon codes becomes very complex. An alternative solution to this problem is the modular and regular structure of cellular automata, which can be implemented economically in VLSI. Therefore, in recent years, cellular automata have become an important tool for error correcting codes. Cellular automata based byte error correcting codes analogous to extended Reed-Solomon codes over binary fields were first studied by Chowdhury et al. [1], and Bhaumik et al. [2] improved the coding-decoding scheme. In this study, cellular automata based double-byte error correcting codes are generalized from binary fields to primitive finite fields Zp.
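
    The specific cellular automata used in the code construction are defined in the cited papers; purely as a generic illustration of the building block, one step of a linear (additive) cellular automaton over Zp with a three-cell neighbourhood can be written as follows (rule weights, field size and initial state are arbitrary):

        import numpy as np

        def linear_ca_step(state, weights, p):
            """One step of a linear cellular automaton over Z_p with null boundaries:
            each cell becomes a weighted sum of its left/centre/right neighbours mod p."""
            padded = np.concatenate(([0], state, [0]))
            wl, wc, wr = weights
            return (wl * padded[:-2] + wc * padded[1:-1] + wr * padded[2:]) % p

        p = 5                                                  # the finite field Z_5
        state = np.array([1, 4, 0, 2, 3])
        for _ in range(3):
            state = linear_ca_step(state, weights=(1, 2, 1), p=p)
            print(state)

    Because the update is linear, the whole evolution amounts to multiplication by a matrix over Zp, and it is the algebraic properties of that matrix that code constructions of this kind exploit.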

  1. NIM: A Node Influence Based Method for Cancer Classification

    PubMed Central

    Wang, Yiwen; Yang, Jianhua

    2014-01-01

    The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinically based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarrays. Our main idea is to confront the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we proposed, this paper presents a novel high accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of the training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to each class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART. PMID:25180045
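
    The node influence model itself is defined in the paper; the sketch below substitutes a simple stand-in (influence = a sample's total similarity to the other training samples) purely to make the four-step pipeline concrete - similarity matrix, influence, influence-weighted class scores, prediction - on synthetic data:

        import numpy as np

        def cosine_sim(A, B):
            A = A / np.linalg.norm(A, axis=1, keepdims=True)
            B = B / np.linalg.norm(B, axis=1, keepdims=True)
            return A @ B.T

        def nim_predict(X_train, y_train, X_test):
            # Step 1: similarity between test and training samples
            S = cosine_sim(X_test, X_train)
            # Step 2: node influence of each training sample -- stand-in definition:
            #         total similarity to the other training samples
            influence = cosine_sim(X_train, X_train).sum(axis=1) - 1.0
            # Step 3: class score = influence-weighted sum of similarities per class
            classes = np.unique(y_train)
            scores = np.stack([(S[:, y_train == c] * influence[y_train == c]).sum(axis=1)
                               for c in classes], axis=1)
            # Step 4: assign each test sample to the highest-scoring class
            return classes[np.argmax(scores, axis=1)]

        rng = np.random.default_rng(4)
        base = rng.uniform(1.0, 10.0, size=(2, 50))            # two synthetic "expression profiles"
        X_train = np.vstack([base[0] + rng.normal(0, 1, (20, 50)),
                             base[1] + rng.normal(0, 1, (20, 50))])
        y_train = np.array([0] * 20 + [1] * 20)
        X_test = np.vstack([base[0] + rng.normal(0, 1, (5, 50)),
                            base[1] + rng.normal(0, 1, (5, 50))])
        print(nim_predict(X_train, y_train, X_test))           # recovers the two groups: 0 0 0 0 0 1 1 1 1 1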

  2. Towards Making Data Bases Practical for use in the Field

    NASA Astrophysics Data System (ADS)

    Fischer, T. P.; Lehnert, K. A.; Chiodini, G.; McCormick, B.; Cardellini, C.; Clor, L. E.; Cottrell, E.

    2014-12-01

    Geological, geochemical, and geophysical research is often field based with travel to remote areas and collection of samples and data under challenging environmental conditions. Cross-disciplinary investigations would greatly benefit from near real-time data access and visualisation within the existing framework of databases and GIS tools. An example of complex, interdisciplinary field-based and data intensive investigations is that of volcanologists and gas geochemists, who sample gases from fumaroles, hot springs, dry gas vents, hydrothermal vents and wells. Compositions of volcanic gas plumes are measured directly or by remote sensing. Soil gas fluxes from volcanic areas are measured by accumulation chamber and involve hundreds of measurements to calculate the total emission of a region. Many investigators also collect rock samples from recent or ancient volcanic eruptions. Structural, geochronological, and geophysical data collected during the same or related field campaigns complement these emissions data. All samples and data collected in the field require a set of metadata including date, time, location, sample or measurement id, and descriptive comments. Currently, most of these metadata are written in field notebooks and later transferred into a digital format. Final results such as laboratory analyses of samples and calculated flux data are tabulated for plotting, correlation with other types of data, modeling and finally publication and presentation. Data handling, organization and interpretation could be greatly streamlined by using digital tools available in the field to record metadata, assign an International Geo Sample Number (IGSN), upload measurements directly from field instruments, and arrange sample curation. Available data display tools such as GeoMapApp and existing data sets (PetDB, IRIS, UNAVCO) could be integrated to direct locations for additional measurements during a field campaign. Nearly live display of sampling locations, pictures

  3. Inquiry-Based Field Studies Involving Teacher-Scientist Collaboration.

    ERIC Educational Resources Information Center

    Odom, Arthur Louis

    2001-01-01

    Describes a collaborative professional development program, Inquiry-Based Field Studies Involving Teacher-Scientist Collaboration, that uses scientist-teacher teams to improve teachers' understanding of scientific inquiry. Reports that the project allowed teachers to develop a deeper understanding of the nature of science. (Author/YDS)

  4. Field-Based Concerns about Fourth-Generation Evaluation Theory.

    ERIC Educational Resources Information Center

    Lai, Morris K.

    Some aspects of fourth generation evaluation procedures that have been advocated by E. G. Guba and Y. S. Lincoln were examined empirically, with emphasis on areas where there have been discrepancies between theory and field-based experience. In fourth generation evaluation, the product of an evaluation is not a set of conclusions, recommendations,…

  5. Field-Based Research Experience in Earth Science Teacher Education.

    ERIC Educational Resources Information Center

    O'Neal, Michael L.

    2003-01-01

    Describes the pilot of a field-based research experience in earth science teacher education designed to produce well-prepared, scientifically and technologically literate earth science teachers through a teaching- and research-oriented partnership between in-service teachers and a university scientist-educator. Indicates that the pilot program was…

  6. An anion sensor based on an organic field effect transistor.

    PubMed

    Minami, Tsuyoshi; Minamiki, Tsukuru; Tokito, Shizuo

    2015-06-11

    We propose an organic field effect transistor (OFET)-based sensor design as a new and innovative platform for anion detection. OFETs could be fabricated on low-cost plastic film substrates using printing technologies, suggesting that OFETs can potentially be applied to practical supramolecular anion sensor devices in the near future. PMID:25966040

  7. Participative Critical Enquiry in Graduate Field-Based Learning

    ERIC Educational Resources Information Center

    Reilly, Kathy; Clavin, Alma; Morrissey, John

    2016-01-01

    This paper outlines a critical pedagogic approach to field-based learning (FBL) at graduate level. Drawing on student experience stemming from a FBL module and as part of an MA programme in Environment, Society and Development, the paper addresses the complexities associated with student-led, participative critical enquiry during fieldwork in…

  8. Development of three-dimensional optical correction method for reconstruction of flow field in droplet

    NASA Astrophysics Data System (ADS)

    Ko, Han Seo; Gim, Yeonghyeon; Kang, Seung-Hwan

    2015-11-01

    A three-dimensional optical correction method was developed to reconstruct droplet-based flow fields. For a numerical simulation, synthetic phantoms were reconstructed by a simultaneous multiplicative algebraic reconstruction technique using three projection images positioned at an offset angle of 45°. If the synthetic phantom lies inside a conical object whose refractive index differs from that of the atmosphere, the image can be distorted because light is refracted at the surface of the conical object. Thus, the direction of each projection ray was replaced by the refracted ray produced at the surface of the conical object. To validate the method accounting for this distortion effect, reconstruction results of the developed method were compared with the original phantom. As a result, the reconstruction obtained with the method showed smaller error than that obtained without it. The method was applied to a Taylor cone, formed by a high voltage between a droplet and a substrate, to reconstruct the three-dimensional flow fields for analysis of the characteristics of the droplet. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korean government (MEST) (No. 2013R1A2A2A01068653).
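
    The core geometric ingredient of the correction, replacing a straight projection ray by its refracted direction at the surface of the conical object, can be illustrated with the vector form of Snell's law. This is a hedged sketch of that single step, not the full tomographic reconstruction; the index values in the example are assumed.

    ```python
    # Vector form of Snell's law: refract unit direction d at a surface with unit
    # normal n (pointing toward the incoming ray). Illustrative only.
    import numpy as np

    def refract(d, n, n1, n2):
        d = d / np.linalg.norm(d)
        n = n / np.linalg.norm(n)
        eta = n1 / n2
        cos_i = -np.dot(n, d)
        sin2_t = eta**2 * (1.0 - cos_i**2)
        if sin2_t > 1.0:
            return None                      # total internal reflection: ray does not enter
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * d + (eta * cos_i - cos_t) * n

    # Example: ray entering a medium of index 1.33 from air at 45 degrees incidence.
    d = np.array([np.sin(np.pi / 4), 0.0, -np.cos(np.pi / 4)])
    print(refract(d, np.array([0.0, 0.0, 1.0]), 1.0, 1.33))
    ```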

  9. Comparison of six experimental methods to measure snow SSA in the field

    NASA Astrophysics Data System (ADS)

    Picard, G.; Domine, F.; Arnaud, L.; Champollion, N.; Cliche, P.; Dufour, A.; Flin, F.; Gallet, J.; Langlois, A.; Lesaffre, B.; Royer, A.

    2009-12-01

    The size of snow grains is a crucial variable to interpret both optical and microwave remote sensing data, and to quantify physical and chemical processes within the snowpack. However, “grain size” is an ambiguous variable that is increasingly replaced by the physical variable “specific surface area” (SSA). Until recently, methods to measure snow SSA were tedious and not easy to implement in the field. These earlier methods include stereology, CH4 adsorption, and X-ray tomography. Recently, faster methods based on the measurement of NIR reflectance have been developed, but the accuracy of most of these methods has been subjected only to limited testing. We have therefore organized an intercomparison campaign on the Glacier de La Girose, 3200 m a.s.l., French Alps, in April 2009. Four recent or novel NIR / SWIR methods were used: the DUFISSS integrating sphere operating at 1310 nm, the POSSSUM SSA profiler operating at 1310 and 635 nm, the IRIS mobile integrating sphere operating at 1300 nm, and the NIR photography method operating at 850 nm and originally developed at SLF in Switzerland. In addition, snow samples were taken and transported in liquid nitrogen for measurement in the laboratory using CH4 adsorption, and other samples were filled in the field with 1-chloronaphthalene for X-ray microtomography. Comparison of the data sets obtained using these six methods will be presented and discussed. Conclusions will be drawn regarding the accuracy and potential of the recent or novel NIR techniques tested here.

  10. An equivalent source method for modelling the global lithospheric magnetic field

    NASA Astrophysics Data System (ADS)

    Kother, Livia; Hammer, Magnus D.; Finlay, Christopher C.; Olsen, Nils

    2015-10-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when it was at its lowest altitude and solar activity was quiet. All three components of the vector field data are utilized at all available latitudes. Estimates of core and large-scale magnetospheric sources are removed from the measurements using the CHAOS-4 model. Quiet-time and night-side data selection criteria are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic equivalent potential field sources (monopoles) arranged in an icosahedron grid at a depth of 100 km below the surface. The corresponding model parameters are estimated using an iteratively reweighted least-squares algorithm that includes model regularization (either quadratic or maximum entropy) and Huber weighting. Data error covariance matrices are implemented, accounting for the dependence of data variances on quasi-dipole latitude. The resulting equivalent source lithospheric field models show a degree correlation to MF7 greater than 0.7 out to spherical harmonic degree 100. Compared to the quadratic regularization approach, the entropy regularized model possesses notably lower power above degree 70 and a lower number of degrees of freedom despite fitting the observations to a very similar level. Advantages of our equivalent source method include its local nature, the possibility for regional grid refinement and the production of local power spectra, the ability to implement constraints and regularization depending on geographical position, and the ease of transforming the equivalent source values into spherical harmonics.
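
    A minimal sketch of the robust estimation loop described above, iteratively reweighted least squares with Huber weights and a quadratic (Tikhonov) penalty, is given below. The design matrix G mapping monopole strengths to field observations, the noise scale and the damping parameter are placeholders, and the maximum entropy option is not shown.

    ```python
    # Hedged sketch: IRLS with Huber weights and quadratic regularization for an
    # equivalent-source (monopole) inversion. G, d, alpha and sigma are assumptions.
    import numpy as np

    def huber_weights(residuals, sigma, c=1.5):
        r = np.abs(residuals) / sigma
        w = np.ones_like(r)
        mask = r > c
        w[mask] = c / r[mask]
        return w

    def irls_huber(G, d, alpha=1e-2, sigma=1.0, n_iter=10):
        # Ordinary damped least-squares start, then reweight residuals each pass.
        m = np.linalg.solve(G.T @ G + alpha * np.eye(G.shape[1]), G.T @ d)
        for _ in range(n_iter):
            w = huber_weights(d - G @ m, sigma)
            W = np.diag(w)
            m = np.linalg.solve(G.T @ W @ G + alpha * np.eye(G.shape[1]), G.T @ W @ d)
        return m
    ```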

  11. Potential Methods for Reducing Nitrate Losses in Artificially Drained Fields

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Nitrate in water removed from fields by subsurface drain ('tile') systems is often at concentrations exceeding the ten mg N/L maximum contaminant level (MCL) set by the USEPA for drinking water and has been implicated in contributing to the hypoxia problem within the northern Gulf of Mexico. Because...

  12. NITRIC ACID SHOOTOUT: FIELD COMPARISON OF MEASUREMENT METHODS (JOURNAL VERSION)

    EPA Science Inventory

    Eighteen instruments for measuring atmospheric concentrations of nitric acid were compared in an eight-day field study at Pomona College, situated in the eastern portion of the Los Angeles Basin, in September 1985. The study design included collocated and separated duplicate samp...

  13. Enhancing Field Research Methods with Mobile Survey Technology

    ERIC Educational Resources Information Center

    Glass, Michael R.

    2015-01-01

    This paper assesses the experience of undergraduate students using mobile devices and a commercial application, iSurvey, to conduct a neighborhood survey. Mobile devices offer benefits for enhancing student learning and engagement. This field exercise created the opportunity for classroom discussions on the practicalities of urban research, the…

  14. Field-structured material media and methods for synthesis thereof

    DOEpatents

    Martin, James E.; Hughes, Robert C.; Anderson, Robert A.

    2001-09-18

    The present application is directed to a new class of composite materials, called field-structured composite (FSC) materials, which comprise an oriented aggregate structure made of magnetic particles suspended in a nonmagnetic medium, and to a new class of processes for their manufacture. FSC materials have much potential for application, including use in chemical, optical, environmental, and mechanical sensors.

  15. Specific force field parameters determination for the hybrid ab initio QM/MM LSCF method.

    PubMed

    Ferré, Nicolas; Assfeld, Xavier; Rivail, Jean-Louis

    2002-04-30

    The pure quantum mechanics method, called Local Self-Consistent Field (LSCF), which allows one to optimize a wave function under the constraint that some predefined spinorbitals are kept frozen, is discussed. These spinorbitals can be of any shape, and their occupation numbers can be 0 or 1. Any post-Hartree-Fock method based on the restricted or unrestricted Hartree-Fock Slater determinant, as well as Kohn-Sham DFT, is available. The LSCF method is easily applied to hybrid quantum mechanics/molecular mechanics (QM/MM) procedures where the quantum and the classical parts are covalently bonded. The complete methodology of our hybrid QM/MM scheme is detailed for studies of macromolecular systems. Not only the energy but also the gradients are derived; thus, full geometry optimization of the whole system is feasible. We show that only specific force field parameters are needed for a correct description of the molecule; they are given for some general chemical bonds. A careful analysis of the errors induced by the use of molecular mechanics in hybrid computations shows that a general procedure can be derived to obtain accurate results at low computational effort. The methodology is applied to the structure determination of the crambin protein and to Menshutkin reactions between primary amines and chloromethane. PMID:11939595

  16. Size-extensive vibrational self-consistent field methods with anharmonic geometry corrections

    NASA Astrophysics Data System (ADS)

    Hermes, Matthew R.; Keçeli, Murat; Hirata, So

    2012-06-01

    In the size-extensive vibrational self-consistent field (XVSCF) method introduced earlier [M. Keçeli and S. Hirata, J. Chem. Phys. 135, 134108 (2011)], 10.1063/1.3644895, only a small subset of even-order force constants that can form connected diagrams were used to compute extensive total energies and intensive transition frequencies. The mean-field potentials of XVSCF formed with these force constants have been shown to be effectively harmonic, making basis functions, quadrature, or matrix diagonalization in the conventional VSCF method unnecessary. We introduce two size-consistent VSCF methods, XVSCF(n) and XVSCF[n], for vibrationally averaged geometries in addition to energies and frequencies including anharmonic effects caused by up to the nth-order force constants. The methods are based on our observations that a small number of odd-order force constants of certain types can form open, connected diagrams isomorphic to the diagram of the mean-field potential gradients and that these nonzero gradients shift the potential minima by intensive amounts, which are interpreted as anharmonic geometry corrections. XVSCF(n) evaluates these mean-field gradients and force constants at the equilibrium geometry and estimates this shift accurately, but approximately, neglecting the coupling between these two quantities. XVSCF[n] solves the coupled equations for geometry corrections and frequencies with an iterative algorithm, giving results that should be identical to those of VSCF when applied to an infinite system. We present the diagrammatic and algebraic definitions, algorithms, and initial implementations as well as numerical results of these two methods. The results show that XVSCF(n) and XVSCF[n] reproduce the vibrationally averaged geometries of VSCF for naphthalene and anthracene in their ground and excited vibrational states accurately at fractions of the computational cost.

  17. Random fields generation on the GPU with the spectral turning bands method

    NASA Astrophysics Data System (ADS)

    Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.

    2014-08-01

    Random field (RF) generation algorithms are of paramount importance for many scientific domains, such as astrophysics, geostatistics, computer graphics and many others. Some examples are the generation of initial conditions for cosmological simulations or hydrodynamical turbulence driving. In the latter a new random field is needed every time step. Current approaches commonly make use of 3D FFT (Fast Fourier Transform) and require the whole generated field to be stored in memory. Moreover, they are limited to regular rectilinear meshes and need an extra processing step to support non-regular meshes. In this paper, we introduce TBARF (Turning BAnd Random Fields), a RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs. Our algorithm replaces the 3D FFT with a lower order, one-dimensional FFT followed by a projection step, and is further optimized with loop unrolling and blocking. We show that TBARF can easily generate RFs on non-regular (non-uniform) meshes and can handle mesh sizes larger than the available GPU memory by using a streaming, out-of-core approach. TBARF is 2 to 5 times faster than the traditional methods when generating RFs with more than 16M cells, and has been successfully applied to two real case scenarios: planetary nebulae and cosmological simulations.
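
    As a rough illustration of the turning-band idea, summing one-dimensional random processes evaluated along random directions by projecting the 3-D points onto those lines, here is a hedged NumPy sketch. The spectral line simulation, covariance model and all GPU-specific optimizations of TBARF are not reproduced; the cosine-sum process is an illustrative stand-in.

    ```python
    # Minimal turning-bands-style sketch: project points onto random lines and sum
    # 1-D random processes along them. Covariance handling is illustrative only.
    import numpy as np

    def turning_bands(points, n_lines=100, n_harmonics=50, corr_len=1.0, rng=None):
        rng = np.random.default_rng(rng)
        z = np.zeros(len(points))
        for _ in range(n_lines):
            u = rng.normal(size=3)
            u /= np.linalg.norm(u)                       # random direction on the sphere
            t = points @ u                               # project 3-D points onto the line
            # crude 1-D spectral process: sum of random cosines
            freqs = rng.normal(scale=1.0 / corr_len, size=n_harmonics)
            phases = rng.uniform(0, 2 * np.pi, size=n_harmonics)
            z += np.sqrt(2.0 / n_harmonics) * np.cos(np.outer(t, freqs) + phases).sum(1)
        return z / np.sqrt(n_lines)

    # Example on a small regular grid flattened to an (N, 3) array of coordinates.
    xs = np.linspace(0, 10, 16)
    pts = np.array(np.meshgrid(xs, xs, xs)).reshape(3, -1).T
    field = turning_bands(pts, rng=0)
    ```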

  18. Novel magnetic field sensor based on magnetic fluids infiltrated dual-core photonic crystal fibers

    NASA Astrophysics Data System (ADS)

    Li, Jianhua; Wang, Rong; Wang, Jingyuan; Zhang, Baofu; Xu, Zhiyong; Wang, Huali

    2014-03-01

    A novel magnetic field sensor based on magnetic-fluid-infiltrated dual-core photonic crystal fibers (PCFs) is proposed in this paper. In the cross-section of the designed PCF, the two fiber cores filled with magnetic fluid (Fe3O4) are separated by an air hole and form two independent waveguides with mode coupling. The mode coupling under different magnetic field strengths is investigated theoretically. A novel and simple magnetic field sensing system is proposed and its sensing performance has been studied numerically. The results show that the magnetic field sensor with 15-cm PCFs has a large sensing range and a high sensitivity of 4.80 pm/Oe. It provides a new feasible method to design PCF-based magnetic field sensors.

  19. Graph-based Methods for Orbit Classification

    SciTech Connect

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincare plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  20. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    NASA Astrophysics Data System (ADS)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric details can be observed in high resolution remote sensing images, ground objects display rich texture, structure, shape, and hierarchical semantic characteristics, and more landscape elements are represented by small groups of pixels. In recent years, object-based remote sensing analysis methodology has been widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground-object semantics and the over-segmentation regions are defined within a conditional random field framework; (4) hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remote sensing data (GeoEye) are used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies that it is suitable for the classification of high resolution remote sensing images.

  1. FIELD VALIDATION OF EPA (ENVIRONMENTAL PROTECTION AGENCY) REFERENCE METHOD 23

    EPA Science Inventory

    The accuracy and precision of U.S. Environmental Protection Agency Reference Method 23 was evaluated at a trichloroethylene degreasing facility and an ethylene dichloride plant. The method consists of a procedure for obtaining an integrated sample followed by gas chromatographic ...

  2. Computational Method for Electrical Potential and Other Field Problems

    ERIC Educational Resources Information Center

    Hastings, David A.

    1975-01-01

    Proposes the finite differences relaxation method as a teaching tool in secondary and university level courses discussing electrical potential, temperature distribution in a region, and similar problems. Outlines the theory and operating procedures of the method, and discusses examples of teaching applications, including possible laboratory…
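
    The finite-difference relaxation method mentioned above is easy to state in a few lines: each interior grid value is repeatedly replaced by the average of its four neighbours, with electrode nodes held fixed, until Laplace's equation is satisfied to a chosen tolerance. A minimal sketch follows; the grid size, boundary values and tolerance are arbitrary example choices.

    ```python
    # Jacobi-style relaxation for the electric potential on a 2-D grid.
    import numpy as np

    def relax_potential(V, fixed, tol=1e-4, max_iter=10000):
        """V: 2-D array of initial potentials; fixed: boolean mask of electrode/boundary nodes."""
        for _ in range(max_iter):
            V_new = V.copy()
            V_new[1:-1, 1:-1] = 0.25 * (V[2:, 1:-1] + V[:-2, 1:-1] + V[1:-1, 2:] + V[1:-1, :-2])
            V_new[fixed] = V[fixed]                  # keep electrodes at prescribed values
            if np.max(np.abs(V_new - V)) < tol:
                return V_new
            V = V_new
        return V

    # Example: a 50x50 region with a 100 V plate on the left edge, 0 V on the other edges.
    V = np.zeros((50, 50)); V[:, 0] = 100.0
    fixed = np.zeros_like(V, dtype=bool)
    fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True
    V = relax_potential(V, fixed)
    ```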

  3. Alternative methods to smooth the Earth's gravity field

    NASA Technical Reports Server (NTRS)

    Jekeli, C.

    1981-01-01

    Convolutions on the sphere with corresponding convolution theorems are developed for one and two dimensional functions. Some of these results are used in a study of isotropic smoothing operators or filters. Well known filters in Fourier spectral analysis, such as the rectangular, Gaussian, and Hanning filters, are adapted for data on a sphere. The low-pass filter most often used on gravity data is the rectangular (or Pellinen) filter. However, its spectrum has relatively large sidelobes; and therefore, this filter passes a considerable part of the upper end of the gravity spectrum. The spherical adaptations of the Gaussian and Hanning filters are more efficient in suppressing the high-frequency components of the gravity field since their frequency response functions are strongly tapered at the high frequencies with no, or small, sidelobes. Formulas are given for practical implementation of these new filters.
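
    For the spherical Gaussian filter, the degree-wise smoothing coefficients are often computed with a three-term recursion attributed to Jekeli; the sketch below uses that commonly quoted form with an assumed half-response radius and Earth radius, and one common normalization (W0 = 1). Note that the recursion can become numerically unstable at high degree, so treat the high-degree tail with care.

    ```python
    # Hedged sketch: degree-wise coefficients of an isotropic Gaussian averaging
    # kernel on the sphere (recursion commonly attributed to Jekeli, 1981).
    import numpy as np

    def gaussian_filter_coeffs(n_max, r=500e3, R=6371e3):
        b = np.log(2.0) / (1.0 - np.cos(r / R))          # r: half-response averaging radius
        W = np.zeros(n_max + 1)
        W[0] = 1.0
        if n_max >= 1:
            W[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
        for n in range(1, n_max):
            W[n + 1] = -(2.0 * n + 1.0) / b * W[n] + W[n - 1]   # three-term recursion
        return W

    print(gaussian_filter_coeffs(10, r=500e3)[:5])       # low degrees pass nearly unattenuated
    ```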

  4. Geometric and Topological Methods for Quantum Field Theory

    NASA Astrophysics Data System (ADS)

    Cardona, Alexander; Contreras, Iván.; Reyes-Lega, Andrés. F.

    2013-05-01

    Introduction; 1. A brief introduction to Dirac manifolds Henrique Bursztyn; 2. Differential geometry of holomorphic vector bundles on a curve Florent Schaffhauser; 3. Paths towards an extension of Chern-Weil calculus to a class of infinite dimensional vector bundles Sylvie Paycha; 4. Introduction to Feynman integrals Stefan Weinzierl; 5. Iterated integrals in quantum field theory Francis Brown; 6. Geometric issues in quantum field theory and string theory Luis J. Boya; 7. Geometric aspects of the standard model and the mysteries of matter Florian Scheck; 8. Absence of singular continuous spectrum for some geometric Laplacians Leonardo A. Cano García; 9. Models for formal groupoids Iván Contreras; 10. Elliptic PDEs and smoothness of weakly Einstein metrics of Hölder regularity Andrés Vargas; 11. Regularized traces and the index formula for manifolds with boundary Alexander Cardona and César Del Corral; Index.

  5. Magnetic field adjustment structure and method for a tapered wiggler

    SciTech Connect

    Halbach, K.

    1988-03-01

    An improved wiggler having means for adjusting the magnetic field generated by electromagnet poles spaced along the path of a charged particle beam to compensate for energy losses in the charged particles is described, which comprises: (a) windings on at least some of the electromagnet poles in the wiggler; (b) one of the windings on each of a group of adjacent electromagnet poles connected to a first power supply, and another winding on the electromagnet poles having more than one winding connected to a second power supply; and (c) means for independently adjusting one power supply to independently vary the current in one of the windings on a group of adjacent electromagnet poles; whereby the magnetic field strength of a group of adjacent electromagnet poles in the wiggler may be changed in smaller increments.

  6. Magnetic field sensor using a polymer-based vibrator

    NASA Astrophysics Data System (ADS)

    Wu, Jiang; Hasebe, Kazuhiko; Mizuno, Yosuke; Tabaru, Marie; Nakamura, Kentaro

    2016-09-01

    In this technical note, a polymer-based magnetic sensor with high resolution was devised for sensing high magnetic fields. It consisted of a bimorph (vibrator) made of poly(phenylene sulfide) (PPS) and a phosphor-bronze foil glued on the free end of the bimorph. According to Faraday’s law of induction, when a magnetic field in the direction perpendicular to the bimorph was applied, the foil cut the magnetic flux and generated an alternating voltage across the leads at the natural frequency of the bimorph. Because PPS has low mechanical loss, low elastic modulus, and low density, a high vibration velocity can be achieved if it is employed as the elastomer of the bimorph. The devised sensor was tested in the magnetic field range of 0.1–570 mT and exhibited a minimum detectable magnetic field of 0.1 mT. At a zero-to-peak driving voltage of 60 V, the sensitivity of the PPS-based magnetic sensor reached 10.5 V T^-1, which was 1.36 times the value of the aluminum-based magnetic sensor with the same principle and dimensions.

  7. Circuitry, systems and methods for detecting magnetic fields

    DOEpatents

    Kotter, Dale K [Shelley, ID; Spencer, David F [Idaho Falls, ID; Roybal, Lyle G [Idaho Falls, ID; Rohrbaugh, David T [Idaho Falls, ID

    2010-09-14

    Circuitry for detecting magnetic fields includes a first magnetoresistive sensor and a second magnetoresistive sensor configured to form a gradiometer. The circuitry includes a digital signal processor and a first feedback loop coupled between the first magnetoresistive sensor and the digital signal processor. A second feedback loop which is discrete from the first feedback loop is coupled between the second magnetoresistive sensor and the digital signal processor.

  8. PROGRESS ON GENERIC PHASE-FIELD METHOD DEVELOPMENT

    SciTech Connect

    Biner, Bullent; Tonks, Michael; Millett, Paul C.; Li, Yulan; Hu, Shenyang Y.; Gao, Fei; Sun, Xin; Martinez, E.; Anderson, D.

    2012-09-26

    In this report, we summarize our current collaborative efforts, involving three national laboratories: Idaho National Laboratory (INL), Pacific Northwest National Laboratory (PNNL) and Los Alamos National Laboratory (LANL), to develop a computational framework for homogeneous and heterogeneous nucleation mechanisms in the generic phase-field model. During the studies, the Fe-Cr system was chosen as a model system due to its simplicity and the availability of reliable thermodynamic and kinetic data, as well as the range of applications of low-chromium ferritic steels in nuclear reactors. For homogeneous nucleation, the relevant parameters determined from atomistic studies were used directly to determine the energy functional and parameters in the phase-field model. Interfacial energy, critical nucleus size, nucleation rate, and coarsening kinetics were systematically examined in two- and three-dimensional models. For the heterogeneous nucleation mechanism, we studied the nucleation and growth behavior of chromium precipitates due to the presence of dislocations. The results demonstrate that both nucleation schemes can be introduced into a phase-field modeling algorithm with the desired accuracy and computational efficiency.

  9. Ferroelectric memory element based on thin film field effect transistor

    NASA Astrophysics Data System (ADS)

    Poghosyan, A. R.; Aghamalyan, N. R.; Elbakyan, E. Y.; Guo, R.; Hovsepyan, R. K.

    2013-09-01

    We report the preparation and investigation of ferroelectric field effect transistors (FETs) using ZnO:Li films with high field mobility of the charge carriers, serving simultaneously as the FET channel and as the ferroelectric active element. The possibility of using a ferroelectric FET based on the ZnO:Li films in the ZnO:Li/LaB6 heterostructure as a bi-stable memory element for information recording is shown. The proposed ferroelectric memory structure does not manifest fatigue after multiple readouts of once-recorded information.

  10. Generation of arbitrary vector fields based on a pair of orthogonal elliptically polarized base vectors.

    PubMed

    Xu, Danfeng; Gu, Bing; Rui, Guanghao; Zhan, Qiwen; Cui, Yiping

    2016-02-22

    We present an arbitrary vector field with hybrid polarization based on the combination of a pair of orthogonal elliptically polarized base vectors on the Poincaré sphere. It is shown that the created vector field depends only on the latitude angle 2χ and is independent of the longitude angle 2ψ on the Poincaré sphere. By adjusting the latitude angle 2χ, which is related to two identical waveplates in a common-path interferometric arrangement, one can obtain an arbitrary type of vector field. Experimentally, we demonstrate the generation of such vector fields and confirm the distribution of the state of polarization by measurement of the Stokes parameters. In addition, we investigate the tight focusing properties of these vector fields. It is found that the additional degree of freedom 2χ provided by the arbitrary vector field with hybrid polarization allows one to control the spatial structure of polarization and to engineer the focusing field. PMID:26907066
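
    The state-of-polarization maps referred to above are conventionally characterized by the Stokes parameters computed from the two field components; a hedged sketch follows. The sign convention for S3 is one common choice and may differ from the authors'.

    ```python
    # Stokes parameters from the Jones-vector components (Ex, Ey) of a sampled field.
    import numpy as np

    def stokes(Ex, Ey):
        S0 = np.abs(Ex)**2 + np.abs(Ey)**2
        S1 = np.abs(Ex)**2 - np.abs(Ey)**2
        S2 = 2.0 * np.real(Ex * np.conj(Ey))
        S3 = -2.0 * np.imag(Ex * np.conj(Ey))   # sign convention varies in the literature
        return S0, S1, S2, S3

    # Example: a circularly polarized component sits at a pole of the Poincaré sphere.
    print(stokes(1 / np.sqrt(2), 1j / np.sqrt(2)))
    ```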

  11. Improved Field Emission Algorithms for Modeling Field Emission Devices Using a Conformal Finite-Difference Time-Domain Particle-in-Cell Method

    NASA Astrophysics Data System (ADS)

    Lin, M. C.; Loverich, J.; Stoltz, P. H.; Nieter, C.

    2013-10-01

    This work introduces a conformal finite difference time domain (CFDTD) particle-in-cell (PIC) method with an improved field emission algorithm to accurately and efficiently study field emission devices. The CFDTD method is based on the Dey-Mittra algorithm or cut-cell algorithm, as implemented in the Vorpal code. For the field emission algorithm, we employ the elliptic function v(y) found by Forbes and a new fitting function t(y)² for the Fowler-Nordheim (FN) equation. With these improved correction factors, field emission of electrons from a cathode surface is much closer to the prediction of the exact FN formula derived by Murphy and Good. This work was supported in part by both the U.S. Department of Defense under Grant No. FA9451-07-C-0025 and the U.S. Department of Energy under Grant No. DE-SC0004436.
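
    For orientation, a hedged sketch of Murphy-Good / Fowler-Nordheim emission using Forbes's simple approximation for v and the corresponding t is given below; the constants are approximate and the specific fitting function t(y)² adopted by the authors is not known here, so this is illustrative only.

    ```python
    # Approximate Murphy-Good / Fowler-Nordheim current density with Forbes-style
    # barrier-shape corrections; constants are rounded, assumed values.
    import numpy as np

    A_FN = 1.541434e-6     # A eV V^-2      (first FN constant, approx.)
    B_FN = 6.830890e9      # eV^-3/2 V m^-1 (second FN constant, approx.)
    C_S  = 1.439965e-9     # eV^2 V^-1 m    (Schottky constant e^3/(4 pi eps0), approx.)

    def fn_current_density(F, phi=4.5):
        """Field F in V/m, work function phi in eV; returns J in A/m^2."""
        f = C_S * F / phi**2                          # scaled barrier field, f = y^2
        v = 1.0 - f + (f / 6.0) * np.log(f)           # Forbes's approximation for v(f)
        t = 1.0 + f / 9.0 - (f / 18.0) * np.log(f)    # corresponding approximation for t(f)
        return (A_FN / (phi * t**2)) * F**2 * np.exp(-v * B_FN * phi**1.5 / F)

    print(fn_current_density(5e9))   # GV/m-scale fields give appreciable emission
    ```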

  12. Application of Gaussian expansion method to nuclear mean-field calculations with deformation

    NASA Astrophysics Data System (ADS)

    Nakada, H.

    2008-08-01

    We extensively develop a method of implementing mean-field calculations for deformed nuclei using the Gaussian expansion method (GEM). This GEM algorithm has the following advantages: (i) it can efficiently describe the energy-dependent asymptotics of the wave functions at large r, (ii) it is applicable to various effective interactions including those with finite ranges, and (iii) the basis parameters are insensitive to the nuclide, so that many nuclei over a wide mass range can be handled with a single set of bases. Superposing the spherical GEM bases with feasible truncation of the orbital angular momentum, we obtain deformed single-particle wave functions to reasonable precision. We apply the new algorithm to Hartree-Fock and Hartree-Fock-Bogolyubov calculations of Mg nuclei with the Gogny interaction, which suggest neck structure of a deformed neutron halo in 40Mg.

  13. Radio frequency electromagnetic field compliance assessment of multi-band and MIMO equipped radio base stations.

    PubMed

    Thors, Björn; Thielens, Arno; Fridén, Jonas; Colombi, Davide; Törnevik, Christer; Vermeeren, Günter; Martens, Luc; Joseph, Wout

    2014-05-01

    In this paper, different methods for practical numerical radio frequency exposure compliance assessments of radio base station products were investigated. Both multi-band base station antennas and antennas designed for multiple input multiple output (MIMO) transmission schemes were considered. For the multi-band case, various standardized assessment methods were evaluated in terms of resulting compliance distance with respect to the reference levels and basic restrictions of the International Commission on Non-Ionizing Radiation Protection. Both single frequency and multiple frequency (cumulative) compliance distances were determined using numerical simulations for a mobile communication base station antenna transmitting in four frequency bands between 800 and 2600 MHz. The assessments were conducted in terms of root-mean-squared electromagnetic fields, whole-body averaged specific absorption rate (SAR) and peak 10 g averaged SAR. In general, assessments based on peak field strengths were found to be less computationally intensive, but lead to larger compliance distances than spatial averaging of electromagnetic fields used in combination with localized SAR assessments. For adult exposure, the results indicated that even shorter compliance distances were obtained by using assessments based on localized and whole-body SAR. Numerical simulations, using base station products employing MIMO transmission schemes, were performed as well and were in agreement with reference measurements. The applicability of various field combination methods for correlated exposure was investigated, and best estimate methods were proposed. Our results showed that field combining methods generally considered as conservative could be used to efficiently assess compliance boundary dimensions of single- and dual-polarized multicolumn base station antennas with only minor increases in compliance distances. PMID:24523232
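
    Cumulative (multiple-frequency) compliance against reference levels is commonly assessed by summing squared field ratios over the transmit bands; the sketch below shows that standard summation with illustrative limit values, not the exact ICNIRP table or the paper's combination rules for correlated exposure.

    ```python
    # Multi-band exposure ratio: compliant when the sum of squared field ratios <= 1
    # (applicable to the thermally based limits; band limits here are illustrative).
    import numpy as np

    def cumulative_exposure_ratio(E_fields, E_limits):
        """E_fields, E_limits: RMS electric field and reference level per band (V/m)."""
        E_fields = np.asarray(E_fields, dtype=float)
        E_limits = np.asarray(E_limits, dtype=float)
        return np.sum((E_fields / E_limits) ** 2)

    # Example: four transmit bands evaluated at one assessment point.
    ratio = cumulative_exposure_ratio([12.0, 8.0, 15.0, 10.0], [39.0, 41.0, 58.0, 61.0])
    print(ratio, "compliant" if ratio <= 1.0 else "non-compliant")
    ```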

  14. Evaluation of base widening methods on flexible pavements in Wyoming

    NASA Astrophysics Data System (ADS)

    Offei, Edward

    The surface transportation system forms the biggest infrastructure investment in the United States of which the roadway pavement is an integral part. Maintaining the roadways can involve rehabilitation in the form of widening, which requires a longitudinal joint between the existing and new pavement sections to accommodate wider travel lanes, additional travel lanes or modification to shoulder widths. Several methods are utilized for the joint construction between the existing and new pavement sections including vertical, tapered and stepped joints. The objective of this research is to develop a formal recommendation for the preferred joint construction method that provides the best base layer support for the state of Wyoming. Field collection of Dynamic Cone Penetrometer (DCP) data, Falling Weight Deflectometer (FWD) data, base samples for gradation and moisture content were conducted on 28 existing and 4 newly constructed pavement widening projects. A survey of constructability issues on widening projects as experienced by WYDOT engineers was undertaken. Costs of each joint type were compared as well. Results of the analyses indicate that the tapered joint type showed relatively better pavement strength compared to the vertical joint type and could be the preferred joint construction method. The tapered joint type also showed significant base material savings than the vertical joint type. The vertical joint has an 18% increase in cost compared to the tapered joint. This research is intended to provide information and/or recommendation to state policy makers as to which of the base widening joint techniques (vertical, tapered, stepped) for flexible pavement provides better pavement performance.

  15. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on the 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed for the knowledge base of classification. The classification result was checked using the 1:4,000,000 Chinese Geomorphological Map as reference, and the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.

  16. An adaptive lattice Boltzmann method for predicting turbulent wake fields in wind parks

    NASA Astrophysics Data System (ADS)

    Deiterding, Ralf; Wood, Stephen L.

    2014-11-01

    Wind turbines create large-scale wake structures that can affect downstream turbines considerably. Numerical simulation of the turbulent flow field is a viable approach in order to obtain a better understanding of these interactions and to optimize the turbine placement in wind parks. Yet, the development of effective computational methods for predictive wind farm simulation is challenging. As an alternative approach to presently employed vortex and actuator-based methods, we are currently developing a parallel adaptive lattice Boltzmann method for large eddy simulation of turbulent weakly compressible flows with embedded moving structures that shows good potential for effective wind turbine wake prediction. Since the method is formulated in an Eulerian frame of reference and on a dynamically changing nonuniform Cartesian grid, even moving boundaries can be considered rather easily. The presentation will describe all crucial components of the numerical method and discuss first verification computations. Among other configurations, simulations of the wake fields created by multiple Vesta V27 turbines will be shown.

  17. Risk Prediction Modeling of Sequencing Data Using a Forward Random Field Method

    PubMed Central

    Wen, Yalu; He, Zihuai; Li, Ming; Lu, Qing

    2016-01-01

    With the advance in high-throughput sequencing technology, it is feasible to investigate the role of common and rare variants in disease risk prediction. While the new technology holds great promise to improve disease prediction, the massive amount of data and low frequency of rare variants pose great analytical challenges on risk prediction modeling. In this paper, we develop a forward random field method (FRF) for risk prediction modeling using sequencing data. In FRF, subjects’ phenotypes are treated as stochastic realizations of a random field on a genetic space formed by subjects’ genotypes, and an individual’s phenotype can be predicted by adjacent subjects with similar genotypes. The FRF method allows for multiple similarity measures and candidate genes in the model, and adaptively chooses the optimal similarity measure and disease-associated genes to reflect the underlying disease model. It also avoids the specification of the threshold of rare variants and allows for different directions and magnitudes of genetic effects. Through simulations, we demonstrate the FRF method attains higher or comparable accuracy over commonly used support vector machine based methods under various disease models. We further illustrate the FRF method with an application to the sequencing data obtained from the Dallas Heart Study. PMID:26892725

  18. Phase field method to optimize dielectric devices for electromagnetic wave propagation

    SciTech Connect

    Takezawa, Akihiro; Kitamura, Mitsuru

    2014-01-15

    We discuss a phase field method for shape optimization in the context of electromagnetic wave propagation. The proposed method has the same functional capabilities as the level set method for shape optimization. The first advantage of the method is the simplicity of computation, since extra operations such as re-initialization of functions are not required. The second is compatibility with the topology optimization method due to the similar domain representation and the sensitivity analysis. Structural shapes are represented by the phase field function defined in the design domain, and this function is optimized by solving a time-dependent reaction diffusion equation. The artificial double-well potential function used in the equation is derived from sensitivity analysis. We study four types of 2D or 2.5D (axisymmetric) optimization problems. Two are the classical problems of photonic crystal design based on the Bloch theory and photonic crystal wave guide design, and two are the recent topics of designing dielectric left-handed metamaterials and dielectric ring resonators.
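
    The kind of time-dependent reaction-diffusion update described above can be sketched as an explicit Allen-Cahn-type step on a design field phi in [0, 1]; the double-well weighting and the coupling to an externally supplied electromagnetic sensitivity are illustrative assumptions rather than the authors' exact functional.

    ```python
    # One explicit reaction-diffusion (Allen-Cahn type) update of a phase-field design
    # variable; the sensitivity array dJ/dphi is assumed to come from a separate
    # electromagnetic solve. Periodic boundaries are used for simplicity.
    import numpy as np

    def laplacian(phi, h):
        return (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi) / h**2

    def phase_field_step(phi, sensitivity, h=1.0, dt=1e-3, kappa=1.0, w=10.0, lam=1.0):
        # derivative of the double-well potential w * phi^2 (1 - phi)^2
        dwell = w * 2.0 * phi * (1.0 - phi) * (1.0 - 2.0 * phi)
        phi = phi + dt * (kappa * laplacian(phi, h) - dwell - lam * sensitivity)
        return np.clip(phi, 0.0, 1.0)
    ```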

  19. METHOD DEVELOPMENT, EVALUATION, REFINEMENT, AND ANALYSIS FOR FIELD STUDIES

    EPA Science Inventory

    Manufacturers routinely introduce new pesticides into the marketplace and discontinue manufacturing older pesticides that may be more toxic to humans. Analytical methods and environmental data are needed for current use residential pesticides (e.g., pyrethrins, synthetic pyrethr...

  20. EVALUATION OF SAMPLING AND FIELD FILTRATION METHODS FOR THE ANALYSIS OF TRACE METALS IN GROUND WATER

    EPA Science Inventory

    Selected groundwater sampling and filtering methods were evaluated to determine their effects on field parameters and trace metal concentrations in samples collected under several types of field conditions. he study focused on sampling in conventional standpipe monitoring wells u...

  1. Field sampling method for quantifying odorants in humid environments.

    PubMed

    Trabue, Steven L; Scoggin, Kenwood D; Li, Hong; Burns, Robert; Xin, Hongwei

    2008-05-15

    Most air quality studies in agricultural environments use thermal desorption analysis for quantifying semivolatile organic compounds (SVOCs) associated with odor. The objective of this study was to develop a robust sampling technique for measuring SVOCs in humid environments. Test atmospheres were generated at ambient temperatures (23 +/- 1.5 degrees C) and 25, 50, and 80% relative humidity (RH). Sorbent materials used included Tenax, graphitized carbon, and carbon molecular sieve (CMS). Sorbent tubes were challenged with 2, 4, 8, 12, and 24 L of air at various RHs. Sorbent tubes with CMS material performed poorly at both 50 and 80% RH due to excessive sorption of water. Heating of CMS tubes during sampling or dry-purging of CMS tubes post-sampling effectively reduced water sorption, with heating of tubes being preferred due to the higher recovery and reproducibility. Tenax tubes had breakthrough of the more volatile compounds and tended to form artifacts with increasing volumes of air sampled. Graphitized carbon sorbent tubes containing Carbopack X and Carbopack C performed best, with quantitative recovery of all compounds at all RHs and sampling volumes tested. The graphitized carbon tubes were taken to the field for further testing. Field samples taken from inside swine feeding operations showed that butanoic acid, 4-methylphenol, 4-ethylphenol, indole, and 3-methylindole were the compounds detected most often above their odor threshold values. Field samples taken from a poultry facility demonstrated that butanoic acid, 3-methylbutanoic acid, and 4-methylphenol were the compounds detected most often above their odor threshold values. Keywords: relative humidity, CAFO, VOC, SVOC, thermal desorption, swine, poultry, air quality, odor. PMID:18546717

  2. Method for moving high tonnage biomass from field to furnace

    SciTech Connect

    Clayton, J.E.; Eiland, B.R.

    1984-08-01

    Sugar cane harvesting and transport equipment can be used for harvesting frozen sugar cane or alternate crops for biomass after the harvest for sugar production is completed. Use of a rotary rake for preparing windrows of sugarcane residue and use of a forage harvester and a round baler for recovering the material is satisfactory. The field residue contained high amounts of sulfur and ash, which may cause pollution problems. Material baled with a round baler had twice the bulk density of forage-chopped material. A hay crumper reduced the drying time of residue windrows so that the residue could be collected before the ratoon crop sprouted, thus avoiding crop damage by the equipment.

  3. Determination of optical field generated by a microlens using digital holographic method

    NASA Astrophysics Data System (ADS)

    Kozacki, T.; Józwik, M.; Jóźwicki, R.

    2009-09-01

    In the paper, application of the digital holographic method to full-field characterization of the beam generated by microlenses is considered. To this end, a laboratory setup was designed based on Mach-Zehnder interferometry with an additional reference channel. The beam generated by a microlens was imaged by an afocal system, and intensity distributions or interferograms (holograms) were registered by a CCD camera. Digital holography using a single image allows us to determine microlens parameters, i.e., focal length, aberrations, and shape. The optimum conditions for determining the surface shape of a microlens using the holographic method have been found. We compare the obtained results with geometrical and interferometric measurements. We show the advantages of digital holography for microlens shape determination (improved accuracy) and for characterizing aberrations and focal length (ease of characterization). Through optimum refocusing, digital holography gives a more precise shape. The paper is accompanied by computer simulations and experimental measurement data for the geometrical, interferometric, and holographic methods.

  4. Field Analysis of Microbial Contamination Using Three Molecular Methods in Parallel

    NASA Technical Reports Server (NTRS)

    Morris, H.; Stimpson, E.; Schenk, A.; Kish, A.; Damon, M.; Monaco, L.; Wainwright, N.; Steele, A.

    2010-01-01

    Advanced technologies capable of detecting microbial contamination remain an integral tool for the next stage of space agency proposed exploration missions. To maintain a clean, operational spacecraft environment with minimal potential for forward contamination, such technology is a necessity; in particular, the ability to analyze samples near the point of collection and in real time, both for conducting biological scientific experiments and for performing routine monitoring operations. Multiple molecular methods for detecting microbial contamination are available, but many are either too large or not validated for use on spacecraft. Two methods, the adenosine triphosphate (ATP) and Limulus Amebocyte Lysate (LAL) assays, have been approved by the NASA Planetary Protection Office for the assessment of microbial contamination on spacecraft surfaces. We present the first parallel field analysis of microbial contamination pre- and post-cleaning using these two methods as well as universal primer-based polymerase chain reaction (PCR).

  5. Statistical validation of event predictors: A comparative study based on the field of seizure prediction

    SciTech Connect

    Feldwisch-Drentrup, Hinnerk; Schulze-Bonhage, Andreas; Timmer, Jens; Schelter, Bjoern

    2011-06-15

    The prediction of events is of substantial interest in many research areas. To evaluate the performance of prediction methods, the statistical validation of these methods is of utmost importance. Here, we compare an analytical validation method to numerical approaches that are based on Monte Carlo simulations. The comparison is performed in the field of the prediction of epileptic seizures. In contrast to the analytical validation method, we found that for numerical validation methods insufficient but realistic sample sizes can lead to invalid high rates of false positive conclusions. Hence we outline necessary preconditions for sound statistical tests on above chance predictions.

  6. Experiments of multichannel least-square methods for sound field reproduction inside aircraft mock-up: Objective evaluations

    NASA Astrophysics Data System (ADS)

    Gauthier, P.-A.; Camier, C.; Lebel, F.-A.; Pasco, Y.; Berry, A.; Langlois, J.; Verron, C.; Guastavino, C.

    2016-08-01

    Sound environment reproduction of various flight conditions in aircraft mock-ups is a valuable tool for the study, prediction, demonstration and jury testing of interior aircraft sound quality and annoyance. To provide a faithful reproduced sound environment, time, frequency and spatial characteristics should be preserved. Physical sound field reproduction methods for spatial sound reproduction are mandatory to immerse the listener's body in the proper sound fields so that localization cues are recreated at the listener's ears. Vehicle mock-ups pose specific problems for sound field reproduction. Confined spaces, needs for invisible sound sources and very specific acoustical environment make the use of open-loop sound field reproduction technologies such as wave field synthesis (based on free-field models of monopole sources) not ideal. In this paper, experiments in an aircraft mock-up with multichannel least-square methods and equalization are reported. The novelty is the actual implementation of sound field reproduction with 3180 transfer paths and trim panel reproduction sources in laboratory conditions with a synthetic target sound field. The paper presents objective evaluations of reproduced sound fields using various metrics as well as sound field extrapolation and sound field characterization.
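
    At a single frequency, the multichannel least-squares problem reduces to a regularized linear solve for the complex source strengths that reproduce a target pressure at the control microphones; a hedged sketch with random data standing in for the measured transfer paths follows (the dimensions and the regularization rule are assumptions, not the paper's exact setup).

    ```python
    # Regularized frequency-domain least-squares solution q = (H^H H + beta I)^-1 H^H p.
    import numpy as np

    def mlsm_sources(H, p_target, beta=1e-2):
        """H: (n_mics, n_sources) complex FRF matrix at one frequency; p_target: (n_mics,)."""
        n_src = H.shape[1]
        A = H.conj().T @ H + beta * np.eye(n_src)
        return np.linalg.solve(A, H.conj().T @ p_target)

    # Example with random data standing in for measured transfer paths.
    rng = np.random.default_rng(0)
    H = rng.normal(size=(64, 16)) + 1j * rng.normal(size=(64, 16))
    p = rng.normal(size=64) + 1j * rng.normal(size=64)
    q = mlsm_sources(H, p)
    print(np.linalg.norm(H @ q - p) / np.linalg.norm(p))   # normalized reproduction error
    ```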

  7. A field method for making a quantitative estimate of altered tuff in sandstone

    USGS Publications Warehouse

    Cadigan, R.A.

    1954-01-01

    The use of benzidine to identify altered tuff in sandstone is practical for field or field laboratory studies associated with stratigraphic correlations, mineral deposit investigations, or paleogeographic interpretations. The method is based on the ability of a saturated benzidine (C12H12N2) solution to produce a blue stain on montmorillonite-bearing tuff grains. The method is substantiated by the results of microscopic, X-ray spectrometer, and spectrographic tests which lead to the conclusions that: (1) the benzidine stain test differentiates grains of different composition, (2) the white or gray grains which are stained a uniform blue color are fragments of altered tuff, and (3) white or gray grains which stain in only a few small spots are probably silicified tuff. An amount of sand grains taken from a hand specimen or an outcrop that will be held by a penny is spread out on a nonabsorbent white surface and soaked with benzidine for 5 minutes. The approximate number of blue grains and the average grain size are used in a chart to determine a reference number which measures relative order of abundance. The chart, based on a volume relationship, corrects for the variation in the number of grains in the sample as the grain size varies. Practical use of the method depends on a knowledge of several precautionary measures as well as an understanding of the limitations of benzidine staining tests.

  8. Adjusting thresholds of satellite-based convective initiation interest fields based on the cloud environment

    NASA Astrophysics Data System (ADS)

    Jewett, Christopher P.; Mecikalski, John R.

    2013-11-01

    The Time-Space Exchangeability (TSE) concept states that similar characteristics of a given property are closely related statistically for objects or features within close proximity. In this exercise, the objects considered are growing cumulus clouds, and the data sets to be considered in a statistical sense are geostationary satellite infrared (IR) fields that help describe cloud growth rates, cloud top heights, and whether cloud tops contain significant amounts of frozen hydrometeors. The TSE concept is applied to alter otherwise static thresholds of IR fields of interest used within a satellite-based convective initiation (CI) nowcasting algorithm. The convective environment in which the clouds develop dictates growth rate and precipitation processes, and cumuli growing within similar mesoscale environments should have similar growth characteristics. Using environmental information provided by regional statistics of the interest fields, the thresholds are examined for adjustment toward improving the accuracy of 0-1 h CI nowcasts. Growing cumulus clouds are observed within a CI algorithm through IR fields for many thousands of cumulus cloud objects, from which statistics are generated on mesoscales. Initial results show a reduction in the number of false alarms of ~50%, yet at the cost of eliminating approximately 20% of the correct CI forecasts. For comparison, static thresholds (i.e., with the same threshold values applied across the entire satellite domain) within the CI algorithm often produce a relatively high probability of detection, with false alarms being a significant problem. In addition to increased algorithm performance, a benefit of using a method like TSE is that a variety of unknown variables that influence cumulus cloud growth can be accounted for without need for explicit near-cloud observations that can be difficult to obtain.
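
    One plausible way to regionalize an otherwise static threshold, in the spirit of using regional statistics of the interest fields, is to take a local percentile around each pixel; the sketch below is a generic illustration of that idea, not the TSE algorithm's actual rule (the window size and percentile are assumptions).

    ```python
    # Generic regional-statistics thresholding: replace a domain-wide threshold with a
    # percentile of the interest field computed over a window around each pixel.
    import numpy as np

    def regional_thresholds(interest, box=51, q=90.0):
        """interest: 2-D array of an IR interest field; returns a per-pixel threshold map."""
        ny, nx = interest.shape
        half = box // 2
        thresh = np.empty_like(interest, dtype=float)
        for j in range(ny):
            j0, j1 = max(0, j - half), min(ny, j + half + 1)
            for i in range(nx):
                i0, i1 = max(0, i - half), min(nx, i + half + 1)
                thresh[j, i] = np.percentile(interest[j0:j1, i0:i1], q)
        return thresh
    ```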

  9. Extracting flat-field images from scene-based image sequences using phase correlation

    NASA Astrophysics Data System (ADS)

    Caron, James N.; Montes, Marcos J.; Obermark, Jerome L.

    2016-06-01

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of the focal plane array electronics and for unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface. The flat-field image is normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence and estimate the static scene. The scene is removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
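
    A hedged sketch of the overall pipeline, sub-pixel phase-correlation registration, scene estimation from the aligned stack, and averaging of the residual gain patterns, is shown below; the median scene estimator and the normalization are illustrative choices rather than the authors' exact steps.

    ```python
    # Scene-based flat-field extraction: register frames, estimate the static scene,
    # remove it from each raw frame, and average the residual gain patterns.
    import numpy as np
    from scipy.ndimage import shift as nd_shift
    from skimage.registration import phase_cross_correlation

    def extract_flat_field(frames):
        """frames: sequence of 2-D arrays of the same scene with relative displacements."""
        ref = frames[0]
        shifts = [phase_cross_correlation(ref, f, upsample_factor=10)[0] for f in frames]
        aligned = [nd_shift(f, s, order=1) for f, s in zip(frames, shifts)]
        scene = np.median(aligned, axis=0)                   # static scene estimate
        flats = [f / np.clip(nd_shift(scene, -s, order=1), 1e-6, None)
                 for f, s in zip(frames, shifts)]            # per-frame flat-field estimates
        flat = np.mean(flats, axis=0)
        return flat / flat.mean()                            # normalized flat-field image
    ```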

  10. Spectral methods for modeling supersonic chemically reacting flow fields

    NASA Technical Reports Server (NTRS)

    Drummond, J. P.; Hussaini, M. Y.; Zang, T. A.

    1985-01-01

    A numerical algorithm was developed for solving the equations describing chemically reacting supersonic flows. The algorithm employs a two-stage Runge-Kutta method for integrating the equations in time and a Chebyshev spectral method for integrating the equations in space. The accuracy and efficiency of the technique were assessed by comparison with an existing implicit finite-difference procedure for modeling chemically reacting flows. The comparison showed that the procedure presented yields equivalent accuracy on much coarser grids as compared to the finite-difference procedure with resultant significant gains in computational efficiency.

  11. A Web-Based Information System for Field Data Management

    NASA Astrophysics Data System (ADS)

    Weng, Y. H.; Sun, F. S.

    2014-12-01

    A web-based field data management system has been designed and developed to allow field geologists to store, organize, manage, and share field data online. System requirements were analyzed and clearly defined first regarding what data are to be stored, who the potential users are, and what system functions are needed in order to deliver the right data in the right way to the right user. A 3-tiered architecture was adopted to create this secure, scalable system, which consists of a web browser at the front end, a database at the back end, and a functional logic server in the middle. Specifically, HTML, CSS, and JavaScript were used to implement the user interface in the front-end tier, the Apache web server runs PHP scripts, and a MySQL server is used for the back-end database. The system accepts various types of field information, including image, audio, video, numeric, and text. It allows users to select data and populate them on either Google Earth or Google Maps for the examination of spatial relations. It also makes the sharing of field data easy by converting them into XML format that is both human-readable and machine-readable, and thus ready for reuse.

  12. Resonant Magnetic Field Sensors Based On MEMS Technology.

    PubMed

    Herrera-May, Agustín L; Aguilera-Cortés, Luz A; García-Ramírez, Pedro J; Manjarrez, Elías

    2009-01-01

    Microelectromechanical systems (MEMS) technology allows the integration of magnetic field sensors with electronic components, offering important advantages such as small size, light weight, minimal power consumption, low cost, improved sensitivity, and high resolution. We present a discussion and review of resonant magnetic field sensors based on MEMS technology. In practice, these sensors exploit the Lorentz force in order to detect external magnetic fields through the displacement of resonant structures, which is measured with optical, capacitive, or piezoresistive sensing techniques. Among these, optical sensing offers immunity to electromagnetic interference (EMI) and reduces the complexity of the read-out electronics, while piezoresistive sensing requires only a simple fabrication process and standard packaging. A description of the operation mechanisms, advantages, and drawbacks of each sensor is given. MEMS magnetic field sensors are a potential alternative for numerous applications in the automotive, military, medical, telecommunications, oceanographic, space, and environmental science fields. In addition, future markets will need several sensors developed on a single chip for measuring different parameters such as magnetic field, pressure, temperature, and acceleration. PMID:22408480
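
    The Lorentz-force principle behind these sensors lends itself to a back-of-the-envelope estimate: a current-carrying beam of length L in a field B experiences F = B * I * L, and driving it at resonance amplifies the static deflection F / k by the quality factor Q. The sketch below uses hypothetical parameter values, not numbers from the review, simply to show the order of magnitude involved.

    ```python
    def lorentz_resonant_displacement(B, I, L, k, Q):
        """Displacement of a resonantly driven beam: Q * (B * I * L) / k.
        Simple lumped-element estimate, not a model taken from the review."""
        force = B * I * L          # Lorentz force on the beam [N]
        return Q * force / k       # displacement at resonance [m]

    # Hypothetical numbers for a silicon microbeam sensor.
    x = lorentz_resonant_displacement(B=50e-6,   # Earth-like field, ~50 microtesla
                                      I=1e-3,    # 1 mA excitation current
                                      L=500e-6,  # 500 micrometre beam
                                      k=1.0,     # effective stiffness, 1 N/m
                                      Q=1000)    # quality factor in vacuum
    print(f"resonant displacement ~ {x:.1e} m")  # about 2.5e-08 m for these values
    ```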

  13. Resonant Magnetic Field Sensors Based On MEMS Technology

    PubMed Central

    Herrera-May, Agustín L.; Aguilera-Cortés, Luz A.; García-Ramírez, Pedro J.; Manjarrez, Elías

    2009-01-01

    Microelectromechanical systems (MEMS) technology allows the integration of magnetic field sensors with electronic components, offering important advantages such as small size, light weight, minimal power consumption, low cost, improved sensitivity, and high resolution. We present a discussion and review of resonant magnetic field sensors based on MEMS technology. In practice, these sensors exploit the Lorentz force in order to detect external magnetic fields through the displacement of resonant structures, which is measured with optical, capacitive, or piezoresistive sensing techniques. Among these, optical sensing offers immunity to electromagnetic interference (EMI) and reduces the complexity of the read-out electronics, while piezoresistive sensing requires only a simple fabrication process and standard packaging. A description of the operation mechanisms, advantages, and drawbacks of each sensor is given. MEMS magnetic field sensors are a potential alternative for numerous applications in the automotive, military, medical, telecommunications, oceanographic, space, and environmental science fields. In addition, future markets will need several sensors developed on a single chip for measuring different parameters such as magnetic field, pressure, temperature, and acceleration. PMID:22408480

  14. Geodynamics branch data base for main magnetic field analysis

    NASA Technical Reports Server (NTRS)

    Langel, Robert A.; Baldwin, R. T.

    1991-01-01

    The data sets used in geomagnetic field modeling at GSFC are described. Data are measured and obtained from a variety of sources. For clarity, data sets from different sources are categorized and processed separately. The data base is composed of magnetic observatory data, surface data, high-quality aeromagnetic data, high-quality total-intensity marine data, satellite data, and repeat data. These individual data categories are described in detail in a series of notebooks in the Geodynamics Branch, GSFC. This catalog reviews the original data sets, the processing history, and the final data sets available for each category of the data base, and is to be used as a reference manual for the notebooks. Each data type used in geomagnetic field modeling has a different level of complexity, requiring specialized processing routines for satellite and observatory data and two general routines for processing aeromagnetic, marine, land-survey, and repeat data.

  15. Student-Centred Inquiry "as" Curriculum as a Model for Field-Based Teacher Education

    ERIC Educational Resources Information Center

    Oliver, Kimberly L.; Oesterreich, Heather A.

    2013-01-01

    This research project focuses on teacher education in a field-based methods course. We were interested in understanding what "could be" when we worked with pre-service teachers in a high school physical education class to assist them in the process of learning to listen and respond to their students in ways that might better facilitate…

  16. Providing Culturally Responsive Teaching in Field-Based and Student Teaching Experiences: A Case Study

    ERIC Educational Resources Information Center

    Kea, Cathy D.; Trent, Stanley C.

    2013-01-01

    This mixed-design study chronicles the yearlong outcomes of 27 undergraduate preservice teacher candidates' ability to design and deliver culturally responsive lesson plans during field-based experience lesson observations and in student teaching settings after receiving instruction in a special education methods course. While components of…

  17. Inquiry-Based Field Experiences: Transforming Early Childhood Teacher Candidates' Effectiveness

    ERIC Educational Resources Information Center

    Linn, Vicki; Jacobs, Gera

    2015-01-01

    Contemporary teacher preparation programs are challenged to provide transformational learning experiences that enhance the development of highly effective teachers. This mixed-methods case study explored the influence of inquiry-based field experiences as a pedagogical approach to teacher preparation. Four teacher candidates participated in a…

  18. The multiconfiguration time-dependent Hartree-Fock method based on a closed-shell-type multiconfiguration self-consistent field reference state and its application to the LiH molecule

    NASA Astrophysics Data System (ADS)

    Sasagane, Kotoku; Mori, Kazuhide; Ichihara, Akira; Itoh, Reikichi

    1990-03-01

    The linear response calculations in the multiconfiguration time-dependent Hartree-Fock (MCTDHF) approximation with a closed-shell-type MCSCF state as the time-independent reference state are discussed. The application to the LiH molecule with a small basis set ([4s2p1d/2s1p]) shows the validity of our MCTDHF approach for the singlet ground state. Our MCSCF correlation energy is 97% of the total (full CI) correlation energy, and the MCTDHF excitation energies are in good agreement with the Δ full CI excitation energies. The Born-Oppenheimer potential energy curves for the lowest three singlet states of LiH and the corresponding vibrational level spacings, transition moments, oscillator strengths, and frequency-dependent dipole polarizabilities are reported. All of these results suggest the potential of our MCTDHF method for future work with larger basis sets. One such basis set ([9s8p4d/8s7p1d]) is used for reference only at the single-configuration TDHF level, and the resulting near-Hartree-Fock polarizability and Thomas-Reiche-Kuhn sum rule are very promising.
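
    The Thomas-Reiche-Kuhn sum rule mentioned at the end provides a simple diagnostic that can be sketched directly: in atomic units the length-gauge oscillator strength is f_n = (2/3) * omega_n * |<0|mu|n>|^2, and for an exact theory in a complete basis the f_n sum to the number of electrons (4 for LiH). The excitation energies and transition dipoles below are placeholders for illustration, not values from the paper.

    ```python
    def oscillator_strength(excitation_energy, transition_dipole):
        """Length-gauge oscillator strength in atomic units:
        f_n = (2/3) * omega_n * |<0|mu|n>|^2."""
        return (2.0 / 3.0) * excitation_energy * transition_dipole ** 2

    # Hypothetical excitation energies [hartree] and transition dipoles [a.u.]
    # for a truncated set of states (illustrative only).
    states = [(0.13, 2.0), (0.25, 1.1), (0.40, 0.6)]
    trk_sum = sum(oscillator_strength(w, mu) for w, mu in states)

    # The deviation of this partial sum from the electron count (4 for LiH) is one
    # measure of how complete the basis set and excitation manifold are.
    print(f"TRK sum over included states: {trk_sum:.3f} (exact limit: 4)")
    ```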

  19. A method for estimating tokamak poloidal field coil currents which incorporates engineering constraints

    SciTech Connect

    Stewart, W.A.

    1990-05-01

    This thesis describes the development of a design tool for the poloidal field magnet system of a tokamak. Specifically, an existing program for determining the poloidal field coil currents has been modified to: support the general case of asymmetric equilibria and coil sets, determine the coil currents subject to constraints on the maximum values of those currents, and determine the coil currents subject to limits on the forces those coils may carry. The equations representing the current limits and coil force limits are derived and an algorithm based on Newton's method is developed to determine a set of coil currents which satisfies those limits. The resulting program allows the designer to quickly determine whether or not a given coil set is capable of supporting a given equilibrium. 25 refs.
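
    The core task described here, finding coil currents that reproduce a target equilibrium while respecting engineering limits, can be illustrated with a much simpler constrained fit than the Newton-based scheme the thesis derives. In the sketch below the linear response matrix, flux targets, and per-coil current limits are all hypothetical, and a projected-gradient loop stands in for the thesis's algorithm; force limits are omitted.

    ```python
    import numpy as np

    def fit_coil_currents(G, psi_target, i_max, iterations=200):
        """Least-squares fit G @ I ~= psi_target with per-coil limits |I_k| <= i_max[k].
        Projected-gradient stand-in for the constrained Newton scheme in the thesis."""
        I = np.zeros(G.shape[1])
        step = 1.0 / np.linalg.norm(G, 2) ** 2   # conservative step from the spectral norm
        for _ in range(iterations):
            residual = G @ I - psi_target
            I -= step * (G.T @ residual)         # descend on the flux misfit
            I = np.clip(I, -i_max, i_max)        # enforce the current limits
        return I

    # Hypothetical 6-coil system: G maps coil currents to flux at 8 control points.
    rng = np.random.default_rng(1)
    G = rng.normal(size=(8, 6))
    psi_target = rng.normal(size=8)
    i_max = np.full(6, 1.5)
    print(fit_coil_currents(G, psi_target, i_max))
    ```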

  20. Field evaluation of personal sampling methods for multiple bioaerosols.

    PubMed

    Wang, Chi-Hsun; Chen, Bean T; Han, Bor-Cheng; Liu, Andrew Chi-Yeu; Hung, Po-Chen; Chen, Chih-Yong; Chao, Hsing Jasmine

    2015-01-01

    Ambient bioaerosols are ubiquitous in the daily environment and can affect health in various ways. However, few studies have been conducted to comprehensively evaluate personal bioaerosol exposure in occupational and indoor environments because of the complex composition of bioaerosols and the lack of standardized sampling/analysis methods. We conducted a study to determine the most efficient collection/analysis method for the personal exposure assessment of multiple bioaerosols. The sampling efficiencies of three filters and four samplers were compared. According to our results, polycarbonate (PC) filters had the highest relative efficiency, particularly for bacteria. Side-by-side sampling was conducted to evaluate the three filter samplers (with PC filters) and the NIOSH Personal Bioaerosol Cyclone Sampler. According to the results, the Button Aerosol Sampler and the IOM Inhalable Dust Sampler had the highest relative efficiencies for fungi and bacteria, followed by the NIOSH sampler. Personal sampling was performed in a pig farm to assess occupational bioaerosol exposure and to evaluate the sampling/analysis methods. The Button and IOM samplers yielded a similar performance for personal bioaerosol sampling at the pig farm. However, the Button sampler is more likely to be clogged at high airborne dust concentrations because of its higher flow rate (4 L/min). Therefore, the IOM sampler is a more appropriate choice for performing personal sampling in environments with high dust levels. In summary, the Button and IOM samplers with PC filters are efficient sampling/analysis methods for the personal exposure assessment of multiple bioaerosols. PMID:25799419