Science.gov

Sample records for analysis technique based

  1. Rapid Disaster Analysis based on SAR Techniques

    NASA Astrophysics Data System (ADS)

    Yang, C. H.; Soergel, U.

    2015-03-01

    Due to its day-and-night and all-weather capability, spaceborne SAR is a valuable means for rapid mapping during and after a disaster. In this paper, three change detection techniques based on SAR data are discussed: (1) initial coarse change detection, (2) flooded area detection, and (3) linear-feature change detection. The 2011 Tohoku earthquake and tsunami, whose combined events provide a complex test case, serves as the case study. In (1), pre- and post-event TerraSAR-X images are accurately coregistered to produce a false-color image. Such an image provides a quick, rough overview of potential changes, which is useful for initial decision making and identifies areas worth analysing further in depth. In (2), the post-event TerraSAR-X image is used to extract the flooded area by morphological approaches. In (3), we are interested in detecting changes of linear shape as indicators of modified man-made objects. Morphological approaches, e.g. thresholding, simply extract pixel-based changes in the difference image; in this manner, however, many irrelevant changes (e.g., farming activity, speckle) are highlighted as well. In this study, Curvelet filtering is applied to the difference image both to suppress false alarms and to enhance change signals of linear form (e.g. buildings) in settlements. Afterwards, thresholding is conducted to extract linear-shaped changed areas. All three techniques are designed to be simple and applicable in timely disaster analysis, and all are validated by comparison with the change map produced by the Center for Satellite Based Crisis Information, DLR.
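The coarse change detection of step (1) can be sketched as a log-ratio difference image followed by a threshold. The 3x3 intensities and the threshold below are invented for illustration; the paper's false-color and Curvelet workflow is considerably richer.

```python
# Minimal sketch: pre/post SAR intensities -> log-ratio change measure -> binary mask.
# The log-ratio is a common choice because SAR speckle is multiplicative.
import math

pre  = [[100, 100, 100],
        [100, 100, 100],
        [100, 100, 100]]
post = [[100, 100, 100],
        [100,  10, 100],   # one strongly darkened pixel (e.g. flooding)
        [100, 100, 100]]

def log_ratio(a, b, eps=1e-6):
    """Absolute log-ratio change measure per pixel."""
    return [[abs(math.log((bij + eps) / (aij + eps)))
             for aij, bij in zip(ra, rb)] for ra, rb in zip(a, b)]

def threshold(img, t):
    """Binary change mask: 1 where the change measure exceeds t."""
    return [[1 if v > t else 0 for v in row] for row in img]

mask = threshold(log_ratio(pre, post), t=1.0)
print(mask)  # only the darkened pixel is flagged as changed
```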

  2. New Flutter Analysis Technique for CFD-based Unsteady Aeroelasticity

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Jutte, Christine V.

    2009-01-01

    This paper presents a flutter analysis technique for the transonic flight regime. The technique uses an iterative approach to determine the critical dynamic pressure for a given Mach number. Unlike other CFD-based flutter analysis methods, each iteration solves for the critical dynamic pressure and uses this value in subsequent iterations until it converges, which reduces the number of iterations required. To improve accuracy, the technique employs a known structural model, leaving only the aerodynamic model as the unknown. The aerodynamic model is estimated using unsteady aeroelastic CFD analysis combined with a parameter estimation routine. The technique executes as follows. The known structural model is represented as a finite element model, and modal analysis determines its frequencies and mode shapes. At a given Mach number and dynamic pressure, the unsteady CFD analysis is performed. The output time history of the surface pressure is converted to a nodal aerodynamic force vector, and the forces are normalized by the given dynamic pressure. Multi-input multi-output parameter estimation software (ERA) estimates the aerodynamic model from the time histories of nodal aerodynamic forces and structural deformations. The critical dynamic pressure is then calculated using the known structural model and the estimated aerodynamic model, and this output serves as the dynamic pressure in subsequent iterations until the critical dynamic pressure is determined. The technique is demonstrated on the Aerostructures Test Wing-2 model at NASA's Dryden Flight Research Center.
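The iteration described above is essentially a fixed-point loop on the dynamic pressure. In the sketch below, the expensive CFD/ERA step is replaced by a hypothetical stand-in contraction, so only the convergence logic reflects the paper.

```python
# Fixed-point iteration on the critical dynamic pressure. solve_critical_q is a
# toy stand-in for the real step (CFD run at the current q, ERA fit, eigenvalue
# solve); here it is an arbitrary contraction with fixed point q = 20.
def solve_critical_q(q_guess):
    return 0.5 * q_guess + 10.0

def iterate_flutter(q0, tol=1e-6, max_iter=100):
    q = q0
    for i in range(max_iter):
        q_new = solve_critical_q(q)   # "analysis" at the current guess
        if abs(q_new - q) < tol:      # stop when the value converges
            return q_new, i + 1
        q = q_new
    return q, max_iter

q_crit, n = iterate_flutter(q0=5.0)
print(round(q_crit, 3))  # converges to the fixed point 20.0
```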

  3. FDTD technique based crosstalk analysis of bundled SWCNT interconnects

    NASA Astrophysics Data System (ADS)

    Singh Duksh, Yograj; Kaushik, Brajesh Kumar; Agarwal, Rajendra P.

    2015-05-01

    An equivalent electrical circuit model of bundled single-walled carbon nanotube (SWCNT) based distributed RLC interconnects is employed for the crosstalk analysis. Accurate time-domain analysis of crosstalk effects in VLSI interconnects has emerged as an essential design criterion. This paper presents a brief description of the numerical finite-difference time-domain (FDTD) technique, which is intended for estimating voltages and currents on coupled transmission lines. For the FDTD implementation, the stability of the proposed model is strictly governed by the Courant condition. The method is used to estimate crosstalk-induced propagation delay and peak voltage in lossy RLC interconnects. Both functional and dynamic crosstalk effects are analyzed in the coupled transmission line, and the effect of line resistance on crosstalk-induced delay and peak voltage under both crosstalk modes is also evaluated. The FDTD analysis and SPICE simulations are carried out at the 32 nm technology node for global interconnects. The analytical results obtained using the FDTD technique are in good agreement with the SPICE simulation results: the crosstalk-induced delay, propagation delay, and peak voltage show average errors of 4.9%, 3.4%, and 0.46%, respectively, in comparison to SPICE.
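As a sketch of what an FDTD update of this kind looks like, the snippet below advances the telegrapher's equations for a single lossy line with a leapfrog scheme and a time step chosen inside the Courant limit. The per-unit-length R, L, C values and the step excitation are illustrative only; the paper's coupled-line SWCNT-bundle model is far richer.

```python
# 1-D FDTD leapfrog update for a lossy RLC line (telegrapher's equations),
# illustrating the Courant stability limit mentioned in the abstract.
L_, C_, R_ = 2.5e-7, 1.0e-10, 5.0      # H/m, F/m, ohm/m (illustrative numbers)
nz, dz = 50, 1e-3                      # 50 cells of 1 mm
v_p = 1.0 / (L_ * C_) ** 0.5           # propagation speed on the line
dt = 0.9 * dz / v_p                    # Courant condition: dt <= dz / v_p

V = [0.0] * (nz + 1)                   # node voltages
I = [0.0] * nz                         # branch currents (staggered grid)

for n in range(60):
    V[0] = 1.0                          # hard step source at the near end
    for k in range(nz):                 # half-step: update currents
        I[k] += -(dt / (L_ * dz)) * (V[k + 1] - V[k]) - (dt * R_ / L_) * I[k]
    for k in range(1, nz):              # half-step: update interior voltages
        V[k] += -(dt / (C_ * dz)) * (I[k] - I[k - 1])

print(round(V[10], 2))  # node 10 sits behind the wavefront, near 1 V
```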

  4. Image analysis techniques associated with automatic data base generation.

    NASA Technical Reports Server (NTRS)

    Bond, A. D.; Ramapriyan, H. K.; Atkinson, R. J.; Hodges, B. C.; Thomas, D. T.

    1973-01-01

    This paper considers some basic problems relating to automatic data base generation from imagery, the primary emphasis being on fast and efficient automatic extraction of relevant pictorial information. Among the techniques discussed are recursive implementations of some particular types of filters which are much faster than FFT implementations, a 'sequential similarity detection' technique of implementing matched filters, and sequential linear classification of multispectral imagery. Several applications of the above techniques are presented including enhancement of underwater, aerial and radiographic imagery, detection and reconstruction of particular types of features in images, automatic picture registration and classification of multiband aerial photographs to generate thematic land use maps.

  5. Improved mesh based photon sampling techniques for neutron activation analysis

    SciTech Connect

    Relson, E.; Wilson, P. P. H.; Biondo, E. D.

    2013-07-01

    The design of fusion power systems requires analysis of neutron activation of large, complex volumes, and the resulting particles emitted from these volumes. Structured mesh-based discretization of these problems allows for improved modeling in these activation analysis problems. Finer discretization of these problems results in large computational costs, which drives the investigation of more efficient methods. Within an ad hoc subroutine of the Monte Carlo transport code MCNP, we implement sampling of voxels and photon energies for volumetric sources using the alias method. The alias method enables efficient sampling of a discrete probability distribution, and operates in O(1) time, whereas the simpler direct discrete method requires O(log(n)) time. By using the alias method, voxel sampling becomes a viable alternative to sampling space with the O(1) approach of uniformly sampling the problem volume. Additionally, with voxel sampling it is straightforward to introduce biasing of volumetric sources, and we implement this biasing of voxels as an additional variance reduction technique that can be applied. We verify our implementation and compare the alias method, with and without biasing, to direct discrete sampling of voxels, and to uniform sampling. We study the behavior of source biasing in a second set of tests and find trends between improvements and source shape, material, and material density. Overall, however, the magnitude of improvements from source biasing appears to be limited. Future work will benefit from the implementation of efficient voxel sampling, particularly with conformal unstructured meshes where the uniform sampling approach cannot be applied.
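Vose's alias method is the standard way to realize the O(1) discrete sampling described here. A self-contained sketch with toy voxel weights (the MCNP subroutine itself is not reproduced):

```python
# Vose's alias method: O(n) table build, O(1) per draw. Each "column" i holds
# probability prob[i] of returning i and otherwise returns alias[i].
import random

def build_alias(weights):
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]   # scale so the mean is 1
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                          # s's leftover mass goes to l
        prob[l] -= 1.0 - prob[s]
        (small if prob[l] < 1.0 else large).append(l)
    return prob, alias

def sample(prob, alias, rng=random):
    i = rng.randrange(len(prob))              # pick a column uniformly
    return i if rng.random() < prob[i] else alias[i]

rng = random.Random(0)
prob, alias = build_alias([0.1, 0.2, 0.7])    # toy voxel source strengths
counts = [0, 0, 0]
for _ in range(100_000):
    counts[sample(prob, alias, rng)] += 1
print([c / 100_000 for c in counts])  # close to [0.1, 0.2, 0.7]
```

Source biasing then amounts to building the table from biased weights and attaching the ratio of true to biased weight to each sampled particle.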

  6. BCC skin cancer diagnosis based on texture analysis techniques

    NASA Astrophysics Data System (ADS)

    Chuang, Shao-Hui; Sun, Xiaoyan; Chang, Wen-Yu; Chen, Gwo-Shing; Huang, Adam; Li, Jiang; McKenzie, Frederic D.

    2011-03-01

    In this paper, we present a texture analysis based method for diagnosing Basal Cell Carcinoma (BCC) skin cancer using optical images taken from suspicious skin regions. We first extracted Run Length Matrix and Haralick texture features from the images and used a feature selection algorithm to identify the most effective feature set for the diagnosis. We then utilized a Multi-Layer Perceptron (MLP) classifier to classify the images as BCC or normal cases. Experiments showed that detecting BCC cancer from optical images is feasible. The best sensitivity and specificity achieved on our data set were 94% and 95%, respectively.
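As a concrete illustration of the kind of feature involved, one Haralick-style statistic, contrast, can be computed from a gray-level co-occurrence matrix (GLCM) in a few lines. The 4-level toy images are invented; a real pipeline extracts many such features before selection and MLP classification.

```python
# GLCM for horizontal neighbor pairs, plus the Haralick "contrast" statistic:
# sum over (i, j) of (i - j)^2 * P(i, j). Rough textures score higher.
def glcm_contrast(img, levels):
    glcm = [[0] * levels for _ in range(levels)]
    for row in img:
        for a, b in zip(row, row[1:]):       # horizontal neighbor pairs
            glcm[a][b] += 1
    total = sum(map(sum, glcm))
    return sum(((i - j) ** 2) * glcm[i][j] / total
               for i in range(levels) for j in range(levels))

smooth = [[0, 0, 1, 1], [0, 0, 1, 1]]        # gradual gray-level changes
rough  = [[0, 3, 0, 3], [3, 0, 3, 0]]        # alternating gray levels
print(glcm_contrast(smooth, 4) < glcm_contrast(rough, 4))  # True
```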

  7. Ionospheric Plasma Drift Analysis Technique Based On Ray Tracing

    NASA Astrophysics Data System (ADS)

    Ari, Gizem; Toker, Cenk

    2016-07-01

    Ionospheric drift measurements provide important information about the variability in the ionosphere, which can be used to quantify ionospheric disturbances caused by natural phenomena such as solar, geomagnetic, gravitational and seismic activities. One prominent approach relies on instrument-based measurements, e.g. using an ionosonde. Ionosonde drift estimation depends on measuring the Doppler shift of the received signal, whose main cause is the change in the length of the signal's propagation path between transmitter and receiver. Unfortunately, ionosondes are expensive devices whose installation and maintenance require special care. Furthermore, the ionosonde network over the world, or even over Europe, is not dense enough to produce a global or continental drift map. To overcome these difficulties, we propose a technique for ionospheric drift estimation based on ray tracing. First, a two-dimensional TEC map is constructed with the IONOLAB-MAP tool, which spatially interpolates the VTEC estimates obtained from the EUREF CORS network. Next, a three-dimensional electron density profile is generated by feeding the TEC estimates into the IRI-2015 model. The result is a close-to-real electron density profile in which ray tracing can be performed. These profiles can be constructed periodically, with a period as short as 30 seconds. By processing two consecutive snapshots together and calculating the propagation paths, we estimate the drift at any coordinate of interest. We test our technique by comparing the results with drift measurements taken by the DPS ionosonde at Pruhonice, Czech Republic. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR14/001 projects.

  8. GC-Based Techniques for Breath Analysis: Current Status, Challenges, and Prospects.

    PubMed

    Xu, Mingjun; Tang, Zhentao; Duan, Yixiang; Liu, Yong

    2016-07-01

    Breath analysis is a noninvasive diagnostic method that profiles a person's physical state through the volatile organic compounds in the breath. It has huge potential in the field of disease diagnosis. To open opportunities for practical applications, various GC-based techniques have been investigated for on-line breath analysis, since GC is the preferred technique for separating gas mixtures. This article reviews the development of breath analysis and of GC-based techniques in basic breath research, covering sampling methods, preconcentration methods, conventional GC-based techniques, and newly developed GC techniques for breath analysis. The combination of GC with newly developed detection techniques takes advantage of the virtues of each. In addition, portable GC and micro GC are poised to become field GC-based techniques in breath analysis. Challenges faced by GC-based techniques for breath analysis are discussed candidly. Effective cooperation among experts from different fields is urgently needed to promote the development of breath analysis. PMID:26529095

  9. A new variance-based global sensitivity analysis technique

    NASA Astrophysics Data System (ADS)

    Wei, Pengfei; Lu, Zhenzhou; Song, Jingwen

    2013-11-01

    A new set of variance-based sensitivity indices, called W-indices, is proposed. Similar to Sobol's indices, both main and total effect indices are defined. The W-main effect indices measure the average reduction of model output variance when the ranges of a set of inputs are reduced, and the total effect indices quantify the average residual variance when the ranges of the remaining inputs are reduced. Geometrical interpretations show that the W-indices gather the full information of the variance ratio function, whereas Sobol's indices only reflect the marginal information. Then the double-loop-repeated-set Monte Carlo (MC) procedure (denoted DLRS MC), the double-loop-single-set MC procedure (denoted DLSS MC) and the model emulation procedure are introduced for estimating the W-indices. It is shown that the DLRS MC procedure is suitable for computing all the W-indices despite its high computational cost. The DLSS MC procedure is computationally efficient; however, it is only applicable to computing low-order indices. Model emulation is able to estimate all the W-indices with low computational cost as long as the model behavior is correctly captured by the emulator. The Ishigami function, a modified Sobol's function and two engineering models are utilized for comparing the W- and Sobol's indices and verifying the efficiency and convergence of the three numerical methods. Results show that, even for an additive model, the W-total effect index of one input may be significantly larger than its W-main effect index. This indicates that there may exist interaction effects among the inputs of an additive model when their distribution ranges are reduced.
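The double-loop idea behind such indices can be illustrated on a toy additive model: the outer loop places the reduced input range, the inner loop estimates the conditional output variance, and the index is the average variance reduction. This is a simplified, W-style quantity for illustration, not the paper's exact definition.

```python
# Toy double-loop MC estimate of an "average variance reduction when the range
# of x1 is shrunk to half width" index, for y = x1 + x2 with U(0,1) inputs.
import random, statistics

def model(x1, x2):
    return x1 + x2

rng = random.Random(1)
N_outer, N_inner, width = 200, 500, 0.5

# full-range output variance (analytically 2/12 for this model)
full = [model(rng.random(), rng.random()) for _ in range(20_000)]
var_full = statistics.pvariance(full)

reduced_vars = []
for _ in range(N_outer):                     # outer loop: where the reduced range sits
    lo = rng.random() * (1.0 - width)
    ys = [model(lo + width * rng.random(), rng.random())
          for _ in range(N_inner)]           # inner loop: MC over the reduced range
    reduced_vars.append(statistics.pvariance(ys))

w_main = 1.0 - statistics.mean(reduced_vars) / var_full
print(round(w_main, 2))  # analytic value for this toy model: 1 - 1.25/2 = 0.375
```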

  10. Surface analysis of cast aluminum by means of artificial vision and AI-based techniques

    NASA Astrophysics Data System (ADS)

    Platero, Carlos; Fernandez, Carlos; Campoy, Pascual; Aracil, Rafael

    1996-02-01

    An architecture for surface analysis of continuously cast aluminum strip is described. The data volume to be processed has driven the development of a highly parallel architecture for high-speed image processing. A lighting system especially suited to enhancing defects on metallic surfaces has been developed. Special effort has been put into the design of the defect detection algorithm to reach two main objectives: robustness and low processing time. These goals have been achieved by combining local analysis with data interpretation based on syntactical analysis, which has allowed us to avoid morphological analysis. Defect classification is accomplished by means of rule-based systems along with data-based classifiers. The use of clustering techniques to perform partitions of R^n with self-organizing maps (SOM), and of divergence methods to reduce the feature vector applied to the data-based classifiers, is discussed. The combination of these techniques within a hybrid system leads to near 100% classification success.

  11. Simultaneous and integrated neutron-based techniques for material analysis of a metallic ancient flute

    NASA Astrophysics Data System (ADS)

    Festa, G.; Pietropaolo, A.; Grazzi, F.; Sutton, L. F.; Scherillo, A.; Bognetti, L.; Bini, A.; Barzagli, E.; Schooneveld, E.; Andreani, C.

    2013-09-01

    A metallic 19th century flute was studied by means of integrated and simultaneous neutron-based techniques: neutron diffraction, neutron radiative capture analysis and neutron radiography. This experiment follows benchmark measurements devoted to assessing the effectiveness of a multitask beamline concept for neutron-based investigation of materials. The aim of this study is to show the potential of the approach, using multiple and integrated neutron-based techniques, for musical instruments. Such samples, in the broad scenario of cultural heritage, represent an exciting research field and an interesting link between disciplines such as nuclear physics, metallurgy and acoustics.

  12. Knowledge-base for the new human reliability analysis method, A Technique for Human Error Analysis (ATHEANA)

    SciTech Connect

    Cooper, S.E.; Wreathall, J.; Thompson, C.M.; Drouin, M.; Bley, D.C.

    1996-10-01

    This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, "A Technique for Human Error Analysis" (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.

  13. A damage identification technique based on embedded sensitivity analysis and optimization processes

    NASA Astrophysics Data System (ADS)

    Yang, Chulho; Adams, Douglas E.

    2014-07-01

    A vibration-based structural damage identification method using embedded sensitivity functions and optimization algorithms is discussed in this work. The embedded sensitivity technique requires only measured or calculated frequency response functions to obtain the sensitivity of system responses to each component parameter, so it can be used effectively in the damage identification process. Optimization techniques are used to minimize the difference between the measured frequency response functions of the damaged structure and those calculated from the baseline system using embedded sensitivity functions. The amount of damage can be quantified directly in engineering units as changes in stiffness, damping, or mass. Various factors in the optimization process and structural dynamics are studied to enhance the performance and robustness of the damage identification process. This study shows that the proposed technique can identify damage with an estimation error below 2 percent.
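The core idea, finding the parameter change that best reconciles baseline-predicted FRFs with measured (damaged) FRFs, can be shown on a single-degree-of-freedom surrogate. The 1-DOF FRF and the coarse grid search below stand in for the embedded-sensitivity machinery and real optimizers.

```python
# Quantify damage as the stiffness reduction dk that minimizes the FRF mismatch
# between the baseline model (known m, c, k0) and a "measured" damaged FRF.
def frf(m, c, k, omegas):
    """Receptance FRF of a 1-DOF system: H(w) = 1 / (k - m w^2 + i c w)."""
    return [1.0 / complex(k - m * w * w, c * w) for w in omegas]

m, c, k0 = 1.0, 0.5, 100.0               # baseline model (known)
omegas = [w * 0.5 for w in range(1, 40)] # frequency grid around resonance
H_damaged = frf(m, c, k0 - 7.0, omegas)  # synthetic "measurement": 7% stiffness loss

best_dk, best_err = None, float("inf")
for i in range(0, 201):                  # coarse 1-D search over stiffness loss
    dk = i / 10
    err = sum(abs(a - b) ** 2
              for a, b in zip(frf(m, c, k0 - dk, omegas), H_damaged))
    if err < best_err:
        best_dk, best_err = dk, err
print(best_dk)  # recovers the 7.0 stiffness reduction, in engineering units
```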

  14. COHN analysis: Body composition measurements based on the associated particle imaging and prompt-gamma neutron activation analysis techniques

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The measurement of the body's carbon (C), oxygen (O), hydrogen (H), and nitrogen (N) content can be used to calculate the relative amounts of fat, protein, and water. A system based on prompt-gamma neutron activation analysis (PGNAA), coupled with the associated particle imaging (API) technique, is...

  15. Sex-based differences in lifting technique under increasing load conditions: A principal component analysis.

    PubMed

    Sheppard, P S; Stevenson, J M; Graham, R B

    2016-05-01

    The objective of the present study was to determine if there is a sex-based difference in lifting technique across increasing-load conditions. Eleven male and 14 female participants (n = 25) with no previous history of low back disorder participated in the study. Participants completed freestyle, symmetric lifts of a box with handles from the floor to a table positioned at 50% of their height for five trials under three load conditions (10%, 20%, and 30% of their individual maximum isometric back strength). Joint kinematic data for the ankle, knee, hip, and lumbar and thoracic spine were collected using a two-camera Optotrak motion capture system. Joint angles were calculated using a three-dimensional Euler rotation sequence. Principal component analysis (PCA) and single component reconstruction were applied to assess differences in lifting technique across the entire waveforms. Thirty-two PCs were retained from the five joints and three axes in accordance with the 90% trace criterion. Repeated-measures ANOVA with a mixed design revealed no significant effect of sex for any of the PCs. This is contrary to previous research that used discrete points on the lifting curve to analyze sex-based differences, but agrees with more recent research using more complex analysis techniques. There was a significant effect of load on lifting technique for five PCs of the lower limb (PC1 of ankle flexion, knee flexion, and knee adduction, as well as PC2 and PC3 of hip flexion) (p < 0.005). However, there was no significant effect of load on the thoracic and lumbar spine. It was concluded that when load is standardized to individual back strength characteristics, males and females adopted a similar lifting technique. In addition, as load increased male and female participants changed their lifting technique in a similar manner. PMID:26851478
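Waveform PCA of the kind used above treats each trial's full joint-angle time series as one observation and extracts the dominant coordinated patterns. A minimal power-iteration sketch on synthetic 4-sample "waveforms" (the study itself retained 32 PCs across 15 waveforms):

```python
# PCA by power iteration: center the trials, form the covariance matrix, and
# iterate to the top eigenvector (PC1). Data are synthetic lift "waveforms".
def pca_first_component(rows, iters=200):
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]
    cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):               # power iteration -> top eigenvector
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

trials = [[0.0, 1.0, 2.0, 1.0],          # each row: one trial's joint angle
          [0.1, 1.2, 2.3, 1.1],          # at four time samples
          [0.0, 0.9, 1.8, 0.9],
          [0.2, 1.3, 2.5, 1.2]]
pc1 = pca_first_component(trials)
print([round(abs(x), 2) for x in pc1])   # largest loading where trials vary most
```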

  16. Validity of content-based techniques to distinguish true and fabricated statements: A meta-analysis.

    PubMed

    Oberlader, Verena A; Naefgen, Christoph; Koppehele-Gossel, Judith; Quinten, Laura; Banse, Rainer; Schmidt, Alexander F

    2016-08-01

    Within the scope of judicial decisions, approaches to distinguish between true and fabricated statements have been of particular importance since ancient times. Although methods focusing on "prototypical" deceptive behavior (e.g., psychophysiological phenomena, nonverbal cues) have largely been rejected with regard to validity, content-based techniques constitute a promising approach and are well established within the applied forensic context. The basic idea of this approach is that experience-based and nonexperience-based statements differ in their content-related quality. In order to test the validity of the most prominent content-based techniques, criteria-based content analysis (CBCA) and reality monitoring (RM), we conducted a comprehensive meta-analysis on English- and German-language studies. Based on a variety of decision criteria, 56 studies were included revealing an overall effect size of g = 1.03 (95% confidence interval [0.78, 1.27], Q = 420.06, p < .001, I2 = 92.48%, N = 3,429). There was no significant difference in the effectiveness of CBCA and RM. Additionally, we investigated a number of moderator variables, such as characteristics of participants, statements, and judgment procedures, as well as general study characteristics. Results showed that the application of all CBCA criteria outperformed any incomplete CBCA criteria set. Furthermore, statement classification based on discriminant functions revealed higher discrimination rates than decisions based on sum scores. Finally, unpublished studies showed higher effect sizes than studies published in peer-reviewed journals. All results are discussed in terms of their significance for future research (e.g., developing standardized decision rules) and practical application (e.g., user training, applying complete criteria set). PMID:27149290

  17. Advanced SuperDARN meteor wind observations based on raw time series analysis technique

    NASA Astrophysics Data System (ADS)

    Tsutsumi, M.; Yukimatu, A. S.; Holdsworth, D. A.; Lester, M.

    2009-04-01

    The meteor observation technique based on SuperDARN raw time series analysis has been upgraded. This technique extracts meteor information as a byproduct and does not degrade the quality of normal SuperDARN operations. In the upgrade, the radar operating system (RADOPS) was modified so that it can oversample every 15 km during normal operations, which have a range resolution of 45 km. As an alternative method for better range determination, a frequency domain interferometry (FDI) capability was also coded in RADOPS, where the operating radio frequency can be changed every pulse sequence. Test observations were conducted using the CUTLASS Iceland East and Finland radars, where oversampling and FDI operation (two frequencies separated by 3 kHz) were carried out simultaneously. Meteor ranges obtained by the two ranging techniques agreed very well. The ranges were then combined with the interferometer data to estimate meteor echo reflection heights. Although some ambiguities remained in the arrival angles of echoes because of the rather long antenna spacing of the interferometers, the heights and arrival angles of most meteor echoes were determined more accurately than previously. Wind velocities were successfully estimated over the height range of 84 to 110 km. The FDI technique developed here can be further applied to common SuperDARN operations, and study of fine horizontal structures of F region plasma irregularities is expected in the future.
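The appeal of a small frequency separation in FDI is that the phase difference between the two echoes encodes range with a large unambiguous interval: for a two-way path, delta_phi = 4*pi*delta_f*R/c, unambiguous out to c/(2*delta_f) = 50 km for 3 kHz. A quick numerical check with a synthetic range (the processing details of the radar are not modeled here):

```python
# Two-frequency FDI ranging: recover a target range from the inter-frequency
# echo phase difference, R = c * delta_phi / (4 * pi * delta_f).
import math

c = 299_792_458.0
delta_f = 3_000.0                  # Hz, as in the CUTLASS test described above
R_true = 37_000.0                  # m, a synthetic echo range within one ambiguity

# forward model: two-way path 2R gives phase difference 2*pi*delta_f*(2R/c)
delta_phi = (4 * math.pi * delta_f * R_true / c) % (2 * math.pi)

# inversion, valid while R < c / (2*delta_f) = 50 km
R_est = c * delta_phi / (4 * math.pi * delta_f)
print(round(R_est))  # 37000
```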

  18. Tissue deformation analysis using a laser based digital image correlation technique.

    PubMed

    Kerl, Johannes; Parittotokkaporn, Tassanai; Frasson, Luca; Oldfield, Matthew; y Baena, Ferdinando Rodriguez; Beyrau, Frank

    2012-02-01

    A laser-based technique for planar, time-resolved measurement of tissue deformation in transparent biomedical materials with high spatial resolution is developed. The approach is based on monitoring the displacement of micrometer-sized particles previously embedded in a semi-transparent sample as it is deformed by some form of external loading. The particles are illuminated in a plane inside the tissue material by a thin laser light sheet, and the pattern is continuously recorded by a digital camera. Image analysis yields the locally and temporally resolved sample deformation in the measurement plane without the need for any in situ measurement hardware. The applicability of the method for determining tissue deformation and material strain during the insertion of a needle probe into a soft material sample is demonstrated by means of an in vitro trial on gelatin. PMID:22301185
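The correlation step at the heart of digital image correlation can be reduced to one dimension: find the shift that maximizes the cross-correlation between a reference particle pattern and the deformed frame. The pattern and its 3-pixel shift below are synthetic; real DIC correlates 2-D subsets with subpixel interpolation.

```python
# Find the displacement of a particle intensity pattern between two frames by
# maximizing the (unnormalized) cross-correlation over candidate shifts.
ref = [0, 0, 1, 3, 7, 3, 1, 0, 0, 0, 0, 0]
cur = [0, 0, 0, 0, 0, 1, 3, 7, 3, 1, 0, 0]   # same pattern shifted by +3 px

def best_shift(a, b, max_shift=5):
    def score(s):                             # correlation at shift s
        return sum(a[i] * b[i + s] for i in range(len(a) - s))
    return max(range(max_shift + 1), key=score)

shift = best_shift(ref, cur)
print(shift)  # 3
```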

  19. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. To confront the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet most existing HS feature extraction methods adopt acoustic or time-frequency features that bear a poor relationship to diagnostic information, restricting the performance of further interpretation and analysis. Tackling this bottleneck, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological occurrences of heart valves. Adopting the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five types of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
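The Shannon-envelope step can be sketched compactly: the normalized Shannon energy -x^2*log(x^2) emphasizes medium-intensity components (such as murmurs) over both low-level noise and sharp peaks, and a short moving average smooths it into an envelope. The windowed sine burst below is a synthetic stand-in for one heart sound; the DWT stage of the paper is omitted.

```python
# Shannon energy envelope of a synthetic burst signal.
import math

signal = [math.sin(2 * math.pi * 5 * t / 100) * math.exp(-((t - 50) ** 2) / 200)
          for t in range(100)]           # a burst centered at sample 50

x_max = max(abs(v) for v in signal)
norm = [v / x_max for v in signal]       # normalize to [-1, 1]
energy = [-v * v * math.log(v * v) if v != 0 else 0.0 for v in norm]

win = 5                                  # moving-average smoothing half-width
envelope = [sum(energy[max(0, i - win):i + win + 1]) /
            len(energy[max(0, i - win):i + win + 1]) for i in range(len(energy))]

print(envelope[50] > 10 * envelope[5])   # True: energy concentrates at the burst
```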

  20. Applications of synchrotron-based micro-imaging techniques for the analysis of Cultural Heritage materials

    SciTech Connect

    Cotte, Marine; Chilida, Javier; Walter, Philippe; Taniguchi, Yoko; Susini, Jean

    2009-01-29

    The analysis of Cultural Heritage objects is often technically challenging. When analyzing micro-fragments, the amount of matter is usually very tiny, requiring sensitive techniques. These samples, in particular painting fragments, may present multi-layered structures with layer thicknesses of ~10 μm. This favors micro-imaging techniques with good lateral resolution (about one micrometer) that enable the discriminative study of each layer. Besides, the samples are usually chemically very complex, as they are made of mineral and organic matter, amorphous and crystallized phases, and major and minor elements. Accordingly, a multi-modal approach is generally essential to resolve the chemical complexity of such hybrid materials. Different examples are given to illustrate the various possibilities of synchrotron-based micro-imaging techniques, such as micro X-ray diffraction, micro X-ray fluorescence, micro X-ray absorption spectroscopy and micro FTIR spectroscopy. The focus is on paintings, but the whole range of museum objects (from soft matter like paper or wood to hard matter like metal and glass) is also considered.

  1. Subdivision based isogeometric analysis technique for electric field integral equations for simply connected structures

    NASA Astrophysics Data System (ADS)

    Li, Jie; Dault, Daniel; Liu, Beibei; Tong, Yiying; Shanker, Balasubramaniam

    2016-08-01

    The analysis of electromagnetic scattering has long been performed on a discrete representation of the geometry. This representation is typically continuous but not differentiable. The need to define physical quantities on this geometric representation has led to the development of sets of basis functions that must satisfy constraints at the boundaries of the elements/tessellations (viz., continuity of normal or tangential components across element boundaries). For electromagnetics, these result in curl- or div-conforming basis sets. The geometric representation used for analysis is in stark contrast with that used for design, wherein the surface representation is higher-order differentiable. Using this representation for both the geometry and the physics on the geometry has several advantages, as elucidated in Hughes et al. (2005) [7]. Until now, the bulk of the literature on isogeometric methods has been limited to solid mechanics, with some effort to create NURBS-based basis functions for electromagnetic analysis. In this paper, we present the first complete isogeometric solution methodology for the electric field integral equation as applied to simply connected structures. The paper proceeds systematically through surface representation using subdivision, the definition of vector basis functions on this surface, and fidelity in the solution of the integral equation. We also present techniques to stabilize the solution at low frequencies and to impose a Calderón preconditioner. The several results presented serve to validate the proposed approach as well as demonstrate some of its capabilities.

  2. A novel mesh processing based technique for 3D plant analysis

    PubMed Central

    2012-01-01

    Background In recent years, imaging based, automated, non-invasive, and non-destructive high-throughput plant phenotyping platforms have become popular tools for plant biology, underpinning the field of plant phenomics. Such platforms acquire and record large amounts of raw data that must be accurately and robustly calibrated, reconstructed, and analysed, requiring the development of sophisticated image understanding and quantification algorithms. The raw data can be processed in different ways, and the past few years have seen the emergence of two main approaches: 2D image processing and 3D mesh processing algorithms. Direct image quantification methods (usually 2D) dominate the current literature due to their comparative simplicity. However, 3D mesh analysis provides tremendous potential to accurately estimate specific morphological features cross-sectionally and monitor them over time. Results In this paper, we present a novel 3D mesh based technique developed for temporal high-throughput plant phenomics and perform initial tests for the analysis of Gossypium hirsutum vegetative growth. Based on plant meshes previously reconstructed from multi-view images, the methodology involves several stages, including morphological mesh segmentation, phenotypic parameter estimation, and tracking of plant organs over time. The initial study focuses on presenting and validating the accuracy of the methodology on dicotyledons such as cotton, but we believe the approach will be more broadly applicable. This study involved applying our technique to a set of six Gossypium hirsutum (cotton) plants studied over four time-points. Manual measurements, performed for each plant at every time-point, were used to assess the accuracy of our pipeline and quantify the error on the morphological parameters estimated. Conclusion By directly comparing our automated mesh based quantitative data with manual measurements of individual stem height, leaf width and leaf length, we obtained the mean

  3. Digital Instrumentation and Control Failure Events Derivation and Analysis by Frame-Based Technique

    SciTech Connect

    Hui-Wen Huang; Chunkuan Shih; Swu Yih; Yen-Chang Tzeng; Ming-Huei Chen

    2006-07-01

    A frame-based technique, including physical frame, logical frame, and cognitive frame, was adopted to perform digital I and C failure event derivation and analysis for the generic ABWR. The physical frame was structured with a modified PCTran-ABWR plant simulation code, which was extended and enhanced on the feedwater system, recirculation system, and steam line system. The logical frame was structured with MATLAB, which was incorporated into PCTran-ABWR to improve the pressure control system, feedwater control system, recirculation control system, and automated power regulation control system. As a result, the software failure of these digital control systems can be properly simulated and analyzed. The cognitive frame was simulated by the operator awareness status in the scenarios. Moreover, via an internal characteristics tuning technique, the modified PCTran-ABWR can precisely reflect the characteristics of the power-core flow. Hence, in addition to the transient plots, the analysis results can be demonstrated on the power-core flow map. A number of postulated I and C system software failure events were derived to support the dynamic analyses. The basis for event derivation includes the published classification of software anomalies, the digital I and C design data for the ABWR, the chapter 15 accident analysis of the generic SAR, and reported NPP I and C software failure events. The case study of this research includes (1) a software CMF analysis for the major digital control systems; and (2) postulated ABWR digital I and C software failure events derived from actual non-ABWR digital I and C software failure events reported to the LER of the USNRC or the IRS of the IAEA. These events were analyzed with PCTran-ABWR. Conflicts among plant status, computer status, and human cognitive status were successfully identified. The operator might not easily recognize the abnormal condition, because the computer status seems to progress normally. 
However, a well

  4. An Evaluation of Microcomputer-Based Strain Analysis Techniques on Meteoritic Chondrules

    NASA Astrophysics Data System (ADS)

    Hill, H. G. M.

    1995-09-01

    Introduction: Chondrule flattening and distinct foliation are preserved in certain chondrites [1] and have been interpreted, by some, as evidence of shock-induced pressure through hypervelocity impacts on parent bodies [2]. Recently, mean aspect ratios of naturally and artificially shocked chondrules in the Allende (CV3) chondrite have been correlated with shock intensity [3] using established shock stage criteria [4]. Clearly, quantification of chondrule deformation and appropriate petrographic criteria can be useful tools for constraining parent body shock history and, possibly, post-shock heating [3]. Here, strain analysis techniques (Rf/phi and Fry) normally employed in structural geology have been adapted and evaluated [5] for measuring mean chondrule strain and orientation. In addition, the possible use of such strain data for partial shock stage classification is considered. Rf/phi and Fry Analysis: The relationship between displacement and shape changes in rocks is known as strain [6]; the analysis assumes that an initial circle of unit radius is deformed to form an ellipse, the finite strain ellipse (Rf). The strain ratio (Rs) is an expression of the change of shape. The orientation of the strain ellipse (phi) is the angle subtended between the semi-major axis and a fixed reference direction. Generally, the log mean Rf ~ Rs and, therefore, the approximation Rf = Rs is valid. For chondrules, this is reasonable as they were originally molten, or partially molten, droplets [7]. Fry's 'center-to-center' geological strain analysis technique [8] is based on the principle that the distribution of particle centers in rocks can sometimes be used to determine the state of finite strain (Rf). Experimental Techniques: The Bovedy (L3) chondrite was chosen for investigation as it contains abundant, oriented, elliptical chondrules [5]. Hardware employed consisted of a Macintosh microcomputer and a flat-bed scanner. Chondrule outlines, obtained
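
    The Rf/phi bookkeeping described above can be sketched in a few lines: each deformed chondrule is treated as an ellipse with semi-axes and a long-axis orientation, and the strain ratio Rs is approximated by the geometric (log-mean) average of the individual Rf values. The ellipse values below are illustrative, not measured from Bovedy.

```python
import math

def rf_phi(ellipses):
    """ellipses: list of (semi_major, semi_minor, orientation_deg) tuples."""
    rf = [a / b for a, b, _ in ellipses]           # individual aspect ratios
    phi = [theta for _, _, theta in ellipses]
    # log-mean aspect ratio ~ Rs (valid when initial shapes were circular)
    rs = math.exp(sum(math.log(r) for r in rf) / len(rf))
    mean_phi = sum(phi) / len(phi)   # crude; a vector mean is better for angles
    return rs, mean_phi

chondrules = [(1.30, 1.00, 12.0), (1.45, 1.05, 8.0), (1.60, 1.10, 15.0)]
rs, mean_phi = rf_phi(chondrules)
print(round(rs, 3), round(mean_phi, 1))
```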

  5. Microfluidic assay-based optical measurement techniques for cell analysis: A review of recent progress.

    PubMed

    Choi, Jong-Ryul; Song, Hyerin; Sung, Jong Hwan; Kim, Donghyun; Kim, Kyujung

    2016-03-15

    Since the early 2000s, microfluidic cell culture systems have attracted significant attention as a promising alternative to conventional cell culture methods, raising the importance of designing efficient detection systems to analyze cell behavior on a chip in real time. For this reason, various measurement techniques for microfluidic devices have been developed alongside microfluidic assays for high-throughput screening and the mimicking of in vivo conditions. In this review, we discuss optical measurement techniques for microfluidic assays. First, the recent development of fluorescence- and absorbance-based optical measurement systems is described. Next, advanced optical detection systems are introduced with respect to three emphases: 1) optimization for long-term, real-time, and in situ measurements; 2) performance improvements; and 3) multimodal analysis conjugations. Finally, we explore future prospects for the establishment of optical detection systems following the development of complex, multi-dimensional microfluidic cell culture assays that mimic in vivo tissue, organ, and human systems. PMID:26409023

  6. Performance analysis of compressive ghost imaging based on different signal reconstruction techniques.

    PubMed

    Kang, Yan; Yao, Yin-Ping; Kang, Zhi-Hua; Ma, Lin; Zhang, Tong-Yi

    2015-06-01

    We present different signal reconstruction techniques for the implementation of compressive ghost imaging (CGI). The different techniques are validated on data collected from a ghost imaging with pseudothermal light experimental system. Experimental results show that the technique based on total variation minimization gives a high-quality reconstruction of the imaged object with less time consumption. The different performances among these reconstruction techniques and their parameter settings are also analyzed. The conclusions thus offer valuable information to promote the implementation of CGI in real applications. PMID:26367039

  7. An Overview of Micromechanics-Based Techniques for the Analysis of Microstructural Randomness in Functionally Graded Materials

    SciTech Connect

    Ferrante, Fernando J.; Brady, Lori L. Graham; Acton, Katherine; Arwade, Sanjay R.

    2008-02-15

    A review of current research efforts to develop micromechanics-based techniques for the study of microstructural randomness of functionally graded materials is presented, along with a framework developed by the authors of this paper that includes stochastic simulation of statistically inhomogeneous samples and a windowing technique coupled with a micromechanical homogenization technique. The methodology is illustrated through the analysis of one sample coupled with finite element modeling.

  8. Prioritization of sub-watersheds based on morphometric analysis using geospatial technique in Piperiya watershed, India

    NASA Astrophysics Data System (ADS)

    Chandniha, Surendra Kumar; Kansal, Mitthan Lal

    2014-11-01

    Hydrological investigation and the behavior of a watershed depend upon the geo-morphometric characteristics of its catchment. Morphometric analysis is commonly used for the development of regional hydrological models of ungauged watersheds. A critical evaluation and assessment of geo-morphometric constraints has been carried out. Prioritization of watersheds based on water plot capacity of the Piperiya watershed has been evaluated through linear, areal and relief aspects. Morphometric analysis has been attempted for the prioritization of nine sub-watersheds of the Piperiya watershed in the Hasdeo river basin, a tributary of the Mahanadi. Sub-watersheds are delineated with ArcMap 9.3 software from a digital elevation model (DEM). Drainages and their relative parameters, such as stream order, stream length, stream frequency, drainage density, texture ratio, form factor, circulatory ratio, elongation ratio, bifurcation ratio and compactness ratio, have been calculated separately for each sub-watershed using Remote Sensing (RS) and geospatial techniques. Finally, a prioritized score based on the morphometric behavior of each sub-watershed is assigned, and the consolidated scores are then used to identify the most sensitive parameters. The analysis reveals that stream order varies from 1 to 5, with first-order streams covering the maximum area, about 87.7 %. The total number of stream segments of all orders is 1,264 in the watershed. The study emphasizes the prioritization of the sub-watersheds on the basis of morphometric analysis. The final scores of all nine sub-watersheds are assigned according to erosion threat. The sub-watershed with the least compound parameter value was assigned the highest priority. The sub-watersheds have been categorized into three classes of high (4.1-4.7), medium (4.8-5.3) and low (>5.4) priority on the basis of their maximum (6.0) and minimum (4.1) prioritized scores.
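
    The ranking scheme described above can be sketched as follows: each sub-watershed gets a rank for every morphometric parameter, the compound parameter is the mean of those ranks, and the smallest compound value receives the highest priority. The parameter values and sub-watershed names below are illustrative, not those of the Piperiya sub-watersheds, and the sketch simplifies by ranking every parameter in the same direction (in practice some parameters, e.g. compactness ratio, rank inversely to erosion risk).

```python
def compound_parameter(params):
    """params: {name: {param: value}}; rank 1 goes to the largest value."""
    names = list(params)
    keys = params[names[0]].keys()
    scores = {n: [] for n in names}
    for k in keys:
        ordered = sorted(names, key=lambda n: -params[n][k])
        for rank, n in enumerate(ordered, start=1):
            scores[n].append(rank)
    # compound parameter = mean rank over all parameters
    return {n: sum(r) / len(r) for n, r in scores.items()}

subs = {
    "SW1": {"drainage_density": 2.1, "bifurcation_ratio": 3.8, "texture_ratio": 4.0},
    "SW2": {"drainage_density": 1.6, "bifurcation_ratio": 4.2, "texture_ratio": 3.1},
    "SW3": {"drainage_density": 2.4, "bifurcation_ratio": 3.2, "texture_ratio": 5.2},
}
cp = compound_parameter(subs)
priority = min(cp, key=cp.get)  # lowest compound parameter = highest priority
print(cp, priority)
```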

  9. Operational modal analysis via image based technique of very flexible space structures

    NASA Astrophysics Data System (ADS)

    Sabatini, Marco; Gasbarri, Paolo; Palmerini, Giovanni B.; Monti, Riccardo

    2013-08-01

    Vibrations represent one of the most important topics in the engineering design of flexible structures. The importance of this problem increases when a very flexible system is considered, which is often the case for space structures. In order to identify the modal characteristics, in terms of natural frequencies and the relevant modal parameters, ground tests are performed. However, these parameters can vary due to the operative conditions of the system. In order to continuously monitor the modal characteristics during the satellite lifetime, an operational modal analysis is mandatory. This kind of analysis is usually performed by using classical accelerometers or strain gauges and properly analyzing the acquired output. In this paper a different approach to vibration data acquisition is pursued via an image-based technique. In order to simulate a flexible satellite, a free-flying platform is used; the problem is further complicated by the fact that the overall system, constituted by a highly rigid bus and very flexible panels, must be modeled as a multibody system. In the experimental campaign, a camera placed on the bus is used to identify the eigenfrequencies of the vibrating structure; aluminum thin plates simulate the very flexible solar panels. The structure is excited by a hammer or studied during a fast attitude maneuver. The results of the experimental activity are investigated and compared with numerical simulations obtained via FEM-multibody software, and the relevant results are proposed and discussed.
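
    The frequency-identification step of such an output-only analysis can be sketched simply: a displacement trace extracted from the image sequence is Fourier-transformed and the spectral peak estimates an eigenfrequency. The 4 Hz "panel vibration" and 64 fps camera rate below are synthetic assumptions for the sketch.

```python
import math

def dominant_frequency(samples, fs):
    """Return the peak frequency (Hz) of a real signal via a plain DFT."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):          # skip DC, use positive frequencies
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fs / n

fs = 64.0                                # camera frame rate (frames/s)
sig = [math.sin(2 * math.pi * 4.0 * i / fs) for i in range(128)]
print(dominant_frequency(sig, fs))       # -> 4.0
```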

  10. Instanton-based techniques for analysis and reduction of error floor of LDPC codes

    SciTech Connect

    Chertkov, Michael; Chilappagari, Shashi K; Stepanov, Mikhail G; Vasic, Bane

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  11. Polyspectral signal analysis techniques for condition based maintenance of helicopter drive-train system

    NASA Astrophysics Data System (ADS)

    Hassan Mohammed, Mohammed Ahmed

    For efficient maintenance of a diverse fleet of aircraft and rotorcraft, effective condition based maintenance (CBM) must be established based on the monitored vibration signals of rotating components. In this dissertation, we present theory and applications of polyspectral signal processing techniques for condition monitoring of critical components in the AH-64D helicopter tail rotor drive train system. Currently available vibration-monitoring tools are mostly built around auto- and cross-power spectral analysis, which have limited performance in detecting frequency correlations higher than second order. Studying higher order correlations and their Fourier transforms, the higher order spectra, provides more information about the vibration signals, which helps in building more accurate diagnostic models of the mechanical system. Based on higher order spectral analysis, different signal processing techniques are developed to assess the health condition of different critical rotating components in the AH-64D helicopter drive-train. Based on the cross-bispectrum, a quadratic nonlinear transfer function is presented to model second order nonlinearity in a drive-shaft running between the two hanger bearings. Then, the quadratic-nonlinearity coupling coefficient between frequency harmonics of the rotating shaft is used as a condition metric to study different seeded shaft faults compared to the baseline case, namely: shaft misalignment, shaft imbalance, and a combination of shaft misalignment and imbalance. The proposed quadratic-nonlinearity metric shows better capability in distinguishing the four studied shaft settings than the conventional linear coupling based on the cross-power spectrum. We also develop a new concept of Quadratic-Nonlinearity Power-Index spectrum, QNLPI(f), that can be used in signal detection and classification, based on the bicoherence spectrum. The proposed QNLPI(f) is derived as a projection of the three-dimensional bicoherence spectrum into a two-dimensional spectrum that
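
    The core idea of bispectrum-based quadratic-coupling detection can be sketched as follows: the bispectrum B(f1,f2) = E[X(f1) X(f2) X*(f1+f2)], averaged over data segments, is large only when the phase at f1+f2 is locked to the sum of the phases at f1 and f2. The synthetic "shaft" signal below is illustrative; the segment length, bin choices, and the 10x detection margin are assumptions of this sketch, not parameters from the dissertation.

```python
import cmath, math, random

def dft_bin(seg, k):
    """Single-bin DFT of a real segment."""
    n = len(seg)
    return sum(seg[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))

def bispectrum(segments, k1, k2):
    """|average of X(k1) X(k2) X*(k1+k2)| over segments."""
    acc = 0
    for seg in segments:
        acc += dft_bin(seg, k1) * dft_bin(seg, k2) * dft_bin(seg, k1 + k2).conjugate()
    return abs(acc) / len(segments)

random.seed(0)
n, k1, k2, k3 = 64, 5, 8, 11
segments = []
for _ in range(40):
    p1, p2, p3 = (random.uniform(0, 2 * math.pi) for _ in range(3))
    seg = [math.cos(2 * math.pi * k1 * t / n + p1)
           + math.cos(2 * math.pi * k2 * t / n + p2)
           + math.cos(2 * math.pi * (k1 + k2) * t / n + p1 + p2)  # phase-coupled
           + math.cos(2 * math.pi * k3 * t / n + p3)              # independent
           for t in range(n)]
    segments.append(seg)

# Coupled pair stands out; the independent pair averages to ~zero.
print(bispectrum(segments, k1, k2) > 10 * bispectrum(segments, k1, k3))  # -> True
```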

  12. Performance Analysis of SAC Optical PPM-CDMA System-Based Interference Rejection Technique

    NASA Astrophysics Data System (ADS)

    Alsowaidi, N.; Eltaif, Tawfig; Mokhtar, M. R.

    2016-03-01

    In this paper, we theoretically analyse an optical code division multiple access (OCDMA) system based on successive interference cancellation (SIC) using pulse position modulation (PPM), considering the interference between the users, the imperfect cancellation occurring during the cancellation process, and receiver noises. A spectral amplitude coding (SAC) scheme is used to suppress the overlapping between the users and reduce the effect of receiver noises. The theoretical analysis of the multiple access interference (MAI)-limited performance of this approach indicates the influence of the size of M-ary PPM on the OCDMA system. The OCDMA system performance improves with increasing M-ary PPM. It was found that the SIC/SAC-OCDMA system using the PPM technique along with modified prime (MPR) codes as the signature sequence offers significant improvement over the one without cancellation, and it can support up to 103 users at the benchmarking value of bit error rate (BER) = 10^-9 with prime number p = 11, while the system without the cancellation scheme can support only up to 52 users.

  13. Error analysis of a 3D imaging system based on fringe projection technique

    NASA Astrophysics Data System (ADS)

    Zhang, Zonghua; Dai, Jie

    2013-12-01

    In the past few years, optical metrology has found numerous applications in scientific and commercial fields owing to its non-contact nature. One of the most popular methods is the measurement of 3D surfaces based on fringe projection techniques, owing to the advantages of non-contact operation, full-field and fast acquisition, and automatic data processing. In surface profilometry using a digital light processing (DLP) projector, many factors affect the accuracy of the 3D measurement. However, no previous research gives a complete error analysis of a 3D imaging system. This paper analyzes some possible error sources of a 3D imaging system, for example, the nonlinear response of the CCD camera and DLP projector, the sampling error of the sinusoidal fringe pattern, variation of ambient light, and marker extraction during calibration. These error sources are simulated in a software environment to demonstrate their effects on measurement. Possible compensation methods are proposed to give highly accurate shape data. Experiments were conducted to evaluate the effects of these error sources on 3D shape measurement. Experimental results and performance evaluation show that these errors have a great effect on the measured 3D shape and that it is necessary to compensate for them for accurate measurement.
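
    One of the error sources named above, projector/camera nonlinearity, can be simulated in a few lines: a gamma response distorts the sinusoidal fringes, so the standard four-step phase-shifting formula recovers a phase with a periodic error. The phase value, gamma of 2.2, and fringe model below are illustrative assumptions, not values from the paper.

```python
import math

def four_step_phase(intensities):
    """Standard 4-step phase-shifting: shifts of 0, 90, 180, 270 degrees."""
    i0, i1, i2, i3 = intensities
    return math.atan2(i3 - i1, i0 - i2)

def fringe(phase, shift, gamma):
    ideal = 0.5 + 0.5 * math.cos(phase + shift)   # normalized fringe in [0, 1]
    return ideal ** gamma                          # nonlinear device response

true_phase = 0.7                                   # radians, arbitrary test point
shifts = [k * math.pi / 2 for k in range(4)]

ideal = four_step_phase([fringe(true_phase, s, 1.0) for s in shifts])
distorted = four_step_phase([fringe(true_phase, s, 2.2) for s in shifts])
# Gamma leaves the ideal case exact but introduces a phase error otherwise.
print(round(abs(ideal - true_phase), 9), round(abs(distorted - true_phase), 4))
```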

  14. Spatio-temporal analysis of discharge regimes based on hydrograph classification techniques in an agricultural catchment

    NASA Astrophysics Data System (ADS)

    Chen, Xiaofei; Bloeschl, Guenter; Blaschke, Alfred Paul; Silasari, Rasmiaditya; Exner-Kittridge, Mike

    2016-04-01

    Stream discharge and groundwater hydrographs integrate the spatial and temporal variations of small-scale hydrological responses. Characterizing the discharge regime of drained farmland is essential for irrigation strategies and hydrologic modeling. For agricultural basins in particular, diurnal hydrographs of drainage discharge have been investigated to infer drainage processes of varying magnitudes. To explore the variability of discharge responses, we developed an objective method to characterize and classify discharge hydrographs based on magnitude and time-series features. Cluster analysis (hierarchical k-means) and principal component analysis are applied to discharge time series and groundwater level hydrographs to analyze their event characteristics, tested on 8 different discharge and 18 groundwater level hydrographs. Owing to the variability of rainfall activity, system location, discharge regime and pre-event soil moisture in the catchment, three main clusters of discharge hydrographs are identified from the test. The results show that: (1) the hydrographs from the drainage discharges had similar shapes but different magnitudes for an individual rainstorm, a similarity also shown by the overland flow discharge and the spring system; (2) within each cluster the shapes remained similar, but the rising slopes differed due to different antecedent wetness conditions, while differences in the recession slope can be explained by system location and discharge area; and (3) surface water always has a close proportional relation with soil moisture throughout the year, whereas the outflow of tile drainage systems is directly proportional to soil moisture, and inversely related to the groundwater levels, only after the soil moisture exceeds a certain threshold. 
    Finally, we discuss the potential application of hydrograph classification in a wider range of
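
    The classification step can be sketched with a plain k-means on hydrograph features; here each hydrograph is reduced to just two normalized features (peak discharge and time-to-peak). The feature values are illustrative, and a real analysis would cluster PCA scores of the full time series as in the abstract.

```python
import random

def kmeans(points, k, iters=50, seed=1):
    """Plain Lloyd's k-means on tuples; returns (centers, clusters)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [tuple(sum(d) / len(c) for d in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# (peak discharge, time to peak), normalized to [0, 1]
hydrographs = [(0.9, 0.1), (0.8, 0.2), (0.85, 0.15),   # flashy responses
               (0.3, 0.7), (0.25, 0.8), (0.2, 0.75)]   # damped responses
centers, clusters = kmeans(hydrographs, 2)
print(sorted(len(c) for c in clusters))                # -> [3, 3]
```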

  15. Development of evaluation technique of GMAW welding quality based on statistical analysis

    NASA Astrophysics Data System (ADS)

    Feng, Shengqiang; Terasaki, Hidenri; Komizo, Yuichi; Hu, Shengsun; Chen, Donggao; Ma, Zhihua

    2014-11-01

    Nondestructive techniques for appraising gas metal arc welding (GMAW) faults play a very important role in on-line quality controllability and prediction of the GMAW process. On-line welding quality controllability and prediction have several disadvantages, such as high cost, low efficiency, complication and strong sensitivity to the environment. An enhanced, efficient technique for evaluating welding faults based on the Mahalanobis distance (MD) and the normal distribution is presented. In addition, a new piece of equipment, designated the weld quality tester (WQT), is developed based on the proposed evaluation technique. MD is superior to other multidimensional distances, such as the Euclidean distance, because the covariance matrix used for calculating MD takes into account correlations in the data and scaling. The values of MD obtained from the welding current and arc voltage are assumed to follow a normal distribution with two parameters: the mean μ and standard deviation σ of the data. In the proposed evaluation technique used by the WQT, values of MD located in the range from zero to μ + 3σ are regarded as "good". Two experiments, which involve changing the flow of shielding gas and smearing paint on the surface of the substrate, were conducted in order to verify the sensitivity of the proposed evaluation technique and the feasibility of using the WQT. The experimental results demonstrate the usefulness of the WQT for evaluating welding quality. The proposed technique can be applied to implement on-line welding quality controllability and prediction, which is of great importance for designing novel equipment for weld quality detection.
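
    The decision rule described above can be sketched as: compute the Mahalanobis distance of each (current, voltage) sample from a healthy-weld reference, then accept MD values in [0, μ + 3σ]. All numbers below (means, covariance, samples) are invented for illustration, not WQT data.

```python
import math

def mahalanobis_2d(x, mean, cov):
    """Mahalanobis distance of a 2D sample given the mean and 2x2 covariance."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (x[0] - mean[0], x[1] - mean[1])
    q = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
         + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return math.sqrt(q)

mean = (220.0, 24.0)                    # welding current (A), arc voltage (V)
cov = ((16.0, 2.0), (2.0, 1.0))         # covariance captures their correlation
reference_mds = [mahalanobis_2d(s, mean, cov)
                 for s in [(221, 24.2), (218, 23.8), (224, 24.5), (217, 23.6)]]
mu = sum(reference_mds) / len(reference_mds)
sigma = math.sqrt(sum((m - mu) ** 2 for m in reference_mds) / len(reference_mds))
threshold = mu + 3 * sigma              # "good" region: [0, mu + 3*sigma]

print(mahalanobis_2d((220.5, 24.1), mean, cov) <= threshold)   # -> True (good)
print(mahalanobis_2d((240.0, 30.0), mean, cov) <= threshold)   # -> False (fault)
```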

  16. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    SciTech Connect

    Yu, P.

    2008-01-01

    Recently, an advanced synchrotron radiation-based bioanalytical technique (SRFTIRM) has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is millions of times brighter than sunlight), is capable of exploring biomaterials at molecular and cellular levels. However, with the synchrotron SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques, (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multicomponent modeling methods, (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to create molecular spectral classifications by utilizing not just one intensity or frequency point of a molecular spectrum, but the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify the spectral component peaks of molecular structure, functional groups and biopolymers. By application of these four methods (the two multivariate techniques and the two peak-modeling methods), the inherent molecular structures, functional group and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
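
    The multi-component peak models named above amount to modeling an absorbance band as a sum of Gaussian (or Lorentzian) components, each with a center, height, and width. The band positions below (amide I/II-like wavenumbers) and all heights/widths are made-up illustrations, not fitted values from the article.

```python
import math

def gaussian(x, center, height, fwhm):
    sigma = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return height * math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def lorentzian(x, center, height, fwhm):
    hwhm = fwhm / 2.0
    return height * hwhm ** 2 / ((x - center) ** 2 + hwhm ** 2)

def model(x, components, shape=gaussian):
    """Sum of component peaks; each component is (center, height, fwhm)."""
    return sum(shape(x, *c) for c in components)

# Two hypothetical bands (cm^-1): centers, heights and widths are illustrative.
components = [(1655.0, 1.0, 40.0), (1545.0, 0.6, 35.0)]
print(round(model(1655.0, components), 3))   # value at the first band centre
```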

  17. Accuracy of an approximate static structural analysis technique based on stiffness matrix eigenmodes

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Hajela, P.

    1979-01-01

    Use of stiffness matrix eigenmodes, instead of vibration eigenmodes, as generalized coordinates is proposed for the condensation of static load-deflection equations in the finite element stiffness method. The modes are selected by a strain energy criterion, and the resulting fast, approximate analysis technique is evaluated by application to idealized built-up wings and a fuselage segment. The best results obtained are a two-order-of-magnitude reduction in the number of degrees of freedom for a high-aspect-ratio wing, with less than one percent error in the prediction of the largest displacement.
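
    The condensation idea can be sketched as a reduced-basis (Galerkin) solve: approximate K u = f in a small basis Phi of modes by solving (Phi^T K Phi) q = Phi^T f and recovering u ≈ Phi q. The 4-DOF spring chain and the two hand-picked assumed modes below are illustrative stand-ins for the stiffness-matrix eigenmodes selected by the strain-energy criterion.

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def solve2(A, b):
    """Direct solve of a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(A[1][1] * b[0] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Stiffness of a fixed-free chain of four unit springs (4 DOFs), tip load.
K = [[2, -1, 0, 0], [-1, 2, -1, 0], [0, -1, 2, -1], [0, 0, -1, 1]]
f = [0.0, 0.0, 0.0, 1.0]
Phi = [[0.25, -0.5], [0.5, -1.0], [0.75, 0.0], [1.0, 1.0]]   # assumed modes

Kr = matmul(transpose(Phi), matmul(K, Phi))                   # 2x2 reduced stiffness
fr = [sum(p[j] * fi for p, fi in zip(Phi, f)) for j in range(2)]
q = solve2(Kr, fr)
u = [sum(p[j] * q[j] for j in range(2)) for p in Phi]
# Exact here because the tip-load solution lies in the span of the basis.
print([round(x, 3) for x in u])   # -> [1.0, 2.0, 3.0, 4.0]
```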

  18. Wilcoxon signed-rank-based technique for the pulse-shape analysis of HPGe detectors

    NASA Astrophysics Data System (ADS)

    Martín, S.; Quintana, B.; Barrientos, D.

    2016-07-01

    The characterization of the electric response of segmented-contact high-purity germanium detectors requires scanning systems capable of accurately associating each pulse with the position of the interaction that generated it. This process requires an algorithm sensitive to changes above the electronic noise in the pulse shapes produced at different positions, depending on the resolution of the Ge crystal. In this work, a pulse-shape comparison technique based on the Wilcoxon signed-rank test has been developed. It provides a method to distinguish pulses coming from different interaction points in the germanium crystal. Therefore, this technique is a necessary step for building a reliable pulse-shape database that can be used later for the determination of the position of interaction for γ-ray tracking spectrometry devices such as AGATA, GRETA or GERDA. The method was validated by comparison with a χ2 test using simulated and experimental pulses corresponding to a Broad Energy germanium detector (BEGe).
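
    The comparison step can be sketched as follows: the Wilcoxon signed-rank statistic is computed on the sample-by-sample differences of two pulses; identical shapes give balanced positive and negative rank sums, while a systematic shape difference pushes the smaller rank sum down. The synthetic pulses and the simplified tie/zero handling below are assumptions of this sketch, not details of the published method.

```python
def wilcoxon_w(a, b):
    """Smaller of the positive/negative rank sums of the paired differences
    (zeros are dropped; equal |d| values get average ranks)."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    ranked = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(ranked):
        j = i
        while j + 1 < len(ranked) and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0            # average rank for the tie group
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)

pulse_a = [0, 10, 40, 90, 100, 80, 50, 20]     # digitized pulse (arbitrary units)
noisy_a = [1, 9, 42, 88, 101, 79, 52, 18]      # same shape, symmetric noise
other   = [0, 5, 20, 60, 100, 90, 70, 40]      # different interaction point
print(wilcoxon_w(pulse_a, noisy_a), wilcoxon_w(pulse_a, other))  # -> 18.0 10.0
```

    For n = 8 non-zero differences the balanced value is n(n+1)/4 = 18, so the drop to 10 flags a systematic shape difference rather than noise.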

  19. Fluorous affinity-based separation techniques for the analysis of biogenic and related molecules.

    PubMed

    Hayama, Tadashi; Yoshida, Hideyuki; Yamaguchi, Masatoshi; Nohta, Hitoshi

    2014-12-01

    Perfluoroalkyl-containing compounds have a unique 'fluorous' property that refers to the remarkably specific affinity they share. Fluorous compounds can be easily isolated from non-fluorous species on the perfluoroalkyl-functionalized stationary phases used in fluorous solid-phase extraction and fluorous liquid chromatography by means of fluorous-fluorous interactions (fluorophilicity). Recently, this unique specificity has been applied to the highly selective enrichment and analysis of different classes of biogenic and related compounds in complex samples. Because biogenic compounds are generally not 'fluorous', they must be derivatized with an appropriate perfluoroalkyl-containing reagent in order to exploit the fluorous interaction. In this review, we introduce the application of fluorous affinity techniques, including derivatization methods, to biogenic sample analysis. PMID:24865313

  20. Aerodynamic measurement techniques. [laser based diagnostic techniques

    NASA Technical Reports Server (NTRS)

    Hunter, W. W., Jr.

    1976-01-01

    The laser characteristics of intensity, monochromaticity, spatial coherence, and temporal coherence were exploited to advance laser-based diagnostic techniques for aerodynamics-related research. Two broad categories, visualization and optical measurements, were considered, and three techniques received significant attention: holography, laser velocimetry, and Raman scattering. Examples of quantitative laser velocimeter and Raman scattering measurements of velocity, temperature, and density indicated the potential of these nonintrusive techniques.

  1. Mass Spectrometry Based Imaging Techniques for Spatially Resolved Analysis of Molecules

    PubMed Central

    Matros, Andrea; Mock, Hans-Peter

    2013-01-01

    Higher plants are composed of a multitude of tissues with specific functions, reflected by distinct profiles for transcripts, proteins, and metabolites. Comprehensive analysis of metabolites and proteins has advanced tremendously within recent years, and this progress has been driven by the rapid development of sophisticated mass spectrometric techniques. In most current “omics” studies, analysis is performed on whole-organ or whole-plant extracts, leading to a loss of spatial information. Mass spectrometry imaging (MSI) techniques have opened a new avenue to obtain information on the spatial distribution of metabolites and of proteins. Pioneered in the field of medicine, the approaches are now applied to study the spatial profiles of molecules in plant systems. A range of different plant organs and tissues have been successfully analyzed by MSI, and patterns of various classes of metabolites from primary and secondary metabolism could be obtained. It can be envisaged that MSI approaches will substantially contribute to building spatially resolved biochemical networks. PMID:23626593

  2. Advanced NMR-based techniques for pore structure analysis of coal

    SciTech Connect

    Smith, D.M.

    1992-01-01

    One of the main problems in coal utilization is the inability to properly characterize its complex pore structure. Coals typically have micro/ultra-micro pores, but they also exhibit meso- and macroporosity. Conventional pore size techniques (adsorption/condensation, mercury porosimetry) are limited because of this broad pore size range, the microporosity, the reactive nature of coal, the requirement that samples be completely dried, and network/percolation effects. Small angle scattering is limited because it probes both open and closed pores. Although one would not expect any single technique to provide a satisfactory description of a coal's structure, it is apparent that better techniques are necessary. We believe that measurement of the NMR parameters of various gas phase and adsorbed phase NMR-active probes can provide the resolution of this problem. We will investigate the dependence of common NMR parameters, such as chemical shifts and relaxation times, of several different nuclei and compounds on the pore structure of model microporous solids, carbons, and coals. In particular, we will study the interaction between several small molecules and the pore surfaces in coals. These molecules have been selected for their chemical and physical properties. A special NMR probe will be constructed which will allow the concurrent measurement of NMR properties and adsorption uptake at a variety of temperatures. All samples will be subjected to a suite of "conventional" pore structure analyses. These include nitrogen adsorption at 77 K with BET analysis, CO2 and CH4 adsorption at 273 K with D-R (Dubinin-Radushkevich) analysis, helium pycnometry, and small angle X-ray scattering, as well as gas diffusion measurements.
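
    The BET step mentioned above can be sketched as a fit of the linearized BET equation, p/(v(p0-p)) = 1/(vm C) + ((C-1)/(vm C))(p/p0), over the usual 0.05-0.35 relative-pressure range; the slope and intercept give the monolayer capacity vm and hence a surface area. The isotherm points below are synthetic, generated from assumed vm and C values.

```python
def bet_monolayer(p_rel, v_ads):
    """Least-squares fit of the linearized BET form; returns (vm, C)."""
    y = [p / (v * (1.0 - p)) for p, v in zip(p_rel, v_ads)]
    n = len(p_rel)
    sx, sy = sum(p_rel), sum(y)
    sxx = sum(a * a for a in p_rel)
    sxy = sum(a * b for a, b in zip(p_rel, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    vm = 1.0 / (slope + intercept)
    c = slope / intercept + 1.0
    return vm, c

# Synthetic isotherm generated from vm = 30 cm3/g (STP), C = 100.
def bet_v(p, vm=30.0, c=100.0):
    return vm * c * p / ((1.0 - p) * (1.0 + (c - 1.0) * p))

p_rel = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]
v_ads = [bet_v(p) for p in p_rel]
vm, c = bet_monolayer(p_rel, v_ads)
# N2 cross-section ~0.162 nm^2 gives area (m2/g) ~ 4.35 * vm (vm in cm3/g STP).
print(round(vm, 2), round(c, 1), round(4.35 * vm, 1))
```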

  3. Analysis of feature-based video stabilization/registration techniques within application of traffic data collection

    NASA Astrophysics Data System (ADS)

    Sadat, Mojtaba T.; Viti, Francesco

    2015-02-01

    Machine vision is rapidly gaining popularity in the field of Intelligent Transportation Systems. In particular, advantages are foreseen from the exploitation of Aerial Vehicles (AV) in delivering a superior view of traffic phenomena. However, vibration on AVs makes it difficult to extract moving objects on the ground. To partly overcome this issue, image stabilization/registration procedures are adopted to correct and stitch multiple frames taken of the same scene but from different positions, angles, or sensors. In this study, we examine the impact of multiple feature-based techniques for stabilization, and we show that the SURF detector outperforms the others in terms of time efficiency and output similarity.

  4. Development of EMD based signal improvement technique and its application to pulse shape analysis

    NASA Astrophysics Data System (ADS)

    Siwal, Davinder; Suyal, V.; Prasad, A.; Mandal, S.; Singh, R.

    2013-04-01

    A new technique for signal improvement has been developed within the framework of the Empirical Mode Decomposition (EMD) method. It identifies the signal noise from the estimation of correlation coefficients. Such calculations are performed both in the frequency and in the time domains of the signal, between the IMFs and the given signal itself. Each of the Fast Fourier Transformed IMFs reflects the complete picture of the frequencies involved in the given signal. Therefore, the correlation curve obtained in the time domain can be used to identify the noise components. The proposed method has been applied to pulse shape data from a liquid scintillator based neutron detector.
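
    The selection criterion described above can be sketched as follows: the correlation coefficient between each IMF and the original signal is computed, and IMFs with low correlation are treated as noise. The "IMFs" here are constructed by hand rather than by an actual EMD sifting, and the 0.5 threshold is an assumption of this sketch, purely to show the bookkeeping.

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

t = [i / 100.0 for i in range(200)]
component = [math.sin(2 * math.pi * 2.0 * x) for x in t]            # true signal
noise = [0.05 * math.sin(2 * math.pi * 37.0 * x + 1.0) for x in t]  # "noise IMF"
signal = [c + n for c, n in zip(component, noise)]

imfs = [noise, component]            # pretend these came from EMD sifting
keep = [imf for imf in imfs if abs(pearson(imf, signal)) > 0.5]
print(len(keep))                     # -> 1 (only the signal-like IMF is kept)
```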

  5. Analysis of meteorological variables in the Australasian region using ground- and space-based GPS techniques

    NASA Astrophysics Data System (ADS)

    Kuleshov, Yuriy; Choy, Suelynn; Fu, Erjiang Frank; Chane-Ming, Fabrice; Liou, Yuei-An; Pavelyev, Alexander G.

    2016-07-01

    Results of an analysis of meteorological variables (temperature and moisture) in the Australasian region using global positioning system (GPS) radio occultation (RO) and ground-based GPS observations, verified with in situ radiosonde (RS) data, are presented. The potential of using ground-based GPS observations for retrieving column-integrated precipitable water vapour (PWV) over the Australian continent has been demonstrated using the Australian ground-based GPS reference station network. Using data from 15 ground-based GPS stations, the state of the atmosphere over Victoria during a significant weather event, the March 2010 Melbourne storm, has been investigated, and it has been shown that GPS observations have potential for monitoring the movement of a weather front with a sharp moisture contrast. Temperature and moisture variability in the atmosphere over various climatic regions (the Indian and the Pacific Oceans, the Antarctic and Australia) has been examined using satellite-based GPS RO and in situ RS observations. Investigating recent atmospheric temperature trends over Antarctica, the time series of the collocated GPS RO and RS data were examined, and strong cooling in the lower stratosphere and warming through the troposphere over Antarctica have been identified, in agreement with outputs of climate models. With further expansion of the Global Navigation Satellite Systems (GNSS), it is expected that GNSS satellite- and ground-based measurements will be able to provide an order of magnitude more data, which in turn could significantly advance weather forecasting services, climate monitoring and analysis in the Australasian region.

  6. Novel Recognition Method of Blast Furnace Dust Composition by Multifeature Analysis Based on Comprehensive Image-Processing Techniques

    NASA Astrophysics Data System (ADS)

    Guo, Hongwei; Su, Buxin; Bai, Zhenlong; Zhang, Jianliang; Li, Xinyu

    2014-11-01

    The traditional artificial recognition methods for blast furnace dust composition have several disadvantages, including a great deal of information to process, complex operation, and low working efficiency. In this article, a multifeature analysis method based on comprehensive image-processing techniques was proposed to automatically recognize the blast furnace dust composition. First, artificial recognition and feature analysis, which included image preprocessing and Harris corner, Canny edge, and Ruffle feature analysis, were used to build the template image, so that any unknown dust digital image could be tested against it. Second, the coke, microvariation pulverized coal, vitric, ash, and iron components of the dust are distinguished according to their different ranges of values in the multifeature analysis. The method is valid for recognizing the blast furnace dust composition automatically, and it is fast and has high recognition accuracy.

  8. A New Signal Processing Technique for Neutron Capture Cross Section Measurement Based on Pulse Width Analysis

    NASA Astrophysics Data System (ADS)

    Katabuchi, T.; Matsuhashi, T.; Terada, K.; Mizumoto, M.; Hirose, K.; Kimura, A.; Furutaka, K.; Hara, K. Y.; Harada, H.; Hori, J.; Igashira, M.; Kamiyama, T.; Kitatani, F.; Kino, K.; Kiyanagi, Y.; Koizumi, M.; Nakamura, S.; Oshima, M.; Toh, Y.

    2014-05-01

    A fast data acquisition method based on pulse width analysis was developed for γ-ray spectroscopy with an NaI(Tl) detector. The new method was tested in experiments with standard γ-ray sources and a pulsed neutron beam from a spallation neutron source. Pulse height spectra were successfully reconstructed from the pulse width distribution by use of an energy calibration curve. The 197Au(n, γ)198Au cross section was measured by this method to test its viability. The obtained experimental cross section showed good agreement with a calculation using the resonance parameters of JENDL-4.0.
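In essence, the reconstruction step maps each measured pulse width back to a pulse height (energy) through a monotone calibration curve. A hedged sketch with invented calibration points, not the paper's actual NaI(Tl) calibration:

```python
def interp(x, xs, ys):
    """Piecewise-linear interpolation through calibration points (xs ascending)."""
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            f = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + f * (ys[i + 1] - ys[i])
    raise ValueError("width outside calibration range")

# Hypothetical calibration: pulse width (ns) vs known gamma-ray energy (keV).
widths = [100.0, 180.0, 240.0, 285.0]
energies = [122.0, 511.0, 662.0, 1274.0]

print(interp(240.0, widths, energies))  # 662.0, a calibration point itself
print(interp(210.0, widths, energies))  # 586.5, halfway between 511 and 662
```

Each recorded pulse width would be pushed through such a curve and histogrammed to rebuild the pulse height spectrum.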

  9. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    NASA Astrophysics Data System (ADS)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus (CB) clouds are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however, such observations are expensive and time-limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground-based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high-spatial-resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial-frequency-based analytical techniques. These features were used to train a weighted k-nearest-neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible-wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate: a Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
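A weighted k-nearest-neighbour classifier of the kind described above fits in a few lines. The two-dimensional feature vectors, class labels, and inverse-distance weighting below are illustrative assumptions, not the paper's 45-feature setup:

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3):
    """Label a query feature vector by its k nearest neighbours, weighted by 1/distance."""
    nearest = sorted((math.dist(feat, query), label) for feat, label in train)[:k]
    votes = defaultdict(float)
    for d, label in nearest:
        votes[label] += 1.0 / (d + 1e-9)  # closer neighbours vote harder; 1e-9 avoids /0
    return max(votes, key=votes.get)

# Toy 2-feature texture vectors (e.g. contrast, entropy) for two cloud classes.
train = [
    ((0.90, 0.80), "CB"), ((0.80, 0.90), "CB"), ((0.85, 0.70), "CB"),
    ((0.20, 0.10), "Stratus"), ((0.10, 0.20), "Stratus"), ((0.15, 0.30), "Stratus"),
]
print(weighted_knn(train, (0.82, 0.75)))  # CB
print(weighted_knn(train, (0.18, 0.20)))  # Stratus
```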

  10. AVR Microcontroller-based automated technique for analysis of DC motors

    NASA Astrophysics Data System (ADS)

    Kaur, P.; Chatterji, S.

    2014-01-01

    This paper provides essential information on the development of a 'dc motor test and analysis control card' using an AVR-series ATMega32 microcontroller. This card can be interfaced to a PC, calculates parameters such as motor losses and efficiency, and plots characteristics for dc motors. Presently, there are different tests and methods available to evaluate motor parameters, but a single, universal, user-friendly automated set-up is discussed in this paper. It has been accomplished by designing data acquisition and SCR bridge-firing hardware based on the AVR ATMega32 microcontroller. This hardware has the capability to drive phase-controlled rectifiers and acquire real-time values of current, voltage, temperature and speed of the motor. The various analyses feasible with the designed hardware are of immense importance for dc motor manufacturers and quality-sensitive users. Through this paper, the authors aim to provide details of this AVR-based hardware, which can be used for dc motor parameter analysis and also for motor control applications.
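The core parameter calculations such a test card performs reduce to simple power-balance formulas: electrical input power P_in = V·I, mechanical output power P_out = τ·ω, and losses as their difference. A sketch with hypothetical bench readings (not values from the paper):

```python
import math

def dc_motor_analysis(v_arm, i_arm, torque_nm, speed_rpm):
    """Return (input power W, output power W, losses W, efficiency %)."""
    p_in = v_arm * i_arm                      # electrical input power, P = V * I
    omega = 2.0 * math.pi * speed_rpm / 60.0  # shaft speed in rad/s
    p_out = torque_nm * omega                 # mechanical output power, P = T * w
    losses = p_in - p_out                     # copper + iron + friction losses
    return p_in, p_out, losses, 100.0 * p_out / p_in

# Hypothetical readings: 220 V, 10 A armature, 9 N.m load torque at 1800 rpm.
p_in, p_out, losses, eff = dc_motor_analysis(220.0, 10.0, 9.0, 1800.0)
print(round(eff, 1))  # 77.1 (% efficiency)
```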

  11. Analysis and coding technique based on computational intelligence methods and image-understanding architecture

    NASA Astrophysics Data System (ADS)

    Kuvychko, Igor

    2000-05-01

    Human vision involves higher-level knowledge and top-down processes for resolving ambiguity and uncertainty in real images. Even very advanced low-level image processing cannot offer any advantage without a highly effective knowledge-representation and reasoning system, which is the core of the image-understanding problem. Methods of image analysis and coding are directly based on the methods of knowledge representation and processing. The article suggests such models and mechanisms in the form of a Spatial Turing Machine that, in place of symbols and tapes, works with hierarchical networks represented dually as discrete and continuous structures. Such networks are able to perform both graph and diagrammatic operations, which are the basis of intelligence. Computational intelligence methods provide transformation of continuous image information into discrete structures, making it available for analysis. The article shows that symbols naturally emerge in such networks, giving the opportunity to use symbolic operations. This framework naturally combines methods of machine learning, classification and analogy with induction, deduction and other methods of higher-level reasoning. Based on these principles, an image-understanding system provides more flexible ways of handling ambiguity and uncertainty in real images and does not require supercomputers. That opens the way to new technologies in computer vision and image databases.

  12. Numerical analysis of radiation propagation in innovative volumetric receivers based on selective laser melting techniques

    NASA Astrophysics Data System (ADS)

    Alberti, Fabrizio; Santiago, Sergio; Roccabruna, Mattia; Luque, Salvador; Gonzalez-Aguilar, Jose; Crema, Luigi; Romero, Manuel

    2016-05-01

    Volumetric absorbers constitute one of the key elements in order to achieve high thermal conversion efficiencies in concentrating solar power plants. Regardless of the working fluid or thermodynamic cycle employed, design trends towards higher absorber output temperatures are widespread, which lead to the general need of components of high solar absorptance, high conduction within the receiver material, high internal convection, low radiative and convective heat losses and high mechanical durability. In this context, the use of advanced manufacturing techniques, such as selective laser melting, has allowed for the fabrication of intricate geometries that are capable of fulfilling the previous requirements. This paper presents a parametric design and analysis of the optical performance of volumetric absorbers of variable porosity conducted by means of detailed numerical ray tracing simulations. Sections of variable macroscopic porosity along the absorber depth were constructed by the fractal growth of single-cell structures. Measures of performance analyzed include optical reflection losses from the absorber front and rear faces, penetration of radiation inside the absorber volume, and radiation absorption as a function of absorber depth. The effects of engineering design parameters such as absorber length and wall thickness, material reflectance and porosity distribution on the optical performance of absorbers are discussed, and general design guidelines are given.
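As a rough, hedged illustration of how radiation penetrates a graded-porosity absorber, consider a one-dimensional Beer-Lambert layer model (not the paper's ray-tracing simulation; the extinction coefficient `kappa` and the porosity profile are invented):

```python
import math

def absorption_profile(porosities, kappa, dz):
    """Fraction of incident flux absorbed in each layer (Beer-Lambert law)."""
    flux, absorbed = 1.0, []
    for p in porosities:
        ext = kappa * (1.0 - p)              # denser layer -> stronger extinction
        remaining = flux * math.exp(-ext * dz)
        absorbed.append(flux - remaining)
        flux = remaining
    return absorbed, flux                    # per-layer absorption, transmitted flux

# Porosity decreasing with absorber depth, as in the graded structures above.
layers = [0.9 - 0.05 * i for i in range(10)]
absorbed, transmitted = absorption_profile(layers, kappa=50.0, dz=0.01)
print(round(sum(absorbed) + transmitted, 9))  # 1.0: all incident energy accounted for
```

Varying the porosity profile in such a model gives a first feel for the trade-off the paper studies: higher front porosity lets radiation penetrate deeper before being absorbed.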

  13. Analysis of RDSS positioning accuracy based on RNSS wide area differential technique

    NASA Astrophysics Data System (ADS)

    Xing, Nan; Su, RanRan; Zhou, JianHua; Hu, XiaoGong; Gong, XiuQiang; Liu, Li; He, Feng; Guo, Rui; Ren, Hui; Hu, GuangMing; Zhang, Lei

    2013-10-01

    The BeiDou Navigation Satellite System (BDS) provides a Radio Navigation Service System (RNSS) as well as a Radio Determination Service System (RDSS). RDSS users can obtain positioning by responding to Master Control Center (MCC) inquiries via signals transmitted through a GEO satellite transponder. The positioning result can be calculated with an elevation constraint by the MCC. The primary error sources affecting RDSS positioning accuracy are the RDSS signal transceiver delay, the atmospheric transmission delay and the GEO satellite position error. During GEO orbit maneuvers, poor orbit forecast accuracy significantly impacts RDSS services. A real-time 3-D orbital correction method based on the wide-area differential technique is proposed to correct the orbital error. Results from the observations show that the method can successfully improve positioning precision during orbital maneuvers, independently of the RDSS reference station. This improvement can reach 50% at maximum. Accurate calibration of the RDSS signal transceiver delay and the digital elevation map may play a critical role in high-precision RDSS positioning services.

  14. An Analysis Technique for Active Neutron Multiplicity Measurements Based on First Principles

    SciTech Connect

    Evans, Louise G; Goddard, Braden; Charlton, William S; Peerani, Paolo

    2012-08-13

    Passive neutron multiplicity counting is commonly used to quantify the total mass of plutonium in a sample, without prior knowledge of the sample geometry. However, passive neutron counting is less applicable to uranium measurements due to the low spontaneous fission rates of uranium. Active neutron multiplicity measurements are therefore used to determine the {sup 235}U mass in a sample. Unfortunately, there are still additional challenges to overcome for uranium measurements, such as the coupling of the active source and the uranium sample. Techniques, such as the coupling method, have been developed to help reduce the dependence of active uranium measurements on calibration curves, although they still require known standards of similar geometry. An advanced active neutron multiplicity measurement method is being developed by Texas A&M University, in collaboration with Los Alamos National Laboratory (LANL), in an attempt to overcome the calibration curve requirements. This method can be used to quantify the {sup 235}U mass in a sample containing uranium without using calibration curves. Furthermore, this method is based on existing detectors and nondestructive assay (NDA) systems, such as the LANL Epithermal Neutron Multiplicity Counter (ENMC). This method uses an inexpensive boron carbide liner to shield the uranium sample from thermal and epithermal neutrons while allowing fast neutrons to reach the sample. Due to the relatively low and constant energy-dependent fission and absorption cross-sections of the uranium isotopes at high neutron energies, fast neutrons can penetrate the sample without significant attenuation. Fast neutron interrogation therefore creates a homogeneous fission rate in the sample, allowing first-principles methods to be used to determine the {sup 235}U mass in the sample. This paper discusses the measurement method concept and development, including measurements and simulations performed to date, as well as the potential

  15. Cost Analysis of MRI Services in Iran: An Application of Activity Based Costing Technique

    PubMed Central

    Bayati, Mohsen; Mahboub Ahari, Alireza; Badakhshan, Abbas; Gholipour, Mahin; Joulaei, Hassan

    2015-01-01

    Background: The considerable development of MRI technology in diagnostic imaging, the high cost of MRI technology and controversial issues concerning official charges (tariffs) have been the main motivations to define and implement this study. Objectives: The present study aimed to calculate the unit cost of MRI services using activity-based costing (ABC) as a modern cost accounting system and to fairly compare the calculated unit costs with official charges (tariffs). Materials and Methods: We included both direct and indirect costs of MRI services delivered in fiscal year 2011 in Shiraz Shahid Faghihi hospital. The direct allocation method was used for distribution of overhead costs. We used a micro-costing approach to calculate the unit cost of all different MRI services. Clinical cost data were retrieved from the hospital registration system. The straight-line method was used for depreciation cost estimation. To cope with uncertainty and to increase the robustness of the study results, unit costs of 33 MRI services were calculated in terms of two scenarios. Results: The total annual cost of the MRI activity center (AC) was calculated at USD 400,746 and USD 532,104 based on the first and second scenarios, respectively. Ten percent of the total cost was allocated from supportive departments. The annual variable costs of the MRI center were calculated at USD 295,904. Capital costs measured at USD 104,842 and USD 236,200 resulted from the first and second scenarios, respectively. Existing tariffs for more than half of MRI services were above the calculated costs. Conclusion: As a public hospital, there are considerable limitations in both the financial and administrative databases of Shahid Faghihi hospital. Labor cost has the greatest share of the total annual cost of Shahid Faghihi hospital. The gap between unit costs and tariffs implies that the claim for extra budget from health providers may not be relevant for all services delivered by the studied MRI center. With some adjustments, ABC could be implemented in MRI
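The allocation logic of ABC as applied above can be sketched numerically. All figures below are invented, and spreading overhead in proportion to direct cost is one common allocation base, not necessarily the one used in the study:

```python
def abc_unit_costs(direct_costs, overhead_pool, allocation_share, volumes):
    """Unit cost per service = (direct + allocated overhead) / annual volume."""
    allocated = overhead_pool * allocation_share      # overhead assigned to this activity center
    total_direct = sum(direct_costs.values())
    unit_costs = {}
    for service, direct in direct_costs.items():
        # spread the allocated overhead in proportion to each service's direct cost
        service_total = direct + allocated * direct / total_direct
        unit_costs[service] = service_total / volumes[service]
    return unit_costs

# Invented annual direct costs (USD) and service volumes for two MRI services.
direct = {"brain_mri": 120_000.0, "knee_mri": 80_000.0}
volumes = {"brain_mri": 2000, "knee_mri": 1000}
units = abc_unit_costs(direct, overhead_pool=50_000.0,
                       allocation_share=0.10, volumes=volumes)
print(round(units["brain_mri"], 2), round(units["knee_mri"], 2))  # 61.5 82.0
```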

  16. Experimental investigation of evanescence-based infrared biodetection technique for micro-total-analysis systems

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, Arvind; Packirisamy, Muthukumaran

    2009-09-01

    The advent of microoptoelectromechanical systems (MOEMS) and their integration with other technologies such as microfluidics, microthermal systems, immunoproteomics, etc. has led to the concept of integrated micro-total-analysis systems (μTAS), or Lab-on-a-Chip, for chemical and biological applications. Recently, research and development of μTAS have grown significantly across several biodetection sciences, in situ medical diagnoses, and point-of-care testing applications. However, it is essential to develop suitable biophysical label-free detection methods for the success, reliability, and ease of use of μTAS. We proposed an infrared (IR)-based evanescent wave detection system on the silicon-on-insulator platform for biodetection with μTAS. The system operates on the principle of bio-optical interaction that occurs due to the evanescence of light from the waveguide device. The feasibility of biodetection has been experimentally investigated through the detection of horseradish peroxidase upon its reaction with hydrogen peroxide.

  17. Nuclear spectroscopy pulse height analysis based on digital signal processing techniques

    SciTech Connect

    Simoes, J.B.; Simoes, P.C.P.S.; Correia, C.M.B.A.

    1995-08-01

    A digital approach to pulse height analysis is presented. It consists of digitizing the entire pulse using a flash analog-to-digital converter (ADC), with the pulse height estimated by a floating-point digital signal processor (DSP) as one parameter of a model best fitting the pulse samples. The differential nonlinearity (DNL) is reduced by simultaneously adding to the pulse, prior to its digitization, two analog signals provided by a digital-to-analog converter (DAC). One of them is a small-amplitude dither signal used to eliminate a bias introduced by the fitting algorithm. The other, with large amplitude, corrects the ADC nonlinearities by a method similar to the well-known Gatti's sliding scale. The simulations carried out showed that, using a 12-bit flash ADC, a 14-bit DAC and a dedicated floating-point DSP performing a polynomial fit to the samples around the pulse peak, it is actually possible to process about 10,000 events per second, with a constant-height pulse dispersion of only 4 on 8,192 channels and very good differential linearity. A prototype system based on the Texas Instruments floating-point DSP TMS320C31, built following the presented methodology, has already been tested and performed as expected.
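One ingredient of such model-based height estimation can be illustrated with the classic three-point parabolic fit around the digitised maximum. This is a minimal sketch, not the prototype's actual polynomial fitting routine:

```python
def parabolic_peak(samples):
    """Sub-sample peak (position, height) from a parabola through the 3 samples at the maximum."""
    i = max(range(len(samples)), key=samples.__getitem__)
    y0, y1, y2 = samples[i - 1], samples[i], samples[i + 1]
    delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # vertex offset from sample i
    height = y1 - 0.25 * (y0 - y2) * delta           # parabola value at the vertex
    return i + delta, height

# A digitised pulse whose true peak lies between samples 2 and 3.
pulse = [0.0, 4.0, 9.0, 9.0, 4.0, 0.0]
pos, height = parabolic_peak(pulse)
print(pos, height)  # 2.5 9.625
```

In a real analyser the dither signal mentioned above would be subtracted again after fitting; here we only show the fit itself.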

  18. A new approach to the analysis of alpha spectra based on neural network techniques

    NASA Astrophysics Data System (ADS)

    Baeza, A.; Miranda, J.; Guillén, J.; Corbacho, J. A.; Pérez, R.

    2011-10-01

    The analysis of alpha spectra requires good radiochemical procedures in order to obtain well differentiated alpha peaks in the spectrum, and the easiest way to analyze them is by directly summing the counts obtained in the Regions of Interest (ROIs). However, the low-energy tails of the alpha peaks frequently make this simple approach unworkable because some peaks partially overlap. Many fitting procedures have been proposed to solve this problem, most of them based on semi-empirical mathematical functions that emulate the shape of a theoretical alpha peak. The main drawback of these methods is that the great number of fitting parameters used means that their physical meaning is obscure or completely lacking. We propose another approach—the application of an artificial neural network. Instead of fitting the experimental data to a mathematical function, the fit is carried out by an artificial neural network (ANN) that has previously been trained to model the shape of an alpha peak using as training patterns several polonium spectra obtained from actual samples analyzed in our laboratory. In this sense, the ANN is able to learn the shape of an actual alpha peak. We have designed such an ANN as a feed-forward multi-layer perceptron with supervised training based on a back-propagation algorithm. The fitting procedure is based on the experimental observables that are characteristic of alpha peaks—the number of counts of the maximum and several peak widths at different heights. Polonium isotope spectra were selected because the alpha peaks corresponding to 208Po, 209Po, and 210Po are monoenergetic and well separated. The uncertainties introduced by this fitting procedure were less than the counting uncertainties. This new approach was applied to the problem of resolving overlapping peaks. Firstly, a theoretical study was carried out by artificially overlapping alpha peaks from actual samples in order to test the ability of the ANN to resolve each peak. Then, the ANN
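The peak observables fed to the network, the counts at the maximum and the peak widths at several fractional heights, can be extracted as below. The linear-interpolation width routine is our own illustrative choice, not the authors' exact procedure:

```python
def width_at_fraction(spectrum, frac):
    """Peak width (in channels) at frac * maximum, by linear interpolation."""
    peak = max(spectrum)
    level = frac * peak
    i_max = spectrum.index(peak)

    def crossing(step):
        # walk from the maximum until the next sample drops below the level,
        # then interpolate the exact crossing channel
        i = i_max
        while 0 < i < len(spectrum) - 1 and spectrum[i + step] >= level:
            i += step
        return i + step * (spectrum[i] - level) / (spectrum[i] - spectrum[i + step])

    return crossing(+1) - crossing(-1)

# Toy monoenergetic alpha peak: counts per channel.
spec = [0, 1, 4, 10, 4, 1, 0]
features = [max(spec),                      # counts at the maximum
            width_at_fraction(spec, 0.5),   # full width at half maximum
            width_at_fraction(spec, 0.1)]   # width near the peak base
print(features)
```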

  19. Advanced NMR-based techniques for pore structure analysis of coal. Final project report

    SciTech Connect

    Smith, D.M.; Hua, D.W.

    1996-02-01

    During the 3 year term of the project, new methods have been developed for characterizing the pore structure of porous materials such as coals, carbons, and amorphous silica gels. In general, these techniques revolve around; (1) combining multiple techniques such as small-angle x-ray scattering (SAXS) and adsorption of contrast-matched adsorbates or {sup 129}Xe NMR and thermoporometry (the change in freezing point with pore size), (2) combining adsorption isotherms over several pressure ranges to obtain a more complete description of pore filling, or (3) applying NMR ({sup 129}Xe, {sup 14}N{sub 2}, {sup 15}N{sub 2}) techniques with well-defined porous solids with pores in the large micropore size range (>1 nm).

  20. A meta-analysis of cognitive-based behaviour change techniques as interventions to improve medication adherence

    PubMed Central

    Easthall, Claire; Song, Fujian; Bhattacharya, Debi

    2013-01-01

    Objective To describe and evaluate the use of cognitive-based behaviour change techniques as interventions to improve medication adherence. Design Systematic review and meta-analysis of interventions to improve medication adherence. Data sources Search of the MEDLINE, EMBASE, PsycINFO, CINAHL and The Cochrane Library databases from the earliest year to April 2013 without language restriction. References of included studies were also screened to identify further relevant articles. Review methods We used predefined criteria to select randomised controlled trials describing a medication adherence intervention that used Motivational Interviewing (MI) or other cognitive-based techniques. Data were extracted and risk of bias was assessed by two independent reviewers. We conducted the meta-analysis using a random effects model and Hedges’ g as the measure of effect size. Results We included 26 studies (5216 participants) in the meta-analysis. Interventions most commonly used MI, but many used techniques such as aiming to increase the patient's confidence and sense of self-efficacy, encouraging support-seeking behaviours and challenging negative thoughts, which were not specifically categorised. Interventions were most commonly delivered from community-based settings by routine healthcare providers such as general practitioners and nurses. An effect size (95% CI) of 0.34 (0.23 to 0.46) was calculated and was statistically significant (p < 0.001). Heterogeneity was high with an I2 value of 68%. Adjustment for publication bias generated a more conservative estimate of summary effect size of 0.21 (0.08 to 0.33). The majority of subgroup analyses produced statistically non-significant results. Conclusions Cognitive-based behaviour change techniques are effective interventions eliciting improvements in medication adherence that are likely to be greater than the behavioural and educational interventions largely used in current practice. Subgroup analyses suggest that these
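The random-effects pooling step can be sketched as follows, using the DerSimonian-Laird estimator (a common choice; the abstract does not state which estimator was used) and invented study data:

```python
import math

def random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effect sizes."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    mean_fixed = sum(wi * g for wi, g in zip(w, effects)) / sw
    q = sum(wi * (g - mean_fixed) ** 2 for wi, g in zip(w, effects))
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(ws * g for ws, g in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Invented per-study Hedges' g values and variances (not the review's data).
g, ci = random_effects([0.30, 0.45, 0.25], [0.010, 0.020, 0.015])
print(round(g, 2))  # 0.32
```

When the heterogeneity statistic Q falls below its degrees of freedom, tau-squared is truncated to zero and the pooling reduces to the fixed-effect estimate, as happens with this toy data.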

  1. Novel Laser-Based Technique is Ideal for Real-Time Environmental Analysis

    ERIC Educational Resources Information Center

    Journal of Chemical Education, 2005

    2005-01-01

    Ocean Optics offers laser-induced breakdown spectrometer systems (LIBS) that can be used to identify light to heavy metals in a variety of sample types and geometries in environmental analysis applications. LIBS are versatile, real-time, high-resolution analyzers for qualitative analysis, in less than one second, of every element in solids,…

  2. An alternative Shell inversion technique - Analysis and validation based on COSMIC and ionosonde data

    NASA Astrophysics Data System (ADS)

    Lin, Jian; Wu, Yun; Qiao, Xuejun; Zhou, Yiyan

    2012-01-01

    Multi-channel Global Positioning System (GPS) carrier phase signals, received by the six low Earth orbiting (LEO) satellites of the Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) program, were used to undertake active limb sounding of the Earth's atmosphere and ionosphere via radio occultation. In ionospheric radio occultation (IRO) data processing, the standard Shell inversion technique (SIT), transformed from the traditional Abel inversion technique (AIT), is widely used and can retrieve good electron density profiles. In this paper, an alternative SIT method is proposed. The comparison between the different inversion techniques is discussed, taking advantage of the availability of COSMIC datasets. Moreover, the occultation results obtained from the SIT and the alternative SIT at 500 km and 800 km are compared with ionosonde measurements. The electron densities from the alternative SIT show excellent consistency with those from the SIT, with strong correlations over 0.996 and 0.999 at altitudes of 500 km and 800 km, respectively, and the peak electron densities (NmF2) from the alternative SIT are equivalent to those from the SIT, with correlation coefficients of 0.839 vs. 0.844 and 0.907 vs. 0.909 when compared to those from the ionosondes. These results show that: (1) the NmF2 and hmF2 retrieved from the SIT and the alternative SIT are highly consistent, and in good agreement with those measured by ionosondes; (2) no matter which inversion technique is used, the occultation results at the higher orbits (˜800 km) are better than those at the lower orbits (˜500 km).
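Shell-type inversions of this family reduce, in their simplest form, to solving an upper-triangular system from the outermost shell inwards ("onion peeling"). The sketch below is illustrative only; the paper's SIT variant differs in detail, and the geometry is invented:

```python
def onion_peel(obs, path):
    """Solve obs[i] = sum_j path[i][j] * density[j], where ray i only crosses
    shells j >= i (an upper-triangular system), starting from the outermost shell."""
    n = len(obs)
    density = [0.0] * n
    for i in range(n - 1, -1, -1):
        outer = sum(path[i][j] * density[j] for j in range(i + 1, n))
        density[i] = (obs[i] - outer) / path[i][i]
    return density

# Invented geometry: path lengths of three rays through three shells
# (shell 0 innermost), and a known density profile to recover.
path = [
    [2.0, 1.5, 1.0],
    [0.0, 2.5, 1.2],
    [0.0, 0.0, 3.0],
]
true_density = [5.0, 3.0, 1.0]
obs = [sum(p * d for p, d in zip(row, true_density)) for row in path]

recovered = onion_peel(obs, path)
print([round(d, 6) for d in recovered])  # [5.0, 3.0, 1.0]
```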

  3. Advanced NMR-based techniques for pore structure analysis of coal

    SciTech Connect

    Smith, D.M.

    1992-01-01

    One of the main problems in coal utilization is the inability to properly characterize its complex pore structure. Coals typically have micro/ultra-micro pores, but they also exhibit meso- and macroporosity. Conventional pore size techniques (adsorption/condensation, mercury porosimetry) are limited because of this broad pore size range, the microporosity, the reactive nature of coal, the requirement that samples be completely dried, and network/percolation effects. Small-angle scattering is limited because it probes both open and closed pores. Although one would not expect any single technique to provide a satisfactory description of a coal's structure, it is apparent that better techniques are necessary. We believe that measurement of the NMR parameters of various gas-phase and adsorbed-phase NMR-active probes can provide the resolution to this problem. We now have two suites of well-characterized microporous materials including oxides (zeolites and silica gel) and activated carbons from our industrial partner, Air Products in Allentown, PA. Our current work may be divided into three areas: small-angle X-ray scattering (SAXS), adsorption, and NMR.

  4. Characterization of electrically-active defects in ultraviolet light-emitting diodes with laser-based failure analysis techniques

    NASA Astrophysics Data System (ADS)

    Miller, Mary A.; Tangyunyong, Paiboon; Cole, Edward I.

    2016-01-01

    Laser-based failure analysis techniques demonstrate the ability to quickly and non-intrusively screen deep ultraviolet light-emitting diodes (LEDs) for electrically-active defects. In particular, two laser-based techniques, light-induced voltage alteration and thermally-induced voltage alteration, generate applied voltage maps (AVMs) that provide information on electrically-active defect behavior including turn-on bias, density, and spatial location. Here, multiple commercial LEDs were examined and found to have dark defect signals in the AVM indicating a site of reduced resistance or leakage through the diode. The existence of the dark defect signals in the AVM correlates strongly with an increased forward-bias leakage current. This increased leakage is not present in devices without AVM signals. Transmission electron microscopy analysis of a dark defect signal site revealed a dislocation cluster through the pn junction. The cluster included an open core dislocation. Even though LEDs with few dark AVM defect signals did not correlate strongly with power loss, direct association between increased open core dislocation densities and reduced LED device performance has been presented elsewhere [M. W. Moseley et al., J. Appl. Phys. 117, 095301 (2015)].

  5. A new QMR-based technique for body composition analysis in infants

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Accurate assessment and tracking of infant body composition is useful in evaluation of the amount and quality of weight gain, which can provide key information in both clinical and research settings. Thus, body composition analysis (BCA) results can be used to monitor and evaluate infant growth patt...

  6. FBGs cascade interrogation technique based on wavelength-to-delay mapping and KLT analysis

    NASA Astrophysics Data System (ADS)

    Hervás, J.; Barrera, D.; Fernández-Pousa, Carlos R.; Sales, S.

    2016-05-01

    The Karhunen-Loève transform (KLT) is applied to the coarsely sampled impulse response generated by an FBG cascade in order to calculate the temperature change experienced by the FBGs. Thanks to a dispersive medium, the wavelength change produced by the temperature change translates into a delay shift of the sample generated by an FBG; this delay shift is recorded in the eigenvalues calculated by the KLT routine, allowing the temperature variation to be measured. Although the FBG samples are represented by only four points, a continuous temperature measurement can be performed thanks to the KLT algorithm. This amounts to a three-order-of-magnitude reduction in the number of points, giving the method low computational complexity. Simulations are performed to validate the interrogation technique and estimate its performance, and an experimental example is provided to demonstrate real operation.

  7. Scatterometry based 65nm node CDU analysis and prediction using novel reticle measurement technique

    NASA Astrophysics Data System (ADS)

    van Ingen Schenau, Koen; Vanoppen, Peter; van der Laan, Hans; Kiers, Ton; Janssen, Maurice

    2005-05-01

    Scatterometry was selected as the CD metrology for 65nm CDU system qualification. Because of the dominant reticle residuals component in the 65nm CD budget for dense lines, significant improvements in reticle CD metrology were required. SEM is an option but requires extensive measurements because of the scatterometry grating modules. Therefore a new technique, called SERUM (Spot sensor Enabled Reticle Uniformity Measurements), was developed. It uses the on-board exposure-system metrology sensors to measure transmission, which is converted to reticle CD. Its advantage is that an entire reticle is measured within two minutes with good repeatability. The reticle fingerprints correlate well with the SEM measurements. With the improvements in reticle CD metrology offered by SEM and SERUM, the reticle residuals component no longer dominates the 65nm budget for CDU system qualification.

  8. Application of the windowed-Fourier-transform-based fringe analysis technique for investigating temperature and concentration fields in fluids.

    PubMed

    Mohanan, Sharika; Srivastava, Atul

    2014-04-10

    The present work is concerned with the development and application of a novel fringe analysis technique based on the principles of the windowed Fourier transform (WFT) for the determination of temperature and concentration fields from interferometric images for a range of heat and mass transfer applications. Based on the extent of the noise level associated with the experimental data, the technique has been coupled with two different phase unwrapping methods for phase extraction: the Itoh algorithm and the quality-guided phase unwrapping technique. In order to generate the experimental data, a range of experiments has been carried out, including cooling of a vertical flat plate under free convection conditions, combustion of mono-propellant flames, and growth of organic as well as inorganic crystals from their aqueous solutions. The flat plate and combustion experiments are modeled as heat transfer applications wherein the interest is in determining the whole-field temperature distribution. Aqueous-solution-based crystal growth experiments are performed to simulate mass transfer phenomena, where the interest is in determining the two-dimensional solute concentration field around the growing crystal. A Mach-Zehnder interferometer has been employed to record the path-integrated quantity of interest (temperature and/or concentration) in the form of interferometric images. The potential of the WFT method has also been demonstrated on numerically simulated phase data for varying noise levels, and the accuracy of phase extraction has been quantified in terms of root mean square errors. Three levels of noise, i.e., 0%, 10%, and 20%, have been considered. Results of the present study show that the WFT technique allows an accurate extraction of phase values that can subsequently be converted into two-dimensional temperature and/or concentration distribution fields. Moreover, since WFT is a local processing technique, speckle patterns and the inherent
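
    The core step of Fourier-family fringe analysis can be illustrated with the plain (non-windowed) Fourier-transform variant, a simplification of the WFT the authors use: band-pass the carrier sideband of a fringe pattern, invert, and subtract the carrier to recover the phase map that encodes temperature or concentration. This is a sketch on synthetic 1-D data, not the paper's implementation; carrier frequency, phase amplitude, and window width are assumptions.

```python
import numpy as np

# Synthetic 1-D fringe pattern: I(x) = a + b*cos(2*pi*k0*x/N + phi(x))
N, k0 = 256, 32                          # samples and carrier frequency (cycles)
x = np.arange(N)
phi = 0.5 * np.sin(2 * np.pi * x / N)    # phase to recover (e.g. temperature field)
I = 1.0 + 0.5 * np.cos(2 * np.pi * k0 * x / N + phi)

# Fourier fringe analysis: isolate the +k0 sideband, invert, remove the carrier.
F = np.fft.fft(I)
mask = np.zeros(N, dtype=bool)
mask[k0 - 15:k0 + 16] = True             # band-pass window around the carrier
c = np.fft.ifft(np.where(mask, F, 0))    # complex analytic fringe signal
phi_rec = np.angle(c * np.exp(-2j * np.pi * k0 * x / N))
```

The windowed version applies the same idea locally, which is what makes it robust to speckle; here the phase is small enough that no unwrapping step (Itoh or quality-guided) is needed.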

  9. DATA ANALYSIS TECHNIQUES

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Food scientists use standards and calibrations to relate the concentration of a compound of interest to the instrumental response. The techniques used include classical, single point, and inverse calibrations, as well as standard addition and internal standards. Several fundamental criteria -- sel...

  10. 2D Flood Modelling Using Advanced Terrain Analysis Techniques And A Fully Continuous DEM-Based Rainfall-Runoff Algorithm

    NASA Astrophysics Data System (ADS)

    Nardi, F.; Grimaldi, S.; Petroselli, A.

    2012-12-01

    Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS) provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic model optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series, which are routed along the channel using a two-dimensional hydraulic model for a detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during extreme hydrologic events: channel runoff generation, propagation, and overland flow within the floodplain domain. This physically based model obviates the need for synthetic design hyetograph and hydrograph estimation, which constitutes the main source of subjective analysis and uncertainty in standard flood-mapping methods. Selected case studies show the results and performance of the proposed procedure with respect to standard event-based approaches.

  11. [Analyzer Design of Atmospheric Particulate Matter's Concentration and Elemental Composition Based on β and X-Ray's Analysis Techniques].

    PubMed

    Ge, Liang-quan; Liu, He-fan; Zeng, Guo-qiang; Zhang, Qing-xian; Ren, Mao-qiang; Li, Dan; Gu, Yi; Luo, Yao-yao; Zhao, Jian-kun

    2016-03-01

    Monitoring atmospheric particulate matter requires real-time analysis of particulate concentrations and of the types and contents of the elements they contain. An analyzer based on β-ray and X-ray analysis techniques was designed to meet these demands. Applying the β-ray attenuation law and the principle of energy-dispersive X-ray fluorescence analysis, the paper introduces the analyzer's overall design scheme, structure, FPGA circuit hardware, and software. The analyzer measures atmospheric particulate matter concentrations, elements, and their contents by on-line analysis. Pure elemental particle standard samples were prepared by deposition, and these standard samples were used to calibrate the analyzer. The analyzer can monitor particulate matter concentrations (TSP, PM10, and PM2.5) together with 30 elements and their contents. Comparing the analyzer's measurements with the Chengdu Environmental Protection Agency's particulate monitoring results, high consistency was obtained in an application in the eastern suburbs of Chengdu. The analyzer is also highly sensitive to particulate matter containing heavy-metal elements (such as As, Hg, Cd, Cr, and Pb). Technical performance testing showed characteristics such as continuous measurement, a low detection limit, quick analysis, and ease of use. In conclusion, the analyzer meets the demands of analyzing atmospheric particulate matter concentrations, elements, and their contents in urban environmental monitoring. PMID:27400540
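
    The β-ray attenuation law invoked in this record has a Beer-Lambert form, I = I0·exp(−μx), where x is the areal density of particulate matter deposited on the sampling filter; inverting it gives the mass concentration. The sketch below is illustrative only: the attenuation coefficient, spot area, and sampled air volume are made-up values, not the instrument's constants.

```python
import math

# Beta-ray attenuation (Beer-Lambert form): I = I0 * exp(-mu * x), where x is
# the areal density (mg/cm^2) of particulate matter deposited on the filter.
# All parameter values used with this function are illustrative assumptions.
def mass_concentration_ug_m3(I0, I, mu_cm2_mg, spot_area_cm2, air_volume_m3):
    x = math.log(I0 / I) / mu_cm2_mg           # deposited areal density, mg/cm^2
    deposited_ug = x * spot_area_cm2 * 1000.0  # total deposited mass, micrograms
    return deposited_ug / air_volume_m3        # mass concentration, ug/m^3
```

Measuring the count rates I0 (clean filter) and I (loaded filter) thus yields TSP/PM10/PM2.5 concentrations without weighing the filter.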

  12. Application of an ensemble technique based on singular spectrum analysis to daily rainfall forecasting.

    PubMed

    Baratta, Daniela; Cicioni, Giovambattista; Masulli, Francesco; Studer, Léonard

    2003-01-01

    In previous work, we proposed a constructive methodology for temporal data learning supported by results and prescriptions related to the embedding theorem, using singular spectrum analysis both to reduce the effects of possible discontinuities in the signal and to implement an efficient ensemble method. In this paper we present new results concerning the application of this approach to forecasting the individual rainfall intensity series collected by 135 stations distributed over the Tiber basin. The average RMS error of the obtained forecasts is less than 3 mm of rain. PMID:12672433
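
    The singular spectrum analysis (SSA) step underlying this record follows a standard recipe: embed the series in a trajectory matrix, decompose it by SVD, keep the leading components, and diagonal-average back to a series. The following is a minimal textbook sketch, not the authors' forecasting system; the window length and component count are assumptions.

```python
import numpy as np

def ssa_reconstruct(series, window, n_components):
    """Reconstruct a series from its leading singular-spectrum components:
    embed, decompose, truncate, diagonal-average (Hankelization)."""
    N, L = len(series), window
    K = N - L + 1
    X = np.column_stack([series[i:i + L] for i in range(K)])  # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(n_components))
    rec, cnt = np.zeros(N), np.zeros(N)
    for j in range(K):                    # average over the anti-diagonals
        rec[j:j + L] += Xk[:, j]
        cnt[j:j + L] += 1.0
    return rec / cnt

# Demo: denoise a periodic signal, the kind of smoothing SSA provides
# before feeding an ensemble of forecasters (synthetic data).
rng = np.random.default_rng(0)
t = np.arange(200)
clean = np.sin(2 * np.pi * t / 20.0)
noisy = clean + 0.3 * rng.standard_normal(200)
denoised = ssa_reconstruct(noisy, window=40, n_components=2)
```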

  13. Frontier-based techniques in measuring hospital efficiency in Iran: a systematic review and meta-regression analysis

    PubMed Central

    2013-01-01

    Background In recent years, there has been growing interest in measuring the efficiency of hospitals in Iran and several studies have been conducted on the topic. The main objective of this paper was to review studies in the field of hospital efficiency and examine the estimated technical efficiency (TE) of Iranian hospitals. Methods Persian and English databases were searched for studies related to measuring hospital efficiency in Iran. Ordinary least squares (OLS) regression models were applied for statistical analysis. The PRISMA guidelines were followed in the search process. Results A total of 43 efficiency scores from 29 studies were retrieved and used to approach the research question. Data envelopment analysis was the principal frontier efficiency method in the estimation of efficiency scores. The pooled estimate of mean TE was 0.846 (±0.134). There was a considerable variation in the efficiency scores between the different studies performed in Iran. There were no differences in efficiency scores between data envelopment analysis (DEA) and stochastic frontier analysis (SFA) techniques. The reviewed studies are generally similar and suffer from similar methodological deficiencies, such as no adjustment for case mix and quality of care differences. The results of OLS regression revealed that studies that included more variables and more heterogeneous hospitals generally reported higher TE. Larger sample size was associated with reporting lower TE. Conclusions The features of frontier-based techniques had a profound impact on the efficiency scores among Iranian hospital studies. These studies suffer from major methodological deficiencies and were of sub-optimal quality, limiting their validity and reliability. It is suggested that improving data collection and processing in Iranian hospital databases may have a substantial impact on promoting the quality of research in this field. PMID:23945011
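
    Data envelopment analysis, the principal frontier method in the reviewed studies, reduces to one linear program per hospital. Below is a minimal input-oriented CCR DEA sketch (constant returns to scale) using `scipy.optimize.linprog`; the three-hospital data set is invented purely to exercise the code and is not from the review.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA. X: (m inputs, n units), Y: (r outputs, n units).
    Returns technical-efficiency scores theta in (0, 1] for each unit."""
    m, n = X.shape
    r = Y.shape[0]
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1..lambda_n]; minimize theta
        c = np.r_[1.0, np.zeros(n)]
        # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[:, [o]], X])
        # outputs: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.hstack([np.zeros((r, 1)), -Y])
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[:, o]],
                      bounds=[(0, None)] * (n + 1))
        scores.append(res.x[0])
    return np.array(scores)

# Toy data: one input (e.g., staffed beds) and one output (e.g., admissions)
# for three hypothetical hospitals.
X = np.array([[2.0, 4.0, 8.0]])
Y = np.array([[2.0, 3.0, 4.0]])
theta = dea_ccr_input(X, Y)
```

In this toy frontier the first hospital is efficient (theta = 1) and the others are scaled against it, mirroring how TE scores like the review's pooled 0.846 arise.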

  14. Model building techniques for analysis.

    SciTech Connect

    Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.

    2009-09-01

    The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others who contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM is currently a time-consuming effort; the turnaround time for analysis results needs to be decreased for it to have an impact on overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that, in summary, govern the way features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of creating the ASM from the DSM.

  15. Mechanism analysis on biofouling detection based on optical fiber sensing technique

    NASA Astrophysics Data System (ADS)

    Ma, Huiping; Yuan, Feng; Liu, Yongmeng; Jiang, Xiuzhen

    2010-08-01

    Increasing attention is being paid to on-line monitoring of biofouling in industrial water process systems. A biofouling detection mechanism based on optical fiber sensing technology is put forward in this paper. With biofouling formation and its optical characteristics studied, together with the relation between light intensity and refractive index, a schematic diagram of an optical fiber self-referencing detection system and a technological flowchart are presented. Immunity to electromagnetic interference and other influencing factors, by which the precision is greatly improved, is another remarkable characteristic. The absorption spectrum of the fluid-medium molecules is measured by infrared spectroscopy, and impurities are analyzed through the characteristic fingerprints of different liquids. Other pollutant sources can be identified by means of infrared spectra and artificial neural network (ANN) algorithms. The approach can also be used in other fields such as mining, environmental protection, medical treatment, and the transportation of oil, gas, and water.
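
    One simple way the relation between light intensity and refractive index can drive such a sensor is normal-incidence Fresnel reflection at the fiber end-face: as a biofilm (index closer to the fiber core) replaces water, the reflected intensity drops. This is only an illustrative model, not the paper's system; the refractive indices are typical textbook values, assumed here.

```python
# Normal-incidence Fresnel reflectance at the fiber/medium interface.
# Indices below are illustrative assumptions (silica fiber, water, biofilm).
def fresnel_reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

n_fiber, n_water, n_biofilm = 1.45, 1.33, 1.38
R_clean = fresnel_reflectance(n_fiber, n_water)     # clean end-face in water
R_fouled = fresnel_reflectance(n_fiber, n_biofilm)  # biofilm-coated end-face
```

A self-referencing scheme then tracks the ratio of the fouled channel to a clean reference channel, which cancels source-power drift.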

  16. Characterization of electrically-active defects in ultraviolet light-emitting diodes with laser-based failure analysis techniques

    DOE PAGESBeta

    Miller, Mary A.; Tangyunyong, Paiboon; Cole, Edward I., Jr.

    2016-01-12

    In this study, laser-based failure analysis techniques demonstrate the ability to quickly and non-intrusively screen deep ultraviolet light-emitting diodes (LEDs) for electrically-active defects. In particular, two laser-based techniques, light-induced voltage alteration and thermally-induced voltage alteration, generate applied voltage maps (AVMs) that provide information on electrically-active defect behavior including turn-on bias, density, and spatial location. Here, multiple commercial LEDs were examined and found to have dark defect signals in the AVM indicating a site of reduced resistance or leakage through the diode. The existence of the dark defect signals in the AVM correlates strongly with an increased forward-bias leakage current. This increased leakage is not present in devices without AVM signals. Transmission electron microscopy analysis of a dark defect signal site revealed a dislocation cluster through the pn junction. The cluster included an open core dislocation. Even though LEDs with few dark AVM defect signals did not correlate strongly with power loss, direct association between increased open core dislocation densities and reduced LED device performance has been presented elsewhere [M. W. Moseley et al., J. Appl. Phys. 117, 095301 (2015)].

  17. NEO fireball diversity: energetics-based entry modeling and analysis techniques

    NASA Astrophysics Data System (ADS)

    Revelle, Douglas O.

    2007-05-01

    Observations of fireballs reveal that a number of very different types of materials are routinely entering the atmosphere over a very large height and corresponding mass and energy range. There are five well-known fireball groups. The compositions of these groups can be reliably deduced on a statistical basis based entirely on their observed end-heights in the atmosphere (Ceplecha and McCrosky, 1970; Wetherill and ReVelle, 1981). ReVelle (1983, 2001, 2002, 2005) has also reinterpreted these observations in terms of the properties of porous meteoroids, using the degree to which the observational data can be reproduced with a modern hypersonic aerodynamic entry dynamics approach for porous as well as homogeneous bodies. These data and modeled parameters include the standard properties of drag, deceleration, ablation and fragmentation as well as, most recently, a model of the panchromatic luminous emission from the fireball during progressive atmospheric penetration. Using a recently developed bolide entry modeling code, ReVelle (2005) has systematically examined the behavior of meteoroids using their semi-well-known physical properties. In order to illustrate this, we have investigated a sampling of five of the possible extremes within the NEO bolide population: 1) Type I: Antarctic bolide of 2003, a "small" Aten asteroid; 2) Type I: Park Forest meteorite fall, March 27, 2003; 3) Type I: Mediterranean bolide, June 6, 2002; 4) Type II: Revelstoke meteorite fall, March 31, 1965 (with no luminosity data available); and 5) Type II/III: Tagish Lake meteorite fall, January 18, 2000 (with questionable infrasonic data). In addition to the entry properties, each of these events (except possibly Tagish Lake) also generated mechanical acoustic-gravity waves that were subsequently detected following entry into the atmosphere. Since these waves can also be used to identify key physical properties of these unusual objects, we will also report on our ability to model such

  18. Semi-Automated Classification of Gray Scale Aerial Photographs using Geographic Object Based Image Analysis (GEOBIA) Technique

    NASA Astrophysics Data System (ADS)

    Harb Rabia, Ahmed; Terribile, Fabio

    2013-04-01

    Aerial photography is an important source of high-resolution remotely sensed data. Before 1970, aerial photographs were the only remote-sensing data source for land use and land cover classification. Using these old aerial photographs improves the final output of land use and land cover change detection. However, classic techniques of aerial photograph classification, like manual interpretation or on-screen digitization, require great experience, long processing times and vast effort. A new technique needed to be developed in order to reduce processing time and effort and to give better results. Geographic object based image analysis (GEOBIA) is a newly developed area of Geographic Information Science and remote sensing in which automatic segmentation of images into objects of similar spectral, temporal and spatial characteristics is undertaken. Unlike pixel-based techniques, GEOBIA deals with object properties such as texture, square fit, roundness and many others that can improve classification results. The GEOBIA technique can be divided into two main steps: segmentation and classification. The segmentation step groups adjacent pixels into objects of similar spectral and spatial characteristics; the classification step assigns classes to the generated objects based on the characteristics of the individual objects. This study aimed to use the GEOBIA technique to develop a novel approach for land use and land cover classification of aerial photographs that saves time and effort and gives improved results. Aerial photographs from 1954 of Valle Telesina in Italy were used in this study. Images were rectified and georeferenced in ArcMap using topographic maps. Images were then processed in eCognition software to generate the land use and land cover map of 1954. A decision-tree rule set was developed in eCognition to classify the images, and finally nine classes of general land use and land cover in the study area were recognized (forest, tree stripes, agricultural
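
    The segmentation step this record describes (grouping adjacent pixels of similar spectral values into objects) can be sketched as a simple region-growing flood fill. This is a bare-bones illustration of the idea, not eCognition's multiresolution segmentation; the 4-connectivity and seed-based tolerance rule are assumptions.

```python
import numpy as np
from collections import deque

def segment(image, tol):
    """Minimal object-based segmentation sketch: grow regions of 4-connected
    pixels whose values stay within `tol` of the region's seed value."""
    labels = np.zeros(image.shape, dtype=int)
    next_label = 0
    for seed in np.ndindex(image.shape):
        if labels[seed]:
            continue                      # pixel already belongs to an object
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                        and labels[rr, cc] == 0
                        and abs(image[rr, cc] - image[seed]) <= tol):
                    labels[rr, cc] = next_label
                    queue.append((rr, cc))
    return labels

# Toy gray-scale "aerial photo" with three homogeneous patches.
img = np.array([[0., 0., 5., 5.],
                [0., 0., 5., 5.],
                [9., 9., 9., 9.]])
segments = segment(img, tol=1.0)
```

Per-object properties (mean tone, texture, roundness) would then be computed on each labelled object and fed to the rule-set classifier.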

  19. Get to Understand More from Single-Cells: Current Studies of Microfluidic-Based Techniques for Single-Cell Analysis

    PubMed Central

    Lo, Shih-Jie; Yao, Da-Jeng

    2015-01-01

    This review describes the microfluidic techniques developed for the analysis of a single cell. The characteristics of microfluidics (e.g., the small sample amount required and high-throughput performance) make this tool suitable for answering and solving biological questions of interest about a single cell. This review aims to introduce microfluidic-related techniques for the isolation, trapping and manipulation of a single cell. The major approaches for detection in single-cell analysis are introduced, and the applications of single-cell analysis are then summarized. The review concludes with discussions of the future directions and opportunities of microfluidic systems applied in the analysis of a single cell. PMID:26213918

  20. 2D wavelet-analysis-based calibration technique for flat-panel imaging detectors: application in cone beam volume CT

    NASA Astrophysics Data System (ADS)

    Tang, Xiangyang; Ning, Ruola; Yu, Rongfeng; Conover, David L.

    1999-05-01

    The application of the newly developed flat-panel x-ray imaging detector in cone beam volume CT has attracted increasing interest recently. Due to an imperfect solid-state array manufacturing process, however, defective elements, gain non-uniformity and an offset image unavoidably exist in all kinds of flat-panel x-ray imaging detectors, which cause severe streak and ring artifacts in a cone beam reconstruction image and severely degrade image quality. A calibration technique, in which the artifacts resulting from the defective elements, gain non-uniformity and offset image can be reduced significantly, is presented in this paper. The detection of defective elements is distinctively based upon two-dimensional (2D) wavelet analysis. Because of its inherent localizability in recognizing singularities or discontinuities, wavelet analysis possesses the capability of detecting defective elements over a rather large x-ray exposure range, e.g., 20% to approximately 60% of the dynamic range of the detector used. Three-dimensional (3D) images of a low-contrast CT phantom have been reconstructed from projection images acquired by a flat-panel x-ray imaging detector with and without the calibration process applied. The artifacts caused individually by defective elements, gain non-uniformity and the offset image have been separated and investigated in detail, and their mutual correlations have also been exposed explicitly. The investigation is reinforced by quantitative analysis of the signal-to-noise ratio (SNR) and the image uniformity of the cone beam reconstruction image. It has been demonstrated that the ring and streak artifacts resulting from the imperfect performance of a flat-panel x-ray imaging detector can be reduced dramatically, and that the image qualities of a cone beam reconstruction image, such as contrast resolution and image uniformity, are improved significantly. Furthermore, with little modification, the calibration technique presented here is also applicable
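
    The wavelet-based defect detection idea in this record can be illustrated with a single-level 2-D Haar decomposition: a point singularity such as a dead detector element produces large detail coefficients in its 2x2 block, while smooth flat-field regions produce none. This is a minimal sketch of the principle, not the paper's calibration pipeline; the flat-field value and threshold are assumptions.

```python
import numpy as np

def haar_detail_map(img):
    """Single-level 2-D Haar analysis: per 2x2 block, sum the magnitudes of
    the horizontal, vertical, and diagonal detail coefficients."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    H = (a - b + c - d) / 2.0
    V = (a + b - c - d) / 2.0
    D = (a - b - c + d) / 2.0
    return np.abs(H) + np.abs(V) + np.abs(D)

def find_defective_blocks(img, threshold):
    """Return (block_row, block_col) indices whose detail energy is large."""
    return np.argwhere(haar_detail_map(img) > threshold)

flat = 100.0 * np.ones((8, 8))   # ideal flat-field exposure
flat[3, 5] = 0.0                 # one dead detector element
defects = find_defective_blocks(flat, 10.0)
```

Detected elements would then be replaced by interpolation from their neighbors before the gain and offset corrections are applied.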

  1. Technique based on LED multispectral imaging and multivariate analysis for monitoring the conservation state of the Dead Sea Scrolls.

    PubMed

    Marengo, Emilio; Manfredi, Marcello; Zerbinati, Orfeo; Robotti, Elisa; Mazzucco, Eleonora; Gosetti, Fabio; Bearman, Greg; France, Fenella; Shor, Pnina

    2011-09-01

    The aim of this project is the development of a noninvasive technique based on LED multispectral imaging (MSI) for monitoring the conservation state of the Dead Sea Scrolls (DSS) collection. It is well-known that changes in the parchment reflectance drive the transition of the scrolls from legible to illegible. Capitalizing on this fact, we will use spectral imaging to detect changes in the reflectance before they become visible to the human eye. The technique uses multivariate analysis and statistical process control theory. The present study was carried out on a "sample" parchment of calfskin. The monitoring of the surface of a commercial modern parchment, aged consecutively for 2 h and 6 h at 80 °C and 50% relative humidity (ASTM), was performed at the Imaging Lab of the Library of Congress (Washington, DC, U.S.A.). MSI is here carried out in the vis-NIR range limited to 1 μm, with 13 bands and bandwidths ranging from about 10 nm in the UV to 40 nm in the IR. Results showed that we could detect and locate changing pixels, on the basis of reflectance changes, after only a few "hours" of aging. PMID:21777009
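
    The statistical-process-control idea in this record can be sketched with a multivariate control chart: fit a baseline mean and covariance to per-pixel reflectance spectra, then flag pixels whose Hotelling T² statistic drifts beyond a control limit. The band count, magnitudes, and empirical 99% limit below are illustrative assumptions, not the DSS imaging setup.

```python
import numpy as np

# Baseline: spectra of "fresh" parchment pixels (synthetic stand-ins).
rng = np.random.default_rng(1)
n_bands = 4
baseline = rng.normal(0.5, 0.02, size=(500, n_bands))
mu = baseline.mean(axis=0)
S_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def t_squared(spectrum):
    """Hotelling T^2 distance of one pixel spectrum from the baseline."""
    d = spectrum - mu
    return float(d @ S_inv @ d)

# Empirical 99th-percentile control limit from the baseline population.
limit = np.quantile([t_squared(s) for s in baseline], 0.99)

# A pixel whose reflectance has shifted in one band after ageing.
aged = mu.copy()
aged[2] += 0.15
```

Pixels exceeding `limit` would be mapped spatially, locating degradation before it is visible to the eye.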

  2. An expert diagnostic system based on neural networks and image analysis techniques in the field of automated cytogenetics.

    PubMed

    Beksaç, M S; Eskiizmirliler, S; Cakar, A N; Erkmen, A M; Dağdeviren, A; Lundsteen, C

    1996-03-01

    In this study, we introduce an expert system for intelligent chromosome recognition and classification based on artificial neural networks (ANN) and features obtained by automated image analysis techniques. A microscope equipped with a CCTV camera, integrated with an IBM-PC compatible computer environment including a frame grabber, is used for image data acquisition. Features of the chromosomes are obtained directly from the digital chromosome images. Two new algorithms for automated object detection and object skeletonizing constitute the basis of the feature extraction phase which constructs the components of the input vector to the ANN part of the system. This first version of our intelligent diagnostic system uses a trained unsupervised neural network structure and an original rule-based classification algorithm to find a karyotyped form of randomly distributed chromosomes over a complete metaphase. We investigate the effects of network parameters on the classification performance and discuss the adaptability and flexibility of the neural system in order to reach a structure giving an output including information about both structural and numerical abnormalities. Moreover, the classification performances of neural and rule-based system are compared for each class of chromosome. PMID:8705397

  3. NASA standard: Trend analysis techniques

    NASA Technical Reports Server (NTRS)

    1990-01-01

    Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
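
    The fitting techniques the standard names (linear, quadratic, and exponential models, the last via a log transform) can be shown in a few lines. The synthetic growth series below is illustrative, not NASA data.

```python
import numpy as np

# Synthetic time series following an exponential trend y = 5 * exp(0.3 * t).
t = np.arange(10, dtype=float)
y = 5.0 * np.exp(0.3 * t)

lin_coef = np.polyfit(t, y, 1)     # linear model:    y ~ a*t + b
quad_coef = np.polyfit(t, y, 2)    # quadratic model: y ~ a*t^2 + b*t + c
# Exponential model via log transform: ln(y) ~ growth_rate*t + ln(scale)
slope, intercept = np.polyfit(t, np.log(y), 1)
growth_rate, scale = slope, np.exp(intercept)
```

Comparing residuals of the three fits is the usual way to decide which trend model a series warrants.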

  4. Data Analysis Techniques at LHC

    SciTech Connect

    Boccali, Tommaso

    2005-10-12

    A review of the recent developments on data analysis techniques for the upcoming LHC experiments is presented, with the description of early tests ('Data Challenges'), which are being performed before the start-up, to validate the overall design.

  5. Graph-Based Symbolic Technique and Its Application in the Frequency Response Bound Analysis of Analog Integrated Circuits

    PubMed Central

    Tlelo-Cuautle, E.; Rodriguez-Chavez, S.; Palma-Rodriguez, A. A.

    2014-01-01

    A new graph-based symbolic technique (GBST) for deriving exact analytical expressions, such as the transfer function H(s) of an analog integrated circuit (IC), is introduced herein. The derived H(s) of a given analog IC is used to compute the frequency response bounds (maximum and minimum) associated with the magnitude and phase of H(s), subject to some ranges of process variational parameters, by performing nonlinear constrained optimization. Our simulations demonstrate the usefulness of the new GBST for deriving the exact symbolic expression for H(s), and the last section highlights the good agreement between the frequency response bounds computed by our variational analysis approach and those from traditional Monte Carlo simulations. In conclusion, performing variational analysis with the proposed GBST to compute the frequency response bounds of analog ICs shows a gain in computing time of 100x for a differential circuit topology and 50x for a 3-stage amplifier, compared to traditional Monte Carlo simulations. PMID:25136650
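
    The bound-versus-Monte-Carlo comparison in this record can be illustrated on the simplest circuit with a known symbolic H(s): a first-order RC low-pass, H(s) = 1/(1 + sRC). Because |H| is monotonic in the product RC, the magnitude bounds under parameter variation come from the parameter corners; Monte Carlo samples must fall inside them. The circuit, tolerance, and frequency are illustrative assumptions, not the paper's benchmarks.

```python
import numpy as np

# Symbolic result (derived by hand here): |H(jw)| = 1 / sqrt(1 + (w*R*C)^2)
w = 2 * np.pi * 1e3                     # evaluation frequency, rad/s
R_nom, C_nom, tol = 1e3, 1e-7, 0.2      # +/-20% process variation (assumed)

def mag(R, C):
    return 1.0 / np.sqrt(1.0 + (w * R * C) ** 2)

# Bounds from the four parameter corners (exact here, since |H| is
# monotonic in R*C).
corners = [mag(R_nom * (1 + sR * tol), C_nom * (1 + sC * tol))
           for sR in (-1, 1) for sC in (-1, 1)]
lo, hi = min(corners), max(corners)

# Monte Carlo reference: uniform sampling inside the tolerance box.
rng = np.random.default_rng(2)
mc = mag(R_nom * (1 + tol * rng.uniform(-1, 1, 10000)),
         C_nom * (1 + tol * rng.uniform(-1, 1, 10000)))
```

The corner bounds are computed from four evaluations versus 10,000 for Monte Carlo, which is the kind of speed-up the variational approach trades on.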

  6. A new technique for calculating reentry base heating. [analysis of laminar base flow field of two dimensional reentry body

    NASA Technical Reports Server (NTRS)

    Meng, J. C. S.

    1973-01-01

    The laminar base flow field of a two-dimensional reentry body has been studied by Telenin's method. The flow domain was divided into strips along the x-axis, and the flow variations were represented by Lagrange interpolation polynomials in the transformed vertical coordinate. The complete Navier-Stokes equations were used in the near wake region, and the boundary layer equations were applied elsewhere. The boundary conditions consisted of the flat plate thermal boundary layer in the forebody region and the near wake profile in the downstream region. The resulting two-point boundary value problem of 33 ordinary differential equations was then solved by the multiple shooting method. The detailed flow field and thermal environment in the base region are presented in the form of temperature contours, Mach number contours, velocity vectors, pressure distributions, and heat transfer coefficients on the base surface. The maximum heating rate was found on the centerline, and the two-dimensional stagnation point flow solution was adequate to estimate the maximum heating rate so long as the local Reynolds number could be obtained.

  7. Visualization and Analysis of Wireless Sensor Network Data for Smart Civil Structure Applications Based On Spatial Correlation Technique

    NASA Astrophysics Data System (ADS)

    Chowdhry, Bhawani Shankar; White, Neil M.; Jeswani, Jai Kumar; Dayo, Khalil; Rathi, Manorma

    2009-07-01

    Disasters affecting infrastructure, such as the 2001 earthquakes in India, 2005 in Pakistan, 2008 in China and the 2004 tsunami in Asia, provide a common need for intelligent buildings and smart civil structures. Now, imagine massive reductions in the time to get infrastructure working again, real-time information on damage to buildings, massive reductions in the cost and time to certify that structures are undamaged and can still be operated, and reductions in the number of structures to be rebuilt (if they are known not to be damaged). Achieving these ideas would lead to huge, quantifiable, long-term savings for government and industry. Wireless sensor networks (WSNs) can be deployed in buildings to make any civil structure both smart and intelligent. WSNs have recently gained much attention in both the public and research communities because they are expected to bring a new paradigm to the interaction between humans, environment, and machines. This paper presents the deployment of WSN nodes in the Top Quality Centralized Instrumentation Centre (TQCIC). We created an ad hoc networking application to collect real-time data sensed from the nodes that were randomly distributed throughout the building. If the sensors are relocated, the application automatically reconfigures itself in light of the new routing topology. WSNs are event-based systems that rely on the collective effort of several micro-sensor nodes that continuously observe a physical phenomenon. WSN applications require spatially dense sensor deployment in order to achieve satisfactory coverage. The degree of spatial correlation increases with decreasing inter-node separation. Energy consumption is reduced dramatically by having only those sensor nodes with unique readings transmit their data.
We report on an algorithm based on a spatial correlation technique that assures high QoS (in terms of SNR) of the network as well as proper utilization of energy, by suppressing redundant data transmission
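
    A minimal form of the suppression rule this record describes: a node transmits only if no nearby node has already reported a reading close to its own, exploiting the spatial correlation of dense deployments. This greedy sketch is an illustration of the principle, not the authors' QoS-aware algorithm; the radius and tolerance are assumptions.

```python
import math

def select_reporters(nodes, radius, delta):
    """nodes: list of (x, y, reading). Returns the indices of nodes that must
    transmit; a node is suppressed if an already-selected reporter within
    `radius` has a reading within `delta` of its own."""
    reporters = []
    for i, (x, y, v) in enumerate(nodes):
        redundant = any(
            math.hypot(x - nodes[j][0], y - nodes[j][1]) <= radius
            and abs(v - nodes[j][2]) <= delta
            for j in reporters)
        if not redundant:
            reporters.append(i)
    return reporters
```

In a cluster of strain sensors reading nearly the same value, only one node spends transmission energy, while an outlier far away still reports.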

  8. Analysis of Land Covers over Northern Peninsular Malaysia by Using ALOS-PALSAR Data Based on Frequency-Based Contextual and Neural Network Classification Technique

    NASA Astrophysics Data System (ADS)

    Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Saleh, N. Mohd.

    2008-11-01

    Optical and microwave remote sensing data have been widely used in land cover and land use classification. Optical satellite remote sensing methods are effective but require cloud-free conditions for the data to be useful, and in the Equatorial region cloud-free acquisitions can be rare, reducing these sensors' applicability to such studies. ALOS-PALSAR data, by contrast, can be acquired day and night irrespective of weather conditions. This paper presents a comparison between frequency-based contextual and neural network classification techniques using ALOS-PALSAR data for land cover assessment in Northern Peninsular Malaysia. The ALOS-PALSAR data acquired on 10 November 2006 were classified into vegetation, urban, water and other land features. Training areas in the PALSAR data were chosen with reference to optical satellite imagery and classified using supervised classification methods. The best supervised classifier was chosen based on the highest overall accuracy and Kappa statistic. The results of this study point to the utility of ALOS-PALSAR data as an alternative data source for land cover classification in Peninsular Malaysia.
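
    The accuracy-assessment step used to pick the best classifier can be illustrated with a short sketch that derives overall accuracy and Cohen's Kappa from a confusion matrix; the three-class layout and counts below are invented for illustration.

```python
def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows: reference classes, columns: mapped classes)."""
    n = sum(sum(row) for row in confusion)
    diag = sum(confusion[i][i] for i in range(len(confusion)))
    overall = diag / n
    # Expected chance agreement from the row and column marginals.
    pe = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / n ** 2
    kappa = (overall - pe) / (1 - pe)
    return overall, kappa

# Hypothetical 3-class result (vegetation / urban / water).
cm = [[50, 5, 5],
      [5, 40, 5],
      [0, 5, 45]]
oa, k = accuracy_and_kappa(cm)
print(round(oa, 3), round(k, 3))  # -> 0.844 0.765
```

    Kappa discounts the agreement expected by chance, which is why it is reported alongside overall accuracy when comparing classifiers.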

  9. Comparison of two headspace sampling techniques for the analysis of off-flavour volatiles from oat based products.

    PubMed

    Cognat, Claudine; Shepherd, Tom; Verrall, Susan R; Stewart, Derek

    2012-10-01

    Two different headspace sampling techniques were compared for the analysis of aroma volatiles from freshly produced and aged plain oatcakes. Solid phase microextraction (SPME) using a Carboxen-polydimethylsiloxane (PDMS) fibre and entrainment on Tenax TA within an adsorbent tube were used for the collection of volatiles. The effects of variation in the sampling method were also considered using SPME. The data obtained using both techniques were processed by multivariate statistical analysis (PCA). Both techniques showed similar capacities to discriminate between the samples at different ages. Discrimination between fresh and rancid samples could be made on the basis of changes in the relative abundances of 14-15 of the constituents in the volatile profiles. A significant effect on the detection level of volatile compounds was observed when samples were crushed and analysed by SPME-GC-MS, in comparison to the undisturbed product. The applicability and cost effectiveness of both methods were considered. PMID:25005987

  10. A preliminary structural analysis of space-base living quarters modules to verify a weight-estimating technique

    NASA Technical Reports Server (NTRS)

    Grissom, D. S.; Schneider, W. C.

    1971-01-01

    The determination of a baseline (minimum-weight) design for the primary structure of the living quarters modules in an earth-orbiting space base was investigated. Although the design is preliminary in nature, the supporting analysis is sufficiently thorough to provide a reasonably accurate weight estimate of the major components that are considered to comprise the structural weight of the space base.

  11. NASA standard: Trend analysis techniques

    NASA Technical Reports Server (NTRS)

    1988-01-01

    This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
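
    The model-fitting portion described above can be sketched as follows: fit linear, quadratic, and exponential models to a time series and keep the one with the lowest sum of squared residuals. The selection-by-residual rule and the data are illustrative simplifications, not the Standard's prescribed procedure.

```python
import numpy as np

def fit_trend(t, y):
    """Fit the three model families named in the Standard and return
    the name of the one with the smallest sum of squared residuals."""
    fits = {}
    # Linear and quadratic: ordinary least-squares polynomial fits.
    for name, deg in (("linear", 1), ("quadratic", 2)):
        coef = np.polyfit(t, y, deg)
        resid = y - np.polyval(coef, t)
        fits[name] = float(resid @ resid)
    # Exponential y = a*exp(b*t): linear least squares in log space (y > 0).
    coef = np.polyfit(t, np.log(y), 1)
    resid = y - np.exp(np.polyval(coef, t))
    fits["exponential"] = float(resid @ resid)
    return min(fits, key=fits.get)

t = np.arange(10, dtype=float)
print(fit_trend(t, 2.0 * np.exp(0.3 * t)))  # -> exponential
```

    Comparing residuals in the original (not log) space keeps the three candidate models on an equal footing.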

  12. Global precipitation estimates based on a technique for combining satellite-based estimates, rain gauge analysis, and NWP model precipitation information

    NASA Technical Reports Server (NTRS)

    Huffman, George J.; Adler, Robert F.; Rudolf, Bruno; Schneider, Udo; Keehn, Peter R.

    1995-01-01

    The 'satellite-gauge model' (SGM) technique is described for combining precipitation estimates from microwave satellite data, infrared satellite data, rain gauge analyses, and numerical weather prediction models into improved estimates of global precipitation. Throughout, monthly estimates on a 2.5 degrees x 2.5 degrees lat-long grid are employed. First, a multisatellite product is developed using a combination of low-orbit microwave and geosynchronous-orbit infrared data in the latitude range 40 degrees N - 40 degrees S (the adjusted geosynchronous precipitation index) and low-orbit microwave data alone at higher latitudes. Then the rain gauge analysis is brought in, weighting each field by its inverse relative error variance to produce a nearly global, observationally based precipitation estimate. To produce a complete global estimate, the numerical model results are used to fill data voids in the combined satellite-gauge estimate. Our sequential approach to combining estimates allows a user to select the multisatellite estimate, the satellite-gauge estimate, or the full SGM estimate (observationally based estimates plus the model information). The primary limitation in the method is imperfections in the estimation of relative error for the individual fields. The SGM results for one year of data (July 1987 to June 1988) show important differences from the individual estimates, including model estimates as well as climatological estimates. In general, the SGM results are drier in the subtropics than the model and climatological results, reflecting the relatively dry microwave estimates that dominate the SGM in oceanic regions.
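
    The inverse-error-variance weighting used to merge the satellite and gauge fields can be sketched as below; the 2x2 grids and error variances are invented for illustration and are not the SGM's actual inputs.

```python
import numpy as np

def combine_fields(fields, error_variances):
    """Weight each estimate by its inverse (relative) error variance,
    as in the SGM satellite-gauge combination step, and sum."""
    weights = np.array([1.0 / v for v in error_variances])
    weights /= weights.sum()
    return sum(w * f for w, f in zip(weights, np.asarray(fields, dtype=float)))

# Hypothetical 2x2 monthly-mean grids (mm/day): multisatellite vs. gauge.
satellite = [[3.0, 2.0], [1.0, 0.5]]
gauge     = [[2.0, 2.4], [1.2, 0.7]]
combined = combine_fields([satellite, gauge], error_variances=[0.4, 0.1])
print(combined)  # each cell is pulled toward the lower-variance gauge field
```

    Because the gauge field is assigned a quarter of the satellite error variance here, it receives 80% of the weight in every grid cell.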

  13. A single base extension technique for the analysis of known mutations utilizing capillary gel electrophoresis with electrochemical detection.

    PubMed

    Brazill, Sara A; Kuhr, Werner G

    2002-07-15

    A novel single nucleotide polymorphism (SNP) detection system is described in which the accuracy of DNA polymerase and advantages of electrochemical detection are demonstrated. A model SNP system is presented to illustrate the potential advantages in coupling the single base extension (SBE) technique to capillary gel electrophoresis (CGE) with electrochemical detection. An electrochemically labeled primer, with a ferrocene acetate covalently attached to its 5' end, is used in the extension reaction. When the Watson-Crick complementary ddNTP is added to the SBE reaction, the primer is extended by a single nucleotide. The reaction mixture is subsequently separated by CGE, and the ferrocene-tagged fragments are detected at the separation anode with sinusoidal voltammetry. This work demonstrates the first single base resolution separation of DNA coupled with electrochemical detection. The unextended primer (20-mer) and the 21-mer extension product are separated with a resolution of 0.8. PMID:12139049

  14. High Throughput Sample Preparation and Analysis for DNA Sequencing, PCR and Combinatorial Screening of Catalysis Based on Capillary Array Technique

    SciTech Connect

    Yonghua Zhang

    2002-05-27

    Sample preparation has been one of the major bottlenecks for many high throughput analyses. The purpose of this research was to develop new sample preparation and integration approaches for DNA sequencing, PCR-based DNA analysis and combinatorial screening of homogeneous catalysis based on multiplexed capillary electrophoresis with laser-induced fluorescence or imaging UV absorption detection. The author first introduced a method to integrate the front-end tasks to DNA capillary-array sequencers. Protocols for directly sequencing plasmids from a single bacterial colony in fused-silica capillaries were developed. After the colony was picked, lysis was accomplished in situ in the plastic sample tube using either a thermocycler or a heating block. Upon heating, the plasmids were released while chromosomal DNA and membrane proteins were denatured and precipitated to the bottom of the tube. After adding enzyme and Sanger reagents, the resulting solution was aspirated into the reaction capillaries by a syringe pump, and cycle sequencing was initiated. No deleterious effect upon the reaction efficiency, the on-line purification system, or the capillary electrophoresis separation was observed, even though the crude lysate was used as the template. Multiplexed on-line DNA sequencing data from 8 parallel channels allowed base calling up to 620 bp with an accuracy of 98%. The entire system can be automatically regenerated for repeated operation. For PCR-based DNA analysis, the author demonstrated that capillary electrophoresis with UV detection can be used for DNA analysis starting from clinical samples without purification. After PCR reactions using cheek cell, blood or HIV-1 gag DNA, the reaction mixtures were injected into the capillary either on-line or off-line by base stacking. The protocol was also applied to capillary array electrophoresis. The use of cheaper detection, and the elimination of purification of DNA sample before or after PCR reaction, will make this approach an

  15. Prospects and limitations for determining the parameters in physical-based regional landslide susceptibility model using back analysis technique

    NASA Astrophysics Data System (ADS)

    Dong, Jia-Jyun; Liu, Chia-Nan; Lin, Yan-Cheng; Chen, Ci-Ren

    2010-05-01

    Landslide susceptibility analysis is crucial from the viewpoint of hazard mitigation. Statistical and deterministic approaches are frequently adopted for landslide susceptibility analysis. Being based on physical models, deterministic approaches are superior to statistical approaches in that they fully take the mechanical mechanisms into account. However, it is difficult to supply the appropriate mechanical parameters (including strength and hydraulic properties) to a deterministic model. Back analysis is a promising way to calibrate the required parameters, though few studies have evaluated the performance of the back analysis approach. This research uses hypothetical cases (100 cells) to investigate the prospects and limitations of estimating the parameters of a deterministic model by using a back-analysis approach. Based on the assigned hydraulic and strength parameters, the corresponding safety factor and landslide inventory (cells with a safety factor less than 1), as well as the depth of the ground water table for each cell, were calculated using a deterministic model, TRIGRS. The landslide inventory derived from the forward calculation is then used to back-calculate the pre-assigned parameters. Two scenarios of back analysis were examined in this research. The results reveal that the non-uniqueness of back-analyzed hydraulic and strength parameters is detrimental to performance if only the landslide inventory is utilized to back-calculate the parameters. However, the performance of back-calculation improves if the spatial and temporal variation of the ground water table is first used to calibrate the hydraulic parameters. Thereafter, multiple landslide inventories can help reduce the non-uniqueness in back-calculating the hydraulic and strength parameters for a deterministic landslide susceptibility analysis at regional scale.
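
    The forward step of such a deterministic model can be illustrated with a common infinite-slope factor-of-safety formulation, similar in spirit to what TRIGRS evaluates per cell; the formulation is generic and all parameter values below are invented for illustration.

```python
import math

def factor_of_safety(c, phi_deg, slope_deg, depth, psi,
                     gamma_s=20.0, gamma_w=9.81):
    """Infinite-slope factor of safety: friction and cohesion resist
    sliding, while positive pressure head psi (metres) from a rising
    water table destabilizes. Units: kPa, degrees, metres, kN/m^3."""
    phi = math.radians(phi_deg)
    delta = math.radians(slope_deg)
    frictional = math.tan(phi) / math.tan(delta)
    cohesive = (c - psi * gamma_w * math.tan(phi)) / (
        gamma_s * depth * math.sin(delta) * math.cos(delta))
    return frictional + cohesive

# A dry cell is stable; a rising water table (psi > 0) pushes FS below 1,
# which is how a landslide inventory is generated in the forward run.
fs_dry = factor_of_safety(c=5.0, phi_deg=30.0, slope_deg=35.0, depth=2.0, psi=0.0)
fs_wet = factor_of_safety(c=5.0, phi_deg=30.0, slope_deg=35.0, depth=2.0, psi=1.5)
print(round(fs_dry, 2), round(fs_wet, 2))  # -> 1.09 0.64
```

    The non-uniqueness issue discussed above arises because many (c, phi, psi) combinations can push the same cell below FS = 1.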

  16. Techniques for Automated Performance Analysis

    SciTech Connect

    Marcus, Ryan C.

    2014-09-02

    The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.

  17. A novel fast and flexible technique of radical kinetic behaviour investigation based on pallet for plasma evaluation structure and numerical analysis

    NASA Astrophysics Data System (ADS)

    Malinowski, Arkadiusz; Takeuchi, Takuya; Chen, Shang; Suzuki, Toshiya; Ishikawa, Kenji; Sekine, Makoto; Hori, Masaru; Lukasiak, Lidia; Jakubowski, Andrzej

    2013-07-01

    This paper describes a new, fast, and case-independent technique for sticking coefficient (SC) estimation based on a pallet for plasma evaluation (PAPE) structure and numerical analysis. Our approach does not require a complicated structure, apparatus, or time-consuming measurements, but offers high reliability of data and high flexibility. Thermal analysis is also possible. This technique has been successfully applied to the estimation of the very low SC of hydrogen radicals on chemically amplified ArF 193 nm photoresist (the main goal of this study). The upper bound of our technique has been determined by investigating the SC of fluorine radicals on polysilicon (at elevated temperature). Sources of estimation error and ways to reduce it have also been discussed. The results of this study give an insight into the process kinetics; not only are they helpful for better process understanding, but they may also serve as parameters in a phenomenological model for predictive modelling of etching for ultimate CMOS topography simulation.

  18. Applicability of neuro-fuzzy techniques in predicting ground-water vulnerability: a GIS-based sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Dixon, B.

    2005-07-01

    Modeling groundwater vulnerability reliably and cost effectively for non-point source (NPS) pollution at a regional scale remains a major challenge. In recent years, Geographic Information Systems (GIS), neural networks and fuzzy logic techniques have been used in several hydrological studies. However, few of these research studies have undertaken an extensive sensitivity analysis. The overall objective of this research is to examine the sensitivity of neuro-fuzzy models used to predict groundwater vulnerability in a spatial context by integrating GIS and neuro-fuzzy techniques. The specific objectives are to assess the sensitivity of neuro-fuzzy models in a spatial domain using GIS by varying (i) shape of the fuzzy sets, (ii) number of fuzzy sets, and (iii) learning and validation parameters (including rule weights). The neuro-fuzzy models were developed using NEFCLASS-J software on a JAVA platform and were loosely integrated with a GIS. Four plausible parameters which are critical in transporting contaminants through the soil profile to the groundwater, included soil hydrologic group, depth of the soil profile, soil structure (pedality points) of the A horizon, and landuse. In order to validate the model predictions, coincidence reports were generated among model inputs, model predictions, and well/spring contamination data for NO 3-N. A total of 16 neuro-fuzzy models were developed for selected sub-basins of Illinois River Watershed, AR. The sensitivity analysis showed that neuro-fuzzy models were sensitive to the shape of the fuzzy sets, number of fuzzy sets, nature of the rule weights, and validation techniques used during the learning processes. Compared to bell-shaped and triangular-shaped membership functions, the neuro-fuzzy models with a trapezoidal membership function were the least sensitive to the various permutations and combinations of the learning and validation parameters. Over all, Models 11 and 8 showed relatively higher coincidence with well
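
    The trapezoidal membership function that the study found least sensitive can be sketched directly; the breakpoints below, framed as a hypothetical "shallow soil depth" fuzzy set, are invented for illustration.

```python
def trapezoid_mf(x, a, b, c, d):
    """Trapezoidal fuzzy membership: rises on [a, b], equals 1 on
    [b, c], falls on [c, d]. The flat top makes membership insensitive
    to small shifts of the breakpoints, consistent with the study's
    finding that trapezoidal sets were the least sensitive."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical "shallow" fuzzy set for soil-profile depth (metres).
print([trapezoid_mf(x, 0.0, 0.5, 1.5, 2.5) for x in (0.25, 1.0, 2.0)])
# -> [0.5, 1.0, 0.5]
```

    Triangular sets are the special case b = c, which removes the flat top and is one reason they react more strongly to parameter changes.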

  19. Extension of an Itô-based general approximation technique for random vibration of a BBW general hysteresis model part II: Non-Gaussian analysis

    NASA Astrophysics Data System (ADS)

    Davoodi, H.; Noori, M.

    1990-07-01

    The work presented in this paper constitutes the second phase of on-going research aimed at developing mathematical models for representing general hysteretic behavior of structures and approximation techniques for the computation and analysis of the response of hysteretic systems to random excitations. In this second part, the technique previously developed by the authors for the Gaussian response analysis of non-linear systems with general hysteretic behavior is extended for the non-Gaussian analysis of these systems. This approximation technique is based on the approach proposed independently by Ibrahim and Wu-Lin. In this work up to fourth order moments of the response co-ordinates are obtained for the Bouc-Baber-Wen smooth hysteresis model. These higher order statistics previously have not been made available for general hysteresis models by using existing approximation methods. Second order moments obtained for the model by this non-Gaussian closure scheme are compared with equivalent linearization and Gaussian closure results via Monte Carlo simulation (MCS). Higher order moments are compared with the simulation results. The study performed for a wide range of degradation parameters and input power spectral density ( PSD) levels shows that the non-Gaussian responses obtained by this approach are in better agreement with the MCS results than the linearized and Gaussian ones. This approximation technique can provide information on higher order moments for general hysteretic systems. This information is valuable in random vibration and the reliability analysis of hysteretically yielding structures.

  20. Task-based strategy for optimized contrast enhanced breast imaging: Analysis of six imaging techniques for mammography and tomosynthesis

    SciTech Connect

    Ikejimba, Lynda C.; Kiarashi, Nooshin; Ghate, Sujata V.; Samei, Ehsan; Lo, Joseph Y.

    2014-06-15

    Purpose: The use of contrast agents in breast imaging has the capability of enhancing nodule detectability and providing physiological information. Accordingly, there has been a growing trend toward using iodine as a contrast medium in digital mammography (DM) and digital breast tomosynthesis (DBT). Widespread use raises concerns about the best way to use iodine in DM and DBT, and thus a comparison is necessary to evaluate typical iodine-enhanced imaging methods. This study used a task-based observer model to determine the optimal imaging approach by analyzing six imaging paradigms in terms of their ability to resolve iodine at a given dose: unsubtracted mammography and tomosynthesis, temporal subtraction mammography and tomosynthesis, and dual energy subtraction mammography and tomosynthesis. Methods: Imaging performance was characterized using a detectability index d′, derived from the system task transfer function (TTF), an imaging task, iodine signal difference, and the noise power spectrum (NPS). The task modeled a 10 mm diameter lesion containing iodine concentrations between 2.1 mg/cc and 8.6 mg/cc. TTF was obtained using an edge phantom, and the NPS was measured over several exposure levels, energies, and target-filter combinations. Using a structured CIRS phantom, d′ was generated as a function of dose and iodine concentration. Results: For all iodine concentrations and doses, temporal subtraction techniques for mammography and tomosynthesis yielded the highest d′, while dual energy techniques for both modalities demonstrated the next best performance. Unsubtracted imaging resulted in the lowest d′ values for both modalities, with unsubtracted mammography performing the worst out of all six paradigms. Conclusions: At any dose, temporal subtraction imaging provides the greatest detectability, with temporally subtracted DBT performing the highest. The authors attribute the successful performance to excellent cancellation of
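
    A detectability index of this kind can be illustrated in a simple prewhitening-observer form, d'^2 = integral of |W_task(f) * TTF(f)|^2 / NPS(f) df; the task, TTF, and NPS curves below are plain illustrative stand-ins, not the study's measured CIRS-phantom quantities or its exact observer model.

```python
import numpy as np

def dprime(task_w, ttf, nps, df):
    """Prewhitening-observer detectability over a 1-D frequency axis:
    signal template shaped by the system TTF, penalized by the NPS."""
    integrand = (task_w * ttf) ** 2 / nps
    return float(np.sqrt(np.sum(integrand) * df))

f = np.linspace(0.01, 5.0, 500)                    # spatial frequency, cyc/mm
df = f[1] - f[0]
task = 8.6 * np.exp(-(np.pi * 5.0 * f) ** 2 / 4)   # toy 10 mm Gaussian lesion task
ttf = 1.0 / (1.0 + (f / 2.0) ** 2)                 # illustrative system TTF
nps_a = 1e-4 * np.ones_like(f)                     # flat (white) noise
nps_b = 4e-4 * np.ones_like(f)                     # 4x the noise power
da, db = dprime(task, ttf, nps_a, df), dprime(task, ttf, nps_b, df)
print(round(da / db, 2))  # -> 2.0, since d' scales as 1/sqrt(NPS)
```

    The quadrupled-NPS case halving d' shows why subtraction schemes that reduce anatomical noise raise detectability at fixed dose.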

  1. Photogrammetric Techniques for Road Surface Analysis

    NASA Astrophysics Data System (ADS)

    Knyaz, V. A.; Chibunichev, A. G.

    2016-06-01

    The quality and condition of a road surface are of great importance for the convenience and safety of driving. Investigations of the behaviour of road materials under laboratory conditions and the monitoring of existing roads are therefore widely carried out to control geometric parameters and detect defects in the road surface. Photogrammetry, as an accurate non-contact measuring method, provides powerful means for solving different tasks in road surface reconstruction and analysis. The range of dimensions of concern in road surface analysis varies greatly, from tenths of a millimetre to hundreds of metres and more, so a set of techniques is needed to meet all the requirements of road parameter estimation. Two photogrammetric techniques for road surface analysis are presented: one for accurate measurement of road pavement and one for road surface reconstruction based on imagery obtained from an unmanned aerial vehicle. The first technique uses a photogrammetric system based on structured light for fast and accurate 3D surface reconstruction, allowing the characteristics of road texture to be analysed and the pavement behaviour to be monitored. The second technique provides a dense 3D road model suitable for estimating road macro-parameters.

  2. A RT-based Technique for the Analysis and the Removal of Titan's Atmosphere by Cassini/VIMS-IR data

    NASA Astrophysics Data System (ADS)

    Sindoni, G.; Tosi, F.; Adriani, A.; Moriconi, M. L.; D'Aversa, E.; Grassi, D.; Oliva, F.; Dinelli, B. M.; Castelli, E.

    2015-12-01

    Since 2004, the Visual and Infrared Mapping Spectrometer (VIMS), together with the CIRS and UVIS spectrometers aboard the Cassini spacecraft, has provided insight into the atmospheres of Saturn and Titan through remote sensing observations. The presence of clouds and aerosols in Titan's dense atmosphere makes the analysis of surface radiation a difficult task, for which an atmospheric radiative transfer (RT) model is required. The implementation of an RT code including multiple scattering in an inversion algorithm based on the Bayesian approach can provide strong constraints on both the surface albedo and the atmospheric composition. Applying the retrieval procedure we have developed to VIMS-IR spectra acquired in nadir or slant geometries allows us to retrieve the equivalent opacity of Titan's atmosphere in terms of variable aerosol and gaseous content. Thus, the separation of the atmospheric and surface contributions in the observed spectrum is possible. The atmospheric removal procedure was tested on the 1-2.2 μm spectral range of publicly available VIMS data covering the Ontario Lacus and Ligeia Mare regions. Retrieving the accurate composition of Titan's atmosphere is a much more complex task. So far, information about the vertical structure of the atmosphere from limb spectra has mostly been derived under conditions where scattering could be neglected [1,2]. Indeed, since the very high aerosol load in the middle-low atmosphere produces strong scattering effects in the measured spectra, the analysis requires RT modeling that takes multiple scattering into account in a spherical-shell geometry. Therefore, an innovative method we are developing based on the Monte Carlo approach can provide important information about the vertical distribution of the aerosols and gases composing Titan's atmosphere. [1] Bellucci et al. (2009), Icarus 201(1), 198-216. [2] de Kok et al. (2007), Icarus 191(1), 223-235.

  3. Terahertz Wide-Angle Imaging and Analysis on Plane-wave Criteria Based on Inverse Synthetic Aperture Techniques

    NASA Astrophysics Data System (ADS)

    Gao, Jing Kun; Qin, Yu Liang; Deng, Bin; Wang, Hong Qiang; Li, Jin; Li, Xiang

    2016-04-01

    This paper presents two parts of work on terahertz imaging applications. The first part aims at solving problems that occur as the rotation angle increases. To compensate for the nonlinearity of terahertz radar systems, a calibration signal acquired from a bright target is always used. Generally, this compensation inserts an extra linear phase term in the intermediate frequency (IF) echo signal, which is not desired in large-rotation-angle imaging applications. We carried out a detailed theoretical analysis of this problem, and a minimum entropy criterion was employed to estimate and compensate for the linear-phase errors. In the second part, the effects of spherical waves on terahertz inverse synthetic aperture imaging are analyzed. Analytic criteria for the plane-wave approximation were derived for different rotation angles. Experimental results on corner reflectors and an aircraft model, based on a 330-GHz linear frequency-modulated continuous wave (LFMCW) radar system, validated the necessity and effectiveness of the proposed compensation. Comparison of the experimental images obtained under the plane-wave assumption and with spherical-wave correction also showed high consistency with the analytic criteria we derived.
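
    A minimum-entropy estimate of a linear-phase error can be sketched as a grid search over candidate slopes, scoring each by the entropy of the focused profile (sharper image = lower entropy). The one-dimensional two-scatterer echo below is a toy stand-in for radar data, not the paper's actual processing chain.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized intensity of a complex image."""
    p = np.abs(img) ** 2
    p /= p.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def estimate_linear_phase(echo, slopes):
    """Grid-search the linear-phase slope whose removal minimizes the
    entropy of the focused range profile."""
    return min(slopes,
               key=lambda s: image_entropy(
                   np.fft.fft(echo * np.exp(-1j * s * np.arange(len(echo))))))

n = 128
k = np.arange(n)
# Ideal echo of two point scatterers, corrupted by a 0.3 rad/sample ramp.
ideal = np.exp(2j * np.pi * 10 * k / n) + np.exp(2j * np.pi * 40 * k / n)
echo = ideal * np.exp(1j * 0.3 * k)
slopes = np.linspace(0.0, 0.5, 51)
print(round(float(estimate_linear_phase(echo, slopes)), 2))  # -> 0.3
```

    Removing the estimated ramp restores the two sharp peaks; an uncompensated ramp shifts and smears the spectrum, raising its entropy.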

  4. School Principals' Personal Constructs Regarding Technology: An Analysis Based on Decision-Making Grid Technique

    ERIC Educational Resources Information Center

    Bektas, Fatih

    2014-01-01

    This study aims to determine the similarities and differences between existing school principals' personal constructs of "ideal principal qualities" in terms of technology by means of the decision-making grid technique. The study has a phenomenological design, and the study group consists of 17 principals who have been serving at…

  5. Visual exploratory analysis of DCE-MRI data in breast cancer based on novel nonlinear dimensional data reduction techniques

    NASA Astrophysics Data System (ADS)

    Meyer-Bäse, Anke; Lespinats, Sylvain; Steinbrücker, Frank; Saalbach, Axel; Schlossbauer, Thomas; Barbu, Adrian

    2009-04-01

    Visualization of multi-dimensional data sets becomes a critical and significant area in modern medical image processing. To analyze such high dimensional data, novel nonlinear embedding approaches become increasingly important to show dependencies among these data in a two- or three-dimensional space. This paper investigates the potential of novel nonlinear dimensional data reduction techniques and compares their results with proven nonlinear techniques when applied to the differentiation of malignant and benign lesions described by high-dimensional data sets arising from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Two important visualization modalities in medical imaging are presented: the mapping on a lower-dimensional data manifold and the image fusion.

  6. Multicomponent analysis using established techniques

    NASA Astrophysics Data System (ADS)

    Dillehay, David L.

    1991-04-01

    Recent environmental concerns have greatly increased the need, application and scope of real-time continuous emission monitoring systems. New techniques like Fourier Transform Infrared have been applied with limited success for this application. However, the use of well-tried and established techniques (Gas Filter Correlation and Single Beam Dual Wavelength) combined with sophisticated microprocessor technology have produced reliable monitoring systems with increased measurement accuracy.

  7. Feature-Based Registration Techniques

    NASA Astrophysics Data System (ADS)

    Lorenz, Cristian; Klinder, Tobias; von Berg, Jens

    In contrast to intensity-based image registration, where a similarity measure is typically evaluated at each voxel location, feature-based registration works on a sparse set of image locations. Therefore, it needs an explicit interpolation step to supply a dense deformation field. In this chapter, the application of feature-based registration to pulmonary image registration, as well as hybrid methods combining feature-based with intensity-based registration, is discussed. In contrast to purely feature-based registration methods, hybrid methods are increasingly proposed in the pulmonary context and have the potential to outperform purely intensity-based registration methods. Available approaches are classified along the categories of feature type, correspondence definition, and interpolation type, to finally achieve a dense deformation field.

  8. Application of spectral analysis techniques to the intercomparison of aerosol data - Part 4: Synthesized analysis of multisensor satellite and ground-based AOD measurements using combined maximum covariance analysis

    NASA Astrophysics Data System (ADS)

    Li, J.; Carlson, B. E.; Lacis, A. A.

    2014-08-01

    In this paper, we introduce the usage of a newly developed spectral decomposition technique - combined maximum covariance analysis (CMCA) - in the spatiotemporal comparison of four satellite data sets and ground-based observations of aerosol optical depth (AOD). This technique is based on commonly used principal component analysis (PCA) and maximum covariance analysis (MCA). By decomposing the cross-covariance matrix between the joint satellite data field and Aerosol Robotic Network (AERONET) station data, both parallel comparison across different satellite data sets and the evaluation of the satellite data against the AERONET measurements are simultaneously realized. We show that this new method not only confirms the seasonal and interannual variability of aerosol optical depth, aerosol-source regions and events represented by different satellite data sets, but also identifies the strengths and weaknesses of each data set in capturing the variability associated with sources, events or aerosol types. Furthermore, by examining the spread of the spatial modes of different satellite fields, regions with the largest uncertainties in aerosol observation are identified. We also present two regional case studies that respectively demonstrate the capability of the CMCA technique in assessing the representation of an extreme event in different data sets, and in evaluating the performance of different data sets on seasonal and interannual timescales. Global results indicate that different data sets agree qualitatively for major aerosol-source regions. Discrepancies are mostly found over the Sahel, India, eastern and southeastern Asia. Results for eastern Europe suggest that the intense wildfire event in Russia during summer 2010 was less well-represented by SeaWiFS (Sea-viewing Wide Field-of-view Sensor) and OMI (Ozone Monitoring Instrument), which might be due to misclassification of smoke plumes as clouds. 
Analysis for the Indian subcontinent shows that here SeaWiFS agrees
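
    The MCA building block of CMCA can be sketched as an SVD of the cross-covariance matrix between two anomaly fields (time x space); the leading singular vectors are the coupled spatial patterns. The synthetic "shared signal" data below are invented for illustration and stand in for the satellite and AERONET AOD fields.

```python
import numpy as np

def mca_modes(X, Y, n_modes=1):
    """Maximum covariance analysis: SVD of the cross-covariance matrix
    between two anomaly fields; returns the leading coupled spatial
    patterns and the squared covariance fraction of each mode."""
    Xa = X - X.mean(axis=0)
    Ya = Y - Y.mean(axis=0)
    C = Xa.T @ Ya / (len(X) - 1)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    scf = s[:n_modes] ** 2 / (s ** 2).sum()
    return U[:, :n_modes], Vt[:n_modes].T, scf

rng = np.random.default_rng(0)
t = rng.standard_normal(200)                       # shared time series
X = np.outer(t, [1.0, 0.5, -0.5]) + 0.1 * rng.standard_normal((200, 3))
Y = np.outer(t, [0.8, -0.2]) + 0.1 * rng.standard_normal((200, 2))
u, v, scf = mca_modes(X, Y)
print(scf[0] > 0.99)  # -> True: one shared mode dominates the covariance
```

    Because both toy fields are driven by the same time series, a single mode captures nearly all the squared covariance, mirroring how CMCA isolates variability common to the satellite and station data.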

  9. Robustness of reliability-growth analysis techniques

    NASA Astrophysics Data System (ADS)

    Ellis, Karen E.

    The author examines the robustness of techniques commonly applied to failure time data to determine if the failure rate (1/mean-time-between-failures) is changing over time. The models examined are the Duane postulate, the Crow/AMSAA (Army Materiel Systems Analysis Activity) model, and Kalman filtering (also referred to as dynamic linear modeling). Each has as its foundation the underlying premise of a failure rate that changes over time. The techniques seek to confirm or reject whether the failure rate is changing significantly, based on observed data. To compare the ability of each method to accomplish such a rejection or confirmation, a known failure time distribution is simulated, and then each model is applied and the results are compared.
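
    The Duane postulate can be sketched as a log-log regression of cumulative MTBF on cumulative test time; a positive fitted slope indicates reliability growth (a decreasing failure rate). The failure times below are invented, and this simple least-squares fit is only illustrative of the model's premise.

```python
import math

def duane_growth_slope(failure_times):
    """Duane postulate: log cumulative MTBF vs. log cumulative test time
    is linear; the fitted slope is the reliability growth rate."""
    xs, ys = [], []
    for n, t in enumerate(failure_times, start=1):
        xs.append(math.log(t))
        ys.append(math.log(t / n))   # cumulative MTBF at the n-th failure
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Simulated improving system: inter-failure times stretch out as testing
# proceeds, so cumulative MTBF grows with cumulative time.
times = [10, 25, 50, 90, 150, 240, 380, 600]
print(duane_growth_slope(times) > 0)  # -> True: reliability is growing
```

    A slope near zero would instead indicate a constant failure rate, which is the null behaviour the robustness study probes each technique against.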

  10. A Three Corner Hat-based analysis of station position time series for the assessment of inter-technique precision at ITRF co-located sites

    NASA Astrophysics Data System (ADS)

    Abbondanza, C.; Chin, T. M.; Gross, R. S.; Heflin, M. B.; Hurst, K. J.; Parker, J. W.; Wu, X.; Altamimi, Z.

    2012-12-01

    Assessing the uncertainty in geodetic positioning is a crucial factor when combining independent space-geodetic solutions for the computation of the International Terrestrial Reference Frame (ITRF). ITRF is a combined product based on the stacking of VLBI, GPS, SLR and DORIS solutions and merging the single technique reference frames with terrestrial local tie measurements at co-located sites. In current ITRF realizations, the uncertainty evaluation of the four techniques relies on the analysis of the post-fit residuals, which are a by-product of the combination process. An alternative approach to the assessment of the inter-technique precision can be offered by a Three Corner Hat (TCH) analysis of the non-linear residual time series obtained at ITRF co-location sites as a by-product of the stacking procedure. Non-linear residuals of station position time series stemming from global networks of the four techniques can be modeled as a composition of periodic signals (commonly annual and semi-annual) and stochastic noise, typically characterized as a combination of flicker and white noise. Pair-wise differences of station position time series of at least three co-located instruments can be formed with the aim of removing the common geophysical signal and characterizing the inter-technique precision. The application of TCH relies on the hypothesis of absence of correlation between the error processes of the four techniques and assumes the stochastic noise to be Gaussian. While the hypothesis of statistical independence between the space-geodetic technique errors is amply verified, the assumption that the stochastic error processes are pure white noise appears more questionable. In fact, previous studies focused on geodetic positioning consistently showed that flicker noise generally prevails over white noise in the analysis of global network GPS time series, whereas in VLBI, SLR and DORIS time series Gaussian noise is predominant.
In this investigation, TCH is applied
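
The classic three-cornered-hat estimator behind such an analysis recovers each instrument's noise variance from the pairwise differences of three co-located series, under the independence assumption discussed above. A minimal sketch with synthetic daily position residuals (technique names and noise levels are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(2000)

# Common geophysical signal (annual term) shared by three co-located series,
# plus independent white noise with a different level per technique.
signal = 3.0 * np.sin(2 * np.pi * t / 365.25)
sig = {"GPS": 1.0, "VLBI": 0.5, "SLR": 1.5}
series = {k: signal + s * rng.standard_normal(t.size) for k, s in sig.items()}

def tch(a, b, c):
    """Classic three-cornered-hat estimate of a's noise variance.

    The common signal cancels in each pairwise difference, leaving
    var(a-b) = va + vb, etc., which the half-sum formula inverts.
    """
    return 0.5 * (np.var(a - b) + np.var(a - c) - np.var(b - c))

for name in sig:
    others = [v for k, v in series.items() if k != name]
    est = np.sqrt(max(tch(series[name], *others), 0.0))
    print(f"{name}: true {sig[name]:.2f}, TCH estimate {est:.2f}")
```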

  11. [Development of Selective LC Analysis Method for Biogenic and Related Compounds Based on a Fluorous Affinity Technique].

    PubMed

    Hayama, Tadashi

    2015-01-01

    A separation-oriented derivatization method combined with LC has been developed for the selective analysis of biogenic and related compounds. In this method, we exploited the specific mutual affinity of perfluoroalkyl-containing, i.e., 'fluorous', compounds, known as fluorophilicity. Our strategy involves the derivatization of target analytes with perfluoroalkyl reagents, followed by selective retention of the derivatives on a perfluoroalkyl-modified stationary-phase LC column. The perfluoroalkylated derivatives are strongly retained on the column owing to their fluorophilicity, whereas non-derivatized species, such as sample matrices, are hardly retained. Therefore, utilizing this derivatization method, target analytes can be determined selectively without interference from matrices. This method has been successfully applied to the LC analysis of some biogenic and related compounds in complex biological samples. PMID:26329550

  12. Computer navigation vs conventional mechanical jig technique in hip resurfacing arthroplasty: a meta-analysis based on 7 studies.

    PubMed

    Liu, Hao; Li, Lin; Gao, Wei; Wang, Meilin; Ni, Chunhui

    2013-01-01

    Studies on the accuracy of femoral component placement in hip resurfacing arthroplasty with the help of computer-assisted navigation have been inconsistent. This study aims to assess the functional outcomes after computer navigation in hip resurfacing arthroplasty by systematically reviewing and meta-analyzing the data, searched up to December 2011 in PubMed, MEDLINE, EMBASE, MetaMed, EBSCO HOST, and Google Scholar. In total, 197 articles about hip resurfacing arthroplasty were collected; finally, 7 articles met the inclusion criteria and were included in this meta-analysis (520 patients with 555 hip resurfacing arthroplasties). The odds ratio for the number of outliers was 0.155 (95% confidence interval, 0.048-0.498; P < .003). In conclusion, this meta-analysis suggests that the computer-assisted navigation system makes femoral component positioning in hip resurfacing arthroplasty easier and more precise. PMID:22771091
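
For illustration, a fixed-effect inverse-variance pooling of log odds ratios is one standard way to combine 2x2 outcome tables in a meta-analysis of this kind (the paper's exact pooling model is not stated here, and the study counts below are hypothetical):

```python
import math

# Hypothetical outlier counts per study: (outliers, total) for the
# navigated arm, then (outliers, total) for the conventional-jig arm.
studies = [(2, 50, 9, 48), (1, 40, 6, 42), (3, 60, 10, 55)]

logs, weights = [], []
for a, n1, c, n2 in studies:
    b, d = n1 - a, n2 - c
    log_or = math.log((a * d) / (b * c))          # log odds ratio
    var = 1 / a + 1 / b + 1 / c + 1 / d           # its approximate variance
    logs.append(log_or)
    weights.append(1 / var)

# Fixed-effect pooling: weight each study by the inverse of its variance.
pooled = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
se = math.sqrt(1 / sum(weights))
lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"pooled OR {math.exp(pooled):.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

An odds ratio whose confidence interval lies entirely below 1, as reported in the abstract, indicates fewer outliers in the navigated arm.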

  13. Comprehensive analysis of mitochondrial permeability transition pore activity in living cells using fluorescence-imaging-based techniques.

    PubMed

    Bonora, Massimo; Morganti, Claudia; Morciano, Giampaolo; Giorgi, Carlotta; Wieckowski, Mariusz R; Pinton, Paolo

    2016-06-01

    Mitochondrial permeability transition (mPT) refers to a sudden increase in the permeability of the inner mitochondrial membrane. Long-term studies of mPT revealed that this phenomenon has a critical role in multiple pathophysiological processes. mPT is mediated by the opening of a complex termed the mPT pore (mPTP), which is responsible for the osmotic influx of water into the mitochondrial matrix, resulting in swelling of mitochondria and dissipation of the mitochondrial membrane potential. Here we provide three independent optimized protocols for monitoring mPT in living cells: (i) measurement using a calcein-cobalt technique, (ii) measurement of the mPTP-dependent alteration of the mitochondrial membrane potential, and (iii) measurement of mitochondrial swelling. These procedures can easily be modified and adapted to different cell types. Cell culture and preparation of the samples are estimated to take ∼1 d for methods (i) and (ii), and ∼3 d for method (iii). The entire experiment, including analyses, takes ∼2 h. PMID:27172167

  14. Bone quality around bioactive silica-based coated stainless steel implants: analysis by micro-Raman, XRF and XAS techniques.

    PubMed

    Ballarre, Josefina; Desimone, Paula M; Chorro, Matthieu; Baca, Matías; Orellano, Juan Carlos; Ceré, Silvia M

    2013-11-01

    Surface modification of surgical stainless steel implants with sol-gel coatings has been proposed as a tool to generate a surface that, besides being protective, could also create a "bioactive" interface generating a natural bond between the metal surface and the existing bone. The aim of this work is to analyze bone quality and bone formation around hybrid bioactive coatings containing glass-ceramic particles, made by the sol-gel process on 316L stainless steel used as a permanent implant, in terms of mineralization, calcium content and bone maturity, using micro-Raman, X-ray microfluorescence and X-ray absorption techniques. Uncoated implants seem to generate a thin bone layer at the beginning of the osseointegration process, but this layer separates from the surface over time. The hybrid coatings without glass-ceramic particles generate new bone around the implants, with a high concentration of Ca and P at the implant/tissue interface. This seems to be related to the presence of silica nanoparticles in the layer. The addition of bioactive particles promotes and enhances bone quality, with homogeneous Ca and P content and a low rate of beta-carbonate substitution and crystallinity, similar to young, mechanically resistant bone. PMID:24076155

  15. Analysis of the nonlinear behavior of shear-Alfvén modes in tokamaks based on Hamiltonian mapping techniques

    SciTech Connect

    Briguglio, S. Vlad, G.; Fogaccia, G.; Di Troia, C.; Fusco, V.; Wang, X.; Zonca, F.

    2014-11-15

    We present a series of numerical simulation experiments set up to illustrate the fundamental physics processes underlying the nonlinear dynamics of Alfvénic modes resonantly excited by energetic particles in tokamak plasmas and of the ensuing energetic particle transports. These phenomena are investigated by following the evolution of a test particle population in the electromagnetic fields computed in self-consistent MHD-particle simulation performed by the HMGC code. Hamiltonian mapping techniques are used to extract and illustrate several features of wave-particle dynamics. The universal structure of resonant particle phase space near an isolated resonance is recovered and analyzed, showing that bounded orbits and untrapped trajectories, divided by the instantaneous separatrix, form phase space zonal structures, whose characteristic non-adiabatic evolution time is the same as the nonlinear time of the underlying fluctuations. Bounded orbits correspond to a net outward resonant particle flux, which produces a flattening and/or gradient inversion of the fast ion density profile around the peak of the linear wave-particle resonance. The connection of this phenomenon to the mode saturation is analyzed with reference to two different cases: a Toroidal Alfvén eigenmode in a low shear magnetic equilibrium and a weakly unstable energetic particle mode for stronger magnetic shear. It is shown that, in the former case, saturation is reached because of radial decoupling (resonant particle redistribution matching the mode radial width) and is characterized by a weak dependence of the mode amplitude on the growth rate. In the latter case, saturation is due to resonance detuning (resonant particle redistribution matching the resonance width) with a stronger dependence of the mode amplitude on the growth rate.

  16. Analysis of algebraic reconstruction technique for accurate imaging of gas temperature and concentration based on tunable diode laser absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Hui-Hui, Xia; Rui-Feng, Kan; Jian-Guo, Liu; Zhen-Yu, Xu; Ya-Bai, He

    2016-06-01

    An improved algebraic reconstruction technique (ART) combined with tunable diode laser absorption spectroscopy (TDLAS) is presented in this paper for determining the two-dimensional (2D) distribution of H2O concentration and temperature in a simulated combustion flame. This work aims to simulate the reconstruction of spectroscopic measurements with a multi-view parallel-beam scanning geometry and to analyze the effect of the number of projection rays on reconstruction accuracy. Reconstruction quality improves markedly as the number of projection rays increases, up to about 180 rays for a 20 × 20 grid; beyond that point, additional rays have little influence on accuracy. The temperature reconstruction is more accurate than the water vapor concentration obtained by the traditional concentration calculation method. The present study also proposes an innovative way to reduce the error of the concentration reconstruction and thus greatly improve reconstruction quality, and the capability of this new method is evaluated using appropriate assessment parameters. With this new approach, not only is the concentration reconstruction accuracy greatly improved, but a suitable parallel-beam arrangement is also put forward for high reconstruction accuracy and simple experimental validation. Finally, a bimodal structure of the combustion region is assumed to demonstrate the robustness and universality of the proposed method. Numerical investigation indicates that the proposed TDLAS tomographic algorithm is capable of reconstructing accurate temperature and concentration profiles. Project supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 61205151), the National Key Scientific Instrument and Equipment Development Project of China (Grant
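
A generic ART (Kaczmarz) iteration, the starting point the paper improves on, projects the current image estimate onto the hyperplane defined by each measured line integral in turn. A toy sketch on a small absorbance grid (ray geometry and grid size are illustrative):

```python
import numpy as np

# Toy ART (Kaczmarz) reconstruction on a 4x4 absorbance grid.
rng = np.random.default_rng(3)
n = 4
truth = rng.uniform(0.5, 2.0, size=n * n)

# Projection matrix: horizontal, vertical, and two diagonal ray sums.
rows = []
img = np.arange(n * n).reshape(n, n)
for i in range(n):
    r = np.zeros(n * n); r[img[i, :]] = 1.0; rows.append(r)   # horizontal rays
    c = np.zeros(n * n); c[img[:, i]] = 1.0; rows.append(c)   # vertical rays
rows.append(np.eye(n).flatten())                               # main diagonal
rows.append(np.fliplr(np.eye(n)).flatten())                    # anti-diagonal
A = np.array(rows)
b = A @ truth                                                  # measured line integrals

# Kaczmarz sweeps: project the estimate onto each ray's hyperplane in turn.
x = np.zeros(n * n)
for sweep in range(200):
    for a_i, b_i in zip(A, b):
        x += a_i * (b_i - a_i @ x) / (a_i @ a_i)

# With only 10 rays for 16 unknowns the data are matched but the image is
# not unique, mirroring the paper's finding that accuracy grows with ray count.
print("relative error vs truth:", np.linalg.norm(x - truth) / np.linalg.norm(truth))
```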

  17. A dual resolution measurement based Monte Carlo simulation technique for detailed dose analysis of small volume organs in the skull base region

    NASA Astrophysics Data System (ADS)

    Yeh, Chi-Yuan; Tung, Chuan-Jung; Chao, Tsi-Chain; Lin, Mu-Han; Lee, Chung-Chi

    2014-11-01

    The purpose of this study was to examine the dose distribution of a skull base tumor and surrounding critical structures in response to high dose intensity-modulated radiosurgery (IMRS) with Monte Carlo (MC) simulation using a dual resolution sandwich phantom. The measurement-based Monte Carlo (MBMC) method (Lin et al., 2009) was adopted for the study. The major components of the MBMC technique involve (1) the BEAMnrc code for beam transport through the treatment head of a Varian 21EX linear accelerator, (2) the DOSXYZnrc code for patient dose simulation and (3) an EPID-measured efficiency map which describes the non-uniform fluence distribution of the IMRS treatment beam. For the simulated case, five isocentric 6 MV photon beams were designed to deliver a total dose of 1200 cGy in two fractions to the skull base tumor. A sandwich phantom for the MBMC simulation was created based on the patient's CT scan of a skull base tumor [gross tumor volume (GTV) = 8.4 cm³] near the right 8th cranial nerve. The phantom, consisting of a 1.2-cm-thick skull base region, had a voxel resolution of 0.05×0.05×0.1 cm³ and was sandwiched in between 0.05×0.05×0.3 cm³ slices of a head phantom. A coarser 0.2×0.2×0.3 cm³ single resolution (SR) phantom was also created for comparison with the sandwich phantom. A particle history of 3×10⁸ for each beam was used for simulations of both the SR and the sandwich phantoms to achieve a statistical uncertainty of <2%. Our study showed that the planning target volume (PTV) receiving at least 95% of the prescribed dose (V_PTV95) was 96.9%, 96.7% and 99.9% for the TPS, SR, and sandwich phantom, respectively. The maximum and mean doses to large organs such as the PTV, brain stem, and parotid gland for the TPS, SR and sandwich MC simulations did not show any significant difference; however, significant dose differences were observed for very small structures like the right 8th cranial nerve, right cochlea, right malleus and right semicircular canal. Dose

  18. Application of Electromigration Techniques in Environmental Analysis

    NASA Astrophysics Data System (ADS)

    Bald, Edward; Kubalczyk, Paweł; Studzińska, Sylwia; Dziubakiewicz, Ewelina; Buszewski, Bogusław

    Inherently trace-level concentrations of pollutants in the environment, together with the complexity of sample matrices, place a strong demand on the detection capabilities of electromigration methods. Significant progress is continually being made, widening the applicability of these techniques, mostly capillary zone electrophoresis, micellar electrokinetic chromatography, and capillary electrochromatography, to the analysis of real-world environmental samples, and improving the concentration sensitivity and robustness of the developed analytical procedures. This chapter covers the recent major developments in the domain of capillary electrophoresis analysis of environmental samples for pesticides, polycyclic aromatic hydrocarbons, phenols, amines, carboxylic acids, explosives, pharmaceuticals, and ionic liquids. Emphasis is placed on pre-capillary and on-capillary chromatography- and electrophoresis-based concentration of analytes and detection improvement.

  19. Typology of delivery quality: latent profile analysis of teacher engagement and delivery techniques in a school-based prevention intervention, keepin’ it REAL curriculum

    PubMed Central

    Shin, YoungJu; Miller-Day, Michelle; Pettigrew, Jonathan; Hecht, Michael L.; Krieger, Janice L.

    2014-01-01

    Enhancing the delivery quality of school-based, evidence-based prevention programs is one key to ensuring uniform program effects on student outcomes. Program evaluations often focus on content dosage when implementing prevention curricula; however, less is known about the implementation quality of prevention content, especially among teachers who may or may not have a prevention background. The goal of the current study is to add to the scholarly literature on implementation quality for a school-based substance use prevention intervention. Twenty-five schools in Ohio and Pennsylvania implemented the original keepin' it REAL (kiR) substance use prevention curriculum. Each of the 10 40–45-min lessons of the kiR curriculum was video recorded. Coders observed and rated a random sample of 276 videos reflecting 78 classes taught by 31 teachers. Codes included teachers' delivery techniques (e.g. lecture, discussion, demonstration and role play) and engagement with students (e.g. attentiveness, enthusiasm and positivity). Based on the video ratings, a latent profile analysis was run to identify a typology of delivery quality. Five profiles were identified: holistic approach, attentive teacher-oriented approach, enthusiastic lecture approach, engaged interactive learning approach and skill practice-only approach. This study provides a descriptive typology of delivery quality while implementing a school-based substance use prevention intervention. PMID:25274721

  20. Application of python-based Abaqus preprocess and postprocess technique in analysis of gearbox vibration and noise reduction

    NASA Astrophysics Data System (ADS)

    Yi, Guilian; Sui, Yunkang; Du, Jiazheng

    2011-06-01

    To reduce vibration and noise, a damping layer and a constraint layer are usually pasted on the inner surface of a gearbox thin shell, and their thicknesses are the main parameters in the vibration and noise reduction design. The normal acceleration of a point on the gearbox surface is the main index reflecting the vibration and noise at that point, and the normal accelerations of different points reflect the degree of vibration and noise of the whole structure. The K-S function is adopted to aggregate many points' normal accelerations into a comprehensive index of the vibration characteristics of the whole structure, and the vibration acceleration level is adopted to measure the degree of vibration and noise. Secondary development of the Abaqus preprocess and postprocess on the basis of Python scripting automatically modifies the model parameters, submits the job, and restarts the analysis, which avoids the tedious work of returning to Abaqus/CAE to modify and resubmit the model, and improves the speed of the preprocess and postprocess and the computational efficiency.
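
The K-S (Kreisselmeier-Steinhauser) aggregation mentioned above condenses the normal accelerations of many surface points into one smooth, conservative index close to their maximum. A minimal sketch (the acceleration values and the aggregation parameter rho are illustrative):

```python
import math

def ks_aggregate(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth, conservative max.

    Shifting by the max before exponentiating keeps the sum numerically
    stable; the result lies between max(values) and max(values) + ln(n)/rho.
    """
    m = max(values)
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

# Normal accelerations (m/s^2) sampled at several points on the gearbox shell.
accels = [0.8, 1.9, 2.4, 1.1, 2.35]
ks = ks_aggregate(accels)
print(ks)   # slightly above max(accels): one smooth index for all points
```

Because the K-S envelope is differentiable, it can serve directly as the objective in a gradient-based thickness optimization, which a hard max cannot.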

  1. Wavelet analysis of an Ionospheric foF_2 parameter as a Precursor of Earthquakes using Ground based Techniques

    NASA Astrophysics Data System (ADS)

    Sonakia, Anjana; Gwal, Ashok Kumar; Sondhiya, Deepak Kumar; Kasde, Satish Kumar

    Abstract: Wavelet analysis of the variations in the hourly mean value of the F2-layer critical frequency foF2 is performed in association with three earthquakes that occurred in New Zealand with magnitudes M > 6.0, depths h < 150 km and distances from the vertical sounding station R < 1500 km. For the study, data of the Christchurch sounding station are used, registered every hour in the years 1957-1990. It is shown that, on average, foF2 increases before the earthquakes. The aim of the present work is to show that foF2 increases significantly before New Zealand earthquakes and to determine the behavior of foF2 and the observable modification of the mean foF2 frequency in connection with the magnitudes of the earthquakes. Keywords: Earthquake precursor, Wavelet power spectrum, Scale-average wavelet power, Ionospheric Total Electron Content and foF2 parameter

  2. Analysis of land-use/land-cover change in the Carpathian region based on remote sensing techniques

    NASA Astrophysics Data System (ADS)

    Dezsõ, Zs.; Bartholy, J.; Pongrácz, R.; Barcza, Z.

    2003-04-01

    Human activities result in various significant environmental changes, and these complex feedback processes may cause dramatic changes in our everyday life. Among others, they include land-use and, consequently, land-cover changes. In order to study such complex variables, full spatial coverage of the given area is one of the key issues. The rapid development of satellite use in different research topics has provided an excellent tool to build agricultural monitoring systems and to improve our understanding of the complex links between air, water and land, including vegetation. In the last few years, serious flood events occurred in the watershed of the river Tisza (both in Hungary and in Ukraine). One of the reasons for these floods is heavy precipitation in the region, which results in severe runoff because of the significant change in land-use/land-cover. In this analysis, both land-use change and Normalized Difference Vegetation Index (NDVI) values for the Carpathian Region have been statistically analysed for the last two decades. Remotely sensed datasets observed by NOAA and NASA satellites are available for this period, with a spatial resolution of 1 to 8 km. Tendencies in the change of natural and artificial land-cover types are investigated in the upper watershed of the river Tisza. According to our estimates, the forest area on the Ukrainian part of the watershed decreased by about 10% in the last decade. Possible reasons include regional effects of global climate change, deforestation in the region, etc.
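
NDVI itself is a simple band ratio: reflectance in the near-infrared minus red, over their sum. A minimal sketch (the reflectance values are illustrative, not from the study's satellite data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

# Reflectance values typical of dense forest, cropland, and bare soil.
nir = np.array([0.50, 0.40, 0.30])
red = np.array([0.05, 0.10, 0.25])
v = ndvi(nir, red)
print(v)   # higher values indicate more vigorous vegetation
```

Tracking this index per pixel over two decades of NOAA/NASA imagery is what lets land-cover trends such as deforestation be quantified statistically.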

  3. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    PubMed Central

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666

  4. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    PubMed

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose an original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series. PMID:24895666
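
The hybrid pipeline can be sketched in miniature: decompose the series with a one-level wavelet transform, then regress on the lagged subseries. The sketch below substitutes a simple Haar split for the Mallat decomposition and ordinary least squares for the PSO-tuned MLR, so it illustrates the structure only, under those simplifications:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256
price = np.cumsum(rng.standard_normal(n)) + 50  # synthetic "crude oil" series

# One-level Haar transform: split the series into a smooth approximation
# and a detail subseries (a stand-in for the Mallat decomposition).
pairs = price[: n - n % 2].reshape(-1, 2)
approx = pairs.mean(axis=1)
detail = (pairs[:, 0] - pairs[:, 1]) / 2

# MLR step: regress the next approximation value on lagged approx and detail.
X = np.column_stack([approx[:-1], detail[:-1], np.ones(len(approx) - 1)])
y = approx[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
rmse = np.sqrt(np.mean((pred - y) ** 2))
print("in-sample RMSE:", rmse)
```

The full method would decompose to several levels, reduce the subseries with PCA, and tune the regression weights with PSO before forecasting out of sample.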

  5. Detailed fuel spray analysis techniques

    NASA Technical Reports Server (NTRS)

    Mularz, E. J.; Bosque, M. A.; Humenik, F. M.

    1983-01-01

    Detailed fuel spray analyses are a necessary input to the analytical modeling of the complex mixing and combustion processes which occur in advanced combustor systems. It is anticipated that by controlling fuel-air reaction conditions, combustor temperatures can be better controlled, leading to improved combustion system durability. Thus, a research program is underway to demonstrate the capability to measure liquid droplet size, velocity, and number density throughout a fuel spray and to utilize this measurement technique in laboratory benchmark experiments. The research activities from two contracts and one grant are described with results to date. The experiment to characterize fuel sprays is also described. These experiments and data should be useful for application to and validation of turbulent flow modeling to improve the design systems of future advanced technology engines.

  6. Digital techniques for ULF wave polarization analysis

    NASA Technical Reports Server (NTRS)

    Arthur, C. W.

    1979-01-01

    Digital power spectral and wave polarization analyses are powerful techniques for studying ULF waves in the earth's magnetosphere. Four different techniques for using the spectral matrix to perform such an analysis have been presented in the literature. Three of these techniques are similar in that they require transformation of the spectral matrix to the principal axis system prior to performing the polarization analysis. The differences in the three techniques lie in the manner in which they determine this transformation. A comparative study of these three techniques using both simulated and real data has shown them to be approximately equal in quality of performance. The fourth technique does not require transformation of the spectral matrix. Rather, it uses the measured spectral matrix and state vectors for a desired wave type to design a polarization detector function in the frequency domain. The design of various detector functions and their application to both simulated and real data will be presented.

  7. AI-based technique for tracking chains of discontinuous symbols and its application to the analysis of topographic maps

    NASA Astrophysics Data System (ADS)

    Mecocci, Alessandro; Lilla, Massimiliano

    1994-12-01

    Automatic digitization of topographic maps is a very important task nowadays. Among the different elements of a topographic map, discontinuous lines represent important information. They are generally difficult to track because they show very large gaps and abrupt direction changes. In this paper, an architecture that automates the digitization of discontinuous lines (dot-dot lines, dash-dot-dash lines, dash-asterisk lines, etc.) is presented. The tracking process must detect the elementary symbols and then concatenate these symbols into a significant chain that represents the line. The proposed architecture is composed of a common kernel, based on a suitable modification of the A* algorithm, that starts different auxiliary processes depending on the particular line to be tracked. Three auxiliary processes are considered: search strategy generation (SSG), which is responsible for the strategy used to scan the image pixels; low-level symbol detection (LSD), which decides if a certain image region around the pixel selected by the SSG is an elementary symbol; and cost evaluation (CE), which gives the quality of each symbol with respect to the global course of the line. The whole system has been tested on a 1:50.000 map furnished by the Istituto Geografico Militare Italiano (IGMI). The results were very good for different types of discontinuous lines. Over the whole map (i.e. about 80 Mbytes of digitized data), 95% of the elementary symbols of the lines were correctly chained. The operator time required to correct misclassifications is a small part of the time needed to manually digitize the discontinuous lines.
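
The kernel named above is a modified A*; a plain textbook A* on a 4-connected grid, which the paper's architecture extends to chain detected symbols across gaps, looks like this (the grid, wall, and endpoints are illustrative):

```python
import heapq

def astar(start, goal, passable):
    """Plain A* on a 4-connected grid with unit step costs."""
    def h(p):  # Manhattan heuristic: admissible and consistent here
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start)]
    came, cost = {start: None}, {start: 0}
    while frontier:
        _, g, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if passable(nxt) and g + 1 < cost.get(nxt, float("inf")):
                cost[nxt] = g + 1
                came[nxt] = cur
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None

# 5x5 grid with a wall; cells stand in for detected symbol candidates.
wall = {(1, 1), (1, 2), (1, 3)}
inside = lambda p: 0 <= p[0] < 5 and 0 <= p[1] < 5 and p not in wall
path = astar((0, 0), (4, 4), inside)
print(len(path) - 1, path)
```

In the paper's setting, the neighbors would be candidate symbols rather than pixels, and the step cost would come from the CE process instead of being constant.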

  8. Signal analysis techniques for incipient failure detection in turbomachinery

    NASA Technical Reports Server (NTRS)

    Coffin, T.

    1985-01-01

    Signal analysis techniques for the detection and classification of incipient mechanical failures in turbomachinery were developed, implemented and evaluated. Signal analysis techniques available to describe dynamic measurement characteristics are reviewed. Time domain and spectral methods are described, and statistical classification in terms of moments is discussed. Several of these waveform analysis techniques were implemented on a computer and applied to dynamic signals. A laboratory evaluation of the methods with respect to signal detection capability is described. Plans for further technique evaluation and database development to characterize turbopump incipient failure modes from Space Shuttle main engine (SSME) hot firing measurements are outlined.

  9. Innovative Techniques Simplify Vibration Analysis

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the early years of development, Marshall Space Flight Center engineers encountered challenges related to components in the space shuttle main engine. To assess the problems, they evaluated the effects of vibration and oscillation. To enhance the method of vibration signal analysis, Marshall awarded Small Business Innovation Research (SBIR) contracts to AI Signal Research, Inc. (ASRI), in Huntsville, Alabama. ASRI developed a software package called PC-SIGNAL that NASA now employs on a daily basis, and in 2009, the PKP-Module won Marshall's Software of the Year award. The technology is also used in many industries: aircraft and helicopter, rocket engine manufacturing, transportation, and nuclear power.

  10. A numerical comparison of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Engineering and scientific phenomena are often studied with the aid of mathematical models designed to simulate complex physical processes. In the nuclear industry, modeling the movement and consequence of radioactive pollutants is extremely important for environmental protection and facility control. One of the steps in model development is the determination of the parameters most influential on model results. A "sensitivity analysis" of these parameters is not only critical to model validation but also serves to guide future research. A previous manuscript (Hamby) detailed many of the available methods for conducting sensitivity analyses. The current paper is a comparative assessment of several methods for estimating relative parameter sensitivity. Method practicality is based on calculational ease and usefulness of the results. It is the intent of this report to demonstrate calculational rigor and to compare parameter sensitivity rankings resulting from various sensitivity analysis techniques. An atmospheric tritium dosimetry model (Hamby) is used here as an example, but the techniques described can be applied to many different modeling problems. Other investigators (Rose; Dalrymple and Broyd) present comparisons of sensitivity analyses methodologies, but none as comprehensive as the current work.
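
A common baseline among the compared methods is the one-at-a-time normalized sensitivity coefficient, (dy/dx_i)(x_i/y), estimated by finite differences. A minimal sketch with a hypothetical multiplicative dose model (not the tritium model of the paper):

```python
def model(params):
    # Hypothetical toy dose model: dose = intake * dose_factor / dilution
    return params["intake"] * params["dose_factor"] / params["dilution"]

def relative_sensitivity(model, base, name, h=1e-6):
    """One-at-a-time normalized sensitivity: (dy/dx) * (x/y).

    A value near +1 or -1 means a 1% change in the parameter shifts the
    output by about 1%, which makes parameters directly comparable.
    """
    y0 = model(base)
    bumped = dict(base, **{name: base[name] * (1 + h)})
    dy = model(bumped) - y0
    return (dy / (base[name] * h)) * (base[name] / y0)

base = {"intake": 2.0, "dose_factor": 5.0, "dilution": 4.0}
ranks = {k: relative_sensitivity(model, base, k) for k in base}
print(ranks)  # multiplicative parameters give |S| = 1 for each
```

Ranking parameters by |S| is the kind of output the compared techniques must agree on for a model to be considered robustly characterized.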

  11. Comparing Techniques for Certified Static Analysis

    NASA Technical Reports Server (NTRS)

    Cachera, David; Pichardie, David

    2009-01-01

    A certified static analysis is an analysis whose semantic validity has been formally proved correct with a proof assistant. The recent increasing interest in using proof assistants for mechanizing programming language metatheory has given rise to several approaches for certification of static analysis. We propose a panorama of these techniques and compare their respective strengths and weaknesses.

  12. A New Microcell Technique for NMR Analysis.

    ERIC Educational Resources Information Center

    Yu, Sophia J.

    1987-01-01

    Describes a new laboratory technique for working with small samples of compounds used in nuclear magnetic resonance (NMR) analysis. Demonstrates how microcells can be constructed for each experiment and samples can be recycled. (TW)

  13. Statistical Evaluation of Time Series Analysis Techniques

    NASA Technical Reports Server (NTRS)

    Benignus, V. A.

    1973-01-01

    The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.
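    The kind of Monte Carlo evaluation of a spectral estimator described above can be sketched as follows; the signal, noise level, and record lengths are assumptions for illustration, and the multiple-regression modification itself is not reproduced:

```python
import numpy as np

def periodogram(x, fs):
    """Single-record power spectral density estimate via the FFT."""
    n = len(x)
    X = np.fft.rfft(x)
    psd = (np.abs(X) ** 2) / (fs * n)
    return np.fft.rfftfreq(n, d=1.0 / fs), psd

# Monte Carlo evaluation: average the estimator over many independent
# noisy records and check that it recovers the known 10 Hz tone.
rng = np.random.default_rng(0)
fs, n, trials, f0 = 100.0, 1024, 200, 10.0
t = np.arange(n) / fs
psds = []
for _ in range(trials):
    x = np.sin(2 * np.pi * f0 * t) + rng.normal(0.0, 1.0, n)
    freqs, psd = periodogram(x, fs)
    psds.append(psd)
mean_psd = np.mean(psds, axis=0)
peak_freq = freqs[np.argmax(mean_psd)]   # should land near 10 Hz
```

    Averaging over independent realizations reduces the variance of the raw periodogram, which is the property a Monte Carlo study of estimator performance exploits.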

  14. Techniques for Enhancing Web-Based Education.

    ERIC Educational Resources Information Center

    Barbieri, Kathy; Mehringer, Susan

    The Virtual Workshop is a World Wide Web-based set of modules on high performance computing developed at the Cornell Theory Center (CTC) (New York). This approach reaches a large audience, leverages staff effort, and poses challenges for developing interesting presentation techniques. This paper describes the following techniques with their…

  15. Exact geometry solid-shell element based on a sampling surfaces technique for 3D stress analysis of doubly-curved composite shells

    NASA Astrophysics Data System (ADS)

    Kulikov, G. M.; Mamontov, A. A.; Plotnikova, S. V.; Mamontov, S. A.

    2015-11-01

    A hybrid-mixed ANS four-node shell element using the sampling surfaces (SaS) technique is developed. The SaS formulation is based on choosing, inside the nth layer, I_n not-equally-spaced SaS parallel to the middle surface of the shell, in order to introduce the displacements of these surfaces as basic shell variables. This choice of unknowns, with the consequent use of Lagrange polynomials of degree I_n - 1 in the thickness direction for each layer, permits the presentation of the layered shell formulation in a very compact form. The SaS are located inside each layer at Chebyshev polynomial nodes, which allows one to minimize uniformly the error due to the Lagrange interpolation. To implement efficient analytical integration throughout the element, the enhanced ANS method is employed. The proposed hybrid-mixed four-node shell element is based on the Hu-Washizu variational equation and exhibits a superior performance in the case of coarse meshes. It could be useful for the 3D stress analysis of thick and thin doubly-curved shells, since the SaS formulation gives the possibility to obtain numerical solutions with a prescribed accuracy, which asymptotically approach the exact solutions of elasticity as the number of SaS tends to infinity.
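    The through-thickness interpolation described, Lagrange polynomials through SaS placed at Chebyshev nodes, can be sketched as follows; the displacement profile is a stand-in, not taken from the paper:

```python
import numpy as np

def chebyshev_nodes(n, a, b):
    """n Chebyshev polynomial nodes mapped onto a layer [a, b]."""
    k = np.arange(1, n + 1)
    return 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2 * k - 1) * np.pi / (2 * n))

def lagrange_basis(nodes, j, z):
    """j-th Lagrange polynomial of degree len(nodes) - 1, evaluated at z."""
    L = 1.0
    for m, zm in enumerate(nodes):
        if m != j:
            L *= (z - zm) / (nodes[j] - zm)
    return L

# Interpolate a stand-in displacement profile through a layer of
# thickness h from I_n sampling surfaces located at Chebyshev nodes.
h, I_n = 1.0, 5
z_sas = chebyshev_nodes(I_n, 0.0, h)
u_sas = np.sin(np.pi * z_sas / h)        # assumed through-thickness profile
z = 0.37 * h
u = sum(u_sas[j] * lagrange_basis(z_sas, j, z) for j in range(I_n))
```

    Placing the interpolation points at Chebyshev rather than equally spaced nodes is what keeps the Lagrange interpolation error uniformly small as I_n grows.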

  16. Advanced analysis techniques for uranium assay

    SciTech Connect

    Geist, W. H.; Ensslin, Norbert; Carrillo, L. A.; Beard, C. A.

    2001-01-01

    Uranium has a negligible passive neutron emission rate, making its assay practicable only with an active interrogation method. The active interrogation uses external neutron sources to induce fission events in the uranium in order to determine the mass. This technique requires careful calibration with standards that are representative of the items to be assayed. The samples to be measured are not always well represented by the available standards, which often leads to large biases. A technique of active multiplicity counting is being developed to reduce some of these assay difficulties. Active multiplicity counting uses the measured doubles and triples count rates to determine the neutron multiplication (f4) and the product of the source-sample coupling (C) and the 235U mass (m). Since the 235U mass always appears in the multiplicity equations as the product Cm, the coupling needs to be determined before the mass can be known. A relationship has been developed that relates the coupling to the neutron multiplication. The relationship is based on both an analytical derivation and also on empirical observations. To determine a scaling constant present in this relationship, known standards must be used. Evaluation of experimental data revealed an improvement over the traditional calibration-curve analysis method of fitting the doubles count rate to the 235U mass. Active multiplicity assay appears to relax the requirement that the calibration standards and unknown items have the same chemical form and geometry.
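    The traditional calibration-curve approach that active multiplicity counting improves on can be sketched as follows; the standards, response form, and constants here are illustrative assumptions, not measured data:

```python
import numpy as np

# Hypothetical calibration standards: 235U mass (g) and doubles rate (1/s).
mass_std = np.array([10.0, 50.0, 100.0, 200.0, 400.0])
doubles_std = 2.5 * mass_std ** 0.9      # assumed detector response

# Fit a power-law calibration curve D = a * m**b in log space.
b, log_a = np.polyfit(np.log(mass_std), np.log(doubles_std), 1)
a = np.exp(log_a)

def mass_from_doubles(D):
    """Invert the fitted calibration curve to assay an unknown item."""
    return (D / a) ** (1.0 / b)

unknown_doubles = 2.5 * 150.0 ** 0.9     # item whose true mass is 150 g
estimated_mass = mass_from_doubles(unknown_doubles)
```

    The bias problem the abstract describes arises exactly here: the fitted curve is only valid for items whose geometry and chemical form match the standards.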

  17. Photomorphic analysis techniques: An interim spatial analysis using satellite remote sensor imagery and historical data

    NASA Technical Reports Server (NTRS)

    Keuper, H. R.; Peplies, R. W.; Gillooly, R. P.

    1977-01-01

    The use of machine scanning and/or computer-based techniques to provide greater objectivity in the photomorphic approach was investigated. Photomorphic analysis and its application in regional planning are discussed. Topics included: delineation of photomorphic regions; inadequacies of existing classification systems; tonal and textural characteristics and signature analysis techniques; pattern recognition and Fourier transform analysis; and optical experiments. A bibliography is included.

  18. Magnetic Analysis Techniques Applied to Desert Varnish

    NASA Technical Reports Server (NTRS)

    Schmidgall, E. R.; Moskowitz, B. M.; Dahlberg, E. D.; Kuhlman, K. R.

    2003-01-01

    Desert varnish is a black or reddish coating commonly found on rock samples from arid regions. Typically, the coating is very thin, less than half a millimeter thick. Previous research has shown that the primary components of desert varnish are silicon oxide clay minerals (60%), manganese and iron oxides (20-30%), and trace amounts of other compounds [1]. Desert varnish is thought to originate when windborne particles containing iron and manganese oxides are deposited onto rock surfaces where manganese oxidizing bacteria concentrate the manganese and form the varnish [4,5]. If desert varnish is indeed biogenic, then the presence of desert varnish on rock surfaces could serve as a biomarker, indicating the presence of microorganisms. This idea has considerable appeal, especially for Martian exploration [6]. Magnetic analysis techniques have not been extensively applied to desert varnish. The only previous magnetic study reported that based on room temperature demagnetization experiments, there were noticeable differences in magnetic properties between a sample of desert varnish and the substrate sandstone [7]. Based upon the results of the demagnetization experiments, the authors concluded that the primary magnetic component of desert varnish was either magnetite (Fe3O4) or maghemite (γ-Fe2O3).

  19. Emerging techniques for ultrasensitive protein analysis.

    PubMed

    Yang, Xiaolong; Tang, Yanan; Alt, Ryan R; Xie, Xiaoyu; Li, Feng

    2016-06-21

    Many important biomarkers for devastating diseases and biochemical processes are proteins present at ultralow levels. Traditional techniques, such as enzyme-linked immunosorbent assays (ELISA), mass spectrometry, and protein microarrays, are often not sensitive enough to detect proteins with concentrations below the picomolar level, thus requiring the development of analytical techniques with ultrahigh sensitivities. In this review, we highlight the recent advances in developing novel techniques, sensors, and assays for ultrasensitive protein analysis. Particular attention will be focused on three classes of signal generation and/or amplification mechanisms, including the uses of nanomaterials, nucleic acids, and digital platforms. PMID:26898911

  20. Quantifying sources of black carbon in western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Zhang, R.; Wang, H.; Hegg, D. A.; Qian, Y.; Doherty, S. J.; Dang, C.; Ma, P.-L.; Rasch, P. J.; Fu, Q.

    2015-11-01

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over northwestern USA and western Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based positive matrix factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.
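    The positive matrix factorization step can be illustrated with a minimal non-negative factorization of a synthetic samples-by-species matrix. This sketch uses unweighted Lee-Seung multiplicative updates; PMF proper additionally weights residuals by per-measurement uncertainty, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snow-impurity matrix: 30 samples x 4 species, mixed from two
# non-negative source profiles (stand-ins for BB and FF signatures).
profiles = np.array([[5.0, 1.0, 0.2, 3.0],
                     [0.5, 4.0, 2.0, 0.1]])
contributions = rng.uniform(0.0, 1.0, (30, 2))
X = contributions @ profiles + 0.01 * rng.uniform(0.0, 1.0, (30, 4))

def nmf(X, k, iters=500):
    """Lee-Seung multiplicative updates for X ~ W @ H with W, H >= 0."""
    W = rng.uniform(0.1, 1.0, (X.shape[0], k))
    H = rng.uniform(0.1, 1.0, (k, X.shape[1]))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

W, H = nmf(X, 2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)  # relative misfit
```

    The rows of H recover source chemical profiles and the columns of W the per-sample source contributions, which is how the observational analysis partitions BC between BB and FF.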

  1. Quantifying sources of black carbon in Western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    NASA Astrophysics Data System (ADS)

    Zhang, R.; Wang, H.; Hegg, D. A.; Qian, Y.; Doherty, S. J.; Dang, C.; Ma, P.-L.; Rasch, P. J.; Fu, Q.

    2015-05-01

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over Western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the Northwest USA and West Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based Positive Matrix Factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  2. Quantifying sources of black carbon in Western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    DOE PAGESBeta

    Zhang, R.; Wang, H.; Hegg, D. A.; Qian, Y.; Doherty, S. J.; Dang, C.; Ma, P.-L.; Rasch, P. J.; Fu, Q.

    2015-05-04

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over Western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the Northwest USA and West Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based Positive Matrix Factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  3. Quantifying sources of black carbon in western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    DOE PAGESBeta

    Zhang, R.; Wang, H.; Hegg, D. A.; Qian, Y.; Doherty, S. J.; Dang, C.; Ma, P.-L.; Rasch, P. J.; Fu, Q.

    2015-11-18

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over northwestern USA and western Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based positive matrix factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  4. Quantifying sources of black carbon in Western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    SciTech Connect

    Zhang, Rudong; Wang, Hailong; Hegg, D. A.; Qian, Yun; Doherty, Sarah J.; Dang, Cheng; Ma, Po-Lun; Rasch, Philip J.; Fu, Qiang

    2015-11-18

    The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over Western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013 when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the Northwest USA and West Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based Positive Matrix Factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  5. A review of sensitivity analysis techniques

    SciTech Connect

    Hamby, D.M.

    1993-12-31

    Mathematical models are utilized to approximate various highly complex engineering, physical, environmental, social, and economic phenomena. Model parameters exerting the most influence on model results are identified through a "sensitivity analysis." A comprehensive review is presented of more than a dozen sensitivity analysis methods. The most fundamental of sensitivity techniques utilizes partial differentiation whereas the simplest approach requires varying parameter values one-at-a-time. Correlation analysis is used to determine relationships between independent and dependent variables. Regression analysis provides the most comprehensive sensitivity measure and is commonly utilized to build response surfaces that approximate complex models.
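    Two of the simplest methods reviewed, one-at-a-time perturbation and correlation analysis, can be sketched on a toy model; the model form and parameter ranges below are assumptions for illustration:

```python
import numpy as np

def model(params):
    """Toy model: strong dependence on x0 and x1, weak on x2 (assumed)."""
    x0, x1, x2 = params
    return 3.0 * x0 + 1.5 * x1 ** 2 + 0.1 * x2

base = np.array([1.0, 1.0, 1.0])

# One-at-a-time sensitivity: perturb each parameter by 10% in turn.
oat = []
for i in range(3):
    p = base.copy()
    p[i] *= 1.10
    oat.append(abs(model(p) - model(base)))
ranking = np.argsort(oat)[::-1]          # most influential parameter first

# Correlation-based sensitivity from Monte Carlo sampling.
rng = np.random.default_rng(0)
samples = rng.uniform(0.5, 1.5, (1000, 3))
outputs = np.array([model(s) for s in samples])
corr = [np.corrcoef(samples[:, i], outputs)[0, 1] for i in range(3)]
```

    Both measures should rank x2 least influential here; comparing such rankings across methods is precisely the comparative exercise the review describes.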

  6. Autofluorescence based diagnostic techniques for oral cancer

    PubMed Central

    Balasubramaniam, A. Murali; Sriraman, Rajkumari; Sindhuja, P.; Mohideen, Khadijah; Parameswar, R. Arjun; Muhamed Haris, K. T.

    2015-01-01

    Oral cancer is one of the most common cancers worldwide. Despite various advancements in treatment modalities, oral cancer mortality remains high, particularly in developing countries like India. This is mainly due to the delay in diagnosis of oral cancer. Delay in diagnosis greatly reduces the prognosis of treatment and also causes increased morbidity and mortality rates. Early diagnosis plays a key role in effective management of oral cancer. A rapid diagnostic technique can greatly aid in the early diagnosis of oral cancer. Nowadays, many adjunctive oral cancer screening techniques are available for the early diagnosis of cancer. Among these, autofluorescence based diagnostic techniques are rapidly emerging as a powerful tool. These techniques are broadly discussed in this review. PMID:26538880

  7. Gold analysis by the gamma absorption technique.

    PubMed

    Kurtoglu, Arzu; Tugrul, A Beril

    2003-01-01

    Gold (Au) analyses are generally performed using destructive techniques. In this study, the Gamma Absorption Technique has been employed for gold analysis. A series of different gold alloys of known gold content were analysed and a calibration curve was obtained. This curve was then used for the analysis of unknown samples. Gold analyses can be made non-destructively, easily and quickly by the gamma absorption technique. The mass attenuation coefficients of the alloys were measured around the K-shell absorption edge of Au. Theoretical mass attenuation coefficient values were obtained using the WinXCom program and comparison of the experimental results with the theoretical values showed generally good and acceptable agreement. PMID:12485656
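    The calibration-curve idea behind the gamma absorption technique can be sketched with Beer-Lambert attenuation; the attenuation values and alloy data below are hypothetical, not the paper's measurements:

```python
import numpy as np

# Hypothetical transmission data for alloy standards of known gold
# fraction; attenuation coefficients here are illustrative, not measured.
gold_fraction = np.array([0.25, 0.375, 0.50, 0.75, 0.9167])
mu = 2.0 + 4.0 * gold_fraction           # assumed mass attenuation, cm^2/g
rho_x = 0.5                              # areal density of sample, g/cm^2
I0 = 1.0e6                               # incident gamma counts
I = I0 * np.exp(-mu * rho_x)             # Beer-Lambert attenuation

# Calibration curve: measured attenuation coefficient vs gold content.
mu_meas = -np.log(I / I0) / rho_x
slope, intercept = np.polyfit(gold_fraction, mu_meas, 1)

def assay(I_unknown):
    """Estimate gold fraction of an unknown sample from its transmission."""
    return (-np.log(I_unknown / I0) / rho_x - intercept) / slope

I_test = I0 * np.exp(-(2.0 + 4.0 * 0.583) * rho_x)   # "unknown" sample
estimated = assay(I_test)
```

    Working near the Au K-shell absorption edge, as the study does, maximizes the sensitivity of the attenuation coefficient to gold content.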

  8. Techniques for the Analysis of Human Movement.

    ERIC Educational Resources Information Center

    Grieve, D. W.; And Others

    This book presents the major analytical techniques that may be used in the appraisal of human movement. Chapter 1 is devoted to the photopgraphic analysis of movement with particular emphasis on cine filming. Cine film may be taken with little or no restriction on the performer's range of movement; information on the film is permanent and…

  9. Aerosol particle analysis by Raman scattering technique

    SciTech Connect

    Fung, K.H.; Tang, I.N.

    1992-10-01

    Laser Raman spectroscopy is a very versatile tool for chemical characterization of micron-sized particles. Such particles are abundant in nature, and in numerous energy-related processes. In order to elucidate the formation mechanisms and understand the subsequent chemical transformation under a variety of reaction conditions, it is imperative to develop analytical measurement techniques for in situ monitoring of these suspended particles. In this report, we outline our recent work on spontaneous Raman, resonance Raman and non-linear Raman scattering as a novel technique for chemical analysis of aerosol particles as well as supersaturated solution droplets.

  10. Respiratory monitoring system based on the nasal pressure technique for the analysis of sleep breathing disorders: Reduction of static and dynamic errors, and comparisons with thermistors and pneumotachographs

    NASA Astrophysics Data System (ADS)

    Alves de Mesquita, Jayme; Lopes de Melo, Pedro

    2004-03-01

    Thermally sensitive devices—thermistors—have usually been used to monitor sleep-breathing disorders. However, because of their long time constant, these devices are not able to provide a good characterization of fast events, like hypopneas. The nasal pressure recording technique (NPR) has recently been suggested to quantify airflow during sleep. It is claimed that the short time constants of the devices used to implement this technique would allow an accurate analysis of fast abnormal respiratory events. However, these devices present errors associated with nonlinearities and acoustic resonance that could reduce the diagnostic value of the NPR. Moreover, in spite of the high scientific and clinical potential, there is no detailed description of a complete instrumentation system to implement this promising technique in sleep studies. In this context, the purpose of this work was twofold: (1) describe the development of a flexible NPR device and (2) evaluate the performance of this device when compared to pneumotachographs (PNTs) and thermistors. After the design details are described, the system static accuracy is evaluated by a comparative analysis with a PNT. This analysis revealed a significant reduction (p<0.001) of the static error when system nonlinearities were reduced. The dynamic performance of the NPR system was investigated by frequency response analysis and time constant evaluations, and the results showed that the developed device response was as good as the PNT and around 100 times faster (τ=5.3 ms) than thermistors (τ=512 ms). Experimental results obtained in simulated clinical conditions and in a patient are presented as examples, and confirmed the good features achieved in engineering tests. These results are in close agreement with physiological fundamentals, supplying substantial evidence that the improved dynamic and static characteristics of this device can contribute to a more accurate implementation of medical research projects and to improve the
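    The dynamic advantage quoted above follows directly from first-order sensor behavior; a sketch using the reported time constants (the 50 ms event duration is an assumed hypopnea-scale figure):

```python
import numpy as np

def step_response(tau, t):
    """Fraction of a step change in airflow seen by a first-order sensor."""
    return 1.0 - np.exp(-t / tau)

tau_npr, tau_therm = 0.0053, 0.512   # seconds, as reported in the abstract
t_event = 0.05                       # a 50 ms (hypopnea-scale) event

npr = step_response(tau_npr, t_event)      # NPR tracks the event almost fully
therm = step_response(tau_therm, t_event)  # thermistor barely responds
```

    With a time constant roughly 100 times shorter, the NPR device settles well within a fast respiratory event while the thermistor has registered only a small fraction of the change.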

  11. Application of spectral analysis techniques to the intercomparison of aerosol data - Part 4: Combined maximum covariance analysis to bridge the gap between multi-sensor satellite retrievals and ground-based measurements

    NASA Astrophysics Data System (ADS)

    Li, J.; Carlson, B. E.; Lacis, A. A.

    2014-04-01

    The development of remote sensing techniques has greatly advanced our knowledge of atmospheric aerosols. Various satellite sensors and the associated retrieval algorithms all add to the information on global aerosol variability, while well-designed surface networks provide time series of highly accurate measurements at specific locations. In studying the variability of aerosol properties, aerosol climate effects, and constraining aerosol fields in climate models, it is essential to make the best use of all of the available information. In the previous three parts of this series, we demonstrated the usefulness of several spectral decomposition techniques in the analysis and comparison of temporal and spatial variability of aerosol optical depth using satellite and ground-based measurements. Specifically, Principal Component Analysis (PCA) successfully captures and isolates seasonal and interannual variability from different aerosol source regions, Maximum Covariance Analysis (MCA) provides a means to verify the variability in one satellite dataset against Aerosol Robotic Network (AERONET) data, and Combined Principal Component Analysis (CPCA) enables parallel comparison among multi-satellite, multi-sensor datasets. As the final part of the study, this paper introduces a novel technique that integrates both multi-sensor datasets and ground observations, and thus effectively bridges the gap between these two types of measurements. The Combined Maximum Covariance Analysis (CMCA) decomposes the cross covariance matrix between the combined multi-sensor satellite data field and AERONET station data. We show that this new method not only confirms the seasonal and interannual variability of aerosol optical depth, aerosol source regions and events represented by different satellite datasets, but also identifies the strengths and weaknesses of each dataset in capturing the variability associated with sources, events or aerosol types. 
Furthermore, by examining the spread of
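    The core of MCA (and of CMCA, once the satellite datasets are combined) is an SVD of the cross-covariance between the field and the station records; a sketch on synthetic data, where the shared seasonal mode and noise levels are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic AOD anomalies sharing one seasonal mode: a satellite field
# (120 months x 50 cells) and station records (120 months x 8 sites).
t = np.arange(120)
mode = np.sin(2 * np.pi * t / 12.0)
sat = np.outer(mode, rng.uniform(0.5, 1.5, 50)) + 0.1 * rng.normal(size=(120, 50))
stn = np.outer(mode, rng.uniform(0.5, 1.5, 8)) + 0.1 * rng.normal(size=(120, 8))

# Maximum covariance analysis: SVD of the cross-covariance matrix.
sat_a = sat - sat.mean(axis=0)
stn_a = stn - stn.mean(axis=0)
C = sat_a.T @ stn_a / (len(t) - 1)
U, s, Vt = np.linalg.svd(C, full_matrices=False)
explained = s[0] ** 2 / np.sum(s ** 2)   # covariance carried by mode 1

# Expansion coefficients and their coupling strength.
pc_sat = sat_a @ U[:, 0]
pc_stn = stn_a @ Vt[0]
coupling = np.corrcoef(pc_sat, pc_stn)[0, 1]
```

    The leading singular vectors give the coupled spatial patterns, and the correlation of their expansion coefficients measures how strongly the satellite and ground datasets co-vary.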

  12. Infusing Reliability Techniques into Software Safety Analysis

    NASA Technical Reports Server (NTRS)

    Shi, Ying

    2015-01-01

    Software safety analysis for a large software intensive system is always a challenge. Software safety practitioners need to ensure that software related hazards are completely identified, controlled, and tracked. This paper discusses in detail how to incorporate the traditional reliability techniques into the entire software safety analysis process. In addition, this paper addresses how information can be effectively shared between the various practitioners involved in the software safety analyses. The author has successfully applied the approach to several aerospace applications. Examples are provided to illustrate the key steps of the proposed approach.

  13. Multiview video codec based on KTA techniques

    NASA Astrophysics Data System (ADS)

    Seo, Jungdong; Kim, Donghyun; Ryu, Seungchul; Sohn, Kwanghoon

    2011-03-01

    Multi-view video coding (MVC) is a video coding standard developed by MPEG and VCEG for multi-view video. It showed an average PSNR gain of 1.5 dB compared with view-independent coding by H.264/AVC. However, because resolutions of multi-view video are getting higher for more realistic 3D effects, a higher-performance video codec is needed. MVC adopted the hierarchical B-picture structure and inter-view prediction as core techniques. The hierarchical B-picture structure removes temporal redundancy, and inter-view prediction reduces inter-view redundancy by compensated prediction from the reconstructed neighboring views. Nevertheless, MVC has an inherent limitation in coding efficiency, because it is based on H.264/AVC. To overcome this limit, an enhanced video codec for multi-view video based on Key Technology Area (KTA) techniques is proposed. KTA is a high-efficiency video codec by the Video Coding Experts Group (VCEG), developed to push coding efficiency beyond H.264/AVC. The KTA software showed better coding gain than H.264/AVC by using additional coding techniques. These techniques and inter-view prediction are implemented in the proposed codec, which showed high coding gain compared with the view-independent coding result by KTA. The results show that inter-view prediction can achieve higher efficiency in a multi-view video codec based on a high-performance video codec such as HEVC.

  14. Accelerometer Data Analysis and Presentation Techniques

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Hrovat, Kenneth; McPherson, Kevin; Moskowitz, Milton E.; Reckart, Timothy

    1997-01-01

    The NASA Lewis Research Center's Principal Investigator Microgravity Services project analyzes Orbital Acceleration Research Experiment and Space Acceleration Measurement System data for principal investigators of microgravity experiments. Principal investigators need a thorough understanding of data analysis techniques so that they can request appropriate analyses to best interpret accelerometer data. Accelerometer data sampling and filtering is introduced along with the related topics of resolution and aliasing. Specific information about the Orbital Acceleration Research Experiment and Space Acceleration Measurement System data sampling and filtering is given. Time domain data analysis techniques are discussed and example environment interpretations are made using plots of acceleration versus time, interval average acceleration versus time, interval root-mean-square acceleration versus time, trimmean acceleration versus time, quasi-steady three dimensional histograms, and prediction of quasi-steady levels at different locations. An introduction to Fourier transform theory and windowing is provided along with specific analysis techniques and data interpretations. The frequency domain analyses discussed are power spectral density versus frequency, cumulative root-mean-square acceleration versus frequency, root-mean-square acceleration versus frequency, one-third octave band root-mean-square acceleration versus frequency, and power spectral density versus frequency versus time (spectrogram). Instructions for accessing NASA Lewis Research Center accelerometer data and related information using the internet are provided.
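    One of the time-domain quantities listed, interval root-mean-square acceleration versus time, can be sketched as follows; the sampling rate and synthetic record are assumptions, not SAMS data:

```python
import numpy as np

def interval_rms(accel, fs, interval_s):
    """Interval root-mean-square acceleration versus time."""
    n = int(fs * interval_s)
    blocks = accel[: len(accel) // n * n].reshape(-1, n)
    return np.sqrt(np.mean(blocks ** 2, axis=1))

# Synthetic record: quasi-steady offset plus a vibratory tone (in g).
fs = 250.0                               # assumed sampling rate, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)       # one minute of data
accel = 1e-6 + 5e-6 * np.sin(2 * np.pi * 17.0 * t)

rms = interval_rms(accel, fs, interval_s=10.0)   # six 10 s intervals
```

    Each interval's RMS combines the quasi-steady offset and the vibratory power, which is why the abstract pairs interval statistics with frequency-domain analyses that separate the two.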

  15. Laser Remote Sensing: Velocimetry Based Techniques

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl; Steinvall, Ove

    Laser-based velocity measurement is an area of remote sensing in which the coherent properties of laser radiation are most fully exploited. Much of the published literature deals with the theory and techniques of remote sensing. We restrict our discussion to current trends in this area, gathered from recent conferences and professional journals. Remote wind sensing and vibrometry are promising in their new scientific, industrial, military, and biomedical applications, including improving flight safety, precise weapon correction, non-contact mine detection, optimization of wind farm operation, object identification based on its vibration signature, fluid flow studies, and vibrometry-associated diagnosis.

  16. COMBINING A NEW 3-D SEISMIC S-WAVE PROPAGATION ANALYSIS FOR REMOTE FRACTURE DETECTION WITH A ROBUST SUBSURFACE MICROFRACTURE-BASED VERIFICATION TECHNIQUE

    SciTech Connect

    Bob Hardage; M.M. Backus; M.V. DeAngelo; R.J. Graebner; S.E. Laubach; Paul Murray

    2004-02-01

    Fractures within the producing reservoirs at McElroy Field could not be studied with the industry-provided 3C3D seismic data used as a cost-sharing contribution in this study. The signal-to-noise character of the converted-SV data across the targeted reservoirs in these contributed data was not adequate for interpreting azimuth-dependent data effects. After illustrating the low signal quality of the converted-SV data at McElroy Field, the seismic portion of this report abandons the McElroy study site and defers to 3C3D seismic data acquired across a different fractured carbonate reservoir system to illustrate how 3C3D seismic data can provide useful information about fracture systems. Using these latter data, we illustrate how fast-S and slow-S data effects can be analyzed in the prestack domain to recognize fracture azimuth, and then demonstrate how fast-S and slow-S data volumes can be analyzed in the poststack domain to estimate fracture intensity. In the geologic portion of the report, we analyze published regional stress data near McElroy Field and numerous Formation MicroImager (FMI) logs acquired across McElroy to develop possible fracture models for the McElroy system. Regional stress data imply a fracture orientation different from the orientations observed in most of the FMI logs. This report culminates Phase 2 of the study, "Combining a New 3-D Seismic S-Wave Propagation Analysis for Remote Fracture Detection with a Robust Subsurface Microfracture-Based Verification Technique". Phase 3 will not be initiated because wells were to be drilled in Phase 3 of the project to verify the validity of fracture-orientation maps and fracture-intensity maps produced in Phase 2. Such maps cannot be made across McElroy Field because of the limitations of the available 3C3D seismic data at the depth level of the reservoir target.

  17. An analysis of I/O efficient order-statistic-based techniques for noise power estimation in the HRMS sky survey's operational system

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Olsen, E. T.

    1992-01-01

    Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
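    The threshold-and-count idea can be sketched generically: several fixed thresholds are evaluated in a single pass over the data, and the threshold whose exceedance count best matches the target quantile approximates the sort-based order-statistic estimate. The data and threshold grid below are illustrative, not the HRMS design:

```python
import random

def order_statistic_estimate(samples, q=0.5):
    """Noise estimate via an order statistic: sort and take the q-quantile."""
    s = sorted(samples)
    return s[int(q * (len(s) - 1))]

def threshold_and_count(samples, thresholds, q=0.5):
    """Single-pass approximation: count exceedances of fixed thresholds in
    parallel and pick the threshold whose exceedance count is closest to
    the (1-q) fraction of the data."""
    counts = [0] * len(thresholds)
    for x in samples:                      # one pass over the data
        for i, t in enumerate(thresholds):
            if x > t:
                counts[i] += 1
    target = (1 - q) * len(samples)
    best = min(range(len(thresholds)), key=lambda i: abs(counts[i] - target))
    return thresholds[best]

random.seed(1)
data = [random.expovariate(1.0) for _ in range(20000)]   # synthetic noise power
median_sort = order_statistic_estimate(data)
median_scan = threshold_and_count(data, [i * 0.05 for i in range(1, 60)])
```

    The accuracy of the single-pass estimate is limited by the threshold spacing, which is the dynamic-range trade-off discussed above: a narrower, denser threshold grid improves precision at the cost of range or hardware.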

  18. Visual exploratory analysis of integrated chromosome 19 proteomic data derived from glioma cancer stem-cell lines based on novel nonlinear dimensional data reduction techniques

    NASA Astrophysics Data System (ADS)

    Lespinats, Sylvain; Pinker-Domenig, Katja; Meyer-Bäse, Uwe; Meyer-Bäse, Anke

    2015-05-01

    Chromosome 19 is known to be linked to neurodegeneration and many cancers. Glioma-derived cancer stem cells (GSCs) are tumor-initiating cells and may be refractory to radiation and chemotherapy and thus have important implications for tumor biology and therapeutics. The analysis and interpretation of large proteomic data sets requires the development of new data mining and visualization approaches. Traditional techniques are insufficient to interpret and visualize these resulting experimental data. The emphasis of this paper lies in the presentation of novel approaches for the visualization, clustering and projection representation to unveil hidden data structures relevant for the accurate interpretation of biological experiments. These qualitative and quantitative methods are applied to the proteomic analysis of data sets derived from the GSCs. The achieved clustering and visualization results provide a more detailed insight into the expression patterns for chromosome 19 proteins.

  19. Further development of ultrasonic techniques for non-destructive evaluation based on Fourier analysis of signals from irregular and inhomogeneous structures

    NASA Technical Reports Server (NTRS)

    Miller, J. G.

    1979-01-01

    To investigate the use of Fourier analysis techniques, model systems had to be designed to test some of the general properties of the interaction of sound with an inhomogeneity. The first models investigated were suspensions of solid spheres in water. These systems allowed comparison between theoretical computation of the frequency dependence of the attenuation coefficient and measurement of the attenuation coefficient over a range of frequencies. Ultrasonic scattering processes in both suspensions of hard spheres in water and suspensions of hard spheres in polyester resin were investigated. The second model system was constructed to test the applicability of partial wave analysis to the description of an inhomogeneity in a solid, and to test the range of material properties over which the measurement systems were valid.

  20. DETECTION OF DNA DAMAGE USING MELTING ANALYSIS TECHNIQUES

    EPA Science Inventory

    A rapid and simple fluorescence screening assay for UV radiation-, chemical-, and enzyme-induced DNA damage is reported. This assay is based on a melting/annealing analysis technique and has been used with both calf thymus DNA and plasmid DNA (puc 19 plasmid from E. coli). DN...

  1. A comparison of geostatistically based inverse techniques for use in performance assessment analysis at the Waste Isolation Pilot Plant Site: Results from Test Case No. 1

    SciTech Connect

    Zimmerman, D.A.; Gallegos, D.P.

    1993-10-01

    The groundwater flow pathway in the Culebra Dolomite aquifer at the Waste Isolation Pilot Plant (WIPP) has been identified as a potentially important pathway for radionuclide migration to the accessible environment. Consequently, uncertainties in the models used to describe flow and transport in the Culebra need to be addressed. A "Geostatistics Test Problem" is being developed to evaluate a number of inverse techniques that may be used for flow calculations in the WIPP performance assessment (PA). The Test Problem is actually a series of test cases, each being developed as a highly complex synthetic data set; the intent is for the ensemble of these data sets to span the range of possible conceptual models of groundwater flow at the WIPP site. The Test Problem analysis approach is to use a comparison of the probabilistic groundwater travel time (GWTT) estimates produced by each technique as the basis for the evaluation. Participants are given observations of head and transmissivity (possibly including measurement error) or other information such as drawdowns from pumping wells, and are asked to develop stochastic models of groundwater flow for the synthetic system. Cumulative distribution functions (CDFs) of groundwater flow (computed via particle tracking) are constructed using the head and transmissivity data generated through the application of each technique; one semi-analytical method generates the CDFs of groundwater flow directly. This paper describes the results from Test Case No. 1.
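    Constructing a CDF of groundwater travel time from particle-tracking output amounts to building an empirical distribution from the Monte Carlo sample. A minimal sketch with hypothetical travel times (not WIPP data):

```python
def empirical_cdf(values):
    """Return (value, cumulative probability) pairs forming the
    empirical CDF of a Monte Carlo sample."""
    s = sorted(values)
    n = len(s)
    return [(v, (i + 1) / n) for i, v in enumerate(s)]

def quantile(cdf, p):
    """Smallest value whose cumulative probability reaches p."""
    for v, prob in cdf:
        if prob >= p:
            return v
    return cdf[-1][0]

# Hypothetical groundwater travel times (years) from particle tracking:
times = [12000, 9500, 15000, 11000, 8000, 14000, 10000, 13000]
cdf = empirical_cdf(times)
```

    Comparing such CDFs across techniques, rather than single best estimates, is what makes the evaluation probabilistic: two inverse methods can agree on the median travel time yet disagree strongly in the tails that matter for performance assessment.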

  2. Neutron-based nonintrusive inspection techniques

    NASA Astrophysics Data System (ADS)

    Gozani, Tsahi

    1997-02-01

    Non-intrusive inspection of large objects such as trucks, sea-going shipping containers, air cargo containers and pallets is gaining attention as a vital tool in combating terrorism, drug smuggling and other violations of international and national transportation and customs laws. Neutrons are the preferred probing radiation when material specificity is required, which is most often the case. Great strides have been made in neutron-based inspection techniques. Fast and thermal neutrons, whether steady-state or pulsed on microsecond or even nanosecond timescales, are being employed to interrogate, at high speed, for explosives, drugs, chemical agents, and nuclear and many other smuggled materials. Existing neutron techniques will be compared and their current status reported.

  3. Flash Infrared Thermography Contrast Data Analysis Technique

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay

    2014-01-01

    This paper provides information on an IR Contrast technique that involves extracting normalized contrast versus time evolutions from the flash thermography inspection infrared video data. The analysis calculates thermal measurement features from the contrast evolution. In addition, simulation of the contrast evolution is achieved through calibration on measured contrast evolutions from many flat-bottom holes in the subject material. The measurement features and the contrast simulation are used to evaluate flash thermography data in order to characterize delamination-like anomalies. The thermal measurement features relate to the anomaly characteristics. The contrast evolution simulation is matched to the measured contrast evolution over an anomaly to provide an assessment of the anomaly depth and width which correspond to the depth and diameter of the equivalent flat-bottom hole (EFBH) similar to that used as input to the simulation. A similar analysis, in terms of diameter and depth of an equivalent uniform gap (EUG) providing a best match with the measured contrast evolution, is also provided. An edge detection technique called the half-max is used to measure width and length of the anomaly. Results of the half-max width and the EFBH/EUG diameter are compared to evaluate the anomaly. The information provided here is geared towards explaining the IR Contrast technique. Results from a limited amount of validation data on reinforced carbon-carbon (RCC) hardware are included in this paper.
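    The half-max edge-detection step lends itself to a short illustration. The sketch below (a generic implementation, not the paper's) measures the width of a one-dimensional contrast profile at half its peak value, with linear interpolation at the crossings:

```python
def half_max_width(profile, spacing=1.0):
    """Width of an anomaly signal at half its maximum value, with
    linear interpolation at the two half-max crossings."""
    peak = max(profile)
    half = peak / 2.0
    idx = profile.index(peak)
    # walk left from the peak to the half-max crossing
    left = 0.0
    for i in range(idx, 0, -1):
        if profile[i - 1] < half:
            frac = (profile[i] - half) / (profile[i] - profile[i - 1])
            left = i - frac
            break
    # walk right from the peak to the half-max crossing
    right = len(profile) - 1.0
    for i in range(idx, len(profile) - 1):
        if profile[i + 1] < half:
            frac = (profile[i] - half) / (profile[i] - profile[i + 1])
            right = i + frac
            break
    return (right - left) * spacing

# Toy contrast profile across an anomaly (arbitrary units):
width = half_max_width([0, 1, 2, 3, 4, 3, 2, 1, 0])
```

    In practice the profile would be a line of normalized contrast values extracted across the anomaly, and `spacing` would be the pixel pitch of the infrared camera.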

  4. Forensic Analysis using Geological and Geochemical Techniques

    NASA Astrophysics Data System (ADS)

    Hoogewerff, J.

    2009-04-01

    Due to the globalisation of legal (and illegal) trade, there is an increasing demand for techniques which can verify the geographical origin and transfer routes of many legal and illegal commodities and products. Although geological techniques have been used in forensic investigations since the emergence of forensics as a science in the late 1800s, the last decade has seen a marked increase in geoscientists initiating concept studies using the latest analytical techniques, including studying natural abundance isotope variations, micro analysis with laser ablation ICPMS and geochemical mapping. Most of the concept studies have shown good potential, but uptake by the law enforcement and legal community has been limited due to concerns about the admissibility of the new methods. As an introduction to the EGU 2009 session "Forensic Provenancing using Geological and Geochemical Techniques" I will give an overview of the state of the art of forensic geology and the issues that concern the admissibility of geological forensic evidence. I will use examples from the NITECRIME and FIRMS networks, the EU TRACE project and other projects and literature to illustrate the important issues at hand.

  5. Advanced NMR-based techniques for pore structure analysis of coal. Quarter report No. 4, 1 October 1992--30 December 1992

    SciTech Connect

    Smith, D.M.

    1992-12-31

    One of the main problems in coal utilization is the inability to properly characterize its complex pore structure. Coals typically have micro/ultra-micro pores, but they also exhibit meso- and macroporosity. Conventional pore size techniques (adsorption/condensation, mercury porosimetry) are limited by this broad pore size range, by microporosity, by the reactive nature of coal, by the requirement that samples be completely dried, and by network/percolation effects. Small-angle scattering is limited because it probes both open and closed pores. Although one would not expect any single technique to provide a satisfactory description of a coal's structure, it is apparent that better techniques are necessary. We believe that measurement of the NMR parameters of various gas-phase and adsorbed-phase NMR-active probes can provide the resolution to this problem. We will investigate the dependence of the common NMR parameters, such as chemical shifts and relaxation times, of several different nuclei and compounds on the pore structure of model microporous solids, carbons, and coals. In particular, we will study the interaction between several small molecules and the pore surfaces in coals. These molecules have been selected for their chemical and physical properties. A special NMR probe will be constructed which will allow the concurrent measurement of NMR properties and adsorption uptake at a variety of temperatures. All samples will be subjected to a suite of "conventional" pore structure analyses. These include nitrogen adsorption at 77 K with BET analysis, CO2 and CH4 adsorption at 273 K with Dubinin-Radushkevich (D-R) analysis, helium pycnometry, and small-angle X-ray scattering, as well as gas diffusion measurements.
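    The BET analysis mentioned among the conventional techniques reduces to a linear fit. The sketch below fits the BET linearization 1/(v((p0/p)-1)) = (c-1)/(vm*c)*(p/p0) + 1/(vm*c) to a synthetic, idealized isotherm (assumed vm = 2.0 and c = 100, not measured coal data):

```python
def bet_fit(x, y):
    """Least-squares line y = a*x + b, then monolayer volume vm and BET
    constant c from the slope and intercept of the BET linearization."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope = (c-1)/(vm*c)
    b = (sy - a * sx) / n                            # intercept = 1/(vm*c)
    vm = 1.0 / (a + b)
    c = a / b + 1.0
    return vm, c

# Synthetic isotherm generated from assumed parameters:
vm_true, c_true = 2.0, 100.0
xs = [0.05 * i for i in range(1, 7)]   # relative pressures p/p0 in 0.05-0.30
ys = [(c_true - 1) / (vm_true * c_true) * xi + 1 / (vm_true * c_true)
      for xi in xs]
vm, c = bet_fit(xs, ys)
```

    Real coal isotherms deviate from this ideal form, which is part of the motivation for the NMR-based alternatives proposed in the report.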

  6. The Network Protocol Analysis Technique in Snort

    NASA Astrophysics Data System (ADS)

    Wu, Qing-Xiu

    Network protocol analysis is the technical means by which a network sniffer captures packets for further analysis and understanding. A network sniffer intercepts packets as the binary format of the original message content. In order to obtain the information they contain, the packets must be decoded according to the TCP/IP protocol stack specifications, restoring the format and content at each protocol layer until the actual transferred data, up to the application tier, is recovered.
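    As a concrete illustration of per-layer restoration (a generic sketch, not Snort's decoder), the fixed 20-byte IPv4 header can be unpacked from raw bytes with Python's struct module:

```python
import struct
import socket

def parse_ipv4_header(data):
    """Decode the fixed 20-byte IPv4 header from raw packet bytes,
    the kind of per-layer restoration a protocol analyzer performs."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack('!BBHHHBBH4s4s', data[:20])
    return {
        'version': ver_ihl >> 4,
        'header_len': (ver_ihl & 0x0F) * 4,   # in bytes
        'total_length': total_len,
        'ttl': ttl,
        'protocol': proto,                    # 6 = TCP, 17 = UDP
        'src': socket.inet_ntoa(src),
        'dst': socket.inet_ntoa(dst),
    }

# A hand-built header: version 4, IHL 5, TTL 64, TCP, 10.0.0.1 -> 10.0.0.2
raw = struct.pack('!BBHHHBBH4s4s', 0x45, 0, 40, 1, 0, 64, 6, 0,
                  socket.inet_aton('10.0.0.1'), socket.inet_aton('10.0.0.2'))
hdr = parse_ipv4_header(raw)
```

    A full analyzer would then hand the payload to the next-layer decoder (TCP or UDP) selected by the protocol field, repeating the same unpack-and-dispatch step layer by layer.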

  7. Flood alert system based on bayesian techniques

    NASA Astrophysics Data System (ADS)

    Gulliver, Z.; Herrero, J.; Viesca, C.; Polo, M. J.

    2012-04-01

    The problem of floods in the Mediterranean regions is closely linked to the occurrence of torrential storms in dry regions, where even the water supply relies on adequate water management. Like other Mediterranean basins in Southern Spain, the Guadalhorce River Basin is a medium-sized watershed (3856 km2) where recurrent yearly floods occur, mainly in autumn and spring, driven by cold front phenomena. The torrential character of the precipitation in such small basins, with a concentration time of less than 12 hours, produces flash flood events with catastrophic effects on the city of Malaga (600,000 inhabitants). From this fact arises the need for specific alert tools which can forecast these kinds of phenomena. Bayesian networks (BN) have emerged in the last decade as a very useful and reliable computational tool for water resources and for the decision-making process. The joint use of Artificial Neural Networks (ANN) and BN has allowed us to recognize and simulate the two different types of hydrological behaviour in the basin: natural and regulated. This led to the establishment of causal relationships between precipitation, discharge from upstream reservoirs, and water levels at a gauging station. It was seen that a recurrent ANN model working at an hourly scale, considering daily precipitation and the two previous hourly values of reservoir discharge and water level, could provide R2 values of 0.86. The BN results slightly improve this fit, while also attaching uncertainty to the prediction. In our current work to design a weather warning service based on Bayesian techniques, the first steps were carried out through an analysis of the correlations between the water level and rainfall at certain representative points in the basin, along with the upstream reservoir discharge. The lower correlation found between precipitation and water level emphasizes the highly regulated condition of the stream. The autocorrelations of the variables were also
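    The correlation and autocorrelation analysis described for the warning-service design can be sketched with a plain Pearson coefficient (the water-level series below is hypothetical, not basin data):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def autocorr(x, lag):
    """Autocorrelation of a series at a given lag."""
    return pearson_r(x[:-lag], x[lag:])

# Hypothetical hourly water levels (m) during a flood event:
level = [1.0, 1.1, 1.3, 1.6, 2.0, 2.3, 2.4, 2.2, 1.9, 1.5, 1.2, 1.0]
r1 = autocorr(level, 1)
```

    A high lag-1 autocorrelation like this is what makes the previous hourly water-level values such strong predictors in the recurrent ANN model, while a weak rainfall-level correlation signals the regulated character of the stream.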

  8. Multiclass pesticide analysis in fruit-based baby food: A comparative study of sample preparation techniques previous to gas chromatography-mass spectrometry.

    PubMed

    Petrarca, Mateus H; Fernandes, José O; Godoy, Helena T; Cunha, Sara C

    2016-12-01

    With the aim of developing a new gas chromatography-mass spectrometry method to analyze 24 pesticide residues in baby foods at the levels imposed by established regulation, two simple, rapid and environmentally friendly sample preparation techniques based on QuEChERS (quick, easy, cheap, effective, rugged and safe) were compared - QuEChERS with dispersive liquid-liquid microextraction (DLLME) and QuEChERS with dispersive solid-phase extraction (d-SPE). Both sample preparation techniques achieved suitable performance criteria, including selectivity, linearity, acceptable recovery (70-120%) and precision (≤20%). A higher enrichment factor was observed for DLLME, and consequently better limits of detection and quantification were obtained. Nevertheless, d-SPE provided a more effective removal of matrix co-extractives from extracts than DLLME, which contributed to lower matrix effects. Twenty-two commercial fruit-based baby food samples were analyzed by the developed method, with procymidone detected in one sample at a level above the legal limit established by the EU. PMID:27374564

  9. COSIMA data analysis using multivariate techniques

    NASA Astrophysics Data System (ADS)

    Silén, J.; Cottin, H.; Hilchenbach, M.; Kissel, J.; Lehto, H.; Siljeström, S.; Varmuza, K.

    2015-02-01

    We describe how to use multivariate analysis of complex TOF-SIMS (time-of-flight secondary ion mass spectrometry) spectra by introducing the method of random projections. The technique allows us to do full clustering and classification of the measured mass spectra. In this paper we use the tool for classification purposes. The presentation describes calibration experiments of 19 minerals on Ag and Au substrates using positive mode ion spectra. The discrimination between individual minerals gives a cross-validation Cohen κ for classification of typically about 80%. We intend to use the method as a fast tool to deduce a qualitative similarity of measurements.
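    The random-projection method can be illustrated generically in a few lines (a sketch of the technique, not the authors' pipeline): a random Gaussian matrix maps high-dimensional spectra to a much lower dimension while approximately preserving pairwise distances (the Johnson-Lindenstrauss property), after which clustering or classification proceeds in the reduced space.

```python
import random
import math

def random_projection(vectors, k, seed=0):
    """Project d-dimensional vectors down to k dimensions with a random
    Gaussian matrix scaled by 1/sqrt(k), approximately preserving
    pairwise distances."""
    rng = random.Random(seed)
    d = len(vectors[0])
    matrix = [[rng.gauss(0, 1) / math.sqrt(k) for _ in range(d)]
              for _ in range(k)]
    return [[sum(row[j] * v[j] for j in range(d)) for row in matrix]
            for v in vectors]

def dist(a, b):
    """Euclidean distance between two vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two synthetic 200-channel "spectra" stand in for TOF-SIMS mass spectra:
rng = random.Random(42)
v1 = [rng.random() for _ in range(200)]
v2 = [rng.random() for _ in range(200)]
p1, p2 = random_projection([v1, v2], k=50)
```

    Because the projection matrix is data-independent, it is cheap to apply to a stream of incoming spectra, which is what makes the approach attractive as a fast similarity tool.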

  10. Oil species identification technique developed by Gabor wavelet analysis and support vector machine based on concentration-synchronous-matrix-fluorescence spectroscopy.

    PubMed

    Wang, Chunyan; Shi, Xiaofeng; Li, Wendong; Wang, Lin; Zhang, Jinliang; Yang, Chun; Wang, Zhendi

    2016-03-15

    Concentration-synchronous-matrix-fluorescence (CSMF) spectroscopy was applied to discriminate oil species by characterizing the concentration-dependent fluorescence properties of petroleum-related samples. A seven-day weathering experiment on 3 crude oil samples from the Bohai Sea platforms of China was carried out under controlled laboratory conditions and showed that weathering had no significant effect on the CSMF spectra. Different feature extraction methods, such as PCA, PLS and Gabor wavelet analysis, were applied to extract discriminative patterns from CSMF spectra, and classifications were made via SVM to compare their respective performance in oil species recognition. Ideal correct rates of oil species recognition of 100% for the different types of oil spill samples and 92% for the closely related source oil samples were achieved by combining Gabor wavelets with SVM, indicating that the approach could be developed into a rapid, cost-effective, and accurate forensic oil spill identification technique. PMID:26795119

  11. Laser Scanning–Based Tissue Autofluorescence/Fluorescence Imaging (LS-TAFI), a New Technique for Analysis of Microanatomy in Whole-Mount Tissues

    PubMed Central

    Mori, Hidetoshi; Borowsky, Alexander D.; Bhat, Ramray; Ghajar, Cyrus M.; Seiki, Motoharu; Bissell, Mina J.

    2012-01-01

    Intact organ structure is essential in maintaining tissue specificity and cellular differentiation. Small physiological or genetic variations lead to changes in microanatomy that, if persistent, could have functional consequences and may easily be masked by the heterogeneity of tissue anatomy. Current imaging techniques rely on histological sections that require sample manipulation and are essentially two-dimensional. We have developed a method for three-dimensional imaging of whole-mount, unsectioned mammalian tissues to elucidate subtle and detailed micro- and macroanatomies in adult organs and embryos. We analyzed intact or dissected organ whole mounts with laser scanning–based tissue autofluorescence/fluorescence imaging (LS-TAFI). We obtained clear visualization of microstructures within murine mammary glands and mammary tumors and other organs without the use of immunostaining and without probes or fluorescent reporter genes. Combining autofluorescence with reflected light signals from chromophore-stained tissues allowed identification of individual cells within three-dimensional structures of whole-mounted organs. This technique could be useful for rapid diagnosis of human clinical samples and possibly for assessing the effect of subtle variations such as low-dose radiation. PMID:22542846

  12. Debonding damage analysis in composite-masonry strengthening systems with polymer- and mortar-based matrix by means of the acoustic emission technique

    NASA Astrophysics Data System (ADS)

    Verstrynge, E.; Wevers, M.; Ghiassi, B.; Lourenço, P. B.

    2016-01-01

    Different types of strengthening systems, based on fiber reinforced materials, are under investigation for external strengthening of historic masonry structures. A full characterization of the bond behavior and of the short- and long-term failure mechanisms is crucial to ensure effective design, compatibility with the historic substrate and durability of the strengthening solution. To this end, non-destructive techniques are essential for bond characterization, durability assessment and on-site condition monitoring. In this paper, the acoustic emission (AE) technique is evaluated for debonding characterization and localization on fiber reinforced polymer (FRP) and steel reinforced grout-strengthened clay bricks. Both types of strengthening systems are subjected to accelerated ageing tests under thermal cycles and to single-lap shear bond tests. During the reported experimental campaign, AE data from the accelerated ageing tests demonstrated the thermal incompatibility between brick and epoxy-bonded FRP composites, and debonding damage was successfully detected, characterized and located. In addition, a qualitative comparison is made with digital image correlation and infrared thermography, in view of efficient on-site debonding detection.

  13. Some Techniques for Computer-Based Assessment in Medical Education.

    ERIC Educational Resources Information Center

    Mooney, G. A.; Bligh, J. G.; Leinster, S. J.

    1998-01-01

    Presents a system of classification for describing computer-based assessment techniques based on the level of action and educational activity they offer. Illustrates 10 computer-based assessment techniques and discusses their educational value. Contains 14 references. (Author)

  14. Which Combinations of Techniques and Modes of Delivery in Internet-Based Interventions Effectively Change Health Behavior? A Meta-Analysis

    PubMed Central

    van Genugten, Lenneke; Webb, Thomas Llewelyn; van Empelen, Pepijn

    2016-01-01

    Background Many online interventions designed to promote health behaviors combine multiple behavior change techniques (BCTs), adopt different modes of delivery (MoD) (eg, text messages), and range in how usable they are. Research is therefore needed to examine the impact of these features on the effectiveness of online interventions. Objective This study applies Classification and Regression Trees (CART) analysis to meta-analytic data in order to identify synergistic effects of BCTs, MoDs, and usability factors. Methods We analyzed data from Webb et al. This review included effect sizes from 52 online interventions targeting a variety of health behaviors and coded the use of 40 BCTs and 11 MoDs. Our research also developed a taxonomy for coding the usability of interventions. Meta-CART analyses were performed using the BCTs and MoDs as predictors and using treatment success (ie, effect size) as the outcome. Results Factors related to the usability of the interventions influenced their efficacy. Specifically, subgroup analyses indicated that more efficient interventions (interventions that take little time to understand and use) are more likely to be effective than less efficient interventions. Meta-CART identified one synergistic effect: interventions that included barrier identification/problem solving and provided rewards for behavior change reported an average effect size that was smaller (ḡ=0.23, 95% CI 0.08-0.44) than interventions that used other combinations of techniques (ḡ=0.43, 95% CI 0.27-0.59). No synergistic effects were found for MoDs or for MoDs combined with BCTs. Conclusions Interventions that take little time to understand and use were more effective than those that require more time. Few specific combinations of BCTs that contribute to the effectiveness of online interventions were found. Furthermore, no synergistic effects between BCTs and MoDs were found, even though MoDs had strong effects when analyzed univariately in the original study
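    For context, the per-study effect sizes that feed such analyses are conventionally pooled by inverse-variance weighting. The sketch below shows standard fixed-effect pooling with hypothetical effect sizes and variances, not values from the review:

```python
import math

def pooled_effect(effects, variances):
    """Fixed-effect meta-analytic pooling: inverse-variance weighted mean
    effect size with its 95% confidence interval."""
    weights = [1.0 / v for v in variances]
    g = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return g, (g - 1.96 * se, g + 1.96 * se)

# Hypothetical effect sizes (Hedges' g) and variances from 4 interventions:
g, ci = pooled_effect([0.23, 0.43, 0.31, 0.50], [0.02, 0.01, 0.04, 0.03])
```

    Meta-CART then goes a step further than a single pooled estimate, partitioning the studies by their coded BCTs and MoDs to find subgroups whose pooled effects differ.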

  15. Modular Sampling and Analysis Techniques for the Real-Time Analysis of Human Breath

    SciTech Connect

    Frank, M; Farquar, G; Adams, K; Bogan, M; Martin, A; Benner, H; Spadaccini, C; Steele, P; Davis, C; Loyola, B; Morgan, J; Sankaran, S

    2007-07-09

    At LLNL and UC Davis, we are developing several techniques for the real-time sampling and analysis of trace gases, aerosols and exhaled breath that could be useful for a modular, integrated system for breath analysis. Those techniques include single-particle bioaerosol mass spectrometry (BAMS) for the analysis of exhaled aerosol particles or droplets as well as breath samplers integrated with gas chromatography mass spectrometry (GC-MS) or MEMS-based differential mobility spectrometry (DMS). We describe these techniques and present recent data obtained from human breath or breath condensate, in particular, addressing the question of how environmental exposure influences the composition of breath.

  16. Comparison of performance of object-based image analysis techniques available in open source software (Spring and Orfeo Toolbox/Monteverdi) considering very high spatial resolution data

    NASA Astrophysics Data System (ADS)

    Teodoro, Ana C.; Araujo, Ricardo

    2016-01-01

    The use of unmanned aerial vehicles (UAVs) for remote sensing applications is becoming more frequent. However, this type of information can result in several software problems related to the huge amount of data available. Object-based image analysis (OBIA) has proven to be superior to pixel-based analysis for very high-resolution images. The main objective of this work was to explore the potentialities of the OBIA methods available in two different open source software applications, Spring and OTB/Monteverdi, in order to generate an urban land cover map. An orthomosaic derived from UAVs was considered, 10 different regions of interest were selected, and two different approaches were followed. The first one (Spring) uses the region growing segmentation algorithm followed by the Bhattacharya classifier. The second approach (OTB/Monteverdi) uses the mean shift segmentation algorithm followed by the support vector machine (SVM) classifier. Two strategies were followed: four classes were considered using Spring and thereafter seven classes were considered for OTB/Monteverdi. The SVM classifier produces slightly better results and presents a shorter processing time. However, the poor spectral resolution of the data (only RGB bands) is an important factor that limits the performance of the classifiers applied.
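    The region growing segmentation used in the Spring approach can be sketched generically (a toy 4-neighbor implementation on a tiny synthetic band, not Spring's algorithm): starting from a seed pixel, neighbors are absorbed while their value stays within a tolerance of the running region mean.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from a seed pixel, adding 4-connected neighbors whose
    value lies within tol of the running region mean."""
    rows, cols = len(image), len(image[0])
    region = {seed}
    total = image[seed[0]][seed[1]]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                mean = total / len(region)
                if abs(image[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += image[nr][nc]
                    queue.append((nr, nc))
    return region

# A tiny single-band "orthomosaic": bright 2x2 block in a dark background
img = [[10, 10, 10, 10],
       [10, 90, 95, 10],
       [10, 92, 94, 10],
       [10, 10, 10, 10]]
segment = region_grow(img, seed=(1, 1), tol=20)
```

    In an OBIA workflow the resulting segments, rather than individual pixels, become the units that the classifier (Bhattacharya or SVM) labels, which is why segmentation quality drives the final map accuracy.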

  17. Automated analysis of non-mass-enhancing lesions in breast MRI based on morphological, kinetic, and spatio-temporal moments and joint segmentation-motion compensation technique

    NASA Astrophysics Data System (ADS)

    Hoffmann, Sebastian; Shutler, Jamie D.; Lobbes, Marc; Burgeth, Bernhard; Meyer-Bäse, Anke

    2013-12-01

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) represents an established method for the detection and diagnosis of breast lesions. While mass-like enhancing lesions can be easily categorized according to the Breast Imaging Reporting and Data System (BI-RADS) MRI lexicon, a majority of diagnostically challenging lesions, the so-called non-mass-like enhancing lesions, remain both qualitatively and quantitatively difficult to analyze. Thus, the evaluation of kinetic and/or morphological characteristics of non-masses represents a challenging task for an automated analysis and is of crucial importance for advancing current computer-aided diagnosis (CAD) systems. Compared to the well-characterized mass-enhancing lesions, non-masses have ill-defined, blurred tumor borders and a kinetic behavior that is not easily generalizable and thus not discriminative between malignant and benign non-masses. To overcome these difficulties and pave the way for novel CAD systems for non-masses, we will evaluate several kinetic and morphological descriptors separately and a novel technique, the Zernike velocity moments, to capture the joint spatio-temporal behavior of these lesions, and additionally consider the impact of non-rigid motion compensation on a correct diagnosis.

  18. Architectural stability analysis of the rotary-laser scanning technique

    NASA Astrophysics Data System (ADS)

    Xue, Bin; Yang, Xiaoxia; Zhu, Jigui

    2016-03-01

    The rotary-laser scanning technique is an important method in scale measurements due to its high accuracy and large measurement range. This paper first introduces a newly designed measurement station which is able to provide two-dimensional measurement information, including azimuth and elevation, by using the rotary-laser scanning technique, and then presents an architectural stability analysis of this technique through detailed theoretical derivations. Based on the designed station, a validation using both experiment and simulation is presented in order to verify the analytic conclusion. The results show that the architectural stability of the rotary-laser scanning technique is affected only by the difference between the two scanning angles, and the difference that yields the best architectural stability can be calculated from pre-calibrated parameters of the two laser planes. This research gives insight into the rotary-laser scanning technique. Moreover, the measurement accuracy of the rotary-laser scanning technique can be further improved based on the results of the study.

  19. HELCATS - Heliospheric Cataloguing, Analysis and Techniques Service

    NASA Astrophysics Data System (ADS)

    Harrison, Richard; Davies, Jackie; Perry, Chris; Moestl, Christian; Rouillard, Alexis; Bothmer, Volker; Rodriguez, Luciano; Eastwood, Jonathan; Kilpua, Emilia; Gallagher, Peter

    2016-04-01

    Understanding the evolution of the solar wind is fundamental to advancing our knowledge of energy and mass transport in the solar system, rendering it crucial to space weather and its prediction. The advent of truly wide-angle heliospheric imaging has revolutionised the study of both transient (CMEs) and background (SIRs/CIRs) solar wind plasma structures, by enabling their direct and continuous observation out to 1 AU and beyond. The EU-funded FP7 HELCATS project combines European expertise in heliospheric imaging, built up in particular through lead involvement in NASA's STEREO mission, with expertise in solar and coronal imaging as well as in-situ and radio measurements of solar wind phenomena, in a programme of work that will enable a much wider exploitation and understanding of heliospheric imaging observations. With HELCATS, we are (1.) cataloguing transient and background solar wind structures imaged in the heliosphere by STEREO/HI, from launch in late October 2006 to date, including estimates of their kinematic properties based on a variety of established techniques and more speculative approaches; (2.) evaluating these kinematic properties, and thereby the validity of these techniques, through comparison with solar source observations and in-situ measurements made at multiple points throughout the heliosphere; (3.) appraising the potential for initialising advanced numerical models based on these kinematic properties; (4.) assessing the complementarity of radio observations (in particular of Type II radio bursts and interplanetary scintillation) in combination with heliospheric imagery. In this presentation, we provide an overview of progress from the first 18 months of the HELCATS project.

  20. CT-based morphometric analysis of C1 laminar dimensions: C1 translaminar screw fixation is a feasible technique for salvage of atlantoaxial fusions

    PubMed Central

    Yew, Andrew; Lu, Derek; Lu, Daniel C.

    2015-01-01

    Background: Translaminar screw fixation has become an alternative in the fixation of the axial and subaxial cervical spine. We report utilization of this approach in the atlas as a salvage technique for atlantoaxial stabilization when C1 lateral mass screws are precluded. To assess the feasibility of translaminar fixation at the atlas, we have characterized the dimensions of the C1 lamina in the general adult population using computed tomography (CT)-based morphometry. Methods: A 46-year-old male with symptomatic atlantoaxial instability secondary to os odontoideum underwent bilateral C1 and C2 translaminar screw/rod fixation as C1 lateral mass fixation was precluded by an anomalous vertebral artery. The follow-up evaluation 2½ years postoperatively revealed an asymptomatic patient without recurrent neck/shoulder pain or clinical signs of instability. To better assess the feasibility of utilizing this approach in the general population, we retrospectively analyzed 502 consecutive cervical CT scans performed over a 3-month period in patients aged over 18 years at a single institution. Measurements of C1 bicortical diameter, bilateral laminar length, height, and angulation were performed. Laminar and screw dimensions were compared to assess instrumentation feasibility. Results: Review of CT imaging found that 75.9% of C1 lamina had a sufficient bicortical diameter, and 63.7% of C1 lamina had sufficient height to accept bilateral translaminar screw placement. Conclusions: CT-based measurement of atlas morphology in the general population revealed that a majority of C1 lamina had sufficient dimensions to accept translaminar screw placement. Although these screws appear to be a feasible alternative when lateral mass screws are precluded, further research is required to determine if they provide comparable fixation strength versus traditional instrumentation methods. PMID:26005585

  1. 77 FR 40552 - Federal Acquisition Regulation; Price Analysis Techniques

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-10

    ... Federal Acquisition Regulation; Price Analysis Techniques AGENCY: Department of Defense (DoD), General... clarify the use of a price analysis technique in order to establish a fair and reasonable price. DATES....404-1(b)(2) addresses various price analysis techniques and procedures the Government may use...

  2. Statistics and Machine Learning based Outlier Detection Techniques for Exoplanets

    NASA Astrophysics Data System (ADS)

    Goel, Amit; Montgomery, Michele

    2015-08-01

    Architectures of planetary systems are observable snapshots in time that can indicate the formation and dynamical evolution of planets. The observable key parameters that we consider are planetary mass and orbital period. If planet masses are significantly less than their host star masses, then Keplerian motion is described by P^2 = a^3, where P is the orbital period in units of years and a is the semi-major axis in units of Astronomical Units (AU). Keplerian motion works on small scales such as the size of the Solar System but not on large scales such as the size of the Milky Way Galaxy. In this work, for confirmed exoplanets of known stellar mass, planetary mass, orbital period, and stellar age, we analyze the Keplerian motion of systems as a function of stellar age to seek whether Keplerian motion has an age dependency and to identify outliers. For detecting outliers, we apply several techniques based on statistical and machine learning methods, including probabilistic, linear, and proximity-based models. In probabilistic and statistical models of outliers, the parameters of closed-form probability distributions are learned in order to detect the outliers. Linear models use regression-analysis-based techniques for detecting outliers. Proximity-based models use distance-based algorithms such as k-nearest neighbour, clustering algorithms such as k-means, or density-based algorithms such as kernel density estimation. In this work, we use unsupervised learning algorithms with only the proximity-based models. In addition, we explore the relative strengths and weaknesses of the various techniques by validating the outliers. The validation criterion for the outliers is whether the ratio of planetary mass to stellar mass is less than 0.001. We present our statistical analysis of the outliers thus detected.
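    The proximity-based screening described above can be sketched with a toy catalog. The entries, the k-NN rule, and all thresholds below are illustrative assumptions, not values or data from the study:

```python
# Toy catalog: (stellar_mass [M_sun], planet_mass [M_sun], period [yr], semi-major axis [AU]).
# Values are illustrative, not real catalog entries.
systems = [
    (1.00, 3.0e-6, 1.00, 1.00),   # Earth-like: obeys P^2 ~ a^3
    (1.00, 9.5e-4, 11.86, 5.20),  # Jupiter-like
    (0.90, 1.0e-3, 0.01, 3.00),   # inconsistent P/a pair -> outlier
]

def kepler_residual(period_yr, a_au):
    """Deviation from Kepler's third law, P^2 = a^3, in solar units."""
    return abs(period_yr ** 2 - a_au ** 3)

def knn_outliers(values, k=1, factor=10.0):
    """Flag points whose k-NN distance exceeds `factor` x the median k-NN distance."""
    dists = []
    for i, v in enumerate(values):
        nn = sorted(abs(v - w) for j, w in enumerate(values) if j != i)
        dists.append(nn[k - 1])
    med = sorted(dists)[len(dists) // 2]
    return [i for i, d in enumerate(dists) if d > factor * med]

residuals = [kepler_residual(p, a) for (_, _, p, a) in systems]
flagged = knn_outliers(residuals)
# Validation criterion from the abstract: planetary-to-stellar mass ratio < 0.001.
valid_mass_ratio = [pm / sm < 0.001 for (sm, pm, _, _) in systems]
```

    Here the flagged system is also the one that fails the mass-ratio validation, which is the consistency check the abstract describes.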

  3. Effective learning techniques for military applications using the Personalized Assistant that Learns (PAL) enhanced Web-Based Temporal Analysis System (WebTAS)

    NASA Astrophysics Data System (ADS)

    LaMonica, Peter; Dziegiel, Roger; Liuzzi, Raymond; Hepler, James

    2009-05-01

    The Personalized Assistant that Learns (PAL) Program is a Defense Advanced Research Projects Agency (DARPA) research effort that is advancing technologies in the area of cognitive learning by developing cognitive assistants to support military users, such as commanders and decision makers. The Air Force Research Laboratory's (AFRL) Information Directorate leveraged several core PAL components and applied them to the Web-Based Temporal Analysis System (WebTAS) so that users of this system can have automated features, such as task learning, intelligent clustering, and entity extraction. WebTAS is a modular software toolset that supports fusion of large amounts of disparate data sets, visualization, project organization and management, pattern analysis and activity prediction, and includes various presentation aids. WebTAS is predominantly used by analysts within the intelligence community and with the addition of these automated features, many transition opportunities exist for this integrated technology. Further, AFRL completed an extensive test and evaluation of this integrated software to determine its effectiveness for military applications in terms of timeliness and situation awareness, and these findings and conclusions, as well as future work, will be presented in this report.

  4. Function Analysis and Decomposition using Function Analysis Systems Technique

    SciTech Connect

    J. R. Wixson

    1999-06-01

    The "Father of Value Analysis", Lawrence D. Miles, was a design engineer for General Electric in Schenectady, New York. Miles developed the concept of function analysis to address difficulties in satisfying the requirements to fill shortages of high-demand manufactured parts and electrical components during World War II. His concept of function analysis was further developed in the 1960s by Charles W. Bytheway, a design engineer at Sperry Univac in Salt Lake City, Utah. Charles Bytheway extended Miles's function analysis concepts and introduced the methodology called Function Analysis System Technique (FAST) to the Society of American Value Engineers (SAVE) at their International Convention in 1965 (Bytheway 1965). FAST uses intuitive logic to decompose a high-level, or objective, function into secondary and lower-level functions that are displayed in a logic diagram called a FAST model. Other techniques can then be applied to allocate functions to components, individuals, processes, or other entities that accomplish the functions. FAST is best applied in a team setting and proves to be an effective methodology for functional decomposition, allocation, and alternative development.
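    The How-Why logic that a FAST model encodes can be sketched as a small tree structure: a node's children answer "How is this function accomplished?" and its parent answers "Why is it performed?". The class, node names, and functions below are a hypothetical toy, not part of the FAST literature:

```python
# Hypothetical FAST model sketch: children answer "how", the parent answers "why".
class FastNode:
    def __init__(self, function, parent=None):
        self.function = function      # verb-noun function description
        self.parent = parent
        self.children = []            # lower-level functions
        if parent is not None:
            parent.children.append(self)

    def how(self):
        """Lower-level functions that accomplish this one."""
        return [c.function for c in self.children]

    def why(self):
        """Higher-level function this one serves (None at the objective)."""
        return self.parent.function if self.parent else None

# Decompose a toy objective function into two levels.
objective = FastNode("provide light")
convert = FastNode("convert energy", objective)
supply = FastNode("supply current", convert)
```

    Traversing the tree left-to-right then reads as the How direction of a FAST diagram; right-to-left reads as the Why direction.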

  5. Function Analysis and Decomposition using Function Analysis Systems Technique

    SciTech Connect

    Wixson, James Robert

    1999-06-01

    The "Father of Value Analysis", Lawrence D. Miles, was a design engineer for General Electric in Schenectady, New York. Miles developed the concept of function analysis to address difficulties in satisfying the requirements to fill shortages of high-demand manufactured parts and electrical components during World War II. His concept of function analysis was further developed in the 1960s by Charles W. Bytheway, a design engineer at Sperry Univac in Salt Lake City, Utah. Charles Bytheway extended Miles's function analysis concepts and introduced the methodology called Function Analysis System Technique (FAST) to the Society of American Value Engineers (SAVE) at their International Convention in 1965 (Bytheway 1965). FAST uses intuitive logic to decompose a high-level, or objective, function into secondary and lower-level functions that are displayed in a logic diagram called a FAST model. Other techniques can then be applied to allocate functions to components, individuals, processes, or other entities that accomplish the functions. FAST is best applied in a team setting and proves to be an effective methodology for functional decomposition, allocation, and alternative development.

  6. Model-Based Safety Analysis

    NASA Technical Reports Server (NTRS)

    Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.

    2006-01-01

    System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.

  7. Analysis of diagnostic calorimeter data by the transfer function technique

    NASA Astrophysics Data System (ADS)

    Delogu, R. S.; Poggi, C.; Pimazzoni, A.; Rossi, G.; Serianni, G.

    2016-02-01

    This paper describes the analysis procedure applied to the thermal measurements on the rear side of a carbon fibre composite calorimeter with the purpose of reconstructing the energy flux due to an ion beam colliding on the front side. The method is based on the transfer function technique and allows a fast analysis by means of the fast Fourier transform algorithm. Its efficacy has been tested both on simulated and measured temperature profiles: in all cases, the energy flux features are well reproduced and beamlets are well resolved. Limits and restrictions of the method are also discussed, providing strategies to handle issues related to signal noise and digital processing.
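    The transfer-function idea can be sketched on synthetic data: model the rear-side signal as the energy flux convolved with the calorimeter's thermal impulse response, then divide in the frequency domain (with a small regularisation term to tame noise-dominated bins) to recover the flux. The exponential impulse response and the regularisation constant below are assumptions for illustration, not the instrument's calibrated transfer function:

```python
import numpy as np

n = 256
t = np.arange(n)
impulse = np.exp(-t / 20.0)                  # assumed thermal impulse response
flux = np.zeros(n)
flux[40:60] = 1.0                            # synthetic beam pulse on the front side
measured = np.convolve(flux, impulse)[:n]    # simulated rear-side temperature signal

# Deconvolution via the FFT: regularised division by the transfer function H.
H = np.fft.rfft(impulse)
eps = 1e-3                                   # guards against division by near-zero bins
recovered = np.fft.irfft(np.fft.rfft(measured) * np.conj(H) / (np.abs(H) ** 2 + eps), n)
```

    The regularised form reduces to plain inverse filtering when |H| is large and suppresses bins where noise would dominate, which is one standard way to handle the noise issues the abstract mentions.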

  8. Analysis of diagnostic calorimeter data by the transfer function technique.

    PubMed

    Delogu, R S; Poggi, C; Pimazzoni, A; Rossi, G; Serianni, G

    2016-02-01

    This paper describes the analysis procedure applied to the thermal measurements on the rear side of a carbon fibre composite calorimeter with the purpose of reconstructing the energy flux due to an ion beam colliding on the front side. The method is based on the transfer function technique and allows a fast analysis by means of the fast Fourier transform algorithm. Its efficacy has been tested both on simulated and measured temperature profiles: in all cases, the energy flux features are well reproduced and beamlets are well resolved. Limits and restrictions of the method are also discussed, providing strategies to handle issues related to signal noise and digital processing. PMID:26932104

  9. Proteomic Analysis of Vitreous Biopsy Techniques

    PubMed Central

    Skeie, Jessica M.; Brown, Eric N.; Martinez, Harryl D.; Russell, Stephen R.; Birkholz, Emily S.; Folk, James C.; Boldt, H. Culver; Gehrs, Karen M.; Stone, Edwin M.; Wright, Michael E.; Mahajan, Vinit B.

    2013-01-01

    Purpose To compare vitreous biopsy methods using analysis platforms employed in proteomics biomarker discovery. Methods Vitreous biopsies from 10 eyes were collected sequentially using a 23-gauge needle and a 23-gauge vitreous cutter instrument. Paired specimens were evaluated by UV absorbance spectroscopy, SDS-PAGE, and mass spectrometry (LC-MS/MS). Results The total protein concentration obtained with a needle and vitrectomy instrument biopsy averaged 1.10 mg/ml (SEM = 0.35) and 1.13 mg/ml (SEM = 0.25), respectively. In eight eyes with low or medium viscidity, there was a very high correlation (R2 = 0.934) between the biopsy methods. When data from two eyes with high viscidity vitreous were included, the correlation was reduced (R2 = 0.704). The molecular weight protein SDS-PAGE profiles of paired needle and vitreous cutter samples were similar, except for a minority of pairs with single band intensity variance. Using LC-MS/MS, equivalent peptides were identified with similar frequencies (R2 ≥ 0.90) in paired samples. Conclusion Proteins and peptides collected from vitreous needle biopsies are nearly equivalent to those obtained from a vitreous cutter instrument. This study suggests both techniques may be used for most proteomic and biomarker discovery studies of vitreoretinal diseases, although a minority of proteins and peptides may differ in concentration. PMID:23095728

  10. Nuclear based techniques for detection of contraband

    SciTech Connect

    Gozani, T.

    1993-12-31

    The detection of contraband such as explosives and drugs concealed in luggage or other container can be quite difficult. Nuclear techniques offer capabilities which are essential to having effective detection devices. This report describes the features of various nuclear techniques and instrumentation.

  11. Advanced Techniques for Root Cause Analysis

    Energy Science and Technology Software Center (ESTSC)

    2000-09-19

    Five items make up this package, or can be used individually. The Chronological Safety Management Template utilizes a linear adaptation of the Integrated Safety Management System laid out in the form of a template that greatly enhances the ability of the analyst to perform the first step of any investigation, which is to gather all pertinent facts and identify causal factors. The Problem Analysis Tree is a simple three-level problem analysis tree which is easier for organizations outside of WSRC to use. Another part is the Systemic Root Cause Tree. One of the most basic and unique features of Expanded Root Cause Analysis is the Systemic Root Cause portion of the Expanded Root Cause Pyramid. The Systemic Root Causes are even more basic than the Programmatic Root Causes and represent root causes that cut across multiple (if not all) programs in an organization. The Systemic Root Cause portion contains 51 causes embedded at the bottom level of a three-level Systemic Root Cause Tree that is divided into logical, organizationally based categories to assist the analyst. The Computer Aided Root Cause Analysis allows the analyst at each level of the Pyramid to a) obtain a brief description of the cause that is being considered, b) record a decision that the item is applicable, c) proceed to the next level of the Pyramid to see only those items at the next level of the tree that are relevant to the particular cause that has been chosen, and d) at the end of the process automatically print out a summary report of the incident, the causal factors as they relate to the safety management system, and the probable causes, apparent causes, Programmatic Root Causes, and Systemic Root Causes for each causal factor, along with the associated corrective action.

  12. A Quantitative Study of Gully Erosion Based on Object-Oriented Analysis Techniques: A Case Study in Beiyanzikou Catchment of Qixia, Shandong, China

    PubMed Central

    Wang, Tao; He, Fuhong; Zhang, Anding; Gu, Lijuan; Wen, Yangmao; Jiang, Weiguo; Shao, Hongbo

    2014-01-01

    This paper took a subregion of a small watershed gully system at Beiyanzikou catchment of Qixia, China, as a study area and, using object-oriented image analysis (OBIA), extracted the shoulder line of gullies from high-spatial-resolution digital orthophoto map (DOM) aerial photographs. Next, it proposed an accuracy assessment method based on the adjacent distance between the boundary classified by remote sensing and points measured by RTK-GPS along the shoulder line of the gullies. Finally, the original surface was fitted using linear regression in accordance with the elevation of the two extracted edges of the experimental gullies, named Gully 1 and Gully 2, and the erosion volume was calculated. The results indicate that OBIA can effectively extract information on gullies; the average distance between points field-measured along the edge of the gullies and the classified boundary is 0.3166 m, with a variance of 0.2116 m. The erosion areas and volumes of the two gullies are 2141.6250 m2 and 5074.1790 m3, and 1316.1250 m2 and 1591.5784 m3, respectively. The results of the study provide a new method for the quantitative study of small gully erosion. PMID:24616626
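    The surface-fitting and volume step can be sketched with synthetic data: fit a plane by linear regression through elevations sampled along the shoulder lines, then integrate the difference between that fitted pre-erosion surface and the present-day DEM over the gully cells. The plane coefficients, grid, and cell size below are illustrative assumptions, not the study's measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shoulder-line samples: (x, y) positions with elevations z on an inclined
# plane z = 0.02*x + 0.01*y + 100 (synthetic stand-in for rim elevations).
xy = rng.uniform(0, 50, size=(200, 2))
z = 0.02 * xy[:, 0] + 0.01 * xy[:, 1] + 100.0

# Least-squares fit of z = a*x + b*y + c (the "original surface").
A = np.c_[xy, np.ones(len(xy))]
coef, *_ = np.linalg.lstsq(A, z, rcond=None)

# Present-day DEM of the gully interior: 1 m cells over a 10 m x 10 m patch,
# here uniformly 0.5 m below the fitted original surface.
gx, gy = np.meshgrid(np.arange(20, 30), np.arange(20, 30))
original = coef[0] * gx + coef[1] * gy + coef[2]
dem = original - 0.5
cell_area = 1.0                                   # m^2 per DEM cell
volume = float(np.sum((original - dem) * cell_area))
```

    With these synthetic inputs the eroded volume is simply 100 cells x 0.5 m, which makes the integration step easy to check.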

  13. Investigation of electroforming techniques, literature analysis report

    NASA Technical Reports Server (NTRS)

    Malone, G. A.

    1975-01-01

    A literature analysis is presented of reports, specifications, and documented experiences with the use of electroforming to produce copper and nickel structures for aerospace and other engineering applications. The literature period covered is from 1948 to 1974. Specific effort was made to correlate mechanical property data for the electrodeposited material with known electroforming solution compositions and operating conditions. From this survey, electrolytes are suggested for selection to electroform copper and nickel outer shells on regeneratively cooled thrust chamber liners, and other devices subject to thermal and pressure exposure, based on mechanical properties obtainable, performance under various thermal environments, and ease of process control for product reproducibility. Processes of potential value in obtaining sound bonds between electrodeposited copper and nickel and copper alloy substrates are also discussed.

  14. A technique for human error analysis (ATHEANA)

    SciTech Connect

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  15. Using remote sensing techniques and field-based structural analysis to explore new gold and associated mineral sites around Al-Hajar mine, Asir terrane, Arabian Shield

    NASA Astrophysics Data System (ADS)

    Sonbul, Abdullah R.; El-Shafei, Mohamed K.; Bishta, Adel Z.

    2016-05-01

    Modern earth resource satellites provide huge amounts of digital imagery at different resolutions. These satellite imageries are considered one of the most significant sources of data for mineral exploration. Image processing techniques were applied to the exposed rocks around the Al-Aqiq area of the Asir terrane in the southern part of the Arabian Shield. The area under study has two sub-parallel N-S trending metamorphic belts of green-schist facies. The first belt is located southeast of Al-Aqiq, where the Al-Hajar Gold Mine is situated. It is essentially composed of metavolcanics and metasedimentary rocks, and it is intruded by different plutonic rocks of primarily diorite, syenite and porphyritic granite. The second belt is located northwest of Al-Aqiq, and it is composed of metavolcanics and metasedimentary rocks and is intruded by granite bodies. The current study aimed to distinguish the lithological units, detect and map the alteration zones, and extract the major fault lineaments around the Al-Hajar gold prospect. Digital satellite imageries, including Landsat 7 ETM + multispectral and panchromatic and SPOT-5, were used in addition to field verification. Areas with similar spectral signatures to the prospect were identified in the nearby metamorphic belt; it was considered as a target area and was inspected in the field. The relationships between the alteration zones, the mineral deposits and the structural elements were used to locate the ore-bearing zones in the subsurface. The metasedimentary units of the target area showed dextral-ductile, top-to-the-north shearing and the presence of a dominant mineralized quartz vein system. The area to the north of the Al-Hajar prospect also showed sub-parallel shear zones along which different types of alterations were detected. Field-based criteria such as hydrothermal breccia, jasper, iron gossans and porphyritic granite strongly indicate the presence of porphyry-type ore deposits in the Al-Hajar metamorphic belt.

  16. Using remote sensing techniques and field-based structural analysis to explore new gold and associated mineral sites around Al-Hajar mine, Asir terrane, Arabian Shield

    NASA Astrophysics Data System (ADS)

    Sonbul, Abdullah R.; El-Shafei, Mohamed K.; Bishta, Adel Z.

    2016-05-01

    Modern earth resource satellites provide huge amounts of digital imagery at different resolutions. These satellite imageries are considered one of the most significant sources of data for mineral exploration. Image processing techniques were applied to the exposed rocks around the Al-Aqiq area of the Asir terrane in the southern part of the Arabian Shield. The area under study has two sub-parallel N-S trending metamorphic belts of green-schist facies. The first belt is located southeast of Al-Aqiq, where the Al-Hajar Gold Mine is situated. It is essentially composed of metavolcanics and metasedimentary rocks, and it is intruded by different plutonic rocks of primarily diorite, syenite and porphyritic granite. The second belt is located northwest of Al-Aqiq, and it is composed of metavolcanics and metasedimentary rocks and is intruded by granite bodies. The current study aimed to distinguish the lithological units, detect and map the alteration zones, and extract the major fault lineaments around the Al-Hajar gold prospect. Digital satellite imageries, including Landsat 7 ETM + multispectral and panchromatic and SPOT-5, were used in addition to field verification. Areas with similar spectral signatures to the prospect were identified in the nearby metamorphic belt; it was considered as a target area and was inspected in the field. The relationships between the alteration zones, the mineral deposits and the structural elements were used to locate the ore-bearing zones in the subsurface. The metasedimentary units of the target area showed dextral-ductile, top-to-the-north shearing and the presence of a dominant mineralized quartz vein system. The area to the north of the Al-Hajar prospect also showed sub-parallel shear zones along which different types of alterations were detected. Field-based criteria such as hydrothermal breccia, jasper, iron gossans and porphyritic granite strongly indicate the presence of porphyry-type ore deposits in the Al-Hajar metamorphic belt.

  17. DCT-based cyber defense techniques

    NASA Astrophysics Data System (ADS)

    Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer

    2015-09-01

    With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect the multimedia from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content, in order to attack the end user. Most of the attack algorithms are robust to basic image processing techniques such as filtering, compression, noise addition, etc. Hence, in this article two novel, real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain, and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.
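    The DCT-domain thresholding idea can be sketched on a single 8x8 block: transform, zero out low-magnitude coefficients (where an embedded payload would typically hide), and transform back. The threshold value and the perturbation below are assumptions for illustration, not the paper's calibrated smart-threshold procedure:

```python
import numpy as np

# Orthonormal DCT-II matrix for 8x8 blocks, built from its definition.
N = 8
k = np.arange(N)[:, None]
i = np.arange(N)[None, :]
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * i + 1) * k / (2 * N))
C[0, :] /= np.sqrt(2.0)

def defend(block, thresh=4.0):
    """Zero DCT coefficients below `thresh` (illustrative value), keep the DC term."""
    coeffs = C @ block @ C.T          # 2-D DCT of the block
    keep = np.abs(coeffs) >= thresh
    keep[0, 0] = True                 # always preserve the block's mean
    return C.T @ (coeffs * keep) @ C  # inverse 2-D DCT

block = np.full((N, N), 128.0)
block[3, 5] += 2.0                    # a small, steganography-like perturbation
clean = defend(block)
```

    Because the perturbation's energy spreads over many small DCT coefficients, the thresholding removes nearly all of it while leaving the flat image content intact; this is the intuition behind filtering attacks in the transform domain rather than the pixel domain.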

  18. Modular techniques for dynamic fault-tree analysis

    NASA Technical Reports Server (NTRS)

    Patterson-Hine, F. A.; Dugan, Joanne B.

    1992-01-01

    It is noted that current approaches used to assess the dependability of complex systems such as Space Station Freedom and the Air Traffic Control System are incapable of handling the size and complexity of these highly integrated designs. A novel technique for modeling such systems which is built upon current techniques in Markov theory and combinatorial analysis is described. It enables the development of a hierarchical representation of system behavior which is more flexible than either technique alone. A solution strategy which is based on an object-oriented approach to model representation and evaluation is discussed. The technique is virtually transparent to the user since the fault tree models can be built graphically and the objects defined automatically. The tree modularization procedure allows the two model types, Markov and combinatoric, to coexist and does not require that the entire fault tree be translated to a Markov chain for evaluation. This effectively reduces the size of the Markov chain required and enables solutions with less truncation, making analysis of longer mission times possible. Using the fault-tolerant parallel processor as an example, a model is built and solved for a specific mission scenario and the solution approach is illustrated in detail.
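    The combinatoric side of such a hierarchical model can be sketched with a toy static fault tree; dynamic modules would instead be handed to a Markov solver, as the abstract describes. The gate structure and probabilities below are illustrative assumptions:

```python
# Toy evaluator for the combinatoric (static) part of a fault tree: each gate
# combines the failure probabilities of independent children.
def gate_prob(kind, children):
    """Failure probability of an AND/OR gate; children are floats or nested gates."""
    probs = [c if isinstance(c, float) else gate_prob(*c) for c in children]
    if kind == "AND":            # all children must fail
        p = 1.0
        for q in probs:
            p *= q
        return p
    if kind == "OR":             # at least one child fails
        p = 1.0
        for q in probs:
            p *= (1.0 - q)
        return 1.0 - p
    raise ValueError(kind)

# System fails if both redundant processors fail, or the shared bus fails.
top = ("OR", [("AND", [0.01, 0.01]), 0.001])
p_fail = gate_prob(*top)
```

    Modularization, in this picture, means subtrees that contain only such static gates can be solved directly, so only the dynamic subtrees pay the cost of a Markov-chain solution.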

  19. Neutron Activation Analysis: Techniques and Applications

    SciTech Connect

    MacLellan, Ryan

    2011-04-27

    The role of neutron activation analysis in low-energy, low-background experiments is discussed in terms of comparable methods. Radiochemical neutron activation analysis is introduced. The procedure of instrumental neutron activation analysis is detailed, especially with respect to the measurement of trace amounts of natural radioactivity. The determination of reactor neutron spectrum parameters required for neutron activation analysis is also presented.

  20. Applying knowledge compilation techniques to model-based reasoning

    NASA Technical Reports Server (NTRS)

    Keller, Richard M.

    1991-01-01

    Researchers in the area of knowledge compilation are developing general purpose techniques for improving the efficiency of knowledge-based systems. In this article, an attempt is made to define knowledge compilation, to characterize several classes of knowledge compilation techniques, and to illustrate how some of these techniques can be applied to improve the performance of model-based reasoning systems.

  1. Node Augmentation Technique in Bayesian Network Evidence Analysis and Marshaling

    SciTech Connect

    Keselman, Dmitry; Tompkins, George H; Leishman, Deborah A

    2010-01-01

    Given a Bayesian network, sensitivity analysis is an important activity. This paper begins by describing a network augmentation technique which can simplify the analysis. Next, we present two techniques which allow the user to determine the probability distribution of a hypothesis node under conditions of uncertain evidence; i.e., the state of an evidence node or nodes is described by a user-specified probability distribution. Finally, we conclude with a discussion of three criteria for ranking evidence nodes based on their influence on a hypothesis node. All of these techniques have been used in conjunction with a commercial software package. A Bayesian network based on a directed acyclic graph (DAG) G is a graphical representation of a system of random variables that satisfies the following Markov property: any node (random variable) is independent of its non-descendants given the state of all its parents (Neapolitan, 2004). For simplicity's sake, we consider only discrete variables with a finite number of states, though most of the conclusions may be generalized.
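    One standard way to realize augmentation for uncertain evidence, sketched on a minimal two-node network, is to add a virtual child V of the evidence node E whose CPT encodes the evidence strength, then condition on V = true. The network structure and all CPT values below are illustrative, not taken from the paper:

```python
# Minimal network: hypothesis H -> evidence E, plus a virtual child V of E.
p_h = 0.3                              # prior P(H = true)
p_e_given = {True: 0.9, False: 0.2}    # P(E = true | H)
p_v_given_e = {True: 0.8, False: 0.2}  # virtual node CPT: P(V = true | E),
                                       # encoding soft evidence of strength 0.8 : 0.2

def posterior_h_given_v():
    """P(H = true | V = true) by brute-force enumeration over H and E."""
    num = 0.0
    den = 0.0
    for h in (True, False):
        ph = p_h if h else 1.0 - p_h
        for e in (True, False):
            pe = p_e_given[h] if e else 1.0 - p_e_given[h]
            joint = ph * pe * p_v_given_e[e]
            den += joint
            if h:
                num += joint
    return num / den
```

    Setting P(V | E) to (1, 0) recovers hard evidence on E, so the augmented network strictly generalizes the usual conditioning step.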

  2. Cochlear implant simulator for surgical technique analysis

    NASA Astrophysics Data System (ADS)

    Turok, Rebecca L.; Labadie, Robert F.; Wanna, George B.; Dawant, Benoit M.; Noble, Jack H.

    2014-03-01

    Cochlear Implant (CI) surgery is a procedure in which an electrode array is inserted into the cochlea. The electrode array is used to stimulate auditory nerve fibers and restore hearing for people with severe to profound hearing loss. The primary goals when placing the electrode array are to fully insert the array into the cochlea while minimizing trauma to the cochlea. Studying the relationship between surgical outcome and various surgical techniques has been difficult since trauma and electrode placement are generally unknown without histology. Our group has created a CI placement simulator that combines an interactive 3D visualization environment with a haptic-feedback-enabled controller. Surgical techniques and patient anatomy can be varied between simulations so that outcomes can be studied under varied conditions. With this system, we envision that through numerous trials we will be able to statistically analyze how outcomes relate to surgical techniques. As a first test of this system, in this work, we have designed an experiment in which we compare the spatial distribution of forces imparted to the cochlea in the array insertion procedure when using two different but commonly used surgical techniques for cochlear access, called round window and cochleostomy access. Our results suggest that CIs implanted using round window access may cause less trauma to deeper intracochlear structures than cochleostomy techniques. This result is of interest because it challenges traditional thinking in the otological community but might offer an explanation for recent anecdotal evidence that suggests that round window access techniques lead to better outcomes.

  3. Liquid refractometer based on fringe projection technique

    NASA Astrophysics Data System (ADS)

    de Angelis, Marco; De Nicola, Sergio; Ferraro, Pietro; Finizio, Andrea; Pierattini, Giovanni

    1999-08-01

    Measurement of the refractive index of liquids is of great importance in applications such as the characterization and adulteration control of commonly used liquids and in pollution monitoring. We present and discuss a fringe projection technique for measuring the index of refraction of transparent liquid materials.

  4. Analysis and calibration techniques for superconducting resonators

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Wollack, Edward J.; Barrentine, Emily M.; Brown, Ari D.; Moseley, S. Harvey; U-Yen, Kongpop

    2015-01-01

    A method is proposed and experimentally explored for in-situ calibration of complex transmission data for superconducting microwave resonators. This cryogenic calibration method accounts for the instrumental transmission response between the vector network analyzer reference plane and the device calibration plane. Once calibrated, the observed resonator response is analyzed in detail by two approaches. The first, a phenomenological model based on physically realizable rational functions, enables the extraction of multiple resonance frequencies and widths for coupled resonators without explicit specification of the circuit network. In the second, an ABCD-matrix representation for the distributed transmission line circuit is used to model the observed response from the characteristic impedance and propagation constant. When used in conjunction with electromagnetic simulations, the kinetic inductance fraction can be determined with this method with an accuracy of 2%. Datasets for superconducting microstrip and coplanar-waveguide resonator devices were investigated and a recovery within 1% of the observed complex transmission amplitude was achieved with both analysis approaches. The experimental configuration used in microwave characterization of the devices and self-consistent constraints for the electromagnetic constitutive relations for parameter extraction are also presented.
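
    As a rough illustration of the ABCD-matrix representation mentioned above, the sketch below builds the ABCD matrix of a uniform transmission-line section from an assumed characteristic impedance and propagation constant and converts it to the transmission coefficient S21; every numerical value is hypothetical, not a device parameter from the paper:

```python
import numpy as np

# Illustrative (invented) values for a short lossy line section.
z0 = 50.0 + 0.5j        # characteristic impedance (ohms)
gamma = 0.02 + 150j     # propagation constant (1/m)
length = 0.01           # line length (m)
z_ref = 50.0            # reference impedance of the measurement

# ABCD matrix of a uniform transmission-line section.
gl = gamma * length
abcd = np.array([[np.cosh(gl),      z0 * np.sinh(gl)],
                 [np.sinh(gl) / z0, np.cosh(gl)]])

# Convert ABCD to the transmission coefficient S21 (matched reference).
a, b = abcd[0]
c, d = abcd[1]
s21 = 2.0 / (a + b / z_ref + c * z_ref + d)
print(abs(s21))
```

A useful property of this representation is that cascaded sections simply multiply: the ABCD matrix of two half-length sections in series equals that of the full line.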

  5. Analysis and calibration techniques for superconducting resonators.

    PubMed

    Cataldo, Giuseppe; Wollack, Edward J; Barrentine, Emily M; Brown, Ari D; Moseley, S Harvey; U-Yen, Kongpop

    2015-01-01

    A method is proposed and experimentally explored for in-situ calibration of complex transmission data for superconducting microwave resonators. This cryogenic calibration method accounts for the instrumental transmission response between the vector network analyzer reference plane and the device calibration plane. Once calibrated, the observed resonator response is analyzed in detail by two approaches. The first, a phenomenological model based on physically realizable rational functions, enables the extraction of multiple resonance frequencies and widths for coupled resonators without explicit specification of the circuit network. In the second, an ABCD-matrix representation for the distributed transmission line circuit is used to model the observed response from the characteristic impedance and propagation constant. When used in conjunction with electromagnetic simulations, the kinetic inductance fraction can be determined with this method with an accuracy of 2%. Datasets for superconducting microstrip and coplanar-waveguide resonator devices were investigated and a recovery within 1% of the observed complex transmission amplitude was achieved with both analysis approaches. The experimental configuration used in microwave characterization of the devices and self-consistent constraints for the electromagnetic constitutive relations for parameter extraction are also presented. PMID:25638068

  6. Retention of denture bases fabricated by three different processing techniques – An in vivo study

    PubMed Central

    Chalapathi Kumar, V. H.; Surapaneni, Hemchand; Ravikiran, V.; Chandra, B. Sarat; Balusu, Srilatha; Reddy, V. Naveen

    2016-01-01

    Aim: Distortion due to polymerization shrinkage compromises retention. The aim was to evaluate the amount of retention of denture bases fabricated by conventional, anchorized, and injection molding polymerization techniques. Materials and Methods: Ten completely edentulous patients were selected, impressions were made, and the master cast obtained was duplicated to fabricate denture bases by the three polymerization techniques. A loop was attached to the finished denture bases to estimate the force required to dislodge them with a retention apparatus. Readings were subjected to nonparametric Friedman two-way analysis of variance followed by Bonferroni correction methods and Wilcoxon matched-pairs signed-ranks tests. Results: Denture bases fabricated by injection molding (3740 g) and anchorized (2913 g) techniques recorded greater retention values than the conventional technique (2468 g). A significant difference was seen between these techniques. Conclusions: Denture bases obtained by the injection molding polymerization technique exhibited maximum retention, followed by the anchorized technique; the least retention was seen with the conventional molding technique. PMID:27382542
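
    The statistical workflow named in the abstract (Friedman test, then Bonferroni-corrected pairwise Wilcoxon tests) might be reproduced along these lines; the per-patient readings below are simulated around the reported group means and are not the study's data:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)

# Hypothetical retention forces (g) for 10 patients under the three
# techniques; means mirror the abstract, the spread is invented.
conventional = 2468 + rng.normal(0, 150, 10)
anchorized = 2913 + rng.normal(0, 150, 10)
injection = 3740 + rng.normal(0, 150, 10)

# Friedman two-way analysis of variance by ranks (related samples).
stat, p = friedmanchisquare(conventional, anchorized, injection)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.2e}")

# Pairwise Wilcoxon matched-pairs signed-ranks tests with a
# Bonferroni correction over the three comparisons.
pairs = [(conventional, anchorized),
         (conventional, injection),
         (anchorized, injection)]
p_adj = [min(1.0, wilcoxon(a, b).pvalue * len(pairs)) for a, b in pairs]
print(p_adj)
```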

  7. Advanced NMR-based techniques for pore structure analysis of coal. Quarterly report No. 7, April 1, 1993--June 30, 1993

    SciTech Connect

    Smith, D.M.

    1993-09-01

    One of the main problems in coal utilization is the inability to properly characterize its complex pore structure. Coals typically have micro/ultramicro pores but they also exhibit meso and macroporosity. Conventional pore size techniques (adsorption/condensation, mercury porosimetry) are limited by this broad pore size range, microporosity, the reactive nature of coal, the requirement that samples be completely dried, and network/percolation effects. Small angle scattering is limited because it probes both open and closed pores. Although one would not expect any single technique to provide a satisfactory description of a coal's structure, it is apparent that better techniques are necessary. Small angle scattering could be improved by combining scattering and adsorption measurements. Also, the measurement of NMR parameters of various gas phase and adsorbed phase NMR active probes can provide pore structure information. We will investigate the dependence of common NMR parameters such as chemical shifts and relaxation times of several different nuclei and compounds on the pore structure of model microporous solids, carbons, and coals. In particular, we will study the interaction between several small molecules ({sup 129}Xe, {sup 3}He, {sup 2}H{sub 2}, {sup 14}N{sub 2}, {sup 14}NH{sub 3}, {sup 15}N{sub 2}, {sup 13}CH{sub 4}, {sup 13}CO{sub 2}) and the pore surfaces in coals.

  8. Advanced NMR-based techniques for pore structure analysis of coal. Quarterly report No. 3, July 1, 1992--September 30, 1992

    SciTech Connect

    Smith, D.M.

    1992-12-31

    One of the main problems in coal utilization is the inability to properly characterize its complex pore structure. Coals typically have micro/ultra-micro pores but they also exhibit meso and macroporosity. Conventional pore size techniques (adsorption/condensation, mercury porosimetry) are limited by this broad pore size range, microporosity, the reactive nature of coal, the requirement that samples be completely dried, and network/percolation effects. Small angle scattering is limited because it probes both open and closed pores. Although one would not expect any single technique to provide a satisfactory description of a coal's structure, it is apparent that better techniques are necessary. We believe that measurement of the NMR parameters of various gas phase and adsorbed phase NMR active probes can provide the resolution to this problem. We now have two suites of well-characterized microporous materials, including oxides (zeolites and silica gel) and activated carbons from our industrial partner, Air Products in Allentown, PA. Our current work may be divided into three areas: small-angle X-ray scattering (SAXS), adsorption, and NMR.

  9. Advanced NMR-based techniques for pore structure analysis of coal. Quarterly report No. 9, October 1, 1993--December 30, 1993

    SciTech Connect

    Smith, D.M.

    1993-12-31

    One of the main problems in coal utilization is the inability to properly characterize its complex pore structure. Coals typically have micro/ultra-micro pores but they also exhibit meso and macroporosity. Conventional pore size techniques (adsorption/condensation, mercury porosimetry) are limited by this broad pore size range, microporosity, the reactive nature of coal, the requirement that samples be completely dried, and network/percolation effects. Small angle scattering is limited because it probes both open and closed pores. Although one would not expect any single technique to provide a satisfactory description of a coal's structure, it is apparent that better techniques are necessary. Small angle scattering could be improved by combining scattering and adsorption measurements. Also, the measurement of NMR parameters of various gas phase and adsorbed phase NMR active probes can provide pore structure information. We will investigate the dependence of common NMR parameters such as chemical shifts and relaxation times of several different nuclei and compounds on the pore structure of model microporous solids, carbons, and coals. In particular, we will study the interaction between several small molecules ({sup 129}Xe, {sup 3}He, {sup 14}N{sub 2}, {sup 14}NH{sub 3}, {sup 15}N{sub 2}, {sup 13}CH{sub 4}, {sup 13}CO{sub 2}) and the pore surfaces. Our current work may be divided into three areas: small-angle X-ray scattering (SAXS), adsorption, and NMR.

  10. Advanced NMR-based techniques for pore structure analysis of coal. Quarterly report #8, 7/1/93--9/30/93

    SciTech Connect

    Smith, D.M.

    1993-12-31

    One of the main problems in coal utilization is the inability to properly characterize its complex pore structure. Coals typically have micro/ultramicro pores but they also exhibit meso and macroporosity. Conventional pore size techniques (adsorption/condensation, mercury porosimetry) are limited by this broad pore size range, microporosity, the reactive nature of coal, the requirement that samples be completely dried, and network/percolation effects. Small angle scattering is limited because it probes both open and closed pores. Although one would not expect any single technique to provide a satisfactory description of a coal's structure, it is apparent that better techniques are necessary. Small angle scattering could be improved by combining scattering and adsorption measurements. Also, the measurement of NMR parameters of various gas phase and adsorbed phase NMR active probes can provide pore structure information. The dependence of common NMR parameters such as chemical shifts and relaxation times of several different nuclei and compounds on the pore structure of model microporous solids, carbons, and coals is investigated. In particular, the interaction between several small molecules ({sup 129}Xe, {sup 3}He, {sup 14}N{sub 2}, {sup 14}NH{sub 3}, {sup 15}N{sub 2}, {sup 13}CH{sub 4}, {sup 13}CO{sub 2}) and the pore surfaces is studied.

  11. Advanced NMR-based techniques for pore structure analysis of coal. Quarterly report No. 6, January 1, 1993--March 31, 1993

    SciTech Connect

    Smith, D.M.

    1993-08-01

    One of the main problems in coal utilization is the inability to properly characterize its complex pore structure. Coals typically have micro/ultra-micro pores but they also exhibit meso and macroporosity. Conventional pore size techniques (adsorption/condensation, mercury porosimetry) are limited by this broad pore size range, microporosity, the reactive nature of coal, the requirement that samples be completely dried, and network/percolation effects. Small angle scattering is limited because it probes both open and closed pores. Although one would not expect any single technique to provide a satisfactory description of a coal's structure, it is apparent that better techniques are necessary. Small angle scattering could be improved by combining scattering and adsorption measurements. Also, the measurement of NMR parameters of various gas phase and adsorbed phase NMR active probes can provide pore structure information. We will investigate the dependence of common NMR parameters such as chemical shifts and relaxation times of several different nuclei and compounds on the pore structure of model microporous solids, carbons, and coals. In particular, we will study the interaction between several small molecules ({sup 129}Xe, {sup 3}He, {sup 2}H{sub 2}, {sup 14}N{sub 2}, {sup 14}NH{sub 3}, {sup 15}N{sub 2}, {sup 13}CH{sub 4}, {sup 13}CO{sub 2}) and the pore surfaces in coals.

  12. IMAGE-BASED EROSION MEASUREMENT TECHNIQUE

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Two- and three-dimensional analysis using close range digital photographs can be very useful in measuring changes in erosion on the landscape. Computer software exists for conducting photographic analysis but is often either cost prohibitive or very labor intensive to use. This paper describes a ...

  13. Typology of Delivery Quality: Latent Profile Analysis of Teacher Engagement and Delivery Techniques in a School-Based Prevention Intervention, "Keepin' It REAL" Curriculum

    ERIC Educational Resources Information Center

    Shin, YoungJu; Miller-Day, Michelle; Pettigrew, Jonathan; Hecht, Michael L.; Krieger, Janice L.

    2014-01-01

    Enhancing the delivery quality of school-based, evidence-based prevention programs is one key to ensuring uniform program effects on student outcomes. Program evaluations often focus on content dosage when implementing prevention curricula; however, less is known about the implementation quality of prevention content, especially among teachers who may…

  14. A comparison of different texture analysis techniques

    SciTech Connect

    Wright, S.I.; Kocks, U.F.

    1996-08-01

    With the advent of automated techniques for measuring individual crystallographic orientations using electron diffraction, there has been an increase in the use of local orientation measurements for measuring textures in polycrystalline materials. Several studies have focused on the number of single orientation measurements necessary to achieve the statistics of more conventional texture measurement techniques, such as pole figure measurement using x-ray and neutron diffraction. This investigation considers this question but is also extended to consider the nature of the differences between textures measured using individual orientation measurements and those measured using x-ray diffraction.

  15. Noninvasive in vivo glucose sensing using an iris based technique

    NASA Astrophysics Data System (ADS)

    Webb, Anthony J.; Cameron, Brent D.

    2011-03-01

    Physiological glucose monitoring is an important aspect of the treatment of individuals afflicted with diabetes mellitus. Although invasive techniques for glucose monitoring are widely available, it would be very beneficial to make such measurements in a noninvasive manner. In this study, a New Zealand White (NZW) rabbit animal model was utilized to evaluate a developed iris-based imaging technique for the in vivo measurement of physiological glucose concentration. The animals were anesthetized with isoflurane, and an insulin/dextrose protocol was used to control blood glucose concentration. To further restrict eye movement, a developed ocular fixation device was used. During the experimental time frame, near-infrared illuminated iris images were acquired along with corresponding discrete blood glucose measurements taken with a handheld glucometer. Calibration was performed using an image-based Partial Least Squares (PLS) technique. Independent validation was also performed to assess model performance, along with Clarke Error Grid Analysis (CEGA). Initial validation results were promising and show that a high percentage of the predicted glucose concentrations are within 20% of the reference values.
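
    The PLS calibration step can be sketched generically; the record describes an image-based PLS pipeline, while the code below is a minimal one-component PLS (NIPALS-style) on synthetic feature vectors standing in for iris-image features, with every value invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 40 samples of 20 image-derived features with a
# linear dependence on glucose concentration plus noise.
n, p = 40, 20
glucose = rng.uniform(50, 300, n)            # reference values (mg/dL)
loadings = rng.normal(0, 1, p)
X = np.outer(glucose, loadings) + rng.normal(0, 5.0, (n, p))
y = glucose

# One-component PLS (NIPALS): find the X direction with maximal
# covariance with y, then regress on the resulting score.
Xc = X - X.mean(axis=0)
yc = y - y.mean()
w = Xc.T @ yc
w /= np.linalg.norm(w)          # weight vector
t = Xc @ w                      # scores
b = (t @ yc) / (t @ t)          # regression coefficient on the score

def predict(X_new):
    return y.mean() + b * ((X_new - X.mean(axis=0)) @ w)

rmse = np.sqrt(np.mean((predict(X) - y) ** 2))
print(f"calibration RMSE = {rmse:.2f} mg/dL")
```

A real calibration would also cross-validate the number of PLS components and, as in the abstract, check predictions with Clarke Error Grid Analysis.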

  16. Surveying converter lining erosion state based on laser measurement technique

    NASA Astrophysics Data System (ADS)

    Li, Hongsheng; Shi, Tielin; Yang, Shuzi

    1998-08-01

    It is very important to survey the erosion state of the steelmaking converter lining in real time so as to optimize the technological process, extend converter durability, and reduce steelmaking production costs. This paper gives a practical method based on laser measurement techniques. It presents the basic principle of the measurement method, the composition of the measurement system, and research on key technological problems. The method is based on laser range finding to net points on the surface of the surveyed converter lining and on angle finding to the laser beams. The angle signals are also used to help realize the automatic scanning function. The laser signals are modulated and encoded. In addition, we adopt wavelet analysis and other filter algorithms to denoise noisy data and extract useful information. The main ideas of algorithms such as net-point measuring path planning and measurement-device position optimization are also given in order to improve the measurement precision and real-time performance of the system.
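
    As a loose illustration of the wavelet denoising step, the sketch below applies one-level Haar wavelet shrinkage to a simulated noisy range profile; the signal model and threshold are assumptions, and the authors' actual filtering is not specified in this abstract:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical noisy range profile along one scan line of the lining.
n = 256
x = np.linspace(0, 1, n)
clean = 2.0 + 0.5 * np.sin(2 * np.pi * x) - 0.3 * (x > 0.5)
noisy = clean + rng.normal(0, 0.05, n)

def haar_denoise(signal, threshold):
    """One-level Haar wavelet shrinkage: split into averages and
    details, soft-threshold the details, and reconstruct."""
    s = signal.reshape(-1, 2)
    avg = (s[:, 0] + s[:, 1]) / np.sqrt(2)
    det = (s[:, 0] - s[:, 1]) / np.sqrt(2)
    det = np.sign(det) * np.maximum(np.abs(det) - threshold, 0.0)
    out = np.empty_like(signal)
    out[0::2] = (avg + det) / np.sqrt(2)
    out[1::2] = (avg - det) / np.sqrt(2)
    return out

denoised = haar_denoise(noisy, threshold=0.1)
rms_before = np.sqrt(np.mean((noisy - clean) ** 2))
rms_after = np.sqrt(np.mean((denoised - clean) ** 2))
print(rms_before, rms_after)
```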

  17. Development of Single-Nucleotide Polymorphism- Based Phylum-Specific PCR Amplification Technique: Application to the Community Analysis Using Ciliates as a Reference Organism

    PubMed Central

    Jung, Jae-Ho; Kim, Sanghee; Ryu, Seongho; Kim, Min-Seok; Baek, Ye-Seul; Kim, Se-Joo; Choi, Joong-Ki; Park, Joong-Ki; Min, Gi-Sik

    2012-01-01

    Despite recent advances in mass sequencing technologies such as pyrosequencing, assessment of culture-independent microbial eukaryote community structures using universal primers remains very difficult due to the tremendous richness and complexity of organisms in these communities. Use of a specific PCR marker targeting a particular group would provide enhanced sensitivity and more in-depth evaluation of microbial eukaryote communities compared to what can be achieved with universal primers. We discovered that many phylum- or group-specific single-nucleotide polymorphisms (SNPs) exist in small subunit ribosomal RNA (SSU rRNA) genes from diverse eukaryote groups. By applying this discovery to a known simple allele-discriminating (SAP) PCR method, we developed a technique that enables the identification of organisms belonging to a specific higher taxonomic group (or phylum) among diverse types of eukaryotes. We performed an assay using two complementary methods, pyrosequencing and clone library screening. In doing this, specificities for the group (ciliates) targeted in this study in bulked environmental samples were 94.6% for the clone library and 99.2% for pyrosequencing, respectively. In particular, our novel technique showed high selectivity for rare species, a feature that may be more important than the ability to identify quantitatively predominant species in community structure analyses. Additionally, our data revealed that a target-specific library (or a ciliate-specific one for the present study) can better explain the ecological features of a sampling locality than a universal library. PMID:22965748

  18. The cast aluminum denture base. Part II: Technique.

    PubMed

    Halperin, A R; Halperin, G C

    1980-07-01

    A technique for waxing up and casting an aluminum base and a method for incorporating the base into the final denture have been discussed. This technique does not use induction casting; rather, it uses two casting ovens and a centrifugal casting machine. PMID:6991680

  19. Recent trends in particle size analysis techniques

    NASA Technical Reports Server (NTRS)

    Kang, S. H.

    1984-01-01

    Recent advances and developments in particle-sizing technologies are briefly reviewed according to three operating principles, including particle size and shape descriptions. Significant trends in recently developed particle-size analysis equipment show that compact electronic circuitry and rapid data processing systems have mainly been adopted in instrument design. Some newly developed techniques for characterizing particulate systems are also introduced.

  20. Speech recognition based on pattern recognition techniques

    NASA Astrophysics Data System (ADS)

    Rabiner, Lawrence R.

    1990-05-01

    Algorithms for speech recognition can be characterized broadly as pattern recognition approaches and acoustic phonetic approaches. To date, the greatest degree of success in speech recognition has been obtained using pattern recognition paradigms. Pattern recognition techniques have been applied to the problems of isolated word (or discrete utterance) recognition, connected word recognition, and continuous speech recognition. It is shown that understanding (and consequently the resulting recognizer performance) is best for the simplest recognition tasks and is considerably less well developed for large-scale recognition systems.
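
    A classic pattern-recognition ingredient for isolated word recognition is template matching by dynamic time warping (DTW). The sketch below is a generic DTW on toy one-dimensional feature tracks, not Rabiner's specific recognizer:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two feature sequences
    (frames x features): the minimum cumulative frame-to-frame cost
    over all monotone alignments of the two sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Toy "utterances": a test track that is a time-warped copy of
# template "a" should match "a" better than template "b".
t = np.linspace(0, 1, 50)
word_a = np.sin(2 * np.pi * t)[:, None]
word_b = np.cos(3 * np.pi * t)[:, None]
test_utt = np.sin(2 * np.pi * np.sqrt(t))[:, None]

scores = {"a": dtw_distance(test_utt, word_a),
          "b": dtw_distance(test_utt, word_b)}
print(min(scores, key=scores.get))
```

In a real system the feature tracks would be short-time spectral features (e.g., cepstra) rather than raw waveforms.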

  1. Liquid tunable microlenses based on MEMS techniques

    NASA Astrophysics Data System (ADS)

    Zeng, Xuefeng; Jiang, Hongrui

    2013-08-01

    The recent rapid development in microlens technology has provided many opportunities for miniaturized optical systems and has found a wide range of applications. Of these microlenses, tunable-focus microlenses are of special interest, as their focal lengths can be tuned using micro-scale actuators integrated with the lens structure. Realization of such tunable microlenses generally relies on microelectromechanical system (MEMS) technologies. Here, we review the recent progress in tunable liquid microlenses. The underlying physics relevant to these microlenses are first discussed, followed by a description of the three main categories of tunable microlenses involving MEMS techniques: mechanically driven, electrically driven, and those integrated within microfluidic systems.

  2. Liquid Tunable Microlenses based on MEMS techniques

    PubMed Central

    Zeng, Xuefeng; Jiang, Hongrui

    2013-01-01

    The recent rapid development in microlens technology has provided many opportunities for miniaturized optical systems and has found a wide range of applications. Of these microlenses, tunable-focus microlenses are of special interest, as their focal lengths can be tuned using micro-scale actuators integrated with the lens structure. Realization of such tunable microlenses generally relies on microelectromechanical system (MEMS) technologies. Here, we review the recent progress in tunable liquid microlenses. The underlying physics relevant to these microlenses are first discussed, followed by a description of the three main categories of tunable microlenses involving MEMS techniques: mechanically driven, electrically driven, and those integrated within microfluidic systems. PMID:24163480

  3. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)

    2015-01-01

    Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
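
    The Zernike-moment descriptor can be sketched directly from its definition; the image, orders, and grid size below are illustrative, and the term re-use efficiencies mentioned in the record are omitted. The magnitude of each moment is what stays fixed under rotation:

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment Z_{n,m} of a square grayscale image sampled on
    the unit disk (n - |m| must be even, |m| <= n); the overall scale
    is grid-dependent, but |Z| is rotation invariant."""
    N = img.shape[0]
    c = (N - 1) / 2.0
    y, x = np.mgrid[0:N, 0:N]
    xr = (x - c) / c
    yr = (y - c) / c
    rho = np.hypot(xr, yr)
    theta = np.arctan2(yr, xr)
    mask = rho <= 1.0

    # Radial polynomial R_{n,|m|}(rho).
    R = np.zeros_like(rho)
    for k in range((n - abs(m)) // 2 + 1):
        coeff = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + abs(m)) // 2 - k)
                    * factorial((n - abs(m)) // 2 - k)))
        R += coeff * rho ** (n - 2 * k)

    basis = R * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * basis[mask])

# Hypothetical "hand segment" image: an off-center bright blob.
N = 64
yy, xx = np.mgrid[0:N, 0:N]
img = np.exp(-((xx - 40) ** 2 + (yy - 28) ** 2) / 50.0)

z = zernike_moment(img, 4, 2)
z_rot = zernike_moment(np.rot90(img), 4, 2)
print(abs(z), abs(z_rot))   # magnitudes agree under rotation
```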

  4. Hand-Based Biometric Analysis

    NASA Technical Reports Server (NTRS)

    Bebis, George

    2013-01-01

    Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.

  5. Survey of immunoassay techniques for biological analysis

    SciTech Connect

    Burtis, C.A.

    1986-10-01

    Immunoassay is a very specific, sensitive, and widely applicable analytical technique. Recent advances in genetic engineering have led to the development of monoclonal antibodies, which further improve the specificity of immunoassays. Originally, radioisotopes were used to label the antigens and antibodies used in immunoassays. However, in the last decade, numerous types of immunoassays have been developed which utilize enzymes and fluorescent dyes as labels. Given the technical, safety, health, and disposal problems associated with using radioisotopes, immunoassays that utilize enzyme and fluorescent labels are rapidly replacing those using radioisotope labels. These newer techniques are as sensitive, are easily automated, have stable reagents, and do not have a disposal problem. 6 refs., 1 fig., 2 tabs.

  6. Analysis of Cultural Heritage by Accelerator Techniques and Analytical Imaging

    NASA Astrophysics Data System (ADS)

    Ide-Ektessabi, Ari; Toque, Jay Arre; Murayama, Yusuke

    2011-12-01

    In this paper we present the results of experimental investigations using two very important accelerator techniques: (1) synchrotron radiation XRF and XAFS; and (2) accelerator mass spectrometry and multispectral analytical imaging for the investigation of cultural heritage. We also want to introduce a complementary approach to the investigation of artworks which is noninvasive and nondestructive and can be applied in situ. Four major projects will be discussed to illustrate the potential applications of these accelerator and analytical imaging techniques: (1) investigation of Mongolian textile (Genghis Khan and Kublai Khan period) using XRF, AMS and electron microscopy; (2) XRF studies of pigments collected from Korean Buddhist paintings; (3) creating a database of elemental composition and spectral reflectance of more than 1000 Japanese pigments which have been used for traditional Japanese paintings; and (4) visible light-near infrared spectroscopy and multispectral imaging of degraded malachite and azurite. The XRF measurements of the Japanese and Korean pigments could be used to complement the results of pigment identification by analytical imaging through spectral reflectance reconstruction. On the other hand, analysis of the Mongolian textiles revealed that they were produced between the 12th and 13th centuries. Elemental analysis of the samples showed that they contained traces of gold, copper, iron and titanium. Based on the age and trace elements in the samples, it was concluded that the textiles were produced during the height of power of the Mongol empire, which makes them a valuable cultural heritage. Finally, the analysis of the degraded and discolored malachite and azurite demonstrates how multispectral analytical imaging could be used to complement the results of high energy-based techniques.

  7. A uniform technique for flood frequency analysis.

    USGS Publications Warehouse

    Thomas, W.O., Jr.

    1985-01-01

    This uniform technique consisted of fitting the logarithms of annual peak discharges to a Pearson Type III distribution using the method of moments. The objective was to adopt a consistent approach for the estimation of floodflow frequencies that could be used in computing average annual flood losses for project evaluation. In addition, a consistent approach was needed for defining equitable flood-hazard zones as part of the National Flood Insurance Program. -from ASCE Publications Information
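
    Fitting the logarithms of annual peaks to a Pearson Type III distribution by the method of moments can be sketched as below; the peak-discharge series is invented, and the quantile uses the standard Wilson-Hilferty frequency-factor approximation rather than any agency-specific procedure:

```python
import math
from statistics import NormalDist

# Hypothetical annual peak discharges (cfs); values illustrative.
peaks = [3200, 4100, 2800, 5600, 3900, 7200, 3100, 4800, 2600, 6100,
         3500, 4400, 5200, 2900, 3800]

# Method of moments on the logarithms of the peaks.
logs = [math.log10(q) for q in peaks]
n = len(logs)
mean = sum(logs) / n
std = math.sqrt(sum((v - mean) ** 2 for v in logs) / (n - 1))
skew = (n / ((n - 1) * (n - 2))) * sum((v - mean) ** 3 for v in logs) / std ** 3

def quantile(return_period):
    """T-year flood from the fitted log-Pearson Type III distribution,
    via the Wilson-Hilferty frequency-factor approximation."""
    z = NormalDist().inv_cdf(1 - 1 / return_period)
    k = skew / 6
    kt = (2 / skew) * ((1 + k * z - k * k) ** 3 - 1) if abs(skew) > 1e-9 else z
    return 10 ** (mean + kt * std)

print(f"100-year flood = {quantile(100):.0f} cfs")
```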

  8. Laser-induced breakdown spectroscopy technique for quantitative analysis of aqueous solution using matrix conversion based on plant fiber spunlaced nonwovens.

    PubMed

    Chen, Chenghan; Niu, Guanghui; Shi, Qi; Lin, Qingyu; Duan, Yixiang

    2015-10-01

    In the present work, laser-induced breakdown spectroscopy (LIBS) was applied to detect concentrations of chromium and nickel in aqueous solution in the form of matrix conversion using plant fiber spunlaced nonwovens as a solid-phase support, which can effectively avoid inherent difficulties of liquid LIBS analysis such as splashing, quenching effects, and a shorter plasma lifetime. Drops of the sample solution were transferred to the plant fiber spunlaced nonwovens surface and uniformly diffused from the center to the whole area of the substrate. Owing to good hydrophilicity, the plant fiber spunlaced nonwovens can hold more of the liquid sample, and the surface of this material never wrinkles after being dried in a drying oven, which can effectively reduce the deviation during the LIBS analysis. In addition, the plant fiber spunlaced nonwovens used in the present work are relatively convenient and low cost. Also, the procedure of analysis was simple and fast, which are among the unique features of LIBS technology. Therefore, this method has potential applications for practical and in situ analyses. To achieve sensitive elemental detection, the optimal delay time in this experiment was investigated. Under the optimized condition, the limits of detection for Cr and Ni are 0.7 and 5.7 μg·mL(-1), respectively. The results obtained in the present study show that the matrix conversion method is a feasible option for analyzing heavy metals in aqueous solutions by LIBS technology. PMID:26479603
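
    Detection limits of the kind quoted above are commonly obtained from a linear calibration curve via the 3-sigma criterion; the sketch below uses invented calibration and blank data, not the paper's measurements:

```python
import numpy as np

# Hypothetical LIBS calibration: known Cr concentrations (ug/mL) vs
# background-corrected line intensities (arbitrary units).
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
intensity = np.array([2.1, 55.3, 103.8, 212.4, 415.0, 838.9])

# Linear calibration curve by least squares.
slope, intercept = np.polyfit(conc, intensity, 1)

# Limit of detection via the 3-sigma criterion: three times the
# standard deviation of repeated blank measurements over the slope.
blank_replicates = np.array([2.1, 1.8, 2.6, 2.3, 1.9, 2.4, 2.0])
lod = 3 * blank_replicates.std(ddof=1) / slope
print(f"LOD = {lod:.2f} ug/mL")
```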

  9. Statistical Analysis Techniques for Small Sample Sizes

    NASA Technical Reports Server (NTRS)

    Navard, S. E.

    1984-01-01

    The problem of small sample sizes encountered in the analysis of space-flight data is examined. Because of the small amount of data available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests that can be applied to small samples are explained. Emphasis is on the underlying assumptions of each test and on the considerations needed to choose the most appropriate test for a given type of analysis.
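A minimal example of the kind of small-sample hypothesis test the report covers: a one-sample t-test on six hypothetical readings, where the key assumption for small n is approximate normality of the underlying population. The data and nominal value are illustrative only.

```python
import numpy as np
from scipy import stats

# Six hypothetical sensor readings: does the mean differ from 5.0?
readings = np.array([5.2, 4.9, 5.4, 5.1, 4.8, 5.3])
t_stat, p_value = stats.ttest_1samp(readings, popmean=5.0)

# With only n - 1 = 5 degrees of freedom, the two-sided 5% critical
# value is far above the large-sample 1.96, so the evidence must be
# correspondingly stronger before the null hypothesis is rejected.
t_crit = stats.t.ppf(0.975, df=len(readings) - 1)
reject = abs(t_stat) > t_crit
```

Here the sample mean is slightly above 5.0 but the test statistic falls well short of the small-sample critical value, so the difference is not significant at the 5% level.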

  10. GAS CURTAIN EXPERIMENTAL TECHNIQUE AND ANALYSIS METHODOLOGIES

    SciTech Connect

    J. R. KAMM; ET AL

    2001-01-01

    The qualitative and quantitative relationship of numerical simulation to the physical phenomena being modeled is of paramount importance in computational physics. If the phenomena are dominated by irregular (i. e., nonsmooth or disordered) behavior, then pointwise comparisons cannot be made and statistical measures are required. The problem we consider is the gas curtain Richtmyer-Meshkov (RM) instability experiments of Rightley et al. (13), which exhibit complicated, disordered motion. We examine four spectral analysis methods for quantifying the experimental data and computed results: Fourier analysis, structure functions, fractal analysis, and continuous wavelet transforms. We investigate the applicability of these methods for quantifying the details of fluid mixing.
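Of the four spectral measures named, the Fourier power spectrum is the simplest to sketch. The following uses a synthetic signal rather than the gas-curtain data: a pure 5 Hz sine sampled at 100 Hz concentrates its one-sided power spectrum in the 5 Hz bin.

```python
import numpy as np

fs, n = 100, 100                 # sampling rate (Hz) and sample count
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 5 * t)

spectrum = np.abs(np.fft.rfft(signal)) ** 2   # one-sided power spectrum
freqs = np.fft.rfftfreq(n, d=1 / fs)          # bin centers: 0, 1, ..., 50 Hz
peak_freq = freqs[np.argmax(spectrum)]        # frequency of maximum power
```

For disordered mixing data, one would examine how this power is distributed across wavenumbers (e.g., the slope of the spectrum) rather than a single peak.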

  11. Analysis of signal processing techniques in pulsed thermography

    NASA Astrophysics Data System (ADS)

    Lopez, Fernando; Ibarra-Castanedo, Clemente; Maldague, Xavier; de Paulo Nicolau, Vicente

    2013-05-01

    Pulsed Thermography (PT) is one of the most widely used approaches for the inspection of composite materials, its main attraction being its deployment in the transient regime. However, due to the physical phenomena involved during the inspection, the signals acquired by the infrared camera are nearly always affected by external reflections and local emissivity variations. Furthermore, non-uniform heating at the surface and thermal losses at the edges of the material also constrain the detection capability. For this reason, the thermographic signals should be processed in order to improve, qualitatively and quantitatively, the quality of the thermal images. Signal processing constitutes an important step in the chain of thermal image analysis, especially when defect characterization is required. Several of the signal processing techniques employed nowadays are based on the one-dimensional solution of Fourier's law of heat conduction. This investigation discusses the three most-used techniques based on the 1D Fourier's law: Thermographic Signal Reconstruction (TSR), Differential Absolute Contrast (DAC) and Pulsed Phase Thermography (PPT), applied to carbon fiber laminated composites. It is of special interest to determine the detection capabilities of each technique, allowing more reliable results when performing an inspection by PT.
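The TSR idea mentioned above can be sketched on synthetic data. After a heat pulse, the surface of a defect-free semi-infinite body cools as T ∝ t^(-1/2), a straight line of slope -0.5 in log-log coordinates; TSR fits a low-order polynomial to log(T) versus log(t), smoothing noise while preserving the decay shape, and departures from the ideal slope flag defects. The decay model and noise level here are illustrative assumptions, not the paper's experimental data.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.05, 2.0, 200)                               # time after pulse (s)
temp = t ** -0.5 * (1 + 0.01 * rng.standard_normal(t.size))   # noisy ideal decay

log_t, log_T = np.log(t), np.log(temp)
coeffs = np.polyfit(log_t, log_T, deg=4)     # TSR: low-order polynomial fit
smooth = np.polyval(coeffs, log_t)           # reconstructed (denoised) log-signal

# The derivative of the fit gives the local log-log slope, which should
# hover near -0.5 over a sound region of a semi-infinite body.
slope_mid = np.polyval(np.polyder(coeffs), log_t[len(log_t) // 2])
```

In practice the same fit is applied pixel by pixel to the thermogram sequence, and the first- and second-derivative images are inspected for defect signatures.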

  12. Trends and Techniques for Space Base Electronics

    NASA Technical Reports Server (NTRS)

    Trotter, J. D.; Wade, T. E.; Gassaway, J. D.

    1979-01-01

    Simulations of various phosphorus and boron diffusions in SOS were completed, and a sputtering system, furnaces, and photolithography-related equipment were set up. Double-layer metal experiments initially utilized wet chemistry techniques. By incorporating ultrasonic etching of the vias, premetal cleaning with a modified buffered HF, phosphorus-doped vapox, and extended sintering, yields of 98% were obtained using the standard test pattern. A two-dimensional modeling program was written for simulating short-channel MOSFETs with nonuniform substrate doping. A key simplifying assumption is that the majority carriers can be represented by a sheet charge at the silicon dioxide-silicon interface. Although the program is incomplete, a solution of the two-dimensional Poisson equation for the potential distribution was achieved. The status of other 2-D MOSFET simulation programs is summarized.

  13. Techniques for detumbling a disabled space base

    NASA Technical Reports Server (NTRS)

    Kaplan, M. H.

    1973-01-01

    Techniques and conceptual devices for carrying out detumbling operations are examined, and progress in the development of these concepts is discussed. Devices which reduce tumble to simple spin through active linear motion of a small mass are described, together with a Module for Automatic Dock and Detumble (MADD) that could perform an orbital transfer from the shuttle in order to track and dock at a preselected point on the distressed craft. Once docked, MADD could apply torques by firing thrusters to detumble the passive vehicle. Optimum combinations of mass-motion and external devices for various situations should be developed. The need to completely formulate the automatic control logic of MADD is also emphasized.

  14. Identification of Tea Storage Times by Linear Discrimination Analysis and Back-Propagation Neural Network Techniques Based on the Eigenvalues of Principal Components Analysis of E-Nose Sensor Signals

    PubMed Central

    Yu, Huichun; Wang, Yongwei; Wang, Jun

    2009-01-01

    An electronic nose (E-nose) was employed to detect the aroma of green tea after different storage times. Longjing green tea dry leaves, beverages and residues were each analyzed with the E-nose. In order to decrease the data dimensionality and optimize the feature vector, the E-nose sensor response data were analyzed by principal components analysis (PCA), and the values of the five main principal components were extracted as the input for the discrimination analysis. The storage time (0, 60, 120, 180 and 240 days) was better discriminated by linear discrimination analysis (LDA) and was predicted by the back-propagation neural network (BPNN) method. The results showed that the discrimination and testing results based on the tea leaves were better than those based on tea beverages and tea residues. The mean errors of the tea leaf data were 9, 2.73, 3.93, 6.33 and 6.8 days, respectively. PMID:22408494
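The PCA-then-LDA chain described above is straightforward to sketch with scikit-learn. The data here are synthetic stand-ins (five well-separated "storage-time" classes over ten pseudo-sensor features), not real E-nose responses, and the class separation is deliberately generous.

```python
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for E-nose responses: 5 classes, 10 sensor features.
X, y = make_blobs(n_samples=250, centers=5, n_features=10,
                  cluster_std=1.0, random_state=42)

# Reduce to the 5 main principal components, then discriminate with LDA,
# mirroring the feature-extraction / classification split in the paper.
model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
model.fit(X, y)
accuracy = model.score(X, y)   # training accuracy on well-separated classes
```

Running PCA first keeps the LDA step well-conditioned when the raw sensor channels are strongly correlated, which is typical of gas-sensor arrays.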

  15. Soil Analysis using the semi-parametric NAA technique

    SciTech Connect

    Zamboni, C. B.; Silveira, M. A. G.; Medina, N. H.

    2007-10-26

    The semi-parametric Neutron Activation Analysis technique, using Au as a flux monitor, was applied to measure the element concentrations of Br, Ca, Cl, K, Mn and Na for soil characterization. The results were compared with those obtained using the Instrumental Neutron Activation Analysis technique and were found to be compatible. The viability, advantages, and limitations of these two analytical methodologies are discussed.

  16. 78 FR 37690 - Federal Acquisition Regulation; Price Analysis Techniques

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-21

    ... published a proposed rule in the Federal Register at 77 FR 40552 on July 10, 2012, to clarify and pinpoint a... Federal Acquisition Regulation; Price Analysis Techniques AGENCY: Department of Defense (DoD), General... clarify and give a precise reference in the use of a price analysis technique in order to establish a...

  17. Development of analysis techniques for remote sensing of vegetation resources

    NASA Technical Reports Server (NTRS)

    Draeger, W. C.

    1972-01-01

    Various data handling and analysis techniques are summarized for evaluation of ERTS-A and supporting high flight imagery. These evaluations are concerned with remote sensors applied to wildland and agricultural vegetation resource inventory problems. Monitoring California's annual grassland, automatic texture analysis, agricultural ground data collection techniques, and spectral measurements are included.

  18. PRACTICAL SENSITIVITY AND UNCERTAINTY ANALYSIS TECHNIQUES APPLIED TO AGRICULTURAL SYSTEMS MODELS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    We present a practical evaluation framework for analysis of two complex, process-based agricultural system models, WEPP and RZWQM. The evaluation framework combines sensitivity analysis and the uncertainty analysis techniques of first order error analysis (FOA) and Monte Carlo simulation with Latin ...
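The two uncertainty techniques combined in the framework above can be compared on a toy model: first-order error analysis (FOA) propagates variance through the local derivative, while Monte Carlo samples the input distribution and pushes each draw through the model. The model y = x² and the input distribution here are illustrative, not taken from WEPP or RZWQM.

```python
import numpy as np

x0, sigma = 2.0, 0.1                 # nominal input and its one-sigma error

# FOA: sigma_y ≈ |dy/dx| * sigma_x, with dy/dx = 2x evaluated at x0
foa_std = abs(2 * x0) * sigma

# Monte Carlo: sample the input, evaluate the model, take the sample std
rng = np.random.default_rng(1)
samples = rng.normal(x0, sigma, size=200_000) ** 2
mc_std = samples.std()

# For a mildly nonlinear model and a small input sigma, the two estimates
# should agree closely; the gap grows with nonlinearity and input spread.
rel_diff = abs(mc_std - foa_std) / foa_std
```

This is the practical trade-off the abstract alludes to: FOA is cheap but linearizes the model, while Monte Carlo (optionally with Latin hypercube-style sampling) costs many model runs but captures nonlinear propagation.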

  19. Comparative analysis of techniques for measuring the modulation transfer functions of charge-coupled devices based on the generation of laser speckle.

    PubMed

    Pozo, Antonio Manuel; Rubiño, Manuel

    2005-03-20

    Two methods for measuring the modulation transfer function (MTF) of a charge-coupled device (CCD) that are based on the generation of laser speckle are analyzed and compared. The method based on a single-slit aperture is quick, although its measurements are limited to frequencies below the Nyquist frequency of the device. The double-slit method permits measurement up to approximately 1.8 times the Nyquist frequency, although it is slower because the CCD must be moved. The difference between the MTF values obtained with the two methods is less than 0.1 in magnitude; the root-mean-square error between the two curves is 0.046 (4.6%). PMID:15813255

  20. Comparative analysis of techniques for measuring the modulation transfer functions of charge-coupled devices based on the generation of laser speckle

    NASA Astrophysics Data System (ADS)

    Pozo, Antonio Manuel; Rubiño, Manuel

    2005-03-01

    Two methods for measuring the modulation transfer function (MTF) of a charge-coupled device (CCD) that are based on the generation of laser speckle are analyzed and compared. The method based on a single-slit aperture is quick, although its measurements are limited to frequencies below the Nyquist frequency of the device. The double-slit method permits measurement up to approximately 1.8 times the Nyquist frequency, although it is slower because the CCD must be moved. The difference between the MTF values obtained with the two methods is less than 0.1 in magnitude; the root-mean-square error between the two curves is 0.046 (4.6%).

  1. Morphometric techniques for orientation analysis of karst in northern Florida

    SciTech Connect

    Jenkins, D.T.; Beck, B.F.

    1985-01-01

    Morphometric techniques for the analysis of karst landscape orientation data based on swallet catchment areas can be highly inadequate. The long axes of catchment areas may not coincide with structural control, especially in regions having very low relief. Better structural correlation was observed using multiple linear trend measurements of closed depressions rather than drainage basins. Trend analysis was performed on four areas, approximately 25 km² each, forming a sequence from the Suwannee River to the Cody Escarpment in northern Florida. This area is a karst plain, mantled by 12 to 25 meters of unconsolidated sands and clays. Structural control was examined by tabulating the azimuths of distinct linear trends as determined from depression shape based on 1:24,000 topographic maps. The topography was characterized by 1872 individual swallet catchment areas or 1457 closed depressions. The common geomorphic technique of analyzing orientation data in 10° increments beginning at 0° may yield incorrect peak width and placement. To correctly detect all significant orientation peaks, all possible combinations of peak width and placement must be tested. Fifty-five different plots were reviewed and tested for each area.
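The binning pitfall described above is easy to demonstrate: a cluster of azimuths centered near 45° is split across bin edges unless the histogram origin happens to align with it. Sweeping the bin origin in 1° steps and keeping the best-populated window, as sketched below on synthetic azimuths (not the Florida data), recovers the true peak placement.

```python
import numpy as np

rng = np.random.default_rng(7)
azimuths = rng.normal(45.0, 3.0, size=500) % 180   # synthetic trend axes (deg)

width = 10.0
best_count, best_start = -1, None
for start in np.arange(0.0, width, 1.0):           # every placement of a 10° bin
    edges = np.arange(start, 180.0 + width, width)
    counts, _ = np.histogram(azimuths, bins=edges)
    if counts.max() > best_count:
        best_count = int(counts.max())
        best_start = float(edges[np.argmax(counts)])  # left edge of densest bin
```

A fixed origin at 0° happens to work for a peak at 45°, but a peak centered at, say, 50° would be split evenly between the 40-50° and 50-60° bins, halving its apparent height; sweeping placements avoids that.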

  2. Visualization techniques for malware behavior analysis

    NASA Astrophysics Data System (ADS)

    Grégio, André R. A.; Santos, Rafael D. C.

    2011-06-01

    Malware spread via Internet is a great security threat, so studying their behavior is important to identify and classify them. Using SSDT hooking we can obtain malware behavior by running it in a controlled environment and capturing interactions with the target operating system regarding file, process, registry, network and mutex activities. This generates a chain of events that can be used to compare them with other known malware. In this paper we present a simple approach to convert malware behavior into activity graphs and show some visualization techniques that can be used to analyze malware behavior, individually or grouped.
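A minimal sketch of turning a captured event chain into an activity graph, as described above: nodes are (type, name) entities, with edges from the process to each touched resource and between consecutive operations to preserve temporal order. The event tuples and names are hypothetical, not output of any real SSDT-hooking tool.

```python
from collections import defaultdict

# Hypothetical captured events: (process, action, target)
events = [
    ("proc.exe", "file_write",   "C:/tmp/drop.dll"),
    ("proc.exe", "registry_set", "HKLM/Run/updater"),
    ("proc.exe", "net_connect",  "198.51.100.7:80"),
    ("proc.exe", "mutex_create", "Global/xz1"),
]

graph = defaultdict(list)          # adjacency list: node -> [successor nodes]
prev = None
for process, action, target in events:
    node = (action, target)
    graph[("process", process)].append(node)   # process touches resource
    if prev is not None:
        graph[prev].append(node)               # temporal-ordering edge
    prev = node

edge_count = sum(len(v) for v in graph.values())
```

Graphs built this way can be compared across samples (e.g., by shared subgraphs) to group behaviorally similar malware, which is the classification use the paper targets.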

  3. Nuclear reaction techniques in materials analysis

    SciTech Connect

    Amsel, G.; Lanford, W.A.

    1984-01-01

    This article discusses nuclear reaction microanalysis (NRA). In NRA, data accumulated in the frame of low-energy nuclear physics is put to advantage for analytical purposes. Unknown targets are bombarded and known reactions are observed. For NRA, the accelerator, detectors, spectrum recording and interpretation must be reliable, simple, and fast. Other MeV ion-beam analytical techniques are described which are complementary to NRA, such as Rutherford backscattering (RBS), proton-induced x-ray emission (PIXE), and the more recent method of elastic recoil detection (ERD). Applications for NRA range from solid-state physics and electrochemistry, semiconductor technology, metallurgy, materials science, and surface science to biology and archeology.

  4. 48 CFR 215.404-1 - Proposal analysis techniques.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 48 Federal Acquisition Regulations System 3 2011-10-01 2011-10-01 false Proposal analysis... Contract Pricing 215.404-1 Proposal analysis techniques. (1) Follow the procedures at PGI 215.404-1 for proposal analysis. (2) For spare parts or support equipment, perform an analysis of— (i) Those line...

  5. Accelerator based techniques for contraband detection

    NASA Astrophysics Data System (ADS)

    Vourvopoulos, George

    1994-05-01

    It has been shown that narcotics, explosives, and other contraband materials contain various chemical elements such as H, C, N, O, P, S, and Cl in quantities and ratios that differentiate them from each other and from other innocuous substances. Neutrons and γ-rays have the ability to penetrate various materials to large depths. They are thus able, in a non-intrusive way, to interrogate volumes ranging from suitcases to Sea-Land containers, and to image the object with an appreciable degree of reliability. Neutron-induced reactions such as (n, γ), (n, n'), and (n, p), or proton-induced γ-resonance absorption, are some of the reactions currently investigated for the identification of the chemical elements mentioned above. Various DC and pulsed techniques are discussed and their advantages, characteristics, and current progress are shown. Areas where use of these methods is currently under evaluation include detection of hidden explosives, illicit drug interdiction, chemical warfare agent identification, nuclear waste assay, nuclear weapons destruction and others.

  6. Injection Locking Techniques for Spectrum Analysis

    SciTech Connect

    Gathma, Timothy D.; Buckwalter, James F.

    2011-04-19

    Wideband spectrum analysis supports future communication systems that reconfigure and adapt to the capacity of the spectral environment. While test-equipment manufacturers offer wideband spectrum analyzers with excellent sensitivity and resolution, these spectrum analyzers typically cannot offer acceptable size, weight, and power (SWAP). CMOS integrated circuits offer the potential to fully integrate spectrum analysis capability with analog front-end circuitry and digital signal processing on a single chip. Unfortunately, CMOS lacks the high-Q passives and wideband resonator tunability that are necessary for heterodyne implementations of spectrum analyzers. As an alternative to heterodyne receiver architectures, two nonlinear methods for performing wideband, low-power spectrum analysis are presented. The first method involves injecting the spectrum of interest into an array of injection-locked oscillators. The second employs the closed-loop dynamics of both injection locking and phase locking to independently estimate the injected frequency and power.

  7. Uncertainty analysis technique for OMEGA Dante measurements

    NASA Astrophysics Data System (ADS)

    May, M. J.; Widmann, K.; Sorce, C.; Park, H.-S.; Schneider, M.

    2010-10-01

    The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.
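The Monte Carlo parameter-variation idea can be sketched with a toy stand-in for the unfold: here the "unfold" is just a calibration-weighted sum over 18 channels, and a single relative Gaussian error per channel stands in for the combined calibration and unfold uncertainties. All numbers are illustrative, not Dante calibration data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_trials = 18, 1000

voltages = rng.uniform(0.5, 2.0, n_channels)   # nominal channel readings
weights = rng.uniform(0.8, 1.2, n_channels)    # toy calibration weights
rel_err = 0.05                                 # one-sigma combined relative error

# One thousand perturbed voltage sets, each pushed through the (toy) unfold
perturbed = voltages * (1 + rel_err * rng.standard_normal((n_trials, n_channels)))
fluxes = perturbed @ weights                   # toy unfold: weighted sum

flux_mean = fluxes.mean()
flux_err = fluxes.std()                        # one-sigma error bar on the flux
```

Because the channel errors are independent, they partially average out in the summed flux, so the relative flux error comes out well below the 5% per-channel error; a real unfold would also capture correlations the weighted sum ignores.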

  8. Uncertainty Analysis Technique for OMEGA Dante Measurements

    SciTech Connect

    May, M J; Widmann, K; Sorce, C; Park, H; Schneider, M

    2010-05-07

    The Dante is an 18 channel X-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at X-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the X-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.

  9. Uncertainty analysis technique for OMEGA Dante measurements

    SciTech Connect

    May, M. J.; Widmann, K.; Sorce, C.; Park, H.-S.; Schneider, M.

    2010-10-15

    The Dante is an 18 channel x-ray filtered diode array which records the spectrally and temporally resolved radiation flux from various targets (e.g., hohlraums, etc.) at x-ray energies between 50 eV and 10 keV. It is a main diagnostic installed on the OMEGA laser facility at the Laboratory for Laser Energetics, University of Rochester. The absolute flux is determined from the photometric calibration of the x-ray diodes, filters and mirrors, and an unfold algorithm. Understanding the errors on this absolute measurement is critical for understanding hohlraum energetic physics. We present a new method for quantifying the uncertainties on the determined flux using a Monte Carlo parameter variation technique. This technique combines the uncertainties in both the unfold algorithm and the error from the absolute calibration of each channel into a one sigma Gaussian error function. One thousand test voltage sets are created using these error functions and processed by the unfold algorithm to produce individual spectra and fluxes. Statistical methods are applied to the resultant set of fluxes to estimate error bars on the measurements.

  10. Automated fluid analysis apparatus and techniques

    DOEpatents

    Szecsody, James E.

    2004-03-16

    An automated device that couples a pair of differently sized sample loops with a syringe pump and a source of degassed water. A fluid sample is mounted at an inlet port and delivered to the sample loops. A selected sample from the sample loops is diluted in the syringe pump with the degassed water and fed to a flow through detector for analysis. The sample inlet is also directly connected to the syringe pump to selectively perform analysis without dilution. The device is airtight and used to detect oxygen-sensitive species, such as dithionite in groundwater following a remedial injection to treat soil contamination.