Science.gov

Sample records for limits multiresolution analyses

  1. Evaluating Climate Causation of Conflict in Darfur Using Multi-temporal, Multi-resolution Satellite Image Datasets With Novel Analyses

    NASA Astrophysics Data System (ADS)

    Brown, I.; Wennbom, M.

    2013-12-01

    Climate change, population growth and changes in traditional lifestyles have led to instabilities in traditional demarcations between neighboring ethnic and religious groups in the Sahel region. This has resulted in a number of conflicts as groups resort to arms to settle disputes. Such disputes often centre on or are justified by competition for resources. The conflict in Darfur has been controversially explained by resource scarcity resulting from climate change. Here we analyse established methods of using satellite imagery to assess vegetation health in Darfur. Multi-decadal time series of observations are available using low spatial resolution visible-near infrared imagery. Typically, normalized difference vegetation index (NDVI) analyses are produced to describe changes in vegetation 'greenness' or 'health'. Such approaches have been widely used to evaluate the long term development of vegetation in relation to climate variations across a wide range of environments from the Arctic to the Sahel. These datasets typically measure peak NDVI observed over a given interval and may introduce bias. It is furthermore unclear how the spatial organization of sparse vegetation may affect low resolution NDVI products. We develop and assess alternative measures of vegetation including descriptors of the growing season, wetness and resource availability. Expanding the range of parameters used in the analysis reduces our dependence on peak NDVI. Furthermore, these descriptors provide a better characterization of the growing season than the single NDVI measure. Using multi-sensor data we combine high temporal/moderate spatial resolution data with low temporal/high spatial resolution data to improve the spatial representativity of the observations and to provide improved spatial analysis of vegetation patterns. The approach places the high resolution observations in the NDVI context space using a longer time series of lower resolution imagery. The vegetation descriptors
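
    For context, NDVI itself is a simple band ratio. The sketch below shows the standard index and the peak-NDVI compositing this abstract critiques; the array names, shapes and the use of NumPy are assumptions for illustration, not taken from the record.

    ```python
    import numpy as np

    def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
        """Normalized difference vegetation index from co-registered
        near-infrared and red reflectance bands: (NIR - Red) / (NIR + Red)."""
        nir, red = nir.astype(float), red.astype(float)
        denom = nir + red
        return np.divide(nir - red, denom, out=np.zeros_like(denom),
                         where=denom != 0)

    def peak_ndvi(series: np.ndarray) -> np.ndarray:
        """Peak-NDVI composite over a (time, rows, cols) stack: the single
        per-interval measure whose potential bias the abstract discusses."""
        return np.nanmax(series, axis=0)
    ```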

  2. Hair analyses: worthless for vitamins, limited for minerals

    SciTech Connect

Hambidge, K.M.

    1982-11-01

    Despite many major and minor problems with interpretation of analytical data, chemical analyses of human hair have some potential value. Extensive research will be necessary to define this value, including correlation of hair concentrations of specific elements with those in other tissues and metabolic pools and definition of normal physiological concentration ranges. Many factors that may compromise the correct interpretation of analytical data require detailed evaluation for each specific element. Meanwhile, hair analyses are of some value in the comparison of different populations and, for example, in public health community surveys of environmental exposure to heavy metals. On an individual basis, their established usefulness is much more restricted and the limitations are especially notable for evaluation of mineral nutritional status. There is a wide gulf between the limited and mainly tentative scientific justification for their use on an individual basis and the current exploitation of multielement chemical analyses of human hair.

  3. Hair analyses: worthless for vitamins, limited for minerals.

    PubMed

    Hambidge, K M

    1982-11-01

    Despite many major and minor problems with interpretation of analytical data, chemical analyses of human hair have some potential value. Extensive research will be necessary to define this value, including correlation of hair concentrations of specific elements with those in other tissues and metabolic pools and definition of normal physiological concentration ranges. Many factors that may compromise the correct interpretation of analytical data require detailed evaluation for each specific element. Meanwhile, hair analyses are of some value in the comparison of different populations and, for example, in public health community surveys of environmental exposure to heavy metals. On an individual basis, their established usefulness is much more restricted and the limitations are especially notable for evaluation of mineral nutritional status. There is a wide gulf between the limited and mainly tentative scientific justification for their use on an individual basis and the current exploitation of multielement chemical analyses of human hair.

  4. Multiresolution Subdivision Snakes.

    PubMed

    Badoual, Anais; Schmitter, Daniel; Uhlmann, Virginie; Unser, Michael

    2017-03-01

    We present a new family of snakes that satisfy the property of multiresolution by exploiting subdivision schemes. We show in a generic way how to construct such snakes based on an admissible subdivision mask. We derive the necessary energy formulations and provide the formulas for their efficient computation. Depending on the choice of the mask, such models have the ability to reproduce trigonometric or polynomial curves. They can also be designed to be interpolating, a property that is useful in user-interactive applications. We provide explicit examples of subdivision snakes and illustrate their use for the segmentation of bioimages. We show that they are robust in the presence of noise and provide a multiresolution algorithm to enlarge their basin of attraction, which decreases their dependence on initialization compared to single-resolution snakes. We show the advantages of the proposed model in terms of computation and segmentation of structures with different sizes.
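
    To make "subdivision mask" concrete, here is a minimal sketch of mask-driven curve refinement using the classic cubic B-spline mask [1, 4, 6, 4, 1]/8; this is a textbook scheme chosen for illustration, not the paper's specific admissible masks or snake energies.

    ```python
    import numpy as np

    def subdivide_closed(P: np.ndarray) -> np.ndarray:
        """One round of cubic B-spline subdivision on a closed control
        polygon P of shape (n, d); each step doubles the vertex count."""
        prev, nxt = np.roll(P, 1, axis=0), np.roll(P, -1, axis=0)
        even = (prev + 6.0 * P + nxt) / 8.0     # repositioned old vertices
        odd = (P + nxt) / 2.0                   # new edge points
        Q = np.empty((2 * len(P), P.shape[1]))
        Q[0::2], Q[1::2] = even, odd
        return Q

    # A coarse snake converges to a smooth closed curve under refinement:
    curve = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
    for _ in range(4):
        curve = subdivide_closed(curve)
    ```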

  5. Research potential and limitations of trace analyses of cremated remains.

    PubMed

    Harbeck, Michaela; Schleuder, Ramona; Schneider, Julius; Wiechmann, Ingrid; Schmahl, Wolfgang W; Grupe, Gisela

    2011-01-30

    Human cremation is a common funeral practice all over the world and will presumably become an even more popular choice for interment in the future. Mainly for purposes of identification, there is presently a growing need to perform trace analyses such as DNA or stable isotope analyses on human remains after cremation in order to clarify pending questions in civil or criminal court cases. The aim of this study was to experimentally test the potential and limitations of DNA and stable isotope analyses when conducted on cremated remains. For this purpose, tibiae from modern cattle were experimentally cremated by incinerating the bones in increments of 100°C until a maximum of 1000°C was reached. In addition, cremated human remains were collected from a modern crematory. The samples were investigated to determine the level of DNA preservation and stable isotope values (C and N in collagen, C and O in the structural carbonate, and Sr in apatite). Furthermore, we assessed the integrity of microstructural organization, appearance under UV light, collagen content, as well as the mineral and crystalline organization. This was conducted in order to provide a general background with which to explain observed changes in the trace analysis data sets. The goal is to develop an efficacious screening method for determining at which degree of burning bone still retains its original biological signals. We found that stable isotope analysis of the tested light elements in bone is only possible up to a heat exposure of 300°C, while the isotopic signal from strontium remains unaltered even in bones exposed to very high temperatures. DNA analyses seem theoretically possible up to a heat exposure of 600°C but cannot be advised in every case because of the increased risk of contamination. While the macroscopic colour and UV fluorescence of cremated bone give hints about the temperature exposure of the bone's outer surface, its histological appearance can be used as a reliable indicator for the

  6. Linking properties to microstructure through multiresolution mechanics

    NASA Astrophysics Data System (ADS)

    McVeigh, Cahal James

    The macroscale mechanical and physical properties of materials are inherently linked to the underlying microstructure. Traditional continuum mechanics theories have focused on approximating the heterogeneous microstructure as a continuum, which is conducive to a partial differential equation mathematical description. Although this makes large scale simulation of material much more efficient than modeling the detailed microstructure, the relationship between microstructure and macroscale properties becomes unclear. In order to perform computational materials design, material models must clearly relate the key underlying microstructural parameters (cause) to macroscale properties (effect). In this thesis, microstructure evolution and instability events are related to macroscale mechanical properties through a new multiresolution continuum analysis approach. The multiresolution nature of this theory allows prediction of the evolving magnitude and scale of deformation as a direct function of the changing microstructure. This is achieved via a two-pronged approach: (a) Constitutive models which track evolving microstructure are developed and calibrated to direct numerical simulations (DNS) of the microstructure. (b) The conventional homogenized continuum equations of motion are extended via a virtual power approach to include extra coupled microscale stresses and stress couples which are active at each characteristic length scale within the microstructure. The multiresolution approach is applied to model the fracture toughness of a cemented carbide, failure of a steel alloy under quasi-static loading conditions and the initiation and velocity of adiabatic shear bands under high speed dynamic loading. In each case the multiresolution analysis predicts the important scale effects which control the macroscale material response. The strain fields predicted in the multiresolution continuum analyses compare well to those observed in direct numerical simulations of the

  7. Wavelet-Based Multiresolution Analyses of Signals

    DTIC Science & Technology

    1992-06-01

    classification. Some signals, notably those of a transient nature, are inherently difficult to analyze with these traditional tools. The Discrete Wavelet Transform has...scales. This thesis investigates dyadic discrete wavelet decompositions of signals. A new multiphase wavelet transform is proposed and investigated. The
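
    The dyadic decompositions this record refers to can be illustrated in a few lines with the PyWavelets library; the wavelet choice, depth and test signal below are assumptions for the sketch, not taken from the thesis.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    t = np.linspace(0, 1, 1024, endpoint=False)
    x = np.sin(2 * np.pi * 12 * t)
    x[512] += 2.0                 # a transient, hard to see with Fourier tools

    # Dyadic decomposition: one coarse approximation plus details per scale.
    coeffs = pywt.wavedec(x, "db4", mode="periodization", level=5)
    # coeffs = [cA5, cD5, cD4, cD3, cD2, cD1]; the spike appears as a few
    # large coefficients in the fine-scale details cD1/cD2.
    xr = pywt.waverec(coeffs, "db4", mode="periodization")
    assert np.allclose(xr, x)     # the transform is invertible
    ```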

  8. The Limited Informativeness of Meta-Analyses of Media Effects.

    PubMed

    Valkenburg, Patti M

    2015-09-01

    In this issue of Perspectives on Psychological Science, Christopher Ferguson reports on a meta-analysis examining the relationship between children's video game use and several outcome variables, including aggression and attention deficit symptoms (Ferguson, 2015, this issue). In this commentary, I compare Ferguson's nonsignificant effect sizes with earlier meta-analyses on the same topics that yielded larger, significant effect sizes. I argue that Ferguson's choice of partial effect sizes is unjustified on both methodological and theoretical grounds. I then plead for a more constructive debate on the effects of violent video games on children and adolescents. Until now, this debate has been dominated by two camps with diametrically opposed views on the effects of violent media on children. However, even the earliest media effects studies tell us that children can react quite differently to the same media content. Thus, if researchers truly want to understand how media affect children, rather than fight for the presence or absence of effects, they need to adopt a perspective that takes differential susceptibility to media effects more seriously.

  9. [The advantages and limitations of brain function analyses by PET].

    PubMed

    Kato, M; Taniwaki, T; Kuwabara, Y

    2000-12-01

    PET has proven to be a powerful tool for exploring brain function. We discuss the advantages and limitations of PET for analyzing brain function on the basis of our clinical and experimental experience with functional imaging. A multimodality PET study measuring cerebral energy metabolism (CMRO2 and CMRglc), cerebral blood flow (CBF), oxygen extraction fraction (OEF) and neurotransmitter function (presynaptic and postsynaptic) opens up closer insight into the precise pathophysiology of brain dysfunction: in cerebral infarction, it reveals a state of "misery perfusion" in the acute stage, "luxury perfusion" in the intermediate stage, and proportionately decreased CBF and CMRO2 in the chronic stage. Neurotransmitter function may specifically identify a dysfunctional neuronal subgroup. Owing to the low temporal resolution of PET, a neuronal activity may propagate transsynaptically to remote areas during the period of scanning, resulting in an obscured primary site of the neuronal activity. According to our experimental studies, uncoupling between neuronal activities and cerebral energy metabolism/CBF may occur under certain states of brain pathology, particularly after an acute destructive lesion. Neurotransmitter function may reveal the effects of drugs on brain function, and may be useful for developing new methods of drug therapy for brain diseases in the future.

  10. Wavelet-based Multiresolution Particle Methods

    NASA Astrophysics Data System (ADS)

    Bergdorf, Michael; Koumoutsakos, Petros

    2006-03-01

    Particle methods offer a robust numerical tool for solving transport problems across disciplines, such as fluid dynamics, quantitative biology or computer graphics. Their strength lies in their stability, as they do not discretize the convection operator, and in appealing numerical properties, such as small dissipation and dispersion errors. Many problems of interest are inherently multiscale, and their efficient solution requires either multiscale modeling approaches or spatially adaptive numerical schemes. We present a hybrid particle method that employs a multiresolution analysis to identify and adapt to small scales in the solution. The method combines the versatility and efficiency of grid-based wavelet collocation methods while retaining the numerical properties and stability of particle methods. The accuracy and efficiency of this method are then assessed for transport and interface-capturing problems in two and three dimensions, illustrating the capabilities and limitations of our approach.

  11. Safety assessment of historical masonry churches based on pre-assigned kinematic limit analysis, FE limit and pushover analyses

    SciTech Connect

Milani, Gabriele; Valente, Marco

    2014-10-06

    This study presents some results of a comprehensive numerical analysis of three masonry churches damaged by the recent Emilia-Romagna (Italy) seismic events that occurred in May 2012. The numerical study comprises: (a) pushover analyses conducted with a commercial code, standard nonlinear material models and two different horizontal load distributions; (b) FE kinematic limit analyses performed using non-commercial software based on a preliminary homogenization of the masonry materials and a subsequent limit analysis with triangular elements and interfaces; (c) kinematic limit analyses conducted in agreement with the Italian code and based on the a-priori assumption of preassigned failure mechanisms, where the masonry material is considered unable to withstand tensile stresses. All models are capable of giving information on the active failure mechanism and the base shear at failure, which, if properly made non-dimensional with the weight of the structure, also gives an indication of the horizontal peak ground acceleration causing the collapse of the church. The results obtained from all three models indicate that the collapse is usually due to the activation of partial mechanisms (apse, façade, lateral walls, etc.). Moreover, the horizontal peak ground acceleration associated with the collapse is far lower than that required in that seismic zone by the Italian code for ordinary buildings. These outcomes highlight that structural upgrading interventions would be extremely beneficial in considerably reducing the seismic vulnerability of such historical structures.

  12. Riesz wavelets and multiresolution structures

    NASA Astrophysics Data System (ADS)

    Larson, David R.; Tang, Wai-Shing; Weber, Eric

    2001-12-01

    Multiresolution structures are important in applications, but they are also useful for analyzing properties of associated wavelets. Given a nonorthogonal (multi-) wavelet in a Hilbert space, we construct a core subspace. Subsequently, the dilates of the core subspace define a ladder of nested subspaces. Of fundamental importance are two questions: 1) when is the core subspace shift invariant; and if yes, then 2) when is the core subspace generated by shifts of a single vector, i.e., when does there exist a scaling vector. If the wavelet generates a Riesz basis then the answer to question 1) is yes if and only if the wavelet is a biorthogonal wavelet. Additionally, if the wavelet generates a tight frame of arbitrary frame constant, then the core subspace is shift invariant. Question 1) is still open in the case where the wavelet generates a non-tight frame. We also present some known results on question 2) and provide some preliminary improvements. Our analysis here arises from investigating the dimension function and the multiplicity function of a wavelet. These two functions agree if the wavelet is orthogonal. Finally, we discuss how these questions are important for considering linear perturbation of wavelets. Utilizing the idea of the local commutant of a unitary system developed by Dai and Larson, we show that nearly all linear perturbations of two orthonormal wavelets form a Riesz wavelet. If in fact these wavelets correspond to a von Neumann algebra in the local commutant of a base wavelet, then the interpolated wavelet is biorthogonal. Moreover, we demonstrate that in this case the interpolated wavelets have a scaling vector if the base wavelet has a scaling vector.
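
    For orientation, the ladder structure referred to above has the standard textbook form, written here for a dilation operator D and shift operator T on the Hilbert space H (a generic formulation, not quoted from the paper):

    ```latex
    \cdots \subset V_{-1} \subset V_{0} \subset V_{1} \subset \cdots ,
    \qquad V_{j} = D^{j} V_{0} ,
    \qquad \bigcap_{j \in \mathbb{Z}} V_{j} = \{0\} ,
    \qquad \overline{\bigcup_{j \in \mathbb{Z}} V_{j}} = H .
    ```

    In this notation, question 1) asks when T V_0 = V_0 (shift invariance of the core subspace), and question 2) asks when V_0 is the closed linear span of {T^k φ : k in Z} for some scaling vector φ.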

  13. Carbon dioxide analysers: accuracy, alarm limits and effects of interfering gases.

    PubMed

    Lauber, R; Seeberger, B; Zbinden, A M

    1995-07-01

    Six mainstream and twelve sidestream infrared carbon dioxide (CO2) analysers were tested for accuracy of the displayed CO2 value, alarm activation and the effects of nitrous oxide (N2O), oxygen (O2) and water vapour according to the ISO Draft International Standard (DIS) #9918. Mainstream analysers (M-type): Novametrix Capnogard 1265; Hewlett Packard HP M1166A (CO2 module HP M1016A); Datascope Passport; Marquette Tramscope 12; Nellcor Ultra Cap N-6000; Hellige Vicom-sm SMU 611/612 ETC. Sidestream analysers: Brüel & Kjaer Type 1304; Datex Capnomac II; Marquette MGA-AS; Datascope Multinex; Ohmeda 4700 OxiCap (all type S1: respiratory cycles not demanded); Biochem BCI 9000; Bruker BCI 9100; Dräger Capnodig and PM 8020; Criticare Poet II; Hellige Vicom-sm SMU 611/612 A-GAS (all type S2: respiratory cycles demanded). The investigations were performed with premixed test gases (2.5, 5, 10 vol%, error ≤ 1% rel.). Humidification of the gases (37 °C) was generated by a Dräger Aquapor. Respiratory cycles were simulated by manually activated valves. All monitors complied with the tolerated accuracy bias in CO2 reading (≤ 12% or 4 mmHg of the actual test gas value) for wet and dry test gases at all concentrations, except that the Marquette MGA-AS exceeded this accuracy limit with wet gases at 5 and 10 vol% CO2. Water condensed in the metal airway adapter of the HP M1166A at 37 °C gas temperature but not at 30 °C. The Servomex 2500 (nonclinical reference monitor), Passport (M-type), Multinex (S1-type) and Poet II (S2-type) showed the least bias for dry and wet gases. Nitrous oxide and O2 had practically no effect on the Capnodig, and the errors in the others were at most 3.4 mmHg, still within the tolerated bias of the DIS (same as above). The difference between the display reading at alarm activation and the set point was in all monitors (except the Capnodig: bias 1.75 mmHg at 5 vol% CO2) below the tolerated limit of the DIS (difference ≤ 0.2 vol

  14. New low detection limits for EDXRF analyses on the basis of polycapillary optics and chemical sensors

    NASA Astrophysics Data System (ADS)

    Khamizov, R. K.; Kumakhov, M. A.; Nikitina, S. V.; Mikhin, V. A.

    2005-07-01

    The possibilities of increasing the sensitivity of energy dispersive X-ray fluorescence analysis (EDXRF) of solutions with the use of special preconcentrating sensors are described in this article. The sensors are made from polycapillary tubes or plates consisting of hundreds of thousands of micro-channels, each containing a micro-grain of collecting sorbent. The kinetic regularities of preconcentration of micro-components from solutions are considered. Experimental results are given for EDXRF analyses of different solutions containing metals and other elements in trace amounts, and detection limits of tens to hundreds of ppb are demonstrated. A pilot sample of a new analytical instrument <> is briefly described.

  15. Incorporation of concentration data below the limit of quantification in population pharmacokinetic analyses

    PubMed Central

    Keizer, Ron J; Jansen, Robert S; Rosing, Hilde; Thijssen, Bas; Beijnen, Jos H; Schellens, Jan H M; Huitema, Alwin D R

    2015-01-01

    Handling of data below the lower limit of quantification (LLOQ), i.e., below the limit of quantification (BLOQ), in population pharmacokinetic (PopPK) analyses is important for reducing bias and imprecision in parameter estimation. We aimed to evaluate whether using the concentration data below the LLOQ has superior performance over several established methods. The performance of this approach (“All data”) was evaluated and compared to other methods: “Discard,” “LLOQ/2,” and “LIKE” (likelihood-based). An analytical and residual error model was constructed on the basis of in-house analytical method validations and analyses from the literature, with additional variability included to account for model misspecification. Simulation analyses were performed for various levels of BLOQ, several structural PopPK models, and additional influences. Performance was evaluated by relative root mean squared error (RMSE) and run success for the various BLOQ approaches. Performance was also evaluated for a real PopPK data set. For all PopPK models and levels of censoring, RMSE values were lowest using “All data.” Performance of the “LIKE” method was better than the “LLOQ/2” or “Discard” method. Differences between all methods were small at the lowest level of BLOQ censoring. The “LIKE” method resulted in low successful minimization (<50%) and covariance step success (<30%), although estimates were obtained in most runs (∼90%). For the real PK data set (7.4% BLOQ), similar parameter estimates were obtained using all methods. Incorporation of BLOQ concentrations showed superior performance in terms of bias and precision over established BLOQ methods, and was shown to be feasible in a real PopPK analysis. PMID:26038706
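
    As a toy illustration of why keeping the measured BLOQ values can beat LLOQ/2 imputation, the following simulation compares the two on synthetic data; the lognormal concentration model, error magnitude and LLOQ value are assumptions for the sketch, not the paper's PopPK models or NONMEM-style estimation.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    true = rng.lognormal(mean=1.0, sigma=0.8, size=10_000)  # "true" concentrations
    obs = true * rng.lognormal(0.0, 0.15, size=true.size)   # proportional assay error
    lloq = 1.0                                              # assumed quantification limit
    bloq = obs < lloq                                       # censored observations

    def rmse(est, ref):
        return float(np.sqrt(np.mean((est - ref) ** 2)))

    # "LLOQ/2": impute half the quantification limit for censored points.
    imputed = np.full(bloq.sum(), lloq / 2.0)

    print(f"censored fraction        : {bloq.mean():.1%}")
    print(f"RMSE of LLOQ/2 imputation: {rmse(imputed, true[bloq]):.3f}")
    print(f"RMSE of measured BLOQ    : {rmse(obs[bloq], true[bloq]):.3f}  # lower")
    ```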

  16. Optical design and system engineering of a multiresolution foveated laparoscope

    PubMed Central

    Qin, Yi; Hua, Hong

    2016-01-01

    The trade-off between the spatial resolution and field of view is one major limitation of state-of-the-art laparoscopes. In order to address this limitation, we demonstrated a multiresolution foveated laparoscope (MRFL) which is capable of simultaneously capturing both a wide-angle overview for situational awareness and a high-resolution zoomed-in view for accurate surgical operation. In this paper, we focus on presenting the optical design and system engineering process for developing the MRFL prototype. More specifically, the first-order specifications and properties of the optical system are discussed, followed by a detailed discussion on the optical design strategy and procedures of each subsystem. The optical performance of the final system, including diffraction efficiency, tolerance analysis, stray light and ghost image, is fully analyzed. Finally, the prototype assembly process and the final prototype are demonstrated. PMID:27139875

  17. Comparison of multiresolution techniques for digital signal processing

    NASA Astrophysics Data System (ADS)

    Hamlett, Neil A.

    1993-03-01

    A comprehensive study of multiresolution techniques is conducted. Background material in functional analysis and Quadrature Mirror Filter (QMF) banks is presented. The development of Mallat's algorithm for multiresolution decomposition and reconstruction is outlined and demonstrated to be equivalent to QMF banks. The Laplacian pyramid and the a trous algorithm are described and demonstrated. General multiresolution structures are constructed from cascades of QMF and pseudo-QMF banks and are demonstrated for applications in signal decomposition and reconstruction and for signal detection and identification.
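
    To make the QMF/Mallat equivalence concrete, here is the simplest two-channel bank (Haar) showing analysis (filter plus downsample) and synthesis (upsample plus filter) with perfect reconstruction; a freestanding NumPy sketch, not code from the thesis.

    ```python
    import numpy as np

    def haar_analysis(x):
        """One level of Mallat's algorithm with the Haar QMF pair:
        lowpass/highpass filtering followed by downsampling by 2."""
        even, odd = x[0::2], x[1::2]
        lo = (even + odd) / np.sqrt(2.0)     # approximation channel
        hi = (even - odd) / np.sqrt(2.0)     # detail channel
        return lo, hi

    def haar_synthesis(lo, hi):
        """Upsampling plus the synthesis filters; exactly inverts the analysis."""
        x = np.empty(2 * lo.size)
        x[0::2] = (lo + hi) / np.sqrt(2.0)
        x[1::2] = (lo - hi) / np.sqrt(2.0)
        return x

    x = np.random.default_rng(1).normal(size=64)
    lo, hi = haar_analysis(x)
    assert np.allclose(haar_synthesis(lo, hi), x)   # perfect reconstruction
    ```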

  18. Multiresolution approach based on projection matrices

    SciTech Connect

    Vargas, Javier; Quiroga, Juan Antonio

    2009-03-01

    Active triangulation measurement systems with a rigid geometric configuration are inappropriate for scanning large objects with low measuring tolerances. The reason is that the ratio between the depth recovery error and the lateral extension is a constant that depends on the geometric setup. As a consequence, measuring large areas with low depth recovery error requires the use of multiresolution techniques. We propose a multiresolution technique based on a previously calibrated camera-projector system. The method consists of changing the camera's or projector's parameters in order to increase the system's depth sensitivity. A subpixel retroprojection error in the self-calibration process and a decrease of approximately one order of magnitude in the depth recovery error can be achieved using the proposed method.

  19. Multiresolution modulation for efficient broadcast of information

    NASA Astrophysics Data System (ADS)

    Grundstrom, Mika; Renfors, Markku

    1994-09-01

    Methods for reliable transmission of information, especially digital television, are considered. In the broadcast channel, several different receiver configurations and channel conditions make optimization of the channel coding practically impossible. To efficiently utilize the available spectrum and to allow robust reception in adverse channel conditions, joint source-channel coding is applied. This is achieved by utilizing multiresolution modulation combined with unequal error protection in the channel coding part and data prioritization in the source coder. The design parameters for the joint system are considered. The emphasis of this paper is on the modulation part. Multiresolution 32-QAM is presented. Simulations show good performance in the additive white Gaussian noise channel and, moreover, results in the multipath fading channel are encouraging as far as the high-priority part of the data is concerned.
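
    The idea behind a multiresolution constellation can be sketched with a hierarchical mapper: high-priority (HP) bits select a quadrant (a coarse QPSK layer any receiver can decode) and low-priority (LP) bits select the point within it. The sketch uses a 16-QAM layout for brevity (the paper presents a 32-QAM scheme), and the names and distance parameters are assumptions.

    ```python
    import numpy as np

    def hierarchical_16qam(hp_bits, lp_bits, d_hp=2.0, d_lp=0.5):
        """Map 2 HP bits to a quadrant and 2 LP bits to a point inside it.
        Increasing d_hp relative to d_lp protects the HP stream in poor
        channels: the constellation degrades gracefully to QPSK."""
        hp = 1 - 2 * np.asarray(hp_bits)     # bit 0 -> +1, bit 1 -> -1
        lp = 1 - 2 * np.asarray(lp_bits)
        return (d_hp * hp[0] + d_lp * lp[0]) + 1j * (d_hp * hp[1] + d_lp * lp[1])

    sym = hierarchical_16qam([0, 1], [1, 0])
    hp_decoded = (int(sym.real < 0), int(sym.imag < 0))  # quadrant-only decision
    assert hp_decoded == (0, 1)    # HP bits survive even a coarse receiver
    ```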

  20. EEG Multiresolution Analysis Using Wavelet Transform

    DTIC Science & Technology

    2007-11-02

    Wavelet transform (WT) is a new multiresolution time-frequency analysis method. WT possesses good localization in both time and frequency...plays a key role in diagnosing diseases and is useful for both physiological research and medical applications. Using the dyadic wavelet transform, the EEG signals are successfully decomposed into the alpha rhythm (8-13 Hz), beta rhythm (14-30 Hz), theta rhythm (4-7 Hz) and delta rhythm (0.3-3 Hz) and
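
    The band split described here follows directly from the dyadic structure of the decomposition. A minimal PyWavelets sketch is below; the sampling rate, wavelet and depth are assumptions, and the dyadic bands only approximate the clinical rhythm boundaries quoted above.

    ```python
    import numpy as np
    import pywt

    fs = 128                                   # assumed sampling rate (Hz)
    t = np.arange(0, 4, 1 / fs)
    eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 2 * t)  # toy EEG

    coeffs = pywt.wavedec(eeg, "db4", mode="periodization", level=5)
    # Index map: 0=cA5 (~0-2 Hz), 1=cD5 (~2-4 Hz), 2=cD4 (~4-8 Hz, theta),
    #            3=cD3 (~8-16 Hz, alpha), 4=cD2 (~16-32 Hz, beta), 5=cD1

    def band(coeffs, keep):
        """Reconstruct the signal from a chosen subset of decomposition levels."""
        kept = [c if i in keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
        return pywt.waverec(kept, "db4", mode="periodization")

    alpha = band(coeffs, {3})       # ~8-16 Hz component of the toy signal
    delta = band(coeffs, {0, 1})    # ~0-4 Hz component
    ```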

  1. Multiresolution saliency map based object segmentation

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Wang, Xin; Dai, ZhenYou

    2015-11-01

    Salient object detection and segmentation have gained increasing research interest in recent years. A saliency map can be obtained from different models presented in previous studies. Based on this saliency map, the most salient region (MSR) in an image can be extracted. This MSR, generally a rectangle, can be used as the initial parameters for object segmentation algorithms. However, to our knowledge, all of those saliency maps are represented at a single resolution, although some models have even introduced multiscale principles in the calculation process. Furthermore, some segmentation methods, such as the well-known GrabCut algorithm, need more iterations or additional interaction to get precise results without predefined pixel types. We introduce the concept of a multiresolution saliency map. This saliency map is provided in a multiresolution format, which naturally follows the principle of the human visual mechanism. Moreover, the points in this map can be utilized to initialize parameters for GrabCut segmentation by labeling the feature pixels automatically. Both the computing speed and segmentation precision are evaluated. The results imply that this multiresolution saliency map-based object segmentation method is simple and efficient.

  2. 137Cs measurement uncertainties and detection limits for airborne gamma spectrometry (AGS) data analysed using a spectral windows method.

    PubMed

    Cresswell, A J; Sanderson, D C W; White, D C

    2006-02-01

    The uncertainties associated with airborne gamma spectrometry (AGS) measurements analysed using a spectral windows method, and the associated detection limits, have been investigated. For individual short measurements over buried 137Cs activity, detection limits of 10 kBq m⁻² are achieved. These detection limits are reduced for superficial activity and longer integration times. For superficial activity, detection limits below 1 kBq m⁻² are achievable. A comparison is made with the detection limits for other data processing methods.
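
    For background, window-method detection limits are commonly quoted via Currie's formulation, which depends only on the expected background counts B in the spectral window; this is a generic sketch of that convention, not necessarily the exact estimator used in the paper.

    ```python
    import math

    def currie_limits(background_counts: float) -> tuple[float, float]:
        """Currie's critical level L_C and detection limit L_D (in counts,
        95% confidence) over an expected background B:
            L_C = 2.33 * sqrt(B),    L_D = 2.71 + 4.65 * sqrt(B)."""
        b = math.sqrt(background_counts)
        return 2.33 * b, 2.71 + 4.65 * b

    # Background grows linearly with integration time while L_D grows only as
    # sqrt(B), so longer integrations lower the detection limit expressed in
    # activity terms, consistent with the abstract.
    lc, ld = currie_limits(400.0)   # hypothetical window background of 400 counts
    ```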

  3. Using Controlled Landslide Initiation Experiments to Test Limit-Equilibrium Analyses of Slope Stability

    NASA Astrophysics Data System (ADS)

    Reid, M. E.; Iverson, R. M.; Brien, D. L.; Iverson, N. R.; Lahusen, R. G.; Logan, M.

    2004-12-01

    Most studies of landslide initiation employ limit equilibrium analyses of slope stability. Owing to a lack of detailed data, however, few studies have tested limit-equilibrium predictions against physical measurements of slope failure. We have conducted a series of field-scale, highly controlled landslide initiation experiments at the USGS debris-flow flume in Oregon; these experiments provide exceptional data to test limit equilibrium methods. In each of seven experiments, we attempted to induce failure in a 0.65 m thick, 2 m wide, 6 m³ prism of loamy sand placed behind a retaining wall in the 31° sloping flume. We systematically investigated triggering of sliding by groundwater injection, by prolonged moderate-intensity sprinkling, and by bursts of high intensity sprinkling. We also used vibratory compaction to control soil porosity and thereby investigate differences in failure behavior of dense and loose soils. About 50 sensors were monitored at 20 Hz during the experiments, including nests of tiltmeters buried at 7 cm spacing to define subsurface failure geometry, and nests of tensiometers and pore-pressure sensors to define evolving pore-pressure fields. In addition, we performed ancillary laboratory tests to measure soil porosity, shear strength, hydraulic conductivity, and compressibility. In loose soils (porosity of 0.52 to 0.55), abrupt failure typically occurred along the flume bed after substantial soil deformation. In denser soils (porosity of 0.41 to 0.44), gradual failure occurred within the soil prism. All failure surfaces had a maximum length to depth ratio of about 7. In even denser soil (porosity of 0.39), we could not induce failure by sprinkling. The internal friction angle of the soils varied from 28° to 40° with decreasing porosity. We analyzed stability at failure, given the observed pore-pressure conditions just prior to large movement, using a 1-D infinite-slope method and a more complete 2-D Janbu method. Each method provides a static
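
    The 1-D infinite-slope method mentioned at the end reduces to a closed-form factor of safety. A sketch with Mohr-Coulomb strength follows; the default parameter values merely echo the experiment's geometry (31° flume, 0.65 m soil) and are assumptions, not the paper's analysis.

    ```python
    import numpy as np

    def infinite_slope_fs(c=0.0, phi_deg=35.0, gamma=18e3, z=0.65,
                          beta_deg=31.0, u=0.0):
        """Infinite-slope factor of safety with Mohr-Coulomb strength:
        FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] /
             [gamma*z*sin(beta)*cos(beta)]
        with cohesion c and pore pressure u in Pa, unit weight gamma in
        N/m^3, slide depth z in m, and slope angle beta in degrees."""
        beta, phi = np.radians(beta_deg), np.radians(phi_deg)
        driving = gamma * z * np.sin(beta) * np.cos(beta)   # shear stress
        normal = gamma * z * np.cos(beta) ** 2              # total normal stress
        return (c + (normal - u) * np.tan(phi)) / driving

    # Rising pore pressure u drives FS toward 1 (incipient failure):
    print(infinite_slope_fs(u=0.0))    # ~1.16, marginally stable
    print(infinite_slope_fs(u=4e3))    # ~0.62, unstable
    ```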

  4. Exploring a Multi-resolution Approach Using AMIP Simulations

    SciTech Connect

    Sakaguchi, Koichi; Leung, Lai-Yung R.; Zhao, Chun; Yang, Qing; Lu, Jian; Hagos, Samson M.; Rauscher, Sara; Dong, Li; Ringler, Todd; Lauritzen, P. H.

    2015-07-31

    This study presents a diagnosis of a multi-resolution approach using the Model for Prediction Across Scales - Atmosphere (MPAS-A) for simulating regional climate. Four AMIP experiments are conducted for 1999-2009. In the first two experiments, MPAS-A is configured using global quasi-uniform grids at 120 km and 30 km grid spacing. In the other two experiments, MPAS-A is configured using variable-resolution (VR) mesh with local refinement at 30 km over North America and South America embedded inside a quasi-uniform domain at 120 km elsewhere. Precipitation and related fields in the four simulations are examined to determine how well the VR simulations reproduce the features simulated by the globally high-resolution model in the refined domain. In previous analyses of idealized aqua-planet simulations, the characteristics of the global high-resolution simulation in moist processes only developed near the boundary of the refined region. In contrast, the AMIP simulations with VR grids are able to reproduce the high-resolution characteristics across the refined domain, particularly in South America. This indicates the importance of finely resolved lower-boundary forcing such as topography and surface heterogeneity for the regional climate, and demonstrates the ability of the MPAS-A VR to replicate the large-scale moisture transport as simulated in the quasi-uniform high-resolution model. Outside of the refined domain, some upscale effects are detected through large-scale circulation but the overall climatic signals are not significant at regional scales. Our results provide support for the multi-resolution approach as a computationally efficient and physically consistent method for modeling regional climate.

  5. Multiresolutional models of uncertainty generation and reduction

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

    Kolmogorov's axiomatic principles of probability theory are reconsidered with regard to their applicability to the processes of knowledge acquisition and interpretation. The model of uncertainty generation is modified in order to reflect the reality of engineering problems, particularly in the area of intelligent control. This model implies algorithms of learning which are organized in three groups reflecting the degree of conceptualization of the knowledge the system is dealing with. It is essential that these algorithms are motivated by and consistent with the multiresolutional model of knowledge representation which is reflected in the structure of models and the algorithms of learning.

  6. Color Quantization by Multiresolution Analysis

    NASA Astrophysics Data System (ADS)

    Ramella, Giuliana; di Baja, Gabriella Sanniti

    A color quantization method is presented, which is based on the analysis of the histogram at different resolutions computed on a Gaussian pyramid of the input image. Criteria based on persistence and dominance of peaks and pits of the histograms are introduced to detect the modes in the histogram of the input image and to define the reduced colormap. Important features of the method, besides its limited computational cost, are the possibility of obtaining quantized images with a variable number of colors, depending on the user's need, and the fact that the number of colors in the resulting image does not need to be fixed a priori.
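
    A crude 1-D analogue of the persistence criterion: keep only histogram modes that survive across progressively smoothed (Gaussian-pyramid-like) versions of the histogram. The function names, smoothing scales and voting threshold are assumptions for the sketch, not the paper's criteria.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def persistent_peaks(hist, sigmas=(1, 2, 4, 8), min_levels=3):
        """Return histogram bins that are local maxima at min_levels or more
        smoothing scales; these persistent modes define the reduced colormap."""
        hist = np.asarray(hist, dtype=float)
        votes = np.zeros(hist.size, dtype=int)
        for s in sigmas:
            h = gaussian_filter1d(hist, s)
            peak = np.zeros(hist.size, dtype=bool)
            peak[1:-1] = (h[1:-1] > h[:-2]) & (h[1:-1] > h[2:])
            votes += peak
        return np.flatnonzero(votes >= min_levels)

    # Each surviving mode becomes one colormap entry; pixels are then mapped
    # to the nearest mode, so the number of colors falls out of the data.
    ```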

  7. Active pixel sensor array with multiresolution readout

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)

    1999-01-01

    An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.

  8. Hanging-wall deformation above a normal fault: sequential limit analyses

    NASA Astrophysics Data System (ADS)

    Yuan, Xiaoping; Leroy, Yves M.; Maillot, Bertrand

    2015-04-01

    The deformation in the hanging wall above a segmented normal fault is analysed with the sequential limit analysis (SLA). The method combines predictions of the dip and position of the active fault and axial surface with geometrical evolution à la Suppe (Groshong, 1989). Two problems are considered. The first follows the prototype proposed by Patton (2005) with a pre-defined convex, segmented fault. The orientation of the upper segment of the normal fault is an unknown in the second problem. The loading in both problems consists of the retreat of the back wall and sedimentation. This sedimentation starts from the lowest point of the topography and acts at a rate r_s relative to the wall retreat rate. For the first problem, the normal fault either has zero friction or a friction value set to 25° or 30° to fit the experimental results (Patton, 2005). In the zero-friction case, a hanging wall anticline develops much like in the experiments. In the 25° friction case, slip on the upper segment is accompanied by rotation of the axial plane, producing a broad shear zone rooted at the fault bend. The same observation is made in the 30° case, but without slip on the upper segment. Experimental outcomes show a behaviour in between these two latter cases. For the second problem, mechanics predicts a concave fault bend with an upper segment dip decreasing during extension. The axial surface rooted at the normal fault bend sees its dip increasing during extension, resulting in a curved roll-over. Softening on the normal fault leads to a stepwise rotation responsible for strain partitioning into small blocks in the hanging wall. The rotation is due to the subsidence of the topography above the hanging wall. Sedimentation in the lowest region thus reduces the rotations. Note that these rotations predicted by mechanics are not accounted for in most geometrical approaches (Xiao and Suppe, 1992) and are observed in sand box experiments (Egholm et al., 2007, referring

  9. A Multiresolution Graphical Representation for Similarity Relationship and Multiresolution Clustering for Biological Sequences.

    PubMed

    Yang, Lianping; Zhang, Weilin

    2017-04-01

    How to describe the similarity relationship between biological sequences is a basic but important problem in bioinformatics. The first graphical representation method for the similarity relationship, rather than for a single sequence, is proposed in this article, which makes the similarity intuitive. Some properties such as sensitivity and continuity of the similarity are proved theoretically, which indicate that the similarity describer has the advantages of both alignment and alignment-free methods. With the aid of multiresolution analysis tools, we can exhibit the similarity's different profiles, from high resolution to low resolution. The idea of multiresolution clustering is then raised for the first time. A reassortment analysis on a benchmark flu virus genome data set is used to test our method, and it shows better performance than an alignment method, especially in dealing with problems involving segments' order.

  10. Multiresolution moment filters: theory and applications.

    PubMed

    Sühling, Michael; Arigovindan, Muthuvel; Hunziker, Patrick; Unser, Michael

    2004-04-01

    We introduce local weighted geometric moments that are computed from an image within a sliding window at multiple scales. When the window function satisfies a two-scale relation, we prove that lower order moments can be computed efficiently at dyadic scales by using a multiresolution wavelet-like algorithm. We show that B-splines are well-suited window functions because, in addition to being refinable, they are positive, symmetric, separable, and very nearly isotropic (Gaussian shape). We present three applications of these multiscale local moments. The first is a feature-extraction method for detecting and characterizing elongated structures in images. The second is a noise-reduction method which can be viewed as a multiscale extension of Savitzky-Golay filtering. The third is a multiscale optical-flow algorithm that uses a local affine model for the motion field, extending the Lucas-Kanade optical-flow method. The results obtained in all cases are promising.
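
    A minimal sketch of the single-scale building block: sliding-window moments via separable correlation against a symmetric window, here a binomial window standing in for the B-spline windows the paper advocates. The multiscale two-scale recursion is omitted and all names are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import correlate1d

    def local_moments(img, window):
        """Sliding-window moments m00, m10, m01 of an image via separable
        1-D correlations (correlation avoids kernel flipping, which matters
        for the antisymmetric x-weighted window)."""
        img = np.asarray(img, dtype=float)
        x = np.arange(window.size) - window.size // 2
        w, wx = window, window * x
        m00 = correlate1d(correlate1d(img, w, axis=0), w, axis=1)   # local mass
        m10 = correlate1d(correlate1d(img, w, axis=0), wx, axis=1)  # x-weighted
        m01 = correlate1d(correlate1d(img, wx, axis=0), w, axis=1)  # y-weighted
        return m00, m10, m01

    binomial = np.array([1, 4, 6, 4, 1]) / 16.0  # ~ sampled cubic B-spline
    m00, m10, m01 = local_moments(np.random.default_rng(0).random((64, 64)),
                                  binomial)
    # m10/m00 and m01/m00 give local centroid offsets, the building blocks of
    # the feature-extraction and optical-flow applications described above.
    ```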

  11. Multiresolution Distance Volumes for Progressive Surface Compression

    SciTech Connect

    Laney, D E; Bertram, M; Duchaineau, M A; Max, N L

    2002-04-18

    We present a surface compression method that stores surfaces as wavelet-compressed signed-distance volumes. Our approach enables the representation of surfaces with complex topology and arbitrary numbers of components within a single multiresolution data structure. This data structure elegantly handles topological modification at high compression rates. Our method does not require the costly and sometimes infeasible base mesh construction step required by subdivision surface approaches. We present several improvements over previous attempts at compressing signed-distance functions, including an O(n) distance transform, a zero-set initialization method for triangle meshes, and a specialized thresholding algorithm. We demonstrate the potential of sampled distance volumes for surface compression and progressive reconstruction for complex high genus surfaces.
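
    The pipeline can be sketched in a few lines: build a signed-distance volume, wavelet-transform it, and zero small coefficients. The wavelet, level and a plain global threshold below are assumptions; the paper's O(n) transform and zero-set-aware thresholding are not reproduced.

    ```python
    import numpy as np
    import pywt
    from scipy.ndimage import distance_transform_edt

    def signed_distance(solid: np.ndarray) -> np.ndarray:
        """Signed-distance volume from a boolean occupancy grid: positive
        outside the solid, negative inside; the surface is the zero set."""
        return distance_transform_edt(~solid) - distance_transform_edt(solid)

    def compress(vol, wavelet="db2", level=3, keep=0.05):
        """Zero all but the largest `keep` fraction of wavelet coefficients,
        then reconstruct (lossy, but the zero set is largely preserved)."""
        coeffs = pywt.wavedecn(vol, wavelet, mode="periodization", level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr[np.abs(arr) < np.quantile(np.abs(arr), 1 - keep)] = 0.0
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
        return pywt.waverecn(coeffs, wavelet, mode="periodization")

    solid = np.zeros((32, 32, 32), dtype=bool)
    solid[8:24, 8:24, 8:24] = True              # a cube as the input surface
    approx = compress(signed_distance(solid))   # compressed distance volume
    ```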

  12. Multi-resolution analysis for ENO schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1991-01-01

    Given a function u(x), which is represented by its cell-averages in cells which are formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. This multi-resolution analysis was applied to essentially non-oscillatory (ENO) schemes in order to reduce the number of numerical flux computations needed to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality at the appropriate coarser grid. An efficient algorithm for implementing this program in the 1-D case is presented; this algorithm can be extended to the multi-dimensional case with Cartesian grids.
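
    A minimal sketch of the cell-average decomposition on dyadic 1-D grids: coarsening is exact averaging, and the stored details are prediction errors. The piecewise-constant prediction below is deliberately crude (ENO-style schemes use higher-order interpolation), and the names and test function are assumptions.

    ```python
    import numpy as np

    def decompose(u, levels):
        """Split cell averages u into a coarsest grid plus per-level details.
        Details are small where u is smooth and large near sharp features,
        which is what lets work be shifted to coarser grids locally."""
        details = []
        for _ in range(levels):
            coarse = 0.5 * (u[0::2] + u[1::2])   # exact cell-average coarsening
            details.append(u - np.repeat(coarse, 2))
            u = coarse
        return u, details

    def reconstruct(u, details):
        for d in reversed(details):
            u = np.repeat(u, 2) + d
        return u

    x = np.linspace(0.0, 1.0, 64, endpoint=False)
    u = np.where(x < 0.5, x, 1.0 - x)            # a kink at x = 0.5
    coarse, details = decompose(u, levels=3)
    assert np.allclose(reconstruct(coarse, details), u)
    ```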

  13. A multiresolution image based approach for correction of partial volume effects in emission tomography.

    PubMed

    Boussion, N; Hatt, M; Lamare, F; Bizais, Y; Turzo, A; Cheze-Le Rest, C; Visvikis, D

    2006-04-07

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as improving tumour-to-background ratio, resulting in an associated improvement in the analysis of response to therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recuperation of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated in a low-resolution image L (PET or SPECT). A discrete wavelet transform of both H and L images is performed by using the "à trous" algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the lacking details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using a FDG brain PET and corresponding T1-weighted MRI
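
    A minimal 2-D sketch of the "à trous" (undecimated) transform the method relies on, using the common B3-spline kernel; the parameter choices and names are assumptions, and the step that models the missing fine-scale details of L from those of H is not shown.

    ```python
    import numpy as np
    from scipy.ndimage import convolve1d

    B3 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3-spline smoothing kernel

    def a_trous(image, levels=3):
        """Undecimated wavelet planes: at level j the kernel is dilated by
        inserting 2**j - 1 zeros ('holes'); each plane is the difference of
        successive smoothings, so image == sum(planes) + residual."""
        planes, smooth = [], np.asarray(image, dtype=float)
        for j in range(levels):
            k = np.zeros(4 * 2 ** j + 1)
            k[:: 2 ** j] = B3                        # dilate with holes
            s = convolve1d(convolve1d(smooth, k, axis=0), k, axis=1)
            planes.append(smooth - s)                # detail plane at scale j
            smooth = s
        return planes, smooth

    img = np.random.default_rng(0).normal(size=(64, 64))
    planes, residual = a_trous(img)
    assert np.allclose(sum(planes) + residual, img)  # exact reconstruction
    ```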

  14. A multiresolution image based approach for correction of partial volume effects in emission tomography

    NASA Astrophysics Data System (ADS)

    Boussion, N.; Hatt, M.; Lamare, F.; Bizais, Y.; Turzo, A.; Cheze-LeRest, C.; Visvikis, D.

    2006-04-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as improving tumour-to-background ratio, resulting in an associated improvement in the analysis of response to therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recuperation of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated in a low-resolution image L (PET or SPECT). A discrete wavelet transform of both H and L images is performed by using the 'à trous' algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the lacking details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using a FDG brain PET and corresponding T1-weighted MRI

  15. Microstructures, Forming Limit and Failure Analyses of Inconel 718 Sheets for Fabrication of Aerospace Components

    NASA Astrophysics Data System (ADS)

Sajun Prasad, K.; Panda, Sushanta Kumar; Kar, Sujoy Kumar; Sen, Mainak; Murty, S. V. S. Narayana; Sharma, Sharad Chandra

    2017-02-01

    Recently, aerospace industries have shown increasing interest in the forming limits of Inconel 718 sheet metals, which can be utilised in designing tools and selecting process parameters for the successful fabrication of components. In the present work, the stress-strain response with failure strains was evaluated by uniaxial tensile tests in different orientations, and two-stage work-hardening behavior was observed. In spite of a highly preferred texture, tensile properties showed minor variations in different orientations due to the random distribution of nanoprecipitates. The forming limit strains were evaluated by deforming specimens in seven different strain paths using a limiting dome height (LDH) test facility. Mostly, the specimens failed without prior indication of localized necking. Thus, the fracture forming limit diagram (FFLD) was evaluated, and a bending correction was imposed due to the use of a sub-size hemispherical punch. The failure strains of the FFLD were converted into major-minor stress space (σ-FFLD) and effective plastic strain-stress triaxiality space (ηEPS-FFLD) as failure criteria to avoid strain path dependence. Moreover, an FE model was developed, and the LDH, strain distribution and failure location were predicted successfully using the above-mentioned failure criteria with two stages of work hardening. Fractographs were correlated with the fracture behavior and formability of the sheet metal.

  16. Individual-based analyses reveal limited functional overlap in a coral reef fish community.

    PubMed

    Brandl, Simon J; Bellwood, David R

    2014-05-01

    Detailed knowledge of a species' functional niche is crucial for the study of ecological communities and processes. The extent of niche overlap, functional redundancy and functional complementarity is of particular importance if we are to understand ecosystem processes and their vulnerability to disturbances. Coral reefs are among the most threatened marine systems, and anthropogenic activity is changing the functional composition of reefs. The loss of herbivorous fishes is particularly concerning as the removal of algae is crucial for the growth and survival of corals. Yet, the foraging patterns of the various herbivorous fish species are poorly understood. Using a multidimensional framework, we present novel individual-based analyses of species' realized functional niches, which we apply to a herbivorous coral reef fish community. In calculating niche volumes for 21 species, based on their microhabitat utilization patterns during foraging, and computing functional overlaps, we provide a measurement of functional redundancy or complementarity. Complementarity is the inverse of redundancy and is defined as less than 50% overlap in niche volumes. The analyses reveal extensive complementarity with an average functional overlap of just 15.2%. Furthermore, the analyses divide herbivorous reef fishes into two broad groups. The first group (predominantly surgeonfishes and parrotfishes) comprises species feeding on exposed surfaces and predominantly open reef matrix or sandy substrata, resulting in small niche volumes and extensive complementarity. In contrast, the second group consists of species (predominantly rabbitfishes) that feed over a wider range of microhabitats, penetrating the reef matrix to exploit concealed surfaces of various substratum types. These species show high variation among individuals, leading to large niche volumes, more overlap and less complementarity. These results may have crucial consequences for our understanding of herbivorous processes on

  17. Molecular phylogenetic and zoospore ultrastructural analyses of Chytridium olla establish the limits of a monophyletic Chytridiales.

    PubMed

    Vélez, Carlos G; Letcher, Peter M; Schultz, Sabina; Powell, Martha J; Churchill, Perry F

    2011-01-01

    Chytridium olla A. Braun, the first described chytrid and an obligate algal parasite, is the type for the genus and thus the foundation of family Chytridiaceae, order Chytridiales, class Chytridiomycetes and phylum Chytridiomycota. Chytridium olla was isolated in coculture with its host, Oedogonium capilliforme. DNA was extracted from the coculture, and 18S, 28S and ITS1-5.8S-ITS2 rDNA were amplified with universal fungal primers. Free swimming zoospores and zoospores in mature sporangia were examined with electron microscopy. Molecular analyses placed C. olla in a clade in Chytridiales with isolates of Chytridium lagenaria and Phlyctochytrium planicorne. Ultrastructural analysis revealed C. olla to have a Group II-type zoospore, previously described for Chytridium lagenaria and Phlyctochytrium planicorne. On the basis of zoospore ultrastructure, family Chytridiaceae is emended to include the type of Chytridium and other species with a Group II-type zoospore, and the new family Chytriomycetaceae is delineated to include members of Chytridiales with a Group I-type zoospore.

  18. Multi-resolution analysis for ENO schemes

    NASA Technical Reports Server (NTRS)

    Harten, Ami

    1993-01-01

    Given a function u(x) which is represented by its cell-averages in cells which are formed by some unstructured grid, we show how to decompose the function into various scales of variation. This is done by considering a set of nested grids in which the given grid is the finest, and identifying in each locality the coarsest grid in the set from which u(x) can be recovered to a prescribed accuracy. We apply this multi-resolution analysis to essentially non-oscillatory (ENO) schemes in order to reduce the number of numerical flux computations needed to advance the solution by one time-step. This is accomplished by decomposing the numerical solution at the beginning of each time-step into levels of resolution, and performing the computation in each locality at the appropriate coarser grid. We present an efficient algorithm for implementing this program in the one-dimensional case; this algorithm can be extended to the multi-dimensional case with Cartesian grids.

  19. Interactive visualization of multiresolution image stacks in 3D.

    PubMed

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
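
    The texel-projection idea can be sketched as a small level-of-detail rule: fetch the pyramid tier whose texels project to roughly one screen pixel. The pinhole-camera model and every name below are assumptions for illustration; StackVis's actual criterion is not reproduced here.

    ```python
    import math

    def choose_tier(texel_size: float, distance: float,
                    focal_px: float, n_tiers: int) -> int:
        """Return the pyramid tier (0 = full resolution; each higher tier is
        2x coarser) whose projected texel is closest to one screen pixel.

        texel_size: world-space size of a finest-tier texel
        distance:   distance from the viewpoint to the tile
        focal_px:   focal length expressed in pixels (pinhole model)
        """
        projected = focal_px * texel_size / max(distance, 1e-9)  # px per texel
        tier = round(math.log2(max(projected, 1.0)))  # halve detail per doubling
        return min(max(tier, 0), n_tiers - 1)

    # Far tiles fetch coarse tiers and near tiles fetch fine ones, so all
    # visible tiles appear at roughly the same on-screen size, as described.
    ```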

  20. Segmentation of textured images using a multiresolution Gaussian autoregressive model.

    PubMed

    Comer, M L; Delp, E J

    1999-01-01

    We present a new algorithm for segmentation of textured images using a multiresolution Bayesian approach. The new algorithm uses a multiresolution Gaussian autoregressive (MGAR) model for the pyramid representation of the observed image, and assumes a multiscale Markov random field model for the class label pyramid. The models used in this paper incorporate correlations between different levels of both the observed image pyramid and the class label pyramid. The criterion used for segmentation is the minimization of the expected value of the number of misclassified nodes in the multiresolution lattice. The estimate which satisfies this criterion is referred to as the "multiresolution maximization of the posterior marginals" (MMPM) estimate, and is a natural extension of the single-resolution "maximization of the posterior marginals" (MPM) estimate. Previous multiresolution segmentation techniques have been based on the maximum a posteriori (MAP) estimation criterion, which has been shown to be less appropriate for segmentation than the MPM criterion. It is assumed that the number of distinct textures in the observed image is known. The parameters of the MGAR model (the means, prediction coefficients, and prediction error variances of the different textures) are unknown. A modified version of the expectation-maximization (EM) algorithm is used to estimate these parameters. The parameters of the Gibbs distribution for the label pyramid are assumed to be known. Experimental results demonstrating the performance of the algorithm are presented.

  1. Examination of metabolic responses to phosphorus limitation via proteomic analyses in the marine diatom Phaeodactylum tricornutum.

    PubMed

    Feng, Tian-Ya; Yang, Zhi-Kai; Zheng, Jian-Wei; Xie, Ying; Li, Da-Wei; Murugan, Shanmugaraj Bala; Yang, Wei-Dong; Liu, Jie-Sheng; Li, Hong-Ye

    2015-05-28

    Phosphorus (P) is an essential macronutrient for the survival of marine phytoplankton. In the present study, the phytoplankton response to phosphorus limitation was studied by proteomic profiling of the diatom Phaeodactylum tricornutum at both the cellular and molecular levels. A total of 42 non-redundant proteins were identified, among which 8 proteins were upregulated and 34 proteins were downregulated. The results also showed that proteins associated with inorganic phosphate uptake were downregulated, whereas proteins involved in organic phosphorus uptake, such as alkaline phosphatase, were upregulated. Proteins involved in metabolic responses such as protein degradation, lipid accumulation and photorespiration were upregulated, whereas energy metabolism, photosynthesis, amino acid and nucleic acid metabolism tended to be downregulated. Overall, our results showed the changes in protein levels of P. tricornutum during phosphorus stress. This study is a prelude to understanding the role of phosphorus in marine biogeochemical cycles and the phytoplankton response to phosphorus scarcity in the ocean. It also provides insight into the succession of phytoplankton communities, providing a scientific basis for elucidating the mechanism of algal blooms.

  2. Examination of metabolic responses to phosphorus limitation via proteomic analyses in the marine diatom Phaeodactylum tricornutum

    PubMed Central

    Feng, Tian-Ya; Yang, Zhi-Kai; Zheng, Jian-Wei; Xie, Ying; Li, Da-Wei; Murugan, Shanmugaraj Bala; Yang, Wei-Dong; Liu, Jie-Sheng; Li, Hong-Ye

    2015-01-01

    Phosphorus (P) is an essential macronutrient for the survival of marine phytoplankton. In the present study, the phytoplankton response to phosphorus limitation was studied by proteomic profiling of the diatom Phaeodactylum tricornutum at both the cellular and molecular levels. A total of 42 non-redundant proteins were identified, among which 8 proteins were upregulated and 34 proteins were downregulated. The results also showed that proteins associated with inorganic phosphate uptake were downregulated, whereas proteins involved in organic phosphorus uptake, such as alkaline phosphatase, were upregulated. Proteins involved in metabolic responses such as protein degradation, lipid accumulation and photorespiration were upregulated, whereas energy metabolism, photosynthesis, amino acid and nucleic acid metabolism tended to be downregulated. Overall, our results showed the changes in protein levels of P. tricornutum during phosphorus stress. This study is a prelude to understanding the role of phosphorus in marine biogeochemical cycles and the phytoplankton response to phosphorus scarcity in the ocean. It also provides insight into the succession of phytoplankton communities, providing a scientific basis for elucidating the mechanism of algal blooms. PMID:26020491

  3. Telescopic multi-resolution augmented reality

    NASA Astrophysics Data System (ADS)

    Jenkins, Jeffrey; Frenchi, Christopher; Szu, Harold

    2014-05-01

    In a self-consistent scaling approximation, the underlying microscopic fluctuation components naturally influence macroscopic means, which may give rise to emergent observable phenomena. In this paper, we describe a consistent macroscopic (cm-scale), mesoscopic (micron-scale), and microscopic (nano-scale) approach to introduce Telescopic Multi-Resolution (TMR) into current Augmented Reality (AR) visualization technology. We propose to couple TMR and AR by introducing an energy-matter interaction engine framework based on known principles of physics, biology, and chemistry. An immediate payoff of TMR-AR is a self-consistent approximation of the interaction between microscopic observables and their direct effect on the macroscopic system, driven by real-world measurements. Such an interdisciplinary approach enables not only multi-scale, telescopic visualization of real and virtual information but also the conduct of thought experiments through AR. As a result of this consistency, the framework allows us to explore a large-dimensional parameter space of measured and unmeasured regions. Toward this goal, we explore how to build learnable libraries of biological, physical, and chemical mechanisms. Fusing analytical sensors with TMR-AR libraries provides a robust framework for optimizing testing and evaluation through data-driven or virtual synthetic simulations. Visualizing mechanisms of interactions requires identification of observable image features that can indicate the presence of information in multiple spatial and temporal scales of analog data. The AR methodology was originally developed to enhance pilot training, as well as 'make believe' entertainment, in a user-friendly digital environment. We believe TMR-AR can someday help us conduct thought experiments scientifically, visualized pedagogically in zoom-in-and-out, consistent, multi-scale approximations.

  4. Multiresolution molecular mechanics: Implementation and efficiency

    NASA Astrophysics Data System (ADS)

    Biyikli, Emre; To, Albert C.

    2017-01-01

    Atomistic/continuum coupling methods combine accurate atomistic methods and efficient continuum methods to simulate the behavior of highly ordered crystalline systems. Coupled methods utilize the advantages of both approaches to simulate systems at a lower computational cost, while retaining the accuracy associated with atomistic methods. Many concurrent atomistic/continuum coupling methods have been proposed in the past; however, their true computational efficiency has not been demonstrated. The present work presents an efficient implementation of a concurrent coupling method called Multiresolution Molecular Mechanics (MMM) for serial, parallel, and adaptive analysis. First, we present the features of the implemented software along with the associated technologies. The scalability of the software implementation is demonstrated, and the competing effects of multiscale modeling and parallelization are discussed. Then, the algorithms contributing to the efficiency of the software are presented. These include algorithms for eliminating latent ghost atoms from calculations and measurement-based dynamic balancing of the parallel workload. The efficiency improvements made by these algorithms are demonstrated by benchmark tests. The efficiency of the software is found to be on par with LAMMPS, a state-of-the-art Molecular Dynamics (MD) simulation code, when performing full atomistic simulations. The speed-up of the MMM method is shown to be directly proportional to the reduction in the number of atoms visited in force computation. Finally, an adaptive MMM analysis of a nanoindentation problem containing over a million atoms is performed, yielding an efficiency improvement of 6.3-8.5 times over the full atomistic MD method. For the first time, the efficiency of a concurrent atomistic/continuum coupling method is comprehensively investigated and demonstrated.

  5. MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation

    SciTech Connect

    Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; Calvin, Justus A.; Fann, George I.; Fosso-Tande, Jacob; Galindo, Diego; Hammond, Jeff R.; Hartman-Baker, Rebecca; Hill, Judith C.; Jia, Jun; Kottmann, Jakob S.; Yvonne Ou, M-J.; Pei, Junchen; Ratcliff, Laura E.; Reuter, Matthew G.; Richie-Halford, Adam C.; Romero, Nichols A.; Sekino, Hideo; Shelton, William A.; Sundahl, Bryan E.; Thornton, W. Scott; Valeev, Edward F.; Vázquez-Mayagoitia, Álvaro; Vence, Nicholas; Yanai, Takeshi; Yokoi, Yukina

    2016-01-01

    MADNESS (multiresolution adaptive numerical environment for scientific simulation) is a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.

  6. MADNESS: A Multiresolution, Adaptive Numerical Environment for Scientific Simulation

    SciTech Connect

    Harrison, Robert J.; Beylkin, Gregory; Bischoff, Florian A.; Calvin, Justus A.; Fann, George I.; Fosso-Tande, Jacob; Galindo, Diego; Hammond, Jeff R.; Hartman-Baker, Rebecca; Hill, Judith C.; Jia, Jun; Kottmann, Jakob S.; Yvonne Ou, M-J.; Pei, Junchen; Ratcliff, Laura E.; Reuter, Matthew G.; Richie-Halford, Adam C.; Romero, Nichols A.; Sekino, Hideo; Shelton, William A.; Sundahl, Bryan E.; Thornton, W. Scott; Valeev, Edward F.; Vázquez-Mayagoitia, Álvaro; Vence, Nicholas; Yanai, Takeshi; Yokoi, Yukina

    2016-01-01

    We present MADNESS (multiresolution adaptive numerical environment for scientific simulation), a high-level software environment for solving integral and differential equations in many dimensions that uses adaptive and fast harmonic analysis methods with guaranteed precision, based on multiresolution analysis and separated representations. Underpinning the numerical capabilities is a powerful petascale parallel programming environment that aims to increase both programmer productivity and code scalability. This paper describes the features and capabilities of MADNESS and briefly discusses some current applications in chemistry and several areas of physics.

  7. Multiresolution Analysis of UTAT B-spline Curves

    NASA Astrophysics Data System (ADS)

    Lamnii, A.; Mraoui, H.; Sbibih, D.; Zidna, A.

    2011-09-01

    In this paper, we describe a multiresolution curve representation based on periodic uniform tension algebraic trigonometric (UTAT) spline wavelets of class ??? and order four. Then we determine the decomposition and the reconstruction vectors corresponding to UTAT-spline spaces. Finally, we give some applications in order to illustrate the efficiency of the proposed approach.

  8. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    SciTech Connect

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
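
    The frequency-table idea above is easy to make concrete. The Python sketch below tabulates the unique (original, approximation) byte-value pairs once, after which the total error for any new transfer function is a weighted sum over at most 256 x 256 entries rather than over every voxel; the names and the toy "coarser level" are illustrative, not the authors' code.

      import numpy as np

      def build_pair_table(original, approx):
          """Tabulate unique (original, approximated) byte-value pairs once.

          For 8-bit data there are at most 256*256 distinct pairs, so the
          per-voxel error sum collapses to a small weighted sum.
          """
          keys = original.astype(np.uint16) * 256 + approx.astype(np.uint16)
          freq = np.bincount(keys.ravel(), minlength=256 * 256)
          return freq.reshape(256, 256)

      def total_error(freq, transfer):
          """Re-evaluate the error for a new transfer function in O(256^2)."""
          tf = transfer(np.arange(256))            # map data values to colour/opacity
          err = np.abs(tf[:, None] - tf[None, :])  # error for every unique pair
          return float((freq * err).sum())

      rng = np.random.default_rng(0)
      orig = rng.integers(0, 256, size=(64, 64, 64), dtype=np.uint8)
      approx = (orig // 8) * 8                     # crude "coarser level" stand-in
      freq = build_pair_table(orig, approx)
      print(total_error(freq, lambda v: v / 255.0))  # fast for any new transfer fn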

  9. Parallel octree-based multiresolution mesh method for large-scale earthquake ground motion simulation

    NASA Astrophysics Data System (ADS)

    Kim, Eui Joong

    Large scale ground motion simulation requires supercomputing systems in order to obtain reliable and useful results within reasonable elapsed time. In this study, we develop a framework for terascale ground motion simulations in highly heterogeneous basins. As part of the development, we present a parallel octree-based multiresolution finite element methodology for the elastodynamic wave propagation problem. The octree-based multiresolution finite element method reduces memory use significantly and improves overall computational performance. The framework comprises three parts: (1) an octree-based mesh generator, Euclid, developed by Tu and O'Hallaron; (2) a parallel mesh partitioner, ParMETIS, developed by Karypis et al.; and (3) a parallel octree-based multiresolution finite element solver, QUAKE, developed in this study. Realistic earthquake parameters, soil material properties, and sedimentary basin dimensions will produce extremely large meshes. The out-of-core octree-based mesh generator, Euclid, overcomes the resulting severe memory limitations. By using a parallel, distributed-memory graph partitioning algorithm, ParMETIS partitions large meshes, overcoming the memory and cost problem. Despite the capability of the Octree-Based Multiresolution Mesh Method (OBM3), large problem sizes necessitate parallelism to handle large memory and work requirements. The parallel OBM3 elastic wave propagation code, QUAKE, has been developed to address these issues. The numerical methodology and the framework have been used to simulate the seismic response of both idealized systems and of the Greater Los Angeles basin to simple pulses and to a mainshock of the 1994 Northridge Earthquake, for frequencies of up to 1 Hz and a domain size of 80 km x 80 km x 30 km. In the idealized models, QUAKE shows good agreement with the analytical Green's function solutions. In the realistic models for the Northridge earthquake mainshock, QUAKE qualitatively agrees, with at most

  10. Determining the 95% limit of detection for waterborne pathogen analyses from primary concentration to qPCR

    USGS Publications Warehouse

    Stokdyk, Joel P.; Firnstahl, Aaron; Spencer, Susan K.; Burch, Tucker R; Borchardt, Mark A.

    2016-01-01

    The limit of detection (LOD) for qPCR-based analyses is not consistently defined or determined in studies on waterborne pathogens. Moreover, the LODs reported often reflect the qPCR assay alone rather than the entire sample process. Our objective was to develop an approach to determine the 95% LOD (lowest concentration at which 95% of positive samples are detected) for the entire process of waterborne pathogen detection. We began by spiking the lowest concentration that was consistently positive at the qPCR step (based on its standard curve) into each procedural step working backwards (i.e., extraction, secondary concentration, primary concentration), which established a concentration that was detectable following losses of the pathogen from processing. Using the fraction of positive replicates (n = 10) at this concentration, we selected and analyzed a second, and then third, concentration. If the fraction of positive replicates equaled 1 or 0 for two concentrations, we selected another. We calculated the LOD using probit analysis. To demonstrate our approach we determined the 95% LOD for Salmonella enterica serovar Typhimurium, adenovirus 41, and vaccine-derived poliovirus Sabin 3, which were 11, 12, and 6 genomic copies (gc) per reaction (rxn), respectively (equivalent to 1.3, 1.5, and 4.0 gc L−1 assuming the 1500 L tap-water sample volume prescribed in EPA Method 1615). This approach limited the number of analyses required and was amenable to testing multiple genetic targets simultaneously (i.e., spiking a single sample with multiple microorganisms). An LOD determined this way can facilitate study design, guide the number of required technical replicates, aid method evaluation, and inform data interpretation.
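
    The probit step can be sketched as follows: fit the detection probability against log10 concentration by maximum likelihood and invert the fitted curve at 0.95. The Python fragment below uses hypothetical spiking results (not data from the study) and a hand-rolled probit fit via scipy.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import norm

      # Hypothetical spiking results: concentration (gc/rxn), positives out of 10.
      conc = np.array([2.0, 5.0, 10.0, 20.0])
      pos = np.array([3, 6, 9, 10])
      n = np.full_like(pos, 10)

      def neg_loglik(theta):
          """Bernoulli log-likelihood under a probit dose-response curve."""
          b0, b1 = theta
          p = norm.cdf(b0 + b1 * np.log10(conc))
          p = np.clip(p, 1e-12, 1 - 1e-12)
          return -(pos * np.log(p) + (n - pos) * np.log(1 - p)).sum()

      fit = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
      b0, b1 = fit.x
      lod95 = 10 ** ((norm.ppf(0.95) - b0) / b1)   # concentration with 95% detection
      print(f"95% LOD ~ {lod95:.1f} gc/rxn")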

  11. The paddle move commonly used in magic tricks as a means for analysing the perceptual limits of combined motion trajectories.

    PubMed

    Hergovich, Andreas; Gröbl, Kristian; Carbon, Claus-Christian

    2011-01-01

    Following Gustav Kuhn's inspiring technique of using magicians' acts as a source of insight into the cognitive sciences, we used the 'paddle move' for testing the psychophysics of combined movement trajectories. The paddle move is a standard technique in magic consisting of a combined rotating and tilting movement. Careful control of the mutual speed parameters of the two movements makes it possible to inhibit the perception of the rotation, letting the 'magic' effect emerge: a sudden change of the tilted object. By using 3-D animated computer graphics, we analysed the interaction of different angular speeds and the object shape/size parameters in evoking this motion disappearance effect. An angular speed of 540 degrees s⁻¹ (1.5 rev s⁻¹) sufficed to inhibit the perception of the rotary movement, with the smallest object showing the strongest effect. 90.7% of the 172 participants were not able to perceive the rotary movement at an angular speed of 1125 degrees s⁻¹ (3.125 rev s⁻¹). Further analysis by multiple linear regression revealed object height and object area as major influences on the effectiveness of the magic trick, demonstrating the applicability of analysing key factors of magic tricks to reveal limits of the perceptual system.

  12. Applying multi-resolution numerical methods to geodynamics

    NASA Astrophysics Data System (ADS)

    Davies, David Rhodri

    structured grid solution strategies, the unstructured techniques utilized in 2-D would throw away the regular grid and, with it, the major benefits of the current solution algorithms. Alternative avenues towards multi-resolution must therefore be sought. A non-uniform structured method that produces similar advantages to unstructured grids is introduced here, in the context of the pre-existing 3-D spherical mantle dynamics code, TERRA. The method, based upon the multigrid refinement techniques employed in the field of computational engineering, is used to refine and solve on a radially non-uniform grid. It maintains the key benefits of TERRA's current configuration, whilst also overcoming many of its limitations. Highly efficient solutions to non-uniform problems are obtained. The scheme is highly resourceful in terms of RAM, meaning that one can attempt calculations that would otherwise be impractical. In addition, the solution algorithm reduces the CPU time needed to solve a given problem. Validation tests illustrate that the approach is accurate and robust. Furthermore, by being conceptually simple and straightforward to implement, the method negates the need to reformulate large sections of code. The technique is applied to highly advanced 3-D spherical mantle convection models. Due to its resourcefulness in terms of RAM, the modified code allows one to efficiently resolve thermal boundary layers at the dynamical regime of Earth's mantle. The simulations presented are therefore of superior vigour to the highest attained to date in 3-D spherical geometry, achieving Rayleigh numbers of order 10⁹. Upwelling structures are examined, focussing upon the nature of deep mantle plumes. Previous studies have shown long-lived, anchored, coherent upwelling plumes to be a feature of low to moderate vigour convection. Since more vigorous convection traditionally shows greater time-dependence, the fixity of upwellings would not logically be expected for non-layered convection at higher

  13. Multiscale/multiresolution landslides susceptibility mapping

    NASA Astrophysics Data System (ADS)

    Grozavu, Adrian; Cătălin Stanga, Iulian; Valeriu Patriche, Cristian; Toader Juravle, Doru

    2014-05-01

    Within the European strategies, landslides are considered an important threat that requires detailed studies to identify areas where these processes could occur in the future and to design scientific and technical plans for landslide risk mitigation. With this in mind, assessing and mapping landslide susceptibility is an important preliminary step. Generally, landslide susceptibility at small scale (for large regions) can be assessed through a qualitative approach (expert judgement) based on a few variables, while studies at medium and large scales require a quantitative approach (e.g. multivariate statistics), a larger set of variables and, necessarily, a landslide inventory. Obviously, the results vary more or less from one scale to another, depending on the available input data, but also on the applied methodology. Since it is almost impossible to have a complete landslide inventory over large regions (e.g. at the continental level), it is very important to verify the compatibility and validity of results obtained at different scales, identifying the differences and fixing the inherent errors. This paper aims at assessing and mapping landslide susceptibility at the regional level through a multiscale, multiresolution approach, from small scale and low resolution to large scale and high resolution of data and results, comparing the compatibility of the results. While the former could be used for studies at the European and national levels, the latter allow result validation, including through field surveys. The test area, namely the Barlad Plateau (more than 9000 sq. km), is located in Eastern Romania, covering a region where both the natural environment and the human factor create a causal context that favors these processes. The landslide predictors were initially derived from various databases available at the pan-European level and progressively completed and/or enhanced together with the scale and the resolution: the topography (from SRTM at 90 meters to digital

  14. Developing a multiscale, multi-resolution agent-based brain tumor model by graphics processing units

    PubMed Central

    2011-01-01

    Multiscale agent-based modeling (MABM) has been widely used to simulate Glioblastoma Multiforme (GBM) and its progression. At the intracellular level, the MABM approach employs a system of ordinary differential equations to describe quantitatively specific intracellular molecular pathways that determine phenotypic switches among cells (e.g. from migration to proliferation and vice versa). At the intercellular level, MABM describes cell-cell interactions by a discrete module. At the tissue level, partial differential equations are employed to model the diffusion of chemoattractants, which are the input factors of the intracellular molecular pathway. Moreover, multiscale analysis makes it possible to explore the molecules that play important roles in determining the cellular phenotypic switches that in turn drive the whole GBM expansion. However, owing to limited computational resources, MABM is currently a theoretical biological model that uses relatively coarse grids to simulate a few cancer cells in a small slice of brain cancer tissue. In order to improve this theoretical model to simulate and predict actual GBM cancer progression in real time, a graphics processing unit (GPU)-based parallel computing algorithm was developed and combined with the multi-resolution design to speed up the MABM. The simulated results demonstrated that the GPU-based, multi-resolution and multiscale approach can accelerate the previous MABM around 30-fold with relatively fine grids in a large extracellular matrix. Therefore, the new model has great potential for simulating and predicting real-time GBM progression, if real experimental data are incorporated. PMID:22176732

  15. A multi-resolution envelope-power based model for speech intelligibility.

    PubMed

    Jørgensen, Søren; Ewert, Stephan D; Dau, Torsten

    2013-07-01

    The speech-based envelope power spectrum model (sEPSM) presented by Jørgensen and Dau [(2011). J. Acoust. Soc. Am. 130, 1475-1487] estimates the envelope power signal-to-noise ratio (SNRenv) after modulation-frequency selective processing. Changes in this metric were shown to account well for changes of speech intelligibility for normal-hearing listeners in conditions with additive stationary noise, reverberation, and nonlinear processing with spectral subtraction. In the latter condition, the standardized speech transmission index [(2003). IEC 60268-16] fails. However, the sEPSM is limited to conditions with stationary interferers, due to the long-term integration of the envelope power, and cannot account for increased intelligibility typically obtained with fluctuating maskers. Here, a multi-resolution version of the sEPSM is presented where the SNRenv is estimated in temporal segments with a modulation-filter dependent duration. The multi-resolution sEPSM is demonstrated to account for intelligibility obtained in conditions with stationary and fluctuating interferers, and noisy speech distorted by reverberation or spectral subtraction. The results support the hypothesis that the SNRenv is a powerful objective metric for speech intelligibility prediction.
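
    The segment-based metric can be caricatured in a few lines: within each modulation band, the envelope power SNR is computed over windows whose duration tracks the modulation period, then combined across segments and bands. The Python sketch below is an idealized stand-in under many simplifying assumptions (rectangular segments, precomputed envelopes, no modulation filterbank) and is not the published model.

      import numpy as np

      def snr_env_multires(env_sn, env_n, fs_env, mod_cfs=(1, 2, 4, 8, 16, 32)):
          """Segment-based SNRenv with modulation-dependent window lengths."""
          band_snrs = []
          for cf in mod_cfs:
              seg = max(1, int(round(fs_env / cf)))    # segment = one modulation period
              vals = []
              for start in range(0, len(env_sn) - seg + 1, seg):
                  s = env_sn[start:start + seg]
                  m = env_n[start:start + seg]
                  # Normalised AC envelope power (variance over squared mean).
                  p_sn = np.var(s) / max(np.mean(s) ** 2, 1e-12)
                  p_n = np.var(m) / max(np.mean(m) ** 2, 1e-12)
                  vals.append(max(p_sn - p_n, 1e-4) / max(p_n, 1e-4))
              band_snrs.append(np.mean(vals))
          return float(np.sqrt(np.sum(np.square(band_snrs))))   # combine across bands

      rng = np.random.default_rng(1)
      t = np.arange(0, 2.0, 1 / 200)                     # 200 Hz envelope sampling
      speech_env = 1 + 0.5 * np.sin(2 * np.pi * 4 * t)   # 4 Hz "speech" modulation
      noise_env = 1 + 0.1 * rng.standard_normal(t.size)
      print(snr_env_multires(speech_env + noise_env, noise_env, fs_env=200))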

  16. Multi-resolution Gabor wavelet feature extraction for needle detection in 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Pourtaherian, Arash; Zinger, Svitlana; Mihajlovic, Nenad; de With, Peter H. N.; Huang, Jinfeng; Ng, Gary C.; Korsten, Hendrikus H. M.

    2015-12-01

    Ultrasound imaging is employed for needle guidance in various minimally invasive procedures such as biopsy guidance, regional anesthesia and brachytherapy. Unfortunately, needle guidance using 2D ultrasound is very challenging due to poor needle visibility and a limited field of view. Nowadays, 3D ultrasound systems are available and more widely used. Consequently, with an appropriate 3D image-based needle detection technique, needle guidance and interventions may be significantly improved and simplified. In this paper, we present a multi-resolution Gabor transformation for automated and reliable extraction of needle-like structures in a 3D ultrasound volume. We study and identify the best combination of Gabor wavelet frequencies. High precision in detecting the needle voxels leads to a robust and accurate localization of the needle for intervention support. Evaluation in several ex-vivo cases shows that the multi-resolution analysis significantly improves the precision of needle voxel detection from 0.23 to 0.32 at a high recall rate of 0.75 (a 40% gain); better robustness and confidence were confirmed in the practical experiments.

  17. Protons are one of the limiting factors in determining sensitivity of nano surface-assisted (+)-mode LDI MS analyses.

    PubMed

    Cho, Eunji; Ahn, Miri; Kim, Young Hwan; Kim, Jongwon; Kim, Sunghwan

    2013-10-01

    A proton source employing a nanostructured gold surface for use in (+)-mode laser desorption ionization mass spectrometry (LDI-MS) was evaluated. Analyses of a perdeuterated polyaromatic hydrocarbon compound dissolved in regular toluene, perdeuterated toluene, and deuterated methanol all showed that protonated ions were generated regardless of the solvent system. Therefore, it was concluded that residual water on the surface of the LDI plate was the major source of protons. The fact that residual water remaining after vacuum drying was the source of protons suggests that protons may be the limiting reagent in the LDI process and that overall ionization efficiency can be improved by incorporating an additional proton source. When extra proton sources, such as thiolate compounds and/or citric acid, were added to a nanostructured gold surface, the protonated signal abundance increased. These data show that protons are one of the limiting components in (+)-mode LDI MS analyses employing nanostructured gold surfaces. Therefore, additional efforts are required to identify compounds that can act as proton donors without generating peaks that interfere with mass spectral interpretation.

  18. Protons are One of the Limiting Factors in Determining Sensitivity of Nano Surface-Assisted (+)-Mode LDI MS Analyses

    NASA Astrophysics Data System (ADS)

    Cho, Eunji; Ahn, Miri; Kim, Young Hwan; Kim, Jongwon; Kim, Sunghwan

    2013-10-01

    A proton source employing a nanostructured gold surface for use in (+)-mode laser desorption ionization mass spectrometry (LDI-MS) was evaluated. Analyses of a perdeuterated polyaromatic hydrocarbon compound dissolved in regular toluene, perdeuterated toluene, and deuterated methanol all showed that protonated ions were generated regardless of the solvent system. Therefore, it was concluded that residual water on the surface of the LDI plate was the major source of protons. The fact that residual water remaining after vacuum drying was the source of protons suggests that protons may be the limiting reagent in the LDI process and that overall ionization efficiency can be improved by incorporating an additional proton source. When extra proton sources, such as thiolate compounds and/or citric acid, were added to a nanostructured gold surface, the protonated signal abundance increased. These data show that protons are one of the limiting components in (+)-mode LDI MS analyses employing nanostructured gold surfaces. Therefore, additional efforts are required to identify compounds that can act as proton donors without generating peaks that interfere with mass spectral interpretation.

  19. Limiter

    DOEpatents

    Cohen, S.A.; Hosea, J.C.; Timberlake, J.R.

    1984-10-19

    A limiter with a specially contoured front face is provided. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution. This limiter shape accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles.

  20. a DTM Multi-Resolution Compressed Model for Efficient Data Storage and Network Transfer

    NASA Astrophysics Data System (ADS)

    Biagi, L.; Brovelli, M.; Zamboni, G.

    2011-08-01

    In recent years the technological evolution of terrestrial, aerial and satellite surveying has considerably increased measurement accuracy and, consequently, the quality of the derived information. At the same time, the ever smaller limitations of data storage devices, in terms of capacity and cost, have allowed the storage and elaboration of a larger number of instrumental observations. A significant example is terrain height surveyed by LIDAR (LIght Detection And Ranging) technology, where several height measurements can be obtained for each square meter of land. The availability of such a large quantity of observations is an essential requisite for in-depth knowledge of the phenomena under study. But, at the same time, the most common Geographical Information Systems (GISs) show latency in visualizing and analyzing these kinds of data. This problem becomes more evident in the case of Internet GIS. These systems are based on a very frequent flow of geographical information over the internet and, for this reason, the bandwidth of the network and the size of the data to be transmitted are two fundamental factors to consider in order to guarantee the actual usability of these technologies. In this paper we focus our attention on digital terrain models (DTMs) and we briefly analyse the problem of defining the minimal information necessary to store and transmit DTMs over a network, with a fixed tolerance, starting from a huge number of observations. Then we propose an innovative compression approach for sparse observations by means of multi-resolution spline function approximation. The method is able to provide metrical accuracy at least comparable to that provided by the most common deterministic interpolation algorithms (inverse distance weighting, local polynomial, radial basis functions). At the same time it dramatically reduces the amount of information required for storing or transmitting and rebuilding a digital terrain
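
    The storage idea can be illustrated with a toy coarse-to-fine residual pyramid: keep a decimated grid plus, per level, only those residuals that exceed the tolerance. The Python sketch below uses scipy's bilinear upsampling as a crude stand-in for the paper's multi-resolution spline approximation; names and parameters are illustrative.

      import numpy as np
      from scipy.ndimage import zoom

      def compress_dtm(dtm, levels, tol):
          """Coarse-to-fine residual pyramid; residuals within tol are dropped."""
          recon = dtm[::2 ** levels, ::2 ** levels].copy()
          stored = [recon]                               # coarsest grid, kept densely
          for lv in range(levels - 1, -1, -1):
              target = dtm[::2 ** lv, ::2 ** lv]
              up = zoom(recon, 2, order=1)[:target.shape[0], :target.shape[1]]
              resid = target - up
              resid[np.abs(resid) <= tol] = 0.0          # sparsify within tolerance
              stored.append(resid)                       # stored sparsely in practice
              recon = up + resid
          return stored, recon

      x, y = np.meshgrid(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
      dtm = 100 * np.sin(3 * x) * np.cos(2 * y)          # smooth synthetic terrain
      stored, recon = compress_dtm(dtm, levels=3, tol=0.5)
      nonzero = sum(int((r != 0).sum()) for r in stored[1:])
      print(float(np.abs(recon - dtm).max()), nonzero)   # error <= tol; nonzero = cost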

  1. Limiter

    DOEpatents

    Cohen, Samuel A.; Hosea, Joel C.; Timberlake, John R.

    1986-01-01

    A limiter with a specially contoured front face accommodates the various power scrape-off distances λ_p, which depend on the parallel velocity, V_∥, of the impacting particles. The front face of the limiter (the plasma-side face) is flat with a central indentation. In addition, the limiter shape is cylindrically symmetric so that the limiter can be rotated for greater heat distribution.

  2. Multiresolution and Explicit Methods for Vector Field Analysis and Visualization

    NASA Technical Reports Server (NTRS)

    Nielson, Gregory M.

    1997-01-01

    This is a request for a second renewal (3rd year of funding) of a research project on the topic of multiresolution and explicit methods for vector field analysis and visualization. In this report, we describe the progress made on this research project during the second year and give a statement of the planned research for the third year. There are two aspects to this research project. The first is concerned with the development of techniques for computing tangent curves for use in visualizing flow fields. The second aspect of the research project is concerned with the development of multiresolution methods for curvilinear grids and their use as tools for visualization, analysis and archiving of flow data. We report on our work on the development of numerical methods for tangent curve computation first.

  3. A Multiresolution Method for Parameter Estimation of Diffusion Processes.

    PubMed

    Kou, S C; Olding, Benjamin P; Lysy, Martin; Liu, Jun S

    2012-12-01

    Diffusion process models are widely used in science, engineering and finance. Most diffusion processes are described by stochastic differential equations in continuous time. In practice, however, data are typically only observed at discrete time points. Except for a few very special cases, no analytic form exists for the likelihood of such discretely observed data. For this reason, parametric inference is often achieved by using discrete-time approximations, with accuracy controlled through the introduction of missing data. We present a new multiresolution Bayesian framework to address the inference difficulty. The methodology relies on the use of multiple approximations and extrapolation, and is significantly faster and more accurate than known strategies based on Gibbs sampling. We apply the multiresolution approach to three data-driven inference problems, one in biophysics and two in finance, one of which features a multivariate diffusion model with an entirely unobserved component.
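
    The discrete-time approximation at the heart of this setting is easy to write down: under an Euler scheme, each transition is Gaussian with mean x + drift(x) dt and variance diffusion(x)^2 dt. The Python sketch below evaluates this approximate likelihood for an Ornstein-Uhlenbeck path; it shows only the base approximation that the authors' multiresolution framework refines by imputing latent points between observations, and the names are illustrative.

      import numpy as np

      def euler_loglik(x, dt, drift, diffusion):
          """Euler log-likelihood of a discretely observed diffusion path."""
          ll = 0.0
          for a, b in zip(x[:-1], x[1:]):
              mu = a + drift(a) * dt                 # Gaussian transition mean
              var = diffusion(a) ** 2 * dt           # Gaussian transition variance
              ll += -0.5 * (np.log(2 * np.pi * var) + (b - mu) ** 2 / var)
          return ll

      # Simulate an Ornstein-Uhlenbeck path: dX = -theta X dt + sigma dW.
      rng = np.random.default_rng(2)
      theta, sigma, dt = 1.0, 0.5, 0.1
      x = [0.0]
      for _ in range(500):
          x.append(x[-1] - theta * x[-1] * dt + sigma * np.sqrt(dt) * rng.standard_normal())
      x = np.asarray(x)
      # Approximate likelihood surface over theta, at the true sigma.
      for th in (0.5, 1.0, 2.0):
          print(th, euler_loglik(x, dt, lambda v, th=th: -th * v, lambda v: sigma))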

  4. Boundary element based multiresolution shape optimisation in electrostatics

    NASA Astrophysics Data System (ADS)

    Bandara, Kosala; Cirak, Fehmi; Of, Günther; Steinbach, Olaf; Zapletal, Jan

    2015-09-01

    We consider the shape optimisation of high-voltage devices subject to electrostatic field equations by combining fast boundary elements with multiresolution subdivision surfaces. The geometry of the domain is described with subdivision surfaces, and different resolutions of the same geometry are used for optimisation and analysis. The primal and adjoint problems are discretised with the boundary element method using a sufficiently fine control mesh. For shape optimisation the geometry is updated starting from the coarsest control mesh and proceeding to increasingly fine control meshes. The multiresolution approach effectively prevents the appearance of non-physical geometry oscillations in the optimised shapes. Moreover, there is no need for mesh regeneration or smoothing during the optimisation due to the absence of a volume mesh. We present several numerical experiments and one industrial application to demonstrate the robustness and versatility of the developed approach.

  5. Multiresolution techniques for the classification of bioimage and biometric datasets

    NASA Astrophysics Data System (ADS)

    Chebira, Amina; Kovačević, Jelena

    2007-09-01

    We survey our work on adaptive multiresolution (MR) approaches to the classification of biological and fingerprint images. The system adds MR decomposition in front of a generic classifier consisting of feature computation and classification in each MR subspace, yielding local decisions, which are then combined into a global decision using a weighting algorithm. The system is tested on four different datasets: subcellular protein location images, drosophila embryo images, histological images and fingerprint images. Given the very high accuracies obtained for all four datasets, we demonstrate that the space-frequency localized information in the multiresolution subspaces adds significantly to the discriminative power of the system. Moreover, we show that a vastly reduced set of features is sufficient. Finally, we prove that frames are the class of MR techniques that performs best in this context. This leads us to consider the construction of a new family of frames for classification, which we term lapped tight frame transforms.

  6. Multiresolutional encoding and decoding in embedded image and video coders

    NASA Astrophysics Data System (ADS)

    Xiong, Zixiang; Kim, Beong-Jo; Pearlman, William A.

    1998-07-01

    We address multiresolutional encoding and decoding within the embedded zerotree wavelet (EZW) framework for both images and video. By varying a resolution parameter, one can obtain decoded images at different resolutions from one single encoded bitstream, which is already rate scalable for EZW coders. Similarly one can decode video sequences at different rates and different spatial and temporal resolutions from one bitstream. Furthermore, a layered bitstream can be generated with multiresolutional encoding, from which the higher resolution layers can be used to increase the spatial/temporal resolution of the images/video obtained from the low resolution layer. In other words, we have achieved full scalability in rate and partial scalability in space and time. This added spatial/temporal scalability is significant for emerging multimedia applications such as fast decoding, image/video database browsing, telemedicine, multipoint video conferencing, and distance learning.

  7. A Multiresolution Method for Parameter Estimation of Diffusion Processes

    PubMed Central

    Kou, S. C.; Olding, Benjamin P.; Lysy, Martin; Liu, Jun S.

    2014-01-01

    Diffusion process models are widely used in science, engineering and finance. Most diffusion processes are described by stochastic differential equations in continuous time. In practice, however, data are typically only observed at discrete time points. Except for a few very special cases, no analytic form exists for the likelihood of such discretely observed data. For this reason, parametric inference is often achieved by using discrete-time approximations, with accuracy controlled through the introduction of missing data. We present a new multiresolution Bayesian framework to address the inference difficulty. The methodology relies on the use of multiple approximations and extrapolation, and is significantly faster and more accurate than known strategies based on Gibbs sampling. We apply the multiresolution approach to three data-driven inference problems, one in biophysics and two in finance, one of which features a multivariate diffusion model with an entirely unobserved component. PMID:25328259

  8. Specific binding of ¹²⁵I-labeled human chorionic gonadotropin to gonadal tissue: comparison of limited-point saturation analyses to Scatchard analyses for determining binding capacities and factors affecting estimates of binding capacity

    SciTech Connect

    Spicer, L.J.; Ireland, J.J.

    1986-07-01

    Experiments were conducted to compare gonadotropin binding capacities calculated from limited-point saturation analyses to those obtained from Scatchard analyses, and to test the effects of membrane purity and source of gonadotropin receptors on determining the maximum percentage of radioiodinated hormone bound to receptors (maximum bindability). One- to four-point saturation analyses gave results comparable to those from Scatchard analyses when examining relative binding capacities of receptors. Crude testicular homogenates had lower estimates of the maximum bindability of ¹²⁵I-labeled human chorionic gonadotropin than more purified gonadotropin receptor preparations. Under similar preparation techniques, some gonadotropin receptor sources exhibited low maximum bindability.

  9. A sparse reconstruction method for the estimation of multiresolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2014-08-20

    We present a sparse reconstruction scheme that can also be used to ensure non-negativity when fitting wavelet-based random field models to limited observations in non-rectangular geometries. The method is relevant when multiresolution fields are estimated using linear inverse problems. Examples include the estimation of emission fields for many anthropogenic pollutants using atmospheric inversion or hydraulic conductivity in aquifers from flow measurements. The scheme is based on three new developments. Firstly, we extend an existing sparse reconstruction method, Stagewise Orthogonal Matching Pursuit (StOMP), to incorporate prior information on the target field. Secondly, we develop an iterative method that uses StOMP to impose non-negativity on the estimated field. Finally, we devise a method, based on compressive sensing, to limit the estimated field within an irregularly shaped domain. We demonstrate the method on the estimation of fossil-fuel CO2 (ffCO2) emissions in the lower 48 states of the US. The application uses a recently developed multiresolution random field model and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also reduces the overall computational cost by a factor of two. Further, the sparse reconstruction scheme imposes non-negativity without introducing strong nonlinearities, such as those introduced by employing log-transformed fields, and thus reaps the benefits of simplicity and computational speed that are characteristic of linear inverse problems.

  10. Multi-Resolution Playback of Network Trace Files

    DTIC Science & Technology

    2015-06-01

    [DTIC record; only report-documentation boilerplate is preserved in place of the abstract.] Naval Postgraduate School thesis by Scott Fortner, June 2015: the work responds to a requirement for a non-proprietary, user-friendly network traffic replay system that can provide replay of a network trace file at multiple resolutions.

  11. Multiple multiresolution representation of functions and calculus for fast computation

    SciTech Connect

    Fann, George I; Harrison, Robert J; Hill, Judith C; Jia, Jun; Galindo, Diego A

    2010-01-01

    We describe the mathematical representations, data structures and the implementation of the numerical calculus of functions in MADNESS, the multiresolution analysis environment for scientific simulation. In MADNESS, each smooth function is represented by an adaptive pseudo-spectral expansion in the multiwavelet basis to an arbitrary but finite precision. This is an extension of the capabilities of most of the existing net-, mesh- and spectral-based methods, where the discretization is based on a single adaptive mesh or expansion.
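
    The adaptive idea can be caricatured in one dimension: keep a cheap local expansion per box and split any box whose error estimate exceeds the requested precision. The Python sketch below uses a linear interpolant where MADNESS uses multiwavelet expansions, so it is an illustration of the refinement logic only.

      import numpy as np

      def adapt(f, a, b, tol, depth=0, max_depth=20):
          """Adaptively refine [a, b] until a local linear fit is within tol."""
          mid = 0.5 * (a + b)
          # Error estimate: f at the midpoint vs. the linear interpolant there.
          err = abs(f(mid) - 0.5 * (f(a) + f(b)))
          if err <= tol or depth >= max_depth:
              return [(a, b)]
          return adapt(f, a, mid, tol, depth + 1) + adapt(f, mid, b, tol, depth + 1)

      # A sharp Gaussian gets fine boxes near the peak, coarse ones elsewhere.
      boxes = adapt(lambda x: np.exp(-200 * (x - 0.5) ** 2), 0.0, 1.0, tol=1e-4)
      widths = [b - a for a, b in boxes]
      print(len(boxes), min(widths), max(widths))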

  12. Survey and analysis of multiresolution methods for turbulence data

    SciTech Connect

    Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; Ahrens, James; Hamann, Bernd

    2015-11-10

    This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, at capturing the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy-driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both velocity and active scalar (density) fields and their derivatives, spectra, and the properties of constant-density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for large dataset handling, as well as simulation algorithms based on multi-resolution methods. The final section provides recommendations for the best decomposition algorithms, based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.
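
    The kind of comparison reported above can be reproduced in miniature: decompose a field, keep only the largest-magnitude coefficients, reconstruct, and record the error at each compression level. The Python sketch below uses PyWavelets with a Daubechies wavelet (one of the families tested) on a synthetic 1-D signal; it is a toy stand-in for the paper's 3-D turbulence study.

      import numpy as np
      import pywt

      def truncation_error(signal, wavelet, keep_fraction):
          """RMS error after keeping only the largest-magnitude coefficients."""
          coeffs = pywt.wavedec(signal, wavelet)
          flat = np.concatenate(coeffs)
          thresh = np.quantile(np.abs(flat), 1.0 - keep_fraction)
          kept = [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]
          recon = pywt.waverec(kept, wavelet)[: len(signal)]
          return float(np.sqrt(np.mean((recon - signal) ** 2)))

      # Synthetic rough signal standing in for a turbulence velocity trace.
      rng = np.random.default_rng(3)
      sig = np.cumsum(rng.standard_normal(4096))
      for frac in (0.01, 0.05, 0.20):
          print(frac, truncation_error(sig, "db4", frac))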

  13. Sparse PDF Volumes for Consistent Multi-Resolution Volume Rendering

    PubMed Central

    Sicat, Ronell; Krüger, Jens; Möller, Torsten; Hadwiger, Markus

    2015-01-01

    This paper presents a new multi-resolution volume representation called sparse pdf volumes, which enables consistent multi-resolution volume rendering based on probability density functions (pdfs) of voxel neighborhoods. These pdfs are defined in the 4D domain jointly comprising the 3D volume and its 1D intensity range. Crucially, the computation of sparse pdf volumes exploits data coherence in 4D, resulting in a sparse representation with surprisingly low storage requirements. At run time, we dynamically apply transfer functions to the pdfs using simple and fast convolutions. Whereas standard low-pass filtering and down-sampling incur visible differences between resolution levels, the use of pdfs facilitates consistent results independent of the resolution level used. We describe the efficient out-of-core computation of large-scale sparse pdf volumes, using a novel iterative simplification procedure of a mixture of 4D Gaussians. Finally, our data structure is optimized to facilitate interactive multi-resolution volume rendering on GPUs. PMID:26146475
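
    The core idea can be sketched directly: represent each coarse voxel by a pdf over the intensity range, and apply a transfer function as an expectation under that pdf (a dot product, i.e., a discrete convolution with the TF) instead of filtering intensities first. The Python fragment below is a dense toy version without the paper's sparsity, Gaussian-mixture simplification, or GPU machinery; all names are hypothetical.

      import numpy as np

      def downsample_pdfs(volume, block=2, bins=64):
          """Histogram each block's intensities: one pdf per coarse voxel."""
          B = block
          nz, ny, nx = (s // B for s in volume.shape)
          pdfs = np.zeros((nz, ny, nx, bins))
          for k in range(nz):
              for j in range(ny):
                  for i in range(nx):
                      blk = volume[k*B:(k+1)*B, j*B:(j+1)*B, i*B:(i+1)*B]
                      hist, _ = np.histogram(blk, bins=bins, range=(0.0, 1.0))
                      pdfs[k, j, i] = hist / hist.sum()
          return pdfs

      def apply_transfer(pdfs, transfer):
          """Consistent coarse rendering: expectation of the TF under each pdf."""
          bins = pdfs.shape[-1]
          centers = (np.arange(bins) + 0.5) / bins
          return pdfs @ transfer(centers)            # dot/convolution with the TF

      vol = np.random.default_rng(4).random((8, 8, 8))
      pdfs = downsample_pdfs(vol)
      coarse = apply_transfer(pdfs, lambda v: (v > 0.7).astype(float))  # step-opacity TF
      print(coarse.shape, coarse.mean())             # ~0.3 expected above threshold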

  14. Survey and analysis of multiresolution methods for turbulence data

    DOE PAGES

    Pulido, Jesus; Livescu, Daniel; Woodring, Jonathan; ...

    2015-11-10

    This paper compares the effectiveness of various multi-resolution geometric representation methods, such as B-spline, Daubechies, Coiflet and Dual-tree wavelets, curvelets and surfacelets, at capturing the structure of fully developed turbulence using a truncated set of coefficients. The turbulence dataset is obtained from a Direct Numerical Simulation of buoyancy-driven turbulence on a 512³ mesh, with an Atwood number A = 0.05 and turbulent Reynolds number Re_t = 1800, and the methods are tested against quantities pertaining to both velocity and active scalar (density) fields and their derivatives, spectra, and the properties of constant-density surfaces. The comparisons between the algorithms are given in terms of performance, accuracy, and compression properties. The results should provide useful information for multi-resolution analysis of turbulence, coherent feature extraction, compression for large dataset handling, as well as simulation algorithms based on multi-resolution methods. The final section provides recommendations for the best decomposition algorithms, based on several metrics related to computational efficiency and preservation of turbulence properties using a reduced set of coefficients.

  15. Multiresolution persistent homology for excessively large biomolecular datasets

    NASA Astrophysics Data System (ADS)

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273,780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications to arbitrary data sets, such as social networks, biological networks, and graphs.
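
    The resolution-tuned rigidity density is simple to write down: a sum of Gaussian kernels whose width sets the scale of the topological lens. The Python sketch below shows the effect in one dimension, where a small width resolves individual atoms and a large one merges them into cluster-scale features; it omits the persistent homology computation itself, and the names are illustrative.

      import numpy as np

      def rigidity_density(points, grid, eta):
          """FRI-style rigidity density: sum of Gaussian kernels of width eta."""
          d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / eta ** 2).sum(axis=1)

      # Two clusters of "atoms" on a line: fine eta sees 6 peaks, coarse sees 2.
      pts = np.array([[0.0], [0.4], [0.8], [5.0], [5.4], [5.8]])
      grid = np.linspace(-1, 7, 401)[:, None]
      for eta in (0.1, 1.5):
          rho = rigidity_density(pts, grid, eta)
          peaks = ((rho[1:-1] > rho[:-2]) & (rho[1:-1] > rho[2:])).sum()
          print(eta, int(peaks))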

  16. Multiresolution persistent homology for excessively large biomolecular datasets

    PubMed Central

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-01-01

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273,780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications to arbitrary data sets, such as social networks, biological networks, and graphs. PMID:26450288

  17. Multiresolution persistent homology for excessively large biomolecular datasets

    SciTech Connect

    Xia, Kelin; Zhao, Zhixiong; Wei, Guo-Wei

    2015-10-07

    Although persistent homology has emerged as a promising tool for the topological simplification of complex data, it is computationally intractable for large datasets. We introduce multiresolution persistent homology to handle excessively large datasets. We match the resolution with the scale of interest so as to represent large scale datasets with appropriate resolution. We utilize the flexibility-rigidity index to assess the topological connectivity of the data set and define a rigidity density for the filtration analysis. By appropriately tuning the resolution of the rigidity density, we are able to focus the topological lens on the scale of interest. The proposed multiresolution topological analysis is validated by a hexagonal fractal image which has three distinct scales. We further demonstrate the proposed method for extracting topological fingerprints from DNA molecules. In particular, the topological persistence of a virus capsid with 273,780 atoms is successfully analyzed, which would otherwise be inaccessible to the normal point cloud method and unreliable by using coarse-grained multiscale persistent homology. The proposed method has also been successfully applied to protein domain classification, which is, to our knowledge, the first time that persistent homology has been used for practical protein domain analysis. The proposed multiresolution topological method has potential applications to arbitrary data sets, such as social networks, biological networks, and graphs.

  18. Wavelet multi-resolution analysis of energy transfer in turbulent premixed flames

    NASA Astrophysics Data System (ADS)

    Kim, Jeonglae; Bassenne, Maxime; Towery, Colin; Poludnenko, Alexei; Hamlington, Peter; Ihme, Matthias; Urzay, Javier

    2016-11-01

    Direct numerical simulations of turbulent premixed flames are examined using wavelet multi-resolution analyses (WMRA) as a diagnostic tool to evaluate the spatially localized inter-scale energy transfer in reacting flows. In non-reacting homogeneous-isotropic turbulence, the net energy transfer occurs from large to small scales on average, thus following the classical Kolmogorov energy cascade. However, in turbulent flames, our prior work suggests that thermal expansion leads to a small-scale pressure-work contribution that transfers energy in an inverse cascade on average, which has important consequences for LES modeling of reacting flows. The current study employs WMRA to investigate, simultaneously in physical and spectral space, the characteristics of this combustion-induced backscatter effect. The WMRA diagnostics provide spatial statistics of the spectra and the scale-conditioned intermittency of velocity and vorticity, along with energy-transfer fluxes conditioned on the local progress variable.

  19. Stain Deconvolution Using Statistical Analysis of Multi-Resolution Stain Colour Representation

    PubMed Central

    Alsubaie, Najah; Trahearn, Nicholas; Raza, Shan E. Ahmed; Snead, David; Rajpoot, Nasir M.

    2017-01-01

    Stain colour estimation is a prominent factor in the analysis pipeline of most histology image processing algorithms, and providing a reliable and efficient stain colour deconvolution approach is fundamental for a robust algorithm. In this paper, we propose a novel method for stain colour deconvolution of histology images. The approach statistically analyses the multi-resolution representation of the image to separate the independent observations from the correlated ones. We then estimate the stain mixing matrix using the filtered uncorrelated data. We conducted an extensive set of experiments comparing the proposed method with recent state-of-the-art methods and demonstrate the robustness of the approach using three different datasets of scanned slides, prepared in different labs using different scanners. PMID:28076381
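
    Once a stain mixing matrix has been estimated (the contribution of the method above), the separation itself follows the standard optical-density unmixing step. The Python sketch below shows that final step, with the widely used reference H&E stain vectors of Ruifrok and Johnston standing in for an estimated matrix; the image is hypothetical.

      import numpy as np

      def separate_stains(rgb, stain_matrix):
          """Standard optical-density unmixing given an estimated mixing matrix.

          Beer-Lambert: OD = -log10(I / I0); OD is approximately linear in
          stain concentrations, so concentrations = OD @ pinv(M).
          """
          od = -np.log10(np.clip(rgb / 255.0, 1e-6, 1.0))      # (H, W, 3)
          conc = od.reshape(-1, 3) @ np.linalg.pinv(stain_matrix)
          return conc.reshape(rgb.shape[:2] + (stain_matrix.shape[0],))

      # Reference H&E stain vectors (rows: haematoxylin, eosin).
      M = np.array([[0.650, 0.704, 0.286],
                    [0.072, 0.990, 0.105]])
      img = np.full((4, 4, 3), 200.0)                          # hypothetical tile
      print(separate_stains(img, M).shape)                     # (4, 4, 2)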

  20. Hosting the preimplantation embryo: potentials and limitations of different approaches for analysing embryo-endometrium interactions in cattle.

    PubMed

    Ulbrich, Susanne E; Wolf, Eckhard; Bauersachs, Stefan

    2012-01-01

    Ongoing detailed investigations into embryo-maternal communication before implantation reveal that during early embryonic development a plethora of events are taking place. During the sexual cycle, remodelling and differentiation processes in the endometrium are controlled by ovarian hormones, mainly progesterone, to provide a suitable environment for establishment of pregnancy. In addition, embryonic signalling molecules initiate further sequences of events; of these molecules, prostaglandins are discussed herein as specifically important. Inadequate receptivity may impede preimplantation development and implantation, leading to embryonic losses. Because there are multiple factors affecting fertility, receptivity is difficult to comprehend. This review addresses different models and methods that are currently used and discusses their respective potentials and limitations in distinguishing key messages out of molecular twitter. Transcriptome, proteome and metabolome analyses generate comprehensive information and provide starting points for hypotheses, which need to be substantiated using further confirmatory methods. Appropriate in vivo and in vitro models are needed to disentangle the effects of participating factors in the embryo-maternal dialogue and to help distinguish associations from causalities. One interesting model is the study of somatic cell nuclear transfer embryos in normal recipient heifers. A multidisciplinary approach is needed to properly assess the importance of the uterine milieu for embryonic development and to use the large number of new findings to solve long-standing issues regarding fertility.

  1. Limitations of Species Delimitation Based on Phylogenetic Analyses: A Case Study in the Hypogymnia hypotrypa Group (Parmeliaceae, Ascomycota)

    PubMed Central

    Wei, Xinli; McCune, Bruce; Lumbsch, H. Thorsten; Li, Hui; Leavitt, Steven; Yamamoto, Yoshikazu; Tchabanenko, Svetlana; Wei, Jiangchun

    2016-01-01

    Delimiting species boundaries among closely related lineages often requires a range of independent data sets and analytical approaches. Similar to other organismal groups, robust species circumscriptions in fungi are increasingly investigated within an empirical framework. Here we attempt to delimit species boundaries in a closely related clade of lichen-forming fungi endemic to Asia, the Hypogymnia hypotrypa group (Parmeliaceae). In the current classification, the Hypogymnia hypotrypa group includes two species: H. hypotrypa and H. flavida, which are separated based on distinctive reproductive modes, the former producing soredia that are absent in the latter. We reexamined the relationship between these two species using phenotypic characters and molecular sequence data (ITS, GPD, and MCM7 sequences) to address species boundaries in this group. In addition to morphological investigations, we used Bayesian clustering to identify potential genetic groups in the H. hypotrypa/H. flavida clade. We also used a variety of empirical, sequence-based species delimitation approaches, including the “Automatic Barcode Gap Discovery” (ABGD), the Poisson tree process model (PTP), the General Mixed Yule Coalescent (GMYC), and the multispecies coalescent approach BPP. Different species delimitation scenarios were compared using Bayes factor delimitation analysis, in addition to comparisons of pairwise genetic distances and pairwise fixation indices (FST). The majority of the species delimitation analyses implemented in this study failed to support H. hypotrypa and H. flavida as distinct lineages, as did the Bayesian clustering analysis. However, strong support for the evolutionary independence of H. hypotrypa and H. flavida was inferred using BPP and further supported by Bayes factor delimitation. In spite of rigorous morphological comparisons and a wide range of sequence-based approaches to delimit species, species boundaries in the H. hypotrypa group remain uncertain. This study

  2. Physiological, biomass elemental composition and proteomic analyses of Escherichia coli ammonium-limited chemostat growth, and comparison with iron- and glucose-limited chemostat growth

    PubMed Central

    Folsom, James Patrick

    2015-01-01

    Escherichia coli physiological, biomass elemental composition and proteome acclimations to ammonium-limited chemostat growth were measured at four levels of nutrient scarcity controlled via chemostat dilution rate. These data were compared with published iron- and glucose-limited growth data collected from the same strain and at the same dilution rates to quantify general and nutrient-specific responses. Severe nutrient scarcity resulted in an overflow metabolism with differing organic byproduct profiles based on limiting nutrient and dilution rate. Ammonium-limited cultures secreted up to 35% of the metabolized glucose carbon as organic byproducts, with acetate representing the largest fraction; in comparison, iron-limited cultures secreted up to 70% of the metabolized glucose carbon as lactate, and glucose-limited cultures secreted up to 4% of the metabolized glucose carbon as formate. Biomass elemental composition differed with nutrient limitation; biomass from ammonium-limited cultures had a lower nitrogen content than biomass from either iron- or glucose-limited cultures. Proteomic analysis of central metabolism enzymes revealed that ammonium- and iron-limited cultures had a lower abundance of key tricarboxylic acid (TCA) cycle enzymes and higher abundance of key glycolysis enzymes compared with glucose-limited cultures. The overall results are largely consistent with cellular economics concepts, including metabolic tradeoff theory, where the limiting nutrient is invested into essential pathways such as glycolysis instead of higher ATP-yielding, but non-essential, pathways such as the TCA cycle. The data provide a detailed insight into ecologically competitive metabolic strategies selected by evolution, templates for controlling metabolism for bioprocesses and a comprehensive dataset for validating in silico representations of metabolism. PMID:26018546

  3. Adaptive multi-resolution 3D Hartree-Fock-Bogoliubov solver for nuclear structure

    NASA Astrophysics Data System (ADS)

    Pei, J. C.; Fann, G. I.; Harrison, R. J.; Nazarewicz, W.; Shi, Yue; Thornton, S.

    2014-08-01

    Background: Complex many-body systems, such as triaxial and reflection-asymmetric nuclei, weakly bound halo states, cluster configurations, nuclear fragments produced in heavy-ion fusion reactions, cold Fermi gases, and pasta phases in the neutron star crust, are all characterized by large sizes and complex topologies in which many geometrical symmetries characteristic of ground-state configurations are broken. A tool of choice to study such complex forms of matter is an adaptive multi-resolution wavelet analysis. This method has generated much excitement since it provides a common framework linking many diversified methodologies across different fields, including signal processing, data compression, harmonic analysis and operator theory, fractals, and quantum field theory. Purpose: To describe complex superfluid many-fermion systems, we introduce an adaptive pseudospectral method for solving self-consistent equations of nuclear density functional theory in three dimensions, without symmetry restrictions. Methods: The numerical method is based on multi-resolution and computational harmonic analysis techniques with a multi-wavelet basis. The application of state-of-the-art parallel programming techniques includes sophisticated object-oriented templates which parse the high-level code into distributed parallel tasks with a multi-thread task queue scheduler for each multi-core node. The internode communications are asynchronous. The algorithm is variational and is capable of solving coupled complex-geometric systems of equations adaptively, with functional and boundary constraints, in a finite spatial domain of very large size, limited by existing parallel computer memory. For smooth functions, user-defined finite precision is guaranteed. Results: The new adaptive multi-resolution Hartree-Fock-Bogoliubov (HFB) solver madness-hfb is benchmarked against a two-dimensional coordinate-space solver hfb-ax that is based on the B-spline technique and a three-dimensional solver
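
    madness-hfb itself is a large parallel code; the toy sketch below only illustrates the adaptive-refinement principle it relies on: subdivide cells until a local interpolation test meets a user-defined precision.

      import numpy as np

      def adaptive_refine(f, a, b, tol, depth=0, max_depth=12):
          # Recursively subdivide [a, b] until a midpoint check of linear
          # interpolation meets the requested precision (a toy analogue of
          # guaranteed finite precision for smooth functions).
          mid = 0.5 * (a + b)
          interp = 0.5 * (f(a) + f(b))
          if abs(f(mid) - interp) < tol or depth >= max_depth:
              return [(a, b)]
          return (adaptive_refine(f, a, mid, tol, depth + 1, max_depth)
                  + adaptive_refine(f, mid, b, tol, depth + 1, max_depth))

      cells = adaptive_refine(lambda x: np.exp(-50 * x * x), -1.0, 1.0, 1e-3)
      print(len(cells), "cells; smallest width:", min(b - a for a, b in cells))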

  4. Inferring Species Richness and Turnover by Statistical Multiresolution Texture Analysis of Satellite Imagery

    DTIC Science & Technology

    2012-10-24

    The report discusses time-series similarity measures for classification and change detection of ecosystem dynamics, examines entropy for estimating species richness, and introduces a method based on statistical wavelet multiresolution texture analysis to quantitatively assess species richness and turnover from satellite imagery.
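
    A minimal sketch of a wavelet texture descriptor of this general kind (not the report's exact method): the Shannon entropy of normalized wavelet subband energies of an image patch.

      import numpy as np
      import pywt

      def subband_entropy(image, wavelet="db2", level=3):
          # Entropy of the normalized energy distribution across wavelet
          # subbands: one simple scalar texture descriptor.
          coeffs = pywt.wavedec2(image, wavelet, level=level)
          energies = np.array([np.sum(np.square(c))
                               for lvl in coeffs[1:] for c in lvl])
          p = energies / energies.sum()
          return -np.sum(p * np.log2(p + 1e-12))

      img = np.random.rand(128, 128)   # stand-in for a satellite image band
      print("texture entropy:", subband_entropy(img))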

  5. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.
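
    In one dimension the L2-optimal transport map has the closed form T = G^{-1} o F, with F and G the source and target cumulative distributions; this makes a convenient correctness check for gradient-descent or multiresolution solvers. A small sketch under these assumptions:

      import numpy as np

      # Two 1-D densities on a common grid.
      x = np.linspace(-4, 4, 513)
      dx = x[1] - x[0]
      f = np.exp(-0.5 * (x + 1) ** 2); f /= f.sum() * dx    # source density
      g = np.exp(-0.5 * (x - 1) ** 2); g /= g.sum() * dx    # target density

      F, G = np.cumsum(f) * dx, np.cumsum(g) * dx           # CDFs
      T = np.interp(F, G, x)       # Monge map T = G^{-1}(F(x))
      print("mean displacement:", np.sum((T - x) * f) * dx)  # ~ +2.0 expected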

  6. Geometric multi-resolution analysis for dictionary learning

    NASA Astrophysics Data System (ADS)

    Maggioni, Mauro; Minsker, Stanislav; Strawn, Nate

    2015-09-01

    We present an efficient algorithm and theory for Geometric Multi-Resolution Analysis (GMRA), a procedure for dictionary learning. Sparse dictionary learning provides the necessary complexity reduction for the critical applications of compression, regression, and classification in high-dimensional data analysis. As such, it is a critical technique in data science and it is important to have techniques that admit both efficient implementation and strong theory for large classes of theoretical models. By construction, GMRA is computationally efficient and in this paper we describe how the GMRA correctly approximates a large class of plausible models (namely, the noisy manifolds).
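
    A toy two-scale version of the idea (real GMRA builds a full multiscale tree): partition the data, then fit a low-dimensional affine approximation, via PCA, in each cell. All parameter choices below are illustrative.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.decomposition import PCA

      def gmra_sketch(X, n_cells=8, dim=1):
          # Scale 0: one global PCA; scale 1: a local PCA per cell.
          coarse = PCA(n_components=dim).fit(X)
          labels = KMeans(n_clusters=n_cells, n_init=10,
                          random_state=0).fit_predict(X)
          local = {c: PCA(n_components=dim).fit(X[labels == c])
                   for c in range(n_cells)}
          return coarse, labels, local

      # Noisy circle: a 1-D manifold in 2-D, locally well approximated by lines.
      t = np.random.rand(1000) * 2 * np.pi
      X = np.c_[np.cos(t), np.sin(t)] + 0.01 * np.random.randn(1000, 2)
      coarse, labels, local = gmra_sketch(X)
      print("global PCA explained:", coarse.explained_variance_ratio_.sum())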

  7. Adaptive multiresolution modeling of groundwater flow in heterogeneous porous media

    NASA Astrophysics Data System (ADS)

    Malenica, Luka; Gotovac, Hrvoje; Srzic, Veljko; Andric, Ivo

    2016-04-01

    The proposed methodology was originally developed by our scientific team in Split, which designed a multiresolution approach for analyzing flow and transport processes in highly heterogeneous porous media. The main properties of the adaptive Fup multi-resolution approach are: 1) the computational capabilities of Fup basis functions with compact support, capable of resolving all spatial and temporal scales, 2) multi-resolution presentation of heterogeneity as well as of all other input and output variables, 3) an accurate, adaptive and efficient strategy, and 4) semi-analytical properties which increase our understanding of the usually complex flow and transport processes in porous media. The main computational idea behind this approach is to separately find the minimum number of basis functions and resolution levels necessary to describe each flow and transport variable with the desired accuracy on a particular adaptive grid. Therefore, each variable is analyzed separately, and the adaptive and multi-scale nature of the methodology enables not only computational efficiency and accuracy, but also a description of subsurface processes closely tied to their physical interpretation. The methodology inherently supports a mesh-free procedure, avoiding classical numerical integration, and yields continuous velocity and flux fields, which is vitally important for flow and transport simulations. In this paper, we show recent improvements within the proposed methodology. Since state-of-the-art multiresolution approaches usually use the method of lines and only a spatially adaptive procedure, the temporal approximation has rarely been treated as multiscale. Therefore, a novel adaptive implicit Fup integration scheme is developed, resolving all time scales within each global time step. This means that the algorithm uses smaller time steps only in regions where the solution changes rapidly. Application of Fup basis functions enables continuous time approximation, simple interpolation calculations across

  8. Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms

    NASA Technical Reports Server (NTRS)

    Kurdila, Andrew J.; Sharpley, Robert C.

    1999-01-01

    This paper presents a final report on Wavelet and Multiresolution Analysis for Finite Element Networking Paradigms. The focus of this research is to derive and implement: 1) wavelet-based methodologies for the compression, transmission, decoding, and visualization of three-dimensional finite element geometry and simulation data in a network environment; 2) methodologies for interactive algorithm monitoring and tracking in computational mechanics; and 3) methodologies for interactive algorithm steering for the acceleration of large-scale finite element simulations. Also included in this report are appendices describing the derivation of wavelet-based Particle Image Velocimetry algorithms and of reduced-order input-output models for nonlinear systems obtained by utilizing wavelet approximations.

  9. Parallel object-oriented, denoising system using wavelet multiresolution analysis

    DOEpatents

    Kamath, Chandrika; Baldwin, Chuck H.; Fodor, Imola K.; Tang, Nu A.

    2005-04-12

    The present invention provides a data de-noising system utilizing processors and wavelet denoising techniques. Data is read and displayed in different formats. The data is partitioned into regions and the regions are distributed onto the processors. Communication requirements are determined among the processors according to the wavelet denoising technique and the partitioning of the data. The data is transformed onto different multiresolution levels with the wavelet transform according to the wavelet denoising technique and the communication requirements, the transformed data containing the wavelet coefficients. The denoised data is then transformed back into its original format for reading and display.
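
    A single-processor sketch of the wavelet-denoising core (the patent's contribution is the parallel partitioning and communication layer around it), using soft thresholding with the universal VisuShrink threshold as an assumed concrete choice:

      import numpy as np
      import pywt

      def wavelet_denoise(signal, wavelet="db8", level=5):
          # Estimate the noise level from the finest detail band, then
          # soft-threshold all detail coefficients and reconstruct.
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          sigma = np.median(np.abs(coeffs[-1])) / 0.6745
          thresh = sigma * np.sqrt(2 * np.log(signal.size))
          coeffs[1:] = [pywt.threshold(d, thresh, mode="soft")
                        for d in coeffs[1:]]
          return pywt.waverec(coeffs, wavelet)

      t = np.linspace(0, 1, 2048)
      noisy = np.sin(12 * np.pi * t) + 0.4 * np.random.randn(t.size)
      clean = wavelet_denoise(noisy)
      print("residual std:", np.std(clean - np.sin(12 * np.pi * t)))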

  10. Terascale Visualization: Multi-resolution Aspirin for Big-Data Headaches

    NASA Astrophysics Data System (ADS)

    Duchaineau, Mark

    2001-06-01

    Recent experience on the Accelerated Strategic Computing Initiative (ASCI) computers shows that computational physicists are successfully producing a prodigious collection of numbers on several thousand processors. But with this wealth of numbers comes an unprecedented difficulty in processing and moving them to provide useful insight and analysis. In this talk, a few simulations are highlighted where recent advancements in multiple-resolution mathematical representations and algorithms have provided some hope of seeing most of the physics of interest while keeping within the practical limits of the post-simulation storage and interactive data-exploration resources. A whole host of visualization research activities was spawned by the 1999 Gordon Bell Prize-winning computation of a shock-tube experiment showing Richtmyer-Meshkov turbulent instabilities. This includes efforts for the entire data pipeline from running simulation to interactive display: wavelet compression of field data, multi-resolution volume rendering and slice planes, out-of-core extraction and simplification of mixing-interface surfaces, shrink-wrapping to semi-regularize the surfaces, semi-structured surface wavelet compression, and view-dependent display-mesh optimization. More recently on the 12 TeraOps ASCI platform, initial results from a 5120-processor, billion-atom molecular dynamics simulation showed that 30-to-1 reductions in storage size can be achieved with no human-observable errors for the analysis required in simulations of supersonic crack propagation. This made it possible to store the 25 trillion bytes worth of simulation numbers in the available storage, which was under 1 trillion bytes. While multi-resolution methods and related systems are still in their infancy, for the largest-scale simulations there is often no other choice should the science require detailed exploration of the results.

  11. Application of multi-resolution 3D techniques in crime scene documentation with bloodstain pattern analysis.

    PubMed

    Hołowko, Elwira; Januszkiewicz, Kamil; Bolewicki, Paweł; Sitnik, Robert; Michoński, Jakub

    2016-10-01

    In forensic documentation with bloodstain pattern analysis (BPA) it is highly desirable to obtain, non-invasively, overall documentation of a crime scene, but also to register single evidence objects, such as bloodstains, in high resolution. In this study, we propose a hierarchical 3D scanning platform designed according to the top-down approach known from traditional forensic photography. The overall 3D model of a scene is obtained via integration of laser scans registered from different positions. Parts of a scene that are of particular interest are documented using a midrange scanner, and the smallest details are added in the highest resolution as close-up scans. The scanning devices are controlled using developed software equipped with advanced algorithms for point cloud processing. To verify the feasibility and effectiveness of multi-resolution 3D scanning in crime scene documentation, our platform was applied to document a murder scene simulated by the BPA experts from the Central Forensic Laboratory of the Police R&D, Warsaw, Poland. Applying the 3D scanning platform proved beneficial in the documentation of a crime scene combined with BPA. The multi-resolution 3D model enables virtual exploration of a scene in a three-dimensional environment, distance measurement, and a more realistic preservation of the evidence together with its surroundings. Moreover, high-resolution close-up scans aligned in a 3D model can be used to analyze bloodstains revealed at the crime scene. The results of BPA, such as trajectories and the area of origin, are visualized and analyzed in an accurate model of the scene. At this stage, a simplified approach treating the trajectory of a blood drop as a straight line is applied. Although the 3D scanning platform offers a new quality of crime scene documentation with BPA, some limitations of the technique are also mentioned.
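
    The simplified straight-line model mentioned at the end reduces to elementary trigonometry: an elliptical stain of width w and length l implies an impact angle a with sin(a) = w/l, and back-projecting along the stain's directional axis gives a height estimate for the area of origin. The numbers below are made up for illustration.

      import numpy as np

      def impact_angle(width, length):
          # Classic BPA relation: sin(angle) = width / length of the stain.
          return np.degrees(np.arcsin(width / length))

      w, l, d = 4.1, 8.0, 55.0    # stain width/length (mm), distance (cm)
      a = impact_angle(w, l)
      h = d * np.tan(np.radians(a))   # straight-line origin height (cm)
      print(f"impact angle {a:.1f} deg, origin height {h:.1f} cm")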

  12. Global multi-resolution terrain elevation data 2010 (GMTED2010)

    USGS Publications Warehouse

    Danielson, Jeffrey J.; Gesch, Dean B.

    2011-01-01

    In 1996, the U.S. Geological Survey (USGS) developed a global topographic elevation model designated as GTOPO30 at a horizontal resolution of 30 arc-seconds for the entire Earth. Because no single source of topographic information covered the entire land surface, GTOPO30 was derived from eight raster and vector sources that included a substantial amount of U.S. Defense Mapping Agency data. The quality of the elevation data in GTOPO30 varies widely; there are no spatially-referenced metadata, and the major topographic features such as ridgelines and valleys are not well represented. Despite its coarse resolution and limited attributes, GTOPO30 has been widely used for a variety of hydrological, climatological, and geomorphological applications as well as military applications, where a regional, continental, or global scale topographic model is required. These applications have ranged from delineating drainage networks and watersheds to using digital elevation data for the extraction of topographic structure and three-dimensional (3D) visualization exercises (Jenson and Domingue, 1988; Verdin and Greenlee, 1996; Lehner and others, 2008). Many of the fundamental geophysical processes active at the Earth's surface are controlled or strongly influenced by topography, thus the critical need for high-quality terrain data (Gesch, 1994). U.S. Department of Defense requirements for mission planning, geographic registration of remotely sensed imagery, terrain visualization, and map production are similarly dependent on global topographic data. Since the time GTOPO30 was completed, the availability of higher-quality elevation data over large geographic areas has improved markedly. New data sources include global Digital Terrain Elevation Data (DTED®) from the Shuttle Radar Topography Mission (SRTM), Canadian elevation data, and data from the Ice, Cloud, and land Elevation Satellite (ICESat). Given the widespread use of GTOPO30 and the equivalent 30-arc

  13. Automated transformation-invariant shape recognition through wavelet multiresolution

    NASA Astrophysics Data System (ADS)

    Brault, Patrice; Mounier, Hugues

    2001-12-01

    We present here new results in Wavelet Multi-Resolution Analysis (W-MRA) applied to shape recognition in automatic vehicle driving applications. Different types of shapes have to be recognized in this framework. They pertain to most of the objects entering the sensors' field of view of a car. These objects can be road signs, lane separation lines, moving or static obstacles, other automotive vehicles, or visual beacons. The recognition process must be invariant to global transformations, affine or not, namely rotation, translation and scaling. It also has to be invariant to more local, elastic deformations, such as perspective (in particular with wide-angle camera lenses), deformations due to environmental conditions (weather: rain, mist, light reverberation), and optical and electrical signal noise. To demonstrate our method, an initial shape, with a known contour, is compared to the same contour altered by rotation, translation, scaling and perspective. The curvature computed for each contour point is used as the main criterion in the shape matching process. The original part of this work is to use wavelet descriptors, generated with a fast orthonormal W-MRA, rather than Fourier descriptors, in order to provide a multi-resolution description of the contour to be analyzed. In this way, the intrinsic spatial localization property of wavelet descriptors can be exploited and the recognition process can be sped up. The most important part of this work is to demonstrate the potential performance of wavelet MRA in this application of shape recognition.
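
    A hedged sketch of the descriptor pipeline (discrete curvature followed by a wavelet decomposition); the wavelet and level count are assumptions, and periodic boundary handling of the closed contour is omitted for brevity.

      import numpy as np
      import pywt

      def curvature_descriptors(xy, wavelet="db2", level=4):
          # Signed curvature of a sampled contour, then a multi-resolution
          # wavelet description of the curvature signal.
          x, y = xy[:, 0], xy[:, 1]
          dx, dy = np.gradient(x), np.gradient(y)
          ddx, ddy = np.gradient(dx), np.gradient(dy)
          kappa = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2) ** 1.5
          return pywt.wavedec(kappa, wavelet, level=level)

      t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
      ellipse = np.c_[2 * np.cos(t), np.sin(t)]
      coeffs = curvature_descriptors(ellipse)
      print("coarse (approximation) coefficients:", coeffs[0][:4])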

  14. Length scales in multi-resolution (hybrid) turbulence simulations

    NASA Astrophysics Data System (ADS)

    Lakshmipathy, Sunil; Girimaji, Sharath S.

    2007-11-01

    In direct numerical simulations (DNS) of turbulence, the smallest length scale in the flow is of the order of the Kolmogorov length scale η, which is determined from molecular viscosity and dissipation. The grid resolution should be of the order of η. In large eddy simulation (LES), the filter width determines the smallest scale of motion in the simulated field. But what is the smallest scale in hybrid or multi-resolution turbulence computation schemes? In many of these schemes, the filter is implicit rather than explicit, and the filter width is not known. This renders grid resolution studies very difficult, if not impossible, in hybrid methods. For such schemes, we propose that the computational Kolmogorov scale, which is determined using eddy viscosity and dissipation, is the smallest scale of motion. We study the length-scale distribution in several multi-resolution Partially-averaged Navier-Stokes (PANS) calculations. It is found that the smallest scale is indeed of the order of the computational Kolmogorov scale and the length-scale distribution is strikingly similar to that in DNS computations. This finding paves the way for efficient and optimal utilization of the grid in multi-scale resolution computations. (This work was funded by Sandia Laboratories, Albuquerque, NM)
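
    The proposed computational Kolmogorov scale simply replaces the molecular viscosity in eta = (nu^3 / eps)^(1/4) with the total viscosity seen by the resolved field; the numerical values below are assumed for illustration.

      # Computational Kolmogorov scale: molecular viscosity is replaced by
      # the total (molecular + eddy) viscosity of the resolved field.
      nu  = 1.5e-5      # molecular kinematic viscosity, m^2/s (air)
      nut = 4.5e-4      # eddy viscosity from the closure (assumed value)
      eps = 1.0         # dissipation rate, m^2/s^3 (assumed value)

      eta   = (nu ** 3 / eps) ** 0.25            # DNS Kolmogorov scale
      eta_c = ((nu + nut) ** 3 / eps) ** 0.25    # computational analogue
      print(f"eta = {eta * 1e3:.3f} mm, eta_c = {eta_c * 1e3:.3f} mm")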

  15. Multisensor multiresolution data fusion for improvement in classification

    NASA Astrophysics Data System (ADS)

    Rubeena, V.; Tiwari, K. C.

    2016-04-01

    The rapid advancement of technology has facilitated easy availability of multisensor and multiresolution remote sensing data. Multisensor, multiresolution data contain complementary information, and fusion of such data may yield significant, application-dependent information which may otherwise remain trapped within. The present work aims at improving classification by fusing features of coarse-resolution hyperspectral (1 m) LWIR and fine-resolution (20 cm) RGB data. The classification map comprises eight classes: Road, Trees, Red Roof, Grey Roof, Concrete Roof, Vegetation, Bare Soil and Unclassified. The processing methodology for the hyperspectral LWIR data comprises dimensionality reduction, resampling of the data by an interpolation technique for registering the two images at the same spatial resolution, and extraction of spatial features to improve classification accuracy. For the fine-resolution RGB data, a vegetation index is computed for classifying the vegetation class and a morphological building index is calculated for buildings. To extract textural features, occurrence and co-occurrence statistics are considered and features are extracted from all three bands of the RGB data. After feature extraction, Support Vector Machines (SVMs) are used for training and classification. To increase classification accuracy, post-processing steps such as removal of spurious noise (e.g., salt-and-pepper noise) are applied, followed by a majority-voting filter within objects for better object classification.
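
    A schematic sketch of the final training/classification step with stacked fused features; the arrays are random stand-ins, so the reported accuracy is only a smoke test of the pipeline shape, not a result.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      # Stand-in fused feature cube: LWIR spectral features resampled to the
      # RGB grid plus RGB texture/index features, flattened per pixel.
      n_pixels, n_lwir, n_rgb = 5000, 10, 6
      X = np.hstack([np.random.rand(n_pixels, n_lwir),   # hyperspectral part
                     np.random.rand(n_pixels, n_rgb)])   # RGB-derived part
      y = np.random.randint(0, 8, n_pixels)              # 8 land-cover classes

      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = SVC(kernel="rbf", C=10.0).fit(Xtr, ytr)
      print("accuracy on held-out pixels:", clf.score(Xte, yte))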

  16. Multiresolution in CROCO (Coastal and Regional Ocean Community model)

    NASA Astrophysics Data System (ADS)

    Debreu, Laurent; Auclair, Francis; Benshila, Rachid; Capet, Xavier; Dumas, Franck; Julien, Swen; Marchesiello, Patrick

    2016-04-01

    CROCO (Coastal and Regional Ocean Community model [1]) is a new oceanic modeling system built upon ROMS_AGRIF and the non-hydrostatic kernel of SNH, gradually including algorithms from MARS3D (sediments) and HYCOM (vertical coordinates). An important objective of CROCO is to provide the possibility of running truly multiresolution simulations. Our previous work on structured mesh refinement [2] allowed us to run two-way nesting with the following major features: conservation, spatial and temporal refinement, and coupling at the barotropic level. In this presentation, we will expose the current developments in CROCO towards multiresolution simulations: connection between neighboring grids at the same level of resolution and load balancing on parallel computers. Results of preliminary experiments will be given both on an idealized test case and on a realistic simulation of the Bay of Biscay with high resolution along the coast. References: [1] : CROCO : http://www.croco-ocean.org [2] : Debreu, L., P. Marchesiello, P. Penven, and G. Cambon, 2012: Two-way nesting in split-explicit ocean models: algorithms, implementation and validation. Ocean Modelling, 49-50, 1-21.

  17. Multiresolution mesh segmentation based on surface roughness and wavelet analysis

    NASA Astrophysics Data System (ADS)

    Roudet, Céline; Dupont, Florent; Baskurt, Atilla

    2007-01-01

    During the last decades, three-dimensional objects have begun to compete with traditional multimedia (images, sounds and videos) and are used by more and more applications. The common model used to represent them is the surface mesh, due to its intrinsic simplicity and efficacy. In this paper, we present a new algorithm for the segmentation of semi-regular triangle meshes via multiresolution analysis. Our method uses several measures which reflect the roughness of the surface for all meshes resulting from the decomposition of the initial model into different fine-to-coarse multiresolution meshes. The geometric data decomposition is based on the lifting scheme. Using that formulation, we have compared various interpolating prediction operators, associated or not with an update step. For each resolution level, the resulting approximation mesh is then partitioned into classes having almost constant roughness by means of a clustering algorithm. The resulting classes gather regions having the same visual appearance in terms of roughness. The last step consists of decomposing the mesh into connected groups of triangles using region growing and merging algorithms. These connected surface patches are of particular interest for adaptive mesh compression, visualization, indexing or watermarking.
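
    For readers unfamiliar with the lifting scheme, the sketch below shows one predict/update level of the (2,2) lifting wavelet on a periodic 1-D signal; it is a generic illustration of the predict/update structure compared in the paper, not its specific operators.

      import numpy as np

      def lifting_forward(s):
          # Predict odd samples from their even neighbours, then update the
          # evens so that the coarse signal preserves the mean.
          even, odd = s[0::2].astype(float), s[1::2].astype(float)
          detail = odd - 0.5 * (even + np.roll(even, -1))       # predict
          coarse = even + 0.25 * (detail + np.roll(detail, 1))  # update
          return coarse, detail

      signal = np.sin(np.linspace(0, 4 * np.pi, 64))
      coarse, detail = lifting_forward(signal)
      print("mean preserved:", np.isclose(signal.mean(), coarse.mean()))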

  18. Drinking Patterns and the Development of Functional Limitations in Older Adults: Longitudinal Analyses of the Health and Retirement Survey

    PubMed Central

    Lin, James C.; Guerrieri-Bang, Joy; Moore, Alison A.

    2011-01-01

    OBJECTIVES To examine whether consistent low-risk drinking is associated with lower risk of developing functional limitations among older adults. METHODS Data were obtained from five waves of the Health and Retirement Study. Function was assessed by questions measuring four physical abilities and five instrumental activities of daily living. Five different drinking patterns were determined using data over two consecutive survey periods. RESULTS Over the follow-up periods, 38.6% of older adults developed functional limitations. Consistent low-risk drinkers had lower odds of developing functional limitations compared to consistent abstainers, and the effect of consistent low-risk drinking was greater among those 50–64 years compared to those ≥65 years. Other drinking patterns were not associated with lower odds of incident functional limitation. DISCUSSION Consistent low-risk drinking was associated with lower odds of developing functional limitations, and this association was greater among older middle-aged adults 50–64 years of age. PMID:21311049

  19. X-ray crystallographic analyses of pig pancreatic alpha-amylase with limit dextrin, oligosaccharide, and alpha-cyclodextrin.

    PubMed

    Larson, Steven B; Day, John S; McPherson, Alexander

    2010-04-13

    Further refinement of the model using maximum likelihood procedures and reevaluation of the native electron density map has shown that crystals of pig pancreatic alpha-amylase, whose structure we reported more than 15 years ago, in fact contain a substantial amount of carbohydrate. The carbohydrate fragments are the products of glycogen digestion carried out as an essential step of the protein's purification procedure. In particular, the substrate-binding cleft contains a limit dextrin of six glucose residues, one of which contains both alpha-(1,4) and alpha-(1,6) linkages to contiguous residues. The disaccharide in the original model, shared between two amylase molecules in the crystal lattice, but also occupying a portion of the substrate-binding cleft, is now seen to be a tetrasaccharide. There are, in addition, several other probable monosaccharide binding sites. Furthermore, we have further reviewed our X-ray diffraction analysis of alpha-amylase complexed with alpha-cyclodextrin. alpha-Amylase binds three cyclodextrin molecules. Glucose residues of two of the rings superimpose upon the limit dextrin and the tetrasaccharide. The limit dextrin superimposes in large part upon linear oligosaccharide inhibitors visualized by other investigators. By comprehensive integration of these complexes we have constructed a model for the binding of polysaccharides having the helical character known to be present in natural substrates such as starch and glycogen.

  20. Multiresolution subspace-based optimization method for inverse scattering problems.

    PubMed

    Oliveri, Giacomo; Zhong, Yu; Chen, Xudong; Massa, Andrea

    2011-10-01

    This paper investigates an approach to inverse scattering problems based on the integration of the subspace-based optimization method (SOM) within a multifocusing scheme in the framework of the contrast source formulation. The scattering equations are solved by a nested three-step procedure composed of (a) an outer multiresolution loop dealing with the identification of the regions of interest within the investigation domain through an iterative information-acquisition process, (b) a spectrum analysis step devoted to the reconstruction of the deterministic components of the contrast sources, and (c) an inner optimization loop aimed at retrieving the ambiguous components of the contrast sources through a conjugate gradient minimization of a suitable objective function. A set of representative reconstruction results is discussed to provide numerical evidence of the effectiveness of the proposed algorithmic approach as well as to assess the features and potentialities of the multifocusing integration in comparison with the state-of-the-art SOM implementation.

  1. Multiresolution stochastic simulations of reaction-diffusion processes.

    PubMed

    Bayati, Basil; Chatelain, Philippe; Koumoutsakos, Petros

    2008-10-21

    Stochastic simulations of reaction-diffusion processes are used extensively for the modeling of complex systems in areas ranging from biology and social sciences to ecosystems and materials processing. These processes often exhibit disparate scales that render their simulation prohibitive even for massive computational resources. The problem is resolved by introducing a novel stochastic multiresolution method that enables the efficient simulation of reaction-diffusion processes as modeled by many-particle systems. The proposed method quantifies and efficiently handles the associated stiffness in simulating the system dynamics and its computational efficiency and accuracy are demonstrated in simulations of a model problem described by the Fisher-Kolmogorov equation. The method is general and can be applied to other many-particle models of physical processes.
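
    A hedged tau-leaping sketch of stochastic Fisher-Kolmogorov dynamics (logistic reactions plus nearest-neighbour hopping on a periodic 1-D lattice); the paper's method is more sophisticated, all rates below are illustrative, and rare negative counts from the leap approximation are simply clamped.

      import numpy as np

      rng = np.random.default_rng(0)
      L, K, r, D, dt, steps = 100, 200, 1.0, 0.5, 0.01, 4000
      n = np.zeros(L, dtype=np.int64); n[:5] = K        # seed the left edge

      for _ in range(steps):
          births = rng.poisson(r * n * dt)              # A -> 2A
          deaths = rng.poisson(r * n * n / K * dt)      # logistic crowding
          hops = rng.binomial(n, min(1.0, 2 * D * dt))  # walkers that move
          left = rng.binomial(hops, 0.5)                # half hop leftwards
          n = n + births - deaths - hops
          n = n + np.roll(left, -1) + np.roll(hops - left, 1)
          n = np.maximum(n, 0)                          # clamp rare negatives
      print("front position ~ cell", np.argmax(n < K // 2))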

  2. Towards Online Multiresolution Community Detection in Large-Scale Networks

    PubMed Central

    Huang, Jianbin; Sun, Heli; Liu, Yaguang; Song, Qinbao; Weninger, Tim

    2011-01-01

    The investigation of community structure in networks has aroused great interest in multiple disciplines. One of the challenges is to find local communities from a starting vertex in a network without global information about the entire network. The accuracy of many existing methods depends on a priori assumptions about network properties and on predefined parameters. In this paper, we introduce a new quality function of local community and present a fast local expansion algorithm for uncovering communities in large-scale networks. The proposed algorithm can detect multiresolution communities from a source vertex or communities covering the whole network. Experimental results show that the proposed algorithm is efficient and well-behaved in both real-world and synthetic networks. PMID:21887325
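
    A minimal sketch of greedy local expansion from a seed vertex, using a simple internal-to-boundary edge ratio as an illustrative quality function (not the paper's):

      import networkx as nx

      def local_expand(G, seed):
          # Grow a community by repeatedly adding the neighbouring vertex
          # that most improves internal / (internal + boundary) edges.
          C = {seed}
          def quality(S):
              internal = G.subgraph(S).number_of_edges()
              boundary = sum(1 for u in S for v in G[u] if v not in S)
              total = internal + boundary
              return internal / total if total else 0.0
          improved = True
          while improved:
              improved = False
              frontier = {v for u in C for v in G[u]} - C
              best = max(frontier, key=lambda v: quality(C | {v}), default=None)
              if best is not None and quality(C | {best}) > quality(C):
                  C.add(best); improved = True
          return C

      G = nx.karate_club_graph()
      print(sorted(local_expand(G, seed=0)))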

  3. Adaptive Covariance Inflation in a Multi-Resolution Assimilation Scheme

    NASA Astrophysics Data System (ADS)

    Hickmann, K. S.; Godinez, H. C.

    2015-12-01

    When forecasts are performed using modern data assimilation methods, observation and model error can be scale-dependent. During data assimilation, the blending of error across scales can result in model divergence, since large errors at one scale can be propagated across scales during the analysis step. Wavelet-based multi-resolution analysis can be used to separate scales in model and observations during the application of an ensemble Kalman filter. However, this separation is done at the cost of implementing an ensemble Kalman filter at each scale. This presents problems when tuning the covariance inflation parameter at each scale. We present a method to adaptively tune a scale-dependent covariance inflation vector based on balancing the covariance of the innovation and the covariance of observations of the ensemble. Our methods are demonstrated on a one-dimensional Kuramoto-Sivashinsky (K-S) model known to demonstrate non-linear interactions between scales.
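
    One common way to tune inflation from innovation statistics (in the spirit of, but not identical to, the abstract's scale-dependent method) chooses lambda so that lambda * H P H^T + R matches the observed innovation magnitude in trace; innovation_inflation below is a hypothetical helper.

      import numpy as np

      def innovation_inflation(d, HPHt, R):
          # Innovation-matching estimate: E[d d^T] ~ lambda * H P H^T + R.
          lam = (d @ d - np.trace(R)) / np.trace(HPHt)
          return max(lam, 1.0)      # never deflate below unity

      rng = np.random.default_rng(1)
      R = 0.5 * np.eye(20)
      HPHt = 0.2 * np.eye(20)       # under-dispersive forecast ensemble
      d = rng.multivariate_normal(np.zeros(20), 3.0 * HPHt + R)  # innovations
      print("inflation factor:", innovation_inflation(d, HPHt, R))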

  4. Copy-move forgery detection using multiresolution local binary patterns.

    PubMed

    Davarzani, Reza; Yaghmaie, Khashayar; Mozaffari, Saeed; Tapak, Meysam

    2013-09-10

    Copy-move forgery is one of the most popular tampering artifacts in digital images. In this paper, we present an efficient method for copy-move forgery detection using Multiresolution Local Binary Patterns (MLBP). The proposed method is robust to geometric distortions and illumination variations of duplicated regions. Furthermore, the proposed block-based method recovers the parameters of the geometric transformations. First, the image is divided into overlapping blocks and feature vectors for each block are extracted using LBP operators. The feature vectors are sorted in lexicographic order. Duplicated image blocks are then identified in the block-matching step, using a k-d tree to reduce matching time. Finally, in order to both determine the parameters of the geometric transformations and remove possible false matches, the RANSAC (RANdom SAmple Consensus) algorithm is used. Experimental results show that the proposed approach precisely detects duplicated regions even after distortions such as rotation, scaling, JPEG compression, blurring and noise addition.
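
    A condensed sketch of the front end of such a detector: per-block LBP histograms sorted lexicographically, with exact-match detection standing in for the k-d tree matching and RANSAC verification steps. Block size, stride and LBP parameters are illustrative.

      import numpy as np
      from skimage.feature import local_binary_pattern

      def block_lbp_features(gray, block=16, step=8, P=8, R=1.0):
          # LBP histogram per overlapping block; duplicated regions yield
          # (near-)identical vectors that become adjacent after sorting.
          lbp = local_binary_pattern(gray, P, R, method="uniform")
          feats, locs = [], []
          for i in range(0, gray.shape[0] - block + 1, step):
              for j in range(0, gray.shape[1] - block + 1, step):
                  h, _ = np.histogram(lbp[i:i + block, j:j + block],
                                      bins=P + 2, range=(0, P + 2))
                  feats.append(h); locs.append((i, j))
          feats, locs = np.asarray(feats), np.asarray(locs)
          order = np.lexsort(feats.T[::-1])          # lexicographic sort
          return feats[order], locs[order]

      img = np.random.rand(128, 128)
      img[64:96, 64:96] = img[0:32, 0:32]            # plant a copy-move
      feats, locs = block_lbp_features(img)
      dup = np.where((np.diff(feats, axis=0) == 0).all(axis=1))[0]
      print("candidate duplicated block pairs:",
            [(tuple(locs[k]), tuple(locs[k + 1])) for k in dup[:3]])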

  5. Multiresolution strategies for the numerical solution of optimal control problems

    NASA Astrophysics Data System (ADS)

    Jain, Sachin

    There exist many numerical techniques for solving optimal control problems, but less work has been done on making these algorithms faster and more robust. The main motivation of this work is to solve optimal control problems accurately in a fast and efficient way. Optimal control problems are often characterized by discontinuities or switchings in the control variables. One way of accurately capturing the irregularities in the solution is to use a high-resolution (dense) uniform grid, which requires a large amount of computational resources, both in terms of CPU time and memory. Hence, in order to accurately capture any irregularities in the solution using few computational resources, one can refine the mesh locally in the region close to an irregularity instead of refining it uniformly over the whole domain. Therefore, a novel multiresolution scheme for data compression has been designed, which is shown to outperform similar data compression schemes; specifically, we have shown that the proposed approach yields fewer grid points than a common multiresolution data compression scheme. The validity of the proposed mesh refinement algorithm has been verified by solving several challenging initial-boundary value problems for evolution equations in 1D. The examples demonstrate the stability and robustness of the proposed algorithm, which adapted dynamically to any existing or emerging irregularities in the solution by automatically allocating more grid points to regions where the solution exhibited sharp features and fewer points to regions where the solution was smooth. Thereby, computational time and memory usage were reduced significantly, while maintaining an accuracy equivalent to that obtained on a fine uniform mesh. Next, a direct multiresolution-based approach for solving trajectory optimization problems is developed. The original optimal control problem is transcribed into a

  6. Robust 2D phase unwrapping based on multiresolution

    NASA Astrophysics Data System (ADS)

    Davidson, Gordon W.; Bamler, Richard

    1996-12-01

    An approach to 2D phase unwrapping for SAR interferometry is presented, based on separate steps of coarse-phase and fine-phase estimation. The coarse phase is constructed from instantaneous frequency estimates obtained using adaptive multiresolution, in which difference frequencies between resolution levels are estimated and then summed over resolution levels such that a conservative phase-gradient field is maintained. This allows a smoothed coarse unwrapped phase, which attains the full terrain height, to be obtained with an unweighted least-squares phase reconstruction. The coarse phase is used to remove the bulk of the phase variation of the interferogram, allowing more accurate multilooking, and the resulting fine phase is unwrapped with weighted least squares. The unwrapping approach is verified on simulated interferograms.
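
    The least-squares step has a classical unweighted form that is easy to sketch: build the divergence of the wrapped phase gradients and solve the resulting Poisson problem with a DCT (Ghiglia-Romero style, Neumann boundaries). This is a generic illustration, not the paper's exact estimator.

      import numpy as np
      from scipy.fft import dctn, idctn

      def unwrap_ls(psi):
          wrap = lambda p: (p + np.pi) % (2 * np.pi) - np.pi
          gx, gy = wrap(np.diff(psi, axis=1)), wrap(np.diff(psi, axis=0))
          rho = np.zeros_like(psi)              # divergence of the gradients
          rho[:, :-1] += gx; rho[:, 1:] -= gx
          rho[:-1, :] += gy; rho[1:, :] -= gy
          M, N = psi.shape
          i, j = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
          denom = 2 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2)
          denom[0, 0] = 1.0                     # pin the zero-frequency mode
          phi_hat = dctn(rho, norm="ortho") / denom
          phi_hat[0, 0] = 0.0                   # zero-mean solution
          return idctn(phi_hat, norm="ortho")

      true = np.fromfunction(lambda i, j: 0.002 * (i - 64) ** 2, (128, 128))
      est = unwrap_ls(np.angle(np.exp(1j * true)))
      print("max error:",
            np.abs((est - est.mean()) - (true - true.mean())).max())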

  7. Out-of-Core Construction and Visualization of Multiresolution Surfaces

    SciTech Connect

    Lindstrom, P

    2003-02-03

    We present a method for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time system pages in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a rectilinear octree grid to coarsen and create a hierarchy for the mesh, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The run-time component of our system uses asynchronous rendering and view-dependent refinement driven by screen-space error and visibility. The system exploits frame-to-frame coherence and has been designed to allow preemptive refinement at the granularity of individual vertices to support refinement on a time budget. Our results indicate a significant improvement in processing speed over previous methods for out-of-core multiresolution surface construction. Meanwhile, all phases of the method are disk and memory efficient, and are fairly straightforward to implement.
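
    The vertex-clustering idea is compact enough to sketch: quantize vertices to a uniform grid and collapse each cell to one representative. Here a plain average stands in for the paper's quadric-optimal placement.

      import numpy as np

      def vertex_cluster(vertices, cell):
          # Quantize vertices to an octree-like uniform grid and replace
          # every cell's vertices by their centroid.
          keys = np.floor(vertices / cell).astype(np.int64)
          _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                         return_counts=True)
          sums = np.zeros((counts.size, 3))
          np.add.at(sums, inverse, vertices)
          return sums / counts[:, None], inverse   # representatives, vertex map

      V = np.random.rand(10000, 3)
      reps, vmap = vertex_cluster(V, cell=0.1)
      print(V.shape[0], "vertices ->", reps.shape[0], "representatives")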

  8. Out-of-Core Construction and Visualization of Multiresolution Surfaces

    SciTech Connect

    Lindstrom, P

    2002-11-04

    We present a method for end-to-end out-of-core simplification and view-dependent visualization of large surfaces. The method consists of three phases: (1) memory insensitive simplification; (2) memory insensitive construction of a multiresolution hierarchy; and (3) run-time, output-sensitive, view-dependent rendering and navigation of the mesh. The first two off-line phases are performed entirely on disk, and use only a small, constant amount of memory, whereas the run-time system pages in only the rendered parts of the mesh in a cache coherent manner. As a result, we are able to process and visualize arbitrarily large meshes given a sufficient amount of disk space; a constant multiple of the size of the input mesh. Similar to recent work on out-of-core simplification, our memory insensitive method uses vertex clustering on a uniform octree grid to coarsen a mesh and create a hierarchy, and a quadric error metric to choose vertex positions at all levels of resolution. We show how the quadric information can be used to concisely represent vertex position, surface normal, error, and curvature information for anisotropic view-dependent coarsening and silhouette preservation. The run-time component of our system uses asynchronous rendering and view-dependent refinement driven by screen-space error and visibility. The system exploits frame-to-frame coherence and has been designed to allow preemptive refinement at the granularity of individual vertices to support refinement on a time budget. Our results indicate a significant improvement in processing speed over previous methods for out-of-core multiresolution surface construction. Meanwhile, all phases of the method are both disk and memory efficient, and are fairly straightforward to implement.

  9. A Multi-Resolution Data Structure for Two-Dimensional Morse Functions

    SciTech Connect

    Bremer, P-T; Edelsbrunner, H; Hamann, B; Pascucci, V

    2003-07-30

    The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.

  10. Multi-resolution community detection based on generalized self-loop rescaling strategy

    NASA Astrophysics Data System (ADS)

    Xiang, Ju; Tang, Yan-Ni; Gao, Yuan-Yuan; Zhang, Yan; Deng, Ke; Xu, Xiao-Ke; Hu, Ke

    2015-08-01

    Community detection is of considerable importance for analyzing the structure and function of complex networks. Many real-world networks may possess community structures at multiple scales, and recently various multi-resolution methods have been proposed to identify community structures at different scales. In this paper, we present a class of multi-resolution methods based on the generalized self-loop rescaling strategy. The self-loop rescaling strategy provides one uniform ansatz for the design of multi-resolution community detection methods. Many quality functions for community detection can be unified in the framework of self-loop rescaling. The resulting multi-resolution quality functions can be optimized directly using existing modularity-optimization algorithms. Several derived multi-resolution methods are applied to the analysis of community structures in several synthetic and real-world networks. The results show that these methods can find the pre-defined substructures in synthetic networks and the real splits observed in real-world networks. Finally, we give a discussion of the methods themselves and their relationship. We hope that this study can be helpful for understanding multi-resolution methods and provide useful insight into the design of new community detection methods.
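
    A hedged sketch of the self-loop rescaling ansatz: add a self-loop of weight r to every vertex and re-run a standard modularity optimizer, with larger r shifting the optimum toward smaller communities. How a given library counts self-loops is an assumption here; networkx's Louvain implementation is used only as a stand-in optimizer.

      import networkx as nx

      def communities_at_scale(G, r, seed=0):
          # Self-loop rescaling: every vertex gets a self-loop of weight r;
          # the optimizer then resolves structure at a different scale.
          H = nx.Graph()
          H.add_weighted_edges_from((u, v, 1.0) for u, v in G.edges)
          H.add_weighted_edges_from((v, v, float(r)) for v in G)
          return nx.community.louvain_communities(H, weight="weight", seed=seed)

      G = nx.karate_club_graph()
      for r in (0.0, 5.0, 20.0):
          print(f"r = {r}: {len(communities_at_scale(G, r))} communities")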

  11. Leaf Length Tracker: a novel approach to analyse leaf elongation close to the thermal limit of growth in the field.

    PubMed

    Nagelmüller, Sebastian; Kirchgessner, Norbert; Yates, Steven; Hiltpold, Maya; Walter, Achim

    2016-03-01

    Leaf growth in monocot crops such as wheat and barley largely follows the daily temperature course, particularly under cold but humid springtime field conditions. Knowledge of the temperature response of leaf extension, particularly variations close to the thermal limit of growth, helps define physiological growth constraints and breeding-related genotypic differences among cultivars. Here, we present a novel method, called 'Leaf Length Tracker' (LLT), suitable for measuring leaf elongation rates (LERs) of cereals and other grasses with high precision and high temporal resolution under field conditions. The method is based on image sequence analysis, using a marker tracking approach to calculate LERs. We applied the LLT to several varieties of winter wheat (Triticum aestivum), summer barley (Hordeum vulgare), and ryegrass (Lolium perenne), grown in the field and in growth cabinets under controlled conditions. LLT is easy to use and we demonstrate its reliability and precision under changing weather conditions that include temperature, wind, and rain. We found that leaf growth stopped at a base temperature of 0°C for all studied species and we detected significant genotype-specific differences in LER with rising temperature. The data obtained were statistically robust and were reproducible in the tested environments. Using LLT, we were able to detect subtle differences (sub-millimeter) in leaf growth patterns. This method will allow the collection of leaf growth data in a wide range of future field experiments on different graminoid species or varieties under varying environmental or treatment conditions.
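
    The core of such a marker-tracking approach can be sketched with normalized cross-correlation template matching; the synthetic frames below stand in for field images (this is not the LLT code itself), and per-frame displacements approximate the elongation rate.

      import numpy as np
      import cv2   # opencv-python

      def track_marker(frames, template):
          # Normalized cross-correlation: best-match location per frame.
          positions = []
          for frame in frames:
              res = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
              _, _, _, max_loc = cv2.minMaxLoc(res)
              positions.append(max_loc)                 # (x, y)
          return np.array(positions)

      # Synthetic sequence: a textured marker drifting 2 px/frame rightwards.
      frames = []
      for k in range(5):
          img = np.zeros((100, 100), np.float32)
          img[40:50, 10 + 2 * k: 20 + 2 * k] = np.linspace(0.5, 1.0, 10,
                                                           dtype=np.float32)
          frames.append(img)
      template = frames[0][40:50, 10:20].copy()
      positions = track_marker(frames, template)
      print("per-frame displacement (px):", np.diff(positions[:, 0]))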

  12. Leaf Length Tracker: a novel approach to analyse leaf elongation close to the thermal limit of growth in the field

    PubMed Central

    Kirchgessner, Norbert; Yates, Steven; Hiltpold, Maya; Walter, Achim

    2016-01-01

    Leaf growth in monocot crops such as wheat and barley largely follows the daily temperature course, particularly under cold but humid springtime field conditions. Knowledge of the temperature response of leaf extension, particularly variations close to the thermal limit of growth, helps define physiological growth constraints and breeding-related genotypic differences among cultivars. Here, we present a novel method, called ‘Leaf Length Tracker’ (LLT), suitable for measuring leaf elongation rates (LERs) of cereals and other grasses with high precision and high temporal resolution under field conditions. The method is based on image sequence analysis, using a marker tracking approach to calculate LERs. We applied the LLT to several varieties of winter wheat (Triticum aestivum), summer barley (Hordeum vulgare), and ryegrass (Lolium perenne), grown in the field and in growth cabinets under controlled conditions. LLT is easy to use and we demonstrate its reliability and precision under changing weather conditions that include temperature, wind, and rain. We found that leaf growth stopped at a base temperature of 0°C for all studied species and we detected significant genotype-specific differences in LER with rising temperature. The data obtained were statistically robust and were reproducible in the tested environments. Using LLT, we were able to detect subtle differences (sub-millimeter) in leaf growth patterns. This method will allow the collection of leaf growth data in a wide range of future field experiments on different graminoid species or varieties under varying environmental or treatment conditions. PMID:26818912

  13. Native and nonnative fish populations of the Colorado River are food limited--evidence from new food web analyses

    USGS Publications Warehouse

    Kennedy, Theodore A.; Cross, Wyatt F.; Hall, Robert O.; Baxter, Colden V.; Rosi-Marshall, Emma J.

    2013-01-01

    Fish populations in the Colorado River downstream from Glen Canyon Dam appear to be limited by the availability of high-quality invertebrate prey. Midge and blackfly production is low and nonnative rainbow trout in Glen Canyon and native fishes in Grand Canyon consume virtually all of the midge and blackfly biomass that is produced annually. In Glen Canyon, the invertebrate assemblage is dominated by nonnative New Zealand mudsnails, the food web has a simple structure, and transfers of energy from the base of the web (algae) to the top of the web (rainbow trout) are inefficient. The food webs in Grand Canyon are more complex relative to Glen Canyon, because, on average, each species in the web is involved in more interactions and feeding connections. Based on theory and on studies from other ecosystems, the structure and organization of Grand Canyon food webs should make them more stable and less susceptible to large changes following perturbations of the flow regime relative to food webs in Glen Canyon. In support of this hypothesis, Grand Canyon food webs were much less affected by a 2008 controlled flood relative to the food web in Glen Canyon.

  14. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry

    NASA Astrophysics Data System (ADS)

    Caracappa, Peter F.; Rhodes, Ashley; Fiedler, Derek

    2014-09-01

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. The recently reduced recommended dose limits to the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.
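
    One simple way to picture such a multi-resolution embedding (the paper develops and compares two specific coupling methods, which differ from this sketch): refine the coarse voxels covering the eye region and overwrite them with high-resolution tissue IDs. All arrays and IDs below are stand-ins.

      import numpy as np

      def embed_high_res(body, region, eye_ids, factor):
          # Refine the coarse voxels in the region, then overwrite them
          # with high-resolution tissue IDs wherever the eye is defined.
          coarse = body[region]
          fine = np.kron(coarse, np.ones((factor,) * 3, dtype=body.dtype))
          mask = eye_ids > 0
          fine[mask] = eye_ids[mask]
          return fine

      body = np.random.randint(1, 5, (50, 50, 50))            # coarse tissue IDs
      region = (slice(10, 14), slice(20, 24), slice(20, 24))  # 4x4x4 block
      eye = np.zeros((16, 16, 16), dtype=body.dtype)          # 4x finer sub-grid
      eye[4:12, 4:12, 4:12] = 10                              # eye tissue ID
      print("refined block shape:", embed_high_res(body, region, eye, 4).shape)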

  15. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry.

    PubMed

    Caracappa, Peter F; Rhodes, Ashley; Fiedler, Derek

    2014-09-21

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too large to accurately represent the dimensions of small features such as the eye. The recently reduced recommended dose limits to the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, have lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method is compared against existing computational phantoms.

  16. Continuously zoom imaging probe for the multi-resolution foveated laparoscope

    PubMed Central

    Qin, Yi; Hua, Hong

    2016-01-01

    In modern minimally invasive surgeries (MIS), standard laparoscopes suffer from the tradeoff between spatial resolution and field of view (FOV). The inability to simultaneously acquire high-resolution images for accurate operation and wide-angle overviews for situational awareness limits the efficiency and outcome of MIS. A dual-view multi-resolution foveated laparoscope (MRFL), which can simultaneously provide the surgeon with a high-resolution view as well as a wide-angle overview, was proposed and demonstrated to have great potential for improving MIS. Although experiments demonstrated that the high-magnification probe has an adequate magnification for viewing surgical details, the dual-view MRFL is limited to two fixed levels of magnification. A fine adjustment of the magnification is highly desired for obtaining high-resolution images with the desired field coverage. In this paper, a high-magnification probe with continuous zooming capability and no mechanically moving parts is demonstrated. By taking advantage of two electrically tunable lenses, one for optical zoom and the other for image focus compensation, the optical magnification of the high-magnification probe varies from 2 × to 3 × relative to that of the wide-angle probe, while the focused object position stays the same as for the wide-angle probe. The optical design and the tunable lens analysis are presented, followed by a prototype demonstration. PMID:27446645

  17. Continuously zoom imaging probe for the multi-resolution foveated laparoscope.

    PubMed

    Qin, Yi; Hua, Hong

    2016-04-01

    In modern minimally invasive surgeries (MIS), standard laparoscopes suffer from the tradeoff between spatial resolution and field of view (FOV). The inability to simultaneously acquire high-resolution images for accurate operation and wide-angle overviews for situational awareness limits the efficiency and outcome of MIS. A dual-view multi-resolution foveated laparoscope (MRFL), which can simultaneously provide the surgeon with a high-resolution view as well as a wide-angle overview, was proposed and demonstrated to have great potential for improving MIS. Although experiments demonstrated that the high-magnification probe has an adequate magnification for viewing surgical details, the dual-view MRFL is limited to two fixed levels of magnification. A fine adjustment of the magnification is highly desired for obtaining high-resolution images with the desired field coverage. In this paper, a high-magnification probe with continuous zooming capability and no mechanically moving parts is demonstrated. By taking advantage of two electrically tunable lenses, one for optical zoom and the other for image focus compensation, the optical magnification of the high-magnification probe varies from 2 × to 3 × relative to that of the wide-angle probe, while the focused object position stays the same as for the wide-angle probe. The optical design and the tunable lens analysis are presented, followed by a prototype demonstration.

  18. Multi-tissue analyses reveal limited inter-annual and seasonal variation in mercury exposure in an Antarctic penguin community.

    PubMed

    Brasso, Rebecka L; Polito, Michael J; Emslie, Steven D

    2014-10-01

    Inter-annual variation in tissue mercury concentrations in birds can result from annual changes in the bioavailability of mercury or shifts in dietary composition and/or trophic level. We investigated potential annual variability in mercury dynamics in the Antarctic marine food web using Pygoscelis penguins as biomonitors. Eggshell membrane, chick down, and adult feathers were collected from three species of sympatrically breeding Pygoscelis penguins during the austral summers of 2006/2007-2010/2011. To evaluate the hypothesis that mercury concentrations in penguins exhibit significant inter-annual variation and to determine the potential source of such variation (dietary or environmental), we compared tissue mercury concentrations with trophic levels as indicated by δ(15)N values from all species and tissues. Overall, no inter-annual variation in mercury was observed in adult feathers suggesting that mercury exposure, on an annual scale, was consistent for Pygoscelis penguins. However, when examining tissues that reflected more discrete time periods (chick down and eggshell membrane) relative to adult feathers, we found some evidence of inter-annual variation in mercury exposure during penguins' pre-breeding and chick rearing periods. Evidence of inter-annual variation in penguin trophic level was also limited suggesting that foraging ecology and environmental factors related to the bioavailability of mercury may provide more explanatory power for mercury exposure compared to trophic level alone. Even so, the variable strength of relationships observed between trophic level and tissue mercury concentrations across and within Pygoscelis penguin species suggest that caution is required when selecting appropriate species and tissue combinations for environmental biomonitoring studies in Antarctica.

  19. Multi-Resolution Clustering Analysis and Visualization of Around One Million Synthetic Earthquake Events

    NASA Astrophysics Data System (ADS)

    Kaneko, J. Y.; Yuen, D. A.; Dzwinel, W.; Boryszko, K.; Ben-Zion, Y.; Sevre, E. O.

    2002-12-01

The study of seismic patterns with synthetic data is important for analyzing the seismic hazard of faults because one can precisely control the spatial and temporal domains. Using modern clustering analysis from statistics and a recently introduced visualization software package, AMIRA, we have examined the multi-resolution nature of a total assemblage of 922,672 earthquake events in 4 numerically simulated models, which have different constitutive parameters, over 2 disparately different time intervals in a 3D spatial domain. The evolution of stress and slip on the fault plane was simulated with 3D elastic dislocation theory for a configuration representing the central San Andreas Fault (Ben-Zion, J. Geophys. Res., 101, 5677-5706, 1996). The 4 different models represent various levels of fault zone disorder and have the following brittle properties and names: uniform properties (model U), a Parkfield-type asperity (model A), fractal properties (model F), and multi-size heterogeneities (model M). We employed the MNN (mutual nearest neighbor) clustering method and developed a C program that simultaneously calculates a number of parameters related to the locations of the earthquakes and their magnitude values. Visualization was then used to look at the geometrical locations of the hypocenters and the evolution of seismic patterns. We wrote an AmiraScript that allows us to pass the parameters in an interactive format. With data sets consisting of 150-year time intervals, we have unveiled the distinctly multi-resolutional nature of the spatial-temporal pattern of small and large earthquake correlations shown previously by Eneva and Ben-Zion (J. Geophys. Res., 102, 24513-24528, 1997). In order to search for clearer possible stationary patterns and substructures within the clusters, we have also carried out the same analysis for corresponding data sets with time extending to several thousand years. The larger data sets were studied with finer and finer time intervals and multi
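
    As an illustration of the clustering step, the following is a minimal mutual-nearest-neighbor (MNN) sketch in Python on synthetic hypocenter coordinates; the function name, the choice k = 5, and the union-find bookkeeping are ours, not the authors' C program.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def mnn_clusters(points, k=5):
        """Group points into clusters of mutual nearest neighbors.

        Two events are linked when each lies within the other's k nearest
        neighbors; connected components of these links form the clusters.
        """
        tree = cKDTree(points)
        _, nbrs = tree.query(points, k=k + 1)  # first neighbor is the point itself
        nbrs = nbrs[:, 1:]

        # union-find over mutual-neighbor links
        parent = np.arange(len(points))
        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path halving
                i = parent[i]
            return i

        for i in range(len(points)):
            for j in nbrs[i]:
                if i in nbrs[j]:               # mutual link
                    parent[find(i)] = find(j)
        return np.array([find(i) for i in range(len(points))])

    # example: two synthetic hypocenter swarms
    rng = np.random.default_rng(0)
    pts = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(10, 1, (100, 3))])
    labels = mnn_clusters(pts)
    print(len(np.unique(labels)), "clusters")
    ```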

  20. An adaptive multiresolution gradient-augmented level set method for advection problems

    NASA Astrophysics Data System (ADS)

Schneider, Kai; Kolomenskiy, Dmitry; Nave, Jean-Christophe

    2014-11-01

Advection problems are encountered in many applications, such as transport of passive scalars modeling pollution or mixing in chemical engineering. In some problems, the solution develops small-scale features localized in a part of the computational domain. If the location of these features changes in time, the efficiency of the numerical method can be significantly improved by adapting the partition dynamically to the solution. We present a space-time adaptive scheme for solving advection equations in two space dimensions. The third-order accurate gradient-augmented level set method, using a semi-Lagrangian formulation with backward time integration, is coupled with a point-value multiresolution analysis using Hermite interpolation. Thus locally refined dyadic spatial grids are introduced, which are efficiently implemented with dynamic quad-tree data structures. For adaptive time integration, an embedded Runge-Kutta method is employed. The precision of the new fully adaptive method is analysed, and the speed-up in CPU time and the memory compression with respect to a uniform grid discretization are reported.
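
    To make the refinement criterion concrete, here is a 1D point-value multiresolution sketch under our own simplifying assumptions (cubic Lagrange prediction instead of the paper's 2D Hermite interpolation, and an illustrative threshold): cells whose midpoint detail coefficient exceeds the tolerance are the ones a quad-tree scheme would refine.

    ```python
    import numpy as np

    def refinement_flags(sample_fn, x, eps):
        """Flag cells whose midpoint detail coefficient exceeds eps.

        Point-value multiresolution: predict the midpoint of each cell from
        four coarse-grid neighbors by cubic interpolation; the prediction
        error (the detail coefficient) measures local smoothness and decides
        where the dyadic grid must be refined.
        """
        f = sample_fn(x)
        flags = np.zeros(len(x) - 1, dtype=bool)
        for i in range(1, len(x) - 2):
            # centered cubic interpolation at the midpoint of cell [x_i, x_{i+1}]
            pred = (-f[i - 1] + 9 * f[i] + 9 * f[i + 1] - f[i + 2]) / 16
            true = sample_fn(0.5 * (x[i] + x[i + 1]))
            flags[i] = abs(true - pred) > eps
        return flags

    # steep localized feature: only cells near x = 0.5 should be flagged
    sample = lambda x: np.tanh(50 * (x - 0.5))
    x = np.linspace(0.0, 1.0, 65)
    print("refine cells:", np.flatnonzero(refinement_flags(sample, x, 1e-3)))
    ```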

  1. Multi-resolution multi-object statistical shape models based on the locality assumption.

    PubMed

    Wilms, Matthias; Handels, Heinz; Ehrhardt, Jan

    2017-02-17

Statistical shape models learned from a population of previously observed training shapes are nowadays widely used in medical image analysis to aid segmentation or classification. However, providing an appropriate and representative training population of preferably manual segmentations is typically either very labor-intensive or even impossible. Therefore, statistical shape models in practice frequently suffer from the high-dimension-low-sample-size (HDLSS) problem, resulting in models with insufficient expressiveness. In this paper, a novel approach for learning representative multi-resolution multi-object statistical shape models from a small number of training samples that adequately model the variability of each individual object as well as their interrelations is presented. The method is based on the assumption of locality, which means that local shape variations have limited effects in distant areas and, therefore, can be modeled independently. This locality assumption is integrated into the standard statistical shape modeling framework by manipulating the sample covariance matrix (non-zero covariances between distant landmarks are set to zero). To allow for multi-object modeling, a method for computing distances between points located on different object shapes is proposed. Furthermore, different levels of locality are introduced by deriving a multi-resolution scheme, which is equipped with a method to combine variability information modeled at different levels into a single shape model. This combined representation of global and local variability in a single shape model allows the use of the classical active shape model strategy for model-based image segmentation. An extensive evaluation based on a public database of 247 chest radiographs is performed to show the modeling and segmentation capabilities of the proposed approach in single- and multi-object HDLSS scenarios. The new approach is not only compared to the classical shape modeling method but also
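
    A minimal sketch of the covariance manipulation at a single level of locality, assuming 2D landmarks and an illustrative distance threshold (the multi-object distance computation and the multi-resolution combination are omitted):

    ```python
    import numpy as np

    def local_shape_model(shapes, landmark_xy, radius):
        """Statistical shape model under the locality assumption (sketch).

        shapes:      (n_samples, 2*n_landmarks), coordinates interleaved
                     as (x1, y1, x2, y2, ...)
        landmark_xy: (n_landmarks, 2) mean landmark positions
        radius:      covariances between landmarks farther apart than this
                     are zeroed, so distant variations decouple
        """
        mean = shapes.mean(axis=0)
        cov = np.cov(shapes, rowvar=False)

        # pairwise landmark distances, expanded to (x, y) coordinate blocks
        d = np.linalg.norm(landmark_xy[:, None] - landmark_xy[None, :], axis=-1)
        mask = np.kron(d <= radius, np.ones((2, 2), dtype=bool))
        cov_local = np.where(mask, cov, 0.0)
        # note: plain thresholding can break positive semi-definiteness;
        # the paper treats this point with more care than this sketch does

        evals, evecs = np.linalg.eigh(cov_local)
        order = np.argsort(evals)[::-1]       # modes sorted by variance
        return mean, evals[order], evecs[:, order]
    ```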

  2. Multi-Resolution Seismic Tomography Based on Recursive Tessellation Hierarchy

    SciTech Connect

    Simmons, N A; Myers, S C; Ramirez, A

    2009-07-01

    tomographic problems. They also apply the progressive inversion approach with Pn waves traveling within the Middle East region and compare the results to simple tomographic inversions. As expected from synthetic testing, the progressive approach results in detailed structure where there is high data density and broader regional anomalies where seismic information is sparse. The ultimate goal is to use these methods to produce a seamless, multi-resolution global tomographic model with local model resolution determined by the constraints afforded by available data. They envisage this new technique as the general approach to be employed for future multi-resolution model development with complex arrangements of regional and teleseismic information.

  3. Fast multiresolution search algorithm for optimal retrieval in large multimedia databases

    NASA Astrophysics Data System (ADS)

    Song, Byung C.; Kim, Myung J.; Ra, Jong Beom

    1999-12-01

Most content-based image retrieval systems require a distance computation for each candidate image in the database. As a brute-force approach, exhaustive search can be employed for this computation. However, exhaustive search is time-consuming and limits the usefulness of such systems. Thus, there is a growing demand for a fast algorithm which provides the same retrieval results as the exhaustive search. In this paper, we propose a fast search algorithm based on a multi-resolution data structure. The proposed algorithm computes the lower bound of the distance at each level and compares it with the latest minimum distance, starting from the low-resolution level. Once it is larger than the latest minimum distance, we can exclude the candidate without calculating the full-resolution distance. By doing this, we can dramatically reduce the total computational complexity. It is noticeable that the proposed fast algorithm provides not only the same retrieval results as the exhaustive search, but also a faster searching ability than existing fast algorithms. For additional performance improvement, we can easily combine the proposed algorithm with existing tree-based algorithms. The algorithm can also be used for the fast matching of various features such as luminance histograms, edge images, and local binary pattern textures.
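
    The pruning idea can be sketched as follows, assuming an orthonormal Haar average pyramid so that the distance over coarse coefficients provably lower-bounds the full-resolution distance; in practice the database-side pyramids would be precomputed. Names and parameters are illustrative, not the authors' implementation.

    ```python
    import numpy as np

    def haar_levels(v, n_levels):
        """Coarse-to-fine scaling coefficients of an orthonormal Haar transform.

        Because the transform is orthonormal, the Euclidean distance computed
        on the level-j scaling coefficients is a lower bound on the distance
        between the full-resolution vectors.
        """
        levels = [v]
        for _ in range(n_levels):
            v = (v[0::2] + v[1::2]) / np.sqrt(2.0)
            levels.append(v)
        return levels[::-1]                      # coarsest first, full last

    def search(query, database, n_levels=4):
        q_levels = haar_levels(query, n_levels)
        best, best_d2 = None, np.inf
        for idx, cand in enumerate(database):
            c_levels = haar_levels(cand, n_levels)   # precompute these in practice
            pruned = False
            for ql, cl in zip(q_levels[:-1], c_levels[:-1]):
                if np.sum((ql - cl) ** 2) > best_d2:  # lower bound already too big
                    pruned = True
                    break
            if not pruned:
                d2 = np.sum((query - cand) ** 2)      # full-resolution distance
                if d2 < best_d2:
                    best, best_d2 = idx, d2
        return best, np.sqrt(best_d2)

    rng = np.random.default_rng(1)
    db = rng.random((1000, 256))
    q = db[123] + rng.normal(0, 0.01, 256)
    print(search(q, db))
    ```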

  4. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    SciTech Connect

    Hoa T. Nguyen; Stone, Daithi; E. Wes Bethel

    2016-01-01

An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for some specific visualization tasks. Typical approaches to this problem have proposed either reduced-resolution versions of data, or projections of data, or both. These approaches still have limitations, such as high computational cost or loss of accuracy. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in data, namely variation. We use two different case studies to explore this idea: one that uses a synthetic dataset, and another that uses a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.
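
    An illustrative contrast between averaging and a variation-preserving statistic (here the per-block standard deviation stands in for the paper's statistical measure, which we do not reproduce exactly):

    ```python
    import numpy as np

    def reduce_blocks(field, block, stat):
        """Reduce a 2-D field by applying `stat` over non-overlapping blocks."""
        h, w = field.shape
        tiles = field[:h - h % block, :w - w % block].reshape(
            h // block, block, w // block, block)
        return stat(tiles, axis=(1, 3))

    rng = np.random.default_rng(2)
    # smooth background plus a patch of high spatial variability
    field = rng.normal(0.0, 0.05, (256, 256))
    field[96:160, 96:160] += rng.normal(0.0, 1.0, (64, 64))

    means = reduce_blocks(field, 16, np.mean)  # averaging hides the variability
    stds = reduce_blocks(field, 16, np.std)    # std keeps the variation signal
    print("mean-map range:", means.min().round(3), means.max().round(3))
    print("std-map range: ", stds.min().round(3), stds.max().round(3))
    ```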

  5. Wavelet multiresolution analyses adapted for the fast solution of boundary value ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Jawerth, Bjoern; Sweldens, Wim

    1993-01-01

    We present ideas on how to use wavelets in the solution of boundary value ordinary differential equations. Rather than using classical wavelets, we adapt their construction so that they become (bi)orthogonal with respect to the inner product defined by the operator. The stiffness matrix in a Galerkin method then becomes diagonal and can thus be trivially inverted. We show how one can construct an O(N) algorithm for various constant and variable coefficient operators.
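
    The key identity can be sketched compactly (notation ours, not the authors'):

    ```latex
    % Operator-adapted wavelets: for Lu = f with the energy inner product
    % a(u,v) = <Lu, v>, construct the wavelets \psi_i so that
    \[
      a(\psi_i, \psi_j) \;=\; \delta_{ij}.
    \]
    % With the ansatz u = \sum_i c_i \psi_i, Galerkin testing against \psi_j gives
    \[
      \sum_i c_i \, a(\psi_i, \psi_j) \;=\; \langle f, \psi_j \rangle
      \quad\Longrightarrow\quad
      c_j \;=\; \langle f, \psi_j \rangle,
    \]
    % i.e. the stiffness matrix is the identity, and "inverting" it costs
    % only the N inner products -- the advertised O(N) algorithm.
    ```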

  6. Face Recognition with Multi-Resolution Spectral Feature Images

    PubMed Central

    Sun, Zhan-Li; Lam, Kin-Man; Dong, Zhao-Yang; Wang, Han; Gao, Qing-Wei; Zheng, Chun-Hou

    2013-01-01

The one-sample-per-person problem has become an active research topic for face recognition in recent years because of its challenges and its significance for real-world applications. However, achieving relatively high recognition accuracy remains difficult because, usually, too few training samples are available and illumination and expression vary. To alleviate the negative effects caused by these unfavorable factors, in this paper we propose a more accurate spectral feature image-based 2DLDA (two-dimensional linear discriminant analysis) ensemble algorithm for face recognition, with one sample image per person. In our algorithm, multi-resolution spectral feature images are constructed to represent the face images; this can greatly enlarge the training set. The proposed method is inspired by our finding that, among these spectral feature images, features extracted from some orientations and scales using 2DLDA are not sensitive to variations of illumination and expression. In order to maintain the positive characteristics of these filters and to make correct category assignments, the strategy of classifier committee learning (CCL) is designed to combine the results obtained from different spectral feature images. Using the above strategies, the negative effects caused by those unfavorable factors can be alleviated efficiently in face recognition. Experimental results on standard databases demonstrate the feasibility and efficiency of the proposed method. PMID:23418451

  7. Cointegration and Nonstationarity in the Context of Multiresolution Analysis

    NASA Astrophysics Data System (ADS)

    Worden, K.; Cross, E. J.; Kyprianou, A.

    2011-07-01

    Cointegration has established itself as a powerful means of projecting out long-term trends from time-series data in the context of econometrics. Recent work by the current authors has further established that cointegration can be applied profitably in the context of structural health monitoring (SHM), where it is desirable to project out the effects of environmental and operational variations from data in order that they do not generate false positives in diagnostic tests. The concept of cointegration is partly built on a clear understanding of the ideas of stationarity and nonstationarity for time-series. Nonstationarity in this context is 'traditionally' established through the use of statistical tests, e.g. the hypothesis test based on the augmented Dickey-Fuller statistic. However, it is important to understand the distinction in this case between 'trend' stationarity and stationarity of the AR models typically fitted as part of the analysis process. The current paper will discuss this distinction in the context of SHM data and will extend the discussion by the introduction of multi-resolution (discrete wavelet) analysis as a means of characterising the time-scales on which nonstationarity manifests itself. The discussion will be based on synthetic data and also on experimental data for the guided-wave SHM of a composite plate.
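
    A minimal sketch of the stationarity test on a cointegration residual, using the augmented Dickey-Fuller test from statsmodels; the two synthetic series below stand in for, e.g., an SHM feature and a confounding environmental variable.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import adfuller

    rng = np.random.default_rng(3)
    n = 1000
    trend = np.cumsum(rng.normal(size=n))            # shared stochastic trend
    x = trend + rng.normal(scale=0.3, size=n)        # e.g. ambient temperature
    y = 2.0 * trend + rng.normal(scale=0.3, size=n)  # e.g. monitored feature

    # each series alone is nonstationary: the ADF test fails to reject a unit root
    print("ADF p-value, y alone:", adfuller(y)[1])

    # Engle-Granger step: regress y on x, then test the residual for stationarity;
    # a small p-value means the pair is cointegrated and the residual is trend-free
    beta = np.polyfit(x, y, 1)
    resid = y - np.polyval(beta, x)
    print("ADF p-value, cointegration residual:", adfuller(resid)[1])
    ```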

  8. Multi-resolution Convolution Methodology for ICP Waveform Morphology Analysis.

    PubMed

    Shaw, Martin; Piper, Ian; Hawthorne, Christopher

    2016-01-01

Intracranial pressure (ICP) monitoring is a key clinical tool in the assessment and treatment of patients in neurointensive care. ICP morphology analysis can be useful in the classification of waveform features. A methodology for the decomposition of an ICP signal into clinically relevant dimensions has been devised that allows the identification of important ICP waveform types. It has three main components. First, multi-resolution convolution analysis is used for the main signal decomposition. Then, an impulse function is created, with multiple parameters, that can represent any form in the signal under analysis. Finally, a simple, localised optimisation technique is used to find morphologies of interest in the decomposed data. A pilot application of this methodology using a simple signal has been performed. This has shown that the technique works, with receiver operating characteristic (ROC) area-under-the-curve values for each of the waveform types (plateau wave, B wave, and high- and low-compliance states) of 0.936, 0.694, 0.676 and 0.698, respectively. This is a novel technique that showed some promise during the pilot analysis. However, it requires further optimisation to become a usable clinical tool for the automated analysis of ICP signals.

  9. Interactive multiscale tensor reconstruction for multiresolution volume visualization.

    PubMed

    Suter, Susanne K; Guitián, José A Iglesias; Marton, Fabio; Agus, Marco; Elsener, Andreas; Zollikofer, Christoph P E; Gopi, M; Gobbetti, Enrico; Pajarola, Renato

    2011-12-01

    Large scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for pre-processing large volume data, (b) a GPU accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized micro-tomographic volumes.

  10. Eagle II: A prototype for multi-resolution combat modeling

    SciTech Connect

    Powell, D.R.; Hutchinson, J.L.

    1993-02-01

Eagle II is a prototype analytic model derived from the integration of the low-resolution Eagle model with the high-resolution SIMNET model. This integration promises a new capability to allow for a more effective examination of proposed or existing combat systems that could not be easily evaluated using either Eagle or SIMNET alone. In essence, Eagle II becomes a multi-resolution combat model in which simulated combat units can exhibit both high- and low-fidelity behavior at different times during model execution. This capability allows a unit to behave in a high-fidelity manner only when required, thereby reducing the overall computational and manpower requirements for a given study. In this framework, the SIMNET portion enables a highly credible assessment of the performance of individual combat systems under consideration, encompassing both engineering performance and crew capabilities. However, when the assessment being conducted goes beyond system performance and extends to questions of force structure balance and sustainment, then SIMNET results can be used to "calibrate" the Eagle attrition process appropriate to the study at hand. Advancing technologies, changes in the worldwide threat, requirements for flexible response, declining defense budgets, and down-sizing of military forces motivate the development of manpower-efficient, low-cost, responsive tools for combat development studies. Eagle and SIMNET both serve as credible and useful tools. The integration of these two models promises enhanced capabilities to examine the broader, deeper, more complex battlefield of the future with higher fidelity, greater responsiveness and lower overall cost.

  11. Multiresolution imaging of in-vivo ligand-receptor interactions

    NASA Astrophysics Data System (ADS)

    Thevenaz, Philippe; Millet, Philippe

    2001-05-01

The aim of this study is to obtain voxel-by-voxel images of binding parameters between [11C]-flumazenil and benzodiazepine receptors using positron emission tomography (PET). We estimate five local parameters (k1, k2, B'max, kon/VR, koff) by fitting a three-compartment ligand-receptor model for each voxel of a PET time series. It proves difficult to fit the ligand-receptor model to the data, so we trade spatial resolution for reduced noise to obtain better results. Our strategy is based on the use of a multiresolution pyramid. It is much easier to solve the problem at coarse resolution because there are fewer data to process. To increase resolution, we expand the parameter maps to the next finer level and use them as the initial solution for further optimization, which then proceeds at a fast pace and is more likely to escape false local minima. For this approach to work optimally, the residue between data at a given pyramid level and data at the next level must be as small as possible. We satisfy this constraint by working with spline-based least-squares pyramids. To achieve speed, the optimizer must be efficient, particularly when it is nearing the solution. To that effect, we have developed a Marquardt-Levenberg algorithm that exhibits superlinear convergence properties.
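
    A much-simplified sketch of the coarse-to-fine strategy, with a toy mono-exponential model standing in for the three-compartment ligand-receptor model, a block-average pyramid standing in for the spline pyramid, and SciPy's trust-region least-squares replacing the authors' Marquardt-Levenberg implementation; grid sizes are assumed divisible by 2**(levels-1).

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def model(p, t):
        # toy stand-in: mono-exponential uptake, NOT the three-compartment model
        k1, k2 = p
        return k1 * (1.0 - np.exp(-k2 * t))

    def fit_pyramid(tac, t, levels=3):
        """Coarse-to-fine fitting of parameter maps for a (ny, nx, nt) series.

        Fit at the coarsest pyramid level first (fewer, less noisy curves),
        then upsample the parameter maps as the initial guess for the next
        finer level; this speeds convergence and helps escape false minima.
        """
        init = None
        for lev in range(levels - 1, -1, -1):
            f = 2 ** lev
            ny, nx, nt = tac.shape
            data = tac.reshape(ny // f, f, nx // f, f, nt).mean(axis=(1, 3))
            if init is None:
                init = np.full(data.shape[:2] + (2,), 0.5)
            params = np.empty(data.shape[:2] + (2,))
            for i in range(data.shape[0]):
                for j in range(data.shape[1]):
                    fit = least_squares(lambda p: model(p, t) - data[i, j],
                                        init[i, j], bounds=(0.0, np.inf))
                    params[i, j] = fit.x
            init = params.repeat(2, axis=0).repeat(2, axis=1)  # next finer level
        return params

    # synthetic test: 16 x 16 voxels, 40 time frames, known parameter maps
    rng = np.random.default_rng(11)
    t = np.linspace(0.1, 60.0, 40)
    k1 = 0.8 + 0.4 * rng.random((16, 16))
    k2 = 0.05 + 0.05 * rng.random((16, 16))
    tac = model((k1[..., None], k2[..., None]), t)
    tac += rng.normal(0.0, 0.02, tac.shape)
    maps = fit_pyramid(tac, t)
    print("mean |error| in k1:", np.abs(maps[..., 0] - k1).mean().round(3))
    ```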

  12. On analysis of electroencephalogram by multiresolution-based energetic approach

    NASA Astrophysics Data System (ADS)

    Sevindir, Hulya Kodal; Yazici, Cuneyt; Siddiqi, A. H.; Aslan, Zafer

    2013-10-01

Epilepsy is a common brain disorder in which normal neuronal activity is affected. Electroencephalography (EEG) is the recording of electrical activity along the scalp produced by the firing of neurons within the brain. The main application of EEG is the study of epilepsy: on a standard EEG, certain abnormalities indicate epileptic activity. EEG signals, like many biomedical signals, are highly non-stationary by nature. For the investigation of biomedical signals, in particular EEG signals, wavelet analysis has found a prominent position owing to its ability to analyze such signals. The wavelet transform is capable of separating the signal energy among different frequency scales, and a good compromise between temporal and frequency resolution is obtained. The present study is an attempt at a better understanding of the mechanism causing the epileptic disorder and at accurate prediction of the occurrence of seizures. In the present paper, following Magosso's work [12], we identify typical patterns of energy redistribution before and during the seizure using multiresolution wavelet analysis on Kocaeli University Medical School's data.
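
    A minimal sketch of the energetic approach with PyWavelets: the relative energy of each DWT sub-band is the quantity whose redistribution before and during a seizure would be tracked. The wavelet choice, decomposition depth, and test signal are ours.

    ```python
    import numpy as np
    import pywt

    def band_energies(signal, wavelet="db4", level=5):
        """Relative energy of each DWT sub-band (approximation + details).

        A shift of energy toward particular scales before and during a
        seizure is the kind of pattern the multiresolution energetic
        analysis looks for.
        """
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        return energies / energies.sum()

    fs = 256                                   # assumed sampling rate, Hz
    t = np.arange(0, 8, 1.0 / fs)
    rng = np.random.default_rng(4)
    background = rng.normal(0.0, 1.0, t.size)
    burst = 3.0 * np.sin(2 * np.pi * 3 * t) * (t > 4)  # crude 3 Hz discharge
    print("energy per sub-band:", band_energies(background + burst).round(3))
    ```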

  13. Effect of the determination method of the material parameters on the accuracy of forming limit analyses for 5000 series aluminum alloy

    NASA Astrophysics Data System (ADS)

    Hakoyama, Tomoyuki; Kuwabara, Toshihiko; Barlat, Frédéric

    2016-10-01

The effect of the method used to determine the material parameters of a yield function on the accuracy of the forming limit strains predicted using the Marciniak-Kuczyński-type (M-K) forming limit analysis for a 5000 series aluminum alloy sheet is investigated. Tension-expansion loading experiments on tubular specimens under linear stress paths in the first quadrant of stress space are performed to measure the multiaxial plastic deformation behavior and the forming limit strains of the test material. The anisotropic parameters and the exponent of the Yld2000-2d yield function (Barlat et al., 2003) are optimized to approximate the contours of plastic work and/or the directions of the plastic strain rates. The M-K analyses are performed using the different model identifications based on the Yld2000-2d yield function. It is concluded that the yield function that best captures both the plastic work contours and the directions of the plastic strain rates leads to the most accurately predicted forming limit strains.

  14. A multiresolution spatial parametrization for the estimation of fossil-fuel carbon dioxide emissions via atmospheric inversions.

    SciTech Connect

    Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet; Michalak, Anna M.; van Bloemen Waanders, Bart Gustaaf; McKenna, Sean Andrew

    2013-04-01

The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. To that end, we construct a multiresolution spatial parametrization for fossil-fuel CO2 emissions (ffCO2), to be used in atmospheric inversions. Such a parametrization does not currently exist. The parametrization uses wavelets to accurately capture the multiscale, nonstationary nature of ffCO2 emissions and employs proxies of human habitation, e.g., images of lights at night and maps of built-up areas, to reduce the dimensionality of the multiresolution parametrization. The parametrization is used in a synthetic data inversion to test its suitability for use in atmospheric inverse problems. This linear inverse problem is predicated on observations of ffCO2 concentrations collected at measurement towers. We adapt a convex optimization technique, commonly used in the reconstruction of compressively sensed images, to perform sparse reconstruction of the time-variant ffCO2 emission field. We also borrow concepts from compressive sensing to impose boundary conditions, i.e., to limit ffCO2 emissions within an irregularly shaped region (the United States, in our case). We find that the optimization algorithm performs a data-driven sparsification of the spatial parametrization and retains only those wavelets whose weights could be estimated from the observations. Further, our method for the imposition of boundary conditions leads to a roughly tenfold computational saving over conventional means of doing so. We conclude with a discussion of the accuracy of the estimated emissions and the suitability of the spatial parametrization for use in inverse problems with a significant degree of regularization.

  15. Fast multipole and space adaptive multiresolution methods for the solution of the Poisson equation

    NASA Astrophysics Data System (ADS)

    Bilek, Petr; Duarte, Max; Nečas, David; Bourdon, Anne; Bonaventura, Zdeněk

    2016-09-01

This work focuses on the conjunction of the fast multipole method (FMM) with the space-adaptive multiresolution (MR) technique for grid adaptation. Since both methods, MR and FMM, provide a priori error estimates, both achieve O(N) computational complexity, and both operate on the same hierarchical space division, their conjunction represents a natural choice when designing a numerically efficient and robust strategy for time-dependent problems. Special attention is given to the use of these methods in the simulation of streamer discharges in air. We have designed an FMM Poisson solver on a multiresolution-adapted grid in 2D. The accuracy and the computational complexity of the solver have been verified for a set of manufactured solutions. We confirmed that the developed solver attains the desired accuracy, and that this accuracy is controlled only by the number of terms in the multipole expansion in combination with the multiresolution accuracy tolerance. The implementation has linear computational complexity, O(N).

  16. Techniques and potential capabilities of multi-resolutional information (knowledge) processing

    NASA Technical Reports Server (NTRS)

    Meystel, A.

    1989-01-01

A concept of nested hierarchical (multi-resolutional, pyramidal) information (knowledge) processing is introduced for a variety of systems including data and/or knowledge bases, vision, control, and manufacturing systems, industrial automated robots, and (self-programmed) autonomous intelligent machines. A set of practical recommendations is presented using a case study of a multiresolutional object representation. It is demonstrated here that any intelligent module transforms (sometimes, irreversibly) the knowledge it deals with, and this transformation affects the subsequent computation processes, e.g., those of decision and control. Several types of knowledge transformation are reviewed. Conditions are analyzed whose satisfaction is required for the organization and processing of redundant information (knowledge) in multi-resolutional systems. Providing a definite degree of redundancy is one of these conditions.

  17. On-the-Fly Decompression and Rendering of Multiresolution Terrain

    SciTech Connect

    Lindstrom, P; Cohen, J D

    2009-04-02

    We present a streaming geometry compression codec for multiresolution, uniformly-gridded, triangular terrain patches that supports very fast decompression. Our method is based on linear prediction and residual coding for lossless compression of the full-resolution data. As simplified patches on coarser levels in the hierarchy already incur some data loss, we optionally allow further quantization for more lossy compression. The quantization levels are adaptive on a per-patch basis, while still permitting seamless, adaptive tessellations of the terrain. Our geometry compression on such a hierarchy achieves compression ratios of 3:1 to 12:1. Our scheme is not only suitable for fast decompression on the CPU, but also for parallel decoding on the GPU with peak throughput over 2 billion triangles per second. Each terrain patch is independently decompressed on the fly from a variable-rate bitstream by a GPU geometry program with no branches or conditionals. Thus we can store the geometry compressed on the GPU, reducing storage and bandwidth requirements throughout the system. In our rendering approach, only compressed bitstreams and the decoded height values in the view-dependent 'cut' are explicitly stored on the GPU. Normal vectors are computed in a streaming fashion, and remaining geometry and texture coordinates, as well as mesh connectivity, are shared and re-used for all patches. We demonstrate and evaluate our algorithms on a small prototype system in which all compressed geometry fits in the GPU memory and decompression occurs on the fly every rendering frame without any cache maintenance.
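
    The lossless core can be sketched with a parallelogram (Lorenzo-style) predictor on integer heights; the entropy coder, per-patch quantization, and GPU decoding are omitted, and the predictor choice is our assumption rather than the paper's exact filter.

    ```python
    import numpy as np

    def predict_residuals(height):
        """Parallelogram predictor over a uniform grid of integer heights.

        Each sample is predicted as h[i-1,j] + h[i,j-1] - h[i-1,j-1]; the
        typically small integer residuals are what an entropy coder would
        store. The first row and column fall back to the previous neighbor.
        """
        h = height.astype(np.int64)
        pred = np.zeros_like(h)
        pred[1:, 1:] = h[:-1, 1:] + h[1:, :-1] - h[:-1, :-1]
        pred[0, 1:] = h[0, :-1]
        pred[1:, 0] = h[:-1, 0]
        return h - pred

    def reconstruct(residuals):
        """Inverse of predict_residuals, showing the scheme is lossless."""
        h = np.zeros_like(residuals)
        for i in range(h.shape[0]):
            for j in range(h.shape[1]):
                p = 0
                if i > 0 and j > 0:
                    p = h[i - 1, j] + h[i, j - 1] - h[i - 1, j - 1]
                elif j > 0:
                    p = h[i, j - 1]
                elif i > 0:
                    p = h[i - 1, j]
                h[i, j] = p + residuals[i, j]
        return h

    # smooth synthetic terrain patch quantized to integer elevations
    x, y = np.meshgrid(np.linspace(0, 4, 64), np.linspace(0, 4, 64))
    terrain = (100 * np.sin(x) * np.cos(y)).astype(np.int64)
    res = predict_residuals(terrain)
    assert np.array_equal(reconstruct(res), terrain)   # lossless round trip
    print("height range:", np.ptp(terrain), " residual range:", np.ptp(res))
    ```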

  18. Extraction of texture features with a multiresolution neural network

    NASA Astrophysics Data System (ADS)

    Lepage, Richard; Laurendeau, Denis; Gagnon, Roger A.

    1992-09-01

Texture is an important surface characteristic. Many industrial materials such as wood, textile, or paper are best characterized by their texture. Detection of defects occurring on such materials, or classification for quality control and matching, can be carried out through careful texture analysis. A system for the classification of pieces of wood used in the furniture industry is proposed. This paper is concerned with a neural network implementation of the feature extraction and classification components of the proposed system. Texture appears different depending on the spatial scale at which it is observed. A complete description of a texture thus implies an analysis at several spatial scales. We propose a compact pyramidal representation of the input image for multiresolution analysis. The feature extraction system is implemented on a multilayer artificial neural network. Each level of the pyramid, which is a representation of the input image at a given spatial resolution scale, is mapped into a layer of the neural network. A full-resolution texture image is input at the base of the pyramid, and a representation of the texture image at multiple resolutions is generated by the feedforward pyramid structure of the neural network. The receptive field of each neuron at a given pyramid level is preprogrammed as a discrete Gaussian low-pass filter. Meaningful characteristics of the textured image must be extracted if good resolving power of the classifier is to be achieved. Local dominant orientation is the principal feature which is extracted from the textured image. Local edge orientation is computed with a Sobel mask at four orientation angles (multiples of π/4). The resulting intrinsic image, that is, the local dominant orientation image, is fed to the texture classification neural network. The classification network is a three-layer feedforward back-propagation neural network.
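
    A plain NumPy/SciPy sketch of the dominant-orientation feature (a low-pass level followed by edge responses at four angles), standing in for the neural implementation described above; kernels and parameters are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import convolve, gaussian_filter

    # edge kernels at 0, pi/4, pi/2, 3*pi/4 (Sobel and its diagonal variants)
    K0 = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # vertical edges
    K90 = K0.T                                            # horizontal edges
    K45 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]])
    K135 = np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]])

    def dominant_orientation(image, sigma=1.0):
        """Index (0..3) of the strongest edge orientation at each pixel,
        computed on a low-pass (coarser pyramid level) version of the image."""
        smooth = gaussian_filter(image.astype(float), sigma)
        responses = np.stack(
            [np.abs(convolve(smooth, k)) for k in (K0, K45, K90, K135)])
        return responses.argmax(axis=0)

    rng = np.random.default_rng(6)
    texture = rng.random((64, 64))
    texture += np.sin(np.arange(64) * 0.8)   # add a vertical stripe pattern
    orient = dominant_orientation(texture)
    print(np.bincount(orient.ravel(), minlength=4))  # orientation histogram
    ```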

  19. Development of a laparoscope with multi-resolution foveation capability for minimally invasive surgery

    NASA Astrophysics Data System (ADS)

    Qin, Yi; Hua, Hong; Nguyen, Mike

    2013-03-01

The laparoscope is the essential tool for minimally invasive surgery (MIS) within the abdominal cavity. However, the focal length of a conventional laparoscope is fixed, so it suffers from the tradeoff between field of view (FOV) and spatial resolution. In order to obtain large optical magnification to see more details, a conventional laparoscope is usually designed with a small working distance, typically less than 50 mm. Such a small working distance limits the field of coverage, which causes situational awareness challenges during laparoscopic surgery. We developed a multi-resolution foveated laparoscope (MRFL) aiming to address this limitation. The MRFL was designed to support a large working distance range from 80 mm to 180 mm. It is able to simultaneously provide both a wide-angle overview and a high-resolution image of the surgical field in real time within a fully integrated system. The high-resolution imaging probe can automatically scan and engage any subfield of the wide-angle view. During surgery, the MRFL does not need to move; it can therefore reduce instrument conflicts. The FOV of the wide-angle imaging probe is 80° and that of the high-resolution imaging probe is 26.6°. The maximum resolution is about 45 µm in object space at an 80 mm working distance, which is about 5 times as good as a conventional laparoscope at a 50 mm working distance. The prototype can realize an equivalent 10-million-pixel resolution by using only two HD cameras because of its foveation capability. It saves bandwidth and improves the frame rate compared to the use of a super-resolution camera. It has great potential to aid the safety and accuracy of laparoscopic surgery.

  20. Multiresolution image representation using combined 2-D and 1-D directional filter banks.

    PubMed

    Tanaka, Yuichi; Ikehara, Masaaki; Nguyen, Truong Q

    2009-02-01

    In this paper, effective multiresolution image representations using a combination of 2-D filter bank (FB) and directional wavelet transform (WT) are presented. The proposed methods yield simple implementation and low computation costs compared to previous 1-D and 2-D FB combinations or adaptive directional WT methods. Furthermore, they are nonredundant transforms and realize quad-tree like multiresolution representations. In applications on nonlinear approximation, image coding, and denoising, the proposed filter banks show visual quality improvements and have higher PSNR than the conventional separable WT or the contourlet.

  1. Multi-resolution statistical analysis of brain connectivity graphs in preclinical Alzheimer's disease.

    PubMed

    Kim, Won Hwa; Adluru, Nagesh; Chung, Moo K; Okonkwo, Ozioma C; Johnson, Sterling C; B Bendlin, Barbara; Singh, Vikas

    2015-09-01

    There is significant interest, both from basic and applied research perspectives, in understanding how structural/functional connectivity changes can explain behavioral symptoms and predict decline in neurodegenerative diseases such as Alzheimer's disease (AD). The first step in most such analyses is to encode the connectivity information as a graph; then, one may perform statistical inference on various 'global' graph theoretic summary measures (e.g., modularity, graph diameter) and/or at the level of individual edges (or connections). For AD in particular, clear differences in connectivity at the dementia stage of the disease (relative to healthy controls) have been identified. Despite such findings, AD-related connectivity changes in preclinical disease remain poorly characterized. Such preclinical datasets are typically smaller and group differences are weaker. In this paper, we propose a new multi-resolution method for performing statistical analysis of connectivity networks/graphs derived from neuroimaging data. At the high level, the method occupies the middle ground between the two contrasts - that is, to analyze global graph summary measures (global) or connectivity strengths or correlations for individual edges similar to voxel based analysis (local). Instead, our strategy derives a Wavelet representation at each primitive (connection edge) which captures the graph context at multiple resolutions. We provide extensive empirical evidence of how this framework offers improved statistical power by analyzing two distinct AD datasets. Here, connectivity is derived from diffusion tensor magnetic resonance images by running a tractography routine. We first present results showing significant connectivity differences between AD patients and controls that were not evident using standard approaches. Later, we show results on populations that are not diagnosed with AD but have a positive family history risk of AD where our algorithm helps in identifying potentially

  2. Multi-resolution Statistical Analysis of Brain Connectivity Graphs in Preclinical Alzheimer's Disease

    PubMed Central

    Kim, Won Hwa; Adluru, Nagesh; Chung, Moo K.; Okonkwo, Ozioma C.; Johnson, Sterling C.; Bendlin, Barbara; Singh, Vikas

    2015-01-01

    There is significant interest, both from basic and applied research perspectives, in understanding how structural/functional connectivity changes can explain behavioral symptoms and predict decline in neurodegenerative diseases such as Alzheimer's disease (AD). The first step in most such analyses is to encode the connectivity information as a graph; then, one may perform statistical inference on various ‘global’ graph theoretic summary measures (e.g., modularity, graph diameter) and/or at the level of individual edges (or connections). For AD in particular, clear differences in connectivity at the dementia stage of the disease (relative to healthy controls) have been identified. Despite such findings, AD-related connectivity changes in preclinical disease remain poorly characterized. Such preclinical datasets are typically smaller and group differences are weaker. In this paper, we propose a new multi-resolution method for performing statistical analysis of connectivity networks/graphs derived from neuroimaging data. At the high level, the method occupies the middle ground between the two contrasts — that is, to analyze global graph summary measures (global) or connectivity strengths or correlations for individual edges similar to voxel based analysis (local). Instead, our strategy derives a Wavelet representation at each primitive (connection edge) which captures the graph context at multiple resolutions. We provide extensive empirical evidence of how this framework offers improved statistical power by analyzing two distinct AD datasets. Here, connectivity is derived from diffusion tensor magnetic resonance images by running a tractography routine. We first present results showing significant connectivity differences between AD patients and controls that were not evident using standard approaches. Later, we show results on populations that are not diagnosed with AD but have a positive family history risk of AD where our algorithm helps in identifying

  3. A maximum-likelihood multi-resolution weak lensing mass reconstruction method

    NASA Astrophysics Data System (ADS)

    Khiabanian, Hossein

Gravitational lensing is formed when the light from a distant source is "bent" around a massive object. Lensing analysis has increasingly become the method of choice for studying dark matter, so much so that it is one of the main tools that will be employed in future surveys to study dark energy and its equation of state as well as the evolution of galaxy clustering. Unlike other popular techniques for selecting galaxy clusters (such as studying the X-ray emission or observing the over-densities of galaxies), weak gravitational lensing does not have the disadvantage of relying on the luminous matter, and it provides a parameter-free reconstruction of the projected mass distribution in clusters without dependence on baryon content. Gravitational lensing also provides a unique test for the presence of truly dark clusters, though it is otherwise an expensive detection method. Therefore it is essential to make use of all the information provided by the data to improve the quality of the lensing analysis. This thesis project has been motivated by the limitations encountered with the commonly used direct reconstruction methods of producing mass maps. We have developed a multi-resolution maximum-likelihood reconstruction method for producing two-dimensional mass maps using weak gravitational lensing data. To utilize all the shear information, we employ an iterative inverse method with a properly selected regularization coefficient which fits the deflection potential at the position of each galaxy. By producing mass maps with multiple resolutions in different parts of the observed field, we can achieve a uniform signal-to-noise level by increasing the resolution in regions of higher distortion or regions with an over-density of background galaxies. In addition, we are able to better study the substructure of massive clusters at a resolution which is not attainable in the rest of the observed field.

  4. Multiresolution retinal vessel tracker based on directional smoothing

    NASA Astrophysics Data System (ADS)

    Englmeier, Karl-Hans; Bichler, Simon; Schmid, K.; Maurino, M.; Porta, Massimo; Bek, Toke; Ege, B.; Larsen, Ole V.; Hejlesen, Ok

    2002-04-01

To support ophthalmologists in their routine and enable the quantitative assessment of vascular changes in color fundus photographs, a multi-resolution approach was developed which segments the vessel tree efficiently and precisely in digital images of the retina. The algorithm starts at seed points found in a preprocessing step and then follows the vessel, iteratively adjusting the direction of the search and finding the center line of the vessel. In addition, vessel branches and crossings are detected and stored in detailed lists. Every iteration of the Directional Smoothing Based (DSB) tracking process starts at a given point in the middle of a vessel. First, rectangular windows for several directions in a neighborhood of this point are smoothed in the assumed direction of the vessel. The window that results in the best contrast is then taken to give the true direction of the vessel. The center point is moved in that direction by 1/8th of the vessel width, and the algorithm continues with the next iteration. The vessel branch and crossing detection uses a list with unique vessel segment IDs and branch point IDs. During tracking, when another vessel is crossed, the tracking is stopped. The newly traced vessel segment is stored in the vessel segment list, and the vessel that had been traced before is broken up at the crossing or branch point and stored as two different vessel segments. This approach has several advantages: - With directional smoothing, noise is eliminated while the edges of the vessels are kept. - DSB works on high-resolution images (3000 x 2000 pixels) as well as on low-resolution images (900 x 600 pixels), because a large area of the vessel is used to find the vessel direction. - For the detection of venous beading, the vessel width is measured at every step of the traced vessel. - With the lists of branch and crossing points, we get a network of connected vessel segments that can be used for further processing the retinal vessel
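
    One DSB iteration might look as follows; this is our reading of the described procedure, with (row, col) image coordinates, bilinear sampling of the oriented windows, and a max-minus-min contrast score as illustrative choices (the sign of the advance direction and the width re-estimation are omitted).

    ```python
    import numpy as np
    from scipy.ndimage import map_coordinates

    def dsb_step(image, center, width, n_dirs=16, length=15):
        """One directional-smoothing-based (DSB) tracking iteration.

        For each candidate direction, sample a rectangular window aligned
        with it, average along the direction (the smoothing), and score the
        cross-vessel profile by its contrast; the best direction wins and
        the center advances by width/8 along it.
        """
        best_dir, best_contrast = None, -np.inf
        s = np.linspace(-length / 2.0, length / 2.0, length)  # along window
        t = np.linspace(-width, width, 2 * int(width) + 1)    # across window
        for theta in np.linspace(0.0, np.pi, n_dirs, endpoint=False):
            d = np.array([np.cos(theta), np.sin(theta)])      # candidate direction
            n = np.array([-d[1], d[0]])                       # its normal
            coords = (center[:, None, None]
                      + n[:, None, None] * t[None, :, None]
                      + d[:, None, None] * s[None, None, :])
            window = map_coordinates(image, coords, order=1)  # bilinear sampling
            profile = window.mean(axis=1)                     # smooth along d
            contrast = profile.max() - profile.min()          # cross-vessel contrast
            if contrast > best_contrast:
                best_contrast, best_dir = contrast, d
        return center + best_dir * (width / 8.0), best_dir

    # toy example: a dark horizontal "vessel" on a bright background
    img = np.ones((64, 64))
    img[30:34, :] = 0.0
    c, d = dsb_step(img, np.array([32.0, 10.0]), width=4.0)
    print("advanced to", c.round(2), "along", d.round(2))
    ```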

  5. Adaptive multiresolution semi-Lagrangian discontinuous Galerkin methods for the Vlasov equations

    NASA Astrophysics Data System (ADS)

    Besse, N.; Deriaz, E.; Madaule, É.

    2017-03-01

We develop adaptive numerical schemes for the Vlasov equation by combining discontinuous Galerkin discretisation, multiresolution analysis and semi-Lagrangian time integration. We implement a tree-based structure in order to achieve adaptivity. Both multi-wavelets and discontinuous Galerkin methods rely on a local polynomial basis. The schemes are tested and validated using the Vlasov-Poisson equations for plasma physics and astrophysics.

  6. Limited Flow of Continental Crust at UHP Depths: Coupled Age and Trace-Element Analyses of Titanite in the Western Gneiss Region, Norway

    NASA Astrophysics Data System (ADS)

    Garber, J. M.; Hacker, B. R.; Kylander-Clark, A. R.

    2015-12-01

Coupled age and trace-element data from titanites in the Western Gneiss Region (WGR) of Norway suggest that continental crust underwent limited recrystallization and ductile flow through ~40 My of deep subduction and subsequent exhumation. Precambrian igneous titanites in granitic to tonalitic orthogneisses from the WGR were metastably preserved through Caledonian ultrahigh-pressure (UHP) metamorphism and variably recrystallized through subsequent amphibolite-facies metamorphism from ~420-385 Ma. The inherited Precambrian titanites are not present everywhere but rather cluster primarily in a cooler "southern domain" (peak T ~650 °C) and a hotter "northern domain" (peak T ~750-800 °C). Titanite data were collected using LASS (laser-ablation split stream inductively-coupled plasma mass spectrometry) at UCSB, and a principal component analysis (PCA) was used to define age and trace-element populations. These data indicate that inherited titanites are LREE-enriched, HFSE-enriched, and have higher Th/U, consistent with Precambrian neocrystallization from a granitic melt. In contrast, the recrystallized titanites have generally lower Th/U and flat, LREE-depleted, or hump-shaped trace-element patterns. These data suggest that (1) Caledonian titanite recrystallization occurred in the presence of LREE-depleted melts or fluids, or that (2) recrystallization was accompanied by a "typical" granitic melt, but that titanite/bulk-rock distribution coefficients are different for neo- and recrystallization; ongoing whole-rock analyses will clarify these hypotheses. Critically, the geochemical signature of recrystallized titanite in felsic orthogneisses is comparable across the entire WGR - emphasizing that the petrologic process of titanite recrystallization was similar orogen-wide, but was less extensive in the domains where inherited titanite was preserved. In this case, large volumes of crust outside of the "old domains" may also have retained metastable titanite during subduction

  7. The Global Multi-Resolution Topography (GMRT) Synthesis

    NASA Astrophysics Data System (ADS)

    Arko, R.; Ryan, W.; Carbotte, S.; Melkonian, A.; Coplan, J.; O'Hara, S.; Chayes, D.; Weissel, R.; Goodwillie, A.; Ferrini, V.; Stroker, K.; Virden, W.

    2007-12-01

    Topographic maps provide a backdrop for research in nearly every earth science discipline. There is particular demand for bathymetry data in the ocean basins, where existing coverage is sparse. Ships and submersibles worldwide are rapidly acquiring large volumes of new data with modern swath mapping systems. The science community is best served by a global topography compilation that is easily accessible, up-to-date, and delivers data in the highest possible (i.e. native) resolution. To meet this need, the NSF-supported Marine Geoscience Data System (MGDS; www.marine-geo.org) has partnered with the National Geophysical Data Center (NGDC; www.ngdc.noaa.gov) to produce the Global Multi-Resolution Topography (GMRT) synthesis - a continuously updated digital elevation model that is accessible through Open Geospatial Consortium (OGC; www.opengeospatial.org) Web services. GMRT had its genesis in 1992 with the NSF RIDGE Multibeam Synthesis (RMBS); later grew to include the Antarctic Multibeam Synthesis (AMBS); expanded again to include the NSF Ridge 2000 and MARGINS programs; and finally emerged as a global compilation in 2005 with the NSF Legacy of Ocean Exploration (LOE) project. The LOE project forged a permanent partnership between MGDS and NGDC, in which swath bathymetry data sets are routinely published and exchanged via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH; www.openarchives.org). GMRT includes both color-shaded relief images and underlying elevation values at ten different resolutions as high as 100m. New data are edited, gridded, and tiled using tools originally developed by William Haxby at Lamont-Doherty Earth Observatory. Global and regional data sources include the NASA Shuttle Radar Topography Mission (SRTM; http://www.jpl.nasa.gov/srtm/); Smith & Sandwell Satellite Predicted Bathymetry (http://topex.ucsd.edu/marine_topo/); SCAR Subglacial Topographic Model of the Antarctic (BEDMAP; http://www.antarctica.ac.uk/bedmap/); and

  8. A sparse reconstruction method for the estimation of multi-resolution emission fields via atmospheric inversion

    DOE PAGES

    Ray, J.; Lee, J.; Yadav, V.; ...

    2015-04-29

Atmospheric inversions are frequently used to estimate fluxes of atmospheric greenhouse gases (e.g., biospheric CO2 flux fields) at Earth's surface. These inversions typically assume that flux departures from a prior model are spatially smoothly varying, and they are then modeled using a multi-variate Gaussian. When the field being estimated is spatially rough, multi-variate Gaussian models are difficult to construct, and a wavelet-based field model may be more suitable. Unfortunately, such models are very high dimensional and are most conveniently used when the estimation method can simultaneously perform data-driven model simplification (removal of model parameters that cannot be reliably estimated) and fitting. Such sparse reconstruction methods are typically not used in atmospheric inversions. In this work, we devise a sparse reconstruction method and illustrate it in an idealized atmospheric inversion problem for the estimation of fossil fuel CO2 (ffCO2) emissions in the lower 48 states of the USA. Our new method is based on stagewise orthogonal matching pursuit (StOMP), a method used to reconstruct compressively sensed images. Our adaptations bestow three properties on the sparse reconstruction procedure which are useful in atmospheric inversions. We have modified StOMP to incorporate prior information on the emission field being estimated and to enforce non-negativity on the estimated field. Finally, though based on wavelets, our method allows for the estimation of fields in non-rectangular geometries, e.g., emission fields inside geographical and political boundaries. Our idealized inversions use a recently developed multi-resolution (i.e., wavelet-based) random field model developed for ffCO2 emissions and synthetic observations of ffCO2 concentrations from a limited set of measurement sites. We find that our method for limiting the estimated field within an irregularly shaped region is about a factor of 10 faster than conventional approaches. It also
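
    A heavily simplified sketch of the StOMP-style stage loop with a non-negativity projection, reflecting two of the adaptations described; the threshold rule and the final clamping are textbook/illustrative choices, not the authors' code.

    ```python
    import numpy as np

    def stomp_nonneg(A, y, n_stages=10, t=2.0):
        """Stagewise OMP with a non-negativity constraint on the solution.

        At each stage, correlate the residual with all columns of A, admit
        columns whose correlation exceeds t times a noise-level estimate
        (||r||/sqrt(m)) into the active set, re-fit by least squares on
        that set, and clamp negative weights to zero.
        """
        m, n = A.shape
        active = np.zeros(n, dtype=bool)
        x = np.zeros(n)
        for _ in range(n_stages):
            r = y - A @ x
            c = A.T @ r
            thresh = t * np.linalg.norm(r) / np.sqrt(m)
            new = np.abs(c) > thresh
            if not np.any(new & ~active):
                break                              # no new columns admitted
            active |= new
            sol, *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
            x[:] = 0.0
            x[active] = np.maximum(sol, 0.0)       # simple non-negativity projection
        return x

    rng = np.random.default_rng(7)
    A = rng.normal(size=(200, 500)) / np.sqrt(200)   # ~unit-norm columns
    x_true = np.zeros(500)
    x_true[rng.choice(500, 10, replace=False)] = rng.random(10) + 0.5
    y = A @ x_true + rng.normal(scale=0.01, size=200)
    x_hat = stomp_nonneg(A, y)
    print("support recovered:", np.flatnonzero(x_hat > 0.1))
    ```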

  9. A multi-resolution method for climate system modeling: Application of Spherical Centroidal Voronoi Tessellations

    SciTech Connect

    Ringler, Todd D; Gunzburger, Max; Ju, Lili

    2008-01-01

During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multi-resolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical Centroidal Voronoi Tessellations (SCVTs) offer one potential path toward the development of robust, multi-resolution climate system component models. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean-ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow-water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multi-resolution method and the challenges ahead.
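
    The CVT idea behind SCVTs can be sketched in the plane with density-weighted Lloyd iterations (Monte Carlo centroids); the real method works on the sphere and controls mesh quality more carefully, and the density function below is an arbitrary example.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def lloyd_cvt(n_gen=50, n_samples=200_000, n_iter=50,
                  rng=np.random.default_rng(8)):
        """Planar centroidal Voronoi tessellation with a density function.

        density() concentrates mesh nodes near (0.5, 0.5); each iteration,
        sample points are assigned to their nearest generator and each
        generator moves to the density-weighted centroid of its region.
        """
        def density(p):
            return 1.0 + 50.0 * np.exp(-40.0 * np.sum((p - 0.5) ** 2, axis=1))

        gen = rng.random((n_gen, 2))
        pts = rng.random((n_samples, 2))
        w = density(pts)
        for _ in range(n_iter):
            _, owner = cKDTree(gen).query(pts)   # nearest-generator assignment
            for g in range(n_gen):
                mask = owner == g
                if mask.any():                   # weighted centroid update
                    gen[g] = np.average(pts[mask], axis=0, weights=w[mask])
        return gen

    nodes = lloyd_cvt()
    print("nodes within 0.2 of center vs corner:",
          (np.linalg.norm(nodes - 0.5, axis=1) < 0.2).sum(),
          (np.linalg.norm(nodes - 0.1, axis=1) < 0.2).sum())
    ```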

  10. A Multi-Resolution Approach for an Automated Fusion of Different Low-Cost 3D Sensors

    PubMed Central

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-01-01

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory. PMID:24763255

  11. A multi-resolution approach for an automated fusion of different low-cost 3D sensors.

    PubMed

    Dupuis, Jan; Paulus, Stefan; Behmann, Jan; Plümer, Lutz; Kuhlmann, Heiner

    2014-04-24

    The 3D acquisition of object structures has become a common technique in many fields of work, e.g., industrial quality management, cultural heritage or crime scene documentation. The requirements on the measuring devices are versatile, because spacious scenes have to be imaged with a high level of detail for selected objects. Thus, the used measuring systems are expensive and require an experienced operator. With the rise of low-cost 3D imaging systems, their integration into the digital documentation process is possible. However, common low-cost sensors have the limitation of a trade-off between range and accuracy, providing either a low resolution of single objects or a limited imaging field. Therefore, the use of multiple sensors is desirable. We show the combined use of two low-cost sensors, the Microsoft Kinect and the David laserscanning system, to achieve low-resolved scans of the whole scene and a high level of detail for selected objects, respectively. Afterwards, the high-resolved David objects are automatically assigned to their corresponding Kinect object by the use of surface feature histograms and SVM-classification. The corresponding objects are fitted using an ICP-implementation to produce a multi-resolution map. The applicability is shown for a fictional crime scene and the reconstruction of a ballistic trajectory.

  12. Water storage variations extracted from GRACE data by combination of multi-resolution representation (MRR) and principal component analysis (PCA)

    NASA Astrophysics Data System (ADS)

    Ressler, Gerhard; Eicker, Annette; Lieb, Verena; Schmidt, Michael; Seitz, Florian; Shang, Kun; Shum, Che-Kwan

    2015-04-01

Regionally changing hydrological conditions and their link to the availability of water for human consumption and agriculture is a challenging topic in the context of global change that is receiving increasing attention. Gravity field changes related to signals of land hydrology have been observed by the Gravity Recovery And Climate Experiment (GRACE) satellite mission over a period of more than 12 years. These changes are being analysed in our studies with respect to changing hydrological conditions, especially as a consequence of extreme weather situations and/or a change of climatic conditions. Typically, variations of the Earth's gravity field are modeled as a series expansion in terms of global spherical harmonics with time-dependent harmonic coefficients. In order to investigate specific structures in the signal, we alternatively apply a wavelet-based multi-resolution technique for the determination of regional spatiotemporal variations of the Earth's gravitational potential, in combination with principal component analysis (PCA) for a detailed evaluation of these structures. The multi-resolution representation (MRR), i.e. the composition of a signal from contributions at different resolution levels, is a suitable approach for spatial gravity modeling, on the one hand because of the inhomogeneous distribution of observation data and on the other hand because of the inhomogeneous structure of the Earth's gravity field itself. In the MRR the signal is split into detail signals by applying low- and band-pass filters realized, e.g., by spherical scaling and wavelet functions. Each detail signal is related to a specific resolution level and covers a certain part of the signal spectrum. Principal component analysis (PCA) reveals specific signal patterns in the space as well as the time domain, such as trends and seasonal as well as semi-seasonal variations. We apply the above-mentioned combined technique to GRACE L1C residual potential differences that have been
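
    The PCA step can be sketched as an EOF decomposition of a (time x space) matrix via the SVD; the synthetic data below (a trend plus an annual cycle) stands in for the MRR detail signals.

    ```python
    import numpy as np

    def pca_modes(field, n_modes=3):
        """PCA/EOF decomposition of a (time, space) data matrix via SVD.

        Returns spatial patterns, their temporal amplitudes, and the
        fraction of variance each mode explains; trends and (semi-)seasonal
        cycles typically appear as the leading modes.
        """
        anom = field - field.mean(axis=0)          # remove the temporal mean
        u, s, vt = np.linalg.svd(anom, full_matrices=False)
        explained = s ** 2 / np.sum(s ** 2)
        return vt[:n_modes], u[:, :n_modes] * s[:n_modes], explained[:n_modes]

    # synthetic stand-in: trend plus annual cycle over 120 months, 500 cells
    rng = np.random.default_rng(9)
    t = np.arange(120)
    space = rng.random((2, 500))                   # two fixed spatial patterns
    data = (0.02 * t)[:, None] * space[0] \
         + np.sin(2 * np.pi * t / 12)[:, None] * space[1] \
         + rng.normal(scale=0.1, size=(120, 500))
    patterns, amplitudes, var = pca_modes(data)
    print("explained variance of leading modes:", var.round(3))
    ```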

  13. Multiresolution diffusion entropy analysis of time series: an application to births to teenagers in Texas

    NASA Astrophysics Data System (ADS)

    Scafetta, Nicola; West, Bruce J.

    2004-04-01

    The multiresolution diffusion entropy analysis is used to evaluate the stochastic information left in a time series after systematic removal of certain non-stationarities. This method allows us to establish whether the identified patterns are sufficient to capture all relevant information contained in a time series. If they do not, the method suggests the need for further interpretation to explain the residual memory in the signal. We apply the multiresolution diffusion entropy analysis to the daily count of births to teens in Texas from 1964 through 2000 because it is a typical example of a non-stationary time series, having an anomalous trend, an annual variation, as well as short time fluctuations. The analysis is repeated for the three main racial/ethnic groups in Texas (White, Hispanic and African American), as well as for married and unmarried teens during the years from 1994 to 2000, and we study the differences that emerge among the groups.

  14. Adaptive multiresolution WENO schemes for multi-species kinematic flow models

    SciTech Connect

    Buerger, Raimund (E-mail: rburger@ing-mat.udec.cl); Kozakevicius, Alice (E-mail: alicek@smail.ufsm.br)

    2007-06-10

    Multi-species kinematic flow models lead to strongly coupled, nonlinear systems of first-order, spatially one-dimensional conservation laws. The number of unknowns (the concentrations of the species) may be arbitrarily high. Models of this class include a multi-species generalization of the Lighthill-Whitham-Richards traffic model and a model for the sedimentation of polydisperse suspensions. Their solutions typically involve kinematic shocks separating areas of constancy, and should be approximated by high resolution schemes. A fifth-order weighted essentially non-oscillatory (WENO) scheme is combined with a multiresolution technique that adaptively generates a sparse point representation (SPR) of the evolving numerical solution. Thus, computational effort is concentrated on zones of strong variation near shocks. Numerical examples from the traffic and sedimentation models demonstrate the effectiveness of the resulting WENO multiresolution (WENO-MRS) scheme.

  15. Multiresolution motion planning for autonomous agents via wavelet-based cell decompositions.

    PubMed

    Cowlagi, Raghvendra V; Tsiotras, Panagiotis

    2012-10-01

    We present a path- and motion-planning scheme that is "multiresolution" both in the sense of representing the environment with high accuracy only locally and in the sense of addressing the vehicle kinematic and dynamic constraints only locally. The proposed scheme uses rectangular multiresolution cell decompositions, efficiently generated using the wavelet transform. The wavelet transform is widely used in signal and image processing, with emerging applications in autonomous sensing and perception systems. The proposed motion planner enables the simultaneous use of the wavelet transform in both the perception and in the motion-planning layers of vehicle autonomy, thus potentially reducing online computations. We rigorously prove the completeness of the proposed path-planning scheme, and we provide numerical simulation results to illustrate its efficacy.
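
    To illustrate how a wavelet transform cheaply yields a rectangular multiresolution cell decomposition, the following toy sketch block-averages a hypothetical occupancy grid with the Haar transform (PyWavelets; the grid, obstacle, and level count are invented for illustration):

```python
import numpy as np
import pywt

# Hypothetical 64x64 occupancy grid: 1 = obstacle, 0 = free space.
grid = np.zeros((64, 64))
grid[20:30, 40:55] = 1.0

# Haar approximation coefficients at level j are (up to a factor 2**j)
# block averages over 2**j x 2**j cells: coarse cells suffice far from the
# vehicle, while detail coefficients flag where finer cells are needed.
coeffs = pywt.wavedec2(grid, 'haar', level=3)
mean_occupancy = coeffs[0] / 2**3        # rescale to per-block mean occupancy
print(mean_occupancy.shape)              # (8, 8) coarse cells
```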

  16. Aircraft target identification based on 2D ISAR images using multiresolution analysis wavelet

    NASA Astrophysics Data System (ADS)

    Fu, Qiang; Xiao, Huaitie; Hu, Xiangjiang

    2001-09-01

    The formation of 2D ISAR images for radar target identification holds much promise for additional distinguishability between targets. Since an image contains important information over a wide range of scales, and this information is often independent from one scale to another, wavelet analysis provides a method of identifying the spatial frequency content of an image and the local regions within the image where those spatial frequencies exist. In this paper, a multiresolution analysis wavelet method based on 2D ISAR images is proposed for aircraft radar target identification with wide-band, high-range-resolution radar. The proposed method is performed in three steps: first, radar backscatter signals are processed into 2D ISAR images; then, Mallat's wavelet algorithm is used to decompose the images; finally, a three-layer perceptron neural net is used as the classifier. The experimental results demonstrate the feasibility of using multiresolution wavelet analysis for target identification.

  17. Multi-resolution imaging with an optimized number and distribution of sampling points.

    PubMed

    Capozzoli, Amedeo; Curcio, Claudio; Liseno, Angelo

    2014-05-05

    We propose an approach, of interest in Imaging and Synthetic Aperture Radar (SAR) tomography, for the optimal determination of the scanning region dimension, the number of sampling points therein, and their spatial distribution, in the case of single-frequency monostatic multi-view and multi-static single-view target reflectivity reconstruction. The method recasts the reconstruction of the target reflectivity from the field data collected on the scanning region in terms of a finite-dimensional algebraic linear inverse problem. The dimension of the scanning region and the number and positions of the sampling points are determined by optimizing the singular value behavior of the matrix defining the linear operator. Single resolution, multi-resolution and dynamic multi-resolution can be afforded by the method, allowing a flexibility not available in previous approaches. The performance has been evaluated via numerical and experimental analysis.
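
    The core idea is that the singular-value spectrum of the discretized scattering operator saturates once the field's degrees of freedom are exhausted, so additional samples stop paying off. A toy 1-D illustration of that behavior (the geometry, wavelength, and rank threshold are invented, not the paper's actual operator):

```python
import numpy as np

# Toy 1-D model: a single-frequency field radiated by a discretized
# reflectivity profile, observed at P points on a parallel line.
wavelength = 0.03
k = 2 * np.pi / wavelength
x_src = np.linspace(-0.5, 0.5, 64)               # reflectivity samples

def sensing_matrix(P, aperture=1.0, standoff=1.0):
    x_obs = np.linspace(-aperture / 2, aperture / 2, P)
    r = np.hypot(x_obs[:, None] - x_src[None, :], standoff)
    return np.exp(-1j * k * r) / r               # spherical-wave propagator

for P in (16, 32, 64, 128):
    s = np.linalg.svd(sensing_matrix(P), compute_uv=False)
    dof = int(np.sum(s / s[0] > 1e-2))           # effective rank at -40 dB
    print(f"P={P:4d}  effective rank={dof}")
# Once P exceeds the field's degrees of freedom, the effective rank stalls:
# extra sampling points no longer improve the inversion.
```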

  18. Efficient Human Action and Gait Analysis Using Multiresolution Motion Energy Histogram

    NASA Astrophysics Data System (ADS)

    Yu, Chih-Chang; Cheng, Hsu-Yung; Cheng, Chien-Hung; Fan, Kuo-Chin

    2010-12-01

    The Average Motion Energy (AME) image is a good way to describe human motions. However, it faces a computational efficiency problem as the number of database templates increases. In this paper, we propose a histogram-based approach to improve computational efficiency. We convert the human action/gait recognition problem into a histogram matching problem. In order to speed up the recognition process, we adopt a multiresolution structure on the Motion Energy Histogram (MEH). To utilize the multiresolution structure more efficiently, we propose an automated uneven partitioning method based on the quadtree decomposition of the MEH. As a result, the computation time depends only on the number of partitioned histogram bins, which is much smaller than in the AME method. Two applications, action recognition and gait classification, are conducted in the experiments to demonstrate the feasibility and validity of the proposed approach.

  19. An Optimised System for Generating Multi-Resolution Dtms Using NASA Mro Datasets

    NASA Astrophysics Data System (ADS)

    Tao, Y.; Muller, J.-P.; Sidiropoulos, P.; Veitch-Michaelis, J.; Yershov, V.

    2016-06-01

    Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO), has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint-based multi-resolution image co-registration and an adaptive least-squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), and provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the use of the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with failure in matching, matching confidence estimation, outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

  20. Two-dimensional SPICE-linked multiresolution impedance method for low-frequency electromagnetic interactions.

    PubMed

    Eberdt, Michael; Brown, Patrick K; Lazzi, Gianluca

    2003-07-01

    A multiresolution impedance method for the solution of low-frequency electromagnetic interaction problems typically encountered in bioelectromagnetics is presented. While the impedance method in its original form is based on the discretization of the scattering objects into equal-sized cells, our formulation decreases the number of unknowns by using an automatic mesh generation method that does not yield equal-sized cells in the modeling space. Results indicate that our multiresolution mesh generation scheme can provide a 50%-80% reduction in cell count, providing new opportunities for the solution of low-frequency bioelectromagnetic problems that require a high level of detail only in specific regions of the modeling space. Furthermore, linking the mesh generator to a circuit simulator such as SPICE permits the addition of arbitrarily complex passive and active circuit elements to the generated impedance network, opening the door to significant advances in the modeling of bioelectromagnetic phenomena.

  1. Comparative evaluation of multiresolution optimization strategies for multimodality image registration by maximization of mutual information.

    PubMed

    Maes, F; Vandermeulen, D; Suetens, P

    1999-12-01

    Maximization of mutual information of voxel intensities has been demonstrated to be a very powerful criterion for three-dimensional medical image registration, allowing robust and accurate fully automated affine registration of multimodal images in a variety of applications, without the need for segmentation or other preprocessing of the images. In this paper, we investigate the performance of various optimization methods and multiresolution strategies for maximization of mutual information, aiming at increasing registration speed when matching large high-resolution images. We show that mutual information is a continuous function of the affine registration parameters when appropriate interpolation is used, and we derive analytic expressions of its derivatives that allow numerically exact evaluation of its gradient. Various multiresolution gradient- and non-gradient-based optimization strategies, such as Powell, simplex, steepest-descent, conjugate-gradient, quasi-Newton and Levenberg-Marquardt methods, are evaluated for registration of computed tomography (CT) and magnetic resonance (MR) images of the brain. Speed-ups of a factor of 3 on average compared to Powell's method at full resolution are achieved with similar precision and without a loss of robustness with the simplex, conjugate-gradient and Levenberg-Marquardt methods using a two-level multiresolution scheme. Large data sets such as 256² × 128 MR and 512² × 48 CT images can be registered with subvoxel precision in <5 min CPU time on current workstations.
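
    For reference, mutual information between two images can be evaluated directly from their joint intensity histogram. A minimal sketch (the function name and the bin count of 64 are arbitrary choices, not the paper's implementation):

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information of two images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)        # marginal of image a
    p_b = p_ab.sum(axis=0, keepdims=True)        # marginal of image b
    nz = p_ab > 0                                # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```

    A registration loop would resample the floating image under candidate affine parameters and maximize this value with one of the optimizers the paper compares.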

  2. Combining nonlinear multiresolution system and vector quantization for still image compression

    NASA Astrophysics Data System (ADS)

    Wong, Yiu-fai

    1994-05-01

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur the edges. Thus, the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used in generating a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized in the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.
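
    A median-filter Laplacian-style pyramid of the kind described can be sketched in a few lines (SciPy/NumPy; the nearest-neighbour upsampling and filter size are assumptions, not the authors' exact pipeline):

```python
import numpy as np
from scipy.ndimage import median_filter

def median_laplacian_pyramid(img, levels=3, size=3):
    """Laplacian-style pyramid using a median filter as the (nonlinear) smoother."""
    details, g = [], img.astype(float)
    for _ in range(levels):
        coarse = median_filter(g, size=size)[::2, ::2]      # smooth, then decimate
        up = np.repeat(np.repeat(coarse, 2, 0), 2, 1)[:g.shape[0], :g.shape[1]]
        details.append(g - up)       # detail image: small, localized at edges
        g = coarse
    return details, g                # g is the coarse residual image

def reconstruct(details, g):
    for d in reversed(details):
        g = np.repeat(np.repeat(g, 2, 0), 2, 1)[:d.shape[0], :d.shape[1]] + d
    return g
```

    With matching nearest-neighbour upsampling the decomposition is exactly invertible, which is what allows the quantization-error feedback to confine all the error to the first level.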

  3. Two-channel multirate processing as general basis of two-dimensional multiresolution processing methods modeling

    NASA Astrophysics Data System (ADS)

    de Almeida, Maria d. G.

    2004-08-01

    The multirate processing of two-dimensional (2D) signals involves various types of sampling and sampling matrices, due to different grid geometries. A more consistent theory is therefore needed in order to obtain better techniques and useful results in many areas, such as image and signal processing, biomedicine, telecommunications, multimedia, remote sensing, and optics. In this work, a theory of 2-channel complementary filter banks, based on 2D multirate processing and the properties of complementary filters, is presented as a foundation for modeling multiresolution methods for the nonseparable processing of two-dimensional signals. Signal analysis and synthesis using 2-channel complementary filter (CF) banks, the conditions under which the reconstruction of the 2D input signal is perfect, and the frequency division in the analysis part are developed. Since multiresolution decomposition of signals, wavelet representation, and filter banks are strongly linked, their relation to complementary filter banks is established. Other multiresolution methods can be derived from this theory, and applications have been found in compression, edge detection, 2D scaling and wavelet functions, and digital TV systems.

  4. Combined adjustment of multi-resolution satellite imagery for improved geo-positioning accuracy

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Wu, Bo; Zhu, Qing

    2016-04-01

    Due to the widespread availability of satellite imagery nowadays, it is common for regions to be covered by satellite imagery from multiple sources with multiple resolutions. This paper presents a combined adjustment approach to integrate multi-source, multi-resolution satellite imagery for improved geo-positioning accuracy without the use of ground control points (GCPs). Instead of using all the rational polynomial coefficients (RPCs) of the images for processing, only those dominating the geo-positioning accuracy are used in the combined adjustment. They, together with tie points identified in the images, are used as observations in the adjustment model. Proper weights are determined for each observation, and ridge parameters are determined for better convergence of the adjustment solution. The outputs of the combined adjustment are the improved dominating RPCs of the images, from which improved geo-positioning accuracy can be obtained. Experiments using ZY-3, SPOT-7 and Pleiades-1 imagery in Hong Kong, and Cartosat-1 and WorldView-1 imagery in Catalonia, Spain, demonstrate that the proposed method is able to effectively improve the geo-positioning accuracy of satellite images. The combined adjustment approach thus offers an alternative way to improve geo-positioning accuracy, enabling the integration of multi-source, multi-resolution satellite imagery to generate more precise and consistent 3D spatial information and permitting the comparative and synergistic use of multi-resolution satellite images from multiple sources.

  5. Deconstructing a polygenetic landscape using LiDAR and multi-resolution analysis

    NASA Astrophysics Data System (ADS)

    Barrineau, Patrick; Dobreva, Iliyana; Bishop, Michael P.; Houser, Chris

    2016-04-01

    It is difficult to deconstruct a complex polygenetic landscape into distinct process-form regimes using digital elevation models (DEMs) and fundamental land-surface parameters. This study describes a multi-resolution analysis approach for extracting geomorphological information from a LiDAR-derived DEM over a stabilized aeolian landscape in south Texas that exhibits distinct process-form regimes associated with different stages in landscape evolution. Multi-resolution analysis was used to generate average altitudes using a Gaussian filter with a maximum radius of 1 km at 20 m intervals, resulting in 50 generated DEMs. This multi-resolution dataset was analyzed using Principal Components Analysis (PCA) to identify the dominant variance structure in the dataset. The first 4 principal components (PC) account for 99.9% of the variation, and classification of the variance structure reveals distinct multi-scale topographic variation associated with different process-form regimes and evolutionary stages. Our results suggest that this approach can be used to generate quantitatively rigorous morphometric maps to guide field-based sedimentological and geophysical investigations, which tend to use purposive sampling techniques resulting in bias and error.
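
    The multi-resolution stack and its PCA can be emulated by Gaussian filtering at increasing scales followed by an eigen-decomposition per pixel. A hedged sketch (the filter scales, component count, and names are illustrative; the study's 1 km / 20 m interval averaging is not reproduced exactly):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_pca(dem, sigmas, n_components=4):
    """PCA over a stack of progressively smoothed DEMs (one band per scale)."""
    stack = np.stack([gaussian_filter(dem, s) for s in sigmas], axis=-1)
    X = stack.reshape(-1, len(sigmas)).astype(float)
    X -= X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(vals)[::-1][:n_components]     # dominant modes first
    scores = X @ vecs[:, order]                       # per-pixel PC scores
    return scores.reshape(*dem.shape, n_components), vals[order] / vals.sum()
```

    Classifying the per-pixel scores then yields the kind of morphometric map used to separate process-form regimes.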

  6. Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery

    NASA Astrophysics Data System (ADS)

    Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.

    2016-06-01

    The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. In this context, the object-based image analysis (OBIA) approach has proved to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage in OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by the eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality, based on the discrepancy between reference polygons and corresponding image segments, was carried out to identify the optimal setting of multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).

  7. Combining nonlinear multiresolution system and vector quantization for still image compression

    SciTech Connect

    Wong, Y.

    1993-12-17

    It is popular to use multiresolution systems for image coding and compression. However, general-purpose techniques such as filter banks and wavelets are linear. While these systems are rigorous, nonlinear features in the signals cannot be utilized in a single entity for compression. Linear filters are known to blur the edges. Thus, the low-resolution images are typically blurred, carrying little information. We propose and demonstrate that edge-preserving filters such as median filters can be used in generating a multiresolution system using the Laplacian pyramid. The signals in the detail images are small and localized to the edge areas. Principal component vector quantization (PCVQ) is used to encode the detail images. PCVQ is a tree-structured VQ which allows fast codebook design and encoding/decoding. In encoding, the quantization error at each level is fed back through the pyramid to the previous level so that ultimately all the error is confined to the first level. With simple coding methods, we demonstrate that images with a PSNR of 33 dB can be obtained at 0.66 bpp without the use of entropy coding. When the rate is decreased to 0.25 bpp, a PSNR of 30 dB can still be achieved. Combined with an earlier result, our work demonstrates that nonlinear filters can be used for multiresolution systems and image coding.

  8. Three-dimensional wavelet transform and multiresolution surface reconstruction from volume data

    NASA Astrophysics Data System (ADS)

    Wang, Yun; Sloan, Kenneth R., Jr.

    1995-04-01

    Multiresolution surface reconstruction from volume data is very useful in medical imaging, data compression and multiresolution modeling. This paper presents a hierarchical structure for extracting multiresolution surfaces from volume data by using a 3-D wavelet transform. The hierarchical scheme is used to visualize different levels of detail of the surface and allows a user to explore different features of the surface at different scales. We use 3-D surface curvature as a smoothness condition to control the hierarchical level and the distance error between the reconstructed surface and the original data as the stopping criterion. A 3-D wavelet transform provides an appropriate hierarchical structure to build the volume pyramid. It can be constructed by the tensor products of 1-D wavelet transforms in three subspaces. We choose symmetric smoothing filters such as Haar, linear, pseudo-Coiflet, cubic B-spline and their corresponding orthogonal wavelets to build the volume pyramid. The surface is reconstructed at each level of volume data by using the cell interpolation method. Some experimental results are shown through the comparison of the different filters based on the distance errors of the surfaces.

  9. Characterization and in-vivo evaluation of a multi-resolution foveated laparoscope for minimally invasive surgery

    PubMed Central

    Qin, Yi; Hua, Hong; Nguyen, Mike

    2014-01-01

    The state-of-the-art laparoscope lacks the ability to capture high-magnification and wide-angle images simultaneously, which introduces challenges when both close-up views for details and wide-angle overviews for orientation are required in clinical practice. A multi-resolution foveated laparoscope (MRFL), which can provide the surgeon with both high-magnification close-up and wide-angle images, was proposed to address the limitations of state-of-the-art surgical laparoscopes. In this paper, we present the overall system design from both clinical and optical system perspectives, along with a set of experiments to characterize the optical performance of our prototype system, and describe our preliminary in-vivo evaluation of the prototype with a pig model. The experimental results demonstrate that at the optimum working distance of 120 mm, the high-magnification probe has a resolution of 6.35 lp/mm and images a surgical area of 53 × 40 mm²; the wide-angle probe provides a surgical area coverage of 160 × 120 mm² with a resolution of 2.83 lp/mm. The in-vivo evaluation demonstrates that MRFL has great potential in clinical applications for improving the safety and efficiency of laparoscopic surgery. PMID:25136485

  10. Cost tradeoffs in consequence management at nuclear power plants: A risk based approach to setting optimal long-term interdiction limits for regulatory analyses

    SciTech Connect

    Mubayi, V.

    1995-05-01

    The consequences of severe accidents at nuclear power plants can be limited by various protective actions, including emergency responses and long-term measures, to reduce exposures of affected populations. Each of these protective actions involves costs to society. The costs of the long-term protective actions depend on the criterion adopted for the allowable level of long-term exposure. This criterion, called the "long-term interdiction limit," is expressed in terms of the projected dose to an individual over a certain time period from the long-term exposure pathways. The two measures of offsite consequences, latent cancers and costs, are inversely related, and the choice of an interdiction limit is, in effect, a trade-off between these two measures. By monetizing the health effects (through ascribing a monetary value to life lost), the costs of the two consequence measures vary with the interdiction limit: the health-effect costs increase as the limit is relaxed while the protective-action costs decrease. The minimum of the total cost curve can be used to calculate an optimal long-term interdiction limit. The calculation of such an optimal limit is presented for each of five US nuclear power plants which were analyzed for severe accident risk in the NUREG-1150 program by the Nuclear Regulatory Commission.
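
    The optimization itself is a one-dimensional cost minimization. A purely illustrative sketch with invented cost curves (not NUREG-1150 data): monetized health costs rise with the dose limit, protective-action costs fall, and the optimum is the minimum of their sum.

```python
import numpy as np

d = np.linspace(0.1, 10.0, 1000)   # candidate dose limits (arbitrary units)
protective = 50.0 / d              # interdiction cost falls as the limit is relaxed
health = 4.0 * d                   # monetized health cost rises with the limit
total = protective + health
d_opt = d[np.argmin(total)]
print(f"optimal interdiction limit ~ {d_opt:.2f}")
# For C_p = a/d and C_h = b*d the minimum is at d* = sqrt(a/b) ~ 3.54 here.
```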

  11. Interest rate next-day variation prediction based on hybrid feedforward neural network, particle swarm optimization, and multiresolution techniques

    NASA Astrophysics Data System (ADS)

    Lahmiri, Salim

    2016-02-01

    Multiresolution analysis techniques, including the continuous wavelet transform, empirical mode decomposition, and variational mode decomposition, are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. The particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, the 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.

  12. a Web-Based Interactive Tool for Multi-Resolution 3d Models of a Maya Archaeological Site

    NASA Astrophysics Data System (ADS)

    Agugiaro, G.; Remondino, F.; Girardi, G.; von Schwerin, J.; Richards-Rissetto, H.; De Amicis, R.

    2011-09-01

    Continuous technological advances in surveying, computing and digital-content delivery are strongly contributing to a change in the way Cultural Heritage is "perceived": new tools and methodologies for documentation, reconstruction and research are being created to assist not only scholars, but also to reach more potential users (e.g. students and tourists) willing to access more detailed information about art history and archaeology. 3D computer-simulated models, sometimes set in virtual landscapes, offer for example the chance to explore possible hypothetical reconstructions, while on-line GIS resources can help interactive analyses of relationships and change over space and time. While for some research purposes a traditional 2D approach may suffice, this is not the case for more complex analyses concerning spatial and temporal features of architecture, such as the relationship of architecture and landscape, visibility studies, etc. The project therefore aims at creating a tool, called "QueryArch3D", which enables the web-based visualisation and querying of an interactive, multi-resolution 3D model in the framework of Cultural Heritage. More specifically, a complete Maya archaeological site, located in Copan (Honduras), has been chosen as a case study to test and demonstrate the platform's capabilities. Much of the site has been surveyed and modelled at different levels of detail (LoD) and the geometric model has been semantically segmented and integrated with attribute data gathered from several external data sources. The paper describes the characteristics of the research work, along with its implementation issues and the initial results of the developed prototype.

  13. Proof-of-concept demonstration of a miniaturized three-channel multiresolution imaging system

    NASA Astrophysics Data System (ADS)

    Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Vervaeke, Michael; Van Erps, Jürgen; Thienpont, Hugo

    2014-05-01

    Multichannel imaging systems have several potential applications such as multimedia, surveillance, medical imaging and machine vision, and have therefore been a hot research topic in recent years. Such imaging systems, inspired by natural compound eyes, have many channels, each covering only a portion of the total field-of-view of the system. As a result, these systems provide a wide field-of-view (FOV) while having a small volume and a low weight. Different approaches have been employed to realize a multichannel imaging system. We demonstrated that the different channels of the imaging system can be designed in such a way that each has different imaging properties (angular resolution, FOV, focal length). Using optical ray-tracing software (CODE V), we have designed a miniaturized multiresolution imaging system that contains three channels, each consisting of four aspherical lens surfaces fabricated from PMMA material through ultra-precision diamond tooling. The first channel possesses the finest angular resolution (0.0096°) and the narrowest FOV (7°), whereas the third channel has the widest FOV (80°) and the coarsest angular resolution (0.078°). The second channel has intermediate properties. Such a multiresolution capability allows different image processing algorithms to be implemented on the different segments of an image sensor. This paper presents the experimental proof-of-concept demonstration of the imaging system using a commercial CMOS sensor and gives an in-depth analysis of the obtained results. Experimental images captured with the three channels are compared with the corresponding simulated images. The experimental MTFs of the channels have also been calculated from the captured images of a slanted-edge target test. This multichannel multiresolution approach opens the opportunity for low-cost compact imaging systems that can be equipped with smart imaging capabilities.

  14. A multiresolution method for climate system modeling: application of spherical centroidal Voronoi tessellations

    SciTech Connect

    Ringler, Todd; Ju, Lili; Gunzburger, Max

    2008-11-14

    During the next decade and beyond, climate system models will be challenged to resolve scales and processes that are far beyond their current scope. Each climate system component has its prototypical example of an unresolved process that may strongly influence the global climate system, ranging from eddy activity within ocean models, to ice streams within ice sheet models, to surface hydrological processes within land system models, to cloud processes within atmosphere models. These new demands will almost certainly result in the development of multiresolution schemes that are able, at least regionally, to faithfully simulate these fine-scale processes. Spherical centroidal Voronoi tessellations (SCVTs) offer one potential path toward the development of robust, multiresolution climate system model components. SCVTs allow for the generation of high-quality Voronoi diagrams and Delaunay triangulations through the use of an intuitive, user-defined density function. In each of the examples provided, this method results in high-quality meshes where the quality measures are guaranteed to improve as the number of nodes is increased. Real-world examples are developed for the Greenland ice sheet and the North Atlantic ocean. Idealized examples are developed for ocean–ice shelf interaction and for regional atmospheric modeling. In addition to defining, developing, and exhibiting SCVTs, we pair this mesh generation technique with a previously developed finite-volume method. Our numerical example is based on the nonlinear shallow water equations spanning the entire surface of the sphere. This example is used to elucidate both the potential benefits of this multiresolution method and the challenges ahead.
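
    A planar analogue of the CVT construction is Lloyd's algorithm driven by a user-defined density. A minimal probabilistic sketch (the density function, point counts, and iteration budget are invented; spherical geometry and the paper's mesh-quality guarantees are not addressed):

```python
import numpy as np
from scipy.spatial import cKDTree

def lloyd_cvt(n_gen=64, iters=60, n_samples=200_000, seed=1):
    """Density-weighted CVT on the unit square via probabilistic Lloyd iteration."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n_samples, 2))
    density = np.exp(-4.0 * (pts**2).sum(axis=1))   # invented density: refine near 0
    gens = rng.uniform(-1.0, 1.0, size=(n_gen, 2))
    for _ in range(iters):
        owner = cKDTree(gens).query(pts)[1]         # Voronoi region of each sample
        for g in range(n_gen):                      # move each generator to the
            m = owner == g                          # density-weighted centroid
            if m.any():
                w = density[m]
                gens[g] = (pts[m] * w[:, None]).sum(axis=0) / w.sum()
    return gens
```

    High density pulls generators together, producing locally refined cells, which is the mechanism SCVTs use to concentrate resolution regionally.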

  15. Novel Laplacian scheme and multiresolution modal curvatures for structural damage identification

    NASA Astrophysics Data System (ADS)

    Cao, Maosen; Qiao, Pizhong

    2009-05-01

    Modal curvature is more sensitive to structural damage than directly measured mode shape, and the standard Laplace operator is commonly used to acquire the modal curvatures from the mode shapes. However, the standard Laplace operator is very prone to noise, which often leads to the degraded modal curvatures incapable of identifying damage. To overcome this problem, a novel Laplacian scheme is proposed, from which an improved damage identification algorithm is developed. The proposed step-by-step procedures in the algorithm include: (1) By progressively upsampling the standard Laplace operator, a new Laplace operator is constructed, from which a Laplace operator array is formed; (2) by applying the Laplace operator array to the retrieved mode shape of a damaged structure, the multiresolution curvature mode shapes are produced, on which the damage trait, previously shadowed under the standard Laplace operator, can be revealed by a ridge of multiresolution modal curvatures; (3) a Gaussian filter is then incorporated into the new Laplace operator to produce a more versatile Laplace operator with properties of both the smoothness and differential capabilities, in which the damage feature is effectively strengthened; and (4) a smoothened nonlinear energy operator is introduced to further enhance the damage feature by eliminating the trend interference of the multiresolution modal curvatures, and it results in a significantly improved damage trait. The proposed algorithm is tested using the data generated by an analytical crack beam model, and its applicability is validated with an experimental program of a delaminated composite beam using scanning laser vibrometer (SLV) to acquire mode shapes. The results are compared in each step, showing increasing degree of improvement for damage effect. Numerical and experimental results demonstrate that the proposed novel Laplacian scheme provides a promising damage identification algorithm, which exhibits apparent advantages (e

  16. Refocusing capabilities in a miniaturized multi-channel multi-resolution imaging system using a tunable lens

    NASA Astrophysics Data System (ADS)

    Smeesters, L.; Belay, Gebirie Y.; Ottevaere, H.; Meuret, Youri; Thienpont, H.

    2014-05-01

    Inspired by nature, many application domains might gain from combining the multi-channel design of the compound eyes of insects and the refocusing capability of the human eye in one compact configuration. Multi-channel refocusing imaging systems are nowadays only commercially available in bulky and expensive designs, since classical refocusing mechanisms cannot be integrated in a miniaturized configuration. We designed a wafer-level multi-resolution two-channel imaging system with refocusing capabilities using a voltage-tunable liquid lens. One channel is able to capture a wide field-of-view image (2×40°) of the surroundings with a low angular resolution (0.078°), whereas a detailed image of a small region of interest (2×7.57°) can be obtained with the high-angular-resolution channel (0.0098°). The latter channel contains the tunable lens and therefore also the refocusing capability. In this paper, we first discuss the working principle, tunability and optical quality of a voltage-tunable liquid lens. Based on optical characterization measurements with a Mach-Zehnder interferometer, we designed a tunable lens model. The designed tunable lens model and its validation in an imaging setup show a diffraction-limited image quality. Following this, we discuss the performance of the designed two-channel imaging system. Both the wide field-of-view and high-angular-resolution optical channels show diffraction-limited performance, ensuring a good image quality. Moreover, we obtained an improved depth-of-field, from 0.254 m to infinity, in comparison with the current state-of-the-art published wafer-level multi-channel imaging systems, which show a depth-of-field from 9 m to infinity.

  17. Spatiotemporal multi-resolution approximation of the Amari type neural field model.

    PubMed

    Aram, P; Freestone, D R; Dewar, M; Scerri, K; Jirsa, V; Grayden, D B; Kadirkamanathan, V

    2013-02-01

    Neural fields are spatially continuous state variables described by integro-differential equations, which are well suited to describe the spatiotemporal evolution of cortical activations on multiple scales. Here we develop a multi-resolution approximation (MRA) framework for the integro-difference equation (IDE) neural field model based on semi-orthogonal cardinal B-spline wavelets. In this way, a flexible framework is created, whereby both macroscopic and microscopic behavior of the system can be represented simultaneously. State and parameter estimation is performed using the expectation maximization (EM) algorithm. A synthetic example is provided to demonstrate the framework.
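
    For orientation, the underlying field dynamics can be stepped forward in a few lines; the B-spline wavelet MRA basis and the EM estimation are the paper's contribution and are not shown here. A 1-D toy simulation of an Amari-type integro-difference field (the kernel shape, firing-rate function, and all parameters are invented):

```python
import numpy as np

# Discrete-time field update: v_{t+1}(r) = xi * v_t(r) + dr * sum_r' w(r - r') f(v_t(r'))
n, dr = 256, 0.1
r = (np.arange(n) - n // 2) * dr
w = 2.0 * np.exp(-r**2 / 2.0) - np.exp(-r**2 / 8.0)      # Mexican-hat connectivity
f = lambda v: 1.0 / (1.0 + np.exp(-3.0 * (v - 1.0)))     # sigmoidal firing rate
v = np.exp(-r**2)                                        # initial activity bump
for _ in range(100):
    v = 0.8 * v + dr * np.convolve(f(v), w, mode='same') # decay + lateral input
```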

  18. Origin of mobility enhancement by chemical treatment of gate-dielectric surface in organic thin-film transistors: Quantitative analyses of various limiting factors in pentacene thin films

    NASA Astrophysics Data System (ADS)

    Matsubara, R.; Sakai, Y.; Nomura, T.; Sakai, M.; Kudo, K.; Majima, Y.; Knipp, D.; Nakamura, M.

    2015-11-01

    To improve the performance of organic thin-film transistors (TFTs), gate-insulator surface treatments are often applied. However, the origin of the resulting mobility increase has not been well understood because the mobility-limiting factors have not been compared quantitatively. In this work, we clarify the influence of gate-insulator surface treatments in pentacene thin-film transistors on the limiting factors of mobility, i.e., the size of the crystal-growth domains, the crystallite size, the HOMO-band-edge fluctuation, and the carrier transport barrier at domain boundaries. We quantitatively investigated these factors for pentacene TFTs with bare, hexamethyldisilazane-treated, and polyimide-coated SiO2 layers as gate dielectrics. By applying these surface treatments, the size of the crystal-growth domains increases, but both the crystallite size and the HOMO-band-edge fluctuation remain unchanged. Analyzing the experimental results, we also show that the barrier height at the boundary between crystal-growth domains is not sensitive to the treatments. The results imply that the essential increase in mobility by these surface treatments is only due to the increase in the size of the crystal-growth domains, or equivalently the decrease in the number of energy barriers at domain boundaries in the TFT channel.

  19. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    SciTech Connect

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelet modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end-user requirements of the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases, users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases, they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale size data sets. We will discuss the way we utilized wavelet decomposition in our domain to facilitate compression and to answer a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.
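
    The compression mechanism behind such systems is hard thresholding of wavelet coefficients: keep the few largest, zero the rest, and reconstruct an approximation whose accuracy degrades gracefully as fewer coefficients are kept. A minimal 1-D sketch with PyWavelets (the wavelet, level, and kept fraction are arbitrary choices, not AQSIM's settings):

```python
import numpy as np
import pywt

def wavelet_compress(signal, wavelet='db4', level=5, keep=0.05):
    """Approximate a signal by keeping only the largest `keep` fraction
    of its wavelet coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)
    arr = np.where(np.abs(arr) >= thresh, arr, 0.0)      # hard thresholding
    rec = pywt.waverec(pywt.array_to_coeffs(arr, slices, output_format='wavedec'),
                       wavelet)
    return rec[:len(signal)]
```

    Trend queries can then be answered on the coarse coefficients alone, trading accuracy for response time in the way the abstract describes.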

  20. IMFIT Integrated Modeling Applications Supporting Experimental Analysis: Multiple Time-Slice Kinetic EFIT Reconstructions, MHD Stability Limits, and Energy and Momentum Flux Analyses

    NASA Astrophysics Data System (ADS)

    Collier, A.; Lao, L. L.; Abla, G.; Chu, M. S.; Prater, R.; Smith, S. P.; St. John, H. E.; Guo, W.; Li, G.; Pan, C.; Ren, Q.; Park, J. M.; Bisai, N.; Srinivasan, R.; Sun, A. P.; Liu, Y.; Worrall, M.

    2010-11-01

    This presentation summarizes several useful applications provided by the IMFIT integrated modeling framework to support DIII-D and EAST research. IMFIT is based on Python and utilizes a modular task-flow architecture with a central manager and extensive GUI support to coordinate tasks among component modules. The kinetic-EFIT application allows multiple time-slice reconstructions by fetching pressure profile data directly from MDS+ or from ONETWO or PTRANSP. The stability application analyzes a given reference equilibrium for stability limits by performing parameter perturbation studies with MHD codes such as DCON, GATO, ELITE, or PEST3. The transport task includes construction of experimental energy and momentum fluxes from profile analysis and comparison against theoretical models such as MMM95, GLF23, or TGLF.

  1. Multi-gene phylogenetic analyses reveal species limits, phylogeographic patterns, and evolutionary histories of key morphological traits in Entoloma (Agaricales, Basidiomycota).

    PubMed

    Morgado, L N; Noordeloos, M E; Lamoureux, Y; Geml, J

    2013-12-01

    Species from Entoloma subg. Entoloma are commonly recorded from both the Northern and Southern Hemispheres and, according to the literature, most of them have at least Nearctic-Palearctic distributions. However, these records are based on morphological analysis, and studies relating morphology, molecular data and geographical distribution have not been reported. In this study, we used phylogenetic species recognition criteria through gene genealogical concordance (based on nuclear ITS, LSU, rpb2 and mitochondrial SSU) to answer specific questions concerning species limits in Entoloma subg. Entoloma and their geographic distribution in Europe, North America and Australasia. The studied morphotaxa belong to sect. Entoloma, namely species like the notoriously poisonous E. sinuatum (E. lividum auct.), E. prunuloides (type species of sect. Entoloma), E. nitidum and the red-listed E. bloxamii. With a few exceptions, our results reveal strong phylogeographical partitions that were previously unknown. For example, no collection from Australasia proved to be conspecific with the Northern Hemisphere specimens. Almost all North American collections represent distinct taxa, sister to the European ones. And even within Europe, new lineages were uncovered for the red-listed E. bloxamii, which were previously unknown due to a broad morphological species concept. Our results clearly demonstrate the power of the phylogenetic species concept to reveal evolutionary units, to redefine the morphological limits of the species addressed and to provide insights into the evolutionary history of key morphological characters for Entoloma systematics. New taxa are described, and new combinations are made, including E. fumosobrunneum, E. pseudoprunuloides, E. ochreoprunuloides and E. caesiolamellatum. Epitypes are selected for E. prunuloides and E. bloxamii. In addition, complete descriptions are given of some other taxa used in this study for which modern descriptions are lacking, viz. E

  2. Multiresolution wavelet analysis of the body surface ECG before and after angioplasty.

    PubMed

    Gramatikov, B; Yi-Chun, S; Rix, H; Caminal, P; Thakor, N V

    1995-01-01

    Electrocardiographic recordings of patients with coronary artery stenosis, made before and after angioplasty, were analyzed by the multiresolution wavelet transform (MRWT) technique. The MRWT decomposes the signal of interest into its coarse and detail components at successively finer scales. MRWT was carried out on different leads in order to compare the P-QRS-T complexes from recordings made before with those made after percutaneous transluminal coronary angioplasty (PTCA). ECG signals before and after successful PTCA procedures show distinctive changes at certain scales, thus helping to identify whether the procedure has been successful. In six patients who underwent right coronary artery PTCA, varying levels of reperfusion were achieved, and the changes in the detail components of the ECG were shown to correlate with successful reperfusion. The detail components at scales 5 and 6, corresponding approximately to frequencies in the range of 2.3-8.3 Hz, are shown to be the most sensitive to ischemia-reperfusion changes (p < 0.05). The same conclusion was reached by synthesizing the post-PTCA signals from pre-PTCA signals with the help of these detail components. For on-line monitoring, a vector plot of the two most sensitive MRWT detail components, analogous to a vectorcardiogram, is proposed. Thus, multiresolution analysis of the ECG may be useful as a monitoring and diagnostic tool during angioplasty procedures.

  3. A multi-resolution image analysis system for computer-assisted grading of neuroblastoma differentiation

    NASA Astrophysics Data System (ADS)

    Kong, Jun; Sertel, Olcay; Shimada, Hiroyuki; Boyer, Kim L.; Saltz, Joel H.; Gurcan, Metin N.

    2008-03-01

    Neuroblastic Tumor (NT) is one of the most commonly occurring tumors in children. Of all types of NTs, neuroblastoma is the most malignant; it can be further categorized into undifferentiated (UD), poorly-differentiated (PD) and differentiating (D) types, in terms of the grade of pathological differentiation. Currently, pathologists determine the grade of differentiation by visual examination of tissue samples under the microscope. However, this process is subjective and, hence, may lead to intra- and inter-reader variability. In this paper, we propose a multi-resolution image analysis system that helps pathologists classify tissue samples according to their grades of differentiation. The inputs to this system are color images of haematoxylin and eosin (H&E) stained tissue samples. The complete image analysis system has five stages: segmentation, feature construction, feature extraction, classification and confidence evaluation. Due to the large number of input images, both parallel processing and multi-resolution analysis were carried out to reduce the execution time of the algorithm. Our training dataset consists of 387 image tiles of size 512×512 pixels from three whole-slide images. We tested the developed system with an independent set of 24 whole-slide images, eight from each grade. The developed system has an accuracy of 83.3% in correctly identifying the grade of differentiation, and it takes about two hours, on average, to process each whole-slide image.

  4. A multi-resolution approach to retrospectively-gated cardiac micro-CT reconstruction

    NASA Astrophysics Data System (ADS)

    Clark, D. P.; Johnson, G. A.; Badea, C. T.

    2014-03-01

    In preclinical research, micro-CT is commonly used to provide anatomical information; however, there is significant interest in using this technology to obtain functional information in cardiac studies. The fastest acquisition in 4D cardiac micro-CT imaging is achieved via retrospective gating, resulting in irregular angular projections after binning the projections into phases of the cardiac cycle. Under these conditions, analytical reconstruction algorithms, such as filtered back projection, suffer from streaking artifacts. Here, we propose a novel multi-resolution, iterative reconstruction algorithm inspired by robust principal component analysis, which prevents the introduction of streaking artifacts while attempting to recover the highest temporal resolution supported by the projection data. The algorithm achieves these results through a unique combination of the split Bregman method and joint bilateral filtration. We illustrate the algorithm's performance using a contrast-enhanced 2D slice through the MOBY mouse phantom and realistic projection acquisition and reconstruction parameters. Our results indicate that the algorithm is robust to undersampling levels of only 34 projections per cardiac phase and, therefore, has high potential in reducing both acquisition times and radiation dose. Another potential advantage of the multi-resolution scheme is the natural division of the reconstruction problem into a large number of independent sub-problems which can be solved in parallel. In future work, we will investigate the performance of this algorithm with retrospectively-gated cardiac micro-CT data.

  5. Multi-focus and multi-modal fusion: a study of multi-resolution transforms

    NASA Astrophysics Data System (ADS)

    Giansiracusa, Michael; Lutz, Adam; Ezekiel, Soundararajan; Alford, Mark; Blasch, Erik; Bubalo, Adnan; Thomas, Millicent

    2016-05-01

    Automated image fusion has a wide range of applications across a multitude of fields such as biomedical diagnostics, night vision, and target recognition. Automation in the field of image fusion is difficult because there are many types of imagery data that can be fused using different multi-resolution transforms. The different image fusion transforms provide coefficients for image fusion, creating a large number of possibilities. This paper seeks to understand how automation could be conceived for selecting the multi-resolution transform for different applications, starting with the multi-focus and multi-modal image sub-domains. The study analyzes which transforms are most effective in each sub-domain and identifies one or two transforms that are most effective for image fusion. The transform techniques are compared comprehensively to find a correlation between the fusion input characteristics and the optimal transform. The assessment is completed through the use of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based methods.
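
    Most transform-based fusion schemes of this kind share one skeleton: decompose both inputs, combine the coefficients with a fusion rule, and invert. A minimal wavelet instance using the common average-approximation, max-absolute-detail rule (PyWavelets; the wavelet and level are arbitrary, and this is only one of the transforms such a study would compare):

```python
import numpy as np
import pywt

def fuse(a, b, wavelet='db2', level=3):
    """Fuse two registered, same-size images: average the coarse approximation,
    take the larger-magnitude detail coefficient from either input."""
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]                      # approximation: average
    for da, db in zip(ca[1:], cb[1:]):                   # details: keep the sharper
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    return pywt.waverec2(fused, wavelet)
```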

  6. Reliability of multiresolution deconvolution for improving depth resolution in SIMS analysis

    NASA Astrophysics Data System (ADS)

    Boulakroune, M.'Hamed

    2016-11-01

    This paper addresses the effectiveness and reliability of a multiresolution deconvolution algorithm for recovering Secondary Ion Mass Spectrometry (SIMS) profiles altered by the measurement. This new algorithm is characterized as a regularized wavelet transform. It combines ideas from Tikhonov-Miller regularization, wavelet analysis and deconvolution algorithms in order to benefit from the advantages of each. The SIMS profiles were obtained by analysis of two structures of boron in a silicon matrix using a Cameca-Ims6f instrument at oblique incidence. The first structure is large, consisting of two distant wide boxes; the second is a thin structure containing ten delta-layers, to which deconvolution by zone was applied. It is shown that this new multiresolution algorithm gives the best results. In particular, local application of the regularization parameter to the blurred and estimated solutions at each resolution level led to smoothed signals without creating artifacts related to the noise content of the profile. This led to a significant improvement in the depth resolution and the peak maxima.
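
    The regularized-deconvolution core can be illustrated with plain Tikhonov filtering in the Fourier domain; the paper's wavelet-domain, per-resolution-level regularization is a refinement of this idea and is not reproduced here. A minimal sketch (the function name and regularization weight are assumptions):

```python
import numpy as np

def tikhonov_deconvolve(measured, psf, lam=1e-2):
    """Frequency-domain Tikhonov-regularized deconvolution of a 1-D depth profile.
    `psf` is assumed centered at sample 0; otherwise the output is shifted."""
    n = len(measured)
    H = np.fft.fft(psf, n)                      # instrument response spectrum
    Y = np.fft.fft(measured)
    X = np.conj(H) * Y / (np.abs(H)**2 + lam)   # larger lam -> smoother estimate
    return np.real(np.fft.ifft(X))
```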

  7. A multi-resolution multi-size-windows disparity estimation approach

    NASA Astrophysics Data System (ADS)

    Martinez Bauza, Judit; Shiralkar, Manish

    2011-03-01

    This paper describes an algorithm for estimating the disparity between 2 images of a stereo pair. The disparity is related to the depth of the objects in the scene. Being able to obtain the depth of the objects in the scene is useful in many applications such as virtual reality, 3D user interfaces, background-foreground segmentation, or depth-image-based synthesis. This last application has motivated the proposed algorithm as part of a system that estimates disparities from a stereo pair and synthesizes new views. Synthesizing virtual views enables the post-processing of 3D content to adapt to user preferences or viewing conditions, as well as enabling the interface with multi-view auto-stereoscopic displays. The proposed algorithm has been designed to fulfill the following constraints: (a) low memory requirements, (b) local and parallelizable processing, and (c) adaptability to a sudden reduction in processing resources. Our solution uses a multi-resolution multi-size-windows approach, implemented as a line-independent process, well-suited for GPU implementation. The multi-resolution approach provides adaptability to sudden reduction in processing capabilities, besides computational advantages; the windows-based image processing algorithm guarantees low-memory requirements and local processing.

  8. Wavelet-based multiresolution with n-th-root-of-2 Subdivision

    SciTech Connect

    Linsen, L; Pascucci, V; Duchaineau, M A; Hamann, B; Joy, K I

    2004-12-16

    Multiresolution methods are a common technique used for dealing with large-scale data and representing it at multiple levels of detail. The authors present a multiresolution hierarchy construction based on ⁿ√2 subdivision, which has all the advantages of a regular data organization scheme while reducing the drawback of coarse granularity. The ⁿ√2-subdivision scheme only doubles the number of vertices in each subdivision step regardless of dimension n. They describe the construction of 2D, 3D, and 4D hierarchies representing surfaces, volume data, and time-varying volume data, respectively. The 4D approach supports spatial and temporal scalability. For high-quality data approximation on each level of detail, they use downsampling filters based on n-variate B-spline wavelets. They present a B-spline wavelet lifting scheme for ⁿ√2-subdivision steps to obtain small or narrow filters. Narrow filters support adaptive refinement and out-of-core data exploration techniques.

  9. QRS detection by lifting scheme constructing multi-resolution morphological decomposition.

    PubMed

    Zhang, Pu; Ma, Heather T; Zhang, Qinyu

    2014-01-01

    The QRS complex detection algorithm is the core of ECG auto-diagnosis methods and deeply influences cardiac cycle division for signal compression. However, ECG signals collected by noninvasive surface electrodes are usually mixed with several kinds of interference, and waveform variation is the main obstacle to reliable ECG processing. This paper proposes a QRS complex detection algorithm based on multi-resolution mathematical morphological decomposition. The algorithm combines the strengths in R-peak detection of both the mathematical morphology method and multi-resolution decomposition. Moreover, a lifting construction method with a maximization updating operator is adopted to further improve the algorithm's performance, and an efficient R-peak search-back algorithm is employed to reduce false positives (FP) and false negatives (FN). The proposed algorithm performs well when applied to the MIT-BIH Arrhythmia Database, achieving over 99% detection rate, sensitivity and positive predictivity, respectively, with a low computational burden. Therefore, the proposed method is appropriate for portable medical devices in telemedicine systems.
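
    A stripped-down morphological R-peak detector conveys the flavour of the approach: openings and closings estimate the baseline, and peaks are picked from the detrended signal. This sketch (SciPy; the structuring-element length, refractory period, and threshold are invented) omits the paper's lifting-scheme multi-resolution decomposition and search-back logic:

```python
import numpy as np
from scipy.ndimage import grey_closing, grey_opening
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs=360, struct_len=30):
    """Morphological baseline removal followed by simple peak picking."""
    baseline = grey_closing(grey_opening(ecg, size=struct_len), size=struct_len)
    detrended = ecg - baseline
    peaks, _ = find_peaks(detrended,
                          distance=int(0.25 * fs),           # ~refractory period
                          height=0.5 * np.max(detrended))    # crude amplitude gate
    return peaks
```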

  10. a Virtual Globe-Based Multi-Resolution Tin Surface Modeling and Visualizetion Method

    NASA Astrophysics Data System (ADS)

    Zheng, Xianwei; Xiong, Hanjiang; Gong, Jianya; Yue, Linwei

    2016-06-01

    The integration and visualization of geospatial data on a virtual globe play a significant role in understanding and analyzing Earth surface processes. However, current virtual globes often sacrifice accuracy for efficiency in global data processing and visualization, which devalues them for scientific applications. In this article, we propose a high-accuracy multi-resolution TIN pyramid construction and visualization method for virtual globes. First, we introduce cartographic principles to formalize level of detail (LOD) generation, so that the TIN model in each layer is controlled by a data quality standard. A maximum z-tolerance algorithm is then used to iteratively construct the multi-resolution TIN pyramid. Moreover, extracted landscape features are incorporated into the TIN at each layer, preserving the topological structure of the terrain surface at different levels. In the proposed framework, a virtual node (VN)-based approach is developed to seamlessly partition and discretize each triangulation layer into tiles, which can be organized and stored with a global quad-tree index. Finally, real-time out-of-core spherical terrain rendering is realized on the virtual globe system VirtualWorld1.0. The experimental results show that the proposed method achieves a high-fidelity terrain representation while producing high-quality underlying data that satisfy the demands of scientific analysis.

  11. An efficient multi-resolution GA approach to dental image alignment

    NASA Astrophysics Data System (ADS)

    Nassar, Diaa Eldin; Ogirala, Mythili; Adjeroh, Donald; Ammar, Hany

    2006-02-01

    Automating the process of postmortem identification of individuals using dental records is receiving increased attention in forensic science, especially given the large numbers of victims encountered in mass disasters. Dental radiograph alignment is a key step required for automating the dental identification process. In this paper, we address the problem of dental radiograph alignment using a Multi-Resolution Genetic Algorithm (MR-GA) approach. We use the location and orientation information of edge points as features; we assume that affine transformations suffice to restore geometric discrepancies between two images of a tooth; we efficiently search the 6D space of affine parameters using a GA applied progressively across multi-resolution image versions; and we use a Hausdorff distance measure to compute the similarity between a reference tooth and a query tooth subject to a possible alignment transform. Test results based on 52 teeth-pair images suggest that our algorithm converges to reasonable solutions in more than 85% of the test cases, with most of the error in the remaining cases due to excessive misalignments.
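
    A bare-bones sketch of a GA over the six affine parameters with a Hausdorff-distance fitness, assuming the edge points of each tooth are given as N × 2 arrays; population size, mutation scale, and selection rule are placeholders. In the multi-resolution version, the best individual found on a coarse image level would seed the population at the next finer level with a smaller mutation scale:

        import numpy as np

        rng = np.random.default_rng(0)

        def hausdorff(a, b):
            # Symmetric Hausdorff distance between two point sets (N x 2).
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
            return max(d.min(axis=1).max(), d.min(axis=0).max())

        def apply_affine(p, pts):
            a, b, c, d, tx, ty = p
            return pts @ np.array([[a, b], [c, d]]).T + np.array([tx, ty])

        def ga_align(ref_pts, qry_pts, pop=50, gens=40, sigma=0.05):
            identity = np.array([1.0, 0, 0, 1.0, 0, 0])
            population = identity + sigma * rng.standard_normal((pop, 6))
            for _ in range(gens):
                fit = np.array([hausdorff(ref_pts, apply_affine(p, qry_pts))
                                for p in population])
                elite = population[np.argsort(fit)[:pop // 5]]       # selection
                parents = elite[rng.integers(0, len(elite), (pop, 2))]
                mask = rng.random((pop, 6)) < 0.5                    # uniform crossover
                population = np.where(mask, parents[:, 0], parents[:, 1])
                population += sigma * rng.standard_normal((pop, 6))  # mutation
                population[0] = elite[0]                             # elitism
            fit = np.array([hausdorff(ref_pts, apply_affine(p, qry_pts))
                            for p in population])
            return population[np.argmin(fit)]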

  12. Three-dimensional tomographic reconstruction through two-dimensional multiresolution backprojection steps according to Marr's method

    NASA Astrophysics Data System (ADS)

    Stephanakis, Ioannis M.; Anastassopoulos, George C.

    2009-03-01

    A novel algorithm for 3-D tomographic reconstruction is proposed. The proposed algorithm is based on multiresolution techniques for local inversion of the 3-D Radon transform in confined subvolumes within the entire object space. Directional wavelet functions of the form $\psi_{m,n}^{j}(x) = 2^{j/2}\,\psi(2^{j} w_{m,n} x)$ are employed in a sequence of double filtering and 2-D backprojection operations performed on vertical and horizontal reconstruction planes, using the method suggested by Marr and others. The densities of the 3-D object are found initially as backprojections of coarse wavelet functions of this form at directions on vertical and horizontal planes that intersect the object. As the algorithm evolves, finer planar wavelets intersecting a subvolume of medical interest within the original object may be used to reconstruct its details by double backprojection steps on vertical and horizontal planes in a similar fashion. The complexity of the reconstruction algorithm is reduced thanks to the good localization properties of planar wavelets, which render the details of the projections with small errors. Experimental results illustrating multiresolution reconstruction at four successive levels of resolution are given for wavelets belonging to the Daubechies family.

  13. A hardware implementation of multiresolution filtering for broadband instrumentation

    SciTech Connect

    Kercel, S.W.; Dress, W.B.

    1995-12-01

    The authors have constructed a wavelet processing board that implements a 14-level wavelet transform. The board uses a high-speed analog-to-digital (A/D) converter, a hardware queue, and five fixed-point digital signal processing (DSP) chips in a parallel pipeline architecture. All five processors are independently programmable. The board is designed as a general purpose engine for instrumentation applications requiring near real-time wavelet processing or multiscale filtering. The present application is the processing engine of a magnetic field monitor that covers 305 Hz through 5 MHz. The monitor is used for the detection of peak values of magnetic fields in nuclear power plants. This paper describes the design, development, simulation, and testing of the system. Specific issues include the conditioning of real-world signals for wavelet processing, practical trade-offs between queue length and filter length, selection of filter coefficients, simulation of a 14-octave filter bank, and limitations imposed by a fixed-point processor. Test results from the completed wavelet board are included.

  14. Hardware implementation of multiresolution filtering for broadband instrumentation

    NASA Astrophysics Data System (ADS)

    Kercel, Stephen W.; Dress, William B.

    1995-04-01

    The authors have constructed a wavelet processing board that implements a 14-level wavelet transform. The board uses a high-speed analog-to-digital (A/D) converter, a hardware queue, and five fixed-point digital signal processing (DSP) chips in a parallel pipeline architecture. All five processors are independently programmable. The board is designed as a general purpose engine for instrumentation applications requiring near real-time wavelet processing or multiscale filtering. The present application is the processing engine of a magnetic field monitor that covers 305 Hz through 5 MHz. The monitor is used for the detection of peak values of magnetic fields in nuclear power plants. This paper describes the design, development, simulation, and testing of the system. Specific issues include the conditioning of real-world signals for wavelet processing, practical trade-offs between queue length and filter length, selection of filter coefficients, simulation of a 14-octave filter bank, and limitations imposed by a fixed-point processor. Test results from the completed wavelet board are included.

  15. A Multiresolution Approach to Shear Wave Image Reconstruction

    PubMed Central

    Hollender, Peter; Bottenus, Nick; Trahey, Gregg

    2015-01-01

    Shear wave imaging techniques build maps of local elasticity by estimating the local group velocity of induced mechanical waves. Velocity estimates are formed from the time delay in the motion profile of the medium at two or more points offset from the shear wave source. Because the absolute time-of-flight between any pair of locations scales with the distance between them, there is an inherent trade-off between robustness to time-of-flight errors and lateral spatial resolution, governed by the number and spacing of the receive points used for each estimate. This work proposes a method that uses the time delays measured between all combinations of locations to estimate a noise-robust, high-resolution image. The time-of-flight problem is posed as an overdetermined system of linear equations that can be solved directly, with or without spatial regularization terms. Finite element method simulations of acoustic radiation force-induced shear waves are used to illustrate the method, demonstrating superior contrast-to-noise ratio and lateral edge resolution characteristics compared to linear regression of arrival times. This technique may improve shear wave imaging in situations where time-of-flight noise is a limiting factor. PMID:26276953
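
    The all-pairs idea maps directly onto an overdetermined linear system: each pair of receive locations contributes one equation whose unknowns are the slownesses of the intervals between them. A minimal unregularized sketch (the regularized variant adds penalty rows to the same system); the toy medium and noise level are invented for illustration:

        import numpy as np

        def allpairs_velocity(x, t):
            # Build one equation per location pair: the delay t[j]-t[i] is the
            # sum of interval slownesses weighted by the interval widths.
            n, dx = len(x), np.diff(x)
            rows, delays = [], []
            for i in range(n):
                for j in range(i + 1, n):
                    r = np.zeros(n - 1)
                    r[i:j] = dx[i:j]
                    rows.append(r)
                    delays.append(t[j] - t[i])
            s, *_ = np.linalg.lstsq(np.array(rows), np.array(delays), rcond=None)
            return 1.0 / s            # local group-velocity estimate per interval

        # toy example: 2 m/s for x <= 5, then 4 m/s, with noisy arrival times
        x = np.linspace(0.0, 10.0, 21)
        t = np.where(x <= 5, x / 2.0, 2.5 + (x - 5) / 4.0)
        t += 0.01 * np.random.default_rng(1).standard_normal(x.size)
        print(allpairs_velocity(x, t))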

  16. A multi-resolution analysis of lidar-DTMs to identify geomorphic processes from characteristic topographic length scales

    NASA Astrophysics Data System (ADS)

    Sangireddy, H.; Passalacqua, P.; Stark, C. P.

    2013-12-01

    Characteristic length scales are often present in topography, and they reflect the driving geomorphic processes. The wide availability of high resolution lidar Digital Terrain Models (DTMs) allows us to measure such characteristic scales, but new methods of topographic analysis are needed in order to do so. Here, we explore how transitions in probability distributions (pdfs) of topographic variables such as log(area/slope), defined as the topoindex by Beven and Kirkby [1979], can be measured by Multi-Resolution Analysis (MRA) of lidar DTMs [Stark and Stark, 2001; Sangireddy et al., 2012] and used to infer dominant geomorphic processes such as non-linear diffusion and critical shear. We demonstrate this correlation between dominant geomorphic processes and characteristic length scales by comparing results from a landscape evolution model to natural landscapes. The landscape evolution model MARSSIM [Howard, 1994] includes components for modeling rock weathering, mass wasting by non-linear creep, detachment-limited channel erosion, and bedload sediment transport. We use MARSSIM to simulate steady state landscapes for a range of hillslope diffusivities and critical shear stresses. Using the MRA approach, we estimate modal values and inter-quartile ranges of slope, curvature, and topoindex as a function of resolution. We also construct pdfs at each resolution and identify and extract characteristic scale breaks. Following the approach of Tucker et al. [2001], we measure the average length to channel from ridges within the GeoNet framework developed by Passalacqua et al. [2010] and compute pdfs of hillslope lengths at each scale defined in the MRA. We compare the hillslope diffusivity used in MARSSIM against inter-quartile ranges of topoindex and hillslope length scales, and observe power law relationships between the compared variables for simulated landscapes at steady state. We plot similar measures for natural landscapes and are able to qualitatively infer the dominant geomorphic
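
    A toy sketch of tracking distribution statistics across resolutions, assuming a NumPy DEM array with square cells; only the slope component is shown (the full topoindex, log(area/slope), also requires a contributing-area computation, omitted here):

        import numpy as np

        def coarsen(z, f):
            # Block-average the DEM by factor f (a stand-in for the MRA levels).
            h, w = (z.shape[0] // f) * f, (z.shape[1] // f) * f
            return z[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

        def slope(z, cell):
            gy, gx = np.gradient(z, cell)
            return np.hypot(gx, gy)

        def slope_stats_by_resolution(z, cell, factors=(1, 2, 4, 8, 16)):
            # Modal value and inter-quartile range of the slope pdf per level;
            # breaks in these curves against resolution mark characteristic scales.
            stats = {}
            for f in factors:
                s = slope(coarsen(z, f), cell * f).ravel()
                hist, edges = np.histogram(s, bins=100)
                mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
                q1, q3 = np.percentile(s, [25, 75])
                stats[cell * f] = (mode, q3 - q1)
            return stats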

  17. Multiresolution pattern recognition of small volcanos in Magellan data

    NASA Technical Reports Server (NTRS)

    Smyth, P.; Anderson, C. H.; Aubele, J. C.; Crumpler, L. S.

    1992-01-01

    The Magellan data is a treasure-trove for scientific analysis of venusian geology, providing far more detail than was previously available from Pioneer Venus, Venera 15/16, or ground-based radar observations. However, at this point, planetary scientists are being overwhelmed by the sheer quantities of data collected--data analysis technology has not kept pace with our ability to collect and store it. In particular, 'small-shield' volcanos (less than 20 km in diameter) are the most abundant visible geologic feature on the planet. It is estimated, based on extrapolating from previous studies and knowledge of the underlying geologic processes, that there should be on the order of 10^5 to 10^6 of these volcanos visible in the Magellan data. Identifying and studying these volcanos is fundamental to a proper understanding of the geologic evolution of Venus. However, locating and parameterizing them in a manual manner is very time-consuming. Hence, we have undertaken the development of techniques to partially automate this task. The goal is not the unrealistic one of total automation, but rather the development of a useful tool to aid the project scientists. The primary constraints for this particular problem are as follows: (1) the method must be reasonably robust; and (2) the method must be reasonably fast. Unlike most geological features, the small volcanos of Venus can be ascribed to a basic process that produces features with a short list of readily defined characteristics differing significantly from other surface features on Venus. For pattern recognition purposes the relevant criteria include the following: (1) a circular planimetric outline; (2) known diameter frequency distribution from preliminary studies; (3) a limited number of basic morphological shapes; and (4) the common occurrence of a single, circular summit pit at the center of the edifice.

  18. Multiresolution quantification of deciduousness in West Central African forests

    NASA Astrophysics Data System (ADS)

    Viennois, G.; Barbier, N.; Fabre, I.; Couteron, P.

    2013-04-01

    The characterization of leaf phenology in tropical forests is of major importance and improves our understanding of earth-atmosphere-climate interactions. The availability of satellite optical data with a high temporal resolution has permitted the identification of unexpected phenological cycles, particularly over the Amazon region. A primary issue in these studies is the relationship between the optical reflectance of pixels of 1 km or more in size and ground information of limited spatial extent. In this paper, we demonstrate that optical data with high to very-high spatial resolution can help bridge this scale gap by providing snapshots of the canopy that allow discernment of the leaf-phenological stage of trees and the proportions of leaved crowns within the canopy. We also propose applications for broad-scale forest characterization and mapping in West Central Africa over an area of 141 000 km2. Eleven years of the Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) data were averaged over the wet and dry seasons to provide a dataset of optimal radiometric quality at a spatial resolution of 250 m. Sample areas covered at a very-high (GeoEye) and high (SPOT-5) spatial resolution were used to identify forest types and to quantify the proportion of leaved trees in the canopy. The dry season EVI was positively correlated with the proportion of leaved trees in the canopy. This relationship allowed the conversion of EVI into canopy deciduousness at the regional level. On this basis, ecologically important forest types could be mapped, including young secondary, open Marantaceae, Gilbertiodendron dewevrei and swamp forests. We show that in west central African forests, a large share of the variability in canopy reflectance, as captured by the EVI, is due to variation in the proportion of leaved trees in the upper canopy, thereby opening new perspectives for biodiversity and carbon-cycle applications.

  19. Multiresolution quantification of deciduousness in West-Central African forests

    NASA Astrophysics Data System (ADS)

    Viennois, G.; Barbier, N.; Fabre, I.; Couteron, P.

    2013-11-01

    The characterization of leaf phenology in tropical forests is of major importance for forest typology as well as to improve our understanding of earth-atmosphere-climate interactions or biogeochemical cycles. The availability of satellite optical data with a high temporal resolution has permitted the identification of unexpected phenological cycles, particularly over the Amazon region. A primary issue in these studies is the relationship between the optical reflectance of pixels of 1 km or more in size and ground information of limited spatial extent. In this paper, we demonstrate that optical data with high to very-high spatial resolution can help bridge this scale gap by providing snapshots of the canopy that allow discernment of the leaf-phenological stage of trees and the proportions of leaved crowns within the canopy. We also propose applications for broad-scale forest characterization and mapping in West-Central Africa over an area of 141 000 km2. Eleven years of the Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI) data were averaged over the wet and dry seasons to provide a data set of optimal radiometric quality at a spatial resolution of 250 m. Sample areas covered at a very-high (GeoEye) and high (SPOT-5) spatial resolution were used to identify forest types and to quantify the proportion of leaved trees in the canopy. The dry-season EVI was positively correlated with the proportion of leaved trees in the canopy. This relationship allowed the conversion of EVI into canopy deciduousness at the regional level. On this basis, ecologically important forest types could be mapped, including young secondary, open Marantaceae, Gilbertiodendron dewevrei and swamp forests. We show that in West-Central African forests, a large share of the variability in canopy reflectance, as captured by the EVI, is due to variation in the proportion of leaved trees in the upper canopy, thereby opening new perspectives for biodiversity and

  20. A Non-Homogeneous, Spatio-Temporal, Wavelet Multiresolution Analysis and Its Application to the Analysis of Motion

    DTIC Science & Technology

    1993-12-01

    [Extraction residue from the report's front matter; only fragments are recoverable. Listed sections include "Discrete Multiresolution Decomposition Algorithm" and "Spatio-Temporal Filter Bank Representation"; listed figures include the spatial and temporal frequency sensitivity of motion cells, and STFT and wavelet filter banks. A surviving abstract fragment concerns the construction of a wavelet filter bank that provides directional selectivity and the combination of the coefficients obtained in the decomposition process.]

  1. Multi-resolution statistical image reconstruction for mitigation of truncation effects: application to cone-beam CT of the head

    NASA Astrophysics Data System (ADS)

    Dang, Hao; Webster Stayman, J.; Sisniega, Alejandro; Zbijewski, Wojciech; Xu, Jennifer; Wang, Xiaohui; Foos, David H.; Aygun, Nafi; Koliatsos, Vassilis E.; Siewerdsen, Jeffrey H.

    2017-01-01

    A prototype cone-beam CT (CBCT) head scanner featuring model-based iterative reconstruction (MBIR) has recently been developed and has demonstrated the potential for reliable detection of acute intracranial hemorrhage (ICH), which is vital to the diagnosis of traumatic brain injury and hemorrhagic stroke. However, data truncation (e.g. due to the head holder) can result in artifacts that reduce image uniformity and challenge ICH detection. We propose a multi-resolution MBIR method with an extended reconstruction field of view (RFOV) to mitigate truncation effects in CBCT of the head. The image volume combines a fine voxel size in the (inner) non-truncated region and a coarse voxel size in the (outer) truncated region. This multi-resolution scheme allows extension of the RFOV to mitigate truncation effects while introducing minimal increase in computational complexity. The multi-resolution method was incorporated in a penalized weighted least-squares (PWLS) reconstruction framework previously developed for CBCT of the head. In experiments involving an anthropomorphic head phantom with truncation due to a carbon-fiber holder, conventional single-resolution PWLS exhibited severe artifacts, whereas extending the RFOV within the multi-resolution framework strongly reduced them. For the same extended RFOV, the multi-resolution approach reduced computation time compared to the single-resolution approach (viz. time reduced by 40.7%, 83.0%, and over 95% for image volumes of 600^3, 800^3, and 1000^3 voxels). Algorithm parameters (e.g. regularization strength, the ratio of the fine and coarse voxel sizes, and RFOV size) were investigated to guide reliable parameter selection. The findings provide a promising method for truncation artifact reduction in CBCT and may be useful for other MBIR methods and applications for which truncation is a challenge.

  2. DTMs: discussion of a new multi-resolution function based model

    NASA Astrophysics Data System (ADS)

    Brovelli, M. A.; Biagi, L.; Zamboni, G.

    2012-04-01

    The diffusion of new technologies based on WebGIS and virtual globes allows DTM distribution and three-dimensional representations to reach the Web users' community. In the Web distribution of geographical information, the database storage size represents a critical point: given a specific interest area, the server typically needs to perform some preprocessing, and the data then have to be sent to the client, which applies some additional processing. The efficiency of all these actions is crucial to guarantee near real-time availability of the information. DTMs are obtained from the raw observations by some sampling or interpolation technique and are typically stored and distributed as Triangular Irregular Networks (TIN) or regular grids. A new approach to store and transmit DTMs has been studied and implemented. The basic idea is to use multi-resolution bilinear spline functions to interpolate the raw observations and to represent the terrain. In more detail, the algorithm performs the following actions. 1) The spatial distribution of the raw observations is investigated. In areas where few data are available, few levels of splines are activated, while more levels are activated where the raw observations are denser: each new level corresponds to a halving of the spline support with respect to the previous level. 2) After the selection of the spline functions to be activated, the relevant coefficients are estimated by interpolating the raw observations. The interpolation is computed by batch least squares. 3) Finally, the estimated coefficients of the splines are stored. The algorithm guarantees a local resolution consistent with the data density, exploiting all the available information provided by the sample. The model can be defined "function based" because the coefficients of a given function are stored instead of a set of heights: in particular, the resolution level, the position and the coefficient of each activated spline function are stored by the server and are
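
    A 1-D stand-in for the "function based" idea, assuming scattered observations (x, z): hierarchical hat-function (linear B-spline) levels are stacked into one design matrix and fitted by batch least squares. The adaptive activation of finer levels only where data are dense is omitted, so lstsq's minimum-norm solution handles the redundancy of the nested levels:

        import numpy as np

        def hat(u):
            # Linear B-spline (hat) basis function, support [-1, 1].
            return np.clip(1.0 - np.abs(u), 0.0, None)

        def design_matrix(x, level, x0, x1):
            # One level of hats on a uniform knot grid; each further level
            # halves the spline support with respect to the previous one.
            n = 2 ** level + 1
            knots = np.linspace(x0, x1, n)
            h = (x1 - x0) / (n - 1)
            return hat((x[:, None] - knots[None, :]) / h)

        def fit_multilevel(x, z, levels=(1, 3, 5)):
            A = np.hstack([design_matrix(x, L, x.min(), x.max()) for L in levels])
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)
            return A @ coef, coef       # fitted heights, stored coefficients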

  3. Pathfinder: multiresolution region-based searching of pathology images using IRM.

    PubMed

    Wang, J Z

    2000-01-01

    The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wavelets and the IRM (Integrated Region Matching) distance. Experiments with a database of 70,000 pathology image fragments have demonstrated high retrieval accuracy and high speed. The algorithm can be combined with our previously developed wavelet-based progressive pathology image transmission and browsing algorithm and is expandable for medical image databases.

  4. Optimization as a Tool for Consistency Maintenance in Multi-Resolution Simulation

    NASA Technical Reports Server (NTRS)

    Drewry, Darren T; Reynolds, Paul F, Jr; Emanuel, William R

    2006-01-01

    The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great. While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult. We present a multi-resolution simulation methodology that uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution. Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation. As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions. Experimental results are presented and future directions are addressed.

  5. Multi-resolution entropy analysis of gait symmetry in neurological degenerative diseases and amyotrophic lateral sclerosis.

    PubMed

    Liao, Fuyuan; Wang, Jue; He, Ping

    2008-04-01

    The gait rhythm of patients with Parkinson's disease (PD), Huntington's disease (HD) and amyotrophic lateral sclerosis (ALS) has been studied with a focus on the fractal and correlation properties of stride time fluctuations. In this study, we investigated gait asymmetry in these diseases using a multi-resolution entropy analysis of stance time fluctuations. Since stance time is likely to exhibit fluctuations across multiple spatial and temporal scales, the data series were decomposed into appropriate levels by applying the stationary wavelet transform. The similarity between the two corresponding wavelet coefficient series, in terms of their regularity at each level, was quantified based on a modified sample entropy method, and a weighted sum was then used as the gait symmetry index. We found that gait symmetry in subjects with PD and HD, and especially in subjects with ALS, is significantly disturbed. This method may be useful in characterizing certain pathologies of motor control and, possibly, in monitoring disease progression and evaluating the effect of an individual treatment.
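
    A compact sketch of the symmetry index, assuming left/right stance-time series as NumPy arrays; a plain Haar pyramid stands in for the stationary wavelet transform, standard sample entropy stands in for the authors' modified variant, and the level weights are placeholders:

        import numpy as np

        def sample_entropy(x, m=2, r_frac=0.2):
            x = np.asarray(x, float)
            r, N = r_frac * x.std(), len(x)
            def matches(mm):   # template matches of length mm, self-matches excluded
                emb = np.array([x[i:i + mm] for i in range(N - m)])
                d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
                return ((d <= r).sum() - len(emb)) / 2
            b, a = matches(m), matches(m + 1)
            return -np.log(a / b) if a > 0 and b > 0 else np.inf

        def symmetry_index(left, right, weights=(0.25, 0.5, 0.25)):
            L, R = np.asarray(left, float), np.asarray(right, float)
            score = 0.0
            for w in weights:
                eL, eR = sample_entropy(L), sample_entropy(R)
                score += w * (1.0 - abs(eL - eR) / max(eL, eR))
                n = (len(L) // 2) * 2            # next coarser level (Haar smooth)
                L = 0.5 * (L[0:n:2] + L[1:n:2])
                n = (len(R) // 2) * 2
                R = 0.5 * (R[0:n:2] + R[1:n:2])
            return score                          # 1 = perfectly symmetric gait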

  6. Pathfinder: multiresolution region-based searching of pathology images using IRM.

    PubMed Central

    Wang, J. Z.

    2000-01-01

    The fast growth of digitized pathology slides has created great challenges in research on image database retrieval. The prevalent retrieval technique involves human-supplied text annotations to describe slide contents. These pathology images typically have very high resolution, making it difficult to search based on image content. In this paper, we present Pathfinder, an efficient multiresolution region-based searching system for high-resolution pathology image libraries. The system uses wavelets and the IRM (Integrated Region Matching) distance. Experiments with a database of 70,000 pathology image fragments have demonstrated high retrieval accuracy and high speed. The algorithm can be combined with our previously developed wavelet-based progressive pathology image transmission and browsing algorithm and is expandable for medical image databases. PMID:11080011

  7. A multiresolution analysis for tensor-product splines using weighted spline wavelets

    NASA Astrophysics Data System (ADS)

    Kapl, Mario; Jüttler, Bert

    2009-09-01

    We construct biorthogonal spline wavelets for periodic splines which extend the notion of "lazy" wavelets for linear functions (where the wavelets are simply a subset of the scaling functions) to splines of higher degree. We then use the lifting scheme to improve the approximation properties with respect to a norm induced by a weighted inner product with a piecewise constant weight function. Using the lifted wavelets, we define a multiresolution analysis of tensor-product spline functions and apply it, as a model problem, to the compression of black-and-white images, demonstrating that the use of a weight function allows the norm to be adapted to the specific problem.

  8. Multiresolution field map estimation using golden section search for water-fat separation.

    PubMed

    Lu, Wenmiao; Hargreaves, Brian A

    2008-07-01

    Many diagnostic MRI sequences demand reliable and uniform fat suppression. Multipoint water-fat separation methods, which are based on chemical-shift-induced phase differences, have shown great success in the presence of field inhomogeneities. This work presents a computationally efficient and robust field map estimation method. The method begins by subsampling the image data into a multiresolution image pyramid, and then utilizes a golden section search to directly locate possible field map values at the coarsest level of the pyramid. The field map estimate is refined and propagated to increasingly finer resolutions in an efficient manner until the full-resolution field map is obtained for the final water-fat separation. The proposed method is validated with multiecho sequences, where long echo-spacings normally impose great challenges on reliable field map estimation.
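
    The golden section search itself is the standard derivative-free bracketing routine; in this setting f(psi) would be the water-fat residual at a candidate field-map value psi for one coarse-level pixel, with the winning estimate refined and propagated to finer levels (the objective name, bracket, and tolerance below are placeholders):

        import numpy as np

        PHI = (np.sqrt(5.0) - 1.0) / 2.0          # ~0.618

        def golden_section_min(f, a, b, tol=1e-3):
            # Each iteration shrinks the bracket by the golden ratio and
            # reuses one of the two interior function evaluations.
            c, d = b - PHI * (b - a), a + PHI * (b - a)
            fc, fd = f(c), f(d)
            while (b - a) > tol:
                if fc < fd:
                    b, d, fd = d, c, fc
                    c = b - PHI * (b - a)
                    fc = f(c)
                else:
                    a, c, fc = c, d, fd
                    d = a + PHI * (b - a)
                    fd = f(d)
            return 0.5 * (a + b)

        # e.g. psi_hat = golden_section_min(lambda psi: residual(psi), -400.0, 400.0)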

  9. Multiresolution parametric estimation of transparent motions and denoising of fluoroscopic images.

    PubMed

    Auvray, Vincent; Liénard, Jean; Bouthemy, Patrick

    2005-01-01

    We describe a novel multiresolution parametric framework to estimate the transparent motions typically present in X-ray exams. Assuming the presence of two transparent layers, it computes two affine velocity fields by minimizing an appropriate objective function with an incremental Gauss-Newton technique. We have designed a realistic simulation scheme of fluoroscopic image sequences to validate our method on data with ground truth and different levels of noise. An experiment on real clinical images is also reported. We then exploit this transparent-motion estimation method to denoise two-layer image sequences using a motion-compensated estimation method. In accordance with theory, we show that we reach a denoising factor of 2/3 in a few iterations without introducing any local artifacts into the image sequence.

  10. Multiresolutional schemata for unsupervised learning of autonomous robots for 3D space operation

    NASA Technical Reports Server (NTRS)

    Lacaze, Alberto; Meystel, Michael; Meystel, Alex

    1994-01-01

    This paper describes a novel approach to the development of a learning control system for an autonomous space robot (ASR), which presents the ASR as a 'baby' -- that is, a system with no a priori knowledge of the world in which it operates, but with behavior acquisition techniques that allow it to build this knowledge from the experiences of actions within a particular environment (we call it an Astro-baby). The learning techniques are rooted in the recursive algorithm for inductive generation of nested schemata molded from processes of early cognitive development in humans. The algorithm extracts data from the environment and, by means of correlation and abduction, creates schemata that are used for control. This system is robust enough to deal with a constantly changing environment because such changes provoke the creation of new schemata by generalizing from experiences, while still maintaining minimal computational complexity, thanks to the system's multiresolutional nature.

  11. Exploring a Multiresolution Modeling Approach within the Shallow-Water Equations

    SciTech Connect

    Ringler, Todd D.; Jacobsen, Doug; Gunzburger, Max; Ju, Lili; Duda, Michael; Skamarock, William

    2011-11-01

    The ability to solve the global shallow-water equations with a conforming, variable-resolution mesh is evaluated using standard shallow-water test cases. While the long-term motivation for this study is the creation of a global climate modeling framework capable of resolving different spatial and temporal scales in different regions, the process begins with an analysis of the shallow-water system in order to better understand the strengths and weaknesses of the approach developed herein. The multiresolution meshes are spherical centroidal Voronoi tessellations where a single, user-supplied density function determines the region(s) of fine- and coarse-mesh resolution. The shallow-water system is explored with a suite of meshes ranging from quasi-uniform resolution meshes, where the grid spacing is globally uniform, to highly variable resolution meshes, where the grid spacing varies by a factor of 16 between the fine and coarse regions. The potential vorticity is found to be conserved to within machine precision and the total available energy is conserved to within a time-truncation error. This result holds for the full suite of meshes, from quasi-uniform to highly variable resolution. Based on shallow-water test cases 2 and 5, the primary conclusion of this study is that solution error is controlled primarily by the grid resolution in the coarsest part of the model domain. This conclusion is consistent with results obtained by others. When these variable-resolution meshes are used for the simulation of an unstable zonal jet, the core features of the growing instability are found to be largely unchanged as the variation in the mesh resolution increases. The main differences between the simulations occur outside the region of mesh refinement, and these differences are attributed to the additional truncation error that accompanies increases in grid spacing. Overall, the results demonstrate support for this approach as a path toward

  12. A high-fidelity multiresolution digital elevation model for Earth systems

    NASA Astrophysics Data System (ADS)

    Duan, Xinqiao; Li, Lin; Zhu, Haihong; Ying, Shen

    2017-01-01

    The impact of topography on Earth systems variability is well recognised. As numerical simulations have evolved to incorporate broader scales and finer processes, accurately assimilating or transforming the topography to produce more exact land-atmosphere-ocean interactions has proven to be quite challenging. Numerical schemes of Earth systems often use empirical parameterisation at sub-grid scale with downscaling to express topographic endogenous processes, or rely on insecure point interpolation to induce topographic forcing, which creates bias and input uncertainties. Digital elevation model (DEM) generalisation provides more sophisticated systematic topographic transformation, but existing methods are often difficult to incorporate because of unwarranted grid quality. Meanwhile, approaches over discrete sets often employ heuristic approximation, which generally does not perform best. Based on DEM generalisation, this article proposes a high-fidelity multiresolution DEM with guaranteed grid quality for Earth systems. The generalised DEM surface is initially approximated as a triangulated irregular network (TIN) via selected feature points and possible input features. The TIN surface is then optimised through an energy-minimised centroidal Voronoi tessellation (CVT). By devising a robust discrete curvature as the density function and exact geometry clipping as the energy reference, the developed curvature CVT (cCVT) converges, the generalised surface evolves to a closer approximation of the original DEM surface, and the points with their dual triangles become spatially equalised with the curvature distribution, exhibiting quasi-uniform high quality and adaptive variable resolution. The cCVT model was then evaluated on real lidar-derived DEM datasets and compared to the classical heuristic model. The experimental results show that the cCVT multiresolution model outperforms classical heuristic DEM generalisations in terms of both surface approximation precision and

  13. Multitemporal Multi-Resolution SAR Data for Urbanization Mapping and Monitoring: Midterm Results

    NASA Astrophysics Data System (ADS)

    Ban, Yifang; Gamba, Paolo; Jacob, Alexander; Salentinig, Andreas

    2014-11-01

    The objective of this research is to evaluate spaceborne SAR data for urban extent extraction, urban land cover mapping and urbanization monitoring. The methodology includes urban extraction using the KTH-Pavia urban extractor and multi-resolution SAR data, as well as object-based classification of urban land cover using KTH-SEG and TerraSAR-X data. The urban extent extraction is based on spatial indices and Grey Level Co-occurrence Matrix (GLCM) textures, while the object-based classification is based on KTH-SEG, an edge-aware region growing and merging algorithm. ENVISAT ASAR C-VV data at 30 m and 75 m resolution as well as TerraSAR-X data at 1 m and 3 m resolution were selected for this research. The results show that the KTH-Pavia Urban Extractor is effective in extracting urban areas and small towns from single-date, single-polarization ERS-1 SAR and ENVISAT ASAR data, and that urbanization monitoring can be performed in a timely, reliable and low-cost manner. The results also show that multi-resolution urban extraction produced more reliable results due to the reduction of the commission error, even though the overall accuracy does not change significantly. For urban land cover mapping, KTH-SEG was effective for classification of TerraSAR-X and TanDEM-X data, with a best accuracy of 83% achieved. These findings indicate that operational global urban mapping and urbanization monitoring are possible with multitemporal spaceborne SAR data, especially with the recent launch of Sentinel-1, which provides SAR data with global coverage, operational reliability and quick data delivery.
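
    A small NumPy sketch of the GLCM texture computation for one displacement vector, with a few commonly derived features; the quantization depth and displacement are illustrative and not the settings of the KTH-Pavia extractor:

        import numpy as np

        def glcm(img, dx=1, dy=0, levels=16):
            # Normalized co-occurrence counts of quantized grey levels at
            # displacement (dx, dy).
            q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
            h, w = q.shape
            a = q[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            b = q[max(0, dy):h - max(0, -dy), max(0, dx):w - max(0, -dx)]
            P = np.zeros((levels, levels))
            np.add.at(P, (a.ravel(), b.ravel()), 1)
            return P / P.sum()

        def glcm_features(P):
            i, j = np.indices(P.shape)
            return {"contrast":    ((i - j) ** 2 * P).sum(),
                    "homogeneity": (P / (1.0 + np.abs(i - j))).sum(),
                    "energy":      (P ** 2).sum()}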

  14. Multitemporal Multi-Resolution SAR Data for Urbanization Mapping and Monitoring: Midterm Results

    NASA Astrophysics Data System (ADS)

    Ban, Yifang; Gamba, Paolo; Jacob, Alexander; Salentinig, Andreas

    2014-11-01

    The objective of this research is to evaluate spaceborne SAR data for urban extent extraction, urban land cover mapping and urbanization monitoring. The methodology includes urban extraction using the KTH-Pavia urban extractor and multi-resolution SAR data, as well as object-based classification of urban land cover using KTH-SEG and TerraSAR-X data. The urban extent extraction is based on spatial indices and Grey Level Co-occurrence Matrix (GLCM) textures, while the object-based classification is based on KTH-SEG, an edge-aware region growing and merging algorithm. ENVISAT ASAR C-VV data at 30 m and 75 m resolution as well as TerraSAR-X data at 1 m and 3 m resolution were selected for this research. The results show that the KTH-Pavia Urban Extractor is effective in extracting urban areas and small towns from single-date, single-polarization ERS-1 SAR and ENVISAT ASAR data, and that urbanization monitoring can be performed in a timely, reliable and low-cost manner. The results also show that multi-resolution urban extraction produced more reliable results due to the reduction of the commission error, even though the overall accuracy does not change significantly. For urban land cover mapping, KTH-SEG was effective for classification of TerraSAR-X and TanDEM-X data, with a best accuracy of 83% achieved. These findings indicate that operational global urban mapping and urbanization monitoring are possible with multitemporal spaceborne SAR data, especially with the recent launch of Sentinel-1, which provides SAR data with global coverage, operational reliability and quick data delivery.

  15. Approaches to optimal aquifer management and intelligent control in a multiresolutional decision support system

    NASA Astrophysics Data System (ADS)

    Orr, Shlomo; Meystel, Alexander M.

    2005-03-01

    Despite remarkable new developments in stochastic hydrology and adaptations of advanced methods from operations research, stochastic control, and artificial intelligence, solutions of complex real-world problems in hydrogeology have been quite limited. The main reason is the ultimate reliance on first-principle models that lead to complex, distributed-parameter partial differential equations (PDE) on a given scale. While the addition of uncertainty, and hence stochasticity or randomness, has increased insight and highlighted important relationships between uncertainty, reliability, risk, and their effect on the cost function, it has also (a) introduced additional complexity that results in prohibitive computer power even for just a single uncertain/random parameter; and (b) led to the recognition of our inability to assess the full uncertainty even when including all uncertain parameters. A paradigm shift is introduced: an adaptation of new methods of intelligent control that relaxes the dependency on rigid, computer-intensive, stochastic PDE, and shifts the emphasis to a goal-oriented, flexible, adaptive, multiresolutional decision support system (MRDS) with strong unsupervised learning (oriented towards anticipation rather than prediction) and highly efficient optimization capability, which could provide the needed solutions of real-world aquifer management problems. The article highlights the links between past developments and future optimization/planning/control of hydrogeologic systems.

  16. Imaging the behavior of molecules in biological systems: breaking the 3D speed barrier with 3D multi-resolution microscopy.

    PubMed

    Welsher, Kevin; Yang, Haw

    2015-01-01

    The overwhelming effort in the development of new microscopy methods has focused on increasing the spatial and temporal resolution in all three dimensions to enable the measurement of the molecular-scale phenomena at the heart of biological processes. However, there exists a significant speed barrier to existing 3D imaging methods, associated with the overhead required to image large volumes. This overhead can be overcome, providing nearly unlimited temporal precision, by simply focusing on a single molecule or particle via real-time 3D single-particle tracking and the newly developed 3D Multi-resolution Microscopy (3D-MM). Here, we investigate the optical and mechanical limits of real-time 3D single-particle tracking in the context of other methods. In particular, we investigate the use of an optical cantilever for position-sensitive detection, finding that this method yields system magnifications of over 3000×. We also investigate the ideal PID control parameters and their effect on the power spectrum of simulated trajectories. Taken together, these data suggest that the speed limit in real-time 3D single-particle tracking is a result of slow piezoelectric stage response as opposed to optical sensitivity or PID control.

  17. A method for multi-resolution characterization on porous surfaces by using a laser confocal scanning microscope

    NASA Astrophysics Data System (ADS)

    Zou, Yibo; Kaestner, Markus; Reithmeier, Eduard

    2015-11-01

    In this paper, a new method for multi-resolution characterization is introduced to analyze porous surfaces on cylinder liners. The main purpose of this new approach is to investigate the influence of the resolution and magnification of different optical lenses on measuring the 3D geometry of pores, based on 3D microscopy topographical surface metrology. Two optical sensors (a 20× lens and a 50× lens) were applied to acquire the porous surface data for the initial investigation. A feature-based image matching algorithm is introduced for the purpose of registering identical microstructures in datasets with different pixel resolutions. The correlation between the sensor's resolution and the values of the numerical parameters describing pore geometry is studied statistically. Finally, the preliminary results of the multi-resolution characterization are presented and the impact of using a sensor with higher resolution on measuring the same object is discussed.

  18. Sociopolitical Analyses.

    ERIC Educational Resources Information Center

    Van Galen, Jane, Ed.; And Others

    1992-01-01

    This theme issue of the serial "Educational Foundations" contains four articles devoted to the topic of "Sociopolitical Analyses." In "An Interview with Peter L. McLaren," Mary Leach presented the views of Peter L. McLaren on topics of local and national discourses, values, and the politics of difference. Landon E.…

  19. Multiresolution analysis of the spatiotemporal variability in global radiation observed by a dense network of 99 pyranometers

    NASA Astrophysics Data System (ADS)

    Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Witthuhn, Jonas; Macke, Andreas

    2017-03-01

    The time series of global radiation observed by a dense network of 99 autonomous pyranometers during the HOPE campaign around Jülich, Germany, are investigated with a multiresolution analysis based on the maximum overlap discrete wavelet transform and the Haar wavelet. For different sky conditions, typical wavelet power spectra are calculated to quantify the timescale dependence of variability in global transmittance. Distinctly higher variability is observed at all frequencies in the power spectra of global transmittance under broken-cloud conditions compared to clear, cirrus, or overcast skies. The spatial autocorrelation function, including its frequency dependence, is determined to quantify the degree of similarity of two time series measurements as a function of their spatial separation. Distances ranging from 100 m to 10 km are considered, and a rapid decrease of the autocorrelation function is found with increasing frequency and distance. For frequencies above 1/3 min^-1 and points separated by more than 1 km, variations in transmittance become completely uncorrelated. A method is introduced to estimate the deviation between a point measurement and a spatially averaged value for a surrounding domain, which takes into account domain size and averaging period, and is used to explore the representativeness of a single pyranometer observation for its surrounding region. Two distinct mechanisms are identified which limit the representativeness; on the one hand, spatial averaging reduces variability and thus modifies the shape of the power spectrum. On the other hand, the correlation of variations of the spatially averaged field and a point measurement decreases rapidly with increasing temporal frequency. For a grid box of 10 km × 10 km and averaging periods of 1.5-3 h, the deviation of global transmittance between a point measurement and an area-averaged value depends on the prevailing sky conditions: 2.8 (clear), 1.8 (cirrus), 1.5 (overcast), and 4.2 % (broken clouds).
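
    The scale-by-scale variance can be sketched with an undecimated Haar transform, up to the exact MODWT normalization constants; the input is assumed to be a regularly sampled transmittance series much longer than 2**levels samples:

        import numpy as np

        def haar_modwt_power(x, levels=8):
            # The level-j Haar detail is proportional to the difference of two
            # adjacent running means of length 2**(j-1), evaluated at every
            # time step (no decimation).
            x = np.asarray(x, float)
            power = []
            for j in range(1, levels + 1):
                m = 2 ** (j - 1)
                means = np.convolve(x, np.ones(m) / m, mode="valid")
                detail = 0.5 * (means[m:] - means[:-m])
                power.append((detail ** 2).mean())
            return np.array(power)      # variance contributed by each timescale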

  20. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability

    NASA Astrophysics Data System (ADS)

    Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.

    2016-06-01

    This study forms part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in the context of variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse ~1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to ~0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model (`hiatus' analogs) are mainly associated with ENSO-related variability and to a lesser degree also with PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistently with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent

  1. Multiresolution iterative reconstruction in high-resolution extremity cone-beam CT

    NASA Astrophysics Data System (ADS)

    Cao, Qian; Zbijewski, Wojciech; Sisniega, Alejandro; Yorkston, John; Siewerdsen, Jeffrey H.; Webster Stayman, J.

    2016-10-01

    Application of model-based iterative reconstruction (MBIR) to high resolution cone-beam CT (CBCT) is computationally challenging because of the very fine discretization (voxel size <100 µm) of the reconstructed volume. Moreover, standard MBIR techniques require that the complete transaxial support for the acquired projections is reconstructed, thus precluding acceleration by restricting the reconstruction to a region-of-interest. To reduce the computational burden of high resolution MBIR, we propose a multiresolution penalized-weighted least squares (PWLS) algorithm, where the volume is parameterized as a union of fine and coarse voxel grids, as well as selective binning of detector pixels. We introduce a penalty function designed to regularize across the boundaries between the two grids. The algorithm was evaluated in simulation studies emulating an extremity CBCT system and in a physical study on a test-bench. Artifacts arising from the mismatched discretization of the fine and coarse sub-volumes were investigated. The fine grid region was parameterized using 0.15 mm voxels and the voxel size in the coarse grid region was varied by changing a downsampling factor. No significant artifacts were found in either of the regions for downsampling factors of up to 4×. For a typical extremities CBCT volume size, this downsampling corresponds to an acceleration of the reconstruction that is more than five times faster than a brute force solution that applies fine voxel parameterization to the entire volume. For certain configurations of the coarse and fine grid regions, in particular when the boundary between the regions does not cross high attenuation gradients, downsampling factors as high as 10× can be used without introducing artifacts, yielding a ~50× speedup in PWLS. The proposed multiresolution algorithm significantly reduces the computational burden of high resolution iterative CBCT reconstruction and can be extended to other applications of

  2. Current limiters

    SciTech Connect

    Loescher, D.H.; Noren, K.

    1996-09-01

    The current that flows between the electrical test equipment and the nuclear explosive must be limited to safe levels during electrical tests conducted on nuclear explosives at the DOE Pantex facility. The safest way to limit the current is to use batteries that can provide only acceptably low current into a short circuit; unfortunately this is not always possible. When it is not possible, current limiters, along with other design features, are used to limit the current. Three types of current limiter, the fuse blower, the resistor limiter, and the MOSFET-pass-transistor limiter, are used extensively in Pantex test equipment. Detailed failure mode and effects analyses were conducted on these limiters. Two other types of limiters were also analyzed. It was found that there is no single best type of limiter for all applications. The fuse blower has advantages when many circuits must be monitored, a low insertion voltage drop is important, and size and weight must be kept low. However, this limiter has many failure modes that can lead to the loss of overcurrent protection. The resistor limiter is simple and inexpensive, but is normally usable only on circuits for which the nominal current is less than a few tens of milliamperes. The MOSFET limiter can be used on high current circuits, but it has a number of single-point failure modes that can lead to a loss of protective action. Because bad component placement or poor wire routing can defeat any limiter, placement and routing must be designed carefully and documented thoroughly.

  3. A new multi-resolution hybrid wavelet for analysis and image compression

    NASA Astrophysics Data System (ADS)

    Kekre, Hemant B.; Sarode, Tanuja K.; Vig, Rekha

    2015-12-01

    As most current image- and video-related applications require higher image resolution and higher data rates during transmission, better compression techniques are constantly being sought. This paper proposes a new and unique hybrid wavelet technique for image analysis and compression. The proposed hybrid wavelet combines the properties of existing orthogonal transforms in the most desirable way and also provides for multi-resolution analysis. A unique property of these wavelets is that they can be generated for various sizes and types by using different component transforms and varying the number of components at each level of resolution. These hybrid wavelets have been applied to various standard images such as Lena (512 × 512) and Cameraman (256 × 256), and the resulting peak signal to noise ratio (PSNR) values are compared with those obtained using standard existing compression techniques. Considerable improvement in PSNR, as much as 5.95 dB over the standard methods, has been observed, which shows that the hybrid wavelet gives better compression. Images of various sizes, such as Scenery (200 × 200), Fruit (375 × 375) and Barbara (112 × 224), have also been compressed using these wavelets to demonstrate their use for different sizes and shapes.
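
    For reference, the quoted PSNR is the standard peak signal-to-noise ratio; a minimal sketch for 8-bit images:

        import numpy as np

        def psnr(original, compressed, peak=255.0):
            # Higher PSNR means less distortion, so a +5.95 dB gain at the same
            # bit budget indicates better compression.
            mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
            return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)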

  4. Multi-core/GPU accelerated multi-resolution simulations of compressible flows

    NASA Astrophysics Data System (ADS)

    Hejazialhosseini, Babak; Rossinelli, Diego; Koumoutsakos, Petros

    2010-11-01

    We develop a multi-resolution solver for single- and multi-phase compressible flow simulations by coupling average-interpolating wavelets and local time stepping schemes with high order finite volume schemes. Wavelets allow for high compression rates and explicit control over the error in the adaptive representation of the flow field, but their efficient parallel implementation is hindered by traditional data-parallel models. In this work we demonstrate that this methodology can be implemented so that it benefits from the processing power of emerging hybrid multi-core and multi-GPU architectures. This is achieved by exploiting the task-based parallelism paradigm and the concept of wavelet blocks, combined with OpenCL and Intel Threading Building Blocks. The solver is able to handle high resolution jumps and benefits from adaptive time integration using local time stepping schemes as implemented on heterogeneous multi-core/GPU architectures. We demonstrate the accuracy of our method and the performance of our solver on different architectures for 2D simulations of shock-bubble interaction and the Richtmyer-Meshkov instability.

  5. A method of image multi-resolution processing based on FPGA + DSP architecture

    NASA Astrophysics Data System (ADS)

    Peng, Xiaohan; Zhong, Sheng; Lu, Hongqiang

    2015-10-01

    In real-time image processing, as the resolution and frame rate of camera imaging improve, both the required processing capacity and the need to optimize the processing pipeline grow. For an FPGA + DSP image processing system, there are three common ways to meet this challenge. The first is using a higher-performance DSP, for example one with a higher core frequency or more cores. The second is optimizing the processing method, making the algorithm accomplish the same results in less time. Last but not least, pre-processing in the FPGA can make the image processing more efficient. A method of multi-resolution pre-processing by the FPGA, based on the FPGA + DSP architecture, is proposed here. It takes advantage of a built-in first-in-first-out (FIFO) buffer and external synchronous dynamic random access memory (SDRAM) to buffer the images coming from the image detector, and provides down-sampled or cut-down images to the DSP flexibly and efficiently according to request parameters sent by the DSP. The DSP can thus process a reduced image instead of the whole image, greatly shortening processing and transmission time. The method alleviates the image processing burden on the DSP and also solves the problem that a single method of image resolution reduction cannot meet the requirements of the DSP's image processing tasks.
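
    A software stand-in for the FPGA-side pre-processing (block-average down-sampling or a cut-out region, selected by the DSP's request parameters); the mode names and ROI format are invented for illustration:

        import numpy as np

        def preprocess(frame, mode, factor=2, roi=None):
            if mode == "downsample":        # block-average by 'factor'
                h = (frame.shape[0] // factor) * factor
                w = (frame.shape[1] // factor) * factor
                return frame[:h, :w].reshape(h // factor, factor,
                                             w // factor, factor).mean(axis=(1, 3))
            if mode == "crop":              # cut-down image around a region of interest
                y0, y1, x0, x1 = roi
                return frame[y0:y1, x0:x1]
            return frame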

  6. Uncertainty Quantification in Multi-Scale Coronary Simulations Using Multi-resolution Expansion

    NASA Astrophysics Data System (ADS)

    Tran, Justin; Schiavazzi, Daniele; Ramachandra, Abhay; Kahn, Andrew; Marsden, Alison

    2016-11-01

    Computational simulations of coronary flow can provide non-invasive information on hemodynamics that can aid in surgical planning and research on disease propagation. In this study, patient-specific geometries of the aorta and coronary arteries are constructed from CT imaging data and finite element flow simulations are carried out using the open source software SimVascular. Lumped parameter networks (LPN), consisting of circuit representations of vascular hemodynamics and coronary physiology, are used as coupled boundary conditions for the solver. The outputs of these simulations depend on a set of clinically derived input parameters that define the geometry and boundary conditions; however, their values are subject to uncertainty. We quantify the effects of uncertainty from two sources: uncertainty in the material properties of the vessel wall and uncertainty in the lumped parameter models, whose values are estimated by assimilating patient-specific clinical and literature data. We use a generalized multi-resolution chaos approach to propagate the uncertainty. The advantages of this approach lie in its ability to support inputs sampled from arbitrary distributions and its built-in adaptivity, which efficiently approximates stochastic responses characterized by steep gradients.
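
    The generalized multi-resolution chaos expansion itself is involved; as a simplified stand-in, the sketch below propagates input uncertainty through a toy two-element Windkessel (RC) boundary model by plain Monte Carlo. All distributions and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
R = rng.lognormal(mean=np.log(1.0), sigma=0.2, size=n)  # resistance (arbitrary units)
C = rng.uniform(0.8, 1.2, size=n)                       # compliance (arbitrary units)

# Steady-state pressure-amplitude surrogate for sinusoidal inflow Q(t) = Q0*sin(w*t),
# using the RC impedance magnitude |Z| = R / sqrt(1 + (w*R*C)^2):
w, Q0 = 2.0 * np.pi, 1.0
p_amp = Q0 * R / np.sqrt(1.0 + (w * R * C) ** 2)
print(f"output mean = {p_amp.mean():.3f}, std = {p_amp.std():.3f}")
```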

  7. Five Micron High Resolution MALDI Mass Spectrometry Imaging with Simple, Interchangeable, Multi-Resolution Optical System

    DOE PAGES

    Feenstra, Adam D.; Dueñas, Maria Emilia; Lee, Young Jin

    2017-01-03

    High-spatial resolution mass spectrometry imaging (MSI) is crucial for the mapping of chemical distributions at the cellular and subcellular level. In this work, we improved our previous laser optical system for matrix-assisted laser desorption ionization (MALDI)-MSI, from a practical laser spot size of ~9 μm to ~4 μm, thereby allowing for 5 μm resolution imaging without oversampling. This is accomplished through a combination of spatial filtering, beam expansion, and reduction of the final focal length. Most importantly, the new laser optics system allows for simple modification of the spot size solely through the interchanging of the beam expander component. Using 10×, 5×, and no beam expander, we could routinely change between ~4, ~7, and ~45 μm laser spot sizes in less than 5 min. We applied this multi-resolution MALDI-MSI system to a single maize root tissue section with three different spatial resolutions of 5, 10, and 50 μm and compared the differences in imaging quality and signal sensitivity. Lastly, we also demonstrated the difference in depth of focus between the optical systems with 10× and 5× beam expanders.

  8. Multiresolution molecular dynamics algorithm for realistic materials modeling on parallel computers

    NASA Astrophysics Data System (ADS)

    Nakano, Aiichiro; Kalia, Rajiv K.; Vashishta, Priya

    1994-12-01

    For realistic modeling of materials, a molecular-dynamics (MD) algorithm is developed based on multiresolutions in both space and time. Materials of interest are characterized by the long-range Coulomb, steric and charge-dipole interactions as well as three-body covalent potentials. The long-range Coulomb interaction is computed with the fast multipole method. For bulk systems with periodic boundary conditions, infinite summation over repeated image charges is carried out with the reduced cell multipole method. Short- and medium-range non-Coulombic interactions are computed with the multiple time-step approach. A separable tensor decomposition scheme is used to compute three-body potentials. For a 4.2 million-particle SiO2 system, one MD step takes only 4.8 seconds on the 512-node Intel Touchstone Delta machine and 10.3 seconds on 64 nodes of an IBM SP1 system. The constant-grain parallel efficiency of the program is η' = 0.92 and the communication overhead is 8% on the Delta machine. On the SP1 system, η' = 0.91 and the communication overhead is 7%.
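
    The multiple time-step idea can be sketched in a few lines (an r-RESPA-style splitting, not the authors' code): cheap short-range forces are integrated with a small step while the expensive long-range force is held fixed over k substeps.

```python
import numpy as np

def step_mts(x, v, f_short, f_long, dt, k=5, m=1.0):
    """One outer step of size k*dt: slow force kicked once, fast force substepped."""
    v = v + 0.5 * (k * dt) * f_long(x) / m        # half kick from the slow force
    for _ in range(k):                            # velocity-Verlet with the fast force
        v = v + 0.5 * dt * f_short(x) / m
        x = x + dt * v
        v = v + 0.5 * dt * f_short(x) / m
    v = v + 0.5 * (k * dt) * f_long(x) / m        # closing half kick
    return x, v

# Example: harmonic short-range force plus a weak constant "long-range" drift.
x, v = np.array([1.0]), np.array([0.0])
x, v = step_mts(x, v, f_short=lambda x: -x, f_long=lambda x: 0.01 * np.ones_like(x), dt=0.01)
```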

  9. A three-channel miniaturized optical system for multi-resolution imaging

    NASA Astrophysics Data System (ADS)

    Belay, Gebirie Y.; Ottevaere, Heidi; Meuret, Youri; Thienpont, Hugo

    2013-09-01

    Inspired by the natural compound eyes of insects, multichannel imaging systems embrace many channels that scramble their entire Field-Of-View (FOV). Our aim in this work was to introduce multi-resolution capability into a multi-channel imaging system by manipulating the available channels to possess different imaging properties (focal length, angular resolution). We have designed a three-channel imaging system where the first and third channels have the highest and lowest angular resolutions of 0.0096° and 0.078°, and the narrowest and widest FOVs of 7° and 80°, respectively. The design of the channels has been done for a single wavelength of 587.6 nm using CODE V. The three channels each consist of 4 aspherical lens surfaces and an absorbing baffle that avoids crosstalk among neighbouring channels. The aspherical lens surfaces have been fabricated in PMMA by ultra-precision diamond tooling and the baffles by metal additive manufacturing. The profiles of the fabricated lens surfaces have been measured with an accurate multi-sensor coordinate measuring machine and compared with the corresponding profiles of the designed lens surfaces. The fabricated lens profiles are then incorporated into CODE V to realistically model the three channels and to compare their performances with those of the nominal design. We can conclude that the performances of the two models are in good agreement.

  10. Interactive, Internet Delivery of Scientific Visualization viaStructured, Prerendered Multiresolution Imagery

    SciTech Connect

    Chen, Jerry; Yoon, Ilmi; Bethel, E. Wes

    2005-04-20

    We present a novel approach for highly interactive remote delivery of visualization results. Instead of rendering in real time across the internet, our approach, inspired by QuickTime VR's Object Movie concept, delivers pre-rendered images corresponding to different viewpoints and different time steps to provide the experience of 3D and temporal navigation. We use tiled, multiresolution image streaming to consume minimum bandwidth while providing the maximum resolution that a user can perceive from a given viewpoint. Since image data, a viewpoint and time stamps are the only required inputs, our approach is generally applicable to all visualization and graphics rendering applications capable of generating image files in an ordered fashion. Our design is a form of latency-tolerant remote visualization, where visualization and rendering time is effectively decoupled from interactive exploration. Our approach trades off unconstrained exploration for increased interactivity, flexible resolution (for individual clients), and, from the server's perspective, reduced load and effective reuse of coherent frames between multiple users. A normal web server is the vehicle for providing on-demand images to the remote client application, which uses client-pull to obtain and cache only those images required to fulfill the interaction needs. This paper presents an architectural description of the system along with a performance characterization of the production, delivery and viewing pipeline.
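
    A hedged sketch of the client-pull tile selection: given the client viewport, pick the coarsest pyramid level that still meets the displayed resolution and request only the tiles covering the view. The pyramid layout, URL scheme, and function name are illustrative assumptions, not the paper's protocol.

```python
def tiles_for_view(view_w_px: int, scene_w_px: int, tile_px: int = 256, levels: int = 5):
    """Pick the coarsest pyramid level that still meets the viewport resolution."""
    level = 0  # level 0 = full resolution; each level halves the image width
    while level + 1 < levels and scene_w_px / (2 ** (level + 1)) >= view_w_px:
        level += 1
    width = scene_w_px // (2 ** level)
    n_tiles = (width + tile_px - 1) // tile_px    # ceiling division over the row
    return level, [f"/tiles/L{level}/x{c}.jpg" for c in range(n_tiles)]

print(tiles_for_view(view_w_px=512, scene_w_px=4096))
```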

  11. Multiscale and multiresolution modeling of shales and their flow and morphological properties

    PubMed Central

    Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad

    2015-01-01

    The need for more accessible energy resources makes shale formations increasingly important. Characterization of such low-permeability formations is complicated, due to the presence of multiscale features, and defies conventional methods. High-quality 3D imaging may be an ultimate solution for revealing the complexities of such porous media, but acquiring it is costly and time consuming. High-quality 2D images, on the other hand, are widely available. A novel three-step, multiscale, multiresolution reconstruction method is presented that directly uses 2D images in order to develop 3D models of shales. It uses a high-resolution 2D image representing the small-scale features to reproduce the nanopores and their network, and a large-scale, low-resolution 2D image to create the larger-scale characteristics, and generates stochastic realizations of the porous formation. The method is used to develop a model for a shale system for which the full 3D image is available and its properties can be computed. The predictions of the reconstructed models are in excellent agreement with the data. The method is, however, quite general and can be used for reconstructing models of other important heterogeneous materials and media. Two further examples, from biology and materials science, are also reconstructed to demonstrate the generality of the method. PMID:26560178

  12. Identification of Buried Objects in GPR Using Amplitude Modulated Signals Extracted from Multiresolution Monogenic Signal Analysis

    PubMed Central

    Qiao, Lihong; Qin, Yao; Ren, Xiaozhen; Wang, Qifu

    2015-01-01

    It is necessary to detect the target reflections in ground penetrating radar (GPR) images so that subsurface metal targets can be identified successfully. In order to accurately locate buried metal objects, a novel method called the Multiresolution Monogenic Signal Analysis (MMSA) system is applied to ground penetrating radar (GPR) images. This process includes four steps. First, the image is decomposed by the MMSA to extract the amplitude component of the B-scan image. The amplitude component enhances the target reflection and suppresses the direct wave and reflective wave to a large extent. Then we use a region-of-interest extraction method to separate the genuine target reflections from spurious reflections by calculating the normalized variance of the amplitude component. To find the apexes of the targets, a Hough transform is used in the restricted area. Finally, we estimate the horizontal and vertical position of the target. In terms of buried object detection, the proposed system exhibits promising performance, as shown in the experimental results. PMID:26690146
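
    As a hedged 1D analogue of the amplitude component used here: the monogenic signal generalizes the analytic signal to 2D, so for a single A-scan the envelope can be sketched with a Hilbert transform via SciPy (the paper's 2D MMSA decomposition differs; the toy trace below is illustrative).

```python
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 1.0, 512)
trace = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.5) ** 2) / 0.002)  # toy reflection
amplitude = np.abs(hilbert(trace))   # envelope: highlights the reflection's location
```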

  13. Multiresolution scanning imager with spatially uniform noise response based on a new class of Hadamard masks

    NASA Astrophysics Data System (ADS)

    Bone, Donald J.; Popescu, Dan C.

    2000-05-01

    In spite of the prodigious growth in the market for digital cameras, they have yet to displace film-based cameras in the consumer market. This is largely due to the high cost of photographic-resolution sensors. One possible approach to producing a low-cost, high-resolution sensor is to linearly scan a masked low-resolution sensor. Masking of the sensor elements allows transform-domain imaging. Multiple displaced exposures of such a masked sensor permit the device to acquire a linear transform of a higher-resolution representation of the image than that defined by the sensor element dimensions. Various approaches have been developed along these lines in the past, but they often suffer from poor sensitivity, difficulty in being adapted to a 2D sensor, or a spatially variable noise response. This paper presents an approach based on a new class of Hadamard masks--Uniform Noise Hadamard Masks--which has superior sensitivity to simple sampling approaches and retains the multiresolution capabilities of certain Hadamard matrices, while overcoming the non-uniform noise response problems of some simple Hadamard-based masks.
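
    The measure-then-invert principle behind Hadamard-mask imaging can be sketched as follows; SciPy's standard Hadamard matrix stands in for the paper's Uniform Noise Hadamard Masks, and the 1D scene is illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

n = 16
H = hadamard(n)                                  # +/-1 entries; each row is a mask pattern
scene = np.random.default_rng(1).random(n)       # 1D stand-in for a row of pixels

measurements = H @ scene                         # one exposure per mask
recovered = (H.T @ measurements) / n             # H @ H.T = n * I, so inversion is exact
assert np.allclose(recovered, scene)
```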

  14. Three-dimensional ductile fracture analysis with a hybrid multiresolution approach and microtomography

    NASA Astrophysics Data System (ADS)

    Tang, Shan; Kopacz, Adrian M.; Chan O'Keeffe, Stephanie; Olson, Gregory B.; Liu, Wing Kam

    2013-11-01

    A modified-JIC test on CT (compact tension) specimens of an alloy (Ti-Modified 4330 steel) was carried out. The microstructure (primary and secondary inclusions) in the fracture process zone and the fracture surface are reconstructed with a microtomography technique. A zig-zag fracture profile resulting from the nucleation of microvoid sheets at the secondary population of inclusions is observed. Embedding the experimentally reconstructed microstructure into the fracture process zone, the ductile fracture process occurring at different length scales within the microstructure is modeled by a hybrid multiresolution approach. In combination with the large scale simulation, detailed studies and statistical analysis show that shearing of microvoids (the secondary population of voids) determines the mixed-mode zig-zag fracture profile. The deformation in the macro and micro zones, along with the interaction between them, affects the fracture process. The zig-zag fracture profile observed in the experiment is also reasonably captured. Simulations can provide a more detailed understanding of the mechanics of the fracture process than experiments, which is beneficial in microstructure design for improving the performance of alloys.

  15. Fully automated analysis of multi-resolution four-channel micro-array genotyping data

    NASA Astrophysics Data System (ADS)

    Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.

    2006-03-01

    We present a fully-automated and robust microarray image analysis system for handling multi-resolution images (down to 3-micron, with sizes up to 80 MB per channel). The system is developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining the genotypes of multiple genetic markers in individuals. It plays an important role in the trend toward personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice requires fast, robust, and precise image processing tools, with a turn-around timeline compatible with clinical decision-making. In this paper we have developed a fully-automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.

  16. Combination of geodetic measurements by means of a multi-resolution representation

    NASA Astrophysics Data System (ADS)

    Goebel, G.; Schmidt, M. G.; Börger, K.; List, H.; Bosch, W.

    2010-12-01

    Recent and in particular current satellite gravity missions provide important contributions to global Earth gravity models, and these global models can be refined by airborne and terrestrial gravity observations. The most common representation of a gravity field model, in terms of spherical harmonics, has the disadvantages that small spatial details are difficult to represent and that data gaps cannot be handled appropriately. Adequate modeling using a multi-resolution representation (MRP) is necessary in order to extract the greatest possible amount of information from all of these measurements. The MRP provides a simple hierarchical framework for identifying the properties of a signal. The procedure starts from the measurements, performs the decomposition into frequency-dependent detail signals by applying a pyramidal algorithm, and allows for data compression and filtering, i.e. data manipulations. Since different geodetic measurement types (terrestrial, airborne, spaceborne) cover different parts of the frequency spectrum, it seems reasonable to calculate the detail signals of the lower levels mainly from satellite data, the detail signals of the medium levels mainly from airborne data, and the detail signals of the higher levels mainly from terrestrial data. A concept is presented for how these different measurement types can be combined within the MRP. In this presentation the basic strategies and concepts for the generation of MRPs will be shown. Examples of regional gravity field determination are presented.
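
    A hedged 1D illustration of the MRP idea using PyWavelets: a pyramidal algorithm decomposes a signal into frequency-dependent detail levels, which could in principle be populated from different data sources. The basis and the synthetic signal are illustrative assumptions.

```python
import numpy as np
import pywt

signal = np.random.default_rng(2).standard_normal(1024).cumsum()  # smooth-ish 1D field
coeffs = pywt.wavedec(signal, "db4", level=5)    # [cA5, cD5, cD4, cD3, cD2, cD1]

for i in range(1, len(coeffs)):                  # rebuild each detail band on its own
    parts = [np.zeros_like(c) for c in coeffs]
    parts[i] = coeffs[i]
    detail = pywt.waverec(parts, "db4")
    print(f"detail level {len(coeffs) - i}: variance {detail.var():.3f}")
```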

  17. Segmentation of Breast Lesions in Ultrasound Images through Multiresolution Analysis Using Undecimated Discrete Wavelet Transform.

    PubMed

    Prabusankarlal, K M; Thirumoorthy, P; Manavalan, R

    2016-11-01

    Early detection and diagnosis of breast cancer reduces patient mortality by increasing the treatment options. A novel method for the segmentation of breast ultrasound images is proposed in this work. The proposed method utilizes the undecimated discrete wavelet transform to perform multiresolution analysis of the input ultrasound image. As the resolution level increases, although the effect of noise reduces, the details of the image also dilute. The appropriate resolution level, which contains the essential details of the tumor, is automatically selected through mean structural similarity. The feature vector for each pixel is constructed by sampling intra-resolution and inter-resolution data of the image. The dimensionality of the feature vectors is reduced using principal components analysis. The reduced set of feature vectors is segmented into two disjoint clusters using a spatially regularized fuzzy c-means algorithm. The proposed algorithm is evaluated using four validation metrics on a breast ultrasound database of 150 images, including 90 benign and 60 malignant cases. The algorithm produced significantly better segmentation results (Dice coefficient = 0.8595, boundary displacement error = 9.796, dvi = 1.744, and global consistency error = 0.1835) than the other three state-of-the-art methods.
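
    A simplified sketch of the feature-building stage: an undecimated (stationary) 2D wavelet transform keeps every subband at full image size, so each pixel receives a feature vector across resolutions. Here PCA and plain KMeans stand in for the paper's dimension reduction and spatially regularized fuzzy c-means, and the input image is synthetic.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

image = np.random.default_rng(3).random((64, 64))   # placeholder ultrasound image
bands = pywt.swt2(image, "db2", level=2)            # undecimated DWT: subbands keep full size

features = [image.ravel()]
for cA, (cH, cV, cD) in bands:
    features += [cA.ravel(), cH.ravel(), cV.ravel(), cD.ravel()]
X = np.stack(features, axis=1)                      # one feature row per pixel

X = PCA(n_components=4).fit_transform(X)            # dimension reduction
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X).reshape(image.shape)
```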

  18. Multi-resolution cell orientation congruence descriptors for epithelium segmentation in endometrial histology images.

    PubMed

    Li, Guannan; Raza, Shan E Ahmed; Rajpoot, Nasir M

    2017-04-01

    It has been recently shown that recurrent miscarriage can be caused by abnormally high ratio of number of uterine natural killer (UNK) cells to the number of stromal cells in human female uterus lining. Due to high workload, the counting of UNK and stromal cells needs to be automated using computer algorithms. However, stromal cells are very similar in appearance to epithelial cells which must be excluded in the counting process. To exclude the epithelial cells from the counting process it is necessary to identify epithelial regions. There are two types of epithelial layers that can be encountered in the endometrium: luminal epithelium and glandular epithelium. To the best of our knowledge, there is no existing method that addresses the segmentation of both types of epithelium simultaneously in endometrial histology images. In this paper, we propose a multi-resolution Cell Orientation Congruence (COCo) descriptor which exploits the fact that neighbouring epithelial cells exhibit similarity in terms of their orientations. Our experimental results show that the proposed descriptors yield accurate results in simultaneously segmenting both luminal and glandular epithelium.

  19. Wavelet multiresolution analysis of the three vorticity components in a turbulent far wake.

    PubMed

    Zhou, T; Rinoshika, A; Hao, Z; Zhou, Y; Chua, L P

    2006-03-01

    The main objective of the present study is to examine the characteristics of the vortical structures in a turbulent far wake using the wavelet multiresolution technique, by decomposing the vorticity into a number of orthogonal wavelet components based on different central frequencies. The three vorticity components were measured simultaneously using an eight-wire probe at three Reynolds numbers, namely 2000, 4000, and 6000. It is found that the dominant contributions to the vorticity variances are from the intermediate and relatively small-scale structures. The contributions from the large and intermediate-scale structures to the vorticity variances decrease with increasing Reynolds number. The contributions from the small-scale structures to all three vorticity variances jump significantly when the Reynolds number is changed from 2000 to 4000, which is connected to previous observations in the near wake that there is a significant increase in the generation of small-scale structures once the Reynolds number reaches about 5000. This result reinforces the notion that turbulence "remembers" its origin.

  20. Multiresolution Approach for Noncontact Measurements of Arterial Pulse Using Thermal Imaging

    NASA Astrophysics Data System (ADS)

    Chekmenev, Sergey Y.; Farag, Aly A.; Miller, William M.; Essock, Edward A.; Bhatnagar, Aruni

    This chapter presents a novel computer vision methodology for noncontact and nonintrusive measurements of arterial pulse. This is the only investigation that links the knowledge of human physiology and anatomy, advances in thermal infrared (IR) imaging, and computer vision to produce noncontact and nonintrusive measurements of the arterial pulse in both time and frequency domains. The proposed approach has a physical and physiological basis and as such is of a fundamental nature. A thermal IR camera was used to capture the heat pattern from superficial arteries, and a blood vessel model was proposed to describe the pulsatile nature of the blood flow. A multiresolution wavelet-based signal analysis approach was applied to extract the arterial pulse waveform, which lends itself to various physiological measurements. We validated our results using a traditional contact vital signs monitor as ground truth. Eight people of different ages, races and genders were tested in our study, consistent with Health Insurance Portability and Accountability Act (HIPAA) regulations and institutional review board approval. The resultant arterial pulse waveforms exactly matched the ground-truth oximetry readings. The essence of our approach is the automatic detection of the region of measurement (ROM) of the arterial pulse, from which the arterial pulse waveform is extracted. To the best of our knowledge, the correspondence between noncontact thermal IR imaging-based measurements of the arterial pulse in the time domain and traditional contact approaches has never been reported in the literature.

  1. Multiresolution analysis over graphs for a motor imagery based online BCI game.

    PubMed

    Asensio-Cubero, Javier; Gan, John Q; Palaniappan, Ramaswamy

    2016-01-01

    Multiresolution analysis (MRA) over graph representation of EEG data has proved to be a promising method for offline brain-computer interfacing (BCI) data analysis. For the first time, we aim to prove the feasibility of the graph lifting transform in an online BCI system. Instead of developing a pointer device or a wheelchair controller as a test bed for human-machine interaction, we have designed and developed an engaging game which can be controlled by means of imaginary limb movements. Some modifications to the existing MRA analysis over graphs for BCI have also been proposed, such as the use of common spatial patterns for feature extraction at the different levels of decomposition, and sequential floating forward search as a best-basis selection technique. In the online game experiment we obtained an average classification rate of 63.0% over three classes for fourteen naive subjects. The application of a best-basis selection method helps significantly decrease the computing resources needed. The present study allows us to further understand and assess the benefits of tailored wavelet analysis for processing motor imagery data and contributes to the further development of BCI for gaming purposes.
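
    A hedged sketch of common spatial patterns (CSP), the feature-extraction step mentioned above, for two-class motor imagery; the graph lifting transform and the best-basis search are not reproduced here, and the array shapes and names are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    avg_cov = lambda trials: np.mean([np.cov(t) for t in trials], axis=0)
    Sa, Sb = avg_cov(trials_a), avg_cov(trials_b)
    vals, vecs = eigh(Sa, Sa + Sb)                 # generalized eigenproblem
    order = np.argsort(vals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return vecs[:, picks].T                        # rows are spatial filters

def csp_features(W, trial):
    """Log-variance of the CSP-filtered signals, the usual classifier input."""
    return np.log(np.var(W @ trial, axis=1))
```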

  2. Five Micron High Resolution MALDI Mass Spectrometry Imaging with Simple, Interchangeable, Multi-Resolution Optical System

    NASA Astrophysics Data System (ADS)

    Feenstra, Adam D.; Dueñas, Maria Emilia; Lee, Young Jin

    2017-01-01

    High-spatial resolution mass spectrometry imaging (MSI) is crucial for the mapping of chemical distributions at the cellular and subcellular level. In this work, we improved our previous laser optical system for matrix-assisted laser desorption ionization (MALDI)-MSI, from 9 μm practical laser spot size to a practical laser spot size of 4 μm, thereby allowing for 5 μm resolution imaging without oversampling. This is accomplished through a combination of spatial filtering, beam expansion, and reduction of the final focal length. Most importantly, the new laser optics system allows for simple modification of the spot size solely through the interchanging of the beam expander component. Using 10×, 5×, and no beam expander, we could routinely change between 4, 7, and 45 μm laser spot size, in less than 5 min. We applied this multi-resolution MALDI-MSI system to a single maize root tissue section with three different spatial resolutions of 5, 10, and 50 μm and compared the differences in imaging quality and signal sensitivity. We also demonstrated the difference in depth of focus between the optical systems with 10× and 5× beam expanders.

  3. Towards autonomous on-road driving via multiresolutional and hierarchical moving-object prediction

    NASA Astrophysics Data System (ADS)

    Ajot, Jerome; Schlenoff, Craig I.; Madhavan, Raj

    2004-12-01

    In this paper, we present the PRIDE framework (Prediction In Dynamic Environments), a hierarchical multi-resolutional approach for moving object prediction that incorporates multiple prediction algorithms into a single, unifying framework. PRIDE is based upon the 4D/RCS (Real-time Control System) and provides information to planners at the level of granularity that is appropriate for their planning horizon. The lower levels of the framework utilize estimation-theoretic short-term predictions based upon an extended Kalman filter that provide predictions and associated uncertainty measures. The upper levels utilize a probabilistic prediction approach based upon situation recognition with an underlying cost model that provides predictions incorporating environmental information and constraints. These predictions are made at lower frequencies and at a level of resolution more in line with the needs of higher-level planners. PRIDE is run in the system's world model independently of the planner and the control system. The results of the prediction are made available to a planner to allow it to make accurate plans in dynamic environments. We have applied this approach to an on-road driving control hierarchy being developed as part of the DARPA Mobile Autonomous Robotic Systems (MARS) effort.

  4. A multiformalism and multiresolution modelling environment: application to the cardiovascular system and its regulation

    PubMed Central

    Hernández, Alfredo I.; Le Rolle, Virginie; Defontaine, Antoine; Carrault, Guy

    2009-01-01

    The role of modelling and simulation in the systemic analysis of living systems is now clearly established. Emerging disciplines, such as Systems Biology, and world-wide research actions, such as the Physiome project or the Virtual Physiological Human, are based on an intensive use of modelling and simulation methodologies and tools. One of the key aspects in this context is to perform an efficient integration of various models representing different biological or physiological functions, at different resolutions, spanning different scales. This paper presents a multi-formalism modelling and simulation environment (M2SL) that has been conceived to ease model integration. A given model is represented as a set of coupled, atomic model components that may be based on different mathematical formalisms with heterogeneous structural and dynamical properties. A co-simulation approach is used to solve these hybrid systems. The pioneering model of the overall regulation of the cardiovascular system, proposed by Guyton, Coleman & Granger in 1972, has been implemented under M2SL, and a pulsatile ventricular model based on a time-varying elastance has been integrated in a multi-resolution approach. Simulations reproducing physiological conditions and using different coupling methods show the benefits of the proposed environment. PMID:19884187
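
    A minimal sketch of a time-varying elastance ventricle of the kind integrated here: P(t) = E(t) * (V(t) - V0), with a raised-cosine activation for E(t). All parameter values below are illustrative, not from the paper.

```python
import numpy as np

def elastance(t, T=0.8, Emax=2.5, Emin=0.06, Ts=0.3):
    """Ventricular elastance (mmHg/ml) over a cardiac cycle of period T (s)."""
    tc = t % T
    if tc < Ts:  # systolic raised-cosine activation
        return Emin + 0.5 * (Emax - Emin) * (1.0 - np.cos(2.0 * np.pi * tc / Ts))
    return Emin  # diastole

V, V0 = 120.0, 10.0          # ml; volume held fixed purely for illustration
for t in np.linspace(0.0, 0.8, 5):
    print(f"t = {t:.2f} s  P = {elastance(t) * (V - V0):6.1f} mmHg")
```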

  5. Automatic multiresolution age-related macular degeneration detection from fundus images

    NASA Astrophysics Data System (ADS)

    Garnier, Mickaël; Hurtut, Thomas; Ben Tahar, Houssem; Cheriet, Farida

    2014-03-01

    Age-related Macular Degeneration (AMD) is a leading cause of legal blindness. As the disease progresses, visual loss occurs rapidly, therefore early diagnosis is required for timely treatment. Automatic, fast and robust screening of this widespread disease should allow an early detection. Most of the automatic diagnosis methods in the literature are based on a complex segmentation of the drusen, targeting a specific symptom of the disease. In this paper, we present a preliminary study for AMD detection from color fundus photographs using a multiresolution texture analysis. We analyze the texture at several scales by using a wavelet decomposition in order to identify all the relevant texture patterns. Textural information is captured using both the sign and magnitude components of the completed model of Local Binary Patterns. An image is finally described with the textural pattern distributions of the wavelet coefficient images obtained at each level of decomposition. We use Linear Discriminant Analysis for feature dimension reduction, to avoid the curse of dimensionality, and for image classification. Experiments were conducted on a dataset containing 45 images (23 healthy and 22 diseased) of variable quality and captured by different cameras. Our method achieved a recognition rate of 93.3%, with a specificity of 95.5% and a sensitivity of 91.3%. This approach shows promising results at low cost, in agreement with medical experts, as well as robustness to both image quality and fundus camera model.
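
    A hedged sketch of the descriptor construction: LBP histograms computed on the coefficient images of a wavelet decomposition. scikit-image's plain uniform LBP stands in for the paper's completed sign/magnitude LBP model, and the input patch is synthetic.

```python
import numpy as np
import pywt
from skimage.feature import local_binary_pattern

image = np.random.default_rng(4).random((128, 128))   # placeholder fundus patch
P, R = 8, 1.0                                         # LBP neighbours and radius

descriptor = []
approx = image
for level in range(3):                                # three decomposition scales
    approx, (cH, cV, cD) = pywt.dwt2(approx, "haar")
    for band in (cH, cV, cD):
        codes = local_binary_pattern(band, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        descriptor.extend(hist)                       # concatenated pattern distributions

descriptor = np.asarray(descriptor)                   # would feed the LDA stage
```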

  6. Multiresolution analysis (discrete wavelet transform) through Daubechies family for emotion recognition in speech.

    NASA Astrophysics Data System (ADS)

    Campo, D.; Quintero, O. L.; Bastidas, M.

    2016-04-01

    We propose a study of the mathematical properties of voice as an audio signal. This work includes signals in which the channel conditions are not ideal for emotion recognition. Multiresolution analysis (discrete wavelet transform) was performed using the Daubechies wavelet family (Db1-Haar, Db6, Db8, Db10), allowing the decomposition of the initial audio signal into sets of coefficients from which a set of features was extracted and analyzed statistically in order to differentiate emotional states. ANNs proved to be a system that allows an appropriate classification of such states. This study shows that the features extracted using wavelet decomposition are sufficient to analyze and extract emotional content in audio signals, yielding a high accuracy rate in the classification of emotional states without the need for other kinds of classical frequency-time features. Accordingly, this paper seeks to characterize mathematically the six basic emotions in humans: boredom, disgust, happiness, anxiety, anger and sadness, plus neutrality, for a total of seven states to identify.
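
    A minimal sketch of the feature-extraction stage under stated assumptions (db6 basis, five levels, simple per-band statistics); the ANN classifier used in the paper is omitted.

```python
import numpy as np
import pywt

frame = np.random.default_rng(5).standard_normal(4096)   # placeholder speech frame

features = []
for band in pywt.wavedec(frame, "db6", level=5):          # [cA5, cD5, ..., cD1]
    features += [band.mean(), band.std(), np.abs(band).max(), np.sum(band ** 2)]
features = np.asarray(features)                           # statistical feature vector
```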

  7. Multiscale and multiresolution modeling of shales and their flow and morphological properties.

    PubMed

    Tahmasebi, Pejman; Javadpour, Farzam; Sahimi, Muhammad

    2015-11-12

    The need for more accessible energy resources makes shale formations increasingly important. Characterization of such low-permeability formations is complicated, due to the presence of multiscale features, and defies conventional methods. High-quality 3D imaging may be an ultimate solution for revealing the complexities of such porous media, but acquiring it is costly and time consuming. High-quality 2D images, on the other hand, are widely available. A novel three-step, multiscale, multiresolution reconstruction method is presented that directly uses 2D images in order to develop 3D models of shales. It uses a high-resolution 2D image representing the small-scale features to reproduce the nanopores and their network, and a large-scale, low-resolution 2D image to create the larger-scale characteristics, and generates stochastic realizations of the porous formation. The method is used to develop a model for a shale system for which the full 3D image is available and its properties can be computed. The predictions of the reconstructed models are in excellent agreement with the data. The method is, however, quite general and can be used for reconstructing models of other important heterogeneous materials and media. Two further examples, from biology and materials science, are also reconstructed to demonstrate the generality of the method.

  8. Multi-resolution analysis of high density spatial and temporal cloud inhomogeneity fields from HOPE campaign

    NASA Astrophysics Data System (ADS)

    Lakshmi Madhavan, Bomidi; Deneke, Hartwig; Macke, Andreas

    2015-04-01

    Clouds are among the most complex structures in the Earth's atmosphere at both spatial and temporal scales; they affect the downward surface-reaching fluxes and thus contribute large uncertainty to the global radiation budget. Within the framework of the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)2) Observational Prototype Experiment (HOPE), a high density network of 99 pyranometer stations was set up around Jülich, Germany (~10 × 12 km² area) during April to July 2013 to capture the small-scale variability in cloud-induced radiation fields at the surface. In this study, we perform multi-resolution analysis of the downward solar irradiance variability at the surface from the pyranometer network to investigate the dependence of the variance and spatial correlation on temporal and spatial averaging scales for different cloud regimes. Preliminary results indicate that the correlation is strongly scale-dependent, whereas the variance depends on the length of the averaging period. Our findings will be useful for quantifying the effect of spatial collocation when validating satellite-inferred solar irradiance estimates, and for exploring the link between cloud structure and radiation. We will present the details of our analysis and results.
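
    The scale dependence of variance can be sketched by block-averaging a surface irradiance series over growing windows, as below; the synthetic 1 Hz series stands in for the pyranometer data and is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
irradiance = 500.0 + 0.1 * rng.standard_normal(86_400).cumsum()  # toy 1 Hz series

for window in (1, 10, 60, 600, 3600):                 # averaging period in seconds
    n = (irradiance.size // window) * window
    means = irradiance[:n].reshape(-1, window).mean(axis=1)
    print(f"{window:>5d} s averaging: variance = {means.var():.2f}")
```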

  9. Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators

    NASA Astrophysics Data System (ADS)

    Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo

    2016-07-01

    A multi-resolution multi-sensitivity (MRMS) collimator, offering an adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. We propose in this paper a new idea for MRMS design based, for the first time, on parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator’s inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design can obtain point response characteristics similar to parallel-hole collimators, and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest resolution state provides 6.9 mm full width at half maximum (FWHM) with a nearly minimum sensitivity of about 96.2 cps MBq⁻¹, while the lowest resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq⁻¹. Further comparisons of the states on image quality are conducted through Monte Carlo simulation of a hot-spot phantom which contains five hot spots of varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs with the larger-sensitivity states, while the smaller spots prefer the higher resolution states. In conclusion, the proposed idea can be an effective approach for MRMS design for parallel-hole SPECT collimators.

  10. A decadal observation of vegetation dynamics using multi-resolution satellite images

    NASA Astrophysics Data System (ADS)

    Chiang, Yang-Sheng; Chen, Kun-Shan; Chu, Chang-Jen

    2012-10-01

    Vegetation cover not only affects the habitability of the earth, but also provides a potential terrestrial mechanism for mitigation of greenhouse gases. This study aims at quantifying such green resources by incorporating multi-resolution satellite images from different platforms, including Formosat-2 (RSI), SPOT (HRV/HRG), and Terra (MODIS), to investigate vegetation fractional cover (VFC) and its inter-/intra-annual variation in Taiwan. Given the different sensor capabilities in terms of spatial coverage and resolution, fusion of NDVIs at different scales was used to determine the fraction of vegetation cover based on NDVI. A field campaign was conducted on a monthly basis for 6 years to calibrate the critical NDVI threshold for the presence of vegetation cover, with test sites covering the IPCC-defined land cover types of Taiwan. Based on the proposed method, we analyzed spatio-temporal changes of VFC for the entire Taiwan Island. A bimodal sequence of VFC was observed for the intra-annual variation based on MODIS data, with a level of around 5% and two peaks in spring and autumn marking the principal dual-cropping agriculture pattern in southwestern Taiwan. Compared to this anthropogenic variation, the inter-annual VFC (Aug.-Oct.) derived from HRV/HRG/RSI reveals that the moderate variations (3%) and the oscillations were strongly linked with the regional climate pattern and major disturbances resulting from extreme weather events. Two distinct cycles (2002-2005 and 2005-2009) were identified in the decadal observations, with VFC peaks of 87.60% and 88.12% in 2003 and 2006, respectively. This time-series mapping of VFC can be used to examine vegetation dynamics and their response to short-term and long-term anthropogenic/natural events.

  11. Multi-resolution multi-sensitivity design for parallel-hole SPECT collimators.

    PubMed

    Li, Yanzhao; Xiao, Peng; Zhu, Xiaohua; Xie, Qingguo

    2016-07-21

    A multi-resolution multi-sensitivity (MRMS) collimator, offering an adjustable trade-off between resolution and sensitivity, can make a SPECT system adaptive. We propose in this paper a new idea for MRMS design based, for the first time, on parallel-hole collimators for clinical SPECT. Multiple collimation states with varied resolution/sensitivity trade-offs can be formed by slightly changing the collimator's inner structure. To validate the idea, the GE LEHR collimator is selected as the design prototype and is modeled using a ray-tracing technique. Point images are generated for several states of the design. Results show that the collimation states of the design can obtain point response characteristics similar to parallel-hole collimators, and can be used just like parallel-hole collimators in clinical SPECT imaging. Ray-tracing modeling also shows that the proposed design can offer varied resolution/sensitivity trade-offs: at 100 mm before the collimator, the highest resolution state provides 6.9 mm full width at half maximum (FWHM) with a nearly minimum sensitivity of about 96.2 cps MBq(-1), while the lowest resolution state obtains 10.6 mm FWHM with the highest sensitivity of about 167.6 cps MBq(-1). Further comparisons of the states on image quality are conducted through Monte Carlo simulation of a hot-spot phantom which contains five hot spots of varied sizes. Contrast-to-noise ratios (CNR) of the spots are calculated and compared, showing that different spots can prefer different collimation states: the larger spots obtain better CNRs with the larger-sensitivity states, while the smaller spots prefer the higher resolution states. In conclusion, the proposed idea can be an effective approach for MRMS design for parallel-hole SPECT collimators.

  12. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models

    NASA Astrophysics Data System (ADS)

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-06-01

    Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the

  13. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models.

    PubMed

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-06-01

    Mapping or "delimiting" landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the

  14. Assessment of multiresolution segmentation for delimiting drumlins in digital elevation models

    PubMed Central

    Eisank, Clemens; Smith, Mike; Hillier, John

    2014-01-01

    Mapping or “delimiting” landforms is one of geomorphology's primary tools. Computer-based techniques such as land-surface segmentation allow the emulation of the process of manual landform delineation. Land-surface segmentation exhaustively subdivides a digital elevation model (DEM) into morphometrically-homogeneous irregularly-shaped regions, called terrain segments. Terrain segments can be created from various land-surface parameters (LSP) at multiple scales, and may therefore potentially correspond to the spatial extents of landforms such as drumlins. However, this depends on the segmentation algorithm, the parameterization, and the LSPs. In the present study we assess the widely used multiresolution segmentation (MRS) algorithm for its potential in providing terrain segments which delimit drumlins. Supervised testing was based on five 5-m DEMs that represented a set of 173 synthetic drumlins at random but representative positions in the same landscape. Five LSPs were tested, and four variants were computed for each LSP to assess the impact of median filtering of DEMs, and logarithmic transformation of LSPs. The testing scheme (1) employs MRS to partition each LSP exhaustively into 200 coarser scales of terrain segments by increasing the scale parameter (SP), (2) identifies the spatially best matching terrain segment for each reference drumlin, and (3) computes four segmentation accuracy metrics for quantifying the overall spatial match between drumlin segments and reference drumlins. Results of 100 tests showed that MRS tends to perform best on LSPs that are regionally derived from filtered DEMs, and then log-transformed. MRS delineated 97% of the detected drumlins at SP values between 1 and 50. Drumlin delimitation rates with values up to 50% are in line with the success of manual interpretations. Synthetic DEMs are well-suited for assessing landform quantification methods such as MRS, since subjectivity in the reference data is avoided which increases the

  15. Retrieval of Precipitation Profiles from Multiresolution, Multifrequency, Active and Passive Microwave Observations

    NASA Technical Reports Server (NTRS)

    Grecu, Mircea; Anagnostou, Emmanouil N.; Olson, William S.; Starr, David OC. (Technical Monitor)

    2002-01-01

    In this study, a technique for estimating vertical profiles of precipitation from multifrequency, multiresolution active and passive microwave observations is investigated using both simulated and airborne data. The technique is applicable to the Tropical Rainfall Measuring Mission (TRMM) satellite multi-frequency active and passive observations. These observations are characterized by various spatial and sampling resolutions. This makes the retrieval problem mathematically more difficult and ill-determined, because the quality of information decreases with decreasing resolution. A model that, given reflectivity profiles and a small set of parameters (including the cloud water content, the drop size distribution intercept, and a variable describing the frozen hydrometeor properties), simulates high-resolution brightness temperatures is used. The high-resolution simulated brightness temperatures are convolved at the real sensor resolution. An optimal estimation procedure is used to minimize the differences between simulated and observed brightness temperatures. The retrieval technique is investigated using cloud model synthetic data and airborne data from the Fourth Convection And Moisture Experiment. Simulated high-resolution brightness temperatures and reflectivities and airborne observations are convolved at the resolution of the TRMM instruments, and retrievals are performed and analyzed relative to the reference data used in observation synthesis. An illustration of the possible use of the technique in satellite rainfall estimation is presented through an application to TRMM data. The study suggests improvements in combined active and passive retrievals even when the instruments' resolutions are significantly different. Future work needs to better quantify the retrievals' performance, especially in connection with satellite applications, and the uncertainty of the models used in retrieval.
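
    The optimal-estimation step, adjusting a small parameter vector until simulated brightness temperatures match the observed ones, can be sketched with a least-squares fit; simulate_tb below is a toy, hypothetical stand-in for the radiative transfer model plus sensor-resolution convolution.

```python
import numpy as np
from scipy.optimize import least_squares

def simulate_tb(params):
    """Toy forward model mapping retrieval parameters to brightness temperatures."""
    cloud_water, dsd_intercept = params
    return np.array([250.0 + 20.0 * cloud_water,
                     270.0 - 15.0 * dsd_intercept,
                     260.0 + 5.0 * cloud_water * dsd_intercept])

observed = np.array([262.0, 255.0, 263.0])            # "measured" brightness temps
fit = least_squares(lambda p: simulate_tb(p) - observed, x0=[0.5, 1.0])
print("retrieved parameters:", fit.x)
```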

  16. Quantum simulations of nuclei and nuclear pasta with the multiresolution adaptive numerical environment for scientific simulations

    NASA Astrophysics Data System (ADS)

    Sagert, I.; Fann, G. I.; Fattoyev, F. J.; Postnikov, S.; Horowitz, C. J.

    2016-05-01

    Background: Neutron star and supernova matter at densities just below the nuclear matter saturation density is expected to form a lattice of exotic shapes. These so-called nuclear pasta phases are caused by Coulomb frustration. Their elastic and transport properties are believed to play an important role for thermal and magnetic field evolution, rotation, and oscillation of neutron stars. Furthermore, they can impact neutrino opacities in core-collapse supernovae. Purpose: In this work, we present proof-of-principle three-dimensional (3D) Skyrme Hartree-Fock (SHF) simulations of nuclear pasta with the Multi-resolution ADaptive Numerical Environment for Scientific Simulations (MADNESS). Methods: We perform benchmark studies of ¹⁶O, ²⁰⁸Pb, and ²³⁸U nuclear ground states and calculate binding energies via 3D SHF simulations. Results are compared with experimentally measured binding energies as well as with theoretically predicted values from an established SHF code. The nuclear pasta simulation is initialized in the so-called waffle geometry as obtained by the Indiana University Molecular Dynamics (IUMD) code. The size of the unit cell is 24 fm with an average density of about ρ = 0.05 fm⁻³, a proton fraction of Yp = 0.3, and a temperature of T = 0 MeV. Results: Our calculations reproduce the binding energies and shapes of light and heavy nuclei with different geometries. For the pasta simulation, we find that the final geometry is very similar to the initial waffle state. We compare calculations with and without spin-orbit forces. We find that while subtle differences are present, the pasta phase remains in the waffle geometry. Conclusions: Within the MADNESS framework, we can successfully perform calculations of inhomogeneous nuclear matter. By using pasta configurations from IUMD it is possible to explore different geometries and test the impact of self-consistent calculations on the latter.

  17. W-matrices, nonorthogonal multiresolution analysis, and finite signals of arbitrary length

    SciTech Connect

    Kwong, M.K.; Tang, P.T.P.

    1994-12-31

    Wavelet theory and discrete wavelet transforms have had great impact on the field of signal and image processing. In this paper the authors propose a new class of discrete transforms. It "includes" the classical Haar and Daubechies transforms. These transforms treat the endpoints of a signal in a different manner from that of conventional techniques. This new approach allows the authors to handle signals of any length efficiently; thus, one is not restricted to signal or image sizes that are multiples of a power of 2. Their method does not lengthen the output signal and does not require an additional bookkeeping vector. An exciting result is the uncovering of a new and simple transform that performs very well for compression purposes. It has compact support of length 4, and so does its inverse. The coefficients are symmetrical, and the associated scaling function is fairly smooth. The associated dual wavelet has vanishing moments up to order 2. Numerical results comparing the performance of this transform with that of the Daubechies D4 transform are given. The multiresolution decomposition, however, is not orthogonal. The authors show why this apparent defect is not a real problem in practice. Furthermore, they give a method to compute an orthogonal compensation that yields the best approximation possible within the given scaling space. The transform can be described completely within the context of matrix theory and linear algebra. Thus, even without prior knowledge of wavelet theory, one can easily grasp the concrete algorithm and apply it to specific problems within a very short time, without having to master complex functional analysis. At the end of the paper, the authors make the connection to wavelet theory.

  18. Multiresolution Wavelet Based Adaptive Numerical Dissipation Control for Shock-Turbulence Computations

    NASA Technical Reports Server (NTRS)

    Sjoegreen, B.; Yee, H. C.

    2001-01-01

    The recently developed essentially fourth-order or higher low-dissipative shock-capturing scheme of Yee, Sandham and Djomehri (1999) aimed at minimizing numerical dissipation for high speed compressible viscous flows containing shocks, shears and turbulence. To detect non-smooth behavior and control the amount of numerical dissipation to be added, Yee et al. employed an artificial compression method (ACM) of Harten (1978), but utilized it in an entirely different context than Harten originally intended. The ACM sensor consists of two tuning parameters and is highly dependent on the physical problem. To minimize the tuning of parameters and the problem dependence, new sensors with improved detection properties are proposed. The new sensors are derived from appropriate non-orthogonal wavelet basis functions and can be used to completely switch off the extra numerical dissipation outside shock layers. The non-dissipative spatial base scheme of arbitrarily high order of accuracy can be maintained without compromising its stability in all parts of the domain where the solution is smooth. Two types of redundant non-orthogonal wavelet basis functions are considered. One is the B-spline wavelet (Mallat & Zhong 1992) used by Gerritsen and Olsson (1996) in an adaptive mesh refinement method to determine regions where refinement should be done. The other is a modification of the multiresolution method of Harten (1995), converting it to a new, redundant, non-orthogonal wavelet. The wavelet sensor is then obtained by computing the estimated Lipschitz exponent of a chosen physical quantity (or vector) to be sensed on a chosen wavelet basis function. Both wavelet sensors can be viewed as dual-purpose adaptive methods leading to dynamic numerical dissipation control and improved grid adaptation indicators. Consequently, they are useful not only for shock-turbulence computations but also for computational aeroacoustics and numerical combustion. In addition, these

  19. Multi-resolution Shape Analysis via Non-Euclidean Wavelets: Applications to Mesh Segmentation and Surface Alignment Problems.

    PubMed

    Kim, Won Hwa; Chung, Moo K; Singh, Vikas

    2013-01-01

    The analysis of 3-D shape meshes is a fundamental problem in computer vision, graphics, and medical imaging. Frequently, the needs of the application require that our analysis take a multi-resolution view of the shape's local and global topology, and that the solution is consistent across multiple scales. Unfortunately, the preferred mathematical construct which offers this behavior in classical image/signal processing, Wavelets, is no longer applicable in this general setting (data with non-uniform topology). In particular, the traditional definition does not allow writing out an expansion for graphs that do not correspond to the uniformly sampled lattice (e.g., images). In this paper, we adapt recent results in harmonic analysis, to derive Non-Euclidean Wavelets based algorithms for a range of shape analysis problems in vision and medical imaging. We show how descriptors derived from the dual domain representation offer native multi-resolution behavior for characterizing local/global topology around vertices. With only minor modifications, the framework yields a method for extracting interest/key points from shapes, a surprisingly simple algorithm for 3-D shape segmentation (competitive with state of the art), and a method for surface alignment (without landmarks). We give an extensive set of comparison results on a large shape segmentation benchmark and derive a uniqueness theorem for the surface alignment problem.

  20. MRAG-I2D: Multi-resolution adapted grids for remeshed vortex methods on multicore architectures

    NASA Astrophysics Data System (ADS)

    Rossinelli, Diego; Hejazialhosseini, Babak; van Rees, Wim; Gazzola, Mattia; Bergdorf, Michael; Koumoutsakos, Petros

    2015-05-01

    We present MRAG-I2D, an open source software framework for multiresolution simulations of two-dimensional, incompressible, viscous flows on multicore architectures. The spatiotemporal scales of the flow field are captured by remeshed vortex methods enhanced by high-order average-interpolating wavelets and local time-stepping. The multiresolution solver of the Poisson equation relies on the development of a novel, tree-based multipole method. MRAG-I2D implements a number of HPC strategies to efficiently map the irregular computational workload of wavelet-adapted grids onto multicore nodes. The capabilities of the present software are compared to the current state-of-the-art in terms of accuracy, compression rates and time-to-solution. Benchmarks include the inviscid evolution of an elliptical vortex, flow past an impulsively started cylinder at Re = 40-40 000 and simulations of self-propelled anguilliform swimmers. The results indicate that the present software has the same or better accuracy than state-of-the-art solvers while exhibiting unprecedented performance in terms of time-to-solution.

  1. Flight assessment of a real time multi-resolution image fusion system for use in degraded visual environments

    NASA Astrophysics Data System (ADS)

    Smith, M. I.; Sadler, J. R. E.

    2007-04-01

    Military helicopter operations are often constrained by environmental conditions, including low light levels and poor weather. Recent experience has also shown the difficulty presented by certain terrain when operating at low altitude by day and night, for example, poor pilot cues over featureless terrain with low scene contrast, together with obscuration of vision due to wind-blown and recirculated dust at low level (brown-out). These sorts of conditions can result in loss of spatial awareness and precise control of the aircraft. Atmospheric obscurants such as fog, cloud, rain and snow can similarly lead to hazardous situations and reduced situational awareness. Day Night All Weather (DNAW) systems applied research sponsored by the UK Ministry of Defence (MoD) has developed a multi-resolution real-time Image Fusion system that has been flown as part of a wider flight trials programme investigating increased situational awareness. Dual-band multi-resolution adaptive image fusion was performed in real time using imagery from a Thermal Imager and a Low Light TV, both co-boresighted on a rotary-wing trials aircraft. A number of sorties were flown in a range of climatic and environmental conditions during both day and night. (Neutral density filters were used on the Low Light TV during daytime sorties.) This paper reports on the results of the flight trial evaluation and discusses the benefits offered by the use of Image Fusion in degraded visual environments.
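
    As a concrete illustration of dual-band multi-resolution fusion, the sketch below merges two co-registered single-band images by combining their wavelet decompositions: approximation coefficients are averaged and, at each scale, the larger-magnitude detail coefficient is kept. This is a generic textbook fusion rule using the PyWavelets package, not the DNAW system's adaptive algorithm.

        import numpy as np
        import pywt

        def fuse_bands(img_a, img_b, wavelet="db2", levels=3):
            """Wavelet fusion of two same-size images (e.g., thermal and LLTV)."""
            ca = pywt.wavedec2(img_a, wavelet, level=levels)
            cb = pywt.wavedec2(img_b, wavelet, level=levels)
            fused = [0.5 * (ca[0] + cb[0])]           # average coarse approximations
            for da, db in zip(ca[1:], cb[1:]):        # per-scale detail triplets
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(da, db)))
            return pywt.waverec2(fused, wavelet)

        thermal = np.random.rand(128, 128)            # stand-ins for sensor frames
        lltv = np.random.rand(128, 128)
        print(fuse_bands(thermal, lltv).shape)        # (128, 128)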

  2. Global Multi-Resolution Topography (GMRT) Synthesis - Version 2.0

    NASA Astrophysics Data System (ADS)

    Ferrini, V.; Coplan, J.; Carbotte, S. M.; Ryan, W. B.; O'Hara, S.; Morton, J. J.

    2010-12-01

    The detailed morphology of the global ocean floor is poorly known, with most areas mapped only at low resolution using satellite-based measurements. Ship-based sonars provide data at resolution sufficient to quantify seafloor features related to the active processes of erosion, sediment flow, volcanism, and faulting. To date, these data have been collected in a small fraction of the global ocean (<10%). The Global Multi-Resolution Topography (GMRT) synthesis makes use of sonar data collected by scientists and institutions worldwide, merging them into a single continuously updated compilation of high-resolution seafloor topography. Several applications, including GeoMapApp (http://www.geomapapp.org) and Virtual Ocean (http://www.virtualocean.org), make use of the GMRT Synthesis and provide direct access to images and underlying gridded data. Source multibeam files included in the compilation can also be accessed through custom functionality in GeoMapApp. The GMRT Synthesis began in 1992 as the Ridge Multibeam Synthesis. It was subsequently expanded to include bathymetry data from the Southern Ocean, and now includes data from throughout the global oceans. Our design strategy has been to make data available at the full native resolution of shipboard sonar systems, which historically has been ~100 m in the deep sea (Ryan et al., 2009). A new release of the GMRT Synthesis in Fall of 2010 includes several significant improvements over our initial strategy. In addition to increasing the number of cruises included in the compilation by over 25%, we have developed a new protocol for handling multibeam source data, which has improved the overall quality of the compilation. The new tileset also includes a discrete layer of public-domain sonar data gridded to the full resolution of the sonar system, with data gridded at 25 m in some areas. This discrete layer of sonar data has been provided to Google for integration into Google’s default ocean base map. NOAA

  3. Anatomy assisted PET image reconstruction incorporating multi-resolution joint entropy

    NASA Astrophysics Data System (ADS)

    Tang, Jing; Rahmim, Arman

    2015-01-01

    A promising approach in PET image reconstruction is to incorporate high resolution anatomical information (measured from MR or CT) taking the anato-functional similarity measures such as mutual information or joint entropy (JE) as the prior. These similarity measures only classify voxels based on intensity values, while neglecting structural spatial information. In this work, we developed an anatomy-assisted maximum a posteriori (MAP) reconstruction algorithm wherein the JE measure is supplied by spatial information generated using wavelet multi-resolution analysis. The proposed wavelet-based JE (WJE) MAP algorithm involves calculation of derivatives of the subband JE measures with respect to individual PET image voxel intensities, which we have shown can be computed very similarly to how the inverse wavelet transform is implemented. We performed a simulation study with the BrainWeb phantom creating PET data corresponding to different noise levels. Realistically simulated T1-weighted MR images provided by BrainWeb modeling were applied in the anatomy-assisted reconstruction with the WJE-MAP algorithm and the intensity-only JE-MAP algorithm. Quantitative analysis showed that the WJE-MAP algorithm performed similarly to the JE-MAP algorithm at low noise level in the gray matter (GM) and white matter (WM) regions in terms of noise versus bias tradeoff. When noise increased to medium level in the simulated data, the WJE-MAP algorithm started to surpass the JE-MAP algorithm in the GM region, which is less uniform with smaller isolated structures compared to the WM region. In the high noise level simulation, the WJE-MAP algorithm presented clear improvement over the JE-MAP algorithm in both the GM and WM regions. In addition to the simulation study, we applied the reconstruction algorithms to real patient studies involving DPA-173 PET data and Florbetapir PET data with corresponding T1-MPRAGE MRI images. Compared to the intensity-only JE-MAP algorithm, the WJE
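
    The core similarity measure here is easy to state: the joint entropy of two images (in the WJE variant, evaluated per wavelet subband) computed from their 2-D intensity histogram. A minimal sketch of that measure follows; the bin count is an arbitrary choice and the subband loop of the actual algorithm is omitted.

        import numpy as np

        def joint_entropy(a, b, bins=64):
            """Joint entropy (nats) of two co-registered images or subbands."""
            hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
            p = hist / hist.sum()                     # joint probability estimate
            p = p[p > 0]                              # drop empty bins (0 log 0 = 0)
            return float(-(p * np.log(p)).sum())

        pet = np.random.rand(64, 64)                  # stand-ins for PET / MR data
        mr = 0.5 * pet + 0.5 * np.random.rand(64, 64)
        print(joint_entropy(pet, mr))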

  4. Virtual volume resection using multi-resolution triangular representation of B-spline surfaces.

    PubMed

    Ruskó, László; Mátéka, Ilona; Kriston, András

    2013-08-01

    Computer-assisted analysis of organs has an important role in clinical diagnosis and therapy planning. Visualization as well as manipulation of 3-dimensional (3D) objects are key features of medical image processing tools. The goal of this work was to develop an efficient and easy-to-use tool that allows the physician to partition a segmented organ into its segments or lobes. The proposed tool allows the user to define a cutting surface by drawing traces on 2D sections of a 3D object, cut the object into two pieces with a smooth surface that fits the input traces, and iterate the process until the object is partitioned at the desired level. The tool is based on an algorithm that interpolates the user-defined traces with a B-spline surface and computes a binary cutting volume that represents the different sides of the surface. The computation of the cutting volume is based on a multi-resolution triangulation of the B-spline surface. The proposed algorithm was integrated into an open-source medical image processing framework. Using the tool, the user can select the object to be partitioned (e.g. segmented liver), define the cutting surface based on the corresponding medical image (the medical image visualizing the internal structure of the liver), cut the selected object, and iterate the process. In the case of liver segment separation, the cuts can be performed according to a predefined sequence, which makes it possible to label the temporary as well as the final partitions (lobes, segments) automatically. The presented tool was evaluated for anatomical segment separation of the liver involving 14 cases and for virtual liver tumor resection involving one case. The segment separation was repeated three times by one physician for all cases, and the average and standard deviation of segment volumes were computed. According to the test experience, the presented algorithm proved to be efficient and user-friendly enough to perform free-form cuts for liver
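
    The surface-fitting step can be sketched compactly: fit a smoothing B-spline surface to the user-drawn trace points, evaluate it over the volume grid, and threshold voxel coordinates against the surface to obtain the binary cutting volume. The points and dimensions below are hypothetical, and SciPy's SmoothBivariateSpline stands in for whatever B-spline fit the tool actually uses.

        import numpy as np
        from scipy.interpolate import SmoothBivariateSpline

        # hypothetical trace samples: depth z of the cutting surface at (x, y)
        gx, gy = np.meshgrid(np.linspace(0, 39, 5), np.linspace(0, 49, 5))
        gz = 30.0 + 0.10 * gx - 0.15 * gy             # a gently tilted surface
        spline = SmoothBivariateSpline(gx.ravel(), gy.ravel(), gz.ravel(),
                                       kx=3, ky=3)

        # evaluate the surface over the volume grid, then build the binary
        # cutting volume: True on one side of the surface, False on the other
        surf = spline(np.arange(40), np.arange(50))   # shape (40, 50)
        z = np.arange(60)[None, None, :]
        cut_volume = z < surf[:, :, None]             # boolean (40, 50, 60)
        print(cut_volume.mean())                      # fraction of voxels on one side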

  5. Multiresolution edge detection using enhanced fuzzy c-means clustering for ultrasound image speckle reduction

    SciTech Connect

    Tsantis, Stavros; Spiliopoulos, Stavros; Karnabatidis, Dimitrios; Skouroliakou, Aikaterini; Hazle, John D.; Kagadis, George C. E-mail: George.Kagadis@med.upatras.gr

    2014-07-15

    Purpose: Speckle suppression in ultrasound (US) images of various anatomic structures via a novel speckle noise reduction algorithm. Methods: The proposed algorithm employs an enhanced fuzzy c-means (EFCM) clustering and multiresolution wavelet analysis to distinguish edges from speckle noise in US images. The edge detection procedure involves a coarse-to-fine strategy with spatial and interscale constraints so as to classify the wavelet local maxima distribution at different frequency bands. As an outcome, an edge map across scales is derived, whereas the wavelet coefficients that correspond to speckle are suppressed in the inverse wavelet transform, yielding the denoised US image. Results: A total of 34 thyroid, liver, and breast US examinations were performed on a Logiq 9 US system. Each of these images was subjected to the proposed EFCM algorithm and, for comparison, to commercial speckle reduction imaging (SRI) software and another well-known denoising approach, Pizurica's method. The speckle suppression performance in the selected set of US images was quantified via the Speckle Suppression Index (SSI), with results of 0.61, 0.71, and 0.73 for the EFCM, SRI, and Pizurica methods, respectively. Peak signal-to-noise ratios of 35.12, 33.95, and 29.78 and edge preservation indices of 0.94, 0.93, and 0.86 were found for the EFCM, SRI, and Pizurica methods, respectively, demonstrating that the proposed method achieves superior speckle reduction performance and edge preservation properties. Based on two independent radiologists’ qualitative evaluation, the proposed method significantly improved image characteristics over standard baseline B-mode images and those processed with Pizurica's method. Furthermore, it yielded results similar to those for SRI for breast and thyroid images and significantly better results than SRI for liver imaging, thus improving diagnostic accuracy in both superficial and in-depth structures. Conclusions: A new wavelet
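
    The SSI reported above is commonly defined as the ratio of the coefficient of variation of the denoised image to that of the original, so lower values indicate stronger speckle suppression. A one-function sketch under that textbook definition, which may differ in detail from the paper's evaluation:

        import numpy as np

        def speckle_suppression_index(original, denoised):
            """SSI = (sigma_d / mu_d) / (sigma_o / mu_o); lower is better."""
            cv_o = original.std() / original.mean()   # coefficient of variation
            cv_d = denoised.std() / denoised.mean()
            return cv_d / cv_o

        us = 100.0 + 20.0 * np.random.rand(256, 256)  # stand-in speckled image
        den = 100.0 + 5.0 * np.random.rand(256, 256)  # stand-in denoised image
        print(speckle_suppression_index(us, den))     # < 1: speckle reduced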

  6. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery

    PubMed Central

    Belgiu, Mariana; Drǎguţ, Lucian

    2014-01-01

    Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea

  7. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery

    NASA Astrophysics Data System (ADS)

    Belgiu, Mariana; Drǎguţ, Lucian

    2014-10-01

    Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea

  8. Comparing supervised and unsupervised multiresolution segmentation approaches for extracting buildings from very high resolution imagery.

    PubMed

    Belgiu, Mariana; Drǎguţ, Lucian

    2014-10-01

    Although multiresolution segmentation (MRS) is a powerful technique for dealing with very high resolution imagery, some of the image objects that it generates do not match the geometries of the target objects, which reduces the classification accuracy. MRS can, however, be guided to produce results that approach the desired object geometry using either supervised or unsupervised approaches. Although some studies have suggested that a supervised approach is preferable, there has been no comparative evaluation of these two approaches. Therefore, in this study, we have compared supervised and unsupervised approaches to MRS. One supervised and two unsupervised segmentation methods were tested on three areas using QuickBird and WorldView-2 satellite imagery. The results were assessed using both segmentation evaluation methods and an accuracy assessment of the resulting building classifications. Thus, differences in the geometries of the image objects and in the potential to achieve satisfactory thematic accuracies were evaluated. The two approaches yielded remarkably similar classification results, with overall accuracies ranging from 82% to 86%. The performance of one of the unsupervised methods was unexpectedly similar to that of the supervised method; they identified almost identical scale parameters as being optimal for segmenting buildings, resulting in very similar geometries for the resulting image objects. The second unsupervised method produced very different image objects from the supervised method, but their classification accuracies were still very similar. The latter result was unexpected because, contrary to previously published findings, it suggests a high degree of independence between the segmentation results and classification accuracy. The results of this study have two important implications. The first is that object-based image analysis can be automated without sacrificing classification accuracy, and the second is that the previously accepted idea

  9. Applicability of Multi-Seasonal X-Band SAR Imagery for Multiresolution Segmentation: a Case Study in a Riparian Mixed Forest

    NASA Astrophysics Data System (ADS)

    Dabiri, Z.; Hölbling, D.; Lang, S.; Bartsch, A.

    2015-12-01

    The increasing availability of synthetic aperture radar (SAR) data from a range of different sensors necessitates efficient methods for semi-automated information extraction at multiple spatial scales for different fields of application. The focus of the presented study is two-fold: 1) to evaluate the applicability of multi-temporal TerraSAR-X imagery for multiresolution segmentation, and 2) to identify suitable scale parameters through different weightings of the homogeneity criteria, mainly colour variance. Multiresolution segmentation was used for segmentation of multi-temporal TerraSAR-X imagery, and the ESP (Estimation of Scale Parameter) tool was used to identify suitable scale parameters for image segmentation. The validation of the segmentation results was performed using very high resolution WorldView-2 imagery and a reference map created by an ecological expert. The results of multiresolution segmentation revealed that, in the context of object-based image analysis, TerraSAR-X images are suitable for generating optimal image objects. Furthermore, the ESP tool can be used as an indicator for estimating the scale parameter for multiresolution segmentation of TerraSAR-X imagery. Additionally, for more reliable results, this study suggests that the homogeneity criterion of colour, in a variance-based segmentation algorithm, needs to be set to high values. Setting the shape/colour criteria to 0.005/0.995 or 0.00/1 led to the best results and to the creation of adequate image objects.
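
    The ESP tool's core signal is the rate of change of local variance (ROC-LV) across candidate scale parameters: pronounced peaks suggest scales at which meaningful objects emerge. A minimal sketch of that computation with made-up numbers follows (the real tool derives local variance from the segmentation itself):

        def roc_lv(scales, lv):
            """Percent rate of change of local variance between consecutive scales."""
            return [(scales[i], 100.0 * (lv[i] - lv[i - 1]) / lv[i - 1])
                    for i in range(1, len(lv))]

        scales = [10, 20, 30, 40, 50]                 # candidate scale parameters
        lv = [4.1, 5.0, 5.3, 6.6, 6.8]                # hypothetical mean object variance
        for s, r in roc_lv(scales, lv):
            print(s, round(r, 1))                     # peaks mark candidate scales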

  10. The Multi-Resolution Land Characteristics (MRLC) Consortium - 20 Years of Development and Integration of U.S. National Land Cover Data

    EPA Science Inventory

    The Multi-Resolution Land Characteristics (MRLC) Consortium is a good example of the national benefits of federal collaboration. It started in the mid-1990s as a small group of federal agencies with the straightforward goal of compiling a comprehensive national Landsat dataset t...

  11. The planetary hydraulics analysis based on a multi-resolution stereo DTMs and LISFLOOD-FP model: Case study in Mars

    NASA Astrophysics Data System (ADS)

    Kim, J.; Schumann, G.; Neal, J. C.; Lin, S.

    2013-12-01

    Earth is the only planet possessing an active hydrological system based on H2O circulation. However, after Mariner 9 discovered fluvial channels on Mars with features similar to Earth's, it became clear that some solid planets and satellites once had water flows or pseudo-hydrological systems of other liquids. After liquid water was identified as the agent of ancient martian fluvial activity, the valleys and channels on the martian surface were investigated by a number of remote sensing and in-situ measurements. Among all available data sets, the stereo DTMs and orthoimages from various successful orbital sensors, such as the High Resolution Stereo Camera (HRSC), the Context Camera (CTX), and the High Resolution Imaging Science Experiment (HiRISE), are the most widely used to trace the origin and consequences of martian hydrological channels. However, geomorphological analysis with stereo DTMs and ortho images over fluvial areas has some limitations, so a quantitative modeling method utilizing DTMs of various spatial resolutions is required. Thus, in this study we tested the application of hydraulics analysis with multi-resolution martian DTMs, constructed in line with Kim and Muller's (2009) approach. An advanced LISFLOOD-FP model (Bates et al., 2010), which simulates in-channel dynamic wave behavior by solving 2D shallow water equations without advection, was introduced to conduct high-accuracy simulations together with 150 m to 1.2 m resolution DTMs over test sites including Athabasca and Bahram Valles. For application to the martian surface, the acceleration of gravity in LISFLOOD-FP was reduced to the martian value of 3.71 m s-2, and the Manning's n value (friction), the only free parameter in the model, was adjusted for martian gravity by scaling it. The approach employing multi-resolution stereo DTMs and LISFLOOD-FP was superior compared with other research cases using a single DTM source for hydraulics analysis. HRSC DTMs covering 50-150 m resolutions were used to trace rough
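
    The two martian adjustments mentioned above are compact enough to state directly: gravity is set to 3.71 m s-2 and Manning's n is rescaled for the lower gravity. One common convention, assumed here rather than taken from the paper, keeps the gravity-weighted friction term g*n^2 invariant, giving n_mars = n_earth * sqrt(g_earth / g_mars):

        import math

        G_EARTH, G_MARS = 9.81, 3.71                  # m s^-2

        def mars_manning(n_earth):
            # assumption: preserve g * n^2 so the friction force matches Earth
            return n_earth * math.sqrt(G_EARTH / G_MARS)

        print(round(mars_manning(0.03), 4))           # a typical channel roughness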

  12. Venus - Dynamic Interior, Gravity Field and Topography Analyzed by Multiresolution Methods

    NASA Astrophysics Data System (ADS)

    Pauer, M.

    2003-12-01

    The goal of our effort is to find the interior structure of Venus which best predicts the geoid data. Our models are based on different kinds of topography support. The predicted data are compared with the observed data on the basis of common spectral methods and localization methods. First, we apply the principle of isostasy and look for an average apparent depth of compensation (ADC). For the whole spectrum, dominated by the low degrees, a 165 km depth is found, which might correspond to the bottom of the lithosphere. However, the predicted geoid does not fit the observed data well over the whole spectral interval. Studying the degree-dependent ADC and the admittance function, we obtain a uniform depth of compensation around 35 km for degrees higher than 40. For the geoid at degrees lower than 40 we propose a dynamic origin. This hypothesis is investigated in the framework of internal loading theory. Assuming that the buoyancy force does not vary with depth (which roughly corresponds to a plume-like style of mantle convection), we can explain about 90% of both geoid and topography. The best fit to the data and the observed admittance function is found for a viscosity profile with a ~100 km thick lithosphere and a viscosity increase by a factor of 10-100 through the mantle. Second, we analyze our results by means of multiresolution methods. This technique is generally a useful tool for filtering the full-spectrum signal. In comparison with spherical harmonics, the wavelet basis (or some other suitable function) is well localized (i.e. has non-zero amplitudes only in a vicinity of the point of interest). Using this method we obtain true field anomalies without artificial oscillations. In our study of the geoid and topography of Venus we can also look at localized "qualitative" fields: correlation and admittance. There are two major approaches: a spectral one presented by Simons et al. (1997) and a spatial one presented by Kido et al. (2003). We use the latter one
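
    The admittance function used in such analyses has a simple spectral form: per spherical-harmonic degree, it is the cross-power of geoid and topography divided by the topography power. A sketch for real coefficient arrays indexed as [l, m] (storage conventions vary between libraries; this layout is an assumption):

        import numpy as np

        def degree_admittance(geoid_lm, topo_lm):
            """Z_l = sum_m G_lm T_lm / sum_m T_lm^2 for degrees l >= 1."""
            lmax = geoid_lm.shape[0] - 1
            return np.array([
                np.sum(geoid_lm[l, :l + 1] * topo_lm[l, :l + 1])
                / np.sum(topo_lm[l, :l + 1] ** 2)
                for l in range(1, lmax + 1)])

        rng = np.random.default_rng(1)
        topo = rng.normal(size=(41, 41))              # synthetic coefficients
        geoid = 0.02 * topo + 0.001 * rng.normal(size=(41, 41))
        print(degree_admittance(geoid, topo)[:5])     # ~0.02 at all degrees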

  13. A new multiresolution method applied to the 3D reconstruction of small bodies

    NASA Astrophysics Data System (ADS)

    Capanna, C.; Jorda, L.; Lamy, P. L.; Gesquiere, G.

    2012-12-01

    The knowledge of the three-dimensional (3D) shape of small solar system bodies, such as asteroids and comets, is essential in determining their global physical properties (volume, density, rotational parameters). It also allows performing geomorphological studies of their surface through the characterization of topographic features, such as craters, faults, landslides, grooves, hills, etc. In the case of small bodies, the shape is often only constrained by images obtained by interplanetary spacecraft. Several techniques are available to retrieve 3D global shapes from these images. Stereography, which relies on control points, has been extensively used in the past, most recently to reconstruct the nucleus of comet 9P/Tempel 1 [Thomas (2007)]. The most accurate methods are however photogrammetry and photoclinometry, often used in conjunction with stereography. Stereophotogrammetry (SPG) has been used to reconstruct the shapes of the nucleus of comet 19P/Borrelly [Oberst (2004)] and of the asteroid (21) Lutetia [Preusker (2012)]. Stereophotoclinometry (SPC) has allowed retrieving an accurate shape of the asteroids (25143) Itokawa [Gaskell (2008)] and (2867) Steins [Jorda (2012)]. We present a new photoclinometry method based on the deformation of a 3D triangular mesh [Capanna (2012)] using a multi-resolution scheme which starts from a sphere of 300 facets and yields a shape model with 100,000 facets. Our strategy is inspired by the "Full Multigrid" method [Botsch (2007)] and consists of alternating between two resolutions in order to obtain an optimized shape model at a given resolution before going to the higher resolution. In order to improve the robustness of our method, we use a set of control points obtained by stereography. Our method has been tested on images acquired by the OSIRIS visible camera, aboard the Rosetta spacecraft of the European Space Agency, during the fly-by of asteroid (21) Lutetia in July 2010. We present the corresponding 3D shape

  14. Multi-resolution 3D visualization of the early stages of cellular uptake of peptide-coated nanoparticles

    SciTech Connect

    Welsher, Kevin; Yang, Haw

    2014-02-23

    A detailed understanding of the cellular uptake process is essential to the development of cellular delivery strategies and to the study of viral trafficking. However, visualization of the entire process, encompassing the fast dynamics (local to the freely diffusing nanoparticle) as well as the state of the larger-scale cellular environment, remains challenging. Here, we introduce a three-dimensional multi-resolution method to capture, in real time, the transient events leading to cellular binding and uptake of peptide (HIV1-Tat)-modified nanoparticles. Applying this new method to observe the landing of nanoparticles on the cellular contour in three dimensions revealed long-range deceleration of the delivery particle, possibly due to interactions with cellular receptors. Furthermore, by using the nanoparticle as a nanoscale ‘dynamics pen’, we discovered an unexpected correlation between small membrane terrain structures and local nanoparticle dynamics. This approach could help to reveal the hidden mechanistic steps in a variety of multiscale processes.

  15. Individual refinement of attenuation correction maps for hybrid PET/MR based on multi-resolution regional learning.

    PubMed

    Shi, Kuangyu; Fürst, Sebastian; Sun, Liang; Lukas, Mathias; Navab, Nassir; Förster, Stefan; Ziegler, Sibylle I

    2016-11-19

    PET/MR is an emerging hybrid imaging modality. However, attenuation correction (AC) remains challenging for hybrid PET/MR in generating accurate PET images. Segmentation-based methods on special MR sequences are most widely recommended by vendors. However, their accuracy is usually not high. Individual refinement of available certified attenuation maps may be helpful for further clinical applications. In this study, we proposed a multi-resolution regional learning (MRRL) scheme to utilize the internal consistency of the patient data. The anatomical and AC MR sequences of the same subject were employed to guide the refinement of the provided AC maps. The developed algorithm was tested on 9 patients scanned consecutively with PET/MR and PET/CT (7 [(18)F]FDG and 2 [(18)F]FET). The preliminary results showed that MRRL can improve the accuracy of segmented attenuation maps and consequently the accuracy of PET reconstructions.

  16. A Multi-resolution, Multi-epoch Low Radio Frequency Survey of the Kepler K2 Mission Campaign 1 Field

    NASA Astrophysics Data System (ADS)

    Tingay, S. J.; Hancock, P. J.; Wayth, R. B.; Intema, H.; Jagannathan, P.; Mooley, K.

    2016-10-01

    We present the first dedicated radio continuum survey of a Kepler K2 mission field, Field 1, covering the North Galactic Cap. The survey is wide-field, contemporaneous, multi-epoch, and multi-resolution in nature and was conducted at low radio frequencies between 140 and 200 MHz. The multi-epoch and ultra-wide-field (but relatively low resolution) part of the survey was provided by 15 nights of observation using the Murchison Widefield Array (MWA) over a period of approximately a month, contemporaneous with K2 observations of the field. The multi-resolution aspect of the survey was provided by the low resolution (4′) MWA imaging, complemented by non-contemporaneous but much higher resolution (20″) observations using the Giant Metrewave Radio Telescope (GMRT). The survey is, therefore, sensitive to the details of radio structures across a wide range of angular scales. Consistent with other recent low radio frequency surveys, no significant radio transients or variables were detected in the survey. The resulting source catalogs consist of 1085 and 1468 detections in the two MWA observation bands (centered at 154 and 185 MHz, respectively) and 7445 detections in the GMRT observation band (centered at 148 MHz), over 314 square degrees. The survey is presented as a significant resource for multi-wavelength investigations of the more than 21,000 target objects in the K2 field. We briefly examine our survey data against K2 target lists for dwarf star types (stellar types M and L) that have been known to produce radio flares.

  17. Landscape pattern analysis for assessing ecosystem condition: Development of a multi-resolution method and application to watershed-delineated landscapes in Pennsylvania

    NASA Astrophysics Data System (ADS)

    Johnson, Glen D.

    Protection of ecological resources requires the study and management of whole landscape-level ecosystems. The subsequent need for characterizing landscape structure has led to a variety of measurements for assessing different aspects of spatial patterns; however, most of these measurements are known to depend on both the spatial extent of a specified landscape and the measurement grain; therefore, multi-scale measurements would be more informative. In response, a new method is developed for obtaining a multi-resolution characterization of fragmentation patterns in land cover raster maps within a fixed geographic extent. The concept of conditional entropy is applied to quantify landscape fragmentation as one moves from larger "parent" land cover pixels to smaller "child" pixels that are hierarchically nested within the parent pixels. When applied over a range of resolutions, one obtains a "conditional entropy profile" that can be defined by three parameters. A method for stochastically simulating landscapes is also developed which allows evaluation of the expected behavior of conditional entropy profiles under known landscape generating mechanisms. This modeling approach also allows for determining sample distributions of different landscape measurements via Monte Carlo simulations. Using an eight-category raster map that was based on 30-meter resolution LANDSAT TM images, a suite of landscape measurements was obtained for each of 102 Pennsylvania watersheds (a complete tessellation of the state). This included conditional entropy profiles based on the random filter for degrading raster map resolutions. For these watersheds, the conditional entropy profiles are quite sensitive to changing pattern, and together with the readily-available marginal land cover proportions, appear to be very valuable for categorizing landscapes with respect to common types. These profiles have the further appeal of presenting multi-scale fragmentation patterns in a way that can be easily
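
    The conditional entropy idea translates directly into code: repeatedly halve the raster resolution, assign each 2x2 block of child pixels a parent label (here via the random filter named above, i.e., a randomly chosen child), and record H(child | parent) = H(parent, child) - H(parent) at each step. The sketch below illustrates the profile computation only, not the published estimator:

        import numpy as np

        def conditional_entropy_profile(raster, levels=3, seed=None):
            """H(child | parent) in bits at successive resolution halvings."""
            rng = np.random.default_rng(seed)
            profile, child = [], raster
            for _ in range(levels):
                h, w = (child.shape[0] // 2) * 2, (child.shape[1] // 2) * 2
                blocks = (child[:h, :w].reshape(h // 2, 2, w // 2, 2)
                          .swapaxes(1, 2).reshape(-1, 4))      # 2x2 child blocks
                pick = rng.integers(0, 4, len(blocks))         # random filter
                parent = blocks[np.arange(len(blocks)), pick]
                pairs = np.stack([np.repeat(parent, 4), blocks.ravel()], axis=1)
                _, jc = np.unique(pairs, axis=0, return_counts=True)
                pj = jc / jc.sum()
                _, pc = np.unique(parent, return_counts=True)
                pp = pc / pc.sum()
                h_joint = -(pj * np.log2(pj)).sum()
                h_parent = -(pp * np.log2(pp)).sum()
                profile.append(h_joint - h_parent)             # H(child | parent)
                child = parent.reshape(h // 2, w // 2)
            return profile

        lc = np.random.default_rng(0).integers(0, 8, size=(64, 64))
        print(conditional_entropy_profile(lc, levels=3, seed=0))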

  18. Multi-resolution X-ray CT research applied on geo-materials

    NASA Astrophysics Data System (ADS)

    Cnudde, V.

    2009-04-01

    Many research topics in geology concern the study of internal processes of geo-materials on a pore-scale level in order to estimate their macroscopic behaviour. The microstructure of a porous medium and the physical characteristics of the solids and the fluids that occupy the pore space determine several macroscopic transport properties of the medium. Understanding the relationship between microstructure and transport is therefore of great theoretical and practical interest in many fields of technology. High resolution X-ray CT is becoming a widely used technique to study geo-materials in 3D at a pore-scale level. To be able to distinguish between the different components of a sample on a pore-scale level, it is important to obtain a high resolution, good contrast and a low noise level. The resolution that can be reached depends not only on the sample size and composition, but also on the specifications of the X-ray source and detector used and on the geometry of the system. An estimate of the achievable resolution with a certain setup can be derived by dividing the diameter of the sample by the number of pixel columns in the detector. For higher resolutions, the resolution is mainly limited by the focal spot size of the X-ray tube. Other factors, like sample movement and deformation by thermal or mechanical effects, also have a negative influence on the system's resolution, but they can usually be suppressed by well-considered positioning of the sample and by monitoring its environment. Image contrast is determined by the amount of X-ray absorption in the sample. It depends both on the energy of the X-rays and on the density and atomic number of the components present. Contrast can be improved by carefully selecting the main X-ray energy level, which depends both on the X-ray source and the detector used. In some cases, it can be enhanced by doping the sample with a contrast agent. Both contrast and noise level depend on the detectability of the transmitted X
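
    The first-order resolution estimate stated above is a one-line formula; for example, a 10 mm diameter sample imaged onto 2000 detector pixel columns gives at best about 5 micrometres per pixel (focal spot size permitting):

        def achievable_resolution_mm(sample_diameter_mm, detector_columns):
            # estimate from the text: sample diameter / number of pixel columns
            return sample_diameter_mm / detector_columns

        print(achievable_resolution_mm(10.0, 2000))   # 0.005 mm = 5 micrometres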

  19. Robust Texture Analysis Using Multi-Resolution Gray-Scale Invariant Features for Breast Sonographic Tumor Diagnosis.

    PubMed

    Yang, Min-Chun; Moon, Woo Kyung; Wang, Yu-Chiang Frank; Bae, Min Sun; Huang, Chiun-Sheng; Chen, Jeon-Hor; Chang, Ruey-Feng

    2013-12-01

    Computer-aided diagnosis (CAD) systems for gray-scale breast ultrasound images have the potential to reduce unnecessary biopsy of breast masses. The purpose of our study is to develop a robust CAD system based on texture analysis. First, gray-scale invariant features are extracted from ultrasound images via the multi-resolution ranklet transform. Linear support vector machines (SVMs) are then applied to the resulting gray-level co-occurrence matrix (GLCM)-based texture features to discriminate between benign and malignant masses. To verify the effectiveness and robustness of the proposed texture analysis, breast ultrasound images obtained from three different platforms are evaluated using cross-platform training/testing and leave-one-out cross-validation (LOO-CV) schemes. We compare our proposed features with those extracted by the wavelet transform in terms of receiver operating characteristic (ROC) analysis. The AUC values (area under the ROC curve) for the three databases via the ranklet transform are 0.918 (95% confidence interval [CI], 0.848 to 0.961), 0.943 (95% CI, 0.906 to 0.968), and 0.934 (95% CI, 0.883 to 0.961), respectively, while those via the wavelet transform are 0.847 (95% CI, 0.762 to 0.910), 0.922 (95% CI, 0.878 to 0.958), and 0.867 (95% CI, 0.798 to 0.914), respectively. Experiments with the cross-platform training/testing scheme between each database reveal that the diagnostic performance of our texture analysis using the ranklet transform is less sensitive to the sonographic ultrasound platform. We also adopt several co-occurrence statistics in terms of quantization levels and orientations (i.e., descriptor settings) for computing the co-occurrence matrices, with 0.632+ bootstrap estimators to verify the proposed texture analysis. These experiments suggest that texture analysis using multi-resolution gray-scale invariant features via the ranklet transform is useful for designing a robust CAD system.
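
    The GLCM stage of such a pipeline is straightforward with scikit-image: quantize the (here omitted) ranklet-transformed image, build co-occurrence matrices for chosen distances and orientations, and extract Haralick-style properties to feed the SVM. A sketch with assumed parameter settings:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(img, levels=32, distances=(1,), angles=(0.0, np.pi / 2)):
            """Contrast/homogeneity/energy/correlation from a quantized image."""
            q = np.floor(img.astype(float) / img.max() * (levels - 1)).astype(np.uint8)
            glcm = graycomatrix(q, distances=distances, angles=angles,
                                levels=levels, symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return np.hstack([graycoprops(glcm, p).ravel() for p in props])

        roi = np.random.rand(64, 64)                  # stand-in tumor ROI
        print(glcm_features(roi).shape)               # 4 props x 1 distance x 2 angles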

  20. Analyses to improve operational flexibility

    SciTech Connect

    Trikouros, N.G.

    1986-01-01

    Operational flexibility is greatly enhanced if the technical bases for plant limits and design margins are fully understood, and the analyses necessary to evaluate the effect of plant modifications or changes in operating modes on these parameters can be performed as required. If a condition should arise that might jeopardize a plant limit or reduce operational flexibility, it would be necessary to understand the basis for the limit or the specific condition limiting operational flexibility and be capable of performing a reanalysis to either demonstrate that the limit will not be violated or to change the limit. This paper provides examples of GPU Nuclear efforts in this regard. Examples of Oyster Creek and Three Mile Island operating experiences are discussed.

  1. Genome-wide DNA polymorphism analyses using VariScan

    PubMed Central

    Hutter, Stephan; Vilella, Albert J; Rozas, Julio

    2006-01-01

    Background Analysis of DNA sequence polymorphisms can provide valuable information on the evolutionary forces shaping nucleotide variation, and gives insight into the functional significance of genomic regions. Recent and ongoing genome projects will radically improve our capability to detect specific genomic regions shaped by natural selection. Currently available methods and software, however, are unsatisfactory for such genome-wide analysis. Results We have developed methods for the analysis of DNA sequence polymorphisms at the genome-wide scale. These methods, which have been tested on coalescent-simulated and actual data files from mouse and human, have been implemented in the VariScan software package version 2.0. Additionally, we have incorporated a graphical user interface. The main features of this software are: i) exhaustive population-genetic analyses, including those based on coalescent theory; ii) analysis adapted to the shallow data generated by high-throughput genome projects; iii) use of genome annotations to conduct comprehensive analyses separately for different functional regions; iv) identification of relevant genomic regions by sliding-window and wavelet-multiresolution approaches; v) visualization of the results integrated with current genome annotations in commonly available genome browsers. Conclusion VariScan is a powerful and flexible suite of software for the analysis of DNA polymorphisms. The current version implements new algorithms, methods, and capabilities, providing an important tool for an exhaustive exploratory analysis of genome-wide DNA polymorphism data. PMID:16968531
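
    The sliding-window scan mentioned in feature (iv) reduces to averaging a per-site statistic in windows stepped along the chromosome. A generic sketch, not VariScan's implementation:

        def sliding_windows(positions, values, win, step):
            """Mean of a per-site statistic (e.g., diversity) per window."""
            out, start = [], min(positions)
            end = max(positions)
            while start <= end:
                w = [v for p, v in zip(positions, values) if start <= p < start + win]
                out.append((start, sum(w) / len(w) if w else float("nan")))
                start += step
            return out

        pos = list(range(0, 10000, 37))               # hypothetical SNP positions
        pi = [0.001 + 0.0005 * ((p // 2500) % 2) for p in pos]
        print(sliding_windows(pos, pi, win=2500, step=1250)[:4])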

  2. Classification of glioblastoma and metastasis for neuropathology intraoperative diagnosis: a multi-resolution textural approach to model the background

    NASA Astrophysics Data System (ADS)

    Ahmad Fauzi, Mohammad Faizal; Gokozan, Hamza Numan; Elder, Brad; Puduvalli, Vinay K.; Otero, Jose J.; Gurcan, Metin N.

    2014-03-01

    Brain cancer surgery requires intraoperative consultation by neuropathology to guide surgical decisions regarding the extent to which the tumor undergoes gross total resection. In this context, the differential diagnosis between glioblastoma and metastatic cancer is challenging as the decision must be made during surgery in a short time-frame (typically 30 minutes). We propose a method to classify glioblastoma versus metastatic cancer based on extracting textural features from the non-nuclei region of cytologic preparations. For glioblastoma, these regions of interest are filled with glial processes between the nuclei, which appear as anisotropic thin linear structures. For metastasis, these regions correspond to a more homogeneous appearance, thus suitable texture features can be extracted from these regions to distinguish between the two tissue types. In our work, we use Discrete Wavelet Frames to characterize the underlying texture, owing to their multi-resolution capability. The textural characterization is carried out primarily in the non-nuclei regions, after nuclei regions are segmented by adapting our visually meaningful decomposition segmentation algorithm to this problem. A k-nearest neighbor method was then used to classify the features into the glioblastoma or metastasis class. Experiments on 53 images (29 glioblastomas and 24 metastases) resulted in average accuracies of 89.7% for glioblastoma, 87.5% for metastasis, and 88.7% overall. Further studies are underway to incorporate nuclei-region features into the classification on an expanded dataset, as well as to extend the classification to more types of cancers.

  3. The Multi-Resolution Land Characteristics (MRLC) Consortium: 20 years of development and integration of USA national land cover data

    USGS Publications Warehouse

    Wickham, James D.; Homer, Collin G.; Vogelmann, James E.; McKerrow, Alexa; Mueller, Rick; Herold, Nate; Coulston, John

    2014-01-01

    The Multi-Resolution Land Characteristics (MRLC) Consortium demonstrates the national benefits of USA Federal collaboration. Starting in the mid-1990s as a small group with the straightforward goal of compiling a comprehensive national Landsat dataset that could be used to meet agencies’ needs, MRLC has grown into a group of 10 USA Federal Agencies that coordinate the production of five different products, including the National Land Cover Database (NLCD), the Coastal Change Analysis Program (C-CAP), the Cropland Data Layer (CDL), the Gap Analysis Program (GAP), and the Landscape Fire and Resource Management Planning Tools (LANDFIRE). As a set, the products include almost every aspect of land cover from impervious surface to detailed crop and vegetation types to fire fuel classes. Some products can be used for land cover change assessments because they cover multiple time periods. The MRLC Consortium has become a collaborative forum, where members share research, methodological approaches, and data to produce products using established protocols, and we believe it is a model for the production of integrated land cover products at national to continental scales. We provide a brief overview of each of the main products produced by MRLC and examples of how each product has been used. We follow that with a discussion of the impact of the MRLC program and a brief overview of future plans.

  4. SAPHIR - a multi-scale, multi-resolution modeling environment targeting blood pressure regulation and fluid homeostasis.

    PubMed

    Thomas, S; Abdulhay, Enas; Baconnier, Pierre; Fontecave, Julie; Francoise, Jean-Pierre; Guillaud, Francois; Hannaert, Patrick; Hernandez, Alfredo; Le Rolle, Virginie; Maziere, Pierre; Tahi, Fariza; Zehraoui, Farida

    2007-01-01

    We present progress on a comprehensive, modular, interactive modeling environment centered on overall regulation of blood pressure and body fluid homeostasis. We call the project SAPHIR, for "a Systems Approach for PHysiological Integration of Renal, cardiac, and respiratory functions". The project uses state-of-the-art multi-scale simulation methods. The basic core model will give succinct input-output (reduced-dimension) descriptions of all relevant organ systems and regulatory processes, and it will be modular, multi-resolution, and extensible, in the sense that detailed submodules of any process(es) can be "plugged in" to the basic model in order to explore, e.g., system-level implications of local perturbations. The goal is to keep the basic core model compact enough to ensure fast execution time (in view of eventual use in the clinic) and yet to allow elaborate detailed modules of target tissues or organs in order to focus on the problem area while maintaining the system-level regulatory compensations.

  5. Multi-resolution 3D visualization of the early stages of cellular uptake of peptide-coated nanoparticles

    DOE PAGES

    Welsher, Kevin; Yang, Haw

    2014-02-23

    A detailed understanding of the cellular uptake process is essential to the development of cellular delivery strategies and to the study of viral trafficking. However, visualization of the entire process, encompassing the fast dynamics (local to the freely diffusing nanoparticle) as well as the state of the larger-scale cellular environment, remains challenging. Here, we introduce a three-dimensional multi-resolution method to capture, in real time, the transient events leading to cellular binding and uptake of peptide (HIV1-Tat)-modified nanoparticles. Applying this new method to observe the landing of nanoparticles on the cellular contour in three dimensions revealed long-range deceleration of the delivery particle, possibly due to interactions with cellular receptors. Furthermore, by using the nanoparticle as a nanoscale ‘dynamics pen’, we discovered an unexpected correlation between small membrane terrain structures and local nanoparticle dynamics. This approach could help to reveal the hidden mechanistic steps in a variety of multiscale processes.

  6. Computerized mappings of the cerebral cortex: a multiresolution flattening method and a surface-based coordinate system

    NASA Technical Reports Server (NTRS)

    Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.

    1996-01-01

    We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.

  7. A Multiresolution Hazard Model for Multicenter Survival Studies: Application to Tamoxifen Treatment in Early Stage Breast Cancer

    PubMed Central

    Bouman, Peter; Meng, Xiao-Li; Dignam, James; Dukić, Vanja

    2014-01-01

    In multicenter studies, one often needs to make inference about a population survival curve based on multiple, possibly heterogeneous survival data from individual centers. We investigate a flexible Bayesian method for estimating a population survival curve based on a semiparametric multiresolution hazard model that can incorporate covariates and account for center heterogeneity. The method yields a smooth estimate of the survival curve for “multiple resolutions” or time scales of interest. The Bayesian model used has the capability to accommodate general forms of censoring and a priori smoothness assumptions. We develop a model checking and diagnostic technique based on the posterior predictive distribution and use it to identify departures from the model assumptions. The hazard estimator is used to analyze data from 110 centers that participated in a multicenter randomized clinical trial to evaluate tamoxifen in the treatment of early stage breast cancer. Of particular interest are the estimates of center heterogeneity in the baseline hazard curves and in the treatment effects, after adjustment for a few key clinical covariates. Our analysis suggests that the treatment effect estimates are rather robust, even for a collection of small trial centers, despite variations in center characteristics. PMID:25620824

  8. Evaluation of the Global Multi-Resolution Terrain Elevation Data 2010 (GMTED2010) Using ICESat Geodetic Control

    NASA Technical Reports Server (NTRS)

    Carabajal, Claudia C.; Harding, David J.; Boy, Jean-Paul; Danielson, Jeffrey J.; Gesch, Dean B.; Suchdeo, Vijay P.

    2011-01-01

    Supported by NASA's Earth Surface and Interior (ESI) Program, we are producing a global set of Ground Control Points (GCPs) derived from the Ice, Cloud and land Elevation Satellite (ICESat) altimetry data. From February 2003 to October 2009, ICESat obtained nearly global measurements of land topography (±86° latitude) with unprecedented accuracy, sampling the Earth's surface at discrete, approximately 50 m diameter laser footprints spaced 170 m along the altimetry profiles. We apply stringent editing to select the highest quality elevations, and use these GCPs to characterize and quantify spatially varying elevation biases in Digital Elevation Models (DEMs). In this paper, we present an evaluation of the soon to be released Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). Elevation biases and error statistics have been analyzed as a function of land cover and relief. The GMTED2010 products are a large improvement over previous sources of elevation data at comparable resolutions. RMSEs for all products and terrain conditions are below 7 m and typically are about 4 m. The GMTED2010 products are biased upward with respect to the ICESat GCPs on average by approximately 3 m.
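
    The evaluation boils down to differencing DEM elevations against the ICESat GCP elevations and summarizing bias and RMSE, stratified by land cover or relief class. A generic sketch of that bookkeeping, not the authors' processing chain:

        import numpy as np

        def dem_error_stats(dem_elev, gcp_elev, strata):
            """Per-stratum bias and RMSE of DEM minus GCP elevations."""
            d = np.asarray(dem_elev, float) - np.asarray(gcp_elev, float)
            strata = np.asarray(strata)
            stats = {}
            for cls in np.unique(strata):
                dd = d[strata == cls]
                stats[cls] = {"n": dd.size,
                              "bias": float(dd.mean()),
                              "rmse": float(np.sqrt((dd ** 2).mean()))}
            return stats

        rng = np.random.default_rng(2)
        gcp = rng.uniform(0, 2000, 1000)              # synthetic control elevations
        dem = gcp + 3.0 + rng.normal(0, 4.0, 1000)    # +3 m bias, 4 m noise
        cover = rng.choice(["forest", "bare"], 1000)
        print(dem_error_stats(dem, gcp, cover))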

  9. Evaluation of the Global Multi-Resolution Terrain Elevation Data 2010 (GMTED2010) using ICESat geodetic control

    USGS Publications Warehouse

    Carabajal, C.C.; Harding, D.J.; Boy, J.-P.; Danielson, J.J.; Gesch, D.B.; Suchdeo, V.P.

    2011-01-01

    Supported by NASA's Earth Surface and Interior (ESI) Program, we are producing a global set of Ground Control Points (GCPs) derived from the Ice, Cloud and land Elevation Satellite (ICESat) altimetry data. From February 2003 to October 2009, ICESat obtained nearly global measurements of land topography (±86° latitude) with unprecedented accuracy, sampling the Earth's surface at discrete ~50 m diameter laser footprints spaced 170 m along the altimetry profiles. We apply stringent editing to select the highest quality elevations, and use these GCPs to characterize and quantify spatially varying elevation biases in Digital Elevation Models (DEMs). In this paper, we present an evaluation of the soon to be released Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010). Elevation biases and error statistics have been analyzed as a function of land cover and relief. The GMTED2010 products are a large improvement over previous sources of elevation data at comparable resolutions. RMSEs for all products and terrain conditions are below 7 m and typically are about 4 m. The GMTED2010 products are biased upward with respect to the ICESat GCPs on average by approximately 3 m. © 2011 Society of Photo-Optical Instrumentation Engineers (SPIE).

  10. A GPU-based High-order Multi-resolution Framework for Compressible Flows at All Mach Numbers

    NASA Astrophysics Data System (ADS)

    Forster, Christopher J.; Smith, Marc K.

    2016-11-01

    The Wavelet Adaptive Multiresolution Representation (WAMR) method is a general and robust technique for providing grid adaptivity around the evolution of features in the solutions of partial differential equations and is capable of resolving length scales spanning 6 orders of magnitude. A new flow solver based on the WAMR method and specifically parallelized for the GPU computing architecture has been developed. The compressible formulation of the Navier-Stokes equations is solved using a preconditioned dual-time stepping method that provides accurate solutions for flows at all Mach numbers. The dual-time stepping method allows for control over the residuals of the governing equations and is used to complement the spatial error control provided by the WAMR method. An analytical inverse preconditioning matrix has been derived for an arbitrary number of species that allows preconditioning to be efficiently implemented on the GPU architecture. Additional modifications required for the combination of wavelet-adaptive grids and preconditioned dual-time stepping on the GPU architecture will be discussed. Verification using the Taylor-Green vortex to demonstrate the accuracy of the method will be presented.

  11. Optimally combined confidence limits

    NASA Astrophysics Data System (ADS)

    Janot, P.; Le Diberder, F.

    1998-02-01

    An analytical and optimal procedure to combine statistically independent sets of confidence levels on a quantity is presented. This procedure does not impose any constraint on the methods followed by each analysis to derive its own limit. It incorporates the a priori statistical power of each of the analyses to be combined, in order to optimize the overall sensitivity. It can, in particular, be used to combine the mass limits obtained by several analyses searching for the Higgs boson in different decay channels, with different selection efficiencies, mass resolutions and expected backgrounds. It can also be used to combine the mass limits obtained by several experiments (e.g. ALEPH, DELPHI, L3 and OPAL, at LEP 2) independently of the method followed by each of these experiments to derive their own limit. A method to derive the limit set by one analysis is also presented, along with an unbiased prescription to optimize the expected mass limit under the no-signal hypothesis.
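
    For contrast with the optimal procedure described above, the most familiar way to combine independent confidence levels is Fisher's method, which is simple but ignores the analyses' relative sensitivities; it is shown here only as a baseline, not as the Janot-Le Diberder prescription:

        import math
        from scipy.stats import chi2

        def fisher_combine(p_values):
            """Fisher's method: X = -2 sum ln p_i ~ chi^2 with 2k dof under H0."""
            x = -2.0 * sum(math.log(p) for p in p_values)
            return chi2.sf(x, 2 * len(p_values))      # combined confidence level

        # e.g., three independent search channels
        print(fisher_combine([0.04, 0.10, 0.30]))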

  12. It's About Time: Multi-Resolution Timers for Scalable Performance Debugging

    SciTech Connect

    White III, James B

    2007-01-01

    Traditional performance profiling of highly parallel applications does not always give enough information to diagnose performance bugs, particularly those caused by load imbalances and performance variability, yet the data files for such profiling can grow linearly with parallel task count. In response to these limitations, I have developed application timers designed to limit data and reporting volumes at high task counts without dispersing the signals of load imbalance and performance variability. I will describe use of these timers to diagnose actual performance bugs running the Parallel Ocean Program on a Cray XT4.

  13. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    SciTech Connect

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; Harrison, Robert J.

    2015-02-25

    A fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation is presented in a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied for calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales ranging from short-range valence excitations to long-range Rydberg-type ones are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations, the excited states are correctly bound.

  14. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree-Fock and density functional theory via linear response.

    PubMed

    Yanai, Takeshi; Fann, George I; Beylkin, Gregory; Harrison, Robert J

    2015-12-21

    A fully numerical method for the time-dependent Hartree-Fock and density functional theory (TD-HF/DFT) with the Tamm-Dancoff (TD) approximation is presented in a multiresolution analysis (MRA) approach. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. The integral equation is efficiently and adaptively solved using a numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied for calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales ranging from short-range valence excitations to long-range Rydberg-type ones are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. We introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations, the excited states are correctly bound.
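
    For reference, the ground-state analogue of this reformulation is standard in MRA solvers; in LaTeX (our transcription: the paper applies the same BSH structure to the response functions rather than to a single orbital):

        % An orbital with energy E < 0 solves an integral equation rather
        % than a differential eigenproblem:
        \begin{equation}
          \psi(\mathbf{r}) = -2 \int G_\mu(\mathbf{r},\mathbf{r}')\,
                             V(\mathbf{r}')\,\psi(\mathbf{r}')\,d^3r',
          \qquad
          G_\mu(\mathbf{r},\mathbf{r}') =
            \frac{e^{-\mu\,|\mathbf{r}-\mathbf{r}'|}}{4\pi\,|\mathbf{r}-\mathbf{r}'|},
          \qquad
          \mu = \sqrt{-2E}.
        \end{equation}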

  15. Multi-resolution level sets with shape priors: a validation report for 2D segmentation of prostate gland in T2W MR images.

    PubMed

    Al-Qunaieer, Fares S; Tizhoosh, Hamid R; Rahnamayan, Shahryar

    2014-12-01

    The level set approach to segmentation of medical images has received considerable attention in recent years. Evolving an initial contour until it converges to the anatomical boundaries of an organ or tumor is a very appealing method, especially when it is based on a well-defined mathematical foundation. However, one drawback of such an evolving method is its high computation time. It is desirable to design and implement algorithms that are not only accurate and robust but also fast in execution. Bresson et al. have proposed a variational model using both boundary and region information as well as shape priors. The latter can be a significant factor in medical image analysis. In this work, we combine the variational level-set model with a multi-resolution approach to accelerate the processing. The question is whether a multi-resolution context can make the segmentation faster without affecting the accuracy. We also investigate whether accepting a premature convergence, which is reached in a much shorter time, would reduce accuracy. We examine multiple semiautomated configurations to segment the prostate gland in T2W MR images. Comprehensive experimentation is conducted using a data set of 100 patients (1,235 images) to verify the effectiveness of the multi-resolution level set with shape priors. The results show that the convergence speed can be increased by a factor of ≈ 2.5 without affecting the segmentation accuracy. Furthermore, a premature convergence approach drastically increases the segmentation speed by a factor of ≈ 17.9.
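
    A minimal sketch of the coarse-to-fine driver being evaluated, assuming a deliberately simplified curvature-free region update in place of the Bresson et al. model (all names are ours):

        import numpy as np

        def evolve(img, phi, iters=200, dt=0.4):
            # Simplified Chan-Vese-style update: push phi toward the region
            # whose mean intensity matches each pixel better (no curvature term).
            for _ in range(iters):
                inside, outside = phi > 0, phi <= 0
                c1 = img[inside].mean() if inside.any() else 0.0
                c2 = img[outside].mean() if outside.any() else 0.0
                phi += dt * ((img - c2) ** 2 - (img - c1) ** 2)
                np.clip(phi, -5, 5, out=phi)
            return phi

        def multires_segment(img, levels=3):
            # Solve on a coarse image first, then upsample phi as the warm
            # start for the next finer level -- the speed-up mechanism above.
            for lv in reversed(range(levels)):
                step = 2 ** lv
                small = img[::step, ::step]
                if lv == levels - 1:
                    phi = np.ones_like(small)
                    phi[: small.shape[0] // 2] = -1.0
                else:
                    phi = np.kron(phi, np.ones((2, 2)))[: small.shape[0], : small.shape[1]]
                phi = evolve(small, phi)
            return phi > 0

        img = np.zeros((64, 64)); img[20:44, 16:48] = 1.0
        img += np.random.default_rng(1).normal(0, 0.1, img.shape)
        print(multires_segment(img).sum(), "pixels inside")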

  16. Multiresolution quantum chemistry in multiwavelet bases: excited states from time-dependent Hartree–Fock and density functional theory via linear response

    DOE PAGES

    Yanai, Takeshi; Fann, George I.; Beylkin, Gregory; ...

    2015-02-25

    Using the fully numerical method for time-dependent Hartree–Fock and density functional theory (TD-HF/DFT) with the Tamm–Dancoff (TD) approximation we use a multiresolution analysis (MRA) approach to present our findings. From a reformulation with effective use of the density matrix operator, we obtain a general form of the HF/DFT linear response equation in the first quantization formalism. It can be readily rewritten as an integral equation with the bound-state Helmholtz (BSH) kernel for the Green's function. The MRA implementation of the resultant equation permits excited state calculations without virtual orbitals. Moreover, the integral equation is efficiently and adaptively solved using amore » numerical multiresolution solver with multiwavelet bases. Our implementation of the TD-HF/DFT methods is applied for calculating the excitation energies of H2, Be, N2, H2O, and C2H4 molecules. The numerical errors of the calculated excitation energies converge in proportion to the residuals of the equation in the molecular orbitals and response functions. The energies of the excited states at a variety of length scales ranging from short-range valence excitations to long-range Rydberg-type ones are consistently accurate. It is shown that the multiresolution calculations yield the correct exponential asymptotic tails for the response functions, whereas those computed with Gaussian basis functions are too diffuse or decay too rapidly. Finally, we introduce a simple asymptotic correction to the local spin-density approximation (LSDA) so that in the TDDFT calculations, the excited states are correctly bound.« less

  17. Multiresolution constrained least-squares algorithm for direct estimation of time activity curves from dynamic ECT projection data

    NASA Astrophysics Data System (ADS)

    Maltz, Jonathan S.

    2000-06-01

    We present an algorithm which is able to reconstruct dynamic emission computed tomography (ECT) image series directly from inconsistent projection data that have been obtained using a rotating camera. By finding a reduced dimension time-activity curve (TAC) basis with which all physiologically feasible TACs in an image may be accurately approximated, we are able to recast this large non-linear problem as one of constrained linear least squares (CLLSQ) and to reduce parameter vector dimension by a factor of 20. Implicit is the assumption that each pixel may be modeled using a single compartment model, as is typical in 99mTc teboroxime wash-in wash-out studies, and that the blood input function is known. A disadvantage of the change of basis is that TAC non-negativity is no longer ensured. As a consequence, non-negativity constraints must appear in the CLLSQ formulation. A warm-start multiresolution approach is proposed, whereby the problem is initially solved at a resolution below that finally desired. At the next iteration, the number of reconstructed pixels is increased and the solution of the lower resolution problem is then used to warm-start the estimation of the higher resolution kinetic parameters. We demonstrate the algorithm by applying it to dynamic myocardial slice phantom projection data at resolutions of 16 × 16 and 32 × 32 pixels. We find that the warm-start method employed leads to computational savings of between 2 and 4 times when compared to cold-start execution times. A 20% RMS error in the reconstructed TACs is achieved for a total number of detected sinogram counts of 1 × 10^5 for the 16 × 16 problem and at 1 × 10^6 counts for the 32 × 32 grid. These errors are 1.5-2 times greater than those obtained in conventional (consistent projection) SPECT imaging at similar count levels.
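
    The warm-start mechanic on a generic non-negativity-constrained least-squares problem (the paper warm-starts across pixel grids; this projected-gradient sketch simply reuses a cheap solution as the starting point for further refinement):

        import numpy as np

        def nnls_pg(A, y, x0=None, iters=500):
            """Non-negative least squares by projected gradient, with warm start."""
            x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L for the LSQ gradient
            for _ in range(iters):
                x = np.maximum(0.0, x - step * (A.T @ (A @ x - y)))
            return x

        rng = np.random.default_rng(2)
        A = rng.random((200, 50))
        x_true = np.maximum(0.0, rng.normal(0, 1, 50))
        y = A @ x_true + rng.normal(0, 0.01, 200)

        x_coarse = nnls_pg(A, y, iters=100)             # cheap low-effort pass
        x_fine = nnls_pg(A, y, x0=x_coarse, iters=100)  # warm start refines it
        print(np.linalg.norm(x_fine - x_true))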

  18. Anisotropic multi-resolution analysis in 2D, application to long-range correlations in cloud mm-radar fields

    SciTech Connect

    Davis, A.B.; Clothiaux, E.

    1999-03-01

    Because of Earth's gravitational field, its atmosphere is strongly anisotropic with respect to the vertical; the effect of the Earth's rotation on synoptic wind patterns also causes a more subtle form of anisotropy in the horizontal plane. The authors survey various approaches to statistically robust anisotropy from a wavelet perspective and present a new one adapted to strongly non-isotropic fields that are sampled on a rectangular grid with a large aspect ratio. This novel technique uses an anisotropic version of Multi-Resolution Analysis (MRA) in image analysis; the authors form a tensor product of the standard dyadic Haar basis, where the dividing ratio is λ_z = 2, and a nonstandard triadic counterpart, where the dividing ratio is λ_x = 3. The natural support of the field is therefore 2^n pixels (vertically) by 3^n pixels (horizontally), where n is the number of levels in the MRA. The natural triadic basis includes the French top-hat wavelet, which resonates with bumps in the field, whereas the Haar wavelet responds to ramps or steps. The complete 2D basis has one scaling function and five wavelets. The resulting anisotropic MRA is designed for application to the liquid water content (LWC) field in boundary-layer clouds, as the prevailing wind advects them by a vertically pointing mm-radar system. Spatial correlations are notoriously long-range in cloud structure and the authors use the wavelet coefficients from the new MRA to characterize these correlations in a multifractal analysis scheme. In the present study, the MRA is used (in synthesis mode) to generate fields that mimic cloud structure quite realistically, although only a few parameters are used to control the randomness of the LWC's wavelet coefficients.
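
    One analysis level of such a 2 (vertical, dyadic) × 3 (horizontal, triadic) scheme, sketched with two of the five wavelets (the Haar difference vertically, the top-hat horizontally); a minimal transcription under our own naming, not the authors' code:

        import numpy as np

        def aniso_level(f):
            """One 2x3 anisotropic analysis step on an array of shape (2m, 3k)."""
            # Pair rows (dyadic): Haar average and difference.
            avg_v = (f[0::2] + f[1::2]) / 2.0
            det_v = (f[0::2] - f[1::2]) / 2.0   # responds to vertical ramps/steps
            # Group columns in triples (triadic) on the vertically smoothed image.
            g = avg_v.reshape(avg_v.shape[0], -1, 3)
            smooth = g.mean(axis=2)
            # French top-hat: resonates with bumps (center vs. flanks).
            tophat = (2 * g[:, :, 1] - g[:, :, 0] - g[:, :, 2]) / 3.0
            return smooth, det_v, tophat

        f = np.random.default_rng(3).random((2 ** 3, 3 ** 3))  # 8 x 27 grid
        s, dv, th = aniso_level(f)
        print(s.shape, dv.shape, th.shape)  # (4, 9) (4, 27) (4, 9)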

  19. Detecting hidden spatial and spatio-temporal structures in glasses and complex physical systems by multiresolution network clustering.

    PubMed

    Ronhovde, P; Chakrabarty, S; Hu, D; Sahu, M; Sahu, K K; Kelton, K F; Mauro, N A; Nussinov, Z

    2011-09-01

    We elaborate on a general method that we recently introduced for characterizing the "natural" structures in complex physical systems via multi-scale network analysis. The method is based on "community detection" wherein interacting particles are partitioned into an "ideal gas" of optimally decoupled groups of particles. Specifically, we construct a set of network representations ("replicas") of the physical system based on interatomic potentials and apply a multiscale clustering ("multiresolution community detection") analysis using information-based correlations among the replicas. Replicas may i) be different representations of an identical static system, ii) embody dynamics by considering replicas to be time separated snapshots of the system (with a tunable time separation), or iii) encode general correlations when different replicas correspond to different representations of the entire history of the system as it evolves in space-time. Inputs for our method are the inter-particle potentials or experimentally measured two (or higher order) particle correlations. We apply our method to computer simulations of a binary Kob-Andersen Lennard-Jones system in a mixture ratio of A(80)B(20) , a ternary model system with components "A", "B", and "C" in ratios of A(88)B(7)C(5) (as in Al(88)Y(7)Fe(5) , and to atomic coordinates in a Zr(80)Pt(20) system as gleaned by reverse Monte Carlo analysis of experimentally determined structure factors. We identify the dominant structures (disjoint or overlapping) and general length scales by analyzing extrema of the information theory measures. We speculate on possible links between i) physical transitions or crossovers and ii) changes in structures found by this method as well as phase transitions associated with the computational complexity of the community detection problem. We also briefly consider continuum approaches and discuss rigidity and the shear penetration depth in amorphous systems; this latter length scale increases as

  20. Hierarchical progressive surveys. Multi-resolution HEALPix data structures for astronomical images, catalogues, and 3-dimensional data cubes

    NASA Astrophysics Data System (ADS)

    Fernique, P.; Allen, M. G.; Boch, T.; Oberto, A.; Pineau, F.-X.; Durand, D.; Bot, C.; Cambrésy, L.; Derriere, S.; Genova, F.; Bonnarel, F.

    2015-06-01

    Context. Scientific exploitation of the ever increasing volumes of astronomical data requires efficient and practical methods for data access, visualisation, and analysis. Hierarchical sky tessellation techniques enable a multi-resolution approach to organising data on angular scales from the full sky down to the individual image pixels. Aims: We aim to show that the hierarchical progressive survey (HiPS) scheme for describing astronomical images, source catalogues, and three-dimensional data cubes is a practical solution to managing large volumes of heterogeneous data and that it enables a new level of scientific interoperability across large collections of data of these different data types. Methods: HiPS uses the HEALPix tessellation of the sphere to define a hierarchical tile and pixel structure to describe and organise astronomical data. HiPS is designed to conserve the scientific properties of the data alongside both visualisation considerations and emphasis on the ease of implementation. We describe the development of HiPS to manage a large number of diverse image surveys, as well as the extension of hierarchical image systems to cube and catalogue data. We demonstrate the interoperability of HiPS and multi-order coverage (MOC) maps and highlight the HiPS mechanism to provide links to the original data. Results: Hierarchical progressive surveys have been generated by various data centres and groups for ~200 data collections including many wide area sky surveys, and archives of pointed observations. These can be accessed and visualised in Aladin, Aladin Lite, and other applications. HiPS provides a basis for further innovations in the use of hierarchical data structures to facilitate the description and statistical analysis of large astronomical data sets.
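
    The tile hierarchy HiPS builds on can be probed with healpy, one widely used HEALPix implementation; in the NESTED scheme each order-k tile contains exactly four order-(k+1) tiles (the coordinates below are illustrative):

        import numpy as np
        import healpy as hp

        # Tile index of one sky position at successive HEALPix orders.
        ra, dec = 10.68, 41.27          # degrees (roughly M31, for illustration)
        theta = np.radians(90.0 - dec)  # colatitude
        phi = np.radians(ra)
        for order in range(4):
            nside = 2 ** order
            pix = hp.ang2pix(nside, theta, phi, nest=True)
            print(f"order {order}: nside={nside:2d} tile={pix}")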

  1. Investigations of homologous disaccharides by elastic incoherent neutron scattering and wavelet multiresolution analysis

    NASA Astrophysics Data System (ADS)

    Magazù, S.; Migliardo, F.; Vertessy, B. G.; Caccamo, M. T.

    2013-10-01

    In the present paper the results of a wavevector and thermal analysis of Elastic Incoherent Neutron Scattering (EINS) data collected on water mixtures of three homologous disaccharides through a wavelet approach are reported. The wavelet analysis allows to compare both the spatial properties of the three systems in the wavevector range of Q = 0.27 Å-1 ÷ 4.27 Å-1. It emerges that, differently from previous analyses, for trehalose the scalograms are constantly lower and sharper in respect to maltose and sucrose, giving rise to a global spectral density along the wavevector range markedly less extended. As far as the thermal analysis is concerned, the global scattered intensity profiles suggest a higher thermal restrain of trehalose in respect to the other two homologous disaccharides.

  2. Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic

    USGS Publications Warehouse

    Chavez, P.S.; Sides, S.C.; Anderson, J.A.

    1991-01-01

    The merging of multisensor image data is becoming a widely used procedure because of the complementary nature of various data sets. Ideally, the method used to merge data sets with high-spatial and high-spectral resolution should not distort the spectral characteristics of the high-spectral resolution data. This paper compares the results of three different methods used to merge the information contents of the Landsat Thematic Mapper (TM) and Satellite Pour l'Observation de la Terre (SPOT) panchromatic data. The comparison is based on spectral characteristics and is made using statistical, visual, and graphical analyses of the results. The three methods used to merge the information contents of the Landsat TM and SPOT panchromatic data were the Hue-Intensity-Saturation (HIS), Principal Component Analysis (PCA), and High-Pass Filter (HPF) procedures. The HIS method distorted the spectral characteristics of the data the most. The HPF method distorted the spectral characteristics the least; the distortions were minimal and difficult to detect. -Authors
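
    A sketch of the HPF idea with numpy/scipy (our simplification: a plain box filter for the low-pass; the study's exact kernel and weights may differ). Low frequencies stay purely multispectral, which is why this method distorts the spectral content least:

        import numpy as np
        from scipy.ndimage import uniform_filter, zoom

        def hpf_merge(ms_band, pan, ratio):
            """High-Pass Filter merge of one MS band with a sharper Pan image."""
            ms_up = zoom(ms_band, ratio, order=1)          # resample MS to Pan grid
            pan_low = uniform_filter(pan, size=2 * ratio + 1)
            return ms_up + (pan - pan_low)                 # add only high frequencies

        rng = np.random.default_rng(4)
        pan = rng.random((120, 120))
        ms = pan[::3, ::3] * 0.8 + 0.1    # toy 3x-coarser "spectral" band
        print(hpf_merge(ms, pan, 3).shape)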

  3. VizieR Online Data Catalog: Multi-resolution images of M33 (Boquien+, 2015)

    NASA Astrophysics Data System (ADS)

    Boquien, M.; Calzetti, D.; Aalto, S.; Boselli, A.; Braine, J.; Buat, V.; Combes, F.; Israel, F.; Kramer, C.; Lord, S.; Relano, M.; Rosolowsky, E.; Stacey, G.; Tabatabaei, F.; van der Tak, F.; van der Werf, P.; Verley, S.; Xilouris, M.

    2015-02-01

    The FITS file contains maps of the flux in star formation tracing bands, maps of the SFR, maps of the attenuation in star formation tracing bands, and a map of the stellar mass of M33, each from a resolution of 8"/pixel to 512"/pixel. The FUV GALEX data from NGS were obtained directly from the GALEX website through GALEXVIEW. The observation was carried out on 25 November 2003 for a total exposure time of 3334 s. Hα+[NII] observations were carried out in November 1995 on the Burrell Schmidt telescope at Kitt Peak National Observatory. The observations and the data processing are analysed in detail in Hoopes & Walterbos (2000ApJ...541..597H). The Spitzer IRAC 8 µm image, sensitive to the emission of Polycyclic Aromatic Hydrocarbons (PAH), and the MIPS 24 µm image, sensitive to the emission of Very Small Grains (VSG), were obtained from the NASA Extragalactic Database and have been analysed by Hinz et al. (2004ApJS..154..259H) and Verley et al. (2007A&A...476.1161V, Cat. J/A+A/476/1161). The PACS data at 70 µm and 100 µm, which are sensitive to the warm dust heated by massive stars, come from two different programmes. The 100 µm image was obtained in the context of the Herschel HerM33es open time key project (Kramer et al., 2010A&A...518L..67K, observation ID 1342189079 and 1342189080). The observation was carried out in parallel mode on 7 January 2010 for a duration of 6.3 h. It consisted of 2 orthogonal scans at a speed of 20"/s, with a leg length of 7'. The 70 µm image was obtained as a follow-up open time cycle 2 programme (OT2mboquien4, observation ID 1342247408 and 1342247409). M33 was scanned on 25 June 2012 at a speed of 20"/s in 2 orthogonal directions over 50' with 5 repetitions of this scheme in order to match the depth of the 100 µm image. The total duration of the observation was 9.9 h. The cube file, cube.fits, contains 16 extensions: FUV, HALPHA, 8, 24, 70, 100, SFR_FUV, SFR_HALPHA, SFR_24, SFR_70, SFR_100, SFRFUV24, SFRHALPHA24, A_FUV, A
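
    Reading the cube with astropy (the file name and the extension name come from the record's own list; the path is whatever the catalogue download produced):

        from astropy.io import fits

        # List the extensions of the multi-resolution cube and pull out one map.
        with fits.open("cube.fits") as hdul:
            hdul.info()                      # prints every extension name/shape
            sfr_fuv = hdul["SFR_FUV"].data   # SFR map derived from FUV
            print(sfr_fuv.shape)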

  4. Land cover characterization and mapping of continental southeast Asia using multi-resolution satellite sensor data

    USGS Publications Warehouse

    Giri, Chandra; Defourny, Pierre; Shrestha, Surendra

    2003-01-01

    Land use/land cover change, particularly that of tropical deforestation and forest degradation, has been occurring at an unprecedented rate and scale in Southeast Asia. The rapid rate of economic development, demographics and poverty are believed to be the underlying forces responsible for the change. Accurate and up-to-date information to support the above statement is, however, not available. The available data, if any, are outdated and are not comparable for various technical reasons. Time series analysis of land cover change and the identification of the driving forces responsible for these changes are needed for the sustainable management of natural resources and also for projecting future land cover trajectories. We analysed the multi-temporal and multi-seasonal NOAA Advanced Very High Resolution Radiometer (AVHRR) satellite data of 1985/86 and 1992 to (1) prepare historical land cover maps and (2) identify areas undergoing major land cover transformations (called ‘hot spots’). The identified ‘hot spot’ areas were investigated in detail using high-resolution satellite sensor data such as Landsat and SPOT supplemented by intensive field surveys. Shifting cultivation, intensification of agricultural activities and change of cropping patterns, and conversion of forest to agricultural land were found to be the principal reasons for land use/land cover change in the Oudomxay province of Lao PDR, the Mekong Delta of Vietnam and the Loei province of Thailand, respectively. Moreover, typical land use/land cover change patterns of the ‘hot spot’ areas were also examined. In addition, we developed an operational methodology for land use/land cover change analysis at the national level with the help of national remote sensing institutions.

  5. Deconstructing a Polygenetic Landscape Using LiDAR and Multi-Resolution Analysis

    NASA Astrophysics Data System (ADS)

    Houser, C.; Barrineau, C. P.; Dobreva, I. D.; Bishop, M. P.

    2015-12-01

    In many earth-surface systems, characteristic morphologies are associated with various process regimes, both past and present. Aeolian systems contain a variety of features differentiated largely by morphometric differences, which in turn reflect age and divergent process regimes. Using quantitative analysis of high-resolution elevation data to generate detailed information regarding these characteristic morphometries enables geomorphologists to effectively map process regimes from a distance. Combined with satellite imagery and other types of remotely sensed data, the outputs can even help to delineate phases of activity within aeolian systems. The differentiation of regimes and identification of relict features together add rigor to analyses leading up to field-based investigations, which are highly dependent on site-specific historical contexts that often obscure distinctions between separate process-form regimes. We present results from a Principal Components Analysis (PCA) performed on a LiDAR-derived elevation model of a largely stabilized aeolian system in South Texas. The resulting components are layered and classified to generate a map of aeolian morphometric signatures for a portion of the landscape. Several of these areas do not immediately appear to be aeolian in nature in satellite imagery or LiDAR-derived models, yet field observations and historical imagery reveal the PCA did in fact identify stabilized and relict dune features. This methodology enables researchers to generate a morphometric classification of the land surface. We believe this method is a valuable and innovative tool for researchers identifying process regimes within a study area, particularly in field-based investigations that rely heavily on site-specific context.
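
    A compact sketch of the PCA step, assuming slope magnitude at several smoothing scales as the input features (the study's actual feature set is richer; all names here are ours):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def terrain_pca(dem, sigmas=(1, 4, 16)):
            """PCA over multi-scale terrain derivatives of a gridded DEM.

            Returns one principal-component image per input scale; classifying
            these layers is what can flag distinct morphometric signatures.
            """
            feats = []
            for s in sigmas:
                gy, gx = np.gradient(gaussian_filter(dem, s))
                feats.append(np.hypot(gx, gy).ravel())
            X = np.column_stack(feats)
            X -= X.mean(axis=0)
            _, _, vt = np.linalg.svd(X, full_matrices=False)
            scores = X @ vt.T                      # component scores per pixel
            return scores.reshape(dem.shape + (len(sigmas),))

        dem = np.cumsum(np.random.default_rng(5).normal(size=(64, 64)), axis=1)
        print(terrain_pca(dem).shape)  # (64, 64, 3): three component layers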

  6. Multi-resolution Analysis of the slip history of 1999 Chi-Chi, Taiwan earthquake

    NASA Astrophysics Data System (ADS)

    Ji, C.; Helmberger, D. V.

    2001-05-01

    Studies of large earthquakes have revealed strong heterogeneity in faulting slip distributions at mid-crustal depths. These results are inferred from modeling local GPS and strong motion records but are usually limited by the lack of data density. Here we report on the fault complexity of the large (magnitude 7.6) Chi-Chi earthquake obtained by inverting densely and well distributed static measurements consisting of 119 GPS and 23 doubly integrated strong motion records, which is the best static data set yet recorded for a large earthquake. We show that the slip of the Chi-Chi earthquake was concentrated on the surface of a "wedge shaped" block. Furthermore, similar to our previous study of the 1999 Hector Mine earthquake (Ji et al., 2001), the static data, teleseismic body waves and local strong motion data are used to constrain the rupture process. A simulated annealing method combined with a wavelet transform approach is employed to solve for the slip histories on subfault elements with variable sizes. The sizes are adjusted iteratively based on data type and distribution to produce an optimal balance between resolution and reliability. Results indicate strong local variations in rupture characteristics, with relatively rapid changes in the middle and southern portions producing relatively strong accelerations.

  7. Multispectral image sharpening using a shift-invariant wavelet transform and adaptive processing of multiresolution edges

    USGS Publications Warehouse

    Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.

    2002-01-01

    Enhanced false color images from mid-IR, near-IR (NIR), and visible bands of the Landsat thematic mapper (TM) are commonly used for visually interpreting land cover type. Described here is a technique for sharpening or fusion of NIR with higher resolution panchromatic (Pan) that uses a shift-invariant implementation of the discrete wavelet transform (SIDWT) and a reported pixel-based selection rule to combine coefficients. There can be contrast reversals (e.g., at soil-vegetation boundaries between NIR and visible band images) and consequently degraded sharpening and edge artifacts. To improve performance for these conditions, I used a local area-based correlation technique originally reported for comparing image-pyramid-derived edges for the adaptive processing of wavelet-derived edge data. Also, using the redundant data of the SIDWT improves edge data generation. There is additional improvement because sharpened subband imagery is used with the edge-correlation process. A reported technique for sharpening three-band spectral imagery used forward and inverse intensity, hue, and saturation transforms and wavelet-based sharpening of intensity. This technique had limitations with opposite contrast data, and in this study sharpening was applied to single-band multispectral-Pan image pairs. Sharpening used simulated 30-m NIR imagery produced by degrading the spatial resolution of a higher resolution reference. Performance, evaluated by comparison between sharpened and reference image, was improved when sharpened subband data were used with the edge correlation.
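
    An undecimated-wavelet fusion sketch with PyWavelets, using a plain max-magnitude selection rule in place of the paper's adaptive edge-correlation rule (image sides must be even for a one-level SWT):

        import numpy as np
        import pywt  # PyWavelets; swt2/iswt2 give the shift-invariant DWT

        def swt_fuse(ms_up, pan, wavelet="haar"):
            """Fuse an (upsampled) NIR band with Pan in the undecimated domain.

            Keeps the NIR approximation (spectral content) and, per pixel, the
            detail coefficient of larger magnitude from either image.
            """
            (a1, d1), = pywt.swt2(ms_up, wavelet, level=1)
            (a2, d2), = pywt.swt2(pan, wavelet, level=1)
            fused_d = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                            for x, y in zip(d1, d2))
            return pywt.iswt2([(a1, fused_d)], wavelet)

        rng = np.random.default_rng(6)
        pan = rng.random((64, 64))
        nir = 1.0 - pan + rng.normal(0, 0.05, pan.shape)  # contrast-reversed toy band
        print(swt_fuse(nir, pan).shape)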

  8. Creation of a Multiresolution and Multiaccuracy Dtm: Problems and Solutions for Heli-Dem Case Study

    NASA Astrophysics Data System (ADS)

    Biagi, L.; Carcano, L.; Lucchese, A.; Negretti, M.

    2013-01-01

    The work is part of the "HELI-DEM" (HELvetia-Italy Digital Elevation Model) project, funded by the European Regional Development Fund within the Italy-Switzerland cooperation program. The aim of the project is the creation of a unique DTM for the alpine and subalpine area between Italy (Piedmont, Lombardy) and Switzerland (Ticino and Grisons Cantons); at present, different DTMs, which are in different reference frames and have been obtained with different technologies, accuracies, and resolutions, have been acquired. The final DTM should be correctly georeferenced and produced by validating and integrating the data that are available for the project. DTMs are fundamental in hydrogeological studies, especially in alpine areas where hydrogeological risks may exist. Moreover, when an event, such as a landslide, happens at the border between countries, a unique and integrated DTM which covers the interest area is useful to analyze the scenario. In this sense, the HELI-DEM project is helpful. To perform analyses along the borders between countries, transnational geographic information is needed: a transnational DTM can be obtained by merging regional low resolution DTMs. Moreover, high resolution local DTMs should be used where they are available. To be merged, low and high resolution DTMs should be in the same three dimensional reference frame, should not present biases, and should be consistent in the overlapping areas. Cross-validation between the different DTMs is therefore needed. Two different problems should be solved: the merging of regional, partly overlapping low and medium resolution DTMs into a unique low/medium resolution DTM, and the merging with other local high resolution/high accuracy height data. This paper discusses the preliminary processing of the data for the fusion of low and high resolution DTMs in a case-study area within the Lombardy region: the Valtellina valley. In this region the Lombardy regional low resolution DTM is available, with a horizontal

  9. Multi-resolution processing for fractal analysis of airborne remotely sensed data

    NASA Technical Reports Server (NTRS)

    Jaggi, S.; Quattrochi, D.; Lam, N.

    1992-01-01

    Fractal geometry is increasingly becoming a useful tool for modeling natural phenomena. As an alternative to Euclidean concepts, fractals allow for a more accurate representation of the nature of complexity in natural boundaries and surfaces. Since they are characterized by self-similarity, an ideal fractal surface is scale-independent; i.e., at different scales a fractal surface looks the same. This is not exactly true for natural surfaces. When viewed at different spatial resolutions, parts of natural surfaces look alike in a statistical manner, and only for a limited range of scales. Images acquired by NASA's Thermal Infrared Multispectral Scanner are used to compute the fractal dimension as a function of spatial resolution. Three methods are used to determine the fractal dimension: Schelberg's line-divider method, the variogram method, and the triangular prism method. A description of these methods and the results of applying them to a remotely sensed image are also presented. Five flights were flown in succession at altitudes of 2 km (low), 6 km (mid), 12 km (high), and then back again at 6 km and 2 km, corresponding to three different pixel sizes: 5 m, 15 m, and 30 m. The area selected was the Ross Barnett reservoir near Jackson, Mississippi. The mission was flown during the predawn hours of 1 Feb. 1992, and radiosonde data were collected for that duration to profile the characteristics of the atmosphere. After simulating different spatial sampling intervals within the same image for each of the three image sets, the results are cross-correlated to compare the extent of detail and complexity that is obtained when data are taken at lower spatial intervals.
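
    The variogram method in miniature for a 1-D transect (our toy transcription): fit gamma(h) ~ h^(2H) on log-log axes and convert the Hurst exponent H to a fractal dimension (D = 2 - H for a profile, 3 - H for a surface):

        import numpy as np

        def fractal_dim_variogram(z, max_lag=32):
            """Variogram estimate of the fractal dimension of a 1-D transect."""
            lags = np.arange(1, max_lag)
            gamma = np.array([np.mean((z[h:] - z[:-h]) ** 2) for h in lags])
            slope, _ = np.polyfit(np.log(lags), np.log(gamma), 1)
            H = slope / 2.0
            return 2.0 - H

        # Brownian profile: H = 0.5, so D should come out near 1.5.
        z = np.cumsum(np.random.default_rng(7).normal(size=4000))
        print(fractal_dim_variogram(z))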

  10. Data Collection Methods for Validation of Advanced Multi-Resolution Fast Reactor Simulations

    SciTech Connect

    Tokuhiro, Akiro; Ruggles, Art; Pointer, David

    2015-01-22

    In pool-type Sodium Fast Reactors (SFRs) the regions most susceptible to thermal striping are the upper instrumentation structure (UIS) and the intermediate heat exchanger (IHX). This project investigated, experimentally and computationally (CFD), the thermal mixing in the region between the reactor core exit and the UIS. The thermal mixing phenomenon was simulated using two vertical jets at different velocities and temperatures, prototypic of two adjacent channels out of the core. Thermal jet mixing of anticipated flows at different temperatures and velocities was investigated. Velocity profiles were measured throughout the flow region using Ultrasonic Doppler Velocimetry (UDV), and temperatures along the geometric centerline between the jets were recorded using a thermocouple array. CFD simulations, using COMSOL, were used initially to understand the flow, then to design the experimental apparatus, and finally to compare simulation results and measurements characterizing the flows. The experimental results and CFD simulations show that the flow field is organized into three regions with respective transitions, namely convective mixing, (flow direction) transitional, and post-mixing. Both experiments and CFD simulations support this observation. For the anticipated SFR conditions the flow is momentum dominated and thermal mixing is therefore limited, owing to the short flow length from the exit of the core to the bottom of the UIS. This means that there will be thermal striping at any surface where poorly mixed streams impinge; unless lateral mixing is actively promoted out of the core, thermal striping will prevail. Furthermore, we note that CFD can be considered a 'separate effects (computational) test' and is recommended as part of any integral analysis. To this effect, poorly mixed streams have a potential impact on the rest of the SFR design and scaling, especially the placement of internal components such as the IHX that may see poorly mixed

  11. Multi-resolution integrated modeling for basin-scale water resources management and policy analysis

    SciTech Connect

    Gupta, Hoshin V. ,; Brookshire, David S.; Springer, E. P.; Wagener, Thorsten

    2004-01-01

    Approximately one-third of the land surface of the Earth is considered to be arid or semi-arid, with an annual average rainfall of less than 12-14 inches. The availability of water in such regions is, of course, particularly sensitive to climate variability, while the demand for water is experiencing explosive population growth. The competition for available water is exerting considerable pressure on water resources management. Policy and decision makers in the southwestern U.S. increasingly have to cope with over-stressed rivers and aquifers as population and water demands grow. Other factors such as endangered species and Native American water rights further complicate the management problems. Further, as groundwater tables are drawn down due to pumping in excess of natural recharge, considerable (potentially irreversible) environmental impacts begin to be felt as, for example, rivers run dry for significant portions of the year, riparian habitats disappear (with consequent effects on the bio-diversity of the region), aquifers compact resulting in large scale subsidence, and water quality begins to suffer. The current drought (1999-2002) in the southwestern U.S. is raising new concerns about how to sustain the combination of agricultural, urban and in-stream uses of water that underlie the socio-economic and ecological structure in the region. The water-stressed nature of arid and semi-arid environments means that competing water uses of various kinds vie for access to a highly limited resource. If basin-scale water sustainability is to be achieved, managers must somehow achieve a balance between supply and demand throughout the basin, not just for the surface water or stream. The need to move water around a basin such as the Rio Grande or Colorado River to achieve this balance has created the stimulus for water transfers and water markets, and for accurate hydrologic information to sustain such institutions [Matthews et al. 2002; Brookshire et al 2003

  12. Benefits of an ultra large and multiresolution ensemble for estimating available wind power

    NASA Astrophysics Data System (ADS)

    Berndt, Jonas; Hoppe, Charlotte; Elbern, Hendrik

    2016-04-01

    In this study we investigate the benefits of an ultra large ensemble of up to 1000 members, including multiple nesting with a target horizontal resolution of 1 km. The ensemble shall be used as a basis to detect events of extreme errors in wind power forecasting. The forecast value is the wind vector at wind turbine hub height (~100 m) in the short range (1 to 24 hours). Current wind power forecast systems already rest on NWP ensemble models. However, only calibrated ensembles from meteorological institutions serve as input so far, with limited spatial resolution (~10-80 km) and member number (~50). Perturbations related to the specific merits of wind power production are yet missing. Thus, infrequently occurring single extreme error events go undetected by such ensemble power forecasts. The numerical forecast model used in this study is the Weather Research and Forecasting Model (WRF). Model uncertainties are represented by stochastic parametrization of sub-grid processes via stochastically perturbed parametrization tendencies and, in conjunction, via the complementary stochastic kinetic-energy backscatter scheme already provided by WRF. We perform continuous ensemble updates by comparing each ensemble member with available observations using a sequential importance resampling filter, improving model accuracy while maintaining ensemble spread. Additionally, we use different ensemble systems from global models (ECMWF and GFS) as input and boundary conditions to capture different synoptic conditions. Critical weather situations which are connected to extreme error events are located and corresponding perturbation techniques are applied. The demanding computational effort is overcome by utilising the supercomputer JUQUEEN at the Forschungszentrum Juelich.
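
    One sequential-importance-resampling step in isolation (illustrative only; in the study this runs inside WRF ensembles against real observations):

        import numpy as np

        def sir_update(particles, weights, obs, obs_std):
            """Reweight members by a Gaussian likelihood, then resample.

            Systematic resampling keeps ensemble spread while pulling the
            ensemble toward the observation.
            """
            w = weights * np.exp(-0.5 * ((particles - obs) / obs_std) ** 2)
            w /= w.sum()
            n = len(particles)
            positions = (np.arange(n) + np.random.default_rng(8).random()) / n
            idx = np.searchsorted(np.cumsum(w), positions)
            return particles[idx], np.full(n, 1.0 / n)

        members = np.random.default_rng(9).normal(8.0, 2.0, 1000)  # hub-height wind, m/s
        members, weights = sir_update(members, np.full(1000, 1e-3),
                                      obs=10.0, obs_std=0.5)
        print(members.mean(), members.std())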

  13. Development of RESTful services and map-based user interface tools for access to the Global Multi-Resolution Topography (GMRT) Synthesis

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Morton, J. J.; Barg, B.

    2015-12-01

    The Global Multi-Resolution Topography (GMRT, http://gmrt.marine-geo.org) synthesis is a multi-resolution compilation of quality controlled multibeam sonar data, collected by scientists and institutions worldwide, that is merged with gridded terrestrial and marine elevation data. The multi-resolution elevation components of GMRT are delivered to the user through a variety of interfaces as both images and grids. The GMRT provides quantitative access to gridded data and images at the full native resolution of the sonar, as well as attribution information and access to source data files. To construct the GMRT, multibeam sonar data are evaluated, cleaned and gridded by the MGDS Team and are then merged with gridded global and regional elevation data that are available at a variety of scales from 1 km resolution to sub-meter resolution. As of June 2015, GMRT included processed swath data from nearly 850 research cruises with over 2.7 million ship-track miles of coverage. Several new services were developed over the past year to improve access to the GMRT Synthesis. In addition to our long-standing Web Map Services, we now offer RESTful services to provide programmatic access to gridded data in standard formats including ArcASCII, GeoTIFF, COARDS/CF-compliant NetCDF, and GMT NetCDF, as well as access to custom images of the GMRT in JPEG format. An attribution metadata XML service was also developed to return all relevant information about component data in an area, including cruise names, multibeam file names, and gridded data components. These new services are compliant with the EarthCube GeoWS Building Blocks specifications. Supplemental services include the release of data processing reports for each cruise included in the GMRT and data querying services that return elevation values at a point and great circle arc profiles using the highest available resolution data. Our new and improved map-based web application, GMRT MapTool, provides user access to the GMRT

  14. NOAA's National Snow Analyses

    NASA Astrophysics Data System (ADS)

    Carroll, T. R.; Cline, D. W.; Olheiser, C. M.; Rost, A. A.; Nilsson, A. O.; Fall, G. M.; Li, L.; Bovitz, C. T.

    2005-12-01

    NOAA's National Operational Hydrologic Remote Sensing Center (NOHRSC) routinely ingests all of the electronically available, real-time, ground-based snow data; airborne snow water equivalent data; satellite areal extent of snow cover information; and numerical weather prediction (NWP) model forcings for the coterminous U.S. The NWP model forcings are physically downscaled from their native 13 km² spatial resolution to a 1 km² resolution for the CONUS. The downscaled NWP forcings drive an energy-and-mass-balance snow accumulation and ablation model at a 1 km² spatial resolution and a 1 hour temporal resolution for the country. The ground-based, airborne, and satellite snow observations are assimilated into the snow model's simulated state variables using a Newtonian nudging technique. The principal advantages of the assimilation technique are: (1) approximate balance is maintained in the snow model, (2) physical processes are easily accommodated in the model, and (3) asynoptic data are incorporated at the appropriate times. The snow model is reinitialized with the assimilated snow observations to generate a variety of snow products that combine to form NOAA's NOHRSC National Snow Analyses (NSA). The NOHRSC NSA incorporate all of the information necessary and available to produce a "best estimate" of real-time snow cover conditions at 1 km² spatial resolution and 1 hour temporal resolution for the country. The NOHRSC NSA consist of a variety of daily, operational products that characterize real-time snowpack conditions including: snow water equivalent, snow depth, surface and internal snowpack temperatures, surface and blowing snow sublimation, and snowmelt for the CONUS. The products are generated and distributed in a variety of formats including: interactive maps, time-series, alphanumeric products (e.g., mean areal snow water equivalent on a hydrologic basin-by-basin basis), text and map discussions, map animations, and quantitative gridded products
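
    The nudging update itself is a one-liner; a toy transcription for a single state variable (the gain, time step and relaxation time below are made-up values, not NOHRSC's):

        import numpy as np

        def nudge(state, obs, gain, dt, tau):
            """Newtonian nudging: relax the modeled state toward an observation.

            Adds a relaxation term (obs - state)/tau scaled by a quality gain,
            so the model stays approximately in balance instead of being reset
            to the observed value outright.
            """
            return state + gain * dt * (obs - state) / tau

        swe = 120.0                       # modeled snow water equivalent, mm
        for _ in range(24):               # one day of hourly nudging
            swe = nudge(swe, obs=100.0, gain=0.8, dt=1.0, tau=6.0)
        print(round(swe, 1))              # drawn most of the way toward 100 mm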

  15. Multiresolutional Optic Flow

    DTIC Science & Technology

    2007-11-02

    author's feet on the ground while letting his head stay in the clouds. As a collaborator on the project, thanks go to LCDR Joseph Skufca, USN, for his...pair of binoculars. Although one is able to zoom in and watch what sign the catcher is giving to the pitcher, it is impossible to simultaneously see...cannot at the same time see what motion the catcher makes with his fingers as a sign to the pitcher. Here the window is so large that small scale motion

  16. On Limits

    NASA Technical Reports Server (NTRS)

    Holzmann, Gerard J.

    2008-01-01

    In the last 3 decades or so, the size of systems we have been able to verify formally with automated tools has increased dramatically. At each point in this development, we encountered a different set of limits -- many of which we were eventually able to overcome. Today, we may have reached some limits that may be much harder to conquer. The problem I will discuss is the following: given a hypothetical machine with infinite memory that is seamlessly shared among infinitely many CPUs (or CPU cores), what is the largest problem size that we could solve?

  17. Spacelab Charcoal Analyses

    NASA Technical Reports Server (NTRS)

    Slivon, L. E.; Hernon-Kenny, L. A.; Katona, V. R.; Dejarme, L. E.

    1995-01-01

    This report describes analytical methods and results obtained from chemical analysis of 31 charcoal samples in five sets. Each set was obtained from a single scrubber used to filter ambient air on board a Spacelab mission. Analysis of the charcoal samples was conducted by thermal desorption followed by gas chromatography/mass spectrometry (GC/MS). All samples were analyzed using identical methods. The method used for these analyses was able to detect compounds independent of their polarity or volatility. In addition to the charcoal samples, analyses of three Environmental Control and Life Support System (ECLSS) water samples were conducted specifically for trimethylamine.

  18. Wavelet Analyses and Applications

    ERIC Educational Resources Information Center

    Bordeianu, Cristian C.; Landau, Rubin H.; Paez, Manuel J.

    2009-01-01

    It is shown how a modern extension of Fourier analysis known as wavelet analysis is applied to signals containing multiscale information. First, a continuous wavelet transform is used to analyse the spectrum of a nonstationary signal (one whose form changes in time). The spectral analysis of such a signal gives the strength of the signal in each…

  19. Apollo 14 microbial analyses

    NASA Technical Reports Server (NTRS)

    Taylor, G. R.

    1972-01-01

    Extensive microbiological analyses that were performed on the Apollo 14 prime and backup crewmembers and ancillary personnel are discussed. The crewmembers were subjected to four separate and quite different environments during the 137-day monitoring period. The relation between each of these environments and observed changes in the microflora of each astronaut are presented.

  20. Atmospheric tether mission analyses

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA is considering the use of tethered satellites to explore regions of the atmosphere inaccessible to spacecraft or high altitude research balloons. This report summarizes the Lockheed Martin Astronautics (LMA) effort for the engineering study team assessment of an Orbiter-based atmospheric tether mission. Lockheed Martin responsibilities included design recommendations for the deployer and tether, as well as tether dynamic analyses for the mission. Three tether configurations were studied including single line, multistrand (Hoytether) and tape designs.

  1. LDEF Satellite Radiation Analyses

    NASA Technical Reports Server (NTRS)

    Armstrong, T. W.; Colborn, B. L.

    1996-01-01

    Model calculations and analyses have been carried out to compare with several sets of data (dose, induced radioactivity in various experiment samples and spacecraft components, fission foil measurements, and LET spectra) from passive radiation dosimetry on the Long Duration Exposure Facility (LDEF) satellite, which was recovered after almost six years in space. The calculations and data comparisons are used to estimate the accuracy of current models and methods for predicting the ionizing radiation environment in low earth orbit. The emphasis is on checking the accuracy of trapped proton flux and anisotropy models.

  2. Laser Radar Analyses.

    DTIC Science & Technology

    1983-07-15

    back-propagated aperture radius, Rm, in the corresponding expression (Equations (79)-(72)) of Reference 1. It should also be noted that Equation (18...incoherent detection efficiency approaches unity as Rm goes to zero and approaches zero in a limiting fashion as Rm goes to infinity, as required from the...the first two integrations reduce to the case previously treated, Equations (18) and (20), with Rm replaced by Rmi and Rmo respectively. For the

  3. Force Limited Vibration Testing

    NASA Technical Reports Server (NTRS)

    Scharton, Terry; Chang, Kurng Y.

    2005-01-01

    This slide presentation reviews the concept and applications of Force Limited Vibration Testing. The goal of vibration testing of aerospace hardware is to identify problems that would result in flight failures. The commonly used aerospace vibration tests use artificially high shaker forces and responses at the resonance frequencies of the test item. It has become common to limit the acceleration responses in the test to those predicted for flight. This requires an analysis of the acceleration response and requires placing accelerometers on the test item. With the advent of piezoelectric gages it has become possible to improve vibration testing. The basic equations are reviewed. Force limits are analogous and complementary to the acceleration specifications used in conventional vibration testing. Just as the acceleration specification is the frequency spectrum envelope of the in-flight acceleration at the interface between the test item and the flight mounting structure, the force limit is the envelope of the in-flight force at the interface. In force limited vibration tests, both the acceleration and force specifications are needed, and the force specification is generally based on and proportional to the acceleration specification. Therefore, force limiting does not compensate for errors in the development of the acceleration specification, e.g., too much conservatism or the lack thereof. These errors will carry over into the force specification. Since in-flight vibratory force data are scarce, force limits are often derived from coupled system analyses and impedance information obtained from measurements or finite element models (FEM). Fortunately, data on the interface forces between systems and components are now available from system acoustic and vibration tests of development test models and from a few flight experiments. Semi-empirical methods of predicting force limits are currently being developed on the basis of the limited flight and system test

  4. Age Limits.

    PubMed

    Antfolk, Jan

    2017-03-01

    Whereas women of all ages prefer slightly older sexual partners, men, regardless of their age, have a preference for women in their 20s. Earlier research has suggested that this difference between the sexes' age preferences is resolved according to women's preferences. This research has not, however, sufficiently considered that the age range of considered partners might change over the life span. Here we investigated the age limits (youngest and oldest) of considered and actual sex partners in a population-based sample of 2,655 adults (aged 18-50 years). Over the investigated age span, women reported a narrower age range than men, and women tended to prefer slightly older men. We also show that men's age range widens as they get older: while they continue to consider sex with young women, men also consider sex with women their own age or older. Contrary to earlier suggestions, men's sexual activity thus also reflects their own age range, although their potential interest in younger women is not likely converted into sexual activity. Compared to homosexual men, bisexual and heterosexual men were more unlikely to convert young preferences into actual behavior, supporting female-choice theory.

  5. Amino acid analyses of Apollo 14 samples.

    NASA Technical Reports Server (NTRS)

    Gehrke, C. W.; Zumwalt, R. W.; Kuo, K.; Aue, W. A.; Stalling, D. L.; Kvenvolden, K. A.; Ponnamperuma, C.

    1972-01-01

    Detection limits were between 300 pg and 1 ng for different amino acids in an analysis by gas-liquid chromatography of water extracts from Apollo 14 lunar fines, in which amino acids were converted to their N-trifluoroacetyl n-butyl esters. Initial analyses of water and HCl extracts of samples 14240 and 14298 showed no amino acids above background levels.

  6. EEG analyses with SOBI.

    SciTech Connect

    Glickman, Matthew R.; Tang, Akaysha

    2009-02-01

    The motivating vision behind Sandia's MENTOR/PAL LDRD project has been that of systems which use real-time psychophysiological data to support and enhance human performance, both individually and of groups. Relevant and significant psychophysiological data being a necessary prerequisite to such systems, this LDRD has focused on identifying and refining such signals. The project has focused in particular on EEG (electroencephalogram) data as a promising candidate signal because it (potentially) provides a broad window on brain activity with relatively low cost and logistical constraints. We report here on two analyses performed on EEG data collected in this project using the SOBI (Second Order Blind Identification) algorithm to identify two independent sources of brain activity: one in the frontal lobe and one in the occipital. The first study looks at directional influences between the two components, while the second study looks at inferring gender based upon the frontal component.

  7. Genetic analyses of captive Alala (Corvus hawaiiensis) using AFLP analyses

    USGS Publications Warehouse

    Jarvi, Susan I.; Bianchi, Kiara R.

    2006-01-01

    affected by the mutation rate at microsatellite loci, thus introducing a bias. Also, the number of loci that can be studied is frequently limited to fewer than 10. This theoretically represents a maximum of one marker for each of 10 chromosomes. Dominant markers like AFLP allow a larger fraction of the genome to be screened. Large numbers of loci can be screened by AFLP to resolve very small individual differences that can be used for identification of individuals, estimates of pairwise relatedness and, in some cases, for parentage analyses. Since AFLP is a dominant marker (it cannot distinguish a +/+ homozygote from a +/- heterozygote), it has limitations for parentage analyses. Only when both parents are homozygous for the absence of alleles (-/-) and offspring show a presence (+/+ or +/-) can the parents be excluded. In this case, microsatellites become preferable as they have the potential to exclude individual parents when the other parent is unknown. Another limitation of AFLP is that the loci are generally less polymorphic (only two alleles/locus) than microsatellite loci (often >10 alleles/locus). While generally fewer than 10 highly polymorphic microsatellite loci are enough to exclude and assign parentage, it might require up to 100 or more AFLP loci. While there are pros and cons to different methodologies, the total number of loci evaluated by AFLP generally offsets the limitations imposed due to the dominant nature of this approach, and end results between methods are generally comparable. Overall objectives of this study were to evaluate the level of genetic diversity in the captive population of Alala, to compare genetic data with currently available pedigree information, and to determine the extent of relatedness of mating pairs and among founding individuals.

  8. Network Class Superposition Analyses

    PubMed Central

    Pearson, Carl A. B.; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process [1]), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses. PMID:23565141

  9. Network class superposition analyses.

    PubMed

    Pearson, Carl A B; Zeng, Chen; Simha, Rahul

    2013-01-01

    Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
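
    The superposition idea in the two records above lends itself to a compact illustration. The sketch below builds T for a tiny, invented class of two 3-node Boolean threshold networks (not the authors' Strong Inhibition class): the trace of T then gives the class-average number of point attractors, and row entropies flag informative start states.

    ```python
    import itertools
    import numpy as np

    # Toy "network class superposition". The two-member class of 3-node
    # Boolean threshold networks below is invented for illustration; T is
    # the average of each member's deterministic 2^n x 2^n transition matrix.

    n = 3
    states = list(itertools.product([0, 1], repeat=n))
    index = {s: i for i, s in enumerate(states)}

    def transition_matrix(rule):
        """Deterministic synchronous-update transition matrix for one network."""
        M = np.zeros((2**n, 2**n))
        for s in states:
            M[index[s], index[rule(s)]] = 1.0
        return M

    def make_rule(w):
        """Threshold update: node i turns on iff its weighted input is > 0."""
        return lambda s: tuple(
            int(sum(w[i][j] * s[j] for j in range(n)) > 0) for i in range(n))

    base = [[0, 1, -1], [1, 0, 1], [-1, 1, 0]]
    members = []
    for sign in (+1, -1):                 # the class: flip one interaction sign
        w = [row[:] for row in base]
        w[0][1] = sign
        members.append(make_rule(w))

    T = sum(transition_matrix(r) for r in members) / len(members)

    # T[s, s] is the fraction of member networks for which s is a fixed point,
    # so trace(T) is the class-average number of point attractors.
    print("mean number of point attractors:", np.trace(T))

    # Row-wise Shannon entropy of T: high-entropy rows are start states whose
    # successor differs most across the class -- informative experiments.
    with np.errstate(divide="ignore", invalid="ignore"):
        P = np.where(T > 0, T * np.log2(T), 0.0)
    H = -P.sum(axis=1)
    print("most informative start state:", states[int(np.argmax(H))])
    ```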

  10. A system for generating multi-resolution Digital Terrain Models of Mars based on the ESA Mars Express and NASA Mars Reconnaissance Orbiter data

    NASA Astrophysics Data System (ADS)

    Yershov, V.

    2015-10-01

    We describe a processing system for generating multi-resolution digital terrain models (DTMs) of Mars within the iMars project of the European Seventh Framework Programme. This system is based on a non-rigorous sensor model for processing high-resolution stereoscopic images obtained from the High Resolution Imaging Science Experiment (HiRISE) camera and Context Camera (CTX) onboard the NASA Mars Reconnaissance Orbiter (MRO) spacecraft. The system includes geodetic control based on a polynomial fit of the input CTX images with respect to a reference image obtained from the ESA Mars Express High Resolution Stereo Camera (HRSC). The input image processing is based on the Integrated Software for Imagers and Spectrometers (ISIS) and the NASA Ames Stereo Pipeline. The accuracy of the produced CTX DTM is improved by aligning it with the reference HRSC DTM and the altimetry data from the Mars Orbiter Laser Altimeter (MOLA) onboard the Mars Global Surveyor (MGS) spacecraft. The higher-resolution HiRISE imagery data are processed in the same way, except that the reference images and DTMs are taken from the CTX results obtained during the first processing stage. A quality assessment of image photogrammetric registration is demonstrated using data generated by the NASA Ames Stereo Pipeline and the BAE Socet system. Such DTMs will be produced for all available stereo pairs and will be displayed as WMS layers within the iMars webGIS.

  11. Top-down and bottom-up inventory approach for above ground forest biomass and carbon monitoring in REDD framework using multi-resolution satellite data.

    PubMed

    Sharma, Laxmi Kant; Nathawat, Mahendra Singh; Sinha, Suman

    2013-10-01

    This study deals with the future scope of REDD (Reduced Emissions from Deforestation and forest Degradation) and REDD+ regimes for measuring and monitoring the current state and dynamics of carbon stocks over time with an integrated geospatial and field-based biomass inventory approach. A multi-temporal and multi-resolution geospatial synergic approach incorporating satellite sensors from moderate to high resolution with a stratified random sampling design is used. The inventory process involves a continuous forest inventory to facilitate the quantification of possible CO2 reductions over time using statistical up-scaling procedures on various levels. The combined approach was applied on a regional scale taking Himachal Pradesh (India) as a case study, with a hierarchy of forest strata representing the forest structure found in India. Biophysical modeling revealed a power regression model as the best fit (R^2 = 0.82) for the relationship between the Normalized Difference Vegetation Index (NDVI) and biomass, which was then used to calculate multi-temporal above-ground biomass and carbon sequestration. The net carbon sequestered by the forests totaled 11.52 million tons (Mt) over the period of 20 years, at a rate of 0.58 Mt per year since 1990, while the CO2 equivalent removed from the environment by the forests under study over those 20 years comes to 42.26 Mt.
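
    As a concrete illustration of the reported model form (biomass = a * NDVI^b), the following sketch fits the power law by least squares in log-log space; the NDVI and biomass values are invented placeholders, not the study's field data.

    ```python
    import numpy as np

    # Fit biomass = a * NDVI^b by ordinary least squares on log-transformed
    # data. Values below are made-up placeholders for demonstration only.

    ndvi    = np.array([0.21, 0.34, 0.45, 0.52, 0.61, 0.68, 0.74])
    biomass = np.array([ 12.,  35.,  70., 105., 160., 220., 290.])  # t/ha

    b, log_a = np.polyfit(np.log(ndvi), np.log(biomass), 1)
    a = np.exp(log_a)

    pred = a * ndvi**b
    ss_res = np.sum((biomass - pred)**2)
    ss_tot = np.sum((biomass - biomass.mean())**2)
    print(f"biomass ≈ {a:.1f} * NDVI^{b:.2f},  R² = {1 - ss_res/ss_tot:.2f}")
    ```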

  12. Multi-resolution graph-based clustering analysis for lithofacies identification from well log data: Case study of intraplatform bank gas fields, Amu Darya Basin

    NASA Astrophysics Data System (ADS)

    Tian, Yu; Xu, Hong; Zhang, Xing-Yang; Wang, Hong-Jun; Guo, Tong-Cui; Zhang, Liang-Jie; Gong, Xing-Lin

    2016-12-01

    In this study, we used the multi-resolution graph-based clustering (MRGC) method for determining the electrofacies (EF) and lithofacies (LF) from well log data obtained from the intraplatform bank gas fields located in the Amu Darya Basin. The MRGC could automatically determine the optimal number of clusters without prior knowledge about the structure or cluster numbers of the analyzed data set and allowed the users to control the level of detail actually needed to define the EF. Based on the LF identification and successful EF calibration using core data, an MRGC EF partition model including five clusters and a quantitative LF interpretation chart were constructed. The EF clusters 1 to 5 were interpreted as lagoon, anhydrite flat, interbank, low-energy bank, and high-energy bank, and the coincidence rate in the cored interval could reach 85%. We concluded that the MRGC could be accurately applied to predict the LF in non-cored but logged wells. Therefore, continuous EF clusters were partitioned and corresponding LF were interpreted, and the distribution and petrophysical characteristics of different LF were analyzed in the framework of sequence stratigraphy.
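
    MRGC itself is a published k-nearest-neighbour, graph-based method; the sketch below is not that algorithm but a minimal stand-in for its central idea, namely that the cluster count emerges from the data by linking each sample to its nearest denser neighbour. All well-log values are fabricated.

    ```python
    import numpy as np

    # NOT the published MRGC algorithm: a minimal stand-in for its core idea.
    # Each sample gets a local density from its k nearest neighbours and is
    # attached to its nearest *denser* neighbour; samples with no denser
    # neighbour within a linking radius become cluster "kernels", so the
    # number of clusters emerges from the data rather than being fixed.

    rng = np.random.default_rng(0)
    logs = np.vstack([
        rng.normal([40.0, 2.3], [2.0, 0.02], size=(30, 2)),  # facies A (GR, RHOB)
        rng.normal([90.0, 2.6], [4.0, 0.02], size=(30, 2)),  # facies B
    ])

    d = np.linalg.norm(logs[:, None, :] - logs[None, :, :], axis=2)
    k, dc = 5, 20.0            # neighbours for density; linking radius
    density = 1.0 / (np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1) + 1e-12)

    parent = np.arange(len(logs))
    for i in range(len(logs)):
        denser = np.where(density > density[i])[0]
        if denser.size:
            j = denser[np.argmin(d[i, denser])]
            if d[i, j] < dc:   # too far -> i starts its own cluster kernel
                parent[i] = j

    labels = parent.copy()     # follow links up to each density kernel
    for _ in range(len(logs)):
        labels = parent[labels]

    print("clusters found:", np.unique(labels).size)   # expect 2
    ```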

  13. Computational AstroStatistics: fast and efficient tools for analysing huge astronomical data sources

    NASA Astrophysics Data System (ADS)

    Nichol, Robert C.; Chong, S.; Connolly, A. J.; Davies, S.; Genovese, C.; Hopkins, A. M.; Miller, C. J.; Moore, A. W.; Pelleg, D.; Richards, G. T.; Schneider, J.; Szapudi, I.; Wasserman, L.

    I present here a review of past and present multi-disciplinary research of the Pittsburgh Computational AstroStatistics (PiCA) group. This group is dedicated to developing fast and efficient statistical algorithms for analysing huge astronomical data sources. I begin with a short review of multi-resolutional kd-trees, which are the building blocks for many of our algorithms, for example quick range queries and fast N-point correlation functions. I will present new results from the use of Mixture Models (Connolly et al. 2000) in density estimation of multi-color data from the Sloan Digital Sky Survey (SDSS), specifically the selection of quasars and the automated identification of X-ray sources. I will also present a brief overview of the False Discovery Rate (FDR) procedure (Miller et al. 2001) and show how it has been used in the detection of "Baryon Wiggles" in the local galaxy power spectrum and source identification in radio data. Finally, I will look forward to new research on an automated Bayes Network anomaly detector and the possible use of the Locally Linear Embedding algorithm (LLE; Roweis & Saul 2000) for spectral classification of SDSS spectra.
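
    The pair-counting pattern behind fast two-point correlation functions can be sketched with an off-the-shelf kd-tree; here scipy's cKDTree stands in for the group's own multi-resolutional kd-trees, and both catalogues are synthetic.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    # Sketch of kd-tree pair counting, the building block behind fast
    # N-point correlation functions. The "galaxy" catalogues are synthetic.

    rng = np.random.default_rng(1)
    data = rng.uniform(0.0, 1.0, (5000, 2))     # mock galaxy positions
    rand = rng.uniform(0.0, 1.0, (5000, 2))     # random comparison catalogue

    radii = np.array([0.01, 0.02, 0.05, 0.10])
    # count_neighbors returns cumulative pair counts; subtract self-pairs.
    dd = cKDTree(data).count_neighbors(cKDTree(data), radii) - len(data)
    rr = cKDTree(rand).count_neighbors(cKDTree(rand), radii) - len(rand)

    # Simple DD/RR - 1 estimator of the two-point correlation function;
    # values are near zero here because both catalogues are unclustered.
    print(dd / rr - 1)
    ```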

  14. Light scattering computation model for nonspherical aerosol particles based on multi-resolution time-domain scheme: model development and validation.

    PubMed

    Hu, Shuai; Gao, Taichang; Li, Hao; Yang, Bo; Zhang, Feng; Chen, Ming; Liu, Lei

    2017-01-23

    Due to the inadequate understanding of the scattering properties of nonspherical aerosols, considerable uncertainties still exist in radiative transfer numerical simulations. To this end, a new scattering model for nonspherical aerosols is established based on the Multi-Resolution Time-Domain (MRTD) scheme. The model comprises three modules: a near-field calculation module, a near-to-far-field transformation module, and a scattering-parameters computation module, in which the near electromagnetic field is calculated by the MRTD technique, the near-to-far transformation is performed by a volume integral method, and the calculation models for the extinction and absorption cross sections are derived directly from Maxwell's curl equations in the frequency domain. To achieve higher computational efficiency, the model is further parallelized with the MPI non-blocking repeated communication technique. The accuracy of the scattering model is validated against the Lorenz-Mie, Aden-Kerker and T-matrix theories for spherical particles, particles with inclusions, and nonspherical particles. Finally, the parallel computational efficiency of the MRTD scattering model is quantitatively discussed. The results obtained by the parallel MRTD scattering model show excellent agreement with those of the well-tested scattering theories, with relative simulation errors of the phase function below 5% for most scattering angles. In backward directions, the simulation errors are much larger than in forward directions due to the stair approximation in particle construction. The computational accuracy of integral scattering parameters such as the extinction and absorption efficiencies is higher than that of the phase matrix: for a particle with a size parameter of 10, the simulation errors of the extinction and absorption efficiencies are -0.4891% and -1.6933%, respectively.

  15. Restricted versus Unrestricted Learning: Synthesis of Recent Meta-Analyses

    ERIC Educational Resources Information Center

    Johnson, Genevieve

    2007-01-01

    Meta-analysis is a method of quantitatively summarizing the results of experimental research. This article summarizes four meta-analyses published since 2003 that compare the effect of distance education (DE) and traditional education (TE) on student learning. Despite limitations, synthesis of these meta-analyses establishes, at the very least, equivalent learning…

  16. Recent Trends in Conducting School-Based Experimental Functional Analyses

    ERIC Educational Resources Information Center

    Carter, Stacy L.

    2009-01-01

    Demonstrations of school-based experimental functional analyses have received limited attention within the literature. School settings present unique practical and ethical concerns related to the implementation of experimental analyses which were originally developed within clinical settings. Recent examples have made definite contributions toward…

  17. EEO Implications of Job Analyses.

    ERIC Educational Resources Information Center

    Lacy, D. Patrick, Jr.

    1979-01-01

    Discusses job analyses as they relate to the requirements of Title VII of the Civil Rights Act of 1964, the Equal Pay Act of 1963, and the Rehabilitation Act of 1973. Argues that job analyses can establish the job-relatedness of entrance requirements and aid in defenses against charges of discrimination. Journal availability: see EA 511 615.

  18. FUEL CASK IMPACT LIMITER VULNERABILITIES

    SciTech Connect

    Leduc, D.; England, J.; Rothermel, R.

    2009-02-09

    Cylindrical fuel casks often have impact limiters surrounding just the ends of the cask shaft in a typical 'dumbbell' arrangement. The primary purpose of these impact limiters is to absorb energy to reduce loads on the cask structure during impacts associated with a severe accident. Impact limiters are also credited in many packages with protecting closure seals and maintaining lower peak temperatures during fire events. For this credit to be taken in safety analyses, the impact limiter attachment system must be shown to retain the impact limiter following Normal Conditions of Transport (NCT) and Hypothetical Accident Conditions (HAC) impacts. Large casks are often certified by analysis only because of the costs associated with testing. Therefore, some cask impact limiter attachment systems have not been tested in real impacts. A recent structural analysis of the T-3 Spent Fuel Containment Cask found problems with the design of the impact limiter attachment system. Assumptions in the original Safety Analysis for Packaging (SARP) concerning the loading in the attachment bolts were found to be inaccurate in certain drop orientations. This paper documents the lessons learned and their applicability to impact limiter attachment system designs.

  19. Time-series analysis of multi-resolution optical imagery for quantifying forest cover loss in Sumatra and Kalimantan, Indonesia

    NASA Astrophysics Data System (ADS)

    Broich, Mark; Hansen, Matthew C.; Potapov, Peter; Adusei, Bernard; Lindquist, Erik; Stehman, Stephen V.

    2011-04-01

    Indonesia than change maps based on image composites. Unlike other time-series analyses employing observations with a consistent periodicity, our study area was characterized by highly unequal observation counts and frequencies due to persistent cloud cover, scan line corrector off (SLC-off) gaps, and the absence of a complete archive. Our method accounts for this variation by generating a generic variable space. We evaluated our results against an independent probability sample-based estimate of gross forest cover loss and expert mapped gross forest cover loss at 64 sample sites. The mapped gross forest cover loss for Sumatra and Kalimantan was 2.86% of the land area, or 2.86 Mha from 2000 to 2005, with the highest concentration having occurred in Riau and Kalimantan Tengah provinces.

  20. Uncertainty quantification approaches for advanced reactor analyses.

    SciTech Connect

    Briggs, L. L.; Nuclear Engineering Division

    2009-03-24

    The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The Commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be applied equally well to analyses for high-temperature gas-cooled reactors and liquid metal reactors, and to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selection of an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.
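
    The 95/95 demonstration described above is commonly carried out with Wilks' nonparametric tolerance-limit formula (a standard choice in this field, though only one of the approaches such a report would review). A minimal first-order, one-sided sample-size calculation:

    ```python
    # Wilks' first-order, one-sided tolerance limit: find the smallest
    # number of code runs n such that the maximum of n samples bounds at
    # least `coverage` of the population with probability `confidence`.

    def wilks_first_order(coverage=0.95, confidence=0.95):
        n = 1
        while 1 - coverage**n < confidence:
            n += 1
        return n

    print(wilks_first_order())  # 59, the classic "59 runs" result
    ```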

  1. Nonindependence and sensitivity analyses in ecological and evolutionary meta-analyses.

    PubMed

    Noble, Daniel W A; Lagisz, Malgorzata; O'Dea, Rose E; Nakagawa, Shinichi

    2017-01-30

    Meta-analysis is an important tool for synthesizing research on a variety of topics in ecology and evolution, including molecular ecology, but can be susceptible to nonindependence. Nonindependence can affect two major interrelated components of a meta-analysis: (i) the calculation of effect size statistics and (ii) the estimation of overall meta-analytic estimates and their uncertainty. While some solutions to nonindependence exist at the statistical analysis stages, there is little advice on what to do when complex analyses are not possible, or when studies with nonindependent experimental designs exist in the data. Here we argue that exploring the effects of procedural decisions in a meta-analysis (e.g. inclusion of different quality data, choice of effect size) and statistical assumptions (e.g. assuming no phylogenetic covariance) using sensitivity analyses are extremely important in assessing the impact of nonindependence. Sensitivity analyses can provide greater confidence in results and highlight important limitations of empirical work (e.g. impact of study design on overall effects). Despite their importance, sensitivity analyses are seldom applied to problems of nonindependence. To encourage better practice for dealing with nonindependence in meta-analytic studies, we present accessible examples demonstrating the impact that ignoring nonindependence can have on meta-analytic estimates. We also provide pragmatic solutions for dealing with nonindependent study designs, and for analysing dependent effect sizes. Additionally, we offer reporting guidelines that will facilitate disclosure of the sources of nonindependence in meta-analyses, leading to greater transparency and more robust conclusions.
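
    One of the simplest sensitivity analyses in this spirit is a leave-one-out check on the pooled estimate. The sketch below applies it to a fixed-effect, inverse-variance weighted mean with invented effect sizes; study 3 is deliberately discrepant.

    ```python
    import numpy as np

    # Leave-one-out sensitivity analysis for a fixed-effect meta-analytic
    # mean. Effect sizes and sampling variances are invented for the demo.

    yi = np.array([0.30, 0.25, 0.90, 0.15, 0.40])   # study effect sizes
    vi = np.array([0.02, 0.03, 0.02, 0.05, 0.04])   # sampling variances

    def pooled(y, v):
        w = 1.0 / v                  # inverse-variance weights
        return np.sum(w * y) / np.sum(w)

    print(f"all studies: {pooled(yi, vi):.3f}")
    for i in range(len(yi)):
        keep = np.arange(len(yi)) != i
        print(f"without study {i + 1}: {pooled(yi[keep], vi[keep]):.3f}")
    ```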

  2. Analysing the ventricular fibrillation waveform.

    PubMed

    Reed, Matthew J; Clegg, Gareth R; Robertson, Colin E

    2003-04-01

    The surface electrocardiogram associated with ventricular fibrillation has been of interest to researchers for some time. Over the last few decades, techniques have been developed to analyse this signal in an attempt to obtain more information about the state of the myocardium and the chances of successful defibrillation. This review looks at the implications of analysing the VF waveform and discusses the various techniques that have been used, including fast Fourier transform analysis, wavelet transform analysis and mathematical techniques such as chaos theory.
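
    As a toy illustration of the fast Fourier transform approach the review discusses, the sketch below estimates the median frequency of a synthetic VF-like signal; real analyses use recorded ECG segments and more careful preprocessing.

    ```python
    import numpy as np

    # Estimate the median frequency of a synthetic VF-like waveform from
    # its power spectrum. The signal is a crude narrowband surrogate only.

    fs = 250.0                                   # sampling rate, Hz
    t = np.arange(0, 8, 1 / fs)
    rng = np.random.default_rng(2)
    sig = np.sin(2 * np.pi * 5 * t + np.cumsum(rng.normal(0, 0.1, t.size)))

    spec = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))**2
    freqs = np.fft.rfftfreq(sig.size, 1 / fs)
    band = (freqs >= 1) & (freqs <= 30)          # physiologic VF band

    cum = np.cumsum(spec[band])                  # cumulative power in band
    median_freq = freqs[band][np.searchsorted(cum, cum[-1] / 2)]
    print(f"median frequency ≈ {median_freq:.1f} Hz")
    ```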

  3. Pawnee Nation Energy Option Analyses

    SciTech Connect

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-21

    Pawnee Nation of Oklahoma Energy Option Analyses. In 2003, the Pawnee Nation leadership identified the need for the tribe to comprehensively address its energy issues. During a strategic energy planning workshop a general framework was laid out and the Pawnee Nation Energy Task Force was created to work toward further development of the tribe’s energy vision. The overarching goals of the “first steps” project were to identify the most appropriate focus for its strategic energy initiatives going forward, and to provide information necessary to take the next steps in pursuit of the “best fit” energy options. Description of Activities Performed: The research team reviewed existing data pertaining to the availability of biomass (focusing on woody biomass, agricultural biomass/bio-energy crops, and methane capture), solar, wind and hydropower resources on the Pawnee-owned lands. Using these data, combined with assumptions about costs and revenue streams, the research team performed preliminary feasibility assessments for each resource category. The research team also reviewed available funding resources and made recommendations to Pawnee Nation highlighting those resources with the greatest potential for financially viable development, both in the near term and over a longer time horizon. Findings and Recommendations: Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym and an exterior wood-fired boiler system at the tribe’s main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor

  4. EPOXI Trajectory and Maneuver Analyses

    NASA Technical Reports Server (NTRS)

    Chung, Min-Kun J.; Bhaskaran, Shyamkumar; Chesley, Steven R.; Halsell, C. Allen; Helfrich, Clifford E.; Jefferson, David C.; McElrath, Timothy P.; Rush, Brian P.; Wang, Tseng-Chan M.; Yen, Chen-wan L.

    2011-01-01

    The EPOXI mission is a NASA Discovery Mission of Opportunity combining two separate investigations: Extrasolar Planet Observation and Characterization (EPOCh) and Deep Impact eXtended Investigation (DIXI). Both investigations reused the DI instruments and spacecraft that successfully flew by the comet Tempel-1 (4 July 2005). For EPOCh, the goal was to find exoplanets with the high resolution imager, while for DIXI it was to fly by the comet Hartley 2 (4 Nov 2010). This paper documents the navigation experience of the earlier maneuver analyses critical for the EPOXI mission, including statistical ΔV analyses and other useful analyses in designing maneuvers. It also recounts the trajectory design leading up to the final reference trajectory to Hartley 2.

  5. NEXT Ion Thruster Performance Dispersion Analyses

    NASA Technical Reports Server (NTRS)

    Soulas, George C.; Patterson, Michael J.

    2008-01-01

    The NEXT ion thruster is a low specific mass, high performance thruster with a nominal throttling range of 0.5 to 7 kW. Numerous engineering model and one prototype model thrusters have been manufactured and tested. Of significant importance to propulsion system performance is thruster-to-thruster performance dispersions. This type of information can provide a bandwidth of expected performance variations both on a thruster and a component level. Knowledge of these dispersions can be used to more conservatively predict thruster service life capability and thruster performance for mission planning, facilitate future thruster performance comparisons, and verify power processor capabilities are compatible with the thruster design. This study compiles the test results of five engineering model thrusters and one flight-like thruster to determine unit-to-unit dispersions in thruster performance. Component level performance dispersion analyses will include discharge chamber voltages, currents, and losses; accelerator currents, electron backstreaming limits, and perveance limits; and neutralizer keeper and coupling voltages and the spot-to-plume mode transition flow rates. Thruster level performance dispersion analyses will include thrust efficiency.

  6. Feed analyses and their interpretation

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Compositional analysis is central to determining the nutritional value of feedstuffs. The utility of the values and how they should be used depends on how representative the feed subsample is, the nutritional relevance of the assays, analytical variability of the analyses, and whether a feed is suit...

  7. Analysing Children's Drawings: Applied Imagination

    ERIC Educational Resources Information Center

    Bland, Derek

    2012-01-01

    This article centres on a research project in which freehand drawings provided a richly creative and colourful data source of children's imagined, ideal learning environments. Issues concerning the analysis of the visual data are discussed, in particular, how imaginative content was analysed and how the analytical process was dependent on an…

  8. Seamless multiresolution isosurfaces using wavelets

    SciTech Connect

    Udeshi, T.; Hudson, R.; Papka, M. E.

    2000-04-11

    Data sets that are being produced by today's simulations, such as the ones generated by DOE's ASCI program, are too large for real-time exploration and visualization. Therefore, new methods of visualizing these data sets need to be investigated. The authors present a method that combines isosurface representations of different resolutions into a seamless solution, virtually free of cracks and overlaps. The solution combines existing isosurface generation algorithms and wavelet theory to produce a real-time solution to multiple-resolution isosurfaces.

  9. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques

    NASA Astrophysics Data System (ADS)

    Ivanov, Anton; Muller, Jan-Peter; Tao, Yu; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis; Fanara, Lida; Waenlish, Marita; Walter, Sebastian; Steinkert, Ralf; Schreiner, Bjorn; Cantini, Federico; Wardlaw, Jessica; Sprinks, James; Giordano, Michele; Marsh, Stuart

    2016-07-01

    Understanding planetary atmosphere-surface and extra-terrestrial-surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back in time to the mid 1970s, to examine time-varying changes such as the recent discovery of mass movement, tracking inter-year seasonal changes, and looking for occurrences of fresh craters. Within the EU FP-7 iMars project, UCL have developed a fully automated multi-resolution DTM processing chain, called the Co-registration ASP-Gotcha Optimised (CASP-GO), based on the open source NASA Ames Stereo Pipeline (ASP), which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX & HiRISE DTMs & ORIs, DLR have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed and is being applied to level-1 EDR images taken by the 4 NASA orbital cameras since 1976, using the HRSC map products (both mosaics and orbital strips) as a map-base. The project has also included Mars radar profiles from the Mars Express and Mars Reconnaissance Orbiter missions. A webGIS has been developed for displaying this time sequence of imagery, and a demonstration will be shown applied to one of the map-sheets. Automated quality control techniques are applied to screen for suitable images, and these are extended to detect temporal changes in features on the surface such as mass movements, streaks, spiders, impact craters, CO2 geysers and Swiss Cheese terrain. These data mining techniques are then being employed within a citizen science project within the Zooniverse family

  10. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques

    NASA Astrophysics Data System (ADS)

    Ivanov, Anton; Oberst, Jürgen; Yershov, Vladimir; Muller, Jan-Peter; Kim, Jung-Rack; Gwinner, Klaus; Van Gasselt, Stephan; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Sidiropoulos, Panagiotis

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay different epochs back to the mid-1970s and examine time-varying changes (such as the recent discovery of boulder movement, tracking inter-year seasonal changes and looking for occurrences of fresh craters). Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004, the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25 m nadir images) with 87% coverage, with more than 65% useful for stereo mapping. NASA began imaging the surface of Mars, initially from flybys in the 1960s, and then from the first orbiter with image resolution less than 100 m in the late 1970s from Viking Orbiter. The most recent orbiter, NASA MRO, has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20 cm) and ≈5% from CTX (≈6 m) in stereo. Within the iMars project (http://i-Mars.eu), a fully automated large-scale processing (“Big Data”) solution is being developed to generate the best possible multi-resolution DTM of Mars. In addition, HRSC OrthoRectified Images (ORI) will be used as a georeference basis so that all higher resolution ORIs will be co-registered to the HRSC DTM products (50-100 m grid) generated at DLR, and to those from CTX (6-20 m grid) and HiRISE (1-3 m grid), on a large-scale Linux cluster based at MSSL. The HRSC products will be employed to provide a geographic reference for all current, future and historical NASA products using automated co-registration based on feature points, and initial results will be shown here. In 2015, many of the NASA and ESA orbital images will be co-registered and the updated georeferencing

  11. Adaptive limit margin detection and limit avoidance

    NASA Astrophysics Data System (ADS)

    Yavrucuk, Ilkay

    This thesis concerns the development of methods, algorithms, and control laws for an adaptive flight envelope protection system to be used on both manned and unmanned aircraft. The proposed method lifts the requirement for detailed a priori information about aircraft dynamics by enabling adaptation to system uncertainty. The system can be used for limits that can either be measured or be related to selected measurable quantities. Specifically, an adaptive technique for predicting limit margins and calculating the corresponding allowable control or controller command margins of an aircraft is described in an effort to enable true carefree maneuvering. This new approach utilizes adaptive neural-network-based loops for the approximation of the required aircraft dynamics. For limits that reach their maximum value in steady state, a constructed estimator model is used to predict the maneuvering quasi-steady response behavior (the so-called dynamic trim) of the limit parameters and the corresponding control or command margins. Linearly Parameterized Neural Networks as well as Single Hidden Layer Neural Networks are used for on-line adaptation. The approach does not require any off-line training of the neural networks; instead, all learning is achieved during flight. Lyapunov-based weight update laws are derived. The method is extended to multi-channel control limiting for aircraft subject to multiple limits, and to automatic control and command limiting for UAVs. Simulation evaluations of the method using a linear helicopter model and a nonlinear Generalized Tiltrotor Simulation (GTRSIM) model are presented. Limit avoidance methods are integrated and tested through the implementation of an artificial pilot model and an active-stick controller model for tactile cueing in GTRSIM. Load factor, angle-of-attack, and torque limits are considered as examples. Similarly, the method is applied to Georgia Tech's Yamaha R-Max (GTMax
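
    The core idea, adapting an online model of a limit parameter's quasi-steady response and converting the remaining margin into an allowable command, can be caricatured in a few lines. The scalar least-mean-squares stand-in below only illustrates the concept; the thesis itself uses neural networks with Lyapunov-based update laws, and all numbers here are hypothetical.

    ```python
    import numpy as np

    # Highly simplified sketch: learn the steady-state gain from command u
    # to a limit parameter y, then invert it to get an allowable command.

    limit = 2.0          # e.g. a load-factor limit (hypothetical)
    theta = 0.5          # adaptive estimate of the steady-state gain dy/du
    lr = 0.1             # adaptation rate
    true_gain = 1.6      # unknown plant gain the estimator must learn

    rng = np.random.default_rng(3)
    for step in range(200):
        u = rng.uniform(0.0, 1.0)                  # pilot command
        y_ss = true_gain * u + rng.normal(0, .01)  # quasi-steady response
        theta += lr * (y_ss - theta * u) * u       # LMS weight update

    u_max = limit / theta                          # predicted command margin
    print(f"estimated gain {theta:.2f}, allowable command ≈ {u_max:.2f}")
    ```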

  12. Workload analyse of assembling process

    NASA Astrophysics Data System (ADS)

    Ghenghea, L. D.

    2015-11-01

    The workload is the most important indicator for managers responsible for industrial technological processes, whether these are automated, mechanized or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper deals with workload analyses of a largely manual roller-bearing assembly process in a large company with integrated bearing manufacturing. In these analyses, the work-sampling (delay sampling) technique was used to identify and classify all of the bearing assemblers' activities and to determine how much of the 480-minute working day is devoted to each activity. The study shows ways to increase process productivity without supplementary investment, and also indicates that automation of the process could be the way to reach maximum productivity.

  13. Nonlinear structural crash dynamics analyses

    NASA Technical Reports Server (NTRS)

    Hayduk, R. J.; Thomson, R. G.; Wittlin, G.; Kamat, M. P.

    1979-01-01

    Presented in this paper are the results of three nonlinear computer programs, KRASH, ACTION, and DYCAST, used to analyze the dynamic response of a twin-engine, low-wing airplane section subjected to an 8.38 m/s (27.5 ft/s) vertical impact velocity crash condition. This impact condition simulates the vertical sink rate in a shallow aircraft landing or takeoff accident. The three distinct analysis techniques for nonlinear dynamic response of aircraft structures are briefly examined and compared with each other and with the experimental data. The report contains brief descriptions of the three computer programs, the respective aircraft section mathematical models, pertinent data from the experimental test performed at NASA Langley, and a comparison of the analyses versus test results. Cost and accuracy comparisons between the three analyses are made to illustrate the possible uses of the different nonlinear programs and their future potential.

  14. Supplementary report on antilock analyses

    NASA Technical Reports Server (NTRS)

    Zellner, J. W.

    1985-01-01

    Generic modulator analysis was performed to quantify the effects of dump and reapply pressure rates on antilock stability and performance. The analysis included dump and reapply rates and lumped modulator delay. Based on the results of the generic modulator analysis and the earlier toggle optimization analysis (with the Mitsubishi modulator), a recommended preliminary antilock design was synthesized and its response and performance simulated. The results of these analyses are documented.

  15. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques: an overview and a request for scientific inputs.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Gwinner, Klaus; van Gasselt, Stephan; Ivanov, Anton; Morley, Jeremy; Houghton, Robert; Bamford, Steven; Yershov, Vladimir; Sidiropoulos, Panagiotis; Kim, Jungrack

    2014-05-01

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 7 years, especially in 3D imaging of surface shape (down to resolutions of 10cm) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the ability to overlay different epochs back to the mid-1970s, examine time-varying changes (such as the recent discovery of boulder movement [Orloff et al., 2011] or the sublimation of sub-surface ice revealed by meteoritic impact [Byrne et al., 2009]), as well as examine geophysical phenomena, such as surface roughness on different length scales. Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004 the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25m nadir images) with 87% coverage with images ≤25m and more than 65% useful for stereo mapping (e.g. atmosphere sufficiently clear). It has been demonstrated [Gwinner et al., 2010] that HRSC has the highest possible planimetric accuracy of ≤25m and is well co-registered with MOLA, which represents the global 3D reference frame. HRSC 3D and terrain-corrected image products therefore represent the best available 3D reference data for Mars. NASA began imaging the surface of Mars, initially from flybys in the 1960s, with the first orbiter with images ≤100m in the late 1970s from Viking Orbiter. The most recent orbiter to begin imaging, in November 2006, is the NASA MRO, which has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈20cm) and ≈5% from CTX (≈6m) in stereo. Unfortunately, for most of these NASA images, especially MGS, MO, VO and HiRISE, their accuracy of georeferencing is often worse than the quality of Mars reference data from HRSC. This reduces their value for analysing

  16. EU-FP7-iMars: Analysis of Mars Multi-Resolution Images using Auto-Coregistration, Data Mining and Crowd Source Techniques: One year on with a focus on auto-DTM, auto-coregistration and citizen science.

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Sidiropoulos, Panagiotis; Yershov, Vladimir; Gwinner, Klaus; van Gasselt, Stephan; Walter, Sebastian; Ivanov, Anton; Morley, Jeremy; Sprinks, James; Houghton, Robert; Bamford, Stephen; Kim, Jung-Rack

    2015-04-01

    Understanding the role of different planetary surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of 10cm) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the ability to overlay different epochs back to the mid-1970s and examine time-varying changes (such as impact craters, RSLs, CO2 geysers, gullies, boulder movements and a host of ice-related phenomena). Consequently we are seeing a dramatic improvement in our understanding of surface formation processes. Since January 2004 the ESA Mars Express has been acquiring global data, especially HRSC stereo (12.5-25m nadir images) with 98% coverage with images ≤100m and more than 70% useful for stereo mapping (e.g. atmosphere sufficiently clear). It has been demonstrated [Gwinner et al., 2010] that HRSC has the highest possible planimetric accuracy of ≤25m and is well co-registered with MOLA, which represents the global 3D reference frame. HRSC 3D and terrain-corrected image products therefore represent the best available 3D reference data for Mars. Recently [Gwinner et al., 2015] have shown the ability to generate mosaiced DTM and BRDF-corrected surface reflectance maps. NASA began imaging the surface of Mars, initially from flybys in the 1960s, with the first orbiter with images ≤100m in the late 1970s from Viking Orbiter. The most recent orbiter to begin imaging, in November 2006, is the NASA MRO, which has acquired surface imagery of around 1% of the Martian surface from HiRISE (at ≈25cm) and ≈5% from CTX (≈6m) in stereo. Unfortunately, for most of these NASA images, especially MGS, MO, VO and HiRISE, their accuracy of georeferencing is often worse than the quality of Mars reference data from HRSC. This reduces their value for analysing changes in time

  17. Neutronic Analyses of the Trade Demonstration Facility

    SciTech Connect

    Rubbia, C.

    2004-09-15

    The TRiga Accelerator-Driven Experiment (TRADE), to be performed in the TRIGA reactor of the ENEA-Casaccia Centre in Italy, consists of the coupling of an external proton accelerator to a target to be installed in the central channel of the reactor scrammed to subcriticality. This pilot experiment, aimed at a global demonstration of the accelerator-driven system concept, is based on an original idea of C. Rubbia. The present paper reports the results of some neutronic analyses focused on the feasibility of TRADE. Results show that all relevant experiments (at different power levels in a wide range of subcriticalities) can be carried out with relatively limited modifications to the present TRIGA reactor.

  18. Analysing photonic structures in plants

    PubMed Central

    Vignolini, Silvia; Moyroud, Edwige; Glover, Beverley J.; Steiner, Ullrich

    2013-01-01

    The outer layers of a range of plant tissues, including flower petals, leaves and fruits, exhibit an intriguing variation of microscopic structures. Some of these structures include ordered periodic multilayers and diffraction gratings that give rise to interesting optical appearances. The colour arising from such structures is generally brighter than pigment-based colour. Here, we describe the main types of photonic structures found in plants and discuss the experimental approaches that can be used to analyse them. These experimental approaches allow identification of the physical mechanisms producing structural colours with a high degree of confidence. PMID:23883949

  19. Laser power beaming system analyses

    NASA Technical Reports Server (NTRS)

    Zeiders, Glenn W., Jr.

    1993-01-01

    The successful demonstration of the PAMELA adaptive optics hardware and the fabrication of the BTOS truss structure were identified by the program office as the two most critical elements of the NASA power beaming program, so it was these that received attention during this program. Much of the effort was expended in direct program support at MSFC, but detailed technical analyses of the AMP deterministic control scheme and the BTOS truss structure (both the JPL design and a spherical one) were prepared and are attached, and recommendations are given.

  20. Summary of LDEF battery analyses

    NASA Technical Reports Server (NTRS)

    Johnson, Chris; Thaller, Larry; Bittner, Harlin; Deligiannis, Frank; Tiller, Smith; Sullivan, David; Bene, James

    1992-01-01

    Tests and analyses of NiCd, LiSO2, and LiCF batteries flown on the Long Duration Exposure Facility (LDEF) include results from NASA, Aerospace, and commercial labs. The LiSO2 cells illustrate six-year degradation of internal components acceptable for space applications, with up to 85 percent battery capacity remaining on discharge of some returned cells. LiCF batteries completed their mission but lost any remaining capacity due to internal degradation. Returned NiCd batteries tested at GSFC showed slight case distortion due to pressure build-up but were functioning as designed.

  1. Analysing photonic structures in plants.

    PubMed

    Vignolini, Silvia; Moyroud, Edwige; Glover, Beverley J; Steiner, Ullrich

    2013-10-06

    The outer layers of a range of plant tissues, including flower petals, leaves and fruits, exhibit an intriguing variation of microscopic structures. Some of these structures include ordered periodic multilayers and diffraction gratings that give rise to interesting optical appearances. The colour arising from such structures is generally brighter than pigment-based colour. Here, we describe the main types of photonic structures found in plants and discuss the experimental approaches that can be used to analyse them. These experimental approaches allow identification of the physical mechanisms producing structural colours with a high degree of confidence.

  2. Waste Stream Analyses for Nuclear Fuel Cycles

    SciTech Connect

    N. R. Soelberg

    2010-08-01

    A high-level study was performed in Fiscal Year 2009 for the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE) Advanced Fuel Cycle Initiative (AFCI) to provide information for a range of nuclear fuel cycle options (Wigeland 2009). At that time, some fuel cycle options could not be adequately evaluated since they were not well defined and lacked sufficient information. As a result, five families of these fuel cycle options are being studied during Fiscal Year 2010 by the Systems Analysis Campaign for the DOE NE Fuel Cycle Research and Development (FCRD) program. The quality and completeness of data available to date for the fuel cycle options is insufficient to perform quantitative radioactive waste analyses using recommended metrics. This study has been limited thus far to qualitative analyses of waste streams from the candidate fuel cycle options, because quantitative data for wastes from the front end, fuel fabrication, reactor core structure, and used fuel for these options is generally not yet available.

  3. Computational analyses of multilevel discourse comprehension.

    PubMed

    Graesser, Arthur C; McNamara, Danielle S

    2011-04-01

    The proposed multilevel framework of discourse comprehension includes the surface code, the textbase, the situation model, the genre and rhetorical structure, and the pragmatic communication level. We describe these five levels when comprehension succeeds and also when there are communication misalignments and comprehension breakdowns. A computer tool has been developed, called Coh-Metrix, that scales discourse (oral or print) on dozens of measures associated with the first four discourse levels. The measurement of these levels with an automated tool helps researchers track and better understand multilevel discourse comprehension. Two sets of analyses illustrate the utility of Coh-Metrix in discourse theory and educational practice. First, Coh-Metrix was used to measure the cohesion of the text base and situation model, as well as potential extraneous variables, in a sample of published studies that manipulated text cohesion. This analysis helped us better understand what was precisely manipulated in these studies and the implications for discourse comprehension mechanisms. Second, Coh-Metrix analyses are reported for samples of narrative and science texts in order to advance the argument that traditional text difficulty measures are limited because they fail to accommodate most of the levels of the multilevel discourse comprehension framework.
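
    One textbase-cohesion measure of the kind Coh-Metrix computes, content-word overlap between adjacent sentences, is easy to sketch. The snippet below is an illustration of the idea only, not the Coh-Metrix implementation, and the stopword list is deliberately tiny.

    ```python
    # Toy textbase-cohesion measure: Jaccard overlap of content words
    # between adjacent sentences, averaged over a short text.

    STOP = {"the", "a", "an", "of", "and", "to", "in", "is", "are",
            "was", "on", "can", "be"}

    def content_words(sentence):
        return {w.strip(".,;").lower() for w in sentence.split()} - STOP

    def adjacent_overlap(sentences):
        scores = []
        for s1, s2 in zip(sentences, sentences[1:]):
            w1, w2 = content_words(s1), content_words(s2)
            scores.append(len(w1 & w2) / max(1, len(w1 | w2)))
        return sum(scores) / len(scores)

    text = ["The model predicts comprehension.",
            "Comprehension depends on cohesion of the model.",
            "Cohesion can be measured automatically."]
    print(f"mean adjacent-sentence overlap: {adjacent_overlap(text):.2f}")
    ```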

  4. Analyses of containment structures with corrosion damage

    SciTech Connect

    Cherry, J.L.

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  5. THOR Turbulence Electron Analyser: TEA

    NASA Astrophysics Data System (ADS)

    Fazakerley, Andrew; Moore, Tom; Owen, Chris; Pollock, Craig; Wicks, Rob; Samara, Marilia; Rae, Jonny; Hancock, Barry; Kataria, Dhiren; Rust, Duncan

    2016-04-01

    Turbulence Heating ObserveR (THOR) is the first mission ever flown in space dedicated to plasma turbulence. The Turbulence Electron Analyser (TEA) will measure the plasma electron populations in the mission's Regions of Interest. It will collect a 3D electron velocity distribution with cadences as short as 5 ms. The instrument will be capable of measuring energies up to 30 keV. TEA consists of multiple electrostatic analyser heads arranged so as to measure electrons arriving from look directions covering the full sky, i.e. 4 pi solid angle. The baseline concept is similar to the successful FPI-DES instrument currently operating on the MMS mission. TEA is intended to have a similar angular resolution, but a larger geometric factor. In comparison to earlier missions, TEA improves on the measurement cadence. For example, MMS FPI-DES routinely operates at 30 ms cadence. The objective of measuring distributions at rates as fast as 5 ms is driven by the mission's scientific requirements to resolve electron gyroscale size structures, where plasma heating and fluctuation dissipation is predicted to occur. TEA will therefore be capable of making measurements of the evolution of distribution functions across thin (a few km) current sheets travelling past the spacecraft at up to 600 km/s, of the Power Spectral Density of fluctuations of electron moments and of distributions fast enough to match frequencies with waves expected to be dissipating turbulence (e.g. with 100 Hz whistler waves).

  6. Wavelet analysis of the roughness of sand grains (Analyse par ondelettes de la rugosité des grains de sable)

    NASA Astrophysics Data System (ADS)

    Drolon, Hervé; Hoyez, Bernard; Druaux, Fabrice; Faure, Alain

    1999-04-01

    By using the concept of multiresolution analysis and the wavelet transform, we develop a new shape descriptor for sedimentary particles which allows the analysis of their roughness at different scales. The performance of the method is tested on a problem of sediment classification and compared to the fractal dimension.
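
    In the spirit of this descriptor, the sketch below decomposes a synthetic radius-versus-angle grain profile with a Haar wavelet transform and reports detail energy per scale; it illustrates multiscale roughness, not the paper's exact descriptor or data.

    ```python
    import numpy as np

    # Haar multiresolution analysis of a synthetic grain outline: the
    # detail-coefficient energy per scale serves as a roughness signature.

    theta = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    profile = (1.0 + 0.10 * np.sin(3 * theta)        # gross shape
                   + 0.02 * np.sin(40 * theta))      # fine roughness

    signal = profile.copy()
    energies = []
    while signal.size > 1:
        approx = (signal[0::2] + signal[1::2]) / np.sqrt(2)
        detail = (signal[0::2] - signal[1::2]) / np.sqrt(2)
        energies.append(float(np.sum(detail**2)))    # energy at this scale
        signal = approx

    # Scale 1 is the finest; later scales are progressively coarser.
    for level, e in enumerate(energies, start=1):
        print(f"scale {level}: detail energy {e:.4f}")
    ```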

  7. Perturbation analyses of intermolecular interactions

    NASA Astrophysics Data System (ADS)

    Koyama, Yohei M.; Kobayashi, Tetsuya J.; Ueda, Hiroki R.

    2011-08-01

    Conformational fluctuations of a protein molecule are important to its function, and it is known that environmental molecules, such as water molecules, ions, and ligand molecules, significantly affect the function by changing the conformational fluctuations. However, it is difficult to systematically understand the role of environmental molecules because the intermolecular interactions related to the conformational fluctuations are complicated. To identify important intermolecular interactions with regard to the conformational fluctuations, we develop herein (i) distance-independent and (ii) distance-dependent perturbation analyses of the intermolecular interactions. We show that these perturbation analyses can be realized by performing (i) a principal component analysis using conditional expectations of truncated and shifted intermolecular potential energy terms and (ii) a functional principal component analysis using products of intermolecular forces and conditional cumulative densities. We refer to these analyses as intermolecular perturbation analysis (IPA) and distance-dependent intermolecular perturbation analysis (DIPA), respectively. To compare the IPA and the DIPA, we apply them to the alanine dipeptide isomerization in explicit water. Although the first IPA principal components discriminate two states (the α state and the PPII (polyproline II) + β states) for larger cutoff lengths, the separation between the PPII state and the β state is unclear in the second IPA principal components. On the other hand, for large cutoff values the DIPA eigenvalues converge faster than those of the IPA, and the top two DIPA principal components clearly identify the three states. Using the DIPA biplot, the contributions of the dipeptide-water interactions to each state are analyzed systematically. Since the DIPA improves the state identification and the convergence rate while retaining distance information, we conclude that the DIPA is a more practical method compared with the

  8. LANSCE beam current limiter

    SciTech Connect

    Gallegos, F.R.

    1996-06-01

    The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) provides personnel protection from prompt radiation due to accelerated beam. Active instrumentation, such as the Beam Current Limiter, is a component of the RSS. The current limiter is designed to limit the average current in a beam line below a specific level, thus minimizing the maximum current available for a beam spill accident. The beam current limiter is a self-contained, electrically isolated toroidal beam transformer which continuously monitors beam current. It is designed as fail-safe instrumentation. The design philosophy, hardware design, operation, and limitations of the device are described.

  9. LANSCE beam current limiter

    SciTech Connect

    Gallegos, F.R.

    1997-01-01

    The Radiation Security System (RSS) at the Los Alamos Neutron Science Center (LANSCE) provides personnel protection from prompt radiation due to accelerated beam. Active instrumentation, such as the beam current limiter, is a component of the RSS. The current limiter is designed to limit the average current in a beamline below a specific level, thus minimizing the maximum current available for a beam spill accident. The beam current limiter is a self-contained, electrically isolated toroidal beam transformer which continuously monitors beam current. It is designed as fail-safe instrumentation. The design philosophy, hardware design, operation, and limitations of the device are described. © 1997 American Institute of Physics.

  10. Uncertainty and Sensitivity Analyses Plan

    SciTech Connect

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
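
    A minimal sketch of the workflow such a plan prescribes, with a toy model standing in for the HEDR codes: sample uncertain inputs, propagate them through the model, summarize output uncertainty, and rank inputs by rank correlation. The model and distributions below are illustrative only.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    # Monte Carlo uncertainty propagation through a toy multiplicative dose
    # model, followed by a simple rank-correlation sensitivity ranking.

    rng = np.random.default_rng(4)
    n = 2000
    release    = rng.lognormal(0.0, 0.5, n)     # source-term factor
    dispersion = rng.uniform(0.5, 1.5, n)       # transport factor
    uptake     = rng.normal(1.0, 0.1, n)        # exposure factor

    dose = release * dispersion * uptake        # toy dose model

    print(f"dose median {np.median(dose):.2f}, "
          f"95th percentile {np.percentile(dose, 95):.2f}")
    for name, x in [("release", release), ("dispersion", dispersion),
                    ("uptake", uptake)]:
        rho, _ = spearmanr(x, dose)             # sensitivity ranking
        print(f"{name:10s} rank correlation with dose: {rho:+.2f}")
    ```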

  11. Chemical analyses of provided samples

    NASA Technical Reports Server (NTRS)

    Becker, Christopher H.

    1993-01-01

    Two batches of samples were received, and chemical analysis of the surface and near-surface regions of the samples was performed by the surface analysis by laser ionization (SALI) method. The samples included four one-inch optics and several paint samples. The analyses emphasized surface contamination or modification. In these studies, pulsed sputtering by 7 keV Ar+ and primarily single-photon ionization (SPI) by coherent 118 nm radiation (at approximately 5 x 10^5 W/cm^2) were used. For two of the samples, multiphoton ionization (MPI) at 266 nm (approximately 5 x 10^11 W/cm^2) was also used. Most notable among the results was the silicone contamination on Mg2 mirror 28-92, and that the Long Duration Exposure Facility (LDEF) paint sample had been enriched in K and Na and depleted in Zn, Si, B, and organic compounds relative to the control paint.

  12. Geomorphic analyses from space imagery

    NASA Technical Reports Server (NTRS)

    Morisawa, M.

    1985-01-01

    One of the most obvious applications of space imagery to geomorphological analyses is in the study of drainage patterns and channel networks. LANDSAT, high altitude photography and other types of remote sensing imagery are excellent for depicting stream networks on a regional scale because of their broad coverage in a single image. They offer a valuable tool for comparing and analyzing drainage patterns and channel networks all over the world. Three aspects considered in this geomorphological study are: (1) the origin, evolution and rates of development of drainage systems; (2) the topological studies of network and channel arrangements; and (3) the adjustment of streams to tectonic events and geologic structure (i.e., the mode and rate of adjustment).

  13. Genetic Analyses of Integrin Signaling

    PubMed Central

    Wickström, Sara A.; Radovanac, Korana; Fässler, Reinhard

    2011-01-01

    The development of multicellular organisms, as well as maintenance of organ architecture and function, requires robust regulation of cell fates. This is in part achieved by conserved signaling pathways through which cells process extracellular information and translate this information into changes in proliferation, differentiation, migration, and cell shape. Gene deletion studies in higher eukaryotes have assigned critical roles for components of the extracellular matrix (ECM) and their cellular receptors in a vast number of developmental processes, indicating that a large proportion of this signaling is regulated by cell-ECM interactions. In addition, genetic alterations in components of this signaling axis play causative roles in several human diseases. This review will discuss what genetic analyses in mice and lower organisms have taught us about adhesion signaling in development and disease. PMID:21421914

  14. Isotopic signatures by bulk analyses

    SciTech Connect

    Efurd, D.W.; Rokop, D.J.

    1997-12-01

    Los Alamos National Laboratory has developed a series of measurement techniques for identification of nuclear signatures by analyzing bulk samples. Two specific applications for isotopic fingerprinting to identify the origin of anthropogenic radioactivity in bulk samples are presented. The first example is the analyses of environmental samples collected in the US Arctic to determine the impact of dumping of radionuclides in this polar region. Analyses of sediment and biota samples indicate that for the areas sampled the anthropogenic radionuclide content of sediments was predominantly the result of the deposition of global fallout. The anthropogenic radionuclide concentrations in fish, birds and mammals were very low. It can be surmised that marine food chains are presently not significantly affected. The second example is isotopic fingerprinting of water and sediment samples from the Rocky Flats Facility (RFP). The largest source of anthropogenic radioactivity presently affecting surface-waters at RFP is the sediments that are currently residing in the holding ponds. One gram of sediment from a holding pond contains approximately 50 times more plutonium than 1 liter of water from the pond. Essentially 100% of the uranium in Ponds A-1 and A-2 originated as depleted uranium. The largest source of radioactivity in the terminal Ponds A-4, B-5 and C-2 was naturally occurring uranium and its decay product radium. The uranium concentrations in the waters collected from the terminal ponds contained 0.05% or less of the interim standard calculated derived concentration guide for uranium in waters available to the public. All of the radioactivity observed in soil, sediment and water samples collected at RFP was naturally occurring, the result of processes at RFP or the result of global fallout. No extraneous anthropogenic alpha, beta or gamma activities were detected. The plutonium concentrations in Pond C-2 appear to vary seasonally.

  15. Fourier Transform Mass Spectrometry: The Transformation of Modern Environmental Analyses

    PubMed Central

    Lim, Lucy; Yan, Fangzhi; Bach, Stephen; Pihakari, Katianna; Klein, David

    2016-01-01

    Unknown compounds in environmental samples are difficult to identify using standard mass spectrometric methods. Fourier transform mass spectrometry (FTMS) has revolutionized how environmental analyses are performed. With its unsurpassed mass accuracy, high resolution, and sensitivity, researchers now have a tool for difficult and complex environmental analyses. Two features of FTMS are responsible for changing how complex analyses are accomplished. First is the ability to determine, quickly and with high mass accuracy, the presence of unknown chemical residues in samples. For years, the field was limited by mass spectrometric methods that required knowing in advance which compounds were of interest. Second, by exploiting the high resolution together with the low detection limits of FTMS, analysts can dilute a sample sufficiently to minimize ionization changes caused by varied matrices. PMID:26784175
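
    A small worked example of the mass-accuracy argument (compound and measured value invented for illustration): a measured monoisotopic mass is matched against a candidate elemental formula by its error in parts per million.

      masses = {"C": 12.0, "H": 1.00782503, "N": 14.00307401, "O": 15.99491462}  # monoisotopic (u)

      def mono_mass(formula):
          """Monoisotopic mass of a neutral molecule, formula given as {element: count}."""
          return sum(masses[el] * n for el, n in formula.items())

      measured = 194.08040                            # hypothetical measured neutral mass (u)
      candidate = {"C": 8, "H": 10, "N": 4, "O": 2}   # caffeine, as an example candidate

      exact = mono_mass(candidate)
      ppm = (measured - exact) / exact * 1e6
      print(f"exact = {exact:.5f} u, error = {ppm:+.2f} ppm")  # ~+0.1 ppm: formula plausible

    At sub-ppm accuracy, only a handful of elemental formulas remain consistent with a measured mass, which is what makes identification of true unknowns practical.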

  16. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 3 2012-01-01 2012-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... by conducting additional analyses using any standard engineering economics method such as...

  17. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 10 Energy 3 2014-01-01 2014-01-01 false Uncertainty analyses. 436.24 Section 436.24 Energy... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... by conducting additional analyses using any standard engineering economics method such as...

  18. Detector limitations, STAR

    SciTech Connect

    Underwood, D. G.

    1998-07-13

    Every detector has limitations in terms of solid angle, particular technologies chosen, cracks due to mechanical structure, etc. If all of the presently planned parts of STAR [Solenoidal Tracker At RHIC] were in place, these factors would not seriously limit our ability to exploit the spin physics possible in RHIC. What is of greater concern at the moment is the construction schedule for components such as the Electromagnetic Calorimeters, and the limited funding for various levels of triggers.

  19. Computing Confidence Limits

    NASA Technical Reports Server (NTRS)

    Biggs, Robert E.

    1991-01-01

    Confidence Limits Program (CLP) calculates upper and lower confidence limits associated with observed outcome of N independent trials with M occurrences of event of interest. Calculates probability of event of interest for confidence levels of 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 96, 97, 98, and 99 percent. Provides graphical presentation of all limits and how they relate to maximum-likelihood value. Written in IBM PC BASIC.
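
    The abstract does not state CLP's algorithm; a modern equivalent of the calculation it describes, assuming exact (Clopper-Pearson) binomial limits for M occurrences in N independent trials, is sketched below in Python rather than the original IBM PC BASIC.

      from scipy.stats import beta

      def clopper_pearson(m, n, conf):
          """Exact two-sided confidence limits for m occurrences in n trials."""
          alpha = 1.0 - conf
          lower = 0.0 if m == 0 else beta.ppf(alpha / 2, m, n - m + 1)
          upper = 1.0 if m == n else beta.ppf(1 - alpha / 2, m + 1, n - m)
          return lower, upper

      n, m = 20, 3   # example: 3 events observed in 20 independent trials
      for conf in (0.50, 0.80, 0.90, 0.95, 0.99):
          lo, hi = clopper_pearson(m, n, conf)
          print(f"{conf:.0%}: [{lo:.3f}, {hi:.3f}]")
      print(f"maximum-likelihood value: {m / n:.3f}")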

  20. Photovoltaics: Life-cycle Analyses

    SciTech Connect

    Fthenakis V. M.; Kim, H.C.

    2009-10-02

    Life-cycle analysis is an invaluable tool for investigating the environmental profile of a product or technology from cradle to grave. Such life-cycle analyses of energy technologies are essential, especially as material and energy flows are often interwoven, and divergent emissions into the environment may occur at different life-cycle-stages. This approach is well exemplified by our description of material and energy flows in four commercial PV technologies, i.e., mono-crystalline silicon, multi-crystalline silicon, ribbon-silicon, and cadmium telluride. The same life-cycle approach is applied to the balance of system that supports flat, fixed PV modules during operation. We also discuss the life-cycle environmental metrics for a concentration PV system with a tracker and lenses to capture more sunlight per cell area than the flat, fixed system but requires large auxiliary components. Select life-cycle risk indicators for PV, i.e., fatalities, injures, and maximum consequences are evaluated in a comparative context with other electricity-generation pathways.

  1. Comparison between Inbreeding Analyses Methodologies.

    PubMed

    Esparza, Mireia; Martínez-Abadías, Neus; Sjøvold, Torstein; González-José, Rolando; Hernández, Miquel

    2015-12-01

    Surnames are widely used in inbreeding analysis, but the validity of the results has often been questioned due to failures to comply with the prerequisites of the method. Here we analyze inbreeding in Hallstatt (Austria) between the 17th and the 19th centuries using both genealogies and surnames. The high and significant correlation between the results obtained by the two methods demonstrates the validity of using surnames in this kind of study. On the other hand, the inbreeding values obtained (0.24 × 10⁻³ in the genealogical analysis and 2.66 × 10⁻³ in the surname analysis) are lower than those observed in Europe for this period and this kind of population, showing that Hallstatt's population was not as isolated as it appeared. The temporal trend of inbreeding in both analyses does not follow the general European pattern, but shows a maximum in 1850 with a subsequent decrease over the second half of the 19th century. This is probably due to the high migration rate implied by the construction of transport infrastructure around the 1870s.
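
    For context, surname-based inbreeding estimates are conventionally computed from isonymy following Crow and Mange (1965); the record does not specify which variant was used here. In the standard notation:

      F = \frac{P}{4}, \qquad F_r = \frac{1}{4} \sum_i p_i q_i, \qquad F = F_r + (1 - F_r) F_n

    where P is the proportion of isonymous (same-surname) marriages, p_i and q_i are the frequencies of surname i among husbands and wives, and F_r and F_n are the random and nonrandom components of inbreeding.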

  2. Helicopter tail rotor noise analyses

    NASA Technical Reports Server (NTRS)

    George, A. R.; Chou, S. T.

    1986-01-01

    A study was made of helicopter tail rotor noise, particularly that due to interactions with the main rotor tip vortices and with the fuselage separation mean wake. The tail rotor blade-main rotor tip vortex interaction is modelled as an airfoil of infinite span cutting through a moving vortex. The vortex and the geometry information required by the analyses are obtained through a free wake geometry analysis of the main rotor. The acoustic pressure-time histories for the tail rotor blade-vortex interactions are then calculated. These acoustic results are compared to tail rotor loading and thickness noise and are found to contribute significantly to overall tail rotor noise generation. Under most helicopter operating conditions, large acoustic pressure fluctuations can be generated by a series of skewed main rotor tip vortices passing through the tail rotor disk. The noise generation depends strongly upon the helicopter operating conditions and the location of the tail rotor relative to the main rotor.

  3. Proteins analysed as virtual knots

    NASA Astrophysics Data System (ADS)

    Alexander, Keith; Taylor, Alexander J.; Dennis, Mark R.

    2017-02-01

    Long, flexible physical filaments are naturally tangled and knotted, from macroscopic string down to long-chain molecules. The existence of knotting in a filament naturally affects its configuration and properties, and may be very stable or disappear rapidly under manipulation and interaction. Knotting has been previously identified in protein backbone chains, for which these mechanical constraints are of fundamental importance to their molecular functionality, despite their being open curves in which the knots are not mathematically well defined; knotting can only be identified by closing the termini of the chain somehow. We introduce a new method for resolving knotting in open curves using virtual knots, which are a wider class of topological objects that do not require a classical closure and so naturally capture the topological ambiguity inherent in open curves. We describe the results of analysing proteins in the Protein Data Bank by this new scheme, recovering and extending previous knotting results, and identifying topological interest in some new cases. The statistics of virtual knots in protein chains are compared with those of open random walks and Hamiltonian subchains on cubic lattices, identifying a regime of open curves in which the virtual knotting description is likely to be important.

  4. Network analyses in systems pharmacology

    PubMed Central

    Berger, Seth I.; Iyengar, Ravi

    2009-01-01

    Systems pharmacology is an emerging area of pharmacology which utilizes network analysis of drug action as one of its approaches. By considering drug actions and side effects in the context of the regulatory networks within which the drug targets and disease gene products function, network analysis promises to greatly increase our knowledge of the mechanisms underlying the multiple actions of drugs. Systems pharmacology can provide new approaches for drug discovery for complex diseases. The integrated approach used in systems pharmacology can allow for drug action to be considered in the context of the whole genome. Network-based studies are becoming an increasingly important tool in understanding the relationships between drug action and disease susceptibility genes. This review discusses how analysis of biological networks has contributed to the genesis of systems pharmacology and how these studies have improved global understanding of drug targets, suggested new targets and approaches for therapeutics, and provided a deeper understanding of the effects of drugs. Taken together, these types of analyses can lead to new therapeutic options while improving the safety and efficacy of existing medications. Contact: ravi.iyengar@mssm.edu PMID:19648136

  5. Proteins analysed as virtual knots

    PubMed Central

    Alexander, Keith; Taylor, Alexander J.; Dennis, Mark R.

    2017-01-01

    Long, flexible physical filaments are naturally tangled and knotted, from macroscopic string down to long-chain molecules. The existence of knotting in a filament naturally affects its configuration and properties, and may be very stable or disappear rapidly under manipulation and interaction. Knotting has been previously identified in protein backbone chains, for which these mechanical constraints are of fundamental importance to their molecular functionality, despite their being open curves in which the knots are not mathematically well defined; knotting can only be identified by closing the termini of the chain somehow. We introduce a new method for resolving knotting in open curves using virtual knots, which are a wider class of topological objects that do not require a classical closure and so naturally capture the topological ambiguity inherent in open curves. We describe the results of analysing proteins in the Protein Data Bank by this new scheme, recovering and extending previous knotting results, and identifying topological interest in some new cases. The statistics of virtual knots in protein chains are compared with those of open random walks and Hamiltonian subchains on cubic lattices, identifying a regime of open curves in which the virtual knotting description is likely to be important. PMID:28205562

  6. Accelerated safety analyses - structural analyses Phase I - structural sensitivity evaluation of single- and double-shell waste storage tanks

    SciTech Connect

    Becker, D.L.

    1994-11-01

    Accelerated Safety Analyses - Phase I (ASA-Phase I) have been conducted to assess the appropriateness of existing tank farm operational controls and/or limits as now stipulated in the Operational Safety Requirements (OSRs) and Operating Specification Documents, and to establish a technical basis for the waste tank operating safety envelope. Structural sensitivity analyses were performed to assess the response of the different waste tank configurations to variations in loading conditions, uncertainties in loading parameters, and uncertainties in material characteristics. Extensive documentation of the sensitivity analyses conducted and results obtained are provided in the detailed ASA-Phase I report, Structural Sensitivity Evaluation of Single- and Double-Shell Waste Tanks for Accelerated Safety Analysis - Phase I. This document provides a summary of the accelerated safety analyses sensitivity evaluations and the resulting findings.

  7. Limits to Inclusion

    ERIC Educational Resources Information Center

    Hansen, Janne Hedegaard

    2012-01-01

    In this article, I will argue that a theoretical identification of the limit to inclusion is needed in the conceptual identification of inclusion. On the one hand, inclusion is formulated as a vision that is, in principle, limitless. On the other hand, there seems to be an agreement that inclusion has a limit in the pedagogical practice. However,…

  8. Speed limits of aircraft

    NASA Technical Reports Server (NTRS)

    Everling, E

    1923-01-01

    This paper is restricted to the question of attainable speed limits and attacks the problem from different angles. Theoretical limits due to air resistance are presented along with design factors which may affect speed such as wing loads, wing areas, wing section shifting, landing speeds, drag-lift ratios, and power coefficients.

  9. Dose limits for astronauts

    NASA Technical Reports Server (NTRS)

    Sinclair, W. K.

    2000-01-01

    Radiation exposures to individuals in space can greatly exceed natural radiation exposure on Earth and possibly normal occupational radiation exposures as well. Consequently, procedures limiting exposures would be necessary. Limitations were proposed by the Radiobiological Advisory Panel of the National Academy of Sciences/National Research Council in 1970. This panel recommended short-term limits to avoid deterministic effects and a single career limit (of 4 Sv) based on a doubling of the cancer risk in men aged 35 to 55. Later, when risk estimates for cancer had increased and were recognized to be age and sex dependent, the NCRP, in Report No. 98 in 1989, recommended a range of career limits based on age and sex from 1 to 4 Sv. NCRP is again in the process of revising recommendations for astronaut exposure, partly because risk estimates have increased further and partly to recognize trends in limiting radiation exposure occupationally on the ground. The result of these considerations is likely to be similar short-term limits for deterministic effects but modified career limits.

  10. Characterizing limit order prices

    NASA Astrophysics Data System (ADS)

    Withanawasam, R. M.; Whigham, P. A.; Crack, Timothy Falcon

    2013-11-01

    A computational model of a limit order book is used to study the effect of different limit order distribution offsets. Reference prices such as the same-side best price, the contra-side best price, and the last traded price are considered in combination with different price offset distributions. We show that when characterizing limit order prices, varying the offset distribution only produces different behavior when the reference price is the contra-side best price. Irrespective of the underlying mechanisms used to compute the limit order prices, the shape of the price graph and the behavior of the average order book profile distribution are strikingly similar across all considered reference-price/offset-distribution combinations. This implies that existing averaging methods can cancel out variability in limit order book shape and attributes and may therefore be misleading.
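
    The modelled mechanism can be sketched in a few lines; the reference-price names and offset distributions below are assumptions chosen to mirror the description, not the paper's code.

      import numpy as np

      rng = np.random.default_rng(0)

      def limit_buy_price(reference, offset_dist):
          """A buy limit price: the chosen reference price minus a random offset."""
          if offset_dist == "uniform":
              offset = rng.uniform(0.0, 1.0)
          elif offset_dist == "exponential":
              offset = rng.exponential(0.5)
          else:
              raise ValueError(offset_dist)
          return reference - offset

      # Candidate reference prices: same-side best (best bid), contra-side best
      # (best ask), or last traded price; the paper's finding is that the offset
      # distribution matters mainly when the contra-side best is the reference.
      best_bid, best_ask, last_traded = 99.5, 100.5, 100.0
      print(limit_buy_price(best_ask, "exponential"))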

  11. Subgroup analyses of clinical effectiveness to support health technology assessments.

    PubMed

    Paget, Marie-Ange; Chuang-Stein, Christy; Fletcher, Christine; Reid, Carol

    2011-01-01

    Subgroup analysis is an integral part of access and reimbursement dossiers, in particular health technology assessments (HTAs), and HTA recommendations are often limited to subpopulations. HTA recommendations for subpopulations are not always clear or free of controversy. In this paper, we review several HTA guidelines regarding subgroup analyses. We describe good statistical principles for subgroup analyses of clinical effectiveness to support HTAs and include case examples where HTA recommendations were given for subpopulations only. Unlike regulatory submissions, pharmaceutical statisticians in most companies have had limited involvement in the planning, design, and preparation of HTA/payer submissions. We hope to change this by highlighting how pharmaceutical statisticians should contribute to payer submissions. This includes early engagement in reimbursement strategy discussions to influence the design, analysis, and interpretation of phase III randomized clinical trials as well as meta-analyses/network meta-analyses. The focus of this paper is on subgroup analyses relating to clinical effectiveness, as we believe this is the first key step of statistical involvement and influence in the preparation of HTA and reimbursement submissions.

  12. Designing forgiveness interventions: guidance from five meta-analyses.

    PubMed

    Recine, Ann C

    2015-06-01

    The Nursing Interventions Classification system includes forgiveness facilitation as part of the research-based taxonomy of nursing interventions. Nurses need practical guidance in finding the type of intervention that works best in the nursing realm. Five meta-analyses of forgiveness interventions were reviewed to illuminate best practice. The only studies included were meta-analyses of forgiveness interventions in which the authors calculated effect size. Forgiveness interventions were shown to be helpful in addressing mental/emotional health. Components of effective interventions include recalling the offense, empathizing with the offender, committing to forgive, and overcoming feelings of unforgiveness. The meta-analyses showed that people receiving forgiveness interventions reported more forgiveness than those who had no intervention. Forgiveness interventions resulted in more hope and less depression and anxiety than no treatment. A process-based intervention is more effective than a shorter cognitive decision-based model. Limitations of the meta-analyses included inconsistency of measures and a lack of consensus on a definition of forgiveness. Notwithstanding these limitations, the meta-analyses offer strong evidence of what contributes to the effectiveness of forgiveness interventions. The implications of the studies are useful for designing evidence-based clinical forgiveness interventions to enhance nursing practice.

  13. The seed bank longevity index revisited: limited reliability evident from a burial experiment and database analyses

    PubMed Central

    Saatkamp, Arne; Affre, Laurence; Dutoit, Thierry; Poschlod, Peter

    2009-01-01

    Background and Aims: Seed survival in the soil contributes to population persistence and community diversity, creating a need for reliable measures of soil seed bank persistence. Several methods estimate soil seed bank persistence, most of which count seedlings emerging from soil samples. Seasonality, depth distribution and presence (or absence) in vegetation are then used to classify a species' soil seed bank as persistent or transient, often synthesized into a longevity index. This study aims to determine whether counts of seedlings from soil samples yield reliable seed bank persistence estimates and whether these estimates are correlated with seed production.
    Methods: Seeds of 38 annual weeds taken from arable fields were buried in the field and their viability tested by germination and tetrazolium tests at 6-month intervals for 2.5 years. This direct measure of soil seed survival was compared with indirect estimates from the literature, which use seedling emergence from soil samples to determine seed bank persistence. Published databases were used to explore the generality of the influence of reproductive capacity on seed bank persistence estimates from seedling emergence data.
    Key Results: There was no relationship between a species' soil seed survival in the burial experiment and its seed bank persistence estimate from published data using seedling emergence from soil samples. The analysis of complementary data from published databases revealed that while seed bank persistence estimates based on seedling emergence from soil samples are generally correlated with seed production, estimates of seed banks from burial experiments are not.
    Conclusions: The results can be explained in terms of the seed size-seed number trade-off, which suggests that the greater number of smaller seeds is compensated for after germination. Soil seed bank persistence estimates correlated with seed production are therefore not useful for studies of population persistence or community diversity. Confusion of soil seed survival and seed production can be avoided by the separate use of soil seed abundance and experimental soil seed survival. PMID:19549641
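
    For reference, the longevity index discussed in the title is conventionally computed from database records of seed bank type (a standard formulation from the soil seed bank literature; the paper may critique a variant):

      L = \frac{SP + LP}{T + SP + LP}

    where T, SP and LP are the numbers of records classifying a species' seed bank as transient, short-term persistent and long-term persistent, respectively.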

  14. Including Fleet Usage Variability in Reliability Analyses for Both Safety and Economic Limit

    DTIC Science & Technology

    1977-08-01

    of categories which it is practical to use. The three other steps can be easily accomplished on a desk top or programmable calculator for any number of... programmable calculator by using the Poisson approximation as described in Section II. The code and necessary inputs are described in the Appendix. This code not... CODE The code given below is programmed in BASIC compatible with the Hewlett-Packard HP 9830 programmable calculator. Output from this program is given

  15. Analyses of Transistor Punchthrough Failures

    NASA Technical Reports Server (NTRS)

    Nicolas, David P.

    1999-01-01

    The failure of two transistors in the Altitude Switch Assembly for the Solid Rocket Booster, followed by two additional failures a year later, presented a challenge to failure analysts. These devices had worked successfully for many years on numerous missions. There was no history of failures with this type of device. Extensive checks of the test procedures gave no indication of a source of the cause. The devices were manufactured more than twenty years ago, and failure information on this lot date code was not readily available. External visual examination, radiography, PEID, and leak testing were performed with nominal results. Electrical testing indicated nearly identical base-emitter and base-collector characteristics (both forward and reverse) with a low-resistance short from emitter to collector. These characteristics are indicative of a classic failure mechanism called punchthrough. In failure analysis, punchthrough refers to a condition where a relatively low-voltage pulse causes the device to conduct very hard, producing localized areas of thermal runaway or "hot spots". At one or more of these hot spots, the excessive currents melt the silicon. Heavily doped emitter material diffuses through the base region to the collector, forming a diffusion pipe that shorts emitter to base to collector. Upon cooling, an alloy junction forms between the pipe and the base region. Generally, the hot spot (punchthrough site) is under the bond and no surface artifact is visible. The devices were delidded and the internal structures were examined microscopically. The gold emitter lead was melted on one device, but others had anomalies in the metallization around the intact emitter bonds. The SEM examination confirmed some anomalies to be cosmetic defects, while other anomalies were artifacts of the punchthrough site. Subsequent to these analyses, the contractor determined that some irregular testing procedures, heretofore unreported, occurred at the time of the failures. These testing

  16. Aerothermodynamic Analyses of Towed Ballutes

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.; Buck, Greg; Moss, James N.; Nielsen, Eric; Berger, Karen; Jones, William T.; Rudavsky, Rena

    2006-01-01

    A ballute (balloon-parachute) is an inflatable, aerodynamic drag device for application to planetary entry vehicles. Two challenging aspects of aerothermal simulation of towed ballutes are considered. The first challenge, simulation of a complete system including inflatable tethers and a trailing toroidal ballute, is addressed using the unstructured-grid, Navier-Stokes solver FUN3D. Auxiliary simulations of a semi-infinite cylinder using the rarefied flow, Direct Simulation Monte Carlo solver, DSV2, provide additional insight into limiting behavior of the aerothermal environment around tethers directly exposed to the free stream. Simulations reveal pressures higher than stagnation and corresponding large heating rates on the tether as it emerges from the spacecraft base flow and passes through the spacecraft bow shock. The footprint of the tether shock on the toroidal ballute is also subject to heating amplification. Design options to accommodate or reduce these environments are discussed. The second challenge addresses time-accurate simulation to detect the onset of unsteady flow interactions as a function of geometry and Reynolds number. Video of unsteady interactions measured in the Langley Aerothermodynamic Laboratory 20-Inch Mach 6 Air Tunnel and CFD simulations using the structured grid, Navier-Stokes solver LAURA are compared for flow over a rigid spacecraft-sting-toroid system. The experimental data provide qualitative information on the amplitude and onset of unsteady motion, which is captured in the numerical simulations. The presence of severe unsteady fluid-structure interactions is undesirable, and numerical simulation must be able to predict the onset of such motion.

  17. Optical limiting materials

    DOEpatents

    McBranch, Duncan W.; Mattes, Benjamin R.; Koskelo, Aaron C.; Heeger, Alan J.; Robinson, Jeanne M.; Smilowitz, Laura B.; Klimov, Victor I.; Cha, Myoungsik; Sariciftci, N. Serdar; Hummelen, Jan C.

    1998-01-01

    Optical limiting materials. Methanofullerenes, fulleroids and/or other fullerenes chemically altered for enhanced solubility, in liquid solution, and in solid blends with transparent glass (SiO₂) gels or polymers, or semiconducting (conjugated) polymers, are shown to be useful as optical limiters (optical surge protectors). The nonlinear absorption is tunable such that the energy transmitted through such blends saturates at high input energy per pulse over a wide range of wavelengths from 400-1100 nm by selecting the host material for its absorption wavelength and ability to transfer the absorbed energy into the optical limiting composition dissolved therein. This phenomenon should be generalizable to other compositions than substituted fullerenes.

  18. CONTROL LIMITER DEVICE

    DOEpatents

    DeShong, J.A.

    1960-03-01

    A control-limiting device for monitoring a control system is described. The system comprises a condition-sensing device, a condition-varying device exerting control over the condition, and a control means to actuate the condition-varying device. The control-limiting device integrates the total movement or other change of the condition-varying device over any interval of time during a continuum of overlapping periods of time, and if the total movement or change of the condition-varying device exceeds a preset value, the control-limiting device will switch control of the operated apparatus from automatic to manual.
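
    The patented scheme can be sketched conceptually as a sliding-window integrator with a trip condition; all parameters and names below are hypothetical, chosen only to illustrate the described behavior.

      from collections import deque

      class ControlLimiter:
          def __init__(self, window_steps, max_total_change):
              self.history = deque(maxlen=window_steps)   # recent per-step changes
              self.max_total_change = max_total_change    # preset trip value
              self.manual = False

          def record(self, change):
              """Record one step of condition-varying-device movement; True if tripped."""
              self.history.append(abs(change))
              if sum(self.history) > self.max_total_change:
                  self.manual = True   # switch from automatic to manual control
              return self.manual

      limiter = ControlLimiter(window_steps=10, max_total_change=5.0)
      for step_change in [0.2, 0.4, 3.0, 2.5]:
          limiter.record(step_change)
      print(limiter.manual)   # True: windowed total movement exceeded the preset value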

  19. Novel limiter pump topologies

    SciTech Connect

    Schultz, J.H.

    1981-01-01

    The use of limiter pumps as the principal plasma exhaust system of a magnetic confinement fusion device promises significant simplification compared to previously investigated divertor-based systems. Further simplifications, such as the integration of the exhaust system with a radio frequency heating system and with the main reactor shield and structure, are investigated below. The integrity of limiters in a reactor environment is threatened by many mechanisms, the most severe of which may be erosion by sputtering. Two novel topologies are suggested which allow high erosion without limiter failure.

  20. Genomic analyses of the CAM plant pineapple.

    PubMed

    Zhang, Jisen; Liu, Juan; Ming, Ray

    2014-07-01

    The innovation of crassulacean acid metabolism (CAM) photosynthesis in arid and/or low-CO2 conditions is a remarkable case of adaptation in flowering plants. Pineapple is the most important crop that utilizes CAM photosynthesis, and its genetic and genomic resources have been developed over many years. Genetic diversity studies using various types of DNA markers led to the reclassification of the two genera Ananas and Pseudananas and nine species into one genus Ananas and two species, A. comosus and A. macrodontes, with five botanical varieties in A. comosus. Five genetic maps have been constructed using F1 or F2 populations, and high-density genetic maps generated by genotyping by sequencing are essential resources for sequencing and assembling the pineapple genome and for marker-assisted selection. There are abundant expressed sequence tag resources but limited genomic sequences in pineapple. Genes involved in the CAM pathway have been analysed in several CAM plants, but only a few of them are from pineapple. A reference genome of pineapple is being generated and will accelerate genetic and genomic research in this major CAM crop. It will provide the foundation for studying the origin and regulatory mechanisms of CAM photosynthesis, and the opportunity to evaluate the classification of Ananas species and botanical cultivars.

  1. Limited Scleroderma (CREST Syndrome)

    MedlinePlus

    ... small, frequent meals Avoid spicy or fatty foods, chocolate, caffeine, and alcohol Don't exercise immediately before ... also may be helpful. Because limited scleroderma can affect your appearance and your ability to perform simple ...

  2. PLT rotating pumped limiter

    SciTech Connect

    Cohen, S.A.; Budny, R.V.; Corso, V.; Boychuck, J.; Grisham, L.; Heifetz, D.; Hosea, J.; Luyber, S.; Loprest, P.; Manos, D.

    1984-07-01

    A limiter with a specially contoured front face and the ability to rotate during tokamak discharges has been installed in a PLT pump duct. These features have been selected to handle the unique particle removal and heat load requirements of ICRF heating and lower-hybrid current-drive experiments. The limiter has been conditioned and commissioned in an ion-beam test stand by irradiation with 1 MW power, 200 ms duration beams of 40 keV hydrogen ions. Operation in PLT during ohmic discharges has proven the ability of the limiter to reduce localized heating caused by energetic electron bombardment and to remove about 2% of the ions lost to the PLT walls and limiters.

  3. Peak acceleration limiter

    NASA Technical Reports Server (NTRS)

    Chapman, C. P.

    1972-01-01

    Device is described that limits accelerations by shutting off shaker table power very rapidly in acceleration tests. Absolute value of accelerometer signal is used to trigger electronic switch which terminates test and sounds alarm.

  4. PEAK LIMITING AMPLIFIER

    DOEpatents

    Goldsworthy, W.W.; Robinson, J.B.

    1959-03-31

    A peak voltage amplitude limiting system adapted for use with a cascade-type amplifier is described. In its detailed aspects, the invention includes an amplifier having at least a first triode tube and a second triode tube, the cathode of the second tube being connected to the anode of the first tube. A peak limiter triode tube has its control grid coupled to the anode of the second tube and its anode connected to the cathode of the second tube. The operation of the limiter is controlled by a bias voltage source connected to the control grid of the limiter tube, and the output of the system is taken from the anode of the second tube.

  5. Classical-Quantum Limits

    NASA Astrophysics Data System (ADS)

    Oliynyk, Todd A.

    2016-12-01

    We introduce a new approach to analyzing the interaction between classical and quantum systems that is based on a limiting procedure applied to multi-particle Schrödinger equations. The limit equations obtained by this procedure, which we refer to as the classical-quantum limit, govern the interaction between classical and quantum systems, and they possess many desirable properties that are inherited in the limit from the multi-particle quantum system. As an application, we use the classical-quantum limit equations to identify the source of the non-local signalling that is known to occur in the classical-quantum hybrid scheme of Hall and Reginatto. We also derive the first order correction to the classical-quantum limit equation to obtain a fully consistent first order approximation to the Schrödinger equation that should be accurate for modeling the interaction between particles of disparate mass in the regime where the particles with the larger masses are effectively classical.

  6. Pawnee Nation Energy Option Analyses

    SciTech Connect

    Matlock, M.; Kersey, K.; Riding In, C.

    2009-07-31

    introduced two model energy codes Pawnee Nation should consider for adoption.
    Summary of Current and Expected Future Electricity Usage: The research team provided a summary overview of electricity usage patterns in current buildings and included discussion of known plans for new construction.
    Utility Options Review: Pawnee Nation electric utility options were analyzed through a four-phase process, which included: 1) summarizing the relevant utility background information; 2) gathering relevant utility assessment data; 3) developing a set of realistic Pawnee electric utility service options; and 4) analyzing the various Pawnee electric utility service options for the Pawnee Energy Team's consideration.
    III. Findings and Recommendations: Due to a lack of financial incentives for renewable energy, particularly at the state level, combined with mediocre renewable energy resources, renewable energy development opportunities are limited for Pawnee Nation. However, near-term potential exists for development of solar hot water at the gym and an exterior wood-fired boiler system at the tribe's main administrative building. Pawnee Nation should also explore options for developing LFGTE resources in collaboration with the City of Pawnee. Significant potential may also exist for development of bio-energy resources within the next decade. Pawnee Nation representatives should closely monitor market developments in the bio-energy industry and establish contacts with research institutions with which the tribe could potentially partner in grant-funded research initiatives. In addition, a substantial effort by the Kaw and Cherokee tribes is underway to pursue wind development at the Chilocco School Site in northern Oklahoma, where Pawnee is a joint landowner. Pawnee Nation representatives should become actively involved in these development discussions and should explore the potential for joint investment in wind development at the Chilocco site.

  7. The limits to tree height.

    PubMed

    Koch, George W; Sillett, Stephen C; Jennings, Gregory M; Davis, Stephen D

    2004-04-22

    Trees grow tall where resources are abundant, stresses are minor, and competition for light places a premium on height growth. The height to which trees can grow and the biophysical determinants of maximum height are poorly understood. Some models predict heights of up to 120 m in the absence of mechanical damage, but there are historical accounts of taller trees. Current hypotheses of height limitation focus on increasing water transport constraints in taller trees and the resulting reductions in leaf photosynthesis. We studied redwoods (Sequoia sempervirens), including the tallest known tree on Earth (112.7 m), in wet temperate forests of northern California. Our regression analyses of height gradients in leaf functional characteristics estimate a maximum tree height of 122-130 m barring mechanical damage, similar to the tallest recorded trees of the past. As trees grow taller, increasing leaf water stress due to gravity and path length resistance may ultimately limit leaf expansion and photosynthesis for further height growth, even with ample soil moisture.
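
    The extrapolation logic behind such an estimate can be sketched as a regress-and-extrapolate calculation: fit a height gradient in a leaf trait, then solve for the height at which the trait would reach an assumed biophysical floor. All numbers below are invented for illustration and are not the paper's data.

      import numpy as np

      height = np.array([60.0, 80.0, 95.0, 105.0, 112.0])    # sample heights (m)
      leaf_trait = np.array([8.0, 6.4, 5.3, 4.6, 4.1])       # trait declining with height

      slope, intercept = np.polyfit(height, leaf_trait, 1)   # linear height gradient
      trait_floor = 3.0   # assumed minimum value consistent with leaf function
      max_height = (trait_floor - intercept) / slope
      print(f"extrapolated height limit ~ {max_height:.0f} m")  # for these toy numbers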

  8. Information-limiting correlations

    PubMed Central

    Moreno-Bote, Rubén; Beck, Jeffrey; Kanitscheider, Ingmar; Pitkow, Xaq; Latham, Peter; Pouget, Alexandre

    2015-01-01

    Computational strategies used by the brain strongly depend on the amount of information that can be stored in population activity, which in turn strongly depends on the pattern of noise correlations. In vivo, noise correlations tend to be positive and proportional to the similarity in tuning properties. Such correlations are thought to limit information, which has led to the suggestion that decorrelation increases information. In contrast, we found, analytically and numerically, that decorrelation does not imply an increase in information. Instead, the only information-limiting correlations are what we refer to as differential correlations: correlations proportional to the product of the derivatives of the tuning curves. Unfortunately, differential correlations are likely to be very small and buried under correlations that do not limit information, making them particularly difficult to detect. We found, however, that the effect of differential correlations on information can be detected with relatively simple decoders. PMID:25195105
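
    The result can be stated compactly in standard linear-Fisher-information notation (a reconstruction for context, not quoted from the paper). With tuning curves f(s) and noise covariance Σ(s):

      I_F(s) = \mathbf{f}'(s)^{\mathsf T} \Sigma(s)^{-1} \mathbf{f}'(s), \qquad
      \Sigma(s) = \Sigma_0(s) + \epsilon \, \mathbf{f}'(s) \mathbf{f}'(s)^{\mathsf T}
      \;\Longrightarrow\;
      I_F(s) = \frac{I_0(s)}{1 + \epsilon I_0(s)} \;\le\; \frac{1}{\epsilon}

    where I_0 is the information in the absence of the differential-correlation term. However large the population grows, information saturates at 1/ε, which is why differential correlations, and only they, limit information.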

  9. SU(2) uncertainty limits

    NASA Astrophysics Data System (ADS)

    Shabbir, Saroosh; Björk, Gunnar

    2016-05-01

    Although progress has been made recently in defining nontrivial uncertainty limits for the SU(2) group, a description of the intermediate states bound by these limits remains lacking. In this paper we enumerate possible uncertainty relations for the SU(2) group that involve all three observables and that are, moreover, invariant under SU(2) transformations. We demonstrate, however, that these relations, even taken as a group, do not provide sharp, saturable bounds. To find sharp bounds, we systematically calculate the variance of the SU(2) operators for all pure states belonging to the N = 2 and N = 3 polarization excitation manifolds (corresponding to spin 1 and spin 3/2). Lastly, and perhaps counter to expectation, we note that even pure states can reach the maximum uncertainty limit.

  10. Force Limit System

    NASA Technical Reports Server (NTRS)

    Pawlik, Ralph; Krause, David; Bremenour, Frank

    2011-01-01

    The Force Limit System (FLS) was developed to protect test specimens from inadvertent overload. The load limit value is fully adjustable by the operator and works independently of the test system control as a mechanical (non-electrical) device. When a test specimen is loaded via an electromechanical or hydraulic test system, a chance of an overload condition exists. An overload applied to a specimen could result in irreparable damage to the specimen and/or fixturing. The FLS restricts the maximum load that an actuator can apply to a test specimen. When testing limited-run test articles or using very expensive fixtures, the use of such a device is highly recommended. Test setups typically use electronic peak protection, which can be the source of overload due to malfunctioning components or the inability to react quickly enough to load spikes. The FLS works independently of the electronic overload protection.

  11. Optimal Limited Contingency Planning

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Smith, David E.

    2003-01-01

    For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.

  12. Comparison of retrospective analyses of the global ocean heat content

    NASA Astrophysics Data System (ADS)

    Chepurin, Gennady A.; Carton, James A.

    1999-07-01

    In this study, we compare seven retrospective analyses of basin- to global-scale upper ocean temperature. The analyses span a minimum of 10 years during the 50-year period since World War II. Three of the analyses (WOA-94, WHITE, BMRC) are based on objective analysis and thus do not rely on a numerical forecast model. The remaining four (NCEP, WAJSOWICZ, ROSATI, SODA) are based on data assimilation in which the numerical forecast is provided by some form of the Geophysical Fluid Dynamics Laboratory Modular Ocean Model driven by historical winds. The comparison presented here is limited to heat content in the upper 250 m, information that is available for all analyses. The results are presented in three frequency bands: seasonal, interannual (periods of 1-5 years), and decadal (periods of 5-25 years). At seasonal frequencies, all of the analyses are quite similar. Otherwise, the differences among analyses are limited to the regions of the western boundary currents and some regions in the Southern Hemisphere. At interannual frequencies, significant differences appear between the objective analyses and the data assimilation analyses. Along the equator in the Pacific, where variability is dominated by El Niño, the objective analyses have somewhat noisier fields, as well as reduced variance prior to 1980 due to lack of observations. Still, the correlation among analyses generally exceeds 80% in this region. Along the equator in the Atlantic, the correlation is lower (30-60%), although inspection of the time series shows that the same biennial progression of warm and cool events appears in all analyses since 1980. In the midlatitude Pacific, agreement among objective analyses and data assimilation analyses is good. The analysis of Rosati et al. [Rosati, A., Gudgel, R., Miyakoda, K., 1995. Decadal analysis produced from an ocean assimilation system. Mon. Weather Rev., 123, 2206.] differs somewhat from the others apparently because in this analysis, the forecast model
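
    One plausible way to split a monthly heat-content series into the three bands used here is with simple band-pass and low-pass filters; the method and parameters below are assumptions for illustration, not necessarily the authors' procedure.

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 12.0   # samples per year (monthly data)
      t = np.arange(600) / fs
      series = (np.sin(2 * np.pi * t)              # seasonal cycle (1 yr)
                + 0.5 * np.sin(2 * np.pi * t / 3.5)  # interannual (~3.5 yr, ENSO-like)
                + 0.3 * np.sin(2 * np.pi * t / 15))  # decadal (~15 yr)

      def bandpass(x, low_yr, high_yr):
          """Keep variability with periods between low_yr and high_yr (years)."""
          b, a = butter(4, [1 / high_yr, 1 / low_yr], btype="band", fs=fs)
          return filtfilt(b, a, x)

      interannual = bandpass(series, 1.0, 5.0)          # 1-5 yr band
      b, a = butter(4, 1 / 5.0, btype="low", fs=fs)
      decadal = filtfilt(b, a, series)                  # periods longer than 5 yr
      seasonal = series - interannual - decadal         # residual, dominated by the annual cycle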

  13. Considerations for planning and evaluating economic analyses of telemental health.

    PubMed

    Luxton, David D

    2013-08-01

    The economic evaluation of telemental health (TMH) is necessary to inform ways to decrease the cost of delivering care, to improve access to care, and to make decisions about the allocation of resources. Previous reviews of telehealth economic analysis studies have concluded that there are significant methodological deficiencies and inconsistencies that limit the ability to make generalized conclusions about the costs and benefits of telehealth programs. Published economic evaluations specific to TMH are also limited. There are unique factors that influence costs in TMH that are necessary for those who are planning and evaluating economic analyses to consider. The purpose of this review is to summarize the main problems and limitations of published economic analyses, to discuss considerations specific to TMH, and to inform and encourage the economic evaluation of TMH in both the public and private sectors. The topics presented here include perspective of costs, direct and indirect costs, and technology, as well as research methodology considerations. The integration of economic analyses into effectiveness trials, the standardization of outcome measurement, and the development of TMH economic evaluation guidelines are recommended.

  14. Defined by Limitations

    ERIC Educational Resources Information Center

    Arriola, Sonya; Murphy, Katy

    2010-01-01

    Undocumented students are a population defined by limitations. Their lack of legal residency and any supporting paperwork (e.g., Social Security number, government-issued identification) renders them essentially invisible to the federal and state governments. They cannot legally work. In many states, they cannot legally drive. After the age of…

  15. Limits to Stability

    ERIC Educational Resources Information Center

    Cottey, Alan

    2012-01-01

    The author reflects briefly on what limited degree of global ecological stability and human cultural stability may be achieved, provided that humanity retains hope and does not give way to despair or hide in denial. These thoughts were triggered by a recent conference on International Stability and Systems Engineering. (Contains 5 notes.)

  16. The HEL Upper Limit

    NASA Astrophysics Data System (ADS)

    Billingsley, J. P.

    2002-07-01

    A threshold particle velocity, Vf, derived by Professor E.R. Fitzgerald for the onset of atomic lattice Disintegration Phenomena (LDP) is shown to exceed and/or compare rather well with the maximum experimental Hugoniot Elastic Limit (HEL) particle (mass) velocities (UpHEL) for selected hard strong mineral/ceramic materials.

  17. The Value of Limitations

    ERIC Educational Resources Information Center

    Hardy, Lee

    2006-01-01

    David Horner, a recent president of North Park College and Theological Seminary has suggested that, in light of the tension between the demands of free inquiry and the need for religious inculcation, Christian colleges have two options: either redefine academic freedom or limit it and be up front and principled about it. In this article, the…

  18. Learning without Limits

    ERIC Educational Resources Information Center

    Hart, Susan; Dixon, Annabelle; Drummond, Mary Jane; McIntyre, Donald

    2004-01-01

    This book explores ways of teaching that are free from determinist beliefs about ability. In a detailed critique of the practices of ability labelling and ability-focused teaching, "Learning without Limits" examines the damage these practices can do to young people, teachers and the curriculum. Drawing on a research project at the…

  19. The Limits of Laughter.

    ERIC Educational Resources Information Center

    Mindess, Harvey

    1983-01-01

    Three incidents which elucidate the limits of laughter are described. Most persons enjoy humor as comic relief, but when humor strikes a blow at something they hold dear, they find it very hard to laugh. People are upset by an irreverent attitude toward things they hold in esteem. (RM)

  20. Fracture mechanics validity limits

    NASA Technical Reports Server (NTRS)

    Lambert, Dennis M.; Ernst, Hugo A.

    1994-01-01

    Fracture behavior is characterized by a dramatic loss of strength compared to elastic deformation behavior. Fracture parameters have been developed, and each exhibits a range within which it is valid for predicting growth. Each is limited by the assumptions made in its development: all are defined within a specific context. For example, the stress intensity parameter, K, and the crack driving force, G, are derived using an assumption of linear elasticity. To use K or G, the zone of plasticity must be small compared to the physical dimensions of the object being loaded. This ensures an elastic response, and in this context K and G work well. Rice's J-integral has been used beyond the limits imposed on K and G. J requires an assumption of nonlinear elasticity, which is not characteristic of real material behavior but is thought to be a reasonable approximation if unloading is kept to a minimum. As well, the constraint cannot change dramatically (typically, the crack extension is limited to ten percent of the initial remaining ligament length). Rice et al. investigated the properties required of J-type parameters, J_x, and showed that the time rate, dJ_x/dt, must not be a function of the crack extension rate, da/dt. Ernst devised the modified-J parameter, J_M, that meets this criterion. J_M correlates fracture data to much higher crack growth than does J. Ultimately, a limit to the validity of J_M is anticipated, and this has been estimated to be at a crack extension of about 40 percent of the initial remaining ligament length. None of the various parameters can be expected to describe fracture in an environment of gross plasticity, in which case the process is better described by deformation parameters, e.g., stress and strain. In the current study, various schemes to identify the onset of the plasticity-dominated behavior, i.e., the end of fracture mechanics validity, are presented. Each validity limit parameter is developed in
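
    For reference, the textbook linear-elastic relations connecting these parameters are (standard results, not taken from this record):

      G = \frac{K_I^2}{E'}, \qquad
      E' = \begin{cases} E & \text{(plane stress)} \\ E/(1-\nu^2) & \text{(plane strain)} \end{cases}, \qquad
      J = \int_\Gamma \left( W \, dy - T_i \frac{\partial u_i}{\partial x} \, ds \right)

    where W is the strain energy density, T_i the traction on contour Γ, and u_i the displacements; J reduces to G in the linear-elastic limit, which is why J-based parameters extend, rather than replace, the K and G framework.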

  1. Finite Element analyses of soil bioengineered slopes

    NASA Astrophysics Data System (ADS)

    Tamagnini, Roberto; Switala, Barbara Maria; Sudan Acharya, Madhu; Wu, Wei; Graf, Frank; Auer, Michael; te Kamp, Lothar

    2014-05-01

    Soil bioengineering methods are not only effective from an economical point of view, but they are also interesting as fully ecological solutions. The presented project aims to define a numerical model which includes the impact of vegetation on slope stability, considering both mechanical and hydrological effects. In this project, a constitutive model has been developed that accounts for the multi-phase nature of the soil, namely the partly saturated condition, and it also includes the effects of a biological component. The constitutive equation is implemented in the Finite Element (FE) software Comes-Geo with an implicit integration scheme that accounts for the collapse of the soil structure due to wetting. The mathematical formulation of the constitutive equations is introduced by means of thermodynamics, and it simulates the growth of the biological system over time. The numerical code is then applied in the analysis of an idealized rainfall-induced landslide. The slope is analyzed for vegetated and non-vegetated conditions. The final results allow quantitative assessment of the impact of vegetation on slope stability. This allows drawing conclusions and deciding whether it is worthwhile to use soil bioengineering methods in slope stabilization instead of traditional approaches. The application of the FE method shows some advantages with respect to the commonly used limit equilibrium analyses, because it can account for the real coupled strain-diffusion nature of the problem. The mechanical strength of roots is in fact influenced by the stress evolution within the slope. Moreover, the FE method does not need a pre-definition of any failure surface. The FE method can also be used to monitor the progressive failure of the soil bioengineered system, as it calculates the displacements and strains of the model slope. The preliminary study results show that the formulated equations can be useful for analysis and evaluation of different soil bio

  2. HLA region excluded by linkage analyses of early onset periodontitis

    SciTech Connect

    Sun, C.; Wang, S.; Lopez, N.

    1994-09-01

    Previous studies suggested that HLA genes may influence susceptibility to early-onset periodontitis (EOP). Segregation analyses indicate that EOP may be due to a single major gene. We conducted linkage analyses to assess possible HLA effects on EOP. Fifty families with two or more close relatives affected by EOP were ascertained in Virginia and Chile. A microsatellite polymorphism within the HLA region (at the tumor necrosis factor beta locus) was typed using PCR. Linkage analyses used a dominant model most strongly supported by previous studies. Assuming locus homogeneity, our results exclude a susceptibility gene within 10 cM on either side of our marker locus. This encompasses all of the HLA region. Analyses assuming alternative models gave qualitatively similar results. Allowing for locus heterogeneity, our data still provide no support for HLA-region involvement. However, our data do not statistically exclude (LOD < -2.0) hypotheses of disease-locus heterogeneity, including models where up to half of our families could contain an EOP disease gene located in the HLA region. This is due to the limited power of even our relatively large collection of families and the inherent difficulties of mapping genes for disorders that have complex and heterogeneous etiologies. Additional statistical analyses, recruitment of families, and typing of flanking DNA markers are planned to more conclusively address these issues with respect to the HLA region and other candidate locations in the human genome. Additional results for markers covering most of the human genome will also be presented.
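
    The exclusion criterion quoted above (LOD < -2.0) is the standard linkage threshold; in the usual notation (a reconstruction for context, not quoted from the record):

      \mathrm{LOD}(\theta) = \log_{10} \frac{L(\theta)}{L(\theta = 1/2)}

    where L(θ) is the pedigree likelihood at recombination fraction θ; linkage at a given θ is conventionally excluded when LOD(θ) ≤ -2, i.e., when the data are at least 100 times more likely under free recombination.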

  3. Limits in decision making arise from limits in memory retrieval

    PubMed Central

    Giguère, Gyslain; Love, Bradley C.

    2013-01-01

    Some decisions, such as predicting the winner of a baseball game, are challenging in part because outcomes are probabilistic. When making such decisions, one view is that humans stochastically and selectively retrieve a small set of relevant memories that provides evidence for competing options. We show that optimal performance at test is impossible when retrieving information in this fashion, no matter how extensive training is, because limited retrieval introduces noise into the decision process that cannot be overcome. One implication is that people should be more accurate in predicting future events when trained on idealized rather than on the actual distributions of items. In other words, we predict the best way to convey information to people is to present it in a distorted, idealized form. Idealization of training distributions is predicted to reduce the harmful noise induced by immutable bottlenecks in people’s memory retrieval processes. In contrast, machine learning systems that selectively weight (i.e., retrieve) all training examples at test should not benefit from idealization. These conjectures are strongly supported by several studies and supporting analyses. Unlike machine systems, people’s test performance on a target distribution is higher when they are trained on an idealized version of the distribution rather than on the actual target distribution. Optimal machine classifiers modified to selectively and stochastically sample from memory match the pattern of human performance. These results suggest firm limits on human rationality and have broad implications for how to train humans tasked with important classification decisions, such as radiologists, baggage screeners, intelligence analysts, and gamblers. PMID:23610402
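
    The claimed bottleneck is easy to reproduce in a toy simulation; the task setup below (a binary outcome predicted by majority vote over k stochastically retrieved memories) is an assumption for illustration, not the paper's exact paradigm.

      import numpy as np

      rng = np.random.default_rng(1)
      p, k, trials = 0.7, 5, 100_000   # outcome "1" wins with probability p

      outcomes = rng.random(trials) < p          # the outcome to be predicted each trial
      memories = rng.random((trials, k)) < p     # k noisy memories retrieved per decision
      preds = memories.sum(axis=1) > k / 2       # majority vote over the retrieved set
      acc_limited = np.mean(preds == outcomes)   # ~0.63: retrieval noise caps accuracy

      # Idealized training: every retrieved memory points to the majority outcome,
      # so the decision is deterministic and accuracy equals the optimal p = 0.7.
      acc_idealized = np.mean(outcomes)
      print(f"limited stochastic retrieval: {acc_limited:.3f}; idealized: {acc_idealized:.3f}")

    The gap between the two numbers persists no matter how many training trials are provided, which mirrors the paper's argument that idealized training distributions sidestep an immutable retrieval bottleneck.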

  4. MELCOR analyses for accident progression issues

    SciTech Connect

    Dingman, S.E.; Shaffer, C.J.; Payne, A.C.; Carmel, M.K. )

    1991-01-01

    Results of calculations performed with MELCOR and HECTR in support of the NUREG-1150 study are presented in this report. The analyses examined a wide range of issues. The analyses included integral calculations covering an entire accident sequence, as well as calculations that addressed specific issues that could affect several accident sequences. The results of the analyses for Grand Gulf, Peach Bottom, LaSalle, and Sequoyah are described, and the major conclusions are summarized. 23 refs., 69 figs., 8 tabs.

  5. Electron/proton spectrometer certification documentation analyses

    NASA Technical Reports Server (NTRS)

    Gleeson, P.

    1972-01-01

    A compilation of analyses generated during the development of the electron-proton spectrometer for the Skylab program is presented. The data documents the analyses required by the electron-proton spectrometer verification plan. The verification plan was generated to satisfy the ancillary hardware requirements of the Apollo Applications program. The certification of the spectrometer requires that various tests, inspections, and analyses be documented, approved, and accepted by reliability and quality control personnel of the spectrometer development program.

  6. Database-Driven Analyses of Astronomical Spectra

    NASA Astrophysics Data System (ADS)

    Cami, Jan

    2012-03-01

    species to the fullerene species C60 and C70 [4]. Given the large number and variety of molecules detected in space, molecular infrared spectroscopy can be used to study pretty much any astrophysical environment that is not too energetic to dissociate the molecules. At the lowest energies, it is interesting to note that molecules such as CN have been used to measure the temperature of the Cosmic Microwave Background (see e.g., Ref. 15). The great diagnostic potential of infrared molecular spectroscopy comes at a price though. Extracting the physical parameters from the observations requires expertise in knowing how various physical processes and instrumental characteristics play together in producing the observed spectra. In addition to the astronomical aspects, this often includes interpreting and understanding the limitations of laboratory data and quantum-chemical calculations; the study of the interaction of matter with radiation at microscopic scales (called radiative transfer, akin to ray tracing) and the effects of observing (e.g., smoothing and resampling) on the resulting spectra and possible instrumental effects (e.g., fringes). All this is not trivial. To make matters worse, observational spectra often contain many components, and might include spectral contributions stemming from very different physical conditions. Fully analyzing such observations is thus a time-consuming task that requires mastery of several techniques. And with ever-increasing rates of observational data acquisition, it seems clear that in the near future, some form of automation is required to handle the data stream. It is thus appealing to consider what part of such analyses could be done without too much human intervention. Two different aspects can be separated: the first step involves simply identifying the molecular species present in the observations. Once the molecular inventory is known, we can try to extract the physical parameters from the observed spectral properties. For both

  7. 7 CFR 94.102 - Analyses available.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS... analyses for total ash, fat by acid hydrolysis, moisture, salt, protein, beta-carotene,...

  8. 7 CFR 94.102 - Analyses available.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ..., Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED) COMMODITY LABORATORY TESTING PROGRAMS... analyses for total ash, fat by acid hydrolysis, moisture, salt, protein, beta-carotene,...

  9. Telescopic limiting magnitudes

    NASA Technical Reports Server (NTRS)

    Schaefer, Bradley E.

    1990-01-01

    The prediction of the magnitude of the faintest star visible through a telescope by a visual observer is a difficult problem in physiology. Many prediction formulas have been advanced over the years, but most do not even consider the magnification used. Here, the prediction algorithm problem is attacked with two complementary approaches: (1) First, a theoretical algorithm was developed based on physiological data for the sensitivity of the eye. This algorithm also accounts for the transmission of the atmosphere and the telescope, the brightness of the sky, the color of the star, the age of the observer, the aperture, and the magnification. (2) Second, 314 observed values for the limiting magnitude were collected as a test of the formula. It is found that the formula does accurately predict the average observed limiting magnitudes under all conditions.
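
    Illustrative sketch (a crude textbook-style estimate, not Schaefer's physiological algorithm): the 6.0 mag naked-eye limit, the 7 mm dark-adapted pupil, and the capped magnification bonus below are all assumptions of this example.

        import math

        def limiting_magnitude(aperture_mm, magnification,
                               naked_eye_limit=6.0, eye_pupil_mm=7.0):
            # Light grasp relative to the dark-adapted eye.
            gain = 5.0 * math.log10(aperture_mm / eye_pupil_mm)
            # Higher magnification darkens the sky background once the exit
            # pupil shrinks below the eye's pupil; the 2.5*log10 bonus and
            # its cap are ad hoc choices of this sketch.
            exit_pupil_mm = aperture_mm / magnification
            bonus = max(0.0, 2.5 * math.log10(eye_pupil_mm / exit_pupil_mm))
            return naked_eye_limit + gain + min(bonus, 1.5)

        print(f"{limiting_magnitude(200, 100):.1f}")  # a 20 cm scope at 100x

    Even this toy version makes the point the record stresses: any sensible prediction must depend on magnification, not just aperture.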

  10. Heat flux limiting sleeves

    DOEpatents

    Harris, William G.

    1985-01-01

    A heat limiting tubular sleeve extending over only a portion of a tube having a generally uniform outside diameter, the sleeve being open on both ends, having one end thereof larger in diameter than the other end thereof and having a wall thickness which decreases in the same direction as the diameter of the sleeve decreases so that the heat transfer through the sleeve and tube is less adjacent the large diameter end of the sleeve than adjacent the other end thereof.

  11. Respiratory factors limiting exercise.

    PubMed

    Bye, P T; Farkas, G A; Roussos, C

    1983-01-01

    The question of respiratory factors limiting exercise has been examined in terms of possible limitations arising from the function of gas exchange, the respiratory mechanics, the energetics of the respiratory muscles, or the development of respiratory muscle fatigue. Exercise capacity is curtailed in the presence of marked hypoxia, and this is readily observed in patients with chronic airflow limitation and interstitial lung disease and in some athletes at high intensities of exercise. In patients with interstitial lung disease, gas exchange abnormality--partly the result of diffusion disequilibrium for oxygen transfer--occurs during exercise despite abnormally high ventilations. In contrast, in certain athletes arterial hypoxemia has been documented during heavy exercise, apparently as a result of relative hypoventilation. During strenuous exercise the maximum expiratory flow volume curves are attained both by patients with chronic airflow limitation and by normal subjects, in particular when they breathe dense gas, so that a mechanical constraint is imposed on further increases in ventilation. Similarly, the force velocity characteristics of the inspiratory muscles may also impose a constraint to further increases in inspiratory flows that affects the ability to increase ventilation. In addition, the oxygen cost of maintaining high ventilations is large. Analysis of results from blood flow experiments reveals a substantial increase in blood flow to the respiratory muscles during exercise, with the result that oxygen supply to the rest of the body may be lessened. Alternatively, high exercise ventilations may not be sustained indefinitely owing to the development of respiratory muscle fatigue that results in hypoventilation and reduced arterial oxygen tension.

  12. Quantum limits of thermometry

    SciTech Connect

    Stace, Thomas M.

    2010-07-15

    The precision of typical thermometers consisting of N particles scales as ~1/√N. For high-precision thermometry and thermometric standards, this presents an important theoretical noise floor. Here it is demonstrated that thermometry may be mapped onto the problem of phase estimation, and using techniques from optimal phase estimation, it follows that the scaling of the precision of a thermometer may in principle be improved to ~1/N, representing a Heisenberg limit to thermometry.
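
    For reference, the two scalings can be stated via the quantum Cramér-Rao bound of quantum metrology (a standard formulation, not an equation quoted from this record), with F_Q the quantum Fisher information of the N-particle probe with respect to temperature:

        \delta T \;\gtrsim\; \frac{1}{\sqrt{F_Q(T)}}, \qquad
        F_Q \propto N \;\Rightarrow\; \delta T \sim N^{-1/2}
        \ (\text{independent probes}), \qquad
        F_Q \propto N^2 \;\Rightarrow\; \delta T \sim N^{-1}
        \ (\text{Heisenberg scaling}).

    The mapping to phase estimation matters because the F_Q ∝ N² case is exactly the regime that entangled-probe phase-estimation strategies are known to reach.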

  13. Limits of social mobilization

    PubMed Central

    Rutherford, Alex; Cebrian, Manuel; Dsouza, Sohan; Moro, Esteban; Pentland, Alex; Rahwan, Iyad

    2013-01-01

    The Internet and social media have enabled the mobilization of large crowds to achieve time-critical feats, ranging from mapping crises in real time, to organizing mass rallies, to conducting search-and-rescue operations over large geographies. Despite significant success, selection bias may lead to inflated expectations of the efficacy of social mobilization for these tasks. What are the limits of social mobilization, and how reliable is it in operating at these limits? We build on recent results on the spatiotemporal structure of social and information networks to elucidate the constraints they pose on social mobilization. We use the DARPA Network Challenge as our working scenario, in which social media were used to locate 10 balloons across the United States. We conduct high-resolution simulations for referral-based crowdsourcing and obtain a statistical characterization of the population recruited, geography covered, and time to completion. Our results demonstrate that the outcome is plausible without the presence of mass media but lies at the limit of what time-critical social mobilization can achieve. Success relies critically on highly connected individuals willing to mobilize people in distant locations, overcoming the local trapping of diffusion in highly dense areas. However, even under these highly favorable conditions, the risk of unsuccessful search remains significant. These findings have implications for the design of better incentive schemes for social mobilization. They also call for caution in estimating the reliability of this capability. PMID:23576719

  14. LIMITS ON QUAOAR'S ATMOSPHERE

    SciTech Connect

    Fraser, Wesley C.; Gwyn, Stephen; Kavelaars, J. J.; Trujillo, Chad; Stephens, Andrew W.; Gimeno, German

    2013-09-10

    Here we present high cadence photometry taken by the Acquisition Camera on Gemini South, of a close passage by the ≈540 km radius Kuiper belt object, (50000) Quaoar, of a r' = 20.2 background star. Observations before and after the event show that the apparent impact parameter of the event was 0.019″ ± 0.004″, corresponding to a close approach of 580 ± 120 km to the center of Quaoar. No signatures of occultation by either Quaoar's limb or its potential atmosphere are detectable in the relative photometry of Quaoar and the target star, which were unresolved during closest approach. From this photometry we are able to put constraints on any potential atmosphere Quaoar might have. Using a Markov chain Monte Carlo and likelihood approach, we place pressure upper limits on sublimation supported, isothermal atmospheres of pure N₂, CO, and CH₄. For N₂ and CO, the upper limit surface pressures are 1 and 0.7 μbar, respectively. The surface temperature required for such low sublimation pressures is ≈33 K, much lower than Quaoar's mean temperature of ≈44 K measured by others. We conclude that Quaoar cannot have an isothermal N₂ or CO atmosphere. We cannot eliminate the possibility of a CH₄ atmosphere, but place upper surface pressure and mean temperature limits of ≈138 nbar and ≈44 K, respectively.
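
    Illustrative sketch (a grid scan standing in for the record's MCMC/likelihood machinery): scan the pressure parameter, weight by the photometric likelihood, and integrate the posterior out to 95%. The Gaussian noise level, the linear dimming model, and all numbers below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy occultation photometry: flux ratios near closest approach,
        # consistent with no atmosphere (noise level and sample size are
        # invented).
        sigma = 0.01
        flux = 1.0 + rng.normal(0.0, sigma, 200)

        def model(p_ubar):
            # Hypothetical linear dimming per microbar; a placeholder for a
            # real refractive/absorptive atmosphere model, which it is not.
            return 1.0 - 0.02 * p_ubar

        pressures = np.linspace(0.0, 2.0, 2001)
        dx = pressures[1] - pressures[0]
        loglike = np.array([-0.5 * np.sum((flux - model(p)) ** 2) / sigma ** 2
                            for p in pressures])
        post = np.exp(loglike - loglike.max())
        post /= post.sum() * dx                      # normalize the posterior
        upper95 = pressures[np.searchsorted(np.cumsum(post) * dx, 0.95)]
        print(f"95% upper limit on surface pressure: {upper95:.3f} microbar (toy)")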

  15. Limits of social mobilization.

    PubMed

    Rutherford, Alex; Cebrian, Manuel; Dsouza, Sohan; Moro, Esteban; Pentland, Alex; Rahwan, Iyad

    2013-04-16

    The Internet and social media have enabled the mobilization of large crowds to achieve time-critical feats, ranging from mapping crises in real time, to organizing mass rallies, to conducting search-and-rescue operations over large geographies. Despite significant success, selection bias may lead to inflated expectations of the efficacy of social mobilization for these tasks. What are the limits of social mobilization, and how reliable is it in operating at these limits? We build on recent results on the spatiotemporal structure of social and information networks to elucidate the constraints they pose on social mobilization. We use the DARPA Network Challenge as our working scenario, in which social media were used to locate 10 balloons across the United States. We conduct high-resolution simulations for referral-based crowdsourcing and obtain a statistical characterization of the population recruited, geography covered, and time to completion. Our results demonstrate that the outcome is plausible without the presence of mass media but lies at the limit of what time-critical social mobilization can achieve. Success relies critically on highly connected individuals willing to mobilize people in distant locations, overcoming the local trapping of diffusion in highly dense areas. However, even under these highly favorable conditions, the risk of unsuccessful search remains significant. These findings have implications for the design of better incentive schemes for social mobilization. They also call for caution in estimating the reliability of this capability.
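
    Illustrative sketch (not the paper's network simulation): referral-based mobilization modelled as a branching process. Even with supercritical parameters the cascade dies out in a noticeable fraction of runs, echoing the residual risk of unsuccessful search noted above; the Poisson offspring number, exponential latency, horizon, and target are all invented parameters.

        import heapq
        import numpy as np

        rng = np.random.default_rng(42)

        def mobilize(mean_recruits=2.5, mean_delay_h=4.0,
                     horizon_h=48.0, target=5000):
            # Each person who joins recruits a Poisson number of others
            # after exponential delays; the heap orders joining times.
            pending = [0.0]
            joined = 0
            while pending and joined < target:
                t = heapq.heappop(pending)
                if t > horizon_h:
                    break
                joined += 1
                for _ in range(rng.poisson(mean_recruits)):
                    heapq.heappush(pending, t + rng.exponential(mean_delay_h))
            return joined

        runs = [mobilize() for _ in range(100)]
        frac = np.mean([r >= 5000 for r in runs])
        print(f"fraction of runs reaching the target in time: {frac:.2f}")

    With these toy numbers most cascades succeed, yet roughly one in ten goes extinct early, a reminder that time-critical mobilization remains unreliable even under favorable conditions.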

  16. EU-FP7-iMARS: analysis of Mars multi-resolution images using auto-coregistration, data mining and crowd source techniques: A Mid-term Report

    NASA Astrophysics Data System (ADS)

    Muller, J.-P.; Yershov, V.; Sidiropoulos, P.; Gwinner, K.; Willner, K.; Fanara, L.; Waelisch, M.; van Gasselt, S.; Walter, S.; Ivanov, A.; Cantini, F.; Morley, J. G.; Sprinks, J.; Giordano, M.; Wardlaw, J.; Kim, J.-R.; Chen, W.-T.; Houghton, R.; Bamford, S.

    2015-10-01

    Understanding the role of different solid surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 8 years, especially in 3D imaging of surface shape (down to resolutions of 10s of cms) and subsequent terrain correction of imagery from orbiting spacecraft. This has led to the potential to be able to overlay different epochs back to the mid-1970s. Within iMars, a processing system has been developed to generate 3D Digital Terrain Models (DTMs) and corresponding OrthoRectified Images (ORIs) fully automatically from NASA MRO HiRISE and CTX stereo-pairs which are coregistered to corresponding HRSC ORI/DTMs. In parallel, iMars has developed a fully automated processing chain for co-registering level-1 (EDR) images from all previous NASA orbital missions to these HRSC ORIs and in the case of HiRISE these are further co-registered to previously co-registered CTX-to-HRSC ORIs. Examples will be shown of these multi-resolution ORIs and the application of different data mining algorithms to change detection using these co-registered images. iMars has recently launched a citizen science experiment to evaluate best practices for future citizen scientist validation of such data mining processed results. An example of the iMars website will be shown along with an embedded Version 0 prototype of a webGIS based on OGC standards.

  17. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 49 Transportation 8 2011-10-01 2011-10-01 false Market analyses. 1180.7 Section 1180.7..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For... identify and address relevant markets and issues, and provide additional information as requested by...

  18. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 8 2010-10-01 2010-10-01 false Market analyses. 1180.7 Section 1180.7..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a) For... identify and address relevant markets and issues, and provide additional information as requested by...

  19. Aviation System Analysis Capability Executive Assistant Analyses

    NASA Technical Reports Server (NTRS)

    Roberts, Eileen; Kostiuk, Peter

    1999-01-01

    This document describes the analyses that may be incorporated into the Aviation System Analysis Capability Executive Assistant. The document will be used as a discussion tool to enable NASA and other integrated aviation system entities to evaluate, discuss, and prioritize analyses.

  20. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 49 Transportation 8 2014-10-01 2014-10-01 false Market analyses. 1180.7 Section 1180.7 Transportation Other Regulations Relating to Transportation (Continued) SURFACE TRANSPORTATION BOARD, DEPARTMENT..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a)...

  1. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 49 Transportation 8 2012-10-01 2012-10-01 false Market analyses. 1180.7 Section 1180.7 Transportation Other Regulations Relating to Transportation (Continued) SURFACE TRANSPORTATION BOARD, DEPARTMENT..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a)...

  2. 49 CFR 1180.7 - Market analyses.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 49 Transportation 8 2013-10-01 2013-10-01 false Market analyses. 1180.7 Section 1180.7 Transportation Other Regulations Relating to Transportation (Continued) SURFACE TRANSPORTATION BOARD, DEPARTMENT..., TRACKAGE RIGHTS, AND LEASE PROCEDURES General Acquisition Procedures § 1180.7 Market analyses. (a)...

  3. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  4. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  5. 10 CFR 436.24 - Uncertainty analyses.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Procedures for Life Cycle Cost Analyses § 436.24 Uncertainty analyses. If particular items of cost data or... impact of uncertainty on the calculation of life cycle cost effectiveness or the assignment of rank order... and probabilistic analysis. If additional analysis casts substantial doubt on the life cycle...

  6. Operator-free flow injection analyser

    PubMed Central

    de Faria, Lourival C.

    1991-01-01

    A flow injection analyser has been constructed to allow an operator-free determination of up to 40 samples. Besides the usual FIA apparatus, the analyser includes a home-made sample introduction device made with three electromechanical three-way valves and an auto-sampler from Technicon which has been adapted to be commanded by an external digital signal. The analyser is controlled by a single board SDK-8085 microcomputer. The necessary interface to couple the analyser components to the microcomputer is also described. The analyser was evaluated for a Cr(VI)-FIA determination showing a very good performance, with a relative standard deviation for 15 signals from the injection of 100 μl of a 1.0 mg·ml⁻¹ standard Cr(VI) solution equal to 0.5%. PMID:18924899

  7. Limits to growth reconsidered.

    PubMed

    Hagen, E E

    1972-01-01

    In their book, "The Limits to Growth," the authors conclude that through pollution, exhaustion of natural resources, and limits to the food supply, the world faces a catastrophic fall in population and in living standards by the middle of the 21st century. The authors fail to state, however, one centrally important assumption underlying their results, an assumption that enters implicitly through the omission of its contrary. In their model the continuing technical progress that has been a primary feature of the material world for the past 200 years suddenly ceases. The assumptions of the model presented in "Limits to Growth" are not the assumptions other analysts make; they are strangely unrealistic, and they require closer examination. The assumption concerning population holds that the sole determinants of birthrates are the level of industrialization and the food supply. This is not good demography, for demographers have long recognized that it may have been the decrease in death rates, not industrialization or the rise in income, that caused the decrease in birthrates. Furthermore, their theory that many of the natural resources are irreplaceable is like the belief that the sun revolves around the earth. It is obvious and false. It neglects that part of technical progress which includes the invention of new natural resources. Technical advance is needed, and the following are some of the problems that technical advance must overcome: 1) a need to discover how to increase food production progressively while preventing the runoff of chemical fertilizers from the soil into waterways; 2) the "natural" minerals on which until recently all have depended are "biodegradable"; 3) there is a similar problem with radioactive nuclear wastes; 4) energy must dissipate into heat; and 5) there is a need to hasten the decline in birthrates throughout the world. In conclusion, indefinitely continuing growth is regarded not as desirable, but only as possible.

  8. Limits to Cloud Susceptibility

    NASA Technical Reports Server (NTRS)

    Coakley, James A., Jr.

    2002-01-01

    1-kilometer AVHRR observations of ship tracks in low-level clouds off the west coast of the U.S. were used to determine limits for the degree to which clouds might be altered by increases in anthropogenic aerosols. Hundreds of tracks were analyzed to determine whether the changes in droplet radii, visible optical depths, and cloud top altitudes that result from the influx of particles from underlying ships were consistent with expectations based on simple models for the indirect effect of aerosols. The models predict substantial increases in sunlight reflected by polluted clouds due to the increases in droplet numbers and cloud liquid water that result from the elevated particle concentrations. Contrary to the model predictions, the analysis of ship tracks revealed a 15-20% reduction in liquid water for the polluted clouds. Studies performed with a large-eddy cloud simulation model suggested that the shortfall in cloud liquid water found in the satellite observations might be attributed to the restriction that the 1-kilometer pixels be completely covered by either polluted or unpolluted cloud. The simulation model revealed that a substantial fraction of the indirect effect is caused by a horizontal redistribution of cloud water in the polluted clouds. Cloud-free gaps in polluted clouds fill in with cloud water while the cloud-free gaps in the surrounding unpolluted clouds remain cloud-free. By limiting the analysis to only overcast pixels, the current study failed to account for the gap-filling predicted by the simulation model. This finding and an analysis of the spatial variability of marine stratus suggest new ways to analyze ship tracks to determine the limit to which particle pollution will alter the amount of sunlight reflected by clouds.

  9. Fault current limiter

    DOEpatents

    Darmann, Francis Anthony

    2013-10-08

    A fault current limiter (FCL) includes a series of high permeability posts that collectively define a core for the FCL. A DC coil, for the purposes of saturating a portion of the high permeability posts, surrounds the complete structure outside of an enclosure in the form of a vessel. The vessel contains a dielectric insulation medium. AC coils, for transporting AC current, are wound on insulating formers and electrically interconnected to each other in a manner such that the senses of the magnetic field produced by each AC coil in the corresponding high permeability core are opposing. There are insulation barriers between phases to improve dielectric withstand properties of the dielectric medium.

  10. The HEL Upper Limit

    NASA Astrophysics Data System (ADS)

    Billingsley, James

    2001-06-01

    A threshold particle velocity, Vf, derived by Professor E.R. Fitzgerald for the onset of atomic lattice disintegration phenomena (LDP) is shown to exceed or compare rather well with the maximum experimentally observed Hugoniot Elastic Limit (HEL) particle or mass velocities (Uphel) for certain hard and strong mineral/ceramic materials. They are: diamond, quartz, sapphire, alumina, silicon carbide, titanium diboride, and partially stabilized zirconia. The LDP is caused by a conflict between the de Broglie momentum-wavelength relation and the minimum wavelength allowed in a lattice row of atoms.
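
    A plausible reading of the stated conflict (a reconstruction, not Fitzgerald's published derivation): an atom of mass m moving at velocity v has de Broglie wavelength λ = h/(mv), while a discrete row of atoms with interatomic spacing d supports no wavelength shorter than λ_min = 2d; the factor of 2 is an assumption of this sketch. Equating the two gives the threshold velocity:

        \lambda = \frac{h}{m v}, \qquad \lambda_{\min} = 2d
        \quad\Longrightarrow\quad
        V_f = \frac{h}{2 m d}.

    An atom forced to move faster than V_f would require a wavelength shorter than the row can carry, which is the disintegration condition as described; the numerical value of V_f depends on which atomic mass and spacing are inserted.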

  11. On the skill of high frequency precipitation analyses

    NASA Astrophysics Data System (ADS)

    Kann, A.; Meirold-Mautner, I.; Schmid, F.; Kirchengast, G.; Fuchsberger, J.

    2014-10-01

    The ability of radar-rain gauge merging algorithms to precisely analyse convective precipitation patterns is of high interest for many applications, e.g. hydrological modelling. However, due to drawbacks of methods like cross-validation and the limited availability of reference datasets at high temporal and spatial resolution, an adequate validation is usually hardly possible, especially on an operational basis. The present study evaluates the skill of very high resolution and frequently updated precipitation analyses (rapid-INCA) by means of a very dense station network (WegenerNet), operated in a limited domain of the south-eastern parts of Austria (Styria). Based on case studies and a longer term validation over the convective season 2011, a general underestimation of the rapid-INCA precipitation amounts is shown, although the temporal and spatial variability of the errors is high, as is typical of convection. The contribution of the rain gauge measurements to the analysis skill is crucial. However, the capability of the analyses to precisely assess the convective precipitation distribution predominantly depends on the representativeness of the stations under the prevalent convective conditions.

  12. Lean limit phenomena

    NASA Technical Reports Server (NTRS)

    Law, C. K.

    1984-01-01

    The concept of flammability limits in the presence of flame interaction, and the existence of negative flame speeds are discussed. Downstream interaction between two counterflow premixed flames of different stoichiometries are experimentally studied. Various flame configurations are observed and quantified; these include the binary system of two lean or rich flames, the triplet system of a lean and a rich flame separated by a diffusion flame, and single diffusion flames with some degree of premixedness. Extinction limits are determined for methane/air and butane/air mixtures over the entire range of mixture concentrations. The results show that the extent of flame interaction depends on the separation distance between the flames which are functions of the mixtures' concentrations, the stretch rate, and the effective Lewis numbers (Le). In particular, in a positively-stretched flow field Le < 1 (Le > 1) mixtures tend to interact strongly (weakly), while the converse holds for flames in a negatively-stretched flow. Also established was the existence of negative flames whose propagation velocity is in the same general direction as that of the bulk convective flow, being supported by diffusion alone. Their existence demonstrates the tendency of flames to resist extinction, and further emphasizes the possibility of very lean or rich mixtures to undergo combustion.

  13. (Limiting the greenhouse effect)

    SciTech Connect

    Rayner, S.

    1991-01-07

    Traveler attended the Dahlem Research Conference organized by the Freie Universität, Berlin. The subject of the conference was Limiting the Greenhouse Effect: Options for Controlling Atmospheric CO₂ Accumulation. Like all Dahlem workshops, this was a meeting of scientific experts, although the disciplines represented were broader than usual, ranging across anthropology, economics, international relations, forestry, engineering, and atmospheric chemistry. Participation by scientists from developing countries was limited. The conference was divided into four multidisciplinary working groups. Traveler acted as moderator for Group 3, which examined the question "What knowledge is required to tackle the principal social and institutional barriers to reducing CO₂ emissions?" The working rapporteur was Jesse Ausubel of Rockefeller University. Other working groups examined the economic costs, benefits, and technical feasibility of options to reduce emissions per unit of energy service; the options for reducing energy use per unit of GNP; and the significance of linkage between strategies to reduce CO₂ emissions and other goals. Draft reports of the working groups are appended. Overall, the conference identified a number of important research needs in all four areas. It may prove particularly important in bringing the social and institutional research needs relevant to climate change closer to the forefront of the scientific and policy communities than hitherto.

  14. Limits to biofuels

    NASA Astrophysics Data System (ADS)

    Johansson, S.

    2013-06-01

    Biofuel production is dependent upon agriculture and forestry systems, and the expectations of future biofuel potential are high. A study of the global food production and biofuel production from edible crops implies that biofuel produced from edible parts of crops leads to a global deficit of food. This is rather well known, which is why there is a strong urge to develop biofuel systems that make use of residues or products from forests to eliminate competition with food production. However, biofuel from agro-residues still depends upon the crop production system, and there are many parameters to deal with in order to investigate the sustainability of biofuel production. There is a theoretical limit to how much biofuel can be achieved globally from agro-residues, and this amounts to approximately one third of today's use of fossil fuels in the transport sector. In reality this theoretical potential may be eliminated by the energy use in the biomass-conversion technologies and production systems, depending on what type of assessment method is used. By surveying existing studies on biofuel conversion, the theoretical limit of biofuels from the year 2010's agricultural production was found to be either non-existent, due to energy consumption in the conversion process, or up to 2-6000 TWh (biogas from residues and waste and ethanol from woody biomass) in the more optimistic cases.

  15. Molecular Biomarker Analyses Using Circulating Tumor Cells

    PubMed Central

    Punnoose, Elizabeth A.; Atwal, Siminder K.; Spoerke, Jill M.; Savage, Heidi; Pandita, Ajay; Yeh, Ru-Fang; Pirzkall, Andrea; Fine, Bernard M.; Amler, Lukas C.; Chen, Daniel S.; Lackner, Mark R.

    2010-01-01

    Background: Evaluation of cancer biomarkers from blood could significantly enable biomarker assessment by providing a relatively non-invasive source of representative tumor material. Circulating Tumor Cells (CTCs) isolated from blood of metastatic cancer patients hold significant promise in this regard. Methodology/Principal Findings: Using spiked tumor cells we evaluated CTC capture on different CTC technology platforms, including CellSearch® and two biochip platforms, and used the isolated CTCs to develop and optimize assays for molecular characterization of CTCs. We report similar performance for the various platforms tested in capturing CTCs, and find that capture efficiency is dependent on the level of EpCAM expression. We demonstrate that captured CTCs are amenable to biomarker analyses such as HER2 status, qRT-PCR for breast cancer subtype markers, KRAS mutation detection, and EGFR staining by immunofluorescence (IF). We quantify cell surface expression of EGFR in metastatic lung cancer patient samples. In addition, we determined HER2 status by IF and FISH in CTCs from metastatic breast cancer patients. In the majority of patients (89%) we found concordance with HER2 status from patient tumor tissue, though in a subset of patients (11%), HER2 status in CTCs differed from that observed in the primary tumor. Surprisingly, we found CTC counts to be higher in ER+ patients in comparison to HER2+ and triple negative patients, which could be explained by low EpCAM expression and a more mesenchymal phenotype of tumors belonging to the basal-like molecular subtype of breast cancer. Conclusions/Significance: Our data suggest that molecular characterization from captured CTCs is possible and can potentially provide real-time information on biomarker status. In this regard, CTCs hold significant promise as a source of tumor material to facilitate clinical biomarker evaluation. However, limitations exist from a purely EpCAM based capture system and addition of antibodies

  16. Fundamental energetic limits of radio communication systems

    NASA Astrophysics Data System (ADS)

    Baudais, Jean-Yves

    2017-02-01

    The evaluation of the energy consumption of a radiocommunication requires analysing the life cycle of the elements used. However, this analysis does not specify the energetic limits. Theoretical approaches allow one to draw these limits, which are known in multiple cases of information transmission. However, the answers are not always satisfactory, in particular in the case of time-varying channels. After a brief presentation of the notion of energetic limits of a radiocommunication, and beginning with a global approach, we show that, contrary to the published results, the energetic limits always differ from zero if the physical constraints are correctly expressed.

  17. Level II Ergonomic Analyses, Dover AFB, DE

    DTIC Science & Technology

    1999-02-01

    IERA-RS-BR-TR-1999-0002, United States Air Force IERA: Level II Ergonomic Analyses, Dover AFB, DE. Andrew Marcotte; Marilyn Joyce; The Joyce... Contents: 1.0 Introduction; 1.1 Purpose of the Level II Ergonomic Analyses; 1.2 Approach; 1.2.1 Initial Shop Selection and Administration of the

  18. Clinical limitations of Invisalign.

    PubMed

    Phan, Xiem; Ling, Paul H

    2007-04-01

    Adult patients seeking orthodontic treatment are increasingly motivated by esthetic considerations. The majority of these patients reject wearing labial fixed appliances and are looking instead to more esthetic treatment options, including lingual orthodontics and Invisalign appliances. Since Align Technology introduced the Invisalign appliance in 1999 in an extensive public campaign, the appliance has gained tremendous attention from adult patients and dental professionals. The transparency of the Invisalign appliance enhances its esthetic appeal for those adult patients who are averse to wearing conventional labial fixed orthodontic appliances. Although guidelines about the types of malocclusions that this technique can treat exist, few clinical studies have assessed the effectiveness of the appliance. A few recent studies have outlined some of the limitations associated with this technique that clinicians should recognize early before choosing treatment options.

  19. Limiting magnitude of hypertelescopes

    NASA Astrophysics Data System (ADS)

    Surya, Arun

    Optical stellar interferometers have demonstrated milli-arcsecond resolution with few apertures spaced hundreds of meters apart. To obtain rich direct images, many apertures will be needed, for a better sampling of the incoming wavefront. The coherent imaging thus achievable improves the sensitivity with respect to the incoherent combination of successive fringed exposures, heretofore achieved in the form of optical aperture synthesis. For efficient use of highly diluted apertures, this can be done with pupil densification, a technique also called "Hypertelescope Imaging". Using numerical simulations we have determined the limiting magnitude of hypertelescopes over different baselines and pupil densifications. Here we discuss the advantages of using hypertelescope systems over classical pairwise optical interferometry.

  20. [Limitations of anticoagulant therapy].

    PubMed

    Martí-Fàbregas, J; Delgado-Mederos, R; Mateo, J

    2012-03-01

    Vitamin K antagonists have been shown to be effective in the primary and secondary prevention of systemic and cerebral emboli in patients with cardiac causes of embolism, especially atrial fibrillation. The reduced risk of stroke is greater in secondary prevention, although this reduction is accompanied by an inherent risk of hemorrhagic complications, among which cerebral hemorrhage is especially serious. The therapeutic window of these agents is limited and the best benefit/risk profile is obtained with an INR of between 2 and 3. The anticoagulant effect obtained shows marked variability, requiring frequent clinical and laboratory monitoring of the treatment. The introduction of oral anticoagulants that can be administered more easily, with equal or greater efficacy and lower risk, is needed.

  1. The Limits to Relevance

    NASA Astrophysics Data System (ADS)

    Averill, M.; Briggle, A.

    2006-12-01

    Science policy and knowledge production lately have taken a pragmatic turn. Funding agencies increasingly are requiring scientists to explain the relevance of their work to society. This stems in part from mounting critiques of the "linear model" of knowledge production in which scientists operating according to their own interests or disciplinary standards are presumed to automatically produce knowledge that is of relevance outside of their narrow communities. Many contend that funded scientific research should be linked more directly to societal goals, which implies a shift in the kind of research that will be funded. While both authors support the concept of useful science, we question the exact meaning of "relevance" and the wisdom of allowing it to control research agendas. We hope to contribute to the conversation by thinking more critically about the meaning and limits of the term "relevance" and the trade-offs implicit in a narrow utilitarian approach. The paper will consider which interests tend to be privileged by an emphasis on relevance and address issues such as whose goals ought to be pursued and why, and who gets to decide. We will consider how relevance, narrowly construed, may actually limit the ultimate utility of scientific research. The paper also will reflect on the worthiness of research goals themselves and their relationship to a broader view of what it means to be human and to live in society. Just as there is more to being human than the pragmatic demands of daily life, there is more at issue with knowledge production than finding the most efficient ways to satisfy consumer preferences or fix near-term policy problems. We will conclude by calling for a balanced approach to funding research that addresses society's most pressing needs but also supports innovative research with less immediately apparent application.

  2. Broadband seismic illumination and resolution analyses based on staining algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Jia, Xiao-Feng; Xie, Xiao-Bi

    2016-09-01

    Seismic migration moves reflections to their true subsurface positions and yields seismic images of subsurface areas. However, due to limited acquisition aperture, complex overburden structure and target dipping angle, the migration often generates a distorted image of the actual subsurface structure. Seismic illumination and resolution analyses provide a quantitative description of how the above-mentioned factors distort the image. The point spread function (PSF) gives the resolution of the depth image and carries full information about the factors affecting the quality of the image. The staining algorithm establishes a correspondence between a certain structure and its relevant wavefield and reflected data. In this paper, we use the staining algorithm to calculate the PSFs, then use these PSFs for extracting the acquisition dip response and correcting the original depth image by deconvolution. We present relevant results of the SEG salt model. The staining algorithm provides an efficient tool for calculating the PSF and for conducting broadband seismic illumination and resolution analyses.
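
    Illustrative sketch (a generic frequency-domain Wiener deconvolution, not the authors' implementation): once a PSF is available, the image correction step described above amounts to a regularized deconvolution. The Gaussian PSF, the grid size, and the regularization constant k below are invented for the toy test.

        import numpy as np
        from scipy.signal import fftconvolve

        def wiener_deconvolve(image, psf, k=1e-3):
            # Frequency-domain Wiener deconvolution of an image by a PSF;
            # k regularizes frequencies where the PSF spectrum is weak.
            padded = np.zeros_like(image, dtype=float)
            px, py = psf.shape
            padded[:px, :py] = psf
            # Shift the PSF centre to the origin so phases stay aligned.
            padded = np.roll(padded, (-(px // 2), -(py // 2)), axis=(0, 1))
            H = np.fft.fft2(padded)
            G = np.fft.fft2(image)
            F = G * np.conj(H) / (np.abs(H) ** 2 + k)
            return np.real(np.fft.ifft2(F))

        # Toy test: blur a point scatterer with a Gaussian "PSF", then invert.
        xx, yy = np.meshgrid(np.arange(-8, 9), np.arange(-8, 9))
        psf = np.exp(-(xx ** 2 + yy ** 2) / 8.0)
        psf /= psf.sum()
        model = np.zeros((128, 128))
        model[64, 64] = 1.0
        blurred = fftconvolve(model, psf, mode="same")
        recovered = wiener_deconvolve(blurred, psf)
        print("peak restored at",
              np.unravel_index(np.abs(recovered).argmax(), recovered.shape))

    In the application described above the PSF varies with position, so the correction is applied with locally computed PSFs rather than one global kernel.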

  3. 7 CFR 1400.204 - Limited partnerships, limited liability partnerships, limited liability companies, corporations...

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 10 2012-01-01 2012-01-01 false Limited partnerships, limited liability partnerships... AND SUBSEQUENT CROP, PROGRAM, OR FISCAL YEARS Payment Eligibility § 1400.204 Limited partnerships, limited liability partnerships, limited liability companies, corporations, and other similar...

  4. 7 CFR 1400.204 - Limited partnerships, limited liability partnerships, limited liability companies, corporations...

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 10 2013-01-01 2013-01-01 false Limited partnerships, limited liability partnerships... AND SUBSEQUENT CROP, PROGRAM, OR FISCAL YEARS Payment Eligibility § 1400.204 Limited partnerships, limited liability partnerships, limited liability companies, corporations, and other similar...

  5. 7 CFR 1400.204 - Limited partnerships, limited liability partnerships, limited liability companies, corporations...

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 10 2014-01-01 2014-01-01 false Limited partnerships, limited liability partnerships... AND SUBSEQUENT CROP, PROGRAM, OR FISCAL YEARS Payment Eligibility § 1400.204 Limited partnerships, limited liability partnerships, limited liability companies, corporations, and other similar...

  6. 7 CFR 1400.204 - Limited partnerships, limited liability partnerships, limited liability companies, corporations...

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 10 2011-01-01 2011-01-01 false Limited partnerships, limited liability partnerships... AND SUBSEQUENT CROP, PROGRAM, OR FISCAL YEARS Payment Eligibility § 1400.204 Limited partnerships, limited liability partnerships, limited liability companies, corporations, and other similar...

  7. METHODS OF DEALING WITH VALUES BELOW THE LIMIT OF DETECTION USING SAS

    EPA Science Inventory

    Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such ...
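
    Illustrative sketch (in Python rather than SAS): the single-value substitution practice described above, and the bias it induces in the estimated mean. The lognormal concentrations, the LOD value, and the substitution constants are assumptions of this example.

        import numpy as np

        rng = np.random.default_rng(0)

        # Simulated concentrations; anything under the LOD is censored.
        true = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
        lod = 0.5
        observed = np.where(true < lod, np.nan, true)

        # Common single-value substitutions for non-detects; the choice
        # visibly shifts the estimated mean relative to the truth.
        for label, sub in [("LOD", lod), ("LOD/2", lod / 2),
                           ("LOD/sqrt(2)", lod / np.sqrt(2)), ("0", 0.0)]:
            filled = np.where(np.isnan(observed), sub, observed)
            print(f"{label:>11}: mean = {filled.mean():.3f}")
        print(f"{'true':>11}: mean = {true.mean():.3f}")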

  8. Interactive graphics for functional data analyses.

    PubMed

    Wrobel, Julia; Park, So Young; Staicu, Ana Maria; Goldsmith, Jeff

    Although there are established graphics that accompany the most common functional data analyses, generating these graphics for each dataset and analysis can be cumbersome and time consuming. Often, the barriers to visualization inhibit useful exploratory data analyses and prevent the development of intuition for a method and its application to a particular dataset. The refund.shiny package was developed to address these issues for several of the most common functional data analyses. After conducting an analysis, the plot_shiny() function is used to generate an interactive visualization environment that contains several distinct graphics, many of which are updated in response to user input. These visualizations reduce the burden of exploratory analyses and can serve as a useful tool for the communication of results to non-statisticians.

  9. Quality control considerations in performing washability analyses

    SciTech Connect

    Graham, R.D.

    1984-10-01

    The author describes, in considerable detail, the procedures for carrying out washability analyses as laid down in ASTM Standard Test Method D4371. These include sampling, sample preparation, hydrometer standardisation, washability testing, and analysis of specific gravity fractions.

  10. SCM Forcing Data Derived from NWP Analyses

    DOE Data Explorer

    Jakob, Christian

    2008-01-15

    Forcing data, suitable for use with single column models (SCMs) and cloud resolving models (CRMs), have been derived from NWP analyses for the ARM (Atmospheric Radiation Measurement) Tropical Western Pacific (TWP) sites of Manus Island and Nauru.

  11. Anthocyanin analyses of Vaccinium fruit dietary supplements

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Vaccinium fruit ingredients within dietary supplements were identified by comparisons with anthocyanin analyses of known Vaccinium profiles (demonstration of anthocyanin fingerprinting). Available Vaccinium supplements were purchased and analyzed; their anthocyanin profiles (based on HPLC separation...

  12. Comparison with Russian analyses of meteor impact

    SciTech Connect

    Canavan, G.H.

    1997-06-01

    The inversion model for meteor impacts is used to discuss Russian analyses and compare principal results. For common input parameters, the models produce consistent estimates of impactor parameters. Directions for future research are discussed and prioritized.

  13. 49 CFR 1572.107 - Other analyses.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... ASSESSMENTS Standards for Security Threat Assessments § 1572.107 Other analyses. (a) TSA may determine that an... the search conducted under this part reveals extensive foreign or domestic criminal convictions,...

  14. A History of Rotorcraft Comprehensive Analyses

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2013-01-01

    A history of the development of rotorcraft comprehensive analyses is presented. Comprehensive analyses are digital computer programs that calculate the aeromechanical behavior of the rotor and aircraft, bringing together the most advanced models of the geometry, structure, dynamics, and aerodynamics available in rotary wing technology. The development of the major codes of the last five decades from industry, government, and universities is described. A number of common themes observed in this history are discussed.

  15. Analyses and forecasts with LAWS winds

    NASA Technical Reports Server (NTRS)

    Wang, Muyin; Paegle, Jan

    1994-01-01

    Horizontal fluxes of atmospheric water vapor are studied for summer months during 1989 and 1992 over North and South America based on analyses from European Center for Medium Range Weather Forecasts, US National Meteorological Center, and United Kingdom Meteorological Office. The calculations are performed over 20 deg by 20 deg box-shaped midlatitude domains located to the east of the Rocky Mountains in North America, and to the east of the Andes Mountains in South America. The fluxes are determined from operational center gridded analyses of wind and moisture. Differences in the monthly mean moisture flux divergence determined from these analyses are as large as 7 cm/month precipitable water equivalent over South America, and 3 cm/month over North America. Gridded analyses at higher spatial and temporal resolution exhibit better agreement in the moisture budget study. However, significant discrepancies of the moisture flux divergence computed from different gridded analyses still exist. The conclusion is more pessimistic than Rasmusson's estimate based on station data. Further analysis reveals that the most significant sources of error result from model surface elevation fields, gaps in the data archive, and uncertainties in the wind and specific humidity analyses. Uncertainties in the wind analyses are the most important problem. The low-level jets, in particular, are substantially different in the different data archives. Part of the reason for this may be due to the way the different analysis models parameterized physical processes affecting low-level jets. The results support the inference that the noise/signal ratio of the moisture budget may be improved more rapidly by providing better wind observations and analyses than by providing better moisture data.
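
    Illustrative sketch (a minimal finite-difference version, not any operational centre's procedure): the budget quantity at issue is the vertically integrated horizontal moisture flux and its divergence. The toy wind and humidity fields, layer thicknesses, and grid spacings below are all assumptions.

        import numpy as np

        def moisture_flux_divergence(u, v, q, dp, dx, dy):
            # u, v: winds (lev, lat, lon) in m/s; q: specific humidity
            # (kg/kg); dp: pressure-layer thicknesses (Pa); dx, dy: grid
            # spacings (m). Returns divergence in kg m^-2 s^-1.
            g = 9.81
            Qu = np.sum(u * q * dp[:, None, None], axis=0) / g
            Qv = np.sum(v * q * dp[:, None, None], axis=0) / g
            dQu_dx = np.gradient(Qu, dx, axis=1)
            dQv_dy = np.gradient(Qv, dy, axis=0)
            return dQu_dx + dQv_dy

        # Toy fields on a 10-level, 21x21 grid (all values illustrative).
        lev, ny, nx = 10, 21, 21
        rng = np.random.default_rng(3)
        u = 10 + rng.normal(0, 1, (lev, ny, nx))
        v = rng.normal(0, 1, (lev, ny, nx))
        q = np.linspace(0.015, 0.001, lev)[:, None, None] * np.ones((lev, ny, nx))
        dp = np.full(lev, 100e2)                 # ten 100 hPa layers
        div = moisture_flux_divergence(u, v, q, dp, dx=100e3, dy=100e3)
        # kg m^-2 over a month is equivalent to mm/month of water.
        print(f"mean |divergence|: {np.abs(div).mean() * 30 * 86400:.3f} mm/month (toy)")

    Because the divergence is a small residual of large cancelling flux terms, modest errors in the analysed winds propagate strongly into it, which is the sensitivity the record emphasizes.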

  16. Physical limits to magnetogenetics

    PubMed Central

    Meister, Markus

    2016-01-01

    This is an analysis of how magnetic fields affect biological molecules and cells. It was prompted by a series of prominent reports regarding magnetism in biological systems. The first claims to have identified a protein complex that acts like a compass needle to guide magnetic orientation in animals (Qin et al., 2016). Two other articles report magnetic control of membrane conductance by attaching ferritin to an ion channel protein and then tugging the ferritin or heating it with a magnetic field (Stanley et al., 2015; Wheeler et al., 2016). Here I argue that these claims conflict with basic laws of physics. The discrepancies are large: from 5 to 10 log units. If the reported phenomena do in fact occur, they must have causes entirely different from the ones proposed by the authors. The paramagnetic nature of protein complexes is found to seriously limit their utility for engineering magnetically sensitive cells. DOI: http://dx.doi.org/10.7554/eLife.17210.001 PMID:27529126
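
    The scale of the discrepancy can be illustrated with a back-of-the-envelope comparison in the spirit of the analysis (the susceptibility, volume, and field values are rough assumptions, not Meister's figures): the magnetic energy of a paramagnetic particle of volume susceptibility χ and volume V in a field B, set against thermal energy,

        U \approx \frac{\chi V B^2}{2\mu_0}
          \approx \frac{10^{-3}\cdot 3\times10^{-25}\,\mathrm{m}^3\cdot(1\,\mathrm{T})^2}
                       {2\cdot 4\pi\times10^{-7}\,\mathrm{T\,m\,A^{-1}}}
          \approx 1\times10^{-22}\,\mathrm{J}
          \;\ll\; k_B T \approx 4\times10^{-21}\,\mathrm{J}.

    Even a full tesla applied to an entire ferritin particle leaves the magnetic energy well below thermal noise, which is the sense in which the proposed mechanisms fall short by several log units.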

  17. Limits of computational biology.

    PubMed

    Bray, Dennis

    2015-01-01

    Are we close to a complete inventory of living processes so that we might expect in the near future to reproduce every essential aspect necessary for life? Or are there mechanisms and processes in cells and organisms that are presently inaccessible to us? Here I argue that a close examination of a particularly well-understood system--that of Escherichia coli chemotaxis--shows we are still a long way from a complete description. There is a level of molecular uncertainty, particularly that responsible for fine-tuning and adaptation to myriad external conditions, which we presently cannot resolve or reproduce on a computer. Moreover, the same uncertainty exists for any process in any organism and is especially pronounced and important in higher animals such as humans. Embryonic development, tissue homeostasis, immune recognition, memory formation, and survival in the real world, all depend on vast numbers of subtle variations in cell chemistry most of which are presently unknown or only poorly characterized. Overcoming these limitations will require us to not only accumulate large quantities of highly detailed data but also develop new computational methods able to recapitulate the massively parallel processing of living cells.

  18. Limits of computational biology

    PubMed Central

    Bray, Dennis

    2015-01-01

    Are we close to a complete inventory of living processes so that we might expect in the near future to reproduce every essential aspect necessary for life? Or are there mechanisms and processes in cells and organisms that are presently inaccessible to us? Here I argue that a close examination of a particularly well-understood system—that of Escherichia coli chemotaxis—shows we are still a long way from a complete description. There is a level of molecular uncertainty, particularly that responsible for fine-tuning and adaptation to myriad external conditions, which we presently cannot resolve or reproduce on a computer. Moreover, the same uncertainty exists for any process in any organism and is especially pronounced and important in higher animals such as humans. Embryonic development, tissue homeostasis, immune recognition, memory formation, and survival in the real world, all depend on vast numbers of subtle variations in cell chemistry most of which are presently unknown or only poorly characterized. Overcoming these limitations will require us to not only accumulate large quantities of highly detailed data but also develop new computational methods able to recapitulate the massively parallel processing of living cells. PMID:25318467

  19. Integrated Current Limiter

    NASA Astrophysics Data System (ADS)

    Pappalardo, S.; Alfonso, M. M.; Mirabella, I. B.

    2011-10-01

    The LCL has been extensively used in ESA scientific satellites and for the past few years has also been the baseline device for earth observation satellites such as CRYOSAT 1 and 2, SENTINEL 1, 2 and 3, EARTHWATCH, etc. It seems that the use of this LCL is also being considered as an alternative to the fuse approach for commercial telecommunication satellites. The scope of this document is to provide a technical description of the Integrated Current Limiter device (shortly ICL later on) developed within the domain of ESTEC Contract 22049-09-NL-AT with STMicroelectronics s.r.l. (ref. Invitation to Tender AO/1-5784/08/NL/AT). The design of the ICL device takes into account both ESA's and the power electronics designers' experience, which is more than 25 years long in Europe. The ICL design has been conducted so as to be fully compliant with the applicable specification issued by ESA and the major European power electronics manufacturers that participated in its edition.

  20. Advancing the quantification of humid tropical forest cover loss with multi-resolution optical remote sensing data: Sampling & wall-to-wall mapping

    NASA Astrophysics Data System (ADS)

    Broich, Mark

    Humid tropical forest cover loss is threatening the sustainability of ecosystem goods and services as vast forest areas are rapidly cleared for industrial scale agriculture and tree plantations. Despite the importance of humid tropical forest in the provision of ecosystem services and economic development opportunities, the spatial and temporal distribution of forest cover loss across large areas is not well quantified. Here I improve the quantification of humid tropical forest cover loss using two remote sensing-based methods: sampling and wall-to-wall mapping. In all of the presented studies, the integration of coarse spatial, high temporal resolution data with moderate spatial, low temporal resolution data enables advances in quantifying forest cover loss in the humid tropics. Imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) is used as the source of coarse spatial resolution, high temporal resolution data and imagery from the Landsat Enhanced Thematic Mapper Plus (ETM+) sensor is used as the source of moderate spatial, low temporal resolution data. In a first study, I compare the precision of different sampling designs for the Brazilian Amazon using the annual deforestation maps derived by the Brazilian Space Agency for reference. I show that sampling designs can provide reliable deforestation estimates; furthermore, sampling designs guided by MODIS data can provide more efficient estimates than the systematic design used for the United Nations Food and Agricultural Organization Forest Resource Assessment 2010. Sampling approaches, such as the one demonstrated, are viable in regions where data limitations, such as cloud contamination, limit exhaustive mapping methods. Cloud-contaminated regions experiencing high rates of change include Insular Southeast Asia, specifically Indonesia and Malaysia. Due to persistent cloud cover, forest cover loss in Indonesia has only been mapped at a 5-10 year interval using photo interpretation of single

  1. Confidence limits and their errors

    SciTech Connect

    Rajendran Raja

    2002-03-22

    Confidence limits are commonplace in physics analysis. Great care must be taken in their calculation and use, especially in cases of limited statistics. We introduce the concept of statistical errors of confidence limits and argue that not only should limits be calculated but also their errors, in order to represent the results of the analysis to the fullest. We show that comparison of two different limits from two different experiments becomes easier when their errors are also quoted. Use of errors of confidence limits will lead to abatement of the debate on which method is best suited to calculate confidence limits.
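    One concrete way to realise this idea is a parametric bootstrap around a classical limit. The sketch below computes the standard 90% CL Poisson upper limit for a small observed count, then fluctuates the count to estimate the statistical error of the limit itself; treating the observed count as the bootstrap truth is an illustrative choice, not necessarily the paper's exact prescription.

```python
# Sketch: attach a statistical error to a confidence limit by bootstrap.
import numpy as np
from scipy.stats import chi2

def poisson_upper_limit(n_obs, cl=0.90):
    # Classical Poisson upper limit s with P(N <= n_obs | s) = 1 - cl
    return 0.5 * chi2.ppf(cl, 2 * (n_obs + 1))

rng = np.random.default_rng(0)
n_obs = 3                                  # a limited-statistics count
limit = poisson_upper_limit(n_obs)         # ~6.68 events for n_obs = 3

# Parametric bootstrap: fluctuate the observation, track how the limit moves.
replicas = rng.poisson(n_obs, size=10_000)
limits = np.array([poisson_upper_limit(n) for n in replicas])

print(f"90% CL upper limit: {limit:.2f} ± {limits.std():.2f} events")
```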

  2. EU-FP7-iMARS: Analysis of Mars Multi-Resolution Images Using Auto-Coregistration Data Mining and Crowd Source Techniques: Processed Results - a First Look

    NASA Astrophysics Data System (ADS)

    Muller, Jan-Peter; Tao, Yu; Sidiropoulos, Panagiotis; Gwinner, Klaus; Willner, Konrad; Fanara, Lida; Waehlisch, Marita; van Gasselt, Stephan; Walter, Sebastian; Steikert, Ralf; Schreiner, Bjoern; Ivanov, Anton; Cantini, Federico; Wardlaw, Jessica; Morley, Jeremy; Sprinks, James; Giordano, Michele; Marsh, Stuart; Kim, Jungrack; Houghton, Robert; Bamford, Steven

    2016-06-01

    Understanding planetary atmosphere-surface exchange and extra-terrestrial surface formation processes within our Solar System is one of the fundamental goals of planetary science research. There has been a revolution in planetary surface observations over the last 15 years, especially in 3D imaging of surface shape. This has led to the ability to overlay image data and derived information from different epochs, back in time to the mid 1970s, to examine changes through time, such as the recent discovery of mass movement, tracking inter-year seasonal changes, and looking for occurrences of fresh craters. Within the EU FP-7 iMars project, we have developed a fully automated multi-resolution DTM processing chain, called the Coregistration ASP-Gotcha Optimised (CASP-GO) chain, based on the open-source NASA Ames Stereo Pipeline (ASP) [Tao et al., this conference], which is being applied to the production of planetwide DTMs and ORIs (OrthoRectified Images) from CTX and HiRISE. Alongside the production of individual strip CTX and HiRISE DTMs and ORIs, DLR [Gwinner et al., 2015] have processed HRSC mosaics of ORIs and DTMs for complete areas in a consistent manner using photogrammetric bundle block adjustment techniques. A novel automated co-registration and orthorectification chain has been developed [Sidiropoulos & Muller, this conference]. Using the HRSC map products (both mosaics and orbital strips) as a map-base, it is being applied to many of the 400,000 level-1 EDR images taken since 1976 by four NASA orbital cameras: the Viking Orbiter camera (VO), the Mars Orbiter Camera (MOC), the Context Camera (CTX), and the High Resolution Imaging Science Experiment (HiRISE). A webGIS has been developed [van Gasselt et al., this conference] for displaying this time sequence of imagery and will be demonstrated using an example from one of the HRSC quadrangle map-sheets. Automated quality control [Sidiropoulos & Muller, 2015] techniques are applied to screen for
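    A common building block of such automated co-registration chains is estimating the translational offset between two images by phase correlation. The sketch below implements it with plain FFTs on synthetic data; the project's actual chain (CASP-GO and the co-registration pipeline cited above) is of course far more elaborate.

```python
# Minimal translational co-registration by phase correlation.
import numpy as np

def phase_correlate(ref, moving):
    """Return the integer (row, col) shift that best aligns `moving` to `ref`."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(moving)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12           # normalised cross-power spectrum
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (handle wrap-around)
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic test: shift an image by (5, -3) and recover the offset.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlate(shifted, img))          # expect (5, -3)
```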

  3. Prismatic analyser concept for neutron spectrometers

    SciTech Connect

    Birk, Jonas O.; Jacobsen, Johan; Hansen, Rasmus L.; Lefmann, Kim; Markó, Márton; Niedermayer, Christof; Freeman, Paul G.; Christensen, Niels B.; Månsson, Martin; Rønnow, Henrik M.

    2014-11-15

    Developments in modern neutron spectroscopy have led to typical sample sizes decreasing from a few cm to several mm in diameter. We demonstrate how small samples, together with the right choice of analyser and detector components, make distance collimation an important concept in crystal analyser spectrometers. We further show that this opens new possibilities in which neutrons with different energies are reflected by the same analyser but counted in different detectors, improving both energy resolution and total count rate compared to conventional spectrometers. The technique can readily be combined with advanced focussing geometries and with multiplexing instrument designs. We present a combination of simulations and data showing three different energies simultaneously reflected from one analyser. Experiments were performed on a cold triple-axis instrument and on a prototype inverse-geometry time-of-flight spectrometer installed at PSI, Switzerland, and show excellent agreement with the predictions. Typical improvements will be 2.0 times finer resolution and a factor of 1.9 in flux gain compared to a focussing Rowland geometry, or 3.3 times finer resolution and a factor of 2.4 in flux gain compared to a single flat analyser slab.
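    The underlying selection rule is just Bragg's law: detectors at slightly different take-off angles from the same crystal see different energies. A back-of-envelope sketch, assuming a pyrolytic-graphite (002) analyser (standard d-spacing) and arbitrary illustrative angles:

```python
# Neutron energies selected by one analyser at different take-off angles.
import math

D_PG002 = 3.355          # Angstrom, pyrolytic-graphite (002) d-spacing
E_FROM_LAMBDA = 81.81    # meV * Angstrom^2, since E [meV] = 81.81 / lambda^2

def bragg_energy_meV(two_theta_deg, d=D_PG002):
    lam = 2.0 * d * math.sin(math.radians(two_theta_deg / 2.0))   # Bragg, n = 1
    return E_FROM_LAMBDA / lam ** 2

# Three detectors viewing the same analyser under slightly different angles
for two_theta in (74.0, 76.0, 78.0):       # illustrative values
    print(f"2theta = {two_theta:5.1f} deg -> E = {bragg_energy_meV(two_theta):5.2f} meV")
```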

  4. Geomagnetic local and regional harmonic analyses.

    USGS Publications Warehouse

    Alldredge, L.R.

    1982-01-01

    Procedures are developed for using rectangular and cylindrical harmonic analyses in local and regional areas. Both the linear least squares analysis, applicable when component data are available, and the nonlinear least squares analysis, applicable when only total field data are available, are treated. When component data are available, it is advantageous to work with residual fields obtained by subtracting components derived from a harmonic potential from the observed components. When only total field intensity data are available, they must be used directly; residual values cannot be used. Cylindrical harmonic analyses are indicated when fields tend toward cylindrical symmetry; otherwise, rectangular harmonic analyses will be more advantageous. Examples illustrating each type of analysis are given. (Author)
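    For the component-data case, the linear least-squares step can be illustrated compactly: build a design matrix of low-order harmonics over the survey area and solve for the coefficients. The cosine-product basis and synthetic residuals below are simplified stand-ins for the paper's rectangular harmonic formulation:

```python
# Linear least-squares fit of a simplified harmonic expansion to
# synthetic field-component residuals.
import numpy as np

rng = np.random.default_rng(2)
x, y = np.meshgrid(np.linspace(0.0, 1.0, 25), np.linspace(0.0, 1.0, 25))
X, Y = x.ravel(), y.ravel()

# Design matrix of low-order harmonics over the (normalised) survey area
cols, labels = [], []
for n in range(3):
    for m in range(3):
        cols.append(np.cos(2 * np.pi * n * X) * np.cos(2 * np.pi * m * Y))
        labels.append(f"cos({n}x)cos({m}y)")
A = np.column_stack(cols)

# Synthetic residuals: one 40 nT harmonic term plus 2 nT noise
truth = np.zeros(len(labels))
truth[labels.index("cos(1x)cos(2y)")] = 40.0
b = A @ truth + rng.normal(scale=2.0, size=A.shape[0])

coef, *_ = np.linalg.lstsq(A, b, rcond=None)
for lab, c in zip(labels, coef):
    if abs(c) > 1.0:
        print(f"{lab}: {c:6.2f} nT")      # recovers ~40 nT on cos(1x)cos(2y)
```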

  5. NEUTRONICS ANALYSES FOR SNS TARGETS DEPOSITIONS

    SciTech Connect

    Popova, Irina I; Remec, Igor; Gallmeier, Franz X

    2016-01-01

    In order to dispose of Spallation Neutron Source (SNS) facility components replaced due to end-of-life radiation-induced material damage or burn-up, or because of mechanical failure or design improvements, waste classification analyses are being performed. These analyses include an accurate estimate of the radionuclide inventory, on the basis of which components are classified and an appropriate container for transport and storage is determined. Once the container is chosen, transport calculations are performed for the facility component placed inside it, ensuring compliance with waste-management regulations; additional shielding is added when necessary. Most of the effort is concentrated on target disposition, which normally takes place once or twice per year. Additionally, the second target station (STS) is being designed, and waste-management analyses for the STS target are being developed to support a disposition plan.
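    The radionuclide-inventory step ultimately feeds simple decay arithmetic: each nuclide's activity is decayed to the planned disposal date and compared against per-nuclide limits. A toy sketch with standard half-lives but invented activities and limits:

```python
# Decay a radionuclide inventory to a disposal date and check limits.
# Activities and limits are invented placeholders; half-lives are standard.
import math

# nuclide: (activity at shutdown [Ci], half-life [y], hypothetical limit [Ci])
inventory = {
    "Co-60": (120.0, 5.27, 60.0),
    "H-3":   (300.0, 12.32, 400.0),
    "Fe-55": (500.0, 2.74, 250.0),
}

cooling_years = 10.0
for nuclide, (a0, t_half, limit) in inventory.items():
    a = a0 * math.exp(-math.log(2) * cooling_years / t_half)   # A = A0 * 2^(-t/T)
    status = "OK" if a <= limit else "exceeds limit"
    print(f"{nuclide:6s} {a:8.2f} Ci after {cooling_years:.0f} y  ({status})")
```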

  6. Proteomic Analyses of the Vitreous Humour

    PubMed Central

    Angi, Martina; Kalirai, Helen; Coupland, Sarah E.; Damato, Bertil E.; Semeraro, Francesco; Romano, Mario R.

    2012-01-01

    The human vitreous humour (VH) is a transparent, highly hydrated gel, which occupies the posterior segment of the eye between the lens and the retina. Physiological and pathological conditions of the retina are reflected in the protein composition of the VH, which can be sampled as part of routine surgical procedures. Historically, many studies have investigated levels of individual proteins in VH from healthy and diseased eyes. In the last decade, proteomic analyses have been performed to characterise the proteome of the human VH and explore networks of functionally related proteins, providing insight into the aetiology of diabetic retinopathy and proliferative vitreoretinopathy. Recent proteomic studies on the VH from animal models of autoimmune uveitis have identified new signalling pathways associated with autoimmune triggers and intravitreal inflammation. This paper aims to guide biological scientists through the different proteomic techniques that have been used to analyse the VH and presents future perspectives for the study of intravitreal inflammation using proteomic analyses. PMID:22973072

  7. A qualitative method for analysing multivoicedness

    PubMed Central

    Aveling, Emma-Louise; Gillespie, Alex; Cornish, Flora

    2015-01-01

    ‘Multivoicedness’ and the ‘multivoiced Self’ have become important theoretical concepts guiding research. Drawing on the tradition of dialogism, the Self is conceptualised as being constituted by a multiplicity of dynamic, interacting voices. Despite the growth in literature and empirical research, there remains a paucity of established methodological tools for analysing the multivoiced Self using qualitative data. In this article, we set out a systematic, practical ‘how-to’ guide for analysing multivoicedness. Using theoretically derived tools, our three-step method comprises: identifying the voices of I-positions within the Self’s talk (or text), identifying the voices of ‘inner-Others’, and examining the dialogue and relationships between the different voices. We elaborate each step and illustrate our method using examples from a published paper in which data were analysed using this method. We conclude by offering more general principles for the use of the method and discussing potential applications. PMID:26664292

  8. Potentials and limits to basin stability estimation

    NASA Astrophysics Data System (ADS)

    Schultz, Paul; Menck, Peter J.; Heitzig, Jobst; Kurths, Jürgen

    2017-02-01

    Stability assessment methods for dynamical systems have recently been complemented by basin stability and derived measures, i.e. probabilistic statements of whether systems remain in a basin of attraction given a distribution of perturbations. Their application requires numerical estimation via Monte Carlo sampling and integration of differential equations. Here, we analyse the applicability of basin stability to systems with basin geometries that are challenging for this numerical method, namely fractal basin boundaries and riddled or intermingled basins of attraction. We find that numerical basin stability estimation is still meaningful for fractal boundaries but reaches its limits for riddled basins with holes.
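    For a well-behaved system, the estimator is straightforward Monte Carlo: draw perturbations, integrate, and count the fraction that return to the attractor. A minimal sketch for a bistable toy system follows; the paper's point is precisely that this recipe degrades for riddled basins.

```python
# Monte Carlo basin-stability estimate for dx/dt = x - x^3
# (attractors at x = +1 and x = -1). System and perturbation
# distribution are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)

def rhs(t, x):
    return x - x ** 3

hits, n_samples = 0, 1000
for x0 in rng.uniform(-4.0, 4.0, n_samples):   # perturbation distribution
    sol = solve_ivp(rhs, (0.0, 50.0), [x0])
    if abs(sol.y[0, -1] - 1.0) < 1e-2:         # converged to x = +1?
        hits += 1

print(f"basin stability of x = +1: {hits / n_samples:.3f}  (expected ~0.5)")
```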

  9. Cryocoolers near their low-temperature limit

    NASA Astrophysics Data System (ADS)

    de Waele, A. T. A. M.

    2015-07-01

    This paper analyses the recently observed temperature-time dependence in a GM-cooler near its low-temperature limit. The paper mainly focusses on GM-coolers with 4He as the working fluid, but some attention is also paid to pulse-tube refrigerators (PTRs) using 3He, and many features of the treatment apply equally to Stirling coolers. Ample attention is paid to the thermodynamics of the cycle by considering the isentropes in the T-p diagrams of 4He and 3He. The role of the line where the thermal expansion coefficient is zero is emphasized. Some fundamental thermodynamic relationships are derived.

  10. Force limit specifications vs. design limit loads in vibration testing

    NASA Technical Reports Server (NTRS)

    Chang, K. Y.

    2000-01-01

    The purpose of the work presented herein is to discuss the results of force-limit notching during vibration testing with respect to the traditional limit-load design criteria. Using a single-degree-of-freedom (SDOF) system approach, this work shows that, with an appropriate force specification, the notched response due to force limiting will result in loads comparable with the structural design limit criteria.
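    The mechanics can be seen with the widely used semi-empirical force limit F_lim = C^2 * M * A(f) applied to an SDOF model: the input acceleration is notched wherever the unnotched interface force would exceed the limit. All numbers below are illustrative, and the semi-empirical form is a standard choice rather than necessarily the specification used in this paper.

```python
# Force-limited notching of an SDOF test item (illustrative values only).
import numpy as np

M = 50.0            # test-item mass, kg
F0 = 60.0           # resonance frequency, Hz
Q = 20.0            # amplification at resonance (2*zeta = 1/Q)
C2 = 4.0            # semi-empirical constant C^2

freqs = np.linspace(20.0, 200.0, 500)
accel_spec = np.full_like(freqs, 10.0)      # flat input spec, m/s^2

# SDOF transmissibility -> apparent interface force M * |H| * A
r = freqs / F0
H = np.sqrt((1 + (r / Q) ** 2) / ((1 - r ** 2) ** 2 + (r / Q) ** 2))
interface_force = M * H * accel_spec        # unnotched interface force, N

force_limit = C2 * M * accel_spec           # semi-empirical force limit, N
notched_input = accel_spec * np.minimum(1.0, force_limit / interface_force)

print(f"max transmissibility    : {H.max():.1f}")
print(f"notch depth at resonance: {notched_input.min() / 10.0:.2f} of spec")
```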

  11. Advanced laser stratospheric monitoring systems analyses

    NASA Technical Reports Server (NTRS)

    Larsen, J. C.

    1984-01-01

    This report describes the software support supplied by Systems and Applied Sciences Corporation for the study of Advanced Laser Stratospheric Monitoring Systems Analyses under contract No. NAS1-15806. It discusses improvements to the Langley spectroscopic data base and the development of LHS instrument-control software and data-analysis and validation software. The effect of diurnal variations on the retrieved concentrations of NO, NO2, and ClO from space- and balloon-borne measurement platforms is discussed, along with the selection of optimum IF channels for sensing stratospheric species from space.

  12. Analysing particulate deposition to plant canopies

    NASA Astrophysics Data System (ADS)

    Bache, D. H.

    Experimental measurements of the deposition of Lycopodium spores to a plant canopy were analysed to generate specific estimates of the relative significance of sedimentation, impaction, and the effective foliage density f_p. For the particular case analysed, impaction appeared to be the dominant trapping mechanism, and it was demonstrated that considerable aerodynamic shading was present. Using an estimate of f_p, a consistent picture emerged of the behaviour of the canopy, both wet and dry, when tested against independent data on the trapping characteristics of individual elements. These conclusions differed significantly from those derived using a model in which impaction was neglected, which led to an apparent overestimate of f_p.
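    For orientation, the two mechanisms can be compared with textbook formulas: Stokes settling for sedimentation, and a Stokes-number fit for impaction efficiency on a foliage element. The particle properties below approximate Lycopodium spores; the wind speed, element width, and the particular empirical fit are illustrative assumptions, not values from the study.

```python
# Order-of-magnitude comparison of sedimentation vs. impaction for
# Lycopodium-sized spores (~30 um). Canopy parameters are invented.
import math

RHO_P = 1000.0     # particle density, kg/m^3 (approximate for spores)
D_P = 30e-6        # particle diameter, m
MU = 1.8e-5        # dynamic viscosity of air, Pa s
G = 9.81

v_s = RHO_P * D_P ** 2 * G / (18 * MU)      # Stokes settling velocity, m/s

u = 2.0            # wind speed within canopy, m/s (illustrative)
d_el = 2e-3        # characteristic foliage-element width, m (illustrative)
tau = RHO_P * D_P ** 2 / (18 * MU)          # particle relaxation time, s
St = tau * u / d_el                         # Stokes number
eff = (St / (St + 0.8)) ** 2                # one common empirical impaction fit

print(f"settling velocity : {v_s * 100:.2f} cm/s")
print(f"Stokes number     : {St:.2f}, impaction efficiency ~ {eff:.2f}")
```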

  13. Metabolic flux analyses for serine alkaline protease production.

    PubMed

    Çalik; Çalik; Takaç; Özdamar

    2000-12-01

    The intracellular metabolic fluxes through the central carbon pathways of Bacillus licheniformis during serine alkaline protease (SAP) production were calculated to predict potential strategies for increasing the performance of the bacilli, using two optimization approaches: the theoretical data-based (TDA) analysis and the theoretical data-based capacity (TDC) analysis, based respectively on the assumptions of minimum in-vivo SAP accumulation rate and maximum SAP synthesis rate, at low-, medium-, and high-oxygen-transfer conditions. At all periods of the low-oxygen-transfer condition, in the lag and early exponential periods of the medium-oxygen-transfer (MOT) condition, and in the SAP synthesis period of the high-oxygen-transfer (HOT) condition, the TDA and TDC analyses revealed that the SAP overproduction capacity is almost equal to the observed SAP production, owing to the regulating effect of oxygen transfer. In the growth and early SAP synthesis period and in the SAP synthesis period at the MOT condition, the calculated results of the two analyses reveal that the SAP synthesis rate of the microorganism can be increased 7.2- and 16.7-fold, respectively, whereas in the growth and early SAP synthesis period at the HOT condition it can be increased 12.6-fold. The diversions in the biochemical reaction network and the influence of oxygen transfer on the performance of the bacilli are also presented. The results encourage the application of metabolic engineering to lift the rate limitations and to improve the genetic regulation in order to increase SAP production.
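    Metabolic flux analysis of this kind reduces to solving a stoichiometric steady-state system S v = 0 augmented with measured exchange fluxes. The sketch below uses a three-reaction toy network, not the B. licheniformis model used in the study:

```python
# Toy metabolic-flux analysis: steady state S v = 0 plus one measured flux.
import numpy as np

# Rows: internal metabolites A, B; columns: reactions
#   r0: -> A      r1: A -> B      r2: B -> (product)
S = np.array([
    [1.0, -1.0,  0.0],   # metabolite A balance
    [0.0,  1.0, -1.0],   # metabolite B balance
])

# Steady state (S v = 0) plus a measured uptake flux r0 = 10 mmol/gDW/h
A_eq = np.vstack([S, [1.0, 0.0, 0.0]])
b_eq = np.array([0.0, 0.0, 10.0])

v, *_ = np.linalg.lstsq(A_eq, b_eq, rcond=None)
print("fluxes r0..r2:", np.round(v, 3))    # expect [10, 10, 10]
```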

  14. Limiting depth of magnetization in cratonic lithosphere

    NASA Technical Reports Server (NTRS)

    Toft, Paul B.; Haggerty, Stephen E.

    1988-01-01

    Values of magnetic susceptibility and natural remanent magnetization (NRM) of clinopyroxene-garnet-plagioclase granulite-facies lower-crustal xenoliths from a kimberlite in West Africa are correlated with bulk geochemistry and specific gravity. Thermomagnetic and alternating-field demagnetization analyses identify magnetite (Mt) and native iron as the dominant magnetic phases (together totaling not more than 0.1 vol % of the rocks), along with subsidiary sulfides. Oxidation states of the granulites are no greater than MW; the observed Mt occurs as rims on coarse (about 1 micron) Fe particles, and the inferred single-domain to pseudo-single-domain Mt may be a result of oxidation of fine-grained Fe. The deepest limit of lithospheric ferromagnetism is 95 km, but a limit of 70 km is most reasonable for the West African Craton and for modeling Magsat anomalies over exposed Precambrian shields.
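    The depth limit follows from where the geotherm crosses the Curie temperature of magnetite (about 580 C). A rough sketch with illustrative cratonic geothermal gradients (the gradients and the purely linear geotherm are assumptions, not values from the paper) reproduces the scale of the 70-95 km figures:

```python
# Depth at which a linear cratonic geotherm reaches the magnetite
# Curie temperature. Gradients are illustrative assumptions.
T_SURFACE = 0.0            # deg C
CURIE_MAGNETITE = 580.0    # deg C, standard value for magnetite

def curie_depth_km(gradient_c_per_km):
    return (CURIE_MAGNETITE - T_SURFACE) / gradient_c_per_km

for grad in (6.0, 8.0, 10.0):   # low cratonic geothermal gradients, C/km
    print(f"{grad:4.1f} C/km -> Curie depth ~ {curie_depth_km(grad):5.0f} km")
```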

  15. Limitations and challenges of genetic barcode quantification

    PubMed Central

    Thielecke, Lars; Aranyossy, Tim; Dahl, Andreas; Tiwari, Rajiv; Roeder, Ingo; Geiger, Hartmut; Fehse, Boris; Glauche, Ingmar; Cornils, Kerstin

    2017-01-01

    Genetic barcodes are increasingly used to track individual cells and to quantitatively assess their clonal contributions over time. Although barcode quantification relies entirely on counting sequencing reads, detailed studies of the method’s accuracy are still limited. We report on a systematic investigation of the relation between barcode abundance and resulting read counts after amplification and sequencing, using cell mixtures that contain barcodes with known frequencies (“miniBulks”). We evaluated the influence of protocol modifications to identify potential sources of error and elucidate possible limitations of the quantification approach. Based on these findings, we designed an advanced barcode construct (BC32) to improve barcode calling and quantification and to ensure sensitive detection of even highly diluted barcodes. Our results emphasize the importance of using curated barcode libraries to obtain interpretable quantitative data and underline the need for rigorous analyses of any utilized barcode library in terms of reliability and reproducibility. PMID:28256524
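    The miniBulk idea can be mimicked in a few lines: simulate read counts from known barcode frequencies with a per-barcode amplification bias and compare input to observed frequencies. All numbers below are invented for illustration; the study's real libraries and bias sources are of course more complex.

```python
# Simulated "miniBulk": known barcode frequencies vs. observed read counts.
import numpy as np

rng = np.random.default_rng(4)
known_freq = np.array([0.40, 0.30, 0.15, 0.10, 0.04, 0.01])

bias = rng.lognormal(mean=0.0, sigma=0.3, size=known_freq.size)  # PCR-like bias
p = known_freq * bias
p /= p.sum()
reads = rng.multinomial(1_000_000, p)     # sequencing as multinomial draw

observed = reads / reads.sum()
for k, o in zip(known_freq, observed):
    print(f"input {k:6.2%}  ->  observed {o:6.2%}")
print("Pearson r:", np.corrcoef(known_freq, observed)[0, 1].round(3))
```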

  16. Automated Quality Assurance of Online NIR Analysers

    PubMed Central

    Aaljoki, Kari

    2005-01-01

    Modern NIR analysers produce valuable data for closed-loop process control and optimisation practically in real time, so it is highly important to keep them in the best possible shape. Quality assurance (QA) of NIR analysers is an interesting and complex issue because it is not only the instrument and sample handling that have to be monitored: the validity of the prediction models has to be assured at the same time. A system for fully automated QA of NIR analysers is described. The system takes care of collecting and organising spectra from various instruments, together with relevant laboratory and process management system (PMS) data. Validation of spectra is based on simple diagnostic values derived from the spectra. Predictions are validated against laboratory (LIMS) or other online analyser results (collected from the PMS). The system features automated alarming, reporting, trending, and charting functions for the major key variables, for easy visual inspection. Various textual and graphical reports are sent to maintenance staff by email. The software was written in Borland Delphi 7 Enterprise. Oracle and PMS ODBC interfaces were used for accessing LIMS and PMS data with appropriate SQL queries. It is shown that it is possible to take action even before the quality of predictions is seriously affected, thus maximising the overall uptime of the instrument. PMID:18924628
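    One of the validation functions described, checking online predictions against later laboratory results, can be sketched as a rolling bias/RMSEP monitor with alarm thresholds. The class name, thresholds, and data below are hypothetical, not taken from the Delphi system the paper describes:

```python
# Rolling validation of online NIR predictions against LIMS lab results.
from collections import deque
import math

class PredictionValidator:
    def __init__(self, window=20, bias_limit=0.5, rmsep_limit=1.0):
        self.residuals = deque(maxlen=window)   # rolling prediction errors
        self.bias_limit, self.rmsep_limit = bias_limit, rmsep_limit

    def add(self, nir_prediction, lab_value):
        self.residuals.append(nir_prediction - lab_value)
        return self.status()

    def status(self):
        n = len(self.residuals)
        bias = sum(self.residuals) / n
        rmsep = math.sqrt(sum(r * r for r in self.residuals) / n)
        if abs(bias) > self.bias_limit or rmsep > self.rmsep_limit:
            return f"ALARM (bias={bias:+.2f}, RMSEP={rmsep:.2f})"
        return f"OK    (bias={bias:+.2f}, RMSEP={rmsep:.2f})"

v = PredictionValidator()
for pred, lab in [(93.1, 93.0), (92.8, 93.2), (95.9, 93.1)]:  # e.g. octane number
    print(v.add(pred, lab))       # third pair drives the monitor into ALARM
```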

  17. Cosmetology: Task Analyses. Competency-Based Education.

    ERIC Educational Resources Information Center

    Henrico County Public Schools, Glen Allen, VA. Virginia Vocational Curriculum Center.

    These task analyses are designed to be used in combination with the "Trade and Industrial Education Service Area Resource" in order to implement competency-based education in the cosmetology program in Virginia. The task analysis document contains the task inventory, suggested task sequence lists, and content outlines for the secondary…

  18. The Economic Cost of Homosexuality: Multilevel Analyses

    ERIC Educational Resources Information Center

    Baumle, Amanda K.; Poston, Dudley, Jr.

    2011-01-01

    This article builds on earlier studies that have examined "the economic cost of homosexuality," by using data from the 2000 U.S. Census and by employing multilevel analyses. Our findings indicate that partnered gay men experience a 12.5 percent earnings penalty compared to married heterosexual men, and a statistically insignificant earnings…

  19. Written Case Analyses and Critical Reflection.

    ERIC Educational Resources Information Center

    Harrington, Helen L.; And Others

    1996-01-01

    The study investigated the use of case-based pedagogy to develop critical reflection in prospective teachers. Analysis of students' written analyses of dilemma-based cases found patterns showing evidence of students' open-mindedness, sense of professional responsibility, and wholeheartedness in their approach to teaching. (DB)

  20. Chemical Analyses of Silicon Aerogel Samples

    SciTech Connect

    van der Werf, I.; Palmisano, F.; De Leo, Raffaele; Marrone, Stefano

    2008-04-01

    After five years of operation, two aerogel counters, A1 and A2, taking data in Hall A at Jefferson Lab suffered a loss of performance. In this note, possible causes of the degradation are studied. In particular, various chemical and physical analyses have been carried out on several aerogel tiles and on adhesive tape in order to reveal the presence of contaminants.