Sample records for scale analysis techniques

  1. A variance-decomposition approach to investigating multiscale habitat associations

    USGS Publications Warehouse

    Lawler, J.J.; Edwards, T.C.

    2006-01-01

    The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
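
    A minimal sketch of the variance-partitioning idea described above: fit models with each scale's predictors separately and jointly, then split the explained variation into scale-unique and shared (cross-scale) components. The paper partitions deviance from nest-site models at three scales; this two-scale OLS version, with synthetic data and invented coefficients, is purely illustrative.

    ```python
    # Variance (R^2) partitioning across two spatial scales, in the spirit of
    # the variance-decomposition approach described above. Synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    local = rng.normal(size=(n, 2))                    # hypothetical local-scale habitat variables
    landscape = 0.6 * local + rng.normal(size=(n, 2))  # deliberately cross-scale correlated
    y = local @ [1.0, 0.5] + landscape @ [0.8, 0.2] + rng.normal(size=n)

    def r2(X, y):
        X1 = np.column_stack([np.ones(len(X)), X])
        beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
        return 1 - (y - X1 @ beta).var() / y.var()

    r_all = r2(np.hstack([local, landscape]), y)
    r_loc, r_land = r2(local, y), r2(landscape, y)
    unique_local = r_all - r_land        # variation explained only at the local scale
    unique_land = r_all - r_loc          # variation explained only at the landscape scale
    shared = r_loc + r_land - r_all      # cross-scale correlated component
    print(f"unique local={unique_local:.2f}, unique landscape={unique_land:.2f}, shared={shared:.2f}")
    ```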

  2. Soilscopes for soilscapes

    USDA-ARS's Scientific Manuscript database

    The objective of this presentation is to provoke discussion on the status of scale concepts and techniques in soil systems analyses that operate with data collected at different scales and have to overcome the scale mismatch among components of knowledge acquisition, packaging and use for societal needs...

  3. Surface diagnostics for scale analysis.

    PubMed

    Dunn, S; Impey, S; Kimpton, C; Parsons, S A; Doyle, J; Jefferson, B

    2004-01-01

    Stainless steel, polymethylmethacrylate and polytetrafluoroethylene coupons were analysed for surface topographical and adhesion force characteristics using tapping mode atomic force microscopy and force-distance microscopy techniques. The two polymer materials were surface modified by polishing with silicon carbide papers of known grade. The struvite scaling rate was determined for each coupon and related to the data gained from the surface analysis. The scaling rate correlated well with adhesion force measurements indicating that lower energy materials scale at a lower rate. The techniques outlined in the paper provide a method for the rapid screening of materials in potential scaling applications.

  4. Problems of allometric scaling analysis: examples from mammalian reproductive biology.

    PubMed

    Martin, Robert D; Genoud, Michel; Hemelrijk, Charlotte K

    2005-05-01

    Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than suspected earlier because nested analyses of variance conducted on residual variation (rather than on raw values) reveal that there is considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.

  5. Pairwise-Comparison Software

    NASA Technical Reports Server (NTRS)

    Ricks, Wendell R.

    1995-01-01

    Pairwise comparison (PWC) is a computer program that collects data for psychometric scaling techniques now used in cognitive research. It applies the technique of pairwise comparisons, one of many techniques commonly used to acquire the data necessary for such analyses. PWC administers the task, collects data from the test subject, and formats the data for analysis. Written in Turbo Pascal v6.0.
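
    One classic way to turn pairwise-comparison tallies of the kind PWC collects into an interval scale is Thurstone's Case V (z-scores of choice proportions). The record does not say which scaling method researchers applied to PWC's output, so treat the simulated judgments and the Case V step below as an illustrative assumption.

    ```python
    # Minimal sketch of the paired-judgment bookkeeping PWC automates, followed
    # by Thurstone Case V scaling of the choice proportions. Data are invented.
    from itertools import combinations
    import numpy as np
    from scipy.stats import norm

    stimuli = ["A", "B", "C", "D"]
    n_trials = 20                                   # hypothetical judgments per pair
    true_strength = {"A": 0.0, "B": 0.5, "C": 1.0, "D": 2.0}
    rng = np.random.default_rng(1)

    # simulate the task: every stimulus paired with every other, choices tallied
    wins = {}
    for a, b in combinations(stimuli, 2):
        p = norm.cdf(true_strength[a] - true_strength[b])
        wins[(a, b)] = rng.binomial(n_trials, p)

    # choice-proportion matrix -> z-scores -> row means = Case V scale values
    k = len(stimuli)
    P = np.full((k, k), 0.5)
    for (a, b), w in wins.items():
        i, j = stimuli.index(a), stimuli.index(b)
        P[i, j] = w / n_trials                      # proportion of trials where a beat b
        P[j, i] = 1 - P[i, j]
    Z = norm.ppf(np.clip(P, 0.01, 0.99))
    scale = Z.mean(axis=1)                          # interval scale, arbitrary origin
    print(dict(zip(stimuli, scale.round(2))))
    ```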

  6. Identifying scales of pattern in ecological data: a comparison of lacunarity, spectral and wavelet analyses

    Treesearch

    Sari C. Saunders; Jiquan Chen; Thomas D. Drummer; Eric J. Gustafson; Kimberley D. Brosofske

    2005-01-01

    Identifying scales of pattern in ecological systems and coupling patterns to processes that create them are ongoing challenges. We examined the utility of three techniques (lacunarity, spectral, and wavelet analysis) for detecting scales of pattern of ecological data. We compared the information obtained using these methods for four datasets, including: surface...
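
    Of the three techniques compared, gliding-box lacunarity is the easiest to sketch: slide an r x r window over the map and, at each box size, compare the variance of the box "mass" to its squared mean. The binary presence map below is synthetic.

    ```python
    # Gliding-box lacunarity: Lambda(r) = Var(S)/Mean(S)^2 + 1, where S is the
    # sum inside every r x r window of the map.
    import numpy as np
    from numpy.lib.stride_tricks import sliding_window_view

    rng = np.random.default_rng(2)
    grid = (rng.random((128, 128)) < 0.2).astype(float)  # hypothetical presence map

    def lacunarity(grid, r):
        windows = sliding_window_view(grid, (r, r))      # all gliding boxes
        masses = windows.sum(axis=(-1, -2)).ravel()
        m = masses.mean()
        return masses.var() / m**2 + 1.0

    for r in (2, 4, 8, 16, 32):
        print(r, round(lacunarity(grid, r), 3))
    # Plotting log(lacunarity) vs log(r) and looking for breaks in slope is a
    # common way to read off characteristic scales of pattern.
    ```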

  7. The use of single-date MODIS imagery for estimating large-scale urban impervious surface fraction with spectral mixture analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Wu, Changshan

    2013-12-01

    Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
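
    The core of unconstrained SMA, and of the least squares solution (LSS) endmember step described above, is two calls to an ordinary least-squares solver. The sketch below uses synthetic stand-ins for MODIS reflectances and three hypothetical endmembers; it is not the authors' code.

    ```python
    # (1) LSS: derive endmember spectra from training pixels with known
    # abundances; (2) unconstrained SMA: unmix a new pixel against them.
    import numpy as np

    rng = np.random.default_rng(3)
    n_train, n_end, n_bands = 300, 3, 7          # e.g. impervious / vegetation / soil
    A = rng.dirichlet(np.ones(n_end), n_train)   # known training abundances
    S_true = rng.random((n_end, n_bands))        # unknown endmember spectra
    X = A @ S_true + rng.normal(0, 0.01, (n_train, n_bands))

    # (1) solve A @ S = X for the endmember signatures S
    S_hat, *_ = np.linalg.lstsq(A, X, rcond=None)

    # (2) unmix a pixel by solving S_hat.T @ a = x
    pixel = np.array([0.6, 0.3, 0.1]) @ S_true
    a_hat, *_ = np.linalg.lstsq(S_hat.T, pixel, rcond=None)
    print(a_hat.round(2))   # estimated fractions (not constrained to sum to 1)
    ```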

  8. The Relationship between Classroom Management Strategies and Student Misbehaviors.

    ERIC Educational Resources Information Center

    Skiba, Russell J.

    Because research has determined that specific management techniques can have an effect on the classroom behavior of students, an observational rating scale was developed to assess the type of management techniques six elementary teachers in a program for behaviorally disordered children used to control behavior. Correlational analyses were used to…

  9. Lightweight and Statistical Techniques for Petascale Debugging: Correctness on Petascale Systems (CoPS) Preliminary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Supinski, B R; Miller, B P; Liblit, B

    2011-09-13

    Petascale platforms with O(10^5) and O(10^6) processing cores are driving advancements in a wide range of scientific disciplines. These large systems create unprecedented application development challenges. Scalable correctness tools are critical to shorten the time-to-solution on these systems. Currently, many DOE application developers use primitive manual debugging based on printf or traditional debuggers such as TotalView or DDT. This paradigm breaks down beyond a few thousand cores, yet bugs often arise above that scale. Programmers must reproduce problems in smaller runs to analyze them with traditional tools, or else perform repeated runs at scale using only primitive techniques. Even when traditional tools run at scale, the approach wastes substantial effort and computation cycles. Continued scientific progress demands new paradigms for debugging large-scale applications. The Correctness on Petascale Systems (CoPS) project is developing a revolutionary debugging scheme that will reduce the debugging problem to a scale that human developers can comprehend. The scheme can provide precise diagnoses of the root causes of failure, including suggestions of the location and the type of errors down to the level of code regions or even a single execution point. Our fundamentally new strategy combines and expands three relatively new complementary debugging approaches. The Stack Trace Analysis Tool (STAT), a 2011 R&D 100 Award Winner, identifies behavior equivalence classes in MPI jobs and highlights behavior when elements of the class demonstrate divergent behavior, often the first indicator of an error. The Cooperative Bug Isolation (CBI) project has developed statistical techniques for isolating programming errors in widely deployed code that we will adapt to large-scale parallel applications. Finally, we are developing a new approach to parallelizing expensive correctness analyses, such as analysis of memory usage in the Memgrind tool. In the first two years of the project, we have successfully extended STAT to determine the relative progress of different MPI processes. We have shown that STAT, which is now included in the debugging tools distributed by Cray with their large-scale systems, substantially reduces the scale at which traditional debugging techniques are applied. We have extended CBI to large-scale systems and developed new compiler-based analyses that reduce its instrumentation overhead. Our results demonstrate that CBI can identify the source of errors in large-scale applications. Finally, we have developed MPIecho, a new technique that will reduce the time required to perform key correctness analyses, such as the detection of writes to unallocated memory. Overall, our research results are the foundations for new debugging paradigms that will improve application scientist productivity by reducing the time to determine which package or module contains the root cause of a problem that arises at all scales of our high end systems. While we have made substantial progress in the first two years of CoPS research, significant work remains. While STAT provides scalable debugging assistance for incorrect application runs, we could apply its techniques to assertions in order to observe deviations from expected behavior. Further, we must continue to refine STAT's techniques to represent behavioral equivalence classes efficiently as we expect systems with millions of threads in the next year. We are exploring new CBI techniques that can assess the likelihood that execution deviations from past behavior are the source of erroneous execution. Finally, we must develop usable correctness analyses that apply the MPIecho parallelization strategy in order to locate coding errors. We expect to make substantial progress on these directions in the next year but anticipate that significant work will remain to provide usable, scalable debugging paradigms.
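
    The behavior-equivalence-class idea behind STAT can be illustrated with a toy grouping of call stacks: ranks with identical stacks collapse into one class, and the small classes are the ones worth inspecting. The stack strings below are invented; STAT itself attaches to live MPI jobs.

    ```python
    # Toy illustration of stack-trace equivalence classes: a million-rank job
    # reduces to a handful of groups a human can inspect.
    from collections import defaultdict

    stacks = {
        0: ("main", "solve", "MPI_Waitall"),
        1: ("main", "solve", "MPI_Waitall"),
        2: ("main", "solve", "compute_flux"),   # the outlier worth inspecting
        3: ("main", "solve", "MPI_Waitall"),
    }

    classes = defaultdict(list)
    for rank, stack in stacks.items():
        classes[stack].append(rank)

    for stack, ranks in sorted(classes.items(), key=lambda kv: len(kv[1])):
        print(f"{len(ranks)} rank(s) {ranks}: {' -> '.join(stack)}")
    ```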

  10. Non-contact tensile viscoelastic characterization of microscale biological materials

    NASA Astrophysics Data System (ADS)

    Li, Yuhui; Hong, Yuan; Xu, Guang-Kui; Liu, Shaobao; Shi, Qiang; Tang, Deding; Yang, Hui; Genin, Guy M.; Lu, Tian Jian; Xu, Feng

    2018-06-01

    Many structures and materials in nature and physiology have important "meso-scale" structures at the micron length-scale whose tensile responses have proven difficult to characterize mechanically. Although techniques such as atomic force microscopy and micro- and nano-indentation are mature for compression and indentation testing at the nano-scale, and standard uniaxial and shear rheometry techniques exist for the macroscale, few techniques are applicable for tensile-testing at the micrometre-scale, leaving a gap in our understanding of hierarchical biomaterials. Here, we present a novel magnetic mechanical testing (MMT) system that enables viscoelastic tensile testing at this critical length scale. The MMT system applies non-contact loading, avoiding gripping and surface interaction effects. We demonstrate application of the MMT system to the first analyses of the pure tensile responses of several native and engineered tissue systems at the mesoscale, showing the broad potential of the system for exploring micro- and meso-scale analysis of structured and hierarchical biological systems.

  11. Comparison of quartz crystallographic preferred orientations identified with optical fabric analysis, electron backscatter and neutron diffraction techniques.

    PubMed

    Hunter, N J R; Wilson, C J L; Luzin, V

    2017-02-01

    Three techniques are used to measure crystallographic preferred orientations (CPO) in a naturally deformed quartz mylonite: transmitted light cross-polarized microscopy using an automated fabric analyser, electron backscatter diffraction (EBSD) and neutron diffraction. Pole figure densities attributable to crystal-plastic deformation are variably recognizable across the techniques, particularly between fabric analyser and diffraction instruments. Although fabric analyser techniques offer rapid acquisition with minimal sample preparation, difficulties may exist when gathering orientation data parallel with the incident beam. Overall, we have found that EBSD and fabric analyser techniques are best suited for studying CPO distributions at the grain scale, where individual orientations can be linked to their source grain or nearest neighbours. Neutron diffraction serves as the best qualitative and quantitative means of estimating the bulk CPO, due to its three-dimensional data acquisition, greater sample area coverage, and larger sample size. However, a number of sampling methods can be applied to FA and EBSD data to make similar approximations. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  12. Non-destructive evaluation of laboratory scale hydraulic fracturing using acoustic emission

    NASA Astrophysics Data System (ADS)

    Hampton, Jesse Clay

    The primary objective of this research is to develop techniques to characterize hydraulic fractures and fracturing processes using acoustic emission monitoring based on laboratory scale hydraulic fracturing experiments. Individual microcrack AE source characterization is performed to understand the failure mechanisms associated with small failures along pre-existing discontinuities and grain boundaries. Individual microcrack analysis methods include moment tensor inversion techniques to elucidate the mode of failure, crack slip and crack normal direction vectors, and relative volumetric deformation of an individual microcrack. Differentiation between individual microcrack analysis and AE cloud based techniques is studied in efforts to refine discrete fracture network (DFN) creation and regional damage quantification of densely fractured media. Regional damage estimations from combinations of individual microcrack analyses and AE cloud density plotting are used to investigate the usefulness of weighting cloud based AE analysis techniques with microcrack source data. Two granite types were used in several sample configurations including multi-block systems. Laboratory hydraulic fracturing was performed with sample sizes ranging from 15 x 15 x 25 cm3 to 30 x 30 x 25 cm3 in both unconfined and true-triaxially confined stress states using different types of materials. Hydraulic fracture testing in rock block systems containing a large natural fracture was investigated in terms of AE response throughout fracture interactions. Investigations of differing scale analyses showed the usefulness of individual microcrack characterization as well as DFN and cloud based techniques. Individual microcrack characterization weighting cloud based techniques correlated well with post-test damage evaluations.

  13. Development of a Scaling Technique for Sociometric Data.

    ERIC Educational Resources Information Center

    Peper, John B.; Chansky, Norman M.

    This study explored the stability and interjudge agreements of a sociometric scaling device to which children could easily respond, which teachers could easily administer and score, and which provided scores that researchers could use in parametric statistical analyses. Each student was paired with every other member of his class. He voted on each…

  14. Using GIS Mapping to Target Public Health Interventions: Examining Birth Outcomes Across GIS Techniques.

    PubMed

    MacQuillan, E L; Curtis, A B; Baker, K M; Paul, R; Back, Y O

    2017-08-01

    With advances in spatial analysis techniques, there has been a trend in recent public health research to assess the contribution of area-level factors to health disparity for a number of outcomes, including births. Although it is widely accepted that health disparity is best addressed by targeted, evidence-based and data-driven community efforts, and despite national and local focus in the U.S. to reduce infant mortality and improve maternal-child health, there is little work exploring how choice of scale and specific GIS visualization technique may alter the perception of analyses focused on health disparity in birth outcomes. In this retrospective cohort study, we performed spatial analyses of individual-level vital records data for low birthweight and preterm births born to black women from 2007 to 2012 in one mid-sized Midwest city using different geographic information systems (GIS) visualization techniques: geocoded address records were aggregated at two levels of scale and additionally mapped using kernel density estimation (KDE). GIS analyses in this study support our hypothesis that choice of geographic scale (neighborhood or census tract) for aggregated birth data can alter programmatic decision-making. Results indicate that the relative merits of aggregated visualization or the use of the KDE technique depend on the scale of intervention. The KDE map proved useful in targeting specific areas for interventions in cities with smaller populations and larger census tracts, where it allows for greater specificity in identifying intervention areas. When public health programmers seek to inform intervention placement in highly populated areas, however, aggregated data at the census tract level may be preferred, since it requires lower investments in terms of time and cartographic skill and, unlike neighborhoods, census tracts are standardized in that they become smaller as the population density of an area increases.
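
    A minimal kernel density estimation surface of the kind compared against tract-level aggregation above can be built with scipy's Gaussian KDE. The event coordinates below are fabricated; real use would start from geocoded birth records.

    ```python
    # 2D KDE surface over projected event locations.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(4)
    # hypothetical event locations (projected x/y, e.g. meters)
    xy = rng.normal(loc=[0, 0], scale=[500, 300], size=(200, 2)).T

    kde = gaussian_kde(xy)                    # bandwidth via Scott's rule by default
    gx, gy = np.mgrid[-1500:1500:100j, -1000:1000:100j]
    density = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    print(density.max())  # peak density marks candidate intervention areas
    ```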

  15. [Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].

    PubMed

    Uesawa, Yoshihiro

    2018-01-01

    Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. This can be based on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. The present paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, will be introduced. These analyses may produce a great amount of data that can be applied to drug repositioning.
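
    A volcano plot in this setting places one point per drug-event pair, with effect size on the x-axis and significance on the y-axis, so strong and significant signals rise to the upper corners. The paper does not specify its exact axes, so the reporting-odds-ratio/Fisher-test version below, with invented counts, is one plausible reading.

    ```python
    # Volcano plot over hypothetical 2x2 spontaneous-report counts:
    # [[event & drug, event & other drugs], [no event & drug, no event & others]]
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import fisher_exact

    tables = [np.array([[40, 200], [960, 98800]]),
              np.array([[5, 210], [995, 98790]]),
              np.array([[12, 300], [988, 98700]])]

    ors, pvals = zip(*(fisher_exact(t) for t in tables))
    plt.scatter(np.log2(ors), -np.log10(pvals))
    plt.xlabel("log2 odds ratio"); plt.ylabel("-log10 p")
    plt.savefig("volcano.png")
    ```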

  16. Anti-control of chaos of single time-scale brushless DC motor.

    PubMed

    Ge, Zheng-Ming; Chang, Ching-Ming; Chen, Yen-Sheng

    2006-09-15

    Anti-control of chaos of single time-scale brushless DC motors is studied in this paper. In order to analyse a variety of periodic and chaotic phenomena, we employ several numerical techniques such as phase portraits, bifurcation diagrams and Lyapunov exponents. Anti-control of chaos can be achieved by adding an external constant term or an external periodic term.
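
    Of the diagnostics listed, the largest Lyapunov exponent is the most compact to sketch: evolve two nearby trajectories and repeatedly renormalize their separation (the Benettin approach). The abstract does not reproduce the motor's equations, so the Lorenz system stands in below as a generic chaotic ODE; substituting the brushless DC motor model would mirror the paper's analysis.

    ```python
    # Benettin-style estimate of the largest Lyapunov exponent.
    import numpy as np
    from scipy.integrate import solve_ivp

    def lorenz(t, s, sigma=10.0, rho=28.0, beta=8/3):
        x, y, z = s
        return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

    d0, dt, steps = 1e-8, 0.5, 400
    a = np.array([1.0, 1.0, 1.0])
    b = a + [d0, 0, 0]
    lyap_sum = 0.0
    for _ in range(steps):
        a = solve_ivp(lorenz, (0, dt), a, rtol=1e-9).y[:, -1]
        b = solve_ivp(lorenz, (0, dt), b, rtol=1e-9).y[:, -1]
        d = np.linalg.norm(b - a)
        lyap_sum += np.log(d / d0)
        b = a + (b - a) * (d0 / d)        # renormalize the separation
    print("largest Lyapunov exponent ~", lyap_sum / (steps * dt))  # ~0.9 for Lorenz
    ```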

  17. Chapter 14: Electron Microscopy on Thin Films for Solar Cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romero, Manuel; Abou-Ras, Daniel; Nichterwitz, Melanie

    2016-07-22

    This chapter gives an overview of the various techniques applied in scanning electron microscopy (SEM) and transmission electron microscopy (TEM), and highlights their possibilities and limitations. It presents the various imaging and analysis techniques applied on a scanning electron microscope. The chapter shows that imaging is divided into that making use of secondary electrons (SEs) and of backscattered electrons (BSEs), resulting in different contrasts in the images and thus providing information on compositions, microstructures, and surface potentials. Whenever aiming for imaging and analyses at scales down to the angstrom range, TEM and its related techniques are the appropriate tools. In many cases, SEM techniques also provide access to various material properties of the individual layers, without requiring specimen preparation as time-consuming as that for TEM. Finally, the chapter is dedicated to cross-sectional specimen preparation for electron microscopy; the preparation indeed decides the quality of imaging and analyses.

  18. Hindlimb muscle architecture in non-human great apes and a comparison of methods for analysing inter-species variation

    PubMed Central

    Myatt, Julia P; Crompton, Robin H; Thorpe, Susannah K S

    2011-01-01

    By relating an animal's morphology to its functional role and the behaviours performed, we can further develop our understanding of the selective factors and constraints acting on the adaptations of great apes. Comparison of muscle architecture between different ape species, however, is difficult because only small sample sizes are ever available. Further, such samples are often comprised of different age–sex classes, so studies have to rely on scaling techniques to remove body mass differences. However, the reliability of such scaling techniques has been questioned. As datasets increase in size, more reliable statistical analysis may eventually become possible. Here we employ geometric and allometric scaling techniques, and ANCOVAs (a form of general linear model, GLM) to highlight and explore the different methods available for comparing functional morphology in the non-human great apes. Our results underline the importance of regressing data against a suitable body size variable to ascertain the relationship (geometric or allometric) and of choosing appropriate exponents by which to scale data. ANCOVA models, while likely to be more robust than scaling for species comparisons when sample sizes are high, suffer from reduced power when sample sizes are low. Therefore, until sample sizes are radically increased it is preferable to include scaling analyses along with ANCOVAs in data exploration. Overall, the results obtained from the different methods show little significant variation, whether in muscle belly mass, fascicle length or physiological cross-sectional area between the different species. This may reflect relatively close evolutionary relationships of the non-human great apes; a universal influence on morphology of generalised orthograde locomotor behaviours or, quite likely, both. PMID:21507000

  19. Stable isotope probing to study functional components of complex microbial ecosystems.

    PubMed

    Mazard, Sophie; Schäfer, Hendrik

    2014-01-01

    This protocol presents a method of dissecting the DNA or RNA of key organisms involved in a specific biochemical process within a complex ecosystem. Stable isotope probing (SIP) allows the labelling and separation of nucleic acids from community members that are involved in important biochemical transformations, yet are often not the most numerically abundant members of a community. This pure culture-independent technique circumvents limitations of traditional microbial isolation techniques or data mining from large-scale whole-community metagenomic studies to tease out the identities and genomic repertoires of microorganisms participating in biological nutrient cycles. SIP experiments can be applied to virtually any ecosystem and biochemical pathway under investigation provided a suitable stable isotope substrate is available. This versatile methodology allows a wide range of analyses to be performed, from fatty-acid analyses, community structure and ecology studies, and targeted metagenomics involving nucleic acid sequencing. SIP experiments provide an effective alternative to large-scale whole-community metagenomic studies by specifically targeting the organisms or biochemical transformations of interest, thereby reducing the sequencing effort and time-consuming bioinformatics analyses of large datasets.

  1. Scalable fabrication of perovskite solar cells

    DOE PAGES

    Li, Zhen; Klein, Talysa R.; Kim, Dong Hoe; ...

    2018-03-27

    Perovskite materials use earth-abundant elements, have low formation energies for deposition and are compatible with roll-to-roll and other high-volume manufacturing techniques. These features make perovskite solar cells (PSCs) suitable for terawatt-scale energy production with low production costs and low capital expenditure. Demonstrations of performance comparable to that of other thin-film photovoltaics (PVs) and improvements in laboratory-scale cell stability have recently made scale up of this PV technology an intense area of research focus. Here, we review recent progress and challenges in scaling up PSCs and related efforts to enable the terawatt-scale manufacturing and deployment of this PV technology. We discuss common device and module architectures, scalable deposition methods and progress in the scalable deposition of perovskite and charge-transport layers. We also provide an overview of device and module stability, module-level characterization techniques and techno-economic analyses of perovskite PV modules.

  2. Scalable fabrication of perovskite solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Zhen; Klein, Talysa R.; Kim, Dong Hoe

    Perovskite materials use earth-abundant elements, have low formation energies for deposition and are compatible with roll-to-roll and other high-volume manufacturing techniques. These features make perovskite solar cells (PSCs) suitable for terawatt-scale energy production with low production costs and low capital expenditure. Demonstrations of performance comparable to that of other thin-film photovoltaics (PVs) and improvements in laboratory-scale cell stability have recently made scale up of this PV technology an intense area of research focus. Here, we review recent progress and challenges in scaling up PSCs and related efforts to enable the terawatt-scale manufacturing and deployment of this PV technology. We discuss common device and module architectures, scalable deposition methods and progress in the scalable deposition of perovskite and charge-transport layers. We also provide an overview of device and module stability, module-level characterization techniques and techno-economic analyses of perovskite PV modules.

  3. VIRTIS on Venus Express: retrieval of real surface emissivity on global scales

    NASA Astrophysics Data System (ADS)

    Arnold, Gabriele E.; Kappel, David; Haus, Rainer; Telléz Pedroza, Laura; Piccioni, Giuseppe; Drossart, Pierre

    2015-09-01

    The extraction of surface emissivity data provides the basis for surface composition analyses and enables evaluation of Venus' geology. The Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS) aboard ESA's Venus Express mission measured, inter alia, the nightside thermal emission of Venus in the near infrared atmospheric windows between 1.0 and 1.2 μm. These data can be used to derive information about surface properties on global scales. This requires a sophisticated approach to understand and consider the effects and interferences of different atmospheric and surface parameters influencing the retrieved values. In the present work, results of a new technique for retrieval of the 1.0-1.2 μm surface emissivity are summarized. It includes a Multi-Window Retrieval Technique (MWT), a Multi-Spectrum Retrieval technique (MSR), and a detailed reliability analysis. MWT is based on a detailed radiative transfer model making simultaneous use of information from different atmospheric windows of an individual spectrum. MSR regularizes the retrieval by incorporating available a priori mean values, standard deviations as well as spatial-temporal correlations of parameters to be retrieved. The capability of this method is shown for a selected surface target area. Implications for geologic investigations are discussed. Based on these results, the work draws conclusions for future Venus surface composition analyses on global scales using spectral remote sensing techniques. In that context, requirements for observational scenarios and instrumental performances are investigated, and recommendations are derived to optimize spectral measurements for Venus' surface studies.

  4. Characterization of the Polycaprolactone Melt Crystallization: Complementary Optical Microscopy, DSC, and AFM Studies

    PubMed Central

    Speranza, V.; Sorrentino, A.; De Santis, F.; Pantani, R.

    2014-01-01

    The first stages of the crystallization of polycaprolactone (PCL) were studied using several techniques. The crystallization exotherms measured by differential scanning calorimetry (DSC) were analyzed and compared with results obtained by polarized optical microscopy (POM), rheology, and atomic force microscopy (AFM). The experimental results suggest a strong influence of the observation scale. In particular, AFM, even if limited in time scale, appears to be the most sensitive technique for detecting the first stages of crystallization. On the contrary, at least in the case analysed in this work, rheology appears to be the least sensitive technique. DSC and POM provide closer results. This suggests that the definition of induction time in polymer crystallization is a vague concept that, in any case, requires the definition of the technique used for its characterization. PMID:24523644

  5. Characterization of the polycaprolactone melt crystallization: complementary optical microscopy, DSC, and AFM studies.

    PubMed

    Speranza, V; Sorrentino, A; De Santis, F; Pantani, R

    2014-01-01

    The first stages of the crystallization of polycaprolactone (PCL) were studied using several techniques. The crystallization exotherms measured by differential scanning calorimetry (DSC) were analyzed and compared with results obtained by polarized optical microscopy (POM), rheology, and atomic force microscopy (AFM). The experimental results suggest a strong influence of the observation scale. In particular, AFM, even if limited in time scale, appears to be the most sensitive technique for detecting the first stages of crystallization. On the contrary, at least in the case analysed in this work, rheology appears to be the least sensitive technique. DSC and POM provide closer results. This suggests that the definition of induction time in polymer crystallization is a vague concept that, in any case, requires the definition of the technique used for its characterization.

  6. Riding the Right Wavelet: Detecting Fracture and Fault Orientation Scale Transitions Using Morlet Wavelets

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; Farrell, N. J.; Smith, M.

    2016-12-01

    The analysis of images through two-dimensional (2D) continuous wavelet transforms makes it possible to acquire local information at different scales of resolution. This characteristic allows us to use wavelet analysis to quantify anisotropic random fields such as networks of fractures. Previous studies [1] have used 2D anisotropic Mexican hat wavelets to analyse the organisation of fracture networks from cm- to km-scales. However, Antoine et al. [2] explained that this technique can have a relatively poor directional selectivity. This suggests the use of a wavelet whose transform is more sensitive to directions of linear features, i.e. 2D Morlet wavelets [3]. In this work, we use a fully-anisotropic Morlet wavelet as implemented by Neupauer & Powell [4], which is anisotropic in its real and imaginary parts and also in its magnitude. We demonstrate the validity of this analytical technique by application to both synthetic - generated according to known distributions of orientations and lengths - and experimentally produced fracture networks. We have analysed SEM Back Scattered Electron images of thin sections of Hopeman Sandstone (Scotland, UK) deformed under triaxial conditions. We find that the Morlet wavelet, compared to the Mexican hat, is more precise in detecting dominant orientations in fracture scale transition at every scale from intra-grain fractures (µm-scale) up to the faults cutting the whole thin section (cm-scale). Through this analysis we can determine the relationship between the initial orientation of tensile microcracks and the final geometry of the through-going shear fault, with total areal coverage of the analysed image. By comparing thin sections from experiments at different confining pressures, we can quantitatively explore the relationship between the observed geometry and the inferred mechanical processes. [1] Ouillon et al., Nonlinear Processes in Geophysics (1995) 2:158 - 177. [2] Antoine et al., Cambridge University Press (2008) 192-194. [3] Antoine et al., Signal Processing (1993) 31:241 - 272. [4] Neupauer & Powell, Computers & Geosciences (2005) 31:456 - 471.
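
    A bare-bones directional Morlet filter bank conveys the idea: convolve the fracture map with rotated complex Morlet kernels and take the angle with the strongest mean response. This simplifies the fully anisotropic Neupauer & Powell formulation cited above (isotropic Gaussian envelope, single fixed scale), and the test image is synthetic; note the kernel's wave vector lies along theta, so detected lineaments trend roughly perpendicular to it.

    ```python
    # Directional 2D Morlet responses over a synthetic fracture map.
    import numpy as np
    from scipy.signal import fftconvolve

    def morlet2d(scale, theta, k0=5.5, size=65):
        ax = np.arange(size) - size // 2
        x, y = np.meshgrid(ax, ax)
        xr = (x * np.cos(theta) + y * np.sin(theta)) / scale
        yr = (-x * np.sin(theta) + y * np.cos(theta)) / scale
        return np.exp(1j * k0 * xr) * np.exp(-(xr**2 + yr**2) / 2)

    # synthetic "fracture" image: parallel lines dipping across the frame
    img = np.zeros((256, 256))
    rr = np.arange(256)
    for off in range(0, 256, 16):
        cc = ((rr * np.tan(np.radians(30))).astype(int) + off) % 256
        img[rr, cc] = 1.0

    angles = np.radians(np.arange(0, 180, 5))
    power = [np.abs(fftconvolve(img, morlet2d(4, t), mode="same")).mean()
             for t in angles]
    # reported angle is the wave-vector direction; lineaments trend ~90 deg away
    print("strongest response at", np.degrees(angles[int(np.argmax(power))]), "deg")
    ```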

  7. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analyses of climate changes and their effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data that were recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and combined. Linear multiple regression (LMR) and geographically weighted regression methods (GWR) were tested. These included a step-wise selection of covariables, as well as inverse distance weighting (IDW), kriging, and 3D-thin plate splines (TPS). The relative rank of the different techniques changes with different station density and rainfall amounts. Our results indicate that TPS performs well for low station density and large-scale events and also when coupled with regression models. It performs poorly for high station density. The opposite is observed when using IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
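
    The simplest of the schemes compared, inverse distance weighting, fits in a few lines. Station coordinates and rainfall values below are invented; the power p = 2 is the common default, not necessarily the study's setting.

    ```python
    # Plain inverse distance weighting (IDW) from stations to target points.
    import numpy as np

    def idw(stations_xy, values, grid_xy, p=2.0, eps=1e-10):
        d = np.linalg.norm(grid_xy[:, None, :] - stations_xy[None, :, :], axis=2)
        w = 1.0 / (d + eps) ** p          # eps guards against division by zero
        return (w * values).sum(axis=1) / w.sum(axis=1)

    stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
    rain_mm = np.array([12.0, 3.0, 25.0])
    targets = np.array([[5.0, 4.0], [1.0, 1.0]])
    print(idw(stations, rain_mm, targets).round(1))
    ```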

  8. Opened end-to-side technique for end-to-side anastomosis and analyses by an elastic true-to-scale silicone rubber model.

    PubMed

    Mücke, Thomas; Ritschl, Lucas M; Balasso, Andrea; Wolff, Klaus-Dietrich; Mitchell, David A; Liepsch, Dieter

    2014-01-01

    The end-to-side anastomosis is frequently used in microvascular free flap transfer, but detailed rheological analyses are not available. The purpose of this study was to introduce a new modified end-to-side (Opened End-to-Side, OES-) technique and compare the resulting flow pattern to a conventional technique. The new technique was based on a bi-triangulated preparation of the branching-vessel end, resulting in a "fish-mouthed" opening. We performed two different types of end-to-side anastomoses in forty pig coronary arteries and produced one elastic, true-to-scale silicone rubber model of each anastomosis. Then we installed the transparent models in a circulatory experimental setup that simulated physiological human blood flow. Flow velocity was measured with a one-component Laser-Doppler-Anemometer system, recording axial and perpendicular flow at four defined cross-sections over seven heart cycles in each model. Maximal and minimal axial velocities ranged in the conventional model between 0.269 and -0.122 m/s and in the experimental model between 0.313 and -0.153 m/s. A less disturbed flow velocity distribution was seen in the experimental model distal to the anastomosis. The OES-technique showed superior flow profiles distal to the anastomosis with minor tendencies of flow separation and represents a new alternative for end-to-side anastomosis. Copyright © 2013 Wiley Periodicals, Inc.

  9. Identification of particle-laden flow features from wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Jackson, A.; Turnbull, B.

    2017-12-01

    A wavelet decomposition based technique is applied to air pressure data obtained from laboratory-scale powder snow avalanches. This technique is shown to be a powerful tool for identifying both repeatable and chaotic features at any frequency within the signal. Additionally, this technique is demonstrated to be a robust method for the removal of noise from the signal as well as being capable of removing other contaminants from the signal. Whilst powder snow avalanches are the focus of the experiments analysed here, the features identified can provide insight to other particle-laden gravity currents and the technique described is applicable to a wide variety of experimental signals.
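
    A standard wavelet-decomposition denoising pass, sketched with PyWavelets on a synthetic 1D pressure trace; the db4 wavelet, decomposition level and universal-threshold rule are illustrative choices, not necessarily those of the study.

    ```python
    # Soft-threshold wavelet denoising of a noisy 1D signal.
    import numpy as np
    import pywt

    rng = np.random.default_rng(9)
    t = np.linspace(0, 1, 1024)
    signal = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)   # stand-in pressure trace
    noisy = signal + rng.normal(0, 0.2, t.size)

    coeffs = pywt.wavedec(noisy, "db4", level=6)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate, finest level
    thresh = sigma * np.sqrt(2 * np.log(noisy.size))      # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, "db4")[:noisy.size]
    print(f"residual RMS: {np.sqrt(np.mean((denoised - signal)**2)):.3f}")
    ```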

  10. Scale in Remote Sensing and GIS: An Advancement in Methods Towards a Science of Scale

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.

    1998-01-01

    The term "scale", both in space and time, is central to remote sensing and geographic information systems (GIS). The emergence and widespread use of GIS technologies, including remote sensing, has generated significant interest in addressing scale as a generic topic, and in the development and implementation of techniques for dealing explicitly with the vicissitudes of scale as a multidisciplinary issue. As science becomes more complex and utilizes databases that are capable of performing complex space-time data analyses, it becomes paramount that we develop the tools and techniques needed to operate at multiple scales, to work with data whose scales are not necessarily ideal, and to produce results that can be aggregated or disaggregated in ways that suit the decision-making process. Contemporary science is constantly coping with compromises, and the data available for a particular study rarely fit perfectly with the scales at which the processes being investigated operate, or the scales that policy-makers require to make sound, rational decisions. This presentation discusses some of the problems associated with scale as related to remote sensing and GIS, and describes some of the questions that need to be addressed in approaching the development of a multidisciplinary "science of scale". Techniques for dealing with multiple scaled data that have been developed or explored recently are described as a means for recognizing scale as a generic issue, along with associated theory and tools that can be of simultaneous value to a large number of disciplines. These can be used to seek answers to a host of interrelated questions in the interest of providing a formal structure for the management and manipulation of scale and its universality as a key concept from a multidisciplinary perspective.

  11. A review of volume-area scaling of glaciers

    PubMed Central

    Bahr, David B.; Kaser, Georg

    2015-01-01

    Volume-area power law scaling, one of a set of analytical scaling techniques based on principles of dimensional analysis, has become an increasingly important and widely used method for estimating the future response of the world's glaciers and ice caps to environmental change. Over 60 papers since 1988 have been published in the glaciological and environmental change literature containing applications of volume-area scaling, mostly for the purpose of estimating total global glacier and ice cap volume and modeling future contributions to sea level rise from glaciers and ice caps. The application of the theory is not entirely straightforward, however, and many of the recently published results contain analyses that are in conflict with the theory as originally described by Bahr et al. (1997). In this review we describe the general theory of scaling for glaciers in full three-dimensional detail without simplifications, including an improved derivation of both the volume-area scaling exponent γ and a new derivation of the multiplicative scaling parameter c. We discuss some common misconceptions of the theory, presenting examples of both appropriate and inappropriate applications. We also discuss potential future developments in power law scaling beyond its present uses, the relationship between power law scaling and other modeling approaches, and some of the advantages and limitations of scaling techniques. PMID:27478877
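
    In practice, volume-area scaling amounts to fitting V = c * A**gamma, usually by least squares in log-log space. The synthetic example below plants Bahr et al.'s (1997) glacier exponent gamma = 1.375 with a hypothetical c and checks that the fit recovers it.

    ```python
    # Fit the volume-area power law in log-log space.
    import numpy as np

    rng = np.random.default_rng(5)
    area = 10 ** rng.uniform(-1, 3, 100)                              # km^2
    volume = 0.03 * area ** 1.375 * np.exp(rng.normal(0, 0.1, 100))   # km^3, noisy

    gamma, log_c = np.polyfit(np.log(area), np.log(volume), 1)
    print(f"gamma = {gamma:.3f}, c = {np.exp(log_c):.3f}")
    ```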

  12. Rock surface roughness measurement using CSI technique and analysis of surface characterization by qualitative and quantitative results

    NASA Astrophysics Data System (ADS)

    Mukhtar, Husneni; Montgomery, Paul; Gianto; Susanto, K.

    2016-01-01

    In order to develop image processing that is widely used in geo-processing and analysis, we introduce an alternative technique for the characterization of rock samples. The technique that we have used for characterizing inhomogeneous surfaces is based on Coherence Scanning Interferometry (CSI). An optical probe is first used to scan over the depth of the surface roughness of the sample. Then, to analyse the measured fringe data, we use the Five Sample Adaptive method to obtain quantitative results of the surface shape. To analyse the surface roughness parameters, Hmm and Rq, a new window resizing analysis technique is employed. The results of the morphology and surface roughness analysis show micron and nano-scale information which is characteristic of each rock type and its history. These could be used for mineral identification and studies in rock movement on different surfaces. Image processing is thus used to define the physical parameters of the rock surface.
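
    The RMS roughness parameter Rq reported by such analyses is simply the root-mean-square of surface heights about the mean plane. A minimal version for a height map, with synthetic heights (the instrument's tilt/plane corrections are omitted):

    ```python
    # Rq = sqrt(mean((z - mean(z))**2)) over a height map.
    import numpy as np

    rng = np.random.default_rng(6)
    z = rng.normal(0, 50e-9, (512, 512))    # hypothetical height map, meters

    def rq(z):
        dz = z - z.mean()                   # remove the mean plane
        return np.sqrt((dz ** 2).mean())

    print(f"Rq = {rq(z) * 1e9:.1f} nm")
    ```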

  13. A Psychometric Analysis of the Italian Version of the eHealth Literacy Scale Using Item Response and Classical Test Theory Methods

    PubMed Central

    Dima, Alexandra Lelia; Schulz, Peter Johannes

    2017-01-01

    Background: The eHealth Literacy Scale (eHEALS) is a tool to assess consumers’ comfort and skills in using information technologies for health. Although evidence exists of reliability and construct validity of the scale, less agreement exists on structural validity. Objective: The aim of this study was to validate the Italian version of the eHealth Literacy Scale (I-eHEALS) in a community sample with a focus on its structural validity, by applying psychometric techniques that account for item difficulty. Methods: Two Web-based surveys were conducted among a total of 296 people living in the Italian-speaking region of Switzerland (Ticino). After examining the latent variables underlying the observed variables of the Italian scale via principal component analysis (PCA), fit indices for two alternative models were calculated using confirmatory factor analysis (CFA). The scale structure was examined via parametric and nonparametric item response theory (IRT) analyses accounting for differences between items regarding the proportion of answers indicating high ability. Convergent validity was assessed by correlations with theoretically related constructs. Results: CFA showed a suboptimal model fit for both models. IRT analyses confirmed all items measure a single dimension as intended. Reliability and construct validity of the final scale were also confirmed. The contrasting results of factor analysis (FA) and IRT analyses highlight the importance of considering differences in item difficulty when examining health literacy scales. Conclusions: The findings support the reliability and validity of the translated scale and its use for assessing Italian-speaking consumers’ eHealth literacy. PMID:28400356

  14. A New Look at the Psychometrics of the Parenting Scale through the Lens of Item Response Theory

    PubMed Central

    Lorber, Michael F.; Xu, Shu; Smith Slep, Amy M.; Bulling, Lisanne; O'Leary, Susan G.

    2015-01-01

    The psychometrics of the Parenting Scale's Overreactivity and Laxness subscales were evaluated using item response theory (IRT) techniques. The IRT analyses were based on two community samples of cohabiting parents of 3- to 8-year-old children, combined to yield an N of 852 families. The results supported the utility of the Overreactivity and Laxness subscales, particularly in discriminating among parents in the mid to upper reaches of each construct. The original versions of the Overreactivity and Laxness subscales were more reliable than alternative, shorter versions identified in replicated factor analyses from previously published research and in IRT analyses in the present research. Moreover, in several cases, the original versions of these subscales, in comparison with the shortened versions, exhibited greater six-month stabilities and correlations with child externalizing behavior and couple relationship satisfaction. Reliability was greater for the Laxness than for the Overreactivity subscale. Item performance on each subscale was highly variable. Together, the present findings are generally supportive of the psychometrics of the Parenting Scale, particularly for clinical research and practice. They also suggest areas for further development. PMID:24828855

  15. A new look at the psychometrics of the parenting scale through the lens of item response theory.

    PubMed

    Lorber, Michael F; Xu, Shu; Slep, Amy M Smith; Bulling, Lisanne; O'Leary, Susan G

    2014-01-01

    The psychometrics of the Parenting Scale's Overreactivity and Laxness subscales were evaluated using item response theory (IRT) techniques. The IRT analyses were based on 2 community samples of cohabiting parents of 3- to 8-year-old children, combined to yield a total sample size of 852 families. The results supported the utility of the Overreactivity and Laxness subscales, particularly in discriminating among parents in the mid to upper reaches of each construct. The original versions of the Overreactivity and Laxness subscales were more reliable than alternative, shorter versions identified in replicated factor analyses from previously published research and in IRT analyses in the present research. Moreover, in several cases, the original versions of these subscales, in comparison with the shortened versions, exhibited greater 6-month stabilities and correlations with child externalizing behavior and couple relationship satisfaction. Reliability was greater for the Laxness than for the Overreactivity subscale. Item performance on each subscale was highly variable. Together, the present findings are generally supportive of the psychometrics of the Parenting Scale, particularly for clinical research and practice. They also suggest areas for further development.

  16. Possibilities of LA-ICP-MS technique for the spatial elemental analysis of the recent fish scales: Line scan vs. depth profiling

    NASA Astrophysics Data System (ADS)

    Holá, Markéta; Kalvoda, Jiří; Nováková, Hana; Škoda, Radek; Kanický, Viktor

    2011-01-01

    LA-ICP-MS and solution based ICP-MS in combination with electron microprobe are presented as a method for the determination of the elemental spatial distribution in fish scales, which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales: a line scan through the whole fish scale perpendicular to the growth rings. The ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer. Suitable ablation conditions providing a deeper ablation crater gave average values from the external HAP layer and the collagen basal plate. Depth profiling using spot analysis was tested in fish scales for the first time. Spot analysis allows information to be obtained about the depth profile of the elements at the selected position on the sample. The combination of all mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with the solution based ICP-MS and EMP analyses. The fact that the results of depth profiling are in good agreement both with EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests a very good potential for this method.

  17. Substrate-Mediated Laser Ablation under Ambient Conditions for Spatially-Resolved Tissue Proteomics

    PubMed Central

    Fatou, Benoit; Wisztorski, Maxence; Focsa, Cristian; Salzet, Michel; Ziskind, Michael; Fournier, Isabelle

    2015-01-01

    Numerous applications of ambient Mass Spectrometry (MS) have been demonstrated over the past decade. They promoted the emergence of various micro-sampling techniques such as Laser Ablation/Droplet Capture (LADC). LADC consists of the ablation of analytes from a surface and their subsequent capture in a solvent droplet which can then be analyzed by MS. LADC is thus generally performed in the UV or IR range, using a wavelength at which analytes or the matrix absorb. In this work, we explore the potential of visible range LADC (532 nm) as a micro-sampling technology for large-scale proteomics analyses. We demonstrate that biomolecule analyses using 532 nm LADC are possible, despite the low absorbance of biomolecules at this wavelength. This is due to the preponderance of an indirect substrate-mediated ablation mechanism at low laser energy which contrasts with the conventional direct ablation driven by sample absorption. Using our custom LADC system and taking advantage of this substrate-mediated ablation mechanism, we were able to perform large-scale proteomic analyses of micro-sampled tissue sections and demonstrated the possible identification of proteins with relevant biological functions. Consequently, the 532 nm LADC technique offers a new tool for biological and clinical applications. PMID:26674367

  19. The Rayleigh curve as a model for effort distribution over the life of medium scale software systems. M.S. Thesis - Maryland Univ.

    NASA Technical Reports Server (NTRS)

    Picasso, G. O.; Basili, V. R.

    1982-01-01

    It is noted that previous investigations into the applicability of the Rayleigh curve model to medium scale software development efforts have met with mixed results. The results of these investigations are confirmed by analyses of runs and smoothing. The reasons for the model's failure are found in the subcycle effort data. There are four contributing factors: the uniqueness of the environment studied, the influence of holidays, varying management techniques, and differences in the data studied.
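
    The Rayleigh curve referred to here is commonly written E(t) = 2Kat·exp(-at²), with cumulative effort K(1 - exp(-at²)), where K is the total life-cycle effort and a is a shape parameter. A minimal sketch with illustrative values (not the study's data):

    ```python
    import numpy as np

    def rayleigh_effort(t, K, a):
        """Norden/Putnam Rayleigh staffing curve: instantaneous effort at time t.
        K: total life-cycle effort; a: shape parameter."""
        return 2.0 * K * a * t * np.exp(-a * t ** 2)

    def cumulative_effort(t, K, a):
        """Effort expended up to time t (the integral of the curve above)."""
        return K * (1.0 - np.exp(-a * t ** 2))

    t = np.linspace(0, 24, 25)                      # project months, illustrative
    staffing = rayleigh_effort(t, K=400.0, a=0.02)  # person-months per month
    print(f"peak staffing at t = {1 / np.sqrt(2 * 0.02):.1f} months")
    ```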

  20. Spatial and Temporal scales of time-averaged 700 MB height anomalies

    NASA Technical Reports Server (NTRS)

    Gutzler, D.

    1981-01-01

    The monthly and seasonal forecasting technique is based to a large extent on the extrapolation of trends in the positions of the centers of time-averaged geopotential height anomalies. The complete forecast height pattern is subsequently drawn around the forecast anomaly centers. To test the efficacy of this technique, time series of observed monthly mean and 5-day mean 700 mb geopotential heights were examined. Autocorrelation statistics are generated to document the tendency for persistence of anomalies. These statistics are compared to a red noise hypothesis to check for evidence of possible preferred time scales of persistence. Space-time spectral analyses at middle latitudes are checked for evidence of periodicities which could be associated with predictable month-to-month trends. A local measure of the average spatial scale of anomalies is devised for guidance in completing the anomaly pattern around the forecast centers.
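
    A minimal sketch of the persistence check described here: lagged autocorrelations of an anomaly series compared against the AR(1) red-noise expectation r(k) = r(1)^k. The series below is synthetic; the study used observed 700 mb heights:

    ```python
    import numpy as np

    def autocorr(x, max_lag):
        """Sample autocorrelation of a 1-D anomaly series at lags 0..max_lag."""
        x = np.asarray(x, float) - np.mean(x)
        var = np.sum(x * x)
        return np.array([np.sum(x[k:] * x[:len(x) - k]) / var
                         for k in range(max_lag + 1)])

    rng = np.random.default_rng(0)
    n, phi = 360, 0.6                 # synthetic monthly anomalies from an AR(1)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.standard_normal()

    r = autocorr(x, 6)
    red_noise = r[1] ** np.arange(7)  # AR(1) null: r(k) = r(1)**k
    print(np.round(r, 2))
    print(np.round(red_noise, 2))     # departures hint at preferred time scales
    ```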

  1. Introduction to 2005 National Technical Report

    Treesearch

    Mark J. Ambrose

    2007-01-01

    This annual technical report is a product of the Forest Health Monitoring (FHM) program. The report provides information about a variety of issues relating to forest health at the national scale. Previous FHM national reports have had a dual focus of presenting analyses of the latest available data and showcasing innovative techniques for analyzing forest health data....

  2. Large scale meteorological patterns and moisture sources during precipitation extremes over South Asia

    NASA Astrophysics Data System (ADS)

    Mehmood, S.; Ashfaq, M.; Evans, K. J.; Black, R. X.; Hsu, H. H.

    2017-12-01

    Extreme precipitation during the summer season has shown an increasing trend across South Asia in recent decades, causing an exponential increase in weather-related losses. Here we combine a cluster analysis technique (Agglomerative Hierarchical Clustering) with a Lagrangian-based moisture analysis technique to investigate potential commonalities in the characteristics of the large scale meteorological patterns (LSMP) and moisture anomalies associated with the observed extreme precipitation events, and their representation in the Department of Energy model ACME. Using precipitation observations from the Indian Meteorological Department (IMD) and Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation (APHRODITE), and atmospheric variables from the Era-Interim Reanalysis, we first identify LSMP in both the upper and lower troposphere that are responsible for widespread precipitation extreme events during the 1980-2015 period. For each of the selected extreme events, we perform a moisture source analysis to identify the major evaporative sources that sustain anomalous moisture supply during the course of the event, with a particular focus on local terrestrial moisture recycling. Further, we perform similar analyses on two five-member ensembles of the ACME model (1-degree and ¼-degree) to investigate the model's ability to simulate precipitation extremes associated with each of the LSMP patterns and the associated anomalous moisture sourcing from each terrestrial and oceanic evaporative region. Comparison of the low- and high-resolution model configurations provides insight into the influence of horizontal grid spacing on the simulation of extreme precipitation and the governing mechanisms.
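
    A minimal sketch of the clustering step, assuming each extreme event is summarized by a feature vector of LSMP descriptors (synthetic data; scipy's hierarchical clustering stands in for whatever implementation the authors used):

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical feature matrix: one row per extreme-precipitation event,
    # columns standing in for LSMP descriptors (e.g., anomaly projections).
    rng = np.random.default_rng(1)
    events = np.vstack([rng.normal(0, 1, (30, 4)), rng.normal(3, 1, (25, 4))])

    Z = linkage(events, method="ward")               # agglomerative merges
    labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree at 2 clusters
    print(np.bincount(labels)[1:])                   # events per cluster
    ```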

  3. The Visual Analogue Scale for Rating, Ranking and Paired-Comparison (VAS-RRP): A new technique for psychological measurement.

    PubMed

    Sung, Yao-Ting; Wu, Jeng-Shin

    2018-04-17

    Traditionally, the visual analogue scale (VAS) has been proposed to overcome the limitations of ordinal measures from Likert-type scales. However, the potential of VASs to overcome the limitations of response styles in Likert-type scales has not yet been addressed. Previous research using ranking and paired comparisons to compensate for the response styles of Likert-type scales has suffered from limitations, such as the fact that the total score of ipsative measures is a constant, which rules out many common statistical techniques. In this study we propose a new scale, called the Visual Analogue Scale for Rating, Ranking, and Paired-Comparison (VAS-RRP), which can be used to collect rating, ranking, and paired-comparison data simultaneously, while avoiding the limitations of each of these data collection methods. The characteristics, use, and analytic methods of the VAS-RRP, as well as how it overcomes the disadvantages of Likert-type scales, ranking, and VASs, are discussed. On the basis of analyses of simulated and empirical data, this study showed that the VAS-RRP improved reliability and parameter recovery and reduced response-style bias. Finally, we have also designed a VAS-RRP Generator so that researchers can construct and administer their own VAS-RRPs.

  4. A Psychometric Analysis of the Italian Version of the eHealth Literacy Scale Using Item Response and Classical Test Theory Methods.

    PubMed

    Diviani, Nicola; Dima, Alexandra Lelia; Schulz, Peter Johannes

    2017-04-11

    The eHealth Literacy Scale (eHEALS) is a tool to assess consumers' comfort and skills in using information technologies for health. Although evidence exists of reliability and construct validity of the scale, less agreement exists on structural validity. The aim of this study was to validate the Italian version of the eHealth Literacy Scale (I-eHEALS) in a community sample with a focus on its structural validity, by applying psychometric techniques that account for item difficulty. Two Web-based surveys were conducted among a total of 296 people living in the Italian-speaking region of Switzerland (Ticino). After examining the latent variables underlying the observed variables of the Italian scale via principal component analysis (PCA), fit indices for two alternative models were calculated using confirmatory factor analysis (CFA). The scale structure was examined via parametric and nonparametric item response theory (IRT) analyses accounting for differences between items regarding the proportion of answers indicating high ability. Convergent validity was assessed by correlations with theoretically related constructs. CFA showed a suboptimal model fit for both models. IRT analyses confirmed all items measure a single dimension as intended. Reliability and construct validity of the final scale were also confirmed. The contrasting results of factor analysis (FA) and IRT analyses highlight the importance of considering differences in item difficulty when examining health literacy scales. The findings support the reliability and validity of the translated scale and its use for assessing Italian-speaking consumers' eHealth literacy. ©Nicola Diviani, Alexandra Lelia Dima, Peter Johannes Schulz. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 11.04.2017.

  5. Scalable graphene production: perspectives and challenges of plasma applications

    NASA Astrophysics Data System (ADS)

    Levchenko, Igor; Ostrikov, Kostya (Ken); Zheng, Jie; Li, Xingguo; Keidar, Michael; B. K. Teo, Kenneth

    2016-05-01

    Graphene, a newly discovered and extensively investigated material, has many unique and extraordinary properties which promise major technological advances in fields ranging from electronics to mechanical engineering and food production. Unfortunately, complex techniques and high production costs hinder commonplace applications. Scaling of existing graphene production techniques to the industrial level without compromising its properties is a current challenge. This article focuses on the scalability, equipment, and technological challenges of the plasma-based techniques, which offer many unique possibilities for the synthesis of graphene and graphene-containing products. The plasma-based processes are amenable to scaling and could also be useful to enhance the controllability of the conventional chemical vapour deposition method and some other techniques, and to ensure a good quality of the produced graphene. We examine the unique features of the plasma-enhanced graphene production approaches, including the techniques based on inductively-coupled and arc discharges, in the context of their potential scaling to mass production following the generic scaling approaches applicable to the existing processes and systems. This work analyses a large amount of the recent literature on graphene production by various techniques and summarizes the results in tabular form to provide a simple and convenient comparison of several available techniques. Our analysis reveals a significant scalability potential for plasma-based technologies, based on the scaling-related process characteristics. Among the processes compared, the highest yield of 1 g × h-1 m-2 was reached for the arc discharge technology, whereas the other plasma-based techniques show process yields comparable to the neutral-gas based methods. Selected plasma-based techniques show lower energy consumption than thermal CVD processes and can produce graphene flakes of various sizes, reaching hundreds of square millimetres in area and varying in thickness from a monolayer to 10-20 layers. Additional factors such as electrical voltage and current, not available in thermal CVD processes, could potentially lead to better scalability, flexibility and control of the plasma-based processes. Advantages and disadvantages of various systems are also considered.

  7. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.
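
    The FDTD method advances the coupled Maxwell curl equations on a staggered grid. A minimal 1-D vacuum sketch of the update scheme (the study's dosimetry runs are full 3-D with tissue models and absorbing boundaries, all omitted here):

    ```python
    import numpy as np

    nz, nt = 200, 400
    ez = np.zeros(nz)         # electric field nodes
    hy = np.zeros(nz - 1)     # magnetic field on the staggered half-grid
    courant = 0.5             # normalized c*dt/dz; must be <= 1 for stability

    for n in range(nt):
        hy += courant * (ez[1:] - ez[:-1])              # update H from curl of E
        ez[1:-1] += courant * (hy[1:] - hy[:-1])        # update E from curl of H
        ez[nz // 2] += np.exp(-((n - 60) / 15.0) ** 2)  # soft Gaussian source

    print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
    ```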

  8. Urdu translation and validation of shorter version of Positive Affect and Negative Affect Schedule (PANAS) on Pakistani bank employees.

    PubMed

    Akhter, Noreen

    2017-10-01

    To translate, adapt and validate the shorter version of the Positive Affect and Negative Affect Schedule (PANAS) on Pakistani corporate employees. This cross-sectional study was conducted in the twin cities of Islamabad and Rawalpindi from October 2014 to December 2015, and was completed in two independent parts. In part one, the scale was translated by forward translation. It was then pilot-tested and administered to customer-services employees from commercial banks and the telecommunication sector. Data from the pilot study were analysed using exploratory factor analysis to extract the initial factor structure of the scale. Part two comprised the main study. Commercial bank employees were included in the sample using a convenience sampling technique. Data from the main study were analysed using confirmatory factor analysis in order to establish the construct validity of the scale. There were 145 participants in the first part of the study and 495 in the second. Results of the confirmatory factor analysis confirmed the two-factor structure of the scale, suggesting that it has two distinct domains, i.e. positive affect and negative affect. The shorter version of the scale was found to be a valid and reliable measure.

  9. A study of residence time distribution using radiotracer technique in the large scale plant facility

    NASA Astrophysics Data System (ADS)

    Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.

    2017-06-01

    As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which can provide fast, online and effective detection of plant problems, have been continually developed. One promising application of radiotracers for troubleshooting in a process plant is the analysis of Residence Time Distribution (RTD). In this paper, a study of RTD in a large scale plant facility using a radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using the radiotracer technique in a “larger than laboratory” scale plant setup comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for this work due to its chemical properties, suitable half-life and on-site availability. NH4Br in the form of an aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were calculated from the measured data. The experience and knowledge gained from this study are important for extending this technique to industrial facilities in the future.
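
    The RTD follows from the normalized tracer response, E(t) = c(t) / ∫c dt, and the mean residence time is its first moment, MRT = ∫ t·E(t) dt. A minimal sketch with a hypothetical detector curve:

    ```python
    import numpy as np

    def mean_residence_time(t, c):
        """MRT from a tracer response curve c(t), by trapezoidal integration."""
        t, c = np.asarray(t, float), np.asarray(c, float)
        dt = np.diff(t)
        area = np.sum(0.5 * (c[1:] + c[:-1]) * dt)   # integral of c dt
        te = t * c / area                            # t * E(t)
        return np.sum(0.5 * (te[1:] + te[:-1]) * dt)

    t = np.linspace(0, 120, 121)          # minutes
    c = (t / 25.0) * np.exp(-t / 25.0)    # illustrative tank response
    print(f"MRT ≈ {mean_residence_time(t, c):.1f} min")
    ```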

  10. Recent advances in rapid and nondestructive determination of fat content and fatty acids composition of muscle foods.

    PubMed

    Tao, Feifei; Ngadi, Michael

    2018-06-13

    Conventional methods for determining fat content and fatty acids (FAs) composition are generally based on solvent extraction and gas chromatography techniques, respectively, which are time consuming, laborious, destructive to samples and require the use of hazardous solvents. These disadvantages make them unsuitable for large-scale screening or for deployment on the production lines of meat factories. In this context, the need to develop rapid and nondestructive techniques for fat and FAs analyses has been highlighted. Measurement techniques based on near-infrared spectroscopy, Raman spectroscopy, nuclear magnetic resonance and hyperspectral imaging have provided interesting and promising results for fat and FAs prediction in a variety of foods. Thus, the goal of this article is to give an overview of current research progress in the application of these four important techniques to fat and FAs analyses of muscle foods, including pork, beef, lamb, chicken, fish and fish oil. The measurement techniques are described in terms of their working principles, features, and application advantages. Research advances of these techniques for specific foods are summarized in detail and the factors influencing their modeling results are discussed. Perspectives on the current situation, future trends and challenges associated with the measurement techniques are also discussed.

  11. A Different Approach to the Scientific Research Methods Course: Effects of a Small-Scale Research Project on Pre-Service Teachers

    ERIC Educational Resources Information Center

    Bastürk, Savas

    2017-01-01

    Selecting and applying appropriate research techniques, analysing data using information and communication technologies, transferring the obtained results of the analysis into tables and interpreting them are the performance indicators evaluated by the Ministry of National Education under teacher competencies. At the beginning of the courses that…

  12. A Developmental Scale of Mental Computation with Part-Whole Numbers

    ERIC Educational Resources Information Center

    Callingham, Rosemary; Watson, Jane

    2004-01-01

    In this article, data from a study of the mental computation competence of students in grades 3 to 10 are presented. Students responded to mental computation items, presented orally, that included operations applied to fractions, decimals and percents. The data were analysed using Rasch modelling techniques, and a six-level hierarchy of part-whole…

  13. Introduction to: Forest health monitoring program

    Treesearch

    Mark J. Ambrose

    2009-01-01

    This annual technical report is a product of the Forest Health Monitoring (FHM) Program. The report provides information about a variety of issues relating to forest health at a national scale. FHM national reports have the dual focus of presenting analyses of the latest available data and showcasing innovative techniques for analyzing forest health data. The report is...

  14. Large-scale production of lipoplexes with long shelf-life.

    PubMed

    Clement, Jule; Kiefer, Karin; Kimpfler, Andrea; Garidel, Patrick; Peschka-Süss, Regine

    2005-01-01

    The instability of lipoplex formulations is a major obstacle to overcome before their commercial application in gene therapy. In this study, a continuous mixing technique for the large-scale preparation of lipoplexes followed by lyophilisation for increased stability and shelf-life has been developed. Lipoplexes were analysed for transfection efficiency and cytotoxicity in human aorta smooth muscle cells (HASMC) and a rat smooth muscle cell line (A-10 SMC). Homogeneity of lipid/DNA-products was investigated by photon correlation spectroscopy (PCS) and cryotransmission electron microscopy (cryo-TEM). Studies have been undertaken with DAC-30, a composition of 3beta-[N-(N,N'-dimethylaminoethane)-carbamoyl]-cholesterol (DAC-Chol) and dioleylphosphatidylethanolamine (DOPE) and a green fluorescent protein (GFP) expressing marker plasmid. A continuous mixing technique was compared to the small-scale preparation of lipoplexes by pipetting. Individual steps of the continuous mixing process were evaluated in order to optimise the manufacturing technique: lipid/plasmid ratio, composition of transfection medium, pre-treatment of the lipid, size of the mixing device, mixing procedure and the influence of the lyophilisation process. It could be shown that the method developed for production of lipoplexes on a large scale under sterile conditions led to lipoplexes with good transfection efficiencies combined with low cytotoxicity, improved characteristics and long shelf-life.

  15. Current challenges in quantifying preferential flow through the vadose zone

    NASA Astrophysics Data System (ADS)

    Koestel, John; Larsbo, Mats; Jarvis, Nick

    2017-04-01

    In this presentation, we give an overview of current challenges in quantifying preferential flow through the vadose zone. A review of the literature suggests that current generation models do not fully reflect the present state of process understanding and empirical knowledge of preferential flow. We believe that the development of improved models will be stimulated by the increasingly widespread application of novel imaging technologies as well as future advances in computational power and numerical techniques. One of the main challenges in this respect is to bridge the large gap between the scales at which preferential flow occurs (pore to Darcy scales) and the scale of interest for management (fields, catchments, regions). Studies at the pore scale are being supported by the development of 3-D non-invasive imaging and numerical simulation techniques. These studies are leading to a better understanding of how macropore network topology and initial/boundary conditions control key state variables like matric potential and thus the strength of preferential flow. Extrapolation of this knowledge to larger scales would require support from theoretical frameworks such as key concepts from percolation and network theory, since we lack measurement technologies to quantify macropore networks at these large scales. Linked hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data enable investigation of the larger-scale heterogeneities that can generate preferential flow patterns at pedon, hillslope and field scales. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help in parameterizing models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  16. Integrating High-Resolution Datasets to Target Mitigation Efforts for Improving Air Quality and Public Health in Urban Neighborhoods

    PubMed Central

    Shandas, Vivek; Voelkel, Jackson; Rao, Meenakshi; George, Linda

    2016-01-01

    Reducing exposure to degraded air quality is essential for building healthy cities. Although air quality and population vary at fine spatial scales, current regulatory and public health frameworks assess human exposures at county or city scales. We build on a spatial analysis technique, dasymetric mapping, for allocating urban populations that, together with emerging fine-scale measurements of air pollution, addresses three objectives: (1) evaluate the role of spatial scale in estimating exposure; (2) identify urban communities that are disproportionately burdened by poor air quality; and (3) estimate the reduction in mobile-source pollution due to local tree-planting efforts, using nitrogen dioxide as the indicator pollutant. Our results show a maximum difference of 197% between the cadastrally-informed dasymetric system (CIDS) and standard estimations of population exposure to degraded air quality for small-spatial-extent analyses, and no substantial difference for large-spatial-extent analyses. These results provide a foundation for improving policies for managing air quality and for targeting mitigation efforts to address challenges of environmental justice. PMID:27527205

  17. Does computerizing paper-and-pencil job attitude scales make a difference? New IRT analyses offer insight.

    PubMed

    Donovan, M A; Drasgow, F; Probst, T M

    2000-04-01

    The measurement equivalence of 2 scales of the Job Descriptive Index (JDI; P. C. Smith, L. M. Kendall, & C. L. Hulin, 1969), the Supervisor Satisfaction scale and the Coworker Satisfaction scale, was examined across computerized and paper-and-pencil administrations. In this study, employees in 2 organizations (N = 1,777) were administered paper-and-pencil versions of the scales, and employees in a third organization (N = 509) were administered a computerized version. A newly developed item response theory (IRT) technique for examining differential test functioning (N. S. Raju, W. J. van der Linden, & P. F. Fleer, 1995) was used to examine measurement equivalence across media. Results support the measurement equivalence of the JDI Supervisor and Coworker scales across administration media. The implications of these findings for both practitioners and organizational researchers are discussed.

  18. Application of multivariate statistical techniques in microbial ecology.

    PubMed

    Paliy, O; Shankar, V

    2016-03-01

    Recent advances in high-throughput methods of molecular analysis have led to an explosion of studies generating large-scale ecological data sets. Progress has been particularly noticeable in the field of microbial ecology, where new experimental approaches have provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques, including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. © 2016 John Wiley & Sons Ltd.
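
    As one concrete instance of the exploratory procedures reviewed, a Hellinger-transformed PCA is a common ordination for community count tables. A minimal sketch on synthetic taxa counts:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    # Hypothetical taxa-count table: rows = samples, columns = taxa.
    rng = np.random.default_rng(2)
    counts = rng.poisson(5, size=(20, 50)).astype(float)

    # Hellinger transformation: square root of relative abundances, which
    # makes Euclidean-distance-based PCA sensible for community data.
    hellinger = np.sqrt(counts / counts.sum(axis=1, keepdims=True))

    scores = PCA(n_components=2).fit_transform(hellinger)
    print(scores[:3].round(3))   # sample ordination coordinates (PC1, PC2)
    ```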

  19. Using software agents to preserve individual health data confidentiality in micro-scale geographical analyses.

    PubMed

    Kamel Boulos, Maged N; Cai, Qiang; Padget, Julian A; Rushton, Gerard

    2006-04-01

    Confidentiality constraints often preclude the release of disaggregate data about individuals, which limits the types and accuracy of the results of geographical health analyses that could be done. Access to individually geocoded (disaggregate) data often involves lengthy and cumbersome procedures through review boards and committees for approval (and sometimes is not possible). Moreover, current data confidentiality-preserving solutions compatible with fine-level spatial analyses either lack flexibility or yield less than optimal results (because of confidentiality-preserving changes they introduce to disaggregate data), or both. In this paper, we present a simulation case study to illustrate how some analyses cannot be (or will suffer if) done on aggregate data. We then quickly review some existing data confidentiality-preserving techniques, and move on to explore a solution based on software agents with the potential of providing flexible, controlled (software-only) access to unmodified confidential disaggregate data and returning only results that do not expose any person-identifiable details. The solution is thus appropriate for micro-scale geographical analyses where no person-identifiable details are required in the final results (i.e., only aggregate results are needed). Our proposed software agent technique also enables post-coordinated analyses to be designed and carried out on the confidential database(s), as needed, compared to a more conventional solution based on the Web Services model that would only support a rigid, pre-coordinated (pre-determined) and rather limited set of analyses. The paper also provides an exploratory discussion of mobility, security, and trust issues associated with software agents, as well as possible directions/solutions to address these issues, including the use of virtual organizations. Successful partnerships between stakeholder organizations, proper collaboration agreements, clear policies, and unambiguous interpretations of laws and regulations are also much needed to support and ensure the success of any technological solution.

  20. SIMS analyses of minor and trace element distributions in fracture calcite from Yucca Mountain, Nevada, USA

    NASA Astrophysics Data System (ADS)

    Denniston, Rhawn F.; Shearer, Charles K.; Layne, Graham D.; Vaniman, David T.

    1997-05-01

    Fracture-lining calcite samples from Yucca Mountain, Nevada, obtained as part of the extensive vertical sampling in studies of this site as a potential high-level waste repository, have been characterized according to microbeam-scale (25-30 μm) trace and minor element chemistry and cathodoluminescent zonation patterns. As bulk chemical analyses are limited in spatial resolution and are subject to contamination by intergrown phases, a technique for analysis by secondary ion mass spectrometry (SIMS) of minor (Mn, Fe, Sr) and trace (REE) elements in calcite was developed and applied to eighteen calcite samples from four boreholes and one trench. SIMS analyses of REE in calcite and dolomite have been shown to be quantitative to abundances < 1 × chondrite. Although the low secondary ion yields associated with carbonates forced higher counting times than are necessary for most silicates, Mn, Fe, Sr, and REE analyses were obtained with sub-ppm detection limits and 2-15% analytical precision. Bulk chemical signatures noted by Vaniman (1994) allowed correlation of minor and trace element signatures in Yucca Mountain calcite with the location of calcite precipitation (saturated vs. unsaturated zone). For example, upper unsaturated zone calcite exhibits pronounced negative Ce and Eu anomalies not observed in calcite collected below in the deep unsaturated zone. These chemical distinctions served as fingerprints which were applied to growth zones in order to examine temporal changes in calcite crystallization histories; analyses of such fine-scale zonal variations are unattainable using bulk analytical techniques. In addition, LREE (particularly Ce) scavenging from calcite-precipitating solutions by manganese oxide phases is discussed as the mechanism for Ce depletion in unsaturated zone calcite.

  1. Multistage, multiseasonal and multiband imagery to identify and qualify non-forest vegetation resources

    NASA Technical Reports Server (NTRS)

    Driscoll, R. S.; Francis, R. E.

    1970-01-01

    A description of space and supporting aircraft photography for the interpretation and analyses of non-forest (shrubby and herbaceous) native vegetation is presented. The research includes the development of a multiple sampling technique to assign quantitative area values of specific plant community types included within an assigned space photograph map unit. Also, investigations of aerial film type, scale, and season of photography for identification and quantity measures of shrubby and herbaceous vegetation were conducted. Some work was done to develop automated interpretation techniques with film image density measurement devices.

  2. Hospital survey on patient safety culture: psychometric analysis on a Scottish sample.

    PubMed

    Sarac, Cakil; Flin, Rhona; Mearns, Kathryn; Jackson, Jeanette

    2011-10-01

    To investigate the psychometric properties of the Hospital Survey on Patient Safety Culture on a Scottish NHS data set. The data were collected from 1969 clinical staff (estimated 22% response rate) from one acute hospital in each of seven Scottish Health Boards. Using a split-half validation technique, the data were randomly split; an exploratory factor analysis was conducted on the calibration data set, and confirmatory factor analyses were conducted on the validation data set to investigate and check the original US model fit in a Scottish sample. Following the split-half validation technique, exploratory factor analysis results showed a 10-factor optimal measurement model. Confirmatory factor analyses were then performed to compare the model fit of two competing models (the 10-factor alternative model vs the 12-factor original model). A Satorra-Bentler scaled χ² difference test demonstrated that the original 12-factor model performed significantly better in the Scottish sample. Furthermore, reliability analyses of each component yielded satisfactory results. The mean scores on the climate dimensions in the Scottish sample were comparable with those found in other European countries. This study provided evidence that the original 12-factor structure of the Hospital Survey on Patient Safety Culture scale is replicated in this Scottish sample. Therefore, no modifications to the original 12-factor model are required, and its use is suggested, since it would allow researchers the possibility of cross-national comparisons.

  3. Orbital refill of propulsion vehicle tankage

    NASA Technical Reports Server (NTRS)

    Merino, F.; Risberg, J. A.; Hill, M.

    1980-01-01

    Techniques for orbital refueling of space-based vehicles were developed, and experimental programs to verify these techniques were identified. Orbital refueling operations were developed for two cryogenic orbital transfer vehicles (OTV's) and an Earth-storable low-thrust liquid propellant vehicle. Refueling operations were analyzed assuming an orbiter tanker for near-term missions and an orbital depot. Analyses were conducted using liquid hydrogen and N2O4. The influence of a pressurization system and acquisition device on operations was also considered. Analyses showed that vehicle refill operations will be more difficult with a cryogen than with an Earth-storable propellant. The major elements of a successful refill with cryogens include tank prechill and fill. Propellant quantities expended for tank prechill appear to be insignificant. Techniques were identified to avoid loss of liquid or excessive tank pressures during refill. It was determined that refill operations will be similar whether an orbiter tanker or an orbital depot is used. Modeling analyses were performed for prechill and fill tests to be conducted using the Spacelab as a test bed and a 1/10-scale model OTV (with LN2 as the test fluid) as the experimental package.

  4. Estimating Interaction Effects With Incomplete Predictor Variables

    PubMed Central

    Enders, Craig K.; Baraldi, Amanda N.; Cham, Heining

    2014-01-01

    The existing missing data literature does not provide a clear prescription for estimating interaction effects with missing data, particularly when the interaction involves a pair of continuous variables. In this article, we describe maximum likelihood and multiple imputation procedures for this common analysis problem. We outline 3 latent variable model specifications for interaction analyses with missing data. These models apply procedures from the latent variable interaction literature to analyses with a single indicator per construct (e.g., a regression analysis with scale scores). We also discuss multiple imputation for interaction effects, emphasizing an approach that applies standard imputation procedures to the product of 2 raw score predictors. We thoroughly describe the process of probing interaction effects with maximum likelihood and multiple imputation. For both missing data handling techniques, we outline centering and transformation strategies that researchers can implement in popular software packages, and we use a series of real data analyses to illustrate these methods. Finally, we use computer simulations to evaluate the performance of the proposed techniques. PMID:24707955
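
    The product-of-raw-scores strategy emphasized here treats x1·x2 as just another variable in the imputation model. A minimal sketch using scikit-learn's IterativeImputer as a stand-in for the multiple-imputation software the article discusses (data and effect sizes are synthetic):

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    rng = np.random.default_rng(3)
    n = 500
    x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
    y = 0.3 * x1 + 0.3 * x2 + 0.4 * x1 * x2 + rng.standard_normal(n)

    # The raw-score product enters the data matrix before values go missing,
    # so the imputation model "sees" the interaction term directly.
    data = np.column_stack([x1, x2, x1 * x2, y])
    data[rng.random(data.shape) < 0.15] = np.nan

    # m stochastic imputations (sample_posterior=True gives proper draws)
    imputations = [
        IterativeImputer(sample_posterior=True, random_state=m).fit_transform(data)
        for m in range(5)
    ]
    # Pool a simple statistic across imputations; Rubin's rules would also
    # combine within- and between-imputation variance for standard errors.
    print(np.mean([np.corrcoef(d[:, 2], d[:, 3])[0, 1] for d in imputations]))
    ```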

  5. Contour advection with surgery: A technique for investigating finescale structure in tracer transport

    NASA Technical Reports Server (NTRS)

    Waugh, Darryn W.; Plumb, R. Alan

    1994-01-01

    We present a trajectory technique, contour advection with surgery (CAS), for tracing the evolution of material contours in a specified (including observed) evolving flow. CAS uses the algorithms developed by Dritschel for contour dynamics/surgery to trace the evolution of specified contours. The contours are represented by a series of particles, which are advected by a specified, gridded wind distribution. The resolution of the contours is preserved by continually adjusting the number of particles, and finescale features are produced that are not present in the input data (and cannot easily be generated using standard trajectory techniques). The reliability of the CAS procedure, and its dependence on the spatial and temporal resolution of the wind field, are examined by comparisons with high-resolution numerical data (from contour dynamics calculations and from a general circulation model) and with routine stratospheric analyses. These comparisons show that the large-scale motions dominate the deformation field and that CAS can accurately reproduce small scales from low-resolution wind fields. The CAS technique therefore enables examination of atmospheric tracer transport at previously unattainable resolution.
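
    A heavily simplified sketch of the contour-advection idea: advect contour nodes through a specified velocity field and insert midpoints wherever a segment overstretches. Dritschel's surgery, node removal, and the higher-order time stepping of real CAS codes are omitted, and the velocity field is an illustrative steady shear rather than gridded winds:

    ```python
    import numpy as np

    def advect_contour(xy, velocity, dt, nsteps, max_seg=0.05):
        """Advect contour nodes, refining any segment longer than max_seg."""
        for _ in range(nsteps):
            xy = xy + dt * velocity(xy)                        # forward Euler
            seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)
            while seg.max() > max_seg:                         # keep resolution
                i = int(seg.argmax())
                mid = 0.5 * (xy[i] + xy[i + 1])
                xy = np.insert(xy, i + 1, mid, axis=0)
                seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)
        return xy

    def shear_flow(xy):
        """Illustrative steady shear: u = y, v = 0."""
        return np.column_stack([xy[:, 1], np.zeros(len(xy))])

    theta = np.linspace(0, 2 * np.pi, 40)
    circle = np.column_stack([np.cos(theta), np.sin(theta)])
    stretched = advect_contour(circle, shear_flow, dt=0.05, nsteps=40)
    print(len(circle), "->", len(stretched), "contour nodes")
    ```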

  6. Change Detection Analysis of Water Pollution in Coimbatore Region using Different Color Models

    NASA Astrophysics Data System (ADS)

    Jiji, G. Wiselin; Devi, R. Naveena

    2017-12-01

    The data acquired by remote sensing satellites furnish information about land and water at varying resolutions and have been widely used for change detection studies. Although many change detection methodologies and techniques already exist, new ones continue to emerge. Existing change detection techniques exploit images in either grayscale or the RGB color model. In this paper we introduce additional color models for performing change detection for water pollution. The polluted lakes are classified, post-classification change detection techniques are applied to the RGB images, and the results are analysed for the presence or absence of change. Furthermore, RGB images obtained after classification, when converted to either of the two color models YCbCr and YIQ, are found to produce the same results as the RGB model images. Thus it can be concluded that other color models such as YCbCr and YIQ can be used as substitutes for the RGB color model when analysing change detection with regard to water pollution.
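
    The colour-model conversion underlying this comparison is a fixed linear transform. A minimal sketch of the standard BT.601 RGB-to-YCbCr conversion (the paper does not state which coefficients were used, so the common ones are assumed):

    ```python
    import numpy as np

    def rgb_to_ycbcr(rgb):
        """ITU-R BT.601 full-range RGB -> YCbCr, for values scaled to 0..1."""
        m = np.array([[ 0.299,     0.587,     0.114   ],   # luma Y
                      [-0.168736, -0.331264,  0.5     ],   # chroma Cb
                      [ 0.5,      -0.418688, -0.081312]])  # chroma Cr
        ycc = rgb @ m.T
        ycc[..., 1:] += 0.5          # center the chroma channels
        return ycc

    pixel = np.array([[0.8, 0.2, 0.1]])   # a reddish pixel
    print(rgb_to_ycbcr(pixel).round(3))
    ```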

  7. A novel household water insecurity scale: Procedures and psychometric analysis among postpartum women in western Kenya.

    PubMed

    Boateng, Godfred O; Collins, Shalean M; Mbullo, Patrick; Wekesa, Pauline; Onono, Maricianah; Neilands, Torsten B; Young, Sera L

    2018-01-01

    Our ability to measure household-level food insecurity has revealed its critical role in a range of physical, psychosocial, and health outcomes. Currently, there is no analogous, standardized instrument for quantifying household-level water insecurity, which prevents us from understanding both its prevalence and consequences. Therefore, our objectives were to develop and validate a household water insecurity scale appropriate for use in our cohort in western Kenya. We used a range of qualitative techniques to develop a preliminary set of 29 household water insecurity questions and administered those questions at 15 and 18 months postpartum, concurrent with a suite of other survey modules. These data were complemented by data on quantity of water used and stored, and microbiological quality. Inter-item and item-total correlations were performed to reduce scale items to 20. Exploratory factor and parallel analyses were used to determine the latent factor structure; a unidimensional scale was hypothesized and tested using confirmatory factor and bifactor analyses, along with multiple statistical fit indices. Reliability was assessed using Cronbach's alpha and the coefficient of stability, which produced a coefficient alpha of 0.97 at 15 and 18 months postpartum and a coefficient of stability of 0.62. Predictive, convergent and discriminant validity of the final household water insecurity scale were supported based on relationships with food insecurity, perceived stress, per capita household water use, and time and money spent acquiring water. The resultant scale is a valid and reliable instrument. It can be used in this setting to test a range of hypotheses about the role of household water insecurity in numerous physical and psychosocial health outcomes, to identify the households most vulnerable to water insecurity, and to evaluate the effects of water-related interventions. To extend its applicability, we encourage efforts to develop a cross-culturally valid scale using robust qualitative and quantitative techniques.
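
    The reliability coefficient reported here (coefficient alpha) has a compact closed form: alpha = k/(k-1) · (1 - sum of item variances / variance of the total score). A minimal sketch on synthetic responses (the real items and data are not reproduced):

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Coefficient alpha for an (n_respondents, k_items) response matrix."""
        items = np.asarray(items, float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    # Hypothetical responses to a 20-item module scored 0-3
    rng = np.random.default_rng(4)
    trait = rng.normal(size=(300, 1))
    noise = 0.6 * rng.standard_normal((300, 20))
    responses = np.clip(np.round(1.5 + trait + noise), 0, 3)
    print(f"alpha = {cronbach_alpha(responses):.2f}")
    ```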

  8. Analysing attitude data through ridit schemes.

    PubMed

    El-rouby, M G

    1994-12-02

    The attitudes of individuals and populations on various issues are usually assessed through sample surveys. Responses to survey questions are then scaled and combined into a meaningful whole which defines the measured attitude. The applied scales may be of nominal, ordinal, interval, or ratio nature depending upon the degree of sophistication the researcher wants to introduce into the measurement. This paper discusses methods of analysis for categorical variables of the type used in attitude and human behavior research, and recommends adoption of ridit analysis, a technique which has been successfully applied to epidemiological, clinical investigation, laboratory, and microbiological data. The ridit methodology is described after reviewing some general attitude scaling methods and problems of analysis related to them. The ridit method is then applied to a recent study conducted to assess health care service quality in North Carolina. This technique is conceptually and computationally more simple than other conventional statistical methods, and is also distribution-free. Basic requirements and limitations on its use are indicated.
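
    Ridit scores are computed against an ordered reference distribution: each category receives the proportion of reference observations falling below it plus half the proportion within it, so a group's mean ridit above 0.5 indicates a shift toward the upper categories. A minimal sketch with hypothetical counts:

    ```python
    import numpy as np

    def ridits(ref_counts):
        """Ridit score of each ordered category, relative to a reference group:
        (count below the category + half its own count) / N."""
        ref = np.asarray(ref_counts, float)
        below = np.concatenate([[0.0], np.cumsum(ref)[:-1]])
        return (below + 0.5 * ref) / ref.sum()

    # Hypothetical 5-point attitude item: reference vs. comparison group
    ref_counts = np.array([10, 25, 40, 20, 5])
    cmp_counts = np.array([5, 15, 35, 30, 15])

    r = ridits(ref_counts)
    mean_ridit = np.sum(r * cmp_counts) / cmp_counts.sum()
    print(r.round(3), f"mean ridit = {mean_ridit:.3f}")  # > 0.5: shifted upward
    ```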

  9. Ecological hierarchies and self-organisation - Pattern analysis, modelling and process integration across scales

    USGS Publications Warehouse

    Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.

    2010-01-01

    A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate, if strong cross-scale relationships predominate. Here, we propose an 'across-scale-approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.

  10. Error Estimation in an Optimal Interpolation Scheme for High Spatial and Temporal Resolution SST Analyses

    NASA Technical Reports Server (NTRS)

    Rigney, Matt; Jedlovec, Gary; LaFontaine, Frank; Shafer, Jaclyn

    2010-01-01

    Heat and moisture exchange between the ocean surface and the atmosphere plays an integral role in short-term, regional NWP. Current SST products lack the spatial and temporal resolution needed to accurately capture small-scale features that affect heat and moisture fluxes. Here, NASA satellite data are used to produce a high spatial and temporal resolution SST analysis with an optimal interpolation (OI) technique.
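
    Generic OI combines a background field with observations through a gain matrix, xa = xb + B Hᵀ (H B Hᵀ + R)⁻¹ (yo - H xb). A toy 1-D sketch (the covariances and values are illustrative, not those of the NASA analysis, and the error-estimation step is not shown):

    ```python
    import numpy as np

    def oi_analysis(xb, B, H, R, yo):
        """One optimal-interpolation update of background xb with obs yo."""
        K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
        return xb + K @ (yo - H @ xb)

    # Toy 1-D SST grid of 5 points, with observations at points 1 and 3
    n = 5
    dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    B = 0.5 * np.exp(-dist / 2.0)                   # background-error covariance
    H = np.zeros((2, n)); H[0, 1] = H[1, 3] = 1.0   # observation operator
    R = 0.1 * np.eye(2)                             # observation-error covariance
    xb = np.full(n, 20.0)                           # background SST (deg C)
    yo = np.array([21.0, 19.5])                     # observed SST

    print(oi_analysis(xb, B, H, R, yo).round(2))
    ```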

  11. On the analysis of local and global features for hyperemia grading

    NASA Astrophysics Data System (ADS)

    Sánchez, L.; Barreira, N.; Sánchez, N.; Mosquera, A.; Pena-Verdeal, H.; Yebra-Pimentel, E.

    2017-03-01

    In optometry, hyperemia is the accumulation of blood flow in the conjunctival tissue. Dry eye syndrome and allergic conjunctivitis are two of its main causes. Its main symptom is a red hue in the eye, which optometrists evaluate subjectively against a grading scale. In this paper, we propose an automatic approach to the problem of hyperemia grading in the bulbar conjunctiva. We compute several image features on images of the patients' eyes, analyse the relations among them by using feature selection techniques, and transform the feature vector of each image to a value in the appropriate grading range by means of machine learning techniques. We analyse different areas of the conjunctiva to evaluate their importance for the diagnosis. Our results show that it is possible to mimic the experts' behaviour through the proposed approach.

  12. Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis

    USGS Publications Warehouse

    Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.

    2003-01-01

    This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable aspects of a real WTP, diagnosis is still difficult in practice. The application of intelligent techniques, which can analyse multi-dimensional process data using sophisticated visualisation, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. By using component planes, detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as global information, are discovered. The operating conditions and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the information obtained by clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysing and diagnosing tool to understand the system behaviour and to extract knowledge contained in multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.
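
    A minimal numpy sketch of the Kohonen SOM training loop described here (grid size, decay schedules and the process data are placeholders, not the study's configuration):

    ```python
    import numpy as np

    def train_som(data, grid=(8, 8), n_iter=2000, lr0=0.5, sigma0=3.0, seed=0):
        """Kohonen SOM: each input is mapped to its best-matching unit (BMU)
        and a shrinking neighbourhood of codebook vectors is pulled toward it."""
        rng = np.random.default_rng(seed)
        gy, gx = grid
        w = rng.random((gy, gx, data.shape[1]))              # codebook vectors
        yy, xx = np.mgrid[0:gy, 0:gx]
        for t in range(n_iter):
            x = data[rng.integers(len(data))]
            d = np.linalg.norm(w - x, axis=2)
            by, bx = np.unravel_index(d.argmin(), d.shape)   # BMU coordinates
            lr = lr0 * np.exp(-t / n_iter)                   # decaying rate
            sigma = sigma0 * np.exp(-t / n_iter)             # shrinking radius
            h = np.exp(-((yy - by) ** 2 + (xx - bx) ** 2) / (2 * sigma ** 2))
            w += lr * h[..., None] * (x - w)
        return w

    # Hypothetical WTP process data: rows = records, columns = variables
    rng = np.random.default_rng(5)
    process = rng.normal(size=(500, 6))
    codebook = train_som(process)
    print(codebook.shape)   # (8, 8, 6): one component plane per variable
    ```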

  13. Preferential flow from pore to landscape scales

    NASA Astrophysics Data System (ADS)

    Koestel, J. K.; Jarvis, N.; Larsbo, M.

    2017-12-01

    In this presentation, we give a brief personal overview of some recent progress in quantifying preferential flow in the vadose zone, based on our own work and those of other researchers. One key challenge is to bridge the gap between the scales at which preferential flow occurs (i.e. pore to Darcy scales) and the scales of interest for management (i.e. fields, catchments, regions). We present results of recent studies that exemplify the potential of 3-D non-invasive imaging techniques to visualize and quantify flow processes at the pore scale. These studies should lead to a better understanding of how the topology of macropore networks control key state variables like matric potential and thus the strength of preferential flow under variable initial and boundary conditions. Extrapolation of this process knowledge to larger scales will remain difficult, since measurement technologies to quantify macropore networks at these larger scales are lacking. Recent work suggests that the application of key concepts from percolation theory could be useful in this context. Investigation of the larger Darcy-scale heterogeneities that generate preferential flow patterns at the soil profile, hillslope and field scales has been facilitated by hydro-geophysical measurement techniques that produce highly spatially and temporally resolved data. At larger regional and global scales, improved methods of data-mining and analyses of large datasets (machine learning) may help to parameterize models as well as lead to new insights into the relationships between soil susceptibility to preferential flow and site attributes (climate, land uses, soil types).

  14. Linear static structural and vibration analysis on high-performance computers

    NASA Technical Reports Server (NTRS)

    Baddourah, M. A.; Storaasli, O. O.; Bostic, S. W.

    1993-01-01

    Parallel computers offer the opportunity to significantly reduce the computation time necessary to analyze large-scale aerospace structures. This paper presents algorithms developed for and implemented on massively-parallel computers, hereafter referred to as Scalable High-Performance Computers (SHPC), for the most computationally intensive tasks involved in structural analysis, namely, generation and assembly of system matrices, solution of systems of equations, and calculation of eigenvalues and eigenvectors. Results on SHPC are presented for large-scale structural problems (i.e., models for the High-Speed Civil Transport). The goal of this research is to develop a new, efficient technique which extends structural analysis to SHPC and makes large-scale structural analyses tractable.

  15. Evaluating National Environmental Sustainability: Performance Measures and Influential Factors for OECD-Member Countries featuring Canadian Performance and Policy Implications

    NASA Astrophysics Data System (ADS)

    Calbick, Kenneth S.

    This research reviews five studies that evaluate national environmental sustainability with composite indices; performs uncertainty and sensitivity analyses of techniques for building a composite index; completes principal components factor analysis to help build subindices measuring waste and pollution, sustainable energy, sustainable food, nature conservation, and sustainable cities (due to its current importance, the greenhouse gases (GHG) indicator is included individually as another policy measure); analyses factors that seem to influence performance: climate, population growth, population density, economic output, technological development, industrial structure, energy prices, environmental governance, pollution abatement and control expenditures, and environmental pricing; and explores the Canadian policy implications of the results. The techniques for building composite indices include performance indicator selection, missing data treatment, normalisation technique, scale-effect adjustments, weights, and aggregation method. Scale-effect adjustments and normalisation method are significant sources of uncertainty, inducing 68% of the observed variation in a country's final rank at the 95% level of confidence. Choice of indicators also introduces substantial variation. To compensate for this variation, the current study recommends that a composite index always be analysed alongside other policy subindices and individual indicators. Moreover, the connection between population and consumption indicates that per capita scale-effect adjustments should be used for certain indicators. Rather than ranking normalisation, studies should use a method that retains information from the raw indicator values. Multiple regression and cluster analyses indicate that economic output, environmental governance, and energy prices are major influential factors, with energy prices the most important. Energy prices are statistically significant for five out of seven performance measures at the 95% level of confidence: 37% variance explained (of 73%) on the environmental sustainability performance composite indicator, 55% (of 55%) on the waste and pollution subindex, 20% (of 70%) on the sustainable energy subindex, 5% (of 100%) on the sustainable cities subindex, and 55% (of 81%) on the GHG indicator. Energy prices are relevant to Canadian policy; increasing prices could substantially improve Canada's performance. Policy makers should increase energy prices through a carbon pricing strategy congruent with the ecological fiscal reform advanced by the National Round Table on the Environment and the Economy. Keywords: sustainable development; composite indices; environmental policy; environmental governance; energy prices; Canada.

  16. Exploring Chondrule and CAI Rims Using Micro- and Nano-Scale Petrological and Compositional Analysis

    NASA Astrophysics Data System (ADS)

    Cartwright, J. A.; Perez-Huerta, A.; Leitner, J.; Vollmer, C.

    2017-12-01

    As the major components within chondrites, chondrules (mm-sized droplets of quenched silicate melt) and calcium-aluminum-rich inclusions (CAI, refractory) represent the most abundant and the earliest materials that solidified from the solar nebula. However, the exact formation mechanisms of these clasts, and whether these processes are related, remain unconstrained, despite extensive petrological and compositional study. By taking advantage of recent advances in nano-scale tomographical techniques, we have undertaken a combined micro- and nano-scale study of CAI and chondrule rim morphologies to investigate their formation mechanisms. The target lithologies for this research are Wark-Lovering rims (WLR) and fine-grained rims (FGR), around CAIs and chondrules respectively, present within many chondrites. The FGRs, which are up to 100 µm thick, are of particular interest as recent studies have identified presolar grains within them. These grains predate the formation of our Solar System, suggesting FGR formation under nebular conditions. By contrast, WLRs are 10-20 µm thick, made of different compositional layers, and likely formed by flash-heating shortly after CAI formation, thus recording nebular conditions. A detailed multi-scale study of these respective rims will enable us to better understand their formation histories and determine the potential for commonality between these two phases, despite reports of an observed formation age difference of up to 2-3 Myr. We are using a combination of complementary techniques on our selected target areas: 1) micro-scale characterization using standard microscopic and compositional techniques (SEM-EBSD, EMPA); 2) nano-scale characterization of structures using transmission electron microscopy (TEM) and elemental, isotopic and tomographic analysis with NanoSIMS and atom probe tomography (APT). Preliminary nano-scale APT analysis of FGR morphologies within the Allende carbonaceous chondrite has successfully discerned complex chondritic mineralogies and compositional differences across boundaries, in one of the first applications of in-situ APT techniques to chondrites. Further data reduction will allow us to characterize the exact phases present, and further chondrite analyses are in progress.

  17. Measuring pain phenomena after spinal cord injury: Development and psychometric properties of the SCI-QOL Pain Interference and Pain Behavior assessment tools.

    PubMed

    Cohen, Matthew L; Kisala, Pamela A; Dyson-Hudson, Trevor A; Tulsky, David S

    2018-05-01

    To develop modern patient-reported outcome measures that assess pain interference and pain behavior after spinal cord injury (SCI). Grounded-theory based qualitative item development; large-scale item calibration field-testing; confirmatory factor analyses; graded response model item response theory analyses; statistical linking techniques to transform scores to the Patient Reported Outcome Measurement Information System (PROMIS) metric. Five SCI Model Systems centers and one Department of Veterans Affairs medical center in the United States. Adults with traumatic SCI. Spinal Cord Injury - Quality of Life (SCI-QOL) Pain Interference item bank, SCI-QOL Pain Interference short form, and SCI-QOL Pain Behavior scale. Seven hundred fifty-seven individuals with traumatic SCI completed 58 items addressing various aspects of pain. Items were then separated by whether they assessed pain interference or pain behavior, and poorly functioning items were removed. Confirmatory factor analyses confirmed that each set of items was unidimensional, and item response theory analyses were used to estimate slopes and thresholds for the items. Ultimately, 7 items (4 from PROMIS) comprised the Pain Behavior scale and 25 items (18 from PROMIS) comprised the Pain Interference item bank. Ten of these 25 items were selected to form the Pain Interference short form. The SCI-QOL Pain Interference item bank and the SCI-QOL Pain Behavior scale demonstrated robust psychometric properties. The Pain Interference item bank is available as a computer adaptive test or short form for research and clinical applications, and scores are transformed to the PROMIS metric.
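
    For readers unfamiliar with the graded response model used in the calibration, the sketch below computes category response probabilities for a single polytomous item; the slope and threshold values are illustrative placeholders, not SCI-QOL item parameters.

      import numpy as np

      def grm_category_probs(theta, a, b):
          """Category probabilities under Samejima's graded response model.

          theta : latent trait level (e.g., pain interference)
          a     : item discrimination (slope)
          b     : ordered category thresholds, length m for m+1 categories
          """
          b = np.asarray(b, dtype=float)
          # P*(k) = P(response >= k): boundary curves, anchored at 1 and 0.
          p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
          p_star = np.concatenate(([1.0], p_star, [0.0]))
          return p_star[:-1] - p_star[1:]      # P(response == k)

      # Illustrative (not SCI-QOL calibrated) values for a 5-category item.
      print(grm_category_probs(theta=0.5, a=1.8, b=[-1.5, -0.4, 0.6, 1.7]))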

  18. Recent advances in analysis and prediction of Rock Falls, Rock Slides, and Rock Avalanches using 3D point clouds

    NASA Astrophysics Data System (ADS)

    Abellan, A.; Carrea, D.; Jaboyedoff, M.; Riquelme, A.; Tomas, R.; Royan, M. J.; Vilaplana, J. M.; Gauvin, N.

    2014-12-01

    The acquisition of dense terrain information using well-established 3D techniques (e.g. LiDAR, photogrammetry) and the use of new mobile platforms (e.g. Unmanned Aerial Vehicles), together with increasingly efficient post-processing workflows for image treatment (e.g. Structure from Motion), are opening up new possibilities for analysing, modeling and predicting rock slope failures. Applications span scales ranging from the monitoring of small changes at an unprecedented level of detail (e.g. sub-millimeter-scale deformation under lab conditions) to the detection of slope deformation at regional scale. In this communication we show the main accomplishments of the Swiss National Foundation project "Characterizing and analysing 3D temporal slope evolution", carried out by the Risk Analysis group (Univ. of Lausanne) in close collaboration with the RISKNAT and INTERES groups (Univ. of Barcelona and Univ. of Alicante, respectively). We have recently developed a series of innovative approaches for rock slope analysis using 3D point clouds, including semi-automatic methodologies for the identification and extraction of rock-slope features such as discontinuities, material type, rockfall occurrence and deformation. Moreover, we have improved our understanding of progressive rupture characterization through several algorithms: the computation of 3D deformation, filtering techniques applied to permanently installed terrestrial laser scanners (TLS), rock slope failure analogies at different scales (laboratory simulations, monitoring at glacier fronts, etc.), and the modelling of the influence of external forces, such as precipitation, on the acceleration of the deformation rate. We have also investigated rock slope deformation prior to the occurrence of fragmental rockfalls and the interaction of this deformation with the spatial location of future events. In spite of these recent advances, a great challenge remains in developing new algorithms for more accurate 3D point cloud treatment (e.g. filtering, segmentation) to improve rock slope characterization and monitoring; a series of exciting research findings is expected in the forthcoming years.
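
    A minimal sketch of one building block of such point-cloud change detection, computing cloud-to-cloud nearest-neighbour distances between two epochs with a KD-tree; the synthetic clouds, displacement, and threshold are invented for illustration and are far simpler than the semi-automatic methodologies described above.

      import numpy as np
      from scipy.spatial import cKDTree

      def cloud_to_cloud_distance(reference, compared):
          """Nearest-neighbour distance from each compared point to the
          reference epoch; large distances flag candidate deformation."""
          tree = cKDTree(reference)
          dist, _ = tree.query(compared, k=1)
          return dist

      # Synthetic example: a 3D cloud and a copy with a displaced patch.
      rng = np.random.default_rng(0)
      epoch1 = rng.uniform(0, 10, size=(5000, 3))
      epoch2 = epoch1.copy()
      patch = epoch2[:, 0] < 1.0
      epoch2[patch, 2] += 0.5                     # simulated 0.5 m displacement

      d = cloud_to_cloud_distance(epoch1, epoch2)
      print("flagged points:", np.sum(d > 0.25))  # simple change threshold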

  19. Identification and measurement of shrub type vegetation on large scale aerial photography

    NASA Technical Reports Server (NTRS)

    Driscoll, R. S.

    1970-01-01

    Important range-shrub species were identified at acceptable levels of accuracy on large-scale 70 mm color and color infrared aerial photographs. Identification of individual shrubs was significantly higher, however, on color infrared. Photoscales smaller than 1:2400 had limited value except for mature individuals of relatively tall species, and then only if crown margins did not overlap and sharp contrast was evident between the species and background. Larger scale photos were required for low-growing species in dense stands. The crown cover of individual species was estimated from the aerial photos with either a measuring magnifier or a projected-scale micrometer. Such crown cover measurements provide a technique for earth-resource analyses when used in conjunction with photos procured remotely from space and high altitudes.

  20. A morphologic characterisation of the 1963 Vajont Slide, Italy, using long-range terrestrial photogrammetry

    NASA Astrophysics Data System (ADS)

    Wolter, Andrea; Stead, Doug; Clague, John J.

    2014-02-01

    The 1963 Vajont Slide in northeast Italy is an important engineering and geological event. Although the landslide has been extensively studied, new insights can be derived by applying modern techniques such as remote sensing and numerical modelling. This paper presents the first digital terrestrial photogrammetric analyses of the failure scar, landslide deposits, and the area surrounding the failure, with a focus on the scar. We processed photogrammetric models to produce discontinuity stereonets, residual maps and profiles, and slope and aspect maps, all of which provide information on the failure scar morphology. Our analyses enabled the creation of a preliminary semi-quantitative morphologic classification of the Vajont failure scar based on the large-scale tectonic folds and step-paths that define it. The analyses and morphologic classification have implications for the kinematics, dynamics, and mechanism of the slide. Metre- and decametre-scale features affected the initiation, direction, and displacement rate of sliding. The most complexly folded and stepped areas occur close to the intersection of orthogonal synclinal features related to the Dinaric and Neoalpine deformation events. Our analyses also highlight, for the first time, the evolution of the Vajont failure scar from 1963 to the present.
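
    A hedged sketch of the slope and aspect computation mentioned above, applied to a gridded DEM with central differences; the synthetic surface, cell size, and aspect convention are assumptions for illustration, not the photogrammetric models of the study.

      import numpy as np

      def slope_aspect(dem, cell_size=1.0):
          """Slope (degrees) and aspect (degrees) from a gridded DEM using
          central differences. Aspect conventions vary between GIS packages;
          this uses one common arctan2-based convention."""
          dz_dy, dz_dx = np.gradient(dem, cell_size)
          slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
          aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
          return slope, aspect

      # Synthetic tilted surface as a stand-in for a photogrammetric DEM.
      y, x = np.mgrid[0:100, 0:100]
      dem = 0.2 * x + 0.1 * y
      slope, aspect = slope_aspect(dem, cell_size=5.0)
      print(slope[50, 50], aspect[50, 50])  # values at an interior cell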

  1. Environment effects from SRB exhaust effluents: Technique development and preliminary assessment

    NASA Technical Reports Server (NTRS)

    Goldford, A. I.; Adelfang, S. I.; Hickey, J. S.; Smith, S. R.; Welty, R. P.; White, G. L.

    1977-01-01

    Techniques to determine the environmental effects of the space shuttle SRB (Solid Rocket Booster) exhaust effluents are used to perform a preliminary climatological assessment. An exhaust effluent chemistry study was performed and the exhaust effluent species were determined. A reasonable exhaust particle size distribution was constructed for use in nozzle analyses and in the deposition model. The preliminary assessment is used to identify problems associated with the full-scale assessment; these preliminary air quality results should therefore be used with caution in drawing conclusions regarding the environmental effects of the space shuttle exhaust effluents.

  2. Seeing is believing: on the use of image databases for visually exploring plant organelle dynamics.

    PubMed

    Mano, Shoji; Miwa, Tomoki; Nishikawa, Shuh-ichi; Mimura, Tetsuro; Nishimura, Mikio

    2009-12-01

    Organelle dynamics vary dramatically depending on cell type, developmental stage and environmental stimuli, so various parameters, such as size, number and behavior, are required to describe the dynamics of each organelle. Imaging techniques are superior to other techniques for describing organelle dynamics because these parameters are exhibited visually; as the results can be seen immediately, investigators can more easily grasp organelle dynamics. At present, imaging techniques are emerging as fundamental tools in plant organelle research, and the development of new methodologies to visualize organelles and the improvement of analytical tools and equipment have allowed the large-scale generation of image and movie data. Accordingly, image databases that accumulate information on organelle dynamics are an increasingly indispensable part of modern plant organelle research. In addition, image databases are potentially rich data sources for computational analyses, as the image and movie data deposited in the databases contain valuable and significant information, such as size, number, length and velocity. Computational analytical tools support image-based data mining, such as segmentation, quantification and statistical analyses, to extract biologically meaningful information from each database and combine it to construct models. In this review, we outline the image databases that are dedicated to plant organelle research and present their potential as resources for image-based computational analyses.

  3. Solutions multiples thermocapillaires en zone flottante à gravité nulle

    NASA Astrophysics Data System (ADS)

    Chénier, E.; Delcarte, C.; Labrosse, G.

    1998-04-01

    An original model is adopted to analyse the hydrodynamics of the melted phase in the floating-zone crystal growth configuration. In particular, a small capillary scale located near the fusion fronts is taken into account; its size turns out to significantly influence the flow structure. For the first time, multiple solutions are exhibited in zero gravity.

  4. Quantitative study of Xanthosoma violaceum leaf surfaces using RIMAPS and variogram techniques.

    PubMed

    Favret, Eduardo A; Fuentes, Néstor O; Molina, Ana M

    2006-08-01

    Two new imaging techniques, rotated image with maximum averaged power spectrum (RIMAPS) and the variogram, are presented for the study and description of leaf surfaces. Xanthosoma violaceum was analyzed to illustrate the characteristics of both techniques, each of which produces a quantitative description of leaf surface topography. RIMAPS combines rotation of digitized images with the Fourier transform, and is used to detect pattern orientation and characteristics of surface topography. The variogram relates the mathematical variance of a surface to the area of the sample window observed, giving the typical scale lengths of the surface patterns. RIMAPS detects the morphological variations of the surface topography pattern between fresh and dried (herbarium) samples of the leaf. The variogram method finds the characteristic dimensions of the leaf microstructure, i.e., cell length, papillae diameter, etc., showing that there are no significant differences between dry and fresh samples. The results show the robustness of RIMAPS and variogram analyses for detecting, distinguishing, and characterizing leaf surfaces, as well as for giving scale lengths. Both techniques are tools for the biologist to study variations of the leaf surface when different patterns are present. The use of RIMAPS and the variogram opens a wide spectrum of possibilities by providing a systematic, quantitative description of the leaf surface topography.
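
    A rough numerical analogue of the variogram idea, relating the variance of surface heights to the size of the sample window; the synthetic "leaf surface" and window sizes are invented, and the published variogram technique may differ in detail.

      import numpy as np

      def variogram_curve(surface, window_sizes):
          """Mean height variance as a function of square window size.
          The size where the curve levels off suggests a characteristic
          scale length of the surface pattern."""
          results = []
          for w in window_sizes:
              variances = []
              for i in range(0, surface.shape[0] - w, w):
                  for j in range(0, surface.shape[1] - w, w):
                      variances.append(surface[i:i + w, j:j + w].var())
              results.append(np.mean(variances))
          return np.array(results)

      # Synthetic periodic surface with ~16-pixel cells plus noise.
      y, x = np.mgrid[0:256, 0:256]
      surface = np.sin(2 * np.pi * x / 16) * np.sin(2 * np.pi * y / 16)
      surface += 0.1 * np.random.default_rng(1).normal(size=surface.shape)

      print(variogram_curve(surface, window_sizes=[2, 4, 8, 16, 32, 64]))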

  5. Towards AI-powered personalization in MOOC learning

    NASA Astrophysics Data System (ADS)

    Yu, Han; Miao, Chunyan; Leung, Cyril; White, Timothy John

    2017-12-01

    Massive Open Online Courses (MOOCs) represent a form of large-scale learning that is changing the landscape of higher education. In this paper, we offer a perspective on how advances in artificial intelligence (AI) may enhance learning and research on MOOCs. We focus on emerging AI techniques including how knowledge representation tools can enable students to adjust the sequence of learning to fit their own needs; how optimization techniques can efficiently match community teaching assistants to MOOC mediation tasks to offer personal attention to learners; and how virtual learning companions with human traits such as curiosity and emotions can enhance learning experience on a large scale. These new capabilities will also bring opportunities for educational researchers to analyse students' learning skills and uncover points along learning paths where students with different backgrounds may require different help. Ethical considerations related to the application of AI in MOOC education research are also discussed.

  6. Estimating Mass of Inflatable Aerodynamic Decelerators Using Dimensionless Parameters

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.

    2011-01-01

    This paper describes a technique for estimating mass for inflatable aerodynamic decelerators. The technique uses dimensional analysis to identify a set of dimensionless parameters for inflation pressure, mass of inflation gas, and mass of flexible material. The dimensionless parameters enable scaling of an inflatable concept with geometry parameters (e.g., diameter), environmental conditions (e.g., dynamic pressure), inflation gas properties (e.g., molecular mass), and mass growth allowance. This technique is applicable for attached (e.g., tension cone, hypercone, and stacked toroid) and trailing inflatable aerodynamic decelerators. The technique uses simple engineering approximations that were developed by NASA in the 1960s and 1970s, as well as some recent important developments. The NASA Mars Entry and Descent Landing System Analysis (EDL-SA) project used this technique to estimate the masses of the inflatable concepts that were used in the analysis. The EDL-SA results compared well with two independent sets of high-fidelity finite element analyses.
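
    One ingredient of such a mass estimate can be sketched from first principles: the inflation gas mass follows from the ideal gas law given pressure, volume, molar mass, and temperature. The numbers below are illustrative assumptions, not EDL-SA values.

      # Inflation gas mass from the ideal gas law, m = p V M / (R T).
      R = 8.314            # J/(mol K), universal gas constant

      def inflation_gas_mass(pressure_pa, volume_m3, molar_mass_kg, temp_k):
          return pressure_pa * volume_m3 * molar_mass_kg / (R * temp_k)

      # Example: torus inflated with nitrogen (M = 0.028 kg/mol) at 5 kPa, 250 K.
      print(inflation_gas_mass(5.0e3, 12.0, 0.028, 250.0), "kg")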

  7. Characterization of the ionosphere above the Murchison Radio Observatory using the Murchison Widefield Array

    NASA Astrophysics Data System (ADS)

    Jordan, C. H.; Murray, S.; Trott, C. M.; Wayth, R. B.; Mitchell, D. A.; Rahimi, M.; Pindor, B.; Procopio, P.; Morgan, J.

    2017-11-01

    We detail new techniques for analysing ionospheric activity, using Epoch of Reionization data sets obtained with the Murchison Widefield Array, calibrated by the `real-time system' (RTS). Using the high spatial- and temporal-resolution information of the ionosphere provided by the RTS calibration solutions over 19 nights of observing, we find four distinct types of ionospheric activity, and have developed a metric to provide an `at a glance' value for data quality under differing ionospheric conditions. For each ionospheric type, we analyse variations of this metric as we reduce the number of pierce points, revealing that a modest number of pierce points is required to identify the intensity of ionospheric activity; it is possible to calibrate in real-time, providing continuous information of the phase screen. We also analyse temporal correlations, determine diffractive scales, examine the relative fractions of time occupied by various types of ionospheric activity and detail a method to reconstruct the total electron content responsible for the ionospheric data we observe. These techniques have been developed to be instrument agnostic, useful for application on LOw Frequency ARray and Square Kilometre Array-Low.

  8. Convective dynamics - Panel report

    NASA Technical Reports Server (NTRS)

    Carbone, Richard; Foote, G. Brant; Moncrieff, Mitch; Gal-Chen, Tzvi; Cotton, William; Heymsfield, Gerald

    1990-01-01

    Aspects of highly organized forms of deep convection at midlatitudes are reviewed. Past emphasis in field work and cloud modeling has been directed toward severe weather, as evidenced by research on tornadoes, hail, and strong surface winds. A number of specific issues concerning future thrusts, tactics, and techniques in convective dynamics are presented. These subjects include: convective modes and parameterization, global structure and scale interaction, convective energetics, transport studies, anvils and scale interaction, and scale selection. Also discussed are analysis workshops, four-dimensional data assimilation, matching models with observations, network Doppler analyses, mesoscale variability, and high-resolution/high-performance Doppler. It is also noted that classical surface measurements and soundings, flight-level research aircraft data, passive satellite data, and traditional photogrammetric studies are examples of datasets that require assimilation and integration.

  9. Readout circuit with novel background suppression for long wavelength infrared focal plane arrays

    NASA Astrophysics Data System (ADS)

    Xie, L.; Xia, X. J.; Zhou, Y. F.; Wen, Y.; Sun, W. F.; Shi, L. X.

    2011-02-01

    In this article, a novel pixel readout circuit using a switched-capacitor integrator mode background suppression technique is presented for long wavelength infrared focal plane arrays. This circuit can improve dynamic range and signal-to-noise ratio by suppressing the large background current during integration. Compared with other background suppression techniques, the new background suppression technique is less sensitive to the process mismatch and has no additional shot noise. The proposed circuit is theoretically analysed and simulated while taking into account the non-ideal characteristics. The result shows that the background suppression non-uniformity is ultra-low even for a large process mismatch. The background suppression non-uniformity of the proposed circuit can also remain very small with technology scaling.

  10. Evaluating uncertainty in predicting spatially variable representative elementary scales in fractured aquifers, with application to Turkey Creek Basin, Colorado

    USGS Publications Warehouse

    Wellman, Tristan P.; Poeter, Eileen P.

    2006-01-01

    Computational limitations and sparse field data often mandate use of continuum representation for modeling hydrologic processes in large‐scale fractured aquifers. Selecting appropriate element size is of primary importance because continuum approximation is not valid for all scales. The traditional approach is to select elements by identifying a single representative elementary scale (RES) for the region of interest. Recent advances indicate RES may be spatially variable, prompting unanswered questions regarding the ability of sparse data to spatially resolve continuum equivalents in fractured aquifers. We address this uncertainty of estimating RES using two techniques. In one technique we employ data‐conditioned realizations generated by sequential Gaussian simulation. For the other we develop a new approach using conditioned random walks and nonparametric bootstrapping (CRWN). We evaluate the effectiveness of each method under three fracture densities, three data sets, and two groups of RES analysis parameters. In sum, 18 separate RES analyses are evaluated, which indicate RES magnitudes may be reasonably bounded using uncertainty analysis, even for limited data sets and complex fracture structure. In addition, we conduct a field study to estimate RES magnitudes and resulting uncertainty for Turkey Creek Basin, a crystalline fractured rock aquifer located 30 km southwest of Denver, Colorado. Analyses indicate RES does not correlate to rock type or local relief in several instances but is generally lower within incised creek valleys and higher along mountain fronts. Results of this study suggest that (1) CRWN is an effective and computationally efficient method to estimate uncertainty, (2) RES predictions are well constrained using uncertainty analysis, and (3) for aquifers such as Turkey Creek Basin, spatial variability of RES is significant and complex.
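
    The nonparametric bootstrap component of such an uncertainty analysis can be sketched compactly; the percentile interval below is computed on hypothetical local RES estimates and omits the conditioned-random-walk sampling that CRWN adds.

      import numpy as np

      def bootstrap_ci(sample, stat=np.mean, n_boot=10000, alpha=0.05, seed=0):
          """Percentile bootstrap confidence interval for a statistic:
          resample with replacement, recompute, take quantiles."""
          rng = np.random.default_rng(seed)
          n = len(sample)
          boot = np.array([stat(rng.choice(sample, size=n, replace=True))
                           for _ in range(n_boot)])
          return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

      # Hypothetical local RES estimates (element scales, in metres).
      res_estimates = np.array([42., 55., 38., 61., 47., 52., 44., 58., 49., 41.])
      print(bootstrap_ci(res_estimates))  # 95% CI on the mean RES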

  11. Disentangling WTP per QALY data: different analytical approaches, different answers.

    PubMed

    Gyrd-Hansen, Dorte; Kjaer, Trine

    2012-03-01

    A large random sample of the Danish general population was asked to value health improvements by way of both the time trade-off elicitation technique and willingness-to-pay (WTP) using contingent valuation methods. The data demonstrate a high degree of heterogeneity across respondents in their relative valuations on the two scales. This has implications for data analysis. We show that the estimates of WTP per QALY are highly sensitive to the analytical strategy. For both open-ended and dichotomous choice data we demonstrate that choice of aggregated approach (ratios of means) or disaggregated approach (means of ratios) affects estimates markedly as does the interpretation of the constant term (which allows for disproportionality across the two scales) in the regression analyses. We propose that future research should focus on why some respondents are unwilling to trade on the time trade-off scale, on how to interpret the constant value in the regression analyses, and on how best to capture the heterogeneity in preference structures when applying mixed multinomial logit. Copyright © 2011 John Wiley & Sons, Ltd.
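
    The aggregated-versus-disaggregated distinction is easy to demonstrate numerically: with hypothetical respondent data, the ratio of means and the mean of ratios generally differ, which is one reason the WTP-per-QALY estimates diverge.

      import numpy as np

      # Toy illustration with invented respondents, not the Danish data.
      wtp  = np.array([500., 2000., 150., 4000., 800.])    # willingness to pay
      qaly = np.array([0.05, 0.10, 0.01, 0.08, 0.04])      # QALY gains (TTO)

      ratio_of_means = wtp.mean() / qaly.mean()   # aggregated approach
      mean_of_ratios = (wtp / qaly).mean()        # disaggregated approach
      print(ratio_of_means, mean_of_ratios)       # generally not equal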

  12. Development of the scale of hygiene behaviors for nursing students.

    PubMed

    Ipek Coban, Gulay; Bilgin, Sonay

    2015-08-21

    There is a need for an appropriate instrument to measure the hygiene behaviors of nursing students. This study was carried out to develop a Hygiene Behavior Scale (HBS). The population of the study is composed of students of a nursing department; a total of 416 participants were included. The students in the sampling group were asked to write a composition containing their feelings and thoughts about hygiene. These compositions were analysed and 87 items describing positive and negative behaviors were determined. The items were presented for expert opinion and, after the necessary editing, reliability and validity analyses were conducted. The resulting HBS consists of 25 items across three domains: personal hygiene, handwashing technique, and food-related hygiene. The final model in confirmatory factor analysis showed that the 25-item HBS indicated a good fit. The value of Cronbach's α for the total scale was 0.90. The HBS is thus a valid, reliable, and sufficient measuring instrument for determining the hygiene behaviors of nursing students.
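
    For reference, Cronbach's alpha as reported above can be computed directly from an item score matrix; the sketch below uses hypothetical Likert responses, not the HBS data.

      import numpy as np

      def cronbach_alpha(items):
          """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

      # Hypothetical 5-point responses (6 students x 4 items), not HBS data.
      scores = np.array([[4, 5, 4, 5],
                         [3, 3, 4, 3],
                         [5, 5, 5, 4],
                         [2, 3, 2, 3],
                         [4, 4, 5, 4],
                         [3, 2, 3, 2]])
      print(round(cronbach_alpha(scores), 2))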

  13. New Ground Truth Capability from InSAR Time Series Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buckley, S; Vincent, P; Yang, D

    2005-07-13

    We demonstrate that next-generation interferometric synthetic aperture radar (InSAR) processing techniques applied to existing data provide rich InSAR ground truth content for exploitation in seismic source identification. InSAR time series analyses utilize tens of interferograms and can be implemented in different ways. In one such approach, conventional InSAR displacement maps are inverted in a final post-processing step. Alternatively, computationally intensive data reduction can be performed with specialized InSAR processing algorithms. The typical final result of these approaches is a synthesized set of cumulative displacement maps. Examples from our recent work demonstrate that these InSAR processing techniques can provide appealing new ground truth capabilities. We construct movies showing the areal and temporal evolution of deformation associated with previous nuclear tests. In other analyses, we extract time histories of centimeter-scale surface displacement associated with tunneling. The potential exists to identify millimeter per year surface movements when sufficient data exist for InSAR techniques to isolate and remove phase signatures associated with digital elevation model errors and the atmosphere.
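
    The inversion step described above can be sketched at a single pixel: each interferogram constrains the displacement difference between two acquisition dates, and stacking many interferograms yields a least-squares system for the cumulative displacement history. Dates and values below are synthetic.

      import numpy as np

      pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]   # (earlier, later) dates
      obs = np.array([0.4, 0.3, 0.8, 0.5, 0.7])          # cm, per interferogram

      n_dates = 4
      A = np.zeros((len(pairs), n_dates - 1))            # date 0 is the reference
      for row, (i, j) in enumerate(pairs):
          if j > 0:
              A[row, j - 1] += 1.0                       # displacement at later date
          if i > 0:
              A[row, i - 1] -= 1.0                       # minus earlier date

      disp, *_ = np.linalg.lstsq(A, obs, rcond=None)
      print("cumulative displacement (cm) at dates 1..3:", disp)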

  14. Literacy Skills in Children With Cochlear Implants: The Importance of Early Oral Language and Joint Storybook Reading

    PubMed Central

    Ambrose, Sophie E.; Eisenberg, Laurie S.

    2009-01-01

    The goal of this study was to longitudinally examine relationships between early factors (child and mother) that may influence children's phonological awareness and reading skills 3 years later in a group of young children with cochlear implants (N = 16). Mothers and children were videotaped during two storybook interactions, and children's oral language skills were assessed using the “Reynell Developmental Language Scales, third edition.” Three years later, phonological awareness, reading skills, and language skills were assessed using the “Phonological Awareness Test,” the “Woodcock–Johnson-III Diagnostic Reading Battery,” and the “Oral Written Language Scales.” Variables included in the data analyses were child (age, age at implant, and language skills) and mother factors (facilitative language techniques) and children's phonological awareness and reading standard scores. Results indicate that children's early expressive oral language skills and mothers’ use of a higher level facilitative language technique (open-ended question) during storybook reading, although related, each contributed uniquely to children's literacy skills. Individual analyses revealed that the children with expressive standard scores below 70 at Time 1 also performed below average (<85) on phonological awareness and total reading tasks 3 years later. Guidelines for professionals are provided to support literacy skills in young children with cochlear implants. PMID:18417463

  15. Direct magnitude estimates of speech intelligibility in dysarthria: effects of a chosen standard.

    PubMed

    Weismer, Gary; Laures, Jacqueline S

    2002-06-01

    Direct magnitude estimation (DME) has been used frequently as a perceptual scaling technique in studies of the speech intelligibility of persons with speech disorders. The technique is typically used with a standard, or reference stimulus, chosen as a good exemplar of "midrange" intelligibility. In several published studies, the standard has been chosen subjectively, usually on the basis of the expertise of the investigators. The current experiment demonstrates that a fixed set of sentence-level utterances, obtained from 4 individuals with dysarthria (2 with Parkinson disease, 2 with traumatic brain injury) as well as 3 neurologically normal speakers, is scaled differently depending on the identity of the standard. Four different standards were used in the main experiment, three of which were judged qualitatively in two independent evaluations to be good exemplars of midrange intelligibility. Acoustic analyses did not reveal obvious differences between these four standards but suggested that the standard with the worst-scaled intelligibility had much poorer voice source characteristics compared to the other three standards. Results are discussed in terms of possible standardization of midrange intelligibility exemplars for DME experiments.

  16. Characterizing scale- and location-dependent correlation of water retention parameters with soil physical properties using wavelet techniques.

    PubMed

    Shu, Qiaosheng; Liu, Zuoxin; Si, Bingcheng

    2008-01-01

    Understanding the correlation between soil hydraulic parameters and soil physical properties is a prerequisite for predicting soil hydraulic properties from soil physical properties. The objective of this study was to examine the scale- and location-dependent correlation between two water retention parameters (alpha and n) in the van Genuchten (1980) function and soil physical properties (sand content, bulk density [Bd], and organic carbon content) using wavelet techniques. Soil samples were collected along a transect in Fuxin, China. Soil water retention curves were measured, and the van Genuchten parameters were obtained through curve fitting. Wavelet coherency analysis was used to elucidate the location- and scale-dependent relationships between these parameters and soil physical properties. Results showed that the wavelet coherence between alpha and sand content was significantly different from red noise at small scales (8-20 m) over distances from 30 to 470 m. Their wavelet phase spectrum was predominantly out of phase, indicating negative correlation between these two variables. The strong negative correlation between alpha and Bd existed mainly at medium scales (30-80 m). However, parameter n had a strong positive correlation only with Bd, at scales between 20 and 80 m. Neither of the two retention parameters had significant wavelet coherency with organic carbon content. These results suggest that location-dependent scale analyses are necessary to improve predictions of soil water retention characteristics.

  17. Large-scale inverse model analyses employing fast randomized data reduction

    NASA Astrophysics Data System (ADS)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
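
    A toy analogue of the sketching idea, assuming a Gaussian sketch and the plain sketch-and-solve variant rather than the RGA/PCGA machinery: the observation dimension is compressed before solving, so cost scales with the sketch size.

      import numpy as np

      def sketched_lstsq(A, b, sketch_rows, seed=0):
          """Solve min ||Ax - b|| after compressing the observations with a
          random Gaussian sketching matrix S (sketch-and-solve)."""
          rng = np.random.default_rng(seed)
          S = rng.normal(size=(sketch_rows, A.shape[0])) / np.sqrt(sketch_rows)
          x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
          return x

      # Synthetic problem: 20000 observations, 20 parameters.
      rng = np.random.default_rng(1)
      A = rng.normal(size=(20_000, 20))
      x_true = rng.normal(size=20)
      b = A @ x_true + 0.01 * rng.normal(size=20_000)

      x_hat = sketched_lstsq(A, b, sketch_rows=200)
      print(np.max(np.abs(x_hat - x_true)))   # should be small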

  18. Photorefractive detection of tagged photons in ultrasound modulated optical tomography of thick biological tissues.

    PubMed

    Ramaz, F; Forget, B; Atlan, M; Boccara, A C; Gross, M; Delaye, P; Roosen, G

    2004-11-01

    We present a new and simple method to obtain ultrasound modulated optical tomography images in thick biological tissues with the use of a photorefractive crystal. The technique offers the advantage of spatially adapting the output speckle wavefront by analysing the signal diffracted by the interference pattern between this output field and a reference beam, recorded inside the photorefractive crystal. Averaging out due to random phases of the speckle grains vanishes, and we can use a fast single photodetector to measure the ultrasound modulated optical contrast. This technique offers a promising way to make direct measurements within the decorrelation time scale of living tissues.

  19. Excitation of high wavenumber fluctuations by externally-imposed helical fields in edge pedestal plasmas

    NASA Astrophysics Data System (ADS)

    Singh, R.; Kim, J.-H.; Jhang, Hogun; Das, S.

    2018-03-01

    Two-step mode coupling analyses for nonlinear excitation of the ballooning mode (BM) in pedestal plasmas by external helical magnetic field perturbations [resonant magnetic perturbations (RMPs)] are presented. This technique allows calculation of the effect of the higher-harmonic sidebands generated by the interaction of the long-scale RMP pump and the BM. It is shown that RMP field perturbations can modify the BM growth rate and frequency through nonlinear Reynolds stress and magnetic stress. In particular, both stresses can efficiently excite high-wavenumber BM fluctuations which, in turn, can enhance transport in the pedestal. Another notable feature of this analysis is the existence of a short-scale (high-k_y) nonlinear instability on the Alfven time scale near the ideal BM threshold boundary.

  20. Characterizing the human postural control system using detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Teresa Blázquez, M.; Anguiano, Marta; de Saavedra, Fernando Arias; Lallena, Antonio M.; Carpena, Pedro

    2010-01-01

    Detrended fluctuation analysis is used to study the behaviour of the time series of the position of the center of pressure, output from the activity of a human postural control system. The results suggest that these trajectories present a crossover in their scaling properties from persistent (for high frequencies, short-range time scale) to anti-persistent (for low frequencies, long-range time scale) behaviours. The values of the scaling exponent found for the persistent parts of the trajectories are very similar for all the cases analysed. The similarity of the results obtained for the measurements done with both eyes open and both eyes closed indicate either that the visual system may be disregarded by the postural control system, while maintaining quiet standing, or that the control mechanisms associated with each type of information (visual, vestibular and somatosensory) cannot be disentangled with this technique.
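
    A compact DFA implementation, assuming first-order polynomial detrending and non-overlapping windows; for white noise the fitted exponent should come out near 0.5, with persistent series above and anti-persistent series below that value.

      import numpy as np

      def dfa(x, scales):
          """Detrended fluctuation analysis: F(n) for each window size n.
          The scaling exponent alpha is the slope of log F(n) vs log n."""
          y = np.cumsum(x - np.mean(x))            # integrated profile
          F = []
          for n in scales:
              n_win = len(y) // n
              rms = []
              for k in range(n_win):
                  seg = y[k * n:(k + 1) * n]
                  t = np.arange(n)
                  coef = np.polyfit(t, seg, 1)      # local linear detrending
                  rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
              F.append(np.sqrt(np.mean(rms)))
          return np.array(F)

      # White noise should give alpha close to 0.5.
      x = np.random.default_rng(0).normal(size=2**13)
      scales = np.array([16, 32, 64, 128, 256, 512])
      alpha = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)[0]
      print(round(alpha, 2))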

  1. Wavelet Analyses of Oil Prices, USD Variations and Impact on Logistics

    NASA Astrophysics Data System (ADS)

    Melek, M.; Tokgozlu, A.; Aslan, Z.

    2009-07-01

    This paper concerns temporal variations of historical oil prices and of the US Dollar and Euro in Turkey. Daily data from OECD and Central Bank of Turkey records, beginning in 1946, have been considered. 1D continuous wavelet and wavelet packet analysis techniques have been applied to the data. Wavelet techniques help to detect abrupt changes and increasing or decreasing trends in the data. Estimates of the variables are obtained using linear regression techniques. The results of this study have been compared with respect to small- and large-scale effects. Truck transportation costs show a variation similar to that of fuel prices. The second part of the paper estimates imports, exports, costs, total number of vehicles, and their annual variations by considering the temporal variation of oil prices and the Dollar exchange rate in Turkey. Wavelet techniques offer a user-friendly methodology for interpreting local effects on the increasing trends of import and export data.
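
    A minimal sketch of such a discrete wavelet decomposition, assuming the PyWavelets package and a synthetic price series (the series, wavelet choice, and level are illustrative assumptions): the detail coefficients localise the abrupt change in time.

      import numpy as np
      import pywt  # PyWavelets

      rng = np.random.default_rng(0)
      t = np.arange(2048)
      price = 20 + 0.01 * t + rng.normal(scale=0.5, size=t.size)
      price[1200:] += 5.0                       # an abrupt jump in the series

      coeffs = pywt.wavedec(price, 'db4', level=5)
      approx, details = coeffs[0], coeffs[1:]
      # The jump appears as large-magnitude detail coefficients at the
      # corresponding (downsampled) times; the approximation tracks the trend.
      print([np.abs(d).max() for d in details])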

  2. PWC - PAIRWISE COMPARISON SOFTWARE: SOFTWARE PROGRAM FOR PAIRWISE COMPARISON TASK FOR PSYCHOMETRIC SCALING AND COGNITIVE RESEARCH

    NASA Technical Reports Server (NTRS)

    Ricks, W. R.

    1994-01-01

    PWC is used for pair-wise comparisons in both psychometric scaling techniques and cognitive research. The cognitive tasks and processes of a human operator of automated systems are now prominent considerations when defining system requirements. Recent developments in cognitive research have emphasized the potential utility of psychometric scaling techniques, such as multidimensional scaling, for representing human knowledge and cognitive processing structures. Such techniques involve collecting measurements of stimulus-relatedness from human observers. When data are analyzed using this scaling approach, an n-dimensional representation of the stimuli is produced. This resulting representation is said to describe the subject's cognitive or perceptual view of the stimuli. PWC applies one of the many techniques commonly used to acquire the data necessary for these types of analyses: pair-wise comparisons. PWC administers the task, collects the data from the test subject, and formats the data for analysis. It therefore addresses many of the limitations of the traditional "pen-and-paper" methods. By automating the data collection process, subjects are prevented from going back to check previous responses, the possibility of erroneous data transfer is eliminated, and the burden of the administration and taking of the test is eased. By using randomization, PWC ensures that subjects see the stimuli pairs presented in random order, and that each subject sees pairs in a different random order. PWC is written in Turbo Pascal v6.0 for IBM PC compatible computers running MS-DOS. The program has also been successfully compiled with Turbo Pascal v7.0. A sample executable is provided. PWC requires 30K of RAM for execution. The standard distribution medium for this program is a 5.25 inch 360K MS-DOS format diskette. Two electronic versions of the documentation are included on the diskette: one in ASCII format and one in MS Word for Windows format. PWC was developed in 1993.
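
    The core randomisation PWC performs can be sketched in a few lines of Python (a modern stand-in, not the original Turbo Pascal source): every unordered pair is presented once, pair order is randomised per subject, and so is presentation side.

      import itertools
      import random

      def pairwise_trials(stimuli, seed=None):
          """All unordered stimulus pairs in a fresh random order per subject,
          with presentation side (left/right) also randomised."""
          rng = random.Random(seed)
          pairs = [list(p) for p in itertools.combinations(stimuli, 2)]
          rng.shuffle(pairs)                 # random pair order per subject
          for pair in pairs:
              rng.shuffle(pair)              # random presentation side
          return pairs

      print(pairwise_trials(["A", "B", "C", "D"], seed=42))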

  3. A review of approaches to identifying patient phenotype cohorts using electronic health records

    PubMed Central

    Shivade, Chaitanya; Raghavan, Preethi; Fosler-Lussier, Eric; Embi, Peter J; Elhadad, Noemie; Johnson, Stephen B; Lai, Albert M

    2014-01-01

    Objective: To summarize literature describing approaches aimed at automatically identifying patients with a common phenotype. Materials and methods: We performed a review of studies describing systems or reporting techniques developed for identifying cohorts of patients with specific phenotypes. Every full text article published in (1) Journal of American Medical Informatics Association, (2) Journal of Biomedical Informatics, (3) Proceedings of the Annual American Medical Informatics Association Symposium, and (4) Proceedings of Clinical Research Informatics Conference within the past 3 years was assessed for inclusion in the review. Only articles using automated techniques were included. Results: Ninety-seven articles met our inclusion criteria. Forty-six used natural language processing (NLP)-based techniques, 24 described rule-based systems, 41 used statistical analyses, data mining, or machine learning techniques, while 22 described hybrid systems. Nine articles described the architecture of large-scale systems developed for determining cohort eligibility of patients. Discussion: We observe that there is a rise in the number of studies associated with cohort identification using electronic medical records. Statistical analyses or machine learning, followed by NLP techniques, are gaining popularity over the years in comparison with rule-based systems. Conclusions: There are a variety of approaches for classifying patients into a particular phenotype. Different techniques and data sources are used, and good performance is reported on datasets at respective institutions. However, no system makes comprehensive use of electronic medical records addressing all of their known weaknesses. PMID:24201027

  4. Detrended Cross Correlation Analysis: a new way to figure out the underlying cause of global warming

    NASA Astrophysics Data System (ADS)

    Hazra, S.; Bera, S. K.

    2016-12-01

    Analysing non-stationary time series is a challenging task in earth science, seismology, solar physics, climate, biology, finance, etc. In most cases, external noise, such as oscillations and high- or low-frequency noise at different scales, leads to erroneous results. Many statistical methods have been proposed to find the correlation between two non-stationary time series. N. Scafetta and B. J. West, Phys. Rev. Lett. 90, 248701 (2003), reported a strong relationship between solar flare intermittency (SFI) and global temperature anomalies (GTA) using diffusion entropy analysis. It has recently been shown that detrended cross-correlation analysis (DCCA) is a better technique for removing the effects of unwanted signals as well as local and periodic trends; it is thus more suitable for finding the correlation between two non-stationary time series, and yields a correlation coefficient at each scale. Motivated by this, we have applied the DCCA technique to find the relationship between SFI and GTA. We have also applied it to the relationships between GTA and carbon dioxide density, and between GTA and methane density, in the Earth's atmosphere. In future work we will examine the relationships between GTA and atmospheric aerosols, water vapour density, and ozone depletion. This analysis will help us better understand the causes of global warming.
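
    A compact sketch of the DCCA cross-correlation coefficient (Zebende's rho_DCCA), assuming linear detrending and non-overlapping windows; two noisy copies of a shared signal should give a coefficient near 1 at sufficiently large scales.

      import numpy as np

      def dcca_coefficient(x, y, n):
          """rho_DCCA at window size n: covariance of locally detrended
          profiles, normalised by the two DFA fluctuation functions."""
          px, py = np.cumsum(x - x.mean()), np.cumsum(y - y.mean())
          t = np.arange(n)
          f2x = f2y = f2xy = 0.0
          for k in range(len(px) // n):
              sx, sy = px[k * n:(k + 1) * n], py[k * n:(k + 1) * n]
              rx = sx - np.polyval(np.polyfit(t, sx, 1), t)
              ry = sy - np.polyval(np.polyfit(t, sy, 1), t)
              f2x += np.mean(rx ** 2)
              f2y += np.mean(ry ** 2)
              f2xy += np.mean(rx * ry)
          return f2xy / np.sqrt(f2x * f2y)

      # Two noisy copies of a common random walk: rho should be near 1.
      rng = np.random.default_rng(0)
      s = np.cumsum(rng.normal(size=4096))
      x = s + rng.normal(scale=2.0, size=s.size)
      y = s + rng.normal(scale=2.0, size=s.size)
      print(round(dcca_coefficient(x, y, 64), 2))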

  5. Correlated Amino Acid and Mineralogical Analyses of Milligram and Submilligram Samples of Carbonaceous Chondrite Lonewolf Nunataks 94101

    NASA Technical Reports Server (NTRS)

    Burton, S.; Berger, E. L.; Locke, D. R.; Lewis, E. K.

    2018-01-01

    Amino acids, the building blocks of proteins, have been found to be indigenous in the eight carbonaceous chondrite groups. The abundances and the structural, enantiomeric and isotopic compositions of amino acids differ significantly among meteorites of different groups and petrologic types, suggesting that parent-body conditions (thermal or aqueous alteration), mineralogy, and the preservation of amino acids are linked. Previously, elucidating specific relationships between amino acids and mineralogy was not possible because the samples analyzed for amino acids were much larger than the scale at which petrologic heterogeneity is observed (sub-mm-scale differences corresponding to sub-mg samples); for example, Pizzarello and coworkers measured amino acid abundances and performed X-ray diffraction (XRD) on several samples of the Murchison meteorite, but these analyses were performed on bulk samples of 500 mg or larger. Advances in the sensitivity of amino acid measurements by liquid chromatography with fluorescence detection/time-of-flight mass spectrometry (LC-FD/TOF-MS), and the application of techniques such as high resolution X-ray diffraction (HR-XRD) and scanning electron microscopy (SEM) with energy dispersive spectroscopy (EDS) for mineralogical characterization, have now enabled coordinated analyses at the scale at which mineral heterogeneity is observed. In this work, we have analyzed samples of the Lonewolf Nunataks (LON) 94101 CM2 carbonaceous chondrite. We are investigating the links between parent-body processes, mineralogical context, and amino acid compositions in meteorites using bulk samples (approx. 20 mg) and mineral separates (≤3 mg) from several spatial locations within our allocated samples. Preliminary results of these analyses are presented here.

  6. 3-D Printing as a Tool to Investigate the Effects of Changes in Rock Microstructures on Permeability

    NASA Astrophysics Data System (ADS)

    Head, D. A.; Vanorio, T.

    2016-12-01

    Rocks are naturally heterogeneous; two rock samples with identical bulk properties can vary widely in microstructure. Understanding the evolutionary trends of rock properties requires the ability to connect time-lapse measurements of properties at different scales: the macro-scale used in laboratory and field analyses, capturing bulk-scale changes, and the micro-scale used in imaging and digital techniques, capturing changes to the pore space. However, measuring those properties at different scales is very challenging, and sometimes impossible. The advent of modern 3D printing has provided an unprecedented opportunity to link those scales by combining the strengths of digital and experimental rock physics. To determine the feasibility of this technique, we characterized the resolution capabilities of two different 3D printers. To calibrate our digital models against our printed models, we created a sample with an analytically solvable permeability, which allowed us to directly compare analytic calculation, numerical simulation, and laboratory measurement of the permeability of the same sample. Next we took a CT-scanned model of a natural carbonate pore space, then iteratively manipulated it digitally, 3D printed it, and measured its flow properties in the laboratory. This approach allowed us to access multiple scales digitally and experimentally, to test hypotheses about how changes in rock microstructure due to compaction and dissolution affect bulk transport properties, and to connect laboratory measurements of porosity and permeability to quantities that are traditionally impossible to measure in the laboratory, such as changes in surface area and tortuosity. As 3D printing technology continues to advance, we expect this technique to contribute to our ability to characterize the properties of remote and/or delicate samples, as well as to test the impact of microstructural alteration on bulk physical properties in the lab in a highly consistent, repeatable manner.
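
    The analytically solvable calibration case can be illustrated with a bundle of straight cylindrical tubes, whose Hagen-Poiseuille permeability is k = N*pi*r^4/(8*A); the tube count, radius, and cross-section below are illustrative, not those of the printed sample.

      import numpy as np

      # A block of cross-section A pierced by N straight cylindrical tubes of
      # radius r: equating total Poiseuille flow to Darcy's law gives
      # k = N * pi * r**4 / (8 * A).
      N = 100                      # number of tubes (assumed)
      r = 100e-6                   # tube radius, m (assumed)
      A = 0.01 ** 2                # sample cross-section, m^2 (1 cm x 1 cm)

      k = N * np.pi * r ** 4 / (8 * A)
      print(f"permeability: {k:.3e} m^2  ({k / 9.869e-13:.2f} darcy)")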

  7. Efficiency and optimal size of hospitals: Results of a systematic search

    PubMed Central

    Guglielmo, Annamaria

    2017-01-01

    Background: National Health System managers have been subject in recent years to considerable pressure to increase concentration and allow mergers. This pressure has been justified by a belief that larger hospitals lead to lower average costs and better clinical outcomes through the exploitation of economies of scale. In this context, the opportunity to measure scale efficiency is crucial to address the question of optimal productive size and to manage a fair allocation of resources. Methods and findings: This paper analyses the state of existing research on scale efficiency and the optimal size of the hospital sector. We performed a systematic search of the past 45 years (1969–2014) of research published in peer-reviewed scientific journals recorded by the Social Sciences Citation Index concerning this topic. We classified articles by journal category, research topic, hospital setting, method, and primary data analysis technique. Results showed that most of the studies focussed on the analysis of technical and scale efficiency or on input/output ratios using Data Envelopment Analysis. We also found increasing interest in the effect of possible changes in hospital size on quality of care. Conclusions: Studies analysed in this review showed that economies of scale are present for merging hospitals. Results support the current policy of expanding larger hospitals and restructuring/closing smaller hospitals. In terms of beds, studies report consistent evidence of economies of scale for hospitals with 200–300 beds. Diseconomies of scale can be expected to occur below 200 beds and above 600 beds. PMID:28355255

  8. Modeling and Analysis of Structural Dynamics for a One-Tenth Scale Model NGST Sunshield

    NASA Technical Reports Server (NTRS)

    Johnston, John; Lienard, Sebastien; Brodeur, Steve (Technical Monitor)

    2001-01-01

    New modeling and analysis techniques have been developed for predicting the dynamic behavior of the Next Generation Space Telescope (NGST) sunshield. The sunshield consists of multiple layers of pretensioned, thin-film membranes supported by deployable booms. Modeling the structural dynamic behavior of the sunshield is a challenging aspect of the problem because of the effects of membrane wrinkling. A finite element model of the sunshield was developed using an approximate engineering approach, the cable network method, to account for membrane wrinkling effects. Ground testing of a one-tenth scale model of the NGST sunshield was carried out to provide data for validating the analytical model. A series of analyses was performed to predict the behavior of the sunshield under the ground test conditions: modal analyses predicted the frequencies and mode shapes of the test article, and transient response analyses simulated impulse excitation tests. Comparisons were made between analytical predictions and test measurements of the dynamic behavior of the sunshield. In general, the results show good agreement, with the analytical model correctly predicting the approximate frequencies and mode shapes of the significant structural modes.
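
    The modal analysis step can be sketched as a generalized eigenvalue problem K*phi = omega^2*M*phi; the 3-degree-of-freedom stiffness and mass matrices below are illustrative stand-ins, not the sunshield finite element model.

      import numpy as np
      from scipy.linalg import eigh

      # Toy 3-DOF system (illustrative values, not the sunshield FEM).
      K = np.array([[ 2., -1.,  0.],
                    [-1.,  2., -1.],
                    [ 0., -1.,  1.]]) * 1e4    # stiffness, N/m
      M = np.diag([1.0, 1.0, 0.5])             # mass, kg

      w2, modes = eigh(K, M)                   # generalized eigenproblem
      freqs_hz = np.sqrt(w2) / (2 * np.pi)
      print(freqs_hz)                          # natural frequencies
      print(modes[:, 0])                       # first mode shape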

  9. Three-dimensional hydrogen microscopy using a high-energy proton probe

    NASA Astrophysics Data System (ADS)

    Dollinger, G.; Reichart, P.; Datzmann, G.; Hauptner, A.; Körner, H.-J.

    2003-01-01

    It is a challenge to measure two-dimensional or three-dimensional (3D) hydrogen profiles on a micrometer scale. Quantitative hydrogen analyses at micrometer resolution are demonstrated utilizing proton-proton scattering at a high-energy proton microprobe. The method has more than an order of magnitude better position resolution, and in addition higher sensitivity, than any other technique for 3D hydrogen analysis. This type of hydrogen imaging opens ample room for characterizing microstructured materials, semiconductor devices, and objects in microbiology. The first hydrogen image obtained with a 10 MeV proton microprobe shows the hydrogen distribution of the microcapillary system present in the wing of a mayfly and demonstrates the potential of the method.

  10. An industrial information integration approach to in-orbit spacecraft

    NASA Astrophysics Data System (ADS)

    Du, Xiaoning; Wang, Hong; Du, Yuhao; Xu, Li Da; Chaudhry, Sohail; Bi, Zhuming; Guo, Rong; Huang, Yongxuan; Li, Jisheng

    2017-01-01

    To operate an in-orbit spacecraft, its status has to be monitored autonomously by collecting and analysing real-time data and then detecting abnormalities and malfunctions of system components. To develop an information system for spacecraft state detection, we investigate the feasibility of using ontology-based artificial intelligence in the system development. We propose a new modelling technique based on the semantic web, agents, scenarios, and ontologies. In the modelling, the subjects of the astronautics field are classified, corresponding agents and scenarios are defined, and they are connected by the semantic web to analyse data and detect failures. We introduce the modelling methodologies and the resulting framework of the status detection information system in this paper, and discuss the system components as well as their interactions in detail. The system has been prototyped and tested to illustrate its feasibility and effectiveness. The proposed modelling technique is generic and can be extended and applied to the development of other large-scale and complex information systems.

  11. Process cost and facility considerations in the selection of primary cell culture clarification technology.

    PubMed

    Felo, Michael; Christensen, Brandon; Higgins, John

    2013-01-01

    The bioreactor volume delineating the selection of primary clarification technology is not always easily defined. Development of a commercial scale process for the manufacture of therapeutic proteins requires scale-up from a few liters to thousands of liters. While the separation techniques used for protein purification are largely conserved across scales, the separation techniques for primary cell culture clarification vary with scale. Process models were developed to compare monoclonal antibody production costs using two cell culture clarification technologies. One process model was created for cell culture clarification by disc stack centrifugation with depth filtration. A second process model was created for clarification by multi-stage depth filtration. Analyses were performed to examine the influence of bioreactor volume, product titer, depth filter capacity, and facility utilization on overall operating costs. At bioreactor volumes <1,000 L, clarification using multi-stage depth filtration offers cost savings compared to clarification using centrifugation. For bioreactor volumes >5,000 L, clarification using centrifugation followed by depth filtration offers significant cost savings. For bioreactor volumes of ∼ 2,000 L, clarification costs are similar between depth filtration and centrifugation. At this scale, factors including facility utilization, available capital, ease of process development, implementation timelines, and process performance characterization play an important role in clarification technology selection. In the case study presented, a multi-product facility selected multi-stage depth filtration for cell culture clarification at the 500 and 2,000 L scales of operation. Facility implementation timelines, process development activities, equipment commissioning and validation, scale-up effects, and process robustness are examined. © 2013 American Institute of Chemical Engineers.

  12. QuickRNASeq lifts large-scale RNA-seq data analyses to the next level of automation and interactive visualization.

    PubMed

    Zhao, Shanrong; Xi, Li; Quan, Jie; Xi, Hualin; Zhang, Ying; von Schack, David; Vincent, Michael; Zhang, Baohong

    2016-01-08

    RNA sequencing (RNA-seq), a next-generation sequencing technique for transcriptome profiling, is being increasingly used, in part driven by the decreasing cost of sequencing. Nevertheless, the analysis of the massive amounts of data generated by large-scale RNA-seq remains a challenge. Multiple algorithms pertinent to basic analyses have been developed, and there is an increasing need to automate the use of these tools so as to obtain results in an efficient and user-friendly manner; increased automation and improved visualization will help make the findings of the analyses readily available to experimental scientists. By combining the best open source tools developed for RNA-seq data analyses with the most advanced web 2.0 technologies, we have implemented QuickRNASeq, a pipeline for large-scale RNA-seq data analyses and visualization. The QuickRNASeq workflow consists of three main steps. In Step #1, each individual sample is processed, including mapping RNA-seq reads to a reference genome, counting the numbers of mapped reads, quality control of the aligned reads, and SNP (single nucleotide polymorphism) calling. Step #1 is computationally intensive and can be processed in parallel. In Step #2, the results from individual samples are merged, and an integrated and interactive project report is generated; all analysis results in the report are accessible via a single HTML entry webpage. Step #3 is the data interpretation and presentation step. The rich visualization features implemented here allow end users to interactively explore the results of RNA-seq data analyses and to gain more insights into RNA-seq datasets. In addition, we used a real world dataset to demonstrate the simplicity and efficiency of QuickRNASeq in RNA-seq data analyses and interactive visualizations. The seamless integration of automated capabilities with interactive visualizations in QuickRNASeq is not available in other published RNA-seq pipelines. The high degree of automation and interactivity in QuickRNASeq leads to a substantial reduction in the time and effort required prior to further downstream analyses and interpretation of the findings. QuickRNASeq advances primary RNA-seq data analyses to the next level of automation and is mature for public release and adoption.

  13. Development of an inflatable radiator system. [for space shuttles

    NASA Technical Reports Server (NTRS)

    Leach, J. W.

    1976-01-01

    Conceptual designs of an inflatable radiator system developed for supplying short duration supplementary cooling of space vehicles are described along with parametric trade studies, materials evaluation/selection studies, thermal and structural analyses, and numerous element tests. Fabrication techniques developed in constructing the engineering models and performance data from the model thermal vacuum tests are included. Application of these data to refining the designs of the flight articles and to constructing a full scale prototype radiator is discussed.

  14. Quantitative fractography by digital image processing: NIH Image macro tools for stereo pair analysis and 3-D reconstruction.

    PubMed

    Hein, L R

    2001-10-01

    A set of NIH Image macro programs was developed to make qualitative and quantitative analyses from digital stereo pictures produced by scanning electron microscopes. These tools were designed for image alignment, anaglyph representation, animation, reconstruction of true elevation surfaces, reconstruction of elevation profiles, true-scale elevation mapping and, for the quantitative approach, surface area and roughness calculations. Limitations on processing time, scanning techniques, and programming concepts are also discussed.
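
    The elevation reconstruction underlying such macros rests on standard stereo-pair parallax geometry: for two SEM images separated by a known tilt, a point's height follows from its measured parallax. A minimal sketch of the commonly used relation (variable names are illustrative):

        import math

        def height_from_parallax(parallax, tilt_deg):
            """Common SEM stereo-pair relation: z = p / (2 sin(theta / 2)), where p is
            the parallax in specimen units (already corrected for magnification)
            and theta is the total tilt angle between the two images."""
            theta = math.radians(tilt_deg)
            return parallax / (2.0 * math.sin(theta / 2.0))

        print(height_from_parallax(1.2, 10.0))  # 1.2 um parallax, 10 deg tilt -> ~6.9 um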

  15. Analyses and forecasts of a tornadic supercell outbreak using a 3DVAR system ensemble

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaorong; Yussouf, Nusrat; Gao, Jidong

    2016-05-01

    As part of NOAA's "Warn-On-Forecast" initiative, a convective-scale data assimilation and prediction system was developed using the WRF-ARW model and ARPS 3DVAR data assimilation technique. The system was then evaluated using retrospective short-range ensemble analyses and probabilistic forecasts of the tornadic supercell outbreak event that occurred on 24 May 2011 in Oklahoma, USA. A 36-member multi-physics ensemble system provided the initial and boundary conditions for a 3-km convective-scale ensemble system. Radial velocity and reflectivity observations from four WSR-88Ds were assimilated into the ensemble using the ARPS 3DVAR technique. Five data assimilation and forecast experiments were conducted to evaluate the sensitivity of the system to data assimilation frequencies, in-cloud temperature adjustment schemes, and fixed- and mixed-microphysics ensembles. The results indicated that the experiment with 5-min assimilation frequency quickly built up the storm and produced a more accurate analysis compared with the 10-min assimilation frequency experiment. The predicted vertical vorticity from the moist-adiabatic in-cloud temperature adjustment scheme was larger in magnitude than that from the latent heat scheme. Cycled data assimilation yielded good forecasts, where the ensemble probability of high vertical vorticity matched reasonably well with the observed tornado damage path. Overall, the results of the study suggest that the 3DVAR analysis and forecast system can provide reasonable forecasts of tornadic supercell storms.
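
    For reference, a 3DVAR analysis of this kind minimizes the standard variational cost function, written here in generic form (x_b is the background state, y the observation vector, H the observation operator, and B and R the background- and observation-error covariance matrices):

        J(\mathbf{x}) = \frac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
                      + \frac{1}{2}\big(H(\mathbf{x})-\mathbf{y}\big)^{\mathrm{T}}\mathbf{R}^{-1}\big(H(\mathbf{x})-\mathbf{y}\big)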

  16. Correlation of finite-element structural dynamic analysis with measured free vibration characteristics for a full-scale helicopter fuselage

    NASA Technical Reports Server (NTRS)

    Kenigsberg, I. J.; Dean, M. W.; Malatino, R.

    1974-01-01

    The correlation achieved with each program provides the material for a discussion of modeling techniques developed for general application to finite-element dynamic analyses of helicopter airframes. Included are the selection of static and dynamic degrees of freedom, cockpit structural modeling, and the extent of flexible-frame modeling in the transmission support region and in the vicinity of large cut-outs. The sensitivity of predicted results to these modeling assumptions is discussed. Both the Sikorsky Finite-Element Airframe Vibration Analysis Program (FRAN/Vibration Analysis) and the NASA Structural Analysis Program (NASTRAN) have been correlated with data taken in full-scale vibration tests of a modified CH-53A helicopter.

  17. Implications of cellular models of dopamine neurons for disease

    PubMed Central

    Evans, Rebekah C.; Oster, Andrew M.; Pissadaki, Eleftheria K.; Drion, Guillaume; Kuznetsov, Alexey S.; Gutkin, Boris S.

    2016-01-01

    This review addresses the present state of single-cell models of the firing pattern of midbrain dopamine neurons and the insights that can be gained from these models into the underlying mechanisms for diseases such as Parkinson's, addiction, and schizophrenia. We will explain the analytical technique of separation of time scales and show how it can produce insights into mechanisms using simplified single-compartment models. We also use morphologically realistic multicompartmental models to address spatially heterogeneous aspects of neural signaling and neural metabolism. Separation of time scale analyses are applied to pacemaking, bursting, and depolarization block in dopamine neurons. Differences in subpopulations with respect to metabolic load are addressed using multicompartmental models. PMID:27582295

  18. Nonlinear model-order reduction for compressible flow solvers using the Discrete Empirical Interpolation Method

    NASA Astrophysics Data System (ADS)

    Fosas de Pando, Miguel; Schmid, Peter J.; Sipp, Denis

    2016-11-01

    Nonlinear model reduction for large-scale flows is an essential component in many fluid applications such as flow control, optimization, parameter space exploration and statistical analysis. In this article, we generalize the POD-DEIM method, introduced by Chaturantabut & Sorensen [1], to address nonlocal nonlinearities in the equations without loss of performance or efficiency. The nonlinear terms are represented by nested DEIM-approximations using multiple expansion bases based on the Proper Orthogonal Decomposition. These extensions are imperative, for example, for applications of the POD-DEIM method to large-scale compressible flows. The efficient implementation of the presented model-reduction technique follows our earlier work [2] on linearized and adjoint analyses and takes advantage of the modular structure of our compressible flow solver. The efficacy of the nonlinear model-reduction technique is demonstrated on the flow around an airfoil and its acoustic footprint. We obtain an accurate and robust low-dimensional model that captures the main features of the full flow.
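
    For readers unfamiliar with DEIM, its core is a greedy selection of interpolation indices from a POD basis of nonlinear-term snapshots; the nonlinear term is then evaluated only at those indices. A compact sketch of the standard algorithm (not the authors' nested multi-basis variant):

        import numpy as np

        def deim_indices(U):
            """Greedy DEIM point selection for a POD basis U of shape (n, m)."""
            n, m = U.shape
            idx = [int(np.argmax(np.abs(U[:, 0])))]
            for l in range(1, m):
                # Coefficients matching the first l basis vectors at the chosen rows
                c = np.linalg.solve(U[idx, :l], U[idx, l])
                r = U[:, l] - U[:, :l] @ c      # residual of the next basis vector
                idx.append(int(np.argmax(np.abs(r))))
            return np.array(idx)

        # Usage: approximate a nonlinear term f by sampling it only at the indices
        # idx = deim_indices(U); f_approx = U @ np.linalg.solve(U[idx, :], f[idx])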

  19. New atom probe approaches to studying segregation in nanocrystalline materials.

    PubMed

    Samudrala, S K; Felfer, P J; Araullo-Peters, V J; Cao, Y; Liao, X Z; Cairney, J M

    2013-09-01

    Atom probe is a technique that is highly suited to the study of nanocrystalline materials. It can provide accurate atomic-scale information about the composition of grain boundaries in three dimensions. In this paper we have analysed the microstructure of a nanocrystalline super-duplex stainless steel prepared by high pressure torsion (HPT). Not all of the grain boundaries in this alloy display obvious segregation, making visualisation of the microstructure challenging. In addition, the grain boundaries present in the atom probe data acquired from this alloy have complex shapes that are curved at the scale of the dataset, and the interfacial excess varies considerably over the boundaries; these features make accurate characterisation of the solute distribution challenging with existing analysis techniques. In this paper we present two new data treatment methods that allow the visualisation of boundaries with little or no segregation, the delineation of boundaries for further analysis, and the quantitative analysis of Gibbsian interfacial excess at boundaries, including the capability of excess mapping. Copyright © 2013 Elsevier B.V. All rights reserved.

  20. Therapist self-report of evidence-based practices in usual care for adolescent behavior problems: factor and construct validity.

    PubMed

    Hogue, Aaron; Dauber, Sarah; Henderson, Craig E

    2014-01-01

    This study introduces a therapist-report measure of evidence-based practices for adolescent conduct and substance use problems. The Inventory of Therapy Techniques-Adolescent Behavior Problems (ITT-ABP) is a post-session measure of 27 techniques representing four approaches: cognitive-behavioral therapy (CBT), family therapy (FT), motivational interviewing (MI), and drug counseling (DC). A total of 822 protocols were collected from 32 therapists treating 71 adolescents in six usual care sites. Factor analyses identified three clinically coherent scales with strong internal consistency across the full sample: FT (8 items; α = .79), MI/CBT (8 items; α = .87), and DC (9 items, α = .90). The scales discriminated between therapists working in a family-oriented site versus other sites and showed moderate convergent validity with therapist reports of allegiance and skill in each approach. The ITT-ABP holds promise as a cost-efficient quality assurance tool for supporting high-fidelity delivery of evidence-based practices in usual care.
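
    The internal consistency values reported above are Cronbach's alpha, which is simple to reproduce from item-level data. A minimal sketch; the input layout (respondents by items) is an assumption:

        import numpy as np

        def cronbach_alpha(items):
            """items: array of shape (n_respondents, k_items) of item scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var_sum = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1.0) * (1.0 - item_var_sum / total_var)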

  1. Ultra-high molecular weight silphenylene-siloxane polymers

    NASA Technical Reports Server (NTRS)

    Patterson, W. J.; Hundley, N. H.; Ludwick, L. M.

    1984-01-01

    Silphenylene-siloxane copolymers with molecular weights above one million were prepared using a two stage polymerization technique. The technique was successfully scaled up to produce 50 grams of this high polymer in a single run. The reactive monomer approach was also investigated using the following aminosilanes: bis(dimethylamino)dimethylsilane, N,N-bis(pyrrolidinyl)dimethylsilane and N,N-bis(gamma-butyrolactam)dimethylsilane. Thermal analyses were performed in both air and nitrogen. The experimental polymers decomposed at 540 to 562 C, as opposed to 408 to 426 C for commercial silicones. Differential scanning calorimetry showed a glass transition (Tg) at -50 to -55 C for the silphenylene-siloxane copolymer while the commercial silicones had Tg's at -96 to -112 C.

  2. Molecular-Scale Electronics: From Concept to Function.

    PubMed

    Xiang, Dong; Wang, Xiaolong; Jia, Chuancheng; Lee, Takhee; Guo, Xuefeng

    2016-04-13

    Creating functional electrical circuits using individual or ensemble molecules, often termed "molecular-scale electronics", not only meets the increasing technical demands of the miniaturization of traditional Si-based electronic devices, but also provides an ideal window for exploring the intrinsic properties of materials at the molecular level. This Review covers the major advances with the most general applicability and emphasizes new insights into the development of efficient platform methodologies for building reliable molecular electronic devices with desired functionalities through the combination of programmed bottom-up self-assembly and sophisticated top-down device fabrication. First, we summarize a number of different approaches to forming molecular-scale junctions and discuss various experimental techniques for examining these nanoscale circuits in detail. We then give a full introduction to characterization techniques and theoretical simulations for molecular electronics. Third, we highlight the major contributions and new concepts of integrating molecular functionalities into electrical circuits. Finally, we provide a critical discussion of the limitations and main challenges that still exist for the development of molecular electronics. These analyses should be valuable for deeply understanding charge transport through molecular junctions, the device fabrication process, and the roadmap for future practical molecular electronics.

  3. Satellite-enhanced dynamical downscaling for the analysis of extreme events

    NASA Astrophysics Data System (ADS)

    Nunes, Ana M. B.

    2016-09-01

    The use of regional models in the downscaling of general circulation models provides a strategy to generate more detailed climate information. In that case, boundary-forcing techniques can be useful to maintain the large-scale features from the coarse-resolution global models in agreement with the inner modes of the higher-resolution regional models. Although those procedures might improve dynamics, downscaling via regional modeling still aims for better representation of physical processes. With the purpose of improving dynamics and physical processes in regional downscaling of global reanalysis, the Regional Spectral Model, originally developed at the National Centers for Environmental Prediction, employs a newly reformulated scale-selective bias correction, together with the 3-hourly assimilation of satellite-based precipitation estimates constructed from the Climate Prediction Center morphing technique. The two-scheme technique for the dynamical downscaling of global reanalysis can be applied in analyses of environmental disasters and risk assessment, with hourly outputs and a resolution of about 25 km. Here the added value of satellite-enhanced dynamical downscaling is demonstrated in simulations of the first reported hurricane in the western South Atlantic Ocean basin, through comparisons with global reanalyses and satellite products available in ocean areas.

  4. Amino Acid Contents of Meteorite Mineral Separates

    NASA Technical Reports Server (NTRS)

    Berger, E. L.; Burton, A. S.; Locke, D.

    2017-01-01

    Indigenous amino acids have been found in all eight carbonaceous chondrite groups. However, the abundances and the structural, enantiomeric and isotopic compositions of amino acids differ significantly among meteorites of different groups and petrologic types. This suggests that parent-body conditions (thermal or aqueous alteration), mineralogy, and the preservation of amino acids are linked. Previously, elucidating specific relationships between amino acids and mineralogy was not possible because the samples analyzed for amino acids were much larger than the scale at which petrologic heterogeneity is observed (sub-mm-scale differences corresponding to sub-mg samples). Recent advances in amino acid measurements and the application of techniques such as high resolution X-ray diffraction (HR-XRD) and scanning electron microscopy (SEM) with energy dispersive spectroscopy (EDS) for mineralogical characterization allow us to perform coordinated analyses on the scale at which mineral heterogeneity is observed.

  5. High-Temperature Strain Sensing for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Piazza, Anthony; Richards, Lance W.; Hudson, Larry D.

    2008-01-01

    Thermal protection systems (TPS) and hot structures utilize advanced materials that operate at temperatures exceeding our current ability to measure structural performance. Robust strain sensors that operate accurately and reliably beyond 1800 F are needed but do not exist. These shortcomings hinder the ability to validate analysis and modeling techniques and to optimize structural designs. This presentation examines high-temperature strain sensing for aerospace applications and, more specifically, seeks to provide strain data for validating finite element models and thermal-structural analyses. Efforts have been made to develop sensor attachment techniques for relevant structural materials at the small test specimen level and to perform laboratory tests that characterize the sensors and generate corrections to apply to indicated strains. Areas highlighted in this presentation include sensors, sensor attachment techniques, laboratory evaluation/characterization of strain measurements, and sensor use in large-scale structures.

  6. Innovative Applications of Laser Scanning and Rapid Prototype Printing to Rock Breakdown Experiments

    NASA Technical Reports Server (NTRS)

    Bourke, Mary; Viles, Heather; Nicoll, Joe; Lyew-Ayee, Parris; Ghent, Rebecca; Holmlund, James

    2008-01-01

    We present the novel application of two technologies for use in rock breakdown experiments, i.e. close-range, ground-based 3D triangulation scanning and rapid prototype printing. These techniques aid analyses of form-process interactions across the range of scales relevant to breakdown (microns to meters). This is achieved through (a) the creation of DEMs (which permit quantitative description and analysis of rock surface morphology and morphological change) and (b) the production of more realistically shaped experimental blocks. We illustrate the use of these techniques, alongside appropriate data analysis routines, in experiments designed to investigate the persistence of fluvially derived features in the face of subsequent wind abrasion and weathering. These techniques have a range of potential applications in experimental field- and lab-based geomorphic studies beyond those specifically outlined here.

  7. cm-scale variations of crystal orientation fabric in cold Alpine ice core from Colle Gnifetti

    NASA Astrophysics Data System (ADS)

    Kerch, Johanna; Weikusat, Ilka; Eisen, Olaf; Wagenbach, Dietmar; Erhardt, Tobias

    2015-04-01

    Analysis of the microstructural parameters of ice has been an important part of ice core analyses, so far mainly for polar cores, in order to obtain information about physical processes (e.g. deformation, recrystallisation) on the micro- and macro-scale within an ice body. More recently, the influence of impurities and of climatic conditions during snow accumulation on these processes has come into focus. A deeper understanding of how palaeoclimate proxies interact with physical properties of the ice matrix bears relevance for palaeoclimatic interpretations, improved geophysical measurement techniques and the furthering of ice dynamical modeling. Variations in microstructural parameters, e.g. crystal orientation fabric or grain size, can be observed on a scale of hundreds and tens of metres but also on a centimetre scale. The underlying processes are not necessarily the same on all scales. Especially for the short-scale variations, many questions remain unanswered. We present results from a study that aims to investigate the following hypotheses: 1. Variations in grain size and fabric, i.e. strong changes of the orientation of ice crystals with respect to the vertical, occur on a centimetre scale and can be observed at all depths of an ice core. 2. Palaeoclimate proxies like dust and impurities have an impact on the microstructural processes and thus induce the observed short-scale variations in grain size and fabric. 3. The interaction of proxies with the ice matrix leads to depth intervals that show correlating behaviour, as well as ranges with anticorrelation, between microstructural parameters and palaeoclimatic proxies; the respective processes need to be identified. Fabric Analyser measurements were conducted on more than 80 samples (a total of 8 m) from different depth ranges of a cold Alpine ice core (72 m long) drilled in 2013 at Colle Gnifetti, Switzerland/Italy. Results were obtained by automatic image processing, providing estimates of grain size distributions and crystal orientation fabric, and by comparison with data from continuous flow analysis of chemical impurities. A microstructural characterisation of the analysed core is presented, with emphasis on the observed variations in crystal orientation fabric. The relevance of these results for palaeoclimate reconstruction and geophysical applications in ice is discussed.

  8. Wavelet and Fractal Analysis of Remotely Sensed Surface Temperature with Applications to Estimation of Surface Sensible Heat Flux Density

    NASA Technical Reports Server (NTRS)

    Schieldge, John

    2000-01-01

    Wavelet and fractal analyses have been used successfully to analyze one-dimensional data sets such as time series of financial, physical, and biological parameters. These techniques have been applied to two-dimensional problems in some instances, including the analysis of remote sensing imagery. In this respect, these techniques have not been widely used by the remote sensing community, and their overall capabilities as analytical tools for use on satellite and aircraft data sets are not well known. Wavelet and fractal analyses have the potential to provide fresh insight into the characterization of surface properties such as temperature and emissivity distributions, and surface processes such as the heat and water vapor exchange between the surface and the lower atmosphere. In particular, the variation of sensible heat flux density as a function of the change in scale of surface properties is difficult to estimate, but, in general, wavelets and fractals have proved useful in determining the way a parameter varies with changes in scale. We present the results of a limited study on the relationship between spatial variations in surface temperature distribution and sensible heat flux distribution as determined by separate wavelet and fractal analyses. We analyzed aircraft imagery obtained in the thermal infrared (IR) bands from the multispectral TIMS and hyperspectral MASTER airborne sensors. The thermal IR data allow us to estimate the surface kinetic temperature distribution for a number of sites in the Midwestern and Southwestern United States (viz., San Pedro River Basin, Arizona; El Reno, Oklahoma; Jornada, New Mexico). The ground spatial resolution of the aircraft data varied from 5 to 15 meters. All sites were instrumented with meteorological and hydrological equipment including surface layer flux measuring stations such as Bowen Ratio systems and sonic anemometers. The ground and aircraft data sets provided the inputs for the wavelet and fractal analyses, and the validation of the results.
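
    One concrete way wavelets expose how a parameter varies with scale is to summarize the variance of the detail coefficients at each decomposition level. A short sketch using the PyWavelets package on a one-dimensional temperature transect; the synthetic transect is a stand-in for a row of TIMS/MASTER imagery:

        import numpy as np
        import pywt

        def wavelet_scale_variance(signal, wavelet="db4", level=5):
            """Variance of detail coefficients per decomposition level:
            a simple scale-by-scale summary of signal variability."""
            coeffs = pywt.wavedec(np.asarray(signal, float), wavelet, level=level)
            return {f"detail level {i}": float(np.var(c))
                    for i, c in enumerate(coeffs[1:], start=1)}

        transect = np.random.default_rng(0).normal(300.0, 2.0, size=1024)  # synthetic
        print(wavelet_scale_variance(transect))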

  9. Assimilation of ZDR Columns for Improving the Spin-Up and Forecasts of Convective Storms

    NASA Astrophysics Data System (ADS)

    Carlin, J.; Gao, J.; Snyder, J.; Ryzhkov, A.

    2017-12-01

    A primary motivation for assimilating radar reflectivity data is the reduction of spin-up time for modeled convection. To accomplish this, cloud analysis techniques seek to induce and sustain convective updrafts in storm-scale models by inserting temperature and moisture increments and hydrometeor mixing ratios into the model analysis from simple relations with reflectivity. Polarimetric radar data provide additional insight into the microphysical and dynamic structure of convection. In particular, the radar meteorology community has known for decades that convective updrafts cause, and are typically co-located with, differential reflectivity (ZDR) columns - vertical protrusions of enhanced ZDR above the environmental 0˚C level. Despite these benefits, limited work has been done thus far to assimilate dual-polarization radar data into numerical weather prediction models. In this study, we explore the utility of assimilating ZDR columns to improve storm-scale model analyses and forecasts of convection. We modify the existing Advanced Regional Prediction System's (ARPS) cloud analysis routine to adjust model temperature and moisture state variables using detected ZDR columns as proxies for convective updrafts, and compare the resultant cycled analyses and forecasts with those from the original reflectivity-based cloud analysis formulation. Results indicate qualitative and quantitative improvements from assimilating ZDR columns, including more coherent analyzed updrafts, forecast updraft helicity swaths that better match radar-derived rotation tracks, more realistic forecast reflectivity fields, and larger equitable threat scores. These findings support the use of dual-polarization radar signatures to improve storm-scale model analyses and forecasts.

  10. Application of the radioisotope-excited X-ray fluorescence technique in charge optimization during thermite smelting of Fe-Ni, Fe-Cr, and Fe-Ti alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, I.G.; Joseph, D.; Lal, M.

    1995-10-01

    A wide range of ferroalloys are used to facilitate the addition of different alloying elements to molten steel. High-carbon ferroalloys are produced on a tonnage basis by carbothermic smelting in an electric furnace, and an aluminothermic route is generally adopted for small-scale production of low-carbon varieties. The physicochemical principles of carbothermy and aluminothermy have been well documented in the literature. However, limited technical data are reported on the production of individual ferroalloys of low-carbon varieties from their selected resources. The authors demonstrate here the application of an energy dispersive X-ray fluorescence (EDXRF) technique in meeting the analytical requirements of a thermite smelting campaign, carried out with the aim of preparing low-carbon, low-nitrogen Fe-Ni, Fe-Cr, and Fe-Ti alloys from indigenously available nickel-bearing spent catalyst, mineral chromite, and ilmenite/rutile, respectively. They chose the EDXRF technique to meet the analytical requirements because of its capability to analyze samples of ores, minerals, metals, and alloys in different forms, such as powder, sponge, as-smelted, or as-cast, to obtain rapid multielement analyses with ease. Rapid analyses of thermite feed and product by this technique have aided in the appropriate alteration of the charge constituents to obtain optimum charge consumption.

  11. Big Data and Health Economics: Strengths, Weaknesses, Opportunities and Threats.

    PubMed

    Collins, Brendan

    2016-02-01

    'Big data' is the collective name for the increasing capacity of information systems to collect and store large volumes of data, which are often unstructured and time stamped, and to analyse these data by using regression and other statistical techniques. This is a review of the potential applications of big data and health economics, using a SWOT (strengths, weaknesses, opportunities, threats) approach. In health economics, large pseudonymized databases, such as the planned care.data programme in the UK, have the potential to increase understanding of how drugs work in the real world, taking into account adherence, co-morbidities, interactions and side effects. This 'real-world evidence' has applications in individualized medicine. More routine and larger-scale cost and outcomes data collection will make health economic analyses more disease specific and population specific but may require new skill sets. There is potential for biomonitoring and lifestyle data to inform health economic analyses and public health policy.

  12. Visual field progression with frequency-doubling matrix perimetry and standard automated perimetry in patients with glaucoma and in healthy controls.

    PubMed

    Redmond, Tony; O'Leary, Neil; Hutchison, Donna M; Nicolela, Marcelo T; Artes, Paul H; Chauhan, Balwantray C

    2013-12-01

    A new analysis method called permutation of pointwise linear regression measures the significance of deterioration over time at each visual field location, combines the significance values into an overall statistic, and then determines the likelihood of change in the visual field. Because the outcome is a single P value, individualized to that specific visual field and independent of the scale of the original measurement, the method is well suited for comparing techniques with different stimuli and scales. To test the hypothesis that frequency-doubling matrix perimetry (FDT2) is more sensitive than standard automated perimetry (SAP) in identifying visual field progression in glaucoma. Patients with open-angle glaucoma and healthy controls were examined by FDT2 and SAP, both with the 24-2 test pattern, on the same day at 6-month intervals in a longitudinal prospective study conducted in a hospital-based setting. Only participants with at least 5 examinations were included. Data were analyzed with permutation of pointwise linear regression. Permutation of pointwise linear regression is individualized to each participant, in contrast to current analyses in which the statistical significance is inferred from population-based approaches. Analyses were performed with both total deviation and pattern deviation. Sixty-four patients and 36 controls were included in the study. The median age, SAP mean deviation, and follow-up period were 65 years, -2.6 dB, and 5.4 years, respectively, in patients and 62 years, +0.4 dB, and 5.2 years, respectively, in controls. Using total deviation analyses, statistically significant deterioration was identified in 17% of patients with FDT2, in 34% of patients with SAP, and in 14% of patients with both techniques; in controls these percentages were 8% with FDT2, 31% with SAP, and 8% with both. Using pattern deviation analyses, statistically significant deterioration was identified in 16% of patients with FDT2, in 17% of patients with SAP, and in 3% of patients with both techniques; in controls these values were 3% with FDT2 and none with SAP. No evidence was found that FDT2 is more sensitive than SAP in identifying visual field deterioration. In about one-third of healthy controls, age-related deterioration with SAP reached statistical significance.
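
    A sketch of the permutation-of-pointwise-linear-regression idea described above: fit a linear regression over time at each visual field location, combine the one-sided slope p values into a single statistic (Fisher's S = -sum(ln p) is used here as one concrete choice), and judge S against permutations of the examination order. This illustrates the general scheme, not the authors' exact implementation.

        import numpy as np
        from scipy import stats

        def poplr_p_value(series, n_perm=1000, seed=0):
            """series: array (n_visits, n_locations) of sensitivities over time."""
            rng = np.random.default_rng(seed)
            n_visits = series.shape[0]
            t = np.arange(n_visits)

            def combined_stat(order):
                s = 0.0
                for j in range(series.shape[1]):
                    fit = stats.linregress(t, series[order, j])
                    # one-sided p value for deterioration (negative slope)
                    p = fit.pvalue / 2 if fit.slope < 0 else 1 - fit.pvalue / 2
                    s += -np.log(max(p, 1e-12))
                return s

            s_obs = combined_stat(np.arange(n_visits))
            perms = [combined_stat(rng.permutation(n_visits)) for _ in range(n_perm)]
            return (1 + sum(s >= s_obs for s in perms)) / (n_perm + 1)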

  13. The spatial and temporal domains of modern ecology.

    PubMed

    Estes, Lyndon; Elsen, Paul R; Treuer, Timothy; Ahmed, Labeeb; Caylor, Kelly; Chang, Jason; Choi, Jonathan J; Ellis, Erle C

    2018-05-01

    To understand ecological phenomena, it is necessary to observe their behaviour across multiple spatial and temporal scales. Since this need was first highlighted in the 1980s, technology has opened previously inaccessible scales to observation. To help to determine whether there have been corresponding changes in the scales observed by modern ecologists, we analysed the resolution, extent, interval and duration of observations (excluding experiments) in 348 studies that have been published between 2004 and 2014. We found that observational scales were generally narrow, because ecologists still primarily use conventional field techniques. In the spatial domain, most observations had resolutions ≤1 m² and extents ≤10,000 ha. In the temporal domain, most observations were either unreplicated or infrequently repeated (>1 month interval) and ≤1 year in duration. Compared with studies conducted before 2004, observational durations and resolutions appear largely unchanged, but intervals have become finer and extents larger. We also found a large gulf between the scales at which phenomena are actually observed and the scales those observations ostensibly represent, raising concerns about observational comprehensiveness. Furthermore, most studies did not clearly report scale, suggesting that it remains a minor concern. Ecologists can better understand the scales represented by observations by incorporating autocorrelation measures, while journals can promote attentiveness to scale by implementing scale-reporting standards.

  14. Analysis of Grassland Ecosystem Physiology at Multiple Scales Using Eddy Covariance, Stable Isotope and Remote Sensing Techniques

    NASA Astrophysics Data System (ADS)

    Flanagan, L. B.; Geske, N.; Emrick, C.; Johnson, B. G.

    2006-12-01

    Grassland ecosystems typically exhibit very large annual fluctuations in above-ground biomass production and net ecosystem productivity (NEP). Eddy covariance flux measurements, plant stable isotope analyses, and canopy spectral reflectance techniques have been applied to study environmental constraints on grassland ecosystem productivity and the acclimation responses of the ecosystem at a site near Lethbridge, Alberta, Canada. We have observed substantial interannual variation in grassland productivity during 1999-2005. In addition, there was a strong correlation between peak above-ground biomass production and NEP calculated from eddy covariance measurements. Interannual variation in NEP was strongly controlled by the total amount of precipitation received during the growing season (April-August). We also observed significant positive correlations between a multivariate ENSO index and total growing season precipitation, and between the ENSO index and annual NEP values. This suggested that a significant fraction of the annual variability in grassland productivity was associated with ENSO during 1999-2005. Grassland productivity varies asymmetrically in response to changes in precipitation with increases in productivity during wet years being much more pronounced than reductions during dry years. Strong increases in plant water-use efficiency, based on carbon and oxygen stable isotope analyses, contribute to the resilience of productivity during times of drought. Within a growing season increased stomatal limitation of photosynthesis, associated with improved water-use efficiency, resulted in apparent shifts in leaf xanthophyll cycle pigments and changes to the Photochemical Reflectance Index (PRI) calculated from hyper-spectral reflectance measurements conducted at the canopy-scale. These shifts in PRI were apparent before seasonal drought caused significant reductions in leaf area index (LAI) and changes to canopy-scale "greenness" based on NDVI values. With further progression of the seasonal drought, LAI and canopy-scale NDVI also declined in strong correlation. In addition, we have observed strong correlation between NDVI calculated from canopy-scale reflectance measurements and NDVI determined by MODIS. Continued reflectance measurements will help to understand and document the response of the grassland to seasonal and annual environmental change.
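
    Both reflectance indices used above have simple closed forms: NDVI contrasts near-infrared and red reflectance, while PRI contrasts the xanthophyll-sensitive 531 nm band against a 570 nm reference band. A minimal sketch:

        import numpy as np

        def ndvi(nir, red):
            """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
            nir, red = np.asarray(nir, float), np.asarray(red, float)
            return (nir - red) / (nir + red)

        def pri(r531, r570):
            """Photochemical Reflectance Index: (R531 - R570) / (R531 + R570)."""
            r531, r570 = np.asarray(r531, float), np.asarray(r570, float)
            return (r531 - r570) / (r531 + r570)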

  15. Fractal Characterization of Multitemporal Scaled Remote Sensing Data

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale A.; Lam, Nina Siu-Ngan; Qiu, Hong-lie

    1998-01-01

    Scale is an "innate" concept in geographic information systems. It is recognized as something that is intrinsic to the ingestion, storage, manipulation, analysis, modeling, and output of space and time data within a GIS purview, yet the relative meaning and ramifications of scaling spatial and temporal data from this perspective remain enigmatic. As GISs become more sophisticated as a product of more robust software and more powerful computer systems, there is an urgent need to examine the issue of scale, and its relationship to the whole body of spatiotemporal data, as imparted in GISs. Scale is fundamental to the characterization of geo-spatial data as represented in GISs, but we have relatively little insight into the effects of, or how to measure the effects of, scale in representing multiscaled data; i.e., data that are acquired in different formats (e.g., map, digital) and exist in varying spatial, temporal, and, in the case of remote sensing data, radiometric configurations. This is particularly true in the emerging era of Integrated GISs (IGISs), wherein spatial data in a variety of formats (e.g., raster, vector) are combined with multiscaled remote sensing data to perform highly sophisticated space-time data analyses and modeling. Moreover, the complexities associated with the integration of multiscaled data sets in a multitude of formats are exacerbated by confusion over what the term "scale" means from a multidisciplinary perspective; i.e., "scale" takes on significantly different meanings depending upon one's disciplinary background and spatial perspective, which can lead to substantive confusion in the input, manipulation, analysis, and output of IGISs (Quattrochi, 1993). Hence, we must begin to look at the universality of scale and begin to develop the theory, methods, and techniques necessary to advance knowledge on the "Science of Scale" across the wide number of spatial disciplines that use GISs.

  16. Modeling process-structure-property relationships for additive manufacturing

    NASA Astrophysics Data System (ADS)

    Yan, Wentao; Lin, Stephen; Kafka, Orion L.; Yu, Cheng; Liu, Zeliang; Lian, Yanping; Wolff, Sarah; Cao, Jian; Wagner, Gregory J.; Liu, Wing Kam

    2018-02-01

    This paper presents our latest work on comprehensive modeling of process-structure-property relationships for additive manufacturing (AM) materials, including using data-mining techniques to close the cycle of design-predict-optimize. To illustrate the process-structure relationship, the multi-scale, multi-physics process modeling proceeds from the micro-scale, to establish a mechanistic heat source model, through meso-scale models of individual powder particle evolution, and finally to the macro-scale model used to simulate the fabrication process of a complex product. To link structure and properties, a high-efficiency mechanistic model, self-consistent clustering analysis, is developed to capture a variety of material responses. The model incorporates factors such as voids, phase composition, inclusions, and grain structures, which are the differentiating features of AM metals. Furthermore, we propose data-mining as an effective solution for novel rapid design and optimization, which is motivated by the numerous influencing factors in the AM process. We believe this paper will provide a roadmap to advance AM fundamental understanding and guide the monitoring and advanced diagnostics of AM processing.

  17. Red, Straight, no bends: primordial power spectrum reconstruction from CMB and large-scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravenni, Andrea; Verde, Licia; Cuesta, Antonio J., E-mail: andrea.ravenni@pd.infn.it, E-mail: liciaverde@icc.ub.edu, E-mail: ajcuesta@icc.ub.edu

    2016-08-01

    We present a minimally parametric, model-independent reconstruction of the shape of the primordial power spectrum. Our smoothing spline technique is well suited to searching for smooth features such as deviations from scale invariance, and deviations from a power law such as running of the spectral index or small-scale power suppression. We use a comprehensive set of state-of-the-art cosmological data: Planck observations of the temperature and polarisation anisotropies of the cosmic microwave background, WiggleZ and Sloan Digital Sky Survey Data Release 7 galaxy power spectra, and the Canada-France-Hawaii Lensing Survey correlation function. This reconstruction strongly supports the evidence for a power-law primordial power spectrum with a red tilt and disfavours deviations from a power-law power spectrum, including small-scale power suppression such as that induced by significantly massive neutrinos. This offers a powerful confirmation of the inflationary paradigm, justifying the adoption of the inflationary prior in cosmological analyses.

  18. Diagnostic and Therapeutic Management of Nasal Airway Obstruction: Advances in Diagnosis and Treatment.

    PubMed

    Mohan, Suresh; Fuller, Jennifer C; Ford, Stephanie Friree; Lindsay, Robin W

    2018-05-10

    Nasal airway obstruction (NAO) is a common complaint in the otolaryngologist's office and can have a negative influence on quality of life (QOL). Existing diagnostic methods have improved, but little consensus exists on optimal tools. Furthermore, although surgical techniques for nasal obstruction continue to be developed, effective outcome measurement is lacking. An update of recent advances in diagnostic and therapeutic management of NAO is warranted. To review advances in diagnosis and treatment of NAO from the last 5 years. PubMed, Embase, CINAHL, the Cochrane Library, LILACS, Web of Science, and Guideline.gov were searched with the terms nasal obstruction and nasal blockage and their permutations from July 26, 2012, through October 23, 2017. Studies were included if they evaluated NAO using a subjective and an objective technique, and in the case of intervention-based studies, the Nasal Obstruction Symptom Evaluation (NOSE) scale and an objective technique. Exclusion criteria consisted of animal studies; patients younger than 14 years; nasal foreign bodies; nasal masses including polyps; choanal atresia; sinus disease; obstructive sleep apnea or sleep-disordered breathing; allergic rhinitis; and studies not specific to nasal obstruction. The initial search resulted in 942 articles. After independent screening by 2 investigators, 46 unique articles remained, including 2 randomized clinical trials, 3 systematic reviews, 3 meta-analyses, and 39 nonrandomized cohort studies (including a combined systematic review and meta-analysis). An aggregate of approximately 32,000 patients were reviewed (including meta-analyses). Of the subjective measures available for NAO, the NOSE scale is outstanding with regard to disease-specific validation and correlation with symptoms. No currently available objective measure can be considered a criterion standard. Structural measures of flow, pressure, and volume appear to be necessary but insufficient to assess NAO. Therefore, novel variables and techniques must continue to be explored in search of an ideal instrument to aid in assessment of surgical outcomes. Nasal airway obstruction is a clinical diagnosis with considerable effects on QOL. An adequate diagnosis begins with a focused history and physical examination and requires a patient QOL measure such as the NOSE scale. Objective measures should be adjunctive and require further validation for widespread adoption. These results are limited by minimal high-quality evidence among studies and the risk of bias in observational studies.

  19. Geochemistry and the Understanding of Groundwater Systems

    NASA Astrophysics Data System (ADS)

    Glynn, P. D.; Plummer, L. N.; Weissmann, G. S.; Stute, M.

    2009-12-01

    Geochemical techniques and concepts have made major contributions to the understanding of groundwater systems. Advances continue to be made through (1) development of measurement and characterization techniques, (2) improvements in computer technology, networks and numerical modeling, (3) investigation of coupled geologic, hydrologic, geochemical and biologic processes, and (4) scaling of individual observations, processes or subsystem models into larger coherent model frameworks. Many applications benefit from progress in these areas, such as: (1) understanding paleoenvironments, in particular paleoclimate, through the use of groundwater archives, (2) assessing the sustainability (recharge and depletion) of groundwater resources, and (3) their vulnerability to contamination, (4) evaluating the capacity and consequences of subsurface waste isolation (e.g. geologic carbon sequestration, nuclear and chemical waste disposal), (5) assessing the potential for mitigation/transformation of anthropogenic contaminants in groundwater systems, and (6) understanding the effect of groundwater lag times in ecosystem-scale responses to natural events, land-use changes, human impacts, and remediation efforts. Obtaining “representative” groundwater samples is difficult and progress in obtaining “representative” samples, or interpreting them, requires new techniques in characterizing groundwater system heterogeneity. Better characterization and simulation of groundwater system heterogeneity (both physical and geochemical) is critical to interpreting the meaning of groundwater “ages”; to understanding and predicting groundwater flow, solute transport, and geochemical evolution; and to quantifying groundwater recharge and discharge processes. Research advances will also come from greater use and progress (1) in the application of environmental tracers to ground water dating and in the analysis of new geochemical tracers (e.g. compound specific isotopic analyses, noble gas isotopes, analyses of natural organic tracers), (2) in inverse geochemical and hydrological modeling, (3) in the understanding and simulation of coupled biological, geological, geochemical and hydrological processes, and (4) in the description and quantification of processes occurring at the boundaries of groundwater systems (e.g. unsaturated zone processes, groundwater/surface water interactions, impacts of changing geomorphology and vegetation). Improvements are needed in the integration of widely diverse information. Better techniques are needed to construct coherent conceptual frameworks from individual observations, simulated or reconstructed information, process models, and intermediate scale models. Iterating between data collection, interpretation, and the application of forward, inverse, and statistical modeling tools is likely to provide progress in this area. Quantifying groundwater system processes by using an open-system thermodynamic approach in a common mass- and energy-flow framework will also facilitate comparison and understanding of diverse processes.

  1. Synthesis of micro-scale boron nitride nanotubes at low substrate temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sajjad, Muhammad, E-mail: msajjadd@gmail.com; Makarov, Vladimir; Morell, Gerardo

    2016-07-15

    High temperature synthesis methods produce defects in 1D nanomaterials, which ultimately limit their applications. We report here the synthesis of micro-scale boron nitride nanotubes (BNNT) at low substrate temperature (300 °C) using a pulsed CO2 laser deposition technique in the presence of a catalyst. The electron microscopic analyses show the nanotubes distributed randomly on the surface of the substrate. The average diameter (∼0.25 μm) of a nanotube, which is the highest value reported to date, is estimated from SEM data and confirmed by TEM measurements. These nanotubes are promising for high-response deep-UV photo-luminescent devices. A detailed synthesis mechanism is presented and correlated with the experimental results.

  2. Application of multivariate statistical techniques in microbial ecology

    PubMed Central

    Paliy, O.; Shankar, V.

    2016-01-01

    Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological datasets. An especially noticeable effect has been attained in the field of microbial ecology, where new experimental approaches have provided in-depth assessments of the composition, functions, and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces large amounts of data, powerful statistical techniques of multivariate analysis are well suited to analyze and interpret these datasets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular dataset. In this review we describe and compare the most widely used multivariate statistical techniques, including exploratory, interpretive, and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and dataset structure. PMID:26786791
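
    As one example from the exploratory class of methods reviewed here, principal component analysis of a samples-by-taxa abundance table is often the first ordination applied to community data. A brief sketch with scikit-learn; the input matrix is synthetic:

        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(1)
        X = rng.poisson(5.0, size=(20, 50)).astype(float)  # 20 samples x 50 taxa (synthetic)
        X = X / X.sum(axis=1, keepdims=True)               # counts -> relative abundances

        pca = PCA(n_components=2)
        scores = pca.fit_transform(X)         # sample coordinates for an ordination plot
        print(pca.explained_variance_ratio_)  # variance captured by the first two axes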

  3. Light microscopy applications in systems biology: opportunities and challenges

    PubMed Central

    2013-01-01

    Biological systems present multiple scales of complexity, ranging from molecules to entire populations. Light microscopy is one of the least invasive techniques used to access information from various biological scales in living cells. The combination of molecular biology and imaging provides a bottom-up tool for direct insight into how molecular processes work on a cellular scale. However, imaging can also be used as a top-down approach to study the behavior of a system without detailed prior knowledge about its underlying molecular mechanisms. In this review, we highlight the recent developments on microscopy-based systems analyses and discuss the complementary opportunities and different challenges with high-content screening and high-throughput imaging. Furthermore, we provide a comprehensive overview of the available platforms that can be used for image analysis, which enable community-driven efforts in the development of image-based systems biology. PMID:23578051

  4. The characterization of an air pollution episode using satellite total ozone measurements

    NASA Technical Reports Server (NTRS)

    Fishman, Jack; Shipham, Mark C.; Vukovich, Fred M.; Cahoon, Donald R.

    1987-01-01

    A case study is presented which demonstrates that measurements of total ozone from a space-based platform can be used to study a widespread air pollution episode over the southeastern U.S. In particular, the synoptic-scale distribution of surface-level ozone obtained from an independent analysis of ground-based monitoring stations appears to be captured by the synoptic-scale distribution of total ozone, even though about 90 percent of the total ozone is in the stratosphere. Additional analyses of upper air meteorological data, other satellite imagery, and in situ aircraft measurements of ozone likewise support the fact that synoptic-scale variability of tropospheric ozone is primarily responsible for the observed variability in total ozone under certain conditions. The use of the type of analysis discussed in this study may provide an important technique for understanding the global budget of tropospheric ozone.

  5. Soil organic matter composition from correlated thermal analysis and nuclear magnetic resonance data in Australian national inventory of agricultural soils

    NASA Astrophysics Data System (ADS)

    Moore, T. S.; Sanderman, J.; Baldock, J.; Plante, A. F.

    2016-12-01

    National-scale inventories typically include soil organic carbon (SOC) content, but not chemical composition or biogeochemical stability. Australia's Soil Carbon Research Programme (SCaRP) represents a national inventory of SOC content and composition in agricultural systems. The program used physical fractionation followed by 13C nuclear magnetic resonance (NMR) spectroscopy. While these techniques are highly effective, they are typically too expensive and time consuming for use in large-scale SOC monitoring. We seek to understand whether thermal analysis is a viable alternative. Coupled differential scanning calorimetry (DSC) and evolved gas analysis (CO2- and H2O-EGA) yield valuable data on SOC composition and stability via ramped combustion. The technique requires little training to use, and does not require fractionation or other sample pre-treatment. We analyzed 300 agricultural samples collected by SCaRP, divided into four fractions: whole soil, coarse particulates (POM), untreated mineral-associated (HUM), and hydrofluoric acid (HF)-treated HUM. All samples were analyzed by DSC-EGA, but only the POM and HF-HUM fractions were analyzed by NMR. Multivariate statistical analyses were used to explore natural clustering in SOC composition and stability based on DSC-EGA data. A partial least-squares regression (PLSR) model was used to explore correlations among the NMR and DSC-EGA data. Correlations demonstrated regions of combustion attributable to specific functional groups, which may relate to SOC stability. We are increasingly challenged with developing an efficient technique to assess SOC composition and stability at large spatial and temporal scales. Correlations between NMR and DSC-EGA may demonstrate the viability of using thermal analysis in lieu of more demanding methods in future large-scale surveys, and may provide data that go beyond chemical composition to better approach quantification of biogeochemical stability.
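
    The NMR-to-thermal correlation step maps naturally onto scikit-learn's PLS implementation. A hedged sketch, assuming each sample contributes a row of DSC-EGA features (X) and a row of NMR-derived composition fractions (Y); both matrices below are synthetic placeholders:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))   # hypothetical thermogram features per sample
        Y = rng.normal(size=(300, 4))    # hypothetical NMR composition fractions

        pls = PLSRegression(n_components=5)
        Y_hat = cross_val_predict(pls, X, Y, cv=10)  # cross-validated predictions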

  6. 4D ground penetrating radar measurements as non-invasive means for hydrological process investigation

    NASA Astrophysics Data System (ADS)

    Jackisch, Conrad; Allroggen, Niklas

    2017-04-01

    The lack of direct vision into the subsurface appears to be a major limiting factor for our hydrological process understanding and theory development. Today, hydrology-related sciences have collected tremendous evidence for soils acting simultaneously as drainage networks and retention stores in structured and self-organising domains. However, our present observation technology relies mainly on point-scale sensors, which integrate over a volume of unknown structures and are blind to their distribution. Although heterogeneity is acknowledged at all scales, it is rarely treated as an inherent system property. At small scales (soil moisture probes) and at large scales (neutron probes), our measurements leave considerable ambiguity. Consequently, spatially and temporally continuous measurement of soil water states is essential for advancing our understanding and development of subsurface process theories. We present results from several irrigation experiments accompanied by 2D and 3D time-lapse GPR for the development of a novel technique to visualise and quantify water dynamics in the subsurface. Through the comparison of TDR, tracer and gravimetric measurements of soil moisture, it becomes apparent that all sensor-based techniques are capable of recording temporal dynamics but are challenged to precisely quantify the measurements and to extrapolate them in space. At the same time, excavative methods are very limited in temporal and spatial resolution. The application of non-invasive 4D GPR measurements complements the existing techniques and reveals structural and temporal dynamics simultaneously. By consequently increasing the density of the GPR data recordings in time and space, we find means to process the data in the time dimension as well. This opens ways to quantitatively analyse soil water dynamics in complex settings.

  7. A Surrogate Technique for Investigating Deterministic Dynamics in Discrete Human Movement.

    PubMed

    Taylor, Paul G; Small, Michael; Lee, Kwee-Yum; Landeo, Raul; O'Meara, Damien M; Millett, Emma L

    2016-10-01

    Entropy is an effective tool for investigation of human movement variability. However, before applying entropy, it can be beneficial to employ analyses to confirm that observed data are not solely the result of stochastic processes. This can be achieved by contrasting observed data with that produced using surrogate methods. Unlike continuous movement, no appropriate method has been applied to discrete human movement. This article proposes a novel surrogate method for discrete movement data, outlining the processes for determining its critical values. The proposed technique reliably generated surrogates for discrete joint angle time series, destroying fine-scale dynamics of the observed signal, while maintaining macro structural characteristics. Comparison of entropy estimates indicated observed signals had greater regularity than surrogates and were not only the result of stochastic but also deterministic processes. The proposed surrogate method is both a valid and reliable technique to investigate determinism in other discrete human movement time series.

  8. Phylogenomic Reconstruction of the Oomycete Phylogeny Derived from 37 Genomes

    PubMed Central

    McCarthy, Charley G. P.

    2017-01-01

    The oomycetes are a class of microscopic, filamentous eukaryotes within the Stramenopiles-Alveolata-Rhizaria (SAR) supergroup which includes ecologically significant animal and plant pathogens, most infamously the causative agent of potato blight, Phytophthora infestans. Single-gene and concatenated phylogenetic studies both of individual oomycete genera and of members of the larger class have resulted in conflicting conclusions concerning species phylogenies within the oomycetes, particularly for the large Phytophthora genus. Genome-scale phylogenetic studies have successfully resolved many eukaryotic relationships by using supertree methods, which combine large numbers of potentially disparate trees to determine evolutionary relationships that cannot be inferred from individual phylogenies alone. With a sufficient amount of genomic data now available, we have undertaken the first whole-genome phylogenetic analysis of the oomycetes using data from 37 oomycete species and 6 SAR species. In our analysis, we used established supertree methods to generate phylogenies from 8,355 homologous oomycete and SAR gene families and have complemented those analyses with both phylogenomic network and concatenated supermatrix analyses. Our results show that a genome-scale approach to oomycete phylogeny resolves oomycete classes and individual clades within the problematic Phytophthora genus. Support for the resolution of the inferred relationships between individual Phytophthora clades varies depending on the methodology used. Our analysis represents an important first step in large-scale phylogenomic analysis of the oomycetes. IMPORTANCE The oomycetes are a class of eukaryotes and include ecologically significant animal and plant pathogens. Single-gene and multigene phylogenetic studies of individual oomycete genera and of members of the larger classes have resulted in conflicting conclusions concerning interspecies relationships among these species, particularly for the Phytophthora genus. The onset of next-generation sequencing techniques now means that a wealth of oomycete genomic data is available. For the first time, we have used genome-scale phylogenetic methods to resolve oomycete phylogenetic relationships. We used supertree methods to generate single-gene and multigene species phylogenies. Overall, our supertree analyses utilized phylogenetic data from 8,355 oomycete gene families. We have also complemented our analyses with superalignment phylogenies derived from 131 single-copy ubiquitous gene families. Our results show that a genome-scale approach to oomycete phylogeny resolves oomycete classes and clades. Our analysis represents an important first step in large-scale phylogenomic analysis of the oomycetes. PMID:28435885

  9. Simulating recurrent event data with hazard functions defined on a total time scale.

    PubMed

    Jahn-Eimermacher, Antje; Ingel, Katharina; Ozga, Ann-Kathrin; Preussler, Stella; Binder, Harald

    2015-03-08

    In medical studies with recurrent event data, a total time scale perspective is often needed to adequately reflect disease mechanisms. This means that the hazard process is defined on the time since some starting point, e.g. the beginning of some disease, in contrast to a gap time scale where the hazard process restarts after each event. While techniques such as the Andersen-Gill model have been developed for analyzing data from a total time perspective, techniques for the simulation of such data, e.g. for sample size planning, have not been investigated so far. We have derived a simulation algorithm covering the Andersen-Gill model that can be used for sample size planning in clinical trials as well as the investigation of modeling techniques. Specifically, we allow for fixed and/or random covariates and an arbitrary hazard function defined on a total time scale. Furthermore we take into account that individuals may be temporarily insusceptible to a recurrent incidence of the event. The methods are based on conditional distributions of the inter-event times, conditional on the total time of the preceding event or study start. Closed-form solutions are provided for common distributions. The derived methods have been implemented in a readily accessible R script. The proposed techniques are illustrated by planning the sample size for a clinical trial with complex recurrent event data. The required sample size is shown to be affected not only by censoring and intra-patient correlation, but also by the presence of risk-free intervals. This demonstrates the need for a simulation algorithm that particularly allows for complex study designs where no analytical sample size formulas might exist. The derived simulation algorithm is seen to be useful for the simulation of recurrent event data that follow an Andersen-Gill model. Next to the use of a total time scale, it allows for intra-patient correlation and risk-free intervals as are often observed in clinical trial data. Its application therefore allows the simulation of data that closely resemble real settings and thus can improve the use of simulation studies for designing and analysing studies.
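
    The conditional-distribution construction described above can be sketched for a Weibull-type hazard on the total time scale. This is a minimal Python illustration (the authors provide an R script; the hazard form, parameter values and gamma frailty here are assumptions for the example), drawing each inter-event time by inverting the cumulative hazard conditional on the time of the preceding event:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_subject(lam0=0.1, a=1.5, beta=0.5, x=1.0, z=1.0, t_max=10.0):
    """Recurrent event times under a total-time-scale Weibull-type hazard
    lambda(t) = lam0 * a * t**(a-1) * exp(beta*x) * z   (z = random frailty).

    The next event after time s is drawn by inverting
        Lambda(T) - Lambda(s) = E,  with E ~ Exp(1),
    i.e. the inter-event time is conditioned on the total time of the
    preceding event, matching the conditional-distribution construction."""
    rate = lam0 * np.exp(beta * x) * z          # Lambda(t) = rate * t**a
    events, s = [], 0.0
    while True:
        e = rng.exponential(1.0)
        t_next = (s**a + e / rate) ** (1.0 / a)
        if t_next > t_max:                       # administrative censoring
            return events
        events.append(t_next)
        s = t_next

# One subject with a gamma-distributed frailty (induces intra-patient correlation)
z = rng.gamma(shape=2.0, scale=0.5)
print(simulate_subject(z=z))
```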

  10. Analysis strategies for high-resolution UHF-fMRI data.

    PubMed

    Polimeni, Jonathan R; Renvall, Ville; Zaretskaya, Natalia; Fischl, Bruce

    2018-03-01

    Functional MRI (fMRI) benefits from both increased sensitivity and specificity with increasing magnetic field strength, making it a key application for Ultra-High Field (UHF) MRI scanners. Most UHF-fMRI studies utilize the dramatic increases in sensitivity and specificity to acquire high-resolution data reaching sub-millimeter scales, which enable new classes of experiments to probe the functional organization of the human brain. This review article surveys advanced data analysis strategies developed for high-resolution fMRI at UHF. These include strategies designed to mitigate distortion and artifacts associated with higher fields in ways that attempt to preserve spatial resolution of the fMRI data, as well as recently introduced analysis techniques that are enabled by these extremely high-resolution data. Particular focus is placed on anatomically-informed analyses, including cortical surface-based analysis, which are powerful techniques that can guide each step of the analysis from preprocessing to statistical analysis to interpretation and visualization. New intracortical analysis techniques for laminar and columnar fMRI are also reviewed and discussed. Prospects for single-subject individualized analyses are also presented and discussed. Altogether, there are both specific challenges and opportunities presented by UHF-fMRI, and the use of proper analysis strategies can help these valuable data reach their full potential. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Biogenicity and Syngeneity of Organic Matter in Ancient Sedimentary Rocks: Recent Advances in the Search for Evidence of Past Life

    NASA Astrophysics Data System (ADS)

    Oehler, Dorothy Z.; Cady, Sherry L.

    2014-08-01

    The past decade has seen an explosion of new technologies for assessment of biogenicity and syngeneity of carbonaceous material within sedimentary rocks. Advances have been made in techniques for analysis of in situ organic matter as well as for extracted bulk samples of soluble and insoluble (kerogen) organic fractions. The in situ techniques allow analysis of micrometer-to-sub-micrometer-scale organic residues within their host rocks and include Raman and fluorescence spectroscopy/imagery, confocal laser scanning microscopy, and forms of secondary ion/laser-based mass spectrometry, analytical transmission electron microscopy, and X-ray absorption microscopy/spectroscopy. Analyses can be made for chemical, molecular, and isotopic composition coupled with assessment of spatial relationships to surrounding minerals, veins, and fractures. The bulk analyses include improved methods for minimizing contamination and recognizing syngenetic constituents of soluble organic fractions as well as enhanced spectroscopic and pyrolytic techniques for unlocking syngenetic molecular signatures in kerogen. Together, these technologies provide vital tools for the study of some of the oldest and most problematic carbonaceous residues and for advancing our understanding of the earliest stages of biological evolution on Earth and the search for evidence of life beyond Earth. We discuss each of these new technologies, emphasizing their advantages and disadvantages, applications, and likely future directions.

  12. The scale invariant generator technique for quantifying anisotropic scale invariance

    NASA Astrophysics Data System (ADS)

    Lewis, G. M.; Lovejoy, S.; Schertzer, D.; Pecknold, S.

    1999-11-01

    Scale invariance is rapidly becoming a new paradigm for geophysics. However, little attention has been paid to the anisotropy that is invariably present in geophysical fields in the form of differential stratification and rotation, texture and morphology. In order to account for scaling anisotropy, the formalism of generalized scale invariance (GSI) was developed. Until now there has existed only a single fairly ad hoc GSI analysis technique valid for studying differential rotation. In this paper, we use a two-dimensional representation of the linear approximation to generalized scale invariance, to obtain a much improved technique for quantifying anisotropic scale invariance called the scale invariant generator technique (SIG). The accuracy of the technique is tested using anisotropic multifractal simulations and error estimates are provided for the geophysically relevant range of parameters. It is found that the technique yields reasonable estimates for simulations with a diversity of anisotropic and statistical characteristics. The scale invariant generator technique can profitably be applied to the scale invariant study of vertical/horizontal and space/time cross-sections of geophysical fields as well as to the study of the texture/morphology of fields.

  13. Comparison of remote sensing image processing techniques to identify tornado damage areas from Landsat TM data

    USGS Publications Warehouse

    Myint, S.W.; Yuan, M.; Cerveny, R.S.; Giri, C.P.

    2008-01-01

    Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event, in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image against commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. © 2008 by MDPI.
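
    A minimal sketch of the direct change detection step described above — difference bands plus principal components of the stacked before/after bands — assuming two co-registered multiband arrays (the classification and accuracy-assessment stages are omitted):

```python
import numpy as np
from sklearn.decomposition import PCA

def change_features(before, after, n_components=3):
    """Direct change detection features from co-registered image pairs.

    before, after : arrays of shape (bands, rows, cols).
    Returns per-pixel difference bands and the leading principal components
    of the stacked before/after bands, usable as classifier inputs."""
    diff = after.astype(float) - before.astype(float)          # difference bands
    stacked = np.concatenate([before, after], axis=0)          # 2*bands layers
    pixels = stacked.reshape(stacked.shape[0], -1).T           # (n_pixels, 2*bands)
    pcs = PCA(n_components=n_components).fit_transform(pixels)
    pcs = pcs.T.reshape(n_components, *before.shape[1:])       # back to images
    return diff, pcs

# Toy example: 6-band "TM-like" scenes, 100x100 pixels
rng = np.random.default_rng(1)
pre = rng.normal(size=(6, 100, 100))
post = pre.copy()
post[:, 40:60, 40:60] += 2.0          # simulated damage track
diff, pcs = change_features(pre, post)
print(diff.shape, pcs.shape)          # (6, 100, 100) (3, 100, 100)
```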

  14. Comparison of Remote Sensing Image Processing Techniques to Identify Tornado Damage Areas from Landsat TM Data

    PubMed Central

    Myint, Soe W.; Yuan, May; Cerveny, Randall S.; Giri, Chandra P.

    2008-01-01

    Remote sensing techniques have been shown to be effective for large-scale damage surveys after a hazardous event, in both near real-time and post-event analyses. The paper aims to compare the accuracy of common image processing techniques for detecting tornado damage tracks from Landsat TM data. We employed the direct change detection approach, using two sets of images acquired before and after the tornado event to produce principal component composite images and a set of image difference bands. Techniques in the comparison include supervised classification, unsupervised classification, and an object-oriented classification approach with a nearest neighbor classifier. Accuracy assessment is based on the Kappa coefficient calculated from error matrices which cross-tabulate correctly identified cells on the TM image against commission and omission errors in the result. Overall, the object-oriented approach exhibits the highest degree of accuracy in tornado damage detection. PCA and image differencing methods show comparable outcomes. While selected PCs can improve detection accuracy by 5 to 10%, the object-oriented approach performs significantly better, with 15-20% higher accuracy than the other two techniques. PMID:27879757

  15. Reduction of odours in pilot-scale landfill biocovers.

    PubMed

    Capanema, M A; Cabana, H; Cabral, A R

    2014-04-01

    Unpleasant odours generated from waste management facilities represent an environmental and societal concern. This multi-year study documented odour and total reduced sulfur (TRS) abatement in four experimental landfill biocovers installed on the final cover of the Saint-Nicéphore landfill (Canada). Performance was evaluated based on the reduction in odour and TRS concentrations between the raw biogas collected from a dedicated well and the emitted gases at the surface. Odour analyses were carried out by the sensorial technique of olfactometry, whereas TRS analyses followed the pulse fluorescence technique. The large difference of 2-5 orders of magnitude between raw biogas (average odour concentration = 2,100,000 OU m⁻³) and emitted gases resulted in odour removal efficiencies of close to 100% for all observations. With respect to TRS concentrations, abatement efficiencies were all greater than 95%, with values averaging 21,000 ppb SO2 eq. in the raw biogas. The influence of water infiltration on odour concentrations was documented and showed that lower odour values were obtained when the 48-h accumulated precipitation prior to sampling was higher. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Figuration and detection of single molecules

    NASA Astrophysics Data System (ADS)

    Nevels, R.; Welch, G. R.; Cremer, P. S.; Hemmer, P.; Phillips, T.; Scully, S.; Sokolov, A. V.; Svidzinsky, A. A.; Xia, H.; Zheltikov, A.; Scully, M. O.

    2012-08-01

    Recent advances in the description of atoms and molecules based on dimensional scaling analysis, developed by Dudley Herschbach and co-workers, have provided new insights into the visualization of molecular structure and chemical bonding. Prof. Herschbach is also a giant in the field of single molecule scattering. We here report on the engineering of molecular detectors. Such systems have a wide range of applications, from medical diagnostics to the monitoring of chemical, biological and environmental hazards. We discuss ways to identify preselected molecules, in particular mycotoxin contaminants, using coherent laser spectroscopy. Mycotoxin contaminants, e.g. aflatoxin B1, which is present in corn and peanuts, are usually analysed by time-consuming microscopic, chemical and biological assays. We present a new approach that derives from recent experiments in which molecules are prepared by one (or more) femtosecond laser(s) and probed by another set. We call this technique FAST CARS (femtosecond adaptive spectroscopic technique for coherent anti-Stokes Raman spectroscopy). We propose and analyse ways in which FAST CARS can be used to identify preselected molecules, e.g. aflatoxin, rapidly and economically.

  17. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    NASA Astrophysics Data System (ADS)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.
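
    The abstract does not detail the runoff formulation; one common grid-based choice, shown here purely as an illustration of applying a per-cell runoff model to GIS-derived land-use parameters, is the SCS curve-number method:

```python
import numpy as np

def scs_runoff(rain_mm, cn):
    """SCS curve-number runoff depth (mm) per grid cell.

    rain_mm : rainfall depth raster (mm)
    cn      : curve-number raster derived from land-use/soil GIS layers
              (high CN ~ impervious, low CN ~ pervious)."""
    s = 25400.0 / cn - 254.0            # potential retention (mm)
    ia = 0.2 * s                        # initial abstraction
    return np.where(rain_mm > ia,
                    (rain_mm - ia) ** 2 / (rain_mm + 0.8 * s),
                    0.0)

# Toy raster: mostly impervious city blocks (CN=98) with a green patch (CN=70)
cn = np.full((50, 50), 98.0)
cn[20:30, 20:30] = 70.0                 # green infrastructure cells
storm = np.full((50, 50), 40.0)         # 40 mm design storm
q = scs_runoff(storm, cn)
print(f"impervious runoff: {q[0, 0]:.1f} mm, green cell: {q[25, 25]:.1f} mm")
```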

  18. Influence of root canal instrumentation and obturation techniques on intra-operative pain during endodontic therapy

    PubMed Central

    Martín-González, Jenifer; Echevarría-Pérez, Marta; Sánchez-Domínguez, Benito; Tarilonte-Delgado, Maria L.; Castellanos-Cosano, Lizett; López-Frías, Francisco J.

    2012-01-01

    Objective: To analyse the influence of root canal instrumentation and obturation techniques on intra-operative pain experienced by patients during endodontic therapy. Method and Materials: A descriptive cross-sectional study was carried out in Ponferrada and Sevilla, Spain, including 80 randomly recruited patients (46 men and 34 women) aged from 10 to 74 years. Patient gender and age, affected tooth, pulpal diagnosis, periapical status, previous NSAID or antibiotic (AB) treatment, and root canal instrumentation and obturation techniques were recorded. After root canal treatment (RCT), patients completed a 10-cm visual analogue scale (VAS) that ranked the level of pain. Results were analysed statistically using the Chi-square and ANOVA tests and logistic regression analysis. Results: The mean pain level during root canal treatment was 2.9 ± 3.0 (median = 2) on a VAS between 0 and 10. Forty percent of patients experienced no pain. Gender, age, arch, previous NSAID or AB treatment and anaesthetic type did not significantly influence the pain level (p > 0.05). Pain during root canal treatment was significantly greater in molar teeth (OR = 10.1; 95% C.I. = 1.6 - 63.5; p = 0.013). Root canal instrumentation and obturation techniques did not significantly affect patients' pain during root canal treatment (p > 0.05). Conclusion: Patients feel more pain when RCT is carried out on molar teeth. Root canal instrumentation and obturation techniques do not significantly affect patients' pain during RCT. Key words: Anaesthesia, endodontic pain, pulpitis, root canal instrumentation, root canal obturation, rotary files. PMID:22549694

  19. Synergies between geomorphic hazard and risk and sediment cascade research fields: exploiting geomorphic processes' susceptibility analyses to derive potential sediment sources in the Olteț river catchment, southern Romania

    NASA Astrophysics Data System (ADS)

    Jurchescu, Marta-Cristina

    2015-04-01

    Identifying sediment sources and sediment availability represents a major problem and one of the first concerns in the field of sediment cascade research. This paper addresses the on-site effects associated with sediment transfer, investigating the degree to which studies pertaining to the field of geomorphic hazard and risk research can be exploited in sediment budget estimations. More precisely, the paper investigates whether results obtained in assessing susceptibility to various geomorphic processes (landslides, soil erosion, gully erosion) can be transferred to the study of sediment sources within a basin. The study area is a medium-sized catchment (>2400 km2) in southern Romania encompassing four different geomorphic units (mountains, hills, piedmont and plain). The region is highly affected by a wide range of geomorphic processes which supply sediments to the drainage network. The presence of a reservoir at the river outlet emphasizes the importance of estimating sediment budgets. The susceptibility analyses are conducted separately for each type of process in a top-down framework, i.e. at two different scales, using scale-adapted methods and validation techniques in each case, as is widely recognized in the hazard and risk research literature. The analyses start at a regional scale, covering the entire catchment, using readily available data on conditioning factors. In a second step, the susceptibility analyses are carried out at a medium scale for selected hotspot compartments of the catchment. In order to appraise the extent to which susceptibility results are relevant in interpreting sediment sources at the catchment scale, scale-induced differences are analysed for each process. Based on the amount of uncertainty revealed by each regional-scale analysis in comparison to the medium-scale ones, decisions are made on whether the former are acceptable for the aim of identifying potential sediment source areas or whether they should be refined using more precise methods and input data. The three final basin-wide susceptibility maps are eventually converted, on a threshold basis, to maps showing the potential areas of sediment production by landslides, soil erosion and gully erosion, respectively. These are then combined into one single map of potential sediment sources. The susceptibility assessments indicate that the basin compartments most prone to landslides and soil erosion correspond to the Subcarpathian hills, while the one most threatened by gully erosion corresponds to the piedmont relief. The final map of potential sediment sources shows that approximately 34% of the study catchment is occupied by areas potentially generating sediment through landslides and gully erosion, extending over most of the high piedmont and Subcarpathian hills. The results prove that there is an important link between the two research fields, i.e. geomorphic hazard and risk and sediment cascade research, allowing the transfer of knowledge from geomorphic process susceptibility analyses to the estimation of potential sediment sources within catchments. The synergy between the two fields raises further challenges to be tackled in future work (e.g. how to derive sediment transfer rates from quantitative hazard estimates).

  20. Comparison of digital and conventional impression techniques: evaluation of patients' perception, treatment comfort, effectiveness and clinical outcomes.

    PubMed

    Yuzbasioglu, Emir; Kurt, Hanefi; Turunc, Rana; Bilir, Halenur

    2014-01-30

    The purpose of this study was to compare two impression techniques from the perspective of patient preferences and treatment comfort. Twenty-four subjects (12 male, 12 female) who had no previous experience with either conventional or digital impressions participated in this study. Conventional impressions of the maxillary and mandibular dental arches were taken with a polyether impression material (Impregum, 3M ESPE), and bite registrations were made with a polysiloxane bite registration material (Futar D, Kettenbach). Two weeks later, digital impressions and bite scans were performed using an intra-oral scanner (CEREC Omnicam, Sirona). Immediately after the impressions were made, the subjects' attitudes, preferences and perceptions of the impression techniques were evaluated using a standardized questionnaire. The perceived source of stress was evaluated using the State-Trait Anxiety Scale. Processing steps of the impression techniques (tray selection, working time, etc.) were recorded in seconds. Statistical analyses were performed with the Wilcoxon rank test, and p < 0.05 was considered significant. There were significant differences among the groups (p < 0.05) in terms of total working time and processing steps. Patients stated that digital impressions were more comfortable than conventional techniques. Digital impressions resulted in a more time-efficient technique than conventional impressions. Patients preferred the digital impression technique over conventional techniques.

  1. Impact of Variable-Resolution Meshes on Regional Climate Simulations

    NASA Astrophysics Data System (ADS)

    Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.

    2014-12-01

    The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitation, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using ERA-Interim re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable-resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.

  2. Impact of Variable-Resolution Meshes on Regional Climate Simulations

    NASA Astrophysics Data System (ADS)

    Fowler, L. D.; Skamarock, W. C.; Bruyere, C. L.

    2013-12-01

    The Model for Prediction Across Scales (MPAS) is currently being used for seasonal-scale simulations on globally-uniform and regionally-refined meshes. Our ongoing research aims at analyzing simulations of tropical convective activity and tropical cyclone development during one hurricane season over the North Atlantic Ocean, contrasting statistics obtained with a variable-resolution mesh against those obtained with a quasi-uniform mesh. Analyses focus on the spatial distribution, frequency, and intensity of convective and grid-scale precipitation, and their relative contributions to the total precipitation as a function of the horizontal scale. Multi-month simulations initialized on May 1st 2005 using NCEP/NCAR re-analyses indicate that MPAS performs satisfactorily as a regional climate model for different combinations of horizontal resolutions and transitions between the coarse and refined meshes. Results highlight seamless transitions for convection, cloud microphysics, radiation, and land-surface processes between the quasi-uniform and locally-refined meshes, despite the fact that the physics parameterizations were not developed for variable-resolution meshes. Our goal of analyzing the performance of MPAS is twofold. First, we want to establish that MPAS can be successfully used as a regional climate model, bypassing the need for nesting and nudging techniques at the edges of the computational domain as done in traditional regional climate modeling. Second, we want to assess the performance of our convective and cloud microphysics parameterizations as the horizontal resolution varies between the lower-resolution quasi-uniform and higher-resolution locally-refined areas of the global domain.

  3. Dynamic fatigue of a machinable glass-ceramic

    NASA Technical Reports Server (NTRS)

    Smyth, K. K.; Magida, M. B.

    1982-01-01

    To assess the stress corrosion susceptibility of a machinable glass-ceramic, its dynamic fatigue behavior was investigated by measuring its strength as a function of stress rate. Fracture mechanics techniques were used to analyse the results for the purpose of making lifetime predictions for components of this material. It was concluded that this material has only moderate resistance to stress corrosion in ambient conditions. The effects of specimen size on strength were also assessed for the material used in this study; it was concluded that the Weibull edge-flaw scaling law adequately describes the observed strength-size relationship.
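
    The standard dynamic-fatigue reduction — fitting strength against stressing rate on log-log axes, whose slope 1/(n+1) yields the stress corrosion parameter n used in lifetime prediction — can be sketched as follows (the numbers are illustrative, not the paper's data):

```python
import numpy as np

# Dynamic fatigue: strength grows as a power of stressing rate,
#     sigma_f = A * (stress_rate)**(1/(n+1)),
# so a log-log fit of strength against rate yields the slow-crack-growth
# (stress corrosion) parameter n from the slope.
rate = np.array([0.1, 1.0, 10.0, 100.0])          # MPa/s (illustrative)
strength = np.array([98.0, 105.0, 113.0, 121.0])  # MPa   (illustrative)

slope, intercept = np.polyfit(np.log10(rate), np.log10(strength), 1)
n = 1.0 / slope - 1.0
print(f"slope = {slope:.4f}  ->  n = {n:.1f}")
# Small n (say < 20) indicates high susceptibility to stress corrosion;
# n feeds directly into lifetime predictions for components.
```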

  4. Temperature-dependent daily variability of precipitable water in special sensor microwave/imager observations

    NASA Technical Reports Server (NTRS)

    Gutowski, William J.; Lindemulder, Elizabeth A.; Jovaag, Kari

    1995-01-01

    We use retrievals of atmospheric precipitable water from satellite microwave observations and analyses of near-surface temperature to examine the relationship between these two fields on daily and longer time scales. The retrieval technique producing the data used here is most effective over the open ocean, so the analysis focuses on the southern hemisphere's extratropics, which have an extensive ocean surface. For both the total and the eddy precipitable water fields, there is a close correspondence between local variations in the precipitable water and near-surface temperature. The correspondence appears particularly strong for synoptic and planetary scale transient eddies. More specifically, the results support a typical modeling assumption that transient eddy moisture fields are proportional to transient eddy temperature fields under the assumption of constant relative humidity.

  5. Molecular inversion probe assay.

    PubMed

    Absalan, Farnaz; Ronaghi, Mostafa

    2007-01-01

    We have described molecular inversion probe technologies for large-scale genetic analyses. This technique provides a comprehensive and powerful tool for the analysis of genetic variation and enables affordable, large-scale studies that will help uncover the genetic basis of complex disease and explain the individual variation in response to therapeutics. Major applications of the molecular inversion probes (MIP) technologies include targeted genotyping from focused regions to whole-genome studies, and allele quantification of genomic rearrangements. The MIP technology (used in the HapMap project) provides an efficient, scalable, and affordable way to score polymorphisms in case/control populations for genetic studies. The MIP technology provides the highest commercially available multiplexing levels and assay conversion rates for targeted genotyping. This enables more informative, genome-wide studies with either the functional (direct detection) approach or the indirect detection approach.

  6. NM-Scale Anatomy of an Entire Stardust Carrot Track

    NASA Technical Reports Server (NTRS)

    Nakamura-Messenger, K.; Keller, L. P.; Clemett, S. J.; Messenger, S.

    2009-01-01

    Comet Wild-2 samples collected by NASA's Stardust mission are extremely complex and heterogeneous, and have experienced wide ranges of alteration during the capture process. There are two major types of track morphologies, "carrot" and "bulbous," that reflect different structural/compositional properties of the impactors. Carrot-type tracks are typically produced by compact or single mineral grains which survive essentially intact as a single large terminal particle. Bulbous tracks are likely produced by fine-grained or organic-rich impactors [1]. Owing to the challenging nature and especially high value of Stardust samples, we have invested considerable effort in developing both sample preparation and analytical techniques tailored for Stardust sample analyses. Our report focuses on our systematic disassembly and coordinated analysis of Stardust carrot track #112 from the mm to nm scale.

  7. Connecting the large- and the small-scale magnetic fields of solar-like stars

    NASA Astrophysics Data System (ADS)

    Lehmann, L. T.; Jardine, M. M.; Mackay, D. H.; Vidotto, A. A.

    2018-05-01

    A key question in understanding the observed magnetic field topologies of cool stars is the link between the small- and the large-scale magnetic field and the influence of the stellar parameters on the magnetic field topology. We examine various simulated stars to connect the small-scale with the observable large-scale field. The highly resolved 3D simulations we used couple a flux transport model with a non-potential coronal model using a magnetofrictional technique. The surface magnetic field of these simulations is decomposed into spherical harmonics which enables us to analyse the magnetic field topologies on a wide range of length scales and to filter the large-scale magnetic field for a direct comparison with the observations. We show that the large-scale field of the self-consistent simulations fits the observed solar-like stars and is mainly set up by the global dipolar field and the large-scale properties of the flux pattern, e.g. the averaged latitudinal position of the emerging small-scale field and its global polarity pattern. The stellar parameters flux emergence rate, differential rotation and meridional flow affect the large-scale magnetic field topology. An increased flux emergence rate increases the magnetic flux in all field components and an increased differential rotation increases the toroidal field fraction by decreasing the poloidal field. The meridional flow affects the distribution of the magnetic energy across the spherical harmonic modes.
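
    The decompose-and-filter step can be sketched with SciPy's spherical harmonics: project a gridded surface field onto modes with l ≤ l_max and reconstruct only the large-scale part. A minimal sketch, not the simulations' actual decomposition machinery; it uses scipy.special.sph_harm (an older API, superseded by sph_harm_y in recent SciPy) and simple grid quadrature:

```python
import numpy as np
from scipy.special import sph_harm

def lowpass_sphere(field, theta, phi, l_max=5):
    """Project a field sampled on a (phi x theta) grid onto spherical
    harmonics with l <= l_max and reconstruct the large-scale part.

    theta : azimuth in [0, 2*pi), phi : colatitude in (0, pi),
    field : shape (len(phi), len(theta))."""
    tt, pp = np.meshgrid(theta, phi)             # grids shaped like field
    weight = np.sin(pp) * (theta[1] - theta[0]) * (phi[1] - phi[0])
    recon = np.zeros_like(field, dtype=complex)
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            y = sph_harm(m, l, tt, pp)           # scipy order: (m, l, azim, colat)
            c = np.sum(field * np.conj(y) * weight)   # quadrature projection
            recon += c * y
    return recon.real

# Toy "surface field": a global dipole plus small-scale noise
theta = np.linspace(0, 2 * np.pi, 128, endpoint=False)
phi = np.linspace(1e-3, np.pi - 1e-3, 64)
tt, pp = np.meshgrid(theta, phi)
rng = np.random.default_rng(3)
b_surf = np.cos(pp) + 0.5 * rng.standard_normal(pp.shape)
print(lowpass_sphere(b_surf, theta, phi, l_max=2).shape)   # (64, 128)
```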

  8. Determination of Lead in Blood by Atomic Absorption Spectrophotometry1

    PubMed Central

    Selander, Stig; Cramér, Kim

    1968-01-01

    Lead in blood was determined by atomic absorption spectrophotometry, using a wet ashing procedure and a procedure in which the proteins were precipitated with trichloroacetic acid. In both methods the lead was extracted into isobutylmethylketone before measurement, using ammonium pyrrolidine dithiocarbamate as chelator. The simpler precipitation procedure was shown to give results identical with those obtained with the ashing technique. In addition, blood specimens were examined by the precipitation method and by spectral analysis, which method includes wet ashing of the samples, with good agreement. All analyses were done on blood samples from `normal' persons or from lead-exposed workers, and no additions of inorganic lead were made. The relatively simple protein precipitation technique gave accurate results and is suitable for the large-scale control of lead-exposed workers. PMID:5663425

  9. Variation objective analyses for cyclone studies

    NASA Technical Reports Server (NTRS)

    Achtemeier, G. L.; Kidder, S. Q.; Ochs, H. T.

    1985-01-01

    The objectives were to: (1) develop an objective analysis technique that will maximize the information content of data available from diverse sources, with particular emphasis on the incorporation of observations from satellites with those from more traditional immersion techniques; and (2) develop a diagnosis of the state of the synoptic scale atmosphere on a much finer scale over a much broader region than is presently possible, to permit studies of the interactions and energy transfers between global, synoptic and regional scale atmospheric processes. The variational objective analysis model consists of the two horizontal momentum equations, the hydrostatic equation, and the integrated continuity equation for a dry hydrostatic atmosphere. Preliminary tests of the model with the SESAME I data set are underway for 12 GMT 10 April 1979. At this stage the purpose of the analysis is not the diagnosis of atmospheric structures but rather the validation of the model. Model runs with rawinsonde data and with the precision modulus weights set to force most of the adjustment of the wind field to the mass field have produced 90 to 95 percent reductions in the imbalance of the initial data after only four cycles through the Euler-Lagrange equations. Sensitivity tests for linear stability of the 11 Euler-Lagrange equations that make up the VASP Model 1 indicate that there will be a lower limit to the scales of motion that can be resolved by this method. Linear stability criteria are violated where there is large horizontal wind shear near the upper tropospheric jet.

  10. Spectral analysis of structure functions and their scaling exponents in forced isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Linkmann, Moritz; McComb, W. David; Yoffe, Samuel; Berera, Arjun

    2014-11-01

    The pseudospectral method, in conjunction with a new technique for obtaining scaling exponents ζn from the structure functions Sn(r), is presented as an alternative to the extended self-similarity (ESS) method and the use of generalized structure functions. We propose plotting the ratio |Sn(r)/S3(r)| against the separation r, in accordance with a standard technique for analysing experimental data. This method differs from the ESS technique, which plots the generalized structure functions Gn(r) against G3(r), where G3(r) ~ r. Using our method for the particular case of S2(r) we obtain the new result that the exponent ζ2 decreases as the Taylor-Reynolds number increases, with ζ2 → 0.679 ± 0.013 as Rλ → ∞. This supports the idea of finite-viscosity corrections to the K41 prediction for S2, and is the opposite of the result obtained by ESS. The pseudospectral method permits the forcing to be taken into account exactly through the calculation of the energy input in real space from the work spectrum of the stirring forces. The combination of the viscous and the forcing corrections as calculated by the pseudospectral method is shown to account for the deviation of S3 from Kolmogorov's "four-fifths" law at all scales. This work has made use of the resources provided by the UK supercomputing service HECToR, made available through the Edinburgh Compute and Data Facility (ECDF). A. B. is supported by STFC; S. R. Y. and M. F. L. are funded by EPSRC.
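
    A minimal sketch of the ratio method on a synthetic record: compute structure functions, regress log(S2/S3) on log r, and recover ζ2 assuming ζ3 = 1. Absolute increments are used here only so the toy signal has a well-defined third-order moment; with turbulence data the signed S3 fixed by the four-fifths law is used:

```python
import numpy as np

def structure_function(u, n, seps):
    """S_n(r) = <|u(x+r) - u(x)|**n> for a 1-D record (periodic wrap).
    With real turbulence data the signed third-order function is used,
    since the four-fifths law fixes S_3(r) ~ r in the inertial range."""
    return np.array([np.mean(np.abs(np.roll(u, -r) - u) ** n) for r in seps])

rng = np.random.default_rng(7)
u = np.cumsum(rng.standard_normal(2 ** 16))   # rough stand-in signal
u -= u.mean()

seps = np.unique(np.logspace(0.5, 3, 25).astype(int))
ratio = structure_function(u, 2, seps) / structure_function(u, 3, seps)

# Ratio method: S_2(r)/S_3(r) ~ r**(zeta_2 - zeta_3); taking zeta_3 = 1,
# the log-log slope of the ratio gives zeta_2 - 1.
slope = np.polyfit(np.log(seps), np.log(ratio), 1)[0]
print(f"zeta_2 estimate = {slope + 1.0:.3f}")  # ~0.5 for this Brownian-like toy
```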

  11. Small-scale multi-axial hybrid simulation of a shear-critical reinforced concrete frame

    NASA Astrophysics Data System (ADS)

    Sadeghian, Vahid; Kwon, Oh-Sung; Vecchio, Frank

    2017-10-01

    This study presents a numerical multi-scale simulation framework which is extended to accommodate hybrid simulation (numerical-experimental integration). The framework is enhanced with a standardized data exchange format and connected to a generalized controller interface program which facilitates communication with various types of laboratory equipment and testing configurations. A small-scale experimental program was conducted using six degree-of-freedom hydraulic testing equipment to verify the proposed framework and provide additional data for small-scale testing of shear-critical reinforced concrete structures. The specimens were tested in a multi-axial hybrid simulation manner under a reversed cyclic loading condition simulating earthquake forces. The physical models were 1/3.23-scale representations of a beam and two columns. A mixed-type modelling technique was employed to analyze the remainder of the structures. The hybrid simulation results were compared against those obtained from a large-scale test and finite element analyses. The study found that if precautions are taken in preparing model materials and if the shear-related mechanisms are accurately considered in the numerical model, small-scale hybrid simulations can adequately simulate the behaviour of shear-critical structures. Although the findings of the study are promising, additional test data are required to draw general conclusions.

  12. Scaling Techniques for Combustion Device Random Vibration Predictions

    NASA Technical Reports Server (NTRS)

    Kenny, R. J.; Ferebee, R. C.; Duvall, L. D.

    2016-01-01

    This work compares scaling techniques that can be used for the prediction of combustion device component random vibration levels, with excitation due to the internal combustion dynamics. Acceleration and unsteady dynamic pressure data from multiple component test programs are compared and normalized per the two scaling approaches reviewed. The first technique is an existing approach developed by Barrett, and the second technique is an updated approach new to this work. Results from utilizing both techniques are presented, and recommendations about future component random vibration prediction approaches are given.

  13. A cross-national study on the multidimensional characteristics of the five-item psychological demands scale of the Job Content Questionnaire.

    PubMed

    Choi, BongKyoo; Kawakami, Norito; Chang, SeiJin; Koh, SangBaek; Bjorner, Jakob; Punnett, Laura; Karasek, Robert

    2008-01-01

    The five-item psychological demands scale of the Job Content Questionnaire (JCQ) has been assumed to be one-dimensional in practice. The aim was to examine whether the scale has sufficient internal consistency and external validity to be treated as a single scale, using the cross-national JCQ datasets from the United States, Korea, and Japan. Methods included exploratory factor analyses with 22 JCQ items, confirmatory factor analyses with the five psychological demands items, and correlation analyses with mental health indexes. Generally, exploratory factor analyses displayed the predicted demand/control/support structure with three and four factors extracted. However, at more detailed levels of exploratory and confirmatory factor analyses, the demands scale showed clear evidence of a multi-factor structure. The correlations of items and subscales of the demands scale with mental health indexes were similar to those of the full scale in the Korean and Japanese datasets, but not in the U.S. data. In 4 out of 16 sub-samples of the U.S. data, several significant correlations of the components of the demands scale with job dissatisfaction and life dissatisfaction were obscured by the full scale. The multidimensionality of the psychological demands scale should be considered in psychometric analysis and interpretation, occupational epidemiologic studies, and future scale extension.

  14. Multiple Hadean crystallization and reworking events preserved in individual Jack Hills zircon grains

    NASA Astrophysics Data System (ADS)

    Bellucci, Jeremy; Nemchin, Alexander; Whitehouse, Martin; Snape, Joshua

    2017-04-01

    Five Hadean (>3.9 Ga) zircon grains from the Jack Hills metasedimentary belt have been investigated by an improved secondary ion mass spectrometry scanning ion image technique. This technique has the ability to obtain accurate and precise full U-Pb systematics on a scale <5 μm, as well as to document the spatial distribution of U, Th and Pb. All five of the grains investigated here have complex cathodoluminescence patterns that correlate with different U, Th, and Pb concentration domains. The age determinations for these different chemical zones indicate multiple reworking events that are preserved in each grain and have affected the primary crystallized zircon on a scale of <10 μm, smaller than traditional ion microprobe spot analyses. These new scanning ion images and age determinations suggest that roughly half, if not all, previous analyses, including those of trace elements and various isotope systems, could have intersected several domains of unfractured zircon, thus making the interpretation of any trace element, Hf, or O isotopic data tenuous. Lastly, all of the grains analyzed here preserve at least two distinguishable 207Pb/206Pb ages. These ages are preserved in core-rim and/or complex internal textural relationships. These secondary events took place at ca. 4.3, 4.2, 4.1, 4.0, and 3.7 Ga, potentially indicating a sequence of magmatic and/or metamorphic events that recycled some volume of early crust several times during the Hadean and into the Paleo- to Mesoarchean, with an apparent periodicity of ca. 100 Ma.

  15. Screening by imaging: scaling up single-DNA-molecule analysis with a novel parabolic VA-TIRF reflector and noise-reduction techniques.

    PubMed

    van 't Hoff, Marcel; Reuter, Marcel; Dryden, David T F; Oheim, Martin

    2009-09-21

    Bacteriophage lambda-DNA molecules are frequently used as a scaffold to characterize the action of single proteins unwinding, translocating, digesting or repairing DNA. However, scaling up such single-DNA-molecule experiments under identical conditions to attain statistically relevant sample sizes remains challenging. Additionally, the movies obtained are frequently noisy and difficult to analyse with any precision. We address these two problems here using, firstly, a novel variable-angle total internal reflection fluorescence (VA-TIRF) reflector composed of a minimal set of optical reflective elements, and secondly, singular value decomposition (SVD) to improve the signal-to-noise ratio prior to analysing time-lapse image stacks. As an example, we visualize under identical optical conditions hundreds of surface-tethered single lambda-DNA molecules, stained with the intercalating dye YOYO-1 iodide, and stretched out in a microcapillary flow. Another novelty of our approach is that we arrange on a mechanically driven stage several capillaries containing saline, calibration buffer and lambda-DNA, respectively, thus extending the approach to high-content, high-throughput screening of single molecules. Our length measurements of individual DNA molecules from noise-reduced kymograph images using SVD display a 6-fold enhanced precision compared to raw-data analysis, reaching approximately 1 kbp resolution. Combining these two methods, our approach provides a straightforward yet powerful way of collecting statistically relevant amounts of data in a semi-automated manner. We believe that our conceptually simple technique should be of interest for a broader range of single-molecule studies, well beyond the specific example of lambda-DNA shown here.
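
    The SVD noise-reduction step generalizes readily: flatten each frame, truncate the singular value spectrum, and reconstruct. A minimal sketch assuming a NumPy image stack (the paper's pipeline details may differ):

```python
import numpy as np

def svd_denoise_stack(stack, rank=5):
    """Low-rank denoising of a time-lapse image stack via truncated SVD.

    stack : array (n_frames, height, width). Each frame is flattened so the
    movie becomes a (time x pixels) matrix; keeping only the leading singular
    components retains temporally correlated structure (the molecules) and
    suppresses uncorrelated shot noise before kymograph analysis."""
    t, h, w = stack.shape
    m = stack.reshape(t, h * w).astype(float)
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    m_lr = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return m_lr.reshape(t, h, w)

# Toy movie: a static bright stripe plus heavy pixel noise
rng = np.random.default_rng(5)
truth = np.zeros((100, 32, 32))
truth[:, 10:12, :] = 1.0
noisy = truth + 0.8 * rng.standard_normal(truth.shape)
clean = svd_denoise_stack(noisy, rank=1)
print(f"noise std before: {np.std(noisy - truth):.2f}, "
      f"after: {np.std(clean - truth):.2f}")
```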

  16. Regime Behavior in Paleo-Reconstructed Streamflow: Attributions to Atmospheric Dynamics, Synoptic Circulation and Large-Scale Climate Teleconnection Patterns

    NASA Astrophysics Data System (ADS)

    Ravindranath, A.; Devineni, N.

    2017-12-01

    Studies have shown that streamflow behavior and dynamics have a significant link with climate and climate variability. Patterns of persistent regime behavior from extended streamflow records in many watersheds justify investigating large-scale climate mechanisms as potential drivers of hydrologic regime behavior and streamflow variability. Understanding such streamflow-climate relationships is crucial to forecasting/simulation systems and the planning and management of water resources. In this study, hidden Markov models are used with reconstructed streamflow to detect regime-like behaviors - the hidden states - and state transition phenomena. Individual extreme events and their spatial variability across the basin are then verified with the identified states. Wavelet analysis is performed to examine the signals over time in the streamflow records. Joint analyses of the climatic data in the 20th century and the identified states are undertaken to better understand the hydroclimatic connections within the basin as well as important teleconnections that influence water supply. Compositing techniques are used to identify atmospheric circulation patterns associated with identified states of streamflow. The grouping of such synoptic patterns and their frequency are then examined. Sliding time-window correlation analysis and cross-wavelet spectral analysis are performed to establish the synchronicity of basin flows to the identified synoptic and teleconnection patterns. The Missouri River Basin (MRB) is examined in this study, both as a means of better understanding the synoptic climate controls in this important watershed and as a case study for the techniques developed here. Initial wavelet analyses of reconstructed streamflow at major gauges in the MRB show multidecadal cycles in regime behavior.
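
    Regime detection of this kind can be sketched with a two-state Gaussian hidden Markov model. The example below uses the hmmlearn package (an assumption; the study's HMM implementation is not specified) on a synthetic flow series:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed available (pip install hmmlearn)

rng = np.random.default_rng(11)
# Synthetic "reconstructed" annual flows with alternating wet/dry regimes
flows = np.concatenate([rng.normal(900, 80, 40),   # wet regime
                        rng.normal(600, 60, 30),   # dry regime
                        rng.normal(900, 80, 30)])  # wet again
X = flows.reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="full",
                    n_iter=200, random_state=0).fit(X)
states = model.predict(X)                 # decoded regime sequence
print("state means:", model.means_.ravel())
print("regime transitions at years:", np.where(np.diff(states) != 0)[0] + 1)
```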

  17. A Study on Mutil-Scale Background Error Covariances in 3D-Var Data Assimilation

    NASA Astrophysics Data System (ADS)

    Zhang, Xubin; Tan, Zhe-Min

    2017-04-01

    The construction of background error covariances is a key component of three-dimensional variational data assimilation. There are background errors at different scales, and interactions among them, in numerical weather prediction. However, the influence of these errors and their interactions cannot be represented in the background error covariance statistics when estimated by the leading methods. It is therefore necessary to construct background error covariances that reflect multi-scale interactions among errors. Using the NMC method, this article first estimates the background error covariances at given model-resolution scales. The information of errors whose scales are larger and smaller than the given ones is then introduced, respectively, using different nesting techniques, to estimate the corresponding covariances. Comparison of the three background error covariance statistics reveals that the background error variances increase, particularly at large scales and higher levels, when the information of larger-scale errors is introduced through the lateral boundary condition provided by a lower-resolution model. On the other hand, the variances decrease at medium scales at the higher levels, while increasing slightly at lower levels in the nested domain, especially at medium and small scales, when the information of smaller-scale errors is introduced by nesting a higher-resolution model. In addition, the introduction of information on larger- (smaller-) scale errors leads to larger (smaller) horizontal and vertical correlation scales of the background errors. Considering the multivariate correlations, the Ekman coupling increases (decreases) when the information of larger- (smaller-) scale errors is included, whereas the geostrophic coupling in the free atmosphere weakens in both situations. The three covariances obtained above are each used in a data assimilation and model forecast system, and analysis-forecast cycles over a period of one month are conducted. Comparison of both analyses and forecasts from this system shows that the variations in analysis increments as information on different scale errors is introduced are consistent with the variations in the variances and correlations of the background errors. In particular, the introduction of smaller-scale errors leads to larger-amplitude analysis increments for winds at medium scales at the heights of both the high- and low-level jets. Analysis increments for both temperature and humidity are also greater at the corresponding scales at middle and upper levels under this circumstance. These analysis increments improve the intensity of the jet-convection system, which includes jets at different levels and the coupling between them associated with latent heat release, and these changes in the analyses contribute to better forecasts of winds and temperature in the corresponding areas. When smaller-scale errors are included, analysis increments for humidity increase significantly at large scales at lower levels, moistening the southern part of the analyses. This humidification helps correct the dry bias there and ultimately improves the forecast skill for humidity. Moreover, the inclusion of larger- (smaller-) scale errors is beneficial to the forecast quality of heavy (light) precipitation at large (small) scales, owing to the amplification (diminution) of intensity and area in the precipitation forecasts, but tends to overestimate (underestimate) light (heavy) precipitation.
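
    The NMC method used above estimates the background error covariance from differences between forecasts of different lead times valid at the same time. A minimal sketch (lead times, data and the omitted overall scaling factor are illustrative assumptions):

```python
import numpy as np

def nmc_covariance(f48, f24):
    """NMC-method estimate of the background error covariance.

    f48, f24 : arrays (n_times, n_gridpoints) of 48-h and 24-h forecasts
    valid at the same times. The sample covariance of their differences
    serves as a proxy for the background error covariance (in practice an
    overall scaling factor is applied; omitted here)."""
    d = f48 - f24
    d = d - d.mean(axis=0)                  # remove the mean difference
    return d.T @ d / (d.shape[0] - 1)       # (n_grid, n_grid) covariance

# Toy example: 200 forecast pairs on a 50-point grid with smooth drift errors
rng = np.random.default_rng(2)
f24 = rng.standard_normal((200, 50))
drift = np.apply_along_axis(lambda r: np.convolve(r, np.ones(7) / 7, "same"),
                            1, rng.standard_normal((200, 50)))
f48 = f24 + drift
B = nmc_covariance(f48, f24)
print(B.shape, np.diag(B)[:3])   # variances at the first grid points
```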

  18. Biogenicity and Syngeneity of Organic Matter in Ancient Sedimentary Rocks: Recent Advances in the Search for Evidence of Past Life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oehler, Dorothy Z.; Cady, Sherry L.

    2014-12-01

    The past decade has seen an explosion of new technologies for assessment of biogenicity and syngeneity of carbonaceous material within sedimentary rocks. Advances have been made in techniques for analysis of in situ organic matter as well as for extracted bulk samples of soluble and insoluble (kerogen) organic fractions. The in situ techniques allow analysis of micrometer-to-sub-micrometer-scale organic residues within their host rocks and include Raman and fluorescence spectroscopy/imagery, confocal laser scanning microscopy, and forms of secondary ion/laser-based mass spectrometry, analytical transmission electron microscopy, and X-ray absorption microscopy/spectroscopy. Analyses can be made for chemical, molecular, and isotopic composition coupled with assessment of spatial relationships to surrounding minerals, veins, and fractures. The bulk analyses include improved methods for minimizing contamination and recognizing syngenetic constituents of soluble organic fractions as well as enhanced spectroscopic and pyrolytic techniques for unlocking syngenetic molecular signatures in kerogen. Together, these technologies provide vital tools for the study of some of the oldest and most problematic carbonaceous residues and for advancing our understanding of the earliest stages of biological evolution on Earth and the search for evidence of life beyond Earth. We discuss each of these new technologies, emphasizing their advantages and disadvantages, applications, and likely future directions.

  19. Parameter Identification and Uncertainty Analysis for Visual MODFLOW based Groundwater Flow Model in a Small River Basin, Eastern India

    NASA Astrophysics Data System (ADS)

    Jena, S.

    2015-12-01

    The overexploitation of groundwater has resulted in the abandonment of many shallow tube wells in the river basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is essential for the efficient planning and management of water resources. The main intent of this study is to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW package and successfully calibrate and validate it using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (MCMC) techniques were implemented. Results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe coefficient (NSE) and coefficient of determination (R2) were adopted as the two criteria during calibration and validation of the developed model. NSE and R2 values of the groundwater flow model for the calibration and validation periods were in the acceptable range. The MCMC technique was also able to provide more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying aquifer properties, analysing groundwater flow dynamics, and forecasting changes in groundwater levels.
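
    The two calibration criteria are simple to compute. A minimal sketch with illustrative head values (not the study's data):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations;
    1 is a perfect fit, 0 means no better than the observed mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination (squared Pearson correlation)."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

# Toy groundwater levels: observed vs simulated heads (m)
obs = np.array([12.1, 12.4, 13.0, 13.6, 13.2, 12.8, 12.5])
sim = np.array([12.0, 12.5, 12.9, 13.4, 13.3, 12.9, 12.4])
print(f"NSE = {nse(obs, sim):.3f}, R2 = {r_squared(obs, sim):.3f}")
```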

  20. Evaluation of empirical relationships between extreme rainfall and daily maximum temperature in Australia

    NASA Astrophysics Data System (ADS)

    Herath, Sujeewa Malwila; Sarukkalige, Ranjan; Nguyen, Van Thanh Van

    2018-01-01

    Understanding the relationships between extreme daily and sub-daily rainfall events and their governing factors is important in order to analyse the properties of extreme rainfall events in a changing climate. Atmospheric temperature is one of the dominant climate variables with a strong relationship to extreme rainfall events. In this study, a temperature-rainfall binning technique is used to evaluate the dependency of extreme rainfall on daily maximum temperature. The Clausius-Clapeyron (C-C) relation was found to describe the relationship between daily maximum temperature and a range of rainfall durations from 6 min up to 24 h for seven Australian weather stations, located in Adelaide, Brisbane, Canberra, Darwin, Melbourne, Perth and Sydney. The analysis shows that the rainfall-temperature scaling varies with location, temperature and rainfall duration. The Darwin Airport station shows a negative scaling relationship, while the other six stations show a positive relationship. To identify trends in the scaling relationship over time, the same analysis was conducted using data covering 10-year periods. Results indicate that the dependency of extreme rainfall on temperature also varies with the analysis period. Further, this dependency shows an increasing trend for more extreme short-duration rainfall and a decreasing trend for average long-duration rainfall events at most stations. Seasonal variations of the scaling trends were analysed by categorizing the summer and autumn seasons in one group and the winter and spring seasons in another. Most of the 99th-percentile trends for the 6 min, 1 h and 24 h durations at the Perth, Melbourne and Sydney stations are increasing for both groups, while Adelaide and Darwin show decreasing trends. Furthermore, the majority of the 50th-percentile scaling trends are decreasing for both groups.
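
    The temperature-rainfall binning technique can be sketched as follows: bin wet events by daily maximum temperature, take an extreme rainfall percentile per bin, and fit an exponential scaling rate for comparison with the C-C rate of roughly 7% per °C. A minimal sketch on synthetic data; the bin counts and thresholds are assumptions:

```python
import numpy as np

def binning_scaling(tmax, rain, n_bins=12, pct=99):
    """Temperature-rainfall binning: extreme rainfall percentile per
    temperature bin, then an exponential fit giving % change per deg C."""
    edges = np.quantile(tmax, np.linspace(0, 1, n_bins + 1))
    t_mid, p_ext = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (tmax >= lo) & (tmax < hi) & (rain > 0)
        if sel.sum() >= 20:                      # require enough wet events
            t_mid.append(0.5 * (lo + hi))
            p_ext.append(np.percentile(rain[sel], pct))
    slope = np.polyfit(t_mid, np.log(p_ext), 1)[0]
    return 100.0 * (np.exp(slope) - 1.0)         # percent per degree C

# Synthetic record built to follow ~7 %/C scaling, for illustration only
rng = np.random.default_rng(4)
tmax = rng.uniform(10, 35, 20000)
rain = rng.gamma(0.4, 2.0, 20000) * np.exp(0.068 * (tmax - 20))
print(f"scaling rate: {binning_scaling(tmax, rain):.1f} % per degree C")
```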

  1. Characterization of edge turbulence in relation to edge magnetic field configuration in Ohmic L-mode plasmas in the Mega Amp Spherical Tokamak

    NASA Astrophysics Data System (ADS)

    Hnat, B.; Dudson, B. D.; Dendy, R. O.; Counsell, G. F.; Kirk, A.; MAST Team

    2008-08-01

    Ion saturation current (Isat) measurements of edge plasma turbulence are analysed for six MAST L-mode plasmas that differ primarily in their edge magnetic field configurations. The analysis techniques are designed to capture the strong nonlinearities of the datasets. First, absolute moments of the data are examined to obtain accurate values of scaling exponents. This confirms dual scaling behaviour in all samples, with the temporal scale τ ≈ 40-60 µs separating the two regimes. Strong universality is then identified in the functional form of the probability density function (PDF) for Isat fluctuations, which is well approximated by the Fréchet distribution on temporal scales τ <= 40 µs. For temporal scales τ > 40 µs, the PDFs appear to converge to the Gumbel distribution, which has been previously identified as a universal feature of many other complex phenomena. The optimal fitting parameters k = 1.15 for Fréchet and a = 1.35 for Gumbel provide a simple quantitative characterization of the full spectrum of fluctuations. It is concluded that, to good approximation, the properties of the edge turbulence are independent of the edge magnetic field configuration.
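
    The distribution-fitting step on the coarse temporal scales can be sketched with SciPy. A minimal illustration on a synthetic series, not MAST data; the fine-scale Fréchet fit would use scipy.stats.invweibull analogously:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
isat = np.cumsum(rng.standard_normal(2 ** 15))   # stand-in for an Isat series

def normalized_fluctuations(x, tau):
    """Differences at temporal scale tau, standardized to zero mean, unit std."""
    dx = x[tau:] - x[:-tau]
    return (dx - dx.mean()) / dx.std()

# Fit a Gumbel PDF to the coarse-scale fluctuation distribution; the
# fine-scale side would fit a Frechet law (scipy.stats.invweibull) instead.
fluct = normalized_fluctuations(isat, tau=64)
loc, scale = stats.gumbel_r.fit(fluct)
print(f"Gumbel fit: loc = {loc:.3f}, scale = {scale:.3f}")
```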

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William B. J.; Rearden, Bradley T.

    The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses. The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
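    The core of the Monte Carlo idea is that uncertainties shared between experiments are sampled once per realization and applied to every experiment that shares them, so the correlation of the sampled results follows directly. A hedged toy sketch (invented sensitivities, not the SCALE/Sampler implementation):

```python
# Two hypothetical critical experiments sharing an enrichment uncertainty
# but with independent pitch uncertainties; sensitivities are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
s_enrich, s_pitch = 0.004, 0.002            # assumed k-eff sensitivities
d_enrich = rng.normal(0, 1, n)              # one shared draw per realization
keff_a = 1.0 + s_enrich * d_enrich + s_pitch * rng.normal(0, 1, n)
keff_b = 1.0 + s_enrich * d_enrich + s_pitch * rng.normal(0, 1, n)
corr = np.corrcoef(keff_a, keff_b)[0, 1]
print(f"sampled experiment correlation: {corr:.2f}")  # ~ s_e^2/(s_e^2+s_p^2) = 0.8
```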

  3. Impact of deforestation on local precipitation patterns over the Da River basin, Vietnam

    NASA Astrophysics Data System (ADS)

    Anghileri, Daniela; Spartà, Daniele; Castelletti, Andrea; Boschetti, Mirco

    2014-05-01

    Change in land cover, e.g. from forest to bare soil, might severely impact the hydrological cycle at the river basin scale by altering the balance between rainfall and evaporation, ultimately affecting streamflow dynamics. These changes generally occur over decades, but they might be much more rapid in developing countries, where economic growth and a growing population may cause abrupt changes in landscape and ecosystem. Detecting, analysing and modelling these changes is an essential step to design mitigation strategies and adaptation plans, balancing economic development and ecosystem protection. In this work we investigate the impact of land cover changes on the water cycle in the Da River basin, Vietnam. More precisely, the objective is to evaluate the link between deforestation and precipitation. The case study is particularly interesting because Vietnam is one of the world's fastest-growing economies and natural resources have been considerably exploited to support after-war development. Vietnam has the second highest rate of deforestation of primary forests in the world, second only to Nigeria (FAO 2005), with associated problems like abrupt changes in run-off, erosion, sediment transport and flash floods. We performed land cover evaluation by combining literature information and Remote Sensing techniques, using Landsat images. We then analysed time series of precipitation observed in the period 1960-2011 at several stations located in the catchment area. We used multiple trend detection techniques, both state-of-the-art (e.g., linear regression and Mann-Kendall) and novel (Moving Average on Shifting Horizon), to investigate trends in the seasonal pattern of precipitation. Results suggest that deforestation may induce a negative trend in the precipitation volume. The effect is mainly recognizable at the beginning and at the end of the monsoon season, when the local mechanisms of precipitation formation prevail over the large scale ones.
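    The Mann-Kendall test mentioned above is simple enough to sketch from its textbook definition. A minimal version (no tie correction, synthetic annual series; production work would use a tested package and account for autocorrelation):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(y):
    """Two-sided Mann-Kendall trend test (no-ties variance formula)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = sum(np.sign(y[j] - y[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0  # continuity correction
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

rng = np.random.default_rng(3)
annual_rain = 1500 - 3.0 * np.arange(52) + rng.normal(0, 60, 52)  # 1960-2011, mm
s, z, p = mann_kendall(annual_rain)
print(f"S={s:.0f}, Z={z:.2f}, p={p:.4f}")   # negative Z indicates a decreasing trend
```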

  4. A Meta-Analytic Review of Stand-Alone Interventions to Improve Body Image

    PubMed Central

    Alleva, Jessica M.; Sheeran, Paschal; Webb, Thomas L.; Martijn, Carolien; Miles, Eleanor

    2015-01-01

    Objective Numerous stand-alone interventions to improve body image have been developed. The present review used meta-analysis to estimate the effectiveness of such interventions, and to identify the specific change techniques that lead to improvement in body image. Methods The inclusion criteria were that (a) the intervention was stand-alone (i.e., solely focused on improving body image), (b) a control group was used, (c) participants were randomly assigned to conditions, and (d) at least one pretest and one posttest measure of body image was taken. Effect sizes were meta-analysed and moderator analyses were conducted. A taxonomy of 48 change techniques used in interventions targeted at body image was developed; all interventions were coded using this taxonomy. Results The literature search identified 62 tests of interventions (N = 3,846). Interventions produced a small-to-medium improvement in body image (d + = 0.38), a small-to-medium reduction in beauty ideal internalisation (d + = -0.37), and a large reduction in social comparison tendencies (d + = -0.72). However, the effect size for body image was inflated by bias both within and across studies, and was reliable but of small magnitude once corrections for bias were applied. Effect sizes for the other outcomes were no longer reliable once corrections for bias were applied. Several features of the sample, intervention, and methodology moderated intervention effects. Twelve change techniques were associated with improvements in body image, and three techniques were contra-indicated. Conclusions The findings show that interventions engender only small improvements in body image, and underline the need for large-scale, high-quality trials in this area. The review identifies effective techniques that could be deployed in future interventions. PMID:26418470
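    The pooling step behind a d+ estimate can be sketched with the standard DerSimonian-Laird random-effects model. The effect sizes and variances below are invented, not the review's data:

```python
import numpy as np

def random_effects(d, v):
    """DerSimonian-Laird random-effects pooled effect size with 95% CI."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1 / v
    d_fixed = np.sum(w * d) / w.sum()
    q = np.sum(w * (d - d_fixed) ** 2)                    # Cochran's Q
    c = w.sum() - np.sum(w ** 2) / w.sum()
    tau2 = max(0.0, (q - (len(d) - 1)) / c)               # between-study variance
    w_star = 1 / (v + tau2)
    d_plus = np.sum(w_star * d) / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    return d_plus, d_plus - 1.96 * se, d_plus + 1.96 * se

d = [0.55, 0.20, 0.41, 0.33, 0.62]   # hypothetical per-study effect sizes
v = [0.04, 0.02, 0.05, 0.03, 0.06]   # their sampling variances
print("d+ = %.2f (95%% CI %.2f to %.2f)" % random_effects(d, v))
```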

  5. A comparison between EDA-EnVar and ETKF-EnVar data assimilation techniques using radar observations at convective scales through a case study of Hurricane Ike (2008)

    NASA Astrophysics Data System (ADS)

    Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong

    2017-07-01

    This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of the Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques, the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). For the ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for the EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. The ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance for both EnVar variants. The sensitivity of analyses and forecasts to the two applied ensemble generation techniques is investigated in our current study. It is found that the EnVar system is rather stable with different ensemble update techniques in terms of its skill in improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations are likely to include slightly less organized spatial structures than those in ETKF-EnVar, and the perturbations of the latter are constructed more dynamically. Detailed diagnostics reveal that both of the EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location with the hurricane-specific error covariance. On average, the analysis and forecast from the ETKF-EnVar have slightly smaller errors than those from the EDA-EnVar in terms of track, intensity, and precipitation forecast. Moreover, ETKF-EnVar yields better forecasts when verified against conventional observations.
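    The ETKF perturbation update has a compact generic form: analysis perturbations are the forecast perturbations post-multiplied by a transform matrix built in ensemble space. A toy, textbook-style sketch (dimensions, observation operator and error statistics are all invented, and this is not the WRF hybrid system):

```python
import numpy as np

rng = np.random.default_rng(4)
n_state, n_ens, n_obs = 100, 20, 30
Xf = rng.normal(0, 1, (n_state, n_ens))           # toy forecast ensemble
xbar = Xf.mean(axis=1, keepdims=True)
Xp = Xf - xbar                                    # forecast perturbations

H = np.eye(n_obs, n_state)                        # observe the first 30 variables
r_std = 0.5                                       # obs error std (R = r_std^2 * I)
S = (H @ Xp) / (r_std * np.sqrt(n_ens - 1))       # R^{-1/2} H X' / sqrt(N-1)
lam, C = np.linalg.eigh(S.T @ S)                  # ensemble-space eigenproblem
T = C @ np.diag(1.0 / np.sqrt(1.0 + lam)) @ C.T   # symmetric square-root transform
Xa_p = Xp @ T                                     # analysis perturbations
print(f"spread before {Xp.std():.2f}, after {Xa_p.std():.2f}")
```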

  6. Systems Level Dissection of Anaerobic Methane Cycling: Quantitative Measurements of Single Cell Ecophysiology, Genetic Mechanisms, and Microbial Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orphan, Victoria; Tyson, Gene; Meile, Christof

    The global biological CH4 cycle is largely controlled through coordinated and often intimate microbial interactions between archaea and bacteria, the majority of which are still unknown or have been only cursorily identified. Members of the methanotrophic archaea, aka ‘ANME’, are believed to play a major role in the cycling of methane in anoxic environments coupled to sulfate, nitrate, and possibly iron and manganese oxides, frequently forming diverse physical and metabolic partnerships with a range of bacteria. The thermodynamic challenges overcome by the ANME and their bacterial partners and corresponding slow rates of growth are common characteristics in anaerobic ecosystems, and, in stark contrast to most cultured microorganisms, this type of energy and resource limited microbial lifestyle is likely the norm in the environment. While we have gained an in-depth systems level understanding of fast-growing, energy-replete microorganisms, comparatively little is known about the dynamics of cell respiration, growth, protein turnover, gene expression, and energy storage in the slow-growing microbial majority. These fundamental properties, combined with the observed metabolic and symbiotic versatility of methanotrophic ANME, make these cooperative microbial systems a relevant (albeit challenging) system to study and for which to develop and optimize culture-independent methodologies, which enable a systems-level understanding of microbial interactions and metabolic networks. We used an integrative systems biology approach to study anaerobic sediment microcosms and methane-oxidizing bioreactors and expanded our understanding of the methanotrophic ANME archaea, their interactions with physically-associated bacteria, ecophysiological characteristics, and underlying genetic basis for cooperative microbial methane-oxidation linked with different terminal electron acceptors. Our approach is inherently multi-disciplinary and multi-scaled, combining transcriptional and proteomic analyses with high resolution microscopy techniques, and stable isotopic and chemical analyses that span community level ‘omics investigations (cm scale) to interspecies consortia (µm scale), to the individual cell and its subcellular components (nm scale). We have organized our methodological approach into three broad categories, RNA-based, Protein-targeted and Geochemical, each encompassing a range of scales, with many techniques and resulting datasets that are highly complementary with one another, and together, offer a unique systems-level perspective of methane-based microbial interactions.

  7. Scaling and characterisation of a 2-DoF velocity amplified electromagnetic vibration energy harvester

    NASA Astrophysics Data System (ADS)

    O’Donoghue, D.; Frizzell, R.; Punch, J.

    2018-07-01

    Vibration energy harvesters (VEHs) offer an alternative to batteries for the autonomous operation of low-power electronics. Understanding the influence of scaling on VEHs is of great importance in the design of reduced scale harvesters. The nonlinear harvesters investigated here employ velocity amplification, a technique used to increase velocity through impacts, to improve the power output of multiple-degree-of-freedom VEHs, compared to linear resonators. Such harvesters, employing electromagnetic induction, are referred to as velocity amplified electromagnetic generators (VAEGs), with gains in power achieved by increasing the relative velocity between the magnet and coil in the transducer. The influence of scaling on a nonlinear 2-DoF VAEG is presented. Due to the increased complexity of VAEGs, compared to linear systems, linear scaling theory cannot be directly applied to VAEGs. Therefore, a detailed nonlinear scaling method is utilised. Experimental and numerical methods are employed. This nonlinear scaling method can be used for analysing the scaling behaviour of all nonlinear electromagnetic VEHs. It is demonstrated that the electromagnetic coupling coefficient degrades more rapidly with scale for systems with larger displacement amplitudes, meaning that systems operating at low frequencies will scale poorly compared to those operating at higher frequencies. The load power of the 2-DoF VAEG is predicted to scale as P_L ∝ s^5.51 (where s = volume^(1/3)), suggesting that achieving high power densities in a VAEG with low device volume is extremely challenging.

  8. The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.

    PubMed

    Olivier, Brett G; Bergmann, Frank T

    2015-09-04

    Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e.g., by extending it to cover dynamic FBC models).
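    FBA itself reduces to a linear program over the quantities FBC encodes: a stoichiometric matrix, flux bounds and an objective. A self-contained toy example (a three-reaction network invented for illustration, not an SBML parser):

```python
# Maximize the biomass flux subject to steady state S.v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A, A -> B, B -> biomass (rows = metabolites A, B)
S = np.array([[1, -1,  0],
              [0,  1, -1]])
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10 flux units
c = np.array([0, 0, -1])                   # linprog minimizes, so negate v_biomass
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", res.x[2])   # 10.0, limited by the uptake bound
```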

  9. The Systems Biology Markup Language (SBML) Level 3 Package: Flux Balance Constraints.

    PubMed

    Olivier, Brett G; Bergmann, Frank T

    2015-06-01

    Constraint-based modeling is a well established modelling methodology used to analyze and study biological networks on both a medium and genome scale. Due to their large size, genome scale models are typically analysed using constraint-based optimization techniques. One widely used method is Flux Balance Analysis (FBA) which, for example, requires a modelling description to include: the definition of a stoichiometric matrix, an objective function and bounds on the values that fluxes can obtain at steady state. The Flux Balance Constraints (FBC) Package extends SBML Level 3 and provides a standardized format for the encoding, exchange and annotation of constraint-based models. It includes support for modelling concepts such as objective functions, flux bounds and model component annotation that facilitates reaction balancing. The FBC package establishes a base level for the unambiguous exchange of genome-scale, constraint-based models, that can be built upon by the community to meet future needs (e.g., by extending it to cover dynamic FBC models).

  10. Continuous wavelet transform based time-scale and multifractal analysis of the nonlinear oscillations in a hollow cathode glow discharge plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nurujjaman, Md.; Narayanan, Ramesh; Iyengar, A. N. Sekar

    2009-10-15

    Continuous wavelet transform (CWT) based time-scale and multifractal analyses have been carried out on the anode glow related nonlinear floating potential fluctuations in a hollow cathode glow discharge plasma. CWT has been used to obtain the contour and ridge plots. Scale shift (or inversely frequency shift), which is a typical nonlinear behavior, has been detected from the undulating contours. From the ridge plots, we have identified the presence of nonlinearity and degree of chaoticity. Using the wavelet transform modulus maxima technique we have obtained the multifractal spectrum for the fluctuations at different discharge voltages and the spectrum was observed to become a monofractal for periodic signals. These multifractal spectra were also used to estimate different quantities such as the correlation and fractal dimension, degree of multifractality, and complexity parameters. These estimations have been found to be consistent with the nonlinear time series analysis.
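    The first stages of a WTMM-style analysis (CWT, modulus maxima, a partition function over the maxima) can be sketched generically. This sketch assumes the pywt package for the CWT and uses a synthetic chirp whose scale shifts in time; it is only a hint at the full multifractal formalism, not the paper's analysis:

```python
import numpy as np
import pywt   # assumed dependency providing pywt.cwt

t = np.linspace(0, 1, 4096)
sig = np.sin(2 * np.pi * 40 * t * (1 + 0.5 * t))   # chirp: scale shifts over time
scales = np.arange(2, 128)
coef, _ = pywt.cwt(sig, scales, "morl")
mod = np.abs(coef)

# Modulus maxima: samples exceeding both temporal neighbours at each scale
mm = (mod[:, 1:-1] > mod[:, :-2]) & (mod[:, 1:-1] > mod[:, 2:])
# Partition function Z(q=2, s) over the maxima; its log-log slope gives tau(2)
Z2 = np.array([np.sum(mod[i, 1:-1][mm[i]] ** 2) for i in range(len(scales))])
tau2 = np.polyfit(np.log(scales), np.log(Z2), 1)[0]
print(f"tau(q=2) estimate: {tau2:.2f}")
```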

  11. Long Term trends in Meridional ISO Activity as seen in CMIP5 Simulations

    NASA Astrophysics Data System (ADS)

    Srivastava, G.; Chakraborty, A.; Nanjundaiah, R. S.

    2016-12-01

    Active and break phases of the Indian Summer Monsoon (ISM) are manifested as subseasonal increases and decreases of convection. A major part of this intra-seasonal variability comes from low-frequency oscillations. Previous studies showed that northward-propagating convective cloud bands are associated with these low-frequency intra-seasonal oscillations. Therefore, a thorough understanding of their spatial extent, location and intensity will be useful to understand and model the ISM. In this study, we have used the Continuous Wavelet Transform (CWT) technique to estimate the spatial extent (scale), center and intensity of these poleward-propagating oscillatory systems. Using observation datasets, we show that the scale, center and intensity of these northward-propagating modes show different characteristics during floods and droughts of the ISM. We have analysed different scenarios of CMIP5 and find that the change in the mean ISM is related to changes in the scale, center and intensity of northward propagations. We further show using AGCM simulations that SST over different regions of the world can modulate the ISO over the Indian region.

  12. Unification of color postprocessing techniques for 3-dimensional computational mechanics

    NASA Technical Reports Server (NTRS)

    Bailey, Bruce Charles

    1985-01-01

    To facilitate the understanding of complex three-dimensional numerical models, advanced interactive color postprocessing techniques are introduced. These techniques are sufficiently flexible so that postprocessing difficulties arising from model size, geometric complexity, response variation, and analysis type can be adequately overcome. Finite element, finite difference, and boundary element models may be evaluated with the prototype postprocessor. Elements may be removed from parent models to be studied as independent subobjects. Discontinuous responses may be contoured including responses which become singular, and nonlinear color scales may be input by the user for the enhancement of the contouring operation. Hit testing can be performed to extract precise geometric, response, mesh, or material information from the database. In addition, stress intensity factors may be contoured along the crack front of a fracture model. Stepwise analyses can be studied, and the user can recontour responses repeatedly, as if he were paging through the response sets. As a system, these tools allow effective interpretation of complex analysis results.

  13. Three-dimensional micro-scale strain mapping in living biological soft tissues.

    PubMed

    Moo, Eng Kuan; Sibole, Scott C; Han, Sang Kuy; Herzog, Walter

    2018-04-01

    Non-invasive characterization of the mechanical micro-environment surrounding cells in biological tissues at multiple length scales is important for the understanding of the role of mechanics in regulating the biosynthesis and phenotype of cells. However, there is a lack of imaging methods that allow for characterization of the cell micro-environment in three-dimensional (3D) space. The aims of this study were (i) to develop a multi-photon laser microscopy protocol capable of imprinting 3D grid lines onto living tissue at a high spatial resolution, and (ii) to develop image processing software capable of analyzing the resulting microscopic images and performing high resolution 3D strain analyses. Using articular cartilage as the biological tissue of interest, we present a novel two-photon excitation imaging technique for measuring the internal 3D kinematics in intact cartilage at sub-micrometer resolution, spanning length scales from the tissue to the cell level. Using custom image processing software (lsmgridtrack), we provide accurate and robust 3D micro-strain analysis that allows for detailed qualitative and quantitative assessment of the 3D tissue kinematics. This novel technique preserves tissue structural integrity post-scanning, therefore allowing for multiple strain measurements at different time points in the same specimen. The approach can also be applied to other biological tissues, such as meniscus and annulus fibrosus, as well as tissue-engineered constructs, for the characterization of their mechanical properties. The proposed technique is versatile and opens doors for experimental and theoretical investigations on the relationship between tissue deformation and cell biosynthesis. Studies of this nature may enhance our understanding of the mechanisms underlying cell mechano-transduction, and thus, adaptation and degeneration of soft connective tissues.
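    Once grid displacements have been tracked, strain follows from the displacement gradient. A conceptual 2-D sketch (synthetic displacement field and grid spacing invented for illustration; this is not the lsmgridtrack code, which works in 3D):

```python
# Small-strain components from a displacement field sampled on a regular grid.
import numpy as np

h = 10.0                                   # assumed grid spacing, micrometers
y, x = np.mgrid[0:50, 0:50] * h
ux = 0.02 * x                              # 2% stretch in x
uy = -0.01 * y + 0.005 * x                 # 1% compression in y plus shear

dux_dy, dux_dx = np.gradient(ux, h)        # gradients along (row, column) axes
duy_dy, duy_dx = np.gradient(uy, h)
exx, eyy = dux_dx, duy_dy
exy = 0.5 * (dux_dy + duy_dx)
print(f"mean strains: exx={exx.mean():.3f}, eyy={eyy.mean():.3f}, exy={exy.mean():.4f}")
```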

  14. A Skilful Marine Sclerochronological Network Based Reconstruction of North Atlantic Subpolar Gyre Dynamics

    NASA Astrophysics Data System (ADS)

    Reynolds, D.; Hall, I. R.; Slater, S. M.; Scourse, J. D.; Wanamaker, A. D.; Halloran, P. R.; Garry, F. K.

    2017-12-01

    Spatial network analyses of precisely dated, annually resolved tree-ring proxy records have facilitated robust reconstructions of past atmospheric climate variability and the associated mechanisms and forcings that drive it. In contrast, a lack of similarly dated marine archives has constrained the use of such techniques in the marine realm, despite the potential for developing a more robust understanding of the role basin-scale ocean dynamics play in the global climate system. Here we show that a spatial network of marine molluscan sclerochronological oxygen isotope (δ18Oshell) series spanning the North Atlantic region provides a skilful reconstruction of basin-scale North Atlantic sea surface temperatures (SSTs). Our analyses demonstrate that the composite marine series (referred to as δ18Oproxy_PC1) is significantly sensitive to inter-annual variability in North Atlantic SSTs (R = -0.61, P < 0.01) and surface air temperatures (SATs; R = -0.67, P < 0.01) over the 20th century. Subpolar gyre (SPG) SSTs dominate variability in the δ18Oproxy_PC1 series at sub-centennial frequencies (R = -0.51, P < 0.01). Comparison of the δ18Oproxy_PC1 series against variability in the strength of the European Slope Current and the maximum North Atlantic meridional overturning circulation derived from numeric climate models (CMIP5) indicates that variability in the SPG region, associated with the strength of the surface currents of the North Atlantic, plays a significant role in shaping the multi-decadal scale SST variability over the industrial era. These analyses demonstrate that spatial networks developed from sclerochronological archives can provide powerful baseline archives of past ocean variability that can facilitate the development of a quantitative understanding of the role the oceans play in the global climate system and constrain uncertainties in numeric climate models.
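    Extracting a leading mode from a proxy network is essentially a first principal component. A minimal sketch with invented series standing in for the δ18O network (not the published data or processing chain):

```python
import numpy as np

rng = np.random.default_rng(5)
years, n_sites = 120, 8
common = np.cumsum(rng.normal(0, 0.3, years))        # shared SST-like signal
X = common[:, None] + rng.normal(0, 0.5, (years, n_sites))
X = (X - X.mean(0)) / X.std(0)                       # standardize each series

U, s, Vt = np.linalg.svd(X, full_matrices=False)
pc1 = U[:, 0] * s[0]                                 # leading-mode time series
expl = s[0] ** 2 / np.sum(s ** 2)
print(f"PC1 explains {100*expl:.0f}% of network variance")
# PC sign is arbitrary, so report the absolute correlation with the signal
print("corr(PC1, common signal):", abs(np.corrcoef(pc1, common)[0, 1]).round(2))
```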

  15. Satellite-Enhanced Dynamical Downscaling of Extreme Events

    NASA Astrophysics Data System (ADS)

    Nunes, A.

    2015-12-01

    Severe weather events can be the triggers of environmental disasters in regions particularly susceptible to changes in hydrometeorological conditions. In that regard, the reconstruction of past extreme weather events can help in the assessment of vulnerability and risk mitigation actions. Using novel modeling approaches, dynamical downscaling of long-term integrations from global circulation models can be useful for risk analysis, providing more accurate climate information at regional scales. Originally developed at the National Centers for Environmental Prediction (NCEP), the Regional Spectral Model (RSM) is being used in the dynamical downscaling of global reanalysis, within the South American Hydroclimate Reconstruction Project. Here, RSM combines scale-selective bias correction with assimilation of satellite-based precipitation estimates to downscale extreme weather occurrences. Scale-selective bias correction is a method employed in the downscaling, similar to the spectral nudging technique, in which the downscaled solution develops in agreement with its coarse boundaries. Precipitation assimilation acts on modeled deep-convection, drives the land-surface variables, and therefore the hydrological cycle. During the downscaling of extreme events that took place in Brazil in recent years, RSM continuously assimilated NCEP Climate Prediction Center morphing technique precipitation rates. As a result, RSM performed better than its global (reanalysis) forcing, showing more consistent hydrometeorological fields compared with more sophisticated global reanalyses. Ultimately, RSM analyses might provide better-quality initial conditions for high-resolution numerical predictions in metropolitan areas, leading to more reliable short-term forecasting of severe local storms.

  16. Digital Reef Rugosity Estimates Coral Reef Habitat Complexity

    PubMed Central

    Dustan, Phillip; Doherty, Orla; Pardede, Shinta

    2013-01-01

    Ecological habitats with greater structural complexity contain more species due to increased niche diversity. This is especially apparent on coral reefs where individual coral colonies aggregate to give a reef its morphology, species zonation, and three dimensionality. Structural complexity is classically measured with a reef rugosity index, which is the ratio of a straight line transect to the distance a flexible chain of equal length travels when draped over the reef substrate; yet, other techniques from visual categories to remote sensing have been used to characterize structural complexity at scales from microhabitats to reefscapes. Reef-scale methods either lack quantitative precision or are too time consuming to be routinely practical, while remotely sensed indices are mismatched to the finer scale morphology of coral colonies and reef habitats. In this communication a new digital technique, Digital Reef Rugosity (DRR) is described which utilizes a self-contained water level gauge enabling a diver to quickly and accurately characterize rugosity with non-invasive millimeter scale measurements of coral reef surface height at decimeter intervals along meter scale transects. The precise measurements require very little post-processing and are easily imported into a spreadsheet for statistical analyses and modeling. To assess its applicability we investigated the relationship between DRR and fish community structure at four coral reef sites on Menjangan Island off the northwest corner of Bali, Indonesia and one on mainland Bali to the west of Menjangan Island; our findings show a positive relationship between DRR and fish diversity. Since structural complexity drives key ecological processes on coral reefs, we consider that DRR may become a useful quantitative community-level descriptor to characterize reef complexity. PMID:23437380
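    The chain-ratio index and its digital analogue are simple enough to compute directly: given surface heights sampled at a fixed horizontal interval, as DRR does, rugosity is the along-surface path length divided by the straight-line transect length. A minimal sketch with synthetic relief (not the DRR instrument code):

```python
import numpy as np

def rugosity(heights, dx):
    """Ratio of along-surface contour length to straight-line transect length."""
    dz = np.diff(np.asarray(heights, dtype=float))
    contour = np.sum(np.hypot(dx, dz))
    return contour / (dx * len(dz))

dx = 0.1                                               # decimeter sampling interval, m
flat = np.zeros(100)
reef = 0.3 * np.abs(np.sin(np.linspace(0, 20, 100)))   # synthetic coral relief, m
print("flat surface r =", rugosity(flat, dx))          # 1.0 by definition
print("complex surface r = %.2f" % rugosity(reef, dx))
```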

  17. Digital reef rugosity estimates coral reef habitat complexity.

    PubMed

    Dustan, Phillip; Doherty, Orla; Pardede, Shinta

    2013-01-01

    Ecological habitats with greater structural complexity contain more species due to increased niche diversity. This is especially apparent on coral reefs where individual coral colonies aggregate to give a reef its morphology, species zonation, and three dimensionality. Structural complexity is classically measured with a reef rugosity index, which is the ratio of a straight line transect to the distance a flexible chain of equal length travels when draped over the reef substrate; yet, other techniques from visual categories to remote sensing have been used to characterize structural complexity at scales from microhabitats to reefscapes. Reef-scale methods either lack quantitative precision or are too time consuming to be routinely practical, while remotely sensed indices are mismatched to the finer scale morphology of coral colonies and reef habitats. In this communication a new digital technique, Digital Reef Rugosity (DRR) is described which utilizes a self-contained water level gauge enabling a diver to quickly and accurately characterize rugosity with non-invasive millimeter scale measurements of coral reef surface height at decimeter intervals along meter scale transects. The precise measurements require very little post-processing and are easily imported into a spreadsheet for statistical analyses and modeling. To assess its applicability we investigated the relationship between DRR and fish community structure at four coral reef sites on Menjangan Island off the northwest corner of Bali, Indonesia and one on mainland Bali to the west of Menjangan Island; our findings show a positive relationship between DRR and fish diversity. Since structural complexity drives key ecological processes on coral reefs, we consider that DRR may become a useful quantitative community-level descriptor to characterize reef complexity.

  18. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    PubMed

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme increase in next-generation sequencing output has resulted in a shortage of efficient alignment approaches for ultra-large biological sequence sets (e.g., files larger than 1 GB) of different sequence types. Distributed and parallel computing represents a crucial technique for accelerating such ultra-large sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient HAlign-II tool to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on DNA and protein datasets larger than 1 GB showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences, shows extremely high memory efficiency, and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source codes and datasets was established at http://lab.malab.cn/soft/halign.

  19. Effective model development of internal auditors in the village financial institution

    NASA Astrophysics Data System (ADS)

    Arsana, I. M. M.; Sugiarta, I. N.

    2018-01-01

    Designing an effective audit system is complex and challenging, and a focus on examining how internal audit drives improvement in three core performance dimensions (ethicality, efficiency, and effectiveness) in an organization is needed. The research problem is how to design a model, with its supporting tools, for effective supervision of the Village Credit Institution; the research objective is to produce such a design. Data were collected through interviews, observation and questionnaires. Qualitative data were converted into quantitative form on a scale, with each variable classified into five levels following a Likert scale. The data were analysed descriptively to determine the level of supervision, and with Structural Equation Modelling (SEM) to identify internal and external factors, so that the supervision model could be designed from the descriptive analysis. The result of the research is a design model, with supporting tools, for effective supervision of the Village Credit Institution. In conclusion, the design model is supported by three subsystems: an institutional subsystem yielding a supervisory body for the Village Credit Institution, a standardization and working-procedure subsystem yielding standard operating procedures for supervisors of the Village Credit Institution, and an education and training subsystem yielding professional supervisors of the Village Credit Institution.

  20. Interfaces in Oxides Formed on NiAlCr Doped with Y, Hf, Ti, and B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boll, Torben; Unocic, Kinga A.; Pint, Bruce A.

    This study applies atom probe tomography (APT) to analyze the oxide scales formed on model NiAlCr alloys doped with Hf, Y, Ti, and B. Due to its ability to measure small amounts of alloying elements in the oxide matrix and to quantify segregation, the technique offers a possibility for detailed studies of the dopants' fate during high-temperature oxidation. Three model NiAlCr alloys with different additions of Hf, Y, Ti, and B were prepared and oxidized in O2 at 1,100°C for 100 h. All specimens showed an outer region consisting of different spinel oxides with relatively small grains and the protective Al2O3 oxide layer below. APT analyses focused mainly on this protective oxide layer. In all the investigated samples, segregation of both Hf and Y to the oxide grain boundaries was observed and quantified. Neither B nor Ti was observed in the alumina grains or at the analyzed interfaces. The processes of formation of oxide scales and segregation of the alloying elements are discussed. The experimental challenges of the oxide analyses by APT are also addressed.

  1. Disclosure Control using Partially Synthetic Data for Large-Scale Health Surveys, with Applications to CanCORS

    PubMed Central

    Loong, Bronwyn; Zaslavsky, Alan M.; He, Yulei; Harrington, David P.

    2013-01-01

    Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents’ identities and sensitive attributes, by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by CanCORS, a comprehensive observational study of the experiences, treatments, and outcomes of patients with lung or colorectal cancer in the United States. We review inferential methods for partially synthetic data, and discuss selection of high disclosure risk variables for synthesis, specification of imputation models, and identification disclosure risk assessment. We evaluate data utility by replicating published analyses and comparing results using original and synthetic data, and discuss practical issues in preserving inferential conclusions. We found that important subgroup relationships must be included in the synthetic data imputation model, to preserve the data utility of the observed data for a given analysis procedure. We conclude that synthetic CanCORS data are suited best for preliminary data analyses purposes. These methods address the requirement to share data in clinical research without compromising confidentiality. PMID:23670983
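    The inferential machinery for partially synthetic data (Reiter's combining rules) is compact enough to sketch. The imputation model below is a toy normal regression on invented data, not the CanCORS specification, and a fuller implementation would also draw the regression parameters from their posterior:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 500, 10
age = rng.uniform(30, 80, n)
income = 20 + 0.5 * age + rng.normal(0, 5, n)        # the sensitive variable

q, u = [], []
for _ in range(m):                                   # m synthetic datasets
    beta = np.polyfit(age, income, 1)                # fit the synthesis model
    resid_sd = np.std(income - np.polyval(beta, age))
    synth = np.polyval(beta, age) + rng.normal(0, resid_sd, n)  # replaced values
    q.append(synth.mean())                           # estimand: mean income
    u.append(synth.var(ddof=1) / n)                  # its within-dataset variance

qbar, b, ubar = np.mean(q), np.var(q, ddof=1), np.mean(u)
T = ubar + b / m               # partial-synthesis variance (Reiter's rule)
print(f"estimate {qbar:.2f} +/- {1.96*np.sqrt(T):.2f}")
```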

  2. Interfaces in Oxides Formed on NiAlCr Doped with Y, Hf, Ti, and B

    DOE PAGES

    Boll, Torben; Unocic, Kinga A.; Pint, Bruce A.; ...

    2017-03-20

    This study applies atom probe tomography (APT) to analyze the oxide scales formed on model NiAlCr alloys doped with Hf, Y, Ti, and B. Due to its ability to measure small amounts of alloying elements in the oxide matrix and to quantify segregation, the technique offers a possibility for detailed studies of the dopants' fate during high-temperature oxidation. Three model NiAlCr alloys with different additions of Hf, Y, Ti, and B were prepared and oxidized in O2 at 1,100°C for 100 h. All specimens showed an outer region consisting of different spinel oxides with relatively small grains and the protective Al2O3 oxide layer below. APT analyses focused mainly on this protective oxide layer. In all the investigated samples, segregation of both Hf and Y to the oxide grain boundaries was observed and quantified. Neither B nor Ti was observed in the alumina grains or at the analyzed interfaces. The processes of formation of oxide scales and segregation of the alloying elements are discussed. The experimental challenges of the oxide analyses by APT are also addressed.

  3. Comparison of digital and conventional impression techniques: evaluation of patients’ perception, treatment comfort, effectiveness and clinical outcomes

    PubMed Central

    2014-01-01

    Background The purpose of this study was to compare two impression techniques from the perspective of patient preferences and treatment comfort. Methods Twenty-four (12 male, 12 female) subjects who had no previous experience with either conventional or digital impression participated in this study. Conventional impressions of maxillary and mandibular dental arches were taken with a polyether impression material (Impregum, 3M ESPE), and bite registrations were made with polysiloxane bite registration material (Futar D, Kettenbach). Two weeks later, digital impressions and bite scans were performed using an intra-oral scanner (CEREC Omnicam, Sirona). Immediately after the impressions were made, the subjects’ attitudes, preferences and perceptions towards impression techniques were evaluated using a standardized questionnaire. The perceived source of stress was evaluated using the State-Trait Anxiety Scale. Processing steps of the impression techniques (tray selection, working time etc.) were recorded in seconds. Statistical analyses were performed with the Wilcoxon rank test, and p < 0.05 was considered significant. Results There were significant differences among the groups (p < 0.05) in terms of total working time and processing steps. Patients stated that digital impressions were more comfortable than conventional techniques. Conclusions Digital impressions resulted in a more time-efficient technique than conventional impressions. Patients preferred the digital impression technique rather than conventional techniques.
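    The statistical step is a paired nonparametric comparison, which scipy exposes directly as the Wilcoxon signed-rank test. A minimal analogue with invented per-subject working times (not the study's data):

```python
import numpy as np
from scipy.stats import wilcoxon

conventional = np.array([22, 25, 19, 27, 24, 21, 26, 23, 28, 20])  # minutes
digital      = np.array([14, 16, 13, 18, 15, 12, 17, 16, 19, 13])
stat, p = wilcoxon(conventional, digital)   # paired, two-sided by default
print(f"Wilcoxon W={stat}, p={p:.4f}",
      "-> significant at 0.05" if p < 0.05 else "")
```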

  4. Testing for periodicity of extinction

    NASA Technical Reports Server (NTRS)

    Raup, David M.; Sepkoski, J. J., Jr.

    1988-01-01

    The statistical techniques used by Raup and Sepkoski (1984 and 1986) to identify a 26-Myr periodicity in the biological extinction record for the past 250 Myr are reexamined, responding in detail to the criticisms of Stigler and Wagner (1987). It is argued that evaluation of a much larger set of extinction data using a time scale with 51 sampling intervals supports the finding of periodicity. In a reply by Stigler and Wagner, the preference for a 26-Myr period is attributed to a numerical quirk in the Harland et al. (1982) time scale, in which the subinterval boundaries are not linear interpolations between the stage boundaries but have 25-Myr periodicity. It is stressed that the results of the stringent statistical tests imposed do not disprove periodicity but rather indicate that the evidence and analyses presented so far are inadequate.

  5. Complexity Induced Anisotropic Bimodal Intermittent Turbulence in Space Plasmas

    NASA Technical Reports Server (NTRS)

    Chang, Tom; Tam, Sunny W. Y.; Wu, Cheng-Chin

    2004-01-01

    The "physics of complexity" in space plasmas is the central theme of this exposition. It is demonstrated that the sporadic and localized interactions of magnetic coherent structures arising from the plasma resonances can be the source for the coexistence of nonpropagating spatiotemporal fluctuations and propagating modes. Non-Gaussian probability distribution functions of the intermittent fluctuations from direct numerical simulations are obtained and discussed. Power spectra and local intermittency measures using the wavelet analyses are presented to display the spottiness of the small-scale turbulent fluctuations and the non-uniformity of coarse-grained dissipation that can lead to magnetic topological reconfigurations. The technique of the dynamic renormalization group is applied to the study of the scaling properties of such type of multiscale fluctuations. Charged particle interactions with both the propagating and nonpropagating portions of the intermittent turbulence are also described.

  6. 'Just give me the best quality of life questionnaire': the Karnofsky scale and the history of quality of life measurements in cancer trials.

    PubMed

    Timmermann, Carsten

    2013-09-01

    To use the history of the Karnofsky Performance Scale as a case study illustrating the emergence of interest in the measurement and standardisation of quality of life; to understand the origins of current-day practices. Articles referring to the Karnofsky scale and quality of life measurements published from the 1940s to the 1990s were identified by searching databases and screening journals, and analysed using close-reading techniques. Secondary literature was consulted to understand the context in which articles were written. The Karnofsky scale was devised for a different purpose than measuring quality of life: as a standardisation device that helped quantify effects of chemotherapeutic agents less easily measurable than survival time. Interest in measuring quality of life only emerged around 1970. When quality of life measurements were increasingly widely discussed in the medical press from the late 1970s onwards, a consensus emerged that the Karnofsky scale was not a very good tool. More sophisticated approaches were developed, but Karnofsky continued to be used. I argue that the scale provided a quick and simple, approximate assessment of the 'soft' effects of treatment by physicians, overlapping but not identical with quality of life.

  7. ‘Just give me the best quality of life questionnaire’: the Karnofsky scale and the history of quality of life measurements in cancer trials

    PubMed Central

    Timmermann, Carsten

    2013-01-01

    Objectives: To use the history of the Karnofsky Performance Scale as a case study illustrating the emergence of interest in the measurement and standardisation of quality of life; to understand the origins of current-day practices. Methods: Articles referring to the Karnofsky scale and quality of life measurements published from the 1940s to the 1990s were identified by searching databases and screening journals, and analysed using close-reading techniques. Secondary literature was consulted to understand the context in which articles were written. Results: The Karnofsky scale was devised for a different purpose than measuring quality of life: as a standardisation device that helped quantify effects of chemotherapeutic agents less easily measurable than survival time. Interest in measuring quality of life only emerged around 1970. Discussion: When quality of life measurements were increasingly widely discussed in the medical press from the late 1970s onwards, a consensus emerged that the Karnofsky scale was not a very good tool. More sophisticated approaches were developed, but Karnofsky continued to be used. I argue that the scale provided a quick and simple, approximate assessment of the ‘soft’ effects of treatment by physicians, overlapping but not identical with quality of life. PMID:23239756

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamanini, Nicola; Wright, Matthew, E-mail: nicola.tamanini@cea.fr, E-mail: matthew.wright.13@ucl.ac.uk

    We investigate the cosmological dynamics of the recently proposed extended chameleon models at both background and linear perturbation levels. Dynamical systems techniques are employed to fully characterize the evolution of the universe at the largest distances, while structure formation is analysed at sub-horizon scales within the quasi-static approximation. The late time dynamical transition from dark matter to dark energy domination can be well described by almost all extended chameleon models considered, with no deviations from ΛCDM results at both background and perturbation levels. The results obtained in this work confirm the cosmological viability of extended chameleons as alternative dark energy models.

  9. Variations in atmospheric angular momentum and the length of day

    NASA Technical Reports Server (NTRS)

    Rosen, R. D.; Salstein, D. A.

    1982-01-01

    Six years of twice-daily global analyses were used to create and study a lengthy time series of high temporal resolution angular momentum values. Changes in these atmospheric values were compared to independently determined changes in the rotation rate of the solid Earth. Finally, the atmospheric data were examined in more detail to determine the time and space scales on which variations in momentum occur within the atmosphere and which regions contribute most to the changes found in the global integral. The data and techniques used to derive the time series of momentum values are described.

  10. Evaluation Criteria for Micro-CAI: A Psychometric Approach

    PubMed Central

    Wallace, Douglas; Slichter, Mark; Bolwell, Christine

    1985-01-01

    The increased use of microcomputer-based instructional programs has resulted in a greater need for third-party evaluation of the software. This in turn has prompted the development of micro-CAI evaluation tools. The present project sought to develop a prototype instrument to assess the impact of CAI program presentation characteristics on students. Data analysis and scale construction was conducted using standard item reliability analyses and factor analytic techniques. Adequate subscale reliabilities and factor structures were found, suggesting that a psychometric approach to CAI evaluation may possess some merit. Efforts to assess the utility of the resultant instrument are currently underway.
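    The item-reliability step described here is typically a Cronbach's alpha calculation, which follows directly from its definition. A sketch with invented 5-point rating data (not the project's instrument):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of subscale item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(7)
ability = rng.normal(3, 1, size=(200, 1))                         # latent trait
ratings = np.clip(np.rint(ability + rng.normal(0, 0.7, (200, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(ratings):.2f}")   # coherent items score ~0.8-0.9
```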

  11. Simulation of an ensemble of future climate time series with an hourly weather generator

    NASA Astrophysics Data System (ADS)

    Caporali, E.; Fatichi, S.; Ivanov, V. Y.; Kim, J.

    2010-12-01

    There is evidence that climate change is occurring in many regions of the world. The necessity of climate change predictions at the local scale and fine temporal resolution is thus warranted for hydrological, ecological, geomorphological, and agricultural applications that can provide thematic insights into the corresponding impacts. Numerous downscaling techniques have been proposed to bridge the gap between the spatial scales adopted in General Circulation Models (GCM) and regional analyses. Nevertheless, the time and spatial resolutions obtained as well as the type of meteorological variables may not be sufficient for detailed studies of climate change effects at the local scales. In this context, this study presents a stochastic downscaling technique that makes use of an hourly weather generator to simulate time series of predicted future climate. Using a Bayesian approach, the downscaling procedure derives distributions of factors of change for several climate statistics from a multi-model ensemble of GCMs. Factors of change are sampled from their distributions using a Monte Carlo technique to entirely account for the probabilistic information obtained with the Bayesian multi-model ensemble. Factors of change are subsequently applied to the statistics derived from observations to re-evaluate the parameters of the weather generator. The weather generator can reproduce a wide set of climate variables and statistics over a range of temporal scales, from extremes, to the low-frequency inter-annual variability. The final result of such a procedure is the generation of an ensemble of hourly time series of meteorological variables that can be considered as representative of future climate, as inferred from GCMs. The generated ensemble of scenarios also accounts for the uncertainty derived from multiple GCMs used in downscaling. Applications of the procedure in reproducing present and future climates are presented for different locations world-wide: Tucson (AZ), Detroit (MI), and Firenze (Italy). The stochastic downscaling is carried out with eight GCMs from the CMIP3 multi-model dataset (IPCC 4AR, A1B scenario).
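    The "factor of change" step at the heart of this procedure can be sketched in stripped-down form: sample multiplicative factors from their multi-GCM distribution and apply them to an observed statistic before re-parameterizing the weather generator. The numbers below are invented, and the full method uses a Bayesian weighting of models rather than this simple normal fit:

```python
import numpy as np

rng = np.random.default_rng(8)
obs_mean_mm = 2.1                        # observed mean daily rainfall (example)
# Hypothetical factors of change for mean precipitation from a GCM ensemble
gcm_factors = np.array([0.95, 0.88, 1.02, 0.91, 0.97, 0.85, 1.00, 0.93])
mu, sd = gcm_factors.mean(), gcm_factors.std(ddof=1)

samples = rng.normal(mu, sd, 1000)       # Monte Carlo over factor uncertainty
future_means = obs_mean_mm * samples     # target statistics for the generator
lo, hi = np.percentile(future_means, [5, 95])
print(f"future mean daily rainfall: {future_means.mean():.2f} mm "
      f"(90% band {lo:.2f}-{hi:.2f} mm)")
```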

  12. Estimation of cocaine consumption in the community: a critical comparison of the results from three complementary techniques

    PubMed Central

    Reid, Malcolm J; Langford, Katherine H; Grung, Merete; Gjerde, Hallvard; Amundsen, Ellen J; Morland, Jorg; Thomas, Kevin V

    2012-01-01

    Objectives A range of approaches are now available to estimate the level of drug use in the community, so it is desirable to critically compare results from the differing techniques. This paper presents a comparison of the results from three methods for estimating the level of cocaine use in the general population. Design The comparison applies to: a set of regional-scale sample survey questionnaires, a representative sample survey on drug use among drivers and an analysis of the quantity of cocaine-related metabolites in sewage. Setting 14 438 participants provided data for the set of regional-scale sample survey questionnaires; 2341 drivers provided oral-fluid samples and untreated sewage from 570 000 people was analysed for biomarkers of cocaine use. All data were collected in Oslo, Norway. Results 0.70 (0.36-1.03)% of drivers tested positive for cocaine use, which suggests a prevalence that is higher than the 0.22 (0.13-0.30)% (per day) figure derived from regional-scale survey questionnaires, but the degree to which cocaine consumption in the driver population follows the general population is an unanswered question. Despite the comparatively low prevalence figure, the survey questionnaires did provide estimates of the volume of consumption that are comparable with the amount of cocaine-related metabolites in sewage. Per-user consumption estimates are, however, highlighted as a significant source of uncertainty, as little or no data on the quantities consumed by individuals are available, and much of the existing data are contradictory. Conclusions The comparison carried out in the present study can provide an excellent means of checking the quality and accuracy of the three measurement techniques because they each approach the problem from a different viewpoint. Together the three complementary techniques provide a well-balanced assessment of the drug-use situation in a given community and identify areas where more research is needed. PMID:23144259
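    The sewage-based estimate rests on a standard back-calculation: metabolite load, scaled by the cocaine/benzoylecgonine molar-mass ratio and divided by the excreted fraction and the population served. A back-of-envelope sketch (the concentration, flow and excretion fraction below are round assumed numbers, not the study's values):

```python
# Back-calculation of community cocaine use from benzoylecgonine (BE) in sewage.
be_ng_per_l = 400.0            # assumed BE concentration in influent, ng/L
flow_l_per_day = 150e6         # assumed daily sewage flow for ~570,000 people, L
excretion_frac = 0.29          # assumed fraction of a dose excreted as BE
mw_ratio = 303.4 / 289.3       # cocaine / benzoylecgonine molar mass ratio
population = 570_000

be_g = be_ng_per_l * flow_l_per_day * 1e-9           # grams of BE per day
cocaine_g = be_g * mw_ratio / excretion_frac
print(f"~{cocaine_g:.0f} g/day, {1000 * cocaine_g / population:.2f} mg/person/day")
```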

  13. Imaging the North Anatolian Fault using the scattered teleseismic wavefield

    NASA Astrophysics Data System (ADS)

    Thompson, D. A.; Rost, S.; Houseman, G. A.; Cornwell, D. G.; Turkelli, N.; Teoman, U.; Kahraman, M.; Altuncu Poyraz, S.; Gülen, L.; Utkucu, M.; Frederiksen, A. W.; Rondenay, S.

    2013-12-01

    The North Anatolian Fault Zone (NAFZ) is a major continental strike-slip fault system, similar in size and scale to the San Andreas system, that extends ~1200 km across Turkey. In 2012, a new multidisciplinary project (FaultLab) was instigated to better understand deformation throughout the entire crust in the NAFZ, in particular the expected transition from narrow zones of brittle deformation in the upper crust to possibly broader shear zones in the lower crust/upper mantle and how these features contribute to the earthquake loading cycle. This contribution will discuss the first results from the seismic component of the project, a 73-station network encompassing the northern and southern branches of the NAFZ in the Sakarya region. The Dense Array for North Anatolia (DANA) is arranged as a 6×11 grid with a nominal station spacing of 7 km, with a further 7 stations located outside of the main grid. With the excellent resolution afforded by the DANA network, we will present images of crustal structure using the technique of teleseismic scattering tomography. The method uses a full waveform inversion of the teleseismic scattered wavefield coupled with array processing techniques to infer the properties and location of small-scale heterogeneities (with scales on the order of the seismic wavelength) within the crust. We will also present preliminary results of teleseismic scattering migration, another powerful method that benefits from the dense data coverage of the deployed seismic network. Images obtained using these methods together with other conventional imaging techniques will provide evidence for how the deformation is distributed within the fault zone at depth, providing constraints that can be used in conjunction with structural analyses of exhumed fault segments and models of geodetic strain-rate across the fault system. By linking together results from the complementary techniques being employed in the FaultLab project, we aim to produce a comprehensive picture of fault structure and dynamics throughout the crust and shallow upper mantle of this major active fault zone.

  14. HYDRORECESSION: A toolbox for streamflow recession analysis

    NASA Astrophysics Data System (ADS)

    Arciniega, S.

    2015-12-01

    Streamflow recession curves are hydrological signatures that allow studying the relationship between groundwater storage and baseflow and/or low flows at the catchment scale. Recent studies have shown that streamflow recession analysis can be quite sensitive to the combination of different models, extraction techniques and parameter estimation methods. In order to better characterize streamflow recession curves, new methodologies combining multiple approaches have been recommended. The HYDRORECESSION toolbox, presented here, is a Matlab graphical user interface developed to analyse streamflow recession time series, with tools for parameterizing linear and nonlinear storage-outflow relationships through four of the most useful recession models (Maillet, Boussinesq, Coutagne and Wittenberg). The toolbox includes four parameter-fitting techniques (linear regression, lower envelope, data binning and mean squared error) and three different methods to extract hydrograph recession segments (Vogel, Brutsaert and Aksoy). In addition, the toolbox has a module that separates the baseflow component from the observed hydrograph using the inverse reservoir algorithm. Potential applications provided by HYDRORECESSION include model parameter analysis, hydrological regionalization and classification, baseflow index estimates, catchment-scale recharge and low-flow modelling, among others. HYDRORECESSION is freely available for non-commercial and academic purposes.
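    The simplest of the four models, Maillet's linear reservoir Q(t) = Q0·exp(-t/k), can be fitted in a few lines once a recession segment has been extracted: regress log Q on time. A minimal sketch with an invented falling-limb segment (not the toolbox code):

```python
import numpy as np

q = np.array([9.8, 7.9, 6.5, 5.4, 4.5, 3.8, 3.2, 2.7, 2.3, 2.0])  # m3/s, daily
t = np.arange(len(q))

assert np.all(np.diff(q) < 0), "recession segments must be strictly decreasing"
slope, intercept = np.polyfit(t, np.log(q), 1)
k = -1.0 / slope                            # storage constant, days
q0 = np.exp(intercept)
print(f"Maillet fit: Q0={q0:.1f} m3/s, k={k:.1f} days")
```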

  15. Assessment of sub-grid scale dispersion closure with regularized deconvolution method in a particle-laden turbulent jet

    NASA Astrophysics Data System (ADS)

    Wang, Qing; Zhao, Xinyu; Ihme, Matthias

    2017-11-01

    Particle-laden turbulent flows are important in numerous industrial applications, such as spray combustion engines and solar energy collectors. It is of interest to study this type of flow numerically, especially using large-eddy simulation (LES). However, capturing the turbulence-particle interaction in LES remains challenging due to the insufficient representation of the effect of sub-grid scale (SGS) dispersion. In the present work, a closure technique for the SGS dispersion using the regularized deconvolution method (RDM) is assessed. RDM was proposed as the closure for the SGS dispersion in a counterflow spray that was studied numerically using a finite difference method on a structured mesh. A presumed form of the LES filter is used in those simulations. In the present study, the technique has been extended to a finite volume method with an unstructured mesh, where no presumption on the filter form is required. The method is applied to a series of particle-laden turbulent jets. Parametric analyses of the model performance are conducted for flows with different Stokes numbers and Reynolds numbers. The results from LES are compared against experiments and direct numerical simulations (DNS).
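    The deconvolution idea underlying such closures can be illustrated with simple iterative van Cittert updates against a Gaussian test filter. This is only a conceptual 1-D stand-in; the paper's RDM formulation and its regularization details are not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def approx_deconvolve(filtered, sigma, n_iter=5, beta=1.0):
    """Approximately defilter a field: u_{k+1} = u_k + beta*(f - G u_k)."""
    u = filtered.copy()
    for _ in range(n_iter):
        u = u + beta * (filtered - gaussian_filter1d(u, sigma))
    return u

x = np.linspace(0, 2 * np.pi, 512)
truth = np.sin(3 * x) + 0.5 * np.sin(15 * x)     # resolved plus subgrid-ish scales
filt = gaussian_filter1d(truth, sigma=4)         # LES-like filtering
recon = approx_deconvolve(filt, sigma=4)
print("filtered error:", np.abs(filt - truth).max().round(3),
      "deconvolved error:", np.abs(recon - truth).max().round(3))
```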

  16. OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms

    DOE PAGES

    Meng, Zhaoyi; Koniges, Alice; He, Yun Helen; ...

    2016-09-21

    In this paper, we investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nystrom extension to calculate eigenvalues/eigenvectors of the graph Laplacian, and this is a self-contained module that can be used in conjunction with other graph-Laplacian-based methods such as spectral clustering. We use performance tools to collect the hotspots and memory access patterns of the serial codes and use OpenMP as the parallelization language to parallelize the most time-consuming parts. Where possible, we also use library routines. We then optimize the OpenMP implementations and detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and test the optimization steps on emerging testbed systems based on Intel's Knights Corner and Landing processors. We show both performance improvement and strong scaling behavior. Finally, a large number of optimization techniques and analyses are necessary before the algorithm reaches almost ideal scaling.
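
    The Nyström extension mentioned above is compact enough to sketch. The snippet below (Python/NumPy rather than the paper's OpenMP code) approximates the leading eigenvectors of a Gaussian affinity matrix from a small landmark subproblem; the landmark count, kernel width and data are illustrative assumptions.

    ```python
    import numpy as np

    def nystrom_eigvecs(X, m=50, k=10, gamma=0.5, seed=0):
        """Approximate the k leading eigenvectors of the n x n Gaussian
        affinity matrix from an m x m landmark subproblem (m << n)."""
        rng = np.random.default_rng(seed)
        land = X[rng.choice(len(X), size=m, replace=False)]
        W_mm = np.exp(-gamma * ((land[:, None] - land[None, :]) ** 2).sum(-1))
        W_nm = np.exp(-gamma * ((X[:, None] - land[None, :]) ** 2).sum(-1))
        vals, vecs = np.linalg.eigh(W_mm)        # ascending eigenvalues
        vals, vecs = vals[-k:], vecs[:, -k:]     # keep the k largest
        return W_nm @ vecs / vals                # extend to all n points

    X = np.random.default_rng(1).normal(size=(2000, 4))
    U = nystrom_eigvecs(X)                       # (2000, 10) approximation
    ```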

  17. Shuttle radar DEM hydrological correction for erosion modelling in small catchments

    NASA Astrophysics Data System (ADS)

    Jarihani, Ben; Sidle, Roy; Bartley, Rebecca

    2016-04-01

    Digital Elevation Models (DEMs) that accurately replicate both landscape form and processes are critical to support modelling of environmental processes. Catchment- and hillslope-scale runoff and sediment processes (i.e., patterns of overland flow, infiltration, subsurface stormflow and erosion) are all topographically mediated. In remote and data-scarce regions, high-resolution DEMs (e.g., LiDAR) are often not available, and moderate- to coarse-resolution digital elevation models (e.g., SRTM) have difficulty replicating detailed hydrological patterns, especially in relatively flat landscapes. Several surface reconditioning algorithms (e.g., smoothing) and "stream burning" techniques (e.g., AGREE or ANUDEM), in conjunction with representation of the known stream networks, have been used to improve DEM performance in replicating known hydrology. Detailed stream network data are not available at regional and national scales, but can be derived at local scales from remotely sensed data. This research explores the implications of using high-resolution stream network data derived from Google Earth images for DEM hydrological correction, instead of coarse-resolution stream networks derived from topographic maps. The accuracy of the implemented method in producing hydrologically efficient DEMs was assessed by comparing hydrological parameters derived from the modified DEMs with those from limited high-resolution airborne LiDAR DEMs. The degree of modification is dominated by the method used and the availability of stream network data. Although stream burning techniques improve DEMs hydrologically, they alter DEM characteristics in ways that may affect catchment boundaries, stream position and length, as well as secondary terrain derivatives (e.g., slope, aspect). Modification of a DEM to better reflect known hydrology can be useful; however, knowledge of the magnitude and spatial pattern of the changes is required before using a DEM for subsequent analyses.
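
    A minimal sketch of the stream-burning idea follows; it is a simplification of methods such as AGREE, which additionally recondition a buffer zone around the streams, and the function name and fixed drop are illustrative.

    ```python
    import numpy as np

    def burn_streams(dem, stream_mask, drop=10.0):
        """Lower DEM cells coinciding with a rasterised stream network so
        that derived flow paths follow the known hydrography."""
        burned = dem.astype(float).copy()
        burned[stream_mask] -= drop
        return burned

    dem = np.random.default_rng(0).uniform(100.0, 110.0, size=(50, 50))
    streams = np.zeros(dem.shape, dtype=bool)
    streams[25, :] = True                      # a straight test channel
    dem_burned = burn_streams(dem, streams)
    ```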

  18. Nanometer-Scale Chemistry of a Calcite Biomineralization Template: Implications for Skeletal Composition and Nucleation.

    PubMed

    Branson, Oscar; Bonnin, Elisa A; Perea, Daniel E; Spero, Howard J; Zhu, Zihua; Winters, Maria; Hönisch, Bärbel; Russell, Ann D; Fehrenbacher, Jennifer S; Gagnon, Alexander C

    2016-11-15

    Plankton, corals, and other organisms produce calcium carbonate skeletons that are integral to their survival, form a key component of the global carbon cycle, and record an archive of past oceanographic conditions in their geochemistry. A key aspect of the formation of these biominerals is the interaction between organic templating structures and mineral precipitation processes. Laboratory-based studies have shown that these atomic-scale processes can profoundly influence the architecture and composition of minerals, but their importance in calcifying organisms is poorly understood because it is difficult to measure the chemistry of in vivo biomineral interfaces at spatially relevant scales. Understanding the role of templates in biomineral nucleation, and their importance in skeletal geochemistry, requires an integrated, multiscale approach, which can place atom-scale observations of organic-mineral interfaces within a broader structural and geochemical context. Here we map the chemistry of an embedded organic template structure within a carbonate skeleton of the foraminifera Orbulina universa using both atom probe tomography (APT), a 3D chemical imaging technique with Ångström-level spatial resolution, and time-of-flight secondary ionization mass spectrometry (ToF-SIMS), a 2D chemical imaging technique with submicron resolution. We quantitatively link these observations, revealing that the organic template in O. universa is uniquely enriched in both Na and Mg, and contributes to intraskeletal chemical heterogeneity. Our APT analyses reveal the cation composition of the organic surface, offering evidence to suggest that cations other than Ca2+, previously considered passive spectator ions in biomineral templating, may be important in defining the energetics of carbonate nucleation on organic templates.

  19. [Cultural adaptation to Spanish and assessment of an Adolescent Peer Relationships Tool for detecting school bullying: Preliminary study of the psychometric properties].

    PubMed

    Gascón-Cánovas, Juan J; Russo de Leon, Jessica Roxanna; Cózar Fernandez, Antonio; Heredia Calzado, Jose M

    2017-07-01

    School bullying is a growing problem. The current study is aimed at culturally adapting and assessing the psychometric properties of a brief scale to measure bullying. A cross-cultural adaptation of the brief scale -Adolescent Peer Relations Instrument-Bullying (APRI)- was performed using the translation and back-translation technique. The Spanish version of the APRI questionnaire was administered to a sample of 1,428 schoolchildren aged 12-14 years in the region of Mar Menor in Murcia (Spain). Exploratory factor analysis, with oblique rotation, was used to assess the validity of the internal structure; Cronbach's alpha to analyse consistency; and the Kruskal-Wallis test to check the ability to discriminate between subjects with varying degrees of bullying according to the social acceptability scale of the Kidscreen-52. RESULTS: Two factors were identified in the adapted version of the APRI (physical victimisation and verbal/social victimisation), similar to those in the original scale. The questionnaire has high internal consistency (Cronbach's alpha=0.94) and discrimination capacity (P<.01), with significant effect sizes between degrees of bullying. The internal structure of the APRI Spanish version is similar to the original, and its scores confirm high reliability and construct validity. Further studies need to be performed with broader age ranges and confirmatory analysis techniques, to ratify the equivalence of the adapted version with the original version. Copyright © 2015 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.
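
    For readers unfamiliar with the reliability statistic reported above, the following is a minimal Python sketch of Cronbach's alpha on synthetic Likert-type data; all values are illustrative and unrelated to the APRI sample.

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents x k_items) score matrix:
        alpha = k/(k-1) * (1 - sum(item variances) / variance(total))."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1.0) * (1.0 - item_var / total_var)

    rng = np.random.default_rng(0)
    trait = rng.normal(size=(100, 1))      # shared latent trait
    scores = np.clip(np.round(3 + trait + rng.normal(scale=0.8, size=(100, 6))), 1, 5)
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```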

  20. Multivariate analyses of tinnitus complaint and change in tinnitus complaint: a masker study.

    PubMed

    Jakes, S; Stephens, S D

    1987-11-01

    Multivariate statistical techniques were used to re-analyse the data from the recent DHSS multi-centre masker study. These analyses were undertaken to three ends: first, to clarify and attempt to replicate the previously found factor structure of complaints about tinnitus; second, to attempt to identify common factors in the change or improvement measures pre- and post-masker treatment; and third, to identify predictors of any such outcome factors. Two complaint factors were identified: 'distress' and 'intrusiveness'. A series of analyses was conducted on change measures using different numbers of subjects and variables. When only semantic differential scales were used, the change factors were very similar to the complaint factors noted above. When variables measuring other aspects of improvement were included, several other factors were identified, including 'tinnitus helped', 'masking effects', 'residual inhibition' and 'matched loudness'. Twenty-five conceptually distinct predictors of outcome were identified. These predictor variables were quite different for different outcome factors. For example, high-frequency hearing loss was a predictor of tinnitus being helped by the masker, and a low-frequency match and a low masking threshold predicted therapeutic success on residual inhibition. Decrease in matched loudness was predicted by louder tinnitus initially.

  1. Analysis of the variability of extra-tropical cyclones at the regional scale for the coasts of Northern Germany and investigation of their coastal impacts

    NASA Astrophysics Data System (ADS)

    Schaaf, Benjamin; Feser, Frauke

    2015-04-01

    The evaluation of long-term changes in wind speed is very important for coastal areas and their protection measures. To this end, the wind variability at the regional scale for the coast of Northern Germany is analysed. In order to derive changes in storminess it is essential to analyse long, homogeneous meteorological time series. Wind measurements often suffer from inconsistencies which arise from changes in instrumentation, observation method, or station location. Reanalysis data take such inhomogeneities of observation data into account and convert the measurements into a consistent, gridded data set with uniform grid spacing and time intervals. This leads to a smooth, homogeneous data set, but with relatively low resolution (about 210 km for the longest reanalysis data set, the NCEP reanalysis starting in 1948). Therefore a high-resolution regional atmospheric model is used to bring these reanalyses to a higher resolution, applying the spectral nudging technique in addition to a dynamical downscaling approach. This method 'nudges' the large spatial scales of the regional climate model towards the reanalysis, while the smaller spatial scales are left unchanged. It has been applied successfully in a number of studies, leading to realistic atmospheric weather descriptions of the past. With the regional climate model COSMO-CLM, a very high-resolution data set was calculated for the last 67 years, the period from 1948 until now. The model area is Northern Germany, including the coastal area of the North Sea and parts of the Baltic Sea. This is one of the first model simulations at climate scale with a very high resolution of 2.8 km, so even small-scale effects can be detected. This hindcast simulation opens numerous options for evaluation. One can create wind climatologies for regional areas such as the metropolitan region of Hamburg, or investigate individual storms in case studies. With a filtering and tracking program the course of individual storms can be tracked and compared with observations. Statistical analyses can also be carried out, yielding percentiles, return periods and other extreme-value statistics. Later, with a further nesting simulation, the grid spacing can be reduced to 1 km for individual areas of interest, to analyse small islands (such as Foehr or Amrum) and their effects on the atmospheric flow more closely.
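
    The scale-selective part of spectral nudging can be illustrated offline: relax only the large-scale Fourier modes of a model field toward the driving reanalysis and leave smaller scales untouched. In a real regional model the nudging term enters the integration at every time step; the Python sketch below, with an illustrative wavenumber cutoff and relaxation coefficient, shows the principle only.

    ```python
    import numpy as np

    def spectral_nudge(model, driving, n_keep=3, alpha=0.1):
        """Relax wavenumbers |k| <= n_keep of a 2-D field toward the
        driving field; smaller scales are left unchanged."""
        fm, fd = np.fft.fft2(model), np.fft.fft2(driving)
        kx = np.abs(np.fft.fftfreq(model.shape[0]) * model.shape[0])
        ky = np.abs(np.fft.fftfreq(model.shape[1]) * model.shape[1])
        large = (kx[:, None] <= n_keep) & (ky[None, :] <= n_keep)
        fm[large] += alpha * (fd[large] - fm[large])
        return np.fft.ifft2(fm).real

    rng = np.random.default_rng(0)
    nudged = spectral_nudge(rng.normal(size=(64, 64)),   # model field
                            rng.normal(size=(64, 64)))   # reanalysis field
    ```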

  2. CO{sub 2} Sequestration Capacity and Associated Aspects of the Most Promising Geologic Formations in the Rocky Mountain Region: Local-Scale Analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laes, Denise; Eisinger, Chris; Morgan, Craig

    2013-07-30

    The purpose of this report is to provide a summary of individual local-scale CCS site characterization studies conducted in Colorado, New Mexico and Utah. These site-specific characterization analyses were performed as part of the “Characterization of Most Promising Sequestration Formations in the Rocky Mountain Region” (RMCCS) project. The primary objective of these local-scale analyses is to provide a basis for regional-scale characterization efforts within each state. Specifically, limits on time and funding will typically inhibit CCS projects from conducting high-resolution characterization of a state-sized region, but smaller (< 10,000 km²) site analyses are usually possible, and such analyses can provide insight regarding limiting factors for the regional-scale geology. For the RMCCS project, the outcomes of these local-scale studies provide a starting point for future local-scale site characterization efforts in the Rocky Mountain region.

  3. Groundwater development stress: Global-scale indices compared to regional modeling

    USGS Publications Warehouse

    Alley, William; Clark, Brian R.; Ely, Matt; Faunt, Claudia

    2018-01-01

    The increased availability of global datasets and technologies such as global hydrologic models and the Gravity Recovery and Climate Experiment (GRACE) satellites have resulted in a growing number of global-scale assessments of water availability using simple indices of water stress. Developed initially for surface water, such indices are increasingly used to evaluate global groundwater resources. We compare indices of groundwater development stress for three major agricultural areas of the United States to information available from regional water budgets developed from detailed groundwater modeling. These comparisons illustrate the potential value of regional-scale analyses to supplement global hydrological models and GRACE analyses of groundwater depletion. Regional-scale analyses allow assessments of water stress that better account for scale effects, the dynamics of groundwater flow systems, the complexities of irrigated agricultural systems, and the laws, regulations, engineering, and socioeconomic factors that govern groundwater use. Strategic use of regional-scale models with global-scale analyses would greatly enhance knowledge of the global groundwater depletion problem.

  4. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    NASA Astrophysics Data System (ADS)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large-scale surface models with fine spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). A multiple Boolean viewshed analysis and a global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large-scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
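
    The Boolean building block of such analyses is a line-of-sight test on the surface model. A minimal sketch on a synthetic grid follows (observer height and grid values are illustrative; extended viewsheds add angular measures on top of this test):

    ```python
    import numpy as np

    def visible(dem, observer, target, obs_height=1.6):
        """True if no cell along the straight line between observer and
        target rises above the interpolated sight line."""
        (r0, c0), (r1, c1) = observer, target
        n = int(max(abs(r1 - r0), abs(c1 - c0)))
        rows = np.linspace(r0, r1, n + 1).round().astype(int)
        cols = np.linspace(c0, c1, n + 1).round().astype(int)
        sight = np.linspace(dem[r0, c0] + obs_height, dem[r1, c1], n + 1)
        return bool(np.all(dem[rows, cols][1:-1] <= sight[1:-1]))

    dem = np.random.default_rng(2).uniform(200.0, 205.0, size=(100, 100))
    print(visible(dem, observer=(10, 10), target=(80, 90)))
    ```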

  5. Whorfian effects on colour memory are not reliable.

    PubMed

    Wright, Oliver; Davies, Ian R L; Franklin, Anna

    2015-01-01

    The Whorfian hypothesis suggests that differences between languages cause differences in cognitive processes. Support for this idea comes from studies that find that patterns of colour memory errors made by speakers of different languages align with differences in colour lexicons. The current study provides a large-scale investigation of the relationship between colour language and colour memory, adopting a cross-linguistic and developmental approach. Colour memory on a delayed matching-to-sample (XAB) task was investigated in 2 language groups with differing colour lexicons, for 3 developmental stages and 2 regions of colour space. Analyses used a Bayesian technique to provide simultaneous assessment of two competing hypotheses (H1-Whorfian effect present, H0-Whorfian effect absent). Results of the analyses consistently favoured H0. The findings suggest that Whorfian effects on colour memory are not reliable and that the importance of such effects should not be overestimated.

  6. Gaining insights from social media language: Methodologies and challenges.

    PubMed

    Kern, Margaret L; Park, Gregory; Eichstaedt, Johannes C; Schwartz, H Andrew; Sap, Maarten; Smith, Laura K; Ungar, Lyle H

    2016-12-01

    Language data available through social media provide opportunities to study people at an unprecedented scale. However, little guidance is available to psychologists who want to enter this area of research. Drawing on tools and techniques developed in natural language processing, we first introduce psychologists to social media language research, identifying descriptive and predictive analyses that language data allow. Second, we describe how raw language data can be accessed and quantified for inclusion in subsequent analyses, exploring personality as expressed on Facebook to illustrate. Third, we highlight challenges and issues to be considered, including accessing and processing the data, interpreting effects, and ethical issues. Social media has become a valuable part of social life, and there is much we can learn by bringing together the tools of computer science with the theories and insights of psychology. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Controls on Mississippi Valley-Type Zn-Pb mineralization in Behabad district, Central Iran: Constraints from spatial and numerical analyses

    NASA Astrophysics Data System (ADS)

    Parsa, Mohammad; Maghsoudi, Abbas

    2018-04-01

    The Behabad district, located in the central Iranian microcontinent, contains numerous epigenetic stratabound carbonate-hosted Zn-Pb ore bodies. The mineralization formed as fault, fracture and karst fillings in the Permian-Triassic formations, especially in Middle Triassic dolostones, and comprises mainly non-sulfide zinc ores. These are all interpreted as Mississippi Valley-type (MVT) base metal deposits. From an economic geology point of view, it is imperative to recognize the processes that plausibly controlled the emplacement of MVT Zn-Pb mineralization in the Behabad district. To address this issue, analyses of the spatial distribution of mineral deposits, comprising Fry and fractal techniques, and analysis of the spatial association of mineral deposits with geological features, using distance distribution analysis, were applied to assess the regional-scale processes that could have operated in the distribution of MVT Zn-Pb deposits in the district. The results obtained from these analytical techniques show that the main trends of the occurrences are NW-SE and NE-SW, parallel or subparallel to the major northwest- and northeast-trending faults, supporting the idea that these particular faults could have acted as the main conduits for the transport of mineral-bearing fluids. The results also suggest that Permian-Triassic brittle carbonate sedimentary rocks have served as the lithological control on MVT mineralization in the Behabad district, as they are spatially and temporally associated with mineralization.
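
    Fry analysis itself is simple to sketch: translate the deposit set so that each deposit in turn sits at the origin and plot the union of all n(n-1) translation vectors, whose anisotropy reveals preferred orientations. The Python snippet below uses synthetic coordinates; it illustrates the generic technique, not the authors' workflow.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    def fry_points(xy):
        """All pairwise translation vectors between deposit locations."""
        xy = np.asarray(xy, dtype=float)
        diff = xy[None, :, :] - xy[:, None, :]
        mask = ~np.eye(len(xy), dtype=bool)    # drop self-pairs
        return diff[mask]

    deposits = np.random.default_rng(3).uniform(0.0, 100.0, size=(40, 2))
    fry = fry_points(deposits)
    plt.scatter(fry[:, 0], fry[:, 1], s=4)
    plt.gca().set_aspect("equal")
    plt.title("Fry plot")
    plt.show()
    ```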

  8. Multitemporal Three Dimensional Imaging of Volcanic Products on the Macro- and Micro- Scale

    NASA Astrophysics Data System (ADS)

    Carter, A. J.; Ramsey, M. S.; Durant, A. J.; Skilling, I. P.

    2006-12-01

    Satellite data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) can be processed using a nadir- and backward-viewing band at the same wavelength to generate a Digital Elevation Model (DEM) at a maximum spatial resolution of 15 metres. Bezymianny Volcano (Kamchatka Peninsula, Russia) was chosen as a test target for multitemporal DEM generation. DEMs were used to generate a layer stack and calculate coarse topographic changes from 2000 to 2006, the most significant of which was a new crater that formed in spring 2005. The eruption that occurred on 11 January 2005 produced a pyroclastic deposit on the east flank, which was mapped and from which samples were collected in August 2005. A comparison was made between field-based observations of the deposit and micron-scale roughness (analogous to vesicularity) derived from ASTER thermal infrared data following the model described in Ramsey and Fink (1999) on lava domes. In order to investigate applying this technique to the pyroclastic deposits, 18 small samples from Bezymianny were selected for Scanning Electron Microscope (SEM) micron-scale analysis. The SEM image data were processed using software capable of calculating surface roughness and vesicle volume from stereo pairs: a statistical analysis of samples is presented using a high resolution grid of surface profiles. The results allow for a direct comparison to field, laboratory, and satellite-based estimates of micron-scale roughness. Prior to SEM processing, laboratory thermal emission spectra of the microsamples were collected and modelled to estimate vesicularity. Each data set was compared and assessed for coherence within the limitations of each technique. This study outlines the value of initially imaging at the macro-scale to assess major topographic changes over time at the volcano. This is followed by an example of the application of micro-scale SEM imaging and spectral deconvolution, highlighting the advantages of using multiple resolutions to analyse frequently overlapping products at Bezymianny.

  9. The performance of moss, grass, and 1- and 2-year old spruce needles as bioindicators of contamination: a comparative study at the scale of the Czech Republic.

    PubMed

    Suchara, Ivan; Sucharova, Julie; Hola, Marie; Reimann, Clemens; Boyd, Rognvald; Filzmoser, Peter; Englmaier, Peter

    2011-05-01

    Moss (Pleurozium schreberi), grass (Avenella flexuosa), and 1- and 2-year old spruce (Picea abies) needles were collected over the territory of the Czech Republic at an average sample density of 1 site per 290 km². The samples were analysed for 39 elements (Ag, Al, As, Ba, Be, Bi, Ca, Cd, Ce, Co, Cr, Cs, Cu, Fe, Ga, Hg, K, La, Li, Mg, Mn, Mo, Na, Nd, Ni, Pb, Pr, Rb, S, Sb, Se, Sn, Sr, Th, Tl, U, V, Y and Zn) using ICP-MS and ICP-AES techniques (the major nutrients Ca, K, Mg and Na were not analysed in moss). Moss showed by far the highest element concentrations for most elements. Exceptions were Ba (spruce), Mn (spruce), Mo (grass), Ni (spruce), Rb (grass) and S (grass). Regional distribution maps and spatial trend analysis were used to study the suitability of the four materials as bioindicators of anthropogenic contamination. The highly industrialised areas in the north-west and the far east of the country and several more local contamination sources were indicated in the distribution maps of one or several sample materials. At the scale of the whole country moss was the best indicator of known contamination sources. However, on a more local scale, it appeared that spruce needles were especially well suited for detection of urban contamination. Copyright © 2010 Elsevier B.V. All rights reserved.

  10. Spatial Analysis of Rice Blast in China at Three Different Scales.

    PubMed

    Guo, Fangfang; Chen, Xinglong; Lu, Minghong; Yang, Li; Wang, Shi Wei; Wu, Bo Ming

    2018-05-22

    In this study, spatial analyses were conducted at three different scales to better understand the epidemiology of rice blast, a major rice disease caused by Magnaporthe oryzae. At the regional scale, across the major rice production regions in China, rice blast incidence was monitored on 101 dates at 193 stations from June 10th to September 10th during 2009-2014, and surveyed in 143 fields in September 2016; at the county scale, 3 surveys were done covering 1-5 counties in 2015-2016; and at the field scale, blast was evaluated in 6 fields in 2015-2016. Spatial cluster and hot spot analyses were conducted in GIS on the geographical pattern of the disease at the regional scale, and geostatistical analysis was performed at all three scales. Cluster and hot spot analyses revealed that high-disease areas were clustered in mountainous areas of China. Geostatistical analyses detected spatial dependence of blast incidence with influence ranges of 399 to 1080 km at the regional scale, and 5 to 10 m at the field scale, but not at the county scale. The spatial patterns at different scales might be determined by inherent properties of rice blast and environmental driving forces, and findings from this study provide helpful information for sampling and management of rice blast.
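
    The geostatistical backbone of such analyses is the empirical semivariogram, whose range estimates the distance over which incidence values remain spatially dependent. A minimal sketch on synthetic data follows (positions, values and lag bins are illustrative; empty bins yield NaN):

    ```python
    import numpy as np

    def semivariogram(xy, values, bins):
        """gamma(h) = 0.5 * mean squared difference of values for point
        pairs whose separation distance falls in each lag bin."""
        d = np.sqrt(((xy[:, None, :] - xy[None, :, :]) ** 2).sum(-1))
        dv2 = (values[:, None] - values[None, :]) ** 2
        iu = np.triu_indices(len(xy), k=1)           # each pair once
        h, g = d[iu], 0.5 * dv2[iu]
        idx = np.digitize(h, bins)
        return np.array([g[idx == i].mean() for i in range(1, len(bins))])

    rng = np.random.default_rng(4)
    xy = rng.uniform(0, 1000, size=(200, 2))          # field positions, m
    vals = rng.normal(size=200)                       # e.g. blast incidence
    gamma = semivariogram(xy, vals, bins=np.linspace(0, 500, 11))
    ```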

  11. Analysis and application of ERTS-1 data for regional geological mapping

    NASA Technical Reports Server (NTRS)

    Gold, D. P.; Parizek, R. R.; Alexander, S. A.

    1973-01-01

    Combined visual and digital techniques of analysing ERTS-1 data for geologic information have been tried on selected areas in Pennsylvania. The major physiographic and structural provinces show up well. Supervised mapping, following the imaged expression of known geologic features on ERTS band 5 enlargements (1:250,000) of parts of eastern Pennsylvania, delimited the Diabase Sills and the Precambrian rocks of the Reading Prong with remarkable accuracy. From unsupervised mapping, transgressive linear features are apparent in unexpected density, and exhibit strong control over river valley and stream channel directions. They are unaffected by bedrock type, age, or primary structural boundaries, which suggests they are either rejuvenated basement joint directions on different scales, or a recently impressed structure possibly associated with a drifting North American plate. With ground mapping and underflight data, 6 scales of linear features have been recognized.

  12. Accelerating Large Scale Image Analyses on Parallel, CPU-GPU Equipped Systems

    PubMed Central

    Teodoro, George; Kurc, Tahsin M.; Pan, Tony; Cooper, Lee A.D.; Kong, Jun; Widener, Patrick; Saltz, Joel H.

    2014-01-01

    The past decade has witnessed a major paradigm shift in high performance computing with the introduction of accelerators as general purpose processors. These computing devices make available very high parallel computing power at low cost and power consumption, transforming current high performance platforms into heterogeneous CPU-GPU equipped systems. Although the theoretical performance achieved by these hybrid systems is impressive, taking practical advantage of this computing power remains a very challenging problem. Most applications are still deployed to either GPU or CPU, leaving the other resource under- or un-utilized. In this paper, we propose, implement, and evaluate a performance aware scheduling technique along with optimizations to make efficient collaborative use of CPUs and GPUs on a parallel system. In the context of feature computations in large scale image analysis applications, our evaluations show that intelligently co-scheduling CPUs and GPUs can significantly improve performance over GPU-only or multi-core CPU-only approaches. PMID:25419545

  13. S-band omnidirectional antenna for the SERT-C satellite

    NASA Technical Reports Server (NTRS)

    Bassett, H. L.; Cofer, J. W., Jr.; Sheppard, R. R.; Sinclair, M. J.

    1975-01-01

    The program to design an S-band omnidirectional antenna system for the SERT-C spacecraft is discussed. The program involved the tasks of antenna analyses by computer techniques, scale-model radiation pattern measurements of a number of antenna systems, full-scale RF measurements, and the recommended design, including detailed drawings. A number of antenna elements were considered; the cavity-backed spiral, quadrifilar helix, and crossed dipoles were chosen for in-depth studies. The final design consisted of a two-element array of cavity-backed spirals mounted on opposite sides of the spacecraft and fed in phase through a hybrid junction. This antenna system meets the coverage requirement of having a gain of at least minus 10 dBi over 50 percent of a 4 pi steradian sphere with the solar panels in operation. This coverage level is increased if the ground station has the capability to change polarization.

  14. High resolution 40Ar/39Ar chronostratigraphy of the Late Cretaceous El Gallo Formation, Baja California del Norte, Mexico

    NASA Astrophysics Data System (ADS)

    Renne, Paul R.; Fulford, Madeleine M.; Busby-Spera, Cathy

    1991-03-01

    Laser probe 40Ar/39Ar analyses of individual sanidine grains from four tuffs in the alluvial Late Cretaceous (Campanian) El Gallo Formation yield statistically distinct mean dates ranging from 74.87±0.05 Ma to 73.59±0.09 Ma. The exceptional precision of these dates permits calculation of statistically significant sediment accumulation rates that are much higher than passive sediment loading would cause, implying rapid tectonically induced subsidence. The dates bracket tightly the age of important dinosaur and mammalian faunas previously reported from the El Gallo Formation. The dates support an age less than 73 Ma for the Campanian/Maastrichtian stage boundary, younger than indicated by several currently used time scales. Further application of the single grain 40Ar/39Ar technique may be expected to greatly benefit stratigraphic studies of Mesozoic sedimentary basins and contribute to calibration of biostratigraphic and magnetostratigraphic time scales.

  15. Pragmatically Applied Cervical and Thoracic Nonthrust Manipulation Versus Thrust Manipulation for Patients With Mechanical Neck Pain: A Multicenter Randomized Clinical Trial.

    PubMed

    Griswold, David; Learman, Ken; Kolber, Morey J; O'Halloran, Bryan; Cleland, Joshua A

    2018-03-01

    Study Design Randomized clinical trial. Background The comparative effectiveness between nonthrust manipulation (NTM) and thrust manipulation (TM) for mechanical neck pain has been investigated, with inconsistent results. Objective To compare the clinical effectiveness of concordant cervical and thoracic NTM and TM for patients with mechanical neck pain. Methods The Neck Disability Index (NDI) was the primary outcome. Secondary outcomes included the Patient-Specific Functional Scale (PSFS), numeric pain-rating scale (NPRS), deep cervical flexion endurance (DCF), global rating of change (GROC), number of visits, and duration of care. The covariate was clinical equipoise for intervention. Outcomes were collected at baseline, visit 2, and discharge. Patients were randomly assigned to receive either NTM or TM directed at the cervical and thoracic spines. Techniques and dosages were selected pragmatically and applied to the most symptomatic level. Two-way mixed-model analyses of covariance were used to assess clinical outcomes at 3 time points. Analyses of covariance were used to assess between-group differences for the GROC, number of visits, and duration of care at discharge. Results One hundred three patients were included in the analyses (NTM, n = 55 and TM, n = 48). The between-group analyses revealed no differences in outcomes on the NDI (P = .67), PSFS (P = .26), NPRS (P = .25), DCF (P = .98), GROC (P = .77), number of visits (P = .21), and duration of care (P = .61) for patients with mechanical neck pain who received either NTM or TM. Conclusion NTM and TM produce equivalent outcomes for patients with mechanical neck pain. The trial was registered with ClinicalTrials.gov (NCT02619500). Level of Evidence Therapy, level 1b. J Orthop Sports Phys Ther 2018;48(3):137-145. Epub 6 Feb 2018. doi:10.2519/jospt.2018.7738.

  16. Desertification in the south Junggar Basin, 2000-2009: Part I. Spatial analysis and indicator retrieval

    NASA Astrophysics Data System (ADS)

    Jiang, Miao; Lin, Yi

    2018-07-01

    Desertification is a serious environmental problem that threatens ecological balance and societal sustainability, and the pursuit of efficient techniques for its monitoring is a continuing priority. Compared to in-situ investigation, remote sensing (RS) has proved to be an efficient solution, particularly over large areas, although previous RS-based studies mostly focused on the proposal and validation of various indicators for different scenarios. To comprehensively reflect desertification and project its trend, this study attempted to develop a new comprehensive RS information model, with a test scenario deployed in the south Junggar Basin, China, for the decade 2000-2009. Establishing such a model, however, is not simple, involving the selection of RS images with appropriate spatial resolutions and uniform retrievals of indicators with high accuracies. To handle these fundamental problems, this Part I compared the merits and faults of MODIS and TM images for desertification characterization, by making spatial analyses including land cover patch- and pixel-scale analyses and land attribute semi-variance and scale-agreement analyses. After the MODIS images with a resolution of 250 m were identified as the appropriate choice, multiple representative indicators relating to different aspects of desertification processes (NDVI, fraction of vegetation cover, land surface temperature, albedo and soil moisture) were uniformly retrieved using their individual effective algorithms and downscaling. Tests showed that the spatial analyses helped ensure the premise of the whole study and that the indicator retrievals were reliable. These contributions have fundamental implications for improving RS-based desertification analysis and create a firm foundation for developing the RS information model in Part II.
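
    Two of the indicators named above have compact definitions, sketched below in Python. The bare-soil and full-vegetation NDVI end members in the fraction-of-vegetation-cover step are illustrative assumptions, not the paper's calibrated values.

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        """Normalised Difference Vegetation Index from near-infrared and
        red surface reflectance (e.g. MODIS bands 2 and 1)."""
        nir, red = np.asarray(nir, float), np.asarray(red, float)
        return (nir - red) / (nir + red + eps)

    def fvc(ndvi_img, ndvi_soil=0.05, ndvi_veg=0.85):
        """Fraction of vegetation cover by the linear mixing model, with
        assumed bare-soil and full-vegetation NDVI end members."""
        return np.clip((ndvi_img - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)
    ```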

  17. Real-time observations of lithium battery reactions-operando neutron diffraction analysis during practical operation.

    PubMed

    Taminato, Sou; Yonemura, Masao; Shiotani, Shinya; Kamiyama, Takashi; Torii, Shuki; Nagao, Miki; Ishikawa, Yoshihisa; Mori, Kazuhiro; Fukunaga, Toshiharu; Onodera, Yohei; Naka, Takahiro; Morishima, Makoto; Ukyo, Yoshio; Adipranoto, Dyah Sulistyanintyas; Arai, Hajime; Uchimoto, Yoshiharu; Ogumi, Zempachi; Suzuki, Kota; Hirayama, Masaaki; Kanno, Ryoji

    2016-06-30

    Among the energy storage devices for applications in electric vehicles and stationary uses, lithium batteries typically deliver high performance. However, there is still a missing link between the engineering developments for large-scale batteries and the fundamental science of each battery component. Elucidating reaction mechanisms under practical operation is crucial for future battery technology. Here, we report an operando diffraction technique that uses high-intensity neutrons to detect reactions in non-equilibrium states driven by high-current operation in commercial 18650 cells. The experimental system comprising a time-of-flight diffractometer with automated Rietveld analysis was developed to collect and analyse diffraction data produced by sequential charge and discharge processes. Furthermore, observations under high current drain revealed inhomogeneous reactions, a structural relaxation after discharge, and a shift in the lithium concentration ranges with cycling in the electrode matrix. The technique provides valuable information required for the development of advanced batteries.

  18. Identification and assessment of professional competencies for implementation of nanotechnology in engineering education

    NASA Astrophysics Data System (ADS)

    Jean, Ming-Der; Jiang, Ji-Bin; Chien, Jia-Yi

    2017-11-01

    The purpose of this study was to construct indicators of professional competencies for the nanotechnology-based sputtering system industry based on industry requirements, and to analyse the core competencies of the industry in order to promote human resources in physical vapour deposition technology. Document analysis, expert interviews, and Delphi technique surveys were employed, and 32 survey items divided into 7 domains were selected according to the consensus opinions of 10 experts in the Delphi survey. Analysis of the three questionnaire surveys, including K-S tests, showed that the professional competence scales had good internal consistency. The findings of this study provide guidelines on professional competence for nanotechnology-based sputtering technology as applied in the surface heat-treatment industry. These guidelines can also reveal the practical competency requirements of nanotechnology-based sputtering technology to deal with subsequent challenges, future developments, and invisible services for students in a technology institute programme.

  19. A simple Lagrangian forecast system with aviation forecast potential

    NASA Technical Reports Server (NTRS)

    Petersen, R. A.; Homan, J. H.

    1983-01-01

    A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.

  20. Real-time observations of lithium battery reactions—operando neutron diffraction analysis during practical operation

    PubMed Central

    Taminato, Sou; Yonemura, Masao; Shiotani, Shinya; Kamiyama, Takashi; Torii, Shuki; Nagao, Miki; Ishikawa, Yoshihisa; Mori, Kazuhiro; Fukunaga, Toshiharu; Onodera, Yohei; Naka, Takahiro; Morishima, Makoto; Ukyo, Yoshio; Adipranoto, Dyah Sulistyanintyas; Arai, Hajime; Uchimoto, Yoshiharu; Ogumi, Zempachi; Suzuki, Kota; Hirayama, Masaaki; Kanno, Ryoji

    2016-01-01

    Among the energy storage devices for applications in electric vehicles and stationary uses, lithium batteries typically deliver high performance. However, there is still a missing link between the engineering developments for large-scale batteries and the fundamental science of each battery component. Elucidating reaction mechanisms under practical operation is crucial for future battery technology. Here, we report an operando diffraction technique that uses high-intensity neutrons to detect reactions in non-equilibrium states driven by high-current operation in commercial 18650 cells. The experimental system comprising a time-of-flight diffractometer with automated Rietveld analysis was developed to collect and analyse diffraction data produced by sequential charge and discharge processes. Furthermore, observations under high current drain revealed inhomogeneous reactions, a structural relaxation after discharge, and a shift in the lithium concentration ranges with cycling in the electrode matrix. The technique provides valuable information required for the development of advanced batteries. PMID:27357605

  1. Nanoscale deformation and friction characteristics of atomically thin WSe2 and heterostructure using nanoscratch and Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Manimunda, P.; Nakanishi, Y.; Jaques, Y. M.; Susarla, S.; Woellner, C. F.; Bhowmick, S.; Asif, S. A. S.; Galvão, D. S.; Tiwary, C. S.; Ajayan, P. M.

    2017-12-01

    2D transition metal diselenides are attracting much attention due to their interesting optical, chemical and electronic properties. Here, the deformation characteristics of monolayer and multilayer WSe2, and of its heterostructure with MoSe2, were investigated using a new technique that combines nanoscratching and Raman spectroscopy. The 2D monolayer WSe2 showed anisotropy in deformation. The effect of the number of WSe2 layers on friction characteristics was explored in detail. Experimental observations were further supported by MD simulations. Raman spectra recorded from the scratched regions showed strain-induced degeneracy splitting. The nano-scale scratch tests were further extended to MoSe2-WSe2 lateral heterostructures, and the effect of deformation on the lateral heterojunctions was analysed using PL and Raman spectroscopy. This new technique is completely general and can be applied to study other 2D materials.

  2. Electrocoagulation of Palm Oil Mill Effluent

    PubMed Central

    Agustin, Melissa B.; Sengpracha, Waya P.; Phutdhawong, Weerachai

    2008-01-01

    Electrocoagulation (EC) is an electrochemical technique which has been employed in the treatment of various kinds of wastewater. In this work the potential use of EC for the treatment of palm oil mill effluent (POME) was investigated. At laboratory scale, POME from a factory site in Chumporn Province (Thailand) was subjected to EC using aluminum electrodes and sodium chloride as the supporting electrolyte. Results show that EC can reduce the turbidity, acidity, COD, and BOD of the POME as well as some of its heavy metal content. Phenolic compounds are also removed from the effluent. Recovery techniques were applied to the coagulated fraction, and the recovered compounds were analysed for antioxidant activity by the DPPH method. The isolate was found to have moderate antioxidant activity. From this investigation, it can be concluded that EC is an efficient method for the treatment of POME.

  3. Chemical Characterization and Determination of the Anti-Oxidant Capacity of Two Brown Algae with Respect to Sampling Season and Morphological Structures Using Infrared Spectroscopy and Multivariate Analyses.

    PubMed

    Beratto, Angelo; Agurto, Cristian; Freer, Juanita; Peña-Farfal, Carlos; Troncoso, Nicolás; Agurto, Andrés; Castillo, Rosario Del P

    2017-10-01

    Brown algae biomass has been shown to be a highly important industrial source for the production of alginates and different nutraceutical products. The characterization of this biomass is necessary in order to allocate its use to specific applications according to the chemical and biological characteristics of this highly variable resource. The methods commonly used for algae characterization require a long time for the analysis and rigorous pretreatments of samples. In this work, nondestructive and fast analyses of different morphological structures from Lessonia spicata and Macrocystis pyrifera, which were collected during different seasons, were performed using Fourier transform infrared (FT-IR) techniques in combination with chemometric methods. Mid-infrared (IR) and near-infrared (NIR) spectral ranges were tested to evaluate the spectral differences between the species, seasons, and morphological structures of algae using a principal component analysis (PCA). Quantitative analyses of the polyphenol and alginate contents and the anti-oxidant capacity of the samples were performed using partial least squares (PLS) with both spectral ranges in order to build predictive models for the rapid quantification of these parameters for industrial purposes. The PCA mainly showed differences in the samples based on seasonal sampling, where changes were observed in the bands corresponding to polysaccharides, proteins, and lipids. The obtained PLS models had high correlation coefficients (r) for the polyphenol content and anti-oxidant capacity (r > 0.9) and lower values for the alginate determination (0.7 < r < 0.8). Fourier transform infrared-based techniques were suitable tools for the rapid characterization of algae biomass, in which high variability in the samples was incorporated for the qualitative and quantitative analyses, and have the potential to be used on an industrial scale.
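
    A minimal sketch of the PLS calibration step follows, using scikit-learn on synthetic stand-ins for the spectra and the wet-chemistry reference values; the number of latent variables and all data are illustrative.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(5)
    X = rng.normal(size=(60, 500))       # spectra (samples x wavenumbers)
    y = 2.0 * X[:, 100] + rng.normal(scale=0.1, size=60)  # reference values

    pls = PLSRegression(n_components=5)  # number of latent variables
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
    r = np.corrcoef(y, y_cv)[0, 1]       # cross-validated correlation
    print(f"cross-validated r = {r:.2f}")
    ```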

  4. Enantioselective Analytical- and Preparative-Scale Separation of Hexabromocyclododecane Stereoisomers Using Packed Column Supercritical Fluid Chromatography.

    PubMed

    Riddell, Nicole; Mullin, Lauren Gayle; van Bavel, Bert; Ericson Jogsten, Ingrid; McAlees, Alan; Brazeau, Allison; Synnott, Scott; Lough, Alan; McCrindle, Robert; Chittim, Brock

    2016-11-10

    Hexabromocyclododecane (HBCDD) is an additive brominated flame retardant which has been listed in Annex A of the Stockholm Convention for elimination of production and use. It has been reported to persist in the environment and has the potential for enantiomer-specific degradation, accumulation, or both, making enantioselective analyses increasingly important. The six main stereoisomers of technical HBCDD (i.e., the (+) and (-) enantiomers of α-, β-, and γ-HBCDD) were separated and isolated for the first time using enantioselective packed column supercritical fluid chromatography (pSFC) separation methods on a preparative scale. Characterization was completed using published chiral liquid chromatography (LC) methods and elution profiles, as well as X-ray crystallography, and the isolated fractions were definitively identified. Additionally, the resolution of the enantiomers, along with two minor components of the technical product (δ- and ε-HBCDD), was investigated on an analytical scale using both LC and pSFC separation techniques, and changes in elution order were highlighted. Baseline separation of all HBCDD enantiomers was achieved by pSFC on an analytical scale using a cellulose-based column. The described method emphasizes the potential associated with pSFC as a green method of isolating and analyzing environmental contaminants of concern.

  5. Construct validation of emotional labor scale for a sample of Pakistani corporate employees.

    PubMed

    Akhter, Noreen

    2017-02-01

    To translate, adapt and validate an emotional labour scale for Pakistani corporate employees. This study was conducted in Rawalpindi and Islamabad from October 2014 to December 2015, and comprised customer service employees of commercial banks and telecommunication companies. It comprised two independent parts. Part one had two steps: step one involved translation and adaptation of the instrument; in the second step, the psychometric properties of the translated scale were established by administering it to customer service employees from commercial banks and the telecommunication sector. Data from the pilot study were analysed using exploratory factor analysis to extract the initial factors of emotional labour. Part two comprised the main study. Commercial bank employees were included in the sample using a convenience sampling technique. SPSS 20 was used for data analysis. There were 145 participants in the first study and 495 in the second study. Exploratory factor analysis initially generated a three-factor model of emotional labour, which was further confirmed by confirmatory factor analysis, suggesting that emotional labour has three distinct dimensions, i.e. surface acting, deep acting and genuine expression of emotions. The emotional labour scale was found to be a valid and reliable measure.

  6. Multi-scale functional mapping of tidal marsh vegetation for restoration monitoring

    NASA Astrophysics Data System (ADS)

    Tuxen Bettman, Karin

    2007-12-01

    Nearly half of the world's natural wetlands have been destroyed or degraded, and in recent years, there have been significant endeavors to restore wetland habitat throughout the world. Detailed mapping of restoring wetlands can offer valuable information about changes in vegetation and geomorphology, which can inform the restoration process and ultimately help to improve chances of restoration success. I studied six tidal marshes in the San Francisco Estuary, CA, US, between 2003 and 2004 in order to develop techniques for mapping tidal marshes at multiple scales by incorporating specific restoration objectives for improved longer term monitoring. I explored a "pixel-based" remote sensing image analysis method for mapping vegetation in restored and natural tidal marshes, describing the benefits and limitations of this type of approach (Chapter 2). I also performed a multi-scale analysis of vegetation pattern metrics for a recently restored tidal marsh in order to target the metrics that are consistent across scales and will be robust measures of marsh vegetation change (Chapter 3). Finally, I performed an "object-based" image analysis using the same remotely sensed imagery, which maps vegetation type and specific wetland functions at multiple scales (Chapter 4). The combined results of my work highlight important trends and management implications for monitoring wetland restoration using remote sensing, and will better enable restoration ecologists to use remote sensing for tidal marsh monitoring. Several findings important for tidal marsh restoration monitoring were made. Overall results showed that pixel-based methods are effective at quantifying landscape changes in composition and diversity in recently restored marshes, but are limited in their use for quantifying smaller, more fine-scale changes. While pattern metrics can highlight small but important changes in vegetation composition and configuration across years, scientists should exercise caution when using metrics in their studies or to validate restoration management decisions, and multi-scale analyses should be performed before metrics are used in restoration science for important management decisions. Lastly, restoration objectives, ecosystem function, and scale can each be integrated into monitoring techniques using remote sensing for improved restoration monitoring.

  7. AQMEII3: the EU and NA regional scale program of the ...

    EPA Pesticide Factsheets

    The presentation builds on the work presented last year at the 14th CMAS meeting and is applied to the work performed in the context of the AQMEII-HTAP collaboration. The analysis is conducted within the framework of the third phase of AQMEII (Air Quality Model Evaluation International Initiative) and encompasses the gauging of model performance through measurement-to-model comparison, error decomposition and time series analysis of the models' biases. Through the comparison of several regional-scale chemistry transport modelling systems applied to simulate meteorology and air quality over two continental areas, this study aims to i) apportion the error to the responsible processes through time-scale analysis, ii) help detect the causes of model error, and iii) identify the processes and scales most urgently requiring dedicated investigation. The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while the apportioning of the error into its constituent parts (bias, variance and covariance) can help assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the previous phases of AQMEII.
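
    The bias/variance/covariance split referred to above is the standard decomposition MSE = (mean_m - mean_o)^2 + (sigma_m - sigma_o)^2 + 2*sigma_m*sigma_o*(1 - r). A minimal sketch on synthetic series follows; the AQMEII analysis additionally separates the error by timescale before apportioning, which is omitted here.

    ```python
    import numpy as np

    def mse_decomposition(model, obs):
        """Split MSE into bias, variance and covariance components."""
        bias2 = (model.mean() - obs.mean()) ** 2
        sm, so = model.std(), obs.std()
        r = np.corrcoef(model, obs)[0, 1]
        return bias2, (sm - so) ** 2, 2.0 * sm * so * (1.0 - r)

    rng = np.random.default_rng(6)
    obs = rng.normal(size=1000)                    # observed series
    model = 0.8 * obs + 0.3 + rng.normal(scale=0.5, size=1000)
    print(mse_decomposition(model, obs))           # components sum to MSE
    ```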

  8. Integrating hydrologic and geophysical data to constrain coastal surficial aquifer processes at multiple spatial and temporal scales

    USGS Publications Warehouse

    Schultz, Gregory M.; Ruppel, Carolyn; Fulton, Patrick; Hyndman, David W.; Day-Lewis, Frederick D.; Singha, Kamini

    2007-01-01

    Since 1997, repeated, coincident geophysical surveys and extensive hydrologic studies in shallow monitoring wells have been used to study static and dynamic processes associated with surface water-groundwater interaction at a range of spatial scales at the estuarine and ocean boundaries of an undeveloped, permeable barrier island in the Georgia part of the U.S. South Atlantic Bight. Geophysical and hydrologic data measure different parameters, at different resolution and precision, and over vastly different spatial scales, making it challenging to reconcile or combine the coincident data. Here we use complementary inversion, hydrogeochemical analyses and well-based groundwater monitoring, and, in some cases, limited vegetation mapping, to demonstrate the utility of an integrative, multidisciplinary approach for elucidating groundwater processes at spatial scales (tens to thousands of meters) that are often difficult to capture with traditional hydrologic approaches. The case studies highlight regional aquifer characteristics, varying degrees of lateral saltwater intrusion at estuarine boundaries, complex subsurface salinity gradients at the ocean boundary, and imaging of submarsh groundwater discharge and possible free convection in the pore waters of a clastic marsh. This study also documents the use of geophysical techniques for detecting temporal changes in groundwater salinity regimes under natural (not forced) gradients at intratidal to interannual (1998-2002 Southeastern U.S.A. drought) time scales.

  9. A methodology for least-squares local quasi-geoid modelling using a noisy satellite-only gravity field model

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-04-01

    The paper is about a methodology to combine a noisy satellite-only global gravity field model (GGM) with other noisy datasets to estimate a local quasi-geoid model using weighted least-squares techniques. In this way, we attempt to improve the quality of the estimated quasi-geoid model and to complement it with a full noise covariance matrix for quality control and further data processing. The methodology goes beyond the classical remove-compute-restore approach, which does not account for the noise in the satellite-only GGM. We suggest and analyse three different approaches of data combination. Two of them are based on a local single-scale spherical radial basis function (SRBF) model of the disturbing potential, and one is based on a two-scale SRBF model. Using numerical experiments, we show that a single-scale SRBF model does not fully exploit the information in the satellite-only GGM. We explain this by a lack of flexibility of a single-scale SRBF model to deal with datasets of significantly different bandwidths. The two-scale SRBF model performs well in this respect, provided that the model coefficients representing the two scales are estimated separately. The corresponding methodology is developed in this paper. Using the statistics of the least-squares residuals and the statistics of the errors in the estimated two-scale quasi-geoid model, we demonstrate that the developed methodology provides a two-scale quasi-geoid model, which exploits the information in all datasets.
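
    The generic estimator underlying such a combination is weighted least squares with a full noise covariance matrix, which also yields the covariance of the estimated parameters for quality control. The sketch below uses illustrative dimensions and a diagonal covariance for brevity; the paper's actual two-scale SRBF estimation is considerably more involved.

    ```python
    import numpy as np

    def weighted_lsq(A, y, Sigma):
        """x_hat = (A^T W A)^-1 A^T W y with W = Sigma^-1, plus the
        covariance of x_hat, which propagates the data noise."""
        W = np.linalg.inv(Sigma)
        Cx = np.linalg.inv(A.T @ W @ A)     # parameter covariance
        return Cx @ (A.T @ W @ y), Cx

    rng = np.random.default_rng(7)
    A = rng.normal(size=(100, 10))          # design matrix (e.g. SRBF values)
    x_true = rng.normal(size=10)
    Sigma = 0.01 * np.eye(100)              # noise covariance (full in general)
    y = A @ x_true + rng.multivariate_normal(np.zeros(100), Sigma)
    x_hat, Cx = weighted_lsq(A, y, Sigma)
    ```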

  10. Towards ground-truthing of spaceborne estimates of above-ground live biomass and leaf area index in tropical rain forests

    NASA Astrophysics Data System (ADS)

    Köhler, P.; Huth, A.

    2010-08-01

    The canopy height h of forests is a key variable which can be obtained using air- or spaceborne remote sensing techniques such as radar interferometry or LIDAR. If new allometric relationships between canopy height and the biomass stored in the vegetation can be established, this would offer the possibility of global monitoring of the above-ground carbon content on land. In the absence of adequate field data we use simulation results of a tropical rain forest growth model to propose what degree of information might be generated from canopy height, and thus to enable ground-truthing of potential future satellite observations. We analyse the correlation between canopy height in a tropical rain forest and other structural characteristics, such as above-ground live biomass (AGB) (and thus the carbon content of vegetation) and leaf area index (LAI), and identify how correlation and uncertainty vary between two different spatial scales. The process-based forest growth model FORMIND2.0 was applied to simulate (a) undisturbed forest growth and (b) a wide range of possible disturbance regimes typical of local tree logging conditions for a tropical rain forest site on Borneo (Sabah, Malaysia) in South-East Asia. In both undisturbed and disturbed forests, AGB can be expressed as a power-law function of canopy height h (AGB = a · h^b) with an r² of about 60% if data are analysed at a spatial resolution of 20 m × 20 m (0.04 ha, also called plot size). The coefficient of determination of the regression becomes significantly better in the disturbed forest sites (r² = 91%) if data are analysed hectare-wide. There seems to be no functional dependency between LAI and canopy height, but there is a linear correlation (r² ~ 60%) between AGB and the area fraction of gaps in which the canopy is highly disturbed. A reasonable agreement of our results with observations is obtained from a comparison of the simulations with permanent sampling plot (PSP) data from the same region and with the large-scale forest inventory in Lambir. We conclude that spaceborne remote sensing techniques such as LIDAR and radar interferometry have the potential to quantify the carbon contained in the vegetation, although, due to the heterogeneity of the forest landscape, this calculation contains structural uncertainties which restrict future applications to spatial averages of about one hectare in size. The uncertainties in AGB for a given canopy height are here 20-40% (95% confidence level), corresponding to a standard deviation of less than ±10%. This uncertainty at the 1 ha scale is much smaller than in the analysis of 0.04 ha-scale data. At the small scale (0.04 ha), AGB can only be calculated from canopy height with an uncertainty at least of the magnitude of the signal itself, due to the natural spatial heterogeneity of these forests.
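
    The AGB-height relation above is a power law, conventionally fitted by linear regression in log-log space. A minimal sketch on synthetic data follows (the coefficients and noise level are illustrative, not the FORMIND results):

    ```python
    import numpy as np

    def fit_power_law(h, agb):
        """Fit AGB = a * h^b in log-log space and report r^2 of the fit."""
        b, log_a = np.polyfit(np.log(h), np.log(agb), 1)
        resid = np.log(agb) - (log_a + b * np.log(h))
        r2 = 1.0 - resid.var() / np.log(agb).var()
        return np.exp(log_a), b, r2

    rng = np.random.default_rng(8)
    h = rng.uniform(10.0, 50.0, size=500)                      # canopy height, m
    agb = 0.5 * h ** 1.7 * rng.lognormal(sigma=0.3, size=500)  # synthetic AGB, t/ha
    a, b, r2 = fit_power_law(h, agb)
    ```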

  11. Measuring thermal conductivity of thin films and coatings with the ultra-fast transient hot-strip technique

    NASA Astrophysics Data System (ADS)

    Belkerk, B. E.; Soussou, M. A.; Carette, M.; Djouadi, M. A.; Scudeller, Y.

    2012-07-01

    This paper reports the ultra-fast transient hot-strip (THS) technique for determining the thermal conductivity of thin films and coatings of materials on substrates. The film thicknesses can vary between 10 nm and more than 10 µm. Precise measurement of thermal conductivity was performed with an experimental device generating ultra-short electrical pulses, and the subsequent temperature increases were electrically measured on nanosecond and microsecond time scales. The electrical pulses were applied within metallized micro-strips patterned on the sample films, and the temperature increases were analysed within time periods selected in the window from 100 ns to 10 µs. The thermal conductivity of the films was extracted from the time-dependent thermal impedance of the samples derived from a three-dimensional heat diffusion model. The technique is described and its performance demonstrated on different materials covering a large thermal conductivity range. Experiments were carried out on bulk Si and thin films of amorphous SiO2 and crystallized aluminum nitride (AlN). The present approach can assess film thermal resistances as low as 10⁻⁸ K m² W⁻¹ with a precision of about 10%, which had not previously been attained with the THS technique.
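
    The paper extracts conductivity from a full 3-D thermal-impedance model; a far simpler illustration of the same family of transient methods is the classical line-source asymptote, in which the temperature rise grows with ln(t) and the slope gives the conductivity (all numbers below are illustrative, and this approximation is not the authors' model):

        import numpy as np

        # Synthetic heating curve: dT(t) = q/(4*pi*k) * ln(t) + const (line-source asymptote)
        q = 50.0        # power per unit length of the strip [W/m] (illustrative)
        k_true = 1.4    # thermal conductivity [W/(m K)], e.g. amorphous SiO2
        t = np.logspace(-7, -5, 200)                    # 100 ns .. 10 us
        dT = q / (4.0 * np.pi * k_true) * np.log(t) + 60.0
        dT += np.random.default_rng(2).normal(0.0, 0.01, t.size)   # measurement noise

        # Estimate k from the slope of dT versus ln(t)
        slope = np.polyfit(np.log(t), dT, 1)[0]
        k_est = q / (4.0 * np.pi * slope)
        print(f"estimated k = {k_est:.2f} W/(m K)")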

  12. Back School programme for nurses has reduced low back pain levels: A randomised controlled trial.

    PubMed

    Járomi, Melinda; Kukla, Aniko; Szilágyi, Brigitta; Simon-Ugron, Ágnes; Bobály, Viktória Kovácsné; Makai, Alexandra; Linek, Pawel; Ács, Pongrác; Leidecker, Eleonóra

    2018-03-01

    (i) To examine patient lifting techniques used by nurses, and (ii) to evaluate the effectiveness of the Spine Care for Nurses programme in chronic nonspecific low back pain syndrome reduction and the execution of proper patient lifting techniques. Millions of nurses around the world suffer from occupational-related chronic nonspecific low back pain (chronic nonspecific low back pain syndrome). Generally, low back pain in nurses is a result of increased pressure on the spine and can be associated with improperly conducted patient lifting techniques. A randomised controlled trial was conducted among 137 nurses with chronic nonspecific low back pain syndrome. Participants were randomised into an experimental and a control group (experimental group n = 67, control group n = 70). Nurses in the experimental group attended the Spine Care for Nurses programme for 3 months. The programme consisted of didactic education, spine-strengthening exercises and education on safe patient handling techniques. The control group only received brief written lifestyle guidance. The Zebris WinSpine Triple Lumbar examination was used to analyse nurses' patient lifting techniques (horizontal and vertical lifting). The lumbar pain intensity was measured with a 0-100 visual analogue scale. The pre-intervention average chronic nonspecific low back pain syndrome intensity score on the visual analogue scale decreased from 49.3 to a postintervention score of 7.5. The correct execution of vertical lifting techniques in the experimental group increased from 8.91% to 97.01% (control group: 8.57% pre-intervention and 11.42% postintervention). Correct execution of the horizontal patient lifting technique increased from 10.44% to 100% in the experimental group (control group: 10.00% pre-intervention and 11.42% postintervention). The Spine Care for Nurses programme significantly reduced chronic nonspecific low back pain syndrome and increased the number of properly executed horizontal and vertical patient lifting techniques in nurses. We recommend that healthcare organisations consider the implementation of regular Spine Care for Nurses programmes as successful low back injury prevention programmes. © 2017 John Wiley & Sons Ltd.

  13. 3-dimensional electron microscopic imaging of the zebrafish olfactory bulb and dense reconstruction of neurons.

    PubMed

    Wanner, Adrian A; Genoud, Christel; Friedrich, Rainer W

    2016-11-08

    Large-scale reconstructions of neuronal populations are critical for structural analyses of neuronal cell types and circuits. Dense reconstructions of neurons from image data require ultrastructural resolution throughout large volumes, which can be achieved by automated volumetric electron microscopy (EM) techniques. We used serial block face scanning EM (SBEM) and conductive sample embedding to acquire an image stack from an olfactory bulb (OB) of a zebrafish larva at a voxel resolution of 9.25 × 9.25 × 25 nm³. Skeletons of 1,022 neurons, 98% of all neurons in the OB, were reconstructed by manual tracing and efficient error correction procedures. An ergonomic software package, PyKNOSSOS, was created in Python for data browsing, neuron tracing, synapse annotation, and visualization. The reconstructions allow for detailed analyses of morphology, projections and subcellular features of different neuron types. The high density of reconstructions enables geometrical and topological analyses of the OB circuitry. Image data can be accessed and viewed through the neurodata web services (http://www.neurodata.io). Raw data and reconstructions can be visualized in PyKNOSSOS.

  14. Measurement of the neutrino component of an antineutrino beam observed by a nonmagnetized detector

    NASA Astrophysics Data System (ADS)

    Aguilar-Arevalo, A. A.; Anderson, C. E.; Brice, S. J.; Brown, B. C.; Bugel, L.; Conrad, J. M.; Dharmapalan, R.; Djurcic, Z.; Fleming, B. T.; Ford, R.; Garcia, F. G.; Garvey, G. T.; Grange, J.; Green, J. A.; Imlay, R.; Johnson, R. A.; Karagiorgi, G.; Katori, T.; Kobilarcik, T.; Linden, S. K.; Louis, W. C.; Mahn, K. B. M.; Marsh, W.; Mauger, C.; Metcalf, W.; Mills, G. B.; Mirabal, J.; Moore, C. D.; Mousseau, J.; Nelson, R. H.; Nguyen, V.; Nienaber, P.; Nowak, J. A.; Osmanov, B.; Patch, A.; Pavlovic, Z.; Perevalov, D.; Polly, C. C.; Ray, H.; Roe, B. P.; Russell, A. D.; Shaevitz, M. H.; Sorel, M.; Spitz, J.; Stancu, I.; Stefanski, R. J.; Tayloe, R.; Tzanov, M.; van de Water, R. G.; Wascko, M. O.; White, D. H.; Wilking, M. J.; Zeller, G. P.; Zimmerman, E. D.

    2011-10-01

    Two methods are employed to measure the neutrino flux of the antineutrino-mode beam observed by the MiniBooNE detector. The first method compares data to simulated event rates in a high-purity νμ-induced charged-current single π+ (CC1π+) sample while the second exploits the difference between the angular distributions of muons created in νμ and ν¯μ charged-current quasielastic (CCQE) interactions. The results from both analyses indicate the prediction of the neutrino flux component of the predominantly antineutrino beam is overestimated: the CC1π+ analysis indicates the predicted νμ flux should be scaled by 0.76±0.11, while the CCQE angular fit yields 0.65±0.23. The energy spectrum of the flux prediction is checked by repeating the analyses in bins of reconstructed neutrino energy, and the results show that the spectral shape is well-modeled. These analyses are a demonstration of techniques for measuring the neutrino contamination of antineutrino beams observed by future nonmagnetized detectors.
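
    The quoted scale factors are the kind of result a one-parameter template fit produces: choose the normalization α of the predicted νμ rate that minimizes χ² against data. A toy sketch (bin contents, errors, and the "true" scale are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(3)
        predicted = np.array([120.0, 200.0, 160.0, 90.0, 40.0])  # predicted nu_mu events per energy bin (toy)
        true_scale = 0.76                                        # toy "true" flux scaling
        data = rng.poisson(true_scale * predicted).astype(float)
        sigma = np.sqrt(np.maximum(data, 1.0))                   # statistical errors

        # chi^2(alpha) = sum_i ((d_i - alpha * p_i) / sigma_i)^2 has a closed-form minimum:
        w = 1.0 / sigma**2
        alpha = np.sum(w * data * predicted) / np.sum(w * predicted**2)
        alpha_err = 1.0 / np.sqrt(np.sum(w * predicted**2))
        print(f"fitted flux scale: {alpha:.2f} +/- {alpha_err:.2f}")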

  15. 3-dimensional electron microscopic imaging of the zebrafish olfactory bulb and dense reconstruction of neurons

    PubMed Central

    Wanner, Adrian A.; Genoud, Christel; Friedrich, Rainer W.

    2016-01-01

    Large-scale reconstructions of neuronal populations are critical for structural analyses of neuronal cell types and circuits. Dense reconstructions of neurons from image data require ultrastructural resolution throughout large volumes, which can be achieved by automated volumetric electron microscopy (EM) techniques. We used serial block face scanning EM (SBEM) and conductive sample embedding to acquire an image stack from an olfactory bulb (OB) of a zebrafish larva at a voxel resolution of 9.25 × 9.25 × 25 nm³. Skeletons of 1,022 neurons, 98% of all neurons in the OB, were reconstructed by manual tracing and efficient error correction procedures. An ergonomic software package, PyKNOSSOS, was created in Python for data browsing, neuron tracing, synapse annotation, and visualization. The reconstructions allow for detailed analyses of morphology, projections and subcellular features of different neuron types. The high density of reconstructions enables geometrical and topological analyses of the OB circuitry. Image data can be accessed and viewed through the neurodata web services (http://www.neurodata.io). Raw data and reconstructions can be visualized in PyKNOSSOS. PMID:27824337

  16. Seasonal changes in the techniques employed by wild chimpanzees in the Mahale Mountains, Tanzania, to feed on termites (Pseudacanthotermes spiniger).

    PubMed

    Uehara, S

    1982-01-01

    During a short period, wild chimpanzees of group K in the Mahale Mountains employ a set of several techniques, including tool use, to feed on one species of termite (Pseudacanthotermes spiniger). They appear to use each technique appropriately according to phenological changes in the prey insect's activities. The chimpanzees also ingest small pieces of soil from the tower of P. spiniger's mound throughout the year. Geophagy presumably makes them visually and tactually aware of the phenological changes of the termite's reproductive cycle. Analyses of fecal samples from the chimpanzees indicate interannual fluctuations in the amount of termites ingested. On the other hand, the chimpanzees of group B, ranging to the north of group K, utilize a fishing technique to obtain another type of termite (Macrotermes ?herus) on a large scale during the first half of the wet season. Fecal analysis data show that chimpanzees of group B consume far more termites than those of group K. We discuss the probability that the same or similar tool-using techniques, such as fishing, may be employed by chimpanzees of different unit groups to feed on different types of insects, according to subtle local differences in the insect fauna of their home ranges.

  17. From Aeolis Palus to the Bagnold Dunes field: Overview of martian soil analyses performed by ChemCam in Gale Crater

    NASA Astrophysics Data System (ADS)

    Cousin, A.; Meslin, P. Y.; Dehouck, E.; David, G.; Rapin, W.; Schröder, S.; Forni, O.; Gasnault, O.; Williams, A. J.; Lasue, J.; Stein, N.; Ehlmann, B. L.; Payre, V.; Anderson, R. B.; Blaney, D. L.; Bridges, N. T.; Clark, B. C.; Frydenvang, J.; Gasda, P. J.; Johnson, J. R.; Lanza, N.; l'Haridon, J.; Mangold, N.; Maurice, S.; Newsom, H. E.; Ollila, A.; Pinet, P. C.; Sautter, V.; Thomas, N. H.; Wiens, R. C.

    2017-12-01

    In situ analysis of the chemical and mineralogical composition of the martian soil, and the determination of its volatile inventory, can provide important constraints on the bulk composition of the martian crust, on its igneous diversity, but also on the physical and chemical weathering processes that have altered its primary igneous constituents. Transport processes that have occurred over long geological time scales, however, make this analysis quite complex, as constituents from different unknown sources are mixed together, and may have been sorted according to grain size or density. A meteoritic contribution is also present. Disentangling the influence of each of these processes requires the use of different analytical techniques, at different spatial scales, and at different locations over the planet. We present an overview of the soil analyses obtained over the past 5 years by the ChemCam instrument on board MSL/Curiosity. Their specificity lies in their small spatial scale (~300 μm), close to the average grain size. At this scale, chemical trends are observed, resulting from the mixing of different end-members with different grain sizes: coarse felsic grains of likely local origin, fine grains with a basaltic composition close to soil compositions observed at other landing sites, but distinct from local rocks, and a fine-grained, Si-poor, volatile-rich component probably associated with the XRD-amorphous component detected by the CheMin instrument. The thin ablation depth associated with each laser shot (~1 μm) enables us to analyse the surface of the grains, which is characterized by a strong, but variable, hydrogen signal. These analyses provide constraints on the composition of a possible alteration rind or coating present at their surface. An extensive, multi-instrument investigation of active dunes (barchan and linear dunes) has also been carried out, revealing slight chemical differences with surrounding soils, and a more homogeneous composition, although chemical variations as a function of grain size are observed, with coarser grains enriched in mafic minerals. These results illustrate the still ongoing influence of aeolian transport on the physical sorting of loose, unconsolidated sediments. These results also provide ground truth for orbital IR observations of aeolian bedforms.

  18. Video-assisted structured teaching to improve aseptic technique during neuraxial block.

    PubMed

    Friedman, Z; Siddiqui, N; Mahmoud, S; Davies, S

    2013-09-01

    Teaching epidural catheter insertion tends to focus on developing manual dexterity rather than improving aseptic technique, which usually remains poor despite increasing experience. The aim of this study was to compare epidural aseptic technique performance by novice operators after a targeted teaching intervention with that of operators taught aseptic technique before the intervention was initiated. Starting July 2008, two groups of second-year anaesthesia residents (pre- and post-teaching intervention) performing their 4-month obstetric anaesthesia rotation in a university-affiliated centre were videotaped three to four times while performing epidural procedures. Trained blinded independent examiners reviewed the procedures. The primary outcome was a comparison of aseptic technique performance scores (0-30 points) graded on a task-specific checklist. A total of 86 sessions by 29 residents were included in the study analysis. The intraclass correlation coefficient for inter-rater reliability for the aseptic technique was 0.90. The median aseptic technique scores for the rotation period were significantly higher in the post-intervention group [27.58, inter-quartile range (IQR) 22.33-29.50 vs 16.56, IQR 13.33-22.00]. Similar results were demonstrated when scores were analysed for low, moderate, and high levels of experience throughout the rotation. Procedure-specific aseptic technique teaching, aided by video assessment and video demonstration, helped significantly improve aseptic practice by novice trainees. Future studies should consider looking at retention over longer periods of time in more senior residents.

  19. Multi-scale structures of turbulent magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakamura, T. K. M., E-mail: takuma.nakamura@oeaw.ac.at; Nakamura, R.; Narita, Y.

    2016-05-15

    We have analyzed data from a series of 3D fully kinetic simulations of turbulent magnetic reconnection with a guide field. A new concept of the guide field reconnection process has recently been proposed, in which the secondary tearing instability and the resulting formation of oblique, small scale flux ropes largely disturb the structure of the primary reconnection layer and lead to 3D turbulent features [W. Daughton et al., Nat. Phys. 7, 539 (2011)]. In this paper, we further investigate the multi-scale physics in this turbulent, guide field reconnection process by introducing a wave number band-pass filter (k-BPF) technique in which modes for the small scale (less than ion scale) fluctuations and the background large scale (more than ion scale) variations are separately reconstructed from the wave number domain to the spatial domain in the inverse Fourier transform process. Combining with the Fourier based analyses in the wave number domain, we successfully identify spatial and temporal development of the multi-scale structures in the turbulent reconnection process. When considering a strong guide field, the small scale tearing mode and the resulting flux ropes develop over a specific range of oblique angles mainly along the edge of the primary ion scale flux ropes and reconnection separatrix. The rapid merging of these small scale modes leads to a smooth energy spectrum connecting ion and electron scales. When the guide field is sufficiently weak, the background current sheet is strongly kinked and oblique angles for the small scale modes are widely scattered at the kinked regions. Similar approaches handling both the wave number and spatial domains will be applicable to the data from multipoint, high-resolution spacecraft observations such as the NASA magnetospheric multiscale (MMS) mission.

  20. Multi-scale structures of turbulent magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Nakamura, T. K. M.; Nakamura, R.; Narita, Y.; Baumjohann, W.; Daughton, W.

    2016-05-01

    We have analyzed data from a series of 3D fully kinetic simulations of turbulent magnetic reconnection with a guide field. A new concept of the guide field reconnection process has recently been proposed, in which the secondary tearing instability and the resulting formation of oblique, small scale flux ropes largely disturb the structure of the primary reconnection layer and lead to 3D turbulent features [W. Daughton et al., Nat. Phys. 7, 539 (2011)]. In this paper, we further investigate the multi-scale physics in this turbulent, guide field reconnection process by introducing a wave number band-pass filter (k-BPF) technique in which modes for the small scale (less than ion scale) fluctuations and the background large scale (more than ion scale) variations are separately reconstructed from the wave number domain to the spatial domain in the inverse Fourier transform process. Combining with the Fourier based analyses in the wave number domain, we successfully identify spatial and temporal development of the multi-scale structures in the turbulent reconnection process. When considering a strong guide field, the small scale tearing mode and the resulting flux ropes develop over a specific range of oblique angles mainly along the edge of the primary ion scale flux ropes and reconnection separatrix. The rapid merging of these small scale modes leads to a smooth energy spectrum connecting ion and electron scales. When the guide field is sufficiently weak, the background current sheet is strongly kinked and oblique angles for the small scale modes are widely scattered at the kinked regions. Similar approaches handling both the wave number and spatial domains will be applicable to the data from multipoint, high-resolution spacecraft observations such as the NASA magnetospheric multiscale (MMS) mission.
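
    The k-BPF idea, separating sub-ion-scale fluctuations from larger-scale background variations by masking Fourier modes and transforming back, can be sketched in a few lines; here a 2-D random field stands in for one component of the simulated magnetic field, and the cutoff wavenumber is illustrative:

        import numpy as np

        def k_bandpass(field, dx, k_lo, k_hi):
            """Keep only Fourier modes with k_lo <= |k| < k_hi and inverse-transform."""
            ny, nx = field.shape
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
            kmag = np.hypot(*np.meshgrid(kx, ky))
            spec = np.fft.fft2(field)
            mask = (kmag >= k_lo) & (kmag < k_hi)
            return np.real(np.fft.ifft2(spec * mask))

        rng = np.random.default_rng(4)
        B = rng.normal(size=(256, 256))             # stand-in for one magnetic-field component
        dx, k_ion = 0.1, 2.0                        # grid spacing and an ion-scale cutoff (illustrative)
        B_large = k_bandpass(B, dx, 0.0, k_ion)     # background, larger-than-ion-scale variations
        B_small = k_bandpass(B, dx, k_ion, np.inf)  # sub-ion-scale fluctuations
        # B_large + B_small reconstructs B up to round-off, since the masks partition k-space.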

  1. Review of the outer scale of the atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Ziad, Aziz

    2016-07-01

    Outer scale is a relevant parameter for the experimental performance evaluation of large telescopes. Different techniques have been used for outer scale estimation. In situ measurements with radiosounding balloons have given very small values of the outer scale. The outer scale has also been estimated directly at ground level from wavefront analysis with High Angular Resolution (HAR) techniques using interferometric or Shack-Hartmann data, or more generally AO system data. Dedicated instruments have also been developed for outer scale monitoring, such as the Generalized Seeing Monitor (GSM) and the Monitor of Outer Scale Profile (MOSP). The outer scale values measured with HAR techniques, GSM and MOSP are broadly consistent with one another and are larger than the in situ results. The main explanation of this difference comes from the definition of the outer scale itself. This paper gives a non-exhaustive review of the different techniques and instruments for the measurement of the outer scale. Comparisons of outer scale measurements are discussed in the light of the different definitions of this parameter, the associated observable quantities and the atmospheric turbulence model as well.
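
    The role of the outer scale L0 is easiest to see in the von Kármán model, where the phase power spectrum rolls off below the wavenumber 2π/L0 instead of following the Kolmogorov power law indefinitely; a brief sketch comparing two outer scales (the Fried parameter and L0 values are illustrative):

        import numpy as np

        def von_karman_phase_spectrum(kappa, r0, L0):
            """von Karman phase power spectrum; reduces to Kolmogorov as L0 -> infinity."""
            return 0.0229 * r0 ** (-5.0 / 3.0) * (kappa**2 + (2 * np.pi / L0) ** 2) ** (-11.0 / 6.0)

        kappa = np.logspace(-3, 2, 500)   # spatial wavenumber [rad/m]
        r0 = 0.10                         # Fried parameter [m] (illustrative)
        for L0 in (10.0, 100.0):          # outer scale [m]
            W = von_karman_phase_spectrum(kappa, r0, L0)
            kolmogorov = 0.0229 * r0 ** (-5.0 / 3.0) * kappa[0] ** (-11.0 / 3.0)
            # a finite L0 suppresses the low-frequency power that Kolmogorov predicts
            print(f"L0 = {L0:5.0f} m: low-k power ratio to Kolmogorov {W[0] / kolmogorov:.3e}")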

  2. Hydrothermal alteration and diagenesis of terrestrial lacustrine pillow basalts: Coordination of hyperspectral imaging with laboratory measurements

    NASA Astrophysics Data System (ADS)

    Greenberger, Rebecca N.; Mustard, John F.; Cloutis, Edward A.; Mann, Paul; Wilson, Janette H.; Flemming, Roberta L.; Robertson, Kevin M.; Salvatore, Mark R.; Edwards, Christopher S.

    2015-12-01

    We investigate an outcrop of ∼187 Ma lacustrine pillow basalts of the Talcott Formation exposed in Meriden, Connecticut, USA, focusing on coordinated analyses of one pillow lava to characterize the aqueous history of these basalts in the Hartford Basin. This work uses a suite of multidisciplinary measurements, including hyperspectral imaging, other spectroscopic techniques, and chemical and mineralogical analyses, from the microscopic scale up to the scale of an outcrop. The phases identified in the sample are albite, large iron oxides, and titanite throughout; calcite in vesicles; calcic clinopyroxene, aegirine, and Fe/Mg-bearing clay in the rind; and fine-grained hematite and pyroxenes in the interior. Using imaging spectroscopy, the chemistry and mineralogy results extend to the hand sample and larger outcrop. From all of the analyses, we suggest that the pillow basalts were altered initially after emplacement, either by heated lake water or magmatic fluids, at temperatures of at least 400-600 °C, and the calcic clinopyroxenes and aegirine identified in the rind are a preserved record of that alteration. As the hydrothermal system cooled to slightly lower temperatures, clays formed in the rind, and, during this alteration, the sample oxidized to form hematite in the matrix of the interior and Fe3+ in the pyroxenes in the rind. During the waning stages of the hydrothermal system, calcite precipitated in vesicles within the rind. Later, diagenetic processes albitized the sample, with albite replacing plagioclase, lining vesicles, and accreting onto the exterior of the sample. This albitization or Na-metasomatism occurred when the lake within the Hartford Basin evaporated during a drier past climatic era, resulting in Na-rich brines. As Ca-rich plagioclase altered to albite, Ca was released into solution, eventually precipitating as calcite in previously-unfilled vesicles, dominantly in the interior of the pillow. Coordinated analyses of this sample permit identification of the alteration phases and help synthesize the aqueous history of pillow lavas of the Talcott Formation. These results are also relevant to Mars, where volcanically-resurfaced open basin lakes have been found, and this Hartford Basin outcrop may be a valuable analog for any potential volcano-lacustrine interactions. The results can also help to inform the utility and optimization of potentially complementary, synergistic, and uniquely-suited techniques for characterization of hydrothermally-altered terrains.

  3. Floating-point scaling technique for sources separation automatic gain control

    NASA Astrophysics Data System (ADS)

    Fermas, A.; Belouchrani, A.; Ait-Mohamed, O.

    2012-07-01

    Based on the floating-point representation, and taking advantage of the scaling-factor indeterminacy in blind source separation (BSS) processing, we propose a scaling technique applied to the separation matrix to avoid saturation or weakness in the recovered source signals. This technique performs automatic gain control in an on-line BSS environment. We demonstrate the effectiveness of this technique using the implementation of a division-free BSS algorithm with two inputs and two outputs. The proposed technique is computationally cheaper than Euclidean normalisation and efficient for a hardware implementation.
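
    The core idea, rescaling rows of the separation matrix by powers of two so the recovered sources stay in a usable amplitude range (a power-of-two multiply only touches the floating-point exponent, hence the hardware appeal), can be sketched as follows; the function name, thresholds, and block interface are illustrative, not the paper's algorithm:

        import numpy as np

        def agc_rescale(W, y_block, lo=0.25, hi=2.0):
            """Rescale rows of separation matrix W by powers of two so the peak
            amplitude of each recovered source stays within [lo, hi]."""
            W = W.copy()
            for i, source in enumerate(y_block):
                peak = np.max(np.abs(source))
                if peak == 0.0:
                    continue
                if peak > hi or peak < lo:
                    # nearest power of two bringing the peak towards 1.0; multiplying
                    # by 2**e is pure exponent arithmetic in floating point
                    e = -int(np.round(np.log2(peak)))
                    W[i, :] *= 2.0 ** e
            return W

        # usage: for each block of mixtures x, compute y = W @ x, then W = agc_rescale(W, y)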

  4. Formulation/cure technology for ultrahigh molecular weight silphenylene-siloxane polymers

    NASA Technical Reports Server (NTRS)

    Hundley, N. H.; Patterson, W. J.

    1985-01-01

    Molecular weights above one million were achieved for methylvinylsilphenylene-siloxane terpolymers using a two-stage polymerization technique which was successfully scaled up to 200 grams. The resulting polymer was vulcanized by two different formulations and compared to an identically formulated commercial methylvinyl silicone on the basis of ultimate strength, Young's modulus, percent elongation at failure, and tear strength. Relative thermal/oxidative stabilities of the elastomers were assessed by gradient and isothermal thermogravimetric analyses performed in both air and nitrogen. The experimental elastomer exhibited enhanced thermal/oxidative stability and possessed equivalent or superior mechanical properties. The effect of variations in prepolymer molecular weight on mechanical properties was also investigated.

  5. Multidimensional competences of supply chain managers: an empirical study

    NASA Astrophysics Data System (ADS)

    Shou, Yongyi; Wang, Weijiao

    2017-01-01

    Supply chain manager competences have attracted increasing attention from both practitioners and scholars in recent years. This paper reports an exploratory study undertaken to understand the dimensionality of supply chain manager competences. Online job advertisements for supply chain managers were collected as secondary data, since these advertisements reflect employers' real job requirements. We adopted the multidimensional scaling (MDS) technique to process and analyse the data. Five dimensions of supply chain manager competences are identified: generic skills, functional skills, supply chain management (SCM) qualifications and leadership, SCM expertise, and industry-specific and senior management skills. Statistical tests indicate that supply chain manager competence saliences vary across industries and regions.

  6. Quantum algorithms for topological and geometric analysis of data

    PubMed Central

    Lloyd, Seth; Garnerone, Silvano; Zanardi, Paolo

    2016-01-01

    Extracting useful information from large data sets can be a daunting task. Topological methods for analysing data sets provide a powerful technique for extracting such information. Persistent homology is a sophisticated tool for identifying topological features and for determining how such features persist as the data is viewed at different scales. Here we present quantum machine learning algorithms for calculating Betti numbers—the numbers of connected components, holes and voids—in persistent homology, and for finding eigenvectors and eigenvalues of the combinatorial Laplacian. The algorithms provide an exponential speed-up over the best currently known classical algorithms for topological data analysis. PMID:26806491
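
    For contrast with the quantum algorithms, the classical computation of Betti numbers from simplicial boundary matrices is direct: β_k = dim ker ∂_k − rank ∂_(k+1). A tiny sketch for a hollow triangle, which has one connected component and one hole:

        import numpy as np

        # Hollow triangle: vertices {0,1,2}, edges {01, 02, 12}, no 2-simplices.
        # Boundary matrix d1 maps edges to vertices (rows: vertices, cols: edges).
        d1 = np.array([[-1, -1,  0],
                       [ 1,  0, -1],
                       [ 0,  1,  1]], dtype=float)

        rank_d1 = np.linalg.matrix_rank(d1)
        n_vertices, n_edges = d1.shape

        betti0 = n_vertices - rank_d1   # dim ker d0 - rank d1, with d0 = 0
        betti1 = n_edges - rank_d1      # dim ker d1 - rank d2 (no triangles, so rank d2 = 0)
        print(betti0, betti1)           # -> 1 1: one connected component, one hole

    The quantum speed-up claimed in the abstract targets exactly this computation when the number of simplices grows exponentially with the number of data points.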

  7. A comparison of algorithms for inference and learning in probabilistic graphical models.

    PubMed

    Frey, Brendan J; Jojic, Nebojsa

    2005-09-01

    Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
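
    As a concrete instance of one of the reviewed techniques, the EM algorithm for a two-component 1-D Gaussian mixture alternates computing responsibilities (E-step) with weighted parameter updates (M-step); a compact sketch with synthetic data:

        import numpy as np

        rng = np.random.default_rng(5)
        x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

        # initial guesses: mixing weights, means, variances of the two components
        w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

        for _ in range(100):
            # E-step: responsibilities r[i, k] = P(component k | x_i)
            lik = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            r = lik / lik.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters from responsibility-weighted data
            nk = r.sum(axis=0)
            w = nk / x.size
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

        print(w.round(2), mu.round(2), var.round(2))   # approx [0.3 0.7], [-2. 3.], [1. 1.]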

  8. The associations among family meal frequency, food preparation frequency, self-efficacy for cooking, and food preparation techniques in children and adolescents.

    PubMed

    Woodruff, Sarah J; Kirby, Ashley R

    2013-01-01

    The purpose of this study was to describe family dinner frequency (FDF) by food preparation frequency (prep), self-efficacy for cooking (SE), and food preparation techniques (techniques) among a small sample in southwestern Ontario, Canada. A cross-sectional survey was administered under the supervision of the research team in after-school programs, sports programs, and 1 elementary school. The sample included 145 participants (41% boys, 59% girls) in grades 4-8. Measures included demographics, prep, SE, techniques, FDF, and family meal attitudes and behaviors. Exploratory 1-way ANOVA and chi-square analyses were used. An ordinal regression analysis was used to determine the associations between FDF and the descriptor variables (sex, grade, and ethnicity), prep, SE, techniques, and family meal attitudes and behaviors (P < .05). Approximately 59% reported family dinners on 6 or 7 days per week. Half of participants were involved with prep 1-6 times per week. Mean SE was 25.3 (scale 1-32), and girls performed more techniques than boys (P = .02). Participants with greater SE (odds ratio = 1.15) and higher family meal attitudes and behaviors (odds ratio = 1.15) were more likely to have a higher FDF. Future health promotion strategies for family meals should aim at increasing children's and adolescents' SE. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  9. A thermal scale modeling study for Apollo and Apollo applications, volume 2

    NASA Technical Reports Server (NTRS)

    Shannon, R. L.

    1972-01-01

    The development and demonstration of practical thermal scale modeling techniques applicable to systems involving radiation, conduction, and convection, with emphasis on the cabin atmosphere/cabin wall thermal interface, are discussed. The Apollo spacecraft environment is used as the model. Four possible scaling techniques were considered: (1) modified material preservation, (2) temperature preservation, (3) scaling compromises, and (4) Nusselt number preservation. A thermal mathematical model was developed for use with the Nusselt number preservation technique.

  10. New Homogeneous Standards by Atomic Layer Deposition for Synchrotron X-ray Fluorescence and Absorption Spectroscopies.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butterworth, A.L.; Becker, N.; Gainsforth, Z.

    2012-03-13

    Quantification of synchrotron XRF analyses is typically done through comparisons with measurements on the NIST SRM 1832/1833 thin film standards. Unfortunately, these standards are inhomogeneous on small scales at the tens-of-percent level. We are synthesizing new homogeneous multilayer standards using the Atomic Layer Deposition (ALD) technique and characterizing them using multiple analytical methods, including ellipsometry, Rutherford Back Scattering (RBS) at Evans Analytical, Synchrotron X-ray Fluorescence (SXRF) at Advanced Photon Source (APS) Beamline 13-ID, Synchrotron X-ray Absorption Spectroscopy (XAS) at Advanced Light Source (ALS) Beamlines 11.0.2 and 5.3.2.1, and electron microscopy techniques. Our motivation for developing much-needed cross-calibration of synchrotron techniques is borne from coordinated analyses of particles captured in the aerogel of the NASA Stardust Interstellar Dust Collector (SIDC). The Stardust Interstellar Dust Preliminary Examination (ISPE) team have characterized three sub-nanogram, ~1 µm-sized fragments considered as candidates to be the first contemporary interstellar dust ever collected, based on their chemistries and trajectories. The candidates were analyzed in the small wedges of aerogel in which they were extracted from the larger collector, using high-sensitivity, high-spatial-resolution >3 keV synchrotron X-ray fluorescence spectroscopy (SXRF) and <2 keV synchrotron X-ray transmission microscopy (STXM) during Stardust ISPE. The ISPE synchrotron techniques have complementary capabilities. Hard X-ray SXRF is sensitive to sub-fg masses of elements Z ≥ 20 (calcium) and has a spatial resolution as low as 90 nm. X-ray diffraction data were collected simultaneously with SXRF data. Soft X-ray STXM at ALS beamline 11.0.2 can detect fg masses of most elements, including cosmochemically important oxygen, magnesium, aluminum and silicon, which are invisible to SXRF in this application. ALS beamline 11.0.2 has spatial resolution better than 25 nm. Limiting factors for Stardust STXM analyses were self-imposed limits on photon dose due to radiation-damage concerns, and significant attenuation of <1500 eV X-rays by the ~80 µm thick, ~25 mg/cm³ density silica aerogel capture medium. In practice, the ISPE team characterized the major, light elements using STXM (O, Mg, Al, Si) and the heavier minor and trace elements using SXRF. The two data sets overlapped only with minor Fe and Ni (~1% mass abundance), providing few quantitative cross-checks. New, improved standards for cross-calibration are essential for consortium-based analyses of Stardust interstellar and cometary particles and IDPs. Indeed, they have far-reaching application across the whole synchrotron-based analytical community. We have synthesized three ALD multilayers simultaneously on silicon nitride membranes and silicon and characterized them using RBS (on Si), XRF (on Si₃N₄) and STXM/XAS (holey Si₃N₄). The systems we have started to work with are Al-Zn-Fe and Y-Mg-Er. We have found these ALD multilayers to be uniform at µm to nm scales, and have found excellent consistency between four analytical techniques so far. The ALD films can also be used as a standard for e-beam instruments, e.g., TEM EELS or EDX. After some early issues with the consistency of coatings on the back side of the membrane windows, we are confident to be able to show multi-analytical agreement to within 10%. As the precision improves, we can use the new standards to verify or improve the tabulated cross-sections.

  11. Analyses of 1/15 scale Creare bypass transient experiments. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kmetyk, L.N.; Buxton, L.D.; Cole, R.K. Jr.

    1982-09-01

    RELAP4 analyses of several 1/15 scale Creare H-series bypass transient experiments have been done to investigate the effect of using different downcomer nodalizations, physical scales, slip models, and vapor fraction donoring methods. Most of the analyses were thermal equilibrium calculations performed with RELAP4/MOD5, but a few such calculations were done with RELAP4/MOD6 and RELAP4/MOD7, which contain improved slip models. In order to estimate the importance of nonequilibrium effects, additional analyses were performed with TRAC-PD2, RELAP5 and the nonequilibrium option of RELAP4/MOD7. The purpose of these studies was to determine whether results from Westinghouse's calculation of the Creare experiments, which were done with a UHI-modified version of SATAN, were sufficient to guarantee SATAN would be conservative with respect to ECC bypass in full-scale plant analyses.

  12. Nanometer-Scale Chemistry of a Calcite Biomineralization Template: Implications for Skeletal Composition and Nucleation

    PubMed Central

    Bonnin, Elisa A.; Perea, Daniel E.; Spero, Howard J.; Zhu, Zihua; Winters, Maria; Hönisch, Bärbel; Russell, Ann D.; Fehrenbacher, Jennifer S.; Gagnon, Alexander C.

    2016-01-01

    Plankton, corals, and other organisms produce calcium carbonate skeletons that are integral to their survival, form a key component of the global carbon cycle, and record an archive of past oceanographic conditions in their geochemistry. A key aspect of the formation of these biominerals is the interaction between organic templating structures and mineral precipitation processes. Laboratory-based studies have shown that these atomic-scale processes can profoundly influence the architecture and composition of minerals, but their importance in calcifying organisms is poorly understood because it is difficult to measure the chemistry of in vivo biomineral interfaces at spatially relevant scales. Understanding the role of templates in biomineral nucleation, and their importance in skeletal geochemistry, requires an integrated, multiscale approach, which can place atom-scale observations of organic-mineral interfaces within a broader structural and geochemical context. Here we map the chemistry of an embedded organic template structure within a carbonate skeleton of the foraminifera Orbulina universa using both atom probe tomography (APT), a 3D chemical imaging technique with Ångström-level spatial resolution, and time-of-flight secondary ion mass spectrometry (ToF-SIMS), a 2D chemical imaging technique with submicron resolution. We quantitatively link these observations, revealing that the organic template in O. universa is uniquely enriched in both Na and Mg, and contributes to intraskeletal chemical heterogeneity. Our APT analyses reveal the cation composition of the organic surface, offering evidence to suggest that cations other than Ca2+, previously considered passive spectator ions in biomineral templating, may be important in defining the energetics of carbonate nucleation on organic templates. PMID:27794119

  13. Spatial clustering of mental disorders and associated characteristics of the neighbourhood context in Malmö, Sweden, in 2001

    PubMed Central

    Chaix, Basile; Leyland, Alastair H; Sabel, Clive E; Chauvin, Pierre; Råstam, Lennart; Kristersson, Håkan; Merlo, Juan

    2006-01-01

    Study objective: Previous research provides preliminary evidence of spatial variations of mental disorders and associations between neighbourhood social context and mental health. This study expands past literature by (1) using spatial techniques, rather than multilevel models, to compare the spatial distributions of two groups of mental disorders (that is, disorders due to psychoactive substance use, and neurotic, stress related, and somatoform disorders); and (2) investigating the independent impact of contextual deprivation and neighbourhood social disorganisation on mental health, while assessing both the magnitude and the spatial scale of these effects. Design: Using different spatial techniques, the study investigated mental disorders due to psychoactive substance use, and neurotic disorders. Participants: All 89 285 persons aged 40–69 years residing in Malmö, Sweden, in 2001, geolocated to their place of residence. Main results: The spatial scan statistic identified a large cluster of increased prevalence in a similar location for the two mental disorders in the northern part of Malmö. However, hierarchical geostatistical models showed that the two groups of disorders exhibited a different spatial distribution, in terms of both magnitude and spatial scale. Mental disorders due to substance consumption showed larger neighbourhood variations, and varied in space on a larger scale, than neurotic disorders. After adjustment for individual factors, the risk of substance related disorders increased with neighbourhood deprivation and neighbourhood social disorganisation. The risk of neurotic disorders only increased with contextual deprivation. Measuring contextual factors across continuous space, it was found that these associations operated on a local scale. Conclusions: Taking space into account in the analyses permitted deeper insight into the contextual determinants of mental disorders. PMID:16614334

  14. Using G-Theory to Enhance Evidence of Reliability and Validity for Common Uses of the Paulhus Deception Scales.

    PubMed

    Vispoel, Walter P; Morris, Carrie A; Kilinc, Murat

    2018-01-01

    We applied a new approach to Generalizability theory (G-theory) involving parallel splits and repeated measures to evaluate common uses of the Paulhus Deception Scales based on polytomous and four types of dichotomous scoring. G-theory indices of reliability and validity accounting for specific-factor, transient, and random-response measurement error supported use of polytomous over dichotomous scores as contamination checks; as control, explanatory, and outcome variables; as aspects of construct validation; and as indexes of environmental effects on socially desirable responding. Polytomous scoring also provided results for flagging faking as dependable as those when using dichotomous scoring methods. These findings argue strongly against the nearly exclusive use of dichotomous scoring for the Paulhus Deception Scales in practice and underscore the value of G-theory in demonstrating this. We provide guidelines for applying our G-theory techniques to other objectively scored clinical assessments, for using G-theory to estimate how changes to a measure might improve reliability, and for obtaining software to conduct G-theory analyses free of charge.

  15. Successful adaptation of three-dimensional inversion methodologies for archaeological-scale, total-field magnetic data sets

    NASA Astrophysics Data System (ADS)

    Cheyney, S.; Fishwick, S.; Hill, I. A.; Linford, N. T.

    2015-08-01

    Despite the development of advanced processing and interpretation tools for magnetic data sets in the fields of mineral and hydrocarbon industries, these methods have not achieved similar levels of adoption for archaeological or very near surface surveys. Using a synthetic data set we demonstrate that certain methodologies and assumptions used to successfully invert more regional-scale data can lead to large discrepancies between the true and recovered depths when applied to archaeological-type anomalies. We propose variations to the current approach, analysing the choice of the depth-weighting function, mesh design and parameter constraints, to develop an appropriate technique for the 3-D inversion of archaeological-scale data sets. The results show a successful recovery of a synthetic scenario, as well as a case study of a Romano-Celtic temple in the UK. For the case study, the final susceptibility model is compared with two coincident ground penetrating radar surveys, showing a high correlation with the comparative depth slices. The new approach takes interpretation of archaeological data sets beyond a simple 2-D visual interpretation based on pattern recognition.
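
    A common choice for the depth-weighting function discussed above is w(z) = (z + z0)^(−β/2) with β ≈ 3 for magnetic data, which compensates the decay of dipole sensitivity with depth; a minimal sketch of building such weights for a shallow archaeological mesh (parameter values are illustrative, not the authors' tuned choices):

        import numpy as np

        def depth_weights(z_cells, z0=0.05, beta=3.0):
            """Depth weighting w(z) = (z + z0)**(-beta/2); beta ~ 3 mimics the
            depth decay of a dipole response in magnetic inversion."""
            return (z_cells + z0) ** (-beta / 2.0)

        # For a shallow archaeological mesh, cell-centre depths of a few tens of cm:
        z = np.linspace(0.05, 2.0, 40)   # cell-centre depths [m]
        w = depth_weights(z)
        # In the inversion, the model regularization ||W_m m||^2 uses W_m = diag(w),
        # so deep cells are not unfairly penalized relative to shallow ones.
        print(w[:3].round(2), w[-3:].round(4))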

  16. Driving magnetic turbulence using flux ropes in a moderate guide field linear system

    NASA Astrophysics Data System (ADS)

    Brookhart, Matthew I.; Stemo, Aaron; Waleffe, Roger; Forest, Cary B.

    2017-12-01

    We present a series of experiments on novel, line-tied plasma geometries as a study of the generation of chaos and turbulence in line-tied systems. Plasma production and the injection scale for magnetic energy are provided by spatially discrete plasma guns that inject both plasma and current. The guns represent a technique for controlling the injection scale of magnetic energy. A two-dimensional (2-D) array of magnetic probes provides spatially resolved time histories of the magnetic fluctuations at a single cross-section of the experimental cylinder, allowing simultaneous spatial measurements of chaotic and turbulent behaviour. The first experiment shows chaotic fluctuations and self-organization in a hollow-current line-tied screw pinch. These dynamics are modulated primarily by the applied magnetic field and weakly by the plasma current and safety factor. The second experiment analyses the interactions of multiple line-tied flux ropes. The flux ropes all exhibit chaotic behaviour, and under certain conditions develop an inverse cascade to larger scales and a turbulent inertial range with magnetic energy E(k⊥) related to perpendicular wavenumber k⊥ as E(k⊥) ∝ k⊥^(−2.5 ± 0.5).

  17. Disclosure control using partially synthetic data for large-scale health surveys, with applications to CanCORS.

    PubMed

    Loong, Bronwyn; Zaslavsky, Alan M; He, Yulei; Harrington, David P

    2013-10-30

    Statistical agencies have begun to partially synthesize public-use data for major surveys to protect the confidentiality of respondents' identities and sensitive attributes by replacing high disclosure risk and sensitive variables with multiple imputations. To date, there are few applications of synthetic data techniques to large-scale healthcare survey data. Here, we describe partial synthesis of survey data collected by the Cancer Care Outcomes Research and Surveillance (CanCORS) project, a comprehensive observational study of the experiences, treatments, and outcomes of patients with lung or colorectal cancer in the USA. We review inferential methods for partially synthetic data and discuss selection of high disclosure risk variables for synthesis, specification of imputation models, and identification disclosure risk assessment. We evaluate data utility by replicating published analyses, comparing results using original and synthetic data, and discussing practical issues in preserving inferential conclusions. We found that important subgroup relationships must be included in the synthetic data imputation model to preserve the data utility of the observed data for a given analysis procedure. We conclude that synthetic CanCORS data are best suited for preliminary data analysis purposes. These methods address the requirement to share data in clinical research without compromising confidentiality. Copyright © 2013 John Wiley & Sons, Ltd.
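
    The mechanics of partial synthesis can be illustrated by replacing a sensitive variable with draws from a fitted model's predictive distribution, repeated m times; this is a deliberately simplified sketch (real synthesizers condition on many more variables and use the proper combining rules for inference):

        import numpy as np

        rng = np.random.default_rng(6)
        n = 1000
        age = rng.uniform(30, 80, n)                     # low-risk variable, released as-is
        income = 20 + 0.5 * age + rng.normal(0, 5, n)    # sensitive variable to synthesize

        def synthesize(y, x, rng):
            """Draw a synthetic copy of y from a normal linear model fitted on x."""
            X = np.column_stack([np.ones_like(x), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            sigma = np.sqrt(np.sum((y - X @ beta) ** 2) / (len(y) - X.shape[1]))
            return X @ beta + rng.normal(0, sigma, len(y))

        m = 5  # number of synthetic replicates released
        synthetic_incomes = [synthesize(income, age, rng) for _ in range(m)]
        # Analysts combine estimates across the m replicates using the partially
        # synthetic data combining rules referenced in the abstract.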

  18. The shape of galaxy dark matter haloes in massive galaxy clusters: insights from strong gravitational lensing

    NASA Astrophysics Data System (ADS)

    Jauzac, Mathilde; Harvey, David; Massey, Richard

    2018-07-01

    We assess how much unused strong lensing information is available in the deep Hubble Space Telescope imaging and Very Large Telescope/Multi Unit Spectroscopic Explorer spectroscopy of the Frontier Field clusters. As a pilot study, we analyse galaxy cluster MACS J0416.1-2403 (z = 0.397, M(R < 200 kpc) = 1.6 × 10¹⁴ M⊙), which has 141 multiple images with spectroscopic redshifts. We find that many additional parameters in a cluster mass model can be constrained, and that adding even small amounts of extra freedom to a model can dramatically improve its figures of merit. We use this information to constrain the distribution of dark matter around cluster member galaxies, simultaneously with the cluster's large-scale mass distribution. We find tentative evidence that some galaxies' dark matter has surprisingly similar ellipticity to their stars (unlike in the field, where it is more spherical), but that its orientation is often misaligned. When non-coincident dark matter and stellar haloes are allowed, the model improves by 35 per cent. This technique may provide a new way to investigate the processes and time-scales on which dark matter is stripped from galaxies as they fall into a massive cluster. Our preliminary conclusions will be made more robust by analysing the remaining five Frontier Field clusters.

  19. Viscoelastic properties of cell walls of single living plant cells determined by dynamic nanoindentation

    PubMed Central

    Hayot, Céline M.; Forouzesh, Elham; Goel, Ashwani; Avramova, Zoya; Turner, Joseph A.

    2012-01-01

    Plant development results from controlled cell divisions, structural modifications, and reorganizations of the cell wall. Thereby, regulation of cell wall behaviour takes place at multiple length scales involving compositional and architectural aspects in addition to various developmental and/or environmental factors. The physical properties of the primary wall are largely determined by the nature of the complex polymer network, which exhibits time-dependent behaviour representative of viscoelastic materials. Here, a dynamic nanoindentation technique is used to measure the time-dependent response and the viscoelastic behaviour of the cell wall in single living cells at a micron or sub-micron scale. With this approach, significant changes in storage (stiffness) and loss (loss of energy) moduli are captured among the tested cells. The results reveal hitherto unknown differences in the viscoelastic parameters of the walls of same-age, similarly positioned cells of the Arabidopsis ecotypes (Col-0 and Ws-2). The technique is also shown to be sensitive enough to detect changes in cell wall properties in cells deficient in the activity of the chromatin modifier ATX1. Extensive computational modelling of the experimental measurements (i.e. modelling the cell as a viscoelastic pressure vessel) is used to analyse the influence of the wall thickness, as well as the turgor pressure, at the positions of our measurements. By combining the nanoDMA technique with finite element simulations, quantifiable measurements of the viscoelastic properties of plant cell walls are achieved. Such techniques are expected to find broader applications in quantifying the influence of genetic, biological, and environmental factors on the nanoscale mechanical properties of the cell wall. PMID:22291130

  20. An exploratory study investigating children's perceptions of dental behavioural management techniques.

    PubMed

    Davies, E Bethan; Buchanan, Heather

    2013-07-01

    Behaviour management techniques (BMTs) are utilised by dentists to aid children's dental anxiety (DA). Children's perceptions of these have been underexplored, and their feedback could help inform paediatric dentistry. To explore children's acceptability and perceptions of dental communication and BMTs and to compare these by age, gender, and DA. A total of sixty-two 9- to 11-year-old school children participated in the study. Children's acceptability of BMTs was quantified using a newly developed Likert scale, alongside exploration of children's experiences and perceptions through interviews. ANOVA and t-tests explored BMT acceptability ratings by age, gender, and DA. Thematic analysis was used to analyse interviews. Statistical analyses showed no effect of age, gender, or DA upon BMT acceptability. Children generally perceived the BMTs as acceptable or neutral; stop signals were the most acceptable, and voice control the least acceptable BMT. Beneficial experiences of distraction and positive reinforcement were common. Children described the positive nature of their dentist's communication and BMT utilisation. Dental anxiety did not affect children's perceptions of BMTs. Children were generally positive about dentists' communication and established BMTs. Children's coping styles may impact perceptions and effectiveness of BMTs and should be explored in future investigations. © 2012 John Wiley & Sons Ltd, BSPD and IAPD.

  1. Effects of relaxation on depression levels in women with high-risk pregnancies: a randomised clinical trial

    PubMed Central

    de Araújo, Wanda Scherrer; Romero, Walckiria Garcia; Zandonade, Eliana; Amorim, Maria Helena Costa

    2016-01-01

    ABSTRACT Objective: to analyse the effects of relaxation as a nursing intervention on the depression levels of hospitalised women with high-risk pregnancies. Methods: a randomised clinical trial conducted at a reference centre for high-risk pregnancies. The sample consisted of 50 women with high-risk pregnancies (25 in the control group and 25 in the intervention group). The Benson relaxation technique was applied to the intervention group for five days. Control variables were collected using a predesigned form, and the signs and symptoms of depression were evaluated using the Edinburgh Postnatal Depression Scale (EPDS). The Statistical Package for the Social Sciences (SPSS), version 20.0, was used with a significance level of 5%. The Wilcoxon and paired t-tests were used to evaluate depression levels between the two timepoints. Using categorical data, the McNemar test was used to analyse differences in depression severity before and after the intervention. Results: depression levels decreased in the intervention group five days after the relaxation technique was applied (4.5 ± 3.0, p < 0.05) compared with the levels at the first timepoint (10.3 ± 5.9). Conclusion: as a nursing intervention, relaxation was effective in decreasing the symptoms of depression in hospitalised women with high-risk pregnancies. PMID:27627126

  2. Hydrodynamic simulation and particle-tracking techniques for identification of source areas to public-water intakes on the St. Clair-Detroit river waterway in the Great Lakes Basin

    USGS Publications Warehouse

    Holtschlag, David J.; Koschik, John A.

    2004-01-01

    Source areas to public water intakes on the St. Clair-Detroit River Waterway were identified by use of hydrodynamic simulation and particle-tracking analyses to help protect public supplies from contaminant spills and discharges. This report describes techniques used to identify these areas and illustrates typical results using selected points on St. Clair River and Lake St. Clair. Parameterization of an existing two-dimensional hydrodynamic model (RMA2) of the St. Clair-Detroit River Waterway was enhanced to improve estimation of local flow velocities. Improvements in simulation accuracy were achieved by computing channel roughness coefficients as a function of flow depth, and determining eddy viscosity coefficients on the basis of velocity data. The enhanced parameterization was combined with refinements in the model mesh near 13 public water intakes on the St. Clair-Detroit River Waterway to improve the resolution of flow velocities while maintaining consistency with flow and water-level data. Scenarios representing a range of likely flow and wind conditions were developed for hydrodynamic simulation. Particle-tracking analyses combined advective movements described by hydrodynamic scenarios with random components associated with sub-grid-scale movement and turbulent mixing to identify source areas to public water intakes.
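
    The particle-tracking scheme described, advection by simulated velocities plus a random component representing sub-grid-scale movement and turbulent mixing, amounts to one random-walk step per particle per time step; a minimal 2-D sketch with a uniform stand-in velocity field (all values illustrative, not the study's RMA2 fields):

        import numpy as np

        rng = np.random.default_rng(7)
        n_particles, n_steps, dt = 500, 200, 60.0   # dt in seconds
        D = 0.5                                     # horizontal dispersion coefficient [m^2/s]

        pos = np.zeros((n_particles, 2))            # particles released at one point (origin)

        def velocity(p):
            """Stand-in for hydrodynamic velocities interpolated at particle positions [m/s]."""
            return np.tile([0.4, 0.05], (p.shape[0], 1))

        for _ in range(n_steps):
            # advective displacement + random-walk term with std sqrt(2 D dt) per axis
            pos += velocity(pos) * dt
            pos += rng.normal(0.0, np.sqrt(2 * D * dt), size=pos.shape)

        # Tracking backwards in time from an intake maps out its source area.
        print(pos.mean(axis=0), pos.std(axis=0))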

  3. An explicit approach to detecting and characterizing submersed aquatic vegetation using a single-beam digital echosounder

    NASA Astrophysics Data System (ADS)

    Sabol, Bruce M.

    2005-09-01

    There has been a longstanding need for an objective and cost-effective technique to detect, characterize, and quantify submersed aquatic vegetation at spatial scales between direct physical sampling and remote aerial-based imaging. Acoustic-based approaches for doing so are reviewed and an explicit approach, using a narrow, single-beam echosounder, is described in detail. This heuristic algorithm is based on the spatial distribution of a thresholded signal generated from a high-frequency, narrow-beam echosounder operated in a vertical orientation from a survey boat. The physical basis, rationale, and implementation of this algorithm are described, and data documenting performance are presented. Using this technique, it is possible to generate orders of magnitude more data than would be available using previous techniques with a comparable level of effort. Thus, new analysis and interpretation approaches are called for which can make full use of these data. Several example analyses from environmental-effects application studies are shown. Current operational window and performance limitations are identified, and thoughts on potential processing approaches to improve performance are discussed.
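
    The thresholding idea at the heart of such an algorithm, flag echo-envelope samples above a threshold between the transducer and the bottom return, then take the shallowest flagged depth as the plant canopy, can be sketched on a single synthetic ping (all numbers are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(8)
        depths = np.linspace(0.0, 5.0, 500)              # depth axis for one ping [m]
        echo = 0.01 + 0.005 * rng.random(500)            # background reverberation
        echo[np.abs(depths - 4.8) < 0.05] = 1.0          # strong bottom return at 4.8 m
        echo[(depths > 3.6) & (depths < 4.8)] += 0.08    # weaker plant echoes above the bottom

        threshold = 0.05
        bottom_idx = np.argmax(echo)                     # strongest return taken as the bottom
        above = echo[:bottom_idx] > threshold            # thresholded samples above the bottom
        if above.any():
            canopy_depth = depths[:bottom_idx][above].min()
            plant_height = depths[bottom_idx] - canopy_depth
            print(f"plant height ~ {plant_height:.2f} m")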

  4. A review and content analysis of engagement, functionality, aesthetics, information quality, and change techniques in the most popular commercial apps for weight management.

    PubMed

    Bardus, Marco; van Beurden, Samantha B; Smith, Jane R; Abraham, Charles

    2016-03-10

    There are thousands of apps promoting dietary improvement, increased physical activity (PA) and weight management. Despite a growing number of reviews in this area, popular apps have not been comprehensively analysed in terms of features related to engagement, functionality, aesthetics, information quality, and content, including the types of change techniques employed. The databases containing information about all Health and Fitness apps on Google Play and iTunes (7,954 and 25,491 apps, respectively) were downloaded in April 2015. Database filters were applied to select the most popular apps available in both stores. Two researchers screened the descriptions, selecting only weight management apps. Features, app quality and content were independently assessed using the Mobile App Rating Scale (MARS) and previously defined categories of techniques relevant to behaviour change. Inter-coder reliabilities were calculated, and correlations between features explored. Of the 23 popular apps included in the review, 16 were free (70%); 15 (65%) addressed weight control, diet and PA combined; and 19 (83%) allowed behavioural tracking. On 5-point MARS scales, apps were of average quality (Md = 3.2, IQR = 1.4); "functionality" (Md = 4.0, IQR = 1.1) was the highest and "information quality" (Md = 2.0, IQR = 1.1) the lowest domain. On average, 10 techniques were identified per app (range: 1-17), and of the 34 categories applied, goal setting and self-monitoring techniques were most frequently identified. App quality was positively correlated with the number of techniques included (rho = .58, p < .01) and the number of "technical" features (rho = .48, p < .05), which was also associated with the number of techniques included (rho = .61, p < .01). Apps that provided tracking used significantly more techniques than those that did not. Apps with automated tracking scored significantly higher in engagement, aesthetics, and overall MARS scores. Those that used change techniques previously associated with effectiveness (i.e., goal setting, self-monitoring and feedback) also had better "information quality". The popular apps assessed are of overall moderate quality and include behavioural tracking features and a range of change techniques associated with behaviour change. These apps may influence behaviour, although more attention to information quality and evidence-based content is warranted to improve their quality.
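
    Associations like those reported here are rank correlations; Spearman's rho between app quality and the number of techniques per app is a single scipy call (the numbers below are toy values, not the review's data):

        import numpy as np
        from scipy.stats import spearmanr

        mars_score = np.array([3.1, 2.5, 4.0, 3.6, 2.8, 3.9, 3.3, 2.2])  # overall MARS (toy)
        n_techniques = np.array([9, 4, 15, 12, 7, 14, 10, 3])            # techniques per app (toy)

        rho, p = spearmanr(mars_score, n_techniques)
        print(f"rho = {rho:.2f}, p = {p:.3f}")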

  5. Multidimensional scaling for evolutionary algorithms--visualization of the path through search space and solution space using Sammon mapping.

    PubMed

    Pohlheim, Hartmut

    2006-01-01

    Multidimensional scaling is presented as a technique for displaying high-dimensional data with standard visualization methods. The technique used is often known as Sammon mapping. We explain the mathematical foundations of multidimensional scaling and its robust calculation. We also demonstrate the use of this technique in the area of evolutionary algorithms. First, we present the visualization of the path taken through the search space by the best individuals during an optimization run. We then apply multidimensional scaling to the comparison of multiple runs with respect to the variables of individuals and multi-criteria objective values (the path through the solution space).
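
    A compact Python sketch of Sammon mapping by gradient descent on Sammon's stress (a generic implementation, not the author's code; step size and iteration count are arbitrary choices):

      import numpy as np

      def sammon(X, n_iter=2000, lr=1.0, seed=0):
          # Project rows of X to 2-D by gradient descent on Sammon's stress
          # E = (1/c) * sum_{i<j} (d*_ij - d_ij)^2 / d*_ij, where d* are the
          # input-space distances, d the 2-D distances, c = sum_{i<j} d*_ij.
          rng = np.random.default_rng(seed)
          n = X.shape[0]
          Dstar = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
          c = Dstar[np.triu_indices(n, 1)].sum()
          np.fill_diagonal(Dstar, 1.0)            # dummy value; masked out below
          Y = rng.normal(size=(n, 2)) * 1e-2
          for _ in range(n_iter):
              diff = Y[:, None] - Y[None, :]
              D = np.linalg.norm(diff, axis=-1)
              np.fill_diagonal(D, 1.0)
              W = (D - Dstar) / (D * Dstar)       # zero where d matches d*
              np.fill_diagonal(W, 0.0)
              Y -= lr * (2.0 / c) * (W[:, :, None] * diff).sum(axis=1)
          return Y

      # Two well-separated 5-D clusters stay separated in the 2-D projection.
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(6, 1, (20, 5))])
      Y = sammon(X)
      print(Y[:20].mean(axis=0), Y[20:].mean(axis=0))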

  6. Internalized HIV Stigma and Disclosure Concerns: Development and Validation of Two Scales in Spanish-Speaking Populations.

    PubMed

    Hernansaiz-Garrido, Helena; Alonso-Tapia, Jesús

    2017-01-01

    Internalized stigma and disclosure concerns are key elements in the study of mental health in people living with HIV. Since no measures of these constructs were available for Spanish-speaking populations, this study sought to develop such instruments, to analyze their reliability and validity, and to provide a short version. A heterogeneous sample of 458 adults from different Spanish-speaking countries completed the HIV-Internalized Stigma Scale and the HIV-Disclosure Concerns Scale, along with the Hospital Anxiety and Depression Scale, Rosenberg's Self-Esteem Scale, and socio-demographic variables. Reliability and correlation analyses, exploratory factor analyses, path analyses with latent variables, and ANOVAs were conducted to test the scales' psychometric properties. The scales showed good reliability in terms of internal consistency and temporal stability, as well as good sensitivity and factorial and criterion validity. The HIV-Internalized Stigma Scale and the HIV-Disclosure Concerns Scale are reliable and valid means to assess these variables in several contexts.
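
    Internal consistency of the kind reported here is commonly indexed by Cronbach's alpha; a minimal Python sketch on simulated responses (illustrative only, not the study's data or code):

      import numpy as np

      def cronbach_alpha(items):
          # items: (respondents x items) matrix of item scores.
          # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
          items = np.asarray(items, float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      rng = np.random.default_rng(1)
      trait = rng.normal(size=300)                             # latent construct
      responses = trait[:, None] + rng.normal(0, 1, (300, 8))  # 8 noisy items
      print(round(cronbach_alpha(responses), 2))               # ~0.85-0.9 here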

  7. Lightweight and Statistical Techniques for Petascale Debugging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Barton

    2014-06-30

    This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and code regions to which a developer must apply a traditional debugger or apply statistical techniques to suggest directly the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which had already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large-scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted either in reproducing the problem in a smaller run that can be analyzed with traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, leaving application scientists more time to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduce the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root cause to a small set of nodes or by identifying equivalence classes of nodes and sampling our debug targets from them. We implemented these techniques as lightweight tools that work efficiently at the full scale of the target machine. We explored four lightweight debugging refinements: generic classification parameters, such as stack traces; application-specific classification parameters, such as global variables; statistical data acquisition techniques; and machine-learning-based approaches to root-cause analysis. Work done under this project can be divided into two categories: new algorithms and techniques for scalable debugging, and foundational infrastructure work on our MRNet multicast-reduction framework for scalability and on the Dyninst binary analysis and instrumentation toolkits.
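
    The equivalence-class idea can be sketched in a few lines of Python; the stack traces below are invented, and this is an illustration of the concept rather than the STAT implementation:

      from collections import defaultdict

      # Hypothetical per-task stack traces gathered from a hung MPI job.
      traces = {
          0: ("main", "solver_step", "MPI_Allreduce"),
          1: ("main", "solver_step", "MPI_Allreduce"),
          2: ("main", "io_flush", "write"),
          3: ("main", "solver_step", "MPI_Allreduce"),
          4: ("main", "io_flush", "write"),
          5: ("main", "solver_step", "apply_bc"),   # the odd one out
      }

      # Group tasks into equivalence classes by identical stack trace.
      classes = defaultdict(list)
      for rank, trace in traces.items():
          classes[trace].append(rank)

      # Attach a traditional debugger to one representative per class,
      # prioritising the smallest classes (most likely to hold the root cause).
      for trace, ranks in sorted(classes.items(), key=lambda kv: len(kv[1])):
          print(f"{len(ranks):>3} task(s), e.g. rank {ranks[0]}: {' -> '.join(trace)}")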

  8. Ion and laser microprobes applied to the measurement of corrosion produced hydrogen on a microscopic scale.

    NASA Technical Reports Server (NTRS)

    Gray, H. R.

    1972-01-01

    An ion microprobe and a laser microprobe were used to measure concentrations of corrosion-produced hydrogen on a microscopic scale. Hydrogen concentrations of several thousand ppm were measured by both analytical techniques below the corroded and fracture surfaces of hot-salt stress-corroded titanium alloy specimens. This extremely high concentration compares with only about 100 ppm hydrogen determined by standard vacuum-fusion chemical analyses of bulk samples. Both the ion and laser microprobes were used to measure hydrogen concentration profiles in stepped intervals to substantial depths below the original corroded and fracture surfaces. The area of local analysis was 22 microns in diameter for the ion microprobe and about 300 microns in diameter for the laser microprobe. The segregation of hydrogen below fracture surfaces supports a previously proposed theory that corrosion-produced hydrogen is responsible for hot-salt stress-corrosion embrittlement and cracking of titanium alloys. These advanced analytical techniques show great potential for many areas of stress-corrosion and hydrogen-embrittlement research, quality control, and field inspection of corrosion problems. For example, it appears possible that a contour map of the hydrogen distribution at notch roots and crack tips could be quantitatively determined. Such information would be useful in substantiating current theories of stress corrosion and hydrogen embrittlement.

  9. Advanced Techniques for Seismic Protection of Historical Buildings: Experimental and Numerical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazzolani, Federico M.

    2008-07-08

    The seismic protection of historical and monumental buildings, dating from the ancient age up to the 20th century, is being looked at with greater and greater interest, above all in the Euro-Mediterranean area, whose cultural heritage is strongly susceptible to severe damage or even collapse due to earthquakes. The cultural importance of historical and monumental constructions limits, in many cases, the possibility of upgrading them from the seismic point of view, due to the fear of using intervention techniques that could have detrimental effects on their cultural value. Consequently, great interest is growing in the development of sustainable methodologies for the use of Reversible Mixed Technologies (RMTs) in the seismic protection of existing constructions. RMTs, in fact, are conceived to exploit the peculiarities of innovative materials and special devices, and they allow ease of removal when necessary. This paper deals with the experimental and numerical studies, framed within the EC PROHITECH research project, on the application of RMTs to historical and monumental constructions, mainly belonging to the cultural heritage of the Euro-Mediterranean area. The experimental tests and numerical analyses are carried out at five different levels, namely full-scale models, large-scale models, sub-systems, devices, and materials and elements.

  10. International Watershed Technology: Improving Water Quality and Quantity at the Local, Basin, and Regional Scales

    USGS Publications Warehouse

    Tollner, Ernest W.; Douglas-Mankin, Kyle R.

    2017-01-01

    This article introduces the five papers in the “International Watershed Technology” collection. These papers were selected from 60 technical presentations at the fifth biennial ASABE 21st Century Watershed Technology Conference and Workshop: Improving the Quality of Water Resources at Local, Basin, and Regional Scales, held in Quito, Ecuador, on 3-9 December 2016. The conference focused on solving spatial and temporal water quality and quantity problems and addressed topics such as watershed management in developing countries, aquatic ecology and ecohydrology, ecosystem services, climate change mitigation strategies, flood forecasting, remote sensing, and water resource policy and management. While diverse, the presentation topics reflected the continuing evolution of the “data mining” and “big data” themes of past conferences related to geospatial data applications, with increasing emphasis on practical solutions. The papers selected for this collection represent applications of spatial data analyses toward practical ends with a theme of “tools and techniques for sustainability.” The papers address a range of topics, including the matching of crops with water availability, and assessing the environmental impacts of agricultural production. The papers identify some of the latest tools and techniques for improving sustainability in watershed resource management that are relevant to both developing and developed countries.

  11. The Research Identity Scale: Psychometric Analyses and Scale Refinement

    ERIC Educational Resources Information Center

    Jorgensen, Maribeth F.; Schweinle, William E.

    2018-01-01

    The 68-item Research Identity Scale (RIS) was informed through qualitative exploration of research identity development in master's-level counseling students and practitioners. Classical psychometric analyses revealed the items had strong validity and reliability and a single factor. A one-parameter Rasch analysis and item review was used to…

  12. Hydrological connectivity for riverine fish: measurement challenges and research opportunities

    USGS Publications Warehouse

    Fullerton, A.H.; Burnett, K.M.; Steel, E.A.; Flitcroft, R.L.; Pess, G.R.; Feist, B.E.; Torgersen, Christian E.; Miller, D.J.; Sanderson, B.L.

    2010-01-01

    In this review, we first summarize how hydrologic connectivity has been studied for riverine fish capable of moving long distances, and then identify research opportunities that have clear conservation significance. Migratory species, such as anadromous salmonids, are good model organisms for understanding ecological connectivity in rivers because the spatial scale over which movements occur among freshwater habitats is large enough to be easily observed with available techniques; they are often economically or culturally valuable with habitats that can be easily fragmented by human activities; and they integrate landscape conditions from multiple surrounding catchment(s) with in‐river conditions. Studies have focussed on three themes: (i) relatively stable connections (connections controlled by processes that act over broad spatio‐temporal scales >1000 km2 and >100 years); (ii) dynamic connections (connections controlled by processes acting over fine to moderate spatio‐temporal scales ∼1–1000 km2 and <1–100 years); and (iii) anthropogenic influences on hydrologic connectivity, including actions that disrupt or enhance natural connections experienced by fish. We outline eight challenges to understanding the role of connectivity in riverine fish ecology, organized under three foci: (i) addressing the constraints of river structure; (ii) embracing temporal complexity in hydrologic connectivity; and (iii) managing connectivity for riverine fishes. Challenges include the spatial structure of stream networks, the force and direction of flow, scale‐dependence of connectivity, shifting boundaries, complexity of behaviour and life histories, and quantifying anthropogenic influence on connectivity and aligning management goals. As we discuss each challenge, we summarize relevant approaches in the literature and provide additional suggestions for improving research and management of connectivity for riverine fishes. Specifically, we suggest that rapid advances are possible in the following arenas: (i) incorporating network structure and river discharge into analyses; (ii) increasing explicit consideration of temporal complexity and fish behaviour in the scope of analyses; and (iii) parsing degrees of human and natural influences on connectivity and defining acceptable alterations. Multiscale analyses are most likely to identify dominant patterns of connections and disconnections, and the appropriate scale at which to focus conservation activities.

  13. Detecting the transition to failure: wavelet analysis of multi-scale crack patterns at different confining pressures

    NASA Astrophysics Data System (ADS)

    Rizzo, R. E.; Healy, D.; Farrell, N. J.

    2017-12-01

    Numerous laboratory brittle deformation experiments have shown that a rapid transition exists in the behaviour of porous materials under stress: at a certain point, early formed tensile cracks interact and coalesce into a 'single' narrow zone, the shear plane, rather than remaining distributed throughout the material. In this work, we present and apply a novel image processing tool which is able to quantify this transition between distributed ('stable') damage accumulation and localised ('unstable') deformation, in terms of size, density, and orientation of cracks at the point of failure. Our technique, based on a two-dimensional (2D) continuous Morlet wavelet analysis, can recognise, extract and visually separate the multi-scale changes occurring in the fracture network during the deformation process. We have analysed high-resolution SEM-BSE images of thin sections of Hopeman Sandstone (Scotland, UK) taken from core plugs deformed under triaxial conditions, with increasing confining pressure. Through this analysis, we can determine the relationship between the initial orientation of tensile microcracks and the final geometry of the through-going shear fault, exploiting the total areal coverage of the analysed image. In addition, by comparing patterns of fractures in thin sections derived from triaxial (σ1>σ2=σ3=Pc) laboratory experiments conducted at different confining pressures (Pc), we can quantitatively explore the relationship between the observed geometry and the inferred mechanical processes. The methodology presented here can have important implications for larger-scale mechanical problems related to major fault propagation. Just as a core plug scale fault localises through extension and coalescence of microcracks, larger faults also grow by extension and coalescence of segments in a multi-scale process by which microscopic cracks can ultimately lead to macroscopic faulting. Consequently, wavelet analysis represents a useful tool for fracture pattern recognition, applicable to the detection of the transitions occurring at the time of catastrophic rupture.
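
    A minimal sketch of a 2-D continuous Morlet wavelet filter built in the Fourier domain (a generic construction under standard definitions, not the authors' processing chain; the toy image and parameter values are invented):

      import numpy as np

      def morlet2d(shape, scale, theta, k0=5.0):
          # Frequency-domain 2-D Morlet filter (approximately zero-mean for
          # k0 >= 5): a Gaussian in Fourier space centred on a wavevector of
          # magnitude k0/scale pointing along direction theta.
          ny, nx = shape
          ky = np.fft.fftfreq(ny) * 2 * np.pi
          kx = np.fft.fftfreq(nx) * 2 * np.pi
          KX, KY = np.meshgrid(kx, ky)
          kx0 = k0 / scale * np.cos(theta)
          ky0 = k0 / scale * np.sin(theta)
          return np.exp(-0.5 * scale**2 * ((KX - kx0) ** 2 + (KY - ky0) ** 2))

      def cwt_response(img, scale, theta):
          # Magnitude of the CWT at one scale and orientation via FFT filtering.
          return np.abs(np.fft.ifft2(np.fft.fft2(img) * morlet2d(img.shape, scale, theta)))

      # Toy 'crack pattern': a vertical crack segment plus background noise.
      rng = np.random.default_rng(2)
      img = rng.normal(0, 0.1, (256, 256))
      img[100:160, 120] = 1.0
      resp = cwt_response(img, scale=4.0, theta=0.0)  # x-oriented wavevector -> vertical features
      print(resp[:, 115:125].mean() / resp[:, :100].mean())  # response concentrates at the crack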

  14. Multi-scale finite element analyses for stress and strain evaluations of braid fibril artificial blood vessel and smooth muscle cell.

    PubMed

    Nakamachi, Eiji; Uchida, Takahiro; Kuramae, Hiroyuki; Morita, Yusuke

    2014-08-01

    In this study, we developed a multi-scale finite element (FE) analysis code to obtain the stress and strain occurring at the micro-scale in smooth muscle cells (SMCs) seeded in a real fabricated braid-fibril artificial blood vessel. The FE code can predict the dynamic response of stress under blood pressure loading. We sought to establish a computer-aided engineering (CAE)-driven scaffold design technique for blood vessel regeneration. Great progress has been made on endothelial cell activation and intima layer regeneration in blood vessel regeneration studies; however, SMC activation and media layer regeneration remain difficult, and many researchers are working to elucidate their fundamental mechanisms using biomechanical techniques. As the numerical tool, we used the dynamic-explicit FE code PAM-CRASH (ESI Ltd.). For the material models, a nonlinear viscoelastic constitutive law was adopted for the human blood vessel, the SMC, and the extra-cellular matrix, and an elastic law for the polyglycolic acid (PGA) fiber. Through macro-FE and micro-FE analyses of braid-fibril tubes fabricated from PGA fiber under combined conditions of fiber orientation angle and pitch, we searched for a structure providing appropriate stress stimulation for SMC functionalization. The objectives of this study were: (1) to analyze the stress and strain of the human blood vessel and SMC, and (2) to calculate the stress and strain of the real fabricated braid-fibril artificial blood vessel and SMC in order to identify an appropriate PGA fiber structure under combined conditions of PGA fiber number (12 and 24) and helical fiber orientation angle (15, 30, 45, 60, and 75 degrees). Finally, we found a braid-fibril tube with an angle of 15 degrees and 12 PGA fibers to be the most appropriate artificial blood vessel for SMC functionalization. Copyright © 2014 John Wiley & Sons, Ltd.

  15. A Method for Calculating Strain Energy Release Rates in Preliminary Design of Composite Skin/Stringer Debonding Under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; OBrien, T. Kevin

    1999-01-01

    Three simple procedures were developed to determine strain energy release rates, G, in composite skin/stringer specimens for various combinations of uniaxial and biaxial (in-plane/out-of-plane) loading conditions. These procedures may be used for parametric design studies in such a way that only a few finite element computations are necessary to study many load combinations. The results were compared with mixed-mode strain energy release rates calculated directly from nonlinear two-dimensional plane-strain finite element analyses using the virtual crack closure technique. The first procedure involved solving for three unknown parameters needed to determine the energy release rates. Good agreement was obtained when the external loads were used in the derived expression. This superposition technique was only applicable if the structure exhibited linear load/deflection behavior. Consequently, a second technique was derived that was applicable in the case of nonlinear load/deformation behavior. This technique involved calculating six unknown parameters from a set of six simultaneous linear equations, using data from six nonlinear analyses, to determine the energy release rates. The procedure was not time efficient and hence less appealing. A third procedure was developed to calculate mixed-mode energy release rates as a function of delamination length. It required only one nonlinear finite element analysis of the specimen with a single delamination length to obtain a reference solution for the energy release rates and the scale factors. The delamination was then extended in three separate linear models of the local area in the vicinity of the delamination, subjected to unit loads, to obtain the distribution of G with delamination length. Although additional modeling effort is required to create the sub-models, this local technique is efficient for parametric studies.
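
    The core step of the second procedure, determining six unknown parameters from six simultaneous linear equations, reduces to one linear solve; a generic Python sketch (the coefficients and right-hand side below are placeholders, not values from the paper):

      import numpy as np

      # Six equations a_i . p = g_i in the six unknown parameters p.
      # Each row would come from one nonlinear FE analysis at a distinct
      # load combination; the numbers below are placeholders, not paper data.
      A = np.array([
          [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],
          [1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
          [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],
          [0.0, 0.0, 0.0, 1.0, 1.0, 1.0],
          [1.0, 0.0, 0.0, 0.0, 0.0, 1.0],
      ])
      G = np.array([12.0, 7.5, 30.1, 3.2, 9.8, 16.0])  # energy-release-rate data

      params = np.linalg.solve(A, G)   # the six calibrated parameters
      print(params)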

  16. Recurrent patterning in the daily foraging routes of hamadryas baboons (Papio hamadryas): spatial memory in large-scale versus small-scale space.

    PubMed

    Schreier, Amy L; Grove, Matt

    2014-05-01

    The benefits of spatial memory for foraging animals can be assessed on two distinct spatial scales: small-scale space (travel within patches) and large-scale space (travel between patches). While the patches themselves may be distributed at low density, within patches resources are likely densely distributed. We propose, therefore, that spatial memory for recalling the particular locations of previously visited feeding sites will be more advantageous during between-patch movement, where it may reduce the distances traveled by animals that possess this ability compared to those that must rely on random search. We address this hypothesis by employing descriptive statistics and spectral analyses to characterize the daily foraging routes of a band of wild hamadryas baboons in Filoha, Ethiopia. The baboons slept on two main cliffs--the Filoha cliff and the Wasaro cliff--and daily travel began and ended on a cliff; thus four daily travel routes exist: Filoha-Filoha, Filoha-Wasaro, Wasaro-Wasaro, Wasaro-Filoha. We use newly developed partial sum methods and distribution-fitting analyses to distinguish periods of area-restricted search from more extensive movements. The results indicate a single peak in travel activity in the Filoha-Filoha and Wasaro-Filoha routes, three peaks of travel activity in the Filoha-Wasaro routes, and two peaks in the Wasaro-Wasaro routes, and are consistent with on-the-ground observations of foraging and ranging behavior of the baboons. In each of the four daily travel routes the "tipping points" identified by the partial sum analyses indicate transitions between travel in small- versus large-scale space. The correspondence between the quantitative analyses and the field observations suggests great utility for these types of analyses in examining primate travel patterns, especially in distinguishing between movement in small- versus large-scale space. Only the distribution-fitting analyses are inconsistent with the field observations, which may be due to the scale at which these analyses were conducted. © 2013 Wiley Periodicals, Inc.
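
    One simple variant of a partial-sum analysis (an illustration of the general idea, not the authors' exact method; the step lengths are simulated):

      import numpy as np

      rng = np.random.default_rng(3)
      # Hypothetical step lengths (m) along one daily route:
      # foraging (short steps) -> travel (long steps) -> foraging again.
      steps = np.concatenate([rng.gamma(2, 10, 120),   # area-restricted search
                              rng.gamma(2, 80, 60),    # between-patch travel
                              rng.gamma(2, 10, 120)])  # area-restricted search

      # Partial sums of deviations from the mean step length: the curve falls
      # while steps are below average and rises while they are above, so its
      # extrema locate the transitions ('tipping points').
      partial = np.cumsum(steps - steps.mean())
      lo, hi = sorted([partial.argmin(), partial.argmax()])
      print(f"travel phase roughly between steps {lo} and {hi}")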

  17. A Local to National Scale Catchment Model Simulation Framework for Hydrological Predictions and Impact Assessments Under Uncertainty

    NASA Astrophysics Data System (ADS)

    Freer, Jim; Coxon, Gemma; Quinn, Niall; Dunne, Toby; Lane, Rosie; Bates, Paul; Wagener, Thorsten; Woods, Ross; Neal, Jeff; Howden, Nicholas; Musuuza, Jude

    2017-04-01

    Developing hydrological model structures that can be used for hypothesis testing, prediction, impact assessment and risk analyses over a wide range of spatial scales is a major challenge, for reasons ranging from computational demands to how we define and characterize different landscape features and pathway connectivities, which differ depending on the objectives of the study. There is a greater need than ever to explore the trade-offs between the complexity of the modelling applied (i.e. spatial discretization, level of process representation, complexity of landscape representation) and the benefits realized in terms of predictive capability and the robustness of those predictions during hydrological extremes and during change. There is a further balance, particularly associated with prediction uncertainties, in that modelling systems should not be more complex than the observed data available to apply them can support. This is particularly the case when models are used to quantify national impact assessments, especially if these rest on validation assessments from smaller, more detailed case studies. The hydrological community therefore needs modelling tools and approaches that enable these trade-offs to be explored and that clarify the level of representation needed for a model to be 'fit-for-purpose' in a given application. This paper presents a catchment-scale national modelling framework based on Dynamic-TOPMODEL specifically set up to fulfil these aims. A key component of the modelling framework is its structural flexibility, as is the ability to assess model outputs using Monte Carlo simulation techniques. The model build has been automated to work at any spatial scale up to the national scale, and within that to control the level of spatial discretisation and the connectivity of locally accounted landscape elements in the form of hydrological response units (HRUs). This allows explicit consideration of spatial rainfall fields; landscape, soil, and geological attributes; and the spatial connectivity of hydrological flow pathways, in order to explore what level of modelling complexity is needed for different prediction problems. We present this framework and show how it can be used in flood and drought risk analyses, and how landscape attributes and features can be included to explore societal and climate impacts effectively within an uncertainty-analysis framework.

  18. Rocket engine system reliability analyses using probabilistic and fuzzy logic techniques

    NASA Technical Reports Server (NTRS)

    Hardy, Terry L.; Rapp, Douglas C.

    1994-01-01

    The reliability of rocket engine systems was analyzed by using probabilistic and fuzzy logic techniques. Fault trees were developed for integrated modular engine (IME) and discrete engine systems, and then were used with the two techniques to quantify reliability. The IRRAS (Integrated Reliability and Risk Analysis System) computer code, developed for the U.S. Nuclear Regulatory Commission, was used for the probabilistic analyses, and FUZZYFTA (Fuzzy Fault Tree Analysis), a code developed at NASA Lewis Research Center, was used for the fuzzy logic analyses. Although both techniques provided estimates of the reliability of the IME and discrete systems, probabilistic techniques emphasized uncertainty resulting from randomness in the system whereas fuzzy logic techniques emphasized uncertainty resulting from vagueness in the system. Because uncertainty can have both random and vague components, both techniques were found to be useful tools in the analysis of rocket engine system reliability.
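
    The contrast can be sketched on a toy fault tree, (pump AND valve) OR igniter, with Monte Carlo point probabilities on one side and alpha-cut interval arithmetic on triangular fuzzy numbers on the other (all failure rates are invented; this is not IRRAS or FUZZYFTA code):

      import numpy as np

      # Probabilistic view: point probabilities, randomness handled by Monte Carlo.
      rng = np.random.default_rng(4)
      p_pump, p_valve, p_ign = 1e-3, 2e-3, 5e-4
      n = 1_000_000
      fail = ((rng.random(n) < p_pump) & (rng.random(n) < p_valve)) | (rng.random(n) < p_ign)
      print("probabilistic failure estimate:", fail.mean())

      # Fuzzy view: vague inputs as triangular fuzzy numbers (lo, mode, hi),
      # propagated through the gates with interval arithmetic at each alpha-cut.
      def tri_cut(lo, mode, hi, alpha):
          return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

      for alpha in (0.0, 0.5, 1.0):
          pu = tri_cut(5e-4, 1e-3, 2e-3, alpha)    # pump
          va = tri_cut(1e-3, 2e-3, 4e-3, alpha)    # valve
          ig = tri_cut(2e-4, 5e-4, 1e-3, alpha)    # igniter
          and_lo, and_hi = pu[0] * va[0], pu[1] * va[1]          # AND gate
          or_lo = 1 - (1 - and_lo) * (1 - ig[0])                 # OR gate
          or_hi = 1 - (1 - and_hi) * (1 - ig[1])
          print(f"alpha={alpha}: failure probability in [{or_lo:.2e}, {or_hi:.2e}]")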

  19. Scaling images using their background ratio. An application in statistical comparisons of images.

    PubMed

    Kalemis, A; Binnie, D; Bailey, D L; Flower, M A; Ott, R J

    2003-06-07

    Comparison of two medical images often requires image scaling as a pre-processing step. This is usually done with the scaling-to-the-mean or scaling-to-the-maximum techniques, which in quantitative applications may, under certain circumstances, introduce a significant amount of bias. In this paper, we present a simple scaling method which assumes only that the most predominant values in the corresponding images belong to their background structure. The ratio of the two images to be compared is calculated and its frequency histogram is plotted. The scaling factor is given by the position of the peak in this histogram that belongs to the background structure. The method was tested against the traditional scaling-to-the-mean technique on simulated planar gamma-camera images, which were compared using pixelwise statistical parametric tests. Both sensitivity and specificity were measured over a range of different contrasts and sizes of inhomogeneity for the two scaling techniques. The new method was found to preserve sensitivity in all cases, while the traditional technique resulted in significant degradation of sensitivity in certain cases.
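
    A minimal Python sketch of the ratio-histogram idea on synthetic images (the images and the histogram bin count are arbitrary assumptions, not the paper's data):

      import numpy as np

      def background_ratio_scale(img_a, img_b, bins=256):
          # Scale factor from the modal value of the pixelwise ratio, assuming
          # the most frequent ratio values come from background structure.
          mask = img_b > 0
          ratio = img_a[mask] / img_b[mask]
          hist, edges = np.histogram(ratio, bins=bins)
          peak = hist.argmax()
          return 0.5 * (edges[peak] + edges[peak + 1])

      rng = np.random.default_rng(5)
      base = rng.poisson(50, (128, 128)).astype(float)    # background counts
      img1 = base.copy()
      img1[40:60, 40:60] += 30                            # a 'hot' region in image 1
      img2 = 1.7 * base + rng.normal(0, 2, base.shape)    # image 2: scaled acquisition

      print("ratio-peak scale:", round(background_ratio_scale(img2, img1), 2))  # ~1.7
      print("mean-ratio scale:", round(img2.mean() / img1.mean(), 2))  # biased by hot region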

  20. Integration of Scale Invariant Generator Technique and S-A Technique for Characterizing 2-D Patterns for Information Retrieve

    NASA Astrophysics Data System (ADS)

    Cao, L.; Cheng, Q.

    2004-12-01

    The scale invariant generator technique (SIG) and the spectrum-area analysis technique (S-A) were developed independently in connection with the concept of generalized scale invariance (GSI). The former was developed for characterizing the parameters involved in the GSI in order to characterize and simulate multifractal measures, whereas the latter was developed for identifying scaling breaks to decompose superimposed multifractal measures caused by multiple geophysical processes. A natural integration of these two techniques may yield a new technique serving two purposes: on the one hand, it can enrich the power of S-A by increasing the interpretability of decomposed patterns in some applications; on the other hand, it provides a means to test the uniqueness of the multifractality of measures, which is essential for applying the SIG technique in more complicated environments. The proposed technique has been implemented as a Dynamic Link Library (DLL) in Visual C++. The program can readily be used for method validation and application in different fields.

  1. Methodologies for the assessment of earthquake-triggered landslide hazard. A comparison of Logistic Regression and Artificial Neural Network models.

    NASA Astrophysics Data System (ADS)

    García-Rodríguez, M. J.; Malpica, J. A.; Benito, B.

    2009-04-01

    In recent years, interest in landslide hazard assessment studies has increased substantially. Such studies are appropriate for the development of evaluation and mitigation plans in landslide-prone areas. Several techniques are available for landslide hazard research at a regional scale. Generally, they can be classified in two groups: qualitative and quantitative methods. Most qualitative methods tend to be subjective, since they depend on expert opinion and represent hazard levels in descriptive terms. Quantitative methods, on the other hand, are objective and commonly used because of the correlation between instability factors and the location of landslides. Within this group, statistical approaches and newer heuristic techniques based on artificial intelligence (artificial neural networks (ANN), fuzzy logic, etc.) provide rigorous analysis for assessing landslide hazard over large regions, although they depend on the qualitative and quantitative data, scale, types of movement, and characteristic factors used. We analysed and compared approaches for assessing earthquake-triggered landslide hazard using logistic regression (LR) and artificial neural networks (ANN) with a back-propagation learning algorithm. An application was developed for El Salvador, a country in Central America where earthquake-triggered landslides are common. In a first phase, we analysed the susceptibility and hazard associated with the seismic scenario of the 2001 January 13th earthquake, calibrating the models with data from the landslide inventory for this scenario. These analyses require input variables representing the physical parameters that contribute to the initiation of slope instability, for example slope gradient, elevation, aspect, mean annual precipitation, lithology, land use, and terrain roughness, while the occurrence or non-occurrence of landslides is treated as the dependent variable. The results of the landslide susceptibility analysis were checked against landslide location data and show high concordance between the landslide inventory and the estimated high-susceptibility zone, with an adjustment of 95.1% for the ANN model and 89.4% for the LR model. In addition, we compared the two techniques using the Receiver Operating Characteristic (ROC) curve, a graphical plot of sensitivity vs. (1 - specificity) for a binary classifier as a function of its discrimination threshold, and calculated the Area Under the ROC curve (AUROC) for each model. Finally, the models were used to develop a new probabilistic landslide hazard map for future events, obtained by combining the expected triggering factor (calculated earthquake ground motion) for a return period of 475 years with the susceptibility map.
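
    A minimal scikit-learn sketch of the same LR-versus-ANN comparison on synthetic stand-in data (the features merely mimic the kind of predictors listed; nothing here reproduces the El Salvador analysis):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import roc_auc_score

      # Synthetic stand-in for the landslide dataset: rows are terrain cells,
      # columns mimic slope, elevation, aspect, precipitation, etc.
      X, y = make_classification(n_samples=3000, n_features=7, n_informative=5,
                                 weights=[0.8], random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

      lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
      ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(Xtr, ytr)   # back-propagation ANN

      for name, model in [("LR", lr), ("ANN", ann)]:
          auc = roc_auc_score(yte, model.predict_proba(Xte)[:, 1])
          print(f"{name}: AUROC = {auc:.3f}")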

  2. Impact and fracture analysis of fish scales from Arapaima gigas.

    PubMed

    Torres, F G; Malásquez, M; Troncoso, O P

    2015-06-01

    Fish scales from the Amazonian fish Arapaima gigas have been characterised to study their impact and fracture behaviour at three different environmental conditions. Scales were cut in two different directions to analyse the influence of the orientation of collagen layers. The energy absorbed during impact tests was measured for each sample and SEM images were taken after each test in order to analyse the failure mechanisms. The results showed that scales tested at cryogenic temperatures display fragile behaviour, while scales tested at room temperature did not fracture. Different failure mechanisms have been identified, analysed and compared with the failure modes that occur in bone. The impact energy obtained for fish scales was two to three times higher than the values reported for bone in the literature. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Development of a Linear Ion Trap Mass Spectrometer (LITMS) Investigation for Future Planetary Surface Missions

    NASA Technical Reports Server (NTRS)

    Brinckerhoff, W.; Danell, R.; Van Ameron, F.; Pinnick, V.; Li, X.; Arevalo, R.; Glavin, D.; Getty, S.; Mahaffy, P.; Chu, P.

    2014-01-01

    Future surface missions to Mars and other planetary bodies will benefit from continued advances in miniature sensor and sample handling technologies that enable high-performance chemical analyses of natural samples. Fine-scale (approx. 1 mm and below) analyses of rock surfaces and interiors, such as exposed on a drill core, will permit (1) the detection of habitability markers, including complex organics in association with their original depositional environment, and (2) the characterization of successive layers and gradients that can reveal the time-evolution of those environments. In particular, if broad-based and highly sensitive mass spectrometry techniques could be brought to such scales, the resulting planetary science capability would be truly powerful. The Linear Ion Trap Mass Spectrometer (LITMS) investigation is designed to conduct fine-scale organic and inorganic analyses of short (approx. 5-10 cm) rock cores such as could be acquired by a planetary lander or rover arm-based drill. LITMS combines both pyrolysis/gas chromatograph mass spectrometry (GCMS) of sub-sampled core fines and laser desorption mass spectrometry (LDMS) of the intact core surface, using a common mass analyzer, enhanced from the design used in the Mars Organic Molecule Analyzer (MOMA) instrument on the 2018 ExoMars rover. LITMS additionally features developments based on the Sample Analysis at Mars (SAM) investigation on MSL and recent NASA-funded prototype efforts in laser mass spectrometry, pyrolysis, and precision subsampling. LITMS brings these combined capabilities to bear on its four measurement objectives: (1) organics, broad survey: detect organic molecules over a wide range of molecular weight, volatility, electronegativity, concentration, and host mineralogy; (2) organics, molecular structure: characterize internal molecular structure to identify individual compounds and reveal functionalization and processing; (3) inorganic host environment: assess the local chemical/mineralogical makeup of organic host phases to help determine deposition and preservation factors; (4) chemical stratigraphy: analyze the fine spatial distribution and variation of key species with depth.

  4. ETE: a python Environment for Tree Exploration.

    PubMed

    Huerta-Cepas, Jaime; Dopazo, Joaquín; Gabaldón, Toni

    2010-01-13

    Many bioinformatics analyses, ranging from gene clustering to phylogenetics, produce hierarchical trees as their main result. These are used to represent the relationships among different biological entities, thus facilitating their analysis and interpretation. A number of standalone programs are available that focus on tree visualization or that perform specific analyses on them. However, such applications are rarely suitable for large-scale surveys, in which a higher level of automation is required. Currently, many genome-wide analyses rely on tree-like data representation and hence there is a growing need for scalable tools to handle tree structures at large scale. Here we present the Environment for Tree Exploration (ETE), a python programming toolkit that assists in the automated manipulation, analysis and visualization of hierarchical trees. ETE libraries provide a broad set of tree handling options as well as specific methods to analyze phylogenetic and clustering trees. Among other features, ETE allows for the independent analysis of tree partitions, has support for the extended newick format, provides an integrated node annotation system and permits linking trees to external data such as multiple sequence alignments or numerical arrays. In addition, ETE implements a number of built-in analytical tools, including phylogeny-based orthology prediction and cluster validation techniques. Finally, ETE's programmable tree drawing engine can be used to automate the graphical rendering of trees with customized node-specific visualizations. ETE provides a complete set of methods to manipulate tree data structures that extends current functionality in other, more general-purpose bioinformatics toolkits. ETE is free software and can be downloaded from http://ete.cgenomics.org.
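
    A minimal usage example in the style of the toolkit's current ete3 release (the tree and annotation are invented; the abstract refers to an earlier version, so treat the exact API as an assumption):

      from ete3 import Tree  # successor package to the ETE release described above

      # Build a tree from a newick string and annotate/query it programmatically.
      t = Tree("((human:0.6,chimp:0.5)primates:0.3,(rat:0.8,mouse:0.7)rodents:0.4);",
               format=1)                          # format=1 keeps internal node names
      primates = t.search_nodes(name="primates")[0]
      primates.add_feature("habitat", "global")   # custom node annotation

      print(t.get_ascii(show_internal=True))      # text rendering of the tree
      print([leaf.name for leaf in t])            # iterating a tree yields its leaves
      print(t.get_common_ancestor(["rat", "mouse"]).name)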

  5. Multiscale mechanisms of nutritionally induced property variation in spider silks.

    PubMed

    Blamires, Sean J; Nobbs, Madeleine; Martens, Penny J; Tso, I-Min; Chuang, Wei-Tsung; Chang, Chung-Kai; Sheu, Hwo-Shuenn

    2018-01-01

    Variability in spider major ampullate (MA) silk properties at different scales has proven difficult to determine and remains an obstacle to the development of synthetic fibers mimicking MA silk performance. A multitude of techniques may be used to measure multiscale aspects of silk properties. Here we fed five species of araneoid spiders solutions that either contained protein or were protein-deprived and performed silk tensile tests, small- and wide-angle X-ray scattering (SAXS/WAXS), amino acid composition analyses, and silk gene expression analyses to resolve persistent questions about how nutrient deprivation induces variations in MA silk mechanical properties across scales. Our analyses found that the properties of each spider's silk varied differently in response to variations in protein intake. We found changes in the crystalline and non-crystalline nanostructures to play specific roles in inducing the property variations we found. Across treatments, MaSp expression patterns differed in each of the five species. We found that in most species MaSp expression and amino acid composition variations did not conform to our predictions based on a traditional MaSp expression model. In general, changes to the silk's alanine and proline compositions influenced the alignment of the proteins within the silk's amorphous region, which influenced silk extensibility and toughness. Variations in structural alignment in the crystalline and non-crystalline regions influenced ultimate strength independent of genetic expression. Our study provides the deepest insights thus far into the mechanisms of how MA silk properties vary from gene expression to nanostructure formation to fiber mechanics. Such knowledge is imperative for promoting the production of synthetic silk fibers.

  7. Treatment for insertional Achilles tendinopathy: a systematic review.

    PubMed

    Wiegerinck, J I; Kerkhoffs, G M; van Sterkenburg, M N; Sierevelt, I N; van Dijk, C N

    2013-06-01

    To systematically search for and analyse the results of surgical and non-surgical treatments for insertional Achilles tendinopathy. A structured systematic review of the literature was performed to identify surgical and non-surgical therapeutic studies reporting on ten or more adults with insertional Achilles tendinopathy. MEDLINE, CINAHL, EMBASE (Classic) and the Cochrane database of controlled trials (1945-March 2011) were searched. The Coleman methodology score was used to assess the quality of included articles, which were analysed with an emphasis on change in pain score, patient satisfaction and complication rate. Of 451 reviewed abstracts, 14 trials met our inclusion criteria, evaluating 452 procedures in 433 patients. Five surgical techniques were evaluated; all had good patient satisfaction (average 89%), although the complication ratio differed substantially between techniques. Two studies analysed injections, showing a significant decrease in visual analogue scale (VAS) scores. Eccentric exercises showed a significant decrease in VAS, but a large group of patients was unsatisfied. Extracorporeal shockwave therapy (ESWT) was superior to both a wait-and-see policy and an eccentric training regime. One study evaluated CO2 laser, TECAR and cryoultrasound therapy, all with significant decreases in VAS. Despite differences in outcome and complication ratio, patient satisfaction is high in all surgical studies. It is not possible to draw conclusions regarding the best surgical treatment for insertional Achilles tendinopathy. ESWT seems effective in patients with non-calcified insertional Achilles tendinopathy. Although both eccentric exercise regimes resulted in a decrease in VAS score, full-range-of-motion eccentric exercises showed low patient satisfaction compared to floor-level exercises and other conservative treatment modalities.

  8. H2o Quantitative Analysis of Transition Zone Minerals Wadsleyite and Ringwoodite By Raman Spectroscopy

    NASA Astrophysics Data System (ADS)

    Novella, D.; Bolfan-Casanova, N.; Bureau, H.; Raepsaet, C.; Montagnac, G.

    2014-12-01

    Liquid H2O covers approximately 70% of the Earth's surface, but water can also be incorporated as OH- groups in the nominally anhydrous minerals (NAMs) that constitute the Earth's mantle, as observed in peridotitic xenoliths. The presence of even trace amounts (ppm wt) of hydrogen in mantle minerals strongly affects the physical, chemical, and rheological properties of the mantle. The Earth's transition zone (410 to 660 km depth) is particularly important in this regard, since it can store large amounts of H2O (wt%), as shown by experiments and recently by a natural sample. Addressing the behavior of H2O at great depths and its potential concentration in mantle NAMs is therefore fundamental to fully comprehending global-scale processes such as plate tectonics and magmatism. We developed an innovative technique to measure the H2O content of the main transition zone NAMs, wadsleyite and ringwoodite, by Raman spectroscopy. The technique uses a beam of 1-3 µm size to measure the small samples typical of high-pressure natural and synthetic specimens. High-pressure polyphasic samples are indeed very challenging to measure for H2O content by the routinely used Fourier-transform infrared (FTIR) spectroscopy and ion-probe mass spectrometry, making the Raman approach a valid alternative. High-quality crystals of wadsleyite and ringwoodite were synthesized at high pressure and temperature in a multi-anvil press and analyzed by Raman and FTIR spectroscopy as well as by elastic recoil detection analysis (ERDA), which is an absolute, standard-free technique. We will present experimental data that allow Raman spectroscopy to be applied to the determination of the H2O content of the most abundant minerals in the transition zone. The data gathered in this study will also permit investigation of the absorption coefficients of wadsleyite and ringwoodite that are employed in quantitative FTIR analyses.

  9. The Job Responsibilities Scale: Invariance in a Longitudinal Prospective Study.

    ERIC Educational Resources Information Center

    Ludlow, Larry H.; Lunz, Mary E.

    1998-01-01

    The degree of invariance of the Job Responsibilities Scale for medical technologists was studied for 1993 and 1995, conducting factor analyses of data from each year (1063 and 665 individuals, respectively). Nearly identical factor patterns were found, and Rasch rating scale analyses found nearly identical pairs of item estimates. Implications are…

  10. Histomorphometry and cortical robusticity of the adult human femur.

    PubMed

    Miszkiewicz, Justyna Jolanta; Mahoney, Patrick

    2018-01-13

    Recent quantitative analyses of human bone microanatomy, as well as theoretical models that propose bone microstructure and gross anatomical associations, have started to reveal insights into biological links that may facilitate remodeling processes. However, relationships between bone size and the underlying cortical bone histology remain largely unexplored. The goal of this study is to determine the extent to which static indicators of bone remodeling and vascularity, measured using histomorphometric techniques, relate to femoral midshaft cortical width and robusticity. Using previously published and new quantitative data from 450 adult human male (n = 233) and female (n = 217) femora, we determine if these aspects of femoral size relate to bone microanatomy. Scaling relationships are explored and interpreted within the context of tissue form and function. Analyses revealed that the area and diameter of Haversian canals and secondary osteons, and densities of secondary osteons and osteocyte lacunae from the sub-periosteal region of the posterior midshaft femur cortex were significantly, but not consistently, associated with femoral size. Cortical width and bone robusticity were correlated with osteocyte lacunae density and scaled with positive allometry. Diameter and area of osteons and Haversian canals decreased as the width of cortex and bone robusticity increased, revealing a negative allometric relationship. These results indicate that microscopic products of cortical bone remodeling and vascularity are linked to femur size. Allometric relationships between more robust human femora with thicker cortical bone and histological products of bone remodeling correspond with principles of bone functional adaptation. Future studies may benefit from exploring scaling relationships between bone histomorphometric data and measurements of bone macrostructure.
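
    The allometric language used here corresponds to the exponent b in a power law y = a x^b, commonly estimated as the slope of an ordinary least-squares fit on log-log axes; a toy Python sketch with simulated data (not the study's measurements):

      import numpy as np

      rng = np.random.default_rng(6)
      # Hypothetical data: femoral robusticity vs. osteocyte lacunae density.
      robusticity = rng.lognormal(0, 0.2, 200)
      # Simulate positive allometry (exponent > 1) plus multiplicative noise.
      lacunae_density = robusticity ** 1.3 * rng.lognormal(0, 0.1, 200)

      slope, intercept = np.polyfit(np.log(robusticity), np.log(lacunae_density), 1)
      kind = "positive" if slope > 1 else "negative" if slope < 1 else "isometric"
      print(f"allometric exponent = {slope:.2f} ({kind} allometry)")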

  11. A preliminary study comparing the use of cervical/upper thoracic mobilization and manipulation for individuals with mechanical neck pain.

    PubMed

    Griswold, David; Learman, Ken; O'Halloran, Bryan; Cleland, Josh

    2015-05-01

    Neck pain is routinely managed using manual therapy (MT) to the cervical and thoracic spines. While both mobilizations and manipulations to these areas have been shown to reduce neck pain, increase cervical range of motion, and reduce disability, the most effective option remains elusive. The purpose of this preliminary trial was to compare the pragmatic use of cervical and thoracic mobilizations vs. manipulation for mechanical neck pain. This trial included 20 patients with mechanical neck pain. Each patient was randomized to receive either mobilization or manipulation to both the cervical and thoracic spines during their plan of care. Within-group analyses were made with Wilcoxon signed-rank tests and between-group analyses were made with Mann-Whitney U. There were no between-group differences for any of the dependent variables including cervical active range of motion (CAROM) (P = 0.18), deep cervical flexion (DCF) endurance (P = 0.06), numerical pain rating scale (NPRS) (P = 0.26), the neck disability index (NDI, P = 0.33), patient-specific functional scale (PSFS, P = 0.20), or the global rating of change (GROC) scale (P = 0.94). Within-group results were significant for all outcome variables (P<0.001) from initial evaluation to discharge for both groups. These findings were consistent with other trials previously conducted that applied the MT techniques in a pragmatic fashion, but varied from previous trials where the treatment was standardized. A larger experimental study is necessary to further examine the differences between mobilization and manipulation for neck pain.
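
    A minimal SciPy sketch of the two analyses named above, run on simulated pain scores (invented numbers, not trial data):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      # Hypothetical NPRS pain scores (0-10) for the two groups, pre and post.
      mob_pre, mob_post = rng.normal(6, 1, 10), rng.normal(3, 1, 10)
      man_pre, man_post = rng.normal(6, 1, 10), rng.normal(3, 1, 10)

      # Within-group change: Wilcoxon signed-rank test on paired scores.
      print(stats.wilcoxon(mob_pre, mob_post))
      # Between-group comparison of change scores: Mann-Whitney U test.
      print(stats.mannwhitneyu(mob_pre - mob_post, man_pre - man_post))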

  12. Replicating the microbial community and water quality performance of full-scale slow sand filters in laboratory-scale filters.

    PubMed

    Haig, Sarah-Jane; Quince, Christopher; Davies, Robert L; Dorea, Caetano C; Collins, Gavin

    2014-09-15

    Previous laboratory-scale studies to characterise the functional microbial ecology of slow sand filters have suffered from methodological limitations that could compromise their relevance to full-scale systems. Therefore, to ascertain if laboratory-scale slow sand filters (L-SSFs) can replicate the microbial community and water quality production of industrially operated full-scale slow sand filters (I-SSFs), eight cylindrical L-SSFs were constructed and used to treat water from the same source as the I-SSFs. Half of the L-SSF sand beds were composed of sterilized sand (sterile) from the industrial filters and the other half of sand taken directly from the same industrial filter (non-sterile). All filters were operated for 10 weeks, with the microbial community and water quality parameters sampled and analysed weekly. To characterize the microbial community, phyla-specific qPCR assays and 454 pyrosequencing of the 16S rRNA gene were used in conjunction with an array of statistical techniques. The results demonstrate that it is possible to mimic both the water quality production and the structure of the microbial community of full-scale filters in the laboratory - at all levels of taxonomic classification except OTU - thus allowing comparison of L-SSF experiments with full-scale units. Further, it was found that the sand type composing the filter bed (non-sterile or sterile), the water quality produced, the age of the filters, and the depth of sand samples were all significant factors in explaining observed differences in the structure of the microbial consortia. This study is the first to the authors' knowledge to demonstrate that scaled-down slow sand filters can accurately reproduce the water quality and microbial consortia of full-scale slow sand filters. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. A combination of interdisciplinary analytical tools for evaluation of multi-layered coatings on medical grade stainless steel for biomedical applications.

    PubMed

    Maver, Uroš; Xhanari, Klodian; Žižek, Marko; Korte, Dorota; Gradišnik, Lidija; Franko, Mladen; Finšgar, Matjaž

    2018-05-03

    In this comprehensive study, several analytical techniques were used to evaluate multi-layered biomedical surface coatings composed of a drug (diclofenac) and a polymer (chitosan). Such a thorough examination is of paramount importance to assure safety and prove the efficiency of potential biomedical materials already at the in vitro level, hence leading to their potentially faster introduction to clinical trials. For the first time, a novel technique based on thermal diffusivity and conductivity measurement (photothermal beam deflection spectroscopy, BDS) was employed to analyse, in a non-destructive way, the thickness of the respective layers together with their thermal diffusivity and conductivity. In addition to attenuated total reflection Fourier-transform infrared spectroscopy (ATR-FTIR), BDS confirmed the successive surface layers of the prepared coatings. Scanning electron microscopy and atomic force microscopy were used to examine structural information on the macro- and micro/nano-scale, respectively. Surface hydrophilicity was measured with contact angle analysis, which clearly showed differences in hydrophilicity between coated and non-coated samples. Considering the targeted application of the prepared coatings (as implants in orthopaedic treatments), the in vitro drug release was analysed spectrophotometrically to examine the coatings' potential for controlled drug release. Furthermore, the material was also tested by electrochemical impedance spectroscopy and cyclic polarisation techniques, which were able to detect even minor differences between the performance of the coated and non-coated materials. As the final test, the biocompatibility of the coatings with human osteoblasts was determined. Copyright © 2018. Published by Elsevier B.V.

  14. Parameter Optimisation and Uncertainty Analysis in Visual MODFLOW based Flow Model for predicting the groundwater head in an Eastern Indian Aquifer

    NASA Astrophysics Data System (ADS)

    Mohanty, B.; Jena, S.; Panda, R. K.

    2016-12-01

    The overexploitation of groundwater resulted in the abandonment of several shallow tube wells in the study basin in Eastern India. For the sustainability of groundwater resources, basin-scale modelling of groundwater flow is indispensable for effective planning and management of water resources. The basic intent of this study was to develop a 3-D groundwater flow model of the study basin using the Visual MODFLOW Flex 2014.2 package and to calibrate and validate the model using 17 years of observed data. A sensitivity analysis was carried out to quantify the susceptibility of the aquifer system to river bank seepage, recharge from rainfall and agricultural practices, horizontal and vertical hydraulic conductivities, and specific yield. To quantify the impact of parameter uncertainties, the Sequential Uncertainty Fitting Algorithm (SUFI-2) and Markov chain Monte Carlo (McMC) techniques were implemented; results from the two techniques were compared and their advantages and disadvantages analysed. The Nash-Sutcliffe efficiency (NSE), coefficient of determination (R2), mean absolute error (MAE), mean percent deviation (Dv) and root mean squared error (RMSE) were adopted as model evaluation criteria during calibration and validation. NSE, R2, MAE, Dv and RMSE values for the groundwater flow model during calibration and validation were in the acceptable range, and the McMC technique provided more reasonable results than SUFI-2. The calibrated and validated model will be useful for identifying aquifer properties, analysing groundwater flow dynamics, and forecasting changes in groundwater levels.
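
    Two of the evaluation criteria are one-liners; a small Python sketch with illustrative head values (not the study's data):

      import numpy as np

      def nse(obs, sim):
          # Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.
          obs, sim = np.asarray(obs, float), np.asarray(sim, float)
          return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

      def rmse(obs, sim):
          # Root mean squared error in the units of the observations.
          return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

      # Illustrative monthly groundwater heads (m): observed vs. simulated.
      obs = np.array([12.1, 11.8, 11.2, 10.6, 10.9, 11.5, 12.0, 12.3])
      sim = np.array([12.0, 11.9, 11.0, 10.8, 11.0, 11.4, 11.8, 12.4])
      print(f"NSE = {nse(obs, sim):.3f}, RMSE = {rmse(obs, sim):.3f} m")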

  15. Measuring Pilot Knowledge in Training: The Pathfinder Network Scaling Technique

    DTIC Science & Technology

    2007-01-01

    Rowe, Leah J. (L-3 Communications, Mesa, AZ); Schvaneveldt, Roger W. (Arizona State University, Mesa, AZ). Only header fragments of this report are recoverable here: it concerns measuring pilot knowledge in training with the Pathfinder network scaling technique, and notes that Leah J. Rowe is a Training Research Specialist with L-3 Communications at the Air Force Research Laboratory.
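
    For the common PFnet(r = infinity, q = n - 1) variant of the Pathfinder technique, a link survives only if no alternative path offers a smaller maximum single hop between its endpoints; a minimal Python sketch (the proximity matrix is invented):

      import numpy as np

      def pfnet_inf(dist):
          # Pathfinder network PFnet(r=inf, q=n-1): keep an edge only if it is
          # itself a minimax path (smallest possible maximum hop) between its
          # endpoints. Minimax distances via a Floyd-Warshall variant.
          n = dist.shape[0]
          minimax = dist.astype(float).copy()
          for k in range(n):
              via_k = np.maximum(minimax[:, k][:, None], minimax[None, k, :])
              minimax = np.minimum(minimax, via_k)
          adj = dist <= minimax + 1e-12
          np.fill_diagonal(adj, False)
          return adj

      # Toy proximity (distance) matrix over four concepts.
      d = np.array([[0, 1, 4, 5],
                    [1, 0, 2, 6],
                    [4, 2, 0, 3],
                    [5, 6, 3, 0]], float)
      print(pfnet_inf(d))   # only the chain 0-1, 1-2, 2-3 survives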

  16. KENO-VI Primer: A Primer for Criticality Calculations with SCALE/KENO-VI Using GeeWiz

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowman, Stephen M

    2008-09-01

    The SCALE (Standardized Computer Analyses for Licensing Evaluation) computer software system developed at Oak Ridge National Laboratory is widely used and accepted around the world for criticality safety analyses. The well-known KENO-VI three-dimensional Monte Carlo criticality computer code is one of the primary criticality safety analysis tools in SCALE. The KENO-VI primer is designed to help a new user understand and use the SCALE/KENO-VI Monte Carlo code for nuclear criticality safety analyses. It assumes that the user has a college education in a technical field. There is no assumption of familiarity with Monte Carlo codes in general or with SCALE/KENO-VI in particular. The primer is designed to teach by example, with each example illustrating two or three features of SCALE/KENO-VI that are useful in criticality analyses. The primer is based on SCALE 6, which includes the Graphically Enhanced Editing Wizard (GeeWiz) Windows user interface. Each example uses GeeWiz to provide the framework for preparing input data and viewing output results. Starting with a Quickstart section, the primer gives an overview of the basic requirements for SCALE/KENO-VI input and allows the user to quickly run a simple criticality problem with SCALE/KENO-VI. The sections that follow Quickstart include a list of basic objectives at the beginning that identifies the goal of the section and the individual SCALE/KENO-VI features that are covered in detail in the sample problems in that section. Upon completion of the primer, a new user should be comfortable using GeeWiz to set up criticality problems in SCALE/KENO-VI. The primer provides a starting point for the criticality safety analyst who uses SCALE/KENO-VI. Complete descriptions are provided in the SCALE/KENO-VI manual. Although the primer is self-contained, it is intended as a companion volume to the SCALE/KENO-VI documentation. (The SCALE manual is provided on the SCALE installation DVD.) The primer provides specific examples of using SCALE/KENO-VI for criticality analyses; the SCALE/KENO-VI manual provides information on the use of SCALE/KENO-VI and all its modules. The primer also contains an appendix with sample input files.

  17. Simple Assessment Techniques for Soil and Water. Environmental Factors in Small Scale Development Projects. Workshops.

    ERIC Educational Resources Information Center

    Coordination in Development, New York, NY.

    This booklet was produced in response to the growing need for reliable environmental assessment techniques that can be applied to small-scale development projects. The suggested techniques emphasize low-technology environmental analysis. Although these techniques may lack precision, they can be extremely valuable in helping to assure the success…

  18. Assessing Hydrological and Energy Budgets in Amazonia through Regional Downscaling, and Comparisons with Global Reanalysis Products

    NASA Astrophysics Data System (ADS)

    Nunes, A.; Ivanov, V. Y.

    2014-12-01

    Although current global reanalyses provide reasonably accurate large-scale features of the atmosphere, systematic errors are still found in the hydrological and energy budgets of such products. In the tropics, precipitation is particularly challenging to model, an effort also adversely affected by the scarcity of hydrometeorological datasets in the region. With the goal of producing downscaled analyses appropriate for climate assessment at regional scales, a regional spectral model was run with a combination of precipitation assimilation and scale-selective bias correction. The latter is similar to the spectral nudging technique, which prevents the departure of the regional model's internal states from the large-scale forcing. The target area in this study is the Amazon region, where large errors are detected in reanalysis precipitation. To generate the downscaled analysis, the regional climate model used the NCEP/DOE R2 global reanalysis as initial and lateral boundary conditions, and assimilated NOAA's Climate Prediction Center (CPC) MORPHed precipitation (CMORPH), available at 0.25-degree resolution every 3 hours. The regional model's precipitation was successfully brought closer to the observations than the NCEP global reanalysis products, as a result of the impact of the precipitation assimilation scheme on the cumulus-convection parameterization and of improved boundary forcing achieved through a new version of scale-selective bias correction. Water and energy budget terms were also evaluated against global reanalyses and other datasets.
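
    Spectral nudging of this kind relaxes only the large-scale part of the regional solution toward the driving analysis. A toy one-dimensional sketch (NumPy assumed; the cutoff wavenumber, relaxation coefficient and synthetic fields are illustrative, not the model's actual configuration):

      import numpy as np

      def nudge(regional, driving, cutoff=4, alpha=0.1):
          # Relax only the low-wavenumber (large-scale) Fourier modes of the
          # regional field toward the driving analysis; small scales stay free.
          R, D = np.fft.rfft(regional), np.fft.rfft(driving)
          R[:cutoff] += alpha * (D[:cutoff] - R[:cutoff])
          return np.fft.irfft(R, n=regional.size)

      x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
      driving = np.sin(x)                                # large-scale forcing
      regional = np.sin(x + 0.3) + 0.2 * np.sin(12 * x)  # drifted + small scales
      print(np.abs(nudge(regional, driving) - regional).max())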

  19. Characterization of oxide scales grown on alloy 310S stainless steel after long term exposure to supercritical water at 500 °C

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behnamian, Yashar, E-mail: behnamia@ualberta.ca

    The oxide scale grown on static capsules made of alloy 310S stainless steel was investigated after exposure to supercritical water (SCW) at 500 °C and 25 MPa for exposure times up to 20,000 h. Characterization techniques such as X-ray diffraction, scanning/transmission electron microscopy, energy dispersive spectroscopy, and fast Fourier transformation were employed on the oxide scales. The elemental and phase analyses indicated that long term exposure to the SCW resulted in the formation of scales identified as Fe₃O₄ (outer layer), Fe-Cr spinel (inner layer) and Cr₂O₃ (transition layer) on the substrate, and Ni enrichment (a chromium-depleted region) in the alloy 310S. It was found that the layer thickness and weight gain vs. exposure time followed a parabolic law. The oxidation mechanism and scales grown on the alloy 310S stainless steel exposed to SCW are discussed. - Highlights: • Oxidation of alloy 310S stainless steel exposed to SCW (500 °C/25 MPa) • The layer thickness and weight gain vs. exposure time followed a parabolic law. • Oxide layers including Fe₃O₄ (outer), Fe-Cr spinel (inner) and Cr₂O₃ (transition) • Ni is segregated by the selective oxidation of Cr.
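
    A parabolic law of this form, w² = kp·t, is straightforward to fit to gravimetric data. A minimal sketch (NumPy assumed; the time/weight-gain pairs are hypothetical, not the study's measurements):

      import numpy as np

      # Hypothetical (exposure time h, weight gain mg/cm^2) pairs for illustration.
      t = np.array([1000.0, 3000.0, 5000.0, 10000.0, 20000.0])
      w = np.array([0.31, 0.55, 0.70, 1.02, 1.41])

      # Parabolic rate law w^2 = kp * t: kp is the least-squares slope of
      # w^2 against t through the origin.
      kp = np.sum(w**2 * t) / np.sum(t**2)
      print(f"kp = {kp:.3e} mg^2 cm^-4 h^-1")
      print("residuals:", w - np.sqrt(kp * t))  # check the parabolic fit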

  20. Periorbital Biometric Measurements using ImageJ Software: Standardisation of Technique and Assessment Of Intra- and Interobserver Variability

    PubMed Central

    Rajyalakshmi, R.; Prakash, Winston D.; Ali, Mohammad Javed; Naik, Milind N.

    2017-01-01

    Purpose: To assess the reliability and repeatability of periorbital biometric measurements using ImageJ software and to assess whether the horizontal visible iris diameter (HVID) serves as a reliable scale for facial measurements. Methods: This study was a prospective, single-blind, comparative study. Two clinicians performed 12 periorbital measurements on 100 standardised face photographs. Each individual’s HVID was determined by Orbscan IIz and used as a scale for measurements using ImageJ software. All measurements were repeated using the ‘average’ HVID of the study population as a measurement scale. The intraclass correlation coefficient (ICC) and Pearson product-moment coefficient were used as statistical tests to analyse the data. Results: The range of ICC for intra- and interobserver variability was 0.79–0.99 and 0.86–0.99, respectively. Test-retest reliability ranged from 0.66 to 1.0 and from 0.77 to 0.98, respectively. When the average HVID of the study population was used as the scale, the ICC ranged from 0.83 to 0.99, the test-retest reliability ranged from 0.83 to 0.96, and the measurements correlated well with recordings made using individual Orbscan HVID measurements. Conclusion: Periorbital biometric measurements using ImageJ software are reproducible and repeatable. The average HVID of the population as measured by Orbscan is a reliable scale for facial measurements. PMID:29403183
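
    The HVID acts as an in-image ruler: a known anatomical length fixes the millimetres-per-pixel factor for every other distance in the photograph. A minimal sketch (Python; the HVID and pixel values are invented for illustration):

      # Known anatomical length (HVID from Orbscan, mm) divided by its span
      # in image pixels gives the scale factor for all other distances.
      hvid_mm, hvid_px = 11.7, 234          # hypothetical values
      mm_per_px = hvid_mm / hvid_px         # 0.05 mm per pixel

      fissure_px = 540                      # a distance measured in ImageJ
      print(fissure_px * mm_per_px, "mm")   # converted to millimetres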

  1. A systematic comparison of different object-based classification techniques using high spatial resolution imagery in agricultural environments

    NASA Astrophysics Data System (ADS)

    Li, Manchun; Ma, Lei; Blaschke, Thomas; Cheng, Liang; Tiede, Dirk

    2016-07-01

    Geographic Object-Based Image Analysis (GEOBIA) is becoming more prevalent in remote sensing classification, especially for high-resolution imagery. Many supervised classification approaches are applied to objects rather than pixels, and several studies have been conducted to evaluate the performance of such supervised classification techniques in GEOBIA. However, these studies did not systematically investigate all relevant factors affecting the classification (segmentation scale, training set size, feature selection and mixed objects). In this study, statistical methods and visual inspection were used to compare these factors systematically in two agricultural case studies in China. The results indicate that Random Forest (RF) and Support Vector Machines (SVM) are highly suitable for GEOBIA classifications in agricultural areas and confirm the expected general tendency, namely that overall accuracies decline with increasing segmentation scale. All other investigated methods except for RF and SVM are more prone to lower accuracy due to broken objects at fine scales. In contrast to some previous studies, the RF classifier yielded the best results and the k-nearest neighbor classifier the worst, in most cases. Likewise, the RF and Decision Tree classifiers are the most robust with or without feature selection. The training sample analyses indicated that RF and AdaBoost.M1 possess a superior generalization capability, except when dealing with small training sample sizes. Furthermore, the classification accuracies were directly related to the homogeneity/heterogeneity of the segmented objects for all classifiers. Finally, it was suggested that RF should be considered in most cases for agricultural mapping.
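
    Such classifier comparisons on object-level feature tables reduce to a standard supervised-learning benchmark. A minimal sketch with scikit-learn (assumed available; synthetic features stand in for real per-object spectral and texture attributes):

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Synthetic "object" features in place of exported segmentation attributes.
      X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                                 n_classes=4, n_clusters_per_class=1, random_state=0)

      for name, clf in [("RF", RandomForestClassifier(n_estimators=200, random_state=0)),
                        ("SVM", SVC(kernel="rbf", C=10, gamma="scale"))]:
          print(name, cross_val_score(clf, X, y, cv=5).mean())  # 5-fold CV accuracy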

  2. Automatic location of L/H transition times for physical studies with a large statistical basis

    NASA Astrophysics Data System (ADS)

    González, S.; Vega, J.; Murari, A.; Pereira, A.; Dormido-Canto, S.; Ramírez, J. M.; contributors, JET-EFDA

    2012-06-01

    Completely automatic techniques to estimate and validate L/H transition times can be essential in L/H transition analyses. The generation of databases with hundreds of transition times and without human intervention is an important step towards (a) L/H transition physics analysis, (b) validation of L/H theoretical models and (c) creation of L/H scaling laws. An entirely unattended methodology is presented in this paper to build large databases of transition times in JET using time series. The proposed technique has been applied to a dataset of 551 JET discharges between campaigns C21 and C26. For discharges that show a clear signature in the time series, the transition time is predicted through the localisation properties of the wavelet transform; the prediction is accurate, with an uncertainty interval of ±3.2 ms. Discharges without a clear pattern in the time series are handled by an L/H mode classifier built from discharges with a clear signature. In this case, the estimation error shows a distribution with mean and standard deviation of 27.9 ms and 37.62 ms, respectively. Two different regression methods have been applied to the measurements acquired at the transition times identified by the automatic system. The obtained scaling laws for the threshold power are not significantly different from those obtained using the data at the transition times determined manually by the experts. The automatic methods allow physical studies to be performed with a large number of discharges, showing, for example, that there are statistically different types of transitions characterized by different scaling laws.
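
    The localisation property being exploited is that an abrupt change in a signal produces a sharp extremum in small-scale wavelet coefficients at the time of the change. A minimal sketch with PyWavelets (assumed available; a synthetic step-like trace stands in for a real edge-emission signal, and the wavelet choice and scales are illustrative):

      import numpy as np
      import pywt

      # Synthetic signal with a sharp drop standing in for an L/H transition.
      fs = 1000.0
      t = np.arange(0.0, 2.0, 1.0 / fs)
      rng = np.random.default_rng(0)
      x = np.where(t < 1.2, 1.0, 0.2) + 0.05 * rng.normal(size=t.size)

      # CWT with a first-derivative-of-Gaussian wavelet: a step edge shows up
      # as a sharp extremum of the small-scale coefficients at the step time.
      coeffs, _ = pywt.cwt(x, scales=np.arange(1, 16), wavelet="gaus1")
      edge = np.argmax(np.abs(coeffs[8]))   # one small-scale row localizes it
      print(f"estimated transition at t = {t[edge]:.3f} s")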

  3. The Effects of International Trade on Water Use.

    PubMed

    Kagohashi, Kazuki; Tsurumi, Tetsuya; Managi, Shunsuke

    2015-01-01

    The growing scarcity of water resources worldwide is conditioned not only by precipitation changes but also by changes to water use patterns; the latter is driven by social contexts such as capital intensity, trade openness, and income. This study explores the determinants of water use by focusing on the effect of trade openness on the degree to which water is withdrawn and consumed. Previous studies have conducted analyses on the determinants of water use but have ignored the endogeneity of trade openness. To deal with this endogeneity problem, we adopt instrumental variable estimation and clarify the determinants of water use. The determinants of water use are divided into scale, technique, and composition effects. Calculating each trade-induced effect, we examine how trade openness affects the degree of water use. Our results show that while trade has a positive effect on water withdrawal/consumption through trade-induced scale effects and direct composition effects, the trade-induced technique and the indirect composition effect, both of which exhibit a negative sign, counteract the scale effect and the direct composition effect, resulting in reduced water withdrawal/consumption. The overall effect induced by trade is calculated as being in the range of -1.00 to -1.52; this means that a 1% increase in the intensity of trade openness reduces water withdrawal/consumption by roughly 1.0-1.5%, on average. This result indicates that international bilateral trade would promote efficient water use through the diffusion of water-saving technologies and the reformation of industry composition.

  5. In situ visualisation and characterisation of the capacity of highly reactive minerals to preserve soil organic matter (SOM) in colloids at submicron scale.

    PubMed

    Xiao, Jian; Wen, Yongli; Li, Huan; Hao, Jialong; Shen, Qirong; Ran, Wei; Mei, Xinlan; He, Xinhua; Yu, Guanghui

    2015-11-01

    Mineral-organo associations (MOAs) are a mixture of identifiable biopolymers associated with highly reactive minerals and microorganisms. However, the in situ characterisation of, and correlation between, soil organic matter (SOM) and highly reactive Al and Fe minerals remain unclear owing to the lack of suitable technologies, particularly for long-term agricultural soil colloids at the submicron scale. We combined several novel techniques, including nano-scale secondary ion mass spectrometry (NanoSIMS), X-ray absorption near edge structure (XANES) and confocal laser scanning microscopy (CLSM), to characterise the capacity of highly reactive Al and Fe minerals to preserve SOM in a Ferralic Cambisol in south China. Our results demonstrated that: (1) highly reactive minerals were strongly related to SOM preservation, and SOM had a more significant linear correlation with highly reactive Al minerals than with highly reactive Fe minerals, according to region-of-interest correlation analyses using NanoSIMS; (2) allophane and ferrihydrite were the mineral species likely to determine the SOM preservation capability, as evaluated by X-ray photoelectron spectroscopy (XPS) and Fe K-edge XANES spectroscopy; and (3) soil organic biopolymers with dominant compounds such as proteins, polysaccharides and lipids were distributed at the rough and clustered surface of MOAs with high chemical and spatial heterogeneity, according to the CLSM observations. Our results also advance understanding of the roles played by highly reactive Al and Fe minerals in the spatial distribution of soil organic biopolymers and in SOM sequestration. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Adaptation of the Practice Environment Scale for military nurses: a psychometric analysis.

    PubMed

    Swiger, Pauline A; Raju, Dheeraj; Breckenridge-Sproat, Sara; Patrician, Patricia A

    2017-09-01

    The aim of this study was to confirm the psychometric properties of Practice Environment Scale of the Nursing Work Index in a military population. This study also demonstrates association rule analysis, a contemporary exploratory technique. One of the instruments most commonly used to evaluate the nursing practice environment is the Practice Environment Scale of the Nursing Work Index. Although the instrument has been widely used, the reliability, validity and individual item function are not commonly evaluated. Gaps exist with regard to confirmatory evaluation of the subscale factors, individual item analysis and evaluation in the outpatient setting and with non-registered nursing staff. This was a secondary data analysis of existing survey data. Multiple psychometric methods were used for this analysis using survey data collected in 2014. First, descriptive analyses were conducted, including exploration using association rules. Next, internal consistency was tested and confirmatory factor analysis was performed to test the factor structure. The specified factor structure did not hold; therefore, exploratory factor analysis was performed. Finally, item analysis was executed using item response theory. The differential item functioning technique allowed the comparison of responses by care setting and nurse type. The results of this study indicate that responses differ between groups and that several individual items could be removed without altering the psychometric properties of the instrument. The instrument functions moderately well in a military population; however, researchers may want to consider nurse type and care setting during analysis to identify any meaningful variation in responses. © 2017 John Wiley & Sons Ltd.
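
    Internal consistency of the kind tested here is usually summarised by Cronbach's alpha, which compares the summed item variances to the variance of the scale total. A minimal sketch (NumPy assumed; the simulated Likert responses are illustrative, not the survey data):

      import numpy as np

      def cronbach_alpha(items):
          # items: (n_respondents, n_items) matrix of Likert responses.
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      rng = np.random.default_rng(1)
      latent = rng.normal(size=(200, 1))    # one common factor
      items = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(200, 5))), 1, 5)
      print(f"alpha = {cronbach_alpha(items):.2f}")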

  7. Stuttering, Induced Fluency, and Natural Fluency: A Hierarchical Series of Activation Likelihood Estimation Meta-Analyses

    PubMed Central

    Budde, Kristin S.; Barron, Daniel S.; Fox, Peter T.

    2015-01-01

    Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as “neural signatures of stuttering” (Brown 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: 1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and 2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state). PMID:25463820

  8. Quantification of Impervious Surfaces Along the Wasatch Front, Utah: AN Object-Based Image Analysis Approach to Identifying AN Indicator for Wetland Stress

    NASA Astrophysics Data System (ADS)

    Leydsman-McGinty, E. I.; Ramsey, R. D.; McGinty, C.

    2013-12-01

    The Remote Sensing/GIS Laboratory at Utah State University, in cooperation with the United States Environmental Protection Agency, is quantifying impervious surfaces for three watershed sub-basins in Utah. The primary objective of developing watershed-scale quantifications of impervious surfaces is to provide an indicator of potential impacts to wetlands that occur within the Wasatch Front and along the Great Salt Lake. A geospatial layer of impervious surfaces can assist state agencies involved with Utah's Wetlands Program Plan (WPP) in understanding the impacts of impervious surfaces on wetlands, as well as support them in carrying out goals and actions identified in the WPP. The three watershed sub-basins, Lower Bear-Malad, Lower Weber, and Jordan, span the highly urbanized Wasatch Front and are consistent with focal areas in need of wetland monitoring and assessment as identified in Utah's WPP. Geospatial layers of impervious surface currently exist in the form of national and regional land cover datasets; however, these datasets are too coarse to be utilized in fine-scale analyses. In addition, the pixel-based image processing techniques used to develop these coarse datasets have proven insufficient in smaller scale or detailed studies, particularly when applied to high-resolution satellite imagery or aerial photography. Therefore, object-based image analysis techniques are being implemented to develop the geospatial layer of impervious surfaces. Object-based image analysis techniques employ a combination of both geospatial and image processing methods to extract meaningful information from high-resolution imagery. Spectral, spatial, textural, and contextual information is used to group pixels into image objects and then subsequently used to develop rule sets for image classification. eCognition, an object-based image analysis software program, is being utilized in conjunction with one-meter resolution National Agriculture Imagery Program (NAIP) aerial photography from 2011.
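
    The object-based workflow described, grouping pixels into objects and then classifying objects by their aggregated properties, can be sketched with open-source tools. A minimal example with scikit-image (assumed available; SLIC superpixels stand in for eCognition's multiresolution segmentation, and a bundled sample image replaces the NAIP photography):

      import numpy as np
      from skimage.data import astronaut
      from skimage.segmentation import slic

      img = astronaut()                     # bundled sample RGB image
      segments = slic(img, n_segments=200, compactness=10, start_label=0)

      # Mean colour per object: a simple per-object spectral feature table
      # that a rule set or classifier could consume.
      feats = np.array([img[segments == s].mean(axis=0) for s in np.unique(segments)])
      print(segments.max() + 1, "objects;", feats.shape)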

  9. Four-Spacecraft Magnetic Curvature and Vorticity Analyses on Kelvin-Helmholtz Waves in MHD Simulations

    NASA Astrophysics Data System (ADS)

    Kieokaew, Rungployphan; Foullon, Claire; Lavraud, Benoit

    2018-01-01

    Four-spacecraft missions are probing the Earth's magnetospheric environment with high potential for revealing spatial and temporal scales of a variety of in situ phenomena. The techniques allowed by these four spacecraft include the calculation of vorticity and the magnetic curvature analysis (MCA), both of which have been used in the study of various plasma structures. Motivated by curved magnetic field and vortical structures induced by Kelvin-Helmholtz (KH) waves, we investigate the robustness of the MCA and vorticity techniques with increasing (regular) tetrahedron sizes, to interpret real data. Here, for the first time, we test both techniques on a 2.5-D MHD simulation of KH waves at the magnetopause. We investigate, in particular, the curvature and flow vorticity across KH vortices and produce time series for static spacecraft in the boundary layers. The combined results of magnetic curvature and vorticity further help us to understand the development of KH waves. In particular, first, in the trailing edge, the magnetic curvature across the magnetopause points in opposite directions, in the wave propagation direction on the magnetosheath side and against it on the magnetospheric side. Second, the existence of a "turnover layer" on the magnetospheric side, defined by negative vorticity for the duskside magnetopause, which persists in the saturation phase, is reminiscent of roll-up history. We found significant variations in the MCA measures depending on the size of the tetrahedron. This study lends support to cross-scale observations to better understand the nature of curvature and its role in plasma phenomena.

  10. Evaluation and recommendation of sensitivity analysis methods for application to Stochastic Human Exposure and Dose Simulation models.

    PubMed

    Mokhtari, Amirhossein; Christopher Frey, H; Zheng, Junyu

    2006-11-01

    Sensitivity analyses of exposure or risk models can help identify the most significant factors to aid in risk management or to prioritize additional research to reduce uncertainty in the estimates. However, sensitivity analysis is challenged by non-linearity, interactions between inputs, and multiple days or time scales. Selected sensitivity analysis methods are evaluated with respect to their applicability to human exposure models with such features using a testbed. The testbed is a simplified version of the US Environmental Protection Agency's Stochastic Human Exposure and Dose Simulation (SHEDS) model. The methods evaluated include the Pearson and Spearman correlation, sample and rank regression, analysis of variance, the Fourier amplitude sensitivity test (FAST), and Sobol's method. The first five methods are known as "sampling-based" techniques, whereas the latter two methods are known as "variance-based" techniques. The main objective of the test cases was to identify the main and total contributions of individual inputs to the output variance. Sobol's method and FAST directly quantified these measures of sensitivity. Results show that the sensitivity of an input typically changed when evaluated under different time scales (e.g., daily versus monthly). All methods provided similar insights regarding less important inputs; however, Sobol's method and FAST provided more robust insights with respect to the sensitivity of important inputs compared to the sampling-based techniques. Thus, the sampling-based methods can be used in a screening step to identify unimportant inputs, followed by application of more computationally intensive refined methods to a smaller set of inputs. The implications of time variation in sensitivity results for risk management are briefly discussed.
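
    Variance-based indices of this kind are available in the SALib package (assumed here); a multiplicative toy model stands in for SHEDS, and the variable names and bounds are invented for illustration:

      import numpy as np
      from SALib.analyze import sobol
      from SALib.sample import saltelli

      problem = {"num_vars": 3,
                 "names": ["concentration", "contact_rate", "duration"],
                 "bounds": [[0.1, 1.0], [0.5, 2.0], [1.0, 8.0]]}

      X = saltelli.sample(problem, 1024)        # Saltelli sampling design
      Y = X[:, 0] * X[:, 1] * X[:, 2]           # multiplicative (non-linear) toy model

      Si = sobol.analyze(problem, Y)            # first-order and total indices
      for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
          print(f"{name}: S1 = {s1:.2f}, ST = {st:.2f}")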

  11. Evaluation and error apportionment of an ensemble of ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at i) apportioning the error to the responsible processes using time-scale analysis, ii) helping to detect causes of model error, and iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition and time series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact
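
    The bias/variance/covariance split referred to is the classic decomposition of mean squared error, MSE = (μm − μo)² + (σm − σo)² + 2σmσo(1 − r). A minimal sketch (NumPy assumed; synthetic observed and modelled series are used for illustration):

      import numpy as np

      def mse_parts(obs, mod):
          # MSE = bias^2 + (sigma_m - sigma_o)^2 + 2*sigma_m*sigma_o*(1 - r);
          # the identity is exact with population (ddof=0) moments.
          r = np.corrcoef(obs, mod)[0, 1]
          bias2 = (mod.mean() - obs.mean()) ** 2
          var = (mod.std() - obs.std()) ** 2
          cov = 2 * mod.std() * obs.std() * (1 - r)
          return bias2, var, cov

      rng = np.random.default_rng(2)
      obs = 40 + 10 * np.sin(np.linspace(0, 12 * np.pi, 720)) + rng.normal(0, 3, 720)
      mod = 0.9 * obs + 6 + rng.normal(0, 4, 720)    # biased, noisier "model"

      print("MSE  :", np.mean((mod - obs) ** 2))
      print("parts:", mse_parts(obs, mod))           # the parts sum to the MSE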

  12. Kinetic-scale fluctuations resolved with the Fast Plasma Investigation on NASA's Magnetospheric Multiscale mission.

    NASA Astrophysics Data System (ADS)

    Gershman, D. J.; Figueroa-Vinas, A.; Dorelli, J.; Goldstein, M. L.; Shuster, J. R.; Avanov, L. A.; Boardsen, S. A.; Stawarz, J. E.; Schwartz, S. J.; Schiff, C.; Lavraud, B.; Saito, Y.; Paterson, W. R.; Giles, B. L.; Pollock, C. J.; Strangeway, R. J.; Russell, C. T.; Torbert, R. B.; Moore, T. E.; Burch, J. L.

    2017-12-01

    Measurements from the Fast Plasma Investigation (FPI) on NASA's Magnetospheric Multiscale (MMS) mission have enabled unprecedented analyses of kinetic-scale plasma physics. FPI regularly provides estimates of current density and pressure gradients of sufficient accuracy to evaluate the relative contribution of terms in plasma equations of motion. In addition, high-resolution three-dimensional velocity distribution functions of both ions and electrons provide new insights into kinetic-scale processes. As an example, for a monochromatic kinetic Alfven wave (KAW) we find non-zero, but out-of-phase parallel current density and electric field fluctuations, providing direct confirmation of the conservative energy exchange between the wave field and particles. In addition, we use fluctuations in current density and magnetic field to calculate the perpendicular and parallel wavelengths of the KAW. Furthermore, examination of the electron velocity distribution inside the KAW reveals a population of electrons non-linearly trapped in the kinetic-scale magnetic mirror formed between successive wave peaks. These electrons not only contribute to the wave's parallel electric field but also account for over half of the density fluctuations within the wave, supplying an unexpected mechanism for maintaining quasi-neutrality in a KAW. Finally, we demonstrate that the employed wave vector determination technique is also applicable to broadband fluctuations found in Earth's turbulent magnetosheath.

  13. Variations in Global Precipitation: Climate-scale to Floods

    NASA Technical Reports Server (NTRS)

    Adler, Robert

    2006-01-01

    Variations in global precipitation from climate scale to small scale are examined using satellite-based analyses from the Global Precipitation Climatology Project (GPCP) and information from the Tropical Rainfall Measuring Mission (TRMM). Global and large regional rainfall variations and possible long-term changes are examined using the 27-year (1979-2005) monthly dataset from the GPCP. In addition to global patterns associated with phenomena such as ENSO, the dataset is explored for evidence of long-term change. Although the global change of precipitation in the dataset is near zero, the dataset does indicate a small upward trend in the Tropics (25S-25N), especially over ocean. Techniques are derived to isolate and eliminate variations due to ENSO and major volcanic eruptions, and the significance of the trend is examined. The status of TRMM estimates is examined in terms of evaluating and improving the long-term global dataset. To look at rainfall variations on a much smaller scale, TRMM data are used in combination with observations from other satellites to produce a 3-hr resolution, eight-year dataset for examination of weather events and for practical applications such as detecting floods. Characteristics of the dataset are presented and examples of recent flood events are examined.

  14. Implicit Priors in Galaxy Cluster Mass and Scaling Relation Determinations

    NASA Technical Reports Server (NTRS)

    Mantz, A.; Allen, S. W.

    2011-01-01

    Deriving the total masses of galaxy clusters from observations of the intracluster medium (ICM) generally requires some prior information, in addition to the assumptions of hydrostatic equilibrium and spherical symmetry. Often, this information takes the form of particular parametrized functions used to describe the cluster gas density and temperature profiles. In this paper, we investigate the implicit priors on hydrostatic masses that result from this fully parametric approach, and the implications of such priors for scaling relations formed from those masses. We show that the application of such fully parametric models of the ICM naturally imposes a prior on the slopes of the derived scaling relations, favoring the self-similar model, and argue that this prior may be influential in practice. In contrast, this bias does not exist for techniques which adopt an explicit prior on the form of the mass profile but describe the ICM non-parametrically. Constraints on the slope of the cluster mass-temperature relation in the literature show a separation based on the approach employed, with the results from fully parametric ICM modeling clustering nearer the self-similar value. Given that a primary goal of scaling relation analyses is to test the self-similar model, the application of methods subject to strong, implicit priors should be avoided. Alternative methods and best practices are discussed.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walz-Flannigan, A; Lucas, J; Buchanan, K

    Purpose: Manual technique selection in radiography is needed for imaging situations where there is difficulty in proper positioning for AEC, in the presence of prostheses, for non-bucky imaging, or for guiding image repeats. Basic information about how to provide consistent image signal and contrast for various kV and tissue thicknesses is needed to create manual technique charts, and is relevant to physicists involved in technique chart optimization. The guidance on technique combinations and the rules of thumb for providing consistent image signal still in use today are based on optical density measurements of screen-film combinations and older-generation x-ray systems. Tools such as a kV-scale chart can be useful for knowing how to modify mAs when kV is changed in order to maintain a consistent image receptor signal level. We evaluate these tools for modern equipment for use in optimizing properly size-scaled techniques. Methods: We used a water phantom to measure calibrated signal change for CR and DR (with grid) for various beam energies. Tube current values were calculated that would yield a consistent image signal response. Data were fitted to provide sufficient granularity for composing a technique-scale chart. Tissue thickness was approximated as equivalent to 80% of water depth. Results: We created updated technique-scale charts, providing mAs and kV combinations that achieve consistent signal for CR and DR for various tissue-equivalent thicknesses. We show how this information can be used to create properly scaled size-based manual technique charts. Conclusion: The relative scaling of mAs and kV for constant signal (i.e. the shape of the curve) appears substantially similar between film-screen and CR/DR. This supports the notion that image-receptor-related differences are minor factors for relative (not absolute) changes in mAs with varying kV. However, as demonstrated, these detailed and otherwise hard-to-find technique scales are useful tools for manual chart optimization.
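
    The way such a technique-scale chart is used can be illustrated with the classic textbook "15% rule": raising kV by about 15% roughly halves the mAs needed for the same receptor signal. This is a generic rule of thumb, not the CR/DR curves measured in the study; a minimal sketch:

      import math

      def scaled_mas(mas_ref, kv_ref, kv_new):
          # Each ~15% increase in kV halves the mAs needed for the same
          # receptor signal (and vice versa for decreases).
          steps = math.log(kv_new / kv_ref) / math.log(1.15)
          return mas_ref * 0.5 ** steps

      print(scaled_mas(20.0, 70, 80))   # ~10 mAs at the higher kV
      print(scaled_mas(20.0, 70, 60))   # ~43 mAs at the lower kV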

  16. Downscaling future climate scenarios to fine scales for hydrologic and ecological modeling and analysis

    USGS Publications Warehouse

    Flint, Lorraine E.; Flint, Alan L.

    2012-01-01

    The methodology, which includes a sequence of rigorous analyses and calculations, is intended to reduce the addition of uncertainty to the climate data as a result of the downscaling while providing the fine-scale climate information necessary for ecological analyses. It results in new but consistent data sets for the US at 4 km, the southwest US at 270 m, and California at 90 m and illustrates the utility of fine-scale downscaling to analyses of ecological processes influenced by topographic complexity.

  17. Effects of interactive instructional techniques in a web-based peripheral nervous system component for human anatomy.

    PubMed

    Allen, Edwin B; Walls, Richard T; Reilly, Frank D

    2008-02-01

    This study investigated the effects of interactive instructional techniques in a web-based peripheral nervous system (PNS) component of a first year medical school human anatomy course. Existing data from 9 years of instruction involving 856 students were used to determine (1) the effect of web-based interactive instructional techniques on written exam item performance and (2) differences between student opinions of the benefit level of five different types of interactive learning objects used. The interactive learning objects included Patient Case studies, review Games, Simulated Interactive Patients (SIP), Flashcards, and unit Quizzes. Exam item analysis scores were found to be significantly higher (p < 0.05) for students receiving the instructional treatment incorporating the web-based interactive learning objects than for students not receiving this treatment. Questionnaires using a five-point Likert scale were analysed to determine student opinion ratings of the interactive learning objects. Students reported favorably on the benefit level of all learning objects. Students rated the benefit level of the Simulated Interactive Patients (SIP) highest, and this rating was significantly higher (p < 0.05) than all other learning objects. This study suggests that web-based interactive instructional techniques improve student exam performance. Students indicated a strong acceptance of Simulated Interactive Patient learning objects.

  18. Elemental and isotopic imaging to study biogeochemical functioning of intact soil micro-environments

    NASA Astrophysics Data System (ADS)

    Mueller, Carsten W.

    2017-04-01

    The complexity of soils extends from the ecosystem scale to individual micro-aggregates, where nano-scale interactions between biota, organic matter (OM) and mineral particles are thought to control the long-term fate of soil carbon and nitrogen. It is known that such biogeochemical processes show disproportionately high reaction rates within nano- to micro-meter sized isolated zones ('hot spots') in comparison to surrounding areas. However, the majority of soil research is conducted on large bulk (> 1 g) samples, which are often significantly altered prior to analysis and analysed destructively. Thus it has previously been impossible to study elemental flows (e.g. C and N) between plants, microbes and soil in complex environments at the necessary spatial resolution within an intact soil system. By using nano-scale secondary ion mass spectrometry (NanoSIMS) in concert with other imaging techniques (e.g. scanning electron microscopy (SEM) and micro computed tomography (µCT)), classic analyses (isotopic and elemental analysis) and biochemical methods (e.g. GC-MS), it is possible to assemble a more complete picture of soil processes at the micro-scale. I will present exemplary results on the fate and distribution of organic C and N in complex micro-scale soil structures for a range of intact soil systems. Elemental imaging was used to study initial soil formation as an increase in the structural connectivity of micro-aggregates. Element distribution will be presented as a key to detecting functional spatial patterns and biogeochemical hot spots in macro-aggregate functioning and development. In addition, isotopic imaging will be demonstrated as a key to tracing the fate of plant-derived OM in the intact rhizosphere from the root to microbiota and mineral soil particles. The use of stable isotope enrichment (e.g. 13CO2, 15NH4+) in conjunction with NanoSIMS in particular allows the fate of OM or nutrients in soils to be traced directly at the relevant scale (e.g. assimilate C / inorganic N in the rhizosphere). Elemental mapping, however, requires more sophisticated computational approaches to evaluate (and quantify) the spatial heterogeneities of biogeochemical properties in intact soil systems.

  19. An Analog Macroscopic Technique for Studying Molecular Hydrodynamic Processes in Dense Gases and Liquids.

    PubMed

    Dahlberg, Jerry; Tkacik, Peter T; Mullany, Brigid; Fleischhauer, Eric; Shahinian, Hossein; Azimi, Farzad; Navare, Jayesh; Owen, Spencer; Bisel, Tucker; Martin, Tony; Sholar, Jodie; Keanini, Russell G

    2017-12-04

    An analog, macroscopic method for studying molecular-scale hydrodynamic processes in dense gases and liquids is described. The technique applies a standard fluid dynamic diagnostic, particle image velocimetry (PIV), to measure: i) velocities of individual particles (grains), extant on short, grain-collision time-scales, ii) velocities of systems of particles, on both short collision-time- and long, continuum-flow-time-scales, iii) collective hydrodynamic modes known to exist in dense molecular fluids, and iv) short- and long-time-scale velocity autocorrelation functions, central to understanding particle-scale dynamics in strongly interacting, dense fluid systems. The basic system is composed of an imaging system, light source, vibrational sensors, vibrational system with a known media, and PIV and analysis software. Required experimental measurements and an outline of the theoretical tools needed when using the analog technique to study molecular-scale hydrodynamic processes are highlighted. The proposed technique provides a relatively straightforward alternative to photonic and neutron beam scattering methods traditionally used in molecular hydrodynamic studies.
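
    Velocity autocorrelation functions like those in item iv) are computed from tracked particle velocities by averaging the dot product of velocities separated by a lag, over particles and time origins. A minimal sketch (NumPy assumed; a synthetic, exponentially decorrelating velocity field stands in for the PIV tracks):

      import numpy as np

      def vacf(v):
          # Normalized <v(0) . v(tau)> averaged over particles and time
          # origins; v has shape (n_frames, n_particles, 2).
          n = v.shape[0]
          num = [np.mean(np.sum(v[:n - lag] * v[lag:], axis=-1))
                 for lag in range(n // 2)]
          return np.array(num) / num[0]

      rng = np.random.default_rng(5)
      v = np.zeros((400, 50, 2))
      v[0] = rng.normal(size=(50, 2))
      for i in range(1, 400):               # exponentially decorrelating velocities
          v[i] = 0.95 * v[i - 1] + 0.3 * rng.normal(size=(50, 2))
      print(vacf(v)[[0, 10, 50]].round(3))  # correlation decays with lag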

  20. A Modified Rule of Thumb for Evaluating Scale Reproducibilities Determined by Electronic Computers

    ERIC Educational Resources Information Center

    Hofmann, Richard J.

    1978-01-01

    The Goodenough technique for determining scale error is compared to the Guttman technique and demonstrated to be more conservative. Implications with regard to Guttman's evaluative rule of thumb for evaluating a reproducibility are noted. (Author)
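
    Both techniques feed the coefficient of reproducibility, CR = 1 − errors/(respondents × items); they differ in how errors are counted. A minimal sketch of the Goodenough-style counting, in which errors are deviations from the perfect scale pattern implied by each respondent's total score (NumPy assumed; the response matrix is invented):

      import numpy as np

      def reproducibility(data):
          # data: (respondents, items) of 0/1, items ordered easiest to hardest.
          # Errors are deviations from the perfect scale pattern implied by
          # each respondent's total score (Goodenough-style counting).
          n, k = data.shape
          ideal = (np.arange(k) < data.sum(axis=1)[:, None]).astype(int)
          return 1 - (data != ideal).sum() / (n * k)

      responses = np.array([[1, 1, 1, 0, 0],
                            [1, 1, 0, 0, 0],
                            [1, 0, 1, 0, 0],   # one non-scalable pattern
                            [1, 1, 1, 1, 0]])
      print(f"CR = {reproducibility(responses):.3f}")   # 2 errors -> 0.900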

  1. Method and apparatus for determination of mechanical properties of functionally-graded materials

    DOEpatents

    Giannakopoulos, Antonios E.; Suresh, Subra

    1999-01-01

    Techniques for the determination of mechanical properties of homogeneous or functionally-graded materials from indentation testing are presented. The technique is applicable to indentation from the nano-scale through the macro-scale, including the geological scale. The technique involves creating a predictive load/depth relationship for a sample, providing an experimental load/depth relationship, comparing the experimental data to the predictive data, and determining a physical characteristic from the comparison.

  2. Comparison of Multi-Scale Digital Elevation Models for Defining Waterways and Catchments Over Large Areas

    NASA Astrophysics Data System (ADS)

    Harris, B.; McDougall, K.; Barry, M.

    2012-07-01

    Digital Elevation Models (DEMs) allow for the efficient and consistent creation of waterways and catchment boundaries over large areas. Studies of waterway delineation from DEMs are usually undertaken over small or single catchment areas due to the nature of the problems being investigated. Improvements in Geographic Information Systems (GIS) techniques, software, hardware and data allow for the analysis of larger data sets and also provide a consistent tool for the creation and analysis of waterways over extensive areas. However, waterways are rarely delineated over large regional areas because of the lack of available raw data sets and the amount of work required to create the underlying DEMs. This paper examines the definition of waterways and catchments over an area of approximately 25,000 km2 to establish the optimal DEM scale required for waterway delineation over large regional projects. The comparative study analysed multi-scale DEMs over two test areas (the Wivenhoe catchment, 543 km2, and a detailed 13 km2 area within it), including various data types, scales, quality, and variable catchment input parameters. Historic and available DEM data were compared to high-resolution Lidar-based DEMs to assess variations in the formation of stream networks. The results identified that, particularly in areas of high elevation change, DEMs at 20 m cell size created from broad-scale 1:25,000 data (combined with more detailed data or manual delineation in flat areas) are adequate for the creation of waterways and catchments at a regional scale.
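
    Stream networks are extracted from a DEM by assigning each cell a flow direction and accumulating flow downslope. A toy sketch of the core D8 step (NumPy assumed; real delineation tools also fill sinks and threshold flow accumulation, which this omits):

      import numpy as np

      dem = np.array([[9.0, 8.0, 7.0],
                      [8.0, 6.0, 5.0],
                      [7.0, 5.0, 3.0]])
      offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                 (0, 1), (1, -1), (1, 0), (1, 1)]

      rows, cols = dem.shape
      flow_dir = np.full((rows, cols), -1)     # -1 marks a pit or the outlet
      for r in range(rows):
          for c in range(cols):
              drops = []
              for k, (dr, dc) in enumerate(offsets):
                  rr, cc = r + dr, c + dc
                  if 0 <= rr < rows and 0 <= cc < cols:
                      # Drop per unit distance (diagonal neighbours are sqrt(2) away).
                      drops.append(((dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc), k))
              slope, k = max(drops)
              flow_dir[r, c] = k if slope > 0 else -1
      print(flow_dir)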

  3. Detection of submicron scale cracks and other surface anomalies using positron emission tomography

    DOEpatents

    Cowan, Thomas E.; Howell, Richard H.; Colmenares, Carlos A.

    2004-02-17

    Detection of submicron scale cracks and other mechanical and chemical surface anomalies using PET. This surface technique has sufficient sensitivity to detect single voids or pits of sub-millimeter size, and single cracks or fissures of millimeter-scale length, micrometer-scale depth, and nanometer-scale width. This technique can also be applied to detect surface regions of differing chemical reactivity. It may be utilized in a scanning or survey mode to simultaneously detect such mechanical or chemical features over large interior or exterior surface areas of parts as large as about 50 cm in diameter. The technique involves exposing a surface to a short-lived radioactive gas for a time period, removing the excess gas to leave a partial monolayer, determining the location and shape of the cracks, voids, porous regions, etc., and calculating their width, depth, and length. Detection of 0.01 mm deep cracks using a 3 mm detector resolution has been accomplished using this technique.

  4. Manufacturing Process Developments for Regeneratively-Cooled Channel Wall Rocket Nozzles

    NASA Technical Reports Server (NTRS)

    Gradl, Paul; Brandsmeier, Will

    2016-01-01

    Regeneratively cooled channel wall nozzles incorporate a series of integral coolant channels to contain the coolant, maintain adequate wall temperatures, and expand hot gas providing engine thrust and specific impulse. NASA has been evaluating manufacturing techniques targeting large scale channel wall nozzles to support the affordability of current and future liquid rocket engine nozzles and thrust chamber assemblies. The development of these large scale manufacturing techniques focuses on the liner formation, channel slotting with advanced abrasive water-jet milling techniques, and closeout of the coolant channels to replace or augment other cost reduction techniques being evaluated for nozzles. NASA is developing a series of channel closeout techniques, including large scale additive manufacturing laser deposition and explosively bonded closeouts. A series of subscale nozzles was completed evaluating these processes. Fabrication of mechanical test and metallography samples, in addition to subscale hardware, has focused on Inconel 625, 300-series stainless and aluminum alloys, as well as other candidate materials. Evaluations of these techniques are demonstrating potential for significant cost reductions for large scale nozzles and chambers. Hot fire testing using these techniques is planned in the future.

  5. Item response theory in personality assessment: a demonstration using the MMPI-2 depression scale.

    PubMed

    Childs, R A; Dahlstrom, W G; Kemp, S M; Panter, A T

    2000-03-01

    Item response theory (IRT) analyses have, over the past three decades, added much to our understanding of the relationships among and characteristics of test items, as revealed in examinees' response patterns. Assessment instruments used outside the educational context have only infrequently been analyzed using IRT, however. This study demonstrates the relevance of IRT to personality data through analyses of Scale 2 (the Depression Scale) of the revised Minnesota Multiphasic Personality Inventory (MMPI-2). A rich set of hypotheses regarding the items on this scale, including contrasts among the Harris-Lingoes and Wiener-Harmon subscales and differences in the items' measurement characteristics for men and women, is investigated through the IRT analyses.
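
    IRT models describe each item by a response curve. In the common two-parameter logistic (2PL) form, the probability of endorsing an item rises with the latent trait according to the item's discrimination and difficulty; a minimal sketch (NumPy assumed; the parameter values are illustrative, and the abstract does not state which IRT model was fitted):

      import numpy as np

      def p_endorse(theta, a, b):
          # 2PL item response function: probability of endorsing an item given
          # latent trait theta, discrimination a and difficulty b.
          return 1.0 / (1.0 + np.exp(-a * (theta - b)))

      theta = np.linspace(-3, 3, 7)             # latent depression level
      print(p_endorse(theta, a=1.5, b=0.5))     # endorsement rises with theta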

  6. Microhistological Techniques for Food Habits Analyses

    Treesearch

    Mark K. Johnson; Helen Wofford; Henry A. Pearson

    1983-01-01

    Techniques used to prepare and quantify herbivore diet samples for microhistological analyses are described. Plant fragments are illustrated for more than 50 selected plants common on longleaf-slash pine-bluestem range in the southeastern United States.

  7. Job-related stress in psychiatric assistant nurses.

    PubMed

    Yada, Hironori; Abe, Hiroshi; Omori, Hisamitsu; Ishida, Yasushi; Katoh, Takahiko

    2018-01-01

    We aimed to clarify how stress among psychiatric assistant nurses (PANs) differed from that among psychiatric registered nurses (PRNs). A cross-sectional survey was conducted with PRNs and PANs working in six psychiatric hospitals in Japan. The Psychiatric Nurse Job Stressor Scale (PNJSS) and the job stressor and stress reaction subscales of the Brief Job Stress Questionnaire measured stress in 68 PANs and 140 PRNs. The results were statistically analysed. Psychiatric assistant nurses had significantly higher scores than PRNs on the job stressor subscales of psychiatric nursing ability and interpersonal relations, and on the stress reaction subscales of irritability and somatic symptoms. “Psychiatric nursing ability,” “Communication” and “Use of techniques” were more strongly associated with stress reactions in PANs than in PRNs.

  8. Paragenesis and Geochronology of the Nopal I Uranium Deposit, Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. Fayek; M. Ren

    2007-02-14

    Uranium deposits can, by analogy, provide important information on the long-term performance of radioactive waste forms and radioactive waste repositories. Their complex mineralogy and variable elemental and isotopic compositions are informative, provided that analyses are obtained on the scale of several micrometers. Here, we present a structural model of the Nopal I deposit, as well as petrography at the nanoscale coupled with preliminary U-Th-Pb ages and O isotopic compositions of uranium-rich minerals obtained by Secondary Ion Mass Spectrometry (SIMS). This multi-technique approach promises to provide "natural system" data on the corrosion rate of uraninite, the natural analogue of spent nuclear fuel.

  9. Drinking water biofilms on copper and stainless steel exhibit specific molecular responses towards different disinfection regimes at waterworks.

    PubMed

    Jungfer, Christina; Friedrich, Frank; Varela Villarreal, Jessica; Brändle, Katharina; Gross, Hans-Jürgen; Obst, Ursula; Schwartz, Thomas

    2013-09-01

    Biofilms growing on copper and stainless steel substrata in natural drinking water were investigated. A modular pilot-scale distribution facility was installed at four waterworks using different raw waters and disinfection regimes. Three-month-old biofilms were analysed using molecular biology and microscopy methods. High total cell numbers, low counts of actively respiring cells and low numbers of cultivable bacteria indicated the high abundance of viable but not cultivable bacteria in the biofilms. The expression of the recA SOS responsive gene was detected and underlined the presence of transcriptionally active bacteria within the biofilms. This effect was most evident after UV disinfection, UV oxidation and UV disinfection with increased turbidity at waterworks compared to chemically treated and non-disinfected systems. Furthermore, live/dead staining techniques and environmental scanning electron microscopy imaging revealed the presence of living and intact bacteria in biofilms on copper substrata. Cluster analyses of DGGE profiles demonstrated differences in the composition of biofilms on copper and steel materials.

  10. First beam measurements on the vessel for extraction and source plasma analyses (VESPA) at the Rutherford Appleton Laboratory (RAL)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrie, Scott R., E-mail: scott.lawrie@stfc.ac.uk; John Adams Institute for Accelerator Science, Department of Physics, University of Oxford; Faircloth, Daniel C.

    2015-04-08

    In order to facilitate the testing of advanced H⁻ ion sources for the ISIS and Front End Test Stand (FETS) facilities at the Rutherford Appleton Laboratory (RAL), a Vessel for Extraction and Source Plasma Analyses (VESPA) has been constructed. This will perform the first detailed plasma measurements on the ISIS Penning-type H⁻ ion source using emission spectroscopic techniques. In addition, the 30-year-old extraction optics are re-designed from the ground up in order to fully transport the beam. Using multiple beam and plasma diagnostics devices, the ultimate aim is to improve H⁻ production efficiency and subsequent transport, for either long-term ISIS user operations or high-power FETS requirements. The VESPA will also accommodate and test a new scaled-up Penning H⁻ source design. This paper details the VESPA design, construction and commissioning, as well as initial beam and spectroscopy results.

  11. Tipping point analysis of ocean acoustic noise

    NASA Astrophysics Data System (ADS)

    Livina, Valerie N.; Brouwer, Albert; Harris, Peter; Wang, Lian; Sotirakopoulos, Kostas; Robinson, Stephen

    2018-02-01

    We apply tipping point analysis to a large record of ocean acoustic data to identify the main components of the acoustic dynamical system and study possible bifurcations and transitions of the system. The analysis is based on a statistical physics framework with stochastic modelling, where we represent the observed data as a composition of deterministic and stochastic components estimated from the data using time-series techniques. We analyse long-term and seasonal trends, system states and acoustic fluctuations to reconstruct a one-dimensional stochastic equation to approximate the acoustic dynamical system. We apply potential analysis to acoustic fluctuations and detect several changes in the system states in the past 14 years. These are most likely caused by climatic phenomena. We analyse trends in sound pressure level within different frequency bands and hypothesize a possible anthropogenic impact on the acoustic environment. The tipping point analysis framework provides insight into the structure of the acoustic data and helps identify its dynamic phenomena, correctly reproducing the probability distribution and scaling properties (power-law correlations) of the time series.
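
    Potential analysis of this kind infers an effective potential from the stationary density of the detrended fluctuations: for a one-dimensional system dx = −U′(x)dt + σdW, the stationary density satisfies U(x) ≈ −(σ²/2)·ln p(x) up to a constant, so multiple density modes signal multiple system states. A minimal sketch (NumPy assumed; a synthetic double-well series stands in for the acoustic fluctuations):

      import numpy as np

      rng = np.random.default_rng(3)
      x, dt, sigma = 0.0, 0.01, 0.6
      samples = np.empty(200_000)
      for i in range(samples.size):   # Euler-Maruyama: dx = -(x^3 - x)dt + sigma dW
          x += -(x**3 - x) * dt + sigma * np.sqrt(dt) * rng.normal()
          samples[i] = x

      hist, edges = np.histogram(samples, bins=60, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      keep = hist > 0
      U, c = -0.5 * sigma**2 * np.log(hist[keep]), centers[keep]
      # Two potential wells correspond to two system states.
      print("wells near x =", c[c < 0][np.argmin(U[c < 0])],
            "and x =", c[c > 0][np.argmin(U[c > 0])])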

  12. Prediction of hydrocarbon surface seepage potential using infiltrometer data

    NASA Astrophysics Data System (ADS)

    Connors, J. J.; Jackson, J. L.; Engle, R. A.; Connors, J. L.

    2017-12-01

    Environmental regulations addressing above-ground storage tank (AST) spill control activities typically require owners/operators to demonstrate that local soil permeability values are low enough to adequately contain released liquids while emergency-response procedures are conducted. Frequently, geotechnical borings and soil samples/analyses, and/or monitoring well slug-test analyses, are used to provide hydraulic conductivity data for the required calculations. While these techniques are useful in assessing hydrological characteristics of the subsurface, they do not always assess the uppermost surface soil layer, where the bulk of the containment can occur. This layer may have been subject to long-term permeability-reduction by activities such as compaction by vehicular and foot traffic, micro-coatings by hydrophobic pollutants, etc. This presentation explores the usefulness of dual-ring infiltrometers, both in field and bench-scale tests, to rapidly acquire actual hydraulic conductivity values of surficial soil layers, which can be much lower than subsurface values determined using more traditional downhole geotechnical and hydrogeological approaches.

  13. Quantitative Analysis of Venus Radar Backscatter Data in ArcGIS

    NASA Technical Reports Server (NTRS)

    Long, S. M.; Grosfils, E. B.

    2005-01-01

    Ongoing mapping of the Ganiki Planitia (V14) quadrangle of Venus and definition of material units has involved an integrated but qualitative analysis of Magellan radar backscatter images and topography using standard geomorphological mapping techniques. However, such analyses do not take full advantage of the quantitative information contained within the images. Analysis of the backscatter coefficient allows a much more rigorous statistical comparison between mapped units, permitting first-order self-similarity tests of geographically separated materials assigned identical geomorphological labels. Such analyses cannot be performed directly on pixel (DN) values from Magellan backscatter images, because the pixels are scaled to the Muhleman law for radar echoes on Venus and are not corrected for latitudinal variations in incidence angle. Therefore, DN values must be converted based on pixel latitude back to their backscatter coefficient values before accurate statistical analysis can occur. Here we present a method for performing the conversions and analysis of Magellan backscatter data using commonly available ArcGIS software and illustrate the advantages of the process for geological mapping.
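
    The DN-to-backscatter conversion can be sketched in a few lines. This is an illustrative reconstruction, not the authors' ArcGIS workflow: the 0.2 dB-per-DN scaling, the DN offset of 101, and the Muhleman-law coefficients are assumptions based on commonly quoted Magellan documentation, and the mapping from pixel latitude to incidence angle (taken here as a direct input) would come from the mission geometry tables.

```python
# Hedged sketch of the DN -> backscatter conversion; verify the constants
# against the Magellan product specification before real use.
import numpy as np

def muhleman_sigma0(theta_rad):
    """Muhleman empirical radar backscatter law for Venus."""
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return 0.0118 * c / (s + 0.111 * c) ** 3

def dn_to_sigma0_db(dn, incidence_deg):
    """Convert a Magellan image DN to an absolute backscatter coefficient in dB."""
    theta = np.radians(incidence_deg)
    relative_db = 0.2 * (dn - 101.0)   # DN is scaled relative to the Muhleman law
    return relative_db + 10.0 * np.log10(muhleman_sigma0(theta))

print(dn_to_sigma0_db(dn=120, incidence_deg=35.0))
```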

  14. Software development for the analysis of heartbeat sounds with LabVIEW in diagnosis of cardiovascular disease.

    PubMed

    Topal, Taner; Polat, Hüseyin; Güler, Inan

    2008-10-01

    In this paper, a time-frequency spectral analysis software package (Heart Sound Analyzer) for the computer-aided analysis of cardiac sounds has been developed with LabVIEW. The software modules reveal important information about cardiovascular disorders and can also assist general physicians in reaching more accurate and reliable diagnoses at early stages. Heart Sound Analyzer (HSA) software can help to overcome the shortage of expert doctors in rural as well as urban clinics and hospitals. HSA has two main blocks: data acquisition and preprocessing, and time-frequency spectral analyses. The heart sounds are first acquired using a modified stethoscope which has an electret microphone in it. Then, the signals are analysed using time-frequency/scale spectral analysis techniques such as the STFT, the Wigner-Ville distribution and wavelet transforms. HSA modules have been tested with real heart sounds from 35 volunteers and proved to be quite efficient and robust while dealing with a large variety of pathological conditions.
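
    A minimal Python analogue of the time-frequency stage (the authors' tool is implemented in LabVIEW, so this is a sketch of the idea rather than their code); the sampling rate and the synthetic burst signal are assumptions.

```python
# Short-time Fourier transform of a toy stand-in for a digitised heart sound.
import numpy as np
from scipy.signal import stft

fs = 4000                                   # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1 / fs)
s = np.sin(2 * np.pi * 60 * t) * (np.sin(2 * np.pi * 1.2 * t) > 0.9)  # toy bursts

f, frames, Z = stft(s, fs=fs, nperseg=256)  # frequency bins x time frames
power_db = 20 * np.log10(np.abs(Z) + 1e-12)
print(power_db.shape)
```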

  15. Physical-chemical treatment of rainwater runoff in recovery and recycling companies: Pilot-scale optimization.

    PubMed

    Blondeel, Evelyne; Depuydt, Veerle; Cornelis, Jasper; Chys, Michael; Verliefde, Arne; Van Hulle, Stijin Wim Henk

    2015-01-01

    Pilot-scale optimisation of different possible physical-chemical water treatment techniques was performed on the wastewater originating from three different recovery and recycling companies in order to select a (combination of) technique(s) for further full-scale implementation. This implementation is necessary to reduce the concentrations of both common pollutants (such as COD, nutrients and suspended solids) and potentially toxic metals, polyaromatic hydrocarbons and polychlorinated biphenyls to below the discharge limits. The pilot-scale tests (at 250 L h(-1) scale) demonstrate that sand-anthracite filtration and coagulation/flocculation are interesting first treatment techniques, with removal efficiencies for the above-mentioned pollutants (metals, polyaromatic hydrocarbons and polychlorinated biphenyls) of about 19% to 66% (sand-anthracite filtration) and 18% to 60% (coagulation/flocculation), respectively. If a second treatment step is required, the implementation of an activated carbon filter is recommended (about 46% to 86% additional removal is obtained).

  16. Current Challenges in Plant Eco-Metabolomics

    PubMed Central

    Peters, Kristian; Worrich, Anja; Alka, Oliver; Balcke, Gerd; Bruelheide, Helge; Dietz, Sophie; Dührkop, Kai; Heinig, Uwe; Kücklich, Marlen; Müller, Caroline; Poeschl, Yvonne; Pohnert, Georg; Ruttkies, Christoph; Schweiger, Rabea; Shahaf, Nir; Tortosa, Maria; Ueberschaar, Nico; Velasco, Pablo; Weiß, Brigitte M.; van Dam, Nicole M.

    2018-01-01

    The relatively new research discipline of Eco-Metabolomics is the application of metabolomics techniques to ecology with the aim to characterise biochemical interactions of organisms across different spatial and temporal scales. Metabolomics is an untargeted biochemical approach to measure many thousands of metabolites in different species, including plants and animals. Changes in metabolite concentrations can provide mechanistic evidence for biochemical processes that are relevant at ecological scales. These include physiological, phenotypic and morphological responses of plants and communities to environmental changes and also interactions with other organisms. Traditionally, research in biochemistry and ecology comes from two different directions and is performed at distinct spatiotemporal scales. Biochemical studies most often focus on intrinsic processes in individuals at physiological and cellular scales. Generally, they take a bottom-up approach scaling up cellular processes from spatiotemporally fine to coarser scales. Ecological studies usually focus on extrinsic processes acting upon organisms at population and community scales and typically study top-down and bottom-up processes in combination. Eco-Metabolomics is a transdisciplinary research discipline that links biochemistry and ecology and connects the distinct spatiotemporal scales. In this review, we focus on approaches to study chemical and biochemical interactions of plants at various ecological levels, mainly plant–organismal interactions, and discuss related examples from other domains. We present recent developments and highlight advancements in Eco-Metabolomics over the last decade from various angles. We further address the five key challenges: (1) complex experimental designs and large variation of metabolite profiles; (2) feature extraction; (3) metabolite identification; (4) statistical analyses; and (5) bioinformatics software tools and workflows. The presented solutions to these challenges will advance connecting the distinct spatiotemporal scales and bridging biochemistry and ecology. PMID:29734799

  17. Effect of Variable Spatial Scales on USLE-GIS Computations

    NASA Astrophysics Data System (ADS)

    Patil, R. J.; Sharma, S. K.

    2017-12-01

    Use of an appropriate spatial scale is very important in Universal Soil Loss Equation (USLE) based spatially distributed soil erosion modelling. This study aimed at assessing annual rates of soil erosion at different spatial scales/grid sizes and analysing how changes in spatial scale affect USLE-GIS computations, using simulation and statistical variability. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The present research study was conducted in Shakkar River watershed, situated in Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote sensing and GIS techniques were integrated with the Universal Soil Loss Equation (USLE) to predict the spatial distribution of soil erosion in the study area at four different spatial scales, viz., 30 m, 50 m, 100 m, and 200 m. Rainfall data, a soil map, a digital elevation model (DEM), an executable C++ program, and a satellite image of the area were used for preparation of the thematic maps for the various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at the four grid sizes. The statistical analysis of the four estimated datasets showed that the sediment loss dataset at the 30 m spatial scale has the minimum standard deviation (2.16), variance (4.68) and percent deviation from observed values (2.68-18.91%), and the highest coefficient of determination (R² = 0.874), among the four datasets. It is therefore recommended to adopt this spatial scale for USLE-GIS computations in the study area due to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates considerable scope for use of finer spatial scales in spatially distributed soil erosion modelling.
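
    The underlying model is the USLE product A = R K LS C P. The sketch below shows the per-grid-cell computation; all factor values are illustrative assumptions, not the study's rasters.

```python
# Per-cell USLE computation, A = R * K * LS * C * P (t ha^-1 yr^-1).
# Toy arrays stand in for the rasters derived from rainfall data,
# the soil map, the DEM and the satellite image.
import numpy as np

R  = np.full((4, 4), 550.0)   # rainfall erosivity (MJ mm ha^-1 h^-1 yr^-1)
K  = np.full((4, 4), 0.32)    # soil erodibility
LS = np.random.default_rng(1).uniform(0.5, 8.0, (4, 4))  # slope length-steepness
C  = np.full((4, 4), 0.28)    # cover management
P  = np.full((4, 4), 1.0)     # support practice

A = R * K * LS * C * P        # annual soil loss per grid cell
print(f"mean annual soil loss: {A.mean():.1f} t ha^-1 yr^-1")
```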

  18. Sockeye: A 3D Environment for Comparative Genomics

    PubMed Central

    Montgomery, Stephen B.; Astakhova, Tamara; Bilenky, Mikhail; Birney, Ewan; Fu, Tony; Hassel, Maik; Melsopp, Craig; Rak, Marcin; Robertson, A. Gordon; Sleumer, Monica; Siddiqui, Asim S.; Jones, Steven J.M.

    2004-01-01

    Comparative genomics techniques are used in bioinformatics analyses to identify the structural and functional properties of DNA sequences. As the amount of available sequence data steadily increases, the ability to perform large-scale comparative analyses has become increasingly relevant. In addition, the growing complexity of genomic feature annotation means that new approaches to genomic visualization need to be explored. We have developed a Java-based application called Sockeye that uses three-dimensional (3D) graphics technology to facilitate the visualization of annotation and conservation across multiple sequences. This software uses the Ensembl database project to import sequence and annotation information from several eukaryotic species. A user can additionally import their own custom sequence and annotation data. Individual annotation objects are displayed in Sockeye by using custom 3D models. Ensembl-derived and imported sequences can be analyzed by using a suite of multiple and pair-wise alignment algorithms. The results of these comparative analyses are also displayed in the 3D environment of Sockeye. By using the Java3D API to visualize genomic data in a 3D environment, we are able to compactly display cross-sequence comparisons. This provides the user with a novel platform for visualizing and comparing genomic feature organization. PMID:15123592

  19. Whale song analyses using bioinformatics sequence analysis approaches

    NASA Astrophysics Data System (ADS)

    Chen, Yian A.; Almeida, Jonas S.; Chou, Lien-Siang

    2005-04-01

    Animal songs are frequently analyzed using discrete hierarchical units, such as units, themes and songs. Because animal songs and bio-sequences may be understood as analogous, bioinformatics analysis tools (DNA/protein sequence alignment and alignment-free methods) are proposed to quantify the theme similarities of the songs of false killer whales recorded off northeast Taiwan. The eighteen themes with discrete units that were identified in an earlier study [Y. A. Chen, master's thesis, University of Charleston, 2001] were compared quantitatively using several distance metrics. These metrics included the scores calculated using the Smith-Waterman algorithm with the repeated procedure, the standardized Euclidean distance, and the angle metrics based on word frequencies. The theme classifications based on the different metrics were summarized and compared in dendrograms using cluster analyses. The results agree qualitatively with earlier classifications derived by human observation. These methods further quantify the similarities among themes, and they could be applied to the analyses of other animal songs on a larger scale. For instance, these techniques could be used to investigate song evolution and cultural transmission by quantifying the dissimilarities of humpback whale songs across different seasons, years, populations, and geographic regions. [Work supported by SC Sea Grant, and Ilan County Government, Taiwan.]
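
    The alignment-free metrics reduce each theme to a vector of unit ("word") frequencies. A minimal sketch with hypothetical unit sequences follows; the Smith-Waterman alignment score is omitted for brevity.

```python
# Compare two themes via the angle (cosine) metric and a standardized
# Euclidean distance on unit-frequency vectors. Unit labels are made up.
import numpy as np

theme_a = "ABBCADDA"           # discrete unit sequences (toy examples)
theme_b = "ABBCADCA"
units = sorted(set(theme_a + theme_b))

def freq_vector(theme):
    return np.array([theme.count(u) for u in units], dtype=float)

fa, fb = freq_vector(theme_a), freq_vector(theme_b)
angle = np.arccos(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb)))
sd = np.sqrt(np.stack([fa, fb]).var(axis=0))   # per-unit spread
sd[sd == 0] = 1.0                               # avoid division by zero
std_euclid = np.sqrt((((fa - fb) / sd) ** 2).sum())
print(f"angle metric: {angle:.3f} rad, standardized Euclidean: {std_euclid:.3f}")
```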

  20. The Chinese version of the Myocardial Infarction Dimensional Assessment Scale (MIDAS): Mokken scaling

    PubMed Central

    2012-01-01

    Background Hierarchical scales are very useful in clinical practice due to their ability to discriminate precisely between individuals, and the original English version of the Myocardial Infarction Dimensional Assessment Scale has been shown to contain a hierarchy of items. The purpose of this study was to analyse a Mandarin Chinese translation of the Myocardial Infarction Dimensional Assessment Scale for a hierarchy of items according to the criteria of Mokken scaling. Data from 180 Chinese participants who completed the Chinese translation of the Myocardial Infarction Dimensional Assessment Scale were analysed using the Mokken Scaling Procedure and the 'R' statistical programme, using the diagnostics available in these programmes. Correlation between the Mandarin Chinese items and a Chinese translation of the Short Form (36) Health Survey was also analysed. Findings Fifteen items from the Mandarin Chinese Myocardial Infarction Dimensional Assessment Scale were retained in a strong and reliable Mokken scale; invariant item ordering was not evident, and the Mokken-scaled items of the Chinese Myocardial Infarction Dimensional Assessment Scale correlated with the Short Form (36) Health Survey. Conclusions Items from the Mandarin Chinese Myocardial Infarction Dimensional Assessment Scale form a Mokken scale, and this offers further insight into how the items of the Myocardial Infarction Dimensional Assessment Scale relate to the measurement of health-related quality of life of people with a myocardial infarction. PMID:22221696

  1. Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses

    PubMed Central

    Liu, Bo; Madduri, Ravi K; Sotomayor, Borja; Chard, Kyle; Lacinski, Lukasz; Dave, Utpal J; Li, Jianqiang; Liu, Chunchen; Foster, Ian T

    2014-01-01

    Due to the upcoming deluge of genome data, the need to store and process large-scale genome data, provide easy access to biomedical analysis tools, and support efficient data sharing and retrieval presents significant challenges. The variability in data volume results in variable computing and storage requirements; therefore, biomedical researchers are pursuing more reliable, dynamic and convenient methods for conducting sequencing analyses. This paper proposes a Cloud-based bioinformatics workflow platform for large-scale next-generation sequencing analyses, which enables reliable and highly scalable execution of sequencing analysis workflows in a fully automated manner. Our platform extends the existing Galaxy workflow system by adding data management capabilities for transferring large quantities of data efficiently and reliably (via Globus Transfer), domain-specific analysis tools preconfigured for immediate use by researchers (via user-specific tools integration), automatic deployment on Cloud for on-demand resource allocation and pay-as-you-go pricing (via Globus Provision), a Cloud provisioning tool for auto-scaling (via HTCondor scheduler), and support for validating the correctness of workflows (via semantic verification tools). Two bioinformatics workflow use cases as well as performance evaluation are presented to validate the feasibility of the proposed approach. PMID:24462600

  3. Associations between DSM-5 section III personality traits and the Minnesota Multiphasic Personality Inventory 2-Restructured Form (MMPI-2-RF) scales in a psychiatric patient sample.

    PubMed

    Anderson, Jaime L; Sellbom, Martin; Ayearst, Lindsay; Quilty, Lena C; Chmielewski, Michael; Bagby, R Michael

    2015-09-01

    Our aim in the current study was to evaluate the convergence between Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5) Section III dimensional personality traits, as operationalized via the Personality Inventory for DSM-5 (PID-5), and Minnesota Multiphasic Personality Inventory 2-Restructured Form (MMPI-2-RF) scale scores in a psychiatric patient sample. We used a sample of 346 (171 men, 175 women) patients who were recruited through a university-affiliated psychiatric facility in Toronto, Canada. We estimated zero-order correlations between the PID-5 and MMPI-2-RF substantive scale scores, as well as a series of exploratory structural equation modeling (ESEM) analyses to examine how these scales converged in multivariate latent space. Results generally showed empirical convergence between the scales of these two measures that were thematically meaningful and in accordance with conceptual expectations. Correlation analyses showed significant associations between conceptually expected scales, and the highest associations tended to be between scales that were theoretically related. ESEM analyses generated evidence for distinct internalizing, externalizing, and psychoticism factors across all analyses. These findings indicate convergence between these two measures and help further elucidate the associations between dysfunctional personality traits and general psychopathology. © 2015 APA, all rights reserved.

  4. Fabrication methods for mesoscopic flying vehicle

    NASA Astrophysics Data System (ADS)

    Cheng, Yih-Lin

    2001-10-01

    Small-scale flying vehicles are attractive tools for atmospheric science research. A centimeter-size mesoscopic electric helicopter, the mesicopter, has been developed at Stanford University for these applications. The mesoscopic scale implies a design with critical features between tens of microns and several millimeters. Three major parts in the mesicopter are challenging to manufacture. Rotors require smooth 3D surfaces and a blade thickness of less than 100 μm. Components in the DC micro-motor must be made of engineering materials, which is difficult on the mesoscopic scale. Airframe fabrication has to integrate complex 3D geometry into one single structure at this scale. In this research, material selection and manufacturing approaches have been investigated and implemented. In rotor fabrication, high-strength polymers manufactured by the Shape Deposition Manufacturing (SDM) technique were the top choice. Aluminum alloys were only considered as the second choice because the fabrication process is more involved. Lift tests showed that the 4-blade polymer and aluminum rotors could deliver about 90% of the expected lift (4 g). To explain the rotor performance, structural analyses of spinning rotors were performed and the fabricated geometry was investigated. The bending deflections and the torsional twists were found to be too small to degrade aerodynamic performance. The rotor geometry was verified by laser scanning and by cross-section observations. Commercially available motors are used in the prototypes but a smaller DC micro-motor was designed for future use. Components of the DC micro-motors were fabricated by the Mesoscopic Additive/Subtractive Material Processing technique, which is capable of shaping engineering materials on the mesoscopic scale. The approaches are described in this thesis. The airframe was manufactured using the SDM process, which is capable of building complex parts without assembly. Castable polymers were chosen and mixed with glass microspheres to reduce their density. The finished airframe (65.5 mm x 65.5 mm) weighed only 1.5 g. Two mesicopter prototypes, weighing 3 g and 17 g, have illustrated that powered flight at this scale is feasible. This research provides solutions to manufacture the challenging parts for the mesicopter. The manufacturing approaches discussed here are applicable to other small flying vehicles in similar and even smaller size regimes.

  5. Ice stream motion facilitated by a shallow-deforming and accreting bed

    PubMed Central

    Spagnolo, Matteo; Phillips, Emrys; Piotrowski, Jan A.; Rea, Brice R.; Clark, Chris D.; Stokes, Chris R.; Carr, Simon J.; Ely, Jeremy C.; Ribolini, Adriano; Wysota, Wojciech; Szuman, Izabela

    2016-01-01

    Ice streams drain large portions of ice sheets and play a fundamental role in governing their response to atmospheric and oceanic forcing, with implications for sea-level change. The mechanisms that generate ice stream flow remain elusive. Basal sliding and/or bed deformation have been hypothesized, but ice stream beds are largely inaccessible. Here we present a comprehensive, multi-scale study of the internal structure of mega-scale glacial lineations (MSGLs) formed at the bed of a palaeo ice stream. Analyses were undertaken at macro- and microscales, using multiple techniques including X-ray tomography, thin sections and ground penetrating radar (GPR) acquisitions. Results reveal homogeneity in stratigraphy, kinematics, granulometry and petrography. The consistency of the physical and geological properties demonstrates a continuously accreting, shallow-deforming bed and invariant basal conditions. This implies that ice stream basal motion on soft sediment beds during MSGL formation is accommodated by plastic deformation, facilitated by continuous sediment supply and an inefficient drainage system. PMID:26898399

  6. ELUCIDATING BRAIN CONNECTIVITY NETWORKS IN MAJOR DEPRESSIVE DISORDER USING CLASSIFICATION-BASED SCORING.

    PubMed

    Sacchet, Matthew D; Prasad, Gautam; Foland-Ross, Lara C; Thompson, Paul M; Gotlib, Ian H

    2014-04-01

    Graph theory is increasingly used in the field of neuroscience to understand the large-scale network structure of the human brain. There is also considerable interest in applying machine learning techniques in clinical settings, for example, to make diagnoses or predict treatment outcomes. Here we used support-vector machines (SVMs), in conjunction with whole-brain tractography, to identify graph metrics that best differentiate individuals with Major Depressive Disorder (MDD) from nondepressed controls. To do this, we applied a novel feature-scoring procedure that incorporates iterative classifier performance to assess feature robustness. We found that small-worldness, a measure of the balance between global integration and local specialization, most reliably differentiated MDD from nondepressed individuals. Post-hoc regional analyses suggested that heightened connectivity of the subcallosal cingulate gyrus (SCG) in MDD contributes to these differences. The current study provides a novel way to assess the robustness of classification features and reveals anomalies in large-scale neural networks in MDD.
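
    Small-worldness, the metric singled out above, is the graph's clustering relative to a random reference divided by its characteristic path length relative to the same reference. A generic sketch with networkx on a toy graph follows (not the study's tractography networks).

```python
# Small-worldness sigma = (C/C_rand) / (L/L_rand); values > 1 suggest
# small-world organization. networkx implements this directly.
import networkx as nx

G = nx.watts_strogatz_graph(n=30, k=4, p=0.1, seed=42)  # toy small-world graph
sigma = nx.sigma(G, niter=5, nrand=3, seed=42)          # small niter/nrand for speed
print(f"small-worldness sigma = {sigma:.2f}")
```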

  7. Constitutive Modeling of Nanotube-Reinforced Polymer Composites

    NASA Technical Reports Server (NTRS)

    Odegard, G. M.; Gates, T. S.; Wise, K. E.; Park, C.; Siochi, E. J.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    In this study, a technique is presented for developing constitutive models for polymer composite systems reinforced with single-walled carbon nanotubes (SWNT). Because the polymer molecules are on the same size scale as the nanotubes, the interaction at the polymer/nanotube interface is highly dependent on the local molecular structure and bonding. At these small length scales, the lattice structures of the nanotube and polymer chains cannot be considered continuous, and the bulk mechanical properties can no longer be determined through traditional micromechanical approaches that are formulated by using continuum mechanics. It is proposed herein that the nanotube, the local polymer near the nanotube, and the nanotube/polymer interface can be modeled as an effective continuum fiber using an equivalent-continuum modeling method. The effective fiber serves as a means for incorporating micromechanical analyses for the prediction of bulk mechanical properties of SWNT/polymer composites with various nanotube lengths, concentrations, and orientations. As an example, the proposed approach is used for the constitutive modeling of two SWNT/polyimide composite systems.

  8. Intelligent Interfaces for Mining Large-Scale RNAi-HCS Image Databases

    PubMed Central

    Lin, Chen; Mak, Wayne; Hong, Pengyu; Sepp, Katharine; Perrimon, Norbert

    2010-01-01

    Recently, high-content screening (HCS) has been combined with RNA interference (RNAi) to become an essential image-based high-throughput method for studying genes and biological networks through RNAi-induced cellular phenotype analyses. However, a genome-wide RNAi-HCS screen typically generates tens of thousands of images, most of which remain uncategorized due to the inadequacies of existing HCS image analysis tools. Until now, it has required highly trained scientists to browse prohibitively large RNAi-HCS image databases, producing only a handful of qualitative results regarding cellular morphological phenotypes. For this reason we have developed intelligent interfaces to facilitate the application of the HCS technology in biomedical research. Our new interfaces empower biologists with computational power not only to effectively and efficiently explore large-scale RNAi-HCS image databases, but also to apply their knowledge and experience to interactive mining of cellular phenotypes using Content-Based Image Retrieval (CBIR) with Relevance Feedback (RF) techniques. PMID:21278820

  9. Towards a natural disaster intervention and recovery framework.

    PubMed

    Lawther, Peter M

    2016-07-01

    Contemporary responses facilitating long-term recovery from large-scale natural disasters juxtapose the interventions of humanitarian agencies and governments with those of the affected community. The extent to which these mechanisms articulate is crucial to the recovery propensity of the affected communities. This research examines such action by exploring the relationship between the scale of post-disaster response interventions, the extent of community participation in them, and their impact on community recovery, using a community wealth capital framework. The investigation was applied to a study of the longer-term community recovery of the island of Vilufushi, Republic of Maldives, which was almost completely destroyed by the Indian Ocean tsunami of 26 December 2004. Data were analysed using a pattern-matching technique and a holistic recovery network analysis. The research framework, informed by the case-study results, other long-term recovery evaluations, and existing resilience theory, is reconfigured as a testable roadmap for future post-disaster interventions. © 2016 The Author(s). Disasters © Overseas Development Institute, 2016.

  10. A simple autocorrelation algorithm for determining grain size from digital images of sediment

    USGS Publications Warehouse

    Rubin, D.M.

    2004-01-01

    Autocorrelation between pixels in digital images of sediment can be used to measure average grain size of sediment on the bed, grain-size distribution of bed sediment, and vertical profiles in grain size in a cross-sectional image through a bed. The technique is less sensitive than traditional laboratory analyses to tails of a grain-size distribution, but it offers substantial other advantages: it is 100 times as fast; it is ideal for sampling surficial sediment (the part that interacts with a flow); it can determine vertical profiles in grain size on a scale finer than can be sampled physically; and it can be used in the field to provide almost real-time grain-size analysis. The technique can be applied to digital images obtained using any source with sufficient resolution, including digital cameras, digital video, or underwater digital microscopes (for real-time grain-size mapping of the bed). © 2004, SEPM (Society for Sedimentary Geology).
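
    The core operation is a 2D spatial autocorrelation of an image patch, obtainable via the FFT (Wiener-Khinchin theorem). The sketch below uses random pixels as a stand-in for a sediment image; the published method additionally calibrates the correlation fall-off against images of sediment of known size.

```python
# FFT-based 2D autocorrelation of an image patch. Coarser grains produce a
# slower fall-off of correlation with pixel offset.
import numpy as np

rng = np.random.default_rng(3)
patch = rng.random((128, 128))              # stand-in for a sediment image
patch -= patch.mean()

spec = np.abs(np.fft.fft2(patch)) ** 2       # power spectrum
acorr = np.real(np.fft.ifft2(spec))          # autocorrelation (unnormalised)
acorr /= acorr[0, 0]                         # 1.0 at zero offset

print([f"{acorr[0, dx]:.3f}" for dx in range(5)])  # correlation vs x-offset
```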

  11. Measurement of replication structures at the nanometer scale using super-resolution light microscopy

    PubMed Central

    Baddeley, D.; Chagin, V. O.; Schermelleh, L.; Martin, S.; Pombo, A.; Carlton, P. M.; Gahl, A.; Domaing, P.; Birk, U.; Leonhardt, H.; Cremer, C.; Cardoso, M. C.

    2010-01-01

    DNA replication, similar to other cellular processes, occurs within dynamic macromolecular structures. Any comprehensive understanding ultimately requires quantitative data to establish and test models of genome duplication. We used two different super-resolution light microscopy techniques to directly measure and compare the size and numbers of replication foci in mammalian cells. This analysis showed that replication foci vary in size from 210 nm down to 40 nm. Remarkably, spatially modulated illumination (SMI) and 3D-structured illumination microscopy (3D-SIM) both showed an average size of 125 nm that was conserved throughout S-phase and independent of the labeling method, suggesting a basic unit of genome duplication. Interestingly, the improved optical 3D resolution identified 3- to 5-fold more distinct replication foci than previously reported. These results show that optical nanoscopy techniques enable accurate measurements of cellular structures at a level previously achieved only by electron microscopy and highlight the possibility of high-throughput, multispectral 3D analyses. PMID:19864256

  12. Proposal for a study of computer mapping of terrain using multispectral data from ERTS-A for the Yellowstone National Park test site

    NASA Technical Reports Server (NTRS)

    Smedes, H. W. (Principal Investigator); Root, R. R.; Roller, N. E. G.; Despain, D.

    1978-01-01

    The author has identified the following significant results. A terrain map of Yellowstone National Park showed plant community types and other classes of ground cover in what is basically a wild land. The map comprised 12 classes, six of which were mapped with accuracies of 70 to 95%. The remaining six classes had spectral reflectances that overlapped appreciably, and hence those were mapped less accurately. Techniques were devised for quantitatively comparing the recognition map of the park with control data acquired from ground inspection and from analysis of side-looking radar images, a thermal IR mosaic, and IR aerial photos at several scales. Quantitative analyses were made in ten 40 sq km test areas. The comparisons were performed by computer, with the final results displayed on line-printer output. Forested areas were mapped by computer using ERTS data for less than 1/4 the cost of the conventional forest mapping technique for topographic base maps.

  13. Direct measurement of local material properties within living embryonic tissues

    NASA Astrophysics Data System (ADS)

    Serwane, Friedhelm; Mongera, Alessandro; Rowghanian, Payam; Kealhofer, David; Lucio, Adam; Hockenbery, Zachary; Campàs, Otger

    The shaping of biological matter requires the control of its mechanical properties across multiple scales, ranging from single molecules to cells and tissues. Despite their relevance, measurements of the mechanical properties of sub-cellular, cellular and supra-cellular structures within living embryos pose severe challenges to existing techniques. We have developed a technique that uses magnetic droplets to measure the mechanical properties of complex fluids, including in situ and in vivo measurements within living embryos, across multiple length and time scales. By actuating the droplets with magnetic fields and recording their deformation we probe the local mechanical properties at any chosen length scale by varying the droplet diameter. We use the technique to determine the subcellular mechanics of individual blastomeres of zebrafish embryos, and bridge the gap to the tissue scale by measuring the local viscosity and elasticity of zebrafish embryonic tissues. Using this technique, we show that embryonic zebrafish tissues are viscoelastic with a fluid-like behavior at long time scales. This technique will enable mechanobiology and mechano-transduction studies in vivo, including the study of diseases correlated with tissue stiffness, such as cancer.

  14. Earthquake Hazard Class Mapping by Parcel in Las Vegas Valley

    NASA Astrophysics Data System (ADS)

    Pancha, A.; Pullammanappallil, S.; Louie, J. N.; Hellmer, W. K.

    2011-12-01

    Clark County, Nevada completed the very first effort in the United States to map earthquake hazard class systematically through an entire urban area. The map is used in development and disaster response planning, in addition to its direct use for building code implementation and enforcement. The County contracted with the Nevada System of Higher Education to classify about 500 square miles including urban Las Vegas Valley, and exurban areas considered for future development. The Parcel Map includes over 10,000 surface-wave array measurements accomplished over three years using Optim's SeisOpt® ReMi measurement and processing techniques adapted for large-scale data. These array measurements classify individual parcels on the NEHRP hazard scale. Parallel "blind" tests were conducted at 93 randomly selected sites. The rms difference between the Vs30 values yielded by the blind analyses and by the Parcel Map analyses is 4.92%. Only six of the blind-test sites showed a difference with a magnitude greater than 10%. We describe a "C+" Class for sites with Class B average velocities but soft surface soil. The measured Parcel Map shows a clearly definable C+ to C boundary on the west side of the Valley. The C to D boundary is much more complex. Using the parcel map in computing shaking in the Valley for scenario earthquakes is crucial for obtaining realistic predictions of ground motions.

  15. Varved sediments from Lake Czechowskie (Poland) reveal gradual increase in Atlantic influence during the Holocene

    NASA Astrophysics Data System (ADS)

    Ott, Florian; Brauer, Achim; Słowiński, Michał; Wulf, Sabine; Putyrskaya, Victoria; Plessen, Birgit; Błaszkiewicz, Miroslaw

    2015-04-01

    Detailed micro-facies and geochemical analyses have been carried out for the predominantly varved Holocene sediment record of Lake Czechowskie (north-central Poland). The chronology has been established by a multiple dating approach comprising varve counting, AMS 14C dating, 137Cs activity concentration measurements and tephrochronology. The combination of independent dating techniques revealed well-constrained time scales even in phases lacking annual laminations and allows reliable high-resolution archive synchronization. Quantitative (varve thickness variations) and qualitative (sublayer structure) varve parameters as well as geochemical composition have been obtained to gain a comprehensive view of climatic and environmental evolution during the last 11,500 years in northern Poland. Five major sedimentological changes have been identified, encompassing transitions from varved to non-varved sediments (and vice versa) at 10,100 and 7,300 cal a BP, respectively, changes in general varve pattern at 6,500 and 4,200 cal a BP, and a distinct increase in varve thickness accompanied by increased annual variability since 2,800 cal a BP. These changes reflect large-scale reorganization of the climate system throughout the Holocene with increasing influences of the North Atlantic climate system in Poland. Moreover, the observed changes suggest different thresholds and trigger mechanisms over the investigated time period. This study is a contribution to the Virtual Institute of Integrated Climate and Landscape Evolution Analyses - ICLEA - of the Helmholtz Association, grant number VH-VI-415.

  16. Plant Chlorophyll fluorescence: active and passive measurements at canopy and leaf scales with different nitrogen treatments

    USDA-ARS?s Scientific Manuscript database

    Most studies assessing chlorophyll fluorescence (ChlF) have examined leaf responses to environmental stress conditions using active techniques. Alternatively, passive techniques are able to measure ChlF at both leaf and canopy scales. However, although the measurement principles of both techniques a...

  17. Nitrogen fluorescence in air for observing extensive air showers

    NASA Astrophysics Data System (ADS)

    Keilhauer, B.; Bohacova, M.; Fraga, M.; Matthews, J.; Sakaki, N.; Tameda, Y.; Tsunesada, Y.; Ulrich, A.

    2013-06-01

    Extensive air showers initiate fluorescence emissions from nitrogen molecules in air. The UV light is emitted isotropically and can be used for observing the longitudinal development of extensive air showers in the atmosphere over tens of kilometers. This measurement technique is well established, having been exploited for decades by several cosmic ray experiments. However, a fundamental aspect of air shower analyses is the description of the fluorescence emission as a function of varying atmospheric conditions. Differences in fluorescence yield directly affect the energy scale of air shower reconstruction. In order to explore the various details of nitrogen fluorescence emission in air, a few experimental groups have been performing dedicated measurements over the last decade; most of these measurements are now finished. These groups have been discussing their techniques and results in a series of Air Fluorescence Workshops commenced in 2002. At the 8th Air Fluorescence Workshop in 2011, it was suggested to develop a common way of describing nitrogen fluorescence for application to air shower observations. Here, first analyses toward a common treatment of the major dependences of the emission process are presented. Aspects discussed include the contributions at different wavelengths; the dependence on pressure, which decreases with increasing altitude in the atmosphere; the temperature dependence, in particular that of the collisional cross sections of the molecules involved; and collisional de-excitation by water vapor.
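
    The dependences listed above are commonly wrapped into a Stern-Volmer-type quenching formula; the following is a generic sketch in the notation of the air-fluorescence literature, not a parameterisation taken from this record.

```latex
% Y_lambda: yield of the band at wavelength lambda; p: air pressure
% (decreasing with altitude); e: water-vapour partial pressure;
% p': reference quenching pressures.
\[
  Y_{\lambda}(p, T, e)
    = \frac{Y_{\lambda}^{0}}
           {1 + \dfrac{p - e}{p'_{\lambda,\mathrm{air}}(T)}
              + \dfrac{e}{p'_{\lambda,\mathrm{H_2O}}(T)}},
  \qquad
  p'_{\lambda}(T) \propto \frac{\sqrt{T}}{\sigma_{\lambda}(T)}
\]
% The temperature dependence of p' enters through the mean molecular speed
% (proportional to sqrt(T)) and the collisional cross section sigma_lambda(T).
```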

  18. Caught in the middle with multiple displacement amplification: the myth of pooling for avoiding multiple displacement amplification bias in a metagenome.

    PubMed

    Marine, Rachel; McCarren, Coleen; Vorrasane, Vansay; Nasko, Dan; Crowgey, Erin; Polson, Shawn W; Wommack, K Eric

    2014-01-30

    Shotgun metagenomics has become an important tool for investigating the ecology of microorganisms. Underlying these investigations is the assumption that metagenome sequence data accurately estimates the census of microbial populations. Multiple displacement amplification (MDA) of microbial community DNA is often used in cases where it is difficult to obtain enough DNA for sequencing; however, MDA can result in amplification biases that may impact subsequent estimates of population census from metagenome data. Some have posited that pooling replicate MDA reactions negates these biases and restores the accuracy of population analyses. This assumption has not been empirically tested. Using mock viral communities, we examined the influence of pooling on population-scale analyses. In pooled and single reaction MDA treatments, sequence coverage of viral populations was highly variable and coverage patterns across viral genomes were nearly identical, indicating that initial priming biases were reproducible and that pooling did not alleviate biases. In contrast, control unamplified sequence libraries showed relatively even coverage across phage genomes. MDA should be avoided for metagenomic investigations that require quantitative estimates of microbial taxa and gene functional groups. While MDA is an indispensable technique in applications such as single-cell genomics, amplification biases cannot be overcome by combining replicate MDA reactions. Alternative library preparation techniques should be utilized for quantitative microbial ecology studies utilizing metagenomic sequencing approaches.

  19. On the Development and Use of Four-Dimensional Data Assimilation in Limited-Area Mesoscale Models Used for Meteorological Analysis.

    NASA Astrophysics Data System (ADS)

    Stauffer, David R.

    1990-01-01

    The application of dynamic relationships to the analysis problem for the atmosphere is extended to use a full-physics limited-area mesoscale model as the dynamic constraint. A four-dimensional data assimilation (FDDA) scheme based on Newtonian relaxation or "nudging" is developed and evaluated in the Penn State/National Center for Atmospheric Research (PSU/NCAR) mesoscale model, which is used here as a dynamic-analysis tool. The thesis is to determine what assimilation strategies and what meteorological fields (mass, wind or both) have the greatest positive impact on the 72-h numerical simulations (dynamic analyses) of two mid-latitude, real-data cases. The basic FDDA methodology is tested in a 10-layer version of the model with a bulk-aerodynamic (single-layer) representation of the planetary boundary layer (PBL), and refined in a 15-layer version of the model by considering the effects of data assimilation within a multi-layer PBL scheme. As designed, the model solution can be relaxed toward either gridded analyses ("analysis nudging"), or toward the actual observations ("obs nudging"). The data used for assimilation include standard 12-hourly rawinsonde data, and also 3-hourly meso-alpha-scale surface data which are applied within the model's multi-layer PBL. Continuous assimilation of standard-resolution rawinsonde data into the 10-layer model successfully reduced large-scale amplitude and phase errors while the model realistically simulated mesoscale structures poorly defined or absent in the rawinsonde analyses and in the model simulations without FDDA. Nudging the model fields directly toward the rawinsonde observations generally produced results comparable to nudging toward gridded analyses. This obs-nudging technique is especially attractive for the assimilation of high-frequency, asynoptic data. Assimilation of 3-hourly surface wind and moisture data into the 15-layer FDDA system was most effective for improving the simulated precipitation fields because a significant portion of the vertically integrated moisture convergence often occurs in the PBL. Overall, the best dynamic analyses for the PBL, mass, wind and precipitation fields were obtained by nudging toward analyses of rawinsonde wind, temperature and moisture (the latter uses a weaker nudging coefficient) above the model PBL and toward analyses of surface-layer wind and moisture within the model PBL.
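
    The relaxation term at the heart of the scheme can be written generically as follows (a sketch after the standard nudging literature, not the exact formulation of the thesis).

```latex
% alpha: a model variable (wind, temperature, moisture); F: all model
% dynamics and physics; alpha-hat: the gridded analysis or observation
% being assimilated.
\[
  \frac{\partial \alpha}{\partial t}
    = F(\alpha, \mathbf{x}, t)
    + G_{\alpha}\, W(\mathbf{x}, t)\,\bigl(\hat{\alpha} - \alpha\bigr)
\]
% G_alpha: nudging coefficient (s^-1), chosen weaker for moisture as noted
% above; W: spatial/temporal/quality weighting that confines the correction
% to the neighbourhood of the data.
```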

  20. Microfluidic desalination techniques and their potential applications.

    PubMed

    Roelofs, S H; van den Berg, A; Odijk, M

    2015-09-07

    In this review we discuss recent developments in the emerging research field of miniaturized desalination. Traditionally desalination is performed to convert salt water into potable water, and research is focused on improving performance of large-scale desalination plants. Microfluidic desalination offers several new opportunities in comparison to macro-scale desalination, such as providing a platform to increase fundamental knowledge of ion transport on the nano- and microfluidic scale, and new microfluidic sample preparation methods. This approach has also led to the development of new desalination techniques, based on micro/nanofluidic ion-transport phenomena, which are potential candidates for up-scaling to (portable) drinking water devices. This review assesses microfluidic desalination techniques in terms of their applications and is meant to contribute to further implementation of microfluidic desalination techniques in the lab-on-chip community.

  1. Incorporating Scale-Dependent Fracture Stiffness for Improved Reservoir Performance Prediction

    NASA Astrophysics Data System (ADS)

    Crawford, B. R.; Tsenn, M. C.; Homburg, J. M.; Stehle, R. C.; Freysteinson, J. A.; Reese, W. C.

    2017-12-01

    We present a novel technique for predicting dynamic fracture network response to production-driven changes in effective stress, with the potential for optimizing depletion planning and improving recovery prediction in stress-sensitive naturally fractured reservoirs. A key component of the method involves laboratory geomechanics testing of single fractures in order to develop a unique scaling relationship between fracture normal stiffness and initial mechanical aperture. Details of the workflow are as follows: tensile, opening mode fractures are created in a variety of low matrix permeability rocks with initial, unstressed apertures in the micrometer to millimeter range, as determined from image analyses of X-ray CT scans; subsequent hydrostatic compression of these fractured samples with synchronous radial strain and flow measurement indicates that both mechanical and hydraulic aperture reduction varies linearly with the natural logarithm of effective normal stress; these stress-sensitive single-fracture laboratory observations are then upscaled to networks with fracture populations displaying frequency-length and length-aperture scaling laws commonly exhibited by natural fracture arrays; functional relationships between reservoir pressure reduction and fracture network porosity, compressibility and directional permeabilities as generated by such discrete fracture network modeling are then exported to the reservoir simulator for improved naturally fractured reservoir performance prediction.
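
    A back-of-the-envelope sketch of the reported single-fracture relation (aperture closing linearly with the logarithm of effective normal stress) and its permeability consequence through the cubic law; all parameter values are illustrative assumptions, not the paper's data.

```python
# Aperture vs effective normal stress: e(sigma') = e0 - m * ln(sigma'/sigma_ref).
import numpy as np

e0 = 250.0          # initial mechanical aperture at reference stress (micrometres)
m = 35.0            # closure per log-unit of stress (micrometres), assumed
sigma_ref = 1.0     # reference effective normal stress (MPa)

def aperture(sigma_eff_mpa):
    return np.maximum(e0 - m * np.log(sigma_eff_mpa / sigma_ref), 0.0)

for s in (1, 5, 10, 20, 40):
    # cubic law: fracture transmissivity scales with aperture^3, so modest
    # closure produces strong permeability loss during depletion
    print(f"{s:>3} MPa: aperture {aperture(s):6.1f} um, "
          f"relative conductivity {(aperture(s) / e0) ** 3:.3f}")
```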

  2. Non-lethal sampling of walleye for stable isotope analysis: a comparison of three tissues

    USGS Publications Warehouse

    Chipps, Steven R.; VanDeHey, J.A.; Fincel, M.J.

    2012-01-01

    Stable isotope analysis of fishes is often performed using muscle or organ tissues that require sacrificing animals. Non-lethal sampling provides an alternative for evaluating isotopic composition for species of concern or individuals of exceptional value. Stable isotope values of white muscle (lethal) were compared with those from fins and scales (non-lethal) in walleye, Sander vitreus (Mitchill), from multiple systems, size classes and across a range of isotopic values. Isotopic variability was also compared among populations to determine the potential of non-lethal tissues for diet-variability analyses. Muscle-derived isotope values were enriched compared with fins and depleted relative to scales. A split-sample validation technique and linear regression found that isotopic composition of walleye fins and scales was significantly related to that in muscle tissue for both δ13C and δ15N (r2 = 0.79–0.93). However, isotopic variability was significantly different between tissue types in two of six populations for δ15N and three of six populations for δ13C. Although species and population specific, these findings indicate that isotopic measures obtained from non-lethal tissues are indicative of those obtained from muscle.

  3. Does the Nile reflect solar variability?

    NASA Astrophysics Data System (ADS)

    Ruzmaikin, Alexander; Feynman, Joan; Yung, Yuk

    Historical records of the Nile water level provide a unique opportunity to investigate the possibility that solar variability influences the Earth's climate. Particularly important are the annual records of the water level, which are uninterrupted for the years 622-1470 A.D. These records are non-stationary, so that standard spectral analyses cannot adequately characterize them. Here the Empirical Mode Decomposition technique, which is designed to deal with non-stationary, nonlinear time series, becomes useful. It allows the identification of two characteristic time scales in the water level data that can be linked to solar variability: the 88-year period and a time scale of about 200 years. These time scales are also present in the concurrent aurora data. Auroras are driven by coronal mass ejections, and the rate of auroras is an excellent proxy for solar variability. Analysis of auroral data contemporaneous with the Nile data shows peaks at 88 years and about 200 years. This suggests a physical link between solar variability and the low-frequency variations of the Nile water level. The link involves the influence of solar variability on the Northern Annular Mode of atmospheric variability and its North Atlantic and Indian Ocean patterns that affect rainfall over Eastern Equatorial Africa, where the Nile originates.
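
    A minimal sketch of the decomposition step, assuming the PyEMD package (pip install EMD-signal); the synthetic series stands in for the 849-year record, with the two periods noted above built in.

```python
# Empirical Mode Decomposition of a toy non-stationary annual series.
import numpy as np
from PyEMD import EMD

years = np.arange(622, 1471)                        # 622-1470 A.D.
t = (years - years[0]) / 88.0                       # time in 88-year cycles
signal = (np.sin(2 * np.pi * t)                     # 88-year component
          + 0.5 * np.sin(2 * np.pi * t * 88 / 200)  # ~200-year component
          + 0.2 * np.random.default_rng(5).normal(size=years.size))

imfs = EMD()(signal)    # intrinsic mode functions, ordered fast to slow
print(f"{len(imfs)} IMFs; characteristic periods follow from zero-crossing counts")
```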

  4. Patterns of genetic diversity in the polymorphic ground snake (Sonora semiannulata).

    PubMed

    Cox, Christian L; Chippindale, Paul T

    2014-08-01

    We evaluated the genetic diversity of a snake species with color polymorphism to understand the evolutionary processes that drive genetic structure across a large geographic region. Specifically, we analyzed the genetic structure of the highly polymorphic ground snake, Sonora semiannulata, (1) among populations, (2) among color morphs, and (3) at regional and local spatial scales, using an amplified fragment length polymorphism dataset and multiple population genetic analyses, including FST-based and clustering analytical techniques. Based upon these methods, we found that there was moderate to low genetic structure among populations. However, this diversity was not associated with geographic locality at either spatial scale. Similarly, we found no evidence for genetic divergence among color morphs at either spatial scale. These results suggest that despite dramatic color polymorphism, this phenotypic diversity is not a major driver of genetic diversity within or among populations of ground snakes. We suggest that there are two mechanisms that could explain existing genetic diversity in ground snakes: recent range expansion from a genetically diverse founder population, and current or recent gene flow among populations. Our findings have further implications for the types of color polymorphism that may generate genetic diversity in snakes.

  5. Communication about patient pain in primary care: development of the Physician-Patient Communication about Pain scale (PCAP).

    PubMed

    Haskard-Zolnierek, Kelly B

    2012-01-01

    This paper describes the development of the 47-item Physician-Patient Communication about Pain (PCAP) scale for use with audiotaped medical visit interactions. Patient pain was assessed with the Medical Outcomes Study SF-36 Bodily Pain Scale. Four raters assessed 181 audiotaped patient interactions with 68 physicians. Descriptive statistics of the PCAP items were computed. Principal components analyses with 20 scale items were used to reduce the scale to composite variables for analysis. Validity was assessed by (1) comparing PCAP composite scores for patients with high versus low pain and (2) correlating the PCAP composites with a separate communication rating scale. Principal components analyses yielded four physician and five patient communication composites (mean alpha = .77). Some evidence for concurrent validity was provided (5 of 18 correlations with the validation rating scale were significant). Paired-sample t tests showed significant differences for 4 patient PCAP composites, showing that the PCAP scale discriminates between the communication of patients with high versus low pain. The PCAP scale thus shows partial evidence of reliability and of two forms of validity. More research with this scale (developing more reliable and valid composites) is needed to extend these preliminary findings before the scale is applicable for use in practice. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  6. [Results of revision after failed surgical treatment for traumatic anterior shoulder instability].

    PubMed

    Lópiz-Morales, Y; Alcobe-Bonilla, J; García-Fernández, C; Francés-Borrego, A; Otero-Fernández, R; Marco-Martínez, F

    2013-01-01

    Persistent or recurrent glenohumeral instability after a previous operative stabilization can be a complex problem. Our aim is to establish the incidence of recurrence and of its revision surgery, to analyse the functional results of revision instability surgery, and to define surgical protocols for performing it. A retrospective analysis was conducted on 16 patients with recurrent instability out of 164 patients operated on between 1999 and 2011. The mean follow-up was 57 months and the mean age was 29 years. To evaluate functional outcome we employed the Constant, Rowe and UCLA scores and the visual analogue scale. Of the 12 patients in whom the initial arthroscopic surgery failed, 6 underwent an arthroscopic antero-inferior labrum repair technique, 4 open labrum repair techniques, and 2 coracoid transfer. The two recurrences after open surgery underwent coracoid transfer. Results of the Constant score were excellent or good in 64% of patients. Surgical revision of instability is complex essentially for two reasons: the difficulty in recognising the problem, and the technical demand (a greater variety of increasingly complex techniques). Copyright © 2012 SECOT. Published by Elsevier Espana. All rights reserved.

  7. Binary optical filters for scale invariant pattern recognition

    NASA Technical Reports Server (NTRS)

    Reid, Max B.; Downie, John D.; Hine, Butler P.

    1992-01-01

    Binary synthetic discriminant function (BSDF) optical filters which are invariant to scale changes in the target object of more than 50 percent are demonstrated in simulation and experiment. Efficient databases of scale invariant BSDF filters can be designed which discriminate between two very similar objects at any view scaled over a factor of 2 or more. The BSDF technique has considerable advantages over other methods for achieving scale invariant object recognition, as it also allows determination of the object's scale. In addition to scale, the technique can be used to design recognition systems invariant to other geometric distortions.

  8. Clouds and the Earth's Radiant Energy System (CERES) algorithm theoretical basis document. volume 2; Geolocation, calibration, and ERBE-like analyses (subsystems 1-3)

    NASA Technical Reports Server (NTRS)

    Wielicki, B. A. (Principal Investigator); Barkstrom, B. R. (Principal Investigator); Charlock, T. P.; Baum, B. A.; Green, R. N.; Minnis, P.; Smith, G. L.; Coakley, J. A.; Randall, D. R.; Lee, R. B., III

    1995-01-01

    The theoretical bases for the Release 1 algorithms that will be used to process satellite data for investigation of the Clouds and Earth's Radiant Energy System (CERES) are described. The architecture for software implementation of the methodologies is outlined. Volume 2 details the techniques used to geolocate and calibrate the CERES scanning radiometer measurements of shortwave and longwave radiance, to invert the radiances to top-of-the-atmosphere (TOA) and surface fluxes following the Earth Radiation Budget Experiment (ERBE) approach, and to average the fluxes over various time and spatial scales to produce an ERBE-like product. Spacecraft ephemeris and sensor telemetry are used with calibration coefficients to produce a chronologically ordered data product called bidirectional scan (BDS) radiances. A spatially organized instrument Earth scan product is developed for the cloud-processing subsystem. The ERBE-like inversion subsystem converts BDS radiances to unfiltered instantaneous TOA and surface fluxes. The TOA fluxes are determined by using established ERBE techniques. Hourly TOA fluxes are computed from the instantaneous values by using ERBE methods. Hourly surface fluxes are estimated from TOA fluxes by using simple parameterizations based on recent research. The averaging process produces daily, monthly-hourly, and monthly means of TOA and surface fluxes at various scales. This product provides a continuation of the ERBE record.

  9. Exploratory Studies in Generalized Predictive Control for Active Aeroelastic Control of Tiltrotor Aircraft

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.; Juang, Jer-Nan; Bennett, Richard L.

    2000-01-01

    The Aeroelasticity Branch at NASA Langley Research Center has a long and substantive history of tiltrotor aeroelastic research. That research has included a broad range of experimental investigations in the Langley Transonic Dynamics Tunnel (TDT) using a variety of scale models and the development of essential analyses. Since 1994, the tiltrotor research program has been using a 1/5-scale, semispan aeroelastic model of the V-22 designed and built by Bell Helicopter Textron Inc. (BHTI) in 1981. That model has been refurbished to form a tiltrotor research testbed called the Wing and Rotor Aeroelastic Test System (WRATS) for use in the TDT. In collaboration with BHTI, studies under the current tiltrotor research program are focused on aeroelastic technology areas having the potential for enhancing the commercial and military viability of tiltrotor aircraft. Among the areas being addressed, considerable emphasis is being directed to the evaluation of modern adaptive multi-input multi-output (MIMO) control techniques for active stability augmentation and vibration control of tiltrotor aircraft. As part of this investigation, a predictive control technique known as Generalized Predictive Control (GPC) is being studied to assess its potential for actively controlling the swashplate of tiltrotor aircraft to enhance aeroelastic stability in both helicopter and airplane modes of flight. This paper summarizes the exploratory numerical and experimental studies that were conducted as part of that investigation.
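
    Predictive controllers of the GPC family minimize a quadratic cost over a receding prediction horizon, J = ||y - r||^2 + lambda*||du||^2. The sketch below, in Python with a hypothetical first-order plant, shows the unconstrained predictive-control gain computation for a step-response model; it is a generic illustration of the receding-horizon idea, not the specific GPC formulation or swashplate model used in this study.

```python
import numpy as np

def predictive_control_gain(step_response, horizon, n_moves, lam):
    """Unconstrained predictive-control gain for a step-response model.

    Builds the dynamic (Toeplitz) matrix G mapping future control moves to
    predicted outputs and returns K such that du = K @ (reference - free
    response). `lam` penalizes control effort, as in the cost above.
    """
    G = np.zeros((horizon, n_moves))
    for i in range(horizon):
        for j in range(n_moves):
            if i - j >= 0:
                G[i, j] = step_response[i - j]
    return np.linalg.solve(G.T @ G + lam * np.eye(n_moves), G.T)

# Toy first-order plant: step response s_k = 1 - 0.8**k for k = 1..20.
s = 1.0 - 0.8 ** np.arange(1, 21)
K = predictive_control_gain(s, horizon=20, n_moves=5, lam=0.1)
error = np.ones(20)                 # unit tracking error over the horizon
du = K @ error
print("first control move:", du[0])  # only du[0] is applied each step
```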

  10. A novel validation and calibration method for motion capture systems based on micro-triangulation.

    PubMed

    Nagymáté, Gergely; Tuchband, Tamás; Kiss, Rita M

    2018-06-06

    Motion capture systems are widely used to measure human kinematics. Nevertheless, users must consider system errors when evaluating their results. Most validation techniques for these systems are based on relative distance and displacement measurements. In contrast, our study aimed to analyse the absolute volume accuracy of optical motion capture systems by means of engineering surveying reference measurement of the marker coordinates (uncertainty: 0.75 mm). The method is exemplified on an 18-camera OptiTrack Flex13 motion capture system. The absolute accuracy was defined by the root mean square error (RMSE) between the coordinates measured by the camera system and by engineering surveying (micro-triangulation). The original RMSE of 1.82 mm, due mainly to scaling error, was reduced to 0.77 mm, while the correlation of the errors with distance from the origin fell from 0.855 to 0.209. A simpler but less accurate absolute accuracy compensation method, using a tape measure over large distances, was also tested; it resulted in scaling compensation similar to that of the surveying method or of direct wand-size compensation by a high-precision 3D scanner. The presented validation methods can be less precise in some respects than previous techniques, but they address an error type that has not been, and cannot be, studied with the previous validation methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
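
    To make the two quantities concrete: the sketch below computes the RMSE between camera-measured and surveyed marker coordinates and then removes a pure scaling error with a least-squares scale factor. Python with synthetic stand-in data; the 0.1% scale error and noise level are illustrative assumptions, not values from the study.

```python
import numpy as np

def rmse(measured, reference):
    """Root mean square error between two (n, 3) marker coordinate sets."""
    return np.sqrt(np.mean(np.sum((measured - reference) ** 2, axis=1)))

def best_scale(measured, reference):
    """Least-squares scale factor about the origin mapping the measured
    coordinates onto the reference (a pure-scaling error model)."""
    return np.sum(measured * reference) / np.sum(measured ** 2)

# Hypothetical data: reference coordinates from surveying, plus measurements
# carrying a 0.1% scaling error and millimetre-level noise.
rng = np.random.default_rng(1)
ref = rng.uniform(-2000, 2000, size=(100, 3))           # mm
meas = 1.001 * ref + rng.normal(0, 0.5, size=ref.shape)
print("raw RMSE (mm):     ", rmse(meas, ref))
print("rescaled RMSE (mm):", rmse(best_scale(meas, ref) * meas, ref))
```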

  11. An evaluation of semi-automated methods for collecting ecosystem-level data in temperate marine systems.

    PubMed

    Griffin, Kingsley J; Hedge, Luke H; González-Rivero, Manuel; Hoegh-Guldberg, Ove I; Johnston, Emma L

    2017-07-01

    Historically, marine ecologists have lacked efficient tools that are capable of capturing detailed species distribution data over large areas. Emerging technologies such as high-resolution imaging and associated machine-learning image-scoring software are providing new tools to map species over large areas in the ocean. Here, we combine a novel diver propulsion vehicle (DPV) imaging system with free-to-use machine-learning software to semi-automatically generate dense and widespread abundance records of a habitat-forming alga over ~5,000 m² of temperate reef. We employ replicable spatial techniques to test the effectiveness of traditional diver-based sampling, and better understand the distribution and spatial arrangement of one key algal species. We found that the effectiveness of a traditional survey depended on the level of spatial structuring, and generally 10-20 transects (50 × 1 m) were required to obtain reliable results. This represents 2-20 times greater replication than has been collected in previous studies. Furthermore, we demonstrate the usefulness of fine-resolution distribution modeling for understanding patterns in canopy algae cover at multiple spatial scales, and discuss applications to other marine habitats. Our analyses demonstrate that semi-automated methods of data gathering and processing provide more accurate results than traditional methods for describing habitat structure at seascape scales, and therefore represent vastly improved techniques for understanding and managing marine seascapes.

  12. Phase-relationships between scales in the perturbed turbulent boundary layer

    NASA Astrophysics Data System (ADS)

    Jacobi, I.; McKeon, B. J.

    2017-12-01

    The phase-relationship between large-scale motions and small-scale fluctuations in a non-equilibrium turbulent boundary layer was investigated. A zero-pressure-gradient flat plate turbulent boundary layer was perturbed by a short array of two-dimensional roughness elements, both statically and under dynamic actuation. Within the compound dynamic perturbation, the forcing generated a synthetic very-large-scale motion (VLSM) within the flow. The flow was decomposed by phase-locking the flow measurements to the roughness forcing, and the phase-relationship between the synthetic VLSM and the remaining fluctuating scales was explored by correlation techniques. The general relationship between large- and small-scale motions in the perturbed flow, without phase-locking, was also examined. The synthetic large scale cohered with smaller scales in the flow via a phase-relationship that is similar to that of natural large scales in an unperturbed flow, but with a much stronger organizing effect. Cospectral techniques were employed to describe the physical implications of the perturbation on the relative orientation of large- and small-scale structures in the flow. The correlation and cospectral techniques provide tools for designing more efficient control strategies that can indirectly control small-scale motions via the large scales.
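
    One standard way to quantify such a scale phase-relationship is to band-pass the signal into large scales, take the envelope of the small-scale residual, and read the phase angle of their cross-spectrum. A minimal Python sketch with synthetic data follows; the sampling rate, filter cutoff, and 2 Hz forcing frequency are illustrative assumptions rather than parameters of the experiment.

```python
import numpy as np
from scipy.signal import butter, csd, filtfilt, hilbert

fs = 1000.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(2)

# Hypothetical velocity signal: a large-scale oscillation whose phase
# modulates the amplitude of small-scale fluctuations.
large = np.sin(2 * np.pi * 2.0 * t)
small = (1 + 0.5 * large) * rng.normal(size=t.size)
u = large + 0.2 * small

# Isolate the large scales, then take the envelope of the residual.
b, a = butter(4, 10.0 / (fs / 2), btype="low")
u_large = filtfilt(b, a, u)
envelope = np.abs(hilbert(u - u_large))

# Cross-spectrum between large scales and the small-scale envelope; its
# phase angle at the forcing frequency gives the scale phase relation.
f, Pxy = csd(u_large, envelope, fs=fs, nperseg=2048)
i = np.argmin(np.abs(f - 2.0))
print(f"phase at 2 Hz: {np.angle(Pxy[i]):.2f} rad")
```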

  13. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses

    PubMed Central

    Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy

    2015-01-01

    Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
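
    The consistency fix described here is operationally simple: each bin's re-estimated gene tree is weighted by the number of genes in the bin before the summary method is run. A minimal sketch of that expansion step, in Python with hypothetical newick strings and bin sizes:

```python
from typing import Dict, List

def weighted_binning_input(bin_trees: Dict[str, str],
                           bin_sizes: Dict[str, int]) -> List[str]:
    """Replicate each bin's re-estimated gene tree by its bin size.

    Weighting the trees by bin size is what makes statistical binning
    statistically consistent under the multi-species coalescent; a summary
    method (e.g., MP-EST) is then run on the expanded tree list.
    """
    expanded = []
    for bin_id, tree in bin_trees.items():
        expanded.extend([tree] * bin_sizes[bin_id])
    return expanded

# Hypothetical bins: newick strings with the number of genes in each bin.
trees = {"bin1": "((A,B),(C,D));", "bin2": "((A,C),(B,D));"}
sizes = {"bin1": 5, "bin2": 2}
print(weighted_binning_input(trees, sizes))  # bin1's tree appears 5 times
```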

  14. Analysis of spatial and temporal rainfall trends in Sicily during the 1921-2012 period

    NASA Astrophysics Data System (ADS)

    Liuzzo, Lorena; Bono, Enrico; Sammartano, Vincenzo; Freni, Gabriele

    2016-10-01

    Precipitation patterns worldwide are changing under the effects of global warming. The impacts of these changes could dramatically affect the hydrological cycle and, consequently, the availability of water resources. In order to improve the quality and reliability of forecasting models, it is important to analyse historical precipitation data to account for possible future changes. For these reasons, a large number of studies have recently been carried out with the aim of investigating the existence of statistically significant trends in precipitation at different spatial and temporal scales. In this paper, the existence of statistically significant trends in rainfall was investigated using observational datasets from 245 rain gauges over Sicily (Italy) for the 1921-2012 period. Annual, seasonal and monthly time series were examined using the Mann-Kendall non-parametric statistical test to detect statistically significant trends at local and regional scales, and their significance levels were assessed. Prior to the application of the Mann-Kendall test, the historical dataset was completed using a geostatistical spatial interpolation technique, residual ordinary kriging, and then processed to remove the influence of serial correlation on the test results by applying trend-free pre-whitening. Once the trends at each site were identified, the spatial patterns of the detected trends were examined using spatial interpolation techniques. Furthermore, focusing on the more recent period from 1981 to 2012, the trend analysis was repeated with the aim of detecting short-term trends or possible changes in the direction of the trends. Finally, the effect of climate change on the seasonal distribution of rainfall during the year was investigated by analysing the trend in the precipitation concentration index. The application of the Mann-Kendall test to the rainfall data provided evidence of a general decrease in precipitation in Sicily during the 1921-2012 period. Downward trends frequently occurred during the autumn and winter months. However, an increase in total annual precipitation was detected during the period from 1981 to 2012.
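
    For readers unfamiliar with the workflow, the sketch below implements the Mann-Kendall test (without tie correction) together with trend-free pre-whitening in the form popularized by Yue et al.; it is a simplified illustration on a synthetic rainfall series, not the authors' code, and the trend and noise magnitudes are assumed.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction). Returns (S, Z, p)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    s = np.sum([np.sign(x[j] - x[i]) for i in range(n - 1)
                for j in range(i + 1, n)])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * (1 - norm.cdf(abs(z)))

def tfpw(x):
    """Trend-free pre-whitening: remove Sen's slope, strip the lag-1
    autocorrelation, then restore the trend before testing."""
    x = np.asarray(x, dtype=float)
    n = x.size
    t = np.arange(n)
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1)
              for j in range(i + 1, n)]
    beta = np.median(slopes)                  # Sen's slope estimator
    detrended = x - beta * t
    r1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]
    whitened = detrended[1:] - r1 * detrended[:-1]
    return whitened + beta * t[1:]

# Hypothetical annual rainfall series (mm) with a weak downward trend.
rng = np.random.default_rng(3)
rain = 700 - 1.5 * np.arange(92) + rng.normal(0, 60, 92)
s, z, p = mann_kendall(tfpw(rain))
print(f"Z = {z:.2f}, p = {p:.4f}")
```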

  15. In situ quantification of Br and Cl in minerals and fluid inclusions by LA-ICP-MS: a powerful tool to identify fluid sources

    USGS Publications Warehouse

    Hammerli, Johannes; Rusk, Brian; Spandler, Carl; Emsbo, Poul; Oliver, Nicholas H.S.

    2013-01-01

    Bromine and chlorine are important halogens for fluid source identification in the Earth's crust, but until recently we lacked routine analytical techniques to determine the concentration of these elements in situ on a micrometer scale in minerals and fluid inclusions. In this study, we evaluate the potential of in situ Cl and Br measurements by LA-ICP-MS through analysis of a range of scapolite grains with known Cl and Br concentrations. We assess the effects of varying spot sizes and variable plasma energy, and resolve the contribution of polyatomic interferences to Br measurements. Using well-characterised natural scapolite standards, we show that LA-ICP-MS analysis allows measurement of Br and Cl concentrations in scapolite, and fluid inclusions as small as 16 μm in diameter, and potentially in sodalite and a variety of other minerals, such as apatite, biotite, and amphibole. As a demonstration of the accuracy and potential of Cl and Br analyses by LA-ICP-MS, we analysed natural fluid inclusions hosted in sphalerite and compared the results to crush and leach ion chromatography Cl/Br analyses. The limit of detection for Br is ~8 μg g⁻¹, whereas relatively high Cl concentrations (>500 μg g⁻¹) are required for quantification by LA-ICP-MS. In general, our LA-ICP-MS fluid inclusion results agree well with ion chromatography (IC) data. Additionally, combined cathodoluminescence and LA-ICP-MS analyses on natural scapolites within a well-studied regional metamorphic suite in South Australia demonstrate that Cl and Br can be quantified with a ~25 μm resolution in natural minerals. This technique can be applied to resolve a range of hydrothermal geology problems, including determining the origins of ore forming brines and ore deposition processes, mapping metamorphic and hydrothermal fluid provinces and pathways, and constraining the effects of fluid–rock reactions and fluid mixing.

  16. Stuttering, induced fluency, and natural fluency: a hierarchical series of activation likelihood estimation meta-analyses.

    PubMed

    Budde, Kristin S; Barron, Daniel S; Fox, Peter T

    2014-12-01

    Developmental stuttering is a speech disorder most likely due to a heritable form of developmental dysmyelination impairing the function of the speech-motor system. Speech-induced brain-activation patterns in persons who stutter (PWS) are anomalous in various ways; the consistency of these aberrant patterns is a matter of ongoing debate. Here, we present a hierarchical series of coordinate-based meta-analyses addressing this issue. Two tiers of meta-analyses were performed on a 17-paper dataset (202 PWS; 167 fluent controls). Four large-scale (top-tier) meta-analyses were performed, two for each subject group (PWS and controls). These analyses robustly confirmed the regional effects previously postulated as "neural signatures of stuttering" (Brown, Ingham, Ingham, Laird, & Fox, 2005) and extended this designation to additional regions. Two smaller-scale (lower-tier) meta-analyses refined the interpretation of the large-scale analyses: (1) a between-group contrast targeting differences between PWS and controls (stuttering trait); and (2) a within-group contrast (PWS only) of stuttering with induced fluency (stuttering state). Copyright © 2014 Elsevier Inc. All rights reserved.

  17. A modified technique for the preparation of SO2 from sulphates and sulphides for sulphur isotope analyses.

    PubMed

    Han, L; Tanweer, A; Szaran, J; Halas, S

    2002-09-01

    A modified technique for the conversion of sulphates and sulphides to SO2 with a V2O5-SiO2 mixture for sulphur isotopic analyses is described. This technique is more suitable for the routine analysis of large numbers of samples. Modification of the reaction vessel and the use of a manifold inlet system allow up to 24 samples to be analysed every day. The modified technique assures a complete yield of SO2, consistent oxygen isotope composition of the SO2 gas, and reproducibility of δ34S measurements within 0.10 per thousand. It is observed, however, that the oxygen in SO2 produced from sulphides differs in δ18O from that produced from sulphates.

  18. Multiscale mechanisms of nutritionally induced property variation in spider silks

    PubMed Central

    Nobbs, Madeleine; Martens, Penny J.; Tso, I-Min; Chuang, Wei-Tsung; Chang, Chung-Kai; Sheu, Hwo-Shuenn

    2018-01-01

    Variability in spider major ampullate (MA) silk properties at different scales has proven difficult to determine and remains an obstacle to the development of synthetic fibers mimicking MA silk performance. A multitude of techniques may be used to measure multiscale aspects of silk properties. Here we fed five species of Araneoid spider solutions that either contained protein or were protein deprived, and performed silk tensile tests, small- and wide-angle X-ray scattering (SAXS/WAXS), amino acid composition analyses, and silk gene expression analyses, to resolve persistent questions about how nutrient deprivation induces variations in MA silk mechanical properties across scales. Our analyses found that the properties of each spider's silk varied differently in response to variations in their protein intake. We found that changes in the crystalline and non-crystalline nanostructures play specific roles in inducing the property variations we observed. Across treatments, MaSp expression patterns differed in each of the five species. We found that in most species MaSp expression and amino acid composition variations did not conform with our predictions based on a traditional MaSp expression model. In general, changes to the silk's alanine and proline compositions influenced the alignment of the proteins within the silk's amorphous region, which influenced silk extensibility and toughness. Variations in structural alignment in the crystalline and non-crystalline regions influenced ultimate strength independent of genetic expression. Our study provides the deepest insights thus far into the mechanisms of how MA silk properties vary from gene expression to nanostructure formation to fiber mechanics. Such knowledge is imperative for promoting the production of synthetic silk fibers. PMID:29390013

  19. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    USGS Publications Warehouse

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but the sample sizes required to produce reliable estimates have not been determined. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses revealed that increasing the recovery of dead marked individuals may be more valuable than increasing the capture probability of marked individuals. Because of the extraordinary effort that would be required, and the difficulty of attaining reliable estimates, we advise caution should such a mark-recapture effort be initiated. We make recommendations for the techniques that show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  20. Effects of thermal cycling parameters on residual stresses in alumina scales of CoNiCrAlY and NiCoCrAlY bond coats

    DOE PAGES

    Nordhorn, Christian; Mücke, Robert; Unocic, Kinga A.; ...

    2014-08-20

    In this paper, furnace cycling experiments were performed on free-standing high-velocity oxygen-fuel bond coat samples to investigate the effect of material composition, surface texture, and cycling conditions on the average stresses in the formed oxide scales after cooling. The oxide scale thicknesses were determined by SEM image analyses, and information about the stresses was acquired by photo-stimulated luminescence spectroscopy. Additionally, the scale-thickness-dependent stress fields were calculated in finite-element analyses, including approximation functions for the surface roughness derived on the basis of profilometry data. The evolution of the average residual stress as a function of oxide scale thickness was subject to stochastic fluctuations predominantly caused by local scale spallations. In comparison to the supplemental modeling results, thermal stresses due to the mismatch of thermal expansion coefficients are identified as the main contribution to the residual stresses. Finally, the theoretical results emphasize that analyses of spectroscopic data acquired for average stress investigations of alumina scales rely on detailed information about microstructural features.

  1. North American extreme temperature events and related large scale meteorological patterns: A review of statistical methods, dynamics, modeling, and trends

    DOE PAGES

    Grotjahn, Richard; Black, Robert; Leung, Ruby; ...

    2015-05-22

    This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). Methods used to define extreme event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated, so more research is needed to understand the limitations of climate models and improve model skill in simulating extreme temperatures and their associated LSMPs. The paper concludes with unresolved issues and research questions.

  2. Evidence for early neurodegeneration in the cervical cord of patients with primary progressive multiple sclerosis

    PubMed Central

    Schneider, Torben; Solanky, Bhavana S.; Yiannakas, Marios C.; Altmann, Dan R.; Wheeler-Kingshott, Claudia A. M.; Peters, Amy L.; Day, Brian L.; Thompson, Alan J.; Ciccarelli, Olga

    2015-01-01

    Spinal neurodegeneration is an important determinant of disability progression in patients with primary progressive multiple sclerosis. Advanced imaging techniques, such as single-voxel 1H-magnetic resonance spectroscopy and q-space imaging, have increased pathological specificity for neurodegeneration, but are challenging to implement in the spinal cord and have yet to be applied in early primary progressive multiple sclerosis. By combining these imaging techniques with new clinical measures, which reflect spinal cord pathology more closely than conventional clinical tests, we explored the potential for spinal magnetic resonance spectroscopy and q-space imaging to detect early spinal neurodegeneration that may be responsible for clinical disability. Data from 21 patients with primary progressive multiple sclerosis within 6 years of disease onset, and 24 control subjects were analysed. Patients were clinically assessed on grip strength, vibration perception thresholds and postural stability, in addition to the Expanded Disability Status Scale, Nine Hole Peg Test, Timed 25-Foot Walk Test, Multiple Sclerosis Walking Scale-12, and Modified Ashworth Scale. All subjects underwent magnetic resonance spectroscopy and q-space imaging of the cervical cord and conventional brain and spinal magnetic resonance imaging at 3 T. Multivariate analyses and multiple regression models were used to assess the differences in imaging measures between groups and the relationship between magnetic resonance imaging measures and clinical scores, correcting for age, gender, spinal cord cross-sectional area, brain T2 lesion volume, and brain white matter and grey matter volume fractions. Although patients did not show significant cord atrophy when compared with healthy controls, they had significantly lower total N-acetyl-aspartate (mean 4.01 versus 5.31 mmol/l, P = 0.020) and glutamate-glutamine (mean 4.65 versus 5.93 mmol/l, P = 0.043) than controls. Patients showed an increase in q-space imaging-derived indices of perpendicular diffusivity in both the whole cord and major columns compared with controls (P < 0.05 for all indices). Lower total N-acetyl-aspartate was associated with higher disability, as assessed by the Expanded Disability Status Scale (coefficient = −0.41, 0.01 < P < 0.05), Modified Ashworth Scale (coefficient = −3.78, 0.01 < P < 0.05), vibration perception thresholds (coefficient = −4.37, P = 0.021) and postural sway (P < 0.001). Lower glutamate-glutamine predicted increased postural sway (P = 0.017). Increased perpendicular diffusivity in the whole cord and columns was associated with increased scores on the Modified Ashworth Scale, vibration perception thresholds and postural sway (P < 0.05 in all cases). These imaging findings indicate reduced structural integrity of neurons, demyelination, and abnormalities in the glutamatergic pathways in the cervical cord of early primary progressive multiple sclerosis, in the absence of extensive spinal cord atrophy. The observed relationship between imaging measures and disability suggests that early spinal neurodegeneration may underlie clinical impairment, and should be targeted in future clinical trials with neuroprotective agents to prevent the development of progressive disability. PMID:25863355

  3. Semi-supervised Machine Learning for Analysis of Hydrogeochemical Data and Models

    NASA Astrophysics Data System (ADS)

    Vesselinov, Velimir; O'Malley, Daniel; Alexandrov, Boian; Moore, Bryan

    2017-04-01

    Data- and model-based analyses such as uncertainty quantification, sensitivity analysis, and decision support using complex physics models with numerous model parameters typically require a huge number of model evaluations (on the order of 10^6). Furthermore, model simulations of complex physics may require substantial computational time. For example, accounting for simultaneously occurring physical processes such as fluid flow and biogeochemical reactions in a heterogeneous porous medium may require several hours of wall-clock computational time. To address these issues, we have developed a novel methodology for semi-supervised machine learning based on Non-negative Matrix Factorization (NMF) coupled with customized k-means clustering. The algorithm allows for automated, robust Blind Source Separation (BSS) of groundwater types (contamination sources) based on model-free analyses of observed hydrogeochemical data. We have also developed reduced order modeling tools, which couple support vector regression (SVR), genetic algorithms (GA), and artificial and convolutional neural networks (ANN/CNN). SVR is applied to predict the model behavior within prior uncertainty ranges associated with the model parameters. ANN and CNN procedures are applied to upscale heterogeneity of the porous medium. In the upscaling process, fine-scale high-resolution models of heterogeneity are applied to inform coarse-resolution models, which have improved computational efficiency while capturing the impact of fine-scale effects at the coarse scale of interest. These techniques are tested independently on a series of synthetic problems. We also present a decision analysis related to contaminant remediation, where the developed reduced order models are applied to reproduce groundwater flow and contaminant transport in a synthetic heterogeneous aquifer. The tools are coded in Julia and are a part of the MADS high-performance computational framework (https://github.com/madsjulia/Mads.jl).
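
    The NMF-plus-clustering idea can be prototyped in a few lines. The sketch below uses Python's scikit-learn as a stand-in (the actual tools are part of the Julia MADS framework): NMF is run from multiple random starts on a synthetic non-negative mixing of three hypothetical groundwater end-members, and k-means clustering of the pooled components indicates how robustly the sources separate.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

# Hypothetical data: 200 wells x 10 geochemical observables, generated as a
# non-negative mixture of 3 end-member "groundwater types".
rng = np.random.default_rng(4)
sources = rng.uniform(0, 1, size=(3, 10))
mixing = rng.dirichlet(np.ones(3), size=200)
X = mixing @ sources + rng.uniform(0, 0.01, size=(200, 10))

# Run NMF from several random starts and pool the recovered end-members.
members = []
for seed in range(10):
    model = NMF(n_components=3, init="random", random_state=seed,
                max_iter=2000)
    model.fit(X)
    members.append(model.components_)
pooled = np.vstack(members)            # (10 runs x 3 sources, 10 observables)

# Cluster the pooled end-members: tight, equally sized clusters indicate a
# robust blind source separation across restarts.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pooled)
print("cluster sizes:", np.bincount(km.labels_))
```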

  4. Identifying the performance characteristics of a winning outcome in elite mixed martial arts competition.

    PubMed

    James, Lachlan P; Robertson, Sam; Haff, G Gregory; Beckman, Emma M; Kelly, Vincent G

    2017-03-01

    To determine those performance indicators that have the greatest influence on classifying outcome at the elite level of mixed martial arts (MMA). A secondary objective was to establish the efficacy of decision tree analysis in explaining the characteristics of victory when compared to alternate statistical methods. Cross-sectional observational. Eleven raw performance indicators from male Ultimate Fighting Championship bouts (n=234) from July 2014 to December 2014 were screened for analysis. Each raw performance indicator was also converted to a rate-dependent measure to be scaled to fight duration. Further, three additional performance indicators were calculated from the dataset and included in the analysis. Cohen's d effect sizes were employed to determine the magnitude of the differences between Wins and Losses, while decision tree (chi-square automatic interaction detector (CHAID)) and discriminant function analyses (DFA) were used to classify outcome (Win and Loss). Effect size comparisons revealed differences between Wins and Losses across a number of performance indicators. Decision tree (raw: 71.8%; rate-scaled: 76.3%) and DFA (raw: 71.4%; rate-scaled 71.2%) achieved similar classification accuracies. Grappling and accuracy performance indicators were the most influential in explaining outcome. The decision tree models also revealed multiple combinations of performance indicators leading to victory. The decision tree analyses suggest that grappling activity and technique accuracy are of particular importance in achieving victory in elite-level MMA competition. The DFA results supported the importance of these performance indicators. Decision tree induction represents an intuitive and slightly more accurate approach to explaining bout outcome in this sport when compared to DFA. Copyright © 2016 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
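
    A minimal sketch of the classification step follows, in Python with synthetic stand-in data (the real predictors were UFC performance indicators). scikit-learn's DecisionTreeClassifier implements CART rather than the CHAID algorithm used in the study, so this illustrates the general decision-tree workflow rather than reproducing the paper's analysis.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hypothetical performance-indicator matrix: rows are bouts, columns are
# rate-scaled indicators (e.g., takedowns per minute, strike accuracy);
# y encodes the bout outcome (1 = win, 0 = loss).
rng = np.random.default_rng(5)
n = 234
X = rng.normal(size=(n, 3))
y = (0.9 * X[:, 0] + 0.7 * X[:, 2] + rng.normal(0, 1, n) > 0).astype(int)

# Shallow tree with a minimum leaf size, to keep the rules interpretable.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20,
                              random_state=0)
acc = cross_val_score(tree, X, y, cv=10).mean()
print(f"cross-validated classification accuracy: {acc:.1%}")
print("feature importances:", tree.fit(X, y).feature_importances_)
```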

  5. Validity of personality measurement in adults with anxiety disorders: psychometric properties of the Spanish NEO-FFI-R using Rasch analyses

    PubMed Central

    Inchausti, Felix; Mole, Joe; Fonseca-Pedrero, Eduardo; Ortuño-Sierra, Javier

    2015-01-01

    The aim of this study was to analyse the psychometric properties of the Spanish NEO Five Factor Inventory–Revised (NEO-FFI-R) using Rasch analyses, in order to test its rating scale functioning, the reliability of scores, internal structure, and differential item functioning (DIF) by gender in a psychiatric sample. The NEO-FFI-R responses of 433 Spanish adults (154 males) with an anxiety disorder as primary diagnosis were analysed using the Rasch model for rating scales. Two intermediate categories of response (‘neutral’ and ‘agree’) malfunctioned in the Neuroticism and Conscientiousness scales. In addition, model reliabilities were lower than expected in Agreeableness and Neuroticism, and the item fit values indicated each scale had items that did not achieve moderate to high discrimination on its dimension, particularly in the Agreeableness scale. Concerning unidimensionality, the five NEO-FFI-R scales showed large first components of unexplained variance. Finally, DIF by gender was detected in many items. The results suggest that the scores of the Spanish NEO-FFI-R are unreliable in psychiatric samples and cannot be generalized between males and females, especially in the Openness, Conscientiousness, and Agreeableness scales. Future directions for testing and refinement should be developed before the NEO-FFI-R can be used reliably in clinical samples. PMID:25954224

  6. Passive Microwave Algorithms for Sea Ice Concentration: A Comparison of Two Techniques

    NASA Technical Reports Server (NTRS)

    Comiso, Josefino C.; Cavalieri, Donald J.; Parkinson, Claire L.; Gloersen, Per

    1997-01-01

    The most comprehensive large-scale characterization of the global sea ice cover so far has been provided by satellite passive microwave data. Accurate retrieval of ice concentrations from these data is important because of the sensitivity of surface flux (e.g., heat, salt, and water) calculations to small changes in the amount of open water (leads and polynyas) within the polar ice packs. Two algorithms that have been used for deriving ice concentrations from multichannel data are compared. One is the NASA Team algorithm and the other is the Bootstrap algorithm, both of which were developed at NASA's Goddard Space Flight Center. The two algorithms use different channel combinations, reference brightness temperatures, weather filters, and techniques. Analyses are made to evaluate the sensitivity of algorithm results to variations of emissivity and temperature with space and time. To assess the difference in the performance of the two algorithms, analyses were performed with data from both hemispheres and for all seasons. The results show only small differences in the central Arctic but larger disagreements in the seasonal regions and in summer. In some areas of the Antarctic, the Bootstrap technique shows ice concentrations higher than those of the Team algorithm by as much as 25%, whereas in other areas it shows ice concentrations lower by as much as 30%. The differences in the results are caused by temperature effects, emissivity effects, and tie point differences. The Team and Bootstrap results were compared with available Landsat, advanced very high resolution radiometer (AVHRR), and synthetic aperture radar (SAR) data. AVHRR, Landsat, and SAR data sets all yield higher concentrations than the passive microwave algorithms. Inconsistencies among the results suggest the need for further validation studies.

  7. Beware the black box: investigating the sensitivity of FEA simulations to modelling factors in comparative biomechanics.

    PubMed

    Walmsley, Christopher W; McCurry, Matthew R; Clausen, Phillip D; McHenry, Colin R

    2013-01-01

    Finite element analysis (FEA) is a computational technique of growing popularity in the field of comparative biomechanics, and is an easily accessible platform for form-function analyses of biological structures. However, its rapid evolution in recent years from a novel approach to common practice demands some scrutiny with regard to the validity of results and the appropriateness of assumptions inherent in setting up simulations. Both validation and sensitivity analyses remain unexplored in many comparative analyses, and assumptions considered to be 'reasonable' are often assumed to have little influence on the results and their interpretation. Here we report an extensive sensitivity analysis where high resolution finite element (FE) models of mandibles from seven species of crocodile were analysed under loads typical for comparative analysis: biting, shaking, and twisting. Simulations explored the effect on both the absolute response and the interspecies pattern of results to variations in commonly used input parameters. Our sensitivity analysis focuses on assumptions relating to the selection of material properties (heterogeneous or homogeneous), scaling (standardising volume, surface area, or length), tooth position (front, mid, or back tooth engagement), and linear load case (type of loading for each feeding type). Our findings show that in a comparative context, FE models are far less sensitive to the selection of material property values and scaling to either volume or surface area than they are to those assumptions relating to the functional aspects of the simulation, such as tooth position and linear load case. Results show a complex interaction between simulation assumptions, depending on the combination of assumptions and the overall shape of each specimen. Keeping assumptions consistent between models in an analysis does not ensure that results can be generalised beyond the specific set of assumptions used. Logically, different comparative datasets would also be sensitive to identical simulation assumptions; hence, modelling assumptions should undergo rigorous selection. The accuracy of input data is paramount, and simulations should focus on taking biological context into account. Ideally, validation of simulations should be addressed; however, where validation is impossible or unfeasible, sensitivity analyses should be performed to identify which assumptions have the greatest influence upon the results.

  9. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    PubMed

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. Relational databases are essential for the management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It covers the installation and use of a simple protein sequence database, seqdb_demo, which is used as the basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, extending seqdb_demo for the storage of sequence similarity search results, and making use of various kinds of stored search results to address aspects of comparative genomic analysis.
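
    As a flavor of the library-subset idea, the sketch below uses Python's built-in sqlite3 module with a toy schema loosely modeled on the seqdb_demo concept; the table layout and accessions here are invented for illustration and are not the unit's actual schema.

```python
import sqlite3

# Store sequences with their annotations, then materialize a library subset
# likely to contain homologs (here, by taxon) before similarity searching.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE protein (
                   acc   TEXT PRIMARY KEY,
                   taxon TEXT,
                   seq   TEXT)""")
con.executemany("INSERT INTO protein VALUES (?, ?, ?)", [
    ("P001", "Homo sapiens", "MKTAYIAKQR"),
    ("P002", "Mus musculus", "MKTAYLAKQR"),
    ("P003", "Escherichia coli", "MSKGEELFTG"),
])

# The subset becomes the search library; shrinking the effective database
# size is what improves the statistical significance of reported hits.
subset = con.execute(
    "SELECT acc, seq FROM protein WHERE taxon LIKE ?", ("%musculus%",)
).fetchall()
print(subset)
```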

  10. Validation of the Work-Life Balance Culture Scale (WLBCS).

    PubMed

    Nitzsche, Anika; Jung, Julia; Kowalski, Christoph; Pfaff, Holger

    2014-01-01

    The purpose of this paper is to describe the theoretical development and initial validation of the newly developed Work-Life Balance Culture Scale (WLBCS), an instrument for measuring an organizational culture that promotes the work-life balance of employees. In Study 1 (N=498), the scale was developed and its factorial validity tested through exploratory factor analyses. In Study 2 (N=513), confirmatory factor analysis (CFA) was performed to examine model fit and retest the dimensional structure of the instrument. To assess construct validity, a priori hypotheses were formulated and subsequently tested using correlation analyses. Exploratory and confirmatory factor analyses revealed a one-factor model. Results of the bivariate correlation analyses may be interpreted as preliminary evidence of the scale's construct validity. The five-item WLBCS is a new and efficient instrument with good overall quality. Its conciseness makes it particularly suitable for use in employee surveys to gain initial insight into a company's perceived work-life balance culture.
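
    Internal consistency of a short scale like the WLBCS is typically summarized by Cronbach's alpha, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A self-contained Python sketch with simulated five-item Likert responses (all values hypothetical) follows:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical five-item scale answered by 500 respondents on 1-5 points,
# with a shared latent factor driving the inter-item correlations.
rng = np.random.default_rng(6)
latent = rng.normal(size=(500, 1))
scores = np.clip(np.rint(3 + latent + rng.normal(0, 0.8, (500, 5))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```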

  11. Crystallography of refractory metal nuggets in carbonaceous chondrites: A transmission Kikuchi diffraction approach

    NASA Astrophysics Data System (ADS)

    Daly, Luke; Bland, Phil A.; Dyl, Kathryn A.; Forman, Lucy V.; Saxey, David W.; Reddy, Steven M.; Fougerouse, Denis; Rickard, William D. A.; Trimby, Patrick W.; Moody, Steve; Yang, Limei; Liu, Hongwei; Ringer, Simon P.; Saunders, Martin; Piazolo, Sandra

    2017-11-01

    Transmission Kikuchi diffraction (TKD) is a relatively new technique that is currently being developed for geological sample analysis. This technique utilises the transmission capabilities of a scanning electron microscope (SEM) to rapidly and accurately map the crystallographic and geochemical features of an electron transparent sample. TKD uses a similar methodology to traditional electron backscatter diffraction (EBSD), but is capable of achieving a much higher spatial resolution (5-10 nm) (Trimby, 2012; Trimby et al., 2014). Here we apply TKD to refractory metal nuggets (RMNs), which are micrometre to sub-micrometre metal alloys composed of highly siderophile elements (HSEs) found in primitive carbonaceous chondrite meteorites. TKD allows us to analyse RMNs in situ, enabling the characterisation of nanometre-scale variations in chemistry and crystallography, whilst preserving their spatial and crystallographic context. This provides a complete representation of each RMN, permitting detailed interpretation of their formation history. We present TKD analysis of five transmission electron microscopy (TEM) lamellae containing RMNs, coupled with EBSD and TEM analyses. These analyses revealed textures and relationships not previously observed in RMNs. These textures indicate that some RMNs experienced annealing, forming twins. Some RMNs also acted as nucleation centres, and formed immiscible metal-silicate fluids. In fact, each RMN analysed in this study had different crystallographic textures. These RMNs also had heterogeneous compositions, even between RMNs contained within the same inclusion and host phase, and even when separated by only a few nanometres. Some RMNs were also affected by secondary processes at low temperature, causing exsolution of molybdenite. However, most RMNs had crystallographic textures indicating that the RMN formed prior to its host inclusion. TKD analyses reveal that most RMNs have been affected by processing in the protoplanetary disk. Despite this alteration, RMNs still preserve primary crystallographic textures and heterogeneous chemical signatures. This heterogeneity in crystallographic relationships, which mostly suggests that RMNs pre-date their hosts, is consistent with the idea that there is not a dominant RMN-forming process. Each RMN has experienced a complex history, supporting the suggestion of Daly et al. (2017) that RMNs may preserve a diverse pre-solar chemical signature inherited from the Giant Molecular Cloud.

  12. Local Geographic Variation of Public Services Inequality: Does the Neighborhood Scale Matter?

    PubMed Central

    Wei, Chunzhu; Cabrera-Barona, Pablo; Blaschke, Thomas

    2016-01-01

    This study aims to explore the effect of the neighborhood scale when estimating public services inequality based on the aggregation of social, environmental, and health-related indicators. Inequality analyses were carried out at three neighborhood scales: the original census blocks and two aggregated neighborhood units generated by the Spatial 'K'luster Analysis by Tree Edge Removal (SKATER) algorithm and the self-organizing map (SOM) algorithm. Then, we combined a set of health-related public services indicators with geographically weighted principal components analysis (GWPCA) and principal components analysis (PCA) to measure the public services inequality across all multi-scale neighborhood units. Finally, a statistical test was applied to evaluate the scale effects in inequality measurements by combining all available field survey data. We chose Quito as the case study area. All of the aggregated neighborhood units performed better than the original census blocks in terms of the social indicators extracted from a field survey. The SKATER and SOM algorithms can help to define the neighborhoods in inequality analyses. Moreover, GWPCA performs better than PCA in multivariate spatial inequality estimation. Understanding the scale effects is essential to sustain a social neighborhood organization, which, in turn, positively affects social determinants of public health and public quality of life. PMID:27706072

  13. Relevance of multiple spatial scales in habitat models: A case study with amphibians and grasshoppers

    NASA Astrophysics Data System (ADS)

    Altmoos, Michael; Henle, Klaus

    2010-11-01

    Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study for three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only, whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference for single habitat variables changed across nested scales. Most environmental variables were only significant for a habitat model on one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant, which cannot be recognized on lower scales. Similar preferences across scales occurred in only 13 out of 79 cases, and in 3 out of 79 cases the preference and avoidance for the same variable were even reversed among scales. Second, we developed habitat models by using a logistic regression on every scale and for all combinations of scales, and analysed how the quality of the habitat models changed with the scales considered. To achieve a sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except for Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog (Hyla arborea) were at least three scales required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications in multi-scaled habitat models, we suggest that 2 or 3 scales should be a suitable trade-off when attempting to define a suitable microscale.
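
    The scale-combination comparison described here maps naturally onto fitting a logistic regression for every subset of scales and comparing information criteria. A minimal Python sketch with statsmodels and synthetic presence/absence data follows; the three scale labels, effect sizes, and AIC comparison are illustrative assumptions, not the study's variables.

```python
from itertools import combinations

import numpy as np
import statsmodels.api as sm

# Hypothetical presence/absence data for one species, with the same habitat
# variable measured at three nested scales (e.g., 10 m, 100 m, 1 km radii).
rng = np.random.default_rng(7)
n = 300
X_scales = {s: rng.normal(size=n) for s in ("micro", "meso", "macro")}
logit = 1.2 * X_scales["micro"] + 0.6 * X_scales["macro"]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

# Fit a logistic regression for each scale combination and compare by AIC,
# mirroring the search for the minimal set of scales with sufficient fit.
for r in (1, 2, 3):
    for combo in combinations(X_scales, r):
        X = sm.add_constant(np.column_stack([X_scales[s] for s in combo]))
        fit = sm.Logit(y, X).fit(disp=0)
        print(f"{'+'.join(combo):17s} AIC = {fit.aic:7.1f}")
```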

  14. From a meso- to micro-scale connectome: array tomography and mGRASP

    PubMed Central

    Rah, Jong-Cheol; Feng, Linqing; Druckmann, Shaul; Lee, Hojin; Kim, Jinhyun

    2015-01-01

    Mapping mammalian synaptic connectivity has long been an important goal of neuroscience, because knowing how neurons and brain areas are connected underpins an understanding of brain function. Meeting this goal requires advanced techniques with single synapse resolution and large-scale capacity, especially at the multiple scales tethering the meso- and micro-scale connectome. Among several advanced LM-based connectome technologies, Array Tomography (AT) and mammalian GFP-Reconstitution Across Synaptic Partners (mGRASP) can provide relatively high-throughput mapping of synaptic connectivity at multiple scales. AT- and mGRASP-assisted circuit mapping (ATing and mGRASPing), combined with techniques such as retrograde viral tracing, brain clearing, and activity indicators, will help unlock the secrets of complex neural circuits. Here, we discuss these useful new tools for mapping brain circuits at multiple scales, some functional implications of spatial synaptic distribution, and future challenges and directions of these endeavors. PMID:26089781

  15. Optical technique for inner-scale measurement: possible astronomical applications.

    PubMed

    Masciadri, E; Vernin, J

    1997-02-20

    We propose an optical technique that allows us to estimate the inner scale by measuring the variance of angle-of-arrival fluctuations of collimated laser beams of different cross-sections w_i passing through a turbulent layer. To test the potential efficiency of the system, we made measurements on a turbulent air flow generated in the laboratory, the statistical properties of which are known and controlled, unlike atmospheric turbulence. We deduced a Kolmogorov behavior with a 6-mm inner scale and a 90-mm outer scale, in accordance with measurements by a more complicated technique using the same turbulent channel. Our proposed method is especially sensitive to inner-scale measurement and can be adapted easily to atmospheric turbulence analysis. We propose an outdoor experimental setup that should work in the less controlled conditions that can affect astronomical observations. The inner-scale assessment might be important when phase retrieval with Laplacian methods is used for adaptive optics purposes.

  16. Developing a model of competence in the operating theatre: psychometric validation of the perceived perioperative competence scale-revised.

    PubMed

    Gillespie, Brigid M; Polit, Denise F; Hamlin, Lois; Chaboyer, Wendy

    2012-01-01

    This paper describes the development and validation of the Revised Perceived Perioperative Competence Scale (PPCS-R). There is a lack of psychometrically sound self-assessment tools to measure nurses' perceived competence in the operating room. Content validity was established by a panel of international experts, and the original 98-item scale was pilot tested with 345 nurses in Queensland, Australia. Following the removal of several items, a national sample that included all 3209 nurses who were members of the Australian College of Operating Room Nurses was surveyed using the 94-item version. Psychometric testing assessed content validity using exploratory factor analysis, internal consistency using Cronbach's alpha, and construct validity using the "known groups" technique. During item reduction, several preliminary factor analyses were performed on two random halves of the sample (n=550). Usable data for psychometric assessment were obtained from 1122 nurses. The original 94-item scale was reduced to 40 items. The final factor analysis using the entire sample resulted in a 40-item, six-factor solution. Cronbach's alpha for the 40-item scale was .96. Construct validation demonstrated significant differences (p<.0001) in perceived competence scores relative to years of operating room experience and receipt of specialty education. On the basis of these results, the psychometric properties of the PPCS-R were considered encouraging. Further testing of the tool in different samples of operating room nurses is necessary to enable cross-cultural comparisons. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Multi-Scale and Object-Oriented Analysis for Mountain Terrain Segmentation and Geomorphological Assessment

    NASA Astrophysics Data System (ADS)

    Marston, B. K.; Bishop, M. P.; Shroder, J. F.

    2009-12-01

    Digital terrain analysis of mountain topography is widely utilized for mapping landforms, assessing the role of surface processes in landscape evolution, and estimating the spatial variation of erosion. Numerous geomorphometry techniques exist to characterize terrain surface parameters, although their utility for characterizing the spatial hierarchical structure of the topography and permitting an assessment of the erosion/tectonic impact on the landscape is very limited due to scale and data integration issues. To address this problem, we apply scale-dependent geomorphometric and object-oriented analyses to characterize the hierarchical spatial structure of mountain topography. Specifically, we utilized a high resolution digital elevation model to characterize complex topography in the Shimshal Valley in the Western Himalaya of Pakistan. To accomplish this, we generate terrain objects (geomorphological features and landforms) including valley floors and walls, drainage basins, drainage networks, ridge networks, slope facets, and elemental forms based upon curvature. Object-oriented analysis was used to characterize object properties accounting for object size, shape, and morphometry. The spatial overlay and integration of terrain objects at various scales defines the nature of the hierarchical organization. Our results indicate that variations in the spatial complexity of the terrain hierarchical organization are related to the spatio-temporal influence of surface processes and landscape evolution dynamics. Terrain segmentation and the integration of multi-scale terrain information permit further assessment of process domains and erosion, tectonic impact potential, and natural hazard potential. We demonstrate this with landform mapping and geomorphological assessment examples.

  18. Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)

    NASA Astrophysics Data System (ADS)

    Hancher, M.

    2013-12-01

    Geoscientists have more and more access to new tools for large-scale computing. With any tool, some tasks are easy and other tasks hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exciting opportunity to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely-sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging, as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
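
    For a concrete sense of the programming model, the sketch below uses the Earth Engine Python API (the earthengine-api package) to build a cloud-screened Landsat 8 median composite and pull back a small regional statistic. The collection ID, band name, and area of interest are assumptions based on the current public catalog and may need adjusting; ee.Initialize() also presumes prior authentication with ee.Authenticate().

```python
import ee

ee.Initialize()  # assumes the account has already been authenticated

region = ee.Geometry.Point(-122.09, 37.42).buffer(20000)  # hypothetical AOI
collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
              .filterBounds(region)
              .filterDate("2020-01-01", "2021-01-01")
              .filter(ee.Filter.lt("CLOUD_COVER", 20)))

# The median reducer runs in parallel in Google's datacenters; only the
# small summary statistic is returned to the client by getInfo().
composite = collection.median()
stats = composite.select("SR_B5").reduceRegion(
    reducer=ee.Reducer.mean(), geometry=region, scale=30)
print(stats.getInfo())
```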

  19. Coupling carbon isotopes with astrochronology and correlation techniques for Late Cretaceous chronostratigraphic refinement and paleoclimate reconstruction

    NASA Astrophysics Data System (ADS)

    Jones, M. M.; Sageman, B. B.; Meyers, S. R.

    2016-12-01

    Late Cretaceous carbon isotope ratios (δ13C) recorded in organic matter and marine carbonates preserve an archive of the global carbon cycle in a greenhouse climate state. Due to excellent connectivity among surface carbon reservoirs and the low residence time of carbon within them, carbon isotope excursions (CIEs), which record changes in these fluxes, serve as widely correlative chronostratigraphic markers. In this study, floating astronomical time scales (ATS) from an organic carbon-rich marine Turonian succession at Demerara Rise (tropical North Atlantic) are combined with high-resolution δ13C chemostratigraphy to estimate CIE timing and duration for refinement of the geologic time scale. In addition, a Gaussian kernel smoothing technique for objective correlation of astronomically tuned δ13C records is developed. Correlation with three coeval Turonian sections (Western Interior Basin, Texas, and Europe) shows consistency in astronomical and radioisotopic time scale ages for CIEs. In particular, a mid-Turonian sea-level fall is demonstrated to be synchronous within 100 ka uncertainty. Spectral analyses of δ13Corg, %TOC, %Carbonate, and C/N time series provide insights into astronomical forcings influencing paleoclimate and paleoceanographic conditions in the tropical proto-North Atlantic upwelling zone. The stable long eccentricity cycle (~405 ka) is robustly recorded in all geochemical data, and has the highest amplitude in the %TOC, %Carbonate, and C/N time series. However, δ13C from Demerara Rise is dominated by a ~1 Myr cycle resembling long obliquity, suggesting that a dynamic organic carbon reservoir and/or a climate feedback originating in the high latitudes was prominent during the Turonian greenhouse carbon cycle. This investigation emphasizes that δ13C chemostratigraphy and astrochronology are useful chronostratigraphic methods for importing high-resolution time control into disparate basins to answer questions regarding sea-level records, paleoclimate, and mass extinction on a global scale, and at the same time for deciphering the response of the global carbon cycle to astronomical climate forcing.
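
    The Gaussian kernel smoothing step can be illustrated with a minimal Nadaraya-Watson smoother, which handles the uneven sample spacing typical of stratigraphic series. The synthetic excursion and the 40 ka bandwidth below are placeholders, not the authors' data or tuning.

    ```python
    # Minimal sketch of Gaussian kernel smoothing of an unevenly sampled
    # d13C series; bandwidth and data are illustrative.
    import numpy as np

    def gaussian_kernel_smooth(t, y, t_eval, bandwidth):
        """Nadaraya-Watson smoother: Gaussian-weighted mean of y in time."""
        t, y = np.asarray(t), np.asarray(y)
        smoothed = np.empty_like(t_eval, dtype=float)
        for i, t0 in enumerate(t_eval):
            w = np.exp(-0.5 * ((t - t0) / bandwidth) ** 2)
            smoothed[i] = np.sum(w * y) / np.sum(w)
        return smoothed

    # Demo: a noisy synthetic excursion centered at 500 ka
    t = np.sort(np.random.uniform(0, 1000, 300))           # ka, uneven spacing
    y = 2.0 + np.exp(-((t - 500) / 80) ** 2) + 0.3 * np.random.randn(t.size)
    grid = np.linspace(0, 1000, 501)
    print(gaussian_kernel_smooth(t, y, grid, bandwidth=40.0)[:5])
    ```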

  20. SRM Internal Flow Tests and Computational Fluid Dynamic Analysis. Volume 2; CFD RSRM Full-Scale Analyses

    NASA Technical Reports Server (NTRS)

    2001-01-01

    This document presents the full-scale CFD analyses of the Reusable Solid Rocket Motor (RSRM). The RSRM model was developed with a 20-second burn time. The following are presented as part of the full-scale analyses: (1) RSRM embedded inclusion analysis; (2) RSRM igniter nozzle design analysis; (3) Nozzle Joint 4 erosion anomaly; (4) RSRM full motor port slag accumulation analysis; (5) RSRM motor analysis of two-phase flow in the aft segment/submerged nozzle region; (6) completion of the 3-D analysis of the hot air nozzle manifold; (7) BATES motor distributed combustion test case; and (8) three-dimensional polysulfide bump analysis.

  1. Genome-wide heterogeneity of nucleotide substitution model fit.

    PubMed

    Arbiza, Leonardo; Patricio, Mateus; Dopazo, Hernán; Posada, David

    2011-01-01

    At a genomic scale, the patterns that have shaped molecular evolution are believed to be largely heterogeneous. Consequently, comparative analyses should use appropriate probabilistic substitution models that capture the main features under which different genomic regions have evolved. While efforts have concentrated on the development and understanding of model selection techniques, no descriptions of overall relative substitution model fit at the genome level have been reported. Here, we provide a characterization of best-fit substitution models across three genomic data sets including coding regions from mammals, vertebrates, and Drosophila (24,000 alignments). According to the Akaike Information Criterion (AIC), 82 of the 88 models considered were selected as best-fit models on at least one occasion, although with very different frequencies. Most parameter estimates also varied broadly among genes. Patterns found for vertebrates and Drosophila were quite similar and often more complex than those found in mammals. Phylogenetic trees derived from models in the 95% confidence interval set showed much less variance and were significantly closer to the tree estimated under the best-fit model than trees derived from models outside this interval. Although alternative criteria selected simpler models than the AIC, they suggested similar patterns. Altogether, our results show that at a genomic scale, different gene alignments for the same set of taxa are best explained by a large variety of different substitution models and that model choice has implications for different parameter estimates, including the inferred phylogenetic trees. After taking into account the differences related to sample size, our results suggest a noticeable diversity in the underlying evolutionary process. Overall, we conclude that the use of model selection techniques is important for obtaining consistent phylogenetic estimates from real data at a genomic scale.
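
    As a reminder of how AIC-based model choice works in practice, the sketch below ranks candidate substitution models by AIC = 2k - 2 ln L and converts the differences into Akaike weights. The log-likelihoods and parameter counts are invented placeholders; a real analysis would obtain them from a model-selection program such as jModelTest.

    ```python
    # Minimal sketch of AIC-based model selection; lower AIC is better.
    # Log-likelihoods and parameter counts below are made-up placeholders.
    import numpy as np

    def aic_table(log_likelihoods, n_params):
        ll = np.asarray(log_likelihoods, dtype=float)
        k = np.asarray(n_params, dtype=float)
        aic = 2.0 * k - 2.0 * ll
        delta = aic - aic.min()              # differences to the best model
        weights = np.exp(-0.5 * delta)
        weights /= weights.sum()             # Akaike weights
        return aic, delta, weights

    models = ['JC69', 'HKY85', 'GTR+G']
    aic, delta, w = aic_table([-5102.3, -5060.7, -5049.1], [1, 5, 10])
    for m, a, d, wi in zip(models, aic, delta, w):
        print(f'{m:8s} AIC={a:9.1f} dAIC={d:6.1f} weight={wi:.3f}')
    ```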

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Xiaoying; Ho, Shirley; Trac, Hy

    We investigate machine learning (ML) techniques for predicting the number of galaxies (N_gal) that occupy a halo, given the halo's properties. These types of mappings are crucial for constructing the mock galaxy catalogs necessary for analyses of large-scale structure. The ML techniques proposed here distinguish themselves from traditional halo occupation distribution (HOD) modeling as they do not assume a prescribed relationship between halo properties and N_gal. In addition, our ML approaches are only dependent on parent halo properties (like HOD methods), which are advantageous over subhalo-based approaches as identifying subhalos correctly is difficult. We test two algorithms: support vector machines (SVM) and k-nearest-neighbor (kNN) regression. We take galaxies and halos from the Millennium simulation and predict N_gal by training our algorithms on the following six halo properties: number of particles, M_200, σ_v, v_max, half-mass radius, and spin. For Millennium, our predicted N_gal values have a mean-squared error (MSE) of ~0.16 for both SVM and kNN. Our predictions match the overall distribution of halos reasonably well and the galaxy correlation function at large scales to ~5%-10%. In addition, we demonstrate a feature selection algorithm to isolate the halo parameters that are most predictive, a useful technique for understanding the mapping between halo properties and N_gal. Lastly, we investigate these ML-based approaches in making mock catalogs for different galaxy subpopulations (e.g., blue, red, high M_star, low M_star). Given its non-parametric nature as well as its powerful predictive and feature selection capabilities, ML offers an interesting alternative for creating mock catalogs.
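
    A minimal sketch of the kNN-regression half of this approach, using scikit-learn: predict N_gal from six halo properties. The synthetic halo catalog and the invented linear-plus-noise mapping stand in for the Millennium data, and n_neighbors=10 is an arbitrary choice rather than the paper's setting.

    ```python
    # Sketch: kNN regression from parent-halo properties to galaxy counts.
    # Synthetic data; the true mapping below is invented for illustration.
    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(0)
    n = 5000
    # Columns: e.g. log10 M200, sigma_v, v_max, half-mass radius, spin
    X = rng.normal(size=(n, 5))
    n_gal = np.clip(np.round(1.5 + 2.0 * X[:, 0] + 0.5 * X[:, 2]
                             + rng.normal(scale=0.4, size=n)), 0, None)

    X_tr, X_te, y_tr, y_te = train_test_split(X, n_gal, random_state=0)
    model = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=10))
    model.fit(X_tr, y_tr)
    print('MSE:', mean_squared_error(y_te, model.predict(X_te)))
    ```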

  3. Extraction of ice absorptions in comet spectra, and application to VIRTIS/Rosetta

    NASA Astrophysics Data System (ADS)

    Erard, Stéphane; Despan, Daniela; Leyrat, Cédric; Drossart, Pierre; Capaccioni, Fabrizio; Filacchione, Gianrico

    2014-05-01

    Detection of ice spectral features can be difficult on comet surfaces, due to mixing with dark opaque materials, as shown by Deep Impact and EPOXI observations. We study here the possible use of high-level spectral detection techniques in this context. A method based on wavelet decomposition and a multiscale vision model, partly derived from image analysis techniques, was presented recently (Erard, 2013). It is used here to extract shallow features from spectra in reflected light, up to ~3 µm. The outcome of the analysis is a description of the bands detected, and a quantitative and reliable confidence parameter. The bands can be described either by the most appropriate wavelet scale only (for rapid analyses) or after reconstruction from all scales involved (for more precise measurements). An interesting side effect is the ability to separate even narrow features from random noise, as well as to identify low-frequency variations, i.e., wide and shallow bands. Tests are performed on laboratory analogue spectra and available observational data. The technique is expected to provide detection of ice in the early stages of Rosetta observations of 67P this year, from VIRTIS data (Coradini et al., 2009). Strategies are devised to quickly analyze large datasets, e.g., by applying the extraction technique to components first identified by an ACI (Erard et al., 2011). The exact position of the bands can be diagnostic of surface temperature, in particular at 1.6 µm (e.g., Fink & Larson, 1975) and 3.6 µm (Filacchione et al., 2013), and may complement estimates retrieved from the onset of thermal emission longward of 3.5 µm. References: Erard, S. (2013) 8th EPSC, EPSC2013-520. Coradini et al. (2009), in the Rosetta book, Schulz et al., Eds. Erard, S. et al. (2011) Planet. & Space Sci. 59, 1842-1852. Fink, U. & Larson, H. (1975) Icarus 24, 411-420. Filacchione et al. (2013) AGU Fall Meeting Abstracts A7.

  4. Manual for Transference Work Scale; a micro-analytical tool for therapy process analyses.

    PubMed

    Ulberg, Randi; Amlo, Svein; Høglend, Per

    2014-11-18

    The present paper is a manual for the Transference Work Scale (TWS). The inter-rater agreement on the 26 TWS items was good to excellent, as previously published. TWS is a therapy process rating scale focusing on transference work (TW), i.e., analysis of the patient-therapist relationship. TW is considered a core active ingredient in dynamic psychotherapy. Adequate process scales are needed to identify and analyze in-session effects of therapist techniques in psychodynamic psychotherapy and to empirically establish their links to outcome. TWS was constructed to identify and categorize relational (transference) interventions, and to explore the in-session impact of analysis of the patient-therapist relationship (transference work). TWS has subscales that rate the timing, content, and valence of the transference interventions, as well as the response from the patient. Descriptions and elaborations of the items in TWS are provided. Clinical examples of transference work from the First Experimental Study of Transference Interpretations (FEST) are included and followed by examples of how to rate transcripts from therapy sessions with TWS. The present manual describes in detail the rating procedure when using the Transference Work Scale. Ratings are illustrated with clinical examples from FEST. TWS may be a useful tool to explore the interaction of timing, category, and valence of transference work in predicting in-session patient response as well as treatment outcome. TWS might prove especially suitable for intensive case studies combining quantitative and narrative data. First Experimental Study of Transference-interpretations (FEST307/95). ClinicalTrials.gov Identifier: NCT00423462. URL: http://clinicaltrials.gov/ct2/show/NCT00423462?term=FEST&rank=2.

  5. A Scalable Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Aiken, Alexander

    2001-01-01

    The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for analyzing software, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions, these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application, using coarse and fast but still adequate methods at the largest scales, and reserving more precise but also more expensive methods for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system at smaller scales. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.

  6. Soil quality and soil degradation in agricultural loess soils in Central Europe - impacts of traditional small-scale and modernized large-scale agriculture

    NASA Astrophysics Data System (ADS)

    Schneider, Christian

    2017-04-01

    The study analyzes the impact of different farming systems on soil quality and soil degradation in European loess landscapes. The analyses are based on geo-chemical soil properties, landscape metrics, and geomorphological indicators. The German Middle Saxonian Loess Region represents loess landscapes whose ecological functions were shaped by land consolidation measures resulting in large-scale, high-input farming systems. The Polish Proszowice Plateau is still characterized by traditional small-scale peasant agriculture. The research areas were analyzed at different scale levels, combining GIS, field, and laboratory methods. A digital terrain classification was used to identify representative catchment basins for detailed pedological studies, which focused on soil properties that respond to soil management within several years, such as pH, total carbon (TC), total nitrogen (TN), inorganic carbon (IC), soil organic carbon (TOC = TC - IC), hot-water extractable carbon (HWC), hot-water extractable nitrogen (HWN), total phosphorus, plant-available phosphorus (P), plant-available potassium (K), and the potential cation exchange capacity (CEC). The study has shown that significant differences in major soil properties can be observed because of different fertilizer inputs and partly because of different cultivation techniques. The traditional system also increases soil heterogeneity. Contrary to expectations, the study has shown that the small-scale peasant farming system resulted in mean soil organic carbon and phosphorus contents similar to those of the industrialized high-input farming system. A further study could include investigations of the effects of soil amendments such as herbicides and pesticides on soil degradation.

  7. Large-Scale Phase Synchrony Reflects Clinical Status After Stroke: An EEG Study.

    PubMed

    Kawano, Teiji; Hattori, Noriaki; Uno, Yutaka; Kitajo, Keiichi; Hatakenaka, Megumi; Yagura, Hajime; Fujimoto, Hiroaki; Yoshioka, Tomomi; Nagasako, Michiko; Otomune, Hironori; Miyai, Ichiro

    2017-06-01

    Stroke-induced focal brain lesions often exert remote effects via residual neural network activity. Electroencephalographic (EEG) techniques can assess neural network modifications after brain damage. Recently, EEG phase synchrony analyses have shown associations between the level of large-scale phase synchrony of brain activity and clinical symptoms; however, few reports have assessed such associations in stroke patients. The aim of this study was to investigate the clinical relevance of hemispheric phase synchrony in stroke patients by calculating its correlation with clinical status. This cross-sectional study included 19 patients with post-acute ischemic stroke admitted for inpatient rehabilitation. Interhemispheric phase synchrony indices (IH-PSIs) were computed in 2 frequency bands (alpha [α] and beta [β]), and associations between the indices and scores on the Functional Independence Measure (FIM), the National Institutes of Health Stroke Scale (NIHSS), and the Fugl-Meyer Motor Assessment (FMA) were analyzed. For further assessment of the IH-PSIs, ipsilesional intrahemispheric PSIs (IntraH-PSIs) as well as IH- and IntraH-phase lag indices (PLIs) were also evaluated. IH-PSIs correlated significantly with FIM scores and NIHSS scores. In contrast, IH-PSIs did not correlate with FMA scores. IntraH-PSIs correlated with FIM scores after removal of an outlier. The results of the analysis with PLIs were consistent with those for the IH-PSIs. The PSIs correlated with performance on the activities-of-daily-living scale but not with scores on a pure motor impairment scale. These results suggest that large-scale phase synchrony, represented by IH-PSIs, provides a novel surrogate marker for clinical status after stroke.
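
    A compact illustration of one common phase synchrony index (the phase-locking value), computed from instantaneous phases obtained via the Hilbert transform. The abstract does not specify the authors' exact estimator, so treat this as a generic sketch; the synthetic 10 Hz signals stand in for band-passed EEG.

    ```python
    # Sketch: phase-locking value (PLV) between two band-limited signals.
    # Real EEG would be band-pass filtered before phase extraction.
    import numpy as np
    from scipy.signal import hilbert

    fs = 250.0
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * np.random.randn(t.size)

    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    # Average the phase differences on the unit circle:
    plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))  # 0 = none, 1 = perfect
    print(f'PLV = {plv:.3f}')
    ```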

  8. Digital signal processing techniques for pitch shifting and time scaling of audio signals

    NASA Astrophysics Data System (ADS)

    Buś, Szymon; Jedrzejewski, Konrad

    2016-09-01

    In this paper, we present the techniques used for modifying the spectral content (pitch shifting) and for changing the time duration (time scaling) of an audio signal. A short introduction gives a necessary background for understanding the discussed issues and contains explanations of the terms used in the paper. In subsequent sections we present three different techniques appropriate both for pitch shifting and for time scaling. These techniques use three different time-frequency representations of a signal, namely short-time Fourier transform (STFT), continuous wavelet transform (CWT) and constant-Q transform (CQT). The results of simulation studies devoted to comparison of the properties of these methods are presented and discussed in the paper.
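
    As a usage-level illustration (not the authors' implementation), the librosa package exposes STFT/phase-vocoder-based routines for both operations; the audio file path below is a placeholder.

    ```python
    # Off-the-shelf STFT-based pitch shifting and time scaling with librosa.
    import librosa

    y, sr = librosa.load('example.wav', sr=None)              # placeholder path
    y_slow = librosa.effects.time_stretch(y, rate=0.8)        # time scaling: longer, same pitch
    y_up = librosa.effects.pitch_shift(y, sr=sr, n_steps=4)   # pitch shift: +4 semitones, same duration
    ```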

  9. Calibrations for an MCAO Imaging System

    NASA Astrophysics Data System (ADS)

    Hibon, Pascale; B. Neichel; V. Garrel; R. Carrasco

    2017-09-01

    "GeMS, the Gemini Multi conjugate adaptive optics System installed at the Gemini South telescope (Cerro Pachon, Chile) started to deliver science since the beginning of 2013. GeMS is using the Multi Conjugate AdaptiveOptics (MCAO) technique allowing to dramatically increase the corrected field of view (FOV) compared to classical Single Conjugated Adaptive Optics (SCAO) systems. It is the first sodium-based multi-Laser Guide Star (LGS) adaptive optics system. It has been designed to feed two science instruments: GSAOI, a 4k×4k NIR imager covering 85"×85" with 0.02" pixel scale, and Flamingos-2, a NIR multi-object spectrograph. We present here an overview of the calibrations necessary for reducing and analysing the science datasets obtained with GeMS+GSAOI."

  10. Landfalling Tropical Cyclones: Forecast Problems and Associated Research Opportunities

    USGS Publications Warehouse

    Marks, F.D.; Shay, L.K.; Barnes, G.; Black, P.; Demaria, M.; McCaul, B.; Mounari, J.; Montgomery, M.; Powell, M.; Smith, J.D.; Tuleya, B.; Tripoli, G.; Xie, Lingtian; Zehr, R.

    1998-01-01

    The Fifth Prospectus Development Team of the U.S. Weather Research Program was charged to identify and delineate emerging research opportunities relevant to the prediction of local weather, flooding, and coastal ocean currents associated with landfalling U.S. hurricanes specifically, and tropical cyclones in general. Central to this theme are basic and applied research topics, including rapid intensity change, initialization of and parameterization in dynamical models, coupling of atmospheric and oceanic models, quantitative use of satellite information, and mobile observing strategies to acquire observations to evaluate and validate predictive models. To improve the necessary understanding of physical processes and provide the initial conditions for realistic predictions, a focused, comprehensive mobile observing system in a translating storm-coordinate system is required. Given the development of proven instrumentation and improvement of existing systems, three-dimensional atmospheric and oceanic datasets need to be acquired whenever major hurricanes threaten the United States. The spatial context of these focused three-dimensional datasets over the storm scales is provided by satellites, aircraft, expendable probes released from aircraft, and coastal (both fixed and mobile), moored, and drifting surface platforms. To take full advantage of these new observations, techniques need to be developed to objectively analyze these observations and to initialize models aimed at improving prediction of hurricane track and intensity, from global-scale to mesoscale dynamical models. Multinested models allow prediction of all scales, from the global scale, which determines long-term hurricane motion, to the convective scale, which affects intensity. Development of an integrated analysis and model forecast system optimizing the use of three-dimensional observations and providing the necessary forecast skill on all relevant spatial scales is required. Detailed diagnostic analyses of these datasets will lead to improved understanding of the physical processes of hurricane motion, intensity change, the atmospheric and oceanic boundary layers, and the air-sea coupling mechanisms. The ultimate aim of this effort is the construction of real-time analyses of storm surge, winds, and rain, prior to and during landfall, to improve warnings and provide local officials with the comprehensive information required for recovery efforts in the hardest hit areas as quickly as possible.

  11. A non-contact measurement technique at the micro scale

    NASA Astrophysics Data System (ADS)

    Ghosh, Santaneel

    During their production and normal use, electronic packages experience large temperature excursions, leading to high thermo-mechanical stress gradients that cause fatigue failure of the solder joints. In order to prevent premature failure and prolong the fatigue life of solder joints, there is a pressing need for the characterization of the solder, especially lead-free solder, at the micro level (joint size). The characterization and modeling of solder behavior at the appropriate scale is a major issue. However, direct measurement techniques are not applicable for characterizing the deformation response of solder joints because of their micro-scale dimensions. Therefore, a non-contact measurement technique utilizing a Scanning Electron Microscope (SEM) in conjunction with Digital Image Correlation (DIC) has been developed. Validation was achieved by performing a four-point bending test both in an in-house optical system with DIC and inside the SEM. This non-contact measurement technique was then used to extract the stress-strain response of the solder. Mechanical tests were performed on solder joints that were created using the same type of solder balls used in the electronics industry and were representative of normal joint scales. The SEM-DIC technique has been proven applicable for determining the stress-strain response of solder material at the micro-scale. This study resulted in a validated material characterization technique specifically designed for micro-scale material response. One of the main contributions of this study is that the method is considerably simpler and cheaper than previous methods, yet highly effective. The technique is also readily applicable to the measurement of the stress-strain response of any micro-scale specimen, such as other metals, polymers, etc. Also, the displacement field measured by DIC can be used as the basis for calculating the strain field on the surface of a specimen.
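
    The core DIC step (tracking a speckle subset by normalized cross-correlation) can be sketched with scikit-image's match_template; the synthetic speckle images and the rigid 3x5-pixel shift below are stand-ins for real SEM image pairs, not the thesis's actual pipeline.

    ```python
    # Sketch of one DIC subset match via normalized cross-correlation.
    import numpy as np
    from skimage.feature import match_template

    rng = np.random.default_rng(3)
    ref = rng.random((200, 200))                        # synthetic speckle pattern
    deformed = np.roll(ref, shift=(3, 5), axis=(0, 1))  # rigid shift stands in for strain

    subset = ref[80:112, 80:112]                        # 32x32 subset at a point of interest
    corr = match_template(deformed, subset)             # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    displacement = (peak[0] - 80, peak[1] - 80)         # (rows, cols) subset displacement
    print('measured displacement:', displacement)       # expect (3, 5)
    ```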

  12. AQMEII3 evaluation of regional NA/EU simulations and ...

    EPA Pesticide Factsheets

    Through the comparison of several regional-scale chemistry transport modelling systems that simulate meteorology and air quality over the European and American continents, this study aims at (i) apportioning the error to the responsible processes using time-scale analysis, (ii) helping to detect causes of model error, and (iii) identifying the processes and scales most urgently requiring dedicated investigations. The analysis is conducted within the framework of the third phase of the Air Quality Model Evaluation International Initiative (AQMEII) and tackles model performance gauging through measurement-to-model comparison, error decomposition, and time-series analysis of the models' biases for several fields (ozone, CO, SO2, NO, NO2, PM10, PM2.5, wind speed, and temperature). The operational metrics (magnitude of the error, sign of the bias, associativity) provide an overall sense of model strengths and deficiencies, while apportioning the error to its constituent parts (bias, variance, and covariance) can help to assess the nature and quality of the error. Each of the error components is analysed independently and apportioned to specific processes based on the corresponding timescale (long-scale, synoptic, diurnal, and intra-day) using the error apportionment technique devised in the former phases of AQMEII. The application of the error apportionment method to the AQMEII Phase 3 simulations provides several key insights. In addition to reaffirming the strong impact …
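
    The bias/variance/covariance decomposition named above can be written in a few lines: for population moments, MSE = (mean bias)^2 + (sigma_m - sigma_o)^2 + 2 sigma_m sigma_o (1 - r) holds exactly. The synthetic model/observation series below are placeholders for real AQMEII fields.

    ```python
    # Sketch: decomposing mean squared error into bias, variance, covariance.
    import numpy as np

    def mse_decomposition(model, obs):
        m, o = np.asarray(model, float), np.asarray(obs, float)
        bias2 = (m.mean() - o.mean()) ** 2
        variance = (m.std() - o.std()) ** 2
        covariance = 2.0 * m.std() * o.std() * (1.0 - np.corrcoef(m, o)[0, 1])
        return bias2, variance, covariance

    obs = np.sin(np.linspace(0, 20, 500)) + np.random.normal(scale=0.2, size=500)
    mod = 0.9 * obs + 0.3 + np.random.normal(scale=0.3, size=500)
    b2, v, c = mse_decomposition(mod, obs)
    print(f'MSE={b2+v+c:.3f} = bias^2 {b2:.3f} + variance {v:.3f} + covariance {c:.3f}')
    ```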

  13. Community-based native seed production for restoration in Brazil - the role of science and policy.

    PubMed

    Schmidt, I B; de Urzedo, D I; Piña-Rodrigues, F C M; Vieira, D L M; de Rezende, G M; Sampaio, A B; Junqueira, R G P

    2018-05-20

    Large-scale restoration programmes in the tropics require large volumes of high-quality, genetically diverse, and locally adapted seeds from a large number of species. However, scarcity of native seeds is a critical restriction to achieving restoration targets. In this paper, we analyse three successful community-based networks that supply native seeds and seedlings for restoration projects in the Brazilian Amazon and Cerrado. In addition, we propose directions for promoting local participation and addressing legal, technical, and commercialisation issues in order to up-scale the market for native seeds for restoration with high quality and social justice. We argue that effective community-based restoration arrangements should follow some principles: (i) seed production must be based on real market demand; (ii) non-governmental and governmental organisations have a key role in supporting local organisation, legal requirements, and selling processes; (iii) local ecological knowledge and labour should be valued, enabling local communities to promote large-scale seed production; (iv) applied research can help develop appropriate techniques and solve technical issues. The case studies from Brazil and the principles presented here can be useful for up-scaling restoration ecology efforts in many other parts of the world, especially in tropical countries where improving rural community income is a strategy for biodiversity conservation and restoration. © 2018 German Society for Plant Sciences and The Royal Botanical Society of the Netherlands.

  14. Reconstruction of the 3-D Shape and Crystal Preferred Orientation of Olivine: A Combined X-ray µ-CT and EBSD-SEM approach

    NASA Astrophysics Data System (ADS)

    Kahl, Wolf-Achim; Hidas, Károly; Dilissen, Nicole; Garrido, Carlos J.; López-Sánchez Vizcaíno, Vicente; Jesús Román-Alpiste, Manuel

    2017-04-01

    The complete reconstruction of the microstructure of rocks requires, among others, a full description of the shape preferred orientation (SPO) and crystal preferred orientation (CPO) of the constituent mineral phases. New advances in instrumental analysis, particularly electron backscatter diffraction (EBSD) coupled to a focused ion beam-scanning electron microscope (FIB-SEM), allow a complete characterization of SPO and CPO in rocks at the micron scale [1-2]. Unfortunately, the large grain size of many crystalline rocks, such as peridotite, prevents a representative characterization of the CPO and SPO of their constituent minerals by this technique. Here, we present a new approach combining X-ray micro-computed tomography (µ-CT) and EBSD to reconstruct the geographically oriented 3-D SPO and CPO of cm- to mm-sized olivine crystals in two contrasting fabric types of chlorite harzburgites (Almírez ultramafic massif, SE Spain). The semi-destructive sample treatment involves drilling geographically oriented micro-cores in the field and preparing oriented thin sections from µ-CT-scanned cores. This allows the link to be established among geological structures, macrostructure, fabric, and 3-D SPO-CPO at the thin-section scale. Based on EBSD analyses, different CPO groups of olivine crystals can be discriminated in the thin sections and allocated to 3-D SPOs in the µ-CT volume data. This approach overcomes the limitations of both methods (i.e., no crystal orientation data in µ-CT and no spatial information in EBSD); hence the 3-D orientation of the crystallographic axes of olivines from different orientation groups can be correlated with the crystal shapes of the olivine grains. This combined µ-CT and EBSD technique enables the correlation of both SPO and CPO with representative grain sizes, and is capable of characterizing the 3-D microstructure of olivine-bearing rocks at the hand-specimen scale. REFERENCES 1. Zaefferer, S., Wright, S.I., Raabe, D., 2008. Three-dimensional orientation microscopy in a focused ion beam-scanning electron microscope: A new dimension of microstructure characterization. Metallurgical and Materials Transactions A 39, 374-389. 2. Burnett, T.L., Kelley, R., Winiarski, B., Contreras, L., Daly, M., Gholinia, A., Burke, M.G., Withers, P.J., 2016. Large volume serial section tomography by Xe Plasma FIB dual beam microscopy. Ultramicroscopy 161, 119-129.

  15. Theoretical Studies of Microstrip Antennas: Volume I, General Design Techniques and Analyses of Single and Coupled Elements

    DOT National Transportation Integrated Search

    1979-09-01

    Volume 1 of Theoretical Studies of Microstrip Antennas deals with general techniques and analyses of single and coupled radiating elements. Specifically, we review and then employ an important equivalence theorem that allows a pair of vector potentia...

  16. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can be applied to extract unknown source signals using only the received signals. This is accomplished by exploiting the statistical independence of the signal mixtures, and it has been successfully applied in myriad fields such as medical science and image processing. Nevertheless, inherent problems have been reported when using this technique: instability and invalid ordering of the separated signals, particularly when a conventional ICA technique is used for vibratory source signal identification in complex structures. In this study, a simple iterative extension of the conventional ICA is proposed to mitigate these problems. The proposed method extracts more stable source signals with a valid order by iteratively reordering the extracted mixing matrix until the reconstructed source signals converge, guided by the magnitudes of the correlation coefficients between the intermediate separated signals and signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses were carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to real problems in complex structures, an experiment was carried out on a scaled submarine mockup. The results show that the proposed method resolves the inherent problems of a conventional ICA technique.
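
    A minimal sketch of the reordering idea using scikit-learn's FastICA (a stand-in for the conventional ICA, not the authors' code): separate a synthetic two-source mixture, then fix the arbitrary output order by correlating each estimated component with reference signals measured near the sources.

    ```python
    # Sketch: ICA separation followed by correlation-based reordering.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    t = np.linspace(0, 8, 2000)
    sources = np.c_[np.sin(2 * np.pi * 5 * t),
                    np.sign(np.sin(2 * np.pi * 3 * t))]       # two known sources
    mixing = np.array([[1.0, 0.6], [0.4, 1.0]])
    mixed = sources @ mixing.T + 0.05 * rng.normal(size=(t.size, 2))

    est = FastICA(n_components=2, random_state=0).fit_transform(mixed)

    # Assign each reference source the estimated component it correlates with most
    order = [int(np.argmax([abs(np.corrcoef(est[:, j], sources[:, i])[0, 1])
                            for j in range(2)])) for i in range(2)]
    est = est[:, order]
    print('assigned order:', order)
    ```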

  17. Results of surgical treatment of acromioclavicular dislocations type III using modified Weaver Dunn technique.

    PubMed

    López-Alameda, S; Fernández-Santás, T; García-Villanueva, A; Varillas-Delgado, D; Garcia de Lucas, F

    To evaluate the clinical and radiological results of the surgical treatment of type III acromioclavicular dislocations using the Weaver-Dunn technique in the delayed phase. A non-randomised controlled retrospective observational study of 38 patients operated on between January 2006 and December 2014. We excluded 10 patients due to death or loss to follow-up. We collected demographic data, time to intervention, and complications, analysing the Visual Analog Scale, DASH, and Oxford Shoulder Score, together with the updated radiological result. The mean age of the patients was 35; the right (dominant) shoulder was affected in 71% of cases, predominantly as a result of non-level falls. 70% of the cases had a subjective perception of both recovery of strength and disappearance of the deformity. Full radiological reduction was observed in 95% of the cases, with the appearance of mild osteoarthritis in 44% and moderate osteoarthritis in 5.6%. The DASH results presented values of 12.94 (±16.85) and the OSS of 42.74 (±7.79), indicating satisfactory articular function. The data from this study show results similar to previous studies regarding subjective recovery of strength, maintenance of anatomical reduction, functional test results, and efficacy of the Weaver-Dunn technique. The modified Weaver-Dunn technique provided good clinical and radiological results, with patients returning to their usual activities and maintenance over time. Copyright © 2017 SECOT. Published by Elsevier España, S.L.U. All rights reserved.

  18. Scaling range of power laws that originate from fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Grech, Dariusz; Mazur, Zygmunt

    2013-05-01

    We extend our previous study of scaling range properties, performed for detrended fluctuation analysis (DFA) [Physica A 392, 2384 (2013); doi:10.1016/j.physa.2013.01.049], to other techniques of fluctuation analysis (FA). A new technique, called modified detrended moving average analysis (MDMA), is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that, contrary to DFA, the DMA and MDMA techniques exhibit a power-law dependence of the scaling range on the length of the searched signal and on the accuracy R² of the fit to the scaling law imposed by the DMA or MDMA methods. This power-law dependence is satisfied for both uncorrelated and autocorrelated data. We also find a simple generalization of this power-law relation for series with different levels of autocorrelation, measured in terms of the Hurst exponent. Basic relations between the scaling ranges for the different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finance, or physiology, where a huge number of short time series has to be examined at once and wherever a preliminary check of the scaling range regime for each of the series separately is neither effective nor possible.
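
    For context, here is a compact implementation of the baseline DFA procedure that the paper extends: integrate the series, detrend it linearly in windows of size s, and read the scaling exponent from the log-log slope of the fluctuation function F(s) ~ s^alpha. The scales and the white-noise test signal are illustrative.

    ```python
    # Minimal first-order DFA in numpy.
    import numpy as np

    def dfa(x, scales):
        y = np.cumsum(x - np.mean(x))               # integrated profile
        F = []
        for s in scales:
            n_win = len(y) // s
            segs = y[:n_win * s].reshape(n_win, s)
            t = np.arange(s)
            rms = []
            for seg in segs:                        # linear detrend per window
                coef = np.polyfit(t, seg, 1)
                rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(rms)))
        alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
        return alpha

    x = np.random.randn(10000)                      # white noise: expect alpha ~ 0.5
    scales = np.unique(np.logspace(1, 3, 20).astype(int))
    print('alpha =', round(dfa(x, scales), 2))
    ```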

  19. Development of an instrument to understand the child protective services decision-making process, with a focus on placement decisions.

    PubMed

    Dettlaff, Alan J; Christopher Graham, J; Holzman, Jesse; Baumann, Donald J; Fluke, John D

    2015-11-01

    When children come to the attention of the child welfare system, they become involved in a decision-making process in which decisions are made that have a significant effect on their future and well-being. The decision to remove children from their families is particularly complex; yet surprisingly little is understood about this decision-making process. This paper presents the results of a study to develop an instrument to explore, at the caseworker level, the context of the removal decision, with the objective of understanding the influence of the individual and organizational factors on this decision, drawing from the Decision Making Ecology as the underlying rationale for obtaining the measures. The instrument was based on the development of decision-making scales used in prior decision-making studies and administered to child protection caseworkers in several states. Analyses included reliability analyses, principal components analyses, and inter-correlations among the resulting scales. For one scale regarding removal decisions, a principal components analysis resulted in the extraction of two components, jointly identified as caseworkers' decision-making orientation, described as (1) an internal reference to decision-making and (2) an external reference to decision-making. Reliability analyses demonstrated acceptable to high internal consistency for 9 of the 11 scales. Full details of the reliability analyses, principal components analyses, and inter-correlations among the seven scales are discussed, along with implications for practice and the utility of this instrument to support the understanding of decision-making in child welfare. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Probing Mantle Heterogeneity Across Spatial Scales

    NASA Astrophysics Data System (ADS)

    Hariharan, A.; Moulik, P.; Lekic, V.

    2017-12-01

    Inferences of mantle heterogeneity in terms of temperature, composition, grain size, melt, and crystal structure may vary across local, regional, and global scales. Probing these scale-dependent effects requires quantitative comparisons and reconciliation of tomographic models that vary in their regional scope, parameterization, regularization, and observational constraints. While a range of techniques like radial correlation functions and spherical harmonic analyses have revealed global features, such as the dominance of long-wavelength variations in mantle heterogeneity, they have limited applicability to specific regions of interest like subduction zones and continental cratons. Moreover, issues like discrepant 1-D reference Earth models and related baseline corrections have impeded the reconciliation of heterogeneity between various regional and global models. We implement a new wavelet-based approach that allows structure to be filtered simultaneously in both the spectral and spatial domains, allowing us to characterize heterogeneity on a range of scales and in different geographical regions. Our algorithm extends a recent method that expanded lateral variations into the wavelet domain constructed on a cubed sphere. The isolation of reference velocities in the wavelet scaling function facilitates comparisons between models constructed with arbitrary 1-D reference Earth models. The wavelet transformation allows us to quantify the scale-dependent consistency between tomographic models in a region of interest and to investigate the fits to data afforded by heterogeneity at various dominant wavelengths. We find substantial and spatially varying differences in the spectrum of heterogeneity between two representative global Vp models constructed using different data and methodologies. Applying the orthonormality of the wavelet expansion, we isolate detailed variations in velocity from the models and evaluate the additional fits to data afforded by adding such complexities to long-wavelength variations. Our method provides a way to probe and evaluate localized features in a multi-scale description of mantle heterogeneity.
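
    A toy illustration of the simultaneous spatial/spectral filtering a wavelet basis provides, using PyWavelets on a 1-D profile (the study itself works on a cubed sphere, so this is only an analogy, not the authors' algorithm): zero all but one detail level and reconstruct to isolate heterogeneity at that scale.

    ```python
    # Sketch: isolating one wavelet scale of a 1-D "heterogeneity" profile.
    import numpy as np
    import pywt

    x = np.linspace(0, 1, 1024)
    profile = np.sin(2 * np.pi * 4 * x) + 0.3 * np.sin(2 * np.pi * 64 * x)  # long + short wavelength

    coeffs = pywt.wavedec(profile, 'db4', level=6)
    for keep in range(1, len(coeffs)):            # skip the approximation (index 0)
        filtered = [c if i == keep else np.zeros_like(c) for i, c in enumerate(coeffs)]
        band = pywt.waverec(filtered, 'db4')      # reconstruction from one detail level
        print(f'level {keep}: rms of isolated band = {np.sqrt(np.mean(band**2)):.3f}')
    ```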

  1. The Use of Quality Control and Data Mining Techniques for Monitoring Scaled Scores: An Overview. Research Report. ETS RR-12-20

    ERIC Educational Resources Information Center

    von Davier, Alina A.

    2012-01-01

    Maintaining comparability of test scores is a major challenge faced by testing programs that have almost continuous administrations. Among the potential problems are scale drift and rapid accumulation of errors. Many standard quality control techniques for testing programs, which can effectively detect and address scale drift for small numbers of…

  2. Discovery of Newer Therapeutic Leads for Prostate Cancer

    DTIC Science & Technology

    2009-06-01

    …promising plant extracts and then prepare large-scale quantities of the plant extracts using supercritical fluid extraction techniques… Large-scale plant collections were conducted for 14 of the top 20… material for bioassay-guided fractionation of the biologically active constituents using modern chromatography techniques. The chemical structures of…

  3. NASA/FAA general aviation crash dynamics program

    NASA Technical Reports Server (NTRS)

    Thomson, R. G.; Hayduk, R. J.; Carden, H. D.

    1981-01-01

    The program involves controlled full-scale crash testing, nonlinear structural analyses to predict large-deflection elastoplastic response, and load-attenuating concepts for use in improved seat and subfloor structure. Both analytical and experimental methods are used to develop expertise in these areas. Analyses include simplified procedures for estimating energy-dissipating capabilities and comprehensive computerized procedures for predicting airframe response. These analyses are developed to provide designers with methods for predicting accelerations, loads, and displacements in collapsing structure. Tests on typical full-scale aircraft and on full- and subscale structural components are performed to verify the analyses and to demonstrate load-attenuating concepts. A special apparatus was built to test emergency locator transmitters attached to representative aircraft structure. The apparatus is shown to provide a good simulation of the longitudinal crash pulse observed in full-scale aircraft crash tests.

  4. Child Behavior Checklist—Mania Scale (CBCL-MS): Development and Evaluation of a Population-Based Screening Scale for Bipolar Disorder

    PubMed Central

    Papachristou, Efstathios; Ormel, Johan; Oldehinkel, Albertine J.; Kyriakopoulos, Marinos; Reinares, María; Reichenberg, Abraham; Frangou, Sophia

    2013-01-01

    Context Early identification of Bipolar Disorder (BD) remains poor despite the high levels of disability associated with the disorder. Objective We developed and evaluated a new DSM-oriented scale for the identification of young people at risk for BD based on the Child Behavior Checklist (CBCL), and compared its performance against the CBCL-Pediatric Bipolar Disorder (CBCL-PBD) and the CBCL-Externalizing Scale, the two most widely used scales. Methods The new scale, the CBCL-Mania Scale (CBCL-MS), comprises 19 CBCL items that directly correspond to operational criteria for mania. We tested the reliability, longitudinal stability, and diagnostic accuracy of the CBCL-MS on data from the TRacking Adolescents' Individual Lives Survey (TRAILS), a prospective epidemiological cohort study of 2230 Dutch youths assessed with the CBCL at ages 11, 13, and 16. At age 19, lifetime psychiatric diagnoses were ascertained with the Composite International Diagnostic Interview. We compared the predictive ability of the CBCL-MS against the CBCL-Externalizing Scale and the CBCL-PBD in the TRAILS sample. Results The CBCL-MS had high internal consistency and satisfactory accuracy (area under the curve = 0.64) in this general population sample. Principal component analyses, followed by parallel analyses and confirmatory factor analyses, identified four factors corresponding to distractibility/disinhibition, psychosis, increased libido, and disrupted sleep. This factor structure remained stable across all assessment ages. Logistic regression analyses showed that the CBCL-MS had significantly higher predictive ability than both of the other scales. Conclusions Our data demonstrate that the CBCL-MS is a promising screening instrument for BD. The factor structure of the CBCL-MS showed remarkable temporal stability between late childhood and early adulthood, suggesting that it maps onto meaningful developmental dimensions of liability to BD. PMID:23967059

  5. Communication and cooperation in underwater acoustic networks

    NASA Astrophysics Data System (ADS)

    Yerramalli, Srinivas

    In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques, and then study sensor cooperation, which improves the throughput and reliability of an underwater network. Robust point-to-point communication in underwater networks has become increasingly critical in several military and civilian applications. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, we consider a simplified underwater channel model in which the time-scale distortion on each path is assumed to be the same (a single-scale channel model, in contrast to the more general multi-scale model). A novel technique called Partial FFT Demodulation, which exploits the nature of OFDM signaling and the time-scale distortion, is derived. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi-scale model for the underwater channel and assume that single-scale processing is performed at the receiver. We then derive optimized front-end pre-processing techniques to reduce the interference caused during single-scale processing of signals transmitted on a multi-scale channel. We then propose an improved channel estimation technique using dictionary optimization methods for compressive sensing and show that significant performance gains can be obtained using this technique. In the next part of this thesis, we consider the problem of cooperation among rational sensor nodes whose objective is to improve their individual data rates. We first consider the problem of transmitter cooperation in a multiple access channel and investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, showing that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Towards studying the problem of receiver cooperation for a broadcast channel, we propose a game-theoretic model for the broadcast channel, derive a game-theoretic duality between the multiple access and broadcast channels, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.

  6. Variance Estimation Using Replication Methods in Structural Equation Modeling with Complex Sample Data

    ERIC Educational Resources Information Center

    Stapleton, Laura M.

    2008-01-01

    This article discusses replication sampling variance estimation techniques that are often applied in analyses using data from complex sampling designs: jackknife repeated replication, balanced repeated replication, and bootstrapping. These techniques are used with traditional analyses such as regression, but are currently not used with structural…
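
    A bare-bones numeric illustration of the delete-one-group jackknife behind jackknife repeated replication, assuming equal-sized primary sampling units and a simple mean; production JRR for complex designs would apply replicate weights rather than dropping observations outright.

    ```python
    # Sketch: delete-one-group jackknife variance for a sample mean.
    import numpy as np

    rng = np.random.default_rng(2)
    groups = [rng.normal(loc=10 + g, scale=2, size=50) for g in range(8)]  # 8 PSUs

    full_mean = np.mean(np.concatenate(groups))
    G = len(groups)
    # Replicate estimates: recompute the mean with each group removed
    reps = [np.mean(np.concatenate([g for j, g in enumerate(groups) if j != i]))
            for i in range(G)]
    var_jrr = (G - 1) / G * np.sum((np.array(reps) - full_mean) ** 2)
    print(f'estimate={full_mean:.3f}  jackknife SE={np.sqrt(var_jrr):.3f}')
    ```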

  7. Adapting and Validating a Scale to Measure Sexual Stigma among Lesbian, Bisexual and Queer Women

    PubMed Central

    Logie, Carmen H.; Earnshaw, Valerie

    2015-01-01

    Lesbian, bisexual and queer (LBQ) women experience pervasive sexual stigma that harms wellbeing. Stigma is a multi-dimensional construct and includes perceived stigma, awareness of negative attitudes towards one’s group, and enacted stigma, overt experiences of discrimination. Despite its complexity, sexual stigma research has generally explored singular forms of sexual stigma among LBQ women. The study objective was to develop a scale to assess perceived and enacted sexual stigma among LBQ women. We adapted a sexual stigma scale for use with LBQ women. The validation process involved 3 phases. First, we held a focus group where we engaged a purposively selected group of key informants in cognitive interviewing techniques to modify the survey items to enhance relevance to LBQ women. Second, we implemented an internet-based, cross-sectional survey with LBQ women (n=466) in Toronto, Canada. Third, we administered an internet-based survey at baseline and 6-week follow-up with LBQ women in Toronto (n=24) and Calgary (n=20). We conducted an exploratory factor analysis using principal components analysis and descriptive statistics to explore health and demographic correlates of the sexual stigma scale. Analyses yielded one scale with two factors: perceived and enacted sexual stigma. The total scale and subscales demonstrated adequate internal reliability (total scale alpha coefficient: 0.78; perceived sub-scale: 0.70; enacted sub-scale: 0.72), test-retest reliability, and construct validity. Perceived and enacted sexual stigma were associated with higher rates of depressive symptoms and lower self-esteem, social support, and self-rated health scores. Results suggest this sexual stigma scale adapted for LBQ women has good psychometric properties and addresses enacted and perceived stigma dimensions. The overwhelming majority of participants reported experiences of perceived sexual stigma. This underscores the importance of moving beyond a singular focus on discrimination to explore perceptions of social judgment, negative attitudes and social norms. PMID:25679391

  8. Multi-scale modeling in cell biology

    PubMed Central

    Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick

    2009-01-01

    Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808

  9. Advances in the Quantitative Characterization of the Shape of Ash-Sized Pyroclast Populations: Fractal Analyses Coupled to Micro- and Nano-Computed Tomography Techniques

    NASA Astrophysics Data System (ADS)

    Rausch, J.; Vonlanthen, P.; Grobety, B. H.

    2014-12-01

    The quantification of shape parameters in pyroclasts is fundamental to inferring the dominant type of magma fragmentation (magmatic vs. phreatomagmatic), as well as the behavior of volcanic plumes and clouds in the atmosphere. In a case study aiming at reconstructing the fragmentation mechanisms triggering maar eruptions in two geologically and compositionally distinct volcanic fields (West and East Eifel, Germany), the shapes of a large number of ash particle contours obtained from SEM images were analyzed by a dilation-based fractal method. Volcanic particle contours are pseudo-fractals showing mostly two distinct slopes in Richardson plots, related to the fractal dimensions D1 (small-scale "textural" dimension) and D2 (large-scale "morphological" dimension). The validity of the data obtained from 2-D sections was tested by analysing SEM micro-CT slices of one particle cut in different orientations and positions. Results for West Eifel maar particles yield large D1 values (> 1.023), resembling typical values of magmatic particles, which are characterized by a complex shape, especially at small scales. In contrast, the D1 values of ash particles from one East Eifel maar deposit are much smaller, coinciding with the fractal dimensions obtained from phreatomagmatic end-member particles. These quantitative morphological analyses suggest that the studied maar eruptions were triggered by two different fragmentation processes: phreatomagmatic in the East Eifel and magmatic in the West Eifel. The application of fractal analysis to quantitatively characterize the shape of pyroclasts, and the linking of fractal dimensions to specific fragmentation processes, has turned out to be a very promising tool for studying the fragmentation history of any volcanic eruption. The next step is to extend the morphological analysis of volcanic particles to three dimensions. SEM micro-CT, already applied in this study, offers the required resolution, but is not suitable for the analysis of a large number of particles. Newly released nano-CT scanners, however, allow the simultaneous analysis of a statistically relevant number of particles (in the hundreds range). Preliminary results of a first trial will be presented.
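
    The study uses a dilation-based (Minkowski) estimator read off a Richardson plot; the sketch below uses the closely related box-counting estimator, which likewise extracts a dimension from a log-log slope, applied to a synthetic roughened contour rather than real particle outlines.

    ```python
    # Sketch: box-counting fractal dimension of a 2-D particle outline.
    import numpy as np

    def box_count_dimension(points, sizes):
        counts = []
        for s in sizes:
            # Count the distinct grid boxes of side s touched by the contour
            boxes = {tuple(np.floor(p / s).astype(int)) for p in points}
            counts.append(len(boxes))
        return np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)[0]

    # Synthetic jagged closed contour: unit circle plus high-frequency roughness
    theta = np.linspace(0, 2 * np.pi, 20000)
    r = 1.0 + 0.08 * np.sin(50 * theta) + 0.03 * np.sin(170 * theta)
    contour = np.c_[r * np.cos(theta), r * np.sin(theta)]

    sizes = np.logspace(-2.2, -0.5, 12)
    print('estimated D =', round(box_count_dimension(contour, sizes), 2))
    ```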

  10. (Multi)fractality of Earthquakes by use of Wavelet Analysis

    NASA Astrophysics Data System (ADS)

    Enescu, B.; Ito, K.; Struzik, Z. R.

    2002-12-01

    The fractal character of earthquake occurrence, in time, space, or energy, has by now been established beyond doubt and is in agreement with modern models of seismicity. Moreover, the cascade-like generation process of earthquakes, with one "main" shock followed by many aftershocks that have their own aftershocks, may well be described through multifractal analysis, which is well suited for dealing with such multiplicative processes. The (multi)fractal character of seismicity has so far been analysed using traditional techniques, like the box-counting and correlation function algorithms. This work introduces a new approach for characterising the multifractal patterns of seismicity. The application of wavelet analysis, in particular the wavelet transform modulus maxima, to multifractal analysis was pioneered by Arneodo et al. (1991, 1995) and applied successfully in diverse fields, such as the study of turbulence, DNA sequences, and heart rate dynamics. The wavelets act like a microscope, revealing details about the analysed data at different times and scales. We introduce and perform such an analysis on the occurrence times of earthquakes and show its advantages. In particular, we analyse shallow seismicity, characterised by a high aftershock "productivity", as well as intermediate and deep seismic activity, known for its scarcity of aftershocks. We also examine declustered (aftershocks removed) versions of seismic catalogues. Our preliminary results show some degree of multifractality for the undeclustered, shallow seismicity. On the other hand, at large scales, we detect a monofractal scaling behaviour, clearly evidenced for the declustered, shallow seismic activity. Moreover, some of the declustered sequences show long-range dependent (LRD) behaviour, characterised by a Hurst exponent H > 0.5, in contrast with the memory-less Poissonian model. We demonstrate that the LRD is a genuine characteristic and is not an effect of the time series' probability distribution function. One of the most attractive features of wavelet analysis is its ability to determine a local Hurst exponent. We show that this feature, together with the possibility of extending the analysis to spatial patterns, may constitute a valuable approach to searching for anomalous (precursory?) patterns of seismic activity.

  11. Multi-scale dynamical behavior of spatially distributed systems: a deterministic point of view

    NASA Astrophysics Data System (ADS)

    Mangiarotti, S.; Le Jean, F.; Drapeau, L.; Huc, M.

    2015-12-01

    Physical and biophysical systems are spatially distributed systems. Their behavior can be observed or modelled spatially at various resolutions. In this work, a deterministic point of view is adopted to analyze multi-scale behavior, taking a set of ordinary differential equations (ODEs) as the elementary part of the system. To perform the analyses, scenes of study are generated based on ensembles of identical elementary ODE systems. Without any loss of generality, their dynamics are chosen to be chaotic in order to ensure sensitivity to initial conditions, a fundamental property of the atmosphere under unstable conditions [1]. The Rössler system [2] is used for this purpose, for both its topological and algebraic simplicity [3,4]. Two cases are thus considered: the chaotic oscillators composing the scene of study are taken either as independent, or in phase synchronization. Scale behaviors are analyzed considering the scene of study as aggregations (basically obtained by spatially averaging the signal) or as associations (obtained by concatenating the time series). The global modeling technique is used to perform the numerical analyses [5]. One important result of this work is that, under phase synchronization, a scene of aggregated dynamics can be approximated by the elementary system composing the scene, but with a modified parameterization [6]. This is shown based on numerical analyses. It is then demonstrated analytically and generalized to a larger class of ODE systems. Preliminary applications to cereal crops observed from satellite are also presented. [1] Lorenz, Deterministic nonperiodic flow. J. Atmos. Sci., 20, 130-141 (1963). [2] Rössler, An equation for continuous chaos, Phys. Lett. A, 57, 397-398 (1976). [3] Gouesbet & Letellier, Global vector-field reconstruction by using a multivariate polynomial L2 approximation on nets, Phys. Rev. E 49, 4955-4972 (1994). [4] Letellier, Roulin & Rössler, Inequivalent topologies of chaos in simple equations, Chaos, Solitons & Fractals, 28, 337-360 (2006). [5] Mangiarotti, Coudret, Drapeau & Jarlan, Polynomial search and global modeling, Phys. Rev. E 86(4), 046205 (2012). [6] Mangiarotti, Modélisation globale et caractérisation topologique de dynamiques environnementales. Habilitation à Diriger des Recherches, Univ. Toulouse 3 (2014).
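
    The elementary unit of these scenes, the Rössler system, is easy to reproduce; below it is integrated with SciPy using the classic chaotic parameterization (a = b = 0.2, c = 5.7, a standard textbook choice rather than necessarily the authors'), and two independently initialized copies are averaged to mimic a small aggregated scene.

    ```python
    # Sketch: integrating Rossler oscillators and aggregating their outputs.
    import numpy as np
    from scipy.integrate import solve_ivp

    def rossler(t, state, a=0.2, b=0.2, c=5.7):
        x, y, z = state
        return [-y - z, x + a * y, b + z * (x - c)]

    sol1 = solve_ivp(rossler, (0, 500), [1.0, 1.0, 0.0], dense_output=True, max_step=0.05)
    sol2 = solve_ivp(rossler, (0, 500), [0.5, -1.0, 0.1], dense_output=True, max_step=0.05)

    # Two-unit "scene": average the x-variables after discarding the transient
    t = np.linspace(100, 500, 4000)
    aggregate = 0.5 * (sol1.sol(t)[0] + sol2.sol(t)[0])
    print('aggregate x: mean %.3f, std %.3f' % (aggregate.mean(), aggregate.std()))
    ```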

  12. Measurement of perceived competence in Dutch children with mild intellectual disabilities.

    PubMed

    Elias, C; Vermeer, A; 't Hart, H

    2005-04-01

    Little research has been conducted on the perceived competence of children with mild intellectual disabilities (MID). One reason for this marked absence of research appears to be the lack of reliable and clearly valid measurement instruments for this particular group of children. The present study examined whether a pictorial scale originally designed to measure perceived competence in typically developing children could successfully be used with children with MID. The pictorial scale was administered to a group of 106 children with MID, and its construct validity, reliability and stability were investigated. The results of the exploratory and confirmatory factor analyses supported the proposed conceptual framework. The construct validity was also supported by the pattern of intercorrelations between the subscales. The scale had adequate internal consistency, and the stability analyses showed sufficient stability across a 4-month period. The findings show that the psychometric properties of the pictorial scale justify its use with children with MID.
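
    Internal consistency of a subscale is typically summarised with Cronbach's alpha; the sketch below shows the computation on hypothetical item scores (the actual instrument's items and data are not reproduced).

    ```python
    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        items = np.asarray(items, float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical subscale of four items scored 1-4, ten respondents.
    rng = np.random.default_rng(2)
    base = rng.integers(1, 5, size=(10, 1))           # a common "trait" component
    scores = np.clip(base + rng.integers(-1, 2, size=(10, 4)), 1, 4)
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```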

  13. A Review of Multidimensional, Multifluid Intermediate-scale Experiments: Flow Behavior, Saturation Imaging, and Tracer Detection and Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oostrom, Mart; Dane, J. H.; Wietsma, Thomas W.

    2007-08-01

    A review is presented of original multidimensional, intermediate-scale experiments involving non-aqueous phase liquid (NAPL) flow behavior, imaging, and detection/quantification with solute tracers. In a companion paper (Oostrom, M., J.H. Dane, and T.W. Wietsma. 2006. A review of multidimensional, multifluid intermediate-scale experiments: Nonaqueous phase dissolution and enhanced remediation. Vadose Zone Journal 5:570-598) experiments related to aqueous dissolution and enhanced remediation were discussed. The experiments investigating flow behavior include infiltration and redistribution experiments with both light and dense NAPLs in homogeneous and heterogeneous porous medium systems. The techniques used for NAPL saturation mapping for intermediate-scale experiments include photon-attenuation methods such as gamma and X-ray techniques, and photographic methods such as the light reflection, light transmission, and multispectral image analysis techniques. Solute tracer methods used for detection and quantification of NAPL in the subsurface are primarily limited to variations of techniques comparing the behavior of conservative and partitioning tracers. Besides a discussion of the experimental efforts, recommendations for future research at this laboratory scale are provided.

  14. Monitoring Corals and Submerged Aquatic Vegetation in Western Pacific Using Satellite Remote Sensing Integrated with Field Data

    NASA Astrophysics Data System (ADS)

    Roelfsema, C. M.; Phinn, S. R.; Lyons, M. B.; Kovacs, E.; Saunders, M. I.; Leon, J. X.

    2013-12-01

    Corals and Submerged Aquatic Vegetation (SAV) are typically found in highly dynamic environments where the magnitude and types of physical and biological processes controlling their distribution, diversity and function change dramatically. Recent advances in the types of satellite image data available globally and the length of their archives, coupled with new techniques for extracting environmental information from these data sets, have enabled significant advances in our ability to map and monitor coral and SAV environments. Object Based Image Analysis (OBIA) techniques are one of the most significant advances in information extraction for processing images to deliver environmental information at multiple spatial scales. This poster demonstrates OBIA applied to high spatial resolution satellite image data to map and monitor coral and SAV communities across a variety of environments in the Western Pacific that vary in their extent, biological composition, forcing physical factors and location. High spatial resolution satellite imagery (Quickbird, Ikonos and Worldview2) was acquired coincident with field surveys on each reef to collect georeferenced benthic photo transects over various areas in the Western Pacific. Baseline maps were created for Roviana Lagoon, Solomon Islands (600 km2), Bikini Atoll, Marshall Islands (800 km2) and Lizard Island, Australia (30 km2), and time series maps of geomorphic and benthic communities were produced for Heron Reef, Australia (24 km2) and the Eastern Banks area of Moreton Bay, Australia (200 km2). The satellite image data were corrected for radiometric and atmospheric distortions to at-surface reflectance. Georeferenced benthic photos were acquired by divers or Autonomous Underwater Vehicles, analysed for benthic cover composition, and used for calibration and validation purposes. Hierarchical mapping from reef/non-reef (1000's - 10000's m), reef type (100's - 1000's m), 'geomorphic zone' (10's - 100's m), dominant benthic cover composition (1 - 10's m), down to individual benthic cover type (0.5 - 5.0 m) was completed using object based segmentation and semi-automated labelling through membership rules. Accuracy assessment against the field data sets showed up to 90% accuracy for the larger-scale, less complex maps, versus 40% for the smaller-scale, more complex maps. The study showed that current data sets and object based analysis can reliably map at various scales and levels of complexity, covering a variety of extents and environments at various times; as a result, science and management can use these tools to assess and understand the ecological processes taking place in coral and SAV environments.
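
    As a schematic of the two OBIA steps named above, segmentation followed by rule-based labelling: the image, band choice and threshold below are hypothetical, and the study's actual software and membership rules are not reproduced.

    ```python
    import numpy as np
    from skimage.segmentation import slic
    from skimage.measure import regionprops

    # Hypothetical 3-band reflectance image (rows, cols, bands), values in [0, 1].
    rng = np.random.default_rng(3)
    image = rng.random((200, 200, 3))

    # Step 1 of OBIA: segment the image into objects rather than classifying pixels.
    segments = slic(image, n_segments=250, compactness=10, start_label=1)

    # Step 2: label each object with a simple membership rule (threshold hypothetical).
    mean_green = {r.label: image[segments == r.label, 1].mean()
                  for r in regionprops(segments)}
    labels = {seg: ("vegetated" if g > 0.5 else "other")
              for seg, g in mean_green.items()}
    ```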

  15. A Sub-filter Scale Noise Equation for Hybrid LES Simulations

    NASA Technical Reports Server (NTRS)

    Goldstein, Marvin E.

    2006-01-01

    Hybrid LES/subscale modeling approaches have an important advantage over current noise prediction methods in that they only involve modeling of the relatively universal subscale motion and not the configuration-dependent larger scale turbulence. Previous hybrid approaches use approximate statistical techniques or extrapolation methods to obtain the requisite information about the sub-filter scale motion. An alternative approach would be to adopt the modeling techniques used in current noise prediction methods and determine the unknown stresses from experimental data. The present paper derives an equation for predicting the subscale sound from information that can be obtained with currently available experimental procedures. The resulting prediction method would then be intermediate between current noise prediction codes and previously proposed hybrid techniques.
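
    To make the resolved/sub-filter split concrete, here is a generic one-dimensional illustration; the box filter and its width are our choices for demonstration and are not part of the paper's derivation.

    ```python
    import numpy as np

    # Decompose a 1D signal u into a resolved (filtered) part and a sub-filter
    # residual, the split that hybrid LES noise models operate on.
    rng = np.random.default_rng(4)
    x = np.linspace(0, 2 * np.pi, 1024, endpoint=False)
    u = np.sin(3 * x) + 0.2 * rng.standard_normal(x.size)   # large + fine scales

    width = 17                                  # filter width in grid points
    kernel = np.ones(width) / width             # box (top-hat) filter
    u_resolved = np.convolve(np.pad(u, width, mode="wrap"),
                             kernel, mode="same")[width:-width]
    u_subfilter = u - u_resolved                # sub-filter scale residual
    ```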

  16. An evaluation of the precision of fin ray, otolith, and scale age determinations for brook trout

    USGS Publications Warehouse

    Stolarski, J.T.; Hartman, K.J.

    2008-01-01

    The ages of brook trout Salvelinus fontinalis are typically estimated using scales despite a lack of research documenting the effectiveness of this technique. The use of scales is often preferred because it is nonlethal and is believed to require less effort than alternative methods. To evaluate the relative effectiveness of different age estimation methodologies for brook trout, we measured the precision and processing times of scale, sagittal otolith, and pectoral fin ray age estimation techniques. Three independent readers, age bias plots, coefficients of variation (CV = 100 x SD/mean), and percent agreement (PA) were used to measure within-reader, among-structure bias and within-structure, among-reader precision. Bias was generally minimal; however, the age estimates derived from scales tended to be lower than those derived from otoliths within older (age > 2) cohorts. Otolith, fin ray, and scale age estimates were within 1 year of each other for 95% of the comparisons. The measures of precision for scales (CV = 6.59; PA = 82.30) and otoliths (CV = 7.45; PA = 81.48) suggest higher agreement between these structures than with fin rays (CV = 11.30; PA = 65.84). The mean per-sample processing times were lower for scale (13.88 min) and otolith techniques (12.23 min) than for fin ray techniques (22.68 min). The comparable processing times of scales and otoliths contradict popular belief and are probably a result of the high proportion of regenerated scales within samples and the ability to infer age from whole (as opposed to sectioned) otoliths. This research suggests that while scales produce age estimates rivaling those of otoliths for younger (age < 3) cohorts, they may be biased within older cohorts and therefore should be used with caution. © Copyright by the American Fisheries Society 2008.
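
    The precision statistics reported above are straightforward to compute; a minimal sketch with a hypothetical age matrix (three readers, five fish):

    ```python
    import numpy as np

    def precision_stats(ages):
        """Mean per-fish CV (100*SD/mean) and all-pairs percent agreement
        for an (n_fish, n_readers) matrix of age estimates."""
        ages = np.asarray(ages, float)
        cv = 100 * ages.std(axis=1, ddof=1) / ages.mean(axis=1)
        n_readers = ages.shape[1]
        pairs = [(i, j) for i in range(n_readers) for j in range(i + 1, n_readers)]
        agree = np.mean([np.mean(ages[:, i] == ages[:, j]) for i, j in pairs])
        return cv.mean(), 100 * agree

    # Hypothetical ages assigned by three readers to five fish.
    ages = [[2, 2, 2], [3, 3, 4], [1, 1, 1], [4, 4, 4], [2, 3, 2]]
    mean_cv, pa = precision_stats(ages)
    print(f"mean CV = {mean_cv:.2f}, PA = {pa:.1f}%")
    ```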

  17. Nuclear Electromagnetic Pulse Review

    NASA Astrophysics Data System (ADS)

    Dinallo, Michael

    2011-04-01

    Electromagnetic Pulse (EMP) from nuclear detonations has been observed for well over half a century. Beginning in the mid-to-late 1950s, the physics and modeling of EMP have been researched, and this work will continue into the foreseeable future. The EMP environment propagates hundreds of miles from its origin and causes interference for all types of electronic instrumentation. This includes military, municipal and industry-based electronic infrastructures such as power generation and distribution, command and control systems, systems used in financial and emergency services, and electronic monitoring and communications networks, to mention some key infrastructure elements. Research into EMP has included originating physics, propagation and electromagnetic field coupling analyses, and measurement-sensor development. Several methods for calculating EMP-induced transient interference (voltage and current induction) will be briefly discussed and protection techniques reviewed. These methods can be mathematically simple or involve challenging boundary value solution techniques. A few illustrative calculations will demonstrate the concern for electronic system operability. Analyses such as the Wunsch-Bell model for electronic upset or damage, and the Singularity Expansion Method (SEM) put forth by Dr. Carl Baum, illustrate the concern for EMP effects. The SEM determines the voltages and currents induced by transient electromagnetic fields in terms of the natural modes of various types of electronic platforms (aerospace vehicles or land-based assets, fixed or mobile). Full-scale facility and laboratory simulation and response measurement approaches will be discussed. The talk will conclude with a discussion of some present research activities.

  18. Multi-scale Pore Imaging Techniques to Characterise Heterogeneity Effects on Flow in Carbonate Rock

    NASA Astrophysics Data System (ADS)

    Shah, S. M.

    2017-12-01

    Digital rock analysis and pore-scale studies have become an essential tool in the oil and gas industry to understand and predict the petrophysical and multiphase flow properties for the assessment and exploitation of hydrocarbon reserves. Carbonate reservoirs, accounting for the majority of the world's hydrocarbon reserves, are well known for their heterogeneity and multiscale pore characteristics. The pore sizes in carbonate rock can vary over orders of magnitude, and the geometry and topology of pores at different scales have a great impact on flow properties. A pore-scale study typically comprises two key procedures: 3D pore-scale imaging and numerical modelling. The fundamental problem in pore-scale imaging and modelling is how to represent and model the different range of scales encountered in porous media, from the pore scale up to macroscopic petrophysical and multiphase flow properties. However, due to the trade-off between image size and resolution, the desired detail is rarely captured at all relevant length scales using any single imaging technique. Similarly, direct simulations of transport properties in heterogeneous rocks with broad pore size distributions are prohibitively expensive computationally. In this study, we present the advances and review the practical limitations of different imaging techniques, varying from core scale (1 mm) using Medical Computed Tomography (CT) to pore scale (10 nm - 50 µm) using Micro-CT, Confocal Laser Scanning Microscopy (CLSM) and Focussed Ion Beam (FIB), to characterise the complex pore structure of Ketton carbonate rock. The effect of pore structure and connectivity on flow properties is investigated using the obtained pore-scale images of Ketton carbonate with Pore Network and Lattice-Boltzmann simulation methods, in comparison with experimental data. We also shed new light on the existence and size of the Representative Elementary Volume (REV) capturing the different scales of heterogeneity from the pore-scale imaging.
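
    As a toy illustration of the REV idea discussed above (the image is random synthetic data, not Ketton carbonate), porosity can be tracked as a function of sampling window size until it stabilises:

    ```python
    import numpy as np

    # REV sketch: porosity of a binary pore image as a function of window size.
    # Where the curve flattens, the window approaches a representative volume.
    rng = np.random.default_rng(5)
    pores = rng.random((512, 512)) < 0.25      # hypothetical segmented image (True = pore)

    centre = np.array(pores.shape) // 2
    for half in (8, 16, 32, 64, 128, 256):
        win = pores[centre[0] - half:centre[0] + half,
                    centre[1] - half:centre[1] + half]
        print(f"window {2 * half:4d}px  porosity = {win.mean():.3f}")
    ```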

  19. Predictive distributions for between-study heterogeneity and simple methods for their application in Bayesian meta-analysis

    PubMed Central

    Turner, Rebecca M; Jackson, Dan; Wei, Yinghui; Thompson, Simon G; Higgins, Julian P T

    2015-01-01

    Numerous meta-analyses in healthcare research combine results from only a small number of studies, for which the variance representing between-study heterogeneity is estimated imprecisely. A Bayesian approach to estimation allows external evidence on the expected magnitude of heterogeneity to be incorporated. The aim of this paper is to provide tools that improve the accessibility of Bayesian meta-analysis. We present two methods for implementing Bayesian meta-analysis, using numerical integration and importance sampling techniques. Based on 14 886 binary outcome meta-analyses in the Cochrane Database of Systematic Reviews, we derive a novel set of predictive distributions for the degree of heterogeneity expected in 80 settings depending on the outcomes assessed and comparisons made. These can be used as prior distributions for heterogeneity in future meta-analyses. The two methods are implemented in R, for which code is provided. Both methods produce equivalent results to standard but more complex Markov chain Monte Carlo approaches. The priors are derived as log-normal distributions for the between-study variance, applicable to meta-analyses of binary outcomes on the log odds-ratio scale. The methods are applied to two example meta-analyses, incorporating the relevant predictive distributions as prior distributions for between-study heterogeneity. We have provided resources to facilitate Bayesian meta-analysis, in a form accessible to applied researchers, which allow relevant prior information on the degree of heterogeneity to be incorporated. © 2014 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:25475839
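
    As a hedged sketch of the importance-sampling route the authors describe (this is not their published R code; the prior parameters and the three study results below are placeholders), the between-study variance can be drawn from a log-normal prior and each draw weighted by the closed-form marginal likelihood obtained with a flat prior on the pooled effect.

    ```python
    import numpy as np

    # y: study log odds-ratios; v: within-study variances (placeholder data).
    y = np.array([-0.35, -0.10, -0.62])
    v = np.array([0.08, 0.05, 0.12])

    rng = np.random.default_rng(6)
    n_draws = 100_000
    tau2 = rng.lognormal(mean=-2.0, sigma=1.5, size=n_draws)   # prior draws (placeholder)

    # Marginal likelihood of the data given tau^2, integrating out mu analytically.
    w = 1.0 / (v[None, :] + tau2[:, None])                     # (n_draws, n_studies)
    W = w.sum(axis=1)
    mu_hat = (w * y).sum(axis=1) / W                           # precision-weighted mean
    Q = (w * y**2).sum(axis=1) - W * mu_hat**2                 # residual weighted SS
    log_weight = 0.5 * np.log(w).sum(axis=1) - 0.5 * np.log(W) - 0.5 * Q
    weight = np.exp(log_weight - log_weight.max())
    weight /= weight.sum()

    # Posterior summaries: weighted tau^2 draws, and mu drawn given each tau^2.
    post_tau2 = np.sum(weight * tau2)
    mu_draws = rng.normal(mu_hat, 1.0 / np.sqrt(W))
    post_mu = np.sum(weight * mu_draws)
    print(f"E[tau^2 | y] ~ {post_tau2:.3f},  E[mu | y] ~ {post_mu:.3f}")
    ```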

  20. The Matching Criterion Purification for Differential Item Functioning Analyses in a Large-Scale Assessment

    ERIC Educational Resources Information Center

    Lee, HyeSun; Geisinger, Kurt F.

    2016-01-01

    The current study investigated the impact of matching criterion purification on the accuracy of differential item functioning (DIF) detection in large-scale assessments. The three matching approaches for DIF analyses (block-level matching, pooled booklet matching, and equated pooled booklet matching) were employed with the Mantel-Haenszel…
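
    The abstract is truncated in the source record, but the Mantel-Haenszel statistic it refers to is standard; below is a minimal sketch (hypothetical strata and counts) of the MH common odds ratio used to flag uniform DIF, where strata are formed by the matching criterion (e.g. a purified total score).

    ```python
    import numpy as np

    def mantel_haenszel_or(tables):
        """Mantel-Haenszel common odds ratio across score strata.

        Each stratum is a 2x2 table [[a, b], [c, d]]:
        rows = reference/focal group, cols = item correct/incorrect.
        """
        tables = np.asarray(tables, float)
        a, b = tables[:, 0, 0], tables[:, 0, 1]
        c, d = tables[:, 1, 0], tables[:, 1, 1]
        n = tables.sum(axis=(1, 2))
        return (a * d / n).sum() / (b * c / n).sum()

    # Hypothetical strata formed by matching on total test score.
    strata = [
        [[30, 10], [25, 15]],
        [[40, 20], [35, 25]],
        [[20, 30], [15, 35]],
    ]
    print(f"MH common odds ratio = {mantel_haenszel_or(strata):.2f}")
    # A value near 1 suggests no uniform DIF for the studied item.
    ```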

  1. Bridging the Gap Between Large-scale Data Sets and Analyses: Semi-automated Methods to Facilitate Length Polymorphism Scoring and Data Analyses.

    EPA Science Inventory

    Amplified fragment length polymorphism (AFLP) markers can be developed more quickly and at a lower cost than microsatellite and single nucleotide polymorphism markers, which makes them ideal markers for large-scale studies of understudied taxa — such as species at risk. However,...

  2. Iterative categorization (IC): a systematic technique for analysing qualitative data

    PubMed Central

    2016-01-01

    The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155

  3. Formal and heuristic system decomposition methods in multidisciplinary synthesis. Ph.D. Thesis, 1991

    NASA Technical Reports Server (NTRS)

    Bloebaum, Christina L.

    1991-01-01

    The multidisciplinary interactions which exist in large scale engineering design problems provide a unique set of difficulties. These difficulties are associated primarily with unwieldy numbers of design variables and constraints, and with the interdependencies of the discipline analysis modules. Such obstacles require design techniques which account for the inherent disciplinary couplings in the analyses and optimizations. The objective of this work was to develop an efficient holistic design synthesis methodology that takes advantage of the synergistic nature of integrated design. A general decomposition approach for optimization of large engineering systems is presented. The method is particularly applicable for multidisciplinary design problems which are characterized by closely coupled interactions among discipline analyses. The advantage of subsystem modularity allows for implementation of specialized methods for analysis and optimization, computational efficiency, and the ability to incorporate human intervention and decision making in the form of an expert systems capability. The resulting approach is not a method applicable to only a specific situation, but rather, a methodology which can be used for a large class of engineering design problems in which the system is non-hierarchic in nature.

  4. Body Topography Parcellates Human Sensory and Motor Cortex.

    PubMed

    Kuehn, Esther; Dinse, Juliane; Jakobsen, Estrid; Long, Xiangyu; Schäfer, Andreas; Bazin, Pierre-Louis; Villringer, Arno; Sereno, Martin I; Margulies, Daniel S

    2017-07-01

    The cytoarchitectonic map as proposed by Brodmann currently dominates models of human sensorimotor cortical structure, function, and plasticity. According to this model, primary motor cortex, area 4, and primary somatosensory cortex, area 3b, are homogenous areas, with the major division lying between the two. Accumulating empirical and theoretical evidence, however, has begun to question the validity of the Brodmann map for various cortical areas. Here, we combined in vivo cortical myelin mapping with functional connectivity analyses and topographic mapping techniques to reassess the validity of the Brodmann map in human primary sensorimotor cortex. We provide empirical evidence that area 4 and area 3b are not homogenous, but are subdivided into distinct cortical fields, each representing a major body part (the hand and the face). Myelin reductions at the hand-face borders are cortical layer-specific, and coincide with intrinsic functional connectivity borders as defined using large-scale resting state analyses. Our data extend the Brodmann model in human sensorimotor cortex and suggest that body parts are an important organizing principle, similar to the distinction between sensory and motor processing. © The Author 2017. Published by Oxford University Press.

  5. Utilizing Skylab data in on-going resources management programs in the state of Ohio

    NASA Technical Reports Server (NTRS)

    Baldridge, P. E. (Principal Investigator); Goesling, P. H.; Martin, T. A.; Wukelic, G. E.; Stephan, J. G.; Smail, H. E.; Ebbert, T. F.

    1975-01-01

    The author has identified the following significant results. The use of Skylab imagery for total-area woodland surveys was found to be more accurate and cheaper than conventional surveys using aerial photo-plot techniques. Machine-aided (primarily density slicing) analyses of Skylab 190A and 190B color and infrared color photography demonstrated the feasibility of using such data to differentiate major timber classes, including pines, hardwoods, mixed, cut, and brushland, provided such analyses are made at scales of 1:24,000 and larger. Manual and machine-assisted image analysis indicated that the spectral and spatial capabilities of Skylab EREP photography are adequate to distinguish most parameters of current coal surface mining concern associated with: (1) active mining, (2) orphan lands, (3) reclaimed lands, and (4) active reclamation. Excellent results were achieved when comparing Skylab and aerial photographic interpretations of detailed surface mining features. Skylab photographs, when combined with other data bases (e.g., census, agricultural land productivity, and transportation networks), provide a comprehensive, meaningful, and integrated view of major elements involved in the urbanization/encroachment process.

  6. Imaging the Directed Transport of Single Engineered RNA Transcripts in Real-Time Using Ratiometric Bimolecular Beacons

    PubMed Central

    Zhang, Xuemei; Zajac, Allison L.; Huang, Lingyan; Behlke, Mark A.; Tsourkas, Andrew

    2014-01-01

    The relationship between RNA expression and cell function can often be difficult to decipher due to the presence of both temporal and sub-cellular processing of RNA. These intricacies of RNA regulation are often overlooked when only global measurements of RNA expression are acquired. This has led to the development of several tools that allow for the real-time imaging of individual engineered RNA transcripts in living cells. Here, we describe a new technique that utilizes an oligonucleotide-based probe, the ratiometric bimolecular beacon (RBMB), to image RNA transcripts that were engineered to contain 96 tandem repeats of the RBMB target sequence in the 3′-untranslated region. Binding of RBMBs to the target RNA resulted in discrete bright fluorescent spots, representing individual transcripts, that could be imaged in real time. Since RBMBs are synthetic probes, the use of photostable, bright, and red-shifted fluorophores led to a high signal-to-background ratio. RNA motion was readily characterized by both mean squared displacement and moment scaling spectrum analyses. These analyses revealed clear examples of directed, Brownian, and subdiffusive movements. PMID:24454933
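
    A minimal sketch of the mean squared displacement analysis mentioned above, on a synthetic trajectory; the exponent-based classification (superlinear = directed, linear = Brownian, sublinear = subdiffusive) is the usual heuristic, not the authors' full pipeline.

    ```python
    import numpy as np

    def mean_squared_displacement(xy, max_lag=None):
        """Time-averaged MSD of a single 2D trajectory (n_frames, 2)."""
        xy = np.asarray(xy, float)
        n = len(xy)
        max_lag = max_lag or n // 4
        lags = np.arange(1, max_lag + 1)
        msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag])**2, axis=1))
                        for lag in lags])
        return lags, msd

    # Synthetic Brownian trajectory: MSD grows linearly with lag for pure
    # diffusion; superlinear growth indicates directed transport.
    rng = np.random.default_rng(7)
    traj = np.cumsum(rng.standard_normal((1000, 2)), axis=0)
    lags, msd = mean_squared_displacement(traj)
    alpha, _ = np.polyfit(np.log(lags), np.log(msd), 1)
    print(f"anomalous exponent alpha ~ {alpha:.2f}")   # ~1 for Brownian motion
    ```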

  7. Use of simulated evaporation to assess the potential for scale formation during reverse osmosis desalination

    USGS Publications Warehouse

    Huff, G.F.

    2004-01-01

    The tendency of solutes in input water to precipitate efficiency-lowering scale deposits on the membranes of reverse osmosis (RO) desalination systems is an important factor in determining the suitability of input water for desalination. Simulated evaporation of input water can be used to quantitatively assess the potential for scale formation in RO desalination systems. The technique was demonstrated by simulating the increase in solute concentrations required to form calcite, gypsum, and amorphous silica scales at 25°C and 40°C for 23 desalination input waters taken from the literature. Simulation results could be used to quantitatively assess the potential of a given input water to form scale or to compare the potential of a number of input waters to form scale during RO desalination. Simulated evaporation of input waters cannot accurately predict the conditions under which scale will form, owing to the effects of potentially stable supersaturated solutions, solution velocity, and residence time inside RO systems. However, the simulated scale-forming potential of proposed input waters could be compared with the simulated scale-forming potentials and actual scale-forming properties of input waters having documented operational histories in RO systems. This may provide a technique to estimate the actual performance and suitability of proposed input waters during RO.
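
    A minimal sketch of the underlying saturation-index bookkeeping, assuming activities can be approximated by concentrations (real simulations use full activity corrections, PHREEQC-style); the water composition is hypothetical, and log Ksp(gypsum) = -4.58 at 25°C is a standard literature value.

    ```python
    import numpy as np

    def saturation_index(iap, ksp):
        """SI = log10(IAP / Ksp): SI > 0 means supersaturated (scale can form)."""
        return np.log10(iap / ksp)

    KSP_GYPSUM = 10**-4.58            # CaSO4.2H2O, approximate value at 25 C
    ca, so4 = 0.002, 0.003            # mol/kg in a hypothetical input water

    for factor in (1, 2, 4, 8):       # simulated evaporative concentration
        si = saturation_index((ca * factor) * (so4 * factor), KSP_GYPSUM)
        print(f"concentration x{factor}: SI(gypsum) = {si:+.2f}")
    ```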

  8. Scaling dimensions in spectroscopy of soil and vegetation

    NASA Astrophysics Data System (ADS)

    Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.

    2007-05-01

    The paper revises and clarifies definitions of the term scale and of scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only the spatial but also the spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over soil/vegetation surfaces, and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial and spectral up-scaling and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by their means. Radiative transfer of an experimental Norway spruce (Picea abies (L.) Karst.) research plot in the Czech Republic was simulated with the Discrete Anisotropic Radiative Transfer (DART) model to demonstrate the relevance of correctly scaling object optical properties up to image data at two different spatial resolutions. The interconnection of the successive modelling levels in vegetation is shown, and a future development in measurement and simulation of leaf directional spectral properties is discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. The importance of proper selection or acquisition of the spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described, with a demonstrative example for moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil. Corresponding spectral bands of both sensors were fused via a pyramidal wavelet transform in Fourier space. The new spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.

  9. Infrared thermal remote sensing for soil salinity assessment on landscape scale

    NASA Astrophysics Data System (ADS)

    Ivushkin, Konstantin; Bartholomeus, Harm; Bregt, Arnold K.; Pulatov, Alim; Bui, Elisabeth N.; Wilford, John

    2017-04-01

    Soil salinity is considered one of the most severe aspects of land degradation. An increased soil salt level inhibits the growth and development of crops; therefore, up-to-date soil salinity information is vital for appropriate management practices and reclamation strategies, and it is required at increasing spatial and temporal resolution. Conventional soil sampling and the associated laboratory analyses are slow and expensive, and often cannot deliver the temporal and spatial resolution required. The change of canopy temperature is one of the stress indicators in plants. Its behaviour in response to salt stress at the individual plant level is well studied in laboratory and greenhouse experiments, but its potential for landscape scale studies using remote sensing techniques has not yet been investigated. In our study, the possibilities of satellite thermography for landscape scale soil salinity assessment of cropped areas were studied, and its performance was compared with approaches used before, such as the Normalised Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). The study areas were the Syrdarya province of Uzbekistan and four study areas in four Australian states, namely Western Australia, South Australia, Queensland and New South Wales. The diversity of the study areas allowed us to analyse the behaviour of canopy temperature for different crops (wheat, cotton, barley) and different agricultural practices (rain fed and irrigated). MODIS and Landsat TM multiannual satellite images were used to measure canopy temperature. As ground truth for the Uzbekistan study area we used a provincial soil salinity map; for the Australian study areas we used the country-wide EC map. ANOVA was used to analyse relations between the soil salinity maps and canopy temperature, NDVI and EVI, and time series graphs were created to analyse the dynamics of the indicators during the growing season. The results showed significant relations between the soil salinity maps and canopy temperature. The amplitude of the canopy temperature difference between salinity classes varies for different crops, but the trend of temperature increase under increased salinity is present in all cases. The calculated F-values were higher for canopy temperature than for all other compared indicators. The vegetation indices also showed significant differences, but with lower F-values than canopy temperature. The visual comparison of the soil salinity map and the canopy temperature map also shows similar spatial patterns, whereas the NDVI and EVI maps look more random and noisy, with less pronounced patterns. The strongest relation between the soil salinity map and canopy temperature was usually observed at the end of a dry season and in the period of maximum crop development. Satellite thermography appeared to be a valuable approach to detect soil salinity under agricultural crops at the landscape scale.
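
    The comparison of F-values across indicators can be reproduced in miniature; the sketch below runs a one-way ANOVA of canopy temperature against salinity class on synthetic data (class means, spreads and sample sizes are invented, not the study's).

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    # One-way ANOVA of canopy temperature (K) grouped by mapped salinity class.
    rng = np.random.default_rng(8)
    t_low = rng.normal(301.0, 1.2, 200)    # low-salinity pixels
    t_mid = rng.normal(302.1, 1.2, 200)    # moderate salinity
    t_high = rng.normal(303.5, 1.2, 200)   # salt stress closes stomata, canopy warms

    f_val, p_val = f_oneway(t_low, t_mid, t_high)
    print(f"F = {f_val:.1f}, p = {p_val:.2g}")
    ```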

  10. A real-time interferometer technique for compressible flow research

    NASA Technical Reports Server (NTRS)

    Bachalo, W. D.; Houser, M. J.

    1984-01-01

    Strengths and shortcomings in the application of interferometric techniques to transonic flow fields are examined and an improved method is elaborated. Such applications have demonstrated the value of interferometry in obtaining data for compressible flow research. With holographic techniques, interferometry may be applied in large scale facilities without the use of expensive optics or elaborate vibration isolation equipment. Results obtained using holographic interferometry and other methods demonstrate that reliable qualitative and quantitative data can be acquired. Nevertheless, the conventional method can be difficult to set up and apply, and it cannot produce real-time data. A new interferometry technique is investigated that promises to be easier to apply and can provide real-time information. This single-beam technique has the necessary insensitivity to vibration for large scale wind tunnel operations. Capabilities of the method and preliminary tests on some laboratory scale flow fields are described.

  11. Comparison of injection pain caused by the DentalVibe Injection System versus a traditional syringe for inferior alveolar nerve block anaesthesia in paediatric patients.

    PubMed

    Elbay, M; Şermet Elbay, Ü; Yıldırım, S; Uğurluel, C; Kaya, C; Baydemir, C

    2015-06-01

    To compare paediatric patients' pain during needle insertion and injection for inferior alveolar nerve block (IANB) anaesthesia delivered by either a traditional syringe (TS) or the DentalVibe Injection Comfort System (DV). The study was a randomised controlled crossover clinical trial comprising 60 children aged 6-12 who required an operative procedure with IANB anaesthesia on their mandibular molars bilaterally. One molar was treated with TS and the contralateral tooth with DV. At each visit, subjective and objective pain was evaluated using the Wong-Baker FACES Pain Rating Scale (PRS) and the Face, Legs, Activity, Cry, Consolability (FLACC) Scale. Patients were asked which anaesthesia technique they preferred. Data were analysed using Wilcoxon signed rank, Spearman correlation, and Mann-Whitney U tests. There were no statistically significant differences in pain evaluation during needle insertion and injection between the two injection systems. However, a negative correlation was found on the FLACC between age and pain scores during injection with DV. Paediatric patients experienced similar pain during IANB anaesthesia administered with TS and DV. With increasing age, pain values during anaesthetic agent injection with DV decreased according to the FLACC. The traditional procedure was preferred over DV by paediatric patients.

  12. Incorporating precision, accuracy and alternative sampling designs into a continental monitoring program for colonial waterbirds

    USGS Publications Warehouse

    Steinkamp, Melanie J.; Peterjohn, B.G.; Keisman, J.L.

    2003-01-01

    A comprehensive monitoring program for colonial waterbirds in North America has never existed. At smaller geographic scales, many states and provinces conduct surveys of colonial waterbird populations. Periodic regional surveys are conducted at varying times during the breeding season using a variety of survey methods, which complicates attempts to estimate population trends for most species. The US Geological Survey Patuxent Wildlife Research Center has recently started to coordinate colonial waterbird monitoring efforts throughout North America. A centralized database has been developed with an Internet-based data entry and retrieval page. The extent of existing colonial waterbird surveys has been defined, allowing gaps in coverage to be identified and basic inventories completed where desirable. To enable analyses of comparable data at regional or larger geographic scales, sampling populations through statistically sound sampling designs should supersede obtaining counts at every colony. Standardized breeding season survey techniques have been agreed upon and documented in a monitoring manual. Each survey in the manual has associated with it recommendations for bias estimation, and includes specific instructions on measuring detectability. The methods proposed in the manual are for developing reliable, comparable indices of population size to establish trend information at multiple spatial and temporal scales, but they will not result in robust estimates of total population numbers.

  13. Long-term Observations of Intense Precipitation Small-scale Spatial Variability in a Semi-arid Catchment

    NASA Astrophysics Data System (ADS)

    Cropp, E. L.; Hazenberg, P.; Castro, C. L.; Demaria, E. M.

    2017-12-01

    In the southwestern US, the summertime North American Monsoon (NAM) provides about 60% of the region's annual precipitation. Recent research using high-resolution atmospheric model simulations and retrospective predictions has shown that since the 1950s, and more specifically in the last few decades, the mean daily precipitation in the southwestern U.S. during the NAM has followed a decreasing trend, while days with more extreme precipitation have intensified. The current work focuses on the impact of these long-term changes on the observed small-scale spatial variability of intense precipitation. Since limited long-term high-resolution observational data exist to support such climatologically induced spatial changes in precipitation frequency and intensity, the current work utilizes observations from the USDA-ARS Walnut Gulch Experimental Watershed (WGEW) in southeastern Arizona. Within this 150 km^2 catchment, over 90 rain gauges have been installed since the 1950s, measuring at sub-hourly resolution. We have applied geospatial analyses and the kriging interpolation technique to identify long-term changes in the spatial and temporal correlation and anisotropy of intense precipitation. The observed results will be compared with the previously simulated model results, as well as related to large-scale variations in climate patterns, such as the El Niño Southern Oscillation (ENSO) and the Pacific Decadal Oscillation (PDO).
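
    Kriging starts from an experimental semivariogram; the sketch below computes one on a hypothetical gauge network (coordinates and rainfall values are invented, with roughly WGEW-like gauge density).

    ```python
    import numpy as np

    def empirical_semivariogram(coords, values, bin_edges):
        """Isotropic experimental semivariogram: gamma(h) is the mean of
        0.5*(z_i - z_j)^2 over pairs whose separation falls in each bin."""
        coords, values = np.asarray(coords, float), np.asarray(values, float)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        g = 0.5 * (values[:, None] - values[None, :])**2
        iu = np.triu_indices(len(values), k=1)     # count each pair once
        d, g = d[iu], g[iu]
        return np.array([g[(d >= lo) & (d < hi)].mean()
                         for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])

    # Hypothetical gauge network: coordinates in km, event rainfall totals in mm.
    rng = np.random.default_rng(9)
    coords = rng.uniform(0, 12, size=(90, 2))
    rain = 10 + 0.8 * coords[:, 0] + rng.normal(0, 2, 90)   # trend plus noise
    gamma = empirical_semivariogram(coords, rain, np.arange(0, 13, 2))
    print(gamma)
    ```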

  14. NeuroCa: integrated framework for systematic analysis of spatiotemporal neuronal activity patterns from large-scale optical recording data

    PubMed Central

    Jang, Min Jee; Nam, Yoonkey

    2015-01-01

    Optical recording facilitates monitoring the activity of a large neural network at the cellular scale, but the analysis and interpretation of the collected data remain challenging. Here, we present a MATLAB-based toolbox, named NeuroCa, for the automated processing and quantitative analysis of large-scale calcium imaging data. Our tool includes several computational algorithms to extract the calcium spike trains of individual neurons from the calcium imaging data in an automatic fashion. Two algorithms were developed to decompose the imaging data into the activity of individual cells and subsequently detect calcium spikes from each neuronal signal. Applying our method to dense networks in dissociated cultures, we were able to obtain the calcium spike trains of ∼1000 neurons in a few minutes. Further analyses using these data permitted the quantification of neuronal responses to chemical stimuli as well as functional mapping of spatiotemporal patterns in neuronal firing within the spontaneous, synchronous activity of a large network. These results demonstrate that our method not only automates time-consuming, labor-intensive tasks in the analysis of neural data obtained using optical recording techniques but also provides a systematic way to visualize and quantify the collective dynamics of a network in terms of its cellular elements. PMID:26229973
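
    NeuroCa's own algorithms are not reproduced here; as a hedged stand-in, the sketch below detects calcium transients in a single-cell trace with a simple percentile-baseline dF/F threshold (all names and values are ours).

    ```python
    import numpy as np

    def detect_calcium_spikes(f, fs, baseline_pct=20, thresh=3.0):
        """Detect transient onsets in a single-cell fluorescence trace.

        dF/F is computed against a percentile baseline; an onset is flagged
        where dF/F rises above `thresh` robust standard deviations.
        """
        f = np.asarray(f, float)
        f0 = np.percentile(f, baseline_pct)
        dff = (f - f0) / f0
        sigma = 1.4826 * np.median(np.abs(dff - np.median(dff)))   # MAD-based SD
        above = dff > thresh * sigma
        onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1       # rising edges
        return onsets / fs                                          # times in s

    # Synthetic trace: baseline 100 a.u., three transients, 10 Hz sampling.
    rng = np.random.default_rng(10)
    trace = 100 + rng.normal(0, 1, 600)
    for t0 in (100, 300, 480):
        trace[t0:t0 + 30] += 40 * np.exp(-np.arange(30) / 10)      # decaying transient
    print(detect_calcium_spikes(trace, fs=10.0))
    ```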

  15. Origin of a cryptic lineage in a threatened reptile through isolation and historical hybridization.

    PubMed

    Sovic, M G; Fries, A C; Gibbs, H L

    2016-11-01

    Identifying phylogenetically distinct lineages and understanding the evolutionary processes by which they have arisen are important goals of phylogeography. This information can also help define conservation units in endangered species. Such analyses are being transformed by the availability of genomic-scale data sets and novel analytical approaches for statistically comparing different historical scenarios as causes of phylogeographic patterns. Here, we use genomic-scale restriction-site-associated DNA sequencing (RADseq) data to test for distinct lineages in the endangered Eastern Massasauga Rattlesnake (Sistrurus catenatus). We then use coalescent-based modeling techniques to identify the evolutionary mechanisms responsible for the origin of the lineages in this species. We find equivocal evidence for distinct phylogenetic lineages within S. catenatus east of the Mississippi River, but strong support for a previously unrecognized lineage on the western edge of the range of this snake, represented by populations from Iowa, USA. Snakes from these populations show patterns of genetic admixture with a nearby non-threatened sister species (Sistrurus tergeminus). Tests of historical demographic models support the hypothesis that the genetic distinctiveness of Iowa snakes is due to a combination of isolation and historical introgression between S. catenatus and S. tergeminus. Our work provides an example of how model-based analysis of genomic-scale data can help identify conservation units in rare species.

  16. Modeling whole-tree carbon assimilation rate using observed transpiration rates and needle sugar carbon isotope ratios.

    PubMed

    Hu, Jia; Moore, David J P; Riveros-Iregui, Diego A; Burns, Sean P; Monson, Russell K

    2010-03-01

    Understanding controls over plant-atmosphere CO(2) exchange is important for quantifying carbon budgets across a range of spatial and temporal scales. In this study, we used a simple approach to estimate whole-tree CO(2) assimilation rate (A(Tree)) in a subalpine forest ecosystem. We analysed the carbon isotope ratio (delta(13)C) of extracted needle sugars and combined it with the daytime leaf-to-air vapor pressure deficit to estimate tree water-use efficiency (WUE). The estimated WUE was then combined with observations of tree transpiration rate (E) using sap flow techniques to estimate A(Tree). Estimates of A(Tree) for the three dominant tree species in the forest were combined with species distribution and tree size to estimate gross primary productivity (GPP) using an ecosystem process model. A sensitivity analysis showed that estimates of A(Tree) were more sensitive to dynamics in E than in delta(13)C. At the ecosystem scale, the abundance of lodgepole pine trees influenced seasonal dynamics in GPP considerably more than Engelmann spruce and subalpine fir because of the greater sensitivity of its E to seasonal climate variation. The results provide the framework for a nondestructive method for estimating whole-tree carbon assimilation rate and ecosystem GPP over daily to weekly time scales.
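
    A minimal sketch of this chain of reasoning, assuming a standard Farquhar-type discrimination model with a = 4.4 per mil and b = 27 per mil; all input values are hypothetical, and the function name is ours, not the authors'.

    ```python
    A_FRAC = 4.4    # per mil, diffusion fractionation (standard value)
    B_FRAC = 27.0   # per mil, carboxylation fractionation (standard value)

    def wue_from_d13c(delta_air, delta_sugar, ca_ppm, vpd_kpa, pressure_kpa=101.3):
        """Water-use efficiency (umol CO2 per mmol H2O) from needle-sugar d13C."""
        big_delta = (delta_air - delta_sugar) / (1 + delta_sugar / 1000)
        ci_over_ca = (big_delta - A_FRAC) / (B_FRAC - A_FRAC)
        # A/E = ca * (1 - ci/ca) / (1.6 * VPD), with VPD as a mole fraction.
        return ca_ppm * (1 - ci_over_ca) / (1.6 * vpd_kpa / pressure_kpa) / 1000

    wue = wue_from_d13c(delta_air=-8.5, delta_sugar=-24.0, ca_ppm=390, vpd_kpa=1.2)
    e_tree = 2.4e6                 # sap-flow transpiration, mmol H2O/day (hypothetical)
    a_tree = wue * e_tree          # whole-tree assimilation, umol CO2/day
    print(f"WUE = {wue:.2f} umol/mmol, A_tree = {a_tree:.2e} umol/day")
    ```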

  17. Improving the Factor Structure of Psychological Scales

    PubMed Central

    Zhang, Xijuan; Savalei, Victoria

    2015-01-01

    Many psychological scales written in the Likert format include reverse worded (RW) items in order to control acquiescence bias. However, studies have shown that RW items often contaminate the factor structure of the scale by creating one or more method factors. The present study examines an alternative scale format, called the Expanded format, which replaces each response option in the Likert scale with a full sentence. We hypothesized that this format would result in a cleaner factor structure as compared with the Likert format. We tested this hypothesis on three popular psychological scales: the Rosenberg Self-Esteem scale, the Conscientiousness subscale of the Big Five Inventory, and the Beck Depression Inventory II. Scales in both formats showed comparable reliabilities. However, scales in the Expanded format had better (i.e., lower and more theoretically defensible) dimensionalities than scales in the Likert format, as assessed by both exploratory factor analyses and confirmatory factor analyses. We encourage further study and wider use of the Expanded format, particularly when a scale’s dimensionality is of theoretical interest. PMID:27182074

  18. Core OCD Symptoms: Exploration of Specificity and Relations with Psychopathology

    PubMed Central

    Stasik, Sara M.; Naragon-Gainey, Kristin; Chmielewski, Michael; Watson, David

    2012-01-01

    Obsessive-compulsive disorder (OCD) is a heterogeneous condition, comprised of multiple symptom domains. This study used aggregate composite scales representing three core OCD dimensions (Checking, Cleaning, Rituals), as well as Hoarding, to examine the discriminant validity, diagnostic specificity, and predictive ability of OCD symptom scales. The core OCD scales demonstrated strong patterns of convergent and discriminant validity – suggesting that these dimensions are distinct from other self-reported symptoms – whereas hoarding symptoms correlated just as strongly with OCD and non-OCD symptoms in most analyses. Across analyses, our results indicated that Checking is a particularly strong, specific marker of OCD diagnosis, whereas the specificity of Cleaning and Hoarding to OCD was less strong. Finally, the OCD Checking scale was the only significant predictor of OCD diagnosis in logistic regression analyses. Results are discussed with regard to the importance of assessing OCD symptom dimensions separately and implications for classification. PMID:23026094

  19. Scanning electron microscopy analysis of hair index on Karachi's population for social and professional appearance enhancement.

    PubMed

    Ali, N; Zohra, R R; Qader, S A U; Mumtaz, M

    2015-06-01

    Hair texture, appearance and pigment play an important role in social and professional communication and in maintaining an overall appearance. This study was especially designed for morphological assessment of hair damage caused to Karachi's population by natural factors and cosmetic treatments, using the scanning electron microscopy (SEM) technique. Hair samples under the study of synthetic factors' effects were given several cosmetic treatments (hot straightened, bleached, synthetic dyed and henna dyed), whereas samples under natural factors' effects (variation in gender, age and pigmentation) were left untreated. Morphological assessment was performed using the SEM technique, and the results obtained were statistically analysed using Minitab 16 and SPSS 18 software. Scanning electron microscopy images revealed fewer cuticular scales in males than in females of the same age, although the size of the cuticular scales was found to be larger in males than in females. The mean hair index of white hair was greater than that of black hair from the same head, as it is comparatively newly originated. Tukey's method revealed that among the cosmetic treatments, bleaching and synthetic henna caused most of the damage to the hair. Statistical evaluation of the results obtained from the SEM analysis revealed that the human scalp hair index shows morphological variation with respect to age, gender, hair pigmentation, and chemical and physical treatments. Individuals opting for cosmetic treatments could thus clearly visualize the extent of hair damage these may cause in the long run. © 2015 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  20. Relative and absolute reliability of measures of linoleic acid-derived oxylipins in human plasma.

    PubMed

    Gouveia-Figueira, Sandra; Bosson, Jenny A; Unosson, Jon; Behndig, Annelie F; Nording, Malin L; Fowler, Christopher J

    2015-09-01

    Modern analytical techniques allow for the measurement of oxylipins derived from linoleic acid in biological samples. Most validatory work has concerned extraction techniques, repeated analysis of aliquots from the same biological sample, and the influence of external factors such as diet and heparin treatment upon their levels, whereas less is known about the relative and absolute reliability of measurements undertaken on different days. A cohort of nineteen healthy males was used, with samples taken at the same time of day on two occasions, at least 7 days apart. Relative reliability was assessed using Lin's concordance correlation coefficients (CCC) and intraclass correlation coefficients (ICC); absolute reliability was assessed by Bland-Altman analyses. Nine linoleic acid oxylipins were investigated. ICC and CCC values ranged from acceptable (0.56 [13-HODE]) to poor (near zero [9(10)- and 12(13)-EpOME]). Bland-Altman limits of agreement were in general quite wide, ranging from ±0.5 (12,13-DiHOME) to ±2 (9(10)-EpOME; log10 scale). It is concluded that the relative reliability of linoleic acid-derived oxylipins varies between lipids, with compounds such as the HODEs showing better relative reliability than compounds such as the EpOMEs. These differences should be kept in mind when designing and interpreting experiments correlating plasma levels of these lipids with factors such as age, body mass index, rating scales, etc. Copyright © 2015 Elsevier Inc. All rights reserved.
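
    For readers unfamiliar with the absolute-reliability metric used here, a minimal Bland-Altman sketch on hypothetical log10-scale data (n = 19 to mirror the cohort size):

    ```python
    import numpy as np

    def bland_altman(day1, day2):
        """Bias and 95% limits of agreement for paired measurements."""
        day1, day2 = np.asarray(day1, float), np.asarray(day2, float)
        diff = day2 - day1
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        return bias, (bias - loa, bias + loa)

    # Hypothetical log10-transformed oxylipin levels on two visits, 19 subjects.
    rng = np.random.default_rng(11)
    level_day1 = rng.normal(1.0, 0.4, 19)
    level_day2 = level_day1 + rng.normal(0, 0.25, 19)
    bias, (lo, hi) = bland_altman(level_day1, level_day2)
    print(f"bias = {bias:+.2f}, limits of agreement = ({lo:+.2f}, {hi:+.2f})")
    ```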

  1. What is bioinformatics? A proposed definition and overview of the field.

    PubMed

    Luscombe, N M; Greenbaum, D; Gerstein, M

    2001-01-01

    The recent flood of data from genome sequences and functional genomics has given rise to a new field, bioinformatics, which combines elements of biology and computer science. Here we propose a definition for this new field and review some of the research that is being pursued, particularly in relation to transcriptional regulatory systems. Our definition is as follows: Bioinformatics is conceptualizing biology in terms of macromolecules (in the sense of physical chemistry) and then applying "informatics" techniques (derived from disciplines such as applied maths, computer science, and statistics) to understand and organize the information associated with these molecules on a large scale. Analyses in bioinformatics predominantly focus on three types of large datasets available in molecular biology: macromolecular structures, genome sequences, and the results of functional genomics experiments (e.g. expression data). Additional information includes the text of scientific papers and "relationship data" from metabolic pathways, taxonomy trees, and protein-protein interaction networks. Bioinformatics employs a wide range of computational techniques including sequence and structural alignment, database design and data mining, macromolecular geometry, phylogenetic tree construction, prediction of protein structure and function, gene finding, and expression data clustering. The emphasis is on approaches integrating a variety of computational methods and heterogeneous data sources. Finally, bioinformatics is a practical discipline. We survey some representative applications, such as finding homologues, designing drugs, and performing large-scale censuses. Additional information pertinent to the review is available over the web at http://bioinfo.mbb.yale.edu/what-is-it.

  2. Hospital Social Work and Spirituality: Views of Medical Social Workers.

    PubMed

    Pandya, Samta P

    2016-01-01

    This article is based on a study of 1,389 medical social workers in 108 hospitals across 12 countries, concerning their views of spirituality and spiritually sensitive interventions in hospital settings. Results of the logistic regression analyses and structural equation models showed that medical social workers from European countries, the United States of America, Canada, and Australia, those who had undergone spiritual training, and those with higher scores on the self-reported spiritual experiences scale were more likely to hold the view that spirituality in hospital settings serves to facilitate integral healing and wellness of patients; were more likely to prefer spiritual packages of New Age movements as the form of spiritual program; understood spiritual assessment as assessing the patient's spiritual starting point, on which to build further interventions; and were likely to understand spiritual techniques as mindfulness techniques. Finally, they were also likely to understand the spiritual goals of intervention in a holistic way, that is, as integral healing, growth of consciousness and promotion of the overall well-being of patients, rather than only coping and coming to terms with health adversities. Results of the structural equation models also showed covariances between religion, spirituality training, and scores on the self-reported spiritual experiences scale, which thus have a set of compounding effects on social workers' views on spiritual interventions in hospitals. The implications of the results for health care social work practice and curriculum are discussed.

  3. Circulating tumor cell detection: A direct comparison between negative and unbiased enrichment in lung cancer.

    PubMed

    Xu, Yan; Liu, Biao; Ding, Fengan; Zhou, Xiaodie; Tu, Pin; Yu, Bo; He, Yan; Huang, Peilin

    2017-06-01

    Circulating tumor cells (CTCs), isolated as a 'liquid biopsy', may provide important diagnostic and prognostic information. Therefore, rapid, reliable and unbiased detection of CTCs is required for routine clinical analyses. It has been demonstrated that negative enrichment, an epithelial marker-independent technique for isolating CTCs, exhibits better efficiency in the detection of CTCs than positive enrichment techniques that use only antibodies against specific epithelial cell adhesion molecules. However, negative enrichment techniques incur significant cell loss during the isolation procedure, and as methods that use only one type of antibody they are inherently biased. The detection procedure and identification of cell types also rely on skilled and experienced technicians. In the present study, the detection sensitivity of negative enrichment was compared with that of a previously described unbiased detection method. The results revealed that the unbiased detection method can efficiently detect >90% of cancer cells in blood samples containing CTCs, whereas only 40-60% of CTCs were detected by negative enrichment. Additionally, CTCs were identified in >65% of patients with stage I/II lung cancer. This simple yet efficient approach may achieve a high level of sensitivity and demonstrates potential for the large-scale clinical implementation of CTC-based diagnostic and prognostic strategies.

  4. Methodological assessment of 2b-RAD genotyping technique for population structure inferences in yellowfin tuna (Thunnus albacares).

    PubMed

    Pecoraro, Carlo; Babbucci, Massimiliano; Villamor, Adriana; Franch, Rafaella; Papetti, Chiara; Leroy, Bruno; Ortega-Garcia, Sofia; Muir, Jeff; Rooker, Jay; Arocha, Freddy; Murua, Hilario; Zudaire, Iker; Chassot, Emmanuel; Bodin, Nathalie; Tinti, Fausto; Bargelloni, Luca; Cariani, Alessia

    2016-02-01

    Global population genetic structure of yellowfin tuna (Thunnus albacares) is still poorly understood despite its relevance to the tuna fishery industry. Low levels of genetic differentiation among oceans speak in favour of the existence of a single worldwide panmictic population of this highly migratory fish. However, recent studies have indicated genetic structuring at much smaller geographic scales than previously considered, pointing out that YFT population genetic structure has not been properly assessed so far. In this study, we demonstrated for the first time the utility of the 2b-RAD genotyping technique for investigating population genetic diversity and differentiation in high gene-flow species. Running the de novo pipeline in Stacks, a total of 6772 high-quality genome-wide SNPs were identified across Atlantic, Indian and Pacific population samples representing all major distribution areas. Preliminary analyses showed shallow but significant population structure among oceans (FST = 0.0273; P-value < 0.01). Discriminant Analysis of Principal Components endorsed the presence of genetically discrete yellowfin tuna populations among the three oceanic pools. Although such evidence needs to be corroborated with increased sample sizes, these results show the efficiency of this genotyping technique in assessing genetic divergence in a marine fish with high dispersal potential. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    PubMed

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

    In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system involving many coupled variables, in which there are processes with different time scales, into a simpler reduced model with a smaller number of 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we consider a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with fewer variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced systems. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the populations in each patch is affected by additive noise.

  6. Plant chlorophyll fluorescence: active and passive measurements at canopy and leaf scales with different nitrogen treatments

    PubMed Central

    Cendrero-Mateo, M. Pilar; Moran, M. Susan; Papuga, Shirley A.; Thorp, K.R.; Alonso, L.; Moreno, J.; Ponce-Campos, G.; Rascher, U.; Wang, G.

    2016-01-01

    Most studies assessing chlorophyll fluorescence (ChlF) have examined leaf responses to environmental stress conditions using active techniques. Alternatively, passive techniques are able to measure ChlF at both leaf and canopy scales. However, the measurement principles of the two techniques differ, and only a few datasets concerning the relationships between them are reported in the literature. In this study, we investigated the potential for interchanging ChlF measurements made with active techniques and passive measurements at different temporal and spatial scales. The ultimate objective was to determine the limits within which active and passive techniques are comparable. The results presented in this study showed that active and passive measurements were highly correlated over the growing season across nitrogen treatments at both the canopy and leaf-average scales. At the single-leaf scale, the seasonal relation between techniques was weaker, but still significant. The variability within single-leaf measurements was largely related to leaf heterogeneity associated with variations in CO2 assimilation and stomatal conductance, and less so to variations in leaf chlorophyll content, leaf size or measurement inputs (e.g. illumination conditions, the light reflected and emitted by the leaf, and the leaf spectrum). This uncertainty was exacerbated when single-leaf analysis was limited to a particular day rather than the entire season. We concluded that daily measurements of active and passive ChlF at the single-leaf scale are not comparable. However, canopy and leaf-average active measurements can be used to better understand the daily and seasonal behaviour of passive ChlF measurements. In turn, this can be used to better estimate plant photosynthetic capacity and therefore to provide improved information for crop management. PMID:26482242
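
    The seasonal comparability reported above amounts to a correlation check between the two measurement series at each scale. A hedged sketch with synthetic numbers (no field data from the study) reproduces the qualitative pattern described: tight coupling at the canopy scale, a weaker relation at the single-leaf scale due to leaf heterogeneity.

      # Pearson correlation between synthetic active and passive ChlF series.
      import numpy as np

      rng = np.random.default_rng(1)
      season = np.linspace(0.0, 1.0, 60)
      active = 0.5 + 0.3 * np.sin(np.pi * season)        # hypothetical seasonal course

      canopy = 1.1 * active + rng.normal(0.0, 0.02, 60)  # low scatter at canopy scale
      leaf = 1.1 * active + rng.normal(0.0, 0.15, 60)    # leaf-to-leaf heterogeneity

      print(f"canopy r = {np.corrcoef(active, canopy)[0, 1]:.2f}")
      print(f"single-leaf r = {np.corrcoef(active, leaf)[0, 1]:.2f}")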

  7. Magnitude of anthropogenic phosphorus storage in the agricultural production and the waste management systems at the regional and country scales.

    PubMed

    Chowdhury, Rubel Biswas; Chakraborty, Priyanka

    2016-08-01

    Based on a systematic review of 17 recent substance flow analyses of phosphorus (P) at the regional and country scales, this study assesses the magnitude of anthropogenic P storage in the agricultural production and waste management systems, in order to identify the potential for minimizing unnecessary P storage and thereby reducing both the input of P as mineral fertilizer and the loss of P. The assessment indicates that in all six P flow analyses at the regional scale, the combined mass of annual P storage in the agricultural production and waste management systems exceeds 50% of the mass of annual P inflow as mineral fertilizer into the agricultural production system, and in half of these analyses it is close to or above 100%. At the country scale, in the majority of analyses (7 of 11), the combined mass of annual P storage in the two systems was roughly equivalent to or greater than 100% of the annual P inflow as mineral fertilizer, while it ranged from 30 to 60% in the remaining analyses. A simple scenario analysis revealed that annual P storage sustained at these rates over 100 years could accumulate a massive amount of P in the agricultural production and waste management systems at both the regional and country scales. This study suggests that sustainable P management initiatives at the regional and country scales should put more emphasis on minimizing unwanted P storage in the agricultural production and waste management systems.
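
    The 100-year scenario reduces to simple arithmetic: if annual storage is a fixed fraction of the annual mineral-fertilizer input, cumulative storage is that fraction times the input times the number of years. The figures below are illustrative placeholders, not values from the 17 reviewed studies.

      # Back-of-envelope cumulative P storage under a constant-rate scenario.
      fertilizer_input = 10.0   # kt P per year as mineral fertilizer (assumed)
      storage_fraction = 0.5    # storage > 50% of input (regional-scale finding)
      years = 100

      cumulative_p = fertilizer_input * storage_fraction * years
      print(f"P accumulated over {years} years: {cumulative_p:.0f} kt")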

  8. German Validation of the Conners 3® Rating Scales for Parents, Teachers, and Children.

    PubMed

    Christiansen, Hanna; Hirsch, Oliver; Drechsler, Renate; Wanderer, Sina; Knospe, Eva-Lotte; Günther, Thomas; Lidzba, Karen

    2016-01-01

    Attention-deficit/hyperactivity disorder (ADHD) rating scales such as the Conners' Rating Scales (CRS) are valuable adjuncts for diagnosis, since they offer parent, teacher, and self-ratings of children suspected of having ADHD. Even though the scales are widely used internationally, cross-cultural comparability has rarely been verified, and culture and language invariance have only been presumed. The Conners 3® rating scales are the updated version of the CRS, yet hardly any studies report their psychometric properties apart from the results published in the test edition itself, and to our knowledge there are no studies on the various adaptations of the Conners 3® in other languages. The German translations of the Conners 3® were completed by 745 children, 953 parents, and 741 teachers (children's age range: 6-18 years, mean: 11.74 years). Exploratory and confirmatory factor analyses on the content scale items were conducted to obtain the factor structure of the German version and to replicate the factor structure of the original American models. Cronbach's α was calculated to establish internal consistency. The exploratory analyses for the German model yielded factor structures that differed globally from the American model, though the confirmatory analyses revealed very good model fits with highly satisfactory Cronbach's α values. We were also able to provide empirical evidence for the Inattention subscale, which had only been hypothetically derived by Conners (2008). Even though the exploratory analyses resulted in different factor structures, the confirmatory analyses show such good psychometric properties that use of the German adaptation of the Conners 3® in international multicenter studies is justified.
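
    Internal consistency here is the standard Cronbach's α, α = k/(k-1) · (1 - Σ item variances / variance of the total score). A minimal sketch with synthetic item scores (standing in for the German Conners 3® data, which are not reproduced here):

      # Cronbach's alpha for one content scale from an items matrix
      # (rows = respondents, columns = items). Synthetic data only.
      import numpy as np

      rng = np.random.default_rng(2)
      latent = rng.normal(0.0, 1.0, (200, 1))            # 200 respondents
      items = latent + rng.normal(0.0, 0.8, (200, 6))    # 6 correlated items

      k = items.shape[1]
      item_var_sum = items.var(axis=0, ddof=1).sum()
      total_var = items.sum(axis=1).var(ddof=1)
      alpha = k / (k - 1) * (1 - item_var_sum / total_var)
      print(f"Cronbach's alpha = {alpha:.2f}")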

  9. Spatially explicit modelling of forest structure and function using airborne lidar and hyperspectral remote sensing data combined with micrometeorological measurements

    NASA Astrophysics Data System (ADS)

    Thomas, Valerie Anne

    This research models canopy-scale photosynthesis at the Groundhog River Flux Site through the integration of high-resolution airborne remote sensing data and micrometeorological measurements collected from a flux tower. Light detection and ranging (lidar) data are analysed to derive models of tree structure, including canopy height, basal area, crown closure, and average aboveground biomass. Lidar and hyperspectral remote sensing data are used to model canopy chlorophyll (Chl) and carotenoid concentrations, which are known to be good indicators of photosynthesis. The integration of lidar and hyperspectral data is applied to derive spatially explicit models of the fraction of photosynthetically active radiation (fPAR) absorbed by the canopy, as well as a species classification for the site. These products are integrated with flux tower meteorological measurements (i.e., air temperature and global solar radiation) collected continuously over 2004 to apply the C-Fix model of carbon exchange to the site. Results demonstrate that high-resolution lidar and lidar-hyperspectral integration techniques perform well in the boreal mixedwood environment. Lidar models are well correlated with forest structure, despite the complexities introduced in the mixedwood case (e.g., r2 = 0.84, 0.89, 0.60, and 0.91 for mean dominant height, basal area, crown closure, and average aboveground biomass, respectively). Strong relationships are also shown for canopy-scale chlorophyll/carotenoid concentration analysis using integrated lidar-hyperspectral techniques (e.g., r2 = 0.84, 0.84, and 0.82 for Chl(a), Chl(a+b), and Chl(b)). Examination of the spatially explicit models of fPAR reveals distinct spatial patterns that become increasingly apparent throughout the season, owing to the variation in species groupings (and canopy chlorophyll concentration) within the 1 km radius surrounding the flux tower. Comparison of the modified local-scale version of the C-Fix model to tower-measured gross ecosystem productivity (GEP) demonstrates good agreement (r2 = 0.70 for 10-day averages), with the largest deviations occurring in June-July. This research has direct benefits for forest inventory mapping and management practices; for mapping canopy physiology and biochemical constituents related to forest health; and for scaling and direct comparison to coarse-resolution satellite models, helping to bridge the gap between local-scale measurements at flux towers and predictions derived from continental-scale carbon models.
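
    One step in such a workflow can be sketched compactly: converting a lidar-derived effective leaf area index (LAI) grid into a spatially explicit fPAR layer via the Beer-Lambert relation fPAR = 1 - exp(-k·LAI). The extinction coefficient and LAI values below are assumptions for illustration, not the calibrated products of this research.

      # fPAR grid from an effective LAI grid via Beer-Lambert.
      import numpy as np

      k = 0.5                                  # assumed canopy extinction coefficient
      lai = np.array([[1.2, 2.5, 3.8],
                      [0.6, 4.1, 2.9]])        # hypothetical lidar-derived LAI grid
      fpar = 1.0 - np.exp(-k * lai)
      print(fpar.round(2))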

  10. Applications of NMR and computational methodologies to study protein dynamics.

    PubMed

    Narayanan, Chitra; Bafna, Khushboo; Roux, Louise D; Agarwal, Pratul K; Doucet, Nicolas

    2017-08-15

    Overwhelming evidence now illustrates the defining role of atomic-scale protein flexibility in biological events such as allostery, cell signaling, and enzyme catalysis. Over the years, spin-relaxation nuclear magnetic resonance (NMR) has provided significant insights into the structural motions occurring on multiple time frames over the course of a protein's life span. The present review article aims to illustrate to the broader community how this technique continues to shape many areas of protein science and engineering, in addition to being an indispensable tool for studying atomic-scale motions and functional characterization. Continuing developments in underlying NMR technology, alongside software and hardware developments for complementary computational approaches, now enable methodologies to routinely provide spatial directionality and structural representations that are traditionally harder to achieve using NMR spectroscopy alone. In addition to its well-established role in structural elucidation, we present recent examples that illustrate the combined power of selective isotope labeling, relaxation dispersion experiments, chemical shift analyses, and computational approaches for the characterization of conformational sub-states in proteins and enzymes. Copyright © 2017 Elsevier Inc. All rights reserved.
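
    As one concrete example of the chemical shift analyses mentioned, the combined 1H/15N chemical shift perturbation (CSP) per residue is commonly computed as sqrt(ΔδH² + (α·ΔδN)²) with α ≈ 0.14; the sketch below uses invented shift changes, not data from the reviewed studies.

      # Combined 1H-15N chemical shift perturbation per residue (ppm).
      import math

      def csp(d_h, d_n, alpha=0.14):
          """CSP = sqrt(d_H^2 + (alpha * d_N)^2); alpha scales the 15N shifts."""
          return math.sqrt(d_h ** 2 + (alpha * d_n) ** 2)

      # Hypothetical shift changes between two conformational states
      residues = {"G45": (0.02, 0.9), "H12": (0.11, 2.3), "L78": (0.01, 0.2)}
      for res, (dh, dn) in residues.items():
          print(f"{res}: CSP = {csp(dh, dn):.3f} ppm")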

  11. Real-time high-resolution heterodyne-based measurements of spectral dynamics in fibre lasers

    PubMed Central

    Sugavanam, Srikanth; Fabbri, Simon; Le, Son Thai; Lobach, Ivan; Kablukov, Sergey; Khorev, Serge; Churkin, Dmitry

    2016-01-01

    Conventional tools for the measurement of laser spectra (e.g. optical spectrum analysers) capture data averaged over a considerable time period. However, the generation spectrum of many laser types may involve spectral dynamics whose relatively fast time scale is determined by the cavity round-trip period, calling for instrumentation featuring both high temporal and high spectral resolution. Such real-time spectral characterisation becomes particularly challenging if the laser pulses are long, or if they have continuous or quasi-continuous-wave radiation components. Here we combine optical heterodyning with a technique of spatio-temporal intensity measurement that allows the characterisation of such complex sources. Fast, round-trip-resolved spectral dynamics of cavity-based systems are obtained in real time, with a temporal resolution of one cavity round trip and a frequency resolution defined by its inverse (85 ns and 24 MHz, respectively, are demonstrated). We also show how, under certain conditions for quasi-continuous-wave sources, the spectral resolution can be further increased by a factor of 100, either by direct extraction of phase information from the heterodyned dynamics or by using double time scales within the spectrogram approach. PMID:26984634
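
    The core signal-processing step, resolving a digitized heterodyne beat note into time-dependent spectra, can be sketched with a short-time Fourier transform; the sampling rate, beat frequency and window length below are illustrative assumptions, not the paper's experimental values.

      # Spectrogram of a synthetic drifting heterodyne beat note.
      import numpy as np
      from scipy.signal import spectrogram

      fs = 1e9                                    # assumed 1 GS/s digitizer
      t = np.arange(int(2e-4 * fs)) / fs          # 200 us record
      f_beat = 50e6 + 5e6 * np.sin(2 * np.pi * 1e4 * t)    # drifting beat frequency
      sig = np.cos(2 * np.pi * np.cumsum(f_beat) / fs)     # phase = integral of f_beat

      # A window of one round trip (~85 ns in the paper) would set the temporal
      # resolution; a longer window is used here for a readable toy example.
      f, seg_t, sxx = spectrogram(sig, fs=fs, nperseg=4096, noverlap=2048)
      peak = f[sxx.argmax(axis=0)]                # dominant frequency per time slice
      print(f"beat note swings {peak.min() / 1e6:.1f}-{peak.max() / 1e6:.1f} MHz")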

  12. Protein homology model refinement by large-scale energy optimization.

    PubMed

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.
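
    The search problem described above (a narrow global basin surrounded by false minima) can be illustrated, far from the authors' Rosetta-based protocol, with local minimization from many random starts on a rugged one-dimensional "energy":

      # Multistart local minimization on a funnel-plus-ruggedness landscape.
      import numpy as np
      from scipy.optimize import minimize

      def energy(x):
          x = x[0]
          return 0.05 * (x - 3.0) ** 2 + np.sin(5.0 * x)   # funnel + false minima

      rng = np.random.default_rng(3)
      starts = rng.uniform(-10.0, 10.0, 50)
      results = [minimize(energy, [x0], method="Nelder-Mead") for x0 in starts]
      best = min(results, key=lambda r: r.fun)    # most single runs stall elsewhere
      print(f"best minimum near x = {best.x[0]:.2f}, E = {best.fun:.3f}")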

  13. Geometric factor and influence of sensors in the establishment of a resistivity-moisture relation in soil samples

    NASA Astrophysics Data System (ADS)

    López-Sánchez, M.; Mansilla-Plaza, L.; Sánchez-de-laOrden, M.

    2017-10-01

    Prior to field-scale research, soil samples are analysed at the laboratory scale for electrical resistivity calibrations. A variety of field instruments are currently available to estimate the water content of soils using different physical phenomena, and these instruments can be used to develop moisture-resistivity relationships on the same soil samples, ensuring that measurements are performed on the same material and under the same conditions (e.g., humidity and temperature). A geometric factor, determined by the electrode locations, is applied in order to calculate the apparent electrical resistivity of the laboratory test cells. This geometric factor can be determined in three ways: analytically, experimentally through laboratory trials, or by analysing a numerical model. The analytical approximation is not appropriate for complex cells or arrays, and both the experimental and numerical approximations can lead to inaccurate results. We therefore propose a novel approach that provides a compromise between the two techniques and a more precise determination of the geometric factor.
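
    For a four-electrode array on a homogeneous half-space, the analytical geometric factor is k = 2π / (1/AM - 1/BM - 1/AN + 1/BN) and the apparent resistivity is ρa = k·ΔV/I. The sketch below evaluates this for a Wenner spread; it is exactly the analytical approximation that breaks down for the finite laboratory cells discussed here, which motivates the combined experimental-numerical approach.

      # Half-space geometric factor and apparent resistivity for a
      # collinear four-electrode array (positions in metres).
      import math

      def geometric_factor(a_pos, b_pos, m_pos, n_pos):
          am = abs(m_pos - a_pos); bm = abs(m_pos - b_pos)
          an = abs(n_pos - a_pos); bn = abs(n_pos - b_pos)
          return 2 * math.pi / (1 / am - 1 / bm - 1 / an + 1 / bn)

      a = 0.5                                     # electrode spacing
      k = geometric_factor(0.0, 3 * a, a, 2 * a)  # Wenner array A-M-N-B
      print(f"k = {k:.3f} m (Wenner check: 2*pi*a = {2 * math.pi * a:.3f})")

      rho_a = k * 0.120 / 0.010                   # dV = 120 mV, I = 10 mA (assumed)
      print(f"apparent resistivity = {rho_a:.1f} ohm-m")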

  14. Pilot-scale resin adsorption as a means to recover and fractionate apple polyphenols.

    PubMed

    Kammerer, Dietmar R; Carle, Reinhold; Stanley, Roger A; Saleh, Zaid S

    2010-06-09

    The purification and fractionation of phenolic compounds from crude plant extracts using a food-grade acrylic adsorbent were studied at pilot-plant scale. A diluted apple juice concentrate served as a model phenolic solution for column adsorption and desorption trials. Phenolic concentrations were evaluated photometrically using the Folin-Ciocalteu assay and by HPLC-DAD. Recovery rates were significantly affected by increasing phenolic concentrations in the feed solutions applied to the column. In contrast, the flow rate during column loading hardly influenced adsorption efficiency, whereas temperature and pH were shown to be crucial parameters determining both total phenolic recovery rates and the adsorption behavior of individual polyphenols. As expected, the eluent composition had the greatest impact on the desorption characteristics of both total and individual phenolic compounds. HPLC analyses revealed significantly different elution profiles of individual polyphenols depending on their lipophilicity. This technique allows the fractionation of crude plant phenolic extracts, providing the opportunity to selectively tailor the functional properties of the resulting phenolic fractions, and the present study delivers valuable information for the adjustment of individual process parameters.
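
    Although the study reports column trials rather than batch isotherms, equilibrium adsorption data of this kind are often summarized by fitting a Langmuir isotherm, q = q_max·K·c / (1 + K·c); the sketch below fits invented equilibrium points, purely to show the treatment.

      # Langmuir isotherm fit to hypothetical equilibrium adsorption data.
      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(c, q_max, k):
          return q_max * k * c / (1.0 + k * c)

      c_eq = np.array([0.1, 0.3, 0.8, 1.5, 3.0, 6.0])     # g/L total phenolics
      q = np.array([12.0, 28.0, 48.0, 61.0, 74.0, 82.0])  # mg/g resin loading

      (q_max, k_l), _ = curve_fit(langmuir, c_eq, q, p0=[90.0, 1.0])
      print(f"q_max = {q_max:.1f} mg/g, K = {k_l:.2f} L/g")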

  15. Modeling and Simulation of the Second-Generation Orion Crew Module Air Bag Landing System

    NASA Technical Reports Server (NTRS)

    Timmers, Richard B.; Hardy, Robin C.; Willey, Cliff E.; Welch, Joseph V.

    2009-01-01

    Airbags were evaluated as the landing attenuation system for earth landing of the Orion Crew Module (CM). Analysis conducted to date shows that airbags are capable of providing a graceful landing of the CM in nominal and off-nominal conditions, such as parachute failure, high horizontal winds, and unfavorable vehicle/ground angle combinations, while meeting crew and vehicle safety requirements. The analyses and associated testing presented here concern a second-generation airbag design developed by ILC Dover, building on relevant first-generation design, analysis, and testing efforts. In order to fully evaluate the second-generation airbag design and correlate the dynamic simulations, a series of drop tests was carried out at NASA Langley's Landing and Impact Research (LandIR) facility in Hampton, Virginia. The tests used a full-scale set of airbags attached to a full-scale test article representing the Orion Crew Module. The techniques used to collect experimental data, develop the simulations, and make comparisons to experimental data are discussed.

  16. Diversification of land plants: insights from a family-level phylogenetic analysis.

    PubMed

    Fiz-Palacios, Omar; Schneider, Harald; Heinrichs, Jochen; Savolainen, Vincent

    2011-11-21

    Some of the evolutionary history of land plants has been documented from the fossil record and from a few broad-scale phylogenetic analyses, focusing especially on angiosperms and ferns. Here, we reconstructed phylogenetic relationships among all 706 families of land plants using molecular data, and dated the phylogeny using multiple fossils and a molecular clock technique. Applying various tests of diversification that take into account topology, branch lengths, numbers of extant species, and extinction, we evaluated diversification rates through time. We also compared these diversification profiles against the distribution of Phanerozoic climate modes. We found evidence for radiations of ferns and mosses in the shadow of angiosperms, coinciding with the rather warm Cretaceous global climate. In contrast, gymnosperms and liverworts show a signature of declining diversification rates during geological periods of cool global climate. This broad-scale phylogenetic analysis helps to reveal the successive waves of diversification that made up the diversity of land plants we see today. Both warm temperatures and a wet climate may have been necessary for the rise of this diversity under a scenario of successive lineage replacement.
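
    The simplest rate estimate underlying such diversification tests is the pure-birth (Yule) net rate for a clade of n extant species: r = ln(n)/t for a stem age t, or r = ln(n/2)/t for a crown age (Magallón and Sanderson 2001). The clade figures below are round illustrative numbers, not estimates from this paper.

      # Pure-birth (Yule) net diversification rate from richness and clade age.
      import math

      def yule_rate(n_species, age_myr, crown=True):
          return math.log(n_species / (2 if crown else 1)) / age_myr

      # Hypothetical fern clade: 9000 extant species, 200 Myr crown age
      print(f"r = {yule_rate(9000, 200):.4f} lineages/lineage/Myr")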

  17. Regional gray matter correlates of vocational interests

    PubMed Central

    2012-01-01

    Background: Previous studies have identified brain areas related to cognitive abilities and personality, respectively. In this exploratory study, we extend the application of modern neuroimaging techniques to another area of individual differences, vocational interests, and relate the results to an earlier study of cognitive abilities salient for vocations. Findings: First, we examined the psychometric relationships between vocational interests and abilities in a large sample. The primary relationships between those domains were between Investigative (scientific) interests and general intelligence and between Realistic (“blue-collar”) interests and spatial ability. Then, using MRI and voxel-based morphometry, we investigated the relationships between regional gray matter volume and vocational interests. Specific clusters of gray matter were found to be correlated with Investigative and Realistic interests. Overlap analyses indicated some common brain areas between the correlates of Investigative interests and general intelligence and between the correlates of Realistic interests and spatial ability. Conclusions: Two of six vocational-interest scales show substantial relationships with regional gray matter volume. The overlap between the brain correlates of these scales and cognitive-ability factors suggests there are relationships between individual differences in brain structure and vocations. PMID:22591829

  18. Regional gray matter correlates of vocational interests.

    PubMed

    Schroeder, David H; Haier, Richard J; Tang, Cheuk Ying

    2012-05-16

    Previous studies have identified brain areas related to cognitive abilities and personality, respectively. In this exploratory study, we extend the application of modern neuroimaging techniques to another area of individual differences, vocational interests, and relate the results to an earlier study of cognitive abilities salient for vocations. First, we examined the psychometric relationships between vocational interests and abilities in a large sample. The primary relationships between those domains were between Investigative (scientific) interests and general intelligence and between Realistic ("blue-collar") interests and spatial ability. Then, using MRI and voxel-based morphometry, we investigated the relationships between regional gray matter volume and vocational interests. Specific clusters of gray matter were found to be correlated with Investigative and Realistic interests. Overlap analyses indicated some common brain areas between the correlates of Investigative interests and general intelligence and between the correlates of Realistic interests and spatial ability. Two of six vocational-interest scales show substantial relationships with regional gray matter volume. The overlap between the brain correlates of these scales and cognitive-ability factors suggests there are relationships between individual differences in brain structure and vocations.

  19. Estimation of old field ecosystem biomass using low altitude imagery

    NASA Technical Reports Server (NTRS)

    Nor, S. M.; Safir, G.; Burton, T. M.; Hook, J. E.; Schultink, G.

    1977-01-01

    Color-infrared photography was used to evaluate the biomass of experimental plots in an old-field ecosystem treated with different levels of wastewater from a sewage treatment facility. Cibachrome prints at a scale of approximately 1:1,600, produced from 35 mm color-infrared slides, were used to analyze density patterns using prepared tonal density scales and multicell grids registered to ground panels shown in the photographs. Correlation analyses were carried out between tonal density and vegetation biomass obtained from ground samples and harvests. Correlations between mean tonal density and harvest biomass data gave consistently high coefficients, ranging from 0.530 to 0.896 at the 0.001 significance level, and the corresponding multiple regression analysis resulted in higher correlation coefficients. The results of this study indicate that aerial infrared photography can be used to estimate standing crop biomass on wastewater-irrigated old-field ecosystems. Combined with minimal ground truth data, this technique could enable managers of wastewater irrigation projects to time the harvest of such systems precisely for maximal removal of nutrients in harvested biomass.

  20. Genome-Scale Variation of Tubeworm Symbionts

    NASA Astrophysics Data System (ADS)

    Robidart, J.; Felbeck, H.

    2005-12-01

    Hydrothermal vent tubeworms are completely dependent on their bacterial symbionts for nutrition. Despite this dependency, many studies have concluded that the bacterial symbionts are acquired anew from the environment each generation, rather than through the more reliable mode of direct transmission from parent to offspring. Ribosomal 16S sequences have shown little variation in symbiont phylogeny from worm to worm, but higher-resolution genome-scale analyses have found genomic heterogeneity between symbionts from worms in different environments. Which genes can be "spared" while leaving the symbiosis intact? Have symbionts from one environment gained physiological capabilities that make them more fit in that environment? To answer these questions, subtractive hybridization was used on symbionts of Riftia pachyptila tubeworms from different environments to determine which genes are present in one symbiont and absent in the other. Many genes were found to be unique to each symbiont, and these results will be presented. This technique will be applied to address fundamental questions regarding microbial symbiont evolution in response to a specific physico-chemical environment, a different host species, and more.
