Sample records for scale analysis approach

  1. Nonlinear Image Denoising Methodologies

    DTIC Science & Technology

    2002-05-01

    [Table-of-contents fragment: 5.3 A Multiscale Approach to Scale-Space Analysis; 5.4 ...] In this thesis, our approach to denoising is first based on a controlled nonlinear stochastic random walk to achieve a scale-space analysis (as in... stochastic treatment or interpretation of the diffusion. In addition, unless a specific stopping time is known to be adequate, the resulting evolution...

  2. Adaptation of ICT Integration Approach Scale to Kosovo Culture: A Study of Validity and Reliability Analysis

    ERIC Educational Resources Information Center

    Kervan, Serdan; Tezci, Erdogan

    2018-01-01

    The aim of this study is to adapt the ICT Integration Approach Scale, which measures university faculty members' approaches to integrating ICT into the teaching and learning process, to Kosovo culture. The scale, originally developed in Turkish, was translated into Albanian to provide linguistic equivalence. The survey was given to a total of 303 instructors [161 (53.1%)…

  3. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating a large-scale covariance matrix. Traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.

  4. Combining Flux Balance and Energy Balance Analysis for Large-Scale Metabolic Network: Biochemical Circuit Theory for Analysis of Large-Scale Metabolic Networks

    NASA Technical Reports Server (NTRS)

    Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)

    2001-01-01

    Predicting behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
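
    The FBA step described here reduces to a linear program: maximize a flux objective subject to steady-state stoichiometry and flux bounds. Below is a minimal, hypothetical sketch on a toy three-reaction network; the stoichiometric matrix, bounds, and objective are invented for illustration and are not taken from this work (EBA's thermodynamic constraints are not included).

```python
# Minimal flux balance analysis (FBA) sketch on a toy network; the
# stoichiometric matrix, bounds, and objective are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

# Rows: metabolites A, B; columns: reactions R1 (uptake -> A),
# R2 (A -> B), R3 (B -> biomass export).
S = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0]])

bounds = [(0.0, 10.0)] * S.shape[1]      # flux bounds 0 <= v <= 10
c = np.array([0.0, 0.0, -1.0])           # maximize v3 (linprog minimizes)

# Steady state: S v = 0
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("optimal fluxes:", res.x)          # expect roughly [10, 10, 10]
```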

  5. Multiscale recurrence analysis of spatio-temporal data

    NASA Astrophysics Data System (ADS)

    Riedl, M.; Marwan, N.; Kurths, J.

    2015-12-01

    The description and analysis of spatio-temporal dynamics is a crucial task in many scientific disciplines. In this work, we propose a method which uses the mapogram as a similarity measure between spatially distributed data instances at different time points. The resulting similarity values of the pairwise comparison are used to construct a recurrence plot in order to benefit from established tools of recurrence quantification analysis and recurrence network analysis. In contrast to other recurrence tools for this purpose, the mapogram approach allows a specific focus on different spatial scales, which can be used in a multi-scale analysis of spatio-temporal dynamics. We illustrate this approach by application to mixed dynamics, such as traveling parallel wave fronts with additive noise, as well as more complicated examples: pseudo-random numbers and coupled map lattices with a semi-logistic mapping rule. The more complicated examples in particular show the usefulness of the multi-scale consideration for taking spatial patterns of different scales and with different rhythms into account. This mapogram approach therefore promises new insights into problems of climatology, ecology, and medicine.
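
    A minimal sketch of the recurrence-plot construction this record describes: pairwise similarity values between spatial snapshots are thresholded into a binary recurrence matrix. A plain Euclidean distance between synthetic snapshots stands in for the mapogram, which is not reproduced here.

```python
# Recurrence matrix from pairwise similarity of spatial snapshots; a plain
# Euclidean distance is an illustrative stand-in for the mapogram.
import numpy as np

rng = np.random.default_rng(5)
T, nx = 200, 64
# Synthetic spatio-temporal field: a slow oscillation plus noise.
snapshots = (np.sin(np.linspace(0, 8 * np.pi, T))[:, None]
             + 0.1 * rng.standard_normal((T, nx)))

# Distance between every pair of time points (placeholder similarity measure).
D = np.linalg.norm(snapshots[:, None, :] - snapshots[None, :, :], axis=-1)
eps = np.quantile(D, 0.1)            # threshold at the 10th percentile
R = (D < eps).astype(int)            # binary recurrence matrix

print("recurrence rate:", round(R.mean(), 3))
```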

  6. Multiscale recurrence analysis of spatio-temporal data.

    PubMed

    Riedl, M; Marwan, N; Kurths, J

    2015-12-01

    The description and analysis of spatio-temporal dynamics is a crucial task in many scientific disciplines. In this work, we propose a method which uses the mapogram as a similarity measure between spatially distributed data instances at different time points. The resulting similarity values of the pairwise comparison are used to construct a recurrence plot in order to benefit from established tools of recurrence quantification analysis and recurrence network analysis. In contrast to other recurrence tools for this purpose, the mapogram approach allows a specific focus on different spatial scales, which can be used in a multi-scale analysis of spatio-temporal dynamics. We illustrate this approach by application to mixed dynamics, such as traveling parallel wave fronts with additive noise, as well as more complicated examples: pseudo-random numbers and coupled map lattices with a semi-logistic mapping rule. The more complicated examples in particular show the usefulness of the multi-scale consideration for taking spatial patterns of different scales and with different rhythms into account. This mapogram approach therefore promises new insights into problems of climatology, ecology, and medicine.

  7. Scaling up to address data science challenges

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, Joanne R.

    Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.

  8. Scaling up to address data science challenges

    DOE PAGES

    Wendelberger, Joanne R.

    2017-04-27

    Statistics and Data Science provide a variety of perspectives and technical approaches for exploring and understanding Big Data. Partnerships between scientists from different fields such as statistics, machine learning, computer science, and applied mathematics can lead to innovative approaches for addressing problems involving increasingly large amounts of data in a rigorous and effective manner that takes advantage of advances in computing. Here, this article will explore various challenges in Data Science and will highlight statistical approaches that can facilitate analysis of large-scale data including sampling and data reduction methods, techniques for effective analysis and visualization of large-scale simulations, and algorithms and procedures for efficient processing.

  9. The Dispositions for Culturally Responsive Pedagogy Scale

    ERIC Educational Resources Information Center

    Whitaker, Manya C.; Valtierra, Kristina Marie

    2018-01-01

    Purpose: The purpose of this study is to develop and validate the dispositions for culturally responsive pedagogy scale (DCRPS). Design/methodology/approach: Scale development consisted of a six-step process including item development, expert review, exploratory factor analysis, factor interpretation, confirmatory factor analysis and convergent…

  10. Two-dimensional analysis of coupled heat and moisture transport in masonry structures

    NASA Astrophysics Data System (ADS)

    Krejčí, Tomáš

    2016-06-01

    Reconstruction and maintenance of historical buildings and bridges require good knowledge of temperature and moisture distribution. Sharp changes in temperature and moisture can lead to damage. This paper describes an analysis of coupled heat and moisture transfer in masonry based on a two-level approach. The macro-scale level describes the whole structure, while the meso-scale level takes into account the detailed composition of the masonry. The two-level approach is computationally demanding and was therefore implemented in parallel. It was applied to the analysis of temperature and moisture distribution in the Charles Bridge in Prague, Czech Republic.

  11. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.
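
    Detrended fluctuation analysis itself follows a standard recipe: integrate the series, remove a local polynomial trend in windows of size n, and read the scaling exponent from the log-log slope of the RMS fluctuation F(n). The sketch below is an illustrative first-order DFA implementation, not the exact analytical framework of this paper.

```python
# Minimal first-order detrended fluctuation analysis (DFA1) sketch.
import numpy as np

def dfa(x, scales, order=1):
    """Return the RMS fluctuation F(n) for each window size n in `scales`."""
    y = np.cumsum(x - np.mean(x))              # integrated (profile) series
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        sq = []
        for seg in segs:
            coef = np.polyfit(t, seg, order)   # local polynomial trend
            sq.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

# Scaling exponent alpha from the log-log slope of F(n) vs n.
x = np.random.randn(10000)                     # white noise -> alpha ~ 0.5
scales = np.array([16, 32, 64, 128, 256, 512])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print("DFA exponent:", round(alpha, 2))
```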

  12. Time and frequency domain characteristics of detrending-operation-based scaling analysis: Exact DFA and DMA frequency responses.

    PubMed

    Kiyono, Ken; Tsujimoto, Yutaka

    2016-07-01

    We develop a general framework to study the time and frequency domain characteristics of detrending-operation-based scaling analysis methods, such as detrended fluctuation analysis (DFA) and detrending moving average (DMA) analysis. In this framework, using either the time or frequency domain approach, the frequency responses of detrending operations are calculated analytically. Although the frequency domain approach based on conventional linear analysis techniques is only applicable to linear detrending operations, the time domain approach presented here is applicable to both linear and nonlinear detrending operations. Furthermore, using the relationship between the time and frequency domain representations of the frequency responses, the frequency domain characteristics of nonlinear detrending operations can be obtained. Based on the calculated frequency responses, it is possible to establish a direct connection between the root-mean-square deviation of the detrending-operation-based scaling analysis and the power spectrum for linear stochastic processes. Here, by applying our methods to DFA and DMA, including higher-order cases, exact frequency responses are calculated. In addition, we analytically investigate the cutoff frequencies of DFA and DMA detrending operations and show that these frequencies are not optimally adjusted to coincide with the corresponding time scale.

  13. An approach to studying scale for students in higher education: a Rasch measurement model analysis.

    PubMed

    Waugh, R F; Hii, T K; Islam, A

    2000-01-01

    A questionnaire comprising 80 self-report items was designed to measure student Approaches to Studying in a higher education context. The items were conceptualized and designed from five learning orientations: a Deep Approach, a Surface Approach, a Strategic Approach, Clarity of Direction and Academic Self-Confidence, to include 40 attitude items and 40 corresponding behavior items. The study aimed to create a scale and investigate its psychometric properties using a Rasch measurement model. The convenience sample consisted of 350 students at an Australian university in 1998. The analysis supported the conceptual structure of the Scale as involving studying attitudes and behaviors towards five orientations to learning. Attitudes are mostly easier than behaviors, in line with the theory. Sixty-eight items fit the model and have good psychometric properties. The proportion of observed variance considered true is 92% and the Scale is well-targeted against the students. Some harder items are needed to improve the targeting and some further testing work needs to be done on the Surface Approach. In the Surface Approach and Clarity of Direction in Studying, attitudes make a lesser contribution than behaviors to the variable, Approaches to Studying.

  14. Bringing analysis of gender and social-ecological resilience together in small-scale fisheries research: Challenges and opportunities.

    PubMed

    Kawarazuka, Nozomi; Locke, Catherine; McDougall, Cynthia; Kantor, Paula; Morgan, Miranda

    2017-03-01

    The demand for gender analysis is now increasingly orthodox in natural resource programming, including that for small-scale fisheries. Whilst the analysis of social-ecological resilience has made valuable contributions to integrating social dimensions into research and policy-making on natural resource management, it has so far demonstrated limited success in effectively integrating considerations of gender equity. This paper reviews the challenges in, and opportunities for, bringing a gender analysis together with social-ecological resilience analysis in the context of small-scale fisheries research in developing countries. We conclude that rather than searching for a single unifying framework for gender and resilience analysis, it will be more effective to pursue a plural solution in which closer engagement is fostered between analysis of gender and social-ecological resilience whilst preserving the strengths of each approach. This approach can make an important contribution to developing a better evidence base for small-scale fisheries management and policy.

  15. A randomization approach to handling data scaling in nuclear medicine.

    PubMed

    Bai, Chuanyong; Conwell, Richard; Kindem, Joel

    2010-06-01

    In medical imaging, data scaling is sometimes desired to handle the system complexity, such as uniformity calibration. Since the data are usually saved in short integer, conventional data scaling will first scale the data in floating point format and then truncate or round the floating point data to short integer data. For example, when using truncation, scaling of 9 by 1.1 results in 9 and scaling of 10 by 1.1 results in 11. When the count level is low, such scaling may change the local data distribution and affect the intended application of the data. In this work, the authors use an example gated cardiac SPECT study to illustrate the effect of conventional scaling by factors of 1.1 and 1.2. The authors then scaled the data with the same scaling factors using a randomization approach, in which a random number evenly distributed between 0 and 1 is generated to determine how the floating point data will be saved as short integer data. If the random number is between 0 and 0.9, then 9.9 will be saved as 10, otherwise 9. In other words, the floating point value 9.9 will be saved in short integer value as 10 with 90% probability or 9 with 10% probability. For statistical analysis of the performance, the authors applied the conventional approach with rounding and the randomization approach to 50 consecutive gated studies from a clinical site. For the example study, the image reconstructed from the original data showed an apparent perfusion defect at the apex of the myocardium. The defect size was noticeably changed by scaling with 1.1 and 1.2 using the conventional approaches with truncation and rounding. Using the randomization approach, in contrast, the images from the scaled data appeared identical to the original image. Line profile analysis of the scaled data showed that the randomization approach introduced the least change to the data as compared to the conventional approaches. For the 50 gated data sets, significantly more studies showed quantitative differences between the original images and the images from the data scaled by 1.2 using the rounding approach than the randomization approach [46/50 (92%) versus 3/50 (6%), p < 0.05]. Likewise, significantly more studies showed visually noticeable differences between the original images and the images from the data scaled by 1.2 using the rounding approach than randomization [29/50 (58%) versus 1/50 (2%), p < 0.05]. In conclusion, the proposed randomization approach minimizes the scaling-introduced local data change as compared to the conventional approaches. It is preferred for nuclear medicine data scaling.
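
    The randomization described above is essentially stochastic rounding: after scaling, a value is rounded up with probability equal to its fractional part, so the expected value is preserved. A minimal sketch, with invented counts and the function name randomized_scale chosen for illustration:

```python
# Stochastic-rounding sketch of the randomization approach: a scaled count is
# rounded up with probability equal to its fractional part.
import numpy as np

def randomized_scale(counts, factor, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    scaled = counts.astype(float) * factor
    floor = np.floor(scaled)
    frac = scaled - floor
    # e.g. 9 * 1.1 = 9.9 -> saved as 10 with probability 0.9, else 9
    round_up = rng.random(scaled.shape) < frac
    return (floor + round_up).astype(np.int16)

counts = np.array([9, 10, 11], dtype=np.int16)   # invented example counts
print(randomized_scale(counts, 1.1))
```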

  16. Exploring Rating Quality in Rater-Mediated Assessments Using Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Wind, Stefanie A.; Engelhard, George, Jr.

    2016-01-01

    Mokken scale analysis is a probabilistic nonparametric approach that offers statistical and graphical tools for evaluating the quality of social science measurement without placing potentially inappropriate restrictions on the structure of a data set. In particular, Mokken scaling provides a useful method for evaluating important measurement…
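
    At the core of Mokken scale analysis is Loevinger's scalability coefficient H, which compares observed Guttman errors to those expected under item independence. The sketch below computes a scale-level H for simulated dichotomous items; it is illustrative only and omits the item-pair and item-level coefficients, standard errors, and automated item selection of operational MSA software.

```python
# Loevinger's scalability coefficient H for dichotomous items (the core
# statistic of Mokken scale analysis); simulated data, illustrative only.
import numpy as np

def scalability_H(X):
    """X: (respondents x items) binary matrix; returns the scale-level H."""
    n, k = X.shape
    p = X.mean(axis=0)                  # item popularities
    obs_err = exp_err = 0.0
    for j in range(k):
        for l in range(j + 1, k):
            easy, hard = (j, l) if p[j] >= p[l] else (l, j)
            # Guttman error: endorsing the harder item but not the easier one.
            obs_err += np.sum((X[:, hard] == 1) & (X[:, easy] == 0))
            exp_err += n * p[hard] * (1 - p[easy])
    return 1.0 - obs_err / exp_err

rng = np.random.default_rng(0)
ability = rng.normal(size=500)
difficulty = np.linspace(-1.0, 1.0, 5)
X = (ability[:, None] + 0.7 * rng.normal(size=(500, 5)) > difficulty).astype(int)
print("scale H:", round(scalability_H(X), 2))   # H >= 0.3 is the usual minimum
```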

  17. A General Approach for Estimating Scale Score Reliability for Panel Survey Data

    ERIC Educational Resources Information Center

    Biemer, Paul P.; Christ, Sharon L.; Wiesen, Christopher A.

    2009-01-01

    Scale score measures are ubiquitous in the psychological literature and can be used as both dependent and independent variables in data analysis. Poor reliability of scale score measures leads to inflated standard errors and/or biased estimates, particularly in multivariate analysis. Reliability estimation is usually an integral step to assess…

  18. Ecological hierarchies and self-organisation - Pattern analysis, modelling and process integration across scales

    USGS Publications Warehouse

    Reuter, H.; Jopp, F.; Blanco-Moreno, J. M.; Damgaard, C.; Matsinos, Y.; DeAngelis, D.L.

    2010-01-01

    A continuing discussion in applied and theoretical ecology focuses on the relationship of different organisational levels and on how ecological systems interact across scales. We address principal approaches to cope with complex across-level issues in ecology by applying elements of hierarchy theory and the theory of complex adaptive systems. A top-down approach, often characterised by the use of statistical techniques, can be applied to analyse large-scale dynamics and identify constraints exerted on lower levels. Current developments are illustrated with examples from the analysis of within-community spatial patterns and large-scale vegetation patterns. A bottom-up approach allows one to elucidate how interactions of individuals shape dynamics at higher levels in a self-organisation process; e.g., population development and community composition. This may be facilitated by various modelling tools, which provide the distinction between focal levels and resulting properties. For instance, resilience in grassland communities has been analysed with a cellular automaton approach, and the driving forces in rodent population oscillations have been identified with an agent-based model. Both modelling tools illustrate the principles of analysing higher level processes by representing the interactions of basic components. The focus of most ecological investigations on either top-down or bottom-up approaches may not be appropriate, if strong cross-scale relationships predominate. Here, we propose an 'across-scale-approach', closely interweaving the inherent potentials of both approaches. This combination of analytical and synthesising approaches will enable ecologists to establish a more coherent access to cross-level interactions in ecological systems. © 2010 Gesellschaft für Ökologie.

  19. A scaling procedure for the response of an isolated system with high modal overlap factor

    NASA Astrophysics Data System (ADS)

    De Rosa, S.; Franco, F.

    2008-10-01

    The paper deals with a numerical approach that reduces some physical sizes of the solution domain to compute the dynamic response of an isolated system: it has been named Asymptotical Scaled Modal Analysis (ASMA). The proposed numerical procedure alters the input data needed to obtain the classic modal responses to increase the frequency band of validity of the discrete or continuous coordinates model through the definition of a proper scaling coefficient. It is demonstrated that the computational cost remains acceptable while the frequency range of analysis increases. Moreover, with reference to the flexural vibrations of a rectangular plate, the paper compares ASMA with statistical energy analysis and the energy distribution approach. Some insights are also given about the limits of the scaling coefficient. Finally, it is shown that the linear dynamic response predicted with the scaling procedure has the same quality and characteristics as statistical energy analysis, but it can be useful when the system cannot be solved appropriately by standard Statistical Energy Analysis (SEA).

  20. A Scalable Analysis Toolkit

    NASA Technical Reports Server (NTRS)

    Aiken, Alexander

    2001-01-01

    The Scalable Analysis Toolkit (SAT) project aimed to demonstrate that it is feasible and useful to statically detect software bugs in very large systems. The technical focus of the project was on a relatively new class of constraint-based techniques for software analysis, where the desired facts about programs (e.g., the presence of a particular bug) are phrased as constraint problems to be solved. At the beginning of this project, the most successful forms of formal software analysis were limited forms of automatic theorem proving (as exemplified by the analyses used in language type systems and optimizing compilers), semi-automatic theorem proving for full verification, and model checking. With a few notable exceptions these approaches had not been demonstrated to scale to software systems of even 50,000 lines of code. Realistic approaches to large-scale software analysis cannot hope to make every conceivable formal method scale. Thus, the SAT approach is to mix different methods in one application by using coarse and fast but still adequate methods at the largest scales, and reserving the use of more precise but also more expensive methods at smaller scales for critical aspects (that is, aspects critical to the analysis problem under consideration) of a software system. The principled method proposed for combining a heterogeneous collection of formal systems with different scalability characteristics is mixed constraints. This idea had been used previously in small-scale applications with encouraging results: using mostly coarse methods and narrowly targeted precise methods, useful information (meaning the discovery of bugs in real programs) was obtained with excellent scalability.

  1. Survival analysis for a large scale forest health issue: Missouri oak decline

    Treesearch

    C.W. Woodall; P.L. Grambsch; W. Thomas; W.K. Moser

    2005-01-01

    Survival analysis methodologies provide novel approaches for forest mortality analysis that may aid in detecting, monitoring, and mitigating large-scale forest health issues. This study examined survival analysis for evaluating a regional forest health issue - Missouri oak decline. With a statewide Missouri forest inventory, log-rank tests of the effects of...

  2. Evaluating the Invariance of Cognitive Profile Patterns Derived from Profile Analysis via Multidimensional Scaling (PAMS): A Bootstrapping Approach

    ERIC Educational Resources Information Center

    Kim, Se-Kang

    2010-01-01

    The aim of the current study is to validate the invariance of major profile patterns derived from multidimensional scaling (MDS) by bootstrapping. Profile Analysis via Multidimensional Scaling (PAMS) was employed to obtain profiles and bootstrapping was used to construct the sampling distributions of the profile coordinates and the empirical…

  3. Scale-dependent approaches to modeling spatial epidemiology of chronic wasting disease.

    USGS Publications Warehouse

    Conner, Mary M.; Gross, John E.; Cross, Paul C.; Ebinger, Michael R.; Gillies, Robert; Samuel, Michael D.; Miller, Michael W.

    2007-01-01

    For each scale, we presented a focal approach that would be useful for understanding the spatial pattern and epidemiology of CWD, as well as being a useful tool for CWD management. The focal approaches include risk analysis and micromaps for the regional scale, cluster analysis for the landscape scale, and individual-based modeling for the fine, within-population scale. For each of these methods, we used simulated data and walked through the method step by step to fully illustrate the "how to", with specifics about what is input and output, as well as what questions the method addresses. We also provided a summary table to describe, at a glance, the scale, questions that can be addressed, and general data required for each method described in this e-book. We hope that this review will be helpful to biologists and managers by increasing the utility of their surveillance data, ultimately improving our understanding of CWD and allowing wildlife biologists and managers to move beyond retroactive fire-fighting to proactive preventive action.

  4. An alternative to Rasch analysis using triadic comparisons and multi-dimensional scaling

    NASA Astrophysics Data System (ADS)

    Bradley, C.; Massof, R. W.

    2016-11-01

    Rasch analysis is a principled approach for estimating the magnitude of some shared property of a set of items when a group of people assign ordinal ratings to them. In the general case, Rasch analysis not only estimates person and item measures on the same invariant scale, but also estimates the average thresholds used by the population to define rating categories. However, Rasch analysis fails when there is insufficient variance in the observed responses because it assumes a probabilistic relationship between person measures, item measures and the rating assigned by a person to an item. When only a single person is rating all items, there may be cases where the person assigns the same rating to many items no matter how many times he rates them. We introduce an alternative to Rasch analysis for precisely these situations. Our approach leverages multi-dimensional scaling (MDS) and requires only rank orderings of items and rank orderings of pairs of distances between items to work. Simulations show one variant of this approach - triadic comparisons with non-metric MDS - provides highly accurate estimates of item measures in realistic situations.
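
    The non-metric MDS step only needs the rank order of pairwise dissimilarities, which is why rank orderings from triadic comparisons suffice. A minimal sketch with simulated item measures; the triadic-comparison design itself is not reproduced, and scikit-learn's MDS with metric=False is used as a stand-in solver.

```python
# Non-metric MDS recovers item coordinates from only the rank order of
# pairwise dissimilarities; simulated measures, illustrative only.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import rankdata
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
true_measures = np.sort(rng.uniform(0, 10, size=8))   # latent item measures
ranks = rankdata(pdist(true_measures[:, None]))       # keep only rank order
D = squareform(ranks)                                  # ordinal dissimilarities

mds = MDS(n_components=1, metric=False, dissimilarity="precomputed",
          n_init=8, random_state=0)
est = mds.fit_transform(D).ravel()

# Up to sign and scale, the recovered coordinates should track the truth.
print("|correlation| with true measures:",
      round(abs(np.corrcoef(true_measures, est)[0, 1]), 2))
```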

  5. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    NASA Technical Reports Server (NTRS)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  6. Systems analysis of urban wastewater systems--two systematic approaches to analyse a complex system.

    PubMed

    Benedetti, L; Blumensaat, F; Bönisch, G; Dirckx, G; Jardin, N; Krebs, P; Vanrolleghem, P A

    2005-01-01

    This work was aimed at performing an analysis of the integrated urban wastewater system (catchment area, sewer, WWTP, receiving water). It focused on analysing the substance fluxes going through the system to identify critical pathways of pollution, as well as assessing the effectiveness of energy consumption and operational/capital costs. Two different approaches were adopted in the study to analyse urban wastewater systems of diverse characteristics. In the first approach, a wide-ranging analysis of a system at the river basin scale is applied. The Nete river basin in Belgium, a tributary of the Schelde, was analysed through the 29 sewer catchments constituting the basin. In the second approach, a more detailed methodology was developed to separately analyse two urban wastewater systems situated within the Ruhr basin (Germany) at a river stretch scale. The paper mainly focuses on the description of the method applied. Only the most important results are presented. The main outcomes of these studies are: the identification of stressors on the receiving water bodies, an extensive benchmarking of wastewater systems, and the evidence of the scale dependency of results in such studies.

  7. Performance Analysis, Design Considerations, and Applications of Extreme-Scale In Situ Infrastructures

    DOE PAGES

    Ayachit, Utkarsh; Bauer, Andrew; Duque, Earl P. N.; ...

    2016-11-01

    A key trend facing extreme-scale computational science is the widening gap between computational and I/O rates, and the challenge that follows is how to best gain insight from simulation data when it is increasingly impractical to save it to persistent storage for subsequent visual exploration and analysis. One approach to this challenge is centered around the idea of in situ processing, where visualization and analysis processing is performed while data is still resident in memory. Our paper examines several key design and performance issues related to the idea of in situ processing at extreme scale on modern platforms: Scalability, overhead, performance measurement and analysis, comparison and contrast with a traditional post hoc approach, and interfacing with simulation codes. We illustrate these principles in practice with studies, conducted on large-scale HPC platforms, that include a miniapplication and multiple science application codes, one of which demonstrates in situ methods in use at greater than 1M-way concurrency.

  8. A controls engineering approach for analyzing airplane input-output characteristics

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. Douglas

    1991-01-01

    An engineering approach for analyzing airplane control and output characteristics is presented. State-space matrix equations describing the linear perturbation dynamics are transformed from physical coordinates into scaled coordinates. The scaling is accomplished by applying various transformations to the system to employ prior engineering knowledge of the airplane physics. Two different analysis techniques are then explained. Modal analysis techniques calculate the influence of each system input on each fundamental mode of motion and the distribution of each mode among the system outputs. The optimal steady state response technique computes the blending of steady state control inputs that optimize the steady state response of selected system outputs. Analysis of an example airplane model is presented to demonstrate the described engineering approach.
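
    For a linear state-space model x' = Ax + Bu, y = Cx, the modal quantities described above follow from the eigendecomposition of A: the rows of V^-1 B show how each input excites each mode, and the columns of C V show how each mode appears in each output. A minimal sketch with an arbitrary two-state example, not the airplane model of the paper:

```python
# Modal decomposition of a linear state-space model: V^-1 B gives each
# input's influence on each mode, C V gives each mode's share in each output.
import numpy as np

A = np.array([[0.0, 1.0], [-4.0, -0.4]])   # lightly damped oscillator (example)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

eigvals, V = np.linalg.eig(A)
mode_input_influence = np.linalg.inv(V) @ B    # modal forcing by each input
mode_output_distribution = C @ V               # modal content of each output

for i, lam in enumerate(eigvals):
    print(f"mode {i}: eigenvalue {complex(lam):.2f}, "
          f"input influence {abs(mode_input_influence[i, 0]):.2f}, "
          f"output distribution {abs(mode_output_distribution[0, i]):.2f}")
```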

  9. Scaling in biomechanical experimentation: a finite similitude approach.

    PubMed

    Ochoa-Cabrero, Raul; Alonso-Rasgado, Teresa; Davey, Keith

    2018-06-01

    Biological experimentation has many obstacles: resource limitations, unavailability of materials, manufacturing complexities and ethical compliance issues; any approach that resolves all or some of these is of some interest. The aim of this study is to apply the recently discovered concept of finite similitude as a novel approach for the design of scaled biomechanical experiments, supported with analysis using a commercial finite-element package and validated by means of image correlation software. The study of isotropic scaling of synthetic bones leads to the selection of three-dimensional (3D) printed materials for the trial-space materials. These materials, conforming to the theory, are analysed in finite-element models of cylinder and femur geometries undergoing compression, tension, torsion and bending tests to assess the efficacy of the approach using reverse scaling. The finite-element results show similar strain patterns on the surface for the cylinder, with a maximum difference of less than 10%, and for the femur, with a maximum difference of less than 4%, across all tests. Finally, the trial-space, physical-trial experimentation using 3D printed materials for compression and bending testing shows good agreement in a Bland-Altman statistical analysis, providing good supporting evidence for the practicality of the approach. © 2018 The Author(s).

  10. Prose Representation: A Multidimensional Scaling Approach.

    ERIC Educational Resources Information Center

    LaPorte, Ronald E.; Voss, James F.

    1979-01-01

    Multidimensional scaling was used to study the comprehension of prose. Undergraduates rated the similarity of twenty nouns before and after reading passages containing those nouns. Results indicated that the scaling analysis provided an effective, valid indicator of prose representation. (Author/JKS)

  11. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability, or provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach combining large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictands: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step. This approach basically consisted of (1) decomposing both signals (the SLP field and precipitation or streamflow) using discrete wavelet multiresolution analysis and synthesis, (2) generating one statistical downscaling model per time-scale, and (3) summing up all scale-dependent models in order to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD; in addition, the scale-dependent spatial patterns associated with the model matched quite well those obtained from scale-dependent composite analysis. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with flood and extremely low-flow/drought periods (e.g., winter 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. Further investigations would be required to address the issue of the stationarity of the large-scale/local-scale relationships and to test the capability of the multiresolution ESD model for interannual-to-interdecadal forecasting. In terms of methodological approach, further investigations may concern a fully comprehensive sensitivity analysis of the modeling to the parameters of the multiresolution approach (different families of scaling and wavelet functions used, number of coefficients/degree of smoothness, etc.).
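
    A minimal sketch of steps (1)-(3) above: decompose a predictor index and a predictand with a discrete wavelet transform, fit one regression per scale, and sum the per-scale predictions. Simulated series and the PyWavelets package stand in for the SLP field and the Seine records; the wavelet choice (db4, 4 levels) is an assumption, and this is not the authors' model.

```python
# Scale-by-scale downscaling sketch: wavelet-decompose predictor and
# predictand, fit one linear model per scale, sum the per-scale predictions.
import numpy as np
import pywt

def per_level_series(x, wavelet="db4", level=4):
    """Reconstruct the contribution of each wavelet level to the signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    levels = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        levels.append(pywt.waverec(keep, wavelet)[: len(x)])
    return levels                       # approximation + detail contributions

rng = np.random.default_rng(2)
n = 512
predictor = np.sin(2 * np.pi * np.arange(n) / 64) + 0.5 * rng.standard_normal(n)
predictand = 0.8 * predictor + 0.3 * rng.standard_normal(n)

reconstruction = np.zeros(n)
for xp, yp in zip(per_level_series(predictor), per_level_series(predictand)):
    slope, intercept = np.polyfit(xp, yp, 1)    # one regression per scale
    reconstruction += slope * xp + intercept    # sum of scale-dependent models

print("correlation with observed:",
      round(np.corrcoef(reconstruction, predictand)[0, 1], 2))
```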

  12. Developing Multidimensional Likert Scales Using Item Factor Analysis: The Case of Four-Point Items

    ERIC Educational Resources Information Center

    Asún, Rodrigo A.; Rdz-Navarro, Karina; Alvarado, Jesús M.

    2016-01-01

    This study compares the performance of two approaches in analysing four-point Likert rating scales with a factorial model: the classical factor analysis (FA) and the item factor analysis (IFA). For FA, maximum likelihood and weighted least squares estimations using Pearson correlation matrices among items are compared. For IFA, diagonally weighted…

  13. An Instructional Module on Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Wind, Stefanie A.

    2017-01-01

    Mokken scale analysis (MSA) is a probabilistic-nonparametric approach to item response theory (IRT) that can be used to evaluate fundamental measurement properties with less strict assumptions than parametric IRT models. This instructional module provides an introduction to MSA as a probabilistic-nonparametric framework in which to explore…

  14. Application Perspective of 2D+SCALE Dimension

    NASA Astrophysics Data System (ADS)

    Karim, H.; Rahman, A. Abdul

    2016-09-01

    Different applications or users need different abstractions of spatial models, dimensionalities and specifications of their datasets due to variations in the required analysis and output. Various approaches, data models and data structures are now available to support most current application models in Geographic Information System (GIS). One of the focus trends in the GIS multi-dimensional research community is the implementation of a scale dimension with spatial datasets to suit various scale application needs. In this paper, 2D spatial datasets that have been scaled up with scale as the third dimension are addressed as 2D+scale (or 3D-scale) dimension. Nowadays, various data structures, data models, approaches, schemas, and formats have been proposed as the best approaches to support a variety of applications and dimensionalities in 3D topology. However, only a few of them consider the element of scale as their targeted dimension. As far as the scale dimension is concerned, the implementation approach can be either multi-scale or vario-scale (with any available data structures and formats) depending on application requirements (topology, semantics and function). This paper attempts to discuss the current and potential new applications which could positively be integrated upon the 3D-scale dimension approach. The previous and current works on the scale dimension, as well as the requirements to be preserved for any given application, implementation issues and future potential applications, form the major discussion of this paper.

  15. Using Rasch Analysis to Inform Rating Scale Development

    ERIC Educational Resources Information Center

    Van Zile-Tamsen, Carol

    2017-01-01

    The use of surveys, questionnaires, and rating scales to measure important outcomes in higher education is pervasive, but reliability and validity information is often based on problematic Classical Test Theory approaches. Rasch Analysis, based on Item Response Theory, provides a better alternative for examining the psychometric quality of rating…

  16. Exploring Incomplete Rating Designs with Mokken Scale Analysis

    ERIC Educational Resources Information Center

    Wind, Stefanie A.; Patil, Yogendra J.

    2018-01-01

    Recent research has explored the use of models adapted from Mokken scale analysis as a nonparametric approach to evaluating rating quality in educational performance assessments. A potential limiting factor to the widespread use of these techniques is the requirement for complete data, as practical constraints in operational assessment systems…

  17. Multiscale Measurement of Extreme Response Style

    ERIC Educational Resources Information Center

    Bolt, Daniel M.; Newton, Joseph R.

    2011-01-01

    This article extends a methodological approach considered by Bolt and Johnson for the measurement and control of extreme response style (ERS) to the analysis of rating data from multiple scales. Specifically, it is shown how the simultaneous analysis of item responses across scales allows for more accurate identification of ERS, and more effective…

  18. Applications of Combinatorial Programming to Data Analysis: The Traveling Salesman and Related Problems

    ERIC Educational Resources Information Center

    Hubert, Lawrence J.; Baker, Frank B.

    1978-01-01

    The "Traveling Salesman" and similar combinatorial programming tasks encountered in operations research are discussed as possible data analysis models in psychology, for example, in developmental scaling, Guttman scaling, profile smoothing, and data array clustering. A short overview of various computational approaches from this area of…

  19. Contemporary Militant Extremism: A Linguistic Approach to Scale Development

    ERIC Educational Resources Information Center

    Stankov, Lazar; Higgins, Derrick; Saucier, Gerard; Knezevic, Goran

    2010-01-01

    In this article, the authors describe procedures used in the development of a new scale of militant extremist mindset. A 2-step approach consisted of (a) linguistic analysis of the texts produced by known terrorist organizations and selection of statements from these texts that reflect the mindset of those belonging to these organizations and (b)…

  20. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis.

    PubMed

    Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
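
    In the spirit of CONTIN, the inverse transform can be sketched as regularized non-negative least squares against a grid of candidate relaxation times. The synthetic bimodal decay, grid, and regularization weight below are illustrative choices, not the authors' exact algorithm.

```python
# CONTIN-style inversion sketch: recover a distribution of relaxation times
# from a decay curve via Tikhonov-regularized non-negative least squares.
import numpy as np
from scipy.optimize import nnls

t = np.logspace(-3, 2, 120)                      # lag times
tau = np.logspace(-3, 2, 60)                     # candidate relaxation times
K = np.exp(-t[:, None] / tau[None, :])           # Laplace kernel

# Synthetic decay from two relaxation modes plus noise.
rng = np.random.default_rng(3)
g = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 5.0)
g_noisy = g + 0.005 * rng.standard_normal(t.size)

lam = 0.1                                        # regularization weight (assumed)
K_aug = np.vstack([K, np.sqrt(lam) * np.eye(tau.size)])
y_aug = np.concatenate([g_noisy, np.zeros(tau.size)])
weights, _ = nnls(K_aug, y_aug)

print("recovered modes near:", tau[weights > 0.05])   # peaks near 0.05 and 5
```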

  1. Inverse Transformation: Unleashing Spatially Heterogeneous Dynamics with an Alternative Approach to XPCS Data Analysis

    PubMed Central

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan

    2018-01-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS. PMID:29875506

  2. Post-16 Physics and Chemistry Uptake: Combining Large-Scale Secondary Analysis with In-Depth Qualitative Methods

    ERIC Educational Resources Information Center

    Hampden-Thompson, Gillian; Lubben, Fred; Bennett, Judith

    2011-01-01

    Quantitative secondary analysis of large-scale data can be combined with in-depth qualitative methods. In this paper, we discuss the role of this combined-methods approach in examining the uptake of physics and chemistry in post-compulsory schooling for students in England. The secondary data analysis of the National Pupil Database (NPD) served…

  3. Multi-scale modelling of elastic moduli of trabecular bone

    PubMed Central

    Hamed, Elham; Jasiuk, Iwona; Yoo, Andrew; Lee, YikHan; Liszka, Tadeusz

    2012-01-01

    We model trabecular bone as a nanocomposite material with hierarchical structure and predict its elastic properties at different structural scales. The analysis involves a bottom-up multi-scale approach, starting with nanoscale (mineralized collagen fibril) and moving up the scales to sub-microscale (single lamella), microscale (single trabecula) and mesoscale (trabecular bone) levels. Continuum micromechanics methods, composite materials laminate theory and finite-element methods are used in the analysis. Good agreement is found between theoretical and experimental results. PMID:22279160

  4. Large-Scale Compute-Intensive Analysis via a Combined In-situ and Co-scheduling Workflow Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messer, Bronson; Sewell, Christopher; Heitmann, Katrin

    2015-01-01

    Large-scale simulations can produce tens of terabytes of data per analysis cycle, complicating and limiting the efficiency of workflows. Traditionally, outputs are stored on the file system and analyzed in post-processing. With the rapidly increasing size and complexity of simulations, this approach faces an uncertain future. Trending techniques consist of performing the analysis in situ, utilizing the same resources as the simulation, and/or off-loading subsets of the data to a compute-intensive analysis system. We introduce an analysis framework developed for HACC, a cosmological N-body code, that uses both in situ and co-scheduling approaches for handling Petabyte-size outputs. An initial in situ step is used to reduce the amount of data to be analyzed, and to separate out the data-intensive tasks handled off-line. The analysis routines are implemented using the PISTON/VTK-m framework, allowing a single implementation of an algorithm that simultaneously targets a variety of GPU, multi-core, and many-core architectures.

  5. Multiscale wavelet representations for mammographic feature analysis

    NASA Astrophysics Data System (ADS)

    Laine, Andrew F.; Song, Shuwu

    1992-12-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet coefficients, enhanced by linear, exponential and constant weight functions localized in scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improved quality) while requiring less time to evaluate mammograms for most patients (lower costs).
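
    The enhancement step amounts to weighting wavelet coefficients scale by scale before reconstruction. A minimal sketch with a random array standing in for a mammogram and constant per-scale gains; the db2 wavelet and the gain values are arbitrary illustrative choices, not the weight functions used in the paper.

```python
# Scale-selective enhancement: weight wavelet detail coefficients per level,
# then reconstruct. Random array and constant gains are placeholders.
import numpy as np
import pywt

rng = np.random.default_rng(4)
image = rng.random((256, 256))                   # stand-in for a mammogram

coeffs = pywt.wavedec2(image, "db2", level=3)    # [cA3, (cH3, cV3, cD3), ...]
gains = [1.0, 2.0, 1.5, 1.0]                     # approximation + 3 detail levels

enhanced = [coeffs[0] * gains[0]]
for (cH, cV, cD), g in zip(coeffs[1:], gains[1:]):
    enhanced.append((cH * g, cV * g, cD * g))    # constant weight per scale

result = pywt.waverec2(enhanced, "db2")
print(result.shape)                              # (256, 256)
```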

  6. Boundary formulations for sensitivity analysis without matrix derivatives

    NASA Technical Reports Server (NTRS)

    Kane, J. H.; Guru Prasad, K.

    1993-01-01

    A new hybrid approach to continuum structural shape sensitivity analysis employing boundary element analysis (BEA) is presented. The approach uses iterative reanalysis to obviate the need to factor perturbed matrices in the determination of surface displacement and traction sensitivities via a univariate perturbation/finite difference (UPFD) step. The UPFD approach makes it possible to immediately reuse existing subroutines for computation of BEA matrix coefficients in the design sensitivity analysis process. The reanalysis technique economically computes the response of univariately perturbed models without factoring perturbed matrices. The approach provides substantial computational economy without the burden of a large-scale reprogramming effort.

  7. Accuracy and Reliability of Marker-Based Approaches to Scale the Pelvis, Thigh, and Shank Segments in Musculoskeletal Models.

    PubMed

    Kainz, Hans; Hoang, Hoa X; Stockton, Chris; Boyd, Roslyn R; Lloyd, David G; Carty, Christopher P

    2017-10-01

    Gait analysis together with musculoskeletal modeling is widely used for research. In the absence of medical images, surface marker locations are used to scale a generic model to the individual's anthropometry. Studies evaluating the accuracy and reliability of different scaling approaches in a pediatric and/or clinical population have not yet been conducted and, therefore, formed the aim of this study. Magnetic resonance images (MRI) and motion capture data were collected from 12 participants with cerebral palsy and 6 typically developed participants. Accuracy was assessed by comparing the scaled model's segment measures to the corresponding MRI measures, whereas reliability was assessed by comparing the model's segments scaled with the experimental marker locations from the first and second motion capture session. The inclusion of joint centers into the scaling process significantly increased the accuracy of thigh and shank segment length estimates compared to scaling with markers alone. Pelvis scaling approaches which included the pelvis depth measure led to the highest errors compared to the MRI measures. Reliability was similar between scaling approaches with mean ICC of 0.97. The pelvis should be scaled using pelvic width and height and the thigh and shank segment should be scaled using the proximal and distal joint centers.
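
    The joint-centre scaling rule evaluated here reduces to a ratio of lengths: the subject's joint-centre distance over the corresponding generic-model segment length. A minimal sketch with made-up coordinates and an assumed generic thigh length, not values from the study:

```python
# Joint-centre scaling: segment scale factor = subject joint-centre distance
# divided by the generic model's segment length. All numbers are invented.
import numpy as np

def segment_scale(proximal, distal, generic_length):
    return np.linalg.norm(np.asarray(distal) - np.asarray(proximal)) / generic_length

hip_centre = [0.00, 0.00, 0.00]      # subject hip joint centre (m), hypothetical
knee_centre = [0.02, -0.41, 0.01]    # subject knee joint centre (m), hypothetical
GENERIC_THIGH_LENGTH = 0.40          # generic model thigh length (m), assumed

print("thigh scale factor:",
      round(segment_scale(hip_centre, knee_centre, GENERIC_THIGH_LENGTH), 3))
```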

  8. Adjacent-Categories Mokken Models for Rater-Mediated Assessments

    ERIC Educational Resources Information Center

    Wind, Stefanie A.

    2017-01-01

    Molenaar extended Mokken's original probabilistic-nonparametric scaling models for use with polytomous data. These polytomous extensions of Mokken's original scaling procedure have facilitated the use of Mokken scale analysis as an approach to exploring fundamental measurement properties across a variety of domains in which polytomous ratings are…

  9. A Mathematical Approach in Evaluating Biotechnology Attitude Scale: Rough Set Data Analysis

    ERIC Educational Resources Information Center

    Narli, Serkan; Sinan, Olcay

    2011-01-01

    Individuals' thoughts and attitudes towards biotechnology have been investigated in many countries. A Likert-type scale is the most commonly used scale to measure attitude. However, the weak side of a Likert-type scale is that different responses may produce the same score. The Rough set method has been suggested as a way to address this shortcoming. A…

  10. Estimating returns to scale and scale efficiency for energy consuming appliances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blum, Helcio; Okwelum, Edson O.

    Energy consuming appliances accounted for over 40% of the energy use and $17 billion in sales in the U.S. in 2014. Whether such amounts of money and energy were optimally combined to produce household energy services is not straightforwardly determined. The efficient allocation of capital and energy to provide an energy service has been previously approached, and solved with Data Envelopment Analysis (DEA) under constant returns to scale. That approach, however, lacks the scale dimension of the problem and may restrict the economically efficient models of an appliance available in the market when constant returns to scale does not hold. We expand on that approach to estimate returns to scale for energy-using appliances. We further calculate DEA scale efficiency scores for the technically efficient models that comprise the economically efficient frontier of the energy service delivered, under different assumptions of returns to scale. We then apply this approach to evaluate dishwashers available in the market in the U.S. Our results show that (a) for the case of dishwashers scale matters, and (b) the dishwashing energy service is delivered under non-decreasing returns to scale. The results further demonstrate that this method contributes to increasing consumers' choice of appliances.
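
    A DEA efficiency score of the kind used here can be obtained from a small linear program per appliance (DMU). The sketch below solves an input-oriented, constant-returns-to-scale (CCR) envelopment model with invented price, energy, and capacity figures; it is a generic illustration, not the paper's dataset or exact formulation.

```python
# Input-oriented, constant-returns-to-scale DEA (CCR envelopment form):
# one small LP per appliance (DMU). All numbers are invented.
import numpy as np
from scipy.optimize import linprog

# Columns are DMUs (appliances); rows are inputs / outputs.
X = np.array([[400.0, 600.0, 500.0, 800.0],     # purchase price ($)
              [250.0, 300.0, 220.0, 400.0]])    # annual energy use (kWh)
Y = np.array([[10.0, 12.0, 12.0, 13.0]])        # service capacity (place settings)

def ccr_efficiency(j0):
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimize theta
    A_ub, b_ub = [], []
    for i in range(m):                           # sum_j lam_j x_ij <= theta * x_i,j0
        A_ub.append(np.concatenate(([-X[i, j0]], X[i])))
        b_ub.append(0.0)
    for r in range(s):                           # sum_j lam_j y_rj >= y_r,j0
        A_ub.append(np.concatenate(([0.0], -Y[r])))
        b_ub.append(-Y[r, j0])
    bounds = [(0.0, None)] * (1 + n)             # theta >= 0, lambda >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
    return res.fun

for j in range(X.shape[1]):
    print(f"appliance {j}: CRS efficiency {ccr_efficiency(j):.2f}")
```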

  11. Electromagnetic scaling functions within the Green's function Monte Carlo approach

    DOE PAGES

    Rocco, N.; Alvarez-Ruso, L.; Lovato, A.; ...

    2017-07-24

    We have studied the scaling properties of the electromagnetic response functions of 4He and 12C nuclei computed by the Green's function Monte Carlo approach, retaining only the one-body current contribution. Longitudinal and transverse scaling functions have been obtained in the relativistic and nonrelativistic cases and compared to experiment for various kinematics. The characteristic asymmetric shape of the scaling function exhibited by data emerges in the calculations in spite of the nonrelativistic nature of the model. The results are mostly consistent with scaling of zeroth, first, and second kinds. Our analysis reveals a direct correspondence between the scaling and the nucleon-density response functions. In conclusion, the scaling function obtained from the proton-density response displays scaling of the first kind, even more evidently than the longitudinal and transverse scaling functions.

  12. Electromagnetic scaling functions within the Green's function Monte Carlo approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rocco, N.; Alvarez-Ruso, L.; Lovato, A.

    We have studied the scaling properties of the electromagnetic response functions of 4He and 12C nuclei computed by the Green's function Monte Carlo approach, retaining only the one-body current contribution. Longitudinal and transverse scaling functions have been obtained in the relativistic and nonrelativistic cases and compared to experiment for various kinematics. The characteristic asymmetric shape of the scaling function exhibited by data emerges in the calculations in spite of the nonrelativistic nature of the model. The results are mostly consistent with scaling of zeroth, first, and second kinds. Our analysis reveals a direct correspondence between the scaling and the nucleon-density response functions. In conclusion, the scaling function obtained from the proton-density response displays scaling of the first kind, even more evidently than the longitudinal and transverse scaling functions.

  13. Application of Open Source Technologies for Oceanographic Data Analysis

    NASA Astrophysics Data System (ADS)

    Huang, T.; Gangl, M.; Quach, N. T.; Wilson, B. D.; Chang, G.; Armstrong, E. M.; Chin, T. M.; Greguska, F.

    2015-12-01

    NEXUS is a data-intensive analysis solution developed with a new approach for handling science data that enables large-scale data analysis by leveraging open source technologies such as Apache Cassandra, Apache Spark, Apache Solr, and Webification. NEXUS has been selected to provide on-the-fly time-series and histogram generation for the Soil Moisture Active Passive (SMAP) mission for Level 2 and Level 3 Active, Passive, and Active Passive products. It also provides an on-the-fly data subsetting capability. NEXUS is designed to scale horizontally, enabling it to handle massive amounts of data in parallel. It takes a new approach to managing time- and geo-referenced array data by dividing data artifacts into chunks and storing them in an industry-standard, horizontally scaled NoSQL database. This approach enables the development of scalable data analysis services that can infuse and leverage the elastic computing infrastructure of the Cloud. It is equipped with a high-performance geospatial and indexed data search solution, coupled with a high-performance data Webification solution free from file I/O bottlenecks, as well as a high-performance, in-memory data analysis engine. In this talk, we will focus on the recently funded AIST 2014 project, which uses NEXUS as the core of an oceanographic anomaly detection service and web portal. We call it OceanXtremes.
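
    The chunking idea described above can be illustrated with a few lines of code: a (time, lat, lon) array is split into fixed-size spatial tiles keyed by their indices, so that a spatial query only touches the relevant tiles. The tile size, key format, and the in-memory dictionary standing in for the NoSQL store are assumptions for illustration, not NEXUS's actual schema.

```python
import numpy as np

def tile_array(data, tile_shape=(30, 30)):
    """Split a (time, lat, lon) array into fixed-size spatial tiles keyed by
    (time index, lat tile index, lon tile index) -- a stand-in for a NoSQL row key."""
    tiles = {}
    nt, nlat, nlon = data.shape
    th, tw = tile_shape
    for t in range(nt):
        for i in range(0, nlat, th):
            for j in range(0, nlon, tw):
                tiles[(t, i // th, j // tw)] = data[t, i:i + th, j:j + tw]
    return tiles

# Toy usage: store a synthetic SST field as tiles and fetch one tile back
sst = np.random.rand(10, 180, 360).astype(np.float32)
store = tile_array(sst)
print(len(store), "tiles; first tile shape:", store[(0, 0, 0)].shape)
```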

  14. A systems approach to assess farm-scale nutrient and trace element dynamics: a case study at the Ojebyn dairy farm.

    PubMed

    Oborn, Ingrid; Modin-Edman, Anna-Karin; Bengtsson, Helena; Gustafson, Gunnela M; Salomon, Eva; Nilsson, S Ingvar; Holmqvist, Johan; Jonsson, Simon; Sverdrup, Harald

    2005-06-01

    A systems analysis approach was used to assess farm-scale nutrient and trace element sustainability by combining full-scale field experiments with specific studies of nutrient release from mineral weathering and trace-element cycling. At the Ojebyn dairy farm in northern Sweden, a farm-scale case study including phosphorus (P), potassium (K), and zinc (Zn) was run to compare organic and conventional agricultural management practices. By combining different element-balance approaches (at farmgate, barn, and field scales) and further adapting these to the FARMFLOW model, we were able to combine mass flows and pools within the subsystems and establish links between subsystems in order to make farm-scale predictions. It was found that internal element flows on the farm are large and that there are farm-internal sources (Zn) and loss terms (K). The approaches developed and tested at the Ojebyn farm are promising and considered generally adaptable to any farm.

  15. Investigating the Measurement Properties of the Social Responsiveness Scale in Preschool Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Duku, Eric; Vaillancourt, Tracy; Szatmari, Peter; Georgiades, Stelios; Zwaigenbaum, Lonnie; Smith, Isabel M.; Bryson, Susan; Fombonne, Eric; Mirenda, Pat; Roberts, Wendy; Volden, Joanne; Waddell, Charlotte; Thompson, Ann; Bennett, Teresa

    2013-01-01

    The purpose of this study was to examine the measurement properties of the Social Responsiveness Scale in an accelerated longitudinal sample of 4-year-old preschool children with the complementary approaches of categorical confirmatory factor analysis and Rasch analysis. Measurement models based on the literature and other hypothesized measurement…

  16. Incorporating resource protection constraints in an analysis of landscape fuel-treatment effectiveness in the northern Sierra Nevada, CA, USA

    Treesearch

    Christopher B. Dow; Brandon M. Collins; Scott L. Stephens

    2016-01-01

    Finding novel ways to plan and implement landscape-level forest treatments that protect sensitive wildlife and other key ecosystem components, while also reducing the risk of large-scale, high-severity fires, can prove to be difficult. We examined alternative approaches to landscape-scale fuel-treatment design for the same landscape. These approaches included two...

  17. Spatio-temporal hierarchy in the dynamics of a minimalist protein model

    NASA Astrophysics Data System (ADS)

    Matsunaga, Yasuhiro; Baba, Akinori; Li, Chun-Biu; Straub, John E.; Toda, Mikito; Komatsuzaki, Tamiki; Berry, R. Stephen

    2013-12-01

    A method for time series analysis of molecular dynamics simulation of a protein is presented. In this approach, wavelet analysis and principal component analysis are combined to decompose the spatio-temporal protein dynamics into contributions from a hierarchy of different time and space scales. Unlike the conventional Fourier-based approaches, the time-localized wavelet basis captures the vibrational energy transfers among the collective motions of proteins. As an illustrative vehicle, we have applied our method to a coarse-grained minimalist protein model. During the folding and unfolding transitions of the protein, vibrational energy transfers between the fast and slow time scales were observed among the large-amplitude collective coordinates while the other small-amplitude motions are regarded as thermal noise. Analysis employing a Gaussian-based measure revealed that the time scales of the energy redistribution in the subspace spanned by such large-amplitude collective coordinates are slow compared to the other small-amplitude coordinates. Future prospects of the method are discussed in detail.
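
    A minimal sketch of the wavelet-plus-PCA combination described above is given below: the trajectory is projected onto its first principal component, and a continuous wavelet transform (PyWavelets) localizes that component's activity in time and scale. The toy trajectory, the Morlet wavelet, and the scale range are assumptions for illustration, not the authors' setup.

```python
import numpy as np
import pywt

# Toy "trajectory": 1000 frames x 12 coordinates with one slow collective mode plus noise
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 1000)
traj = np.outer(np.sin(0.5 * t), rng.standard_normal(12)) + 0.1 * rng.standard_normal((1000, 12))

# Principal component analysis via SVD of the mean-centered trajectory
centered = traj - traj.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ Vt[0]                        # projection onto the first collective coordinate

# Continuous wavelet transform of the first PC: time-localized power across scales
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(pc1, scales, "morl")
power = np.abs(coeffs) ** 2
print("wavelet power matrix shape:", power.shape)   # (n_scales, n_frames)
```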

  18. Inverse transformation: unleashing spatially heterogeneous dynamics with an alternative approach to XPCS data analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. In conclusion, using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.

  19. Inverse transformation: unleashing spatially heterogeneous dynamics with an alternative approach to XPCS data analysis

    DOE PAGES

    Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan; ...

    2018-02-01

    X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. In conclusion, using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
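
    The simplest version of this inverse-transform idea can be sketched as follows: a measured decay is expanded on a grid of candidate relaxation times and the weights are recovered with non-negative least squares. This is a CONTIN-like sketch without regularization, using synthetic bimodal data; it is not the authors' algorithm or data.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic bimodal decay: two relaxation times, as might arise from heterogeneous dynamics
rng = np.random.default_rng(0)
t = np.logspace(-3, 2, 120)
g = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 5.0) + 0.005 * rng.standard_normal(t.size)

# Grid of candidate relaxation times; each column of K is a single-exponential decay
taus = np.logspace(-4, 3, 200)
K = np.exp(-t[:, None] / taus[None, :])

# Non-negative least squares recovers a distribution of time scales (unregularized sketch)
weights, _ = nnls(K, g)
peaks = taus[weights > 0.05 * weights.max()]
print("recovered time-scale clusters near:", peaks)
```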

  20. Long-Term High-Level Defense-Waste technology

    NASA Astrophysics Data System (ADS)

    1982-07-01

    In the residual liquid solidification effort, the primary alternative studied is the wiped film evaporator approach to solidifying salt well pumped liquids and returning the molten material to single shell tanks for microwave final stabilization to a hard dry product. Both systems analysis and experimental work are proceeding to evaluate this approach. The primary alternative for in situ stabilization of in-tank wastes is microwave drying of wet salt cake and unpumped sludges. Experimental work was successfully conducted on a 1/12 scale tank containing wet synthetic salt cake. Related systems analysis of a full scale system was initiated.

  1. Using multi-scale entropy and principal component analysis to monitor gears degradation via the motor current signature analysis

    NASA Astrophysics Data System (ADS)

    Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir

    2017-06-01

    This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly-detection tool is presented, based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. Motor current signature analysis (MCSA) is used in conjunction with principal component analysis (PCA), and observed values are compared with those predicted from a model built using nominally healthy data. Simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
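
    The multi-scale entropy idea can be sketched in a few lines: coarse-grain the signal at several scales and compute sample entropy (SampEn) at each. The embedding dimension, tolerance, and toy signals below are assumptions for illustration only, and the PCA-based monitoring step of the paper is not shown.

```python
import numpy as np

def sample_entropy(x, m=2, tol=None, r=0.2):
    """Sample entropy (SampEn) of a 1-D series: embedding dimension m, tolerance tol
    (defaults to r times the standard deviation of x)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if tol is None:
        tol = r * np.std(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n - dim + 1)])
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)  # Chebyshev distance
            count += np.sum(dist <= tol)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (MSE coarse-graining)."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], dtype=float).reshape(-1, scale).mean(axis=1)

def multiscale_entropy(x, scales=range(1, 6), m=2, r=0.2):
    tol = r * np.std(x)        # tolerance fixed from the original series, as in standard MSE
    return [sample_entropy(coarse_grain(x, s), m=m, tol=tol) for s in scales]

# Toy usage: MSE of simulated "healthy" vs. "faulty" current signatures (invented signals)
rng = np.random.default_rng(0)
healthy = np.sin(np.linspace(0, 60, 3000)) + 0.3 * rng.standard_normal(3000)
faulty = healthy + 0.2 * np.sign(np.sin(np.linspace(0, 600, 3000)))   # periodic impact-like component
print("healthy:", np.round(multiscale_entropy(healthy), 3))
print("faulty: ", np.round(multiscale_entropy(faulty), 3))
```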

  2. Development of life story experience (LSE) scales for migrant dentists in Australia: a sequential qualitative-quantitative study.

    PubMed

    Balasubramanian, M; Spencer, A J; Short, S D; Watkins, K; Chrisopoulos, S; Brennan, D S

    2016-09-01

    The integration of qualitative and quantitative approaches introduces new avenues to bridge the strengths and address the weaknesses of both methods. The aim was to develop measure(s) for migrant dentist experiences in Australia through a mixed methods approach. The sequential qualitative-quantitative design involved first the harvesting of data items from a qualitative study, followed by a national survey of migrant dentists in Australia. Statements representing unique experiences in migrant dentists' life stories were deployed in the survey questionnaire, using a five-point Likert scale. Factor analysis was used to examine component factors. Eighty-two statements from 51 participants were harvested from the qualitative analysis. A total of 1,022 of 1,977 migrant dentists (response rate 54.5%) returned completed questionnaires. Factor analysis supported an initial eight-factor solution; further scale development and reliability analysis led to five scales with a final list of 38 life story experience (LSE) items. Three scales were based on home country events: health system and general lifestyle concerns (LSE1; 10 items), society and culture (LSE4; 4 items) and career development (LSE5; 4 items). Two scales included migrant experiences in Australia: appreciation towards Australian way of life (LSE2; 13 items) and settlement concerns (LSE3; 7 items). The five life story experience scales provided necessary conceptual clarity and empirical grounding to explore migrant dentist experiences in Australia. Being based on original migrant dentist narrations, these scales have the potential to offer in-depth insights for policy makers and support future research on dentist migration. Copyright © 2016 Dennis Barber Ltd

  3. Global analysis of approaches for deriving total water storage changes from GRACE satellites and implications for groundwater storage change estimation

    NASA Astrophysics Data System (ADS)

    Long, D.; Scanlon, B. R.; Longuevergne, L.; Chen, X.

    2015-12-01

    Increasing interest in use of GRACE satellites and a variety of new products to monitor changes in total water storage (TWS) underscores the need to assess the reliability of output from different products. The objective of this study was to assess skills and uncertainties of different approaches for processing GRACE data to restore signal losses caused by spatial filtering based on analysis of 1°×1° grid scale data and basin scale data in 60 river basins globally. Results indicate that scaling factors from six land surface models (LSMs), including four models from GLDAS-1 (Noah 2.7, Mosaic, VIC, and CLM 2.0), CLM 4.0, and WGHM, are similar over most humid, sub-humid, and high-latitude regions but can differ by up to 100% over arid and semi-arid basins and areas with intensive irrigation. Large differences in TWS anomalies from three processing approaches (scaling factor, additive, and multiplicative corrections) were found in arid and semi-arid regions, areas with intensive irrigation, and relatively small basins (e.g., ≤ 200,000 km2). Furthermore, TWS anomaly products from gridded data with CLM4.0 scaling factors and the additive correction approach more closely agree with WGHM output than the multiplicative correction approach. Estimation of groundwater storage changes using GRACE satellites requires caution in selecting an appropriate approach for restoring TWS changes. A priori ground-based data used in forward modeling can provide a powerful tool for explaining the distribution of signal gains or losses caused by low-pass filtering in specific regions of interest and should be very useful for more reliable estimation of groundwater storage changes using GRACE satellites.
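
    The three signal-restoration options named above can be illustrated on a single synthetic time series: a "model truth", its low-pass-filtered counterpart, and a noisy filtered observation. The moving-average filter and the series are invented, and real GRACE processing operates on spherical-harmonic or gridded fields rather than one series; this is only a sketch of the arithmetic behind the scaling-factor, additive, and multiplicative corrections.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(120)
model_tws = 10 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, months.size)  # "true" model TWS
kernel = np.ones(5) / 5
model_filt = np.convolve(model_tws, kernel, mode="same")       # stands in for spatial low-pass filtering
grace_filt = model_filt + rng.normal(0, 0.5, months.size)      # filtered "observation"

# 1) scaling-factor correction: k minimizes ||model_tws - k * model_filt||^2
k = np.dot(model_tws, model_filt) / np.dot(model_filt, model_filt)
grace_scaled = k * grace_filt

# 2) additive correction: add back the signal lost to filtering, estimated from the model
grace_additive = grace_filt + (model_tws - model_filt)

# 3) multiplicative correction: rescale by the model's local amplitude ratio
ratio = np.divide(model_tws, model_filt, out=np.ones_like(model_tws), where=np.abs(model_filt) > 1e-6)
grace_mult = grace_filt * ratio

for name, series in [("scaling", grace_scaled), ("additive", grace_additive), ("multiplicative", grace_mult)]:
    rmse = np.sqrt(np.mean((series - model_tws) ** 2))
    print(f"{name:14s} RMSE vs model truth: {rmse:.2f}")
```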

  4. Gender Invariance of Family, School, and Peer Influence on Volunteerism Scale

    ERIC Educational Resources Information Center

    Law, Ben; Shek, Daniel; Ma, Cecilia

    2015-01-01

    Objective: This article examines the measurement invariance of "Family, School, and Peer Influence on Volunteerism Scale" (FSPV) across genders using the mean and covariance structure analysis approach. Method: A total of 2,845 Chinese high school adolescents aged 11 to 15 years completed the FSPV scale. Results: Results of the…

  5. A novel nonparametric item response theory approach to measuring socioeconomic position: a comparison using household expenditure data from a Vietnam health survey, 2003

    PubMed Central

    2014-01-01

    Background Measures of household socio-economic position (SEP) are widely used in health research. There exist a number of approaches to their measurement, with Principal Components Analysis (PCA) applied to a basket of household assets being one of the most common. PCA, however, carries a number of assumptions about the distribution of the data which may be untenable, and alternative, non-parametric, approaches may be preferred. Mokken scale analysis is a non-parametric, item response theory approach to scale development which appears never to have been applied to household asset data. A Mokken scale can be used to rank order items (measures of wealth) as well as households. Using data on household asset ownership from a national sample of 4,154 consenting households in the World Health Survey from Vietnam, 2003, we construct two measures of household SEP. Seventeen items asking about assets, and utility and infrastructure use were used. Mokken Scaling and PCA were applied to the data. A single item measure of total household expenditure is used as a point of contrast. Results An 11 item scale, out of the 17 items, was identified that conformed to the assumptions of a Mokken Scale. All the items in the scale were identified as strong items (Hi > .5). Two PCA measures of SEP were developed as a point of contrast. One PCA measure was developed using all 17 available asset items, the other used the reduced set of 11 items identified in the Mokken scale analysis. The Mokken Scale measure of SEP and the 17 item PCA measure had a very high correlation (r = .98), and they both correlated moderately with total household expenditure: r = .59 and r = .57 respectively. In contrast the 11 item PCA measure correlated moderately with the Mokken scale (r = .68), and weakly with the total household expenditure (r = .18). Conclusion The Mokken scale measure of household SEP performed at least as well as PCA, and outperformed the PCA measure developed with the 11 items used in the Mokken scale. Unlike PCA, Mokken scaling carries no assumptions about the underlying shape of the distribution of the data, and can be used simultaneously to order household SEP and items. The approach, however, has not been tested with data from other countries and remains an interesting, but under-researched, approach. PMID:25126103

  6. A novel nonparametric item response theory approach to measuring socioeconomic position: a comparison using household expenditure data from a Vietnam health survey, 2003.

    PubMed

    Reidpath, Daniel D; Ahmadi, Keivan

    2014-01-01

    Measures of household socio-economic position (SEP) are widely used in health research. There exist a number of approaches to their measurement, with Principal Components Analysis (PCA) applied to a basket of household assets being one of the most common. PCA, however, carries a number of assumptions about the distribution of the data which may be untenable, and alternative, non-parametric, approaches may be preferred. Mokken scale analysis is a non-parametric, item response theory approach to scale development which appears never to have been applied to household asset data. A Mokken scale can be used to rank order items (measures of wealth) as well as households. Using data on household asset ownership from a national sample of 4,154 consenting households in the World Health Survey from Vietnam, 2003, we construct two measures of household SEP. Seventeen items asking about assets, and utility and infrastructure use were used. Mokken Scaling and PCA were applied to the data. A single item measure of total household expenditure is used as a point of contrast. An 11 item scale, out of the 17 items, was identified that conformed to the assumptions of a Mokken Scale. All the items in the scale were identified as strong items (Hi > .5). Two PCA measures of SEP were developed as a point of contrast. One PCA measure was developed using all 17 available asset items, the other used the reduced set of 11 items identified in the Mokken scale analysis. The Mokken Scale measure of SEP and the 17 item PCA measure had a very high correlation (r = .98), and they both correlated moderately with total household expenditure: r = .59 and r = .57 respectively. In contrast the 11 item PCA measure correlated moderately with the Mokken scale (r = .68), and weakly with the total household expenditure (r = .18). The Mokken scale measure of household SEP performed at least as well as PCA, and outperformed the PCA measure developed with the 11 items used in the Mokken scale. Unlike PCA, Mokken scaling carries no assumptions about the underlying shape of the distribution of the data, and can be used simultaneously to order household SEP and items. The approach, however, has not been tested with data from other countries and remains an interesting, but under-researched, approach.
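
    A minimal sketch of the scalability coefficient underlying Mokken scale analysis for dichotomous (owned/not owned) asset items is shown below: the scale-level Loevinger H is the sum of inter-item covariances divided by the sum of their maxima given the item marginals. The simulated asset matrix is invented; full analyses (for example with the R mokken package) also compute item-level Hi and check monotonicity, which this sketch omits.

```python
import numpy as np

def loevinger_H(X):
    """Scale-level Loevinger H for dichotomous (0/1) items: rows = households, columns = items.
    H = sum of inter-item covariances divided by the sum of their maxima given the item marginals."""
    X = np.asarray(X, dtype=float)
    p = X.mean(axis=0)                              # item "popularities" (ownership proportions)
    cov = np.cov(X, rowvar=False, bias=True)
    num = den = 0.0
    k = X.shape[1]
    for i in range(k):
        for j in range(i + 1, k):
            num += cov[i, j]
            den += min(p[i], p[j]) - p[i] * p[j]    # maximum covariance given the two marginals
    return num / den

# Toy example: 1000 households, 5 asset items driven by a single latent "wealth" variable
rng = np.random.default_rng(42)
wealth = rng.standard_normal(1000)
thresholds = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])          # items from common to rare
assets = (wealth[:, None] + 0.7 * rng.standard_normal((1000, 5)) > thresholds).astype(int)
print("scale H =", round(loevinger_H(assets), 3))            # H > 0.5 indicates a strong Mokken scale
```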

  7. Scale/Analytical Analyses of Freezing and Convective Melting with Internal Heat Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali S. Siahpush; John Crepeau; Piyush Sabharwall

    2013-07-01

    Using a scale/analytical analysis approach, we model phase change (melting) for pure materials with constant internal heat generation for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. The analysis also considers a constant heat flux boundary condition (in a cylindrical geometry). We show the time scales over which conduction and convection heat transfer dominate.

  8. Modeling Individual Differences in Unfolding Preference Data: A Restricted Latent Class Approach.

    ERIC Educational Resources Information Center

    Bockenholt, Ulf; Bockenholt, Ingo

    1990-01-01

    A latent-class scaling approach is presented for modeling paired comparison and "pick any/t" data obtained in preference studies. The utility of this approach is demonstrated through analysis of data from studies involving consumer preference and preference for political candidates. (SLD)

  9. A qualitative exploration of the human resource policy implications of voluntary counselling and testing scale-up in Kenya: applying a model for policy analysis

    PubMed Central

    2011-01-01

    Background Kenya experienced rapid scale up of HIV testing and counselling services in government health services from 2001. We set out to examine the human resource policy implications of scaling up HIV testing and counselling in Kenya and to analyse the resultant policy against a recognised theoretical framework of health policy reform (policy analysis triangle). Methods Qualitative methods were used to gain in-depth insights from policy makers who shaped scale up. This included 22 in-depth interviews with Voluntary Counselling and Testing (VCT) task force members, critical analysis of 53 sets of minutes and diary notes. We explore points of consensus and conflict amongst policymakers in Kenya and analyse this content to assess who favoured and resisted new policies, how scale up was achieved and the importance of the local context in which scale up occurred. Results The scale up of VCT in Kenya had a number of human resource policy implications resulting from the introduction of lay counsellors and their authorisation to conduct rapid HIV testing using newly introduced rapid testing technologies. Our findings indicate that three key groups of actors were critical: laboratory professionals, counselling associations and the Ministry of Health. Strategic alliances between donors, NGOs and these three key groups underpinned the process. The process of reaching consensus required compromise and time commitment but was critical to a unified nationwide approach. Policies around quality assurance were integral in ensuring standardisation of content and approach. Conclusion The introduction and scale up of new health service initiatives such as HIV voluntary counselling and testing necessitates changes to existing health systems and modification of entrenched interests around professional counselling and laboratory testing. Our methodological approach enabled exploration of complexities of scale up of HIV testing and counselling in Kenya. We argue that a better understanding of the diverse actors, the context and the process, is required to mitigate risks and maximise impact. PMID:22008721

  10. An integrated approach for automated cover-type mapping of large inaccessible areas in Alaska

    USGS Publications Warehouse

    Fleming, Michael D.

    1988-01-01

    The lack of any detailed cover-type maps in the state necessitated that a rapid and accurate approach be employed to develop maps for 329 million acres of Alaska within a seven-year period. This goal has been addressed by using an integrated approach to computer-aided analysis which combines efficient use of field data with the only consistent statewide spatial data sets available: Landsat multispectral scanner data, digital elevation data derived from 1:250 000-scale maps, and 1:60 000-scale color-infrared aerial photographs.

  11. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  12. PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows

    DOE PAGES

    Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...

    2015-07-14

    Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.

  13. Discontinuities, cross-scale patterns, and the organization of ecosystems

    USGS Publications Warehouse

    Nash, Kirsty L.; Allen, Craig R.; Angeler, David G.; Barichievy, Chris; Eason, Tarsha; Garmestani, Ahjond S.; Graham, Nicholas A.J.; Granholm, Dean; Knutson, Melinda; Nelson, R. John; Nystrom, Magnus; Stow, Craig A.; Sandstrom, Shana M.

    2014-01-01

    Ecological structures and processes occur at specific spatiotemporal scales, and interactions that occur across multiple scales mediate scale-specific (e.g., individual, community, local, or regional) responses to disturbance. Despite the importance of scale, explicitly incorporating a multi-scale perspective into research and management actions remains a challenge. The discontinuity hypothesis provides a fertile avenue for addressing this problem by linking measureable proxies to inherent scales of structure within ecosystems. Here we outline the conceptual framework underlying discontinuities and review the evidence supporting the discontinuity hypothesis in ecological systems. Next we explore the utility of this approach for understanding cross-scale patterns and the organization of ecosystems by describing recent advances for examining nonlinear responses to disturbance and phenomena such as extinctions, invasions, and resilience. To stimulate new research, we present methods for performing discontinuity analysis, detail outstanding knowledge gaps, and discuss potential approaches for addressing these gaps.

  14. [An across-scales analysis of the voice self-concept questionnaire (FESS)].

    PubMed

    Nusseck, Manfred; Richter, Bernhard; Echternach, Matthias; Spahn, Claudia

    2018-04-01

    The questionnaire for the assessment of the voice self-concept (FESS) contains three sub-scales indicating the personal relationship with one's own voice. The scales address the relationship with one's own voice, the awareness of the use of one's own voice, and the perception of the connection between voice and emotional changes. A comprehensive approach across the three scales supporting a simplified interpretation of the results was still missing. The FESS questionnaire was used in a sample of 536 German teachers. With a discriminant analysis, commonalities in the scale characteristics were investigated. For a comparative validation with voice health and psychological and physiological wellbeing, the Voice Handicap Index (VHI), the questionnaire for Work-related Behavior and Experience Patterns (AVEM), and the questionnaire for Health-related Quality of Life (SF-12) were additionally collected. The analysis provided four different groups of voice self-concept: group 1 with healthy values in the voice self-concept and wellbeing scales, group 2 with a low voice self-concept and mean wellbeing values, group 3 with a high awareness of voice use and mean wellbeing values, and group 4 with low values in all scales. The results show that a combined approach across all scales of the questionnaire for the assessment of the voice self-concept enables a more detailed interpretation of the characteristics in the voice self-concept. The presented groups offer a practical tool to support medical diagnoses. © Georg Thieme Verlag KG Stuttgart · New York.

  15. Broad and Narrow CHC Abilities Measured and Not Measured by the Wechsler Scales: Moving beyond Within-Battery Factor Analysis

    ERIC Educational Resources Information Center

    Flanagan, Dawn P.; Alfonso, Vincent C.; Reynolds, Matthew R.

    2013-01-01

    In this commentary, we reviewed two clinical validation studies on the Wechsler Scales conducted by Weiss and colleagues. These researchers used a rigorous within-battery model-fitting approach that demonstrated the factorial invariance of the Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) and Wechsler Adult Intelligence…

  16. Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis

    PubMed Central

    2012-01-01

    Background Multiplexing has become the major limitation of next-generation sequencing (NGS) when applied to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Results Here we introduce pair-barcode sequencing (PBS), an economical and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously discovered by the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated and about 64% of them were assigned to both of the barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured from the two clinical groups. The strong correlation using different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. Conclusions By employing the PBS approach in NGS, large-scale multiplexed pooled samples could be practically analyzed in parallel so that high-throughput sequencing economically meets the requirements of samples with low sequencing-throughput demand. PMID:22276739

  17. Pair-barcode high-throughput sequencing for large-scale multiplexed sample analysis.

    PubMed

    Tu, Jing; Ge, Qinyu; Wang, Shengqin; Wang, Lei; Sun, Beili; Yang, Qi; Bai, Yunfei; Lu, Zuhong

    2012-01-25

    Multiplexing has become the major limitation of next-generation sequencing (NGS) when applied to low-complexity samples. Physical space segregation allows limited multiplexing, while the existing barcode approach only permits simultaneous analysis of up to several dozen samples. Here we introduce pair-barcode sequencing (PBS), an economical and flexible barcoding technique that permits parallel analysis of large-scale multiplexed samples. In two pilot runs using a SOLiD sequencer (Applied Biosystems Inc.), 32 independent pair-barcoded miRNA libraries were simultaneously discovered by the combination of 4 unique forward barcodes and 8 unique reverse barcodes. Over 174,000,000 reads were generated and about 64% of them were assigned to both of the barcodes. After mapping all reads to pre-miRNAs in miRBase, different miRNA expression patterns were captured from the two clinical groups. The strong correlation using different barcode pairs and the high consistency of miRNA expression in two independent runs demonstrate that the PBS approach is valid. By employing the PBS approach in NGS, large-scale multiplexed pooled samples could be practically analyzed in parallel so that high-throughput sequencing economically meets the requirements of samples with low sequencing-throughput demand.
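
    The combinatorial idea (4 forward x 8 reverse barcodes addressing 32 libraries) can be sketched as a simple demultiplexing step. The barcode sequences, their placement at the read ends, and the exact-match rule below are assumptions for illustration; real pipelines tolerate mismatches and use the instrument's dedicated barcode reads.

```python
import itertools
from collections import defaultdict

# Hypothetical barcode sets: 4 forward x 8 reverse = 32 pair-barcode combinations
FWD = ["ACGT", "TGCA", "GATC", "CTAG"]
REV = ["AAGG", "CCTT", "GGAA", "TTCC", "AGAG", "TCTC", "GAGA", "CTCT"]

def assign_library(read):
    """Assign a read to a library by exact-matching its 5' forward and 3' reverse barcodes."""
    for f, r in itertools.product(FWD, REV):
        if read.startswith(f) and read.endswith(r):
            insert = read[len(f):len(read) - len(r)]
            return (f, r), insert
    return None, None            # unassigned (barcode pair not recovered)

def demultiplex(reads):
    libraries = defaultdict(list)
    for read in reads:
        pair, insert = assign_library(read)
        if pair is not None:
            libraries[pair].append(insert)
    return libraries

# Toy usage
reads = ["ACGT" + "TTGGCCAATT" + "AAGG", "TGCA" + "GGAATTCC" + "CTCT", "NNNNNNNN"]
print({k: v for k, v in demultiplex(reads).items()})
```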

  18. Gait-force model and inertial measurement unit-based measurements: A new approach for gait analysis and balance monitoring.

    PubMed

    Li, Xinan; Xu, Hongyuan; Cheung, Jeffrey T

    2016-12-01

    This work describes a new approach for gait analysis and balance measurement. It uses an inertial measurement unit (IMU) that can either be embedded inside a dynamically unstable platform for balance measurement or mounted on the lower back of a human participant for gait analysis. The acceleration data along three Cartesian coordinates is analyzed by the gait-force model to extract bio-mechanics information in both the dynamic state as in the gait analyzer and the steady state as in the balance scale. For the gait analyzer, the simple, noninvasive and versatile approach makes it appealing to a broad range of applications in clinical diagnosis, rehabilitation monitoring, athletic training, sport-apparel design, and many other areas. For the balance scale, it provides a portable platform to measure the postural deviation and the balance index under visual or vestibular sensory input conditions. Despite its simple construction and operation, excellent agreement has been demonstrated between its performance and the high-cost commercial balance unit over a wide dynamic range. The portable balance scale is an ideal tool for routine monitoring of balance index, fall-risk assessment, and other balance-related health issues for both clinical and household use.

  19. Multiscale Metabolic Modeling: Dynamic Flux Balance Analysis on a Whole-Plant Scale

    PubMed Central

    Grafahrend-Belau, Eva; Junker, Astrid; Eschenröder, André; Müller, Johannes; Schreiber, Falk; Junker, Björn H.

    2013-01-01

    Plant metabolism is characterized by a unique complexity on the cellular, tissue, and organ levels. On a whole-plant scale, changing source and sink relations accompanying plant development add another level of complexity to metabolism. With the aim of achieving a spatiotemporal resolution of source-sink interactions in crop plant metabolism, a multiscale metabolic modeling (MMM) approach was applied that integrates static organ-specific models with a whole-plant dynamic model. Allowing for a dynamic flux balance analysis on a whole-plant scale, the MMM approach was used to decipher the metabolic behavior of source and sink organs during the generative phase of the barley (Hordeum vulgare) plant. It reveals a sink-to-source shift of the barley stem caused by the senescence-related decrease in leaf source capacity, which is not sufficient to meet the nutrient requirements of sink organs such as the growing seed. The MMM platform represents a novel approach for the in silico analysis of metabolism on a whole-plant level, allowing for a systemic, spatiotemporally resolved understanding of metabolic processes involved in carbon partitioning, thus providing a novel tool for studying yield stability and crop improvement. PMID:23926077
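
    A bare-bones static flux balance analysis step, of the kind a dynamic, whole-plant framework would call repeatedly for each organ and time step, can be written as a small linear program. The toy stoichiometric network, flux bounds, and biomass objective below are invented for illustration and are not the MMM barley models.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal FBA sketch on a toy 3-metabolite, 4-reaction network.
# Columns of S: uptake, conversion, biomass, export (hypothetical reactions).
S = np.array([
    [1, -1,  0,  0],    # metabolite A
    [0,  1, -1,  0],    # metabolite B
    [0,  0,  1, -1],    # metabolite C
])
lb = [0, 0, 0, 0]
ub = [10, 10, 10, 10]          # uptake limited to 10 flux units
c = [0, 0, -1, 0]              # maximize biomass flux (reaction 3) -> minimize its negative

# Steady state: S v = 0, with flux bounds lb <= v <= ub
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=list(zip(lb, ub)), method="highs")
print("optimal fluxes:", res.x)          # expected: all fluxes at the uptake limit
```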

  20. Approaching the exa-scale: a real-world evaluation of rendering extremely large data sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patchett, John M; Ahrens, James P; Lo, Li - Ta

    2010-10-15

    Extremely large-scale analysis is becoming increasingly important as supercomputers and their simulations move from petascale to exascale. The lack of dedicated hardware acceleration for rendering on today's supercomputing platforms motivates our detailed evaluation of the possibility of interactive rendering on the supercomputer. In order to facilitate our understanding of rendering on the supercomputing platform, we focus on scalability of rendering algorithms and architecture envisioned for exascale datasets. To understand tradeoffs for dealing with extremely large datasets, we compare three different rendering algorithms for large polygonal data: software-based ray tracing, software-based rasterization, and hardware-accelerated rasterization. We present a case study of strong and weak scaling of rendering extremely large data on both GPU- and CPU-based parallel supercomputers using ParaView, a parallel visualization tool. We use three different data sets: two synthetic and one from a scientific application. At an extreme scale, algorithmic rendering choices make a difference and should be considered while approaching exascale computing, visualization, and analysis. We find software-based ray tracing offers a viable approach for scalable rendering of the projected future massive data sizes.

  1. Energy Decomposition Analysis Based on Absolutely Localized Molecular Orbitals for Large-Scale Density Functional Theory Calculations in Drug Design.

    PubMed

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2016-07-12

    We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.
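
    For orientation, the partition named above can be written schematically as a sum of the listed components; the notation here is generic (the exact grouping of frozen terms in the ONETEP scheme may differ).

```latex
E_{\mathrm{int}} \;=\; \Delta E_{\mathrm{elec}} \;+\; \Delta E_{\mathrm{exch}} \;+\; \Delta E_{\mathrm{corr}}
\;+\; \Delta E_{\mathrm{Pauli}} \;+\; \Delta E_{\mathrm{pol}} \;+\; \Delta E_{\mathrm{CT}}
```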

  2. Improving Attachments of Non-Invasive (Type III) Electronic Data Loggers to Cetaceans

    DTIC Science & Technology

    2015-09-30

    animals in human care will be performed to test and validate this approach. The cadaver trials will enable controlled testing to failure or with both...quantitative metrics and analysis tools to assess the impact of a tag on the animal. Here we will present: 1) the characterization of the mechanical...fine scale motion analysis for swimming animals. 2 APPROACH Our approach is divided into four subtasks: Task 1: Forces and failure modes

  3. Fully Coupled Micro/Macro Deformation, Damage, and Failure Prediction for SiC/Ti-15-3 Laminates

    NASA Technical Reports Server (NTRS)

    Bednarcyk, Brett A.; Arnold, Steven M.; Lerch, Brad A.

    2001-01-01

    The deformation, failure, and low cycle fatigue life of SCS-6/Ti-15-3 composites are predicted using a coupled deformation and damage approach in the context of the analytical generalized method of cells (GMC) micromechanics model. The local effects of inelastic deformation, fiber breakage, fiber-matrix interfacial debonding, and fatigue damage are included as sub-models that operate on the micro scale for the individual composite phases. For the laminate analysis, lamination theory is employed as the global or structural scale model, while GMC is embedded to operate on the meso scale to simulate the behavior of the composite material within each laminate layer. While the analysis approach is quite complex and multifaceted, it is shown, through comparison with experimental data, to be quite accurate and realistic while remaining extremely efficient.

  4. Multiscale recurrence quantification analysis of order recurrence plots

    NASA Astrophysics Data System (ADS)

    Xu, Mengjia; Shang, Pengjian; Lin, Aijing

    2017-03-01

    In this paper, we propose a new method of multiscale recurrence quantification analysis (MSRQA) to analyze the structure of order recurrence plots. The MSRQA is based on order patterns over a range of time scales. Compared with conventional recurrence quantification analysis (RQA), the MSRQA reveals richer and more recognizable information on the local characteristics of diverse systems, successfully describing their recurrence properties. Both synthetic series and stock market indexes exhibit recurrence properties at large time scales that differ considerably from those at a single time scale. Some systems present more accurate recurrence patterns at large time scales. This demonstrates that the new approach is effective for distinguishing three similar stock market systems and revealing some inherent differences.
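
    A simplified sketch in the spirit of this approach is given below: coarse-grain a series over several time scales, build a recurrence matrix from ordinal (order) patterns, and track a basic RQA measure (recurrence rate) across scales. The toy series, embedding dimension, and use of exact pattern equality are assumptions, not the authors' exact definitions.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n], float).reshape(-1, scale).mean(axis=1)

def order_pattern_recurrence(x, dim=3, tau=1):
    """Recurrence matrix of order patterns: entry (i, j) = 1 when the rank ordering
    of the embedded vectors at i and j is identical."""
    n = len(x) - (dim - 1) * tau
    patterns = np.array([np.argsort(x[i:i + dim * tau:tau]) for i in range(n)])
    return (patterns[:, None, :] == patterns[None, :, :]).all(axis=2).astype(int)

def recurrence_rate(R):
    return R.sum() / R.size

# Multiscale sketch: order-pattern recurrence rate across time scales for a toy random walk
x = np.cumsum(np.random.default_rng(5).standard_normal(600))
for s in (1, 2, 4, 8):
    rr = recurrence_rate(order_pattern_recurrence(coarse_grain(x, s)))
    print(f"scale {s}: order-pattern recurrence rate = {rr:.3f}")
```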

  5. Using scale and feather traits for module construction provides a functional approach to chicken epidermal development.

    PubMed

    Bao, Weier; Greenwold, Matthew J; Sawyer, Roger H

    2017-11-01

    Gene co-expression network analysis has been a research method widely used in systematically exploring gene function and interaction. Using the Weighted Gene Co-expression Network Analysis (WGCNA) approach to construct a gene co-expression network using data from a customized 44K microarray transcriptome of chicken epidermal embryogenesis, we have identified two distinct modules that are highly correlated with scale or feather development traits. Signaling pathways related to feather development were enriched in the traditional KEGG pathway analysis and functional terms relating specifically to embryonic epidermal development were also enriched in the Gene Ontology analysis. Significant enrichment annotations were discovered from customized enrichment tools such as Modular Single-Set Enrichment Test (MSET) and Medical Subject Headings (MeSH). Hub genes in both trait-correlated modules showed strong specific functional enrichment toward epidermal development. Also, regulatory elements, such as transcription factors and miRNAs, were targeted in the significant enrichment result. This work highlights the advantage of this methodology for functional prediction of genes not previously associated with scale- and feather trait-related modules.

  6. A multiscale-based approach for composite materials with embedded PZT filaments for energy harvesting

    NASA Astrophysics Data System (ADS)

    El-Etriby, Ahmed E.; Abdel-Meguid, Mohamed E.; Hatem, Tarek M.; Bahei-El-Din, Yehia A.

    2014-03-01

    Ambient vibrations are a major source of wasted energy; properly exploited, such vibrations can be converted to useful energy and harvested to power devices such as electronics. Accordingly, energy harvesting using smart structures with active piezoelectric ceramics has gained wide interest over the past few years as a method for converting such wasted energy. This paper provides numerical and experimental analysis of piezoelectric fiber-based composites for energy-harvesting applications, proposing a multi-scale modeling approach coupled with experimental verification. The suggested multi-scale approach uses a micromechanical model based on Transformation Field Analysis (TFA) to calculate the overall material properties of the electrically active composite structure. Capitalizing on the calculated properties, single-phase analysis of a homogeneous structure is conducted using the finite element method. The experimental work involves running dynamic tests on piezoelectric fiber-based composites to simulate mechanical vibrations experienced by subway train floor tiles. Experimental results agree well with the numerical results for both static and dynamic tests.

  7. Data Curation and Visualization for MuSIASEM Analysis of the Nexus

    NASA Astrophysics Data System (ADS)

    Renner, Ansel

    2017-04-01

    A novel software-based approach to relational analysis applying recent theoretical advancements of the Multi-Scale Integrated Analysis of Societal and Ecosystem Metabolism (MuSIASEM) accounting framework is presented. This research explores and explains underutilized ways software can assist complex system analysis across the stages of data collection, exploration, analysis, and dissemination, in a transparent and collaborative manner. This work is being conducted as part of, and in support of, the four-year European Commission H2020 project: Moving Towards Adaptive Governance in Complexity: Informing Nexus Security (MAGIC). In MAGIC, theoretical advancements to MuSIASEM propose a powerful new approach to spatial-temporal WEFC relational analysis in accordance with a structural-functional scaling mechanism appropriate for biophysically relevant complex system analyses. Software is designed primarily with JavaScript using the Angular2 model-view-controller framework and the Data-Driven Documents (D3) library. These design choices clarify and modularize data flow, simplify research practitioners' work, allow for and assist stakeholder involvement, and advance collaboration at all stages. Data requirements and scalable, robust yet light-weight structuring will first be explained. Next, algorithms to process these data will be explored. Data interfaces and data visualization approaches will lastly be presented and described.

  8. The psychometric characteristics of the revised depression attitude questionnaire (R-DAQ) in Pakistani medical practitioners: a cross-sectional study of doctors in Lahore.

    PubMed

    Haddad, Mark; Waqas, Ahmed; Sukhera, Ahmed Bashir; Tarar, Asad Zaman

    2017-07-27

    Depression is a common mental health problem and a leading contributor to the global burden of disease. The attitudes and beliefs of the public and of health professionals influence social acceptance and affect the esteem and help-seeking of people experiencing mental health problems. The attitudes of clinicians are particularly relevant to their role in accurately recognising and providing appropriate support and management of depression. This study examines the characteristics of the revised depression attitude questionnaire (R-DAQ) with doctors working in healthcare settings in Lahore, Pakistan. A cross-sectional survey was conducted in 2015 using the revised depression attitude questionnaire (R-DAQ). A convenience sample of 700 medical practitioners based in six hospitals in Lahore was approached to participate in the survey. The R-DAQ structure was examined using parallel analysis based on polychoric correlations. Unweighted least squares analysis (ULSA) was used for factor extraction. Model fit was estimated using goodness-of-fit indices and the root mean square of standardized residuals (RMSR), and internal consistency reliability for the overall scale and subscales was assessed using reliability estimates based on Mislevy and Bock (BILOG 3: Item Analysis and Test Scoring with Binary Logistic Models. Mooresville: Scientific Software, 55) and McDonald's omega statistic. Findings using this approach were compared with principal axis factor analysis based on a Pearson correlation matrix. 601 (86%) of the doctors approached consented to participate in the study. Exploratory factor analysis of R-DAQ scale responses demonstrated the same 3-factor structure as in the UK development study, though analyses indicated removal of 7 of the 22 items because of weak loading or poor model fit. The 3-factor solution accounted for 49.8% of the common variance. Scale reliability and internal consistency were adequate: total scale standardised alpha was 0.694; subscale reliability for professional confidence was 0.732, therapeutic optimism/pessimism was 0.638, and generalist perspective was 0.769. The R-DAQ was developed with a predominantly UK-based sample of health professionals. This study indicates that this scale functions adequately and provides a valid measure of depression attitudes for medical practitioners in Pakistan, with the same factor structure as in the scale development sample. However, optimal scale function necessitated removal of several items, with a 15-item scale enabling the most parsimonious factor solution for this population.
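
    The parallel-analysis step used for deciding how many factors to retain can be sketched as follows; for simplicity the sketch uses Pearson rather than polychoric correlations and simulated normal data, which are assumptions rather than the study's exact procedure.

```python
import numpy as np

def parallel_analysis(data, n_sim=200, percentile=95, seed=0):
    """Horn's parallel analysis sketch: retain factors whose observed eigenvalues exceed
    the chosen percentile of eigenvalues obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, k = data.shape
    obs_eig = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim = np.empty((n_sim, k))
    for s in range(n_sim):
        r = rng.standard_normal((n, k))
        sim[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    threshold = np.percentile(sim, percentile, axis=0)
    return int(np.sum(obs_eig > threshold)), obs_eig, threshold

# Toy usage: 300 respondents, 15 Likert-like items generated from 3 latent factors
rng = np.random.default_rng(7)
latent = rng.standard_normal((300, 3))
loadings = rng.uniform(0.4, 0.9, size=(3, 15))
items = latent @ loadings + rng.standard_normal((300, 15))
n_factors, _, _ = parallel_analysis(items)
print("factors suggested by parallel analysis:", n_factors)
```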

  9. Dense image matching of terrestrial imagery for deriving high-resolution topographic properties of vegetation locations in alpine terrain

    NASA Astrophysics Data System (ADS)

    Niederheiser, R.; Rutzinger, M.; Bremer, M.; Wichmann, V.

    2018-04-01

    The investigation of changes in spatial patterns of vegetation and identification of potential micro-refugia requires detailed topographic and terrain information. However, mapping alpine topography at very detailed scales is challenging due to limited accessibility of sites. Close-range sensing by photogrammetric dense matching approaches based on terrestrial images captured with hand-held cameras offers a light-weight and low-cost solution to retrieve high-resolution measurements even in steep terrain and at locations that are difficult to access. We propose a novel approach for rapid capturing of terrestrial images and a highly automated processing chain for retrieving detailed dense point clouds for topographic modelling. For this study, we modelled 249 plot locations. For the analysis of vegetation distribution and location properties, topographic parameters, such as slope, aspect, and potential solar irradiation, were derived by applying a multi-scale approach utilizing voxel grids and spherical neighbourhoods. The result is a micro-topography archive of 249 alpine locations that includes topographic parameters at multiple scales ready for biogeomorphological analysis. Compared with regional elevation models at larger scales and traditional 2D gridding approaches to create elevation models, we employ analyses in a fully 3D environment that yield much more detailed insights into interrelations between topographic parameters, such as potential solar irradiation, surface area, aspect and roughness.
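
    A small sketch of deriving plot-scale slope and aspect from a point cloud by plane-fitting within a spherical neighbourhood, repeated over several radii for a multi-scale description, is shown below. The neighbourhood radii, the toy terrain, and the plane-fit convention are assumptions, not the authors' processing chain.

```python
import numpy as np

def slope_aspect(points, center, radius):
    """Plane-fit slope/aspect from points within a spherical neighbourhood of `center`.
    Slope in degrees from horizontal; aspect in degrees clockwise from +y ("north")."""
    d = np.linalg.norm(points - center, axis=1)
    nbr = points[d <= radius]
    c = nbr.mean(axis=0)
    _, _, vt = np.linalg.svd(nbr - c)            # normal = singular vector of the smallest singular value
    n = vt[-1]
    if n[2] < 0:
        n = -n                                   # make the normal point upward
    slope = np.degrees(np.arccos(np.clip(n[2], -1.0, 1.0)))
    aspect = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    return slope, aspect

# Toy terrain: a tilted plane with small-scale roughness, sampled at 5000 points
rng = np.random.default_rng(3)
xy = rng.uniform(-2, 2, size=(5000, 2))
z = 0.4 * xy[:, 0] + 0.1 * xy[:, 1] + 0.05 * rng.standard_normal(5000)
pts = np.column_stack([xy, z])
for r in (0.25, 0.5, 1.0):                       # multi-scale neighbourhoods
    print(f"radius {r}: slope/aspect =", slope_aspect(pts, np.array([0.0, 0.0, 0.0]), r))
```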

  10. Elementary dispersion analysis of some mimetic discretizations on triangular C-grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korn, P., E-mail: peter.korn@mpimet.mpg.de; Danilov, S.; A.M. Obukhov Institute of Atmospheric Physics, Moscow

    2017-02-01

    Spurious modes supported by triangular C-grids limit their application for modeling large-scale atmospheric and oceanic flows. Their behavior can be modified within a mimetic approach that generalizes the scalar product underlying the triangular C-grid discretization. The mimetic approach provides a discrete continuity equation which operates on an averaged combination of normal edge velocities instead of normal edge velocities proper. An elementary analysis of the wave dispersion of the new discretization for Poincaré, Rossby and Kelvin waves shows that, although spurious Poincaré modes are preserved, their frequency tends to zero in the limit of small wavenumbers, which removes the divergence noise in this limit. However, the frequencies of spurious and physical modes become close on shorter scales indicating that spurious modes can be excited unless high-frequency short-scale motions are effectively filtered in numerical codes. We argue that filtering by viscous dissipation is more efficient in the mimetic approach than in the standard C-grid discretization. Lumping of mass matrices appearing with the velocity time derivative in the mimetic discretization only slightly reduces the accuracy of the wave dispersion and can be used in practice. Thus, the mimetic approach cures some difficulties of the traditional triangular C-grid discretization but may still need appropriately tuned viscosity to filter small scales and high frequencies in solutions of full primitive equations when these are excited by nonlinear dynamics.

  11. Development of the Large-Scale Forcing Data to Support MC3E Cloud Modeling Studies

    NASA Astrophysics Data System (ADS)

    Xie, S.; Zhang, Y.

    2011-12-01

    The large-scale forcing fields (e.g., vertical velocity and advective tendencies) are required to run single-column and cloud-resolving models (SCMs/CRMs), which are the two key modeling frameworks widely used to link field data to climate model developments. In this study, we use an advanced objective analysis approach to derive the required forcing data from the soundings collected by the Midlatitude Continental Convective Cloud Experiment (MC3E) in support of its cloud modeling studies. MC3E is the latest major field campaign conducted during the period 22 April 2011 to 06 June 2011 in south-central Oklahoma through a joint effort between the DOE ARM program and the NASA Global Precipitation Measurement Program. One of its primary goals is to provide a comprehensive dataset that can be used to describe the large-scale environment of convective cloud systems and evaluate model cumulus parameterizations. The objective analysis used in this study is the constrained variational analysis method. A unique feature of this approach is the use of domain-averaged surface and top-of-the-atmosphere (TOA) observations (e.g., precipitation and radiative and turbulent fluxes) as constraints to adjust atmospheric state variables from soundings by the smallest possible amount to conserve column-integrated mass, moisture, and static energy so that the final analysis data is dynamically and thermodynamically consistent. To address potential uncertainties in the surface observations, an ensemble forcing dataset will be developed. Multi-scale forcing will also be created for simulating convective systems at various scales. At the meeting, we will provide more details about the forcing development and present some preliminary analysis of the characteristics of the large-scale forcing structures for several selected convective systems observed during MC3E.

  12. Validation of the ANOCOVA model for regional scale ECa-ECe calibration

    USDA-ARS?s Scientific Manuscript database

    Over the past decade two approaches have emerged as the preferred means for assessing salinity at regional scale: (1) vegetative indices from satellite imagery (e.g., MODIS enhanced vegetative index, NDVI, etc.) and (2) analysis of covariance (ANOCOVA) calibration of apparent soil electrical conduct...

  13. A field assessment of the value of steady shape hydraulic tomography for characterization of aquifer heterogeneities

    USGS Publications Warehouse

    Bohling, Geoffrey C.; Butler, James J.; Zhan, Xiaoyong; Knoll, Michael D.

    2007-01-01

    Hydraulic tomography is a promising approach for obtaining information on variations in hydraulic conductivity on the scale of relevance for contaminant transport investigations. This approach involves performing a series of pumping tests in a format similar to tomography. We present a field‐scale assessment of hydraulic tomography in a porous aquifer, with an emphasis on the steady shape analysis methodology. The hydraulic conductivity (K) estimates from steady shape and transient analyses of the tomographic data compare well with those from a tracer test and direct‐push permeameter tests, providing a field validation of the method. Zonations based on equal‐thickness layers and cross‐hole radar surveys are used to regularize the inverse problem. The results indicate that the radar surveys provide some useful information regarding the geometry of the K field. The steady shape analysis provides results similar to the transient analysis at a fraction of the computational burden. This study clearly demonstrates the advantages of hydraulic tomography over conventional pumping tests, which provide only large‐scale averages, and small‐scale hydraulic tests (e.g., slug tests), which cannot assess strata connectivity and may fail to sample the most important pathways or barriers to flow.

  14. Modal-pushover-based ground-motion scaling procedure

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2011-01-01

    Earthquake engineering is increasingly using nonlinear response history analysis (RHA) to demonstrate the performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. This paper presents a modal-pushover-based scaling (MPS) procedure to scale ground motions for use in a nonlinear RHA of buildings. In the MPS method, the ground motions are scaled to match, to a specified tolerance, a target value of the inelastic deformation of the first-mode inelastic single-degree-of-freedom (SDF) system whose properties are determined by the first-mode pushover analysis. Appropriate for first-mode dominated structures, this approach is extended for structures with significant contributions of higher modes by considering elastic deformation of second-mode SDF systems in selecting a subset of the scaled ground motions. Based on results presented for three actual buildings (4-, 6-, and 13-story), the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.
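
    The matching step of such a procedure can be illustrated with a rough sketch (not the MPS algorithm itself): integrate an elastic-perfectly-plastic SDF system under a scaled record and bisect on the scale factor until the peak inelastic deformation reaches the target. The period, damping, yield strength, and the synthetic record below are all illustrative assumptions, and the bisection presumes the peak deformation grows monotonically with the scale factor.

        import numpy as np

        def peak_epp_deformation(ag, dt, T=1.0, zeta=0.05, fy=2.0):
            """Peak deformation of an elastic-perfectly-plastic SDF system (per unit
            mass), integrated with a simple semi-implicit Euler scheme."""
            wn = 2 * np.pi / T
            k, c = wn ** 2, 2 * zeta * wn
            u = np.zeros(len(ag))
            v, fs, up = 0.0, 0.0, 0.0            # velocity, restoring force, plastic offset
            for i in range(len(ag) - 1):
                a = -ag[i] - c * v - fs
                v += a * dt
                u[i + 1] = u[i] + v * dt
                fs = k * (u[i + 1] - up)
                if fs > fy:                      # yielding in one direction
                    up, fs = u[i + 1] - fy / k, fy
                elif fs < -fy:                   # yielding in the other direction
                    up, fs = u[i + 1] + fy / k, -fy
            return np.max(np.abs(u))

        def scale_to_target(ag, dt, target, tol=0.01):
            """Bisect on the scale factor until the peak deformation matches the target."""
            lo, hi = 0.01, 10.0
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                d = peak_epp_deformation(mid * ag, dt)
                if abs(d - target) <= tol * target:
                    return mid
                lo, hi = (mid, hi) if d < target else (lo, mid)
            return 0.5 * (lo + hi)

        dt = 0.005
        t = np.arange(0.0, 20.0, dt)
        ag = 3.0 * np.sin(2 * np.pi * 1.2 * t) * np.exp(-0.2 * t)   # synthetic record (m/s^2)
        print(scale_to_target(ag, dt, target=0.05))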

  15. The Scaling Group of the 1-D Inviscid Euler Equations

    NASA Astrophysics Data System (ADS)

    Schmidt, Emma; Ramsey, Scott; Boyd, Zachary; Baty, Roy

    2017-11-01

    The one dimensional (1-D) compressible Euler equations in non-ideal media support scale invariant solutions under a variety of initial conditions. Famous scale invariant solutions include the Noh, Sedov, Guderley, and collapsing cavity hydrodynamic test problems. We unify many classical scale invariant solutions under a single scaling group analysis. The scaling symmetry group generator provides a framework for determining all scale invariant solutions admitted by the 1-D Euler equations for arbitrary geometry, initial conditions, and equation of state. We approach the Euler equations from a geometric standpoint, and conduct scaling analyses for a broad class of materials.
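
    As a point of reference (assuming an ideal-gas equation of state, a simplification relative to the non-ideal media treated in the paper), the 1-D Euler equations in planar, cylindrical, or spherical symmetry and the stretching group they admit can be written as

        \[
        \partial_t \rho + u\,\partial_r \rho + \rho\,\partial_r u + \frac{k\rho u}{r} = 0,\qquad
        \partial_t u + u\,\partial_r u + \frac{1}{\rho}\,\partial_r p = 0,\qquad
        \partial_t p + u\,\partial_r p + \gamma p\left(\partial_r u + \frac{k u}{r}\right) = 0,
        \]

    with k = 0, 1, 2 for planar, cylindrical, and spherical geometry. These equations are invariant under the one-parameter scaling

        \[
        r \to \lambda r,\qquad t \to \lambda^{1/\alpha} t,\qquad u \to \lambda^{1-1/\alpha} u,\qquad \rho \to \rho,\qquad p \to \lambda^{2-2/\alpha} p,
        \]

    so self-similar solutions depend only on the similarity variable \(\xi = r/t^{\alpha}\); the Sedov and Guderley problems, for example, correspond to particular values of \(\alpha\) fixed by the boundary or energy conditions.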

  16. How widely applicable is river basin management? An analysis of wastewater management in an arid transboundary case.

    PubMed

    Dombrowsky, Ines; Almog, Ram; Becker, Nir; Feitelson, Eran; Klawitter, Simone; Lindemann, Stefan; Mutlak, Natalie

    2010-05-01

    The basin scale has been promoted universally as the optimal management unit that allows for the internalization of all external effects caused by multiple water uses. However, the basin scale has been put forward largely on the basis of experience in temperate zones. Hence whether the basin scale is the best scale for management in other settings remains questionable. To address these questions, this paper analyzes the economic viability and the political feasibility of alternative management options in the Kidron/Wadi Nar region. The Kidron/Wadi Nar is a small basin in which wastewater from eastern Jerusalem flows through the desert to the Dead Sea. Various options for managing these wastewater flows were analyzed ex ante on the basis of both a cost-benefit and a multi-criteria analysis. The paper finds that due to economies of scale, a pure basin approach is not desirable from a physical and economic perspective. Furthermore, in terms of political feasibility, it seems that the option which prompts the fewest objections from influential stakeholder groups in the two entities under the current asymmetrical political setting is not a basin solution either, but a two-plant solution based on an outsourcing arrangement. These findings imply that the river basin management approach cannot be considered the best management approach for the arid transboundary case at hand, and hence cannot be regarded as universally applicable.

  17. How Widely Applicable is River Basin Management? An Analysis of Wastewater Management in an Arid Transboundary Case

    NASA Astrophysics Data System (ADS)

    Dombrowsky, Ines; Almog, Ram; Becker, Nir; Feitelson, Eran; Klawitter, Simone; Lindemann, Stefan; Mutlak, Natalie

    2010-05-01

    The basin scale has been promoted universally as the optimal management unit that allows for the internalization of all external effects caused by multiple water uses. However, the basin scale has been put forward largely on the basis of experience in temperate zones. Hence whether the basin scale is the best scale for management in other settings remains questionable. To address these questions, this paper analyzes the economic viability and the political feasibility of alternative management options in the Kidron/Wadi Nar region. The Kidron/Wadi Nar is a small basin in which wastewater from eastern Jerusalem flows through the desert to the Dead Sea. Various options for managing these wastewater flows were analyzed ex ante on the basis of both a cost-benefit and a multi-criteria analysis. The paper finds that due to economies of scale, a pure basin approach is not desirable from a physical and economic perspective. Furthermore, in terms of political feasibility, it seems that the option which prompts the fewest objections from influential stakeholder groups in the two entities under the current asymmetrical political setting is not a basin solution either, but a two-plant solution based on an outsourcing arrangement. These findings imply that the river basin management approach cannot be considered the best management approach for the arid transboundary case at hand, and hence cannot be regarded as universally applicable.

  18. Stream network analysis and geomorphic flood plain mapping from orbital and suborbital remote sensing imagery application to flood hazard studies in central Texas

    NASA Technical Reports Server (NTRS)

    Baker, V. R. (Principal Investigator); Holz, R. K.; Hulke, S. D.; Patton, P. C.; Penteado, M. M.

    1975-01-01

    The author has identified the following significant results. Development of a quantitative hydrogeomorphic approach to flood hazard evaluation was hindered by (1) problems of resolution and definition of the morphometric parameters which have hydrologic significance, and (2) mechanical difficulties in creating the necessary volume of data for meaningful analysis. Measures of network resolution such as drainage density and basin Shreve magnitude indicated that large scale topographic maps offered greater resolution than small scale suborbital imagery and orbital imagery. The disparity in network resolution capabilities between orbital and suborbital imagery formats depends on factors such as rock type, vegetation, and land use. The problem of morphometric data analysis was approached by developing a computer-assisted method for network analysis. The system allows rapid identification of network properties which can then be related to measures of flood response.

  19. Parallel computing for probabilistic fatigue analysis

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Lua, Yuan J.; Smith, Mark D.

    1993-01-01

    This paper presents the results of Phase I research to investigate the most effective parallel processing software strategies and hardware configurations for probabilistic structural analysis. We investigate the efficiency of both shared and distributed-memory architectures via a probabilistic fatigue life analysis problem. We also present a parallel programming approach, the virtual shared-memory paradigm, that is applicable across both types of hardware. Using this approach, problems can be solved on a variety of parallel configurations, including networks of single or multiprocessor workstations. We conclude that it is possible to effectively parallelize probabilistic fatigue analysis codes; however, special strategies will be needed to achieve large-scale parallelism, to keep large numbers of processors busy, and to treat problems with the large memory requirements encountered in practice. We also conclude that a distributed-memory architecture is preferable to a shared-memory one for achieving large-scale parallelism; however, in the future, the currently emerging hybrid-memory architectures will likely be optimal.

  20. Wind-tunnel results of the aerodynamic characteristics of a 1/8-scale model of a twin engine short-haul transport. [in Langley V/STOL tunnel

    NASA Technical Reports Server (NTRS)

    Paulson, J. W., Jr.

    1977-01-01

    A wind tunnel test was conducted in the Langley V/STOL tunnel to define the aerodynamic characteristics of a 1/8-scale twin-engine short haul transport. The model was tested in both the cruise and approach configurations with various control surfaces deflected. Data were obtained out of ground effect for the cruise configuration and both in and out of ground effect for the approach configuration. These data are intended to be a reference point to begin the analysis of the flight characteristics of the NASA terminal configured vehicle (TCV) and are presented without analysis.

  1. Scaling characteristics of mountainous river flow fluctuations determined using a shallow-water acoustic tomography system

    NASA Astrophysics Data System (ADS)

    Al Sawaf, Mohamad Basel; Kawanisi, Kiyosi; Kagami, Junya; Bahreinimotlagh, Masoud; Danial, Mochammad Meddy

    2017-10-01

    The aim of this study is to investigate the scaling exponent properties of mountainous river flow fluctuations by detrended fluctuation analysis (DFA). Streamflow data were collected continuously using the Fluvial Acoustic Tomography System (FATS), a novel system for measuring continuous streamflow at high frequency. The results revealed that river discharge fluctuations have two scaling regimes separated by a scaling break. In contrast to the Rating Curve method (RC), the small-scale exponent detected by the FATS is estimated to be 1.02 ± 0.42% less than that estimated by RC. More importantly, the crossover times evaluated from the FATS are delayed by approximately 42 ± 21 hr (≈2-3 days) relative to their counterparts estimated by RC. Power spectral density analysis supports these findings. We found that the scaling characteristics evaluated for a river using discharge data obtained by the RC approach might not be accurately detected, because this classical method assumes that the flow in the river is steady and depends on constructing a relationship between discharge and water level, whereas the discharge obtained by the FATS decomposes velocity and depth into two ratings according to the continuity equation. Overall, this work highlights the performance of FATS as a powerful and effective approach for continuous streamflow measurement at high frequency.
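
    For readers unfamiliar with DFA, the sketch below shows a minimal order-1 implementation in Python: integrate the demeaned series, detrend it within non-overlapping windows of increasing size, and read the scaling exponent off the log-log slope of the fluctuation function. The window sizes and the crossover-detection step used in the study are not reproduced; the white-noise check at the end is only a sanity test.

        import numpy as np

        def dfa(x, scales, order=1):
            """Detrended fluctuation analysis: returns F(s) for each window size s.
            The scaling exponent is the slope of log F(s) versus log s."""
            x = np.asarray(x, dtype=float)
            y = np.cumsum(x - x.mean())               # integrated (profile) series
            F = []
            for s in scales:
                n = len(y) // s                        # number of non-overlapping windows
                segs = y[:n * s].reshape(n, s)
                t = np.arange(s)
                rms = []
                for seg in segs:
                    coef = np.polyfit(t, seg, order)   # local polynomial trend
                    rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            return np.array(F)

        # Sanity check: the exponent is ~0.5 for white noise.
        rng = np.random.default_rng(0)
        x = rng.standard_normal(20000)
        scales = np.unique(np.logspace(1, 3, 20).astype(int))
        F = dfa(x, scales)
        alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
        print(round(alpha, 2))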

  2. Analysis of urban area land cover using SEASAT Synthetic Aperture Radar data

    NASA Technical Reports Server (NTRS)

    Henderson, F. M. (Principal Investigator)

    1980-01-01

    Digitally processed SEASAT synthetic aperture radar (SAR) imagery of the Denver, Colorado urban area was examined to explore the potential of SAR data for mapping urban land cover and the compatibility of SAR-derived land cover classes with the United States Geological Survey classification system. The imagery is examined at three different scales to determine the effect of image enlargement on accuracy and the level of detail extractable. At each scale the value of employing a simplistic preprocessing smoothing algorithm to improve image interpretation is addressed. A visual interpretation approach and an automated machine/visual approach are employed to evaluate the feasibility of producing a semiautomated land cover classification from SAR data. Confusion matrices of omission and commission errors are employed to define classification accuracies for each interpretation approach and image scale.
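
    As an illustration of that last step, omission and commission error rates fall directly out of a confusion matrix; the class names and counts below are invented for the example and do not correspond to the Denver results.

        import numpy as np

        # Rows = reference (ground truth) classes, columns = interpreted (mapped) classes.
        classes = ["residential", "commercial", "open land"]
        cm = np.array([[52,  6,  2],
                       [ 9, 34,  5],
                       [ 4,  3, 45]])

        correct = np.diag(cm)
        omission = 1.0 - correct / cm.sum(axis=1)     # reference samples missed for each class
        commission = 1.0 - correct / cm.sum(axis=0)   # mapped samples wrongly assigned to each class
        overall = correct.sum() / cm.sum()

        for name, o, c in zip(classes, omission, commission):
            print(f"{name:12s} omission {o:.2f}  commission {c:.2f}")
        print(f"overall accuracy {overall:.2f}")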

  3. A holistic approach for large-scale derived flood frequency analysis

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Apel, Heiko; Hundecha, Yeshewatesfa; Guse, Björn; Sergiy, Vorogushyn; Merz, Bruno

    2017-04-01

    Spatial consistency, which has usually been disregarded because of methodological difficulties, is increasingly demanded in regional flood hazard (and risk) assessments. This study aims at developing a holistic approach for consistently deriving flood frequencies at large scales. A large-scale two-component model has been established for simulating very long-term, multisite synthetic meteorological fields and flood flows at many gauged and ungauged locations, hence reflecting the inherent spatial heterogeneity. The model has been applied to a region of nearly half a million km2 comprising Germany and parts of neighbouring countries. The model performance has been examined multi-objectively, with a focus on extremes. With this continuous simulation approach, flood quantiles for the studied region have been derived successfully and provide useful input for a comprehensive flood risk study.
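
    The final step of such a derived (continuous-simulation) flood frequency analysis can be sketched as fitting an extreme-value distribution to the annual maxima of the simulated discharge series. In the sketch below the "simulated" series is a random stand-in, and the GEV choice is an assumption, not necessarily the distribution used by the authors.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        # Stand-in for a long simulated daily discharge series (1000 synthetic "years").
        years, days = 1000, 365
        q = rng.gamma(shape=2.0, scale=50.0, size=(years, days))

        annual_max = q.max(axis=1)                         # annual maximum series
        c, loc, scale = stats.genextreme.fit(annual_max)   # GEV fit (scipy parameterization)

        return_periods = np.array([10, 50, 100, 500])
        flood_quantiles = stats.genextreme.ppf(1.0 - 1.0 / return_periods, c, loc=loc, scale=scale)
        for T, qT in zip(return_periods, flood_quantiles):
            print(f"{T:4d}-year flood: {qT:7.1f} (simulation units)")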

  4. Numerical and experimental analysis of an in-scale masonry cross-vault prototype up to failure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rossi, Michela; Calderini, Chiara; Lagomarsino, Sergio

    2015-12-31

    A heterogeneous full 3D non-linear FE approach is validated against experimental results obtained on an in-scale masonry cross vault assembled with dry joints and subjected to various loading conditions consisting of displacement combinations imposed on the abutments. The FE model relies on a discretization of the blocks by means of a few rigid, infinitely resistant parallelepiped elements interacting through planar four-noded interfaces, where all the deformation (elastic and inelastic) occurs. The investigated response mechanisms of the vault are the in-plane shear distortion and the longitudinal opening and closing mechanism at the abutments. After validation of the approach on the experimentally tested cross vault, a sensitivity analysis is conducted on the same geometry, but at real scale, varying the mechanical properties of the mortar joints, in order to furnish useful hints for safety assessment, especially in the presence of seismic action.

  5. Contemporary militant extremism: a linguistic approach to scale development.

    PubMed

    Stankov, Lazar; Higgins, Derrick; Saucier, Gerard; Knežević, Goran

    2010-06-01

    In this article, the authors describe procedures used in the development of a new scale of militant extremist mindset. A 2-step approach consisted of (a) linguistic analysis of the texts produced by known terrorist organizations and selection of statements from these texts that reflect the mindset of those belonging to these organizations and (b) analyses of the structural properties of the scales based on 132 selected statements. Factor analysis of militant extremist statements with participants (N = 452) from Australia, Serbia, and the United States produced 3 dimensions: (a) justification and advocacy of violence (War factor), (b) violence in the name of God (God factor), and (c) blaming Western nations for the problems in the world today (West factor). We also report the distributions of scores for the 3 subscales, mean differences among the 3 national samples, and correlations with a measure of dogmatism (M. Rokeach, 1956).

  6. Preparing Laboratory and Real-World EEG Data for Large-Scale Analysis: A Containerized Approach

    PubMed Central

    Bigdely-Shamlo, Nima; Makeig, Scott; Robbins, Kay A.

    2016-01-01

    Large-scale analysis of EEG and other physiological measures promises new insights into brain processes and more accurate and robust brain–computer interface models. However, the absence of standardized vocabularies for annotating events in a machine-understandable manner, the welter of collection-specific data organizations, the difficulty in moving data across processing platforms, and the unavailability of agreed-upon standards for preprocessing have prevented large-scale analyses of EEG. Here we describe a “containerized” approach and freely available tools we have developed to facilitate the process of annotating, packaging, and preprocessing EEG data collections to enable data sharing, archiving, large-scale machine learning/data mining and (meta-)analysis. The EEG Study Schema (ESS) comprises three data “Levels,” each with its own XML-document schema and file/folder convention, plus a standardized (PREP) pipeline to move raw (Data Level 1) data to a basic preprocessed state (Data Level 2) suitable for application of a large class of EEG analysis methods. Researchers can ship a study as a single unit and operate on its data using a standardized interface. ESS does not require a central database and provides all the metadata necessary to execute a wide variety of EEG processing pipelines. The primary focus of ESS is automated in-depth analysis and meta-analysis of EEG studies. However, ESS can also encapsulate meta-information for other modalities, such as eye tracking, that are increasingly used in both laboratory and real-world neuroimaging. ESS schema and tools are freely available at www.eegstudy.org and a central catalog of over 850 GB of existing data in ESS format is available at studycatalog.org. These tools and resources are part of a larger effort to enable data sharing at sufficient scale for researchers to engage in truly large-scale EEG analysis and data mining (BigEEG.org). PMID:27014048

  7. Genome-scale approaches to the epigenetics of common human disease

    PubMed Central

    2011-01-01

    Traditionally, the pathology of human disease has been focused on microscopic examination of affected tissues, chemical and biochemical analysis of biopsy samples, other available samples of convenience, such as blood, and noninvasive or invasive imaging of varying complexity, in order to classify disease and illuminate its mechanistic basis. The molecular age has complemented this armamentarium with gene expression arrays and selective analysis of individual genes. However, we are entering a new era of epigenomic profiling, i.e., genome-scale analysis of cell-heritable nonsequence genetic change, such as DNA methylation. The epigenome offers access to stable measurements of cellular state and to biobanked material for large-scale epidemiological studies. Some of these genome-scale technologies are beginning to be applied to create the new field of epigenetic epidemiology. PMID:19844740

  8. Accounting for standard errors of vision-specific latent trait in regression models.

    PubMed

    Wong, Wan Ling; Li, Xiang; Li, Jialiang; Wong, Tien Yin; Cheng, Ching-Yu; Lamoureux, Ecosse L

    2014-07-11

    To demonstrate the effectiveness of the Hierarchical Bayesian (HB) approach in a modeling framework for association effects that accounts for SEs of vision-specific latent traits assessed using Rasch analysis. A systematic literature review was conducted in four major ophthalmic journals to evaluate Rasch analysis performed on vision-specific instruments. The HB approach was used to synthesize the Rasch model and multiple linear regression model for the assessment of the association effects related to vision-specific latent traits. This novel HB one-stage "joint-analysis" approach allows all model parameters to be estimated simultaneously; its effectiveness was compared with the frequently used two-stage "separate-analysis" approach in our simulation study (Rasch analysis followed by traditional statistical analyses without adjustment for the SE of the latent trait). Sixty-six reviewed articles performed evaluation and validation of vision-specific instruments using Rasch analysis, and 86.4% (n = 57) performed further statistical analyses on the Rasch-scaled data using traditional statistical methods; none took into consideration SEs of the estimated Rasch-scaled scores. The two models on real data differed in effect size estimates and in the identification of "independent risk factors." Simulation results showed that our proposed HB one-stage "joint-analysis" approach produces greater accuracy (average of 5-fold decrease in bias) with comparable power and precision in estimation of associations when compared with the frequently used two-stage "separate-analysis" procedure despite accounting for greater uncertainty due to the latent trait. Patient-reported data, using Rasch analysis techniques, do not take into account the SE of the latent trait in association analyses. The HB one-stage "joint-analysis" is a better approach, producing accurate effect size estimations and information about the independent association of exposure variables with vision-specific latent traits. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.
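
    A toy simulation (not the authors' HB model) illustrates why ignoring the SE of a Rasch-scaled latent trait matters: when the error-prone person estimate is used as a covariate, classical measurement error attenuates the estimated association toward zero. The trait, outcome, and SE values below are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)
        n = 5000
        theta = rng.normal(size=n)                          # true latent trait (e.g., visual functioning)
        y = 0.5 * theta + rng.normal(size=n)                # outcome associated with the trait
        se = 0.8                                            # assumed SE of a Rasch person estimate
        theta_hat = theta + rng.normal(scale=se, size=n)    # error-prone Rasch-scaled score

        def ols_slope(x, y):
            return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

        print("slope on true trait     :", round(ols_slope(theta, y), 3))      # ~0.50
        print("slope on noisy estimate :", round(ols_slope(theta_hat, y), 3))  # attenuated toward zero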

  9. Modeling gene expression measurement error: a quasi-likelihood approach

    PubMed Central

    Strimmer, Korbinian

    2003-01-01

    Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637

  10. Measuring self-reported studying and learning for university students: linking attitudes and behaviours on the same scale.

    PubMed

    Waugh, Russell F

    2002-12-01

    The relationships between self-reported Approaches to Studying and Self-concept, Self-capability and Studying and Learning Behaviour are usually studied by measuring the variables separately (using factor analysis and Cronbach Alphas) and then using various correlation techniques (such as multiple regression and path analysis). This procedure has measurement problems and is called into question. To create a single scale of Studying and Learning using a model with subsets of ordered stem-items based on a Deep Approach, a Surface Approach and a Strategic Approach, integrated with three self-reported aspects (an Ideal Self-view, a Capability Self-view and a Studying and Learning Behaviour Self-view). The stem-item sample comprised 33 items, each answered in three aspects, which produced an effective item sample of 99. The person convenience sample was 431 students in education (1(st) to 4(th) year) at an Australian university during 2000. The latest Rasch Unidimensional Measurement Model Computer Program (Andrich, Lyne, Sheridan, & Luo, 2000) was used to analyse the data and create a single scale of Studying and Learning. Altogether 77 items fitted a Rasch Measurement Model and formed a scale in which the 'difficulties' of the items were ordered from 'easy' to 'hard' and the student measures of Studying and Learning were ordered from 'low' to 'high'. The proportion of observed student variance considered true was 0.96. The response categories were answered consistently and logically and the results supported many, but not all, of the conceptualised orderings of the subscales. Students found it 'easy' to report a high Ideal Self-view, 'much harder' to report a high Capability Self-view, and 'harder still' to report a high Studying and Learning Behaviour for the stem-items, in accordance with the model, where items fit the measurement model. The Ideal Self-view Surface Approach items provided the most non-fit to the model. This method was highly successful in producing a single scale of Studying and Learning from self-reported Self-concepts, Self-capabilities, and Studying and Learning Behaviours, based on a Deep Approach, a Surface Approach and a Strategic Approach.
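
    For reference, the dichotomous Rasch model that underlies this kind of joint person-item scaling is shown below; the study itself uses a polytomous extension for its ordered response categories, whose threshold parameters are not reproduced here.

        \[
        P(X_{ni} = 1 \mid \beta_n, \delta_i) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)},
        \]

    where \(\beta_n\) is the person measure (here, the student's level of Studying and Learning) and \(\delta_i\) is the item difficulty; fit to this model is what places the retained items and the students on the same interval scale, with item 'difficulties' ordered from 'easy' to 'hard' and person measures from 'low' to 'high'.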

  11. Using the Sociocognitive-Transformative Approach in Writing Classrooms: Effects on L2 Learners' Writing Performance

    ERIC Educational Resources Information Center

    Barrot, Jessie S.

    2018-01-01

    The current study used a scale-based approach and complexity, accuracy, and fluency (CAF) analysis to comprehensively capture the effects of the sociocognitive-transformative approach on 2nd language (L2) learners' writing performance. The study involved 66 preuniversity intermediate L2 students from 4 different English classes. I randomly…

  12. Learning Analysis of K-12 Students' Online Problem Solving: A Three-Stage Assessment Approach

    ERIC Educational Resources Information Center

    Hu, Yiling; Wu, Bian; Gu, Xiaoqing

    2017-01-01

    Problem solving is considered a fundamental human skill. However, large-scale assessment of problem solving in K-12 education remains a challenging task. Researchers have argued for the development of an enhanced assessment approach through joint effort from multiple disciplines. In this study, a three-stage approach based on an evidence-centered…

  13. Power Grid Data Analysis with R and Hadoop

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hafen, Ryan P.; Gibson, Tara D.; Kleese van Dam, Kerstin

    This book chapter presents an approach to analysis of large-scale time-series sensor information based on our experience with power grid data. We use the R-Hadoop Integrated Programming Environment (RHIPE) to analyze a 2TB data set and present code and results for this analysis.

  14. Confirmatory Factor Analysis of the Hewitt-Multidimensional Perfectionism Scale

    ERIC Educational Resources Information Center

    Barut, Yasar

    2015-01-01

    Various studies on the conceptual framework of the perfectionism construct use the Hewitt Multidimensional Perfectionism Scale (HMPS) as a basic approach. The measure has a prominent role with respect to theoretical considerations of the dimensions of perfectionism. This study aimed to evaluate the psychometric properties of the Turkish version of the…

  15. The Diversity of School Organizational Configurations

    ERIC Educational Resources Information Center

    Lee, Linda C.

    2013-01-01

    School reform on a large scale has largely been unsuccessful. Approaches designed to document and understand the variety of organizational conditions that comprise our school systems are needed so that reforms can be tailored and results scaled. Therefore, this article develops a configurational framework that allows a systematic analysis of many…

  16. Multi-scale analysis of the fluxes between terrestrial water storage, groundwater, and stream discharge in the Columbia River Basin

    EPA Science Inventory

    The temporal relationships between the measurements of terrestrial water storage (TWS), groundwater, and stream discharge were analyzed at three different scales in the Columbia River Basin (CRB) for water years 2004 - 2012. Our nested watershed approach examined the Snake River ...

  17. Broadband Structural Dynamics: Understanding the Impulse-Response of Structures Across Multiple Length and Time Scales

    DTIC Science & Technology

    2010-08-18

    Spectral domain response calculated; time domain response obtained through the inverse transform. Approach 4 (WASABI, Wavelet Analysis of Structural Anomalies): the time function is transformed, a spectral domain transfer function is applied, and the result is inverse-transformed back to a time function.

  18. Characterizing the performance of ecosystem models across time scales: A spectral analysis of the North American Carbon Program site-level synthesis

    Treesearch

    Michael C. Dietze; Rodrigo Vargas; Andrew D. Richardson; Paul C. Stoy; Alan G. Barr; Ryan S. Anderson; M. Altaf Arain; Ian T. Baker; T. Andrew Black; Jing M. Chen; Philippe Ciais; Lawrence B. Flanagan; Christopher M. Gough; Robert F. Grant; David Hollinger; R. Cesar Izaurralde; Christopher J. Kucharik; Peter Lafleur; Shugang Liu; Erandathie Lokupitiya; Yiqi Luo; J. William Munger; Changhui Peng; Benjamin Poulter; David T. Price; Daniel M. Ricciuto; William J. Riley; Alok Kumar Sahoo; Kevin Schaefer; Andrew E. Suyker; Hanqin Tian; Christina Tonitto; Hans Verbeeck; Shashi B. Verma; Weifeng Wang; Ensheng Weng

    2011-01-01

    Ecosystem models are important tools for diagnosing the carbon cycle and projecting its behavior across space and time. Despite the fact that ecosystems respond to drivers at multiple time scales, most assessments of model performance do not discriminate different time scales. Spectral methods, such as wavelet analyses, present an alternative approach that enables the...

  19. The Development and Confirmatory Factor Analysis of a Scale for the Measurement of Gifted Students Attitude towards Mathematics

    ERIC Educational Resources Information Center

    Adediwura, Alaba Adeyemi

    2011-01-01

    The present study aimed to develop an attitude scale and then use it to reveal mathematically gifted students' attitudes towards mathematics. In the first phase of the study, an attitude scale (AS) towards mathematics was designed using the psychometric approach. The AS was composed of seventeen items aimed to measure four…

  20. Examination of turbulent entrainment-mixing mechanisms using a combined approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, C.; Liu, Y.; Niu, S.

    2011-10-01

    Turbulent entrainment-mixing mechanisms are investigated by applying a combined approach to the aircraft measurements of three drizzling and two nondrizzling stratocumulus clouds collected over the U.S. Department of Energy's Atmospheric Radiation Measurement Southern Great Plains site during the March 2000 cloud Intensive Observation Period. Microphysical analysis shows that the inhomogeneous entrainment-mixing process occurs much more frequently than the homogeneous counterpart, and most cases of the inhomogeneous entrainment-mixing process are close to the extreme scenario, having drastically varying cloud droplet concentration but roughly constant volume-mean radius. It is also found that the inhomogeneous entrainment-mixing process can occur both near the cloud top and in the middle level of a cloud, and in both the nondrizzling clouds and nondrizzling legs in the drizzling clouds. A new dimensionless number, the scale number, is introduced as a dynamical measure for different entrainment-mixing processes, with a larger scale number corresponding to a higher degree of homogeneous entrainment mixing. Further empirical analysis shows that the scale number that separates the homogeneous from the inhomogeneous entrainment-mixing process is around 50, and most legs have smaller scale numbers. Thermodynamic analysis shows that sampling average of filament structures finer than the instrumental spatial resolution also contributes to the dominance of inhomogeneous entrainment-mixing mechanism. The combined microphysical-dynamical-thermodynamic analysis sheds new light on developing parameterization of entrainment-mixing processes and their microphysical and radiative effects in large-scale models.

  1. A Component-Based Extension Framework for Large-Scale Parallel Simulations in NEURON

    PubMed Central

    King, James G.; Hines, Michael; Hill, Sean; Goodman, Philip H.; Markram, Henry; Schürmann, Felix

    2008-01-01

    As neuronal simulations approach larger scales with increasing levels of detail, the neurosimulator software represents only a part of a chain of tools ranging from setup, simulation, and interaction with virtual environments to analysis and visualization. Previously published approaches to abstracting simulator engines have not received widespread acceptance, which in part may be due to the fact that they tried to address the challenge of solving the model specification problem. Here, we present an approach that uses a neurosimulator, in this case NEURON, to describe and instantiate the network model in the simulator's native model language but then replaces the main integration loop with its own. Existing parallel network models are easily adapted to run in the presented framework. The presented approach is thus an extension to NEURON but uses a component-based architecture to allow for replaceable spike exchange components and pluggable components for monitoring, analysis, or control that can run in this framework alongside the simulation. PMID:19430597

  2. A Monte Carlo–Based Bayesian Approach for Measuring Agreement in a Qualitative Scale

    PubMed Central

    Pérez Sánchez, Carlos Javier

    2014-01-01

    Agreement analysis has been an active research area whose techniques have been widely applied in psychology and other fields. However, statistical agreement among raters has been mainly considered from a classical statistics point of view. Bayesian methodology is a viable alternative that allows the inclusion of subjective initial information coming from expert opinions, personal judgments, or historical data. A Bayesian approach is proposed by providing a unified Monte Carlo–based framework to estimate all types of measures of agreement in a qualitative scale of response. The approach is conceptually simple and it has a low computational cost. Both informative and non-informative scenarios are considered. In case no initial information is available, the results are in line with the classical methodology, but providing more information on the measures of agreement. For the informative case, some guidelines are presented to elicit the prior distribution. The approach has been applied to two applications related to schizophrenia diagnosis and sensory analysis. PMID:29881002
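
    A minimal Monte Carlo sketch in this spirit places a Dirichlet prior on the cell probabilities of the two raters' contingency table, samples the posterior, and propagates each draw through the agreement measure. The table, the uniform prior, and the use of Cohen's kappa (one of several possible agreement measures) are illustrative assumptions, not the authors' application.

        import numpy as np

        rng = np.random.default_rng(3)

        # Observed counts for two raters classifying the same cases on a 3-level scale.
        counts = np.array([[30.0,  5.0,  1.0],
                           [ 4.0, 25.0,  6.0],
                           [ 2.0,  3.0, 24.0]])

        prior = np.ones_like(counts)                    # uniform (non-informative) Dirichlet prior
        draws = rng.dirichlet((counts + prior).ravel(), size=20000).reshape(-1, 3, 3)

        po = draws.trace(axis1=1, axis2=2)              # posterior draws of observed agreement
        pe = (draws.sum(axis=2) * draws.sum(axis=1)).sum(axis=1)   # chance agreement from marginals
        kappa = (po - pe) / (1 - pe)

        print("posterior mean kappa :", round(kappa.mean(), 3))
        print("95% credible interval:", np.round(np.percentile(kappa, [2.5, 97.5]), 3))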

  3. A review of lipidomic technologies applicable to sphingolipidomics and their relevant applications

    PubMed Central

    Han, Xianlin; Jiang, Xuntian

    2009-01-01

    Sphingolipidomics, a branch of lipidomics, focuses on the large-scale study of cellular sphingolipidomes. In the current review, two main approaches for the analysis of cellular sphingolipidomes (i.e., the LC-MS- or LC-MS/MS-based approach and the shotgun lipidomics-based approach) are briefly discussed. Their advantages, some considerations of these methods, and recent applications of these approaches are summarized. It is the authors' sincere hope that this review article will add to the readers' understanding of the advantages and limitations of each developed method for the analysis of a cellular sphingolipidome. PMID:19690629

  4. Geovisual analytics to enhance spatial scan statistic interpretation: an analysis of U.S. cervical cancer mortality

    PubMed Central

    Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M

    2008-01-01

    Background Kulldorff's spatial scan statistic and its software implementation – SaTScan – are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. Results We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. Conclusion The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. Method We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit. PMID:18992163
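
    One simple way to compute such a reliability score (the exact definition used by the authors may differ) is the fraction of the parameterized runs in which a county falls inside a statistically significant cluster. In the sketch below the run-by-county membership matrix is simulated rather than parsed from real SaTScan output.

        import numpy as np

        # membership[r, c] is True when county c lies inside a statistically
        # significant cluster in run r; here the 50 runs x 3109 counties matrix
        # is simulated instead of being parsed from SaTScan output files.
        rng = np.random.default_rng(4)
        n_runs, n_counties = 50, 3109
        membership = rng.random((n_runs, n_counties)) < 0.2

        reliability = membership.mean(axis=0)              # fraction of runs flagging each county
        stable_core = np.flatnonzero(reliability >= 0.9)   # counties flagged in >= 90% of runs
        print(len(stable_core), "counties form stable cluster cores")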

  5. Geovisual analytics to enhance spatial scan statistic interpretation: an analysis of U.S. cervical cancer mortality.

    PubMed

    Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; Maceachren, Alan M

    2008-11-07

    Kulldorff's spatial scan statistic and its software implementation - SaTScan - are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit.

  6. Analysis of Discrete-Source Damage Progression in a Tensile Stiffened Composite Panel

    NASA Technical Reports Server (NTRS)

    Wang, John T.; Lotts, Christine G.; Sleight, David W.

    1999-01-01

    This paper demonstrates the progressive failure analysis capability of NASA Langley's COMET-AR finite element analysis code on a large-scale built-up composite structure. A large-scale five-stringer composite panel with 7-in.-long discrete-source damage was analyzed from initial loading to final failure, including geometric and material nonlinearities. Predictions using different mesh sizes, different saw-cut modeling approaches, and different failure criteria were performed and assessed. All failure predictions correlate reasonably well with the test result.

  7. Phase Transitions in Planning Problems: Design and Analysis of Parameterized Families of Hard Planning Problems

    NASA Technical Reports Server (NTRS)

    Hen, Itay; Rieffel, Eleanor G.; Do, Minh; Venturelli, Davide

    2014-01-01

    There are two common ways to evaluate algorithms: performance on benchmark problems derived from real applications and analysis of performance on parametrized families of problems. The two approaches complement each other, each having its advantages and disadvantages. The planning community has concentrated on the first approach, with few ways of generating parametrized families of hard problems known prior to this work. Our group's main interest is in comparing approaches to solving planning problems using a novel type of computational device - a quantum annealer - to existing state-of-the-art planning algorithms. Because only small-scale quantum annealers are available, we must compare on small problem sizes. Small problems are primarily useful for comparison only if they are instances of parametrized families of problems for which scaling analysis can be done. In this technical report, we discuss our approach to the generation of hard planning problems from classes of well-studied NP-complete problems that map naturally to planning problems or to aspects of planning problems that many practical planning problems share. These problem classes exhibit a phase transition between easy-to-solve and easy-to-show-unsolvable planning problems. The parametrized families of hard planning problems lie at the phase transition. The exponential scaling of hardness with problem size is apparent in these families even at very small problem sizes, thus enabling us to characterize even very small problems as hard. The families we developed will prove generally useful to the planning community in analyzing the performance of planning algorithms, providing a complementary approach to existing evaluation methods. We illustrate the hardness of these problems and their scaling with results on four state-of-the-art planners, observing significant differences between these planners on these problem families. Finally, we describe two general, and quite different, mappings of planning problems to QUBOs, the form of input required for a quantum annealing machine such as the D-Wave II.

  8. Two-dimensional DFA scaling analysis applied to encrypted images

    NASA Astrophysics Data System (ADS)

    Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.

    2015-01-01

    The technique of detrended fluctuation analysis (DFA) has been widely used to unveil scaling properties of many different signals. In this paper, we determine scaling properties in the encrypted images by means of a two-dimensional DFA approach. To carry out the image encryption, we use an enhanced cryptosystem based on a rule-90 cellular automaton and we compare the results obtained with its unmodified version and the encryption system AES. The numerical results show that the encrypted images present a persistent behavior which is close to that of the 1/f-noise. These results point to the possibility that the DFA scaling exponent can be used to measure the quality of the encrypted image content.

  9. Prospective and participatory integrated assessment of agricultural systems from farm to regional scales: Comparison of three modeling approaches.

    PubMed

    Delmotte, Sylvestre; Lopez-Ridaura, Santiago; Barbier, Jean-Marc; Wery, Jacques

    2013-11-15

    Evaluating the impacts of the development of alternative agricultural systems, such as organic or low-input cropping systems, in the context of an agricultural region requires the use of specific tools and methodologies. They should allow a prospective (using scenarios), multi-scale (taking into account the field, farm and regional level), integrated (notably multicriteria) and participatory assessment, abbreviated PIAAS (for Participatory Integrated Assessment of Agricultural System). In this paper, we compare the possible contribution to PIAAS of three modeling approaches i.e. Bio-Economic Modeling (BEM), Agent-Based Modeling (ABM) and statistical Land-Use/Land Cover Change (LUCC) models. After a presentation of each approach, we analyze their advantages and drawbacks, and identify their possible complementarities for PIAAS. Statistical LUCC modeling is a suitable approach for multi-scale analysis of past changes and can be used to start discussion about the futures with stakeholders. BEM and ABM approaches have complementary features for scenarios assessment at different scales. While ABM has been widely used for participatory assessment, BEM has been rarely used satisfactorily in a participatory manner. On the basis of these results, we propose to combine these three approaches in a framework targeted to PIAAS. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Atom Probe Tomographic Analysis of Biological Systems Enabled by Advanced Specimen Preparation Approaches

    NASA Astrophysics Data System (ADS)

    Perea, D. E.; Evans, J. E.

    2017-12-01

    The ability to image biointerfaces over nanometer to micrometer length scales is fundamental to correlating biological composition and structure to physiological function, and is aided by a multimodal approach using advanced complementary microscopic and spectroscopic characterization techniques. Atom Probe Tomography (APT) is a rapidly expanding technique for atomic-scale three-dimensional structural and chemical analysis. However, the regular application of APT to soft biological materials is lacking in large part due to difficulties in specimen preparation and inabilities to yield meaningful tomographic reconstructions that produce atomic scale compositional distributions as no other technique currently can. Here we describe the atomic-scale tomographic analysis of biological materials using APT that is facilitated by an advanced focused ion beam based approach. A novel specimen preparation strategy is used in the analysis of horse spleen ferritin protein embedded in an organic polymer resin which provides chemical contrast to distinguish the inorganic-organic interface of the ferrihydrite mineral core and protein shell of the ferritin protein. One-dimensional composition profiles directly reveal an enhanced concentration of P and Na at the surface of the ferrihydrite mineral core. We will also describe the development of a unique multifunctional environmental transfer hub allowing controlled cryogenic transfer of specimens under vacuum pressure conditions between an Atom Probe and cryo-FIB/SEM. The utility of the environmental transfer hub is demonstrated through the acquisition of previously unavailable mass spectral analysis of an intact organometallic molecule made possible via controlled cryogenic transfer. The results demonstrate a viable application of APT analysis to study complex biological organic/inorganic interfaces relevant to energy and the environment. References D.E. Perea et al. An environmental transfer hub for multimodal atom probe tomography, Adv. Struct. Chem. Imag, 2017, 3:12 The research was performed at the Environmental Molecular Sciences Laboratory; a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research located at Pacific Northwest National Laboratory.

  11. Local variance for multi-scale analysis in geomorphometry.

    PubMed

    Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

    2011-07-15

    Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements.
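
    A compact sketch of steps 2-4 for the cell-based (resampling) variant: block-average a land-surface parameter to successive scale levels, take the mean 3 × 3 standard deviation at each level, and compute the rate of change between levels. The synthetic "slope" grid and the percentage form of ROC-LV below are assumptions for illustration only.

        import numpy as np
        from scipy import ndimage

        def local_variance(param):
            """Mean standard deviation within a 3 x 3 moving window (LV of one scale level)."""
            mean = ndimage.uniform_filter(param, size=3)
            mean_sq = ndimage.uniform_filter(param ** 2, size=3)
            return np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None)).mean()

        def lv_curve(param, factors):
            """Up-scale by block averaging (resampling) and compute LV at each scale level."""
            lv = []
            for f in factors:
                n0, n1 = (param.shape[0] // f) * f, (param.shape[1] // f) * f
                coarse = param[:n0, :n1].reshape(n0 // f, f, n1 // f, f).mean(axis=(1, 3))
                lv.append(local_variance(coarse))
            return np.array(lv)

        rng = np.random.default_rng(5)
        slope = ndimage.gaussian_filter(rng.random((512, 512)), sigma=8)   # synthetic slope grid
        factors = [1, 2, 4, 8, 16, 32]
        lv = lv_curve(slope, factors)
        roc_lv = 100.0 * np.diff(lv) / lv[:-1]   # rate of change of LV between successive levels (%)
        print(np.round(lv, 4), np.round(roc_lv, 1))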

  12. Local variance for multi-scale analysis in geomorphometry

    PubMed Central

    Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

    2011-01-01

    Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138

  13. An innovative approach for characteristic analysis and state-of-health diagnosis for a Li-ion cell based on the discrete wavelet transform

    NASA Astrophysics Data System (ADS)

    Kim, Jonghoon; Cho, B. H.

    2014-08-01

    This paper introduces an innovative approach to analyzing the electrochemical characteristics and diagnosing the state-of-health (SOH) of a Li-ion cell based on the discrete wavelet transform (DWT). In this approach, the DWT is applied to the analysis of the discharging/charging voltage signal (DCVS) of a Li-ion cell, which exhibits non-stationary and transient phenomena. Specifically, DWT-based multi-resolution analysis (MRA) is used to extract information on the electrochemical characteristics in the time and frequency domains simultaneously. Through MRA-based wavelet decomposition, the information on the electrochemical characteristics of a Li-ion cell can be extracted from the DCVS over a wide frequency range. Wavelet decomposition is implemented with the order-3 Daubechies wavelet (db3) and scale 5, selected as the best wavelet function and the optimal decomposition scale. In particular, the approach goes one step further by comparing the low- and high-frequency components (approximation component An and detail components Dn, respectively) extracted from Li-ion cells whose electrochemical characteristics differ because of aging. Experimental results demonstrate the effectiveness of the DWT-based approach for reliable SOH diagnosis of a Li-ion cell.
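
    The decomposition described above (db3 wavelet, decomposition scale 5) can be sketched as follows; PyWavelets is used here only as a convenient stand-in, since the abstract does not name an implementation, and the toy signal is hypothetical.

      # Illustrative decomposition of a discharging/charging voltage signal with
      # the order-3 Daubechies wavelet at decomposition scale 5. PyWavelets is a
      # stand-in; the authors do not specify an implementation.
      import numpy as np
      import pywt

      t = np.linspace(0.0, 1.0, 4096)
      dcvs = 3.7 + 0.3 * np.exp(-3 * t) + 0.01 * np.random.randn(t.size)  # toy DCVS

      coeffs = pywt.wavedec(dcvs, wavelet='db3', level=5)   # [A5, D5, D4, ..., D1]
      a5, details = coeffs[0], coeffs[1:]

      # Energy of each detail band; shifts in these energies with cell ageing are
      # the kind of feature an SOH diagnosis can rely on.
      energies = [float(np.sum(d ** 2)) for d in details]
      print(dict(zip(['D5', 'D4', 'D3', 'D2', 'D1'], energies)))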

  14. Psychometric analysis of the leadership environment scale (LENS): Outcome from the Oregon research initiative on the organisation of nursing (ORION).

    PubMed

    Ross, Amy M; Ilic, Kelley; Kiyoshi-Teo, Hiroko; Lee, Christopher S

    2017-12-26

    The purpose of this study was to establish the psychometric properties of the new 16-item leadership environment scale. The leadership environment scale was based on complexity science concepts relevant to complex adaptive health care systems. A workforce survey of direct-care nurses was conducted (n = 1,443) in Oregon. Confirmatory factor analysis, exploratory factor analysis, concordant validity test and reliability tests were conducted to establish the structure and internal consistency of the leadership environment scale. Confirmatory factor analysis indices approached acceptable thresholds of fit with a single factor solution. Exploratory factor analysis showed improved fit with a two-factor model solution; the factors were labelled 'influencing relationships' and 'interdependent system supports'. Moderate to strong convergent validity was observed between the leadership environment scale/subscales and both the nursing workforce index and the safety organising scale. Reliability of the leadership environment scale and subscales was strong, with all alphas ≥.85. The leadership environment scale is structurally sound and reliable. Nursing management can employ adaptive complexity leadership attributes, measure their influence on the leadership environment, subsequently modify system supports and relationships and improve the quality of health care systems. The leadership environment scale is an innovative fit to complex adaptive systems and how nurses act as leaders within these systems. © 2017 John Wiley & Sons Ltd.

  15. Data management in large-scale collaborative toxicity studies: how to file experimental data for automated statistical analysis.

    PubMed

    Stanzel, Sven; Weimer, Marc; Kopp-Schneider, Annette

    2013-06-01

    High-throughput screening approaches are carried out for the toxicity assessment of a large number of chemical compounds. In such large-scale in vitro toxicity studies several hundred or thousand concentration-response experiments are conducted. The automated evaluation of concentration-response data using statistical analysis scripts saves time and yields more consistent results in comparison to data analysis performed by the use of menu-driven statistical software. Automated statistical analysis requires that concentration-response data are available in a standardised data format across all compounds. To obtain consistent data formats, a standardised data management workflow must be established, including guidelines for data storage, data handling and data extraction. In this paper two procedures for data management within large-scale toxicological projects are proposed. Both procedures are based on Microsoft Excel files as the researcher's primary data format and use a computer programme to automate the handling of data files. The first procedure assumes that data collection has not yet started whereas the second procedure can be used when data files already exist. Successful implementation of the two approaches into the European project ACuteTox is illustrated. Copyright © 2012 Elsevier Ltd. All rights reserved.
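
    A hedged illustration of the kind of collation step such a workflow automates is sketched below with pandas; the folder layout, sheet name and column names are hypothetical, not those of the ACuteTox project.

      # Sketch: collate per-compound Excel files into one standardised long-format
      # table for scripted concentration-response analysis. Layout is hypothetical.
      import glob
      import pandas as pd

      frames = []
      for path in glob.glob('data/*.xlsx'):           # one workbook per compound
          df = pd.read_excel(path, sheet_name='raw')  # expects fixed sheet/column names
          df = df.rename(columns=str.lower)[['compound', 'concentration', 'response']]
          df['source_file'] = path
          frames.append(df)

      tidy = pd.concat(frames, ignore_index=True)
      tidy.to_csv('all_experiments.csv', index=False)  # single input for analysis scripts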

  16. Sexual networks: measuring sexual selection in structured, polyandrous populations.

    PubMed

    McDonald, Grant C; James, Richard; Krause, Jens; Pizzari, Tommaso

    2013-03-05

    Sexual selection is traditionally measured at the population level, assuming that populations lack structure. However, increasing evidence undermines this approach, indicating that intrasexual competition in natural populations often displays complex patterns of spatial and temporal structure. This complexity is due in part to the degree and mechanisms of polyandry within a population, which can influence the intensity and scale of both pre- and post-copulatory sexual competition. Attempts to measure selection at the local and global scale have been made through multi-level selection approaches. However, definitions of local scale are often based on physical proximity, providing a rather coarse measure of local competition, particularly in polyandrous populations where the local scale of pre- and post-copulatory competition may differ drastically from each other. These limitations can be solved by social network analysis, which allows us to define a unique sexual environment for each member of a population: 'local scale' competition, therefore, becomes an emergent property of a sexual network. Here, we first propose a novel quantitative approach to measure pre- and post-copulatory sexual selection, which integrates multi-level selection with information on local scale competition derived as an emergent property of networks of sexual interactions. We then use simple simulations to illustrate the ways in which polyandry can impact estimates of sexual selection. We show that for intermediate levels of polyandry, the proposed network-based approach provides substantially more accurate measures of sexual selection than the more traditional population-level approach. We argue that the increasing availability of fine-grained behavioural datasets provides exciting new opportunities to develop network approaches to study sexual selection in complex societies.

  17. Scalable Parameter Estimation for Genome-Scale Biochemical Reaction Networks

    PubMed Central

    Kaltenbacher, Barbara; Hasenauer, Jan

    2017-01-01

    Mechanistic mathematical modeling of biochemical reaction networks using ordinary differential equation (ODE) models has improved our understanding of small- and medium-scale biological processes. While the same should in principle hold for large- and genome-scale processes, the computational methods for the analysis of ODE models which describe hundreds or thousands of biochemical species and reactions are missing so far. While individual simulations are feasible, the inference of the model parameters from experimental data is computationally too intensive. In this manuscript, we evaluate adjoint sensitivity analysis for parameter estimation in large scale biochemical reaction networks. We present the approach for time-discrete measurement and compare it to state-of-the-art methods used in systems and computational biology. Our comparison reveals a significantly improved computational efficiency and a superior scalability of adjoint sensitivity analysis. The computational complexity is effectively independent of the number of parameters, enabling the analysis of large- and genome-scale models. Our study of a comprehensive kinetic model of ErbB signaling shows that parameter estimation using adjoint sensitivity analysis requires a fraction of the computation time of established methods. The proposed method will facilitate mechanistic modeling of genome-scale cellular processes, as required in the age of omics. PMID:28114351

  18. Evaluation of uncertainty in the adjustment of fundamental constants

    NASA Astrophysics Data System (ADS)

    Bodnar, Olha; Elster, Clemens; Fischer, Joachim; Possolo, Antonio; Toman, Blaza

    2016-02-01

    Combining multiple measurement results for the same quantity is an important task in metrology and in many other areas. Examples include the determination of fundamental constants, the calculation of reference values in interlaboratory comparisons, or the meta-analysis of clinical studies. However, neither the GUM nor its supplements give any guidance for this task. Various approaches are applied such as weighted least-squares in conjunction with the Birge ratio or random effects models. While the former approach, which is based on a location-scale model, is particularly popular in metrology, the latter represents a standard tool used in statistics for meta-analysis. We investigate the reliability and robustness of the location-scale model and the random effects model with particular focus on resulting coverage or credible intervals. The interval estimates are obtained by adopting a Bayesian point of view in conjunction with a non-informative prior that is determined by a currently favored principle for selecting non-informative priors. Both approaches are compared by applying them to simulated data as well as to data for the Planck constant and the Newtonian constant of gravitation. Our results suggest that the proposed Bayesian inference based on the random effects model is more reliable and less sensitive to model misspecifications than the approach based on the location-scale model.
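
    The two non-Bayesian reference points mentioned above can be sketched in a few lines: a weighted mean with Birge-ratio inflation, and a DerSimonian-Laird style random-effects estimate. This is a textbook formulation with toy numbers, not the Bayesian procedure investigated in the paper.

      # Weighted least-squares with Birge-ratio inflation vs. a simple
      # random effects (DerSimonian-Laird) estimate, on toy data.
      import numpy as np

      x = np.array([6.67430, 6.67408, 6.67554, 6.67191])   # toy measurements
      u = np.array([0.00015, 0.00031, 0.00016, 0.00099])   # standard uncertainties

      w = 1.0 / u**2
      xbar = np.sum(w * x) / np.sum(w)
      u_int = np.sqrt(1.0 / np.sum(w))
      birge = np.sqrt(np.sum(w * (x - xbar)**2) / (len(x) - 1))
      u_birge = u_int * max(birge, 1.0)                    # inflate only if R_B > 1

      # DerSimonian-Laird estimate of the between-measurement variance tau^2
      q = np.sum(w * (x - xbar)**2)
      tau2 = max(0.0, (q - (len(x) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
      w_re = 1.0 / (u**2 + tau2)
      x_re = np.sum(w_re * x) / np.sum(w_re)
      u_re = np.sqrt(1.0 / np.sum(w_re))

      print(xbar, u_birge, x_re, u_re)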

  19. Training Needs Analysis: Weaknesses in the Conventional Approach.

    ERIC Educational Resources Information Center

    Leat, Michael James; Lovel, Murray Jack

    1997-01-01

    Identification of the training and development needs of administrative support staff is not aided by conventional performance appraisal, which measures summary or comparative effectiveness. Meaningful diagnostic evaluation integrates three levels of analysis (organization, task, and individual), using behavioral expectation scales. (SK)

  20. Ecosystems, ecological restoration, and economics: does habitat or resource equivalency analysis mean other economic valuation methods are not needed?

    PubMed

    Shaw, W Douglass; Wlodarz, Marta

    2013-09-01

    Coastal and other area resources such as tidal wetlands, seagrasses, coral reefs, wetlands, and other ecosystems are often harmed by environmental damage that might be inflicted by human actions, or could occur from natural hazards such as hurricanes. Society may wish to restore resources to offset the harm, or receive compensation if this is not possible, but faces difficult choices among potential compensation projects. The optimal amount of restoration efforts can be determined by non-market valuation methods, service-to-service, or resource-to-resource approaches such as habitat equivalency analysis (HEA). HEA scales injured resources and lost services on a one-to-one trade-off basis. Here, we present the main differences between the HEA approach and other non-market valuation approaches. Particular focus is on the role of the social discount rate, which appears in the HEA equation and underlies calculations of the present value of future damages. We argue that while HEA involves elements of economic analysis, the assumption of a one-to-one trade-off between lost and restored services sometimes does not hold, and then other non-market economic valuation approaches may help in restoration scaling or in damage determination.
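
    The core HEA bookkeeping, scaling restoration so that discounted service gains offset discounted interim losses, can be sketched as follows; all recovery curves and numbers are hypothetical.

      # Minimal habitat equivalency calculation: size a restoration project so the
      # present value of service gains offsets the present value of interim losses.
      import numpy as np

      r = 0.03                                   # social discount rate
      years = np.arange(0, 51)                   # analysis horizon
      discount = 1.0 / (1.0 + r) ** years

      # Interim loss: 10 acres lose 60% of services, recovering linearly over 20 yr.
      lost_services = 10 * 0.6 * np.clip(1 - years / 20.0, 0, 1)
      debit = np.sum(lost_services * discount)   # discounted service-acre-years lost

      # Credit per restored acre: services ramp from 0 to 80% of baseline in 5 yr.
      gain_per_acre = 0.8 * np.clip(years / 5.0, 0, 1)
      credit_per_acre = np.sum(gain_per_acre * discount)

      required_acres = debit / credit_per_acre
      print(round(required_acres, 2))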

  1. Performance analysis of clustering techniques over microarray data: A case study

    NASA Astrophysics Data System (ADS)

    Dash, Rasmita; Misra, Bijan Bihari

    2018-03-01

    Handling big data is one of the major issues in statistical data analysis, and cluster analysis plays a vital role in dealing with large-scale data. Many clustering techniques exist, each with a different cluster-analysis approach, but it is difficult to predict which approach suits a particular dataset. To address this problem, a grading approach is introduced over many clustering techniques to identify a stable technique. Because the grading depends on the characteristics of the dataset as well as on the validity indices, a two-stage grading approach is implemented. In this study the grading approach is applied to five clustering techniques: hybrid swarm based clustering (HSC), k-means, partitioning around medoids (PAM), vector quantization (VQ), and agglomerative nesting (AGNES). The experiments are conducted on five microarray datasets with seven validity indices. The finding of the grading approach that a clustering technique is significant is also confirmed by the Nemenyi post-hoc hypothesis test.
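
    The general idea of grading clustering techniques with validity indices can be sketched as below using scikit-learn; the indices shown (silhouette, Davies-Bouldin) are stand-ins for the seven used in the study, and the paper's hybrid swarm based clustering (HSC) is not reproduced.

      # Run several clustering techniques on one data set and grade them with
      # validity indices. Only library algorithms and toy data are shown here.
      from sklearn.cluster import KMeans, AgglomerativeClustering
      from sklearn.datasets import make_blobs
      from sklearn.metrics import silhouette_score, davies_bouldin_score

      X, _ = make_blobs(n_samples=300, centers=4, random_state=0)  # stand-in data

      methods = {
          'k-means': KMeans(n_clusters=4, n_init=10, random_state=0),
          'AGNES': AgglomerativeClustering(n_clusters=4),
      }
      for name, model in methods.items():
          labels = model.fit_predict(X)
          print(name,
                round(silhouette_score(X, labels), 3),       # higher is better
                round(davies_bouldin_score(X, labels), 3))   # lower is better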

  2. Demonstrating microbial co-occurrence pattern analyses within and between ecosystems

    PubMed Central

    Williams, Ryan J.; Howe, Adina; Hofmockel, Kirsten S.

    2014-01-01

    Co-occurrence patterns are used in ecology to explore interactions between organisms and environmental effects on coexistence within biological communities. Analysis of co-occurrence patterns among microbial communities has ranged from simple pairwise comparisons between all community members to direct hypothesis testing between focal species. However, co-occurrence patterns are rarely studied across multiple ecosystems or multiple scales of biological organization within the same study. Here we outline an approach to produce co-occurrence analyses that are focused at three different scales: co-occurrence patterns between ecosystems at the community scale, modules of co-occurring microorganisms within communities, and co-occurring pairs within modules that are nested within microbial communities. To demonstrate our co-occurrence analysis approach, we gathered publicly available 16S rRNA amplicon datasets to compare and contrast microbial co-occurrence at different taxonomic levels across different ecosystems. We found differences in community composition and co-occurrence that reflect environmental filtering at the community scale and consistent pairwise occurrences that may be used to infer ecological traits about poorly understood microbial taxa. However, we also found that conclusions derived from applying network statistics to microbial relationships can vary depending on the taxonomic level chosen and criteria used to build co-occurrence networks. We present our statistical analysis and code for public use in analysis of co-occurrence patterns across microbial communities. PMID:25101065
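
    The pairwise building block of such an analysis, correlating taxon abundances and thresholding the result into a co-occurrence graph, might look like the following sketch; the thresholds and toy abundance matrix are illustrative, and the authors' published code should be preferred for real analyses.

      # Pairwise co-occurrence step: correlate taxon abundances across samples,
      # keep strong positive associations, and look for modules of co-occurring taxa.
      import numpy as np
      import networkx as nx
      from scipy.stats import spearmanr

      abund = np.random.poisson(5, size=(40, 25))        # 40 samples x 25 taxa (toy)
      rho, pval = spearmanr(abund)                       # 25 x 25 correlation matrices

      G = nx.Graph()
      n_taxa = abund.shape[1]
      for i in range(n_taxa):
          for j in range(i + 1, n_taxa):
              if rho[i, j] > 0.6 and pval[i, j] < 0.01:  # co-occurrence criterion
                  G.add_edge(i, j, weight=rho[i, j])

      modules = list(nx.connected_components(G))         # crude module detection
      print(len(G.edges), modules)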

  3. Developing the Evaluation Scale to Determine the Impact of Body Language in an Argument: Reliability & Validity Analysis

    ERIC Educational Resources Information Center

    Karadag, Engin; Caliskan, Nihat; Yesil, Rustu

    2008-01-01

    In this research, it is aimed to develop a scale to observe the body language which is used during an argument. A sample group of 266 teacher candidates study at the departments of Class, Turkish or Social Sciences at the Faculty of Education was used in this study. A logical and statistical approach was pursued during the development of scale. An…

  4. A ubiquitous method for street scale spatial data collection and analysis in challenging urban environments: mapping health risks using spatial video in Haiti

    PubMed Central

    2013-01-01

    Background Fine-scale and longitudinal geospatial analysis of health risks in challenging urban areas is often limited by the lack of other spatial layers even if case data are available. Underlying population counts, residential context, and associated causative factors such as standing water or trash locations are often missing unless collected through logistically difficult, and often expensive, surveys. The lack of spatial context also hinders the interpretation of results and designing intervention strategies structured around analytical insights. This paper offers a ubiquitous spatial data collection approach using a spatial video that can be used to improve analysis and involve participatory collaborations. A case study will be used to illustrate this approach with three health risks mapped at the street scale for a coastal community in Haiti. Methods Spatial video was used to collect street and building scale information, including standing water, trash accumulation, presence of dogs, cohort specific population characteristics, and other cultural phenomena. These data were digitized into Google Earth and then coded and analyzed in a GIS using kernel density and spatial filtering approaches. The concentrations of these risks around area schools which are sometimes sources of diarrheal disease infection because of the high concentration of children and variable sanitary practices will show the utility of the method. In addition schools offer potential locations for cholera education interventions. Results Previously unavailable fine scale health risk data vary in concentration across the town, with some schools being proximate to greater concentrations of the mapped risks. The spatial video is also used to validate coded data and location specific risks within these “hotspots”. Conclusions Spatial video is a tool that can be used in any environment to improve local area health analysis and intervention. The process is rapid and can be repeated in study sites through time to track spatio-temporal dynamics of the communities. Its simplicity should also be used to encourage local participatory collaborations. PMID:23587358

  5. A ubiquitous method for street scale spatial data collection and analysis in challenging urban environments: mapping health risks using spatial video in Haiti.

    PubMed

    Curtis, Andrew; Blackburn, Jason K; Widmer, Jocelyn M; Morris, J Glenn

    2013-04-15

    Fine-scale and longitudinal geospatial analysis of health risks in challenging urban areas is often limited by the lack of other spatial layers even if case data are available. Underlying population counts, residential context, and associated causative factors such as standing water or trash locations are often missing unless collected through logistically difficult, and often expensive, surveys. The lack of spatial context also hinders the interpretation of results and designing intervention strategies structured around analytical insights. This paper offers a ubiquitous spatial data collection approach using a spatial video that can be used to improve analysis and involve participatory collaborations. A case study will be used to illustrate this approach with three health risks mapped at the street scale for a coastal community in Haiti. Spatial video was used to collect street and building scale information, including standing water, trash accumulation, presence of dogs, cohort specific population characteristics, and other cultural phenomena. These data were digitized into Google Earth and then coded and analyzed in a GIS using kernel density and spatial filtering approaches. The concentrations of these risks around area schools which are sometimes sources of diarrheal disease infection because of the high concentration of children and variable sanitary practices will show the utility of the method. In addition schools offer potential locations for cholera education interventions. Previously unavailable fine scale health risk data vary in concentration across the town, with some schools being proximate to greater concentrations of the mapped risks. The spatial video is also used to validate coded data and location specific risks within these "hotspots". Spatial video is a tool that can be used in any environment to improve local area health analysis and intervention. The process is rapid and can be repeated in study sites through time to track spatio-temporal dynamics of the communities. Its simplicity should also be used to encourage local participatory collaborations.

  6. Multivariate analysis of scale-dependent associations between bats and landscape structure

    USGS Publications Warehouse

    Gorresen, P.M.; Willig, M.R.; Strauss, R.E.

    2005-01-01

    The assessment of biotic responses to habitat disturbance and fragmentation generally has been limited to analyses at a single spatial scale. Furthermore, methods to compare responses between scales have lacked the ability to discriminate among patterns related to the identity, strength, or direction of associations of biotic variables with landscape attributes. We present an examination of the relationship of population- and community-level characteristics of phyllostomid bats with habitat features that were measured at multiple spatial scales in Atlantic rain forest of eastern Paraguay. We used a matrix of partial correlations between each biotic response variable (i.e., species abundance, species richness, and evenness) and a suite of landscape characteristics to represent the multifaceted associations of bats with spatial structure. Correlation matrices can correspond based on either the strength (i.e., magnitude) or direction (i.e., sign) of association. Therefore, a simulation model independently evaluated correspondence in the magnitude and sign of correlations among scales, and results were combined via a meta-analysis to provide an overall test of significance. Our approach detected both species-specific differences in response to landscape structure and scale dependence in those responses. This matrix-simulation approach has broad applicability to ecological situations in which multiple intercorrelated factors contribute to patterns in space or time. © 2005 by the Ecological Society of America.

  7. Scale development on consumer behavior toward counterfeit drugs in a developing country: a quantitative study exploiting the tools of an evolving paradigm.

    PubMed

    Alfadl, Abubakr A; Ibrahim, Mohamed Izham b Mohamed; Hassali, Mohamed Azmi Ahmad

    2013-09-11

    Although desperate need and drug counterfeiting are linked in developing countries, little research has been carried out to address this link, and there is a lack of proper tools and methodology. This study addresses the need for a new methodological approach by developing a scale to aid in understanding the demand side of drug counterfeiting in a developing country. The study presents a quantitative, non-representative survey conducted in Sudan. A face-to-face structured interview survey methodology was employed to collect the data from the general population (people in the street) in two phases: pilot (n = 100) and final survey (n = 1003). Data were analyzed by examining means, variances, squared multiple correlations, item-to-total correlations, and the results of an exploratory factor analysis and a confirmatory factor analysis. As an approach to scale purification, internal consistency was examined and improved. The scale was reduced from 44 to 41 items and Cronbach's alpha improved from 0.818 to 0.862. Finally, scale items were assessed. The result was an eleven-factor solution. Convergent and discriminant validity were demonstrated. The results of this study indicate that the "Consumer Behavior Toward Counterfeit Drugs Scale" is a valid, reliable measure with a solid theoretical base. Ultimately, the study offers public health policymakers a valid measurement tool and, consequently, a new methodological approach with which to build a better understanding of the demand side of counterfeit drugs and to develop more effective strategies to combat the problem.
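
    The internal-consistency step used in scale purification can be illustrated with a small Cronbach's alpha computation; the simulated item-response matrix below merely mirrors the study's dimensions (1,003 respondents, 41 items) and is not the actual survey data.

      # Cronbach's alpha for an item-response matrix (respondents x items).
      import numpy as np

      def cronbach_alpha(items):
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1.0)) * (1.0 - item_var / total_var)

      rng = np.random.default_rng(0)
      latent = rng.normal(size=(1003, 1))                       # shared trait
      responses = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(1003, 41))), 1, 5)
      print(round(cronbach_alpha(responses), 3))                # compare with the 0.862 reported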

  8. Practical guidelines to select and scale earthquake records for nonlinear response history analysis of structures

    USGS Publications Warehouse

    Kalkan, Erol; Chopra, Anil K.

    2010-01-01

    Earthquake engineering practice is increasingly using nonlinear response history analysis (RHA) to demonstrate performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. Presented herein is a modal-pushover-based scaling (MPS) method to scale ground motions for use in nonlinear RHA of buildings and bridges. In the MPS method, the ground motions are scaled to match (to a specified tolerance) a target value of the inelastic deformation of the first-'mode' inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-'mode' pushover analysis. Appropriate for first-'mode' dominated structures, this approach is extended for structures with significant contributions of higher modes by considering elastic deformation of second-'mode' SDF system in selecting a subset of the scaled ground motions. Based on results presented for two bridges, covering single- and multi-span 'ordinary standard' bridge types, and six buildings, covering low-, mid-, and tall building types in California, the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.

  9. Control of Thermo-Acoustics Instabilities: The Multi-Scale Extended Kalman Approach

    NASA Technical Reports Server (NTRS)

    Le, Dzu K.; DeLaat, John C.; Chang, Clarence T.

    2003-01-01

    "Multi-Scale Extended Kalman" (MSEK) is a novel model-based control approach recently found to be effective for suppressing combustion instabilities in gas turbines. A control law formulated in this approach for fuel modulation demonstrated steady suppression of a high-frequency combustion instability (less than 500Hz) in a liquid-fuel combustion test rig under engine-realistic conditions. To make-up for severe transport-delays on control effect, the MSEK controller combines a wavelet -like Multi-Scale analysis and an Extended Kalman Observer to predict the thermo-acoustic states of combustion pressure perturbations. The commanded fuel modulation is composed of a damper action based on the predicted states, and a tones suppression action based on the Multi-Scale estimation of thermal excitations and other transient disturbances. The controller performs automatic adjustments of the gain and phase of these actions to minimize the Time-Scale Averaged Variances of the pressures inside the combustion zone and upstream of the injector. The successful demonstration of Active Combustion Control with this MSEK controller completed an important NASA milestone for the current research in advanced combustion technologies.

  10. Bush Encroachment Mapping for Africa - Multi-Scale Analysis with Remote Sensing and GIS

    NASA Astrophysics Data System (ADS)

    Graw, V. A. M.; Oldenburg, C.; Dubovyk, O.

    2015-12-01

    Bush encroachment describes a global problem that especially affects the savanna ecosystem in Africa. Livestock is directly affected by the decline of grasslands and the spread of inedible invasive species that define the process of bush encroachment. For many small-scale farmers in developing countries livestock represents a type of insurance in times of crop failure or drought. Beyond that, bush encroachment is also a problem for crop production. Studies on the mapping of bush encroachment so far focus on small scales using high-resolution data and rarely provide information beyond the national level. Therefore a process chain was developed using a multi-scale approach to detect bush encroachment for the whole of Africa. The bush encroachment map is calibrated with ground truth data provided by experts in Southern, Eastern and Western Africa. By up-scaling location-specific information on different levels of remote sensing imagery - 30 m with Landsat images and 250 m with MODIS data - a map is created showing potential and actual areas of bush encroachment on the African continent, thereby providing an innovative approach to mapping bush encroachment at the regional scale. A classification approach links location data based on GPS information from experts to the respective pixels in the remote sensing imagery. Supervised classification is used, with actual bush encroachment information serving as the training samples for the up-scaling. The classification technique is based on Random Forests and regression trees, a machine learning classification approach. Working on multiple scales and with the help of field data, this approach shows areas affected by bush encroachment on the African continent. This information can help to prevent further grassland decrease and to identify those regions where land management strategies are of high importance to sustain livestock keeping and thereby also secure livelihoods in rural areas.
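
    A schematic version of the supervised up-scaling step, assuming scikit-learn as the Random Forest implementation, is sketched below; the predictor features and training data are placeholders rather than the Landsat/MODIS variables used in the study.

      # Expert GPS points give labels, per-pixel predictors come from imagery, and
      # a Random Forest predicts the bush-encroachment class for remaining pixels.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(1)
      X_train = rng.normal(size=(500, 6))       # e.g. NDVI metrics, bands, elevation
      y_train = rng.integers(0, 2, size=500)    # 1 = encroached (from field experts)

      model = RandomForestClassifier(n_estimators=500, random_state=1)
      model.fit(X_train, y_train)

      X_pixels = rng.normal(size=(10000, 6))    # all pixels of a tile
      encroachment_map = model.predict(X_pixels).reshape(100, 100)
      print(encroachment_map.mean())            # fraction of tile flagged as encroached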

  11. Building to Scale: An Analysis of Web-Based Services in CIC (Big Ten) Libraries.

    ERIC Educational Resources Information Center

    Dewey, Barbara I.

    Advancing library services in large universities requires creative approaches for "building to scale." This is the case for CIC, Committee on Institutional Cooperation (Big Ten), libraries whose home institutions serve thousands of students, faculty, staff, and others. Developing virtual Web-based services is an increasingly viable…

  12. Large-Scale High School Reform through School Improvement Networks: Exploring Possibilities for "Developmental Evaluation"

    ERIC Educational Resources Information Center

    Peurach, Donald J.; Lenhoff, Sarah Winchell; Glazer, Joshua L.

    2016-01-01

    Recognizing school improvement networks as a leading strategy for large-scale high school reform, this analysis examines developmental evaluation as an approach to examining school improvement networks as "learning systems" able to produce, use, and refine practical knowledge in large numbers of schools. Through a case study of one…

  13. A multi-objective constraint-based approach for modeling genome-scale microbial ecosystems.

    PubMed

    Budinich, Marko; Bourdon, Jérémie; Larhlimi, Abdelhalim; Eveillard, Damien

    2017-01-01

    Interplay within microbial communities impacts ecosystems on several scales, and elucidation of the consequent effects is a difficult task in ecology. In particular, the integration of genome-scale data within quantitative models of microbial ecosystems remains elusive. This study advocates the use of constraint-based modeling to build predictive models from recent high-resolution -omics datasets. Following recent studies that have demonstrated the accuracy of constraint-based models (CBMs) for simulating single-strain metabolic networks, we sought to study microbial ecosystems as a combination of single-strain metabolic networks that exchange nutrients. This study presents two multi-objective extensions of CBMs for modeling communities: multi-objective flux balance analysis (MO-FBA) and multi-objective flux variability analysis (MO-FVA). Both methods were applied to a hot spring mat model ecosystem. As a result, multiple trade-offs between nutrients and growth rates, as well as thermodynamically favorable relative abundances at community level, were emphasized. We expect this approach to be used for integrating genomic information in microbial ecosystems. Following models will provide insights about behaviors (including diversity) that take place at the ecosystem scale.
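
    The single-objective building block that MO-FBA extends, a flux balance LP on a toy three-reaction network, can be sketched with SciPy as follows; the network and bounds are illustrative, not the hot spring mat model.

      # Toy flux balance analysis: S v = 0, bounds on v, maximise the final flux v3.
      import numpy as np
      from scipy.optimize import linprog

      S = np.array([[ 1, -1,  0],     # metabolite A: produced by v1, consumed by v2
                    [ 0,  1, -1]])    # metabolite B: produced by v2, consumed by v3
      bounds = [(0, 10), (0, 10), (0, 10)]
      c = np.array([0, 0, -1.0])      # linprog minimises, so negate the objective

      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method='highs')
      print(res.x)                    # optimal flux distribution, e.g. [10, 10, 10]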

  14. A Disciplined Architectural Approach to Scaling Data Analysis for Massive, Scientific Data

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Braverman, A. J.; Cinquini, L.; Turmon, M.; Lee, H.; Law, E.

    2014-12-01

    Data collections across remote sensing and ground-based instruments in astronomy, Earth science, and planetary science are outpacing scientists' ability to analyze them. Furthermore, the distribution, structure, and heterogeneity of the measurements themselves pose challenges that limit the scalability of data analysis using traditional approaches. Developing science data processing pipelines, distributing scientific datasets, and performing analysis will require innovative methods that integrate cyber-infrastructure, algorithms, and data into more systematic approaches that can more efficiently compute and reduce data, particularly distributed data. This requires the integration of computer science, machine learning, statistics and domain expertise to identify scalable architectures for data analysis. The volume of data returned from Earth science observing satellites and produced by climate model output is predicted to grow into the tens of petabytes, challenging current data analysis paradigms. The same kind of growth is present in astronomy and planetary science data. One of the major challenges in data science and related disciplines is defining new approaches to scaling systems and analysis in order to increase scientific productivity and yield. Specific needs include: 1) identification of optimized system architectures for analyzing massive, distributed data sets; 2) algorithms for systematic analysis of massive data sets in distributed environments; and 3) the development of software infrastructures that are capable of performing massive, distributed data analysis across a comprehensive data science framework. NASA/JPL has begun an initiative in data science to address these challenges. Our goal is to evaluate how scientific productivity can be improved through optimized architectural topologies that identify how to deploy and manage the access, distribution, computation, and reduction of massive, distributed data, while managing the uncertainties of scientific conclusions derived from such capabilities. This talk will provide an overview of JPL's efforts in developing a comprehensive architectural approach to data science.

  15. Large-Scale Biomonitoring of Remote and Threatened Ecosystems via High-Throughput Sequencing

    PubMed Central

    Gibson, Joel F.; Shokralla, Shadi; Curry, Colin; Baird, Donald J.; Monk, Wendy A.; King, Ian; Hajibabaei, Mehrdad

    2015-01-01

    Biodiversity metrics are critical for assessment and monitoring of ecosystems threatened by anthropogenic stressors. Existing sorting and identification methods are too expensive and labour-intensive to be scaled up to meet management needs. Alternately, a high-throughput DNA sequencing approach could be used to determine biodiversity metrics from bulk environmental samples collected as part of a large-scale biomonitoring program. Here we show that both morphological and DNA sequence-based analyses are suitable for recovery of individual taxonomic richness, estimation of proportional abundance, and calculation of biodiversity metrics using a set of 24 benthic samples collected in the Peace-Athabasca Delta region of Canada. The high-throughput sequencing approach was able to recover all metrics with a higher degree of taxonomic resolution than morphological analysis. The reduced cost and increased capacity of DNA sequence-based approaches will finally allow environmental monitoring programs to operate at the geographical and temporal scale required by industrial and regulatory end-users. PMID:26488407

  16. A Systems Approach to Develop Sustainable Water Supply Infrastructure and Management

    EPA Science Inventory

    In a visit to Zhejiang University, China, Dr. Y. Jeffrey Yang will discuss in this presentation the system approach for urban water infrastructure sustainability. Through a system analysis, it becomes clear at an urban scale that the energy and water efficiencies of a water supp...

  17. Invariance of Parent Ratings of the ADHD Symptoms in Australian and Malaysian, and North European Australian and Malay Malaysia Children: A Mean and Covariance Structures Analysis Approach

    ERIC Educational Resources Information Center

    Gomez, Rapson

    2009-01-01

    Objective: This study used the mean and covariance structures analysis approach to examine the equality or invariance of ratings of the 18 ADHD symptoms. Method: 783 Australian and 928 Malaysian parents provided ratings for an ADHD rating scale. Invariance was tested across these groups (Comparison 1), and North European Australian (n = 623) and…

  18. Progress in the Phase 0 Model Development of a STAR Concept for Dynamics and Control Testing

    NASA Technical Reports Server (NTRS)

    Woods-Vedeler, Jessica A.; Armand, Sasan C.

    2003-01-01

    The paper describes progress in the development of a lightweight, deployable passive Synthetic Thinned Aperture Radiometer (STAR). The spacecraft concept presented will enable the realization of 10 km resolution global soil moisture and ocean salinity measurements at 1.41 GHz. The focus of this work was on definition of an approximately 1/3-scaled, 5-meter Phase 0 test article for concept demonstration and dynamics and control testing. Design requirements, parameters and a multi-parameter, hybrid scaling approach for the dynamically scaled test model were established. The El Scaling Approach that was established allows designers freedom to define the cross section of scaled, lightweight structural components that is most convenient for manufacturing when the mass of the component is small compared to the overall system mass. Static and dynamic response analysis was conducted on analytical models to evaluate system level performance and to optimize panel geometry for optimal tension load distribution.

  19. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    PubMed Central

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two clustering examples with ordinal variables, a more challenging variable type in analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: Using an appropriate clustering algorithm based on the measurement scale of the variables in a study yields high performance. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565

  20. Module Degradation Mechanisms Studied by a Multi-Scale Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnston, Steve; Al-Jassim, Mowafak; Hacke, Peter

    2016-11-21

    A key pathway to meeting the Department of Energy SunShot 2020 goals is to reduce financing costs by improving investor confidence through improved photovoltaic (PV) module reliability. A comprehensive approach to further understand and improve PV reliability includes characterization techniques and modeling from module to atomic scale. Imaging techniques, which include photoluminescence, electroluminescence, and lock-in thermography, are used to locate localized defects responsible for module degradation. Small area samples containing such defects are prepared using coring techniques and are then suitable and available for microscopic study and specific defect modeling and analysis.

  1. Establishing a direct connection between detrended fluctuation analysis and Fourier analysis

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken

    2015-10-01

    To understand methodological features of the detrended fluctuation analysis (DFA) using a higher-order polynomial fitting, we establish the direct connection between DFA and Fourier analysis. Based on an exact calculation of the single-frequency response of the DFA, the following facts are shown analytically: (1) in the analysis of stochastic processes exhibiting a power-law scaling of the power spectral density (PSD), S(f) ~ f^(-β), a higher-order detrending in the DFA has no adverse effect in the estimation of the DFA scaling exponent α, which satisfies the scaling relation α = (β + 1)/2; (2) the upper limit of the scaling exponents detectable by the DFA depends on the order of polynomial fit used in the DFA, and is bounded by m + 1, where m is the order of the polynomial fit; (3) the relation between the time scale in the DFA and the corresponding frequency in the PSD is distorted depending on both the order of the DFA and the frequency dependence of the PSD. We can improve the scale distortion by introducing a corrected time scale in the DFA corresponding to the inverse of the frequency scale in the PSD. In addition, our analytical approach makes it possible to characterize variants of the DFA using different types of detrending. As an application, properties of the detrending moving average algorithm are discussed.
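
    A minimal order-m DFA implementation is sketched below; it can be used to check the relation α = (β + 1)/2 numerically, e.g. white noise (β = 0) should yield α close to 0.5. This is an illustrative reimplementation, not the authors' code.

      # Minimal order-m detrended fluctuation analysis.
      import numpy as np

      def dfa(x, scales, order=1):
          y = np.cumsum(x - np.mean(x))                 # profile of the series
          fluct = []
          for s in scales:
              n_seg = len(y) // s
              rms = []
              for i in range(n_seg):
                  seg = y[i * s:(i + 1) * s]
                  t = np.arange(s)
                  trend = np.polyval(np.polyfit(t, seg, order), t)
                  rms.append(np.mean((seg - trend) ** 2))
              fluct.append(np.sqrt(np.mean(rms)))
          return np.polyfit(np.log(scales), np.log(fluct), 1)[0]   # slope = alpha

      x = np.random.randn(2 ** 14)                      # white noise, beta = 0
      scales = np.unique(np.logspace(1.2, 3.2, 20).astype(int))
      print(dfa(x, scales, order=2))                    # expect roughly 0.5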

  2. Item Factor Analysis: Current Approaches and Future Directions

    ERIC Educational Resources Information Center

    Wirth, R. J.; Edwards, Michael C.

    2007-01-01

    The rationale underlying factor analysis applies to continuous and categorical variables alike; however, the models and estimation methods for continuous (i.e., interval or ratio scale) data are not appropriate for item-level data that are categorical in nature. The authors provide a targeted review and synthesis of the item factor analysis (IFA)…

  3. Multiscale analysis of restoration priorities for marine shoreline planning.

    PubMed

    Diefenderfer, Heida L; Sobocinski, Kathryn L; Thom, Ronald M; May, Christopher W; Borde, Amy B; Southard, Susan L; Vavrinec, John; Sather, Nichole K

    2009-10-01

    Planners are being called on to prioritize marine shorelines for conservation status and restoration action. This study documents an approach to determining the management strategy most likely to succeed based on current conditions at local and landscape scales. The conceptual framework based in restoration ecology pairs appropriate restoration strategies with sites based on the likelihood of producing long-term resilience given the condition of ecosystem structures and processes at three scales: the shorezone unit (site), the drift cell reach (nearshore marine landscape), and the watershed (terrestrial landscape). The analysis is structured by a conceptual ecosystem model that identifies anthropogenic impacts on targeted ecosystem functions. A scoring system, weighted by geomorphic class, is applied to available spatial data for indicators of stress and function using geographic information systems. This planning tool augments other approaches to prioritizing restoration, including historical conditions and change analysis and ecosystem valuation.

  4. Efficient electronic structure theory via hierarchical scale-adaptive coupled-cluster formalism: I. Theory and computational complexity analysis

    NASA Astrophysics Data System (ADS)

    Lyakh, Dmitry I.

    2018-03-01

    A novel reduced-scaling, general-order coupled-cluster approach is formulated by exploiting hierarchical representations of many-body tensors, combined with the recently suggested formalism of scale-adaptive tensor algebra. Inspired by the hierarchical techniques from the renormalisation group approach, H/H2-matrix algebra and fast multipole method, the computational scaling reduction in our formalism is achieved via coarsening of quantum many-body interactions at larger interaction scales, thus imposing a hierarchical structure on many-body tensors of coupled-cluster theory. In our approach, the interaction scale can be defined on any appropriate Euclidean domain (spatial domain, momentum-space domain, energy domain, etc.). We show that the hierarchically resolved many-body tensors can reduce the storage requirements to O(N), where N is the number of simulated quantum particles. Subsequently, we prove that any connected many-body diagram consisting of a finite number of arbitrary-order tensors, e.g. an arbitrary coupled-cluster diagram, can be evaluated in O(NlogN) floating-point operations. On top of that, we suggest an additional approximation to further reduce the computational complexity of higher order coupled-cluster equations, i.e. equations involving higher than double excitations, which otherwise would introduce a large prefactor into formal O(NlogN) scaling.

  5. Quantitative maps of genetic interactions in yeast - comparative evaluation and integrative analysis.

    PubMed

    Lindén, Rolf O; Eronen, Ville-Pekka; Aittokallio, Tero

    2011-03-24

    High-throughput genetic screening approaches have enabled systematic means to study how interactions among gene mutations contribute to quantitative fitness phenotypes, with the aim of providing insights into the functional wiring diagrams of genetic interaction networks on a global scale. However, it is poorly known how well these quantitative interaction measurements agree across the screening approaches, which hinders their integrated use toward improving the coverage and quality of the genetic interaction maps in yeast and other organisms. Using large-scale data matrices from epistatic miniarray profiling (E-MAP), genetic interaction mapping (GIM), and synthetic genetic array (SGA) approaches, we carried out here a systematic comparative evaluation among these quantitative maps of genetic interactions in yeast. The relatively low association between the original interaction measurements or their customized scores could be improved using a matrix-based modelling framework, which enables the use of single- and double-mutant fitness estimates and measurements, respectively, when scoring genetic interactions. Toward an integrative analysis, we show how the detections from the different screening approaches can be combined to suggest novel positive and negative interactions which are complementary to those obtained using any single screening approach alone. The matrix approximation procedure has been made available to support the design and analysis of the future screening studies. We have shown here that even if the correlation between the currently available quantitative genetic interaction maps in yeast is relatively low, their comparability can be improved by means of our computational matrix approximation procedure, which will enable integrative analysis and detection of a wider spectrum of genetic interactions using data from the complementary screening approaches.
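
    The basic quantity underlying E-MAP/SGA-style scores, the deviation of observed double-mutant fitness from the multiplicative expectation, is shown below with toy values; the paper's matrix-approximation scoring is a refinement of this and is not reproduced here.

      # Deviation of observed double-mutant fitness from the multiplicative model.
      w_single = {'geneA': 0.90, 'geneB': 0.80}   # single-mutant fitness (toy values)
      w_double = 0.40                             # observed double-mutant fitness

      expected = w_single['geneA'] * w_single['geneB']
      epsilon = w_double - expected               # < 0: negative (synthetic sick) interaction
      print(round(expected, 3), round(epsilon, 3))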

  6. Impact of Spatial Scales on the Intercomparison of Climate Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Wei; Steptoe, Michael; Chang, Zheng

    2017-01-01

    Scenario analysis has been widely applied in climate science to understand the impact of climate change on the future human environment, but intercomparison and similarity analysis of different climate scenarios based on multiple simulation runs remain challenging. Although spatial heterogeneity plays a key role in modeling climate and human systems, little research has been performed to understand the impact of spatial variations and scales on similarity analysis of climate scenarios. To address this issue, the authors developed a geovisual analytics framework that lets users perform similarity analysis of climate scenarios from the Global Change Assessment Model (GCAM) using a hierarchical clustering approach.
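
    A generic form of such a similarity analysis, hierarchical clustering of scenario runs represented as feature vectors, can be sketched with SciPy; the feature matrix is random placeholder data, and the GCAM-specific processing of the framework is not reproduced.

      # Represent each scenario run as a feature vector, then cluster the runs.
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import pdist

      scenarios = np.random.rand(12, 30)          # 12 runs x 30 regional indicators (toy)
      dist = pdist(scenarios, metric='euclidean')
      tree = linkage(dist, method='average')
      groups = fcluster(tree, t=4, criterion='maxclust')
      print(groups)                               # cluster label per scenario run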

  7. Allometric scaling: analysis of LD50 data.

    PubMed

    Burzala-Kowalczyk, Lidia; Jongbloed, Geurt

    2011-04-01

    The need to identify toxicologically equivalent doses across different species is a major issue in toxicology and risk assessment. In this article, we investigate interspecies scaling based on the allometric equation applied to the single, oral LD50 data previously analyzed by Rhomberg and Wolff. We focus on the statistical approach, namely, regression analysis of the mentioned data. In contrast to Rhomberg and Wolff's analysis of species pairs, we perform an overall analysis based on the whole data set. From our study it follows that if one assumes one single scaling rule for all species and substances in the data set, then β = 1 is the most natural choice among a set of candidates known in the literature. In fact, we obtain quite narrow confidence intervals for this parameter. However, the estimate of the variance in the model is relatively high, resulting in rather wide prediction intervals. © 2010 Society for Risk Analysis.
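
    The underlying allometric fit, D = a · BW^β estimated by regression on the log scale, can be sketched as follows with toy data; this mirrors the form of the overall analysis, not the actual LD50 dataset.

      # Allometric equation D = a * BW**beta fitted on the log scale.
      import numpy as np

      bw = np.array([0.02, 0.2, 2.0, 10.0, 70.0])          # body weights, kg (toy)
      ld50 = np.array([3.1, 28.0, 310.0, 1450.0, 9800.0])  # dose per animal, mg (toy)

      beta, log_a = np.polyfit(np.log(bw), np.log(ld50), 1)
      print(round(beta, 3), round(np.exp(log_a), 3))       # slope beta and coefficient a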

  8. Nonlinear zero-sum differential game analysis by singular perturbation methods

    NASA Technical Reports Server (NTRS)

    Sinar, J.; Farber, N.

    1982-01-01

    A class of nonlinear, zero-sum differential games, exhibiting time-scale separation properties, can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed to a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach is proposed. Accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.

  9. Adjoint-Based Aerodynamic Design of Complex Aerospace Configurations

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.

    2016-01-01

    An overview of twenty years of adjoint-based aerodynamic design research at NASA Langley Research Center is presented. Adjoint-based algorithms provide a powerful tool for efficient sensitivity analysis of complex large-scale computational fluid dynamics (CFD) simulations. Unlike alternative approaches for which computational expense generally scales with the number of design parameters, adjoint techniques yield sensitivity derivatives of a simulation output with respect to all input parameters at the cost of a single additional simulation. With modern large-scale CFD applications often requiring millions of compute hours for a single analysis, the efficiency afforded by adjoint methods is critical in realizing a computationally tractable design optimization capability for such applications.

  10. Bridging scales through multiscale modeling: a case study on protein kinase A.

    PubMed

    Boras, Britton W; Hirakis, Sophia P; Votapka, Lane W; Malmstrom, Robert D; Amaro, Rommie E; McCulloch, Andrew D

    2015-01-01

    The goal of multiscale modeling in biology is to use structurally based physico-chemical models to integrate across temporal and spatial scales of biology and thereby improve mechanistic understanding of, for example, how a single mutation can alter organism-scale phenotypes. This approach may also inform therapeutic strategies or identify candidate drug targets that might otherwise have been overlooked. However, in many cases, it remains unclear how best to synthesize information obtained from various scales and analysis approaches, such as atomistic molecular models, Markov state models (MSM), subcellular network models, and whole cell models. In this paper, we use protein kinase A (PKA) activation as a case study to explore how computational methods that model different physical scales can complement each other and integrate into an improved multiscale representation of the biological mechanisms. Using measured crystal structures, we show how molecular dynamics (MD) simulations coupled with atomic-scale MSMs can provide conformations for Brownian dynamics (BD) simulations to feed transitional states and kinetic parameters into protein-scale MSMs. We discuss how milestoning can give reaction probabilities and forward-rate constants of cAMP association events by seamlessly integrating MD and BD simulation scales. These rate constants coupled with MSMs provide a robust representation of the free energy landscape, enabling access to kinetic, and thermodynamic parameters unavailable from current experimental data. These approaches have helped to illuminate the cooperative nature of PKA activation in response to distinct cAMP binding events. Collectively, this approach exemplifies a general strategy for multiscale model development that is applicable to a wide range of biological problems.

  11. Multiple-length-scale deformation analysis in a thermoplastic polyurethane

    PubMed Central

    Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.

    2015-01-01

    Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945

  12. An Integrated Approach for Urban Earthquake Vulnerability Analyses

    NASA Astrophysics Data System (ADS)

    Düzgün, H. S.; Yücemen, M. S.; Kalaycioglu, H. S.

    2009-04-01

    The earthquake risk for urban areas has increased over the years due to the increasing complexity of urban environments. The main reasons are the location of major cities in hazard-prone areas, growth in urbanization and population, and rising wealth. In recent years, the physical consequences of these factors have been observed in the growing costs of major disasters in urban areas, which have stimulated demand for in-depth evaluation of possible strategies to manage the large-scale damaging effects of earthquakes. Understanding and formulating urban earthquake risk requires consideration of a wide range of risk aspects, which can be handled by developing an integrated approach in which an interdisciplinary view is incorporated into the risk assessment. Risk assessment for an urban area requires prediction of the vulnerabilities of the elements at risk and integration of the individual vulnerability assessments. However, due to the complex nature of an urban environment, estimating and integrating vulnerabilities necessitates integrated approaches in which the vulnerabilities of social, economic, structural (building stock and infrastructure), cultural, and historical heritage elements are estimated for a given urban area over a given time period. In this study, an integrated urban earthquake vulnerability assessment framework is proposed that considers the vulnerability of the urban environment in a holistic manner and performs the vulnerability assessment for the smallest administrative unit, namely at the neighborhood scale. The main motivation behind this approach is the difficulty of implementing existing vulnerability assessment methodologies in countries like Turkey, where the required data are usually missing or inadequate and decision makers seek to prioritize their limited risk-reduction resources across the administrative districts for which they are responsible. The methodology integrates socio-economic, structural, coastal, ground-condition, and organizational vulnerabilities, as well as accessibility to critical services, within the framework. The proposed framework has the following eight components: seismic hazard analysis, soil response analysis, tsunami inundation analysis, structural vulnerability analysis, socio-economic vulnerability analysis, accessibility to critical services, GIS-based integrated vulnerability assessment, and visualization of vulnerabilities in a 3D virtual city model. The integrated model for the various vulnerabilities is developed in a GIS environment using the individual vulnerability assessments for the considered elements at risk and serves as the backbone of a spatial decision support system. The stages followed in the model are: determination of a common mapping unit for each aspect of urban earthquake vulnerability, formation of a geo-database for the vulnerabilities, evaluation of urban vulnerability based on multi-attribute utility theory with various weighting algorithms, and mapping of the evaluated integrated earthquake risk in geographic information systems (GIS) at the neighborhood scale. The framework is also applicable to larger geographical mapping scales, for example the building scale. When illustrating results at the building scale, 3D visualizations with remote sensing data are used so that decision makers can easily interpret the outputs.
    The proposed vulnerability assessment framework is flexible and can easily be applied to urban environments at various geographical scales and with different mapping units. The resulting total vulnerability maps for the urban area provide a baseline for decision makers to develop risk reduction strategies. Moreover, as several aspects of the elements at risk are considered through the vulnerability analyses, the effect of changes in individual vulnerability conditions on the total vulnerability can easily be determined. The developed approach also enables decision makers to monitor temporal and spatial changes in the urban environment resulting from the implementation of risk reduction strategies.
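
    A minimal sketch of the kind of weighted, multi-attribute aggregation described above (not the authors' implementation): the vulnerability aspects, scores, and weights are hypothetical placeholders, and the additive utility form is only one of the weighting algorithms such a framework could use.

```python
# Sketch: weighted additive aggregation of normalized vulnerability scores for
# one mapping unit (e.g., a neighborhood). All names and numbers are illustrative.

def aggregate_vulnerability(scores, weights):
    """scores, weights: dicts keyed by vulnerability aspect; scores in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

neighborhood = {"structural": 0.7, "socio_economic": 0.5,
                "ground_condition": 0.3, "accessibility": 0.6}
weights = {"structural": 0.4, "socio_economic": 0.3,
           "ground_condition": 0.2, "accessibility": 0.1}

print(aggregate_vulnerability(neighborhood, weights))  # integrated score in [0, 1]
```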

  13. Quantitative subpixel spectral detection of targets in multispectral images. [terrestrial and planetary surfaces]

    NASA Technical Reports Server (NTRS)

    Sabol, Donald E., Jr.; Adams, John B.; Smith, Milton O.

    1992-01-01

    The conditions that affect the spectral detection of target materials at the subpixel scale are examined. Two levels of spectral mixture analysis for determining threshold detection limits of target materials in a spectral mixture are presented: the cases where the target is detected as (1) a component of a spectral mixture (continuum threshold analysis) or (2) part of the residuals (residual threshold analysis). The results of these two analyses are compared under various measurement conditions. The examples illustrate the general approach that can be used for evaluating the spectral detectability of terrestrial and planetary targets at the subpixel scale.
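
    A rough illustration of the underlying mixture-model step, assuming linear mixing (the endmember spectra and band values below are invented for the example): the estimated target fraction and the fit residual are the two quantities that continuum and residual threshold analyses would compare against detection limits.

```python
import numpy as np

# Hypothetical endmember spectra (columns) and one mixed pixel spectrum.
endmembers = np.array([[0.10, 0.40, 0.25],
                       [0.15, 0.45, 0.30],
                       [0.20, 0.55, 0.35],
                       [0.30, 0.60, 0.20]])          # 4 bands x 3 endmembers
pixel = endmembers @ np.array([0.6, 0.3, 0.1]) + 0.005  # mixture plus a little noise

# Unconstrained least-squares estimate of the fractional abundances.
fractions, residuals, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
rmse = np.sqrt(np.mean((endmembers @ fractions - pixel) ** 2))
print(fractions, rmse)  # compare the target fraction and the residual against detection thresholds
```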

  14. Applying the Pseudo-Panel Approach to International Large-Scale Assessments: A Methodology for Analyzing Subpopulation Trend Data

    ERIC Educational Resources Information Center

    Hooper, Martin

    2017-01-01

    TIMSS and PIRLS assess representative samples of students at regular intervals, measuring trends in student achievement and student contexts for learning. Because individual students are not tracked over time, analysis of international large-scale assessment data is usually conducted cross-sectionally. Gustafsson (2007) proposed examining the data…

  15. Behavioral and Emotional Strength-Based Assessment of Finnish Elementary Students: Psychometrics of the BERS-2

    ERIC Educational Resources Information Center

    Sointu, Erkko Tapio; Savolainen, Hannu; Lambert, Matthew C.; Lappalainen, Kristiina; Epstein, Michael H.

    2014-01-01

    When rating scales are used in different countries, thorough investigation of the psychometric properties is needed. We examined the internal structure of the Finnish translated Behavioral and Emotional Rating Scale-2 (BERS-2) using Rasch and confirmatory factor analysis approaches with a sample of youth, parents, and teachers. The results…

  16. The Relational Self-Concept Scale: A Context-Specific Self-Report Measure for Adolescents.

    ERIC Educational Resources Information Center

    Schott, Gareth R.; Bellin, Wynford

    2001-01-01

    Describes an alternative approach to measuring the self that directly accounts for the way individuals ruminate on their external actions in order to inform and maintain their self-image. Analysis of responses to this measure confirmed that the scale is multidimensional, possesses appropriate properties, and contains a high degree of ecological…

  17. Bayesian Factor Analysis as a Variable Selection Problem: Alternative Priors and Consequences

    PubMed Central

    Lu, Zhao-Hua; Chow, Sy-Miin; Loken, Eric

    2016-01-01

    Factor analysis is a popular statistical technique for multivariate data analysis. Developments in the structural equation modeling framework have enabled the use of hybrid confirmatory/exploratory approaches in which factor loading structures can be explored relatively flexibly within a confirmatory factor analysis (CFA) framework. Recently, a Bayesian structural equation modeling (BSEM) approach (Muthén & Asparouhov, 2012) has been proposed as a way to explore the presence of cross-loadings in CFA models. We show that the issue of determining factor loading patterns may be formulated as a Bayesian variable selection problem in which Muthén and Asparouhov’s approach can be regarded as a BSEM approach with ridge regression prior (BSEM-RP). We propose another Bayesian approach, denoted herein as the Bayesian structural equation modeling with spike and slab prior (BSEM-SSP), which serves as a one-stage alternative to the BSEM-RP. We review the theoretical advantages and disadvantages of both approaches and compare their empirical performance relative to two modification indices-based approaches and exploratory factor analysis with target rotation. A teacher stress scale data set (Byrne, 2012; Pettegrew & Wolf, 1982) is used to demonstrate our approach. PMID:27314566

  18. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    PubMed

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models to full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there has been no standard approach to performing a calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modelling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution, four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) will be critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It will also be assessed in what way these approaches can be further developed with a view to further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  19. Brief communication: Using averaged soil moisture estimates to improve the performances of a regional-scale landslide early warning system

    NASA Astrophysics Data System (ADS)

    Segoni, Samuele; Rosi, Ascanio; Lagomarsino, Daniela; Fanti, Riccardo; Casagli, Nicola

    2018-03-01

    We communicate the results of a preliminary investigation aimed at improving a state-of-the-art RSLEWS (regional-scale landslide early warning system) based on rainfall thresholds by integrating mean soil moisture values averaged over the territorial units of the system. We tested two approaches. The simplest can be easily applied to improve other RSLEWS: it is based on a soil moisture threshold value under which rainfall thresholds are not used because landslides are not expected to occur. Another approach deeply modifies the original RSLEWS: thresholds based on antecedent rainfall accumulated over long periods are substituted with soil moisture thresholds. A back analysis demonstrated that both approaches consistently reduced false alarms, while the second approach reduced missed alarms as well.
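
    A minimal sketch of the simpler of the two approaches, i.e. a soil-moisture gate placed in front of an intensity-duration rainfall threshold; the threshold parameters, the soil-moisture cutoff, and the function names are illustrative assumptions, not the operational values of the system.

```python
# Sketch of a soil-moisture-gated rainfall threshold (all numbers are made up).

def alarm(rainfall_mm, duration_h, soil_moisture, sm_min=0.25):
    """Issue an alarm only if soil moisture exceeds sm_min AND the
    intensity-duration threshold I = a * D**b is exceeded."""
    if soil_moisture < sm_min:          # landslides not expected: skip the rainfall check
        return False
    a, b = 20.0, -0.6                   # hypothetical threshold parameters
    intensity = rainfall_mm / duration_h
    return intensity > a * duration_h ** b

print(alarm(rainfall_mm=80, duration_h=12, soil_moisture=0.35))
```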

  20. New, national bottom-up estimate for tree-based biological ...

    EPA Pesticide Factsheets

    Nitrogen is a limiting nutrient in many ecosystems, but is also a chief pollutant from human activity. Quantifying human impacts on the nitrogen cycle and investigating natural ecosystem nitrogen cycling both require an understanding of the magnitude of nitrogen inputs from biological nitrogen fixation (BNF). A bottom-up approach to estimating BNF—scaling rates up from measurements to broader scales—is attractive because it is rooted in actual BNF measurements. However, bottom-up approaches have been hindered by scaling difficulties, and a recent top-down approach suggested that the previous bottom-up estimate was much too large. Here, we used a bottom-up approach for tree-based BNF, overcoming scaling difficulties with the systematic, immense (>70,000 N-fixing trees) Forest Inventory and Analysis (FIA) database. We employed two approaches to estimate species-specific BNF rates: published ecosystem-scale rates (kg N ha^-1 yr^-1) and published estimates of the percent of N derived from the atmosphere (%Ndfa) combined with FIA-derived growth rates. Species-specific rates can vary for a variety of reasons, so for each approach we examined how different assumptions influenced our results. Specifically, we allowed BNF rates to vary with stand age, N-fixer density, and canopy position (since N-fixation is known to require substantial light). Our estimates from this bottom-up technique are several orders of magnitude lower than previous estimates indicating

  1. A sub-national scale geospatial analysis of diamond deposit lootability: the case of the Central African Republic

    USGS Publications Warehouse

    Malpeli, Katherine C.; Chirico, Peter G.

    2014-01-01

    The Central African Republic (CAR), a country with rich diamond deposits and a tumultuous political history, experienced a government takeover by the Seleka rebel coalition in 2013. It is within this context that we developed and implemented a geospatial approach for assessing the lootability of high value-to-weight resource deposits, using the case of diamonds in CAR as an example. According to current definitions of lootability, or the vulnerability of deposits to exploitation, CAR's two major diamond deposits are similarly lootable. However, using this geospatial approach, we demonstrate that the deposits experience differing political geographic, spatial location, and cultural geographic contexts, rendering the eastern deposits more lootable than the western deposits. The patterns identified through this detailed analysis highlight the geographic complexities surrounding the issue of conflict resources and lootability, and speak to the importance of examining these topics at the sub-national scale, rather than relying on national-scale statistics.

  2. Divided-evolution-based pulse scheme for quantifying exchange processes in proteins: powerful complement to relaxation dispersion experiments.

    PubMed

    Bouvignies, Guillaume; Hansen, D Flemming; Vallurupalli, Pramodh; Kay, Lewis E

    2011-02-16

    A method for quantifying millisecond time scale exchange in proteins is presented based on scaling the rate of chemical exchange using a 2D (15)N, (1)H(N) experiment in which (15)N dwell times are separated by short spin-echo pulse trains. Unlike the popular Carr-Purcell-Meiboom-Gill (CPMG) experiment where the effects of a radio frequency field on measured transverse relaxation rates are quantified, the new approach measures peak positions in spectra that shift as the effective exchange time regime is varied. The utility of the method is established through an analysis of data recorded on an exchanging protein-ligand system for which the exchange parameters have been accurately determined using alternative approaches. Computations establish that a combined analysis of CPMG and peak shift profiles extends the time scale that can be studied to include exchanging systems with highly skewed populations and exchange rates as slow as 20 s(-1).

  3. The Use of Weighted Graphs for Large-Scale Genome Analysis

    PubMed Central

    Zhou, Fang; Toivonen, Hannu; King, Ross D.

    2014-01-01

    There is an acute need for better tools to extract knowledge from the growing flood of sequence data. For example, thousands of complete genomes have been sequenced, and their metabolic networks inferred. Such data should enable a better understanding of evolution. However, most existing network analysis methods are based on pair-wise comparisons, and these do not scale to thousands of genomes. Here we propose the use of weighted graphs as a data structure to enable large-scale phylogenetic analysis of networks. We have developed three types of weighted graph for enzymes: taxonomic (these summarize phylogenetic importance), isoenzymatic (these summarize enzymatic variety/redundancy), and sequence-similarity (these summarize sequence conservation); and we applied these types of weighted graph to survey prokaryotic metabolism. To demonstrate the utility of this approach we have compared and contrasted the large-scale evolution of metabolism in Archaea and Eubacteria. Our results provide evidence for limits to the contingency of evolution. PMID:24619061
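
    A toy sketch of the weighted-graph idea (hypothetical EC numbers and co-occurrence counts, not the authors' taxonomic, isoenzymatic, or sequence-similarity weights): a single weighted graph summarizes information that would otherwise require pair-wise genome comparisons.

```python
import networkx as nx

# Nodes are enzymes (EC numbers); edge weights count how many genomes contain
# both enzymes. The numbers below are invented for illustration.
G = nx.Graph()
G.add_edge("EC:2.7.1.1", "EC:5.3.1.9", weight=950)
G.add_edge("EC:5.3.1.9", "EC:2.7.1.11", weight=910)
G.add_edge("EC:2.7.1.1", "EC:1.1.1.1", weight=120)

# Rank enzyme pairs by how widely they co-occur across genomes.
for u, v, d in sorted(G.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(u, v, d["weight"])
```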

  4. Scaling analysis applied to the NORVEX code development and thermal energy flight experiment

    NASA Technical Reports Server (NTRS)

    Skarda, J. Raymond Lee; Namkoong, David; Darling, Douglas

    1991-01-01

    A scaling analysis is used to study the dominant flow processes that occur in molten phase change material (PCM) under 1 g and microgravity conditions. Results of the scaling analysis are applied to the development of the NORVEX (NASA Oak Ridge Void Experiment) computer program and the preparation of the Thermal Energy Storage (TES) flight experiment. The NORVEX computer program which is being developed to predict melting and freezing with void formation in a 1 g or microgravity environment of the PCM is described. NORVEX predictions are compared with the scaling and similarity results. The approach to be used to validate NORVEX with TES flight data is also discussed. Similarity and scaling show that the inertial terms must be included as part of the momentum equation in either the 1 g or microgravity environment (a creeping flow assumption is invalid). A 10^-4 g environment was found to be a suitable microgravity environment for the proposed PCM.

  5. Fine-Scale Exposure to Allergenic Pollen in the Urban Environment: Evaluation of Land Use Regression Approach.

    PubMed

    Hjort, Jan; Hugg, Timo T; Antikainen, Harri; Rusanen, Jarmo; Sofiev, Mikhail; Kukkonen, Jaakko; Jaakkola, Maritta S; Jaakkola, Jouni J K

    2016-05-01

    Despite the recent developments in physically and chemically based analysis of atmospheric particles, no models exist for resolving the spatial variability of pollen concentration at the urban scale. We developed a land use regression (LUR) approach for predicting spatial fine-scale allergenic pollen concentrations in the Helsinki metropolitan area, Finland, and evaluated the performance of the models against available empirical data. We used grass pollen data monitored at 16 sites in an urban area during the peak pollen season and geospatial environmental data. The main statistical method was the generalized linear model (GLM). GLM-based LURs explained 79% of the spatial variation in the grass pollen data based on all samples, and 47% of the variation when samples from two sites with very high concentrations were excluded. In model evaluation, prediction errors ranged from 6% to 26% of the observed range of grass pollen concentrations. Our findings support the use of geospatial data-based statistical models to predict the spatial variation of allergenic grass pollen concentrations at intra-urban scales. A remote sensing-based vegetation index was the strongest predictor of pollen concentrations for exposure assessments at local scales. The LUR approach provides new opportunities to estimate the relations between environmental determinants and allergenic pollen concentration in human-modified environments at fine spatial scales. This approach could potentially be applied to retrospectively estimate pollen concentrations for use in long-term exposure assessments. Hjort J, Hugg TT, Antikainen H, Rusanen J, Sofiev M, Kukkonen J, Jaakkola MS, Jaakkola JJ. 2016. Fine-scale exposure to allergenic pollen in the urban environment: evaluation of land use regression approach. Environ Health Perspect 124:619-626; http://dx.doi.org/10.1289/ehp.1509761.
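
    A minimal LUR-style sketch, assuming a Gaussian GLM with identity link (the predictors, site values, and coefficients below are invented; the study's actual predictor set and link function may differ).

```python
import numpy as np
import statsmodels.api as sm

# Predict pollen concentration at monitoring sites from geospatial predictors:
# a fake NDVI-like vegetation index and distance to the nearest grassland patch.
rng = np.random.default_rng(0)
ndvi = rng.uniform(0.2, 0.8, 16)
dist_grass = rng.uniform(50, 2000, 16)
pollen = 40 + 120 * ndvi - 0.01 * dist_grass + rng.normal(0, 5, 16)

X = sm.add_constant(np.column_stack([ndvi, dist_grass]))
fit = sm.GLM(pollen, X, family=sm.families.Gaussian()).fit()
print(fit.params)          # intercept and predictor coefficients
print(fit.predict(X)[:3])  # predictions; fine-scale maps would use predictors at unsampled locations
```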

  6. When micro meets macro: microbial lipid analysis and ecosystem ecology

    NASA Astrophysics Data System (ADS)

    Balser, T.; Gutknecht, J.

    2008-12-01

    There is growing interest in linking soil microbial community composition and activity with large-scale field studies of nutrient cycling or plant community response to disturbances. And while the analysis of microbial communities has moved rapidly in the past decade from culture-based to non-culture-based techniques, it must still be asked what we have gained from the move. How well does the necessarily micro scale of microbial analysis allow us to address questions of interest at the macro scale? Several challenges exist in bridging the scales, and foremost is the question of methodological feasibility. Past microbiological methodologies have not been readily adaptable to the large sample sizes necessary for ecosystem-scale research. As a result, it has been difficult to generate compatible microbial and ecosystem data sets. We describe the use of a modified lipid extraction method to generate microbial community data sets that allow us to match landscape-scale or long-term ecological studies with microbial community data. We briefly discuss the challenges and advantages associated with lipid analysis as an approach to addressing ecosystem ecological studies, and provide examples from our research in ecosystem restoration and recovery following disturbance and climate change.

  7. Degradation modeling of high temperature proton exchange membrane fuel cells using dual time scale simulation

    NASA Astrophysics Data System (ADS)

    Pohl, E.; Maximini, M.; Bauschulte, A.; vom Schloß, J.; Hermanns, R. T. E.

    2015-02-01

    HT-PEM fuel cells suffer from performance losses due to degradation effects. Therefore, the durability of HT-PEM is currently an important focus of research and development. In this paper a novel approach is presented for an integrated short term and long term simulation of HT-PEM accelerated lifetime testing. The physical phenomena of short term and long term effects are commonly modeled separately due to the different time scales. However, in accelerated lifetime testing, long term degradation effects have a crucial impact on the short term dynamics. Our approach addresses this problem by applying a novel method for dual time scale simulation. A transient system simulation is performed for an open voltage cycle test on an HT-PEM fuel cell for a physical time of 35 days. The analysis describes the system dynamics by numerical electrochemical impedance spectroscopy. Furthermore, a performance assessment is carried out to demonstrate the efficiency of the approach. The presented approach reduces the simulation time by approximately 73% compared to a conventional simulation approach, with little loss of accuracy. The approach promises a comprehensive perspective considering short term dynamic behavior and long term degradation effects.

  8. Data layer integration for the national map of the united states

    USGS Publications Warehouse

    Usery, E.L.; Finn, M.P.; Starbuck, M.

    2009-01-01

    The integration of geographic data layers in multiple raster and vector formats, from many different organizations and at a variety of resolutions and scales, is a significant problem for The National Map of the United States being developed by the U.S. Geological Survey. Our research has examined data integration from a layer-based approach for five of The National Map data layers: digital orthoimages, elevation, land cover, hydrography, and transportation. An empirical approach has included visual assessment by a set of respondents with statistical analysis to establish the meaning of various types of integration. A separate theoretical approach with established hypotheses tested against actual data sets has resulted in an automated procedure for integration of specific layers and is being tested. The empirical analysis has established resolution bounds on meanings of integration with raster datasets and distance bounds for vector data. The theoretical approach has used a combination of theories on cartographic transformation and generalization, such as Töpfer's radical law, and additional research concerning optimum viewing scales for digital images to establish a set of guiding principles for integrating data of different resolutions.

  9. Cracks dynamics under tensional stress - a DEM approach

    NASA Astrophysics Data System (ADS)

    Debski, Wojciech; Klejment, Piotr; Kosmala, Alicja; Foltyn, Natalia; Szpindler, Maciej

    2017-04-01

    Breaking and fragmentation of solid materials is an extremely complex process involving scales ranging from the atomic scale (breaking inter-atomic bonds) up to thousands of kilometers in the case of catastrophic earthquakes (on the energy scale it ranges from single eV up to 10^24 J). Such a large span of scales in breaking processes opens up many questions, for example the scaling of breaking processes, the existence of factors controlling the final size of the broken area, the existence of precursors, and the dynamics of fragmentation, to name a few. The classical approach to studying breaking processes at seismological scales, i.e., physical processes in earthquake foci, is essentially based on two factors: seismic data (mostly) and continuum mechanics (including linear fracture mechanics). This approach has been notably successful in developing kinematic (first) and, more recently, dynamic models of seismic rupture and explaining many earthquake features observed all around the globe. However, such an approach will sooner or later face a limitation due to the limited information content of seismic data and the inherent limitations of fracture mechanics principles. A way of avoiding this expected limitation is to turn attention towards a method well established in physics, computational simulation - a powerful branch of contemporary physics. In this presentation we discuss preliminary results of an analysis of fracturing dynamics under external tensional forces using the Discrete Element Method approach. We demonstrate that even under very simplified tensional conditions, the fragmentation dynamics is a very complex process, including multi-fracturing, spontaneous fracture generation and healing, etc. We also emphasize the role of material heterogeneity in the fragmentation process.

  10. A comparison of methods for determining field evapotranspiration: photosynthesis system, sap flow, and eddy covariance

    NASA Astrophysics Data System (ADS)

    Zhang, Z.; Tian, F.; Hu, H. C.; Hu, H. P.

    2013-11-01

    A multi-scale, multi-technique study was conducted to measure evapotranspiration and its components in a cotton field under mulched drip irrigation conditions in northwestern China. Three measurement techniques at different scales were used: photosynthesis system (leaf scale), sap flow (plant scale), and eddy covariance (field scale). The experiment was conducted from July to September 2012. To upscale the evapotranspiration from the leaf to the plant scale, an approach that incorporated the canopy structure and the relationships between sunlit and shaded leaves was proposed. To upscale the evapotranspiration from the plant to the field scale, an approach based on the transpiration per unit leaf area was adopted and modified to incorporate the temporal variability in the relationships between leaf area and stem diameter. At the plant scale, the estimate of transpiration based on the photosynthesis system with upscaling was slightly higher (18%) than that obtained by sap flow. At the field scale, the estimates of transpiration derived from sap flow with upscaling and eddy covariance showed reasonable consistency during the cotton open boll growth stage, when soil evaporation can be neglected. The results indicate that the upscaling approaches are reasonable and valid. Based on the measurements and upscaling approaches, evapotranspiration components were analyzed under mulched drip irrigation. During the two analysis sub-periods in July and August, evapotranspiration rates were 3.94 and 4.53 mm day^-1, respectively. The fraction of transpiration to evapotranspiration reached 87.1% before drip irrigation and 82.3% after irrigation. The high fraction of transpiration over evapotranspiration was principally due to the mulched film above the drip pipe, low soil water content in the inter-film zone, the well-closed canopy, and the high water requirement of the crop.

  11. Validation of Normalizations, Scaling, and Photofading Corrections for FRAP Data Analysis

    PubMed Central

    Kang, Minchul; Andreani, Manuel; Kenworthy, Anne K.

    2015-01-01

    Fluorescence Recovery After Photobleaching (FRAP) has been a versatile tool to study transport and reaction kinetics in live cells. Since the fluorescence data generated by fluorescence microscopy are on a relative scale, a wide variety of scalings and normalizations are used in quantitative FRAP analysis. Scaling and normalization are often required to account for inherent properties of diffusing biomolecules of interest or photochemical properties of the fluorescent tag such as mobile fraction or photofading during image acquisition. In some cases, scaling and normalization are also used for computational simplicity. However, to the best of our knowledge, the validity of those various forms of scaling and normalization has not been studied in a rigorous manner. In this study, we investigate the validity of various scalings and normalizations that have appeared in the literature to calculate mobile fractions and correct for photofading and assess their consistency with FRAP equations. As a test case, we consider linear or affine scaling of normal or anomalous diffusion FRAP equations in combination with scaling for immobile fractions. We also consider exponential scaling of either FRAP equations or FRAP data to correct for photofading. Using a combination of theoretical and experimental approaches, we show that compatible scaling schemes should be applied in the correct sequential order; otherwise, erroneous results may be obtained. We propose a hierarchical workflow to carry out FRAP data analysis and discuss the broader implications of our findings for FRAP data analysis using a variety of kinetic models. PMID:26017223

  12. In situ and in-transit analysis of cosmological simulations

    DOE PAGES

    Friesen, Brian; Almgren, Ann; Lukic, Zarija; ...

    2016-08-24

    Modern cosmological simulations have reached the trillion-element scale, rendering data storage and subsequent analysis formidable tasks. To address this circumstance, we present a new MPI-parallel approach for analysis of simulation data while the simulation runs, as an alternative to the traditional workflow consisting of periodically saving large data sets to disk for subsequent ‘offline’ analysis. We demonstrate this approach in the compressible gasdynamics/N-body code Nyx, a hybrid MPI+OpenMP code based on the BoxLib framework, used for large-scale cosmological simulations. We have enabled on-the-fly workflows in two different ways: one is a straightforward approach consisting of all MPI processes periodically halting the main simulation and analyzing each component of data that they own (‘in situ’). The other consists of partitioning processes into disjoint MPI groups, with one performing the simulation and periodically sending data to the other ‘sidecar’ group, which post-processes it while the simulation continues (‘in-transit’). The two groups execute their tasks asynchronously, stopping only to synchronize when a new set of simulation data needs to be analyzed. For both the in situ and in-transit approaches, we experiment with two different analysis suites with distinct performance behavior: one which finds dark matter halos in the simulation using merge trees to calculate the mass contained within iso-density contours, and another which calculates probability distribution functions and power spectra of various fields in the simulation. Both are common analysis tasks for cosmology, and both result in summary statistics significantly smaller than the original data set. We study the behavior of each type of analysis in each workflow in order to determine the optimal configuration for the different data analysis algorithms.
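
    A toy mpi4py sketch of the in-transit partitioning described above (the group sizes, message tag, and payload are arbitrary; Nyx itself implements this in C++ on BoxLib, so this only illustrates the communicator split). Run with at least two ranks, e.g. mpiexec -n 4.

```python
from mpi4py import MPI

world = MPI.COMM_WORLD
rank, size = world.Get_rank(), world.Get_size()
n_sidecar = max(1, size // 4)
is_sidecar = rank >= size - n_sidecar

# Split the world communicator into a simulation group and a 'sidecar' group;
# each group would use its sub-communicator for its own collective operations.
group = world.Split(color=1 if is_sidecar else 0, key=rank)

if rank == 0:                                          # one simulation rank hands off data
    data = {"step": 10, "halo_masses": [1e12, 3e11]}   # stand-in for simulation output
    world.send(data, dest=size - n_sidecar, tag=10)
elif rank == size - n_sidecar:                         # first sidecar rank post-processes it
    data = world.recv(source=0, tag=10)                # analysis proceeds while the simulation continues
    print("sidecar received", data)
```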

  13. An automated approach for extracting Barrier Island morphology from digital elevation models

    NASA Astrophysics Data System (ADS)

    Wernette, Phillipe; Houser, Chris; Bishop, Michael P.

    2016-06-01

    The response and recovery of a barrier island to extreme storms depends on the elevation of the dune base and crest, both of which can vary considerably alongshore and through time. Quantifying the response to and recovery from storms requires that we can first identify and differentiate the dune(s) from the beach and back-barrier, which in turn depends on accurate identification and delineation of the dune toe, crest and heel. The purpose of this paper is to introduce a multi-scale automated approach for extracting beach, dune (dune toe, dune crest and dune heel), and barrier island morphology. The automated approach introduced here extracts the shoreline and back-barrier shoreline based on elevation thresholds, and extracts the dune toe, dune crest and dune heel based on the average relative relief (RR) across multiple spatial scales of analysis. The multi-scale automated RR approach to extracting dune toe, dune crest, and dune heel based upon relative relief is more objective than traditional approaches because every pixel is analyzed across multiple computational scales and the identification of features is based on the calculated RR values. The RR approach out-performed contemporary approaches and represents a fast objective means to define important beach and dune features for predicting barrier island response to storms. The RR method also does not require that the dune toe, crest, or heel are spatially continuous, which is important because dune morphology is likely naturally variable alongshore.
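
    A sketch of a multi-scale relative-relief computation in the spirit of the approach above; the RR definition used here (local position within the min-max range of a moving window), the window sizes, and the synthetic DEM are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def relative_relief(dem, window):
    """Relative relief of each pixel within a square moving window."""
    zmax = maximum_filter(dem, size=window)
    zmin = minimum_filter(dem, size=window)
    rng = np.where(zmax > zmin, zmax - zmin, 1.0)   # avoid division by zero on flat windows
    return (dem - zmin) / rng

# Average RR over several window sizes (scales in pixels are illustrative).
dem = np.random.default_rng(1).random((200, 300))   # stand-in for a barrier-island DEM
rr_multi = np.mean([relative_relief(dem, w) for w in (3, 9, 27)], axis=0)
# High average RR flags crest-like pixels; low RR flags toe/heel-like pixels (thresholds would be site-specific).
print(rr_multi.shape, rr_multi.min(), rr_multi.max())
```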

  14. Assessment of flow regime alterations over a spectrum of temporal scales using wavelet-based approaches

    NASA Astrophysics Data System (ADS)

    Wu, Fu-Chun; Chang, Ching-Fu; Shiau, Jenq-Tzong

    2015-05-01

    The full range of natural flow regime is essential for sustaining the riverine ecosystems and biodiversity, yet there are still limited tools available for assessment of flow regime alterations over a spectrum of temporal scales. Wavelet analysis has proven useful for detecting hydrologic alterations at multiple scales via the wavelet power spectrum (WPS) series. The existing approach based on the global WPS (GWPS) ratio tends to be dominated by the rare high-power flows so that alterations of the more frequent low-power flows are often underrepresented. We devise a new approach based on individual deviations between WPS (DWPS) that are root-mean-squared to yield the global DWPS (GDWPS). We test these two approaches on the three reaches of the Feitsui Reservoir system (Taiwan) that are subjected to different classes of anthropogenic interventions. The GDWPS reveal unique features that are not detected with the GWPS ratios. We also segregate the effects of individual subflow components on the overall flow regime alterations using the subflow GDWPS. The results show that the daily hydropeaking waves below the reservoir not only intensified the flow oscillations at daily scale but most significantly eliminated subweekly flow variability. Alterations of flow regime were most severe below the diversion weir, where the residual hydropeaking resulted in a maximum impact at daily scale while the postdiversion null flows led to large hydrologic alterations over submonthly scales. The smallest impacts below the confluence reveal that the hydrologic alterations at scales longer than 2 days were substantially mitigated with the joining of the unregulated tributary flows, whereas the daily-scale hydrologic alteration was retained because of the hydropeaking inherited from the reservoir releases. The proposed DWPS approach unravels for the first time the details of flow regime alterations at these intermediate scales that are overridden by the low-frequency high-power flows when the long-term averaged GWPS are used.

  15. Dynamics of Intersubject Brain Networks during Anxious Anticipation

    PubMed Central

    Najafi, Mahshid; Kinnison, Joshua; Pessoa, Luiz

    2017-01-01

    How do large-scale brain networks reorganize during the waxing and waning of anxious anticipation? Here, threat was dynamically modulated during human functional MRI as two circles slowly meandered on the screen; if they touched, an unpleasant shock was delivered. We employed intersubject correlation analysis, which allowed the investigation of network-level functional connectivity across brains, and sought to determine how network connectivity changed during periods of approach (circles moving closer) and periods of retreat (circles moving apart). Analysis of positive connection weights revealed that dynamic threat altered connectivity within and between the salience, executive, and task-negative networks. For example, dynamic functional connectivity increased within the salience network during approach and decreased during retreat. The opposite pattern was found for the functional connectivity between the salience and task-negative networks: decreases during approach and increases during retreat. Functional connections between subcortical regions and the salience network also changed dynamically during approach and retreat periods. Subcortical regions exhibiting such changes included the putative periaqueductal gray, putative habenula, and putative bed nucleus of the stria terminalis. Additional analysis of negative functional connections revealed dynamic changes, too. For example, negative weights within the salience network decreased during approach and increased during retreat, opposite what was found for positive weights. Together, our findings unraveled dynamic features of functional connectivity of large-scale networks and subcortical regions across participants while threat levels varied continuously, and demonstrate the potential of characterizing emotional processing at the level of dynamic networks. PMID:29209184

  16. Characterization of Extracellular Proteins in Tomato Fruit using Lectin Affinity Chromatography and LC-MALDI-MS/MS analysis

    USDA-ARS?s Scientific Manuscript database

    The large-scale isolation and analysis of glycoproteins by lectin affinity chromatography coupled with mass spectrometry has become a powerful tool to monitor changes in the “glycoproteome” of mammalian cells. Thus far, however, this approach has not been used extensively for the analysis of plant g...

  17. Wavelet processing techniques for digital mammography

    NASA Astrophysics Data System (ADS)

    Laine, Andrew F.; Song, Shuwu

    1992-09-01

    This paper introduces a novel approach for accomplishing mammographic feature analysis through multiresolution representations. We show that efficient (nonredundant) representations may be identified from digital mammography and used to enhance specific mammographic features within a continuum of scale space. The multiresolution decomposition of wavelet transforms provides a natural hierarchy in which to embed an interactive paradigm for accomplishing scale space feature analysis. Similar to traditional coarse-to-fine matching strategies, the radiologist may first choose to look for coarse features (e.g., dominant mass) within low frequency levels of a wavelet transform and later examine finer features (e.g., microcalcifications) at higher frequency levels. In addition, features may be extracted by applying geometric constraints within each level of the transform. Choosing wavelets (or analyzing functions) that are simultaneously localized in both space and frequency results in a powerful methodology for image analysis. Multiresolution and orientation selectivity, known biological mechanisms in primate vision, are ingrained in wavelet representations and inspire the techniques presented in this paper. Our approach includes local analysis of complete multiscale representations. Mammograms are reconstructed from wavelet representations, enhanced by linear, exponential and constant weight functions through scale space. By improving the visualization of breast pathology we can improve the chances of early detection of breast cancers (improve quality) while requiring less time to evaluate mammograms for most patients (lower costs).

  18. Data series embedding and scale invariant statistics.

    PubMed

    Michieli, I; Medved, B; Ristov, S

    2010-06-01

    Data sequences acquired from bio-systems, such as human gait data, heart-rate interbeat data, or DNA sequences, exhibit complex dynamics that are frequently described by long memory or a power-law decay of the autocorrelation function. One way of characterizing these dynamics is through scale-invariant statistics or "fractal-like" behavior. Several methods have been proposed for quantifying the scale-invariant parameters of physiological signals. Among them, the most common are detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and, more recently within the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale-invariant statistics in a simple fashion. The procedure is applied to different stride-interval data sets from human gait measurement time series (Physio-Bank data library). The results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed, and scale-free trends over limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and varying noise content. The possibility of the method falsely detecting long-range dependence in artificially generated short-range-dependent series was also investigated. (c) 2009 Elsevier B.V. All rights reserved.
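
    The abstract does not spell out the embedding itself, so the following is only a generic time-delay (pseudo-phase-space) embedding sketch with made-up data and arbitrary dimension and lag; scale-invariant statistics would then be computed from the embedded point cloud.

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Map a 1-D series into dim-dimensional delay vectors with the given lag."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag: i * lag + n] for i in range(dim)])

# Toy example: embed a stride-interval-like series in a 5-D pseudo-phase space.
x = np.cumsum(np.random.default_rng(2).normal(size=2000))   # long-memory-like surrogate (random walk)
E = delay_embed(x, dim=5, lag=1)
print(E.shape)   # (1996, 5)
```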

  19. Failure analysis of fuel cell electrodes using three-dimensional multi-length scale X-ray computed tomography

    NASA Astrophysics Data System (ADS)

    Pokhrel, A.; El Hannach, M.; Orfino, F. P.; Dutta, M.; Kjeang, E.

    2016-10-01

    X-ray computed tomography (XCT), a non-destructive technique, is proposed for three-dimensional, multi-length scale characterization of complex failure modes in fuel cell electrodes. Comparative tomography data sets are acquired for a conditioned beginning of life (BOL) and a degraded end of life (EOL) membrane electrode assembly subjected to cathode degradation by voltage cycling. Micro length scale analysis shows a five-fold increase in crack size and 57% thickness reduction in the EOL cathode catalyst layer, indicating widespread action of carbon corrosion. Complementary nano length scale analysis shows a significant reduction in porosity, increased pore size, and dramatically reduced effective diffusivity within the remaining porous structure of the catalyst layer at EOL. Collapsing of the structure is evident from the combination of thinning and reduced porosity, as uniquely determined by the multi-length scale approach. Additionally, a novel image processing based technique developed for nano scale segregation of pore, ionomer, and Pt/C dominated voxels shows an increase in ionomer volume fraction, Pt/C agglomerates, and severe carbon corrosion at the catalyst layer/membrane interface at EOL. In summary, XCT based multi-length scale analysis enables detailed information needed for comprehensive understanding of the complex failure modes observed in fuel cell electrodes.

  20. Managing artisanal and small-scale mining in forest areas: perspectives from a poststructural political ecology.

    PubMed

    Hirons, Mark

    2011-01-01

    Artisanal and small-scale mining (ASM) is an activity intimately associated with social deprivation and environmental degradation, including deforestation. This paper examines ASM and deforestation using a broadly poststructural political ecology framework. Hegemonic discourses are shown to consistently influence policy direction, particularly in emerging approaches such as Corporate Social Responsibility and the Forest Stewardship Council. A review of alternative discourses reveals that the poststructural method is useful for critiquing the international policy arena but does not inform new approaches. Synthesis of the analysis leads to conclusions that echo a growing body of literature advocating for policies that are increasingly sensitive to local contexts, synergistic between actors at different scales, and integrated across sectors.

  1. Probabilistic, meso-scale flood loss modelling

    NASA Astrophysics Data System (ADS)

    Kreibich, Heidi; Botto, Anna; Schröter, Kai; Merz, Bruno

    2016-04-01

    Flood risk analyses are an important basis for decisions on flood risk management and adaptation. However, such analyses are associated with significant uncertainty, even more so if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention in recent years, they are still not standard practice for flood risk assessments, and even less so for flood loss modelling. The state of the art in flood loss modelling is still the use of simple, deterministic approaches like stage-damage functions. Novel probabilistic, multi-variate flood loss models have been developed and validated at the micro-scale using a data-mining approach, namely bagging decision trees (Merz et al. 2013). In this presentation we demonstrate and evaluate the upscaling of this approach to the meso-scale, namely on the basis of land-use units. The model is applied in 19 municipalities that were affected by the 2002 flood of the River Mulde in Saxony, Germany (Botto et al., submitted). The application of bagging-decision-tree-based loss models provides a probability distribution of the estimated loss per municipality. Validation is undertaken, on the one hand, via a comparison with eight deterministic loss models, including stage-damage functions as well as multi-variate models, and on the other hand against official loss data provided by the Saxon Relief Bank (SAB). The results show that uncertainties of loss estimation remain high. Thus, the significant advantage of this probabilistic flood loss estimation approach is that it inherently provides quantitative information about the uncertainty of the prediction. References: Merz, B., Kreibich, H., Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Botto, A., Kreibich, H., Merz, B., Schröter, K. (submitted): Probabilistic, multi-variable flood loss modelling on the meso-scale with BT-FLEMO. Risk Analysis.
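
    A minimal sketch in the spirit of a bagging-decision-tree loss model (not BT-FLEMO itself): synthetic predictors and losses, scikit-learn's BaggingRegressor with its default decision-tree base estimator, and the spread of per-tree predictions standing in for the probability distribution of estimated loss.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

# Synthetic training data: hypothetical predictors such as water depth, duration,
# building type, and precaution level; losses are made up.
rng = np.random.default_rng(3)
X = rng.uniform(0, 3, size=(500, 4))
y = 10_000 * X[:, 0] + 2_000 * X[:, 1] + rng.normal(0, 3_000, 500)

model = BaggingRegressor(n_estimators=200, random_state=0).fit(X, y)  # default base estimator is a decision tree
x_new = np.array([[1.5, 2.0, 1.0, 0.0]])
per_tree = np.array([tree.predict(x_new)[0] for tree in model.estimators_])
print(per_tree.mean(), np.percentile(per_tree, [5, 95]))   # point estimate plus an uncertainty band
```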

  2. An integrated approach to reconstructing genome-scale transcriptional regulatory networks

    DOE PAGES

    Imam, Saheed; Noguera, Daniel R.; Donohue, Timothy J.; ...

    2015-02-27

    Transcriptional regulatory networks (TRNs) program cells to dynamically alter their gene expression in response to changing internal or environmental conditions. In this study, we develop a novel workflow for generating large-scale TRN models that integrates comparative genomics data, global gene expression analyses, and intrinsic properties of transcription factors (TFs). An assessment of this workflow using benchmark datasets for the well-studied γ-proteobacterium Escherichia coli showed that it outperforms expression-based inference approaches, having a significantly larger area under the precision-recall curve. Further analysis indicated that this integrated workflow captures different aspects of the E. coli TRN than expression-based approaches, potentially making them highly complementary. We leveraged this new workflow and observations to build a large-scale TRN model for the α-Proteobacterium Rhodobacter sphaeroides that comprises 120 gene clusters, 1211 genes (including 93 TFs), 1858 predicted protein-DNA interactions and 76 DNA binding motifs. We found that ~67% of the predicted gene clusters in this TRN are enriched for functions ranging from photosynthesis or central carbon metabolism to environmental stress responses. We also found that members of many of the predicted gene clusters were consistent with prior knowledge in R. sphaeroides and/or other bacteria. Experimental validation of predictions from this R. sphaeroides TRN model showed that high precision and recall were also obtained for TFs involved in photosynthesis (PpsR), carbon metabolism (RSP_0489) and iron homeostasis (RSP_3341). In addition, this integrative approach enabled generation of TRNs with increased information content relative to R. sphaeroides TRN models built via other approaches. We also show how this approach can be used to simultaneously produce TRN models for each related organism used in the comparative genomics analysis. Our results highlight the advantages of integrating comparative genomics of closely related organisms with gene expression data to assemble large-scale TRN models with high-quality predictions.

  3. Order-crossing removal in Gabor order tracking by independent component analysis

    NASA Astrophysics Data System (ADS)

    Guo, Yu; Tan, Kok Kiong

    2009-08-01

    Order-crossing problems in Gabor order tracking (GOT) of rotating machinery often occur when noise due to power-frequency interference, local structure resonance, etc., is prominent in applications. They can render the analysis results and the waveform-reconstruction tasks in GOT inaccurate or even meaningless. An approach is proposed in this paper to address the order-crossing problem by independent component analysis (ICA). With the approach, accurate order analysis results can be obtained and the waveforms of the order components of interest can be reconstructed or extracted from the recorded noisy data series. In addition, the ambiguities (permutation and scaling) of ICA results are also solved with the approach. The approach is amenable to applications in condition monitoring and fault diagnosis of rotating machinery. The evaluation of the approach is presented in detail based on simulations and an experiment on a rotor test rig. The results obtained using the proposed approach are compared with those obtained using the standard GOT. The comparison shows that the presented approach is more effective to solve order-crossing problems in GOT.
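
    A toy scikit-learn FastICA sketch of the separation step only (the chirp-like "order" components, the mixing matrix, and the sampling are invented; the paper additionally resolves the permutation and scaling ambiguities of ICA, which this sketch does not).

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic "order" components whose instantaneous frequencies cross,
# mixed into two sensor channels.
fs = 2000
t = np.arange(0, 2, 1 / fs)
s1 = np.sin(2 * np.pi * (50 * t + 20 * t**2))    # rising-frequency component
s2 = np.sin(2 * np.pi * (120 * t - 15 * t**2))   # falling-frequency component
S = np.c_[s1, s2]
X = S @ np.array([[1.0, 0.6], [0.4, 1.0]]).T      # hypothetical mixing

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                      # recovered components, up to permutation/scaling
print(S_est.shape)
```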

  4. Modeling and Simulation of Optimal Resource Management during the Diurnal Cycle in Emiliania huxleyi by Genome-Scale Reconstruction and an Extended Flux Balance Analysis Approach.

    PubMed

    Knies, David; Wittmüß, Philipp; Appel, Sebastian; Sawodny, Oliver; Ederer, Michael; Feuer, Ronny

    2015-10-28

    The coccolithophorid unicellular alga Emiliania huxleyi is known to form large blooms, which have a strong effect on the marine carbon cycle. As a photosynthetic organism, it is subjected to a circadian rhythm due to the changing light conditions throughout the day. For a better understanding of the metabolic processes under these periodically-changing environmental conditions, a genome-scale model based on a genome reconstruction of the E. huxleyi strain CCMP 1516 was created. It comprises 410 reactions and 363 metabolites. Biomass composition is variable based on the differentiation into functional biomass components and storage metabolites. The model is analyzed with a flux balance analysis approach called diurnal flux balance analysis (diuFBA) that was designed for organisms with a circadian rhythm. It allows storage metabolites to accumulate or be consumed over the diurnal cycle, while keeping the structure of a classical FBA problem. A feature of this approach is that the production and consumption of storage metabolites is not defined externally via the biomass composition, but the result of optimal resource management adapted to the diurnally-changing environmental conditions. The model in combination with this approach is able to simulate the variable biomass composition during the diurnal cycle in proximity to literature data.
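
    A bare-bones classical FBA sketch on a toy 2-metabolite, 4-reaction network (not the 410-reaction E. huxleyi model): maximize a "biomass" flux subject to the steady-state constraint S v = 0 and flux bounds. diuFBA extends this core by letting designated storage metabolites accumulate or deplete across the diurnal time steps.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix: v0 imports A; v1 converts A -> B; v2 exports B;
# v3 consumes A + B as a stand-in for biomass formation.
S = np.array([[ 1, -1,  0, -1],    # metabolite A
              [ 0,  1, -1, -1]])   # metabolite B
bounds = [(0, 10), (0, 10), (0, 10), (0, 10)]
c = np.array([0, 0, 0, -1])        # linprog minimizes, so negate the biomass objective

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print(res.x, -res.fun)             # optimal flux distribution and biomass flux
```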

  5. Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines

    PubMed Central

    Mikut, Ralf

    2017-01-01

    Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprised of seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. PMID:29095927

  6. Brownian motion or Lévy walk? Stepping towards an extended statistical mechanics for animal locomotion.

    PubMed

    Gautestad, Arild O

    2012-09-07

    Animals moving under the influence of spatio-temporal scaling and long-term memory generate a kind of space-use pattern that has proved difficult to model within a coherent theoretical framework. An extended kind of statistical mechanics is needed, accounting for both the effects of spatial memory and scale-free space use, and put into a context of ecological conditions. Simulations illustrating the distinction between scale-specific and scale-free locomotion are presented. The results show how observational scale (time lag between relocations of an individual) may critically influence the interpretation of the underlying process. In this respect, a novel protocol is proposed as a method to distinguish between some main movement classes. For example, the 'power law in disguise' paradox-from a composite Brownian motion consisting of a superposition of independent movement processes at different scales-may be resolved by shifting the focus from pattern analysis at one particular temporal resolution towards a more process-oriented approach involving several scales of observation. A more explicit consideration of system complexity within a statistical mechanical framework, supplementing the more traditional mechanistic modelling approach, is advocated.

  7. Scale development on consumer behavior toward counterfeit drugs in a developing country: a quantitative study exploiting the tools of an evolving paradigm

    PubMed Central

    2013-01-01

    Background: Although desperate need and drug counterfeiting are linked in developing countries, little research has been carried out to address this link, and there is a lack of proper tools and methodology. This study addresses the need for a new methodological approach by developing a scale to aid in understanding the demand side of drug counterfeiting in a developing country. Methods: The study presents a quantitative, non-representative survey conducted in Sudan. A face-to-face structured interview survey methodology was employed to collect the data from the general population (people in the street) in two phases: pilot (n = 100) and final survey (n = 1003). Data were analyzed by examining means, variances, squared multiple correlations, item-to-total correlations, and the results of an exploratory factor analysis and a confirmatory factor analysis. Results: As an approach to scale purification, internal consistency was examined and improved. The scale was reduced from 44 to 41 items and Cronbach’s alpha improved from 0.818 to 0.862. Finally, scale items were assessed. The result was an eleven-factor solution. Convergent and discriminant validity were demonstrated. Conclusion: The results of this study indicate that the “Consumer Behavior Toward Counterfeit Drugs Scale” is a valid, reliable measure with a solid theoretical base. Ultimately, the study offers public health policymakers a valid measurement tool and, consequently, a new methodological approach with which to build a better understanding of the demand side of counterfeit drugs and to develop more effective strategies to combat the problem. PMID:24020730
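
    A small sketch of the internal-consistency statistic used in the purification step (Cronbach's alpha); the item data below are synthetic, not the survey's, and the actual purification procedure (dropping items by item-to-total correlation) is only hinted at in the final comment.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Toy check with correlated Likert-style items.
rng = np.random.default_rng(4)
latent = rng.normal(size=(300, 1))
items = np.clip(np.round(3 + latent + rng.normal(0, 0.8, size=(300, 5))), 1, 5)
print(cronbach_alpha(items))   # dropping weak items would nudge this upward, as in the purification step
```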

  8. Interdisciplinary evaluation of dysphagia: clinical swallowing evaluation and videoendoscopy of swallowing.

    PubMed

    Sordi, Marina de; Mourão, Lucia Figueiredo; Silva, Ariovaldo Armando da; Flosi, Luciana Claudia Leite

    2009-01-01

    Patients with dysphagia have impairments in many aspects, and an interdisciplinary approach is fundamental to define diagnosis and treatment. A joint approach to the clinical and videoendoscopic evaluation is paramount. Aim: to study the correlation between the clinical assessment (ACD) and the videoendoscopic (VED) assessment of swallowing by classifying the degree of severity and through qualitative/descriptive analyses of the procedures. Design: cross-sectional, descriptive, and comparative, carried out from March to December 2006 at the otolaryngology/dysphagia ward of a hospital in the countryside of São Paulo. Thirty dysphagic patients with different disorders were assessed by ACD and VED. The data were classified by means of severity scales and qualitative/descriptive analysis. Results: the correlation between the ACD and VED severity scales showed a statistically significant low agreement (kappa = 0.4; p = 0.006). The correlation between the qualitative/descriptive analyses showed an excellent and statistically significant agreement (kappa = 0.962; p < 0.001) for the entire sample. Conclusion: the low agreement between the severity scales points to the need to perform both procedures, reinforcing VED as a feasible procedure. The excellent agreement in the descriptive qualitative analysis reinforces the need to understand swallowing as a process.

  9. Factors Influencing the Sahelian Paradox at the Local Watershed Scale: Causal Inference Insights

    NASA Astrophysics Data System (ADS)

    Van Gordon, M.; Groenke, A.; Larsen, L.

    2017-12-01

    While the existence of paradoxical rainfall-runoff and rainfall-groundwater correlations is well established in the West African Sahel, the hydrologic mechanisms involved are poorly understood. In pursuit of mechanistic explanations, we perform a causal inference analysis on hydrologic variables in three watersheds in Benin and Niger. Using an ensemble of techniques, we compute the strength of relationships between observational soil moisture, runoff, precipitation, and temperature data at seasonal and event timescales. Performing analysis over a range of time lags allows dominant time scales to emerge from the relationships between variables. By determining the time scales of hydrologic connectivity over vertical and lateral space, we show differences in the importance of overland and subsurface flow over the course of the rainy season and between watersheds. While previous work on the paradoxical hydrologic behavior in the Sahel focuses on surface processes and infiltration, our results point toward the importance of subsurface flow to rainfall-runoff relationships in these watersheds. The hypotheses generated from our ensemble approach suggest that subsequent explorations of mechanistic hydrologic processes in the region include subsurface flow. Further, this work highlights how an ensemble approach to causal analysis can reveal nuanced relationships between variables even in poorly understood hydrologic systems.

  10. Cellular automata rule characterization and classification using texture descriptors

    NASA Astrophysics Data System (ADS)

    Machicao, Jeaneth; Ribas, Lucas C.; Scabini, Leonardo F. S.; Bruno, Odermir M.

    2018-05-01

    Cellular automata (CA) spatio-temporal patterns have attracted the attention of many researchers since they can exhibit emergent behavior resulting from the dynamics of each individual cell. In this manuscript, we propose a texture image analysis approach to characterize and classify CA rules. The proposed method converts the CA spatio-temporal pattern into a gray-scale image, obtained by forming a binary number from the 8-connected neighborhood of each cell of the CA spatio-temporal pattern. We demonstrate that this technique enhances the characterization of CA rules and allows different texture image analysis algorithms to be used. Various texture descriptors were thus evaluated in a supervised training approach aimed at characterizing the CA's global evolution. Our results show the efficiency of the proposed method for the classification of elementary CA (ECAs), reaching a maximum accuracy rate of 99.57% according to the Li-Packard scheme (6 classes) and 94.36% for the classification of the 88-rule scheme. Moreover, within the image analysis context, the method performed better when the binary states were transformed to gray-scale.
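
    A minimal sketch of the encoding step described above: each cell of a binary elementary-CA space-time diagram is replaced by the integer formed from its 8-connected neighborhood, yielding a gray-scale image. The rule-30 generator, periodic boundaries, and names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def eca_diagram(rule=30, width=128, steps=128):
    """Space-time diagram of an elementary cellular automaton (periodic boundaries)."""
    table = [(rule >> i) & 1 for i in range(8)]   # next state for neighborhood value 0..7
    row = np.zeros(width, dtype=np.uint8)
    row[width // 2] = 1                            # single seed cell
    rows = [row]
    for _ in range(steps - 1):
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = np.array([table[4 * l + 2 * c + r] for l, c, r in zip(left, row, right)],
                       dtype=np.uint8)
        rows.append(row)
    return np.array(rows)

def to_grayscale(pattern):
    """Encode each cell's 8-connected neighborhood as an 8-bit gray value."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    gray = np.zeros_like(pattern, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = np.roll(np.roll(pattern, dy, axis=0), dx, axis=1)
        gray |= (neighbor << bit).astype(np.uint8)
    return gray

image = to_grayscale(eca_diagram(rule=30))
print(image.shape, image.min(), image.max())
```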

  11. Turbulent compressible fluid: Renormalization group analysis, scaling regimes, and anomalous scaling of advected scalar fields

    NASA Astrophysics Data System (ADS)

    Antonov, N. V.; Gulitskiy, N. M.; Kostenko, M. M.; Lučivjanský, T.

    2017-03-01

    We study a model of fully developed turbulence of a compressible fluid, based on the stochastic Navier-Stokes equation, by means of the field-theoretic renormalization group. In this approach, scaling properties are related to the fixed points of the renormalization group equations. Previous analysis of this model near the real-world space dimension 3 identified a scaling regime [N. V. Antonov et al., Theor. Math. Phys. 110, 305 (1997), 10.1007/BF02630456]. The aim of the present paper is to explore the existence of additional regimes, which could not be found using the direct perturbative approach of the previous work, and to analyze the crossover between different regimes. It seems possible to determine them near the special value of space dimension 4 in the framework of double y and ɛ expansion, where y is the exponent associated with the random force and ɛ = 4 - d is the deviation from the space dimension 4. Our calculations show that there exists an additional fixed point that governs scaling behavior. Turbulent advection of a passive scalar (density) field by this velocity ensemble is considered as well. We demonstrate that various correlation functions of the scalar field exhibit anomalous scaling behavior in the inertial-convective range. The corresponding anomalous exponents, identified as scaling dimensions of certain composite fields, can be systematically calculated as a series in y and ɛ. All calculations are performed in the leading one-loop approximation.

  12. Meta-Analysis of Scale Reliability Using Latent Variable Modeling

    ERIC Educational Resources Information Center

    Raykov, Tenko; Marcoulides, George A.

    2013-01-01

    A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…

  13. Collaboration, negotiation, and coalescence for interagency-collaborative teams to scale-up evidence-based practice.

    PubMed

    Aarons, Gregory A; Fettes, Danielle L; Hurlburt, Michael S; Palinkas, Lawrence A; Gunderson, Lara; Willging, Cathleen E; Chaffin, Mark J

    2014-01-01

    Implementation and scale-up of evidence-based practices (EBPs) is often portrayed as involving multiple stakeholders collaborating harmoniously in the service of a shared vision. In practice, however, collaboration is a more complex process that may involve shared and competing interests and agendas, and negotiation. The present study examined the scale-up of an EBP across an entire service system using the Interagency Collaborative Team approach. Participants were key stakeholders in a large-scale county-wide implementation of an EBP to reduce child neglect, SafeCare. Semistructured interviews and/or focus groups were conducted with 54 individuals representing diverse constituents in the service system, followed by an iterative approach to coding and analysis of transcripts. The study was conceptualized using the Exploration, Preparation, Implementation, and Sustainment framework. Although community stakeholders eventually coalesced around implementation of SafeCare, several challenges affected the implementation process. These challenges included differing organizational cultures, strategies, and approaches to collaboration; competing priorities across levels of leadership; power struggles; and role ambiguity. Each of the factors identified influenced how stakeholders approached the EBP implementation process. System-wide scale-up of EBPs involves multiple stakeholders operating in a nexus of differing agendas, priorities, leadership styles, and negotiation strategies. The term collaboration may oversimplify the multifaceted nature of the scale-up process. Implementation efforts should openly acknowledge and consider this nexus when individual stakeholders and organizations enter into EBP implementation through collaborative processes.

  14. Collaboration, Negotiation, and Coalescence for Interagency-Collaborative Teams to Scale-up Evidence-Based Practice

    PubMed Central

    Aarons, Gregory A.; Fettes, Danielle; Hurlburt, Michael; Palinkas, Lawrence; Gunderson, Lara; Willging, Cathleen; Chaffin, Mark

    2014-01-01

    Objective Implementation and scale-up of evidence-based practices (EBPs) is often portrayed as involving multiple stakeholders collaborating harmoniously in the service of a shared vision. In practice, however, collaboration is a more complex process that may involve shared and competing interests and agendas, and negotiation. The present study examined the scale-up of an EBP across an entire service system using the Interagency Collaborative Team (ICT) approach. Methods Participants were key stakeholders in a large-scale county-wide implementation of an EBP to reduce child neglect, SafeCare®. Semi-structured interviews and/or focus groups were conducted with 54 individuals representing diverse constituents in the service system, followed by an iterative approach to coding and analysis of transcripts. The study was conceptualized using the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework. Results Although community stakeholders eventually coalesced around implementation of SafeCare, several challenges affected the implementation process. These challenges included differing organizational cultures, strategies, and approaches to collaboration, competing priorities across levels of leadership, power struggles, and role ambiguity. Each of the factors identified influenced how stakeholders approached the EBP implementation process. Conclusions System-wide scale-up of EBPs involves multiple stakeholders operating in a nexus of differing agendas, priorities, leadership styles, and negotiation strategies. The term collaboration may oversimplify the multifaceted nature of the scale-up process. Implementation efforts should openly acknowledge and consider this nexus when individual stakeholders and organizations enter into EBP implementation through collaborative processes. PMID:24611580

  15. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    NASA Astrophysics Data System (ADS)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights about the representativeness of the estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
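
    For reference, the traditional Theis approach mentioned above evaluates drawdown as s = (Q / 4πT) · W(u) with u = r²S / (4Tt), where W is the exponential-integral well function. A minimal sketch using SciPy follows; the parameter values are illustrative only, not the Chalk-aquifer parameters from the study.

```python
import numpy as np
from scipy.special import exp1  # W(u) = exp1(u), the Theis well function

def theis_drawdown(Q, T, S, r, t):
    """Drawdown s(r, t) for a fully penetrating well in a confined aquifer.

    Q : pumping rate [m^3/s], T : transmissivity [m^2/s],
    S : storativity [-],      r : radial distance [m], t : time [s].
    """
    u = r**2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

# Illustrative values only.
t = np.array([600.0, 3600.0, 86400.0])          # 10 min, 1 h, 1 day
print(theis_drawdown(Q=0.02, T=5e-4, S=1e-4, r=50.0, t=t).round(2))
```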

  16. User group attitudes toward forest management treatments on the Shawnee National Forest: application of a photo-evaluation technique

    Treesearch

    Jonathan M. Cohen; Jean C. Mangun; Mae A. Davenport; Andrew D. Carver

    2008-01-01

    Diverse public opinions, competing management goals, and polarized interest groups combine with problems of scale to create a complex management arena for managers in the Central Hardwood Forest region. A mixed-methods approach that incorporated quantitative analysis of data from a photo evaluation-attitude scale survey instrument was used to assess attitudes toward...

  17. Asymptotic Expansion Homogenization for Multiscale Nuclear Fuel Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hales, J. D.; Tonks, M. R.; Chockalingam, K.

    2015-03-01

    Engineering scale nuclear fuel performance simulations can benefit by utilizing high-fidelity models running at a lower length scale. Lower length-scale models provide a detailed view of the material behavior that is used to determine the average material response at the macroscale. These lower length-scale calculations may provide insight into material behavior where experimental data is sparse or nonexistent. This multiscale approach is especially useful in the nuclear field, since irradiation experiments are difficult and expensive to conduct. The lower length-scale models complement the experiments by influencing the types of experiments required and by reducing the total number of experiments needed. This multiscale modeling approach is a central motivation in the development of the BISON-MARMOT fuel performance codes at Idaho National Laboratory. These codes seek to provide more accurate and predictive solutions for nuclear fuel behavior. One critical aspect of multiscale modeling is the ability to extract the relevant information from the lower length-scale simulations. One approach, the asymptotic expansion homogenization (AEH) technique, has proven to be an effective method for determining homogenized material parameters. The AEH technique prescribes a system of equations to solve at the microscale that are used to compute homogenized material constants for use at the engineering scale. In this work, we employ AEH to explore the effect of evolving microstructural thermal conductivity and elastic constants on nuclear fuel performance. We show that the AEH approach fits cleanly into the BISON and MARMOT codes and provides a natural, multidimensional homogenization capability.

  18. Consistency Analysis of Genome-Scale Models of Bacterial Metabolism: A Metamodel Approach

    PubMed Central

    Ponce-de-Leon, Miguel; Calle-Espinosa, Jorge; Peretó, Juli; Montero, Francisco

    2015-01-01

    Genome-scale metabolic models usually contain inconsistencies that manifest as blocked reactions and gap metabolites. With the purpose of detecting recurrent inconsistencies in metabolic models, a large-scale analysis was performed using a previously published dataset of 130 genome-scale models. The results showed that a large number of reactions (~22%) are blocked in all the models where they are present. To unravel the nature of such inconsistencies, a metamodel was constructed by joining the 130 models in a single network. This metamodel was manually curated using the unconnected modules approach, and then, it was used as a reference network to perform a gap-filling on each individual genome-scale model. Finally, a set of 36 models that had not been considered during the construction of the metamodel was used, as a proof of concept, to extend the metamodel with new biochemical information, and to assess its impact on gap-filling results. The analysis performed on the metamodel allowed us to conclude: 1) the recurrent inconsistencies found in the models were already present in the metabolic database used during the reconstruction process; 2) the presence of inconsistencies in a metabolic database can be propagated to the reconstructed models; 3) there are reactions not manifested as blocked which are active as a consequence of some classes of artifacts; and 4) the results of an automatic gap-filling are highly dependent on the consistency and completeness of the metamodel or metabolic database used as the reference network. In conclusion, the consistency analysis should be applied to metabolic databases in order to detect and fill gaps as well as to detect and remove artifacts and redundant information. PMID:26629901
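
    A minimal sketch of detecting blocked reactions in a stoichiometric model: a reaction is blocked if its flux cannot deviate from zero under the steady-state constraint S·v = 0 and the flux bounds. The tiny toy network and names below are illustrative assumptions, not the published metamodel or its software.

```python
import numpy as np
from scipy.optimize import linprog

def blocked_reactions(S, lb, ub, tol=1e-9):
    """Indices of reactions whose flux is forced to zero by S v = 0 and the bounds."""
    n = S.shape[1]
    blocked = []
    for j in range(n):
        extremes = []
        for sense in (+1.0, -1.0):            # maximize and minimize v_j
            c = np.zeros(n)
            c[j] = -sense                      # linprog minimizes, so negate to maximize
            res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                          bounds=list(zip(lb, ub)), method="highs")
            extremes.append(sense * -res.fun if res.success else 0.0)
        if max(abs(e) for e in extremes) < tol:
            blocked.append(j)
    return blocked

# Toy network: uptake -> A -> B -> export, plus a dead-end reaction B -> D (no outlet for D).
S = np.array([[ 1, -1,  0,  0],   # metabolite A
              [ 0,  1, -1, -1],   # metabolite B
              [ 0,  0,  0,  1]])  # metabolite D
lb = [0, 0, 0, 0]
ub = [10, 10, 10, 10]
print(blocked_reactions(S, lb, ub))   # -> [3]; the dead-end reaction is blocked
```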

  19. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael

    Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure; as long as a single VM is running, it can make progress, whereas the whole MPI analysis job fails as soon as one node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  20. Design and Analysis of a Formation Flying System for the Cross-Scale Mission Concept

    NASA Technical Reports Server (NTRS)

    Cornara, Stefania; Bastante, Juan C.; Jubineau, Franck

    2007-01-01

    The ESA-funded "Cross-Scale" Technology Reference Study has been carried out with the primary aim of identifying and analysing a mission concept for the investigation of fundamental space plasma processes that involve dynamical non-linear coupling across multiple length scales. To fulfill this scientific mission goal, a constellation of spacecraft is required, flying in loose formations around the Earth and sampling three characteristic plasma scale distances simultaneously, with at least two satellites per scale: electron kinetic (10 km), ion kinetic (100-2000 km), magnetospheric fluid (3000-15000 km). The key Cross-Scale mission drivers identified are the number of S/C, the space segment configuration, the reference orbit design, the transfer and deployment strategy, the inter-satellite localization and synchronization process and the mission operations. This paper presents a comprehensive overview of the mission design and analysis for the Cross-Scale concept and outlines a technically feasible mission architecture for a multi-dimensional investigation of space plasma phenomena. The main effort has been devoted to applying a thorough mission-level trade-off approach and to accomplishing an exhaustive analysis, so as to allow the characterization of a wide range of mission requirements and design solutions.

  1. Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

    NASA Technical Reports Server (NTRS)

    Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

    2014-01-01

    This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span, or pre-twist or wash-out, is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.

  2. The use of single-date MODIS imagery for estimating large-scale urban impervious surface fraction with spectral mixture analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Wu, Changshan

    2013-12-01

    Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
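
    A minimal sketch of the two steps described above, under simplifying assumptions: endmember signatures are estimated by ordinary least squares from sample pixels with known abundances (the LSS step), and a new pixel is then unmixed with non-negative least squares and normalized so fractions sum to one. The synthetic data, names, and values are illustrative, not the study's workflow.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
n_bands, n_end, n_samples = 7, 3, 500           # e.g., MODIS-like bands, 3 endmembers

# Synthetic "truth": endmember spectra and sample abundances that sum to one.
E_true = rng.uniform(0.05, 0.6, size=(n_bands, n_end))
A = rng.dirichlet(np.ones(n_end), size=n_samples)                    # known abundances
R = A @ E_true.T + rng.normal(0, 0.005, size=(n_samples, n_bands))   # sample reflectances

# Step 1 (LSS): solve R ~= A E^T for the endmember matrix E by least squares.
E_est = np.linalg.lstsq(A, R, rcond=None)[0].T                       # shape (n_bands, n_end)

# Step 2 (SMA): unmix a new pixel with a non-negativity constraint, then normalize
# so fractions sum to one (an approximation to the fully constrained solution).
pixel = 0.5 * E_true[:, 0] + 0.3 * E_true[:, 1] + 0.2 * E_true[:, 2]
fractions, _ = nnls(E_est, pixel)
print((fractions / fractions.sum()).round(3))                        # ~ [0.5, 0.3, 0.2]
```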

  3. Scaling analysis and SE simulation of the tilted cylinder-interface capillary interaction

    NASA Astrophysics Data System (ADS)

    Gao, S. Q.; Zhang, X. Y.; Zhou, Y. H.

    2018-06-01

    The capillary interaction induced by a tilted cylinder and an interface is the basic configuration of many complex systems, such as clustering of micro-pillar arrays, super-hydrophobicity of hairy surfaces, water-walking insects, and fiber aggregation. We systematically analyzed the scaling laws of tilt angle, contact angle, and cylinder radius on the contact line shape by SE simulation and experiment. An in-depth analysis of the characteristic parameters (shift, stretch and distortion) of the deformed contact lines reveals the self-similar shape of the contact line. A general capillary force scaling law, based on a straightforward ellipse approximation, is then proposed that captures all the simulated and experimental data.

  4. Effect of inventory method on niche models: random versus systematic error

    Treesearch

    Heather E. Lintz; Andrew N. Gray; Bruce McCune

    2013-01-01

    Data from large-scale biological inventories are essential for understanding and managing Earth's ecosystems. The Forest Inventory and Analysis Program (FIA) of the U.S. Forest Service is the largest biological inventory in North America; however, the FIA inventory recently changed from an amalgam of different approaches to a nationally-standardized approach in...

  5. Classical, Generalizability, and Multifaceted Rasch Detection of Interrater Variability in Large, Sparse Data Sets.

    ERIC Educational Resources Information Center

    MacMillan, Peter D.

    2000-01-01

    Compared classical test theory (CTT), generalizability theory (GT), and multifaceted Rasch model (MFRM) approaches to detecting and correcting for rater variability using responses of 4,930 high school students graded by 3 raters on 9 scales. The MFRM approach identified far more raters as different than did the CTT analysis. GT and Rasch…

  6. Multiple constraint analysis of regional land-surface carbon flux

    Treesearch

    D.P. Turner; M. Göckede; B.E. Law; W.D. Ritts; W.B. Cohen; Z. Yang; T. Hudiburg; R. Kennedy; M. Duane

    2011-01-01

    We applied and compared bottom-up (process model-based) and top-down (atmospheric inversion-based) scaling approaches to evaluate the spatial and temporal patterns of net ecosystem production (NEP) over a 2.5 × 10⁵ km² area (the state of Oregon) in the western United States. Both approaches indicated a carbon sink over this...

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.

    Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration for how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Finally, such approaches, however, should be accompanied by greater recognition of the meta agency of model users and the need for more critical evaluation of model selection and application.

  8. Liquid state isomorphism, Rosenfeld-Tarazona temperature scaling, and Riemannian thermodynamic geometry.

    PubMed

    Mausbach, Peter; Köster, Andreas; Vrabec, Jadran

    2018-05-01

    Aspects of isomorph theory, Rosenfeld-Tarazona temperature scaling, and thermodynamic geometry are comparatively discussed on the basis of the Lennard-Jones potential. The first two approaches approximate the high-density fluid state well when the repulsive interparticle interactions become dominant, which is typically the case close to the freezing line. However, previous studies of Rosenfeld-Tarazona scaling for the isochoric heat capacity and its relation to isomorph theory reveal deviations for the temperature dependence. It turns out that a definition of a state region in which repulsive interactions dominate is required for achieving consistent results. The Riemannian thermodynamic scalar curvature R allows for such a classification, indicating predominantly repulsive interactions by R>0. An analysis of the isomorphic character of the freezing line and the validity of Rosenfeld-Tarazona temperature scaling show that these approaches are consistent only in a small state region.

  9. Scale-invariant entropy-based theory for dynamic ordering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mahulikar, Shripad P. (Department of Aerospace Engineering, Indian Institute of Technology Bombay, Mumbai 400076); Kumari, Priti

    2014-09-01

    Dynamically Ordered self-organized dissipative structure exists in various forms and at different scales. This investigation first introduces the concept of an isolated embedding system, which embeds an open system, e.g., dissipative structure and its mass and/or energy exchange with its surroundings. Thereafter, scale-invariant theoretical analysis is presented using thermodynamic principles for Order creation, existence, and destruction. The sustainability criterion for Order existence based on its structured mass and/or energy interactions with the surroundings is mathematically defined. This criterion forms the basis for the interrelationship of physical parameters during sustained existence of dynamic Order. It is shown that the sufficient condition for dynamic Order existence is approached if its sustainability criterion is met, i.e., its destruction path is blocked. This scale-invariant approach has the potential to unify the physical understanding of universal dynamic ordering based on entropy considerations.

  10. Development of the Attributed Dignity Scale.

    PubMed

    Jacelon, Cynthia S; Dixon, Jane; Knafl, Kathleen A

    2009-07-01

    A sequential, multi-method approach to instrument development beginning with concept analysis, followed by (a) item generation from qualitative data, (b) review of items by expert and lay person panels, (c) cognitive appraisal interviews, (d) pilot testing, and (e) evaluating construct validity was used to develop a measure of attributed dignity in older adults. The resulting positively scored, 23-item scale has three dimensions: Self-Value, Behavioral Respect-Self, and Behavioral Respect-Others. Item-total correlations in the pilot study ranged from 0.39 to 0.85. Correlations between the Attributed Dignity Scale (ADS) and both Rosenberg's Self-Esteem Scale (0.17) and Crowne and Marlowe's Social Desirability Scale (0.36) were modest and in the expected direction, indicating attributed dignity is a related but independent concept. Next steps include testing the ADS with a larger sample to complete factor analysis, test-retest stability, and further study of the relationships between attributed dignity and other concepts.

  11. Equation-free multiscale computation: algorithms and applications.

    PubMed

    Kevrekidis, Ioannis G; Samaey, Giovanni

    2009-01-01

    In traditional physicochemical modeling, one derives evolution equations at the (macroscopic, coarse) scale of interest; these are used to perform a variety of tasks (simulation, bifurcation analysis, optimization) using an arsenal of analytical and numerical techniques. For many complex systems, however, although one observes evolution at a macroscopic scale of interest, accurate models are only given at a more detailed (fine-scale, microscopic) level of description (e.g., lattice Boltzmann, kinetic Monte Carlo, molecular dynamics). Here, we review a framework for computer-aided multiscale analysis, which enables macroscopic computational tasks (over extended spatiotemporal scales) using only appropriately initialized microscopic simulation on short time and length scales. The methodology bypasses the derivation of macroscopic evolution equations when these equations conceptually exist but are not available in closed form-hence the term equation-free. We selectively discuss basic algorithms and underlying principles and illustrate the approach through representative applications. We also discuss potential difficulties and outline areas for future research.
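
    A toy sketch of the equation-free idea described above: coarse projective integration estimates the macroscopic time derivative from short bursts of a fine-scale simulator and then extrapolates, without ever writing down the macroscopic equation. Here the "microscopic" simulator is a stand-in (a noisy ensemble relaxing toward a fixed point); all names and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def micro_burst(x0, n_particles=2000, dt=0.01, n_steps=20):
    """Stand-in fine-scale simulator: an ensemble relaxing toward x = 1 with noise.
    Returns the coarse variable (ensemble mean) at the start and end of the burst."""
    x = np.full(n_particles, x0) + rng.normal(0, 0.05, n_particles)   # lifting
    for _ in range(n_steps):
        x += dt * (1.0 - x) + np.sqrt(dt) * rng.normal(0, 0.1, n_particles)
    return x0, x.mean(), n_steps * dt                                  # restriction

def coarse_projective_integration(x0, projective_dt=0.5, n_outer=10):
    """Alternate short micro bursts with large extrapolation (projective) steps."""
    x, trajectory = x0, [x0]
    for _ in range(n_outer):
        start, end, burst_time = micro_burst(x)
        dxdt = (end - start) / burst_time        # estimated coarse time derivative
        x = end + projective_dt * dxdt           # projective (extrapolation) step
        trajectory.append(x)
    return trajectory

print(np.round(coarse_projective_integration(x0=0.0), 3))  # relaxes toward ~1.0
```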

  12. Statistical analysis of microgravity experiment performance using the degrees of success scale

    NASA Technical Reports Server (NTRS)

    Upshaw, Bernadette; Liou, Ying-Hsin Andrew; Morilak, Daniel P.

    1994-01-01

    This paper describes an approach to identify factors that significantly influence microgravity experiment performance. Investigators developed the 'degrees of success' scale to provide a numerical representation of success. A degree of success was assigned to 293 microgravity experiments. Experiment information including the degree of success rankings and factors for analysis was compiled into a database. Through an analysis of variance, nine significant factors in microgravity experiment performance were identified. The frequencies of these factors are presented along with the average degree of success at each level. A preliminary discussion of the relationship between the significant factors and the degree of success is presented.

  13. Scale and the representation of human agency in the modeling of agroecosystems

    DOE PAGES

    Preston, Benjamin L.; King, Anthony W.; Ernst, Kathleen M.; ...

    2015-07-17

    Human agency is an essential determinant of the dynamics of agroecosystems. However, the manner in which agency is represented within different approaches to agroecosystem modeling is largely contingent on the scales of analysis and the conceptualization of the system of interest. While appropriate at times, narrow conceptualizations of agroecosystems can preclude consideration for how agency manifests at different scales, thereby marginalizing processes, feedbacks, and constraints that would otherwise affect model results. Modifications to the existing modeling toolkit may therefore enable more holistic representations of human agency. Model integration can assist with the development of multi-scale agroecosystem modeling frameworks that capture different aspects of agency. In addition, expanding the use of socioeconomic scenarios and stakeholder participation can assist in explicitly defining context-dependent elements of scale and agency. Finally, such approaches, however, should be accompanied by greater recognition of the meta agency of model users and the need for more critical evaluation of model selection and application.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siranosian, Antranik Antonio; Schembri, Philip Edward; Luscher, Darby Jon

    The Los Alamos National Laboratory's Weapon Systems Engineering division's Advanced Engineering Analysis group employs material constitutive models of composites for use in simulations of components and assemblies of interest. Experimental characterization, modeling and prediction of the macro-scale (i.e. continuum) behaviors of these composite materials is generally difficult because they exhibit nonlinear behaviors on the meso- (e.g. micro-) and macro-scales. Furthermore, it can be difficult to measure and model the mechanical responses of the individual constituents and constituent interactions in the composites of interest. Current efforts to model such composite materials rely on semi-empirical models in which meso-scale properties are inferred from continuum level testing and modeling. The proposed approach involves removing the difficulties of interrogating and characterizing micro-scale behaviors by scaling-up the problem to work with macro-scale composites, with the intention of developing testing and modeling capabilities that will be applicable to the mesoscale. This approach assumes that the physical mechanisms governing the responses of the composites on the meso-scale are reproducible on the macro-scale. Working on the macro-scale simplifies the quantification of composite constituents and constituent interactions so that efforts can be focused on developing material models and the testing techniques needed for calibration and validation. Other benefits to working with macro-scale composites include the ability to engineer and manufacture—potentially using additive manufacturing techniques—composites that will support the application of advanced measurement techniques such as digital volume correlation and three-dimensional computed tomography imaging, which would aid in observing and quantifying complex behaviors that are exhibited in the macro-scale composites of interest. Ultimately, the goal of this new approach is to develop a meso-scale composite modeling framework, applicable to many composite materials, and the corresponding macroscale testing and test data interrogation techniques to support model calibration.

  15. Improved regional-scale Brazilian cropping systems' mapping based on a semi-automatic object-based clustering approach

    NASA Astrophysics Data System (ADS)

    Bellón, Beatriz; Bégué, Agnès; Lo Seen, Danny; Lebourgeois, Valentine; Evangelista, Balbino Antônio; Simões, Margareth; Demonte Ferraz, Rodrigo Peçanha

    2018-06-01

    Cropping systems' maps at fine scale over large areas provide key information for further agricultural production and environmental impact assessments, and thus represent a valuable tool for effective land-use planning. There is, therefore, a growing interest in mapping cropping systems in an operational manner over large areas, and remote sensing approaches based on vegetation index time series analysis have proven to be an efficient tool. However, supervised pixel-based approaches are commonly adopted, requiring resource-consuming field campaigns to gather training data. In this paper, we present a new object-based unsupervised classification approach tested on an annual MODIS 16-day composite Normalized Difference Vegetation Index time series and a Landsat 8 mosaic of the State of Tocantins, Brazil, for the 2014-2015 growing season. Two variants of the approach are compared: a hyperclustering approach, and a landscape-clustering approach involving a prior stratification of the study area into landscape units on which the clustering is then performed. The main cropping systems of Tocantins, characterized by the crop types and cropping patterns, were efficiently mapped with the landscape-clustering approach. Results show that stratification prior to clustering significantly improves the classification accuracies for underrepresented and sparsely distributed cropping systems. This study illustrates the potential of unsupervised classification for large area cropping systems' mapping and contributes to the development of generic tools for supporting large-scale agricultural monitoring across regions.
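
    A minimal sketch of the unsupervised, object-based idea described above: average an annual NDVI time series over each object (segment), then cluster the object-level profiles with k-means so each cluster becomes a candidate cropping system. The synthetic profiles, object counts, and parameter choices are illustrative assumptions, not the published workflow.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
n_objects, n_dates = 300, 23                      # 23 MODIS 16-day composites per year

# Synthetic NDVI profiles for two hypothetical cropping systems (single vs. double crop).
t = np.linspace(0, 1, n_dates)
single = 0.3 + 0.5 * np.exp(-((t - 0.55) / 0.12) ** 2)
double = (0.3 + 0.4 * np.exp(-((t - 0.35) / 0.08) ** 2)
              + 0.4 * np.exp(-((t - 0.75) / 0.08) ** 2))
profiles = np.vstack([single + rng.normal(0, 0.03, (n_objects // 2, n_dates)),
                      double + rng.normal(0, 0.03, (n_objects // 2, n_dates))])

# Cluster object-averaged NDVI profiles; each cluster is a candidate cropping system.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(profiles)
print(np.bincount(labels))                        # roughly 150 objects per cluster
```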

  16. The Structure of Character Strengths: Variable- and Person-Centered Approaches

    PubMed Central

    Najderska, Małgorzata; Cieciuch, Jan

    2018-01-01

    This article examines the structure of character strengths (Peterson and Seligman, 2004) following both variable-centered and person-centered approaches. We used the International Personality Item Pool-Values in Action (IPIP-VIA) questionnaire. The IPIP-VIA measures 24 character strengths and consists of 213 direct and reversed items. The present study was conducted in a heterogeneous group of N = 908 Poles (aged 18–78, M = 28.58). It was part of a validation project of a Polish version of the IPIP-VIA questionnaire. The variable-centered approach was used to examine the structure of character strengths on both the scale and item levels. The scale-level results indicated a four-factor structure that can be interpreted based on four of the five personality traits from the Big Five theory (excluding neuroticism). The item-level analysis suggested a slightly different and limited set of character strengths (17, not 24). After conducting a second-order analysis, a four-factor structure emerged, and three of the factors could be interpreted as being consistent with the scale-level factors. Three character strength profiles were found using the person-centered approach. Two of them were consistent with alpha and beta personality metatraits. The structure of character strengths can be described by using categories from the Five Factor Model of personality and metatraits. They form factors similar to some personality traits and occur in similar constellations as metatraits. The main contributions of this paper are: (1) the validation of IPIP-VIA conducted in a variable-centered approach in a new research group (Poles) using a different measurement instrument; (2) introducing the person-centered approach to the study of the structure of character strengths. PMID:29515482

  17. VLBI-resolution radio-map algorithms: Performance analysis of different levels of data-sharing on multi-socket, multi-core architectures

    NASA Astrophysics Data System (ADS)

    Tabik, S.; Romero, L. F.; Mimica, P.; Plata, O.; Zapata, E. L.

    2012-09-01

    A broad area in astronomy focuses on simulating extragalactic objects based on Very Long Baseline Interferometry (VLBI) radio-maps. Several algorithms in this scope simulate what would be the observed radio-maps if emitted from a predefined extragalactic object. This work analyzes the performance and scaling of this kind of algorithms on multi-socket, multi-core architectures. In particular, we evaluate a sharing approach, a privatizing approach and a hybrid approach on systems with complex memory hierarchy that includes shared Last Level Cache (LLC). In addition, we investigate which manual processes can be systematized and then automated in future works. The experiments show that the data-privatizing model scales efficiently on medium scale multi-socket, multi-core systems (up to 48 cores) while regardless of algorithmic and scheduling optimizations, the sharing approach is unable to reach acceptable scalability on more than one socket. However, the hybrid model with a specific level of data-sharing provides the best scalability over all used multi-socket, multi-core systems.

  18. On signals faint and sparse: The ACICA algorithm for blind de-trending of exoplanetary transits with low signal-to-noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Waldmann, I. P., E-mail: ingo@star.ucl.ac.uk

    2014-01-01

    Independent component analysis (ICA) has recently been shown to be a promising new path in data analysis and de-trending of exoplanetary time series signals. Such approaches do not require or assume any prior or auxiliary knowledge about the data or instrument in order to de-convolve the astrophysical light curve signal from instrument or stellar systematic noise. These methods are often known as 'blind-source separation' (BSS) algorithms. Unfortunately, all BSS methods suffer from an amplitude and sign ambiguity of their de-convolved components, which severely limits these methods in low signal-to-noise (S/N) observations where their scalings cannot be determined otherwise. Here we present a novel approach to calibrate ICA using sparse wavelet calibrators. The Amplitude Calibrated Independent Component Analysis (ACICA) allows for the direct retrieval of the independent components' scalings and the robust de-trending of low S/N data. Such an approach gives us a unique and unprecedented insight into the underlying morphology of a data set, which makes this method a powerful tool for exoplanetary data de-trending and signal diagnostics.

  19. Apportioning Sources of Riverine Nitrogen at Multiple Watershed Scales

    NASA Astrophysics Data System (ADS)

    Boyer, E. W.; Alexander, R. B.; Sebestyen, S. D.

    2005-05-01

    Loadings of reactive nitrogen (N) entering terrestrial landscapes have increased in recent decades due to anthropogenic activities associated with food and energy production. In the northeastern USA, this enhanced supply of N has been linked to many environmental concerns in both terrestrial and aquatic ecosystems, such as forest decline, lake and stream acidification, human respiratory problems, and coastal eutrophication. Thus N is a priority pollutant with regard to a whole host of air, land, and water quality issues, highlighting the need for methods to identify and quantify various N sources. Further, understanding precursor sources of N is critical to current and proposed public policies targeted at the reduction of N inputs to the terrestrial landscape and receiving waters. We present results from published and ongoing studies using multiple approaches to fingerprint sources of N in the northeastern USA, at watershed scales ranging from the headwaters to the coastal zone. The approaches include: 1) a mass balance model with a nitrogen-budgeting approach for analyses of large watersheds; 2) a spatially-referenced regression model with an empirical modeling approach for analyses of water quality at regional scales; and 3) a meta-analysis of monitoring data with a chemical tracer approach, utilizing concentrations of multiple elements and isotopic composition of N from water samples collected in the streams and rivers. We discuss the successes and limitations of these various approaches for apportioning contributions of N from multiple sources to receiving waters at regional scales.

  20. Object-Based Random Forest Classification of Land Cover from Remotely Sensed Imagery for Industrial and Mining Reclamation

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.

    2018-04-01

    The random forest (RF) method based on grid-search parameter optimization achieved a classification accuracy of 88.16 % in the classification of images with multiple feature variables. This accuracy was higher than that of SVM and ANN with the same feature variables. In terms of efficiency, the RF classification method also performs better than SVM and ANN and is more capable of handling multidimensional feature variables. Combining the RF method with an object-based analysis approach improved the classification accuracy further. The multiresolution segmentation approach, based on ESP scale parameter optimization, was used to obtain six segmentation scales; when the segmentation scale was 49, the classification accuracy reached its highest value of 89.58 %. The accuracy of object-based RF classification was thus 1.42 % higher than that of pixel-based classification (88.16 %). Therefore, the RF classification method combined with an object-based analysis approach can achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, interpretation of remotely sensed imagery using the proposed method could provide technical support and a theoretical reference for remote sensing-based monitoring of land reclamation.
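
    A minimal sketch of the grid-search-optimized random forest classification described above, using scikit-learn on synthetic feature vectors; the feature data, grid values, and resulting accuracy are illustrative assumptions, not the study's inputs or results.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for per-object feature vectors (spectral bands, indices, texture, ...).
X, y = make_classification(n_samples=2000, n_features=12, n_informative=8,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid search over the main RF hyperparameters, analogous to the abstract's approach.
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid={"n_estimators": [100, 300, 500],
                                "max_depth": [None, 10, 20],
                                "max_features": ["sqrt", 0.5]},
                    cv=5, n_jobs=-1)
grid.fit(X_train, y_train)

print("best params:", grid.best_params_)
print("test accuracy:", accuracy_score(y_test, grid.predict(X_test)))
```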

  1. Development and validation of a fatigue assessment scale for U.S. construction workers.

    PubMed

    Zhang, Mingzong; Sparer, Emily H; Murphy, Lauren A; Dennerlein, Jack T; Fang, Dongping; Katz, Jeffrey N; Caban-Martinez, Alberto J

    2015-02-01

    To develop a fatigue assessment scale and test its reliability and validity for commercial construction workers. Using a two-phased approach, we first identified items (first phase) for the development of a Fatigue Assessment Scale for Construction Workers (FASCW) through review of existing scales in the scientific literature, key informant interviews (n = 11) and focus groups (three groups with six workers each) with construction workers. The second phase included assessment of the reliability, validity, and sensitivity of the new scale using a repeated-measures study design with a convenience sample of construction workers (n = 144). Phase one resulted in a 16-item preliminary scale that, after factor analysis, yielded a final 10-item scale with two sub-scales ("Lethargy" and "Bodily Ailment"). During phase two, the FASCW and its subscales demonstrated satisfactory internal consistency (alpha coefficients were FASCW [0.91], Lethargy [0.86] and Bodily Ailment [0.84]) and acceptable test-retest reliability (Pearson Correlation Coefficients: 0.59-0.68; Intraclass Correlation Coefficients: 0.74-0.80). Correlation analysis substantiated concurrent and convergent validity. A discriminant analysis demonstrated that the FASCW differentiated between groups with different arthritis status and work hours. The 10-item FASCW with good reliability and validity is an effective tool for assessing the severity of fatigue among construction workers. © 2015 Wiley Periodicals, Inc.

  2. Multifractal Analysis for Nutritional Assessment

    PubMed Central

    Park, Youngja; Lee, Kichun; Ziegler, Thomas R.; Martin, Greg S.; Hebbar, Gautam; Vidakovic, Brani; Jones, Dean P.

    2013-01-01

    The concept of multifractality is currently used to describe self-similar and complex scaling properties observed in numerous biological signals. Fractals are geometric objects or dynamic variations which exhibit some degree of similarity (irregularity) to the original object across a wide range of scales. This approach determines the irregularity of a biologic signal as an indicator of adaptability, the capability to respond to unpredictable stress, and health. In the present work, we propose the application of multifractal analysis of wavelet-transformed proton nuclear magnetic resonance (1H NMR) spectra of plasma to determine nutritional insufficiency. For validation of this method on the 1H NMR signal of human plasma, the standard deviation from a classical statistical approach and the Hurst exponent (H), left slope and partition function from multifractal analysis were extracted from 1H NMR spectra to test whether multifractal indices could discriminate healthy subjects from unhealthy, intensive care unit patients. After validation, the multifractal approach was applied to spectra of plasma from a modified crossover study of sulfur amino acid insufficiency and tested for associations with blood lipids. The results showed that standard deviation and H, but not left slope, were significantly different for sulfur amino acid sufficiency and insufficiency. Quadratic discriminant analysis of H, left slope and the partition function showed 78% overall classification accuracy according to sulfur amino acid status. Triglycerides and apolipoprotein C3 were significantly correlated with a multifractal model containing H, left slope, and standard deviation, and cholesterol and high-sensitivity C-reactive protein were significantly correlated to H. In conclusion, multifractal analysis of 1H NMR spectra provides a new approach to characterize nutritional status. PMID:23990878
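
    As a simplified illustration of one of the indices above, the sketch below estimates the Hurst exponent H of a 1-D signal with detrended fluctuation analysis (DFA), a related scaling estimator rather than the wavelet-based procedure used in the study; the data and names are illustrative assumptions.

```python
import numpy as np

def hurst_dfa(signal, scales=(16, 32, 64, 128, 256)):
    """Estimate the Hurst exponent via detrended fluctuation analysis (DFA)."""
    x = np.cumsum(signal - np.mean(signal))          # integrated (profile) series
    flucts = []
    for s in scales:
        n_seg = len(x) // s
        segs = x[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # Remove a linear trend from each segment, then take the RMS fluctuation.
        detrended = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        flucts.append(np.sqrt(np.mean(np.square(detrended))))
    # Slope of log F(s) vs. log s gives the scaling (Hurst-like) exponent.
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

rng = np.random.default_rng(5)
white_noise = rng.normal(size=4096)
print(round(hurst_dfa(white_noise), 2))   # close to 0.5 for uncorrelated noise
```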

  3. Methods for the analysis of ordinal response data in medical image quality assessment.

    PubMed

    Keeble, Claire; Baxter, Paul D; Gislason-Lee, Amber J; Treadgold, Laura A; Davies, Andrew G

    2016-07-01

    The assessment of image quality in medical imaging often requires observers to rate images for some metric or detectability task. These subjective results are used in optimization, radiation dose reduction or system comparison studies and may be compared to objective measures from a computer vision algorithm performing the same task. One popular scoring approach is to use a Likert scale, then assign consecutive numbers to the categories. The mean of these response values is then taken and used for comparison with the objective or second subjective response. Agreement is often assessed using correlation coefficients. We highlight a number of weaknesses in this common approach, including inappropriate analyses of ordinal data and the inability to properly account for correlations caused by repeated images or observers. We suggest alternative data collection and analysis techniques such as amendments to the scale and multilevel proportional odds models. We detail the suitability of each approach depending upon the data structure and demonstrate each method using a medical imaging example. Whilst others have raised some of these issues, we evaluated the entire study from data collection to analysis, suggested sources for software and further reading, and provided a checklist plus flowchart for use with any ordinal data. We hope that raised awareness of the limitations of the current approaches will encourage greater method consideration and the utilization of a more appropriate analysis. More accurate comparisons between measures in medical imaging will lead to a more robust contribution to the imaging literature and ultimately improved patient care.
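
    A minimal sketch of the proportional odds (ordinal logistic) modeling suggested above, in a simplified single-level form without the repeated-image or repeated-observer terms of a full multilevel model. It assumes the OrderedModel class available in recent statsmodels versions; the simulated ratings and variable names are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(6)
n = 400

# Hypothetical study: observers rate image quality (1-5 Likert) at two dose levels.
dose = rng.integers(0, 2, n)                       # 0 = reduced dose, 1 = full dose
latent = 0.8 * dose + rng.logistic(size=n)         # latent quality with logistic noise
rating = pd.Series(pd.cut(latent, bins=[-np.inf, -1, 0, 1, 2, np.inf],
                          labels=[1, 2, 3, 4, 5], ordered=True))

# Proportional odds model instead of averaging the ratings as if they were numeric.
model = OrderedModel(rating, pd.DataFrame({"dose": dose}), distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```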

  4. Quadrantal multi-scale distribution entropy analysis of heartbeat interval series based on a modified Poincaré plot

    NASA Astrophysics Data System (ADS)

    Huo, Chengyu; Huang, Xiaolin; Zhuang, Jianjun; Hou, Fengzhen; Ni, Huangjing; Ning, Xinbao

    2013-09-01

    The Poincaré plot is one of the most important approaches in human cardiac rhythm analysis. However, further investigations are still needed to concentrate on techniques that can characterize the dispersion of the points displayed by a Poincaré plot. Based on a modified Poincaré plot, we provide a novel measurement named distribution entropy (DE) and propose a quadrantal multi-scale distribution entropy analysis (QMDE) for the quantitative descriptions of the scatter distribution patterns in various regions and temporal scales. We apply this method to the heartbeat interval series derived from healthy subjects and congestive heart failure (CHF) sufferers, respectively, and find that the discriminations between them are most significant in the first quadrant, which implies significant impacts on vagal regulation brought about by CHF. We also investigate the day-night differences of young healthy people, and it is shown that the results present a clearly circadian rhythm, especially in the first quadrant. In addition, the multi-scale analysis indicates that the results of healthy subjects and CHF sufferers fluctuate in different trends with variation of the scale factor. The same phenomenon also appears in circadian rhythm investigations of young healthy subjects, which implies that the cardiac dynamic system is affected differently in various temporal scales by physiological or pathological factors.
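
    A simplified sketch of the kind of quadrant-wise Poincaré-plot analysis described above: successive RR intervals form (RRn, RRn+1) pairs centered on their mean, points are assigned to quadrants, and a histogram-based entropy of the point distribution is computed per quadrant. This is an illustrative reading of the approach, not the authors' exact modified plot or DE definition, and the RR series is synthetic.

```python
import numpy as np

def quadrant_distribution_entropy(rr, bins=10):
    """Shannon entropy of the Poincaré-plot point distribution, per quadrant."""
    x, y = rr[:-1], rr[1:]                        # (RR_n, RR_{n+1}) pairs
    dx, dy = x - rr.mean(), y - rr.mean()         # center the plot on the mean RR
    quadrants = {1: (dx >= 0) & (dy >= 0), 2: (dx < 0) & (dy >= 0),
                 3: (dx < 0) & (dy < 0),  4: (dx >= 0) & (dy < 0)}
    entropies = {}
    for q, mask in quadrants.items():
        if mask.sum() < 2:
            entropies[q] = 0.0
            continue
        hist, _, _ = np.histogram2d(dx[mask], dy[mask], bins=bins)
        p = hist[hist > 0] / hist.sum()
        entropies[q] = float(-(p * np.log(p)).sum())
    return entropies

# Hypothetical RR-interval series (seconds) with mild variability.
rng = np.random.default_rng(7)
rr = 0.8 + 0.05 * np.sin(np.linspace(0, 40, 1000)) + rng.normal(0, 0.02, 1000)
print(quadrant_distribution_entropy(rr))
```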

  5. Towards practical multiscale approach for analysis of reinforced concrete structures

    NASA Astrophysics Data System (ADS)

    Moyeda, Arturo; Fish, Jacob

    2017-12-01

    We present a novel multiscale approach for analysis of reinforced concrete structural elements that overcomes two major hurdles in utilization of multiscale technologies in practice: (1) coupling between material and structural scales due to consideration of large representative volume elements (RVE), and (2) computational complexity of solving complex nonlinear multiscale problems. The former is accomplished using a variant of computational continua framework that accounts for sizeable reinforced concrete RVEs by adjusting the location of quadrature points. The latter is accomplished by means of reduced order homogenization customized for structural elements. The proposed multiscale approach has been verified against direct numerical simulations and validated against experimental results.

  6. Multi-scale symbolic transfer entropy analysis of EEG

    NASA Astrophysics Data System (ADS)

    Yao, Wenpo; Wang, Jun

    2017-10-01

    From both global and local perspectives, we symbolize two kinds of EEG and analyze their dynamic and asymmetric information using multi-scale transfer entropy. A multi-scale process with scale factors from 1 to 199 and a step size of 2 is applied to EEG of healthy people and epileptic patients, and the sequences are then symbolized using permutation with embedding dimension 3 and using a global approach. The forward and reverse symbol sequences are taken as the inputs of transfer entropy. The scale factor intervals in which the two kinds of EEG are satisfactorily distinguished are (37, 57) for permutation and (65, 85) for the global approach. At scale factor 67, the transfer entropies of the healthy and epileptic subjects under permutation symbolization, 0.1137 and 0.1028, show the biggest difference; the corresponding values for global symbolization are 0.0641 and 0.0601, at scale factor 165. The results show that permutation symbolization, which takes the contribution of local information into account, provides better discrimination and is more effectively applied in our multi-scale transfer entropy analysis of EEG.
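
    A compact sketch of the two building blocks named above, permutation (ordinal-pattern) symbolization and transfer entropy between two symbol sequences; the coarse-graining (multi-scale) step and the EEG data themselves are omitted, and all names and data are illustrative assumptions.

```python
import numpy as np
from collections import Counter
from itertools import permutations

def permutation_symbols(x, dim=3):
    """Map each length-dim window to the index of its ordinal (permutation) pattern."""
    patterns = {p: i for i, p in enumerate(permutations(range(dim)))}
    return np.array([patterns[tuple(np.argsort(x[i:i + dim]))]
                     for i in range(len(x) - dim + 1)])

def transfer_entropy(src, dst):
    """T(src -> dst) with history length 1, from plug-in probability estimates."""
    triples = Counter(zip(dst[1:], dst[:-1], src[:-1]))
    pairs = Counter(zip(dst[:-1], src[:-1]))
    dst_pairs = Counter(zip(dst[1:], dst[:-1]))
    dst_hist = Counter(dst[:-1])
    n = len(dst) - 1
    te = 0.0
    for (d1, d0, s0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs[(d0, s0)]
        p_cond_dst = dst_pairs[(d1, d0)] / dst_hist[d0]
        te += p_joint * np.log2(p_cond_full / p_cond_dst)
    return te

rng = np.random.default_rng(8)
x = rng.normal(size=3000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=3000)   # y is driven by x with lag 1
sx, sy = permutation_symbols(x), permutation_symbols(y)
print(round(transfer_entropy(sx, sy), 3), "vs", round(transfer_entropy(sy, sx), 3))
```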

  7. Properties of small-scale interfacial turbulence from a novel thermography based approach

    NASA Astrophysics Data System (ADS)

    Schnieders, Jana; Garbe, Christoph

    2013-04-01

    Oceans cover nearly two thirds of the earth's surface and exchange processes between the Atmosphere and the Ocean are of fundamental environmental importance. At the air-sea interface, complex interaction processes take place on a multitude of scales. Turbulence plays a key role in the coupling of momentum, heat and mass transfer [2]. Here we use high resolution infrared imagery to visualize near surface aqueous turbulence. Thermographic data are analyzed from a range of laboratory facilities and experimental conditions with wind speeds ranging from 1 m s⁻¹ to 7 m s⁻¹ and various surface conditions. The surface heat pattern is formed by distinct structures on two scales - small-scale, short-lived structures termed fish scales and larger scale cold streaks that are consistent with the footprints of Langmuir Circulations. There are two key characteristics of the observed surface heat patterns: (1) The surface heat patterns show characteristic features on two scales. (2) The structure of these patterns changes with increasing wind stress and surface conditions. We present a new image-processing-based approach to the analysis of the spacing of cold streaks based on a machine learning approach [4, 1] to classify the thermal footprints of near surface turbulence. Our random forest classifier is based on classical features in image processing such as gray value gradients and edge-detecting features. The result is a pixel-wise classification of the surface heat pattern with a subsequent analysis of the streak spacing. This approach has been presented in [3] and can be applied to a wide range of experimental data. In spite of entirely different boundary conditions, the spacing of turbulent cells near the air-water interface seems to match the expected turbulent cell size for flow near a no-slip wall. The analysis of the spacing of cold streaks shows consistent behavior in a range of laboratory facilities when expressed as a function of water-sided friction velocity, u*. The scales systematically decrease until a point of saturation at u* = 0.7 cm/s. Results suggest a saturation in the tangential stress, anticipating that similar behavior will be observed in the open ocean. A comparison with studies of small-scale Langmuir circulations and Langmuir numbers shows that thermal footprints in infrared images are consistent with Langmuir circulations and depend strongly on wind wave conditions. Our approach is not limited to laboratory measurements. In the near future, we will deploy it on in-situ measurements and verify our findings in these more challenging conditions. References [1] L. Breiman. Random forests. Machine Learning, 45:5-32, 2001. [2] S. P. McKenna and W. R. McGillis. The role of free-surface turbulence and surfactants in air-water gas transfer. Int. J. Heat Mass Transfer, 47:539-553, 2004. [3] J. Schnieders, C. S. Garbe, W. L. Peirson, and C. J. Zappa. Analyzing the footprints of near surface aqueous turbulence - an image processing based approach. Journal of Geophysical Research-Oceans, 2013. [4] Christoph Sommer, Christoph Straehle, Ullrich Koethe, and Fred A. Hamprecht. ilastik: Interactive learning and segmentation toolkit. In 8th IEEE International Symposium on Biomedical Imaging (ISBI 2011), 2011. [5] W.-T. Tsai, S.-M. Chen, and C.-H. Moeng. A numerical study on the evolution and structure of a stress-driven free-surface turbulent shear flow. J. Fluid Mech., 545:163-192, 2005.

  8. Remote Sensing Application to Land Use Classification in a Rapidly Changing Agricultural/Urban Area: City of Virginia Beach, Virginia. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Odenyo, V. A. O.

    1975-01-01

    Remote sensing data on computer-compatible tapes of LANDSAT 1 multispectral scanner imagery were analyzed to generate a land use map of the City of Virginia Beach. All four bands were used in both the supervised and unsupervised approaches with the LAYSYS software system. Color IR imagery from a U-2 flight over the same area was also digitized, and two sample areas were analyzed via the unsupervised approach. The relationships between the mapped land use and the soils of the area were investigated. A land use/land cover map at a scale of 1:24,000 was obtained from the supervised analysis of LANDSAT 1 data. It was concluded that machine analysis of remote sensing data to produce land use maps was feasible; that the LAYSYS software system was usable for this purpose; and that machine analysis was capable of extracting detailed information from the relatively small-scale LANDSAT data in a much shorter time without compromising accuracy.

  9. Harnessing Whole Genome Sequencing in Medical Mycology.

    PubMed

    Cuomo, Christina A

    2017-01-01

    Comparative genome sequencing studies of human fungal pathogens enable identification of genes and variants associated with virulence and drug resistance. This review describes current approaches, resources, and advances in applying whole genome sequencing to study clinically important fungal pathogens. Genomes for some important fungal pathogens were only recently assembled, revealing gene family expansions in many species and extreme gene loss in one obligate species. The scale and scope of species sequenced are rapidly expanding, leveraging technological advances to assemble and annotate genomes with higher precision. By using iteratively improved reference assemblies or those generated de novo for new species, recent studies have compared the sequences of isolates representing populations or clinical cohorts. Whole genome approaches provide the resolution necessary for comparison of closely related isolates, for example, in the analysis of outbreaks or of samples collected over time within a single host. Genomic analysis of fungal pathogens has enabled both basic research and diagnostic studies. The increased scale of sequencing can be applied across populations, and new metagenomic methods allow direct analysis of complex samples.

  10. Finding elementary flux modes in metabolic networks based on flux balance analysis and flux coupling analysis: application to the analysis of Escherichia coli metabolism.

    PubMed

    Tabe-Bordbar, Shayan; Marashi, Sayed-Amir

    2013-12-01

    Elementary modes (EMs) are steady-state metabolic flux vectors with minimal set of active reactions. Each EM corresponds to a metabolic pathway. Therefore, studying EMs is helpful for analyzing the production of biotechnologically important metabolites. However, memory requirements for computing EMs may hamper their applicability as, in most genome-scale metabolic models, no EM can be computed due to running out of memory. In this study, we present a method for computing randomly sampled EMs. In this approach, a network reduction algorithm is used for EM computation, which is based on flux balance-based methods. We show that this approach can be used to recover the EMs in the medium- and genome-scale metabolic network models, while the EMs are sampled in an unbiased way. The applicability of such results is shown by computing “estimated” control-effective flux values in Escherichia coli metabolic network.
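
    The method described above builds on flux balance machinery, i.e., the steady-state constraint S v = 0 together with flux bounds and a linear objective. A toy sketch of that underlying flux balance problem is shown below; the stoichiometric matrix is illustrative, not an E. coli model, and this is not the authors' EM-sampling algorithm itself.

    ```python
    # Toy flux balance analysis (FBA): maximize a target flux subject to
    # the steady-state constraint S v = 0 and flux bounds.
    import numpy as np
    from scipy.optimize import linprog

    # Reactions: R1 (uptake), R2, R3, R4 (export); metabolites: A, B
    S = np.array([
        [1, -1, -1,  0],   # A: produced by R1, consumed by R2 and R3
        [0,  1,  1, -1],   # B: produced by R2 and R3, consumed by R4
    ])

    bounds = [(0, 10), (0, 5), (0, 5), (0, 10)]   # irreversible flux bounds
    c = np.array([0, 0, 0, -1])                   # maximize R4 (minimize -v4)

    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds,
                  method="highs")
    print("optimal flux distribution:", res.x)    # e.g. [10, 5, 5, 10]
    ```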

  11. Backscattering from a Gaussian distributed, perfectly conducting, rough surface

    NASA Technical Reports Server (NTRS)

    Brown, G. S.

    1977-01-01

    The problem of scattering by random surfaces possessing many scales of roughness is analyzed. The approach is applicable to bistatic scattering from dielectric surfaces; however, this specific analysis is restricted to backscattering from a perfectly conducting surface in order to more clearly illustrate the method. The surface is assumed to be Gaussian distributed so that the surface height can be split into large- and small-scale components relative to the electromagnetic wavelength. A first-order perturbation approach is employed wherein the scattering solution for the large-scale structure is perturbed by the small-scale diffraction effects. The scattering from the large-scale structure is treated via geometrical optics techniques. The effect of the large-scale surface structure is shown to be equivalent to a convolution in k-space of the height spectrum with the following: the shadowing function, a polarization- and surface-slope-dependent function, and a Gaussian factor resulting from the unperturbed geometrical optics solution. This solution provides a continuous transition between the near-normal-incidence geometrical optics and wide-angle Bragg scattering results.

  12. Development and validation of the coronary heart disease scale under the system of quality of life instruments for chronic diseases QLICD-CHD: combinations of classical test theory and Generalizability Theory.

    PubMed

    Wan, Chonghua; Li, Hezhan; Fan, Xuejin; Yang, Ruixue; Pan, Jiahua; Chen, Wenru; Zhao, Rong

    2014-06-04

    Quality of life (QOL) for patients with coronary heart disease (CHD) is now a worldwide concern, yet disease-specific instruments are scarce and none has been developed using a modular approach. This paper aims to develop the CHD scale of the system of Quality of Life Instruments for Chronic Diseases (QLICD-CHD) by the modular approach and to validate it by both classical test theory and Generalizability Theory. The QLICD-CHD was developed using programmed decision procedures with multiple nominal group and focus group discussions, in-depth interviews, pre-testing, and quantitative statistical procedures. Data were collected from 146 inpatients with CHD, whose QOL was measured three times before and after treatment. The psychometric properties of the scale were evaluated with respect to validity, reliability, and responsiveness, employing correlation analysis, factor analyses, multi-trait scaling analysis, t-tests, and also G studies and D studies of Generalizability Theory analysis. Multi-trait scaling analysis, correlation, and factor analyses confirmed good construct validity and criterion-related validity when using the SF-36 as a criterion. The internal consistency α and test-retest reliability coefficients (Pearson r and intra-class correlations, ICC) for the overall instrument and all domains were higher than 0.70 and 0.80, respectively. The overall score and all domains except the social domain showed statistically significant changes after treatment, with moderate effect sizes (standardized response mean, SRM) ranging from 0.32 to 0.67. G-coefficients and indexes of dependability (Φ coefficients) further confirmed the reliability of the scale with more exact variance components. The QLICD-CHD has good validity and reliability, moderate responsiveness, and some notable strengths, and can be used as a quality of life instrument for patients with CHD. However, in order to obtain better reliability, the number of items in the social domain should be increased or the items' quality, rather than quantity, should be improved.
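
    The internal consistency figure reported above (Cronbach's α above 0.70 for all domains) is computed from an item-by-respondent score matrix. A minimal sketch of that calculation is shown below on simulated data; it is not the QLICD-CHD data set.

    ```python
    # Cronbach's alpha for one domain of a QOL questionnaire, computed from
    # an (n_respondents x n_items) score matrix. Data are simulated.
    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """scores: rows = respondents, columns = items of one scale/domain."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(1)
    true_score = rng.normal(size=(146, 1))                  # latent trait
    items = true_score + 0.8 * rng.normal(size=(146, 6))    # 6 noisy items
    print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
    ```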

  13. Detection of crossover time scales in multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Ge, Erjia; Leung, Yee

    2013-04-01

    Fractal analysis is employed in this paper as a scale-based method for identifying the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method for characterizing such behavior is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has generally been made without rigorous statistical procedures and has been determined by eyeballing or subjective observation. Crossover time scales determined in this way may be spurious and problematic, and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe the multi-scaling behaviors of fractals. Through regression analysis and statistical inference, we can (1) identify crossover time scales that cannot be detected by eyeballing, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish a statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall in Hong Kong. Through the proposed model, we can gain a deeper understanding of fractals in general and a statistical approach to identifying multi-scaling behavior under MF-DFA in particular.
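
    A crossover time scale appears as a change of slope in the log-log plot of the MF-DFA fluctuation function against scale. The sketch below uses a simplified stand-in for the paper's scaling-identification regression: it fits a two-segment linear model over candidate breakpoints and keeps the breakpoint with the smallest residual sum of squares, on synthetic data.

    ```python
    # Simplified crossover detection in a log-log fluctuation plot via a
    # two-segment regression (not the authors' full statistical procedure).
    import numpy as np

    rng = np.random.default_rng(2)
    log_s = np.linspace(1, 4, 40)                     # log10 of time scales
    # Synthetic fluctuation function with a crossover at log10(s) = 2.5
    log_F = np.where(log_s < 2.5, 0.9 * log_s, 0.5 * log_s + 1.0)
    log_F += 0.03 * rng.normal(size=log_s.size)

    def two_segment_sse(x, y, bp):
        left, right = x < bp, x >= bp
        sse = 0.0
        for mask in (left, right):
            coef = np.polyfit(x[mask], y[mask], 1)
            sse += np.sum((y[mask] - np.polyval(coef, x[mask])) ** 2)
        return sse

    candidates = log_s[5:-5]                          # keep both segments non-trivial
    best_bp = min(candidates, key=lambda bp: two_segment_sse(log_s, log_F, bp))
    print("estimated crossover at log10(s) =", round(float(best_bp), 2))
    ```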

  14. Quantification of changes in language-related brain areas in autism spectrum disorders using large-scale network analysis.

    PubMed

    Goch, Caspar J; Stieltjes, Bram; Henze, Romy; Hering, Jan; Poustka, Luise; Meinzer, Hans-Peter; Maier-Hein, Klaus H

    2014-05-01

    Diagnosis of autism spectrum disorders (ASD) is difficult, as symptoms vary greatly and are difficult to quantify objectively. Recent work has focused on the assessment of non-invasive, diffusion tensor imaging-based biomarkers that reflect the microstructural characteristics of neuronal pathways in the brain. While tractography-based approaches typically analyze specific structures of interest, a graph-based large-scale network analysis of the connectome can yield comprehensive measures of larger-scale architectural patterns in the brain. Commonly applied global network indices, however, do not provide any specificity with respect to functional areas or anatomical structures. The aim of this work was to assess the concept of network centrality as a tool for locally specific analysis that does not disregard the global network architecture, and to compare it to other popular network indices. We create connectome networks from fiber tractographies and parcellations of the human brain and compute global network indices as well as local indices for Wernicke's area, Broca's area and the motor cortex. Our approach was evaluated on 18 children suffering from ASD and 18 typically developing controls using magnetic resonance imaging-based cortical parcellations in combination with diffusion tensor imaging tractography. We show that the network centrality of Wernicke's area is significantly (p<0.001) reduced in ASD, while the motor cortex, which was used as a control region, did not show significant alterations. This could reflect the reduced capacity for comprehension of language in ASD. The betweenness centrality could potentially be an important metric in the development of future diagnostic tools in the clinical context of ASD diagnosis. Our results further demonstrate the applicability of large-scale network analysis tools in the domain of region-specific analysis, with potential application to many different psychological disorders.
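
    The local index used above, betweenness centrality of a parcellation node, is a standard graph quantity. A minimal sketch with networkx follows; the random graph and the node index standing in for Wernicke's area are hypothetical placeholders, not the study's connectomes.

    ```python
    # Betweenness centrality of one node in a connectome-style graph.
    import networkx as nx

    G = nx.erdos_renyi_graph(n=90, p=0.1, seed=0)   # stand-in parcellation graph
    bc = nx.betweenness_centrality(G, normalized=True)

    wernicke_node = 42                              # hypothetical node index
    print("betweenness of node 42:", round(bc[wernicke_node], 4))
    print("mean betweenness:", round(sum(bc.values()) / len(bc), 4))
    ```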

  15. Development of lichen response indexes using a regional gradient modeling approach for large-scale monitoring of forests

    Treesearch

    Susan Will-Wolf; Peter Neitlich

    2010-01-01

    Development of a regional lichen gradient model from community data is a powerful tool to derive lichen indexes of response to environmental factors for large-scale and long-term monitoring of forest ecosystems. The Forest Inventory and Analysis (FIA) Program of the U.S. Department of Agriculture Forest Service includes lichens in its national inventory of forests of...

  16. An Evaluation of the Precision of Measurement of Ryff's Psychological Well-Being Scales in a Population Sample

    ERIC Educational Resources Information Center

    Abbott, Rosemary A.; Ploubidis, George B.; Huppert, Felicia A.; Kuh, Diana; Croudace, Tim J.

    2010-01-01

    The aim of this study is to assess the effective measurement range of Ryff's Psychological Well-being scales (PWB). It applies normal ogive item response theory (IRT) methodology using factor analysis procedures for ordinal data based on a limited information estimation approach. The data come from a sample of 1,179 women participating in a…

  17. Visual unit analysis: a descriptive approach to landscape assessment

    Treesearch

    R. J. Tetlow; S. R. J. Sheppard

    1979-01-01

    Analysis of the visible attributes of landscapes is an important component of the planning process. When landscapes are at regional scale, economical and effective methodologies are critical. The Visual Unit concept appears to offer a logical and useful framework for description and evaluation. The concept subdivides landscape into coherent, spatially-defined units....

  18. ResStock Analysis Tool | Buildings | NREL

    Science.gov Websites

    ResStock supports large-scale residential energy analysis of energy and cost savings for U.S. homes by combining large public and private data sources; analyses with the tool have uncovered $49 billion in potential annual utility bill savings through cost-effective energy efficiency.

  19. Coping with Guilt and Shame: A Narrative Approach

    ERIC Educational Resources Information Center

    Silfver, Mia

    2007-01-01

    Autobiographical narratives (N = 97) of guilt and shame experiences were analysed to determine how the nature of emotion and context relate to ways of coping in such situations. The coding categories were created by content analysis, and the connections between categories were analysed with optimal scaling and log-linear analysis. Two theoretical…

  20. INTEGRATING ECOLOGICAL RISK ASSESSMENT AND ECONOMIC ANALYSIS IN WATERSHEDS: A CONCEPTUAL APPROACH AND THREE CASE STUDIES

    EPA Science Inventory

    This document reports on a program of research to investigate the integration of ecological risk assessment (ERA) and economics, with an emphasis on the watershed as the scale for analysis. In 1993, the U.S. Environmental Protection Agency initiated watershed ERA (W-ERA) in five...

  1. Analysis of regional-scale vegetation dynamics of Mexico using stratified AVHRR NDVI data. [Normalized Difference Vegetation Index]

    NASA Technical Reports Server (NTRS)

    Turcotte, Kevin M.; Kramber, William J.; Venugopal, Gopalan; Lulla, Kamlesh

    1989-01-01

    Previous studies have shown that a good relationship exists between AVHRR Normalized Difference Vegetation Index (NDVI) measurements and both regional-scale patterns of vegetation seasonality and productivity. Most of these studies used known samples of vegetation types. An alternative approach, and the objective of this study, was to examine the above relationships by analyzing one year of AVHRR NDVI data stratified using a small-scale vegetation map of Mexico. The results show that there is a good relationship between AVHRR NDVI measurements and the regional-scale vegetation dynamics of Mexico.
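
    The analysis above rests on computing NDVI from red and near-infrared reflectance and averaging it within the strata of a vegetation map. The sketch below illustrates that step on synthetic arrays standing in for AVHRR bands and map classes.

    ```python
    # NDVI = (NIR - RED) / (NIR + RED), summarized per vegetation stratum.
    import numpy as np

    rng = np.random.default_rng(3)
    red = rng.uniform(0.05, 0.3, size=(100, 100))
    nir = rng.uniform(0.2, 0.6, size=(100, 100))
    veg_class = rng.integers(0, 4, size=(100, 100))   # stratification map

    ndvi = (nir - red) / (nir + red)
    for cls in np.unique(veg_class):
        print(f"class {cls}: mean NDVI = {ndvi[veg_class == cls].mean():.3f}")
    ```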

  2. Finite-size scaling for discontinuous nonequilibrium phase transitions

    NASA Astrophysics Data System (ADS)

    de Oliveira, Marcelo M.; da Luz, M. G. E.; Fiore, Carlos E.

    2018-06-01

    A finite-size scaling theory, originally developed only for transitions to absorbing states [Phys. Rev. E 92, 062126 (2015), 10.1103/PhysRevE.92.062126], is extended to distinct sorts of discontinuous nonequilibrium phase transitions. Expressions for quantities such as response functions, reduced cumulants, and equal-area probability distributions are derived from phenomenological arguments. Irrespective of system details, all these quantities scale with the volume, establishing the dependence on size. The generality of the approach is illustrated through the analysis of different models. The present results are a relevant step toward unifying the description of the scaling behavior of nonequilibrium transition processes.

  3. Analysis of alluvial hydrostratigraphy using indicator geostatistics, with examples from Santa Clara Valley, California

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1995-03-01

    Current trends in hydrogeology seek to enlist sedimentary concepts in the interpretation of permeability structures. However, existing conceptual models of alluvial deposition tend to inadequately account for the heterogeneity caused by complex sedimentological and external factors. This dissertation presents three analyses of alluvial hydrostratigraphy using indicator geostatistics. This approach empirically acknowledges both the random and structured qualities of alluvial structures at scales relevant to site investigations. The first analysis introduces the indicator approach, whereby binary values are assigned to borehole-log intervals on the basis of inferred relative permeability; it presents a case study of indicator variography at a well-documented ground-water contamination site, and uses indicator kriging to interpolate an aquifer-aquitard sequence in three dimensions. The second analysis develops an alluvial-architecture context for interpreting semivariograms, and performs comparative variography for a suite of alluvial sites in Santa Clara Valley, California. The third analysis investigates the use of a water well perforation indicator for assessing large-scale hydrostratigraphic structures within relatively deep production zones.
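
    The indicator step described above assigns binary values to borehole-log intervals and then studies their spatial correlation through a semivariogram. A minimal sketch on simulated, regularly spaced log data follows; it is not the Santa Clara Valley data set.

    ```python
    # Indicator coding of a synthetic borehole log and an experimental
    # indicator semivariogram along depth (1-m intervals).
    import numpy as np

    rng = np.random.default_rng(4)
    perm_proxy = np.cumsum(rng.normal(size=100))                 # correlated proxy
    indicator = (perm_proxy > np.median(perm_proxy)).astype(float)  # 1 = coarse, 0 = fine

    def indicator_semivariogram(values, lags):
        """Experimental semivariogram gamma(h) on a regular unit grid."""
        return np.array([0.5 * np.mean((values[h:] - values[:-h]) ** 2) for h in lags])

    lags = np.arange(1, 21)                                      # lag distances in metres
    gamma = indicator_semivariogram(indicator, lags)
    for h, g in zip(lags[:5], gamma[:5]):
        print(f"lag {h} m: gamma = {g:.3f}")
    ```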

  4. An item response theory analysis of Harter's Self-Perception Profile for children or why strong clinical scales should be distrusted.

    PubMed

    Egberink, Iris J L; Meijer, Rob R

    2011-06-01

    The authors investigated the psychometric properties of the subscales of the Self-Perception Profile for Children with item response theory (IRT) models using a sample of 611 children. Results from a nonparametric Mokken analysis and a parametric IRT approach for boys (n = 268) and girls (n = 343) were compared. The authors found that most scales formed weak scales and that measurement precision was relatively low and only present for latent trait values indicating low self-perception. The subscales Physical Appearance and Global Self-Worth formed one strong scale. Children seem to interpret Global Self-Worth items as if they measure Physical Appearance. Furthermore, the authors found that strong Mokken scales (such as Global Self-Worth) consisted mostly of items that repeat the same item content. They conclude that researchers should be very careful in interpreting the total scores on the different Self-Perception Profile for Children scales. Finally, implications for further research are discussed.

  5. The disposition to understand for oneself at university: integrating learning processes with motivation and metacognition.

    PubMed

    Entwistle, Noel; McCune, Velda

    2013-06-01

    A re-analysis of several university-level interview studies has suggested that some students show evidence of a deep and stable approach to learning, along with other characteristics that support the approach. This combination, it was argued, could be seen to indicate a disposition to understand for oneself. The aims were to identify a group of students who showed high and consistent scores on deep approach, combined with equivalently high scores on effort and monitoring of studying, and to explore these students' experiences of the teaching-learning environments they had encountered. Data from 1,896 students from 25 undergraduate courses in four contrasting subject areas at eleven British universities were re-analyzed. Inventories measuring approaches to studying were given at the beginning and the end of a semester, with the second inventory also exploring students' experiences of teaching. K-means cluster analysis was used to identify groups of students with differing patterns of response on the inventory scales, with a particular focus on students showing high, stable scores. One cluster clearly showed the characteristics expected of the disposition to understand and was also fairly stable over time. Other clusters also had deep approaches, but showed either surface elements or lower scores on organized effort or monitoring of studying. Combining these findings with the interview studies previously reported reinforces the idea of there being a disposition to understand for oneself that could be identified from an inventory scale or through further interviews. © 2013 The British Psychological Society.
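
    The clustering step described above groups students by their profiles on inventory scale scores. The sketch below shows a k-means clustering of standardized scores; the scale names and simulated scores are illustrative assumptions, not the study's data.

    ```python
    # K-means clustering of standardized inventory scale scores.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(5)
    # columns: deep approach, organized effort, monitoring, surface approach
    scores = np.column_stack([
        rng.normal(3.5, 0.6, 1896),
        rng.normal(3.2, 0.7, 1896),
        rng.normal(3.3, 0.7, 1896),
        rng.normal(2.8, 0.8, 1896),
    ])

    X = StandardScaler().fit_transform(scores)
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
    for k in range(4):
        profile = scores[km.labels_ == k].mean(axis=0).round(2)
        print(f"cluster {k}: mean profile {profile}")
    ```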

  6. Robust inference for responder analysis: Innovative clinical trial design using a minimum p-value approach.

    PubMed

    Lin, Yunzhi

    2016-08-15

    Responder analysis is in common use in clinical trials, and has been described and endorsed in regulatory guidance documents, especially in trials where "soft" clinical endpoints such as rating scales are used. The procedure is useful, because responder rates can be understood more intuitively than a difference in means of rating scales. However, two major issues arise: 1) such dichotomized outcomes are inefficient in terms of using the information available and can seriously reduce the power of the study; and 2) the results of clinical trials depend considerably on the response cutoff chosen, yet in many disease areas there is no consensus as to what is the most appropriate cutoff. This article addresses these two issues, offering a novel approach for responder analysis that could both improve the power of responder analysis and explore different responder cutoffs if an agreed-upon common cutoff is not present. Specifically, we propose a statistically rigorous clinical trial design that pre-specifies multiple tests of responder rates between treatment groups based on a range of pre-specified responder cutoffs, and uses the minimum of the p-values for formal inference. The critical value for hypothesis testing comes from permutation distributions. Simulation studies are carried out to examine the finite sample performance of the proposed method. We demonstrate that the new method substantially improves the power of responder analysis, and in certain cases, yields power that is approaching the analysis using the original continuous (or ordinal) measure.
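
    The design above pre-specifies several responder cutoffs, takes the minimum of the per-cutoff p-values, and calibrates it against a permutation distribution. The sketch below illustrates that logic on simulated change scores; the cutoffs, sample sizes, and two-proportion z-test are illustrative choices, not the paper's exact formulation.

    ```python
    # Minimum p-value responder analysis with permutation-based calibration.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(6)
    treat = rng.normal(1.0, 2.0, 100)      # change scores, treatment arm
    ctrl = rng.normal(0.3, 2.0, 100)       # change scores, control arm
    cutoffs = [0.5, 1.0, 1.5, 2.0]         # pre-specified responder definitions

    def min_p(a, b):
        p_values = []
        for c in cutoffs:
            p1, p2 = np.mean(a >= c), np.mean(b >= c)
            pooled = np.mean(np.concatenate([a, b]) >= c)
            se = np.sqrt(pooled * (1 - pooled) * (1 / len(a) + 1 / len(b)))
            z = (p1 - p2) / se
            p_values.append(2 * (1 - norm.cdf(abs(z))))
        return min(p_values)

    observed = min_p(treat, ctrl)
    pooled_data = np.concatenate([treat, ctrl])
    perm_stats = []
    for _ in range(2000):                  # permutation distribution of min p
        perm = rng.permutation(pooled_data)
        perm_stats.append(min_p(perm[:100], perm[100:]))

    print("observed min p:", round(observed, 4))
    print("permutation-adjusted p:", np.mean(np.array(perm_stats) <= observed))
    ```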

  7. Scalable Analysis Methods and In Situ Infrastructure for Extreme Scale Knowledge Discovery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bethel, Wes

    2016-07-24

    The primary challenge motivating this team's work is the widening gap between the ability to compute information and to store it for subsequent analysis. This gap adversely impacts science code teams, who are able to perform analysis on only a small fraction of the data they compute, resulting in the very real likelihood of lost or missed science when results are computed but not analyzed. Our approach is to perform as much analysis or visualization processing as possible on data while it is still resident in memory, an approach that is known as in situ processing. The idea of in situ processing was not new at the start of this effort in 2014, but efforts in that space were largely ad hoc, and there was no concerted effort within the research community aimed at fostering production-quality software tools suitable for use by DOE science projects. By and large, our objective was to produce and enable the use of production-quality in situ methods and infrastructure, at scale, on DOE HPC facilities, though we expected to have impact beyond DOE due to the widespread nature of the challenges, which affect virtually all large-scale computational science efforts. To achieve that objective, we assembled a unique team of researchers consisting of representatives from DOE national laboratories, academia, and industry, and engaged in software technology R&D as well as in close partnerships with DOE science code teams, to produce software technologies that were shown to run effectively at scale on DOE HPC platforms.

  8. [Application of risk-based approach for determination of critical factors in technology transfer of production of medicinal products].

    PubMed

    Beregovykh, V V; Spitskiy, O R

    2014-01-01

    A risk-based approach is used to examine the impact of different factors on the quality of medicinal products in technology transfer. A general diagram is offered for carrying out risk analysis in technology transfer from pharmaceutical development to production. When transferring technology to full-scale commercial production, it is necessary to investigate and simulate the application of the production process beforehand under the new, real conditions. The manufacturing process is the core factor for risk analysis, having the greatest impact on the quality attributes of a medicinal product. Further important factors are linked to the materials and products to be handled and to manufacturing environmental conditions such as premises, equipment, and personnel. The use of the risk-based approach in designing a multipurpose production facility for medicinal products is shown, where the quantitative risk analysis tool RAMM (Risk Analysis and Mitigation Matrix) was applied.

  9. A Composite Network Approach for Assessing Multi-Species Connectivity: An Application to Road Defragmentation Prioritisation

    PubMed Central

    Saura, Santiago; Rondinini, Carlo

    2016-01-01

    One of the biggest challenges in large-scale conservation is quantifying connectivity at broad geographic scales and for a large set of species. Because connectivity analyses can be computationally intensive, and the planning process quite complex when multiple taxa are involved, assessing connectivity at large spatial extents for many species often turns out to be intractable. As a result, assessments are often partial, focusing on only a few key species, or generic, considering a range of dispersal distances and a fixed set of areas to connect that are not directly linked to the actual spatial distribution or mobility of particular species. Using a graph theory framework, here we propose an approach to reduce computational effort and effectively consider large assemblages of species in obtaining multi-species connectivity priorities. We demonstrate the potential of the approach by identifying defragmentation priorities in the Italian road network, focusing on medium and large terrestrial mammals. We show that by combining probabilistic species graphs prior to conducting the network analysis (i) it is possible to analyse connectivity once for all species simultaneously, obtaining conservation or restoration priorities that apply to the entire species assemblage; and (ii) those priorities are well aligned with the ones that would be obtained by aggregating the results of separate connectivity analyses for each of the individual species. This approach offers great opportunities to extend connectivity assessments to large assemblages of species and broad geographic scales. PMID:27768718

  10. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    PubMed

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks in case there exist several ones. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose a MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.
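
    The MILP idea above couples each flux to a binary activity indicator and minimizes the number of active reactions while preserving a required function. The sketch below illustrates that coupling on a toy network using big-M constraints; it assumes the PuLP package is installed and is not the authors' full formulation.

    ```python
    # Minimum active reaction set subject to S v = 0 and a demand flux,
    # using binary indicators y_i and big-M constraints (toy network).
    import pulp

    # Reactions R1..R4 over metabolites A, B (columns of S match reactions)
    S = [
        [1, -1, -1, 0],   # A
        [0,  1,  1, -1],  # B
    ]
    n_rxn, big_m = 4, 100.0

    prob = pulp.LpProblem("minimum_subnetwork", pulp.LpMinimize)
    v = [pulp.LpVariable(f"v{i}", lowBound=0, upBound=big_m) for i in range(n_rxn)]
    y = [pulp.LpVariable(f"y{i}", cat="Binary") for i in range(n_rxn)]

    prob += pulp.lpSum(y)                                 # minimize active reactions
    for row in S:                                         # steady state S v = 0
        prob += pulp.lpSum(row[i] * v[i] for i in range(n_rxn)) == 0
    for i in range(n_rxn):                                # flux only if reaction active
        prob += v[i] <= big_m * y[i]
    prob += v[3] >= 1.0                                   # required demand flux (R4)

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    print("active reactions:", [i for i in range(n_rxn) if y[i].value() > 0.5])
    ```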

  11. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  12. Arrhenius time-scaled least squares: a simple, robust approach to accelerated stability data analysis for bioproducts.

    PubMed

    Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F

    2014-08-01

    Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
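
    The core of the ATS idea above is a single nonlinear covariate: time is rescaled by an Arrhenius factor (one activation-energy parameter) and the attribute is described by a polynomial in the scaled time. The sketch below fits such a model to simulated accelerated-stability data; the model form, parameter values, and units are illustrative, not the published implementation.

    ```python
    # Arrhenius time-scaled least squares on simulated stability data.
    import numpy as np
    from scipy.optimize import curve_fit

    R, T_ref = 8.314, 298.15                   # J/(mol K), reference temperature

    def ats_model(X, Ea, a1, a2):
        t, T = X
        tau = t * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))   # Arrhenius-scaled time
        return 100.0 + a1 * tau + a2 * tau**2                 # purity starts at 100%

    # Simulated data at 25, 40 and 50 degrees C over 6 months
    rng = np.random.default_rng(7)
    t = np.tile(np.array([0.0, 1.0, 2.0, 3.0, 6.0]), 3)        # months
    T = np.repeat(np.array([298.15, 313.15, 323.15]), 5)
    y = ats_model((t, T), 80e3, -0.5, -0.005) + rng.normal(0, 0.1, size=t.size)

    popt, _ = curve_fit(ats_model, (t, T), y, p0=[70e3, -1.0, 0.0])
    print("estimated Ea (kJ/mol):", round(popt[0] / 1e3, 1))
    print("predicted purity at 24 months, 25 C:",
          round(ats_model((np.array([24.0]), np.array([298.15])), *popt)[0], 2))
    ```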

  13. Intermediate scale plasma density irregularities in the polar ionosphere inferred from radio occultation

    NASA Astrophysics Data System (ADS)

    Shume, E. B.; Komjathy, A.; Langley, R. B.; Verkhoglyadova, O. P.; Butala, M.; Mannucci, A. J.

    2014-12-01

    In this research, we report intermediate scale plasma density irregularities in the high-latitude ionosphere inferred from high-resolution radio occultation (RO) measurements in the CASSIOPE (CAScade Smallsat and IOnospheric Polar Explorer) - GPS (Global Positioning System) satellite radio link. The high inclination of the CASSIOPE satellite and the high rate of signal reception by the occultation antenna of the GPS Attitude, Positioning and Profiling (GAP) instrument on the Enhanced Polar Outflow Probe platform on CASSIOPE enable a high temporal and spatial resolution investigation of the dynamics of the polar ionosphere, magnetosphere-ionosphere coupling, solar wind effects, etc., in unprecedented detail compared with what was possible in the past. We have carried out a high spatial resolution analysis in altitude and geomagnetic latitude of scintillation-producing plasma density irregularities in the polar ionosphere. Intermediate scale, scintillation-producing plasma density irregularities, corresponding to spatial scales of 2 to 40 km, were inferred by applying multi-scale spectral analysis to the RO phase delay measurements. Using our multi-scale spectral analysis approach and Polar Operational Environmental Satellites (POES) and Defense Meteorological Satellite Program (DMSP) observations, we infer that the irregularity scales and phase scintillations have distinct features in the auroral oval and polar cap regions. In specific terms, we found that larger length scales and more intense phase scintillations are prevalent in the auroral oval compared to the polar cap region. Hence, the irregularity scales and phase scintillation characteristics are a function of the solar wind and the magnetospheric forcing. Multi-scale analysis may become a powerful diagnostic tool for characterizing how the ionosphere is dynamically driven by these factors.

  14. Understanding Mothers' Experiences of Infant Daycare: A New Approach Using Computer-Assisted Analysis of Qualitative Data.

    ERIC Educational Resources Information Center

    Rolfe, Sharne; And Others

    This paper reports on a small-scale introductory study of Australian mothers' experiences of infant day care. Ten employed, middle- and lower-socioeconomic status women with an infant in center-based day care were interviewed. Brief narrative examples from the mothers' accounts are presented. Discussion then concentrates on a new approach to…

  15. Introduction to Chemical Engineering Reactor Analysis: A Web-Based Reactor Design Game

    ERIC Educational Resources Information Center

    Orbey, Nese; Clay, Molly; Russell, T.W. Fraser

    2014-01-01

    An approach to explain chemical engineering through a Web-based interactive game design was developed and used with college freshman and junior/senior high school students. The goal of this approach was to demonstrate how to model a lab-scale experiment, and use the results to design and operate a chemical reactor. The game incorporates both…

  16. Criteria of Career Success among Chinese Employees: Developing a Multidimensional Scale with Qualitative and Quantitative Approaches

    ERIC Educational Resources Information Center

    Zhou, Wenxia; Sun, Jianmin; Guan, Yanjun; Li, Yuhui; Pan, Jingzhou

    2013-01-01

    The current research aimed to develop a multidimensional measure on the criteria of career success in a Chinese context. Items on the criteria of career success were obtained using a qualitative approach among 30 Chinese employees; exploratory factor analysis was conducted to select items and determine the factor structure among a new sample of…

  17. Mapping and navigating mammalian conservation: from analysis to action

    PubMed Central

    Redford, Kent H.; Ray, Justina C.; Boitani, Luigi

    2011-01-01

    Although mammals are often seen as important objects of human interest and affection, many are threatened with extinction. A range of efforts have been proposed and much work has been done to try to conserve mammals, but there is little overall understanding of what has worked and why. As a result, there is no global-scale, coordinated approach to conserving all mammals. Rather, conservation efforts are usually focused at jurisdictional levels where relevant legislation and policies are in force. To help build the framework for a global-scale approach, in this paper we review the many ways that have been proposed for conserving mammals. First, we examine the overall pattern of threat faced by mammals at the global level. Secondly, we look at the major structuring issues in prioritizing and planning mammal conservation, examining in particular the roles of values and scale and a set of approaches to conservation, each of which varies along a continuum. Finally, we lay out the steps necessary to move from planning to implementing mammalian conservation. PMID:21844050

  18. Multi-scale Modeling of the Impact Response of a Strain Rate Sensitive High-Manganese Austenitic Steel

    NASA Astrophysics Data System (ADS)

    Önal, Orkun; Ozmenci, Cemre; Canadinc, Demircan

    2014-09-01

    A multi-scale modeling approach was applied to predict the impact response of a strain rate sensitive high-manganese austenitic steel. The roles of texture, geometry and strain rate sensitivity were successfully taken into account all at once by coupling crystal plasticity and finite element (FE) analysis. Specifically, crystal plasticity was utilized to obtain the multi-axial flow rule at different strain rates based on the experimental deformation response under uniaxial tensile loading. The equivalent stress - equivalent strain response was then incorporated into the FE model for the sake of a more representative hardening rule under impact loading. The current results demonstrate that reliable predictions can be obtained by proper coupling of crystal plasticity and FE analysis even if the experimental flow rule of the material is acquired under uniaxial loading and at moderate strain rates that are significantly slower than those attained during impact loading. Furthermore, the current findings also demonstrate the need for an experiment-based multi-scale modeling approach for the sake of reliable predictions of the impact response.

  19. Spatially-Distributed Cost-Effectiveness Analysis Framework to Control Phosphorus from Agricultural Diffuse Pollution.

    PubMed

    Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N; Meng, Fande

    2015-01-01

    Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, quantifying the benefits of BMP implementation for receiving water quality at multiple spatial scales remains an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., a phosphorus (P) index), model simulation techniques (Hydrological Simulation Program-FORTRAN), and a BMP placement tool at various scales to identify the optimal locations for implementing multiple BMPs and to estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed as the measure of effectiveness of BMPs strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in the Chaohe River Watershed, located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreased P loss from the watershed. Remedial strategies in which BMPs were targeted to areas at high risk of P loss decreased P loads compared with strategies in which BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement in particular watersheds is the most cost-effective scenario for decreasing P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales from field to farm, to watershed, to region. Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated modeling approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds.

  20. Bi-scale analysis of multitemporal land cover fractions for wetland vegetation mapping

    NASA Astrophysics Data System (ADS)

    Michishita, Ryo; Jiang, Zhiben; Gong, Peng; Xu, Bing

    2012-08-01

    Land cover fractions (LCFs) derived through spectral mixture analysis are useful in understanding sub-pixel information. However, few studies have been conducted on the analysis of time-series LCFs. Although multi-scale comparisons of spectral index, hard classification, and land surface temperature images have received attention, these approaches have rarely been applied to LCFs. This study compared the LCFs derived through Multiple Endmember Spectral Mixture Analysis (MESMA) using time-series Landsat Thematic Mapper (TM) and Terra Moderate Resolution Imaging Spectroradiometer (MODIS) data acquired over the Poyang Lake area, China, between 2004 and 2005. Specifically, we aimed to: (1) propose an approach for optimal endmember (EM) selection in time-series MESMA; (2) understand the trends in time-series LCFs derived from the TM and MODIS data; and (3) examine the trends in the correlation between the bi-scale LCFs derived from the time-series TM and MODIS data. Our results indicated that: (1) the EM spectra chosen according to the proposed hierarchical three-step approach (overall, seasonal, and individual) accurately modeled both the TM and MODIS images; (2) the green vegetation (GV) and NPV/soil/impervious surface (N/S/I) classes followed sine-curve trends over the overall area, while the two water classes displayed the water level change pattern in the areas primarily covered with wetland vegetation; and (3) the GV, N/S/I, and bright water classes showed moderately high agreement between the TM and MODIS LCFs over the whole area (adjusted R2 ⩾ 0.6). However, low correlations were found for all land cover classes in the areas dominated by wetland vegetation.
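
    The fractions discussed above come from spectral unmixing: each pixel spectrum is expressed as a combination of endmember spectra. The sketch below shows the basic non-negative least-squares unmixing step with synthetic spectra; a full MESMA run additionally tests multiple endmember combinations per pixel, which is omitted here.

    ```python
    # Non-negative least-squares unmixing of one pixel spectrum.
    import numpy as np
    from scipy.optimize import nnls

    # Endmember spectra (rows: GV, N/S/I, water) over 6 synthetic bands
    endmembers = np.array([
        [0.05, 0.08, 0.06, 0.45, 0.30, 0.20],   # green vegetation
        [0.15, 0.20, 0.25, 0.30, 0.35, 0.38],   # NPV / soil / impervious
        [0.08, 0.06, 0.05, 0.03, 0.02, 0.01],   # water
    ])

    true_fractions = np.array([0.6, 0.3, 0.1])
    pixel = true_fractions @ endmembers + 0.005 * np.random.default_rng(8).normal(size=6)

    fractions, _ = nnls(endmembers.T, pixel)    # solve pixel ~ E^T f with f >= 0
    fractions /= fractions.sum()                # sum-to-one normalization
    print("estimated fractions (GV, N/S/I, water):", fractions.round(2))
    ```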

  1. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    NASA Astrophysics Data System (ADS)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising not only for gas storage in MOFs but also for many other materials science projects.

  2. A systems-biology approach to yeast actin cables.

    PubMed

    Drake, Tyler; Yusuf, Eddy; Vavylonis, Dimitrios

    2012-01-01

    We focus on actin cables in yeast as a model system for understanding cytoskeletal organization and the workings of actin itself. In particular, we highlight quantitative approaches on the kinetics of actin-cable assembly and methods of measuring their morphology by image analysis. Actin cables described by these studies can span greater lengths than a thousand end-to-end actin-monomers. Because of this difference in length scales, control of the actin-cable system constitutes a junction between short-range interactions - among actin-monomers and nucleating, polymerization-facilitating, side-binding, severing, and cross-linking proteins - and the emergence of cell-scale physical form as embodied by the actin cables themselves.

  3. Multi-scale connectivity and graph theory highlight critical areas for conservation under climate change

    USGS Publications Warehouse

    Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Phillip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Ken E.; Esque, Todd C.

    2016-01-01

    Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales, from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multi-scale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods including graph theory, circuit theory, and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this California threatened species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American Southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping-stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat, and should be applicable across a broad range of taxa.

  4. Engineering large-scale agent-based systems with consensus

    NASA Technical Reports Server (NTRS)

    Bokma, A.; Slade, A.; Kerridge, S.; Johnson, K.

    1994-01-01

    The paper presents the consensus method for the development of large-scale agent-based systems. Systems can be developed as networks of knowledge based agents (KBA) which engage in a collaborative problem solving effort. The method provides a comprehensive and integrated approach to the development of this type of system. This includes a systematic analysis of user requirements as well as a structured approach to generating a system design which exhibits the desired functionality. There is a direct correspondence between system requirements and design components. The benefits of this approach are that requirements are traceable into design components and code thus facilitating verification. The use of the consensus method with two major test applications showed it to be successful and also provided valuable insight into problems typically associated with the development of large systems.

  5. Scalable clustering algorithms for continuous environmental flow cytometry.

    PubMed

    Hyrkas, Jeremy; Clayton, Sophie; Ribalet, Francois; Halperin, Daniel; Armbrust, E Virginia; Howe, Bill

    2016-02-01

    Recent technological innovations in flow cytometry now allow oceanographers to collect high-frequency flow cytometry data from particles in aquatic environments on a scale far surpassing conventional flow cytometers. The SeaFlow cytometer continuously profiles microbial phytoplankton populations across thousands of kilometers of the surface ocean. The data streams produced by instruments such as SeaFlow challenge the traditional sample-by-sample approach to cytometric analysis and highlight the need for scalable clustering algorithms to extract population information from these large-scale, high-frequency flow cytometers. We explore how algorithms commonly used for medical applications perform at classifying such large-scale environmental flow cytometry data. We apply large-scale Gaussian mixture models to massive datasets using Hadoop. This approach outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of data into homogeneous sections for further classification gains. We propose the Gaussian mixture model with partitioning approach for classification of large-scale, high-frequency flow cytometry data. Source code is available for download at https://github.com/jhyrkas/seaflow_cluster, implemented in Java for use with Hadoop. Contact: hyrkas@cs.washington.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
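
    The classification step above fits Gaussian mixture models to cytometry measurements and assigns particles to populations. The sketch below shows that core step on simulated two-dimensional data with scikit-learn; the paper's pipeline additionally partitions the stream and runs the fits at scale on Hadoop, which is not reproduced here.

    ```python
    # Gaussian mixture model classification of simulated cytometry events.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(9)
    # Two synthetic populations in (forward scatter, red fluorescence) space
    pop_a = rng.multivariate_normal([2.0, 3.0], [[0.05, 0], [0, 0.05]], size=5000)
    pop_b = rng.multivariate_normal([3.0, 1.5], [[0.08, 0], [0, 0.08]], size=3000)
    events = np.vstack([pop_a, pop_b])

    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    labels = gmm.fit_predict(events)
    for k in range(2):
        print(f"population {k}: {np.sum(labels == k)} events, "
              f"mean = {gmm.means_[k].round(2)}")
    ```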

  6. Metabolic Network Modeling of Microbial Communities

    PubMed Central

    Biggs, Matthew B.; Medlock, Gregory L.; Kolling, Glynis L.

    2015-01-01

    Genome-scale metabolic network reconstructions and constraint-based analysis are powerful methods that have the potential to make functional predictions about microbial communities. Current use of genome-scale metabolic networks to characterize the metabolic functions of microbial communities includes species compartmentalization, separating species-level and community-level objectives, dynamic analysis, the “enzyme-soup” approach, multi-scale modeling, and others. There are many challenges inherent to the field, including a need for tools that accurately assign high-level omics signals to individual community members, new automated reconstruction methods that rival manual curation, and novel algorithms for integrating omics data and engineering communities. As technologies and modeling frameworks improve, we expect that there will be proportional advances in the fields of ecology, health science, and microbial community engineering. PMID:26109480

  7. Behavior analytic approaches to problem behavior in intellectual disabilities.

    PubMed

    Hagopian, Louis P; Gregory, Meagan K

    2016-03-01

    The purpose of the current review is to summarize recent behavior analytic research on problem behavior in individuals with intellectual disabilities. We have focused our review on studies published from 2013 to 2015, but also included earlier studies that were relevant. Behavior analytic research on problem behavior continues to focus on the use and refinement of functional behavioral assessment procedures and function-based interventions. During the review period, a number of studies reported on procedures aimed at making functional analysis procedures more time efficient. Behavioral interventions continue to evolve, and there were several larger scale clinical studies reporting on multiple individuals. There was increased attention on the part of behavioral researchers to develop statistical methods for analysis of within subject data and continued efforts to aggregate findings across studies through evaluative reviews and meta-analyses. Findings support continued utility of functional analysis for guiding individualized interventions and for classifying problem behavior. Modifications designed to make functional analysis more efficient relative to the standard method of functional analysis were reported; however, these require further validation. Larger scale studies on behavioral assessment and treatment procedures provided additional empirical support for effectiveness of these approaches and their sustainability outside controlled clinical settings.

  8. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    NASA Astrophysics Data System (ADS)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

    An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity; it is a brief summary of the conventional upscaling approach, with some attention paid to recently emerged approaches. The focus on essential aspects may be an advantage compared with the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of the hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system, and (iv) continuity/granularity of natural and man-related variables of the groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - the scale of pores; meso-scale - the scale of a laboratory sample; macro-scale - the scale of typical blocks in numerical models of groundwater flow; local-scale - the scale of an aquifer/aquitard; and regional-scale - the scale of a series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim of justifying physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. The latest results in downscaling from block-scale to sample-scale are also briefly referred to.
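
    A minimal, deterministic illustration of the sample-to-block upscaling step discussed above is to aggregate sample-scale conductivities into one block value: the arithmetic mean applies for flow parallel to layering, the harmonic mean for flow perpendicular to layering, and the geometric mean is a common heuristic for statistically isotropic media. The values below are illustrative, not taken from the paper.

    ```python
    # Simple block-scale estimates from sample-scale hydraulic conductivities.
    import numpy as np

    k_samples = np.array([1e-5, 5e-5, 2e-6, 8e-5, 1e-4])   # m/s, sample scale

    k_arithmetic = k_samples.mean()
    k_harmonic = len(k_samples) / np.sum(1.0 / k_samples)
    k_geometric = np.exp(np.mean(np.log(k_samples)))

    print(f"arithmetic (parallel flow):      {k_arithmetic:.2e} m/s")
    print(f"harmonic (perpendicular flow):   {k_harmonic:.2e} m/s")
    print(f"geometric (isotropic heuristic): {k_geometric:.2e} m/s")
    ```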

  9. A full-Bayesian approach to parameter inference from tracer travel time moments and investigation of scale effects at the Cape Cod experimental site

    USGS Publications Warehouse

    Woodbury, Allan D.; Rubin, Yoram

    2000-01-01

    A method for inverting the travel time moments of solutes in heterogeneous aquifers is presented and is based on peak concentration arrival times as measured at various samplers in an aquifer. The approach combines a Lagrangian [Rubin and Dagan, 1992] solute transport framework with full‐Bayesian hydrogeological parameter inference. In the full‐Bayesian approach the noise values in the observed data are treated as hyperparameters, and their effects are removed by marginalization. The prior probability density functions (pdfs) for the model parameters (horizontal integral scale, velocity, and log K variance) and noise values are represented by prior pdfs developed from minimum relative entropy considerations. Analysis of the Cape Cod (Massachusetts) field experiment is presented. Inverse results for the hydraulic parameters indicate an expected value for the velocity, variance of log hydraulic conductivity, and horizontal integral scale of 0.42 m/d, 0.26, and 3.0 m, respectively. While these results are consistent with various direct‐field determinations, the importance of the findings is in the reduction of confidence range about the various expected values. On selected control planes we compare observed travel time frequency histograms with the theoretical pdf, conditioned on the observed travel time moments. We observe a positive skew in the travel time pdf which tends to decrease as the travel time distance grows. We also test the hypothesis that there is no scale dependence of the integral scale λ with the scale of the experiment at Cape Cod. We adopt two strategies. The first strategy is to use subsets of the full data set and then to see if the resulting parameter fits are different as we use different data from control planes at expanding distances from the source. The second approach is from the viewpoint of entropy concentration. No increase in integral scale with distance is inferred from either approach over the range of the Cape Cod tracer experiment.

  10. The IAEA coordinated research programme on HTGR uncertainty analysis: Phase I status and Ex. I-1 prismatic reference results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bostelmann, Friederike; Strydom, Gerhard; Reitsma, Frederik

    The quantification of uncertainties in the design and safety analysis of reactors is today not only broadly accepted but has in many cases become the preferred way to replace traditional conservative analysis for safety and licensing purposes. The use of a more fundamental methodology is also consistent with the reliable high-fidelity physics models and robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied, in contrast to the historical approach in which sensitivity analyses were performed and uncertainties then determined by a simplified statistical combination of a few important input parameters. New methodologies are currently under development in the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity. High Temperature Gas-cooled Reactor (HTGR) designs require specific treatment of the double-heterogeneous fuel design and of the large graphite quantities at high temperatures. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the Chinese HTR-PM. Work has started on the first phase and the current CRP status is reported in the paper. A comparison of the Serpent and SCALE/KENO-VI reference Monte Carlo results for Ex. I-1 of the MHTGR-350 design is also included. It was observed that the SCALE/KENO-VI Continuous Energy (CE) k∞ values were 395 pcm (Ex. I-1a) to 803 pcm (Ex. I-1b) higher than the respective Serpent lattice calculations, and that within the set of SCALE results, the KENO-VI 238-group Multi-Group (MG) k∞ values were up to 800 pcm lower than the KENO-VI CE values. The use of the latest ENDF-B-VII.1 cross-section library in Serpent led to ~180 pcm lower k∞ values compared to the older ENDF-B-VII.0 dataset, caused by the modified graphite neutron capture cross section. Furthermore, the fourth beta release of SCALE 6.2 likewise produced lower CE k∞ values when compared to SCALE 6.1, and the improved performance of the new 252-group library available in SCALE 6.2 is especially noteworthy. A SCALE/TSUNAMI uncertainty analysis of the Hot Full Power variant of Ex. I-1a furthermore concluded that the 238U(n,γ) (capture) and 235U cross-section covariance matrices contributed the most to the total k∞ uncertainty of 0.58%.
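
    For readers unfamiliar with the unit, the code-to-code discrepancies quoted above are expressed in pcm (per cent mille, 1 pcm = 1e-5). A minimal sketch of the usual reactivity-difference convention follows; the k-infinity values are invented, and whether the benchmark quotes plain Δk or reactivity differences is an assumption here.

```python
# Hypothetical illustration of quoting k-infinity discrepancies in pcm
# (1 pcm = 1e-5). Values are invented, not benchmark results.

def diff_pcm(k_ref: float, k_test: float) -> float:
    """Reactivity difference between two multiplication factors, in pcm."""
    return 1.0e5 * (k_test - k_ref) / (k_test * k_ref)

k_serpent = 1.06000     # hypothetical Serpent lattice k-infinity
k_keno_ce = 1.06445     # hypothetical SCALE/KENO-VI CE k-infinity
print(f"KENO-VI CE vs Serpent: {diff_pcm(k_serpent, k_keno_ce):+.0f} pcm")
```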

  11. The IAEA coordinated research programme on HTGR uncertainty analysis: Phase I status and Ex. I-1 prismatic reference results

    DOE PAGES

    Bostelmann, Friederike; Strydom, Gerhard; Reitsma, Frederik; ...

    2016-01-11

    The quantification of uncertainties in the design and safety analysis of reactors is today not only broadly accepted but has in many cases become the preferred way to replace traditional conservative analysis for safety and licensing purposes. The use of a more fundamental methodology is also consistent with the reliable high-fidelity physics models and robust, efficient, and accurate codes available today. To facilitate uncertainty analysis applications, a comprehensive approach and methodology must be developed and applied, in contrast to the historical approach in which sensitivity analyses were performed and uncertainties then determined by a simplified statistical combination of a few important input parameters. New methodologies are currently under development in the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity. High Temperature Gas-cooled Reactor (HTGR) designs require specific treatment of the double-heterogeneous fuel design and of the large graphite quantities at high temperatures. The IAEA therefore launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling (UAM) in 2013 to study uncertainty propagation specifically in the HTGR analysis chain. Two benchmark problems are defined, with the prismatic design represented by the General Atomics (GA) MHTGR-350 and a 250 MW modular pebble bed design similar to the Chinese HTR-PM. Work has started on the first phase and the current CRP status is reported in the paper. A comparison of the Serpent and SCALE/KENO-VI reference Monte Carlo results for Ex. I-1 of the MHTGR-350 design is also included. It was observed that the SCALE/KENO-VI Continuous Energy (CE) k∞ values were 395 pcm (Ex. I-1a) to 803 pcm (Ex. I-1b) higher than the respective Serpent lattice calculations, and that within the set of SCALE results, the KENO-VI 238-group Multi-Group (MG) k∞ values were up to 800 pcm lower than the KENO-VI CE values. The use of the latest ENDF-B-VII.1 cross-section library in Serpent led to ~180 pcm lower k∞ values compared to the older ENDF-B-VII.0 dataset, caused by the modified graphite neutron capture cross section. Furthermore, the fourth beta release of SCALE 6.2 likewise produced lower CE k∞ values when compared to SCALE 6.1, and the improved performance of the new 252-group library available in SCALE 6.2 is especially noteworthy. A SCALE/TSUNAMI uncertainty analysis of the Hot Full Power variant of Ex. I-1a furthermore concluded that the 238U(n,γ) (capture) and 235U cross-section covariance matrices contributed the most to the total k∞ uncertainty of 0.58%.

  12. Rasch analysis of the carers quality of life questionnaire for parkinsonism.

    PubMed

    Pillas, Marios; Selai, Caroline; Schrag, Anette

    2017-03-01

    To assess the psychometric properties of the Carers Quality of Life Questionnaire for Parkinsonism using a Rasch modeling approach and to determine the optimal cut-off score. We performed a Rasch analysis of the survey answers of 430 carers of patients with atypical parkinsonism. All of the scale items demonstrated acceptable goodness of fit to the Rasch model. The scale was unidimensional and no notable differential item functioning was detected in the items regarding age and disease type. Rating categories were functioning adequately in all scale items. The scale had high reliability (.95) and construct validity and a high degree of precision, distinguishing between 5 distinct groups of carers with different levels of quality of life. A cut-off score of 62 was found to have the optimal screening accuracy based on Hospital Anxiety and Depression Scale subscores. The results suggest that the Carers Quality of Life Questionnaire for Parkinsonism is a useful scale to assess carers' quality of life and allows analyses requiring interval scaling of variables. © 2016 International Parkinson and Movement Disorder Society.

  13. Intermediate-scale plasma irregularities in the polar ionosphere inferred from GPS radio occultation

    NASA Astrophysics Data System (ADS)

    Shume, E. B.; Komjathy, A.; Langley, R. B.; Verkhoglyadova, O.; Butala, M. D.; Mannucci, A. J.

    2015-02-01

    We report intermediate-scale plasma irregularities in the polar ionosphere inferred from high-resolution radio occultation (RO) measurements using GPS (Global Positioning System) to CASSIOPE (CAScade Smallsat and IOnospheric Polar Explorer) satellite radio links. The high inclination of CASSIOPE and the high rate of signal reception by the GPS Attitude, Positioning, and Profiling RO receiver on CASSIOPE enable a high-resolution investigation of the dynamics of the polar ionosphere with unprecedented detail. Intermediate-scale, scintillation-producing irregularities, which correspond to 1 to 40 km scales, were inferred by applying multiscale spectral analysis on the RO phase measurements. Using our multiscale spectral analysis approach and satellite data (Polar Operational Environmental Satellites and Defense Meteorological Satellite Program), we discovered that the irregularity scales and phase scintillations have distinct features in the auroral oval and polar cap. We found that large length scales and more intense phase scintillations are prevalent in the auroral oval compared to the polar cap implying that the irregularity scales and phase scintillation characteristics are a function of the solar wind and magnetospheric forcings.
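
    A rough sketch of the kind of multiscale spectral analysis described above is given below: an along-track phase series is Fourier analysed and the power in the 1-40 km intermediate-scale band is isolated. The sampling interval and the synthetic phase series are assumptions for illustration only.

```python
import numpy as np

# Rough sketch (with invented sampling and data) of isolating the 1-40 km
# intermediate-scale band in an along-track phase series.

dx = 0.25                                    # along-track sampling interval (km), assumed
phase = np.cumsum(np.random.default_rng(1).normal(size=2048))   # toy random-walk phase series

f = np.fft.rfftfreq(phase.size, d=dx)        # spatial frequency (cycles per km)
power = np.abs(np.fft.rfft(phase - phase.mean())) ** 2

band = (f >= 1.0 / 40.0) & (f <= 1.0 / 1.0)  # wavelengths between 1 km and 40 km
print("fraction of phase power at 1-40 km scales:",
      power[band].sum() / power[1:].sum())
```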

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolker, Eugene

    Our project focused primarily on the analysis of different types of data produced by global high-throughput technologies, on the integration of gene annotation with gene and protein expression information, and on obtaining better functional annotation of Shewanella genes. Specifically, four of our numerous major activities and achievements were the development of: statistical models for identification and expression proteomics, superior to currently available approaches (including our own earlier ones); approaches to improve gene annotations on the whole-organism scale; standards for annotation, transcriptomics and proteomics approaches; and generalized approaches for the integration of gene annotation with gene and protein expression information.

  15. Systems Proteomics for Translational Network Medicine

    PubMed Central

    Arrell, D. Kent; Terzic, Andre

    2012-01-01

    Universal principles underlying network science, and their ever-increasing applications in biomedicine, underscore the unprecedented capacity of systems biology based strategies to synthesize and resolve massive high throughput generated datasets. Enabling previously unattainable comprehension of biological complexity, systems approaches have accelerated progress in elucidating disease prediction, progression, and outcome. Applied to the spectrum of states spanning health and disease, network proteomics establishes a collation, integration, and prioritization algorithm to guide mapping and decoding of proteome landscapes from large-scale raw data. Providing unparalleled deconvolution of protein lists into global interactomes, integrative systems proteomics enables objective, multi-modal interpretation at molecular, pathway, and network scales, merging individual molecular components, their plurality of interactions, and functional contributions for systems comprehension. As such, network systems approaches are increasingly exploited for objective interpretation of cardiovascular proteomics studies. Here, we highlight network systems proteomic analysis pipelines for integration and biological interpretation through protein cartography, ontological categorization, pathway and functional enrichment and complex network analysis. PMID:22896016

  16. A review of genome-wide approaches to study the genetic basis for spermatogenic defects.

    PubMed

    Aston, Kenneth I; Conrad, Donald F

    2013-01-01

    Rapidly advancing tools for genetic analysis on a genome-wide scale have been instrumental in identifying the genetic bases for many complex diseases. About half of male infertility cases are of unknown etiology in spite of tremendous efforts to characterize the genetic basis for the disorder. Advancing our understanding of the genetic basis for male infertility will require the application of established and emerging genomic tools. This chapter introduces many of the tools available for genetic studies on a genome-wide scale along with principles of study design and data analysis.

  17. Unveiling relationships between crime and property in England and Wales via density scale-adjusted metrics and network tools.

    PubMed

    Ribeiro, Haroldo V; Hanley, Quentin S; Lewis, Dan

    2018-01-01

    Scale-adjusted metrics (SAMs) are a significant achievement of the urban scaling hypothesis. SAMs remove the inherent biases of per capita measures computed in the absence of isometric allometries. However, this approach is limited to urban areas, while a large portion of the world's population still lives outside cities and rural areas dominate land use worldwide. Here, we extend the concept of SAMs to population density scale-adjusted metrics (DSAMs) to reveal relationships among different types of crime and property metrics. Our approach allows all human environments to be considered, avoids problems in the definition of urban areas, and accounts for the heterogeneity of population distributions within urban regions. By combining DSAMs, cross-correlation, and complex network analysis, we find that crime and property types have intricate and hierarchically organized relationships leading to some striking conclusions. Drugs and burglary had uncorrelated DSAMs and, to the extent property transaction values are indicators of affluence, twelve out of fourteen crime metrics showed no evidence of specifically targeting affluence. Burglary and robbery were the most connected in our network analysis and the modular structures suggest an alternative to "zero-tolerance" policies by unveiling the crime and/or property types most likely to affect each other.
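
    A minimal sketch of the density scale-adjusted metric construction, as we read it from the description above: fit the allometric (log-log) relation between a metric and population density, and keep the residuals as the adjusted values. The synthetic data and the fitted exponent below are illustrative assumptions, not results from the paper.

```python
import numpy as np

# Hedged sketch of a density scale-adjusted metric (DSAM): residuals of a
# log-log fit of a metric against population density. Synthetic data only.

rng = np.random.default_rng(2)
density = 10 ** rng.uniform(1, 4, 500)              # persons per km^2 (synthetic)
crime = 0.02 * density ** 1.1 * np.exp(rng.normal(0, 0.3, 500))

logd, logc = np.log10(density), np.log10(crime)
beta, log_alpha = np.polyfit(logd, logc, 1)          # allometric exponent and intercept
dsam = logc - (log_alpha + beta * logd)              # scale-adjusted residuals

print(f"fitted exponent beta = {beta:.2f}; first five DSAMs: {np.round(dsam[:5], 3)}")
```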

  18. Unveiling relationships between crime and property in England and Wales via density scale-adjusted metrics and network tools

    PubMed Central

    Hanley, Quentin S.; Lewis, Dan

    2018-01-01

    Scale-adjusted metrics (SAMs) are a significant achievement of the urban scaling hypothesis. SAMs remove the inherent biases of per capita measures computed in the absence of isometric allometries. However, this approach is limited to urban areas, while a large portion of the world’s population still lives outside cities and rural areas dominate land use worldwide. Here, we extend the concept of SAMs to population density scale-adjusted metrics (DSAMs) to reveal relationships among different types of crime and property metrics. Our approach allows all human environments to be considered, avoids problems in the definition of urban areas, and accounts for the heterogeneity of population distributions within urban regions. By combining DSAMs, cross-correlation, and complex network analysis, we find that crime and property types have intricate and hierarchically organized relationships leading to some striking conclusions. Drugs and burglary had uncorrelated DSAMs and, to the extent property transaction values are indicators of affluence, twelve out of fourteen crime metrics showed no evidence of specifically targeting affluence. Burglary and robbery were the most connected in our network analysis and the modular structures suggest an alternative to “zero-tolerance” policies by unveiling the crime and/or property types most likely to affect each other. PMID:29470499

  19. Controls on mineralisation in the Sierra Foothills gold province, central California, USA: A GIS-based reconnaissance prospectivity analysis

    USGS Publications Warehouse

    Bierlein, F.P.; Northover, H.J.; Groves, D.I.; Goldfarb, R.J.; Marsh, E.E.

    2008-01-01

    The assessment of spatial relationships between the location, abundance and size of orogenic-gold deposits in the highly endowed Sierra Foothills gold province in California, via the combination of field studies and a GIS-based analysis, illustrates the power of such an approach to the characterisation of important parameters of mineral systems, and the prediction of districts likely to host economic mineralisation. Regional- to deposit-scale reconnaissance mapping suggests that deposition of gold-bearing quartz veins occurred in second- and third-order, east-over-west thrusts during regional east-west compression and right-lateral transpression. At the district scale, significant zones of mineralisation correspond with such transpressional reactivation zones and dilational jogs that developed during the Late Jurassic-Early Cretaceous along the misaligned segments of first-order faults throughout the Sierra Nevada Foothills Metamorphic Belt. Field-based observations and interpretation of GIS data (including solid geology, structural elements, deposit locations, magnetics, gravity) also highlight the importance of structural permeability contrasts, rheological gradients, and variations in fault orientation for localising mineralisation. Although this approach confirms empirical findings and produces promising results at the province scale, enhanced geological, structural, geophysical and geochronological data density is required to generate regionally consistent, high-quality input layers that improve predictive targeting at the goldfield to deposit scale.

  20. Stationary Wavelet-based Two-directional Two-dimensional Principal Component Analysis for EMG Signal Classification

    NASA Astrophysics Data System (ADS)

    Ji, Yi; Sun, Shanlin; Xie, Hong-Bo

    2017-06-01

    Discrete wavelet transform (WT) followed by principal component analysis (PCA) has been a powerful approach for the analysis of biomedical signals. Wavelet coefficients at various scales and channels were usually transformed into a one-dimensional array, causing issues such as the curse of dimensionality dilemma and small sample size problem. In addition, lack of time-shift invariance of WT coefficients can be modeled as noise and degrades the classifier performance. In this study, we present a stationary wavelet-based two-directional two-dimensional principal component analysis (SW2D2PCA) method for the efficient and effective extraction of essential feature information from signals. Time-invariant multi-scale matrices are constructed in the first step. The two-directional two-dimensional principal component analysis then operates on the multi-scale matrices to reduce the dimension, rather than vectors in conventional PCA. Results are presented from an experiment to classify eight hand motions using 4-channel electromyographic (EMG) signals recorded in healthy subjects and amputees, which illustrates the efficiency and effectiveness of the proposed method for biomedical signal analysis.
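
    The sketch below outlines, under stated assumptions, the two ingredients named above: a stationary (undecimated, hence shift-invariant) wavelet decomposition stacked into a multi-scale matrix, followed by two-dimensional PCA applied to the matrices rather than to flattened vectors. The wavelet, level, segment length and single-channel toy data are assumptions; the study uses 4-channel EMG and a two-directional variant of 2D PCA.

```python
import numpy as np
import pywt   # PyWavelets, assumed available

# Hedged sketch of the SW2D2PCA idea: build a multi-scale matrix from the
# stationary wavelet transform of a segment, then reduce it with 2D PCA.

def multiscale_matrix(segment, wavelet="db4", level=3):
    """Rows = approximation/detail bands of the stationary wavelet transform."""
    coeffs = pywt.swt(segment, wavelet, level=level)        # undecimated -> shift-invariant
    rows = [cA for cA, _ in coeffs] + [cD for _, cD in coeffs]
    return np.vstack(rows)                                   # shape (2*level, len(segment))

rng = np.random.default_rng(3)
segments = [rng.normal(size=256) for _ in range(40)]         # toy single-channel segments
mats = np.array([multiscale_matrix(s) for s in segments])

mean_mat = mats.mean(axis=0)
G = sum((M - mean_mat).T @ (M - mean_mat) for M in mats) / len(mats)  # image covariance matrix
eigvals, eigvecs = np.linalg.eigh(G)
proj = eigvecs[:, -5:]                                        # top-5 projection axes (columns)
features = np.array([M @ proj for M in mats])                 # reduced feature matrices
print("feature tensor shape:", features.shape)
```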

  1. General consequences of the violated Feynman scaling

    NASA Technical Reports Server (NTRS)

    Kamberov, G.; Popova, L.

    1985-01-01

    The problem of scaling of hadronic production cross sections is an outstanding question in high energy physics, especially for the interpretation of cosmic ray data. A comprehensive analysis of the accelerator data leads to the conclusion that Feynman scaling is broken. It was proposed that the Lorentz-invariant inclusive cross sections for secondaries of a given type approach a constant with respect to a broken-scaling variable x_s. Thus, the differential cross sections measured at accelerator energies can be extrapolated to higher cosmic ray energies. This assumption leads to some important consequences. The distribution of secondary multiplicity that follows from the violated Feynman scaling, obtained using a method similar to that of Koba et al., is discussed.

  2. Points of View Analysis Revisited: Fitting Multidimensional Structures to Optimal Distance Components with Cluster Restrictions on the Variables.

    ERIC Educational Resources Information Center

    Meulman, Jacqueline J.; Verboon, Peter

    1993-01-01

    Points of view analysis, as a way to deal with individual differences in multidimensional scaling, was largely supplanted by the weighted Euclidean model. It is argued that the approach deserves new attention, especially as a technique to analyze group differences. A streamlined and integrated process is proposed. (SLD)

  3. A New, More Powerful Approach to Multitrait-Multimethod Analyses: An Application of Second-Order Confirmatory Factor Analysis.

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Hocevar, Dennis

    The advantages of applying confirmatory factor analysis (CFA) to multitrait-multimethod (MTMM) data are widely recognized. However, because CFA as traditionally applied to MTMM data incorporates single indicators of each scale (i.e., each trait/method combination), important weaknesses are the failure to: (1) correct appropriately for measurement…

  4. Towards a Unified Framework in Hydroclimate Extremes Prediction in Changing Climate

    NASA Astrophysics Data System (ADS)

    Moradkhani, H.; Yan, H.; Zarekarizi, M.; Bracken, C.

    2016-12-01

    Spatio-temporal analysis and prediction of hydroclimate extremes are of paramount importance in disaster mitigation and emergency management. The IPCC special report on managing the risks of extreme events and disasters emphasizes that global warming will change the frequency, severity, and spatial pattern of extremes. In addition to climate change, land use and land cover changes also influence extreme characteristics at the regional scale. Therefore, natural variability and anthropogenic changes to the hydroclimate system result in nonstationarity in hydroclimate variables. In this presentation, recent advancements in developing and using Bayesian approaches to account for non-stationarity in hydroclimate extremes are discussed. The implications of these approaches for flood frequency analysis, the treatment of spatial dependence, the impact of large-scale climate variability, and the selection of cause-effect covariates, together with the quantification of model errors in extreme prediction, are also explained. Within this framework, the applicability and usefulness of ensemble data assimilation for extreme flood prediction is also introduced. Finally, a practical and easy-to-use approach for better communication with decision-makers and emergency managers is presented.
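
    As a hedged, generic sketch of one ingredient mentioned above, the code below fits a non-stationary GEV distribution to synthetic annual peaks, letting the location parameter drift linearly in time; a full Bayesian treatment as discussed in the presentation would instead place priors on these parameters and sample their posterior. The data, names and linear-trend form are assumptions, not the presenters' model.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import genextreme

# Hedged sketch: maximum-likelihood fit of a non-stationary GEV whose location
# parameter varies linearly with time, a common way to encode non-stationarity
# in flood frequency analysis. Synthetic data only.

years = np.arange(1960, 2020)
t = (years - years.mean()) / 10.0                       # time in decades, centred
true_loc = 100.0 + 3.0 * t                              # upward trend in location
peaks = genextreme.rvs(c=-0.1, loc=true_loc, scale=15.0, random_state=42)

def neg_log_lik(theta):
    mu0, mu1, log_sigma, c = theta
    return -np.sum(genextreme.logpdf(peaks, c=c, loc=mu0 + mu1 * t,
                                     scale=np.exp(log_sigma)))

fit = minimize(neg_log_lik, x0=[100.0, 0.0, np.log(15.0), -0.1],
               method="Nelder-Mead")
print("estimated location trend per decade:", round(fit.x[1], 2))
```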

  5. Numerical models for fluid-grains interactions: opportunities and limitations

    NASA Astrophysics Data System (ADS)

    Esteghamatian, Amir; Rahmani, Mona; Wachs, Anthony

    2017-06-01

    In the framework of a multi-scale approach, we develop numerical models for suspension flows. At the micro-scale level, we perform particle-resolved numerical simulations using a Distributed Lagrange Multiplier/Fictitious Domain approach. At the meso-scale level, we use a two-way Euler/Lagrange approach with a Gaussian filtering kernel to model fluid-solid momentum transfer. At both the micro- and meso-scale levels, particles are individually tracked in a Lagrangian way and all inter-particle collisions are computed by a Discrete Element/Soft-sphere method. These numerical models have been extended to handle particles of arbitrary shape (non-spherical, angular and even non-convex) as well as to treat heat and mass transfer. All simulation tools are fully MPI-parallel with standard domain decomposition and run on supercomputers with satisfactory scalability on up to a few thousand cores. The main asset of the multi-scale analysis is the ability to extend our comprehension of the dynamics of suspension flows based on the knowledge acquired from the high-fidelity micro-scale simulations and to use that knowledge to improve the meso-scale model. We illustrate how we can benefit from this strategy for a fluidized bed, where we introduce a stochastic drag force model derived from micro-scale simulations to recover the proper level of particle fluctuations. Conversely, we discuss the limitations of these modelling tools, such as their limited ability to capture lubrication forces and boundary layers in highly inertial flows. We suggest ways to overcome these limitations in order to further enhance the capabilities of the numerical models.
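
    To make the Discrete Element/Soft-sphere collision step concrete, the sketch below implements a linear spring-dashpot normal contact force, a common soft-sphere model; the stiffness and damping constants are placeholders, and the actual force law, tangential terms and parameters used by the authors are not specified in the abstract.

```python
import numpy as np

# Minimal sketch of a linear spring-dashpot soft-sphere contact law acting on
# the normal overlap between two particles. Placeholder parameters.

def normal_contact_force(x_i, x_j, r_i, r_j, v_i, v_j, k_n=1.0e4, eta_n=5.0):
    """Repulsive force on particle i from contact with particle j (zero if apart)."""
    d = x_i - x_j
    dist = np.linalg.norm(d)
    overlap = (r_i + r_j) - dist
    if overlap <= 0.0:
        return np.zeros(3)                          # no contact
    n = d / dist                                    # unit normal, pointing j -> i
    v_rel_n = np.dot(v_i - v_j, n)                  # normal relative velocity
    return (k_n * overlap - eta_n * v_rel_n) * n    # spring + dashpot

f = normal_contact_force(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 0.9]),
                         0.5, 0.5, np.zeros(3), np.zeros(3))
print("contact force on particle i:", f)
```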

  6. [Rating scales based on the phenomenological and structural approach].

    PubMed

    Schiltz, L

    2006-01-01

    A current tendency in clinical psychology research is to use an integrated quantitative and qualitative methodology. This approach is especially suited to the study of therapeutic intervention, where the researcher is himself part of the situation he is investigating. As to the tools of research, the combination of the semi-structured clinical interview, psychometric scales and projective tests has proved pertinent for describing the multidimensional and fluctuating reality of the therapeutic relationship and the changes it induces in the two partners. In arts-therapy research, the investigation of the artistic production or of the free expression of people may complement the psychometric and projective tools. The concept of an "expressive test" is currently used to characterise this method. In this context, the development of rating scales based on the phenomenological and structural, or holistic, approach allows us to link qualitative analysis and quantification, leading to the use of inferential statistics, provided that we remain at the nominal or ordinal level of measurement. We explain the principles of construction of these rating scales and illustrate our practice with some examples drawn from studies we carried out in clinical psychology.

  7. Turbulence in simulated H II regions

    NASA Astrophysics Data System (ADS)

    Medina, S.-N. X.; Arthur, S. J.; Henney, W. J.; Mellema, G.; Gazol, A.

    2014-12-01

    We investigate the scale dependence of fluctuations inside a realistic model of an evolving turbulent H II region and to what extent these may be studied observationally. We find that the multiple scales of energy injection from champagne flows and the photoionization of clumps and filaments leads to a flatter spectrum of fluctuations than would be expected from top-down turbulence driven at the largest scales. The traditional structure function approach to the observational study of velocity fluctuations is shown to be incapable of reliably determining the velocity power spectrum of our simulation. We find that a more promising approach is the Velocity Channel Analysis technique of Lazarian & Pogosyan (2000), which, despite being intrinsically limited by thermal broadening, can successfully recover the logarithmic slope of the velocity power spectrum to a precision of ±0.1 from high-resolution optical emission-line spectroscopy.

  8. Aero-acoustics of Drag Generating Swirling Exhaust Flows

    NASA Technical Reports Server (NTRS)

    Shah, P. N.; Mobed, D.; Spakovszky, Z. S.; Brooks, T. F.; Humphreys, W. M. Jr.

    2007-01-01

    Aircraft on approach in high-drag and high-lift configuration create unsteady flow structures which inherently generate noise. For devices such as flaps, spoilers and the undercarriage there is a strong correlation between overall noise and drag such that, in the quest for quieter aircraft, one challenge is to generate drag at low noise levels. This paper presents a rigorous aero-acoustic assessment of a novel drag concept. The idea is that a swirling exhaust flow can yield a steady, and thus relatively quiet, streamwise vortex which is supported by a radial pressure gradient responsible for pressure drag. Flows with swirl are naturally limited by instabilities such as vortex breakdown. The paper presents a first aero-acoustic assessment of ram-pressure-driven swirling exhaust flows and their associated instabilities. The technical approach combines an in-depth aerodynamic analysis, plausibility arguments to qualitatively describe the nature of acoustic sources, and detailed, quantitative acoustic measurements using a medium-aperture directional microphone array in combination with a previously established Deconvolution Approach for Mapping of Acoustic Sources (DAMAS). A model-scale engine nacelle with stationary swirl vanes was designed and tested in the NASA Langley Quiet Flow Facility at a full-scale approach Mach number of 0.17. The analysis shows that the acoustic signature comprises quadrupole-type turbulent mixing noise of the swirling core flow and scattering noise from vane boundary layers and turbulent eddies of the burst vortex structure near sharp edges. The exposed edges are the nacelle and pylon trailing edge and the centerbody supporting the vanes. For the highest stable swirl angle setting a nacelle-area-based drag coefficient of 0.8 was achieved with a full-scale Overall Sound Pressure Level (OASPL) of about 40 dBA at the ICAO approach certification point.

  9. Decoding the Principles of Emergence and Resiliency in Biological Collective Systems - A Multi-Scale Approach: Final Report

    DTIC Science & Technology

    2018-02-15

    ...models and approaches are also valid using other invasive and non-invasive technologies. Finally, we illustrate and experimentally evaluate this... 2017 Project Outline: pattern formation diversity in wild microbial societies; experimental and mathematical analysis methodology; skeleton... chemotaxis, nutrient degradation, and the exchange of amino acids between cells. Using both quantitative experimental methods and several theoretical...

  10. Renormalization scheme dependence of high-order perturbative QCD predictions

    NASA Astrophysics Data System (ADS)

    Ma, Yang; Wu, Xing-Gang

    2018-02-01

    Conventionally, one adopts the typical momentum flow of a physical observable as the renormalization scale for its perturbative QCD (pQCD) approximant. This simple treatment leads to renormalization scheme-and-scale ambiguities, because the renormalization scheme and scale dependence of the strong coupling and of the perturbative coefficients does not exactly cancel at any fixed order. It is believed that those ambiguities will be softened by including more higher-order terms. In this paper, to show how the renormalization scheme dependence changes when more loop terms are included, we discuss the sensitivity of the pQCD prediction to the scheme parameters by using the scheme-dependent {β_{m≥2}}-terms. We adopt two four-loop examples, e+e-→hadrons and τ decays into hadrons, for detailed analysis. Our results show that, under the conventional scale setting, including more and more loop terms does not reduce the scheme dependence of the pQCD prediction as efficiently as the scale dependence. Thus a proper scale-setting approach should be important for reducing the scheme dependence. We observe that the principle of minimum sensitivity could be such a scale-setting approach, providing a practical way to achieve an optimal scheme and scale by requiring the pQCD approximant to be independent of the "unphysical" theoretical conventions.
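
    For concreteness, the principle of minimum sensitivity invoked above can be stated schematically as follows (our paraphrase, not the paper's notation): choose the scale and scheme parameters at the point where the truncated approximant is locally stationary with respect to them.

```latex
% Schematic statement (not taken verbatim from the paper): an n-th order pQCD
% approximant rho_n depends on the unphysical scale mu and scheme parameters
% {beta_m}; the principle of minimum sensitivity fixes them at a stationary point.
\rho_n(Q;\mu,\{\beta_m\}) = \sum_{k=1}^{n} c_k\!\left(Q/\mu,\{\beta_m\}\right) a_s^{k}(\mu),
\qquad
\left.\frac{\partial \rho_n}{\partial \mu}\right|_{\mu^\ast}=0,
\qquad
\left.\frac{\partial \rho_n}{\partial \beta_m}\right|_{\beta_m^\ast}=0 .
```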

  11. Variogram Analysis of Response surfaces (VARS): A New Framework for Global Sensitivity Analysis of Earth and Environmental Systems Models

    NASA Astrophysics Data System (ADS)

    Razavi, S.; Gupta, H. V.

    2015-12-01

    Earth and environmental systems models (EESMs) are continually growing in complexity and dimensionality with continuous advances in understanding and computing power. Complexity and dimensionality are manifested by introducing many different factors in EESMs (i.e., model parameters, forcings, boundary conditions, etc.) to be identified. Sensitivity Analysis (SA) provides an essential means for characterizing the role and importance of such factors in producing the model responses. However, conventional approaches to SA suffer from (1) an ambiguous characterization of sensitivity, and (2) poor computational efficiency, particularly as the problem dimension grows. Here, we present a new and general sensitivity analysis framework (called VARS), based on an analogy to 'variogram analysis', that provides an intuitive and comprehensive characterization of sensitivity across the full spectrum of scales in the factor space. We prove, theoretically, that Morris (derivative-based) and Sobol (variance-based) methods and their extensions are limiting cases of VARS, and that their SA indices can be computed as by-products of the VARS framework. We also present a practical strategy for the application of VARS to real-world problems, called STAR-VARS, including a new sampling strategy, called "star-based sampling". Our results across several case studies show the STAR-VARS approach to provide reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being at least 1-2 orders of magnitude more efficient than the benchmark Morris and Sobol approaches.
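
    As a rough illustration of the variogram-of-response-surface idea named above, the sketch below estimates a directional variogram of a toy model response along one factor; the actual STAR-VARS framework uses star-based sampling across all factors and integrates the variogram into sensitivity indices, none of which is reproduced here.

```python
import numpy as np

# Hedged sketch: directional variogram gamma(h) = 0.5 * E[(y(x+h) - y(x))^2]
# of a toy response surface along one factor, other factors held fixed.

def model(x1, x2):
    return np.sin(3 * x1) + 0.1 * x2 ** 2            # invented response surface

x1 = np.linspace(0.0, 1.0, 201)                      # transect along factor x1
y = model(x1, x2=0.5)                                # second factor held fixed

def variogram(y, dx, lags):
    gamma = np.array([0.5 * np.mean((y[k:] - y[:-k]) ** 2) for k in lags])
    return gamma, lags * dx

gamma, h = variogram(y, dx=x1[1] - x1[0], lags=np.arange(1, 50))
print("gamma at smallest and largest lag:", gamma[0], gamma[-1])
```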

  12. Conceptual design and analysis of a dynamic scale model of the Space Station Freedom

    NASA Technical Reports Server (NTRS)

    Davis, D. A.; Gronet, M. J.; Tan, M. K.; Thorne, J.

    1994-01-01

    This report documents the conceptual design study performed to evaluate design options for a subscale dynamic test model which could be used to investigate the expected on-orbit structural dynamic characteristics of the Space Station Freedom early build configurations. The baseline option was a 'near-replica' model of the SSF SC-7 pre-integrated truss configuration. The approach used to develop conceptual design options involved three sets of studies: evaluation of the full-scale design and analysis databases, conducting scale factor trade studies, and performing design sensitivity studies. The scale factor trade study was conducted to develop a fundamental understanding of the key scaling parameters that drive design, performance and cost of a SSF dynamic scale model. Four scale model options were estimated: 1/4, 1/5, 1/7, and 1/10 scale. Prototype hardware was fabricated to assess producibility issues. Based on the results of the study, a 1/4-scale size is recommended based on the increased model fidelity associated with a larger scale factor. A design sensitivity study was performed to identify critical hardware component properties that drive dynamic performance. A total of 118 component properties were identified which require high-fidelity replication. Lower fidelity dynamic similarity scaling can be used for non-critical components.

  13. The Interaction with Disabled Persons scale: revisiting its internal consistency and factor structure, and examining item-level properties.

    PubMed

    Iacono, Teresa; Tracy, Jane; Keating, Jenny; Brown, Ted

    2009-01-01

    The Interaction with Disabled Persons scale (IDP) has been used in research into baseline attitudes and to evaluate whether a shift in attitudes towards people with developmental disabilities has occurred following some form of intervention. This research has been conducted on the assumption that the IDP measures attitudes as a multidimensional construct and has good internal consistency. Such assumptions about the IDP appear flawed, particularly in light of failures to replicate its underlying factor structure. The aim of this study was to evaluate the construct validity and dimensionality of the IDP. This study used a prospective survey approach. Participants were recruited from first and second year undergraduate university students enrolled in health sciences, occupational therapy, physiotherapy, community and emergency health, nursing, and combined degrees of nursing and midwifery, and health sciences and social work at a large Australian university (n=373). Students completed the IDP, a 20-item self-report scale of attitudes towards people with disabilities. The IDP data were analysed using a combination of factor analysis (Classical Test Theory approach) and Rasch analysis (Item Response Theory approach). The results indicated that the original IDP 6-factor solution was not supported. Instead, one factor consisting of five IDP items (9, 11, 12, 17, and 18), labelled Discomfort, met the four criteria for empirical validation of test quality: interval-level scaling (scalability), unidimensionality, lack of differential item functioning (DIF) across the two participant groups and data collection occasions, and hierarchical ordering. Researchers should consider using the Discomfort subscale of the IDP in future attitude research since it exhibits sound measurement properties.

  14. Safe Maneuvering Envelope Estimation Based on a Physical Approach

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas J. J.; Schuet, Stefan R.; Wheeler, Kevin R.; Acosta, Diana; Kaneshige, John T.

    2013-01-01

    This paper discusses a computationally efficient algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time scale separation and taking into account uncertainties in the aerodynamic derivatives. This approach differs from others since it is physically inspired. This more transparent approach allows interpreting data in each step, and it is assumed that these physical models based upon flight dynamics theory will therefore facilitate certification for future real life applications.

  15. XLinkDB 2.0: integrated, large-scale structural analysis of protein crosslinking data

    PubMed Central

    Schweppe, Devin K.; Zheng, Chunxiang; Chavez, Juan D.; Navare, Arti T.; Wu, Xia; Eng, Jimmy K.; Bruce, James E.

    2016-01-01

    Motivation: Large-scale chemical cross-linking with mass spectrometry (XL-MS) analyses are quickly becoming a powerful means for high-throughput determination of protein structural information and protein–protein interactions. Recent studies have garnered thousands of cross-linked interactions, yet the field lacks an effective tool to compile experimental data or access the network and structural knowledge for these large scale analyses. We present XLinkDB 2.0 which integrates tools for network analysis, Protein Databank queries, modeling of predicted protein structures and modeling of docked protein structures. The novel, integrated approach of XLinkDB 2.0 enables the holistic analysis of XL-MS protein interaction data without limitation to the cross-linker or analytical system used for the analysis. Availability and Implementation: XLinkDB 2.0 can be found here, including documentation and help: http://xlinkdb.gs.washington.edu/. Contact: jimbruce@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27153666

  16. Development and Validation of a Fatigue Assessment Scale for U.S. Construction Workers

    PubMed Central

    Zhang, Mingzong; Sparer, Emily H.; Murphy, Lauren A.; Dennerlein, Jack T.; Fang, Dongping; Katz, Jeffrey N.; Caban-Martinez, Alberto J.

    2015-01-01

    Objective To develop a fatigue assessment scale and test its reliability and validity for commercial construction workers. Methods Using a two-phased approach, we first identified items for the development of a Fatigue Assessment Scale for Construction Workers (FASCW) through review of existing scales in the scientific literature, key informant interviews (n=11) and focus groups (3 groups with 6 workers each) with construction workers. The second phase included assessment of the reliability, validity and sensitivity of the new scale using a repeated-measures study design with a convenience sample of construction workers (n=144). Results Phase one resulted in a 16-item preliminary scale that, after factor analysis, yielded a final 10-item scale with two sub-scales (“Lethargy” and “Bodily Ailment”). During phase two, the FASCW and its subscales demonstrated satisfactory internal consistency (alpha coefficients were FASCW (0.91), Lethargy (0.86) and Bodily Ailment (0.84)) and acceptable test-retest reliability (Pearson correlation coefficients: 0.59–0.68; intraclass correlation coefficients: 0.74–0.80). Correlation analysis substantiated concurrent and convergent validity. A discriminant analysis demonstrated that the FASCW differentiated between groups with different arthritis status and different work hours. Conclusions The 10-item FASCW with good reliability and validity is an effective tool for assessing the severity of fatigue among construction workers.
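
    For reference, the internal-consistency statistic reported above (Cronbach's alpha) can be computed from a respondents-by-items score matrix as sketched below; the simulated data and the 10-item, 144-respondent shapes merely echo the study design and are not its data.

```python
import numpy as np

# Minimal sketch of Cronbach's alpha for a respondents x items score matrix.
# Data are simulated; the alpha values 0.84-0.91 quoted above come from the study.

def cronbach_alpha(X):
    """X: respondents x items matrix of scores."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()        # sum of item variances
    total_var = X.sum(axis=1).var(ddof=1)         # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(5)
latent = rng.normal(size=(144, 1))                        # one underlying trait
items = latent + rng.normal(scale=0.7, size=(144, 10))    # 10 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```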

  17. Development of Islamic Spiritual Health Scale (ISHS).

    PubMed

    Khorashadizadeh, Fatemeh; Heydari, Abbas; Nabavi, Fatemeh Heshmati; Mazlom, Seyed Reza; Ebrahimi, Mahdi; Esmaili, Habibollah

    2017-03-01

    To develop and psychometrically assess a spiritual health scale based on the Islamic view in Iran. The cross-sectional study was conducted at Imam Ali and Quem hospitals in Mashhad and Imam Ali and Imam Reza hospitals in Bojnurd, Iran, from 2015 to 2016. In the first stage, an 81-item Likert-type scale was developed using a qualitative approach. The second stage comprised the quantitative component. The scale's impact factor, content validity ratio, content validity index, face validity and exploratory factor analysis were calculated. Test-retest and internal consistency were used to examine the reliability of the instrument. Data analysis was done using SPSS 11. Of the 81 items in the scale, those with an impact factor above 1.5, a content validity ratio above 0.62, and a content validity index above 0.79 were considered valid and the rest were discarded, resulting in a 61-item scale. Exploratory factor analysis reduced the list of items to 30, which were divided into seven groups with a minimum eigenvalue of 1 for each factor. According to the scatter plot, the attributes of the concept of spiritual health included love of the creator, duty-based life, religious rationality, psychological balance, and attention to the afterlife. Internal reliability of the scale, calculated with Cronbach's alpha coefficient, was 0.91. There was solid evidence of the strength of the factor structure and the reliability of the Islamic Spiritual Health Scale, which provides a unique way of assessing the spiritual health of Muslims.
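
    The retention thresholds quoted above (content validity ratio > 0.62, content validity index > 0.79) follow standard definitions; a minimal sketch of both statistics is given below, with invented panel counts.

```python
# Hedged sketch of the two content-validity statistics used above (Lawshe's CVR
# and the item-level CVI); the panel sizes and ratings are invented.

def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """Lawshe CVR, ranging from -1 to +1; the study retained items with CVR > 0.62."""
    return (n_essential - n_panelists / 2) / (n_panelists / 2)

def content_validity_index(n_relevant: int, n_panelists: int) -> float:
    """Item-level CVI: proportion of panelists rating the item relevant (cut-off 0.79)."""
    return n_relevant / n_panelists

print(content_validity_ratio(9, 10), content_validity_index(9, 10))
```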

  18. Scaling-Relation-Based Analysis of Bifunctional Catalysis: The Case for Homogeneous Bimetallic Alloys

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andersen, Mie; Medford, Andrew J.; Norskov, Jens K.

    Here, we present a generic analysis of the implications of energetic scaling relations on the possibilities for bifunctional gains at homogeneous bimetallic alloy catalysts. Such catalysts exhibit a large number of interface sites, where second-order reaction steps can involve intermediates adsorbed at different active sites. Using different types of model reaction schemes, we show that such site-coupling reaction steps can provide bifunctional gains that allow for a bimetallic catalyst composed of two individually poor catalyst materials to approach the activity of the optimal monomaterial catalyst. However, bifunctional gains cannot result in activities higher than the activity peak of the monomaterial volcano curve as long as both sites obey similar scaling relations, as is generally the case for bimetallic catalysts. These scaling-relation-imposed limitations could be overcome by combining different classes of materials such as metals and oxides.

  19. Scaling-Relation-Based Analysis of Bifunctional Catalysis: The Case for Homogeneous Bimetallic Alloys

    DOE PAGES

    Andersen, Mie; Medford, Andrew J.; Norskov, Jens K.; ...

    2017-04-14

    Here, we present a generic analysis of the implications of energetic scaling relations on the possibilities for bifunctional gains at homogeneous bimetallic alloy catalysts. Such catalysts exhibit a large number of interface sites, where second-order reaction steps can involve intermediates adsorbed at different active sites. Using different types of model reaction schemes, we show that such site-coupling reaction steps can provide bifunctional gains that allow for a bimetallic catalyst composed of two individually poor catalyst materials to approach the activity of the optimal monomaterial catalyst. However, bifunctional gains cannot result in activities higher than the activity peak of the monomaterial volcano curve as long as both sites obey similar scaling relations, as is generally the case for bimetallic catalysts. These scaling-relation-imposed limitations could be overcome by combining different classes of materials such as metals and oxides.

  20. ClinicAl Evaluation of Dental Restorative Materials

    DTIC Science & Technology

    1989-01-01

    use of an Actuarial Life Table Survival Analysis procedure. The median survival time for anterior composites was 13.5 years, as compared to 12.1 years... dental materials. For the first time in clinical biomaterials research, we used a statistical approach of Survival Analysis which utilized the... analysis has been established to assure uniformity in usage. This scale is now in use by clinical investigators throughout the country. Its use at the

  1. Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS)

    PubMed Central

    Shea, Tracey L; Tennant, Alan; Pallant, Julie F

    2009-01-01

    Background There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. Methods The DASS-21 was administered to 420 participants with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. Results To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. Conclusion The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study. PMID:19426512
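
    To make the "conformity to a Rasch partial credit model" statement concrete, a schematic form of that model is given below (standard notation, not taken from the paper): the probability that person n with ability theta_n responds in category x of item i depends on the item location delta_i and the category thresholds tau_ik.

```latex
% Schematic Rasch partial credit model (standard form, assumed notation):
% probability of scoring x on item i with m_i categories; the x = 0 numerator is 1.
P(X_{ni}=x) \;=\;
\frac{\exp\!\Big(\sum_{k=1}^{x}\big(\theta_n-\delta_i-\tau_{ik}\big)\Big)}
     {1+\sum_{j=1}^{m_i}\exp\!\Big(\sum_{k=1}^{j}\big(\theta_n-\delta_i-\tau_{ik}\big)\Big)} .
```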

  2. Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS).

    PubMed

    Shea, Tracey L; Tennant, Alan; Pallant, Julie F

    2009-05-09

    There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. The DASS-21 was administered to 420 participants with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study.

  3. Introduction to bioinformatics.

    PubMed

    Can, Tolga

    2014-01-01

    Bioinformatics is an interdisciplinary field mainly involving molecular biology and genetics, computer science, mathematics, and statistics. Data-intensive, large-scale biological problems are addressed from a computational point of view. The most common problems are modeling biological processes at the molecular level and making inferences from collected data. A bioinformatics solution usually involves the following steps: collecting statistics from biological data, building a computational model, solving a computational modeling problem, and testing and evaluating a computational algorithm. This chapter gives a brief introduction to bioinformatics by first providing an introduction to biological terminology and then discussing some classical bioinformatics problems organized by the types of data sources. Sequence analysis is the analysis of DNA and protein sequences for clues regarding function and includes subproblems such as identification of homologs, multiple sequence alignment, searching for sequence patterns, and evolutionary analyses. Protein structures are three-dimensional data and the associated problems are structure prediction (secondary and tertiary), analysis of protein structures for clues regarding function, and structural alignment. Gene expression data are usually represented as matrices, and analysis of microarray data mostly involves statistical analysis, classification, and clustering approaches. Biological networks such as gene regulatory networks, metabolic pathways, and protein-protein interaction networks are usually modeled as graphs, and graph-theoretic approaches are used to solve associated problems such as construction and analysis of large-scale networks.

  4. An approach to multiscale modelling with graph grammars.

    PubMed

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-09-01

    Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.

  5. An approach to multiscale modelling with graph grammars

    PubMed Central

    Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried

    2014-01-01

    Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929

  6. Return Intervals Approach to Financial Fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong; Yamasaki, Kazuko; Havlin, Shlomo; Stanley, H. Eugene

    Financial fluctuations play a key role in studies of financial markets. A new approach focusing on the properties of return intervals can help provide a better understanding of these fluctuations. A return interval is defined as the time between two successive volatilities above a given threshold. We review recent studies and analyze the 1000 most traded stocks in the US stock markets. We find that the distribution of the return intervals is well approximated by a scaling form over a wide range of thresholds. The scaling is also valid for various time windows from one minute up to one trading day. Moreover, these results are universal for stocks of different countries, commodities, interest rates as well as currencies. Further analysis shows some systematic deviations from a scaling law, which are due to nonlinear correlations in the volatility sequence. We also examine the memory in return intervals for different time scales, which is related to the long-term correlations in the volatility. Furthermore, we test two popular models, FIGARCH and fractional Brownian motion (fBm). Both models can capture the memory effect but only fBm shows good scaling in the return interval distribution.
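
    A minimal sketch of the return-interval construction described above: record the times at which volatility exceeds a threshold q and take the gaps between successive exceedances. The heavy-tailed synthetic series and the thresholds below are illustrative assumptions, not the 1000-stock data set.

```python
import numpy as np

# Hedged sketch: return intervals between successive threshold exceedances of a
# synthetic, heavy-tailed volatility series.

rng = np.random.default_rng(6)
volatility = np.abs(rng.standard_t(df=4, size=100_000))   # toy heavy-tailed series

def return_intervals(v, q):
    idx = np.flatnonzero(v > q)          # times of threshold exceedances
    return np.diff(idx)                  # intervals between successive exceedances

for q in (2.0, 3.0, 4.0):
    tau = return_intervals(volatility, q)
    # Scaling is usually checked by collapsing P(tau) after rescaling tau by its mean.
    print(f"q={q}: mean interval = {tau.mean():.1f}, samples = {tau.size}")
```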

  7. Affordable, automatic quantitative fall risk assessment based on clinical balance scales and Kinect data.

    PubMed

    Colagiorgio, P; Romano, F; Sardi, F; Moraschini, M; Sozzi, A; Bejor, M; Ricevuti, G; Buizza, A; Ramat, S

    2014-01-01

    The problem of a correct fall risk assessment is becoming more and more critical with the ageing of the population. In spite of the available approaches allowing a quantitative analysis of the human movement control system's performance, the clinical assessment and diagnostic approach to fall risk assessment still relies mostly on non-quantitative exams, such as clinical scales. This work documents our current effort to develop a novel method to assess balance control abilities through a system implementing an automatic evaluation of exercises drawn from balance assessment scales. Our aim is to overcome the classical limits characterizing these scales i.e. limited granularity and inter-/intra-examiner reliability, to obtain objective scores and more detailed information allowing to predict fall risk. We used Microsoft Kinect to record subjects' movements while performing challenging exercises drawn from clinical balance scales. We then computed a set of parameters quantifying the execution of the exercises and fed them to a supervised classifier to perform a classification based on the clinical score. We obtained a good accuracy (~82%) and especially a high sensitivity (~83%).

  8. Adjacent-Categories Mokken Models for Rater-Mediated Assessments

    PubMed Central

    Wind, Stefanie A.

    2016-01-01

    Molenaar extended Mokken’s original probabilistic-nonparametric scaling models for use with polytomous data. These polytomous extensions of Mokken’s original scaling procedure have facilitated the use of Mokken scale analysis as an approach to exploring fundamental measurement properties across a variety of domains in which polytomous ratings are used, including rater-mediated educational assessments. Because their underlying item step response functions (i.e., category response functions) are defined using cumulative probabilities, polytomous Mokken models can be classified as cumulative models based on the classifications of polytomous item response theory models proposed by several scholars. In order to permit a closer conceptual alignment with educational performance assessments, this study presents an adjacent-categories variation on the polytomous monotone homogeneity and double monotonicity models. Data from a large-scale rater-mediated writing assessment are used to illustrate the adjacent-categories approach, and results are compared with the original formulations. Major findings suggest that the adjacent-categories models provide additional diagnostic information related to individual raters’ use of rating scale categories that is not observed under the original formulation. Implications are discussed in terms of methods for evaluating rating quality. PMID:29795916

  9. Past and present cosmic structure in the SDSS DR7 main sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jasche, J.; Leclercq, F.; Wandelt, B.D., E-mail: jasche@iap.fr, E-mail: florent.leclercq@polytechnique.org, E-mail: wandelt@iap.fr

    2015-01-01

    We present a chrono-cosmography project, aiming at the inference of the four dimensional formation history of the observed large scale structure from its origin to the present epoch. To do so, we perform a full-scale Bayesian analysis of the northern galactic cap of the Sloan Digital Sky Survey (SDSS) Data Release 7 main galaxy sample, relying on a fully probabilistic, physical model of the non-linearly evolved density field. Besides inferring initial conditions from observations, our methodology naturally and accurately reconstructs non-linear features at the present epoch, such as walls and filaments, corresponding to high-order correlation functions generated by late-time structure formation. Our inference framework self-consistently accounts for typical observational systematic and statistical uncertainties such as noise, survey geometry and selection effects. We further account for luminosity dependent galaxy biases and automatic noise calibration within a fully Bayesian approach. As a result, this analysis provides highly-detailed and accurate reconstructions of the present density field on scales larger than ∼ 3 Mpc/h, constrained by SDSS observations. This approach also leads to the first quantitative inference of plausible formation histories of the dynamic large scale structure underlying the observed galaxy distribution. The results described in this work constitute the first full Bayesian non-linear analysis of the cosmic large scale structure with the demonstrated capability of uncertainty quantification. Some of these results will be made publicly available along with this work. The level of detail of inferred results and the high degree of control on observational uncertainties pave the path towards high precision chrono-cosmography, the subject of simultaneously studying the dynamics and the morphology of the inhomogeneous Universe.

  10. Scaling and dimensional analysis of acoustic streaming jets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moudjed, B.; Botton, V.; Henry, D.

    2014-09-15

    This paper focuses on acoustic streaming free jets. This is to say that progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.

  11. Multiscaling for systems with a broad continuum of characteristic lengths and times: Structural transitions in nanocomposites.

    PubMed

    Pankavich, S; Ortoleva, P

    2010-06-01

    The multiscale approach to N-body systems is generalized to address the broad continuum of long time and length scales associated with collective behaviors. A technique is developed based on the concept of an uncountable set of time variables and of order parameters (OPs) specifying major features of the system. We adopt this perspective as a natural extension of the commonly used discrete set of time scales and OPs which is practical when only a few, widely separated scales exist. The existence of a gap in the spectrum of time scales for such a system (under quasiequilibrium conditions) is used to introduce a continuous scaling and perform a multiscale analysis of the Liouville equation. A functional-differential Smoluchowski equation is derived for the stochastic dynamics of the continuum of Fourier component OPs. A continuum of spatially nonlocal Langevin equations for the OPs is also derived. The theory is demonstrated via the analysis of structural transitions in a composite material, as occurs for viral capsids and molecular circuits.

  12. Minimal clinically important difference of the Modified Fatigue Impact Scale in Parkinson's disease.

    PubMed

    Kluger, Benzi M; Garimella, Sanjana; Garvan, Cynthia

    2017-10-01

    Fatigue is a common and debilitating symptom of Parkinson's disease (PD) with no evidence-based treatments. While several fatigue scales have been partially validated in PD, the minimal clinically important difference (MCID), an important psychometric value for designing and interpreting therapeutic trials, is unknown for any of them. We thus sought to determine the MCID for the Modified Fatigue Impact Scale (MFIS). This is a secondary analysis of data from 94 PD participants in an acupuncture trial for PD fatigue. Standard psychometric approaches were used to establish validity, and an anchor-based approach was used to determine the MCID. The MFIS demonstrated good concurrent validity with other outcome measures and high internal consistency. MCID values were found to be 13.8, 6.8 and 6.2 for the MFIS total, MFIS cognitive and MFIS physical subscores, respectively. The MFIS is a valid multidimensional measure of fatigue in PD with a demonstrable MCID. Copyright © 2017 Elsevier Ltd. All rights reserved.
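    The anchor-based step can be illustrated with a short sketch. The values and anchor coding below are hypothetical placeholders, and the computation shown (mean change score in the minimally improved anchor group) is the conventional anchor-based estimate, not necessarily the exact procedure used in the study.

```python
import numpy as np

# Hypothetical data: MFIS total scores before/after treatment and an anchor
# rating of global change (-1 worse, 0 unchanged, 1 minimally improved, 2 much improved).
baseline = np.array([52, 61, 45, 58, 49, 63, 55, 40, 60, 47], dtype=float)
followup = np.array([50, 44, 43, 41, 47, 49, 38, 39, 45, 46], dtype=float)
anchor   = np.array([ 0,  1,  0,  1,  0,  1,  2,  0,  1,  0])

change = baseline - followup                      # positive = less fatigue
mcid = change[anchor == 1].mean()                 # mean change in the "minimally improved" group
print(f"anchor-based MCID estimate: {mcid:.1f} points on the MFIS total")
```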

  13. Earth Observation and Indicators Pertaining to Determinants of Health- An Approach to Support Local Scale Characterization of Environmental Determinants of Vector-Borne Diseases

    NASA Astrophysics Data System (ADS)

    Kotchi, Serge Olivier; Brazeau, Stephanie; Ludwig, Antoinette; Aube, Guy; Berthiaume, Pilippe

    2016-08-01

    Environmental determinants (EVDs) have been identified as key determinants of health (DoH) for the emergence and re-emergence of several vector-borne diseases. Maintaining ongoing acquisition of data related to EVDs at local scale and for large regions constitutes a significant challenge. Earth observation (EO) satellites offer a framework to overcome this challenge. However, EO image analysis methods commonly used to estimate EVDs are time- and resource-consuming. Moreover, variations in microclimatic conditions combined with high landscape heterogeneity limit the effectiveness of climatic variables derived from EO. In this study, we present what DoH and EVDs are, the impacts of EVDs on vector-borne diseases in the context of global environmental change, the need to characterize EVDs of vector-borne diseases at local scale and its challenges, and finally we propose an approach based on EO images to estimate, at local scale, indicators pertaining to EVDs of vector-borne diseases.

  14. Approaches to 30 Percent Energy Savings at the Community Scale in the Hot-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas-Rees, S.; Beal, D.; Martin, E.

    2013-03-01

    BA-PIRC has worked with several community-scale builders within the hot-humid climate zone to improve the performance of production, or community-scale, housing. Tommy Williams Homes (Gainesville, FL), Lifestyle Homes (Melbourne, FL), and Habitat for Humanity (various locations, FL) have all been continuous partners of the Building America program and are the subjects of this report, which documents achievement of the Building America goal of 30% whole-house energy savings in packages adopted at the community scale. Key aspects of this research include determining how to evolve existing energy efficiency packages to produce replicable target savings, identifying builders' technical assistance needs for implementation and working with them to create sustainable quality assurance mechanisms, and documenting commercial viability through neutral cost analysis and market acceptance. This report documents certain barriers builders overcame and the approaches they implemented in order to accomplish Building America (BA) Program goals that have not already been documented in previous reports.

  15. A large-scale perspective on stress-induced alterations in resting-state networks

    NASA Astrophysics Data System (ADS)

    Maron-Katz, Adi; Vaisvaser, Sharon; Lin, Tamar; Hendler, Talma; Shamir, Ron

    2016-02-01

    Stress is known to induce large-scale neural modulations. However, its neural effect once the stressor is removed, and how it relates to subjective experience, are not fully understood. Here we used a statistically sound data-driven approach to investigate alterations in large-scale resting-state functional connectivity (rsFC) induced by acute social stress. We compared rsfMRI profiles of 57 healthy male subjects before and after stress induction. Using a parcellation-based univariate statistical analysis, we identified a large-scale rsFC change involving 490 parcel-pairs. Aiming to characterize this change, we employed statistical enrichment analysis, identifying anatomic structures that were significantly interconnected by these pairs. This analysis revealed strengthening of thalamo-cortical connectivity and weakening of cross-hemispheral parieto-temporal connectivity. These alterations were further found to be associated with changes in subjective stress reports. Integrating report-based information on stress sustainment 20 minutes post induction revealed a single significant rsFC change between the right amygdala and the precuneus, which inversely correlated with the level of subjective recovery. Our study demonstrates the value of enrichment analysis for exploring large-scale network reorganization patterns, and provides new insight into stress-induced neural modulations and their relation to subjective experience.
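    A simplified sketch of the parcel-pair comparison and an enrichment-style test is given below. The data are random placeholders, the pair-wise paired t-test is uncorrected for brevity, and a hypergeometric overlap test stands in for the study's enrichment analysis; none of this reproduces the authors' pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical data: parcel-wise time series per subject before and after
# stress induction (subjects x timepoints x parcels); random placeholders here.
rng = np.random.default_rng(2)
n_sub, n_t, n_p = 20, 150, 30
pre  = rng.standard_normal((n_sub, n_t, n_p))
post = rng.standard_normal((n_sub, n_t, n_p))

def upper_fc(ts):
    """Upper-triangle functional connectivity (Pearson r) for one subject."""
    c = np.corrcoef(ts, rowvar=False)
    iu = np.triu_indices(c.shape[0], k=1)
    return c[iu]

fc_pre  = np.array([upper_fc(s) for s in pre])    # subjects x parcel-pairs
fc_post = np.array([upper_fc(s) for s in post])

# Paired test per parcel-pair; pairs passing an (uncorrected, for brevity) threshold.
t, p = stats.ttest_rel(fc_post, fc_pre, axis=0)
changed = p < 0.01

# Enrichment-style question: are changed pairs over-represented among the pairs
# connecting two hypothetical anatomical structures (here the first 10 parcels)?
iu = np.triu_indices(n_p, k=1)
in_structure = (iu[0] < 10) & (iu[1] < 10)
k = np.sum(changed & in_structure)                # changed pairs inside the structure
M, n, N = changed.size, in_structure.sum(), changed.sum()
p_enrich = stats.hypergeom.sf(k - 1, M, n, N)     # P(X >= k) under random overlap
print(f"{N} changed pairs, {k} within the structure, enrichment p = {p_enrich:.3f}")
```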

  16. Increasing CAD system efficacy for lung texture analysis using a convolutional network

    NASA Astrophysics Data System (ADS)

    Tarando, Sebastian Roberto; Fetita, Catalin; Faccinetto, Alex; Brillet, Pierre-Yves

    2016-03-01

    The infiltrative lung diseases are a class of irreversible, non-neoplastic lung pathologies requiring regular follow-up with CT imaging. Quantifying the evolution of the patient status requires the development of automated classification tools for lung texture. For the large majority of CAD systems, such classification relies on a two-dimensional analysis of axial CT images. In a previously developed CAD system, we proposed a fully 3D approach exploiting a multi-scale morphological analysis, which showed good performance in detecting diseased areas but had the major drawback of sometimes overestimating the pathological areas and mixing different types of lung patterns. This paper proposes a combination of the existing CAD system with the classification outcome provided by a convolutional network, specifically tuned, in order to increase the specificity of the classification and the confidence in the diagnosis. The advantage of using a deep learning approach is a better regularization of the classification output (because of a deeper insight into a given pathological class over a large series of samples), where the previous system is over-sensitive due to its multi-scale response to patient-specific, localized patterns. In a preliminary evaluation, the combined approach was tested on a 10-patient database of various lung pathologies, showing a sharp increase in true detections.

  17. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the inherent spatial heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and co-variance between the variables. They are used as input into catchment models. A long-term simulation of this combined system enables very long discharge series to be derived at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observational data at 528 stations covering not only all of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes several drawbacks reported for traditional derived flood frequency analysis approaches and is therefore recommended for large-scale flood risk case studies.
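    The final, derived flood frequency step can be sketched compactly: given a long simulated discharge series, flood quantiles follow either from empirical quantiles of the annual maxima or from a fitted extreme-value distribution. The example below uses synthetic discharge data and a GEV fit as one plausible option; it is not the weather-generator/SWIM model chain itself.

```python
import numpy as np
from scipy import stats

# Hypothetical daily discharge series from a long continuous simulation
# (here 10,000 synthetic "years" of 365 days at one catchment outlet).
rng = np.random.default_rng(3)
n_years = 10_000
daily_q = rng.gamma(shape=2.0, scale=50.0, size=(n_years, 365))

annual_max = daily_q.max(axis=1)                     # annual maximum series

# Empirical quantile directly from the simulated maxima ...
q100_empirical = np.quantile(annual_max, 1 - 1 / 100)

# ... and, alternatively, from a GEV distribution fitted to the maxima.
shape, loc, scale = stats.genextreme.fit(annual_max)
q100_gev = stats.genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)

print(f"100-year flood: empirical {q100_empirical:.0f}, GEV fit {q100_gev:.0f} (synthetic units)")
```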

  18. Modal coupling procedures adapted to NASTRAN analysis of the 1/8-scale shuttle structural dynamics model. Volume 1: Technical report

    NASA Technical Reports Server (NTRS)

    Zalesak, J.

    1975-01-01

    A dynamic substructuring analysis, utilizing the component modes technique, of the 1/8 scale space shuttle orbiter finite element model is presented. The analysis was accomplished in 3 phases, using NASTRAN RIGID FORMAT 3, with appropriate Alters, on the IBM 360-370. The orbiter was divided into 5 substructures, each of which was reduced to interface degrees of freedom and generalized normal modes. The reduced substructures were coupled to yield the first 23 symmetric free-free orbiter modes, and the eigenvectors in the original grid point degree of freedom lineup were recovered. A comparison was made with an analysis which was performed with the same model using the direct coordinate elimination approach. Eigenvalues were extracted using the inverse power method.

  19. Phylogenomic Reconstruction of the Oomycete Phylogeny Derived from 37 Genomes

    PubMed Central

    McCarthy, Charley G. P.

    2017-01-01

    ABSTRACT The oomycetes are a class of microscopic, filamentous eukaryotes within the Stramenopiles-Alveolata-Rhizaria (SAR) supergroup which includes ecologically significant animal and plant pathogens, most infamously the causative agent of potato blight Phytophthora infestans. Single-gene and concatenated phylogenetic studies both of individual oomycete genera and of members of the larger class have resulted in conflicting conclusions concerning species phylogenies within the oomycetes, particularly for the large Phytophthora genus. Genome-scale phylogenetic studies have successfully resolved many eukaryotic relationships by using supertree methods, which combine large numbers of potentially disparate trees to determine evolutionary relationships that cannot be inferred from individual phylogenies alone. With a sufficient amount of genomic data now available, we have undertaken the first whole-genome phylogenetic analysis of the oomycetes using data from 37 oomycete species and 6 SAR species. In our analysis, we used established supertree methods to generate phylogenies from 8,355 homologous oomycete and SAR gene families and have complemented those analyses with both phylogenomic network and concatenated supermatrix analyses. Our results show that a genome-scale approach to oomycete phylogeny resolves oomycete classes and individual clades within the problematic Phytophthora genus. Support for the resolution of the inferred relationships between individual Phytophthora clades varies depending on the methodology used. Our analysis represents an important first step in large-scale phylogenomic analysis of the oomycetes. IMPORTANCE The oomycetes are a class of eukaryotes and include ecologically significant animal and plant pathogens. Single-gene and multigene phylogenetic studies of individual oomycete genera and of members of the larger classes have resulted in conflicting conclusions concerning interspecies relationships among these species, particularly for the Phytophthora genus. The onset of next-generation sequencing techniques now means that a wealth of oomycete genomic data is available. For the first time, we have used genome-scale phylogenetic methods to resolve oomycete phylogenetic relationships. We used supertree methods to generate single-gene and multigene species phylogenies. Overall, our supertree analyses utilized phylogenetic data from 8,355 oomycete gene families. We have also complemented our analyses with superalignment phylogenies derived from 131 single-copy ubiquitous gene families. Our results show that a genome-scale approach to oomycete phylogeny resolves oomycete classes and clades. Our analysis represents an important first step in large-scale phylogenomic analysis of the oomycetes. PMID:28435885

  20. Brownian motion or Lévy walk? Stepping towards an extended statistical mechanics for animal locomotion

    PubMed Central

    Gautestad, Arild O.

    2012-01-01

    Animals moving under the influence of spatio-temporal scaling and long-term memory generate a kind of space-use pattern that has proved difficult to model within a coherent theoretical framework. An extended kind of statistical mechanics is needed, accounting for both the effects of spatial memory and scale-free space use, and put into a context of ecological conditions. Simulations illustrating the distinction between scale-specific and scale-free locomotion are presented. The results show how observational scale (time lag between relocations of an individual) may critically influence the interpretation of the underlying process. In this respect, a novel protocol is proposed as a method to distinguish between some main movement classes. For example, the ‘power law in disguise’ paradox—from a composite Brownian motion consisting of a superposition of independent movement processes at different scales—may be resolved by shifting the focus from pattern analysis at one particular temporal resolution towards a more process-oriented approach involving several scales of observation. A more explicit consideration of system complexity within a statistical mechanical framework, supplementing the more traditional mechanistic modelling approach, is advocated. PMID:22456456

  1. Distributed Coordinated Control of Large-Scale Nonlinear Networks

    DOE PAGES

    Kundu, Soumya; Anghel, Marian

    2015-11-08

    We provide a distributed coordinated approach to the stability analysis and control design of large-scale nonlinear dynamical systems by using a vector Lyapunov functions approach. In this formulation the large-scale system is decomposed into a network of interacting subsystems, and the stability of the system is analyzed through a comparison system. However, finding such a comparison system is not trivial. In this work, we propose a sum-of-squares based, completely decentralized approach for computing the comparison systems for networks of nonlinear systems. Moreover, based on the comparison systems, we introduce a distributed optimal control strategy in which the individual subsystems (agents) coordinate with their immediate neighbors to design local control policies that can exponentially stabilize the full system under initial disturbances. We illustrate the control algorithm on a network of interacting Van der Pol systems.

  2. Lightweight computational steering of very large scale molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beazley, D.M.; Lomdahl, P.S.

    1996-09-01

    We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages.

  3. The circuit architecture of whole brains at the mesoscopic scale.

    PubMed

    Mitra, Partha P

    2014-09-17

    Vertebrate brains of even moderate size are composed of astronomically large numbers of neurons and show a great degree of individual variability at the microscopic scale. This variation is presumably the result of phenotypic plasticity and individual experience. At a larger scale, however, relatively stable species-typical spatial patterns are observed in neuronal architecture, e.g., the spatial distributions of somata and axonal projection patterns, probably the result of a genetically encoded developmental program. The mesoscopic scale of analysis of brain architecture is the transitional point between a microscopic scale where individual variation is prominent and the macroscopic level where a stable, species-typical neural architecture is observed. The empirical existence of this scale, implicit in neuroanatomical atlases, combined with advances in computational resources, makes studying the circuit architecture of entire brains a practical task. A methodology has previously been proposed that employs a shotgun-like grid-based approach to systematically cover entire brain volumes with injections of neuronal tracers. This methodology is being employed to obtain mesoscale circuit maps in mouse and should be applicable to other vertebrate taxa. The resulting large data sets raise issues of data representation, analysis, and interpretation, which must be resolved. Even for data representation the challenges are nontrivial: the conventional approach using regional connectivity matrices fails to capture the collateral branching patterns of projection neurons. Future success of this promising research enterprise depends on the integration of previous neuroanatomical knowledge, partly through the development of suitable computational tools that encapsulate such expertise. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems

    NASA Astrophysics Data System (ADS)

    Blonigan, Patrick J.; Wang, Qiqi

    2018-02-01

    Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.

  5. Models of inertial range spectra of interplanetary magnetohydrodynamic turbulence

    NASA Technical Reports Server (NTRS)

    Zhou, YE; Matthaeus, William H.

    1990-01-01

    A framework based on turbulence theory is presented to develop approximations for the local turbulence effects that are required in transport models. An approach based on Kolmogoroff-style dimensional analysis is presented as well as one based on a wave-number diffusion picture. Particular attention is given to the case of MHD turbulence with arbitrary cross helicity and with arbitrary ratios of the Alfven time scale and the nonlinear time scale.

  6. A scale-invariant change detection method for land use/cover change research

    NASA Astrophysics Data System (ADS)

    Xing, Jin; Sieber, Renee; Caelli, Terrence

    2018-07-01

    Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms in computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale-heterogeneous images. This method is composed of an entropy-based spatial decomposition, two scale-invariant feature extraction methods (the Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transform (SIFT) algorithms), a spatial regression voting method to integrate MSER and SIFT results, a Markov Random Field-based smoothing method, and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides accuracy similar to the resampling-based approach while avoiding the LUCC distortion incurred by resampling.
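    The scale-invariant feature extraction step can be sketched with OpenCV, assuming a build that includes SIFT and using hypothetical file names; the spatial regression voting, Markov Random Field smoothing and SVM classification stages of the full method are not reproduced here.

```python
import cv2

# Hypothetical single-band rasters exported from the two acquisition dates.
img_t1 = cv2.imread("scene_2005_band.png", cv2.IMREAD_GRAYSCALE)
img_t2 = cv2.imread("scene_2012_band.png", cv2.IMREAD_GRAYSCALE)

# Scale-invariant keypoints and descriptors (SIFT) on both dates.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_t1, None)
kp2, des2 = sift.detectAndCompute(img_t2, None)

# Maximally stable extremal regions (MSER) on both dates.
mser = cv2.MSER_create()
regions_t1, _ = mser.detectRegions(img_t1)
regions_t2, _ = mser.detectRegions(img_t2)

# Ratio-test matching of SIFT descriptors; unmatched or poorly matched areas
# are candidate change locations to be voted on and classified downstream.
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(len(kp1), len(kp2), len(good), len(regions_t1), len(regions_t2))
```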

  7. Scale invariant texture descriptors for classifying celiac disease

    PubMed Central

    Hegenbart, Sebastian; Uhl, Andreas; Vécsei, Andreas; Wimmer, Georg

    2013-01-01

    Scale invariant texture recognition methods are applied for the computer assisted diagnosis of celiac disease. In particular, emphasis is given to techniques enhancing the scale invariance of multi-scale and multi-orientation wavelet transforms and methods based on fractal analysis. After fine-tuning to specific properties of our celiac disease imagery database, which consists of endoscopic images of the duodenum, some scale invariant (and often even viewpoint invariant) methods provide classification results improving the current state of the art. However, not all of the investigated scale-invariant methods can be applied successfully to our dataset. Therefore, the scale invariance of the employed approaches is explicitly assessed, and it is found that many of the analyzed methods are not as scale invariant as they theoretically should be. Results imply that scale invariance is not a key feature required for successful classification of our celiac disease dataset. PMID:23481171

  8. Spectral analysis of temporal non-stationary rainfall-runoff processes

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Min; Yeh, Hund-Der

    2018-04-01

    This study treats the catchment as a black-box system, considering the rainfall input and runoff output as stochastic processes. The temporal rainfall-runoff relationship at the catchment scale is described by a convolution integral on a continuous time scale. Using the Fourier-Stieltjes representation approach, a frequency-domain solution to the convolution integral is developed for the spectral analysis of runoff processes generated by temporally non-stationary rainfall events. It is shown that the characteristic time scale of the rainfall process increases the variability of the runoff discharge, while the catchment mean travel time constant acts to reduce that variability. Similar to the behavior of groundwater aquifers, catchments act as a low-pass filter in the frequency domain on the rainfall input signal.
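    The convolution and low-pass filtering statements can be made concrete under the common assumption of a linear-reservoir (exponential) transfer function; the notation below is ours and only sketches the kind of relation the study derives.

```latex
% Catchment treated as a linear, time-invariant filter: runoff Q is rainfall P
% convolved with a transfer (unit response) function h,
%   Q(t) = \int_{0}^{\infty} h(\tau)\, P(t-\tau)\, d\tau .
% Assuming a linear-reservoir response with mean travel time t_c,
%   h(\tau) = \frac{1}{t_c}\, e^{-\tau/t_c},
% the input and output spectral densities are related by
%   S_{QQ}(\omega) = |H(\omega)|^{2}\, S_{PP}(\omega)
%                  = \frac{S_{PP}(\omega)}{1 + \omega^{2} t_c^{2}},
% so high-frequency rainfall fluctuations are damped and the catchment acts
% as a low-pass filter whose cutoff frequency scales with 1/t_c.
```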

  9. Multi-time-scale hydroclimate dynamics of a regional watershed and links to large-scale atmospheric circulation: Application to the Seine river catchment, France

    NASA Astrophysics Data System (ADS)

    Massei, N.; Dieppois, B.; Hannah, D. M.; Lavers, D. A.; Fossa, M.; Laignel, B.; Debret, M.

    2017-03-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of better understanding hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating correlation between large and local scales, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: (i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and (ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the links between large and local scales were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach, which integrated discrete wavelet multiresolution analysis for reconstructing monthly regional hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector). This approach consisted of three steps: 1 - decomposing large-scale climate and hydrological signals (SLP field, precipitation or streamflow) using discrete wavelet multiresolution analysis, 2 - generating a statistical downscaling model per time-scale, 3 - summing up all scale-dependent models in order to obtain a final reconstruction of the predictand. The results obtained revealed a significant improvement of the reconstructions for both precipitation and streamflow when using the multiresolution ESD model instead of basic ESD. In particular, the multiresolution ESD model handled very well the significant changes in variance through time observed in either precipitation or streamflow. For instance, the post-1980 period, which had been characterized by particularly high amplitudes in interannual-to-interdecadal variability associated with alternating flood and extremely low-flow/drought periods (e.g., winter/spring 2001, summer 2003), could not be reconstructed without integrating wavelet multiresolution analysis into the model. In accordance with previous studies, the wavelet components detected in SLP, precipitation and streamflow on interannual to interdecadal time-scales could be interpreted in terms of the influence of the Gulf Stream oceanic front on atmospheric circulation.
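    The three-step multiresolution ESD idea can be sketched as follows, using PyWavelets for the discrete multiresolution decomposition and a simple linear regression per scale as a stand-in for the unspecified downscaling model; the series are synthetic placeholders, not the SLP or Seine data.

```python
import numpy as np
import pywt
from sklearn.linear_model import LinearRegression

def mra_components(x, wavelet="db4", level=4):
    """Additive multiresolution components of a 1-D signal (details plus final approximation)."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    comps = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        comps.append(pywt.waverec(kept, wavelet)[: len(x)])
    return comps                                    # components sum (approximately) to x

# Hypothetical monthly series: a large-scale predictor index and local streamflow.
rng = np.random.default_rng(4)
n = 512
predictor = np.cumsum(rng.standard_normal(n))       # stands in for an SLP-based index
predictand = 0.6 * predictor + rng.standard_normal(n)

# One regression model per time scale, then sum the scale-wise predictions.
reconstruction = np.zeros(n)
for xp, yp in zip(mra_components(predictor), mra_components(predictand)):
    model = LinearRegression().fit(xp.reshape(-1, 1), yp)
    reconstruction += model.predict(xp.reshape(-1, 1))

corr = np.corrcoef(reconstruction, predictand)[0, 1]
print(f"correlation of multiresolution reconstruction with predictand: {corr:.2f}")
```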

  10. Double symbolic joint entropy in nonlinear dynamic complexity analysis

    NASA Astrophysics Data System (ADS)

    Yao, Wenpo; Wang, Jun

    2017-07-01

    Symbolizations, the basis of symbolic dynamic analysis, are classified into global static and local dynamic approaches, which are combined by joint entropy in this work for nonlinear dynamic complexity analysis. Two global static methods, the symbolic transformations of Wessel N. symbolic entropy and base-scale entropy, and two local ones, namely the symbolizations of permutation and differential entropy, constitute four double symbolic joint entropies that provide accurate complexity detection in chaotic models, the logistic and Henon map series. In nonlinear dynamical analysis of different kinds of heart rate variability, heartbeats of the healthy young have higher complexity than those of the healthy elderly, and congestive heart failure (CHF) patients have the lowest joint entropy values of their heartbeats. Each individual symbolic entropy is improved by double symbolic joint entropy, among which the combination of base-scale and differential symbolizations gives the best complexity analysis. Test results prove that double symbolic joint entropy is feasible for nonlinear dynamic complexity analysis.
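    A minimal sketch of one such double symbolization is shown below: a global static symbolization (quantile binning, in the spirit of base-scale-type coding) is combined with a local dynamic one (ordinal/permutation patterns) through the Shannon joint entropy. The parameter choices and test series are ours, not necessarily those of the study.

```python
import numpy as np
from collections import Counter
from itertools import permutations

def global_symbols(x, n_bins=4):
    """Global static symbolization: quantile bins over the full series."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def ordinal_symbols(x, m=3):
    """Local dynamic symbolization: ordinal (permutation) pattern of each window."""
    lookup = {p: i for i, p in enumerate(permutations(range(m)))}
    return np.array([lookup[tuple(np.argsort(x[i:i + m]))] for i in range(len(x) - m + 1)])

def joint_entropy(a, b):
    """Shannon entropy (bits) of the joint symbol distribution."""
    counts = Counter(zip(a, b))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log2(p))

def logistic(n, r=4.0, x0=0.3):
    """Logistic map series in the fully chaotic regime."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

rng = np.random.default_rng(5)
for name, series in [("logistic map", logistic(5000)), ("white noise", rng.random(5000))]:
    g = global_symbols(series)[: len(series) - 2]   # align lengths with ordinal symbols (m=3)
    o = ordinal_symbols(series, m=3)
    print(f"{name}: double symbolic joint entropy = {joint_entropy(g, o):.2f} bits")
```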

  11. A Systems-Biology Approach to Yeast Actin Cables

    PubMed Central

    Drake, Tyler; Yusuf, Eddy; Vavylonis, Dimitrios

    2011-01-01

    We focus on actin cables in yeast as a model system for understanding cytoskeletal organization and the workings of actin itself. In particular, we highlight quantitative approaches on the kinetics of actin cable assembly and methods of measuring their morphology by image analysis. Actin cables described by these studies can span greater lengths than a thousand end-to-end actin monomers. Because of this difference in length scales, control of the actin-cable system constitutes a junction between short-range interactions—among actin monomers and nucleating, polymerization-facilitating, side-binding, severing, and cross-linking proteins—and the emergence of cell-scale physical form as embodied by the actin cables themselves. PMID:22161338

  12. Multifractal Approach to Time Clustering of Earthquakes. Application to Mt. Vesuvio Seismicity

    NASA Astrophysics Data System (ADS)

    Codano, C.; Alonzo, M. L.; Vilardo, G.

    The clustering structure of Vesuvian earthquakes is investigated by means of statistical tools: the inter-event time distribution, the running mean and multifractal analysis. The first cannot clearly distinguish a Poissonian process from a clustered one, owing to the difficulty of discriminating an exponential distribution from a power-law one. The running mean test reveals the clustering of the earthquakes, but loses information about the structure of the distribution at global scales. The multifractal approach can highlight the clustering at small scales, while the global behaviour remains Poissonian. Subsequently, the clustering of the events is interpreted in terms of diffusive processes of the stress in the earth's crust.

  13. Statistical physics approaches to financial fluctuations

    NASA Astrophysics Data System (ADS)

    Wang, Fengzhong

    2009-12-01

    Complex systems attract many researchers from various scientific fields. Financial markets are one of these widely studied complex systems. Statistical physics, which was originally developed to study large systems, provides novel ideas and powerful methods to analyze financial markets. The study of financial fluctuations characterizes market behavior, and helps to better understand the underlying market mechanism. Our study focuses on volatility, a fundamental quantity for characterizing financial fluctuations. We examine equity data of the entire U.S. stock market during 2001 and 2002. To analyze the volatility time series, we develop a new approach, called return interval analysis, which examines the time intervals between two successive volatilities exceeding a given threshold value. We find that the return interval distribution displays scaling over a wide range of thresholds. This scaling is valid for a range of time windows, from one minute up to one day. Moreover, our results are similar for commodities, interest rates, currencies, and for stocks of different countries. Further analysis shows some systematic deviations from a scaling law, which we can attribute to nonlinear correlations in the volatility time series. We also find a memory effect in return intervals for different time scales, which is related to the long-term correlations in the volatility. To further characterize the mechanism of price movement, we simulate the volatility time series using two different models, fractionally integrated generalized autoregressive conditional heteroscedasticity (FIGARCH) and fractional Brownian motion (fBm), and test these models with the return interval analysis. We find that both models can mimic time memory but only fBm shows scaling in the return interval distribution. In addition, we examine the volatility from daily opening to closing and from closing to opening. We find that each volatility distribution has a power law tail. Using the detrended fluctuation analysis (DFA) method, we show long-term auto-correlations in these volatility time series. We also analyze returns, the actual price changes of stocks, and find that the returns over the two sessions are often anti-correlated.
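    The DFA step mentioned above is easy to reproduce in outline; the sketch below implements first-order DFA on a synthetic placeholder series rather than on the market data analysed in the thesis.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: RMS fluctuation F(n) for each window size n."""
    y = np.cumsum(x - np.mean(x))                  # integrated (profile) series
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[: n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        resid = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)           # local linear trend per segment
            resid.append(seg - np.polyval(coef, t))
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    return np.array(F)

rng = np.random.default_rng(6)
vol = np.abs(rng.standard_normal(20_000))          # placeholder "volatility" series
scales = np.unique(np.logspace(1, 3, 12).astype(int))
F = dfa(vol, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"DFA scaling exponent alpha = {alpha:.2f} (about 0.5 for uncorrelated data)")
```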

  14. Scales of variability of black carbon plumes and their dependence on resolution of ECHAM6-HAM

    NASA Astrophysics Data System (ADS)

    Weigum, Natalie; Stier, Philip; Schutgens, Nick; Kipling, Zak

    2015-04-01

    Prediction of the aerosol effect on climate depends on the ability of three-dimensional numerical models to accurately estimate aerosol properties. However, a limitation of traditional grid-based models is their inability to resolve variability on scales smaller than a grid box. Past research has shown that significant aerosol variability exists on scales smaller than these grid-boxes, which can lead to discrepancies between observations and aerosol models. The aim of this study is to understand how a global climate model's (GCM) inability to resolve sub-grid scale variability affects simulations of important aerosol features. This problem is addressed by comparing observed black carbon (BC) plume scales from the HIPPO aircraft campaign to those simulated by the ECHAM-HAM GCM, and testing how model resolution affects these scales. This study additionally investigates how model resolution affects BC variability in remote and near-source regions. These issues are examined using three different approaches: comparison of observed and simulated along-flight-track plume scales, two-dimensional autocorrelation analysis, and three-dimensional plume analysis. We find that the degree to which GCMs resolve variability can have a significant impact on the scales of BC plumes, and it is important for models to capture the scales of aerosol plume structures, which account for a large degree of aerosol variability. In this presentation, we will provide further results from the three analysis techniques along with a summary of the implications of these results for future aerosol model development.

  15. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

    Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they are not capable of tackling the huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at a large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
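    The retrieval pipeline described above (feature representation, indexing, searching) can be illustrated with a deliberately small sketch in which brute-force nearest-neighbour search stands in for the scalable indexing schemes (hashing, trees, product quantization) that the review covers; features are random placeholders.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical database: one feature vector per image (e.g. a deep descriptor
# or handcrafted texture features); random placeholders here.
rng = np.random.default_rng(7)
db_features = rng.standard_normal((10_000, 128)).astype(np.float32)

# Index the database; in a real large-scale system an approximate index
# (hashing, product quantization, trees) would replace brute force.
index = NearestNeighbors(n_neighbors=5, metric="cosine").fit(db_features)

# Query with the feature vector of an unseen image and retrieve the closest cases.
query = rng.standard_normal((1, 128)).astype(np.float32)
distances, neighbors = index.kneighbors(query)
print("top-5 matches:", neighbors[0], "cosine distances:", np.round(distances[0], 3))
```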

  16. ESTimating plant phylogeny: lessons from partitioning

    PubMed Central

    de la Torre, Jose EB; Egan, Mary G; Katari, Manpreet S; Brenner, Eric D; Stevenson, Dennis W; Coruzzi, Gloria M; DeSalle, Rob

    2006-01-01

    Background While Expressed Sequence Tags (ESTs) have proven a viable and efficient way to sample genomes, particularly those for which whole-genome sequencing is impractical, phylogenetic analysis using ESTs remains difficult. Sequencing errors and orthology determination are the major problems when using ESTs as a source of characters for systematics. Here we develop methods to incorporate EST sequence information in a simultaneous analysis framework to address controversial phylogenetic questions regarding the relationships among the major groups of seed plants. We use an automated, phylogenetically derived approach to orthology determination called OrthologID to generate a phylogeny based on 43 process partitions, many of which are derived from ESTs, and examine several measures of support to assess the utility of EST data for phylogenies. Results A maximum parsimony (MP) analysis resulted in a single tree with relatively high support at all nodes in the tree despite rampant conflict among trees generated from the separate analysis of individual partitions. In a comparison of broader-scale groupings based on cellular compartment (i.e., chloroplast, mitochondrial or nuclear) or function, only the nuclear partition tree (based largely on EST data) was found to be topologically identical to the tree based on the simultaneous analysis of all data. Despite topological conflict among the broader-scale groupings examined, only the tree based on morphological data showed statistically significant differences. Conclusion Based on the amount of character support contributed by EST data, which make up a majority of the nuclear data set, and the lack of conflict of the nuclear data set with the simultaneous analysis tree, we conclude that the inclusion of EST data does provide a viable and efficient approach to address phylogenetic questions within a parsimony framework on a genomic scale, if problems of orthology determination and potential sequencing errors can be overcome. In addition, approaches that examine conflict and support in a simultaneous analysis framework allow for a more precise understanding of the evolutionary history of individual process partitions and may be a novel way to understand functional aspects of different kinds of cellular classes of gene products. PMID:16776834

  17. PERFORMANCE AND ANALYSIS OF AQUIFER TESTS WITH IMPLICATIONS FOR CONTAMINANT TRANSPORT MODELING

    EPA Science Inventory

    The scale-dependence of dispersivity values used in contaminant transport models to estimate the spreading of contaminant plumes by hydrodynamic dispersion processes was investigated and found to be an artifact of conventional modeling approaches (especially, vertically averaged ...

  18. Comparing Analysis Frames for Visual Data Sets: Using Pupil Views Templates to Explore Perspectives of Learning

    ERIC Educational Resources Information Center

    Wall, Kate; Higgins, Steve; Remedios, Richard; Rafferty, Victoria; Tiplady, Lucy

    2013-01-01

    A key challenge of visual methodology is how to combine large-scale qualitative data sets with epistemologically acceptable and rigorous analysis techniques. The authors argue that a pragmatic approach drawing on ideas from mixed methods is helpful to open up the full potential of visual data. However, before one starts to "mix" the…

  19. Estimating forest ecosystem evapotranspiration at multiple temporal scales with a dimension analysis approach

    Treesearch

    Guoyi Zhou; Ge Sun; Xu Wang; Chuanyan Zhou; Steven G. McNulty; James M. Vose; Devendra M. Amatya

    2008-01-01

    It is critical that evapotranspiration (ET) be quantified accurately so that scientists can evaluate the effects of land management and global change on water availability, streamflow, nutrient and sediment loading, and ecosystem productivity in watersheds. The objective of this study was to derive a new semi-empirical ET model using a dimension analysis method that...

  20. Creating Indices from the Control Structure Interview Through Data Collapsing and Multidimensional Scaling: Approaches to Data Analysis in Project MITT.

    ERIC Educational Resources Information Center

    Jovick, Thomas D.

    This paper describes the analysis of data in the Management Implications of Team Teaching Project (MITT). It touches on the interviews conducted with teachers and principals, presents the breadth of information obtained in the questionnaire, and explains how the data were aggregated and how issues were grouped. Information collected in the…

  1. Finding External Indicators of Load on a Web Server via Analysis of Black-Box Performance Measurements

    ERIC Educational Resources Information Center

    Chiarini, Marc A.

    2010-01-01

    Traditional methods for system performance analysis have long relied on a mix of queuing theory, detailed system knowledge, intuition, and trial-and-error. These approaches often require construction of incomplete gray-box models that can be costly to build and difficult to scale or generalize. In this thesis, we present a black-box analysis…

  2. Investigation of the scaling characteristics of LANDSAT temperature and vegetation data: a wavelet-based approach

    NASA Astrophysics Data System (ADS)

    Rathinasamy, Maheswaran; Bindhu, V. M.; Adamowski, Jan; Narasimhan, Balaji; Khosa, Rakesh

    2017-10-01

    An investigation of the scaling characteristics of vegetation and temperature data derived from LANDSAT data was undertaken for a heterogeneous area in Tamil Nadu, India. A wavelet-based multiresolution technique decomposed the data into large-scale mean vegetation and temperature fields and fluctuations in horizontal, diagonal, and vertical directions at hierarchical spatial resolutions. In this approach, the wavelet coefficients were used to investigate whether the normalized difference vegetation index (NDVI) and land surface temperature (LST) fields exhibited self-similar scaling behaviour. In this study, l-moments were used instead of conventional simple moments to understand scaling behaviour. Using the first six moments of the wavelet coefficients through five levels of dyadic decomposition, the NDVI data were shown to be statistically self-similar, with a slope of approximately -0.45 in each of the horizontal, vertical, and diagonal directions of the image, over scales ranging from 30 to 960 m. The temperature data were also shown to exhibit self-similarity with slopes ranging from -0.25 in the diagonal direction to -0.20 in the vertical direction over the same scales. These findings can help develop appropriate up- and down-scaling schemes of remotely sensed NDVI and LST data for various hydrologic and environmental modelling applications. A sensitivity analysis was also undertaken to understand the effect of mother wavelets on the scaling characteristics of LST and NDVI images.
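    A compact sketch of the scale-analysis step is given below: detail coefficients from a 2-D discrete wavelet decomposition are summarized by a moment at each dyadic level, and the slope of the log-moment versus level indicates self-similar scaling. It uses simple absolute moments on a synthetic field rather than the l-moments and LANDSAT imagery of the study.

```python
import numpy as np
import pywt

# Synthetic stand-in for an NDVI or LST raster (the real study used LANDSAT data);
# a doubly cumulated noise field gives a spatially correlated, roughly self-similar surface.
rng = np.random.default_rng(8)
field = np.cumsum(np.cumsum(rng.standard_normal((512, 512)), axis=0), axis=1)

n_levels = 5
coeffs = pywt.wavedec2(field, "haar", level=n_levels)   # [cA5, (cH5,cV5,cD5), ..., (cH1,cV1,cD1)]
detail_levels = list(range(n_levels, 0, -1))             # coarsest to finest, matching coeffs[1:]
moments = [np.mean(np.abs(cH)) for cH, cV, cD in coeffs[1:]]  # 1st abs. moment, horizontal details

# A straight line of log2(moment) versus level indicates statistically self-similar scaling;
# the slope plays the role of the scaling exponents reported in the study.
slope = np.polyfit(detail_levels, np.log2(moments), 1)[0]
print(f"slope of log2(first moment) per dyadic level: {slope:.2f}")
```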

  3. Can the concept of fundamental and realized niches be applied to the distribution of dominant phytoplankton in the global ocean?

    NASA Astrophysics Data System (ADS)

    Dowell, M.; Moore, T.; Follows, M.; Dutkiewicz, S.

    2006-12-01

    In recent years there has been significant progress both in the use of satellite ocean colour remote sensing and coupled hydrodynamic biological models for producing maps of different dominant phytoplankton groups in the global ocean. In parallel to these initiatives, there is ongoing research largely following on from Alan Longhurst's seminal work on defining a template of distinct ecological and biogeochemical provinces for the oceans based on their physical and biochemical characteristics. For these products and models to be of maximum use in their subsequent inclusion in re-analysis and climate scale models, there is a need to understand how the "observed" distributions of dominant phytoplankton (realized niche) coincide with the environmental constraints in which they occur (fundamental niche). In the current paper, we base our analysis on the recently published results on the distribution of dominant phytoplankton species at global scale, resulting from both satellite and model analysis. Furthermore, we will present research in defining biogeochemical provinces using satellite and model data inputs and a fuzzy logic based approach. This will be compared with ongoing modelling efforts, which include competitive exclusion and are therefore compatible with the definition of the realized ecological niche, to define the emergent distribution of dominant phytoplankton species. Ultimately we investigate the coherence of these two distinct approaches in studying phytoplankton distributions and propose the significance of this in the context of modelling and analysis at various scales.

  4. A comparison of three methods of assessing differential item functioning (DIF) in the Hospital Anxiety Depression Scale: ordinal logistic regression, Rasch analysis and the Mantel chi-square procedure.

    PubMed

    Cameron, Isobel M; Scott, Neil W; Adler, Mats; Reid, Ian C

    2014-12-01

    It is important for clinical practice and research that measurement scales of well-being and quality of life exhibit only minimal differential item functioning (DIF). DIF occurs where different groups of people endorse items in a scale to different extents after being matched by the intended scale attribute. We investigate the equivalence or otherwise of common methods of assessing DIF. Three methods of measuring age- and sex-related DIF (ordinal logistic regression, Rasch analysis and the Mantel χ² procedure) were applied to Hospital Anxiety Depression Scale (HADS) data pertaining to a sample of 1,068 patients consulting primary care practitioners. Three items were flagged by all three approaches as having either age- or sex-related DIF with a consistent direction of effect; a further three items identified did not meet stricter criteria for important DIF using at least one method. When applying strict criteria for significant DIF, ordinal logistic regression was slightly less sensitive. Ordinal logistic regression, Rasch analysis and contingency table methods yielded consistent results when identifying DIF in the HADS depression and HADS anxiety scales. Regardless of methods applied, investigators should use a combination of statistical significance, magnitude of the DIF effect and investigator judgement when interpreting the results.

  5. Contextual Compression of Large-Scale Wind Turbine Array Simulations: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruchalla, Kenny M; Brunhart-Lupo, Nicholas J; Potter, Kristin C

    Data sizes are becoming a critical issue particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduced accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.

  6. Contextual Compression of Large-Scale Wind Turbine Array Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gruchalla, Kenny M; Brunhart-Lupo, Nicholas J; Potter, Kristin C

    Data sizes are becoming a critical issue particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduced accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.

  7. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    NASA Astrophysics Data System (ADS)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in reasonable agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and of foams composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent features and at the same time the equivalent features in an integrated simulation. This new approach promises to combine the advantages of the continuous-scale approach (rapid calculations) and the direct discrete-scale approach (accurate prediction of local radiative quantities).

  8. Large-scale Granger causality analysis on resting-state functional MRI

    NASA Astrophysics Data System (ADS)

    D'Souza, Adora M.; Abidin, Anas Zainul; Leistritz, Lutz; Wismüller, Axel

    2016-03-01

    We demonstrate an approach to measure the information flow between each pair of time series in resting-state functional MRI (fMRI) data of the human brain and subsequently recover its underlying network structure. By integrating dimensionality reduction into predictive time series modeling, the large-scale Granger causality (lsGC) analysis method can reveal directed information flow suggestive of causal influence at an individual voxel level, unlike other multivariate approaches. This method quantifies the influence each voxel time series has on every other voxel time series in a multivariate sense and hence contains information about the underlying dynamics of the whole system, which can be used to reveal functionally connected networks within the brain. To identify such networks, we perform non-metric network clustering, such as that accomplished by the Louvain method. We demonstrate the effectiveness of our approach by recovering the motor and visual cortex from resting-state human brain fMRI data and compare it with the network recovered from a visuomotor stimulation experiment, where the similarity is measured by the Dice coefficient (DC). The best DC obtained was 0.59, implying a strong agreement between the two networks. In addition, we thoroughly study the effect of dimensionality reduction in lsGC analysis on network recovery. We conclude that our approach is capable of detecting causal influence between time series in a multivariate sense, which can be used to segment functionally connected networks in resting-state fMRI.
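
    The following toy sketch illustrates the Granger-style principle underlying the method: the influence of one series on another is measured by how much adding the source's past improves prediction of the target. The synthetic series, the model order, and the pairwise (rather than dimensionality-reduced, fully multivariate) formulation are simplifying assumptions and are not the lsGC implementation itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    T, p = 2000, 2                        # number of samples and autoregressive model order (assumed)
    x = rng.normal(size=T)
    y = np.zeros(T)
    for t in range(2, T):                 # y is driven by the past of x, so x should "G-cause" y
        y[t] = 0.6 * y[t-1] + 0.5 * x[t-2] + 0.1 * rng.normal()

    def lagged(z, p):
        """Columns are z lagged by 1..p, aligned with z[p:]."""
        return np.column_stack([z[p-k-1:len(z)-k-1] for k in range(p)])

    Y = y[p:]
    restricted = np.column_stack([np.ones(T - p), lagged(y, p)])          # past of y only
    full = np.column_stack([restricted, lagged(x, p)])                    # past of y and of x
    rss_r = np.sum((Y - restricted @ np.linalg.lstsq(restricted, Y, rcond=None)[0]) ** 2)
    rss_f = np.sum((Y - full @ np.linalg.lstsq(full, Y, rcond=None)[0]) ** 2)
    print("Granger index log(RSS_restricted / RSS_full):", np.log(rss_r / rss_f))
    ```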

  9. Comparative evaluation of saliva collection methods for proteome analysis.

    PubMed

    Golatowski, Claas; Salazar, Manuela Gesell; Dhople, Vishnu Mukund; Hammer, Elke; Kocher, Thomas; Jehmlich, Nico; Völker, Uwe

    2013-04-18

    Saliva collection devices are widely used for large-scale screening approaches. This study was designed to compare the suitability of three different whole-saliva collection approaches for subsequent proteome analyses. Saliva samples were collected from 9 young healthy volunteers (4 women and 5 men), either unstimulated by passive drooling or stimulated using paraffin gum or a Salivette® (cotton swab). Saliva volume, protein concentration and salivary protein patterns were analyzed comparatively. Samples collected using paraffin gum showed the highest saliva volume (4.1±1.5 ml), followed by Salivette® collection (1.8±0.4 ml) and drooling (1.0±0.4 ml). Saliva protein concentrations (average 1145 μg/ml) showed no significant differences between the three sampling schemes. Each collection approach facilitated the identification of about 160 proteins (≥2 distinct peptides) per subject, but collection-method-dependent variations in protein composition were observed. Passive drooling, paraffin gum and Salivette® each allows similar coverage of the whole saliva proteome, but the specific proteins observed depended on the collection approach. Thus, only one type of collection device should be used for quantitative proteome analysis in one experiment, especially when performing large-scale cross-sectional or multi-centric studies. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Churchill: an ultra-fast, deterministic, highly scalable and balanced parallelization strategy for the discovery of human genetic variation in clinical and population-scale genomics.

    PubMed

    Kelly, Benjamin J; Fitch, James R; Hu, Yangqiu; Corsmeier, Donald J; Zhong, Huachun; Wetzel, Amy N; Nordquist, Russell D; Newsom, David L; White, Peter

    2015-01-20

    While advances in genome sequencing technology make population-scale genomics a possibility, current approaches for analysis of these data rely upon parallelization strategies that have limited scalability, complex implementation and lack reproducibility. Churchill, a balanced regional parallelization strategy, overcomes these challenges, fully automating the multiple steps required to go from raw sequencing reads to variant discovery. Through implementation of novel deterministic parallelization techniques, Churchill allows computationally efficient analysis of a high-depth whole genome sample in less than two hours. The method is highly scalable, enabling full analysis of the 1000 Genomes raw sequence dataset in a week using cloud resources. http://churchill.nchri.org/.

  11. Parameter Uncertainty Analysis Using Monte Carlo Simulations for a Regional-Scale Groundwater Model

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Pohlmann, K.

    2016-12-01

    Regional-scale grid-based groundwater models for flow and transport often contain multiple types of parameters that can intensify the challenge of parameter uncertainty analysis. We propose a Monte Carlo approach to systematically quantify the influence of various types of model parameters on groundwater flux and contaminant travel times. The Monte Carlo simulations were conducted based on the steady-state conversion of the original transient model, which was then combined with the PEST sensitivity analysis tool SENSAN and particle tracking software MODPATH. Results identified hydrogeologic units whose hydraulic conductivity can significantly affect groundwater flux, and thirteen out of 173 model parameters that can cause large variation in travel times for contaminant particles originating from given source zones.
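
    The general workflow (sample uncertain parameters, run the flow and transport model, summarize the spread of the outputs) can be sketched as below. The log-normal conductivity priors and the placeholder response function are assumptions for illustration only; the study itself drove the steady-state groundwater model through the PEST tool SENSAN and tracked particles with MODPATH.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n_runs, n_units = 1000, 5                     # Monte Carlo realizations and hydrogeologic units
    logK_mean, logK_sd = -4.0, 0.5                # assumed log10 hydraulic conductivity statistics

    def surrogate_travel_time(K):
        """Placeholder for the full flow and particle-tracking model run."""
        return 1.0e4 / K.mean() + rng.normal(0.0, 1.0e6)   # arbitrary toy response

    samples = 10 ** rng.normal(logK_mean, logK_sd, size=(n_runs, n_units))   # sampled conductivities
    times = np.array([surrogate_travel_time(K) for K in samples])

    print("travel-time 5th-95th percentile range:", np.percentile(times, [5, 95]).round(-5))
    ```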

  12. How High Frequency Trading Affects a Market Index

    PubMed Central

    Kenett, Dror Y.; Ben-Jacob, Eshel; Stanley, H. Eugene; Gur-Gershgoren, Gitit

    2013-01-01

    The relationship between a market index and its constituent stocks is complicated. While an index is a weighted average of its constituent stocks, when the investigated time scale is one day or longer the index has been found to have a stronger effect on the stocks than vice versa. We explore how this interaction changes in short time scales using high frequency data. Using a correlation-based analysis approach, we find that in short time scales stocks have a stronger influence on the index. These findings have implications for high frequency trading and suggest that the price of an index should be published on shorter time scales, as close as possible to those of the actual transaction time scale. PMID:23817553
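
    A simple way to probe the lead-lag relationship described above is to correlate index returns with stock returns at a one-step lag, in both directions, for several aggregation windows. The sketch below does this on synthetic random-walk prices; the equal-weight index, the window sizes, and the data are assumptions for illustration only and will not reproduce the empirical asymmetry reported in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_stocks, T = 50, 20000
    stocks = rng.normal(0, 1e-4, size=(T, n_stocks)).cumsum(axis=0)   # toy log-price paths
    index = stocks.mean(axis=1)                                       # equal-weight "index"

    def returns(x, window):
        x = x[: (len(x) // window) * window]
        return np.diff(x[::window])

    def lagged_corr(a, b, lag=1):
        """corr(a_t, b_{t+lag}); a positive value suggests a leads b."""
        return np.corrcoef(a[:-lag], b[lag:])[0, 1]

    for window in (1, 10, 100):                    # smaller window ~ higher observation frequency
        r_idx = returns(index, window)
        r_stk = returns(stocks[:, 0], window)
        print(window,
              "stock->index:", round(lagged_corr(r_stk, r_idx), 3),
              "index->stock:", round(lagged_corr(r_idx, r_stk), 3))
    ```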

  13. A situation-specific approach to measure attention in adults with ADHD: The everyday life attention scale (ELAS).

    PubMed

    Groen, Yvonne; Fuermaier, Anselm B M; Tucha, Lara; Weisbrod, Matthias; Aschenbrenner, Steffen; Tucha, Oliver

    2018-03-14

    This study describes the development and utility of a new self-report measure of attentional capacities of adults with Attention Deficit Hyperactivity Disorder (ADHD): the Everyday Life Attention Scale (ELAS). Different from previous attention scales, attentional capacities are rated for nine everyday situations. Study 1 investigated the factor structure, validity, and reliability of the ELAS in 1206 healthy participants. Confirmatory factor analysis supported a situation-specific approach which categorizes everyday attention into nine situation scales: Reading, Movie, Activity, Lecture, Conversation, Assignment, Cooking, Cleaning up, and Driving. Each scale was composed of ratings for sustained, focused, selective, and divided attention as well as motivation, and had good internal consistency. Most scales showed weak correlations with ADHD Symptoms, Executive Functioning, and Memory Efficacy. Study 2 further investigated the sensitivity of the ELAS in 80 adults with ADHD compared to matched healthy controls and a mixed clinical group of 56 patients diagnosed with other psychiatric disorders. Compared to healthy controls, patients with ADHD reported reduced attentional capacities with large effect sizes on all situation scales and had a substantially higher number of situations with impaired attention scores. The ELAS may become useful in the clinical evaluation of ADHD and related psychiatric disorders in adults.

  14. Multi-scale variability and long-range memory in indoor Radon concentrations from Coimbra, Portugal

    NASA Astrophysics Data System (ADS)

    Donner, Reik V.; Potirakis, Stelios; Barbosa, Susana

    2014-05-01

    The presence or absence of long-range correlations in the variations of indoor Radon concentrations has recently attracted considerable interest. As a radioactive gas naturally emitted from the ground in certain geological settings, understanding the environmental factors controlling Radon concentrations and their dynamics is important for estimating its effect on human health and the efficiency of possible measures for reducing the corresponding exposure. In this work, we re-analyze two high-resolution records of indoor Radon concentrations from Coimbra, Portugal, each of which spans several months of continuous measurements. In order to evaluate the presence of long-range correlations and fractal scaling, we utilize a multiplicity of complementary methods, including power spectral analysis, ARFIMA modeling, classical and multi-fractal detrended fluctuation analysis, and two different estimators of the signals' fractal dimensions. Power spectra and fluctuation functions reveal complex behavior with qualitatively different properties on different time scales: white noise in the high-frequency part, indications of a long-range correlated process dominating time scales of several hours to days, and pronounced low-frequency variability associated with tidal and/or meteorological forcing. In order to further decompose these different scales of variability, we apply two different approaches. On the one hand, multi-resolution analysis based on the discrete wavelet transform allows separately studying contributions on different time scales and characterizing their specific correlation and scaling properties. On the other hand, singular system analysis (SSA) provides a reconstruction of the essential modes of variability. Specifically, by considering only the first leading SSA modes, we achieve an efficient de-noising of our environmental signals, highlighting the low-frequency variations together with some distinct scaling on sub-daily time scales resembling the properties of a long-range correlated process.
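
    Of the methods listed above, detrended fluctuation analysis is perhaps the most compact to sketch: integrate the mean-removed series, detrend it in windows of increasing size, and read the scaling exponent from the log-log slope of the fluctuation function. The window sizes, polynomial order, and white-noise test signal below are illustrative assumptions.

    ```python
    import numpy as np

    def dfa(signal, scales, order=1):
        profile = np.cumsum(signal - np.mean(signal))          # integrated, mean-removed series
        flucts = []
        for s in scales:
            n_seg = len(profile) // s
            rms = []
            for i in range(n_seg):
                seg = profile[i * s:(i + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, order), t)   # local polynomial detrending
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            flucts.append(np.mean(rms))
        # scaling exponent alpha from the log-log slope; alpha ~ 0.5 for uncorrelated noise
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    x = np.random.default_rng(3).normal(size=10000)
    print("DFA exponent (white noise, expect ~0.5):", round(dfa(x, [16, 32, 64, 128, 256]), 2))
    ```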

  15. Microbial Analysis of Australian Dry Lake Cores; Analogs For Biogeochemical Processes

    NASA Astrophysics Data System (ADS)

    Nguyen, A. V.; Baldridge, A. M.; Thomson, B. J.

    2014-12-01

    Lake Gilmore in Western Australia is an acidic ephemeral lake that is analogous to Martian geochemical processes represented by interbedded phyllosilicates and sulfates. These areas preserve remnants of a global-scale change on Mars from neutral-to-alkaline pH in the late Noachian era to the relatively lower pH that arose in the Hesperian era and persists today. The geochemistry of these areas could possibly be caused by small-scale changes such as microbial metabolism. Two approaches were used to determine the presence of microbes in the Australian dry lake cores: DNA analysis and lipid analysis. Detecting DNA or lipids in the cores will provide evidence of living or deceased organisms, since both provide distinct markers for life. Basic DNA analysis consists of extraction, amplification through PCR, plasmid cloning, and DNA sequencing. Once the sequence of unknown DNA is known, an online program, BLAST, will be used to identify the microbes for further analysis. The lipid analysis approach consists of phospholipid fatty acid analysis performed by Microbial ID, which will provide direct identification of any microbes from the presence of lipids. Identified microbes are then compared to mineralogy results from the X-ray diffraction of the core samples to determine whether the types of metabolic reactions are consistent with the variation in composition in these analog deposits. If so, this provides intriguing implications for the presence of life in similar Martian deposits.

  16. From crater functions to partial differential equations: a new approach to ion bombardment induced nonequilibrium pattern formation.

    PubMed

    Norris, Scott A; Brenner, Michael P; Aziz, Michael J

    2009-06-03

    We develop a methodology for deriving continuum partial differential equations for the evolution of large-scale surface morphology directly from molecular dynamics simulations of the craters formed from individual ion impacts. Our formalism relies on the separation between the length scale of ion impact and the characteristic scale of pattern formation, and expresses the surface evolution in terms of the moments of the crater function. We demonstrate that the formalism reproduces the classical Bradley-Harper results, as well as ballistic atomic drift, under the appropriate simplifying assumptions. Given an actual set of converged molecular dynamics moments and their derivatives with respect to the incidence angle, our approach can be applied directly to predict the presence and absence of surface morphological instabilities. This analysis represents the first work systematically connecting molecular dynamics simulations of ion bombardment to partial differential equations that govern topographic pattern-forming instabilities.

  17. Measuring students' perceptions of plagiarism: modification and Rasch validation of a plagiarism attitude scale.

    PubMed

    Howard, Steven J; Ehrich, John F; Walton, Russell

    2014-01-01

    Plagiarism is a significant area of concern in higher education, given university students' high self-reported rates of plagiarism. However, research remains inconsistent in prevalence estimates and suggested precursors of plagiarism. This may be a function of the unclear psychometric properties of the measurement tools adopted. To investigate this, we modified an existing plagiarism scale (to broaden its scope), established its psychometric properties using traditional (EFA, Cronbach's alpha) and modern (Rasch analysis) survey evaluation approaches, and examined results of well-functioning items. Results indicated that traditional and modern psychometric approaches differed in their recommendations. Further, responses indicated that although most respondents acknowledged the seriousness of plagiarism, these attitudes were neither unanimous nor consistent across the range of issues assessed. This study thus provides rigorous psychometric testing of a plagiarism attitude scale and baseline data from which to begin a discussion of contextual, personal, and external factors that influence students' plagiarism attitudes.

  18. Weakly nonparallel and curvature effects on stationary crossflow instability: Comparison of results from multiple-scales analysis and parabolized stability equations

    NASA Technical Reports Server (NTRS)

    Singer, Bart A.; Choudhari, Meelan; Li, Fei

    1995-01-01

    A multiple-scales approach is used to approximate the effects of nonparallelism and streamwise surface curvature on the growth of stationary crossflow vortices in incompressible, three-dimensional boundary layers. The results agree with results predicted by solving the parabolized stability equations in regions where the nonparallelism is sufficiently weak. As the nonparallelism increases, the agreement between the two approaches worsens. An attempt has been made to quantify the effect of nonparallelism on flow stability in terms of a nondimensional number that describes the rate of change of the mean flow relative to the disturbance wavelength. We find that this nondimensional number provides useful information about the adequacy of the multiple-scales approximation for different disturbances for a given flow geometry, but the number does not collapse data for different flow geometries onto a single curve.

  19. Multi-Scale Impact and Compression-After-Impact Modeling of Reinforced Benzoxazine/Epoxy Composites using Micromechanics Approach

    NASA Astrophysics Data System (ADS)

    Montero, Marc Villa; Barjasteh, Ehsan; Baid, Harsh K.; Godines, Cody; Abdi, Frank; Nikbin, Kamran

    A multi-scale micromechanics approach, along with a finite element (FE) model predictive tool, is developed to analyze the low-energy-impact damage footprint and compression-after-impact (CAI) behavior of composite laminates, and is tested and verified against experimental data. Effective fiber and matrix properties were reverse-engineered from lamina properties using an optimization algorithm and used to assess damage at the micro level during impact and post-impact FE simulations. Progressive failure dynamic analysis (PFDA) was performed for a two-step process simulation. Damage mechanisms at the micro level were continuously evaluated during the analyses. The contribution of each failure mode was tracked during the simulations, and the damage and delamination footprint size and shape were predicted to understand when, where and why failure occurred during both impact and CAI events. The composite laminate was manufactured by vacuum infusion of an aero-grade toughened Benzoxazine system into the fabric preform. The delamination footprint was measured using C-scan data from the impacted panels and compared with the predicted values obtained from the proposed multi-scale micromechanics approach coupled with FE analysis. Furthermore, the residual strength was predicted from the load-displacement curve and compared with the experimental values as well.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, Lee; Gowardhan, Akshay; Lennox, Kristin

    In the interest of promoting the international exchange of technical expertise, the US Department of Energy’s Office of Emergency Operations (NA-40) and the French Commissariat à l'Energie Atomique et aux énergies alternatives (CEA) requested that the National Atmospheric Release Advisory Center (NARAC) of Lawrence Livermore National Laboratory (LLNL) in Livermore, California host a joint table top exercise with experts in emergency management and atmospheric transport modeling. In this table top exercise, LLNL and CEA compared each other’s flow and dispersion models. The goal of the comparison is to facilitate the exchange of knowledge, capabilities, and practices, and to demonstrate the utility of modeling dispersal at different levels of computational fidelity. Two modeling approaches were examined: a regional-scale modeling approach, appropriate for simple terrain and/or very large releases, and an urban-scale modeling approach, appropriate for small releases in a city environment. This report is a summary of LLNL and CEA modeling efforts from this exercise. Two different types of LLNL and CEA models were employed in the analysis: urban-scale models (Aeolus CFD at LLNL/NARAC and Parallel-Micro-SWIFT-SPRAY, PMSS, at CEA) for analysis of a 5,000 Ci radiological release and Lagrangian particle dispersion models (LODI at LLNL/NARAC and PSPRAY at CEA) for analysis of a much larger (500,000 Ci) regional radiological release. Two densely populated urban locations were chosen: Chicago with its high-rise skyline and gridded street network and Paris with its more consistent, lower building height and complex unaligned street network. Each location was considered under early summer daytime and nighttime conditions. Different levels of fidelity were chosen for each scale: (1) a lower-fidelity mass-consistent diagnostic model, intermediate-fidelity Navier-Stokes RANS models, and higher-fidelity Navier-Stokes LES for urban-scale analysis, and (2) lower-fidelity single-profile meteorology versus a higher-fidelity three-dimensional gridded weather forecast for regional-scale analysis. Tradeoffs between computation time and the fidelity of the results are discussed for both scales. LES, for example, requires nearly 100 times more processor time than the mass-consistent diagnostic model or the RANS model, and seems better able to capture flow entrainment behind tall buildings. As anticipated, results obtained by LLNL and CEA at regional scale around Chicago and Paris look very similar in terms of both atmospheric dispersion of the radiological release and total effective dose. Both LLNL and CEA used the same meteorological data, Lagrangian particle dispersion models, and the same dose coefficients. LLNL and CEA urban-scale modeling results show consistent phenomenological behavior and predict similar impacted areas even though the detailed 3D flow patterns differ, particularly for the Chicago cases where differences in vertical entrainment behind tall buildings are particularly notable. Although RANS and LES (LLNL) models incorporate more detailed physics than do mass-consistent diagnostic flow models (CEA), it is not possible to reach definite conclusions about the prediction fidelity of the various models as experimental measurements were not available for comparison. Stronger conclusions about the relative performances of the models involved and evaluation of the tradeoffs involved in model simplification could be made with a systematic benchmarking of urban-scale modeling. This could be the purpose of a future US/French collaborative exercise.

  1. Disentangling WTP per QALY data: different analytical approaches, different answers.

    PubMed

    Gyrd-Hansen, Dorte; Kjaer, Trine

    2012-03-01

    A large random sample of the Danish general population was asked to value health improvements by way of both the time trade-off elicitation technique and willingness-to-pay (WTP) using contingent valuation methods. The data demonstrate a high degree of heterogeneity across respondents in their relative valuations on the two scales. This has implications for data analysis. We show that the estimates of WTP per QALY are highly sensitive to the analytical strategy. For both open-ended and dichotomous choice data we demonstrate that choice of aggregated approach (ratios of means) or disaggregated approach (means of ratios) affects estimates markedly as does the interpretation of the constant term (which allows for disproportionality across the two scales) in the regression analyses. We propose that future research should focus on why some respondents are unwilling to trade on the time trade-off scale, on how to interpret the constant value in the regression analyses, and on how best to capture the heterogeneity in preference structures when applying mixed multinomial logit. Copyright © 2011 John Wiley & Sons, Ltd.
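
    The aggregated/disaggregated distinction discussed above can be made concrete with a tiny invented example: three hypothetical respondents are enough to show that the ratio of mean WTP to mean QALY gain need not equal the mean of the individual WTP/QALY ratios.

    ```python
    # Three hypothetical respondents (invented numbers, not the study's data).
    wtp  = [1000.0, 1000.0, 6000.0]   # stated willingness to pay for a health improvement
    qaly = [0.01,   0.05,   0.02]     # QALY gain attached to that improvement via time trade-off

    ratio_of_means = (sum(wtp) / len(wtp)) / (sum(qaly) / len(qaly))
    mean_of_ratios = sum(w / q for w, q in zip(wtp, qaly)) / len(wtp)

    print(f"aggregated (ratio of means):    {ratio_of_means:,.0f} per QALY")   # 100,000
    print(f"disaggregated (mean of ratios): {mean_of_ratios:,.0f} per QALY")   # 140,000
    ```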

  2. Confined wormlike chains in external fields

    NASA Astrophysics Data System (ADS)

    Morrison, Greg

    The confinement of biomolecules is ubiquitous in nature, such as the spatial constraints of viral encapsulation, histone binding, and chromosomal packing. Advances in microfluidics and nanopore fabrication have permitted powerful new tools in single molecule manipulation and gene sequencing through molecular confinement as well. In order to fully understand and exploit these systems, the ability to predict the structure of spatially confined molecules is essential. In this talk, I describe a mean field approach to determine the properties of stiff polymers confined to cylinders and slits, which is relevant for a variety of biological and experimental conditions. I show that this approach is able to not only reproduce known scaling laws for confined wormlike chains, but also provides an improvement over existing weakly bending rod approximations in determining the detailed chain properties (such as correlation functions). Using this approach, we also show that it is possible to study the effect of an externally applied tension or static electric field in a natural and analytically tractable way. These external perturbations can alter the scaling laws and introduce important new length scales into the system, relevant for histone unbinding and single-molecule analysis of DNA.

  3. An integrated assessment of location-dependent scaling for microalgae biofuel production facilities

    DOE PAGES

    Coleman, André M.; Abodeely, Jared M.; Skaggs, Richard L.; ...

    2014-06-19

    Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting and design through processing and upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are partially addressed by applying the Integrated Assessment Framework (IAF) – an integrated multi-scale modeling, analysis, and data management suite – to address key issues in developing and operating an open-pond microalgae production facility. This is done by analyzing how variability and uncertainty over space and through time affect feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. To provide a baseline analysis, the IAF was applied in this paper to a set of sites in the southeastern U.S. with the potential to cumulatively produce 5 billion gallons per year. Finally, the results indicate costs can be reduced by scaling downstream processing capabilities to fit site-specific growing conditions, available and economically viable resources, and specific microalgal strains.

  4. Experimental design and quantitative analysis of microbial community multiomics.

    PubMed

    Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis

    2017-11-30

    Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.

  5. Uniform color space analysis of LACIE image products

    NASA Technical Reports Server (NTRS)

    Nalepka, R. F. (Principal Investigator); Balon, R. J.; Cicone, R. C.

    1979-01-01

    The author has identified the following significant results. Analysis and comparison of image products generated by different algorithms show that the scaling and biasing of data channels for control of PFC primaries lead to loss of information (in a probability-of misclassification sense) by two major processes. In order of importance they are: neglecting the input of one channel of data in any one image, and failing to provide sufficient color resolution of the data. The scaling and biasing approach tends to distort distance relationships in data space and provides less than desirable resolution when the data variation is typical of a developed, nonhazy agricultural scene.

  6. A probabilistic approach to quantifying spatial patterns of flow regimes and network-scale connectivity

    NASA Astrophysics Data System (ADS)

    Garbin, Silvia; Alessi Celegon, Elisa; Fanton, Pietro; Botter, Gianluca

    2017-04-01

    The temporal variability of river flow regime is a key feature structuring and controlling fluvial ecological communities and ecosystem processes. In particular, streamflow variability induced by climate/landscape heterogeneities or other anthropogenic factors significantly affects the connectivity between streams with notable implication for river fragmentation. Hydrologic connectivity is a fundamental property that guarantees species persistence and ecosystem integrity in riverine systems. In riverine landscapes, most ecological transitions are flow-dependent and the structure of flow regimes may affect ecological functions of endemic biota (i.e., fish spawning or grazing of invertebrate species). Therefore, minimum flow thresholds must be guaranteed to support specific ecosystem services, like fish migration, aquatic biodiversity and habitat suitability. In this contribution, we present a probabilistic approach aiming at a spatially-explicit, quantitative assessment of hydrologic connectivity at the network-scale as derived from river flow variability. Dynamics of daily streamflows are estimated based on catchment-scale climatic and morphological features, integrating a stochastic, physically based approach that accounts for the stochasticity of rainfall with a water balance model and a geomorphic recession flow model. The non-exceedance probability of ecologically meaningful flow thresholds is used to evaluate the fragmentation of individual stream reaches, and the ensuing network-scale connectivity metrics. A multi-dimensional Poisson Process for the stochastic generation of rainfall is used to evaluate the impact of climate signature on reach-scale and catchment-scale connectivity. The analysis shows that streamflow patterns and network-scale connectivity are influenced by the topology of the river network and the spatial variability of climatic properties (rainfall, evapotranspiration). The framework offers a robust basis for the prediction of the impact of land-use/land-cover changes and river regulation on network-scale connectivity.

  7. MultiMetEval: Comparative and Multi-Objective Analysis of Genome-Scale Metabolic Models

    PubMed Central

    Gevorgyan, Albert; Kierzek, Andrzej M.; Breitling, Rainer; Takano, Eriko

    2012-01-01

    Comparative metabolic modelling is emerging as a novel field, supported by the development of reliable and standardized approaches for constructing genome-scale metabolic models in high throughput. New software solutions are needed to allow efficient comparative analysis of multiple models in the context of multiple cellular objectives. Here, we present the user-friendly software framework Multi-Metabolic Evaluator (MultiMetEval), built upon SurreyFBA, which allows the user to compose collections of metabolic models that together can be subjected to flux balance analysis. Additionally, MultiMetEval implements functionalities for multi-objective analysis by calculating the Pareto front between two cellular objectives. Using a previously generated dataset of 38 actinobacterial genome-scale metabolic models, we show how these approaches can lead to exciting novel insights. Firstly, after incorporating several pathways for the biosynthesis of natural products into each of these models, comparative flux balance analysis predicted that species like Streptomyces that harbour the highest diversity of secondary metabolite biosynthetic gene clusters in their genomes do not necessarily have the metabolic network topology most suitable for compound overproduction. Secondly, multi-objective analysis of biomass production and natural product biosynthesis in these actinobacteria shows that the well-studied occurrence of discrete metabolic switches during the change of cellular objectives is inherent to their metabolic network architecture. Comparative and multi-objective modelling can lead to insights that could not be obtained by normal flux balance analyses. MultiMetEval provides a powerful platform that makes these analyses straightforward for biologists. Sources and binaries of MultiMetEval are freely available from https://github.com/PiotrZakrzewski/MetEval/downloads. PMID:23272111
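
    The multi-objective idea, trading one cellular objective against another, can be sketched with an epsilon-constraint sweep on a toy flux-balance problem: maximize one flux while progressively raising the floor on another. The three-reaction network, bounds, and objective names below are invented and are vastly simpler than the genome-scale models handled by MultiMetEval.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network -- reactions: v0 uptake, v1 "biomass", v2 "product"; single metabolite balance v0 = v1 + v2.
    S = np.array([[1.0, -1.0, -1.0]])
    bounds = [(0, 10), (0, None), (0, None)]

    def max_biomass(min_product):
        # epsilon-constraint: require v2 >= min_product, written as -v2 <= -min_product
        res = linprog(c=[0, -1, 0],                    # maximize v1 by minimizing -v1
                      A_ub=[[0.0, 0.0, -1.0]], b_ub=[-min_product],
                      A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
        return res.x[1] if res.success else float("nan")

    for p in np.linspace(0, 10, 6):                    # sweep the product floor to trace the Pareto front
        print(f"product >= {p:4.1f}  ->  max biomass = {max_biomass(p):.1f}")
    ```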

  8. Removal of Waterborne Particles by Electrofiltration: Pilot-Scale Testing

    EPA Science Inventory

    Theoretical analysis using a trajectory approach indicated that in the presence of an external electric field, charged waterborne particles are subject to an additional migration velocity which increases their deposition on the surface of collectors (e.g. sand filter). In this st...

  9. Large Scale EOF Analysis of Climate Data

    NASA Astrophysics Data System (ADS)

    Prabhat, M.; Gittens, A.; Kashinath, K.; Cavanaugh, N. R.; Mahoney, M.

    2016-12-01

    We present a distributed approach towards extracting EOFs from 3D climate data. We implement the method in Apache Spark, and process multi-TB sized datasets on O(1000-10,000) cores. We apply this method to latitude-weighted ocean temperature data from CFSR, a 2.2 terabyte-sized data set comprising ocean and subsurface reanalysis measurements collected at 41 levels in the ocean, at 6 hour intervals over 31 years. We extract the first 100 EOFs of this full data set and compare to the EOFs computed simply on the surface temperature field. Our analyses provide evidence of Kelvin and Rossby waves and components of large-scale modes of oscillation including the ENSO and PDO that are not visible in the usual SST EOFs. Further, they provide information on the most influential parts of the ocean, such as the thermocline, that exist below the surface. Work is ongoing to understand the factors determining the depth-varying spatial patterns observed in the EOFs. We will experiment with weighting schemes to appropriately account for the differing depths of the observations. We also plan to apply the same distributed approach to the analysis of 3D atmospheric climate data sets, including multiple variables. Because the atmosphere changes on a quicker time scale than the ocean, we expect that the results will demonstrate an even greater advantage to computing 3D EOFs in lieu of 2D EOFs.
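
    At its core, EOF extraction is an SVD of the time-by-space anomaly matrix; the distributed Spark implementation in the study parallelizes exactly this computation. The serial NumPy sketch below shows the basic decomposition on random data; the array sizes are assumptions, and the latitude weighting and 3D ocean structure of the real analysis are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_time, n_grid = 240, 5000                     # e.g. monthly snapshots on a flattened grid (assumed)
    X = rng.normal(size=(n_time, n_grid))

    anomalies = X - X.mean(axis=0)                 # remove the time mean at every grid point
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)

    eofs = Vt[:100]                                # first 100 spatial patterns (EOFs)
    pcs = U[:, :100] * s[:100]                     # corresponding principal-component time series
    explained = s ** 2 / np.sum(s ** 2)
    print("variance explained by first 3 EOFs:", np.round(explained[:3], 4))
    ```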

  10. Spatially-Distributed Cost–Effectiveness Analysis Framework to Control Phosphorus from Agricultural Diffuse Pollution

    PubMed Central

    Geng, Runzhe; Wang, Xiaoyan; Sharpley, Andrew N.; Meng, Fande

    2015-01-01

    Best management practices (BMPs) for agricultural diffuse pollution control are implemented at the field or small-watershed scale. However, quantifying the benefits of BMP implementation for receiving water quality at multiple spatial scales remains an ongoing challenge. In this paper, we introduce an integrated approach that combines risk assessment (i.e., a phosphorus (P) index), model simulation techniques (Hydrological Simulation Program–FORTRAN), and a BMP placement tool at various scales to identify the optimal locations for implementing multiple BMPs and to estimate BMP effectiveness after implementation. A statistically significant decrease in nutrient discharge from watersheds is proposed to evaluate the effectiveness of BMPs strategically targeted within watersheds. Specifically, we estimate two types of cost-effectiveness curves (total pollution reduction and proportion of watersheds improved) for four allocation approaches. Selection of a "best approach" depends on the relative importance of the two types of effectiveness, which involves a value judgment based on the random/aggregated degree of BMP distribution among and within sub-watersheds. A statistical optimization framework is developed and evaluated in the Chaohe River Watershed, located in the northern mountain area of Beijing. Results show that BMP implementation significantly (p < 0.001) decreases P loss from the watershed. Remedial strategies where BMPs were targeted to areas at high risk of P loss decreased P loads compared with strategies where BMPs were randomly located across watersheds. Sensitivity analysis indicated that aggregated BMP placement within particular watersheds is the most cost-effective scenario for decreasing P loss. The optimization approach outlined in this paper is a spatially hierarchical method for targeting nonpoint source controls across a range of scales from field to farm, to watershed, to region. Further, model estimates showed that targeting at multiple scales is necessary to optimize program efficiency. The integrated modeling approach described here, which selects and places BMPs at varying levels of implementation, provides a new theoretical basis and technical guidance for diffuse pollution management in agricultural watersheds. PMID:26313561

  11. Learning in First-Year Biology: Approaches of Distance and On-Campus Students

    NASA Astrophysics Data System (ADS)

    Quinn, Frances Catherine

    2011-01-01

    This paper aims to extend previous research into learning of tertiary biology, by exploring the learning approaches adopted by two groups of students studying the same first-year biology topic in either on-campus or off-campus "distance" modes. The research involved 302 participants, who responded to a topic-specific version of the Study Process Questionnaire, and in-depth interviews with 16 of these students. Several quantitative analytic techniques, including cluster analysis and Rasch differential item functioning analysis, showed that the younger, on-campus cohort made less use of deep approaches, and more use of surface approaches than the older, off-campus group. At a finer scale, clusters of students within these categories demonstrated different patterns of learning approach. Students' descriptions of their learning approaches at interview provided richer complementary descriptions of the approach they took to their study in the topic, showing how deep and surface approaches were manifested in the study context. These findings are critically analysed in terms of recent literature questioning the applicability of learning approaches theory in mass education, and their implications for teaching and research in undergraduate biology.

  12. The Trapping Index: How to integrate the Eulerian and the Lagrangian approach for the computation of the transport time scales of semi-enclosed basins.

    PubMed

    Cucco, Andrea; Umgiesser, Georg

    2015-09-15

    In this work, we investigated if the Eulerian and the Lagrangian approaches for the computation of the Transport Time Scales (TTS) of semi-enclosed water bodies can be used univocally to define the spatial variability of basin flushing features. The Eulerian and Lagrangian TTS were computed for both simplified test cases and a realistic domain: the Venice Lagoon. The results confirmed the two approaches cannot be adopted univocally and that the spatial variability of the water renewal capacity can be investigated only through the computation of both the TTS. A specific analysis, based on the computation of a so-called Trapping Index, was then suggested to integrate the information provided by the two different approaches. The obtained results proved the Trapping Index to be useful to avoid any misleading interpretation due to the evaluation of the basin renewal features just from an Eulerian only or from a Lagrangian only perspective. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. Gravity Scaling of a Power Reactor Water Shield

    NASA Technical Reports Server (NTRS)

    Reid, Robert S.; Pearson, J. Boise

    2007-01-01

    A similarity analysis of a water-based reactor shield examined the effect of gravity on free convection between the inner and outer vessel boundaries of the shield. Two approaches established similarity between operation on the Earth and on the Moon: 1) direct scaling of the Rayleigh number by equating the gravity-surface heat flux products, and 2) holding the temperature difference between the wall and the thermal boundary layer constant. The Nusselt number for natural convection (laminar and turbulent) is assumed to be of the form Nu = C Ra^n.
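
    A short worked example of the first similarity approach, holding the flux-based Rayleigh number fixed by equating the product of gravity and surface heat flux, is given below. The numeric lunar heat flux is an assumed placeholder, not a value from the study.

    ```python
    g_earth = 9.81    # m/s^2
    g_moon  = 1.62    # m/s^2
    q_moon  = 2.0e3   # W/m^2, hypothetical lunar design surface heat flux (placeholder)

    # Flux Rayleigh number Ra_q ~ g * beta * q * L^4 / (k * nu * alpha); with the geometry and
    # fluid properties fixed, matching Ra between tests requires g_earth * q_earth = g_moon * q_moon.
    q_earth_equivalent = q_moon * g_moon / g_earth
    print(f"Earth-test heat flux reproducing the lunar Ra: {q_earth_equivalent:.0f} W/m^2")
    ```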

  14. Large-Scale CTRW Analysis of Push-Pull Tracer Tests and Other Transport in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Hansen, S. K.; Berkowitz, B.

    2014-12-01

    Recently, we developed an alternative CTRW formulation which uses a "latching" upscaling scheme to rigorously map continuous or fine-scale stochastic solute motion onto discrete transitions on an arbitrarily coarse lattice (with spacing potentially on the meter scale or more). This approach enables model simplification, among many other things. Under advection, for example, we see that many relevant anomalous transport problems may be mapped into 1D, with latching to a sequence of successive, uniformly spaced planes. In this formulation (which we term RP-CTRW), the spatial transition vector may generally be made deterministic, with CTRW waiting time distributions encapsulating all the stochastic behavior. We demonstrate the excellent performance of this technique alongside Pareto-distributed waiting times in explaining experiments across a variety of scales using only two degrees of freedom. An interesting new application of the RP-CTRW technique is the analysis of radial (push-pull) tracer tests. Given modern computational power, random walk simulations are a natural fit for the inverse problem of inferring subsurface parameters from push-pull test data, and we propose them as an alternative to the classical type-curve approach. In particular, we explore the visibility of heterogeneity through non-Fickian behavior in push-pull tests, and illustrate the ability of a radial RP-CTRW technique to encapsulate this behavior using a sparse parameterization which has predictive value.
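
    A one-dimensional toy version of the RP-CTRW picture is easy to simulate: spatial transitions between successive planes are deterministic, and all randomness sits in heavy-tailed waiting times. The Pareto exponent, time scale, and particle count below are illustrative assumptions rather than fitted values.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_particles, n_planes = 10000, 20
    beta, t0 = 1.5, 1.0                    # Pareto tail exponent and time scale (assumed)

    # One waiting time per deterministic plane-to-plane transition, per particle.
    waits = t0 * (1.0 + rng.pareto(beta, size=(n_particles, n_planes)))
    arrival = waits.sum(axis=1)            # first-passage time to the final plane

    print("median arrival time:", round(float(np.median(arrival)), 1))
    print("mean arrival time:  ", round(float(arrival.mean()), 1), "(the heavy tail inflates the mean)")
    ```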

  15. Statistical analysis of hydrological response in urbanising catchments based on adaptive sampling using inter-amount times

    NASA Astrophysics Data System (ADS)

    ten Veldhuis, Marie-Claire; Schleiss, Marc

    2017-04-01

    Urban catchments are typically characterised by a more flashy nature of the hydrological response compared to natural catchments. Predicting flow changes associated with urbanisation is not straightforward, as they are influenced by interactions between impervious cover, basin size, drainage connectivity and stormwater management infrastructure. In this study, we present an alternative approach to statistical analysis of hydrological response variability and basin flashiness, based on the distribution of inter-amount times. We analyse inter-amount time distributions of high-resolution streamflow time series for 17 (semi-)urbanised basins in North Carolina, USA, ranging from 13 to 238 km2 in size. We show that in the inter-amount-time framework, sampling frequency is tuned to the local variability of the flow pattern, resulting in a different representation and weighting of high and low flow periods in the statistical distribution. This leads to important differences in the way the distribution quantiles, mean, coefficient of variation and skewness vary across scales and results in lower mean intermittency and improved scaling. Moreover, we show that inter-amount-time distributions can be used to detect regulation effects on flow patterns, identify critical sampling scales and characterise flashiness of hydrological response. The possibility to use both the classical approach and the inter-amount-time framework to identify minimum observable scales and analyse flow data opens up interesting areas for future research.
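
    The inter-amount-time idea can be sketched in a few lines: accumulate the flow, then record the time taken to gather each successive fixed increment of volume instead of sampling the flow at fixed time steps. The synthetic flow series and increment size below are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    flow = np.maximum(rng.normal(5.0, 3.0, size=10000), 0.1)   # toy streamflow series, arbitrary units
    dt = 1.0                                                   # time step of the series

    cumulative = np.cumsum(flow * dt)                          # cumulative "amount"
    increment = 500.0                                          # fixed amount collected per sample
    targets = np.arange(increment, cumulative[-1], increment)
    crossing_idx = np.searchsorted(cumulative, targets)        # step at which each target is first reached
    inter_amount_times = np.diff(np.concatenate(([0], crossing_idx))) * dt

    print("mean inter-amount time:", round(inter_amount_times.mean(), 2))
    print("coefficient of variation:", round(inter_amount_times.std() / inter_amount_times.mean(), 2))
    ```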

  16. Inverse finite-size scaling for high-dimensional significance analysis

    NASA Astrophysics Data System (ADS)

    Xu, Yingying; Puranen, Santeri; Corander, Jukka; Kabashima, Yoshiyuki

    2018-06-01

    We propose an efficient procedure for significance determination in high-dimensional dependence learning based on surrogate data testing, termed inverse finite-size scaling (IFSS). The IFSS method is based on our discovery of a universal scaling property of random matrices which enables inference about signal behavior from much smaller scale surrogate data than the dimensionality of the original data. As a motivating example, we demonstrate the procedure for ultra-high-dimensional Potts models with order of 10^10 parameters. IFSS reduces the computational effort of the data-testing procedure by several orders of magnitude, making it very efficient for practical purposes. This approach thus holds considerable potential for generalization to other types of complex models.

  17. Psychobiological operationalization of RDoC constructs: Methodological and conceptual opportunities and challenges.

    PubMed

    MacNamara, Annmarie; Phan, K Luan

    2016-03-01

    NIMH's Research Domain Criteria (RDoC) project seeks to advance the diagnosis, prevention, and treatment of mental disorders by promoting psychobiological research on dimensional constructs that might cut across traditional diagnostic boundaries (Kozak & Cuthbert, ). At the core of this approach is the notion that these dimensional constructs can be assessed across different units of analysis (e.g., genes, physiology, behavior), enriching the constructs and providing more complete explanations of clinical problems. While the conceptual aspects of RDoC have been discussed in several prior papers, its methodological aspects have received comparatively less attention. For example, how to integrate data from different units of analysis has been relatively unclear. Here, we discuss one means of psychobiologically operationalizing RDoC constructs across different units of analysis (the psychoneurometric approach; Yancey et al., ), highlighting ways in which this approach might be refined in future iterations. We conclude that there is much to be learned from this technique; however, greater attention to scale-development methods and to psychometrics will likely benefit this and other methodological approaches to combining measurements across multiple units of analysis. © 2016 Society for Psychophysiological Research.

  18. Approach for validating actinide and fission product compositions for burnup credit criticality safety analyses

    DOE PAGES

    Radulescu, Georgeta; Gauld, Ian C.; Ilas, Germina; ...

    2014-11-01

    This paper describes a depletion code validation approach for criticality safety analysis using burnup credit for actinide and fission product nuclides in spent nuclear fuel (SNF) compositions. The technical basis for determining the uncertainties in the calculated nuclide concentrations is comparison of calculations to available measurements obtained from destructive radiochemical assay of SNF samples. Probability distributions developed for the uncertainties in the calculated nuclide concentrations were applied to the SNF compositions of a criticality safety analysis model by the use of a Monte Carlo uncertainty sampling method to determine bias and bias uncertainty in the effective neutron multiplication factor. Application of the Monte Carlo uncertainty sampling approach is demonstrated for representative criticality safety analysis models of pressurized water reactor spent fuel pool storage racks and transportation packages using burnup-dependent nuclide concentrations calculated with SCALE 6.1 and the ENDF/B-VII nuclear data. Furthermore, the validation approach and results support a recent revision of the U.S. Nuclear Regulatory Commission Interim Staff Guidance 8.

  19. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    NASA Astrophysics Data System (ADS)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP) model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 and 50 t ha^-1 yr^-1 depending on the spatial location on the hillslope and show only limited correspondence with the results of the 137Cs technique. These differences in absolute soil loss values could be due either to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the 137Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha^-1 yr^-1 from the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in 137Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha^-1 yr^-1 are significantly lower than results obtained at hillslope scale, confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The study has shown that the use of multiple methods facilitates the calibration and validation of models and might provide a more accurate measure for soil erosion rates in ungauged catchments. Moreover, the approach could be used to identify the most appropriate working and operational scales for soil erosion modelling.

  20. Multiscale connectivity and graph theory highlight critical areas for conservation under climate change.

    PubMed

    Dilts, Thomas E; Weisberg, Peter J; Leitner, Philip; Matocq, Marjorie D; Inman, Richard D; Nussear, Kenneth E; Esque, Todd C

    2016-06-01

    Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales, from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multiscale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods, including graph theory, circuit theory, and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this threatened Californian species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping-stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat and should be applicable across a broad range of taxa.
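
    The graph-theoretic ingredient of the framework can be illustrated with a toy patch network: nodes are habitat patches, edges carry a movement cost, and connectivity questions become component and least-cost-path queries. The patch coordinates, distance-based cost, and dispersal threshold below are invented for illustration and are not the study's resistance surfaces.

    ```python
    import itertools
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(7)
    patches = {i: rng.uniform(0, 100, size=2) for i in range(12)}   # patch centroids (km), invented

    G = nx.Graph()
    G.add_nodes_from(patches)
    for i, j in itertools.combinations(patches, 2):
        d = float(np.linalg.norm(patches[i] - patches[j]))
        if d < 40:                              # assumed maximum dispersal distance
            G.add_edge(i, j, cost=d)            # movement cost, here simply Euclidean distance

    print("connected components:", [sorted(c) for c in nx.connected_components(G)])
    if nx.has_path(G, 0, 11):
        print("least-cost path 0 -> 11:", nx.shortest_path(G, 0, 11, weight="cost"))
    ```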

  1. Digital Rocks Portal: a sustainable platform for imaged dataset sharing, translation and automated analysis

    NASA Astrophysics Data System (ADS)

    Prodanovic, M.; Esteva, M.; Hanlon, M.; Nanda, G.; Agarwal, P.

    2015-12-01

    Recent advances in imaging have provided a wealth of 3D datasets that reveal pore space microstructure (nm to cm length scale) and allow investigation of nonlinear flow and mechanical phenomena from first principles using numerical approaches. This framework has popularly been called "digital rock physics". Researchers, however, have trouble storing and sharing the datasets, both because of their size and because of the lack of standardized image types and associated metadata for volumetric datasets. This impedes scientific cross-validation of the numerical approaches that characterize large-scale porous media properties, as well as the development of the multiscale approaches required for correct upscaling. A single research group typically specializes in an imaging modality and/or related modeling on a single length scale, and the lack of data-sharing infrastructure makes it difficult to integrate different length scales. We developed a sustainable, open and easy-to-use repository called the Digital Rocks Portal, that (1) organizes images and related experimental measurements of different porous materials, and (2) improves access to them for a wider community of geoscience and engineering researchers not necessarily trained in computer science or data analysis. Once widely accepted, the repository will jumpstart productivity and enable scientific inquiry and engineering decisions founded on a data-driven basis. This is the first repository of its kind. We show initial results on incorporating essential software tools and pipelines that make it easier for researchers to store and reuse data, and for educators to quickly visualize and illustrate concepts to a wide audience. For data sustainability and continuous access, the portal is implemented within the reliable, 24/7 maintained High Performance Computing Infrastructure supported by the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Long-term storage is provided through the University of Texas System Research Cyber-infrastructure initiative.

  2. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP; Soares, Thereza A.

    2007-12-01

    The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and for longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to the progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much needed rigorous computer science based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research is focusing on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O dominated solution that scales very poorly on parallel machines.
We are currently developing tools specifically intended for use on large-scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its entries within the trajectory, which will typically be spread over multiple files, and reads the appropriate frames independently of all other processors.

  3. Data Intensive Analysis of Biomolecular Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Straatsma, TP

    2008-03-01

    The advances in biomolecular modeling and simulation made possible by the availability of increasingly powerful high performance computing resources are extending molecular simulations to biologically more relevant system sizes and time scales. At the same time, advances in simulation methodologies are allowing more complex processes to be described more accurately. These developments make a systems approach to computational structural biology feasible, but this will require a focused emphasis on the comparative analysis of the increasing number of molecular simulations that are being carried out for biomolecular systems with more realistic models, multi-component environments, and longer simulation times. Just as in the case of the analysis of the large data sources created by the new high-throughput experimental technologies, biomolecular computer simulations contribute to progress in biology through comparative analysis. The continuing increase in available protein structures allows the comparative analysis of the role of structure and conformational flexibility in protein function, and is the foundation of the discipline of structural bioinformatics. This creates the opportunity to derive general findings from the comparative analysis of molecular dynamics simulations of a wide range of proteins, protein-protein complexes and other complex biological systems. Because of the importance of protein conformational dynamics for protein function, it is essential that the analysis of molecular trajectories is carried out using a novel, more integrative and systematic approach. We are developing a much-needed rigorous computer-science-based framework for the efficient analysis of the increasingly large data sets resulting from molecular simulations. Such a suite of capabilities will also provide the required tools for access and analysis of a distributed library of generated trajectories. Our research focuses on the following areas: (1) the development of an efficient analysis framework for very large scale trajectories on massively parallel architectures, (2) the development of novel methodologies that allow automated detection of events in these very large data sets, and (3) the efficient comparative analysis of multiple trajectories. The goal of the presented work is the development of new algorithms that will allow biomolecular simulation studies to become an integral tool to address the challenges of post-genomic biological research. The strategy to deliver the required data intensive computing applications that can effectively deal with the volume of simulation data that will become available is based on taking advantage of the capabilities offered by the use of large globally addressable memory architectures. The first requirement is the design of a flexible underlying data structure for single large trajectories that will form an adaptable framework for a wide range of analysis capabilities. The typical approach to trajectory analysis is to sequentially process trajectories time frame by time frame. This is the implementation found in molecular simulation codes such as NWChem, and has been designed in this way to be able to run on workstation computers and other architectures with an aggregate amount of memory that would not allow entire trajectories to be held in core. The consequence of this approach is an I/O-dominated solution that scales very poorly on parallel machines.
We are currently developing tools specifically intended for use on large-scale machines with sufficient main memory that entire trajectories can be held in core. This greatly reduces the cost of I/O, as trajectories are read only once during the analysis. In our current Data Intensive Analysis (DIANA) implementation, each processor determines and skips to its entries within the trajectory, which will typically be spread over multiple files, and reads the appropriate frames independently of all other processors.
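
    As a rough illustration of the frame-distribution idea described above, the sketch below assigns trajectory frames round-robin to MPI ranks so that each rank reads its frames once and keeps them in memory. It is not the DIANA code; the mpi4py usage, file layout, and the `read_frame` helper are assumptions made for the example.

    ```python
    # Hedged sketch: round-robin distribution of trajectory frames across MPI ranks.
    from mpi4py import MPI
    import numpy as np

    def partition_entries(trajectory_files, frames_per_file, rank, size):
        # Global list of (file, frame-index) entries; each rank takes every
        # size-th entry starting at its own rank and skips all others.
        entries = [(f, i) for f in trajectory_files for i in range(frames_per_file)]
        return entries[rank::size]

    def read_frame(path, index):
        # Hypothetical stand-in for a reader that parses one time frame
        # (all atom coordinates) from the trajectory file at `path`.
        return np.zeros((1000, 3))

    if __name__ == "__main__":
        comm = MPI.COMM_WORLD
        mine = partition_entries(["traj.part1", "traj.part2"], 500,
                                 comm.Get_rank(), comm.Get_size())
        frames = [read_frame(f, i) for f, i in mine]  # held in core from here on
    ```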

  4. Assessing the equivalence of Web-based and paper-and-pencil questionnaires using differential item and test functioning (DIF and DTF) analysis: a case of the Four-Dimensional Symptom Questionnaire (4DSQ).

    PubMed

    Terluin, Berend; Brouwers, Evelien P M; Marchand, Miquelle A G; de Vet, Henrica C W

    2018-05-01

    Many paper-and-pencil (P&P) questionnaires have been migrated to electronic platforms. Differential item and test functioning (DIF and DTF) analysis constitutes a superior research design to assess measurement equivalence across modes of administration. The purpose of this study was to demonstrate an item response theory (IRT)-based DIF and DTF analysis to assess the measurement equivalence of a Web-based version and the original P&P format of the Four-Dimensional Symptom Questionnaire (4DSQ), measuring distress, depression, anxiety, and somatization. The P&P group (n = 2031) and the Web group (n = 958) consisted of primary care psychology clients. Unidimensionality and local independence of the 4DSQ scales were examined using IRT and Yen's Q3. Bifactor modeling was used to assess the scales' essential unidimensionality. Measurement equivalence was assessed using IRT-based DIF analysis using a 3-stage approach: linking on the latent mean and variance, selection of anchor items, and DIF testing using the Wald test. DTF was evaluated by comparing expected scale scores as a function of the latent trait. The 4DSQ scales proved to be essentially unidimensional in both modalities. Five items, belonging to the distress and somatization scales, displayed small amounts of DIF. DTF analysis revealed that the impact of DIF on the scale level was negligible. IRT-based DIF and DTF analysis is demonstrated as a way to assess the equivalence of Web-based and P&P questionnaire modalities. Data obtained with the Web-based 4DSQ are equivalent to data obtained with the P&P version.
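
    To make the DTF step concrete, the sketch below compares expected scale scores across the latent trait under group-specific item parameters. It uses dichotomous 2PL items with made-up parameters purely for illustration; the 4DSQ items are polytomous and the published analysis is more involved.

    ```python
    # Hedged sketch: expected-score comparison (DTF idea) under 2PL item models.
    import numpy as np

    def p_2pl(theta, a, b):
        # Probability of endorsing an item given latent trait theta.
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def expected_score(theta, a, b):
        # Expected scale score = sum of item response probabilities.
        return sum(p_2pl(theta, ai, bi) for ai, bi in zip(a, b))

    theta = np.linspace(-3, 3, 121)
    a_pp,  b_pp  = [1.2, 0.9, 1.5], [-0.5, 0.0, 0.8]   # paper-and-pencil group (made up)
    a_web, b_web = [1.2, 1.0, 1.5], [-0.4, 0.0, 0.8]   # Web group (made up)
    dtf = expected_score(theta, a_web, b_web) - expected_score(theta, a_pp, b_pp)
    print(float(np.abs(dtf).max()))  # worst-case expected-score difference over theta
    ```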

  5. Molecular Imaging of Kerogen and Minerals in Shale Rocks across Micro- and Nano- Scales

    NASA Astrophysics Data System (ADS)

    Hao, Z.; Bechtel, H.; Sannibale, F.; Kneafsey, T. J.; Gilbert, B.; Nico, P. S.

    2016-12-01

    Fourier transform infrared (FTIR) spectroscopy is a reliable and non-destructive quantitative method to evaluate the mineralogy and kerogen content / maturity of shale rocks, although it is traditionally difficult to assess organic and mineralogical heterogeneity at micrometer and nanometer scales because of the diffraction limit of the infrared light. However, it is at exactly these scales that the kerogen and mineral content and their formation in shale rocks determine the quality of a shale gas reserve, the gas flow mechanisms, and the gas production. It is therefore necessary to develop new approaches that can image across both micro- and nanoscales. In this presentation, we describe two new molecular imaging approaches that obtain kerogen and mineral information in shale rocks at unprecedented spatial resolution, and a cross-scale quantitative multivariate analysis method that provides rapid geochemical characterization of large samples. The two imaging approaches are enhanced in the near field by a Ge hemisphere (GE) and by a metallic scanning probe (SINS), respectively. The GE method is a modified microscopic attenuated total reflectance (ATR) method which rapidly captures a chemical image of the shale rock surface at 1 to 5 micrometer resolution with a large field of view of 600 x 600 micrometers, while SINS probes the surface at 20 nm resolution, providing a chemically "deconvoluted" map at the nano-pore level. The detailed geochemical distribution at the nanoscale is then used to build a machine learning model that generates a self-calibrated chemical distribution map at the micrometer scale from the GE images. A number of geochemical components are observed and analyzed across these two important scales, including minerals (oxides, carbonates, sulphides), organics (carbohydrates, aromatics), and adsorbed gases. These approaches are self-calibrated, optics friendly, and non-destructive, so they hold the potential to monitor shale gas flow in real time inside the micro- or nano-pore network, which is of great interest for optimizing shale gas extraction.
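
    The cross-scale calibration step could, in outline, look like the sketch below: a regression model is trained on nanoscale SINS-derived labels and then applied per pixel to the micrometer-scale GE spectra. The feature layout, the choice of a random forest, and the array names are assumptions for illustration, not the authors' implementation.

    ```python
    # Hedged sketch: learn a nanoscale-calibrated model, apply it at the micrometer scale.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def train_cross_scale_model(sins_spectra, sins_kerogen_fraction):
        # sins_spectra: (n_nano_pixels, n_wavenumbers); labels derived from SINS maps.
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(sins_spectra, sins_kerogen_fraction)
        return model

    def predict_micron_map(model, ge_spectra, shape):
        # ge_spectra: (n_micro_pixels, n_wavenumbers) ATR (GE) spectra resampled
        # to the same wavenumber grid; returns a micrometer-scale fraction map.
        return model.predict(ge_spectra).reshape(shape)
    ```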

  6. Eulerian frequency analysis of structural vibrations from high-speed video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venanzoni, Andrea; Siemens Industry Software NV, Interleuvenlaan 68, B-3001 Leuven; De Ryck, Laurent

    An approach for the analysis of the frequency content of structural vibrations from high-speed video recordings is proposed. The techniques and tools proposed rely on an Eulerian approach, that is, using the time history of pixels independently to analyse structural motion, as opposed to Lagrangian approaches, where the motion of the structure is tracked in time. The starting point is an existing Eulerian motion magnification method, which consists in decomposing the video frames into a set of spatial scales through a so-called Laplacian pyramid [1]. Each scale — or level — can be amplified independently to reconstruct a magnified motion of the observed structure. The approach proposed here provides two analysis tools or pre-amplification steps. The first tool provides a representation of the global frequency content of a video per pyramid level. This may be further enhanced by applying an angular filter in the spatial frequency domain to each frame of the video before the Laplacian pyramid decomposition, which allows for the identification of the frequency content of the structural vibrations in a particular direction of space. This proposed tool complements the existing Eulerian magnification method by amplifying selectively the levels containing relevant motion information with respect to their frequency content. This magnifies the displacement while limiting the noise contribution. The second tool is a holographic representation of the frequency content of a vibrating structure, yielding a map of the predominant frequency components across the structure. In contrast to the global frequency content representation of the video, this tool provides a local analysis of the periodic gray-scale intensity changes of the frames in order to identify the vibrating parts of the structure and their main frequencies. Validation cases are provided and the advantages and limits of the approaches are discussed. The first validation case consists of the frequency content retrieval of the tip of a shaker, excited at selected fixed frequencies. The goal of this setup is to retrieve the frequencies at which the tip is excited. The second validation case consists of two thin metal beams connected to a randomly excited bar. It is shown that the holographic representation visually highlights the predominant frequency content of each pixel and locates the global frequencies of the motion, thus retrieving the natural frequencies for each beam.
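
    The Eulerian starting point, analysing each pixel's time history independently, can be illustrated with the short sketch below, which maps the dominant temporal frequency per pixel. It is an illustration of the idea only, not the authors' pyramid-based implementation; `video` is a hypothetical (frames, height, width) array.

    ```python
    # Hedged sketch: per-pixel dominant-frequency map from a gray-scale video.
    import numpy as np

    def dominant_frequency_map(video, fps):
        # video: (n_frames, height, width) gray-scale intensities.
        n_frames = video.shape[0]
        signal = video - video.mean(axis=0)             # remove the static background
        spectrum = np.abs(np.fft.rfft(signal, axis=0))  # per-pixel amplitude spectrum
        freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
        return freqs[np.argmax(spectrum[1:], axis=0) + 1]  # skip the DC bin
    ```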

  7. Climate Risk Informed Decision Analysis: A Hypothetical Application to the Waas Region

    NASA Astrophysics Data System (ADS)

    Gilroy, Kristin; Mens, Marjolein; Haasnoot, Marjolijn; Jeuken, Ad

    2016-04-01

    More frequent and intense hydrologic events under climate change are expected to intensify water security and flood risk management challenges worldwide. Traditional planning approaches must be adapted to address climate change and develop solutions with an appropriate level of robustness and flexibility. The Climate Risk Informed Decision Analysis (CRIDA) method is a novel planning approach embodying a suite of complementary methods, including decision scaling and adaptation pathways. Decision scaling offers a bottom-up approach to assess risk and tailors the complexity of the analysis to the problem at hand and the available capacity. Through adaptation pathways, an array of future strategies towards climate robustness is developed, ranging in flexibility and immediacy of investments. Flexible pathways include transfer points to other strategies to ensure that the system can be adapted if future conditions vary from those expected. CRIDA combines these two approaches in a stakeholder-driven process which guides decision makers through the planning and decision process, taking into account how the confidence in the available science, the consequences in the system, and the capacity of institutions should influence strategy selection. In this presentation, we will explain the CRIDA method and compare it to existing planning processes, such as the US Army Corps of Engineers Principles and Guidelines as well as Integrated Water Resources Management Planning. Then, we will apply the approach to a hypothetical case study for the Waas Region, a large downstream river basin facing rapid development threatened by increased flood risks. Through the case study, we will demonstrate how a stakeholder-driven process can be used to evaluate system robustness to climate change; develop adaptation pathways for multiple objectives and criteria; and illustrate how varying levels of confidence, consequences, and capacity would play a role in the decision making process, specifically with regard to the level of robustness and flexibility in the selected strategy. This work will equip practitioners and decision makers with an example of a structured process for decision making under climate uncertainty that can be scaled as needed to the problem at hand. This presentation builds further on another submitted abstract, "Climate Risk Informed Decision Analysis (CRIDA): A novel practical guidance for Climate Resilient Investments and Planning" by Jeuken et al.

  8. A hybrid fault diagnosis approach based on mixed-domain state features for rotating machinery.

    PubMed

    Xue, Xiaoming; Zhou, Jianzhong

    2017-01-01

    To further improve diagnosis accuracy and efficiency, a hybrid fault diagnosis approach based on mixed-domain state features, which systematically blends statistical analysis and artificial intelligence techniques, is proposed in this work for rolling element bearings. To simplify the fault diagnosis problem, the execution of the proposed method is divided into three steps, i.e., preliminary fault detection, fault type recognition, and fault degree identification. In the first step, a preliminary judgment about the health status of the equipment is made by a statistical analysis method based on permutation entropy. If a fault exists, the following two processes based on the artificial intelligence approach are performed to further recognize the fault type and then identify the fault degree. For the two subsequent steps, mixed-domain state features containing time-domain, frequency-domain and multi-scale features are extracted to represent the fault characteristics under different working conditions. As a powerful time-frequency analysis method, the fast EEMD method is employed to obtain the multi-scale features. Furthermore, because of redundancy and the submergence of useful information in the original feature space, a novel manifold learning method (modified LGPCA) is introduced to obtain low-dimensional representations of the high-dimensional feature space. Finally, two cases, each with 12 working conditions, are employed to evaluate the performance of the proposed method, using vibration signals measured from an experimental rolling element bearing test bench. The analysis results show the effectiveness and superiority of the proposed method, whose diagnostic scheme is well suited to practical application. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
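
    For the preliminary detection step, permutation entropy of a vibration signal can be computed along the lines of the sketch below; the embedding order and delay are illustrative defaults, not the parameters used in the cited study.

    ```python
    # Hedged sketch: normalised permutation entropy of a 1-D vibration signal.
    import numpy as np
    from math import factorial, log

    def permutation_entropy(x, order=3, delay=1):
        x = np.asarray(x, dtype=float)
        n = len(x) - (order - 1) * delay
        counts = {}
        for i in range(n):
            pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
            counts[pattern] = counts.get(pattern, 0) + 1
        probs = np.array(list(counts.values()), dtype=float) / n
        entropy = -(probs * np.log(probs)).sum()
        return entropy / log(factorial(order))   # normalised to [0, 1]
    ```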

  9. Structure from Motion vs. the Kinect: Comparisons of River Field Measurements at the 10^-2 to 10^2 meter Scales

    NASA Astrophysics Data System (ADS)

    Fonstad, M. A.; Dietrich, J. T.

    2014-12-01

    At the very smallest spatial scales of fluvial field analysis, measurements made historically in situ are often now supplemented, or even replaced by, remote sensing methods. This is particularly true in the case of topographic and particle size measurement. In the field, the scales of in situ observation usually range from millimeters up to hundreds of meters. Two recent approaches for remote mapping of river environments at the scales of historical in situ observations are (1) camera-based structure from motion (SfM), and (2) active patterned-light measurement with devices such as the Kinect. Even if only carried by hand, these two approaches can produce topographic datasets over three to four orders of magnitude of spatial scale. Which approach is most useful? Previous studies have demonstrated that both SfM and the Kinect are precise and accurate over in situ field measurement scales; we instead turn to alternate comparative metrics to help determine which tools might be best for our river measurement tasks. These metrics might include (1) the ease of field use, (2) which general environments are or are not amenable to measurement, (3) robustness to changing environmental conditions, (4) ease of data processing, and (5) cost. We test these metrics in a variety of bar-scale fluvial field environments, including a large-river cobble bar, a sand-bedded river point bar, and a complex mountain stream bar. The structure from motion approach is field-equipment inexpensive, is viable over a wide range of environmental conditions, and is highly spatially scalable. The approach requires some type of spatial referencing to make the data useful. The Kinect has the advantages of an almost real-time display of collected data, so problems can be detected quickly, being fast and easy to use, and the data are collected with arbitrary but metric coordinates, so absolute referencing isn't needed to use the data for many problems. It has the disadvantages of its light field generally being unable to penetrate water surfaces, becoming unusable in strong sunlight, and providing so much data as to be sometimes unwieldy in the data processing stage.

  10. Multi-scale mechanics of granular solids from grain-resolved X-ray measurements

    NASA Astrophysics Data System (ADS)

    Hurley, R. C.; Hall, S. A.; Wright, J. P.

    2017-11-01

    This work discusses an experimental technique for studying the mechanics of three-dimensional (3D) granular solids. The approach combines 3D X-ray diffraction and X-ray computed tomography to measure grain-resolved strains, kinematics and contact fabric in the bulk of a granular solid, from which continuum strains, grain stresses, interparticle forces and coarse-grained elasto-plastic moduli can be determined. We demonstrate the experimental approach and analysis of selected results on a sample of 1099 stiff, frictional grains undergoing multiple uniaxial compression cycles. We investigate the inter-particle force network, elasto-plastic moduli and associated length scales, reversibility of mechanical responses during cyclic loading, the statistics of microscopic responses and microstructure-property relationships. This work serves to highlight both the fundamental insight into granular mechanics that is furnished by combined X-ray measurements and describes future directions in the field of granular materials that can be pursued with such approaches.

  11. [The psychometric properties of the Turkish version of Myocardial Infarction Dimensional Assessment Scale (MIDAS)].

    PubMed

    Yılmaz, Emel; Eser, Erhan; Şekuri, Cevad; Kültürsay, Hakan

    2011-08-01

    The purpose of this study was to describe the psychometric properties of the Myocardial Infarction Dimensional Assessment Scale (MIDAS). This is a methodological cultural adaptation study. The MIDAS consists of 35 items covering seven domains: physical activity, insecurity, emotional reaction, dependency, diet, concerns over medication, and side effects, which are rated on a five-point Likert scale from 1 (never) to 5 (always). The highest possible MIDAS score is 100; quality of life (QOL) decreases as the scale score increases. Overall, 185 myocardial infarction (MI) patients were enrolled in this study. Cronbach's alpha was used for the reliability analysis. Criterion validity, structural validity, and a sensitivity analysis approach were used for the validity analysis. The New York Heart Association (NYHA) and the Canadian Cardiovascular Society Functional Classifications (CCSFC) were used for testing the criterion validity, and the SF-36 for construct validity testing of the Turkish version of the MIDAS. Cronbach's alpha values ranged from 0.79 to 0.90 for the seven domains of the scale. No problematic items were observed for the entire scale. Medication-related domains of the MIDAS showed considerable floor effects (35.7%-22.7%). Confirmatory factor analysis indicators [Comparative Fit Index (CFI) = 0.95 and Root Mean Square Error of Approximation (RMSEA) = 0.075] supported the construct validity of the MIDAS. Convergent validity of the MIDAS was confirmed by correlation with SF-36 scales where appropriate. Criterion validity results were also satisfactory when comparing different stages of the NYHA and the CCSFC (p<0.05). Overall, the results revealed that the Turkish version of the MIDAS is a reliable and valid instrument.
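
    For reference, the reliability coefficient reported per domain is Cronbach's alpha, which can be computed as in the sketch below; `items` is a hypothetical respondents-by-items matrix of Likert responses, not the MIDAS data.

    ```python
    # Hedged sketch: Cronbach's alpha for one scale (respondents x items matrix).
    import numpy as np

    def cronbach_alpha(items):
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        sum_item_var = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1.0)) * (1.0 - sum_item_var / total_var)
    ```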

  12. Using High Spatial Resolution Satellite Imagery to Map Forest Burn Severity Across Spatial Scales in a Pine Barrens Ecosystem

    NASA Technical Reports Server (NTRS)

    Meng, Ran; Wu, Jin; Schwager, Kathy L.; Zhao, Feng; Dennison, Philip E.; Cook, Bruce D.; Brewster, Kristen; Green, Timothy M.; Serbin, Shawn P.

    2017-01-01

    As a primary disturbance agent, fire significantly influences local processes and services of forest ecosystems. Although a variety of remote sensing based approaches have been developed and applied to Landsat mission imagery to infer burn severity at 30 m spatial resolution, forest burn severity has seldom been assessed at fine spatial scales (less than or equal to 5 m) from very-high-resolution (VHR) data. We assessed a 432 ha forest fire that occurred in April 2012 on Long Island, New York, within the Pine Barrens region, a unique but imperiled fire-dependent ecosystem in the northeastern United States. The mapping of forest burn severity was explored here at fine spatial scales, for the first time using remotely sensed spectral indices and a set of Multiple Endmember Spectral Mixture Analysis (MESMA) fraction images from bi-temporal - pre- and post-fire event - WorldView-2 (WV-2) imagery at 2 m spatial resolution. We first evaluated our approach using 1 m by 1 m validation points at the sub-crown scale per severity class (i.e. unburned, low, moderate, and high severity) from the post-fire 0.10 m color aerial ortho-photos; then, we validated the burn severity mapping of geo-referenced dominant tree crowns (crown scale) and 15 m by 15 m fixed-area plots (inter-crown scale) with the post-fire 0.10 m aerial ortho-photos and measured crown information of twenty forest inventory plots. Our approach can accurately assess forest burn severity at the sub-crown (overall accuracy of 84% with a Kappa value of 0.77), crown (overall accuracy of 82% with a Kappa value of 0.76), and inter-crown scales (89% of the variation in estimated burn severity ratings, i.e. the Geo-Composite Burn Index (CBI), explained). This work highlights that forest burn severity mapping from VHR data can capture heterogeneous fire patterns at fine spatial scales over large spatial extents. This is important since most ecological processes associated with fire effects vary at scales of less than 30 m, and VHR approaches could significantly advance our ability to characterize fire effects on forest ecosystems.
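
    As a generic illustration of the differenced-spectral-index ingredient (not the WV-2 indices or MESMA fraction images used in the study), a pre/post-fire NDVI difference can be computed as in the sketch below.

    ```python
    # Hedged sketch: differenced NDVI (dNDVI) from pre- and post-fire red/NIR bands.
    import numpy as np

    def ndvi(red, nir):
        return (nir - red) / (nir + red + 1e-9)

    def d_ndvi(pre_red, pre_nir, post_red, post_nir):
        # Larger positive values indicate greater loss of green vegetation; values
        # would then be binned into severity classes against field reference data.
        return ndvi(pre_red, pre_nir) - ndvi(post_red, post_nir)
    ```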

  13. Using high spatial resolution satellite imagery to map forest burn severity across spatial scales in a Pine Barrens ecosystem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Ran; Wu, Jin; Schwager, Kathy L.

    As a primary disturbance agent, fire significantly influences local processes and services of forest ecosystems. Although a variety of remote sensing based approaches have been developed and applied to Landsat mission imagery to infer burn severity at 30 m spatial resolution, forest burn severity has seldom been assessed at fine spatial scales (≤ 5 m) from very-high-resolution (VHR) data. Here we assessed a 432 ha forest fire that occurred in April 2012 on Long Island, New York, within the Pine Barrens region, a unique but imperiled fire-dependent ecosystem in the northeastern United States. The mapping of forest burn severity was explored here at fine spatial scales, for the first time using remotely sensed spectral indices and a set of Multiple Endmember Spectral Mixture Analysis (MESMA) fraction images from bi-temporal — pre- and post-fire event — WorldView-2 (WV-2) imagery at 2 m spatial resolution. We first evaluated our approach using 1 m by 1 m validation points at the sub-crown scale per severity class (i.e. unburned, low, moderate, and high severity) from the post-fire 0.10 m color aerial ortho-photos; then, we validated the burn severity mapping of geo-referenced dominant tree crowns (crown scale) and 15 m by 15 m fixed-area plots (inter-crown scale) with the post-fire 0.10 m aerial ortho-photos and measured crown information of twenty forest inventory plots. Our approach can accurately assess forest burn severity at the sub-crown (overall accuracy of 84% with a Kappa value of 0.77), crown (overall accuracy of 82% with a Kappa value of 0.76), and inter-crown scales (89% of the variation in estimated burn severity ratings, i.e. the Geo-Composite Burn Index (CBI), explained). Lastly, this work highlights that forest burn severity mapping from VHR data can capture heterogeneous fire patterns at fine spatial scales over large spatial extents. This is important since most ecological processes associated with fire effects vary at the < 30 m scale and VHR approaches could significantly advance our ability to characterize fire effects on forest ecosystems.

  14. Using high spatial resolution satellite imagery to map forest burn severity across spatial scales in a Pine Barrens ecosystem

    DOE PAGES

    Meng, Ran; Wu, Jin; Schwager, Kathy L.; ...

    2017-01-21

    As a primary disturbance agent, fire significantly influences local processes and services of forest ecosystems. Although a variety of remote sensing based approaches have been developed and applied to Landsat mission imagery to infer burn severity at 30 m spatial resolution, forest burn severity has seldom been assessed at fine spatial scales (≤ 5 m) from very-high-resolution (VHR) data. Here we assessed a 432 ha forest fire that occurred in April 2012 on Long Island, New York, within the Pine Barrens region, a unique but imperiled fire-dependent ecosystem in the northeastern United States. The mapping of forest burn severity was explored here at fine spatial scales, for the first time using remotely sensed spectral indices and a set of Multiple Endmember Spectral Mixture Analysis (MESMA) fraction images from bi-temporal — pre- and post-fire event — WorldView-2 (WV-2) imagery at 2 m spatial resolution. We first evaluated our approach using 1 m by 1 m validation points at the sub-crown scale per severity class (i.e. unburned, low, moderate, and high severity) from the post-fire 0.10 m color aerial ortho-photos; then, we validated the burn severity mapping of geo-referenced dominant tree crowns (crown scale) and 15 m by 15 m fixed-area plots (inter-crown scale) with the post-fire 0.10 m aerial ortho-photos and measured crown information of twenty forest inventory plots. Our approach can accurately assess forest burn severity at the sub-crown (overall accuracy of 84% with a Kappa value of 0.77), crown (overall accuracy of 82% with a Kappa value of 0.76), and inter-crown scales (89% of the variation in estimated burn severity ratings, i.e. the Geo-Composite Burn Index (CBI), explained). Lastly, this work highlights that forest burn severity mapping from VHR data can capture heterogeneous fire patterns at fine spatial scales over large spatial extents. This is important since most ecological processes associated with fire effects vary at the < 30 m scale and VHR approaches could significantly advance our ability to characterize fire effects on forest ecosystems.

  15. Fine-Scale Structure Design for 3D Printing

    NASA Astrophysics Data System (ADS)

    Panetta, Francis Julian

    Modern additive fabrication technologies can manufacture shapes whose geometric complexities far exceed what existing computational design tools can analyze or optimize. At the same time, falling costs have placed these fabrication technologies within the average consumer's reach. Especially for inexpert designers, new software tools are needed to take full advantage of 3D printing technology. This thesis develops such tools and demonstrates the exciting possibilities enabled by fine-tuning objects at the small scales achievable by 3D printing. The thesis applies two high-level ideas to invent these tools: two-scale design and worst-case analysis. The two-scale design approach addresses the problem that accurately simulating--let alone optimizing--the full-resolution geometry sent to the printer requires orders of magnitude more computational power than currently available. However, we can decompose the design problem into a small-scale problem (designing tileable structures achieving a particular deformation behavior) and a macro-scale problem (deciding where to place these structures in the larger object). This separation is particularly effective, since structures for every useful behavior can be designed once, stored in a database, then reused for many different macroscale problems. Worst-case analysis refers to determining how likely an object is to fracture by studying the worst possible scenario: the forces most efficiently breaking it. This analysis is needed when the designer has insufficient knowledge or experience to predict what forces an object will undergo, or when the design is intended for use in many different scenarios unknown a priori. The thesis begins by summarizing the physics and mathematics necessary to rigorously approach these design and analysis problems. Specifically, the second chapter introduces linear elasticity and periodic homogenization. The third chapter presents a pipeline to design microstructures achieving a wide range of effective isotropic elastic material properties on a single-material 3D printer. It also proposes a macroscale optimization algorithm placing these microstructures to achieve deformation goals under prescribed loads. The thesis then turns to worst-case analysis, first considering the macroscale problem: given a user's design, the fourth chapter aims to determine the distribution of pressures over the surface creating the highest stress at any point in the shape. Solving this problem exactly is difficult, so we introduce two heuristics: one to focus our efforts on only regions likely to concentrate stresses and another converting the pressure optimization into an efficient linear program. Finally, the fifth chapter introduces worst-case analysis at the microscopic scale, leveraging the insight that the structure of periodic homogenization enables us to solve the problem exactly and efficiently. Then we use this worst-case analysis to guide a shape optimization, designing structures with prescribed deformation behavior that experience minimal stresses in generic use.

  16. Efficient Data Mining for Local Binary Pattern in Texture Image Analysis

    PubMed Central

    Kwak, Jin Tae; Xu, Sheng; Wood, Bradford J.

    2015-01-01

    Local binary pattern (LBP) is a simple gray scale descriptor to characterize the local distribution of the grey levels in an image. Multi-resolution LBP and/or combinations of the LBPs have shown to be effective in texture image analysis. However, it is unclear what resolutions or combinations to choose for texture analysis. Examining all the possible cases is impractical and intractable due to the exponential growth in a feature space. This limits the accuracy and time- and space-efficiency of LBP. Here, we propose a data mining approach for LBP, which efficiently explores a high-dimensional feature space and finds a relatively smaller number of discriminative features. The features can be any combinations of LBPs. These may not be achievable with conventional approaches. Hence, our approach not only fully utilizes the capability of LBP but also maintains the low computational complexity. We incorporated three different descriptors (LBP, local contrast measure, and local directional derivative measure) with three spatial resolutions and evaluated our approach using two comprehensive texture databases. The results demonstrated the effectiveness and robustness of our approach to different experimental designs and texture images. PMID:25767332
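
    For reference, the basic 8-neighbour LBP code that underlies these descriptor combinations can be computed as in the sketch below (a plain illustration; the paper's data mining operates on multi-resolution variants and additional descriptors).

    ```python
    # Hedged sketch: basic 8-neighbour local binary pattern codes for a gray image.
    import numpy as np

    def lbp_basic(image):
        img = np.asarray(image, dtype=float)
        h, w = img.shape
        center = img[1:-1, 1:-1]
        code = np.zeros_like(center, dtype=np.uint8)
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
            code |= (neighbour >= center).astype(np.uint8) << bit
        return code   # the histogram of these codes is the texture descriptor
    ```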

  17. Diagnostic performance of an automated analysis software for the diagnosis of Alzheimer’s dementia with 18F FDG PET

    PubMed Central

    Partovi, Sasan; Yuh, Roger; Pirozzi, Sara; Lu, Ziang; Couturier, Spencer; Grosse, Ulrich; Schluchter, Mark D; Nelson, Aaron; Jones, Robert; O’Donnell, James K; Faulhaber, Peter

    2017-01-01

    The objective of this study was to assess the ability of a quantitative software-aided approach to improve the diagnostic accuracy of 18F FDG PET for Alzheimer’s dementia over visual analysis alone. Twenty normal subjects (M:F-12:8; mean age 80.6 years) and twenty mild AD subjects (M:F-12:8; mean age 70.6 years) with 18F FDG PET scans were obtained from the ADNI database. Three blinded readers interpreted these PET images first using a visual qualitative approach and then using a quantitative software-aided approach. Images were classified on two five-point scales based on normal/abnormal (1-definitely normal; 5-definitely abnormal) and presence of AD (1-definitely not AD; 5-definitely AD). Diagnostic sensitivity, specificity, and accuracy for both approaches were compared based on the aforementioned scales. The sensitivity, specificity, and accuracy for the normal vs. abnormal readings of all readers combined were higher when comparing the software-aided vs. visual approach (sensitivity 0.93 vs. 0.83 P = 0.0466; specificity 0.85 vs. 0.60 P = 0.0005; accuracy 0.89 vs. 0.72 P<0.0001). The specificity and accuracy for absence vs. presence of AD of all readers combined were higher when comparing the software-aided vs. visual approach (specificity 0.90 vs. 0.70 P = 0.0008; accuracy 0.81 vs. 0.72 P = 0.0356). Sensitivities of the software-aided and visual approaches did not differ significantly (0.72 vs. 0.73 P = 0.74). The quantitative software-aided approach appears to improve the performance of 18F FDG PET for the diagnosis of mild AD. It may be helpful for experienced 18F FDG PET readers analyzing challenging cases. PMID:28123864
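
    The reported metrics follow the usual definitions; the sketch below shows how sensitivity, specificity, and accuracy would be computed after thresholding the five-point scale (the threshold and data are made up, not the ADNI readings).

    ```python
    # Hedged sketch: sensitivity, specificity, accuracy from thresholded ratings.
    import numpy as np

    def sens_spec_acc(ratings, truth, threshold=4):
        pred = np.asarray(ratings) >= threshold       # reader calls the scan abnormal
        truth = np.asarray(truth).astype(bool)        # reference standard
        tp = np.sum(pred & truth);  fn = np.sum(~pred & truth)
        tn = np.sum(~pred & ~truth); fp = np.sum(pred & ~truth)
        return tp / (tp + fn), tn / (tn + fp), (tp + tn) / truth.size

    print(sens_spec_acc([5, 4, 2, 1, 5, 3], [1, 1, 0, 0, 1, 1]))
    ```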

  18. A new-old approach for shallow landslide analysis and susceptibility zoning in fine-grained weathered soils of southern Italy

    NASA Astrophysics Data System (ADS)

    Cascini, Leonardo; Ciurleo, Mariantonietta; Di Nocera, Silvio; Gullà, Giovanni

    2015-07-01

    Rainfall-induced shallow landslides involve several geo-environmental contexts and different types of soils. In clayey soils, they affect the most superficial layer, which is generally constituted by physically weathered soils characterised by a diffuse pattern of cracks. This type of landslide most commonly occurs in the form of multiple-occurrence landslide phenomena simultaneously involving large areas and thus has several consequences in terms of environmental and economic damage. Indeed, landslide susceptibility zoning is a relevant issue for land use planning and/or design purposes. This study proposes a multi-scale approach to reach this goal. The proposed approach is tested and validated over an area in southern Italy affected by widespread shallow landslides that can be classified as earth slides and earth slide-flows. Specifically, by moving from a small (1:100,000) to a medium scale (1:25,000), with the aid of heuristic and statistical methods, the approach identifies the main factors leading to landslide occurrence and effectively detects the areas potentially affected by these phenomena. Finally, at a larger scale (1:5000), deterministic methods, i.e., physically based models (TRIGRS and TRIGRS-unsaturated), allow quantitative landslide susceptibility assessment, starting from sample areas representative of those that can be affected by shallow landslides. Considering the reliability of the obtained results, the proposed approach seems useful for analysing other case studies in similar geological contexts.

  19. Static Analysis of Large-Scale Multibody System Using Joint Coordinates and Spatial Algebra Operator

    PubMed Central

    Omar, Mohamed A.

    2014-01-01

    Initial transient oscillations in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load prediction, or simulation failure. These transients can result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium problem in large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation uses the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. Then the system connectivity matrix is derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations. PMID:25045732

  20. Static analysis of large-scale multibody system using joint coordinates and spatial algebra operator.

    PubMed

    Omar, Mohamed A

    2014-01-01

    Initial transient oscillations in the dynamic simulation responses of multibody systems can lead to inaccurate results, unrealistic load prediction, or simulation failure. These transients can result from incompatible initial conditions, initial constraint violations, and inadequate kinematic assembly. Performing a static equilibrium analysis before the dynamic simulation can eliminate these transients and lead to a stable simulation. Most existing multibody formulations determine the static equilibrium position by minimizing the system potential energy. This paper presents a new general-purpose approach for solving the static equilibrium problem in large-scale articulated multibody systems. The proposed approach introduces an energy drainage mechanism based on the Baumgarte constraint stabilization approach to determine the static equilibrium position. The spatial algebra operator is used to express the kinematic and dynamic equations of the closed-loop multibody system. The proposed multibody system formulation uses the joint coordinates and modal elastic coordinates as the system generalized coordinates. The recursive nonlinear equations of motion are formulated using the Cartesian coordinates and the joint coordinates to form an augmented set of differential algebraic equations. Then the system connectivity matrix is derived from the system topological relations and used to project the Cartesian quantities into the joint subspace, leading to a minimum set of differential equations.
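
    For context, the textbook form of Baumgarte constraint stabilization replaces the acceleration-level constraint equation with a damped version, as written below; this is the standard formulation on which the paper's energy-drainage mechanism builds, not the paper's specific scheme.

    ```latex
    % Baumgarte stabilization of a position-level constraint C(q, t) = 0,
    % with user-chosen gains \alpha, \beta > 0:
    \ddot{C} + 2\alpha\,\dot{C} + \beta^{2} C = 0
    ```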

  1. Time-and-Spatially Adapting Simulations for Efficient Dynamic Stall Predictions

    DTIC Science & Technology

    2015-09-01

    "Experimental Investigation and Fundamental Understanding of a Full-Scale Slowed Rotor at High Advance Ratios," Journal of the American Helicopter ... remains a major roadblock in the design and analysis of conventional rotors as well as new concepts for future vertical lift. Several approaches to reduce the cost of these dynamic stall simulations for

  2. Genes, Culture and Conservatism-A Psychometric-Genetic Approach.

    PubMed

    Schwabe, Inga; Jonker, Wilfried; van den Berg, Stéphanie M

    2016-07-01

    The Wilson-Patterson conservatism scale was psychometrically evaluated using homogeneity analysis and item response theory models. Results showed that this scale actually measures two different aspects in people: on the one hand people vary in their agreement with either conservative or liberal catch-phrases and on the other hand people vary in their use of the "?" response category of the scale. A 9-item subscale was constructed, consisting of items that seemed to measure liberalism, and this subscale was subsequently used in a biometric analysis including genotype-environment interaction, correcting for non-homogeneous measurement error. Biometric results showed significant genetic and shared environmental influences, and significant genotype-environment interaction effects, suggesting that individuals with a genetic predisposition for conservatism show more non-shared variance but less shared variance than individuals with a genetic predisposition for liberalism.

  3. Utilization of the Building-Block Approach in Structural Mechanics Research

    NASA Technical Reports Server (NTRS)

    Rouse, Marshall; Jegley, Dawn C.; McGowan, David M.; Bush, Harold G.; Waters, W. Allen

    2005-01-01

    In the last 20 years NASA has worked in collaboration with industry to develop enabling technologies needed to make aircraft safer and more affordable, extend their lifetime, improve their reliability, better understand their behavior, and reduce their weight. To support these efforts, research programs starting with ideas and culminating in full-scale structural testing were conducted at the NASA Langley Research Center. Each program contained development efforts that (a) started with selecting the material system and manufacturing approach; (b) moved on to experimentation and analysis of small samples to characterize the system and quantify behavior in the presence of defects like damage and imperfections; (c) progressed on to examining larger structures to examine buckling behavior, combined loadings, and built-up structures; and (d) finally moved to complicated subcomponents and full-scale components. Each step along the way was supported by detailed analysis, including tool development, to prove that the behavior of these structures was well-understood and predictable. This approach for developing technology became known as the "building-block" approach. In the Advanced Composites Technology Program and the High Speed Research Program the building-block approach was used to develop a true understanding of the response of the structures involved through experimentation and analysis. The philosophy that if the structural response couldn't be accurately predicted, it wasn't really understood, was critical to the progression of these programs. To this end, analytical techniques including closed-form and finite elements were employed and experimentation used to verify assumptions at each step along the way. This paper presents a discussion of the utilization of the building-block approach described previously in structural mechanics research and development programs at NASA Langley Research Center. Specific examples that illustrate the use of this approach are included from recent research and development programs for both subsonic and supersonic transports.

  4. A collaborative sequential meta-analysis of individual patient data from randomized trials of endovascular therapy and tPA vs. tPA alone for acute ischemic stroke: ThRombEctomy And tPA (TREAT) analysis: statistical analysis plan for a sequential meta-analysis performed within the VISTA-Endovascular collaboration.

    PubMed

    MacIsaac, Rachael L; Khatri, Pooja; Bendszus, Martin; Bracard, Serge; Broderick, Joseph; Campbell, Bruce; Ciccone, Alfonso; Dávalos, Antoni; Davis, Stephen M; Demchuk, Andrew; Diener, Hans-Christoph; Dippel, Diederik; Donnan, Geoffrey A; Fiehler, Jens; Fiorella, David; Goyal, Mayank; Hacke, Werner; Hill, Michael D; Jahan, Reza; Jauch, Edward; Jovin, Tudor; Kidwell, Chelsea S; Liebeskind, David; Majoie, Charles B; Martins, Sheila Cristina Ouriques; Mitchell, Peter; Mocco, J; Muir, Keith W; Nogueira, Raul; Saver, Jeffrey L; Schonewille, Wouter J; Siddiqui, Adnan H; Thomalla, Götz; Tomsick, Thomas A; Turk, Aquilla S; White, Philip; Zaidat, Osama; Lees, Kennedy R

    2015-10-01

    Endovascular treatment has been shown to restore blood flow effectively. Second-generation medical devices such as stent retrievers are now showing overwhelming efficacy in clinical trials, particularly in conjunction with intravenous recombinant tissue plasminogen activator. This statistical analysis plan, utilizing a novel, sequential approach, describes a prospective, individual patient data analysis of endovascular therapy in conjunction with intravenous recombinant tissue plasminogen activator agreed upon by the Thrombectomy and Tissue Plasminogen Activator Collaborative Group. This protocol specifies the primary outcome for efficacy as a 'favorable' outcome defined by the ordinal distribution of the modified Rankin Scale measured at three months poststroke, but with modified Rankin Scale scores 5 and 6 collapsed into a single category. The primary analysis will aim to answer the questions: 'what is the treatment effect of endovascular therapy with intravenous recombinant tissue plasminogen activator compared to intravenous tissue plasminogen activator alone on the full-scale modified Rankin Scale at 3 months?' and 'to what extent do key patient characteristics influence the treatment effect of endovascular therapy?'. Key secondary outcomes include the effect of endovascular therapy on death within 90 days; analyses of the modified Rankin Scale using dichotomized methods; and effects of endovascular therapy on symptomatic intracranial hemorrhage. Several secondary analyses will be considered, as well as expanding patient cohorts to intravenous recombinant tissue plasminogen activator-ineligible patients, should data allow. This collaborative meta-analysis of individual participant data from randomized trials of endovascular therapy vs. control in conjunction with intravenous thrombolysis will demonstrate the efficacy and generalizability of endovascular therapy with intravenous thrombolysis as a concomitant medication. © 2015 World Stroke Organization.

  5. Ocean wavenumber estimation from wave-resolving time series imagery

    USGS Publications Warehouse

    Plant, N.G.; Holland, K.T.; Haller, M.C.

    2008-01-01

    We review several approaches that have been used to estimate ocean surface gravity wavenumbers from wave-resolving remotely sensed image sequences. Two fundamentally different approaches that utilize these data exist. A power spectral density approach identifies wavenumbers where image intensity variance is maximized. Alternatively, a cross-spectral correlation approach identifies wavenumbers where intensity coherence is maximized. We develop a solution to the latter approach based on a tomographic analysis that utilizes a nonlinear inverse method. The solution is tolerant to noise and other forms of sampling deficiency and can be applied to arbitrary sampling patterns, as well as to full-frame imagery. The solution includes error predictions that can be used for data retrieval quality control and for evaluating sample designs. A quantitative analysis of the intrinsic resolution of the method indicates that the cross-spectral correlation fitting improves resolution by a factor of about ten times as compared to the power spectral density fitting approach. The resolution analysis also provides a rule of thumb for nearshore bathymetry retrievals: short-scale cross-shore patterns may be resolved if they are about ten times longer than the average water depth over the pattern. This guidance can be applied to sample design to constrain both the sensor array (image resolution) and the analysis array (tomographic resolution). © 2008 IEEE.
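
    The cross-spectral idea can be illustrated in its simplest two-point form, as in the sketch below: the phase of the cross-spectrum between two pixel time series separated by a known distance yields a wavenumber estimate. The paper's tomographic nonlinear inversion generalizes this far beyond the toy version shown.

    ```python
    # Hedged sketch: two-point wavenumber estimate from the cross-spectrum phase.
    import numpy as np

    def wavenumber_from_cross_spectrum(i1, i2, dx, fs):
        # i1, i2: intensity time series at two pixels separated by dx (m); fs in Hz.
        i1 = np.asarray(i1, dtype=float) - np.mean(i1)
        i2 = np.asarray(i2, dtype=float) - np.mean(i2)
        cross = np.fft.rfft(i1) * np.conj(np.fft.rfft(i2))
        freqs = np.fft.rfftfreq(len(i1), d=1.0 / fs)
        peak = np.argmax(np.abs(cross[1:])) + 1     # dominant wave frequency bin
        k = np.angle(cross[peak]) / dx              # wavenumber (rad/m), mod 2*pi/dx
        return k, freqs[peak]
    ```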

  6. Scalable non-negative matrix tri-factorization.

    PubMed

    Čopar, Andrej; Žitnik, Marinka; Zupan, Blaž

    2017-01-01

    Matrix factorization is a well established pattern discovery tool that has seen numerous applications in biomedical data analytics, such as gene expression co-clustering, patient stratification, and gene-disease association mining. Matrix factorization learns a latent data model that takes a data matrix and transforms it into a latent feature space enabling generalization, noise removal and feature discovery. However, factorization algorithms are numerically intensive, and hence there is a pressing challenge to scale current algorithms to work with large datasets. Our focus in this paper is matrix tri-factorization, a popular method that is not limited by the assumption of standard matrix factorization about data residing in one latent space. Matrix tri-factorization solves this by inferring a separate latent space for each dimension in a data matrix, and a latent mapping of interactions between the inferred spaces, making the approach particularly suitable for biomedical data mining. We developed a block-wise approach for latent factor learning in matrix tri-factorization. The approach partitions a data matrix into disjoint submatrices that are treated independently and fed into a parallel factorization system. An appealing property of the proposed approach is its mathematical equivalence with serial matrix tri-factorization. In a study on large biomedical datasets we show that our approach scales well on multi-processor and multi-GPU architectures. On a four-GPU system we demonstrate that our approach can be more than 100 times faster than its single-processor counterpart. A general approach for scaling non-negative matrix tri-factorization is proposed. The approach is especially useful for parallel matrix factorization implemented in a multi-GPU environment. We expect the new approach will be useful in emerging procedures for latent factor analysis, notably for data integration, where many large data matrices need to be collectively factorized.
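
    For reference, a plain single-process tri-factorization X ~ U S V^T with standard multiplicative updates is sketched below; the paper's contribution is the block-wise partitioning and multi-GPU execution of such updates, which this toy version does not include.

    ```python
    # Hedged sketch: non-negative matrix tri-factorization X ~ U S V^T with
    # standard multiplicative updates (single process, no block-wise partitioning).
    import numpy as np

    def nmtf(X, k1, k2, n_iter=200, eps=1e-9, seed=0):
        rng = np.random.default_rng(seed)
        n, m = X.shape
        U = rng.random((n, k1)); S = rng.random((k1, k2)); V = rng.random((m, k2))
        for _ in range(n_iter):
            U *= (X @ V @ S.T) / (U @ S @ V.T @ V @ S.T + eps)
            V *= (X.T @ U @ S) / (V @ S.T @ U.T @ U @ S + eps)
            S *= (U.T @ X @ V) / (U.T @ U @ S @ V.T @ V + eps)
        return U, S, V
    ```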

  7. A National Approach for Mapping and Quantifying Habitat-based Biodiversity Metrics Across Multiple Spatial Scales

    EPA Science Inventory

    Ecosystem services, i.e., "services provided to humans from natural systems," have become a key issue of this century in resource management, conservation planning, and environmental decision analysis. Mapping and quantifying ecosystem services have become strategic national inte...

  8. Estimating monetary damages from flooding in the United States under a changing climate

    EPA Science Inventory

    A national-scale analysis of potential changes in monetary damages from flooding under climate change. The approach uses empirically based statistical relationships between historical precipitation and flood damage records from 18 hydrologic regions of the United States, along w...

  9. Analysis of yield and oil from a series of canola breeding trials. Part II. Exploring variety by environment interaction using factor analysis.

    PubMed

    Cullis, B R; Smith, A B; Beeck, C P; Cowling, W A

    2010-11-01

    Exploring and exploiting variety by environment (V × E) interaction is one of the major challenges facing plant breeders. In paper I of this series, we presented an approach to modelling V × E interaction in the analysis of complex multi-environment trials using factor analytic models. In this paper, we develop a range of statistical tools which explore V × E interaction in this context. These tools include graphical displays such as heat-maps of genetic correlation matrices as well as so-called E-scaled uniplots that are a more informative alternative to the classical biplot for large plant breeding multi-environment trials. We also present a new approach to prediction for multi-environment trials that include pedigree information. This approach allows meaningful selection indices to be formed either for potential new varieties or potential parents.

  10. A new framework for comprehensive, robust, and efficient global sensitivity analysis: 2. Application

    NASA Astrophysics Data System (ADS)

    Razavi, Saman; Gupta, Hoshin V.

    2016-01-01

    Based on the theoretical framework for sensitivity analysis called "Variogram Analysis of Response Surfaces" (VARS), developed in the companion paper, we develop and implement a practical "star-based" sampling strategy (called STAR-VARS), for the application of VARS to real-world problems. We also develop a bootstrap approach to provide confidence level estimates for the VARS sensitivity metrics and to evaluate the reliability of inferred factor rankings. The effectiveness, efficiency, and robustness of STAR-VARS are demonstrated via two real-data hydrological case studies (a 5-parameter conceptual rainfall-runoff model and a 45-parameter land surface scheme hydrology model), and a comparison with the "derivative-based" Morris and "variance-based" Sobol approaches are provided. Our results show that STAR-VARS provides reliable and stable assessments of "global" sensitivity across the full range of scales in the factor space, while being 1-2 orders of magnitude more efficient than the Morris or Sobol approaches.
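
    The quantity at the heart of VARS is a directional variogram of the model response; a minimal illustration is sketched below (this is not the STAR-VARS sampling scheme or its sensitivity metrics, and the toy model is made up).

    ```python
    # Hedged sketch: directional variogram of a model response along one factor.
    import numpy as np

    def directional_variogram(model, base_points, dim, h):
        # gamma(h) ~ 0.5 * mean[(y(x + h * e_dim) - y(x))^2] over base points.
        diffs = []
        for x in base_points:
            x_shifted = np.array(x, dtype=float)
            x_shifted[dim] += h
            diffs.append((model(x_shifted) - model(x)) ** 2)
        return 0.5 * float(np.mean(diffs))

    toy = lambda x: np.sin(6 * x[0]) + 0.3 * x[1]      # made-up 2-factor model
    base = np.random.default_rng(1).random((50, 2))    # base points in [0, 1]^2
    print(directional_variogram(toy, base, dim=0, h=0.1))
    ```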

  11. Culture-Dependent and -Independent Identification of Polyphosphate-Accumulating Dechloromonas spp. Predominating in a Full-Scale Oxidation Ditch Wastewater Treatment Plant.

    PubMed

    Terashima, Mia; Yama, Ayano; Sato, Megumi; Yumoto, Isao; Kamagata, Yoichi; Kato, Souichiro

    2016-12-23

    The oxidation ditch process is one of the most economical approaches currently used to simultaneously remove organic carbon, nitrogen, and also phosphorus (P) from wastewater. However, limited information is available on biological P removal in this process. In the present study, microorganisms contributing to P removal in a full-scale oxidation ditch reactor were investigated using culture-dependent and -independent approaches. A microbial community analysis based on 16S rRNA gene sequencing revealed that a phylotype closely related to Dechloromonas spp. in the family Rhodocyclaceae dominated in the oxidation ditch reactor. This dominant Dechloromonas sp. was successfully isolated and subjected to fluorescent staining for polyphosphate, followed by microscopic observations and a spectrofluorometric analysis, which clearly demonstrated that the Dechloromonas isolate exhibited a strong ability to accumulate polyphosphate within its cells. These results indicate the potential key role of Dechloromonas spp. in efficient P removal in the oxidation ditch wastewater treatment process.

  12. Culture-Dependent and -Independent Identification of Polyphosphate-Accumulating Dechloromonas spp. Predominating in a Full-Scale Oxidation Ditch Wastewater Treatment Plant

    PubMed Central

    Terashima, Mia; Yama, Ayano; Sato, Megumi; Yumoto, Isao; Kamagata, Yoichi; Kato, Souichiro

    2016-01-01

    The oxidation ditch process is one of the most economical approaches currently used to simultaneously remove organic carbon, nitrogen, and also phosphorus (P) from wastewater. However, limited information is available on biological P removal in this process. In the present study, microorganisms contributing to P removal in a full-scale oxidation ditch reactor were investigated using culture-dependent and -independent approaches. A microbial community analysis based on 16S rRNA gene sequencing revealed that a phylotype closely related to Dechloromonas spp. in the family Rhodocyclaceae dominated in the oxidation ditch reactor. This dominant Dechloromonas sp. was successfully isolated and subjected to fluorescent staining for polyphosphate, followed by microscopic observations and a spectrofluorometric analysis, which clearly demonstrated that the Dechloromonas isolate exhibited a strong ability to accumulate polyphosphate within its cells. These results indicate the potential key role of Dechloromonas spp. in efficient P removal in the oxidation ditch wastewater treatment process. PMID:27867159

  13. Replica and extreme-value analysis of the Jarzynski free-energy estimator

    NASA Astrophysics Data System (ADS)

    Palassini, Matteo; Ritort, Felix

    2008-03-01

    We analyze the Jarzynski estimator of free-energy differences from nonequilibrium work measurements. By a simple mapping onto Derrida's Random Energy Model, we obtain a scaling limit for the expectation of the bias of the estimator. We then derive analytical approximations in three different regimes of the scaling parameter x = log(N)/W, where N is the number of measurements and W the mean dissipated work. Our approach is valid for a generic distribution of the dissipated work, and is based on a replica symmetry breaking scheme for x >> 1, the asymptotic theory of extreme value statistics for x << 1, and a direct approach for x near one. The combination of the three analytic approximations describes well Monte Carlo data for the expectation value of the estimator, for a wide range of values of N, from N=1 to large N, and for different work distributions. Based on these results, we introduce improved free-energy estimators and discuss the application to the analysis of experimental data.
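
    For reference, the Jarzynski estimator analyzed here computes the free-energy difference from N nonequilibrium work values as dF = -(1/beta) ln[(1/N) sum_i exp(-beta W_i)]. The following minimal numerical sketch (not taken from the cited paper; the Gaussian work distribution and parameter values are illustrative) shows the estimator and its finite-N bias.

    ```python
    import numpy as np

    def jarzynski_estimate(work, beta=1.0):
        """Jarzynski free-energy estimate from N work values:
        dF_hat = -(1/beta) * log( mean( exp(-beta * W) ) )."""
        w = -beta * np.asarray(work)
        # log-sum-exp form for numerical stability
        return -(np.max(w) + np.log(np.mean(np.exp(w - np.max(w))))) / beta

    # Illustrative check with a Gaussian work distribution and mean dissipation W_dis.
    # For Gaussian work consistent with the fluctuation theorem, var(W) = 2*W_dis/beta,
    # so the true free-energy difference equals <W> - W_dis.
    rng = np.random.default_rng(0)
    dF_true, W_dis = 0.0, 5.0                     # assumed values for the sketch
    W = rng.normal(dF_true + W_dis, np.sqrt(2 * W_dis), size=(1000, 100))
    estimates = np.array([jarzynski_estimate(w) for w in W])
    print("mean bias of the estimator:", estimates.mean() - dF_true)
    ```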

  14. New strategy for drug discovery by large-scale association analysis of molecular networks of different species.

    PubMed

    Zhang, Bo; Fu, Yingxue; Huang, Chao; Zheng, Chunli; Wu, Ziyin; Zhang, Wenjuan; Yang, Xiaoyan; Gong, Fukai; Li, Yuerong; Chen, Xiaoyu; Gao, Shuo; Chen, Xuetong; Li, Yan; Lu, Aiping; Wang, Yonghua

    2016-02-25

    The development of modern omics technology has not significantly improved the efficiency of drug development. Rather, precise and targeted drug discovery remains unsolved. Here, a large-scale cross-species molecular network association (CSMNA) approach for targeted drug screening from natural sources is presented. The algorithm integrates molecular network omics data from humans and 267 plants and microbes, establishing the biological relationships between them and extracting evolutionarily convergent chemicals. This technique allows the researcher to assess targeted drugs for specific human diseases based on specific plant or microbe pathways. In a prospective validation, connections between the plant Halliwell-Asada (HA) cycle and the human Nrf2-ARE pathway were verified and the manner by which the HA cycle molecules act on the human Nrf2-ARE pathway as antioxidants was determined. This shows the potential applicability of this approach in drug discovery. The current method integrates disparate evolutionary species into chemico-biologically coherent circuits, suggesting a new cross-species omics analysis strategy for rational drug development.

  15. Large-Scale Overlays and Trends: Visually Mining, Panning and Zooming the Observable Universe.

    PubMed

    Luciani, Timothy Basil; Cherinka, Brian; Oliphant, Daniel; Myers, Sean; Wood-Vasey, W Michael; Labrinidis, Alexandros; Marai, G Elisabeta

    2014-07-01

    We introduce a web-based computing infrastructure to assist the visual integration, mining and interactive navigation of large-scale astronomy observations. Following an analysis of the application domain, we design a client-server architecture to fetch distributed image data and to partition local data into a spatial index structure that allows prefix-matching of spatial objects. In conjunction with hardware-accelerated pixel-based overlays and an online cross-registration pipeline, this approach allows the fetching, displaying, panning and zooming of gigabit panoramas of the sky in real time. To further facilitate the integration and mining of spatial and non-spatial data, we introduce interactive trend images: compact visual representations for identifying outlier objects and for studying trends within large collections of spatial objects of a given class. In a demonstration, images from three sky surveys (SDSS, FIRST and simulated LSST results) are cross-registered and integrated as overlays, allowing cross-spectrum analysis of astronomy observations. Trend images are interactively generated from catalog data and used to visually mine astronomy observations of similar type. The front-end of the infrastructure uses the web technologies WebGL and HTML5 to enable cross-platform, web-based functionality. Our approach attains interactive rendering framerates; its power and flexibility enable it to serve the needs of the astronomy community. Evaluation on three case studies, as well as feedback from domain experts, emphasizes the benefits of this visual approach to the observational astronomy field, and its potential benefits to large-scale geospatial visualization in general.

  16. The origins of modern biodiversity on land

    PubMed Central

    Benton, Michael J.

    2010-01-01

    Comparative studies of large phylogenies of living and extinct groups have shown that most biodiversity arises from a small number of highly species-rich clades. To understand biodiversity, it is important to examine the history of these clades on geological time scales. This is part of a distinct ‘phylogenetic expansion’ view of macroevolution, and contrasts with the alternative, non-phylogenetic ‘equilibrium’ approach to the history of biodiversity. The latter viewpoint focuses on density-dependent models in which all life is described by a single global-scale model, and a case is made here that this approach may be less successful at representing the shape of the evolution of life than the phylogenetic expansion approach. The terrestrial fossil record is patchy, but is adequate for coarse-scale studies of groups such as vertebrates that possess fossilizable hard parts. New methods in phylogenetic analysis, morphometrics and the study of exceptional biotas allow new approaches. Models for diversity regulation through time range from the entirely biotic to the entirely physical, with many intermediates. Tetrapod diversity has risen as a result of the expansion of ecospace, rather than niche subdivision or regional-scale endemicity resulting from continental break-up. Tetrapod communities on land have been remarkably stable and have changed only when there was a revolution in floras (such as the demise of the Carboniferous coal forests, or the Cretaceous radiation of angiosperms) or following particularly severe mass extinction events, such as that at the end of the Permian. PMID:20980315

  17. Femtosecond parabolic pulse shaping in normally dispersive optical fibers.

    PubMed

    Sukhoivanov, Igor A; Iakushev, Sergii O; Shulika, Oleksiy V; Díez, Antonio; Andrés, Miguel

    2013-07-29

    Formation of parabolic pulses at the femtosecond time scale by means of passive nonlinear reshaping in normally dispersive optical fibers is analyzed. Two approaches are examined and compared: parabolic waveform formation in the transient propagation regime and parabolic waveform formation in the steady-state propagation regime. It is found that both approaches can produce parabolic pulses as short as a few hundred femtoseconds using commercially available fibers, a specially designed all-normal-dispersion photonic crystal fiber, and modern femtosecond pump lasers. The ranges of parameters providing parabolic pulse formation at the femtosecond time scale are determined as functions of the initial pulse duration, chirp, and energy. The applicability of different fibers for femtosecond pulse shaping is analyzed. A recommendation for the shortest parabolic pulse formation is made based on the presented analysis.

  18. The Impact of In-situ Chemical Oxidation on Contaminant Mass Discharge: Linking Source-Zone and Plume-Scale Characterizations of Remediation Performance

    NASA Astrophysics Data System (ADS)

    Brusseau, M. L.; Carroll, K. C.; Baker, J. B.; Allen, T.; DiGuiseppi, W.; Hatton, J.; Morrison, C.; Russo, A. E.; Berkompas, J. L.

    2011-12-01

    A large-scale permanganate-based in-situ chemical oxidation (ISCO) effort has been conducted over the past ten years at a federal Superfund site in Tucson, AZ, for which trichloroethene (TCE) is the primary contaminant of concern. Remediation performance was assessed by examining the impact of treatment on contaminant mass discharge, an approach that has been used for only a very few prior ISCO projects. Contaminant mass discharge tests were conducted before and after permanganate injection to measure the impact at the source-zone scale. The results indicate that ISCO caused a significant reduction in mass discharge (approximately 75%). The standard approach of characterizing discharge at the source-zone scale was supplemented with additional characterization at the plume scale, which was evaluated by examining the change in contaminant mass discharge associated with the pump-and-treat system. The integrated contaminant mass discharge decreased by approximately 70%, consistent with the source-zone-scale measurements. The integrated mass discharge rebounded from 0.1 to 0.2 kg/d within one year after cessation of permanganate injections, after which it has been stable for several years. Collection of the integrated contaminant mass discharge data throughout the ISCO treatment period provided a high-resolution, real-time analysis of the site-wide impact of ISCO, thereby linking source-zone remediation to impacts on overall risk. The results indicate that ISCO was successful in reducing contaminant mass discharge at this site, which comprises a highly heterogeneous subsurface environment. Analysis of TCE sediment concentration data for core material collected before and after ISCO supports the hypothesis that the remaining mass discharge is associated in part with poorly-accessible contaminant mass residing within lower-permeability zones.

  19. Impact of in situ chemical oxidation on contaminant mass discharge: linking source-zone and plume-scale characterizations of remediation performance.

    PubMed

    Brusseau, M L; Carroll, K C; Allen, T; Baker, J; Diguiseppi, W; Hatton, J; Morrison, C; Russo, A; Berkompas, J

    2011-06-15

    A large-scale permanganate-based in situ chemical oxidation (ISCO) effort has been conducted over the past ten years at a federal Superfund site in Tucson, AZ, for which trichloroethene (TCE) is the primary contaminant of concern. Remediation performance was assessed by examining the impact of treatment on contaminant mass discharge, an approach that has been used for only a very few prior ISCO projects. Contaminant mass discharge tests were conducted before and after permanganate injection to measure the impact at the source-zone scale. The results indicate that ISCO caused a significant reduction in mass discharge (approximately 75%). The standard approach of characterizing discharge at the source-zone scale was supplemented with additional characterization at the plume scale, which was evaluated by examining the change in contaminant mass discharge associated with the pump-and-treat system. The integrated contaminant mass discharge decreased by approximately 70%, consistent with the source-zone-scale measurements. The integrated mass discharge rebounded from 0.1 to 0.2 kg/d within one year after cessation of permanganate injections, after which it has been stable for several years. Collection of the integrated contaminant mass discharge data throughout the ISCO treatment period provided a high-resolution, real-time analysis of the site-wide impact of ISCO, thereby linking source-zone remediation to impacts on overall risk. The results indicate that ISCO was successful in reducing contaminant mass discharge at this site, which comprises a highly heterogeneous subsurface environment. Analysis of TCE sediment concentration data for core material collected before and after ISCO supports the hypothesis that the remaining mass discharge is associated in part with poorly accessible contaminant mass residing within lower-permeability zones.

  20. EPE analysis of sub-N10 BEoL flow with and without fully self-aligned via using Coventor SEMulator3D

    NASA Astrophysics Data System (ADS)

    Franke, Joern-Holger; Gallagher, Matt; Murdoch, Gayle; Halder, Sandip; Juncker, Aurelie; Clark, William

    2017-03-01

    During the last few decades, the semiconductor industry has been able to scale device performance up while driving costs down. What started off as simple geometrical scaling, driven mostly by advances in lithography, has recently been accompanied by advances in processing techniques and in device architectures. The trend to combine efforts using process technology and lithography is expected to intensify, as further scaling becomes ever more difficult. One promising component of future nodes is "scaling boosters", i.e. processing techniques that enable further scaling. An indispensable component in developing these ever more complex processing techniques is semiconductor process modeling software. Visualization of complex 3D structures in SEMulator3D, along with budget analysis on film thicknesses, CD and etch budgets, allows process integrators to compare flows before any physical wafers are run. Hundreds of "virtual" wafers allow comparison of different processing approaches, along with EUV or DUV patterning options for defined layers and different overlay schemes. This "virtual fabrication" technology produces massively parallel process variation studies that would be highly time-consuming or expensive in experiment. Here, we focus on one particular scaling booster, the fully self-aligned via (FSAV). We compare metal-via-metal (me-via-me) chains with self-aligned and fully self-aligned vias using a calibrated model for imec's N7 BEoL flow. To model overall variability, 3D Monte Carlo modeling of as many variability sources as possible is critical. We use Coventor SEMulator3D to extract minimum me-me distances and contact areas and show how fully self-aligned vias allow better me-via distance control and tighter via-me contact area variability compared with the standard self-aligned via (SAV) approach.

  1. High-resolution Statistics of Solar Wind Turbulence at Kinetic Scales Using the Magnetospheric Multiscale Mission

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chasapis, Alexandros; Matthaeus, W. H.; Parashar, T. N.

    Using data from the Magnetospheric Multiscale (MMS) and Cluster missions obtained in the solar wind, we examine second-order and fourth-order structure functions at varying spatial lags normalized to ion inertial scales. The analysis includes direct two-spacecraft results and single-spacecraft results employing the familiar Taylor frozen-in flow approximation. Several familiar statistical results, including the spectral distribution of energy and the scale-dependent kurtosis, are extended down to unprecedented spatial scales of ∼6 km, approaching electron scales. The Taylor approximation is also confirmed at those small scales, although small deviations are present in the kinetic range. The kurtosis is seen to attain very high values at sub-proton scales, supporting the previously reported suggestion that monofractal behavior may be due to high-frequency plasma waves at kinetic scales.
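
    The scale-dependent kurtosis referred to here is conventionally formed from the second- and fourth-order structure functions of the field increments, K(l) = S4(l) / S2(l)^2. A minimal single-time-series sketch follows (synthetic data, illustrative lags; in a single-spacecraft analysis the time lag would be mapped to a spatial lag via the Taylor hypothesis).

    ```python
    import numpy as np

    def scale_dependent_kurtosis(b, lags):
        """Kurtosis of increments db(l) = b(t + l) - b(t) at each lag:
        K(l) = S4(l) / S2(l)^2, with S_n(l) = <|db|^n>."""
        kurt = []
        for lag in lags:
            db = b[lag:] - b[:-lag]
            s2 = np.mean(db**2)
            s4 = np.mean(db**4)
            kurt.append(s4 / s2**2)
        return np.array(kurt)

    # Synthetic example: Gaussian increments give K ~ 3 at all lags, whereas
    # intermittent turbulence shows K growing toward small lags.
    rng = np.random.default_rng(1)
    b = rng.standard_normal(100_000)
    lags = np.array([1, 2, 4, 8, 16, 32, 64])
    print(scale_dependent_kurtosis(b, lags))   # ~3 for this Gaussian signal
    ```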

  2. An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon

    NASA Technical Reports Server (NTRS)

    Rutherford, Brian

    2000-01-01

    The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided for each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components - one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected and the computational simulations are performed. Examples are provided to demonstrate this approach on small-scale problems. These examples give encouraging results. Directions for further research are indicated.

  3. End to End Digitisation and Analysis of Three-Dimensional Coral Models, from Communities to Corallites.

    PubMed

    Gutierrez-Heredia, Luis; Benzoni, Francesca; Murphy, Emma; Reynaud, Emmanuel G

    2016-01-01

    Coral reefs host nearly 25% of all marine species and provide food sources for half a billion people worldwide, while only a very small percentage have been surveyed. Advances in technology and processing, along with affordable underwater cameras and Internet availability, give us the possibility to provide tools and software to survey entire coral reefs. Holistic ecological analyses of corals require not only the community view (10s to 100s of meters), but also the single colony analysis as well as corallite identification. As corals are three-dimensional, classical approaches to determine percent cover and structural complexity across spatial scales are inefficient, time-consuming and limited to experts. Here we propose an end-to-end approach to estimate these parameters using low-cost equipment (GoPro, Canon) and freeware (123D Catch, Meshmixer and Netfabb), allowing every community to participate in surveys and monitoring of their coral ecosystem. We demonstrate our approach on 9 species of underwater colonies of varying size and morphology. 3D models of underwater colonies, fresh samples and bleached skeletons with high quality texture mapping and detailed topographic morphology were produced, and Surface Area and Volume measurements (parameters widely used for ecological and coral health studies) were calculated and analysed. Moreover, we integrated collected sample models with micro-photogrammetry models of individual corallites to aid identification and colony and polyp scale analysis.
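
    Once a photogrammetry model is exported (e.g. as OBJ or STL), surface area and volume of a colony mesh can be extracted in a few lines. The sketch below uses the trimesh Python library; the file name is hypothetical and the workflow is a generic illustration, not the exact pipeline used in the study.

    ```python
    import trimesh

    # 'colony.obj' is a hypothetical exported photogrammetry model.
    mesh = trimesh.load("colony.obj", force="mesh")

    # Volume is only meaningful for a closed (watertight) mesh; photogrammetry
    # outputs often need hole-filling first.
    print("watertight:", mesh.is_watertight)
    print("surface area:", mesh.area)    # in the model's units squared
    print("volume:", mesh.volume)        # in the model's units cubed
    ```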

  4. End to End Digitisation and Analysis of Three-Dimensional Coral Models, from Communities to Corallites

    PubMed Central

    Gutierrez-Heredia, Luis; Benzoni, Francesca; Murphy, Emma; Reynaud, Emmanuel G.

    2016-01-01

    Coral reefs host nearly 25% of all marine species and provide food sources for half a billion people worldwide, while only a very small percentage have been surveyed. Advances in technology and processing, along with affordable underwater cameras and Internet availability, give us the possibility to provide tools and software to survey entire coral reefs. Holistic ecological analyses of corals require not only the community view (10s to 100s of meters), but also the single colony analysis as well as corallite identification. As corals are three-dimensional, classical approaches to determine percent cover and structural complexity across spatial scales are inefficient, time-consuming and limited to experts. Here we propose an end-to-end approach to estimate these parameters using low-cost equipment (GoPro, Canon) and freeware (123D Catch, Meshmixer and Netfabb), allowing every community to participate in surveys and monitoring of their coral ecosystem. We demonstrate our approach on 9 species of underwater colonies of varying size and morphology. 3D models of underwater colonies, fresh samples and bleached skeletons with high quality texture mapping and detailed topographic morphology were produced, and Surface Area and Volume measurements (parameters widely used for ecological and coral health studies) were calculated and analysed. Moreover, we integrated collected sample models with micro-photogrammetry models of individual corallites to aid identification and colony and polyp scale analysis. PMID:26901845

  5. Effective Rating Scale Development for Speaking Tests: Performance Decision Trees

    ERIC Educational Resources Information Center

    Fulcher, Glenn; Davidson, Fred; Kemp, Jenny

    2011-01-01

    Rating scale design and development for testing speaking is generally conducted using one of two approaches: the measurement-driven approach or the performance data-driven approach. The measurement-driven approach prioritizes the ordering of descriptors onto a single scale. Meaning is derived from the scaling methodology and the agreement of…

  6. Frequency-Specific Fractal Analysis of Postural Control Accounts for Control Strategies

    PubMed Central

    Gilfriche, Pierre; Deschodt-Arsac, Véronique; Blons, Estelle; Arsac, Laurent M.

    2018-01-01

    Diverse indicators of postural control in humans have been explored for decades, mostly based on the trajectory of the center-of-pressure. Classical approaches focus on variability, based on the notion that if a posture is too variable, the subject is not stable. Going deeper, an improved understanding of the underlying physiology has been gained from studying variability in different frequency ranges, pointing to specific short loops (proprioception) and long loops (visuo-vestibular) in neural control. More recently, fractal analyses have proliferated and become useful additional metrics of postural control. They have made it possible to identify two scaling phenomena, in short and long timescales respectively. Here, we show that one of the most widely used methods for fractal analysis, Detrended Fluctuation Analysis, can be enhanced to account for scaling on specific frequency ranges. By computing and filtering a bank of synthetic fractal signals, we established how scaling analysis can be focused on specific frequency components. We called the obtained method Frequency-specific Fractal Analysis (FsFA) and used it to associate the two scaling phenomena of postural control with a proprioceptive-based control loop and a visuo-vestibular-based control loop. Convincing evidence of the method's validity then came from an application to the study of unaltered vs. altered postural control in athletes. Overall, the analysis suggests that at least two timescales contribute to postural control: a velocity-based control in short timescales relying on proprioceptive sensors, and a position-based control in longer timescales with visuo-vestibular sensors, which offers a new view of postural control. Frequency-specific scaling exponents are promising markers of control strategies in humans. PMID:29643816
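
    For orientation, the base method that FsFA extends is standard Detrended Fluctuation Analysis: integrate the signal, detrend it in windows of size n, and read the scaling exponent from the slope of log F(n) versus log n. The sketch below is a generic DFA-1 implementation on synthetic data, not the authors' code.

    ```python
    import numpy as np

    def dfa(x, scales, order=1):
        """DFA-1: returns the fluctuation function F(n) for each window size n.
        The scaling exponent alpha is the slope of log F(n) vs log n."""
        y = np.cumsum(x - np.mean(x))              # integrated (profile) series
        F = []
        for n in scales:
            n_seg = len(y) // n
            segs = y[:n_seg * n].reshape(n_seg, n)
            t = np.arange(n)
            rms = []
            for seg in segs:
                coeffs = np.polyfit(t, seg, order)         # local polynomial trend
                rms.append(np.mean((seg - np.polyval(coeffs, t))**2))
            F.append(np.sqrt(np.mean(rms)))
        return np.array(F)

    rng = np.random.default_rng(2)
    x = rng.standard_normal(10_000)                # white noise: alpha ~ 0.5
    scales = np.array([16, 32, 64, 128, 256])
    F = dfa(x, scales)
    alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
    print("DFA exponent:", alpha)
    ```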

  7. A Visual Analytics Approach for Station-Based Air Quality Data

    PubMed Central

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-01-01

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support. PMID:28029117

  8. A Visual Analytics Approach for Station-Based Air Quality Data.

    PubMed

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-12-24

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.

  9. ARM - Midlatitude Continental Convective Clouds - Single Column Model Forcing (xie-scm_forcing)

    DOE Data Explorer

    Xie, Shaocheng; McCoy, Renata; Zhang, Yunyan

    2012-10-25

    The constrained variational objective analysis approach described in Zhang and Lin [1997] and Zhang et al. [2001] was used to derive the large-scale single-column/cloud resolving model forcing and evaluation data set from the observational data collected during the Midlatitude Continental Convective Clouds Experiment (MC3E), which was conducted during April to June 2011 near the ARM Southern Great Plains (SGP) site. The analysis data cover the period from 00Z 22 April - 21Z 6 June 2011. The forcing data represent an average over the 3 different analysis domains centered at the central facility with diameters of 300 km (the standard SGP forcing domain size), 150 km, and 75 km, as shown in Figure 1. This is to support modeling studies on various-scale convective systems.

  10. Stability and change of ego resiliency from late adolescence to young adulthood: a multiperspective study using the ER89-R Scale.

    PubMed

    Vecchione, Michele; Alessandri, Guido; Barbaranelli, Claudio; Gerbino, Maria

    2010-05-01

    In this research, we examined the psychometric properties of the Revised Ego Resiliency 89 Scale (ER89-R; Alessandri, Vecchio, Steca, Caprara, & Caprara, 2008), a brief self-report measure of ego resiliency. The scale has been used to assess the development of ego resiliency from late adolescence to emerging adulthood, focusing on different ways to define continuity and change. We analyzed longitudinal self-report data from 267 late adolescents (44% male) using 4 different approaches: factor analysis for testing construct continuity, correlational analysis for examining differential stability, latent growth modeling for analyzing mean-level change, and the reliable change index for studying the occurrence of change at the individual level. Converging evidence points to the marked stability of ego resiliency from 16 to 20 years, both for males and females. The scale predicts externalizing and internalizing problems, both concurrently and 2 and 4 years later. Findings suggest that the ER89-R scale represents a valid and reliable instrument that is well suited for studying ego resiliency through various developmental stages.
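
    The reliable change index mentioned here is conventionally the Jacobson-Truax statistic, which scales an individual's pre-post difference by the standard error of the difference. A minimal sketch with purely illustrative numbers (not the study's data):

    ```python
    import math

    def reliable_change_index(x1, x2, sd_baseline, reliability):
        """Jacobson-Truax RCI: (x2 - x1) / S_diff, where
        S_diff = sqrt(2) * SEM and SEM = SD * sqrt(1 - r_xx).
        |RCI| > 1.96 indicates change beyond measurement error (p < .05)."""
        sem = sd_baseline * math.sqrt(1 - reliability)
        s_diff = math.sqrt(2) * sem
        return (x2 - x1) / s_diff

    # Illustrative values only: baseline SD = 0.6, test-retest reliability = .80
    print(reliable_change_index(x1=3.1, x2=3.6, sd_baseline=0.6, reliability=0.80))
    ```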

  11. Refining a self-assessment of informatics competency scale using Mokken scaling analysis.

    PubMed

    Yoon, Sunmoo; Shaffer, Jonathan A; Bakken, Suzanne

    2015-01-01

    Healthcare environments are increasingly implementing health information technology (HIT) and those from various professions must be competent to use HIT in meaningful ways. In addition, HIT has been shown to enable interprofessional approaches to health care. The purpose of this article is to describe the refinement of the Self-Assessment of Nursing Informatics Competencies Scale (SANICS) using analytic techniques based upon item response theory (IRT) and discuss its relevance to interprofessional education and practice. In a sample of 604 nursing students, the 93-item version of SANICS was examined using non-parametric IRT. The iterative modeling procedure included 31 steps comprising: (1) assessing scalability, (2) assessing monotonicity, (3) assessing invariant item ordering, and (4) expert input. SANICS was reduced to an 18-item hierarchical scale with excellent reliability. Fundamental skills for team functioning and shared decision making among team members (e.g. "using monitoring systems appropriately," "describing general systems to support clinical care") had the highest level of difficulty, and "demonstrating basic technology skills" had the lowest difficulty level. Most items reflect informatics competencies relevant to all health professionals. Further, the approaches can be applied to construct a new hierarchical scale or refine an existing scale related to informatics attitudes or competencies for various health professions.
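
    The hierarchical scale reported here rests on scalability coefficients in the Mokken/Loevinger tradition. As a hedged illustration of the underlying statistic (not the software or data used in the study), the item-pair coefficient H for dichotomous items compares observed Guttman errors with their expected count under independence:

    ```python
    import numpy as np

    def item_pair_H(x_easy, x_hard):
        """Loevinger's H for a pair of dichotomous items, with x_easy the more
        popular (easier) item: H = 1 - F / E, where F counts observed Guttman
        errors (hard item endorsed while the easy item is not) and E is the
        expected number of such errors under marginal independence."""
        x_easy, x_hard = np.asarray(x_easy), np.asarray(x_hard)
        n = len(x_easy)
        F = np.sum((x_easy == 0) & (x_hard == 1))
        E = n * np.mean(x_easy == 0) * np.mean(x_hard == 1)
        return 1.0 - F / E

    # Illustrative responses (1 = endorsed); the easy item is endorsed more often.
    easy = [1, 1, 1, 1, 0, 1, 1, 0, 1, 1]
    hard = [1, 0, 1, 0, 0, 0, 1, 0, 0, 1]
    print(item_pair_H(easy, hard))   # 1.0 here: no Guttman errors in this toy data
    ```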

  12. Approaches to 30% Energy Savings at the Community Scale in the Hot-Humid Climate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas-Rees, S.; Beal, D.; Martin, E.

    2013-03-01

    BA-PIRC has worked with several community-scale builders within the hot-humid climate zone to improve the performance of production, or community-scale, housing. Tommy Williams Homes (Gainesville, FL), Lifestyle Homes (Melbourne, FL), and Habitat for Humanity (various locations, FL) have all been continuous partners of the BA Program and are the subjects of this report to document achievement of the Building America goal of 30% whole-house energy savings packages adopted at the community scale. The scope of this report is to demonstrate achievement of these goals through the documentation of production-scale homes built cost-effectively at the community scale, and modeled to reduce whole-house energy use by 30% in the Hot-Humid climate region. Key aspects of this research include determining how to evolve existing energy efficiency packages to produce replicable target savings, identifying what builders' technical assistance needs are for implementation and working with them to create sustainable quality assurance mechanisms, and documenting the commercial viability through neutral cost analysis and market acceptance. This report documents certain barriers builders overcame and the approaches they implemented in order to accomplish Building America (BA) Program goals that have not already been documented in previous reports.

  13. Autonomous smart sensor network for full-scale structural health monitoring

    NASA Astrophysics Data System (ADS)

    Rice, Jennifer A.; Mechitov, Kirill A.; Spencer, B. F., Jr.; Agha, Gul A.

    2010-04-01

    The demands of aging infrastructure require effective methods for structural monitoring and maintenance. Wireless smart sensor networks offer the ability to enhance structural health monitoring (SHM) practices through the utilization of onboard computation to achieve distributed data management. Such an approach is scalable to the large number of sensor nodes required for high-fidelity modal analysis and damage detection. While smart sensor technology is not new, the number of full-scale SHM applications has been limited. This slow progress is due, in part, to the complex network management issues that arise when moving from a laboratory setting to a full-scale monitoring implementation. This paper presents flexible network management software that enables continuous and autonomous operation of wireless smart sensor networks for full-scale SHM applications. The software components combine sleep/wake cycling for enhanced power management with threshold detection for triggering network wide tasks, such as synchronized sensing or decentralized modal analysis, during periods of critical structural response.

  14. Automatic Selection of Order Parameters in the Analysis of Large Scale Molecular Dynamics Simulations.

    PubMed

    Sultan, Mohammad M; Kiss, Gert; Shukla, Diwakar; Pande, Vijay S

    2014-12-09

    Given the large number of crystal structures and NMR ensembles that have been solved to date, classical molecular dynamics (MD) simulations have become powerful tools in the atomistic study of the kinetics and thermodynamics of biomolecular systems on ever increasing time scales. By virtue of the high-dimensional conformational state space that is explored, the interpretation of large-scale simulations faces difficulties not unlike those in the big data community. We address this challenge by introducing a method called clustering based feature selection (CB-FS) that employs a posterior analysis approach. It combines supervised machine learning (SML) and feature selection with Markov state models to automatically identify the relevant degrees of freedom that separate conformational states. We highlight the utility of the method in the evaluation of large-scale simulations and show that it can be used for the rapid and automated identification of relevant order parameters involved in the functional transitions of two exemplary cell-signaling proteins central to human disease states.
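
    The core idea, using a supervised learner to find which degrees of freedom best separate the conformational states identified by a Markov state model, can be sketched generically with scikit-learn. The synthetic features, labels, and the random-forest choice below are illustrative stand-ins, not necessarily the classifier or features used by CB-FS.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # X: per-frame structural features (e.g. distances, dihedrals), shape (n_frames, n_features)
    # labels: MSM/cluster state assignment of each frame
    rng = np.random.default_rng(0)
    n_frames, n_features = 5000, 20
    X = rng.standard_normal((n_frames, n_features))
    labels = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(int)     # synthetic two-state labels

    # Fit a classifier to predict the state label, then rank features by importance;
    # the top-ranked features serve as candidate order parameters.
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    ranking = np.argsort(clf.feature_importances_)[::-1]
    print("features most relevant to the state separation:", ranking[:5])
    ```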

  15. Taking innovative vector control interventions in urban Latin America to scale: lessons learnt from multi-country implementation research.

    PubMed

    Quintero, Juliana; García-Betancourt, Tatiana; Caprara, Andrea; Basso, Cesar; Garcia da Rosa, Elsa; Manrique-Saide, Pablo; Coelho, Giovanini; Sánchez-Tejeda, Gustavo; Dzul-Manzanilla, Felipe; García, Diego Alejandro; Carrasquilla, Gabriel; Alfonso-Sierra, Eduardo; Monteiro Vasconcelos Motta, Cyntia; Sommerfeld, Johannes; Kroeger, Axel

    2017-09-01

    Prior to the current public health emergency following the emergence of chikungunya and Zika Virus Disease in the Americas during 2014 and 2015, multi-country research conducted between 2011 and 2013 investigated the efficacy of novel Aedes aegypti intervention packages through cluster randomised controlled trials in four Latin-American cities: Fortaleza (Brazil), Girardot (Colombia), Acapulco (Mexico) and Salto (Uruguay). Results from the trials led to efforts to scale up the interventions at the city level. Scaling up refers to deliberate efforts to increase the impact of successfully tested health interventions to benefit more people and foster policy and program development in a sustainable way. The different scenarios represent examples of a 'vertical approach' and a 'horizontal approach'. This paper presents the analysis of a preliminary process evaluation of the scaling-up efforts in these cities, with a focus on challenges and enabling factors encountered by the research teams, analysing the main social, political, administrative, financial and acceptance factors.

  16. A Multi-scale Approach for CO2 Accounting and Risk Analysis in CO2 Enhanced Oil Recovery Sites

    NASA Astrophysics Data System (ADS)

    Dai, Z.; Viswanathan, H. S.; Middleton, R. S.; Pan, F.; Ampomah, W.; Yang, C.; Jia, W.; Lee, S. Y.; McPherson, B. J. O. L.; Grigg, R.; White, M. D.

    2015-12-01

    Using carbon dioxide in enhanced oil recovery (CO2-EOR) is a promising technology for emissions management because CO2-EOR can dramatically reduce carbon sequestration costs in the absence of greenhouse gas emissions policies that include incentives for carbon capture and storage. This study develops a multi-scale approach to perform CO2 accounting and risk analysis for understanding CO2 storage potential within an EOR environment at the Farnsworth Unit of the Anadarko Basin in northern Texas. A set of geostatistically based Monte Carlo simulations of CO2-oil-water flow and transport in the Morrow formation is conducted for global sensitivity and statistical analysis of the major risk metrics: CO2 injection rate, CO2 first breakthrough time, CO2 production rate, cumulative net CO2 storage, cumulative oil and CH4 production, and water injection and production rates. A global sensitivity analysis indicates that reservoir permeability, porosity, and thickness are the major intrinsic reservoir parameters that control net CO2 injection/storage and oil/CH4 recovery rates. The well spacing (the distance between the injection and production wells) and the sequence of alternating CO2 and water injection are the major operational parameters for designing an effective five-spot CO2-EOR pattern. The response surface analysis shows that the net CO2 injection rate increases with increasing reservoir thickness, permeability, and porosity. The oil/CH4 production rates are positively correlated to reservoir permeability, porosity and thickness, but negatively correlated to the initial water saturation. The mean and confidence intervals are estimated for quantifying the uncertainty ranges of the risk metrics. The results from this study provide useful insights for understanding the CO2 storage potential and the corresponding risks of commercial-scale CO2-EOR fields.
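
    The variance-based global sensitivity analysis described here can be illustrated on a toy response function with the SALib package, if available. The factor names, bounds, and the stand-in response below are placeholders, not the study's reservoir model or its actual parameter ranges.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # Placeholder factor space standing in for reservoir properties.
    problem = {
        "num_vars": 3,
        "names": ["permeability", "porosity", "thickness"],
        "bounds": [[1.0, 100.0], [0.05, 0.25], [5.0, 50.0]],
    }

    def toy_response(x):
        """Stand-in for a simulator output such as net CO2 storage."""
        perm, poro, thick = x
        return np.log(perm) * poro * thick

    X = saltelli.sample(problem, 1024)             # Saltelli sampling design
    Y = np.apply_along_axis(toy_response, 1, X)
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], Si["S1"])))   # first-order Sobol indices
    print(dict(zip(problem["names"], Si["ST"])))   # total-order Sobol indices
    ```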

  17. Fragment assignment in the cloud with eXpress-D

    PubMed Central

    2013-01-01

    Background: Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results: We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters ("the cloud"). We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available and can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions: The cloud offers one solution for the difficulties faced in the analysis of massive high-throughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems, such as new frameworks like Spark, for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
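
    The batch EM that eXpress-D distributes via Spark alternates between soft-assigning each ambiguously mapped fragment to its candidate targets in proportion to the current abundance estimates (E-step) and re-estimating abundances from those soft assignments (M-step). Below is a minimal, non-distributed sketch of that idea on a toy compatibility structure; it ignores effective lengths and alignment likelihoods, which a real implementation would include.

    ```python
    import numpy as np

    def em_fragment_assignment(compat, n_iter=200):
        """Batch EM for fragment assignment.
        compat[i] lists the target (e.g. transcript) indices that fragment i maps to;
        returns estimated relative abundances per target."""
        n_targets = max(t for targets in compat for t in targets) + 1
        theta = np.full(n_targets, 1.0 / n_targets)         # initial abundances
        for _ in range(n_iter):
            counts = np.zeros(n_targets)
            for targets in compat:                          # E-step: soft assignment
                w = theta[targets]
                counts[targets] += w / w.sum()
            theta = counts / counts.sum()                   # M-step: update abundances
        return theta

    # Toy example: 3 targets, some fragments with ambiguous mappings.
    compat = [np.array(t) for t in ([0], [0, 1], [1, 2], [2], [0, 2], [1])]
    print(em_fragment_assignment(compat))
    ```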

  18. Theoretical approaches to maternal-infant interaction: which approach best discriminates between mothers with and without postpartum depression?

    PubMed

    Logsdon, M Cynthia; Mittelberg, Meghan; Morrison, David; Robertson, Ashley; Luther, James F; Wisniewski, Stephen R; Confer, Andrea; Eng, Heather; Sit, Dorothy K Y; Wisner, Katherine L

    2014-12-01

    The purpose of this study was to determine which of the four common approaches to coding maternal-infant interaction best discriminates between mothers with and without postpartum depression. After extensive training, four research assistants coded 83 three-minute videotapes of maternal-infant interaction at 12-month postpartum visits. Four theoretical approaches to coding (Maternal Behavior Q-Sort, the Dyadic Mini Code, Ainsworth Maternal Sensitivity Scale, and the Child-Caregiver Mutual Regulation Scale) were used. Twelve-month data were chosen to allow the maximum possible exposure of the infant to maternal depression during the first postpartum year. The videotapes were created in a laboratory with standard procedures. Inter-rater reliabilities for each coding method ranged from .7 to .9. The coders were blind to the depression status of the mother. Twenty-seven of the women had major depressive disorder during the 12-month postpartum period. Receiver operating characteristic (ROC) analysis indicated that none of the four methods of analyzing maternal-infant interaction discriminated between mothers with and without major depressive disorder. Limitations of the study include the cross-sectional design and the low number of women with major depressive disorder. Further analysis should include data from videotapes at earlier postpartum time periods, and alternative coding approaches should be considered. Nurses should continue to examine culturally appropriate ways in which new mothers can be supported in how to best nurture their babies. Copyright © 2014 Elsevier Inc. All rights reserved.

  19. Exploring Rating Quality in Rater-Mediated Assessments Using Mokken Scale Analysis

    PubMed Central

    Wind, Stefanie A.; Engelhard, George

    2015-01-01

    Mokken scale analysis is a probabilistic nonparametric approach that offers statistical and graphical tools for evaluating the quality of social science measurement without placing potentially inappropriate restrictions on the structure of a data set. In particular, Mokken scaling provides a useful method for evaluating important measurement properties, such as invariance, in contexts where response processes are not well understood. Because rater-mediated assessments involve complex interactions among many variables, including assessment contexts, student artifacts, rubrics, individual rater characteristics, and others, rater-assigned scores are suitable candidates for Mokken scale analysis. The purposes of this study are to describe a suite of indices that can be used to explore the psychometric quality of data from rater-mediated assessments and to illustrate the substantive interpretation of Mokken-based statistics and displays in this context. Techniques that are commonly used in polytomous applications of Mokken scaling are adapted for use with rater-mediated assessments, with a focus on the substantive interpretation related to individual raters. Overall, the findings suggest that indices of rater monotonicity, rater scalability, and invariant rater ordering based on Mokken scaling provide diagnostic information at the level of individual raters related to the requirements for invariant measurement. These Mokken-based indices serve as an additional suite of diagnostic tools for exploring the quality of data from rater-mediated assessments that can supplement rating quality indices based on parametric models. PMID:29795883

  20. The Rationality/Emotional Defensiveness Scale--II. Convergent and discriminant correlational analysis in males and females with and without cancer.

    PubMed

    Swan, G E; Carmelli, D; Dame, A; Rosenman, R H; Spielberger, C D

    1992-05-01

    The psychological correlates of the Rationality/Emotional Defensiveness Scale and its two subscales were examined in 1236 males and 863 females from the Western Collaborative Group Study. An additional 157 males and 164 females with some form of cancer other than of the skin were also included in this analysis. Characteristics measured included self-reported emotional control, anger expression, trait personality, depressive and neurotic symptomatology, Type A behavior, hostility, and social desirability. Results indicate that the Rationality/Emotional Defensiveness Scale is most strongly related to the suppression and control of emotions, especially anger. Scores on this scale also tend to be associated with less Type A behavior and hostility and with more social conformity. Analysis of the component subscale suggests that Antiemotionality, i.e. the extent to which an individual uses reason and logic to avoid interpersonally related emotions, is most strongly marked by the control of anger, while Rationality, i.e. the extent to which an individual uses reason and logic as a general approach to coping with the environment, is related to the control of anxiety and a higher level of trait curiosity. The psychological interpretation of the scale appears to be largely invariant across gender, unaffected by residualization of the total scale score for its association with Social Desirability, and, except for a few minor instances, unrelated to the diagnosis of cancer.

  1. Robust Maneuvering Envelope Estimation Based on Reachability Analysis in an Optimal Control Formulation

    NASA Technical Reports Server (NTRS)

    Lombaerts, Thomas; Schuet, Stefan R.; Wheeler, Kevin; Acosta, Diana; Kaneshige, John

    2013-01-01

    This paper discusses an algorithm for estimating the safe maneuvering envelope of damaged aircraft. The algorithm performs a robust reachability analysis through an optimal control formulation while making use of time-scale separation and taking into account uncertainties in the aerodynamic derivatives. Starting with an optimal control formulation, the optimization problem can be rewritten as a Hamilton-Jacobi-Bellman equation. This equation can be solved by level set methods. This approach has been applied to an aircraft example involving structural airframe damage. Monte Carlo validation tests have confirmed that this approach is successful in estimating the safe maneuvering envelope for damaged aircraft.
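
    For orientation, a generic textbook form of the level-set reachability formulation is sketched below; the record's specific formulation (with time-scale separation and uncertain aerodynamic derivatives) is more involved, so treat this only as an illustrative shape of the PDE, not the authors' equations.

    ```latex
    % Illustrative backward-reachability formulation via level sets:
    % the target/safe set is encoded as the zero sublevel set of l(x),
    % and V(x,t) propagates it backward under dynamics \dot{x} = f(x,u).
    \begin{aligned}
      V(x, T) &= l(x), \\
      \frac{\partial V}{\partial t}
        + \min\!\Big[\, 0,\; \min_{u \in \mathcal{U}} \nabla V(x,t) \cdot f(x,u) \Big] &= 0, \\
      \mathcal{R}(t) &= \{\, x : V(x, t) \le 0 \,\}.
    \end{aligned}
    ```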

  2. Thermal diffusivity study of aged Li-ion batteries using flash method

    NASA Astrophysics Data System (ADS)

    Nagpure, Shrikant C.; Dinwiddie, Ralph; Babu, S. S.; Rizzoni, Giorgio; Bhushan, Bharat; Frech, Tim

    Advanced Li-ion batteries with high energy and power density are fast approaching compatibility with automotive demands. While the mechanism of operation of these batteries is well understood, the aging mechanisms are still under investigation. Investigation of aging mechanisms in Li-ion batteries becomes very challenging, as aging does not occur due to a single process, but because of multiple physical processes occurring at the same time in a cascading manner. As the current characterization techniques such as Raman spectroscopy, X-ray diffraction, and atomic force microscopy are used independently of each other, they do not provide a comprehensive understanding of material degradation at different length scales (nm² to m²). Thus, to relate the damage mechanisms of the cathode at the millimeter length scale to the micro/nanoscale, data at an intermediate length scale are needed. As such, we demonstrate here the use of thermal diffusivity analysis by the flash method to bridge the gap between different length scales. In this paper we present the thermal diffusivity analysis of an unaged and an aged cell. Thermal diffusivity analysis maps the damage to the cathode samples at millimeter-scale lengths. Based on these maps we also propose a mechanism leading to the increase of the thermal diffusivity as the cells are aged.
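
    For reference, the flash method conventionally recovers thermal diffusivity from the sample thickness and the half-rise time of the rear-face temperature via Parker's relation, shown below; the study's specific correction factors or fitting procedure may differ.

    ```latex
    % Parker et al. (1961) flash-method relation for thermal diffusivity:
    % L is the sample thickness and t_{1/2} the time for the rear face to reach
    % half of its maximum temperature rise.
    \alpha \;=\; \frac{1.38\, L^{2}}{\pi^{2}\, t_{1/2}} \;\approx\; 0.1388\,\frac{L^{2}}{t_{1/2}}
    ```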

  3. Multiscale Analysis of Delamination of Carbon Fiber-Epoxy Laminates with Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Riddick, Jaret C.; Frankland, SJV; Gates, TS

    2006-01-01

    A multi-scale analysis is presented to parametrically describe the Mode I delamination of a carbon fiber/epoxy laminate. In the midplane of the laminate, carbon nanotubes are included for the purposes of selectively enhancing the fracture toughness of the laminate. To analyze the carbon fiber-epoxy-carbon nanotube laminate, the multi-scale methodology presented here links a series of parameterizations taken at various length scales ranging from the atomistic through the micromechanical to the structural level. At the atomistic scale, molecular dynamics simulations are performed in conjunction with an equivalent continuum approach to develop constitutive properties for representative volume elements of the molecular structure of components of the laminate. The molecular-level constitutive results are then used in Mori-Tanaka micromechanics to develop bulk properties for the epoxy-carbon nanotube matrix system. In order to demonstrate a possible application of this multi-scale methodology, a double cantilever beam specimen is modeled. An existing analysis is employed which uses discrete springs to model the fiber bridging effect during delamination propagation. In the absence of empirical data or a damage mechanics model describing the effect of CNTs on fracture toughness, several traction laws are postulated, linking CNT volume fraction to fiber bridging in a DCB specimen. Results from this demonstration are presented in terms of DCB specimen load-displacement responses.

  4. Measuring professional satisfaction in Greek nurses: combination of qualitative and quantitative investigation to evaluate the validity and reliability of the Index of Work Satisfaction.

    PubMed

    Karanikola, Maria N K; Papathanassoglou, Elizabeth D E

    2015-02-01

    The Index of Work Satisfaction (IWS) is a comprehensive scale assessing nurses' professional satisfaction. The aim of the present study was to explore: a) the applicability, reliability and validity of the Greek version of the IWS and b) contrasts among the factors addressed by IWS against the main themes emerging from a qualitative phenomenological investigation of nurses' professional experiences. A descriptive correlational design was applied using a sample of 246 emergency and critical care nurses. Internal consistency and test-retest reliability were tested. Construct and content validity were assessed by factor analysis, and through qualitative phenomenological analysis with a purposive sample of 12 nurses. Scale factors were contrasted to qualitative themes to assure that IWS embraces all aspects of Greek nurses' professional satisfaction. The internal consistency (α = 0.81) and test-retest (tau = 1, p < 0.0001) reliability were adequate. Following appropriate modifications, factor analysis confirmed the construct validity of the scale and subscales. The qualitative data partially clarified the low reliability of one subscale. The Greek version of the IWS scale is supported for use in acute care. The mixed methods approach constitutes a powerful tool for transferring scales to different cultures and healthcare systems. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Does the extended Glasgow Outcome Scale add value to the conventional Glasgow Outcome Scale?

    PubMed

    Weir, James; Steyerberg, Ewout W; Butcher, Isabella; Lu, Juan; Lingsma, Hester F; McHugh, Gillian S; Roozenbeek, Bob; Maas, Andrew I R; Murray, Gordon D

    2012-01-01

    The Glasgow Outcome Scale (GOS) is firmly established as the primary outcome measure for use in Phase III trials of interventions in traumatic brain injury (TBI). However, the GOS has been criticized for its lack of sensitivity to detect small but clinically relevant changes in outcome. The Glasgow Outcome Scale-Extended (GOSE) potentially addresses this criticism, and in this study we estimate the efficiency gain associated with using the GOSE in place of the GOS in ordinal analysis of 6-month outcome. The study uses both simulation and the reanalysis of existing data from two completed TBI studies, one an observational cohort study and the other a randomized controlled trial. As expected, the results show that using an ordinal technique to analyze the GOS gives a substantial gain in efficiency relative to the conventional analysis, which collapses the GOS onto a binary scale (favorable versus unfavorable outcome). We also found that using the GOSE gave a modest but consistent increase in efficiency relative to the GOS in both studies, corresponding to a reduction in the required sample size of the order of 3-5%. We recommend that the GOSE be used in place of the GOS as the primary outcome measure in trials of TBI, with an appropriate ordinal approach being taken to the statistical analysis.

  6. High-throughput nanoparticle sizing using lensfree holographic microscopy and liquid nanolenses (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    McLeod, Euan

    2016-03-01

    The sizing of individual nanoparticles and the recovery of the distributions of sizes from populations of nanoparticles provide valuable information in virology, exosome analysis, air and water quality monitoring, and nanomaterials synthesis. Conventional approaches for nanoparticle sizing include those based on costly or low-throughput laboratory-scale equipment such as transmission electron microscopy or nanoparticle tracking analysis, as well as those approaches that only provide population-averaged quantities, such as dynamic light scattering. Some of these limitations can be overcome using a new family of alternative approaches based on quantitative phase imaging that combines lensfree holographic on-chip microscopy with self-assembled liquid nanolenses. In these approaches, the particles of interest are deposited onto a glass coverslip and the sample is coated with either pure liquid polyethylene glycol (PEG) or aqueous solutions of PEG. Due to surface tension, the PEG self-assembles into nano-scale lenses around the particles of interest. These nanolenses enhance the scattering signatures of the embedded particles such that individual nanoparticles as small as 40 nm are clearly visible in phase images reconstructed from captured holograms. The magnitude of the phase quantitatively corresponds to particle size with an accuracy of +/-11 nm. This family of approaches can individually size more than 10^5 particles in parallel, can handle a large dynamic range of particle sizes (40 nm - 100s of microns), and can accurately size multi-modal distributions of particles. Furthermore, the entire approach has been implemented in a compact and cost-effective device suitable for use in the field or in low-resource settings.

  7. Diagnostic Analysis of Ozone Concentrations Simulated by Two Regional-Scale Air Quality Models

    EPA Science Inventory

    Since the Community Multiscale Air Quality modeling system (CMAQ) and the Weather Research and Forecasting with Chemistry model (WRF/Chem) use different approaches to simulate the interaction of meteorology and chemistry, this study compares the CMAQ and WRF/Chem air quality simu...

  8. Bacterial discrimination by means of a universal array approach mediated by LDR (ligase detection reaction)

    PubMed Central

    Busti, Elena; Bordoni, Roberta; Castiglioni, Bianca; Monciardini, Paolo; Sosio, Margherita; Donadio, Stefano; Consolandi, Clarissa; Rossi Bernardi, Luigi; Battaglia, Cristina; De Bellis, Gianluca

    2002-01-01

    Background: PCR amplification of bacterial 16S rRNA genes provides the most comprehensive and flexible means of sampling bacterial communities. Sequence analysis of these cloned fragments can provide a qualitative and quantitative insight into the microbial population under scrutiny, although this approach is not suited to large-scale screenings. Other methods, such as denaturing gradient gel electrophoresis, heteroduplex or terminal restriction fragment analysis, are rapid and therefore amenable to field-scale experiments. A very recent addition to these analytical tools is represented by microarray technology. Results: Here we present our results using a Universal DNA Microarray approach as an analytical tool for bacterial discrimination. The proposed procedure is based on the properties of the DNA ligation reaction and requires the design of two probes specific for each target sequence. One oligo carries a fluorescent label and the other a unique sequence (cZipCode or complementary ZipCode) which identifies a ligation product. Ligated fragments, obtained in the presence of a proper template (a PCR-amplified fragment of the 16S rRNA gene), contain either the fluorescent label or the unique sequence and therefore are addressed to the location on the microarray where the ZipCode sequence has been spotted. Such an array is therefore "Universal", being unrelated to a specific molecular analysis. Here we present the design of probes specific for some groups of bacteria and their application to bacterial diagnostics. Conclusions: The combined use of selective probes, ligation reaction and the Universal Array approach yielded an analytical procedure with a good power of discrimination among bacteria. PMID:12243651

  9. A Scale-up Approach for Film Coating Process Based on Surface Roughness as the Critical Quality Attribute.

    PubMed

    Yoshino, Hiroyuki; Hara, Yuko; Dohi, Masafumi; Yamashita, Kazunari; Hakomori, Tadashi; Kimura, Shin-Ichiro; Iwao, Yasunori; Itai, Shigeru

    2018-04-01

    Scale-up approaches for the film coating process have been established for each type of film coating equipment from thermodynamic and mechanical analyses for several decades. The objective of the present study was to establish a versatile scale-up approach for the film coating process applicable to commercial production that is based on a critical quality attribute (CQA) using the Quality by Design (QbD) approach and is independent of the equipment used. Experiments on a pilot scale using the Design of Experiment (DoE) approach were performed to find a suitable CQA from among surface roughness, contact angle, color difference, and coating film properties measured by terahertz spectroscopy. Surface roughness was determined to be a suitable CQA from a quantitative appearance evaluation. When surface roughness was fixed as the CQA, the water content of the film-coated tablets was determined to be the critical material attribute (CMA), a parameter that does not depend on scale or equipment. Finally, to verify the scale-up approach determined from the pilot scale, experiments on a commercial scale were performed. The good correlation between the surface roughness (CQA) and the water content (CMA) identified at the pilot scale was also retained at the commercial scale, indicating that our proposed method should be useful as a scale-up approach for the film coating process.
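
    A minimal sketch of how the reported CQA-CMA relationship could be checked across scales, assuming an ordinary least squares fit on pilot-scale data and a correlation check on commercial-scale runs; the variable names and values below are hypothetical, not data from the study.

    ```python
    import numpy as np

    # Hypothetical pilot-scale DoE results: tablet water content (%) vs surface roughness (um).
    water_pilot = np.array([1.2, 1.6, 2.0, 2.4, 2.8])
    rough_pilot = np.array([3.9, 3.4, 2.9, 2.5, 2.1])

    # Fit the pilot-scale CMA -> CQA relationship with ordinary least squares.
    slope, intercept = np.polyfit(water_pilot, rough_pilot, deg=1)

    # Hypothetical commercial-scale runs; check that the pilot-scale model still predicts well.
    water_comm = np.array([1.5, 2.1, 2.6])
    rough_comm = np.array([3.5, 2.8, 2.3])
    pred = slope * water_comm + intercept
    r = np.corrcoef(rough_comm, pred)[0, 1]
    print(f"slope = {slope:.2f} um per % water, commercial-scale correlation r = {r:.2f}")
    ```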

  10. Recovery and its correlates among patients with bipolar disorder: A study from a tertiary care centre in North India.

    PubMed

    Grover, Sandeep; Hazari, Nandita; Aneja, Jitender; Chakrabarti, Subho; Sharma, Sunil; Avasthi, Ajit

    2016-12-01

    The goal of treatment in mental illness has evolved from a symptom-based approach to a personal recovery-based approach. The aim of this study was to evaluate the predictors of personal recovery among patients with bipolar disorder. A total of 185 patients with bipolar disorder, currently in remission, were evaluated on the Recovery Assessment Scale (RAS), the Internalized Stigma of Mental Illness Scale (ISMIS), the Brief Religious Coping Scale (RCOPE), the Duke University Religiosity Index (DUREL), the Religiousness Measures Scale, the Hamilton Depression Rating Scale (HDRS), the Young Mania Rating Scale (YMRS) and the Global Assessment of Functioning (GAF) scale. The mean age of the sample was 40.5 (standard deviation (SD), 11.26) years. The majority of the participants were male, married, working, Hindu by religion and belonged to extended/joint families of urban background. In the regression analysis, RAS scores were predicted significantly by discrimination experience, stereotype endorsement and alienation domains of ISMIS, level of functioning as assessed by GAF, residual depressive symptoms as assessed by HDRS and occupational status. The level of variance explained for total RAS score and various RAS domains ranged from 36.2% to 46.9%. This study suggests that personal recovery among patients with bipolar disorder is affected by stigma, level of functioning, residual depressive symptoms and employment status. © The Author(s) 2016.

  11. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale

    USGS Publications Warehouse

    Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van, Meerveld; Peters, N.E.; Freer, J.E.

    2009-01-01

    The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences, conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment is made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and the non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
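
    A minimal sketch of the recession analysis described above: estimating the exponent b in -dQ/dt = aQ^b from a falling limb of discharge, where b = 1 corresponds to a linear reservoir. The synthetic series is illustrative and is not the Panola record.

    ```python
    import numpy as np

    # Synthetic recession limb: hourly discharge (m^3/s) draining a nonlinear store.
    t = np.arange(0, 200.0)          # hours
    Q = 5.0 / (1.0 + 0.02 * t) ** 2  # illustrative nonlinear recession

    # Estimate -dQ/dt with finite differences and regress log(-dQ/dt) on log(Q).
    dQdt = np.gradient(Q, t)
    mask = dQdt < 0
    b, log_a = np.polyfit(np.log(Q[mask]), np.log(-dQdt[mask]), deg=1)
    print(f"recession exponent b = {b:.2f}  (b = 1 would indicate a linear reservoir)")
    ```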

  12. Novel patch modelling method for efficient simulation and prediction uncertainty analysis of multi-scale groundwater flow and transport processes

    NASA Astrophysics Data System (ADS)

    Sreekanth, J.; Moore, Catherine

    2018-04-01

    The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale without requiring live coupling of the models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large scale aquifer injection scheme in Australia.

  13. Shifting from Stewardship to Analytics of Massive Science Data

    NASA Astrophysics Data System (ADS)

    Crichton, D. J.; Doyle, R.; Law, E.; Hughes, S.; Huang, T.; Mahabal, A.

    2015-12-01

    Currently, the analysis of large data collections is executed through traditional computational and data analysis approaches, which require users to bring data to their desktops and perform local data analysis. Data collection, archiving and analysis from future remote sensing missions, be it from earth science satellites, planetary robotic missions, or massive radio observatories, may not scale as more capable instruments stress existing architectural approaches and systems due to more continuous data streams, data from multiple observational platforms, and measurements and models from different agencies. A new paradigm is needed in order to increase the productivity and effectiveness of scientific data analysis. This paradigm must recognize that architectural choices, data processing, management, analysis, etc. are interrelated, and must be carefully coordinated in any system that aims to allow efficient, interactive scientific exploration and discovery to exploit massive data collections. Future observational systems, including satellite and airborne experiments, and research in climate modeling will significantly increase the size of the data, requiring new methodological approaches towards data analytics in which users can more effectively interact with the data and apply automated mechanisms for data reduction and fusion across these massive data repositories. This presentation will discuss architecture, use cases, and approaches for developing a big data analytics strategy across multiple science disciplines.

  14. Learning intervention and the approach to study of engineering undergraduates

    NASA Astrophysics Data System (ADS)

    Solomonides, Ian Paul

    The aim of the research was to investigate the effect of a learning intervention on the Approach to Study of first year engineering degree students. The learning intervention was a local programme of 'learning to learn' workshops designed and facilitated by the author. The primary aim of these was to develop students' Approaches to Study. Fifty-three first year engineering undergraduates at The Nottingham Trent University participated in the workshops. Approaches to Study were quantified using data obtained from the Revised Approach to Study Inventory (RASI) which was also subjected to a validity and reliability study using local data. Quantitative outcomes were supplemented using a qualitative analysis of essays written by students during the workshops. These were analysed for detail regarding student Approach to Study. It was intended that any findings would inform the local system of Engineering Education, although more general findings also emerged, in particular in relation to the utility of the research instrument. It was concluded that the intervention did not promote the preferential Deep Approach and did not affect Approaches to Study generally as measured by the RASI. This concurred with previous attempts to change student Approaches to Study at the group level. It was also established that subsequent years of the Integrated Engineering degree course are associated with progressively deteriorating Approaches to Study. Students who were exposed to the intervention followed a similar pattern of deteriorating Approaches, suggesting that the local course context and its demands had a greater influence over the Approach of students than the intervention did. It was found that academic outcomes were unrelated to the extent to which students took a Deep Approach to the local assessment demands. There appeared therefore to be a mismatch between the Approach students adopted to pass examinations and those that are required for high quality learning outcomes. It is suggested that more co-ordinated and coherent action for changing the local course demands is needed before an improvement in student Approaches will be observed. These conclusions were broadly supported by the results from the qualitative analysis which also indicated the dominating effects of course context over Approach. However, some students appeared to have gained from the intervention in that they reported being in a better position to evaluate their relationships with the course demands following the workshops. It therefore appeared that some students could be described as being in tension between the desire to take a Deep Approach and the adoption of less desirable Approaches as promoted and encouraged by the course context. It is suggested that questions regarding the integrity of the intervention are thereby left unresolved even though the immediate effects of it are quite clear. It is also suggested that the integrity of the research instrument is open to question in that the Strategic Approach to Study scale failed to be defined by one factor under common factor analysis. The intentional or motivational element which previously defined this scale was found to be associated with a Deep Approach factor within the local context. The Strategic Approach was found to be defined by skill rather than motivation. This indicated that some reinterpretation of the RASI and in particular the Strategic Approach to Study scale is needed.

  15. Stability analysis of nonlinear autonomous systems - General theory and application to flutter

    NASA Technical Reports Server (NTRS)

    Smith, L. L.; Morino, L.

    1975-01-01

    The analysis makes use of a singular perturbation method, the multiple time scaling. Concepts of stable and unstable limit cycles are introduced. The solution is obtained in the form of an asymptotic expansion. Numerical results are presented for the nonlinear flutter of panels and airfoils in supersonic flow. The approach used is an extension of a method for analyzing nonlinear panel flutter reported by Morino (1969).
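
    As a generic illustration of the multiple time scaling used in this kind of analysis (not the authors' specific flutter equations), the response is expanded in a fast time t_0 and a slow time t_1:

    ```latex
    x(t;\epsilon) = x_0(t_0, t_1) + \epsilon\, x_1(t_0, t_1) + O(\epsilon^2),
    \qquad t_0 = t, \quad t_1 = \epsilon t, \qquad
    \frac{d}{dt} = \frac{\partial}{\partial t_0} + \epsilon\,\frac{\partial}{\partial t_1}.
    ```

    Collecting powers of epsilon gives a hierarchy of linear problems; suppressing secular terms forces the amplitude to evolve on t_1, and the resulting slow-flow equations are what reveal stable or unstable limit cycles.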

  16. Iodine Coulometry of Various Reducing Agents Including Thiols with Online Photocell Detection Coupled to a Multifunctional Chemical Analysis Station to Eliminate Student End Point Detection by Eye

    ERIC Educational Resources Information Center

    Padilla Mercado, Jeralyne B.; Coombs, Eri M.; De Jesus, Jenny P.; Bretz, Stacey Lowery; Danielson, Neil D.

    2018-01-01

    Multifunctional chemical analysis (MCA) systems provide a viable alternative for large scale instruction while supporting a hands-on approach to more advanced instrumentation. These systems are robust and typically use student stations connected to a remote central computer for data collection, minimizing the need for computers at every student…

  17. Political Culture and Risk Analysis: An Outline of Somalia, Tunisia, and Libya

    DTIC Science & Technology

    2016-11-21

    analysis, risk assessment, national maturity, independence movements, extremists, national violence, Africa, Somalia, Somaliland, Puntland, Tunisia...changed in terms of actors, but not in any appreciable reduction of violence. In the north, any episodes of conflict were negligible in scale and...duration, never approaching that of the south. The continuous violence in the south, in fact, is unprecedented in Somali history. Despite any

  18. A Mobile Acoustic Subsurface Sensing (MASS) System for Rapid Roadway Assessment

    PubMed Central

    Lu, Yifeng; Zhang, Yi; Cao, Yinghong; McDaniel, J. Gregory; Wang, Ming L.

    2013-01-01

    Surface waves are commonly used for vibration-based nondestructive testing of infrastructure. Spectral Analysis of Surface Waves (SASW) has been used to detect subsurface properties for geologic inspections. Recently, efforts were made to scale down these subsurface detection approaches to see how they perform on small-scale structures such as concrete slabs and pavements. Additional efforts have been made to replace the traditional surface-mounted transducers with non-contact acoustic transducers. Though some success has been achieved, most of these new approaches are inefficient because they require point-to-point measurements or off-line signal analysis. This article introduces the Mobile Acoustic Subsurface Sensing (MASS) system, an improved surface-wave-based implementation for measuring the subsurface profile of roadways. The compact MASS system is a 3-wheeled cart outfitted with an electromagnetic impact source, distance register, non-contact acoustic sensors and data acquisition/processing equipment. The key advantage of the MASS system is the capability to collect measurements continuously at walking speed in an automatic way. The fast scan and real-time analysis advantages are based upon non-contact acoustic sensing and a fast air-coupled surface wave analysis program. This integration of hardware and software makes the MASS system an efficient mobile prototype for field testing. PMID:23698266
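
    A minimal sketch of the generic SASW-style dispersion estimate that underlies such systems, assuming two receivers separated by a known offset; it uses the unwrapped cross-spectrum phase and is not the authors' air-coupled analysis program.

    ```python
    import numpy as np

    def sasw_phase_velocity(s1, s2, fs, dx):
        """Phase-velocity dispersion from two surface-wave records separated by dx (m).
        Generic SASW-style estimate: unwrapped cross-spectrum phase -> velocity."""
        n = len(s1)
        S1, S2 = np.fft.rfft(s1), np.fft.rfft(s2)
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        phase = np.unwrap(np.angle(S1 * np.conj(S2)))   # phase lag accumulated over dx
        with np.errstate(divide="ignore", invalid="ignore"):
            velocity = 2.0 * np.pi * freqs * dx / phase
        return freqs, velocity

    # Synthetic example: a 200 m/s wave recorded by two sensors 0.5 m apart at 10 kHz.
    fs, dx, v_true = 10_000.0, 0.5, 200.0
    t = np.arange(0, 0.1, 1.0 / fs)
    src = np.exp(-((t - 0.02) / 0.001) ** 2)            # broadband impact-like pulse
    delay = int(round(dx / v_true * fs))                # propagation delay in samples
    rec1, rec2 = src, np.roll(src, delay)

    freqs, vel = sasw_phase_velocity(rec1, rec2, fs, dx)
    band = (freqs > 50) & (freqs < 300)
    print(f"estimated phase velocity ~ {np.nanmedian(vel[band]):.0f} m/s")
    ```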

  19. Design and development of pressure and repressurization purge system for reusable space shuttle multilayer insulation system

    NASA Technical Reports Server (NTRS)

    1971-01-01

    Preliminary design and analysis of purge system concepts and purge subsystem approaches are defined and evaluated. Acceptable purge subsystem approaches were combined into four predesign layouts which are presented for comparison and evaluation. Two predesigns were selected for further detailed design and evaluation for eventual selection of the best design for a full scale test configuration. An operation plan is included as an appendix for reference to shuttle-oriented operational parameters.

  20. Guidelines for CubeSat's Thermal Design

    NASA Technical Reports Server (NTRS)

    Rodriguez-Ruiz, Juan; Patel, Deepak

    2015-01-01

    Thermal and Fluids Analysis Workshop 2015, Silver Spring, MD. NCTS 19104-15. What does it take to thermally design low cost, low mass cubesats? What are the differences in the approach when you compare with large scale missions? What additional risk is acceptable? What is the approach to hardware? How is the testing campaign run? These are some of the questions that will be addressed in this course, which is designed to equip the attendees to support the development of cubesats at their organization.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rizzi, Silvio; Hereld, Mark; Insley, Joseph

    In this work we perform in-situ visualization of molecular dynamics simulations, which can help scientists to visualize simulation output on-the-fly, without incurring storage overheads. We present a case study to couple LAMMPS, the large-scale molecular dynamics simulation code, with vl3, our parallel framework for large-scale visualization and analysis. Our motivation is to identify effective approaches for covisualization and exploration of large-scale atomistic simulations at interactive frame rates. We propose a system of coupled libraries and describe its architecture, with an implementation that runs on GPU-based clusters. We present the results of strong and weak scalability experiments, as well as future research avenues based on our results.

  2. Observations and Interpretation of Magnetofluid Turbulence at Small Scales

    NASA Technical Reports Server (NTRS)

    Goldstein, Melvyn L.; Sahraoui, Fouad

    2011-01-01

    High time resolution magnetic field measurements from the four Cluster spacecraft have revealed new features of the properties of magnetofluid turbulence at small spatial scales; perhaps even revealing the approach to the dissipation regime at scales close to the electron inertial length. Various analysis techniques and theoretical ideas have been put forward to account for the properties of those measurements. The talk will describe the current state of observations and theory, and will point out on-going and planned research that will further our understanding of how magnetofluid turbulence dissipates. The observations and theories are directly germane to studies being planned as part of NASA's forthcoming Magnetospheric Multiscale Mission.

  3. Innovative approaches for improving maternal and newborn health--A landscape analysis.

    PubMed

    Lunze, Karsten; Higgins-Steele, Ariel; Simen-Kapeu, Aline; Vesel, Linda; Kim, Julia; Dickson, Kim

    2015-12-17

    Essential interventions can improve maternal and newborn health (MNH) outcomes in low- and middle-income countries, but their implementation has been challenging. Innovative MNH approaches have the potential to accelerate progress and to lead to better health outcomes for women and newborns, but their added value to health systems remains incompletely understood. This study's aim was to analyze the landscape of innovative MNH approaches and related published evidence. Systematic literature review and descriptive analysis based on the MNH continuum of care framework and the World Health Organization health system building blocks, analyzing the range and nature of currently published MNH approaches that are considered innovative. We used 11 databases (MedLine, Web of Science, CINAHL, Cochrane, Popline, BLDS, ELDIS, 3ie, CAB direct, WHO Global Health Library and WHOLIS) as data source and extracted data according to our study protocol. Most innovative approaches in MNH are iterations of existing interventions, modified for contexts in which they had not been applied previously. Many aim at the direct organization and delivery of maternal and newborn health services or are primarily health workforce interventions. Innovative approaches also include health technologies, interventions based on community ownership and participation, and novel models of financing and policy making. Rigorous randomized trials to assess innovative MNH approaches are rare; most evaluations are smaller pilot studies. Few studies assessed intervention effects on health outcomes or focused on equity in health care delivery. Future implementation and evaluation efforts need to assess innovations' effects on health outcomes and provide evidence on potential for scale-up, considering cost, feasibility, appropriateness, and acceptability. Measuring equity is an important aspect to identify and target population groups at risk of service inequity. Innovative MNH interventions will need innovative implementation, evaluation and scale-up strategies for their sustainable integration into health systems.

  4. Bayesian analysis of the dynamic cosmic web in the SDSS galaxy survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leclercq, Florent; Wandelt, Benjamin; Jasche, Jens, E-mail: florent.leclercq@polytechnique.org, E-mail: jasche@iap.fr, E-mail: wandelt@iap.fr

    Recent application of the Bayesian algorithm BORG to the Sloan Digital Sky Survey (SDSS) main sample galaxies resulted in the physical inference of the formation history of the observed large-scale structure from its origin to the present epoch. In this work, we use these inferences as inputs for a detailed probabilistic cosmic web-type analysis. To do so, we generate a large set of data-constrained realizations of the large-scale structure using a fast, fully non-linear gravitational model. We then perform a dynamic classification of the cosmic web into four distinct components (voids, sheets, filaments, and clusters) on the basis of the tidal field. Our inference framework automatically and self-consistently propagates typical observational uncertainties to web-type classification. As a result, this study produces accurate cosmographic classification of large-scale structure elements in the SDSS volume. By also providing the history of these structure maps, the approach allows an analysis of the origin and growth of the early traces of the cosmic web present in the initial density field and of the evolution of global quantities such as the volume and mass filling fractions of different structures. For the problem of web-type classification, the results described in this work constitute the first connection between theory and observations at non-linear scales including a physical model of structure formation and the demonstrated capability of uncertainty quantification. A connection between cosmology and information theory using real data also naturally emerges from our probabilistic approach. Our results constitute quantitative chrono-cosmography of the complex web-like patterns underlying the observed galaxy distribution.
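
    The web-type classification mentioned above assigns each region to one of voids, sheets, filaments, or clusters from the tidal field. A common convention, shown here only as a hedged illustration (the paper's exact criterion may differ), counts the tidal-tensor eigenvalues above a threshold:

    ```python
    import numpy as np

    def classify_web(tidal_eigenvalues, lam_th=0.0):
        """Classify regions as 0=void, 1=sheet, 2=filament, 3=cluster by counting
        tidal-tensor eigenvalues above lam_th (illustrative T-web-style rule)."""
        eig = np.asarray(tidal_eigenvalues)          # shape (..., 3)
        return np.sum(eig > lam_th, axis=-1)

    # Example: three regions with illustrative eigenvalue triplets.
    eigs = np.array([[-0.5, -0.2, -0.1],   # none above threshold -> void
                     [-0.3,  0.1,  0.4],   # two above -> filament
                     [ 0.2,  0.5,  0.9]])  # three above -> cluster
    print(classify_web(eigs))  # [0 2 3]
    ```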

  5. Evaluating uncertainty in predicting spatially variable representative elementary scales in fractured aquifers, with application to Turkey Creek Basin, Colorado

    USGS Publications Warehouse

    Wellman, Tristan P.; Poeter, Eileen P.

    2006-01-01

    Computational limitations and sparse field data often mandate use of continuum representation for modeling hydrologic processes in large‐scale fractured aquifers. Selecting appropriate element size is of primary importance because continuum approximation is not valid for all scales. The traditional approach is to select elements by identifying a single representative elementary scale (RES) for the region of interest. Recent advances indicate RES may be spatially variable, prompting unanswered questions regarding the ability of sparse data to spatially resolve continuum equivalents in fractured aquifers. We address this uncertainty of estimating RES using two techniques. In one technique we employ data‐conditioned realizations generated by sequential Gaussian simulation. For the other we develop a new approach using conditioned random walks and nonparametric bootstrapping (CRWN). We evaluate the effectiveness of each method under three fracture densities, three data sets, and two groups of RES analysis parameters. In sum, 18 separate RES analyses are evaluated, which indicate RES magnitudes may be reasonably bounded using uncertainty analysis, even for limited data sets and complex fracture structure. In addition, we conduct a field study to estimate RES magnitudes and resulting uncertainty for Turkey Creek Basin, a crystalline fractured rock aquifer located 30 km southwest of Denver, Colorado. Analyses indicate RES does not correlate to rock type or local relief in several instances but is generally lower within incised creek valleys and higher along mountain fronts. Results of this study suggest that (1) CRWN is an effective and computationally efficient method to estimate uncertainty, (2) RES predictions are well constrained using uncertainty analysis, and (3) for aquifers such as Turkey Creek Basin, spatial variability of RES is significant and complex.
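
    A minimal sketch of the nonparametric bootstrap ingredient of the CRWN idea, applied to a generic statistic computed from sparse data; it is not a reimplementation of the conditioned random walks themselves.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sparse sample of a hydraulic property (e.g., log-conductivity).
    sample = rng.normal(loc=-5.0, scale=1.2, size=40)

    def bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05):
        """Percentile bootstrap confidence interval for a statistic of the data."""
        stats = np.array([stat(rng.choice(data, size=data.size, replace=True))
                          for _ in range(n_boot)])
        return np.quantile(stats, [alpha / 2, 1 - alpha / 2])

    low, high = bootstrap_ci(sample)
    print(f"95% bootstrap CI for the mean: [{low:.2f}, {high:.2f}]")
    ```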

  6. Stable isotope analyses of feather amino acids identify penguin migration strategies at ocean basin scales.

    PubMed

    Polito, Michael J; Hinke, Jefferson T; Hart, Tom; Santos, Mercedes; Houghton, Leah A; Thorrold, Simon R

    2017-08-01

    Identifying the at-sea distribution of wide-ranging marine predators is critical to understanding their ecology. Advances in electronic tracking devices and intrinsic biogeochemical markers have greatly improved our ability to track animal movements on ocean-wide scales. Here, we show that, in combination with direct tracking, stable carbon isotope analysis of essential amino acids in tail feathers provides the ability to track the movement patterns of two, wide-ranging penguin species over ocean basin scales. In addition, we use this isotopic approach across multiple breeding colonies in the Scotia Arc to evaluate migration trends at a regional scale that would be logistically challenging using direct tracking alone. © 2017 The Author(s).

  7. Multidisciplinary Design, Analysis, and Optimization Tool Development Using a Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Pak, Chan-gi; Li, Wesley

    2009-01-01

    Multidisciplinary design, analysis, and optimization using a genetic algorithm is being developed at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California) to automate the analysis and design process by leveraging existing tools to enable true multidisciplinary optimization in the preliminary design stage of subsonic, transonic, supersonic, and hypersonic aircraft. This is a promising technology, but faces many challenges in large-scale, real-world application. This report describes current approaches, recent results, and challenges for multidisciplinary design, analysis, and optimization as demonstrated by experience with the Ikhana fire pod design.
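
    A minimal sketch of the genetic-algorithm loop (selection, crossover, mutation) on a toy design objective, included only to illustrate the optimization strategy named above; it is not the NASA tool.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(x):
        # Toy design objective: minimize a quadratic, so fitness is its negative.
        return -np.sum((x - 0.7) ** 2, axis=-1)

    pop = rng.uniform(0.0, 1.0, size=(60, 4))       # 60 candidate designs, 4 variables
    for gen in range(100):
        f = fitness(pop)
        parents = pop[np.argsort(f)[-30:]]          # truncation selection: keep best half
        mates = parents[rng.permutation(30)]
        alpha = rng.uniform(size=(30, 1))
        children = alpha * parents + (1 - alpha) * mates          # blend crossover
        children += rng.normal(scale=0.05, size=children.shape)   # Gaussian mutation
        pop = np.vstack([parents, children])

    best = pop[np.argmax(fitness(pop))]
    print("best design variables:", np.round(best, 3))
    ```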

  8. On identifying relationships between the flood scaling exponent and basin attributes.

    PubMed

    Medhi, Hemanta; Tripathi, Shivam

    2015-07-01

    Floods are known to exhibit self-similarity and follow scaling laws that form the basis of regional flood frequency analysis. However, the relationship between basin attributes and the scaling behavior of floods is still not fully understood. Identifying these relationships is essential for drawing connections between hydrological processes in a basin and the flood response of the basin. The existing studies mostly rely on simulation models to draw these connections. This paper proposes a new methodology that draws connections between basin attributes and the flood scaling exponents by using observed data. In the proposed methodology, region-of-influence approach is used to delineate homogeneous regions for each gaging station. Ordinary least squares regression is then applied to estimate flood scaling exponents for each homogeneous region, and finally stepwise regression is used to identify basin attributes that affect flood scaling exponents. The effectiveness of the proposed methodology is tested by applying it to data from river basins in the United States. The results suggest that flood scaling exponent is small for regions having (i) large abstractions from precipitation in the form of large soil moisture storages and high evapotranspiration losses, and (ii) large fractions of overland flow compared to base flow, i.e., regions having fast-responding basins. Analysis of simple scaling and multiscaling of floods showed evidence of simple scaling for regions in which the snowfall dominates the total precipitation.
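
    A minimal sketch of estimating a flood scaling exponent by ordinary least squares, assuming the usual power-law form Q = c * A^theta between a flood quantile Q and drainage area A within a homogeneous region; the gauge data below are illustrative.

    ```python
    import numpy as np

    # Illustrative gauges in one homogeneous region: drainage area (km^2) and
    # 10-year flood quantile (m^3/s).
    area = np.array([55.0, 120.0, 310.0, 780.0, 1500.0, 4200.0])
    q10 = np.array([38.0, 66.0, 130.0, 245.0, 390.0, 820.0])

    # Flood scaling law: Q = c * A**theta  ->  log Q = log c + theta * log A.
    theta, log_c = np.polyfit(np.log(area), np.log(q10), deg=1)
    print(f"flood scaling exponent theta = {theta:.2f}")
    ```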

  9. Effect of extreme data loss on heart rate signals quantified by entropy analysis

    NASA Astrophysics Data System (ADS)

    Li, Yu; Wang, Jun; Li, Jin; Liu, Dazhao

    2015-02-01

    The phenomenon of data loss always occurs in the analysis of large databases. Maintaining the stability of analysis results in the event of data loss is very important. In this paper, we used a segmentation approach to generate a synthetic signal from which data are randomly wiped according to the Gaussian distribution and the exponential distribution of the original signal. Then, the logistic map is used as verification. Finally, two methods of measuring entropy, base-scale entropy and approximate entropy, are comparatively analyzed. Our results show the following: (1) Two key parameters, the percentage and the average length of removed data segments, can change the sequence complexity according to logistic map testing. (2) The calculation results remain more stable for base-scale entropy analysis, which is not sensitive to data loss. (3) The loss percentage of HRV signals should be kept below p = 30%, which can provide useful information in clinical applications.
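
    A minimal sketch of the approximate-entropy calculation compared in the study, together with random removal of short data segments; the embedding dimension m = 2 and tolerance r = 0.2 SD are conventional choices rather than the authors' exact settings.

    ```python
    import numpy as np

    def approx_entropy(x, m=2, r_frac=0.2):
        """Approximate entropy ApEn(m, r) with r = r_frac * std(x) (conventional choice)."""
        x = np.asarray(x, dtype=float)
        r = r_frac * np.std(x)

        def phi(m):
            n = len(x) - m + 1
            emb = np.array([x[i:i + m] for i in range(n)])          # embedded vectors
            # Chebyshev distances between all pairs of embedded vectors.
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
            c = np.mean(d <= r, axis=1)                             # includes self-matches
            return np.mean(np.log(c))

        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(2)
    signal = np.sin(0.3 * np.arange(600)) + 0.1 * rng.normal(size=600)

    # Randomly wipe roughly 30% of the data in short segments, as in the data-loss test.
    keep = np.ones(600, dtype=bool)
    while keep.mean() > 0.7:
        start = rng.integers(0, 600 - 20)
        keep[start:start + rng.integers(5, 20)] = False

    print(f"ApEn full signal   : {approx_entropy(signal):.3f}")
    print(f"ApEn with data loss: {approx_entropy(signal[keep]):.3f}")
    ```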

  10. Sensitivity of the Positive and Negative Syndrome Scale (PANSS) in Detecting Treatment Effects via Network Analysis.

    PubMed

    Esfahlani, Farnaz Zamani; Sayama, Hiroki; Visser, Katherine Frost; Strauss, Gregory P

    2017-12-01

    Objective: The Positive and Negative Syndrome Scale is a primary outcome measure in clinical trials examining the efficacy of antipsychotic medications. Although the Positive and Negative Syndrome Scale has demonstrated sensitivity as a measure of treatment change in studies using traditional univariate statistical approaches, its sensitivity to detecting network-level changes in dynamic relationships among symptoms has yet to be demonstrated using more sophisticated multivariate analyses. In the current study, we examined the sensitivity of the Positive and Negative Syndrome Scale to detecting antipsychotic treatment effects as revealed through network analysis. Design: Participants included 1,049 individuals diagnosed with psychotic disorders from the Phase I portion of the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) study. Of these participants, 733 were clinically determined to be treatment-responsive and 316 were found to be treatment-resistant. Item level data from the Positive and Negative Syndrome Scale were submitted to network analysis, and macroscopic, mesoscopic, and microscopic network properties were evaluated for the treatment-responsive and treatment-resistant groups at baseline and post-phase I antipsychotic treatment. Results: Network analysis indicated that treatment-responsive patients had more densely connected symptom networks after antipsychotic treatment than did treatment-responsive patients at baseline, and that symptom centralities increased following treatment. In contrast, symptom networks of treatment-resistant patients behaved more randomly before and after treatment. Conclusions: These results suggest that the Positive and Negative Syndrome Scale is sensitive to detecting treatment effects as revealed through network analysis. Its findings also provide compelling new evidence that strongly interconnected symptom networks confer an overall greater probability of treatment responsiveness in patients with psychosis, suggesting that antipsychotics achieve their effect by enhancing a number of central symptoms, which then facilitate reduction of other highly coupled symptoms in a network-like fashion.
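
    A minimal sketch of the network-level quantities discussed above: build a symptom network from item-level correlations and compare its density and node strength (a simple centrality) across conditions. The data here are simulated placeholders, not CATIE data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def symptom_network(items, threshold=0.3):
        """Adjacency matrix: absolute inter-item correlations above a threshold."""
        c = np.abs(np.corrcoef(items, rowvar=False))
        np.fill_diagonal(c, 0.0)
        return np.where(c >= threshold, c, 0.0)

    def density_and_strength(adj):
        n = adj.shape[0]
        density = np.count_nonzero(adj) / (n * (n - 1))   # fraction of possible edges
        strength = adj.sum(axis=1)                        # weighted node centrality
        return density, strength

    # Simulated item scores (patients x 30 PANSS-like items) at two time points;
    # the second set has more shared variance, producing a denser network.
    baseline = rng.normal(size=(300, 30))
    common = rng.normal(size=(300, 1))
    followup = 0.6 * common + rng.normal(scale=0.8, size=(300, 30))

    for name, items in [("baseline", baseline), ("post-treatment", followup)]:
        dens, strength = density_and_strength(symptom_network(items))
        print(f"{name}: density = {dens:.2f}, mean node strength = {strength.mean():.2f}")
    ```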

  11. A mediation analysis of achievement motives, goals, learning strategies, and academic achievement.

    PubMed

    Diseth, Age; Kobbeltvedt, Therese

    2010-12-01

    Previous research is inconclusive regarding antecedents and consequences of achievement goals, and there is a need for more research in order to examine the joint effects of different types of motives and learning strategies as predictors of academic achievement. To investigate the relationship between achievement motives, achievement goals, learning strategies (deep, surface, and strategic), and academic achievement in a hierarchical model. Participants were 229 undergraduate students (mean age: 21.2 years) of psychology and economics at the University of Bergen, Norway. Variables were measured by means of items from the Achievement Motives Scale (AMS), the Approaches and Study Skills Inventory for Students, and an achievement goal scale. Correlation analysis showed that academic achievement (examination grade) was positively correlated with performance-approach goal, mastery goal, and strategic learning strategies, and negatively correlated with performance-avoidance goal and surface learning strategy. A path analysis (structural equation model) showed that achievement goals were mediators between achievement motives and learning strategies, and that strategic learning strategies mediated the relationship between achievement goals and academic achievement. This study integrated previous findings from several studies and provided new evidence on the direct and indirect effects of different types of motives and learning strategies as predictors of academic achievement.
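
    A minimal sketch of the mediation logic tested above, estimating an indirect effect as the product of path coefficients from two ordinary least squares regressions; the simulated variables only stand in for the study's scales.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 229

    # Simulated stand-ins: achievement motive -> learning strategy -> exam grade.
    motive = rng.normal(size=n)
    strategy = 0.5 * motive + rng.normal(size=n)                  # a path
    grade = 0.4 * strategy + 0.1 * motive + rng.normal(size=n)    # b and direct (c') paths

    def ols(y, X):
        X = np.column_stack([np.ones(len(y))] + list(X))
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]           # drop intercept

    (a,) = ols(strategy, [motive])
    b, c_prime = ols(grade, [strategy, motive])
    print(f"a = {a:.2f}, b = {b:.2f}, indirect effect a*b = {a * b:.2f}, direct c' = {c_prime:.2f}")
    ```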

  12. A model of objective weighting for EIA.

    PubMed

    Ying, L G; Liu, Y C

    1995-06-01

    In spite of progress achieved in the research of environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not, as yet, been properly solved. This paper presents an approach to objective weighting using a P_ij principal component-factor analysis (P_ij PCFA) procedure, which suits specifically those parameters measured directly by physical scales. The P_ij PCFA weighting procedure reforms the conventional weighting practice in two aspects: first, the expert subjective judgment is replaced by the standardized measure P_ij as the original input of weight processing and, secondly, the principal component-factor analysis is introduced to assess the environmental parameters for their respective contributions to the totality of the regional ecosystem. Not only is the P_ij PCFA weighting logical in theoretical reasoning, it also suits practically all levels of professional routines in natural environmental assessment and impact analysis. Having been assured of objectivity and accuracy in the EIA case study of the Chuansha County in Shanghai, China, the P_ij PCFA weighting procedure has the potential to be applied in other geographical fields that need to assign weights to parameters measured by physical scales.
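
    A minimal sketch of deriving objective parameter weights from a principal component decomposition of standardized measures, in the spirit of the P_ij PCFA procedure; the weighting rule shown (squared loadings weighted by explained variance) is a common convention and may differ from the authors' exact formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Standardized measures P_ij: i = sampling sites (rows), j = environmental parameters.
    P = rng.normal(size=(50, 6))
    P = (P - P.mean(axis=0)) / P.std(axis=0)

    # Principal components of the parameter correlation matrix.
    corr = np.corrcoef(P, rowvar=False)
    eigval, eigvec = np.linalg.eigh(corr)
    order = np.argsort(eigval)[::-1]
    eigval, eigvec = eigval[order], eigvec[:, order]

    # Keep components with eigenvalue > 1 and weight squared loadings by explained variance.
    keep = eigval > 1.0
    scores = (eigvec[:, keep] ** 2) @ (eigval[keep] / eigval.sum())
    weights = scores / scores.sum()
    print("objective parameter weights:", np.round(weights, 3))
    ```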

  13. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies.

    PubMed

    Atkinson, Jonathan A; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E; Griffiths, Marcus; Wells, Darren M

    2017-10-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. © The Authors 2017. Published by Oxford University Press.
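
    A minimal sketch of the workflow described above, assuming scikit-learn is available: train a Random Forest on image descriptors for a manually measured subset, then infer the trait for the remaining images. The arrays are simulated placeholders.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(6)

    # Simulated image descriptors (e.g., shape and texture features) and a trait
    # (e.g., total root length) measured semi-automatically on a training subset.
    descriptors = rng.normal(size=(2000, 25))
    trait = descriptors[:, :5].sum(axis=1) + 0.3 * rng.normal(size=2000)

    train = rng.random(2000) < 0.2                  # ~20% semi-automatically annotated
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(descriptors[train], trait[train])

    predicted = model.predict(descriptors[~train])  # traits inferred for the rest
    print("R^2 on held-out images:", round(model.score(descriptors[~train], trait[~train]), 3))
    ```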

  14. Using unmanned aerial vehicle (UAV) surveys and image analysis in the study of large surface-associated marine species: a case study on reef sharks Carcharhinus melanopterus shoaling behaviour.

    PubMed

    Rieucau, G; Kiszka, J J; Castillo, J C; Mourier, J; Boswell, K M; Heithaus, M R

    2018-06-01

    A novel image analysis-based technique applied to unmanned aerial vehicle (UAV) survey data is described to detect and locate individual free-ranging sharks within aggregations. The method allows rapid collection of data and quantification of fine-scale swimming and collective patterns of sharks. We demonstrate the usefulness of this technique in a small-scale case study exploring the shoaling tendencies of blacktip reef sharks Carcharhinus melanopterus in a large lagoon within Moorea, French Polynesia. Using our approach, we found that C. melanopterus displayed increased alignment with shoal companions when distributed over a sandflat where they are regularly fed for ecotourism purposes as compared with when they shoaled in a deeper adjacent channel. Our case study highlights the potential of a relatively low-cost method that combines UAV survey data and image analysis to detect differences in shoaling patterns of free-ranging sharks in shallow habitats. This approach offers an alternative to current techniques commonly used in controlled settings that require time-consuming post-processing effort. This article is protected by copyright. All rights reserved.

  15. Combining semi-automated image analysis techniques with machine learning algorithms to accelerate large-scale genetic studies

    PubMed Central

    Atkinson, Jonathan A.; Lobet, Guillaume; Noll, Manuel; Meyer, Patrick E.; Griffiths, Marcus

    2017-01-01

    Genetic analyses of plant root systems require large datasets of extracted architectural traits. To quantify such traits from images of root systems, researchers often have to choose between automated tools (that are prone to error and extract only a limited number of architectural traits) or semi-automated ones (that are highly time consuming). We trained a Random Forest algorithm to infer architectural traits from automatically extracted image descriptors. The training was performed on a subset of the dataset, then applied to its entirety. This strategy allowed us to (i) decrease the image analysis time by 73% and (ii) extract meaningful architectural traits based on image descriptors. We also show that these traits are sufficient to identify the quantitative trait loci that had previously been discovered using a semi-automated method. We have shown that combining semi-automated image analysis with machine learning algorithms has the power to increase the throughput of large-scale root studies. We expect that such an approach will enable the quantification of more complex root systems for genetic studies. We also believe that our approach could be extended to other areas of plant phenotyping. PMID:29020748

  16. A Matter of Time: Faster Percolator Analysis via Efficient SVM Learning for Large-Scale Proteomics.

    PubMed

    Halloran, John T; Rocke, David M

    2018-05-04

    Percolator is an important tool for greatly improving the results of a database search and subsequent downstream analysis. Using support vector machines (SVMs), Percolator recalibrates peptide-spectrum matches based on the learned decision boundary between targets and decoys. To improve analysis time for large-scale data sets, we update Percolator's SVM learning engine through software and algorithmic optimizations rather than heuristic approaches that necessitate the careful study of their impact on learned parameters across different search settings and data sets. We show that by optimizing Percolator's original learning algorithm, l2-SVM-MFN, large-scale SVM learning requires only about a third of the original runtime. Furthermore, we show that by employing the widely used Trust Region Newton (TRON) algorithm instead of l2-SVM-MFN, large-scale Percolator SVM learning is reduced to only about a fifth of the original runtime. Importantly, these speedups only affect the speed at which Percolator converges to a global solution and do not alter recalibration performance. The upgraded versions of both l2-SVM-MFN and TRON are optimized within the Percolator codebase for multithreaded and single-thread use and are available under Apache license at bitbucket.org/jthalloran/percolator_upgrade.

  17. A refined method for multivariate meta-analysis and meta-regression.

    PubMed

    Jackson, Daniel; Riley, Richard D

    2014-02-20

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
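
    A univariate sketch of the refinement described above, in which a scaling factor (a Hartung-Knapp-type quadratic form) is applied to the standard error of the random-effects average; this illustrates the idea only and is not the authors' multivariate estimator.

    ```python
    import numpy as np
    from scipy import stats

    def scaled_random_effects(y, v):
        """Random-effects mean with a Hartung-Knapp-type scaling factor on its SE."""
        y, v = np.asarray(y, float), np.asarray(v, float)
        k = len(y)
        w = 1.0 / v
        q_stat = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
        tau2 = max(0.0, (q_stat - (k - 1)) / (w.sum() - np.sum(w ** 2) / w.sum()))
        ws = 1.0 / (v + tau2)
        mu = np.sum(ws * y) / ws.sum()
        scale = np.sum(ws * (y - mu) ** 2) / (k - 1)        # the scaling factor
        se = np.sqrt(scale / ws.sum())
        half = stats.t.ppf(0.975, k - 1) * se
        return mu, (mu - half, mu + half)

    # Illustrative study effects and within-study variances.
    effects = np.array([0.30, 0.10, 0.45, 0.25, 0.05])
    variances = np.array([0.02, 0.03, 0.05, 0.01, 0.04])
    print(scaled_random_effects(effects, variances))
    ```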

  18. Analyzing Remodeling of Cardiac Tissue: A Comprehensive Approach Based on Confocal Microscopy and 3D Reconstructions

    PubMed Central

    Sachse, F. B.

    2015-01-01

    Microstructural characterization of cardiac tissue and its remodeling in disease is a crucial step in many basic research projects. We present a comprehensive approach for three-dimensional characterization of cardiac tissue at the submicrometer scale. We developed a compression-free mounting method as well as labeling and imaging protocols that facilitate acquisition of three-dimensional image stacks with scanning confocal microscopy. We evaluated the approach with normal and infarcted ventricular tissue. We used the acquired image stacks for segmentation, quantitative analysis and visualization of important tissue components. In contrast to conventional mounting, compression-free mounting preserved cell shapes, capillary lumens and extracellular laminas. Furthermore, the new approach and imaging protocols resulted in high signal-to-noise ratios at depths up to 60 μm. This allowed extensive analyses revealing major differences in volume fractions and distribution of cardiomyocytes, blood vessels, fibroblasts, myofibroblasts and extracellular space in control versus infarct border zone. Our results show that the developed approach yields comprehensive data on microstructure of cardiac tissue and its remodeling in disease. In contrast to other approaches, it allows quantitative assessment of all major tissue components. Furthermore, we suggest that the approach will provide important data for physiological models of cardiac tissue at the submicrometer scale. PMID:26399990

  19. An illustrative analysis of technological alternatives for satellite communications

    NASA Technical Reports Server (NTRS)

    Metcalfe, M. R.; Cazalet, E. G.; North, D. W.

    1979-01-01

    The demand for satellite communications services in the domestic market is discussed. Two approaches to increasing system capacity are the expansion of service into frequencies presently allocated but not used for satellite communications, and the development of technologies that provide a greater level of service within the currently used frequency bands. The development of economic models and analytic techniques for evaluating capacity expansion alternatives such as these is presented. The satellite orbit spectrum problem is examined, along with outlines of some suitable analytic approaches. An illustrative analysis of domestic communications satellite technology options for providing increased levels of service is also presented. The analysis illustrates the use of probabilities and decision trees in analyzing alternatives, and provides insight into the important aspects of the orbit spectrum problem that would warrant inclusion in a larger scale analysis.

  20. Identifying transit corridors for elephant using a long time-series

    NASA Astrophysics Data System (ADS)

    Pittiglio, Claudia; Skidmore, Andrew K.; van Gils, Hein A. M. J.; Prins, Herbert H. T.

    2012-02-01

    The role of corridors in mitigating the effects of landscape fragmentation on biodiversity is controversial. Recent studies have highlighted the need for new approaches in corridor design using long-term datasets. We present a method to identify transit corridors for elephant at a population scale over a large area and an extended period of time using long-term aerial surveys. We investigated environmental and anthropogenic factors directly and indirectly related to the wet versus dry season distribution of elephant and its transit corridors. Four environmental variables predicted the presence of elephant at the landscape scale in both seasons: distance from permanent water, protected areas and settlements and vegetation structure. Path analysis revealed that altitude and monthly average NDVI, and distance from temporary water had a significant indirect effect on elephant distribution at local scale in dry and wet seasons respectively. Five transit corridors connecting Tarangire National Park and the northern as well as south-eastern wet season dispersal areas were identified and matched the wildlife migration routes described in the 1960s. The corridors are stable over the decades, providing landscape connectivity for elephant. Our approach yielded insights how advanced spatial analysis can be integrated with biological data available from long-term datasets to identify actual transit corridors and predictors of species distribution.
