Multi-scale occupancy estimation and modelling using multiple detection methods
Nichols, James D.; Bailey, Larissa L.; O'Connell, Allan F.; Talancy, Neil W.; Grant, Evan H. Campbell; Gilbert, Andrew T.; Annand, Elizabeth M.; Husband, Thomas P.; Hines, James E.
2008-01-01
Occupancy estimation and modelling based on detection–nondetection data provide an effective way of exploring change in a species’ distribution across time and space in cases where the species is not always detected with certainty. Today, many monitoring programmes target multiple species, or life stages within a species, requiring the use of multiple detection methods. When multiple methods or devices are used at the same sample sites, animals can be detected by more than one method. We develop occupancy models for multiple detection methods that permit simultaneous use of data from all methods for inference about method-specific detection probabilities. Moreover, the approach permits estimation of occupancy at two spatial scales: the larger scale corresponds to species’ use of a sample unit, whereas the smaller scale corresponds to presence of the species at the local sample station or site. We apply the models to data collected on two different vertebrate species: striped skunks Mephitis mephitis and red salamanders Pseudotriton ruber. For striped skunks, large-scale occupancy estimates were consistent between two sampling seasons. Small-scale occupancy probabilities were slightly lower in the late winter/spring when skunks tend to conserve energy, and movements are limited to males in search of females for breeding. There was strong evidence of method-specific detection probabilities for skunks. As anticipated, large- and small-scale occupancy areas completely overlapped for red salamanders. The analyses provided weak evidence of method-specific detection probabilities for this species. Synthesis and applications. Increasingly, many studies are utilizing multiple detection methods at sampling locations. The modelling approach presented here makes efficient use of detections from multiple methods to estimate occupancy probabilities at two spatial scales and to compare detection probabilities associated with different detection methods. The models can be viewed as another variation of Pollock's robust design and may be applicable to a wide variety of scenarios where species occur in an area but are not always near the sampled locations. The estimation approach is likely to be especially useful in multispecies conservation programmes by providing efficient estimates using multiple detection devices and by providing device-specific detection probability estimates for use in survey design.
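In the notation now standard for such two-scale occupancy models (our gloss; the symbols are assumed rather than quoted from the paper), $\psi$ is the probability a sample unit is used, $\theta$ the probability the species is present at a local station given use, and $p_m$ the method-specific detection probability, so that

\[
\Pr(\text{method } m \text{ detects the species at a station}) = \psi\,\theta\,p_m ,
\]

with detection histories across methods and repeat surveys combined in a robust-design likelihood.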
Dong, Yingying; Luo, Ruisen; Feng, Haikuan; Wang, Jihua; Zhao, Jinling; Zhu, Yining; Yang, Guijun
2014-01-01
Differences exist among analysis results of agricultural monitoring and crop production based on remote sensing observations that are obtained at different spatial scales from multiple remote sensors in the same time period and processed by the same algorithms, models or methods. These differences can be quantitatively described mainly from three aspects, i.e. multiple remote sensing observations, crop parameter estimation models, and spatial scale effects of surface parameters. Our research proposed a new method to analyse and correct the differences between multi-source and multi-scale spatial remote sensing surface reflectance datasets, aiming to provide references for further studies in agricultural applications with multiple remotely sensed observations from different sources. The new method was constructed on the basis of the physical and mathematical properties of multi-source and multi-scale reflectance datasets. Statistical theory was used to extract the statistical characteristics of multiple surface reflectance datasets and to quantitatively analyse the spatial variations of these characteristics at multiple spatial scales. Then, taking the surface reflectance at the small spatial scale as the baseline data, Gaussian distribution theory was applied to correct the multiple surface reflectance datasets based on the physical characteristics, mathematical distribution properties, and spatial variations obtained above. The proposed method was verified with two sets of multiple satellite images, acquired over two experimental fields located in Inner Mongolia and Beijing, China, with different degrees of homogeneity of the underlying surfaces. Experimental results indicate that differences between surface reflectance datasets at multiple spatial scales can be effectively corrected over non-homogeneous underlying surfaces, providing a database for further multi-source and multi-scale crop growth monitoring and yield prediction, and for the corresponding consistency analysis and evaluation.
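The Gaussian-based correction step can be read as moment matching against the small-scale baseline. A minimal sketch, assuming the datasets are approximately Gaussian and ignoring the spatial variation of the statistics that the authors also model (array and function names are ours):

```python
import numpy as np

def gaussian_match(coarse, baseline_mean, baseline_std):
    """Moment-match a coarse-scale reflectance band to the statistics of
    the small-scale (baseline) data, assuming both are roughly Gaussian.
    Illustrative only; not the authors' exact algorithm."""
    z = (coarse - coarse.mean()) / coarse.std()   # standardize the coarse band
    return z * baseline_std + baseline_mean       # rescale to baseline statistics

# Toy usage: a biased, noisier coarse band corrected toward the baseline.
rng = np.random.default_rng(0)
fine = 0.20 + 0.03 * rng.standard_normal(10_000)   # baseline reflectance
coarse = 0.25 + 0.05 * rng.standard_normal(2_500)  # coarse-scale reflectance
corrected = gaussian_match(coarse, fine.mean(), fine.std())
print(round(corrected.mean(), 3), round(corrected.std(), 3))  # ~0.20, ~0.03
```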
ERIC Educational Resources Information Center
Busse, R. T.; Elliott, Stephen N.; Kratochwill, Thomas R.
2010-01-01
The purpose of this article is to present Convergent Evidence Scaling (CES) as an emergent method for combining data from multiple assessment indicators. The CES method combines single-case assessment data by converging data gathered across multiple persons, settings, or measures, thereby providing an overall criterion-referenced outcome on which…
Clustering "N" Objects into "K" Groups under Optimal Scaling of Variables.
ERIC Educational Resources Information Center
van Buuren, Stef; Heiser, Willem J.
1989-01-01
A method based on homogeneity analysis (multiple correspondence analysis or multiple scaling) is proposed to reduce many categorical variables to one variable with "k" categories. The method is a generalization of the sum of squared distances cluster analysis problem to the case of mixed measurement level variables. (SLD)
Hobart, J; Cano, S
2009-02-01
In this monograph we examine the added value of new psychometric methods (Rasch measurement and Item Response Theory) over traditional psychometric approaches by comparing and contrasting their psychometric evaluations of existing sets of rating scale data. We have concentrated on Rasch measurement rather than Item Response Theory because we believe that it is the more advantageous method for health measurement from a conceptual, theoretical and practical perspective. Our intention is to provide an authoritative document that describes the principles of Rasch measurement and the practice of Rasch analysis in a clear, detailed, non-technical form that is accurate and accessible to clinicians and researchers in health measurement. A comparison was undertaken of traditional and new psychometric methods in five large sets of rating scale data: (1) evaluation of the Rivermead Mobility Index (RMI) in data from 666 participants in the Cannabis in Multiple Sclerosis (CAMS) study; (2) evaluation of the Multiple Sclerosis Impact Scale (MSIS-29) in data from 1725 people with multiple sclerosis; (3) evaluation of test-retest reliability of MSIS-29 in data from 150 people with multiple sclerosis; (4) examination of the use of Rasch analysis to equate scales purporting to measure the same health construct in 585 people with multiple sclerosis; and (5) comparison of relative responsiveness of the Barthel Index and Functional Independence Measure in data from 1400 people undergoing neurorehabilitation. Both Rasch measurement and Item Response Theory are conceptually and theoretically superior to traditional psychometric methods. Findings from each of the five studies show that Rasch analysis is empirically superior to traditional psychometric methods for evaluating rating scales, developing rating scales, analysing rating scale data, understanding and measuring stability and change, and understanding the health constructs we seek to quantify. There is considerable added value in using Rasch analysis rather than traditional psychometric methods in health measurement. Future research directions include the need to reproduce our findings in a range of clinical populations, detailed head-to-head comparisons of Rasch analysis and Item Response Theory, and the application of Rasch analysis to clinical practice.
Construction of multi-scale consistent brain networks: methods and applications.
Ge, Bao; Tian, Yin; Hu, Xintao; Chen, Hanbo; Zhu, Dajiang; Zhang, Tuo; Han, Junwei; Guo, Lei; Liu, Tianming
2015-01-01
Mapping human brain networks provides a basis for studying brain function and dysfunction, and thus has gained significant interest in recent years. However, modeling human brain networks still faces several challenges including constructing networks at multiple spatial scales and finding common corresponding networks across individuals. As a consequence, many previous methods were designed for a single resolution or scale of brain network, though the brain networks are multi-scale in nature. To address this problem, this paper presents a novel approach to constructing multi-scale common structural brain networks from DTI data via an improved multi-scale spectral clustering applied on our recently developed and validated DICCCOLs (Dense Individualized and Common Connectivity-based Cortical Landmarks). Since the DICCCOL landmarks possess intrinsic structural correspondences across individuals and populations, we employed the multi-scale spectral clustering algorithm to group the DICCCOL landmarks and their connections into sub-networks, meanwhile preserving the intrinsically-established correspondences across multiple scales. Experimental results demonstrated that the proposed method can generate multi-scale consistent and common structural brain networks across subjects, and its reproducibility has been verified by multiple independent datasets. As an application, these multi-scale networks were used to guide the clustering of multi-scale fiber bundles and to compare the fiber integrity in schizophrenia and healthy controls. In general, our methods offer a novel and effective framework for brain network modeling and tract-based analysis of DTI data.
Estimating scaled treatment effects with multiple outcomes.
Kennedy, Edward H; Kangovi, Shreya; Mitra, Nandita
2017-01-01
In classical study designs, the aim is often to learn about the effects of a treatment or intervention on a single outcome; in many modern studies, however, data on multiple outcomes are collected and it is of interest to explore effects on multiple outcomes simultaneously. Such designs can be particularly useful in patient-centered research, where different outcomes might be more or less important to different patients. In this paper, we propose scaled effect measures (via potential outcomes) that translate effects on multiple outcomes to a common scale, using mean-variance and median-interquartile range based standardizations. We present efficient, nonparametric, doubly robust methods for estimating these scaled effects (and weighted average summary measures), and for testing the null hypothesis that treatment affects all outcomes equally. We also discuss methods for exploring how treatment effects depend on covariates (i.e., effect modification). In addition to describing efficiency theory for our estimands and the asymptotic behavior of our estimators, we illustrate the methods in a simulation study and a data analysis. Importantly, and in contrast to much of the literature concerning effects on multiple outcomes, our methods are nonparametric and can be used not only in randomized trials to yield increased efficiency, but also in observational studies with high-dimensional covariates to reduce confounding bias.
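In symbols, the two standardizations mentioned above can be sketched as follows (notation ours; in particular, standardizing by the control-arm spread is an assumption, and the paper's estimands may pool arms):

\[
\delta_k^{\text{mv}} = \frac{E[Y_k^1] - E[Y_k^0]}{\operatorname{sd}(Y_k^0)},
\qquad
\delta_k^{\text{mi}} = \frac{\operatorname{med}(Y_k^1) - \operatorname{med}(Y_k^0)}{\operatorname{IQR}(Y_k^0)},
\]

where $Y_k^a$ is the potential outcome for outcome $k$ under treatment $a$; on either scale, effects on different outcomes become directly comparable.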
Vernier effect-based multiplication of the Sagnac beating frequency in ring laser gyroscope sensors
NASA Astrophysics Data System (ADS)
Adib, George A.; Sabry, Yasser M.; Khalil, Diaa
2018-02-01
A multiplication method for the Sagnac effect scale factor in ring laser gyroscopes is presented, based on the Vernier effect of a dual-coupler passive ring resonator coupled to the active ring. The multiplication occurs when the two rings have comparable lengths or integer multiples thereof and their scale factors have opposite signs. In this case, and when the rings have similar areas, the scale factor is multiplied by the ratio of their length to their length difference. The scale factor of the presented configuration is derived analytically and the lock-in effect is analyzed. The principle is demonstrated using optical fiber rings and a semiconductor optical amplifier as the gain medium. A scale factor multiplication of about 175 is experimentally measured, demonstrating more than two orders of magnitude enhancement in the Sagnac effect scale factor, to the authors' knowledge for the first time in the literature.
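In symbols, the stated relation reads (notation ours):

\[
S' = S \cdot \frac{L}{\Delta L}, \qquad \Delta L = \lvert L_1 - L_2 \rvert ,
\]

where $S$ is the unmultiplied Sagnac scale factor and $L_1$, $L_2$ are the lengths of the active and passive rings; a length difference near $L/175$ would account for the multiplication reported here.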
Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems
NASA Astrophysics Data System (ADS)
Razzak, M. A.; Alam, M. Z.; Sharif, M. N.
2018-03-01
In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered, to avoid complexity. The formulation and the determination of the solution are straightforward. The classical multiple time scale (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly damped forced vibration systems. The main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and improve on other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is a surprising 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.
SDG and qualitative trend based model multiple scale validation
NASA Astrophysics Data System (ADS)
Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike
2017-09-01
Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods are weak in completeness, are carried out at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is proposed. First, the SDG model is built and qualitative trends are added to the model. Then complete testing scenarios are produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the effectiveness of the method is demonstrated by validating a reactor model.
Data fusion of multi-scale representations for structural damage detection
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-01-01
Despite extensive research into structural health monitoring (SHM) in the past decades, few methods can detect multiple instances of slight damage in noisy environments. Here, we introduce a new hybrid method that utilizes multi-scale space theory and a data fusion approach for multiple damage detection in beams and plates. A cascade filtering approach provides a multi-scale space for noisy mode shapes and filters the fluctuations caused by measurement noise. In multi-scale space, a series of amplification and data fusion algorithms is used to search for damage features across all possible scales. We verify the effectiveness of the method by numerical simulation using damaged beams and plates with various types of boundary conditions. Monte Carlo simulations are conducted to illustrate the effectiveness and noise immunity of the proposed method. The applicability is further validated via laboratory case studies focusing on different damage scenarios. The results demonstrate that the proposed method has superior noise tolerance, as well as damage sensitivity, without requiring knowledge of material properties or boundary conditions.
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as from parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that, while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. The method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.
NASA Astrophysics Data System (ADS)
Zhang, Yang; Liu, Wei; Li, Xiaodong; Yang, Fan; Gao, Peng; Jia, Zhenyuan
2015-10-01
Large-scale triangulation scanning measurement systems are widely used to measure the three-dimensional profile of large-scale components and parts. The accuracy and speed of the laser stripe center extraction are essential for guaranteeing the accuracy and efficiency of the measuring system. However, in the process of large-scale measurement, multiple factors can cause deviation of the laser stripe center, including the spatial light intensity distribution, material reflectivity characteristics, and spatial transmission characteristics. A center extraction method is proposed for improving the accuracy of the laser stripe center extraction based on image evaluation of Gaussian fitting structural similarity and analysis of the multiple source factors. First, according to the features of the gray distribution of the laser stripe, evaluation of the Gaussian fitting structural similarity is estimated to provide a threshold value for center compensation. Then using the relationships between the gray distribution of the laser stripe and the multiple source factors, a compensation method of center extraction is presented. Finally, measurement experiments for a large-scale aviation composite component are carried out. The experimental results for this specific implementation verify the feasibility of the proposed center extraction method and the improved accuracy for large-scale triangulation scanning measurements.
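As an illustration of the Gaussian-fitting idea at the heart of such center extraction, here is a minimal sketch that fits one image column's gray-level profile (function names are ours; the paper's full scheme additionally compensates for the multiple source factors):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(y, amp, mu, sigma, offset):
    return amp * np.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) + offset

def stripe_center(column):
    """Sub-pixel stripe center of one image column via a Gaussian fit."""
    y = np.arange(column.size, dtype=float)
    p0 = (column.max() - column.min(), float(np.argmax(column)), 2.0,
          float(column.min()))                       # rough initial guess
    popt, _ = curve_fit(gaussian, y, column, p0=p0)
    return popt[1]                                   # mu: fitted center position

# Toy usage: a noisy stripe profile centered at 17.3 px.
rng = np.random.default_rng(1)
y = np.arange(40)
col = gaussian(y, 180.0, 17.3, 2.5, 10.0) + rng.normal(0.0, 2.0, 40)
print(stripe_center(col))   # ~17.3
```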
A multiple scales approach to maximal superintegrability
NASA Astrophysics Data System (ADS)
Gubbiotti, G.; Latini, D.
2018-07-01
In this paper we present a simple, algorithmic test to establish whether a Hamiltonian system is maximally superintegrable or not. This test is based on a very simple corollary of a theorem due to Nekhoroshev and on a perturbative technique called the multiple scales method. A positive outcome can be used to suggest maximal superintegrability, whereas a negative outcome can be used to disprove it. The method can be regarded as a finite-dimensional analogue of the use of the multiple scales method to produce soliton equations. We use this technique to show that the real counterpart of a mechanical system found by Jules Drach in 1935 is, in general, not maximally superintegrable. We give some hints on how this approach could be applied to classify maximally superintegrable systems by presenting a direct proof of the well-known Bertrand’s theorem.
MIMIC Methods for Assessing Differential Item Functioning in Polytomous Items
ERIC Educational Resources Information Center
Wang, Wen-Chung; Shih, Ching-Lin
2010-01-01
Three multiple indicators-multiple causes (MIMIC) methods, namely, the standard MIMIC method (M-ST), the MIMIC method with scale purification (M-SP), and the MIMIC method with a pure anchor (M-PA), were developed to assess differential item functioning (DIF) in polytomous items. In a series of simulations, it appeared that all three methods…
A rapid random-sampling method was used to relate densities of juvenile winter flounder to multiple scales of habitat variation in Narragansett Bay and two nearby coastal lagoons in Rhode Island. We used a 1-m beam trawl with attached video camera, continuous GPS track overlay, ...
Incremental dynamical downscaling for probabilistic analysis based on multiple GCM projections
NASA Astrophysics Data System (ADS)
Wakazuki, Y.
2015-12-01
A dynamical downscaling method for probabilistic regional-scale climate change projections was developed to cover the uncertainty of multiple general circulation model (GCM) climate simulations. The climatological increments (future minus present climate states) estimated from the GCM simulations were statistically analyzed using singular vector decomposition. Both positive and negative perturbations from the ensemble mean, with magnitudes equal to their standard deviations, were extracted and added to the ensemble mean of the climatological increments. The resulting multiple modal increments were used to create multiple modal lateral boundary conditions for the future-climate regional climate model (RCM) simulations by adding them to an objective analysis dataset. This data handling can be regarded as an advanced form of the pseudo-global-warming (PGW) method previously developed by Kimura and Kitoh (2007). The incremental handling of the GCM simulations realized approximated probabilistic climate change projections with a smaller number of RCM simulations. Three values of a climatological variable simulated by the RCMs for each mode were used to estimate the response to the perturbation of that mode. For the probabilistic analysis, the climatological variables of the RCMs were assumed to respond linearly to the multiple modal perturbations, although nonlinearity was seen for local-scale rainfall. The probability distribution of temperature could be estimated with two-mode perturbation simulations, for which the number of future-climate RCM simulations is five. On the other hand, local-scale rainfall needed four-mode simulations, for which the number of RCM simulations is nine. The probabilistic method is expected to be used for regional-scale climate change impact assessment in the future.
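A minimal numerical sketch of the modal-perturbation idea, assuming the GCM increments are arranged as a models-by-gridpoints array (the array names and the one-standard-deviation amplitude are taken from the description above, not from the paper's code):

```python
import numpy as np

rng = np.random.default_rng(2)
increments = rng.standard_normal((10, 500))   # 10 GCMs x 500 grid points (toy data)

mean_inc = increments.mean(axis=0)            # ensemble-mean increment
anom = increments - mean_inc                  # deviations from the ensemble mean
_, s, vt = np.linalg.svd(anom, full_matrices=False)

n_models = increments.shape[0]
boundary_conditions = [mean_inc]              # the ensemble-mean case
for k in range(2):                            # leading two modes
    sigma_k = s[k] / np.sqrt(n_models - 1)    # std dev of the mode amplitude
    boundary_conditions.append(mean_inc + sigma_k * vt[k])  # positive perturbation
    boundary_conditions.append(mean_inc - sigma_k * vt[k])  # negative perturbation

# Each entry would be added to an objective analysis to drive one
# PGW-style future-climate RCM run: 5 runs for two modes, 9 for four.
print(len(boundary_conditions))               # 5
```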
Identification of varying time scales in sediment transport using the Hilbert-Huang Transform method
NASA Astrophysics Data System (ADS)
Kuai, Ken Z.; Tsai, Christina W.
2012-02-01
Sediment transport processes vary at a variety of time scales - from seconds, hours, days to months and years. Multiple time scales exist in the system of flow, sediment transport and bed elevation change processes. As such, identification and selection of appropriate time scales for flow and sediment processes can assist in formulating a system of flow and sediment governing equations representative of the dynamic interaction of flow and particles at the desired level of detail. Recognizing the importance of different varying time scales in the fluvial processes of sediment transport, we introduce the Hilbert-Huang Transform (HHT) method to the field of sediment transport for time scale analysis. The HHT uses the Empirical Mode Decomposition (EMD) method to decompose a time series into a collection of Intrinsic Mode Functions (IMFs), and uses the Hilbert Spectral Analysis (HSA) to obtain instantaneous frequency data. The EMD extracts the variability of data with different time scales and improves the analysis of data series. The HSA can display the succession of time-varying time scales, which cannot be captured by the often-used Fast Fourier Transform (FFT) method. This study is one of the earlier attempts to introduce this state-of-the-art technique for the multiple time scale analysis of sediment transport processes. Three practical applications of the HHT method for data analysis of both suspended sediment and bedload transport time series are presented. The analysis results show the strong impact of flood waves on the variations of flow and sediment time scales at a large sampling time scale, as well as the impact of flow turbulence on those time scales at a smaller sampling time scale. Our analysis reveals that the existence of multiple time scales in sediment transport processes may be attributed to the fractal nature of sediment transport. The HHT analysis demonstrates that the bedload motion time scale is better represented by the ratio of the water depth to the settling velocity, h/w. In the final part, HHT results are compared with an available time scale formula in the literature.
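A minimal sketch of the HHT pipeline on a toy two-scale signal, assuming the third-party PyEMD package (published on PyPI as EMD-signal) for the decomposition and SciPy's Hilbert transform for the instantaneous frequency:

```python
import numpy as np
from scipy.signal import hilbert
from PyEMD import EMD   # assumed dependency: pip install EMD-signal

dt = 0.01
t = np.arange(0.0, 20.0, dt)
# Toy "sediment flux": a slow flood-wave cycle plus a fast fluctuation.
signal = np.sin(2 * np.pi * 0.1 * t) + 0.3 * np.sin(2 * np.pi * 2.0 * t)

imfs = EMD()(signal)   # empirical mode decomposition into IMFs

for i, imf in enumerate(imfs):
    analytic = hilbert(imf)                          # analytic signal
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2 * np.pi * dt)    # instantaneous frequency, Hz
    print(f"IMF {i}: median instantaneous frequency "
          f"{np.median(inst_freq):.2f} Hz")
```

Unlike an FFT, the per-IMF instantaneous frequency can drift in time, which is exactly the time-varying-scale behaviour the study exploits.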
Multiscale Modeling in the Clinic: Drug Design and Development
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clancy, Colleen E.; An, Gary; Cannon, William R.
A wide range of length and time scales are relevant to pharmacology, especially in drug development, drug design and drug delivery. Therefore, multi-scale computational modeling and simulation methods and paradigms that advance the linkage of phenomena occurring at these multiple scales have become increasingly important. Multi-scale approaches present in silico opportunities to advance laboratory research to bedside clinical applications in pharmaceuticals research. This is achievable through the capability of modeling to reveal phenomena occurring across multiple spatial and temporal scales, which are not otherwise readily accessible to experimentation. The resultant models, when validated, are capable of making testable predictions to guide drug design and delivery. In this review we describe the goals, methods, and opportunities of multi-scale modeling in drug design and development. We demonstrate the impact of multiple scales of modeling in this field. We indicate the common mathematical techniques employed for multi-scale modeling approaches used in pharmacology and present several examples illustrating the current state-of-the-art regarding drug development for: Excitable Systems (Heart); Cancer (Metastasis and Differentiation); Cancer (Angiogenesis and Drug Targeting); Metabolic Disorders; and Inflammation and Sepsis. We conclude with a focus on barriers to successful clinical translation of drug development, drug design and drug delivery multi-scale models.
The detection of local irreversibility in time series based on segmentation
NASA Astrophysics Data System (ADS)
Teng, Yue; Shang, Pengjian
2018-06-01
We propose a strategy for the detection of local irreversibility in stationary time series based on multiple scales. The detection is beneficial for evaluating the displacement of irreversibility toward local skewness. By means of this method, we can effectively examine the local irreversible fluctuations of a time series as the scale changes. The method was applied to simulated nonlinear signals generated by the ARFIMA process and the logistic map to show how the irreversibility functions react to the increase of the multiple scale. The method was also applied to financial market series, i.e., American, Chinese and European markets. The local irreversibility of the different markets demonstrates distinct characteristics. Simulations and real data support the need to explore local irreversibility.
A multiple scales approach to sound generation by vibrating bodies
NASA Technical Reports Server (NTRS)
Geer, James F.; Pope, Dennis S.
1992-01-01
The problem of determining the acoustic field in an inviscid, isentropic fluid generated by a solid body whose surface executes prescribed vibrations is formulated and solved as a multiple scales perturbation problem, using the Mach number M based on the maximum surface velocity as the perturbation parameter. Following the idea of multiple scales, new 'slow' spatial scales are introduced, which are defined as the usual physical spatial scale multiplied by powers of M. The governing nonlinear differential equations lead to a sequence of linear problems for the perturbation coefficient functions. However, it is shown that the higher order perturbation functions obtained in this manner will dominate the lower order solutions unless their dependence on the slow spatial scales is chosen in a certain manner. In particular, it is shown that the perturbation functions must satisfy an equation similar to Burgers' equation, with a slow spatial scale playing the role of the time-like variable. The method is illustrated by a simple one-dimensional example, as well as by three different cases of a vibrating sphere. The results are compared with solutions obtained by purely numerical methods and some insights provided by the perturbation approach are discussed.
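Schematically, the expansion described above takes the form (our notation; the paper's detailed equations differ):

\[
\tilde{x}_n = M^n x, \qquad u(x, t; M) \sim \sum_{n \ge 1} M^n\, u_n(x, \tilde{x}_1, \tilde{x}_2, \ldots, t),
\]

with uniformity enforced by requiring each $u_n$ to satisfy a Burgers-type equation in which a slow scale $\tilde{x}_k$ plays the role of the time-like variable.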
Wave excitation at Lindblad resonances using the method of multiple scales
NASA Astrophysics Data System (ADS)
Horák, Jiří
2017-12-01
In this note, the method of multiple scales is adapted to the problem of excitation of non-axisymmetric acoustic waves in a vertically integrated disk by tidal gravitational fields. We derive a formula describing the waveform of the excited wave that is uniformly valid in the whole disk as long as only a single Lindblad resonance is present. Our formalism is subsequently applied to two classical problems: trapped p-mode oscillations in relativistic accretion disks and the excitation of waves in infinite disks.
Recent progress in invariant pattern recognition
NASA Astrophysics Data System (ADS)
Arsenault, Henri H.; Chang, S.; Gagne, Philippe; Gualdron Gonzalez, Oscar
1996-12-01
We present some recent results in invariant pattern recognition, including methods that are invariant under two or more distortions of position, orientation and scale. There are now a few methods that yield good results under changes of both rotation and scale. Some new methods are introduced. These include locally adaptive nonlinear matched filters, scale-adapted wavelet transforms and invariant filters for disjoint noise. Methods using neural networks will also be discussed, including an optical method that allows simultaneous classification of multiple targets.
Monson, Daniel H.; Bowen, Lizabeth
2015-01-01
Overall, a variety of indices used to measure population status throughout the sea otter’s range have provided insights into the mechanisms driving the trajectories of various sea otter populations that a single index could not, and we suggest using multiple methods to measure a population’s status at multiple spatial and temporal scales. The work described here also illustrates the usefulness of long-term data sets and/or approaches that can be used to assess population status retrospectively, providing information otherwise not available. While not all systems will be as amenable to using all the approaches presented here, we expect innovative researchers could adapt analogous multi-scale methods to a broad range of habitats and species, including apex predators occupying the top trophic levels, which are often of conservation concern.
Measuring Growth with Vertical Scales
ERIC Educational Resources Information Center
Briggs, Derek C.
2013-01-01
A vertical score scale is needed to measure growth across multiple tests in terms of absolute changes in magnitude. Since the warrant for subsequent growth interpretations depends upon the assumption that the scale has interval properties, the validation of a vertical scale would seem to require methods for distinguishing interval scales from…
Huh, Yeamin; Smith, David E.; Feng, Meihau Rose
2014-01-01
Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with a low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
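For reference, simple allometry and its corrected variants take the standard forms used in this literature (standard forms, assumed rather than quoted from this paper):

\[
CL = a\,BW^{\,b}, \qquad CL \times MLP = a\,BW^{\,b}, \qquad CL \times BRW = a\,BW^{\,b},
\]

where $a$ and $b$ are fitted across species on log-log scales, $BW$ is body weight, and human clearance is read off at human body weight (after dividing out $MLP$ or $BRW$ in the corrected variants).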
Rocchini, Duccio
2009-01-01
Measuring heterogeneity in satellite imagery is an important task. Most measures of spectral diversity have been based on Shannon information theory. However, this approach does not inherently address different scales, ranging from local (hereafter referred to as alpha diversity) to global scales (gamma diversity). The aim of this paper is to propose a method for measuring spectral heterogeneity at multiple scales based on rarefaction curves. An algorithmic solution of rarefaction applied to image pixel values (Digital Numbers, DNs) is provided and discussed. PMID:22389600
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. Scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from the selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one; they turn out to have better performance than the original single-scale method. Because the features come from five different domains at multiple image scales and the outputs (scores) from selected score prediction models are used as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust to more than 24 image distortion types. They can also be used to evaluate images with authentic distortions. Extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
Deep convolutional neural network based antenna selection in multiple-input multiple-output system
NASA Astrophysics Data System (ADS)
Cai, Jiaxin; Li, Yan; Hu, Ying
2018-03-01
Antenna selection in wireless communication systems has attracted increasing attention due to the challenge of balancing communication performance and computational complexity in large-scale Multiple-Input Multiple-Output antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce the deep learning technique into the field of Multiple-Input Multiple-Output antenna selection in wireless communications. First, labels for the attenuation-coefficient channel matrices are generated by minimizing the key performance indicator of the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues in the attenuation coefficients is trained on the training antenna systems. Finally, we use the trained deep convolutional neural network to classify the channel matrix labels of the test antennas and select the optimal antenna subset. Simulation results demonstrate that our method achieves better performance than state-of-the-art baselines for data-driven wireless antenna selection.
Synthetic Sediments and Stochastic Groundwater Hydrology
NASA Astrophysics Data System (ADS)
Wilson, J. L.
2002-12-01
For over twenty years the groundwater community has pursued the somewhat elusive goal of describing the effects of aquifer heterogeneity on subsurface flow and chemical transport. While small-perturbation stochastic moment methods have significantly advanced theoretical understanding, why is it that stochastic applications instead use simulations of flow and transport through multiple realizations of synthetic geology? Allan Gutjahr was a principal proponent of the Fast Fourier Transform method for the synthetic generation of aquifer properties and recently explored new, more geologically sound, synthetic methods based on multi-scale Markov random fields. Focusing on sedimentary aquifers, how has the state of the art of synthetic generation changed, and what new developments can be expected, for example, to deal with issues like conceptual model uncertainty, the differences between measurement and modeling scales, and subgrid-scale variability? What will it take to get stochastic methods, whether based on moments, multiple realizations, or some other approach, into widespread application?
On simulating flow with multiple time scales using a method of averages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, L.G.
1997-12-31
The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather, he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.
Equating in Small-Scale Language Testing Programs
ERIC Educational Resources Information Center
LaFlair, Geoffrey T.; Isbell, Daniel; May, L. D. Nicolas; Gutierrez Arvizu, Maria Nelly; Jamieson, Joan
2017-01-01
Language programs need multiple test forms for secure administrations and effective placement decisions, but can they have confidence that scores on alternate test forms have the same meaning? In large-scale testing programs, various equating methods are available to ensure the comparability of forms. The choice of equating method is informed by…
Behrangrad, Shabnam; Kordi Yoosefinejad, Amin
2018-03-01
The purpose of this study is to investigate the validity and reliability of the Persian version of the Multidimensional Assessment of Fatigue Scale (MAFS) in an Iranian population with multiple sclerosis. A self-reported survey on fatigue, including the MAFS, the Fatigue Impact Scale and demographic measures, was completed by 130 patients with multiple sclerosis and 60 healthy persons sampled by a convenience method. Test-retest reliability was evaluated 3 days apart. Construct validity of the MAFS was assessed against the Fatigue Impact Scale. The MAFS had high internal consistency (Cronbach's alpha >0.9) and 3-day test-retest reliability (intraclass correlation coefficient = 0.99). Correlation between the Fatigue Impact Scale and the MAFS was high (r = 0.99). Correlation between MAFS scores and the Expanded Disability Status Scale was also strong (r = 0.85). Questionnaire items showed acceptable item-scale correlation (0.968-0.993). The Persian version of the MAFS appears to be a valid and reliable questionnaire. It is an appropriate short multidimensional instrument to assess fatigue in patients with multiple sclerosis in clinical practice and research. Implications for Rehabilitation: The Persian version of the Multidimensional Assessment of Fatigue Scale is a valid and reliable instrument for assessing and monitoring fatigue in Persian-speaking patients with multiple sclerosis. It is easy to administer and time-efficient in comparison to other instruments evaluating fatigue in patients with multiple sclerosis.
A variance-decomposition approach to investigating multiscale habitat associations
Lawler, J.J.; Edwards, T.C.
2006-01-01
The recognition of the importance of spatial scale in ecology has led many researchers to take multiscale approaches to studying habitat associations. However, few of the studies that investigate habitat associations at multiple spatial scales have considered the potential effects of cross-scale correlations in measured habitat variables. When cross-scale correlations in such studies are strong, conclusions drawn about the relative strength of habitat associations at different spatial scales may be inaccurate. Here we adapt and demonstrate an analytical technique based on variance decomposition for quantifying the influence of cross-scale correlations on multiscale habitat associations. We used the technique to quantify the variation in nest-site locations of Red-naped Sapsuckers (Sphyrapicus nuchalis) and Northern Flickers (Colaptes auratus) associated with habitat descriptors at three spatial scales. We demonstrate how the method can be used to identify components of variation that are associated only with factors at a single spatial scale as well as shared components of variation that represent cross-scale correlations. Despite the fact that no explanatory variables in our models were highly correlated (r < 0.60), we found that shared components of variation reflecting cross-scale correlations accounted for roughly half of the deviance explained by the models. These results highlight the importance of both conducting habitat analyses at multiple spatial scales and of quantifying the effects of cross-scale correlations in such analyses. Given the limits of conventional analytical techniques, we recommend alternative methods, such as the variance-decomposition technique demonstrated here, for analyzing habitat associations at multiple spatial scales. © The Cooper Ornithological Society 2006.
Method for preparing small volume reaction containers
Retterer, Scott T.; Doktycz, Mitchel J.
2017-04-25
Engineered reaction containers that can be physically and chemically defined to control the flux of molecules of different sizes and charge are disclosed. Methods for constructing small volume reaction containers through a combination of etching and deposition are also disclosed. The methods allow for the fabrication of multiple devices that possess features on multiple length scales, specifically small volume containers with controlled porosity on the nanoscale.
GPA, GMAT, and Scale: A Method of Quantification of Admissions Criteria.
ERIC Educational Resources Information Center
Sobol, Marion G.
1984-01-01
Multiple regression analysis was used to establish a scale, measuring college student involvement in campus activities, work experience, technical background, references, and goals. This scale was tested to see whether it improved the prediction of success in graduate school. (Author/MLW)
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
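To make the per-scale fusion step concrete, here is a minimal sketch that assumes the scale-aligned IMFs have already been computed for two registered input images; the max-abs selection rule is a common baseline, not necessarily the authors' exact rule:

```python
import numpy as np

def fuse_imfs(imfs_a, imfs_b):
    """Fuse two images given their scale-aligned decompositions.

    imfs_a, imfs_b: arrays of shape (n_scales, H, W) holding the matched
    IMFs of each image (e.g., from a multivariate/2D EMD). At each scale
    and pixel, keep the coefficient with larger magnitude, then sum the
    scales to reconstruct the fused image.
    """
    keep_a = np.abs(imfs_a) >= np.abs(imfs_b)
    fused_scales = np.where(keep_a, imfs_a, imfs_b)
    return fused_scales.sum(axis=0)

# Toy usage with random stand-ins for scale-aligned IMFs.
rng = np.random.default_rng(3)
a = rng.standard_normal((4, 64, 64))
b = rng.standard_normal((4, 64, 64))
print(fuse_imfs(a, b).shape)   # (64, 64)
```

The point of MEMD in this scheme is precisely that it makes the scale axis comparable across input images, which is what licenses the pixel-wise comparison above.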
Ruiz-Peña, Juan Luis; Piñero, Pilar; Sellers, Guillermo; Argente, Joaquín; Casado, Alfredo; Foronda, Jesus; Uclés, Antonio; Izquierdo, Guillermo
2004-01-01
Background: What currently appears to be irreversible axonal loss in normal appearing white matter, measured by proton magnetic resonance spectroscopy, is of great interest in the study of multiple sclerosis. Our aim is to determine the axonal damage in normal appearing white matter measured by magnetic resonance spectroscopy and to correlate this with functional disability measured by the Multiple Sclerosis Functional Composite scale, the Neurological Rating Scale, the Ambulation Index scale, and the Expanded Disability Scale Score. Methods: Thirty-one patients (9 male and 22 female) with relapsing remitting multiple sclerosis and a Kurtzke Expanded Disability Scale Score of 0–5.5 were recruited from four hospitals in Andalusia, Spain and included in the study. Magnetic resonance spectroscopy scans and neurological disability assessments were performed on the same day. Results: A statistically significant correlation was found (r = -0.38, p < 0.05) between disability (measured by Expanded Disability Scale Score) and N-Acetyl Aspartate (NAA/Cr ratio) levels in normal appearing white matter in these patients. No correlation was found between the NAA/Cr ratio and disability measured by any of the other disability assessment scales. Conclusions: There is a correlation between disability (measured by Expanded Disability Scale Score) and the NAA/Cr ratio in normal appearing white matter. The lack of correlation between the NAA/Cr ratio and the Multiple Sclerosis Functional Composite score indicates that the Multiple Sclerosis Functional Composite is not able to measure irreversible disability and would be more useful as a marker in stages where axonal damage is not a predominant factor. PMID:15191618
BME Estimation of Residential Exposure to Ambient PM10 and Ozone at Multiple Time Scales
Yu, Hwa-Lung; Chen, Jiu-Chiuan; Christakos, George; Jerrett, Michael
2009-01-01
Background: Long-term human exposure to ambient pollutants can be an important contributing or etiologic factor in many chronic diseases. Spatiotemporal estimation (mapping) of long-term exposure in residential areas based on field observations recorded in the U.S. Environmental Protection Agency’s Air Quality System often suffers from missing data issues due to the scarce monitoring network across space and the inconsistent recording periods at different monitors. Objective: We developed and compared two upscaling methods: UM1 (data aggregation followed by exposure estimation) and UM2 (exposure estimation followed by data aggregation) for long-term PM10 (particulate matter with aerodynamic diameter ≤ 10 μm) and ozone exposure estimation, and applied them at multiple time scales to estimate PM and ozone exposures for the residential areas of the Health Effects of Air Pollution on Lupus (HEAPL) study. Method: We used Bayesian maximum entropy (BME) analysis for the two upscaling methods. We performed spatiotemporal cross-validations at multiple time scales for UM1 and UM2 to assess the estimation accuracy across space and time. Results: Compared with the kriging method, the integration of soft information by the BME method effectively increased the estimation accuracy for both pollutants. The spatiotemporal distributions of estimation errors from UM1 and UM2 were similar. The cross-validation results indicated that UM2 is generally better than UM1 for exposure estimation at multiple time scales in terms of predictive accuracy and lack of bias. For yearly PM10 estimation, both approaches have comparable performance, but UM1 carries a much lower computational burden. Conclusion: The BME-based upscaling methods UM1 and UM2 can assimilate core and site-specific knowledge bases of different formats for long-term exposure estimation. This study shows that UM1 can perform reasonably well when the aggregation process does not alter the spatiotemporal structure of the original data set; otherwise, UM2 is preferable. PMID:19440491
NASA Astrophysics Data System (ADS)
Yuan, Cadmus C. A.
2015-12-01
Optical ray-tracing models have applied the Beer-Lambert method in single-luminescence-material systems to model the white light pattern produced from a blue LED light source. This paper extends the algorithm to a mixed multiple-luminescence-material system by introducing the equivalent excitation and emission spectra of the individual luminescence materials. The quantum efficiencies of the individual materials and the self-absorption of the multiple-luminescence-material system are considered as well. With this combination, researchers are able to model the luminescence characteristics of LED chip-scale packaging (CSP), which offers simple process steps and freedom in the geometric dimensions of the luminescence material. The method is first validated against experimental results; a parametric investigation is then conducted.
NASA Technical Reports Server (NTRS)
Kim, S.-W.; Chen, C.-P.
1988-01-01
The paper presents a multiple-time-scale turbulence model based on a single-point closure and a simplified split-spectrum method. Consideration is given to a class of turbulent boundary layer flows and to separated and/or swirling elliptic turbulent flows. For the separated and/or swirling turbulent flows, the present turbulence model yielded significantly improved computational results over those obtained with the standard k-epsilon turbulence model.
Informative graphing of continuous safety variables relative to normal reference limits.
Breder, Christopher D
2018-05-16
Interpreting graphs of continuous safety variables can be complicated because differences in age, gender, and testing-site methodology may give rise to multiple reference limits. Furthermore, data below the lower limit of normal are compressed relative to points above the upper limit of normal. The objective of this study is to develop a graphing technique that addresses these issues and is visually intuitive. A mock dataset with multiple reference ranges is initially used to develop the graphing technique. Formulas are developed for conditions where data are above the upper limit of normal, within the normal range, below the lower limit of normal, and below the lower limit of normal when the data value equals zero. After the formulae are developed, an anonymized dataset from an actual set of trials for an approved drug is evaluated, comparing the technique developed in this study to standard graphical methods. Formulas are derived for the novel graphing method based on multiples of the normal limits. The formula for values scaled between the upper and lower limits of normal is a novel application of a readily available scaling formula. The formula for the lower limit of normal is novel and addresses the issue of this value potentially being indeterminate when the result to be scaled as a multiple is zero. The formulae and graphing method described in this study provide a visually intuitive way to graph continuous safety data, including laboratory values and vital sign data.
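One plausible reading of the multiples-of-normal scaling, sketched in Python (the paper's exact formulas are not reproduced here, so the branch definitions below are our assumptions):

```python
def scale_to_limits(value, lln, uln):
    """Express a lab value as a signed multiple of its reference limits.

    Assumed scheme: values above the upper limit of normal (ULN) become
    multiples of the ULN, in-range values map linearly onto [-1, 1], and
    values below the lower limit of normal (LLN) become negative multiples
    of the LLN, with the indeterminate zero case flagged explicitly.
    """
    if value > uln:
        return value / uln                          # e.g., 2.0 = twice the ULN
    if value >= lln:
        return 2 * (value - lln) / (uln - lln) - 1  # normal range -> [-1, 1]
    if value == 0:
        return float("-inf")                        # indeterminate multiple: flag it
    return -lln / value                             # grows as the value falls

for v in (120, 40, 10, 0):
    print(v, scale_to_limits(v, lln=10, uln=50))
```

Whatever the exact branch definitions, the effect the paper is after is the same: points below the LLN are expanded onto a scale commensurate with points above the ULN, so both tails read intuitively against reference limits that may differ by age, gender, or site.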
The role of multiple-scale modelling of epilepsy in seizure forecasting
Kuhlmann, Levin; Grayden, David B.; Wendling, Fabrice; Schiff, Steven J.
2014-01-01
Over the past three decades, a number of seizure prediction, or forecasting, methods have been developed. Although major achievements have been made in the statistical evaluation of proposed algorithms, further progress is still necessary before clinical application in patients. The lack of physiological motivation can partly explain this limitation. Therefore, a natural question arises: can computational models of epilepsy be used to improve these methods? Here we review the literature on multiple-scale neural modelling of epilepsy and the use of such models to infer physiological changes underlying epilepsy and epileptic seizures. We argue that these methods can be applied to advance the state of the art in seizure forecasting. PMID:26035674
Cleanthous, Sophie; Kinter, Elizabeth; Marquis, Patrick; Petrillo, Jennifer; You, Xiaojun; Wakeford, Craig; Sabatella, Guido
2017-01-01
Background Study objectives were to evaluate the Multiple Sclerosis Impact Scale (MSIS-29) and explore an optimized scoring structure based on empirical post-hoc analyses of data from the Phase III ADVANCE clinical trial. Methods ADVANCE MSIS-29 data from six time points were analyzed in a sample of patients with relapsing–remitting multiple sclerosis (RRMS). Rasch Measurement Theory (RMT) analysis was undertaken to examine three broad areas: sample-to-scale targeting, measurement scale properties, and sample measurement validity. Interpretation of the results led to an alternative MSIS-29 scoring structure, which was further evaluated alongside the responsiveness of the original and revised scales at Week 48. Results RMT analysis provided mixed evidence for the Physical and Psychological Impact scales, which were sub-optimally targeted at the lower-functioning end of the scales. Item-fit results indicated that their conceptual basis could also be improved. The rescored MSIS-29 scales improved, but did not fully resolve, the measurement scale properties and targeting of the MSIS-29. In two of the three revised scales, responsiveness analysis indicated a strengthened ability to detect change. Conclusion The revised MSIS-29 provides an initial evidence-based improvement to a patient-reported outcome (PRO) instrument for evaluating the impact of MS. Revised scoring improves conceptual clarity and interpretation of scores by refining the scale structure to include Symptoms, Psychological Impact, and General Limitations. Clinical trial ADVANCE (ClinicalTrials.gov identifier NCT00906399). PMID:29104758
Lyons, James E.; Andrew, Royle J.; Thomas, Susan M.; Elliott-Smith, Elise; Evenson, Joseph R.; Kelly, Elizabeth G.; Milner, Ruth L.; Nysewander, David R.; Andres, Brad A.
2012-01-01
Large-scale monitoring of bird populations is often based on count data collected across spatial scales that may include multiple physiographic regions and habitat types. Monitoring at large spatial scales may require multiple survey platforms (e.g., from boats and land when monitoring coastal species) and multiple survey methods. It becomes especially important to explicitly account for detection probability when analyzing count data that have been collected using multiple survey platforms or methods. We evaluated a new analytical framework, N-mixture models, to estimate actual abundance while accounting for multiple detection biases. During May 2006, we made repeated counts of Black Oystercatchers (Haematopus bachmani) from boats in the Puget Sound area of Washington (n = 55 sites) and from land along the coast of Oregon (n = 56 sites). We used a Bayesian analysis of N-mixture models to (1) assess detection probability as a function of environmental and survey covariates and (2) estimate total Black Oystercatcher abundance during the breeding season in the two regions. Probability of detecting individuals during boat-based surveys was 0.75 (95% credible interval: 0.42–0.91) and was not influenced by tidal stage. Detection probability from surveys conducted on foot was 0.68 (0.39–0.90); the latter was not influenced by fog, wind, or number of observers but was ~35% lower during rain. The estimated population size was 321 birds (262–511) in Washington and 311 (276–382) in Oregon. N-mixture models provide a flexible framework for modeling count data and covariates in large-scale bird monitoring programs designed to understand population change.
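The engine of an N-mixture model is marginalizing the unobserved site abundance out of the repeated counts. The study fits this within a Bayesian framework with covariates; the sketch below is a maximum-likelihood analogue of the basic Poisson-binomial form, with toy counts and an assumed truncation point n_max.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def nmix_nll(params, counts, n_max=200):
    """Negative log-likelihood of a basic Poisson N-mixture model.

    counts: sites x visits array of repeated counts.
    Marginalizes the latent abundance N_i at each site over 0..n_max.
    """
    lam = np.exp(params[0])                   # log link for abundance
    p = 1.0 / (1.0 + np.exp(-params[1]))      # logit link for detection
    N = np.arange(n_max + 1)
    log_prior = stats.poisson.logpmf(N, lam)  # Pr(N_i = N)
    nll = 0.0
    for y in counts:                          # one site at a time
        if y.max() > n_max:
            return np.inf
        log_lik_N = stats.binom.logpmf(y[:, None], N[None, :], p).sum(axis=0)
        nll -= np.logaddexp.reduce(log_prior + log_lik_N)
    return nll

# toy data: 5 sites, 3 repeated counts each (invented, not the oystercatcher data)
counts = np.array([[3, 2, 4], [0, 1, 0], [5, 4, 5], [2, 2, 1], [0, 0, 1]])
fit = minimize(nmix_nll, x0=[np.log(3.0), 0.0], args=(counts,), method="Nelder-Mead")
print("lambda:", np.exp(fit.x[0]), "p:", 1.0 / (1.0 + np.exp(-fit.x[1])))
```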
Multicriteria decision analysis: Overview and implications for environmental decision making
Hermans, Caroline M.; Erickson, Jon D.; Messner, Frank; Ring, Irene
2007-01-01
Environmental decision making involving multiple stakeholders can benefit from the use of a formal process to structure stakeholder interactions, leading to more successful outcomes than traditional discursive decision processes. There are many tools available to handle complex decision making. Here we illustrate the use of a multicriteria decision analysis (MCDA) outranking tool (PROMETHEE) to facilitate decision making at the watershed scale, involving multiple stakeholders, multiple criteria, and multiple objectives. We compare various MCDA methods and their theoretical underpinnings, examining methods that most realistically model complex decision problems in ways that are understandable and transparent to stakeholders.
Satellite attitude prediction by multiple time scales method
NASA Technical Reports Server (NTRS)
Tao, Y. C.; Ramnath, R.
1975-01-01
An investigation is made of the problem of predicting the attitude of satellites under the influence of external disturbing torques. The attitude dynamics are first expressed in a perturbation formulation, which is then solved by the multiple scales approach. The independent variable, time, is extended into new scales (fast, slow, etc.), and the integration is carried out separately in the new variables. The theory is applied to two different satellite configurations, rigid body and dual spin, each of which may have an asymmetric mass distribution. The disturbing torques considered are gravity gradient and geomagnetic. Finally, since the multiple time scales approach separates the slow and fast behaviors of satellite attitude motion, this property is used for the design of an attitude control device: a nutation damping control loop, using the geomagnetic torque for an earth-pointing dual-spin satellite, is designed in terms of the slow equation.
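The mechanics of the expansion can be sketched generically. The fragment below is an illustrative two-timing setup for a weakly perturbed oscillator, not the paper's specific attitude equations.

```latex
% Two-time-scale expansion for \ddot{x} + x = \epsilon f(x, \dot{x}),
% with fast time T_0 = t and slow time T_1 = \epsilon t:
\begin{align*}
  x(t;\epsilon) &= x_0(T_0, T_1) + \epsilon\, x_1(T_0, T_1) + O(\epsilon^2), \\
  \frac{d}{dt} &= \frac{\partial}{\partial T_0}
    + \epsilon \frac{\partial}{\partial T_1}, \qquad
  \frac{d^2}{dt^2} = \frac{\partial^2}{\partial T_0^2}
    + 2\epsilon \frac{\partial^2}{\partial T_0\, \partial T_1} + O(\epsilon^2).
\end{align*}
% Collecting powers of \epsilon and suppressing secular terms in x_1 yields
% equations for the slow drift of amplitude and phase on T_1, which is the
% slow/fast separation exploited above for nutation-damping control design.
```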
Fernández-Varea, J M; Andreo, P; Tabata, T
1996-07-01
Average penetration depths and detour factors of 1-50 MeV electrons in water and plastic materials have been computed by means of analytical calculation, within the continuous-slowing-down approximation and including multiple scattering, and using the Monte Carlo codes ITS and PENELOPE. Results are compared to detour factors from alternative definitions previously proposed in the literature. Different procedures used in low-energy electron-beam dosimetry to convert ranges and depths measured in plastic phantoms into water-equivalent ranges and depths are analysed. A new simple and accurate scaling method, based on Monte Carlo-derived ratios of average electron penetration depths and thus incorporating the effect of multiple scattering, is presented. Data are given for most plastics used in electron-beam dosimetry together with a fit which extends the method to any other low-Z plastic material. A study of scaled depth-dose curves and mean energies as a function of depth for some plastics of common usage shows that the method improves the consistency and results of other scaling procedures in dosimetry with electron beams at therapeutic energies.
Measuring the scale dependence of intrinsic alignments using multiple shear estimates
NASA Astrophysics Data System (ADS)
Leonard, C. Danielle; Mandelbaum, Rachel
2018-06-01
We present a new method for measuring the scale dependence of the intrinsic alignment (IA) contamination to the galaxy-galaxy lensing signal, which takes advantage of multiple shear estimation methods applied to the same source galaxy sample. By exploiting the resulting correlation of both shape noise and cosmic variance, our method can provide an increase in the signal-to-noise of the measured IA signal as compared to methods which rely on the difference of the lensing signal from multiple photometric redshift bins. For a galaxy-galaxy lensing measurement which uses LSST sources and DESI lenses, the signal-to-noise on the IA signal from our method is predicted to improve by a factor of ˜2 relative to the method of Blazek et al. (2012), for pairs of shear estimates which yield substantially different measured IA amplitudes and highly correlated shape noise terms. We show that statistical error necessarily dominates the measurement of intrinsic alignments using our method. We also consider a physically motivated extension of the Blazek et al. (2012) method which assumes that all nearby galaxy pairs, rather than only excess pairs, are subject to IA. In this case, the signal-to-noise of the method of Blazek et al. (2012) is improved.
Parish, Chad M.; Miller, Michael K.
2014-12-09
Nanostructured ferritic alloys (NFAs) exhibit complex microstructures consisting of 100-500 nm ferrite grains, grain boundary solute enrichment, and multiple populations of precipitates and nanoclusters (NCs). Understanding these materials' excellent creep and radiation-tolerance properties requires a combination of multiple atomic-scale experimental techniques. Recent advances in scanning transmission electron microscopy (STEM) hardware and data analysis methods have the potential to revolutionize nanometer- to micrometer-scale materials analysis. These methods are applied to NFAs as a test case and compared with both conventional STEM methods and complementary techniques such as scanning electron microscopy and atom probe tomography. In this paper, we review past results and present new results illustrating the effectiveness of latest-generation STEM instrumentation and data analysis.
Sunderland, Matthew; Batterham, Philip; Calear, Alison; Carragher, Natacha; Baillie, Andrew; Slade, Tim
2018-04-10
There is no standardized approach to the measurement of social anxiety. Researchers and clinicians are faced with numerous self-report scales with varying strengths, weaknesses, and psychometric properties. The lack of standardization makes it difficult to compare scores across populations that utilise different scales. Item response theory offers one solution to this problem by equating different scales using an anchor scale to set a standardized metric. This study is the first to equate several scales for social anxiety disorder. Data from two samples (n=3,175 and n=1,052), recruited from the Australian community using online advertisements, were utilised to equate a network of 11 self-report social anxiety scales via a fixed parameter item calibration method. Comparisons between actual and equated scores for most of the scales indicated a high level of agreement, with mean differences <0.10 (equivalent to a mean difference of less than one point on the standardized metric). This study demonstrates that scores from multiple scales measuring social anxiety can be converted to a common scale. Re-scoring observed scores to a common scale provides opportunities to combine research from multiple studies and ultimately better assess social anxiety in treatment and research settings.
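The downstream crosswalk enabled by such equating can be pictured through the latent trait: invert one scale's test characteristic curve (TCC) to get theta, then read off the expected score on the other scale. The sketch below assumes two 2PL scales whose item parameters already sit on a common metric; the parameters are invented, and the study's fixed-parameter calibration is the step that would place them there.

```python
import numpy as np

def tcc(theta, a, b):
    """Test characteristic curve of a 2PL scale: expected raw score at each theta."""
    return (1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))).sum(axis=1)

# hypothetical 2PL parameters, assumed calibrated to a common metric
a_A, b_A = np.array([1.2, 0.9, 1.5]), np.array([-0.5, 0.2, 1.0])   # scale A items
a_B, b_B = np.array([1.0, 1.3]), np.array([-0.2, 0.8])             # scale B items

grid = np.linspace(-4, 4, 801)
score_A, score_B = tcc(grid, a_A, b_A), tcc(grid, a_B, b_B)

def equate_A_to_B(raw_a):
    """Raw score on A -> theta (inverted TCC) -> expected score on B."""
    theta = np.interp(raw_a, score_A, grid)   # TCCs are monotone in theta
    return np.interp(theta, grid, score_B)

print(equate_A_to_B(2.0))
```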
NASA Astrophysics Data System (ADS)
Yahyanejad, Saeed; Rinner, Bernhard
2015-06-01
The use of multiple small-scale UAVs to support first responders in disaster management has become popular because of their speed and low deployment costs. We exploit such UAVs to perform real-time monitoring of target areas by fusing individual images captured from heterogeneous aerial sensors. Many approaches have already been presented to register images from homogeneous sensors. These methods have demonstrated robustness against scale, rotation and illumination variations and can also cope with limited overlap among individual images. In this paper we focus on thermal and visual image registration and propose different methods to improve the quality of interspectral registration for the purpose of real-time monitoring and mobile mapping. Images captured by low-altitude UAVs represent a very challenging scenario for interspectral registration due to the strong variations in overlap, scale, rotation, point of view and structure of such scenes. Furthermore, these small-scale UAVs have limited processing and communication power. The contributions of this paper include (i) the introduction of a feature descriptor for robustly identifying corresponding regions of images in different spectra, (ii) the registration of image mosaics, and (iii) the registration of depth maps. We evaluated the first method using a test data set consisting of 84 image pairs. In all instances our approach combined with SIFT or SURF feature-based registration was superior to the standard versions. Although we focus mainly on aerial imagery, our evaluation shows that the presented approach would also be beneficial in other scenarios such as surveillance and human detection. Furthermore, we demonstrated the advantages of the other two methods in the case of multiple image pairs.
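For reference, the homogeneous-sensor baseline the paper improves on can be assembled from standard components: SIFT keypoints, a ratio test, and a RANSAC homography. The sketch below is that baseline only; the file names and thresholds are placeholders, and the paper's interspectral feature descriptor is not reproduced here.

```python
import cv2
import numpy as np

img1 = cv2.imread("visual.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file names
img2 = cv2.imread("thermal.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(img1, None)
k2, d2 = sift.detectAndCompute(img2, None)

# Lowe-style ratio test to keep distinctive matches only
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < 0.75 * n.distance]

# robust homography from the surviving correspondences
src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
warped = cv2.warpPerspective(img1, H, img2.shape[::-1])  # img1 aligned onto img2
```

On thermal/visual pairs, the descriptor matching step is exactly where this baseline breaks down, which motivates the interspectral descriptor contributed by the paper.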
Regression Models for the Analysis of Longitudinal Gaussian Data from Multiple Sources
O’Brien, Liam M.; Fitzmaurice, Garrett M.
2006-01-01
We present a regression model for the joint analysis of longitudinal multiple source Gaussian data. Longitudinal multiple source data arise when repeated measurements are taken from two or more sources, and each source provides a measure of the same underlying variable and on the same scale. This type of data generally produces a relatively large number of observations per subject; thus estimation of an unstructured covariance matrix often may not be possible. We consider two methods by which parsimonious models for the covariance can be obtained for longitudinal multiple source data. The methods are illustrated with an example of multiple informant data arising from a longitudinal interventional trial in psychiatry. PMID:15726666
Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.
Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng
2016-10-01
Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degrades the discriminative power when using Hamming distance ranking. Moreover, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complementarity for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over the state-of-the-art methods.
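At the bitwise level, the idea amounts to replacing the uniform Hamming count with a per-bit weighted sum. A minimal sketch with random codes follows; the stand-in weights would, in the paper's method, come from query-adaptive estimates of hash-function quality.

```python
import numpy as np

def weighted_hamming_rank(query_code, db_codes, bit_weights):
    """Rank database hash codes by a weighted Hamming distance to the query.

    query_code: (nbits,) 0/1 array; db_codes: (n, nbits); bit_weights: (nbits,).
    """
    disagree = db_codes != query_code     # bitwise mismatches, (n, nbits)
    dist = disagree @ bit_weights         # weighted sum instead of a uniform count
    return np.argsort(dist)

rng = np.random.default_rng(0)
db = rng.integers(0, 2, size=(1000, 32))
q = rng.integers(0, 2, size=32)
w = rng.uniform(0.5, 1.5, size=32)        # stand-in query-adaptive weights
print(weighted_hamming_rank(q, db, w)[:5])
```

Unlike plain Hamming ranking, the weighted distance is real-valued, so it breaks the coarse ties among codes at equal Hamming radius, which is where the fine-grained ranking gain comes from.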
An approach to multiscale modelling with graph grammars
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-01-01
Background and Aims Functional–structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. Methods A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Key Results Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. Conclusions The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models. PMID:25134929
Assessment of the spatial scaling behaviour of floods in the United Kingdom
NASA Astrophysics Data System (ADS)
Formetta, Giuseppe; Stewart, Elizabeth; Bell, Victoria
2017-04-01
Floods are among the most dangerous natural hazards, causing loss of life and significant damage to private and public property. Regional flood-frequency analysis (FFA) methods are essential tools to assess flood hazard and plan interventions for its mitigation. FFA methods are often based on the well-known index flood method, which assumes the invariance of the coefficient of variation of floods with drainage area. This assumption is equivalent to the simple scaling or self-similarity assumption for peak floods, i.e. their spatial structure remains similar, in a particular and relatively simple way, to itself over a range of scales. Spatial scaling of floods has been evaluated at national scale for countries such as Canada, the USA, and Australia. To our knowledge, such a study has not been conducted for the United Kingdom, even though the standard FFA method there is based on the index flood assumption. In this work we present an integrated approach to assess the spatial scaling behaviour of floods in the United Kingdom using three different methods: product moments (PM), probability weighted moments (PWM), and quantile analysis (QA). We analyse both instantaneous and daily annual observed maximum floods, and we performed our analysis both across the entire country and in its sub-climatic regions as defined in the Flood Studies Report (NERC, 1975). To evaluate the relationship between the k-th moments or quantiles and the drainage area, we used both regression with area alone and multiple regression considering other explanatory variables to account for the geomorphology, amount of rainfall, and soil type of the catchments. The latter multiple-regression approach was only recently demonstrated to be more robust than the traditional regression with area alone, which can lead to biased estimates of scaling exponents and misinterpretation of spatial scaling behaviour. We tested our framework on almost 600 rural catchments in the UK, considered both as an entire region and split into 11 sub-regions with 50 catchments per region on average. Preliminary results from the three spatial scaling methods are generally in agreement and indicate that: i) only some of the peak flow variability is explained by area alone (approximately 50% for the entire country, and ranging between 40% and 70% for the sub-regions); ii) this percentage increases to 90% for the entire country and ranges between 80% and 95% for the sub-regions when multiple regression is used; iii) the simple scaling hypothesis holds in all sub-regions, with the exception of weak multi-scaling found in region 2 (North) and regions 5 and 6 (South East). We hypothesize that these deviations can be explained by heterogeneity in large-scale precipitation and by the influence of the soil type (predominantly chalk) on the flood formation process in regions 5 and 6.
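At its simplest, estimating a scaling exponent is a log-log regression of a flood statistic on drainage area, and the multiple-regression variant adds covariates to guard against the omitted-variable bias noted above. The sketch below uses synthetic data; the true exponent, noise levels, and the rainfall-style covariate are all invented.

```python
import numpy as np

# simple-scaling check: Q ~ c * A^theta  =>  log Q = log c + theta * log A
rng = np.random.default_rng(1)
area = 10 ** rng.uniform(1, 4, size=200)                  # drainage areas (km^2)
q_peak = 0.9 * area ** 0.75 * rng.lognormal(0, 0.3, 200)  # synthetic, true theta = 0.75

theta, logc = np.polyfit(np.log(area), np.log(q_peak), 1)
print("scaling exponent (area alone):", theta)

# multiple-regression variant: add covariates (here an invented rainfall index)
# to reduce omitted-variable bias in the estimated exponent
rain = rng.lognormal(6.5, 0.2, 200)
X = np.column_stack([np.ones(200), np.log(area), np.log(rain)])
beta, *_ = np.linalg.lstsq(X, np.log(q_peak), rcond=None)
print("scaling exponent controlling for rainfall:", beta[1])
```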
NASA Astrophysics Data System (ADS)
Li, Jinghe; Song, Linping; Liu, Qing Huo
2016-02-01
A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver for the 2D volume integral equation for the forward computation. The inversion technique with CSI combines the efficient FFT algorithm to speed up the matrix-vector multiplication and the stable convergence of the simultaneous multiple frequency CSI in the iteration process. As a result, this method is capable of making quantitative conductivity image reconstruction effectively for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples have been demonstrated to validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, Roshan; Houser, Paul R.; Anantharaj, Valentine G.
2011-04-01
Precipitation products are currently available from various sources at higher spatial and temporal resolution than at any time in the past. Each precipitation product has its strengths and weaknesses in availability, accuracy, resolution, retrieval techniques, and quality control. By merging precipitation data obtained from multiple sources, one can improve the information content by minimizing these issues. However, precipitation data merging poses challenges of scale mismatch and of accurate error and bias assessment. In this paper we present Optimal Merging of Precipitation (OMP), a new method to merge precipitation data from multiple sources that are of different spatial and temporal resolutions and accuracies. This method is a combination of scale conversion and merging-weight optimization, involving performance tracing based on Bayesian statistics and trend analysis, which yields merging weights for each precipitation data source. The weights are optimized at multiple scales to facilitate multiscale merging and better precipitation downscaling. Precipitation data used in the experiment include products from the 12-km resolution North American Land Data Assimilation System (NLDAS), the 8-km resolution CMORPH, and the 4-km resolution National Stage-IV QPE. The test cases demonstrate that the OMP method is capable of identifying the better data sources and allocating higher priority to them in the merging procedure, dynamically over the region and time period. The method is also effective in filtering out poor-quality data introduced into the merging process.
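The merging step can be illustrated with fixed inverse-error-variance weights. The actual OMP method optimizes the weights dynamically via Bayesian performance tracing and handles the scale conversion explicitly; this sketch omits both, and the fields and error variances are assumptions.

```python
import numpy as np

def merge_precip(estimates, error_var):
    """Merge co-located precipitation estimates by inverse-error-variance weights.

    estimates: (n_sources, ny, nx) fields already resampled to a common grid.
    error_var: assumed error variance per source (lower variance -> larger weight).
    """
    w = 1.0 / np.asarray(error_var)
    w = w / w.sum()
    return np.tensordot(w, estimates, axes=1)   # weighted sum over the source axis

# toy fields standing in for resampled NLDAS, CMORPH, and Stage-IV grids
nldas = np.full((4, 4), 5.0)
cmorph = np.full((4, 4), 6.0)
stage4 = np.full((4, 4), 5.5)
merged = merge_precip(np.stack([nldas, cmorph, stage4]), error_var=[1.0, 2.0, 0.5])
print(merged[0, 0])
```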
USDA-ARS's Scientific Manuscript database
Background/Question/Methods Standardized monitoring data collection efforts using a probabilistic sample design, such as in the Bureau of Land Management’s (BLM) Assessment, Inventory, and Monitoring (AIM) Strategy, provide a core suite of ecological indicators, maximize data collection efficiency,...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, Edward Geoffrey; Shadid, John N.; Cyr, Eric C.
2018-05-01
Here, we report that multiple physical time-scales can arise in electromagnetic simulations when dissipative effects are introduced through boundary conditions, when currents follow external time-scales, and when material parameters vary spatially. In such scenarios, the time-scales of interest may be much slower than the fastest time-scales supported by the Maxwell equations, making implicit time integration an efficient approach. The use of implicit temporal discretizations results in linear systems in which fast time-scales, which severely constrain the stability of an explicit method, can manifest as so-called stiff modes. This study proposes a new block preconditioner for structure-preserving (also termed physics-compatible) discretizations of the Maxwell equations in first-order form. The intent of the preconditioner is to enable the efficient solution of multiple-time-scale Maxwell-type systems. An additional benefit of the developed preconditioner is that it requires only a traditional multigrid method for its subsolves and compares well against alternative approaches that rely on specialized edge-based multigrid routines that may not be readily available. Lastly, results demonstrate parallel scalability at large electromagnetic wave CFL numbers on a variety of test problems.
Le Pichon, Céline; Tales, Évelyne; Belliard, Jérôme; Torgersen, Christian E.
2017-01-01
Spatially intensive sampling by electrofishing is proposed as a method for quantifying spatial variation in fish assemblages at multiple scales along extensive stream sections in headwater catchments. We used this method to sample fish species at 10-m2 points spaced every 20 m throughout 5 km of a headwater stream in France. The spatially intensive sampling design provided information at a spatial resolution and extent that enabled exploration of spatial heterogeneity in fish assemblage structure and aquatic habitat at multiple scales with empirical variograms and wavelet analysis. These analyses were effective for detecting scales of periodicity, trends, and discontinuities in the distribution of species in relation to tributary junctions and obstacles to fish movement. This approach to sampling riverine fishes may be useful in fisheries research and management for evaluating stream fish responses to natural and altered habitats and for identifying sites for potential restoration.
Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems
NASA Astrophysics Data System (ADS)
Koch, Patrick Nathan
Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, enabling concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; (2) statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and (3) noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.
Salganik, Matthew J; Fazito, Dimitri; Bertoni, Neilane; Abdo, Alexandre H; Mello, Maeve B; Bastos, Francisco I
2011-11-15
One of the many challenges hindering the global response to the human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome (AIDS) epidemic is the difficulty of collecting reliable information about the populations most at risk for the disease. Thus, the authors empirically assessed a promising new method for estimating the sizes of most at-risk populations: the network scale-up method. Using 4 different data sources, 2 of which were from other researchers, the authors produced 5 estimates of the number of heavy drug users in Curitiba, Brazil. The authors found that the network scale-up and generalized network scale-up estimators produced estimates 5-10 times higher than estimates made using standard methods (the multiplier method and the direct estimation method using data from 2004 and 2010). Given that equally plausible methods produced such a wide range of results, the authors recommend that additional studies be undertaken to compare estimates based on the scale-up method with those made using other methods. If scale-up-based methods routinely produce higher estimates, this would suggest that scale-up-based methods are inappropriate for populations most at risk of HIV/AIDS or that standard methods may tend to underestimate the sizes of these populations.
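The basic scale-up estimator underlying the method is a one-line calculation: the proportion of hidden-population members among respondents' network alters, scaled to the total population. The numbers below are illustrative only, not the study's data.

```python
# basic network scale-up estimator (illustrative numbers, not the study's)
y_hidden = [2, 0, 1, 3, 0]            # heavy drug users known by each respondent
degree = [150, 90, 200, 310, 120]     # estimated personal network sizes
N_city = 1_800_000                    # city population, order of magnitude only

estimate = sum(y_hidden) / sum(degree) * N_city
print(f"estimated hidden population size: {estimate:,.0f}")
```

The generalized estimator mentioned in the abstract adjusts this ratio for imperfect degree reports and for differential visibility of the hidden population, which is one reason the two scale-up variants can diverge from multiplier-based estimates.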
Determinations of pesticides in food are often complicated by the presence of fats and require multiple cleanup steps before analysis. Cost-effective analytical methods are needed for conducting large-scale exposure studies. We examined two extraction methods, supercritical flu...
Geospatial Optimization of Siting Large-Scale Solar Projects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macknick, Jordan; Quinby, Ted; Caulfield, Emmet
2014-03-01
Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
Spotted Towhee population dynamics in a riparian restoration context
Stacy L. Small; Frank R., III Thompson; Geoffery R. Geupel; John Faaborg
2007-01-01
We investigated factors at multiple scales that might influence nest predation risk for Spotted Towhees (Pipilo maculates) along the Sacramento River, California, within the context of large-scale riparian habitat restoration. We used the logistic-exposure method and Akaike's information criterion (AIC) for model selection to compare predator...
Sequential Superresolution Imaging of Multiple Targets Using a Single Fluorophore
Lidke, Diane S.; Lidke, Keith A.
2015-01-01
Fluorescence superresolution (SR) microscopy, or fluorescence nanoscopy, provides nanometer scale detail of cellular structures and allows for imaging of biological processes at the molecular level. Specific SR imaging methods, such as localization-based imaging, rely on stochastic transitions between on (fluorescent) and off (dark) states of fluorophores. Imaging multiple cellular structures using multi-color imaging is complicated and limited by the differing properties of various organic dyes including their fluorescent state duty cycle, photons per switching event, number of fluorescent cycles before irreversible photobleaching, and overall sensitivity to buffer conditions. In addition, multiple color imaging requires consideration of multiple optical paths or chromatic aberration that can lead to differential aberrations that are important at the nanometer scale. Here, we report a method for sequential labeling and imaging that allows for SR imaging of multiple targets using a single fluorophore with negligible cross-talk between images. Using brightfield image correlation to register and overlay multiple image acquisitions with ~10 nm overlay precision in the x-y imaging plane, we have exploited the optimal properties of AlexaFluor647 for dSTORM to image four distinct cellular proteins. We also visualize the changes in co-localization of the epidermal growth factor (EGF) receptor and clathrin upon EGF addition that are consistent with clathrin-mediated endocytosis. These results are the first to demonstrate sequential SR (s-SR) imaging using direct stochastic reconstruction microscopy (dSTORM), and this method for sequential imaging can be applied to any superresolution technique. PMID:25860558
NASA Astrophysics Data System (ADS)
Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo
2016-07-01
Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems associated with a common coefficient matrix. We improve the performance of iterative solvers for multiple vectors by solving them at the same time, that is, by solving for the product of the matrices. We implemented several iterative methods and compared their performance. The maximum performance on Sparc VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the different convergence processes of linear systems, we introduced a control method to eliminate the calculation of already converged vectors.
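The gain comes from treating the right-hand sides as one matrix and skipping columns whose residuals have already converged. The sketch below uses Jacobi iteration as a stand-in solver to show both ideas; the paper evaluates several iterative methods, and the matrix and data here are toys.

```python
import numpy as np

def jacobi_multi(A, B, tol=1e-8, max_iter=500):
    """Jacobi iteration on many right-hand sides at once (one matrix, many vectors).

    Mirrors the idea of solving the systems as a matrix product while
    dropping columns whose residuals have already converged.
    """
    D = np.diag(A)
    R = A - np.diag(D)
    X = np.zeros_like(B, dtype=float)
    active = np.ones(B.shape[1], dtype=bool)      # columns still iterating
    for _ in range(max_iter):
        X[:, active] = (B[:, active] - R @ X[:, active]) / D[:, None]
        res = np.linalg.norm(B - A @ X, axis=0)
        active = res > tol * np.linalg.norm(B, axis=0)
        if not active.any():
            break
    return X

A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 1.0], [0.0, 1.0, 4.0]])  # diagonally dominant
B = np.random.default_rng(2).normal(size=(3, 8))                    # 8 ensemble RHS vectors
X = jacobi_multi(A, B)
print(np.allclose(A @ X, B, atol=1e-6))
```

Beyond saving iterations on converged columns, updating all vectors through one matrix traversal turns memory-bound matrix-vector products into cache-friendlier matrix-matrix products, which is where much of the reported speedup originates.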
Multi-scale signed envelope inversion
NASA Astrophysics Data System (ADS)
Chen, Guo-Xin; Wu, Ru-Shan; Wang, Yu-Qing; Chen, Sheng-Chang
2018-06-01
Envelope inversion based on the modulation signal model was proposed to reconstruct large-scale structures of underground media. To overcome the shortcomings of conventional envelope inversion, multi-scale envelope inversion was proposed, using a new envelope Fréchet derivative and a multi-scale inversion strategy to invert strong-contrast models. In multi-scale envelope inversion, amplitude demodulation is used to extract low-frequency information from the envelope data. However, amplitude demodulation alone discards the polarity information of the wavefield, increasing the possibility that the inversion admits multiple solutions. In this paper we propose a new demodulation method that retains both the amplitude and the polarity information of the envelope data. We then introduce this demodulation method into multi-scale envelope inversion and propose a new misfit functional: multi-scale signed envelope inversion. In numerical tests, we applied the new inversion method to a salt-layer model and the SEG/EAGE 2-D salt model using a low-cut source (frequency components below 4 Hz were truncated). The results demonstrate the effectiveness of the method.
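One plausible reading of the signed demodulation is to take the magnitude from the analytic signal and reattach the polarity of the original trace. The sketch below implements that reading with SciPy's Hilbert transform; it is an assumption about the operator, not the authors' exact formula.

```python
import numpy as np
from scipy.signal import hilbert

def signed_envelope(trace):
    """Envelope that keeps polarity: magnitude from the analytic signal,
    sign from the original trace (an assumed reading of the abstract's
    demodulation, not the authors' exact operator)."""
    return np.sign(trace) * np.abs(hilbert(trace))

t = np.linspace(0.0, 1.0, 1000)
# two carrier wavelets with opposite polarity, centered at t = 0.3 and t = 0.7
trace = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.3) / 0.05) ** 2)
trace -= np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.7) / 0.05) ** 2)

env_abs = np.abs(hilbert(trace))   # conventional envelope: both lobes positive
env_sgn = signed_envelope(trace)   # signed version distinguishes the two events
# the two lobes now carry opposite signs, unlike the conventional envelope
print(env_abs.min() >= 0, env_sgn.min() < 0)
```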
Development and Validation of a Multiple Intelligences Assessment Scale for Children.
ERIC Educational Resources Information Center
Shearer, C. Branton
Since Howard Gardner proposed the theory of multiple intelligences as an alternative to the unitary concept of general intelligence, educators have been searching for an acceptable method of assessment. To help with this search, three studies that describe the development and validation of a self- (and parent-) report measure of children's…
MSClique: Multiple Structure Discovery through the Maximum Weighted Clique Problem.
Sanroma, Gerard; Penate-Sanchez, Adrian; Alquézar, René; Serratosa, Francesc; Moreno-Noguer, Francesc; Andrade-Cetto, Juan; González Ballester, Miguel Ángel
2016-01-01
We present a novel approach for feature correspondence and multiple structure discovery in computer vision. In contrast to existing methods, we exploit the fact that point-sets on the same structure usually lie close to each other, thus forming clusters in the image. Given a pair of input images, we initially extract points of interest and build hierarchical representations by agglomerative clustering. We use the maximum weighted clique problem to find the set of corresponding clusters with the maximum number of inliers, representing the multiple structures at the correct scales. Our method is parameter-free and only needs two sets of points along with their tentative correspondences, thus being extremely easy to use. We demonstrate the effectiveness of our method in multiple-structure fitting experiments on both publicly available and in-house datasets. As shown in the experiments, our approach finds a higher number of structures containing fewer outliers compared to state-of-the-art methods.
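The selection step maps directly onto the maximum weight clique problem, which networkx solves exactly for small graphs. In the toy version below, node weights stand in for inlier counts and edges for pairwise consistency; the graph and weights are invented.

```python
import networkx as nx

# nodes are candidate cluster correspondences; integer node weights count
# their inliers; edges connect mutually consistent candidates
G = nx.Graph()
G.add_nodes_from([(0, {"weight": 12}), (1, {"weight": 7}),
                  (2, {"weight": 9}), (3, {"weight": 4})])
G.add_edges_from([(0, 2), (0, 3), (2, 3), (1, 3)])

# exact solver; fine at this scale, though the problem is NP-hard in general
clique, total = nx.max_weight_clique(G, weight="weight")
print(clique, total)   # -> nodes 0, 2, 3 with total weight 25
```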
Grayscale inhomogeneity correction method for multiple mosaicked electron microscope images
NASA Astrophysics Data System (ADS)
Zhou, Fangxu; Chen, Xi; Sun, Rong; Han, Hua
2018-04-01
Electron microscope image stitching is highly desirable for acquiring microscopic-resolution images of large target scenes in neuroscience. However, mosaics of multiple electron microscope images may exhibit severe grayscale inhomogeneity owing to instability of the electron microscope system and registration errors, which degrades the visual quality of the mosaicked EM images and complicates follow-up processing such as automatic object recognition. Consequently, a grayscale correction method for multiple mosaicked electron microscope images is indispensable in these areas. Unlike most previous grayscale correction methods, this paper designs a grayscale correction process for multiple EM images that tackles the difficulty of multi-image monochrome correction and achieves grayscale consistency in the overlap regions. We adjust the overall grayscale of the mosaicked images using the location and grayscale information of manually selected seed images, and then fuse local overlap regions between adjacent images using Poisson image editing. Experimental results demonstrate the effectiveness of the proposed method.
Acoustic Treatment Design Scaling Methods. Phase 2
NASA Technical Reports Server (NTRS)
Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.
2003-01-01
The ability to design, build and test miniaturized acoustic treatment panels on scale model fan rigs representative of full scale engines provides not only cost savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing flow effects was evaluated. The free field method was investigated as a potential high frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high frequency goals were encountered in all methods. Results of developing a time-domain finite difference resonator impedance model indicated that a re-interpretation of the empirical fluid mechanical models used in the frequency-domain model for nonlinear resistance and mass reactance may be required. A scale model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.
Quantifying drivers of wild pig movement across multiple spatial and temporal scales.
Kay, Shannon L; Fischer, Justin W; Monaghan, Andrew J; Beasley, James C; Boughton, Raoul; Campbell, Tyler A; Cooper, Susan M; Ditchkoff, Stephen S; Hartley, Steve B; Kilgo, John C; Wisely, Samantha M; Wyckoff, A Christy; VerCauteren, Kurt C; Pepin, Kim M
2017-01-01
The movement behavior of an animal is determined by extrinsic and intrinsic factors that operate at multiple spatio-temporal scales, yet much of our knowledge of animal movement comes from studies that examine only one or two scales concurrently. Understanding the drivers of animal movement across multiple scales is crucial for understanding the fundamentals of movement ecology, predicting changes in distribution, describing disease dynamics, and identifying efficient methods of wildlife conservation and management. We obtained over 400,000 GPS locations of wild pigs from 13 different studies spanning six states in southern U.S.A., and quantified movement rates and home range size within a single analytical framework. We used a generalized additive mixed model framework to quantify the effects of five broad predictor categories on movement: individual-level attributes, geographic factors, landscape attributes, meteorological conditions, and temporal variables. We examined effects of predictors across three temporal scales: daily, monthly, and using all data during the study period. We considered both local environmental factors such as daily weather data and distance to various resources on the landscape, as well as factors acting at a broader spatial scale such as ecoregion and season. We found meteorological variables (temperature and pressure), landscape features (distance to water sources), a broad-scale geographic factor (ecoregion), and individual-level characteristics (sex-age class) drove wild pig movement across all scales, but both the magnitude and shape of covariate relationships to movement differed across temporal scales. The analytical framework we present can be used to assess movement patterns arising from multiple data sources for a range of species while accounting for spatio-temporal correlations. Our analyses show the magnitude by which reaction norms can change based on the temporal scale of response data, illustrating the importance of appropriately defining temporal scales of both the movement response and covariates depending on the intended implications of research (e.g., predicting effects of movement due to climate change versus planning local-scale management). We argue that consideration of multiple spatial scales within the same framework (rather than comparing across separate studies post hoc) gives a more accurate quantification of cross-scale spatial effects by appropriately accounting for error correlation.
Michael C. Dietze; Rodrigo Vargas; Andrew D. Richardson; Paul C. Stoy; Alan G. Barr; Ryan S. Anderson; M. Altaf Arain; Ian T. Baker; T. Andrew Black; Jing M. Chen; Philippe Ciais; Lawrence B. Flanagan; Christopher M. Gough; Robert F. Grant; David Hollinger; R. Cesar Izaurralde; Christopher J. Kucharik; Peter Lafleur; Shugang Liu; Erandathie Lokupitiya; Yiqi Luo; J. William Munger; Changhui Peng; Benjamin Poulter; David T. Price; Daniel M. Ricciuto; William J. Riley; Alok Kumar Sahoo; Kevin Schaefer; Andrew E. Suyker; Hanqin Tian; Christina Tonitto; Hans Verbeeck; Shashi B. Verma; Weifeng Wang; Ensheng Weng
2011-01-01
Ecosystem models are important tools for diagnosing the carbon cycle and projecting its behavior across space and time. Despite the fact that ecosystems respond to drivers at multiple time scales, most assessments of model performance do not discriminate different time scales. Spectral methods, such as wavelet analyses, present an alternative approach that enables the...
An open-access CMIP5 pattern library for temperature and precipitation: Description and methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lynch, Cary D.; Hartin, Corinne A.; Bond-Lamberty, Benjamin
2017-05-15
Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with a global mean temperature increase, allowing for spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squared regression methods. We explore the differences and statistical significance between patterns generated by each method and assess performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest in high latitudes (60-90°N/S). Bias and mean errors between modeled and pattern-predicted output from the linear regression method were smaller than patterns generated by the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that pattern generation methodologies were able to approximate the forced signal of change to within ≤ 0.5°C, but choice of pattern generation methodology for pattern scaling purposes should be informed by user goals and criteria. As a result, this paper describes our library of least squared regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns.
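The least-squares pattern in such a library reduces to a per-grid-cell regression of local change on the global mean temperature. The sketch below uses a synthetic field with an invented amplification map; the delta method would instead difference two epochs and divide by the global change.

```python
import numpy as np

def regression_patterns(local_T, global_T):
    """Least-squares pattern: per-cell slope of local change vs global mean T.

    local_T: (years, ny, nx) annual fields; global_T: (years,) global means.
    Returns the slope map (the 'pattern') and its intercept map.
    """
    g = global_T - global_T.mean()
    y = local_T - local_T.mean(axis=0)
    slope = np.tensordot(g, y, axes=1) / (g ** 2).sum()    # (ny, nx)
    intercept = local_T.mean(axis=0) - slope * global_T.mean()
    return slope, intercept

# synthetic stand-in for one ensemble member, not CMIP5 output
years = 100
global_T = np.linspace(0, 3, years) + np.random.default_rng(3).normal(0, 0.1, years)
amplification = np.array([[1.5, 1.2], [1.0, 0.8]])   # invented: high latitudes warm faster
local_T = amplification[None] * global_T[:, None, None] \
    + np.random.default_rng(4).normal(0, 0.2, (years, 2, 2))

slope, _ = regression_patterns(local_T, global_T)
print(slope)   # recovers approximately the amplification map
```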
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghafarian, M.; Ariaei, A., E-mail: ariaei@eng.ui.ac.ir
A free vibration analysis of a system of multiple rotating nanobeams is presented, applying Eringen's nonlocal elasticity theory. Multiple-nanobeam systems are of great importance in nano-optomechanical applications. At the nanoscale, nonlocal effects become non-negligible. According to the nonlocal Euler-Bernoulli beam theory, the governing partial differential equations are derived by incorporating the nonlocal scale effects. Assuming a structure of n parallel nanobeams, the vibration of the system is described by a coupled set of n partial differential equations. The method involves a change of variables to uncouple the equations and the differential transform method as an efficient mathematical technique to solve the nonlocal governing differential equations. A number of parametric studies are then conducted to assess the effects of the nonlocal scaling parameter, rotational speed, boundary conditions, hub radius, and the stiffness coefficients of the elastic interlayer media on the vibration behavior of the coupled rotating multiple-carbon-nanotube-beam system. It is revealed that the bending vibration of the system is significantly influenced by the rotational speed, elastic media, and the nonlocal scaling parameters. The model is validated by comparing the results with those available in the literature; the natural frequencies are in reasonably good agreement with the reported results.
Farmer, William H.; Over, Thomas M.; Vogel, Richard M.
2015-01-01
Understanding the spatial structure of daily streamflow is essential for managing freshwater resources, especially in poorly-gaged regions. Spatial scaling assumptions are common in flood frequency prediction (e.g., index-flood method) and the prediction of continuous streamflow at ungaged sites (e.g. drainage-area ratio), with simple scaling by drainage area being the most common assumption. In this study, scaling analyses of daily streamflow from 173 streamgages in the southeastern US resulted in three important findings. First, the use of only positive integer moment orders, as has been done in most previous studies, captures only the probabilistic and spatial scaling behavior of flows above an exceedance probability near the median; negative moment orders (inverse moments) are needed for lower streamflows. Second, assessing scaling by using drainage area alone is shown to result in a high degree of omitted-variable bias, masking the true spatial scaling behavior. Multiple regression is shown to mitigate this bias, controlling for regional heterogeneity of basin attributes, especially those correlated with drainage area. Previous univariate scaling analyses have neglected the scaling of low-flow events and may have produced biased estimates of the spatial scaling exponent. Third, the multiple regression results show that mean flows scale with an exponent of one, low flows scale with spatial scaling exponents greater than one, and high flows scale with exponents less than one. The relationship between scaling exponents and exceedance probabilities may be a fundamental signature of regional streamflow. This signature may improve our understanding of the physical processes generating streamflow at different exceedance probabilities.
NASA Astrophysics Data System (ADS)
Chen, Tao; Clauser, Christoph; Marquart, Gabriele; Willbrand, Karen; Hiller, Thomas
2018-02-01
Upscaling permeability of grid blocks is crucial for groundwater models. A novel upscaling method for three-dimensional fractured porous rocks is presented. The objective of the study was to compare this method with the commonly used Oda upscaling method and the volume averaging method. First, the multiple boundary method and its computational framework were defined for three-dimensional stochastic fracture networks. Then, the different upscaling methods were compared for a set of rotated fractures, for tortuous fractures, and for two discrete fracture networks. The results computed by the multiple boundary method are comparable with those of the other two methods and fit best the analytical solution for a set of rotated fractures. The errors in flow rate of the equivalent fracture model decrease when using the multiple boundary method. Furthermore, the errors of the equivalent fracture models increase from well-connected fracture networks to poorly connected ones. Finally, the diagonal components of the equivalent permeability tensors tend to follow a normal or log-normal distribution for the well-connected fracture network model with infinite fracture size. By contrast, they exhibit a power-law distribution for the poorly connected fracture network with multiple scale fractures. The study demonstrates the accuracy and the flexibility of the multiple boundary upscaling concept. This makes it attractive for being incorporated into any existing flow-based upscaling procedures, which helps in reducing the uncertainty of groundwater models.
NASA Astrophysics Data System (ADS)
Tang, Yunwei; Atkinson, Peter M.; Zhang, Jingxiong
2015-03-01
A cross-scale data integration method was developed and tested based on the theory of geostatistics and multiple-point geostatistics (MPG). The goal was to downscale remotely sensed images while retaining spatial structure by integrating images at different spatial resolutions. During the process of downscaling, a rich spatial correlation model in the form of a training image was incorporated to facilitate reproduction of similar local patterns in the simulated images. Area-to-point cokriging (ATPCK) was used as locally varying mean (LVM) (i.e., soft data) to deal with the change of support problem (COSP) for cross-scale integration, which MPG cannot achieve alone. Several pairs of spectral bands of remotely sensed images were tested for integration within different cross-scale case studies. The experiment shows that MPG can restore the spatial structure of the image at a fine spatial resolution given the training image and conditioning data. The super-resolution image can be predicted using the proposed method, which cannot be realised using most data integration methods. The results show that ATPCK-MPG approach can achieve greater accuracy than methods which do not account for the change of support issue.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: (1) time-accurate local time stepping and (2) a high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across the cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Grains of connectivity: analysis at multiple spatial scales in landscape genetics.
Galpern, Paul; Manseau, Micheline; Wilson, Paul
2012-08-01
Landscape genetic analyses are typically conducted at one spatial scale. Considering multiple scales may be essential for identifying landscape features influencing gene flow. We examined landscape connectivity for woodland caribou (Rangifer tarandus caribou) at multiple spatial scales using a new approach based on landscape graphs that creates a Voronoi tessellation of the landscape. To illustrate the potential of the method, we generated five resistance surfaces to explain how landscape pattern may influence gene flow across the range of this population. We tested each resistance surface using a raster at the spatial grain of available landscape data (200 m grid squares). We then used our method to produce up to 127 additional grains for each resistance surface. We applied a causal modelling framework with partial Mantel tests, where evidence of landscape resistance is tested against an alternative hypothesis of isolation-by-distance, and found statistically significant support for landscape resistance to gene flow in 89 of the 507 spatial grains examined. We found evidence that major roads as well as the cumulative effects of natural and anthropogenic disturbance may be contributing to the genetic structure. Using only the original grid surface yielded no evidence for landscape resistance to gene flow. Our results show that using multiple spatial grains can reveal landscape influences on genetic structure that may be overlooked with a single grain, and suggest that coarsening the grain of landcover data may be appropriate for highly mobile species. We discuss how grains of connectivity and related analyses have potential landscape genetic applications in a broad range of systems. © 2012 Blackwell Publishing Ltd.
Multiple time scale analysis of pressure oscillations in solid rocket motors
NASA Astrophysics Data System (ADS)
Ahmed, Waqas; Maqsood, Adnan; Riaz, Rizwan
2018-03-01
In this study, acoustic pressure oscillations for single and coupled longitudinal acoustic modes in a Solid Rocket Motor (SRM) are investigated using the Multiple Time Scales (MTS) method. Two independent time scales are introduced. The oscillations occur on the fast time scale, whereas the amplitude and phase change on the slow time scale. Hopf bifurcation is employed to investigate the properties of the solution. The supercritical bifurcation phenomenon is observed for the linearly unstable system. The amplitude of the oscillations results from equal energy gain and loss rates of the longitudinal acoustic modes. The effects of linear instability and frequency of the longitudinal modes on the amplitude and phase of the oscillations are determined for both single and coupled modes. For both cases, the maximum amplitude of oscillations decreases with the frequency of the acoustic mode and the linear instability of the SRM. The comparison of analytical MTS results and numerical simulations demonstrates excellent agreement.
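To make the slow/fast decomposition concrete, the generic two-time-scale ansatz used in MTS analyses of weakly nonlinear oscillators can be written as follows (an illustrative textbook form with a small parameter ε, not the paper's actual equations):

```latex
% Generic two-time-scale ansatz (illustrative; not the paper's equations).
% T_0 carries the fast oscillation; T_1 = \epsilon t carries the slow
% drift of amplitude and phase.
\begin{align*}
  T_0 &= t, \qquad T_1 = \epsilon t, \qquad 0 < \epsilon \ll 1, \\
  p(t;\epsilon) &\approx p_0(T_0, T_1) + \epsilon\, p_1(T_0, T_1), \\
  \frac{\mathrm{d}}{\mathrm{d}t} &= \frac{\partial}{\partial T_0}
      + \epsilon\, \frac{\partial}{\partial T_1}.
\end{align*}
% Substituting and collecting powers of \epsilon gives an O(1) oscillator
% equation in T_0 and an O(\epsilon) solvability condition, i.e. slow-time
% evolution equations for the amplitude a(T_1) and phase \phi(T_1) whose
% fixed points and Hopf bifurcation set the limit-cycle amplitude.
```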
Efficient species-level monitoring at the landscape scale
Barry R. Noon; Larissa L. Bailey; Thomas D. Sisk; Kevin S. McKelvey
2012-01-01
Monitoring the population trends of multiple animal species at a landscape scale is prohibitively expensive. However, advances in survey design, statistical methods, and the ability to estimate species presence on the basis of detection–nondetection data have greatly increased the feasibility of species-level monitoring. For example, recent advances in monitoring make...
NASA Astrophysics Data System (ADS)
Shen, Wei; Zhao, Kai; Jiang, Yuan; Wang, Yan; Bai, Xiang; Yuille, Alan
2017-11-01
Object skeletons are useful for object representation and object detection. They are complementary to the object contour, and provide extra information, such as how object scale (thickness) varies among object parts. But object skeleton extraction from natural images is very challenging, because it requires the extractor to capture both local and non-local image context in order to determine the scale of each skeleton pixel. In this paper, we present a novel fully convolutional network with multiple scale-associated side outputs to address this problem. By observing the relationship between the receptive field sizes of the different layers in the network and the skeleton scales they can capture, we introduce two scale-associated side outputs to each stage of the network. The network is trained by multi-task learning, where one task is skeleton localization, to classify whether a pixel is a skeleton pixel or not, and the other is skeleton scale prediction, to regress the scale of each skeleton pixel. Supervision is imposed at different stages by guiding the scale-associated side outputs toward the ground-truth skeletons at the appropriate scales. The responses of the multiple scale-associated side outputs are then fused in a scale-specific way to detect skeleton pixels at multiple scales effectively. Our method achieves promising results on two skeleton extraction datasets, and significantly outperforms other competitors. Additionally, the usefulness of the obtained skeletons and scales (thickness) is verified on two object detection applications: foreground object segmentation and object proposal detection.
NASA Astrophysics Data System (ADS)
Gong, L.
2013-12-01
Large-scale hydrological models and land surface models are by far the only tools for assessing future water resources in climate change impact studies. Those models estimate discharge with large uncertainties, due to the complex interaction between climate and hydrology, the limited quality and availability of data, as well as model uncertainties. A new, purely data-based scale-extrapolation method is proposed to estimate water resources for a large basin solely from selected small sub-basins, which are typically two orders of magnitude smaller than the large basin. Those small sub-basins contain sufficient information, not only on climate and land surface, but also on hydrological characteristics, for the large basin. In the Baltic Sea drainage basin, the best discharge estimation for the gauged area was achieved with sub-basins that cover 2-4% of the gauged area. There exist multiple sets of sub-basins that resemble the climate and hydrology of the basin equally well. Those multiple sets estimate annual discharge for the gauged area consistently well, with a 5% average error. The scale-extrapolation method is completely data-based; therefore it does not force any modelling error into the prediction. The multiple predictions are expected to bracket the inherent variations and uncertainties of the climate and hydrology of the basin. The method can be applied in both un-gauged basins and un-gauged periods with uncertainty estimation.
NASA Astrophysics Data System (ADS)
Seeley, M.; Walther, B. D.
2016-02-01
Atlantic tarpon, Megalops atlanticus, are highly migratory euryhaline predators that occupy different habitats throughout ontogeny. Specifically, Atlantic tarpon are known to inhabit oligohaline waters, although the frequency and duration of movements across estuarine gradients into these waters are relatively unknown. This species supports an industry worth over two billion dollars within the Gulf of Mexico and is currently listed as vulnerable by the International Union for the Conservation of Nature (IUCN). A new non-lethal method for reconstructing migrations across estuaries relies on trace element and stable isotope compositions of growth increments in scales. We analyzed Atlantic tarpon scales from the Texas coast to validate this method, using inductively coupled plasma mass spectrometry (ICP-MS) for trace elements and isotope ratio mass spectrometry (IR-MS) for stable isotope ratios. Multiple scales were also taken from the same individual to confirm the consistency of elemental uptake within an individual. Results show that scale Ba:Ca, Sr:Ca and δ13C are effective proxies for salinity, while enrichments in δ15N are consistent with known ontogenetic trophic shifts. In addition, chemical transects across multiple scales from the same individual were highly consistent, suggesting that any non-regenerated scale removed from a fish can provide an equivalent time series. Continuous life history profiles of scales were obtained via laser ablation transects of scale cross-sections to quantify trace element concentrations from the core (youngest increments) to the edge (oldest increments). Stable isotope and trace element results together indicate that behavior is highly variable between individuals, with some but not all fish transiting estuarine gradients into oligohaline waters. Our findings will provide novel opportunities to investigate alternative non-lethal methods for monitoring fish migrations across chemical gradients.
An approach to multiscale modelling with graph grammars.
Ong, Yongzhi; Streit, Katarína; Henke, Michael; Kurth, Winfried
2014-09-01
Functional-structural plant models (FSPMs) simulate biological processes at different spatial scales. Methods exist for multiscale data representation and modification, but the advantages of using multiple scales in the dynamic aspects of FSPMs remain unclear. Results from multiscale models in various other areas of science that share fundamental modelling issues with FSPMs suggest that potential advantages do exist, and this study therefore aims to introduce an approach to multiscale modelling in FSPMs. A three-part graph data structure and grammar is revisited, and presented with a conceptual framework for multiscale modelling. The framework is used for identifying roles, categorizing and describing scale-to-scale interactions, thus allowing alternative approaches to model development as opposed to correlation-based modelling at a single scale. Reverse information flow (from macro- to micro-scale) is catered for in the framework. The methods are implemented within the programming language XL. Three example models are implemented using the proposed multiscale graph model and framework. The first illustrates the fundamental usage of the graph data structure and grammar, the second uses probabilistic modelling for organs at the fine scale in order to derive crown growth, and the third combines multiscale plant topology with ozone trends and metabolic network simulations in order to model juvenile beech stands under exposure to a toxic trace gas. The graph data structure supports data representation and grammar operations at multiple scales. The results demonstrate that multiscale modelling is a viable method in FSPM and an alternative to correlation-based modelling. Advantages and disadvantages of multiscale modelling are illustrated by comparisons with single-scale implementations, leading to motivations for further research in sensitivity analysis and run-time efficiency for these models.
Methods for measuring denitrification: Diverse approaches to a difficult problem
Groffman, Peter M; Altabet, Mary A.; Böhlke, J.K.; Butterbach-Bahl, Klaus; David, Mary B.; Firestone, Mary K.; Giblin, Anne E.; Kana, Todd M.; Nielsen , Lars Peter; Voytek, Mary A.
2006-01-01
Denitrification, the reduction of the nitrogen (N) oxides, nitrate (NO3−) and nitrite (NO2−), to the gases nitric oxide (NO), nitrous oxide (N2O), and dinitrogen (N2), is important to primary production, water quality, and the chemistry and physics of the atmosphere at ecosystem, landscape, regional, and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments and discuss the strengths, weaknesses, and future prospects for the different methods. Methodological approaches covered include (1) acetylene-based methods, (2) 15N tracers, (3) direct N2 quantification, (4) N2:Ar ratio quantification, (5) mass balance approaches, (6) stoichiometric approaches, (7) methods based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass balance and stoichiometric approaches that constrain estimates of denitrification at large scales with point measurements (made using multiple methods), in multiple systems, is likely to propel more improvement in denitrification methods over the next few years.
A Bayesian method for assessing multiscale species-habitat relationships
Stuber, Erica F.; Gruber, Lutz F.; Fontaine, Joseph J.
2017-01-01
Context: Scientists face several theoretical and methodological challenges in appropriately describing fundamental wildlife-habitat relationships in models. The spatial scales of habitat relationships are often unknown, and are expected to follow a multi-scale hierarchy. Typical frequentist or information theoretic approaches often suffer under collinearity in multi-scale studies, fail to converge when models are complex, or represent an intractable computational burden when candidate model sets are large. Objectives: Our objective was to implement an automated, Bayesian method for inference on the spatial scales of habitat variables that best predict animal abundance. Methods: We introduce Bayesian latent indicator scale selection (BLISS), a Bayesian method to select spatial scales of predictors using latent scale indicator variables that are estimated with reversible-jump Markov chain Monte Carlo sampling. BLISS does not suffer from collinearity, and substantially reduces the computation time of studies. We present a simulation study to validate our method and apply our method to a case study of land cover predictors for ring-necked pheasant (Phasianus colchicus) abundance in Nebraska, USA. Results: Our method returns accurate descriptions of the explanatory power of multiple spatial scales, and unbiased and precise parameter estimates under commonly encountered data limitations including spatial scale autocorrelation, effect size, and sample size. BLISS outperforms commonly used model selection methods including stepwise and AIC, and reduces runtime by 90%. Conclusions: Given the pervasiveness of scale-dependency in ecology, and the implications of mismatches between the scales of analyses and ecological processes, identifying the spatial scales over which species are integrating habitat information is an important step in understanding species-habitat relationships. BLISS is a widely applicable method for identifying important spatial scales, propagating scale uncertainty, and testing hypotheses of scaling relationships.
Spatially extended hybrid methods: a review
2018-01-01
Many biological and physical systems exhibit behaviour at multiple spatial, temporal or population scales. Multiscale processes provide challenges when they are to be simulated using numerical techniques. While coarser methods such as partial differential equations are typically fast to simulate, they lack the individual-level detail that may be required in regions of low concentration or small spatial scale. However, to simulate at such an individual level throughout a domain and in regions where concentrations are high can be computationally expensive. Spatially coupled hybrid methods provide a bridge, allowing for multiple representations of the same species in one spatial domain by partitioning space into distinct modelling subdomains. Over the past 20 years, such hybrid methods have risen to prominence, leading to what is now a very active research area across multiple disciplines including chemistry, physics and mathematics. There are three main motivations for undertaking this review. Firstly, we have collated a large number of spatially extended hybrid methods and presented them in a single coherent document, while comparing and contrasting them, so that anyone who requires a multiscale hybrid method will be able to find the most appropriate one for their need. Secondly, we have provided canonical examples with algorithms and accompanying code, serving to demonstrate how these types of methods work in practice. Finally, we have presented papers that employ these methods on real biological and physical problems, demonstrating their utility. We also consider some open research questions in the area of hybrid method development and the future directions for the field. PMID:29491179
Conjugate-Gradient Algorithms For Dynamics Of Manipulators
NASA Technical Reports Server (NTRS)
Fijany, Amir; Scheid, Robert E.
1993-01-01
Algorithms for serial and parallel computation of forward dynamics of multiple-link robotic manipulators by conjugate-gradient method developed. Parallel algorithms have potential for speedup of computations on multiple linked, specialized processors implemented in very-large-scale integrated circuits. Such processors used to simulate dynamics, possibly faster than in real time, for purposes of planning and control.
ERIC Educational Resources Information Center
Si, Yajuan; Reiter, Jerome P.
2013-01-01
In many surveys, the data comprise a large number of categorical variables that suffer from item nonresponse. Standard methods for multiple imputation, like log-linear models or sequential regression imputation, can fail to capture complex dependencies and can be difficult to implement effectively in high dimensions. We present a fully Bayesian,…
Integrating computational methods to retrofit enzymes to synthetic pathways.
Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula
2012-02-01
Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.
Accurate age estimation in small-scale societies.
Diekmann, Yoan; Smith, Daniel; Gerbault, Pascale; Dyble, Mark; Page, Abigail E; Chaudhary, Nikhil; Migliano, Andrea Bamberg; Thomas, Mark G
2017-08-01
Precise estimation of age is essential in evolutionary anthropology, especially to infer population age structures and understand the evolution of human life history diversity. However, in small-scale societies, such as hunter-gatherer populations, time is often not referred to in calendar years, and accurate age estimation remains a challenge. We address this issue by proposing a Bayesian approach that accounts for age uncertainty inherent to fieldwork data. We developed a Gibbs sampling Markov chain Monte Carlo algorithm that produces posterior distributions of ages for each individual, based on a ranking order of individuals from youngest to oldest and age ranges for each individual. We first validate our method on 65 Agta foragers from the Philippines with known ages, and show that our method generates age estimations that are superior to previously published regression-based approaches. We then use data on 587 Agta collected during recent fieldwork to demonstrate how multiple partial age ranks coming from multiple camps of hunter-gatherers can be integrated. Finally, we exemplify how the distributions generated by our method can be used to estimate important demographic parameters in small-scale societies: here, age-specific fertility patterns. Our flexible Bayesian approach will be especially useful to improve cross-cultural life history datasets for small-scale societies for which reliable age records are difficult to acquire.
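A minimal sketch of the core Gibbs step, assuming a known youngest-to-oldest ordering and invented age ranges (the published algorithm additionally integrates multiple partial rank orders across camps):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical field data: individuals ordered youngest -> oldest, each with
# an elicited age range in years (all values invented for illustration).
ranges = [(12, 18), (15, 22), (20, 30), (27, 40), (35, 55)]
n = len(ranges)

ages = np.array([(lo + hi) / 2 for lo, hi in ranges], dtype=float)
samples = []

for sweep in range(5000):
    for i in range(n):
        lo, hi = ranges[i]
        # The rank order constrains each age to lie between its neighbours'
        # current values, intersected with the individual's own range.
        lower = max(lo, ages[i - 1]) if i > 0 else lo
        upper = min(hi, ages[i + 1]) if i < n - 1 else hi
        if lower < upper:                     # skip degenerate intervals
            ages[i] = rng.uniform(lower, upper)
    if sweep >= 1000:                         # discard burn-in sweeps
        samples.append(ages.copy())

post = np.array(samples)
for i, (m, s) in enumerate(zip(post.mean(0), post.std(0))):
    print(f"individual {i}: posterior age {m:.1f} +/- {s:.1f} years")
```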
Scale-dependent intrinsic entropies of complex time series.
Yeh, Jia-Rong; Peng, Chung-Kang; Huang, Norden E
2016-04-13
Multi-scale entropy (MSE) was developed as a measure of complexity for complex time series, and it has been applied widely in recent years. The MSE algorithm is based on the assumption that biological systems possess the ability to adapt and function in an ever-changing environment, and these systems need to operate across multiple temporal and spatial scales, such that their complexity is also multi-scale and hierarchical. Here, we present a systematic approach to apply the empirical mode decomposition algorithm, which can detrend time series on various time scales, prior to analysing a signal's complexity by measuring the irregularity of its dynamics on multiple time scales. Simulated time series of fractal Gaussian noise and human heartbeat time series were used to study the performance of this new approach. We show that our method can successfully quantify the fractal properties of the simulated time series and can accurately distinguish modulations in human heartbeat time series in health and disease. © 2016 The Author(s).
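For orientation, a small sketch of the coarse-graining plus sample-entropy core on which MSE rests (the paper's contribution, empirical-mode-decomposition detrending before this step, is omitted; series length and parameters are illustrative):

```python
import numpy as np

def coarse_grain(x, tau):
    """Non-overlapping averages of length tau (standard MSE coarse-graining)."""
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.15):
    """-ln(#matches of length m+1 / #matches of length m) within tolerance r."""
    r = r_factor * np.std(x)
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return np.sum(d[np.triu_indices(len(t), k=1)] <= r)  # pairs i < j
    a, b = matches(m + 1), matches(m)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

rng = np.random.default_rng(1)
signal = rng.standard_normal(1000)          # white-noise test series
for tau in (1, 2, 4, 8):
    print(tau, round(sample_entropy(coarse_grain(signal, tau)), 3))
```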
Quantification of Treatment Effect Modification on Both an Additive and Multiplicative Scale
Girerd, Nicolas; Rabilloud, Muriel; Pibarot, Philippe; Mathieu, Patrick; Roy, Pascal
2016-01-01
Background: In both observational and randomized studies, associations with overall survival are by and large assessed on a multiplicative scale using the Cox model. However, clinicians and clinical researchers have an ardent interest in assessing the absolute benefit associated with treatments. In older patients, some studies have reported a lower relative treatment effect, which might translate into a similar or even greater absolute treatment effect given their high baseline hazard for clinical events. Methods: The effect of treatment and the effect modification of treatment were respectively assessed using a multiplicative and an additive hazard model in an analysis adjusted for propensity score in the context of coronary surgery. Results: The multiplicative model yielded a lower relative hazard reduction with bilateral internal thoracic artery grafting in older patients (hazard ratio for interaction/year = 1.03, 95% CI: 1.00 to 1.06, p = 0.05), whereas the additive model reported a similar absolute hazard reduction with increasing age (delta for interaction/year = 0.10, 95% CI: -0.27 to 0.46, p = 0.61). The number needed to treat derived from the propensity score-adjusted multiplicative model was remarkably similar at the end of the follow-up in patients aged ≤60 and in patients >70. Conclusions: The present example demonstrates that a lower treatment effect in older patients on a relative scale can conversely translate into a similar treatment effect on an additive scale due to large baseline hazard differences. Importantly, absolute risk reduction, either crude or adjusted, can be calculated from multiplicative survival models. We advocate for a wider use of the absolute scale, especially using additive hazard models, to assess treatment effect and treatment effect modification. PMID:27045168
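A toy calculation, with invented hazards rather than the study's data, shows how a constant relative effect corresponds to very different absolute effects when baseline risks differ:

```python
import math

hr = 0.80                     # hypothetical hazard ratio, identical at all ages
for label, h0 in [("age <= 60", 0.02), ("age > 70", 0.08)]:   # events/year
    h1 = hr * h0
    risk0 = 1 - math.exp(-5 * h0)          # 5-year risk, exponential model
    risk1 = 1 - math.exp(-5 * h1)
    arr = risk0 - risk1                    # absolute risk reduction
    print(f"{label}: additive hazard diff = {h0 - h1:.3f}/yr, "
          f"5-yr ARR = {arr:.3f}, NNT = {1 / arr:.0f}")
```

With these numbers the hazard ratio is the same in both groups, yet the older group's absolute risk reduction is roughly three times larger, which is exactly the kind of divergence between scales the abstract describes.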
Liu, Huiyu; Zhang, Mingyang; Lin, Zhenshan
2017-10-05
Climate changes are considered to significantly impact net primary productivity (NPP). However, there are few studies on how climate changes at multiple time scales impact NPP. Using the MODIS NPP product and station-based observations of sunshine duration, annual average temperature and annual precipitation, the impacts of climate changes at different time scales on annual NPP have been studied with the EEMD (ensemble empirical mode decomposition) method in the Karst area of northwest Guangxi, China, during 2000-2013. Moreover, with a partial least squares regression (PLSR) model, the relative importance of climatic variables for annual NPP has been explored. The results show that (1) only at the quasi-3-year time scale do sunshine duration and temperature have significantly positive relations with NPP; (2) annual precipitation has no significant relation to NPP by direct comparison, but a significantly positive relation at the 5-year time scale, because the 5-year time scale is not the dominant scale of precipitation; (3) the changes of NPP may be dominated by inter-annual variability; and (4) multiple-time-scale analysis greatly improves the performance of the PLSR model for estimating NPP. The variable importance in projection (VIP) scores of sunshine duration and temperature at the quasi-3-year time scale, and of precipitation at the quasi-5-year time scale, are greater than 0.8, indicating that they were important for NPP during 2000-2013. However, sunshine duration and temperature at the quasi-3-year time scale are much more important. Our results underscore the importance of multiple-time-scale analysis for revealing the relations of NPP to a changing climate.
An open-access CMIP5 pattern library for temperature and precipitation: description and methodology
NASA Astrophysics Data System (ADS)
Lynch, Cary; Hartin, Corinne; Bond-Lamberty, Ben; Kravitz, Ben
2017-05-01
Pattern scaling is used to efficiently emulate general circulation models and explore uncertainty in climate projections under multiple forcing scenarios. Pattern scaling methods assume that local climate changes scale with a global mean temperature increase, allowing spatial patterns to be generated for multiple models for any future emission scenario. For uncertainty quantification and probabilistic statistical analysis, a library of patterns with descriptive statistics for each file would be beneficial, but such a library does not presently exist. Of the possible techniques used to generate patterns, the two most prominent are the delta and least squares regression methods. We explore the differences and statistical significance between patterns generated by each method and assess the performance of the generated patterns across methods and scenarios. Differences in patterns across seasons between methods and epochs were largest at high latitudes (60-90° N/S). Bias and mean errors between modeled and pattern-predicted output were smaller for the linear regression method than for the delta method. Across scenarios, differences in the linear regression method patterns were more statistically significant, especially at high latitudes. We found that pattern generation methodologies were able to approximate the forced signal of change to within ≤ 0.5 °C, but the choice of pattern generation methodology for pattern scaling purposes should be informed by user goals and criteria. This paper describes our library of least squares regression patterns from all CMIP5 models for temperature and precipitation on an annual and sub-annual basis, along with the code used to generate these patterns. The dataset and netCDF data generation code are available at doi:10.5281/zenodo.495632.
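A compact sketch of the two pattern-generation techniques on synthetic data (array sizes, noise levels and the linear truth are all invented; real patterns are computed per grid cell and season from CMIP5 output):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical GCM output: 40 years x 10 grid cells.
years, ncell = 40, 10
gmt = np.linspace(0.0, 1.2, years) + 0.05 * rng.standard_normal(years)
true_slope = rng.uniform(0.5, 2.0, ncell)        # local amplification factors
local = gmt[:, None] * true_slope + 0.1 * rng.standard_normal((years, ncell))

# Least-squares method: per-cell regression of local change on global mean T.
X = np.column_stack([np.ones(years), gmt])
coef, *_ = np.linalg.lstsq(X, local, rcond=None)
pattern_reg = coef[1]                            # degC local per degC global

# Delta method: epoch-difference ratio between first and last decade means.
pattern_delta = (local[-10:].mean(0) - local[:10].mean(0)) / \
                (gmt[-10:].mean() - gmt[:10].mean())

print("regression error:", np.round(pattern_reg - true_slope, 2))
print("delta error:     ", np.round(pattern_delta - true_slope, 2))
```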
NASA Astrophysics Data System (ADS)
Nanjo, K.; Izutsu, J.; Orihara, Y.; Furuse, N.; Togo, S.; Nitta, H.; Okada, T.; Tanaka, R.; Kamogawa, M.; Nagao, T.
2016-12-01
We show the first results of recognizing seismic patterns as possible precursory episodes to the 2016 Kumamoto earthquakes, using four existing methods: the b-value method (e.g., Schorlemmer and Wiemer, 2005; Nanjo et al., 2012), two kinds of seismic quiescence evaluation methods (the RTM-algorithm, Nagao et al., 2011; the Z-value method, Wiemer and Wyss, 1994), and foreshock seismic density analysis based on Lippiello et al. (2012). We used the earthquake catalog maintained by the Japan Meteorological Agency (JMA). To ensure data quality, we performed a catalog completeness check as a pre-processing step for the individual analyses. Our finding indicates that the methods we adopted do not allow the Kumamoto earthquakes to be predicted exactly. However, we found that the spatial extent of possible precursory patterns differs from one method to another and ranges from local scales (typically asperity size) to regional scales (e.g., 2° × 3° around the source zone). The earthquakes are preceded by periods of pronounced anomalies, which lasted from decadal scales (e.g., 20 years or longer) to yearly scales (e.g., 1-2 years). Our results demonstrate that a combination of multiple methods detects different signals prior to the Kumamoto earthquakes with considerably more reliability than any single method. This strongly suggests great potential for narrowing down the possible future sites of earthquakes relative to long-term seismic hazard assessment. This study was partly supported by MEXT under its Earthquake and Volcano Hazards Observation and Research Program and Grant-in-Aid for Scientific Research (C), No. 26350483, 2014-2017, by Chubu University under the Collaboration Research Program of IDEAS, IDEAS201614, and by Tokai University under Project Research of IORD. A part of this presentation is given in Nanjo et al. (2016, submitted).
Score Calculation in Informatics Contests Using Multiple Criteria Decision Methods
ERIC Educational Resources Information Center
Skupiene, Jurate
2011-01-01
The Lithuanian Informatics Olympiad is a problem solving contest for high school students. The work of each contestant is evaluated in terms of several criteria, where each criterion is measured according to its own scale (but the same scale for each contestant). Several jury members are involved in the evaluation. This paper analyses the problem…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schwerdtfeger, Christine A.; Soudackov, Alexander V.; Hammes-Schiffer, Sharon, E-mail: shs3@illinois.edu
2014-01-21
The development of efficient theoretical methods for describing electron transfer (ET) reactions in condensed phases is important for a variety of chemical and biological applications. Previously, dynamical dielectric continuum theory was used to derive Langevin equations for a single collective solvent coordinate describing ET in a polar solvent. In this theory, the parameters are directly related to the physical properties of the system and can be determined from experimental data or explicit molecular dynamics simulations. Herein, we combine these Langevin equations with surface hopping nonadiabatic dynamics methods to calculate the rate constants for thermal ET reactions in polar solvents for a wide range of electronic couplings and reaction free energies. Comparison of explicit and implicit solvent calculations illustrates that the mapping from explicit to implicit solvent models is valid even for solvents exhibiting complex relaxation behavior with multiple relaxation time scales and a short-time inertial response. The rate constants calculated for implicit solvent models with a single solvent relaxation time scale corresponding to water, acetonitrile, and methanol agree well with analytical theories in the Golden rule and solvent-controlled regimes, as well as in the intermediate regime. The implicit solvent models with two relaxation time scales are in qualitative agreement with the analytical theories but quantitatively overestimate the rate constants compared to these theories. Analysis of these simulations elucidates the importance of multiple relaxation time scales and the inertial component of the solvent response, as well as potential shortcomings of the analytical theories based on single time scale solvent relaxation models. This implicit solvent approach will enable the simulation of a wide range of ET reactions via the stochastic dynamics of a single collective solvent coordinate with parameters that are relevant to experimentally accessible systems.
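As a rough illustration of the stochastic dynamics of a single collective solvent coordinate, here is an Euler-Maruyama integration of an overdamped Langevin equation on one harmonic diabatic surface (all parameters are invented and in reduced units; the actual theory couples such a coordinate to surface hopping between electronic states):

```python
import numpy as np

rng = np.random.default_rng(3)

tau, kT, lam, dt = 1.0, 1.0, 2.0, 1e-3   # relaxation time, temperature,
z, traj = 1.0, []                        # surface stiffness, time step

for step in range(20000):
    force = -2.0 * lam * z                          # -dV/dz with V(z) = lam*z**2
    noise = np.sqrt(2.0 * kT * dt / tau) * rng.standard_normal()
    z += dt * force / tau + noise                   # Euler-Maruyama update
    traj.append(z)

print("mean z^2 =", np.mean(np.array(traj[5000:]) ** 2),
      " (Boltzmann value kT/(2*lam) =", kT / (2 * lam), ")")
```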
[Effect of preventive treatment on cognitive performance in patients with multiple sclerosis].
Shorobura, Maria S
2018-01-01
Introduction: Cognitive, emotional and psychopathological changes play a significant role in the clinical picture of multiple sclerosis and influence the effectiveness of drug therapy, working capacity, quality of life, and the process of rehabilitation of patients with multiple sclerosis. The aim: to investigate changes in cognitive function in patients with multiple sclerosis, such as information processing speed and working memory, before and after treatment with an immunomodulating drug. Materials and methods: 33 patients with a reliable diagnosis of multiple sclerosis, who underwent preventive examinations and treatment from 2012 to 2016, were examined. All patients with multiple sclerosis underwent a clinical-neurological examination (neurological status using the EDSS scale), and cognitive status was evaluated using the PASAT auditory test. Patient screening was performed before, during and after therapy. Statistical analysis of the results was performed in Statistica 8.0, using Student's t-test (t), the Mann-Whitney test (Z), Pearson and Spearman correlation coefficients (r, R), the Wilcoxon criterion (T), and the Chi-square test (X²). Results: Greater age in patients with multiple sclerosis was associated with higher EDSS scale scores and lower PASAT scores before treatment. Duration of illness affected the EDSS scale score and PASAT performance. PASAT scores did not decrease significantly over the course of treatment. Conclusions: Glatiramer acetate has a positive effect on cognitive function, information processing speed and working memory in patients with multiple sclerosis, which is one of the important components of the therapeutic effect of this drug.
Satellite-Scale Snow Water Equivalent Assimilation into a High-Resolution Land Surface Model
NASA Technical Reports Server (NTRS)
De Lannoy, Gabrielle J.M.; Reichle, Rolf H.; Houser, Paul R.; Arsenault, Kristi R.; Verhoest, Niko E.C.; Pauwels, Valentijn R.N.
2009-01-01
An ensemble Kalman filter (EnKF) is used in a suite of synthetic experiments to assimilate coarse-scale (25 km) snow water equivalent (SWE) observations (typical of satellite retrievals) into fine-scale (1 km) model simulations. Coarse-scale observations are assimilated directly using an observation operator for mapping between the coarse and fine scales or, alternatively, after disaggregation (re-gridding) to the fine-scale model resolution prior to data assimilation. In either case observations are assimilated either simultaneously or independently for each location. Results indicate that assimilating disaggregated fine-scale observations independently (method 1D-F1) is less efficient than assimilating a collection of neighboring disaggregated observations (method 3D-Fm). Direct assimilation of coarse-scale observations is superior to a priori disaggregation. Independent assimilation of individual coarse-scale observations (method 3D-C1) can bring the overall mean analyzed field close to the truth, but does not necessarily improve estimates of the fine-scale structure. There is a clear benefit to simultaneously assimilating multiple coarse-scale observations (method 3D-Cm) even as the entire domain is observed, indicating that underlying spatial error correlations can be exploited to improve SWE estimates. Method 3D-Cm avoids artificial transitions at the coarse observation pixel boundaries and can reduce the RMSE by 60% when compared to the open loop in this study.
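A toy one-dimensional version of direct coarse-scale assimilation, with an observation operator that averages fine pixels into one coarse value (domain size, ensemble size and error statistics are invented):

```python
import numpy as np

rng = np.random.default_rng(4)

nfine, nens, r_obs = 25, 50, 5.0 ** 2     # 25 fine "1 km" pixels, obs variance
H = np.full((1, nfine), 1.0 / nfine)      # coarse obs = mean of fine pixels

truth = 100 + 10 * np.sin(np.linspace(0, np.pi, nfine))
y = H @ truth + rng.normal(0, np.sqrt(r_obs))        # synthetic coarse SWE obs

# Prior ensemble: biased mean plus a common mode and pixel-scale noise.
ens = 90 + 10 * rng.standard_normal((nens, 1)) * np.ones(nfine) \
         + 3 * rng.standard_normal((nens, nfine))

A = ens - ens.mean(0)                     # state anomalies
Hx = ens @ H.T                            # ensemble in observation space
HA = Hx - Hx.mean(0)
Pyy = (HA.T @ HA) / (nens - 1) + r_obs    # innovation covariance
Pxy = (A.T @ HA) / (nens - 1)             # state-obs cross covariance
K = Pxy / Pyy                             # Kalman gain (nfine x 1)

y_pert = y + rng.normal(0, np.sqrt(r_obs), (nens, 1))  # perturbed-obs EnKF
ens_a = ens + (y_pert - Hx) @ K.T

print("prior RMSE:    ", np.sqrt(np.mean((ens.mean(0) - truth) ** 2)))
print("posterior RMSE:", np.sqrt(np.mean((ens_a.mean(0) - truth) ** 2)))
```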
ERIC Educational Resources Information Center
Sussman, Joan E.; Tjaden, Kris
2012-01-01
Purpose: The primary purpose of this study was to compare percent correct word and sentence intelligibility scores for individuals with multiple sclerosis (MS) and Parkinson's disease (PD) with scaled estimates of speech severity obtained for a reading passage. Method: Speech samples for 78 talkers were judged, including 30 speakers with MS, 16…
ERIC Educational Resources Information Center
Tjaden, Kris; Sussman, Joan E.; Wilding, Gregory E.
2014-01-01
Purpose: The perceptual consequences of rate reduction, increased vocal intensity, and clear speech were studied in speakers with multiple sclerosis (MS), Parkinson's disease (PD), and healthy controls. Method: Seventy-eight speakers read sentences in habitual, clear, loud, and slow conditions. Sentences were equated for peak amplitude and…
Multiscale Medical Image Fusion in Wavelet Domain
Khare, Ashish
2013-01-01
Wavelet transforms have emerged as a powerful tool in image fusion. However, the study and analysis of medical image fusion is still a challenging area of research. Therefore, in this paper, we propose a multiscale fusion of multimodal medical images in wavelet domain. Fusion of medical images has been performed at multiple scales varying from minimum to maximum level using maximum selection rule which provides more flexibility and choice to select the relevant fused images. The experimental analysis of the proposed method has been performed with several sets of medical images. Fusion results have been evaluated subjectively and objectively with existing state-of-the-art fusion methods which include several pyramid- and wavelet-transform-based fusion methods and principal component analysis (PCA) fusion method. The comparative analysis of the fusion results has been performed with edge strength (Q), mutual information (MI), entropy (E), standard deviation (SD), blind structural similarity index metric (BSSIM), spatial frequency (SF), and average gradient (AG) metrics. The combined subjective and objective evaluations of the proposed fusion method at multiple scales showed the effectiveness and goodness of the proposed approach. PMID:24453868
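A minimal wavelet-domain fusion sketch using the maximum selection rule on two registered single-channel images (PyWavelets supplies wavedec2/waverec2; the multi-level evaluation and quality metrics of the paper are not reproduced here):

```python
import numpy as np
import pywt

def fuse_max(img_a, img_b, wavelet="db2", level=3):
    """Keep the larger-magnitude wavelet coefficient at each position/scale."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [np.where(np.abs(ca[0]) >= np.abs(cb[0]), ca[0], cb[0])]
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(p) >= np.abs(q), p, q)
                           for p, q in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)

# Synthetic demo: each "modality" contains a structure the other lacks.
a = np.zeros((64, 64)); a[16:32, 16:32] = 1.0
b = np.zeros((64, 64)); b[40:56, 40:56] = 1.0
fused = fuse_max(a, b)
print(fused.shape, float(fused.max()))
```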
Modeling potential evapotranspiration of two forested watersheds in the southern Appalachians
L.Y. Rao; G. Sun; C.R. Ford; J.M. Vose
2011-01-01
Global climate change has direct impacts on watershed hydrology through altering evapotranspiration (ET) processes at multiple scales. There are many methods to estimate forest ET with models, but the most practical and the most popular one is the potential ET (PET) based method. However, the choice of PET methods for AET estimation remains challenging. This study...
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-12-26
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit resolution after the first 12-bit A/D conversion, reducing the noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform the complex calculations required for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB.
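The one-over-square-root-of-n noise claim is easy to check numerically; the pixel level and single-conversion noise below are invented values, not the chip's measured figures:

```python
import numpy as np

rng = np.random.default_rng(5)

true_pixel_mv = 500.0          # hypothetical pixel output
sigma_mv = 0.85                # assumed single-conversion random noise (mV)

for n in (1, 2, 4, 8, 16):
    # 100k trials of averaging n independent conversions of the same pixel.
    reads = true_pixel_mv + sigma_mv * rng.standard_normal((100_000, n))
    measured = reads.mean(axis=1).std()
    print(f"n={n:2d}: measured {measured:.3f} mV, "
          f"theory {sigma_mv / np.sqrt(n):.3f} mV")
```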
Optical power transfer and communication methods for wireless implantable sensing platforms.
Mujeeb-U-Rahman, Muhammad; Adalian, Dvin; Chang, Chieh-Feng; Scherer, Axel
2015-09-01
Ultrasmall scale implants have recently attracted focus as valuable tools for monitoring both acute and chronic diseases. Semiconductor optical technologies are the key to miniaturizing these devices to the long-sought sub-mm scale, which will enable long-term use of these devices for medical applications. This can also enable the use of multiple implantable devices concurrently to form a true body area network of sensors. We demonstrate optical power transfer techniques and methods to effectively harness this power for implantable devices. Furthermore, we also present methods for optical data transfer from such implants. Simultaneous use of these technologies can result in miniaturized sensing platforms that can allow for large-scale use of such systems in real world applications.
DistributedFBA.jl: High-level, high-performance flux balance analysis in Julia
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heirendt, Laurent; Thiele, Ines; Fleming, Ronan M. T.
2017-01-16
Flux balance analysis and its variants are widely used methods for predicting steady-state reaction rates in biochemical reaction networks. The exploration of high dimensional networks with such methods is currently hampered by software performance limitations. DistributedFBA.jl is a high-level, high-performance, open-source implementation of flux balance analysis in Julia. It is tailored to solve multiple flux balance analyses on a subset or all the reactions of large and huge-scale networks, on any number of threads or nodes.
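For readers unfamiliar with the underlying computation, a toy linear program shows the structure that DistributedFBA.jl solves many times over (the three-reaction network is invented; genome-scale models have thousands of reactions and metabolites):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A -> B -> biomass, with S v = 0 at steady state.
S = np.array([            # rows: metabolites A, B; columns: reactions v0..v2
    [1, -1,  0],
    [0,  1, -1],
])
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake flux capped at 10 units

c = np.array([0, 0, -1])   # linprog minimizes, so negate the biomass objective
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal biomass flux:", res.x[2], " all fluxes:", res.x)
```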
Baylis, Adriane; Chapman, Kathy; Whitehill, Tara L; The Americleft Speech Group
2015-11-01
Objective: To investigate the validity and reliability of multiple listener judgments of hypernasality and audible nasal emission, in children with repaired cleft palate, using visual analog scaling (VAS) and equal-appearing interval (EAI) scaling. Design: Prospective comparative study of multiple listener ratings of hypernasality and audible nasal emission. Setting: Multisite institutional. Participants: Five trained and experienced speech-language pathologist listeners from the Americleft Speech Project. Main outcome measures: Average VAS and EAI ratings of hypernasality and audible nasal emission/turbulence for 12 video-recorded speech samples from the Americleft Speech Project. Intrarater and interrater reliability were computed, as well as linear and polynomial models of best fit. Results: Intrarater and interrater reliability were acceptable for both rating methods; however, reliability was higher for VAS than for EAI ratings. When VAS ratings were plotted against EAI ratings, results revealed a stronger curvilinear relationship. Conclusions: The results of this study provide additional evidence that alternate rating methods such as VAS may offer improved validity and reliability over EAI ratings of speech. VAS should be considered a viable method for rating hypernasality and nasal emission in the speech of children with repaired cleft palate.
Beyond Low Rank + Sparse: Multi-scale Low Rank Matrix Decomposition
Ong, Frank; Lustig, Michael
2016-01-01
We present a natural generalization of the recent low rank + sparse matrix decomposition and consider the decomposition of matrices into components at multiple scales. Such a decomposition is well motivated in practice, as data matrices often exhibit local correlations at multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under various incoherence conditions, the convex program recovers the multi-scale low rank components either exactly or approximately. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information. PMID:28450978
Mission Concepts and Operations for Asteroid Mitigation Involving Multiple Gravity Tractors
NASA Technical Reports Server (NTRS)
Foster, Cyrus; Bellerose, Julie; Jaroux, Belgacem; Mauro, David
2012-01-01
The gravity tractor concept is a proposed method to deflect an imminent asteroid impact through gravitational tugging over a time scale of years. In this study, we present mission scenarios and operational considerations for asteroid mitigation efforts involving multiple gravity tractors. We quantify the deflection performance improvement provided by a multiple gravity tractor campaign and assess its sensitivity to staggered launches. We next explore several proximity operation strategies to accommodate multiple gravity tractors at a single asteroid including formation-flying and mechanically-docked configurations. Finally, we utilize 99942 Apophis as an illustrative example to assess the performance of a multiple gravity tractor campaign.
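Back-of-the-envelope arithmetic shows why multiple tractors and multi-year tow times matter; the spacecraft mass, asteroid mass and hover distance below are assumed values for illustration, not parameters from the study:

```python
G = 6.674e-11                    # gravitational constant, m^3 kg^-1 s^-2

# One 1000-kg tractor hovering 200 m from the centre of mass of a roughly
# Apophis-class asteroid; both values are assumed for illustration.
m_sc, r = 1000.0, 200.0

a_single = G * m_sc / r**2       # tug acceleration imparted on the asteroid
years = 10
dv_single = a_single * years * 365.25 * 86400

for n in (1, 2, 4):              # n tractors in formation at the same range
    print(f"{n} tractor(s): a = {n * a_single:.2e} m/s^2, "
          f"dv over {years} yr = {n * dv_single * 1e3:.2f} mm/s")
```

Even fractions of a millimetre per second, applied years before a predicted impact, can shift the asteroid's arrival time enough to matter, which is why deflection performance is quoted over year-scale campaigns.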
NASA Astrophysics Data System (ADS)
Liu, Q.; Jing, L.; Li, Y.; Tang, Y.; Li, H.; Lin, Q.
2016-04-01
For the purpose of forest management, high-resolution LiDAR and optical remote sensing imagery is used for treetop detection, tree crown delineation, and classification. The purpose of this study is to develop a self-adjusting dominant-scale calculation method and a new crown horizontal cutting method for the tree canopy height model (CHM) to detect and delineate tree crowns from LiDAR, under the hypothesis that a treetop is a radiometric or altitudinal maximum and tree crowns consist of multi-scale branches. The major concept of the method is to develop an automatic strategy for selecting feature scales on the CHM, and a multi-scale morphological reconstruction-open crown decomposition (MRCD) to obtain morphological multi-scale features of the CHM by: cutting the CHM from treetop to ground; analysing and refining the dominant multiple scales with differential horizontal profiles to obtain treetops; and segmenting the LiDAR CHM using a watershed segmentation approach marked with MRCD treetops. This method solves the problem of false detection of CHM side-surfaces extracted by the traditional morphological opening canopy segment (MOCS) method. The novel MRCD delineates more accurate and quantitative multi-scale features of the CHM, and enables more accurate detection and segmentation of treetops and crowns. Besides, the MRCD method can also be extended to tree crown extraction from high-resolution optical remote sensing imagery. In an experiment on an aerial LiDAR CHM of a forest with multi-scale tree crowns, the proposed method yielded high-quality tree crown maps.
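A generic maxima-plus-watershed pipeline of the kind the paper refines (this sketch uses plain local maxima as treetop markers rather than the MRCD decomposition; the synthetic CHM and thresholds are invented):

```python
import numpy as np
from scipy import ndimage
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

rng = np.random.default_rng(6)

# Synthetic CHM: two Gaussian "crowns" on flat ground, heights in metres.
yy, xx = np.mgrid[0:100, 0:100]
chm = (12 * np.exp(-((yy - 30) ** 2 + (xx - 35) ** 2) / 150)
       + 9 * np.exp(-((yy - 65) ** 2 + (xx - 70) ** 2) / 100)
       + 0.2 * rng.standard_normal((100, 100)))
chm = ndimage.gaussian_filter(chm, 1)            # light smoothing

# Treetops = local height maxima above a ground threshold.
tops = peak_local_max(chm, min_distance=10, threshold_abs=2.0)
markers = np.zeros_like(chm, dtype=int)
markers[tuple(tops.T)] = np.arange(1, len(tops) + 1)

# Watershed on the inverted CHM grows one crown segment per treetop marker.
crowns = watershed(-chm, markers, mask=chm > 2.0)
print("treetops:", len(tops), " crown labels:", np.unique(crowns)[1:])
```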
Adjacent bin stability evaluating for feature description
NASA Astrophysics Data System (ADS)
Nie, Dongdong; Ma, Qinyong
2018-04-01
A recent study improves descriptor performance by accumulating stability votes across all scale pairs to compose the local descriptor. We argue that the stability of a bin depends on the differences across adjacent scale pairs more than on the differences across all scale pairs, and a new local descriptor is composed based on this hypothesis. First, a series of SIFT descriptors is extracted at multiple scales. Then the difference of each bin across adjacent scales is calculated; the stability value of a bin is computed from it and accumulated to compose the final descriptor. The performance of the proposed method is evaluated on two popular matching datasets and compared with other state-of-the-art works. Experimental results show that the proposed method performs satisfactorily.
ERIC Educational Resources Information Center
Waninge, A.; van Wijck, R.; Steenbergen, B.; van der Schans, C. P.
2011-01-01
Background: The purpose of this study was to determine the feasibility and reliability of the modified Berg Balance Scale (mBBS) in persons with severe intellectual and visual disabilities (severe multiple disabilities, SMD) assigned Gross Motor Function Classification System (GMFCS) grades I and II. Method: Thirty-nine participants with SMD and…
A Comparison of Methods to Screen Middle School Students for Reading and Math Difficulties
ERIC Educational Resources Information Center
Nelson, Peter M.; Van Norman, Ethan R.; Lackner, Stacey K.
2016-01-01
The current study explored multiple ways in which middle schools can use and integrate data sources to predict proficiency on future high-stakes state achievement tests. The diagnostic accuracy of (a) prior achievement data, (b) teacher rating scale scores, (c) a composite score combining state test scores and rating scale responses, and (d) two…
Carbon nanotube growth density control
NASA Technical Reports Server (NTRS)
Delzeit, Lance D. (Inventor); Schipper, John F. (Inventor)
2010-01-01
Method and system for combined coarse scale control and fine scale control of growth density of a carbon nanotube (CNT) array on a substrate, using a selected electrical field adjacent to a substrate surface for coarse scale density control (by one or more orders of magnitude) and a selected CNT growth temperature range for fine scale density control (by multiplicative factors of less than an order of magnitude) of CNT growth density. Two spaced apart regions on a substrate may have different CNT growth densities and/or may use different feed gases for CNT growth.
Multiple Point Statistics algorithm based on direct sampling and multi-resolution images
NASA Astrophysics Data System (ADS)
Julien, S.; Renard, P.; Chugunova, T.
2017-12-01
Multiple Point Statistics (MPS) has become popular over more than a decade in Earth Sciences, because these methods make it possible to generate random fields reproducing highly complex spatial features given in a conceptual model, the training image, whereas classical geostatistics techniques based on two-point statistics (covariance or variogram) fail to generate realistic models. Among MPS methods, direct sampling consists in borrowing patterns from the training image to populate a simulation grid. The latter is filled sequentially by visiting each of its nodes in a random order; the patterns, whose number of nodes is fixed, become narrower during the simulation process as the simulation grid becomes more densely informed. Hence, large-scale structures are caught at the beginning of the simulation and small-scale ones at the end. However, MPS may mix spatial characteristics distinguishable at different scales in the training image, and then lose the spatial arrangement of different structures. To overcome this limitation, we propose to perform MPS simulation using a decomposition of the training image into a set of images at multiple resolutions. Applying a Gaussian kernel to the training image (convolution) results in a lower-resolution image, and by iterating this process, a pyramid of images depicting fewer details at each level is built, as is done in image processing, for example to reduce the storage size of a photograph. Direct sampling is then employed to simulate the lowest-resolution level, and then each level in turn, up to the finest resolution, conditioned on the next-coarser level. This scheme helps reproduce the spatial structures at every scale of the training image and thus generate more realistic models. We illustrate the method with aerial photographs (satellite images) and natural textures. Indeed, these kinds of images often display typical structures at different scales and are well suited to MPS simulation techniques.
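A small sketch of the multi-resolution decomposition step (Gaussian smoothing followed by subsampling); the direct-sampling simulation itself, run coarse-to-fine with each level conditioned on the next-coarser one, is not shown:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_pyramid(training_image, levels=3, sigma=1.0):
    """Each level is a Gaussian-smoothed, 2x-subsampled copy of the previous
    one, so coarse levels keep large structures and drop fine detail."""
    pyramid = [training_image]
    for _ in range(levels):
        smoothed = gaussian_filter(pyramid[-1], sigma)
        pyramid.append(smoothed[::2, ::2])
    return pyramid                     # pyramid[0] is the finest resolution

rng = np.random.default_rng(7)
ti = (rng.random((128, 128)) < 0.3).astype(float)   # stand-in training image
for level, img in enumerate(build_pyramid(ti)):
    print(f"level {level}: shape {img.shape}")
```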
Effects of random tooth profile errors on the dynamic behaviors of planetary gears
NASA Astrophysics Data System (ADS)
Xun, Chao; Long, Xinhua; Hua, Hongxing
2018-02-01
In this paper, a nonlinear random model is built to describe the dynamics of planetary gear trains (PGTs), in which the time-varying mesh stiffness, tooth profile modification (TPM), tooth contact loss, and random tooth profile errors are considered. A stochastic method based on the method of multiple scales (MMS) is extended to analyze the statistical properties of the dynamic performance of PGTs. Using the proposed multiple-scales-based stochastic method, the distributions of the dynamic transmission errors (DTEs) are investigated, and the lower and upper bounds are determined based on the 3σ principle. The Monte Carlo method is employed to verify the proposed method. Results indicate that the proposed method can determine the distribution of the DTE of PGTs highly efficiently and provides a link between manufacturing precision and the dynamic response. In addition, the effects of tooth profile modification on the distributions of vibration amplitudes and on the probability of tooth contact loss under different manufacturing tooth profile errors are studied. The results show that manufacturing precision affects the distribution of dynamic transmission errors dramatically and that appropriate TPMs help decrease the nominal value and the deviation of the vibration amplitudes.
Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)
Ramp, Melina; Khan, Fary; Misajon, Rose Anne; Pallant, Julie F
2009-01-01
Background Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) impact of MS. Although previous studies have found support for the psychometric properties of the MSIS-29 using traditional methods of scale evaluation, the scale has not been subjected to a detailed Rasch analysis. Therefore, the objective of this study was to use Rasch analysis to assess the internal validity of the scale, and its response format, item fit, targeting, internal consistency and dimensionality. Methods Ninety-two persons with definite MS residing in the community were recruited from a tertiary hospital database. Patients completed the MSIS-29 as part of a larger study. Rasch analysis was undertaken to assess the psychometric properties of the MSIS-29. Results Rasch analysis showed overall support for the psychometric properties of the two MSIS-29 subscales; however, it was necessary to reduce the response format of the MSIS-29-PHYS to a 3-point response scale. Both subscales were unidimensional, had good internal consistency, and were free from item bias for sex and age. Dimensionality testing indicated it was not appropriate to combine the two subscales to form a total MSIS score. Conclusion In this first study to use Rasch analysis to fully assess the psychometric properties of the MSIS-29, support was found for the two subscales but not for the use of the total scale. Further use of Rasch analysis on the MSIS-29 in larger and broader samples is recommended to confirm these findings. PMID:19545445
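For reference, the dichotomous Rasch model underlying such analyses gives the probability that person n endorses item i as a function of person ability \theta_n and item difficulty b_i; polytomous instruments like the MSIS-29 use a rating-scale extension of this form:

\[ P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)} \]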
Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Archibald, Richard K; Fann, George I; Shelton Jr, William Allison
2011-01-01
We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.
A successful trap design for capturing large terrestrial snakes
Shirley J. Burgdorf; D. Craig Rudolph; Richard N. Conner; Daniel Saenz; Richard R. Schaefer
2005-01-01
Large scale trapping protocols for snakes can be expensive and require large investments of personnel and time. Typical methods, such as pitfall and small funnel traps, are not useful or suitable for capturing large snakes. A method was needed to survey multiple blocks of habitat for the Louisiana Pine Snake (Pituophis ruthveni), throughout its...
Metabolic Imaging in Multiple Time Scales
Ramanujan, V Krishnan
2013-01-01
We report here a novel combination of time-resolved imaging methods for probing mitochondrial metabolism at multiple time scales at the level of single cells. By exploiting a mitochondrial membrane potential reporter fluorescence, we demonstrate single-cell metabolic dynamics at time scales ranging from milliseconds to seconds to minutes in response to glucose metabolism and mitochondrial perturbations in real time. Our results show that, in comparison with normal human mammary epithelial cells, breast cancer cells display significant alterations in metabolic responses at all measured time scales, as seen by single-cell kinetics, by fluorescence recovery after photobleaching, and by scaling analysis of time-series data obtained from mitochondrial fluorescence fluctuations. Furthermore, scaling analysis of time-series data in living cells with distinct mitochondrial dysfunction also revealed significant metabolic differences, suggesting the broader applicability (e.g. in mitochondrial myopathies and other metabolic disorders) of the proposed strategies beyond the scope of cancer metabolism. We discuss the scope of these findings in the context of developing portable, real-time metabolic measurement systems that can find applications in preclinical and clinical diagnostics. PMID:24013043
Multiple time step integrators in ab initio molecular dynamics.
Luehr, Nathan; Markland, Thomas E; Martínez, Todd J
2014-02-28
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly accelerate the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
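A minimal sketch of the multiple time-step idea follows: a generic two-level velocity-Verlet/r-RESPA splitting, not the fragment-decomposition or range-separation schemes of the paper; the force functions and step sizes are placeholders supplied by the caller (x and v may be floats or numpy arrays).

def respa_step(x, v, m, f_fast, f_slow, dt_outer, n_inner):
    """One outer step: slow forces kick at dt_outer, fast forces integrate inside."""
    v = v + 0.5 * dt_outer * f_slow(x) / m      # half kick from slowly varying forces
    dt = dt_outer / n_inner
    for _ in range(n_inner):                    # inner velocity-Verlet loop, fast forces only
        v = v + 0.5 * dt * f_fast(x) / m
        x = x + dt * v
        v = v + 0.5 * dt * f_fast(x) / m
    v = v + 0.5 * dt_outer * f_slow(x) / m      # closing half kick
    return x, v

With dt_outer = 2.5 fs and n_inner = 5, the fast components are integrated at 0.5 fs while the expensive slow part is evaluated five times less often, which is the source of the reported speedup.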
Robots, systems, and methods for hazard evaluation and visualization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nielsen, Curtis W.; Bruemmer, David J.; Walton, Miles C.
A robot includes a hazard sensor, a locomotor, and a system controller. The robot senses a hazard intensity at a location of the robot, moves to a new location in response to the hazard intensity, and autonomously repeats the sensing and moving to determine multiple hazard levels at multiple locations. The robot may also include a communicator to communicate the multiple hazard levels to a remote controller. The remote controller includes a communicator for sending user commands to the robot and receiving the hazard levels from the robot. A graphical user interface displays an environment map of the environment proximate the robot and a scale for indicating a hazard intensity. A hazard indicator corresponds to a robot position in the environment map and graphically indicates the hazard intensity at the robot position relative to the scale.
Fully implicit adaptive mesh refinement solver for 2D MHD
NASA Astrophysics Data System (ADS)
Philip, B.; Chacon, L.; Pernice, M.
2008-11-01
Application of implicit adaptive mesh refinement (AMR) to simulate resistive magnetohydrodynamics is described. Solving this challenging multi-scale, multi-physics problem can improve understanding of reconnection in magnetically-confined plasmas. AMR is employed to resolve extremely thin current sheets, essential for an accurate macroscopic description. Implicit time stepping allows us to accurately follow the dynamical time scale of the developing magnetic field, without being restricted by fast Alfven time scales. At each time step, the large-scale system of nonlinear equations is solved by a Jacobian-free Newton-Krylov method together with a physics-based preconditioner. Each block within the preconditioner is solved optimally using the Fast Adaptive Composite grid method, which can be considered as a multiplicative Schwarz method on AMR grids. We will demonstrate the excellent accuracy and efficiency properties of the method with several challenging reduced MHD applications, including tearing, island coalescence, and tilt instabilities. B. Philip, L. Chacón, M. Pernice, J. Comput. Phys., in press (2008)
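The Jacobian-free Newton-Krylov kernel mentioned above rests on approximating Jacobian-vector products by finite differences of the residual, so the Jacobian is never formed. A minimal scipy-based sketch under that assumption (the perturbation size eps is a simplistic fixed choice, and the physics-based preconditioner is omitted):

import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def newton_krylov_step(F, u, eps=1e-7):
    """Solve J(u) du = -F(u) with GMRES, without forming the Jacobian J."""
    Fu = F(u)
    def jv(v):
        # directional finite difference: J v ~ (F(u + eps v) - F(u)) / eps
        return (F(u + eps * v) - Fu) / eps
    J = LinearOperator((u.size, u.size), matvec=jv)
    du, info = gmres(J, -Fu)   # a preconditioner would be passed here in practice
    return u + du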
Gray, B.R.; Haro, R.J.; Rogala, J.T.; Sauer, J.S.
2005-01-01
1. Macroinvertebrate count data often exhibit nested or hierarchical structure. Examples include multiple measurements along each of a set of streams, and multiple synoptic measurements from each of a set of ponds. With data exhibiting hierarchical structure, outcomes at both sampling (e.g. within-stream) and aggregated (e.g. stream) scales are often of interest. Unfortunately, methods for modelling hierarchical count data have received little attention in the ecological literature. 2. We demonstrate the use of hierarchical count models using fingernail clam (Family: Sphaeriidae) count data and habitat predictors derived from sampling and aggregated spatial scales. The sampling scale corresponded to that of a standard Ponar grab (0.052 m(2)) and the aggregated scale to impounded and backwater regions within 38-197 km reaches of the Upper Mississippi River. Impounded and backwater regions were resampled annually for 10 years. Consequently, measurements on clams were nested within years. Counts were treated as negative binomial random variates, and means from each resampling event as random departures from the impounded and backwater region grand means. 3. Clam models were improved by the addition of covariates that varied at both the sampling and regional scales. Substrate composition varied at the sampling scale and was associated with model improvements, and with reductions (for a given mean) in variance at the sampling scale. Inorganic suspended solids (ISS) levels, measured in the summer preceding sampling, also yielded model improvements and were associated with reductions in variances at the regional rather than the sampling scale. ISS levels were negatively associated with mean clam counts. 4. Hierarchical models allow hierarchically structured data to be modelled without ignoring information specific to levels of the hierarchy. In addition, information at each hierarchical level may be modelled as a function of covariates that themselves vary by and within levels. As a result, hierarchical models provide researchers and resource managers with a method for modelling hierarchical data that explicitly recognises both the sampling design and the information contained in the corresponding data.
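A small simulation sketch of the hierarchical count structure described in point 2: numpy-based, with region effects, dispersion, and sample sizes invented purely for illustration, not taken from the study.

import numpy as np

rng = np.random.default_rng(1)

def nb_sample(mean, k, size):
    # numpy parameterizes the negative binomial by (n, p);
    # mean mu and dispersion k correspond to n = k, p = k / (k + mu)
    return rng.negative_binomial(k, k / (k + mean), size=size)

region_grand_means = {"impounded": 20.0, "backwater": 35.0}  # assumed grand means
k = 2.0                                                      # assumed overdispersion
counts = {}
for region, mu in region_grand_means.items():
    # each year's mean is a random departure from the regional grand mean
    yearly_means = mu * rng.lognormal(0.0, 0.3, size=10)
    counts[region] = [nb_sample(m, k, size=25) for m in yearly_means]  # 25 grabs/year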
Linear regulator design for stochastic systems by a multiple time scales method
NASA Technical Reports Server (NTRS)
Teneketzis, D.; Sandell, N. R., Jr.
1976-01-01
A hierarchically-structured, suboptimal controller for a linear stochastic system composed of fast and slow subsystems is considered. The controller is optimal in the limit as the separation of time scales of the subsystems becomes infinite. The methodology is illustrated by design of a controller to suppress the phugoid and short period modes of the longitudinal dynamics of the F-8 aircraft.
A comparative study of turbulence models for overset grids
NASA Technical Reports Server (NTRS)
Renze, Kevin J.; Buning, Pieter G.; Rajagopalan, R. G.
1992-01-01
The implementation of two different types of turbulence models for a flow solver using the Chimera overset grid method is examined. Various turbulence model characteristics, such as length scale determination and transition modeling, are found to have a significant impact on the computed pressure distribution for a multielement airfoil case. No inherent problem is found with using either algebraic or one-equation turbulence models with an overset grid scheme, but simulation of turbulence for multiple-body or complex geometry flows is very difficult regardless of the gridding method. For complex geometry flowfields, modification of the Baldwin-Lomax turbulence model is necessary to select the appropriate length scale in wall-bounded regions. The overset grid approach presents no obstacle to use of a one- or two-equation turbulence model. Both Baldwin-Lomax and Baldwin-Barth models have problems providing accurate eddy viscosity levels for complex multiple-body flowfields such as those involving the Space Shuttle.
NASA Astrophysics Data System (ADS)
Sajjadi, Mohammadreza; Pishkenari, Hossein Nejat; Vossoughi, Gholamreza
2018-06-01
Trolling mode atomic force microscopy (TR-AFM) has resolved many imaging problems through a considerable reduction of the liquid-resonator interaction forces in liquid environments. The present study develops a nonlinear model of the meniscus force exerted on the nanoneedle of TR-AFM and presents an analytical solution to the distributed-parameter model of the TR-AFM resonator utilizing the multiple time scales (MTS) method. Based on the developed analytical solution, the frequency-response curves of the resonator operating in air and in liquid (for different penetration lengths of the nanoneedle) are obtained. The closed-form analytical solution and the frequency-response curves are validated by comparison with both the finite element solution of the main partial differential equations and experimental observations. The effect of the excitation angle of the resonator on the horizontal oscillation of the probe tip and the effects of different parameters on the frequency response of the system are investigated.
Levecke, Bruno; Behnke, Jerzy M.; Ajjampur, Sitara S. R.; Albonico, Marco; Ame, Shaali M.; Charlier, Johannes; Geiger, Stefan M.; Hoa, Nguyen T. V.; Kamwa Ngassam, Romuald I.; Kotze, Andrew C.; McCarthy, James S.; Montresor, Antonio; Periago, Maria V.; Roy, Sheela; Tchuem Tchuenté, Louis-Albert; Thach, D. T. C.; Vercruysse, Jozef
2011-01-01
Background The Kato-Katz thick smear (Kato-Katz) is the diagnostic method recommended for monitoring large-scale treatment programs implemented for the control of soil-transmitted helminths (STH) in public health, yet it is difficult to standardize. A promising alternative is the McMaster egg counting method (McMaster), commonly used in veterinary parasitology, but rarely so for the detection of STH in human stool. Methodology/Principal Findings The Kato-Katz and McMaster methods were compared for the detection of STH in 1,543 subjects resident in five countries across Africa, Asia and South America. The consistency of the performance of both methods in different trials, the validity of the fixed multiplication factor employed in the Kato-Katz method and the accuracy of these methods for estimating ‘true’ drug efficacies were assessed. The Kato-Katz method detected significantly more Ascaris lumbricoides infections (88.1% vs. 75.6%, p<0.001), whereas the difference in sensitivity between the two methods was non-significant for hookworm (78.3% vs. 72.4%) and Trichuris trichiura (82.6% vs. 80.3%). The sensitivity of the methods varied significantly across trials and magnitude of fecal egg counts (FEC). Quantitative comparison revealed a significant correlation (Rs >0.32) in FEC between both methods, and indicated no significant difference in FEC, except for A. lumbricoides, where the Kato-Katz resulted in significantly higher FEC (14,197 eggs per gram of stool (EPG) vs. 5,982 EPG). For the Kato-Katz, the fixed multiplication factor resulted in significantly higher FEC than the multiplication factor adjusted for mass of feces examined for A. lumbricoides (16,538 EPG vs. 15,396 EPG) and T. trichiura (1,490 EPG vs. 1,363 EPG), but not for hookworm. The McMaster provided more accurate efficacy results (absolute difference to ‘true’ drug efficacy: 1.7% vs. 4.5%). Conclusions/Significance The McMaster is an alternative method for monitoring large-scale treatment programs. It is a robust (accurate multiplication factor) and accurate (reliable efficacy results) method, which can be easily standardized. PMID:21695104
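To make the multiplication-factor issue concrete: Kato-Katz egg counts from the standard 41.7 mg template are conventionally scaled by a fixed factor of 24 to obtain eggs per gram (EPG), whereas a mass-adjusted factor divides 1,000 mg by the mass of feces actually examined. A hedged arithmetic sketch, with the count and smear mass invented for illustration:

count = 50                              # eggs counted on one Kato-Katz smear
fixed_epg = count * 24                  # fixed factor assumes exactly 41.7 mg examined
mass_mg = 45.0                          # actual mass of feces on the slide (assumed)
adjusted_epg = count * (1000.0 / mass_mg)
# fixed_epg = 1200 EPG vs adjusted_epg ~ 1111 EPG: the fixed factor
# overestimates FEC whenever the true smear mass exceeds 41.7 mg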
ERIC Educational Resources Information Center
Hostyn, Ine; Petry, Katja; Lambrechts, Greet; Maes, Bea
2011-01-01
Background: Affective and reciprocal interactions with others are essential for persons with profound intellectual and multiple disabilities (PIMD), but it is a challenge to assess their quality. This study aimed to investigate the usefulness of instruments from parent-infant research to evaluate these interactions. Method: Eighteen videotaped…
Kim, Min-Kyu; Hong, Seong-Kwan; Kwon, Oh-Kyong
2015-01-01
This paper presents a fast multiple sampling method for low-noise CMOS image sensor (CIS) applications with column-parallel successive approximation register analog-to-digital converters (SAR ADCs). The 12-bit SAR ADC using the proposed multiple sampling method decreases the A/D conversion time by repeatedly converting a pixel output to 4-bit after the first 12-bit A/D conversion, reducing noise of the CIS by one over the square root of the number of samplings. The area of the 12-bit SAR ADC is reduced by using a 10-bit capacitor digital-to-analog converter (DAC) with four scaled reference voltages. In addition, a simple up/down counter-based digital processing logic is proposed to perform complex calculations for multiple sampling and digital correlated double sampling. To verify the proposed multiple sampling method, a 256 × 128 pixel array CIS with 12-bit SAR ADCs was fabricated using a 0.18 μm CMOS process. The measurement results show that the proposed multiple sampling method reduces each A/D conversion time from 1.2 μs to 0.45 μs and random noise from 848.3 μV to 270.4 μV, achieving a dynamic range of 68.1 dB and an SNR of 39.2 dB. PMID:26712765
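The noise benefit of multiple sampling follows from averaging: uncorrelated noise falls as one over the square root of the number of samples. A quick numerical check of that scaling (a generic simulation using the paper's pre-averaging noise figure; the number of samplings n_samples is an assumption):

import numpy as np

rng = np.random.default_rng(2)
signal, noise_rms, n_samples = 0.5, 848.3e-6, 10   # 848.3 uV from the paper; n assumed
readings = signal + rng.normal(0.0, noise_rms, size=(100_000, n_samples))
averaged = readings.mean(axis=1)
# both values come out near 268 uV, consistent with the measured 270.4 uV
print(averaged.std() * 1e6, noise_rms / np.sqrt(n_samples) * 1e6)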
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurley, R. C.; Vorobiev, O. Y.; Ezzedine, S. M.
2017-04-06
Here, we present a numerical method for modeling the mechanical effects of nonlinearly-compliant joints in elasto-plastic media. The method uses a series of strain-rate and stress update algorithms to determine joint closure, slip, and solid stress within computational cells containing multiple “embedded” joints. This work facilitates efficient modeling of nonlinear wave propagation in large spatial domains containing a large number of joints that affect bulk mechanical properties. We implement the method within the massively parallel Lagrangian code GEODYN-L and provide verification and examples. We highlight the ability of our algorithms to capture joint interactions and multiple weakness planes within individual computational cells, as well as its computational efficiency. We also discuss the motivation for developing the proposed technique: to simulate large-scale wave propagation during the Source Physics Experiments (SPE), a series of underground explosions conducted at the Nevada National Security Site (NNSS).
Evaluating groundwater flow using passive electrical measurements
NASA Astrophysics Data System (ADS)
Voytek, E.; Revil, A.; Singha, K.
2016-12-01
Accurate quantification of groundwater flow patterns, both in magnitude and direction, is a necessary component of evaluating any hydrologic system. Groundwater flow patterns are often determined using a dense network of wells or piezometers, which can be limited due to logistical or regulatory constraints. The self-potential (SP) method, a passive geophysical technique that relies on currents generated by water movement through porous materials, is a re-emerging alternative or addition to traditional piezometer networks. Naturally generated currents can be measured as voltage differences at the ground surface using only two electrodes, or a more complex electrode array. While the association between SP measurements and groundwater flow was observed as early as the 1890s, the method has seen a resurgence in hydrology since the governing equations were refined in the 1980s. The method can be used to analyze hydrologic processes at various temporal and spatial scales. Here we present the results of multiple SP surveys collected at multiple scales (1 to tens of meters). Single SP grid surveys are used to evaluate flow patterns through arctic hillslopes at a discrete point in time. Additionally, a coupled groundwater and electrical model is used to analyze multiple SP data sets to evaluate seasonal changes in groundwater flow through an alpine meadow.
Wyrwich, Kathleen W; Guo, Shien; Medori, Rossella; Altincatal, Arman; Wagner, Linda; Elkins, Jacob
2014-01-01
Background: The 29-item Multiple Sclerosis Impact Scale (MSIS-29) was developed to examine the impact of multiple sclerosis (MS) on physical and psychological functioning from a patient’s perspective. Objective: To determine the responder definition (RD) of the MSIS-29 physical impact subscale (PHYS) in a group of patients with relapsing–remitting MS (RRMS) participating in a clinical trial. Methods: Data from the SELECT trial comparing daclizumab high-yield process with placebo in patients with RRMS were used. Physical function was evaluated in SELECT using three patient-reported outcomes measures and the Expanded Disability Status Scale (EDSS). Anchor- and distribution-based methods were used to identify an RD for the MSIS-29. Results: Results across the anchor-based approach suggested MSIS-29 PHYS RD values of 6.91 (mean), 7.14 (median) and 7.50 (mode). Distribution-based RD estimates ranged from 6.24 to 10.40. An RD of 7.50 was selected as the most appropriate threshold for physical worsening based on corresponding changes in the EDSS (primary anchor of interest). Conclusion: These findings indicate that a ≥7.50 point worsening on the MSIS-29 PHYS is a reasonable and practical threshold for identifying patients with RRMS who have experienced a clinically significant change in the physical impact of MS. PMID:24740371
New methods of MR image intensity standardization via generalized scale
NASA Astrophysics Data System (ADS)
Madabhushi, Anant; Udupa, Jayaram K.
2005-04-01
Image intensity standardization is a post-acquisition processing operation designed for correcting acquisition-to-acquisition signal intensity variations (non-standardness) inherent in Magnetic Resonance (MR) images. While existing standardization methods based on histogram landmarks have been shown to produce a significant gain in the similarity of resulting image intensities, their weakness is that, in some instances the same histogram-based landmark may represent one tissue, while in other cases it may represent different tissues. This is often true for diseased or abnormal patient studies in which significant changes in the image intensity characteristics may occur. In an attempt to overcome this problem, in this paper, we present two new intensity standardization methods based on the concept of generalized scale. In reference 1 we introduced the concept of generalized scale (g-scale) to overcome the shape, topological, and anisotropic constraints imposed by other local morphometric scale models. Roughly speaking, the g-scale of a voxel in a scene was defined as the largest set of voxels connected to the voxel that satisfy some homogeneity criterion. We subsequently formulated a variant of the generalized scale notion, referred to as generalized ball scale (gB-scale), which, in addition to having the advantages of g-scale, also has superior noise resistance properties. These scale concepts are utilized in this paper to accurately determine principal tissue regions within MR images, and landmarks derived from these regions are used to perform intensity standardization. The new methods were qualitatively and quantitatively evaluated on a total of 67 clinical 3D MR images corresponding to four different protocols and to normal, Multiple Sclerosis (MS), and brain tumor patient studies. The generalized scale-based methods were found to be better than the existing methods, with a significant improvement observed for severely diseased and abnormal patient studies.
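In the spirit of the g-scale definition quoted above (the largest set of voxels connected to a voxel that satisfy a homogeneity criterion), a simplified 2D region-growing sketch is given below; the actual g-scale computation has additional conditions, and the intensity tolerance here is an arbitrary assumption.

from collections import deque

def grow_region(img, seed, tol=10.0):
    """Collect pixels connected to seed whose intensity stays within tol of the seed."""
    h, w = img.shape
    visited = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connectivity
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in visited
                    and abs(float(img[nr, nc]) - float(img[seed])) <= tol):
                visited.add((nr, nc))
                queue.append((nr, nc))
    return visited  # a rough approximation to the g-scale region of the seed voxel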
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schuster, Benjamin S.; Allan, Daniel B.; Kays, Joshua C.; ...
2017-05-31
Diffusion through biological gels is crucial for effective drug delivery using nanoparticles. Here, we demonstrate a new method to measure diffusivity over a large range of length scales – from tens of nanometers to tens of micrometers – using photoactivatable fluorescent nanoparticle probes. We have applied this method to investigate the length-scale dependent mobility of nanoparticles in fibrin gels and in sputum from patients with cystic fibrosis (CF). Nanoparticles composed of poly(lactic-co-glycolic acid), with polyethylene glycol coatings to resist bioadhesion, were internally labeled with caged rhodamine to make the particles photoactivatable. We activated particles within a region of sample using brief, targeted exposure to UV light, uncaging the rhodamine and causing the particles in that region to become fluorescent. We imaged the subsequent spatiotemporal evolution in fluorescence intensity and observed the collective particle diffusion over tens of minutes and tens of micrometers. We also performed complementary multiple particle tracking experiments on the same particles, extending significantly the range over which particle motion and its heterogeneity can be observed. In fibrin gels, both methods showed an immobile fraction of particles and a mobile fraction that diffused over all measured length scales. In the CF sputum, particle diffusion was spatially heterogeneous and locally anisotropic but nevertheless typically led to unbounded transport extending tens of micrometers within tens of minutes. Lastly, these findings provide insight into the mesoscale architecture of these gels and its role in setting their permeability on physiologically relevant length scales, pointing toward strategies for improving nanoparticle drug delivery.
NASA Astrophysics Data System (ADS)
Sentić, Stipo; Sessions, Sharon L.
2017-06-01
The weak temperature gradient (WTG) approximation is a method of parameterizing the influences of the large scale on local convection in limited domain simulations. WTG simulations exhibit multiple equilibria in precipitation; depending on the initial moisture content, simulations can precipitate or remain dry for otherwise identical boundary conditions. We use a hypothesized analogy between multiple equilibria in precipitation in WTG simulations, and dry and moist regions of organized convection to study tropical convective organization. We find that the range of wind speeds that support multiple equilibria depends on sea surface temperature (SST). Compared to the present SST, low SSTs support a narrower range of multiple equilibria at higher wind speeds. In contrast, high SSTs exhibit a narrower range of multiple equilibria at low wind speeds. This suggests that at high SSTs, organized convection might occur with lower surface forcing. To characterize convection at different SSTs, we analyze the change in relationships between precipitation rate, atmospheric stability, moisture content, and the large-scale transport of moist entropy and moisture with increasing SSTs. We find an increase in large-scale export of moisture and moist entropy from dry simulations with increasing SST, which is consistent with a strengthening of the up-gradient transport of moisture from dry regions to moist regions in organized convection. Furthermore, the changes in diagnostic relationships with SST are consistent with more intense convection in precipitating regions of organized convection for higher SSTs.
Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.
2015-01-01
Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228
ERIC Educational Resources Information Center
He, Yong
2013-01-01
Common test items play an important role in equating multiple test forms under the common-item nonequivalent groups design. Inconsistent item parameter estimates among common items can lead to large bias in equated scores for IRT true score equating. Current methods extensively focus on detection and elimination of outlying common items, which…
Shaw, Emily E; Schultz, Aaron P; Sperling, Reisa A; Hedden, Trey
2015-10-01
Intrinsic functional connectivity MRI has become a widely used tool for measuring integrity in large-scale cortical networks. This study examined multiple cortical networks using Template-Based Rotation (TBR), a method that applies a priori network and nuisance component templates defined from an independent dataset to test datasets of interest. A priori templates were applied to a test dataset of 276 older adults (ages 65-90) from the Harvard Aging Brain Study to examine the relationship between multiple large-scale cortical networks and cognition. Factor scores derived from neuropsychological tests represented processing speed, executive function, and episodic memory. Resting-state BOLD data were acquired in two 6-min acquisitions on a 3-Tesla scanner and processed with TBR to extract individual-level metrics of network connectivity in multiple cortical networks. All results controlled for data quality metrics, including motion. Connectivity in multiple large-scale cortical networks was positively related to all cognitive domains, with a composite measure of general connectivity positively associated with general cognitive performance. Controlling for the correlations between networks, the frontoparietal control network (FPCN) and executive function demonstrated the only significant association, suggesting specificity in this relationship. Further analyses found that the FPCN mediated the relationships of the other networks with cognition, suggesting that this network may play a central role in understanding individual variation in cognition during aging.
NASA Astrophysics Data System (ADS)
Koliopanos, F.; Ciambur, B.; Graham, A.; Webb, N.; Coriat, M.; Mutlu-Pakdil, B.; Davis, B.; Godet, O.; Barret, D.; Seigar, M.
2017-10-01
Intermediate Mass Black Holes (IMBHs) are predicted by a variety of models and are the likely seeds for supermassive BHs (SMBHs). However, we have yet to establish their existence. One method by which we can discover IMBHs is measuring the mass of an accreting BH using X-ray and radio observations and drawing on the correlation between radio luminosity, X-ray luminosity and BH mass known as the fundamental plane of BH activity (FP-BH). Furthermore, the mass of BHs in the centers of galaxies can be estimated using scaling relations between BH mass and galactic properties. We are initiating a campaign to search for IMBH candidates in dwarf galaxies with low-luminosity AGN, using - for the first time - three different scaling relations and the FP-BH simultaneously. In this first stage of our campaign, we measure the masses of seven LLAGN that have previously been suggested to host central IMBHs, investigate the consistency between the predictions of the BH scaling relations and the FP-BH in the low-mass regime, and demonstrate that this multiple-method approach provides a robust average mass prediction. In my talk, I will discuss our methodology, results and the next steps of this campaign.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clement, T. Prabhakar; Barnett, Mark O.; Zheng, Chunmiao
DE-FG02-06ER64213: Development of Modeling Methods and Tools for Predicting Coupled Reactive Transport Processes in Porous Media at Multiple Scales. Investigators: T. Prabhakar Clement (PD/PI) and Mark O. Barnett (Auburn), Chunmiao Zheng (Univ. of Alabama), and Norman L. Jones (BYU). The objective of this project was to develop scalable modeling approaches for predicting the reactive transport of metal contaminants. We studied two contaminants, a radioactive cation [U(VI)] and a metal(loid) oxyanion system [As(III/V)], and investigated their interactions with two types of subsurface materials, iron and manganese oxyhydroxides. We also developed modeling methods for describing the experimental results. Overall, the project supported 25 researchers at three universities and produced 15 journal articles, 3 book chapters, 6 PhD dissertations and 6 MS theses. Three key journal articles are: 1) Jeppu et al., A scalable surface complexation modeling framework for predicting arsenate adsorption on goethite-coated sands, Environ. Eng. Sci., 27(2): 147-158, 2010. 2) Loganathan et al., Scaling of adsorption reactions: U(VI) experiments and modeling, Applied Geochemistry, 24(11): 2051-2060, 2009. 3) Phillippi et al., Theoretical solid/solution ratio effects on adsorption and transport: uranium (VI) and carbonate, Soil Sci. Soc. of America, 71: 329-335, 2007.
Glenn, Edward P.; Nagler, Pamela L.; Huete, Alfredo R.; Weng, Qihao
2014-01-01
This chapter describes emerging methods for using satellite imagery across temporal and spatial scales, using a case study approach to illustrate some of the opportunities now available for combining observations across scales. It explores the use of multiplatform sensor systems to characterize ecological change, as exemplified by efforts to scale the effects of a biocontrol insect (the leaf beetle Diorhabda carinulata) on the phenology and water use of Tamarix shrubs (Tamarix ramosissima and related species and hybrids) targeted for removal on western U.S. rivers, from the level of individual leaves to the regional level of measurement. Finally, the chapter summarizes the lessons learned and emphasizes the need for ground data to calibrate and validate remote sensing data, and describes the types of errors inherent in scaling point data over wide areas, illustrated with research on evapotranspiration (ET) of Tamarix using a wide range of ground measurement and remote sensing methods.
ED(MF)n: Humidity-Convection Feedbacks in a Mass Flux Scheme Based on Resolved Size Densities
NASA Astrophysics Data System (ADS)
Neggers, R.
2014-12-01
Cumulus cloud populations remain at least partially unresolved in present-day numerical simulations of global weather and climate, and accordingly their impact on the larger-scale flow has to be represented through parameterization. Various methods have been developed over the years, ranging in complexity from the early bulk models relying on a single plume to more recent approaches that attempt to reconstruct the underlying probability density functions, such as statistical schemes and multiple-plume approaches. Most of these "classic" methods capture key aspects of cumulus cloud populations, and have been successfully implemented in operational weather and climate models. However, the ever finer discretizations of operational circulation models, driven by advances in the computational efficiency of supercomputers, are creating new problems for existing sub-grid schemes. Ideally, a sub-grid scheme should automatically adapt its impact on the resolved scales to the dimension of the grid-box within which it is supposed to act. It can be argued that this is only possible when i) the scheme is aware of the range of scales of the processes it represents, and ii) it can distinguish between contributions as a function of size. How to conceptually represent this knowledge of scale in existing parameterization schemes remains an open question that is actively researched. This study considers a relatively new class of models for sub-grid transport in which ideas from the field of population dynamics are merged with the concept of multi-plume modelling. More precisely, a multiple mass flux framework for moist convective transport is formulated in which the ensemble of plumes is created in "size-space". It is argued that resolving the underlying size densities in this way creates opportunities for introducing scale-awareness and scale-adaptivity in the scheme. The behavior of an implementation of this framework in the Eddy Diffusivity Mass Flux (EDMF) model, named ED(MF)n, is examined for a standard case of subtropical marine shallow cumulus. We ask if a system of multiple independently resolved plumes is able to automatically create the vertical profile of bulk (mass) flux at which the sub-grid scale transport balances the imposed larger-scale forcings in the cloud layer.
Anderson, Roger N.; Boulanger, Albert; Bagdonas, Edward P.; Xu, Liqing; He, Wei
1996-01-01
The invention utilizes 3-D and 4-D seismic surveys as a means of deriving information useful in petroleum exploration and reservoir management. The methods use both single seismic surveys (3-D) and multiple seismic surveys separated in time (4-D) of a region of interest to determine large scale migration pathways within sedimentary basins, and fine scale drainage structure and oil-water-gas regions within individual petroleum producing reservoirs. Such structure is identified using pattern recognition tools which define the regions of interest. The 4-D seismic data sets may be used for data completion for large scale structure where time intervals between surveys do not allow for dynamic evolution. The 4-D seismic data sets also may be used to find variations over time of small scale structure within individual reservoirs which may be used to identify petroleum drainage pathways, oil-water-gas regions and, hence, attractive drilling targets. After spatial orientation, and amplitude and frequency matching of the multiple seismic data sets, High Amplitude Event (HAE) regions consistent with the presence of petroleum are identified using seismic attribute analysis. High Amplitude Regions are grown and interconnected to establish plumbing networks on the large scale and reservoir structure on the small scale. Small scale variations over time between seismic surveys within individual reservoirs are identified and used to identify drainage patterns and bypassed petroleum to be recovered. The location of such drainage patterns and bypassed petroleum may be used to site wells.
Scaling in cognitive performance reflects multiplicative multifractal cascade dynamics
Stephen, Damian G.; Anastas, Jason R.; Dixon, James A.
2012-01-01
Self-organized criticality purports to build multi-scaled structures out of local interactions. Evidence of scaling in various domains of biology may be more generally understood to reflect multiplicative interactions weaving together many disparate scales. The self-similarity of power-law scaling entails homogeneity: fluctuations distribute themselves similarly across many spatial and temporal scales. However, this apparent homogeneity can be misleading, especially as it spans more scales. Reducing biological processes to one power-law relationship neglects rich cascade dynamics. We review recent research into multifractality in executive-function cognitive tasks and propose that scaling reflects not criticality but instead interactions across multiple scales and among fluctuations of multiple sizes. PMID:22529819
Vittadello, Fabio; Mischo-Kelling, Maria; Wieser, Heike; Cavada, Luisa; Lochner, Lukas; Naletto, Carla; Fink, Verena; Reeves, Scott
2018-05-01
This article presents a study that aimed to validate a translation of a multiple-group measurement scale for interprofessional collaboration (IPC). We used survey data gathered over a three-month period as part of a mixed methods study that explored the nature of IPC in Northern Italy. Following a translation from English into Italian and German, the survey was distributed online to over 5,000 health professionals (dieticians, nurses, occupational therapists, physicians, physiotherapists, speech therapists and psychologists) based in one regional health trust. In total, 2,238 health professionals completed the survey. Based on the original scale, three principal components were extracted and confirmed as relevant factors for IPC (communication, accommodation and isolation). A confirmatory analysis (3-factor model) was applied to the data of physicians and nurses by language group. In conclusion, the validation of the German and Italian IPC scale has provided an instrument of acceptable reliability and validity for the assessment of IPC involving physicians and nurses.
Dynamical tuning for MPC using population games: A water supply network application.
Barreiro-Gomez, Julian; Ocampo-Martinez, Carlos; Quijano, Nicanor
2017-07-01
Model predictive control (MPC) is a suitable strategy for the control of large-scale systems that have multiple design requirements, e.g., multiple physical and operational constraints. Besides, an MPC controller is able to deal with multiple control objectives by considering them within the cost function, which requires determining a proper prioritization for each of the objectives. Furthermore, when the system has time-varying parameters and/or disturbances, the appropriate prioritization might vary over time as well. This situation leads to the need for a dynamical tuning methodology. This paper addresses the dynamical tuning issue by using evolutionary game theory. The advantages of the proposed method are highlighted and tested over a large-scale water supply network with periodic time-varying disturbances. Finally, results are analyzed with respect to a multi-objective MPC controller that uses static tuning.
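Population-games-based tuning of this sort can be caricatured with the replicator dynamics, in which each control objective's weight grows in proportion to how its "fitness" compares with the population average. The sketch below is a minimal illustration under that assumption; the fitness signals and initial weights are placeholders, not the paper's formulation.

import numpy as np

def replicator_step(weights, fitness, dt=0.1):
    """Update objective prioritization weights; they remain on the simplex."""
    avg = weights @ fitness
    weights = weights + dt * weights * (fitness - avg)
    return weights / weights.sum()

w = np.array([0.4, 0.3, 0.3])   # assumed priorities: e.g. economic, safety, smoothness
f = np.array([1.2, 0.8, 1.0])   # assumed time-varying performance signals
for _ in range(50):
    w = replicator_step(w, f)   # weights drift toward the better-performing objective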
NASA Astrophysics Data System (ADS)
Tian, Fei; Lu, Xinbian; Zheng, Songqing; Zhang, Hongfang; Rong, Yuanshuai; Yang, Debin; Liu, Naigui
2017-06-01
The Ordovician paleokarst reservoirs in the Tahe oilfield, with burial depths of over 5300 m, experienced multiple phases of geologic processes and exhibit strong heterogeneity. Core testing can be used to analyse the characteristics of typical points at the centimetre scale, and seismic datasets can reveal the macroscopic outlines of reservoirs at the >10-m scale. However, neither method can identify caves, cave fills and fractures at the meter scale. Guided by outcrop investigations and calibrations based on core sample observations, this paper describes the interpretation of high longitudinal resolution borehole images, the identification of the characteristics of caves, cave fills (sedimentary, breccia and chemical fills) and fractures in single wells, and the identification of structures and fill characteristics at the meter scale in the strongly heterogeneous paleokarst reservoirs. The paleogeomorphology, a major controlling factor in the distribution of paleokarst reservoirs, was also analysed. The results show that one well can penetrate multiple cave layers of various sizes and that the caves are filled with multiple types of fill. The paleogeomorphology can be divided into highlands, slopes and depressions, which controlled the structure and fill characteristics of the paleokarst reservoirs. The results of this study can provide fundamental meter-scale datasets for interpreting detailed geologic features of deeply buried paleocaves, can be used to connect core- and seismic-scale interpretations, and can provide support for the recognition and development of these strongly heterogeneous reservoirs.
Spatiotemporal analysis of land use and land cover change in the Brazilian Amazon
Li, Guiying; Moran, Emilio; Hetrick, Scott
2013-01-01
This paper provides a comparative analysis of land use and land cover (LULC) changes among three study areas with different biophysical environments in the Brazilian Amazon at multiple scales, from per-pixel, polygon, census sector, to study area. Landsat images acquired in the years of 1990/1991, 1999/2000, and 2008/2010 were used to examine LULC change trajectories with the post-classification comparison approach. A classification system composed of six classes – forest, savanna, other-vegetation (secondary succession and plantations), agro-pasture, impervious surface, and water, was designed for this study. A hierarchical-based classification method was used to classify Landsat images into thematic maps. This research shows different spatiotemporal change patterns, composition and rates among the three study areas and indicates the importance of analyzing LULC change at multiple scales. The LULC change analysis over time for entire study areas provides an overall picture of change trends, but detailed change trajectories and their spatial distributions can be better examined at a per-pixel scale. The LULC change at the polygon scale provides the information of the changes in patch sizes over time, while the LULC change at census sector scale gives new insights on how human-induced activities (e.g., urban expansion, roads, and land use history) affect LULC change patterns and rates. This research indicates the necessity to implement change detection at multiple scales for better understanding the mechanisms of LULC change patterns and rates. PMID:24127130
Nie, Yifan; Liang, Chaoping; Cha, Pil-Ryung; Colombo, Luigi; Wallace, Robert M; Cho, Kyeongjae
2017-06-07
Controlled growth of crystalline solids is critical for device applications, and atomistic modeling methods have been developed for bulk crystalline solids. The kinetic Monte Carlo (KMC) simulation method provides detailed atomic-scale processes during solid growth over realistic time scales, but its application to the growth modeling of van der Waals (vdW) heterostructures has not yet been developed. Specifically, the growth of single-layered transition metal dichalcogenides (TMDs) currently faces tremendous challenges, and a detailed understanding based on KMC simulations would provide critical guidance to enable controlled growth of vdW heterostructures. In this work, a KMC simulation method is developed for growth modeling of the vdW epitaxy of TMDs. The KMC method introduces the full set of material parameters for TMDs in bottom-up synthesis: metal and chalcogen adsorption/desorption/diffusion on the substrate and the grown TMD surface, TMD stacking sequence, chalcogen/metal ratio, flake edge diffusion and vacancy diffusion. The KMC processes result in multiple kinetic behaviors associated with various growth behaviors observed in experiments. Different phenomena observed during the vdW epitaxy process are analysed in terms of complex competitions among multiple kinetic processes. The KMC method is used in the investigation and prediction of growth mechanisms, which provide qualitative suggestions to guide experimental study.
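The core of any KMC scheme of this kind is rate-weighted event selection with an exponentially distributed time advance. A generic Gillespie-style sketch follows; the event rates are placeholders, and the TMD-specific event catalogue from the paper is not reproduced.

import numpy as np

rng = np.random.default_rng(3)

def kmc_step(rates):
    """Pick one event with probability proportional to its rate; advance the clock."""
    total = rates.sum()
    event = rng.choice(len(rates), p=rates / total)
    dt = rng.exponential(1.0 / total)   # realistic time scales emerge from the rates
    return event, dt

rates = np.array([1e3, 5e2, 1e1])       # e.g. adsorption, edge diffusion, desorption (assumed)
t = 0.0
for _ in range(1000):
    event, dt = kmc_step(rates)
    t += dt   # a full model would update the configuration according to 'event'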
BAYESIAN METHODS FOR REGIONAL-SCALE EUTROPHICATION MODELS. (R830887)
We demonstrate a Bayesian classification and regression tree (CART) approach to link multiple environmental stressors to biological responses and quantify uncertainty in model predictions. Such an approach can: (1) report prediction uncertainty, (2) be consistent with the amou...
Unfitted Two-Phase Flow Simulations in Pore-Geometries with Accurate
NASA Astrophysics Data System (ADS)
Heimann, Felix; Engwer, Christian; Ippisch, Olaf; Bastian, Peter
2013-04-01
The development of better macro-scale models for multi-phase flow in porous media is still impeded by the lack of suitable methods for the simulation of such flow regimes on the pore scale. The highly complicated geometry of natural porous media imposes requirements with regard to stability and computational efficiency which current numerical methods fail to meet. Therefore, current simulation environments are still unable to provide a thorough understanding of porous media in multi-phase regimes and still fail to reproduce well-known effects like hysteresis or the more peculiar dynamics of the capillary fringe with satisfying accuracy. Although flow simulations in pore geometries were initially the domain of Lattice-Boltzmann and other particle methods, the development of Galerkin methods for such applications is important as they complement the range of feasible flow and parameter regimes. In the recent past, it has been shown that unfitted Galerkin methods can be applied efficiently to topologically demanding geometries. However, in the context of two-phase flows, the interface of the two immiscible fluids effectively separates the domain into two sub-domains. The exact representation of such setups with multiple independent and time-dependent geometries exceeds the functionality of common unfitted methods. We present a new approach to pore-scale simulations with an unfitted discontinuous Galerkin (UDG) method. Utilizing a recursive sub-triangulation algorithm, we extend the UDG method to setups with multiple independent geometries. This approach allows an accurate representation of the moving contact line and the interface conditions, i.e. the pressure jump across the interface. Example simulations in two and three dimensions illustrate and verify the stability and accuracy of this approach.
NASA Astrophysics Data System (ADS)
Guo, Tian; Xu, Zili
2018-03-01
Measurement noise is inevitable in practice; thus, it is difficult to identify defects, cracks or damage in a structure while simultaneously suppressing noise. In this work, a novel method is introduced to detect multiple damage sites in noisy environments. Based on multi-scale space analysis for discrete signals, a method for extracting damage characteristics from the measured displacement mode shape is illustrated. Moreover, the proposed method incorporates a data fusion algorithm to further eliminate measurement-noise-based interference. The effectiveness of the method is verified numerically and experimentally on different structural types. The results demonstrate two advantages of the proposed method. First, damage features are extracted from the difference of the multi-scale representations, so that interference from noise amplification is avoided. Second, the data fusion technique applied within the proposed method provides a global decision, which retains the damage features while maximally eliminating the uncertainty. Monte Carlo simulations validate that the proposed method has a higher accuracy in damage detection.
Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods
Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.
2011-01-01
Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
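The basin water-balance residual referred to here is, in terms of precipitation P, discharge Q and storage change \Delta S:

\[ ET = P - Q - \Delta S \;\approx\; P - Q \quad (\Delta S \approx 0 \ \text{on annual or longer time scales}) \]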
Scale-adaptive compressive tracking with feature integration
NASA Astrophysics Data System (ADS)
Liu, Wei; Li, Jicheng; Chen, Xiao; Li, Shuxin
2016-05-01
Numerous tracking-by-detection methods have been proposed for robust visual tracking, among which compressive tracking (CT) has obtained some promising results. A scale-adaptive CT method based on multi-feature integration is presented to improve the robustness and accuracy of CT. We introduce a keypoint-based model to achieve accurate scale estimation, which can additionally give a prior location of the target. Furthermore, owing to the high efficiency of the data-independent random projection matrix, multiple features are integrated into an effective appearance model to construct the naïve Bayes classifier. Finally, an adaptive update scheme is proposed to update the classifier conservatively. Experiments on various challenging sequences demonstrate substantial improvements by our proposed tracker over CT and other state-of-the-art trackers in terms of dealing with scale variation, abrupt motion, deformation, and illumination changes.
Fabbris, G.; Hücker, M.; Gu, G. D.; ...
2016-07-14
Some of the most exotic material properties derive from electronic states with short correlation length (~10-500 Å), suggesting that the local structural symmetry may play a relevant role in their behavior. In this study, we discuss the combined use of polarized x-ray absorption fine structure and x-ray diffraction at high pressure as a powerful method to tune and probe structural and electronic orders at multiple length scales. Besides addressing some of the technical challenges associated with such experiments, we illustrate this approach with results obtained in the cuprate La1.875Ba0.125CuO4, in which the response of electronic order to pressure can only be understood by probing the structure at the relevant length scales.
Apportioning Sources of Riverine Nitrogen at Multiple Watershed Scales
NASA Astrophysics Data System (ADS)
Boyer, E. W.; Alexander, R. B.; Sebestyen, S. D.
2005-05-01
Loadings of reactive nitrogen (N) entering terrestrial landscapes have increased in recent decades due to anthropogenic activities associated with food and energy production. In the northeastern USA, this enhanced supply of N has been linked to many environmental concerns in both terrestrial and aquatic ecosystems, such as forest decline, lake and stream acidification, human respiratory problems, and coastal eutrophication. Thus N is a priority pollutant with regard to a whole host of air, land, and water quality issues, highlighting the need for methods to identify and quantify various N sources. Further, understanding precursor sources of N is critical to current and proposed public policies targeted at the reduction of N inputs to the terrestrial landscape and receiving waters. We present results from published and ongoing studies using multiple approaches to fingerprint sources of N in the northeastern USA, at watershed scales ranging from the headwaters to the coastal zone. The approaches include: 1) a mass balance model with a nitrogen-budgeting approach for analyses of large watersheds; 2) a spatially-referenced regression model with an empirical modeling approach for analyses of water quality at regional scales; and 3) a meta-analysis of monitoring data with a chemical tracer approach, utilizing concentrations of multiple elements and isotopic composition of N from water samples collected in the streams and rivers. We discuss the successes and limitations of these various approaches for apportioning contributions of N from multiple sources to receiving waters at regional scales.
Resolving occlusion and segmentation errors in multiple video object tracking
NASA Astrophysics Data System (ADS)
Cheng, Hsu-Yung; Hwang, Jenq-Neng
2009-02-01
In this work, we propose a method to integrate the Kalman filter and adaptive particle sampling for multiple video object tracking. The proposed framework is able to detect occlusion and segmentation error cases and perform adaptive particle sampling for accurate measurement selection. Compared with traditional particle filter based tracking methods, the proposed method generates particles only when necessary. With the concept of adaptive particle sampling, we can avoid the degeneracy problem because the sampling position and range are dynamically determined by parameters that are updated by Kalman filters. There is no need to spend time on processing particles with very small weights. The adaptive appearance for the occluded object refers to the prediction results of the Kalman filters to determine the region that should be updated, avoiding the problem of using inadequate information to update the appearance in occlusion cases. The experimental results have shown that a small number of particles is sufficient to achieve high positioning and scaling accuracy. Also, the employment of adaptive appearance substantially improves the positioning and scaling accuracy of the tracking results.
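A minimal sketch of the Kalman side of such a scheme (not the authors' code): a constant-velocity filter whose prediction supplies the particle-sampling position and whose covariance supplies the sampling range, so particles are generated only when and where needed. All matrices and noise levels are illustrative.

```python
import numpy as np

class CVKalman:
    """Constant-velocity Kalman filter; state = [x, y, vx, vy]."""
    def __init__(self):
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)  # observe position
        self.Q, self.R = np.eye(4) * 0.01, np.eye(2) * 1.0
        self.x, self.P = np.zeros(4), np.eye(4) * 10.0

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Particle sampling position and range follow from the prediction:
        return self.x[:2], np.sqrt(np.diag(self.P)[:2])

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

kf = CVKalman()
center, spread = kf.predict()     # particles drawn around `center` within `spread`
kf.update([1.2, 0.8])             # measurement from a well-segmented frame
print("sampling center:", center, "sampling range:", spread)
```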
Küçük, Fadime; Kara, Bilge; Poyraz, Esra Çoşkuner; İdiman, Egemen
2016-01-01
[Purpose] The aim of this study was to determine the effects of clinical Pilates in multiple sclerosis patients. [Subjects and Methods] Twenty multiple sclerosis patients were enrolled in this study. The participants were divided into two groups as the clinical Pilates and control groups. Cognition (Multiple Sclerosis Functional Composite), balance (Berg Balance Scale), physical performance (timed performance tests, Timed up and go test), tiredness (Modified Fatigue Impact scale), depression (Beck Depression Inventory), and quality of life (Multiple Sclerosis International Quality of Life Questionnaire) were measured before and after treatment in all participants. [Results] There were statistically significant differences in balance, timed performance, tiredness and Multiple Sclerosis Functional Composite tests between before and after treatment in the clinical Pilates group. We also found significant differences in timed performance tests, the Timed up and go test and the Multiple Sclerosis Functional Composite between before and after treatment in the control group. According to the difference analyses, there were significant differences in Multiple Sclerosis Functional Composite and Multiple Sclerosis International Quality of Life Questionnaire scores between the two groups in favor of the clinical Pilates group. There were statistically significant clinical differences in favor of the clinical Pilates group in comparison of measurements between the groups. Clinical Pilates improved cognitive functions and quality of life compared with traditional exercise. [Conclusion] In Multiple Sclerosis treatment, clinical Pilates should be used as a holistic approach by physical therapists. PMID:27134355
False Discovery Control in Large-Scale Spatial Multiple Testing
Sun, Wenguang; Reich, Brian J.; Cai, T. Tony; Guindani, Michele; Schwartzman, Armin
2014-01-01
This article develops a unified theoretical and computational framework for false discovery control in multiple testing of spatial signals. We consider both point-wise and cluster-wise spatial analyses, and derive oracle procedures which optimally control the false discovery rate, false discovery exceedance and false cluster rate, respectively. A data-driven finite approximation strategy is developed to mimic the oracle procedures on a continuous spatial domain. Our multiple testing procedures are asymptotically valid and can be effectively implemented using Bayesian computational algorithms for analysis of large spatial data sets. Numerical results show that the proposed procedures lead to more accurate error control and better power performance than conventional methods. We demonstrate our methods by analyzing the time trends in tropospheric ozone in the eastern US. PMID:25642138
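For orientation, the conventional procedure that such spatial methods are benchmarked against is the Benjamini-Hochberg step-up rule; the sketch below implements it on simulated p-values (this is the standard baseline, not the paper's oracle procedure).

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Return a boolean mask of rejected hypotheses at FDR level alpha."""
    p = np.asarray(pvals)
    m = p.size
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m   # step-up thresholds alpha*k/m
    passed = p[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, bool)
    reject[order[:k]] = True                   # reject the k smallest p-values
    return reject

rng = np.random.default_rng(1)
p = np.concatenate([rng.uniform(size=900),            # true nulls
                    rng.beta(0.1, 5.0, size=100)])    # signals (small p-values)
print("rejections:", benjamini_hochberg(p).sum())
```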
The fast multipole method and point dipole moment polarizable force fields.
Coles, Jonathan P; Masella, Michel
2015-01-14
We present an implementation of the fast multipole method for computing Coulombic electrostatic and polarization forces from polarizable force fields based on induced point dipole moments. We demonstrate the expected O(N) scaling of that approach by performing single-point energy calculations on hexamer protein subunits of the mature HIV-1 capsid. We also show the long-time energy conservation in molecular dynamics at the nanosecond scale by performing simulations of a protein complex embedded in a coarse-grained solvent using a standard integrator and a multiple time step integrator. Our tests show the applicability of the fast multipole method combined with state-of-the-art chemical models in molecular dynamics systems.
Coeli M. Hoover; Mark J. Ducey; R. Andy Colter; Mariko Yamasaki
2018-01-01
There is growing interest in estimating and mapping biomass and carbon content of forests across large landscapes. LiDAR-based inventory methods are increasingly common and have been successfully implemented in multiple forest types. Asner et al. (2011) developed a simple universal forest carbon estimation method for tropical forests that reduces the amount of required...
ERIC Educational Resources Information Center
Wilson, Kristy J.; Rigakos, Bessie
2016-01-01
The scientific process is nonlinear, unpredictable, and ongoing. Assessing the nature of science is difficult with methods that rely on Likert-scale or multiple-choice questions. This study evaluated conceptions about the scientific process using student-created visual representations that we term "flowcharts." The methodology,…
Ecosystem evapotranspiration: Challenges in measurements, estimates, and modeling
USDA-ARS?s Scientific Manuscript database
Evapotranspiration (ET) processes at the leaf-to-landscape scales in multiple land uses have important controls and feedbacks for the local, regional and global climate and water resource systems. Innovative methods, tools, and technologies for improved understanding and quantification of ET and cro...
General consequences of the violated Feynman scaling
NASA Technical Reports Server (NTRS)
Kamberov, G.; Popova, L.
1985-01-01
The problem of scaling of hadronic production cross sections represents an outstanding question in high-energy physics, especially for the interpretation of cosmic-ray data. A comprehensive analysis of the accelerator data leads to the conclusion that Feynman scaling is broken. It was proposed that the Lorentz-invariant inclusive cross sections for secondaries of a given type approach a constant with respect to a broken-scaling variable x_s. Thus, the differential cross sections measured at accelerator energies can be extrapolated to higher cosmic-ray energies. This assumption leads to some important consequences. The distribution of secondary multiplicity that follows from the violated Feynman scaling, derived using a method similar to that of Koba et al., is discussed.
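For context, the Koba-Nielsen-Olesen (KNO) scaling relation that such multiplicity analyses build on can be written as below; this is the standard textbook form, not an equation taken from the report.

```latex
% KNO scaling: multiplicity distributions at different energies collapse
% onto one universal function \psi of the scaled multiplicity z = n/<n>:
\[
  \langle n \rangle \, P_n \;=\; \psi\!\left( \frac{n}{\langle n \rangle} \right)
\]
% A violated-scaling analysis modifies the scaling variable, in the spirit
% of the broken Feynman variable x_s discussed above.
```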
Intercomparison of Multiscale Modeling Approaches in Simulating Subsurface Flow and Transport
NASA Astrophysics Data System (ADS)
Yang, X.; Mehmani, Y.; Barajas-Solano, D. A.; Song, H. S.; Balhoff, M.; Tartakovsky, A. M.; Scheibe, T. D.
2016-12-01
Hybrid multiscale simulations that couple models across scales are critical to advance predictions of the larger system behavior using understanding of fundamental processes. In the current study, three hybrid multiscale methods are intercompared: the multiscale loose-coupling method, the multiscale finite volume (MsFV) method, and the multiscale mortar method. The loose-coupling method enables a parallel workflow structure based on the Swift scripting environment that manages the complex process of executing coupled micro- and macro-scale models without being intrusive to the at-scale simulators. The MsFV method applies microscale and macroscale models over overlapping subdomains of the modeling domain and enforces continuity of concentration and transport fluxes between models via restriction and prolongation operators. The mortar method is a non-overlapping domain decomposition approach capable of coupling all permutations of pore- and continuum-scale models with each other. In doing so, Lagrange multipliers are used at interfaces shared between the subdomains so as to establish continuity of species/fluid mass flux. Subdomain computations can be performed either concurrently or non-concurrently depending on the algorithm used. All the above methods have been proven to be accurate and efficient in studying flow and transport in porous media. However, there have been no field-scale applications of, or benchmarking among, the various hybrid multiscale approaches. To address this challenge, we apply all three hybrid multiscale methods to simulate water flow and transport in a conceptualized 2D modeling domain of the hyporheic zone, where strong interactions between groundwater and surface water exist across multiple scales. In all three multiscale methods, fine-scale simulations are applied to a thin layer of riverbed alluvial sediments while the macroscopic simulations are used for the larger subsurface aquifer domain. Different numerical coupling methods are then applied between scales and inter-compared. Comparisons are drawn in terms of velocity distributions, solute transport behavior, algorithm-induced numerical error and computing cost. The intercomparison work provides support for confidence in a variety of hybrid multiscale methods and motivates further development and applications.
Viewpoint: observations on scaled average bioequivalence.
Patterson, Scott D; Jones, Byron
2012-01-01
The two one-sided test procedure (TOST) has been used for average bioequivalence testing since 1992 and is required when marketing new formulations of an approved drug. TOST is known to require comparatively large numbers of subjects to demonstrate bioequivalence for highly variable drugs, defined as those drugs having intra-subject coefficients of variation greater than 30%. However, TOST has been shown to protect public health when multiple generic formulations enter the marketplace following patent expiration. Recently, scaled average bioequivalence (SABE) has been proposed as an alternative statistical analysis procedure for such products by multiple regulatory agencies. SABE testing requires that a three-period partial replicate cross-over or full replicate cross-over design be used. Following a brief summary of SABE analysis methods applied to existing data, we will consider three statistical ramifications of the proposed additional decision rules and the potential impact of implementation of scaled average bioequivalence in the marketplace using simulation. It is found that a constraint being applied is biased, that bias may also result from the common problem of missing data and that the SABE methods allow for much greater changes in exposure when generic-generic switching occurs in the marketplace. Copyright © 2011 John Wiley & Sons, Ltd.
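A minimal sketch of the SABE decision limits (illustrative, not a regulatory implementation): the equivalence bounds widen with the reference formulation's within-subject variability once the 30% CV switch is crossed. The constant 0.760 follows the EMA rule as we understand it; agencies differ, and real rules also cap the widening, which is omitted here.

```python
import numpy as np

def sabe_limits(sigma_wr, k=0.760, cv_switch=0.30):
    """Return (lower, upper) bioequivalence limits on the log scale."""
    sigma_switch = np.sqrt(np.log(1.0 + cv_switch ** 2))
    if sigma_wr <= sigma_switch:     # not highly variable: fixed 80-125% limits
        bound = np.log(1.25)
    else:                            # highly variable: limits scale with sigma_WR
        bound = k * sigma_wr         # real rules cap this widening (omitted)
    return -bound, bound

for cv in (0.20, 0.30, 0.50):
    sigma = np.sqrt(np.log(1.0 + cv ** 2))   # intra-subject CV -> log-scale SD
    lo, hi = sabe_limits(sigma)
    print(f"CV = {cv:.0%}: limits {np.exp(lo):.3f} - {np.exp(hi):.3f}")
```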
Meteorological Contribution to Variability in Particulate Matter Concentrations
NASA Astrophysics Data System (ADS)
Woods, H. L.; Spak, S. N.; Holloway, T.
2006-12-01
Local concentrations of fine particulate matter (PM) are driven by a number of processes, including emissions of aerosols and gaseous precursors, atmospheric chemistry, and meteorology at local, regional, and global scales. We apply statistical downscaling methods, typically used for regional climate analysis, to estimate the contribution of regional-scale meteorology to PM mass concentration variability at a range of sites in the Upper Midwestern U.S. Multiple years of daily PM10 and PM2.5 data, reported by the U.S. Environmental Protection Agency (EPA), are correlated with large-scale meteorology over the region from the National Centers for Environmental Prediction (NCEP) reanalysis data. We use two statistical downscaling methods (multiple linear regression, MLR, and analog) to identify which processes have the greatest impact on aerosol concentration variability. Empirical Orthogonal Functions of the NCEP meteorological data are correlated with PM time series at measurement sites. We examine which meteorological variables exert the greatest influence on PM variability, and which sites exhibit the greatest response to regional meteorology. To evaluate model performance, measurement data are withheld for limited periods and compared with model results. Preliminary results suggest that regional meteorological processes account for over 50% of aerosol concentration variability at study sites.
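A minimal sketch of the MLR downscaling step under stated assumptions: leading EOFs (principal components) of a gridded meteorological anomaly field serve as regression predictors for a site's PM time series. All data here are simulated; in the study the predictors come from NCEP reanalysis and the response from EPA monitors.

```python
import numpy as np

rng = np.random.default_rng(0)
days, gridpts = 730, 400
met = rng.normal(size=(days, gridpts))            # gridded meteorology anomalies

# EOFs via SVD of the centered anomaly matrix; keep the leading 5 PCs.
met -= met.mean(axis=0)
U, s, Vt = np.linalg.svd(met, full_matrices=False)
pcs = U[:, :5] * s[:5]

pm = 0.8 * pcs[:, 0] - 0.5 * pcs[:, 2] + rng.normal(0, 1.0, days)  # toy PM series

X = np.column_stack([np.ones(days), pcs])         # MLR with intercept
beta, *_ = np.linalg.lstsq(X, pm, rcond=None)
fitted = X @ beta
r2 = 1 - np.var(pm - fitted) / np.var(pm)
print(f"variance of PM explained by regional meteorology: {r2:.0%}")
```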
Aggregation of carbon dioxide sequestration storage assessment units
Blondes, Madalyn S.; Schuenemeyer, John H.; Olea, Ricardo A.; Drew, Lawrence J.
2013-01-01
The U.S. Geological Survey is currently conducting a national assessment of carbon dioxide (CO2) storage resources, mandated by the Energy Independence and Security Act of 2007. Pre-emission capture and storage of CO2 in subsurface saline formations is one potential method to reduce greenhouse gas emissions and the negative impact of global climate change. Like many large-scale resource assessments, the area under investigation is split into smaller, more manageable storage assessment units (SAUs), which must be aggregated with correctly propagated uncertainty to the basin, regional, and national scales. The aggregation methodology requires two types of data: marginal probability distributions of storage resource for each SAU, and a correlation matrix obtained by expert elicitation describing interdependencies between pairs of SAUs. Dependencies arise because geologic analogs, assessment methods, and assessors often overlap. The correlation matrix is used to induce rank correlation, using a Cholesky decomposition, among the empirical marginal distributions representing individually assessed SAUs. This manuscript presents a probabilistic aggregation method tailored to the correlations and dependencies inherent to a CO2 storage assessment. Aggregation results must be presented at the basin, regional, and national scales. A single stage approach, in which one large correlation matrix is defined and subsets are used for different scales, is compared to a multiple stage approach, in which new correlation matrices are created to aggregate intermediate results. Although the single-stage approach requires determination of significantly more correlation coefficients, it captures geologic dependencies among similar units in different basins and it is less sensitive to fluctuations in low correlation coefficients than the multiple stage approach. Thus, subsets of one single-stage correlation matrix are used to aggregate to basin, regional, and national scales.
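A minimal sketch of the aggregation mechanics (not the USGS code): a Cholesky factor of the expert-elicited correlation matrix produces correlated normal scores, which rank-reorder independent draws from each SAU's marginal storage distribution before summing to the basin scale. The marginals and correlations below are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
marginals = [rng.lognormal(3.0, 0.5, n),          # SAU 1 storage draws
             rng.lognormal(2.5, 0.8, n),          # SAU 2
             rng.lognormal(3.2, 0.4, n)]          # SAU 3

corr = np.array([[1.0, 0.6, 0.3],
                 [0.6, 1.0, 0.5],
                 [0.3, 0.5, 1.0]])                # expert-elicited dependencies
L = np.linalg.cholesky(corr)
z = rng.normal(size=(n, 3)) @ L.T                 # correlated normal scores

aggregated = np.zeros(n)
for j, draws in enumerate(marginals):
    ranks = np.argsort(np.argsort(z[:, j]))       # rank of each normal score
    aggregated += np.sort(draws)[ranks]           # reorder draws to match ranks
print(f"basin-scale P50: {np.percentile(aggregated, 50):.1f}")
```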
Multi-objects recognition for distributed intelligent sensor networks
NASA Astrophysics Data System (ADS)
He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.
2008-04-01
This paper proposes an innovative approach for multi-object recognition in homeland security and defense based intelligent sensor networks. Unlike the conventional way of information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat flights, and other resources, it is critical to develop intelligent data mining approaches to fuse different information resources to understand dynamic environments, to support decision making processes, and finally to achieve the goals. This paper aims to address these issues with a focus on multi-object recognition. Instead of classifying a single object as in traditional image classification problems, the proposed method can automatically learn multiple objectives simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects will come with different feature sizes, we propose a feature scaling method to represent each object in the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and such knowledge will be adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.
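A minimal sketch of the feature-scaling step described above, under the assumption that each segmented object yields a 1-D feature vector whose length depends on object size; linear resampling maps every object to a common dimension before SVM training. The helper name and sizes are invented.

```python
import numpy as np

def rescale_features(vec, target_dim=64):
    """Linearly resample a 1-D feature vector to target_dim entries."""
    vec = np.asarray(vec, float)
    src = np.linspace(0.0, 1.0, vec.size)     # original sample positions
    dst = np.linspace(0.0, 1.0, target_dim)   # common sample positions
    return np.interp(dst, src, vec)

soldier = np.random.rand(37)       # features from a small segmented region
tank = np.random.rand(412)         # features from a large segmented region
X = np.vstack([rescale_features(soldier), rescale_features(tank)])
print("uniform feature matrix:", X.shape)     # (2, 64) -> ready for the SVM
```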
Tanumihardjo, Sherry A; Mokhtar, Najat; Haskell, Marjorie J; Brown, Kenneth H
2016-06-01
Vitamin A (VA) deficiency (VAD) is still a concern in many parts of the world, and multiple intervention strategies are being implemented to reduce the prevalence of VAD and associated morbidity and mortality. Because some individuals within a population may be exposed to multiple VA interventions, concerns have been raised about the possible risk of hypervitaminosis A. A consultative meeting was held in Vienna, Austria, in March 2014 to (1) review current knowledge concerning the safety and effectiveness of large-scale programs to control VAD, (2) develop a related research agenda, and (3) review current available methods to assess VA status and risk of hypervitaminosis A. Multiple countries were represented and shared their experiences using a variety of assessment methods, including retinol isotope dilution (RID) techniques. Discussion included next steps to refine assessment methodology, investigate RID limitations under different conditions, and review programmatic approaches to ensure VA adequacy and avoid excessive intakes. Fortification programs have resulted in adequate VA status in Guatemala, Zambia, and parts of Cameroon. Dietary patterns in several countries revealed that some people may consume excessive preformed VA from fortified foods. Additional studies are needed to compare biomarkers of tissue damage to RID methods during hypervitaminosis A and to determine what other biomarkers can be used to assess excessive preformed VA intake. © The Author(s) 2016.
Multiple well-shutdown tests and site-scale flow simulation in fractured rocks
Tiedeman, Claire; Lacombe, Pierre J.; Goode, Daniel J.
2010-01-01
A new method was developed for conducting aquifer tests in fractured-rock flow systems that have a pump-and-treat (P&T) operation for containing and removing groundwater contaminants. The method involves temporary shutdown of individual pumps in wells of the P&T system. Conducting aquifer tests in this manner has several advantages, including (1) no additional contaminated water is withdrawn, and (2) hydraulic containment of contaminants remains largely intact because pumping continues at most wells. The well-shutdown test method was applied at the former Naval Air Warfare Center (NAWC), West Trenton, New Jersey, where a P&T operation is designed to contain and remove trichloroethene and its daughter products in the dipping fractured sedimentary rocks underlying the site. The detailed site-scale subsurface geologic stratigraphy, a three-dimensional MODFLOW model, and inverse methods in UCODE_2005 were used to analyze the shutdown tests. In the model, a deterministic method was used for representing the highly heterogeneous hydraulic conductivity distribution and simulations were conducted using an equivalent porous media method. This approach was very successful for simulating the shutdown tests, contrary to a common perception that flow in fractured rocks must be simulated using a stochastic or discrete fracture representation of heterogeneity. Use of inverse methods to simultaneously calibrate the model to the multiple shutdown tests was integral to the effectiveness of the approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Li; He, Ya-Ling; Kang, Qinjun
2013-12-15
A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of multi-scale problems is divided into two sub-domains, i.e., an open, free fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macro scalar, the governing equation of which obeys the convection–diffusion equation. The CFVLBM and the RO are validated in several typical physicochemical problems and then are applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. -- Highlights: • A coupled simulation strategy for simulating multi-scale phenomena is developed. • Finite volume method and lattice Boltzmann method are coupled. • A reconstruction operator is derived to transfer information at the sub-domain interface. • Coupled multi-scale multiple physicochemical processes in a micro reactor are simulated. • Techniques to save computational resources and improve the efficiency are discussed.
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.; Mukhametzhanov, Marat S.
2018-06-01
The necessity to find the global optimum of multiextremal functions arises in many applied problems where finding local solutions is insufficient. One of the desirable properties of global optimization methods is strong homogeneity, meaning that a method produces the same sequences of points where the objective function is evaluated independently both of multiplication of the function by a scaling constant and of adding a shifting constant. In this paper, several aspects of global optimization using strongly homogeneous methods are considered. First, it is shown that even if a method possesses this property theoretically, numerically very small and large scaling constants can lead to ill-conditioning of the scaled problem. Second, a new class of global optimization problems where the objective function can have not only finite but also infinite or infinitesimal Lipschitz constants is introduced. Third, the strong homogeneity of several Lipschitz global optimization algorithms is studied in the framework of the Infinity Computing paradigm, allowing one to work numerically with a variety of infinities and infinitesimals. Fourth, it is proved that a class of efficient univariate methods enjoys this property for finite, infinite and infinitesimal scaling and shifting constants. Finally, it is shown that in certain cases the usage of numerical infinities and infinitesimals can avoid ill-conditioning produced by scaling. Numerical experiments illustrating the theoretical results are described.
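As a toy illustration of strong homogeneity (not one of the Lipschitz algorithms analysed in the paper): a comparison-only trisection method evaluates the objective at the same points for f and for a·f + b, which the harness below checks empirically. The test function and constants are invented.

```python
import numpy as np

def trisect_minimize(f, lo, hi, iters=20):
    """Keep the third of the interval whose midpoint value is smallest."""
    points = []
    for _ in range(iters):
        mids = lo + (hi - lo) * np.array([1/6, 3/6, 5/6])
        vals = [f(m) for m in mids]
        points.extend(mids.tolist())
        k = int(np.argmin(vals))          # comparisons only: scale-invariant
        lo, hi = lo + (hi - lo) * k / 3, lo + (hi - lo) * (k + 1) / 3
    return points

f = lambda x: np.sin(x) + 0.3 * np.sin(3 * x)
p1 = trisect_minimize(f, 0.0, 7.0)
p2 = trisect_minimize(lambda x: 1e9 * f(x) - 42.0, 0.0, 7.0)  # scaled + shifted
print("identical evaluation sequences:", np.allclose(p1, p2))
```

Methods whose decisions depend only on comparisons of function values are trivially strongly homogeneous; the subtlety the paper addresses arises for methods that use the values themselves, e.g. in Lipschitz-constant estimates.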
NASA Astrophysics Data System (ADS)
Wagner, Jenny; Liesenborgs, Jori; Tessore, Nicolas
2018-04-01
Context. Local gravitational lensing properties, such as convergence and shear, determined at the positions of multiply imaged background objects, yield valuable information on the smaller-scale lensing matter distribution in the central part of galaxy clusters. Highly distorted multiple images with resolved brightness features like the ones observed in CL0024 allow us to study these local lensing properties and to tighten the constraints on the properties of dark matter on sub-cluster scale. Aim. We investigate to what precision local magnification ratios, J, ratios of convergences, f, and reduced shears, g = (g1, g2), can be determined independently of a lens model for the five resolved multiple images of the source at zs = 1.675 in CL0024. We also determine if a comparison to the respective results obtained by the parametric modelling tool Lenstool and by the non-parametric modelling tool Grale can detect biases in the models. For these lens models, we analyse the influence of the number and location of the constraints from multiple images on the lens properties at the positions of the five multiple images of the source at zs = 1.675. Methods: Our model-independent approach uses a linear mapping between the five resolved multiple images to determine the magnification ratios, ratios of convergences, and reduced shears at their positions. With constraints from up to six multiple image systems, we generate Lenstool and Grale models using the same image positions, cosmological parameters, and number of generated convergence and shear maps to determine the local values of J, f, and g at the same positions across all methods. Results: All approaches show strong agreement on the local values of J, f, and g. We find that Lenstool obtains the tightest confidence bounds even for convergences around one using constraints from six multiple-image systems, while the best Grale model is generated only using constraints from all multiple images with resolved brightness features and adding limited small-scale mass corrections. Yet, confidence bounds as large as the values themselves can occur for convergences close to one in all approaches. Conclusions: Our results agree with previous findings, support the light-traces-mass assumption, and the merger hypothesis for CL0024. Comparing the different approaches can detect model biases. The model-independent approach determines the local lens properties to a comparable precision in less than one second.
Multiple testing corrections in quantitative proteomics: A useful but blunt tool.
Pascovici, Dana; Handler, David C L; Wu, Jemma X; Haynes, Paul A
2016-09-01
Multiple testing corrections are a useful tool for restricting the FDR, but can be blunt in the context of low power, as we demonstrate by a series of simple simulations. Unfortunately, low power can be common in proteomics experiments, driven by proteomics-specific issues like small effects due to ratio compression, and few replicates due to high reagent cost, limited instrument time availability and other issues; in such situations, most multiple testing correction methods, if used with conventional thresholds, will fail to detect any true positives even when many exist. In this low-power, medium-scale situation, other methods such as effect size considerations or peptide-level calculations may be a more effective option, even if they do not offer the same theoretical guarantee of a low FDR. Thus, we aim to highlight in this article that proteomics presents some specific challenges to the standard multiple testing correction methods, which should be employed as a useful tool but not be regarded as a required rubber stamp. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
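A small simulation in the spirit of the article's demonstrations (all numbers invented): with three replicates and modest, ratio-compressed effects, a Bonferroni-corrected threshold finds essentially none of the genuinely changing proteins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
m, m_true, n = 2000, 200, 3                 # proteins, true changes, replicates
effect = 0.5                                # log2 fold change (ratio-compressed)

pvals = np.empty(m)
for i in range(m):
    shift = effect if i < m_true else 0.0
    a = rng.normal(0.0, 0.5, n)             # control replicates
    b = rng.normal(shift, 0.5, n)           # treatment replicates
    pvals[i] = stats.ttest_ind(a, b).pvalue

detected = (pvals[:m_true] < 0.05 / m).sum()
print(f"true positives found at Bonferroni 0.05: {detected} of {m_true}")
```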
Clarke, John R; Ragone, Andrew V; Greenwald, Lloyd
2005-09-01
We conducted a comparison of methods for predicting survival using survival risk ratios (SRRs), including new comparisons based on International Classification of Diseases, Ninth Revision (ICD-9) versus Abbreviated Injury Scale (AIS) six-digit codes. From the Pennsylvania trauma center's registry, all direct trauma admissions were collected through June 22, 1999. Patients with no comorbid medical diagnoses and both ICD-9 and AIS injury codes were used for comparisons based on a single set of data. SRRs for ICD-9 and then for AIS diagnostic codes were each calculated two ways: from the survival rate of patients with each diagnosis and when each diagnosis was an isolated diagnosis. Probabilities of survival for the cohort were calculated using each set of SRRs by the multiplicative ICISS method and, where appropriate, the minimum SRR method. These prediction sets were then internally validated against actual survival by the Hosmer-Lemeshow goodness-of-fit statistic. The 41,364 patients had 1,224 different ICD-9 injury diagnoses in 32,261 combinations and 1,263 corresponding AIS injury diagnoses in 31,755 combinations, ranging from 1 to 27 injuries per patient. All conventional ICD-9-based combinations of SRRs and methods had better Hosmer-Lemeshow goodness-of-fit statistic fits than their AIS-based counterparts. The minimum SRR method produced better calibration than the multiplicative methods, presumably because it did not magnify inaccuracies in the SRRs that might occur with multiplication. Predictions of survival based on anatomic injury alone can be performed using ICD-9 codes, with no advantage from extra coding of AIS diagnoses. Predictions based on the single worst SRR were closer to actual outcomes than those based on multiplying SRRs.
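A minimal sketch of the two prediction rules being compared, using invented SRR values for a patient carrying three hypothetical ICD-9 injury codes.

```python
# Hypothetical ICD-9 codes and survival risk ratios, for illustration only.
srrs = {"805.4": 0.97, "860.0": 0.90, "862.8": 0.75}

# Multiplicative ICISS: product of the SRRs of all recorded injuries.
p_multiplicative = 1.0
for srr in srrs.values():
    p_multiplicative *= srr

# Minimum-SRR method: survival predicted from the single worst injury only,
# which avoids magnifying inaccuracies through repeated multiplication.
p_minimum = min(srrs.values())

print(f"ICISS (multiplicative): {p_multiplicative:.3f}")
print(f"worst-injury (minimum SRR): {p_minimum:.3f}")
```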
ERIC Educational Resources Information Center
Liu, Jinghua; Guo, Hongwen; Dorans, Neil J.
2014-01-01
Maintaining score interchangeability and scale consistency is crucial for any testing programs that administer multiple forms across years. The use of a multiple linking design, which involves equating a new form to multiple old forms and averaging the conversions, has been proposed to control scale drift. However, the use of multiple linking…
On Efficient Multigrid Methods for Materials Processing Flows with Small Particles
NASA Technical Reports Server (NTRS)
Thomas, James (Technical Monitor); Diskin, Boris; Harik, VasylMichael
2004-01-01
Multiscale modeling of materials requires simulations of multiple levels of structural hierarchy. The computational efficiency of numerical methods becomes a critical factor for simulating large physical systems with highly disparate length scales. Multigrid methods are known for their superior efficiency in representing/resolving different levels of physical details. The efficiency is achieved by interactively employing different discretizations on different scales (grids). To assist optimization of manufacturing conditions for materials processing with numerous particles (e.g., dispersion of particles, controlling flow viscosity and clusters), a new multigrid algorithm has been developed for a case of multiscale modeling of flows with small particles that have various length scales. The optimal efficiency of the algorithm is crucial for accurate predictions of the effect of processing conditions (e.g., pressure and velocity gradients) on the local flow fields that control the formation of various microstructures or clusters.
Multiscale Structure of UXO Site Characterization: Spatial Estimation and Uncertainty Quantification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ostrouchov, George; Doll, William E.; Beard, Les P.
2009-01-01
Unexploded ordnance (UXO) site characterization must consider both how the contamination is generated and how we observe that contamination. Within the generation and observation processes, dependence structures can be exploited at multiple scales. We describe a conceptual site characterization process, the dependence structures available at several scales, and consider their statistical estimation aspects. It is evident that most of the statistical methods that are needed to address the estimation problems are known, but their application-specific implementation may not be available. We demonstrate estimation at one scale and propose a representation for site contamination intensity that takes full account of uncertainty, is flexible enough to answer regulatory requirements, and is a practical tool for managing detailed spatial site characterization and remediation. The representation is based on point process spatial estimation methods that require modern computational resources for practical application. These methods have provisions for including prior and covariate information.
Saravanan, Chandra; Shao, Yihan; Baer, Roi; Ross, Philip N; Head-Gordon, Martin
2003-04-15
A sparse matrix multiplication scheme with multiatom blocks is reported, a tool that can be very useful for developing linear-scaling methods with atom-centered basis functions. Compared to conventional element-by-element sparse matrix multiplication schemes, efficiency is gained by the use of the highly optimized basic linear algebra subroutines (BLAS). However, some sparsity is lost in the multiatom blocking scheme because these matrix blocks will in general contain negligible elements. As a result, an optimal block size that minimizes the CPU time by balancing these two effects is recovered. In calculations on linear alkanes, polyglycines, estane polymers, and water clusters the optimal block size is found to be between 40 and 100 basis functions, where about 55-75% of the machine peak performance was achieved on an IBM RS6000 workstation. In these calculations, the blocked sparse matrix multiplications can be 10 times faster than a standard element-by-element sparse matrix package. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 618-622, 2003
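A minimal sketch of multiatom-blocked sparse multiplication (not the authors' code): blocks are stored dense so each block product runs through an optimized BLAS kernel (numpy's @), and negligible result blocks are dropped to preserve sparsity. Block sizes and layout are illustrative.

```python
import numpy as np

def block_sparse_matmul(A_blocks, B_blocks, nblocks, drop_tol=1e-10):
    """A_blocks/B_blocks: dict mapping (i, j) -> dense block; returns C dict."""
    C = {}
    for (i, k), Ab in A_blocks.items():
        for j in range(nblocks):
            Bb = B_blocks.get((k, j))
            if Bb is None:
                continue                          # structurally zero block
            prod = Ab @ Bb                        # dense BLAS level-3 kernel
            if np.linalg.norm(prod) <= drop_tol:  # drop negligible blocks
                continue
            if (i, j) in C:
                C[(i, j)] += prod
            else:
                C[(i, j)] = prod
    return C

rng = np.random.default_rng(0)
bs, nb = 60, 8                                    # ~60 basis functions per block
A = {(i, i): rng.normal(size=(bs, bs)) for i in range(nb)}             # diagonal
B = {(i, (i + 1) % nb): rng.normal(size=(bs, bs)) for i in range(nb)}  # shifted
C = block_sparse_matmul(A, B, nb)
print("nonzero blocks in product:", len(C))
```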
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Zhiming; Radboud University/NIKHEF, NL-6525 ED Nijmegen
We report on an entropy analysis using Ma's coincidence method on π+p and K+p collisions at √s = 22 GeV. A scaling law and additivity properties of Renyi entropies and their charged-particle multiplicity dependence are investigated. The results are compared with those from the PYTHIA Monte Carlo model.
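For reference, the Renyi entropies analysed in such studies are H_q = ln(Σ_n P_n^q)/(1 − q), reducing to the Shannon entropy as q → 1. The sketch below evaluates them on a toy negative-binomial multiplicity spectrum, not the 22 GeV data.

```python
import numpy as np
from scipy.stats import nbinom

n = np.arange(0, 60)
P = nbinom.pmf(n, 5, 0.35)                  # toy charged-multiplicity spectrum
P /= P.sum()                                # normalize the truncated spectrum

def renyi_entropy(p, q):
    p = p[p > 0]
    if np.isclose(q, 1.0):                  # Shannon limit as q -> 1
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** q)) / (1.0 - q)

for q in (1.0, 2.0, 3.0):
    print(f"H_{q:g} = {renyi_entropy(P, q):.3f}")
```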
Ku, Taeyun; Swaney, Justin; Park, Jeong-Yoon; Albanese, Alexandre; Murray, Evan; Cho, Jae Hun; Park, Young-Gyun; Mangena, Vamsi; Chen, Jiapei; Chung, Kwanghun
2016-09-01
The biology of multicellular organisms is coordinated across multiple size scales, from the subnanoscale of molecules to the macroscale, tissue-wide interconnectivity of cell populations. Here we introduce a method for super-resolution imaging of the multiscale organization of intact tissues. The method, called magnified analysis of the proteome (MAP), linearly expands entire organs fourfold while preserving their overall architecture and three-dimensional proteome organization. MAP is based on the observation that preventing crosslinking within and between endogenous proteins during hydrogel-tissue hybridization allows for natural expansion upon protein denaturation and dissociation. The expanded tissue preserves its protein content, its fine subcellular details, and its organ-scale intercellular connectivity. We use off-the-shelf antibodies for multiple rounds of immunolabeling and imaging of a tissue's magnified proteome, and our experiments demonstrate a success rate of 82% (100/122 antibodies tested). We show that specimen size can be reversibly modulated to image both inter-regional connections and fine synaptic architectures in the mouse brain.
NASA Astrophysics Data System (ADS)
Comas, X.; Wright, W. J.; Hynek, S. A.; Ntarlagiannis, D.; Terry, N.; Job, M. J.; Fletcher, R. C.; Brantley, S.
2017-12-01
Previous studies in the Rio Icacos watershed in the Luquillo Mountains (Puerto Rico) have shown that regolith materials are rapidly developed from the alteration of quartz diorite bedrock, and create a blanket on top of the bedrock with a thickness that decreases with proximity to the knickpoint. The watershed is also characterized by a system of heterogeneous fractures that likely drive bedrock weathering and the formation of corestones and associated spheroidal fracturing and rindlets. Previous efforts to characterize the spatial distribution of fractures were based on aerial images that did not account for the architecture of the critical zone below the surface. In this study we use an array of near-surface geophysical methods at multiple scales to better understand how the spatial distribution and density of fractures varies with topography and proximity to the knickpoint. Large km-scale surveys using ground penetrating radar (GPR), terrain conductivity, and capacitively coupled resistivity were combined with smaller-scale surveys (10-100 m) using electrical resistivity imaging (ERI) and shallow seismics, and were directly constrained with boreholes from previous studies. Geophysical results were compared to theoretical models of compressive stress due to gravity and regional compression, and showed consistency in describing increased dilation of fractures with proximity to the knickpoint. This study shows the potential of multidisciplinary approaches to model critical zone processes at multiple scales of measurement and high spatial resolution. The approach can be particularly efficient at large km-scales when applying geophysical methods that allow for rapid data acquisition (i.e. walking pace) at high spatial resolution (i.e. cm scales).
Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis
NASA Technical Reports Server (NTRS)
Mcanelly, W. B.; Young, C. T. K.
1973-01-01
Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data are applied to the design of an adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.
Collins, Kodi; Warnow, Tandy
2018-06-19
PASTA is a multiple sequence alignment method that uses divide-and-conquer plus iteration to enable base alignment methods to scale with high accuracy to large sequence datasets. By default, PASTA included MAFFT L-INS-i; our new extension of PASTA enables the use of MAFFT G-INS-i, MAFFT Homologs, CONTRAlign, and ProbCons. We analyzed the performance of each base method and PASTA using these base methods on 224 datasets from BAliBASE 4 with at least 50 sequences. We show that PASTA enables the most accurate base methods to scale to larger datasets at reduced computational effort, and generally improves alignment and tree accuracy on the largest BAliBASE datasets. PASTA is available at https://github.com/kodicollins/pasta and has also been integrated into the original PASTA repository at https://github.com/smirarab/pasta. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Ying, Shen; Li, Lin; Gao, Yurong
2009-10-01
Spatial visibility analysis is an important approach to studying pedestrian behavior, because visual perception of space is the most direct way for pedestrians to acquire environmental information and guide their movements. Based on agent modeling and a top-to-bottom method, the paper develops a framework for analyzing visibility-dependent pedestrian flow. We use viewsheds in the visibility analysis and impose the resulting parameters on simulated agents to direct their motion in urban space. We analyze pedestrian behavior at the micro-scale and macro-scale of urban open space. At the micro-scale of an urban street or district, each individual agent uses visual affordances to determine its direction of motion. At the macro-scale, we compare the distribution of pedestrian flow with the urban configuration, and mine the relationship between pedestrian flow and the distribution of urban facilities and urban function. The paper first computes the visibility conditions at vantage points in urban open space, such as the street network, and quantifies the visibility parameters. The multiple agents then use these visibility parameters to decide their directions of motion, and the pedestrian flow finally reaches a stable state in the urban environment through the multi-agent simulation. Finally, we compare the morphology of the visibility parameters and the pedestrian distribution with the urban function and facilities layout to confirm the consistency between them, which can be used to support decision-making in urban design.
Optimal knockout strategies in genome-scale metabolic networks using particle swarm optimization.
Nair, Govind; Jungreuthmayer, Christian; Zanghellini, Jürgen
2017-02-01
Knockout strategies, particularly the concept of constrained minimal cut sets (cMCSs), are an important part of the arsenal of tools used in manipulating metabolic networks. Given a specific design, cMCSs can be calculated even in genome-scale networks. We would, however, like to find not only the optimal intervention strategy for a given design but the best possible design too. Our solution (PSOMCS) is to use particle swarm optimization (PSO) along with the direct calculation of cMCSs from the stoichiometric matrix to obtain optimal designs satisfying multiple objectives. To illustrate the working of PSOMCS, we apply it to a toy network. Next, we show its superiority by comparing its performance against other comparable methods on a medium-sized E. coli core metabolic network. PSOMCS not only finds solutions comparable to previously published results but is also orders of magnitude faster. Finally, we use PSOMCS to predict knockouts satisfying multiple objectives in a genome-scale metabolic model of E. coli and compare it with OptKnock and RobustKnock. PSOMCS finds competitive knockout strategies and designs compared to other current methods and is in some cases significantly faster. It can be used to identify knockouts which will force optimal desired behaviors in large and genome-scale metabolic networks. It will be even more useful as larger metabolic models of industrially relevant organisms become available.
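A minimal sketch of the PSO loop at the core of such a method, minimizing a toy objective; the cMCS-based fitness evaluation that PSOMCS performs inside the loop is beyond this sketch, and all hyperparameters are generic defaults.

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    x = rng.uniform(-5, 5, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # personal bests
    pbest_f = np.apply_along_axis(fitness, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()          # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.apply_along_axis(fitness, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

sphere = lambda z: float(np.sum(z ** 2))              # toy stand-in objective
best_x, best_f = pso(sphere, dim=5)
print("best objective:", round(best_f, 6))
```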
Karasek, Robert; Choi, BongKyoo; Ostergren, Per-Olof; Ferrario, Marco; De Smet, Patrick
2007-01-01
The comparative scale properties of "JCQ-like" questionnaires with respect to the JCQ have been little studied. We assess the validity and reliability of two methods for generating comparable scale scores between the Job Content Questionnaire (JCQ) and JCQ-like questionnaires in sub-populations of the large Job Stress, Absenteeism and Coronary Heart Disease European Cooperative (JACE) study: the Swedish version of the Demand-Control Questionnaire (DCQ) and a transformed Multinational Monitoring of Trends and Determinants in Cardiovascular Disease Project (MONICA) questionnaire. A random population sample of Malmo males and females aged 52-58 years (n = 682) was given a new test questionnaire containing both instruments (the JCQ and the DCQ). Comparability-facilitating algorithms were created (Method I). For the transformed Milan MONICA questionnaire, a simple weighting system was used (Method II). The converted scale scores from the JCQ-like questionnaires were found to be reliable and highly correlated with those of the original JCQ. However, agreement on the high job strain group between the JCQ and the DCQ, with and without Method I applied, was only moderate (Kappa). Use of a multiple-level job strain scale generated higher levels of job strain agreement, as did a new job strain definition that excludes the intermediate levels of the job strain distribution. The two methods were valid and generally reliable.
NASA Astrophysics Data System (ADS)
Barajas-Solano, D. A.; Tartakovsky, A. M.
2017-12-01
We present a multiresolution method for the numerical simulation of flow and reactive transport in porous, heterogeneous media, based on the hybrid Multiscale Finite Volume (h-MsFV) algorithm. The h-MsFV algorithm allows us to couple high-resolution (fine scale) flow and transport models with lower resolution (coarse) models to locally refine both spatial resolution and transport models. The fine scale problem is decomposed into various "local'' problems solved independently in parallel and coordinated via a "global'' problem. This global problem is then coupled with the coarse model to strictly ensure domain-wide coarse-scale mass conservation. The proposed method provides an alternative to adaptive mesh refinement (AMR), due to its capacity to rapidly refine spatial resolution beyond what's possible with state-of-the-art AMR techniques, and the capability to locally swap transport models. We illustrate our method by applying it to groundwater flow and reactive transport of multiple species.
Partitioning sources of variation in vertebrate species richness
Boone, R.B.; Krohn, W.B.
2000-01-01
Aim: To explore biogeographic patterns of terrestrial vertebrates in Maine, USA, using techniques that would describe local and spatial correlations with the environment. Location: Maine, USA. Methods: We delineated the ranges within Maine (86,156 km2) of 275 species using literature and expert review. Ranges were combined into species richness maps, and compared to geomorphology, climate, and woody plant distributions. Methods were adapted that compared richness of all vertebrate classes to each environmental correlate, rather than assessing a single explanatory theory. We partitioned variation in species richness into components using tree and multiple linear regression. Methods were used that allowed for useful comparisons between tree and linear regression results. For both methods we partitioned variation into broad-scale (spatially autocorrelated) and fine-scale (spatially uncorrelated) explained and unexplained components. By partitioning variance, and using both tree and linear regression in analyses, we explored the degree of variation in species richness for each vertebrate group that could be explained by the relative contribution of each environmental variable. Results: In tree regression, climate variation explained richness better (92% of mean deviance explained for all species) than woody plant variation (87%) and geomorphology (86%). Reptiles were highly correlated with environmental variation (93%), followed by mammals, amphibians, and birds (each with 84-82% deviance explained). In multiple linear regression, climate was most closely associated with total vertebrate richness (78%), followed by woody plants (67%) and geomorphology (56%). Again, reptiles were closely correlated with the environment (95%), followed by mammals (73%), amphibians (63%) and birds (57%). Main conclusions: Comparing variation explained using tree and multiple linear regression quantified the importance of nonlinear relationships and local interactions between species richness and environmental variation, identifying the importance of linear relationships between reptiles and the environment, and nonlinear relationships between birds and woody plants, for example. Conservation planners should capture climatic variation in broad-scale designs; temperatures may shift during climate change, but the underlying correlations between the environment and species richness will presumably remain.
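A minimal sketch of the variance-partitioning comparison (simulated data; scikit-learn assumed available): a deliberately nonlinear climate effect lets tree regression explain more variance in richness than multiple linear regression, echoing the nonlinear relationships noted above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(11)
n = 1000
climate = rng.uniform(-2, 2, n)
geomorph = rng.uniform(-2, 2, n)
# Richness responds linearly to geomorphology but nonlinearly to climate.
richness = np.where(climate > 0, 10 + 4 * climate, 10) \
           + geomorph + rng.normal(0, 1, n)

X = np.column_stack([climate, geomorph])
for name, model in [("linear", LinearRegression()),
                    ("tree", DecisionTreeRegressor(max_depth=4))]:
    r2 = model.fit(X, richness).score(X, richness)
    print(f"{name} regression: {r2:.0%} of variance explained")
```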
Bean, William T.; Stafford, Robert; Butterfield, H. Scott; Brashares, Justin S.
2014-01-01
Species distributions are known to be limited by biotic and abiotic factors at multiple temporal and spatial scales. Species distribution models, however, frequently assume a population at equilibrium in both time and space. Studies of habitat selection have repeatedly shown the difficulty of estimating resource selection if the scale or extent of analysis is incorrect. Here, we present a multi-step approach to estimate the realized and potential distribution of the endangered giant kangaroo rat. First, we estimate the potential distribution by modeling suitability at a range-wide scale using static bioclimatic variables. We then examine annual changes in extent at the population level. We define “available” habitat based on the total suitable potential distribution at the range-wide scale. Then, within the available habitat, we model changes in population extent driven by multiple measures of resource availability. By modeling distributions for a population with robust estimates of population extent through time, and ecologically relevant predictor variables, we improved the predictive ability of SDMs, as well as revealed an unanticipated relationship between population extent and precipitation at multiple scales. At a range-wide scale, the best model indicated the giant kangaroo rat was limited to areas that received little to no precipitation in the summer months. In contrast, the best model for shorter time scales showed a positive relation with resource abundance, driven by precipitation, in the current and previous year. These results suggest that the distribution of the giant kangaroo rat was limited to the wettest parts of the drier areas within the study region. This multi-step approach reinforces the differing relationships species may have with environmental variables at different scales, provides a novel method for defining “available” habitat in habitat selection studies, and suggests a way to create distribution models at spatial and temporal scales relevant to theoretical and applied ecologists. PMID:25237807
Trans-National Scale-Up of Services in Global Health
Shahin, Ilan; Sohal, Raman; Ginther, John; Hayden, Leigh; MacDonald, John A.; Mossman, Kathryn; Parikh, Himanshu; McGahan, Anita; Mitchell, Will; Bhattacharyya, Onil
2014-01-01
Background Scaling up innovative healthcare programs offers a means to improve access, quality, and health equity across multiple health areas. Despite large numbers of promising projects, little is known about successful efforts to scale up. This study examines trans-national scale, whereby a program operates in two or more countries. Trans-national scale is a distinct measure that reflects opportunities to replicate healthcare programs in multiple countries, thereby providing services to broader populations. Methods Based on the Center for Health Market Innovations (CHMI) database of nearly 1200 health programs, the study contrasts 116 programs that have achieved trans-national scale with 1,068 single-country programs. Data was collected on the programs' health focus, service activity, legal status, and funding sources, as well as the programs' locations (rural v. urban emphasis), and founding year; differences are reported with statistical significance. Findings This analysis examines 116 programs that have achieved trans-national scale (TNS) across multiple disease areas and activity types. Compared to 1,068 single-country programs, we find that trans-nationally scaled programs are more donor-reliant; more likely to focus on targeted health needs such as HIV/AIDS, TB, malaria, or family planning rather than provide more comprehensive general care; and more likely to engage in activities that support healthcare services rather than provide direct clinical care. Conclusion This work, based on a large data set of health programs, reports on trans-national scale with comparison to single-country programs. The work is a step towards understanding when programs are able to replicate their services as they attempt to expand health services for the poor across countries and health areas. A subset of these programs should be the subject of case studies to understand factors that affect the scaling process, particularly seeking to identify mechanisms that lead to improved health outcomes. PMID:25375328
Pratt, Bethany; Chang, Heejun
2012-03-30
The relationship among land cover, topography, built structure and stream water quality in the Portland Metro region of Oregon and Clark County, Washington areas, USA, is analyzed using ordinary least squares (OLS) and geographically weighted (GWR) multiple regression models. Two scales of analysis, a sectional watershed and a buffer, offered a local and a global investigation of the sources of stream pollutants. Model accuracy, measured by R² values, fluctuated according to the scale, season, and regression method used. While most wet season water quality parameters are associated with urban land covers, most dry season water quality parameters are related to topographic features such as elevation and slope. GWR models, which take into consideration local relations of spatial autocorrelation, had stronger results than OLS regression models. In the multiple regression models, sectioned watershed results were consistently better than the sectioned buffer results, except for dry season pH and stream temperature parameters. This suggests that while riparian land cover does have an effect on water quality, a wider contributing area needs to be included in order to account for distant sources of pollutants. Copyright © 2012 Elsevier B.V. All rights reserved.
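A minimal sketch of the GWR idea (not the study's code): at each regression point a Gaussian kernel on distance weights the observations, and weighted least squares yields locally varying coefficients. The locations, the land-cover predictor, and the drifting coefficient are all simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
coords = rng.uniform(0, 10, (n, 2))                  # monitoring-site locations
urban = rng.uniform(0, 1, n)                         # % urban land cover
true_coef = 0.5 + 0.2 * coords[:, 0]                 # coefficient drifts eastward
y = true_coef * urban + rng.normal(0, 0.1, n)        # water-quality metric

def gwr_at(point, bandwidth=2.0):
    """Weighted least squares at one regression point (Gaussian kernel)."""
    d2 = np.sum((coords - point) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    X = np.column_stack([np.ones(n), urban])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta                                      # [intercept, urban coef]

for px in (1.0, 5.0, 9.0):
    b = gwr_at(np.array([px, 5.0]))
    print(f"x = {px}: local urban coefficient = {b[1]:.2f}")
```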
NASA Technical Reports Server (NTRS)
Kim, S.-W.; Chen, C.-P.
1987-01-01
A multiple-time-scale turbulence model of a single point closure and a simplified split-spectrum method is presented. In the model, the effect of the ratio of the production rate to the dissipation rate on eddy viscosity is modeled by use of the multiple-time-scales and a variable partitioning of the turbulent kinetic energy spectrum. The concept of a variable partitioning of the turbulent kinetic energy spectrum and the rest of the model details are based on the previously reported algebraic stress turbulence model. Example problems considered include: a fully developed channel flow, a plane jet exhausting into a moving stream, a wall jet flow, and a weakly coupled wake-boundary layer interaction flow. The computational results compared favorably with those obtained by using the algebraic stress turbulence model as well as experimental data. The present turbulence model, as well as the algebraic stress turbulence model, yielded significantly improved computational results for the complex turbulent boundary layer flows, such as the wall jet flow and the wake boundary layer interaction flow, compared with available computational results obtained by using the standard kappa-epsilon turbulence model.
Yang, Yingbao; Li, Xiaolong; Pan, Xin; Zhang, Yong; Cao, Chen
2017-01-01
Many downscaling algorithms have been proposed to address the issue of coarse-resolution land surface temperature (LST) derived from available satellite-borne sensors. However, few studies have focused on improving LST downscaling in urban areas with several mixed surface types. In this study, LST was downscaled by a multiple linear regression model between LST and multiple scale factors in mixed areas with three or four surface types. The correlation coefficients (CCs) between LST and the scale factors were used to assess the importance of the scale factors within a moving window. CC thresholds determined which factors participated in the fitting of the regression equation. The proposed downscaling approach, which involves an adaptive selection of the scale factors, was evaluated using the LST derived from four Landsat 8 thermal images of Nanjing City in different seasons. Results of the visual and quantitative analyses show that the proposed approach achieves relatively satisfactory downscaling results on 11 August, with a coefficient of determination and root-mean-square error of 0.87 and 1.13 °C, respectively. Relative to other approaches, our approach shows similar accuracy and is applicable in all seasons; applicability was best over vegetated regions and worst over water. Thus, the approach is an efficient and reliable LST downscaling method. Future tasks include reliable LST downscaling in challenging regions and the application of our model at middle and low spatial resolutions. PMID:28368301
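The adaptive factor-selection step described above can be read, in rough Python form, as follows; the threshold value, window handling, and variable names are assumptions for illustration, not the paper's implementation.

import numpy as np

def local_lst_model(lst_win, factors_win, cc_threshold=0.3):
    # lst_win: (n,) LST samples within the moving window
    # factors_win: (n, k) candidate scale factors at the same pixels
    cc = np.array([np.corrcoef(f, lst_win)[0, 1] for f in factors_win.T])
    keep = np.abs(cc) >= cc_threshold       # adaptive selection of factors
    X = np.column_stack([np.ones(len(lst_win)), factors_win[:, keep]])
    coef, *_ = np.linalg.lstsq(X, lst_win, rcond=None)
    return coef, keep                       # local regression for downscaling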
Validating Bayesian truth serum in large-scale online human experiments.
Frank, Morgan R; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad
2017-01-01
Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method's mathematical reliance on large sample sizes, the existing literature on BTS focuses only on small experiments. Given the prevalence of online survey platforms, such as Amazon's Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must be shown to be effective in large-scale experiments before it can become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the "honest" distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where "honest" answers cannot be known, but find that BTS treatment impacts the resulting distributions of answers.
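For context, here is a hedged numpy sketch of the standard BTS score (after Prelec's formulation, not necessarily the authors' exact implementation): each respondent earns an information score for giving a surprisingly common answer plus a prediction score for forecasting the population's answer frequencies. The array layout and the weight alpha are assumptions.

import numpy as np

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    # answers: (n,) index of each respondent's chosen option
    # predictions: (n, k) each respondent's predicted answer frequencies
    n, k = predictions.shape
    x_bar = np.bincount(answers, minlength=k) / n        # actual frequencies
    log_geo = np.log(predictions + eps).mean(axis=0)     # log geometric-mean prediction
    info = np.log(x_bar[answers] + eps) - log_geo[answers]
    pred = (x_bar * (np.log(predictions + eps) - np.log(x_bar + eps))).sum(axis=1)
    return info + alpha * pred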
Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel
2017-01-01
Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of a few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method, where the lengths between pairs of targets measured from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607
Experimental design and quantitative analysis of microbial community multiomics.
Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis
2017-11-30
Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.
Optical methods for wireless implantable sensing platforms
NASA Astrophysics Data System (ADS)
Mujeeb-U-Rahman, Muhammad; Chang, Chieh-Feng; Scherer, Axel
2013-09-01
Ultra-small-scale implants have gained considerable importance for both acute and chronic applications. Optical techniques hold the key to miniaturizing these devices to the long-sought sub-mm scale, which will enable long-term use of these devices for medically relevant applications. Optical techniques can also allow multiple such devices to be used at the same time, forming a true body-area network of sensors. In this paper, we present optical power transfer to such devices and the techniques to harness this power for different applications, for example high-voltage or high-current applications. We also present methods for wireless data transfer from such implants.
2013-01-01
Background As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. Results We applied 12 microarray meta-analysis methods for combining multiple simulated expression profiles; these methods can be categorized by hypothesis setting: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE genes with non-zero effect in the "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. Conclusions The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from MDS and entropy analyses provided an insightful and practical guideline to the choice of the most suitable method in a given application. All source files for simulation and real data are available on the author's publication website. PMID:24359104
Chang, Lun-Ching; Lin, Hui-Min; Sibille, Etienne; Tseng, George C
2013-12-21
As high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations. We applied 12 microarray meta-analysis methods for combining multiple simulated expression profiles; these methods can be categorized by hypothesis setting: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE genes with non-zero effect in the "majority" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated the hypothesis settings behind the methods and further applied multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively. The aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from MDS and entropy analyses provided an insightful and practical guideline to the choice of the most suitable method in a given application. All source files for simulation and real data are available on the author's publication website.
Daleu, C. L.; Plant, R. S.; Woolnough, S. J.; ...
2015-10-24
Here, as part of an international intercomparison project, a set of single-column models (SCMs) and cloud-resolving models (CRMs) are run under the weak-temperature gradient (WTG) method and the damped gravity wave (DGW) method. For each model, the implementation of the WTG or DGW method involves a simulated column which is coupled to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. The simulated column has the same surface conditions as the reference state and is initialized with profiles from the reference state. We performed systematic comparison of the behavior of different models under a consistent implementation of the WTG method and the DGW method and systematic comparison of the WTG and DGW methods in models with different physics and numerics. CRMs and SCMs produce a variety of behaviors under both WTG and DGW methods. Some of the models reproduce the reference state while others sustain a large-scale circulation which results in either substantially lower or higher precipitation compared to the value of the reference state. CRMs show a fairly linear relationship between precipitation and circulation strength. SCMs display a wider range of behaviors than CRMs. Some SCMs under the WTG method produce zero precipitation. Within an individual SCM, a DGW simulation and a corresponding WTG simulation can produce different signed circulation. When initialized with a dry troposphere, DGW simulations always result in a precipitating equilibrium state. The greatest sensitivities to the initial moisture conditions occur for multiple stable equilibria in some WTG simulations, corresponding to either a dry equilibrium state when initialized as dry or a precipitating equilibrium state when initialized as moist. Multiple equilibria are seen in more WTG simulations for higher SST. In some models, the existence of multiple equilibria is sensitive to some parameters in the WTG calculations.
Application of up-sampling and resolution scaling to Fresnel reconstruction of digital holograms.
Williams, Logan A; Nehmetallah, Georges; Aylo, Rola; Banerjee, Partha P
2015-02-20
Fresnel transform implementation methods using numerical preprocessing techniques are investigated in this paper. First, it is shown that up-sampling dramatically reduces the minimum reconstruction distance requirements and allows maximal signal recovery by eliminating aliasing artifacts which typically occur at distances much less than the Rayleigh range of the object. Second, zero-padding is employed to arbitrarily scale numerical resolution for the purpose of resolution matching multiple holograms, where each hologram is recorded using dissimilar geometric or illumination parameters. Such preprocessing yields numerical resolution scaling at any distance. Both techniques are extensively illustrated using experimental results.
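A minimal numpy illustration of the zero-padding idea follows: padding the hologram before a single-FFT Fresnel reconstruction changes the numerical resolution of the output plane, which is how holograms recorded with dissimilar parameters can be resolution-matched. The chirp convention and parameters are assumptions, and the paper's full preprocessing pipeline is not reproduced.

import numpy as np

def fresnel_reconstruct(hologram, wavelength, pitch, z, pad_factor=2):
    # Zero-pad symmetrically to scale the numerical resolution.
    n = hologram.shape[0]
    m = n * pad_factor
    padded = np.zeros((m, m), dtype=complex)
    s = (m - n) // 2
    padded[s:s + n, s:s + n] = hologram
    # Quadratic (chirp) phase factor of the single-FFT Fresnel transform.
    x = (np.arange(m) - m / 2) * pitch
    X, Y = np.meshgrid(x, x)
    chirp = np.exp(1j * np.pi * (X ** 2 + Y ** 2) / (wavelength * z))
    return np.abs(np.fft.fftshift(np.fft.fft2(padded * chirp)))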
Detecting text in natural scenes with multi-level MSER and SWT
NASA Astrophysics Data System (ADS)
Lu, Tongwei; Liu, Renjun
2018-04-01
Detection of characters in natural scenes is susceptible to factors such as complex backgrounds, variable viewing angles and diverse languages, which leads to poor detection results. To address these problems, a new text detection method is proposed, consisting of two main stages: candidate region extraction and text region detection. In the first stage, the method applies multiple scale transformations of the original image and multiple thresholds of maximally stable extremal regions (MSER) to detect candidate character regions comprehensively. In the second stage, stroke width transform (SWT) maps are computed for the candidate regions, and cascaded classifiers are used to reject non-text regions. The proposed method was evaluated on the standard ICDAR2011 benchmark and on our own datasets. Experimental results show that the proposed method achieves substantially better performance than other text detection methods.
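The first stage might be sketched with OpenCV roughly as below, running MSER at several image scales and delta thresholds and mapping boxes back to the original image; the SWT stage and cascaded classifiers are omitted, and all parameter values are assumptions.

import cv2

def multiscale_mser(gray, scales=(1.0, 0.75, 0.5), deltas=(2, 5)):
    # gray: single-channel uint8 image
    candidates = []
    for s in scales:                                  # image pyramid
        img = cv2.resize(gray, None, fx=s, fy=s)
        for d in deltas:                              # multiple MSER thresholds
            regions, boxes = cv2.MSER_create(d).detectRegions(img)
            # Map bounding boxes (x, y, w, h) back to original coordinates.
            candidates += [tuple(int(v / s) for v in b) for b in boxes]
    return candidates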
Computational Issues in Damping Identification for Large Scale Problems
NASA Technical Reports Server (NTRS)
Pilkey, Deborah L.; Roe, Kevin P.; Inman, Daniel J.
1997-01-01
Two damping identification methods are tested for efficiency in large-scale applications. One is an iterative routine, and the other a least squares method. Numerical simulations have been performed on multiple degree-of-freedom models to test the effectiveness of the algorithm and the usefulness of parallel computation for the problems. High Performance Fortran is used to parallelize the algorithm. Tests were performed using the IBM-SP2 at NASA Ames Research Center. The least squares method tested incurs high communication costs, which reduces the benefit of high performance computing. This method's memory requirement grows at a very rapid rate, meaning that larger problems can quickly exceed available computer memory. The iterative method's memory requirement grows at a much slower pace and is able to handle problems with 500+ degrees of freedom on a single processor. This method benefits from parallelization, and significant speedup can be seen for problems of 100+ degrees of freedom.
Recording large-scale neuronal ensembles with silicon probes in the anesthetized rat.
Schjetnan, Andrea Gomez Palacio; Luczak, Artur
2011-10-19
Large scale electrophysiological recordings from neuronal ensembles offer the opportunity to investigate how the brain orchestrates the wide variety of behaviors from the spiking activity of its neurons. One of the most effective methods to monitor spiking activity from a large number of neurons in multiple local neuronal circuits simultaneously is by using silicon electrode arrays. Action potentials produce large transmembrane voltage changes in the vicinity of cell somata. These output signals can be measured by placing a conductor in close proximity of a neuron. If there are many active (spiking) neurons in the vicinity of the tip, the electrode records the combined signal from all of them, where the contribution of a single neuron is weighted by its 'electrical distance'. Silicon probes are ideal recording electrodes to monitor multiple neurons because of a large number of recording sites (64+) and a small volume. Furthermore, multiple sites can be arranged over a distance of millimeters, thus allowing for the simultaneous recording of neuronal activity in the various cortical layers or in multiple cortical columns (Fig. 1). Importantly, the geometrically precise distribution of the recording sites also allows for the determination of the spatial relationship of the isolated single neurons. Here, we describe an acute, large-scale neuronal recording from the left and right forelimb somatosensory cortex simultaneously in an anesthetized rat with silicon probes (Fig. 2).
Recording Large-scale Neuronal Ensembles with Silicon Probes in the Anesthetized Rat
Schjetnan, Andrea Gomez Palacio; Luczak, Artur
2011-01-01
Large scale electrophysiological recordings from neuronal ensembles offer the opportunity to investigate how the brain orchestrates the wide variety of behaviors from the spiking activity of its neurons. One of the most effective methods to monitor spiking activity from a large number of neurons in multiple local neuronal circuits simultaneously is by using silicon electrode arrays. Action potentials produce large transmembrane voltage changes in the vicinity of cell somata. These output signals can be measured by placing a conductor in close proximity of a neuron. If there are many active (spiking) neurons in the vicinity of the tip, the electrode records the combined signal from all of them, where the contribution of a single neuron is weighted by its 'electrical distance'. Silicon probes are ideal recording electrodes to monitor multiple neurons because of a large number of recording sites (64+) and a small volume. Furthermore, multiple sites can be arranged over a distance of millimeters, thus allowing for the simultaneous recording of neuronal activity in the various cortical layers or in multiple cortical columns (Fig. 1). Importantly, the geometrically precise distribution of the recording sites also allows for the determination of the spatial relationship of the isolated single neurons. Here, we describe an acute, large-scale neuronal recording from the left and right forelimb somatosensory cortex simultaneously in an anesthetized rat with silicon probes (Fig. 2). PMID:22042361
Rank Dynamics of Word Usage at Multiple Scales
NASA Astrophysics Data System (ADS)
Morales, José A.; Colman, Ewan; Sánchez, Sergio; Sánchez-Puig, Fernanda; Pineda, Carlos; Iñiguez, Gerardo; Cocho, Germinal; Flores, Jorge; Gershenson, Carlos
2018-05-01
The recent dramatic increase in online data availability has allowed researchers to explore human culture with unprecedented detail, such as the growth and diversification of language. In particular, it provides statistical tools to explore whether word use is similar across languages, and if so, whether these generic features appear at different scales of language structure. Here we use the Google Books N-grams dataset to analyze the temporal evolution of word usage in several languages. We apply measures proposed recently to study rank dynamics, such as the diversity of N-grams in a given rank, the probability that an N-gram changes rank between successive time intervals, the rank entropy, and the rank complexity. Using different methods, results show that there are generic properties for different languages at different scales, such as a core of words necessary to minimally understand a language. We also propose a null model to explore the relevance of linguistic structure across multiple scales, concluding that N-gram statistics cannot be reduced to word statistics. We expect our results to be useful in improving text prediction algorithms, as well as in shedding light on the large-scale features of language use, beyond linguistic and cultural differences across human populations.
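Two of the rank measures mentioned above, rank diversity and the probability of rank change, reduce to a few lines of numpy; the frequency-matrix input format here is an assumption.

import numpy as np

def rank_measures(freq):
    # freq: (T, V) frequencies of V n-grams over T time slices.
    order = np.argsort(-freq, axis=1)     # order[t, r] = item at rank r, time t
    T, V = order.shape
    # Rank diversity: distinct occupants of each rank, normalized by T.
    diversity = np.array([len(set(order[:, r].tolist())) / T for r in range(V)])
    # Probability that a rank's occupant changes between successive slices.
    change_prob = (order[1:] != order[:-1]).mean(axis=0)
    return diversity, change_prob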
Decomposing Multifractal Crossovers
Nagy, Zoltan; Mukli, Peter; Herman, Peter; Eke, Andras
2017-01-01
Physiological processes—such as the brain's resting-state electrical activity or hemodynamic fluctuations—exhibit scale-free temporal structuring. However, impacts common in biological systems, such as noise, multiple signal generators, or filtering by a transport function, result in multimodal scaling that cannot be reliably assessed by standard analytical tools that assume unimodal scaling. Here, we present two methods to identify breakpoints or crossovers in multimodal multifractal scaling functions. These methods incorporate the robust iterative fitting approach of the focus-based multifractal formalism (FMF). The first approach (moment-wise scaling range adaptivity) allows for a breakpoint-based adaptive treatment that analyzes segregated scale-invariant ranges. The second method (scaling function decomposition method, SFD) is a crossover-based design aimed at decomposing signal constituents from multimodal scaling functions resulting from signal addition or co-sampling, such as contamination by uncorrelated fractals. We demonstrated that these methods could handle multimodal, mono- or multifractal, and exact or empirical signals alike. Their precision was numerically characterized on ideal signals, and robust performance was demonstrated on exemplary empirical signals capturing resting-state brain dynamics by near infrared spectroscopy (NIRS), electroencephalography (EEG), and blood oxygen level-dependent functional magnetic resonance imaging (fMRI-BOLD). The NIRS and fMRI-BOLD low-frequency fluctuations were dominated by a multifractal component over an underlying biologically relevant random noise, thus forming a bimodal signal. The crossover between the EEG signal components was found at the boundary between the δ and θ bands, suggesting an independent generator for the multifractal δ rhythm. The robust implementation of the SFD method should be regarded as essential in the seamless processing of large volumes of bimodal fMRI-BOLD imaging data for the topology of multifractal metrics free of the masking effect of the underlying random noise. PMID:28798694
NASA Astrophysics Data System (ADS)
Pan, Zhenying; Yu, Ye Feng; Valuckas, Vytautas; Yap, Sherry L. K.; Vienne, Guillaume G.; Kuznetsov, Arseniy I.
2018-05-01
Cheap large-scale fabrication of ordered nanostructures is important for multiple applications in photonics and biomedicine including optical filters, solar cells, plasmonic biosensors, and DNA sequencing. Existing methods are either expensive or have strict limitations on the feature size and fabrication complexity. Here, we present a laser-based technique, plasmonic nanoparticle lithography, which is capable of rapid fabrication of large-scale arrays of sub-50 nm holes on various substrates. It is based on near-field enhancement and melting induced under ordered arrays of plasmonic nanoparticles, which are brought into contact with or in close proximity to a desired material and act as optical near-field lenses. The nanoparticles are arranged in ordered patterns on a flexible substrate and can be attached to and removed from the patterned sample surface. At optimized laser fluence, the nanohole patterning process does not create any observable changes to the nanoparticles, and they have been applied multiple times as reusable near-field masks. This resist-free nanolithography technique provides a simple and cheap solution for large-scale nanofabrication.
Davis, Deborah Winders; Finkel, Deborah; Turkheimer, Eric; Dickens, William
2015-11-01
The Infant Behavior Record (IBR) from the Bayley Scales of Infant Development has been used to study behavioral development since the 1960s. Matheny (1983) examined behavioral development at 6, 12, 18, and 24 months from the Louisville Twin Study (LTS). The extracted temperament scales included Task Orientation, Affect-Extraversion, and Activity. He concluded that monozygotic twins were more similar than same-sex dizygotic twins on these dimensions. Since this seminal work was published, a larger LTS sample and more advanced analytical methods have become available. In the current analyses, Cholesky decomposition was applied to behavioral data (n = 1231) from twins aged 6-36 months. Different patterns of genetic continuity vs genetic innovation were identified for each IBR scale. Single common genetic and shared environmental factors explained cross-age twin similarity in the Activity scale. Multiple shared environmental factors and a single genetic factor coming on line at age 18 months contributed to Affect-Extraversion. A single shared environmental factor and multiple genetic factors explained cross-age twin similarity in Task Orientation.
NASA Astrophysics Data System (ADS)
O’Donoghue, D.; Frizzell, R.; Punch, J.
2018-07-01
Vibration energy harvesters (VEHs) offer an alternative to batteries for the autonomous operation of low-power electronics. Understanding the influence of scaling on VEHs is of great importance in the design of reduced scale harvesters. The nonlinear harvesters investigated here employ velocity amplification, a technique used to increase velocity through impacts, to improve the power output of multiple-degree-of-freedom VEHs, compared to linear resonators. Such harvesters, employing electromagnetic induction, are referred to as velocity amplified electromagnetic generators (VAEGs), with gains in power achieved by increasing the relative velocity between the magnet and coil in the transducer. The influence of scaling on a nonlinear 2-DoF VAEG is presented. Due to the increased complexity of VAEGs, compared to linear systems, linear scaling theory cannot be directly applied to VAEGs. Therefore, a detailed nonlinear scaling method is utilised. Experimental and numerical methods are employed. This nonlinear scaling method can be used for analysing the scaling behaviour of all nonlinear electromagnetic VEHs. It is demonstrated that the electromagnetic coupling coefficient degrades more rapidly with scale for systems with larger displacement amplitudes, meaning that systems operating at low frequencies will scale poorly compared to those operating at higher frequencies. The load power of the 2-DoF VAEG is predicted to scale as P_L ∝ s^5.51 (where s = volume^(1/3)), suggesting that achieving high power densities in a VAEG with low device volume is extremely challenging.
Development of Underwater Laser Scaling Adapter
NASA Astrophysics Data System (ADS)
Bluss, Kaspars
2012-12-01
In this paper the developed laser scaling adapter is presented. The scaling adapter is equipped with a twin laser unit whose two parallel laser beams are projected onto any target, giving an exact indication of scale. The body of the laser scaling adapter is made of Teflon, whose density is approximately twice that of water. The development involved multiple challenges, including numerical hydrodynamic calculations for choosing an appropriate shape to reduce the effects of turbulence, and an accurate sealing of the power supply and the laser diodes. The precision is estimated by the method of partial derivatives. Both experimental and theoretical data indicate that the overall precision error is within a 1% margin. This paper presents the development steps of such an underwater laser scaling adapter for a remotely operated vehicle (ROV).
Kebede, Abiy S; Nicholls, Robert J; Allan, Andrew; Arto, Iñaki; Cazcarro, Ignacio; Fernandes, Jose A; Hill, Chris T; Hutton, Craig W; Kay, Susan; Lázár, Attila N; Macadam, Ian; Palmer, Matthew; Suckall, Natalie; Tompkins, Emma L; Vincent, Katharine; Whitehead, Paul W
2018-09-01
To better anticipate potential impacts of climate change, diverse information about the future is required, including climate, society and economy, and adaptation and mitigation. To address this need, a global RCP (Representative Concentration Pathways), SSP (Shared Socio-economic Pathways), and SPA (Shared climate Policy Assumptions) (RCP-SSP-SPA) scenario framework has been developed by the Intergovernmental Panel on Climate Change Fifth Assessment Report (IPCC-AR5). Application of this full global framework at sub-national scales introduces two key challenges: added complexity in capturing the multiple dimensions of change, and issues of scale. Perhaps for this reason, there are few such applications of this new framework. Here, we present an integrated multi-scale hybrid scenario approach that combines both expert-based and participatory methods. The framework has been developed and applied within the DECCMA project with the purpose of exploring migration and adaptation in three deltas across West Africa and South Asia: (i) the Volta delta (Ghana), (ii) the Mahanadi delta (India), and (iii) the Ganges-Brahmaputra-Meghna (GBM) delta (Bangladesh/India). Using a climate scenario that encompasses a wide range of impacts (RCP8.5) combined with three SSP-based socio-economic scenarios (SSP2, SSP3, SSP5), we generate highly divergent and challenging scenario contexts across multiple scales against which robustness of the human and natural systems within the deltas are tested. In addition, we consider four distinct adaptation policy trajectories: Minimum intervention, Economic capacity expansion, System efficiency enhancement, and System restructuring, which describe alternative future bundles of adaptation actions/measures under different socio-economic trajectories. The paper highlights the importance of multi-scale (combined top-down and bottom-up) and participatory (joint expert-stakeholder) scenario methods for addressing uncertainty in adaptation decision-making. The framework facilitates improved integrated assessments of the potential impacts and plausible adaptation policy choices (including migration) under uncertain future changing conditions. The concept, methods, and processes presented are transferable to other sub-national socio-ecological settings with multi-scale challenges.
Kubsik, Anna; Klimkiewicz, Paulina; Klimkiewicz, Robert; Jankowska, Katarzyna; Jankowska, Agnieszka; Woldańska-Okońska, Marta
2014-07-01
Multiple sclerosis is a chronic, inflammatory, demyelinating disease of the central nervous system characterized by diverse symptomatology. It most often affects people at a young age, gradually leading to disability. New therapies are being sought to alleviate the neurological deficits caused by the disease; one alternative method is high-tone power therapy. This article compares high-tone power therapy with kinesitherapy in the rehabilitation of patients with multiple sclerosis. The aim of this study was to evaluate the effectiveness of high-tone power therapy and kinesitherapy exercises on the functional status of patients with multiple sclerosis. The study involved 20 patients with multiple sclerosis, of both sexes, treated at the Department of Rehabilitation and Physical Medicine in Lodz. Patients were randomly divided into two study groups. In group I, high-tone power therapy was applied for 60 minutes, while group II performed kinesitherapy exercises. The treatment period for both groups was 15 days. Functional status was assessed with the Expanded Disability Status Scale of Kurtzke (EDSS) and the Barthel ADL Index. Quality of life was assessed using the MSQOL-54 questionnaire, gait and balance with the Tinetti scale, pain with the VAS and Laitinen scales, and changes in muscle tone with the Ashworth scale. Both groups improved on the scales administered before and after therapy. Group I, which received high-tone power therapy, showed statistically significant improvement in 9 of 10 tested parameters, while group II, which performed kinesitherapy exercises, improved in 6 of 10 tested parameters. Comparison of results between the two groups showed no statistically significant differences. High-tone power therapy has a beneficial effect on the functional status of patients with multiple sclerosis, and the improvement across so many tested parameters supports its use in the comprehensive rehabilitation of these patients. Kinesitherapy exercises also have a favorable impact on the functional status of patients with MS and remain essential in their rehabilitation. No adverse effects were observed in either group.
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; MacEachren, Alan M
2008-01-01
Background Kulldorff's spatial scan statistic and its software implementation – SaTScan – are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. Results We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. Conclusion The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. Method We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit. PMID:18992163
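The reliability scoring that underpins the visualization can be stated very compactly: run SaTScan many times under varied scaling parameters, record which counties fall in significant clusters, and average. The boolean-matrix input below is a simplifying assumption about how run outputs are stored.

import numpy as np

def reliability(membership):
    # membership: (n_runs, n_counties) boolean; True where a county lies
    # inside a statistically significant cluster in that run.
    return membership.mean(axis=0)   # 1.0 = stable across all analysis scales

# Counties with high reliability and homogeneous risk values are candidates
# for stable, homogeneous clusters in the sense described above.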
Passive detection of vehicle loading
NASA Astrophysics Data System (ADS)
McKay, Troy R.; Salvaggio, Carl; Faulring, Jason W.; Salvaggio, Philip S.; McKeown, Donald M.; Garrett, Alfred J.; Coleman, David H.; Koffman, Larry D.
2012-01-01
The Digital Imaging and Remote Sensing Laboratory (DIRS) at the Rochester Institute of Technology, along with the Savannah River National Laboratory, is investigating passive methods to quantify vehicle loading. The research described in this paper investigates multiple vehicle indicators including brake temperature, tire temperature, engine temperature, acceleration and deceleration rates, engine acoustics, suspension response, tire deformation and vibrational response. Our investigation into these variables includes building and implementing a sensing system for data collection as well as multiple full-scale vehicle tests. The sensing system includes infrared video cameras, triaxial accelerometers, microphones, video cameras and thermocouples. The full-scale testing includes both a medium size dump truck and a tractor-trailer truck on closed courses with loads spanning the full range of the vehicle's capacity. Statistical analysis of the collected data is used to determine the effectiveness of each of the indicators for characterizing the weight of a vehicle. The final sensing system will monitor multiple load indicators and combine the results to achieve a more accurate measurement than any of the indicators could provide alone.
Evaluating single-pass catch as a tool for identifying spatial pattern in fish distribution
Bateman, Douglas S.; Gresswell, Robert E.; Torgersen, Christian E.
2005-01-01
We evaluate the efficacy of single-pass electrofishing without blocknets as a tool for collecting spatially continuous fish distribution data in headwater streams. We compare spatial patterns in abundance, sampling effort, and length-frequency distributions from single-pass sampling of coastal cutthroat trout (Oncorhynchus clarki clarki) to data obtained from a more precise multiple-pass removal electrofishing method in two mid-sized (500–1000 ha) forested watersheds in western Oregon. Abundance estimates from single- and multiple-pass removal electrofishing were positively correlated in both watersheds, r = 0.99 and 0.86. There were no significant trends in capture probabilities at the watershed scale (P > 0.05). Moreover, among-sample variation in fish abundance was higher than within-sample error in both streams indicating that increased precision of unit-scale abundance estimates would provide less information on patterns of abundance than increasing the fraction of habitat units sampled. In the two watersheds, respectively, single-pass electrofishing captured 78 and 74% of the estimated population of cutthroat trout with 7 and 10% of the effort. At the scale of intermediate-sized watersheds, single-pass electrofishing exhibited a sufficient level of precision to be effective in detecting spatial patterns of cutthroat trout abundance and may be a useful tool for providing the context for investigating fish-habitat relationships at multiple scales.
Evaluating Hierarchical Structure in Music Annotations
McFee, Brian; Nieto, Oriol; Farbood, Morwaread M.; Bello, Juan Pablo
2017-01-01
Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement. PMID:28824514
Multivariate analysis of scale-dependent associations between bats and landscape structure
Gorresen, P.M.; Willig, M.R.; Strauss, R.E.
2005-01-01
The assessment of biotic responses to habitat disturbance and fragmentation generally has been limited to analyses at a single spatial scale. Furthermore, methods to compare responses between scales have lacked the ability to discriminate among patterns related to the identity, strength, or direction of associations of biotic variables with landscape attributes. We present an examination of the relationship of population- and community-level characteristics of phyllostomid bats with habitat features that were measured at multiple spatial scales in Atlantic rain forest of eastern Paraguay. We used a matrix of partial correlations between each biotic response variable (i.e., species abundance, species richness, and evenness) and a suite of landscape characteristics to represent the multifaceted associations of bats with spatial structure. Correlation matrices can correspond based on either the strength (i.e., magnitude) or direction (i.e., sign) of association. Therefore, a simulation model independently evaluated correspondence in the magnitude and sign of correlations among scales, and results were combined via a meta-analysis to provide an overall test of significance. Our approach detected both species-specific differences in response to landscape structure and scale dependence in those responses. This matrix-simulation approach has broad applicability to ecological situations in which multiple intercorrelated factors contribute to patterns in space or time.
Measures of Agreement Between Many Raters for Ordinal Classifications
Nelson, Kerrie P.; Edwards, Don
2015-01-01
Screening and diagnostic procedures often require a physician's subjective interpretation of a patient's test result using an ordered categorical scale to define the patient's disease severity. Due to wide variability observed between physicians' ratings, many large-scale studies have been conducted to quantify agreement between multiple experts' ordinal classifications in common diagnostic procedures such as mammography. However, very few statistical approaches are available to assess agreement in these large-scale settings. Existing summary measures of agreement rely on extensions of Cohen's kappa [1-5]. These are prone to prevalence and marginal distribution issues, become increasingly complex for more than three experts, or are not easily implemented. Here we propose a model-based approach to assess agreement in large-scale studies based upon a framework of ordinal generalized linear mixed models. A summary measure of agreement is proposed for multiple experts assessing the same sample of patients' test results according to an ordered categorical scale. This measure avoids some of the key flaws associated with Cohen's kappa and its extensions. Simulation studies are conducted to demonstrate the validity of the approach with comparison to commonly used agreement measures. The proposed methods are easily implemented using the software package R and are applied to two large-scale cancer agreement studies. PMID:26095449
Improving Unstructured Mesh Partitions for Multiple Criteria Using Mesh Adjacencies
Smith, Cameron W.; Rasquin, Michel; Ibanez, Dan; ...
2018-02-13
The scalability of unstructured mesh based applications depends on partitioning methods that quickly balance the computational work while reducing communication costs. Zhou et al. [SIAM J. Sci. Comput., 32 (2010), pp. 3201-3227; J. Supercomput., 59 (2012), pp. 1218-1228] demonstrated the combination of (hyper)graph methods with vertex and element partition improvement for PHASTA CFD scaling to hundreds of thousands of processes. Our work generalizes partition improvement to support balancing combinations of all the mesh entity dimensions (vertices, edges, faces, regions) in partitions with imbalances exceeding 70%. Improvement results are then presented for multiple entity dimensions on up to one million processes on meshes with over 12 billion tetrahedral elements.
An Efficient Method for Classifying Perfectionists
ERIC Educational Resources Information Center
Rice, Kenneth G.; Ashby, Jeffrey S.
2007-01-01
Multiple samples of university students (N = 1,537) completed the Almost Perfect Scale-Revised (APS-R; R. B. Slaney, M. Mobley, J. Trippi, J. Ashby, & D. G. Johnson, 1996). Cluster analyses, cross-validated discriminant function analyses, and receiver operating characteristic curves for sensitivity and specificity of APS-R scores were used to…
Dark Solitons in FPU Lattice Chain
NASA Astrophysics Data System (ADS)
Wang, Deng-Long; Yang, Ru-Shu; Yang, You-Tian
2007-11-01
Based on the multiple scales method, we analytically study the nonlinear properties of a new Fermi-Pasta-Ulam lattice model. It is found that the lattice chain exhibits a novel nonlinear elementary excitation, i.e. a dark soliton. Moreover, the modulation depth of the dark soliton increases as the anharmonic parameter increases.
The U.S. Environmental Protection Agency is currently developing methods to quantify freshwater fisheries services (e.g., standing-stock abundance and/or biomass) at multiple spatial scales, and to forecast their future distributions. One approach uses linked, ecosystem process ...
USDA-ARS's Scientific Manuscript database
Long-term studies of agro-ecosystems at the continental scale are providing an extraordinary understanding of regional environmental dynamics. The new Long-Term Agro-ecosystem Research (LTAR) network (established in 2013) has designed an explicit research program with multiple USDA experimental wat...
Metric Scale Calculation for Visual Mapping Algorithms
NASA Astrophysics Data System (ADS)
Hanel, A.; Mitschke, A.; Boerner, R.; Van Opdenbosch, D.; Hoegner, L.; Brodie, D.; Stilla, U.
2018-05-01
Visual SLAM algorithms allow localizing the camera by mapping its environment by a point cloud based on visual cues. To obtain the camera locations in a metric coordinate system, the metric scale of the point cloud has to be known. This contribution describes a method to calculate the metric scale for a point cloud of an indoor environment, like a parking garage, by fusing multiple individual scale values. The individual scale values are calculated from structures and objects with a-priori known metric extension, which can be identified in the unscaled point cloud. Extensions of building structures, like the driving lane or the room height, are derived from density peaks in the point distribution. The extensions of objects, like traffic signs with a known metric size, are derived using projections of their detections in images onto the point cloud. The method is tested with synthetic image sequences of a drive with a front-looking mono camera through a virtual 3D model of a parking garage. It has been shown that each individual scale value improves either the robustness of the fused scale value or reduces its error. The error of the fused scale is comparable to that of other recent works.
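One plausible fusion rule consistent with the reported behavior (each scale value either improves robustness or reduces error) is an inverse-variance weighted mean; the sketch below is an assumption, not the paper's published code.

import numpy as np

def fuse_scales(scales, sigmas):
    # scales: individual metric scale values (e.g., from lane width,
    # room height, traffic-sign size); sigmas: their standard deviations.
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    fused = np.sum(w * np.asarray(scales, dtype=float)) / np.sum(w)
    return fused, np.sqrt(1.0 / np.sum(w))   # fused value and its uncertainty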
Stability analysis of nonlinear autonomous systems - General theory and application to flutter
NASA Technical Reports Server (NTRS)
Smith, L. L.; Morino, L.
1975-01-01
The analysis makes use of a singular perturbation method, the multiple time scaling. Concepts of stable and unstable limit cycles are introduced. The solution is obtained in the form of an asymptotic expansion. Numerical results are presented for the nonlinear flutter of panels and airfoils in supersonic flow. The approach used is an extension of a method for analyzing nonlinear panel flutter reported by Morino (1969).
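For reference, the multiple-time-scaling ansatz used in such analyses has the standard two-timing form sketched below (generic textbook notation, not the authors' own):

\[
x(t) = x_0(t, T) + \epsilon\, x_1(t, T) + \mathcal{O}(\epsilon^2),
\qquad T = \epsilon t,
\qquad
\frac{d}{dt} = \frac{\partial}{\partial t} + \epsilon\, \frac{\partial}{\partial T}.
\]

Suppressing secular terms at order \epsilon forces the amplitude of x_0 to evolve on the slow time T, and the resulting amplitude equation distinguishes stable from unstable limit cycles.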
Robert E. Keane; Stacy A. Drury; Eva C. Karau; Paul F. Hessburg; Keith M. Reynolds
2010-01-01
This paper presents modeling methods for mapping fire hazard and fire risk using a research model called FIREHARM (FIRE Hazard and Risk Model) that computes common measures of fire behavior, fire danger, and fire effects to spatially portray fire hazard over space. FIREHARM can compute a measure of risk associated with the distribution of these measures over time using...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maurer, Simon A.; Clin, Lucien; Ochsenfeld, Christian, E-mail: christian.ochsenfeld@uni-muenchen.de
2014-06-14
Our recently developed QQR-type integral screening is introduced in our Cholesky-decomposed pseudo-densities Møller-Plesset perturbation theory of second order (CDD-MP2) method. We use the resolution-of-the-identity (RI) approximation in combination with efficient integral transformations employing sparse matrix multiplications. The RI-CDD-MP2 method shows an asymptotic cubic scaling behavior with system size and a small prefactor that results in an early crossover to conventional methods for both small and large basis sets. We also explore the use of local fitting approximations which allow to further reduce the scaling behavior for very large systems. The reliability of our method is demonstrated on test sets for interaction and reaction energies of medium sized systems and on a diverse selection from our own benchmark set for total energies of larger systems. Timings on DNA systems show that fast calculations for systems with more than 500 atoms are feasible using a single processor core. Parallelization extends the range of accessible system sizes on one computing node with multiple cores to more than 1000 atoms in a double-zeta basis and more than 500 atoms in a triple-zeta basis.
A two-layer multiple-time-scale turbulence model and grid independence study
NASA Technical Reports Server (NTRS)
Kim, S.-W.; Chen, C.-P.
1989-01-01
A two-layer multiple-time-scale turbulence model is presented. The near-wall model is based on the classical Kolmogorov-Prandtl turbulence hypothesis and the semi-empirical logarithmic law of the wall. In the two-layer model presented, the computational domain of the conservation of mass equation and the mean momentum equation extends to the wall, where the no-slip boundary condition is prescribed; the near-wall boundary of the turbulence equations is located in the fully turbulent region, yet very close to the wall, where the standard wall function method is applied. Thus, the conservation of mass constraint can be satisfied more rigorously in the two-layer model than in the standard wall function method. In most two-layer turbulence models, the number of grid points to be used inside the near-wall layer poses an issue of computational efficiency. The present finite element computational results showed that grid-independent solutions were obtained with as few as two grid points, i.e., one quadratic element, inside the near-wall layer. A comparison of the computational results obtained by using the two-layer model and those obtained by using the wall function method is also presented.
Teng, Xian; Pei, Sen; Morone, Flaviano; Makse, Hernán A
2016-10-26
Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a scalable method called "Collective Influence (CI)" has been put forward through collective influence maximization. In contrast to heuristic methods that evaluate nodes' significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI applies to the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social and scientific platforms including the American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from data to construct "virtual" information spreading processes. Our results demonstrate that the set of spreaders selected by CI can induce a larger scale of information propagation. Moreover, local measures such as the number of connections or citations are not necessarily the deterministic factors of nodes' importance in realistic information spreading. This result has significance for ranking scientists in scientific networks like the APS, where the commonly used number of citations can be a poor indicator of the collective influence of authors in the community.
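For concreteness, the CI score examined here is, in the formulation of Morone and Makse, CI_l(i) = (k_i - 1) multiplied by the sum of (k_j - 1) over nodes j on the boundary of the ball of radius l around i. A small networkx sketch (function name ours) follows.

import networkx as nx

def collective_influence(G, node, radius=2):
    # Shortest-path distances from the node, truncated at the ball radius.
    dist = nx.single_source_shortest_path_length(G, node, cutoff=radius)
    frontier = [j for j, d in dist.items() if d == radius]
    return (G.degree(node) - 1) * sum(G.degree(j) - 1 for j in frontier)

# The original algorithm removes the top-CI node and recomputes adaptively;
# that adaptive loop is omitted in this sketch.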
New methods for indexing multi-lattice diffraction data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gildea, Richard J.; Waterman, David G.; CCP4, Research Complex at Harwell, Rutherford Appleton Laboratory, Didcot OX11 0FA
2014-10-01
A new indexing method is presented which is capable of indexing multiple crystal lattices from narrow wedges of diffraction data. The method takes advantage of a simplification of Fourier transform-based methods that is applicable when the unit-cell dimensions are known a priori. The efficacy of this method is demonstrated with both semi-synthetic multi-lattice data and real multi-lattice data recorded from microcrystals of ∼1 µm in size, where it is shown that up to six lattices can be successfully indexed and subsequently integrated from a 1° wedge of data. Analysis is presented which shows that improvements in data-quality indicators can be obtained through accurate identification and rejection of overlapping reflections prior to scaling.
Zhu, Xiang; Stephens, Matthew
2017-01-01
Bayesian methods for large-scale multiple regression provide attractive approaches to the analysis of genome-wide association studies (GWAS). For example, they can estimate heritability of complex traits, allowing for both polygenic and sparse models; and by incorporating external genomic data into the priors, they can increase power and yield new biological insights. However, these methods require access to individual genotypes and phenotypes, which are often not easily available. Here we provide a framework for performing these analyses without individual-level data. Specifically, we introduce a “Regression with Summary Statistics” (RSS) likelihood, which relates the multiple regression coefficients to univariate regression results that are often easily available. The RSS likelihood requires estimates of correlations among covariates (SNPs), which also can be obtained from public databases. We perform Bayesian multiple regression analysis by combining the RSS likelihood with previously proposed prior distributions, sampling posteriors by Markov chain Monte Carlo. In a wide range of simulations RSS performs similarly to analyses using the individual-level data, both for estimating heritability and for detecting associations. We apply RSS to a GWAS of human height that contains 253,288 individuals typed at 1.06 million SNPs, for which analyses of individual-level data are practically impossible. Estimates of heritability (52%) are consistent with, but more precise than, previous results using subsets of these data. We also identify many previously unreported loci that show evidence for association with height in our analyses. Software is available at https://github.com/stephenslab/rss. PMID:29399241
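The abstract's key object can be written down compactly: with S = diag(ŝ) the standard errors and R the SNP correlation matrix, RSS models β̂ ~ N(S R S⁻¹ β, S R S). A small sketch of that likelihood (dense linear algebra for clarity; GWAS-scale code would use banded or sparse R):

```python
import numpy as np
from scipy.stats import multivariate_normal

def rss_loglik(beta, betahat, se, R):
    """Log-likelihood of the RSS model: betahat ~ N(S R S^-1 beta, S R S),
    with S = diag(se) and R the SNP correlation (LD) matrix."""
    S = np.diag(se)
    S_inv = np.diag(1.0 / np.asarray(se))
    mean = S @ R @ S_inv @ beta
    cov = S @ R @ S
    return multivariate_normal.logpdf(betahat, mean=mean, cov=cov)
```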
Fluctuation scaling in the visual cortex at threshold
NASA Astrophysics Data System (ADS)
Medina, José M.; Díaz, José A.
2016-05-01
Fluctuation scaling relates trial-to-trial variability to the average response by a power function in many physical processes. Here we address whether fluctuation scaling holds in sensory psychophysics and its functional role in visual processing. We report experimental evidence of fluctuation scaling in human color vision and form perception at threshold. Subjects detected thresholds in a psychophysical masking experiment that is considered a standard reference for studying suppression between neurons in the visual cortex. For all subjects, the analysis of threshold variability that results from the masking task indicates that fluctuation scaling is a global property that modulates detection thresholds with a scaling exponent that departs from 2, β = 2.48 ± 0.07. We also examine a generalized version of fluctuation scaling between the sample kurtosis K and the sample skewness S of threshold distributions. We find that K and S are related and follow a unique quadratic form K = (1.19 ± 0.04)S² + (2.68 ± 0.06) that departs from the expected 4/3 power-function regime. A random multiplicative process with weak additive noise is proposed based on a Langevin-type equation. The multiplicative process provides a unifying description of fluctuation scaling and the quadratic S-K relation and is related to on-off intermittency in sensory perception. Our findings provide an insight into how the human visual system interacts with the external environment. The theoretical methods open perspectives for investigating fluctuation scaling and intermittency effects in a wide variety of natural, economic, and cognitive phenomena.
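Fluctuation scaling of this kind (Taylor's law) is typically estimated by regressing log-variance on log-mean across conditions. A minimal sketch (function name illustrative):

```python
import numpy as np

def fluctuation_scaling_exponent(trials):
    """Fit var = a * mean**beta across conditions: 'trials' is a list of
    1-D arrays of repeated threshold measurements, one per condition."""
    mu = np.array([t.mean() for t in trials])
    var = np.array([t.var(ddof=1) for t in trials])
    beta, log_a = np.polyfit(np.log(mu), np.log(var), 1)
    return beta, np.exp(log_a)
```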
Discontinuities, cross-scale patterns, and the organization of ecosystems
Nash, Kirsty L.; Allen, Craig R.; Angeler, David G.; Barichievy, Chris; Eason, Tarsha; Garmestani, Ahjond S.; Graham, Nicholas A.J.; Granholm, Dean; Knutson, Melinda; Nelson, R. John; Nystrom, Magnus; Stow, Craig A.; Sandstrom, Shana M.
2014-01-01
Ecological structures and processes occur at specific spatiotemporal scales, and interactions that occur across multiple scales mediate scale-specific (e.g., individual, community, local, or regional) responses to disturbance. Despite the importance of scale, explicitly incorporating a multi-scale perspective into research and management actions remains a challenge. The discontinuity hypothesis provides a fertile avenue for addressing this problem by linking measurable proxies to inherent scales of structure within ecosystems. Here we outline the conceptual framework underlying discontinuities and review the evidence supporting the discontinuity hypothesis in ecological systems. Next we explore the utility of this approach for understanding cross-scale patterns and the organization of ecosystems by describing recent advances for examining nonlinear responses to disturbance and phenomena such as extinctions, invasions, and resilience. To stimulate new research, we present methods for performing discontinuity analysis, detail outstanding knowledge gaps, and discuss potential approaches for addressing these gaps.
Chen, Jin; Roth, Robert E; Naito, Adam T; Lengerich, Eugene J; Maceachren, Alan M
2008-11-07
Kulldorff's spatial scan statistic and its software implementation - SaTScan - are widely used for detecting and evaluating geographic clusters. However, two issues make using the method and interpreting its results non-trivial: (1) the method lacks cartographic support for understanding the clusters in geographic context and (2) results from the method are sensitive to parameter choices related to cluster scaling (abbreviated as scaling parameters), but the system provides no direct support for making these choices. We employ both established and novel geovisual analytics methods to address these issues and to enhance the interpretation of SaTScan results. We demonstrate our geovisual analytics approach in a case study analysis of cervical cancer mortality in the U.S. We address the first issue by providing an interactive visual interface to support the interpretation of SaTScan results. Our research to address the second issue prompted a broader discussion about the sensitivity of SaTScan results to parameter choices. Sensitivity has two components: (1) the method can identify clusters that, while being statistically significant, have heterogeneous contents comprised of both high-risk and low-risk locations and (2) the method can identify clusters that are unstable in location and size as the spatial scan scaling parameter is varied. To investigate cluster result stability, we conducted multiple SaTScan runs with systematically selected parameters. The results, when scanning a large spatial dataset (e.g., U.S. data aggregated by county), demonstrate that no single spatial scan scaling value is known to be optimal to identify clusters that exist at different scales; instead, multiple scans that vary the parameters are necessary. We introduce a novel method of measuring and visualizing reliability that facilitates identification of homogeneous clusters that are stable across analysis scales. Finally, we propose a logical approach to proceed through the analysis of SaTScan results. The geovisual analytics approach described in this manuscript facilitates the interpretation of spatial cluster detection methods by providing cartographic representation of SaTScan results and by providing visualization methods and tools that support selection of SaTScan parameters. Our methods distinguish between heterogeneous and homogeneous clusters and assess the stability of clusters across analytic scales. We analyzed the cervical cancer mortality data for the United States aggregated by county between 2000 and 2004. We ran SaTScan on the dataset fifty times with different parameter choices. Our geovisual analytics approach couples SaTScan with our visual analytic platform, allowing users to interactively explore and compare SaTScan results produced by different parameter choices. The Standardized Mortality Ratio and reliability scores are visualized for all the counties to identify stable, homogeneous clusters. We evaluated our analysis result by comparing it to that produced by other independent techniques including the Empirical Bayes Smoothing and Kafadar spatial smoother methods. The geovisual analytics approach introduced here is developed and implemented in our Java-based Visual Inquiry Toolkit.
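The reliability measure described, stability of cluster membership across parameter sweeps, reduces to a simple computation once the per-run cluster memberships are exported. A toy sketch (array layout assumed):

```python
import numpy as np

def reliability_scores(membership):
    """membership[r, c] = 1 if county c fell inside a significant cluster
    in SaTScan run r (runs differ only in their scaling parameters).
    The reliability of county c is the fraction of runs that flag it."""
    return np.asarray(membership, dtype=float).mean(axis=0)

# Scores near 1 mark clusters stable across analysis scales; intermediate
# scores flag parameter-sensitive (and possibly heterogeneous) clusters.
```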
Effects of Ensemble Configuration on Estimates of Regional Climate Uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldenson, N.; Mauger, G.; Leung, L. R.
Internal variability in the climate system can contribute substantial uncertainty in climate projections, particularly at regional scales. Internal variability can be quantified using large ensembles of simulations that are identical but for perturbed initial conditions. Here we compare methods for quantifying internal variability. Our study region spans the west coast of North America, which is strongly influenced by El Niño and other large-scale dynamics through their contribution to large-scale internal variability. Using a statistical framework to simultaneously account for multiple sources of uncertainty, we find that internal variability can be quantified consistently using a large ensemble or an ensemble of opportunity that includes small ensembles from multiple models and climate scenarios. The latter also produces estimates of uncertainty due to model differences. We conclude that projection uncertainties are best assessed using small single-model ensembles from as many model-scenario pairings as computationally feasible, which has implications for ensemble design in large modeling efforts.
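A toy version of the variance decomposition behind such comparisons (an illustration of the idea only, not the paper's statistical framework):

```python
import numpy as np

def variance_components(ens):
    """ens[m] is an array of one scalar projection per member of model m.
    Internal variability: spread across members within a model, averaged
    over models. Model uncertainty: spread of the model means."""
    internal = float(np.mean([np.var(members, ddof=1) for members in ens]))
    model = float(np.var([np.mean(members) for members in ens], ddof=1))
    return internal, model
```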
Xu, Jiuping; Feng, Cuiying
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
Methods, caveats and the future of large-scale microelectrode recordings in the non-human primate
Dotson, Nicholas M.; Goodell, Baldwin; Salazar, Rodrigo F.; Hoffman, Steven J.; Gray, Charles M.
2015-01-01
Cognitive processes play out on massive brain-wide networks, which produce widely distributed patterns of activity. Capturing these activity patterns requires tools that are able to simultaneously measure activity from many distributed sites with high spatiotemporal resolution. Unfortunately, current techniques with adequate coverage do not provide the requisite spatiotemporal resolution. Large-scale microelectrode recording devices, with dozens to hundreds of microelectrodes capable of simultaneously recording from nearly as many cortical and subcortical areas, provide a potential way to minimize these tradeoffs. However, placing hundreds of microelectrodes into a behaving animal is a highly risky and technically challenging endeavor that has only been pursued by a few groups. Recording activity from multiple electrodes simultaneously also introduces several statistical and conceptual dilemmas, such as the multiple comparisons problem and the uncontrolled stimulus response problem. In this perspective article, we discuss some of the techniques that we, and others, have developed for collecting and analyzing large-scale data sets, and address the future of this emerging field. PMID:26578906
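One of the statistical dilemmas mentioned, the multiple comparisons problem, is commonly handled with false-discovery-rate control when hundreds of channels are tested at once. A standard Benjamini-Hochberg sketch:

```python
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Step-up BH procedure: returns a boolean mask of discoveries with
    the false discovery rate controlled at level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    passed = p[order] <= q * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask
```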
Han, Zhongyi; Wei, Benzheng; Leung, Stephanie; Nachum, Ilanit Ben; Laidley, David; Li, Shuo
2018-02-15
Pathogenesis-based diagnosis is a key step to prevent and control lumbar neural foraminal stenosis (LNFS). It conducts both early diagnosis and comprehensive assessment by drawing crucial pathological links between pathogenic factors and LNFS. Automated pathogenesis-based diagnosis would simultaneously localize and grade multiple spinal organs (neural foramina, vertebrae, intervertebral discs) to diagnose LNFS and discover pathogenic factors. The automated approach facilitates planning optimal therapeutic schedules and relieves clinicians from laborious workloads. However, no successful work has been achieved yet due to the extreme challenges involved: 1) multiple targets: each lumbar spine has at least 17 target organs; 2) multiple scales: each type of target organ has structural complexity and various scales across subjects; and 3) multiple tasks: the simultaneous localization and diagnosis of all lumbar organs is far more difficult than the individual tasks. To address these challenges, we propose a deep multiscale multitask learning network (DMML-Net) integrating multiscale multi-output learning and multitask regression learning into a fully convolutional network. 1) DMML-Net merges semantic representations to reinforce the salience of numerous target organs. 2) DMML-Net extends multiscale convolutional layers as multiple output layers to boost the scale-invariance for various organs. 3) DMML-Net joins a multitask regression module and a multitask loss module to promote the mutual benefit between tasks. Extensive experimental results demonstrate that DMML-Net achieves high performance (0.845 mean average precision) on T1/T2-weighted MRI scans from 200 subjects. This makes our method an efficient tool for clinical LNFS diagnosis.
NASA Astrophysics Data System (ADS)
Chia, Nicholas; Bundschuh, Ralf
2005-11-01
In the universality class of the one-dimensional Kardar-Parisi-Zhang (KPZ) surface growth, Derrida and Lebowitz conjectured the universality of not only the scaling exponents, but of an entire scaling function. Since Derrida and Lebowitz's original publication [Phys. Rev. Lett. 80, 209 (1998)] this universality has been verified for a variety of continuous-time, periodic-boundary systems in the KPZ universality class. Here, we present a numerical method for directly examining the entire particle flux of the asymmetric exclusion process (ASEP), thus providing an alternative to more difficult cumulant ratio studies. Using this method, we find that the Derrida-Lebowitz scaling function (DLSF) properly characterizes the large-system-size limit (N→∞) of a single-particle discrete-time system, even in the case of very small system sizes (N⩽22). This fact allows us not only to verify that the DLSF properly characterizes multiple-particle discrete-time asymmetric exclusion processes, but also to numerically solve for quantities of interest, such as the particle hopping flux. This method can thus serve to further increase the ease and accessibility of studies involving even more challenging dynamics, such as the open-boundary ASEP.
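The quantity being characterized, the particle flux of the ASEP, is easy to estimate by direct simulation. A toy random-sequential TASEP on a ring (the paper's exact discrete-time update rule may differ):

```python
import numpy as np

def tasep_flux(L=22, density=0.5, steps=200000, seed=0):
    """Estimate the stationary hop rate of a TASEP ring: pick a random
    bond each step; a particle hops right if the target site is empty."""
    rng = np.random.default_rng(seed)
    sites = np.zeros(L, dtype=int)
    sites[rng.choice(L, int(density * L), replace=False)] = 1
    hops = 0
    for _ in range(steps):
        i = rng.integers(L)
        j = (i + 1) % L
        if sites[i] == 1 and sites[j] == 0:
            sites[i], sites[j] = 0, 1
            hops += 1
    return hops / steps  # successful hops per attempted update
```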
2010-01-01
Background Fatigue is a common and debilitating symptom in multiple sclerosis (MS). Best-practice guidelines suggest that health services should repeatedly assess fatigue in persons with MS. Several fatigue scales are available but concern has been expressed about their validity. The objective of this study was to examine the reliability and validity of a new scale for MS fatigue, the Neurological Fatigue Index (NFI-MS). Methods Qualitative analysis of 40 MS patient interviews had previously contributed to a coherent definition of fatigue, and a potential 52 item set representing the salient themes. A draft questionnaire was mailed out to 1223 people with MS, and the resulting data subjected to both factor and Rasch analysis. Results Data from 635 (51.9% response) respondents were split randomly into an 'evaluation' and 'validation' sample. Exploratory factor analysis identified four potential subscales: 'physical', 'cognitive', 'relief by diurnal sleep or rest' and 'abnormal nocturnal sleep and sleepiness'. Rasch analysis led to further item reduction and the generation of a Summary scale comprising items from the Physical and Cognitive subscales. The scales were shown to fit Rasch model expectations, across both the evaluation and validation samples. Conclusion A simple 10-item Summary scale, together with scales measuring the physical and cognitive components of fatigue, were validated for MS fatigue. PMID:20152031
Scale in Remote Sensing and GIS: An Advancement in Methods Towards a Science of Scale
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.
1998-01-01
The term "scale", both in space and time, is central to remote sensing and geographic information systems (GIS). The emergence and widespread use of GIS technologies, including remote sensing, has generated significant interest in addressing scale as a generic topic, and in the development and implementation of techniques for dealing explicitly with the vicissitudes of scale as a multidisciplinary issue. As science becomes more complex and utilizes databases that are capable of performing complex space-time data analyses, it becomes paramount that we develop the tools and techniques needed to operate at multiple scales, to work with data whose scales are not necessarily ideal, and to produce results that can be aggregated or disaggregated in ways that suit the decision-making process. Contemporary science is constantly coping with compromises, and the data available for a particular study rarely fit perfectly with the scales at which the processes being investigated operate, or the scales that policy-makers require to make sound, rational decisions. This presentation discusses some of the problems associated with scale as related to remote sensing and GIS, and describes some of the questions that need to be addressed in approaching the development of a multidisciplinary "science of scale". Techniques for dealing with multiple scaled data that have been developed or explored recently are described as a means for recognizing scale as a generic issue, along with associated theory and tools that can be of simultaneous value to a large number of disciplines. These can be used to seek answers to a host of interrelated questions in the interest of providing a formal structure for the management and manipulation of scale and its universality as a key concept from a multidisciplinary perspective.
Ishihara, Koji; Morimoto, Jun
2018-03-01
Humans use multiple muscles to generate such joint movements as an elbow motion. With multiple lightweight and compliant actuators, joint movements can also be generated efficiently. Similarly, robots can use multiple actuators to efficiently generate a one-degree-of-freedom movement. For this movement, the desired joint torque must be properly distributed to each actuator. One approach to cope with this torque distribution problem is an optimal control method. However, solving the optimal control problem at each control time step has not been deemed a practical approach due to its large computational burden. In this paper, we propose a computationally efficient method to derive an optimal control strategy for a hybrid actuation system composed of multiple actuators, where each actuator has different dynamical properties. We investigated a singularly perturbed system of the hybrid actuator model that subdivides the original large-scale control problem into smaller subproblems, so that the optimal control outputs for each actuator can be derived at each control time step, and applied the proposed method to our pneumatic-electric hybrid actuator system. Our method derived a torque distribution strategy for the hybrid actuator while sidestepping the difficulty of solving real-time optimal control problems. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
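The flavor of the torque distribution subproblem can be seen in a static special case: minimizing weighted actuator effort subject to meeting a desired joint torque has a closed-form split (this toy ignores the dynamics that the paper's singular perturbation treatment handles):

```python
import numpy as np

def distribute_torque(tau_des, weights):
    """Minimize sum_i w_i * tau_i**2 subject to sum_i tau_i = tau_des.
    The optimum allocates torque in inverse proportion to each
    actuator's effort weight: tau_i = tau_des * (1/w_i) / sum_j (1/w_j)."""
    inv_w = 1.0 / np.asarray(weights, dtype=float)
    return tau_des * inv_w / inv_w.sum()

# A strong, cheap-to-use actuator (low weight) takes most of the load:
print(distribute_torque(10.0, [1.0, 4.0]))  # -> [8. 2.]
```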
Fast dictionary generation and searching for magnetic resonance fingerprinting.
Jun Xie; Mengye Lyu; Jian Zhang; Hui, Edward S; Wu, Ed X; Ze Wang
2017-07-01
A super-fast dictionary generation and searching (DGS) algorithm was developed for MR parameter quantification using magnetic resonance fingerprinting (MRF). MRF is a new technique for simultaneously quantifying multiple MR parameters using one temporally resolved MR scan, but it has a multiplicative computational complexity, resulting in a heavy burden of dictionary generation, storage, and retrieval that can easily become intractable for state-of-the-art computers. Based on a retrospective analysis of the dictionary matching objective function, a multi-scale ZOOM-like DGS algorithm, dubbed MRF-ZOOM, was proposed. MRF-ZOOM is quasi-parameter-separable, so the multiplicative computational complexity is broken into an additive one. Evaluations showed that MRF-ZOOM was hundreds or thousands of times faster than the original MRF parameter quantification method, even without counting dictionary generation time. Using real data, it yielded nearly the same results as the original method. MRF-ZOOM provides a super-fast solution for MR parameter quantification.
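The coarse-to-fine idea can be sketched generically: match on a sparse parameter grid, then zoom the grid around the best candidate instead of precomputing a dense dictionary. An illustrative sketch (names and the well-behaved match surface are assumptions loosely following the abstract):

```python
import numpy as np

def zoom_match(signal, simulate, t1s, t2s, levels=3, shrink=4.0):
    """Coarse-to-fine dictionary search: 'simulate(t1, t2)' returns a
    unit-norm fingerprint; local refinement finds the matching peak
    when the objective is quasi-separable in the parameters."""
    best = None
    for _ in range(levels):
        scores = {(a, b): float(np.dot(signal, simulate(a, b)))
                  for a in t1s for b in t2s}
        best = max(scores, key=scores.get)
        a, b = best
        h1 = (t1s.max() - t1s.min()) / shrink
        h2 = (t2s.max() - t2s.min()) / shrink
        t1s = np.linspace(max(a - h1, 1e-3), a + h1, len(t1s))
        t2s = np.linspace(max(b - h2, 1e-3), b + h2, len(t2s))
    return best
```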
On the effect of boundary layer growth on the stability of compressible flows
NASA Technical Reports Server (NTRS)
El-Hady, N. M.
1981-01-01
The method of multiple scales is used to describe a formally correct approach, based on nonparallel linear stability theory, for examining the two- and three-dimensional stability of compressible boundary layer flows. The method is applied to the supersonic flat-plate boundary layer at Mach number 4.5. The theoretical growth rates are in good agreement with experimental results. The method is also applied to the infinite-span swept-wing transonic boundary layer with suction to evaluate the effect of the nonparallel flow on the development of crossflow disturbances.
Multiple scales approach to weakly nonparallel and curvature effects: Details for the novice
NASA Technical Reports Server (NTRS)
Singer, Bart A.; Choudhari, Meelan
1995-01-01
A multiple scales approach is used to approximate the effects of nonparallelism and streamwise curvature on the stability of three-dimensional disturbances in incompressible flow. The multiple scales approach is implemented with the full second-order system of equations. A detailed exposition of the source of all terms is provided.
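For readers new to the technique, a compact textbook example (not from either report) shows how the slow scale absorbs secular terms; here, the weakly damped oscillator ü + εu̇ + u = 0:

```latex
Let $T_0 = t$ (fast), $T_1 = \epsilon t$ (slow), and expand
$u = u_0(T_0,T_1) + \epsilon\, u_1(T_0,T_1) + \cdots$
\[
O(1):\qquad \partial_{T_0}^2 u_0 + u_0 = 0
\;\Rightarrow\; u_0 = A(T_1)\,e^{iT_0} + \text{c.c.}
\]
\[
O(\epsilon):\qquad \partial_{T_0}^2 u_1 + u_1
 = -2\,\partial_{T_0}\partial_{T_1}u_0 - \partial_{T_0}u_0
 = -i\bigl(2A'(T_1) + A(T_1)\bigr)e^{iT_0} + \text{c.c.}
\]
Suppressing the secular (resonant) forcing requires $2A' + A = 0$, so
$A(T_1) = A(0)\,e^{-T_1/2}$ and
$u \approx a_0\, e^{-\epsilon t/2} \cos(t + \phi_0)$:
the slow scale captures the gradual amplitude decay that a single-scale
expansion would misrepresent as a secular term growing like $\epsilon t$.
```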
DOE Office of Scientific and Technical Information (OSTI.GOV)
Puerari, Ivânio; Elmegreen, Bruce G.; Block, David L., E-mail: puerari@inaoep.mx
2014-12-01
We examine 8 μm IRAC images of the grand design two-arm spiral galaxies M81 and M51 using a new method whereby pitch angles are locally determined as a function of scale and position, in contrast to traditional Fourier transform spectral analyses which fit average pitch angles for whole galaxies. The new analysis is based on a correlation between pieces of a galaxy in circular windows of (ln R, θ) space and logarithmic spirals with various pitch angles. The diameter of the windows is varied to study different scales. The result is a best-fit pitch angle to the spiral structure as a function of position and scale, or a distribution function of pitch angles as a function of scale for a given galactic region or area. We apply the method to determine the distribution of pitch angles in the arm and interarm regions of these two galaxies. In the arms, the method reproduces the known pitch angles for the main spirals on a large scale, but also shows higher pitch angles on smaller scales resulting from dust feathers. For the interarms, there is a broad distribution of pitch angles representing the continuation and evolution of the spiral arm feathers as the flow moves into the interarm regions. Our method shows a multiplicity of spiral structures on different scales, as expected from gas flow processes in a gravitating, turbulent and shearing interstellar medium. We also present results for M81 using classical 1D and 2D Fourier transforms, together with a new correlation method, which shows good agreement with conventional 2D Fourier transforms.
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
The maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address the problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart: the multiple MSD (MMSD) discriminant criterion for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database, FERET, show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
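The scatter-difference criterion itself is a small eigenproblem: maximize w^T (Sb - c*Sw) w over unit vectors, which sidesteps inverting a possibly singular Sw. A numpy sketch of a multi-direction (MMSD-style) version:

```python
import numpy as np

def msd_directions(X, y, c=1.0, n_dirs=2):
    """Top eigenvectors of (Sb - c*Sw) maximize the scatter difference
    w^T (Sb - c*Sw) w; no inversion of Sw is needed, so the
    small-sample-size singularity problem does not arise."""
    d = X.shape[1]
    mean = X.mean(axis=0)
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for label in np.unique(y):
        Xc = X[y == label]
        diff = (Xc.mean(axis=0) - mean)[:, None]
        Sb += len(Xc) * (diff @ diff.T)
        centered = Xc - Xc.mean(axis=0)
        Sw += centered.T @ centered
    vals, vecs = np.linalg.eigh(Sb - c * Sw)
    return vecs[:, np.argsort(vals)[::-1][:n_dirs]]
```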
Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses
Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy
2015-01-01
Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
Dai, Weiying; Soman, Salil; Hackney, David B.; Wong, Eric T.; Robson, Philip M.; Alsop, David C.
2017-01-01
Functional imaging provides hemodynamic and metabolic information and is increasingly being incorporated into clinical diagnostic and research studies. Typically functional images have reduced signal-to-noise ratio and spatial resolution compared to other non-functional cross sectional images obtained as part of a routine clinical protocol. We hypothesized that enhancing visualization and interpretation of functional images with anatomic information could provide preferable quality and superior diagnostic value. In this work, we implemented five methods (frequency addition, frequency multiplication, wavelet transform, non-subsampled contourlet transform and intensity-hue-saturation) and a newly proposed ShArpening by Local Similarity with Anatomic images (SALSA) method to enhance the visualization of functional images, while preserving the original functional contrast and quantitative signal intensity characteristics over larger spatial scales. Arterial spin labeling blood flow MR images of the brain were visualization enhanced using anatomic images with multiple contrasts. The algorithms were validated on a numerical phantom and their performance on images of brain tumor patients were assessed by quantitative metrics and neuroradiologist subjective ratings. The frequency multiplication method had the lowest residual error for preserving the original functional image contrast at larger spatial scales (55%–98% of the other methods with simulated data and 64%–86% with experimental data). It was also significantly more highly graded by the radiologists (p<0.005 for clear brain anatomy around the tumor). Compared to other methods, the SALSA provided 11%–133% higher similarity with ground truth images in the simulation and showed just slightly lower neuroradiologist grading score. Most of these monochrome methods do not require any prior knowledge about the functional and anatomic image characteristics, except the acquired resolution. Hence, automatic implementation on clinical images should be readily feasible. PMID:27723582
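A generic frequency-domain fusion of this family keeps the functional image's low spatial frequencies (its quantitative contrast) and borrows high frequencies (edges) from the anatomic image. An idealized sketch (ideal masks; intensity matching between modalities is ignored, and this is not the paper's SALSA method):

```python
import numpy as np

def frequency_fusion(func_img, anat_img, cutoff=0.1):
    """Fuse by swapping spectral bands: low frequencies from the
    functional image, high frequencies from the anatomic image."""
    F_func, F_anat = np.fft.fft2(func_img), np.fft.fft2(anat_img)
    fy = np.fft.fftfreq(func_img.shape[0])[:, None]
    fx = np.fft.fftfreq(func_img.shape[1])[None, :]
    low = np.hypot(fy, fx) <= cutoff       # ideal low-pass mask
    fused = np.where(low, F_func, F_anat)  # complementary bands
    return np.fft.ifft2(fused).real
```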
Wang, Dandan; Zong, Qun; Tian, Bailing; Shao, Shikai; Zhang, Xiuyun; Zhao, Xinyi
2018-02-01
The distributed finite-time formation tracking control problem for multiple unmanned helicopters is investigated in this paper. The control object is to maintain the positions of follower helicopters in formation with external interferences. The helicopter model is divided into a second order outer-loop subsystem and a second order inner-loop subsystem based on multiple-time scale features. Using radial basis function neural network (RBFNN) technique, we first propose a novel finite-time multivariable neural network disturbance observer (FMNNDO) to estimate the external disturbance and model uncertainty, where the neural network (NN) approximation errors can be dynamically compensated by adaptive law. Next, based on FMNNDO, a distributed finite-time formation tracking controller and a finite-time attitude tracking controller are designed using the nonsingular fast terminal sliding mode (NFTSM) method. In order to estimate the second derivative of the virtual desired attitude signal, a novel finite-time sliding mode integral filter is designed. Finally, Lyapunov analysis and multiple-time scale principle ensure the realization of control goal in finite-time. The effectiveness of the proposed FMNNDO and controllers are then verified by numerical simulations. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kuo, C.; Yu, P.; Yang, T.; Davis, T. W.; Liang, X.; Tseng, C.; Cheng, C.
2011-12-01
The objective of the study proposed herein is to estimate regional evapotranspiration via sap flow and soil moisture measurements combined with a wireless sensor network in the field. Evapotranspiration is one of the important factors in water balance computation. Pan evaporation collected from a meteorological station represents only a single-point measurement rather than the water loss of the entire region; thus, multiple-site measurements are needed to understand regional evapotranspiration. Applying the sap flow method with self-made probes, we can calculate transpiration, while soil moisture measurements monitor daily soil moisture variation for evaporation. Sap flow and soil moisture measurements at multiple sites are integrated using a wireless sensor network (WSN), and the measurement results of each site are scaled up and combined into the regional evapotranspiration. This study used the thermal dissipation method to measure sap flow in trees to represent plant transpiration. Sap flow was measured using self-made sap probes, which were calibrated before being set up in the observation field. Regional transpiration was scaled up through the Leaf Area Index (LAI); the regional-scale LAI was calculated from MODIS imagery at 1 km × 1 km grid size. Soil moisture collected from areas outside the distribution of tree roots and the tree canopy was used to represent evaporation. Soil moisture variation was collected from five soil depths of 10, 20, 30, 40 and 50 cm. The regional evaporation can be estimated by averaging the variation of soil moisture from each site within the region. The data measured by both sap flow and soil moisture sensors at each site were collected through the wireless sensor network, which performs P2P and mesh networking, collects data at multiple locations simultaneously, and has low power consumption, making it well suited to this study. Because the data were collected over radio in the field, they may contain random noise; the weighted least-squares method was used to filter the raw data. By collecting the observation data with the WSN and transferring them to the regional scale, we can obtain regional evapotranspiration.
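The thermal dissipation method referenced here is usually evaluated with the Granier calibration; a hedged sketch of the probe-to-plot computation (the coefficients follow the standard published formula, and the scaling chain is a simplification, not this study's exact procedure):

```python
import numpy as np

def granier_sap_flux(dT, dT_max):
    """Granier calibration for thermal-dissipation probes:
    K = (dT_max - dT) / dT and sap flux density
    u = 119e-6 * K**1.231  [m^3 m^-2 s^-1]."""
    dT = np.asarray(dT, dtype=float)
    K = (dT_max - dT) / dT
    return 119e-6 * np.maximum(K, 0.0) ** 1.231

def plot_transpiration(u, sapwood_area, ground_area):
    """Scale flux density to the plot: E = u * As / Ag [m s^-1].
    Regional scaling would then weight plots by MODIS-derived LAI."""
    return u * sapwood_area / ground_area
```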
NASA Astrophysics Data System (ADS)
Aouabdi, Salim; Taibi, Mahmoud; Bouras, Slimane; Boutasseta, Nadir
2017-06-01
This paper describes an approach for identifying localized gear tooth defects, such as pitting, using phase currents measured from an induction machine driving the gearbox. A new anomaly-detection tool is based on the multi-scale entropy (MSE) algorithm SampEn, which allows correlations in signals to be identified over multiple time scales. Motor current signature analysis (MCSA) is used in conjunction with principal component analysis (PCA), and observed values are compared with those predicted from a model built using nominally healthy data. Simulation results show that the proposed method is able to detect gear tooth pitting in current signals.
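A compact implementation of the MSE pipeline, coarse-graining followed by sample entropy, is short enough to sketch here (tolerance taken per coarse-grained series; some authors fix r from the scale-1 series instead):

```python
import numpy as np

def sampen(x, m=2, r_frac=0.2):
    """Sample entropy: -ln(A/B), where B and A count template pairs of
    length m and m+1 whose Chebyshev distance stays within r."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def pairs(mm):
        t = np.lib.stride_tricks.sliding_window_view(x, mm)
        return sum(int((np.abs(t[i + 1:] - t[i]).max(axis=1) <= r).sum())
                   for i in range(len(t) - 1))

    B, A = pairs(m), pairs(m + 1)
    return -np.log(A / B) if A and B else np.inf

def multiscale_entropy(x, scales=range(1, 11)):
    """Coarse-grain by non-overlapping means at each scale, then SampEn."""
    x = np.asarray(x, dtype=float)
    return [sampen(x[:len(x) // s * s].reshape(-1, s).mean(axis=1))
            for s in scales]
```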
Approaches to advance scientific understanding of macrosystems ecology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, Ofir; Ball, Becky; Bond-Lamberty, Benjamin
Macrosystem ecological studies inherently investigate processes that interact across multiple spatial and temporal scales, requiring intensive sampling and massive amounts of data from diverse sources to incorporate complex cross-scale and hierarchical interactions. Inherent challenges associated with these characteristics include high computational demands, data standardization and assimilation, identification of important processes and scales without prior knowledge, and the need for large, cross-disciplinary research teams that conduct long-term studies. Therefore, macrosystem ecology studies must utilize a unique set of approaches that are capable of encompassing these methodological characteristics and associated challenges. Several case studies demonstrate innovative methods used in current macrosystem ecology studies.
Selected methods for quantification of community exposure to aircraft noise
NASA Technical Reports Server (NTRS)
Edge, P. M., Jr.; Cawthorn, J. M.
1976-01-01
A review of the state-of-the-art for the quantification of community exposure to aircraft noise is presented. Physical aspects, people response considerations, and practicalities of useful application of scales of measure are included. Historical background up through the current technology is briefly presented. The developments of both single-event and multiple-event scales are covered. Selective choice is made of scales currently in the forefront of interest and recommended methodology is presented for use in computer programing to translate aircraft noise data into predictions of community noise exposure. Brief consideration is given to future programing developments and to supportive research needs.
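Among the multiple-event scales this line of work fed into, the day-night average sound level is the canonical example: an energy average with a nighttime penalty. A minimal sketch of that computation (the 10 dB penalty window follows the now-standard Ldn definition, not necessarily a scale reviewed in this report):

```python
import numpy as np

def day_night_level(hourly_leq):
    """Ldn from 24 hourly Leq values (dB): add a 10 dB penalty to night
    hours (22:00-07:00), then energy-average over the day."""
    penalty = np.zeros(24)
    penalty[22:] = 10.0   # 22:00-24:00
    penalty[:7] = 10.0    # 00:00-07:00
    levels = np.asarray(hourly_leq) + penalty
    return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))
```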
Nelson, Kerrie P; Mitani, Aya A; Edwards, Don
2017-09-10
Widespread inconsistencies are commonly observed between physicians' ordinal classifications in screening tests results such as mammography. These discrepancies have motivated large-scale agreement studies where many raters contribute ratings. The primary goal of these studies is to identify factors related to physicians and patients' test results, which may lead to stronger consistency between raters' classifications. While ordered categorical scales are frequently used to classify screening test results, very few statistical approaches exist to model agreement between multiple raters. Here we develop a flexible and comprehensive approach to assess the influence of rater and subject characteristics on agreement between multiple raters' ordinal classifications in large-scale agreement studies. Our approach is based upon the class of generalized linear mixed models. Novel summary model-based measures are proposed to assess agreement between all, or a subgroup of raters, such as experienced physicians. Hypothesis tests are described to formally identify factors such as physicians' level of experience that play an important role in improving consistency of ratings between raters. We demonstrate how unique characteristics of individual raters can be assessed via conditional modes generated during the modeling process. Simulation studies are presented to demonstrate the performance of the proposed methods and summary measure of agreement. The methods are applied to a large-scale mammography agreement study to investigate the effects of rater and patient characteristics on the strength of agreement between radiologists. Copyright © 2017 John Wiley & Sons, Ltd. Copyright © 2017 John Wiley & Sons, Ltd.
Nonlinear Waves, Dynamical Systems and Other Applied Mathematics Programs
1991-10-04
The program presents a general scheme of the perturbation method for perturbed soliton systems, based on normal form theory and the method of multiple scales. It also discusses possible consequences of the interplay between wavefront interactions and curvature in two dimensions, where a front propagates with normal speed D parametrized by the local mean surface curvature κ; the solution provides a relation D = D(κ) which determines the evolution of the front.
Physical activity in subjects with multiple sclerosis with focus on gender differences: a survey
2014-01-01
Background There is increasing research that examines gender-issues in multiple sclerosis (MS), but little focus has been placed on gender-issues regarding physical activity. The aim of the present study was to describe levels of physical activity, self-efficacy for physical activity, fall-related self-efficacy, social support for physical activity, fatigue levels and the impact of MS on daily life, in addition to investigating gender differences. Methods The sample for this cross-sectional cohort study consisted of 287 (84 men; 29.3%) adults with MS recruited from the Swedish Multiple Sclerosis Registry. A questionnaire was sent to the subjects consisting of the self-administrated measurements: Physical Activity Disability Survey – Revised, Exercise Self-Efficacy Scale, Falls- Efficacy Scale (Swedish version), Social Influences on Physical Activity, Fatigue Severity Scale and Multiple Sclerosis Impact Scale. Response rate was 58.2%. Results Men were less physically active, had lower self-efficacy for physical activity and lower fall-related self-efficacy than women. This was explained by men being more physically affected by the disease. Men also received less social support for physical activity from family members. The level of fatigue and psychological consequences of the disease were similar between the genders in the total sample, but subgroups of women with moderate MS and relapsing remitting MS experienced more fatigue than men. Conclusions Men were less physically active, probably a result of being more physically affected by the disease. Men being more physically affected explained most of the gender differences found in this study. However, the number of men in the subgroup analyses was small and more research is needed. A gender perspective should be considered in strategies for promoting physical activity in subjects with MS, e.g. men may need more support to be physically active. PMID:24612446
A Comparative Analysis of Three Monocular Passive Ranging Methods on Real Infrared Sequences
NASA Astrophysics Data System (ADS)
Bondžulić, Boban P.; Mitrović, Srđan T.; Barbarić, Žarko P.; Andrić, Milenko S.
2013-09-01
Three monocular passive ranging methods are analyzed and tested on real infrared sequences. The first method exploits scale changes of an object in successive frames, while the other two use the Beer-Lambert law. The ranging methods are evaluated by comparison with simultaneously obtained reference data at the test site. The research addresses scenarios where multiple sensor views or active measurements are not possible. The results show that these range estimation methods can provide the fidelity required for object tracking. Maximum relative distance estimation errors in near-ideal conditions are less than 8%.
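Both ranging principles reduce to one-line formulas. A hedged sketch (variable names invented; real systems must estimate I0 and the extinction coefficient, and scale-measurement noise dominates in practice):

```python
import numpy as np

def range_from_scale_change(s1, s2, delta_r):
    """For a rigid object, apparent scale s is inversely proportional to
    range R, so s1*R1 = s2*R2; with known range closure
    delta_r = R1 - R2 (e.g., from own-platform motion),
    R2 = delta_r * s1 / (s2 - s1)."""
    return delta_r * s1 / (s2 - s1)

def range_from_attenuation(I, I0, sigma):
    """Beer-Lambert law: I = I0 * exp(-sigma * R) gives
    R = ln(I0 / I) / sigma for a known extinction coefficient sigma."""
    return np.log(I0 / I) / sigma
```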
DOT National Transportation Integrated Search
2012-03-01
There is an increasing need to deliver energy from sources in remote areas to demand centers. For example, in North America, the delivery of gas from Alaska to demand centers in the lower 48 states is of major economic and strategic interest. This wi...
Parent and Child Agreement on Anxiety Disorder Symptoms Using the DISC Predictive Scales
ERIC Educational Resources Information Center
Weems, Carl F.; Feaster, Daniel J.; Horigian, Viviana E.; Robbins, Michael S.
2011-01-01
Growing recognition of the negative impact of anxiety disorders in the lives of youth has made their identification an important clinical task. Multiple perspective assessment (e.g., parents, children) is generally considered a preferred method in the assessment of anxiety disorder symptoms, although it has been generally thought that disagreement…
A national-scale survey of 247 contaminants of emerging concern (CECs), including organic and inorganic chemical compounds, and microbial contaminants, was conducted in source and treated drinking water samples from 25 treatment plants across the United States. Multiple methods w...
Improving automatic peptide mass fingerprint protein identification by combining many peak sets.
Rögnvaldsson, Thorsteinn; Häkkinen, Jari; Lindberg, Claes; Marko-Varga, György; Potthast, Frank; Samuelsson, Jim
2004-08-05
An automated peak picking strategy is presented in which several peak sets with different signal-to-noise levels are combined to form a more reliable statement on the protein identity. The strategy is compared against both manual peak picking and industry-standard automated peak picking on a set of mass spectra obtained after tryptic in-gel digestion of 2D-gel samples from human fetal fibroblasts. The set contains spectra ranging from strong to weak, and the proposed multiple-scale method is shown to be much better on weak spectra than the industry-standard method and a human operator, and equal in performance to these on strong and medium-strength spectra. It is also demonstrated that peak sets selected by a human operator display considerable variability and that it is impossible to speak of a single "true" peak set for a given spectrum. The described multiple-scale strategy both avoids time-consuming parameter tuning and exceeds the human operator in protein identification efficiency. The strategy therefore promises reliable automated user-independent protein identification using peptide mass fingerprints.
High-Speed Interrogation for Large-Scale Fiber Bragg Grating Sensing
Hu, Chenyuan; Bai, Wei
2018-01-01
A high-speed interrogation scheme for large-scale fiber Bragg grating (FBG) sensing arrays is presented. This technique employs parallel computing and pipeline control to modulate incident light and demodulate the reflected sensing signal. One electro-optic modulator (EOM) and one semiconductor optical amplifier (SOA) were used to generate a phase delay to filter the reflected spectrum from multiple candidate FBGs with the same optical path difference (OPD). Experimental results showed that the fastest interrogation delay time for the proposed method was only about 27.2 µs for a single FBG interrogation, and the system scanning period was limited only by the optical transmission delay in the sensing fiber, owing to the multiple simultaneous central wavelength calculations. Furthermore, the proposed FPGA-based technique had a verified FBG wavelength demodulation stability of ±1 pm without averaging. PMID:29495263
Great Basin Integrated Landscape Monitoring Pilot Summary Report
Finn, Sean P.; Kitchell, Kate; Baer, Lori Anne; Bedford, David R.; Brooks, Matthew L.; Flint, Alan L.; Flint, Lorraine E.; Matchett, J.R.; Mathie, Amy; Miller, David M.; Pilliod, David S.; Torregrosa, Alicia; Woodward, Andrea
2010-01-01
The Great Basin Integrated Landscape Monitoring Pilot project (GBILM) was one of four regional pilots to implement the U.S. Geological Survey (USGS) Science Thrust on Integrated Landscape Monitoring (ILM), whose goal was to observe, understand, and predict landscape change and its implications on natural resources at multiple spatial and temporal scales and address priority natural resource management and policy issues. The Great Basin is undergoing rapid environmental change stemming from interactions among global climate trends, increasing human populations, expanding and accelerating land and water uses, invasive species, and altered fire regimes. GBILM tested concepts and developed tools to store and analyze monitoring data, understand change at multiple scales, and forecast landscape change. The GBILM endeavored to develop and test a landscape-level monitoring approach in the Great Basin that integrates USGS disciplines, addresses priority management questions, catalogs and uses existing monitoring data, evaluates change at multiple scales, and contributes to development of regional monitoring strategies. GBILM functioned as an integrative team from 2005 to 2010, producing more than 35 science and data management products that addressed pressing ecosystem drivers and resource management agency needs in the region. This report summarizes the approaches and methods of this interdisciplinary effort, identifies and describes the products generated, and provides lessons learned during the project.
Phillips, Glenn A; Wyrwich, Kathleen W; Guo, Shien; Medori, Rossella; Altincatal, Arman; Wagner, Linda; Elkins, Jacob
2014-11-01
The 29-item Multiple Sclerosis Impact Scale (MSIS-29) was developed to examine the impact of multiple sclerosis (MS) on physical and psychological functioning from a patient's perspective. To determine the responder definition (RD) of the MSIS-29 physical impact subscale (PHYS) in a group of patients with relapsing-remitting MS (RRMS) participating in a clinical trial. Data from the SELECT trial comparing daclizumab high-yield process with placebo in patients with RRMS were used. Physical function was evaluated in SELECT using three patient-reported outcomes measures and the Expanded Disability Status Scale (EDSS). Anchor- and distribution-based methods were used to identify an RD for the MSIS-29. Results across the anchor-based approach suggested MSIS-29 PHYS RD values of 6.91 (mean), 7.14 (median) and 7.50 (mode). Distribution-based RD estimates ranged from 6.24 to 10.40. An RD of 7.50 was selected as the most appropriate threshold for physical worsening based on corresponding changes in the EDSS (primary anchor of interest). These findings indicate that a ≥7.50 point worsening on the MSIS-29 PHYS is a reasonable and practical threshold for identifying patients with RRMS who have experienced a clinically significant change in the physical impact of MS. © The Author(s), 2014.
Multiple-scale structures: from Faraday waves to soft-matter quasicrystals.
Savitz, Samuel; Babadi, Mehrtash; Lifshitz, Ron
2018-05-01
For many years, quasicrystals were observed only as solid-state metallic alloys, yet current research is now actively exploring their formation in a variety of soft materials, including systems of macromolecules, nanoparticles and colloids. Much effort is being invested in understanding the thermodynamic properties of these soft-matter quasicrystals in order to predict and possibly control the structures that form, and hopefully to shed light on the broader yet unresolved general questions of quasicrystal formation and stability. Moreover, the ability to control the self-assembly of soft quasicrystals may contribute to the development of novel photonics or other applications based on self-assembled metamaterials. Here a path is followed, leading to quantitative stability predictions, that starts with a model developed two decades ago to treat the formation of multiple-scale quasiperiodic Faraday waves (standing wave patterns in vibrating fluid surfaces) and which was later mapped onto systems of soft particles, interacting via multiple-scale pair potentials. The article reviews, and substantially expands, the quantitative predictions of these models, while correcting a few discrepancies in earlier calculations, and presents new analytical methods for treating the models. In so doing, a number of new stable quasicrystalline structures are found with octagonal, octadecagonal and higher-order symmetries, some of which may, it is hoped, be observed in future experiments.
2014-01-01
Background The sore throat pain model has been conducted by different clinical investigators to demonstrate the efficacy of acute analgesic drugs in single-dose randomized clinical trials. The model used here was designed to study the multiple-dose safety and efficacy of lozenges containing flurbiprofen at 8.75 mg. Methods Adults (n = 198) with moderate or severe acute sore throat and findings of pharyngitis on a Tonsillo-Pharyngitis Assessment (TPA) were randomly assigned to use either flurbiprofen 8.75 mg lozenges (n = 101) or matching placebo lozenges (n = 97) under double-blind conditions. Patients sucked one lozenge every three to six hours as needed, up to five lozenges per day, and rated symptoms on 100-mm scales: the Sore Throat Pain Intensity Scale (STPIS), the Difficulty Swallowing Scale (DSS), and the Swollen Throat Scale (SwoTS). Results Reductions in pain (lasting for three hours) and in difficulty swallowing and throat swelling (for four hours) were observed after a single dose of the flurbiprofen 8.75 mg lozenge (P <0.05 compared with placebo). After using multiple doses over 24 hours, flurbiprofen-treated patients experienced a 59% greater reduction in throat pain, 45% less difficulty swallowing, and 44% less throat swelling than placebo-treated patients (all P <0.01). There were no serious adverse events. Conclusions Utilizing the sore throat pain model with multiple doses over 24 hours, flurbiprofen 8.75 mg lozenges were shown to be an effective, well-tolerated treatment for sore throat pain. Other pharmacologic actions (reduced difficulty swallowing and reduced throat swelling) and overall patient satisfaction from the flurbiprofen lozenges were also demonstrated in this multiple-dose implementation of the sore throat pain model. Trial registration This trial was registered with ClinicalTrials.gov, registration number: NCT01048866, registration date: January 13, 2010. PMID:24988909
Accurate hybrid stochastic simulation of a system of coupled chemical or biochemical reactions.
Salis, Howard; Kaznessis, Yiannis
2005-02-01
The dynamical solution of a well-mixed, nonlinear stochastic chemical kinetic system, described by the Master equation, may be exactly computed using the stochastic simulation algorithm. However, because the computational cost scales with the number of reaction occurrences, systems with one or more "fast" reactions become costly to simulate. This paper describes a hybrid stochastic method that partitions the system into subsets of fast and slow reactions, approximates the fast reactions as a continuous Markov process, using a chemical Langevin equation, and accurately describes the slow dynamics using the integral form of the "Next Reaction" variant of the stochastic simulation algorithm. The key innovation of this method is its mechanism of efficiently monitoring the occurrences of slow, discrete events while simultaneously simulating the dynamics of a continuous, stochastic or deterministic process. In addition, by introducing an approximation in which multiple slow reactions may occur within a time step of the numerical integration of the chemical Langevin equation, the hybrid stochastic method performs much faster with only a marginal decrease in accuracy. Multiple examples, including a biological pulse generator and a large-scale system benchmark, are simulated using the exact and proposed hybrid methods as well as, for comparison, a previous hybrid stochastic method. Probability distributions of the solutions are compared and the weak errors of the first two moments are computed. In general, these hybrid methods may be applied to the simulation of the dynamics of a system described by stochastic differential, ordinary differential, and Master equations.
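The exact stochastic simulation algorithm that the hybrid scheme partitions around can be sketched in a few lines. Below is a minimal direct-method implementation; the birth-death network and rate constants are invented placeholders, not the paper's benchmark systems.

    import numpy as np

    def gillespie_direct(x0, stoich, propensity, t_end,
                         rng=np.random.default_rng(0)):
        """Exact SSA (direct method). x0: initial counts; stoich: array of
        shape (n_rxn, n_species); propensity(x) -> n_rxn rates."""
        t, x = 0.0, np.array(x0, dtype=float)
        times, states = [t], [x.copy()]
        while t < t_end:
            a = propensity(x)
            a0 = a.sum()
            if a0 <= 0:
                break
            t += rng.exponential(1.0 / a0)       # waiting time to next event
            j = rng.choice(len(a), p=a / a0)     # which reaction fires
            x += stoich[j]
            times.append(t)
            states.append(x.copy())
        return np.array(times), np.array(states)

    # Toy birth-death network: 0 -> S at rate k1; S -> 0 at rate k2 * S.
    k1, k2 = 10.0, 0.1
    stoich = np.array([[+1], [-1]])
    prop = lambda x: np.array([k1, k2 * x[0]])
    t, xs = gillespie_direct([0], stoich, prop, t_end=100.0)
    print("final count:", xs[-1, 0])   # fluctuates around k1/k2 = 100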
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di, Sheng; Berrocal, Eduardo; Cappello, Franck
The silent data corruption (SDC) problem is attracting increasing attention because it is expected to have a great impact on exascale HPC applications. SDC faults are hazardous in that they pass unnoticed by hardware and can lead to wrong computation results. In this work, we formulate SDC detection as a runtime one-step-ahead prediction method, leveraging multiple linear prediction methods in order to improve the detection results. The contributions are twofold: (1) we propose an error feedback control model that can reduce the prediction errors for different linear prediction methods, and (2) we propose a spatial-data-based even-sampling method to minimize the detection overheads (including memory and computation cost). We implement our algorithms in the fault tolerance interface, a fault tolerance library with multiple checkpoint levels, such that users can conveniently protect their HPC applications against both SDC errors and fail-stop errors. We evaluate our approach by using large-scale traces from well-known, large-scale HPC applications, as well as by running those HPC applications on a real cluster environment. Experiments show that our error feedback control model can improve detection sensitivity by 34-189% for bit-flip memory errors injected with the bit positions in the range [20,30], without any degradation in detection accuracy. Furthermore, memory size can be reduced by 33% with our spatial-data even-sampling method, with only a slight and graceful degradation in the detection sensitivity.
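The general idea of one-step-ahead detection can be sketched as follows; this is a hedged illustration, not the paper's feedback-control formulation: predict each value by linear extrapolation of its recent history and flag samples whose prediction error exceeds a bound derived from a running estimate of "normal" errors.

    import numpy as np

    def sdc_flags(series, k=5.0, alpha=0.05, warmup=32):
        """Flag suspect samples via one-step-ahead linear prediction.
        k scales the running error estimate into a detection radius;
        illustrative detector only."""
        flags = np.zeros(len(series), dtype=bool)
        err_scale = 1e-12
        for t in range(2, len(series)):
            pred = 2.0 * series[t - 1] - series[t - 2]  # linear extrapolation
            err = abs(series[t] - pred)
            if t > warmup and err > k * err_scale:
                flags[t] = True                  # suspected corruption
            else:                                # update normal-error estimate
                err_scale = (1 - alpha) * err_scale + alpha * err
        return flags

    t = np.linspace(0, 6 * np.pi, 2000)
    data = np.sin(t) + 0.001 * np.random.default_rng(0).normal(size=t.size)
    data[700] += 0.5                             # injected bit-flip-like spike
    print(np.flatnonzero(sdc_flags(data)))       # flags index 700 (and 701,
                                                 # whose prediction uses it)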
A Bayesian hierarchical approach to galaxy-galaxy lensing
NASA Astrophysics Data System (ADS)
Sonnenfeld, Alessandro; Leauthaud, Alexie
2018-07-01
We present a Bayesian hierarchical inference formalism to study the relation between the properties of dark matter haloes and those of their central galaxies using weak gravitational lensing. Unlike traditional methods, this technique does not resort to stacking the weak lensing signal in bins, and thus allows for a more efficient use of the information content in the data. Our method is particularly useful for constraining scaling relations between two or more galaxy properties and dark matter halo mass, and can also be used to constrain the intrinsic scatter in these scaling relations. We show that, if observational scatter is not properly accounted for, the traditional stacking method can produce biased results when exploring correlations between multiple galaxy properties and halo mass. For example, this bias can affect studies of the joint correlation between galaxy mass, halo mass, and galaxy size, or galaxy colour. In contrast, our method easily and efficiently handles the intrinsic and observational scatter in multiple galaxy properties and halo mass. We test our method on mocks with varying degrees of complexity. We find that we can recover the mean halo mass and concentration, each with a 0.1 dex accuracy, and the intrinsic scatter in halo mass with a 0.05 dex accuracy. In its current version, our method will be most useful for studying the weak lensing signal around central galaxies in groups and clusters, as well as massive galaxy samples with log M* > 11, which have low satellite fractions.
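The core statistical ingredient - a scaling relation with intrinsic scatter whose population parameters are fit by marginalising over that scatter - can be sketched compactly. Everything below (variable names, pivot, scatter values) is an illustrative assumption; with Gaussian intrinsic scatter and Gaussian observational noise the per-galaxy marginal likelihood is analytic.

    import numpy as np
    from scipy.optimize import minimize

    # Sketch: log M_halo ~ Normal(a + b*(log M* - 11), sig_int^2),
    # observed through a noisy mass proxy with known sigma_obs.
    def neg_log_like(theta, logMstar, logMh_obs, sig_obs):
        a, b, log_sig_int = theta
        mu = a + b * (logMstar - 11.0)
        var = np.exp(2 * log_sig_int) + sig_obs**2  # intrinsic + observational
        return 0.5 * np.sum((logMh_obs - mu) ** 2 / var
                            + np.log(2 * np.pi * var))

    rng = np.random.default_rng(1)
    logMstar = rng.uniform(10.5, 11.8, 500)
    true_mass = 13.0 + 1.6 * (logMstar - 11.0) + rng.normal(0, 0.2, 500)
    obs = true_mass + rng.normal(0, 0.3, 500)       # noisy lensing-like proxy
    fit = minimize(neg_log_like, x0=[12.5, 1.0, np.log(0.3)],
                   args=(logMstar, obs, 0.3))
    print(fit.x)  # recovers a ~ 13, b ~ 1.6, log sig_int ~ log 0.2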
NASA Astrophysics Data System (ADS)
Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen
2018-03-01
Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems still exist: 1) how to extract the high-level features of aircraft; 2) locating objects within such large images is difficult and time consuming; 3) satellite images come at multiple resolutions. In this paper, inspired by biological visual mechanisms, a fusion detection framework is proposed that fuses a top-down visual mechanism (a deep CNN model) and a bottom-up visual mechanism (GBVS) to detect aircraft. In addition, we use a multi-scale training method for the deep CNN model to address the problem of multiple resolutions. Experimental results demonstrate that our method achieves better detection results than the other methods.
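Multi-scale training amounts to randomly resizing each training batch so the network sees objects at multiple effective resolutions. A minimal sketch under assumed conventions follows; the scale set and tensor shapes are placeholders, not the paper's configuration.

    import torch
    import torch.nn.functional as F

    # Each batch is resampled to a randomly chosen scale before the forward
    # pass, so one model learns features that transfer across resolutions.
    scales = [0.5, 0.75, 1.0, 1.25, 1.5]   # illustrative scale set

    def multiscale_batch(images):
        s = scales[torch.randint(len(scales), (1,)).item()]
        return F.interpolate(images, scale_factor=s, mode="bilinear",
                             align_corners=False)

    batch = torch.randn(4, 3, 512, 512)    # dummy remote-sensing tiles
    print(multiscale_batch(batch).shape)   # spatial size varies per batch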
Wright, Stuart J; Vass, Caroline M; Sim, Gene; Burton, Michael; Fiebig, Denzil G; Payne, Katherine
2018-02-28
Scale heterogeneity, or differences in the error variance of choices, may account for a significant amount of the observed variation in the results of discrete choice experiments (DCEs) when comparing preferences between different groups of respondents. The aim of this study was to identify if, and how, scale heterogeneity has been addressed in healthcare DCEs that compare the preferences of different groups. A systematic review identified all healthcare DCEs published between 1990 and February 2016. The full-text of each DCE was then screened to identify studies that compared preferences using data generated from multiple groups. Data were extracted and tabulated on year of publication, samples compared, tests for scale heterogeneity, and analytical methods to account for scale heterogeneity. Narrative analysis was used to describe if, and how, scale heterogeneity was accounted for when preferences were compared. A total of 626 healthcare DCEs were identified. Of these, 199 (32%) aimed to compare the preferences of different groups specified at the design stage, while 79 (13%) compared the preferences of groups identified at the analysis stage. Of the 278 included papers, 49 (18%) discussed potential scale issues, 18 (7%) used a formal method of analysis to account for scale between groups, and 2 (1%) accounted for scale differences between preference groups at the analysis stage. Scale heterogeneity was present in 65% (n = 13) of studies that tested for it. Analytical methods to test for scale heterogeneity included coefficient plots (n = 5, 2%), heteroscedastic conditional logit models (n = 6, 2%), Swait and Louviere tests (n = 4, 1%), generalised multinomial logit models (n = 5, 2%), and scale-adjusted latent class analysis (n = 2, 1%). Scale heterogeneity is a prevalent issue in healthcare DCEs. Despite this, few published DCEs have discussed such issues, and fewer still have used formal methods to identify and account for the impact of scale heterogeneity. Formal methods to test for scale heterogeneity should be used; otherwise, the results of DCEs risk producing biased and misleading conclusions regarding preferences for aspects of healthcare.
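The mechanics of a scale parameter can be illustrated with a deliberately simplified binary logit, where one group's utilities are multiplied by a scale factor lambda relative to a reference group (lambda fixed to 1 there for identification). This is a hedged sketch, not the heteroscedastic conditional logit specification used in DCE practice.

    import numpy as np
    from scipy.optimize import minimize

    def neg_ll(theta, X, y, group):
        beta, lam = theta[:-1], np.exp(theta[-1])  # exp keeps lambda > 0
        scale = np.where(group == 0, 1.0, lam)     # reference group: scale 1
        eta = scale * (X @ beta)
        return -np.sum(y * eta - np.logaddexp(0.0, eta))

    rng = np.random.default_rng(0)
    n = 2000
    X = rng.normal(size=(n, 2))
    group = rng.integers(0, 2, n)
    true_scale = np.where(group == 0, 1.0, 0.5)    # group 2 chooses noisily
    p = 1 / (1 + np.exp(-true_scale * (X @ np.array([1.0, -1.0]))))
    y = rng.binomial(1, p)
    fit = minimize(neg_ll, x0=np.zeros(3), args=(X, y, group))
    print(fit.x[:2], np.exp(fit.x[2]))  # ~[1, -1] and lambda ~ 0.5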
Kubsik, Anna; Klimkiewicz, Robert; Klimkiewicz, Paulina; Janczewska, Katarzyna; Jankowska, Agnieszka; Łukasiak, Adam; Woldańska-Okońska, Marta
2016-04-01
Multiple sclerosis is one of the most common demyelinating diseases of the CNS and is associated with autoimmune processes. The disease leads to progressive disability, and one of its symptoms is pain. Physiotherapy procedures and exercises are used to relieve pain in the course of MS. The aim of the study was to assess pain in patients with multiple sclerosis after the application of laser radiation, magnetostimulation and kinesiotherapy. The study material consisted of 120 patients with multiple sclerosis of both sexes (82 women and 38 men) aged 21-81 years. Patients were randomly divided into 4 treatment groups and assessed three times. Group I received laser therapy; group II, laser therapy and magnetostimulation; group III, kinesiotherapy; and group IV, magnetostimulation. The same physiotherapy programme was used in all groups. Pain was assessed in all patients with the Modified Laitinen Questionnaire of Pain Indicators and the Visual Analogue Scale (VAS). In all treatment groups a tendency toward decreasing point scores on the Modified Laitinen Questionnaire and the VAS was observed. Comparisons between groups showed statistically significant results at the level p<0.05 for the laser treatment group versus group II on the Laitinen questionnaire, and versus groups II and III on the VAS. The best result, i.e. the reduction of point scores at the third examination relative to baseline, was obtained in group II. Laser radiation is an effective method with an analgesic action. The combination of laser radiation and magnetostimulation reduces pain in patients with multiple sclerosis and maintains the therapeutic effect even after the cessation of these procedures, which suggests that these methods can elicit the biological phenomenon of hysteresis. © 2016 MEDPRESS.
Extracting information in spike time patterns with wavelets and information theory.
Lopes-dos-Santos, Vítor; Panzeri, Stefano; Kayser, Christoph; Diamond, Mathew E; Quian Quiroga, Rodrigo
2015-02-01
We present a new method to assess the information carried by temporal patterns in spike trains. The method first performs a wavelet decomposition of the spike trains, then uses Shannon information to select a subset of coefficients carrying information, and finally assesses timing information in terms of decoding performance: the ability to identify the presented stimuli from spike train patterns. We show that the method allows: 1) a robust assessment of the information carried by spike time patterns even when this is distributed across multiple time scales and time points; 2) an effective denoising of the raster plots that improves the estimate of stimulus tuning of spike trains; and 3) an assessment of the information carried by temporally coordinated spikes across neurons. Using simulated data, we demonstrate that the Wavelet-Information (WI) method performs better and is more robust to spike time-jitter, background noise, and sample size than well-established approaches, such as principal component analysis, direct estimates of information from digitized spike trains, or a metric-based method. Furthermore, when applied to real spike trains from monkey auditory cortex and from rat barrel cortex, the WI method allows extracting larger amounts of spike timing information. Importantly, the fact that the WI method incorporates multiple time scales makes it robust to the choice of partly arbitrary parameters such as temporal resolution, response window length, number of response features considered, and the number of available trials. These results highlight the potential of the proposed method for accurate and objective assessments of how spike timing encodes information. Copyright © 2015 the American Physiological Society.
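The pipeline - wavelet-decompose the binned spike trains, keep coefficients carrying stimulus information, then measure decoding performance - can be compressed into a sketch. The wavelet family, mutual-information estimator, and classifier below are simplifications of the paper's procedure, and all data are synthetic.

    import numpy as np
    import pywt
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    n_trials, n_bins, n_stim = 200, 64, 4
    stim = rng.integers(0, n_stim, n_trials)
    rate = np.zeros((n_trials, n_bins))
    for s in range(n_stim):                 # stimulus-specific latency bump
        rate[stim == s, 10 + 8 * s] = 3.0
    spikes = rng.poisson(0.2 + rate)        # binned spike counts

    # 1) wavelet decomposition of each trial, 2) rank coefficients by mutual
    # information with the stimulus, 3) decode from the informative subset.
    coeffs = np.array([np.concatenate(pywt.wavedec(tr, "haar"))
                       for tr in spikes])
    mi = mutual_info_classif(coeffs, stim, random_state=0)
    top = np.argsort(mi)[-10:]              # keep the 10 most informative
    acc = cross_val_score(KNeighborsClassifier(5), coeffs[:, top], stim, cv=5)
    print("decoding accuracy:", acc.mean())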
Germovsek, Eva; Barker, Charlotte I S; Sharland, Mike; Standing, Joseph F
2018-04-19
Pharmacokinetic/pharmacodynamic (PKPD) modeling is important in the design and conduct of clinical pharmacology research in children. During drug development, PKPD modeling and simulation should underpin rational trial design and facilitate extrapolation to investigate efficacy and safety. The application of PKPD modeling to optimize dosing recommendations and therapeutic drug monitoring is also increasing, and PKPD model-based dose individualization will become a core feature of personalized medicine. Following extensive progress on pediatric PK modeling, a greater emphasis now needs to be placed on PD modeling to understand age-related changes in drug effects. This paper discusses the principles of PKPD modeling in the context of pediatric drug development, summarizing how important PK parameters, such as clearance (CL), are scaled with size and age, and highlights a standardized method for CL scaling in children. One standard scaling method would facilitate comparison of PK parameters across multiple studies, thus increasing the utility of existing PK models and facilitating optimal design of new studies.
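The standardized clearance scaling discussed here is usually written as allometric size scaling plus a sigmoidal maturation function. A minimal sketch follows; the maturation parameters shown are commonly cited illustrative values, not drug-specific recommendations from this paper.

    def clearance_child(cl_adult, weight_kg, pma_weeks, tm50=47.7, hill=3.4):
        """CL = CL_adult * (WT/70)^0.75 * MF(PMA), where MF is a Hill-type
        maturation function of postmenstrual age. tm50 and hill here are
        illustrative; consult the source for drug-specific values."""
        size = (weight_kg / 70.0) ** 0.75        # allometric size scaling
        mf = pma_weeks**hill / (pma_weeks**hill + tm50**hill)
        return cl_adult * size * mf

    # Example: ~6-month-old infant (7.5 kg, PMA ~ 66 weeks), adult CL 10 L/h
    print(round(clearance_child(10.0, 7.5, 66.0), 2), "L/h")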
Hu, Meng; Liang, Hualou
2013-04-01
Generalized flash suppression (GFS), in which a salient visual stimulus can be rendered invisible despite continuous retinal input, provides a rare opportunity to directly study the neural mechanism of visual perception. Previous work based on linear methods, such as spectral analysis, on local field potentials (LFPs) during GFS has shown that LFP power in distinctive frequency bands is differentially modulated by perceptual suppression. Yet, linear methods alone may be insufficient for the full assessment of neural dynamics due to the fundamentally nonlinear nature of neural signals. In this study, we set forth to analyze LFP data collected from multiple visual areas (V1, V2 and V4) of macaque monkeys while they performed the GFS task, using a nonlinear method - adaptive multi-scale entropy (AME) - to reveal the neural dynamics of perceptual suppression. In addition, we propose a new cross-entropy measure at multiple scales, namely adaptive multi-scale cross-entropy (AMCE), to assess the nonlinear functional connectivity between two cortical areas. We show that: (1) multi-scale entropy exhibits percept-related changes in all three areas, with higher entropy observed during perceptual suppression; (2) the magnitude of the perception-related entropy changes increases systematically over successive hierarchical stages (i.e. from the lower areas V1 and V2 up to the higher area V4); and (3) cross-entropy between any two cortical areas reveals a higher degree of asynchrony or dissimilarity during perceptual suppression, indicating decreased functional connectivity between cortical areas. These results, taken together, suggest that perceptual suppression is related to reduced functional connectivity and increased uncertainty of neural responses, and that the modulation of perceptual suppression is more effective at higher visual cortical areas. AME is demonstrated to be a useful technique in revealing the underlying dynamics of nonlinear/nonstationary neural signals.
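For orientation, the classic coarse-grained multiscale sample entropy - a simpler relative of the adaptive (EMD-based) variant used in the paper - can be sketched as follows. All parameter choices are conventional defaults, not the study's settings.

    import numpy as np

    def sample_entropy(x, m=2, r_frac=0.2):
        """SampEn(m, r): -log of the conditional probability that sequences
        matching for m points also match for m+1 points."""
        x = np.asarray(x, float)
        r = r_frac * x.std()
        def count(mm):
            templ = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=2)
            return np.sum(d <= r) - len(templ)   # exclude self-matches
        B, A = count(m), count(m + 1)
        return -np.log(A / B)

    def multiscale_entropy(x, scales=(1, 2, 4)):
        """Coarse-grain by averaging non-overlapping windows of length s,
        then compute SampEn at each scale."""
        out = {}
        for s in scales:
            n = len(x) // s
            cg = np.asarray(x[:n * s]).reshape(n, s).mean(axis=1)
            out[s] = sample_entropy(cg)
        return out

    print(multiscale_entropy(np.random.default_rng(0).normal(size=1000)))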
Successive equimarginal approach for optimal design of a pump and treat system
NASA Astrophysics Data System (ADS)
Guo, Xiaoniu; Zhang, Chuan-Mian; Borthwick, John C.
2007-08-01
An economic concept-based optimization method is developed for groundwater remediation design. Design of a pump and treat (P&T) system is viewed as a resource allocation problem constrained by specified cleanup criteria. An optimal allocation of resources requires that the equimarginal principle, a fundamental economic principle, must hold. The proposed method is named successive equimarginal approach (SEA), which continuously shifts a pumping rate from a less effective well to a more effective one until equal marginal productivity for all units is reached. Through the successive process, the solution evenly approaches the multiple inequality constraints that represent the specified cleanup criteria in space and in time. The goal is to design an equal protection system so that the distributed contaminant plumes can be equally contained without bypass and overprotection is minimized. SEA is a hybrid of the gradient-based method and the deterministic heuristics-based method, which allows flexibility in dealing with multiple inequality constraints without using a penalty function and in balancing computational efficiency with robustness. This method was applied to design a large-scale P&T system for containment of multiple plumes at the former Blaine Naval Ammunition Depot (NAD) site, near Hastings, Nebraska. To evaluate this method, the SEA results were also compared with those using genetic algorithms.
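The reallocation step at the heart of SEA can be illustrated with a toy model: repeatedly move a small pumping increment from the well with the lowest marginal cleanup benefit to the one with the highest, until marginal benefits equalize. The linear, diminishing-returns "benefit" vector below is an invented stand-in for the groundwater model that supplies marginal productivities in the real method.

    import numpy as np

    def equimarginal(total_rate, marginal, step=0.5, iters=100000):
        """Shift pumping between wells until marginal benefits are equal.
        marginal: baseline benefit per unit rate for each well (toy values);
        diminishing returns are modeled as marginal / (1 + q)."""
        q = np.full(len(marginal), total_rate / len(marginal))  # equal split
        for _ in range(iters):
            mb = marginal / (1.0 + q)             # current marginal benefit
            lo, hi = np.argmin(mb), np.argmax(mb)
            if mb[hi] - mb[lo] < 1e-9 or q[lo] < step:
                break                             # equimarginal condition met
            q[lo] -= step                         # take from least effective
            q[hi] += step                         # give to most effective
        return q, marginal / (1.0 + q)

    q, mb = equimarginal(1000.0, np.array([3.0, 2.0, 1.5, 1.0]))
    print(np.round(q, 1), np.round(mb, 4))        # marginal benefits ~ equal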
Qian, Zhi-Ming; Wang, Shuo Hong; Cheng, Xi En; Chen, Yan Qiu
2016-06-23
Fish tracking is an important step for video-based analysis of fish behavior. Due to severe body deformation and mutual occlusion of multiple swimming fish, accurate and robust fish tracking from video image sequences is a highly challenging problem. Current tracking methods based on motion information are not accurate and robust enough to track the waving body and handle occlusion. To better overcome these problems, we propose a multiple fish tracking method based on fish head detection. The shape and gray-scale characteristics of the fish image are employed to locate the fish head position. For each detected fish head, we utilize the gray distribution of the head region to estimate the fish head direction. Both the position and direction information from fish detection are then combined to build a cost function of fish swimming. Based on the cost function, a global optimization method can be applied to associate the targets between consecutive frames. Results show that our method can accurately detect the position and direction of fish heads, and has good tracking performance for dozens of fish. The proposed method can successfully obtain the motion trajectories of dozens of fish so as to provide more precise data to accommodate systematic analysis of fish behavior.
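A frame-to-frame association step of this kind can be sketched with a cost combining head-position distance and heading change, solved globally with the Hungarian algorithm. The weights below are illustrative assumptions, not the paper's tuned cost function.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def associate(prev, curr, w_pos=1.0, w_dir=20.0):
        """prev, curr: arrays of (x, y, heading_radians), one row per
        detected fish head. Returns globally optimal (prev, curr) pairs."""
        dp = np.linalg.norm(prev[:, None, :2] - curr[None, :, :2], axis=2)
        dth = np.abs(prev[:, None, 2] - curr[None, :, 2])
        dth = np.minimum(dth, 2 * np.pi - dth)     # wrap angle difference
        cost = w_pos * dp + w_dir * dth
        rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
        return list(zip(rows, cols))

    prev = np.array([[0, 0, 0.0], [10, 10, 1.5]])
    curr = np.array([[9, 11, 1.4], [1, 0, 0.1]])
    print(associate(prev, curr))   # [(0, 1), (1, 0)]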
NASA Astrophysics Data System (ADS)
Brankov, Elvira
This thesis presents a methodology for examining the relationship between synoptic-scale atmospheric transport patterns and observed pollutant concentration levels. It involves calculating a large number of back-trajectories from the observational site and subjecting them to cluster analysis. The pollutant concentration data observed at that site are then segregated according to the back-trajectory clusters. If the pollutant observations extend over several seasons, it is important to filter out seasonal and long-term components from the time series data before cluster-segregation, because only the short-term component of the time series is related to synoptic-scale transport. Multiple comparison procedures are used to test for significant differences in the chemical composition of pollutant data associated with each cluster. This procedure is useful in indicating potential pollutant source regions and isolating meteorological regimes associated with pollutant transport from those regions. If many observational sites are available, the spatial and temporal scales of pollution transport from a given direction can be extracted through time-lagged inter-site correlation analysis of pollutant concentrations. The proposed methodology is applicable to any pollutant at any site if a sufficiently abundant data set is available. This is illustrated through examination of five-year-long time series of ozone concentrations at several sites in the Northeast. The results provide evidence of ozone transport to these sites, revealing the characteristic spatial and temporal scales involved in the transport and identifying source regions for this pollutant. Problems related to statistical analyses of censored data are addressed in the second half of this thesis. Although censoring (reporting concentrations in a non-quantitative way) is typical for trace-level measurements, methods for statistical analysis, inference and interpretation of such data are complex and still under development. In this study, multiple comparison of censored data sets was required in order to examine the influence of synoptic-scale circulations on concentration levels of several trace-level toxic pollutants observed in the Northeast (e.g., As, Se, Mn, V, etc.). Since the traditional multiple comparison procedures are not readily applicable to such data sets, a Monte Carlo simulation study was performed to assess several nonparametric methods for multiple comparison of censored data sets. Application of an appropriate comparison procedure to clusters of toxic trace elements observed in the Northeast led to the identification of potential source regions and atmospheric patterns associated with the long-range transport of these pollutants. A method for comparison of proportions and elemental ratio calculations were used to confirm and clarify these inferences with a greater degree of confidence.
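The cluster-then-segregate chain can be sketched end to end on synthetic data: cluster flattened back-trajectories, group the co-located concentrations by cluster, and test for between-cluster differences. Everything below is synthetic and simplified; the real analysis detrends and deseasonalizes the concentrations first and, for censored species, uses the nonparametric procedures discussed above.

    import numpy as np
    from scipy.stats import kruskal
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    n_traj = 300
    regime = rng.integers(0, 3, n_traj)            # 3 synthetic flow regimes
    drift = np.array([[1, 0], [0, 1], [-1, 1]])[regime][:, None, :]
    steps = rng.normal(0, 0.2, (n_traj, 24, 2)) + drift
    traj = np.cumsum(steps, axis=1).reshape(n_traj, -1)  # flatten to features

    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(traj)
    conc = rng.lognormal(mean=regime * 0.5, sigma=0.3)   # transport-dependent
    groups = [conc[labels == k] for k in range(3)]
    print(kruskal(*groups))   # small p => composition differs across clusters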
NASA Astrophysics Data System (ADS)
Millar, David J.; Ewers, Brent E.; Mackay, D. Scott; Peckham, Scott; Reed, David E.; Sekoni, Adewale
2017-09-01
Mountain pine beetle outbreaks in western North America have led to extensive forest mortality, justifiably generating interest in improving our understanding of how this type of ecological disturbance affects hydrological cycles. While observational studies and simulations have been used to elucidate the effects of mountain pine beetle mortality on hydrological fluxes, an ecologically mechanistic model of forest evapotranspiration (ET) evaluated against field data has yet to be developed. In this work, we use the Terrestrial Regional Ecosystem Exchange Simulator (TREES) to incorporate the ecohydrological impacts of mountain pine beetle disturbance on ET for a lodgepole pine-dominated forest equipped with an eddy covariance tower. An existing degree-day model was incorporated that predicted the life cycle of mountain pine beetles, along with an empirically derived submodel that allowed sap flux to decline as a function of temperature-dependent blue stain fungal growth. The eddy covariance footprint was divided into multiple cohorts for multiple growing seasons, including representations of recently attacked trees and the compensatory effects of regenerating understory, using two different spatial scaling methods. Our results showed that the multiple-cohort approach matched eddy covariance-measured ecosystem-scale ET fluxes well, and showed improved performance compared to model simulations assuming a binary framework of only areas of live and dead overstory. Cumulative growing season ecosystem-scale ET fluxes were 8-29% greater using the multicohort approach during years in which beetle attacks occurred, highlighting the importance of including compensatory ecological mechanisms in ET models.
Community Delivery of a Comprehensive Fall-Prevention Program in People with Multiple Sclerosis
Frankel, Debra; Tompkins, Sara A.; Cameron, Michelle
2016-01-01
Background: People with multiple sclerosis (MS) fall frequently. In 2011, the National Multiple Sclerosis Society launched a multifactorial fall-prevention group exercise and education program, Free From Falls (FFF), to prevent falls in MS. The objective of this study was to assess the impact of participation in the FFF program on balance, mobility, and falls in people with MS. Methods: This was a retrospective evaluation of assessments from community delivery of FFF. Changes in Activities-specific Balance Confidence scale scores, Berg Balance Scale scores, 8-foot Timed Up and Go performance, and falls were assessed. Results: A total of 134 participants completed the measures at the first and last FFF sessions, and 109 completed a 6-month follow-up assessment. Group mean scores on the Activities-specific Balance Confidence scale (F1,66 = 17.14, P < .05, η2 = 0.21), Berg Balance Scale (F1,68 = 23.39, P < .05, η2 = 0.26), and 8-foot Timed Up and Go (F1,79 = 4.83, P < .05, η2 = 0.06) all improved significantly from the first to the last session. At the 6-month follow-up, fewer falls were reported (χ2 [4, N = 239] = 10.56, P < .05, Phi = 0.21). Conclusions: These observational data suggest that the FFF group education and exercise program improves balance confidence, balance performance, and functional mobility and reduces falls in people with MS. PMID:26917997
Derivation of scaled surface reflectances from AVIRIS data
NASA Technical Reports Server (NTRS)
Gao, Bo-Cai; Heidebrecht, Kathleen B.; Goetz, Alexander F. H.
1993-01-01
A method for retrieving 'scaled surface reflectances' assuming horizontal surfaces having Lambertian reflectances from spectral data collected by Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) is presented here. In this method, the integrated water vapor amount on a pixel by pixel basis is derived from the 0.94 micron and 1.14 micron water vapor absorption features. The transmission spectra of H2O, CO2, O3, N2O, CO, CH4, and O2 in the 0.4-2.5 micron region are simulated. The scattering effect due to atmospheric molecules and aerosols is modeled with the 5S computer code. The AVIRIS radiances are divided by solar irradiances above the atmosphere to obtain the apparent reflectances. The scaled surface reflectances are derived from the apparent reflectances using the simulated atmospheric gaseous transmittances and the simulated molecular and aerosol scattering data. The scaled surface reflectances differ from the real surface reflectances by a multiplicative factor. In order to convert the scaled surface reflectances into real surface reflectances, the slopes and aspects of the surfaces must be known.
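The first step of the retrieval - converting measured radiance to apparent reflectance - is a standard formula and easy to sketch. The final division by simulated gas transmittance is shown with a placeholder value; the paper simulates those transmittances band by band.

    import numpy as np

    def apparent_reflectance(radiance, solar_irradiance, sun_zenith_deg,
                             d_au=1.0):
        """Apparent (top-of-atmosphere) reflectance from measured radiance:
        rho* = pi * L * d^2 / (E0 * cos(theta_s)). Units must be consistent
        (e.g., W m-2 sr-1 um-1 for L and W m-2 um-1 for E0)."""
        mu0 = np.cos(np.radians(sun_zenith_deg))
        return np.pi * radiance * d_au**2 / (solar_irradiance * mu0)

    rho_apparent = apparent_reflectance(42.0, 1500.0, 30.0)
    t_gas = 0.92   # placeholder for the simulated two-way gas transmittance
    print(rho_apparent / t_gas)   # a "scaled" surface reflectance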
Cross-cultural validity of the scale for interpersonal behavior.
Nota, Laura; Arrindell, Willem A; Soresi, Salvatore; van der Ende, Jan; Sanavio, Ezio
2011-01-01
The Scale for Interpersonal Behavior (SIB) is a 50-item multidimensional measure of difficulty and distress in assertiveness. The SIB assesses negative assertion, expression of and dealing with personal limitations, initiating assertiveness and positive assertion. The SIB was originally developed in the Netherlands. The present study attempted to replicate the original factors with an Italian student sample (n = 995). The four distress and four performance factors were replicable across two methods of analysis (the multiple group method of confirmatory analysis and Tucker's coefficient of congruence, phi). The corresponding scales were internally consistent and showed predicted patterns of correlations with a measure of self-efficacy. Sex and age differences in assertiveness were generally negligible. Italian students had higher positive assertion-performance scores than the Dutch and comparable scores on the other performance scales; by contrast, the Italian subjects had significantly higher scores on all SIB distress scales than their Dutch equivalents. This was ascribed to the stronger pressure on people in Italian society to behave assertively (Hofstede's National Masculinity score = 70) as opposed to Dutch society (National Masculinity score = 14).
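Tucker's coefficient of congruence mentioned here is a one-line computation on factor loading vectors; the loadings in the example are made-up numbers.

    import numpy as np

    def tucker_phi(x, y):
        """Tucker's coefficient of congruence between two loading vectors:
        phi = sum(x*y) / sqrt(sum(x^2) * sum(y^2)). Values above ~0.95 are
        conventionally read as factor equivalence."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        return x @ y / np.sqrt((x @ x) * (y @ y))

    # Illustrative loadings for one SIB factor in two samples:
    print(round(tucker_phi([0.7, 0.6, 0.5, 0.4],
                           [0.68, 0.63, 0.47, 0.41]), 3))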
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework
Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-01-01
Aim: Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location: Eastern North America (as an example). Methods: Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results: For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions: We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698
An Illustrative Guide to the Minerva Framework
NASA Astrophysics Data System (ADS)
Flom, Erik; Leonard, Patrick; Hoeffel, Udo; Kwak, Sehyun; Pavone, Andrea; Svensson, Jakob; Krychowiak, Maciej; Wendelstein 7-X Team Collaboration
2017-10-01
Modern physics experiments require tracking and modelling data and their associated uncertainties on a large scale, as well as the combined implementation of multiple independent data streams for sophisticated modelling and analysis. The Minerva Framework offers a centralized, user-friendly method of large-scale physics modelling and scientific inference. Currently used by teams at multiple large-scale fusion experiments including the Joint European Torus (JET) and Wendelstein 7-X (W7-X), the Minerva framework provides a forward-model-friendly architecture for developing and implementing models for large-scale experiments. One aspect of the framework involves so-called data sources, which are nodes in the graphical model. These nodes are supplied with engineering and physics parameters. When end-user level code calls a node, it is checked network-wide against its dependent nodes for changes since its last implementation and returns version-specific data. Here, a filterscope data node is used as an illustrative example of the Minerva Framework's data management structure and its further application to Bayesian modelling of complex systems. This work has been carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 under Grant Agreement No. 633053.
Multiple time scale analysis of sediment and runoff changes in the Lower Yellow River
NASA Astrophysics Data System (ADS)
Chi, Kaige; Gang, Zhao; Pang, Bo; Huang, Ziqian
2018-06-01
Sediment and runoff changes at seven hydrological stations along the Lower Yellow River (LYR) (Huayuankou, Jiahetan, Gaocun, Sunkou, Aishan, Qikou and Lijin) from 1980 to 2003 were analyzed at multiple time scales. The maximum monthly, daily and hourly values of sediment load and runoff were also analyzed along with the annual mean values. The Mann-Kendall non-parametric trend test and the Hurst coefficient method were adopted in the study. Research results indicate that (1) the runoff of the seven hydrological stations was significantly reduced over the study period at different time scales, whereas the trends in sediment load at these stations were not obvious; the sediment load at the Huayuankou, Jiahetan and Aishan stations even increased slightly as runoff decreased. (2) The trends in sediment load at different time scales differed at the Luokou and Lijin stations: although the annual and monthly sediment loads were broadly flat, the maximum hourly sediment load showed a decreasing trend. (3) According to the Hurst coefficients, the trends in sediment and runoff will continue if no measures are taken, which demonstrates the necessity of a runoff-sediment regulation scheme.
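The Mann-Kendall test used here is short enough to sketch in full. This is a hedged minimal version without tie or autocorrelation corrections, which operational trend analyses normally add; the runoff series is synthetic.

    import numpy as np
    from scipy.stats import norm

    def mann_kendall(x):
        """Mann-Kendall trend test: S statistic, Z score, two-sided p."""
        x = np.asarray(x, float)
        n = len(x)
        s = sum(np.sign(x[j] - x[i])
                for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
        return s, z, 2 * (1 - norm.cdf(abs(z)))

    rng = np.random.default_rng(0)
    annual_runoff = 100 - 1.5 * np.arange(24) + rng.normal(0, 5, 24)
    print(mann_kendall(annual_runoff))   # large negative Z, small p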
Dilts, Thomas E.; Weisberg, Peter J.; Leitner, Phillip; Matocq, Marjorie D.; Inman, Richard D.; Nussear, Ken E.; Esque, Todd C.
2016-01-01
Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales, from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multi-scale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods including graph theory, circuit theory and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this California threatened species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American Southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping-stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat, and should be applicable across a broad range of taxa.
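One building block of such frameworks, a least-cost path across a resistance surface, can be sketched on a toy raster. The resistance values are invented; real analyses derive them from habitat suitability models.

    import networkx as nx
    import numpy as np

    rng = np.random.default_rng(0)
    resistance = rng.uniform(1.0, 10.0, size=(20, 20))  # toy resistance raster

    G = nx.grid_2d_graph(20, 20)
    for u, v in G.edges():
        # cost of moving between two cells = mean of their resistances
        G[u][v]["weight"] = 0.5 * (resistance[u] + resistance[v])

    path = nx.shortest_path(G, (0, 0), (19, 19), weight="weight")  # Dijkstra
    cost = nx.shortest_path_length(G, (0, 0), (19, 19), weight="weight")
    print(len(path), "cells, total cost", round(cost, 1))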
MULTIPLE SCALES FOR SUSTAINABLE RESULTS
This session will highlight recent research that incorporates the use of multiple scales and innovative environmental accounting to better inform decisions that affect sustainability, resilience, and vulnerability at all scales. Effective decision-making involves assessment at mu...
Peridynamic Multiscale Finite Element Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Costa, Timothy; Bond, Stephen D.; Littlewood, David John
The problem of computing quantum-accurate design-scale solutions to mechanics problems is rich with applications and serves as the background to modern multiscale science research. The prob- lem can be broken into component problems comprised of communicating across adjacent scales, which when strung together create a pipeline for information to travel from quantum scales to design scales. Traditionally, this involves connections between a) quantum electronic structure calculations and molecular dynamics and between b) molecular dynamics and local partial differ- ential equation models at the design scale. The second step, b), is particularly challenging since the appropriate scales of molecular dynamic andmore » local partial differential equation models do not overlap. The peridynamic model for continuum mechanics provides an advantage in this endeavor, as the basic equations of peridynamics are valid at a wide range of scales limiting from the classical partial differential equation models valid at the design scale to the scale of molecular dynamics. In this work we focus on the development of multiscale finite element methods for the peridynamic model, in an effort to create a mathematically consistent channel for microscale information to travel from the upper limits of the molecular dynamics scale to the design scale. In particular, we first develop a Nonlocal Multiscale Finite Element Method which solves the peridynamic model at multiple scales to include microscale information at the coarse-scale. We then consider a method that solves a fine-scale peridynamic model to build element-support basis functions for a coarse- scale local partial differential equation model, called the Mixed Locality Multiscale Finite Element Method. Given decades of research and development into finite element codes for the local partial differential equation models of continuum mechanics there is a strong desire to couple local and nonlocal models to leverage the speed and state of the art of local models with the flexibility and accuracy of the nonlocal peridynamic model. In the mixed locality method this coupling occurs across scales, so that the nonlocal model can be used to communicate material heterogeneity at scales inappropriate to local partial differential equation models. Additionally, the computational burden of the weak form of the peridynamic model is reduced dramatically by only requiring that the model be solved on local patches of the simulation domain which may be computed in parallel, taking advantage of the heterogeneous nature of next generation computing platforms. Addition- ally, we present a novel Galerkin framework, the 'Ambulant Galerkin Method', which represents a first step towards a unified mathematical analysis of local and nonlocal multiscale finite element methods, and whose future extension will allow the analysis of multiscale finite element methods that mix models across scales under certain assumptions of the consistency of those models.« less
Assessment of the Casualty Risk of Multiple Meteorological Hazards in China
Xu, Wei; Zhuo, Li; Zheng, Jing; Ge, Yi; Gu, Zhihui; Tian, Yugang
2016-01-01
A study of the frequency, intensity, and risk of extreme climatic events or natural hazards is important for assessing the impacts of climate change. Many models have been developed to assess the risk of multiple hazards, however, most of the existing approaches can only model the relative levels of risk. This paper reports the development of a method for the quantitative assessment of the risk of multiple hazards based on information diffusion. This method was used to assess the risks of loss of human lives from 11 types of meteorological hazards in China at the prefectural and provincial levels. Risk curves of multiple hazards were obtained for each province and the risks of 10-year, 20-year, 50-year, and 100-year return periods were mapped. The results show that the provinces (municipalities, autonomous regions) in southeastern China are at higher risk of multiple meteorological hazards as a result of their geographical location and topography. The results of this study can be used as references for the management of meteorological disasters in China. The model can be used to quantitatively calculate the risks of casualty, direct economic losses, building collapse, and agricultural losses for any hazards at different spatial scales. PMID:26901210
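The information diffusion idea can be sketched compactly: each observation spreads its information to a grid of monitoring points via a Gaussian kernel, and normalizing yields a discrete distribution from which exceedance probabilities and return levels follow. The bandwidth rule and toy casualty counts below are illustrative assumptions; the paper's coefficients may differ.

    import numpy as np

    def diffusion_exceedance(obs, grid):
        """Normal information diffusion over `grid`, returning P(X >= u)."""
        obs = np.asarray(obs, float)
        a, b, n = obs.min(), obs.max(), len(obs)
        h = 1.4208 * (b - a) / (n - 1)     # a common empirical bandwidth rule
        f = np.exp(-(obs[:, None] - grid[None, :]) ** 2
                   / (2 * h * h)).sum(axis=0)
        p = f / f.sum()                    # discrete probability distribution
        return p[::-1].cumsum()[::-1]      # exceedance probabilities

    deaths = [3, 0, 7, 2, 15, 1, 4, 9, 0, 6, 22, 5]  # toy annual counts
    grid = np.arange(0, 31, 1.0)
    exceed = diffusion_exceedance(deaths, grid)
    # 10-year return level: value exceeded with annual probability 1/10
    print("10-year level ~", grid[np.argmin(np.abs(exceed - 0.1))])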
Multiple Phenotype Association Tests Using Summary Statistics in Genome-Wide Association Studies
Liu, Zhonghua; Lin, Xihong
2017-01-01
We study in this paper jointly testing the associations of a genetic variant with correlated multiple phenotypes using the summary statistics of individual phenotype analysis from Genome-Wide Association Studies (GWASs). We estimated the between-phenotype correlation matrix using the summary statistics of individual phenotype GWAS analyses, and developed genetic association tests for multiple phenotypes by accounting for between-phenotype correlation without the need to access individual-level data. Since genetic variants often affect multiple phenotypes differently across the genome and the between-phenotype correlation can be arbitrary, we proposed robust and powerful multiple phenotype testing procedures by jointly testing a common mean and a variance component in linear mixed models for summary statistics. We computed the p-values of the proposed tests analytically. This computational advantage makes our methods practically appealing in large-scale GWASs. We performed simulation studies to show that the proposed tests maintained correct type I error rates, and to compare their powers in various settings with the existing methods. We applied the proposed tests to a GWAS Global Lipids Genetics Consortium summary statistics data set and identified additional genetic variants that were missed by the original single-trait analysis. PMID:28653391
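The key trick - estimating the between-phenotype correlation from genome-wide summary statistics and then combining one variant's z-scores across phenotypes - can be illustrated with a simple omnibus chi-square test. This sketch shows the ingredients, not the paper's mean-plus-variance-component statistic, and all data are simulated.

    import numpy as np
    from scipy.stats import chi2

    # Under the null, a variant's z-scores across K phenotypes are ~ N(0, R),
    # where R can be estimated from the z-scores of (mostly null) variants
    # genome-wide -- no individual-level data needed.
    rng = np.random.default_rng(0)
    K, M = 3, 50000
    R_true = np.array([[1.0, 0.6, 0.3],
                       [0.6, 1.0, 0.4],
                       [0.3, 0.4, 1.0]])
    Z_null = rng.multivariate_normal(np.zeros(K), R_true, size=M)

    R_hat = np.corrcoef(Z_null, rowvar=False)   # from summary statistics
    z_variant = np.array([2.1, 2.4, 1.8])       # one variant, K phenotypes
    T = z_variant @ np.linalg.solve(R_hat, z_variant)
    print("joint p =", chi2.sf(T, df=K))        # omnibus test across traits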
Tie, Qiang; Hu, Hongchang; Tian, Fuqiang; Holbrook, N Michele
2018-08-15
Accurately estimating forest evapotranspiration and its components is of great importance for hydrology, ecology, and meteorology. In this study, a comparison of methods for determining forest evapotranspiration and its components at annual, monthly, daily, and diurnal scales was conducted based on in situ measurements in a subhumid mountainous forest of North China. The goal of the study was to evaluate the accuracies and reliabilities of the different methods. The results indicate the following: (1) The sap flow upscaling procedure, taking into account diversities in forest types and tree species, produced a component-based forest evapotranspiration estimate that agreed with the eddy covariance-based estimate at the temporal scales of year, month, and day, while the soil water budget-based forest evapotranspiration estimate was also qualitatively consistent with the eddy covariance-based estimate at the daily scale; (2) At the annual scale, the catchment water balance-based forest evapotranspiration estimate was significantly higher than the eddy covariance-based estimate, most probably because of non-negligible subsurface runoff caused by the widely distributed regolith and fractured bedrock under the ground; (3) At the sub-daily scale, the diurnal course of the sap flow-based canopy transpiration estimate lagged significantly behind the eddy covariance-based forest evapotranspiration estimate, which might physiologically be due to stem water storage and stem hydraulic conductivity. The results from this region may serve as a useful reference for forest evapotranspiration estimation and method evaluation in regions with similar environmental conditions. Copyright © 2018 Elsevier B.V. All rights reserved.
A non-isotropic multiple-scale turbulence model
NASA Technical Reports Server (NTRS)
Chen, C. P.
1990-01-01
A newly developed non-isotropic multiple-scale turbulence model (MS/ASM) is described for complex flow calculations. This model focuses on the direct modeling of Reynolds stresses and utilizes split-spectrum concepts for modeling multiple-scale effects in turbulence. Validation studies on free shear flows, rotating flows and recirculating flows show that the current model performs significantly better than the single-scale k-epsilon model. The present model is relatively inexpensive in terms of CPU time, which makes it suitable for broad engineering flow applications.
A measure of association for ordered categorical data in population-based studies
Nelson, Kerrie P; Edwards, Don
2016-01-01
Ordinal classification scales are commonly used to define a patient’s disease status in screening and diagnostic tests such as mammography. Challenges arise in agreement studies when evaluating the association between many raters’ classifications of patients’ disease or health status when an ordered categorical scale is used. In this paper, we describe a population-based approach and chance-corrected measure of association to evaluate the strength of relationship between multiple raters’ ordinal classifications where any number of raters can be accommodated. In contrast to Shrout and Fleiss’ intraclass correlation coefficient, the proposed measure of association is invariant with respect to changes in disease prevalence. We demonstrate how unique characteristics of individual raters can be explored using random effects. Simulation studies are conducted to demonstrate the properties of the proposed method under varying assumptions. The methods are applied to two large-scale agreement studies of breast cancer screening and prostate cancer severity. PMID:27184590
Managing adaptively for multifunctionality in agricultural systems
Hodbod, Jennifer; Barreteau, Olivier; Allen, Craig R.; Magda, Danièle
2016-01-01
The critical importance of agricultural systems for food security and as a dominant global landcover requires management that considers the full dimensions of system functions at appropriate scales, i.e. multifunctionality. We propose that adaptive management is the most suitable management approach for such goals, given its ability to reduce uncertainty over time and support multiple objectives within a system, for multiple actors. As such, adaptive management may be the most appropriate method for sustainably intensifying production whilst increasing the quantity and quality of ecosystem services. However, the current assessment of performance of agricultural systems doesn’t reward ecosystem service provision. Therefore, we present an overview of the ecosystem functions agricultural systems should and could provide, coupled with a revised definition for assessing the performance of agricultural systems from a multifunctional perspective that, when all satisfied, would create adaptive agricultural systems that can increase production whilst ensuring food security and the quantity and quality of ecosystem services. The outcome of this high level of performance is the capacity to respond to multiple shocks without collapse, equity and triple bottom line sustainability. Through the assessment of case studies, we find that alternatives to industrialized agricultural systems incorporate more functional goals, but that there are mixed findings as to whether these goals translate into positive measurable outcomes. We suggest that an adaptive management perspective would support the implementation of a systematic analysis of the social, ecological and economic trade-offs occurring within such systems, particularly between ecosystem services and functions, in order to provide suitable and comparable assessments. We also identify indicators to monitor performance at multiple scales in agricultural systems which can be used within an adaptive management framework to increase resilience at multiple scales.
Müller, Marco; Wasmer, Katharina; Vetter, Walter
2018-06-29
Countercurrent chromatography (CCC) is an all liquid based separation technique typically used for the isolation and purification of natural compounds. The simplicity of the method makes it easy to scale up CCC separations from analytical to preparative and even industrial scale. However, scale-up of CCC separations requires two different instruments with varying coil dimensions. Here we developed two variants of the CCC multiple injection mode as an alternative to increase the throughput and enhance productivity of a CCC separation when using only one instrument. The concept is based on the parallel injection of samples at different points in the CCC column system and the simultaneous separation using one pump only. The wiring of the CCC setup was modified by the insertion of a 6-port selection valve, multiple T-pieces and sample loops. Furthermore, the introduction of storage sample loops enabled the CCC system to be used with repeated injection cycles. Setup and advantages of both multiple injection modes were shown by the isolation of the furan fatty acid 11-(3,4-dimethyl-5-pentylfuran-2-yl)-undecanoic acid (11D5-EE) from an ethyl ester oil rich in 4,7,10,13,16,19-docosahexaenoic acid (DHA-EE). 11D5-EE was enriched in one step from 1.9% to 99% purity. The solvent consumption per isolated amount of analyte could be reduced by ∼40% compared to increased throughput CCC and by ∼5% in the repeated multiple injection mode which also facilitated the isolation of the major compound (DHA-EE) in the sample. Copyright © 2018 Elsevier B.V. All rights reserved.
Tilley, Barbara C.; LaPelle, Nancy R.; Goetz, Christopher G.; Stebbins, Glenn T.
2016-01-01
Background Cognitive pretesting, a qualitative step in scale development, precedes field testing and assesses the difficulty of instrument completion for examiners and respondents. Cognitive pretesting assesses respondent interest, attention span, discomfort, and comprehension, and highlights problems with the logical structure of questions/response options that can affect understanding. In the past this approach was not consistently used in the development or revision of movement disorders scales. Methods We applied qualitative cognitive pretesting using testing guides in development of the Movement Disorder Society-sponsored revision of the Unified Parkinson’s Disease Rating Scale (MDS-UPDRS). The guides were based on qualitative techniques, verbal probing and “think-aloud” interviewing, to identify problems with the scale from the patient and rater perspectives. English-speaking Parkinson’s disease patients and movement disorders specialists (raters) from multiple specialty clinics in the United States, Western Europe and Canada used the MDS-UPDRS and completed the testing guides. Results Two rounds of cognitive pretesting were necessary before proceeding to field testing of the revised scale to assess clinimetric properties. Scale revisions based on cognitive pretesting included changes in phrasing, simplification of some questions, and addition of a reassuring statement explaining that not all PD patients experience the symptoms described in the questions. Conclusions The strategy of incorporating cognitive pretesting into scale development and revision provides a model for other movement disorders scales. Cognitive pretesting is being used in translating the MDS-UPDRS into multiple languages to improve comprehension and acceptance and in the development of a new Unified Dyskinesia Rating Scale for Parkinson’s disease patients. PMID:24613868
Multiplexed Affinity-Based Separation of Proteins and Cells Using Inertial Microfluidics.
Sarkar, Aniruddh; Hou, Han Wei; Mahan, Alison E; Han, Jongyoon; Alter, Galit
2016-03-30
Isolation of low-abundance proteins or rare cells from complex mixtures, such as blood, is required for many diagnostic, therapeutic and research applications. Current affinity-based protein or cell separation methods use binary 'bind-elute' separations and are inefficient when applied to the isolation of multiple low-abundance proteins or cell types. We present a method for rapid and multiplexed, yet inexpensive, affinity-based isolation of both proteins and cells, using a size-coded mixture of multiple affinity-capture microbeads and an inertial microfluidic particle sorter device. In a single binding step, different targets (cells or proteins) bind to beads of different sizes, which are then sorted by flowing them through a spiral microfluidic channel. This technique performs continuous-flow, high-throughput affinity separation of milligram-scale protein samples or millions of cells in minutes after binding. We demonstrate the simultaneous isolation of multiple antibodies from serum and multiple cell types from peripheral blood mononuclear cells or whole blood. We use the technique to isolate low-abundance antibodies specific to different HIV antigens and rare HIV-specific cells from blood obtained from HIV+ patients.
[Construction of a multiple-scale implant surface with super-hydrophilicity].
Luo, Qiao-jie; Li, Xiao-dong; Huang, Ying; Zhao, Shi-fang
2012-05-01
To construct a multiple-scale organized implant surface with super-hydrophilicity. The SiC-paper-polished titanium disc was sandblasted and treated with HF/HNO₃ and HCl/H₂SO₄, then acid-etched with H₂SO₄/H₂O₂. The physicochemical properties of the surfaces were characterized by scanning electron microscopy, static contact angle measurement and X-ray diffraction. MC3T3-E1 cells were used to evaluate the effects of the surface on cell adhesion, proliferation and differentiation. The acid-etching process with a mixture of H₂SO₄/H₂O₂ superimposed a nano-scale structure on the micro-scale texture. The multiple-scale implant surface showed enhanced hydrophilicity and was more favorable to the responses of osteoprogenitor cells, as characterized by increased DNA content, enhanced ALP activity and increased OC production. A multiple-scale implant surface with super-hydrophilicity was thus constructed in this study, which facilitates cell adhesion and proliferation.
Three-dimensional tracking of small aquatic organisms using fluorescent nanoparticles.
Ekvall, Mikael T; Bianco, Giuseppe; Linse, Sara; Linke, Heiner; Bäckman, Johan; Hansson, Lars-Anders
2013-01-01
Tracking techniques are vital for understanding the biology and ecology of organisms. While such techniques have provided important information on the movement and migration of large animals, such as mammals and birds, scientific advances in understanding the individual behaviour and interactions of small (mm-scale) organisms have been hampered by the constraints of existing tracking methods, such as the size of available tracking devices. By combining biology, chemistry and physics, we here present a method that allows three-dimensional (3D) tracking of individual mm-sized aquatic organisms. The method is based on in-vivo labelling of the organisms with fluorescent nanoparticles, so-called quantum dots, and tracking of the organisms in 3D via the quantum-dot fluorescence using a synchronized multiple-camera system. It allows for the efficient and simultaneous study of the behaviour of single as well as multiple individuals in large observation volumes, thus enabling the study of behavioural interactions at the community scale. The method is non-perturbing - we demonstrate that the labelling does not affect the behavioural response of the organisms - and is applicable over a wide range of taxa, including cladocerans as well as insects, suggesting that our methodological concept opens up new research fields on the individual behaviour of small animals. Hence, this offers opportunities to focus on important biological, ecological and behavioural questions that it was never before possible to address.
Micro/Nano-scale Strain Distribution Measurement from Sampling Moiré Fringes.
Wang, Qinghua; Ri, Shien; Tsuda, Hiroshi
2017-05-23
This work describes the measurement procedure and principles of a sampling moiré technique for full-field micro/nano-scale deformation measurements. The developed technique can be performed in two ways: using the reconstructed multiplication moiré method or the spatial phase-shifting sampling moiré method. When the specimen grid pitch is around 2 pixels, 2-pixel sampling moiré fringes are generated to reconstruct a multiplication moiré pattern for a deformation measurement. Both the displacement and strain sensitivities are twice as high as in the traditional scanning moiré method in the same wide field of view. When the specimen grid pitch is around or greater than 3 pixels, multi-pixel sampling moiré fringes are generated, and a spatial phase-shifting technique is combined for a full-field deformation measurement. The strain measurement accuracy is significantly improved, and automatic batch measurement is easily achievable. Both methods can measure the two-dimensional (2D) strain distributions from a single-shot grid image without rotating the specimen or scanning lines, as in traditional moiré techniques. As examples, the 2D displacement and strain distributions, including the shear strains of two carbon fiber-reinforced plastic specimens, were measured in three-point bending tests. The proposed technique is expected to play an important role in the non-destructive quantitative evaluations of mechanical properties, crack occurrences, and residual stresses of a variety of materials.
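As a rough illustration of the spatial phase-shifting variant described above, the following sketch (Python/NumPy; the synthetic grating, pitch values and function name are our own assumptions, not taken from the paper) builds T phase-shifted moiré fringe sets by down-sampling a 1D grating at every T-th pixel with T different starting offsets, interpolating each subset back to the full grid, and recovering the wrapped moiré phase with the standard phase-shifting formula.

    import numpy as np

    def sampling_moire_phase(intensity, T):
        # intensity: 1D grating image whose pitch is close to T pixels
        # T: sampling pitch in pixels, which also sets the number of phase steps
        n = len(intensity)
        x = np.arange(n)
        fringes = np.empty((T, n))
        for k in range(T):
            # sample every T-th pixel starting at offset k, then interpolate
            # back to the full grid: one phase-shifted moire fringe pattern
            fringes[k] = np.interp(x, x[k::T], intensity[k::T])
        steps = 2 * np.pi * np.arange(T)[:, None] / T
        num = (fringes * np.sin(steps)).sum(axis=0)
        den = (fringes * np.cos(steps)).sum(axis=0)
        return -np.arctan2(num, den)  # wrapped moire phase at every pixel

    # toy check: a grating of 3.3-pixel pitch sampled with T = 3
    x = np.arange(512)
    phi = sampling_moire_phase(1 + np.cos(2 * np.pi * x / 3.3), T=3)

Displacement then follows as u = φ·p/(2π) for specimen grid pitch p, and strain as the spatial derivative of u; the 2-pixel multiplication-moiré variant described in the abstract needs a different reconstruction and is not sketched here.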
Poirier, Frédéric J A M; Gurnsey, Rick
2005-08-01
Eccentricity-dependent resolution losses are sometimes compensated for in psychophysical experiments by magnifying (scaling) stimuli at each eccentricity. The use of either pre-selected scaling factors or unscaled stimuli sometimes leads to non-monotonic changes in performance as a function of eccentricity. We argue that such non-monotonic changes arise when performance is limited by more than one type of constraint at each eccentricity. Building on current methods developed to investigate peripheral perception [e.g., Watson, A. B. (1987). Estimation of local spatial scale. Journal of the Optical Society of America A, 4 (8), 1579-1582; Poirier, F. J. A. M., & Gurnsey, R. (2002). Two eccentricity dependent limitations on subjective contour discrimination. Vision Research, 42, 227-238; Strasburger, H., Rentschler, I., & Harvey Jr., L. O. (1994). Cortical magnification theory fails to predict visual recognition. European Journal of Neuroscience, 6, 1583-1588], we show how measured scaling can deviate from a linear function of eccentricity in a grating acuity task [Thibos, L. N., Still, D. L., & Bradley, A. (1996). Characterization of spatial aliasing and contrast sensitivity in peripheral vision. Vision Research, 36(2), 249-258]. This framework can also explain the central performance drop [Kehrer, L. (1989). Central performance drop on perceptual segregation tasks. Spatial Vision, 4, 45-62] and a case of "reverse scaling" of the integration window in symmetry [Tyler, C. W. (1999). Human symmetry detection exhibits reverse eccentricity scaling. Visual Neuroscience, 16, 919-922]. These cases of non-monotonic performance are shown to be consistent with multiple sources of resolution loss, each of which increases linearly with eccentricity. We conclude that most eccentricity research, including "oddities", can be explained by multiple-scaling theory as extended here, where the receptive field properties of all underlying mechanisms in a task increase in size with eccentricity, but not necessarily at the same rate.
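A compact way to state the multiple-scaling idea (our notation, not the authors'): each limiting mechanism i carries a threshold that grows linearly with eccentricity E, and observed performance is governed by whichever mechanism is currently binding,

    S_i(E) = S_i(0)\left(1 + \frac{E}{E_{2,i}}\right), \qquad S(E) = \max_i S_i(E),

where E_{2,i} is the eccentricity at which the i-th threshold doubles. Because the slopes 1/E_{2,i} differ across mechanisms, the binding constraint can switch as E grows, which is one way the non-monotonic performance curves discussed above can arise.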
Disanto, Giulio; Hall, Carolina; Lucas, Robyn; Ponsonby, Anne-Louise; Berlanga-Taylor, Antonio J; Giovannoni, Gavin; Ramagopalan, Sreeram V
2013-09-01
Gene-environment interactions may shed light on the mechanisms underlying multiple sclerosis (MS). We pooled data from two case-control studies on incident demyelination and used different methods to assess interaction between HLA-DRB1*15 (DRB1-15) and history of infectious mononucleosis (IM). Individuals exposed to both factors were at substantially increased risk of disease (OR=7.32, 95% CI=4.92-10.90). In logistic regression models, DRB1-15 and IM status were independent predictors of disease while their interaction term was not (DRB1-15*IM: OR=1.35, 95% CI=0.79-2.23). However, interaction on an additive scale was evident (synergy index=2.09, 95% CI=1.59-2.59; excess risk due to interaction=3.30, 95% CI=0.47-6.12; attributable proportion due to interaction=45%, 95% CI=22-68%). This suggests that, if the additive model is appropriate, DRB1-15 and IM may be involved in the same causal process leading to MS, and highlights the benefit of reporting gene-environment interactions on both multiplicative and additive scales.
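The additive-scale quantities quoted above follow directly from the three odds ratios via Rothman's formulas; a minimal sketch in Python (the single-exposure values or10 and or01 below are illustrative choices that reproduce the reported figures, since they are not given in the abstract):

    # Additive interaction measures from odds ratios (Rothman's formulas).
    # or11: both exposures; or10, or01: one exposure each; reference = 1.
    def additive_interaction(or11, or10, or01):
        reri = or11 - or10 - or01 + 1                 # excess risk due to interaction
        ap = reri / or11                              # attributable proportion
        s = (or11 - 1) / ((or10 - 1) + (or01 - 1))    # synergy index
        return reri, ap, s

    # or10 + or01 = 5.02 reproduces RERI = 3.30, AP = 0.45, S = 2.09 for OR11 = 7.32
    print(additive_interaction(7.32, 2.51, 2.51))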
NASA Astrophysics Data System (ADS)
Zhang, Dong-Hai; Chen, Yan-Ling; Wang, Guo-Rong; Li, Wang-Dong; Wang, Qing; Yao, Ji-Jie; Zhou, Jian-Guo; Zheng, Su-Hua; Xu, Li-Ling; Miao, Hui-Feng; Wang, Peng
2014-07-01
Multiplicity fluctuation of the target evaporated fragments emitted in 290 MeV/u 12C-AgBr, 400 MeV/u 12C-AgBr, 400 MeV/u 20Ne-AgBr and 500 MeV/u 56Fe-AgBr interactions is investigated using the scaled factorial moment method in two-dimensional normal phase space and cumulative variable space, respectively. It is found that in normal phase space the scaled factorial moment (ln
NASA Astrophysics Data System (ADS)
Li, Shuangcai; Duffy, Christopher J.
2011-03-01
Our ability to predict complex environmental fluid flow and transport hinges on accurate and efficient simulations of multiple physical phenomena operating simultaneously over a wide range of spatial and temporal scales, including overbank floods, coastal storm surge events, drying and wetting bed conditions, and simultaneous bed form evolution. This research implements a fully coupled strategy for solving shallow water hydrodynamics, sediment transport, and morphological bed evolution in rivers and floodplains (PIHM_Hydro) and applies the model to field and laboratory experiments that cover a wide range of spatial and temporal scales. The model uses a standard upwind finite volume method and Roe's approximate Riemann solver for unstructured grids. A multidimensional linear reconstruction and slope limiter are implemented, achieving second-order spatial accuracy. Model efficiency and stability are treated using an explicit-implicit method for temporal discretization with operator splitting. Laboratory- and field-scale experiments were compiled where coupled processes across a range of scales were observed and where higher-order spatial and temporal accuracy might be needed for accurate and efficient solutions. These experiments demonstrate the ability of the fully coupled strategy in capturing the dynamics of field-scale flood waves and small-scale drying-wetting processes.
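The full solver stack described here (unstructured finite volumes, Roe's Riemann solver, slope limiting, operator-split implicit-explicit stepping) cannot be reproduced in a few lines, but the basic finite-volume update it builds on can be; below is a hedged sketch of one first-order upwind step for 1D scalar advection in Python, with a made-up grid and velocity, not the PIHM_Hydro code itself.

    import numpy as np

    def fv_upwind_step(q, u, dx, dt):
        # One explicit finite-volume step for q_t + u q_x = 0 with u > 0:
        # each cell average changes by the flux difference across its faces,
        # and upwinding takes the face flux from the left (upwind) neighbour.
        flux = u * q
        f_left_face = np.concatenate(([flux[0]], flux[:-1]))
        return q - dt / dx * (flux - f_left_face)

    # stability of the explicit step requires the CFL condition u*dt/dx <= 1
    x = np.linspace(0.0, 1.0, 200)
    q = np.exp(-((x - 0.3) ** 2) / 0.002)
    for _ in range(100):
        q = fv_upwind_step(q, u=1.0, dx=x[1] - x[0], dt=0.004)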
Network Modeling and Energy-Efficiency Optimization for Advanced Machine-to-Machine Sensor Networks
Jung, Sungmo; Kim, Jong Hyun; Kim, Seoksoo
2012-01-01
Wireless machine-to-machine sensor networks with multiple radio interfaces are expected to have several advantages, including high spatial scalability, low event detection latency, and low energy consumption. Here, we propose a network model design method involving network approximation and an optimized multi-tiered clustering algorithm that maximizes node lifespan by minimizing energy consumption in a non-uniformly distributed network. Simulation results show that the cluster scales and network parameters determined with the proposed method facilitate a more efficient performance compared to existing methods. PMID:23202190
NASA Astrophysics Data System (ADS)
Teng, Xian; Pei, Sen; Morone, Flaviano; Makse, Hernán A.
2016-10-01
Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a scalable method called “Collective Influence (CI)” has been put forward through collective influence maximization. In contrast to heuristic methods that evaluate nodes’ significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI applies to the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social and scientific platforms including the American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from data to construct “virtual” information spreading processes. Our results demonstrate that the set of spreaders selected by CI can induce a larger scale of information propagation. Moreover, local measures such as the number of connections or citations are not necessarily the deterministic factors of nodes’ importance in realistic information spreading. This result has significance for ranking scientists in scientific networks like the APS, where the commonly used number of citations can be a poor indicator of the collective influence of authors in the community.
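For reference, CI at level ℓ scores a node by its reduced degree times the sum of reduced degrees over the frontier of its ℓ-ball, and spreaders are picked greedily; a minimal sketch with networkx (the function names and the toy graph are ours, the CI_ℓ formula is from the collective-influence literature):

    import networkx as nx

    def collective_influence(G, node, ell=2):
        # CI_l(i) = (k_i - 1) * sum over nodes j at distance exactly l of (k_j - 1)
        dist = nx.single_source_shortest_path_length(G, node, cutoff=ell)
        frontier = [j for j, d in dist.items() if d == ell]
        return (G.degree(node) - 1) * sum(G.degree(j) - 1 for j in frontier)

    def top_spreaders(G, n, ell=2):
        # greedy heuristic: repeatedly take the top-CI node and remove it,
        # so later scores are computed on the reduced graph
        H, chosen = G.copy(), []
        for _ in range(n):
            best = max(H.nodes, key=lambda v: collective_influence(H, v, ell))
            chosen.append(best)
            H.remove_node(best)
        return chosen

    print(top_spreaders(nx.barabasi_albert_graph(1000, 3), 5))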
Airlie, J; Baker, G A; Smith, S J; Young, C A
2001-06-01
To develop a scale to measure self-efficacy in neurologically impaired patients with multiple sclerosis and to assess the scale's psychometric properties. Cross-sectional questionnaire study in a clinical setting, with the retest questionnaire returned by mail after completion at home. Regional multiple sclerosis (MS) outpatient clinic or the Clinical Trials Unit (CTU) at a large neuroscience centre in the UK. One hundred persons with MS attending the Walton Centre for Neurology and Neurosurgery and Clatterbridge Hospital, Wirral, as outpatients. Cognitively impaired patients were excluded at an initial clinic assessment. Patients were asked to provide demographic data and complete the self-efficacy scale along with the following validated scales: Hospital Anxiety and Depression Scale, Rosenberg Self-Esteem Scale, and the Impact, Stigma, Mastery and Rankin Scales. The Rankin Scale and Barthel Index were also assessed by the physician. A new 11-item self-efficacy scale was constructed consisting of two domains, control and personal agency. The internal consistency of the scale was confirmed using Cronbach's alpha (alpha = 0.81). The test-retest reliability of the scale over two weeks was acceptable, with an intraclass correlation coefficient of 0.79. Construct validity was investigated using Pearson's product-moment correlation coefficient, resulting in significant correlations with depression (r = -0.52), anxiety (r = -0.50) and mastery (r = 0.73). Multiple regression analysis demonstrated that these factors accounted for 70% of the variance of scores on the self-efficacy scale, with scores on mastery, anxiety and perceived disability being independently significant. Assessment of the psychometric properties of this new self-efficacy scale suggests that it possesses good validity and reliability in patients with multiple sclerosis.
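Cronbach's alpha, used above to assess internal consistency, is easy to compute from a respondents-by-items score matrix; a sketch with made-up data (Python/NumPy; the 11 items echo the scale above, everything else is simulated):

    import numpy as np

    def cronbach_alpha(items):
        # items: respondents x items matrix of scores, no missing values
        k = items.shape[1]
        item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale totals
        return k / (k - 1) * (1 - item_var / total_var)

    rng = np.random.default_rng(0)
    trait = rng.normal(size=(100, 1))                  # shared trait drives all items
    scores = trait + 0.8 * rng.normal(size=(100, 11))  # 11 items, 100 respondents
    print(cronbach_alpha(scores))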
Xiao, Xiaolin; Moreno-Moral, Aida; Rotival, Maxime; Bottolo, Leonardo; Petretto, Enrico
2014-01-01
Recent high-throughput efforts such as ENCODE have generated a large body of genome-scale transcriptional data in multiple conditions (e.g., cell types and disease states). Leveraging these data is especially important for network-based approaches to human disease, for instance to identify coherent transcriptional modules (subnetworks) that can inform functional disease mechanisms and pathological pathways. Yet, genome-scale network analysis across conditions is significantly hampered by the paucity of robust and computationally efficient methods. Building on the Higher-Order Generalized Singular Value Decomposition, we introduce a new algorithmic approach for efficient, parameter-free and reproducible identification of network modules simultaneously across multiple conditions. Our method can accommodate weighted (and unweighted) networks of any size and can similarly use co-expression or raw gene expression input data, without hinging upon the definition and stability of the correlation used to assess gene co-expression. In simulation studies, our method showed distinctive advantages over existing methods: it accurately recovered both common and condition-specific network modules without the ad-hoc input parameters required by other approaches. We applied our method to genome-scale and multi-tissue transcriptomic datasets from rats (microarray-based) and humans (mRNA-sequencing-based) and identified several common and tissue-specific subnetworks with functional significance, which were not detected by other methods. In humans we recapitulated the crosstalk between cell-cycle progression and cell-extracellular matrix interaction processes in ventricular zones during neocortex expansion, and we uncovered previously unappreciated pathways related to the development of later cognitive functions in the cortical plate of the developing brain. Analyses of seven rat tissues identified a multi-tissue subnetwork of co-expressed heat shock protein (Hsp) and cardiomyopathy genes (Bag3, Cryab, Kras, Emd, Plec), which was significantly replicated using separate failing heart and liver gene expression datasets in humans, thus revealing a conserved functional role for Hsp genes in cardiovascular disease.
Early Colleges at Scale: Impacts on Secondary and Postsecondary Outcomes
ERIC Educational Resources Information Center
Lauen, Douglas L.; Fuller, Sarah; Barrett, Nathan; Janda, Ludmila
2017-01-01
We examine the impacts of early college high schools, small schools of choice located on college campuses. These schools provide a no-cost opportunity for students to earn college credit--or a 2-year degree--while in high school. Using rich administrative data on multiple cohorts of students and quasiexperimental methods informed by the…
Nearest neighbor imputation of species-level, plot-scale forest structure attributes from LiDAR data
Andrew T. Hudak; Nicholas L. Crookston; Jeffrey S. Evans; David E. Hall; Michael J. Falkowski
2008-01-01
Meaningful relationships between forest structure attributes measured in representative field plots on the ground and remotely sensed data measured comprehensively across the same forested landscape facilitate the production of maps of forest attributes such as basal area (BA) and tree density (TD). Because imputation methods can efficiently predict multiple response...
Ecosystem evapotranspiration: challenges in measurements, estimates, and modeling
Devendra Amatya; S. Irmak; P. Gowda; Ge Sun; J.E. Nettles; K.R. Douglas-Mankin
2016-01-01
Evapotranspiration (ET) processes at the leaf to landscape scales in multiple land uses have important controls and feedbacks for local, regional, and global climate and water resource systems. Innovative methods, tools, and technologies for improved understanding and quantification of ET and crop water use are critical for adapting more effective management strategies...
ERIC Educational Resources Information Center
Davis, Kristin; Franzel, Steven; Hildebrand, Peter; Irani, Tracy; Place, Nick
2004-01-01
Agricultural extension is evolving worldwide, and there is much emphasis today on community-based mechanisms of dissemination in order to bring sustainable change. The goal of this study was to examine the factors that make farmer groups successful in dissemination of information and technologies. A mixed-methods, multiple-stage approach was used…
A national scale survey of 251 chemical contaminants in source and finished drinking water was conducted at 25 drinking water treatment plants across the U.S. To address the necessity of using multiple methods in determining a broad array of CECs, we designed a quality assurance/...
Direct Numerical Simulation of Low Capillary Number Pore Scale Flows
NASA Astrophysics Data System (ADS)
Esmaeilzadeh, S.; Soulaine, C.; Tchelepi, H.
2017-12-01
The arrangement of void spaces and the granular structure of a porous medium determine multiple macroscopic properties of the rock, such as porosity, capillary pressure, and relative permeability. Therefore, it is important to study the microscopic structure of reservoir pores and understand the dynamics of fluid displacements through them. One approach for doing this is direct numerical simulation of pore-scale flow, which requires a robust numerical tool for the prediction of fluid dynamics and a detailed understanding of the physical processes occurring at the pore scale. In pore-scale flows with a low capillary number, Eulerian multiphase methods are well known to produce additional vorticity close to the interface. This is mainly due to discretization errors that lead to an imbalance of capillary pressure and surface tension forces, causing unphysical spurious currents. At the pore scale, these spurious currents can become significantly stronger than the average velocity in the phases and lead to unphysical displacement of the interface. In this work, we first investigate the capability of the algebraic Volume of Fluid (VOF) method in OpenFOAM for low-capillary-number pore-scale flow simulations. Afterward, we compare the VOF results with a Coupled Level-Set Volume of Fluid (CLSVOF) method and the Iso-Advector method; the former has been shown to reduce the VOF method's unphysical spurious currents in some cases, and both are known to capture interfaces more sharply than VOF. In conclusion, we investigate whether the use of CLSVOF or Iso-Advector leads to smaller spurious velocities and more accurate results for capillary-driven pore-scale multiphase flows. Keywords: pore-scale multiphase flow, capillary-driven flows, spurious currents, OpenFOAM
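For orientation, the capillary number referred to throughout is the ratio of viscous to interfacial forces,

    \mathrm{Ca} = \frac{\mu\, u}{\sigma},

with dynamic viscosity μ, characteristic velocity u and interfacial tension σ. Displacements with Ca ≪ 1 are capillary-dominated, which is precisely the regime in which spurious currents from a discretized surface-tension force are most damaging relative to the physical velocities.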
Intercomparison of 3D pore-scale flow and solute transport simulation methods
Mehmani, Yashar; Schoenherr, Martin; Pasquali, Andrea; ...
2015-09-28
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include 1) methods that explicitly model the three-dimensional geometry of pore spaces and 2) methods that conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of the first type, using computational fluid dynamics (CFD) codes employing a standard finite volume method (FVM), against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of the first type based on the lattice Boltzmann method (LBM) and smoothed particle hydrodynamics (SPH), as well as a model of the second type, a pore-network model (PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries in the manner of PNMs on solute transport has not been fully determined. We apply all four approaches (FVM-based CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and (for capable codes) nonreactive solute transport, and intercompare the model results. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations). Generally good agreement was achieved among the various approaches, but some differences were observed depending on the model context. The intercomparison work was challenging because of the variable capabilities of the codes, and inspired some code enhancements to allow consistent comparison of flow and transport simulations across the full suite of methods. This paper provides support for confidence in a variety of pore-scale modeling methods and motivates further development and application of pore-scale simulation methods.
Target matching based on multi-view tracking
NASA Astrophysics Data System (ADS)
Liu, Yahui; Zhou, Changsheng
2011-01-01
A feature matching method is proposed based on Maximally Stable Extremal Regions (MSER) and the Scale Invariant Feature Transform (SIFT) to solve the problem of matching the same target across multiple cameras. The target foreground is extracted by applying frame differencing twice, and a bounding box regarded as the target region is calculated. Extremal regions are obtained by MSER. After being fitted with ellipses, these regions are normalized to unit circles and represented with SIFT descriptors. Initial matches are accepted when the ratio of the closest descriptor distance to the second-closest falls below a threshold, and outlier points are eliminated with RANSAC. Experimental results indicate that the method reduces computational complexity effectively and is also robust to affine transformation, rotation, scale and illumination.
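A simplified version of this pipeline can be assembled with OpenCV; the sketch below (Python) applies SIFT detection directly and a homography-based RANSAC in place of the paper's frame-differencing, ellipse-fitting and normalization steps, and the 0.7 ratio threshold is a common default rather than the paper's value.

    import cv2
    import numpy as np

    def match_targets(img1, img2, ratio=0.7):
        # SIFT descriptors + Lowe ratio test + RANSAC outlier rejection
        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img1, None)
        kp2, des2 = sift.detectAndCompute(img2, None)
        pairs = cv2.BFMatcher().knnMatch(des1, des2, k=2)
        good = [m for m, n in pairs if m.distance < ratio * n.distance]
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        # fit a homography between the two views, keep only RANSAC inliers
        _, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return [m for m, keep in zip(good, mask.ravel()) if keep]

cv2.MSER_create() provides the extremal-region detector if the full MSER-plus-SIFT variant is wanted.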
Spike-train communities: finding groups of similar spike trains.
Humphries, Mark D
2011-02-09
Identifying similar spike-train patterns is a key element in understanding neural coding and computation. For single neurons, similar spike patterns evoked by stimuli are evidence of common coding. Across multiple neurons, similar spike trains indicate potential cell assemblies. As recording technology advances, so does the urgent need for grouping methods to make sense of large-scale datasets of spike trains. Existing methods require specifying the number of groups in advance, limiting their use in exploratory analyses. I derive a new method from network theory that solves this key difficulty: it self-determines the maximum number of groups in any set of spike trains, and groups them to maximize intragroup similarity. This method brings us revealing new insights into the encoding of aversive stimuli by dopaminergic neurons, and the organization of spontaneous neural activity in cortex. I show that the characteristic pause response of a rat's dopaminergic neuron depends on the state of the superior colliculus: when it is inactive, aversive stimuli invoke a single pattern of dopaminergic neuron spiking; when active, multiple patterns occur, yet the spike timing in each is reliable. In spontaneous multineuron activity from the cortex of anesthetized cat, I show the existence of neural ensembles that evolve in membership and characteristic timescale of organization during global slow oscillations. I validate these findings by showing that the method both is remarkably reliable at detecting known groups and can detect large-scale organization of dynamics in a model of the striatum.
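The core pipeline (pairwise spike-train similarity followed by community detection that chooses its own number of groups) can be sketched in a few lines of Python; the binning width, Gaussian smoothing and modularity-based grouping below are generic stand-ins for the paper's specific choices.

    import numpy as np
    import networkx as nx
    from networkx.algorithms import community
    from scipy.ndimage import gaussian_filter1d

    def spike_train_groups(spike_times, t_max, bin_width=10.0, sigma=2):
        # smooth binned spike counts, build a cosine-similarity graph,
        # then let modularity maximization pick the number of groups
        bins = np.arange(0.0, t_max + bin_width, bin_width)
        rates = np.array([
            gaussian_filter1d(np.histogram(st, bins)[0].astype(float), sigma)
            for st in spike_times])
        unit = rates / (np.linalg.norm(rates, axis=1, keepdims=True) + 1e-12)
        sim = unit @ unit.T
        G = nx.Graph()
        G.add_nodes_from(range(len(spike_times)))
        for i in range(len(spike_times)):
            for j in range(i + 1, len(spike_times)):
                if sim[i, j] > 0:
                    G.add_edge(i, j, weight=float(sim[i, j]))
        return list(community.greedy_modularity_communities(G, weight="weight"))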
Subpixel resolution from multiple images
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Kanefsky, Rob; Stutz, John; Kraft, Richard
1994-01-01
Multiple images taken from similar locations and under similar lighting conditions contain similar, but not identical, information. Slight differences in instrument orientation and position produce mismatches between the projected pixel grids. These mismatches ensure that any point on the ground is sampled differently in each image. If all the images can be registered with respect to each other to a small fraction of a pixel accuracy, then the information from the multiple images can be combined to increase linear resolution by roughly the square root of the number of images. In addition, the gray-scale resolution of the composite image is also improved. We describe methods for multiple image registration and combination, and discuss some of the problems encountered in developing and extending them. We display test results with 8:1 resolution enhancement, and Viking Orbiter imagery with 2:1 and 4:1 enhancements.
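The combination step can be illustrated with a shift-and-add sketch (Python; phase-correlation registration from scikit-image stands in for the registration machinery described above, and the 4x target grid is an arbitrary choice):

    import numpy as np
    from skimage.registration import phase_cross_correlation

    def shift_and_add(images, factor=4):
        # fuse registered low-resolution frames onto a finer grid by binning
        ref = images[0]
        hi_sum = np.zeros((ref.shape[0] * factor, ref.shape[1] * factor))
        hi_cnt = np.zeros_like(hi_sum)
        rows, cols = np.mgrid[0:ref.shape[0], 0:ref.shape[1]]
        for img in images:
            # estimate the sub-pixel shift of this frame against the reference
            shift, _, _ = phase_cross_correlation(ref, img, upsample_factor=100)
            r = np.rint((rows + shift[0]) * factor).astype(int) % hi_sum.shape[0]
            c = np.rint((cols + shift[1]) * factor).astype(int) % hi_sum.shape[1]
            np.add.at(hi_sum, (r, c), img)
            np.add.at(hi_cnt, (r, c), 1)
        return hi_sum / np.maximum(hi_cnt, 1)  # cells nobody hit stay zero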
Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu
2018-09-01
The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for human visual systems. Furthermore, the proposed method can enhance the performance of light-field applications such as depth estimation.
Verification of watershed vegetation restoration policies, arid China
Zhang, Chengqi; Li, Yu
2016-01-01
Verification of restoration policies that have been implemented is of significance for simultaneously reducing global environmental risks while also meeting economic development goals. This paper proposes a novel method based on the idea of multiple time scales to verify ecological restoration policies in the Shiyang River drainage basin, arid China. We integrated modern pollen transport characteristics of the entire basin and pollen records from 8 Holocene sedimentary sections, and quantitatively reconstructed the millennial-scale changes of watershed vegetation zones by defining a new pollen-precipitation index. Meanwhile, the Empirical Orthogonal Function method was used to quantitatively analyze spatial and temporal variations of the Normalized Difference Vegetation Index in summer (June to August) of 2000-2014. By contrasting the vegetation changes that were mainly controlled by millennial-scale natural ecological evolution with those under modern ecological restoration measures, we found that vegetation changes of the entire Shiyang River drainage basin are synchronous on both time scales, and that the current ecological restoration policies meet the requirements of long-term restoration objectives and show promising early results for ecological environmental restoration. Our findings present an innovative method to verify river ecological restoration policies, and also provide a scientific basis for proposing future priorities of ecological restoration strategies. PMID:27470948
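Empirical Orthogonal Function analysis, applied above to the summer NDVI series, is a principal-component decomposition of the space-time data matrix; a minimal, generic sketch (Python/NumPy, variable names ours):

    import numpy as np

    def eof_analysis(field, n_modes=3):
        # field: time x space matrix, e.g. one summer NDVI value per pixel per year
        anomalies = field - field.mean(axis=0)         # remove the temporal mean
        U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
        eofs = Vt[:n_modes]                            # spatial patterns (EOFs)
        pcs = U[:, :n_modes] * s[:n_modes]             # time series of each mode
        explained = s[:n_modes] ** 2 / (s ** 2).sum()  # fraction of variance
        return eofs, pcs, explained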
Quinn, Gillian; Comber, Laura; Galvin, Rose; Coote, Susan
2018-05-01
To determine the ability of clinical measures of balance to distinguish fallers from non-fallers and to determine their predictive validity in identifying those at risk of falls. AMED, CINAHL, Medline, Scopus, PubMed Central and Google Scholar. First search: July 2015. Final search: October 2017. Inclusion criteria were studies of adults with a definite multiple sclerosis diagnosis, a clinical balance assessment and a method of falls recording. Data were extracted independently by two reviewers. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 scale and the modified Newcastle-Ottawa Quality Assessment Scale. Statistical analysis was conducted for the cross-sectional studies using Review Manager 5. The mean difference with 95% confidence interval in balance outcomes between fallers and non-fallers was used as the mode of analysis. We included 33 studies (19 cross-sectional, 5 randomised controlled trials, 9 prospective) with a total of 3901 participants, of whom 1917 (49%) were classified as fallers. The balance measures most commonly reported were the Berg Balance Scale, Timed Up and Go and Falls Efficacy Scale International. Meta-analysis demonstrated that fallers perform significantly worse than non-fallers on all measures analysed except the Timed Up and Go Cognitive (p < 0.05), but the discriminative ability of the measures is commonly not reported. Of those reported, the Activities-specific Balance Confidence Scale had the highest area under the receiver operating characteristic curve value (0.92), but without corresponding measures of clinical utility. Clinical measures of balance differ significantly between fallers and non-fallers but have poor predictive ability for falls risk in people with multiple sclerosis.
Disturbance patterns in a socio-ecological system at multiple scales
G. Zurlini; Kurt H. Riitters; N. Zaccarelli; I. Petrosillo; K.B. Jones; L. Rossi
2006-01-01
Ecological systems with hierarchical organization and non-equilibrium dynamics require multiple-scale analyses to comprehend how a system is structured and to formulate hypotheses about regulatory mechanisms. Characteristic scales in real landscapes are determined by, or at least reflect, the spatial patterns and scales of constraining human interactions with the...
A Comparison of Item-Level and Scale-Level Multiple Imputation for Questionnaire Batteries
ERIC Educational Resources Information Center
Gottschall, Amanda C.; West, Stephen G.; Enders, Craig K.
2012-01-01
Behavioral science researchers routinely use scale scores that sum or average a set of questionnaire items to address their substantive questions. A researcher applying multiple imputation to incomplete questionnaire data can either impute the incomplete items prior to computing scale scores or impute the scale scores directly from other scale…
A novel statistical method for quantitative comparison of multiple ChIP-seq datasets.
Chen, Li; Wang, Chi; Qin, Zhaohui S; Wu, Hao
2015-06-15
ChIP-seq is a powerful technology to measure protein binding or histone modification strength on a whole-genome scale. Although a number of methods are available for single ChIP-seq data analysis (e.g. 'peak detection'), a rigorous statistical method for quantitative comparison of multiple ChIP-seq datasets that considers data from control experiments, signal-to-noise ratios, biological variation and multiple-factor experimental designs is underdeveloped. In this work, we develop a statistical method to perform quantitative comparison of multiple ChIP-seq datasets and detect genomic regions showing differential protein binding or histone modification. We first detect peaks from all datasets and then take their union to form a single set of candidate regions. The read counts from the IP experiment at the candidate regions are assumed to follow a Poisson distribution. The underlying Poisson rates are modeled as an experiment-specific function of artifacts and biological signals. We then obtain the estimated biological signals and compare them through a hypothesis testing procedure in a linear model framework. Simulations and real data analyses demonstrate that the proposed method provides more accurate and robust results than existing ones. An R software package, ChIPComp, is freely available at http://web1.sph.emory.edu/users/hwu30/software/ChIPComp.html.
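The testing step amounts, in schematic form, to a Poisson regression of IP read counts on condition with normalization handled as an offset; the toy example below (Python/statsmodels, with invented counts) is our simplification and omits the control-read and signal-to-noise adjustments that ChIPComp itself makes.

    import numpy as np
    import statsmodels.api as sm

    # IP read counts in one candidate region, three replicates per condition
    counts = np.array([85, 92, 78, 150, 141, 163])
    condition = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
    libsize = np.array([20e6, 22e6, 19e6, 21e6, 20e6, 23e6])

    design = sm.add_constant(condition)
    fit = sm.GLM(counts, design, family=sm.families.Poisson(),
                 offset=np.log(libsize)).fit()  # sequencing depth as an offset
    print(fit.params[1], fit.pvalues[1])        # log fold-change and its p-value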
Combining states without scale hierarchies with ordered parton showers
Fischer, Nadine; Prestel, Stefan
2017-09-12
Here, we present a parameter-free scheme to combine fixed-order multi-jet results with parton-shower evolution. The scheme produces jet cross sections with leading-order accuracy in the complete phase space of multiple emissions, resumming large logarithms when appropriate, while not arbitrarily enforcing ordering on momentum configurations beyond the reach of the parton-shower evolution equation. This then requires the development of a matrix-element correction scheme for complex phase spaces including ordering conditions as well as a systematic scale-setting procedure for unordered phase-space points. Our algorithm does not require a merging-scale parameter. We implement the new method in the Vincia framework and compare to LHC data.
NASA Astrophysics Data System (ADS)
Lu, Hua; Yue, Zengqi; Zhao, Jianlin
2018-05-01
We propose and investigate a new kind of bandpass filter based on the plasmonically induced transparency (PIT) effect in a special metal-insulator-metal (MIM) waveguide system. Finite element method (FEM) simulations illustrate that an obvious PIT response can be generated in the metallic nanostructure with the stub and coupled cavities. The lineshape and position of the PIT peak depend particularly on the lengths of the stub and coupled cavities, the waveguide width, and the coupling distance between the stub and coupled cavities. The numerical simulations are in accordance with the results obtained by temporal coupled-mode theory. A multi-peak PIT effect can be achieved by integrating multiple coupled cavities into the plasmonic waveguide. This PIT response enables the flexible realization of chip-scale multi-channel bandpass filters, which could find crucial applications in highly integrated optical circuits for signal processing.
Optimize of shrink process with X-Y CD bias on hole pattern
NASA Astrophysics Data System (ADS)
Koike, Kyohei; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Oyama, Kenichi; Yaegashi, Hidetami
2017-03-01
Gridded design rules [1] are a major approach to configuring logic circuits with 193-nm immersion lithography. In scaled grid patterning, line-and-space patterns on the order of 10 nm can be made using multiple patterning techniques such as self-aligned multiple patterning (SAMP) and litho-etch-litho-etch (LELE) [2][3][4]. On the other hand, the line-cut process suffers from several error sources at decreasing scale, such as pattern defects, placement error, roughness and X-Y CD bias. We tried to cure hole-pattern roughness using additional processes such as line smoothing [5]. Each smoothing process showed a different effect. As a result, the CDx shrink amount was smaller than the CDy shrink amount without an additional process. In this paper, we report a comparison of pattern controllability between EUV and 193-nm immersion lithography, and we discuss the optimum method for controlling X-Y CD bias on hole patterns.
Kim, Jong-Ahn; Kim, Jae Wan; Kang, Chu-Shik; Jin, Jonghan; Eom, Tae Bong
2011-11-01
We present an angle generator with high resolution and accuracy, which uses multiple ultrasonic motors and a self-calibratable encoder. A cylindrical air bearing guides the rotational motion, and the ultrasonic motors achieve high resolution over the full circle range with a simple configuration. The self-calibratable encoder can effectively compensate the scale error of the divided circle (signal period: 20") by applying the equal-division-averaged method. The angle generator configures a position feedback control loop using the readout of the encoder. By combining the ac and dc operation modes, the angle generator produced stepwise angular motion with 0.005" resolution. We also evaluated the performance of the angle generator using a precision angle encoder and an autocollimator. The expanded uncertainty (k = 2) of the angle generation was estimated to be less than 0.03", which includes the calibrated scale error and the nonlinearity error.
Geovisualization of Local and Regional Migration Using Web-mined Demographics
NASA Astrophysics Data System (ADS)
Schuermann, R. T.; Chow, T. E.
2014-11-01
The intent of this research was to augment and facilitate analyses that gauge the feasibility of web-mined demographics for studying the spatio-temporal dynamics of migration. As a case study, we explored the spatio-temporal dynamics of Vietnamese Americans (VA) in Texas through geovisualization of demographic microdata mined from the World Wide Web. Based on string matching across all demographic attributes, including full name, address, date of birth, age and phone number, multiple records of the same entity (i.e. person) over time were resolved and reconciled into a database. Migration trajectories were geovisualized through animated sprites by connecting the different addresses associated with the same person and segmenting each trajectory into small fragments. Intra-metropolitan migration patterns appeared at the local scale within many metropolitan areas. At the scale of the metropolitan area, varying degrees of immigration and emigration manifested different types of migration clusters. This paper presents a methodology incorporating GIS methods and cartographic design to produce geovisualization animations, enabling the cognitive identification of migration patterns at multiple scales. Identification of spatio-temporal patterns often stimulates further research to better understand the phenomenon and enhance subsequent modeling.
Muñoz-Lasa, Susana; López de Silanes, Carlos; Atín-Arratibel, M Ángeles; Bravo-Llatas, Carmen; Pastor-Jimeno, Salvador; Máximo-Bocanegra, Nuria
2018-04-19
Hippotherapy is being used as a promising method in the physical treatment of multiple sclerosis (MS). Comparative open clinical pre-post study of a hippotherapy intervention over a 6-month period in patients with MS (n=6), not randomised and with a control group (n=4). The study was performed by the MHG Foundation. A statistically significant improvement was observed in the therapy group in pre-post spasticity measured by the modified Ashworth scale (P=.01). Statistically significant improvements were also found in fatigue impact (P<.0001) measured with the FIS; in general perception of health outcome on the urinary quality of life scale KHQ (P=.033); and in subscales 2, 3 and 4 of the MSQOL-54 (P=.011). The control group showed no improvement on any scale. This study reinforces the current literature that supports hippotherapy as an adequate intervention for MS patients. Further studies with more participants, control groups and blinded designs would be logical steps for future research in this field.
Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions.
Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z; Gao, Xin
2017-01-01
Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model the large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric spline which is more efficient compared to existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective to two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.
3D plasmonic nanoantennas integrated with MEA biosensors
NASA Astrophysics Data System (ADS)
Dipalo, Michele; Messina, Gabriele C.; Amin, Hayder; La Rocca, Rosanna; Shalabaeva, Victoria; Simi, Alessandro; Maccione, Alessandro; Zilio, Pierfrancesco; Berdondini, Luca; de Angelis, Francesco
2015-02-01
Neuronal signaling in brain circuits occurs at multiple scales ranging from molecules and cells to large neuronal assemblies. However, current sensing neurotechnologies are not designed for parallel access of signals at multiple scales. With the aim of combining nanoscale molecular sensing with electrical neural activity recordings within large neuronal assemblies, in this work three-dimensional (3D) plasmonic nanoantennas are integrated with multielectrode arrays (MEA). Nanoantennas are fabricated by fast ion beam milling on optical resist; gold is deposited on the nanoantennas in order to connect them electrically to the MEA microelectrodes and to obtain plasmonic behavior. The optical properties of these 3D nanostructures are studied through finite elements method (FEM) simulations that show a high electromagnetic field enhancement. This plasmonic enhancement is confirmed by surface enhancement Raman spectroscopy of a dye performed in liquid, which presents an enhancement of almost 100 times the incident field amplitude at resonant excitation. Finally, the reported MEA devices are tested on cultured rat hippocampal neurons. Neurons develop by extending branches on the nanostructured electrodes and extracellular action potentials are recorded over multiple days in vitro. Raman spectra of living neurons cultured on the nanoantennas are also acquired. These results highlight that these nanostructures could be potential candidates for combining electrophysiological measures of large networks with simultaneous spectroscopic investigations at the molecular level.
The art and science of hyperbolic tessellations.
Van Dusen, B; Taylor, R P
2013-04-01
The visual impact of hyperbolic tessellations has captured artists' imaginations ever since M.C. Escher generated his Circle Limit series in the 1950s. The scaling properties generated by hyperbolic geometry are different to the fractal scaling properties found in nature's scenery. Consequently, prevalent interpretations of Escher's art emphasize the lack of connection with nature's patterns. However, a recent collaboration between the two authors proposed that Escher's motivation for using hyperbolic geometry was as a method to deliberately distort nature's rules. Inspired by this hypothesis, this year's cover artist, Ben Van Dusen, embeds natural fractals such as trees, clouds and lightning into a hyperbolic scaling grid. The resulting interplay of visual structure at multiple size scales suggests that hybridizations of fractal and hyperbolic geometries provide a rich compositional tool for artists.
A trust-based recommendation method using network diffusion processes
NASA Astrophysics Data System (ADS)
Chen, Ling-Jiao; Gao, Jian
2018-09-01
A variety of rating-based recommendation methods have been extensively studied, including the well-known collaborative filtering approaches and some network diffusion-based methods; however, social trust relations are not sufficiently considered when making recommendations. In this paper, we contribute to the literature by proposing a trust-based recommendation method, named CosRA+T, which integrates the information of trust relations into the resource-redistribution process. Specifically, a tunable parameter is used to scale the resources received by trusted users before the redistribution back to the objects. Interestingly, we find an optimal scaling parameter for the proposed CosRA+T method to achieve its best recommendation accuracy, and the optimal value seems to be universal under several evaluation metrics across different datasets. Moreover, results of extensive experiments on two real-world rating datasets with trust relations, Epinions and FriendFeed, suggest that CosRA+T yields a remarkable improvement in overall accuracy, diversity and novelty. Our work takes a step towards designing better recommendation algorithms by employing multiple resources of social network information.
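The redistribution idea can be sketched on a toy user-object matrix (Python/NumPy); the two-step diffusion below and the lambda-scaling of trusted users follow the abstract's description, but the exact CosRA+T update rule is in the paper, so treat this as an illustrative stand-in.

    import numpy as np

    def trust_diffusion_scores(A, i, trusted, lam=1.5):
        # A: users x objects 0/1 matrix; i: target user; trusted: bool mask
        ku = np.maximum(A.sum(axis=1), 1)        # user degrees
        ko = np.maximum(A.sum(axis=0), 1)        # object degrees
        r = A[i].astype(float)                   # resource starts on i's objects
        on_users = A @ (r / ko)                  # objects spread to their users
        on_users *= np.where(trusted, lam, 1.0)  # boost trusted users' resources
        scores = A.T @ (on_users / ku)           # users spread back to objects
        scores[A[i] > 0] = -np.inf               # skip already-collected objects
        return scores

    A = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 1, 1]])
    print(trust_diffusion_scores(A, 0, trusted=np.array([False, True, False])))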
Clinical application of ICF key codes to evaluate patients with dysphagia following stroke
Dong, Yi; Zhang, Chang-Jie; Shi, Jie; Deng, Jinggui; Lan, Chun-Na
2016-01-01
Abstract This study aimed to identify and evaluate International Classification of Functioning (ICF) key codes for dysphagia in stroke patients. Thirty patients with dysphagia after stroke were enrolled in our study. To evaluate the ICF dysphagia scale, 6 scales were used as comparisons, namely the Barthel Index (BI), Repetitive Saliva Swallowing Test (RSST), Kubota Water Swallowing Test (KWST), Frenchay Dysarthria Assessment, Mini-Mental State Examination (MMSE), and Montreal Cognitive Assessment (MoCA). Multiple regression analysis was performed to quantify the relationship between the ICF scale and the comparison scales. In addition, 60 ICF codes were analyzed by the least absolute shrinkage and selection operator (LASSO) method. A total of 21 ICF codes were identified that were closely related to the other scales. These included 13 codes from Body Function, 1 from Body Structure, 3 from Activities and Participation, and 4 from Environmental Factors. A topographic network map with 30 ICF key codes was also generated to visualize their relationships. The number of ICF codes identified is in line with other well-established evaluation methods. The network topographic map generated here could be used as an instruction tool in future evaluations. We also found that attention functions and biting were critical codes on these scales and could be used as treatment targets. PMID:27661012
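The LASSO selection step used to pick key codes can be illustrated with scikit-learn; the sketch below is generic (simulated patients, codes and outcome), not the study's data.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(1)
    X = rng.integers(0, 5, size=(30, 60)).astype(float)  # 30 patients x 60 ICF codes
    y = 2.0 * X[:, 3] - X[:, 17] + rng.normal(size=30)   # outcome driven by 2 codes

    model = LassoCV(cv=5).fit(X, y)          # penalty strength chosen by CV
    print(np.flatnonzero(model.coef_))       # indices of the selected key codes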
Veauthier, Christian
2013-01-01
Background: The Fatigue Severity Scale (FSS) is widely used to assess fatigue, not only in the context of multiple sclerosis-related fatigue, but also in many other medical conditions. Some polysomnographic studies have shown high FSS values in sleep-disordered patients without multiple sclerosis. The Modified Fatigue Impact Scale (MFIS) has increasingly been used in order to assess fatigue, but polysomnographic data investigating sleep-disordered patients are thus far unavailable. Moreover, the pathophysiological link between sleep architecture and fatigue measured with the MFIS and the FSS has not been previously investigated. Methods: This was a retrospective observational study (n = 410) with subgroups classified according to sleep diagnosis. The statistical analysis included nonparametric correlation between questionnaire results and polysomnographic data, age and sex, and univariate and multiple logistic regression. Results: The multiple logistic regression showed a significant relationship between FSS/MFIS values and younger age and female sex. Moreover, there was a significant relationship between FSS values and number of arousals and between MFIS values and number of awakenings. Conclusion: Younger age, female sex, and high number of awakenings and arousals are predictive of fatigue in sleep-disordered patients. Further investigations are needed to find the pathophysiological explanation for these relationships. PMID:24109185
Toward a methodical framework for comprehensively assessing forest multifunctionality.
Trogisch, Stefan; Schuldt, Andreas; Bauhus, Jürgen; Blum, Juliet A; Both, Sabine; Buscot, François; Castro-Izaguirre, Nadia; Chesters, Douglas; Durka, Walter; Eichenberg, David; Erfmeier, Alexandra; Fischer, Markus; Geißler, Christian; Germany, Markus S; Goebes, Philipp; Gutknecht, Jessica; Hahn, Christoph Zacharias; Haider, Sylvia; Härdtle, Werner; He, Jin-Sheng; Hector, Andy; Hönig, Lydia; Huang, Yuanyuan; Klein, Alexandra-Maria; Kühn, Peter; Kunz, Matthias; Leppert, Katrin N; Li, Ying; Liu, Xiaojuan; Niklaus, Pascal A; Pei, Zhiqin; Pietsch, Katherina A; Prinz, Ricarda; Proß, Tobias; Scherer-Lorenzen, Michael; Schmidt, Karsten; Scholten, Thomas; Seitz, Steffen; Song, Zhengshan; Staab, Michael; von Oheimb, Goddert; Weißbecker, Christina; Welk, Erik; Wirth, Christian; Wubet, Tesfaye; Yang, Bo; Yang, Xuefei; Zhu, Chao-Dong; Schmid, Bernhard; Ma, Keping; Bruelheide, Helge
2017-12-01
Biodiversity-ecosystem functioning (BEF) research has extended its scope from communities that are short-lived or reshape their structure annually to structurally complex forest ecosystems. The establishment of tree diversity experiments poses specific methodological challenges for assessing the multiple functions provided by forest ecosystems. In particular, methodological inconsistencies and nonstandardized protocols impede the analysis of multifunctionality within, and comparability across, the increasing number of tree diversity experiments. By providing an overview on key methods currently applied in one of the largest forest biodiversity experiments, we show how methods differing in scale and simplicity can be combined to retrieve consistent data allowing novel insights into forest ecosystem functioning. Furthermore, we discuss and develop recommendations for the integration and transferability of diverse methodical approaches to present and future forest biodiversity experiments. We identified four principles that should guide basic decisions concerning method selection for tree diversity experiments and forest BEF research: (1) method selection should be directed toward maximizing data density to increase the number of measured variables in each plot. (2) Methods should cover all relevant scales of the experiment to consider scale dependencies of biodiversity effects. (3) The same variable should be evaluated with the same method across space and time for adequate larger-scale and longer-time data analysis and to reduce errors due to changing measurement protocols. (4) Standardized, practical and rapid methods for assessing biodiversity and ecosystem functions should be promoted to increase comparability among forest BEF experiments. We demonstrate that currently available methods provide us with a sophisticated toolbox to improve a synergistic understanding of forest multifunctionality. However, these methods require further adjustment to the specific requirements of structurally complex and long-lived forest ecosystems. By applying methods connecting relevant scales, trophic levels, and above- and belowground ecosystem compartments, knowledge gain from large tree diversity experiments can be optimized.
Aquatic ecosystem protection and restoration: Advances in methods for assessment and evaluation
Bain, M.B.; Harig, A.L.; Loucks, D.P.; Goforth, R.R.; Mills, K.E.
2000-01-01
Many methods and criteria are available to assess aquatic ecosystems, and this review focuses on a set that demonstrates advancements from community analyses to methods spanning large spatial and temporal scales. Basic methods have been extended by incorporating taxa sensitivity to different forms of stress, adding measures linked to system function, synthesizing multiple faunal groups, integrating biological and physical attributes, spanning large spatial scales, and enabling simulations through time. These tools can be customized to meet the needs of a particular assessment and ecosystem. Two case studies are presented to show how new methods were applied at the ecosystem scale for achieving practical management goals. One case used an assessment of biotic structure to demonstrate how enhanced river flows can improve habitat conditions and restore a diverse fish fauna reflective of a healthy riverine ecosystem. In the second case, multitaxonomic integrity indicators were successful in distinguishing lake ecosystems that were disturbed, healthy, and in the process of restoration. Most methods strive to address the concept of biological integrity, and assessment effectiveness can often be impeded by the lack of more specific ecosystem management objectives. Scientific and policy explorations are needed to define new ways for designating a healthy system so as to allow specification of precise quality criteria that will promote further development of ecosystem analysis tools.
Multiple shooting shadowing for sensitivity analysis of chaotic dynamical systems
NASA Astrophysics Data System (ADS)
Blonigan, Patrick J.; Wang, Qiqi
2018-02-01
Sensitivity analysis methods are important tools for research and design with simulations. Many important simulations exhibit chaotic dynamics, including scale-resolving turbulent fluid flow simulations. Unfortunately, conventional sensitivity analysis methods are unable to compute useful gradient information for long-time-averaged quantities in chaotic dynamical systems. Sensitivity analysis with least squares shadowing (LSS) can compute useful gradient information for a number of chaotic systems, including simulations of chaotic vortex shedding and homogeneous isotropic turbulence. However, this gradient information comes at a very high computational cost. This paper presents multiple shooting shadowing (MSS), a more computationally efficient shadowing approach than the original LSS approach. Through an analysis of the convergence rate of MSS, it is shown that MSS can have lower memory usage and run time than LSS.
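The abstract does not reproduce the underlying optimisation problem; as a hedged point of reference, the least squares shadowing problem is commonly written with a reference trajectory u_r, design parameter s, and time-dilation term η as

```latex
\min_{u(t),\,\eta(t)} \; \frac{1}{2}\int_{0}^{T} \bigl\|u(t)-u_r(t)\bigr\|^{2} + \alpha^{2}\,\eta(t)^{2}\,\mathrm{d}t
\quad \text{s.t.} \quad \frac{\mathrm{d}u}{\mathrm{d}t} = \bigl(1+\eta(t)\bigr)\,f(u,s).
```

In the multiple-shooting variant, [0, T] is partitioned into shorter segments, the constraint is enforced on each segment separately, and continuity conditions at the segment boundaries couple the resulting subproblems; this schematic is a plausible reading of the approach, not the paper's exact system.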
Intrinsic fluctuations of the proton saturation momentum scale in high multiplicity p+p collisions
McLerran, Larry; Tribedy, Prithwish
2015-11-02
High multiplicity events in p+p collisions are studied using the theory of the Color Glass Condensate. Here, we show that intrinsic fluctuations of the proton saturation momentum scale are needed in addition to the sub-nucleonic color charge fluctuations to explain the very high multiplicity tail of distributions in p+p collisions. It is presumed that the origin of such intrinsic fluctuations is non-perturbative in nature. Classical Yang-Mills simulations using the IP-Glasma model are performed to make quantitative estimations. Furthermore, we find that fluctuations as large as O(1) of the average values of the saturation momentum scale can lead to rare high multiplicity events seen in p+p data at RHIC and LHC energies. Using the available data on multiplicity distributions we try to constrain the distribution of the proton saturation momentum scale and make predictions for the multiplicity distribution in 13 TeV p+p collisions.
Modeling the interaction of biological cells with a solidifying interface
NASA Astrophysics Data System (ADS)
Chang, Anthony; Dantzig, Jonathan A.; Darr, Brian T.; Hubel, Allison
2007-10-01
In this article, we develop a modified level set method for modeling the interaction of particles with a solidifying interface. The dynamic computation of the van der Waals and drag forces between the particles and the solidification front leads to a problem of multiple length scales, which we resolve using adaptive grid techniques. We present a variety of example problems to demonstrate the accuracy and utility of the method. We also use the model to interpret experimental results obtained using directional solidification in a cryomicroscope.
Initial-boundary layer associated with the nonlinear Darcy-Brinkman-Oberbeck-Boussinesq system
NASA Astrophysics Data System (ADS)
Fei, Mingwen; Han, Daozhi; Wang, Xiaoming
2017-01-01
In this paper, we study the vanishing Darcy number limit of the nonlinear Darcy-Brinkman-Oberbeck-Boussinesq system (DBOB). This singular perturbation problem involves singular structures both in time and in space giving rise to initial layers, boundary layers and initial-boundary layers. We construct an approximate solution to the DBOB system by the method of multiple scale expansions. The convergence with optimal convergence rates in certain Sobolev norms is established rigorously via the energy method.
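The expansion is not shown in the abstract; a generic two-scale ansatz of the kind used by the method of multiple scale expansions, written here with the Darcy number ε as the small parameter (an assumption consistent with the vanishing Darcy number limit), is

```latex
u^{\varepsilon}(x,t) \;\sim\; \sum_{n\ge 0} \varepsilon^{n}\, u_{n}(x,\,t,\,\tau), \qquad \tau = t/\varepsilon, \qquad \partial_{t} \;\to\; \partial_{t} + \varepsilon^{-1}\partial_{\tau}.
```

Collecting powers of ε separates the leading-order outer problem from the initial-layer, boundary-layer, and initial-boundary-layer correctors whose convergence the paper establishes.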
Nolen, Matthew S.; Magoulick, Daniel D.; DiStefano, Robert J.; Imhoff, Emily M.; Wagner, Brian K.
2014-01-01
We found that a range of environmental variables were important in predicting crayfish distribution and abundance at multiple spatial scales, and that their importance was species-, response variable-, and scale-dependent. We would encourage others to examine the influence of spatial scale on species distribution and abundance patterns.
Quantifying drivers of wild pig movement across multiple spatial and temporal scales
Kay, Shannon L.; Fischer, Justin W.; Monaghan, Andrew J.; Beasley, James C; Boughton, Raoul; Campbell, Tyler A; Cooper, Susan M; Ditchkoff, Stephen S.; Hartley, Stephen B.; Kilgo, John C; Wisely, Samantha M; Wyckoff, A Christy; Vercauteren, Kurt C.; Pipen, Kim M
2017-01-01
The analytical framework we present can be used to assess movement patterns arising from multiple data sources for a range of species while accounting for spatio-temporal correlations. Our analyses show the magnitude by which reaction norms can change based on the temporal scale of response data, illustrating the importance of appropriately defining temporal scales of both the movement response and covariates depending on the intended implications of research (e.g., predicting effects of movement due to climate change versus planning local-scale management). We argue that consideration of multiple spatial scales within the same framework (rather than comparing across separate studies post-hoc) gives a more accurate quantification of cross-scale spatial effects by appropriately accounting for error correlation.
Knowledge-Based Methods To Train and Optimize Virtual Screening Ensembles
2016-01-01
Ensemble docking can be a successful virtual screening technique that addresses the innate conformational heterogeneity of macromolecular drug targets. Yet, lacking a method to identify a subset of conformational states that effectively segregates active and inactive small molecules, ensemble docking may result in the recommendation of a large number of false positives. Here, three knowledge-based methods that construct structural ensembles for virtual screening are presented. Each method selects ensembles by optimizing an objective function calculated using the receiver operating characteristic (ROC) curve: either the area under the ROC curve (AUC) or a ROC enrichment factor (EF). As the number of receptor conformations, N, becomes large, the methods differ in their asymptotic scaling. Given a set of small molecules with known activities and a collection of target conformations, the most resource intense method is guaranteed to find the optimal ensemble but scales as O(2^N). A recursive approximation to the optimal solution scales as O(N^2), and a more severe approximation leads to a faster method that scales linearly, O(N). The techniques are generally applicable to any system, and we demonstrate their effectiveness on the androgen nuclear hormone receptor (AR), cyclin-dependent kinase 2 (CDK2), and the peroxisome proliferator-activated receptor δ (PPAR-δ) drug targets. Conformations that consisted of a crystal structure and molecular dynamics simulation cluster centroids were used to form AR and CDK2 ensembles. Multiple available crystal structures were used to form PPAR-δ ensembles. For each target, we show that the three methods perform similarly to one another on both the training and test sets. PMID:27097522
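The abstract gives the asymptotic costs but not the procedures themselves; below is a minimal greedy sketch in the spirit of the linear-scaling variant, under two assumptions not stated above: docking scores are lower-is-better, and an ensemble scores each molecule by its best (minimum) score across member conformations.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def greedy_ensemble(scores, labels, k):
    """Greedily grow an ensemble that maximizes ROC AUC (a sketch).

    scores : (n_conformations, n_molecules) docking scores, lower = better
    labels : (n_molecules,) 1 for actives, 0 for decoys
    k      : number of conformations to select
    """
    chosen = []
    best = np.full(scores.shape[1], np.inf)   # current ensemble score
    for _ in range(k):
        # AUC of each candidate ensemble; negate scores so higher = active
        aucs = [-np.inf if i in chosen else
                roc_auc_score(labels, -np.minimum(best, scores[i]))
                for i in range(scores.shape[0])]
        i_star = int(np.argmax(aucs))
        chosen.append(i_star)
        best = np.minimum(best, scores[i_star])
    return chosen
```

Each round costs one AUC evaluation per candidate conformation, so the loop is linear in N for a fixed ensemble size; swapping roc_auc_score for a ROC enrichment factor changes only the objective call.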
Value, Challenges, and Satisfaction of Certification for Multiple Sclerosis Specialists
Halper, June
2014-01-01
Background: Specialist certification among interdisciplinary multiple sclerosis (MS) team members provides formal recognition of a specialized body of knowledge felt to be necessary to provide optimal care to individuals and families living with MS. Multiple sclerosis specialist certification (MS Certified Specialist, or MSCS) first became available in 2004 for MS interdisciplinary team members, but prior to the present study had not been evaluated for its perceived value, challenges, and satisfaction. Methods: A sample consisting of 67 currently certified MS specialists and 20 lapsed-certification MS specialists completed the following instruments: Perceived Value of Certification Tool (PVCT), Perceived Challenges and Barriers to Certification Scale (PCBCS), Overall Satisfaction with Certification Scale, and a demographic data form. Results: Satisfactory reliability was shown for the total scale and four factored subscales of the PVCT and for two of the three factored PCBCS subscales. Currently certified MS specialists perceived significantly greater value and satisfaction than lapsed-certification MS specialists in terms of employer and peer recognition, validation of MS knowledge, and empowering MS patients. Lapsed-certification MS specialists reported increased confidence and caring for MS patients using evidence-based practice. Both currently certified and lapsed-certification groups reported dissatisfaction with MSCS recognition and pay/salary rewards. Conclusions: The results of this study can be used in efforts to encourage initial certification and recertification of interdisciplinary MS team members. PMID:25061432
Liu, Renzhi; Liu, Jing; Zhang, Zhijiao; Borthwick, Alistair; Zhang, Ke
2015-12-02
Over the past half century, a surprising number of major pollution incidents occurred due to tailings dam failures. Most previous studies of such incidents comprised forensic analyses of environmental impacts after a tailings dam failure, with few considering the combined pollution risk before incidents occur at the watershed scale. We therefore propose Watershed-scale Tailings-pond Pollution Risk Analysis (WTPRA), designed for multiple mine tailings ponds, stemming from previous watershed-scale accidental pollution risk assessments. Transferred and combined risk is embedded using risk rankings of multiple routes of the "source-pathway-target" in the WTPRA. The previous approach is modified using multi-criteria analysis, dam failure models, and instantaneous water quality models, which are adapted for application to multiple tailings ponds. The study area covers the basin of Guanting Reservoir (the largest backup drinking water source for Beijing) in Zhangjiakou City, where many mine tailings ponds are located. The resultant map shows that risk is higher downstream of Guanting Reservoir and in its two tributary basins (i.e., Qingshui River and Longyang River). Conversely, risk is lower in the midstream and upstream reaches. The analysis also indicates that the most hazardous mine tailings ponds are located in Chongli and Xuanhua, and that Guanting Reservoir is the most vulnerable receptor. Sensitivity and uncertainty analyses are performed to validate the robustness of the WTPRA method. PMID:26633450
Optimization of OT-MACH Filter Generation for Target Recognition
NASA Technical Reports Server (NTRS)
Johnson, Oliver C.; Edens, Weston; Lu, Thomas T.; Chao, Tien-Hsin
2009-01-01
An automatic Optimum Trade-off Maximum Average Correlation Height (OT-MACH) filter generator for use in a gray-scale optical correlator (GOC) has been developed for improved target detection at JPL. While the OT-MACH filter has been shown to be an optimal filter for target detection, actually solving for the optimum is too computationally intensive for multiple targets. Instead, an adaptive-step gradient descent method was tested to iteratively optimize the three OT-MACH parameters, alpha, beta, and gamma. The feedback for the gradient descent method was a composite of the performance measures, correlation peak height and peak-to-sidelobe ratio. The automated method generated and tested multiple filters in order to approach the optimal filter more quickly and reliably than the current manual method. Initial usage and testing have shown preliminary success at finding an approximation of the optimal filter in terms of alpha, beta, and gamma values. This corresponded to a substantial improvement in detection performance, where the true positive rate increased for the same average false positives per image.
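Neither the composite feedback nor the step-adaptation rule is specified above, so this is only a sketch: a finite-difference, adaptive-step hill climb over (alpha, beta, gamma), where `performance` is a hypothetical stand-in for the combined correlation-peak-height and peak-to-sidelobe measure.

```python
import numpy as np

def optimize_filter_params(performance, p0, step=0.1, grow=1.2,
                           shrink=0.5, eps=1e-3, iters=100):
    """Adaptive-step gradient ascent on a black-box metric (a sketch).

    performance : callable mapping np.array([alpha, beta, gamma]) -> scalar
    p0          : initial parameter vector
    """
    p = np.asarray(p0, dtype=float)
    f = performance(p)
    for _ in range(iters):
        # forward-difference estimate of the gradient
        g = np.array([(performance(p + eps * e) - f) / eps
                      for e in np.eye(len(p))])
        p_new = p + step * g
        f_new = performance(p_new)
        if f_new > f:                 # accept the move and grow the step
            p, f, step = p_new, f_new, step * grow
        else:                         # reject the move and shrink the step
            step *= shrink
            if step < 1e-6:
                break
    return p, f
```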
Synthetic Minority Oversampling Technique and Fractal Dimension for Identifying Multiple Sclerosis
NASA Astrophysics Data System (ADS)
Zhang, Yu-Dong; Zhang, Yin; Phillips, Preetha; Dong, Zhengchao; Wang, Shuihua
Multiple sclerosis (MS) is a severe brain disease, and early detection can provide timely treatment. Fractal dimension provides a statistical index of how patterns change with scale in a given brain image. In this study, our team used the susceptibility weighted imaging technique to obtain 676 MS slices and 880 healthy slices. We used the synthetic minority oversampling technique to process the unbalanced dataset. Then, we used the Canny edge detector to extract distinguishing edges. The Minkowski-Bouligand dimension, a fractal dimension estimation method, was used to extract features from the edges. A single-hidden-layer neural network was used as the classifier. Finally, we proposed a three-segment representation biogeography-based optimization to train the classifier. Our method achieved a sensitivity of 97.78±1.29%, a specificity of 97.82±1.60%, and an accuracy of 97.80±1.40%. The proposed method is superior to seven state-of-the-art methods in terms of sensitivity and accuracy.
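The Minkowski-Bouligand (box-counting) dimension is a standard computation; the sketch below shows the feature-extraction step on a binary Canny edge map (the helper name and the power-of-two cropping are illustrative choices, not the paper's exact pipeline).

```python
import numpy as np

def box_counting_dimension(edges):
    """Minkowski-Bouligand dimension of a binary edge map (a sketch).

    edges : 2D boolean array, e.g. the output of a Canny edge detector.
    """
    n = 2 ** int(np.floor(np.log2(min(edges.shape))))
    img = edges[:n, :n]                    # crop to a power-of-two square
    sizes = 2 ** np.arange(int(np.log2(n)) - 1, 0, -1)   # box side lengths
    counts = []
    for s in sizes:
        # count boxes of side s containing at least one edge pixel
        boxes = img.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # slope of log N(s) versus log(1/s) estimates the dimension
    return np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)[0]
```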
Scaling and modeling of turbulent suspension flows
NASA Technical Reports Server (NTRS)
Chen, C. P.
1989-01-01
Scaling factors determining various aspects of particle-fluid interactions and the development of physical models to predict gas-solid turbulent suspension flow fields are discussed based on two-fluid, continua formulation. The modes of particle-fluid interactions are discussed based on the length and time scale ratio, which depends on the properties of the particles and the characteristics of the flow turbulence. For particle size smaller than or comparable with the Kolmogorov length scale and concentration low enough for neglecting direct particle-particle interaction, scaling rules can be established in various parameter ranges. The various particle-fluid interactions give rise to additional mechanisms which affect the fluid mechanics of the conveying gas phase. These extra mechanisms are incorporated into a turbulence modeling method based on the scaling rules. A multiple-scale two-phase turbulence model is developed, which gives reasonable predictions for dilute suspension flow. Much work still needs to be done to account for the poly-dispersed effects and the extension to dense suspension flows.
Bush, Steffani; Gappmaier, Eduard
2016-01-01
Background: Fatigue is a common symptom in people with multiple sclerosis (MS), but its associations with disability, functional mobility, depression, and quality of life (QOL) remain unclear. We aimed to determine the associations between different levels of fatigue and disability, functional mobility, depression, and physical and mental QOL in people with MS. Methods: Eighty-nine individuals with MS (mean [SD] disease duration = 13.6 [9.8] years, mean [SD] Expanded Disability Status Scale [EDSS] score = 5.3 [1.5]) and no concurrent relapses were retrospectively analyzed. Participants were divided into two groups based on five-item Modified Fatigue Impact Scale (MFIS-5) scores: group LF (n = 32, MFIS-5 score ≤10 [low levels of fatigue]) and group HF (n = 57, MFIS-5 score >10 [high levels of fatigue]). Results: Sixty-four percent of the sample reported high levels of fatigue. Compared with group LF, group HF demonstrated significantly (P < .05) greater impairments in the Timed Up and Go test, Activities-specific Balance Confidence scale, and 12-item Multiple Sclerosis Walking Scale scores; depression; and QOL but not in the EDSS scores, which were not significantly different between groups. Conclusions: Fatigue was found to be a predominant symptom in the study participants. Individuals reporting higher levels of fatigue concomitantly exhibited greater impairments in functional mobility, depression, and physical and mental QOL. Disability was not found to be related to level of fatigue. These findings can be important for appropriate assessment and management of individuals with MS with fatigue. PMID:27134580
Aarons, Gregory A.; Fettes, Danielle; Hurlburt, Michael; Palinkas, Lawrence; Gunderson, Lara; Willging, Cathleen; Chaffin, Mark
2014-01-01
Objective Implementation and scale-up of evidence-based practices (EBPs) is often portrayed as involving multiple stakeholders collaborating harmoniously in the service of a shared vision. In practice, however, collaboration is a more complex process that may involve shared and competing interests and agendas, and negotiation. The present study examined the scale-up of an EBP across an entire service system using the Interagency Collaborative Team (ICT) approach. Methods Participants were key stakeholders in a large-scale county-wide implementation of an EBP to reduce child neglect, SafeCare®. Semi-structured interviews and/or focus groups were conducted with 54 individuals representing diverse constituents in the service system, followed by an iterative approach to coding and analysis of transcripts. The study was conceptualized using the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework. Results Although community stakeholders eventually coalesced around implementation of SafeCare, several challenges affected the implementation process. These challenges included differing organizational cultures, strategies, and approaches to collaboration, competing priorities across levels of leadership, power struggles, and role ambiguity. Each of the factors identified influenced how stakeholders approached the EBP implementation process. Conclusions System wide scale-up of EBPs involves multiple stakeholders operating in a nexus of differing agendas, priorities, leadership styles, and negotiation strategies. The term collaboration may oversimplify the multifaceted nature of the scale-up process. Implementation efforts should openly acknowledge and consider this nexus when individual stakeholders and organizations enter into EBP implementation through collaborative processes. PMID:24611580
Multi-scale functional mapping of tidal marsh vegetation for restoration monitoring
NASA Astrophysics Data System (ADS)
Tuxen Bettman, Karin
2007-12-01
Nearly half of the world's natural wetlands have been destroyed or degraded, and in recent years, there have been significant endeavors to restore wetland habitat throughout the world. Detailed mapping of restoring wetlands can offer valuable information about changes in vegetation and geomorphology, which can inform the restoration process and ultimately help to improve chances of restoration success. I studied six tidal marshes in the San Francisco Estuary, CA, US, between 2003 and 2004 in order to develop techniques for mapping tidal marshes at multiple scales by incorporating specific restoration objectives for improved longer term monitoring. I explored a "pixel-based" remote sensing image analysis method for mapping vegetation in restored and natural tidal marshes, describing the benefits and limitations of this type of approach (Chapter 2). I also performed a multi-scale analysis of vegetation pattern metrics for a recently restored tidal marsh in order to target the metrics that are consistent across scales and will be robust measures of marsh vegetation change (Chapter 3). Finally, I performed an "object-based" image analysis using the same remotely sensed imagery, which maps vegetation type and specific wetland functions at multiple scales (Chapter 4). The combined results of my work highlight important trends and management implications for monitoring wetland restoration using remote sensing, and will better enable restoration ecologists to use remote sensing for tidal marsh monitoring. Several findings important for tidal marsh restoration monitoring were made. Overall results showed that pixel-based methods are effective at quantifying landscape changes in composition and diversity in recently restored marshes, but are limited in their use for quantifying smaller, more fine-scale changes. While pattern metrics can highlight small but important changes in vegetation composition and configuration across years, scientists should exercise caution when using metrics in their studies or to validate restoration management decisions, and multi-scale analyses should be performed before metrics are used in restoration science for important management decisions. Lastly, restoration objectives, ecosystem function, and scale can each be integrated into monitoring techniques using remote sensing for improved restoration monitoring.
NASA Astrophysics Data System (ADS)
Hamshaw, S. D.; Dewoolkar, M. M.; Rizzo, D.; ONeil-Dunne, J.; Frolik, J.
2016-12-01
Measurement of rates and extent of streambank erosion along river corridors is an important component of many catchment studies and necessary for engineering projects such as river restoration, hazard assessment, and total maximum daily load (TMDL) development. A variety of methods have been developed to quantify streambank erosion, including bank pins, ground surveys, photogrammetry, LiDAR, and analytical models. However, these methods are not only resource intensive, but many are feasible and appropriate only for site-specific studies and not practical for erosion estimates at larger scales. Recent advancements in unmanned aircraft systems (UAS) and photogrammetry software provide capabilities for more rapid and economical quantification of streambank erosion and deposition at multiple scales (from site-specific to river network). At the site-specific scale, the capability of UAS to quantify streambank erosion was compared to terrestrial laser scanning (TLS) and RTK-GPS ground survey and assessed at seven streambank monitoring sites in central Vermont. Across all sites, the UAS-derived bank topography had mean errors of 0.21 m compared to TLS and GPS data. The highest accuracies were achieved in early spring conditions, where mean errors approached 10 cm. The cross-sectional area of bank erosion at a typical vegetated streambank site was found to be reliably calculated within 10% of actual for erosion areas greater than 3.5 m². At the river-network scale, 20 km of river corridor along the New Haven, Winooski, and Mad Rivers was flown on multiple dates with UAS and used to generate digital elevation models (DEMs) that were then compared for change detection analysis. Airborne LiDAR data collected prior to the UAS surveys were also compared to UAS data to determine multi-year rates of bank erosion. UAS-based photogrammetry for generation of fine-scale topographic data shows promise for the monitoring of streambank erosion both at the individual site scale and the river-network scale in areas that are not densely covered with vegetation year-round.
On the Interactions Between Planetary and Mesoscale Dynamics in the Oceans
NASA Astrophysics Data System (ADS)
Grooms, I.; Julien, K. A.; Fox-Kemper, B.
2011-12-01
Multiple-scales asymptotic methods are used to investigate the interaction of planetary and mesoscale dynamics in the oceans. We find three regimes. In the first, the slow, large-scale planetary flow sets up a baroclinically unstable background which leads to vigorous mesoscale eddy generation, but the eddy dynamics do not affect the planetary dynamics. In the second, the planetary flow feels the effects of the eddies, but appears to be unable to generate them. The first two regimes rely on horizontally isotropic large-scale dynamics. In the third regime, large-scale anisotropy, as exists for example in the Antarctic Circumpolar Current and in western boundary currents, allows the large-scale dynamics to both generate and respond to mesoscale eddies. We also discuss how the investigation may be brought to bear on the problem of parameterization of unresolved mesoscale dynamics in ocean general circulation models.
Energy Management and Optimization Methods for Grid Energy Storage Systems
Byrne, Raymond H.; Nguyen, Tu A.; Copp, David A.; ...
2017-08-24
Today, the stability of the electric power grid is maintained through real time balancing of generation and demand. Grid scale energy storage systems are increasingly being deployed to provide grid operators the flexibility needed to maintain this balance. Energy storage also imparts resiliency and robustness to the grid infrastructure. Over the last few years, there has been a significant increase in the deployment of large scale energy storage systems. This growth has been driven by improvements in the cost and performance of energy storage technologies and the need to accommodate distributed generation, as well as incentives and government mandates. Energy management systems (EMSs) and optimization methods are required to effectively and safely utilize energy storage as a flexible grid asset that can provide multiple grid services. The EMS needs to be able to accommodate a variety of use cases and regulatory environments. In this paper, we provide a brief history of grid-scale energy storage, an overview of EMS architectures, and a summary of the leading applications for storage. These serve as a foundation for a discussion of EMS optimization methods and design.
Digital relief generation from 3D models
NASA Astrophysics Data System (ADS)
Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian
2016-09-01
It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
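The paper operates on 3D meshes with average and Laplacian smoothing; purely as a simplified analogue, the sketch below applies the same two ideas, unsharp masking followed by nonlinear variable scaling, to a 2D height field sampled from a model (the box filter and the log compression are stand-ins chosen for brevity, not the authors' operators).

```python
import numpy as np

def relief_from_heightfield(h, strength=2.0, k=5.0):
    """Simplified bas-relief generation on a height field (a sketch).

    h        : 2D array of heights sampled from a 3D model
    strength : unsharp-masking gain that enhances surface features
    k        : compression factor; larger values flatten the relief
    """
    # 3x3 box-filter smoothing via edge padding and neighbor averaging
    hp = np.pad(h, 1, mode='edge')
    smooth = sum(hp[i:i + h.shape[0], j:j + h.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    enhanced = h + strength * (h - smooth)   # unsharp masking
    # nonlinear variable scaling compresses the height range to [0, 1]
    e = enhanced - enhanced.min()
    return np.log1p(k * e) / np.log1p(k * max(e.max(), 1e-12))
```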
Guoyi Zhou; Ge Sun; Xu Wang; Chuanyan Zhou; Steven G. McNulty; James M. Vose; Devendra M. Amatya
2008-01-01
It is critical that evapotranspiration (ET) be quantified accurately so that scientists can evaluate the effects of land management and global change on water availability, streamflow, nutrient and sediment loading, and ecosystem productivity in watersheds. The objective of this study was to derive a new semi-empirical ET model using a dimension analysis method that...
Batch effects in single-cell RNA-sequencing data are corrected by matching mutual nearest neighbors.
Haghverdi, Laleh; Lun, Aaron T L; Morgan, Michael D; Marioni, John C
2018-06-01
Large-scale single-cell RNA sequencing (scRNA-seq) data sets that are produced in different laboratories and at different times contain batch effects that may compromise the integration and interpretation of the data. Existing scRNA-seq analysis methods incorrectly assume that the composition of cell populations is either known or identical across batches. We present a strategy for batch correction based on the detection of mutual nearest neighbors (MNNs) in the high-dimensional expression space. Our approach does not rely on predefined or equal population compositions across batches; instead, it requires only that a subset of the population be shared between batches. We demonstrate the superiority of our approach compared with existing methods by using both simulated and real scRNA-seq data sets. Using multiple droplet-based scRNA-seq data sets, we demonstrate that our MNN batch-effect-correction method can be scaled to large numbers of cells.
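As a point of reference, the pair-detection step at the core of the approach can be sketched in a few lines; the published method goes further (computing and smoothing batch-correction vectors from the pairs), and the brute-force distance matrix below is an illustrative shortcut rather than the paper's implementation.

```python
import numpy as np

def mutual_nearest_neighbors(X, Y, k):
    """Find mutual nearest-neighbor cell pairs between two batches.

    X : (n_x, d) expression matrix of batch 1 (cells x genes or PCs)
    Y : (n_y, d) expression matrix of batch 2
    k : neighborhood size
    Returns (i, j) pairs with i indexing X and j indexing Y.
    """
    # brute-force Euclidean distances between all cross-batch cell pairs
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    nn_xy = np.argsort(d, axis=1)[:, :k]      # k nearest Y-cells per X-cell
    nn_yx = np.argsort(d, axis=0)[:k, :].T    # k nearest X-cells per Y-cell
    return [(i, j) for i in range(X.shape[0]) for j in nn_xy[i]
            if i in nn_yx[j]]
```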
Joint classification and contour extraction of large 3D point clouds
NASA Astrophysics Data System (ADS)
Hackel, Timo; Wegner, Jan D.; Schindler, Konrad
2017-08-01
We present an effective and efficient method for point-wise semantic classification and extraction of object contours of large-scale 3D point clouds. What makes point cloud interpretation challenging is the sheer size of several million points per scan and the non-grid, sparse, and uneven distribution of points. Standard image processing tools like texture filters, for example, cannot handle such data efficiently, which calls for dedicated point cloud labeling methods. It turns out that one of the major drivers for efficient computation and handling of strong variations in point density is a careful formulation of per-point neighborhoods at multiple scales. This allows one both to define an expressive feature set and to extract topologically meaningful object contours. Semantic classification and contour extraction are interlaced problems. Point-wise semantic classification enables extracting a meaningful candidate set of contour points, while contours help generate a rich feature representation that benefits point-wise classification. These methods are tailored to have fast run time and a small memory footprint for processing large-scale, unstructured, and inhomogeneous point clouds, while still achieving high classification accuracy. We evaluate our methods on the semantic3d.net benchmark for terrestrial laser scans with >10^9 points.
Multiple Spacecraft Study of the Impact of Turbulence on Reconnection Rates
NASA Technical Reports Server (NTRS)
Wendel, Deirdre; Goldstein, Melvyn; Figueroa-Vinas, Adolfo; Adrian, Mark; Sahraoui, Fouad
2011-01-01
Magnetic turbulence and secondary island formation have reemerged as possible explanations for fast reconnection. Recent three-dimensional simulations reveal the formation of secondary islands that serve to shorten the current sheet and increase the accelerating electric field, while both simulations and observations witness electron holes whose collapse energizes electrons. However, few data studies have explicitly investigated the effect of turbulence and islands on the reconnection rate. We present a more comprehensive analysis of the effect of turbulence and islands on reconnection rates observed in space. Our approach takes advantage of multiple spacecraft to find the location of the spacecraft relative to the inflow and the outflow, to estimate the reconnection electric field, to indicate the presence and size of islands, and to determine wave vectors indicating turbulence. A superposed epoch analysis provides independent estimates of spatial scales and a reconnection electric field. We apply k-filtering and a new method adopted from seismological analyses to identify the wavevectors. From several case studies of reconnection events, we obtain preliminary estimates of the spectral scaling law, identify wave modes, and present a method for finding the reconnection electric field associated with the wave modes.
Wavefield reconstruction inversion with a multiplicative cost function
NASA Astrophysics Data System (ADS)
da Silva, Nuno V.; Yao, Gang
2018-01-01
We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem, minimising the data misfit while penalising with a wave-equation constraining term. The trade-off between the two terms is set by a scaling factor that balances the contributions of the data-misfit term and the constraining term to the value of the objective function. If this parameter is too large, the wave-equation term dominates, imposing a hard constraint on the inversion. If it is too small, the solution is poorly constrained, as the inversion essentially penalises the data misfit without taking into account the physics that explains the data. This paper introduces a new approach to WRI, recasting its formulation into a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function even when the trade-off parameter in the latter is appropriately scaled or adapted throughout the iterations, and when the data are contaminated with Gaussian random noise. This work thus contributes a framework for a more automated application of WRI.
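The two objective functions are described but not printed; schematically, with m the model, u the reconstructed wavefield, P the sampling operator, d the data, A(m) the wave-equation operator, and q the source (notation assumed here, and the paper's exact normalisation may differ), the additive and multiplicative forms read

```latex
J_{\mathrm{add}}(m,u) = \tfrac{1}{2}\,\|Pu-d\|^{2} + \tfrac{\lambda^{2}}{2}\,\|A(m)u-q\|^{2},
\qquad
J_{\mathrm{mult}}(m,u) = \|Pu-d\|^{2}\cdot\|A(m)u-q\|^{2}.
```

In the multiplicative form the effective weighting of each term rescales itself as either residual shrinks, which is what removes the need to hand-tune the trade-off parameter λ.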
A Multiple-Label Guided Clustering Algorithm for Historical Document Dating and Localization.
He, Sheng; Samara, Petros; Burgers, Jan; Schomaker, Lambert
2016-11-01
It is of essential importance for historians to know the date and place of origin of the documents they study. It would be a huge advancement for historical scholars if it were possible to automatically estimate the geographical and temporal provenance of a handwritten document by inferring them from its handwriting style. We propose a multiple-label guided clustering algorithm to discover the correlations between the concrete low-level visual elements in historical documents and abstract labels, such as date and location. First, a novel descriptor, called the histogram of orientations of handwritten strokes, is proposed to extract and describe the visual elements, built on a scale-invariant polar-feature space. In addition, the multi-label self-organizing map (MLSOM) is proposed to discover the correlations between the low-level visual elements and their labels in a single framework. Our proposed MLSOM can be used to predict the labels directly. Moreover, the MLSOM can also be considered a pre-structured clustering method for building a codebook, which contains more discriminative information on date and geography. The experimental results on the Medieval Paleographic Scale data set demonstrate that our method achieves state-of-the-art results.
Whitty, Jennifer A; Rundle-Thiele, Sharyn R; Scuffham, Paul A
2012-03-01
Discrete choice experiments (DCEs) and the Juster scale are accepted methods for the prediction of individual purchase probabilities. Nevertheless, these methods have seldom been applied to a social decision-making context. To gain an overview of social decisions for a decision-making population through data triangulation, these two methods were used to understand purchase probability in a social decision-making context. We report an exploratory social decision-making study of pharmaceutical subsidy in Australia. A DCE and selected Juster scale profiles were presented to current and past members of the Australian Pharmaceutical Benefits Advisory Committee and its Economic Subcommittee. Across 66 observations derived from 11 respondents for 6 different pharmaceutical profiles, there was a small overall median difference of 0.024 in the predicted probability of public subsidy (p = 0.003), with the Juster scale predicting the higher likelihood. While consistency was observed at the extremes of the probability scale, the funding probability differed over the mid-range of profiles. There was larger variability in the DCE than Juster predictions within each individual respondent, suggesting the DCE is better able to discriminate between profiles. However, large variation was observed between individuals in the Juster scale but not DCE predictions. It is important to use multiple methods to obtain a complete picture of the probability of purchase or public subsidy in a social decision-making context until further research can elaborate on our findings. This exploratory analysis supports the suggestion that the mixed logit model, which was used for the DCE analysis, may fail to adequately account for preference heterogeneity in some contexts.
Intercomparison of 3D pore-scale flow and solute transport simulation methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiaofan; Mehmani, Yashar; Perkins, William A.
2016-09-01
Multiple numerical approaches have been developed to simulate porous media fluid flow and solute transport at the pore scale. These include methods that (1) explicitly model the three-dimensional geometry of pore spaces and (2) conceptualize the pore space as a topologically consistent set of stylized pore bodies and pore throats. In previous work we validated a model of class 1, based on direct numerical simulation using computational fluid dynamics (CFD) codes, against magnetic resonance velocimetry (MRV) measurements of pore-scale velocities. Here we expand that validation to include additional models of class 1 based on the immersed-boundary method (IMB), lattice Boltzmann method (LBM), and smoothed particle hydrodynamics (SPH), as well as a model of class 2 (a pore-network model or PNM). The PNM approach used in the current study was recently improved and demonstrated to accurately simulate solute transport in a two-dimensional experiment. While the PNM approach is computationally much less demanding than direct numerical simulation methods, the effect of conceptualizing complex three-dimensional pore geometries in the manner of PNMs on solute transport has not been fully determined. We apply all four approaches (CFD, LBM, SPH and PNM) to simulate pore-scale velocity distributions and nonreactive solute transport, and intercompare the model results with previously reported experimental observations. Experimental observations are limited to measured pore-scale velocities, so solute transport comparisons are made only among the various models. Comparisons are drawn both in terms of macroscopic variables (e.g., permeability, solute breakthrough curves) and microscopic variables (e.g., local velocities and concentrations).
K.D. Brosofske; J. Chen; Thomas R. Crow; S.C. Saunders
1999-01-01
Increasing awareness of the importance of scale and landscape structure to landscape processes and concern about loss of biodiversity has resulted in efforts to understand patterns of biodiversity across multiple scales. We examined plant species distributions and their relationships to landscape structure at varying spatial scales across a pine barrens landscape in...
ERIC Educational Resources Information Center
Gitchel, W. Dent; Roessler, Richard T.; Turner, Ronna C.
2011-01-01
Assessment is critical to rehabilitation practice and research, and self-reports are a commonly used form of assessment. This study examines a gender effect according to item wording on the "Perceived Stress Scale" for adults with multiple sclerosis. Past studies have demonstrated two-factor solutions on this scale and other scales measuring…
Ipsative imputation for a 15-item Geriatric Depression Scale in community-dwelling elderly people.
Imai, Hissei; Furukawa, Toshiaki A; Kasahara, Yoriko; Ishimoto, Yasuko; Kimura, Yumi; Fukutomi, Eriko; Chen, Wen-Ling; Tanaka, Mire; Sakamoto, Ryota; Wada, Taizo; Fujisawa, Michiko; Okumiya, Kiyohito; Matsubayashi, Kozo
2014-09-01
Missing data are inevitable in almost all medical studies. Imputation methods based on probabilistic models are common, but they cannot impute individual data and require special software. In contrast, the ipsative imputation method, which substitutes missing items with the mean of the remaining items within the individual, is easy, does not need any special software, and provides individual scores. The aim of the present study was to evaluate the validity of the ipsative imputation method using data involving the 15-item Geriatric Depression Scale. Participants were community-dwelling elderly individuals (n = 1178). A structural equation model was constructed. Model fit indexes were calculated to assess the validity of the imputation method when it is used for individuals missing 20% of items or less and 40% of items or less, depending on whether we assumed that their correlation coefficients were the same as in the dataset with no missing items. Finally, we compared path coefficients of the dataset imputed by ipsative imputation with those obtained by multiple imputation. All of the model fit indexes were better under the assumption that the dataset missing 20% of items or less was the same as the dataset with no missing data than under the assumption that the datasets differed. However, under the same assumption, the model fit indexes were worse for the dataset missing 40% of items or less. The path coefficients of the dataset imputed by ipsative imputation and by multiple imputation were compatible with each other if the proportion of missing items was 20% or less. Ipsative imputation appears to be a valid imputation method and can be used to impute data in studies using the 15-item Geriatric Depression Scale, if the percentage of its missing items is 20% or less. © 2014 The Authors. Psychogeriatrics © 2014 Japanese Psychogeriatric Society.
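The rule itself is simple enough to state in code; a minimal sketch, assuming responses are stored as a people-by-items array with NaN marking missing items (for binary GDS items one might additionally round the imputed value):

```python
import numpy as np

def ipsative_impute(scores, max_missing=0.20):
    """Ipsative (within-person mean) imputation for scale items.

    scores      : (n_people, n_items) array with np.nan for missing items
    max_missing : rows missing more than this fraction are left untouched,
                  following the 20% rule supported by the study
    """
    out = np.asarray(scores, dtype=float).copy()
    n_items = out.shape[1]
    for row in out:
        miss = np.isnan(row)
        if 0 < miss.sum() <= max_missing * n_items:
            row[miss] = row[~miss].mean()   # mean of the remaining items
    return out
```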
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-11-25
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.
Ibrahim, Khaled Z.; Madduri, Kamesh; Williams, Samuel; ...
2013-07-18
The Gyrokinetic Toroidal Code (GTC) uses the particle-in-cell method to efficiently simulate plasma microturbulence. This paper presents novel analysis and optimization techniques to enhance the performance of GTC on large-scale machines. We introduce cell access analysis to better manage locality vs. synchronization tradeoffs on CPU and GPU-based architectures. Finally, our optimized hybrid parallel implementation of GTC uses MPI, OpenMP, and NVIDIA CUDA, achieves up to a 2× speedup over the reference Fortran version on multiple parallel systems, and scales efficiently to tens of thousands of cores.
Validity and reliability of a pilot scale for assessment of multiple system atrophy symptoms.
Matsushima, Masaaki; Yabe, Ichiro; Takahashi, Ikuko; Hirotani, Makoto; Kano, Takahiro; Horiuchi, Kazuhiro; Houzen, Hideki; Sasaki, Hidenao
2017-01-01
Multiple system atrophy (MSA) is a rare progressive neurodegenerative disorder for which a brief yet sensitive scale is required for use in clinical trials and general screening. We previously compared several scales for the assessment of MSA symptoms and devised an eight-item pilot scale with a large standardized response mean [handwriting, finger taps, transfers, standing with feet together, turning trunk, turning 360°, gait, body sway]. The aim of the present study was to investigate the validity and reliability of this simple pilot scale for the assessment of multiple system atrophy symptoms. Thirty-two patients with MSA (15 male/17 female; 20 cerebellar subtype [MSA-C]/12 parkinsonian subtype [MSA-P]) were prospectively registered between January 1, 2014 and February 28, 2015. Patients were evaluated by two independent raters using the Unified MSA Rating Scale (UMSARS), Scale for the Assessment and Rating of Ataxia (SARA), and the pilot scale. Correlations between UMSARS, SARA, and pilot scale scores, intraclass correlation coefficients (ICCs), and Cronbach's alpha coefficients were calculated. Pilot scale scores correlated significantly with scores for UMSARS Parts I, II, and IV as well as with SARA scores. Intra-rater and inter-rater ICCs and Cronbach's alpha coefficients remained high (> 0.94) for all measures. The results of the present study indicate the validity and reliability of the eight-item pilot scale, particularly for the assessment of symptoms in patients with early-stage multiple system atrophy.
Kangas, Antti J; Soininen, Pasi; Lawlor, Debbie A; Davey Smith, George; Ala-Korpela, Mika
2017-01-01
Detailed metabolic profiling in large-scale epidemiologic studies has uncovered novel biomarkers for cardiometabolic diseases and clarified the molecular associations of established risk factors. A quantitative metabolomics platform based on nuclear magnetic resonance spectroscopy has found widespread use, already profiling over 400,000 blood samples. Over 200 metabolic measures are quantified per sample; in addition to many biomarkers routinely used in epidemiology, the method simultaneously provides fine-grained lipoprotein subclass profiling and quantification of circulating fatty acids, amino acids, gluconeogenesis-related metabolites, and many other molecules from multiple metabolic pathways. Here we focus on applications of magnetic resonance metabolomics for quantifying circulating biomarkers in large-scale epidemiology. We highlight the molecular characterization of risk factors, use of Mendelian randomization, and the key issues of study design and analyses of metabolic profiling for epidemiology. We also detail how integration of metabolic profiling data with genetics can enhance drug development. We discuss why quantitative metabolic profiling is becoming widespread in epidemiology and biobanking. Although large-scale applications of metabolic profiling are still novel, it seems likely that comprehensive biomarker data will contribute to etiologic understanding of various diseases and abilities to predict disease risks, with the potential to translate into multiple clinical settings. PMID:29106475
NASA Astrophysics Data System (ADS)
Faes, Luca; Nollo, Giandomenico; Stramaglia, Sebastiano; Marinazzo, Daniele
2017-10-01
In the study of complex physical and biological systems represented by multivariate stochastic processes, an issue of great relevance is the description of the system dynamics spanning multiple temporal scales. While methods to assess the dynamic complexity of individual processes at different time scales are well established, multiscale analysis of directed interactions has never been formalized theoretically, and empirical evaluations are complicated by practical issues such as filtering and downsampling. Here we extend the very popular measure of Granger causality (GC), a prominent tool for assessing directed lagged interactions between joint processes, to quantify information transfer across multiple time scales. We show that the multiscale processing of a vector autoregressive (AR) process introduces a moving average (MA) component, and describe how to represent the resulting ARMA process using state space (SS) models and to combine the SS model parameters for computing exact GC values at arbitrarily large time scales. We exploit the theoretical formulation to identify peculiar features of multiscale GC in basic AR processes, and demonstrate with numerical simulations the much larger estimation accuracy of the SS approach compared to pure AR modeling of filtered and downsampled data. The improved computational reliability is exploited to disclose meaningful multiscale patterns of information transfer between global temperature and carbon dioxide concentration time series, both in paleoclimate and in recent years.
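For reference, the single-scale GC from x to y is the log-ratio of prediction-error variances between the restricted and full regressions; the contribution described above is to evaluate this same quantity exactly from the state space representation of the filtered and downsampled process, rather than from a refitted AR model:

```latex
F_{x\to y} \;=\; \ln \frac{\operatorname{var}\bigl(y_{t}\mid y_{t-1},y_{t-2},\ldots\bigr)}
{\operatorname{var}\bigl(y_{t}\mid y_{t-1},y_{t-2},\ldots,\;x_{t-1},x_{t-2},\ldots\bigr)}.
```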
Dilt, Thomas E; Weisberg, Peter J; Leitner, Philip; Matocq, Marjorie D; Inman, Richard D; Nussear, Kenneth E; Esque, Todd C
2016-06-01
Conservation planning and biodiversity management require information on landscape connectivity across a range of spatial scales from individual home ranges to large regions. Reduction in landscape connectivity due to changes in land use or development is expected to act synergistically with alterations to habitat mosaic configuration arising from climate change. We illustrate a multiscale connectivity framework to aid habitat conservation prioritization in the context of changing land use and climate. Our approach, which builds upon the strengths of multiple landscape connectivity methods, including graph theory, circuit theory, and least-cost path analysis, is here applied to the conservation planning requirements of the Mohave ground squirrel. The distribution of this threatened Californian species, as for numerous other desert species, overlaps with the proposed placement of several utility-scale renewable energy developments in the American southwest. Our approach uses information derived at three spatial scales to forecast potential changes in habitat connectivity under various scenarios of energy development and climate change. By disentangling the potential effects of habitat loss and fragmentation across multiple scales, we identify priority conservation areas for both core habitat and critical corridor or stepping stone habitats. This approach is a first step toward applying graph theory to analyze habitat connectivity for species with continuously distributed habitat and should be applicable across a broad range of taxa.
Myszkowski, Nils; Storme, Martin; Tavani, Jean-Louis
2018-04-27
Because of their brevity and their goal of broad content coverage, very short scales can show limited internal consistency and structural validity. We argue that this is because their objectives are better aligned with formative investigations than with reflective measurement methods, which capitalize on content overlap. As proofs of concept of formative investigations of short scales, we investigate the Ten Item Personality Inventory (TIPI). In Study 1, we administered the TIPI and the Big Five Inventory (BFI) to 938 adults and fitted a formative Multiple Indicator Multiple Causes model, in which the TIPI items formed 5 latent variables that in turn predicted the 5 BFI scores. These results were replicated in Study 2, on a sample of 759 adults, this time with the Revised NEO Personality Inventory (NEO-PI-R) as the external criterion. The models fit the data adequately, and moderate to strong significant effects (.37 < |β| < .69, all p < .001) of all 5 latent formative variables on their corresponding BFI and NEO-PI-R scores were observed. This study presents a formative approach that we propose to be more consistent with the aims of scales with broad content and short length, such as the TIPI.
The scale-dependent market trend: Empirical evidences using the lagged DFA method
NASA Astrophysics Data System (ADS)
Li, Daye; Kou, Zhun; Sun, Qiankun
2015-09-01
In this paper we conduct an empirical study and test the efficiency of 44 important market indexes at multiple scales. A modified method based on lagged detrended fluctuation analysis is utilized to maximize the information on long-term correlations from the non-zero lags and to keep the margin of error small when measuring the local Hurst exponent. Our empirical results illustrate that a common pattern can be found in the majority of the measured market indexes, which tend to be persistent (local Hurst exponent > 0.5) at small time scales while displaying significant anti-persistent characteristics at large time scales. Moreover, not only the stock markets but also the foreign exchange markets share this pattern. Considering that the exchange markets are only weakly synchronized with economic cycles, it can be concluded that economic cycles can cause anti-persistence at large time scales but that other factors are also at work. The empirical results support the view that financial markets are multi-fractal, and they indicate that deviations from efficiency and the type of model appropriate for describing the trend of market prices depend on the forecasting horizon.
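For orientation, a compact sketch of standard (unlagged) DFA-based Hurst estimation is given below; the paper's lagged variant and local-exponent refinements are not reproduced.

```python
import numpy as np

def dfa_hurst(series, scales=None):
    """Estimate the Hurst exponent by standard detrended fluctuation
    analysis: slope of log F(s) versus log s for the integrated series."""
    x = np.asarray(series, dtype=float)
    y = np.cumsum(x - x.mean())          # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(0.7, np.log10(len(x) // 4), 20).astype(int))
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        # Linear detrend of each window, then RMS fluctuation.
        coef = np.polyfit(t, segs.T, 1)
        trend = np.outer(coef[0], t) + coef[1][:, None]
        F.append(np.sqrt(np.mean((segs - trend) ** 2)))
    h, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return h

rng = np.random.default_rng(2)
print(dfa_hurst(rng.standard_normal(10000)))  # white noise: H ~ 0.5
```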
Towards an eco-phylogenetic framework for infectious disease ecology.
Fountain-Jones, Nicholas M; Pearse, William D; Escobar, Luis E; Alba-Casals, Ana; Carver, Scott; Davies, T Jonathan; Kraberger, Simona; Papeş, Monica; Vandegrift, Kurt; Worsley-Tonks, Katherine; Craft, Meggan E
2018-05-01
Identifying patterns and drivers of infectious disease dynamics across multiple scales is a fundamental challenge for modern science. There is growing awareness that it is necessary to incorporate multi-host and/or multi-parasite interactions to better understand and predict current and future disease threats, and new tools are needed to help address this task. Eco-phylogenetics (phylogenetic community ecology) provides one avenue for exploring multi-host multi-parasite systems, yet the incorporation of eco-phylogenetic concepts and methods into studies of host-pathogen dynamics has lagged behind. Eco-phylogenetics is a transformative approach that uses evolutionary history to infer present-day dynamics. Here, we present an eco-phylogenetic framework to reveal insights into parasite communities and infectious disease dynamics across spatial and temporal scales. We illustrate how eco-phylogenetic methods can help untangle the mechanisms of host-parasite dynamics from individual (e.g. co-infection) to landscape scales (e.g. parasite/host community structure). An improved ecological understanding of multi-host and multi-pathogen dynamics across scales will increase our ability to predict disease threats.
NASA Astrophysics Data System (ADS)
Bhakti, Satria Seto; Samsudin, Achmad; Chandra, Didi Teguh; Siahaan, Parsaoran
2017-05-01
The aim of this research was to develop multiple-choice test items as tools for measuring generic science skills related to the solar system. To achieve this aim, the researchers used the ADDIE model as the research method, consisting of Analysis, Design, Development, Implementation, and Evaluation. The generic science skills studied were limited to five indicators: (1) indirect observation, (2) awareness of scale, (3) logical inference, (4) causal relations, and (5) mathematical modeling. The participants were 32 students at a junior high school in Bandung. The results show that the constructed multiple-choice test items were declared valid by the expert validators, and subsequent testing showed that the developed items are able to measure generic science skills related to the solar system.
The evolution of scaling laws in the sea ice floe size distribution
NASA Astrophysics Data System (ADS)
Horvat, Christopher; Tziperman, Eli
2017-09-01
The sub-gridscale floe size and thickness distribution (FSTD) is an emerging climate variable, playing a leading-order role in the coupling between sea ice, the ocean, and the atmosphere. The FSTD, however, is difficult to measure given the vast range of horizontal scales of individual floes, leading to the common use of power-law scaling to describe it. The evolution of a coupled mixed-layer-FSTD model of a typical marginal ice zone is explicitly simulated here, to develop a deeper understanding of how processes active at the floe scale may or may not lead to scaling laws in the floe size distribution. The time evolution of mean quantities obtained from the FSTD (sea ice concentration, mean thickness, volume) is complex even in simple scenarios, suggesting that these quantities, which affect climate feedbacks, should be carefully calculated in climate models. The emergence of FSTDs with multiple separate power-law regimes, as seen in observations, is found to be due to the combination of multiple scale-selective processes. Limitations in assuming a power-law FSTD are carefully analyzed, applying methods used in observations to FSTD model output. Two important sources of error are identified that may lead to model biases: one when observing an insufficiently small range of floe sizes, and one from the fact that floe-scale processes often do not produce power-law behavior. These two sources of error may easily lead to biases in mean quantities derived from the FSTD of greater than 100%, and therefore biases in modeled sea ice evolution.
NASA Astrophysics Data System (ADS)
Ji, Hantao; Bhattacharjee, A.; Goodman, A.; Prager, S.; Daughton, W.; Cutler, R.; Fox, W.; Hoffmann, F.; Kalish, M.; Kozub, T.; Jara-Almonte, J.; Myers, C.; Ren, Y.; Sloboda, P.; Yamada, M.; Yoo, J.; Bale, S. D.; Carter, T.; Dorfman, S.; Drake, J.; Egedal, J.; Sarff, J.; Wallace, J.
2017-10-01
The FLARE device (Facility for Laboratory Reconnection Experiments; flare.pppl.gov) is a new laboratory experiment under construction at Princeton for studies of magnetic reconnection in the multiple X-line regimes directly relevant to space, solar, astrophysical, and fusion plasmas, as guided by a reconnection phase diagram. The whole device has been assembled, with first plasmas expected in the fall of 2017. The main diagnostic is an extensive set of magnetic probe arrays, currently under construction, covering multiple scales from local electron scales (~2 mm) through intermediate ion scales (~10 cm) to global MHD scales (~1 m), simultaneously providing in-situ measurements over all these relevant scales. The planned procedures and example topics as a user facility are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wardle, Kent E.; Frey, Kurt; Pereira, Candido
2014-02-02
This task is aimed at predictive modeling of solvent extraction processes in typical extraction equipment through multiple simulation methods at various scales of resolution. We have conducted detailed continuum fluid dynamics simulations at the process unit level as well as simulations of the molecular-level physical interactions which govern extraction chemistry. By combining information gained through simulations at each of these two tiers with advanced techniques such as the Lattice Boltzmann Method (LBM), which can bridge these two scales, we can develop the tools to work towards predictive simulation of solvent extraction on the equipment scale (Figure 1). The goal of such a tool, along with enabling optimized design and operation of extraction units, would be to allow prediction of stage extraction efficiency under specified conditions. Simulation efforts on each of the two scales are described below. As the initial application of FELBM in the work performed during FY10 has been on annular mixing, it is discussed in the context of the continuum scale. In the future, however, it is anticipated that the real value of FELBM will be in its use as a tool for sub-grid model development through highly refined DNS-like multiphase simulations, facilitating exploration and development of droplet models, including breakup and coalescence, which will be needed for the large-scale simulations where droplet-level physics cannot be resolved. In this area, it has a significant advantage over traditional CFD methods, as its high computational efficiency allows exploration of significantly greater physical detail, especially as computational resources increase in the future.
NASA Astrophysics Data System (ADS)
Saha, Punam K.; Gao, Zhiyun; Alford, Sara; Sonka, Milan; Hoffman, Eric
2009-02-01
Distinguishing arterial and venous trees in pulmonary multiple-detector X-ray computed tomography (MDCT) images (contrast-enhanced or unenhanced) is a critical first step in the quantification of vascular geometry for purposes of determining, for instance, pulmonary hypertension, using vascular dimensions as a comparator for assessment of airway size, detection of pulmonary emboli and more. Here, a novel method is reported for separating arteries and veins in MDCT pulmonary images. Arteries and veins are modeled as two iso-intensity objects closely entwined with each other at different locations at various scales. The method starts with two sets of seeds -- one for arteries and another for veins. Initialized with seeds, arteries and veins grow iteratively while maintaining their spatial separation and eventually forming two disjoint objects at convergence. The method combines fuzzy distance transform, a morphologic feature, with a topologic connectivity property to iteratively separate finer and finer details starting at a large scale and progressing towards smaller scales. The method has been validated in mathematically generated tubular objects with different levels of fuzziness, scale and noise. Also, it has been successfully applied to clinical CT pulmonary data. The accuracy of the method has been quantitatively evaluated by comparing its results with manual outlining. For arteries, the method has yielded correctness of 81.7% at the cost of 6.7% false positives and 11.6% false negatives. Our method is very promising for automated separation of arteries and veins in MDCT pulmonary images even when there is no mark of intensity variation at conjoining locations.
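A toy stand-in for the competitive growth idea (two seeded labels expanding alternately inside the vessel mask while remaining disjoint) is sketched below with scipy; the fuzzy distance transform guidance and multiscale schedule of the actual method are omitted, and the mask and seeds are invented.

```python
import numpy as np
from scipy import ndimage

def separate_two_trees(mask, seeds_a, seeds_v):
    """Grow two labels alternately inside a binary vessel mask, keeping
    them disjoint; a crude stand-in for the iterative artery/vein
    separation described above."""
    art = np.zeros_like(mask, bool); art[tuple(np.array(seeds_a).T)] = True
    ven = np.zeros_like(mask, bool); ven[tuple(np.array(seeds_v).T)] = True
    changed = True
    while changed:
        new_a = ndimage.binary_dilation(art) & mask & ~ven
        new_v = ndimage.binary_dilation(ven) & mask & ~new_a
        changed = (new_a.sum() > art.sum()) or (new_v.sum() > ven.sum())
        art, ven = new_a, new_v
    return art, ven

# Toy 2D example: two touching "vessels".
mask = np.zeros((40, 40), bool)
mask[10, 5:35] = True   # "artery" running left to right
mask[10:30, 20] = True  # "vein" crossing it vertically
art, ven = separate_two_trees(mask, seeds_a=[(10, 5)], seeds_v=[(29, 20)])
print(art.sum(), ven.sum(), (art & ven).any())  # disjoint labels
```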
Direct system parameter identification of mechanical structures with application to modal analysis
NASA Technical Reports Server (NTRS)
Leuridan, J. M.; Brown, D. L.; Allemang, R. J.
1982-01-01
In this paper a method is described to estimate mechanical structure characteristics in terms of mass, stiffness and damping matrices using measured force input and response data. The estimated matrices can be used to calculate a consistent set of damped natural frequencies and damping values, mode shapes and modal scale factors for the structure. The proposed technique is attractive as an experimental modal analysis method since the estimation of the matrices does not require previous estimation of frequency responses and since the method can be used, without any additional complications, for multiple force input structure testing.
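A minimal numpy sketch of the direct identification idea follows: with sampled forces and responses of a simulated two-degree-of-freedom system, the mass, damping and stiffness matrices are recovered in one linear least-squares step, with no intermediate frequency-response estimation. The system parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
M = np.diag([2.0, 1.0])
C = np.array([[0.4, -0.1], [-0.1, 0.3]])
K = np.array([[50.0, -20.0], [-20.0, 35.0]])

# Simulate M x'' + C x' + K x = f under random forcing.
dt, n = 1e-3, 20000
x = np.zeros(2); v = np.zeros(2)
X, V, A, F = [], [], [], []
for _ in range(n):
    f = rng.standard_normal(2)            # random force input
    a = np.linalg.solve(M, f - C @ v - K @ x)
    X.append(x.copy()); V.append(v.copy()); A.append(a); F.append(f)
    v = v + dt * a                        # semi-implicit Euler step
    x = x + dt * v
X, V, A, F = map(np.array, (X, V, A, F))

# Each sample gives F = [A V X] @ [M C K]^T; stack all samples and solve.
design = np.hstack([A, V, X])             # shape (n, 6)
theta, *_ = np.linalg.lstsq(design, F, rcond=None)
M_hat, C_hat, K_hat = theta[:2].T, theta[2:4].T, theta[4:6].T
print(np.round(M_hat, 3), np.round(K_hat, 2), sep="\n")
```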
Towards sound epistemological foundations of statistical methods for high-dimensional biology.
Mehta, Tapan; Tanik, Murat; Allison, David B
2004-09-01
A sound epistemological foundation for biological inquiry comes, in part, from application of valid statistical procedures. This tenet is widely appreciated by scientists studying the new realm of high-dimensional biology, or 'omic' research, which involves multiplicity at unprecedented scales. Many papers aimed at the high-dimensional biology community describe the development or application of statistical techniques. The validity of many of these is questionable, and a shared understanding about the epistemological foundations of the statistical methods themselves seems to be lacking. Here we offer a framework in which the epistemological foundation of proposed statistical methods can be evaluated.
Kubsik, Anna; Klimkiewicz, Robert; Janczewska, Katarzyna; Klimkiewicz, Paulina; Jankowska, Agnieszka; Woldańska-Okońska, Marta
2016-01-01
Multiple sclerosis is one of the most common neurological disorders. It is a chronic inflammatory demyelinating disease of the CNS whose etiology is not fully understood. Application of new rehabilitation methods is essential to improve functional status. The study material consisted of 120 patients of both sexes (82 women and 38 men) aged 21-81 years, all with a diagnosis of multiple sclerosis. The aim of the study was to evaluate the effect of laser radiation and other therapies on the functional status of patients with multiple sclerosis. Patients were randomly divided into four treatment groups. Evaluation was performed three times: before the start of rehabilitation, immediately after rehabilitation (21 days of treatment), and at a follow-up 30 days after the patients left the clinic. Functional status was assessed in all patients with the Expanded Disability Status Scale (EDSS) of Kurtzke and the Barthel Index. The results of all testing procedures show that the treatment methods improve the functional status of patients with multiple sclerosis, with a significant advantage for the synergistic action of laser and magnetostimulation. The combination of laser and magnetostimulation had a confirmed beneficial effect on quality of life. These results present new scientific value and improve upon the previously used programme of rehabilitation of patients with multiple sclerosis by laser radiation. The study showed that the synergistic action of laser radiation and magnetostimulation has a beneficial effect on functional status and thus improves the quality of life of patients with multiple sclerosis. The effects of all rehabilitation methods persisted after cessation of treatment, with a particular advantage for the synergistic action of laser radiation and magnetostimulation, which suggests that these methods may elicit a biological hysteresis phenomenon.
Detecting anomalies in CMB maps: a new method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neelakanta, Jayanth T., E-mail: jayanthtn@gmail.com
2015-10-01
Ever since WMAP announced its first results, different analyses have shown that there is weak evidence for several large-scale anomalies in the CMB data. While the evidence for each anomaly appears to be weak, the fact that there are multiple seemingly unrelated anomalies makes it difficult to account for them via a single statistical fluke. So, one is led to considering a combination of these anomalies. But if we "hand-pick" the anomalies (test statistics) to consider, we are making an a posteriori choice. In this article, we propose two statistics that do not suffer from this problem. The statistics are linear and quadratic combinations of the a_{ℓm}'s with random coefficients, and they test the null hypothesis that the a_{ℓm}'s are independent, normally-distributed, zero-mean random variables with an m-independent variance. The motivation for considering multiple modes is this: because most physical models that lead to large-scale anomalies result in coupling multiple ℓ and m modes, the "coherence" of this coupling should get enhanced if a combination of different modes is considered. In this sense, the statistics are thus much more generic than those that have been hitherto considered in the literature. Using fiducial data, we demonstrate that the method works and discuss how it can be used with actual CMB data to make quite general statements about the incompatibility of the data with the null hypothesis.
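A toy Monte Carlo version of the random-coefficient linear statistic is sketched below, assuming unit-variance coefficients under the null; the quadratic statistic and all CMB-specific details are omitted, and the injected "anomaly" is purely illustrative.

```python
import numpy as np

def random_coeff_stat(alm, rng):
    """Project the a_lm table onto a random direction; under the null
    (independent, zero-mean, unit-variance Gaussians) the normalized
    result is a standard normal whichever direction is drawn."""
    c = rng.standard_normal(alm.shape)
    return float((c * alm).sum() / np.linalg.norm(c))

rng = np.random.default_rng(4)
alm_obs = rng.standard_normal((30, 61))   # toy "observed" coefficients
alm_obs[5] += 0.4                         # inject a multipole anomaly

# Monte Carlo null distribution and a two-sided p-value.
stat_obs = random_coeff_stat(alm_obs, np.random.default_rng(123))
null = [random_coeff_stat(rng.standard_normal((30, 61)), rng)
        for _ in range(5000)]
p = np.mean(np.abs(null) >= abs(stat_obs))
print(stat_obs, p)
```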
Detecting scaling in the period dynamics of multimodal signals: Application to Parkinsonian tremor
NASA Astrophysics Data System (ADS)
Sapir, Nir; Karasik, Roman; Havlin, Shlomo; Simon, Ely; Hausdorff, Jeffrey M.
2003-03-01
Patients with Parkinson's disease exhibit tremor, involuntary movement of the limbs. The frequency spectrum of tremor typically has broad peaks at "harmonic" frequencies, much like that seen in other physical processes. In general, this type of harmonic structure in the frequency domain may be due to two possible mechanisms: a nonlinear oscillation or a superposition of (multiple) independent modes of oscillation. A broad peak spectrum generally indicates that a signal is semiperiodic with a fluctuating period. These fluctuations may possess intrinsic order that can be quantified using scaling analysis. We propose a method to extract the correlation (scaling) properties in the period dynamics of multimodal oscillations, in order to distinguish between a nonlinear oscillation and a superposition of individual modes of oscillation. The method is based on our finding that the information content of the temporal correlations in a fluctuating period of a single oscillator is contained in a finite frequency band in the power spectrum, allowing for decomposition of modes by bandpass filtering. Our simulations for a nonlinear oscillation show that harmonic modes possess the same scaling properties. In contrast, when the method is applied to tremor records from patients with Parkinson's disease, the first two modes of oscillation yield different scaling patterns, suggesting that these modes may not be simple harmonics, as might be initially assumed.
Multiscale spatial and temporal estimation of the b-value
NASA Astrophysics Data System (ADS)
García-Hernández, R.; D'Auria, L.; Barrancos, J.; Padilla, G.
2017-12-01
The estimation of the spatial and temporal variations of the Gutenberg-Richter b-value is of great importance in different seismological applications. One of the problems affecting its estimation is the heterogeneous distribution of seismicity, which makes the estimate strongly dependent upon the selected spatial and/or temporal scale. This is especially important in volcanoes, where dense clusters of earthquakes often overlap the background seismicity. Proposed solutions for estimating temporal variations of the b-value include considering equally spaced time intervals or variable intervals containing an equal number of earthquakes. Similar approaches have been proposed to image the spatial variations of this parameter as well. We propose a novel multiscale approach, based on the method of Ogata and Katsura (1993), allowing a consistent estimation of the b-value regardless of the considered spatial and/or temporal scales. Our method, named MUST-B (MUltiscale Spatial and Temporal characterization of the B-value), consists of computing estimates of the b-value at multiple temporal and spatial scales and extracting, for a given spatio-temporal point, a statistical estimator of the value together with an indication of the characteristic spatio-temporal scale. The approach also includes a consistent estimation of the completeness magnitude (Mc) and of the uncertainties on both b and Mc. We applied the method to example datasets from volcanic (Tenerife, El Hierro) and tectonic (Central Italy) areas, as well as an example application at the global scale.
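A minimal sketch of the multiscale idea using the standard Aki-Utsu maximum-likelihood b-value estimator over several window lengths follows; the statistical combination across scales and the Ogata-Katsura treatment used by MUST-B are not reproduced, and the catalogue is synthetic.

```python
import numpy as np

def b_value(mags, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood b-value for magnitudes >= mc, with the
    standard dm/2 correction for magnitude binning."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

def multiscale_b(mags, times, t0, windows, mc):
    """b-value around time t0 at several temporal scales; the spread
    across scales shows how scale-dependent the estimate can be."""
    out = {}
    for w in windows:
        sel = (times >= t0 - w / 2) & (times <= t0 + w / 2)
        if sel.sum() >= 50:                        # require enough events
            # dm=0 here because the synthetic magnitudes are continuous.
            out[w] = b_value(mags[sel], mc, dm=0.0)
    return out

# Synthetic Gutenberg-Richter catalogue with b = 1 above Mc = 2.
rng = np.random.default_rng(5)
n = 20000
mags = 2.0 + rng.exponential(scale=1 / (1.0 * np.log(10)), size=n)
times = np.sort(rng.uniform(0, 1000, size=n))
print(multiscale_b(mags, times, t0=500.0, windows=[10, 50, 200, 800], mc=2.0))
```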
NASA Astrophysics Data System (ADS)
Jeyavijayan, S.
2015-04-01
This study is a comparative analysis of the FTIR and FT-Raman spectra of 2-amino-4-hydroxypyrimidine. The total energies of different conformations were obtained from the DFT (B3LYP) method with 6-31+G(d,p) and 6-311++G(d,p) basis sets. The planarity barrier between the most stable form and the planar form is also predicted. The molecular structure, vibrational wavenumbers, infrared intensities, and Raman scattering activities were calculated for the molecule using the B3LYP density functional theory (DFT) method. The computed frequencies were scaled using multiple scaling factors to yield good agreement with the observed values. Reliable vibrational assignments were made on the basis of the total energy distribution (TED) along with the scaled quantum mechanical (SQM) method. The stability of the molecule arising from hyperconjugative interactions and charge delocalization was analyzed using natural bond orbital (NBO) analysis. Non-linear optical properties such as the electric dipole moment (μ), polarizability (α), and hyperpolarizability (β) of the investigated molecule were computed using B3LYP quantum chemical calculations. The calculated HOMO and LUMO energies show that charge transfer occurs within the molecule. In addition, the molecular electrostatic potential (MEP), Mulliken charge analysis, and several thermodynamic properties were computed by the DFT method.
Application of physical scaling towards downscaling climate model precipitation data
NASA Astrophysics Data System (ADS)
Gaur, Abhishek; Simonovic, Slobodan P.
2018-04-01
The physical scaling (SP) method downscales climate model data to local or regional scales, taking into consideration physical characteristics of the area under analysis. In this study, multiple SP-based models are tested for their effectiveness in downscaling North American Regional Reanalysis (NARR) daily precipitation data. Model performance is compared with two state-of-the-art downscaling methods: the statistical downscaling model (SDSM) and generalized linear modeling (GLM). The downscaled precipitation is evaluated with reference to recorded precipitation at 57 gauging stations located within the study region. The spatial and temporal robustness of the downscaling methods is evaluated using seven precipitation-based indices. Results indicate that the SP-based models perform best in downscaling precipitation, followed by GLM and then the SDSM model. The best performing models are thereafter used to downscale future precipitation projections from three global circulation models (GCMs) under two emission scenarios, representative concentration pathway (RCP) 2.6 and RCP 8.5, over the twenty-first century. The downscaled future projections indicate an increase in mean and maximum precipitation intensity as well as a decrease in the total number of dry days. Furthermore, an increase in the frequency of short (1-day), moderately long (2-4 day), and long (more than 5-day) precipitation events is projected.
NASA Astrophysics Data System (ADS)
Chen, Y.; Luo, M.; Xu, L.; Zhou, X.; Ren, J.; Zhou, J.
2018-04-01
The random forest (RF) method based on grid-search parameter optimization achieved a classification accuracy of 88.16% in the classification of images with multiple feature variables. This classification accuracy was higher than that of SVM and ANN under the same feature variables. In terms of efficiency, the RF classification method also performs better than SVM and ANN and is more capable of handling multidimensional feature variables. Combining the RF method with an object-based analysis approach improved the classification accuracy further: multiresolution segmentation based on ESP scale parameter optimization was used to obtain six segmentation scales, and at a segmentation scale of 49 the classification accuracy reached its highest value of 89.58%. The accuracy of object-based RF classification was thus 1.42% higher than that of pixel-based classification (88.16%). Therefore, the RF classification method combined with an object-based analysis approach can achieve relatively high accuracy in the classification and extraction of land use information for industrial and mining reclamation areas. Moreover, interpretation of remotely sensed imagery using the proposed method can provide technical support and a theoretical reference for remote sensing-based monitoring of land reclamation.
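A short scikit-learn sketch of RF classification with grid-search parameter optimization is given below; the synthetic feature table and grid values are illustrative, not the study's data or settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Stand-in for the image feature table: rows = pixels/objects,
# columns = spectral, textural and geometric feature variables.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Grid-search two influential RF hyper-parameters with 5-fold CV.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300, 500],
                "max_features": ["sqrt", 0.3, 0.6]},
    cv=5, n_jobs=-1)
grid.fit(X_tr, y_tr)
print(grid.best_params_, f"test accuracy = {grid.score(X_te, y_te):.4f}")
```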
New clinical grading scales and objective measurement for conjunctival injection.
Park, In Ki; Chun, Yeoun Sook; Kim, Kwang Gi; Yang, Hee Kyung; Hwang, Jeong-Min
2013-08-05
To establish a new clinical grading scale and an objective measurement method for evaluating conjunctival injection, photographs of conjunctival injection in 429 eyes with variable ocular diseases were reviewed. Seventy-three images with concordance among three ophthalmologists were classified into 4-step and 10-step subjective grading scales and used as standard photographs. Each image was quantified in four ways: the relative magnitude of the redness component of each red-green-blue (RGB) pixel; two different algorithms based on the area occupied by blood vessels (K-means clustering with the LAB color model, and contrast-limited adaptive histogram equalization [CLAHE]); and the presence of blood vessel edges, based on the Canny edge-detection algorithm. Areas under the receiver operating characteristic curves (AUCs) were calculated to summarize the diagnostic accuracies of the four algorithms. The RGB color model, K-means clustering with the LAB color model, and the CLAHE algorithm showed good correlation with the clinical 10-step grading scale (R = 0.741, 0.784, 0.919, respectively) and with the clinical 4-step grading scale (R = 0.645, 0.702, 0.838, respectively). The CLAHE method showed the largest AUC, the best distinction power (P < 0.001, ANOVA, Bonferroni multiple comparison test), and high reproducibility (R = 0.996). The CLAHE algorithm thus showed the best correlation with the 10-step and 4-step subjective clinical grading scales together with high distinction power and reproducibility, and can be a useful method for assessing conjunctival injection.
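A hedged OpenCV sketch of the CLAHE step follows: local contrast is equalized on the lightness channel before a simple redness score is computed. The clip limit, tile size, redness threshold, and file name are illustrative assumptions, not the calibrated values of the study.

```python
import cv2
import numpy as np

def clahe_redness_score(bgr_image):
    """Enhance local contrast with CLAHE on the L channel, then score
    vessel redness as the fraction of red-dominant pixels."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab_eq = cv2.merge((clahe.apply(l), a, b))
    eq = cv2.cvtColor(lab_eq, cv2.COLOR_LAB2BGR).astype(np.float32)
    b_ch, g_ch, r_ch = eq[..., 0], eq[..., 1], eq[..., 2]
    redness = r_ch - 0.5 * (g_ch + b_ch)
    return float((redness > 30).mean())   # fraction of "vessel" pixels

img = cv2.imread("conjunctiva.jpg")       # hypothetical input photograph
if img is not None:
    print(f"injection score: {clahe_redness_score(img):.3f}")
```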
Fear of Death in a Sample of Physicians
Wood, Keith; Robinson, Paul J.
1984-01-01
Recently, reliable and valid methods of assessing fear of death have been developed. In this study, three well-established questionnaires (the Threat Index, the Death Anxiety Scale and the Collett-Lester Fear of Death Scale) were used to assess and compare fear of death in a group of physicians (n = 30) and a group of non-physicians (n = 30). T-tests and hierarchical multiple regression analyses revealed no significant differences between physicians' and non-physicians' fear of death as measured by the Threat Index and Templer's Death Anxiety Scale. The Collett-Lester Fear of Death Scale revealed that physicians were less fearful of death. More specifically, physicians demonstrated less fear on the Collett-Lester subscales 'fear of dying of self' and 'fear of dying of others' than did non-physicians. These findings, and those of earlier, contradictory research, are discussed.
Using analogy to learn about phenomena at scales outside human perception.
Resnick, Ilyse; Davatzes, Alexandra; Newcombe, Nora S; Shipley, Thomas F
2017-01-01
Understanding and reasoning about phenomena at scales outside human perception (for example, geologic time) is critical across science, technology, engineering, and mathematics. Thus, devising strong methods to support acquisition of reasoning at such scales is an important goal in science, technology, engineering, and mathematics education. In two experiments, we examine the use of analogical principles in learning about geologic time. Across both experiments we find that using a spatial analogy (for example, a time line) to make multiple alignments, and keeping all unrelated components of the analogy held constant (for example, keep the time line the same length), leads to better understanding of the magnitude of geologic time. Effective approaches also include hierarchically and progressively aligning scale information (Experiment 1) and active prediction in making alignments paired with immediate feedback (Experiments 1 and 2).
Cerebrospinal fluid ATP metabolites in multiple sclerosis.
Lazzarino, G; Amorini, A M; Eikelenboom, M J; Killestein, J; Belli, A; Di Pietro, V; Tavazzi, B; Barkhof, F; Polman, C H; Uitdehaag, B M J; Petzold, A
2010-05-01
Increased axonal energy demand and mitochondrial failure have been suggested as possible causes of axonal degeneration and disability in multiple sclerosis. Our objective was to test whether ATP depletion precedes clinical, imaging and biomarker evidence of axonal degeneration in multiple sclerosis. The method consisted of a longitudinal study which included 21 patients with multiple sclerosis. High performance liquid chromatography was used to quantify biomarkers of ATP metabolism (oxypurines and purines) from the cerebrospinal fluid at baseline. The Expanded Disability Status Scale, MRI brain imaging measures of brain atrophy (ventricular and parenchymal fractions), and cerebrospinal fluid biomarkers of axonal damage (phosphorylated and hyperphosphorylated neurofilaments) were quantified at baseline and at 3-year follow-up. Central ATP depletion (sum of ATP metabolites >19.7 micromol/litre) was followed by more severe progression of disability compared to normal ATP metabolites (median 1.5 versus 0, p < 0.05). Baseline ATP metabolite levels correlated with change of Expanded Disability Status Scale in the pooled cohort (r = 0.66, p = 0.001) and in subgroups (relapsing-remitting patients: r = 0.79, p < 0.05; secondary progressive/primary progressive patients: r = 0.69, p < 0.01). There was no relationship between central ATP metabolites and either biomarker or MRI evidence of axonal degeneration. The data suggest that an increased energy demand in multiple sclerosis may cause a quantifiable degree of central ATP depletion. We speculate that the observed clinical disability may be related to depolarisation-associated conduction block.
An unsupervised method for quantifying the behavior of paired animals
NASA Astrophysics Data System (ADS)
Klibaite, Ugne; Berman, Gordon J.; Cande, Jessica; Stern, David L.; Shaevitz, Joshua W.
2017-02-01
Behaviors involving the interaction of multiple individuals are complex and frequently crucial for an animal’s survival. These interactions, ranging across sensory modalities, length scales, and time scales, are often subtle and difficult to characterize. Contextual effects on the frequency of behaviors become even more difficult to quantify when physical interaction between animals interferes with conventional data analysis, e.g. due to visual occlusion. We introduce a method for quantifying behavior in fruit fly interaction that combines high-throughput video acquisition and tracking of individuals with recent unsupervised methods for capturing an animal’s entire behavioral repertoire. We find behavioral differences between solitary flies and those paired with an individual of the opposite sex, identifying specific behaviors that are affected by social and spatial context. Our pipeline allows for a comprehensive description of the interaction between two individuals using unsupervised machine learning methods, and will be used to answer questions about the depth of complexity and variance in fruit fly courtship.
NASA Astrophysics Data System (ADS)
Leinhardt, Zoë M.; Richardson, Derek C.
2005-08-01
We present a new code (companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute-force binary search methods scale as O(N^2), while full hierarchy searches can be more expensive still, making analysis highly inefficient for multiple data sets at large N. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
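The sketch below illustrates the general strategy of using a spatial tree to avoid the O(N^2) pairwise scan when testing two-body binding energies; it is a simplified stand-in, not the companion algorithm, and all particle data and the search radius are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

G = 6.674e-11  # gravitational constant, SI units

def find_bound_pairs(pos, vel, mass, r_search):
    """Find mutually bound particle pairs among spatial neighbours.
    The kd-tree restricts the pairwise energy check to nearby candidates,
    giving near O(N log N) behaviour for clustered data."""
    tree = cKDTree(pos)
    bound = []
    for i, j in tree.query_pairs(r_search):
        r = np.linalg.norm(pos[i] - pos[j])
        mu = mass[i] * mass[j] / (mass[i] + mass[j])   # reduced mass
        v2 = np.sum((vel[i] - vel[j]) ** 2)
        energy = 0.5 * mu * v2 - G * mass[i] * mass[j] / r
        if energy < 0.0:                               # mutually bound
            bound.append((i, j))
    return bound

rng = np.random.default_rng(6)
n = 1000
pos = rng.uniform(0, 1e6, (n, 3))
vel = rng.normal(0, 0.01, (n, 3))
mass = np.full(n, 1e12)
print(len(find_bound_pairs(pos, vel, mass, r_search=5e4)))
```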
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.
Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-02-01
Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions, with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods, using eastern North America as an example. Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.
Continuous analog of multiplicative algebraic reconstruction technique for computed tomography
NASA Astrophysics Data System (ADS)
Tateishi, Kiyoko; Yamaguchi, Yusaku; Abou Al-Ola, Omar M.; Kojima, Takeshi; Yoshinaga, Tetsuya
2016-03-01
We propose a hybrid dynamical system as a continuous analog to the block-iterative multiplicative algebraic reconstruction technique (BI-MART), a well-known iterative image reconstruction algorithm for computed tomography. The hybrid system is described by a switched nonlinear system with a piecewise smooth vector field or differential equation and, for consistent inverse problems, the convergence of non-negatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem. Namely, we can prove theoretically that a weighted Kullback-Leibler divergence measure can be a common Lyapunov function for the switched system. We show that discretizing the differential equation using the first-order approximation (Euler's method) based on the geometric multiplicative calculus leads to the same iterative formula as the BI-MART, with the scaling parameter playing the role of the time-step of the numerical discretization. The present paper is the first to reveal that this kind of iterative image reconstruction algorithm can be constructed by discretizing a continuous-time dynamical system for solving tomographic inverse problems. Iterative algorithms based not only on the Euler method but also on lower-order Runge-Kutta methods for discretizing the continuous-time system can be used for image reconstruction. A numerical example showing the characteristics of the discretized iterative methods is presented.
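As a hedged illustration of a multiplicative (geometric) Euler step, the Python sketch below implements a simultaneous, single-block MART-style update on a toy consistent system; the matrix, step size, and column-sum normalization are illustrative choices, not the paper's exact BI-MART formulation.

```python
import numpy as np

def mart_reconstruct(A, b, iters=200, lam=0.5):
    """Multiplicative update from a geometric Euler step of size lam:
    log x <- log x + lam * D^{-1} A^T log(b / Ax), D = column sums.
    Non-negativity is preserved automatically because the update is
    multiplicative."""
    x = np.ones(A.shape[1])
    for _ in range(iters):
        ratio = b / (A @ x)
        x = x * np.exp(lam * (A.T @ np.log(ratio)) / A.sum(axis=0))
    return x

# Toy tomography-like consistent problem with a positive system matrix.
rng = np.random.default_rng(7)
A = rng.uniform(0.1, 1.0, size=(40, 20))
x_true = rng.uniform(0.5, 2.0, size=20)
b = A @ x_true
x_hat = mart_reconstruct(A, b)
print(float(np.linalg.norm(A @ x_hat - b)))   # residual shrinks with iters
```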
ZnO-based multiple channel and multiple gate FinMOSFETs
NASA Astrophysics Data System (ADS)
Lee, Ching-Ting; Huang, Hung-Lin; Tseng, Chun-Yen; Lee, Hsin-Ying
2016-02-01
In recent years, zinc oxide (ZnO)-based metal-oxide-semiconductor field-effect transistors (MOSFETs) have attracted much attention because ZnO-based semiconductors possess several advantages, including large exciton binding energy, nontoxicity, biocompatibility, low material cost, and a wide direct bandgap. The ZnO-based MOSFET is also one of the most promising devices for applications in microwave power amplifiers, logic circuits, large-scale integrated circuits, and logic swing. In this study, to enhance the performance of ZnO-based MOSFETs, ZnO-based multiple-channel and multiple-gate FinMOSFETs were fabricated using a simple laser interference photolithography method and a self-aligned photolithography method. The multiple-channel structure provides additional sidewall depletion width control to improve channel controllability, because the channel sidewall portions are surrounded by the gate electrode. Furthermore, the multiple-gate structure has a shorter distance between source and gate and a shorter gate length between two gates, enhancing gate operating performance; the shorter source-gate distance also enhances the electron velocity in the channel fin structure. In this work, ninety-one channels and four gates were used in the FinMOSFETs. Consequently, the drain-source saturation current (IDSS) and maximum transconductance (gm) of the ZnO-based multiple-channel and multiple-gate FinFETs, operated at a drain-source voltage (VDS) of 10 V and a gate-source voltage (VGS) of 0 V, were improved from 11.5 mA/mm to 13.7 mA/mm and from 4.1 mS/mm to 6.9 mS/mm, respectively, in comparison with conventional ZnO-based single-channel, single-gate MOSFETs.
Garland, Ellen C; Rendell, Luke; Lilley, Matthew S; Poole, M Michael; Allen, Jenny; Noad, Michael J
2017-07-01
Identifying and quantifying variation in vocalizations is fundamental to advancing our understanding of processes such as speciation, sexual selection, and cultural evolution. The song of the humpback whale (Megaptera novaeangliae) presents an extreme example of complexity and cultural evolution. It is a long, hierarchically structured vocal display that undergoes constant evolutionary change. Obtaining robust metrics to quantify song variation at multiple scales (from a sound through to population variation across the seascape) is a substantial challenge. Here, the authors present a method to quantify song similarity at multiple levels within the hierarchy. To incorporate the complexity of these multiple levels, the calculation of similarity is weighted by measurements of sound units (lower levels within the display) to bridge the gap in information between upper and lower levels. Results demonstrate that the inclusion of weighting provides a more realistic and robust representation of song similarity at multiple levels within the display. This method permits robust quantification of cultural patterns and processes that will also contribute to the conservation management of endangered humpback whale populations, and is applicable to any hierarchically structured signal sequence.
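The following Python sketch illustrates the general idea of weighting similarity by lower-level unit measurements: a Levenshtein-style alignment whose substitution cost reflects distances between measured unit features. The unit names, feature vectors, and cost scheme are invented for illustration and are not the authors' metric.

```python
import numpy as np

def weighted_edit_similarity(seq_a, seq_b, unit_features):
    """Edit-distance similarity between two unit sequences in which the
    substitution cost is the normalized distance between measured unit
    features, so acoustically close units are penalized less."""
    feats = {k: np.asarray(v, float) for k, v in unit_features.items()}
    scale = max(np.linalg.norm(feats[u] - feats[v])
                for u in feats for v in feats) or 1.0

    def sub_cost(u, v):
        return np.linalg.norm(feats[u] - feats[v]) / scale

    n, m = len(seq_a), len(seq_b)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1); D[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1, D[i, j - 1] + 1,
                          D[i - 1, j - 1] + sub_cost(seq_a[i - 1], seq_b[j - 1]))
    return 1.0 - D[n, m] / max(n, m)

# Hypothetical unit inventory: features = (duration s, peak frequency kHz).
units = {"moan": (1.2, 0.3), "cry": (0.8, 1.5), "chirp": (0.2, 4.0)}
print(weighted_edit_similarity(["moan", "cry", "cry", "chirp"],
                               ["moan", "cry", "chirp", "chirp"], units))
```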
Multiple-time scales analysis of physiological time series under neural control
NASA Technical Reports Server (NTRS)
Peng, C. K.; Hausdorff, J. M.; Havlin, S.; Mietus, J. E.; Stanley, H. E.; Goldberger, A. L.
1998-01-01
We discuss multiple-time scale properties of neurophysiological control mechanisms, using heart rate and gait regulation as model systems. We find that scaling exponents can be used as prognostic indicators. Furthermore, detection of more subtle degradation of scaling properties may provide a novel early warning system in subjects with a variety of pathologies including those at high risk of sudden death.
Role of tautomerism in RNA biochemistry
Singh, Vipender; Fedeles, Bogdan I.
2015-01-01
Heterocyclic nucleic acid bases and their analogs can adopt multiple tautomeric forms due to the presence of multiple solvent-exchangeable protons. In DNA, spontaneous formation of minor tautomers has been speculated to contribute to mutagenic mispairings during DNA replication, whereas in RNA, minor tautomeric forms have been proposed to enhance the structural and functional diversity of RNA enzymes and aptamers. This review summarizes the role of tautomerism in RNA biochemistry, specifically focusing on the role of tautomerism in catalysis of small self-cleaving ribozymes and recognition of ligand analogs by riboswitches. Considering that the presence of multiple tautomers of nucleic acid bases is a rare occurrence, and that tautomers typically interconvert on a fast time scale, methods for studying rapid tautomerism in the context of nucleic acids under biologically relevant aqueous conditions are also discussed.
Reliability, Validity and Utility of a Multiple Intelligences Assessment for Career Planning.
ERIC Educational Resources Information Center
Shearer, C. Branton
"The Multiple Intelligences Developmental Assessment Scales" (MIDAS) is a self- (or other-) completed instrument which is based upon the theory of multiple intelligences. The validity, reliability, and utility data regarding the MIDAS are reported here. The measure consists of 7 main scales and 24 subscales which summarize a person's intellectual…
Rueckl, Martin; Lenzi, Stephen C; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W
2017-01-01
The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales.
Intelligent query by humming system based on score level fusion of multiple classifiers
NASA Astrophysics Data System (ADS)
Pyo Nam, Gi; Thu Trang Luong, Thi; Ha Nam, Hyun; Ryoung Park, Kang; Park, Sung-Joo
2011-12-01
Recently, the necessity for content-based music retrieval that can return results even if a user does not know information such as the title or singer has increased. Query-by-humming (QBH) systems have been introduced to address this need, as they allow the user to simply hum snatches of the tune to find the right song. Even though there have been many studies on QBH, few have combined multiple classifiers based on various fusion methods. Here we propose a new QBH system based on the score level fusion of multiple classifiers. This research is novel in the following three respects: three local classifiers [quantized binary (QB) code-based linear scaling (LS), pitch-based dynamic time warping (DTW), and LS] are employed; local maximum and minimum point-based LS and pitch distribution feature-based LS are used as global classifiers; and the combination of local and global classifiers based on the score level fusion by the PRODUCT rule is used to achieve enhanced matching accuracy. Experimental results with the 2006 MIREX QBSH and 2009 MIR-QBSH corpus databases show that the performance of the proposed method is better than that of single classifier and other fusion methods.
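A minimal sketch of score-level fusion by the PRODUCT rule follows, assuming each classifier outputs a query-by-song score matrix; the min-max normalization choice and all scores are illustrative.

```python
import numpy as np

def product_rule_fusion(score_matrices):
    """Fuse per-classifier score matrices (queries x songs) with the
    PRODUCT rule after min-max normalizing each classifier to [0, 1]."""
    fused = None
    for s in score_matrices:
        s = np.asarray(s, float)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # min-max normalize
        fused = s if fused is None else fused * s
    return fused

rng = np.random.default_rng(8)
# Three hypothetical classifiers scoring 5 hummed queries against 100 songs.
scores = [rng.uniform(size=(5, 100)) for _ in range(3)]
fused = product_rule_fusion(scores)
print(fused.argmax(axis=1))   # best-matching song per query
```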
Forslin, Mia; Kottorp, Anders; Kierkegaard, Marie; Johansson, Sverker
2016-11-11
To translate and culturally adapt the Acceptance of Chronic Health Conditions (ACHC) Scale for people with multiple sclerosis into Swedish, and to analyse the psychometric properties of the Swedish version. Ten people with multiple sclerosis participated in translation and cultural adaptation of the ACHC Scale; 148 people with multiple sclerosis were included in evaluation of the psychometric properties of the scale. Translation and cultural adaptation were carried out through translation and back-translation, by expert committee evaluation and pre-test with cognitive interviews in people with multiple sclerosis. The psychometric properties of the Swedish version were evaluated using Rasch analysis. The Swedish version of the ACHC Scale was an acceptable equivalent to the original version. Seven of the original 10 items fitted the Rasch model and demonstrated ability to separate between groups. A 5-item version, including 2 items and 3 super-items, demonstrated better psychometric properties, but lower ability to separate between groups. The Swedish version of the ACHC Scale with the original 10 items did not fit the Rasch model. Two solutions, either with 7 items (ACHC-7) or with 2 items and 3 super-items (ACHC-5), demonstrated acceptable psychometric properties. Use of the ACHC-5 Scale with super-items is recommended, since this solution adjusts for local dependency among items.
Huang, Yunda; Huang, Ying; Moodie, Zoe; Li, Sue; Self, Steve
2014-01-01
In biomedical research such as the development of vaccines for infectious diseases or cancer, measures from the same assay are often collected from multiple sources or laboratories. Measurement error that may vary between laboratories needs to be adjusted for when combining samples across laboratories. We incorporate such adjustment when comparing and combining independent samples from different labs via integration of external data collected on paired samples from the same two laboratories. We propose: 1) normalization of individual-level data from two laboratories to the same scale via the expectation of true measurements conditional on the observed; 2) comparison of mean assay values between two independent samples in the main study, accounting for inter-source measurement error; and 3) sample size calculations for the paired-sample study so that hypothesis testing error rates are appropriately controlled in the main study comparison. Because the goal is not to estimate the true underlying measurements but to combine data on the same scale, our proposed methods do not require that the true values for the error-prone measurements are known in the external data. Simulation results under a variety of scenarios demonstrate satisfactory finite sample performance of our proposed methods when measurement errors vary. We illustrate our methods using real ELISpot assay data generated by two HIV vaccine laboratories.
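The Python sketch below illustrates steps 1 and 2 under simplifying linear-normal assumptions: paired external data are used to estimate E[X_A | X_B] by regression, main-study lab-B values are mapped onto the lab-A scale, and the two arms are then compared. All data are simulated, and the plain t-test stands in for the paper's error-adjusted comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)

# External paired data: the same 60 samples assayed in lab A and lab B.
truth = rng.normal(100, 15, 60)
lab_a = truth + rng.normal(0, 5, 60)
lab_b = 0.8 * truth + 10 + rng.normal(0, 8, 60)     # biased, noisier lab

# Estimate E[X_A | X_B] by linear regression on the paired data; the
# true values are never needed, only a common scale.
slope, intercept, *_ = stats.linregress(lab_b, lab_a)

# Main study: independent samples, one arm measured in each lab.
arm_a = rng.normal(100, 15, 80) + rng.normal(0, 5, 80)
arm_b_raw = 0.8 * rng.normal(105, 15, 80) + 10 + rng.normal(0, 8, 80)
arm_b = intercept + slope * arm_b_raw                # bring onto lab-A scale

t, p = stats.ttest_ind(arm_a, arm_b)
print(f"t = {t:.2f}, p = {p:.3f}")
```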
Techno-ecological synergy: a framework for sustainable engineering.
Bakshi, Bhavik R; Ziv, Guy; Lepech, Michael D
2015-02-03
Even though the importance of ecosystems in sustaining all human activities is well-known, methods for sustainable engineering fail to fully account for this role of nature. Most methods account for the demand for ecosystem services, but almost none account for the supply. Incomplete accounting of the very foundation of human well-being can result in perverse outcomes from decisions meant to enhance sustainability and lost opportunities for benefiting from the ability of nature to satisfy human needs in an economically and environmentally superior manner. This paper develops a framework for understanding and designing synergies between technological and ecological systems to encourage greater harmony between human activities and nature. This framework considers technological systems ranging from individual processes to supply chains and life cycles, along with corresponding ecological systems at multiple spatial scales ranging from local to global. The demand for specific ecosystem services is determined from information about emissions and resource use, while the supply is obtained from information about the capacity of relevant ecosystems. Metrics calculate the sustainability of individual ecosystem services at multiple spatial scales and help define necessary but not sufficient conditions for local and global sustainability. Efforts to reduce ecological overshoot encourage enhancement of life cycle efficiency, development of industrial symbiosis, innovative designs and policies, and ecological restoration, thus combining the best features of many existing methods. Opportunities for theoretical and applied research to make this framework practical are also discussed.
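As a toy reading of the supply-versus-demand accounting described above, the sketch below computes an overshoot-style metric V = (supply - demand)/demand per ecosystem service; the metric form is a simplification of the framework and all figures are invented.

```python
def ecosystem_service_sustainability(demand, supply):
    """Per-service metric V = (S - D)/D. V >= 0 means local supply meets
    demand for that service; V < 0 quantifies ecological overshoot."""
    return {svc: (supply.get(svc, 0.0) - d) / d for svc, d in demand.items()}

# Hypothetical annual figures for one facility and its local ecosystems.
demand = {"carbon_sequestration_t": 5000.0, "water_provision_ML": 120.0}
supply = {"carbon_sequestration_t": 1200.0, "water_provision_ML": 150.0}
for svc, v in ecosystem_service_sustainability(demand, supply).items():
    print(f"{svc}: V = {v:+.2f}")
```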
Stochastic Downscaling of Digital Elevation Models
NASA Astrophysics Data System (ADS)
Rasera, Luiz Gustavo; Mariethoz, Gregoire; Lane, Stuart N.
2016-04-01
High-resolution digital elevation models (HR-DEMs) are extremely important for the understanding of small-scale geomorphic processes in Alpine environments. In the last decade, remote sensing techniques have experienced a major technological evolution, enabling fast and precise acquisition of HR-DEMs. However, sensors designed to measure elevation data still feature different spatial resolution and coverage capabilities. Terrestrial altimetry allows the acquisition of HR-DEMs with centimeter- to millimeter-level precision, but only within small spatial extents and often with dead ground problems. Conversely, satellite radiometric sensors are able to gather elevation measurements over large areas but with limited spatial resolution. In the present study, we propose an algorithm to downscale low-resolution satellite-based DEMs using topographic patterns extracted from HR-DEMs derived, for example, from ground-based and airborne altimetry. The method consists of a multiple-point geostatistical simulation technique able to generate high-resolution elevation data from low-resolution digital elevation models (LR-DEMs). Initially, two collocated DEMs with different spatial resolutions serve as input to construct a database of topographic patterns, which is also used to infer the statistical relationships between the two scales. High-resolution elevation patterns are then retrieved from the database to downscale a LR-DEM through a stochastic simulation process. The output of the simulations is a set of equally probable DEMs with higher spatial resolution that also depict the large-scale geomorphic structures present in the original LR-DEM. As these multiple models reflect the uncertainty related to the downscaling, they can be employed to quantify the uncertainty of phenomena that are dependent on fine topography, such as catchment hydrological processes. The proposed methodology is illustrated for a case study in the Swiss Alps. A swissALTI3D HR-DEM (with 5 m resolution) and a SRTM-derived LR-DEM from the Western Alps are used to downscale a SRTM-based LR-DEM from the eastern part of the Alps. The results show that the method is capable of generating multiple high-resolution synthetic DEMs that reproduce the spatial structure and statistics of the original DEM.
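A hedged, much-simplified analogue of the patch-based stochastic downscaling idea (matching coarse values against a training DEM pair and sampling fine-scale patches) is sketched below; real multiple-point geostatistics matches whole spatial patterns with context, which this toy omits, and all surfaces are synthetic.

```python
import numpy as np

def stochastic_downscale(lr_target, lr_train, hr_train, f, k=5, seed=0):
    """For each coarse cell of the target, find the k most similar coarse
    cells in the training DEM and paste the fine f x f patch of one of
    them, chosen at random (hence multiple equally probable outputs)."""
    rng = np.random.default_rng(seed)
    flat_train = lr_train.ravel()
    out = np.empty((lr_target.shape[0] * f, lr_target.shape[1] * f))
    for i in range(lr_target.shape[0]):
        for j in range(lr_target.shape[1]):
            idx = np.argsort(np.abs(flat_train - lr_target[i, j]))[:k]
            r, c = np.unravel_index(rng.choice(idx), lr_train.shape)
            patch = hr_train[r * f:(r + 1) * f, c * f:(c + 1) * f]
            # Shift the patch so its mean honours the coarse value.
            out[i * f:(i + 1) * f, j * f:(j + 1) * f] = (
                patch - patch.mean() + lr_target[i, j])
    return out

# Synthetic training pair: an HR surface and its block-averaged LR version.
rng = np.random.default_rng(10)
hr_train = np.cumsum(np.cumsum(rng.normal(size=(80, 80)), 0), 1)
f = 4
lr_train = hr_train.reshape(20, f, 20, f).mean(axis=(1, 3))
lr_target = lr_train + rng.normal(0, 0.5, lr_train.shape)
sims = [stochastic_downscale(lr_target, lr_train, hr_train, f, seed=s)
        for s in range(3)]    # multiple equally probable realizations
print(np.std([s.mean() for s in sims]))
```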
Extracting the information of coastline shape and its multiple representations
NASA Astrophysics Data System (ADS)
Liu, Ying; Li, Shujun; Tian, Zhen; Chen, Huirong
2007-06-01
Based on a study of coastlines, this paper puts forward a new approach to their multiple representation: simulating the way humans think when generalizing maps, building appropriate mathematical models, describing the coastline graphically, and extracting various kinds of coastline shape information. Automatic coastline generalization is then carried out on the basis of knowledge rules and arithmetic operators. Representing coastline shape by building a Douglas binary tree over the curve reveals the shape character of the coastline both microscopically and macroscopically. The extracted information includes local characteristic points and their orientation, the curve structure, and topological traits; the curve structure can be divided into single curves and curve clusters. By specifying the knowledge rules of coastline generalization, the target scale, and the shape parameters, the automatic coastline generalization model is finally established. The multi-scale representation method presented in this paper has several strengths: it follows the human mode of thinking and preserves the essential character of the original curve, and the binary tree structure can control the similarity of the generalized coastline, avoid self-intersection, and maintain consistent topological relationships.
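The Douglas binary tree referred to above is closely related to the classic Douglas-Peucker simplification, whose recursion order implicitly defines such a tree, with the most prominent characteristic points nearest the root; a compact sketch follows (tolerance and test curve are arbitrary).

```python
import numpy as np

def douglas_peucker(points, tol):
    """Classic Douglas-Peucker line simplification: keep the interior
    point farthest from the chord, recurse on both halves."""
    pts = np.asarray(points, float)
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    seg = end - start
    seg_len = np.linalg.norm(seg)
    diff = pts[1:-1] - start
    # Perpendicular distance of every interior point to the chord.
    if seg_len > 0:
        d = np.abs(seg[0] * diff[:, 1] - seg[1] * diff[:, 0]) / seg_len
    else:
        d = np.linalg.norm(diff, axis=1)
    k = int(np.argmax(d)) + 1
    if d[k - 1] <= tol:
        return np.array([start, end])
    left = douglas_peucker(pts[:k + 1], tol)
    right = douglas_peucker(pts[k:], tol)
    return np.vstack([left[:-1], right])   # drop duplicated split point

t = np.linspace(0, 4 * np.pi, 500)
coast = np.column_stack([t, np.sin(t) + 0.1 * np.sin(9 * t)])
print(len(douglas_peucker(coast, tol=0.05)))   # far fewer than 500 points
```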
Visualization of Spatio-Temporal Relations in Movement Event Using Multi-View
NASA Astrophysics Data System (ADS)
Zheng, K.; Gu, D.; Fang, F.; Wang, Y.; Liu, H.; Zhao, W.; Zhang, M.; Li, Q.
2017-09-01
Spatio-temporal relations among movement events extracted from temporally varying trajectory data can provide useful information about the evolution of individual or collective movers, as well as their interactions with their spatial and temporal contexts. However, the pure statistical tools commonly used by analysts pose many difficulties, due to the large number of attributes embedded in multi-scale and multi-semantic trajectory data. The need for models that operate at multiple scales to search for relations at different locations in time and space, and to interpret intuitively what these relations mean, also presents challenges. Since analysts do not know where or when relevant spatio-temporal relations might emerge, these models must compute statistical summaries of multiple attributes at different granularities. In this paper, we propose a multi-view approach to visualizing the spatio-temporal relations among movement events. We describe a method for visualizing movement events and spatio-temporal relations that uses multiple displays. A visual interface is presented in which the user can interactively select or filter spatial and temporal extents to guide the knowledge discovery process. We also demonstrate how this approach can help analysts derive and explain the spatio-temporal relations of movement events from taxi trajectory data.
Spatial and Temporal Scaling of Thermal Infrared Remote Sensing Data
NASA Technical Reports Server (NTRS)
Quattrochi, Dale A.; Goel, Narendra S.
1995-01-01
Although remote sensing has a central role to play in the acquisition of synoptic data obtained at multiple spatial and temporal scales to facilitate our understanding of local and regional processes as they influence the global climate, the use of thermal infrared (TIR) remote sensing data in this capacity has received only minimal attention. This results from some fundamental challenges associated with employing TIR data collected at different space and time scales, either with the same or different sensing systems, and also from other problems that arise in applying a multiple-scaled approach to the measurement of surface temperatures. In this paper, we describe some of the more important problems associated with using TIR remote sensing data obtained at different spatial and temporal scales, examine why these problems appear as impediments to using multiple-scaled TIR data, and provide some suggestions for future research activities that may address these problems. We elucidate the fundamental concept of scale as it relates to remote sensing and explore how space and time relationships affect TIR data from a problem-dependency perspective. We also describe how linear and non-linear relationships between observations and parameters affect the quantitative analysis of TIR data. Some insight is given on how the atmosphere between target and sensor influences the accurate measurement of surface temperatures and how these effects are compounded in analyzing multiple-scaled TIR data. Lastly, we describe some of the challenges in modeling TIR data obtained at different space and time scales and discuss how multiple-scaled TIR data can be used to provide new and important information for measuring and modeling land-atmosphere energy balance processes.
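One of the linearity issues can be made concrete with the Stefan-Boltzmann law, L = sigma T^4: radiance aggregates linearly, temperature does not, so averaging sub-pixel temperatures before converting to radiance biases the coarse-scale result. A small numeric illustration in Python (synthetic temperatures, unit emissivity):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Sub-pixel kinetic temperatures within one coarse pixel (K)
t = np.array([280.0, 290.0, 300.0, 340.0])

# Correct coarse-pixel radiance: average the radiances (the linear quantity)
L_mean = (SIGMA * t**4).mean()

# Naive coarse-pixel radiance: radiance of the average temperature
L_naive = SIGMA * t.mean()**4

# Effective radiometric temperature implied by the true mean radiance
t_eff = (L_mean / SIGMA) ** 0.25

print(f"mean of radiances {L_mean:.1f} W/m^2, radiance of mean T {L_naive:.1f} W/m^2")
print(f"effective temperature {t_eff:.2f} K vs arithmetic mean {t.mean():.2f} K")
```

Because T^4 is convex, the effective temperature always exceeds the arithmetic mean, and the discrepancy grows with sub-pixel heterogeneity; this is one reason multi-scale TIR comparisons are problem-dependent.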
Natural convection in melt crystal growth - The influence of flow pattern on solute segregation
NASA Technical Reports Server (NTRS)
Brown, R. A.; Yamaguchi, Y.; Chang, C. J.
1982-01-01
The results of two lines of research aimed at calculating the structure of the flows driven by buoyancy in small-scale crystal growth systems and at understanding the coupling between these flows, the shape of the solidification interface, and dopant segregation in the crystal are reviewed. First, finite-element methods are combined with computer-aided methods for detecting multiple steady solutions to analyze the structure of the buoyancy-driven axisymmetric flows in a vertical cylinder heated from below. This system exhibits onset of convection, multiple steady flows, and loss of the primary stable flow beyond a critical value of the Rayleigh number. Second, results are presented for calculations of convection, melt/solid interface shape, and dopant segregation within a vertical ampoule with thermal boundary conditions that represent a prototype of the vertical Bridgman growth system.
NASA Astrophysics Data System (ADS)
Candia, Julián
2013-03-01
The multidimensional nature of many single-cell measurements (e.g. multiple markers measured simultaneously using Fluorescence-Activated Cell Sorting (FACS) technologies) offers unprecedented opportunities to unravel emergent phenomena that are governed by the cooperative action of multiple elements across different scales, from molecules and proteins to cells and organisms. We will discuss an integrated analysis framework to investigate multicolor FACS data from different perspectives: Singular Value Decomposition to achieve an effective dimensional reduction in the data representation, machine learning techniques to separate different patient classes and improve diagnosis, as well as a novel cell-similarity network analysis method to identify cell subpopulations in an unbiased manner. Besides FACS data, this framework is versatile: in this vein, we will demonstrate an application to the multidimensional single-cell shape analysis of healthy and prematurely aged cells.
Ding, Fangyu; Ge, Quansheng; Jiang, Dong; Fu, Jingying; Hao, Mengmeng
2017-01-01
Terror events can cause profound consequences for the whole society. Identifying regularities in terrorist attacks is important for global counter-terrorism strategy. In the present study, we demonstrate a novel method that uses relatively popular and robust machine learning methods to simulate the risk of terrorist attacks at a global scale, based on multiple data sources, long time series, and globally distributed datasets. Historical data from 1970 to 2015 were used to train and evaluate the machine learning models. The model performed fairly well in predicting the places where terror events might occur in 2015, with a success rate of 96.6%. Moreover, it is noteworthy that the model with optimized tuning-parameter values successfully predicted 2,037 terrorism event locations where a terrorist attack had never happened before. PMID:28591138
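As a rough indication of what such a pipeline looks like in practice, the sketch below trains a random forest on synthetic stand-ins for gridded covariates and historical event labels; the study's actual covariates, tuning, and validation design are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Synthetic stand-ins for gridded covariates (population, nightlights, relief,
# distance to roads/borders, past-event density, ...), one row per grid cell.
n_cells, n_covariates = 20000, 8
X = rng.normal(size=(n_cells, n_covariates))
# Synthetic event labels with some dependence on the covariates.
logit = 1.5 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] - 2.0
y = rng.random(n_cells) < 1.0 / (1.0 + np.exp(-logit))

# Train on one split, evaluate on a held-out split standing in for 2015.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=300, min_samples_leaf=5, n_jobs=-1,
                               random_state=0).fit(X_tr, y_tr)

risk = model.predict_proba(X_te)[:, 1]        # per-cell predicted attack risk
print("hold-out AUC:", round(roc_auc_score(y_te, risk), 3))
```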
Cloud Detection by Fusing Multi-Scale Convolutional Features
NASA Astrophysics Data System (ADS)
Li, Zhiwei; Shen, Huanfeng; Wei, Yancong; Cheng, Qing; Yuan, Qiangqiang
2018-04-01
Cloud detection is an important pre-processing step for the accurate application of optical satellite imagery. Recent studies indicate that deep learning achieves the best performance in image segmentation tasks. To boost the accuracy of cloud detection for multispectral imagery, especially imagery that contains only visible and near-infrared bands, we propose in this paper a deep-learning-based cloud detection method termed MSCN (multi-scale cloud net), which segments clouds by fusing multi-scale convolutional features. MSCN was trained on a global cloud-cover validation collection and tested on more than ten types of optical images with different resolutions. Experimental results show that MSCN has clear advantages in accuracy over the traditional multi-feature combined cloud detection method, especially in snow and other areas covered by bright non-cloud objects. MSCN also produced more detailed cloud masks than the deep cloud detection convolutional network used for comparison. The effectiveness of MSCN makes it promising for practical application to many kinds of optical imagery.
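The fuse-by-upsample-and-concatenate pattern at the heart of multi-scale feature fusion can be sketched in PyTorch. The toy module below is not the MSCN architecture, just the generic mechanism: feature maps from three scales are resampled to full resolution, concatenated, and classified per pixel.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusionNet(nn.Module):
    """Toy encoder that fuses feature maps from three scales for two-class
    (cloud / clear) per-pixel segmentation. Illustrative only."""
    def __init__(self, in_ch=4):                  # e.g. visible + NIR bands
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.enc3 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.head = nn.Conv2d(16 + 32 + 64, 2, 1)  # fuse, then 1x1 classifier

    def forward(self, x):
        f1 = self.enc1(x)                          # full resolution
        f2 = self.enc2(f1)                         # 1/2 resolution
        f3 = self.enc3(f2)                         # 1/4 resolution
        size = f1.shape[-2:]
        fused = torch.cat(
            [f1,
             F.interpolate(f2, size=size, mode="bilinear", align_corners=False),
             F.interpolate(f3, size=size, mode="bilinear", align_corners=False)],
            dim=1)
        return self.head(fused)                    # per-pixel class logits

logits = MultiScaleFusionNet()(torch.randn(1, 4, 64, 64))
print(logits.shape)   # torch.Size([1, 2, 64, 64])
```

The coarse branches contribute context (useful for telling bright snow from cloud), while the full-resolution branch preserves the mask detail mentioned above.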
Melt inclusion shapes: Timekeepers of short-lived giant magma bodies
Pamukcu, Ayla S.; Gualda, Guilherme A. R.; Bégué, Florence; ...
2015-09-24
The longevity of giant magma bodies in the Earth’s crust prior to eruption is poorly constrained, but recognition of short time scales by multiple methods suggests that the accumulation and eruption of these giant bodies may occur rapidly. We describe a new method that uses textures of quartz-hosted melt inclusions, determined using quantitative three-dimensional propagation phase-contrast X-ray tomography, to estimate quartz crystallization times and growth rates, and we compare the results to those from Ti diffusion profiles. We investigate three large-volume, high-silica rhyolite eruptions: the 240 ka Ohakuri-Mamaku and 26.5 ka Oruanui (Taupo Volcanic Zone, New Zealand), and the 760 ka Bishop Tuff (California, USA). Our results show that (1) longevity estimates from melt inclusion textures and Ti diffusion profiles are comparable, (2) quartz growth rates average ∼10⁻¹² m/s, and (3) quartz melt inclusions give decadal to centennial time scales, revealing that giant magma bodies can develop over notably short historical time scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch
2015-06-28
We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling’s iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear-scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.
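Hotelling's iterative inversion is the quadratically convergent update X <- X(2I - AX). A minimal dense-matrix sketch follows; the paper's sparse, filtered, linear-scaling variant is not reproduced.

```python
import numpy as np

def hotelling_inverse(A, X0, n_iter=8):
    """Approximate A^-1 by Hotelling's iteration X <- X (2I - A X).

    Converges quadratically when ||I - A X0|| < 1, which is why the
    preconditioner from a previous MD step makes a good starting guess X0."""
    I = np.eye(A.shape[0])
    X = X0.copy()
    for _ in range(n_iter):
        X = X @ (2.0 * I - A @ X)
    return X

rng = np.random.default_rng(1)
A = np.eye(50) + 0.01 * rng.normal(size=(50, 50))   # well-conditioned test matrix
X0 = np.eye(50)                                      # crude initial guess
X = hotelling_inverse(A, X0)
print("||I - A X|| =", np.linalg.norm(np.eye(50) - A @ X))
```

In the sparse setting described above, each matrix product is followed by dropping elements below a threshold, which is what turns the iteration into a linear-scaling operation.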
An Active Fire Temperature Retrieval Model Using Hyperspectral Remote Sensing
NASA Astrophysics Data System (ADS)
Quigley, K. W.; Roberts, D. A.; Miller, D.
2017-12-01
Wildfire is both an important ecological process and a dangerous natural threat to humans. In situ measurements of wildfire temperature are notoriously difficult to collect due to dangerous conditions. Imaging spectrometry data have the potential to provide some of the most accurate and most highly temporally resolved active fire temperature retrievals for monitoring and modeling. Recent studies on fire temperature retrieval have used Multiple Endmember Spectral Mixture Analysis applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) bands to model fire temperatures within the regions marked as containing fire, but these methods are less effective at coarser spatial resolutions, as linear mixing methods are degraded by saturation within the pixel. Assuming a distribution of temperatures within each pixel allows us to model pixels with an effective maximum and a likely minimum temperature, which permits a more robust approach to modeling temperature at different spatial scales. In this study, instrument-corrected radiance is forward-modeled for different ranges of temperatures, with weighted temperatures from an effective maximum to a likely minimum contributing to the total radiance of the modeled pixel. The effective maximum fire temperature is estimated by minimizing the Root Mean Square Error (RMSE) between modeled and measured fires. The model was tested using AVIRIS data collected over the 2016 Sherpa Fire in Santa Barbara County, California. While only in situ experimentation could confirm active fire temperatures, the fit of the data to the modeled radiance can be assessed, as well as the similarity of temperature distributions seen at different spatial resolutions. Results show that this model improves upon current modeling methods by producing similar effective temperatures at multiple spatial scales as well as a similar modeled area distribution of those temperatures.
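A toy version of the forward-modelling step can be written directly from the Planck function: pixel radiance is modelled as a mixture of blackbody spectra between a likely minimum and an effective maximum temperature, and the maximum is retrieved by minimizing RMSE against the measured spectrum. The uniform weighting below is a stand-in for the study's weighted temperature distribution, and the data are synthetic.

```python
import numpy as np

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann

def planck(wl_m, T):
    """Spectral radiance of a blackbody at temperature T (W sr^-1 m^-3)."""
    return (2 * H * C**2 / wl_m**5) / (np.exp(H * C / (wl_m * K * T)) - 1.0)

def pixel_radiance(wl_m, t_max, t_min=500.0, n=20):
    """Model pixel radiance as the mean of blackbody spectra distributed
    between a likely minimum and an effective maximum fire temperature."""
    temps = np.linspace(t_min, t_max, n)
    return planck(wl_m[None, :], temps[:, None]).mean(axis=0)

wl = np.linspace(1.0e-6, 2.5e-6, 50)       # SWIR wavelengths (m)
measured = pixel_radiance(wl, 1100.0)      # synthetic "measured" fire pixel

# Grid search for the effective maximum temperature minimizing RMSE
candidates = np.arange(600.0, 1600.0, 10.0)
rmse = [np.sqrt(np.mean((pixel_radiance(wl, t) - measured) ** 2))
        for t in candidates]
print("retrieved T_max =", candidates[int(np.argmin(rmse))], "K")
```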
Numerical Technology for Large-Scale Computational Electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharpe, R; Champagne, N; White, D
The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.
Chen, Xinguang; Wang, Yan; Li, Fang; Gong, Jie; Yan, Yaqiong
2015-01-01
Obtaining reliable and valid data on sensitive questions represents a longstanding challenge for public health, particularly HIV research. To overcome the challenge, we assessed a construal level theory (CLT)-based novel method. The method was previously established and pilot-tested using the Brief Sexual Openness Scale (BSOS). This scale consists of five items assessing attitudes toward premarital sex, multiple sexual partners, homosexuality, extramarital sex, and commercial sex, all rated on a standard 5-point Likert scale. In addition to self-assessment, the participants were asked to assess rural residents, urban residents, and foreigners. The self-assessment plus the assessments of the three other groups were all used as subconstructs of one latent construct: sexual openness. The method was validated with data from 1,132 rural-to-urban migrants (mean age = 32.5, SD = 7.9; 49.6% female) recruited in China. Consistent with CLT, the Cronbach alpha of the BSOS as a conventional tool increased with social distance, from .81 for self-assessment to .97 for assessing foreigners. In addition to a satisfactory fit of the data to a one-factor model (CFI = .94, TLI = .93, RMSEA = .08), a common factor was separated from the four perspective factors (i.e., migrants’ self-perspective and their perspectives of rural residents, urban residents and foreigners) through a trifactor modeling analysis (CFI = .95, TLI = .94, RMSEA = .08). Relative to its conventional form, the CLT-based BSOS was more reliable (alpha: .96 vs. .81) and more valid in predicting sexual desire, frequency of dating, age of first sex, multiple sexual partners and STD history. This novel technique can be used to assess sexual openness, and possibly other sensitive questions, among Chinese domestic migrants. PMID:26308336
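The reliability statistic underlying these comparisons is Cronbach's alpha, computed as k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch on synthetic 5-point Likert items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=500)                     # latent trait per respondent
# Five 5-point Likert items loading on the latent trait plus noise
items = np.clip(np.round(3 + latent[:, None]
                         + rng.normal(scale=0.8, size=(500, 5))), 1, 5)
print("alpha =", round(cronbach_alpha(items), 2))
```

Lowering the noise scale raises alpha, which is the mechanical analogue of the pattern reported above: more consistent responding across items (here, at greater social distance) yields higher reliability.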
Voormolen, Eduard H.J.; Wei, Corie; Chow, Eva W.C.; Bassett, Anne S.; Mikulis, David J.; Crawley, Adrian P.
2011-01-01
Voxel-based morphometry (VBM) and automated lobar region of interest (ROI) volumetry are comprehensive and fast methods to detect differences in overall brain anatomy on magnetic resonance images. However, VBM and automated lobar ROI volumetry have detected dissimilar gray matter differences within identical image sets in our own experience and in previous reports. To gain more insight into how diverging results arise, and to attempt to establish whether one method is superior to the other, we investigated how differences in spatial scale and in the need to statistically correct for multiple spatial comparisons influence the relative sensitivity of either technique to group differences in gray matter volumes. We assessed the performance of both techniques on a small dataset containing simulated gray matter deficits and additionally on a dataset of 22q11-deletion syndrome patients with schizophrenia (22q11DS-SZ) vs. matched controls. VBM was more sensitive to simulated focal deficits compared to automated ROI volumetry, and could detect global cortical deficits equally well. Moreover, theoretical calculations of VBM and ROI detection sensitivities to focal deficits showed that at increasing ROI size, ROI volumetry suffers more from loss in sensitivity than VBM. Furthermore, VBM and automated ROI volumetry found corresponding gray matter deficits in 22q11DS-SZ patients, except in the parietal lobe. Here, automated lobar ROI volumetry found a significant deficit only after a smaller subregion of interest was employed. Thus, sensitivity to focal differences is impaired relatively more by averaging over larger volumes in automated ROI methods than by the correction for multiple comparisons in VBM. These findings indicate that VBM is to be preferred over automated lobar-scale ROI volumetry for assessing gray matter volume differences between groups. PMID:19619660
Multiple network alignment via multiMAGNA+.
Vijayan, Vipin; Milenkovic, Tijana
2017-08-21
Network alignment (NA) aims to find a node mapping that identifies topologically or functionally similar network regions between molecular networks of different species. Analogous to genomic sequence alignment, NA can be used to transfer biological knowledge from well- to poorly-studied species between aligned network regions. Pairwise NA (PNA) finds similar regions between two networks, while multiple NA (MNA) can align more than two networks. We focus on MNA. Existing MNA methods aim to maximize total similarity over all aligned nodes (node conservation). Then, they evaluate alignment quality by measuring the amount of conserved edges, but only after the alignment is constructed. Directly optimizing edge conservation during alignment construction, in addition to node conservation, may result in superior alignments. Thus, we present a novel MNA method called multiMAGNA++ that can achieve this. Indeed, multiMAGNA++ outperforms or is on par with existing MNA methods, while often completing faster than they do. That is, multiMAGNA++ scales well to larger network data and can be parallelized effectively. During method evaluation, we also introduce new MNA quality measures to allow for fairer MNA method comparison than the existing alignment quality measures permit. MultiMAGNA++ code is available on the method's web page at http://nd.edu/~cone/multiMAGNA++/.
Unlocking Flexibility: Integrated Optimization and Control of Multienergy Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dall'Anese, Emiliano; Mancarella, Pierluigi; Monti, Antonello
Electricity, natural gas, water, and district heating/cooling systems are predominantly planned and operated independently. However, it is increasingly recognized that integrated optimization and control of such systems at multiple spatiotemporal scales can bring significant socioeconomic, operational efficiency, and environmental benefits. Accordingly, the concept of the multi-energy system is gaining considerable attention, with the overarching objectives of 1) uncovering fundamental gains (and potential drawbacks) that emerge from the integrated operation of multiple systems and 2) developing holistic yet computationally affordable optimization and control methods that maximize operational benefits, while 3) acknowledging intrinsic interdependencies and quality-of-service requirements for each provider.
An integrated data model to estimate spatiotemporal occupancy, abundance, and colonization dynamics
Williams, Perry J.; Hooten, Mevin B.; Womble, Jamie N.; Esslinger, George G.; Bower, Michael R.; Hefley, Trevor J.
2017-01-01
Ecological invasions and colonizations occur dynamically through space and time. Estimating the distribution and abundance of colonizing species is critical for efficient management or conservation. We describe a statistical framework for simultaneously estimating spatiotemporal occupancy and abundance dynamics of a colonizing species. Our method accounts for several issues that are common when modeling spatiotemporal ecological data including multiple levels of detection probability, multiple data sources, and computational limitations that occur when making fine-scale inference over a large spatiotemporal domain. We apply the model to estimate the colonization dynamics of sea otters (Enhydra lutris) in Glacier Bay, in southeastern Alaska.
Patterns of disturbance at multiple scales in real and simulated landscapes
Giovanni Zurlini; Kurt H. Riitters; Nicola Zaccarelli; Irene Petrosoillo
2007-01-01
We describe a framework to characterize and interpret the spatial patterns of disturbances at multiple scales in socio-ecological systems. Domains of scale are defined in pattern metric space and mapped in geographic space, which can help to understand how anthropogenic disturbances might impact biodiversity through habitat modification. The approach identifies typical...
Imaging multi-scale dynamics in vivo with spiral volumetric optoacoustic tomography
NASA Astrophysics Data System (ADS)
Deán-Ben, X. Luís.; Fehm, Thomas F.; Ford, Steven J.; Gottschalk, Sven; Razansky, Daniel
2017-03-01
Imaging dynamics in living organisms is essential for the understanding of biological complexity. While multiple imaging modalities are often required to cover both microscopic and macroscopic spatial scales, dynamic phenomena may also extend over different temporal scales, necessitating the use of different imaging technologies based on the trade-off between temporal resolution and effective field of view. Optoacoustic (photoacoustic) imaging has been shown to offer the exclusive capability to link multiple spatial scales ranging from organelles to entire organs of small animals. Yet, efficient visualization of multi-scale dynamics remained difficult with state-of-the-art systems due to inefficient trade-offs between image acquisition and effective field of view. Herein, we introduce a spiral volumetric optoacoustic tomography (SVOT) technique that provides spectrally-enriched high-resolution optical absorption contrast across multiple spatio-temporal scales. We demonstrate that SVOT can be used to monitor various in vivo dynamics, from video-rate volumetric visualization of cardiac-associated motion in whole organs to high-resolution imaging of pharmacokinetics in larger regions. The multi-scale dynamic imaging capability thus emerges as a powerful and unique feature of the optoacoustic technology that adds to the multiple advantages of this technology for structural, functional and molecular imaging.
Physical Activity and Its Correlates in Youth with Multiple Sclerosis.
Grover, Stephanie A; Sawicki, Carolyn P; Kinnett-Hopkins, Dominique; Finlayson, Marcia; Schneiderman, Jane E; Banwell, Brenda; Till, Christine; Motl, Robert W; Yeh, E Ann
2016-12-01
To investigate physical activity levels in youth with multiple sclerosis and monophasic acquired demyelinating syndromes (mono-ADS; i.e., children without relapsing disease) compared with healthy controls, and to determine factors that contribute to engagement in physical activity. We hypothesized that greater physical activity goal setting and physical activity self-efficacy would be associated with greater levels of vigorous physical activity in youth with multiple sclerosis. A total of 68 consecutive patients (27 multiple sclerosis, 41 mono-ADS) and 37 healthy controls completed fatigue, depression, Physical Activity Self-Efficacy Scale, perceived disability, Exercise Goal-Setting Scale, and physical activity questionnaires, and wore an accelerometer for 7 days. All patients had no ambulatory limitations (Expanded Disability Status Scale scores all <4). Youth with multiple sclerosis engaged in fewer minutes per day of vigorous (P = .009) and moderate-and-vigorous (P = .048) physical activity than did patients with mono-ADS and healthy controls. A lower proportion of the group with multiple sclerosis (63%) reported participating in any strenuous physical activity than the mono-ADS (85%) and healthy control (89%) groups (P = .020). When we adjusted for age and sex, the Physical Activity Self-Efficacy Scale and Exercise Goal-Setting Scale were associated positively with vigorous physical activity in the group with multiple sclerosis. Fatigue and depression did not predict physical activity or accelerometry metrics. Youth with multiple sclerosis participate in less physical activity than their counterparts with mono-ADS and healthy controls. Physical activity self-efficacy and exercise goal setting serve as potentially modifiable correlates of physical activity, and are measures suited to future interventions aimed at increasing physical activity in youth with multiple sclerosis.
NASA Astrophysics Data System (ADS)
Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao
2018-04-01
In this paper, a statistical forecast model using a time-scale decomposition method is established to make seasonal predictions of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). The method decomposes the rainfall over the MLYRV into three time-scale components, namely, an interannual component with periods of less than 8 years, an interdecadal component with periods of 8 to 30 years, and an interdecadal component with periods of more than 30 years. Predictors are then selected for the three time-scale components of the FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR separately. The results show that this forecast model can capture the interannual and interdecadal variation of the FPR. A hindcast of the FPR for the 14 years from 2001 to 2014 shows that the FPR is predicted successfully in 11 of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. The statistical forecast model using the time-scale decomposition technique therefore has good skill and application value in the operational prediction of the FPR over the MLYRV.
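The pipeline can be caricatured in a few lines, assuming Butterworth band-pass filters stand in for the paper's decomposition and synthetic predictor series stand in for those selected by correlation analysis; the names and numbers below are illustrative only.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LinearRegression

def bandpass(x, low, high, fs=1.0, order=3):
    """Zero-phase band-pass keeping frequencies between low and high (1/yr)."""
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, x)

rng = np.random.default_rng(0)
years = 60
rain = rng.normal(size=years).cumsum() * 0.1 + rng.normal(size=years)  # toy FPR

# Three time-scale components (interannual, interdecadal, long-period)
comps = [bandpass(rain, 1 / 8, 0.49),              # periods < 8 yr
         bandpass(rain, 1 / 30, 1 / 8),            # periods 8-30 yr
         rain - bandpass(rain, 1 / 30, 0.49)]      # periods > 30 yr

# One regression per component on its own (synthetic) predictors, then sum
prediction = np.zeros(years)
for comp in comps:
    predictors = comp[:, None] * 0.8 + rng.normal(scale=0.3, size=(years, 1))
    prediction += LinearRegression().fit(predictors, comp).predict(predictors)

print("correlation with observed:", round(np.corrcoef(prediction, rain)[0, 1], 2))
```

Fitting each band separately lets each regression use the predictors relevant at that time scale, which is the stated advantage over the undecomposed scheme.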
On WKB expansions for Alfven waves in the solar wind
NASA Technical Reports Server (NTRS)
Hollweg, Joseph V.
1990-01-01
The WKB expansion for 'toroidal' Alfven waves in the solar wind, as described by the equations of Heinemann and Olbert (1980), is examined. The method of multiple scales (Nayfeh, 1981) is used to obtain a uniform expansion. It is shown that the WKB expansion used by Belcher (1971) and Hollweg (1973) for Alfven waves in the solar wind is nonuniformly convergent.
ERIC Educational Resources Information Center
Garrett Dikkers, Amy
2015-01-01
This mixed-method study reports perspectives of virtual school teachers on the impact of online teaching on their face-to-face practice. Data from a large-scale survey of teachers in the North Carolina Virtual Public School (n = 214), focus groups (n = 7), and interviews (n = 5) demonstrate multiple intersections between online and face-to-face…
Estimating watershed evapotranspiration across the United States using multiple methods
Ge Sun; Shanlei Sun; Jingfeng Xiao; Peter Caldwell; Devendra Amatya; Suat Irmak; Prasanna H. Gowda; Sudhanshu Panda; Steve McNulty; Yang Zhang
2016-01-01
Evapotranspiration (ET) is the largest component of the watershed water balance in the United States, second only to precipitation. ET is closely coupled with ecosystem carbon and energy fluxes, affects flooding or drought magnitude, and is also a good predictor of biodiversity at a regional scale. Thus, accurately estimating ET is of paramount importance for quantifying the effects...
Thomas C. Edwards; Gretchen G. Moisen; Tracey S. Frescino; Joshua L. Lawler
2002-01-01
We describe our collective efforts to develop and apply methods for using FIA data to model forest resources and wildlife habitat. Our work demonstrates how flexible regression techniques, such as generalized additive models, can be linked with spatially explicit environmental information for the mapping of forest type and structure. We illustrate how these maps of...
Large scale study of multiple-molecule queries
2009-01-01
Background In ligand-based screening, as well as in other chemoinformatics applications, one seeks to effectively search large repositories of molecules in order to retrieve molecules that are similar to a query, typically a single lead molecule. In some cases, however, multiple molecules from the same family are available to seed the query and search for other members of the same family. Multiple-molecule query methods have been less studied than single-molecule query methods. Furthermore, previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large publicly available data sets and background data sets. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter-free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set using several performance metrics, including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule against a family by the maximum similarity, or minimum ranking, obtained across the family. One new parameterized method introduced in this study and two previously defined methods (the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD)) outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion Fourteen methods for multiple-molecule querying of chemical databases, including the novel methods ETD and TPD, are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
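The two best parameter-free methods are simple enough to state in code. The sketch below uses set-based fingerprints as a stand-in for real chemical fingerprints; the scoring logic (maximum similarity, or best rank, across the query family) matches the definitions given above.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints given as sets of on-bits."""
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

def max_sim(query_family, database):
    """MAX-SIM: score each database molecule by its best similarity to any
    member of the multi-molecule query family."""
    return [max(tanimoto(q, mol) for q in query_family) for mol in database]

def min_rank(query_family, database):
    """MIN-RANK: rank the database once per query molecule, then keep each
    molecule's best (smallest) rank across all single-molecule queries."""
    n = len(database)
    best = [n] * n
    for q in query_family:
        order = sorted(range(n), key=lambda i: -tanimoto(q, database[i]))
        for rank, i in enumerate(order):
            best[i] = min(best[i], rank)
    return best

family = [{1, 2, 3, 4}, {2, 3, 4, 5}]        # toy on-bit fingerprints
db = [{1, 2, 3}, {7, 8, 9}, {3, 4, 5, 6}]
print(max_sim(family, db))    # high scores for family-like molecules
print(min_rank(family, db))   # low ranks for family-like molecules
```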
Comparative water use of native and invasive plants at multiple scales: a global meta-analysis.
Cavaleri, Molly A; Sack, Lawren
2010-09-01
Ecohydrology and invasive ecology have become increasingly important in the context of global climate change. This study presents the first in-depth analysis of the water use of invasive and native plants of the same growth form at multiple scales: leaf, plant, and ecosystem. We reanalyzed data for several hundred native and invasive species from over 40 published studies worldwide to glean global trends and to highlight how patterns vary depending on both scale and climate. We analyzed all pairwise combinations of co-occurring native and invasive species for higher comparative resolution of the likelihood of an invasive species using more water than a native species and tested for significance using bootstrap methods. At each scale, we found several-fold differences in water use between specific paired invasive and native species. At the leaf scale, we found a strong tendency for invasive species to have greater stomatal conductance than native species. At the plant scale, however, natives and invasives were equally likely to have the higher sap flow rates. Available data were much fewer for the ecosystem scale; nevertheless, we found that invasive-dominated ecosystems were more likely to have higher sap flow rates per unit ground area than native-dominated ecosystems. Ecosystem-scale evapotranspiration, on the other hand, was equally likely to be greater for systems dominated by invasive and native species of the same growth form. The inherent disconnects in the determination of water use when changing scales from leaf to plant to ecosystem reveal hypotheses for future studies and a critical need for more ecosystem-scale water use measurements in invasive- vs. native-dominated systems. The differences in water use of native and invasive species also depended strongly on climate, with the greater water use of invasives enhanced in hotter, wetter climates at the coarser scales.
DL-sQUAL: A Multiple-Item Scale for Measuring Service Quality of Online Distance Learning Programs
ERIC Educational Resources Information Center
Shaik, Naj; Lowe, Sue; Pinegar, Kem
2006-01-01
Education is a service with a multiplicity of student interactions over time and across multiple touch points. Quality teaching needs to be supplemented by consistent, high-quality supporting services for programs to succeed in the competitive distance learning landscape. ServQual and e-SQ scales have been proposed for measuring quality of traditional…
Cross-Domain Multi-View Object Retrieval via Multi-Scale Topic Models.
Hong, Richang; Hu, Zhenzhen; Wang, Ruxin; Wang, Meng; Tao, Dacheng
2016-09-27
The increasing number of 3D objects in various applications has increased the requirement for effective and efficient 3D object retrieval methods, which have attracted extensive research efforts in recent years. Existing works mainly focus on how to extract features and conduct object matching. With increasing applications, 3D objects come from different domains, and in such circumstances how to conduct cross-domain retrieval becomes more important. To address this issue, we propose in this paper a multi-view object retrieval method using multi-scale topic models. In our method, multiple views are first extracted from each object, and dense visual features are extracted to represent each view. To represent the 3D object, multi-scale topic models are employed to extract the hidden relationships among these features with respect to varying numbers of topics in the topic model. In this way, each object can be represented by a set of bag-of-topics vectors. To compare objects, we first conduct topic clustering for the basic topics from the two datasets and then generate a common topic dictionary for the new representation. The two objects can then be aligned to the same common feature space for comparison. To evaluate the performance of the proposed method, experiments are conducted on two datasets. The 3D object retrieval results and comparisons with existing methods demonstrate the effectiveness of the proposed method.
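The multi-scale topic representation (fit topic models at several topic counts and concatenate the per-object topic distributions) can be sketched with scikit-learn. The visual-word counts below are synthetic stand-ins for the dense features quantized from each object's views, and the scale choices are arbitrary:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Each row: one 3D object's histogram of quantized visual words over its views
# (a synthetic stand-in for dense features extracted from multiple views).
n_objects, vocab = 40, 200
counts = rng.poisson(2.0, size=(n_objects, vocab))

# Fit topic models at several scales (topic numbers) and concatenate the
# per-object topic distributions into one multi-scale representation.
scales = [5, 10, 20]
reps = [LatentDirichletAllocation(n_components=k, random_state=0)
        .fit_transform(counts) for k in scales]
multi_scale = np.hstack(reps)        # shape: (n_objects, 5 + 10 + 20)

print(multi_scale.shape)             # (40, 35)
```

Varying the topic number captures structure at different granularities, which is what makes the concatenated vector a "multi-scale" descriptor.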
Scheffe, Richard D; Strum, Madeleine; Phillips, Sharon B; Thurman, James; Eyth, Alison; Fudge, Steve; Morris, Mark; Palma, Ted; Cook, Richard
2016-11-15
A hybrid air quality model has been developed and applied to estimate annual concentrations of 40 hazardous air pollutants (HAPs) across the continental United States (CONUS) to support the 2011 calendar year National Air Toxics Assessment (NATA). By combining a chemical transport model (CTM) with a Gaussian dispersion model, both reactive and nonreactive HAPs are accommodated across local to regional spatial scales, through a multiplicative technique designed to improve mass conservation relative to previous additive methods. The broad scope of multiple pollutants, capturing regional- to local-scale patterns across a vast spatial domain, is precedent-setting within the air toxics community. The hybrid design exhibits improved performance relative to the stand-alone CTM and dispersion model. However, model performance varies widely across pollutant categories, and definitive, quantitative performance assessments are hampered by a limited observation base and challenged by the multiple physical and chemical attributes of HAPs. Formaldehyde and acetaldehyde are the dominant HAP concentration and cancer-risk drivers, characterized by strong regional signals associated with naturally emitted carbonyl precursors, enhanced in urban transport corridors with strong mobile-source-sector emissions. The multiple-pollutant emission characteristics of combustion-dominated source sectors create largely similar concentration patterns across the majority of HAPs. However, reactive carbonyls exhibit significantly less spatial variability than nonreactive HAPs across the CONUS.
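A mass-conserving multiplicative combination can be illustrated schematically: within each CTM grid cell, the fine-scale dispersion field supplies the spatial texture while its cell mean is divided out, so the regional model's cell-mean concentration is preserved. This sketch is an assumption about the general form of such hybrids, not the NATA implementation:

```python
import numpy as np

def multiplicative_hybrid(ctm_cell_mean, dispersion_fine):
    """Scale a fine dispersion field so the cell retains the CTM cell mean.

    ctm_cell_mean   : concentration of one CTM grid cell (scalar)
    dispersion_fine : dispersion-model concentrations at receptors in the cell
    """
    texture = dispersion_fine / dispersion_fine.mean()  # unit-mean local texture
    return ctm_cell_mean * texture                      # cell mean preserved

ctm = 2.0                                   # ug/m3, regional-scale concentration
disp = np.array([0.5, 1.0, 4.0, 2.5])       # hypothetical local receptors
hybrid = multiplicative_hybrid(ctm, disp)
print(hybrid, hybrid.mean())                # receptor values; mean stays 2.0
```

An additive correction, by contrast, can push individual receptors negative and shifts the cell total; dividing out the mean avoids both problems, which is one reading of the mass-conservation claim above.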
Tsugawa, Hiroshi; Arita, Masanori; Kanazawa, Mitsuhiro; Ogiwara, Atsushi; Bamba, Takeshi; Fukusaki, Eiichiro
2013-05-21
We developed a new software program, MRMPROBS, for widely targeted metabolomics using the large-scale multiple reaction monitoring (MRM) mode. This strategy has become increasingly popular for the simultaneous analysis of up to several hundred metabolites with high sensitivity, selectivity, and quantitative capability. However, the traditional approach of assessing measured metabolomics data without probabilistic criteria is not only time-consuming but also often subjective and ad hoc. Our program overcomes these problems by detecting and identifying metabolites automatically, separating isomeric metabolites, and removing background noise using a probabilistic score defined as the odds ratio from an optimized multivariate logistic regression model. The software also provides a user-friendly graphical interface to curate and organize data matrices and to apply principal component analyses and statistical tests. As a demonstration, we conducted a widely targeted metabolome analysis (152 metabolites) of propagating Saccharomyces cerevisiae measured at 15 time points by gas and liquid chromatography coupled to triple quadrupole mass spectrometry. MRMPROBS is a useful and practical tool for the assessment of large-scale MRM data, applicable to any instrument or experimental condition.
Maurice, Corinne Ferrier; Turnbaugh, Peter James
2013-01-01
Humans are home to complex microbial communities, whose aggregate genomes and their encoded metabolic activities are referred to as the human microbiome. Recently, researchers have begun to appreciate that different human body habitats and the activities of their resident microorganisms can be better understood in ecological terms, as a range of spatial scales encompassing single cells, guilds of microorganisms responsive to a similar substrate, microbial communities, body habitats, and host populations. However, the bulk of the work to date has focused on studies of culturable microorganisms in isolation or on DNA sequencing-based surveys of microbial diversity in small to moderately sized cohorts of individuals. Here, we discuss recent work that highlights the potential for assessing the human microbiome at a range of spatial scales, and for developing novel techniques that bridge multiple levels: for example, through the combination of single cell methods and metagenomic sequencing. These studies promise to not only provide a much-needed epidemiological and ecological context for mechanistic studies of culturable and genetically tractable microorganisms, but may also lead to the discovery of fundamental rules that govern the assembly and function of host-associated microbial communities. PMID:23550823
Time-localized wavelet multiple regression and correlation
NASA Astrophysics Data System (ADS)
Fernández-Macho, Javier
2018-02-01
This paper extends wavelet methodology to handle comovement dynamics of multivariate time series via moving weighted regression on wavelet coefficients. The concept of wavelet local multiple correlation is used to produce a single set of multiscale correlations along time, in contrast with the large number of wavelet correlation maps that need to be compared when standard pairwise wavelet correlations with rolling windows are used. The spectral properties of weight functions are also investigated, and it is argued that some common time windows, such as the usual rectangular rolling window, are not satisfactory on these grounds. The method is illustrated with a multiscale analysis of the comovements of Eurozone stock markets during this century. It is shown that the evolution of the correlation structure in these markets has been far from homogeneous both along time and across timescales, featuring an acute divide at about the quarterly scale. At longer scales, evidence from the long-term correlation structure can be interpreted as stable perfect integration among Euro stock markets. On the other hand, at intra-month and intra-week scales, the short-term correlation structure has been clearly evolving along time, experiencing a sharp increase during financial crises which may be interpreted as evidence of financial 'contagion'.
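The flavour of the method can be reproduced with PyWavelets: decompose each series with a stationary wavelet transform (so coefficients stay time-aligned), then compute a moving-window correlation at each scale. The paper's weighting scheme and multivariate local multiple correlation are simplified here to a pairwise rolling Pearson correlation on synthetic data:

```python
import numpy as np
import pywt

def rolling_corr(x, y, win):
    """Centred moving-window Pearson correlation of two equal-length series."""
    half = win // 2
    out = np.full(len(x), np.nan)
    for t in range(half, len(x) - half):
        out[t] = np.corrcoef(x[t-half:t+half+1], y[t-half:t+half+1])[0, 1]
    return out

rng = np.random.default_rng(0)
n = 512
common = np.cumsum(rng.normal(size=n))              # shared long-run component
a = common + rng.normal(scale=2.0, size=n)          # two "market" series
b = common + rng.normal(scale=2.0, size=n)

# Stationary wavelet transform: one time-aligned coefficient series per scale
coeffs_a = pywt.swt(a, "db4", level=4)              # list of (cA, cD) pairs
coeffs_b = pywt.swt(b, "db4", level=4)

for lvl, ((_, da), (_, db_)) in enumerate(zip(coeffs_a, coeffs_b), start=1):
    corr = rolling_corr(da, db_, win=65)
    print(f"scale {lvl}: mean local correlation {np.nanmean(corr):.2f}")
```

The shared component lives at coarse scales, so the printed local correlations rise with scale, a miniature of the timescale divide described above.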
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramuhalli, Pradeep; Roy, Surajit; Hirt, Evelyn H.
2014-09-12
This report describes research results to date in support of the integration and demonstration of diagnostics technologies for prototypical AdvSMR passive components (to establish condition indices for monitoring) with model-based prognostics methods. The focus of the PHM methodology and algorithm development in this study is at the localized scale. Multiple localized measurements of material condition (using advanced nondestructive measurement methods), along with available measurements of the stressor environment, enhance the performance of localized diagnostics and prognostics of passive AdvSMR components and systems.
Gandolfi, Marialuisa; Geroin, Christian; Picelli, Alessandro; Munari, Daniele; Waldner, Andreas; Tamburin, Stefano; Marchioretto, Fabio; Smania, Nicola
2014-01-01
Background: Extensive research on both healthy subjects and patients with central nervous system damage has elucidated a crucial role of postural adjustment reactions and central sensory integration processes in generating and “shaping” locomotor function, respectively. Whether robotic-assisted gait devices might improve these functions in multiple sclerosis (MS) patients has not been fully investigated in the literature. Purpose: The aim of this study was to compare the effectiveness of end-effector robot-assisted gait training (RAGT) and sensory integration balance training (SIBT) in improving walking and balance performance in patients with MS. Methods: Twenty-two patients with MS (EDSS: 1.5–6.5) were randomly assigned to two groups. The RAGT group (n = 12) underwent end-effector system training. The SIBT group (n = 10) underwent specific balance exercises. Each patient received twelve 50-min treatment sessions (2 days/week). A blinded rater evaluated patients before and after treatment as well as 1 month post-treatment. Primary outcomes were walking speed and the Berg Balance Scale. Secondary outcomes were the Activities-specific Balance Confidence Scale, Sensory Organization Balance Test, stabilometric assessment, Fatigue Severity Scale, cadence, step length, single and double support time, and Multiple Sclerosis Quality of Life-54. Results: Between-group comparisons showed no significant differences on primary and secondary outcome measures over time. Within-group comparisons showed significant improvements in both groups on the Berg Balance Scale (P = 0.001). Changes approaching significance were found on gait speed (P = 0.07) only in the RAGT group. Significant changes in balance task-related domains during standing and walking conditions were found in the SIBT group. Conclusion: Balance disorders in patients with MS may be ameliorated by RAGT and by SIBT. PMID:24904361
Integrating neuroinformatics tools in TheVirtualBrain
Woodman, M. Marmaduke; Pezard, Laurent; Domide, Lia; Knock, Stuart A.; Sanz-Leon, Paula; Mersmann, Jochen; McIntosh, Anthony R.; Jirsa, Viktor
2014-01-01
TheVirtualBrain (TVB) is a neuroinformatics Python package representing the convergence of clinical, systems, and theoretical neuroscience in the analysis, visualization and modeling of neural and neuroimaging dynamics. TVB is composed of a flexible simulator for neural dynamics measured across scales from local populations to large-scale dynamics measured by electroencephalography (EEG), magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI), and core analytic and visualization functions, all accessible through a web browser user interface. A datatype system modeling neuroscientific data ties together these pieces with persistent data storage, based on a combination of SQL and HDF5. These datatypes combine with adapters allowing TVB to integrate other algorithms or computational systems. TVB provides infrastructure for multiple projects and multiple users, possibly participating under multiple roles. For example, a clinician might import patient data to identify several potential lesion points in the patient's connectome. A modeler, working on the same project, tests these points for viability through whole brain simulation, based on the patient's connectome, and subsequent analysis of dynamical features. TVB also drives research forward: the simulator itself represents the culmination of several simulation frameworks in the modeling literature. The availability of the numerical methods, set of neural mass models and forward solutions allows for the construction of a wide range of brain-scale simulation scenarios. This paper briefly outlines the history and motivation for TVB, describing the framework and simulator, giving usage examples in the web UI and Python scripting. PMID:24795617
Michael S. Mitchell; Scott H. Rutzmoser; T. Bently Wigley; Craig Loehle; John A. Gerwin; Patrick D. Keyser; Richard A. Lancia; Roger W. Perry; Christopher L. Reynolds; Ronald E. Thill; Robert Weih; Don White; Petra Bohall Wood
2006-01-01
Little is known about factors that structure biodiversity on landscape scales, yet current land management protocols, such as forest certification programs, place an increasing emphasis on managing for sustainable biodiversity at landscape scales. We used a replicated landscape study to evaluate relationships between forest structure and avian diversity at both stand...
An Eulerian time filtering technique to study large-scale transient flow phenomena
NASA Astrophysics Data System (ADS)
Vanierschot, Maarten; Persoons, Tim; van den Bulck, Eric
2009-10-01
Unsteady fluctuating velocity fields can contain large-scale periodic motions with frequencies well separated from those of turbulence. Examples are the wake behind a cylinder or the precessing vortex core in a swirling jet. These turbulent flow fields contain large-scale, low-frequency oscillations which are obscured by turbulence, making it impossible to identify them directly. In this paper, we present an Eulerian time filtering (ETF) technique to extract the large-scale motions from unsteady, statistically non-stationary velocity fields or from flow fields with multiple phenomena that have sufficiently separated spectral content. The ETF method is based on non-causal time filtering of the velocity records in each point of the flow field. It is shown that the ETF technique gives good results, similar to the ones obtained by the phase-averaging method. In this paper, not only the influence of the temporal filter is checked, but also parameters such as the cut-off frequency and the sampling frequency of the data are investigated. The technique is validated on a selected set of time-resolved stereoscopic particle image velocimetry measurements, such as the initial region of an annular jet and the transition between flow patterns in an annular jet. The major advantage of the ETF method in the extraction of large scales is that it is computationally less expensive and requires less measurement time than other extraction methods. The technique is therefore suitable in the startup phase of an experiment or in a measurement campaign where several experiments are needed, such as parametric studies.
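Non-causal time filtering of each point's velocity record is exactly what a zero-phase filter such as scipy's filtfilt provides. A minimal sketch of ETF on a PIV-like velocity field (synthetic data; the cut-off is placed between the slow coherent motion and the turbulence):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def etf(velocity, f_cut, fs, order=4):
    """Eulerian time filtering: a zero-phase (non-causal) low-pass filter
    applied to the time record of every point in the field.

    velocity : array (n_time, ny, nx) of one velocity component
    f_cut    : cut-off separating large scales from turbulence (Hz)
    fs       : sampling frequency of the time-resolved PIV data (Hz)
    """
    b, a = butter(order, f_cut, btype="low", fs=fs)
    return filtfilt(b, a, velocity, axis=0)   # filter along time, per point

# Synthetic field: 1 Hz large-scale oscillation buried in broadband noise
rng = np.random.default_rng(0)
fs, n_t = 500.0, 2000                          # 500 Hz sampling, 4 s record
t = np.arange(n_t) / fs
field = (np.sin(2 * np.pi * 1.0 * t)[:, None, None]          # coherent motion
         + rng.normal(scale=1.0, size=(n_t, 16, 16)))        # "turbulence"

large_scale = etf(field, f_cut=5.0, fs=fs)     # recovers the 1 Hz oscillation
print(large_scale.shape)
```

Because filtfilt runs the filter forward and backward, the extracted large-scale field has no phase lag, which is what makes the result comparable to phase averaging without requiring a phase reference.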
Multiple Illuminant Colour Estimation via Statistical Inference on Factor Graphs.
Mutimbu, Lawrence; Robles-Kelly, Antonio
2016-08-31
This paper presents a method to recover a spatially varying illuminant colour estimate from scenes lit by multiple light sources. Starting with the image formation process, we formulate the illuminant recovery problem in a statistical, data-driven setting. To do this, we use a factor graph defined across the scale space of the input image. In the graph, we utilise a set of illuminant prototypes computed using a data-driven approach. As a result, our method delivers a pixelwise illuminant colour estimate without requiring illuminant libraries or user input. The use of a factor graph also allows the illuminant estimates to be recovered via a maximum a posteriori (MAP) inference process. Moreover, we compute the probability marginals by performing a Delaunay triangulation on our factor graph. We illustrate the utility of our method for pixelwise illuminant colour recovery on widely available datasets and compare against a number of alternatives. We also show sample colour correction results on real-world images.
Classification of Farmland Landscape Structure in Multiple Scales
NASA Astrophysics Data System (ADS)
Jiang, P.; Cheng, Q.; Li, M.
2017-12-01
Farmland is one of the basic terrestrial resources that support the development and survival of human beings and thus plays a crucial role in the national security of every country. Pattern change is the intuitive spatial representation of variation in the scale and quality of farmland. Through the development of characteristic spatial shapes, and through changes in system structure and function, farmland landscape patterns can indicate landscape health. It remains difficult, however, to perform spatially explicit analyses of landscape-pattern change that reflect variations in farmland landscape structure using index models. Drawing on spatial properties such as location and adjacency relations, distance decay, and the fringe effect, and applying the patch-corridor-matrix model, this study defines a type system of farmland landscape structure at the national, provincial, and city levels. On this basis, a pixel-scale classification model of farmland landscape-structure types is developed and validated using mathematical-morphology concepts and spatial-analysis methods. The laws governing farmland landscape-pattern change at multiple scales are then analyzed from the perspectives of spatial heterogeneity, spatio-temporal evolution, and function transformation. The results show that the classification model of farmland landscape-structure types can reflect farmland landscape-pattern change and its effects on farmland production function. Moreover, farmland landscape change at different scales displays significant disparities in zonality, both within specific regions and across urban-rural areas.
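The morphological flavour of such a pixel-scale structure classification can be sketched with scipy: erosion of the farmland mask yields core (matrix-like) pixels, and the remainder splits into edges of cored regions and small isolated patches. This is a generic morphological spatial-pattern sketch, not the paper's full type system:

```python
import numpy as np
from scipy import ndimage

def classify_structure(farmland, edge_width=1):
    """Label each farmland pixel as isolated patch (1), edge (2), or core (3).

    farmland : boolean array, True where the pixel is farmland
    """
    core = ndimage.binary_erosion(farmland, iterations=edge_width)
    labels, n = ndimage.label(farmland)            # connected farmland regions
    has_core = np.zeros(n + 1, dtype=bool)
    has_core[np.unique(labels[core])] = True       # regions containing core
    out = np.zeros(farmland.shape, dtype=np.uint8) # 0 = non-farmland
    out[farmland & has_core[labels]] = 2           # edge of a cored region
    out[core] = 3                                  # core (matrix) farmland
    out[farmland & ~has_core[labels]] = 1          # small isolated patch
    return out

mask = np.zeros((12, 12), dtype=bool)
mask[1:8, 1:8] = True                              # large block -> core + edge
mask[10, 10] = True                                # lone pixel -> isolated patch
print(classify_structure(mask))
```

Repeating the classification with larger `edge_width` values (or on aggregated masks) is one simple way to expose the scale dependence of the structure types discussed above.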
A boundary condition for layer to level ocean model interaction
NASA Astrophysics Data System (ADS)
Mask, A.; O'Brien, J.; Preller, R.
2003-04-01
A radiation boundary condition based on vertical normal modes is introduced to allow a physical transition between nested/coupled ocean models that have differing vertical structure and/or differing physics. In this particular study, a fine-resolution regional/coastal sigma-coordinate Naval Coastal Ocean Model (NCOM) has been successfully nested within a coarse-resolution (in the horizontal and vertical) basin-scale NCOM and a coarse-resolution basin-scale Navy Layered Ocean Model (NLOM). Both of these models were developed at the Naval Research Laboratory (NRL) at Stennis Space Center, Mississippi, USA. This new method, which decomposes the vertical structure of the models into barotropic and baroclinic modes, gives improved results in the coastal domain over Orlanski radiation boundary conditions for the test cases. The principal reason for the improvement is that the radiation boundary condition is applied to each mode individually; the packet of information passing through the boundary is therefore allowed to have multiple phase speeds instead of a single phase speed. Allowing multiple phase speeds reduces boundary reflections, thus improving results.
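The essence of a mode-by-mode radiation condition can be sketched in a few lines: project the boundary and first-interior profiles onto the vertical normal modes, advance each modal amplitude with a Sommerfeld-type condition at that mode's own phase speed, and reconstruct. The schematic below uses made-up, approximately orthogonal modes and illustrative phase speeds; it is not the NCOM/NLOM implementation.

```python
import numpy as np

def modal_radiation_bc(u_boundary, u_interior, modes, speeds, dt, dx):
    """Update a boundary profile by radiating each vertical mode separately.

    u_boundary, u_interior : vertical profiles at the open boundary and at
                             the first interior grid column
    modes  : (n_levels, n_modes) vertical normal modes (assumed orthonormal)
    speeds : outward phase speed of each mode (barotropic fast, baroclinic slow)
    """
    a_b = modes.T @ u_boundary                 # project onto the modes
    a_i = modes.T @ u_interior
    # Sommerfeld radiation, upwinded outward, one phase speed per mode
    a_new = a_b - speeds * dt / dx * (a_b - a_i)
    return modes @ a_new                       # reconstruct the profile

n_levels = 20
z = np.linspace(0, np.pi, n_levels)
# Toy modes: barotropic (constant) plus two baroclinic shapes, normalized
modes = np.stack([np.ones(n_levels), np.cos(z), np.cos(2 * z)], axis=1)
modes /= np.linalg.norm(modes, axis=0)
speeds = np.array([200.0, 3.0, 1.5])           # m/s, fast to slow

u_b = modes @ np.array([0.5, 0.2, 0.1])        # current boundary profile
u_i = modes @ np.array([0.4, 0.25, 0.05])      # first interior profile
print(modal_radiation_bc(u_b, u_i, modes, speeds, dt=10.0, dx=5e3))
```

Applying a single phase speed to the full profile, by contrast, inevitably mistreats either the fast barotropic signal or the slow baroclinic ones, which is the source of the reflections noted above.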
Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models
NASA Astrophysics Data System (ADS)
Altuntas, Alper; Baugh, John
2017-07-01
Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.
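The adaptive control loop can be sketched in outline. Everything below is illustrative pseudo-structure, not the ADCIRC++ API: the parent/child objects and all of their methods are hypothetical names standing in for the behaviour the abstract describes.

    def run_adaptive_subdomain(parent, child, max_steps):
        """Sketch of adaptive subdomain modeling: the child domain grows
        whenever altered hydrodynamics approach its open boundary.
        `parent` and `child` are assumed to expose step/boundary helpers;
        every name here is illustrative, not the ADCIRC++ interface.
        """
        for step in range(max_steps):
            parent.advance_one_step()
            # The full-scale parent supplies boundary forcing for the child.
            child.set_boundary(parent.extract_boundary(child.extent))
            child.advance_one_step()
            # If the child's solution near its boundary diverges from the
            # parent's, the modification's influence has reached the edge:
            # enlarge the child so the altered flow stays contained.
            if child.boundary_mismatch(parent) > child.tolerance:
                child.expand_extent()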
From a meso- to micro-scale connectome: array tomography and mGRASP
Rah, Jong-Cheol; Feng, Linqing; Druckmann, Shaul; Lee, Hojin; Kim, Jinhyun
2015-01-01
Mapping mammalian synaptic connectivity has long been an important goal of neuroscience because knowing how neurons and brain areas are connected underpins an understanding of brain function. Meeting this goal requires advanced techniques with single-synapse resolution and large-scale capacity, especially at the multiple scales tethering the meso- and micro-scale connectome. Among several advanced light microscopy (LM)-based connectome technologies, Array Tomography (AT) and mammalian GFP-Reconstitution Across Synaptic Partners (mGRASP) can provide relatively high-throughput mapping of synaptic connectivity at multiple scales. AT- and mGRASP-assisted circuit mapping (ATing and mGRASPing), combined with techniques such as retrograde viral tracing, brain clearing, and activity indicators, will help unlock the secrets of complex neural circuits. Here, we discuss these useful new tools for mapping brain circuits at multiple scales, some functional implications of spatial synaptic distribution, and future challenges and directions of these endeavors. PMID:26089781
ProtoMD: A prototyping toolkit for multiscale molecular dynamics
NASA Astrophysics Data System (ADS)
Somogyi, Endre; Mansour, Andrew Abi; Ortoleva, Peter J.
2016-05-01
ProtoMD is a toolkit that facilitates the development of algorithms for multiscale molecular dynamics (MD) simulations. It is designed for multiscale methods that capture the dynamic transfer of information across multiple spatial scales, such as from the atomic to the mesoscopic scale, via coevolving microscopic and coarse-grained (CG) variables. ProtoMD can also be used to calibrate parameters needed in traditional CG-MD methods. The toolkit integrates 'GROMACS wrapper' to initiate MD simulations and 'MDAnalysis' to analyze and manipulate trajectory files. It facilitates experimentation with a spectrum of coarse-grained variables, prototyping rare events (such as chemical reactions), and simulating nanocharacterization experiments such as terahertz spectroscopy, AFM, nanopore, and time-of-flight mass spectroscopy. ProtoMD is written in Python and is freely available under the GNU General Public License from github.com/CTCNano/proto_md.
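ProtoMD's own interface is not reproduced here, but one of its stated building blocks, MDAnalysis, is a widely used Python library. The short sketch below shows the kind of trajectory handling it provides, tracking a simple coarse-grained variable frame by frame; the file names are placeholders.

    import numpy as np
    import MDAnalysis as mda

    # Load a topology and trajectory (paths are placeholders).
    u = mda.Universe("system.gro", "trajectory.xtc")
    protein = u.select_atoms("protein")

    # One candidate coarse-grained variable: the protein centre of mass,
    # evaluated at every frame of the trajectory.
    com = np.array([protein.center_of_mass() for ts in u.trajectory])
    print(com.shape)   # (n_frames, 3)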
Ham, Ok-Kyung; Kang, Youjeong; Teng, Helen; Lee, Yaelim; Im, Eun-Ok
2014-01-01
Background: Standardized pain-intensity measurement across different tools would enable practitioners to have confidence in clinical decision-making for pain management. Objectives: The purpose was to examine the degree of agreement among unidimensional pain scales and to determine the accuracy of multidimensional pain scales in the diagnosis of severe pain. Methods: A secondary analysis was performed. The sample included a convenience sample of 480 cancer patients recruited from both the internet and community settings. Cancer pain was measured using the Verbal Descriptor Scale (VDS), the Visual Analog Scale (VAS), the Faces Pain Scale (FPS), the McGill Pain Questionnaire-Short Form (MPQ-SF), and the Brief Pain Inventory-Short Form (BPI-SF). Data were analyzed using a multivariate analysis of variance (MANOVA) and a receiver operating characteristic (ROC) curve. Results: The agreement between the VDS and VAS was 77.25%, while the agreement was 71.88% between the VDS and FPS and 71.60% between the VAS and FPS. The MPQ-SF and BPI-SF yielded high accuracy in the diagnosis of severe pain. Cutoff points for severe pain were > 8 for the MPQ-SF and > 14 for the BPI-SF, which exhibited high sensitivity and relatively low specificity. Conclusion: The study found substantial agreement between the unidimensional pain scales, and high accuracy of the MPQ-SF and the BPI-SF in the diagnosis of severe pain. Implications for Practice: Use of one or more pain screening tools whose diagnostic accuracy and consistency have been validated will help classify pain effectively and subsequently promote optimal pain control in multi-ethnic groups of cancer patients. PMID:25068188
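The cutoff determination the abstract describes is a standard ROC analysis. The sketch below, using scikit-learn on synthetic data (the scores and labels are fabricated purely for illustration, not the study's data), picks a cutoff by maximising Youden's J statistic.

    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # severe: 1 if the reference classifies pain as severe, else 0.
    # score: total score on a multidimensional instrument (synthetic).
    rng = np.random.default_rng(0)
    severe = rng.integers(0, 2, 480)
    score = severe * 6 + rng.normal(8, 3, 480)

    fpr, tpr, thresholds = roc_curve(severe, score)
    # Youden's J = sensitivity + specificity - 1 = TPR - FPR; its
    # maximiser balances sensitivity against specificity.
    best = thresholds[np.argmax(tpr - fpr)]
    print(f"AUC = {roc_auc_score(severe, score):.2f}, cutoff > {best:.1f}")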
NASA Astrophysics Data System (ADS)
Donner, R. V.; Potirakis, S. M.; Barbosa, S. M.; Matos, J. A. O.; Pereira, A. J. S. C.; Neves, L. J. P. F.
2015-05-01
The presence or absence of long-range correlations in environmental radioactivity fluctuations has recently attracted considerable interest. Among many practically relevant applications, identifying and disentangling the environmental factors controlling the variable concentrations of the radioactive noble gas radon is important for estimating its effect on human health and the efficiency of possible measures for reducing the corresponding exposure. In this work, we present a critical re-assessment of several complementary methods that have previously been applied to evaluate the presence of long-range correlations and fractal scaling in environmental radon variations, with a particular focus on the specific properties of the underlying time series. As an illustrative case study, we re-analyze two high-frequency records of indoor radon concentrations from Coimbra, Portugal, each of which spans several weeks of continuous measurements at a high temporal resolution of five minutes. Our results reveal that, at the study site, radon concentrations exhibit complex multi-scale dynamics with qualitatively different properties at different time-scales: (i) essentially white noise in the high-frequency part (up to time-scales of about one hour); (ii) spurious indications of a non-stationary, apparently long-range correlated process (at time-scales between some hours and one day) arising from marked periodic components; and (iii) low-frequency variability indicating a true long-range dependent process. In the presence of such multi-scale variability, common estimators of long-range memory in time series are prone to fail if applied to the raw data without prior separation of time-scales with qualitatively different dynamics.
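One common estimator of long-range memory of the kind the abstract cautions about is detrended fluctuation analysis (DFA). A minimal numpy sketch, assuming linear detrending and non-overlapping windows, is given below; applied to raw data with mixed dynamics, the fitted exponent can mislead for exactly the reasons described.

    import numpy as np

    def dfa(x, scales):
        """Detrended fluctuation analysis: return F(s) for each scale s.
        The slope of log F vs. log s is ~0.5 for white noise and above
        0.5 for long-range correlated series."""
        y = np.cumsum(x - np.mean(x))          # integrated profile
        fluct = []
        for s in scales:
            n_seg = len(y) // s
            segs = y[:n_seg * s].reshape(n_seg, s)
            t = np.arange(s)
            # Remove a linear trend per segment, then pool the residual
            # variance over all segments at this scale.
            resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
            fluct.append(np.sqrt(np.mean(np.square(resid))))
        return np.array(fluct)

    scales = np.unique(np.logspace(1, 3, 20).astype(int))
    f = dfa(np.random.randn(10000), scales)
    alpha = np.polyfit(np.log(scales), np.log(f), 1)[0]
    print(f"estimated scaling exponent: {alpha:.2f}")   # ~0.5 for white noise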
Zeller, Katherine A; Vickers, T Winston; Ernest, Holly B; Boyce, Walter M
2017-01-01
The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better-modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource use patches and resistant kernel corridors. Across levels, we found puma avoided urban, agricultural areas, and roads and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S. Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species.
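The paper's multi-level combination is richer than a short sketch can convey, but the basic move of merging resistance surfaces estimated from different data types can be illustrated. The function below is a deliberately simplified assumption: each surface is min-max rescaled to [0, 1] and the surfaces are averaged with user-chosen weights.

    import numpy as np

    def combine_resistance(surfaces, weights=None):
        """Merge resistance surfaces from different hierarchical levels
        (e.g. one genetic, one movement-based) into a single surface.
        Each surface is min-max rescaled so raw units do not dominate."""
        def rescale(s):
            s = np.asarray(s, dtype=float)
            span = s.max() - s.min()
            return (s - s.min()) / span if span > 0 else np.zeros_like(s)
        scaled = [rescale(s) for s in surfaces]
        w = np.ones(len(scaled)) if weights is None else np.asarray(weights, float)
        w = w / w.sum()
        # Weighted average; equal weights treat levels as equally informative.
        return sum(wi * si for wi, si in zip(w, scaled))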
Multiple Time Series Node Synchronization Utilizing Ambient Reference
2014-12-31
...processing targeted to performance assessment, is the need for fine-scale synchronization among communicating nodes and across multiple domains. The severe requirements that Special... research community and it is well documented and characterized. The datasets considered from this project (listed below) were used to derive the...
Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales
Zhang, Yonghe
2010-01-01
Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model quantitatively describes the dual properties of bond strength, charge density, and ionic potential. Based on the atomic electron configuration and various quantum-mechanically built-up dual parameters, the model forms a dual method of multiple-functional prediction, which has far more versatile applications than traditional electronegativity scales and molecular properties. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with data on bond properties and satisfactorily explains chemical observations of elements throughout the Periodic Table. PMID:21151444
Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray
2014-01-01
A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of the solute with a fluid mechanics treatment of the solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two treatments and to develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design. PMID:25404761
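A full 3D Navier-Stokes solver is beyond a short sketch, but one ingredient, an explicit update of the viscous (diffusion) term, illustrates the kind of step such an algorithm takes. The periodic boundaries via np.roll and the function name are assumptions chosen for brevity, not the article's discretisation.

    import numpy as np

    def viscous_step(u, nu, dt, dx):
        """One explicit update of the viscous term of the Navier-Stokes
        equation on a 2-D grid: du/dt = nu * laplacian(u).
        Stable for dt <= dx**2 / (4 * nu); periodic boundaries assumed."""
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / dx**2
        return u + dt * nu * lap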