Sample records for complex variable techniques

  1. Variable Complexity Optimization of Composite Structures

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    2002-01-01

    The use of several levels of modeling in design has been dubbed variable complexity modeling. The work under the grant focused on developing variable complexity modeling strategies with emphasis on response surface techniques. Applications included design of stiffened composite plates for improved damage tolerance, the use of response surfaces for fitting weights obtained by structural optimization, and design against uncertainty using response surface techniques.

  2. COED Transactions, Vol. IX, No. 3, March 1977. Evaluation of a Complex Variable Using Analog/Hybrid Computation Techniques.

    ERIC Educational Resources Information Center

    Marcovitz, Alan B., Ed.

    Described is the use of an analog/hybrid computer installation to study those physical phenomena that can be described through the evaluation of an algebraic function of a complex variable. This is an alternative way to study such phenomena on an interactive graphics terminal. The typical problem used, involving complex variables, is that of…

  3. Dannie Heineman Prize for Mathematical Physics: Applying mathematical techniques to solve important problems in quantum theory

    NASA Astrophysics Data System (ADS)

    Bender, Carl

    2017-01-01

    The theory of complex variables is extremely useful because it helps to explain the mathematical behavior of functions of a real variable. Complex variable theory also provides insight into the nature of physical theories. For example, it provides a simple and beautiful picture of quantization and it explains the underlying reason for the divergence of perturbation theory. By using complex-variable methods one can generalize conventional Hermitian quantum theories into the complex domain. The result is a new class of parity-time-symmetric (PT-symmetric) theories whose remarkable physical properties have been studied and verified in many recent laboratory experiments.

  4. Review and classification of variability analysis techniques with clinical applications.

    PubMed

    Bravi, Andrea; Longtin, André; Seely, Andrew J E

    2011-10-10

    Analysis of patterns of variation of time-series, termed variability analysis, represents a rapidly evolving discipline with increasing applications in different fields of science. In medicine and in particular critical care, efforts have focussed on evaluating the clinical utility of variability. However, the growth and complexity of techniques applicable to this field have made interpretation and understanding of variability more challenging. Our objective is to provide an updated review of variability analysis techniques suitable for clinical applications. We review more than 70 variability techniques, providing for each technique a brief description of the underlying theory and assumptions, together with a summary of clinical applications. We propose a revised classification for the domains of variability techniques, which include statistical, geometric, energetic, informational, and invariant. We discuss the process of calculation, often necessitating a mathematical transform of the time-series. Our aims are to summarize a broad literature, to promote a shared vocabulary that would improve the exchange of ideas, and to facilitate the comparison of results between different studies. We conclude with challenges for the evolving science of variability analysis.

  5. Review and classification of variability analysis techniques with clinical applications

    PubMed Central

    2011-01-01

    Analysis of patterns of variation of time-series, termed variability analysis, represents a rapidly evolving discipline with increasing applications in different fields of science. In medicine and in particular critical care, efforts have focussed on evaluating the clinical utility of variability. However, the growth and complexity of techniques applicable to this field have made interpretation and understanding of variability more challenging. Our objective is to provide an updated review of variability analysis techniques suitable for clinical applications. We review more than 70 variability techniques, providing for each technique a brief description of the underlying theory and assumptions, together with a summary of clinical applications. We propose a revised classification for the domains of variability techniques, which include statistical, geometric, energetic, informational, and invariant. We discuss the process of calculation, often necessitating a mathematical transform of the time-series. Our aims are to summarize a broad literature, to promote a shared vocabulary that would improve the exchange of ideas, and to facilitate the comparison of results between different studies. We conclude with challenges for the evolving science of variability analysis. PMID:21985357

  6. Developing a complex independent component analysis technique to extract non-stationary patterns from geophysical time-series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen

    2016-04-01

    Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In recent decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation, so that the complex time series contain the observed values in their real part and the temporal rate of variability in their imaginary part; (ii) apply an ICA algorithm based on diagonalization of fourth-order cumulants to decompose the new complex data set in (i); and (iii) recognize dominant non-stationary patterns as independent complex patterns that represent the amplitude and phase propagation in space and time. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm; Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86(7), 477-497, doi: 10.1007/s00190-011-0532-5
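
    As a rough illustration of the complex-augmentation idea in this abstract, the sketch below builds a complex data set from a Hilbert transform of a toy space-time field. The fourth-order-cumulant ICA step of CICA is replaced by a plain complex PCA (eigendecomposition of the Hermitian covariance), so this is a hedged stand-in for the authors' method, not a reimplementation, and all sizes and signals are made up.

    ```python
    # Hedged sketch: Hilbert-transform augmentation to a complex data set, with
    # complex PCA standing in for the paper's fourth-order-cumulant ICA step.
    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(0)
    t = np.arange(240)                               # e.g. 20 years of monthly samples
    n_grid = 50                                      # hypothetical spatial grid points
    # toy field: a propagating seasonal-like pattern plus noise
    field = np.sin(2 * np.pi * t[:, None] / 12 - np.linspace(0, np.pi, n_grid))
    field += 0.1 * rng.standard_normal(field.shape)

    # (i) complex data: real part = observations, imaginary part = rate of variability
    Z = hilbert(field - field.mean(axis=0), axis=0)  # complex (time x space) array

    # (ii) stand-in decomposition: eigenmodes of the Hermitian spatial covariance
    C = Z.conj().T @ Z / Z.shape[0]
    eigval, eigvec = np.linalg.eigh(C)               # ascending eigenvalues
    leading = eigvec[:, -1]                          # leading complex spatial pattern

    # (iii) amplitude and phase propagation of the leading mode
    amplitude_series = np.abs(Z @ leading)
    phase_series = np.unwrap(np.angle(Z @ leading))
    print(amplitude_series[:3], phase_series[:3])
    ```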

  7. Navigating complex sample analysis using national survey data.

    PubMed

    Saylor, Jennifer; Friedmann, Erika; Lee, Hyeon Joo

    2012-01-01

    The National Center for Health Statistics conducts the National Health and Nutrition Examination Survey and other national surveys with probability-based complex sample designs. Goals of national surveys are to provide valid data for the population of the United States. Analyses of data from population surveys present unique challenges in the research process but are valuable avenues to study the health of the United States population. The aim of this study was to demonstrate the importance of using complex data analysis techniques for data obtained with a complex multistage sampling design and to provide an example of analysis using the SPSS Complex Samples procedure. Challenges and solutions specific to secondary data analysis of national databases are illustrated using the National Health and Nutrition Examination Survey as the exemplar. Oversampling of small or sensitive groups provides necessary estimates of variability within small groups. Use of weights without complex samples accurately estimates population means and frequencies from the sample after accounting for over- or undersampling of specific groups. Weighting alone, however, leads to inappropriate population estimates of variability, because they are computed as if the measures were from the entire population rather than a sample in the data set. The SPSS Complex Samples procedure allows inclusion of all sampling design elements: stratification, clusters, and weights. Use of national data sets allows use of extensive, expensive, and well-documented survey data for exploratory questions but limits analysis to those variables included in the data set. The large sample permits examination of multiple predictors and interactive relationships. Merging data files, availability of data in several waves of surveys, and complex sampling are techniques used to provide a representative sample but present unique challenges. Use of these data is optimized when sophisticated data analysis techniques are applied.

  8. Visualization of Global Sensitivity Analysis Results Based on a Combination of Linearly Dependent and Independent Directions

    NASA Technical Reports Server (NTRS)

    Davies, Misty D.; Gundy-Burlet, Karen

    2010-01-01

    A useful technique for the validation and verification of complex flight systems is Monte Carlo Filtering -- a global sensitivity analysis that tries to find the inputs and ranges that are most likely to lead to a subset of the outputs. A thorough exploration of the parameter space for complex integrated systems may require thousands of experiments and hundreds of controlled and measured variables. Tools for analyzing this space often have limitations caused by the numerical problems associated with high dimensionality and caused by the assumption of independence of all of the dimensions. To combat both of these limitations, we propose a technique that uses a combination of the original variables with the derived variables obtained during a principal component analysis.
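
    As a hedged illustration of the Monte Carlo Filtering idea combined with PCA-derived directions, the sketch below samples a hypothetical toy model, defines a "failure" subset of the outputs, and compares input distributions inside and outside that subset for both the original variables and the principal-component scores; none of the names, thresholds or dimensions come from the actual flight-system studies.

    ```python
    # Hedged sketch of Monte Carlo Filtering with PCA-derived input directions;
    # the toy model, failure threshold and variable count are assumptions.
    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(1)
    n = 5000
    X = rng.uniform(-1.0, 1.0, size=(n, 4))          # controlled input variables

    def toy_model(x):
        # nonlinear response standing in for a complex integrated system output
        return x[:, 0] + x[:, 1] + 0.5 * x[:, 2] * x[:, 3]

    y = toy_model(X)
    failure = y > 1.0                                # output subset of interest

    # derived variables from a principal component analysis of the inputs
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt.T                               # linearly independent directions

    # Monte Carlo Filtering: compare input distributions inside/outside the subset
    for name, cols in (("original", X), ("principal component", scores)):
        for j in range(cols.shape[1]):
            stat, p = ks_2samp(cols[failure, j], cols[~failure, j])
            print(f"{name} direction {j}: KS statistic {stat:.2f} (p={p:.1e})")
    ```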

  9. Detangling complex relationships in forensic data: principles and use of causal networks and their application to clinical forensic science.

    PubMed

    Lefèvre, Thomas; Lepresle, Aude; Chariot, Patrick

    2015-09-01

    The search for complex, nonlinear relationships and causality in data is hindered by the availability of techniques in many domains, including forensic science. Linear multivariable techniques are useful but present some shortcomings. In the past decade, Bayesian approaches have been introduced in forensic science. To date, authors have mainly focused on providing an alternative to classical techniques for quantifying effects and dealing with uncertainty. Causal networks, including Bayesian networks, can help detangle complex relationships in data. A Bayesian network estimates the joint probability distribution of data and graphically displays dependencies between variables and the circulation of information between these variables. In this study, we illustrate the interest in utilizing Bayesian networks for dealing with complex data through an application in clinical forensic science. Evaluating the functional impairment of assault survivors is a complex task for which few determinants are known. As routinely estimated in France, the duration of this impairment can be quantified by days of 'Total Incapacity to Work' ('Incapacité totale de travail,' ITT). In this study, we used a Bayesian network approach to identify the injury type, victim category and time to evaluation as the main determinants of the 'Total Incapacity to Work' (TIW). We computed the conditional probabilities associated with the TIW node and its parents. We compared this approach with a multivariable analysis, and the results of both techniques were converging. Thus, Bayesian networks should be considered a reliable means to detangle complex relationships in data.

  10. Water Quality Variable Estimation using Partial Least Squares Regression and Multi-Scale Remote Sensing.

    NASA Astrophysics Data System (ADS)

    Peterson, K. T.; Wulamu, A.

    2017-12-01

    Water, essential to all living organisms, is one of the Earth's most precious resources. Remote sensing offers an ideal approach to monitoring water quality compared with traditional in-situ techniques, which are highly time and resource consuming. Using a multi-scale approach, data from handheld spectroscopy, UAS-based hyperspectral imaging, and satellite multispectral images were collected in coordination with in-situ water quality samples for two midwestern watersheds. The remote sensing data were modeled and correlated to the in-situ water quality variables, including chlorophyll content (Chl), turbidity, and total dissolved solids (TDS), using Normalized Difference Spectral Indices (NDSI) and Partial Least Squares Regression (PLSR). The results of the study supported the original hypothesis that correlating water quality variables with remotely sensed data benefits greatly from the use of more complex modeling and regression techniques such as PLSR. The final results generated from the PLSR analysis showed much higher R2 values for all variables when compared to NDSI. The combination of NDSI and PLSR analysis also identified key wavelengths for identification that aligned with previous studies' findings. This research displays the advantages of, and future potential for, complex modeling and machine learning techniques to improve water quality variable estimation from spectral data.
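
    A minimal PLSR example in the spirit of this abstract, fitting a synthetic "chlorophyll" target to toy reflectance spectra with scikit-learn; the band count, noise level and driving bands are assumptions rather than the study's data.

    ```python
    # Hedged PLSR sketch on synthetic spectra standing in for field measurements.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(2)
    n_samples, n_bands = 120, 60
    spectra = rng.random((n_samples, n_bands))
    # toy "chlorophyll" driven by a few bands, plus noise
    chl = 3.0 * spectra[:, 10] - 2.0 * spectra[:, 35] + 0.1 * rng.standard_normal(n_samples)

    X_tr, X_te, y_tr, y_te = train_test_split(spectra, chl, test_size=0.3, random_state=0)
    pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
    print("test R2:", round(r2_score(y_te, pls.predict(X_te).ravel()), 3))
    ```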

  11. Forest fuels and potential fire behaviour 12 years after variable-retention harvest in lodgepole pine

    Treesearch

    Justin S. Crotteau; Christopher R. Keyes; Elaine K. Sutherland; David K. Wright; Joel M. Egan

    2016-01-01

    Variable-retention harvesting in lodgepole pine offers an alternative to conventional, even-aged management. This harvesting technique promotes structural complexity and age-class diversity in residual stands and promotes resilience to disturbance. We examined fuel loads and potential fire behaviour 12 years after two modes of variable-retention harvesting (...

  12. Exploring total cardiac variability in healthy and pathophysiological subjects using improved refined multiscale entropy.

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2017-02-01

    Multiscale entropy (MSE) and refined multiscale entropy (RMSE) techniques are widely used to evaluate the complexity of a time series across multiple time scales 't'. Both techniques, at certain time scales (sometimes across all time scales, in the case of RMSE), assign higher entropy to the HRV time series of certain pathologies than to those of healthy subjects, and to their corresponding randomized surrogate time series. This incorrect assessment of signal complexity may be due to the fact that these techniques suffer from the following limitations: (1) the threshold value 'r' is updated as a function of the long-term standard deviation and hence is unable to explore the short-term variability as well as the substantial variability inherent in beat-to-beat fluctuations of long-term HRV time series; (2) in RMSE, entropy values assigned to different filtered scaled time series are the result of changes in variance, but do not completely reflect the real structural organization inherent in the original time series. In the present work, we propose an improved RMSE (I-RMSE) technique by introducing a new procedure to set the threshold value that takes into account the period-to-period variability inherent in a signal, and we evaluated it on simulated and real HRV databases. The proposed I-RMSE assigns higher entropy to age-matched healthy subjects than to patients suffering from atrial fibrillation, congestive heart failure, sudden cardiac death and diabetes mellitus, across all time scales. The results strongly support the reduction in complexity of HRV time series in the female group, in old-aged subjects, in patients suffering from severe cardiovascular and non-cardiovascular diseases, and in their corresponding surrogate time series.
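
    For orientation, the sketch below shows the generic coarse-graining plus sample entropy recipe that MSE/RMSE build on; the improved threshold update proposed in the paper (I-RMSE) is not reproduced, the tolerance r is simply fixed from the original series, and the RR-interval series is synthetic.

    ```python
    # Minimal multiscale (sample) entropy sketch; not the paper's I-RMSE.
    import numpy as np

    def sample_entropy(x, m=2, r=0.2):
        x = np.asarray(x, dtype=float)
        n = x.size
        def count_matches(mm):
            templates = np.array([x[i:i + mm] for i in range(n - mm)])
            count = 0
            for i in range(len(templates)):
                d = np.max(np.abs(templates - templates[i]), axis=1)
                count += np.sum(d <= r) - 1        # exclude the self-match
            return count
        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    def multiscale_entropy(x, scales=(1, 2, 3, 4, 5), m=2):
        x = np.asarray(x, dtype=float)
        r = 0.2 * x.std()                          # threshold fixed from the original series
        out = []
        for s in scales:
            coarse = x[:x.size // s * s].reshape(-1, s).mean(axis=1)
            out.append(sample_entropy(coarse, m=m, r=r))
        return out

    rng = np.random.default_rng(3)
    rr = 0.8 + 0.05 * rng.standard_normal(1000)    # toy RR-interval (HRV) series
    print(multiscale_entropy(rr))
    ```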

  13. Temporal information partitioning: Characterizing synergy, uniqueness, and redundancy in interacting environmental variables

    NASA Astrophysics Data System (ADS)

    Goodwell, Allison E.; Kumar, Praveen

    2017-07-01

    Information theoretic measures can be used to identify nonlinear interactions between source and target variables through reductions in uncertainty. In information partitioning, multivariate mutual information is decomposed into synergistic, unique, and redundant components. Synergy is information shared only when sources influence a target together, uniqueness is information only provided by one source, and redundancy is overlapping shared information from multiple sources. While this partitioning has been applied to provide insights into complex dependencies, several proposed partitioning methods overestimate redundant information and omit a component of unique information because they do not account for source dependencies. Additionally, information partitioning has only been applied to time-series data in a limited context, using basic pdf estimation techniques or a Gaussian assumption. We develop a Rescaled Redundancy measure (Rs) to solve the source dependency issue, and present Gaussian, autoregressive, and chaotic test cases to demonstrate its advantages over existing techniques in the presence of noise, various source correlations, and different types of interactions. This study constitutes the first rigorous application of information partitioning to environmental time-series data, and addresses how noise, pdf estimation technique, or source dependencies can influence detected measures. We illustrate how our techniques can unravel the complex nature of forcing and feedback within an ecohydrologic system with an application to 1 min environmental signals of air temperature, relative humidity, and windspeed. The methods presented here are applicable to the study of a broad range of complex systems composed of interacting variables.
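
    As a hedged, much simplified illustration of the kind of decomposition discussed here, the sketch below estimates mutual and interaction information for two toy sources and a target from histograms; it yields only the net synergy-minus-redundancy balance, not the Rescaled Redundancy measure (Rs) proposed in the paper, and the bin count and signals are arbitrary.

    ```python
    # Histogram-based mutual information and interaction information for two
    # sources and one target; a classical decomposition, not the paper's Rs.
    import numpy as np

    def entropy(*cols, bins=8):
        # joint Shannon entropy (nats) from an equal-width histogram
        hist, _ = np.histogramdd(np.column_stack(cols), bins=bins)
        p = hist.ravel() / hist.sum()
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    rng = np.random.default_rng(4)
    n = 20000
    s1 = rng.standard_normal(n)                      # e.g. air temperature (toy)
    s2 = rng.standard_normal(n)                      # e.g. relative humidity (toy)
    target = s1 * s2 + 0.1 * rng.standard_normal(n)  # synergistic toy target

    I_s1 = entropy(s1) + entropy(target) - entropy(s1, target)
    I_s2 = entropy(s2) + entropy(target) - entropy(s2, target)
    I_joint = entropy(s1, s2) + entropy(target) - entropy(s1, s2, target)

    print("I(S1;T) =", round(I_s1, 3))
    print("I(S2;T) =", round(I_s2, 3))
    print("interaction information (synergy - redundancy):", round(I_joint - I_s1 - I_s2, 3))
    ```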

  14. Integrated geostatistics for modeling fluid contacts and shales in Prudhoe Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perez, G.; Chopra, A.K.; Severson, C.D.

    1997-12-01

    Geostatistics techniques are being used increasingly to model reservoir heterogeneity at a wide range of scales. A variety of techniques is now available with differing underlying assumptions, complexity, and applications. This paper introduces a novel method of geostatistics to model dynamic gas-oil contacts and shales in the Prudhoe Bay reservoir. The method integrates reservoir description and surveillance data within the same geostatistical framework. Surveillance logs and shale data are transformed to indicator variables. These variables are used to evaluate vertical and horizontal spatial correlation and cross-correlation of gas and shale at different times and to develop variogram models. Conditional simulation techniques are used to generate multiple three-dimensional (3D) descriptions of gas and shales that provide a measure of uncertainty. These techniques capture the complex 3D distribution of gas-oil contacts through time. The authors compare results of the geostatistical method with conventional techniques as well as with infill wells drilled after the study. Predicted gas-oil contacts and shale distributions are in close agreement with gas-oil contacts observed at infill wells.

  15. Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis

    USGS Publications Warehouse

    Hong, Y.-S.T.; Rosen, Michael R.; Bhamidimarri, R.

    2003-01-01

    This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable nature of a real WTP, diagnosis of the WTP is still difficult in practice. The application of intelligent techniques, which can analyse multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as the global information, are discovered. The operating condition and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the information obtained by clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysing and diagnosing tool to understand the system behaviour and to extract knowledge contained in multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.

  16. Trispyrazolylborate Complexes: An Advanced Synthesis Experiment Using Paramagnetic NMR, Variable-Temperature NMR, and EPR Spectroscopies

    ERIC Educational Resources Information Center

    Abell, Timothy N.; McCarrick, Robert M.; Bretz, Stacey Lowery; Tierney, David L.

    2017-01-01

    A structured inquiry experiment for inorganic synthesis has been developed to introduce undergraduate students to advanced spectroscopic techniques including paramagnetic nuclear magnetic resonance and electron paramagnetic resonance. Students synthesize multiple complexes with unknown first row transition metals and identify the unknown metals by…

  17. A Hardware Model Validation Tool for Use in Complex Space Systems

    NASA Technical Reports Server (NTRS)

    Davies, Misty Dawn; Gundy-Burlet, Karen L.; Limes, Gregory L.

    2010-01-01

    One of the many technological hurdles that must be overcome in future missions is the challenge of validating as-built systems against the models used for design. We propose a technique composed of intelligent parameter exploration in concert with automated failure analysis as a scalable method for the validation of complex space systems. The technique is impervious to discontinuities and linear dependencies in the data, and can handle dimensionalities consisting of hundreds of variables over tens of thousands of experiments.

  18. Invariant resolutions for several Fueter operators

    NASA Astrophysics Data System (ADS)

    Colombo, Fabrizio; Souček, Vladimir; Struppa, Daniele C.

    2006-07-01

    A proper generalization of complex function theory to higher dimensions is Clifford analysis, and an analogue of holomorphic functions of several complex variables was recently described as the space of solutions of several Dirac equations. The four-dimensional case has special features and is closely connected to functions of quaternionic variables. In this paper we present an approach to the Dolbeault sequence for several quaternionic variables based on symmetries and representation theory. In particular we prove that the resolution of the Cauchy-Fueter system obtained algebraically, via Gröbner bases techniques, is equivalent to the one obtained by R.J. Baston (J. Geom. Phys. 1992).

  19. Optimization Techniques for Analysis of Biological and Social Networks

    DTIC Science & Technology

    2012-03-28

    analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: implement the proposed algorithms, test and fine... alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters... systematic fashion under a unifying theoretical and algorithmic framework. Optimization, Complex Networks, Social Network Analysis, Computational

  20. The art of spacecraft design: A multidisciplinary challenge

    NASA Technical Reports Server (NTRS)

    Abdi, F.; Ide, H.; Levine, M.; Austel, L.

    1989-01-01

    Actual design turn-around time has become shorter due to the use of optimization techniques which have been introduced into the design process. It seems that what, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of this technique is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. Use of the Taylor's series expansion and finite differencing technique for sensitivity derivatives in each discipline makes this approach unique for screening dominant variables from nondominant variables. In this study, the current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied for a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.

  1. An improved switching converter model using discrete and average techniques

    NASA Technical Reports Server (NTRS)

    Shortt, D. J.; Lee, F. C.

    1982-01-01

    The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
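
    For reference, the averaging step described above is commonly written (in a generic textbook form, which may not match this report's exact formulation) by weighting the state matrices of the two switch positions with the duty ratio d:

    ```latex
    \dot{x}(t) = \bigl[\, d\,A_1 + (1-d)\,A_2 \,\bigr]\, x(t) + \bigl[\, d\,B_1 + (1-d)\,B_2 \,\bigr]\, u(t)
    ```

    The accuracy of this averaged form degrades as the modulation frequency approaches half the switching frequency, which is precisely the regime the combined discrete/average model is intended to handle.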

  2. LinguisticBelief: a java application for linguistic evaluation using belief, fuzzy sets, and approximate reasoning.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable is composed of fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of charge on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code, and the deployment package for installation on client machines.
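
    A toy sketch of the fuzzy-set and approximate-reasoning ingredients mentioned above (triangular membership functions evaluated and combined through a min-max rule base); the belief/plausibility propagation that LinguisticBelief performs is omitted, and all variable and set names are hypothetical.

    ```python
    # Toy fuzzy-set / approximate-reasoning sketch; belief/plausibility omitted.
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with support [a, c] and peak at b."""
        return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

    # hypothetical linguistic variables "likelihood" and "impact" on a 0-1 scale
    likelihood, impact = 0.7, 0.4

    mu_like = {"low": tri(likelihood, -0.5, 0.0, 0.5), "high": tri(likelihood, 0.5, 1.0, 1.5)}
    mu_imp = {"low": tri(impact, -0.5, 0.0, 0.5), "high": tri(impact, 0.5, 1.0, 1.5)}

    # rule base: "severity" is high if likelihood AND impact are high, else low
    severity = {
        "high": min(mu_like["high"], mu_imp["high"]),
        "low": max(min(mu_like["low"], mu_imp["low"]),
                   min(mu_like["low"], mu_imp["high"]),
                   min(mu_like["high"], mu_imp["low"])),
    }
    print(severity)   # firing strength of each output fuzzy set
    ```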

  3. Complex systems and the technology of variability analysis

    PubMed Central

    Seely, Andrew JE; Macklem, Peter T

    2004-01-01

    Characteristic patterns of variation over time, namely rhythms, represent a defining feature of complex systems, one that is synonymous with life. Despite the intrinsic dynamic, interdependent and nonlinear relationships of their parts, complex biological systems exhibit robust systemic stability. Applied to critical care, it is the systemic properties of the host response to a physiological insult that manifest as health or illness and determine outcome in our patients. Variability analysis provides a novel technology with which to evaluate the overall properties of a complex system. This review highlights the means by which we scientifically measure variation, including analyses of overall variation (time domain analysis, frequency distribution, spectral power), frequency contribution (spectral analysis), scale invariant (fractal) behaviour (detrended fluctuation and power law analysis) and regularity (approximate and multiscale entropy). Each technique is presented with a definition, interpretation, clinical application, advantages, limitations and summary of its calculation. The ubiquitous association between altered variability and illness is highlighted, followed by an analysis of how variability analysis may significantly improve prognostication of severity of illness and guide therapeutic intervention in critically ill patients. PMID:15566580
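
    As one concrete example of the scale-invariant measures listed above, the sketch below implements a compact detrended fluctuation analysis (DFA); the window sizes and test signals are arbitrary illustrative choices, not clinical data.

    ```python
    # Compact detrended fluctuation analysis (DFA) sketch.
    import numpy as np

    def dfa(x, windows=(4, 8, 16, 32, 64)):
        x = np.asarray(x, dtype=float)
        y = np.cumsum(x - x.mean())                  # integrated series
        F = []
        for w in windows:
            n_seg = y.size // w
            segs = y[:n_seg * w].reshape(n_seg, w)
            t = np.arange(w)
            mse = []
            for seg in segs:
                coef = np.polyfit(t, seg, 1)         # local linear trend
                mse.append(np.mean((seg - np.polyval(coef, t)) ** 2))
            F.append(np.sqrt(np.mean(mse)))
        # scaling exponent alpha: slope of log F(w) versus log w
        return np.polyfit(np.log(windows), np.log(F), 1)[0]

    rng = np.random.default_rng(5)
    print("white noise, alpha ~ 0.5:", round(dfa(rng.standard_normal(4096)), 2))
    print("random walk, alpha ~ 1.5:", round(dfa(np.cumsum(rng.standard_normal(4096))), 2))
    ```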

  4. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1 was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

  5. Variable length adjacent partitioning for PTS based PAPR reduction of OFDM signal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibraheem, Zeyid T.; Rahman, Md. Mijanur; Yaakob, S. N.

    2015-05-15

    Peak-to-average power ratio (PAPR) is a major drawback in OFDM communication. It drives the power amplifier into nonlinear operation, resulting in loss of data integrity. As such, there is a strong motivation to find techniques to reduce PAPR. Partial Transmit Sequence (PTS) is an attractive scheme for this purpose. Judicious partitioning of the OFDM data frame into disjoint subsets is a pivotal component of any PTS scheme. Among the existing partitioning techniques, adjacent partitioning is characterized by an attractive trade-off between cost and performance. With the aim of determining the effects of length variability of adjacent partitions, we performed an investigation into the performance of variable length adjacent partitioning (VL-AP) and fixed length adjacent partitioning in comparison with other partitioning schemes such as pseudorandom partitioning. Simulation results with different modulation and partitioning scenarios showed that fixed length adjacent partitioning had better performance than variable length adjacent partitioning. As expected, simulation results showed slightly better performance for the pseudorandom partitioning technique compared to fixed and variable adjacent partitioning schemes. However, as the pseudorandom technique incurs high computational complexity, adjacent partitioning schemes were still seen as favorable candidates for PAPR reduction.
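
    A hedged sketch of PTS with adjacent partitioning on a toy OFDM symbol follows; the subcarrier count, the number of subblocks, the {+1, -1} phase set and the absence of oversampling are illustrative simplifications, not the simulation parameters of the paper.

    ```python
    # PTS PAPR reduction with adjacent partitioning on a toy OFDM symbol.
    import numpy as np
    from itertools import product

    rng = np.random.default_rng(6)
    N, V = 64, 4                                      # subcarriers, adjacent subblocks
    qpsk = (rng.integers(0, 2, N) * 2 - 1 + 1j * (rng.integers(0, 2, N) * 2 - 1)) / np.sqrt(2)

    def papr_db(x):
        p = np.abs(x) ** 2
        return 10 * np.log10(p.max() / p.mean())

    # adjacent partitioning: V contiguous blocks of N/V subcarriers each
    blocks = np.zeros((V, N), dtype=complex)
    for v in range(V):
        blocks[v, v * N // V:(v + 1) * N // V] = qpsk[v * N // V:(v + 1) * N // V]
    time_blocks = np.fft.ifft(blocks, axis=1)

    best = None
    for phases in product([1, -1], repeat=V):         # exhaustive phase search
        candidate = np.tensordot(phases, time_blocks, axes=1)
        p = papr_db(candidate)
        if best is None or p < best[0]:
            best = (p, phases)

    print("original PAPR (dB):", round(papr_db(np.fft.ifft(qpsk)), 2))
    print("PTS PAPR      (dB):", round(best[0], 2), "with phases", best[1])
    ```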

  6. Variability in surface ECG morphology: signal or noise?

    NASA Technical Reports Server (NTRS)

    Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    Using data collected from canine models of acute myocardial ischemia, we investigated two issues of major relevance to electrocardiographic signal averaging: ECG epoch alignment, and the spectral characteristics of the beat-to-beat variability in ECG morphology. With initial digitization rates of 1 kHz, an iterative a posteriori matched-filtering alignment scheme, and linear interpolation, we demonstrated that there is sufficient information in the body surface ECG to merit alignment to a precision of 0.1 ms. Applying this technique to align QRS complexes and atrial pacing artifacts independently, we demonstrated that the conduction delay from atrial stimulus to ventricular activation may be so variable as to preclude using atrial pacing as an alignment mechanism, and that this variability in conduction time is modulated at the frequency of respiration and at a much lower frequency (0.02-0.03 Hz). Using a multidimensional spectral technique, we investigated the beat-to-beat variability in ECG morphology, demonstrating that the frequency spectrum of ECG morphological variation reveals a readily discernible modulation at the frequency of respiration. In addition, this technique detects a subtle beat-to-beat alternation in surface ECG morphology which accompanies transient coronary artery occlusion. We conclude that physiologically important information may be stored in the variability of the surface electrocardiogram, and that this information is lost by conventional averaging techniques.

  7. Entropy-based complexity of the cardiovascular control in Parkinson disease: comparison between binning and k-nearest-neighbor approaches.

    PubMed

    Porta, Alberto; Bari, Vlasta; Bassani, Tito; Marchi, Andrea; Tassin, Stefano; Canesi, Margherita; Barbic, Franca; Furlan, Raffaello

    2013-01-01

    Entropy-based approaches are frequently used to quantify complexity of short-term cardiovascular control from spontaneous beat-to-beat variability of heart period (HP) and systolic arterial pressure (SAP). Among these tools the ones optimizing a critical parameter such as the pattern length are receiving more and more attention. This study compares two entropy-based techniques for the quantification of complexity making use of completely different strategies to optimize the pattern length. Comparison was carried out over HP and SAP variability series recorded from 12 Parkinson's disease (PD) patients without orthostatic hypotension or symptoms of orthostatic intolerance and 12 age-matched healthy control (HC) subjects. Regardless of the method, complexity of cardiovascular control increased in PD group, thus suggesting the early impairment of cardiovascular function.

  8. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    NASA Technical Reports Server (NTRS)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  9. Characterizing Tityus discrepans scorpion venom from a fractal perspective: Venom complexity, effects of captivity, sexual dimorphism, differences among species.

    PubMed

    D'Suze, Gina; Sandoval, Moisés; Sevcik, Carlos

    2015-12-15

    A characteristic of venom elution patterns, shared with many other complex systems, is that many of their features cannot be properly described with statistical or Euclidean concepts. The understanding of such systems became possible with Mandelbrot's fractal analysis. Venom elution patterns were produced using reversed-phase high performance liquid chromatography (HPLC) with 1 mg of venom. One reason for the lack of quantitative analyses of the sources of venom variability is the difficulty of parametrizing the complexity of venom chromatograms. We quantify this complexity by means of an algorithm which estimates the contortedness (Q) of a waveform. Fractal analysis was used to compare venoms and to measure inter- and intra-specific venom variability. We studied variations in venom complexity derived from gender, seasonal and environmental factors, duration of captivity in the laboratory, and the technique used to milk venom. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Equation-free and variable free modeling for complex/multiscale systems. Coarse-grained computation in science and engineering using fine-grained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevrekidis, Ioannis G.

    The work explored the linking of modern developing machine learning techniques (manifold learning and in particular diffusion maps) with traditional PDE modeling/discretization/scientific computation techniques via the equation-free methodology developed by the PI. The result (in addition to several PhD degrees, two of them by CSGF Fellows) was a sequence of strong developments - in part on the algorithmic side, linking data mining with scientific computing, and in part on applications, ranging from PDE discretizations to molecular dynamics and complex network dynamics.

  11. The integrated manual and automatic control of complex flight systems

    NASA Technical Reports Server (NTRS)

    Schmidt, D. K.

    1985-01-01

    Pilot/vehicle analysis techniques for optimizing aircraft handling qualities are presented. The analysis approach considered is based on optimal control frequency-domain techniques. These techniques stem from an optimal control approach of a Neal-Smith-like analysis of aircraft attitude dynamics, extended to analyze the flared landing task. Some modifications to the technique are suggested and discussed. An in-depth analysis of the effect of the experimental variables, such as the prefilter, is conducted to gain further insight into the flared landing task for this class of vehicle dynamics.

  12. Application of higher-order cepstral techniques in problems of fetal heart signal extraction

    NASA Astrophysics Data System (ADS)

    Sabry-Rizk, Madiha; Zgallai, Walid; Hardiman, P.; O'Riordan, J.

    1996-10-01

    Recently, cepstral analysis based on second-order statistics and homomorphic filtering techniques has been used in the adaptive decomposition of overlapping (or otherwise) and noise-contaminated ECG complexes of mothers and fetuses, obtained by transabdominal surface electrodes connected to a monitoring instrument, an interface card, and a PC. Differential time delays of fetal heart beats, measured from a reference point located on the maternal complex after transformation to the cepstral domain, are first obtained, and this is followed by fetal heart rate variability computations. Homomorphic filtering in the complex cepstral domain and the subsequent transformation back to the time domain result in fetal complex recovery. However, three problems have been identified with second-order-based cepstral techniques that needed rectification in this paper. These are (1) errors resulting from the phase unwrapping algorithms and leading to fetal complex perturbation; (2) the unavoidable conversion of noise statistics from Gaussianity to non-Gaussianity, due to the highly nonlinear nature of the homomorphic transform, which warrants stringent noise cancellation routines; and (3) owing to the aforementioned problems in (1) and (2), it is difficult to adaptively optimize windows to include all individual fetal complexes in the time domain based on amplitude thresholding routines in the complex cepstral domain (i.e. the task of 'zooming' in on weak fetal complexes requires more processing time). The use of a third-order-based high resolution differential cepstrum technique results in recovery of the delay, which is of the order of 120 milliseconds.
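
    The sketch below shows the basic second-order cepstral idea behind this work: a delayed, attenuated copy of a waveform (standing in for the fetal complex) produces a peak at the corresponding quefrency once the low-quefrency region occupied by the beat's own spectral envelope is excluded, which is in effect the liftering step of homomorphic filtering. The complex-cepstrum recovery with phase unwrapping and the third-order differential cepstrum of the paper are not reproduced, and the broadband toy "beat" is not real abdominal ECG data.

    ```python
    # Toy echo-detection with the real cepstrum; see the hedges in the text above.
    import numpy as np

    rng = np.random.default_rng(7)
    fs, n = 500, 1024
    beat = np.zeros(n)
    beat[:40] = rng.standard_normal(40)              # broadband toy "QRS" burst
    delay = int(0.12 * fs)                           # 120 ms delayed, attenuated copy
    x = beat + 0.4 * np.roll(beat, delay)            # overlapping "maternal + fetal" mix

    X = np.fft.fft(x)
    ceps = np.fft.ifft(np.log(np.abs(X) ** 2 + 1e-12)).real

    # exclude low quefrencies (the burst's own spectral envelope) before peak picking
    lo = 40
    peak = lo + np.argmax(ceps[lo:n // 2])
    print("true delay %.3f s, cepstral peak at %.3f s" % (delay / fs, peak / fs))
    ```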

  13. Longitudinal variability of complexities associated with equatorial electrojet

    NASA Astrophysics Data System (ADS)

    Rabiu, A. B.; Ogunjo, S. T.; Fuwape, I. A.

    2017-12-01

    Equatorial electrojet (EEJ) indices obtained from ground-based magnetometers at 6 representative stations across the magnetic equatorial belt for the year 2009 (mean annual sunspot number Rz = 3.1) were subjected to nonlinear time series analysis techniques to ascertain the longitudinal dependence of the chaos/complexity associated with the phenomenon. The selected stations were along the magnetic equator in the South American (Huancayo, dip latitude -1.80°), African (Ilorin, dip latitude -1.82°; Addis Ababa, dip latitude -0.18°), and Philippine (Langkawi, dip latitude -2.32°; Davao, dip latitude -1.02°; Yap, dip latitude -1.49°) sectors. The nonlinear quantifiers used in this work include recurrence rate, determinism, diagonal line length, entropy, laminarity, Tsallis entropy, Lyapunov exponent and correlation dimension. The EEJ was found to vary from one longitudinal representative station to another, with the strongest EEJ of about 192.5 nT in the South American sector at Huancayo. The degree of complexity in the EEJ was found to vary qualitatively from one sector to another. Probable physical mechanisms responsible for the longitudinal variability of EEJ strength and its complexity are highlighted.

  14. Detection of quasars in the time domain

    NASA Astrophysics Data System (ADS)

    Graham, Matthew J.; Djorgovski, S. G.; Stern, Daniel J.; Drake, Andrew; Mahabal, Ashish

    2017-06-01

    The time domain is the emerging forefront of astronomical research with new facilities and instruments providing unprecedented amounts of data on the temporal behavior of astrophysical populations. Dealing with the size and complexity of this requires new techniques and methodologies. Quasars are an ideal work set for developing and applying these: they vary in a detectable but not easily quantifiable manner whose physical origins are poorly understood. In this paper, we will review how quasars are identified by their variability and how these techniques can be improved, what physical insights into their variability can be gained from studying extreme examples of variability, and what approaches can be taken to increase the number of quasars known. These will demonstrate how astroinformatics is essential to discovering and understanding this important population.

  15. A variational conformational dynamics approach to the selection of collective variables in metadynamics.

    PubMed

    McCarty, James; Parrinello, Michele

    2017-11-28

    In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal leading to slow convergence. However by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.

  16. A variational conformational dynamics approach to the selection of collective variables in metadynamics

    NASA Astrophysics Data System (ADS)

    McCarty, James; Parrinello, Michele

    2017-11-01

    In this paper, we combine two powerful computational techniques, well-tempered metadynamics and time-lagged independent component analysis. The aim is to develop a new tool for studying rare events and exploring complex free energy landscapes. Metadynamics is a well-established and widely used enhanced sampling method whose efficiency depends on an appropriate choice of collective variables. Often the initial choice is not optimal leading to slow convergence. However by analyzing the dynamics generated in one such run with a time-lagged independent component analysis and the techniques recently developed in the area of conformational dynamics, we obtain much more efficient collective variables that are also better capable of illuminating the physics of the system. We demonstrate the power of this approach in two paradigmatic examples.
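
    A bare-bones time-lagged independent component analysis (TICA) sketch on a toy mixed trajectory is given below; metadynamics itself and the reweighting of biased trajectories are outside the scope of this illustration, and the generalized-eigenproblem form shown is the standard textbook one rather than the authors' exact implementation.

    ```python
    # Bare-bones TICA: slowest linear combination of observables at a given lag.
    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(8)
    n, lag = 20000, 50
    # toy slow coordinate (square-wave switching between two states, plus noise)
    # mixed with three purely fast noise coordinates
    slow = np.sign(np.sin(2 * np.pi * np.arange(n) / 4000)) + 0.3 * rng.standard_normal(n)
    fast = rng.standard_normal((n, 3))
    X = np.column_stack([slow, fast]) @ rng.standard_normal((4, 4))  # mixed observables
    X = X - X.mean(axis=0)

    C0 = X.T @ X / n                                 # instantaneous covariance
    Ct = X[:-lag].T @ X[lag:] / (n - lag)            # time-lagged covariance
    Ct = 0.5 * (Ct + Ct.T)                           # symmetrise

    evals, evecs = eigh(Ct, C0)                      # generalized eigenvalue problem
    slowest = evecs[:, np.argmax(evals)]             # slowest collective variable
    ic1 = X @ slowest                                # its time series
    print("leading autocorrelation eigenvalue: %.3f" % evals.max())
    ```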

  17. Novel Strategy to Evaluate Infectious Salmon Anemia Virus Variants by High Resolution Melting

    PubMed Central

    Sepúlveda, Dagoberto; Cárdenas, Constanza; Carmona, Marisela; Marshall, Sergio H.

    2012-01-01

    Genetic variability is a key problem in the prevention and therapy of RNA-based virus infections. Infectious Salmon Anemia virus (ISAv) is an RNA virus which aggressively attacks salmon producing farms worldwide and in particular in Chile. Just as with most of the Orthomyxovirus, ISAv displays high variability in its genome which is reflected by a wider infection potential, thus hampering management and prevention of the disease. Although a number of widely validated detection procedures exist, in this case there is a need of a more complex approach to the characterization of virus variability. We have adapted a procedure of High Resolution Melting (HRM) as a fine-tuning technique to fully differentiate viral variants detected in Chile and projected to other infective variants reported elsewhere. Out of the eight viral coding segments, the technique was adapted using natural Chilean variants for two of them, namely segments 5 and 6, recognized as virulence-associated factors. Our work demonstrates the versatility of the technique as well as its superior resolution capacity compared with standard techniques currently in use as key diagnostic tools. PMID:22719837

  18. A Composite Algorithm for Mixed Integer Constrained Nonlinear Optimization.

    DTIC Science & Technology

    1980-01-01

    de Silva [14], and Weisman and Wood [76]. A particular direct search algorithm, the simplex method, has been cited for having the potential for... spaced discrete points on a line which makes the direction suitable for an efficient integer search technique based on Fibonacci numbers. Two... defined by a subset of variables. The complex algorithm is particularly well suited for this subspace search for two reasons. First, the complex method

  19. Geoelectrical characterisation of basement aquifers: the case of Iberekodo, southwestern Nigeria

    NASA Astrophysics Data System (ADS)

    Aizebeokhai, Ahzegbobor P.; Oyeyemi, Kehinde D.

    2018-03-01

    Basement aquifers, which occur within the weathered and fractured zones of crystalline bedrocks, are important groundwater resources in tropical and subtropical regions. The development of basement aquifers is complex owing to their high spatial variability. Geophysical techniques are used to obtain information about the hydrologic characteristics of the weathered and fractured zones of the crystalline basement rocks, which relates to the occurrence of groundwater in the zones. The spatial distributions of these hydrologic characteristics are then used to map the spatial variability of the basement aquifers. Thus, knowledge of the spatial variability of basement aquifers is useful in siting wells and boreholes for optimal and perennial yield. Geoelectrical resistivity is one of the most widely used geophysical methods for assessing the spatial variability of the weathered and fractured zones in groundwater exploration efforts in basement complex terrains. The presented study focuses on combining vertical electrical sounding with two-dimensional (2D) geoelectrical resistivity imaging to characterise the weathered and fractured zones in a crystalline basement complex terrain in southwestern Nigeria. The basement aquifer was delineated, and the nature, extent and spatial variability of the delineated basement aquifer were assessed based on the spatial variability of the weathered and fractured zones. The study shows that a multiple-gradient array for 2D resistivity imaging is sensitive to vertical and near-surface stratigraphic features, which have hydrological implications. The integration of resistivity sounding with 2D geoelectrical resistivity imaging is efficient and enhances near-surface characterisation in basement complex terrain.

  20. Walking the Filament of Feasibility: Global Optimization of Highly-Constrained, Multi-Modal Interplanetary Trajectories Using a Novel Stochastic Search Technique

    NASA Technical Reports Server (NTRS)

    Englander, Arnold C.; Englander, Jacob A.

    2017-01-01

    Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables and equality and inequality constraints as well as many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are inadequately robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the global optimal solution.

  1. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R(2)=0.94 and mean absolute error (MAE)=1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.
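
    The sketch below is a compact stand-in for the pipeline described above, pairing a simple univariate feature filter (scikit-learn's SelectKBest, used here in place of CFS/WFS) with Gaussian process regression; the synthetic "LAeq"-like target, predictor count and kernel choice are assumptions.

    ```python
    # Feature selection + Gaussian process regression on synthetic noise data.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_regression
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(9)
    n, p = 400, 25                                   # observations, candidate predictors
    X = rng.standard_normal((n, p))
    # toy LAeq-like target driven by a handful of the predictors
    laeq = 65 + 3 * X[:, 0] + 2 * X[:, 3] - 1.5 * X[:, 7] + rng.standard_normal(n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, laeq, test_size=0.3, random_state=0)
    selector = SelectKBest(f_regression, k=5).fit(X_tr, y_tr)
    gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gpr.fit(selector.transform(X_tr), y_tr)
    pred = gpr.predict(selector.transform(X_te))
    print("MAE [dB(A)]:", round(mean_absolute_error(y_te, pred), 2))
    ```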

  2. Dynamical complexity changes during two forms of meditation

    NASA Astrophysics Data System (ADS)

    Li, Jin; Hu, Jing; Zhang, Yinhong; Zhang, Xiaofeng

    2011-06-01

    Detection of dynamical complexity changes in natural and man-made systems has deep scientific and practical meaning. We use the base-scale entropy method to analyze dynamical complexity changes in heart rate variability (HRV) series recorded during two traditional forms of meditation, Chinese Chi and Kundalini Yoga, in healthy young adults. The results show that dynamical complexity decreases in the meditation state for both forms of meditation. Meanwhile, we detected changes in the probability distribution of m-words during meditation and explained these changes using the probability distribution of a sine function. The base-scale entropy method may be used on a wider range of physiologic signals.

  3. Application of multivariable statistical techniques in plant-wide WWTP control strategies analysis.

    PubMed

    Flores, X; Comas, J; Roda, I R; Jiménez, L; Gernaey, K V

    2007-01-01

    The main objective of this paper is to present the application of selected multivariable statistical techniques in plant-wide wastewater treatment plant (WWTP) control strategies analysis. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA) and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulation of several control strategies applied to the plant-wide IWA Benchmark Simulation Model No 2 (BSM2). These techniques allow one to (i) determine natural groups or clusters of control strategies with similar behaviour, (ii) find and interpret hidden, complex and causal relation features in the data set, and (iii) identify the important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both analysis and interpretation of complex multicriteria data sets and allows an improved use of information for effective evaluation of control strategies.
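
    A short sketch of the three multivariable steps (cluster analysis, PCA, discriminant analysis) applied to a synthetic "evaluation matrix" of control strategies versus performance criteria follows; the BSM2 criteria and strategy set are not reproduced, and all numbers are placeholders.

    ```python
    # CA / PCA / DA steps on a synthetic control-strategy evaluation matrix.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(10)
    # 30 hypothetical control strategies scored on 8 criteria (e.g. effluent
    # quality, aeration energy, ...), built as two behaviour groups plus noise
    group = rng.integers(0, 2, 30)
    evaluation = rng.standard_normal((30, 8)) + 2.0 * group[:, None]

    Z = StandardScaler().fit_transform(evaluation)
    pca = PCA(n_components=2).fit(Z)                                          # PCA/FA step
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Z)   # CA step
    lda = LinearDiscriminantAnalysis().fit(Z, labels)                         # DA step

    print("variance explained by 2 PCs:", round(pca.explained_variance_ratio_.sum(), 2))
    print("cluster sizes:", np.bincount(labels))
    print("most discriminant criterion index:", int(np.argmax(np.abs(lda.coef_))))
    ```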

  4. Combinatorial techniques to efficiently investigate and optimize organic thin film processing and properties.

    PubMed

    Wieberger, Florian; Kolb, Tristan; Neuber, Christian; Ober, Christopher K; Schmidt, Hans-Werner

    2013-04-08

    In this article we present several developed and improved combinatorial techniques to optimize the processing conditions and material properties of organic thin films. The combinatorial approach allows investigation of multi-variable dependencies and is well suited to studying organic thin films intended for high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore, we demonstrate the smart application of combinations of composition and processing gradients to create combinatorial libraries. First, a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is then carried out in very small areas and arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow precise trends to be identified for the optimization of multi-variable-dependent processes, which is demonstrated on the lithographic patterning process. Here we verify conclusively the strong interaction, and thus the interdependency, of variables in the preparation and properties of complex organic thin film systems. The established gradient preparation techniques are not limited to lithographic patterning. It is possible to utilize and transfer the reported combinatorial techniques to other multi-variable-dependent processes and to investigate and optimize thin film layers and devices for optical, electro-optical, and electronic applications.

  5. Visualizing medium and biodistribution in complex cell culture bioreactors using in vivo imaging.

    PubMed

    Ratcliffe, E; Thomas, R J; Stacey, A J

    2014-01-01

    There is a dearth of technology and methods to aid process characterization, control and scale-up of complex culture platforms that provide niche micro-environments for some stem cell-based products. We have demonstrated a novel use of 3D in vivo imaging systems to visualize medium flow and cell distribution within a complex culture platform (hollow fiber bioreactor) to aid characterization of potential spatial heterogeneity and identify potential routes of bioreactor failure or sources of variability. This can then aid process characterization and control of such systems with a view to scale-up. Two potential sources of variation were observed with multiple bioreactors repeatedly imaged using two different imaging systems: shortcutting of medium between adjacent inlet and outlet ports, with the potential to create medium gradients within the bioreactor, and localization of bioluminescent murine 4T1-luc2 cells upon inoculation, with the potential to create variable seeding densities at different points within the cell growth chamber. The ability of the imaging technique to identify these key operational bioreactor characteristics demonstrates an emerging technique in troubleshooting and engineering optimization of bioreactor performance. © 2013 American Institute of Chemical Engineers.

  6. Shape optimization techniques for musical instrument design

    NASA Astrophysics Data System (ADS)

    Henrique, Luis; Antunes, Jose; Carvalho, Joao S.

    2002-11-01

    The design of musical instruments is still mostly based on empirical knowledge and costly experimentation. One interesting improvement is the shape optimization of resonating components, given a number of constraints (allowed parameter ranges, shape smoothness, etc.), so that vibrations occur at specified modal frequencies. Each admissible geometrical configuration generates an error between computed eigenfrequencies and the target set. Typically, error surfaces present many local minima, corresponding to suboptimal designs. This difficulty can be overcome using global optimization techniques, such as simulated annealing. However, these methods are greedy in terms of the number of function evaluations required. Thus, the computational effort can be unacceptable if complex problems, such as bell optimization, are tackled. Those issues are addressed in this paper, and a method for improving optimization procedures is proposed. Instead of using the local geometric parameters as searched variables, the system geometry is modeled in terms of truncated series of orthogonal space-functions, and optimization is performed on their amplitude coefficients. Fourier series and orthogonal polynomials are typical such functions. This technique considerably reduces the number of searched variables and has the potential for significant computational savings in complex problems. It is illustrated by optimizing the shapes of both current and uncommon marimba bars.
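
    A minimal sketch of this idea (not the authors' implementation): a bar's thickness profile is parametrized by a few Fourier amplitude coefficients, a placeholder eigenfrequency model stands in for the real modal computation, and scipy's dual_annealing searches the coefficient space for a target set of modal frequencies. The eigenfrequency function, target values and bounds are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.optimize import dual_annealing

TARGET = np.array([440.0, 1760.0, 3960.0])    # illustrative target modal frequencies (Hz)
N_COEF = 4                                    # length of the truncated Fourier series

def thickness_profile(coeffs, x):
    """Bar thickness as a truncated Fourier series of the axial coordinate x."""
    h = 0.02 + sum(c * np.cos((k + 1) * np.pi * x) for k, c in enumerate(coeffs))
    return np.clip(h, 1e-3, None)             # keep the profile physically positive

def eigenfrequencies(coeffs):
    """Placeholder modal model: stands in for a finite-element eigenvalue solve."""
    x = np.linspace(0.0, 1.0, 200)
    h = thickness_profile(coeffs, x)
    base = 440.0 * np.mean(h) / 0.02           # crude stiffness/mass scaling, illustrative only
    return base * np.array([1.0, 4.0, 9.0])

def error(coeffs):
    """Squared error between computed eigenfrequencies and the target set."""
    return np.sum((eigenfrequencies(coeffs) - TARGET) ** 2)

bounds = [(-0.005, 0.005)] * N_COEF            # the amplitude coefficients are the searched variables
result = dual_annealing(error, bounds, maxiter=200, seed=1)
print(result.x, result.fun)
```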

  7. Applied Routh approximation

    NASA Technical Reports Server (NTRS)

    Merrill, W. C.

    1978-01-01

    The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state variable model of the F100 engine and to a 43rd-order transfer function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency domain formulation of the Routh method to the time domain in order to handle the state variable formulation directly. The time domain formulation was derived, and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time domain Routh technique to the state variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.

  8. Aspect-Oriented Model-Driven Software Product Line Engineering

    NASA Astrophysics Data System (ADS)

    Groher, Iris; Voelter, Markus

    Software product line engineering aims to reduce development time, effort, cost, and complexity by taking advantage of the commonality within a portfolio of similar products. The effectiveness of a software product line approach directly depends on how well feature variability within the portfolio is implemented and managed throughout the development lifecycle, from early analysis through maintenance and evolution. This article presents an approach that facilitates variability implementation, management, and tracing by integrating model-driven and aspect-oriented software development. Features are separated in models and composed using aspect-oriented composition techniques at the model level. Model transformations support the transition from problem-space to solution-space models. Aspect-oriented techniques enable the explicit expression and modularization of variability at the model, template, and code levels. The presented concepts are illustrated with a case study of a home automation system.

  9. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  10. Variable forgetting factor mechanisms for diffusion recursive least squares algorithm in sensor networks

    NASA Astrophysics Data System (ADS)

    Zhang, Ling; Cai, Yunlong; Li, Chunguang; de Lamare, Rodrigo C.

    2017-12-01

    In this work, we present low-complexity variable forgetting factor (VFF) techniques for diffusion recursive least squares (DRLS) algorithms. In particular, we propose low-complexity VFF-DRLS algorithms for distributed parameter and spectrum estimation in sensor networks. The proposed algorithms adjust the forgetting factor automatically according to the a posteriori error signal. We develop detailed analyses of the mean and mean-square performance of the proposed algorithms and derive mathematical expressions for the mean square deviation (MSD) and the excess mean square error (EMSE). The simulation results show that the proposed low-complexity VFF-DRLS algorithms achieve superior performance to the existing DRLS algorithm with a fixed forgetting factor when applied to distributed parameter and spectrum estimation scenarios. In addition, the simulation results show good agreement with our analytical expressions.
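
    The sketch below shows a single-node recursive least squares filter whose forgetting factor is adapted from the a posteriori error. The adaptation rule and its constants are illustrative assumptions, not the paper's specific VFF-DRLS scheme, which additionally includes the diffusion (combination) step across neighboring sensor nodes.

```python
import numpy as np

def vff_rls(X, d, lam_min=0.90, lam_max=0.9999, mu=0.5, delta=1e2):
    """RLS with a simple variable forgetting factor driven by the a posteriori error."""
    n_samples, n_taps = X.shape
    w = np.zeros(n_taps)
    P = delta * np.eye(n_taps)              # inverse correlation matrix estimate
    lam = lam_max
    for i in range(n_samples):
        x = X[i]
        k = P @ x / (lam + x @ P @ x)       # gain vector
        e_prior = d[i] - w @ x              # a priori error
        w = w + k * e_prior
        P = (P - np.outer(k, x @ P)) / lam  # update inverse correlation matrix
        e_post = d[i] - w @ x               # a posteriori error
        # illustrative VFF rule: a large a posteriori error lowers the forgetting factor
        lam = float(np.clip(lam_max - mu * e_post ** 2, lam_min, lam_max))
    return w

# Example: identify a 4-tap parameter vector from noisy linear measurements.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.3, 0.8, 0.1])
X = rng.normal(size=(500, 4))
d = X @ w_true + 0.01 * rng.normal(size=500)
print(vff_rls(X, d))
```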

  11. JIGSAW: Preference-directed, co-operative scheduling

    NASA Technical Reports Server (NTRS)

    Linden, Theodore A.; Gaw, David

    1992-01-01

    Techniques that enable humans and machines to cooperate in the solution of complex scheduling problems have evolved out of work on the daily allocation and scheduling of Tactical Air Force resources. A generalized, formal model of these applied techniques is being developed. It is called JIGSAW by analogy with the multi-agent, constructive process used when solving jigsaw puzzles. JIGSAW begins from this analogy and extends it by propagating local preferences into global statistics that dynamically influence the value and variable ordering decisions. The statistical projections also apply to abstract resources and time periods--allowing more opportunities to find a successful variable ordering by reserving abstract resources and deferring the choice of a specific resource or time period.

  12. Analysis of Setting Efficacy in Young Male and Female Volleyball Players.

    PubMed

    González-Silva, Jara; Domínguez, Alberto Moreno; Fernández-Echeverría, Carmen; Rabaz, Fernando Claver; Arroyo, M Perla Moreno

    2016-12-01

    The main objective of this study was to analyse the variables that predicted setting efficacy in complex I (KI) in volleyball, in formative categories and depending on gender. The study sample comprised 5842 game actions carried out by the 16 male-category and 18 female-category teams that participated in the Under-16 Spanish Championship. The dependent variable was setting efficacy. The independent variables were grouped into: serve variables (serve zone, type of serve, striking technique, in-game role of the server and serve direction), reception variables (reception zone, receiving player and reception efficacy) and setting variables (setter's position, setting zone, type of set, setting technique, set area and set tempo). Multinomial logistic regression showed that the best predictors of setting efficacy, in both the female and male categories, were reception efficacy, setting technique and set tempo. In the male category, the jump serve was the greatest predictor of setting efficacy, while in the female category it was the set area. Therefore, in the male category, setting efficacy was affected not only by the preceding action but also by the serve. In contrast, in the female category, only variables of the action itself and of the previous action, reception, affected setting efficacy. The results obtained in the present study should be taken into account in the training process of both male and female volleyball players in formative stages.
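
    The modelling step corresponds to a standard multinomial logistic regression; the sketch below, with hypothetical variable names and category levels rather than the study data, shows how such a model could be fit with scikit-learn.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import make_pipeline

# Hypothetical game actions; column names and category levels are illustrative.
df = pd.DataFrame({
    "reception_efficacy": ["good", "poor", "good", "perfect", "poor", "good"],
    "setting_technique":  ["overhead", "forearm", "overhead", "overhead", "forearm", "overhead"],
    "set_tempo":          ["fast", "slow", "medium", "fast", "slow", "medium"],
    "setting_efficacy":   ["success", "error", "continuity", "success", "error", "continuity"],
})

features = ["reception_efficacy", "setting_technique", "set_tempo"]
encoder = ColumnTransformer([("cat", OneHotEncoder(handle_unknown="ignore"), features)])
model = make_pipeline(encoder, LogisticRegression(multi_class="multinomial", max_iter=1000))
model.fit(df[features], df["setting_efficacy"])
print(model.predict(df[features]))
```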

  13. A 3-D chimera grid embedding technique

    NASA Technical Reports Server (NTRS)

    Benek, J. A.; Buning, P. G.; Steger, J. L.

    1985-01-01

    A three-dimensional (3-D) chimera grid-embedding technique is described. The technique simplifies the construction of computational grids about complex geometries. The method subdivides the physical domain into regions which can accommodate easily generated grids. Communication among the grids is accomplished by interpolation of the dependent variables at grid boundaries. The procedures for constructing the composite mesh and the associated data structures are described. The method is demonstrated by solution of the Euler equations for the transonic flow about a wing/body, wing/body/tail, and a configuration of three ellipsoidal bodies.

  14. The Case for Open Source Software: The Interactional Discourse Lab

    ERIC Educational Resources Information Center

    Choi, Seongsook

    2016-01-01

    Computational techniques and software applications for the quantitative content analysis of texts are now well established, and many qualitative data software applications enable the manipulation of input variables and the visualization of complex relations between them via interactive and informative graphical interfaces. Although advances in…

  15. Sensorimotor System Measurement Techniques

    PubMed Central

    Riemann, Bryan L.; Myers, Joseph B.; Lephart, Scott M.

    2002-01-01

    Objective: To provide an overview of currently available sensorimotor assessment techniques. Data Sources: We drew information from an extensive review of the scientific literature conducted in the areas of proprioception, neuromuscular control, and motor control measurement. Literature searches were conducted using MEDLINE for the years 1965 to 1999 with the key words proprioception, somatosensory evoked potentials, nerve conduction testing, electromyography, muscle dynamometry, isometric, isokinetic, kinetic, kinematic, posture, equilibrium, balance, stiffness, neuromuscular, sensorimotor, and measurement. Additional sources were collected using the reference lists of identified articles. Data Synthesis: Sensorimotor measurement techniques are discussed with reference to the underlying physiologic mechanisms, influential factors and locations of the variable within the system, clinical research questions, limitations of the measurement technique, and directions for future research. Conclusions/Recommendations: The complex interactions and relationships among the individual components of the sensorimotor system make measuring and analyzing specific characteristics and functions difficult. Additionally, the specific assessment techniques used to measure a variable can influence attained results. Optimizing the application of sensorimotor research to clinical settings can, therefore, be best accomplished through the use of common nomenclature to describe underlying physiologic mechanisms and specific measurement techniques. PMID:16558672

  16. Recurrence Quantification Analysis of Sentence-Level Speech Kinematics.

    PubMed

    Jackson, Eric S; Tiede, Mark; Riley, Michael A; Whalen, D H

    2016-12-01

    Current approaches to assessing sentence-level speech variability rely on measures that quantify variability across utterances and use normalization procedures that alter raw trajectory data. The current work tests the feasibility of a less restrictive nonlinear approach, recurrence quantification analysis (RQA), via a procedural example and subsequent analysis of kinematic data. To test the feasibility of RQA, lip aperture (i.e., the Euclidean distance between lip-tracking sensors) was recorded for 21 typically developing adult speakers during production of a simple utterance. The utterance was produced in isolation and in carrier structures differing just in length or in length and complexity. Four RQA indices were calculated: percent recurrence (%REC), percent determinism (%DET), stability (MAXLINE), and stationarity (TREND). Percent determinism (%DET) decreased only for the most linguistically complex sentence; MAXLINE decreased as a function of linguistic complexity but increased for the longer-only sentence; TREND decreased as a function of both length and linguistic complexity. This research note demonstrates the feasibility of using RQA as a tool to compare speech variability across speakers and groups. RQA offers promise as a technique to assess effects of potential stressors (e.g., linguistic or cognitive factors) on the speech production system.
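
    As a sketch of the underlying computation (not the authors' processing pipeline), the code below time-delay embeds a signal, thresholds pairwise distances to build a recurrence matrix, and computes two of the indices mentioned, percent recurrence (%REC) and percent determinism (%DET); the embedding parameters, radius and minimum line length are assumptions.

```python
import numpy as np

def rqa_indices(x, dim=3, tau=2, radius=0.1, lmin=2):
    """%REC and %DET of a 1-D signal (embedding parameters are illustrative)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])  # time-delay embedding
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    rec = (dist < radius * x.std()).astype(int)       # recurrence matrix
    np.fill_diagonal(rec, 0)
    percent_rec = 100.0 * rec.sum() / (n * n - n)

    # count recurrent points lying on diagonal lines of length >= lmin (upper triangle only)
    det_points = 0
    for k in range(1, n):
        run = 0
        for v in list(np.diagonal(rec, offset=k)) + [0]:   # trailing 0 flushes the last run
            if v:
                run += 1
            else:
                if run >= lmin:
                    det_points += run
                run = 0
    percent_det = 100.0 * 2 * det_points / rec.sum() if rec.sum() else 0.0
    return percent_rec, percent_det

t = np.linspace(0, 20 * np.pi, 400)
print(rqa_indices(np.sin(t)))
```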

  17. Recurrence Quantification Analysis of Sentence-Level Speech Kinematics

    PubMed Central

    Tiede, Mark; Riley, Michael A.; Whalen, D. H.

    2016-01-01

    Purpose Current approaches to assessing sentence-level speech variability rely on measures that quantify variability across utterances and use normalization procedures that alter raw trajectory data. The current work tests the feasibility of a less restrictive nonlinear approach—recurrence quantification analysis (RQA)—via a procedural example and subsequent analysis of kinematic data. Method To test the feasibility of RQA, lip aperture (i.e., the Euclidean distance between lip-tracking sensors) was recorded for 21 typically developing adult speakers during production of a simple utterance. The utterance was produced in isolation and in carrier structures differing just in length or in length and complexity. Four RQA indices were calculated: percent recurrence (%REC), percent determinism (%DET), stability (MAXLINE), and stationarity (TREND). Results Percent determinism (%DET) decreased only for the most linguistically complex sentence; MAXLINE decreased as a function of linguistic complexity but increased for the longer-only sentence; TREND decreased as a function of both length and linguistic complexity. Conclusions This research note demonstrates the feasibility of using RQA as a tool to compare speech variability across speakers and groups. RQA offers promise as a technique to assess effects of potential stressors (e.g., linguistic or cognitive factors) on the speech production system. PMID:27824987

  18. Complexity quantification of cardiac variability time series using improved sample entropy (I-SampEn).

    PubMed

    Marwaha, Puneeta; Sunkaria, Ramesh Kumar

    2016-09-01

    The sample entropy (SampEn) has been widely used to quantify the complexity of RR-interval time series. It is a fact that higher complexity, and hence entropy, is associated with the RR-interval time series of healthy subjects. However, SampEn suffers from the disadvantage that it assigns higher entropy to randomized surrogate time series as well as to certain pathological time series, which is a misleading observation. This wrong estimation of the complexity of a time series may be due to the fact that the existing SampEn technique updates the threshold value as a function of the long-term standard deviation (SD) of the time series. However, the time series of certain pathologies exhibit substantial variability in beat-to-beat fluctuations, so the SD of the first-order difference (short-term SD) of the time series should be considered when updating the threshold value, to account for the period-to-period variations inherent in the time series. In the present work, improved sample entropy (I-SampEn) is proposed, a new methodology in which the threshold value is updated by considering the period-to-period variations of the time series. The I-SampEn technique assigns higher entropy values to age-matched healthy subjects than to patients suffering from atrial fibrillation (AF) and diabetes mellitus (DM). Our results are in agreement with the theory of reduced complexity of RR-interval time series in patients suffering from chronic cardiovascular and non-cardiovascular diseases.
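
    A minimal sketch of sample entropy in which the tolerance is tied to the SD of the first-order differences, in the spirit of the modification described (the exact scaling used in the paper may differ):

```python
import numpy as np

def sampen(x, m=2, r_factor=0.2, use_short_term_sd=True):
    """Sample entropy; tolerance r is scaled by the short-term SD (SD of first differences)."""
    x = np.asarray(x, dtype=float)
    sd = np.std(np.diff(x)) if use_short_term_sd else np.std(x)
    r = r_factor * sd

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (self-matches excluded)
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    B = count_matches(m)       # template matches of length m
    A = count_matches(m + 1)   # template matches of length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rr = 0.8 + 0.05 * np.random.randn(300)   # synthetic RR-interval series
print(sampen(rr))
```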

  19. Estimating beta diversity for under-sampled communities using the variably weighted Odum dissimilarity index and OTUshuff

    USDA-ARS?s Scientific Manuscript database

    Characterization of complex microbial communities by DNA sequencing has become a standard technique in microbial ecology. Yet, particular features of this approach render traditional methods of community comparison problematic. In particular, a very low proportion of community members are typically ...

  20. Adaptive control for a class of nonlinear complex dynamical systems with uncertain complex parameters and perturbations

    PubMed Central

    Liu, Jian; Liu, Kexin; Liu, Shutang

    2017-01-01

    In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller to ensure this type of CVCSs asymptotically stable and for selecting complex update laws to estimate unknown complex parameters. In particular, combining Lyapunov functions dependent on complex-valued vectors and back-stepping technique, sufficient criteria on stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results. PMID:28467431

  1. Adaptive control for a class of nonlinear complex dynamical systems with uncertain complex parameters and perturbations.

    PubMed

    Liu, Jian; Liu, Kexin; Liu, Shutang

    2017-01-01

    In this paper, adaptive control is extended from real space to complex space, resulting in a new control scheme for a class of n-dimensional time-dependent strict-feedback complex-variable chaotic (hyperchaotic) systems (CVCSs) in the presence of uncertain complex parameters and perturbations, which has not been previously reported in the literature. In detail, we have developed a unified framework for designing the adaptive complex scalar controller to ensure this type of CVCSs asymptotically stable and for selecting complex update laws to estimate unknown complex parameters. In particular, combining Lyapunov functions dependent on complex-valued vectors and back-stepping technique, sufficient criteria on stabilization of CVCSs are derived in the sense of Wirtinger calculus in complex space. Finally, numerical simulation is presented to validate our theoretical results.

  2. Quality Assurance in the Presence of Variability

    NASA Astrophysics Data System (ADS)

    Lauenroth, Kim; Metzger, Andreas; Pohl, Klaus

    Software Product Line Engineering (SPLE) is a reuse-driven development paradigm that has been applied successfully in information system engineering and other domains. Quality assurance of the reusable artifacts of the product line (e.g. requirements, design, and code artifacts) is essential for successful product line engineering. As those artifacts are reused in several products, a defect in a reusable artifact can affect several products of the product line. A central challenge for quality assurance in product line engineering is how to consider product line variability. Since the reusable artifacts contain variability, quality assurance techniques from single-system engineering cannot directly be applied to those artifacts. Therefore, different strategies and techniques have been developed for quality assurance in the presence of variability. In this chapter, we describe those strategies and discuss in more detail one of them, the so-called comprehensive strategy. The comprehensive strategy aims at checking the quality of all possible products of the product line and thus offers the greatest benefit, since it is able to uncover defects in all possible products of the product line. However, the central challenge in applying the comprehensive strategy is the complexity that results from the product line variability and the large number of potential products of a product line. In this chapter, we present one concrete technique that we have developed to implement the comprehensive strategy and address this challenge. The technique is based on model checking technology and allows for a comprehensive verification of domain artifacts against temporal logic properties.

  3. Progress in mental workload measurement

    NASA Technical Reports Server (NTRS)

    Moray, Neville; Turksen, Burhan; Aidie, Paul; Drascic, David; Eisen, Paul

    1986-01-01

    Two new techniques are described, one using subjective and the other physiological data, for the measurement of workload in complex tasks. The subjective approach uses fuzzy measurement to analyze and predict the difficulty of combinations of skill-based and rule-based behavior from the difficulty of skill-based behavior and rule-based behavior measured separately. The physiological technique offers an on-line, real-time filter for measuring the Mulder signal at 0.1 Hz in the heart rate variability spectrum.

  4. SAINT: A combined simulation language for modeling man-machine systems

    NASA Technical Reports Server (NTRS)

    Seifert, D. J.

    1979-01-01

    SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for the design and analysis of complex man-machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and applications of SAINT are discussed.

  5. multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2017-11-01

    Uncertainty quantification (UQ) can improve our understanding of the sensitivity of gas-liquid multiphase flows to variability about inflow conditions and fluid properties, creating a valuable tool for engineers. While non-intrusive UQ methods (e.g., Monte Carlo) are simple and robust, the cost associated with these techniques can render them unrealistic. In contrast, intrusive UQ techniques modify the governing equations by replacing deterministic variables with stochastic variables, adding complexity, but making UQ cost effective. Our numerical framework, called multiUQ, introduces an intrusive UQ approach for gas-liquid flows, leveraging a polynomial chaos expansion of the stochastic variables: density, momentum, pressure, viscosity, and surface tension. The gas-liquid interface is captured using a conservative level set approach, including a modified reinitialization equation which is robust and quadrature free. A least-squares method is leveraged to compute the stochastic interface normal and curvature needed in the continuum surface force method for surface tension. The solver is tested by applying uncertainty to one or two variables and verifying results against the Monte Carlo approach. NSF Grant #1511325.

  6. Optimal dimensionality reduction of complex dynamics: the chess game as diffusion on a free-energy landscape.

    PubMed

    Krivov, Sergei V

    2011-07-01

    Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game--the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.

  7. Optimal dimensionality reduction of complex dynamics: The chess game as diffusion on a free-energy landscape

    NASA Astrophysics Data System (ADS)

    Krivov, Sergei V.

    2011-07-01

    Dimensionality reduction is ubiquitous in the analysis of complex dynamics. The conventional dimensionality reduction techniques, however, focus on reproducing the underlying configuration space, rather than the dynamics itself. The constructed low-dimensional space does not provide a complete and accurate description of the dynamics. Here I describe how to perform dimensionality reduction while preserving the essential properties of the dynamics. The approach is illustrated by analyzing the chess game—the archetype of complex dynamics. A variable that provides complete and accurate description of chess dynamics is constructed. The winning probability is predicted by describing the game as a random walk on the free-energy landscape associated with the variable. The approach suggests a possible way of obtaining a simple yet accurate description of many important complex phenomena. The analysis of the chess game shows that the approach can quantitatively describe the dynamics of processes where human decision-making plays a central role, e.g., financial and social dynamics.

  8. Surviving blind decomposition: A distributional analysis of the time-course of complex word recognition.

    PubMed

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-11-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. Form-then-meaning accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings, whereas form-and-meaning models posit that recognition of complex word forms involves the simultaneous access of morphological and semantic information. The study reported here addresses this theoretical discrepancy by applying a nonparametric distributional technique of survival analysis (Reingold & Sheridan, 2014) to 2 behavioral measures of complex word processing. Across 7 experiments reported here, this technique is employed to estimate the point in time at which orthographic, morphological, and semantic variables exert their earliest discernible influence on lexical decision RTs and eye movement fixation durations. Contrary to form-then-meaning predictions, Experiments 1-4 reveal that surface frequency is the earliest lexical variable to exert a demonstrable influence on lexical decision RTs for English and Dutch derived words (e.g., badness; bad + ness), English pseudoderived words (e.g., wander; wand + er) and morphologically simple control words (e.g., ballad; ball + ad). Furthermore, for derived word processing across lexical decision and eye-tracking paradigms (Experiments 1-2; 5-7), semantic effects emerge early in the time-course of word recognition, and their effects either precede or emerge simultaneously with morphological effects. These results are not consistent with the premises of the form-then-meaning view of complex word recognition, but are convergent with a form-and-meaning account of complex word recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Complex Morphological Variability in Complex Evaporitic Systems: Thermal Spring Snails from the Chihuahuan Desert, Mexico

    NASA Astrophysics Data System (ADS)

    Tang, Carol M.; Roopnarine, Peter D.

    2003-11-01

    Thermal springs in evaporitic environments provide a unique biological laboratory in which to study natural selection and evolutionary diversification. These isolated systems may be an analogue for conditions in early Earth or Mars history. One modern example of such a system can be found in the Chihuahuan Desert of north-central Mexico. The Cuatro Cienegas basin hosts a series of thermal springs that form a complex of aquatic ecosystems under a range of environmental conditions. Using landmark-based morphometric techniques, we have quantified an unusually high level of morphological variability in the endemic gastropod Mexipyrgus from Cuatro Cienegas. The differentiation is seen both within and between hydrological systems. Our results suggest that this type of environmental system is capable of producing and maintaining a high level of morphological diversity on small spatial scales, and thus should be a target for future astrobiological research.
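
    Landmark-based morphometrics typically starts with a Procrustes superimposition of landmark configurations; the sketch below, with made-up 2-D landmark coordinates rather than the study's data, uses scipy's procrustes to quantify the shape difference between two shell outlines.

```python
import numpy as np
from scipy.spatial import procrustes

# Two hypothetical sets of 5 homologous 2-D landmarks digitized on shell outlines.
shell_a = np.array([[0.0, 0.0], [1.0, 0.1], [1.9, 0.9], [1.1, 1.8], [0.2, 1.0]])
shell_b = np.array([[0.1, 0.0], [1.2, 0.2], [2.1, 1.1], [1.0, 2.0], [0.1, 1.1]])

# Procrustes removes translation, scale and rotation, leaving pure shape differences.
std_a, std_b, disparity = procrustes(shell_a, shell_b)
print(f"Procrustes disparity (shape difference): {disparity:.4f}")
```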

  10. Single-step fabrication of thin-film linear variable bandpass filters based on metal-insulator-metal geometry.

    PubMed

    Williams, Calum; Rughoobur, Girish; Flewitt, Andrew J; Wilkinson, Timothy D

    2016-11-10

    A single-step fabrication method is presented for ultra-thin, linearly variable optical bandpass filters (LVBFs) based on a metal-insulator-metal arrangement using modified evaporation deposition techniques. This alternate process methodology offers reduced complexity and cost in comparison to conventional techniques for fabricating LVBFs. We are able to achieve linear variation of insulator thickness across a sample, by adjusting the geometrical parameters of a typical physical vapor deposition process. We demonstrate LVBFs with spectral selectivity from 400 to 850 nm based on Ag (25 nm) and MgF2 (75-250 nm). Maximum spectral transmittance is measured at ∼70% with a Q-factor of ∼20.

  11. Understanding the determinants of problem-solving behavior in a complex environment

    NASA Technical Reports Server (NTRS)

    Casner, Stephen A.

    1994-01-01

    It is often argued that problem-solving behavior in a complex environment is determined as much by the features of the environment as by the goals of the problem solver. This article explores a technique to determine the extent to which measured features of a complex environment influence problem-solving behavior observed within that environment. In this study, the technique is used to determine how complex flight deck and air traffic control environment influences the strategies used by airline pilots when controlling the flight path of a modern jetliner. Data collected aboard 16 commercial flights are used to measure selected features of the task environment. A record of the pilots' problem-solving behavior is analyzed to determine to what extent behavior is adapted to the environmental features that were measured. The results suggest that the measured features of the environment account for as much as half of the variability in the pilots' problem-solving behavior and provide estimates on the probable effects of each environmental feature.

  12. An Efficient Biometric-Based Algorithm Using Heart Rate Variability for Securing Body Sensor Networks

    PubMed Central

    Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting

    2015-01-01

    Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside or around the human body to monitor vital signals, such as Electroencephalogram (EEG), Photoplethysmography (PPG), Electrocardiogram (ECG), etc. Each sensor node in a BSN delivers major information; therefore, it is very significant to provide data confidentiality and security. All existing approaches to secure BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time, but also consume large amounts of energy, power and memory during data transmission. However, it is indispensable to put forward an energy-efficient and computationally less complex authentication technique for BSNs. In this paper, a novel biometric-based algorithm is proposed, which utilizes Heart Rate Variability (HRV) in a simple key generation process to secure the BSN. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), Data Encryption Standard (DES) and Rivest Shamir Adleman (RSA). Simulation is performed in Matlab and the results suggest that the proposed algorithm is quite efficient in terms of transmission time utilization, average remaining energy and total power consumption. PMID:26131666
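
    The abstract does not give the paper's exact key-generation scheme, so the following is only a hypothetical illustration of the general idea: deriving key material from quantized inter-pulse-interval (HRV) features and hashing it to a fixed-length key. Every function and constant here is an assumption for illustration.

```python
import hashlib
import numpy as np

def hrv_key(rr_intervals_ms, n_bits=128):
    """Hypothetical sketch: derive a symmetric key from HRV (inter-pulse intervals)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)
    # 1 bit per interval pair: did the interval lengthen or shorten?
    bits = (diffs > 0).astype(np.uint8)[:n_bits]
    packed = np.packbits(bits).tobytes()
    # hash to spread the extracted entropy over a fixed-length key
    return hashlib.sha256(packed).digest()[: n_bits // 8]

rr = 800 + 40 * np.random.randn(200)   # synthetic RR intervals in milliseconds
print(hrv_key(rr).hex())
```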

  13. An Efficient Biometric-Based Algorithm Using Heart Rate Variability for Securing Body Sensor Networks.

    PubMed

    Pirbhulal, Sandeep; Zhang, Heye; Mukhopadhyay, Subhas Chandra; Li, Chunyue; Wang, Yumei; Li, Guanglin; Wu, Wanqing; Zhang, Yuan-Ting

    2015-06-26

    Body Sensor Network (BSN) is a network of several associated sensor nodes on, inside or around the human body to monitor vital signals, such as Electroencephalogram (EEG), Photoplethysmography (PPG), Electrocardiogram (ECG), etc. Each sensor node in a BSN delivers major information; therefore, it is very significant to provide data confidentiality and security. All existing approaches to secure BSNs are based on complex cryptographic key generation procedures, which not only demand high resource utilization and computation time, but also consume large amounts of energy, power and memory during data transmission. However, it is indispensable to put forward an energy-efficient and computationally less complex authentication technique for BSNs. In this paper, a novel biometric-based algorithm is proposed, which utilizes Heart Rate Variability (HRV) in a simple key generation process to secure the BSN. Our proposed algorithm is compared with three data authentication techniques, namely Physiological Signal based Key Agreement (PSKA), Data Encryption Standard (DES) and Rivest Shamir Adleman (RSA). Simulation is performed in Matlab and the results suggest that the proposed algorithm is quite efficient in terms of transmission time utilization, average remaining energy and total power consumption.

  14. Surface-Sensitive Microwear Texture Analysis of Attrition and Erosion.

    PubMed

    Ranjitkar, S; Turan, A; Mann, C; Gully, G A; Marsman, M; Edwards, S; Kaidonis, J A; Hall, C; Lekkas, D; Wetselaar, P; Brook, A H; Lobbezoo, F; Townsend, G C

    2017-03-01

    Scale-sensitive fractal analysis of high-resolution 3-dimensional surface reconstructions of wear patterns has advanced our knowledge in evolutionary biology, and has opened up opportunities for translatory applications in clinical practice. To elucidate the microwear characteristics of attrition and erosion in worn natural teeth, we scanned 50 extracted human teeth using a confocal profiler at a high optical resolution (X-Y, 0.17 µm; Z < 3 nm). Our hypothesis was that microwear complexity would be greater in erosion and that anisotropy would be greater in attrition. The teeth were divided into 4 groups, including 2 wear types (attrition and erosion) and 2 locations (anterior and posterior teeth; n = 12 for each anterior group, n = 13 for each posterior group) for 2 tissue types (enamel and dentine). The raw 3-dimensional data cloud was subjected to a newly developed rigorous standardization technique to reduce interscanner variability as well as to filter anomalous scanning data. Linear mixed effects (regression) analyses conducted separately for the dependent variables, complexity and anisotropy, showed the following effects of the independent variables: significant interactions between wear type and tissue type (P = 0.0157 and P = 0.0003, respectively) and significant effects of location (P < 0.0001 and P = 0.0035, respectively). There were significant associations between complexity and anisotropy when the dependent variable was either complexity (P = 0.0003) or anisotropy (P = 0.0014). Our findings of greater complexity in erosion and greater anisotropy in attrition confirm our hypothesis. The greatest geometric means were noted in dentine erosion for complexity and dentine attrition for anisotropy. Dentine also exhibited microwear characteristics that were more consistent with wear types than enamel. Overall, our findings could complement macrowear assessment in dental clinical practice and research and could assist in the early detection and management of pathologic tooth wear.
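
    The statistical step is a linear mixed-effects regression; the sketch below, with hypothetical column names, synthetic values and a random intercept per tooth, shows how such a model could be fit with statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical microwear data: one row per scanned surface region (values are synthetic).
rng = np.random.default_rng(1)
n = 100
df = pd.DataFrame({
    "complexity": rng.gamma(2.0, 1.0, n),
    "anisotropy": rng.beta(2.0, 5.0, n),
    "wear_type": rng.choice(["attrition", "erosion"], n),
    "tissue_type": rng.choice(["enamel", "dentine"], n),
    "location": rng.choice(["anterior", "posterior"], n),
    "tooth_id": rng.integers(0, 25, n),      # grouping variable for the random effect
})

# Fixed effects: wear type x tissue type interaction plus location; random intercept per tooth.
model = smf.mixedlm("complexity ~ wear_type * tissue_type + location + anisotropy",
                    data=df, groups=df["tooth_id"])
result = model.fit()
print(result.summary())
```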

  15. Kinematic and kinetic analysis of overhand, sidearm and underhand lacrosse shot techniques.

    PubMed

    Macaulay, Charles A J; Katz, Larry; Stergiou, Pro; Stefanyshyn, Darren; Tomaghelli, Luciano

    2017-12-01

    Lacrosse requires the coordinated performance of many complex skills. One of these skills is shooting on the opponents' net using one of three techniques: overhand, sidearm or underhand. The purpose of this study was to (i) determine which technique generated the highest ball velocity and greatest shot accuracy and (ii) identify kinematic and kinetic variables that contribute to a high velocity and high accuracy shot. Twelve elite male lacrosse players participated in this study. Kinematic data were sampled at 250 Hz, while two-dimensional force plates collected ground reaction force data (1000 Hz). Statistical analysis showed significantly greater ball velocity for the sidearm technique than overhand (P < 0.001) and underhand (P < 0.001) techniques. No statistical difference was found for shot accuracy (P > 0.05). Kinematic and kinetic variables were not significantly correlated to shot accuracy or velocity across all shot types; however, when analysed independently, the lead foot horizontal impulse showed a negative correlation with underhand ball velocity (P = 0.042). This study identifies the technique with the highest ball velocity, defines kinematic and kinetic predictors related to ball velocity and provides information to coaches and athletes concerned with improving lacrosse shot performance.

  16. Beat to beat variability in cardiovascular variables: noise or music?

    NASA Technical Reports Server (NTRS)

    Appel, M. L.; Berger, R. D.; Saul, J. P.; Smith, J. M.; Cohen, R. J.

    1989-01-01

    Cardiovascular variables such as heart rate, arterial blood pressure, stroke volume and the shape of electrocardiographic complexes all fluctuate on a beat to beat basis. These fluctuations have traditionally been ignored or, at best, treated as noise to be averaged out. The variability in cardiovascular signals reflects the homeodynamic interplay between perturbations to cardiovascular function and the dynamic response of the cardiovascular regulatory systems. Modern signal processing techniques provide a means of analyzing beat to beat fluctuations in cardiovascular signals, so as to permit a quantitative, noninvasive or minimally invasive method of assessing closed loop hemodynamic regulation and cardiac electrical stability. This method promises to provide a new approach to the clinical diagnosis and management of alterations in cardiovascular regulation and stability.
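
    One of the standard signal processing techniques alluded to is spectral analysis of beat-to-beat fluctuations; the sketch below uses a synthetic RR-interval series and the conventional LF/HF band limits (the data and bands are assumptions, not taken from the paper).

```python
import numpy as np
from scipy.signal import welch
from scipy.interpolate import interp1d
from scipy.integrate import trapezoid

# Synthetic RR-interval series (s) with a respiratory-like 0.25 Hz modulation superimposed.
rng = np.random.default_rng(0)
t_approx = np.cumsum(np.full(300, 0.85))                  # approximate beat times (s)
rr = 0.85 + 0.03 * np.sin(2 * np.pi * 0.25 * t_approx)
rr += 0.01 * rng.standard_normal(300)

# Resample the unevenly spaced RR series onto a uniform 4 Hz grid before Welch PSD estimation.
t_beats = np.cumsum(rr)
fs = 4.0
t_uniform = np.arange(t_beats[0], t_beats[-1], 1.0 / fs)
rr_uniform = interp1d(t_beats, rr, kind="cubic")(t_uniform)

f, psd = welch(rr_uniform - rr_uniform.mean(), fs=fs, nperseg=256)
lf_band = (f >= 0.04) & (f < 0.15)
hf_band = (f >= 0.15) & (f < 0.40)
lf = trapezoid(psd[lf_band], f[lf_band])
hf = trapezoid(psd[hf_band], f[hf_band])
print(f"LF power: {lf:.2e}  HF power: {hf:.2e}  LF/HF: {lf / hf:.2f}")
```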

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chicoine, T.K.; Fay, P.K.; Nielsen, G.A.

    Soil characteristics, elevation, annual precipitation, potential evapotranspiration, length of frost-free season, and mean maximum July temperature were estimated for 116 established infestations of spotted knapweed (Centaurea maculosa Lam. # CENMA) in Montana using basic land resource maps. Areas potentially vulnerable to invasion by the plant were delineated on the basis of representative edaphic and climatic characteristics. No single environmental variable was an effective predictor of sites vulnerable to invasion by spotted knapweed. Only a combination of variables was effective, indicating that the factors that regulate adaptability of this plant are complex. This technique provides a first-approximation map of the regions most similar environmentally to infested sites and, therefore, most vulnerable to further invasion. This weed migration prediction technique shows promise for predicting suitable habitats of other invader species. 6 references, 4 figures, 1 table.

  18. Sacroiliac Joint Interventions.

    PubMed

    Soto Quijano, David A; Otero Loperena, Eduardo

    2018-02-01

    Sacroiliac joint (SIJ) pain is an important cause of lower back problems. Multiple SIJ injection techniques have been proposed over the years to help in the diagnosis and treatment of this condition. However, the SIJ innervation is complex and variable, and truly intra-articular injections are sometimes difficult to obtain. Different sacroiliac joint injections have been shown to provide pain relief in patients suffering from this ailment. Various techniques for intra-articular injections, sacral branch blocks and radiofrequency ablation, both fluoroscopy-guided and ultrasound-guided, are discussed in this paper. Less common techniques such as prolotherapy, platelet-rich plasma injections and botulinum toxin injections are also discussed. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Evidence of Deterministic Components in the Apparent Randomness of GRBs: Clues of a Chaotic Dynamic

    PubMed Central

    Greco, G.; Rosa, R.; Beskin, G.; Karpov, S.; Romano, L.; Guarnieri, A.; Bartolini, C.; Bedogni, R.

    2011-01-01

    Prompt γ-ray emissions from gamma-ray bursts (GRBs) exhibit a vast range of extremely complex temporal structures with a typical variability time-scale significantly short – as fast as milliseconds. This work aims to investigate the apparent randomness of the GRB time profiles making extensive use of nonlinear techniques combining the advanced spectral method of the Singular Spectrum Analysis (SSA) with the classical tools provided by the Chaos Theory. Despite their morphological complexity, we detect evidence of a non stochastic short-term variability during the overall burst duration – seemingly consistent with a chaotic behavior. The phase space portrait of such variability shows the existence of a well-defined strange attractor underlying the erratic prompt emission structures. This scenario can shed new light on the ultra-relativistic processes believed to take place in GRB explosions and usually associated with the birth of a fast-spinning magnetar or accretion of matter onto a newly formed black hole. PMID:22355609

  20. Evidence of deterministic components in the apparent randomness of GRBs: clues of a chaotic dynamic.

    PubMed

    Greco, G; Rosa, R; Beskin, G; Karpov, S; Romano, L; Guarnieri, A; Bartolini, C; Bedogni, R

    2011-01-01

    Prompt γ-ray emissions from gamma-ray bursts (GRBs) exhibit a vast range of extremely complex temporal structures with a typical variability time-scale significantly short - as fast as milliseconds. This work aims to investigate the apparent randomness of the GRB time profiles making extensive use of nonlinear techniques combining the advanced spectral method of the Singular Spectrum Analysis (SSA) with the classical tools provided by the Chaos Theory. Despite their morphological complexity, we detect evidence of a non stochastic short-term variability during the overall burst duration - seemingly consistent with a chaotic behavior. The phase space portrait of such variability shows the existence of a well-defined strange attractor underlying the erratic prompt emission structures. This scenario can shed new light on the ultra-relativistic processes believed to take place in GRB explosions and usually associated with the birth of a fast-spinning magnetar or accretion of matter onto a newly formed black hole.

  1. Exploring biological, chemical and geomorphological patterns in fluvial ecosystems with Structural Equation Modelling

    NASA Astrophysics Data System (ADS)

    Bizzi, S.; Surridge, B.; Lerner, D. N.

    2009-04-01

    River ecosystems represent complex networks of interacting biological, chemical and geomorphological processes. These processes generate spatial and temporal patterns in biological, chemical and geomorphological variables, and a growing number of these variables are now being used to characterise the status of rivers. However, integrated analyses of these biological-chemical-geomorphological networks have rarely been undertaken, and as a result our knowledge of the underlying processes and how they generate the resulting patterns remains weak. The apparent complexity of the networks involved, and the lack of coherent datasets, represent two key challenges to such analyses. In this paper we describe the application of a novel technique, Structural Equation Modelling (SEM), to the investigation of biological, chemical and geomorphological data collected from rivers across England and Wales. The SEM approach is a multivariate statistical technique enabling simultaneous examination of direct and indirect relationships across a network of variables. Further, SEM allows a-priori conceptual or theoretical models to be tested against available data. This is a significant departure from the solely exploratory analyses which characterise other multivariate techniques. We took biological, chemical and river habitat survey data collected by the Environment Agency for 400 sites in rivers spread across England and Wales, and created a single, coherent dataset suitable for SEM analyses. Biological data cover benthic macroinvertebrates, chemical data relate to a range of standard parameters (e.g. BOD, dissolved oxygen and phosphate concentration), and geomorphological data cover factors such as river typology, substrate material and degree of physical modification. We developed a number of a-priori conceptual models, reflecting current research questions or existing knowledge, and tested the ability of these conceptual models to explain the variance and covariance within the dataset. The conceptual models we developed were able to explain correctly the variance and covariance shown by the datasets, proving to be a relevant representation of the processes involved. The models explained 65% of the variance in indices describing benthic macroinvertebrate communities. Dissolved oxygen was of primary importance, but geomorphological factors, including river habitat type and degree of habitat degradation, also had significant explanatory power. The addition of spatial variables, such as latitude or longitude, did not provide additional explanatory power. This suggests that the variables already included in the models effectively represented the eco-regions across which our data were distributed. The models produced new insights into the relative importance of chemical and geomorphological factors for river macroinvertebrate communities. The SEM technique proved a powerful tool for exploring complex biological-chemical-geomorphological networks, for example able to deal with the co-correlations that are common in rivers due to multiple feedback mechanisms.
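
    A sketch of how such an a-priori structural model could be specified and fit in Python using the semopy package; the variable names, synthetic data and path structure below are hypothetical stand-ins for the biological-chemical-geomorphological dataset, not the authors' model.

```python
import numpy as np
import pandas as pd
import semopy

# Synthetic site-by-variable data standing in for the national survey dataset.
rng = np.random.default_rng(0)
n = 400
habitat_mod = rng.normal(size=n)                        # degree of physical modification
substrate = 0.5 * habitat_mod + rng.normal(size=n)      # substrate condition
bod = rng.normal(size=n)                                # biochemical oxygen demand
dissolved_o2 = -0.4 * habitat_mod - 0.6 * bod + rng.normal(size=n)
invert_index = 0.7 * dissolved_o2 - 0.3 * habitat_mod + rng.normal(size=n)

data = pd.DataFrame({"habitat_mod": habitat_mod, "substrate": substrate, "bod": bod,
                     "dissolved_o2": dissolved_o2, "invert_index": invert_index})

# A-priori structural model: direct and indirect (via oxygen) effects on the invertebrate index.
desc = """
dissolved_o2 ~ habitat_mod + bod
invert_index ~ dissolved_o2 + habitat_mod + substrate
"""
model = semopy.Model(desc)
model.fit(data)
print(model.inspect())   # path coefficients, standard errors, p-values
```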

  2. Interpretation of the Lempel-Ziv complexity measure in the context of biomedical signal analysis.

    PubMed

    Aboy, Mateo; Hornero, Roberto; Abásolo, Daniel; Alvarez, Daniel

    2006-11-01

    Lempel-Ziv complexity (LZ) and derived LZ algorithms have been extensively used to solve information theoretic problems such as coding and lossless data compression. In recent years, LZ has been widely used in biomedical applications to estimate the complexity of discrete-time signals. Despite its popularity as a complexity measure for biosignal analysis, the question of LZ interpretability and its relationship to other signal parameters and to other metrics has not been previously addressed. We have carried out an investigation aimed at gaining a better understanding of the LZ complexity itself, especially regarding its interpretability as a biomedical signal analysis technique. Our results indicate that LZ is particularly useful as a scalar metric to estimate the bandwidth of random processes and the harmonic variability in quasi-periodic signals.
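
    For reference, a sketch (not the authors' code) of the classic LZ76-style parsing applied to a binarized signal; binarization about the median and normalization by the random-sequence bound n/log2(n) are common conventions in biosignal applications.

```python
import numpy as np

def lempel_ziv_complexity(bits):
    """Number of phrases in an LZ76-style sequential parsing of a binary sequence."""
    s = "".join(map(str, bits))
    i, c = 0, 0
    while i < len(s):
        j = i + 1
        # extend the current phrase while it already occurs in the preceding text
        while j <= len(s) and s[i:j] in s[:j - 1]:
            j += 1
        c += 1        # a new phrase ends here (the final, possibly incomplete, phrase counts too)
        i = j
    return c

def normalized_lz(x):
    """Binarize about the median, then normalize by the random-sequence bound n / log2(n)."""
    x = np.asarray(x, dtype=float)
    b = (x > np.median(x)).astype(int)
    n = len(b)
    return lempel_ziv_complexity(b) * np.log2(n) / n

rng = np.random.default_rng(0)
print(normalized_lz(rng.standard_normal(1000)))         # broadband noise: close to 1
print(normalized_lz(np.sin(np.linspace(0, 60, 1000))))  # quasi-periodic signal: much lower
```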

  3. Transverse injection into Mach 2 flow behind a rearward-facing step - A 3-D, compressible flow test case for hypersonic combustor CFD validation

    NASA Technical Reports Server (NTRS)

    Mcdaniel, James C.; Fletcher, Douglas G.; Hartfield, Roy J.; Hollo, Steven D.

    1991-01-01

    A spatially-complete data set of the important primitive flow variables is presented for the complex, nonreacting, 3D unit combustor flow field employing transverse injection into a Mach 2 flow behind a rearward-facing step. A unique wind tunnel facility providing the capability for iodine seeding was built specifically for these measurements. Two optical techniques based on laser-induced-iodine fluorescence were developed and utilized for nonintrusive, in situ flow field measurements. LDA provided both mean and fluctuating velocity component measurements. A thermographic phosphor wall temperature measurement technique was developed and employed. Data from the 2D flow over a rearward-facing step and the complex 3D mixing flow with injection are reported.

  4. Complexity in the Chinese stock market and its relationships with monetary policy intensity

    NASA Astrophysics Data System (ADS)

    Ying, Shangjun; Fan, Ying

    2014-01-01

    This paper introduces how to formulate the CSI300 evolving stock index using the Paasche compiling technique of weighted indexes, after presenting the GCA model. It studies the dynamics characteristics of the Chinese stock market and its relationships with monetary policy intensity, based on the evolving stock index. It concludes that it is possible to construct a dynamics equation of the Chinese stock market using three variables, and that, from a chaos point of view, it is not effective to regulate market complexity according to the changing intensity of external factors.
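
    The Paasche technique referred to weights prices by current-period quantities, P_t = sum(p_t q_t) / sum(p_0 q_t); a small sketch of the formula with made-up constituent data (not CSI300 constituents) is shown below.

```python
import numpy as np

def paasche_index(p_base, p_now, q_now):
    """Paasche price index: current prices weighted by current-period quantities."""
    p_base, p_now, q_now = map(np.asarray, (p_base, p_now, q_now))
    return 100.0 * np.sum(p_now * q_now) / np.sum(p_base * q_now)

# Illustrative three-constituent example: base prices, current prices, current-period quantities.
p_base = [10.0, 25.0, 40.0]
p_now = [12.0, 24.0, 44.0]
q_now = [300, 150, 80]
print(f"Paasche index: {paasche_index(p_base, p_now, q_now):.1f}")
```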

  5. Computer modeling and simulation of human movement. Applications in sport and rehabilitation.

    PubMed

    Neptune, R R

    2000-05-01

    Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult but computer modeling and simulation allows for the identification of these complex interactions and causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.

  6. Management of Severe and Complex Hypopharyngeal and/or Laryngotracheal Stenoses by Various Open Surgical Procedures: A Retrospective Study of Seventeen Patients.

    PubMed

    Chen, Wenxian; Gao, Pengfei; Cui, Pengcheng; Ruan, Yanyan; Liu, Zhi; Sun, Yongzhu; Bian, Ka

    2016-01-01

    To systematically study various surgical approaches for treating complex hypopharyngeal and/or laryngotracheal stenoses at a variety of sites and levels. We retrospectively analyzed the treatment of 17 patients with severe and complex hypopharyngeal and/or laryngotracheal stenosis at various sites and levels of severity. All 17 patients initially had a tracheostomy. Thirteen had failed previous laser lysis and/or dilation treatment. Given the high severity and complexity of the stenoses, all of these patients were treated by open surgical reconstruction techniques using repairing grafts (flaps), followed by stenting. Thirteen of the 17 patients had successful decannulation 1-8 months post-operation, with a stable airway and adequate vocal and swallowing function. Two patients with complex hypopharyngeal and esophageal stenosis had unsuccessful decannulation. Follow-up was lost for 1 patient with complex hypopharyngeal and esophageal stenosis and 1 patient with original hypopharyngeal stenosis and recurrent thoracotracheal stenosis. Despite failure of regular treatments using laser lysis and/or dilation therapy, severe and complex hypopharyngeal and/or laryngotracheal stenosis may be successfully treated by various open surgical reconstruction techniques using different grafts (flaps), depending on the site and severity of the stenosis. © 2016 S. Karger AG, Basel.

  7. Datamining approaches for modeling tumor control probability.

    PubMed

    Naqa, Issam El; Deasy, Joseph O; Mu, Yi; Huang, Ellen; Hope, Andrew J; Lindsay, Patricia E; Apte, Aditya; Alaly, James; Bradley, Jeffrey D

    2010-11-01

    Tumor control probability (TCP) to radiotherapy is determined by complex interactions between tumor biology, tumor microenvironment, radiation dosimetry, and patient-related variables. The complexity of these heterogeneous variable interactions constitutes a challenge for building predictive models for routine clinical practice. We describe a datamining framework that can unravel the higher order relationships among dosimetric dose-volume prognostic variables, interrogate various radiobiological processes, and generalize to unseen data when applied prospectively. Several datamining approaches are discussed, including dose-volume metrics, equivalent uniform dose, the mechanistic Poisson model, and model building methods using statistical regression and machine learning techniques. Institutional datasets of non-small cell lung cancer (NSCLC) patients are used to demonstrate these methods. The performance of the different methods was evaluated using bivariate Spearman rank correlations (rs). Over-fitting was controlled via resampling methods. Using a dataset of 56 patients with primary NSCLC tumors and 23 candidate variables, we estimated GTV volume and V75 to be the best model parameters for predicting TCP using statistical resampling and a logistic model. Using these variables, the support vector machine (SVM) kernel method provided superior performance for TCP prediction with an rs=0.68 on leave-one-out testing compared to logistic regression (rs=0.4), Poisson-based TCP (rs=0.33), and the cell kill equivalent uniform dose model (rs=0.17). The prediction of treatment response can be improved by utilizing datamining approaches, which are able to unravel important non-linear complex interactions among model variables and have the capacity to predict on unseen data for prospective clinical applications.
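
    A sketch of the model-comparison step (with hypothetical dose-volume features rather than the study data): leave-one-out predictions from an SVM and a logistic model are compared by Spearman rank correlation with the observed outcome.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical dataset: two dose-volume features (e.g., GTV volume, V75) and tumor control (0/1).
rng = np.random.default_rng(0)
X = rng.normal(size=(56, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.5 * rng.normal(size=56) > 0).astype(int)

loo = LeaveOneOut()
for name, clf in [("SVM (RBF kernel)", SVC(kernel="rbf", probability=True)),
                  ("logistic regression", LogisticRegression())]:
    model = make_pipeline(StandardScaler(), clf)
    # leave-one-out predicted probabilities of tumor control
    p = cross_val_predict(model, X, y, cv=loo, method="predict_proba")[:, 1]
    rs, _ = spearmanr(p, y)
    print(f"{name}: Spearman rs = {rs:.2f}")
```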

  8. Data mining and statistical inference in selective laser melting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamath, Chandrika

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM is accompanied by complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.

  9. Data mining and statistical inference in selective laser melting

    DOE PAGES

    Kamath, Chandrika

    2016-01-11

    Selective laser melting (SLM) is an additive manufacturing process that builds a complex three-dimensional part, layer-by-layer, using a laser beam to fuse fine metal powder together. The design freedom afforded by SLM is accompanied by complexity. As the physical phenomena occur over a broad range of length and time scales, the computational cost of modeling the process is high. At the same time, the large number of parameters that control the quality of a part make experiments expensive. In this paper, we describe ways in which we can use data mining and statistical inference techniques to intelligently combine simulations and experiments to build parts with desired properties. We start with a brief summary of prior work in finding process parameters for high-density parts. We then expand on this work to show how we can improve the approach by using feature selection techniques to identify important variables, data-driven surrogate models to reduce computational costs, improved sampling techniques to cover the design space adequately, and uncertainty analysis for statistical inference. Here, our results indicate that techniques from data mining and statistics can complement those from physical modeling to provide greater insight into complex processes such as selective laser melting.
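
    The workflow named in the abstract (space-filling sampling, data-driven surrogates, feature selection) can be illustrated with a short hedged sketch; the expensive_simulation function, the parameter ranges, and the response below are invented stand-ins, not the paper's models or data.

```python
# Hedged sketch of the ingredients named above: a Latin hypercube sample of
# process parameters, a cheap surrogate standing in for an expensive simulation,
# and a feature-importance ranking for variable selection. All numbers invented.
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(power_w, speed_mm_s, hatch_um):
    """Stand-in for a physics-based melt-pool model (arbitrary units)."""
    return 0.05 * power_w / np.sqrt(speed_mm_s) - 0.001 * hatch_um

# Latin hypercube sample over (laser power, scan speed, hatch spacing).
sampler = qmc.LatinHypercube(d=3, seed=1)
X = qmc.scale(sampler.random(n=60), [150.0, 500.0, 80.0], [400.0, 2000.0, 160.0])
y = expensive_simulation(X[:, 0], X[:, 1], X[:, 2])

# Data-driven surrogate that can replace the simulation inside design loops.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[100.0, 500.0, 40.0]),
                              normalize_y=True).fit(X, y)
pred, std = gp.predict([[300.0, 1000.0, 120.0]], return_std=True)
print(f"surrogate prediction: {pred[0]:.3f} +/- {std[0]:.3f}")

# Feature selection: rank which process variables drive the response.
rf = RandomForestRegressor(n_estimators=200, random_state=1).fit(X, y)
for name, imp in zip(["power", "speed", "hatch"], rf.feature_importances_):
    print(f"{name}: importance {imp:.2f}")
```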

  10. Recent experience in simultaneous control-structure optimization

    NASA Technical Reports Server (NTRS)

    Salama, M.; Ramaker, R.; Milman, M.

    1989-01-01

    To show the feasibility of simultaneous optimization as a design procedure, low-order problems were used in conjunction with simple control formulations. The numerical results indicate that simultaneous optimization is not only feasible, but also advantageous. Such advantages come at the expense of introducing complexities beyond those encountered in structural optimization alone or control optimization alone. Examples include a larger design parameter space, optimization that may combine continuous and combinatoric variables, and a combined objective function that may be nonconvex. Future extensions to include large order problems, more complex objective functions and constraints, and more sophisticated control formulations will require further research to ensure that the additional complexities do not outweigh the advantages of simultaneous optimization. Some areas requiring more efficient tools than currently available include multiobjective criteria and nonconvex optimization. Efficient techniques also need to be developed to deal with optimization over combinatoric and continuous variables, and with truncation issues for structure and control parameters in both the model space and the design space.

  11. Left Atrial Appendage Closure for Stroke Prevention: Devices, Techniques, and Efficacy.

    PubMed

    Iskandar, Sandia; Vacek, James; Lavu, Madhav; Lakkireddy, Dhanunjaya

    2016-05-01

    Left atrial appendage closure can be performed either surgically or percutaneously. Surgical approaches include direct suture, excision and suture, stapling, and clipping. Percutaneous approaches include endocardial, epicardial, and hybrid endocardial-epicardial techniques. Left atrial appendage anatomy is highly variable and complex; therefore, preprocedural imaging is crucial to determine device selection and sizing, which contribute to procedural success and reduction of complications. Currently, the WATCHMAN is the only device that is approved for left atrial appendage closure in the United States. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. Assessing multiscale complexity of short heart rate variability series through a model-based linear approach

    NASA Astrophysics Data System (ADS)

    Porta, Alberto; Bari, Vlasta; Ranuzzi, Giovanni; De Maria, Beatrice; Baselli, Giuseppe

    2017-09-01

    We propose a multiscale complexity (MSC) method that assesses irregularity in assigned frequency bands and is appropriate for analyzing short time series. It is grounded in the identification of the coefficients of an autoregressive model, the computation of the mean position of the poles generating the components of the power spectral density in an assigned frequency band, and the assessment of its distance from the unit circle in the complex plane. The MSC method was tested on simulations and applied to short heart period (HP) variability series recorded during graded head-up tilt in 17 subjects (age from 21 to 54 years, median = 28 years, 7 females) and during paced breathing protocols in 19 subjects (age from 27 to 35 years, median = 31 years, 11 females) to assess the contribution of time scales typical of cardiac autonomic control, namely the low frequency (LF, from 0.04 to 0.15 Hz) and high frequency (HF, from 0.15 to 0.5 Hz) bands, to the complexity of cardiac regulation. The proposed MSC technique was compared to a traditional model-free multiscale method grounded in information theory, i.e., multiscale entropy (MSE). The approach suggests that the reduction of HP variability complexity observed during graded head-up tilt is due to a regularization of the HP fluctuations in the LF band via a possible intervention of sympathetic control, and that the decrease of HP variability complexity observed during slow breathing results from the regularization of the HP variations in both LF and HF bands, implying the action of physiological mechanisms working even at time scales different from that of respiration. MSE did not distinguish experimental conditions at time scales larger than 1. Over short time series, MSC allows a more insightful association between cardiac control complexity and the physiological mechanisms modulating cardiac rhythm compared to a more traditional tool such as MSE.
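
    A schematic sketch of the pole-based idea, not the authors' implementation: fit an autoregressive model to a short, evenly resampled HP series, locate the poles whose frequencies fall in the LF or HF band, and summarize their mean distance from the unit circle as a band-specific irregularity index. The sampling rate, model order, synthetic series, and the exact form of the index are assumptions.

```python
# Hedged sketch of a band-specific, pole-based irregularity index. Poles near the
# unit circle correspond to narrowband (regular) oscillations; poles far from it
# to broadband (irregular) fluctuations. Details differ from the published MSC.
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_yule_walker(x, order):
    """Yule-Walker estimate of AR coefficients a_1..a_p for x_t = sum a_k x_{t-k} + e_t."""
    x = np.asarray(x, float) - np.mean(x)
    r = np.array([x[:len(x) - k] @ x[k:] for k in range(order + 1)]) / len(x)
    return solve_toeplitz(r[:-1], r[1:])

def band_irregularity(x, fs, order, band):
    a = ar_yule_walker(x, order)
    poles = np.roots(np.concatenate(([1.0], -a)))        # poles of 1 / (1 - sum a_k z^-k)
    freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)   # pole frequencies in Hz
    in_band = poles[(freqs >= band[0]) & (freqs < band[1])]
    if in_band.size == 0:
        return np.nan
    return float(np.mean(1.0 - np.abs(in_band)))         # 0 = regular, larger = more irregular

fs = 4.0                                 # Hz, evenly resampled HP series (assumption)
rng = np.random.default_rng(2)
hp = rng.normal(0.0, 30.0, 300)          # 300 samples of a synthetic HP series (ms)
print("LF irregularity:", round(band_irregularity(hp, fs, order=10, band=(0.04, 0.15)), 3))
print("HF irregularity:", round(band_irregularity(hp, fs, order=10, band=(0.15, 0.50)), 3))
```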

  13. Integrating an artificial intelligence approach with k-means clustering to model groundwater salinity: the case of Gaza coastal aquifer (Palestine)

    NASA Astrophysics Data System (ADS)

    Alagha, Jawad S.; Seyam, Mohammed; Md Said, Md Azlin; Mogheir, Yunes

    2017-12-01

    Artificial intelligence (AI) techniques have increasingly become efficient alternative modeling tools in the water resources field, particularly when the modeled process is influenced by complex and interrelated variables. In this study, two AI techniques—artificial neural networks (ANNs) and support vector machine (SVM)—were employed to achieve deeper understanding of the salinization process (represented by chloride concentration) in complex coastal aquifers influenced by various salinity sources. Both models were trained using 11 years of groundwater quality data from 22 municipal wells in Khan Younis Governorate, Gaza, Palestine. Both techniques showed satisfactory prediction performance, where the mean absolute percentage error (MAPE) and correlation coefficient (R) for the test data set were, respectively, about 4.5% and 99.8% for the ANN model, and 4.6% and 99.7% for the SVM model. The performance of the developed models was further improved noticeably by preprocessing the well data set using a k-means clustering method and then applying the AI techniques separately to each cluster. The developed models with clustered data were associated with higher performance, ease of use, and simplicity. They can be employed as an analytical tool to investigate the influence of input variables on coastal aquifer salinity, which is of great importance for understanding salinization processes, leading to more effective water-resources-related planning and decision making.
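
    A hedged sketch of the two-stage idea (cluster first, then model each cluster separately); scikit-learn's KMeans and SVR stand in for the paper's k-means/ANN/SVM implementations, and the synthetic "chloride" data and well attributes are purely illustrative.

```python
# Hedged sketch (not the study's code): cluster the wells with k-means, then fit
# a separate regression model per cluster and report MAPE and R per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
n = 400
X = np.column_stack([rng.uniform(0, 5, n),      # distance to coast (km), assumption
                     rng.uniform(5, 60, n),     # abstraction rate, assumption
                     rng.uniform(-20, 5, n)])   # groundwater level (m), assumption
chloride = 200 + 1500 * np.exp(-0.6 * X[:, 0]) + 8 * X[:, 1] - 10 * X[:, 2] + rng.normal(0, 40, n)

clusters = KMeans(n_clusters=3, n_init=10, random_state=3).fit_predict(X)
for c in np.unique(clusters):
    Xc, yc = X[clusters == c], chloride[clusters == c]
    Xtr, Xte, ytr, yte = train_test_split(Xc, yc, test_size=0.3, random_state=3)
    model = make_pipeline(StandardScaler(), SVR(C=1000.0)).fit(Xtr, ytr)
    pred = model.predict(Xte)
    mape = 100 * mean_absolute_percentage_error(yte, pred)
    r = np.corrcoef(yte, pred)[0, 1]
    print(f"cluster {c}: MAPE = {mape:.1f}%, R = {r:.3f}")
```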

  14. Phytoplankton dynamics of a subtropical reservoir controlled by the complex interplay among hydrological, abiotic, and biotic variables.

    PubMed

    Kuo, Yi-Ming; Wu, Jiunn-Tzong

    2016-12-01

    This study was conducted to identify the key factors related to the spatiotemporal variations in phytoplankton abundance in a subtropical reservoir from 2006 to 2010 and to assist in developing strategies for water quality management. Dynamic factor analysis (DFA), a dimension-reduction technique, was used to identify interactions between explanatory variables (i.e., environmental variables) and abundance (biovolume) of predominant phytoplankton classes. The optimal DFA model significantly described the dynamic changes in abundances of predominant phytoplankton groups (including dinoflagellates, diatoms, and green algae) at five monitoring sites. Water temperature, electrical conductivity, water level, nutrients (total phosphorus, NO3-N, and NH3-N), macro-zooplankton, and zooplankton were the key factors affecting the dynamics of the aforementioned phytoplankton. Therefore, transformations of nutrients and reactions between water quality variables, together with the aforementioned processes altered by hydrological conditions, may also control the abundance dynamics of phytoplankton, which may represent common trends in the DFA model. The meandering shape of Shihmen Reservoir and its surrounding rivers caused a complex interplay between hydrological conditions and abiotic and biotic variables, resulting in phytoplankton abundance that could not be estimated using certain variables. Additional water quality and hydrological variables should be monitored in the surrounding rivers, and monitoring plans should be executed a few days before and after reservoir operations and heavy storms; this would assist in developing site-specific preventive strategies to control phytoplankton abundance.

  15. Adaptive Elastic Net for Generalized Methods of Moments.

    PubMed

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least squares based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimator's lack of a closed-form solution. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.
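
    For orientation only: scikit-learn's elastic net is the least-squares special case, so the sketch below shows the simultaneous shrinkage-and-selection behaviour on synthetic data; it does not reproduce the paper's adaptive elastic net for GMM with endogenous regressors.

```python
# Illustration only: least-squares elastic net with cross-validated penalties,
# showing how redundant parameters are set to zero while the nonzero ones are
# estimated. Synthetic data; not the GMM extension developed in the paper.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(4)
n, p = 200, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:4] = [3.0, -2.0, 1.5, 2.5]                 # only a few truly nonzero coefficients
y = X @ beta + rng.normal(scale=1.0, size=n)

fit = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8, 0.95], cv=5).fit(X, y)
selected = np.flatnonzero(np.abs(fit.coef_) > 1e-8)
print("selected variables:", selected)           # redundant parameters are dropped
print("estimated coefficients:", np.round(fit.coef_[selected], 2))
```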

  16. Modeling maintenance-strategies with rainbow nets

    NASA Astrophysics Data System (ADS)

    Johnson, Allen M., Jr.; Schoenfelder, Michael A.; Lebold, David

    The Rainbow net (RN) modeling technique offers a promising alternative to traditional reliability modeling techniques. RNs are evaluated through discrete event simulation. Using specialized tokens to represent systems and faults, an RN models the fault-handling behavior of an inventory of systems produced over time. In addition, a portion of the RN represents system repair and the vendor's spare part production. Various dependability parameters are measured and used to calculate the impact of four variations of maintenance strategies. Input variables are chosen to demonstrate the technique. The number of inputs allowed to vary is intentionally constrained to limit the volume of data presented and to avoid overloading the reader with complexity. If only availability data were reviewed, one might conclude that the strategies perform about the same and therefore choose the strategy that is cheaper from the vendor's perspective. The richer set of metrics provided by the RN simulation gives greater insight into the problem, which leads to better decisions. By using RNs, the impact of several different variables is integrated.

  17. A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling

    NASA Astrophysics Data System (ADS)

    Dorigo, W. A.; Zurita-Milla, R.; de Wit, A. J. W.; Brazile, J.; Singh, R.; Schaepman, M. E.

    2007-05-01

    During the last 50 years, the management of agroecosystems has been undergoing major changes to meet the growing demand for food, timber, fibre and fuel. As a result of this intensified use, the ecological status of many agroecosystems has severely deteriorated. Modeling the behavior of agroecosystems is, therefore, of great help since it allows the definition of management strategies that maximize (crop) production while minimizing the environmental impacts. Remote sensing can support such modeling by offering information on the spatial and temporal variation of important canopy state variables which would be very difficult to obtain otherwise. In this paper, we present an overview of different methods that can be used to derive biophysical and biochemical canopy state variables from optical remote sensing data in the VNIR-SWIR regions. The overview is based on an extensive literature review where both statistical-empirical and physically based methods are discussed. Subsequently, the prevailing techniques of assimilating remote sensing data into agroecosystem models are outlined. The increasing complexity of data assimilation methods and of models describing agroecosystem functioning has significantly increased computational demands. For this reason, we include a short section on the potential of parallel processing to deal with the complex and computationally intensive algorithms described in the preceding sections. The studied literature reveals that many valuable techniques have been developed both for retrieving canopy state variables from reflective remote sensing data and for assimilating the retrieved variables into agroecosystem models. However, for agroecosystem modeling and remote sensing data assimilation to be commonly employed on a global operational basis, emphasis will have to be put on bridging the mismatch between data availability and accuracy on one hand, and model and user requirements on the other. This could be achieved by integrating imagery with different spatial, temporal, spectral, and angular resolutions, and by fusing optical data with data of different origin, such as LIDAR and radar/microwave.

  18. Non-Destructive Techniques Based on Eddy Current Testing

    PubMed Central

    García-Martín, Javier; Gómez-Gil, Jaime; Vázquez-Sánchez, Ernesto

    2011-01-01

    Non-destructive techniques are used widely in the metal industry in order to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds, and it does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes the state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future. PMID:22163754

  19. Non-destructive techniques based on eddy current testing.

    PubMed

    García-Martín, Javier; Gómez-Gil, Jaime; Vázquez-Sánchez, Ernesto

    2011-01-01

    Non-destructive techniques are used widely in the metal industry in order to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds, and it does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes the state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future.

  20. EPE analysis of sub-N10 BEoL flow with and without fully self-aligned via using Coventor SEMulator3D

    NASA Astrophysics Data System (ADS)

    Franke, Joern-Holger; Gallagher, Matt; Murdoch, Gayle; Halder, Sandip; Juncker, Aurelie; Clark, William

    2017-03-01

    During the last few decades, the semiconductor industry has been able to scale device performance up while driving costs down. What started off as simple geometrical scaling, driven mostly by advances in lithography, has recently been accompanied by advances in processing techniques and in device architectures. The trend to combine efforts using process technology and lithography is expected to intensify, as further scaling becomes ever more difficult. One promising component of future nodes is "scaling boosters", i.e., processing techniques that enable further scaling. An indispensable component in developing these ever more complex processing techniques is semiconductor process modeling software. Visualization of complex 3D structures in SEMulator3D, along with budget analysis on film thicknesses, CD and etch budgets, allows process integrators to compare flows before any physical wafers are run. Hundreds of "virtual" wafers allow comparison of different processing approaches, along with EUV or DUV patterning options for defined layers and different overlay schemes. This "virtual fabrication" technology produces massively parallel process variation studies that would be highly time-consuming or expensive in experiment. Here, we focus on one particular scaling booster, the fully self-aligned via (FSAV). We compare metal-via-metal (me-via-me) chains with self-aligned and fully self-aligned vias using a calibrated model for imec's N7 BEoL flow. To model overall variability, 3D Monte Carlo modeling of as many variability sources as possible is critical. We use Coventor SEMulator3D to extract minimum me-me distances and contact areas and show how fully self-aligned vias allow better me-via distance control and tighter via-me contact area variability compared with the standard self-aligned via (SAV) approach.

  1. Coordinated crew performance in commercial aircraft operations

    NASA Technical Reports Server (NTRS)

    Murphy, M. R.

    1977-01-01

    A specific methodology is proposed for an improved system of coding and analyzing crew member interaction. The complexity and lack of precision of many crew and task variables suggest the usefulness of fuzzy linguistic techniques for modeling and computer simulation of the crew performance process. Other research methodologies and concepts that have promise for increasing the effectiveness of research on crew performance are identified.

  2. Time estimation as a secondary task to measure workload: Summary of research

    NASA Technical Reports Server (NTRS)

    Hart, S. G.; Mcpherson, D.; Loomis, L. L.

    1978-01-01

    Actively produced intervals of time were found to increase in length and variability, whereas retrospectively produced intervals decreased in length although they also increased in variability with the addition of a variety of flight-related tasks. If pilots counted aloud while making a production, however, the impact of concurrent activity was minimized, at least for the moderately demanding primary tasks that were selected. The effects of feedback on estimation accuracy and consistency were greatly enhanced if a counting or tapping production technique was used. This compares with the minimal effect that feedback had when no overt timekeeping technique was used. Actively made verbal estimates of sessions decreased in length as the sessions were filled with different activities. Retrospectively made verbal estimates, however, increased in length as the amount and complexity of activities performed during the interval were increased.

  3. The History of Electromagnetic Induction Techniques in Soil Survey

    NASA Astrophysics Data System (ADS)

    Brevik, Eric C.; Doolittle, Jim

    2014-05-01

    Electromagnetic induction (EMI) has been used to characterize the spatial variability of soil properties since the late 1970s. Initially used to assess soil salinity, the use of EMI in soil studies has expanded to include: mapping soil types; characterizing soil water content and flow patterns; assessing variations in soil texture, compaction, organic matter content, and pH; and determining the depth to subsurface horizons, stratigraphic layers or bedrock, among other uses. In all cases the soil property being investigated must influence soil apparent electrical conductivity (ECa) either directly or indirectly for EMI techniques to be effective. An increasing number and diversity of EMI sensors have been developed in response to users' needs and the availability of allied technologies, which have greatly improved the functionality of these tools. EMI investigations provide several benefits for soil studies. The large amount of georeferenced data that can be rapidly and inexpensively collected with EMI provides more complete characterization of the spatial variations in soil properties than traditional sampling techniques. In addition, compared to traditional soil survey methods, EMI can more effectively characterize diffuse soil boundaries and identify included areas of dissimilar soils within mapped soil units, giving soil scientists greater confidence when collecting spatial soil information. EMI techniques do have limitations; results are site-specific and can vary depending on the complex interactions among multiple and variable soil properties. Despite this, EMI techniques are increasingly being used to investigate the spatial variability of soil properties at field and landscape scales.

  4. Magneto-Structural Correlations in Pseudotetrahedral Forms of the [Co(SPh)4]2- Complex Probed by Magnetometry, MCD Spectroscopy, Advanced EPR Techniques, and ab Initio Electronic Structure Calculations.

    PubMed

    Suturina, Elizaveta A; Nehrkorn, Joscha; Zadrozny, Joseph M; Liu, Junjie; Atanasov, Mihail; Weyhermüller, Thomas; Maganas, Dimitrios; Hill, Stephen; Schnegg, Alexander; Bill, Eckhard; Long, Jeffrey R; Neese, Frank

    2017-03-06

    The magnetic properties of pseudotetrahedral Co(II) complexes spawned intense interest after (PPh4)2[Co(SPh)4] was shown to be the first mononuclear transition-metal complex displaying slow relaxation of the magnetization in the absence of a direct current magnetic field. However, there are differing reports on its fundamental magnetic spin Hamiltonian (SH) parameters, which arise from inherent experimental challenges in detecting large zero-field splittings. There are also remarkable changes in the SH parameters of [Co(SPh)4]2- upon structural variations, depending on the counterion and crystallization conditions. In this work, four complementary experimental techniques are utilized to unambiguously determine the SH parameters for two different salts of [Co(SPh)4]2-: (PPh4)2[Co(SPh)4] (1) and (NEt4)2[Co(SPh)4] (2). The characterization methods employed include multifield SQUID magnetometry, high-field/high-frequency electron paramagnetic resonance (HF-EPR), variable-field variable-temperature magnetic circular dichroism (VTVH-MCD), and frequency domain Fourier transform THz-EPR (FD-FT THz-EPR). Notably, the paramagnetic Co(II) complex [Co(SPh)4]2- shows strong axial magnetic anisotropy in 1, with D = -55(1) cm-1 and E/D = 0.00(3), but rhombic anisotropy is seen for 2, with D = +11(1) cm-1 and E/D = 0.18(3). Multireference ab initio CASSCF/NEVPT2 calculations enable interpretation of the remarkable variation of D and its dependence on the electronic structure and geometry.

  5. Characterization of measurement errors using structure-from-motion and photogrammetry to measure marine habitat structural complexity.

    PubMed

    Bryson, Mitch; Ferrari, Renata; Figueira, Will; Pizarro, Oscar; Madin, Josh; Williams, Stefan; Byrne, Maria

    2017-08-01

    Habitat structural complexity is one of the most important factors in determining the makeup of biological communities. Recent advances in structure-from-motion and photogrammetry have resulted in a proliferation of 3D digital representations of habitats from which structural complexity can be measured. Little attention has been paid to quantifying the measurement errors associated with these techniques, including the variability of results under different surveying and environmental conditions. Such errors have the potential to confound studies that compare habitat complexity over space and time. This study evaluated the accuracy, precision, and bias in measurements of marine habitat structural complexity derived from structure-from-motion and photogrammetric measurements using repeated surveys of artificial reefs (with known structure) as well as natural coral reefs. We quantified measurement errors as a function of survey image coverage, actual surface rugosity, and the morphological community composition of the habitat-forming organisms (reef corals). Our results indicated that measurements could be biased by up to 7.5% of the total observed ranges of structural complexity based on the environmental conditions present during any particular survey. Positive relationships were found between measurement errors and actual complexity, and the strength of these relationships was increased when coral morphology and abundance were also used as predictors. The numerous advantages of structure-from-motion and photogrammetry techniques for quantifying and investigating marine habitats mean that they are likely to replace traditional measurement techniques (e.g., chain-and-tape). Consequently, our results have important implications for data collection and the interpretation of measurements when examining changes in habitat complexity using structure-from-motion and photogrammetry.

  6. Experimental Determination of Infrared Extinction Coefficients of Interplanetary Dust Particles

    NASA Technical Reports Server (NTRS)

    Spann, J. F., Jr.; Abbas, M. M.

    1998-01-01

    This technique is based on irradiating a single isolated charged dust particle suspended in balance by an electric field, and measuring the scattered radiation as a function of angle. The observed scattered intensity profile at a specific wavelength obtained for a dust particle of known composition is compared with Mie theory calculations, and the variable parameters relating to the particle size and complex refractive index are adjusted for a best fit between the two profiles. This leads to a simultaneous determination of the particle radius, the complex refractive index, and the scattering and extinction coefficients. The results of these experiments can be utilized to examine the IRAS and DIRBE (Diffuse Infrared Background Experiment) infrared data sets in order to determine the dust particle physical characteristics and distributions by using infrared models and inversion techniques. This technique may also be employed for investigation of the rotational bursting phenomena whereby large size cosmic and interplanetary particles are believed to fragment into smaller dust particles.

  7. Quantitative Tools for Examining the Vocalizations of Juvenile Songbirds

    PubMed Central

    Wellock, Cameron D.; Reeke, George N.

    2012-01-01

    The singing of juvenile songbirds is highly variable and not well stereotyped, a feature that makes it difficult to analyze with existing computational techniques. We present here a method suitable for analyzing such vocalizations, windowed spectral pattern recognition (WSPR). Rather than performing pairwise sample comparisons, WSPR measures the typicality of a sample against a large sample set. We also illustrate how WSPR can be used to perform a variety of tasks, such as sample classification, song ontogeny measurement, and song variability measurement. Finally, we present a novel measure, based on WSPR, for quantifying the apparent complexity of a bird's singing. PMID:22701474

  8. An evaluation of Bayesian techniques for controlling model complexity and selecting inputs in a neural network for short-term load forecasting.

    PubMed

    Hippert, Henrique S; Taylor, James W

    2010-04-01

    Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation. Copyright 2009 Elsevier Ltd. All rights reserved.
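
    The automatic-relevance-determination idea can be illustrated with scikit-learn's linear ARDRegression, in which each input receives its own precision hyperparameter and irrelevant inputs are shrunk away; the paper applies ARD inside Bayesian neural networks for load forecasting, which this linear sketch does not reproduce. The inputs and data are invented.

```python
# Sketch of automatic relevance determination with a linear Bayesian model
# (ARDRegression); inputs judged irrelevant get large precisions and near-zero
# weights. Synthetic "load and weather" data, illustrative only.
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(5)
n = 500
temp = rng.normal(15, 8, n)          # temperature (relevant in this toy model)
lag_load = rng.normal(1000, 150, n)  # yesterday's load (relevant)
humidity = rng.uniform(30, 90, n)    # humidity (irrelevant here)
wind = rng.uniform(0, 20, n)         # wind speed (irrelevant here)
load = 800 - 12 * temp + 0.4 * lag_load + rng.normal(0, 30, n)

X = np.column_stack([temp, lag_load, humidity, wind])
ard = ARDRegression().fit(X, load)
for name, coef, prec in zip(["temp", "lag_load", "humidity", "wind"],
                            ard.coef_, ard.lambda_):
    # Large precision lambda_ => weight shrunk toward zero => input judged irrelevant.
    print(f"{name:9s} coef={coef:9.3f} precision={prec:.2e}")
```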

  9. Design of efficient circularly symmetric two-dimensional variable digital FIR filters.

    PubMed

    Bindima, Thayyil; Elias, Elizabeth

    2016-05-01

    Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently, design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using a Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using the generalized McClellan transformation, resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability.

  10. Design of efficient circularly symmetric two-dimensional variable digital FIR filters

    PubMed Central

    Bindima, Thayyil; Elias, Elizabeth

    2016-01-01

    Circularly symmetric two-dimensional (2D) finite impulse response (FIR) filters find extensive use in image and medical applications, especially for isotropic filtering. Moreover, the design and implementation of 2D digital filters with variable fractional delay and variable magnitude responses without redesigning the filter has become a crucial topic of interest due to its significance in low-cost applications. Recently, design using fixed word length coefficients has gained importance due to the replacement of multipliers by shifters and adders, which reduces the hardware complexity. Among the various approaches to 2D design, transforming a one-dimensional (1D) filter to 2D is reported to be an efficient technique. In this paper, 1D variable digital filters (VDFs) with tunable cut-off frequencies are designed using a Farrow structure based interpolation approach, and the sub-filter coefficients in the Farrow structure are made multiplier-less using canonic signed digit (CSD) representation. The resulting performance degradation in the filters is overcome by using artificial bee colony (ABC) optimization. Finally, the optimized 1D VDFs are mapped to 2D using the generalized McClellan transformation, resulting in low complexity, circularly symmetric 2D VDFs with real-time tunability. PMID:27222739
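
    A bare-bones illustration of the Farrow structure itself: a bank of fixed branch FIR filters whose outputs are combined by a Horner polynomial in a run-time-tunable fractional delay, here with Lagrange-interpolator branch coefficients. The CSD quantization, ABC optimization, and the McClellan 1D-to-2D mapping used in the papers above are not reproduced; everything below is a generic sketch.

```python
# Generic Farrow-structure fractional-delay filter (Lagrange branch coefficients).
# The delay d is tuned at run time without redesigning the branch filters.
import numpy as np

def lagrange_farrow_coeffs(n_taps):
    """Matrix C[m, k]: weight of tap k for fractional delay d equals sum_m C[m, k] * d**m."""
    d_int = np.arange(n_taps, dtype=float)
    V = np.vander(d_int, n_taps, increasing=True)   # Vandermonde in the delay variable
    return np.linalg.solve(V, np.eye(n_taps))       # Lagrange: weight is 1 at d = k, else 0

def farrow_filter(x, d, C):
    """Branch FIR filters (rows of C) combined by Horner evaluation in the delay d."""
    branches = [np.convolve(x, C[m], mode="full")[:len(x)] for m in range(C.shape[0])]
    y = np.zeros(len(x))
    for m in reversed(range(C.shape[0])):
        y = y * d + branches[m]
    return y

t = np.arange(200)
x = np.sin(2 * np.pi * 0.02 * t)
C = lagrange_farrow_coeffs(4)                       # cubic Lagrange interpolator
y = farrow_filter(x, d=1.5, C=C)                    # delay tunable at run time
print("max error vs ideal 1.5-sample delay:",
      round(np.max(np.abs(y[10:] - np.sin(2 * np.pi * 0.02 * (t[10:] - 1.5)))), 4))
```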

  11. Multi-objective optimization for model predictive control.

    PubMed

    Wojsznis, Willy; Mehta, Ashish; Wojsznis, Peter; Thiele, Dirk; Blevins, Terry

    2007-06-01

    This paper presents a technique of multi-objective optimization for Model Predictive Control (MPC) where the optimization has three levels of the objective function, in order of priority: handling constraints, maximizing economics, and maintaining control. The greatest weights are assigned dynamically to control or constraint variables that are predicted to be out of their limits. The weights assigned for economics have to outweigh those assigned for control objectives. Control variables (CV) can be controlled at fixed targets or within one- or two-sided ranges around the targets. Manipulated Variables (MV) can have assigned targets too, which may be predefined values or current actual values. This MV functionality is extremely useful when economic objectives are not defined for some or all of the MVs. To achieve this complex operation, handle process outputs predicted to go out of limits, and have a guaranteed solution for any condition, the technique makes use of the priority structure, penalties on slack variables, and redefinition of the constraint and control model. An engineering implementation of this approach is shown in the MPC embedded in an industrial control system. The optimization and control of a distillation column, the standard Shell heavy oil fractionator (HOF) problem, is adequately achieved with this MPC.
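
    A toy, single-step illustration of the prioritized weighting (not the industrial MPC formulation): a very large penalty on the constraint slack, a medium weight on an economic cost, and a small weight on CV target tracking. The gain, limits, price, and bounds are invented for the example.

```python
# Toy prioritized-weights optimization: constraints > economics > control.
import numpy as np
from scipy.optimize import minimize

gain = 2.0                      # steady-state CV change per unit MV move (assumption)
y0, u0 = 48.0, 10.0             # current CV value and MV position
y_target = 50.0                 # control objective
y_min, y_max = 46.0, 52.0       # CV limits (constraint objective)
price = 1.0                     # economic cost per unit of MV (assumption)
w_constraint, w_econ, w_control = 1e6, 1e2, 1.0     # priority ordering from the abstract

def objective(z):
    du, slack = z
    y_pred = y0 + gain * du
    return (w_constraint * slack ** 2                  # highest priority: respect CV limits
            + w_econ * price * (u0 + du)               # next: minimize economic cost
            + w_control * (y_pred - y_target) ** 2)    # lowest: track the CV target

cons = [{"type": "ineq", "fun": lambda z: (y_max + z[1]) - (y0 + gain * z[0])},   # y_pred <= y_max + slack
        {"type": "ineq", "fun": lambda z: (y0 + gain * z[0]) - (y_min - z[1])}]   # y_pred >= y_min - slack
res = minimize(objective, x0=[0.0, 0.0], constraints=cons,
               bounds=[(-2.0, 2.0), (0.0, None)])
du_opt, slack_opt = res.x
print(f"MV move = {du_opt:.2f}, predicted CV = {y0 + gain * du_opt:.2f}, slack = {slack_opt:.4f}")
```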

  12. Impact of environmental variables on Dubas bug infestation rate: A case study from the Sultanate of Oman

    PubMed Central

    Al-Kindi, Khalifa M.; Andrew, Nigel; Welch, Mitchell

    2017-01-01

    Date palm cultivation is economically important in the Sultanate of Oman, with significant financial investment coming from both the government and from private individuals. However, a global infestation of Dubas bug (Ommatissus lybicus Bergevin) has impacted the Middle East region, and infestations of date palms have been widespread. In this study, spatial analysis and geostatistical techniques were used to model the spatial distribution of Dubas bug infestations to (a) identify correlations between Dubas bug densities and different environmental variables, and (b) predict the locations of future Dubas bug infestations in Oman. Firstly, we considered individual environmental variables and their correlations with infestation locations. Then, we applied more complex predictive models and regression analysis techniques to investigate the combinations of environmental factors most conducive to the survival and spread of the Dubas bug. Environmental variables including elevation, geology, and distance to drainage pathways were found to significantly affect Dubas bug infestations. In contrast, aspect and hillshade did not significantly impact on Dubas bug infestations. Understanding their distribution and therefore applying targeted controls on their spread is important for effective mapping, control and management (e.g., resource allocation) of Dubas bug infestations. PMID:28558069

  13. Impact of environmental variables on Dubas bug infestation rate: A case study from the Sultanate of Oman.

    PubMed

    Al-Kindi, Khalifa M; Kwan, Paul; Andrew, Nigel; Welch, Mitchell

    2017-01-01

    Date palm cultivation is economically important in the Sultanate of Oman, with significant financial investment coming from both the government and from private individuals. However, a global infestation of Dubas bug (Ommatissus lybicus Bergevin) has impacted the Middle East region, and infestations of date palms have been widespread. In this study, spatial analysis and geostatistical techniques were used to model the spatial distribution of Dubas bug infestations to (a) identify correlations between Dubas bug densities and different environmental variables, and (b) predict the locations of future Dubas bug infestations in Oman. Firstly, we considered individual environmental variables and their correlations with infestation locations. Then, we applied more complex predictive models and regression analysis techniques to investigate the combinations of environmental factors most conducive to the survival and spread of the Dubas bug. Environmental variables including elevation, geology, and distance to drainage pathways were found to significantly affect Dubas bug infestations. In contrast, aspect and hillshade did not significantly impact on Dubas bug infestations. Understanding their distribution and therefore applying targeted controls on their spread is important for effective mapping, control and management (e.g., resource allocation) of Dubas bug infestations.

  14. An evaluation of human factors research for ultrasonic inservice inspection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pond, D.J.; Donohoo, D.T.; Harris, R.V. Jr.

    1998-03-01

    This work was undertaken to determine if human factors research has yielded information applicable to upgrading requirements in ASME Boiler and Pressure Vessel Code Section XI, improving methods and techniques in Section V, and/or suggesting relevant research. A preference was established for information and recommendations which have become accepted and standard practice. Manual Ultrasonic Testing/Inservice Inspection (UT/ISI) is a complex task subject to influence by dozens of variables. This review frequently revealed equivocal findings regarding effects of environmental variables as well as repeated indications that inspection performance may be more, and more reliably, influenced by the workers' social environment, including managerial practices, than by other situational variables. Also of significance are each inspector's relevant knowledge, skills, and abilities, and determination of these is seen as a necessary first step in upgrading requirements, methods, and techniques as well as in focusing research in support of such programs. While understanding the effects and mediating mechanisms of the variables impacting inspection performance is a worthwhile pursuit for researchers, initial improvements in industrial UT/ISI performance may be achieved by implementing practices already known to mitigate the effects of potentially adverse conditions. 52 refs., 2 tabs.

  15. Protein Modelling: What Happened to the “Protein Structure Gap”?

    PubMed Central

    Schwede, Torsten

    2013-01-01

    Computational modeling and prediction of three-dimensional macromolecular structures and complexes from their sequence has been a long-standing vision in structural biology as it holds the promise to bypass part of the laborious process of experimental structure solution. Over the last two decades, a paradigm shift has occurred: starting from a situation where the “structure knowledge gap” between the huge number of protein sequences and small number of known structures has hampered the widespread use of structure-based approaches in life science research, today some form of structural information – either experimental or computational – is available for the majority of amino acids encoded by common model organism genomes. Template based homology modeling techniques have matured to a point where they are now routinely used to complement experimental techniques. With the scientific focus of interest moving towards larger macromolecular complexes and dynamic networks of interactions, the integration of computational modeling methods with low-resolution experimental techniques allows the study of large and complex molecular machines. Computational modeling and prediction techniques still face a number of challenges which hamper their more widespread use by non-expert scientists. For example, it is often difficult to convey the underlying assumptions of a computational technique, as well as the expected accuracy and structural variability of a specific model. However, these aspects are crucial to understand the limitations of a model, and to decide which interpretations and conclusions can be supported. PMID:24010712

  16. Micro-CT analyses of apical enlargement and molar root canal complexity.

    PubMed

    Markvart, M; Darvann, T A; Larsen, P; Dalstra, M; Kreiborg, S; Bjørndal, L

    2012-03-01

    To compare the effectiveness of two rotary hybrid instrumentation techniques with a focus on apical enlargement in molar teeth and to quantify and visualize spatial details of instrumentation efficacy in root canals of different complexity. Maxillary and mandibular molar teeth were scanned using X-ray microcomputed tomography. Root canals were prepared using either a GT/Profile protocol or a RaCe/NiTi protocol. Variables used for evaluation were the following: distance between root canal surfaces before and after preparation (distance after preparation, DAP), percentage of root canal area remaining unprepared and increase in canal volume after preparation. Root canals were classified according to size and complexity, and consequences of unprepared portions of narrow root canals and intraradicular connections/isthmuses were included in the analyses. One- and two-way ANOVA were used in the statistical analyses. No difference was found between the two techniques: DAP in the apical third (P = 0.590), unprepared area in the apical third (P = 0.126) and volume increase in the apical third (P = 0.821). Unprepared root canal area became larger in relation to root canal size and complexity, irrespective of the technique used. The percentage of root canal area remaining unprepared was significantly lower in small root canals and complex systems compared to large root canals. The isthmus area per se contributed with a mean of 17.6%, and with a mean of 25.7% when a narrow root canal remained unprepared. The addition of isthmuses did not significantly alter the ratio of instrumented to unprepared areas at the total root canal level. Distal and palatal root canals had the highest level of unprepared area irrespective of the two instrumentation techniques examined. © 2011 International Endodontic Journal.

  17. Machine learning methods applied on dental fear and behavior management problems in children.

    PubMed

    Klingberg, G; Sillén, R; Norén, J G

    1999-08-01

    The etiologies of dental fear and dental behavior management problems in children were investigated in a database of information on 2,257 Swedish children 4-6 and 9-11 years old. The analyses were performed using computerized inductive techniques within the field of artificial intelligence. The database held information regarding dental fear levels and behavior management problems, which were defined as outcomes, i.e. dependent variables. The attributes, i.e. independent variables, included data on dental health and dental treatments, information about parental dental fear, general anxiety, socioeconomic variables, etc. The data contained both numerical and discrete variables. The analyses were performed using an inductive analysis program (XpertRule Analyser, Attar Software Ltd, Lancashire, UK) that presents the results in a hierarchic diagram called a knowledge tree. The importance of the different attributes is represented by their position in this diagram. The results show that inductive methods are well suited for analyzing multifactorial and complex relationships in large data sets, and are thus a useful complement to multivariate statistical techniques. The knowledge trees for the two outcomes, dental fear and behavior management problems, were very different from each other, suggesting that the two phenomena are not equivalent. Dental fear was found to be more related to non-dental variables, whereas dental behavior management problems seemed connected to dental variables.
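
    A sketch of the inductive "knowledge tree" idea using scikit-learn's decision tree in place of the commercial XpertRule Analyser; the attributes, coding, and synthetic data below are invented for illustration and do not reflect the Swedish cohort.

```python
# Hedged sketch: induce a hierarchic decision tree ("knowledge tree") from mixed
# numerical/discrete attributes; attribute importance is reflected by its position.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(7)
n = 1000
parental_fear = rng.integers(0, 2, n)        # 1 = parent reports dental fear (assumption)
general_anxiety = rng.integers(0, 4, n)      # ordinal anxiety score (assumption)
prior_treatments = rng.poisson(2, n)         # number of invasive treatments (assumption)
age_group = rng.integers(0, 2, n)            # 0 = 4-6 years, 1 = 9-11 years
# Toy outcome: dental fear more likely with parental fear and general anxiety.
p = 1 / (1 + np.exp(-(-2.0 + 1.5 * parental_fear + 0.6 * general_anxiety)))
dental_fear = (rng.uniform(size=n) < p).astype(int)

X = np.column_stack([parental_fear, general_anxiety, prior_treatments, age_group])
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50).fit(X, dental_fear)
print(export_text(tree, feature_names=["parental_fear", "general_anxiety",
                                       "prior_treatments", "age_group"]))
```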

  18. Introduction to the special section on mixture modeling in personality assessment.

    PubMed

    Wright, Aidan G C; Hallquist, Michael N

    2014-01-01

    Latent variable models offer a conceptual and statistical framework for evaluating the underlying structure of psychological constructs, including personality and psychopathology. Complex structures that combine or compare categorical and dimensional latent variables can be accommodated using mixture modeling approaches, which provide a powerful framework for testing nuanced theories about psychological structure. This special series includes introductory primers on cross-sectional and longitudinal mixture modeling, in addition to empirical examples applying these techniques to real-world data collected in clinical settings. This group of articles is designed to introduce personality assessment scientists and practitioners to a general latent variable framework that we hope will stimulate new research and application of mixture models to the assessment of personality and its pathology.
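
    The simplest cross-sectional instance of the mixture-modeling idea can be sketched as follows: fit Gaussian mixtures with different numbers of latent classes to (synthetic) scale scores and compare them by BIC. Applications in the special series use dedicated latent-variable software; this is only a conceptual illustration.

```python
# Conceptual sketch: compare mixture models with k latent classes by BIC and
# inspect posterior class-membership probabilities. Data and "scales" invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(8)
class_a = rng.multivariate_normal([50, 45, 55], np.diag([25, 25, 25]), size=300)
class_b = rng.multivariate_normal([65, 60, 40], np.diag([25, 25, 25]), size=150)
scores = np.vstack([class_a, class_b])      # three hypothetical personality scales

for k in (1, 2, 3, 4):
    gmm = GaussianMixture(n_components=k, n_init=5, random_state=8).fit(scores)
    print(f"{k} latent classes: BIC = {gmm.bic(scores):.1f}")

# Posterior class-membership probabilities for the retained two-class model:
best = GaussianMixture(n_components=2, n_init=5, random_state=8).fit(scores)
print(best.predict_proba(scores[:3]).round(3))
```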

  19. Variable Complexity Structural Optimization of Shells

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Venkataraman, Satchi

    1999-01-01

    Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure, to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide an easy understanding of design trade-offs. Finally, designers can also use specialized programs suitable for designing efficiently a subset of structural problems. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-2110 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.

  20. Variable Complexity Structural Optimization of Shells

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.; Venkataraman, Satchi

    1998-01-01

    Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure, to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide an easy understanding of design trade-offs. Finally, designers can also use specialized programs suitable for designing efficiently a subset of structural problems. For example, PASCO and PANDA2 are panel design codes, which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-1808 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.
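
    The response-surface ingredient of variable complexity modeling can be sketched as follows: fit a quadratic surrogate to a small number of expensive analyses and optimize on the surrogate. The expensive_analysis function and the design variables (skin thickness, stiffener height) are invented stand-ins, not the panel codes or composite models discussed in these reports.

```python
# Hedged sketch of a quadratic response surface used in place of an expensive analysis.
import numpy as np
from scipy.optimize import minimize
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def expensive_analysis(t, h):
    """Stand-in for a detailed structural analysis (arbitrary units)."""
    return (t - 2.0) ** 2 + 0.5 * (h - 3.0) ** 2 + 0.2 * t * h

rng = np.random.default_rng(9)
T = rng.uniform(1.0, 4.0, 25)                 # skin thickness samples (hypothetical)
H = rng.uniform(1.0, 5.0, 25)                 # stiffener height samples (hypothetical)
X = np.column_stack([T, H])
y = expensive_analysis(T, H)

# Quadratic response surface fitted by least squares.
surface = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y)

# Optimize on the cheap surrogate instead of the expensive analysis.
res = minimize(lambda z: float(surface.predict(z.reshape(1, -1))[0]),
               x0=[2.5, 2.5], bounds=[(1.0, 4.0), (1.0, 5.0)])
print("surrogate optimum:", np.round(res.x, 2),
      " true objective there:", round(expensive_analysis(*res.x), 3))
```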

  1. Chromospheric Activity in Cool Luminous Stars

    NASA Astrophysics Data System (ADS)

    Dupree, Andrea

    2018-04-01

    Spatially unresolved spectra of giant and supergiant stars demonstrate ubiquitous signatures of chromospheric activity, variable outflows, and winds. The advent of imaging techniques and spatially resolved spectra reveals complex structures in these extended stellar atmospheres that we do not understand. The presence and behavior of these atmospheres are wide-ranging and impact stellar activity, magnetic fields, angular momentum loss, abundance determinations, and the understanding of stellar cluster populations.

  2. Optimization Techniques for Clustering,Connectivity, and Flow Problems in Complex Networks

    DTIC Science & Technology

    2012-10-01

    Abstract excerpt (fragmentary in the source record): the report develops optimization techniques for clustering, connectivity, and flow problems in complex networks, covering discrete optimization and analysis of the performance of algorithm portfolios, a metaheuristic framework of variable objective search with empirical evaluation of the proposed algorithm, theoretical analysis of heuristics for inapproximable problems, and the design of new metaheuristic approaches and models for the problems of interest.

  3. Regional Morphology Analysis Package (RMAP): Empirical Orthogonal Function Analysis, Background and Examples

    DTIC Science & Technology

    2007-10-01

    Abstract not available in the indexed record; the excerpt consists only of citation and header fragments (e.g., complex principal component analysis: theory and examples; analysis of climate variability using statistical techniques) from the technical note ERDC TN-SWWRP-07-9, October 2007, Regional Morphology Analysis Package (RMAP): Empirical Orthogonal Function Analysis, Background and Examples.

  4. Do the Metrics Make the Mission?

    DTIC Science & Technology

    2008-09-01

    view that, “multidimensional peacekeeping missions are complex with many unknown variables and fall victim to mission creep once a peacekeeping...techniques for breeding and vaccinating cattle . Since the vaccination program was implemented, over “nine million heads of livestock have received...Procurement of equipment, fish feed and brood stock is in progress, while 1,500 fish pond owners have been supplied with fish fingerlings. As a result

  5. Design and simulation of stratified probability digital receiver with application to the multipath communication

    NASA Technical Reports Server (NTRS)

    Deal, J. H.

    1975-01-01

    One approach to simplifying complex nonlinear filtering algorithms is to use stratified probability approximations, in which the continuous probability density functions of certain random variables are represented by discrete mass approximations. This technique is developed in the paper and used to simplify the filtering algorithms derived for the optimum receiver for signals corrupted by both additive and multiplicative noise.
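
    A hedged sketch of the discrete-mass idea: partition the range of a random variable into strata, place each stratum's probability at its midpoint, and use the resulting masses to approximate expectations that would otherwise require integration inside the filter. The density, strata, and test function are illustrative.

```python
# Stratified (discrete-mass) approximation of a continuous density, used to
# approximate an expectation E[g(X)]. Gaussian density and g(x) = x**2 are
# illustrative choices, not the receiver model of the paper.
import numpy as np
from scipy.stats import norm

def stratify(dist, lo, hi, n_strata):
    """Midpoints and probability masses approximating `dist` on [lo, hi]."""
    edges = np.linspace(lo, hi, n_strata + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])
    masses = np.diff(dist.cdf(edges))
    return mids, masses / masses.sum()        # renormalize the truncated tail mass

dist = norm(loc=1.0, scale=0.5)               # X ~ N(1, 0.5^2)
mids, masses = stratify(dist, lo=-1.0, hi=3.0, n_strata=12)
approx = np.sum(masses * mids ** 2)           # stratified estimate of E[X^2]
exact = 1.0 ** 2 + 0.5 ** 2                   # E[X^2] = mu^2 + sigma^2
print(f"stratified approximation: {approx:.4f}   exact: {exact:.4f}")
```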

  6. Process parameters and morphology in puerarin, phospholipids and their complex microparticles generation by supercritical antisolvent precipitation.

    PubMed

    Li, Ying; Yang, Da-Jian; Chen, Shi-Lin; Chen, Si-Bao; Chan, Albert Sun-Chi

    2008-07-09

    The aim of the study was to develop and evaluate a new method for the production of puerarin phospholipids complex (PPC) microparticles. The advanced particle formation method, solution enhanced dispersion by supercritical fluids (SEDS), was used for the preparation of puerarin (Pur), phospholipids (PC) and their complex particles for the first time. Evaluation of the processing variables on PPC particle characteristics was also conducted. The processing variables included temperature, pressure, solution concentration, the flow rate of supercritical carbon dioxide (SC-CO2) and the relative flow rate of drug solution to CO2. The morphology, particle size and size distribution of the particles were determined. Meanwhile, Pur and phospholipids were separately prepared by the gas antisolvent precipitation (GAS) method, and solid characterization of particles from the two supercritical methods was also compared. Pur formed by GAS was a more ordered, purer crystal, whereas amorphous Pur particles between 0.5 and 1 µm were formed by SEDS. The complex was successfully obtained by SEDS, exhibiting amorphous, partially agglomerated spheres comprised of particles sized only about 1 µm. The SEDS method may be useful for the processing of other pharmaceutical preparations besides phospholipids complex particles. Furthermore, adopting a GAS process to recrystallize pharmaceuticals will provide a highly versatile methodology to generate new polymorphs of drugs in addition to conventional techniques.

  7. Discharge ratings at gaging stations

    USGS Publications Warehouse

    Kennedy, E.J.

    1984-01-01

    A discharge rating is the relation of the discharge at a gaging station to stage and sometimes also to other variables. This chapter of 'Techniques of Water-Resources Investigations' describes the procedures commonly used to develop simple ratings where discharge is related only to stage and the most frequently encountered types of complex ratings where additional factors such as rate of change in stage, water-surface slope, or index velocity are used. Fundamental techniques of logarithmic plotting and the applications of simple storage routing to rating development are demonstrated. Computer applications, especially for handheld programmable calculators, and data handling are stressed.
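
    For the simple-rating case, a rating of the standard form Q = C(h - h0)^b can be fitted by straight-line regression in log space, with the gage height of zero flow h0 chosen to straighten the log-log plot. The measurements, grid search, and straightness criterion below are illustrative stand-ins for the judgment-based procedures described in the chapter.

```python
# Illustrative stage-discharge rating fitted logarithmically; data are invented.
import numpy as np

stage = np.array([1.2, 1.6, 2.1, 2.8, 3.5, 4.4])               # gage height, ft (hypothetical)
discharge = np.array([15.0, 42.0, 95.0, 210.0, 380.0, 650.0])  # ft^3/s (hypothetical)

def fit_rating(h, q, h0):
    """Least-squares fit of log10(Q) = log10(C) + b * log10(h - h0)."""
    b, logC = np.polyfit(np.log10(h - h0), np.log10(q), 1)
    return 10.0 ** logC, b

# Choose h0 so that successive point-to-point slopes on log-log paper are most uniform.
best_h0, best_spread = 0.0, np.inf
for h0 in np.arange(0.0, 1.01, 0.05):
    slopes = np.diff(np.log10(discharge)) / np.diff(np.log10(stage - h0))
    if np.std(slopes) < best_spread:
        best_h0, best_spread = h0, np.std(slopes)

C, b = fit_rating(stage, discharge, best_h0)
print(f"Q = {C:.1f} * (h - {best_h0:.2f})^{b:.2f}")
print("predicted discharge at h = 3.0 ft:", round(C * (3.0 - best_h0) ** b, 1), "ft^3/s")
```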

  8. Multifractal cross-correlation effects in two-variable time series of complex network vertex observables

    NASA Astrophysics Data System (ADS)

    Oświęcimka, Paweł; Livi, Lorenzo; Drożdż, Stanisław

    2016-10-01

    We investigate the scaling of the cross-correlations calculated for two-variable time series containing vertex properties in the context of complex networks. Time series of such observables are obtained by means of stationary, unbiased random walks. We consider three vertex properties that provide, respectively, short-, medium-, and long-range information regarding the topological role of vertices in a given network. In order to reveal the relation between these quantities, we applied the multifractal cross-correlation analysis technique, which provides information about the nonlinear effects in coupling of time series. We show that the considered network models are characterized by unique multifractal properties of the cross-correlation. In particular, it is possible to distinguish between Erdös-Rényi, Barabási-Albert, and Watts-Strogatz networks on the basis of fractal cross-correlation. Moreover, the analysis of protein contact networks reveals characteristics shared with both scale-free and small-world models.
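
    A simplified sketch of the detrended cross-correlation backbone (the q = 2 case) underlying the multifractal technique used in the paper: integrate both series, detrend them window by window, and examine how the covariance of the residuals scales with window size. The synthetic series below are not the vertex-observable walks of the paper, and the full q-dependent multifractal formalism is omitted.

```python
# Simplified detrended cross-correlation scaling (q = 2 only); the absolute-value
# handling of negative covariances and the synthetic series are assumptions.
import numpy as np

def dcca_fluctuation(x, y, scales, poly_order=2):
    X = np.cumsum(x - np.mean(x))                  # integrated profiles
    Y = np.cumsum(y - np.mean(y))
    F = []
    for s in scales:
        covs = []
        for w in range(len(X) // s):
            seg = slice(w * s, (w + 1) * s)
            t = np.arange(s)
            rx = X[seg] - np.polyval(np.polyfit(t, X[seg], poly_order), t)
            ry = Y[seg] - np.polyval(np.polyfit(t, Y[seg], poly_order), t)
            covs.append(np.mean(rx * ry))
        F.append(np.sqrt(np.abs(np.mean(covs))))
    return np.array(F)

rng = np.random.default_rng(10)
shared = rng.normal(size=4096)                     # common component of the two series
x = shared + rng.normal(size=4096)
y = shared + rng.normal(size=4096)
scales = np.unique(np.logspace(1.2, 3.0, 12).astype(int))
F = dcca_fluctuation(x, y, scales)
lam = np.polyfit(np.log(scales), np.log(F), 1)[0]  # cross-correlation scaling exponent
print(f"estimated scaling exponent ~ {lam:.2f}")   # about 0.5 for correlated white noise
```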

  9. POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS

    PubMed Central

    Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.

    2013-01-01

    Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabled values of required sample sizes are shown for some models. PMID:23935262
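
    As a rough illustration of the Monte Carlo approach described above (and not the Mplus syntax appended to the paper), the sketch below estimates power for a single-mediator model in Python; the path coefficients, sample size, and the joint-significance test are illustrative assumptions.

```python
# Hedged sketch: Monte Carlo power estimate for a single-mediator model.
# Path coefficients a, b, c' and the sample size are assumed values, not the paper's.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def one_replication(n=100, a=0.3, b=0.3, c_prime=0.1):
    x = rng.normal(size=n)
    m = a * x + rng.normal(size=n)                    # mediator model
    y = b * m + c_prime * x + rng.normal(size=n)      # outcome model
    fit_a = sm.OLS(m, sm.add_constant(x)).fit()
    fit_b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit()
    # joint significance of the a and b paths as a simple mediation test
    return (fit_a.pvalues[1] < 0.05) and (fit_b.pvalues[1] < 0.05)

power = np.mean([one_replication() for _ in range(2000)])
print(f"Estimated power to detect the indirect effect: {power:.2f}")
```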

  10. VLSI implementation of a new LMS-based algorithm for noise removal in ECG signal

    NASA Astrophysics Data System (ADS)

    Satheeskumaran, S.; Sabrigiriraj, M.

    2016-06-01

    Least mean square (LMS)-based adaptive filters are widely deployed for removing artefacts from the electrocardiogram (ECG) because they require few computations. However, they exhibit a high mean square error (MSE) in noisy environments. The transform-domain variable step-size LMS algorithm reduces the MSE at the cost of computational complexity. In this paper, a variable step-size delayed LMS adaptive filter is used to remove artefacts from the ECG signal for improved feature extraction. Dedicated digital signal processors provide fast processing but lack flexibility, whereas field programmable gate arrays allow pipelined architectures that enhance the operating efficiency of the adaptive filter and reduce power consumption. The proposed technique provides a high signal-to-noise ratio and low MSE with reduced computational complexity; hence, it is a useful method for monitoring patients with heart-related problems.
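
    The following sketch illustrates the general variable step-size LMS idea in NumPy (a Kwong-Johnston-style step-size update), not the delayed-LMS VLSI architecture of the paper; the filter length, adaptation constants and toy signals are assumptions.

```python
# Hedged sketch of a variable step-size LMS noise canceller (not the paper's
# delayed-LMS/FPGA design); all parameters and test signals are illustrative.
import numpy as np

def vss_lms(d, x, taps=16, mu=0.01, mu_min=1e-4, mu_max=0.1, alpha=0.97, gamma=1e-3):
    """d: primary input (ECG + noise), x: noise reference. Returns the error
    signal, which approximates the cleaned ECG."""
    w = np.zeros(taps)
    e = np.zeros(len(d))
    for n in range(taps, len(d)):
        u = x[n - taps:n][::-1]                  # tap-delay line
        e[n] = d[n] - w @ u
        w += mu * e[n] * u                       # LMS weight update
        mu = np.clip(alpha * mu + gamma * e[n] ** 2, mu_min, mu_max)
    return e

rng = np.random.default_rng(0)
t = np.arange(4000) / 500.0
ecg = np.sin(2 * np.pi * 1.2 * t)                # toy stand-in for an ECG trace
ref = rng.normal(size=t.size)                    # noise reference channel
noise = np.convolve(ref, [0.6, 0.3, 0.1], mode="same")
cleaned = vss_lms(ecg + noise, ref)
```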

  11. Approximate reduction of linear population models governed by stochastic differential equations: application to multiregional models.

    PubMed

    Sanz, Luis; Alonso, Juan Antonio

    2017-12-01

    In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to replace a complex system involving many coupled variables and processes operating on different time scales with a simpler reduced model having fewer 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we consider a fast linear deterministic process together with a slow linear process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with fewer variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced systems. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the population in each patch is affected by additive noise.

  12. Regression: The Apple Does Not Fall Far From the Tree.

    PubMed

    Vetter, Thomas R; Schober, Patrick

    2018-05-15

    Researchers and clinicians are frequently interested in either: (1) assessing whether there is a relationship or association between 2 or more variables and quantifying this association; or (2) determining whether 1 or more variables can predict another variable. The strength of such an association is mainly described by the correlation. However, regression analysis and regression models can be used not only to identify whether there is a significant relationship or association between variables but also to generate estimations of such a predictive relationship between variables. This basic statistical tutorial discusses the fundamental concepts and techniques related to the most common types of regression analysis and modeling, including simple linear regression, multiple regression, logistic regression, ordinal regression, and Poisson regression, as well as the common yet often underrecognized phenomenon of regression toward the mean. The various types of regression analysis are powerful statistical techniques, which when appropriately applied, can allow for the valid interpretation of complex, multifactorial data. Regression analysis and models can assess whether there is a relationship or association between 2 or more observed variables and estimate the strength of this association, as well as determine whether 1 or more variables can predict another variable. Regression is thus being applied more commonly in anesthesia, perioperative, critical care, and pain research. However, it is crucial to note that regression can identify plausible risk factors; it does not prove causation (a definitive cause and effect relationship). The results of a regression analysis instead identify independent (predictor) variable(s) associated with the dependent (outcome) variable. As with other statistical methods, applying regression requires that certain assumptions be met, which can be tested with specific diagnostics.
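
    To make the distinction between the regression types concrete, the short sketch below fits a multiple linear regression (continuous outcome) and a logistic regression (binary outcome) with statsmodels; the variables and coefficients are simulated, hypothetical examples rather than data from the tutorial.

```python
# Hedged sketch: linear vs. logistic regression on simulated, hypothetical data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
age = rng.normal(50, 10, n)
bmi = rng.normal(27, 4, n)
X = sm.add_constant(np.column_stack([age, bmi]))

# multiple linear regression: continuous outcome (e.g. blood pressure)
bp = 90 + 0.6 * age + 0.8 * bmi + rng.normal(0, 8, n)
linear_fit = sm.OLS(bp, X).fit()
print(linear_fit.params)                 # intercept and slopes quantify the association

# logistic regression: binary outcome (e.g. a postoperative complication, 0/1)
logit_p = -8 + 0.08 * age + 0.1 * bmi
complication = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
logistic_fit = sm.Logit(complication, X).fit(disp=0)
print(np.exp(logistic_fit.params[1:]))   # odds ratios per unit increase
```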

  13. A PLL-based resampling technique for vibration analysis in variable-speed wind turbines with PMSG: A bearing fault case

    NASA Astrophysics Data System (ADS)

    Pezzani, Carlos M.; Bossio, José M.; Castellino, Ariel M.; Bossio, Guillermo R.; De Angelo, Cristian H.

    2017-02-01

    Condition monitoring of permanent magnet synchronous machines has gained interest due to their increasing use in applications such as electric traction and power generation. In wind power generation in particular, non-invasive condition monitoring techniques are of great importance: access to the generator is usually complex and costly, while unexpected breakdowns result in high repair costs. This paper presents a technique that allows vibration analysis to be used for bearing fault detection in permanent magnet synchronous generators used in wind turbines. Because the generator rotational speed may vary during normal operation in wind power applications, special sampling techniques are needed to apply spectral analysis to mechanical vibrations. In this work, a resampling technique based on order tracking without measuring the rotor position is proposed. To synchronize sampling with rotor position, the rotor position is estimated from the angle of the voltage vector, which is obtained from a phase-locked loop synchronized with the generator voltages. The proposed strategy is validated by laboratory experimental results obtained from a permanent magnet synchronous generator. Results with single-point defects in the outer race of a bearing under variable speed and load conditions are presented.
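
    The core of the resampling step can be sketched as follows: given an unwrapped rotor-angle estimate (which in the paper comes from the PLL locked to the generator voltages), the time-sampled vibration is interpolated onto a uniform angle grid so that an order spectrum can be computed. The code below is a simplified illustration with a synthetic signal, not the authors' implementation.

```python
# Hedged sketch of order-tracking resampling from an estimated rotor angle.
import numpy as np

def angular_resample(vibration, theta, samples_per_rev=256):
    """Resample a time-domain signal onto a uniform shaft-angle grid.
    theta: unwrapped rotor angle [rad] at each time sample (e.g. from a PLL)."""
    n_rev = int((theta[-1] - theta[0]) // (2 * np.pi))
    theta_uniform = theta[0] + np.arange(n_rev * samples_per_rev) * 2 * np.pi / samples_per_rev
    return np.interp(theta_uniform, theta, vibration)

fs = 5000.0
t = np.arange(0, 10, 1 / fs)
speed = 2 * np.pi * (10 + 2 * t)                  # rad/s, ramping up
theta = np.cumsum(speed) / fs                     # integrated shaft angle
vib = np.sin(3.2 * theta) + 0.1 * np.random.default_rng(1).normal(size=t.size)

order_signal = angular_resample(vib, theta)
spectrum = np.abs(np.fft.rfft(order_signal))
orders = np.fft.rfftfreq(order_signal.size, d=1 / 256)   # axis in shaft orders
# the synthetic fault signature appears as a peak near order 3.2
```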

  14. Technical Note: The Initial Stages of Statistical Data Analysis

    PubMed Central

    Tandy, Richard D.

    1998-01-01

    Objective: To provide an overview of several important data-related considerations in the design stage of a research project and to review the levels of measurement and their relationship to the statistical technique chosen for the data analysis. Background: When planning a study, the researcher must clearly define the research problem and narrow it down to specific, testable questions. The next steps are to identify the variables in the study, decide how to group and treat subjects, and determine how to measure, and the underlying level of measurement of, the dependent variables. Then the appropriate statistical technique can be selected for data analysis. Description: The four levels of measurement in increasing complexity are nominal, ordinal, interval, and ratio. Nominal data are categorical or “count” data, and the numbers are treated as labels. Ordinal data can be ranked in a meaningful order by magnitude. Interval data possess the characteristics of ordinal data and also have equal distances between levels. Ratio data have a natural zero point. Nominal and ordinal data are analyzed with nonparametric statistical techniques and interval and ratio data with parametric statistical techniques. Advantages: Understanding the four levels of measurement and when it is appropriate to use each is important in determining which statistical technique to use when analyzing data. PMID:16558489
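
    As a small illustration of matching the statistical technique to the level of measurement, the sketch below applies a parametric test to ratio data, a rank-based test to ordinal data, and a chi-square test to nominal counts; the variables and values are invented for the example.

```python
# Hedged sketch: choosing a test by level of measurement (invented data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# ratio-level data (e.g. jump height in cm): a parametric t test is appropriate
group_a = rng.normal(42, 5, 30)
group_b = rng.normal(45, 5, 30)
print(stats.ttest_ind(group_a, group_b))

# ordinal data (e.g. pain rated 0-10): use a nonparametric rank-based test
pain_a = rng.integers(2, 8, 30)
pain_b = rng.integers(3, 9, 30)
print(stats.mannwhitneyu(pain_a, pain_b))

# nominal (count) data: compare category frequencies with a chi-square test
chi2, p, dof, expected = stats.chi2_contingency([[18, 12], [9, 21]])
print(chi2, p)
```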

  15. Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques

    NASA Astrophysics Data System (ADS)

    Elliott, Louie C.

    This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
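
    The "numerical derivatives with complex variables" mentioned above refers to the complex-step derivative approximation. The sketch below contrasts it with a forward finite difference on a toy scalar function (not the SOFC cost function); the function and step sizes are assumptions.

```python
# Hedged sketch: complex-step derivative vs. forward finite difference
# on a toy function standing in for a cost function.
import numpy as np

def cost(x):
    return np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)

x0 = 1.5

h_fd = 1e-6
d_fd = (cost(x0 + h_fd) - cost(x0)) / h_fd            # suffers subtractive cancellation

h_cs = 1e-20
d_cs = np.imag(cost(x0 + 1j * h_cs)) / h_cs           # complex step: no subtraction,
                                                      # accurate to machine precision
print(d_fd, d_cs)
```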

  16. IVF: exploiting intensity variation function for high-performance pedestrian tracking in forward-looking infrared imagery

    NASA Astrophysics Data System (ADS)

    Lamberti, Fabrizio; Sanna, Andrea; Paravati, Gianluca; Belluccini, Luca

    2014-02-01

    Tracking pedestrian targets in forward-looking infrared video sequences is a crucial component of a growing number of applications. At the same time, it is particularly challenging, since image resolution and signal-to-noise ratio are generally very low, while the nonrigidity of the human body produces highly variable target shapes. Moreover, motion can be quite chaotic with frequent target-to-target and target-to-scene occlusions. Hence, the trend is to design ever more sophisticated techniques, able to ensure rather accurate tracking results at the cost of a generally higher complexity. However, many of such techniques might not be suitable for real-time tracking in limited-resource environments. This work presents a technique that extends an extremely computationally efficient tracking method based on target intensity variation and template matching originally designed for targets with a marked and stable hot spot by adapting it to deal with much more complex thermal signatures and by removing the native dependency on configuration choices. Experimental tests demonstrated that, by working on multiple hot spots, the designed technique is able to achieve the robustness of other common approaches by limiting drifts and preserving the low-computational footprint of the reference method.

  17. Less or more hemodynamic monitoring in critically ill patients.

    PubMed

    Jozwiak, Mathieu; Monnet, Xavier; Teboul, Jean-Louis

    2018-06-07

    Hemodynamic investigations are required in patients with shock to identify the type of shock, to select the most appropriate treatments and to assess the patient's response to the selected therapy. We discuss how to select the most appropriate hemodynamic monitoring techniques in patients with shock as well as the future of hemodynamic monitoring. Over the last decades, hemodynamic monitoring techniques have evolved from intermittent toward continuous and real-time measurements and from invasive toward less-invasive approaches. In patients with shock, current guidelines recommend echocardiography as the preferred modality for the initial hemodynamic evaluation. In patients with shock not responding to initial therapy and/or in the most complex patients, it is recommended to monitor the cardiac output and to use advanced hemodynamic monitoring techniques, which also provide other variables that are useful for managing the most complex cases. Uncalibrated and noninvasive cardiac output monitors are not reliable enough in the intensive care setting. The use of echocardiography should be initially encouraged in patients with shock to identify the type of shock and to select the most appropriate therapy. The use of more invasive hemodynamic monitoring techniques should be discussed on an individualized basis.

  18. High resolution seismic reflection profiling at Aberdeen Proving Grounds, Maryland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, R.D.; Xia, Jianghai; Swartzel, S.

    1996-11-01

    The effectiveness of shallow high resolution seismic reflection (i.e., resolution potential) to image geologic interfaces between about 70 and 750 ft at the Aberdeen Proving Grounds, Maryland (APG), appears to vary locally with the geometric complexity of the unconsolidated sediments that overlie crystalline bedrock. The bedrock surface (which represents the primary geologic target of this study) was imaged at each of three test areas on walkaway noise tests and CDP (common depth point) stacked data. Proven high resolution techniques were used to design and acquire data on this survey. Feasibility of the technique and minimum acquisition requirements were determined through evaluation and correlation of walkaway noise tests, CDP survey lines, and a downhole velocity check shot survey. Data processing and analysis revealed several critical attributes of shallow seismic data from APG that need careful consideration and compensation on reflection data sets. This survey determined: (1) the feasibility of the technique, (2) the resolution potential (both horizontal and vertical) of the technique, (3) the optimum source for this site, (4) the optimum acquisition geometries, (5) the general processing flow, and (6) a basic idea of the acoustic variability across this site. Source testing involved an accelerated weight drop, land air gun, downhole black powder charge, sledge hammer/plate, and high frequency vibrator. Shallow seismic reflection profiles provided a more detailed picture of the geometric complexity and variability of the distinct clay sequences (aquitards) previously inferred from drilling to be present, based on sparse drill holes and basewide conceptual models. The seismic data also reveal a clear explanation for the difficulties previously noted in correlating individual, borehole-identified sand or clay units over even short distances.

  19. Application of AIS Technology to Forest Mapping

    NASA Technical Reports Server (NTRS)

    Yool, S. R.; Star, J. L.

    1985-01-01

    Concerns about the environmental effects of large-scale deforestation have prompted efforts to map forests over large areas using various remote sensing data and image processing techniques. Basic research on the spectral characteristics of forest vegetation is required to form a basis for the development of new techniques and for image interpretation. Examination of LANDSAT data and image processing algorithms over a portion of boreal forest has demonstrated the complexity of the relations between the various expressions of forest canopies, environmental variability, and the relative capacities of different image processing algorithms to achieve high classification accuracies under these conditions. Airborne Imaging Spectrometer (AIS) data, with their relatively high spectral resolution, may in part provide the means to interpret the responses of standard data and techniques to the vegetation.

  20. Analyzing the Relative Linkages of Land Use and Hydrologic Variables with Urban Surface Water Quality using Multivariate Techniques

    NASA Astrophysics Data System (ADS)

    Ahmed, S.; Abdul-Aziz, O. I.

    2015-12-01

    We used a systematic data-analytics approach to analyze and quantify the relative linkages of four stream water quality indicators (total nitrogen, TN; total phosphorus, TP; chlorophyll-a, Chla; and dissolved oxygen, DO) with six land use and four hydrologic variables, along with the potential external (upstream in-land and downstream coastal) controls, in the highly complex coastal urban watersheds of southeast Florida, U.S.A. Multivariate pattern recognition techniques of principal component and factor analyses, in concert with Pearson correlation analysis, were applied to map interrelations and identify latent patterns of the participating variables. Relative linkages of the in-stream water quality variables with their associated drivers were then quantified by developing a dimensionless partial least squares (PLS) regression model based on standardized data. Model fitting efficiency (R2 = 0.71-0.87) and accuracy (ratio of root-mean-square error to the standard deviation of the observations, RSR = 0.35-0.53) suggested good predictions of the water quality variables in both wet and dry seasons. Agricultural land and groundwater exhibited substantial controls on surface water quality. In-stream TN concentration appeared to be mostly contributed by the upstream water entering from the Everglades in both wet and dry seasons. In contrast, watershed land uses had stronger linkages with TP and Chla than the watershed hydrologic and upstream (Everglades) components did in both seasons. Both land use and hydrologic components showed strong linkages with DO in the wet season, whereas the land use linkage appeared to be weaker in the dry season. The data-analytics method provided a comprehensive empirical framework for gaining crucial mechanistic insights into urban stream water quality processes. Our study quantitatively identified dominant drivers of water quality, indicating key management targets for maintaining healthy stream ecosystems in complex urban-natural environments near the coast.
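
    A minimal sketch of the dimensionless PLS step is given below using scikit-learn: predictors and the response are standardized before fitting, so the coefficients act as relative linkages, and RSR is computed as defined in the abstract. The synthetic predictor matrix is a hypothetical stand-in for the land-use and hydrologic drivers.

```python
# Hedged sketch: PLS regression on standardized (dimensionless) data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 120
X = rng.normal(size=(n, 10))                                 # stand-in drivers
y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=n)  # stand-in for e.g. TN

Xs = StandardScaler().fit_transform(X)
ys = StandardScaler().fit_transform(y.reshape(-1, 1)).ravel()

pls = PLSRegression(n_components=3).fit(Xs, ys)
r2 = pls.score(Xs, ys)                               # model fitting efficiency (R^2)
rmse = np.sqrt(np.mean((ys - np.ravel(pls.predict(Xs))) ** 2))
rsr = rmse / np.std(ys)                              # RMSE / std of observations
std_coefs = np.ravel(pls.coef_)                      # dimensionless relative linkages
print(r2, rsr, std_coefs)
```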

  1. Graphic tracings of condylar paths and measurements of condylar angles.

    PubMed

    el-Gheriani, A S; Winstanley, R B

    1989-01-01

    A study was carried out to determine the accuracy of different methods of measuring condylar inclination from graphical recordings of condylar paths. Thirty subjects made protrusive mandibular movements while condylar inclination was recorded on a graph paper card. A mandibular facebow and intraoral central bearing plate facilitated the procedure. The first method proved to be too variable to be of value in measuring condylar angles. The spline curve fitting technique was shown to be accurate, but its use clinically may prove complex. The mathematical method was more practical and overcame the variability of the tangent method. Other conclusions regarding condylar inclination are outlined.

  2. Reliability analysis of composite structures

    NASA Technical Reports Server (NTRS)

    Kan, Han-Pin

    1992-01-01

    A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.

  3. Measurement of the M² beam propagation factor using a focus-tunable liquid lens.

    PubMed

    Niederriter, Robert D; Gopinath, Juliet T; Siemens, Mark E

    2013-03-10

    We demonstrate motion-free beam quality M² measurements of stigmatic, simple astigmatic, and general astigmatic (twisted) beams using only a focus-tunable liquid lens and a CCD camera. We extend the variable-focus technique to the characterization of general astigmatic beams by measuring the 10 second-order moments of the power density distribution for the twisted beam produced by passage through multimode optical fiber. Our method measures the same M² values as the traditional variable-distance method for a wide range of laser beam sources, including nearly TEM(00) (M²≈1) and general astigmatic multimode beams (M²≈8). The method is simple and compact, with no moving parts or complex apparatus and measurement precision comparable to the standard variable-distance method.

  4. New technique for simulation of microgravity and variable gravity conditions

    NASA Astrophysics Data System (ADS)

    de la Rosa, R.; Alonso, A.; Abasolo, D. E.; Hornero, R.

    2005-08-01

    This paper proposes a simulator of microgravity or variable gravity conditions based on a Neuromuscular Control System (NCS) working as a man-machine interface. The subject under training lies on an active platform that counteracts his weight, and a Virtual Reality (VR) system displays a simulated environment in which the subject can interact with a number of settings: extravehicular activity (EVA), walking on the Moon, or training limb responses under variable acceleration scenes. Results related to real-time voluntary control have been achieved with neuromuscular interfaces at the Bioengineering Group of the University of Valladolid, where a custom real-time system has been employed to train arm movements. This paper outlines a more complex design that can complement other training facilities, such as the buoyancy pool, in the task of microgravity simulation.

  5. Inter- and Intra-method Variability of VS Profiles and VS30 at ARRA-funded Sites

    NASA Astrophysics Data System (ADS)

    Yong, A.; Boatwright, J.; Martin, A. J.

    2015-12-01

    The 2009 American Recovery and Reinvestment Act (ARRA) funded geophysical site characterizations at 191 seismographic stations in California and in the central and eastern United States. Shallow boreholes were considered cost- and environmentally-prohibitive, thus non-invasive methods (passive and active surface- and body-wave techniques) were used at these stations. The drawback, however, is that these techniques measure seismic properties indirectly and introduce more uncertainty than borehole methods. The principal methods applied were Array Microtremor (AM), Multi-channel Analysis of Surface Waves (MASW; Rayleigh and Love waves), Spectral Analysis of Surface Waves (SASW), Refraction Microtremor (ReMi), and P- and S-wave refraction tomography. Depending on the apparent geologic or seismic complexity of the site, field crews applied one or a combination of these methods to estimate the shear-wave velocity (VS) profile and calculate VS30, the time-averaged VS to a depth of 30 meters. We study the inter- and intra-method variability of VS and VS30 at each seismographic station where combinations of techniques were applied. For each site, we find both types of variability in VS30 remain insignificant (5-10% difference) despite substantial variability observed in the VS profiles. We also find that reliable VS profiles are best developed using a combination of techniques, e.g., surface-wave VS profiles correlated against P-wave tomography to constrain variables (Poisson's ratio and density) that are key depth-dependent parameters used in modeling VS profiles. The most reliable results are based on surface- or body-wave profiles correlated against independent observations such as material properties inferred from outcropping geology nearby. For example, mapped geology describes station CI.LJR as a hard rock site (VS30 > 760 m/s). However, decomposed rock outcrops were found nearby and support the estimated VS30 of 303 m/s derived from the MASW (Love wave) profile.
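
    For reference, the VS30 values compared in this study are time-averaged velocities, VS30 = 30 / Σ(h_i / VS_i) over the upper 30 meters of the profile. A minimal sketch with a hypothetical layered profile:

```python
# Hedged sketch: VS30 from a layered VS profile (hypothetical layer values).
import numpy as np

def vs30(thicknesses_m, velocities_ms):
    """Time-averaged shear-wave velocity over the top 30 m of a profile."""
    h = np.asarray(thicknesses_m, float)
    v = np.asarray(velocities_ms, float)
    depth_top = np.concatenate([[0.0], np.cumsum(h)[:-1]])
    h_used = np.clip(30.0 - depth_top, 0.0, h)   # part of each layer above 30 m
    return 30.0 / np.sum(h_used / v)

# hypothetical profile from a surface-wave inversion (thickness in m, VS in m/s)
print(vs30([3, 7, 12, 20], [180, 250, 400, 760]))   # about 352 m/s
```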

  6. A parallel offline CFD and closed-form approximation strategy for computationally efficient analysis of complex fluid flows

    NASA Astrophysics Data System (ADS)

    Allphin, Devin

    Computational fluid dynamics (CFD) solution approximations for complex fluid flow problems have become a common and powerful engineering analysis technique. These tools, though qualitatively useful, remain limited in practice by their underlying inverse relationship between simulation accuracy and overall computational expense. While a great volume of research has focused on remedying these issues inherent to CFD, one traditionally overlooked area of resource reduction for engineering analysis concerns the basic definition and determination of functional relationships for the studied fluid flow variables. This artificial relationship-building technique, called meta-modeling or surrogate/offline approximation, uses design of experiments (DOE) theory to efficiently approximate non-physical coupling between the variables of interest in a fluid flow analysis problem. By mathematically approximating these variables, DOE methods can effectively reduce the required quantity of CFD simulations, freeing computational resources for other analytical focuses. An idealized interpretation of a fluid flow problem can also be employed to create suitably accurate approximations of fluid flow variables for the purposes of engineering analysis. When used in parallel with a meta-modeling approximation, a closed-form approximation can provide useful feedback concerning proper construction, suitability, or even necessity of an offline approximation tool. It also provides a short-circuit pathway for further reducing the overall computational demands of a fluid flow analysis, again freeing resources for otherwise unsuitable resource expenditures. To validate these inferences, a design optimization problem was presented requiring the inexpensive estimation of aerodynamic forces applied to a valve operating on a simulated piston-cylinder heat engine. The determination of these forces was to be found using parallel surrogate and exact approximation methods, thus evidencing the comparative benefits of this technique. For the offline approximation, Latin hypercube sampling (LHS) was used for design space filling across four (4) independent design variable degrees of freedom (DOF). Flow solutions at the mapped test sites were converged using STAR-CCM+ with aerodynamic forces from the CFD models then functionally approximated using Kriging interpolation. For the closed-form approximation, the problem was interpreted as an ideal 2-D converging-diverging (C-D) nozzle, where aerodynamic forces were directly mapped by application of the Euler equation solutions for isentropic compression/expansion. A cost-weighting procedure was finally established for creating model-selective discretionary logic, with a synthesized parallel simulation resource summary provided.
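
    The offline-approximation workflow described above (space-filling LHS design, expensive evaluations, Kriging surrogate) can be sketched in a few lines; here SciPy and scikit-learn stand in for the DOE and Kriging tools, and a cheap analytic function replaces the STAR-CCM+ aerodynamic-force evaluations.

```python
# Hedged sketch of the LHS + Kriging (Gaussian-process) surrogate workflow;
# "expensive_response" is a toy stand-in for converged CFD force values.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# 1. space-filling Latin hypercube design over 4 normalized design variables
sampler = qmc.LatinHypercube(d=4, seed=0)
X_train = sampler.random(n=40)

def expensive_response(x):
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2 - 0.5 * x[:, 2] * x[:, 3]

y_train = expensive_response(X_train)

# 2. Kriging surrogate fitted to the sampled responses
kernel = ConstantKernel(1.0) * RBF(length_scale=[0.3] * 4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# 3. cheap predictions (with uncertainty) anywhere in the design space
X_new = sampler.random(n=5)
y_pred, y_std = gp.predict(X_new, return_std=True)
```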

  7. Accuracy Improvement in Magnetic Field Modeling for an Axisymmetric Electromagnet

    NASA Technical Reports Server (NTRS)

    Ilin, Andrew V.; Chang-Diaz, Franklin R.; Gurieva, Yana L.; Il'in, Valery P.

    2000-01-01

    This paper examines the accuracy and calculation speed of magnetic field computation in an axisymmetric electromagnet. Different numerical techniques, based on an adaptive nonuniform grid, high-order finite difference approximations, and semi-analytical calculation of boundary conditions, are considered. These techniques are being applied to the modeling of the Variable Specific Impulse Magnetoplasma Rocket. For high-accuracy calculations, a fourth-order scheme offers dramatic advantages over a second-order scheme. For complex physical configurations of interest in plasma propulsion, a second-order scheme with a nonuniform mesh gives the best results. The relative advantages of the various methods are also described for cases where the speed of computation is an important consideration.
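
    The accuracy gain from a fourth-order scheme can be illustrated on a uniform grid with a smooth test function (the paper's adaptive nonuniform grid and semi-analytical boundary treatment are not reproduced here):

```python
# Hedged sketch: second- vs. fourth-order central differences for d2f/dx2.
import numpy as np

def d2_o2(f, x, h):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

def d2_o4(f, x, h):
    return (-f(x + 2*h) + 16*f(x + h) - 30*f(x) + 16*f(x - h) - f(x - 2*h)) / (12 * h**2)

f, x0 = np.sin, 1.0
exact = -np.sin(x0)
for h in (0.1, 0.05, 0.025):
    print(f"h={h:<6} 2nd-order error={abs(d2_o2(f, x0, h) - exact):.2e}  "
          f"4th-order error={abs(d2_o4(f, x0, h) - exact):.2e}")
# halving h cuts the error by ~4x for the 2nd-order stencil and ~16x for the 4th-order one
```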

  8. Influence of Excipients and Spray Drying on the Physical and Chemical Properties of Nutraceutical Capsules Containing Phytochemicals from Black Bean Extract.

    PubMed

    Guajardo-Flores, Daniel; Rempel, Curtis; Gutiérrez-Uribe, Janet A; Serna-Saldívar, Sergio O

    2015-12-03

    Black beans (Phaseolus vulgaris L.) are a rich source of flavonoids and saponins with proven health benefits. Spray-dried black bean extract powders were used in different formulations for the production of nutraceutical capsules with reduced batch-to-batch weight variability. Factorial designs were used to find an adequate maltodextrin-extract ratio for the spray-drying process used to produce black bean extract powders. Several flowability properties were used to determine the composite flow index of the produced powders. The powder containing 6% maltodextrin had the highest yield (78.6%) and the best recovery of flavonoids and saponins (>56% and >73%, respectively). The new complexes formed by the interaction of black bean powder with maltodextrin, microcrystalline cellulose 50 and starch exhibited not only larger particles but also a rougher structure than those obtained using only maltodextrin and starch as excipients. A drying step prior to capsule production improved powder flowability, increasing capsule weight and reducing variability. The formulation containing 25.0% maltodextrin, 24.1% microcrystalline cellulose 50, 50% starch and 0.9% magnesium stearate produced capsules with less than 2.5% weight variability. Spray drying is thus a feasible technique for producing free-flowing extract powders that contain valuable phytochemicals and low-cost excipients and reduce end-product variability.

  9. A Brief History of the use of Electromagnetic Induction Techniques in Soil Survey

    NASA Astrophysics Data System (ADS)

    Brevik, Eric C.; Doolittle, James

    2017-04-01

    Electromagnetic induction (EMI) has been used to characterize the spatial variability of soil properties since the late 1970s. Initially used to assess soil salinity, the use of EMI in soil studies has expanded to include: mapping soil types; characterizing soil water content and flow patterns; assessing variations in soil texture, compaction, organic matter content, and pH; and determining the depth to subsurface horizons, stratigraphic layers or bedrock, among other uses. In all cases the soil property being investigated must influence soil apparent electrical conductivity (ECa) either directly or indirectly for EMI techniques to be effective. An increasing number and diversity of EMI sensors have been developed in response to users' needs and the availability of allied technologies, which have greatly improved the functionality of these tools and increased the amount and types of data that can be gathered with a single pass. EMI investigations provide several benefits for soil studies. The large amount of georeferenced data that can be rapidly and inexpensively collected with EMI provides more complete characterization of the spatial variations in soil properties than traditional sampling techniques. In addition, compared to traditional soil survey methods, EMI can more effectively characterize diffuse soil boundaries and identify included areas of dissimilar soils within mapped soil units, giving soil scientists greater confidence when collecting spatial soil information. EMI techniques do have limitations; results are site-specific and can vary depending on the complex interactions among multiple and variable soil properties. Despite this, EMI techniques are increasingly being used to investigate the spatial variability of soil properties at field and landscape scales. The future should witness a greater use of multiple-frequency and multiple-coil EMI sensors and integration with other sensors to assess the spatial variability of soil properties. Data analysis will be improved with advanced processing and presentation systems and more sophisticated geostatistical modeling algorithms will be developed and used to interpolate EMI data, improve the resolution of subsurface features, and assess soil properties.

  10. Detection and characterization of uranium-humic complexes during 1D transport studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lesher, Emily K.; Honeyman, Bruce D.; Ranville, James F.

    2013-05-01

    The speciation and transport of uranium (VI) through porous media is highly dependent on solution conditions, the presence of complexing ligands, and the nature of the porous media. The dependency on many variables makes prediction of U transport in bench-scale experiments and in the field difficult. In particular, the identification of colloidal U phases poses a technical challenge. Transport of U in the presence and absence of natural organic matter (Suwannee River humic acid, SRHA) through silica sand and hematite coated silica sand was tested at pH 4 and 5 using static columns, where flow is controlled by gravity and residence time between advective pore volume exchanges can be strictly controlled. The column effluents were characterized by traditional techniques including ICPMS quantification of total [U] and [Fe], TOC analysis of [DOC], and pH analysis, and also by non-traditional techniques: flow field flow fractionation with online ICPMS detection (FlFFF-ICPMS) and specific UV absorbance (SUVA) characterization of effluent fractions. Key results include that the transport of U through the columns was enhanced by pre-equilibration with SRHA, and previously deposited U was remobilized by the addition of SRHA. The advanced techniques yielded important insights on the mechanisms of transport: FlFFF-ICPMS identified a U-SRHA complex as the mobile U species and directly quantified relative amounts of the complex, while specific UV absorbance (SUVA) measurements indicated a composition-based fractionation onto the porous media.

  11. Cross-boundary management between national parks and surrounding lands: A review and discussion

    NASA Astrophysics Data System (ADS)

    Schonewald-Cox, Christine; Buechner, Marybeth; Sauvajot, Raymond; Wilcox, Bruce A.

    1992-03-01

    Protecting biodiversity on public lands is difficult, requiring the management of a complex array of factors. This is especially true when the ecosystems in question are affected by, or extend onto, lands outside the boundaries of the protected area. In this article we review recent developments in the cross-boundary management of protected natural resources, such as parks, wildlife reserves, and designated wilderness areas. Five ecological and 11 anthropic techniques have been suggested for use in cross-boundary management. The categories are not mutually exclusive, but each is a distinct and representative approach, suggested by various authors from academic, managerial, and legal professions. The ecological strategies stress the collection of basic data and documentation of trends. The anthropic techniques stress the usefulness of cooperative guidelines and the need to develop a local constituency which supports park goals. However, the situation is complex and the needed strategies are often difficult to implement. Diverse park resources are influenced by events in surrounding lands. The complexity and variability of sources, the ecological systems under protection, and the uncertainty of the effects combine to produce situations for which there are no simple answers. The solution to coexistence of the park and surrounding land depends upon creative techniques and recommendations, many still forthcoming. Ecological, sociological, legal, and economic disciplines as well as the managing agency should all contribute to these recommendations. Platforms for change include legislation, institutional policies, communication, education, management techniques, and ethics.

  12. Variability of hand tremor in rest and in posture--a pilot study.

    PubMed

    Rahimi, Fariborz; Bee, Carina; South, Angela; Debicki, Derek; Jog, Mandar

    2011-01-01

    Previous studies have demonstrated variability in the frequency and amplitude of tremor between subjects and between trials, in both healthy individuals and those with disease states. However, to date, few studies have examined the composition of tremor. Efficacy of treatment for tremor using techniques such as Botulinum neurotoxin type A (BoNT A) injection may benefit from a better understanding of tremor variability and, more importantly, tremor composition. In the present study, we evaluated tremor variability and composition in 8 participants with either essential tremor or Parkinson disease tremor using kinematic recording methods. Our preliminary findings suggest that while individual patients may show more intra-trial and intra-task variability, overall, the task effect was significant only for tremor amplitude. The composition of tremor varied among patients, and the data suggest that tremor composition is complex, involving multiple muscle groups. These results may support the value of kinematic assessment methods and of an improved understanding of tremor composition in the management of tremor.

  13. Intrinsic movement variability at work. How long is the path from motor control to design engineering?

    PubMed

    Gaudez, C; Gilles, M A; Savin, J

    2016-03-01

    For several years, increasing numbers of studies have highlighted the existence of movement variability. Previously it was neglected in movement analysis, and it is still almost completely ignored in workstation design. This article reviews motor control theories and factors influencing movement execution, and indicates how intrinsic movement variability is part of task completion. These background clarifications should help ergonomists and workstation designers to gain a better understanding of these concepts, which can then be used to improve design tools. We also consider which techniques (kinematics, kinetics or muscular activity) and descriptors are most appropriate for describing intrinsic movement variability and for integration into design tools. In this way, the simulations generated by designers for workstation design should be closer to the real movements performed by workers. This review emphasises the complexity of identifying, describing and processing intrinsic movement variability in occupational activities. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. Identifying Changes of Complex Flood Dynamics with Recurrence Analysis

    NASA Astrophysics Data System (ADS)

    Wendi, D.; Merz, B.; Marwan, N.

    2016-12-01

    Temporal changes in flood hazard are difficult to detect and attribute because of multiple drivers that involve complex, non-stationary and highly variable processes. These drivers, such as human-induced climate change, natural climate variability, implementation of flood defences, river training, or land use change, can act on a range of space-time scales and can influence or mask one another. Flood time series may show complex behaviour that varies over a range of time scales and may cluster in time. Moreover, hydrological time series (e.g. discharge) are often subject to measurement errors, such as rating-curve errors, especially for extremes, where observations are in fact derived through extrapolation. This study focuses on the application of recurrence-based data analysis techniques (recurrence plots) for understanding and quantifying spatio-temporal changes in flood hazard in Germany. The recurrence plot is an effective tool for visualizing the dynamics of phase-space trajectories, i.e. trajectories constructed from a time series using an embedding dimension and a time delay, and it is known to be effective in analyzing non-stationary and non-linear time series. The sensitivity of recurrence analysis to common measurement errors and noise will also be analyzed and evaluated against conventional methods. The emphasis is on the identification of characteristic recurrence properties that associate typical dynamics with certain flood events.
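
    A minimal sketch of the recurrence-plot construction referred to above: the series is time-delay embedded and a binary recurrence matrix is thresholded from the pairwise distances. The embedding parameters, threshold heuristic and toy "discharge" series are assumptions, not those used in the study.

```python
# Hedged sketch: recurrence matrix of a time-delay-embedded series.
import numpy as np

def recurrence_matrix(x, dim=3, delay=2, eps=None):
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    if eps is None:
        eps = 0.1 * np.std(x) * np.sqrt(dim)     # heuristic threshold
    return (dist <= eps).astype(int)

t = np.arange(1000)
q = np.sin(2 * np.pi * t / 365) + 0.2 * np.random.default_rng(4).normal(size=t.size)
R = recurrence_matrix(q)
recurrence_rate = R.mean()    # RQA measures such as DET or LAM are built from R
```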

  15. Application of Fuzzy TOPSIS for evaluating machining techniques using sustainability metrics

    NASA Astrophysics Data System (ADS)

    Digalwar, Abhijeet K.

    2018-04-01

    Sustainable processes and techniques have received increased attention over the last few decades due to rising environmental concerns, an improved focus on productivity, and increasingly stringent environmental and occupational health and safety norms. The present work analyzes the research on sustainable machining techniques and identifies the techniques and parameters on which the sustainability of a process is evaluated. Based on this analysis, these parameters are adopted as criteria to evaluate different sustainable machining techniques, namely Cryogenic Machining, Dry Machining, Minimum Quantity Lubrication (MQL) and High Pressure Jet Assisted Machining (HPJAM), using a fuzzy TOPSIS framework. In order to facilitate easy arithmetic, the linguistic variables represented by fuzzy numbers are transformed into crisp numbers based on the graded mean representation. Cryogenic machining was found to be the best alternative sustainable technique according to the fuzzy TOPSIS framework adopted. The paper provides a method for dealing with multi-criteria decision making problems in a complex and linguistic environment.
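
    The graded-mean/TOPSIS computation can be sketched as follows; the triangular fuzzy ratings, criteria and weights below are invented placeholders, not the paper's actual scores.

```python
# Hedged sketch: graded-mean defuzzification followed by classical TOPSIS.
import numpy as np

# Linguistic ratings as triangular fuzzy numbers (low, mid, high) -- assumed values.
# Rows: alternatives (Cryogenic, Dry, MQL, HPJAM); columns: illustrative criteria
# (energy use, tool life, surface quality).
fuzzy = np.array([
    [[7, 9, 10], [7, 9, 10], [5, 7, 9]],
    [[3, 5, 7],  [1, 3, 5],  [3, 5, 7]],
    [[5, 7, 9],  [5, 7, 9],  [5, 7, 9]],
    [[5, 7, 9],  [3, 5, 7],  [7, 9, 10]],
], dtype=float)

weights = np.array([0.4, 0.3, 0.3])
benefit = np.array([False, True, True])    # energy use treated as a cost criterion

# graded mean representation (a + 4b + c) / 6 turns fuzzy scores into crisp ones
crisp = (fuzzy[..., 0] + 4 * fuzzy[..., 1] + fuzzy[..., 2]) / 6.0

# classical TOPSIS on the crisp, weighted, vector-normalized matrix
v = (crisp / np.linalg.norm(crisp, axis=0)) * weights
ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print(closeness)    # higher closeness = better-ranked machining technique
```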

  16. COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H. B.; Marathe, M. V.; Stearns, R. E.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].

  17. Investigation of laser Doppler anemometry in developing a velocity-based measurement technique

    NASA Astrophysics Data System (ADS)

    Jung, Ki Won

    2009-12-01

    Acoustic properties, such as the characteristic impedance and the complex propagation constant, of porous materials have traditionally been characterized with pressure-based measurement techniques using microphones. Although microphone techniques have evolved since their introduction, the most general form of the microphone technique employs two microphones to characterize the acoustic field for one continuous medium. The shortcomings of determining the acoustic field based on only two microphones can be overcome by using numerous microphones. However, the use of a number of microphones requires a careful and intricate calibration procedure. This dissertation uses laser Doppler anemometry (LDA) to establish a new measurement technique which resolves the issues that microphone techniques have: First, it is based on a single sensor, so calibration is unnecessary when only the overall ratio of the acoustic field is required for the characterization of a system; this includes measurements of the characteristic impedance and the complex propagation constant of a system. Second, it can handle multiple positional measurements without calibrating the signal at each position. Third, it can measure the three-dimensional components of velocity even in a system with a complex geometry. Fourth, it offers flexible adaptability and is not restricted to a particular type of apparatus, provided the apparatus is transparent. LDA is known to possess several disadvantages, such as the requirement of a transparent apparatus, high cost, and the need for seeding particles. The technique based on LDA combined with a curve-fitting algorithm is validated through measurements on three systems. First, the complex propagation constant of air is measured in a rigidly terminated cylindrical pipe which has very low dissipation. Second, the radiation impedance of an open-ended pipe is measured. These two parameters can be characterized by the ratio of the acoustic field measured at multiple locations. Third, the power dissipated in a variable RLC load is measured. The three experiments validate the proposed LDA technique. The utility of the LDA method is then extended to the measurement of the complex propagation constant of the air inside a 100 ppi reticulated vitreous carbon (RVC) sample. Compared to measurements in the available studies, the measurement on the 100 ppi RVC sample supports the LDA technique in that it achieves a low uncertainty in the determined quantity. This dissertation concludes by using the LDA technique for modal decomposition of the plane wave mode and the (1,1) mode driven simultaneously. This modal decomposition suggests that the LDA technique surpasses microphone-based techniques, because they are unable to determine the acoustic field based on an acoustic model with unconfined propagation constants for each modal component.

  18. Variable bright-darkfield-contrast, a new illumination technique for improved visualizations of complex structured transparent specimens.

    PubMed

    Piper, Timm; Piper, Jörg

    2012-04-01

    Variable bright-darkfield contrast (VBDC) is a new technique in light microscopy which promises significant improvements in imaging of transparent colorless specimens especially when characterized by a high regional thickness and a complex three-dimensional architecture. By a particular light pathway, two brightfield- and darkfield-like partial images are simultaneously superimposed so that the brightfield-like absorption image based on the principal zeroth order maximum interferes with the darkfield-like reflection image which is based on the secondary maxima. The background brightness and character of the resulting image can be continuously modulated from a brightfield-dominated to a darkfield-dominated appearance. When the weighting of the dark- and brightfield components is balanced, medium background brightness will result showing the specimen in a phase- or interference contrast-like manner. Specimens can either be illuminated axially/concentrically or obliquely/eccentrically. In oblique illumination, the angle of incidence and grade of eccentricity can be continuously changed. The condenser aperture diaphragm can be used for improvements of the image quality in the same manner as usual in standard brightfield illumination. By this means, the illumination can be optimally adjusted to the specific properties of the specimen. In VBDC, the image contrast is higher than in normal brightfield illumination, blooming and scattering are lower than in standard darkfield examinations, and any haloing is significantly reduced or absent. Although axial resolution and depth of field are higher than in concurrent standard techniques, the lateral resolution is not visibly reduced. Three dimensional structures, reliefs and fine textures can be perceived in superior clarity. Copyright © 2011 Wiley-Liss, Inc.

  19. Utility and translatability of mathematical modeling, cell culture and small and large animal models in magnetic nanoparticle hyperthermia cancer treatment research

    NASA Astrophysics Data System (ADS)

    Hoopes, P. J.; Petryk, Alicia A.; Misra, Adwiteeya; Kastner, Elliot J.; Pearce, John A.; Ryan, Thomas P.

    2015-03-01

    For more than 50 years, hyperthermia-based cancer researchers have utilized mathematical models, cell culture studies and animal models to better understand, develop and validate potential new treatments. It has been, and remains, unclear how and to what degree these research techniques depend on, complement and, ultimately, translate accurately to a successful clinical treatment. In the past, when mathematical models have not proven accurate in a clinical treatment situation, the initiating quantitative scientists (engineers, mathematicians and physicists) have tended to believe the biomedical parameters provided to them were inaccurately determined or reported. In a similar manner, experienced biomedical scientists often tend to question the value of mathematical models and cell culture results since those data typically lack the level of biologic and medical variability and complexity that are essential to accurately study and predict complex diseases and subsequent treatments. Such quantitative and biomedical interdependence, variability, diversity and promise have never been greater than they are within magnetic nanoparticle hyperthermia cancer treatment. The use of hyperthermia to treat cancer is well studied and has utilized numerous delivery techniques, including microwaves, radio frequency, focused ultrasound, induction heating, infrared radiation, warmed perfusion liquids (combined with chemotherapy), and, recently, metallic nanoparticles (NP) activated by near infrared radiation (NIR) and alternating magnetic field (AMF) based platforms. The goal of this paper is to use proven concepts and current research to address the potential pathobiology, modeling and quantification of the effects of treatment as pertaining to the similarities and differences in energy delivered by known external delivery techniques and iron oxide nanoparticles.

  20. Beyond the G-spot: clitourethrovaginal complex anatomy in female orgasm.

    PubMed

    Jannini, Emmanuele A; Buisson, Odile; Rubio-Casillas, Alberto

    2014-09-01

    The search for the legendary, highly erogenous vaginal region, the Gräfenberg spot (G-spot), has produced important data, substantially improving understanding of the complex anatomy and physiology of sexual responses in women. Modern imaging techniques have enabled visualization of dynamic interactions of female genitals during self-sexual stimulation or coitus. Although no single structure consistent with a distinct G-spot has been identified, the vagina is not a passive organ but a highly dynamic structure with an active role in sexual arousal and intercourse. The anatomical relationships and dynamic interactions between the clitoris, urethra, and anterior vaginal wall have led to the concept of a clitourethrovaginal (CUV) complex, defining a variable, multifaceted morphofunctional area that, when properly stimulated during penetration, could induce orgasmic responses. Knowledge of the anatomy and physiology of the CUV complex might help to avoid damage to its neural, muscular, and vascular components during urological and gynaecological surgical procedures.

  1. Extending Quantum Chemistry of Bound States to Electronic Resonances

    NASA Astrophysics Data System (ADS)

    Jagau, Thomas-C.; Bravaya, Ksenia B.; Krylov, Anna I.

    2017-05-01

    Electronic resonances are metastable states with finite lifetime embedded in the ionization or detachment continuum. They are ubiquitous in chemistry, physics, and biology. Resonances play a central role in processes as diverse as DNA radiolysis, plasmonic catalysis, and attosecond spectroscopy. This review describes novel equation-of-motion coupled-cluster (EOM-CC) methods designed to treat resonances and bound states on an equal footing. Built on complex-variable techniques such as complex scaling and complex absorbing potentials that allow resonances to be associated with a single eigenstate of the molecular Hamiltonian rather than several continuum eigenstates, these methods extend electronic-structure tools developed for bound states to electronic resonances. Selected examples emphasize the formal advantages as well as the numerical accuracy of EOM-CC in the treatment of electronic resonances. Connections to experimental observables such as spectra and cross sections, as well as practical aspects of implementing complex-valued approaches, are also discussed.
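
    To give a flavour of the complex-variable machinery (outside of any EOM-CC code), the toy sketch below adds a quadratic complex absorbing potential to a one-dimensional model Hamiltonian on a grid; resonances then appear as isolated complex eigenvalues E - iΓ/2 of a single non-Hermitian matrix. The model potential, CAP strength and filtering thresholds are arbitrary illustrative choices, and in practice the CAP strength η must be varied to locate stabilized eigenvalues.

```python
# Hedged toy sketch: a resonance from a complex absorbing potential (CAP) in 1-D.
import numpy as np

L, n = 20.0, 600
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
V = 0.5 * x**2 * np.exp(-0.1 * x**2)          # well enclosed by finite barriers

# kinetic energy by second-order finite differences (hbar = m = 1)
T = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / (2 * dx**2)

eta, x0 = 1e-3, 12.0
W = np.where(np.abs(x) > x0, (np.abs(x) - x0) ** 2, 0.0)   # quadratic CAP

H = T + np.diag(V - 1j * eta * W)             # single non-Hermitian Hamiltonian
eigvals = np.linalg.eigvals(H)

# narrow resonances: small negative imaginary parts; Re(E) = position, -2 Im(E) = width
candidates = eigvals[(eigvals.imag < -1e-8) & (eigvals.imag > -0.05) & (eigvals.real > 0)]
print(sorted(candidates, key=lambda z: z.real)[:3])
```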

  2. Investigation of complexity dynamics in a DC glow discharge magnetized plasma using recurrence quantification analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mitra, Vramori; Sarma, Bornali; Sarma, Arun

    Recurrence is a ubiquitous feature which provides deep insights into the dynamics of real dynamical systems. A suitable tool for investigating recurrences is recurrence quantification analysis (RQA). It allows, e.g., the detection of regime transitions with respect to varying control parameters. We investigate the complexity of different coexisting nonlinear dynamical regimes of the plasma floating potential fluctuations at different magnetic fields and discharge voltages by using recurrence quantification variables, in particular DET, Lmax, and Entropy. The recurrence analysis reveals that the predictability of the system strongly depends on the discharge voltage. Furthermore, the persistent behaviour of the plasma time series is characterized by the detrended fluctuation analysis technique to explore the complexity in terms of long-range correlation. Enhancement of the discharge voltage at constant magnetic field increases the nonlinear correlations; hence, the complexity of the system decreases, which corroborates the RQA analysis.
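
    A minimal sketch of the detrended fluctuation analysis used here for the long-range correlation estimate; the window sizes and the synthetic test series are illustrative assumptions.

```python
# Hedged sketch: first-order detrended fluctuation analysis (DFA1).
import numpy as np

def dfa(x, scales):
    """Return the fluctuation function F(s); the slope of log F vs log s
    is the scaling exponent alpha (~0.5 for white noise, ~1.5 for Brownian noise)."""
    y = np.cumsum(x - np.mean(x))                 # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        res = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2) for seg in segs]
        F.append(np.sqrt(np.mean(res)))
    return np.array(F)

rng = np.random.default_rng(2)
signal = np.cumsum(rng.normal(size=4000))         # Brownian-like (persistent) series
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(signal, scales)), 1)[0]
print(alpha)                                      # close to 1.5 for this test series
```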

  3. [Generation of Superoxide Radicals by Complex III in Heart Mitochondria and Antioxidant Effect of Dinitrosyl Iron Complexes at Different Partial Pressure of Oxygen].

    PubMed

    Dudylina, A L; Ivanova, M V; Shumaev, K B; Ruuge, E K

    2016-01-01

    The EPR spin-trapping technique and EPR oximetry were used to study the generation of superoxide radicals in heart mitochondria isolated from Wistar rats under conditions of variable oxygen concentration. Lithium phthalocyanine and TEMPONE-15N-D16 were chosen to determine the oxygen content in a gas-permeable capillary tube containing mitochondria. TIRON was used as a spin trap. We investigated the influence of different oxygen concentrations in the incubation mixture and demonstrated that heart mitochondria can generate superoxide in complex III at different partial pressures of oxygen as well as under conditions of deep hypoxia (< 5% O2). Dinitrosyl iron complexes with glutathione (the pharmaceutical drug "Oxacom") exerted an antioxidant effect regardless of the partial pressure of oxygen, but the magnitude and kinetic characteristics of the effect depended on the concentration of the drug.

  4. Formal methods for modeling and analysis of hybrid systems

    NASA Technical Reports Server (NTRS)

    Tiwari, Ashish (Inventor); Lincoln, Patrick D. (Inventor)

    2009-01-01

    A technique based on the use of a quantifier elimination decision procedure for real closed fields and simple theorem proving to construct a series of successively finer qualitative abstractions of hybrid automata is taught. The resulting abstractions are always discrete transition systems which can then be used by any traditional analysis tool. The constructed abstractions are conservative and can be used to establish safety properties of the original system. The technique works on linear and non-linear polynomial hybrid systems: the guards on discrete transitions and the continuous flows in all modes can be specified using arbitrary polynomial expressions over the continuous variables. An exemplar tool in the SAL environment built over the theorem prover PVS is detailed. The technique scales well to large and complex hybrid systems.

  5. A system of three-dimensional complex variables

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1986-01-01

    Some results of a new theory of multidimensional complex variables are reported, including analytic functions of a three-dimensional (3-D) complex variable. Three-dimensional complex numbers are defined, including vector properties and rules of multiplication. The necessary conditions for a function of a 3-D variable to be analytic are given and shown to be analogous to the 2-D Cauchy-Riemann equations. A simple example also demonstrates the analogy between the newly defined 3-D complex velocity and 3-D complex potential and the corresponding ordinary complex velocity and complex potential in two dimensions.
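
    For comparison, the two-dimensional relations that the abstract uses as its reference point are the standard ones (ordinary complex-variable results, not the 3-D generalization introduced in the report):

      \[
        w(z) = \phi(x,y) + i\,\psi(x,y), \qquad
        \frac{\partial \phi}{\partial x} = \frac{\partial \psi}{\partial y}, \quad
        \frac{\partial \phi}{\partial y} = -\frac{\partial \psi}{\partial x}
        \quad \text{(Cauchy-Riemann equations)},
      \]
      \[
        \frac{dw}{dz} = u - i\,v,
      \]
      where \(\phi\) is the velocity potential, \(\psi\) the stream function, and \((u,v) = (\phi_x, \phi_y)\) the velocity components of the plane flow.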

  6. Complex Networks Dynamics Based on Events-Phase Synchronization and Intensity Correlation Applied to The Anomaly Patterns and Extremes in The Tropical African Climate System

    NASA Astrophysics Data System (ADS)

    Oluoch, K.; Marwan, N.; Trauth, M.; Loew, A.; Kurths, J.

    2012-04-01

    The African continent lies almost entirely within the tropics, and as such its climate systems are predominantly governed by the heterogeneous spatial and temporal variability of the Hadley and Walker circulations. The variability in these meridional and zonal circulations leads to intensification or suppression of the intensity, duration, and frequency of the Inter-Tropical Convergence Zone (ITCZ) migration, the trade winds, the subtropical high-pressure regions, and the continental monsoons. These features play a central role in determining the spatial and temporal variability of African rainfall. The current understanding of these climate features and of their influence on rainfall patterns is still insufficient. Like many real-world systems, atmospheric-oceanic processes exhibit nonlinear properties that can be better explored using nonlinear (NL) methods of time-series analysis. In recent years the complex network approach has evolved into a powerful new tool for understanding the spatio-temporal dynamics and evolution of complex systems. Together with NL techniques, it continues to find new applications in many areas of science and technology, including climate research. We would like to use these two powerful methods to understand the spatial structure and dynamics of African rainfall anomaly patterns and extremes. The method of event synchronization (ES), developed by Quiroga et al. (2002) and first applied to climate networks by Malik et al. (2011), considers correlations with a dynamic time lag and is therefore a more natural way to correlate a complex and heterogeneous system such as a climate network than the fixed time delay most commonly used. On the other hand, the shortcomings of ES are its lack of rigorous test statistics for the significance level of the correlations and the fact that only the events' time indices are synchronized, while all information about how the relative intensities propagate within the network is lost. The new method we present is motivated by ES and borrows ideas from signal processing, where a signal is represented by its intensity and frequency. Even though the anomaly signals are not periodic, the idea of phase synchronization is not far-fetched. It brings under one umbrella the traditional linear intensity-correlation measures, such as the Pearson correlation and Spearman's rank correlation, or nonlinear ones such as mutual information, together with ES for nonlinear temporal synchronization. The intensity correlation is performed only where there is temporal synchronization; it measures how constant the intensity differences are, in other words, how monotonically related the two series are. The overall measure of correlation and synchronization is the product of the two coefficients. Complex networks constructed with this technique retain the advantages inherent in each of the techniques it borrows from and are able to uncover many known and unknown dynamical features in the rainfall field or in any variable of interest. The main aim of this work is to develop a method that can identify the footprints of coherent or incoherent structures within the ITCZ, the African and Indian monsoons, and the ENSO signal over the tropical African continent, and their temporal evolution.
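
    As an illustration of the event-synchronization ingredient described above (in the spirit of Quiroga et al., 2002), the sketch below computes a symmetric synchronization strength with a dynamic coincidence window. The combined intensity-plus-phase measure proposed by the authors is not reproduced, and all data are synthetic.

      # Event-synchronization sketch with a dynamic (local) coincidence window.
      import numpy as np

      def _local_tau(tx, ty, i, j):
          """Half the smallest neighbouring inter-event interval around tx[i], ty[j]."""
          gaps = []
          if i > 0:              gaps.append(tx[i] - tx[i - 1])
          if i < len(tx) - 1:    gaps.append(tx[i + 1] - tx[i])
          if j > 0:              gaps.append(ty[j] - ty[j - 1])
          if j < len(ty) - 1:    gaps.append(ty[j + 1] - ty[j])
          return 0.5 * min(gaps) if gaps else np.inf

      def event_sync(tx, ty):
          """Symmetric event-synchronization strength Q in [0, 1]."""
          def c(a, b):
              total = 0.0
              for i in range(len(a)):
                  for j in range(len(b)):
                      dt = a[i] - b[j]
                      tau = _local_tau(a, b, i, j)
                      if dt == 0:
                          total += 0.5
                      elif 0 < dt <= tau:
                          total += 1.0
              return total
          mx, my = len(tx), len(ty)
          return (c(tx, ty) + c(ty, tx)) / np.sqrt(mx * my) if mx and my else 0.0

      # Toy example: the second event series lags the first by one time unit
      tx = np.array([3.0, 10.0, 17.0, 25.0, 40.0])
      ty = tx + 1.0
      print(round(event_sync(tx, ty), 3))   # close to 1: fully synchronized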

  7. Multivariate analysis in thoracic research.

    PubMed

    Mengual-Macenlle, Noemí; Marcos, Pedro J; Golpe, Rafael; González-Rivas, Diego

    2015-03-01

    Multivariate analysis is based on the observation and analysis of more than one statistical outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest. Multivariate methods emerged to analyze large databases and increasingly complex data. Since modeling is the best way to represent knowledge of reality, multivariate statistical methods should be used. Multivariate methods are designed to analyze data sets simultaneously, i.e., to analyze different variables for each person or object studied. It must be kept in mind that all variables have to be treated in a way that accurately reflects the reality of the problem addressed. There are different types of multivariate analysis, and each should be employed according to the type of variables to be analyzed: dependence, interdependence, and structural methods. In conclusion, multivariate methods are ideal for the analysis of large data sets and for finding cause-and-effect relationships between variables; there is a wide range of analysis types that can be used.

  8. Reaction time, impulsivity, and attention in hyperactive children and controls: a video game technique.

    PubMed

    Mitchell, W G; Chavez, J M; Baker, S A; Guzman, B L; Azen, S P

    1990-07-01

    Maturation of sustained attention was studied in a group of 52 hyperactive elementary school children and 152 controls using a microcomputer-based test formatted to resemble a video game. In nonhyperactive children, both simple and complex reaction time decreased with age, as did variability of response time. Omission errors were extremely infrequent on simple reaction time and decreased with age on the more complex tasks. Commission errors had an inconsistent relationship with age. Hyperactive children were slower, more variable, and made more errors on all segments of the game than did controls. Both motor speed and calculated mental speed were slower in hyperactive children, with greater discrepancy for responses directed to the nondominant hand, suggesting that a selective right hemisphere deficit may be present in hyperactives. A summary score (number of individual game scores above the 95th percentile) of 4 or more detected 60% of hyperactive subjects with a false positive rate of 5%. Agreement with the Matching Familiar Figures Test was 75% in the hyperactive group.

  9. Ectopic beats in approximate entropy and sample entropy-based HRV assessment

    NASA Astrophysics Data System (ADS)

    Singh, Butta; Singh, Dilbag; Jaryal, A. K.; Deepak, K. K.

    2012-05-01

    Approximate entropy (ApEn) and sample entropy (SampEn) are promising techniques for extracting complex characteristics of cardiovascular variability. Ectopic beats, originating from sites other than the normal one, are artefacts that seriously limit heart rate variability (HRV) analysis. Approaches such as deletion and interpolation are currently used to eliminate the bias produced by ectopic beats. In this study, normal R-R interval time series of 10 healthy subjects and 10 acute myocardial infarction (AMI) patients were analysed after inserting artificial ectopic beats. The effects of editing the ectopic beats by deletion, degree-zero interpolation, and degree-one interpolation on ApEn and SampEn were then assessed. Adding ectopic beats (even 2%) reduced complexity, resulting in decreased ApEn and SampEn for both healthy and AMI patient data. This reduction was found to depend on the level of ectopic beats. Editing ectopic beats by degree-one interpolation was found to be superior to the other methods.
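
    A minimal SampEn sketch for an R-R interval series, assuming the common choices m = 2 and r = 0.2 times the series standard deviation; the study's exact settings and editing procedures are not reproduced, and edge-handling is simplified.

      # Simplified sample entropy (SampEn) sketch; illustrative only.
      import numpy as np

      def sample_entropy(x, m=2, r=None):
          x = np.asarray(x, dtype=float)
          n = len(x)
          if r is None:
              r = 0.2 * x.std()

          def count_matches(mm):
              # All templates of length mm; count pairs within tolerance r
              templates = np.array([x[i:i + mm] for i in range(n - mm)])
              count = 0
              for i in range(len(templates)):
                  # Chebyshev distance to later templates only (no self-matches)
                  d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                  count += np.sum(d <= r)
              return count

          b = count_matches(m)
          a = count_matches(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      # Toy R-R series (seconds) with one simulated premature ("ectopic") interval
      rr = 0.8 + 0.05 * np.sin(np.linspace(0, 20, 300)) + 0.01 * np.random.randn(300)
      rr_ectopic = rr.copy()
      rr_ectopic[150] = 0.4
      print(sample_entropy(rr), sample_entropy(rr_ectopic))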

  10. Lymphatic Anatomy.

    PubMed

    Hsu, Michael C; Itkin, Maxim

    2016-12-01

    The recent development of new lymphatic imaging and intervention techniques, such as intranodal lymphangiography, dynamic contrast-enhanced magnetic resonance lymphangiography, and lymphatic embolization, has resulted in a resurgence of interest in lymphatic anatomy. The lymphatic system is a continuous maze of interlacing vessels and lymph nodes and is extremely complex and variable. This presents a significant challenge for the interpretation of imaging and the performance of interventions on this system. There is an embryological reason for this complexity and variability: the lymphatic system sprouts from primordia at several locations in the body, which later fuse together at different stages of development of the embryo. The lymphatic system can be divided into three distinct parts: soft tissue lymphatics, intestinal lymphatics, and liver lymphatics. Liver and intestinal lymphatics generate approximately 80% of the body's lymph and are functionally the most important parts of the lymphatic system. However, their normal anatomy and pathological changes are relatively unknown. In this chapter we explore the anatomy of these three systems as relevant to lymphatic imaging and interventions. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Variable speed induction motor operation from a 20-kHz power bus

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.

    1989-01-01

    Induction motors are recognized for their simple rugged construction. To date, however, their application to variable speed or servo drives has been hampered by limitations on their control. Induction motor drives tend to be complex and to display troublesome low speed characteristics due in part to nonsinusoidal driving voltages. A technique was developed which involves direct synthesis of sinusoidal driving voltages from a high frequency power bus and independent control of frequency and voltages. Separation of frequency and voltage allows independent control of rotor and stator flux, full four-quadrant operation, and instantaneous torque control. Recent test results, the current status of the technology, and proposed aerospace applications are discussed.

  13. Optimisation by hierarchical search

    NASA Astrophysics Data System (ADS)

    Zintchenko, Ilia; Hastings, Matthew; Troyer, Matthias

    2015-03-01

    Finding optimal values for a set of variables relative to a cost function gives rise to some of the hardest problems in physics, computer science and applied mathematics. Although often very simple in their formulation, these problems have a complex cost function landscape which prevents currently known algorithms from efficiently finding the global optimum. Countless techniques have been proposed to partially circumvent this problem, but an efficient method is yet to be found. We present a heuristic, general purpose approach to potentially improve the performance of conventional algorithms or special purpose hardware devices by optimising groups of variables in a hierarchical way. We apply this approach to problems in combinatorial optimisation, machine learning and other fields.

  14. Movement variability in the golf swing.

    PubMed

    Langdown, Ben L; Bridge, Matt; Li, Francois-Xavier

    2012-06-01

    Traditionally, golf biomechanics has focused upon achieving consistency in swing kinematics and kinetics, whilst variability was considered to be noise and dysfunctional. There has been a growing argument that variability is an intrinsic aspect of skilled motor performance and plays a functional role. Two types of variability are described: 'strategic shot selection' and 'movement variability'. In 'strategic shot selection', the outcome remains consistent, but the swing kinematics/kinetics (resulting in the desired ball flight) are free to vary; 'movement variability' is the changes in swing kinematics and kinetics from trial to trial when the golfer attempts to hit the same shot. These changes will emerge due to constraints of the golfer's body, the environment, and the task. Biomechanical research has focused upon aspects of technique such as elite versus non-elite kinematics, kinetics, kinematic sequencing, peak angular velocities of body segments, wrist function, ground reaction forces, and electromyography, mainly in the search for greater distance and clubhead velocity. To date very little is known about the impact of variability on this complex motor skill, and it has yet to be fully researched to determine where the trade-off between functional and detrimental variability lies when in pursuit of enhanced performance outcomes.

  15. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm with which one can both capture the non-linearity in the data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that captures complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method presented here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. The algorithm introduces interpretable parameters by transforming the original inputs and also provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This automatic variable transformation and model selection method can offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subsets selection with the inclusion of variable transformations and interactions. Moreover, the method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  16. Review of literature on the finite-element solution of the equations of two-dimensional surface-water flow in the horizontal plane

    USGS Publications Warehouse

    Lee, Jonathan K.; Froehlich, David C.

    1987-01-01

    Published literature on the application of the finite-element method to solving the equations of two-dimensional surface-water flow in the horizontal plane is reviewed in this report. The finite-element method is ideally suited to modeling two-dimensional flow over complex topography with spatially variable resistance. A two-dimensional finite-element surface-water flow model with depth and vertically averaged velocity components as dependent variables allows the user great flexibility in defining geometric features such as the boundaries of a water body, channels, islands, dikes, and embankments. The following topics are reviewed in this report: alternative formulations of the equations of two-dimensional surface-water flow in the horizontal plane; basic concepts of the finite-element method; discretization of the flow domain and representation of the dependent flow variables; treatment of boundary conditions; discretization of the time domain; methods for modeling bottom, surface, and lateral stresses; approaches to solving systems of nonlinear equations; techniques for solving systems of linear equations; finite-element alternatives to Galerkin's method of weighted residuals; techniques of model validation; and preparation of model input data. References are listed in the final chapter.

  17. Application of copulas to improve covariance estimation for partial least squares.

    PubMed

    D'Angelo, Gina M; Weissfeld, Lisa A

    2013-02-20

    Dimension reduction techniques, such as partial least squares, are useful for computing summary measures and examining relationships in complex settings. Partial least squares requires an estimate of the covariance matrix as a first step in the analysis, making this estimate critical to the results. In addition, the covariance matrix also forms the basis for other techniques in multivariate analysis, such as principal component analysis and independent component analysis. This paper has been motivated by an example from an imaging study in Alzheimer's disease where there is complete separation between Alzheimer's and control subjects for one of the imaging modalities. This separation occurs in one block of variables and does not occur with the second block of variables, resulting in inaccurate estimates of the covariance. We propose the use of a copula to obtain estimates of the covariance in this setting, where one set of variables comes from a mixture distribution. Simulation studies show that the proposed estimator is an improvement over the standard estimators of covariance. We illustrate the methods with the motivating example from a study in the area of Alzheimer's disease. Copyright © 2012 John Wiley & Sons, Ltd.
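
    A sketch of one standard way to build a copula-based covariance estimate (mapping each variable to normal scores through its empirical CDF before computing the covariance); this follows the general idea described above but is not necessarily the exact estimator proposed in the paper, and the data are synthetic.

      # Rank-based (Gaussian-copula) covariance sketch.
      import numpy as np
      from scipy.stats import norm, rankdata

      def copula_covariance(X):
          """X: (n_samples, n_variables). Returns the covariance of normal scores."""
          n, p = X.shape
          Z = np.empty_like(X, dtype=float)
          for j in range(p):
              u = rankdata(X[:, j]) / (n + 1.0)   # empirical CDF values in (0, 1)
              Z[:, j] = norm.ppf(u)               # normal scores
          return np.cov(Z, rowvar=False)

      # Example: one variable drawn from a two-component mixture (mimicking the
      # complete-separation problem described above), one ordinary variable
      rng = np.random.default_rng(0)
      group = rng.integers(0, 2, size=200)
      x1 = rng.normal(loc=5.0 * group, scale=1.0)     # bimodal "imaging" variable
      x2 = 0.5 * x1 + rng.normal(size=200)
      print(copula_covariance(np.column_stack([x1, x2])))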

  18. Probing AGN Accretion Physics through AGN Variability: Insights from Kepler

    NASA Astrophysics Data System (ADS)

    Kasliwal, Vishal Pramod

    Active Galactic Nuclei (AGN) exhibit large luminosity variations over the entire electromagnetic spectrum on timescales ranging from hours to years. The variations in luminosity are devoid of any periodic character and appear stochastic. While complex correlations exist between the variability observed in different parts of the electromagnetic spectrum, no frequency band appears to be completely dominant, suggesting that the physical processes producing the variability are exceedingly rich and complex. In the absence of a clear theoretical explanation of the variability, phenomenological models are used to study AGN variability. The stochastic behavior of AGN variability makes formulating such models difficult and connecting them to the underlying physics exceedingly hard. We study AGN light curves serendipitously observed by the NASA Kepler planet-finding mission. Compared to previous ground-based observations, Kepler offers higher precision and a smaller sampling interval, resulting in potentially higher quality light curves. Using structure functions, we demonstrate that (1) the simplest statistical model of AGN variability, the damped random walk (DRW), is insufficient to characterize the observed behavior of AGN light curves; and (2) variability begins to occur in AGN on time-scales as short as hours. Of the 20 light curves studied by us, only 3-8 may be consistent with the DRW. The structure functions of the AGN in our sample exhibit complex behavior with pronounced dips on time-scales of 10-100 d, suggesting that AGN variability can be very complex and merits further analysis. We examine the accuracy of the Kepler pipeline-generated light curves and find that the publicly available light curves may require re-processing to reduce contamination from field sources. We show that while the re-processing changes the exact PSD power-law slopes inferred by us, it is unlikely to change the conclusion of our structure function study: Kepler AGN light curves indicate that the DRW is insufficient to characterize AGN variability. We provide a new approach to probing accretion physics with variability by decomposing observed light curves into a set of impulses that drive diffusive processes using C-ARMA models. Applying our approach to Kepler data, we demonstrate how the time-scales reported in the literature can be interpreted in the context of the growth and decay time-scales of flux perturbations, and we tentatively identify the flux-perturbation driving process with accretion disk turbulence on length-scales much longer than the characteristic eddy size. Our analysis technique is applicable to (1) studying the connection between AGN sub-type and variability properties; (2) probing the origins of variability by studying the multi-wavelength behavior of AGN; (3) testing numerical simulations of accretion flows with the goal of creating a library of the variability properties of different accretion mechanisms; (4) hunting for changes in the behavior of the accretion flow by block-analyzing observed light curves; and (5) constraining the sampling requirements of future surveys of AGN variability.
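
    For concreteness, the damped random walk referred to above is an Ornstein-Uhlenbeck process; the sketch below simulates one and computes its first-order structure function. The parameter values are arbitrary stand-ins, not fitted Kepler values.

      # Damped random walk (Ornstein-Uhlenbeck) simulation and structure function.
      import numpy as np

      def simulate_drw(n, dt, tau, sigma, rng):
          """Exact OU updates: x_{k+1} = x_k * exp(-dt/tau) + scaled Gaussian noise."""
          x = np.zeros(n)
          a = np.exp(-dt / tau)
          s = sigma * np.sqrt(1.0 - a**2)
          for k in range(1, n):
              x[k] = a * x[k - 1] + s * rng.standard_normal()
          return x

      def structure_function(x, dt, max_lag):
          """SF(lag) = sqrt(< (x(t+lag) - x(t))^2 >) over integer lags."""
          lags = np.arange(1, max_lag)
          sf = np.array([np.sqrt(np.mean((x[l:] - x[:-l])**2)) for l in lags])
          return lags * dt, sf

      rng = np.random.default_rng(1)
      flux = simulate_drw(n=20000, dt=0.1, tau=30.0, sigma=1.0, rng=rng)  # time in days
      lags, sf = structure_function(flux, dt=0.1, max_lag=1000)
      # For a DRW, SF(lag) = sqrt(2)*sigma*sqrt(1 - exp(-lag/tau)):
      # it rises at short lags and flattens near sqrt(2)*sigma at long lags.
      print(f"SF({lags[0]:.1f} d) = {sf[0]:.3f},  SF({lags[-1]:.1f} d) = {sf[-1]:.3f}")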

  19. A novel technique to solve nonlinear higher-index Hessenberg differential-algebraic equations by Adomian decomposition method.

    PubMed

    Benhammouda, Brahim

    2016-01-01

    Since 1980, the Adomian decomposition method (ADM) has been extensively used as a simple, powerful tool that applies directly to solve different kinds of nonlinear equations, including functional, differential, integro-differential and algebraic equations. However, for differential-algebraic equations (DAEs) the ADM has been applied in only four earlier works. There, the DAEs are first pre-processed by transformations such as index reduction before the ADM is applied. The drawback of such transformations is that they can involve complex algorithms, can be computationally expensive, and may lead to non-physical solutions. The purpose of this paper is to propose a novel technique that applies the ADM directly to solve a class of nonlinear higher-index Hessenberg DAE systems efficiently. The main advantage of this technique is that, firstly, it avoids complex transformations like index reductions and leads to a simple general algorithm. Secondly, it reduces the computational work by solving only linear algebraic systems with a constant coefficient matrix at each iteration, except for the first iteration, where the algebraic system is nonlinear (if the DAE is nonlinear with respect to the algebraic variable). To demonstrate the effectiveness of the proposed technique, we apply it to a nonlinear index-three Hessenberg DAE system with nonlinear algebraic constraints. This technique is straightforward and can be programmed in Maple or Mathematica to simulate real application problems.

  20. Straightforward and precise approach to replicate complex hierarchical structures from plant surfaces onto soft matter polymer

    PubMed Central

    Speck, Thomas; Bohn, Holger F.

    2018-01-01

    The surfaces of plant leaves are rarely smooth and often possess a species-specific micro- and/or nano-structuring. These structures usually influence the surface functionality of the leaves such as wettability, optical properties, friction and adhesion in insect–plant interactions. This work presents a simple, convenient, inexpensive and precise two-step micro-replication technique to transfer surface microstructures of plant leaves onto highly transparent soft polymer material. Leaves of three different plants with variable size (0.5–100 µm), shape and complexity (hierarchical levels) of their surface microstructures were selected as model bio-templates. A thermoset epoxy resin was used at ambient conditions to produce negative moulds directly from fresh plant leaves. An alkaline chemical treatment was established to remove the entirety of the leaf material from the cured negative epoxy mould when necessary, i.e. for highly complex hierarchical structures. Obtained moulds were filled up afterwards with low viscosity silicone elastomer (PDMS) to obtain positive surface replicas. Comparative scanning electron microscopy investigations (original plant leaves and replicated polymeric surfaces) reveal the high precision and versatility of this replication technique. This technique has promising future application for the development of bioinspired functional surfaces. Additionally, the fabricated polymer replicas provide a model to systematically investigate the structural key points of surface functionalities. PMID:29765666

  1. A knitted garment using intarsia technique for Heart Rate Variability biofeedback: Evaluation of initial prototype.

    PubMed

    Abtahi, F; Ji, G; Lu, K; Rödby, K; Seoane, F

    2015-01-01

    Heart rate variability (HRV) biofeedback is a method based on paced breathing at a specific rate, called the resonance frequency, with online feedback derived from the user's respiration and its effect on HRV. Since HRV is also influenced by factors such as stress and emotions, the stress related to an unfamiliar measurement device, cables, and skin electrodes may mask the underlying effect of such an intervention. Wearable systems are usually considered intuitive solutions that are more familiar to the end user and can help improve usability and hence reduce stress. In this work, a prototype of a knitted garment using the intarsia technique is developed and evaluated. Results show a satisfactory level of quality for the electrocardiogram and for thoracic electrical bioimpedance, i.e., respiration monitoring, as parts of an HRV biofeedback system. Using the intarsia technique and conductive yarn for the connections instead of cables reduces the complexity of fabrication in textile production and hence the final cost of a commercial product. Further development of the garment and an Android application is ongoing, and the usability and efficiency of the final prototype will be evaluated in detail.

  2. Sources of Wind Variability at a Single Station in Complex Terrain During Tropical Cyclone Passage

    DTIC Science & Technology

    2013-12-01

    ... Mesoscale Prediction System; CPA, closest point of approach; ET, extratropical transition; FNMOC, Fleet Numerical Meteorology and Oceanography Center ... forecasts. However, the TC forecast tracks and warnings they issue necessarily focus on the large-scale structure of the storm, and are not ... winds at one station. Also, this technique is a storm-centered forecast, and even if the grid spacing is on the order of one kilometer, it is unlikely ...

  3. Information analysis of a spatial database for ecological land classification

    NASA Technical Reports Server (NTRS)

    Davis, Frank W.; Dozier, Jeff

    1990-01-01

    An ecological land classification was developed for a complex region in southern California using geographic information system techniques of map overlay and contingency table analysis. Land classes were identified by mutual information analysis of vegetation pattern in relation to other mapped environmental variables. The analysis was weakened by map errors, especially errors in the digital elevation data. Nevertheless, the resulting land classification was ecologically reasonable and performed well when tested with higher quality data from the region.

  4. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
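
    A minimal worked example of the two techniques highlighted in this chapter (Pearson correlation and simple linear regression), using synthetic data; the variable names and values are purely illustrative.

      # Correlation and simple linear regression on synthetic data.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(42)
      dose = np.linspace(0, 10, 30)                        # e.g. antibiotic dose
      growth = 5.0 - 0.4 * dose + rng.normal(0, 0.5, 30)   # e.g. log colony count

      r, p_corr = stats.pearsonr(dose, growth)
      fit = stats.linregress(dose, growth)                 # slope, intercept, r, p, stderr

      print(f"Pearson r = {r:.3f} (p = {p_corr:.2g})")
      print(f"growth ~ {fit.intercept:.2f} + {fit.slope:.2f} * dose "
            f"(slope p = {fit.pvalue:.2g}, R^2 = {fit.rvalue**2:.3f})")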

  5. A Framework for Simulating Turbine-Based Combined-Cycle Inlet Mode-Transition

    NASA Technical Reports Server (NTRS)

    Le, Dzu K.; Vrnak, Daniel R.; Slater, John W.; Hessel, Emil O.

    2012-01-01

    A simulation framework based on the Memory-Mapped-Files technique was created to operate multiple numerical processes in locked time-steps and send I/O data synchronously to one another to simulate system dynamics. This simulation scheme is currently used to study the complex interactions between inlet flow dynamics, variable-geometry actuation mechanisms, and flow controls in the transition from supersonic to hypersonic conditions and vice versa. A study of Mode-Transition Control for a high-speed inlet wind-tunnel model with this MMF-based framework is presented to illustrate the scheme and demonstrate its usefulness in simulating supersonic and hypersonic inlet dynamics and controls, as well as other types of complex systems.
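
    The sketch below illustrates the memory-mapped-file idea with two cooperating processes exchanging a (step, value) record. The file name, record layout, and polling loop are illustrative assumptions, not the interface of the NASA framework, and a real lock-step framework would add explicit synchronization rather than simple polling.

      # Two processes sharing a small memory-mapped file; POSIX-only sketch.
      import mmap, os, struct, time

      PATH = "shared_io.bin"              # hypothetical shared file name
      FMT = "<id"                         # little-endian int32 step counter + float64 value
      RECORD = struct.calcsize(FMT)

      def writer(steps=10):
          """Producer: publish one (step, value) record per fixed time step."""
          with open(PATH, "r+b") as f, mmap.mmap(f.fileno(), RECORD) as m:
              for k in range(1, steps + 1):
                  value = 0.1 * k                          # e.g. an actuator command
                  m[:RECORD] = struct.pack(FMT, k, value)
                  m.flush()
                  time.sleep(0.01)                         # stand-in for the locked time step

      def reader(expect=10):
          """Consumer: poll the shared map until the final step has been seen."""
          last = 0
          with open(PATH, "r+b") as f, mmap.mmap(f.fileno(), RECORD) as m:
              while last < expect:
                  step, value = struct.unpack(FMT, m[:RECORD])
                  if step != last:                         # a new time step was published
                      last = step
                      print(f"step {step}: received {value:.2f}")
                  time.sleep(0.001)

      if __name__ == "__main__":
          with open(PATH, "wb") as f:                      # create and size the shared file
              f.write(b"\x00" * RECORD)
          pid = os.fork()                                  # fork is POSIX-only; a real
          if pid == 0:                                     # framework would add locking
              reader()
              os._exit(0)
          writer()
          os.waitpid(pid, 0)
          os.remove(PATH)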

  6. Application of geologic-mathematical 3D modeling for complex structure deposits by the example of Lower- Cretaceous period depositions in Western Ust - Balykh oil field (Khanty-Mansiysk Autonomous District)

    NASA Astrophysics Data System (ADS)

    Perevertailo, T.; Nedolivko, N.; Prisyazhnyuk, O.; Dolgaya, T.

    2015-11-01

    The complex structure of the Lower Cretaceous formation has been studied using the example of the reservoir BC101 in the Western Ust-Balykh oil field (Khanty-Mansiysk Autonomous District). Reservoir range relationships have been identified. A 3D geologic-mathematical modeling technique that takes into account the heterogeneity and variability of the natural reservoir structure is suggested. To improve the integrity of the deposit's geological model, methods of mathematical statistics were applied, which in turn made it possible to obtain equally probable models from similar input data and to take into account the formation conditions of the reservoir rocks and cap rocks.

  7. Organotypic Slice Cultures for Studies of Postnatal Neurogenesis

    PubMed Central

    Mosa, Adam J.; Wang, Sabrina; Tan, Yao Fang; Wojtowicz, J. Martin

    2015-01-01

    Here we describe a technique for studying hippocampal postnatal neurogenesis in the rodent brain using the organotypic slice culture technique. This method maintains the characteristic topographical morphology of the hippocampus while allowing direct application of pharmacological agents to the developing hippocampal dentate gyrus. Additionally, slice cultures can be maintained for up to 4 weeks and thus, allow one to study the maturation process of newborn granule neurons. Slice cultures allow for efficient pharmacological manipulation of hippocampal slices while excluding complex variables such as uncertainties related to the deep anatomic location of the hippocampus as well as the blood brain barrier. For these reasons, we sought to optimize organotypic slice cultures specifically for postnatal neurogenesis research. PMID:25867138

  8. Psychotherapy for massively traumatized refugees: the therapist variable.

    PubMed

    Kinzie, J D

    2001-01-01

    In the treatment of severe posttraumatic stress disorder (PTSD), much emphasis is put on techniques, especially behavioral therapies. Such techniques negate the importance of the therapist as an individual in the treatment of complex PTSD as presented in severely traumatized refugees. The specific difficulties encountered by this population and the therapist responses are discussed: the need to tell the trauma story and the therapist's ability to listen; the patient's need for constancy and therapist's ability to stay; the patient's need to give and the therapist's ability to receive; the patient's problem with evil and the therapist's ability to believe. Case examples illustrate the approach and then discuss how generalizable this experience is to other populations. Research implications are suggested.

  9. Locating CVBEM collocation points for steady state heat transfer problems

    USGS Publications Warehouse

    Hromadka, T.V.

    1985-01-01

    The Complex Variable Boundary Element Method or CVBEM provides a highly accurate means of developing numerical solutions to steady state two-dimensional heat transfer problems. The numerical approach exactly solves the Laplace equation and satisfies the boundary conditions at specified points on the boundary by means of collocation. The accuracy of the approximation depends upon the nodal point distribution specified by the numerical analyst. In order to develop subsequent, refined approximation functions, four techniques for selecting additional collocation points are presented. The techniques are compared as to the governing theory, representation of the error of approximation on the problem boundary, the computational costs, and the ease of use by the numerical analyst. © 1985.
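
    The sketch below illustrates complex-variable boundary collocation for the Laplace equation in a simplified form: the approximation is the real part of a complex polynomial (automatically harmonic), fitted to Dirichlet boundary data by least squares at a growing set of collocation points. This is a stand-in for the CVBEM, which uses Cauchy-integral boundary elements, and is meant only to show how added collocation points refine the approximation.

      # Complex-variable boundary collocation for a harmonic test problem.
      import numpy as np

      def design_matrix(z, degree):
          """Columns Re(z^k) and -Im(z^k), so Re(c_k z^k) is linear in (a_k, b_k)."""
          cols = []
          for k in range(degree + 1):
              zk = z**k
              cols.append(zk.real)
              cols.append(-zk.imag)
          return np.column_stack(cols)

      def fit_boundary(z_bnd, t_bnd, degree):
          A = design_matrix(z_bnd, degree)
          coef, *_ = np.linalg.lstsq(A, t_bnd, rcond=None)
          return coef

      def evaluate(z, coef, degree):
          return design_matrix(z, degree) @ coef

      # Exact harmonic test solution on the unit disk: T = Re(exp(z)) = e^x cos(y)
      exact = lambda z: np.exp(z.real) * np.cos(z.imag)

      for n_pts in (8, 16, 32):                       # refine the collocation set
          theta = np.linspace(0, 2 * np.pi, n_pts, endpoint=False)
          z_b = np.exp(1j * theta)                    # boundary collocation points
          coef = fit_boundary(z_b, exact(z_b), degree=6)
          z_chk = 0.5 * np.exp(1j * np.linspace(0, 2 * np.pi, 200))  # interior circle
          err = np.max(np.abs(evaluate(z_chk, coef, 6) - exact(z_chk)))
          print(f"{n_pts:3d} collocation points: max interior error = {err:.2e}")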

  10. Imaging normal pressure hydrocephalus: theories, techniques, and challenges.

    PubMed

    Keong, Nicole C H; Pena, Alonso; Price, Stephen J; Czosnyka, Marek; Czosnyka, Zofia; Pickard, John D

    2016-09-01

    The pathophysiology of NPH continues to provoke debate. Although guidelines and best-practice recommendations are well established, there remains a lack of consensus about the role of individual imaging modalities in characterizing specific features of the condition and predicting the success of CSF shunting. Variability of clinical presentation and imperfect responsiveness to shunting are obstacles to the application of novel imaging techniques. Few studies have sought to interpret imaging findings in the context of theories of NPH pathogenesis. In this paper, the authors discuss the major streams of thought for the evolution of NPH and the relevance of key imaging studies contributing to the understanding of the pathophysiology of this complex condition.

  11. A mathematical tool to generate complex whole body motor tasks and test hypotheses on underlying motor planning.

    PubMed

    Tagliabue, Michele; Pedrocchi, Alessandra; Pozzo, Thierry; Ferrigno, Giancarlo

    2008-01-01

    In spite of the complexity of human motor behavior, difficulties in mathematical modeling have restricted attempts to identify the motor planning criterion used by the central nervous system to rather simple movements. This paper presents a novel simulation technique able to predict the "desired trajectory" corresponding to a wide range of kinematic and kinetic optimality criteria for tasks involving many degrees of freedom and the coordination between goal achievement and balance maintenance. Proper time discretization, inverse dynamic methods, and constrained optimization techniques are combined. The application of this simulator to a planar whole-body pointing movement shows its effectiveness in managing system nonlinearities and instability as well as in ensuring the anatomo-physiological feasibility of predicted motor plans. In addition, the simulator's capability to simultaneously optimize competing movement aspects represents an interesting opportunity for the motor control community, in which the coexistence of several controlled variables has been hypothesized.

  12. Biopharmaceutical considerations and characterizations in development of colon targeted dosage forms for inflammatory bowel disease.

    PubMed

    Malayandi, Rajkumar; Kondamudi, Phani Krishna; Ruby, P K; Aggarwal, Deepika

    2014-04-01

    Colon targeted dosage forms have been extensively studied for the localized treatment of inflammatory bowel disease. These dosage forms not only improve therapeutic efficacy but also reduce the incidence of adverse drug reactions and hence improve patient compliance. However, the complex and highly variable gastrointestinal physiology limits the clinical success of these dosage forms. Biopharmaceutical characteristics of these dosage forms play a key role in rapid formulation development and help ensure clinical success. The complexity of product development and the clinical success of colon targeted dosage forms depend on biopharmaceutical characteristics such as the physicochemical properties of the drug substances, the pharmaceutical characteristics of the dosage form, physiological conditions, and the pharmacokinetic properties of the drug substances as well as the drug products. Various in vitro and in vivo techniques have been employed in the past to characterize the biopharmaceutical properties of colon targeted dosage forms. This review focuses on the factors influencing the biopharmaceutical performance of these dosage forms, in vitro characterization techniques, and in vivo studies.

  13. The medial patellofemoral complex.

    PubMed

    Loeb, Alexander E; Tanaka, Miho J

    2018-06-01

    The purpose of this review is to describe the current understanding of the medial patellofemoral complex, including recent anatomic advances, evaluation of indications for reconstruction with concomitant pathology, and surgical reconstruction techniques. Recent advances in our understanding of MPFC anatomy have found that there are fibers that insert onto the deep quadriceps tendon as well as the patella, thus earning the name "medial patellofemoral complex" to allow for the variability in its anatomy. In MPFC reconstruction, anatomic origin and insertion points and appropriate graft length are critical to prevent overconstraint of the patellofemoral joint. The MPFC is a crucial soft tissue checkrein to lateral patellar translation, and its repair or reconstruction results in good restoration of patellofemoral stability. As our understanding of MPFC anatomy evolves, further studies are needed to apply its relevance in kinematics and surgical applications to its role in maintaining patellar stability.

  14. Newtonian nudging for a Richards equation-based distributed hydrological model

    NASA Astrophysics Data System (ADS)

    Paniconi, Claudio; Marrocu, Marino; Putti, Mario; Verbunt, Mark

    The objective of data assimilation is to provide physically consistent estimates of spatially distributed environmental variables. In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any of these features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
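
    A toy illustration of the nudging idea, assuming a single linear-reservoir model rather than the coupled Richards/diffusion-wave model of the paper: a relaxation term proportional to the observation-minus-model mismatch, weighted by a temporal influence function, is added to the model equation.

      # Newtonian nudging on a toy linear-reservoir ODE.
      import numpy as np

      dt, n = 0.1, 400
      k, rain = 0.2, 1.0                                   # reservoir parameters

      # "Truth" run, from which sparse observations are taken
      truth = np.zeros(n); truth[0] = 5.0
      for i in range(1, n):
          truth[i] = truth[i-1] + dt * (rain - k * truth[i-1])
      obs_idx = np.arange(50, n, 50)                       # one observation every 5 time units
      obs_t, obs_val = obs_idx * dt, truth[obs_idx]

      def run(nudge, G=0.5, half_width=2.0):
          x = np.zeros(n); x[0] = 2.0                      # model starts from a biased state
          for i in range(1, n):
              t = i * dt
              forcing = rain - k * x[i-1]                  # the model physics
              if nudge:
                  # temporal weights decaying with distance from each observation time
                  w = np.maximum(0.0, 1.0 - np.abs(t - obs_t) / half_width)
                  j = int(w.argmax())
                  if w[j] > 0:
                      forcing += G * w[j] * (obs_val[j] - x[i-1])   # relaxation term
              x[i] = x[i-1] + dt * forcing
          return np.abs(truth - x).mean()

      print("mean |error| without nudging:", round(run(False), 3))
      print("mean |error| with nudging:   ", round(run(True), 3))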

  15. Global high-frequency source imaging accounting for complexity in Green's functions

    NASA Astrophysics Data System (ADS)

    Lambert, V.; Zhan, Z.

    2017-12-01

    The general characterization of earthquake source processes at long periods has seen great success via seismic finite fault inversion/modeling. Complementary techniques, such as seismic back-projection, extend the capabilities of source imaging to higher frequencies and reveal finer details of the rupture process. However, such high frequency methods are limited by the implicit assumption of simple Green's functions, which restricts the use of global arrays and introduces artifacts (e.g., sweeping effects, depth/water phases) that require careful attention. This motivates the implementation of an imaging technique that considers the potential complexity of Green's functions at high frequencies. We propose an alternative inversion approach based on the modest assumption that the path effects contributing to signals within high-coherency subarrays share a similar form. Under this assumption, we develop a method that can combine multiple high-coherency subarrays to invert for a sparse set of subevents. By accounting for potential variability in the Green's functions among subarrays, our method allows for the utilization of heterogeneous global networks for robust high resolution imaging of the complex rupture process. The approach also provides a consistent framework for examining frequency-dependent radiation across a broad frequency spectrum.

  16. Evaluating uncertainty in predicting spatially variable representative elementary scales in fractured aquifers, with application to Turkey Creek Basin, Colorado

    USGS Publications Warehouse

    Wellman, Tristan P.; Poeter, Eileen P.

    2006-01-01

    Computational limitations and sparse field data often mandate use of continuum representation for modeling hydrologic processes in large‐scale fractured aquifers. Selecting appropriate element size is of primary importance because continuum approximation is not valid for all scales. The traditional approach is to select elements by identifying a single representative elementary scale (RES) for the region of interest. Recent advances indicate RES may be spatially variable, prompting unanswered questions regarding the ability of sparse data to spatially resolve continuum equivalents in fractured aquifers. We address this uncertainty of estimating RES using two techniques. In one technique we employ data‐conditioned realizations generated by sequential Gaussian simulation. For the other we develop a new approach using conditioned random walks and nonparametric bootstrapping (CRWN). We evaluate the effectiveness of each method under three fracture densities, three data sets, and two groups of RES analysis parameters. In sum, 18 separate RES analyses are evaluated, which indicate RES magnitudes may be reasonably bounded using uncertainty analysis, even for limited data sets and complex fracture structure. In addition, we conduct a field study to estimate RES magnitudes and resulting uncertainty for Turkey Creek Basin, a crystalline fractured rock aquifer located 30 km southwest of Denver, Colorado. Analyses indicate RES does not correlate to rock type or local relief in several instances but is generally lower within incised creek valleys and higher along mountain fronts. Results of this study suggest that (1) CRWN is an effective and computationally efficient method to estimate uncertainty, (2) RES predictions are well constrained using uncertainty analysis, and (3) for aquifers such as Turkey Creek Basin, spatial variability of RES is significant and complex.
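
    A minimal nonparametric-bootstrap sketch of the kind of uncertainty bounding mentioned above, applied to a set of hypothetical local RES estimates; the conditioned-random-walk sampling itself is not reproduced, and the data are synthetic.

      # Nonparametric bootstrap confidence interval for a mean RES estimate.
      import numpy as np

      rng = np.random.default_rng(7)
      res_estimates = rng.lognormal(mean=3.0, sigma=0.4, size=40)   # hypothetical local RES values (m)

      n_boot = 5000
      boot_means = np.array([
          rng.choice(res_estimates, size=res_estimates.size, replace=True).mean()
          for _ in range(n_boot)
      ])
      lo, hi = np.percentile(boot_means, [2.5, 97.5])
      print(f"mean RES = {res_estimates.mean():.1f} m, 95% bootstrap CI = ({lo:.1f}, {hi:.1f}) m")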

  17. Technical and Tactical Aspects that Differentiate Winning and Losing Performances in Elite Male Karate Fighters.

    PubMed

    Vidranski, Tihomir; Sertić, Hrvoje; Jukić, Josefina

    2015-07-01

    The purpose of this research was to identify the fighters' technical and tactical activity indicators in order to determine indicator significance regarding situational efficiency and to distinguish between winning and losing performances in a karate match. We observed a sample of 274 male contestants in 137 karate matches during the 2008 World Karate Championship in Tokyo. Each individual competitor was observed in a maximum of three matches. The matches were recorded with a DVD camera in order to collect data for further analysis, and the sample was described using 48 technical and tactical indicators of situational efficiency and match outcome variables. The obtained results indicate that a karate match is composed of 91% non-scoring techniques and 9% scoring techniques in the total technique frequency. On this basis, a significant difference in situational efficiency between the match winners and the losing contestants has been found. The two groups of fighters exhibit a statistically significant difference (p<0.05) in 11 out of 21 observed variables of situational efficiency in the table of derived situational indicators. The prevalence of non-scoring techniques suggests that the energy demands and the technical and tactical requirements of a karate match are to the largest extent defined by non-scoring techniques. Therefore, it would be a grave mistake to disregard non-scoring karate techniques in any future situational efficiency studies. The winners were found to differ from the defeated contestants by a higher level of situational efficiency in their executed techniques, which incorporate versatility, biomechanical and structural complexity, topological diversity, and a specific tactical concept of technique use in the attack phase.

  18. Scaling Linguistic Characterization of Precipitation Variability

    NASA Astrophysics Data System (ADS)

    Primo, C.; Gutierrez, J. M.

    2003-04-01

    Rainfall variability is influenced by changes in the aggregation of daily rainfall. This problem is of great importance for hydrological, agricultural and ecological applications. Rainfall averages, or accumulations, are widely used as standard climatic parameters. However, different aggregation schemes may lead to the same average or accumulated values. In this paper we present a fractal method to characterize different aggregation schemes. The method provides scaling exponents characterizing weekly or monthly rainfall patterns for a given station. To this aim, we establish an analogy with linguistic analysis, considering precipitation as a discrete variable (e.g., rain, no rain). Each weekly, or monthly, symbolic sequence of observed precipitation is then considered as a "word" (in this case, a binary word) which defines a specific weekly rainfall pattern. Thus, each site defines a "language" characterized by the words observed at that site during a period representative of the climatology. The more variable the observed weekly precipitation sequences, the more complex the obtained language. To characterize these languages, we first applied Zipf's method, obtaining scaling histograms of rank-ordered frequencies. However, to obtain significant exponents, the scaling must be maintained over some orders of magnitude, requiring long sequences of daily precipitation which are not available at particular stations. This analysis is therefore not suitable for applications involving particular stations (such as regionalization). We then introduce an alternative fractal method applicable to data from local stations. The so-called chaos-game method uses Iterated Function Systems (IFS) for graphically representing rainfall languages, in such a way that complex languages define complex graphical patterns. The box-counting dimension and the entropy of the resulting patterns are used as linguistic parameters to quantitatively characterize the complexity of the patterns. We illustrate the high climatological discrimination power of the linguistic parameters in the Iberian Peninsula, compared with other standard techniques (such as seasonal mean accumulated precipitation). As an example, standard and linguistic parameters are used as inputs to a clustering regionalization method, and the resulting clusters are compared.
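
    A sketch of the Zipf-style "linguistic" step described above, assuming synthetic daily rain/no-rain data: weekly 7-letter binary words are counted and a log-log slope is fitted to the rank-ordered frequencies. The chaos-game/IFS step and the observed station records are not reproduced.

      # Rank-frequency (Zipf) analysis of weekly binary precipitation "words".
      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(0)
      wet = rng.random(7 * 520) < 0.35                 # ten years of synthetic daily rain occurrence
      words = ["".join("1" if d else "0" for d in wet[i:i + 7])
               for i in range(0, len(wet) - 6, 7)]     # one 7-letter word per week

      freq = np.array(sorted(Counter(words).values(), reverse=True), dtype=float)
      rank = np.arange(1, freq.size + 1)

      # Zipf-style scaling exponent from a log-log least-squares fit
      slope, intercept = np.polyfit(np.log(rank), np.log(freq), 1)
      print(f"{len(set(words))} distinct weekly patterns; Zipf slope ~ {slope:.2f}")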

  19. Hybrid genetic algorithm with an adaptive penalty function for fitting multimodal experimental data: application to exchange-coupled non-Kramers binuclear iron active sites.

    PubMed

    Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I

    2011-09-26

    A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorrn Adaptive Penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable temperature variable field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.

  20. Multi-rendezvous low-thrust trajectory optimization using costate transforming and homotopic approach

    NASA Astrophysics Data System (ADS)

    Chen, Shiyu; Li, Haiyang; Baoyin, Hexi

    2018-06-01

    This paper investigates a method for optimizing multi-rendezvous low-thrust trajectories using indirect methods. An efficient technique, labeled costate transforming, is proposed to optimize multiple trajectory legs simultaneously rather than optimizing each trajectory leg individually. Complex inner-point constraints and a large number of free variables are a main challenge in optimizing multi-leg transfers via shooting algorithms. This difficulty is reduced by first optimizing each trajectory leg individually; the results may then be utilized as an initial guess in the simultaneous optimization of multiple trajectory legs. In this paper, the limitations of similar techniques in previous research are overcome, and a homotopic approach is employed to improve the convergence efficiency of the shooting process in multi-rendezvous low-thrust trajectory optimization. Numerical examples demonstrate that the newly introduced techniques are valid and efficient.

  1. Analysis of covariance as a remedy for demographic mismatch of research subject groups: some sobering simulations.

    PubMed

    Adams, K M; Brown, G G; Grant, I

    1985-08-01

    Analysis of Covariance (ANCOVA) is often used in neuropsychological studies to effect ex-post-facto adjustment of performance variables amongst groups of subjects mismatched on some relevant demographic variable. This paper reviews some of the statistical assumptions underlying this usage. In an attempt to illustrate the complexities of this statistical technique, three sham studies using actual patient data are presented. These staged simulations have varying relationships between group test performance differences and levels of covariate discrepancy. The results were robust and consistent in their nature, and were held to support the wisdom of previous cautions by statisticians concerning the employment of ANCOVA to justify comparisons between incomparable groups. ANCOVA should not be used in neuropsychological research to equate groups unequal on variables such as age and education or to exert statistical control whose objective is to eliminate consideration of the covariate as an explanation for results. Finally, the report advocates by example the use of simulation to further our understanding of neuropsychological variables.

  2. Analysis of Power Laws, Shape Collapses, and Neural Complexity: New Techniques and MATLAB Support via the NCC Toolbox.

    PubMed

    Marshall, Najja; Timme, Nicholas M; Bennett, Nicholas; Ripp, Monica; Lautzenhiser, Edward; Beggs, John M

    2016-01-01

    Neural systems include interactions that occur across many scales. Two divergent methods for characterizing such interactions have drawn on the physical analysis of critical phenomena and the mathematical study of information. Inferring criticality in neural systems has traditionally rested on fitting power laws to the property distributions of "neural avalanches" (contiguous bursts of activity), but the fractal nature of avalanche shapes has recently emerged as another signature of criticality. On the other hand, neural complexity, an information-theoretic measure, has been used to capture the interplay between the functional localization of brain regions and their integration for higher cognitive functions. Unfortunately, treatments of all three methods (power-law fitting, avalanche shape collapse, and neural complexity) have suffered from shortcomings. Empirical data often contain biases that introduce deviations from a true power law in the tail and head of the distribution, but deviations in the tail have often gone unconsidered; avalanche shape collapse has required manual parameter tuning; and the estimation of neural complexity has relied on small data sets or statistical assumptions for the sake of computational efficiency. In this paper we present technical advancements in the analysis of criticality and complexity in neural systems. We use maximum-likelihood estimation to automatically fit power laws with left and right cutoffs, present the first automated shape collapse algorithm, and describe new techniques to account for large numbers of neural variables and small data sets in the calculation of neural complexity. In order to facilitate future research in criticality and complexity, we have made the software utilized in this analysis freely available online in the MATLAB NCC (Neural Complexity and Criticality) Toolbox.
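
    A sketch of the continuous-data maximum-likelihood power-law fit with a lower cutoff (the standard Clauset-Shalizi-Newman estimator); the Toolbox's handling of discrete data, upper cutoffs, and shape collapse is not reproduced here, and the "avalanche sizes" are synthetic.

      # Maximum-likelihood power-law exponent with a lower cutoff.
      import numpy as np

      def fit_power_law(x, xmin):
          """MLE exponent for P(x) ~ x^(-alpha), x >= xmin (continuous approximation)."""
          tail = np.asarray(x, float)
          tail = tail[tail >= xmin]
          alpha = 1.0 + tail.size / np.sum(np.log(tail / xmin))
          stderr = (alpha - 1.0) / np.sqrt(tail.size)
          return alpha, stderr, tail.size

      # Synthetic avalanche sizes drawn from a pure power law with alpha = 1.5
      rng = np.random.default_rng(3)
      u = rng.random(20000)
      sizes = 1.0 * (1.0 - u) ** (-1.0 / 0.5)       # inverse-CDF sampling, alpha - 1 = 0.5
      alpha, se, n = fit_power_law(sizes, xmin=1.0)
      print(f"alpha = {alpha:.3f} ± {se:.3f} from {n} tail events (true value 1.5)")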

  3. A three-dimensional insight into the complexity of flow convergence in mitral regurgitation: adjunctive benefit of anatomic regurgitant orifice area.

    PubMed

    Chandra, Sonal; Salgo, Ivan S; Sugeng, Lissa; Weinert, Lynn; Settlemier, Scott H; Mor-Avi, Victor; Lang, Roberto M

    2011-09-01

    Mitral effective regurgitant orifice area (EROA) obtained with the flow convergence (FC) method is used to quantify the severity of mitral regurgitation (MR). However, it is challenging and prone to interobserver variability in complex valvular pathology. We hypothesized that real-time three-dimensional (3D) transesophageal echocardiography (RT3D TEE) derived anatomic regurgitant orifice area (AROA) can be a reasonable adjunct, irrespective of valvular geometry. Our goals were (1) to determine the regurgitant orifice morphology and the distance suitable for FC measurement using 3D computational fluid dynamics and finite element analysis (FEA), and (2) to measure AROA from RT3D TEE and compare it with 2D FC derived EROA measurements. We studied 61 patients. EROA was calculated from 2D TEE images using the 2D FC technique, and AROA was obtained from zoomed RT3D TEE acquisitions using prototype software. 3D computational fluid dynamics by FEA was applied to 3D TEE images to determine the effects of mitral valve (MV) orifice geometry on the FC pattern. The 3D FEA analysis revealed that a central regurgitant orifice is suitable for FC measurements at an optimal distance from the orifice, but a complex MV orifice resulting in eccentric jets yielded nonaxisymmetric isovelocity contours close to the orifice, where the assumptions underlying FC are problematic. EROA and AROA measurements correlated well (r = 0.81) with a nonsignificant bias. However, in patients with eccentric MR the bias was larger than in central MR. Intermeasurement variability was higher for the 2D FC technique than for the RT3D TEE based measurements. With its superior reproducibility, 3D analysis of the AROA is a useful alternative for quantifying MR when 2D FC measurements are challenging.

  4. A highly stable gadolinium complex with a fast, associative mechanism of water exchange.

    PubMed

    Thompson, Marlon K; Botta, Mauro; Nicolle, Gaëlle; Helm, Lothar; Aime, Silvio; Merbach, André E; Raymond, Kenneth N

    2003-11-26

    The stability and water exchange dynamics of gadolinium(III) complexes are critical characteristics that determine their effectiveness as contrast agents for magnetic resonance imaging (MRI). A new heteropodal Gd(III) chelate, [Gd-TREN-bis(6-Me-HOPO)-(TAM-TRI)(H2O)2] (Gd-2), is presented which is based on a hydroxypyridinate (HOPO)-terephthalamide (TAM) ligand design. Thermodynamic equilibrium constants for the acid-base properties and the Gd(III) complexation strength of TREN-bis(6-Me-HOPO)-(TAM-TRI) (2) were measured by potentiometric and spectrophotometric titration techniques, respectively. The pGd of 2 is 20.6 (pH 7.4, 25 °C, I = 0.1 M), indicating that Gd-2 is of more than sufficient thermodynamic stability for in vivo MRI applications. The water exchange rate of Gd-2 (kex = 5.3(±0.6) × 10^7 s^-1) was determined by variable-temperature 17O NMR and is in the fast exchange regime, which is ideal for MRI. Variable-pressure 17O NMR was used to determine the volume of activation (ΔV) of Gd-2. ΔV for Gd-2 is -5 cm^3 mol^-1, indicative of an interchange associative (Ia) water exchange mechanism. The results reported herein are important as they provide insight into the factors influencing high stability and fast water exchange in the HOPO series of complexes, potentially future clinical contrast agents.
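
    The variable-pressure analysis rests on the standard relation ln k(P) = ln k0 - (ΔV/RT)·P, so the activation volume follows from the slope of ln k against pressure. A minimal sketch of that fit is shown below; the pressures and rate constants are hypothetical values chosen only to illustrate how a negative ΔV of roughly -5 cm^3 mol^-1 would be extracted.

```python
"""Sketch: extracting an activation volume from variable-pressure rate data.

ln k(P) = ln k0 - (DeltaV / (R*T)) * P, so a linear fit of ln k against P gives
DeltaV = -R * T * slope.  The pressures and rate constants below are
hypothetical, for illustration only."""
import numpy as np

R = 8.314    # J mol^-1 K^-1
T = 298.15   # K

P_MPa = np.array([0.1, 50.0, 100.0, 150.0, 200.0])     # pressure
k = np.array([5.3e7, 5.85e7, 6.45e7, 7.1e7, 7.85e7])   # exchange rate, s^-1

slope, intercept = np.polyfit(P_MPa * 1e6, np.log(k), 1)  # pressure converted to Pa
delta_V = -R * T * slope                                   # m^3 mol^-1
print(f"activation volume: {delta_V * 1e6:.1f} cm^3 mol^-1")  # negative => associative
```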

  5. Solving the Inverse-Square Problem with Complex Variables

    ERIC Educational Resources Information Center

    Gauthier, N.

    2005-01-01

    The equation of motion for a mass that moves under the influence of a central, inverse-square force is formulated and solved as a problem in complex variables. To find the solution, the constancy of angular momentum is first established using complex variables. Next, the complex position coordinate and complex velocity of the particle are assumed…
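
    The abstract is truncated, but the first step it mentions, establishing the constancy of angular momentum in complex notation, can be reconstructed as follows (our own sketch, not the author's text):

```latex
% Sketch (not from the paper): constancy of angular momentum in complex notation.
% With z = x + iy and the inverse-square law m \ddot{z} = -k z / |z|^3,
% the angular momentum per unit mass is L = \operatorname{Im}(\bar{z}\,\dot{z}).
\frac{dL}{dt}
  = \operatorname{Im}\bigl(\dot{\bar{z}}\,\dot{z} + \bar{z}\,\ddot{z}\bigr)
  = \operatorname{Im}\!\Bigl(|\dot{z}|^{2} - \frac{k}{m}\,\frac{\bar{z}\,z}{|z|^{3}}\Bigr)
  = 0,
\qquad \text{since both } |\dot{z}|^{2} \text{ and } \bar{z}z = |z|^{2} \text{ are real.}
```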

  6. Reduced modeling of signal transduction – a modular approach

    PubMed Central

    Koschorreck, Markus; Conzelmann, Holger; Ebert, Sybille; Ederer, Michael; Gilles, Ernst Dieter

    2007-01-01

    Background Combinatorial complexity is a challenging problem in detailed and mechanistic mathematical modeling of signal transduction. This subject has been discussed intensively, and considerable progress has been made within the last few years. A software tool (BioNetGen) was developed which allows an automatic rule-based set-up of mechanistic model equations. In many cases these models can be reduced by an exact domain-oriented lumping technique. However, the resulting models can still consist of a very large number of differential equations. Results We introduce a new reduction technique, which allows modularized and highly reduced models to be built. Compared to existing approaches, further reduction of signal transduction networks is possible. The method also provides a new modularization criterion, which allows the model to be dissected into smaller modules, called layers, that can be modeled independently. Hallmarks of the approach are conservation relations within each layer and connection of layers by signal flows instead of mass flows. The reduced model can be formulated directly without previous generation of detailed model equations. It can be understood and interpreted intuitively, as model variables are macroscopic quantities that are converted by rates following simple kinetics. The proposed technique is applicable without using complex mathematical tools and even without detailed knowledge of the mathematical background. However, we provide a detailed mathematical analysis to show the performance and limitations of the method. For physiologically relevant parameter domains the transient as well as the stationary errors caused by the reduction are negligible. Conclusion The new layer-based reduced modeling method allows modularized and strongly reduced models of signal transduction networks to be built. Reduced model equations can be directly formulated and are intuitively interpretable. Additionally, the method provides very good approximations, especially for macroscopic variables. It can be combined with existing reduction methods without any difficulties. PMID:17854494

  7. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    PubMed

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this methodology to high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when the response is artificially transformed into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Diffractive optical variable image devices generated by maskless interferometric lithography for optical security

    NASA Astrophysics Data System (ADS)

    Cabral, Alexandre; Rebordão, José M.

    2011-05-01

    In optical security (protection against forgery and counterfeiting of products and documents) the problem is not exact reproduction but the production of something sufficiently similar to the original. Currently, Diffractive Optically Variable Image Devices (DOVIDs), which create dynamic chromatic effects that are easily recognized but difficult to reproduce, are often used to protect important products and documents. Well known examples of DOVIDs for security are 3D or 2D/3D holograms in identity documents and credit cards. Others are composed of shapes with different types of microstructures that yield dynamic chromatic effects by diffraction. A maskless interferometric lithography technique to generate DOVIDs for optical security is presented and compared to traditional techniques. The approach can be considered as a self-masking focused holography on planes tilted with respect to the reference optical axes of the system, and is based on the Scheimpflug and Hinge rules. No physical masks are needed to ensure optimum exposure of the photosensitive film. The system built to demonstrate the technique relies on the digital micromirror device MOEMS technology from Texas Instruments' Digital Light Processing. The technique is linear in the number of specified colors and depends on neither the area of the device nor the number of pixels, the factors that drive the complexity of dot-matrix based systems. The results confirmed the innovation and capabilities of the technique in the creation of diffractive optical elements for security against counterfeiting and forgery.

  9. Endodontic and Clinical Considerations in the Management of Variable Anatomy in Mandibular Premolars: A Literature Review

    PubMed Central

    Hammo, Mohammad

    2014-01-01

    Mandibular premolars are known to have numerous anatomic variations of their roots and root canals, which are a challenge to treat endodontically. The paper reviews literature to detail the various clinically relevant anatomic considerations with detailed techniques and methods to successfully manage these anomalies. An emphasis and detailed description of every step of treatment including preoperative diagnosis, intraoperative identification and management, and surgical endodontic considerations for the successful management of these complex cases have been included. PMID:24895584

  10. Techniques and resources for storm-scale numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Droegemeier, Kelvin; Grell, Georg; Doyle, James; Soong, Su-Tzai; Skamarock, William; Bacon, David; Staniforth, Andrew; Crook, Andrew; Wilhelmson, Robert

    1993-01-01

    The topics discussed include the following: multiscale application of the 5th-generation PSU/NCAR mesoscale model; the coupling of nonhydrostatic atmospheric and hydrostatic ocean models for air-sea interaction studies; a numerical simulation of cloud formation over complex topography; adaptive grid simulations of convection; an unstructured grid, nonhydrostatic meso/cloud scale model; efficient mesoscale modeling for multiple scales using variable resolution; initialization of cloud-scale models with Doppler radar data; and making effective use of future computing architectures, networks, and visualization software.

  11. Technical development to improve satellite sounding over radiatively complex terrain

    NASA Technical Reports Server (NTRS)

    Schreiner, A. J.

    1985-01-01

    High resolution topography was acquired and applied on the McIDAS system. A technique for finding the surface skin temperature in the presence of cloud and reflected sunlight was implemented in the ALPEX retrieval software, and the variability of surface emissivity at microwave wavelengths was examined. Data containing raw radiances for all HIRS and MSU channels for NOAA-6 and 7 were used. METEOSAT data were used to derive cloud drift and water vapor winds over the Alpine region.

  12. Stability of uncertain impulsive complex-variable chaotic systems with time-varying delays.

    PubMed

    Zheng, Song

    2015-09-01

    In this paper, the robust exponential stabilization of uncertain impulsive complex-variable chaotic delayed systems is considered in the presence of parameter perturbations and delayed impulses. It is assumed that the complex-variable chaotic systems under consideration have bounded parametric uncertainties and that the impulses involve state variables subject to time-varying delays. Based on the theories of adaptive control and impulsive control, some less conservative and easily verified stability criteria are established for a class of complex-variable chaotic delayed systems with delayed impulses. Some numerical simulations are given to validate the effectiveness of the proposed criteria of impulsive stabilization for uncertain complex-variable chaotic delayed systems. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  13. An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.

    PubMed

    Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei

    2016-01-11

    Spectral analysis based on near infrared (NIR) sensors is a powerful tool for complex information processing and high precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of the successive projections algorithm (SPA) with small sample sizes, as well as the lack of association between the selected variables and the analyte. The proposed method is an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, and is applied to the quantitative prediction of alcohol concentrations in liquor using a NIR sensor. In the experiments, the proposed EBSPA is combined with three kinds of modeling methods to test its performance. In addition, the proposed EBSPA combined with partial least squares is compared with other state-of-the-art variable selection methods. The results show that the proposed method can remedy the defects of SPA and has the best generalization performance and stability. Furthermore, the physical meaning of the variables selected from the near infrared sensor data is clear, which effectively reduces the number of variables and improves prediction accuracy.
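
    The EBSPA variant is not specified here in enough detail to reproduce, but the successive projections step it builds on, greedily choosing spectral channels whose columns are as orthogonal as possible to those already selected, can be sketched as follows on synthetic NIR data (the matrix sizes and the helper name spa_select are illustrative assumptions).

```python
"""Sketch of the basic successive projections algorithm (SPA) step that EBSPA
builds on: greedily pick variables (spectral channels) whose columns are as
orthogonal as possible to the ones already selected.  Synthetic data; this is
not the authors' EBSPA implementation."""
import numpy as np

def spa_select(X, n_select, first=0):
    """Return indices of n_select columns of X chosen by successive projections."""
    X = X.astype(float).copy()
    n_vars = X.shape[1]
    selected = [first]
    for _ in range(n_select - 1):
        # project every column onto the orthogonal complement of the last pick
        last = X[:, selected[-1]]
        denom = last @ last
        for j in range(n_vars):
            X[:, j] = X[:, j] - (X[:, j] @ last) / denom * last
        # choose the remaining column with the largest residual norm
        norms = np.linalg.norm(X, axis=0)
        norms[selected] = -1.0
        selected.append(int(np.argmax(norms)))
    return selected

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    spectra = rng.random((50, 200))   # 50 samples x 200 NIR channels (synthetic)
    print(spa_select(spectra, n_select=5))
```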

  14. Towards automating the discovery of certain innovative design principles through a clustering-based optimization technique

    NASA Astrophysics Data System (ADS)

    Bandaru, Sunith; Deb, Kalyanmoy

    2011-09-01

    In this article, a methodology is proposed for automatically extracting innovative design principles which make a system or process (subject to conflicting objectives) optimal using its Pareto-optimal dataset. Such 'higher knowledge' would not only help designers to execute the system better, but also enable them to predict how changes in one variable would affect other variables if the system has to retain its optimal behaviour. This in turn would help solve other similar systems with different parameter settings easily without the need to perform a fresh optimization task. The proposed methodology uses a clustering-based optimization technique and is capable of discovering hidden functional relationships between the variables, objective and constraint functions and any other function that the designer wishes to include as a 'basis function'. A number of engineering design problems are considered for which the mathematical structure of these explicit relationships exists and has been revealed by a previous study. A comparison with the multivariate adaptive regression splines (MARS) approach reveals the practicality of the proposed approach due to its ability to find meaningful design principles. The success of this procedure for automated innovization is highly encouraging and indicates its suitability for further development in tackling more complex design scenarios.

  15. A method for work modeling at complex systems: towards applying information systems in family health care units.

    PubMed

    Jatobá, Alessandro; de Carvalho, Paulo Victor R; da Cunha, Amauri Marques

    2012-01-01

    Work in organizations requires a minimum level of consensus on the understanding of the practices performed. In order to adopt technological devices that support activities in environments where work is complex, characterized by interdependence among a large number of variables, understanding how work is done not only takes on even greater importance but also becomes more difficult. Therefore, this study presents a method for modeling work in complex systems, which improves knowledge of the way activities are performed in settings where they do not simply follow prescribed procedures. Uniting techniques of Cognitive Task Analysis with the concept of Work Process, this work seeks to offer a method that gives a detailed and accurate view of how people perform their tasks, so that information systems can be applied to support work in organizations.

  16. SIMRAND I- SIMULATION OF RESEARCH AND DEVELOPMENT PROJECTS

    NASA Technical Reports Server (NTRS)

    Miles, R. F.

    1994-01-01

    The Simulation of Research and Development Projects program (SIMRAND) aids in the optimal allocation of R&D resources needed to achieve project goals. SIMRAND models the system subsets or project tasks as various network paths to a final goal. Each path is described in terms of task variables such as cost per hour, cost per unit, availability of resources, etc. Uncertainty is incorporated by treating task variables as probabilistic random variables. SIMRAND calculates the measure of preference for each alternative network. The networks yielding the highest utility function (or certainty equivalence) are then ranked as the optimal network paths. SIMRAND has been used in several economic potential studies at NASA's Jet Propulsion Laboratory involving solar dish power systems and photovoltaic array construction. However, any project having tasks which can be reduced to equations and related by measures of preference can be modeled. SIMRAND analysis consists of three phases: reduction, simulation, and evaluation. In the reduction phase, analytical techniques from probability theory and simulation techniques are used to reduce the complexity of the alternative networks. In the simulation phase, a Monte Carlo simulation is used to derive statistics on the variables of interest for each alternative network path. In the evaluation phase, the simulation statistics are compared and the networks are ranked in preference by a selected decision rule. The user must supply project subsystems in terms of equations based on variables (for example, parallel and series assembly line tasks in terms of number of items, cost factors, time limits, etc). The associated cumulative distribution functions and utility functions for each variable must also be provided (allowable upper and lower limits, group decision factors, etc). SIMRAND is written in Microsoft FORTRAN 77 for batch execution and has been implemented on an IBM PC series computer operating under DOS.
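
    The FORTRAN program is not reproduced here, but the simulation-and-evaluation loop it describes, sampling uncertain task variables for each alternative network, scoring the outcomes with a utility function, and ranking the alternatives by expected utility, can be sketched as follows; the two networks, their cost distributions, and the utility function are hypothetical stand-ins.

```python
"""Sketch of the SIMRAND-style simulate-and-rank loop: each alternative network
is a function of uncertain task variables; we Monte Carlo sample the variables,
apply a utility function, and rank the alternatives by expected utility.  The
networks, distributions and utility below are hypothetical stand-ins, not the
original FORTRAN code."""
import numpy as np

rng = np.random.default_rng(42)
N = 10_000  # Monte Carlo samples

def utility(cost):
    # a simple risk-averse utility: higher is better, diminishing in cost
    return -np.log(cost)

# two alternative network paths, each with a total cost built from random task variables
def network_a():
    return rng.normal(100, 20, N).clip(min=1) + rng.uniform(10, 30, N)

def network_b():
    return rng.normal(90, 40, N).clip(min=1) + rng.uniform(20, 40, N)

alternatives = {"A": network_a(), "B": network_b()}
ranking = sorted(alternatives, key=lambda k: utility(alternatives[k]).mean(), reverse=True)
for name in ranking:
    print(name, "expected utility:", round(utility(alternatives[name]).mean(), 3))
```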

  17. Newtonian Nudging For A Richards Equation-based Distributed Hydrological Model

    NASA Astrophysics Data System (ADS)

    Paniconi, C.; Marrocu, M.; Putti, M.; Verbunt, M.

    In this study a relatively simple data assimilation method has been implemented in a relatively complex hydrological model. The data assimilation technique is Newtonian relaxation or nudging, in which model variables are driven towards observations by a forcing term added to the model equations. The forcing term is proportional to the difference between simulation and observation (relaxation component) and contains four-dimensional weighting functions that can incorporate prior knowledge about the spatial and temporal variability and characteristic scales of the state variable(s) being assimilated. The numerical model couples a three-dimensional finite element Richards equation solver for variably saturated porous media and a finite difference diffusion wave approximation based on digital elevation data for surface water dynamics. We describe the implementation of the data assimilation algorithm for the coupled model and report on the numerical and hydrological performance of the resulting assimilation scheme. Nudging is shown to be successful in improving the hydrological simulation results, and it introduces little computational cost, in terms of CPU and other numerical aspects of the model's behavior, in some cases even improving numerical performance compared to model runs without nudging. We also examine the sensitivity of the model to nudging term parameters including the spatio-temporal influence coefficients in the weighting functions. Overall the nudging algorithm is quite flexible, for instance in dealing with concurrent observation datasets, gridded or scattered data, and different state variables, and the implementation presented here can be readily extended to any features not already incorporated. Moreover the nudging code and tests can serve as a basis for implementation of more sophisticated data assimilation techniques in a Richards equation-based hydrological model.
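
    The coupled Richards/diffusion-wave model is far too large to reproduce here, but the nudging term itself is simple: the model tendency is augmented by G * W(t) * (observation - state). A minimal scalar sketch, with hypothetical dynamics, observation times, and weighting parameters, is:

```python
"""Minimal sketch of Newtonian relaxation (nudging): the model state is driven
toward observations by a forcing term G * W(t) * (obs - state).  Scalar toy
dynamics only -- the actual application couples a 3D Richards equation solver
with a surface diffusion-wave model."""
import numpy as np

def model_rhs(x, t):
    # hypothetical stand-in for the hydrological model dynamics
    return -0.5 * x + np.sin(t)

def nudged_run(x0, t_grid, obs_times, obs_values, G=2.0, tau=0.5):
    x, out = x0, []
    for i, t in enumerate(t_grid[:-1]):
        dt = t_grid[i + 1] - t
        # temporal weight: each observation influences the model only near its time
        w = sum(np.exp(-((t - to) / tau) ** 2) * (vo - x)
                for to, vo in zip(obs_times, obs_values))
        x = x + dt * (model_rhs(x, t) + G * w)   # forward Euler step with nudging
        out.append(x)
    return np.array(out)

t = np.linspace(0.0, 10.0, 1001)
states = nudged_run(x0=0.0, t_grid=t, obs_times=[2.0, 5.0, 8.0], obs_values=[1.2, 0.3, 0.9])
print(states[-1])
```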

  18. Analysis of Power Laws, Shape Collapses, and Neural Complexity: New Techniques and MATLAB Support via the NCC Toolbox

    PubMed Central

    Marshall, Najja; Timme, Nicholas M.; Bennett, Nicholas; Ripp, Monica; Lautzenhiser, Edward; Beggs, John M.

    2016-01-01

    Neural systems include interactions that occur across many scales. Two divergent methods for characterizing such interactions have drawn on the physical analysis of critical phenomena and the mathematical study of information. Inferring criticality in neural systems has traditionally rested on fitting power laws to the property distributions of “neural avalanches” (contiguous bursts of activity), but the fractal nature of avalanche shapes has recently emerged as another signature of criticality. On the other hand, neural complexity, an information theoretic measure, has been used to capture the interplay between the functional localization of brain regions and their integration for higher cognitive functions. Unfortunately, treatments of all three methods—power-law fitting, avalanche shape collapse, and neural complexity—have suffered from shortcomings. Empirical data often contain biases that introduce deviations from true power law in the tail and head of the distribution, but deviations in the tail have often been unconsidered; avalanche shape collapse has required manual parameter tuning; and the estimation of neural complexity has relied on small data sets or statistical assumptions for the sake of computational efficiency. In this paper we present technical advancements in the analysis of criticality and complexity in neural systems. We use maximum-likelihood estimation to automatically fit power laws with left and right cutoffs, present the first automated shape collapse algorithm, and describe new techniques to account for large numbers of neural variables and small data sets in the calculation of neural complexity. In order to facilitate future research in criticality and complexity, we have made the software utilized in this analysis freely available online in the MATLAB NCC (Neural Complexity and Criticality) Toolbox. PMID:27445842

  19. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As the sample sizes available to train and validate empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, including random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, including partial least squares regression, support vector machines, artificial neural networks and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which achieve accuracies similar to those obtained with the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models whose predictions are accurate only for the data used and that are too complex to support inferences about the underlying process.

  20. DEVELOPMENTS IN GRworkbench

    NASA Astrophysics Data System (ADS)

    Moylan, Andrew; Scott, Susan M.; Searle, Anthony C.

    2006-02-01

    The software tool GRworkbench is an ongoing project in visual, numerical General Relativity at The Australian National University. Recently, GRworkbench has been significantly extended to facilitate numerical experimentation in analytically-defined space-times. The numerical differential geometric engine has been rewritten using functional programming techniques, enabling objects which are normally defined as functions in the formalism of differential geometry and General Relativity to be directly represented as function variables in the C++ code of GRworkbench. The new functional differential geometric engine allows for more accurate and efficient visualisation of objects in space-times and makes new, efficient computational techniques available. Motivated by the desire to investigate a recent scientific claim using GRworkbench, new tools for numerical experimentation have been implemented, allowing for the simulation of complex physical situations.

  1. Management of High-Grade Penile Curvature Associated With Hypospadias in Children

    PubMed Central

    Moscardi, Paulo R. M.; Gosalbez, Rafael; Castellan, Miguel Alfedo

    2017-01-01

    Penile curvature is a frequent feature associated with hypospadias, with great variability in severity among patients. While low-grade curvature (<30°) can be corrected relatively easily by simple techniques such as penile degloving and dorsal plication, severe cases often demand more complex maneuvers. A great number of surgical techniques have been developed to adequately correct curvatures greater than 30°; however, the choice among them should be individualized to the patient and the local conditions encountered. In this article, we review the evaluation of the pediatric patient with penile curvature associated with hypospadias, with special attention to high-grade cases, their management, indications for surgical treatment, and several surgical options for definitive treatment. PMID:28929092

  2. The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.

    2017-12-01

    The vast majority of data analyzed by climate researchers are repeated observations of physical processes, that is, time series data. These data lend themselves to a common set of statistical techniques and models designed to determine the trends and variability (e.g., seasonality) of these repeated observations. Often, these same techniques and models can be applied to a wide variety of different time series data. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling, with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U. S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g., interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
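
    The Timeseries Toolbox performs this pre-processing through its web interface; the equivalent aggregation and interpolation steps look roughly like the following pandas sketch on a synthetic, irregularly sampled hydrologic series (the variable names and sampling pattern are hypothetical).

```python
"""Sketch of the typical pre-processing steps named above (aggregation and
interpolation) for an irregular hydrologic time series, using pandas.  The
series is synthetic; the Timeseries Toolbox performs these steps through its
web interface rather than this code."""
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# irregular observation times with gaps (synthetic stage readings over ~90 days)
times = pd.to_datetime("2020-01-01") + pd.to_timedelta(
    np.sort(rng.choice(24 * 90, 150, replace=False)), unit="h")
stage = pd.Series(5 + np.sin(np.arange(150) / 10) + rng.normal(0, 0.1, 150), index=times)

daily = stage.resample("D").mean()          # aggregate to a regular daily step
daily = daily.interpolate(method="time")    # fill gaps by time-weighted interpolation
monthly_mean = daily.resample("MS").mean()  # seasonal summary for trend/variability checks
print(monthly_mean.head())
```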

  3. Event triggered state estimation techniques for power systems with integrated variable energy resources.

    PubMed

    Francy, Reshma C; Farid, Amro M; Youcef-Toumi, Kamal

    2015-05-01

    For many decades, state estimation (SE) has been a critical technology for energy management systems utilized by power system operators. Over time, it has become a mature technology that provides an accurate representation of system state under fairly stable and well understood system operation. The integration of variable energy resources (VERs) such as wind and solar generation, however, introduces new fast frequency dynamics and uncertainties into the system. Furthermore, such renewable energy is often integrated into the distribution system, thus requiring real-time monitoring all the way to the periphery of the power grid topology and not just the (central) transmission system. The conventional solution is twofold: solve the SE problem (1) at a faster rate in accordance with the newly added VER dynamics and (2) for the entire power grid topology including the transmission and distribution systems. Such an approach results in exponentially growing problem sets which need to be solved at faster rates. This work seeks to address these two simultaneous requirements and builds upon two recent SE methods which incorporate event-triggering such that the state estimator is only called in the case of considerable novelty in the evolution of the system state. The first method incorporates only event-triggering while the second adds the concept of tracking. Both SE methods are demonstrated on the standard IEEE 14-bus system and the results are observed for a specific bus for two different scenarios: (1) a spike in the wind power injection and (2) ramp events with higher variability. Relative to traditional state estimation, the numerical case studies showed that the proposed methods can result in computational time reductions of 90%. These results were supported by a theoretical discussion of the computational complexity of three SE techniques. The work concludes that the proposed SE techniques demonstrate practical improvements to the computational complexity of classical state estimation. In such a way, state estimation can continue to support the necessary control actions to mitigate the imbalances resulting from the uncertainties in renewables. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
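
    The IEEE 14-bus experiments are not reproduced here, but the event-triggering idea, invoking the (expensive) estimator only when the new measurement departs sufficiently from the current estimate, can be sketched on a scalar toy problem; the noise levels and the triggering threshold are hypothetical.

```python
"""Sketch of event-triggered state estimation: the estimator is invoked only
when the measurement innovation exceeds a threshold; otherwise the previous
estimate is carried forward.  Scalar toy system with hypothetical noise levels
and threshold -- not the IEEE 14-bus implementation."""
import numpy as np

rng = np.random.default_rng(3)
n_steps, threshold = 200, 0.5
# slowly drifting true state with a step change (a "spike" event) at step 120
true_state = np.cumsum(rng.normal(0, 0.05, n_steps)) \
             + np.where(np.arange(n_steps) == 120, 2.0, 0.0).cumsum()
measurements = true_state + rng.normal(0, 0.1, n_steps)

estimate, n_calls, estimates = measurements[0], 0, []
for z in measurements:
    innovation = abs(z - estimate)
    if innovation > threshold:   # "considerable novelty": call the estimator
        estimate = z             # stand-in for a full weighted-least-squares SE solve
        n_calls += 1
    estimates.append(estimate)

print(f"estimator called {n_calls} times out of {n_steps} measurements")
```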

  4. GRID3D-v2: An updated version of the GRID2D/3D computer program for generating grid systems in complex-shaped three-dimensional spatial domains

    NASA Technical Reports Server (NTRS)

    Steinthorsson, E.; Shih, T. I-P.; Roelke, R. J.

    1991-01-01

    In order to generate good quality grid systems for complicated three-dimensional spatial domains, the grid-generation method used must be able to exert rather precise controls over grid-point distributions. Several techniques are presented that enhance control of grid-point distribution for a class of algebraic grid-generation methods known as the two-, four-, and six-boundary methods. These techniques include variable stretching functions from bilinear interpolation, interpolating functions based on tension splines, and normalized K-factors. The techniques developed in this study were incorporated into a new version of GRID3D called GRID3D-v2. The usefulness of GRID3D-v2 was demonstrated by using it to generate a three-dimensional grid system in the coolant passage of a radial turbine blade with serpentine channels and pin fins.

  5. Thermal radiation characteristics of nonisothermal cylindrical enclosures using a numerical ray tracing technique

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1990-01-01

    Analysis of energy emitted from simple or complex cavity designs can lead to intricate solutions due to nonuniform radiosity and irradiation within a cavity. A numerical ray tracing technique was applied to simulate radiation propagating within and from various cavity designs. To obtain the energy balance relationships between isothermal and nonisothermal cavity surfaces and space, the computer code NEVADA was utilized for its statistical technique applied to numerical ray tracing. The analysis method was validated by comparing results with known theoretical and limiting solutions, and the electrical resistance network method. In general, for nonisothermal cavities the performance (apparent emissivity) is a function of cylinder length-to-diameter ratio, surface emissivity, and cylinder surface temperatures. The extent of nonisothermal conditions in a cylindrical cavity significantly affects the overall cavity performance. Results are presented over a wide range of parametric variables for use as a possible design reference.

  6. Developing a cardiopulmonary exercise testing laboratory.

    PubMed

    Diamond, Edward

    2007-12-01

    Cardiopulmonary exercise testing is a noninvasive and cost-effective technique that adds significant value to the assessment and management of a variety of symptoms and diseases. The penetration of this testing in medical practice may be limited by perceived operational and financial barriers. This article reviews coding and supervision requirements related to both simple and complex pulmonary stress testing. A program evaluation and review technique diagram is used to describe the work flow process. Data from our laboratory are used to generate an income statement that separates fixed and variable costs and calculates the contribution margin. A cost-volume-profit (break-even) analysis is then performed. Using data from our laboratory including fixed and variable costs, payer mix, reimbursements by payer, and the assumption that the studies are divided evenly between simple and complex pulmonary stress tests, the break-even number is calculated to be 300 tests per year. A calculator with embedded formulas has been designed by the author and is available on request. Developing a cardiopulmonary exercise laboratory is challenging but achievable and potentially profitable. It should be considered by a practice that seeks to distinguish itself as a quality leader. Providing this clinically valuable service may yield indirect benefits such as increased patient volume and increased utilization of other services provided by the practice. The decision for a medical practice to commit resources to managerial accounting support requires a cost-benefit analysis, but may be a worthwhile investment in our challenging economic environment.
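
    The author's calculator is available only on request, but the underlying cost-volume-profit arithmetic is standard: the break-even volume equals fixed costs divided by the contribution margin per test. A minimal sketch with hypothetical dollar figures (not the article's data) follows.

```python
"""Sketch of the cost-volume-profit (break-even) arithmetic described above.
All dollar figures are hypothetical placeholders, not the article's data."""
import math

fixed_costs_per_year = 75_000.0    # equipment lease, space, fixed staffing
reimbursement_per_test = 360.0     # average across the payer mix
variable_cost_per_test = 160.0     # disposables, variable labor

contribution_margin = reimbursement_per_test - variable_cost_per_test
break_even_tests = math.ceil(fixed_costs_per_year / contribution_margin)
print(f"contribution margin per test: ${contribution_margin:.2f}")
print(f"break-even volume: {break_even_tests} tests per year")
```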

  7. Exploring the Connection Between Sampling Problems in Bayesian Inference and Statistical Mechanics

    NASA Technical Reports Server (NTRS)

    Pohorille, Andrew

    2006-01-01

    The Bayesian and statistical mechanical communities often share the same objective in their work: estimating and integrating probability distribution functions (pdfs) describing stochastic systems, models or processes. Frequently, these pdfs are complex functions of random variables exhibiting multiple, well separated local minima. Conventional strategies for sampling such pdfs are inefficient, sometimes leading to an apparent non-ergodic behavior. Several recently developed techniques for handling this problem have been successfully applied in statistical mechanics. In the multicanonical and Wang-Landau Monte Carlo (MC) methods, the correct pdfs are recovered from uniform sampling of the parameter space by iteratively establishing proper weighting factors connecting these distributions. Trivial generalizations allow for sampling from any chosen pdf. The closely related transition matrix method relies on estimating transition probabilities between different states. All these methods proved to generate estimates of pdfs with high statistical accuracy. In another MC technique, parallel tempering, several random walks, each corresponding to a different value of a parameter (e.g. "temperature"), are generated and occasionally exchanged using the Metropolis criterion. This method can be considered as a statistically correct version of simulated annealing. An alternative approach is to represent the set of independent variables as a Hamiltonian system. Considerable progress has been made in understanding how to ensure that the system obeys the equipartition theorem or, equivalently, that coupling between the variables is correctly described. Then a host of techniques developed for dynamical systems can be used. Among them, probably the most powerful is the Adaptive Biasing Force method, in which thermodynamic integration and biased sampling are combined to yield very efficient estimates of pdfs. The third class of methods deals with transitions between states described by rate constants. These problems are isomorphic with chemical kinetics problems. Recently, several efficient techniques for this purpose have been developed based on the approach originally proposed by Gillespie. Although the utility of the techniques mentioned above for Bayesian problems has not been determined, further research along these lines is warranted.
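
    Of the techniques listed, parallel tempering is the simplest to show compactly. The sketch below runs Metropolis walkers at several temperatures on a toy bimodal distribution and occasionally swaps adjacent walkers; the target, temperature ladder, and step sizes are our own illustrative choices.

```python
"""Minimal parallel tempering sketch: several Metropolis walkers run at
different temperatures and occasionally swap states, letting the cold chain
escape well-separated modes.  Toy bimodal target and temperatures chosen for
illustration only."""
import numpy as np

rng = np.random.default_rng(7)

def log_target(x):
    # bimodal: two well-separated Gaussian modes at -4 and +4
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

temps = np.array([1.0, 3.0, 10.0])   # temperature ladder (cold ... hot)
x = np.zeros(len(temps))             # one walker per temperature
samples = []

for step in range(20000):
    # Metropolis update within each temperature
    for i, T in enumerate(temps):
        prop = x[i] + rng.normal(0, 1.0)
        if np.log(rng.random()) < (log_target(prop) - log_target(x[i])) / T:
            x[i] = prop
    # propose a swap between a random adjacent pair of temperatures
    i = rng.integers(len(temps) - 1)
    delta = (1.0 / temps[i] - 1.0 / temps[i + 1]) * (log_target(x[i + 1]) - log_target(x[i]))
    if np.log(rng.random()) < delta:
        x[i], x[i + 1] = x[i + 1], x[i]
    samples.append(x[0])             # keep the cold chain only

samples = np.array(samples[5000:])
print("fraction of cold-chain samples in each mode:",
      np.mean(samples > 0), np.mean(samples < 0))
```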

  8. Scalability of surrogate-assisted multi-objective optimization of antenna structures exploiting variable-fidelity electromagnetic simulation models

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2016-10-01

    Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.

  9. Cosmic Ray Neutron Sensing in Complex Systems

    NASA Astrophysics Data System (ADS)

    Piussi, L. M.; Tomelleri, E.; Tonon, G.; Bertoldi, G.; Mejia Aguilar, A.; Monsorno, R.; Zebisch, M.

    2017-12-01

    Soil moisture is a key variable in environmental monitoring and modelling: being located at the soil-atmosphere boundary, it is a driving force for water, energy and carbon fluxes. Despite its importance, soil moisture observations lack long time series at high acquisition frequency and meso-scale spatial resolution: traditional measurements deliver either long time series with high measurement frequency at the point scale or large-scale, low-frequency acquisitions. The Cosmic Ray Neutron Sensing (CRNS) technique fills this gap because it supplies information from a footprint of 240 m in diameter and 15 to 83 cm in depth at a temporal resolution varying between 15 minutes and 24 hours. In addition, being a passive sensing technique, it is non-invasive. For these reasons, CRNS is gaining more and more attention from the scientific community. Nevertheless, the application of this technique in complex systems is still an open issue: where different hydrogen pools are present and where their distributions vary appreciably with space and time, the traditional calibration method shows its limits. In order to obtain a better understanding of the data and to compare them with remote sensing products and spatially distributed traditional measurements (i.e. Wireless Sensors Network), the complexity of the surrounding environment has to be taken into account. In the current work we assessed the effects of spatio-temporal variability of soil moisture within the footprint in a steep, heterogeneous mountain grassland area. Measurements were performed with a Cosmic Ray Neutron Probe (CRNP) and a mobile Wireless Sensors Network. We performed an in-depth sensitivity analysis of the effects of varying distributions of soil moisture on the calibration of the CRNP, and our preliminary results show how the footprint shape varies depending on these dynamics. The results are then compared with remote sensing data (Sentinel 1 and 2). The current work is an assessment of different calibration procedures and their effect on the measurement outcome. We found that the response of the CRNP follows quite well the point measurements from a TDR installed at the site, but discrepancies could be explained by using the Wireless Sensors Network to perform a spatially weighted calibration and to introduce temporal dynamics.

  10. Prediction of ozone concentration in tropospheric levels using artificial neural networks and support vector machine at Rio de Janeiro, Brazil

    NASA Astrophysics Data System (ADS)

    Luna, A. S.; Paredes, M. L. L.; de Oliveira, G. C. G.; Corrêa, S. M.

    2014-12-01

    It is well known that air quality is a complex function of emissions, meteorology and topography, and statistical tools provide a sound framework for relating these variables. The observed data were concentrations of nitrogen dioxide (NO2), nitrogen monoxide (NO), nitrogen oxides (NOx), carbon monoxide (CO) and ozone (O3), together with scalar wind speed (SWS), global solar radiation (GSR), temperature (TEM) and air moisture content (HUM), collected by a mobile automatic monitoring station at two sites in the metropolitan area of Rio de Janeiro during 2011 and 2012. The aims of this study were: (1) to analyze the behavior of the variables, using the method of PCA for exploratory data analysis; (2) to forecast O3 levels from primary pollutants and meteorological factors, using nonlinear regression methods such as ANN and SVM. The PCA technique showed that, for the first dataset, the variables NO, NOx and SWS have the greatest impact on the concentration of O3, whereas for the other dataset TEM and GSR were the most influential variables. The results obtained with the nonlinear regression techniques ANN and SVM were remarkably close and acceptable for one dataset, with validation coefficients of determination of 0.9122 and 0.9152 and root mean square errors of 7.66 and 7.85, respectively. For these datasets, PCA, SVM and ANN demonstrated their robustness as useful tools for evaluating and forecasting air quality scenarios.
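
    The Rio de Janeiro dataset is not available here, but the PCA-plus-nonlinear-regression workflow described above can be sketched with scikit-learn on synthetic data; the predictor set, split, and hyperparameters are placeholders.

```python
"""Sketch of the PCA exploration + nonlinear regression (SVR) workflow for O3
forecasting described above, on synthetic data.  Variable values, split and
hyperparameters are placeholders, not the Rio de Janeiro dataset."""
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 8))   # NO, NO2, NOx, CO, SWS, GSR, TEM, HUM (synthetic)
y = 40 + 8 * X[:, 5] + 6 * X[:, 6] - 5 * X[:, 0] + rng.normal(0, 4, n)  # synthetic O3

# exploratory PCA on the standardized predictors
pca = make_pipeline(StandardScaler(), PCA(n_components=3))
pca.fit(X)
print("explained variance ratio:", pca.named_steps["pca"].explained_variance_ratio_)

# nonlinear regression with an RBF support vector machine
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.5))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("R2:", round(r2_score(y_te, pred), 3),
      "RMSE:", round(mean_squared_error(y_te, pred) ** 0.5, 2))
```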

  11. Active Learning to Understand Infectious Disease Models and Improve Policy Making

    PubMed Central

    Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel

    2014-01-01

    Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings. PMID:24743387

  12. Active learning to understand infectious disease models and improve policy making.

    PubMed

    Willem, Lander; Stijven, Sean; Vladislavleva, Ekaterina; Broeckhove, Jan; Beutels, Philippe; Hens, Niel

    2014-04-01

    Modeling plays a major role in policy making, especially for infectious disease interventions but such models can be complex and computationally intensive. A more systematic exploration is needed to gain a thorough systems understanding. We present an active learning approach based on machine learning techniques as iterative surrogate modeling and model-guided experimentation to systematically analyze both common and edge manifestations of complex model runs. Symbolic regression is used for nonlinear response surface modeling with automatic feature selection. First, we illustrate our approach using an individual-based model for influenza vaccination. After optimizing the parameter space, we observe an inverse relationship between vaccination coverage and cumulative attack rate reinforced by herd immunity. Second, we demonstrate the use of surrogate modeling techniques on input-response data from a deterministic dynamic model, which was designed to explore the cost-effectiveness of varicella-zoster virus vaccination. We use symbolic regression to handle high dimensionality and correlated inputs and to identify the most influential variables. Provided insight is used to focus research, reduce dimensionality and decrease decision uncertainty. We conclude that active learning is needed to fully understand complex systems behavior. Surrogate models can be readily explored at no computational expense, and can also be used as emulator to improve rapid policy making in various settings.

  13. Study of T-wave morphology parameters based on Principal Components Analysis during acute myocardial ischemia

    NASA Astrophysics Data System (ADS)

    Baglivo, Fabricio Hugo; Arini, Pedro David

    2011-12-01

    Electrocardiographic repolarization abnormalities can be detected by Principal Components Analysis of the T-wave. In this work we studied the effect of signal averaging on the mean value and reproducibility of the ratio of the 2nd to the 1st eigenvalue of the T-wave (T21W) and the relative and absolute T-wave residuum (TrelWR and TabsWR) in the ECG during ischemia induced by Percutaneous Coronary Intervention. Also, the intra-subject and inter-subject variability of T-wave parameters have been analyzed. Results showed that TrelWR and TabsWR evaluated from the average of 10 complexes had lower values and higher reproducibility than those obtained from 1 complex. On the other hand, T21W calculated from 10 complexes did not show statistical differences versus T21W calculated on single beats. The results of this study corroborate that, with a signal averaging technique, the 1st and 2nd eigenvalues are not affected by noise while the 4th to 8th eigenvalues are strongly affected by it, supporting the use of signal averaging before calculation of the absolute and relative T-wave residuum. Finally, we have shown that T-wave morphology parameters present high intra-subject stability.
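
    The computation behind T21W can be illustrated schematically: form the inter-lead covariance of one (possibly signal-averaged) T wave and take the ratio of its second to first eigenvalue. The sketch below uses synthetic multi-lead beats, not the PCI recordings analyzed in the study.

```python
"""Sketch of the T-wave principal-component ratio (T21W = lambda_2 / lambda_1),
computed with and without 10-beat signal averaging.  The multi-lead beats
below are synthetic, not the PCI recordings analyzed in the paper."""
import numpy as np

rng = np.random.default_rng(0)
n_leads, n_samples, n_beats = 8, 120, 10

t = np.linspace(0, np.pi, n_samples)
template = np.sin(t) ** 2                              # idealized T-wave shape
lead_gains = rng.uniform(0.5, 1.5, n_leads)[:, None]   # projection onto each lead
beats = np.array([lead_gains * template + rng.normal(0, 0.15, (n_leads, n_samples))
                  for _ in range(n_beats)])            # beats x leads x samples

def t21w(t_wave_matrix):
    # eigenvalues of the inter-lead covariance of one (possibly averaged) T wave
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(t_wave_matrix)))[::-1]
    return eigvals[1] / eigvals[0]

single_beat = t21w(beats[0])          # from 1 complex
averaged = t21w(beats.mean(axis=0))   # from the average of 10 complexes
print(f"T21W single beat: {single_beat:.3f}   T21W 10-beat average: {averaged:.3f}")
```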

  14. PREDICTING TWO-DIMENSIONAL STEADY-STATE SOIL FREEZING FRONTS USING THE CVBEM.

    USGS Publications Warehouse

    Hromadka, T.V.

    1986-01-01

    The complex variable boundary element method (CVBEM) is used instead of a real variable boundary element method because of the modeling-error evaluation techniques that have been developed for it. The modeling accuracy is evaluated by the model-user in the determination of an approximate boundary upon which the CVBEM provides an exact solution. Although inhomogeneity (and anisotropy) can be included in the CVBEM model, the resulting fully populated matrix system quickly becomes large. Therefore, in this paper, the domain is assumed homogeneous and isotropic except for differences in frozen and thawed conduction parameters on either side of the freezing front. The example problems presented were obtained by use of a popular 64K microcomputer (the current version of the program used in this study has the capacity to accommodate 30 nodal points).

  15. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    PubMed

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.

  16. Contemporary Tools and Techniques for Substrate Ablation of Ventricular Tachycardia in Structural Heart Disease.

    PubMed

    Hutchinson, Mathew D; Garza, Hyon-He K

    2018-02-24

    As we have witnessed in other arenas of catheter-based therapeutics, ventricular tachycardia (VT) ablation has become increasingly anatomical in its execution. Multi-modality imaging provides anatomical detail in substrate characterization, which is often complex in nonischemic cardiomyopathy patients. Patients with intramural, intraseptal, and epicardial substrates present challenges in delivering effective ablation to the critical arrhythmia substrate due to the depth of origin or the presence of adjacent critical structures. Novel ablation techniques such as simultaneous unipolar or bipolar ablation can be useful to achieve greater lesion depth, though at the expense of increased collateral damage. Disruptive technologies like stereotactic radioablation may provide a tailored approach to these complex patients while minimizing procedural risk. Substrate ablation is a cornerstone of the contemporary VT ablation procedure, and recent data suggest that it is as effective as, and more efficient than, conventional activation-guided ablation. A number of specific targets and techniques for substrate ablation have been described, and all have shown a fairly high success rate in achieving their acute procedural endpoint. Substrate ablation also provides a novel and reproducible procedural endpoint, which may add predictive value for VT recurrence beyond conventional programmed stimulation. Extrapolation of outcome data to nonischemic phenotypes requires caution given both the variability in nonischemic substrate distribution and the underrepresentation of these patients in previous trials.

  17. How effective are geometric morphometric techniques for assessing functional shape variation? An example from the great ape temporomandibular joint.

    PubMed

    Terhune, Claire E

    2013-08-01

    Functional shape analyses have long relied on the use of shape ratios to test biomechanical hypotheses. This method is powerful because of the ease with which results are interpreted, but these techniques fall short in quantifying complex morphologies that may not have a strong biomechanical foundation but may still be functionally informative. In contrast, geometric morphometric methods are continually being adopted for quantifying complex shapes, but they tend to prove inadequate in functional analyses because they have little foundation in an explicit biomechanical framework. The goal of this study was to evaluate the intersection of these two methods using the great ape temporomandibular joint as a case study. Three-dimensional coordinates of glenoid fossa and mandibular condyle shape were collected using a Microscribe digitizer. Linear distances extracted from these landmarks were analyzed using a series of one-way ANOVAs; further, the landmark configurations were analyzed using geometric morphometric techniques. Results suggest that the two methods are broadly similar, although the geometric morphometric data allow for the identification of shape differences among taxa that were not immediately apparent in the univariate analyses. Furthermore, this study suggests several new approaches for translating these shape data into a biomechanical context by adjusting the data using a biomechanically relevant variable. Copyright © 2013 Wiley Periodicals, Inc.
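
    The geometric morphometric side of such a comparison typically begins with Procrustes superimposition of landmark configurations, which removes position, scale, and rotation before shape differences are assessed. A minimal sketch with synthetic 3D landmarks (not the ape TMJ data) is:

```python
"""Sketch of the geometric morphometric starting point: Procrustes
superimposition of two landmark configurations, yielding a shape-difference
measure after removing position, scale and rotation.  The landmarks below are
synthetic 3D points, not the great ape temporomandibular joint data."""
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(5)
landmarks_a = rng.normal(size=(12, 3))   # 12 landmarks in 3D
# second configuration: rotated, scaled, translated copy plus a little shape "signal"
theta = np.deg2rad(30)
rot = np.array([[np.cos(theta), -np.sin(theta), 0],
                [np.sin(theta),  np.cos(theta), 0],
                [0, 0, 1]])
landmarks_b = 1.8 * landmarks_a @ rot.T + np.array([5.0, -2.0, 1.0]) \
              + rng.normal(0, 0.05, (12, 3))

mtx1, mtx2, disparity = procrustes(landmarks_a, landmarks_b)
print("Procrustes disparity (residual shape difference):", round(disparity, 4))
```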

  18. The "Chaos Theory" and nonlinear dynamics in heart rate variability analysis: does it work in short-time series in patients with coronary heart disease?

    PubMed

    Krstacic, Goran; Krstacic, Antonija; Smalcelj, Anton; Milicic, Davor; Jembrek-Gostovic, Mirjana

    2007-04-01

    Dynamic analysis techniques may quantify abnormalities in heart rate variability (HRV) based on nonlinear and fractal analysis (chaos theory). The article emphasizes the clinical and prognostic significance of dynamic changes in short time series from patients with coronary heart disease (CHD) during the exercise electrocardiographic (ECG) test. The subjects were included after complete cardiovascular diagnostic data had been obtained. Series of R-R and ST-T intervals were obtained from exercise ECG data after digital sampling. The rescaled range analysis method was used to determine the fractal dimension of the intervals. To quantify the fractal long-range correlation properties of heart rate variability, the detrended fluctuation analysis technique was used. Approximate entropy (ApEn) was applied to quantify the regularity and complexity of the time series, as well as the unpredictability of fluctuations in the time series. It was found that the short-term fractal scaling exponent (alpha(1)) is significantly lower in patients with CHD (0.93 +/- 0.07 vs 1.09 +/- 0.04; P < 0.001). The patients with CHD had a higher fractal dimension in each exercise test program separately, as well as in the exercise program as a whole. ApEn was significantly lower in the CHD group for both R-R and ST-T ECG intervals (P < 0.001). The nonlinear dynamic methods could have clinical and prognostic applicability also in short ECG time series. Dynamic analysis based on chaos theory during the exercise ECG test reveals multifractal time series in CHD patients, who lose normal fractal characteristics and regularity in HRV. Nonlinear analysis techniques may complement traditional ECG analysis.
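
    Approximate entropy, one of the regularity measures named above, can be sketched directly. The implementation below follows common defaults (embedding dimension m = 2, tolerance r = 0.2 times the series standard deviation) and is run on synthetic regular and irregular series, not the exercise ECG data.

```python
"""Sketch of approximate entropy (ApEn), one of the regularity measures named
above, for a short interval series.  Defaults m = 2 and r = 0.2 * SD are used;
the series are synthetic."""
import numpy as np

def approximate_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)

    def phi(m):
        # all length-m templates and the fraction within tolerance r of each other
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        counts = np.mean(dist <= r, axis=1)   # includes self-matches, as in classic ApEn
        return np.mean(np.log(counts))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(0)
regular = np.sin(np.arange(300) / 5.0)   # highly regular series
irregular = rng.normal(size=300)         # irregular series
print("ApEn regular:", round(approximate_entropy(regular), 3),
      " ApEn irregular:", round(approximate_entropy(irregular), 3))
```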

  19. Free energy of conformational transition paths in biomolecules: The string method and its application to myosin VI

    PubMed Central

    Ovchinnikov, Victor; Karplus, Martin; Vanden-Eijnden, Eric

    2011-01-01

    A set of techniques developed under the umbrella of the string method is used in combination with all-atom molecular dynamics simulations to analyze the conformation change between the prepowerstroke (PPS) and rigor (R) structures of the converter domain of myosin VI. The challenges specific to the application of these techniques to such a large and complex biomolecule are addressed in detail. These challenges include (i) identifying a proper set of collective variables to apply the string method, (ii) finding a suitable initial string, (iii) obtaining converged profiles of the free energy along the transition path, (iv) validating and interpreting the free energy profiles, and (v) computing the mean first passage time of the transition. A detailed description of the PPS↔R transition in the converter domain of myosin VI is obtained, including the transition path, the free energy along the path, and the rates of interconversion. The methodology developed here is expected to be useful more generally in studies of conformational transitions in complex biomolecules. PMID:21361558

  20. Decreasing the temporal complexity for nonlinear, implicit reduced-order models by forecasting

    DOE PAGES

    Carlberg, Kevin; Ray, Jaideep; van Bloemen Waanders, Bart

    2015-02-14

    Implicit numerical integration of nonlinear ODEs requires solving a system of nonlinear algebraic equations at each time step. Each of these systems is often solved by a Newton-like method, which incurs a sequence of linear-system solves. Most model-reduction techniques for nonlinear ODEs exploit knowledge of the system's spatial behavior to reduce the computational complexity of each linear-system solve. However, the number of linear-system solves for the reduced-order simulation often remains roughly the same as that for the full-order simulation. We propose exploiting knowledge of the model's temporal behavior to (1) forecast the unknown variable of the reduced-order system of nonlinear equations at future time steps, and (2) use this forecast as an initial guess for the Newton-like solver during the reduced-order-model simulation. To compute the forecast, we propose using the Gappy POD technique. The goal is to generate an accurate initial guess so that the Newton solver requires many fewer iterations to converge, thereby decreasing the number of linear-system solves in the reduced-order-model simulation.
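
    The Gappy POD forecast itself is involved, but the basic idea, seeding each implicit solve with an extrapolation of the recent solution history instead of the previous step's value, can be sketched on a scalar backward-Euler example; polynomial extrapolation stands in for the forecast, and the test problem is hypothetical.

```python
"""Sketch of the forecasting idea: seed Newton's method for each implicit
(backward Euler) step with an extrapolation of the recent solution history,
reducing the iterations needed.  Scalar test ODE and quadratic extrapolation
used as a stand-in for the Gappy-POD forecast of the paper."""
import numpy as np

def f(y):                 # stiff-ish scalar test problem dy/dt = -50*(y - cos(y))
    return -50.0 * (y - np.cos(y))

def dfdy(y):
    return -50.0 * (1.0 + np.sin(y))

def newton_backward_euler(y_prev, y_guess, dt, tol=1e-12, max_iter=50):
    y = y_guess
    for it in range(1, max_iter + 1):
        res = y - y_prev - dt * f(y)          # backward Euler residual
        if abs(res) < tol:
            return y, it
        y -= res / (1.0 - dt * dfdy(y))       # Newton update
    return y, max_iter

dt, n_steps = 0.02, 200
for use_forecast in (False, True):
    history, total_iters = [1.0], 0
    for k in range(n_steps):
        if use_forecast and len(history) >= 3:
            a, b, c = history[-3:]
            guess = c + (c - b) + ((c - b) - (b - a))   # quadratic extrapolation
        else:
            guess = history[-1]                          # conventional initial guess
        y_new, iters = newton_backward_euler(history[-1], guess, dt)
        history.append(y_new)
        total_iters += iters
    print("forecast" if use_forecast else "previous-step",
          "initial guess: total Newton iterations =", total_iters)
```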

  1. Rhodamine-123: a p-glycoprotein marker complex with sodium lauryl sulfate.

    PubMed

    Al-Mohizea, Abdullah M; Al-Jenoobi, Fahad Ibrahim; Alam, Mohd Aftab

    2015-03-01

    The aim of this study was to investigate the role of sodium lauryl sulfate (SLS) as a P-glycoprotein inhibitor. The everted rat gut sac model was used to study in-vitro mucosal-to-serosal transport of Rhodamine-123 (Rho-123). Surprisingly, SLS decreased the serosal absorption of Rho-123 at all investigated concentrations. Investigation revealed complex formation between Rhodamine-123 and sodium lauryl sulfate. The interaction profile of SLS and Rho-123 was studied at variable SLS concentrations. SLS concentrations higher than the critical micelle concentration (CMC) increased the solubility of Rho-123 but did not aid serosal absorption; on the contrary, the absorption of Rho-123 decreased. Rho-123 and SLS formed a pink-colored complex at sub-CMC concentrations. SLS concentrations below the CMC decreased the solubility of Rho-123. For further studies, the Rho-123 and SLS complex was prepared using the solvent evaporation technique and characterized by differential scanning calorimetry (DSC). Thermal analysis also confirmed the formation of a complex between SLS and Rho-123. The P values were found to be significant (<0.05) except for the group comprising 0.0001% SLS, presumably because 0.0001% SLS is too low a concentration to affect the solubility or complexation of Rho-123.

  2. A comprehensive methodology for the multidimensional and synchronic data collecting in soundscape.

    PubMed

    Kogan, Pablo; Turra, Bruno; Arenas, Jorge P; Hinalaf, María

    2017-02-15

    The soundscape paradigm is comprised of complex living systems where individuals interact moment-by-moment among one another and with the physical environment. The real environments provide promising conditions to reveal deep soundscape behavior, including the multiple components involved and their interrelations as a whole. However, measuring and analyzing the numerous simultaneous variables of soundscape represents a challenge that is not completely understood. This work proposes and applies a comprehensive methodology for multidimensional and synchronic data collection in soundscape. The soundscape variables were organized into three main entities: experienced environment, acoustic environment, and extra-acoustic environment, containing, in turn, subgroups of variables called components. The variables contained in these components were acquired through synchronic field techniques that include surveys, acoustic measurements, audio recordings, photography, and video. The proposed methodology was tested, optimized, and applied in diverse open environments, including squares, parks, fountains, university campuses, streets, and pedestrian areas. The systematization of this comprehensive methodology provides a framework for soundscape research, a support for urban and environment management, and a preliminary procedure for standardization in soundscape data collecting. Copyright © 2016 Elsevier B.V. All rights reserved.

  3. Assessing sustainability in real urban systems: the Greater Cincinnati Metropolitan Area in Ohio, Kentucky, and Indiana.

    PubMed

    Gonzalez-Mejía, Alejandra M; Eason, Tarsha N; Cabezas, Heriberto; Suidan, Makram T

    2012-09-04

    Urban systems have a number of factors (i.e., economic, social, and environmental) that can potentially impact growth, change, and transition. As such, assessing and managing these systems is a complex challenge. While tracking trends of key variables may provide some insight, identifying the critical characteristics that truly impact the dynamic behavior of these systems is difficult. As an integrated approach to evaluating real urban systems, this work contributes to the research on scientific techniques for assessing sustainability. Specifically, it proposes a practical methodology, based on the estimation of dynamic order, for identifying stable and unstable periods of sustainable or unsustainable trends with the Fisher Information (FI) metric. As a test case, the dynamic behavior of the City, Suburbs, and Metropolitan Statistical Area (MSA) of Cincinnati was evaluated by using 29 social and 11 economic variables to characterize each system from 1970 to 2009. Air quality variables were also selected to describe the MSA's environmental component (1980-2009). Results indicate that system dynamics started to change from about 1995 for the social variables and about 2000 for the economic and environmental characteristics.

  4. Molecular Diagnostic Testing for Aspergillus

    PubMed Central

    Powers-Fletcher, Margaret V.

    2016-01-01

    The direct detection of Aspergillus nucleic acid in clinical specimens has the potential to improve the diagnosis of aspergillosis by offering more rapid and sensitive identification of invasive infections than is possible with traditional techniques, such as culture or histopathology. Molecular tests for Aspergillus have been limited historically by lack of standardization and variable sensitivities and specificities. Recent efforts have been directed at addressing these limitations and optimizing assay performance using a variety of specimen types. This review provides a summary of standardization efforts and outlines the complexities of molecular testing for Aspergillus in clinical mycology. PMID:27487954

  5. Dynamic Programming for Structured Continuous Markov Decision Problems

    NASA Technical Reports Server (NTRS)

    Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu

    2004-01-01

    We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.

  6. Robust Control Analysis of Hydraulic Turbine Speed

    NASA Astrophysics Data System (ADS)

    Jekan, P.; Subramani, C.

    2018-04-01

    Designing an effective control strategy for the hydro-turbine governor in a real-time scenario is the objective of this paper. Considering the complex dynamic characteristics and the uncertainty of the hydro-turbine governor model, and taking the static and dynamic performance of the governing system as the ultimate goal, the designed logic combines classical PID control theory with artificial intelligence to obtain the desired output. The controller used is a variable-parameter control technique; therefore, its parameters can be adaptively adjusted according to information about the control error signal.
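
    Since the abstract only outlines the control logic, the sketch below shows one hedged reading of it: a PID speed loop whose gains are adapted from the error signal, driving a crude first-order surrogate of the turbine speed dynamics. All gains, the adaptation rule and the plant model are illustrative assumptions, not the paper's design.

        def simulate(adaptive=True, setpoint=1.0, dt=0.01, steps=2000):
            kp, ki, kd = 2.0, 1.0, 0.05                  # baseline PID gains (assumed)
            speed, integral, prev_error = 0.0, 0.0, setpoint
            for _ in range(steps):
                error = setpoint - speed
                if adaptive:
                    # Simple error-driven scheduling: stronger proportional action for
                    # large errors, weaker integral action close to the setpoint.
                    kp_eff = kp * (1.0 + abs(error))
                    ki_eff = ki if abs(error) > 0.05 else 0.5 * ki
                else:
                    kp_eff, ki_eff = kp, ki
                integral += error * dt
                derivative = (error - prev_error) / dt
                u = kp_eff * error + ki_eff * integral + kd * derivative
                speed += dt * (-speed + u)               # first-order surrogate plant
                prev_error = error
            return speed

        print("final speed, fixed gains:   ", round(simulate(adaptive=False), 3))
        print("final speed, adaptive gains:", round(simulate(adaptive=True), 3))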

  7. Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models

    NASA Astrophysics Data System (ADS)

    Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.

    2007-01-01

    Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention has been, and generally is, paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project, which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem, while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and analysis of error distributions.
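
    A minimal, hand-rolled self-organising map in the spirit of the clustering step described above (a single-stage SOM on synthetic per-station error vectors, not the 2-stage SOM/MDS workflow of the paper); the grid size, learning-rate schedule and data are illustrative.

        import numpy as np

        def train_som(data, grid=(4, 4), n_iter=3000, lr0=0.5, sigma0=1.5, seed=0):
            """Train a small self-organising map and return its weight grid."""
            rng = np.random.default_rng(seed)
            gx, gy = grid
            weights = rng.normal(size=(gx, gy, data.shape[1]))
            coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy), indexing="ij"), axis=-1)
            for t in range(n_iter):
                frac = t / n_iter
                lr = lr0 * (1.0 - frac)                      # decaying learning rate
                sigma = sigma0 * (1.0 - frac) + 0.5          # shrinking neighbourhood
                x = data[rng.integers(len(data))]
                bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=2)), grid)
                h = np.exp(-((coords - np.array(bmu)) ** 2).sum(axis=2) / (2.0 * sigma ** 2))
                weights += lr * h[..., None] * (x - weights)  # pull BMU and neighbours toward x
            return weights

        def map_to_units(data, weights):
            """Assign each sample to its best-matching unit (cluster label)."""
            flat = weights.reshape(-1, weights.shape[-1])
            return np.argmin(np.linalg.norm(data[:, None, :] - flat[None, :, :], axis=2), axis=1)

        # Synthetic stand-in for per-station model-error vectors (e.g. bias per variable).
        rng = np.random.default_rng(1)
        errors = np.vstack([rng.normal(0.0, 0.3, (50, 5)), rng.normal(2.0, 0.3, (50, 5))])
        labels = map_to_units(errors, train_som(errors))
        print(len(np.unique(labels)), "occupied SOM units for", len(errors), "error vectors")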

  8. Variability in Cadence During Forced Cycling Predicts Motor Improvement in Individuals With Parkinson’s Disease

    PubMed Central

    Ridgel, Angela L.; Abdar, Hassan Mohammadi; Alberts, Jay L.; Discenzo, Fred M.; Loparo, Kenneth A.

    2014-01-01

    Variability in severity and progression of Parkinson’s disease symptoms makes it challenging to design therapy interventions that provide maximal benefit. Previous studies showed that forced cycling, at greater pedaling rates, results in greater improvements in motor function than voluntary cycling. The precise mechanism for differences in function following exercise is unknown. We examined the complexity of biomechanical and physiological features of forced and voluntary cycling and correlated these features with improvements in motor function as measured by the Unified Parkinson’s Disease Rating Scale (UPDRS). Heart rate, cadence, and power were analyzed using entropy signal processing techniques. Pattern variability in heart rate and power was greater in the voluntary group than in the forced group. In contrast, variability in cadence was higher during forced cycling. UPDRS Motor III scores predicted from the pattern variability data were highly correlated to measured scores in the forced group. This study shows how time-series analysis of biomechanical and physiological exercise parameters can be used to predict improvements in motor function. This knowledge will be important in the development of optimal exercise-based rehabilitation programs for Parkinson’s disease. PMID:23144045
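
    As a pointer to the kind of entropy signal processing mentioned above, the sketch below is a compact sample-entropy calculation, SampEn(m, r), applied to a synthetic cadence-like series; it approximates the textbook definition and is not the authors' pipeline.

        import numpy as np

        def sample_entropy(x, m=2, r_factor=0.2):
            """Approximate SampEn(m, r) = -ln(A/B) with tolerance r = r_factor * std."""
            x = np.asarray(x, dtype=float)
            r = r_factor * x.std()

            def count_matches(length):
                templates = np.array([x[i:i + length] for i in range(len(x) - length)])
                total = 0
                for i in range(len(templates)):
                    dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                    total += np.sum(dist <= r)          # Chebyshev matches, no self-matches
                return total

            B = count_matches(m)
            A = count_matches(m + 1)
            return -np.log(A / B) if A > 0 and B > 0 else np.inf

        rng = np.random.default_rng(0)
        regular = np.sin(np.linspace(0, 20 * np.pi, 500))        # very regular cadence
        irregular = regular + rng.normal(0, 0.5, 500)            # noisier, more complex cadence
        print(round(sample_entropy(regular), 3), "<", round(sample_entropy(irregular), 3))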

  9. Stokes phenomena in discrete Painlevé II.

    PubMed

    Joshi, N; Lustri, C J; Luu, S

    2017-02-01

    We consider the asymptotic behaviour of the second discrete Painlevé equation in the limit as the independent variable becomes large. Using asymptotic power series, we find solutions that are asymptotically pole-free within some region of the complex plane. These asymptotic solutions exhibit Stokes phenomena, which are typically invisible to classical power series methods. We subsequently apply exponential asymptotic techniques to investigate such phenomena, and obtain mathematical descriptions of the rapid switching behaviour associated with Stokes curves. Through this analysis, we determine the regions of the complex plane in which the asymptotic behaviour is described by a power series expression, and find that the behaviour of these asymptotic solutions shares a number of features with the tronquée and tri-tronquée solutions of the second continuous Painlevé equation.
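
    For reference, the second discrete Painlevé equation is commonly written (up to the parametrisation chosen) as the three-term recurrence

        \[
          x_{n+1} + x_{n-1} \;=\; \frac{(\alpha n + \beta)\,x_n + \gamma}{1 - x_n^{2}},
        \]

    and the analysis above concerns the behaviour of its solutions as the independent variable n becomes large.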

  10. Stokes phenomena in discrete Painlevé II

    PubMed Central

    Joshi, N.

    2017-01-01

    We consider the asymptotic behaviour of the second discrete Painlevé equation in the limit as the independent variable becomes large. Using asymptotic power series, we find solutions that are asymptotically pole-free within some region of the complex plane. These asymptotic solutions exhibit Stokes phenomena, which are typically invisible to classical power series methods. We subsequently apply exponential asymptotic techniques to investigate such phenomena, and obtain mathematical descriptions of the rapid switching behaviour associated with Stokes curves. Through this analysis, we determine the regions of the complex plane in which the asymptotic behaviour is described by a power series expression, and find that the behaviour of these asymptotic solutions shares a number of features with the tronquée and tri-tronquée solutions of the second continuous Painlevé equation. PMID:28293132

  11. Shape Mode Analysis Exposes Movement Patterns in Biology: Flagella and Flatworms as Case Studies

    PubMed Central

    Werner, Steffen; Rink, Jochen C.; Riedel-Kruse, Ingmar H.; Friedrich, Benjamin M.

    2014-01-01

    We illustrate shape mode analysis as a simple, yet powerful technique to concisely describe complex biological shapes and their dynamics. We characterize undulatory bending waves of beating flagella and reconstruct a limit cycle of flagellar oscillations, paying particular attention to the periodicity of angular data. As a second example, we analyze non-convex boundary outlines of gliding flatworms, which allows us to expose stereotypic body postures that can be related to two different locomotion mechanisms. Further, shape mode analysis based on principal component analysis allows us to discriminate between different flatworm species, despite large motion-associated shape variability. Thus, complex shape dynamics is characterized by a small number of shape scores that change in time. We present this method using descriptive examples, explaining abstract mathematics in a graphic way. PMID:25426857
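
    A minimal shape-mode sketch, assuming each outline is already sampled at corresponding points and flattened into a vector; scikit-learn's PCA stands in for the analysis, and the synthetic "postures" are illustrative.

        import numpy as np
        from sklearn.decomposition import PCA

        # Synthetic closed outlines: circles with a variable elongation "posture" mode.
        rng = np.random.default_rng(0)
        theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
        outlines = []
        for _ in range(60):
            elongation = rng.normal(0.0, 0.2)                    # latent shape parameter
            x = (1.0 + elongation) * np.cos(theta)
            y = (1.0 - elongation) * np.sin(theta)
            outlines.append(np.concatenate([x, y]) + rng.normal(0, 0.01, 2 * theta.size))
        outlines = np.array(outlines)                            # (n_shapes, 2 * n_points)

        pca = PCA(n_components=3)
        scores = pca.fit_transform(outlines)                     # shape scores per outline
        print("variance explained by the first 3 modes:",
              np.round(pca.explained_variance_ratio_, 3))
        # The leading mode captures the elongation posture; its score per shape is the
        # low-dimensional descriptor used to track shape dynamics over time.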

  12. Development of Multiobjective Optimization Techniques for Sonic Boom Minimization

    NASA Technical Reports Server (NTRS)

    Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.

    1996-01-01

    A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high-speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results obtained compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high-speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used for solving the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house developed finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high-speed wing-body configurations simultaneously improve the aerodynamic, the sonic boom and the structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
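
    For readers unfamiliar with it, the Kreisselmeier-Steinhauser approach mentioned above aggregates several objectives or constraints g_k(x) into one smooth envelope function; in its standard form, with draw-down factor rho,

        \[
          \mathrm{KS}\bigl(g_1,\dots,g_m;\rho\bigr)
          \;=\; \frac{1}{\rho}\,\ln\!\left(\sum_{k=1}^{m} e^{\rho\, g_k(\mathbf{x})}\right),
        \]

    which approaches max_k g_k(x) from above as rho increases, so a single smooth function can stand in for the worst of the individual criteria.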

  13. Upwelling regime off the Cabo Frio region in Brazil and impact on acoustic propagation.

    PubMed

    Calado, Leandro; Camargo Rodríguez, Orlando; Codato, Gabriel; Contrera Xavier, Fabio

    2018-03-01

    This work introduces a description of the complex upwelling regime off the Cabo Frio region in Brazil and shows that ocean modeling, based on the feature-oriented regional modeling system (FORMS) technique, can produce reliable predictions of sound speed fields for the corresponding shallow water environment. This work also shows, through the development of simulations, that the upwelling regime can be responsible for the creation of shadow coastal zones, in which the detection probability is too low for an acoustic source to be detected. The development of the FORMS technique and its validation with real data, for the particular region of coastal upwelling off Cabo Frio, reveals the possibility of a sustainable and reliable forecast system for the corresponding (variable in space and time) underwater acoustic environment.

  14. Trocar anterior chamber maintainer: Improvised infusion technique.

    PubMed

    Agarwal, Amar; Narang, Priya; Kumar, Dhivya A; Agarwal, Ashvin

    2016-02-01

    We present an improvised technique of infusion that uses a trocar cannula as an anterior chamber maintainer (ACM). Although routinely used in posterior segment surgery, the trocar cannula has been infrequently used in complex anterior segment procedures. The trocar ACM creates a transconjunctival biplanar wound of appropriate size that is self-sealing and overcomes the shortcomings of an ACM, such as spontaneous extrusion and forced introduction into the eye from variability in the size of the corneal paracentesis incision. Constant infusion inflow through the trocar ACM is used to maintain positive intraocular pressure through a self-sealing sclerotomy incision at the limbus. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2016 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  15. Open loop model for WDM links

    NASA Astrophysics Data System (ADS)

    D, Meena; Francis, Fredy; T, Sarath K.; E, Dipin; Srinivas, T.; K, Jayasree V.

    2014-10-01

    Wavelength Division Multiplexing (WDM) techniques over fibre links help to exploit the high bandwidth capacity of single-mode fibres. A typical WDM link consisting of a laser source, multiplexer/demultiplexer, amplifier and detector is considered for obtaining the open-loop gain model of the link. The methodology used here is to obtain individual component models using mathematical and different curve-fitting techniques. These individual models are then combined to obtain the WDM link model. The objective is to deduce a single-variable model for the WDM link in terms of the input current to the system. Thus it provides a black-box solution for a link. The Root Mean Square Error (RMSE) associated with each of the approximated models is given for comparison. This will help the designer to select the suitable WDM link model during a complex link design.
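
    A hedged sketch of the component-modelling step: fit a simple analytic curve to measured data for one component and report the RMSE of the fit. The logistic source characteristic and all numbers below are made-up stand-ins, not measurements from the paper.

        import numpy as np
        from scipy.optimize import curve_fit

        def source_model(current, p_max, k, i0):
            """Smooth saturating output power versus drive current (illustrative form)."""
            return p_max / (1.0 + np.exp(-k * (current - i0)))

        rng = np.random.default_rng(0)
        current = np.linspace(0.0, 50.0, 40)                                  # mA
        power = source_model(current, 8.0, 0.2, 20.0) + rng.normal(0, 0.1, current.size)

        params, _ = curve_fit(source_model, current, power, p0=[10.0, 0.1, 15.0])
        rmse = np.sqrt(np.mean((power - source_model(current, *params)) ** 2))
        print("fitted [p_max, k, i0]:", np.round(params, 3), " RMSE:", round(rmse, 3))
        # Component models fitted this way can then be cascaded to build the
        # single-variable open-loop model of the whole WDM link.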

  16. Multivariate spatiotemporal visualizations for mobile devices in Flyover Country

    NASA Astrophysics Data System (ADS)

    Loeffler, S.; Thorn, R.; Myrbo, A.; Roth, R.; Goring, S. J.; Williams, J.

    2017-12-01

    Visualizing and interacting with complex multivariate and spatiotemporal datasets on mobile devices is challenging due to their smaller screens, reduced processing power, and limited data connectivity. Pollen data require visualizing pollen assemblages spatially, temporally, and across multiple taxa to understand plant community dynamics through time. Drawing from cartography, information visualization, and paleoecology, we have created new mobile-first visualization techniques that represent multiple taxa across many sites and enable user interaction. Using pollen datasets from the Neotoma Paleoecology Database as a case study, the visualization techniques allow ecological patterns and trends to be quickly understood on a mobile device compared to traditional pollen diagrams and maps. This flexible visualization system can be used for datasets beyond pollen, with the only requirements being point-based localities and multiple variables changing through time or depth.

  17. Uncertainty Analysis of Decomposing Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Hobbs, Michael L.; Romero, Vicente J.

    2000-01-01

    Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
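
    The surrogate-plus-sampling workflow above can be sketched compactly with a cheap analytic stand-in for the finite element model: draw a Latin Hypercube sample of the inputs, fit linear and quadratic response surfaces by least squares, and compare the statistics they predict. Everything below (the toy response, bounds, sample sizes) is an illustrative assumption.

        import numpy as np
        from scipy.stats import qmc

        def front_velocity(x):
            """Cheap analytic stand-in for the expensive decomposition-front model."""
            return 1.0 + 0.8 * x[:, 0] + 0.3 * x[:, 1] ** 2 - 0.5 * x[:, 0] * x[:, 2]

        def design_matrix(x, quadratic):
            cols = [np.ones(len(x)), x[:, 0], x[:, 1], x[:, 2]]
            if quadratic:
                cols += [x[:, i] * x[:, j] for i in range(3) for j in range(i, 3)]
            return np.column_stack(cols)

        x_train = qmc.scale(qmc.LatinHypercube(d=3, seed=0).random(60), [-1] * 3, [1] * 3)
        y_train = front_velocity(x_train)
        x_test = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(2000), [-1] * 3, [1] * 3)
        y_test = front_velocity(x_test)

        for quadratic, name in [(False, "LIN"), (True, "QUAD")]:
            beta, *_ = np.linalg.lstsq(design_matrix(x_train, quadratic), y_train, rcond=None)
            y_hat = design_matrix(x_test, quadratic) @ beta
            print(name, "surrogate: mean %.3f, std %.3f (direct-sampling std %.3f)"
                  % (y_hat.mean(), y_hat.std(), y_test.std()))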

  18. Advanced computer-aided design for bone tissue-engineering scaffolds.

    PubMed

    Ramin, E; Harris, R A

    2009-04-01

    The design of scaffolds with an intricate and controlled internal structure represents a challenge for tissue engineering. Several scaffold-manufacturing techniques allow the creation of complex architectures but with little or no control over the main features of the channel network such as the size, shape, and interconnectivity of each individual channel, resulting in intricate but random structures. The combined use of computer-aided design (CAD) systems and layer-manufacturing techniques allows a high degree of control over these parameters with few limitations in terms of achievable complexity. However, the design of complex and intricate networks of channels required in CAD is extremely time-consuming since manually modelling hundreds of different geometrical elements, all with different parameters, may require several days to design individual scaffold structures. An automated design methodology is proposed by this research to overcome these limitations. This approach involves the investigation of novel software algorithms, which are able to interact with a conventional CAD program and permit the automated design of several geometrical elements, each with a different size and shape. In this work, the variability of the parameters required to define each geometry has been set as random, but any other distribution could have been adopted. This methodology has been used to design five cubic scaffolds with interconnected pore channels that range from 200 to 800 microm in diameter, each with an increased complexity of the internal geometrical arrangement. A clinical case study, consisting of an integration of one of these geometries with a craniofacial implant, is then presented.

  19. Biostatistics Series Module 10: Brief Overview of Multivariate Methods.

    PubMed

    Hazra, Avijit; Gogtay, Nithya

    2017-01-01

    Multivariate analysis refers to statistical techniques that simultaneously look at three or more variables in relation to the subjects under investigation with the aim of identifying or clarifying the relationships between them. These techniques have been broadly classified as dependence techniques, which explore the relationship between one or more dependent variables and their independent predictors, and interdependence techniques, that make no such distinction but treat all variables equally in a search for underlying relationships. Multiple linear regression models a situation where a single numerical dependent variable is to be predicted from multiple numerical independent variables. Logistic regression is used when the outcome variable is dichotomous in nature. The log-linear technique models count type of data and can be used to analyze cross-tabulations where more than two variables are included. Analysis of covariance is an extension of analysis of variance (ANOVA), in which an additional independent variable of interest, the covariate, is brought into the analysis. It tries to examine whether a difference persists after "controlling" for the effect of the covariate that can impact the numerical dependent variable of interest. Multivariate analysis of variance (MANOVA) is a multivariate extension of ANOVA used when multiple numerical dependent variables have to be incorporated in the analysis. Interdependence techniques are more commonly applied to psychometrics, social sciences and market research. Exploratory factor analysis and principal component analysis are related techniques that seek to extract from a larger number of metric variables, a smaller number of composite factors or components, which are linearly related to the original variables. Cluster analysis aims to identify, in a large number of cases, relatively homogeneous groups called clusters, without prior information about the groups. The calculation intensive nature of multivariate analysis has so far precluded most researchers from using these techniques routinely. The situation is now changing with wider availability, and increasing sophistication of statistical software and researchers should no longer shy away from exploring the applications of multivariate methods to real-life data sets.

  20. Fluid Mechanics and Complex Variable Theory: Getting Past the 19th Century

    ERIC Educational Resources Information Center

    Newton, Paul K.

    2017-01-01

    The subject of fluid mechanics is a rich, vibrant, and rapidly developing branch of applied mathematics. Historically, it has developed hand-in-hand with the elegant subject of complex variable theory. The Westmont College NSF-sponsored workshop on the revitalization of complex variable theory in the undergraduate curriculum focused partly on…

  1. Design of forging process variables under uncertainties

    NASA Astrophysics Data System (ADS)

    Repalle, Jalaja; Grandhi, Ramana V.

    2005-02-01

    Forging is a complex nonlinear process that is vulnerable to various manufacturing anomalies, such as variations in billet geometry, billet/die temperatures, material properties, and workpiece and forging equipment positional errors. A combination of these uncertainties could induce heavy manufacturing losses through premature die failure, final part geometric distortion, and reduced productivity. Identifying, quantifying, and controlling the uncertainties will reduce variability risk in a manufacturing environment, which will minimize the overall production cost. In this article, various uncertainties that affect the forging process are identified, and their cumulative effect on the forging tool life is evaluated. Because the forging process simulation is time-consuming, a response surface model is used to reduce computation time by establishing a relationship between the process performance and the critical process variables. A robust design methodology is developed by incorporating reliability-based optimization techniques to obtain sound forging components. A case study of an automotive-component forging-process design is presented to demonstrate the applicability of the method.

  2. Distributed Space Mission Design for Earth Observation Using Model-Based Performance Evaluation

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Cervantes, Ben; DeWeck, Oliver

    2015-01-01

    Distributed Space Missions (DSMs) are gaining momentum in their application to earth observation missions owing to their unique ability to increase observation sampling in multiple dimensions. DSM design is a complex problem with many design variables, multiple objectives determining performance and cost, and emergent, often unexpected, behaviors. There are very few open-access tools available to explore the tradespace of variables, minimize cost and maximize performance for pre-defined science goals, and therefore select an optimal design. This paper presents a software tool that can generate multiple DSM architectures based on pre-defined design variable ranges and size those architectures in terms of predefined science and cost metrics. The tool will help a user select Pareto optimal DSM designs based on design of experiments techniques. The tool will be applied to some earth observation examples to demonstrate its applicability in making some key decisions between different performance metrics and cost metrics early in the design lifecycle.
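
    The Pareto-filtering step such a tool relies on can be sketched in a few lines: given architectures enumerated by a design of experiments, keep only those for which no other design is at least as cheap and at least as capable, and strictly better in one of the two. The cost and performance numbers below are synthetic placeholders.

        import numpy as np

        def pareto_front(cost, performance):
            """Indices of non-dominated designs (minimise cost, maximise performance)."""
            keep = []
            for i in range(len(cost)):
                dominated = np.any((cost <= cost[i]) & (performance >= performance[i]) &
                                   ((cost < cost[i]) | (performance > performance[i])))
                if not dominated:
                    keep.append(i)
            return np.array(keep)

        # Illustrative tradespace: constellation size as the only design variable.
        rng = np.random.default_rng(0)
        n_sats = np.repeat(np.arange(1, 9), 4)
        cost = 20.0 * n_sats + rng.normal(0, 5, n_sats.size)                 # $M, made up
        performance = 100 * (1 - np.exp(-0.4 * n_sats)) + rng.normal(0, 3, n_sats.size)

        front = pareto_front(cost, performance)
        print("Pareto-optimal designs:", front.size, "of", n_sats.size)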

  3. Isoflurane and Ketamine Anesthesia have Different Effects on Ventilatory Pattern Variability in Rats

    PubMed Central

    Chung, Augustine; Fishman, Mikkel; Dasenbrook, Elliot C.; Loparo, Kenneth A.; Dick, Thomas E.; Jacono, Frank J.

    2013-01-01

    We hypothesize that isoflurane and ketamine impact ventilatory pattern variability (VPV) differently. Adult Sprague-Dawley rats were recorded in a whole-body plethysmograph before, during and after deep anesthesia. VPV was quantified from 60-s epochs using a complementary set of analytic techniques that included constructing surrogate data sets that preserved the linear structure but disrupted nonlinear deterministic properties of the original data. Even though isoflurane decreased and ketamine increased respiratory rate, VPV as quantified by the coefficient of variation decreased for both anesthetics. Further, mutual information increased, sample entropy decreased, and the nonlinear complexity index (NLCI) increased during anesthesia despite qualitative differences in the shape and period of the waveform. Surprisingly, mutual information and sample entropy did not change in the surrogate sets constructed from isoflurane data, but in those constructed from ketamine data, mutual information increased and sample entropy decreased significantly in the surrogate segments constructed from anesthetized relative to unanesthetized epochs. These data suggest that separate mechanisms modulate linear and nonlinear variability of breathing. PMID:23246800
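
    A brief sketch of the surrogate-data construction mentioned above: Fourier phase randomization preserves the linear (spectral, and hence autocorrelation) structure of a breathing-like signal while destroying nonlinear, time-irreversible structure. The ramp-like test signal and the comparison statistics are illustrative choices, not the study's data.

        import numpy as np

        def phase_randomized_surrogate(x, seed=0):
            """Surrogate with the same power spectrum but randomised Fourier phases."""
            rng = np.random.default_rng(seed)
            spectrum = np.fft.rfft(x)
            phases = rng.uniform(0, 2 * np.pi, spectrum.size)
            phases[0] = 0.0          # keep the zero-frequency (mean) term real
            phases[-1] = 0.0         # keep the Nyquist term real for even-length series
            return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

        def lag1_autocorr(x):
            return np.corrcoef(x[:-1], x[1:])[0, 1]

        def increment_skew(x):
            d = np.diff(x)
            return np.mean(((d - d.mean()) / d.std()) ** 3)

        rng = np.random.default_rng(1)
        t = np.arange(6000) * 0.01
        signal = (0.3 * t) % 1.0 + 0.05 * rng.normal(size=t.size)   # asymmetric ramp waveform
        surrogate = phase_randomized_surrogate(signal)

        print("lag-1 autocorrelation:", round(lag1_autocorr(signal), 3),
              "vs", round(lag1_autocorr(surrogate), 3))   # preserved (linear property)
        print("increment skewness:   ", round(increment_skew(signal), 3),
              "vs", round(increment_skew(surrogate), 3))  # destroyed (nonlinear property)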

  4. Analysis of Neuronal Spike Trains, Deconstructed

    PubMed Central

    Aljadeff, Johnatan; Lansdell, Benjamin J.; Fairhall, Adrienne L.; Kleinfeld, David

    2016-01-01

    As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods. PMID:27477016

  5. Ultra compact spectrometer using linear variable filters

    NASA Astrophysics Data System (ADS)

    Dami, M.; De Vidi, R.; Aroldi, G.; Belli, F.; Chicarella, L.; Piegari, A.; Sytchkova, A.; Bulir, J.; Lemarquis, F.; Lequime, M.; Abel Tibérini, L.; Harnisch, B.

    2017-11-01

    The Linearly Variable Filters (LVF) are complex optical devices that, integrated in a CCD, can realize a "single-chip spectrometer". In the framework of an ESA study, a team of industries and institutes led by SELEX-Galileo explored the design principles and manufacturing techniques, realizing and characterizing LVF samples based both on All-Dielectric (AD) and Metal-Dielectric (MD) coating structures in the VNIR and SWIR spectral ranges. In particular, the achieved performances on spectral gradient, transmission bandwidth and Spectral Attenuation (SA) are presented and critically discussed, and potential improvements are highlighted. In addition, the results of a feasibility study of a SWIR Linear Variable Filter are presented, with a comparison of design predictions and measured performances. Finally, criticalities related to the filter-CCD packaging are discussed. The main achievements of these activities have been: to evaluate, by design, manufacture and test of LVF samples, the achievable performances compared with target requirements; to evaluate the reliability of the projects by analyzing their repeatability; and to define suitable measurement methodologies.

  6. Are middle school mathematics teachers able to solve word problems without using variable?

    NASA Astrophysics Data System (ADS)

    Gökkurt Özdemir, Burçin; Erdem, Emrullah; Örnek, Tuğba; Soylu, Yasin

    2018-01-01

    Many people consider problem solving a complex process in which variables such as x and y are used. However, problems need not be solved only by using variables; problem solving can be rationalized and made easier using practical strategies. Especially when the development of children at younger ages is considered, it is obvious that mathematics teachers should solve problems through concrete processes. In this context, middle school mathematics teachers' skills in solving word problems without using variables were examined in the current study. Through the case study method, this study was conducted with 60 middle school mathematics teachers who have different professional experiences in five provinces in Turkey. A test consisting of five open-ended word problems was used as the data collection tool. The content analysis technique was used to analyze the data. As a result of the analysis, it was seen that most of the teachers used the trial-and-error strategy or an area model as the solution strategy. On the other hand, teachers who solved the problems using variables such as x, a, n or symbols such as Δ, □, ○, *, and who also fell into error by considering these solutions to be without variables, were also observed in the study.

  7. Retrograde renal hilar dissection and segmental arterial clamping: a simple modification to achieve super-selective robotic partial nephrectomy.

    PubMed

    Greene, Richard N; Sutherland, Douglas E; Tausch, Timothy J; Perez, Deo S

    2014-03-01

    Super-selective vascular control prior to robotic partial nephrectomy (also known as 'zero-ischemia') is a novel surgical technique that promises to reduce warm ischemia time. The technique has been shown to be feasible but adds substantial technical complexity and cost to the procedure. We present a simplified retrograde dissection of the renal hilum to achieve selective vascular control during robotic partial nephrectomy. Consecutive patients with stage 1 solid and complex cystic renal masses underwent robotic partial nephrectomies with selective vascular control using a modification of the previously described super-selective robotic partial nephrectomy. In each case, the renal arterial branch supplying the mass and surrounding parenchyma was dissected in a retrograde fashion from the tumor. Intra-renal dissection of the interlobular artery was not performed. Intra-operative immunofluorescence was not utilized, as assessment of parenchymal ischemia was documented before partial nephrectomy. Data were prospectively collected in an IRB-approved partial nephrectomy database. Operative variables between patients undergoing super-selective versus standard robotic partial nephrectomy were compared. Super-selective partial nephrectomy with retrograde hilar dissection was successfully completed in five consecutive patients. There were no complications or conversions to traditional partial nephrectomy. All were diagnosed with renal cell carcinoma and surgical margins were all negative. Estimated blood loss, warm ischemia time, operative time and length of stay were all comparable between patients undergoing super-selective and standard robotic partial nephrectomy. Retrograde hilar dissection appears to be a feasible and safe approach to super-selective partial nephrectomy without adding complex renovascular surgical techniques or cost to the procedure.

  8. Comparison of Free Energy Surfaces Calculations from Ab Initio Molecular Dynamic Simulations at the Example of Two Transition Metal Catalyzed Reactions

    PubMed Central

    Brüssel, Marc; di Dio, Philipp J.; Muñiz, Kilian; Kirchner, Barbara

    2011-01-01

    We carried out ab initio molecular dynamic simulations in order to determine the free energy surfaces of two selected reactions including solvents, namely a rearrangement of a ruthenium oxoester in water and a carbon dioxide addition to a palladium complex in carbon dioxide. For the latter reaction we also investigated the gas phase reaction in order to take solvent effects into account. We used two techniques to reconstruct the free energy surfaces: thermodynamic integration and metadynamics. Furthermore, we gave a reasonable error estimation of the computed free energy surface. We calculated a reaction barrier of ΔF = 59.5 ± 8.5 kJ mol−1 for the rearrangement of a ruthenium oxoester in water from thermodynamic integration. For the carbon dioxide addition to the palladium complex in carbon dioxide we found a ΔF = 44.9 ± 3.3 kJ mol−1 from metadynamics simulations with one collective variable. The investigation of the same reactions in the gas phase resulted in ΔF = 24.9 ± 6.7 kJ mol−1 from thermodynamic integration, in ΔF = 26.7 ± 2.3 kJ mol−1 from metadynamics simulations with one collective variable, and in ΔF = 27.1 ± 5.9 kJ mol−1 from metadynamics simulations with two collective variables. PMID:21541065
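
    For context, thermodynamic integration estimates a free energy difference by integrating the ensemble-averaged derivative of the Hamiltonian along the coupling (or collective-variable) parameter lambda; in its standard form,

        \[
          \Delta F \;=\; \int_{0}^{1}
          \left\langle \frac{\partial H(\lambda)}{\partial \lambda} \right\rangle_{\lambda}
          \, d\lambda ,
        \]

    where the average is taken in the ensemble at fixed lambda and the integral is evaluated numerically from simulations at a discrete set of lambda values.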

  9. How to find what you don't know: Visualising variability in 3D geological models

    NASA Astrophysics Data System (ADS)

    Lindsay, Mark; Wellmann, Florian; Jessell, Mark; Ailleres, Laurent

    2014-05-01

    Uncertainties in input data can have compounding effects on the predictive reliability of three-dimensional (3D) geological models. Resource exploration, tectonic studies and environmental modelling can be compromised by using 3D models that misrepresent the target geology, and drilling campaigns that attempt to intersect particular geological units guided by 3D models are at risk of failure if the exploration geologist is unaware of inherent uncertainties. In addition, the visual inspection of 3D models is often the first contact decision makers have with the geology, thus visually communicating the presence and magnitude of uncertainties contained within geological 3D models is critical. Unless uncertainties are presented early in the relationship between decision maker and model, the model will be considered more truthful than the uncertainties allow with each subsequent viewing. We present a selection of visualisation techniques that provide the viewer with insight into the location and amount of uncertainty contained within a model, and the geological characteristics which are most affected. A model of the Gippsland Basin, southeastern Australia is used as a case study to demonstrate the concepts of information entropy, stratigraphic variability and geodiversity. Central to the techniques shown here is the creation of a model suite, performed by creating similar (but not identical) versions of the original model through perturbation of the input data. Specifically, structural data in the form of strike and dip measurements are perturbed in the creation of the model suite. The visualisation techniques presented are: (i) information entropy; (ii) stratigraphic variability and (iii) geodiversity. Information entropy is used to analyse uncertainty in a spatial context, combining the empirical probability distributions of multiple outcomes into a single quantitative measure. Stratigraphic variability displays the number of possible lithologies that may exist at a given point within the model volume. Geodiversity analyses various model characteristics (or 'geodiversity metrics'), including the depth, the volume of a unit, the curvature of an interface, the geological complexity of a contact and the contact relationships units have with each other. Principal component analysis, a multivariate statistical technique, is used to simultaneously examine each of the geodiversity metrics to determine the boundaries of model space, and identify which metrics contribute most to model uncertainty. The combination of information entropy, stratigraphic variability and geodiversity analysis provides a descriptive and thorough representation of uncertainty with effective visualisation techniques that clearly communicate the geological uncertainty contained within the geological model.
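
    A small sketch of the information-entropy and stratigraphic-variability calculations described above, assuming the model suite has already been voxelised into integer lithology codes; the random suite below is purely illustrative.

        import numpy as np

        # Assumed model suite: shape (n_models, nx, ny, nz), integer lithology code per cell.
        rng = np.random.default_rng(0)
        n_models, grid_shape, n_lith = 50, (20, 20, 10), 4
        suite = rng.integers(0, n_lith, size=(n_models,) + grid_shape)

        # Per-cell empirical probability of each lithology across the suite.
        counts = np.stack([(suite == k).sum(axis=0) for k in range(n_lith)])
        p = counts / n_models

        # Information entropy per cell: 0 where all models agree, log(n_lith) at maximum disagreement.
        with np.errstate(divide="ignore", invalid="ignore"):
            entropy = -np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=0)

        # Stratigraphic variability per cell: how many distinct lithologies occur across the suite.
        variability = (counts > 0).sum(axis=0)

        print("mean cell entropy:", round(float(entropy.mean()), 3),
              "  maximum possible:", round(np.log(n_lith), 3))
        print("cells where every model agrees:", int((variability == 1).sum()))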

  10. Complexity Science Framework for Big Data: Data-enabled Science

    NASA Astrophysics Data System (ADS)

    Surjalal Sharma, A.

    2016-07-01

    The ubiquity of Big Data has stimulated the development of analytic tools to harness the potential for timely and improved modeling and prediction. While much of the data is available in near-real time and can be compiled to specify the current state of the system, the capability to make predictions is lacking. The main reason is the basic nature of Big Data: traditional techniques are challenged in their ability to cope with its velocity, volume and variability to make optimum use of the available information. Another aspect is the absence of an effective description of the time evolution or dynamics of the specific system, derived from the data. Once such dynamical models are developed, predictions can be made readily. This approach of "letting the data speak for itself" is distinct from first-principles models based on an understanding of the fundamentals of the system. The predictive capability comes from the data-derived dynamical model, with no modeling assumptions, and can address many issues such as causality and correlation. This approach provides a framework for addressing the challenges in Big Data, especially in the case of spatio-temporal time series data. The reconstruction of dynamics from time series data is based on the recognition that in most systems the different variables or degrees of freedom are coupled nonlinearly, and in the presence of dissipation the state space contracts, effectively reducing the number of variables, thus enabling a description of the system's dynamical evolution and consequently prediction of future states. Predictability is analysed from the intrinsic characteristics of the distribution functions, such as Hurst exponents and Hill estimators. In most systems the distributions have heavy tails, which imply a higher likelihood of extreme events. The characterization of the probabilities of extreme events is critical in many cases, e.g., natural hazards, for proper assessment of risk and mitigation strategies. Big Data with such new analytics can yield improved risk estimates. The challenges of scientific inference from complex and massive data are addressed by data-enabled science, also referred to as the Fourth Paradigm, after experiment, theory and simulation. An example of this approach is the modelling of dynamical and statistical features of natural systems, without assumptions of specific processes. An effective use of the techniques of complexity science to yield the inherent features of a system from extensive data from observations and large-scale numerical simulations is evident in the case of Earth's magnetosphere. The multiscale nature of the magnetosphere makes numerical simulations a challenge, requiring very large computing resources. The reconstruction of dynamics from observational data can, however, yield the inherent characteristics using typical desktop computers. Such studies for other systems are in progress. The data-enabled approach using the framework of complexity science provides new techniques for modelling and prediction using Big Data. The studies of Earth's magnetosphere provide an example of the potential for a new approach to the development of quantitative analytic tools.
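
    As one of the simplest data-derived characterisations mentioned above, the sketch below estimates a Hurst exponent by rescaled-range (R/S) analysis; the implementation and the two test series are illustrative, and tail behaviour would instead be probed with Hill-type estimators.

        import numpy as np

        def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
            """Hurst exponent from the slope of log(R/S) versus log(window size)."""
            x = np.asarray(x, dtype=float)
            rs_means = []
            for n in window_sizes:
                rs = []
                for start in range(0, len(x) - n + 1, n):
                    w = x[start:start + n]
                    z = np.cumsum(w - w.mean())      # cumulative deviation within the window
                    if w.std() > 0:
                        rs.append((z.max() - z.min()) / w.std())
                rs_means.append(np.mean(rs))
            slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_means), 1)
            return slope

        rng = np.random.default_rng(0)
        white = rng.normal(size=4000)          # uncorrelated series
        persistent = np.cumsum(white)          # strongly persistent (integrated) series
        print("uncorrelated series H ~", round(hurst_rs(white), 2))
        print("persistent series   H ~", round(hurst_rs(persistent), 2))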

  11. Machine learning methods as a tool to analyse incomplete or irregularly sampled radon time series data.

    PubMed

    Janik, M; Bossew, P; Kurihara, O

    2018-07-15

    Machine learning is a class of statistical techniques which has proven to be a powerful tool for modelling the behaviour of complex systems, in which response quantities depend on assumed controls or predictors in a complicated way. In this paper, as our first purpose, we propose the application of machine learning to reconstruct incomplete or irregularly sampled data of indoor radon (222Rn) time series. The physical assumption underlying the modelling is that Rn concentration in the air is controlled by environmental variables such as air temperature and pressure. The algorithms "learn" from complete sections of the multivariate series, derive a dependence model and apply it to sections where the controls are available, but not the response (Rn), and in this way complete the Rn series. Three machine learning techniques are applied in this study, namely random forest, its extension called the gradient boosting machine, and deep learning. For comparison, we apply classical multiple regression in a generalized linear model version. Performance of the models is evaluated through different metrics. The performance of the gradient boosting machine is found to be superior to that of the other techniques. By applying learning machines, we show, as our second purpose, that missing data or periods of Rn series data can be reconstructed and resampled on a regular grid reasonably well, if data for appropriate physical controls are available. The techniques also identify to which degree the assumed controls contribute to imputing missing Rn values. Our third purpose, though no less important from the viewpoint of physics, is identifying to which degree physical (in this case environmental) variables are relevant as Rn predictors, or in other words, which predictors explain most of the temporal variability of Rn. We show that the variables which contribute most to the Rn series reconstruction are temperature, relative humidity and day of the year. The first two are physical predictors, while "day of the year" is a statistical proxy or surrogate for missing or unknown predictors. Copyright © 2018 Elsevier B.V. All rights reserved.
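
    A hedged sketch of the reconstruction idea, with scikit-learn's gradient boosting standing in for the machine-learning tools named above: train on time stamps where both the environmental controls and Rn are known, then predict Rn where only the controls are available. The series, predictors and missingness below are entirely synthetic.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor

        # Synthetic hourly series: "radon" driven by temperature, humidity and day of year.
        rng = np.random.default_rng(0)
        n = 5000
        day_of_year = (np.arange(n) / 24.0) % 365
        temperature = 10 + 10 * np.sin(2 * np.pi * day_of_year / 365) + rng.normal(0, 1, n)
        humidity = 60 - temperature + rng.normal(0, 5, n)
        radon = 40 + 2.0 * humidity - 1.5 * temperature + rng.normal(0, 5, n)
        X = np.column_stack([temperature, humidity, day_of_year])

        # Pretend 30% of the radon record is missing while the controls are complete.
        missing = rng.random(n) < 0.3
        model = GradientBoostingRegressor(random_state=0)
        model.fit(X[~missing], radon[~missing])
        reconstructed = radon.copy()
        reconstructed[missing] = model.predict(X[missing])

        rmse = np.sqrt(np.mean((reconstructed[missing] - radon[missing]) ** 2))
        print("reconstruction RMSE:", round(rmse, 2))
        print("relative importances (T, RH, day):", np.round(model.feature_importances_, 2))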

  12. [Renal elastography].

    PubMed

    Correas, Jean-Michel; Anglicheau, Dany; Gennisson, Jean-Luc; Tanter, Mickael

    2016-04-01

    Renal elastography has become available with the development of noninvasive quantitative techniques (including shear-wave elastography), following the rapidly growing field of diagnosis and quantification of liver fibrosis, which has demonstrated a major clinical impact. Ultrasound and even magnetic resonance techniques are leaving the pure research arena to reach routine clinical use. With the increased incidence of chronic kidney disease and its specific morbidity and mortality, the noninvasive diagnosis of renal fibrosis can be of critical value. However, it is difficult to simply extend the application from one organ to the other due to a large number of anatomical and technical issues. Indeed, the kidney exhibits various features that make stiffness assessment more complex, such as the presence of various tissue types (cortex, medulla), high spatial orientation (anisotropy), local blood flow, a fatty sinus with variable volume and echotexture, a perirenal space with variable fatty content, and the variable depth of the organ. Furthermore, the stiffness changes of the renal parenchyma are not exclusively related to fibrosis, as renal perfusion or hydronephrosis will impact the local elasticity. Renal elastography might be able to diagnose acute or chronic obstruction, or contribute to renal tumor or pseudotumor characterization. Today, renal elastography appears to be a promising application that still requires optimization and validation, in contrast to liver stiffness assessment. Copyright © 2016 Association Société de néphrologie. Published by Elsevier SAS. All rights reserved.

  13. Functional relationships between wood structure and vulnerability to xylem cavitation in races of Eucalyptus globulus differing in wood density.

    PubMed

    Barotto, Antonio José; Monteoliva, Silvia; Gyenge, Javier; Martinez-Meier, Alejandro; Fernandez, María Elena

    2018-02-01

    Wood density can be considered a measure of the internal wood structure, and it is usually used as a proxy for other mechanical and functional traits. Eucalyptus is one of the most important commercial forestry genera worldwide, but the relationship between wood density and vulnerability to cavitation in this genus has been little studied. The analysis is hampered by, among other things, its anatomical complexity, so more complex techniques and analyses are needed to elucidate the way in which the different anatomical elements are functionally integrated. In this study, vulnerability to cavitation in two races of Eucalyptus globulus Labill. with different wood density was evaluated through Path analysis, a multivariate method that allows evaluation of descriptive models of causal relationships between variables. A model relating anatomical variables to wood properties and functional parameters was proposed and tested. We found significant differences in wood basic density and vulnerability to cavitation between races. The main exogenous variables predicting vulnerability to cavitation were vessel hydraulic diameter and fibre wall fraction. Fibre wall fraction had a direct impact on wood basic density and the slope of the vulnerability curve, and an indirect, negative effect on the pressure causing 50% loss of conductivity (P50) through them. Hydraulic diameter had a direct negative effect on P50, but an indirect, positive influence on this variable through wood density on the one hand, and through maximum hydraulic conductivity (ks max) and slope on the other. Our results highlight the complexity of the relationship between xylem efficiency and safety in species with solitary vessels such as Eucalyptus spp., with no evident compromise at the intraspecific level. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  14. Econometrics in outcomes research: the use of instrumental variables.

    PubMed

    Newhouse, J P; McClellan, M

    1998-01-01

    We describe an econometric technique, instrumental variables, that can be useful in estimating the effectiveness of clinical treatments in situations when a controlled trial has not or cannot be done. This technique relies upon the existence of one or more variables that induce substantial variation in the treatment variable but have no direct effect on the outcome variable of interest. We illustrate the use of the technique with an application to aggressive treatment of acute myocardial infarction in the elderly.
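
    A minimal two-stage least squares sketch of the instrumental-variables idea, on synthetic data in which naive regression is biased by an unobserved confounder; the variable names and coefficients are illustrative, not drawn from the myocardial infarction application.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 20000
        confounder = rng.normal(size=n)             # unobserved severity
        instrument = rng.normal(size=n)             # shifts treatment, no direct outcome effect
        treatment = 0.8 * instrument + 0.9 * confounder + rng.normal(size=n)
        outcome = 1.0 * treatment - 1.5 * confounder + rng.normal(size=n)   # true effect = 1.0

        def slope(y, x):
            """OLS slope of y on x (with intercept)."""
            X = np.column_stack([np.ones(len(y)), x])
            return np.linalg.lstsq(X, y, rcond=None)[0][1]

        # Naive OLS is biased because the confounder drives both treatment and outcome.
        print("naive OLS estimate:", round(slope(outcome, treatment), 2))

        # Two-stage least squares: (1) project the treatment on the instrument,
        # (2) regress the outcome on the fitted, exogenous part of the treatment.
        stage1 = np.column_stack([np.ones(n), instrument])
        fitted_treatment = stage1 @ np.linalg.lstsq(stage1, treatment, rcond=None)[0]
        print("2SLS estimate:     ", round(slope(outcome, fitted_treatment), 2))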

  15. Fusing Data Mining, Machine Learning and Traditional Statistics to Detect Biomarkers Associated with Depression

    PubMed Central

    Dipnall, Joanna F.

    2016-01-01

    Background: Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study. Methods: The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression, to identify key biomarkers associated with depression in the National Health and Nutrition Examination Study (2009–2010). Depression was measured using the Patient Health Questionnaire-9 and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed weighted multiple logistic regression model included possible confounders and moderators. Results: After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers were selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin with Mexican American/Hispanic group (p = 0.016), and current smokers (p<0.001). Conclusion: The systematic use of a hybrid methodology for variable selection, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and complex survey sampling methodology and was demonstrated to be a useful tool for detecting three biomarkers associated with depression for future hypothesis generation: red cell distribution width, serum glucose and total bilirubin. PMID:26848571

  16. Fusing Data Mining, Machine Learning and Traditional Statistics to Detect Biomarkers Associated with Depression.

    PubMed

    Dipnall, Joanna F; Pasco, Julie A; Berk, Michael; Williams, Lana J; Dodd, Seetal; Jacka, Felice N; Meyer, Denny

    2016-01-01

    Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study. The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression, to identify key biomarkers associated with depression in the National Health and Nutrition Examination Study (2009-2010). Depression was measured using the Patient Health Questionnaire-9 and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed weighted multiple logistic regression model included possible confounders and moderators. After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers were selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin with Mexican American/Hispanic group (p = 0.016), and current smokers (p<0.001). The systematic use of a hybrid methodology for variable selection, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and complex survey sampling methodology and was demonstrated to be a useful tool for detecting three biomarkers associated with depression for future hypothesis generation: red cell distribution width, serum glucose and total bilirubin.
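
    A compressed sketch of the three-step workflow (imputation, boosted-tree screening, then a conventional logistic model) using scikit-learn components on synthetic data; it uses a single chained-equations imputation rather than 20 and ignores the survey weights, so it only illustrates the structure of the method, not the study itself.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.linear_model import LogisticRegression

        # Synthetic "biomarker" matrix with missing values and a binary outcome label.
        rng = np.random.default_rng(0)
        n, p = 2000, 30
        X = rng.normal(size=(n, p))
        logit = 0.8 * X[:, 3] - 0.6 * X[:, 7] + 0.5 * X[:, 12]
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))
        X[rng.random((n, p)) < 0.1] = np.nan                      # 10% missing at random

        # Step 1: impute missing values (single chained-equations imputation here).
        X_imputed = IterativeImputer(random_state=0).fit_transform(X)

        # Step 2: boosted trees screen for candidate biomarkers.
        gbm = GradientBoostingClassifier(random_state=0).fit(X_imputed, y)
        candidates = np.argsort(gbm.feature_importances_)[::-1][:5]
        print("screened biomarker indices:", candidates)

        # Step 3: conventional logistic regression restricted to the screened candidates.
        logreg = LogisticRegression(max_iter=1000).fit(X_imputed[:, candidates], y)
        print("logistic coefficients:", np.round(logreg.coef_[0], 2))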

  17. Integrating Map Algebra and Statistical Modeling for Spatio- Temporal Analysis of Monthly Mean Daily Incident Photosynthetically Active Radiation (PAR) over a Complex Terrain.

    PubMed

    Evrendilek, Fatih

    2007-12-12

    This study aims at quantifying spatio-temporal dynamics of monthly mean daily incident photosynthetically active radiation (PAR) over a vast and complex terrain such as Turkey. The spatial interpolation method of universal kriging, and the combination of multiple linear regression (MLR) models and map algebra techniques, were implemented to generate surface maps of PAR with a grid resolution of 500 x 500 m as a function of five geographical and 14 climatic variables. Performance of the geostatistical and MLR models was compared using mean prediction error (MPE), root-mean-square prediction error (RMSPE), average standard prediction error (ASE), mean standardized prediction error (MSPE), root-mean-square standardized prediction error (RMSSPE), and adjusted coefficient of determination (R² adj.). The best-fit MLR- and universal kriging-generated models of monthly mean daily PAR were validated against an independent 37-year observed dataset of 35 climate stations derived from 160 stations across Turkey by the Jackknifing method. The spatial variability patterns of monthly mean daily incident PAR were more accurately reflected in the surface maps created by the MLR-based models than in those created by the universal kriging method, in particular, for spring (May) and autumn (November). The MLR-based spatial interpolation algorithms of PAR described in this study indicated the significance of the multifactor approach to understanding and mapping spatio-temporal dynamics of PAR for a complex terrain over meso-scales.
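
    The regression-and-validation part of this workflow is easy to illustrate. The sketch below, with hypothetical station data and predictor names, fits a multiple linear regression and reports two of the error metrics named above (MPE, RMSPE) plus adjusted R²; the kriging and map-algebra steps are not reproduced.

```python
# A small sketch of the MLR-plus-error-metrics part of the workflow. Column names
# for the geographic/climatic predictors and the PAR response are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("par_stations.csv")                  # hypothetical station data
predictors = ["latitude", "longitude", "elevation", "sunshine_hours", "cloud_cover"]
X, y = df[predictors], df["par_monthly_mean"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
pred = mlr.predict(X_te)

mpe = np.mean(pred - y_te)                            # mean prediction error
rmspe = np.sqrt(np.mean((pred - y_te) ** 2))          # root-mean-square prediction error
n, p = X_tr.shape
r2 = mlr.score(X_tr, y_tr)
r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)         # adjusted R^2
print(f"MPE={mpe:.3f}  RMSPE={rmspe:.3f}  R2_adj={r2_adj:.3f}")
```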

  18. Variability And Uncertainty Analysis Of Contaminant Transport Model Using Fuzzy Latin Hypercube Sampling Technique

    NASA Astrophysics Data System (ADS)

    Kumar, V.; Nayagum, D.; Thornton, S.; Banwart, S.; Schuhmacher, M.; Lerner, D.

    2006-12-01

    Characterization of uncertainty associated with groundwater quality models is often of critical importance, as for example in cases where environmental models are employed in risk assessment. Insufficient data, inherent variability and estimation errors of environmental model parameters introduce uncertainty into model predictions. However, uncertainty analysis using conventional methods such as standard Monte Carlo sampling (MCS) may not be efficient, or even suitable, for complex, computationally demanding models involving parametric variability and uncertainty of different natures. General MCS, or variants of MCS such as Latin Hypercube Sampling (LHS), treat variability and uncertainty as a single random entity, and the generated samples are treated as crisp values, with vagueness assumed to be randomness. Also, when the models are used as purely predictive tools, uncertainty and variability lead to the need for assessment of the plausible range of model outputs. An improved systematic variability and uncertainty analysis can provide insight into the level of confidence in model estimates, and can aid in assessing how various possible model estimates should be weighed. The present study aims to introduce Fuzzy Latin Hypercube Sampling (FLHS), a hybrid approach for incorporating cognitive and noncognitive uncertainties. Noncognitive uncertainty, such as physical randomness and statistical uncertainty due to limited information, can be described by its own probability density function (PDF), whereas cognitive uncertainty, such as estimation error, can be described by a membership function for its fuzziness and by confidence intervals given by α-cuts. An important property of this theory is its ability to merge the inexact data generated by the LHS approach to increase the quality of information. The FLHS technique ensures that the entire range of each variable is sampled with proper incorporation of uncertainty and variability. A fuzzified statistical summary of the model results will produce indices of sensitivity and uncertainty that relate the effects of heterogeneity and uncertainty of input variables to model predictions. The feasibility of the method is validated by assessing uncertainty propagation of parameter values for estimation of the contamination level of a drinking water supply well due to transport of dissolved phenolics from a contaminated site in the UK.
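
    A minimal sketch of the FLHS idea follows: probabilistic inputs are drawn by Latin Hypercube Sampling through their own distributions, while a fuzzy input is represented by an α-cut interval of a triangular membership function. Parameter names, distributions and ranges are hypothetical, and this is not the authors' full algorithm.

```python
# A compact sketch: Latin Hypercube samples for the probabilistic (noncognitive)
# inputs, combined with an alpha-cut interval for a fuzzy (cognitive) parameter.
import numpy as np
from scipy.stats import qmc, norm

n = 200
lhs = qmc.LatinHypercube(d=2, seed=1).random(n)       # uniform LHS in [0,1]^2

# Noncognitive inputs mapped through their own PDFs
hydraulic_cond = norm(loc=1e-5, scale=2e-6).ppf(lhs[:, 0])   # m/s
porosity = 0.25 + 0.10 * lhs[:, 1]                           # uniform on [0.25, 0.35]

# Cognitive input described by a triangular membership function; an alpha-cut
# gives an interval of plausible values instead of a single crisp sample.
def alpha_cut(a, m, b, alpha):
    """Interval of a triangular fuzzy number (a, m, b) at membership level alpha."""
    return a + alpha * (m - a), b - alpha * (b - m)

lo, hi = alpha_cut(a=0.5, m=1.0, b=2.0, alpha=0.8)           # e.g. retardation factor
retardation = np.random.uniform(lo, hi, size=n)

samples = np.column_stack([hydraulic_cond, porosity, retardation])
print(samples[:3])
```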

  19. Automated retrieval of forest structure variables based on multi-scale texture analysis of VHR satellite imagery

    NASA Astrophysics Data System (ADS)

    Beguet, Benoit; Guyon, Dominique; Boukir, Samia; Chehata, Nesrine

    2014-10-01

    The main goal of this study is to design a method to describe the structure of forest stands from Very High Resolution satellite imagery, relying on some typical variables such as crown diameter, tree height, trunk diameter, tree density and tree spacing. The emphasis is placed on the automatization of the process of identification of the most relevant image features for the forest structure retrieval task, exploiting both spectral and spatial information. Our approach is based on linear regressions between the forest structure variables to be estimated and various spectral and Haralick's texture features. The main drawback of this well-known texture representation is its underlying parameters, which are extremely difficult to set due to the spatial complexity of the forest structure. To tackle this major issue, an automated feature selection process is proposed which is based on statistical modeling, exploring a wide range of parameter values. It provides texture measures of diverse spatial parameters, hence implicitly inducing a multi-scale texture analysis. A new feature selection technique, which we call Random PRiF, is proposed. It relies on random sampling in feature space and carefully addresses the multicollinearity issue in multiple linear regression while ensuring accurate prediction of forest variables. Our automated forest variable estimation scheme was tested on Quickbird and Pléiades panchromatic and multispectral images, acquired at different periods on the maritime pine stands of two sites in South-Western France. It outperforms two well-established variable subset selection techniques. It has been successfully applied to identify the best texture features in modeling the five considered forest structure variables. The RMSE of all predicted forest variables is improved by combining multispectral and panchromatic texture features, with various parameterizations, highlighting the potential of a multi-resolution approach for retrieving forest structure variables from VHR satellite images. Thus an average prediction error of ~1.1 m is expected on crown diameter, ~0.9 m on tree spacing, ~3 m on height and ~0.06 m on diameter at breast height.
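
    The Random PRiF algorithm itself is not reproduced here, but the underlying idea of random sampling in feature space with a collinearity guard can be sketched as below on synthetic data; the subset size, correlation threshold and scoring choices are arbitrary illustrations.

```python
# A rough sketch of random-subset feature selection with a collinearity guard,
# in the spirit of the approach above. Feature matrix and target are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))                  # candidate texture/spectral features
y = X[:, 3] - 0.5 * X[:, 17] + rng.normal(scale=0.3, size=120)   # e.g. crown diameter

def max_pairwise_corr(M):
    c = np.abs(np.corrcoef(M, rowvar=False))
    np.fill_diagonal(c, 0.0)
    return c.max()

best_subset, best_score = None, -np.inf
for _ in range(500):                            # random sampling in feature space
    subset = rng.choice(X.shape[1], size=5, replace=False)
    if max_pairwise_corr(X[:, subset]) > 0.8:   # reject strongly collinear subsets
        continue
    score = cross_val_score(LinearRegression(), X[:, subset], y, cv=5).mean()
    if score > best_score:
        best_subset, best_score = subset, score

print(sorted(best_subset), round(best_score, 3))
```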

  20. Controlling for confounding variables in MS-omics protocol: why modularity matters.

    PubMed

    Smith, Rob; Ventura, Dan; Prince, John T

    2014-09-01

    As the field of bioinformatics research continues to grow, more and more novel techniques are proposed to meet new challenges and to improve upon solutions to long-standing problems. These include data processing techniques and wet lab protocol techniques. Although the literature is consistently thorough in experimental detail and variable-controlling rigor for wet lab protocol techniques, bioinformatics techniques tend to be less described and less controlled. As the validation or rejection of hypotheses rests on the experiment's ability to isolate and measure a variable of interest, we urge the importance of reducing confounding variables in bioinformatics techniques during mass spectrometry experimentation.

  1. A progress report on the ARRA-funded geotechnical site characterization project

    NASA Astrophysics Data System (ADS)

    Martin, A. J.; Yong, A.; Stokoe, K.; Di Matteo, A.; Diehl, J.; Jack, S.

    2011-12-01

    For the past 18 months, the 2009 American Recovery and Reinvestment Act (ARRA) has funded geotechnical site characterizations at 189 seismographic station sites in California and the central U.S. This ongoing effort applies methods involving surface-wave techniques, which include the horizontal-to-vertical spectral ratio (HVSR) technique and one or more of the following: spectral analysis of surface waves (SASW), active and passive multi-channel analysis of surface waves (MASW) and passive array microtremor techniques. From this multi-method approach, shear-wave velocity profiles (VS) and the time-averaged shear-wave velocity of the upper 30 meters (VS30) are estimated for each site. To accommodate the variability in local conditions (e.g., rural and urban soil locales, as well as weathered and competent rock sites), conventional field procedures are often modified ad hoc to fit the unanticipated complexity at each location. For the majority of sites (>80%), fundamental-mode Rayleigh wave dispersion-based techniques are deployed and, where complex geology is encountered, multiple test locations are made. Due to the presence of high velocity layers, about five percent of the locations require multi-mode inversion of Rayleigh wave (MASW-based) data or 3-D array-based inversion of SASW dispersion data, in combination with shallow P-wave seismic refraction and/or HVSR results. Where a strong impedance contrast (i.e., soil over rock) exists at shallow depth (about 10% of sites), dominant higher modes limit the use of Rayleigh wave dispersion techniques. Here, use of the Love wave dispersion technique, along with seismic refraction and/or HVSR data, is required to model the presence of shallow bedrock. At a small percentage of the sites, surface wave techniques are found not suitable for stand-alone deployment and site characterization is limited to the use of the seismic refraction technique. A USGS Open File Report, describing the surface geology, VS profile and the calculated VS30 for each site, will be prepared after the completion of the project in November 2011.
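
    The VS30 estimate itself is a simple time-averaged quantity once a layered VS profile is available; a minimal sketch with a hypothetical four-layer profile:

```python
# Time-averaged VS30 from a layered shear-wave velocity profile
# (layer thicknesses and velocities are hypothetical).
import numpy as np

thickness = np.array([3.0, 7.0, 10.0, 10.0])   # m, layers summing to 30 m
vs = np.array([180.0, 250.0, 400.0, 760.0])    # m/s for each layer

travel_time = np.sum(thickness / vs)           # one-way vertical travel time
vs30 = 30.0 / travel_time
print(f"VS30 = {vs30:.0f} m/s")
```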

  2. Focusing on Environmental Biofilms With Variable-Pressure Scanning Electron Microscopy

    NASA Astrophysics Data System (ADS)

    Joubert, L.; Wolfaardt, G. M.; Du Plessis, K.

    2006-12-01

    Since the term biofilm was coined almost 30 years ago, visualization has formed an integral part of investigations on microbial attachment. Electron microscopic (EM) biofilm studies, however, have been limited by the hydrated extracellular matrix, which loses structural integrity with conventional preparative techniques and under the required high-vacuum conditions, resulting in a loss of information on spatial relationships and distribution of biofilm microbes. Recent advances in EM technology enable the application of Variable Pressure Scanning Electron Microscopy (VP SEM) to biofilms, allowing low vacuum and a hydrated chamber atmosphere during visualization. Environmental biofilm samples can be viewed in situ, unfixed and fully hydrated, with application of gold sputter-coating only, to increase image resolution. As the impact of microbial biofilms can be both hazardous and beneficial to man and his environment, recognition of biofilms as a natural form of microbial existence is needed to fully assess the potential impact of microbial communities on technology. The integration of multiple techniques to elucidate biofilm processes has become imperative for unraveling complex phenotypic adaptations of this microbial lifestyle. We applied VP SEM as an integrative technique, together with traditional and novel analytical techniques, to (1) localize lignocellulosic microbial consortia applied for producing alternative bio-energy sources in the mining wastewater industry, (2) characterize and visualize wetland microbial communities in the treatment of winery wastewater, and (3) determine the impact of recombinant technology on yeast biofilm behavior. Visualization of microbial attachment to a lignocellulose substrate, and degradation of exposed plant tissue, gave insight into fiber degradation and volatile fatty acid production for biological sulphate removal from mining wastewater. Also, the 3D architecture of complex biofilms developing in constructed wetlands was correlated with molecular fingerprints of wetland communities using tRFLP (Terminal Restriction Fragment Length Polymorphism), and gave evidence of temporal and spatial variation in a wetland system, which could potentially be applied as a management tool in wastewater treatment. Visualization of differences in biofilm development by wild and recombinant yeast strains furthermore supported real-time quantitative data of biofilm development by Cryptococcus laurentii and Saccharomyces yeast strains. In all cases VP SEM allowed a more holistic interpretation of biofilm processes than afforded by quantitative empirical data only.

  3. Rapid bespoke laser ablation of variable period grating structures using a digital micromirror device for multi-colored surface images.

    PubMed

    Heath, Daniel J; Mills, Ben; Feinaeugle, Matthias; Eason, Robert W

    2015-06-01

    A digital micromirror device has been used to project variable-period grating patterns at high values of demagnification for direct laser ablation on planar surfaces. Femtosecond laser pulses of ∼1  mJ pulse energy at 800 nm wavelength from a Ti:sapphire laser were used to machine complex patterns with areas of up to ∼1  cm2 on thin films of bismuth telluride by dynamically modifying the grating period as the sample was translated beneath the imaged laser pulses. Individual ∼30 by 30 μm gratings were stitched together to form contiguous structures, which had diffractive effects clearly visible to the naked eye. This technique may have applications in marking, coding, and security features.

  4. Systems engineering and integration: Cost estimation and benefits analysis

    NASA Technical Reports Server (NTRS)

    Dean, ED; Fridge, Ernie; Hamaker, Joe

    1990-01-01

    Space Transportation Avionics hardware and software cost has traditionally been estimated in Phase A and B using cost techniques which predict cost as a function of various cost predictive variables such as weight, lines of code, functions to be performed, quantities of test hardware, quantities of flight hardware, design and development heritage, complexity, etc. The output of such analyses has been life cycle costs, economic benefits and related data. The major objectives of Cost Estimation and Benefits analysis are twofold: (1) to play a role in the evaluation of potential new space transportation avionics technologies, and (2) to benefit from emerging technological innovations. Both aspects of cost estimation and technology are discussed here. The role of cost analysis in the evaluation of potential technologies should be one of offering additional quantitative and qualitative information to aid decision-making. The cost analyses process needs to be fully integrated into the design process in such a way that cost trades, optimizations and sensitivities are understood. Current hardware cost models tend to primarily use weights, functional specifications, quantities, design heritage and complexity as metrics to predict cost. Software models mostly use functionality, volume of code, heritage and complexity as cost descriptive variables. Basic research needs to be initiated to develop metrics more responsive to the trades which are required for future launch vehicle avionics systems. These would include cost estimating capabilities that are sensitive to technological innovations such as improved materials and fabrication processes, computer aided design and manufacturing, self checkout and many others. In addition to basic cost estimating improvements, the process must be sensitive to the fact that no cost estimate can be quoted without also quoting a confidence associated with the estimate. In order to achieve this, better cost risk evaluation techniques are needed as well as improved usage of risk data by decision-makers. More and better ways to display and communicate cost and cost risk to management are required.
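
    As an illustration of the kind of weight-driven cost estimating relationship described above, the sketch below fits a power-law CER to invented historical data; the coefficients and data points are made up and carry no relation to actual avionics costs.

```python
# An illustrative weight-based cost estimating relationship (CER) fit as a power law.
import numpy as np

weights = np.array([120., 250., 400., 650., 900.])        # avionics mass, kg (invented)
costs = np.array([14., 24., 33., 47., 60.])               # historical cost, $M (invented)

# Fit cost = a * weight^b by linear regression in log space
b, log_a = np.polyfit(np.log(weights), np.log(costs), 1)
a = np.exp(log_a)
estimate = a * 500.0 ** b                                  # estimated cost of a 500 kg unit
print(f"cost ~ {a:.2f} * W^{b:.2f};  500 kg -> ${estimate:.1f}M")
```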

  5. Strategies to optimize monitoring schemes of recreational waters from Salta, Argentina: a multivariate approach

    PubMed Central

    Gutiérrez-Cacciabue, Dolores; Teich, Ingrid; Poma, Hugo Ramiro; Cruz, Mercedes Cecilia; Balzarini, Mónica; Rajal, Verónica Beatriz

    2014-01-01

    Several recreational surface waters in Salta, Argentina, were selected to assess their quality. Seventy percent of the measurements exceeded at least one of the limits established by international legislation, making the waters unsuitable for use. To interpret the results of such complex data, multivariate techniques were applied. The Arenales River, due to the variability observed in the data, was divided in two: upstream and downstream, representing low and high pollution sites, respectively; and Cluster Analysis supported that differentiation. Arenales River downstream and Campo Alegre Reservoir were the most different environments, and Vaqueros and La Caldera Rivers were the most similar. Canonical Correlation Analysis allowed exploration of correlations between physicochemical and microbiological variables except in both parts of the Arenales River, and Principal Component Analysis allowed finding relationships among the 9 measured variables in all aquatic environments. Variable loadings showed that Arenales River downstream was impacted by industrial and domestic activities, Arenales River upstream was affected by agricultural activities, Campo Alegre Reservoir was disturbed by anthropogenic and ecological effects, and La Caldera and Vaqueros Rivers were influenced by recreational activities. Discriminant Analysis allowed identification of the subgroup of variables responsible for seasonal and spatial variations. Enterococcus, dissolved oxygen, conductivity, E. coli, pH, and fecal coliforms are sufficient to spatially describe the quality of the aquatic environments. Regarding seasonal variations, dissolved oxygen, conductivity, fecal coliforms, and pH can be used to describe water quality during the dry season, while dissolved oxygen, conductivity, total coliforms, E. coli, and Enterococcus during the wet season. Thus, the use of multivariate techniques allowed optimizing monitoring tasks and minimizing the costs involved. PMID:25190636
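
    The standardize-ordinate-cluster portion of such a multivariate analysis can be sketched briefly; the data file and variable names below are hypothetical placeholders for the nine measured variables.

```python
# A small sketch of the multivariate workflow: standardize, PCA, cluster.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

df = pd.read_csv("salta_water_quality.csv")      # hypothetical monitoring data
vars_ = ["enterococcus", "e_coli", "fecal_coliforms", "total_coliforms",
         "dissolved_oxygen", "conductivity", "ph", "turbidity", "temperature"]

Z = StandardScaler().fit_transform(df[vars_])
scores = PCA(n_components=2).fit_transform(Z)    # ordination of sampling sites
clusters = AgglomerativeClustering(n_clusters=3).fit_predict(Z)

summary = df.assign(cluster=clusters).groupby("cluster")[vars_].mean()
print(summary.round(2))
```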

  6. Residual interference and wind tunnel wall adaption

    NASA Technical Reports Server (NTRS)

    Mokry, Miroslav

    1989-01-01

    Measured flow variables near the test section boundaries, used to guide adjustments of the walls in adaptive wind tunnels, can also be used to quantify the residual interference. Because of a finite number of wall control devices (jacks, plenum compartments), the finite test section length, and the approximation character of adaptation algorithms, the unconfined flow conditions are not expected to be precisely attained even in the fully adapted stage. The procedures for the evaluation of residual wall interference are essentially the same as those used for assessing the correction in conventional, non-adaptive wind tunnels. Depending upon the number of flow variables utilized, one can speak of one- or two-variable methods; in two dimensions also of Schwarz- or Cauchy-type methods. The one-variable methods use the measured static pressure and normal velocity at the test section boundary, but do not require any model representation. This is clearly of an advantage for adaptive wall test section, which are often relatively small with respect to the test model, and for the variety of complex flows commonly encountered in wind tunnel testing. For test sections with flexible walls the normal component of velocity is given by the shape of the wall, adjusted for the displacement effect of its boundary layer. For ventilated test section walls it has to be measured by the Calspan pipes, laser Doppler velocimetry, or other appropriate techniques. The interface discontinuity method, also described, is a genuine residual interference assessment technique. It is specific to adaptive wall wind tunnels, where the computation results for the fictitious flow in the exterior of the test section are provided.

  7. Technique of optimization of minimum temperature driving forces in the heaters of regeneration system of a steam turbine unit

    NASA Astrophysics Data System (ADS)

    Shamarokov, A. S.; Zorin, V. M.; Dai, Fam Kuang

    2016-03-01

    At the current stage of development of nuclear power engineering, high demands are placed on nuclear power plants (NPPs), including on their economy. Under these conditions, improving NPP quality means, in particular, the need to choose reasonable values for the numerous controlled parameters of the technological (heat) scheme. Furthermore, the chosen values should correspond to the economic conditions of NPP operation, which usually lie a considerable time interval after the moment the parameters are chosen. The article presents a technique for optimizing the controlled parameters of the heat circuit of a steam turbine plant for future operation. Its particular feature is that the results are obtained as a function of a complex parameter combining the external economic and operating parameters, which remains relatively stable under a changing economic environment. The article presents the results of optimizing, according to this technique, the minimum temperature driving forces in the surface heaters of the heat regeneration system of a K-1200-6.8/50 steam turbine plant. For optimization, the high- and low-pressure collector-screen heaters developed at the OAO All-Russia Research and Design Institute of Nuclear Power Machine Building, which, in the authors' opinion, have certain advantages over other types of heaters, were chosen. The optimality criterion was the change in annual reduced costs for the NPP compared to the version accepted as the baseline. The influence of independent variables not included in the complex parameter on the solution was analyzed. The optimization problem was solved using the alternating-variable descent method. The obtained values of minimum temperature driving forces can guide the design of new nuclear plants with a heat circuit similar to that considered here.

  8. Laryngeal reinnervation for bilateral vocal fold paralysis.

    PubMed

    Marina, Mat B; Marie, Jean-Paul; Birchall, Martin A

    2011-12-01

    Laryngeal reinnervation for bilateral vocal fold paralysis (BVFP) patients is a promising technique to achieve a good airway while preserving a good quality of voice. On the other hand, the procedure is not simple. This review explores the recent literature on surgical technique and the factors that may contribute to its success. Research and literature in this area are limited due to the variability and complexity of the nerve supply. The posterior cricoarytenoid (PCA) muscle also receives nerve supply from the interarytenoid branch. Transection of this nerve at the point between the interarytenoid and PCA branches may prevent aberrant reinnervation of adductor nerve axons to the PCA muscle. A varying degree of regeneration of injured recurrent laryngeal nerves (RLN) in humans more than 6 months after injury confirms subclinical reinnervation, which may prevent denervation-induced atrophy. Several promising surgical techniques have been developed for bilateral selective reinnervation in BVFP patients, involving reinnervation of the abductor and adductor laryngeal muscles. The surgical technique aims at reinnervating the PCA muscle to trigger abduction during the respiratory cycle and at preserving a good voice by strengthening the adductor muscles, as well as preventing laryngeal synkinesis.

  9. Measuring Conformational Dynamics of Single Biomolecules Using Nanoscale Electronic Devices

    NASA Astrophysics Data System (ADS)

    Akhterov, Maxim V.; Choi, Yongki; Sims, Patrick C.; Olsen, Tivoli J.; Gul, O. Tolga; Corso, Brad L.; Weiss, Gregory A.; Collins, Philip G.

    2014-03-01

    Molecular motion can be a rate-limiting step of enzyme catalysis, but motions are typically too quick to resolve with fluorescent single molecule techniques. Recently, we demonstrated a label-free technique that replaced fluorophores with nano-electronic circuits to monitor protein motions. The solid-state electronic technique used single-walled carbon nanotube (SWNT) transistors to monitor conformational motions of a single molecule of T4 lysozyme while processing its substrate, peptidoglycan. As lysozyme catalyzes the hydrolysis of glycosidic bonds, two protein domains undergo 8 Å hinge bending motion that generates an electronic signal in the SWNT transistor. We describe improvements to the system that have extended our temporal resolution to 2 μs . Electronic recordings at this level of detail directly resolve not just transitions between open and closed conformations but also the durations for those transition events. Statistical analysis of many events determines transition timescales characteristic of enzyme activity and shows a high degree of variability within nominally identical chemical events. The high resolution technique can be readily applied to other complex biomolecules to gain insights into their kinetic parameters and catalytic function.

  10. [Approach to percutaneous nephrolithotomy. Comparison of the procedure in a one-shot versus the sequential with metal dilata].

    PubMed

    Sedano-Portillo, Ismael; Ochoa-León, Gastón; Fuentes-Orozco, Clotilde; Irusteta-Jiménez, Leire; Michel-Espinoza, Luis Rodrigo; Salazar-Parra, Marcela; Cuesta-Márquez, Lizbeth; González-Ojeda, Alejandro

    2017-01-01

    Percutaneous nephrolithotomy is an efficient approach for the treatment of different types of kidney stones. Various access techniques have been described, such as sequential dilatation and the one-shot procedure. The aim was to determine the differences in X-ray exposure time and hemoglobin levels between the two techniques in a controlled clinical trial. Patients older than 18 years with complex or uncomplicated kidney stones, without urinary infection, were included. They were assigned randomly to one of the two techniques. Response variables were determined before and 24 h after the procedures. Fifty-nine patients were included: 30 underwent the one-shot procedure (study group) and 29 sequential dilatation (control group). Baseline characteristics were similar. The study group had a lower postoperative hemoglobin decline than the control group (0.81 vs. 2.03 g/dl, respectively; p < 0.001), a shorter X-ray exposure time (69.6 vs. 100.62 s; p < 0.001) and lower postoperative serum creatinine levels (0.93 ± 0.29 vs. 1.13 ± 0.4 mg/dl; p = 0.039). No significant differences in postoperative morbidity were found. The one-shot technique demonstrated better results compared to sequential dilatation.

  11. D.C. electrical conductivity and conduction mechanism of some azo sulfonyl quinoline ligands and uranyl complexes.

    PubMed

    El-Ghamaz, N A; Diab, M A; El-Sonbati, A Z; Salem, O L

    2011-12-01

    Supramolecular coordination of dioxouranium(VI) heterochelates 5-sulphono-7-(4'-X phenylazo)-8-hydroxyquinoline HL(n) (n=1, X=CH(3); n=2, X=H; n=3, X=Cl; n=4, X=NO(2)) have been prepared and characterized with various physico-chemical techniques. The infrared spectral studies showed a monobasic bidentate behavior with the oxygen and azonitrogen donor system. The temperature dependence of the D.C. electrical conductivity of HL(n) ligands and their uranyl complexes has been studied in the temperature range 305-415 K. The thermal activation energies E(a) for HL(n) compounds were found to be in the range 0.44-0.9 eV depending on the nature of the substituent X. The complexation process decreased E(a) values to the range 0.043-045 eV. The electrical conduction mechanism has been investigated for all samples under investigation. It was found to obey the variable range hopping mechanism (VRH). Copyright © 2011 Elsevier B.V. All rights reserved.
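
    The variable range hopping interpretation is commonly checked by testing whether ln(σ) is linear in T^(-1/4) (the standard 3-D Mott law); a brief sketch on synthetic conductivity data, with illustrative parameter values, is given below. This is the generic test, not necessarily the authors' exact analysis.

```python
# Checking Mott-type variable range hopping: the 3-D law predicts
# ln(sigma) to be linear in T^(-1/4). Conductivity data are synthetic.
import numpy as np

T = np.linspace(305.0, 415.0, 12)                       # K, matching the measured range
T0, sigma0 = 2.0e6, 5.0e2                               # illustrative values
sigma = sigma0 * np.exp(-(T0 / T) ** 0.25)              # synthetic VRH conductivity

x = T ** -0.25
slope, intercept = np.polyfit(x, np.log(sigma), 1)      # slope = -T0^(1/4)
print(f"T0 ~ {slope**4:.3e} K,  sigma0 ~ {np.exp(intercept):.3e}")
```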

  12. Development and Validation of an Extractive Spectrophotometric Method for Miconazole Nitrate Assay in Pharmaceutical Formulations.

    PubMed

    Eticha, Tadele; Kahsay, Getu; Hailu, Teklebrhan; Gebretsadikan, Tesfamichael; Asefa, Fitsum; Gebretsadik, Hailekiros; Thangabalan, Boovizhikannan

    2018-01-01

    A simple extractive spectrophotometric technique has been developed and validated for the determination of miconazole nitrate in pure and pharmaceutical formulations. The method is based on the formation of a chloroform-soluble ion-pair complex between the drug and bromocresol green (BCG) dye in an acidic medium. The complex showed absorption maxima at 422 nm, and the system obeys Beer's law in the concentration range of 1-30 µg/mL with a molar absorptivity of 2.285 × 10⁴ L/mol/cm. The composition of the complex was studied by Job's method of continuous variation, and the results revealed that the mole ratio of drug:BCG is 1:1. Full factorial design was used to optimize the effect of variable factors, and the method was validated based on the ICH guidelines. The method was applied for the determination of miconazole nitrate in real samples.
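
    The calibration-and-assay arithmetic behind such a method reduces to a Beer's-law line; the sketch below uses invented absorbance readings over the stated 1-30 µg/mL range.

```python
# A minimal Beer's-law calibration and assay; absorbance readings are invented.
import numpy as np

conc = np.array([1., 5., 10., 15., 20., 25., 30.])        # µg/mL standards
absorbance = np.array([0.035, 0.17, 0.34, 0.52, 0.69, 0.86, 1.03])  # at 422 nm

slope, intercept = np.polyfit(conc, absorbance, 1)         # A = m*C + c
unknown_abs = 0.44                                         # reading for an unknown sample
unknown_conc = (unknown_abs - intercept) / slope
print(f"sample concentration ~ {unknown_conc:.1f} µg/mL")
```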

  13. SPICE: exploration and analysis of post-cytometric complex multivariate datasets.

    PubMed

    Roederer, Mario; Nozzi, Joshua L; Nason, Martha C

    2011-02-01

    Polychromatic flow cytometry results in complex, multivariate datasets. To date, tools for the aggregate analysis of these datasets across multiple specimens grouped by different categorical variables, such as demographic information, have not been optimized. Often, the exploration of such datasets is accomplished by visualization of patterns with pie charts or bar charts, without easy access to statistical comparisons of measurements that comprise multiple components. Here we report on algorithms and a graphical interface we developed for these purposes. In particular, we discuss thresholding necessary for accurate representation of data in pie charts, the implications for display and comparison of normalized versus unnormalized data, and the effects of averaging when samples with significant background noise are present. Finally, we define a statistic for the nonparametric comparison of complex distributions to test for difference between groups of samples based on multi-component measurements. While originally developed to support the analysis of T cell functional profiles, these techniques are amenable to a broad range of datatypes. Published 2011 Wiley-Liss, Inc.

  14. Switching industrial production processes from complex to defined media: method development and case study using the example of Penicillium chrysogenum.

    PubMed

    Posch, Andreas E; Spadiut, Oliver; Herwig, Christoph

    2012-06-22

    Filamentous fungi are versatile cell factories and widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. As a fact, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands for exhaustive incoming components inspection and quality control, but unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase of the overall penicillin space time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. The newly developed methodology enabled fast characterization of two different industrial Penicillium chrysogenum candidate strains on complex media based on specific complex media component uptake kinetics and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both, method throughput and the generation of scientific process understanding.

  15. Switching industrial production processes from complex to defined media: method development and case study using the example of Penicillium chrysogenum

    PubMed Central

    2012-01-01

    Background Filamentous fungi are versatile cell factories and widely used for the production of antibiotics, organic acids, enzymes and other industrially relevant compounds at large scale. As a fact, industrial production processes employing filamentous fungi are commonly based on complex raw materials. However, considerable lot-to-lot variability of complex media ingredients not only demands for exhaustive incoming components inspection and quality control, but unavoidably affects process stability and performance. Thus, switching bioprocesses from complex to defined media is highly desirable. Results This study presents a strategy for strain characterization of filamentous fungi on partly complex media using redundant mass balancing techniques. Applying the suggested method, interdependencies between specific biomass and side-product formation rates, production of fructooligosaccharides, specific complex media component uptake rates and fungal strains were revealed. A 2-fold increase of the overall penicillin space time yield and a 3-fold increase in the maximum specific penicillin formation rate were reached in defined media compared to complex media. Conclusions The newly developed methodology enabled fast characterization of two different industrial Penicillium chrysogenum candidate strains on complex media based on specific complex media component uptake kinetics and identification of the most promising strain for switching the process from complex to defined conditions. Characterization at different complex/defined media ratios using only a limited number of analytical methods allowed maximizing the overall industrial objectives of increasing both, method throughput and the generation of scientific process understanding. PMID:22727013

  16. Fuzzy logic based robotic controller

    NASA Technical Reports Server (NTRS)

    Attia, F.; Upadhyaya, M.

    1994-01-01

    Existing Proportional-Integral-Derivative (PID) robotic controllers rely on an inverse kinematic model to convert user-specified cartesian trajectory coordinates to joint variables. These joints experience friction, stiction, and gear backlash effects. Due to the lack of proper linearization of these effects, modern control theory based on state space methods cannot provide adequate control for robotic systems. In the presence of loads, the dynamic behavior of robotic systems is complex and nonlinear, especially where mathematical models must be evaluated in real time. Fuzzy Logic Control is a fast-emerging alternative to conventional control systems in situations where it may not be feasible to formulate an analytical model of the complex system. Fuzzy logic techniques track a user-defined trajectory without requiring the host computer to explicitly solve the nonlinear inverse kinematic equations. The goal is to provide a rule-based approach, which is closer to human reasoning. The approach used expresses end-point error, location of manipulator joints, and proximity to obstacles as fuzzy variables. The resulting decisions are based upon linguistic and non-numerical information. This paper presents an alternative to the conventional robot controller that is independent of computationally intensive kinematic equations. Computer simulation results of this approach as obtained from software implementation are also discussed.
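
    A toy sketch of the rule-based idea follows: fuzzify an end-point error with triangular membership functions, apply two linguistic rules, and defuzzify by a weighted average. The membership shapes, rule consequents and units are illustrative only, not the controller described above.

```python
# Toy fuzzy rule evaluation: fuzzify, apply rules, defuzzify.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over (a, b, c)."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def fuzzy_velocity(error_cm):
    small = tri(error_cm, 0.0, 0.0, 5.0)       # degree to which the error is "small"
    large = tri(error_cm, 2.0, 10.0, 10.0)     # degree to which the error is "large"
    # Rule 1: IF error is small THEN velocity is slow (1 cm/s)
    # Rule 2: IF error is large THEN velocity is fast (8 cm/s)
    return (small * 1.0 + large * 8.0) / (small + large + 1e-9)  # weighted-average defuzzification

for e in (0.5, 4.0, 9.0):
    print(e, round(float(fuzzy_velocity(e)), 2))
```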

  17. Application of a data-mining method based on Bayesian networks to lesion-deficit analysis

    NASA Technical Reports Server (NTRS)

    Herskovits, Edward H.; Gerring, Joan P.

    2003-01-01

    Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.
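
    For comparison, the conventional categorical tests mentioned above (chi-square and Fisher exact) are easy to sketch on an invented 2x2 lesion-by-diagnosis table; the Bayesian-network machinery itself is not reproduced here.

```python
# Chi-square and Fisher exact tests of association on an invented 2x2 table
# relating a lesion site to a diagnosis.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

#                 ADHD   no ADHD
table = np.array([[12,     8],    # lesion present
                  [ 5,    25]])   # lesion absent

chi2, p_chi2, dof, _ = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi2 p={p_chi2:.3f}, Fisher p={p_fisher:.3f}, OR={odds_ratio:.2f}")
```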

  18. Sampling and Data Analysis for Environmental Microbiology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murray, Christopher J.

    2001-06-01

    A brief review of the literature indicates the importance of statistical analysis in applied and environmental microbiology. Sampling designs are particularly important for successful studies, and it is highly recommended that researchers review their sampling design before heading to the laboratory or the field. Most statisticians have numerous stories of scientists who approached them after their study was complete only to have to tell them that the data they gathered could not be used to test the hypothesis they wanted to address. Once the data are gathered, a large and complex body of statistical techniques is available for analysis of the data. Those methods include both numerical and graphical techniques for exploratory characterization of the data. Hypothesis testing and analysis of variance (ANOVA) are techniques that can be used to compare the mean and variance of two or more groups of samples. Regression can be used to examine the relationships between sets of variables and is often used to examine the dependence of microbiological populations on microbiological parameters. Multivariate statistics provides several methods that can be used for interpretation of datasets with a large number of variables and to partition samples into similar groups, a task that is very common in taxonomy, but also has applications in other fields of microbiology. Geostatistics and other techniques have been used to examine the spatial distribution of microorganisms. The objectives of this chapter are to provide a brief survey of some of the statistical techniques that can be used for sample design and data analysis of microbiological data in environmental studies, and to provide some examples of their use from the literature.
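
    Two of the workhorse analyses mentioned above (one-way ANOVA across groups and simple regression on an environmental covariate) can be sketched on synthetic data:

```python
# One-way ANOVA across sample groups and a simple regression; all numbers are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
site_a = rng.normal(6.2, 0.4, 10)        # log CFU/g at three hypothetical sites
site_b = rng.normal(6.5, 0.4, 10)
site_c = rng.normal(7.1, 0.4, 10)
f_stat, p_anova = stats.f_oneway(site_a, site_b, site_c)

moisture = rng.uniform(10, 40, 30)                        # % soil moisture
log_cfu = 5.0 + 0.05 * moisture + rng.normal(0, 0.3, 30)  # simulated dependence
reg = stats.linregress(moisture, log_cfu)

print(f"ANOVA p={p_anova:.4f};  slope={reg.slope:.3f}, r={reg.rvalue:.2f}")
```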

  19. A Scalable, Parallel Approach for Multi-Point, High-Fidelity Aerostructural Optimization of Aircraft Configurations

    NASA Astrophysics Data System (ADS)

    Kenway, Gaetan K. W.

    This thesis presents new tools and techniques developed to address the challenging problem of high-fidelity aerostructural optimization with respect to large numbers of design variables. A new mesh-movement scheme is developed that is both computationally efficient and sufficiently robust to accommodate large geometric design changes and aerostructural deformations. A fully coupled Newton-Krylov method is presented that accelerates the convergence of aerostructural systems, provides a 20% performance improvement over the traditional nonlinear block Gauss-Seidel approach, and can handle more flexible structures. A coupled adjoint method is used that efficiently computes derivatives for a gradient-based optimization algorithm. The implementation uses only machine-accurate derivative techniques and is verified to yield fully consistent derivatives by comparing against the complex-step method. The fully coupled, large-scale coupled adjoint solution method is shown to have 30% better performance than the segregated approach. The parallel scalability of the coupled adjoint technique is demonstrated on an Euler Computational Fluid Dynamics (CFD) model with more than 80 million state variables coupled to a detailed structural finite-element model of the wing with more than 1 million degrees of freedom. Multi-point high-fidelity aerostructural optimizations of a long-range wide-body, transonic transport aircraft configuration are performed using the developed techniques. The aerostructural analysis employs Euler CFD with a 2 million cell mesh and a structural finite element model with 300 000 DOF. Two design optimization problems are solved: one where takeoff gross weight is minimized, and another where fuel burn is minimized. Each optimization uses a multi-point formulation with 5 cruise conditions and 2 maneuver conditions. The optimization problems have 476 design variables, and optimal results are obtained within 36 hours of wall time using 435 processors. The TOGW minimization results in a 4.2% reduction in TOGW with a 6.6% fuel burn reduction, while the fuel burn optimization results in an 11.2% fuel burn reduction with no change to the takeoff gross weight.
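
    The complex-step check referred to above exploits a basic property of analytic functions: for real x and a tiny step h, Im[f(x + ih)]/h equals f'(x) to machine precision, with no subtractive cancellation. A minimal illustration on a scalar function follows (the thesis applies the idea to the coupled aerostructural residuals, which are not reproduced here).

```python
# Complex-step derivative versus central finite difference for a scalar analytic function.
import numpy as np

def f(x):
    return np.exp(x) * np.sin(x) / np.sqrt(x)

x0, h = 1.3, 1e-30
d_complex = np.imag(f(x0 + 1j * h)) / h               # complex-step derivative
d_finite = (f(x0 + 1e-6) - f(x0 - 1e-6)) / 2e-6       # central finite difference
print(d_complex, d_finite)                            # agree to ~10 significant digits
```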

  20. Evaluating data-driven causal inference techniques in noisy physical and ecological systems

    NASA Astrophysics Data System (ADS)

    Tennant, C.; Larsen, L.

    2016-12-01

    Causal inference from observational time series challenges traditional approaches for understanding processes and offers exciting opportunities to gain new understanding of complex systems where nonlinearity, delayed forcing, and emergent behavior are common. We present a formal evaluation of the performance of convergent cross-mapping (CCM) and transfer entropy (TE) for data-driven causal inference under real-world conditions. CCM is based on nonlinear state-space reconstruction, and causality is determined by the convergence of prediction skill with an increasing number of observations of the system. TE quantifies the reduction in uncertainty based on the transition probabilities of a pair of time-lagged variables; with TE, causal inference is based on asymmetry in information flow between the variables. Observational data and numerical simulations from a number of classical physical and ecological systems (atmospheric convection in the Lorenz system, species competition in patch-tournaments, and long-term climate change in the Vostok ice core) were used to evaluate the ability of CCM and TE to infer causal relationships as data series become increasingly corrupted by observational (instrument-driven) or process (model- or stochastic-driven) noise. While both techniques show promise for causal inference, TE appears to be applicable to a wider range of systems, especially when the data series are of sufficient length to reliably estimate the transition probabilities of system components. Both techniques also show a clear effect of observational noise on causal inference. For example, CCM exhibits a negative logarithmic decline in prediction skill as the noise level of the system increases. Changes in TE depend strongly on the noise type and on which variable the noise was added to. The ability of CCM and TE to detect driving influences suggests that their application to physical and ecological systems could be transformative for understanding driving mechanisms as Earth systems undergo change.
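
    A compact, histogram-based sketch of a transfer entropy estimate is given below; real analyses require careful binning, embedding and surrogate testing, and the CCM side of the comparison is not reproduced. The coupled test series are synthetic.

```python
# Histogram-based transfer entropy TE(X -> Y) with one-step lags and coarse binning.
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=3):
    """TE from x to y (in bits) using equal-width binning and one-step lags."""
    xd = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yd = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = list(zip(yd[1:], yd[:-1], xd[:-1]))     # (y_t+1, y_t, x_t)
    n = len(triples)
    p_abc = Counter(triples)
    p_bc = Counter((b, c) for _, b, c in triples)
    p_ab = Counter((a, b) for a, b, _ in triples)
    p_b = Counter(b for _, b, _ in triples)
    te = 0.0
    for (a, b, c), n_abc in p_abc.items():
        p1 = n_abc / p_bc[(b, c)]        # p(y_t+1 | y_t, x_t)
        p2 = p_ab[(a, b)] / p_b[b]       # p(y_t+1 | y_t)
        te += (n_abc / n) * np.log2(p1 / p2)
    return te

rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)       # y is driven by lagged x
print(transfer_entropy(x, y), transfer_entropy(y, x)) # asymmetry indicates x -> y
```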

  1. Water quality analysis in rivers with non-parametric probability distributions and fuzzy inference systems: application to the Cauca River, Colombia.

    PubMed

    Ocampo-Duque, William; Osorio, Carolina; Piamba, Christian; Schuhmacher, Marta; Domingo, José L

    2013-02-01

    The integration of water quality monitoring variables is essential in environmental decision making. Nowadays, advanced techniques to manage subjectivity, imprecision, uncertainty, vagueness, and variability are required in such complex evaluation processes. We here propose a probabilistic fuzzy hybrid model to assess river water quality. Fuzzy logic reasoning has been used to compute a water quality integrative index. By applying a Monte Carlo technique, based on non-parametric probability distributions, the randomness of model inputs was estimated. Annual histograms of nine water quality variables were built with monitoring data systematically collected in the Colombian Cauca River, and probability density estimations using the kernel smoothing method were applied to fit the data. Several years were assessed, and river sectors upstream and downstream of the city of Santiago de Cali, a big city with basic wastewater treatment and high industrial activity, were analyzed. The probabilistic fuzzy water quality index was able to explain the reduction in water quality as the river receives a larger number of agricultural, domestic, and industrial effluents. The results of the hybrid model were compared to traditional water quality indexes. The main advantage of the proposed method is that it considers flexible boundaries between the linguistic qualifiers used to define the water status, with the belongingness of water quality to the diverse output fuzzy sets or classes being provided with percentiles and histograms, which allows a better classification of the real water condition. The results of this study show that fuzzy inference systems integrated with stochastic non-parametric techniques may be used as complementary tools in water quality indexing methodologies. Copyright © 2012 Elsevier Ltd. All rights reserved.
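
    The stochastic ingredient of the hybrid model (kernel density estimation of a monitored variable followed by Monte Carlo sampling) can be sketched briefly; the dissolved-oxygen observations below are synthetic and the fuzzy index itself is not reproduced.

```python
# Fit a kernel density to monitored values and draw Monte Carlo samples from it.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(7)
observed_do = np.clip(rng.normal(6.5, 1.2, 150), 0, None)   # dissolved oxygen, mg/L (synthetic)

kde = gaussian_kde(observed_do)              # non-parametric density estimate
mc_samples = kde.resample(10000)[0]          # Monte Carlo inputs for a downstream index
print(np.percentile(mc_samples, [5, 50, 95]).round(2))
```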

  2. Decision insight into stakeholder conflict for ERN.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siirola, John; Tidwell, Vincent Carroll; Benz, Zachary O.

    Participatory modeling has become an important tool in facilitating resource decision making and dispute resolution. Approaches to modeling that are commonly used in this context often do not adequately account for important human factors. Current techniques provide insights into how certain human activities and variables affect resource outcomes; however, they do not directly simulate the complex variables that shape how, why, and under what conditions different human agents behave in ways that affect resources and human interactions related to them. Current approaches also do not adequately reveal how the effects of individual decisions scale up to have systemic level effects in complex resource systems. This lack of integration prevents the development of more robust models to support decision making and dispute resolution processes. Development of integrated tools is further hampered by the fact that collection of primary data for decision-making modeling is costly and time consuming. This project seeks to develop a new approach to resource modeling that incorporates both technical and behavioral modeling techniques into a single decision-making architecture. The modeling platform is enhanced by use of traditional and advanced processes and tools for expedited data capture. Specific objectives of the project are: (1) Develop a proof of concept for a new technical approach to resource modeling that combines the computational techniques of system dynamics and agent based modeling, (2) Develop an iterative, participatory modeling process supported with traditional and advanced data capture techniques that may be utilized to facilitate decision making, dispute resolution, and collaborative learning processes, and (3) Examine potential applications of this technology and process. The development of this decision support architecture included both the engineering of the technology and the development of a participatory method to build and apply the technology. Stakeholder interaction with the model and associated data capture was facilitated through two very different modes of engagement, one a standard interface involving radio buttons, slider bars, graphs and plots, while the other utilized an immersive serious gaming interface. The decision support architecture developed through this project was piloted in the Middle Rio Grande Basin to examine how these tools might be utilized to promote enhanced understanding and decision-making in the context of complex water resource management issues. Potential applications of this architecture and its capacity to lead to enhanced understanding and decision-making were assessed through qualitative interviews with study participants who represented key stakeholders in the basin.

  3. Some elements of a theory of multidimensional complex variables. I - General theory. II - Expansions of analytic functions and application to fluid flows

    NASA Technical Reports Server (NTRS)

    Martin, E. Dale

    1989-01-01

    The paper introduces a new theory of N-dimensional complex variables and analytic functions which, for N greater than 2, is both a direct generalization and a close analog of the theory of ordinary complex variables. The algebra in the present theory is a commutative ring, not a field. Functions of a three-dimensional variable were defined and the definition of the derivative then led to analytic functions.

  4. Hot spot variability and lithography process window investigation by CDU improvement using CDC technique

    NASA Astrophysics Data System (ADS)

    Thamm, Thomas; Geh, Bernd; Djordjevic Kaufmann, Marija; Seltmann, Rolf; Bitensky, Alla; Sczyrba, Martin; Samy, Aravind Narayana

    2018-03-01

    Within the current paper, we concentrate on the well-known CDC technique from Carl Zeiss to improve the CD distribution on the wafer by improving the reticle CDU, and on its impact on hotspots and the litho process window. The CDC technique uses an ultra-short pulse laser technology, which generates micro-level Shade-In-Elements (also known as "pixels") in the mask quartz bulk material. These scatter centers are able to selectively attenuate certain areas of the reticle at higher resolution than other methods and thus improve the CD uniformity. In a first section, we compare the CDC technique with scanner dose correction schemes. It becomes obvious that the CDC technique has unique advantages with respect to spatial resolution and intra-field flexibility over scanner correction schemes; however, due to the scanner flexibility across the wafer, both methods are complementary rather than competing. In a second section we show that a reference-feature-based correction scheme can be used to improve the CDU of a full chip with multiple different features that have different MEEF and dose sensitivities. In detail we discuss the impact of forward-scattered light originated by the CDC pixels on the illumination source and the related proximity signature. We show that the impact on proximity is small compared to the CDU benefit of the CDC technique. In a third section we show to what extent the reduced variability across the reticle results in a better common electrical process window of a whole chip design over the whole reticle field on the wafer. Finally, we discuss electrical verification results comparing masks with purposely made bad CDU that were repaired by the CDC technique versus inherently good "golden" masks on a complex logic device. No yield difference is observed between the repaired bad masks and the masks with good CDU.

  5. Aging reduces complexity of heart rate variability assessed by conditional entropy and symbolic analysis.

    PubMed

    Takahashi, Anielle C M; Porta, Alberto; Melo, Ruth C; Quitério, Robison J; da Silva, Ester; Borghi-Silva, Audrey; Tobaldini, Eleonora; Montano, Nicola; Catai, Aparecida M

    2012-06-01

    Increasing age is associated with a reduction in overall heart rate variability as well as changes in the complexity of physiologic dynamics. The aim of this study was to verify whether the alterations in autonomic modulation of heart rate caused by the aging process could be detected by Shannon entropy (SE), conditional entropy (CE) and symbolic analysis (SA). Complexity analysis was carried out in 44 healthy subjects divided into two groups: an old group (n = 23, 63 ± 3 years) and a young group (n = 21, 23 ± 2 years). SE, CE [complexity index (CI) and normalized CI (NCI)] and SA (0V, 1V, 2LV and 2ULV patterns) were analyzed over short heart period series (200 cardiac beats) derived from ECG recordings during 15 min of rest in a supine position. The sequences characterized by three heart periods with no significant variations (0V) and those with two significant unlike variations (2ULV) reflect changes in sympathetic and vagal modulation, respectively. The unpaired t test (or Mann-Whitney rank sum test when appropriate) was used in the statistical analysis. In the aging process, the distributions of patterns (SE) remain similar to those of young subjects. However, the regularity is significantly different: the patterns are more repetitive in the old group (a decrease of CI and NCI). The amounts of pattern types also differ: 0V is increased and 2LV and 2ULV are reduced in the old group. These differences indicate a marked change of autonomic regulation. CE and SA are feasible techniques to detect alterations in the autonomic control of heart rate in the old group.
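
    The symbolic analysis step can be sketched compactly: quantize the RR series over six levels, form three-beat words, and classify each word as 0V, 1V, 2LV or 2ULV. The simulated RR series and the six-level uniform quantization below are illustrative choices, not the study's exact preprocessing.

```python
# Porta-style symbolic analysis of a heart period (RR) series.
import numpy as np

def symbolic_patterns(rr, levels=6):
    lo, hi = rr.min(), rr.max()
    sym = np.minimum(((rr - lo) / (hi - lo + 1e-12) * levels).astype(int), levels - 1)
    counts = {"0V": 0, "1V": 0, "2LV": 0, "2ULV": 0}
    for i in range(len(sym) - 2):
        d1, d2 = sym[i + 1] - sym[i], sym[i + 2] - sym[i + 1]
        if d1 == 0 and d2 == 0:
            counts["0V"] += 1          # no variations
        elif d1 == 0 or d2 == 0:
            counts["1V"] += 1          # one variation
        elif d1 * d2 > 0:
            counts["2LV"] += 1         # two like variations
        else:
            counts["2ULV"] += 1        # two unlike variations
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}   # percentages

rng = np.random.default_rng(3)
rr = 800 + np.cumsum(rng.normal(0, 8, 200)) * 0.1 + rng.normal(0, 15, 200)  # ms, simulated
print({k: round(v, 1) for k, v in symbolic_patterns(rr).items()})
```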

  6. Parametric Study of Variable Emissivity Radiator Surfaces

    NASA Technical Reports Server (NTRS)

    Grob, Lisa M.; Swanson, Theodore D.

    2000-01-01

    The goal of spacecraft thermal design is to accommodate a high-function satellite in a low weight and real estate package. The extreme environments that the satellite is exposed to during its orbit are handled using passive and active control techniques. Heritage passive heat rejection designs are sized for the hot conditions and augmented for the cold end with heaters. The active heat rejection designs to date are heavy, expensive and/or complex. Incorporating an active radiator into the design that is lighter, cheaper and simpler will allow designers to meet the previously stated goal of spacecraft thermal design. Varying the radiator's surface properties without changing the radiating area (as with a VCHP) or changing the radiator's view (as with traditional louvers) is the objective of the variable emissivity (vary-e) radiator technologies. A parametric evaluation of the thermal performance of three such technologies is documented in this paper. Comparisons of the Micro-Electromechanical Systems (MEMS), electrochromic, and electrophoretic radiators to conventional radiators, both passive and active, are quantified herein. With some noted limitations, the vary-e radiator surfaces provide significant advantages over traditional radiators and a promising alternative design technique for future spacecraft thermal systems.
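
    The leverage offered by a variable emissivity surface follows directly from the Stefan-Boltzmann law: for a fixed panel area and temperature, rejected power scales linearly with effective emissivity. A tiny numerical illustration follows (panel size, temperatures and emissivity states are invented).

```python
# Radiated power of a fixed-area panel at three effective emissivity settings.
SIGMA = 5.670e-8               # Stefan-Boltzmann constant, W m^-2 K^-4
area = 1.5                     # m^2 radiator panel (illustrative)
t_panel, t_sink = 300.0, 4.0   # K panel and deep-space sink temperatures

for eps in (0.10, 0.45, 0.85):   # low / intermediate / high effective emissivity
    q = eps * SIGMA * area * (t_panel**4 - t_sink**4)
    print(f"emissivity {eps:.2f}: Q = {q:6.1f} W")
```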

  7. Locating landmarks on high-dimensional free energy surfaces

    PubMed Central

    Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark E.

    2015-01-01

    Coarse graining of complex systems possessing many degrees of freedom can often be a useful approach for analyzing and understanding key features of these systems in terms of just a few variables. The relevant energy landscape in a coarse-grained description is the free energy surface as a function of the coarse-grained variables, which, despite the dimensional reduction, can still be an object of high dimension. Consequently, navigating and exploring this high-dimensional free energy surface is a nontrivial task. In this paper, we use techniques from multiscale modeling, stochastic optimization, and machine learning to devise a strategy for locating minima and saddle points (termed “landmarks”) on a high-dimensional free energy surface “on the fly” and without requiring prior knowledge of or an explicit form for the surface. In addition, we propose a compact graph representation of the landmarks and connections between them, and we show that the graph nodes can be subsequently analyzed and clustered based on key attributes that elucidate important properties of the system. Finally, we show that knowledge of landmark locations allows for the efficient determination of their relative free energies via enhanced sampling techniques. PMID:25737545

  8. Monitoring a Complex Physical System using a Hybrid Dynamic Bayes Net

    NASA Technical Reports Server (NTRS)

    Lerner, Uri; Moses, Brooks; Scott, Maricia; McIlraith, Sheila; Keller, Daphne

    2005-01-01

    The Reverse Water Gas Shift system (RWGS) is a complex physical system designed to produce oxygen from the carbon dioxide atmosphere on Mars. If sent to Mars, it would operate without human supervision, thus requiring a reliable automated system for monitoring and control. The RWGS presents many challenges typical of real-world systems, including: noisy and biased sensors, nonlinear behavior, effects that are manifested over different time granularities, and unobservability of many important quantities. In this paper we model the RWGS using a hybrid (discrete/continuous) Dynamic Bayesian Network (DBN), where the state at each time slice contains 33 discrete and 184 continuous variables. We show how the system state can be tracked using probabilistic inference over the model. We discuss how to deal with the various challenges presented by the RWGS, providing a suite of techniques that are likely to be useful in a wide range of applications. In particular, we describe a general framework for dealing with nonlinear behavior using numerical integration techniques, extending the successful Unscented Filter. We also show how to use a fixed-point computation to deal with effects that develop at different time scales, specifically rapid changes occurring during slowly changing processes. We test our model using real data collected from the RWGS, demonstrating the feasibility of hybrid DBNs for monitoring complex real-world physical systems.

  9. Tissue dielectric measurement using an interstitial dipole antenna.

    PubMed

    Wang, Peng; Brace, Christopher L

    2012-01-01

    The purpose of this study was to develop a technique to measure the dielectric properties of biological tissues with an interstitial dipole antenna based upon previous efforts for open-ended coaxial probes. The primary motivation for this technique is to facilitate treatment monitoring during microwave tumor ablation by utilizing the heating antenna without additional intervention or interruption of the treatment. The complex permittivity of a tissue volume surrounding the antenna was calculated from reflection coefficients measured after high-temperature microwave heating by using a rational function model of the antenna's input admittance. Three referencing liquids were needed for measurement calibration. The dielectric measurement technique was validated ex vivo in normal and ablated bovine livers. Relative permittivity and effective conductivity were lower in the ablation zone when compared to normal tissue, consistent with previous results. The dipole technique demonstrated a mean 10% difference of permittivity values when compared to open-ended coaxial cable measurements in the frequency range of 0.5-20 GHz. Variability in measured permittivities could be smoothed by fitting to a Cole-Cole dispersion model. Further development of this technique may facilitate real-time monitoring of microwave ablation treatments through the treatment applicator. © 2011 IEEE

  10. Tissue Dielectric Measurement Using an Interstitial Dipole Antenna

    PubMed Central

    Wang, Peng; Brace, Christopher L.

    2012-01-01

    The purpose of this study was to develop a technique to measure the dielectric properties of biological tissues with an interstitial dipole antenna based upon previous efforts for open-ended coaxial probes. The primary motivation for this technique is to facilitate treatment monitoring during microwave tumor ablation by utilizing the heating antenna without additional intervention or interruption of the treatment. The complex permittivity of a tissue volume surrounding the antenna was calculated from reflection coefficients measured after high-temperature microwave heating by using a rational function model of the antenna’s input admittance. Three referencing liquids were needed for measurement calibration. The dielectric measurement technique was validated ex vivo in normal and ablated bovine livers. Relative permittivity and effective conductivity were lower in the ablation zone when compared to normal tissue, consistent with previous results. The dipole technique demonstrated a mean 10% difference of permittivity values when compared to open-ended coaxial cable measurements in the frequency range of 0.5–20 GHz. Variability in measured permittivities could be smoothed by fitting to a Cole–Cole dispersion model. Further development of this technique may facilitate real-time monitoring of microwave ablation treatments through the treatment applicator. PMID:21914566

  11. A Comparison of Traditional, Step-Path, and Geostatistical Techniques in the Stability Analysis of a Large Open Pit

    NASA Astrophysics Data System (ADS)

    Mayer, J. M.; Stead, D.

    2017-04-01

    With the increased drive towards deeper and more complex mine designs, geotechnical engineers are often forced to reconsider traditional deterministic design techniques in favour of probabilistic methods. These alternative techniques allow for the direct quantification of uncertainties within a risk and/or decision analysis framework. However, conventional probabilistic practices typically discretize geological materials into discrete, homogeneous domains, with attributes defined by spatially constant random variables, despite the fact that geological media display inherent heterogeneous spatial characteristics. This research directly simulates this phenomenon using a geostatistical approach, known as sequential Gaussian simulation. The method utilizes the variogram which imposes a degree of controlled spatial heterogeneity on the system. Simulations are constrained using data from the Ok Tedi mine site in Papua New Guinea and designed to randomly vary the geological strength index and uniaxial compressive strength using Monte Carlo techniques. Results suggest that conventional probabilistic techniques have a fundamental limitation compared to geostatistical approaches, as they fail to account for the spatial dependencies inherent to geotechnical datasets. This can result in erroneous model predictions, which are overly conservative when compared to the geostatistical results.
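
    The contrast drawn above, between a spatially constant random variable per domain and a geostatistically simulated property field, can be illustrated with a short sketch. The code below draws one realization of a spatially correlated Gaussian field for a rock strength property using an exponential covariance (the covariance counterpart of a variogram model) and compares it to a single domain-wide Monte Carlo draw. This is a simplified, unconditional simulation via Cholesky factorization, not the conditional sequential Gaussian simulation applied to the Ok Tedi data; the grid, mean, standard deviation and correlation range are hypothetical.

```python
import numpy as np

def correlated_field(coords, mean, std, corr_range, rng):
    """One realization of a Gaussian random field with exponential covariance.

    The covariance C(h) = std**2 * exp(-h / corr_range) plays the role of the
    variogram model; a Cholesky factor turns independent normal draws into a
    spatially correlated realization.
    """
    h = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    cov = std ** 2 * np.exp(-h / corr_range)
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(len(coords)))  # jitter for stability
    return mean + L @ rng.standard_normal(len(coords))

# Hypothetical uniaxial compressive strength (MPa) on a 10 x 10 grid (10 m spacing).
rng = np.random.default_rng(5)
x, y = np.meshgrid(np.arange(0, 100, 10), np.arange(0, 100, 10))
coords = np.column_stack([x.ravel(), y.ravel()]).astype(float)

ucs_geostat = correlated_field(coords, mean=60.0, std=15.0, corr_range=30.0, rng=rng)
ucs_constant = 60.0 + 15.0 * rng.standard_normal()   # one spatially constant draw

print("geostatistical realization: min/max =",
      round(float(ucs_geostat.min()), 1), "/", round(float(ucs_geostat.max()), 1), "MPa")
print("spatially constant draw    :", round(float(ucs_constant), 1), "MPa")
```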

  12. Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques

    NASA Technical Reports Server (NTRS)

    Lee, Hanbong; Malik, Waqar; Jung, Yoon C.

    2016-01-01

    Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forest, and neural network models are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
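
    The model comparison described above can be sketched in a few lines: fit a linear regression and a random forest to departure records and compare root-mean-square errors on held-out data. The features and data below are hypothetical stand-ins, not the Charlotte surface surveillance dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Hypothetical departure records: gate-to-runway distance (km), number of
# aircraft already taxiing, and a 0/1 bad-weather flag.
rng = np.random.default_rng(6)
n = 2000
X = np.column_stack([
    rng.uniform(1.0, 6.0, n),        # taxi distance
    rng.integers(0, 25, n),          # surface traffic count
    rng.integers(0, 2, n),           # weather flag
])
taxi_out_min = (4 + 1.5 * X[:, 0] + 0.6 * X[:, 1] + 3.0 * X[:, 2]
                + rng.normal(0, 2, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, taxi_out_min, test_size=0.25,
                                          random_state=0)
for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200,
                                                            random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = np.sqrt(mean_squared_error(y_te, pred))
    print(f"{name:18s} RMSE = {rmse:.2f} min")
```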

  13. Synthesis, structural studies and reactivity of vanadium complexes with tridentate (OSO) ligand.

    PubMed

    Janas, Zofia; Wiśniewska, Dorota; Jerzykiewicz, Lucjan B; Sobota, Piotr; Drabent, Krzysztof; Szczegot, Krzysztof

    2007-05-28

    The direct reaction between [VCl3(thf)3] or [VO(OEt)3] and 2,2'-thiobis{4-(1,1,3,3-tetramethylbutyl)phenol} (tbopH2) leads to the formation of [V2(μ-tbop-κ3O,S,O)2Cl2(CH3CN)2] (1)·4CH3CN or [V2(μ-OEt)2(O)2(tbop-κ3O,S,O)2] (2), respectively, in high yield. Compounds 1 and 2 were characterized by chemical and physical techniques, including X-ray crystallography and, for 1, variable-temperature magnetic susceptibility studies (J = -29.1 cm-1). Complexes 1 and 2 were supported on MgCl2 and, when activated with aluminium alkyls, were found to effectively polymerize ethene to produce polyethylene with a narrow molecular weight distribution (Mw/Mn approximately 3).

  14. Multistability and complex basins in a nonlinear duopoly with price competition and relative profit delegation.

    PubMed

    Fanti, Luciano; Gori, Luca; Mammana, Cristiana; Michetti, Elisabetta

    2016-09-01

    In this article, we investigate the local and global dynamics of a nonlinear duopoly model with price-setting firms and managerial delegation contracts (relative profits). Our study aims at clarifying the effects of the interaction between the degree of product differentiation and the weight of manager's bonus on long-term outcomes in two different states: managers behave more aggressively with the rival (competition) under product complementarity and less aggressively with the rival (cooperation) under product substitutability. We combine analytical tools and numerical techniques to reach interesting results such as synchronisation and on-off intermittency of the state variables (in the case of homogeneous attitude of managers) and the existence of chaotic attractors, complex basins of attraction, and multistability (in the case of heterogeneous attitudes of managers). We also give policy insights.

  15. Multistability and complex basins in a nonlinear duopoly with price competition and relative profit delegation

    NASA Astrophysics Data System (ADS)

    Fanti, Luciano; Gori, Luca; Mammana, Cristiana; Michetti, Elisabetta

    2016-09-01

    In this article, we investigate the local and global dynamics of a nonlinear duopoly model with price-setting firms and managerial delegation contracts (relative profits). Our study aims at clarifying the effects of the interaction between the degree of product differentiation and the weight of manager's bonus on long-term outcomes in two different states: managers behave more aggressively with the rival (competition) under product complementarity and less aggressively with the rival (cooperation) under product substitutability. We combine analytical tools and numerical techniques to reach interesting results such as synchronisation and on-off intermittency of the state variables (in the case of homogeneous attitude of managers) and the existence of chaotic attractors, complex basins of attraction, and multistability (in the case of heterogeneous attitudes of managers). We also give policy insights.

  16. Determination of thermodynamic values of acidic dissociation constants and complexation constants of profens and their utilization for optimization of separation conditions by Simul 5 Complex.

    PubMed

    Riesová, Martina; Svobodová, Jana; Ušelová, Kateřina; Tošner, Zdeněk; Zusková, Iva; Gaš, Bohuslav

    2014-10-17

    In this paper we determine acid dissociation constants, limiting ionic mobilities, complexation constants with β-cyclodextrin or heptakis(2,3,6-tri-O-methyl)-β-cyclodextrin, and mobilities of the resulting complexes of profens, using capillary zone electrophoresis and affinity capillary electrophoresis. Complexation parameters are determined for both the neutral and fully charged forms of profens and further corrected for actual ionic strength and variable viscosity in order to obtain thermodynamic values of the complexation constants. The accuracy of the obtained complexation parameters is verified by multidimensional nonlinear regression of affinity capillary electrophoretic data, which provides the acid dissociation and complexation parameters within one set of measurements, and by an NMR technique. A good agreement among all discussed methods was obtained. The determined complexation parameters were used as input parameters for simulations of the electrophoretic separation of profens by Simul 5 Complex. An excellent agreement of experimental and simulated results was achieved in terms of positions, shapes, and amplitudes of analyte peaks, confirming the applicability of Simul 5 Complex to complex systems and the accuracy of the obtained physical-chemical constants. Simultaneously, we were able to demonstrate the influence of electromigration dispersion on the separation efficiency, which is not possible using the common theoretical approaches, and to predict the electromigration order reversals of profen peaks. We have shown that the determined acid dissociation and complexation parameters, in combination with the Simul 5 Complex software tool, can be used for the optimization of separation conditions in capillary electrophoresis. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Stock price forecasting for companies listed on Tehran stock exchange using multivariate adaptive regression splines model and semi-parametric splines technique

    NASA Astrophysics Data System (ADS)

    Rounaghi, Mohammad Mahdi; Abbaszadeh, Mohammad Reza; Arashi, Mohammad

    2015-11-01

    One of the most important topics of interest to investors is stock price changes. Investors whose goals are long term are sensitive to stock prices and their changes and react to them. In this regard, we used the multivariate adaptive regression splines (MARS) model and the semi-parametric splines technique for predicting stock prices in this study. The MARS model is an adaptive nonparametric regression method that is well suited to high-dimensional problems with several variables. The semi-parametric splines technique used in this study is based on smoothing splines, a nonparametric regression method. We used 40 variables (30 accounting variables and 10 economic variables) for predicting stock price with both the MARS model and the semi-parametric splines technique. After investigating the models, we selected 4 accounting variables (book value per share, predicted earnings per share, P/E ratio and risk) as influential variables for predicting stock price using the MARS model. After fitting the semi-parametric splines technique, only 4 accounting variables (dividends, net EPS, EPS forecast and P/E ratio) were selected as variables effective in forecasting stock prices.

  18. The Seasonality and Ecology of the Anopheles gambiae complex (Diptera: Culicidae) in Liberia Using Molecular Identification.

    PubMed

    Fahmy, N T; Villinski, J T; Bolay, F; Stoops, C A; Tageldin, R A; Fakoli, L; Okasha, O; Obenauer, P J; Diclaro, J W

    2015-05-01

    Members of the Anopheles gambiae sensu lato (Giles) complex define a group of seven morphologically indistinguishable species, including the principal malaria vectors in Sub-Saharan Africa. Members of this complex differ in behavior and ability to transmit malaria; hence, precise identification of member species is critical to monitoring and evaluating malaria threat levels. We collected mosquitoes from five counties in Liberia every other month from May 2011 until May 2012, using various trapping techniques. A. gambiae complex members were identified using molecular techniques based on differences in the ribosomal DNA (rDNA) region between species and the molecular forms (S and M) of A. gambiae sensu stricto (s.s) specimens. In total, 1,696 A. gambiae mosquitoes were collected and identified. DNA was extracted from legs of each specimen with species identification determined by multiplex polymerase chain reaction using specific primers. The molecular forms (M or S) of A. gambiae s.s were determined by restriction fragment length polymorphism. Bivariate and multivariate logistic regression models identified environmental variables associated with genomic differentiation. Our results indicate widespread occurrence of A. gambiae s.s., the principal malaria vector in the complex, although two Anopheles melas Theobald/A. merus Donitz mosquitoes were detected. We found 72.6, 25.5, and 1.9% of A. gambiae s.s specimens were S, M, and hybrid forms, respectively. Statistical analysis indicates that the S form was more likely to be found in rural areas during rainy seasons and indoor catchments. This information will enhance vector control efforts in Liberia. Published by Oxford University Press on behalf of Entomological Society of America 2015. This work is written by US Government employees and is in the public domain in the US.

  19. The technique of entropy optimization in motor current signature analysis and its application in the fault diagnosis of gear transmission

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoguang; Liang, Lin; Liu, Fei; Xu, Guanghua; Luo, Ailing; Zhang, Sicong

    2012-05-01

    Nowadays, Motor Current Signature Analysis (MCSA) is widely used in the fault diagnosis and condition monitoring of machine tools. However, because the current signal has a low SNR (signal-to-noise ratio), it is difficult to identify the feature frequencies of machine tools from the complex current spectrum, in which the feature frequencies are often dense and overlapping, using traditional signal processing methods such as the FFT. In studying MCSA, it was found that entropy, which is associated with the probability distribution of any random variable, is important for frequency identification and therefore plays an important role in signal processing. To address the difficulty of identifying feature frequencies, an entropy optimization technique based on the motor current signal is presented in this paper for extracting the typical feature frequencies of machine tools while effectively suppressing disturbances. Simulated current signals were generated in MATLAB, and a real current signal was obtained from a complex gearbox of an iron works in Luxembourg. For diagnosis, MCSA is combined with entropy optimization. Both simulated and experimental results show that this technique is efficient, accurate and reliable enough to extract the feature frequencies of the current signal, providing a new strategy for the fault diagnosis and condition monitoring of machine tools.
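
    The role of entropy as a measure of how concentrated or spread out spectral energy is can be illustrated with a short sketch. The code below computes the Shannon spectral entropy and dominant frequency of a simulated motor current signal from its FFT power spectrum; it is a generic illustration of entropy-based frequency analysis, not the specific optimization procedure developed in the paper, and the fault sideband frequencies are hypothetical.

```python
import numpy as np

def spectral_entropy(signal, fs):
    """Shannon entropy (bits) of the normalized FFT power spectrum.

    A spectrum dominated by a few sharp feature frequencies has low entropy;
    broadband noise has high entropy. Also returns the dominant frequency.
    """
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    p = spectrum / np.sum(spectrum)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return entropy, freqs[np.argmax(spectrum)]

# Simulated stator current: 50 Hz supply plus weak fault sidebands and noise.
fs = 2000.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)
current = (np.sin(2 * np.pi * 50 * t)
           + 0.05 * np.sin(2 * np.pi * 42 * t)   # hypothetical fault sideband
           + 0.05 * np.sin(2 * np.pi * 58 * t)
           + 0.2 * rng.standard_normal(t.size))

entropy, f_peak = spectral_entropy(current, fs)
print(f"spectral entropy = {entropy:.2f} bits, dominant frequency = {f_peak:.1f} Hz")
```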

  20. Respiration and heart rate complexity: Effects of age and gender assessed by band-limited transfer entropy

    PubMed Central

    Nemati, Shamim; Edwards, Bradley A.; Lee, Joon; Pittman-Polletta, Benjamin; Butler, James P.; Malhotra, Atul

    2013-01-01

    Aging and disease are accompanied by a reduction of complex variability in the temporal patterns of heart rate. This reduction has been attributed to a breakdown of the underlying regulatory feedback mechanisms that maintain a homeodynamic state. Previous work has established the utility of entropy as an index of disorder for quantification of changes in heart rate complexity. However, questions remain regarding the origin of heart rate complexity and the mechanisms involved in its reduction with aging and disease. In this work we use a newly developed technique based on the concept of band-limited transfer entropy to assess the aging-related changes in the contribution of respiration and blood pressure to the entropy of heart rate at different frequency bands. Noninvasive measurements of heart beat interval, respiration, and systolic blood pressure were recorded from 20 young (21–34 years) and 20 older (68–85 years) healthy adults. Band-limited transfer entropy analysis revealed a reduction in the high-frequency contribution of respiration to heart rate complexity (p < 0.001) with normal aging, particularly in men. These results have the potential for dissecting the relative contributions of respiration and blood pressure-related reflexes to heart rate complexity and their degeneration with normal aging. PMID:23811194

  1. Movement variability and skill level of various throwing techniques.

    PubMed

    Wagner, Herbert; Pfusterschmied, Jürgen; Klous, Miriam; von Duvillard, Serge P; Müller, Erich

    2012-02-01

    In team-handball, skilled athletes are able to adapt to different game situations that may lead to differences in movement variability. Whether movement variability affects the performance of a team-handball throw and is affected by different skill levels or throwing techniques has not yet been demonstrated. Consequently, the aims of the study were to determine differences in performance and movement variability for several throwing techniques, in different phases of the throwing movement, and for different skill levels. Twenty-four team-handball players of different skill levels (n=8) performed 30 throws using various throwing techniques. Upper body kinematics was measured via an 8-camera Vicon motion capture system and movement variability was calculated. Results indicated an increase in movement variability in the distal joint movements during the acceleration phase. In addition, there was a decrease in movement variability in highly skilled and skilled players in the standing throw with run-up, together with an increase in ball release speed, which was highest for this throwing technique. We assert that team-handball players had the ability to compensate for an increase in movement variability in the acceleration phase to throw accurately, and that skilled players were able to control the movement, although movement variability decreased in the standing throw with run-up. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Adjoint-Based Design of Rotors Using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2010-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated by using comparisons with a complex-variable technique, and a number of single- and multipoint optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.

  3. Adjoint-Based Design of Rotors using the Navier-Stokes Equations in a Noninertial Reference Frame

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Lee-Rausch, Elizabeth M.; Jones, William T.

    2009-01-01

    Optimization of rotorcraft flowfields using an adjoint method generally requires a time-dependent implementation of the equations. The current study examines an intermediate approach in which a subset of rotor flowfields are cast as steady problems in a noninertial reference frame. This technique permits the use of an existing steady-state adjoint formulation with minor modifications to perform sensitivity analyses. The formulation is valid for isolated rigid rotors in hover or where the freestream velocity is aligned with the axis of rotation. Discrete consistency of the implementation is demonstrated using comparisons with a complex-variable technique, and a number of single- and multi-point optimizations for the rotorcraft figure of merit function are shown for varying blade collective angles. Design trends are shown to remain consistent as the grid is refined.

  4. Comparison of five modelling techniques to predict the spatial distribution and abundance of seabirds

    USGS Publications Warehouse

    O'Connell, Allan F.; Gardner, Beth; Oppel, Steffen; Meirinho, Ana; Ramírez, Iván; Miller, Peter I.; Louzao, Maite

    2012-01-01

    Knowledge about the spatial distribution of seabirds at sea is important for conservation. During marine conservation planning, logistical constraints preclude seabird surveys covering the complete area of interest and the spatial distribution of seabirds is frequently inferred from predictive statistical models. Increasingly complex models are available to relate the distribution and abundance of pelagic seabirds to environmental variables, but a comparison of their usefulness for delineating protected areas for seabirds is lacking. Here we compare the performance of five modelling techniques (generalised linear models, generalised additive models, Random Forest, boosted regression trees, and maximum entropy) to predict the distribution of Balearic Shearwaters (Puffinus mauretanicus) along the coast of the western Iberian Peninsula. We used ship transect data from 2004 to 2009 and 13 environmental variables to predict occurrence and density, and evaluated the predictive performance of all models using spatially segregated test data. Predicted distribution varied among the different models, although predictive performance varied little. An ensemble prediction that combined results from all five techniques was robust and confirmed the existence of marine important bird areas for Balearic Shearwaters in Portugal and Spain. Our predictions suggested additional areas that would be of high priority for conservation and could be proposed as protected areas. Abundance data were extremely difficult to predict, and none of the five modelling techniques provided a reliable prediction of spatial patterns. We advocate the use of ensemble modelling that combines the output of several methods to predict the spatial distribution of seabirds, and use these predictions to target separate surveys assessing the abundance of seabirds in areas of regular use.

  5. A fast non-contact imaging photoplethysmography method using a tissue-like model

    NASA Astrophysics Data System (ADS)

    McDuff, Daniel J.; Blackford, Ethan B.; Estepp, Justin R.; Nishidate, Izumi

    2018-02-01

    Imaging photoplethysmography (iPPG) allows non-contact, concomitant measurement and visualization of peripheral blood flow using just an RGB camera. Most iPPG methods require a window of temporal data and complex computation, which makes real-time measurement and spatial visualization impossible. We present a fast, "window-less", non-contact imaging photoplethysmography method, based on a tissue-like model of the skin, that allows accurate measurement of heart rate and heart rate variability parameters. The error in heart rate estimates is equivalent to that of state-of-the-art techniques and computation is much faster.
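
    A common final step shared by iPPG pipelines, estimating heart rate from an extracted pulse waveform, can be sketched as follows. The code locates the dominant spectral peak of a detrended signal within a physiologically plausible band; it is a generic illustration, not the tissue-like-model method of this paper, and the synthetic camera signal is hypothetical.

```python
import numpy as np

def estimate_heart_rate(ppg, fs, band=(0.7, 3.0)):
    """Estimate heart rate (beats per minute) from a pulse waveform.

    Finds the dominant frequency of the detrended signal within a plausible
    band (0.7-3.0 Hz, i.e. 42-180 bpm).
    """
    x = np.asarray(ppg, dtype=float)
    x = x - np.mean(x)
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak = freqs[mask][np.argmax(spectrum[mask])]
    return 60.0 * peak

# Synthetic 30 fps camera signal with a 1.2 Hz (72 bpm) pulse plus noise.
fs = 30.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * rng.standard_normal(t.size)
print(round(estimate_heart_rate(ppg, fs), 1), "bpm")
```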

  6. Linear Calibration of Radiographic Mineral Density Using Video-Digitizing Methods

    NASA Technical Reports Server (NTRS)

    Martin, R. Bruce; Papamichos, Thomas; Dannucci, Greg A.

    1990-01-01

    Radiographic images can provide quantitative as well as qualitative information if they are subjected to densitometric analysis. Using modern video-digitizing techniques, such densitometry can be readily accomplished using relatively inexpensive computer systems. However, such analyses are made more difficult by the fact that the density values read from the radiograph have a complex, nonlinear relationship to bone mineral content. This article derives the relationship between these variables from the nature of the intermediate physical processes, and presents a simple mathematical method for obtaining a linear calibration function using a step wedge or other standard.
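
    The general idea of a step-wedge calibration can be sketched briefly. The code below assumes, for illustration only, that the digitized grey value of the wedge decreases exponentially with wedge thickness, so a log transform linearizes the relationship and a least-squares fit yields a calibration that maps grey values to wedge-equivalent thickness. The measurements and the simple exponential model are hypothetical; the article's derivation accounts for the specific film and digitizer response.

```python
import numpy as np

# Hypothetical step-wedge measurements: known aluminium-equivalent thickness
# (mm) of each step and the mean digitized grey value measured on the image.
thickness = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
grey = np.array([182.0, 141.0, 110.0, 86.0, 67.0, 52.0])

# Assume grey ~ a * exp(-b * thickness): a log transform linearizes the
# relationship, so ordinary least squares gives the calibration constants.
coeffs = np.polyfit(thickness, np.log(grey), deg=1)   # slope = -b, intercept = ln(a)
b, ln_a = -coeffs[0], coeffs[1]

def grey_to_equivalent_thickness(g):
    """Map a digitized grey value to wedge-equivalent thickness (mm)."""
    return (ln_a - np.log(g)) / b

# A bone region measured at grey level 95 would correspond to roughly:
print(round(float(grey_to_equivalent_thickness(95.0)), 2), "mm Al-equivalent")
```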

  7. Linear Calibration of Radiographic Mineral Density Using Video-Digitizing Methods

    NASA Technical Reports Server (NTRS)

    Martin, R. Bruce; Papamichos, Thomas; Dannucci, Greg A.

    1990-01-01

    Radiographic images can provide quantitative as well as qualitative information if they are subjected to densitometric analysis. Using modern video-digitizing techniques, such densitometry can be readily accomplished using relatively inexpensive computer systems. However, such analyses are made more difficult by the fact that the density values read from the radiograph have a complex, nonlinear relationship to bone mineral content. This article derives the relationship between these variables from the nature of the intermediate physical processes, and presents a simple mathematical method for obtaining a linear calibration function using a step wedge or other standard.

  8. Effect of hydrogenation on the electrical and optical properties of CdZnTe substrates and HgCdTe epitaxial layers

    NASA Astrophysics Data System (ADS)

    Sitharaman, S.; Raman, R.; Durai, L.; Pal, Surendra; Gautam, Madhukar; Nagpal, Anjana; Kumar, Shiv; Chatterjee, S. N.; Gupta, S. C.

    2005-12-01

    In this paper, we report experimental observations on the effect of plasma hydrogenation in passivating intrinsic point defects, shallow/deep levels and extended defects in low-resistivity undoped CdZnTe crystals. The optical absorption studies show an improvement in transmittance in the below-gap absorption spectrum. Using the variable temperature Hall measurement technique, the shallow defect level with which the penetrating hydrogen forms a complex has been identified. In 'compensated' n-type HgCdTe epitaxial layers, hydrogenation can improve the resistivity by two orders of magnitude.

  9. Self-Calibration Approach for Mixed Signal Circuits in Systems-on-Chip

    NASA Astrophysics Data System (ADS)

    Jung, In-Seok

    MOSFET scaling has served industry very well for a few decades by providing improvements in transistor performance, power, and cost. However, scaled systems-on-chip require high test complexity and cost due to several issues such as limited pin count and the integration of analog and digital mixed circuits. Self-calibration is therefore an excellent and promising method to improve yield and to reduce manufacturing cost by simplifying test complexity, because process variation effects can be addressed by means of a self-calibration technique. Since prior published calibration techniques were developed for specific targeted applications, they are not easily reused for other applications. To solve the aforementioned issues, this dissertation proposes several novel self-calibration design techniques for mixed-signal circuits, applied to an analog-to-digital converter (ADC) to reduce mismatch error and improve performance. ADCs are essential components in SOCs, and the proposed self-calibration approach also compensates for process variations. The proposed approach targets the successive approximation (SA) ADC. First, the offset error of the comparator in the SA-ADC is reduced by enabling a capacitor array at the input nodes for better matching. In addition, auxiliary capacitors for each capacitor of the DAC in the SA-ADC are controlled by a synthesized digital controller to minimize the mismatch error of the DAC. Since the proposed technique is applied during foreground operation, the power overhead in the SA-ADC case is minimal because the calibration circuit is deactivated during normal operation. Another benefit of the proposed technique is that the offset voltage of the comparator is continuously adjusted at every one-bit decision step, because not only the inherent offset voltage of the comparator but also the mismatch of the DAC are compensated simultaneously. The synthesized digital calibration control circuit operates in foreground mode and has been highly optimized for low power and better performance with a simplified structure. In addition, to increase the sampling clock frequency of the proposed self-calibration approach, a novel variable clock period method is proposed. To achieve high-speed SAR operation, a variable clock time technique is used to reduce not only peak current but also die area. The technique removes conversion time waste and readily extends the SAR operation speed. To verify and demonstrate the proposed techniques, a prototype charge-redistribution SA-ADC with the proposed self-calibration is implemented in a 130 nm standard CMOS process. The prototype circuit's silicon area is 0.0715 mm2 and it consumes 4.62 mW with a 1.2 V power supply.

  10. On the primary variable switching technique for simulating unsaturated-saturated flows

    NASA Astrophysics Data System (ADS)

    Diersch, H.-J. G.; Perrochet, P.

    Primary variable switching appears as a promising numerical technique for variably saturated flows. While the standard pressure-based form of the Richards equation can suffer from poor mass balance accuracy, the mixed form with its improved conservative properties can possess convergence difficulties for dry initial conditions. On the other hand, variable switching can overcome most of the stated numerical problems. The paper deals with variable switching for finite elements in two and three dimensions. The technique is incorporated in both an adaptive error-controlled predictor-corrector one-step Newton (PCOSN) iteration strategy and a target-based full Newton (TBFN) iteration scheme. Both schemes provide different behaviors with respect to accuracy and solution effort. Additionally, a simplified upstream weighting technique is used. Compared with conventional approaches the primary variable switching technique represents a fast and robust strategy for unsaturated problems with dry initial conditions. The impact of the primary variable switching technique is studied over a wide range of mostly 2D and partly difficult-to-solve problems (infiltration, drainage, perched water table, capillary barrier), where comparable results are available. It is shown that the TBFN iteration is an effective but error-prone procedure. TBFN sacrifices temporal accuracy in favor of accelerated convergence if aggressive time step sizes are chosen.

  11. Predicting the trajectories and intensities of hurricanes by applying machine learning techniques

    NASA Astrophysics Data System (ADS)

    Sujithkumar, A.; King, A. W.; Kovilakam, M.; Graves, D.

    2017-12-01

    The world has witnessed an escalation of devastating hurricanes and tropical cyclones over the last three decades. Hurricanes and tropical cyclones of very high magnitude will likely be even more frequent in a warmer world. Thus, precise forecasting of the track and intensity of hurricanes and tropical cyclones remains one of the meteorological community's top priorities. However, comprehensive prediction of hurricanes/tropical cyclones is a difficult problem due to the many complexities of the underlying physical processes, with many variables and complex relations. The availability of global meteorological and hurricane/tropical storm climatological data opens new opportunities for data-driven approaches to hurricane/tropical cyclone modeling. Here we report initial results from two data-driven machine learning techniques, specifically random forest (RF) and Bayesian learning (BL), to predict the trajectory and intensity of hurricanes and tropical cyclones. We used International Best Track Archive for Climate Stewardship (IBTrACS) data along with weather data from NOAA in a 50 km buffer surrounding each of the reported hurricane and tropical cyclone tracks to train the model. Initial results reveal that both RF and BL are skillful in predicting storm intensity. We will also present results for the more complicated trajectory prediction.

  12. Contributions of 3D Cell Cultures for Cancer Research.

    PubMed

    Ravi, Maddaly; Ramesh, Aarthi; Pattabhi, Aishwarya

    2017-10-01

    Cancer cell lines have contributed immensely in understanding the complex physiology of cancers. They are excellent material for studies as they offer homogenous samples without individual variations and can be utilised with ease and flexibility. Also, the number of assays and end-points one can study is almost limitless; with the advantage of improvising, modifying or altering several variables and methods. Literally, a new dimension to cancer research has been achieved by the advent of 3Dimensional (3D) cell culture techniques. This approach increased many folds the ways in which cancer cell lines can be utilised for understanding complex cancer biology. 3D cell culture techniques are now the preferred way of using cancer cell lines to bridge the gap between the 'absolute in vitro' and 'true in vivo'. The aspects of cancer biology that 3D cell culture systems have contributed include morphology, microenvironment, gene and protein expression, invasion/migration/metastasis, angiogenesis, tumour metabolism and drug discovery, testing chemotherapeutic agents, adaptive responses and cancer stem cells. We present here, a comprehensive review on the applications of 3D cell culture systems for these aspects of cancers. J. Cell. Physiol. 232: 2679-2697, 2017. © 2016 Wiley Periodicals, Inc.

  13. Multilayer Joint Gait-Pose Manifolds for Human Gait Motion Modeling.

    PubMed

    Ding, Meng; Fan, Guolian

    2015-11-01

    We present new multilayer joint gait-pose manifolds (multilayer JGPMs) for complex human gait motion modeling, where three latent variables are defined jointly in a low-dimensional manifold to represent a variety of body configurations. Specifically, the pose variable (along the pose manifold) denotes a specific stage in a walking cycle; the gait variable (along the gait manifold) represents different walking styles; and the linear scale variable characterizes the maximum stride in a walking cycle. We discuss two kinds of topological priors for coupling the pose and gait manifolds, i.e., cylindrical and toroidal, to examine their effectiveness and suitability for motion modeling. We resort to a topologically-constrained Gaussian process (GP) latent variable model to learn the multilayer JGPMs where two new techniques are introduced to facilitate model learning under limited training data. First is training data diversification that creates a set of simulated motion data with different strides. Second is the topology-aware local learning to speed up model learning by taking advantage of the local topological structure. The experimental results on the Carnegie Mellon University motion capture data demonstrate the advantages of our proposed multilayer models over several existing GP-based motion models in terms of the overall performance of human gait motion modeling.

  14. A method for analyzing temporal patterns of variability of a time series from Poincare plots.

    PubMed

    Fishman, Mikkel; Jacono, Frank J; Park, Soojin; Jamasebi, Reza; Thungtong, Anurak; Loparo, Kenneth A; Dick, Thomas E

    2012-07-01

    The Poincaré plot is a popular two-dimensional, time series analysis tool because of its intuitive display of dynamic system behavior. Poincaré plots have been used to visualize heart rate and respiratory pattern variabilities. However, conventional quantitative analysis relies primarily on statistical measurements of the cumulative distribution of points, making it difficult to interpret irregular or complex plots. Moreover, the plots are constructed to reflect highly correlated regions of the time series, reducing the amount of nonlinear information that is presented and thereby hiding potentially relevant features. We propose temporal Poincaré variability (TPV), a novel analysis methodology that uses standard techniques to quantify the temporal distribution of points and to detect nonlinear sources responsible for physiological variability. In addition, the analysis is applied across multiple time delays, yielding a richer insight into system dynamics than the traditional circle return plot. The method is applied to data sets of R-R intervals and to synthetic point process data extracted from the Lorenz time series. The results demonstrate that TPV complements the traditional analysis and can be applied more generally, including Poincaré plots with multiple clusters, and more consistently than the conventional measures and can address questions regarding potential structure underlying the variability of a data set.
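
    A minimal sketch of the conventional Poincaré descriptors may help fix ideas: for an RR-interval series, plotting rr[n + lag] against rr[n] and measuring dispersion perpendicular to and along the line of identity gives SD1 and SD2. Evaluating them at several lags echoes the multiple-time-delay idea described above, but the TPV statistics themselves are defined in the paper and are not reproduced here; the RR series below is synthetic.

```python
import numpy as np

def poincare_descriptors(rr, lag=1):
    """Conventional Poincaré plot descriptors for an interval series.

    Plots rr[n + lag] against rr[n]; SD1 measures dispersion perpendicular to
    the line of identity (short-term variability), SD2 along it (long-term).
    """
    x, y = np.asarray(rr[:-lag], float), np.asarray(rr[lag:], float)
    d1 = (y - x) / np.sqrt(2)          # perpendicular to line of identity
    d2 = (y + x) / np.sqrt(2)          # along line of identity
    return np.std(d1, ddof=1), np.std(d2, ddof=1)

# Examine several lags, as in generalized (lagged) Poincaré analysis.
rng = np.random.default_rng(3)
rr = 800 + np.cumsum(rng.normal(0, 4, 500))    # synthetic RR series (ms)
for lag in (1, 2, 5, 10):
    sd1, sd2 = poincare_descriptors(rr, lag)
    print(f"lag={lag:2d}  SD1={sd1:6.2f} ms  SD2={sd2:6.2f} ms")
```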

  15. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible for modelling the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.

  16. Inter-individual variability and pattern recognition of surface electromyography in front crawl swimming.

    PubMed

    Martens, Jonas; Daly, Daniel; Deschamps, Kevin; Staes, Filip; Fernandes, Ricardo J

    2016-12-01

    Variability of electromyographic (EMG) recordings is a complex phenomenon rarely examined in swimming. Our purposes were to investigate inter-individual variability in muscle activation patterns during front crawl swimming and assess if there were clusters of sub patterns present. Bilateral muscle activity of rectus abdominis (RA) and deltoideus medialis (DM) was recorded using wireless surface EMG in 15 adult male competitive swimmers. The amplitude of the median EMG trial of six upper arm movement cycles was used for the inter-individual variability assessment, quantified with the coefficient of variation, coefficient of quartile variation, the variance ratio and mean deviation. Key features were selected based on qualitative and quantitative classification strategies to enter in a k-means cluster analysis to examine the presence of strong sub patterns. Such strong sub patterns were found when clustering in two, three and four clusters. Inter-individual variability in a group of highly skilled swimmers was higher compared to other cyclic movements, which is in contrast to what has been reported in the previous 50 years of EMG research in swimming. This leads to the conclusion that coaches should be careful in using overall reference EMG information to enhance the individual swimming technique of their athletes. Copyright © 2016 Elsevier Ltd. All rights reserved.
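
    Two of the quantitative steps named above, a coefficient of variation across individuals and a k-means search for sub-patterns, can be sketched briefly. The EMG envelope curves below are synthetic stand-ins shaped as two hypothetical groups; the muscles, trial structure and feature selection of the study are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical data: EMG envelope of one muscle for 15 swimmers, each
# time-normalized to 100 points of the stroke cycle (rows = swimmers).
rng = np.random.default_rng(4)
cycle = np.linspace(0, 2 * np.pi, 100)
group_a = np.sin(cycle)[None, :] + 0.15 * rng.standard_normal((8, 100))
group_b = np.sin(cycle + 1.0)[None, :] + 0.15 * rng.standard_normal((7, 100))
envelopes = np.abs(np.vstack([group_a, group_b]))

# Inter-individual variability: coefficient of variation at each cycle point.
cv = envelopes.std(axis=0, ddof=1) / envelopes.mean(axis=0)
print("mean coefficient of variation:", round(float(cv.mean()), 3))

# Search for sub-patterns by clustering the individual envelope curves.
for k in (2, 3, 4):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(envelopes)
    print(f"k={k}: cluster sizes {np.bincount(labels).tolist()}")
```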

  17. Comparison of the resulting error in data fusion techniques when used with remote sensing, earth observation, and in-situ data sets for water quality applications

    NASA Astrophysics Data System (ADS)

    Ziemba, Alexander; El Serafy, Ghada

    2016-04-01

    Ecological modeling and water quality investigations are complex processes which can require a high level of parameterization and a multitude of varying data sets in order to properly execute the model in question. Since models are generally complex, their calibration and validation can benefit from the application of data and information fusion techniques. The data applied to ecological models comes from a wide range of sources such as remote sensing, earth observation, and in-situ measurements, resulting in a high variability in the temporal and spatial resolution of the various data sets available to water quality investigators. It is proposed that effective fusion into a comprehensive singular set will provide a more complete and robust data resource with which models can be calibrated, validated, and driven. Each individual product carries its own error, resulting from the measurement method and the pre-processing techniques applied. The uncertainty and error are further compounded when the data being fused are of varying temporal and spatial resolution. In order to have a reliable fusion-based model and data set, the uncertainty of the results and the confidence interval of the data being reported must be effectively communicated to those who would utilize the data product or model outputs in a decision-making process [2]. Here we review an array of data fusion techniques applied to various remote sensing, earth observation, and in-situ data sets whose domains vary in spatial and temporal resolution. The data sets examined are combined in a manner such that the various classifications of data (complementary, redundant, and cooperative) are all assessed to determine the classification's impact on the propagation and compounding of error. In order to assess the error of the fused data products, a comparison is conducted with data sets containing a known confidence interval and quality rating. We conclude with a quantification of the performance of the data fusion techniques and a recommendation on the feasibility of applying the fused products in operational forecast systems and modeling scenarios. The error bands and confidence intervals derived can be used to clarify the error and confidence of water quality variables produced by prediction and forecasting models. References [1] F. Castanedo, "A Review of Data Fusion Techniques", The Scientific World Journal, vol. 2013, pp. 1-19, 2013. [2] T. Keenan, M. Carbone, M. Reichstein and A. Richardson, "The model-data fusion pitfall: assuming certainty in an uncertain world", Oecologia, vol. 167, no. 3, pp. 587-597, 2011.

  18. Syntactic Complexity, Lexical Variation and Accuracy as a Function of Task Complexity and Proficiency Level in L2 Writing and Speaking

    ERIC Educational Resources Information Center

    Kuiken, Folkert; Vedder, Ineke

    2012-01-01

    The research project reported in this chapter consists of three studies in which syntactic complexity, lexical variation and fluency appear as dependent variables. The independent variables are task complexity and proficiency level, as the three studies investigate the effect of task complexity on the written and oral performance of L2 learners of…

  19. Structural reanalysis via a mixed method [using Taylor series for accuracy improvement]

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1975-01-01

    A study is made of the approximate structural reanalysis technique based on the use of Taylor series expansion of response variables in terms of design variables in conjunction with the mixed method. In addition, comparisons are made with two reanalysis techniques based on the displacement method. These techniques are the Taylor series expansion and the modified reduced basis. It is shown that the use of the reciprocals of the sizing variables as design variables (which is the natural choice in the mixed method) can result in a substantial improvement in the accuracy of the reanalysis technique. Numerical results are presented for a space truss structure.
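
    The benefit of expanding in reciprocal sizing variables can be seen in a one-variable toy problem, shown below, rather than the space truss of the paper: the tip displacement of an axial bar is exactly linear in 1/A, so a first-order Taylor expansion in the reciprocal variable reproduces the exact reanalysis, while the expansion in A itself does not. The loads, dimensions and material values are illustrative only.

```python
import numpy as np

# Tip displacement of an axial bar under load P: u(A) = P * L / (E * A).
P, L, E = 10e3, 2.0, 70e9          # load (N), length (m), modulus (Pa)
u = lambda A: P * L / (E * A)

A0 = 4e-4                           # baseline cross-sectional area (m^2)
A1 = 3e-4                           # modified design to be reanalyzed

# First-order Taylor expansion in the sizing variable A:
#   u(A1) ~= u(A0) + du/dA|_{A0} * (A1 - A0)
du_dA = -P * L / (E * A0 ** 2)
u_taylor_A = u(A0) + du_dA * (A1 - A0)

# First-order Taylor expansion in the reciprocal variable x = 1/A:
#   u = (P * L / E) * x is linear in x, so the expansion is exact here.
x0, x1 = 1 / A0, 1 / A1
du_dx = P * L / E
u_taylor_x = u(A0) + du_dx * (x1 - x0)

print("exact        :", u(A1))
print("Taylor in A  :", u_taylor_A)
print("Taylor in 1/A:", u_taylor_x)
```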

  20. Effects of visual feedback-induced variability on motor learning of handrim wheelchair propulsion.

    PubMed

    Leving, Marika T; Vegter, Riemer J K; Hartog, Johanneke; Lamoth, Claudine J C; de Groot, Sonja; van der Woude, Lucas H V

    2015-01-01

    It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. 17 Participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare both groups the pre- and post-test were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables with their respective coefficient of variation were calculated to evaluate the amount of intra-individual variability. The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not always appear simultaneously during the motor learning process. Their relationship is most likely modified by other factors such as the amount of the intra-individual variability.

  1. Effects of Visual Feedback-Induced Variability on Motor Learning of Handrim Wheelchair Propulsion

    PubMed Central

    Leving, Marika T.; Vegter, Riemer J. K.; Hartog, Johanneke; Lamoth, Claudine J. C.; de Groot, Sonja; van der Woude, Lucas H. V.

    2015-01-01

    Background It has been suggested that a higher intra-individual variability benefits the motor learning of wheelchair propulsion. The present study evaluated whether feedback-induced variability on wheelchair propulsion technique variables would also enhance the motor learning process. Learning was operationalized as an improvement in mechanical efficiency and propulsion technique, which are thought to be closely related during the learning process. Methods 17 Participants received visual feedback-based practice (feedback group) and 15 participants received regular practice (natural learning group). Both groups received equal practice dose of 80 min, over 3 weeks, at 0.24 W/kg at a treadmill speed of 1.11 m/s. To compare both groups the pre- and post-test were performed without feedback. The feedback group received real-time visual feedback on seven propulsion variables with instruction to manipulate the presented variable to achieve the highest possible variability (1st 4-min block) and optimize it in the prescribed direction (2nd 4-min block). To increase motor exploration the participants were unaware of the exact variable they received feedback on. Energy consumption and the propulsion technique variables with their respective coefficient of variation were calculated to evaluate the amount of intra-individual variability. Results The feedback group, which practiced with higher intra-individual variability, improved the propulsion technique between pre- and post-test to the same extent as the natural learning group. Mechanical efficiency improved between pre- and post-test in the natural learning group but remained unchanged in the feedback group. Conclusion These results suggest that feedback-induced variability inhibited the improvement in mechanical efficiency. Moreover, since both groups improved propulsion technique but only the natural learning group improved mechanical efficiency, it can be concluded that the improvement in mechanical efficiency and propulsion technique do not always appear simultaneously during the motor learning process. Their relationship is most likely modified by other factors such as the amount of the intra-individual variability. PMID:25992626

  2. VTVH-MCD and DFT studies of thiolate bonding to [FeNO]7/[FeO2]8 complexes of isopenicillin N synthase: substrate determination of oxidase versus oxygenase activity in nonheme Fe enzymes.

    PubMed

    Brown, Christina D; Neidig, Michael L; Neibergall, Matthew B; Lipscomb, John D; Solomon, Edward I

    2007-06-13

    Isopenicillin N synthase (IPNS) is a unique mononuclear nonheme Fe enzyme that catalyzes the four-electron oxidative double ring closure of its substrate ACV. A combination of spectroscopic techniques including EPR, absorbance, circular dichroism (CD), magnetic CD, and variable-temperature, variable-field MCD (VTVH-MCD) was used to evaluate the geometric and electronic structure of the [FeNO]7 complex of IPNS coordinated with the ACV thiolate ligand. Density functional theory (DFT) calculations correlated to the spectroscopic data were used to generate an experimentally calibrated bonding description of the Fe-IPNS-ACV-NO complex. New spectroscopic features introduced by the binding of the ACV thiolate at 13 100 and 19 800 cm-1 are assigned as the NO π*(ip) → Fe dx2-y2 and S π → Fe dx2-y2 charge transfer (CT) transitions, respectively. Configuration interaction mixes S CT character into the NO π*(ip) → Fe dx2-y2 CT transition, which is observed experimentally in the VTVH-MCD data for this transition. Calculations on the hypothetical [FeO2]8 complex of Fe-IPNS-ACV reveal that the configuration interaction present in the [FeNO]7 complex results in an unoccupied frontier molecular orbital (FMO) with correct orientation and distal O character for H-atom abstraction from the ACV substrate. The energetics of NO/O2 binding to Fe-IPNS-ACV were evaluated and demonstrate that charge donation from the ACV thiolate ligand renders the formation of the FeIII-superoxide complex energetically favorable, driving the reaction at the Fe center. This single-center reaction allows IPNS to avoid the O2-bridged binding generally invoked in other nonheme Fe enzymes that leads to oxygen insertion (i.e., oxygenase function) and determines the oxidase activity of IPNS.

  3. Computation of Sensitivity Derivatives of Navier-Stokes Equations using Complex Variables

    NASA Technical Reports Server (NTRS)

    Vatsa, Veer N.

    2004-01-01

    Accurate computation of sensitivity derivatives is becoming an important item in Computational Fluid Dynamics (CFD) because of recent emphasis on using nonlinear CFD methods in aerodynamic design, optimization, stability and control related problems. Several techniques are available to compute gradients or sensitivity derivatives of desired flow quantities or cost functions with respect to selected independent (design) variables. Perhaps the most common and oldest method is to use straightforward finite differences for the evaluation of sensitivity derivatives. Although very simple, this method is prone to errors associated with the choice of step sizes and can be cumbersome for geometric variables. The cost per design variable for computing sensitivity derivatives with central differencing is at least equal to the cost of three full analyses, but is usually much larger in practice due to the difficulty in choosing step sizes. Another approach gaining popularity is the use of Automatic Differentiation software (such as ADIFOR) to process the source code, which in turn can be used to evaluate the sensitivity derivatives of preselected functions with respect to chosen design variables. In principle, this approach is also very straightforward and quite promising. The main drawback is the large memory requirement, because memory use increases linearly with the number of design variables. ADIFOR software can also be cumbersome for large CFD codes and has not yet reached a full maturity level for production codes, especially in parallel computing environments.
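
    The complex-variable (complex-step) idea referred to in the title avoids the subtractive cancellation that plagues finite differences: perturbing an input by a tiny imaginary step ih and taking the imaginary part of the output gives a first-derivative estimate whose step size can be made extremely small. The sketch below applies it to a smooth scalar function standing in for a solver output; applying the idea to a full Navier-Stokes code requires the code to be built with complex arithmetic, as discussed in the paper's context.

```python
import numpy as np

def complex_step_derivative(f, x, h=1e-30):
    """df/dx via the complex-step method: Im[f(x + ih)] / h.

    No subtraction of nearly equal quantities occurs, so h can be taken
    extremely small and the estimate is accurate to machine precision.
    """
    return np.imag(f(complex(x, h))) / h

def central_difference(f, x, h):
    """Conventional second-order central difference, sensitive to h."""
    return (f(x + h) - f(x - h)) / (2 * h)

# A smooth "cost function" standing in for a flow-solver output (illustrative).
f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
x0 = 1.5

print("complex step :", complex_step_derivative(f, x0))
print("central diff :", central_difference(f, x0, 1e-5))
```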

  4. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and the analysis of climate change and its effects. This study aims to evaluate the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested considering two different sets of observation densities and different rainfall amounts. We used rainfall data recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus, in the Eastern Mediterranean. Regression analyses using geographical co-predictors and neighbor-based interpolation techniques were evaluated both in isolation and in combination. The regression methods tested were linear multiple regression (LMR) and geographically weighted regression (GWR), both with a step-wise selection of co-variables; the neighbor-based methods were inverse distance weighting (IDW), kriging, and 3D thin plate splines (TPS). The relative rank of the different techniques changes with station density and rainfall amount. Our results indicate that TPS performs well for low station density and large-scale events, and also when coupled with regression models, but poorly for high station density; the opposite is observed for IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that the use of step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
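
    As a point of reference for the neighbor-based methods compared above, the sketch below shows a plain inverse distance weighting interpolator; the station coordinates and rainfall values are hypothetical and the implementation is generic, not the authors' workflow.

```python
import numpy as np

def idw(xy_obs, z_obs, xy_grid, power=2.0, eps=1e-12):
    """Inverse distance weighting: weights fall off as 1 / d**power."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power      # eps guards against zero distance
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_obs

# Hypothetical daily rainfall (mm) at 5 stations, interpolated to 3 grid cells.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 5.0]])
rain_mm = np.array([12.0, 3.0, 25.0, 8.0, 15.0])
grid = np.array([[2.0, 2.0], [5.0, 8.0], [9.0, 1.0]])
print(idw(stations, rain_mm, grid))
```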

  5. Constructing Compact Takagi-Sugeno Rule Systems: Identification of Complex Interactions in Epidemiological Data

    PubMed Central

    Zhou, Shang-Ming; Lyons, Ronan A.; Brophy, Sinead; Gravenor, Mike B.

    2012-01-01

    The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data. PMID:23272108

  6. Constructing compact Takagi-Sugeno rule systems: identification of complex interactions in epidemiological data.

    PubMed

    Zhou, Shang-Ming; Lyons, Ronan A; Brophy, Sinead; Gravenor, Mike B

    2012-01-01

    The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data.
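
    To make the rule structure concrete, the sketch below evaluates a tiny first-order Takagi-Sugeno system with Gaussian antecedents and linear consequents; the rule parameters are invented for illustration, and the R, L, and ω rule-selection statistics proposed in the paper are not implemented here.

```python
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def ts_predict(x, rules):
    """First-order Takagi-Sugeno output: firing-strength-weighted average of
    the linear consequents y_r = a_r . x + b_r."""
    strengths, outputs = [], []
    for centers, sigmas, a, b in rules:
        w = np.prod(gauss(x, centers, sigmas))   # product t-norm over antecedents
        strengths.append(w)
        outputs.append(a @ x + b)
    strengths = np.array(strengths)
    return np.dot(strengths, outputs) / strengths.sum()

# Two hypothetical rules over two deprivation-like inputs (illustrative values only).
rules = [
    (np.array([0.2, 0.3]), np.array([0.3, 0.3]), np.array([0.5, -0.2]), 0.1),
    (np.array([0.8, 0.7]), np.array([0.3, 0.3]), np.array([-0.4, 0.6]), 0.9),
]
print(ts_predict(np.array([0.4, 0.5]), rules))
```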

  7. Natural extension of fast-slow decomposition for dynamical systems

    NASA Astrophysics Data System (ADS)

    Rubin, J. E.; Krauskopf, B.; Osinga, H. M.

    2018-01-01

    Modeling and parameter estimation to capture the dynamics of physical systems are often challenging because many parameters can range over orders of magnitude and are difficult to measure experimentally. Moreover, selecting a suitable model complexity requires a sufficient understanding of the model's potential use, such as highlighting essential mechanisms underlying qualitative behavior or precisely quantifying realistic dynamics. We present an approach that can guide model development and tuning to achieve desired qualitative and quantitative solution properties. It relies on the presence of disparate time scales and employs techniques of separating the dynamics of fast and slow variables, which are well known in the analysis of qualitative solution features. We build on these methods to show how it is also possible to obtain quantitative solution features by imposing designed dynamics for the slow variables in the form of specified two-dimensional paths in a bifurcation-parameter landscape.

  8. Understanding Space Weather: The Sun as a Variable Star

    NASA Technical Reports Server (NTRS)

    Strong, Keith; Saba, Julia; Kucera, Therese

    2011-01-01

    The Sun is a complex system of systems and until recently, less than half of its surface was observable at any given time and then only from afar. New observational techniques and modeling capabilities are giving us a fresh perspective of the solar interior and how our Sun works as a variable star. This revolution in solar observations and modeling provides us with the exciting prospect of being able to use a vastly increased stream of solar data taken simultaneously from several different vantage points to produce more reliable and prompt space weather forecasts. Solar variations that cause identifiable space weather effects do not happen only on solar-cycle timescales from decades to centuries; there are also many shorter-term events that have their own unique space weather effects and a different set of challenges to understand and predict, such as flares, coronal mass ejections, and solar wind variations

  9. Understanding Space Weather: The Sun as a Variable Star

    NASA Technical Reports Server (NTRS)

    Strong, Keith; Saba, Julia; Kucera, Therese

    2012-01-01

    The Sun is a complex system of systems and until recently, less than half of its surface was observable at any given time and then only from afar. New observational techniques and modeling capabilities are giving us a fresh perspective of the solar interior and how our Sun works as a variable star. This revolution in solar observations and modeling provides us with the exciting prospect of being able to use a vastly increased stream of solar data taken simultaneously from several different vantage points to produce more reliable and prompt space weather forecasts. Solar variations that cause identifiable space weather effects do not happen only on solar-cycle timescales from decades to centuries; there are also many shorter-term events that have their own unique space weather effects and a different set of challenges to understand and predict, such as flares, coronal mass ejections, and solar wind variations.

  10. Experimental evidence of exciton-plasmon coupling in densely packed dye doped core-shell nanoparticles obtained via microfluidic technique

    NASA Astrophysics Data System (ADS)

    De Luca, A.; Iazzolino, A.; Salmon, J.-B.; Leng, J.; Ravaine, S.; Grigorenko, A. N.; Strangi, G.

    2014-09-01

    The interplay between plasmons and excitons in bulk metamaterials is investigated by performing spectroscopic studies, including variable angle pump-probe ellipsometry. Gain-functionalized gold nanoparticles have been densely packed through a microfluidic chip, representing a scalable process towards bulk metamaterials based on a self-assembly approach. Chromophores placed at the heart of the plasmonic subunits ensure exciton-plasmon coupling that conveys excitation energy to the quasi-static electric field of the plasmon states. The overall complex polarizability of the system, probed by variable angle spectroscopic ellipsometry, shows a significant modification under optical excitation, as demonstrated by the behavior of the ellipsometric angles Ψ and Δ as a function of suitable excitation fields. The plasmon resonances observed in densely packed gain-functionalized core-shell gold nanoparticles represent a promising step towards enabling a wide range of electromagnetic properties and fascinating applications of plasmonic bulk systems for advanced optical materials.

  11. Mass spectrometry-based biomarker discovery: toward a global proteome index of individuality.

    PubMed

    Hawkridge, Adam M; Muddiman, David C

    2009-01-01

    Biomarker discovery and proteomics have become synonymous with mass spectrometry in recent years. Although this conflation is an injustice to the many essential biomolecular techniques widely used in biomarker-discovery platforms, it underscores the power and potential of contemporary mass spectrometry. Numerous novel and powerful technologies have been developed around mass spectrometry, proteomics, and biomarker discovery over the past 20 years to globally study complex proteomes (e.g., plasma). However, very few large-scale longitudinal studies have been carried out using these platforms to establish the analytical variability relative to true biological variability. The purpose of this review is not to cover exhaustively the applications of mass spectrometry to biomarker discovery, but rather to discuss the analytical methods and strategies that have been developed for mass spectrometry-based biomarker-discovery platforms and to place them in the context of the many challenges and opportunities yet to be addressed.

  12. An adaptive gridless methodology in one dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snyder, N.T.; Hailey, C.E.

    1996-09-01

    Gridless numerical analysis offers great potential for accurately solving for flow about complex geometries or moving boundary problems. Because gridless methods do not require point connection, the mesh cannot twist or distort. The gridless method utilizes a Taylor series about each point to obtain the unknown derivative terms from the current field variable estimates. The governing equation is then numerically integrated to determine the field variables for the next iteration. Effects of point spacing and Taylor series order on accuracy are studied, and they follow similar trends of traditional numerical techniques. Introducing adaption by point movement using a spring analogy allows the solution method to track a moving boundary. The adaptive gridless method models linear, nonlinear, steady, and transient problems. Comparison with known analytic solutions is given for these examples. Although point movement adaption does not provide a significant increase in accuracy, it helps capture important features and provides an improved solution.
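
    The core gridless step described above, recovering derivatives at a point from scattered neighbors via a truncated Taylor expansion, can be sketched in one dimension as a small least-squares problem; the sample points below are illustrative, and the sketch omits the time integration and point-movement adaption.

```python
import numpy as np
from math import factorial

def local_derivatives(x_pts, f_pts, i, order=2):
    """Estimate f', f'', ... at point i from its neighbours by least-squares
    fitting the truncated Taylor expansion
        f(x_j) - f(x_i) = sum_k dx_j**k / k! * f^(k)(x_i).
    """
    dx = np.delete(x_pts - x_pts[i], i)
    df = np.delete(f_pts - f_pts[i], i)
    # Design matrix: one column dx**k / k! for each derivative order k
    A = np.column_stack([dx ** k / factorial(k) for k in range(1, order + 1)])
    coeffs, *_ = np.linalg.lstsq(A, df, rcond=None)
    return coeffs  # [f'(x_i), f''(x_i), ...]

# Irregularly spaced points sampling f(x) = sin(x); derivatives at the middle point.
x = np.array([0.0, 0.35, 0.9, 1.3, 2.1])
print(local_derivatives(x, np.sin(x), i=2))   # roughly [cos(0.9), -sin(0.9)]
```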

  13. Maternal factors predicting cognitive and behavioral characteristics of children with fetal alcohol spectrum disorders.

    PubMed

    May, Philip A; Tabachnick, Barbara G; Gossage, J Phillip; Kalberg, Wendy O; Marais, Anna-Susan; Robinson, Luther K; Manning, Melanie A; Blankenship, Jason; Buckley, David; Hoyme, H Eugene; Adnams, Colleen M

    2013-06-01

    To provide an analysis of multiple predictors of cognitive and behavioral traits for children with fetal alcohol spectrum disorders (FASDs). Multivariate correlation techniques were used with maternal and child data from epidemiologic studies in a community in South Africa. Data on 561 first-grade children with fetal alcohol syndrome (FAS), partial FAS (PFAS), and not FASD and their mothers were analyzed by grouping 19 maternal variables into categories (physical, demographic, childbearing, and drinking) and used in structural equation models (SEMs) to assess correlates of child intelligence (verbal and nonverbal) and behavior. A first SEM using only 7 maternal alcohol use variables to predict cognitive/behavioral traits was statistically significant (B = 3.10, p < .05) but explained only 17.3% of the variance. The second model incorporated multiple maternal variables and was statistically significant explaining 55.3% of the variance. Significantly correlated with low intelligence and problem behavior were demographic (B = 3.83, p < .05) (low maternal education, low socioeconomic status [SES], and rural residence) and maternal physical characteristics (B = 2.70, p < .05) (short stature, small head circumference, and low weight). Childbearing history and alcohol use composites were not statistically significant in the final complex model and were overpowered by SES and maternal physical traits. Although other analytic techniques have amply demonstrated the negative effects of maternal drinking on intelligence and behavior, this highly controlled analysis of multiple maternal influences reveals that maternal demographics and physical traits make a significant enabling or disabling contribution to child functioning in FASD.

  14. A strategy for simultaneous determination of fatty acid composition, fatty acid position, and position-specific isotope contents in triacylglycerol matrices by 13C-NMR.

    PubMed

    Merchak, Noelle; Silvestre, Virginie; Loquet, Denis; Rizk, Toufic; Akoka, Serge; Bejjani, Joseph

    2017-01-01

    Triacylglycerols, which are quasi-universal components of food matrices, consist of complex mixtures of molecules. Their site-specific 13C content, their fatty acid profile, and their position on the glycerol moiety may significantly vary with the geographical, botanical, or animal origin of the sample. Such variables are valuable tracers for food authentication issues. The main objective of this work was to develop a new method based on a rapid and precise 13C-NMR spectroscopy (using a polarization transfer technique) coupled with multivariate linear regression analyses in order to quantify the whole set of individual fatty acids within triacylglycerols. In this respect, olive oil samples were analyzed by means of both an adiabatic 13C-INEPT sequence and gas chromatography (GC). For each fatty acid within the studied matrix, and for squalene as well, a multivariate prediction model was constructed using the deconvoluted peak areas of 13C-INEPT spectra as predictors and the data obtained by GC as response variables. This 13C-NMR-based strategy, tested on olive oil, could serve as an alternative to the gas chromatographic quantification of individual fatty acids in other matrices, while providing additional compositional and isotopic information. Graphical abstract: A strategy based on the multivariate linear regression of variables obtained by a rapid 13C-NMR technique was developed for the quantification of individual fatty acids within triacylglycerol matrices. The conceived strategy was tested on olive oil.

  15. Heart rate dynamics in patients with stable angina pectoris and utility of fractal and complexity measures

    NASA Technical Reports Server (NTRS)

    Makikallio, T. H.; Ristimae, T.; Airaksinen, K. E.; Peng, C. K.; Goldberger, A. L.; Huikuri, H. V.

    1998-01-01

    Dynamic analysis techniques may uncover abnormalities in heart rate (HR) behavior that are not easily detectable with conventional statistical measures. However, the applicability of these new methods for detecting possible abnormalities in HR behavior in various cardiovascular disorders is not well established. Conventional measures of HR variability were compared with short-term (< or = 11 beats, alpha1) and long-term (> 11 beats, alpha2) fractal correlation properties and with approximate entropy of RR interval data in 38 patients with stable angina pectoris without previous myocardial infarction or cardiac medication at the time of the study and 38 age-matched healthy controls. The short- and long-term fractal scaling exponents (alpha1, alpha2) were significantly higher in the coronary patients than in the healthy controls (1.34 +/- 0.15 vs 1.11 +/- 0.12 [p <0.001] and 1.10 +/- 0.08 vs 1.04 +/- 0.06 [p <0.01], respectively), and they also had lower approximate entropy (p <0.05), standard deviation of all RR intervals (p <0.01), and high-frequency spectral component of HR variability (p <0.05). The short-term fractal scaling exponent performed better than other heart rate variability parameters in differentiating patients with coronary artery disease from healthy subjects, but it was not related to the clinical or angiographic severity of coronary artery disease or any single nonspectral or spectral measure of HR variability in this retrospective study. Patients with stable angina pectoris have altered fractal properties and reduced complexity in their RR interval dynamics relative to age-matched healthy subjects. Dynamic analysis may complement traditional analyses in detecting altered HR behavior in patients with stable angina pectoris.
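
    A compact illustration of the short-term fractal scaling exponent discussed above is given by detrended fluctuation analysis over window sizes of roughly 4-11 beats; the RR series below is synthetic and the routine is a simplified sketch, not the clinical analysis pipeline.

```python
import numpy as np

def dfa_alpha(rr_ms, scales=range(4, 12)):
    """Detrended fluctuation analysis: slope of log F(n) vs log n.
    Short scales (about 4-11 beats) correspond to the alpha1 exponent."""
    y = np.cumsum(rr_ms - np.mean(rr_ms))            # integrated, mean-centred series
    log_n, log_f = [], []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for w in range(n_win):
            seg = y[w * n:(w + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend per window
            f2.append(np.mean((seg - trend) ** 2))
        log_n.append(np.log(n))
        log_f.append(0.5 * np.log(np.mean(f2)))      # log of RMS fluctuation F(n)
    return np.polyfit(log_n, log_f, 1)[0]

# Synthetic RR series (ms); a real analysis would use recorded beat-to-beat intervals.
rng = np.random.default_rng(0)
rr = 800 + np.cumsum(rng.normal(0, 5, 2000)) * 0.1 + rng.normal(0, 20, 2000)
print(dfa_alpha(rr))
```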

  16. The Structure of Performance of a Sport Rock Climber

    PubMed Central

    Magiera, Artur; Roczniok, Robert; Maszczyk, Adam; Czuba, Miłosz; Kantyka, Joanna; Kurek, Piotr

    2013-01-01

    This study is a contribution to the discussion about the structure of performance of sport rock climbers. Because of the complex and multifaceted nature of this sport, multivariate statistics were applied in the study. The subjects included thirty experienced sport climbers. Forty three variables were scrutinised, namely somatic characteristics, specific physical fitness, coordination abilities, aerobic and anaerobic power, technical and tactical skills, mental characteristics, as well as 2 variables describing the climber’s performance in the OS (Max OS) and RP style (Max RP). The results show that for training effectiveness of advanced climbers to be thoroughly analysed and examined, tests assessing their physical, technical and mental characteristics are necessary. The three sets of variables used in this study explained the structure of performance similarly, but not identically (in 38, 33 and 25%, respectively). They were also complementary to around 30% of the variance. The overall performance capacity of a sport rock climber (Max OS and Max RP) was also evaluated in the study. The canonical weights of the dominant first canonical root were 0.554 and 0.512 for Max OS and Max RP, respectively. Despite the differences between the two styles of climbing, seven variables – the maximal relative strength of the fingers (canonical weight = 0.490), mental endurance (one of scales : The Formal Characteristics of Behaviour–Temperament Inventory (FCB–TI; Strelau and Zawadzki, 1995)) (−0.410), climbing technique (0.370), isometric endurance of the fingers (0.340), the number of errors in the complex reaction time test (−0.319), the ape index (−0.319) and oxygen uptake during arm work at the anaerobic threshold (0.254) were found to explain 77% of performance capacity common to the two styles. PMID:23717360

  17. Predicting radiotherapy outcomes using statistical learning techniques

    NASA Astrophysics Data System (ADS)

    El Naqa, Issam; Bradley, Jeffrey D.; Lindsay, Patricia E.; Hope, Andrew J.; Deasy, Joseph O.

    2009-09-01

    Radiotherapy outcomes are determined by complex interactions between treatment, anatomical and patient-related variables. A common obstacle to building maximally predictive outcome models for clinical practice is the failure to capture the potential complexity of heterogeneous variable interactions and applicability beyond institutional data. We describe a statistical learning methodology that can automatically screen for nonlinear relations among prognostic variables and generalize to data not seen before. In this work, several types of linear and nonlinear kernels for generating interaction terms and approximating the treatment-response function are evaluated. Examples of institutional datasets of esophagitis, pneumonitis and xerostomia endpoints were used. Furthermore, an independent RTOG dataset was used for 'generalizability' validation. We formulated the discrimination between risk groups as a supervised learning problem. The distribution of patient groups was initially analyzed using principal component analysis (PCA) to uncover potential nonlinear behavior. The performance of the different methods was evaluated using bivariate correlations and actuarial analysis. Over-fitting was controlled via cross-validation resampling. Our results suggest that a modified support vector machine (SVM) kernel method provided superior performance on leave-one-out testing compared to logistic regression and neural networks in cases where the data exhibited nonlinear behavior on PCA. For instance, in the prediction of esophagitis and pneumonitis endpoints, which exhibited nonlinear behavior on PCA, the method provided 21% and 60% improvements, respectively. Furthermore, evaluation on the independent pneumonitis RTOG dataset demonstrated good generalizability beyond institutional data, in contrast with other models. This indicates that the prediction of treatment response can be improved by utilizing nonlinear kernel methods for discovering important nonlinear interactions among model variables. These models have the capacity to predict on unseen data. Part of this work was first presented at the Seventh International Conference on Machine Learning and Applications, San Diego, CA, USA, 11-13 December 2008.

  18. Mapping extreme rainfall in the Northwest Portugal region: statistical analysis and spatial modelling

    NASA Astrophysics Data System (ADS)

    Santos, Monica; Fragoso, Marcelo

    2010-05-01

    Extreme precipitation events are one of the causes of natural hazards, such as floods and landslides, which makes their investigation important; this research aims to contribute to the study of extreme rainfall patterns in a Portuguese mountainous area. The study area is centred on the Arcos de Valdevez county, located in the northwest region of Portugal, the rainiest of the country, with more than 3000 mm of annual rainfall at the Peneda-Gerês mountain system. This work focuses on two main subjects related to the precipitation variability in the study area. First, a statistical analysis of several precipitation parameters is carried out, using daily data from 17 rain-gauges with a complete record for the 1960-1995 period. This approach aims to evaluate the main spatial contrasts regarding different aspects of the rainfall regime, described by ten parameters and indices of precipitation extremes (e.g. mean annual precipitation, the annual frequency of precipitation days, wet spell durations, maximum daily precipitation, maximum precipitation in 30 days, number of days with rainfall exceeding 100 mm and estimated maximum daily rainfall for a return period of 100 years). The results show that the highest precipitation amounts (from annual to daily scales) and the higher frequency of very abundant rainfall events occur in the Serra da Peneda and Gerês mountains, in contrast to the valleys of the Lima, Minho and Vez rivers, which have lower precipitation amounts and less frequent heavy storms. The second purpose of this work is to find a method of mapping extreme rainfall in this mountainous region, investigating the complex influence of the relief (e.g. elevation, topography) on the precipitation patterns, as well as other geographical variables (e.g. distance from the coast, latitude), applying tested geo-statistical techniques (Goovaerts, 2000; Diodato, 2005). Linear regression models were applied to evaluate the influence of different geographical variables (altitude, latitude, distance from the sea and distance to the highest orographic barrier) on the rainfall behaviour described by the studied variables. The spatial interpolation techniques evaluated include univariate and multivariate methods: cokriging, kriging, IDW (inverse distance weighting) and multiple linear regression. Validation procedures were used to assess the estimation errors through descriptive statistics of the models. Multiple linear regression models produced satisfactory results for 70% of the rainfall parameters, as suggested by a lower average percentage error. However, the results also demonstrate that there is no single ideal model; the best choice depends on the rainfall parameter under consideration. The unsatisfactory results obtained for some rainfall parameters were probably caused by constraints such as the spatial complexity of the precipitation patterns, as well as the deficient spatial coverage of the territory by the rain-gauge network. References Diodato, N. (2005). The influence of topographic co-variables on the spatial variability of precipitation over small regions of complex terrain. International Journal of Climatology, 25(3), 351-363. Goovaerts, P. (2000). Geostatistical approaches for incorporating elevation into the spatial interpolation of rainfall. Journal of Hydrology, 228, 113-129.

  19. Vagal-dependent nonlinear variability in the respiratory pattern of anesthetized, spontaneously breathing rats

    PubMed Central

    Dhingra, R. R.; Jacono, F. J.; Fishman, M.; Loparo, K. A.; Rybak, I. A.

    2011-01-01

    Physiological rhythms, including respiration, exhibit endogenous variability associated with health, and deviations from this are associated with disease. Specific changes in the linear and nonlinear sources of breathing variability have not been investigated. In this study, we used information theory-based techniques, combined with surrogate data testing, to quantify and characterize the vagal-dependent nonlinear pattern variability in urethane-anesthetized, spontaneously breathing adult rats. Surrogate data sets preserved the amplitude distribution and linear correlations of the original data set, but nonlinear correlation structure in the data was removed. Differences in mutual information and sample entropy between original and surrogate data sets indicated the presence of deterministic nonlinear or stochastic non-Gaussian variability. With vagi intact (n = 11), the respiratory cycle exhibited significant nonlinear behavior in templates of points separated by time delays ranging from one sample to one cycle length. After vagotomy (n = 6), even though nonlinear variability was reduced significantly, nonlinear properties were still evident at various time delays. Nonlinear deterministic variability did not change further after subsequent bilateral microinjection of MK-801, an N-methyl-d-aspartate receptor antagonist, in the Kölliker-Fuse nuclei. Reversing the sequence (n = 5), blocking N-methyl-d-aspartate receptors bilaterally in the dorsolateral pons significantly decreased nonlinear variability in the respiratory pattern, even with the vagi intact, and subsequent vagotomy did not change nonlinear variability. Thus both vagal and dorsolateral pontine influences contribute to nonlinear respiratory pattern variability. Furthermore, breathing dynamics of the intact system are mutually dependent on vagal and pontine sources of nonlinear complexity. Understanding the structure and modulation of variability provides insight into disease effects on respiratory patterning. PMID:21527661
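
    The surrogate-testing logic can be sketched as follows: generate surrogates that keep the linear correlation structure, then compare a nonlinear statistic between the original series and the surrogate ensemble. This sketch uses simple phase-randomized surrogates and a time-reversal-asymmetry statistic in place of the amplitude-adjusted surrogates and the information-theoretic measures (mutual information, sample entropy) used in the study; the test signal is synthetic.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """FT surrogate: keeps the power spectrum (linear correlations) but
    destroys nonlinear structure by randomizing the Fourier phases."""
    n = len(x)
    spec = np.fft.rfft(x - np.mean(x))
    phases = rng.uniform(0, 2 * np.pi, len(spec))
    phases[0] = phases[-1] = 0.0                 # keep DC and Nyquist bins real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n) + np.mean(x)

def time_reversal_asymmetry(x, lag=1):
    """Simple nonlinearity statistic; close to 0 for linear Gaussian processes."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** 3) / np.mean(d ** 2) ** 1.5

rng = np.random.default_rng(1)
t = np.arange(2000) * 0.025
signal = (t % 1.0) + 0.1 * rng.normal(size=2000)   # time-asymmetric sawtooth + noise
stat_orig = time_reversal_asymmetry(signal)
stat_surr = [time_reversal_asymmetry(phase_randomized_surrogate(signal, rng))
             for _ in range(99)]
# If stat_orig falls outside the surrogate distribution, nonlinear structure is indicated.
print(stat_orig, np.percentile(stat_surr, [2.5, 97.5]))
```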

  20. Bi-Frequency Modulated Quasi-Resonant Converters: Theory and Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Yuefeng

    1995-01-01

    To avoid the variable-frequency operation of quasi-resonant converters, many soft-switching PWM converters have been proposed, all of which require an auxiliary switch that increases the cost and complexity of the power supply system. In this thesis, a new kind of technique for quasi-resonant converters has been proposed, called the bi-frequency modulation technique. By operating the quasi-resonant converters at two switching frequencies, this technique enables quasi-resonant converters to achieve soft switching, at fixed switching frequencies, without an auxiliary switch. The steady-state analysis of four commonly used quasi-resonant converters, namely the ZVS buck, ZCS buck, ZVS boost, and ZCS boost converters, has been presented. Using the concepts of equivalent sources, equivalent sinks, and resonant tanks, large-signal models of these four quasi-resonant converters were developed. Based on these models, the steady-state control characteristics of the BFM ZVS buck, BFM ZCS buck, BFM ZVS boost, and BFM ZCS boost converters have been derived. The functional block and design considerations of the bi-frequency controller were presented, and one implementation of the bi-frequency controller was given. A complete design example has been presented. Both computer simulations and experimental results have verified that bi-frequency modulated quasi-resonant converters can achieve soft switching, at fixed switching frequencies, without an auxiliary switch. One application of the bi-frequency modulation technique is EMI reduction. The basic principle of using the BFM technique for EMI reduction was introduced. Based on spectral analysis, the EMI performances of the PWM, variable-frequency, and bi-frequency modulated control signals were evaluated, and the BFM control signals show the lowest EMI emission. The bi-frequency modulation technique has also been applied to power factor correction. A BFM zero-current switching boost converter has been designed for power factor correction, and the simulation results show that the power factor has been improved.

  1. LIDAR optical rugosity of coral reefs in Biscayne National Park, Florida

    USGS Publications Warehouse

    Brock, J.C.; Wright, C.W.; Clayton, T.D.; Nayegandhi, A.

    2004-01-01

    The NASA Experimental Advanced Airborne Research Lidar (EAARL), a temporal waveform-resolving, airborne, green wavelength LIDAR (light detection and ranging), is designed to measure the submeter-scale topography of shallow reef substrates. Topographic variability is a prime component of habitat complexity, an ecological factor that both expresses and controls the abundance and distribution of many reef organisms. Following the acquisition of EAARL coverage over both mid-platform patch reefs and shelf-margin bank reefs within Biscayne National Park in August 2002, EAARL-based optical indices of topographic variability were evaluated at 15 patch reef and bank reef sites. Several sites were selected to match reefs previously evaluated in situ along underwater video and belt transects. The analysis used large populations of submarine topographic transects derived from the examination of closely spaced laser spot reflections along LIDAR raster scans. At all 15 sites, each LIDAR transect was evaluated separately to determine optical rugosity (Rotran) and the average elevation difference between adjacent points (Av(ΔEap)). Further, the whole-site mean and maximum values of Rotran and Av(ΔEap) for the entire population of transects at each analysis site, along with their standard deviations, were calculated. This study revealed that the greater habitat complexity of inshore patch reefs versus outer bank reefs results in relative differences in topographic complexity that can be discerned in the laser returns. Accordingly, LIDAR sensing of optical rugosity is proposed as a complementary new technique for the rapid assessment of shallow coral reefs. © Springer-Verlag 2004.
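
    The two transect indices named above lend themselves to a compact sketch: optical rugosity as chain length over straight-line length, and the mean absolute elevation difference between adjacent returns. The depths below are invented and the code is illustrative, not the EAARL processing chain.

```python
import numpy as np

def transect_rugosity(distance_m, elevation_m):
    """Rugosity: surface (chain) length along the transect divided by its
    straight-line horizontal length; 1.0 corresponds to a perfectly flat bed."""
    dx = np.diff(distance_m)
    dz = np.diff(elevation_m)
    chain = np.sum(np.hypot(dx, dz))
    return chain / (distance_m[-1] - distance_m[0])

def mean_adjacent_dz(elevation_m):
    """Average absolute elevation difference between adjacent laser returns."""
    return np.mean(np.abs(np.diff(elevation_m)))

# Hypothetical 1-m-spaced laser returns across a patch reef (depths in metres).
d = np.arange(0.0, 10.0, 1.0)
z = np.array([-3.2, -2.9, -3.4, -2.6, -3.1, -3.6, -2.8, -3.0, -3.5, -3.3])
print(transect_rugosity(d, z), mean_adjacent_dz(z))
```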

  2. Effects of variable practice on the motor learning outcomes in manual wheelchair propulsion.

    PubMed

    Leving, Marika T; Vegter, Riemer J K; de Groot, Sonja; van der Woude, Lucas H V

    2016-11-23

    Handrim wheelchair propulsion is a cyclic skill that needs to be learned during rehabilitation. It has been suggested that more variability in propulsion technique benefits the motor learning process of wheelchair propulsion. The purpose of this study was to determine the influence of variable practice on the motor learning outcomes of wheelchair propulsion in able-bodied participants. Variable practice was introduced in the form of wheelchair basketball practice and wheelchair-skill practice. Motor learning was operationalized as improvements in mechanical efficiency and propulsion technique. Eleven Participants in the variable practice group and 12 participants in the control group performed an identical pre-test and a post-test. Pre- and post-test were performed in a wheelchair on a motor-driven treadmill (1.11 m/s) at a relative power output of 0.23 W/kg. Energy consumption and the propulsion technique variables with their respective coefficient of variation were calculated. Between the pre- and the post-test the variable practice group received 7 practice sessions. During the practice sessions participants performed one-hour of variable practice, consisting of five wheelchair-skill tasks and a 30 min wheelchair basketball game. The control group did not receive any practice between the pre- and the post-test. Comparison of the pre- and the post-test showed that the variable practice group significantly improved the mechanical efficiency (4.5 ± 0.6% → 5.7 ± 0.7%) in contrast to the control group (4.5 ± 0.6% → 4.4 ± 0.5%) (group x time interaction effect p < 0.001).With regard to propulsion technique, both groups significantly reduced the push frequency and increased the contact angle of the hand with the handrim (within group, time effect). No significant group × time interaction effects were found for propulsion technique. With regard to propulsion variability, the variable practice group increased variability when compared to the control group (interaction effect p < 0.001). Compared to a control, variable practice, resulted in an increase in mechanical efficiency and increased variability. Interestingly, the large relative improvement in mechanical efficiency was concomitant with only moderate improvements in the propulsion technique, which were similar in the control group, suggesting that other factors besides propulsion technique contributed to the lower energy expenditure.

  3. Hydrological parameter estimations from a conservative tracer test with variable-density effects at the Boise Hydrogeophysical Research Site

    NASA Astrophysics Data System (ADS)

    Dafflon, B.; Barrash, W.; Cardiff, M.; Johnson, T. C.

    2011-12-01

    Reliable predictions of groundwater flow and solute transport require an estimation of the detailed distribution of the parameters (e.g., hydraulic conductivity, effective porosity) controlling these processes. However, such parameters are difficult to estimate because of the inaccessibility and complexity of the subsurface. In this regard, developments in parameter estimation techniques and investigations of field experiments are still challenging and necessary to improve our understanding and the prediction of hydrological processes. Here we analyze a conservative tracer test conducted at the Boise Hydrogeophysical Research Site in 2001 in a heterogeneous unconfined fluvial aquifer. Some relevant characteristics of this test include: variable-density (sinking) effects because of the injection concentration of the bromide tracer, the relatively small size of the experiment, and the availability of various sources of geophysical and hydrological information. The information contained in this experiment is evaluated through several parameter estimation approaches, including a grid-search-based strategy, stochastic simulation of hydrological property distributions, and deterministic inversion using regularization and pilot-point techniques. Doing this allows us to investigate hydraulic conductivity and effective porosity distributions and to compare the effects of assumptions from several methods and parameterizations. Our results provide new insights into the understanding of variable-density transport processes and the hydrological relevance of incorporating various sources of information in parameter estimation approaches. Among others, the variable-density effect and the effective porosity distribution, as well as their coupling with the hydraulic conductivity structure, are seen to be significant in the transport process. The results also show that assumed prior information can strongly influence the estimated distributions of hydrological properties.

  4. On Chaotic and Hyperchaotic Complex Nonlinear Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Mahmoud, Gamal M.

    Dynamical systems described by real and complex variables are currently one of the most popular areas of scientific research. These systems play an important role in several fields of physics, engineering, and computer science, for example, laser systems, control (or chaos suppression), secure communications, and information science. Basic dynamical properties, chaos (hyperchaos) synchronization, chaos control, and the generation of hyperchaotic behavior in these systems are briefly summarized. The main advantage of introducing complex variables is the reduction of phase space dimensions by a half. They are also used to describe and simulate the physics of detuned lasers and thermal convection of liquid flows, where the electric field and the atomic polarization amplitudes are both complex. Clearly, if the variables of the system are complex, the equations involve twice as many variables and control parameters, thus making it that much harder for a hostile agent to intercept and decipher the coded message. Chaotic and hyperchaotic complex systems are presented as examples. Finally, there are many open problems in the study of chaotic and hyperchaotic complex nonlinear dynamical systems which need further investigation. Some of these open problems are given.
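
    The dimension-halving point can be illustrated with a generic complex nonlinear oscillator (a Stuart-Landau form, chosen here for brevity rather than taken from the article): a single complex state variable carries what would otherwise be two coupled real equations.

```python
import numpy as np

# Generic complex nonlinear oscillator, dz/dt = (mu + i*omega) z - |z|^2 z,
# used only to show one complex variable standing in for two real ones.
MU, OMEGA = 0.5, 2.0

def f(z):
    return (MU + 1j * OMEGA) * z - (abs(z) ** 2) * z

def rk4_step(z, dt):
    k1 = f(z)
    k2 = f(z + 0.5 * dt * k1)
    k3 = f(z + 0.5 * dt * k2)
    k4 = f(z + dt * k3)
    return z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

z, dt = 0.1 + 0.0j, 0.01
traj = np.empty(5000, dtype=complex)
for n in range(len(traj)):
    traj[n] = z
    z = rk4_step(z, dt)
# The orbit settles on a limit cycle of radius sqrt(MU); the real and imaginary
# parts recover the two underlying real variables whenever they are needed.
print(abs(traj[-1]), np.sqrt(MU))
```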

  5. Four quadrant control of induction motors

    NASA Technical Reports Server (NTRS)

    Hansen, Irving G.

    1991-01-01

    Induction motors are the nation's workhorse, being the motor of choice in most applications due to their simple rugged construction. It has been estimated that 14 to 27 percent of the country's total electricity use could be saved with adjustable speed drives. Until now, induction motors have not been suited well for variable speed or servo-drives, due to the inherent complexity, size, and inefficiency of their variable speed controls. Work at NASA Lewis Research Center on field oriented control of induction motors using pulse population modulation method holds the promise for the desired drive electronics. The system allows for a variable voltage to frequency ratio which enables the user to operate the motor at maximum efficiency, while having independent control of both the speed and torque of an induction motor in all four quadrants of the speed torque map. Multiple horsepower machine drives were demonstrated, and work is on-going to develop a 20 hp average, 40 hp peak class of machine. The pulse population technique, results to date, and projections for implementation of this existing new motor control technology are discussed.

  6. Decision Neuroscience: Neuroeconomics

    PubMed Central

    Smith, David V.; Huettel, Scott A.

    2012-01-01

    Few aspects of human cognition are more personal than the choices we make. Our decisions – from the mundane to the impossibly complex – continually shape the courses of our lives. In recent years, researchers have applied the tools of neuroscience to understand the mechanisms that underlie decision making, as part of the new discipline of decision neuroscience. A primary goal of this emerging field has been to identify the processes that underlie specific decision variables, including the value of rewards, the uncertainty associated with particular outcomes, and the consequences of social interactions. Recent work suggests potential neural substrates that integrate these variables, potentially reflecting a common neural currency for value, to facilitate value comparisons. Despite the successes of decision neuroscience research for elucidating brain mechanisms, significant challenges remain. These include building new conceptual frameworks for decision making, integrating research findings across disparate techniques and species, and extending results from neuroscience to shape economic theory. To overcome these challenges, future research will likely focus on interpersonal variability in decision making, with the eventual goal of creating biologically plausible models for individual choice. PMID:22754602

  7. A spectro-interferometric view of l Carinae's modulated pulsations

    NASA Astrophysics Data System (ADS)

    Anderson, Richard I.; Mérand, Antoine; Kervella, Pierre; Breitfelder, Joanne; Eyer, Laurent; Gallenne, Alexandre

    Classical Cepheids are radially pulsating stars that enable important tests of stellar evolution and play a crucial role in the calibration of the local Hubble constant. l Carinae is a particularly well-known distance calibrator, being the closest long-period (P ~ 35.5 d) Cepheid and subtending the largest angular diameter. We have carried out an unprecedented observing program to investigate whether recently discovered cycle-to-cycle changes (modulations) of l Carinae's radial velocity (RV) variability are mirrored by its variability in angular size. To this end, we have secured a fully contemporaneous dataset of high-precision RVs and high-precision angular diameters. Here we provide a concise summary of our project and report preliminary results. We confirm the modulated nature of the RV variability and find tentative evidence of cycle-to-cycle differences in l Car's maximal angular diameter. Our analysis is exploring the limits of state-of-the-art instrumentation and reveals additional complexity in the pulsations of Cepheids. If confirmed, our result suggests a previously unknown pulsation cycle dependence of projection factors required for determining Cepheid distances via the Baade-Wesselink technique.

  8. A Computing Method for Sound Propagation Through a Nonuniform Jet Stream

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Liu, C. H.

    1974-01-01

    Understanding the principles of jet noise propagation is an essential ingredient of systematic noise reduction research. High speed computer methods offer a unique potential for dealing with complex real life physical systems whereas analytical solutions are restricted to sophisticated idealized models. The classical formulation of sound propagation through a jet flow was found to be inadequate for computer solutions and a more suitable approach was needed. Previous investigations selected the phase and amplitude of the acoustic pressure as dependent variables requiring the solution of a system of nonlinear algebraic equations. The nonlinearities complicated both the analysis and the computation. A reformulation of the convective wave equation in terms of a new set of dependent variables is developed with a special emphasis on its suitability for numerical solutions on fast computers. The technique is very attractive because the resulting equations are linear in nonwaving variables. The computer solution to such a linear system of algebraic equations may be obtained by well-defined and direct means which are conservative of computer time and storage space. Typical examples are illustrated and computational results are compared with available numerical and experimental data.

  9. Novel Flood Detection and Analysis Method Using Recurrence Property

    NASA Astrophysics Data System (ADS)

    Wendi, Dadiyorto; Merz, Bruno; Marwan, Norbert

    2016-04-01

    Temporal changes in flood hazard are known to be difficult to detect and attribute due to multiple drivers that include processes that are non-stationary and highly variable. These drivers, such as human-induced climate change, natural climate variability, implementation of flood defence, river training, or land use change, could impact variably on space-time scales and influence or mask each other. Flood time series may show complex behavior that vary at a range of time scales and may cluster in time. This study focuses on the application of recurrence based data analysis techniques (recurrence plot) for understanding and quantifying spatio-temporal changes in flood hazard in Germany. The recurrence plot is known as an effective tool to visualize the dynamics of phase space trajectories i.e. constructed from a time series by using an embedding dimension and a time delay, and it is known to be effective in analyzing non-stationary and non-linear time series. The emphasis will be on the identification of characteristic recurrence properties that could associate typical dynamic behavior to certain flood situations.
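
    A minimal recurrence-plot construction, delay embedding followed by a thresholded distance matrix, is sketched below; the embedding parameters, threshold heuristic, and synthetic series are illustrative and do not reproduce the flood analysis itself.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Phase-space reconstruction with embedding dimension `dim` and delay `tau`."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])

def recurrence_matrix(x, dim=3, tau=2, eps=None):
    """Binary recurrence plot: R[i, j] = 1 where embedded states are closer than eps."""
    emb = delay_embed(np.asarray(x, dtype=float), dim, tau)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    if eps is None:
        eps = 0.1 * np.max(dists)        # common heuristic: 10% of the maximum distance
    return (dists <= eps).astype(int)

# Synthetic series standing in for a discharge record.
rng = np.random.default_rng(2)
series = np.sin(np.arange(120) * 0.3) + 0.3 * rng.normal(size=120)
R = recurrence_matrix(series)
print(R.shape, R.mean())                 # recurrence rate = fraction of recurrent pairs
```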

  10. Markov state modeling of sliding friction

    NASA Astrophysics Data System (ADS)

    Pellegrini, F.; Landes, François P.; Laio, A.; Prestipino, S.; Tosatti, E.

    2016-11-01

    Markov state modeling (MSM) has recently emerged as one of the key techniques for the discovery of collective variables and the analysis of rare events in molecular simulations. In particular in biochemistry this approach is successfully exploited to find the metastable states of complex systems and their evolution in thermal equilibrium, including rare events, such as a protein undergoing folding. The physics of sliding friction and its atomistic simulations under external forces constitute a nonequilibrium field where relevant variables are in principle unknown and where a proper theory describing violent and rare events such as stick slip is still lacking. Here we show that MSM can be extended to the study of nonequilibrium phenomena and in particular friction. The approach is benchmarked on the Frenkel-Kontorova model, used here as a test system whose properties are well established. We demonstrate that the method allows the least prejudiced identification of a minimal basis of natural microscopic variables necessary for the description of the forced dynamics of sliding, through their probabilistic evolution. The steps necessary for the application to realistic frictional systems are highlighted.
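
    The basic MSM estimation step, counting transitions between discretized states at a chosen lag and extracting the stationary distribution, can be sketched as follows; the three-state trajectory is synthetic and stands in for states obtained by clustering simulation data.

```python
import numpy as np

def transition_matrix(labels, n_states, lag=1):
    """Row-stochastic MSM transition matrix estimated by counting transitions
    observed at the chosen lag time."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(labels[:-lag], labels[lag:]):
        counts[a, b] += 1
    counts += 1e-12                       # avoid division by zero for unvisited states
    return counts / counts.sum(axis=1, keepdims=True)

def stationary_distribution(T):
    """Left eigenvector of T for eigenvalue 1, normalized to sum to one."""
    vals, vecs = np.linalg.eig(T.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    return pi / pi.sum()

# Hypothetical discretized trajectory (e.g., microstates from clustering positions).
rng = np.random.default_rng(3)
traj = rng.choice(3, size=5000, p=[0.5, 0.3, 0.2])
T = transition_matrix(traj, n_states=3)
print(T.round(3), stationary_distribution(T).round(3))
```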

  11. Ground-based thermography of fluvial systems at low and high discharge reveals potential complex thermal heterogeneity driven by flow variation and bioroughness

    USGS Publications Warehouse

    Cardenas, M.B.; Harvey, J.W.; Packman, A.I.; Scott, D.T.

    2008-01-01

    Temperature is a primary physical and biogeochemical variable in aquatic systems. Field-based measurement of temperature at discrete sampling points has revealed temperature variability in fluvial systems, but traditional techniques do not readily allow for synoptic sampling schemes that can address temperature-related questions with broad, yet detailed, coverage. We present results of thermal infrared imaging at different stream discharge (base flow and peak flood) conditions using a handheld IR camera. Remotely sensed temperatures compare well with those measured with a digital thermometer. The thermal images show that periphyton, wood, and sandbars induce significant thermal heterogeneity during low stages. Moreover, the images indicate temperature variability within the periphyton community and within the partially submerged bars. The thermal heterogeneity was diminished during flood inundation, when areas of more slowly moving water at the sides of the stream differed in their temperature. The results have consequences for thermally sensitive hydroecological processes and implications for models of those processes, especially those that assume an effective stream temperature. Copyright © 2008 John Wiley & Sons, Ltd.

  12. DNA strand displacement system running logic programs.

    PubMed

    Rodríguez-Patón, Alfonso; Sainz de Murieta, Iñaki; Sosík, Petr

    2014-01-01

    The paper presents a DNA-based computing model which is enzyme-free and autonomous, not requiring a human intervention during the computation. The model is able to perform iterated resolution steps with logical formulae in conjunctive normal form. The implementation is based on the technique of DNA strand displacement, with each clause encoded in a separate DNA molecule. Propositions are encoded assigning a strand to each proposition p, and its complementary strand to the proposition ¬p; clauses are encoded comprising different propositions in the same strand. The model allows to run logic programs composed of Horn clauses by cascading resolution steps. The potential of the model is demonstrated also by its theoretical capability of solving SAT. The resulting SAT algorithm has a linear time complexity in the number of resolution steps, whereas its spatial complexity is exponential in the number of variables of the formula. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  13. Elastic Green’s Function in Anisotropic Bimaterials Considering Interfacial Elasticity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juan, Pierre -Alexandre; Dingreville, Remi

    Here, the two-dimensional elastic Green’s function is calculated for a general anisotropic elastic bimaterial containing a line dislocation and a concentrated force while accounting for the interfacial structure by means of a generalized interfacial elasticity paradigm. The introduction of the interface elasticity model gives rise to boundary conditions that are effectively equivalent to those of a weakly bounded interface. The equations of elastic equilibrium are solved by complex variable techniques and the method of analytical continuation. The solution is decomposed into the sum of the Green’s function corresponding to the perfectly bonded interface and a perturbation term corresponding to the complex coupling nature between the interface structure and a line dislocation/concentrated force. Such a construct can be implemented into the boundary integral equations and the boundary element method for analysis of nano-layered structures and epitaxial systems where the interface structure plays an important role.

  14. Elastic Green’s Function in Anisotropic Bimaterials Considering Interfacial Elasticity

    DOE PAGES

    Juan, Pierre -Alexandre; Dingreville, Remi

    2017-09-13

    Here, the two-dimensional elastic Green’s function is calculated for a general anisotropic elastic bimaterial containing a line dislocation and a concentrated force while accounting for the interfacial structure by means of a generalized interfacial elasticity paradigm. The introduction of the interface elasticity model gives rise to boundary conditions that are effectively equivalent to those of a weakly bounded interface. The equations of elastic equilibrium are solved by complex variable techniques and the method of analytical continuation. The solution is decomposed into the sum of the Green’s function corresponding to the perfectly bonded interface and a perturbation term corresponding to the complex coupling nature between the interface structure and a line dislocation/concentrated force. Such a construct can be implemented into the boundary integral equations and the boundary element method for analysis of nano-layered structures and epitaxial systems where the interface structure plays an important role.

  15. Sequential Injection Analysis for Optimization of Molecular Biology Reactions

    PubMed Central

    Allen, Peter B.; Ellington, Andrew D.

    2011-01-01

    In order to automate the optimization of complex biochemical and molecular biology reactions, we developed a Sequential Injection Analysis (SIA) device and combined this with a Design of Experiment (DOE) algorithm. This combination of hardware and software automatically explores the parameter space of the reaction and provides continuous feedback for optimizing reaction conditions. As an example, we optimized the endonuclease digest of a fluorogenic substrate, and showed that the optimized reaction conditions also applied to the digest of the substrate outside of the device, and to the digest of a plasmid. The sequential technique quickly arrived at optimized reaction conditions with less reagent use than a batch process (such as a fluid handling robot exploring multiple reaction conditions in parallel) would have. The device and method should now be amenable to much more complex molecular biology reactions whose variable spaces are correspondingly larger. PMID:21338059

  16. Geological and Structural Patterns on Titan Enhanced Through Cassini's SAR PCA and High-Resolution Radiometry

    NASA Astrophysics Data System (ADS)

    Paganelli, F.; Schubert, G.; Lopes, R. M. C.; Malaska, M.; Le Gall, A. A.; Kirk, R. L.

    2016-12-01

    The current SAR data coverage of Titan encompasses several areas in which multiple radar passes are present and overlapping, providing additional information to aid the interpretation of geological and structural features. We exploit the different combinations of look direction and variable incidence angle to examine Cassini Synthetic Aperture RADAR (SAR) data using the Principal Component Analysis (PCA) technique and high-resolution radiometry as tools to aid in the interpretation of geological and structural features. Look direction and variable incidence angle are of particular importance in the analysis of variance in the images, which aids in the perception and identification of geological and structural features, as extensively demonstrated in Earth and planetary examples. The PCA enhancement technique uses projected, non-ortho-rectified SAR imagery in order to maintain the inherent differences in scattering and geometric properties due to the different look directions, while enhancing the geometry of surface features. The PC2 component provides a stereo view of the areas in which complex surface features and structural patterns can be enhanced and outlined. We focus on several areas of interest, in older and recently acquired flybys, in which evidence of geological and structural features can be enhanced and outlined in the PC1 and PC2 components. The results of this technique provide enhanced geometry and insights into the interpretation of the observed geological and structural features, thus allowing a better understanding of the geology and tectonics of Titan.
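
    The PCA step can be illustrated generically: treat each co-registered pass as one variable per pixel, diagonalize the covariance, and map the pixels back onto component images. The two-pass example below is synthetic and is not the Cassini SAR processing chain.

```python
import numpy as np

def image_stack_pca(stack):
    """PCA across a stack of co-registered images.

    `stack` has shape (n_images, rows, cols); each pixel is one observation
    with one variable per look-direction/incidence-angle image.
    Returns the principal-component images (same shape as `stack`).
    """
    n, r, c = stack.shape
    X = stack.reshape(n, -1).T                  # (pixels, images)
    X = X - X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]
    pcs = X @ vecs[:, order]                    # project pixels onto components
    return pcs.T.reshape(n, r, c)

# Two hypothetical overlapping SAR passes of the same area (arbitrary units).
rng = np.random.default_rng(4)
base = rng.normal(size=(64, 64))
pass1 = base + 0.1 * rng.normal(size=(64, 64))
pass2 = 0.8 * base + 0.3 * rng.normal(size=(64, 64))
pc = image_stack_pca(np.stack([pass1, pass2]))
print(pc.shape)   # PC1 ~ common terrain signal, PC2 ~ look-direction differences
```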

  17. Variable filter array spectrometer of VPD PbSe

    NASA Astrophysics Data System (ADS)

    Linares-Herrero, R.; Vergara, G.; Gutiérrez-Álvarez, R.; Fernández-Montojo, C.; Gómez, L. J.; Villamayor, V.; Baldasano-Ramírez, A.; Montojo, M. T.

    2012-06-01

    MWIR spectroscopy shows a large potential in the current IR devices market, due to its multiple applications (gas detection, chemical analysis, industrial monitoring, combustion and flame characterization, food packaging etc) and its outstanding performance (good sensitivity, NDT method, velocity of response, among others), opening this technique to very diverse fields of application, such as industrial monitoring and control, agriculture, medicine and environmental monitoring. However, even though a big interest on MWIR spectroscopy technique has been present in the last years, two major barriers have held it back from its widespread use outside the laboratory: the complexity and delicateness of some popular techniques such as Fourier-transform IR (FT-IR) spectrometers, and the lack of affordable specific key elements such a MWIR light sources and low cost (real uncooled) detectors. Recent developments in electrooptical components are helping to overcome these drawbacks. The need for simpler solutions for analytical measurements has prompted the development of better and more affordable uncooled MWIR detectors, electronics and optics. In this paper a new MWIR spectrometry device is presented. Based on linear arrays of different geometries (64, 128 and 256 elements), NIT has developed a MWIR Variable Filter Array Spectrometer (VFAS). This compact device, with no moving parts, based on a rugged and affordable detector, is suitable to be used in applications which demand high sensitivity, good spectral discrimination, reliability and compactness, and where an alternative to the traditional scanning instrument is desired. Some measurements carried out for several industries will be also presented.

  18. Complexity Variability Assessment of Nonlinear Time-Varying Cardiovascular Control

    NASA Astrophysics Data System (ADS)

    Valenza, Gaetano; Citi, Luca; Garcia, Ronald G.; Taylor, Jessica Noggle; Toschi, Nicola; Barbieri, Riccardo

    2017-02-01

    The application of complex systems theory to physiology and medicine has provided meaningful information about the nonlinear aspects underlying the dynamics of a wide range of biological processes and their disease-related aberrations. However, no studies have investigated whether meaningful information can be extracted by quantifying second-order moments of time-varying cardiovascular complexity. To this end, we introduce a novel mathematical framework termed complexity variability, in which the variance of instantaneous Lyapunov spectra estimated over time serves as a reference quantifier. We apply the proposed methodology to four exemplary studies involving disorders drawn from cardiology, neurology and psychiatry: Congestive Heart Failure (CHF), Major Depression Disorder (MDD), Parkinson’s Disease (PD), and Post-Traumatic Stress Disorder (PTSD) patients with insomnia under a yoga training regime. We show that complexity assessments derived from simple time-averaging are not able to discern pathology-related changes in autonomic control, and we demonstrate that between-group differences in measures of complexity variability are consistent across pathologies. Pathological states such as CHF, MDD, and PD are associated with an increased complexity variability when compared to healthy controls, whereas wellbeing derived from yoga in PTSD is associated with lower time-variance of complexity.
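
    The framework itself is simple to express once a series of instantaneous complexity estimates is available. Below is a minimal sketch in Python, assuming the instantaneous estimates (for example, a dominant instantaneous Lyapunov exponent over time) have already been produced by some upstream estimator; the names inst_complexity and complexity_variability are illustrative and not taken from the paper.

      import numpy as np

      def complexity_variability(inst_complexity):
          """Summarize a time series of instantaneous complexity estimates.

          Returns the conventional time-averaged (first-order) summary and the
          variance over time (second-order summary), the latter being the kind
          of quantifier the study uses to discriminate groups.
          """
          inst_complexity = np.asarray(inst_complexity, dtype=float)
          return {
              "mean_complexity": inst_complexity.mean(),               # time average
              "complexity_variability": inst_complexity.var(ddof=1),   # second-order moment
          }

      # Illustrative use: two synthetic subjects with the same mean complexity
      # but different temporal spread of the instantaneous estimates.
      rng = np.random.default_rng(1)
      stable = 0.5 + 0.02 * rng.standard_normal(600)
      labile = 0.5 + 0.15 * rng.standard_normal(600)
      print(complexity_variability(stable), complexity_variability(labile))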

  19. Aging and the complexity of cardiovascular dynamics

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Furman, M. I.; Pincus, S. M.; Ryan, S. M.; Lipsitz, L. A.; Goldberger, A. L.

    1991-01-01

    Biomedical signals often vary in a complex and irregular manner. Analysis of variability in such signals generally does not address directly their complexity, and so may miss potentially useful information. We analyze the complexity of heart rate and beat-to-beat blood pressure using two methods motivated by nonlinear dynamics (chaos theory). A comparison of a group of healthy elderly subjects with healthy young adults indicates that the complexity of cardiovascular dynamics is reduced with aging. This suggests that complexity of variability may be a useful physiological marker.

  20. Hierarchical Modeling and Robust Synthesis for the Preliminary Design of Large Scale Complex Systems

    NASA Technical Reports Server (NTRS)

    Koch, Patrick N.

    1997-01-01

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration, which facilitates concurrent system and subsystem design exploration and the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts and allowing integration of subproblems for system synthesis; statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration; and noise modeling techniques for implementing robust preliminary design when approximate models are employed. Hierarchical partitioning and modeling techniques, including intermediate responses, linking variables, and compatibility constraints, are incorporated within a hierarchical compromise decision support problem formulation for synthesizing subproblem solutions for a partitioned system. Experimentation and approximation techniques are employed for concurrent investigation and modeling of partitioned subproblems. A modified composite experiment is introduced for fitting better predictive models across the ranges of the factors, and an approach for constructing partitioned response surfaces is developed to reduce the computational expense of experimentation when fitting models in a large number of factors. Noise modeling techniques are compared, and recommendations are offered for the implementation of robust design when approximate models are sought. These techniques, approaches, and recommendations are incorporated within the method developed for hierarchical robust preliminary design exploration. The method and the associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system. The case study is developed in collaboration with Allison Engine Company, Rolls-Royce Aerospace, and is based on the existing Allison AE3007 engine designed for midsize commercial and regional business jets. For this case study, the turbofan system-level problem is partitioned into engine cycle design and configuration design, and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation. The fan and low-pressure turbine subsystems are also modeled, but in less detail. Given the defined partitioning, these subproblems are investigated independently and concurrently, and response surface models are constructed to approximate the responses of each. These response models are then incorporated within a commercial turbofan hierarchical compromise decision support problem formulation. Five design scenarios are investigated, and robust solutions are identified. The method and solutions are verified by comparison with the AE3007 engine; the solutions obtained are similar to the AE3007 cycle and configuration, but are better with respect to many of the requirements.
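
    The response-surface step described above is, at its core, least-squares fitting of a low-order polynomial model to designed-experiment results. A minimal sketch in Python is given below (numpy only); the factor names and the toy response are invented for illustration and are not taken from the dissertation or the AE3007 case study.

      import numpy as np

      def quadratic_design_matrix(X):
          """Columns: 1, x_i, x_i^2, and all pairwise interactions x_i*x_j."""
          n, k = X.shape
          cols = [np.ones(n)]
          cols += [X[:, i] for i in range(k)]
          cols += [X[:, i] ** 2 for i in range(k)]
          cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
          return np.column_stack(cols)

      # Toy "experiment": two normalized factors sampled over a design, with a
      # response assumed to be a smooth function of the factors plus noise.
      rng = np.random.default_rng(2)
      X = rng.uniform(-1, 1, size=(30, 2))
      y = 5 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] * X[:, 1] + 0.05 * rng.standard_normal(30)

      A = quadratic_design_matrix(X)
      coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares surface fit
      y_hat = A @ coeffs                               # cheap surrogate predictions
      print("max fit residual:", np.abs(y - y_hat).max())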

  1. Variability in Rheumatology day care hospitals in Spain: VALORA study.

    PubMed

    Hernández Miguel, María Victoria; Martín Martínez, María Auxiliadora; Corominas, Héctor; Sanchez-Piedra, Carlos; Sanmartí, Raimon; Fernandez Martinez, Carmen; García-Vicuña, Rosario

    To describe the variability of the day care hospital units (DCHUs) of Rheumatology in Spain in terms of structural resources and operating processes. Multicenter descriptive study with data from a self-completed DCHU self-assessment questionnaire based on the DCHU quality standards of the Spanish Society of Rheumatology. Structural resources and operating processes were analyzed and stratified by hospital complexity (regional, general, major and complex). Variability was determined using the coefficient of variation (CV) of the clinically relevant variable that presented statistically significant differences when compared across centers. A total of 89 hospitals (16 autonomous regions and Melilla) were included in the analysis: 11.2% of hospitals were regional, 22.5% general, 27% major and 39.3% complex. A total of 92% of DCHUs were polyvalent. The number of treatments applied, the coordination between DCHUs and hospital pharmacy, and the postgraduate training process were the variables that showed statistically significant differences depending on hospital complexity. The highest rate of rheumatologic treatments was found in complex hospitals (2.97 per 1,000 population), and the lowest in general hospitals (2.01 per 1,000 population). The CV was 0.88 in major hospitals, 0.86 in regional, 0.76 in general, and 0.72 in complex hospitals. There was variability in the number of treatments delivered in DCHUs, greatest in major hospitals followed by regional centers. Nonetheless, the variability in terms of structure and function does not seem due to differences in center complexity. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.

  2. Angioarchitecture of the coeliac sympathetic ganglion complex in the common tree shrew (Tupaia glis)

    PubMed Central

    PROMWIKORN, WARAPORN; THONGPILA, SAKPORN; PRADIDARCHEEP, WISUIT; MINGSAKUL, THAWORN; CHUNHABUNDIT, PANJIT; SOMANA, REON

    1998-01-01

    The angioarchitecture of the coeliac sympathetic ganglion complex (CGC) of the common tree shrew (Tupaia glis) was studied by the vascular corrosion cast technique in conjunction with scanning electron microscopy. The CGC of the tree shrew was found to be a highly vascularised organ. It normally received arterial blood supply from branches of the inferior phrenic, superior suprarenal and inferior suprarenal arteries and of the abdominal aorta. In some animals, its blood supply was also derived from branches of the middle suprarenal arteries, coeliac artery, superior mesenteric artery and lumbar arteries. These arteries penetrated the ganglion at variable points and in slightly different patterns. They gave off peripheral branches to form a subcapsular capillary plexus while their main trunks traversed deeply into the inner part before branching into the densely packed intraganglionic capillary networks. The capillaries merged to form venules before draining into collecting veins at the peripheral region of the ganglion complex. Finally, the veins coursed to the dorsal aspect of the ganglion to drain into the renal and inferior phrenic veins and the inferior vena cava. The capillaries on the coeliac ganglion complex do not possess fenestrations. PMID:9877296

  3. In-vitro nanodiagnostic platform through nanoparticles and DNA-RNA nanotechnology.

    PubMed

    Chan, Ki; Ng, Tzi Bun

    2015-04-01

    Nanocomposites containing nanoparticles or nanostructured domains exhibit an even higher degree of material complexity, which leads to an extremely high variability of nanostructured materials. This review introduces analytical concepts and techniques for nanomaterials, derives recommendations for a qualified selection of characterization techniques for specific types of samples, and focuses on the characterization of nanoparticles and their agglomerates or aggregates. In addition, DNA nanotechnology and the more recent newcomer RNA nanotechnology have attained an almost established status among nanotechnology researchers; therefore, the core features, potential, and significant challenges of DNA nanotechnology are also highlighted as a new discipline. Moreover, nanobiochips made from nanomaterials are rapidly emerging as a new paradigm in large-scale biochemical analysis. The use of nanoscale components enables higher precision in diagnostics while considerably reducing the cost of the platform, which leads this review to explore the use of nanoparticles, nanomaterials, and other bionanotechnologies for in-vitro nanodiagnostics.

  4. Application of Three Existing Stope Boundary Optimisation Methods in an Operating Underground Mine

    NASA Astrophysics Data System (ADS)

    Erdogan, Gamze; Yavuz, Mahmut

    2017-12-01

    The underground mine planning and design optimisation process has received little attention because of the complexity and variability of problems in underground mines. Although a number of optimisation studies and software tools are available, and some of them in particular have been implemented effectively to determine the ultimate pit limits in open pit mines, there is still a lack of studies on the optimisation of ultimate stope boundaries in underground mines. The approaches proposed for this purpose aim at maximizing economic profit by selecting the best possible layout under operational, technical and physical constraints. In this paper, three existing heuristic techniques, the Floating Stope Algorithm, the Maximum Value Algorithm and the Mineable Shape Optimiser (MSO), are examined for optimisation of the stope layout in a case study. Each technique is assessed in terms of applicability, algorithm capabilities and limitations considering the underground mine planning challenges. Finally, the results are evaluated and compared.

  5. Use of a three-color chromosome in situ suppression technique for the detection of past radiation exposure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gebhart, E.; Neubauer, S.; Schmitt, G.

    1996-01-01

    A three-color chromosome in situ suppression technique and classical cytogenetic analysis were compared for the detection of chromosomal aberrations in blood lymphocytes of 27 patients who had undergone radiation therapies from 1 month to 9 years ago. Depending on the respective regimens of therapy, a high variability was found in the aberration data. Aberration rates depended on the interval between exposure and scoring rather than on the locally applied radiation doses, which were rather uniform among most patients. Chromosome in situ suppression was found to be superior to classical cytogenetics with respect not only to the spectrum of detectable aberrations but also to the uncovering of long-term effects of irradiation. Of particular interest were the relative stability of the frequency of radiation-induced reciprocal translocations and the utility of chromosome in situ suppression to uncover complex rearrangements. 27 refs., 4 figs.

  6. Sampling of atmospheric carbonyl compounds for determination by liquid chromatography after 2,4-dinitrophenylhydrazine labelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vairavamurthy, A.; Roberts, J.M.; Newman, L.

    1991-02-01

    Determination of carbonyl compounds in the ambient atmosphere is receiving increasing attention because of the critical role these compounds play as pollutants and as key participants in tropospheric photochemistry. Carbonyls are involved in photochemical reactions as products of the oxidation of hydrocarbons, precursors of oxidants including ozone and peroxycarboxylic nitric anhydrides (PANs), and as sources of free radicals and organic aerosols. A correct understanding and assessment of the role of carbonyls in tropospheric chemistry requires the accurate and precise measurement of these compounds along with their parent and product compounds. Here we discuss some of these important issues, along with the different techniques used for time-integrated collection of carbonyls in DNPH-based liquid chromatographic methods, given their complexity, variability, and importance; we emphasize the principles, advantages, and limitations of these techniques. 58 refs., 9 figs., 3 tabs.

  7. Spray drying formulation of amorphous solid dispersions.

    PubMed

    Singh, Abhishek; Van den Mooter, Guy

    2016-05-01

    Spray drying is a well-established manufacturing technique which can be used to formulate amorphous solid dispersions (ASDs), an effective strategy to deliver poorly water-soluble drugs (PWSDs). However, the inherently complex nature of the spray drying process, coupled with the specific characteristics of ASDs, makes it an interesting area to explore. Numerous diverse factors interact in an interdependent manner to determine the final product properties. This review discusses the basic background of ASDs, the various formulation and process variables influencing the critical quality attributes (CQAs) of ASDs, and aspects of downstream processing. Various aspects of spray drying such as instrumentation, thermodynamics, drying kinetics, the particle formation process, and scale-up challenges are also included. Recent advances in spray-based drying techniques are mentioned, along with some future avenues where major research thrust is needed. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Shielding Effectiveness in a Two-Dimensional Reverberation Chamber Using Finite-Element Techniques

    NASA Technical Reports Server (NTRS)

    Bunting, Charles F.

    2006-01-01

    Reverberation chambers are attaining increased importance in the determination of electromagnetic susceptibility of avionics equipment. Given the nature of the variable boundary condition, the ability of a given source to couple energy into certain modes, and the passband characteristic due to the chamber Q, the fields are typically characterized by statistical means. The emphasis of this work is to apply finite-element techniques at cutoff to the analysis of a two-dimensional structure to examine shielding-effectiveness issues in a reverberating environment. Simulated mechanical stirring is used to obtain the appropriate statistical field distribution. The shielding effectiveness (SE) in a simulated reverberating environment is compared to measurements in a reverberation chamber. A log-normal distribution for the SE is observed, with implications for system designers. The work is intended to provide further refinement in the consideration of SE in a complex electromagnetic environment.

  9. Agent-based modeling: a new approach for theory building in social psychology.

    PubMed

    Smith, Eliot R; Conrey, Frederica R

    2007-02-01

    Most social and psychological phenomena occur not as the result of isolated decisions by individuals but rather as the result of repeated interactions between multiple individuals over time. Yet the theory-building and modeling techniques most commonly used in social psychology are less than ideal for understanding such dynamic and interactive processes. This article describes an alternative approach to theory building, agent-based modeling (ABM), which involves simulating large numbers of autonomous agents that interact with each other and with a simulated environment, and observing the emergent patterns arising from their interactions. The authors believe that the ABM approach is better able than the prevailing approaches in the field (variable-based modeling, or VBM, techniques such as causal modeling) to capture the types of complex, dynamic, interactive processes that are so important in the social world. The article elaborates several important contrasts between ABM and VBM and offers specific recommendations for learning more and applying the ABM approach.
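
    As an illustration of the contrast with variable-based models, a minimal agent-based sketch in Python is given below: autonomous agents repeatedly interact with randomly chosen partners and update a continuous attitude, and a group-level pattern (attitude clustering) emerges from the interaction rule rather than from equations relating population-level variables. The update rule and parameters are illustrative only and are not taken from the article.

      import numpy as np

      def run_abm(n_agents=200, n_steps=2000, tolerance=0.3, rate=0.5, seed=3):
          """Bounded-confidence opinion dynamics: agents only influence each other
          when their attitudes are already within `tolerance` of one another."""
          rng = np.random.default_rng(seed)
          attitude = rng.uniform(0, 1, n_agents)          # each agent's state
          for _ in range(n_steps):
              i, j = rng.integers(0, n_agents, size=2)    # random pairwise interaction
              if abs(attitude[i] - attitude[j]) < tolerance:
                  mean = 0.5 * (attitude[i] + attitude[j])
                  attitude[i] += rate * (mean - attitude[i])
                  attitude[j] += rate * (mean - attitude[j])
          return attitude

      final = run_abm()
      # Emergent pattern: attitudes collapse into a small number of clusters,
      # a group-level outcome not written into any single agent's rule.
      print(np.round(np.sort(final)[::40], 2))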

  10. Effect of Aperture Field Variability, Flow Rate, and Ionic Strength on Colloid Transport in Single Fractures: Laboratory-Scale Experiments and Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Zheng, Q.; Dickson, S.; Guo, Y.

    2007-12-01

    A good understanding of the physico-chemical processes (i.e., advection, dispersion, attachment/detachment, straining, sedimentation, etc.) governing colloid transport in fractured media is imperative in order to develop appropriate bioremediation and/or bioaugmentation strategies for contaminated fractured aquifers, formulate management plans for groundwater resources to prevent pathogen contamination, and identify suitable radioactive waste disposal sites. However, research in this field is still in its infancy due to the complex heterogeneous nature of fractured media and the resulting difficulty in characterizing these media. The goal of this research is to investigate the effects of aperture field variability, flow rate and ionic strength on colloid transport processes in well-characterized single fractures. A combination of laboratory-scale experiments, numerical simulations, and imaging techniques was employed to achieve this goal. Transparent replicas were cast from natural rock fractures, and a light transmission technique was employed to measure their aperture fields directly. The surface properties of the synthetic fractures were characterized by measuring the zeta potential under different ionic strengths. A 3³ (three-factor, three-level) factorial experiment was implemented to investigate the influence of aperture field variability, flow rate, and ionic strength on different colloid transport processes in the laboratory-scale fractures, specifically dispersion and attachment/detachment. A fluorescent stain technique was employed to photograph the colloid transport processes, and an analytical solution to the one-dimensional transport equation was fit to the colloid breakthrough curves to calculate the average transport velocity, dispersion coefficient, and attachment/detachment coefficient. The Reynolds equation was solved to obtain the flow field in the measured aperture fields, and the random walk particle tracking technique was employed to model the colloid transport experiments. The images clearly show the development of preferential pathways for colloid transport in the different aperture fields and under different flow conditions. Additionally, a correlation between colloid deposition and fracture wall topography was identified. This presentation will demonstrate (1) differential transport between colloid and solute in single fractures, and the relationship between differential transport and aperture field statistics; (2) the relationship between the colloid dispersion coefficient and aperture field statistics; and (3) the relationship between attachment/detachment, aperture field statistics, fracture wall topography, flow rate, and ionic strength. In addition, this presentation will provide insight into the application of the random walk particle tracking technique for modeling colloid transport in variable-aperture fractures.
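
    The parameter-estimation step mentioned above, fitting an analytical solution of the one-dimensional transport equation to breakthrough curves, can be sketched as follows in Python. It assumes the classical advection-dispersion solution for a continuous injection (Ogata-Banks form) as a stand-in for the solution actually used in the study; the fracture length, parameter bounds, and synthetic data are illustrative.

      import numpy as np
      from scipy.special import erfc, erfcx
      from scipy.optimize import curve_fit

      L = 0.3  # observation distance along the fracture [m], assumed for illustration

      def breakthrough(t, v, D):
          """Ogata-Banks solution C/C0 for continuous injection, evaluated at x = L.

          The second term is computed as exp(-a^2) * erfcx(b), which equals
          exp(v*L/D) * erfc(b) but avoids overflow for small dispersion.
          """
          t = np.asarray(t, dtype=float)
          a = (L - v * t) / (2.0 * np.sqrt(D * t))
          b = (L + v * t) / (2.0 * np.sqrt(D * t))
          return 0.5 * (erfc(a) + np.exp(-a * a) * erfcx(b))

      # Synthetic "observed" breakthrough curve with noise, standing in for the
      # colloid concentrations measured at the fracture outlet.
      t_obs = np.linspace(60, 3600, 60)                       # seconds
      c_obs = breakthrough(t_obs, 2.0e-4, 1.0e-6)
      c_obs += 0.02 * np.random.default_rng(4).standard_normal(t_obs.size)

      (v_fit, D_fit), _ = curve_fit(breakthrough, t_obs, c_obs,
                                    p0=(1.0e-4, 1.0e-6),
                                    bounds=([1e-6, 1e-9], [1e-2, 1e-3]))
      print(f"velocity ~ {v_fit:.2e} m/s, dispersion ~ {D_fit:.2e} m^2/s")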

  11. Bias, Confounding, and Interaction: Lions and Tigers, and Bears, Oh My!

    PubMed

    Vetter, Thomas R; Mascha, Edward J

    2017-09-01

    Epidemiologists seek to make a valid inference about the causal effect between an exposure and a disease in a specific population, using representative sample data from that population. Clinical researchers likewise seek to make a valid inference about the association between an intervention and outcome(s) in a specific population, based upon their randomly collected, representative sample data. Both do so by using the available data about the sample variable to make a valid estimate of its corresponding, but unknown, underlying population parameter. Random error in an experiment can be due to the natural, periodic fluctuation or variation in the accuracy or precision of virtually any data sampling technique or health measurement tool or scale. In a clinical research study, random error can be due not only to innate human variability but also to pure chance. Systematic error in an experiment arises from an innate flaw in the data sampling technique or measurement instrument; in the clinical research setting, systematic error is more commonly referred to as systematic bias. The most commonly encountered types of bias in anesthesia, perioperative, critical care, and pain medicine research include recall bias, observational bias (Hawthorne effect), attrition bias, misclassification or informational bias, and selection bias. A confounding variable (confounding factor or confounder) is a variable that is associated, positively or negatively, with both the exposure of interest and the outcome of interest. Confounding is typically not an issue in a randomized trial because the randomized groups are sufficiently balanced on all potential confounding variables, both observed and nonobserved. However, confounding can be a major problem in any observational (nonrandomized) study, and ignoring it will often result in a distorted or incorrect estimate of the association or treatment effect. Interaction among variables, also known as effect modification, exists when the effect of one explanatory variable on the outcome depends on the particular level or value of another explanatory variable. Bias and confounding are common potential explanations for statistically significant associations between exposure and outcome when the true relationship is noncausal. Understanding interactions is vital to proper interpretation of treatment effects. These complex concepts should be consistently and appropriately considered whenever one is designing, analyzing, or interpreting data from a randomized trial or observational study.
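
    A short simulation can make the confounding point concrete: when a variable influences both exposure and outcome, the crude (unadjusted) association is distorted, while adjusting for the confounder recovers a value near the true effect. The data-generating model and effect sizes below are invented for illustration and are not from the article.

      import numpy as np

      rng = np.random.default_rng(5)
      n = 20_000
      true_effect = 0.0                     # exposure truly has no effect on outcome

      confounder = rng.standard_normal(n)                     # e.g., disease severity
      exposure = 1.0 * confounder + rng.standard_normal(n)    # severity drives exposure
      outcome = true_effect * exposure + 2.0 * confounder + rng.standard_normal(n)

      # Crude estimate: simple regression of outcome on exposure only.
      crude = np.polyfit(exposure, outcome, 1)[0]

      # Adjusted estimate: multiple regression including the confounder.
      X = np.column_stack([np.ones(n), exposure, confounder])
      adjusted = np.linalg.lstsq(X, outcome, rcond=None)[0][1]

      print(f"crude slope ~ {crude:.2f} (biased), adjusted slope ~ {adjusted:.2f} (near 0)")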

  12. Modeling Psychological Attributes in Psychology – An Epistemological Discussion: Network Analysis vs. Latent Variables

    PubMed Central

    Guyon, Hervé; Falissard, Bruno; Kop, Jean-Luc

    2017-01-01

    Network Analysis is considered a new method that challenges Latent Variable models in inferring psychological attributes. With Network Analysis, psychological attributes are derived from a complex system of components without the need to call on any latent variables. But the ontological status of psychological attributes is not adequately defined with Network Analysis, because a psychological attribute is both a complex system and a property emerging from this complex system. The aim of this article is to reappraise the legitimacy of latent variable models by engaging in an ontological and epistemological discussion on psychological attributes. Psychological attributes relate to the mental equilibrium of individuals embedded in their social interactions, as robust attractors within complex dynamic processes with emergent properties, distinct from physical entities located in precise areas of the brain. Latent variables thus possess legitimacy, because the emergent properties can be conceptualized and analyzed on the sole basis of their manifestations, without exploring the upstream complex system. However, in contrast to the usual Latent Variable models, this article argues for the integration of a dynamic system of manifestations. Latent Variable models and Network Analysis thus appear as complementary approaches. New approaches combining Latent Network Models and Network Residuals are certainly a promising way to infer psychological attributes, placing them in an inter-subjective dynamic approach. Pragmatism-realism appears as the epistemological framework required if we are to use latent variables as representations of psychological attributes. PMID:28572780

  13. Complex Variables throughout the Curriculum

    ERIC Educational Resources Information Center

    D'Angelo, John P.

    2017-01-01

    We offer many specific detailed examples, several of which are new, that instructors can use (in lecture or as student projects) to revitalize the role of complex variables throughout the curriculum. We conclude with three primary recommendations: revise the syllabus of Calculus II to allow early introductions of complex numbers and linear…

  14. Independent variable complexity for regional regression of the flow duration curve in ungauged basins

    NASA Astrophysics Data System (ADS)

    Fouad, Geoffrey; Skupin, André; Hope, Allen

    2016-04-01

    The flow duration curve (FDC) is one of the most widely used tools to quantify streamflow. Its percentile flows are often required for water resource applications, but these values must be predicted for ungauged basins with insufficient or no streamflow data. Regional regression is a commonly used approach for predicting percentile flows that involves identifying hydrologic regions and calibrating regression models to each region. The independent variables used to describe the physiographic and climatic setting of the basins are a critical component of regional regression, yet few studies have investigated their effect on resulting predictions. In this study, the complexity of the independent variables needed for regional regression is investigated. Different levels of variable complexity are applied for a regional regression consisting of 918 basins in the US. Both the hydrologic regions and regression models are determined according to the different sets of variables, and the accuracy of resulting predictions is assessed. The different sets of variables include (1) a simple set of three variables strongly tied to the FDC (mean annual precipitation, potential evapotranspiration, and baseflow index), (2) a traditional set of variables describing the average physiographic and climatic conditions of the basins, and (3) a more complex set of variables extending the traditional variables to include statistics describing the distribution of physiographic data and temporal components of climatic data. The latter set of variables is not typically used in regional regression, and is evaluated for its potential to predict percentile flows. The simplest set of only three variables performed similarly to the other more complex sets of variables. Traditional variables used to describe climate, topography, and soil offered little more to the predictions, and the experimental set of variables describing the distribution of basin data in more detail did not improve predictions. These results are largely reflective of cross-correlation existing in hydrologic datasets, and highlight the limited predictive power of many traditionally used variables for regional regression. A parsimonious approach including fewer variables chosen based on their connection to streamflow may be more efficient than a data mining approach including many different variables. Future regional regression studies may benefit from having a hydrologic rationale for including different variables and attempting to create new variables related to streamflow.
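
    A regional regression of a percentile flow on basin attributes is, at its core, a multiple linear regression fitted per hydrologic region, often in log space. The minimal sketch in Python below fits one region's model and predicts an ungauged basin; the attribute names, the log-linear form, and the synthetic data are assumptions for illustration rather than the study's actual model.

      import numpy as np

      rng = np.random.default_rng(6)
      n_basins = 120

      # Illustrative basin attributes for one hydrologic region.
      precip = rng.uniform(400, 2500, n_basins)     # mean annual precipitation [mm]
      pet = rng.uniform(500, 1500, n_basins)        # potential evapotranspiration [mm]
      bfi = rng.uniform(0.2, 0.8, n_basins)         # baseflow index [-]

      # Synthetic "observed" median flow (Q50) loosely tied to the attributes.
      q50 = 1e-4 * precip**1.3 * pet**-0.5 * bfi**0.8 * np.exp(0.1 * rng.standard_normal(n_basins))

      # Log-linear regional regression: log Q50 = b0 + b1*log P + b2*log PET + b3*log BFI
      X = np.column_stack([np.ones(n_basins), np.log(precip), np.log(pet), np.log(bfi)])
      beta, *_ = np.linalg.lstsq(X, np.log(q50), rcond=None)

      # Predict the percentile flow for an ungauged basin in the same region.
      x_new = np.array([1.0, np.log(1200.0), np.log(900.0), np.log(0.55)])
      print("predicted Q50:", np.exp(x_new @ beta))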

  15. Surveillance of Arthropod Vector-Borne Infectious Diseases Using Remote Sensing Techniques: A Review

    PubMed Central

    Kalluri, Satya; Gilruth, Peter; Rogers, David; Szczur, Martha

    2007-01-01

    Epidemiologists are adopting new remote sensing techniques to study a variety of vector-borne diseases. Associations between satellite-derived environmental variables such as temperature, humidity, and land cover type and vector density are used to identify and characterize vector habitats. The convergence of factors such as the availability of multi-temporal satellite data and georeferenced epidemiological data, collaboration between remote sensing scientists and biologists, and the availability of sophisticated, statistical geographic information system and image processing algorithms in a desktop environment creates a fertile research environment. The use of remote sensing techniques to map vector-borne diseases has evolved significantly over the past 25 years. In this paper, we review the status of remote sensing studies of arthropod vector-borne diseases due to mosquitoes, ticks, blackflies, tsetse flies, and sandflies, which are responsible for the majority of vector-borne diseases in the world. Examples of simple image classification techniques that associate land use and land cover types with vector habitats, as well as complex statistical models that link satellite-derived multi-temporal meteorological observations with vector biology and abundance, are discussed here. Future improvements in remote sensing applications in epidemiology are also discussed. PMID:17967056

  16. Evaluation of Techniques Used to Estimate Cortical Feature Maps

    PubMed Central

    Katta, Nalin; Chen, Thomas L.; Watkins, Paul V.; Barbour, Dennis L.

    2011-01-01

    Functional properties of neurons are often distributed nonrandomly within a cortical area and form topographic maps that reveal insights into neuronal organization and interconnection. Some functional maps, such as in visual cortex, are fairly straightforward to discern with a variety of techniques, while other maps, such as in auditory cortex, have resisted easy characterization. In order to determine appropriate protocols for establishing accurate functional maps in auditory cortex, artificial topographic maps were probed under various conditions, and the accuracy of estimates formed from the actual maps was quantified. Under these conditions, low-complexity maps such as sound frequency can be estimated accurately with as few as 25 total samples (e.g., electrode penetrations or imaging pixels) if neural responses are averaged together. More samples are required to achieve the highest estimation accuracy for higher complexity maps, and averaging improves map estimate accuracy even more than increasing sampling density. Undersampling without averaging can result in misleading map estimates, while undersampling with averaging can lead to the false conclusion of no map when one actually exists. Uniform sample spacing only slightly improves map estimation over nonuniform sample spacing typical of serial electrode penetrations. Tessellation plots commonly used to visualize maps estimated using nonuniform sampling are always inferior to linearly interpolated estimates, although differences are slight at higher sampling densities. Within primary auditory cortex, then, multiunit sampling with at least 100 samples would likely result in reasonable feature map estimates for all but the highest complexity maps and the highest variability that might be expected. PMID:21889537
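
    The estimation procedure compared in the study (sparse sampling of a topographic map, optional averaging of repeated noisy samples, then interpolation) can be sketched in Python as follows. The synthetic frequency map, noise level, and sample counts are illustrative only and are not the study's artificial maps.

      import numpy as np
      from scipy.interpolate import griddata

      rng = np.random.default_rng(7)

      # Synthetic low-complexity "tonotopic" map: preferred frequency varies smoothly.
      gy, gx = np.mgrid[0:1:64j, 0:1:64j]
      true_map = np.sin(2 * np.pi * gx) + 0.5 * gy

      def estimate_map(n_samples, n_repeats):
          """Sample the map at random sites, average repeats, interpolate linearly."""
          sites = rng.uniform(0, 1, size=(n_samples, 2))            # (x, y) penetrations
          true_vals = np.sin(2 * np.pi * sites[:, 0]) + 0.5 * sites[:, 1]
          noisy = true_vals[:, None] + 0.3 * rng.standard_normal((n_samples, n_repeats))
          averaged = noisy.mean(axis=1)                              # response averaging
          est = griddata(sites, averaged, (gx, gy), method="linear")
          return np.nanmean((est - true_map) ** 2)                   # ignore hull edges

      # Denser sampling and averaging both improve the map estimate.
      for n_samples, n_repeats in [(25, 1), (25, 10), (100, 1), (100, 10)]:
          print(n_samples, n_repeats, f"MSE ~ {estimate_map(n_samples, n_repeats):.3f}")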

  17. A global search inversion for earthquake kinematic rupture history: Application to the 2000 western Tottori, Japan earthquake

    USGS Publications Warehouse

    Piatanesi, A.; Cirella, A.; Spudich, P.; Cocco, M.

    2007-01-01

    We present a two-stage nonlinear technique to invert strong motion records and geodetic data to retrieve the rupture history of an earthquake on a finite fault. To account for the actual rupture complexity, the fault parameters are spatially variable peak slip velocity, slip direction, rupture time and risetime. The unknown parameters are given at the nodes of the subfaults, whereas the parameters within a subfault are allowed to vary through a bilinear interpolation of the nodal values. The forward modeling is performed with a discrete wave number technique, whose Green's functions include the complete response of the vertically varying Earth structure. During the first stage, an algorithm based on heat-bath simulated annealing generates an ensemble of models that efficiently samples the good data-fitting regions of parameter space. In the second stage (appraisal), the algorithm performs a statistical analysis of the model ensemble and computes a weighted mean model and its standard deviation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter. We present synthetic tests to show the effectiveness of the method and its robustness to uncertainty in the adopted crustal model. Finally, we apply this inverse technique to the well-recorded 2000 western Tottori, Japan, earthquake (Mw 6.6); we confirm that the rupture process is characterized by large slip (3-4 m) at very shallow depths but, unlike previous studies, we image a new slip patch (2-2.5 m) located deeper, between 14 and 18 km depth. Copyright 2007 by the American Geophysical Union.
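
    The second (appraisal) stage is a statistical summary of the ensemble produced by the search stage: each sampled model is weighted by how well it fits the data, and a weighted mean model and standard deviation are extracted parameter by parameter. The Python sketch below assumes the ensemble and misfits already exist and uses an illustrative Gibbs-like weighting; it is not the authors' exact estimator.

      import numpy as np

      def appraise_ensemble(models, misfits, temperature=1.0):
          """Weighted mean and standard deviation of an ensemble of rupture models.

          models  : (n_models, n_params) array of sampled fault parameters
                    (e.g., peak slip velocity, rake, rupture time, risetime per node)
          misfits : (n_models,) data misfit of each model (lower is better)
          """
          misfits = np.asarray(misfits, dtype=float)
          w = np.exp(-(misfits - misfits.min()) / temperature)   # Gibbs-like weights
          w /= w.sum()
          mean = w @ models                                      # weighted mean model
          var = w @ (models - mean) ** 2                         # weighted variance
          return mean, np.sqrt(var)

      # Illustrative use with a random ensemble standing in for annealing output.
      rng = np.random.default_rng(8)
      ensemble = rng.normal(loc=2.0, scale=0.5, size=(500, 4))   # 4 nodal parameters
      misfit = ((ensemble - 2.2) ** 2).sum(axis=1)               # toy misfit function
      mean_model, std_model = appraise_ensemble(ensemble, misfit)
      print(mean_model, std_model)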

  18. On coarse projective integration for atomic deposition in amorphous systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chuang, Claire Y.; Sinno, Talid; Han, Sang M.

    2015-10-07

    Direct molecular dynamics simulation of atomic deposition under realistic conditions is notoriously challenging because of the wide range of time scales that must be captured. Numerous simulation approaches have been proposed to address the problem, often requiring a compromise between model fidelity, algorithmic complexity, and computational efficiency. Coarse projective integration, an example application of the “equation-free” framework, offers an attractive balance between these constraints. Here, periodically applied, short atomistic simulations are employed to compute time derivatives of slowly evolving coarse variables that are then used to numerically integrate differential equations over relatively large time intervals. A key obstacle to the application of this technique in realistic settings is the “lifting” operation in which a valid atomistic configuration is recreated from knowledge of the coarse variables. Using Ge deposition on amorphous SiO2 substrates as an example application, we present a scheme for lifting realistic atomistic configurations comprised of collections of Ge islands on amorphous SiO2 using only a few measures of the island size distribution. The approach is shown to provide accurate initial configurations to restart molecular dynamics simulations at arbitrary points in time, enabling the application of coarse projective integration for this morphologically complex system.

  19. On Coarse Projective Integration for Atomic Deposition in Amorphous Systems

    DOE PAGES

    Chuang, Claire Y.; Han, Sang M.; Zepeda-Ruiz, Luis A.; ...

    2015-10-02

    Direct molecular dynamics simulation of atomic deposition under realistic conditions is notoriously challenging because of the wide range of timescales that must be captured. Numerous simulation approaches have been proposed to address the problem, often requiring a compromise between model fidelity, algorithmic complexity and computational efficiency. Coarse projective integration, an example application of the ‘equation-free’ framework, offers an attractive balance between these constraints. Here, periodically applied, short atomistic simulations are employed to compute gradients of slowly-evolving coarse variables that are then used to numerically integrate differential equations over relatively large time intervals. A key obstacle to the application of this technique in realistic settings is the ‘lifting’ operation in which a valid atomistic configuration is recreated from knowledge of the coarse variables. Using Ge deposition on amorphous SiO2 substrates as an example application, we present a scheme for lifting realistic atomistic configurations comprised of collections of Ge islands on amorphous SiO2 using only a few measures of the island size distribution. In conclusion, the approach is shown to provide accurate initial configurations to restart molecular dynamics simulations at arbitrary points in time, enabling the application of coarse projective integration for this morphologically complex system.
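
    Stripped of the atomistic detail, coarse projective integration alternates short bursts of the fine-scale simulator with large extrapolation steps of the coarse variables, with the lifting step reinitializing the fine model from the coarse state. The Python sketch below replaces the expensive fine simulator with a cheap stand-in ODE step; the toy dynamics, step sizes, and trivial lifting rule are purely illustrative and not the deposition model of these papers.

      import numpy as np

      def fine_step(u, dt=1e-3):
          """Stand-in for a short burst of the detailed (e.g., MD) simulator."""
          return u + dt * (-u * (1.0 - u))        # toy coarse dynamics du/dt = -u(1-u)

      def coarse_projective_integration(u0, n_outer=50, n_fine=20, dt_fine=1e-3, dt_jump=0.05):
          u, t = u0, 0.0
          history = [(t, u)]
          for _ in range(n_outer):
              # 1) "Lift": here the coarse variable is the fine state itself, so
              #    lifting is trivial; in the deposition problem it rebuilds an
              #    island configuration from a few size-distribution moments.
              # 2) Run a short burst of fine steps and estimate the time derivative.
              burst = [u]
              for _ in range(n_fine):
                  burst.append(fine_step(burst[-1], dt_fine))
              dudt = (burst[-1] - burst[-2]) / dt_fine
              # 3) "Project": take a large explicit Euler jump with that derivative.
              u = burst[-1] + dt_jump * dudt
              t += n_fine * dt_fine + dt_jump
              history.append((t, u))
          return np.array(history)

      print(coarse_projective_integration(0.9)[-1])   # decays toward the fixed point u = 0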

  20. A Three-Dimensional Finite-Element Model for Simulating Water Flow in Variably Saturated Porous Media

    NASA Astrophysics Data System (ADS)

    Huyakorn, Peter S.; Springer, Everett P.; Guvanasen, Varut; Wadsworth, Terry D.

    1986-12-01

    A three-dimensional finite-element model for simulating water flow in variably saturated porous media is presented. The model formulation is general and capable of accommodating complex boundary conditions associated with seepage faces and infiltration or evaporation on the soil surface. Included in this formulation is an improved Picard algorithm designed to cope with severely nonlinear soil moisture relations. The algorithm is formulated for both rectangular and triangular prism elements. The element matrices are evaluated using an "influence coefficient" technique that avoids costly numerical integration. Spatial discretization of a three-dimensional region is performed using a vertical slicing approach designed to accommodate complex geometry with irregular boundaries, layering, and/or lateral discontinuities. Matrix solution is achieved using a slice successive overrelaxation scheme that permits a fairly large number of nodal unknowns (on the order of several thousand) to be handled efficiently on small minicomputers. Six examples are presented to verify and demonstrate the utility of the proposed finite-element model. The first four examples concern one- and two-dimensional flow problems used as sample problems to benchmark the code. The remaining examples concern three-dimensional problems. These problems are used to illustrate the performance of the proposed algorithm in three-dimensional situations involving seepage faces and anisotropic soil media.
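
    The Picard treatment of the nonlinear soil-moisture relations amounts to lagging the nonlinear coefficients: at each iteration the conductivity is evaluated at the previous head estimate, a linear system is solved, and the loop repeats until the head change is small. The Python sketch below applies that idea to a one-dimensional steady-state variably saturated column with an assumed exponential conductivity law; the grid, boundary values, and K(h) relation are illustrative and not taken from the model described above.

      import numpy as np

      def picard_1d_column(n=51, h_top=-0.5, h_bottom=0.0, alpha=2.0,
                           max_iter=100, tol=1e-8):
          """Solve d/dz( K(h) dh/dz ) = 0 on [0, 1] with Dirichlet ends by Picard iteration."""
          z = np.linspace(0.0, 1.0, n)
          K = lambda h: np.exp(alpha * np.minimum(h, 0.0))   # Gardner-type conductivity
          h = np.linspace(h_bottom, h_top, n)                # initial guess

          for it in range(max_iter):
              k_face = 0.5 * (K(h[:-1]) + K(h[1:]))          # lagged interface conductivities
              A = np.zeros((n, n))
              b = np.zeros(n)
              A[0, 0] = A[-1, -1] = 1.0
              b[0], b[-1] = h_bottom, h_top
              for i in range(1, n - 1):
                  A[i, i - 1] = k_face[i - 1]
                  A[i, i + 1] = k_face[i]
                  A[i, i] = -(k_face[i - 1] + k_face[i])
              h_new = np.linalg.solve(A, b)
              if np.max(np.abs(h_new - h)) < tol:            # convergence of the Picard loop
                  return z, h_new, it + 1
              h = h_new
          return z, h, max_iter

      z, h, iters = picard_1d_column()
      print(f"converged in {iters} Picard iterations; h(mid) = {h[len(h)//2]:.4f}")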

  1. An analysis code for the Rapid Engineering Estimation of Momentum and Energy Losses (REMEL)

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.

    1994-01-01

    Nonideal behavior has traditionally been modeled by defining an efficiency (a comparison between actual and isentropic processes) and then specifying it by empirical or heuristic methods. With the increasing complexity of aeropropulsion system designs, the reliability of these more traditional methods is uncertain. Computational fluid dynamics (CFD) and experimental methods can provide this information but are expensive in terms of human resources, cost, and time. This report discusses an alternative to empirical and CFD methods that applies classical analytical techniques and a simplified flow model to provide rapid engineering estimates of these losses based on steady, quasi-one-dimensional governing equations including viscous and heat transfer terms (estimated via the Reynolds analogy). A preliminary verification of REMEL has been compared with full Navier-Stokes (FNS) and CFD boundary layer computations for several high-speed inlet and forebody designs. The current method compares quite well with results from more complex methods, and its solutions agree very well with simple degenerate and asymptotic results such as Fanno flow, isentropic variable-area flow, and a newly developed solution for combined variable-area duct flow with friction. These comparisons suggest that the method may offer an alternative to transitional and CFD-intensive methods for the rapid estimation of viscous and heat transfer losses in aeropropulsion systems.

  2. Perceptions of Voice Teachers Regarding Students' Vocal Behaviors During Singing and Speaking.

    PubMed

    Beeman, Shellie A

    2017-01-01

    This study examined voice teachers' perceptions of their instruction of healthy singing and speaking voice techniques. An online, researcher-generated questionnaire based on the McClosky technique was administered to college/university voice teachers listed as members in the 2012-2013 College Music Society directory. A majority of participants believed there to be a relationship between the health of the singing voice and the health of the speaking voice. Participants' perception scores were the most positive for variable MBSi, the monitoring of students' vocal behaviors during singing. Perception scores for variable TVB, the teaching of healthy vocal behaviors, and variable MBSp, the monitoring of students' vocal behaviors while speaking, ranked second and third, respectively. Perception scores for variable TVB were primarily associated with participants' familiarity with voice rehabilitation techniques, gender, and familiarity with the McClosky technique. Perception scores for variable MBSi were primarily associated with participants' familiarity with voice rehabilitation techniques, gender, type of student taught, and instruction of a student with a voice disorder. Perception scores for variable MBSp were correlated with the greatest number of characteristics, including participants' familiarity with voice rehabilitation techniques, familiarity with the McClosky technique, type of student taught, years of teaching experience, and instruction of a student with a voice disorder. Voice teachers are purportedly working with injured voices and attempting to include vocal health in their instruction. Although a voice teacher is not obligated to pursue further rehabilitative training, the current study revealed a positive relationship between familiarity with specific rehabilitation techniques and vocal health. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  3. Plasticity models of material variability based on uncertainty quantification techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, Reese E.; Rizzi, Francesco; Boyce, Brad

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other microstructural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments, using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. We demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.

  4. VO2 and VCO2 variabilities through indirect calorimetry instrumentation.

    PubMed

    Cadena-Méndez, Miguel; Escalante-Ramírez, Boris; Azpiroz-Leehan, Joaquín; Infante-Vázquez, Oscar

    2013-01-01

    The aim of this paper is to understand how to measure VO2 and VCO2 variabilities in indirect calorimetry (IC), since we believe they can explain the high variation in resting energy expenditure (REE) estimates. We propose that variabilities should be measured separately from the VO2 and VCO2 averages to understand technological differences among metabolic monitors when they estimate the REE. To test this hypothesis, the mixing chamber (MC) and breath-by-breath (BbB) techniques were used to measure the VO2 and VCO2 averages and their variabilities. Variances and power spectrum energies in the 0-0.5 Hz band were measured to establish differences between techniques in steady and non-steady state. A hybrid calorimeter implementing both IC techniques studied a population of 15 volunteers who underwent the clino-orthostatic maneuver in order to produce the two physiological stages. The results showed that inter-individual VO2 and VCO2 variabilities measured as variances were negligible using the MC, while variabilities measured as spectral energies using the BbB increased by 71% and 56% (p < 0.05), respectively. Additionally, the energy analysis showed an unexpected cyclic rhythm at 0.025 Hz only during the orthostatic stage, which is new physiological information not reported previously. The VO2 and VCO2 inter-individual averages increased by 63% and 39% with the MC (p < 0.05) and by 32% and 40% with the BbB (p < 0.1), respectively, without noticeable statistical differences between techniques. The conclusions are: (a) metabolic monitors should simultaneously include the MC and BbB techniques to correctly interpret the effect of steady or non-steady state variabilities on the REE estimation; (b) the MC is the appropriate technique to compute averages, since it behaves as a low-pass filter that minimizes variances; (c) the BbB is the ideal technique to measure the variabilities, since it works as a high-pass filter generating discrete time series amenable to spectral analysis; and (d) the new physiological information in the VO2 and VCO2 variabilities can help to explain why metabolic monitors with dissimilar IC techniques give different results in the REE estimation.
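
    The spectral quantifier described above, the signal energy in the 0-0.5 Hz band, can be computed from a breath-by-breath series with a standard periodogram estimate. The Python sketch below uses Welch's method from scipy and a synthetic VO2-like series, so the sampling rate, oscillation, and noise level are assumptions rather than the paper's data.

      import numpy as np
      from scipy.signal import welch

      def band_energy(signal, fs, f_lo=0.0, f_hi=0.5):
          """Integrate the Welch power spectral density between f_lo and f_hi [Hz]."""
          freqs, psd = welch(signal, fs=fs, nperseg=min(256, len(signal)))
          mask = (freqs >= f_lo) & (freqs <= f_hi)
          return np.trapz(psd[mask], freqs[mask])

      # Synthetic breath-by-breath VO2 series sampled at 1 Hz: a slow oscillation at
      # 0.025 Hz (like the orthostatic rhythm reported) plus measurement noise.
      fs = 1.0
      t = np.arange(0, 600, 1.0 / fs)
      vo2 = 250 + 15 * np.sin(2 * np.pi * 0.025 * t) + 5 * np.random.default_rng(9).standard_normal(t.size)

      print(f"0-0.5 Hz band energy: {band_energy(vo2, fs):.1f}")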

  5. General statistical considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eberhardt, L L; Gilbert, R O

    From NAEG plutonium environmental studies program meeting; Las Vegas, Nevada, USA (2 Oct 1973). The high sampling variability encountered in environmental plutonium studies along with high analytical costs makes it very important that efficient soil sampling plans be used. However, efficient sampling depends on explicit and simple statements of the objectives of the study. When there are multiple objectives it may be difficult to devise a wholly suitable sampling scheme. Sampling for long-term changes in plutonium concentration in soils may also be complex and expensive. Further attention to problems associated with compositing samples is recommended, as is the consistent use of random sampling as a basic technique. (auth)

  6. Ideal, nonideal, and no-marker variables: The confirmatory factor analysis (CFA) marker technique works when it matters.

    PubMed

    Williams, Larry J; O'Boyle, Ernest H

    2015-09-01

    A persistent concern in the management and applied psychology literature is the effect of common method variance on observed relations among variables. Recent work (i.e., Richardson, Simmering, & Sturman, 2009) evaluated 3 analytical approaches to controlling for common method variance, including the confirmatory factor analysis (CFA) marker technique. Their findings indicated significant problems with this technique, especially with nonideal marker variables (those with theoretical relations to substantive variables). Based on their simulation results, Richardson et al. concluded that not correcting for method variance provides more accurate estimates than using the CFA marker technique. We reexamined the effects of using marker variables in a simulation study and found that the degree of error in estimates of a substantive factor correlation was relatively small in most cases, and much smaller than the error associated with making no correction. Further, in instances in which the error was large, the correlations between the marker and substantive scales were higher than those found in organizational research with marker variables. We conclude that in most practical settings, the CFA marker technique yields parameter estimates close to their true values, and that the criticisms made by Richardson et al. are overstated. (c) 2015 APA, all rights reserved.

  7. A FORTRAN technique for correlating a circular environmental variable with a linear physiological variable in the sugar maple.

    PubMed

    Pease, J M; Morselli, M F

    1987-01-01

    This paper deals with a computer program adapted to a statistical method for analyzing an unlimited quantity of binary recorded data of an independent circular variable (e.g., wind direction) and a linear variable (e.g., maple sap flow volume). Circular variables cannot be statistically analyzed with linear methods unless they have been transformed. The program calculates a critical quantity, the acrophase angle (φ0). The technique is adapted from original mathematics [1] and is written in Fortran 77 for easier conversion between computer networks. Correlation analysis can be performed following the program, or regression, which, because of the circular nature of the independent variable, becomes periodic regression. The technique was tested on a file of approximately 4050 data pairs.
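
    The underlying statistical idea, regressing a linear variable on the sine and cosine of a circular predictor and recovering an amplitude and an acrophase angle, is language-independent. A minimal Python sketch under those assumptions is given below; it mirrors the general cosinor-style approach, not the specific Fortran 77 program, and the wind-direction/sap-flow data are simulated for illustration.

      import numpy as np

      def periodic_regression(theta_deg, y):
          """Fit y = b0 + b1*cos(theta) + b2*sin(theta); return mean, amplitude, acrophase."""
          theta = np.deg2rad(theta_deg)
          X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
          b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
          amplitude = np.hypot(b1, b2)
          acrophase = np.degrees(np.arctan2(b2, b1)) % 360   # direction of peak response
          return b0, amplitude, acrophase

      # Illustrative data: wind direction in degrees vs. a sap-flow-like response
      # that peaks when the wind comes from about 300 degrees.
      rng = np.random.default_rng(10)
      wind_dir = rng.uniform(0, 360, 300)
      sap_flow = 10 + 3 * np.cos(np.deg2rad(wind_dir - 300)) + rng.standard_normal(300)

      print(periodic_regression(wind_dir, sap_flow))   # roughly (10, 3, 300)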

  8. Variability in Second Language Learning: The Roles of Individual Differences, Learning Conditions, and Linguistic Complexity

    ERIC Educational Resources Information Center

    Tagarelli, Kaitlyn M.; Ruiz, Simón; Vega, José Luis Moreno; Rebuschat, Patrick

    2016-01-01

    Second language learning outcomes are highly variable, due to a variety of factors, including individual differences, exposure conditions, and linguistic complexity. However, exactly how these factors interact to influence language learning is unknown. This article examines the relationship between these three variables in language learners.…

  9. Diminished heart rate complexity in adolescent girls: a sign of vulnerability to anxiety disorders?

    PubMed

    Fiol-Veny, Aina; De la Torre-Luque, Alejandro; Balle, Maria; Bornas, Xavier

    2018-07-01

    Diminished heart rate variability has been found to be associated with high anxiety symptomatology. Since adolescence is the period of onset for many anxiety disorders, this study aimed to determine sex- and anxiety-related differences in heart rate variability and complexity in adolescents. We created four groups according to sex and anxiety symptomatology: high-anxiety girls (n = 24) and boys (n = 25), and low-anxiety girls (n = 22) and boys (n = 24) and recorded their cardiac function while they performed regular school activities. A series of two-way (sex and anxiety) MANOVAs were performed on time domain variability, frequency domain variability, and non-linear complexity. We obtained no multivariate interaction effects between sex and anxiety, but highly anxious participants had lower heart rate variability than the low-anxiety group. Regarding sex, girls showed lower heart rate variability and complexity than boys. The results suggest that adolescent girls have a less flexible cardiac system that could be a marker of the girls' vulnerability to developing anxiety disorders.

  10. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    PubMed

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSNs) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, the orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by a matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
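
    For readers unfamiliar with the greedy recovery step, a minimal orthogonal matching pursuit sketch in Python is given below; it implements the textbook OMP building block that the first phase of vOMMP extends, not the vOMMP algorithm or the hardware-friendly MIF variant described in the paper, and the sensing matrix and sparse vector are synthetic.

      import numpy as np

      def omp(Phi, y, sparsity):
          """Greedy OMP: recover a sparse x from y = Phi @ x."""
          m, n = Phi.shape
          residual = y.copy()
          support = []
          for _ in range(sparsity):
              # Pick the column most correlated with the current residual.
              idx = int(np.argmax(np.abs(Phi.T @ residual)))
              if idx not in support:
                  support.append(idx)
              # Re-fit all selected coefficients jointly (the "orthogonal" step).
              coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
              residual = y - Phi[:, support] @ coeffs
          x_hat = np.zeros(n)
          x_hat[support] = coeffs
          return x_hat

      # Illustrative recovery of a synthetic sparse wavelet-domain vector.
      rng = np.random.default_rng(11)
      m, n, k = 64, 256, 8
      Phi = rng.standard_normal((m, n)) / np.sqrt(m)     # random sensing matrix
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
      x_rec = omp(Phi, Phi @ x_true, sparsity=k)
      print("max recovery error:", np.abs(x_rec - x_true).max())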

  11. A geomorphology-based ANFIS model for multi-station modeling of rainfall-runoff process

    NASA Astrophysics Data System (ADS)

    Nourani, Vahid; Komasi, Mehdi

    2013-05-01

    This paper demonstrates the potential of Artificial Intelligence (AI) techniques for predicting daily runoff at multiple gauging stations. Uncertainty and complexity of the rainfall-runoff process, due to its variability in space and time on the one hand and the lack of historical data on the other, cause difficulties in the spatiotemporal modeling of the process. In this paper, an Integrated Geomorphological Adaptive Neuro-Fuzzy Inference System (IGANFIS) model conjugated with a C-means clustering algorithm was used for rainfall-runoff modeling at multiple stations of the Eel River watershed, California. The proposed model can be used for predicting runoff at stations with a lack of data, or for any sub-basin within the watershed, because it employs the spatial and temporal variables of the sub-basins as the model inputs. This ability of the integrated model for spatiotemporal modeling of the process was examined through the cross-validation technique for a station. In this way, different ANFIS structures were trained using the Sugeno algorithm in order to estimate daily discharge values at different stations. In order to improve the model efficiency, the input data were then classified into clusters by means of the fuzzy C-means (FCM) method. The goodness-of-fit measures support the gainful use of the IGANFIS and FCM methods in spatiotemporal modeling of hydrological processes.

  12. Modeling of nitrate concentration in groundwater using artificial intelligence approach--a case study of Gaza coastal aquifer.

    PubMed

    Alagha, Jawad S; Said, Md Azlin Md; Mogheir, Yunes

    2014-01-01

    Nitrate concentration in groundwater is influenced by complex and interrelated variables, leading to great difficulty during the modeling process. The objectives of this study are (1) to evaluate the performance of two artificial intelligence (AI) techniques, namely artificial neural networks and support vector machine, in modeling groundwater nitrate concentration using scant input data, as well as (2) to assess the effect of data clustering as a pre-modeling technique on the developed models' performance. The AI models were developed using data from 22 municipal wells of the Gaza coastal aquifer in Palestine from 2000 to 2010. Results indicated high simulation performance, with the correlation coefficient and the mean average percentage error of the best model reaching 0.996 and 7 %, respectively. The variables that strongly influenced groundwater nitrate concentration were previous nitrate concentration, groundwater recharge, and on-ground nitrogen load of each land use land cover category in the well's vicinity. The results also demonstrated the merit of performing clustering of input data prior to the application of AI models. With their high performance and simplicity, the developed AI models can be effectively utilized to assess the effects of future management scenarios on groundwater nitrate concentration, leading to more reasonable groundwater resources management and decision-making.

  13. Cost characteristics of hospitals.

    PubMed

    Smet, Mike

    2002-09-01

    Modern hospitals are complex multi-product organisations. The analysis of a hospital's production and/or cost structure should therefore use the appropriate techniques. Flexible functional forms based on the neo-classical theory of the firm seem to be most suitable. Using neo-classical cost functions implicitly assumes minimisation of (variable) costs given that input prices and outputs are exogenous. Local and global properties of flexible functional forms and short-run versus long-run equilibrium are further issues that require thorough investigation. In order to put the results based on econometric estimations of cost functions in the right perspective, it is important to keep these considerations in mind when using flexible functional forms. The more recent studies seem to agree that hospitals generally do not operate in their long-run equilibrium (they tend to over-invest in capital, both capacity and equipment) and that it is therefore appropriate to estimate a short-run variable cost function. However, few studies explicitly take into account the implicit assumptions and restrictions embedded in the models they use. An alternative method to explain differences in costs uses management accounting techniques to identify the cost drivers of overhead costs. Related issues such as the cost-shifting and cost-adjusting behaviour of hospitals and the influence of market structure on competition, prices and costs are also briefly discussed.

  14. Linear Quadratic Tracking Design for a Generic Transport Aircraft with Structural Load Constraints

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Frost, Susan A.; Taylor, Brian R.

    2011-01-01

    When designing control laws for systems with constraints added to the tracking performance, control allocation methods can be utilized. Control allocation methods are used when there are more command inputs than controlled variables. Constraints that require allocators include surface saturation limits, structural load limits, drag-reduction constraints, and actuator failures. Most transport aircraft have many actuated surfaces compared with the three controlled variables (such as angle of attack, roll rate, and sideslip angle). To distribute the control effort among the redundant set of actuators, either a fixed-mixer approach or online control allocation techniques can be utilized. The benefit of an online allocator is that constraints can be considered in the design, whereas the fixed mixer cannot accommodate them. However, an online control allocator has the disadvantage of not guaranteeing a surface schedule, which can produce ill-defined loads on the aircraft. This load uncertainty and complexity has prevented some controller designs from using advanced allocation techniques. This paper considers actuator redundancy management for a class of over-actuated systems with real-time structural load limits, using linear quadratic tracking applied to the generic transport model. A roll maneuver example with an artificial load-limit constraint is shown and compared with the same maneuver without load limitation.
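
    To make the allocation idea concrete, the sketch below solves a bounded least-squares allocation problem with SciPy; the effectiveness matrix, commanded moments, and surface limits are invented for illustration and are not the paper's generic transport model values.

    ```python
    import numpy as np
    from scipy.optimize import lsq_linear

    # hypothetical control-effectiveness matrix: 3 controlled variables, 6 surfaces
    B = np.array([[1.0, 0.9, 0.2, 0.1, 0.0, 0.0],    # roll
                  [0.0, 0.1, 1.0, 0.8, 0.0, 0.1],    # pitch
                  [0.0, 0.0, 0.0, 0.1, 1.0, 0.9]])   # yaw
    v_des = np.array([0.5, -0.2, 0.1])               # commanded moments

    # position (saturation) limits act as box constraints; a structural-load
    # limit could be folded in by tightening the bounds on the loaded surfaces
    lb = -np.ones(6) * 0.4
    ub = np.ones(6) * 0.4
    sol = lsq_linear(B, v_des, bounds=(lb, ub))
    print("allocated deflections:", np.round(sol.x, 3))
    ```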

  15. Deciphering the complex: methodological overview of statistical models to derive OMICS-based biomarkers.

    PubMed

    Chadeau-Hyam, Marc; Campanella, Gianluca; Jombart, Thibaut; Bottolo, Leonardo; Portengen, Lutzen; Vineis, Paolo; Liquet, Benoit; Vermeulen, Roel C H

    2013-08-01

    Recent technological advances in molecular biology have given rise to numerous large-scale datasets whose analysis imposes serious methodological challenges mainly relating to the size and complex structure of the data. Considerable experience in analyzing such data has been gained over the past decade, mainly in genetics, from the Genome-Wide Association Study era, and more recently in transcriptomics and metabolomics. Building upon the corresponding literature, we provide here a nontechnical overview of well-established methods used to analyze OMICS data within three main types of regression-based approaches: univariate models including multiple testing correction strategies, dimension reduction techniques, and variable selection models. Our methodological description focuses on methods for which ready-to-use implementations are available. We describe the main underlying assumptions, the main features, and advantages and limitations of each of the models. This descriptive summary constitutes a useful tool for driving methodological choices while analyzing OMICS data, especially in environmental epidemiology, where the emergence of the exposome concept clearly calls for unified methods to analyze marginally and jointly complex exposure and OMICS datasets. Copyright © 2013 Wiley Periodicals, Inc.
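
    As a concrete instance of the variable-selection family of approaches mentioned above, a short scikit-learn sketch of cross-validated lasso regression on a synthetic high-dimensional design is given below; the data dimensions and coefficients are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import LassoCV

    # hypothetical OMICS-style design: many features, only a handful informative
    rng = np.random.default_rng(0)
    n, p = 120, 500
    X = rng.standard_normal((n, p))
    beta = np.zeros(p)
    beta[:5] = [2.0, -1.5, 1.0, 0.8, -0.6]
    y = X @ beta + rng.normal(0, 1, n)

    # penalised regression as a variable-selection model; penalty chosen by cross-validation
    lasso = LassoCV(cv=5).fit(X, y)
    selected = np.flatnonzero(lasso.coef_)
    print("features retained:", selected[:10], "...", len(selected), "in total")
    ```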

  16. Large area sub-micron chemical imaging of magnesium in sea urchin teeth.

    PubMed

    Masic, Admir; Weaver, James C

    2015-03-01

    The heterogeneous and site-specific incorporation of inorganic ions can profoundly influence the local mechanical properties of damage tolerant biological composites. Using the sea urchin tooth as a research model, we describe a multi-technique approach to spatially map the distribution of magnesium in this complex multiphase system. Through the combined use of 16-bit backscattered scanning electron microscopy, multi-channel energy dispersive spectroscopy elemental mapping, and diffraction-limited confocal Raman spectroscopy, we demonstrate a new set of high throughput, multi-spectral, high resolution methods for the large scale characterization of mineralized biological materials. In addition, instrument hardware and data collection protocols can be modified such that several of these measurements can be performed on irregularly shaped samples with complex surface geometries and without the need for extensive sample preparation. Using these approaches, in conjunction with whole animal micro-computed tomography studies, we have been able to spatially resolve micron and sub-micron structural features across macroscopic length scales on entire urchin tooth cross-sections and correlate these complex morphological features with local variability in elemental composition. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. cAMP-dependent protein kinase (PKA) complexes probed by complementary differential scanning fluorimetry and ion mobility–mass spectrometry

    PubMed Central

    Byrne, Dominic P.; Vonderach, Matthias; Ferries, Samantha; Brownridge, Philip J.; Eyers, Claire E.; Eyers, Patrick A.

    2016-01-01

    cAMP-dependent protein kinase (PKA) is an archetypal biological signaling module and a model for understanding the regulation of protein kinases. In the present study, we combine biochemistry with differential scanning fluorimetry (DSF) and ion mobility–mass spectrometry (IM–MS) to evaluate effects of phosphorylation and structure on the ligand binding, dynamics and stability of components of heteromeric PKA protein complexes in vitro. We uncover dynamic, conformationally distinct populations of the PKA catalytic subunit with distinct structural stability and susceptibility to the physiological protein inhibitor PKI. Native MS of reconstituted PKA R2C2 holoenzymes reveals variable subunit stoichiometry and holoenzyme ablation by PKI binding. Finally, we find that although a ‘kinase-dead’ PKA catalytic domain cannot bind to ATP in solution, it interacts with several prominent chemical kinase inhibitors. These data demonstrate the combined power of IM–MS and DSF to probe PKA dynamics and regulation, techniques that can be employed to evaluate other protein-ligand complexes, with broad implications for cellular signaling. PMID:27444646

  18. Temperature variability analysis using wavelets and multiscale entropy in patients with systemic inflammatory response syndrome, sepsis, and septic shock.

    PubMed

    Papaioannou, Vasilios E; Chouvarda, Ioanna G; Maglaveras, Nikos K; Pneumatikos, Ioannis A

    2012-12-12

    Even though temperature is a continuous quantitative variable, its measurement has been considered a snapshot of a process, indicating whether a patient is febrile or afebrile. Recently, other diagnostic techniques have been proposed that associate different properties of the temperature curve with severity of illness in the Intensive Care Unit (ICU), based on complexity analysis of continuously monitored body temperature. In this study, we tried to assess temperature complexity in patients with systemic inflammation during a suspected ICU-acquired infection, by using wavelet transformation and multiscale entropy of temperature signals, in a cohort of mixed critically ill patients. Twenty-two patients were enrolled in the study. In five, systemic inflammatory response syndrome (SIRS, group 1) developed, 10 had sepsis (group 2), and seven had septic shock (group 3). All temperature curves were studied during the first 24 hours of an inflammatory state. A wavelet transformation was applied, decomposing the signal into different frequency components (scales) that have been found to reflect neurogenic and metabolic inputs on temperature oscillations. Wavelet energy and entropy per scale, associated with complexity in specific frequency bands, and multiscale entropy of the whole signal were calculated. Moreover, a clustering technique and a linear discriminant analysis (LDA) were applied to permit pattern recognition in the data sets and to assess the diagnostic accuracy of different wavelet features among the three classes of patients. Statistically significant differences were found in wavelet entropy between patients with SIRS and groups 2 and 3, and in specific ultradian bands between SIRS and group 3, with decreased entropy in sepsis. Cluster analysis using wavelet features in specific bands revealed concrete clusters closely related with the groups in focus. LDA after wrapper-based feature selection was able to classify SIRS from the two sepsis groups with an accuracy of more than 80%, based on multiparametric patterns of entropy values in the very low frequencies, indicating reduced metabolic inputs on local thermoregulation, probably associated with extensive vasodilatation. We suggest that complexity analysis of temperature signals can assess inherent thermoregulatory dynamics during systemic inflammation and has increased discriminating value in patients with infectious versus noninfectious conditions, probably associated with severity of illness.
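
    A much-simplified sketch of per-band wavelet energy and entropy is shown below, assuming the PyWavelets (pywt) package; the wavelet family, decomposition depth, and toy temperature series are assumptions and do not reproduce the study's feature set.

    ```python
    import numpy as np
    import pywt  # PyWavelets, assumed dependency

    def wavelet_band_entropy(signal, wavelet="db4", level=5):
        """Relative wavelet energy per decomposition scale and a Shannon-type entropy."""
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        energies = np.array([np.sum(c ** 2) for c in coeffs])
        p = energies / energies.sum()                  # relative energy per band
        total_entropy = -np.sum(p * np.log(p + 1e-12))
        return p, total_entropy

    # toy temperature-like series: slow ultradian oscillation plus noise
    t = np.arange(0, 24 * 60)                          # one day of minute samples
    temp = 37.0 + 0.3 * np.sin(2 * np.pi * t / (6 * 60)) + 0.05 * np.random.randn(t.size)
    rel_energy, entropy = wavelet_band_entropy(temp)
    print("relative energy per band:", np.round(rel_energy, 3), "entropy:", round(entropy, 3))
    ```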

  19. Temperature variability analysis using wavelets and multiscale entropy in patients with systemic inflammatory response syndrome, sepsis, and septic shock

    PubMed Central

    2012-01-01

    Background Even though temperature is a continuous quantitative variable, its measurement has been considered a snapshot of a process, indicating whether a patient is febrile or afebrile. Recently, other diagnostic techniques have been proposed that associate different properties of the temperature curve with severity of illness in the Intensive Care Unit (ICU), based on complexity analysis of continuously monitored body temperature. In this study, we tried to assess temperature complexity in patients with systemic inflammation during a suspected ICU-acquired infection, by using wavelet transformation and multiscale entropy of temperature signals, in a cohort of mixed critically ill patients. Methods Twenty-two patients were enrolled in the study. In five, systemic inflammatory response syndrome (SIRS, group 1) developed, 10 had sepsis (group 2), and seven had septic shock (group 3). All temperature curves were studied during the first 24 hours of an inflammatory state. A wavelet transformation was applied, decomposing the signal into different frequency components (scales) that have been found to reflect neurogenic and metabolic inputs on temperature oscillations. Wavelet energy and entropy per scale, associated with complexity in specific frequency bands, and multiscale entropy of the whole signal were calculated. Moreover, a clustering technique and a linear discriminant analysis (LDA) were applied to permit pattern recognition in the data sets and to assess the diagnostic accuracy of different wavelet features among the three classes of patients. Results Statistically significant differences were found in wavelet entropy between patients with SIRS and groups 2 and 3, and in specific ultradian bands between SIRS and group 3, with decreased entropy in sepsis. Cluster analysis using wavelet features in specific bands revealed concrete clusters closely related with the groups in focus. LDA after wrapper-based feature selection was able to classify SIRS from the two sepsis groups with an accuracy of more than 80%, based on multiparametric patterns of entropy values in the very low frequencies, indicating reduced metabolic inputs on local thermoregulation, probably associated with extensive vasodilatation. Conclusions We suggest that complexity analysis of temperature signals can assess inherent thermoregulatory dynamics during systemic inflammation and has increased discriminating value in patients with infectious versus noninfectious conditions, probably associated with severity of illness. PMID:22424316

  20. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a sensitivity analysis method applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of their variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, facilitates the interpretation of the results, and provides information that allows exploration of uncertainty at the process level and of how it might affect model output. We present an example using the vegetation model BIOME-BGC.
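
    The sketch below illustrates the process-level idea on a toy two-process model: each process output is scaled as a whole and the normalized change in model output is recorded; the processes, parameter values, and 10% perturbation are purely illustrative and are not taken from BIOME-BGC.

    ```python
    import numpy as np

    # a toy process-based model with two "processes" whose difference is the output
    def photosynthesis(par, lai):
        return 0.8 * par * (1 - np.exp(-0.5 * lai))

    def respiration(temp):
        return 0.3 * np.exp(0.07 * temp)

    def model(par, lai, temp, scale=None):
        scale = scale or {"photo": 1.0, "resp": 1.0}
        return scale["photo"] * photosynthesis(par, lai) - scale["resp"] * respiration(temp)

    # process-level sensitivity: perturb each process as a whole, not each parameter
    base = model(par=400.0, lai=3.0, temp=15.0)
    for proc in ("photo", "resp"):
        scales = {"photo": 1.0, "resp": 1.0}
        scales[proc] = 1.1                      # 10% bump of the whole process
        bumped = model(400.0, 3.0, 15.0, scale=scales)
        print(proc, "normalized sensitivity:", (bumped - base) / (0.1 * base))
    ```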

  1. Forecasting seasonal hydrologic response in major river basins

    NASA Astrophysics Data System (ADS)

    Bhuiyan, A. M.

    2014-05-01

    Seasonal precipitation variation due to natural climate variation influences stream flow and the apparent frequency and severity of extreme hydrological conditions such as floods and droughts. To study hydrologic response and understand the occurrence of extreme hydrological events, the relevant forcing variables must be identified. This study attempts to assess and quantify the historical occurrence and context of extreme hydrologic flow events and to quantify their relation to relevant climate variables. Once identified, the flow data and climate variables are evaluated to identify the primary relationship indicators of hydrologic extreme event occurrence. Existing studies focus on developing basin-scale forecasting techniques based on climate anomalies in El Nino/La Nina episodes linked to global climate. Building on earlier work, the goal of this research is to quantify variations in historical river flows at the seasonal temporal scale, and at regional to continental spatial scales. The work identifies and quantifies runoff variability of major river basins and correlates flow with environmental forcing variables such as El Nino, La Nina, and the sunspot cycle. These variables are expected to be the primary external natural indicators of inter-annual and inter-seasonal patterns of regional precipitation and river flow. Relations between continental-scale hydrologic flows and external climate variables are evaluated through direct correlations in a seasonal context with environmental phenomena such as sunspot numbers (SSN), the Southern Oscillation Index (SOI), and the Pacific Decadal Oscillation (PDO). Methods including stochastic time series analysis and artificial neural networks are developed to represent the seasonal variability evident in the historical records of river flows. River flows are categorized into low, average and high flow levels to evaluate and simulate flow variations under associated climate variable variations. Results demonstrated that no single method is best suited to represent scenarios leading to extreme flow conditions. For selected flow scenarios, the persistence model performance may be comparable to more complex multivariate approaches, and complex methods did not always improve flow estimation. Overall model performance indicates that including river flows and forcing variables on average improves extreme-event forecasting skill. As a means to further refine the flow estimation, an ensemble forecast method is implemented to provide a likelihood-based indication of expected river flow magnitude and variability. Results indicate seasonal flow variations are well captured in the ensemble range; therefore, the ensemble approach can often prove efficient in estimating extreme river flow conditions. The discriminant prediction approach, a probabilistic measure to forecast streamflow, is also adopted to assess model performance. Results show the efficiency of the method in terms of representing uncertainties in the forecasts.
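
    A minimal sketch of the persistence-versus-ensemble comparison is given below, using an AR(1) fit with residual bootstrapping on a synthetic seasonal flow series; the series, model order, and ensemble size are assumptions, not the study's data or methods.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    # hypothetical monthly flow series: seasonal cycle plus noise
    months = np.arange(240)
    flow = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, months.size)
    train, test = flow[:-12], flow[-12:]

    # persistence baseline: last observed value carried forward
    persistence = np.repeat(train[-1], 12)

    # AR(1) fit by least squares, then a residual-bootstrap forecast ensemble
    phi, c = np.polyfit(train[:-1], train[1:], 1)
    resid = train[1:] - (phi * train[:-1] + c)
    ensemble = np.empty((500, 12))
    for i in range(500):
        x = train[-1]
        for h in range(12):
            x = phi * x + c + rng.choice(resid)   # resample one residual per step
            ensemble[i, h] = x

    print("persistence RMSE  :", np.sqrt(np.mean((persistence - test) ** 2)).round(1))
    print("ensemble-mean RMSE:", np.sqrt(np.mean((ensemble.mean(0) - test) ** 2)).round(1))
    ```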

  2. Fort Collins Science Center Ecosystem Dynamics branch--interdisciplinary research for addressing complex natural resource issues across landscapes and time

    USGS Publications Warehouse

    Bowen, Zachary H.; Melcher, Cynthia P.; Wilson, Juliette T.

    2013-01-01

    The Ecosystem Dynamics Branch of the Fort Collins Science Center offers an interdisciplinary team of talented and creative scientists with expertise in biology, botany, ecology, geology, biogeochemistry, physical sciences, geographic information systems, and remote-sensing, for tackling complex questions about natural resources. As demand for natural resources increases, the issues facing natural resource managers, planners, policy makers, industry, and private landowners are increasing in spatial and temporal scope, often involving entire regions, multiple jurisdictions, and long timeframes. Needs for addressing these issues include (1) a better understanding of biotic and abiotic ecosystem components and their complex interactions; (2) the ability to easily monitor, assess, and visualize the spatially complex movements of animals, plants, water, and elements across highly variable landscapes; and (3) the techniques for accurately predicting both immediate and long-term responses of system components to natural and human-caused change. The overall objectives of our research are to provide the knowledge, tools, and techniques needed by the U.S. Department of the Interior, state agencies, and other stakeholders in their endeavors to meet the demand for natural resources while conserving biodiversity and ecosystem services. Ecosystem Dynamics scientists use field and laboratory research, data assimilation, and ecological modeling to understand ecosystem patterns, trends, and mechanistic processes. This information is used to predict the outcomes of changes imposed on species, habitats, landscapes, and climate across spatiotemporal scales. The products we develop include conceptual models to illustrate system structure and processes; regional baseline and integrated assessments; predictive spatial and mathematical models; literature syntheses; and frameworks or protocols for improved ecosystem monitoring, adaptive management, and program evaluation. The descriptions in this fact sheet provide snapshots of our three research emphases, followed by descriptions of select current projects.

  3. Planning Staff and Space Capacity Requirements during Wartime.

    PubMed

    Kepner, Elisa B; Spencer, Rachel

    2016-01-01

    Determining staff and space requirements for military medical centers can be challenging. Changing patient populations change the caseload requirements. Deployment and assignment rotations change the experience and education of clinicians and support staff, thereby changing the caseload capacity of a facility. During wartime, planning becomes increasingly more complex. What will the patient mix and caseload volume be by location? What type of clinicians will be available and when? How many beds are needed at each facility to meet caseload demand and match clinician supply? As soon as these factors are known, operations are likely to change and planning factors quickly become inaccurate. Soon, more beds or staff are needed in certain locations to meet caseload demand while other locations retain underutilized staff, waiting for additional caseload fluctuations. This type of complexity challenges the best commanders. As in so many other industries, supply and demand principles apply to military health, but very little is stable about military health capacity planning. Planning analysts build complex statistical forecasting models to predict caseload based on historical patterns. These capacity planning techniques work best in stable repeatable processes where caseload and staffing resources remain constant over a long period of time. Variability must be simplified to predict complex operations. This is counterintuitive to the majority of capacity planners who believe more data drives better answers. When the best predictor of future needs is not historical patterns, traditional capacity planning does not work. Rather, simplified estimation techniques coupled with frequent calibration adjustments to account for environmental changes will create the most accurate and most useful capacity planning and management system. The method presented in this article outlines the capacity planning approach used to actively manage hospital staff and space during Operations Iraqi Freedom and Enduring Freedom.

  4. Dimmable electronic ballasts by variable power density modulation technique

    NASA Astrophysics Data System (ADS)

    Borekci, Selim; Kesler, Selami

    2014-11-01

    Dimming is commonly accomplished by switching-frequency and pulse-density modulation techniques or by a variable inductor. In this study, a variable power density modulation (VPDM) control technique is proposed for dimming applications. A fluorescent lamp is operated in several states to meet the desired lamp power within a modulation period. The proposed technique has the same advantages that magnetic dimming topologies have; in addition, a unique and flexible control technique can be achieved. A prototype dimmable electronic ballast was built and experiments were conducted with it. As a result, a 36 W T8 fluorescent lamp can be driven to a desired lamp power chosen from several alternatives without modulating the switching frequency.

  5. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. It is often difficult to surmise which input parameters have the greatest impact on the model's prediction, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only the input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
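
    As a small illustration of mapping a high-dimensional dataset to two dimensions for visual inspection, the sketch below uses PCA from scikit-learn with matplotlib for plotting; the synthetic data and class labels are assumptions, and PCA is only one of several reduction techniques the record alludes to.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    import matplotlib.pyplot as plt

    # hypothetical high-dimensional dataset with a nominal outcome
    rng = np.random.default_rng(3)
    X = np.vstack([rng.normal(0, 1, (100, 20)), rng.normal(2, 1, (100, 20))])
    label = np.array([0] * 100 + [1] * 100)

    # reduce to two dimensions so the dataset can be plotted and inspected
    X2 = PCA(n_components=2).fit_transform(X)
    plt.scatter(X2[:, 0], X2[:, 1], c=label, cmap="coolwarm", s=12)
    plt.xlabel("PC1")
    plt.ylabel("PC2")
    plt.title("2-D view of a 20-D dataset")
    plt.show()
    ```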

  6. Advances in Parameter and Uncertainty Quantification Using Bayesian Hierarchical Techniques with a Spatially Referenced Watershed Model (Invited)

    NASA Astrophysics Data System (ADS)

    Alexander, R. B.; Boyer, E. W.; Schwarz, G. E.; Smith, R. A.

    2013-12-01

    Estimating water and material stores and fluxes in watershed studies is frequently complicated by uncertainties in quantifying hydrological and biogeochemical effects of factors such as land use, soils, and climate. Although these process-related effects are commonly measured and modeled in separate catchments, researchers are especially challenged by their complexity across catchments and diverse environmental settings, leading to a poor understanding of how model parameters and prediction uncertainties vary spatially. To address these concerns, we illustrate the use of Bayesian hierarchical modeling techniques with a dynamic version of the spatially referenced watershed model SPARROW (SPAtially Referenced Regression On Watershed attributes). The dynamic SPARROW model is designed to predict streamflow and other water cycle components (e.g., evapotranspiration, soil and groundwater storage) for monthly varying hydrological regimes, using mechanistic functions, mass conservation constraints, and statistically estimated parameters. In this application, the model domain includes nearly 30,000 NHD (National Hydrologic Data) stream reaches and their associated catchments in the Susquehanna River Basin. We report the results of our comparisons of alternative models of varying complexity, including models with different explanatory variables as well as hierarchical models that account for spatial and temporal variability in model parameters and variance (error) components. The model errors are evaluated for changes with season and catchment size and correlations in time and space. The hierarchical models consist of a two-tiered structure in which climate forcing parameters are modeled as random variables, conditioned on watershed properties. Quantification of spatial and temporal variations in the hydrological parameters and model uncertainties in this approach leads to more efficient (lower variance) and less biased model predictions throughout the river network. Moreover, predictions of water-balance components are reported according to probabilistic metrics (e.g., percentiles, prediction intervals) that include both parameter and model uncertainties. These improvements in predictions of streamflow dynamics can inform the development of more accurate predictions of spatial and temporal variations in biogeochemical stores and fluxes (e.g., nutrients and carbon) in watersheds.

  7. Accounting for uncertainty in model-based prevalence estimation: paratuberculosis control in dairy herds.

    PubMed

    Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R

    2012-09-10

    A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.
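
    A compact sketch of the Latin-hypercube-plus-reweighting idea is given below using SciPy's quasi-Monte Carlo module; the toy prevalence function, parameter ranges, and Gaussian weighting are hypothetical stand-ins for the stochastic herd model and prevalence data described in the record.

    ```python
    import numpy as np
    from scipy.stats import qmc, norm

    # toy "model": prevalence predicted from two uncertain parameters
    def predicted_prevalence(beta, recovery):
        return beta / (beta + recovery)

    # Latin hypercube sample over plausible parameter ranges
    sampler = qmc.LatinHypercube(d=2, seed=5)
    unit = sampler.random(n=2000)
    params = qmc.scale(unit, l_bounds=[0.05, 0.1], u_bounds=[0.6, 1.0])

    # reweight each parameter set by how well it reproduces an observed prevalence
    observed, obs_sd = 0.25, 0.05
    pred = predicted_prevalence(params[:, 0], params[:, 1])
    weights = norm.pdf(pred, loc=observed, scale=obs_sd)
    weights /= weights.sum()

    # weighted outputs can then be used to evaluate control scenarios
    print("weighted mean of beta:", np.average(params[:, 0], weights=weights).round(3))
    ```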

  8. A new method to detect transitory signatures and local time/space variability structures in the climate system: the scale-dependent correlation analysis

    NASA Astrophysics Data System (ADS)

    Rodó, Xavier; Rodríguez-Arias, Miquel-Àngel

    2006-10-01

    The study of transitory signals and local variability structures in time and/or space, and of their role as sources of climatic memory, is an important but often neglected topic in climate research despite its obvious importance and extensive coverage in the literature. Transitory signals arise from non-linearities in the climate system, transitory atmosphere-ocean couplings, and other processes in the climate system evolving after a critical threshold is crossed. These temporary interactions, which, though intense, may not last long, can be responsible for a large amount of unexplained variability but are normally considered of limited relevance and are often discarded. With most of the current techniques at hand, this typology of signatures is difficult to isolate because of the low signal-to-noise ratio in midlatitudes and the limited recurrence of the transitory signals during a customary interval of data. Also, a serious problem often arises from the smoothing of local or transitory processes when statistical techniques are applied that consider the full length of available data rather than the size of the specific variability structure under investigation. Scale-dependent correlation (SDC) analysis is a new statistical method capable of highlighting the presence of transitory processes, understood here as temporary significant lag-dependent autocovariance in a single series, or covariance structures between two series. This approach therefore complements other approaches such as those resulting from the families of wavelet analysis, singular-spectrum analysis and recurrence plots. The main features of SDC are its high performance for short time series and its ability to characterize phase relationships and thresholds in the bivariate domain. Ultimately, SDC helps track short-lagged relationships among processes that locally or temporarily couple and uncouple. The use of SDC is illustrated in the present paper by means of some synthetic time-series examples of increasing complexity, and it is compared with wavelet analysis in order to provide a well-known reference for its capabilities. A comparison between SDC and companion techniques is also addressed, and results are exemplified for the specific case of some relevant El Niño-Southern Oscillation teleconnections.
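
    The sketch below conveys the flavour of a scale-dependent correlation scan as a windowed, lagged correlation between two series; the window length, lag range, and synthetic transient coupling are assumptions, and the real SDC method additionally assesses the significance of each local correlation.

    ```python
    import numpy as np

    def sdc_scan(x, y, window, max_lag):
        """Windowed lagged correlation: rows index the window start in x,
        columns index the lag of y relative to x (a simplified SDC-style scan)."""
        n = len(x)
        n_lags = 2 * max_lag + 1
        out = np.full((n - window + 1, n_lags), np.nan)
        for start in range(n - window + 1):
            xw = x[start:start + window]
            for j, lag in enumerate(range(-max_lag, max_lag + 1)):
                lo, hi = start + lag, start + lag + window
                if lo >= 0 and hi <= n:
                    out[start, j] = np.corrcoef(xw, y[lo:hi])[0, 1]
        return out

    # toy example: y follows x with a 3-step delay only during a short episode
    rng = np.random.default_rng(6)
    x = rng.standard_normal(300)
    y = rng.standard_normal(300)
    y[103:153] = x[100:150] + 0.3 * rng.standard_normal(50)
    scan = sdc_scan(x, y, window=30, max_lag=5)
    print("strongest local lagged correlation:", np.nanmax(scan).round(2))
    ```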

  9. Histology image analysis for carcinoma detection and grading

    PubMed Central

    He, Lei; Long, L. Rodney; Antani, Sameer; Thoma, George R.

    2012-01-01

    This paper presents an overview of the image analysis techniques in the domain of histopathology, specifically for the objective of automated carcinoma detection and classification. As in other biomedical imaging areas such as radiology, many computer-assisted diagnosis (CAD) systems have been implemented to aid histopathologists and clinicians in cancer diagnosis and research, and these attempt to significantly reduce the labor and subjectivity of traditional manual intervention with histology images. The task of automated histology image analysis is usually not simple due to the unique characteristics of histology imaging, including the variability in image preparation techniques, clinical interpretation protocols, and the complex structures and very large size of the images themselves. In this paper we discuss those characteristics, provide relevant background information about slide preparation and interpretation, and review the application of digital image processing techniques to the field of histology image analysis. In particular, emphasis is given to state-of-the-art image segmentation methods for feature extraction and disease classification. Four major carcinomas of cervix, prostate, breast, and lung are selected to illustrate the functions and capabilities of existing CAD systems. PMID:22436890

  10. Non-Gaussian spatiotemporal simulation of multisite daily precipitation: downscaling framework

    NASA Astrophysics Data System (ADS)

    Ben Alaya, M. A.; Ouarda, T. B. M. J.; Chebana, F.

    2018-01-01

    Probabilistic regression approaches for downscaling daily precipitation are very useful. They provide the whole conditional distribution at each forecast step to better represent the temporal variability. The question addressed in this paper is: how can the spatiotemporal characteristics of multisite daily precipitation be simulated from probabilistic regression models? Recent publications point out the complexity of the multisite properties of daily precipitation and highlight the need for a non-Gaussian, flexible tool. This work proposes a reasonable compromise between simplicity and flexibility, avoiding model misspecification. A suitable nonparametric bootstrapping (NB) technique is adopted. A downscaling model which merges a vector generalized linear model (VGLM, as a probabilistic regression tool) and the proposed bootstrapping technique is introduced to simulate realistic multisite precipitation series. The model is applied to data sets from the southern part of the province of Quebec, Canada. It is shown that the model is capable of reproducing both at-site properties and the spatial structure of daily precipitation. Results indicate the superiority of the proposed NB technique over a multivariate autoregressive Gaussian framework (i.e. Gaussian copula).
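
    A heavily simplified sketch of nonparametric bootstrapping for multisite series is given below: whole days are resampled so that cross-site dependence is preserved. In the actual framework the resampling is conditioned on the VGLM output, which this toy example omits; the synthetic precipitation field is an assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    # hypothetical daily precipitation at 4 sites (rows = days, columns = sites);
    # a shared component and shared wet/dry days create cross-site dependence
    common = rng.gamma(0.4, 8.0, (3650, 1))
    precip = (0.6 * common + 0.4 * rng.gamma(0.4, 8.0, (3650, 4)))
    precip *= rng.random((3650, 1)) < 0.4

    def nonparametric_bootstrap(field, n_days, rng):
        """Resample whole days with replacement so the cross-site (spatial)
        dependence of each day is preserved in the generated sequence."""
        idx = rng.integers(0, field.shape[0], size=n_days)
        return field[idx]

    synthetic = nonparametric_bootstrap(precip, 365, rng)
    # spatial correlation of the synthetic year stays close to the observed one
    print(np.round(np.corrcoef(precip.T)[0, 1], 2),
          np.round(np.corrcoef(synthetic.T)[0, 1], 2))
    ```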

  11. Aerobiology: Experimental Considerations, Observations, and Future Tools

    PubMed Central

    Haddrell, Allen E.

    2017-01-01

    ABSTRACT Understanding airborne survival and decay of microorganisms is important for a range of public health and biodefense applications, including epidemiological and risk analysis modeling. Techniques for experimental aerosol generation, retention in the aerosol phase, and sampling require careful consideration and understanding so that they are representative of the conditions the bioaerosol would experience in the environment. This review explores the current understanding of atmospheric transport in relation to advances and limitations of aerosol generation, maintenance in the aerosol phase, and sampling techniques. Potential tools for the future are examined at the interface between atmospheric chemistry, aerosol physics, and molecular microbiology where the heterogeneity and variability of aerosols can be explored at the single-droplet and single-microorganism levels within a bioaerosol. The review highlights the importance of method comparison and validation in bioaerosol research and the benefits that the application of novel techniques could bring to increasing the understanding of aerobiological phenomena in diverse research fields, particularly during the progression of atmospheric transport, where complex interdependent physicochemical and biological processes occur within bioaerosol particles. PMID:28667111

  12. [Treatment of eyelid retraction in Grave's disease by recession of the levator complex].

    PubMed

    Fichter, N; Schittkowski, M; Guthoff, R F

    2004-11-01

    The chronic stage of Grave's orbitopathy is characterised by fibrotic changes within the orbital soft tissues, especially the extraocular muscles. Retraction of the eyelids is a common clinical feature of this phenomenon. To solve this problem, several techniques for lengthening the upper eyelid have been described, with variable rates of success. In this report we describe our modified Harvey's technique for the correction of upper eyelid retraction, which includes a complete recession of the Muller's muscle/levator complex from the tarsal plate without the interposition of a spacer; finally, only the skin and the superficial orbicularis muscle are sutured. We also report our results with this procedure. Eight patients (1 male, 7 female) with lid retraction in Grave's ophthalmopathy who had undergone the modified lengthening technique by an external approach between 2001 and 2004 were reviewed. Four patients underwent a bilateral procedure, and 1 patient showed a significant under-correction necessitating reoperation, so a total of 13 procedures were included in this follow-up study. Besides the routine ophthalmological examination, special attention was paid to the difference between the two eyelid apertures in primary position pre- and postoperatively. Over a follow-up period of at least 3 months we recorded an average lengthening of the upper eyelid of 3.1 mm. The difference between the two eyelid apertures in primary position improved from 2.2 mm preoperatively to 1.0 mm postoperatively. Only 1 patient needed reoperation because of a significant under-correction, and no late over-corrections were observed. The modified Harvey's technique to lengthen the upper eyelid is a safe and effective method to reduce upper eyelid retraction in Grave's disease. Any required orbital decompression or extraocular muscle surgery has to be done before the lid surgery.

  13. The QSAR study of flavonoid-metal complexes scavenging ·OH free radical

    NASA Astrophysics Data System (ADS)

    Wang, Bo-chu; Qian, Jun-zhen; Fan, Ying; Tan, Jun

    2014-10-01

    Flavonoid-metal complexes have antioxidant activities. However, the quantitative structure-activity relationship (QSAR) between flavonoid-metal complexes and their antioxidant activities has still not been tackled. On the basis of 21 structures of flavonoid-metal complexes and their antioxidant activities for scavenging the ·OH free radical, we optimised the structures using the Gaussian 03 software package and subsequently calculated and chose 18 quantum chemistry descriptors such as dipole, charge and energy. We then selected, through stepwise linear regression, the quantum chemistry descriptors that are most important to the IC50 of flavonoid-metal complexes for scavenging the ·OH free radical, and we obtained 4 new variables through principal component analysis. Finally, we built QSAR models, using those important quantum chemistry descriptors and the 4 new variables as the independent variables and the IC50 as the dependent variable, with an Artificial Neural Network (ANN), and we validated the two models using experimental data. These results show that the two models in this paper are reliable and predictive.

  14. The evaluation of complex interventions in palliative care: an exploration of the potential of case study research strategies.

    PubMed

    Walshe, Catherine

    2011-12-01

    Complex, incrementally changing, context dependent and variable palliative care services are difficult to evaluate. Case study research strategies may have potential to contribute to evaluating such complex interventions, and to develop this field of evaluation research. This paper explores definitions of case study (as a unit of study, a process, and a product) and examines the features of case study research strategies which are thought to confer benefits for the evaluation of complex interventions in palliative care settings. Ten features of case study that are thought to be beneficial in evaluating complex interventions in palliative care are discussed, drawing from exemplars of research in this field. Important features are related to a longitudinal approach, triangulation, purposive instance selection, comprehensive approach, multiple data sources, flexibility, concurrent data collection and analysis, search for proving-disproving evidence, pattern matching techniques and an engaging narrative. The limitations of case study approaches are discussed including the potential for subjectivity and their complex, time consuming and potentially expensive nature. Case study research strategies have great potential in evaluating complex interventions in palliative care settings. Three key features need to be exploited to develop this field: case selection, longitudinal designs, and the use of rival hypotheses. In particular, case study should be used in situations where there is interplay and interdependency between the intervention and its context, such that it is difficult to define or find relevant comparisons.

  15. An Investigation into the Roles of Theory of Mind, Emotion Regulation, and Attachment Styles in Predicting the Traits of Borderline Personality Disorder

    PubMed Central

    Ghiasi, Hamed; Mohammadi, Abolalfazl; Zarrinfar, Pouria

    2016-01-01

    Objective: Borderline personality disorder is one of the most complex and prevalent personality disorders. Many variables have so far been studied in relation to this disorder. This study aimed to investigate the role of emotion regulation, attachment styles, and theory of mind in predicting the traits of borderline personality disorder. Method: In this study, 85 patients with borderline personality disorder were selected using a convenience sampling method. To measure the desired variables, the Gross emotion regulation questionnaire, the Collins and Read attachment styles questionnaire, and Baron-Cohen's Reading the Mind in the Eyes Test were applied. The data were analyzed using a multivariate stepwise regression technique. Results: Emotion regulation, attachment styles, and theory of mind together predicted 41.2% of the criterion variance; the shares of emotion regulation, attachment styles, and theory of mind in the distribution of the traits of borderline personality disorder were 27.5%, 9.8%, and 3.9%, respectively. Conclusion: The results of the study revealed that emotion regulation, attachment styles, and theory of mind are important variables in predicting the traits of borderline personality disorder and that these variables can be well applied for both the treatment and identification of this disorder. PMID:28050180

  16. Creation of Synthetic Surface Temperature and Precipitation Ensembles Through A Computationally Efficient, Mixed Method Approach

    NASA Astrophysics Data System (ADS)

    Hartin, C.; Lynch, C.; Kravitz, B.; Link, R. P.; Bond-Lamberty, B. P.

    2017-12-01

    Typically, uncertainty quantification of internal variability relies on large ensembles of climate model runs under multiple forcing scenarios or perturbations in a parameter space. Computationally efficient, standard pattern scaling techniques only generate one realization and do not capture the complicated dynamics of the climate system (i.e., stochastic variations with a frequency-domain structure). In this study, we generate large ensembles of climate data with spatially and temporally coherent variability across a subselection of Coupled Model Intercomparison Project Phase 5 (CMIP5) models. First, for each CMIP5 model we apply a pattern emulation approach to derive the model response to external forcing. We take all the spatial and temporal variability that isn't explained by the emulator and decompose it into non-physically based structures through use of empirical orthogonal functions (EOFs). Then, we perform a Fourier decomposition of the EOF projection coefficients to capture the input fields' temporal autocorrelation so that our new emulated patterns reproduce the proper timescales of climate response and "memory" in the climate system. Through this 3-step process, we derive computationally efficient climate projections consistent with CMIP5 model trends and modes of variability, which address a number of deficiencies inherent in the ability of pattern scaling to reproduce complex climate model behavior.
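
    To illustrate the decomposition-and-resynthesis step, the sketch below performs an EOF decomposition via SVD and builds phase-randomized surrogates of the projection coefficients so that new realizations retain the original amplitude spectra; the residual field is random toy data and the procedure is a simplified stand-in for the study's emulator pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    # toy residual variability field after removing the forced response:
    # rows = time (months), columns = grid cells (flattened)
    resid = rng.standard_normal((600, 150))

    # EOF decomposition via SVD: spatial patterns (rows of Vt) and time coefficients
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    pcs = U * s                     # projection (principal component) time series

    def phase_randomize(ts, rng):
        """Keep the amplitude spectrum (timescales), randomize the phases, so the
        surrogate shares the original series' autocorrelation structure."""
        spec = np.fft.rfft(ts)
        phases = rng.uniform(0, 2 * np.pi, spec.size)
        phases[0] = 0.0             # leave the mean component untouched
        return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=ts.size)

    new_pcs = np.column_stack([phase_randomize(pcs[:, k], rng) for k in range(pcs.shape[1])])
    new_realization = new_pcs @ Vt  # one synthetic internal-variability member
    ```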

  17. An Investigation into the Roles of Theory of Mind, Emotion Regulation, and Attachment Styles in Predicting the Traits of Borderline Personality Disorder.

    PubMed

    Ghiasi, Hamed; Mohammadi, Abolalfazl; Zarrinfar, Pouria

    2016-10-01

    Objective: Borderline personality disorder is one of the most complex and prevalent personality disorders. Many variables have so far been studied in relation to this disorder. This study aimed to investigate the role of emotion regulation, attachment styles, and theory of mind in predicting the traits of borderline personality disorder. Method: In this study, 85 patients with borderline personality disorder were selected using convenience sampling method. To measure the desired variables, the questionnaires of Gross emotion regulation, Collins and Read attachment styles, and Baron Cohen's Reading Mind from Eyes Test were applied. The data were analyzed using multivariate stepwise regression technique. Results: Emotion regulation, attachment styles, and theory of mind predicted 41.2% of the variance criterion altogether; among which, the shares of emotion regulation, attachment styles and theory of mind to the distribution of the traits of borderline personality disorder were 27.5%, 9.8%, and 3.9%, respectively.‎‎ Conclusion : The results of the study revealed that emotion regulation, attachment styles, and theory of mind are important variables in predicting the traits of borderline personality disorder and that these variables can be well applied for both the treatment and identification of this disorder.

  18. Environmental variability and indicators: a few observations

    Treesearch

    William F. Laudenslayer

    1991-01-01

    Abstract The environment of the earth is exceedingly complex and variable. Indicator species are used to reduce that complexity and variability to a level that can be more easily understood. In recent years, use of indicators has increased dramatically. For the Forest Service, as an example, regulations that interpret the National Forest Management Act require the use...

  19. Classification of Regional Ionospheric Disturbances Based on Support Vector Machines

    NASA Astrophysics Data System (ADS)

    Begüm Terzi, Merve; Arikan, Feza; Arikan, Orhan; Karatay, Secil

    2016-07-01

    The ionosphere is an anisotropic, inhomogeneous, time-varying and spatio-temporally dispersive medium whose parameters can almost always be estimated only through indirect measurements. Geomagnetic, gravitational, solar or seismic activities cause variations of the ionosphere at various spatial and temporal scales. This complex spatio-temporal variability is challenging to identify due to the extensive range of period, duration, amplitude and frequency of disturbances. Since geomagnetic and solar indices such as the Disturbance storm time (Dst), F10.7 solar flux, Sun Spot Number (SSN), Auroral Electrojet (AE), Kp and W-index provide information about variability on a global scale, identification and classification of regional disturbances poses a challenge. The main aim of this study is to identify the regional effects of global geomagnetic storms and classify them according to their risk levels. For this purpose, Total Electron Content (TEC) estimated from GPS receivers, which is one of the major parameters of the ionosphere, is used to model the regional and local variability that differs from global activity, along with solar and geomagnetic indices. For the automated classification of the regional disturbances, a classification technique based on Support Vector Machines (SVMs), a robust machine learning method that has found widespread use, is proposed. SVM is a supervised learning model used for classification, with an associated learning algorithm that analyzes the data and recognizes patterns. In addition to performing linear classification, SVM can efficiently perform nonlinear classification by embedding data into higher-dimensional feature spaces. The performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. By applying the developed classification technique to the Global Ionospheric Map (GIM) TEC data provided by the NASA Jet Propulsion Laboratory (JPL), it is shown that SVM can be a suitable learning method to detect anomalies in Total Electron Content (TEC) variations. This study is supported by TUBITAK project 114E541 as a part of the Scientific and Technological Research Projects Funding Program (1001).
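
    A minimal scikit-learn sketch of an RBF-kernel SVM classifier on synthetic storm/quiet feature vectors is given below; the features, labels, and hyperparameters are hypothetical and do not represent the TNPGN-Active TEC data or the developed classifier.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    # hypothetical feature vectors built from regional TEC statistics plus indices
    rng = np.random.default_rng(10)
    X = rng.standard_normal((400, 6))          # e.g. TEC deviation, Dst, Kp, AE, ...
    y = (X[:, 0] + 0.8 * X[:, 1] + 0.3 * rng.standard_normal(400) > 0).astype(int)

    # RBF-kernel SVM: nonlinear classification via an implicit feature-space embedding
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(2))
    ```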

  20. On the Impact of Execution Models: A Case Study in Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram

    2015-05-25

    Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.

  1. Gsflow-py: An integrated hydrologic model development tool

    NASA Astrophysics Data System (ADS)

    Gardner, M.; Niswonger, R. G.; Morton, C.; Henson, W.; Huntington, J. L.

    2017-12-01

    Integrated hydrologic modeling encompasses a vast number of processes and specifications, variable in time and space, and development of model datasets can be arduous. Model input construction techniques have not been formalized or made easily reproducible. Creating the input files for integrated hydrologic models (IHM) requires complex GIS processing of raster and vector datasets from various sources. Developing stream network topology that is consistent with the model resolution digital elevation model is important for robust simulation of surface water and groundwater exchanges. Distribution of meteorologic parameters over the model domain is difficult in complex terrain at the model resolution scale, but is necessary to drive realistic simulations. Historically, development of input data for IHM models has required extensive GIS and computer programming expertise which has restricted the use of IHMs to research groups with available financial, human, and technical resources. Here we present a series of Python scripts that provide a formalized technique for the parameterization and development of integrated hydrologic model inputs for GSFLOW. With some modifications, this process could be applied to any regular grid hydrologic model. This Python toolkit automates many of the necessary and laborious processes of parameterization, including stream network development and cascade routing, land coverages, and meteorological distribution over the model domain.

  2. Rapid Discrimination for Traditional Complex Herbal Medicines from Different Parts, Collection Time, and Origins Using High-Performance Liquid Chromatography and Near-Infrared Spectral Fingerprints with Aid of Pattern Recognition Methods

    PubMed Central

    Fu, Haiyan; Fan, Yao; Zhang, Xu; Lan, Hanyue; Yang, Tianming; Shao, Mei; Li, Sihan

    2015-01-01

    As an effective method, the fingerprint technique, which emphasizes the whole composition of samples, has already been used in various fields, especially in identifying and assessing the quality of herbal medicines. High-performance liquid chromatography (HPLC) and near-infrared (NIR) spectroscopy, with their reliability, versatility, precision, and simple measurement, play an important role among the fingerprint techniques. In this paper, a supervised pattern recognition method based on the PLSDA algorithm, using HPLC and NIR fingerprints, has been established to identify Hibiscus mutabilis L. and Berberidis radix, two common kinds of herbal medicines. By comparing principal component analysis (PCA), linear discriminant analysis (LDA), and particularly partial least squares discriminant analysis (PLSDA) with different preprocessing of the NIR spectral variables, the PLSDA model performed best in the analysis of the samples as well as the chromatograms. Most importantly, this pattern recognition method using HPLC and NIR can identify different collection parts, collection times, and different origins or various species belonging to the same genus of herbal medicines, which proves to be a promising approach for the identification of complex information on herbal medicines. PMID:26345990
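
    As an illustration of PLSDA-style classification of spectral fingerprints, the scikit-learn sketch below regresses a dummy-coded class label on synthetic NIR-like spectra and classifies by thresholding; the spectra, class structure, and number of latent variables are assumptions.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import train_test_split

    # hypothetical NIR-like spectra for two herb origins (rows = samples)
    rng = np.random.default_rng(11)
    wavelengths = np.linspace(1000, 2500, 300)
    base = np.exp(-((wavelengths - 1450) / 120) ** 2)
    X = np.vstack([base + 0.02 * rng.standard_normal((40, 300)),
                   1.1 * base + 0.02 * rng.standard_normal((40, 300))])
    y = np.array([0] * 40 + [1] * 40)

    # PLS-DA: regress a dummy-coded class variable on the spectra, classify by threshold
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=11, stratify=y)
    pls = PLSRegression(n_components=5).fit(Xtr, ytr)
    pred = (pls.predict(Xte).ravel() > 0.5).astype(int)
    print("held-out accuracy:", (pred == yte).mean())
    ```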

  3. A proposed analytic framework for determining the impact of an antimicrobial resistance intervention.

    PubMed

    Grohn, Yrjo T; Carson, Carolee; Lanzas, Cristina; Pullum, Laura; Stanhope, Michael; Volkova, Victoriya

    2017-06-01

    Antimicrobial use (AMU) is increasingly threatened by antimicrobial resistance (AMR). The FDA is implementing risk mitigation measures promoting prudent AMU in food animals. Their evaluation is crucial: the AMU/AMR relationship is complex; a suitable framework to analyze interventions is unavailable. Systems science analysis, depicting variables and their associations, would help integrate mathematics/epidemiology to evaluate the relationship. This would identify informative data and models to evaluate interventions. This National Institute for Mathematical and Biological Synthesis AMR Working Group's report proposes a system framework to address the methodological gap linking livestock AMU and AMR in foodborne bacteria. It could evaluate how AMU (and interventions) impact AMR. We will evaluate pharmacokinetic/dynamic modeling techniques for projecting AMR selection pressure on enteric bacteria. We study two methods to model phenotypic AMR changes in bacteria in the food supply and evolutionary genotypic analyses determining molecular changes in phenotypic AMR. Systems science analysis integrates the methods, showing how resistance in the food supply is explained by AMU and concurrent factors influencing the whole system. This process is updated with data and techniques to improve prediction and inform improvements for AMU/AMR surveillance. Our proposed framework reflects both the AMR system's complexity, and desire for simple, reliable conclusions.

  4. A Control-Theoretic Approach for the Combined Management of Quality-of-Service and Energy in Service Centers

    NASA Astrophysics Data System (ADS)

    Poussot-Vassal, Charles; Tanelli, Mara; Lovera, Marco

    The complexity of Information Technology (IT) systems is steadily increasing and system complexity has been recognised as the main obstacle to further advancements of IT. This fact has recently raised energy management issues. Control techniques have been proposed and successfully applied to design Autonomic Computing systems, trading-off system performance with energy saving goals. As users behaviour is highly time varying and workload conditions can change substantially within the same business day, the Linear Parametrically Varying (LPV) framework is particularly promising for modeling such systems. In this chapter, a control-theoretic method to investigate the trade-off between Quality of Service (QoS) requirements and energy saving objectives in the case of admission control in Web service systems is proposed, considering as control variables the server CPU frequency and the admission probability. To quantitatively evaluate the trade-off, a dynamic model of the admission control dynamics is estimated via LPV identification techniques. Based on this model, an optimisation problem within the Model Predictive Control (MPC) framework is setup, by means of which it is possible to investigate the optimal trade-off policy to manage QoS and energy saving objectives at design time and taking into explicit account the system dynamics.

  5. New evidence favoring multilevel decomposition and optimization

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Polignone, Debra A.

    1990-01-01

    The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.

  6. Genetic algorithm approaches for conceptual design of spacecraft systems including multi-objective optimization and design under uncertainty

    NASA Astrophysics Data System (ADS)

    Hassan, Rania A.

    In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of standard single-objective deterministic Genetic Algorithm. The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives under consideration simultaneously. Incorporating uncertainties avoids large safety margins and unnecessary high redundancy levels. The focus on low computational cost for the optimization tools stems from the objective that improving the design of complex systems should not be achieved at the expense of a costly design methodology.
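    The discrete selection problem described above maps naturally onto a simple genetic algorithm. The sketch below is a minimal, generic GA over integer-coded technology/redundancy choices with a placeholder fitness function; the subsystem count, option count, and objective are illustrative assumptions, not the spacecraft models used in the research.

```python
import random

# Hypothetical discrete design space: for each subsystem, an index selecting a
# technology option or redundancy level (values and fitness are illustrative).
N_SUBSYSTEMS = 8
N_OPTIONS = 4

def fitness(design):
    # Placeholder objective: reward higher option indices (e.g. performance)
    # while penalizing a notional mass/cost that grows with the same choices.
    performance = sum(design)
    cost = sum(g * g for g in design) / 10.0
    return performance - cost

def ga(pop_size=40, generations=60, p_mut=0.1):
    pop = [[random.randrange(N_OPTIONS) for _ in range(N_SUBSYSTEMS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SUBSYSTEMS)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [random.randrange(N_OPTIONS) if random.random() < p_mut else g
                     for g in child]                 # per-gene mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print("best design:", best, "fitness:", round(fitness(best), 2))
```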

  7. Environmental variability and acoustic signals: a multi-level approach in songbirds.

    PubMed

    Medina, Iliana; Francis, Clinton D

    2012-12-23

    Among songbirds, growing evidence suggests that acoustic adaptation of song traits occurs in response to habitat features. Despite extensive study, most research supporting acoustic adaptation has only considered acoustic traits averaged for species or populations, overlooking intraindividual variation of song traits, which may facilitate effective communication in heterogeneous and variable environments. Fewer studies have explicitly incorporated sexual selection, which, if strong, may favour variation across environments. Here, we evaluate the prevalence of acoustic adaptation among 44 species of songbirds by determining how environmental variability and sexual selection intensity are associated with song variability (intraindividual and intraspecific) and short-term song complexity. We show that variability in precipitation can explain short-term song complexity among taxonomically diverse songbirds, and that precipitation seasonality and the intensity of sexual selection are related to intraindividual song variation. Our results link song complexity to environmental variability, something previously found for mockingbirds (Family Mimidae). Perhaps more importantly, our results illustrate that individual variation in song traits may be shaped by both environmental variability and strength of sexual selection.

  8. Numerical modelling of biomass combustion: Solid conversion processes in a fixed bed furnace

    NASA Astrophysics Data System (ADS)

    Karim, Md. Rezwanul; Naser, Jamal

    2017-06-01

    Increasing demand for energy and rising concerns over global warming have driven the use of renewable energy sources to support a sustainable development of the world. Biomass is a renewable energy source that has become an important fuel for producing thermal energy or electricity. It is an eco-friendly source of energy as it reduces carbon dioxide emissions. Combustion of solid biomass is a complex phenomenon due to the large variety of fuels and physical structures. Among various systems, fixed bed combustion is the most commonly used technique for thermal conversion of solid biomass, but inadequate knowledge of the complex solid conversion processes has limited the development of such combustion systems. Numerical modelling of this combustion system has some advantages over experimental analysis: many important system parameters (e.g. temperature, density, solid fraction) can be estimated over the entire domain under different working conditions. In this work, a complete numerical model is used for the solid conversion processes of biomass combustion in a fixed bed furnace. The combustion system is divided into solid and gas phases. The model includes several sub-models to characterize the solid phase of the combustion with several variables. User-defined subroutines are used to introduce the solid phase variables into a commercial CFD code, while the gas phase of combustion is resolved using the built-in modules of the CFD code. The heat transfer model is modified to predict the temperature of the solid and gas phases, with a special radiation heat transfer solution to account for the high absorptivity of the medium. Considering all solid conversion processes, the solid phase variables are evaluated, and the results are discussed with reference to an experimental burner.

  9. Landscape epidemiology and machine learning: A geospatial approach to modeling West Nile virus risk in the United States

    NASA Astrophysics Data System (ADS)

    Young, Sean Gregory

    The complex interactions between human health and the physical landscape and environment have been recognized, if not fully understood, since the ancient Greeks. Landscape epidemiology, sometimes called spatial epidemiology, is a sub-discipline of medical geography that uses environmental conditions as explanatory variables in the study of disease or other health phenomena. This theory suggests that pathogenic organisms (whether germs or larger vector and host species) are subject to environmental conditions that can be observed on the landscape, and by identifying where such organisms are likely to exist, areas at greatest risk of the disease can be derived. Machine learning is a sub-discipline of artificial intelligence that can be used to create predictive models from large and complex datasets. West Nile virus (WNV) is a relatively new infectious disease in the United States, and has a fairly well-understood transmission cycle that is believed to be highly dependent on environmental conditions. This study takes a geospatial approach to the study of WNV risk, using both landscape epidemiology and machine learning techniques. A combination of remotely sensed and in situ variables are used to predict WNV incidence with a correlation coefficient as high as 0.86. A novel method of mitigating the small numbers problem is also tested and ultimately discarded. Finally a consistent spatial pattern of model errors is identified, indicating the chosen variables are capable of predicting WNV disease risk across most of the United States, but are inadequate in the northern Great Plains region of the US.

  10. Optimization of parameter values for complex pulse sequences by simulated annealing: application to 3D MP-RAGE imaging of the brain.

    PubMed

    Epstein, F H; Mugler, J P; Brookeman, J R

    1994-02-01

    A number of pulse sequence techniques, including magnetization-prepared gradient echo (MP-GRE), segmented GRE, and hybrid RARE, employ a relatively large number of variable pulse sequence parameters and acquire the image data during a transient signal evolution. These sequences have recently been proposed and/or used for clinical applications in the brain, spine, liver, and coronary arteries. Thus, the need for a method of deriving optimal pulse sequence parameter values for this class of sequences now exists. Due to the complexity of these sequences, conventional optimization approaches, such as applying differential calculus to signal difference equations, are inadequate. We have developed a general framework for adapting the simulated annealing algorithm to pulse sequence parameter value optimization, and applied this framework to the specific case of optimizing the white matter-gray matter signal difference for a T1-weighted variable flip angle 3D MP-RAGE sequence. Using our algorithm, the values of 35 sequence parameters, including the magnetization-preparation RF pulse flip angle and delay time, 32 flip angles in the variable flip angle gradient-echo acquisition sequence, and the magnetization recovery time, were derived. Optimized 3D MP-RAGE achieved up to a 130% increase in white matter-gray matter signal difference compared with optimized 3D RF-spoiled FLASH with the same total acquisition time. The simulated annealing approach was effective at deriving optimal parameter values for a specific 3D MP-RAGE imaging objective, and may be useful for other imaging objectives and sequences in this general class.
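    The general framework described, simulated annealing over a vector of sequence parameters, can be sketched as follows. The objective function here is a toy stand-in (not the MP-RAGE signal model), and the 35 normalized parameters, step size, and cooling schedule are illustrative assumptions.

```python
import math
import random

def simulated_annealing(objective, x0, step=0.05, t0=1.0, cooling=0.995, n_iter=20000):
    """Generic simulated annealing: perturb one parameter at a time and accept
    worse solutions with probability exp(delta / T), maximizing the objective."""
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        cand = list(x)
        i = random.randrange(len(cand))
        cand[i] = min(1.0, max(0.0, cand[i] + random.uniform(-step, step)))
        fc = objective(cand)
        if fc > fx or random.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc
            if fx > fbest:
                best, fbest = list(x), fx
        t *= cooling
    return best, fbest

# Placeholder objective standing in for the white/gray matter signal difference
# as a function of normalized flip angles and delay times.
def toy_signal_difference(params):
    return -sum((p - abs(0.3 * math.sin(3 * i))) ** 2 for i, p in enumerate(params))

x0 = [random.random() for _ in range(35)]   # e.g. 35 sequence parameters
best, score = simulated_annealing(toy_signal_difference, x0)
print("objective at the annealed optimum:", round(score, 4))
```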

  11. Parameter optimization, sensitivity, and uncertainty analysis of an ecosystem model at a forest flux tower site in the United States

    USGS Publications Warehouse

    Wu, Yiping; Liu, Shuguang; Huang, Zhihong; Yan, Wende

    2014-01-01

    Ecosystem models are useful tools for understanding ecological processes and for sustainable management of resources. In the biogeochemical field, numerical models have been widely used for investigating carbon dynamics under global changes from site to regional and global scales. However, it is still challenging to optimize parameters and estimate parameterization uncertainty for complex process-based models such as the Erosion Deposition Carbon Model (EDCM), a modified version of CENTURY that considers the carbon, water, and nutrient cycles of ecosystems. This study was designed to conduct the parameter identifiability, optimization, sensitivity, and uncertainty analysis of EDCM using our developed EDCM-Auto, which incorporates a comprehensive R package—the Flexible Modeling Framework (FME)—and the Shuffled Complex Evolution (SCE) algorithm. Using a forest flux tower site as a case study, we implemented a comprehensive modeling analysis involving nine parameters and four target variables (carbon and water fluxes) with their corresponding measurements based on the eddy covariance technique. The local sensitivity analysis shows that the plant production-related parameters (e.g., PPDF1 and PRDX) are most sensitive to the model cost function. Both SCE and FME are comparable and performed well in deriving the optimal parameter set with satisfactory simulations of target variables. Global sensitivity and uncertainty analysis indicate that the parameter uncertainty and the resulting output uncertainty can be quantified, and that the magnitude of parameter-uncertainty effects depends on variables and seasons. This study also demonstrates that using cutting-edge R packages such as FME can be feasible and attractive for conducting comprehensive parameter analysis for ecosystem modeling.

  12. Single-Vector Calibration of Wind-Tunnel Force Balances

    NASA Technical Reports Server (NTRS)

    Parker, P. A.; DeLoach, R.

    2003-01-01

    An improved method of calibrating a wind-tunnel force balance involves the use of a unique load application system integrated with formal experimental design methodology. The Single-Vector Force Balance Calibration System (SVS) overcomes the productivity and accuracy limitations of prior calibration methods. A force balance is a complex structural spring element instrumented with strain gauges for measuring three orthogonal components of aerodynamic force (normal, axial, and side force) and three orthogonal components of aerodynamic torque (rolling, pitching, and yawing moments). Force balances remain as the state-of-the-art instrument that provide these measurements on a scale model of an aircraft during wind tunnel testing. Ideally, each electrical channel of the balance would respond only to its respective component of load, and it would have no response to other components of load. This is not entirely possible even though balance designs are optimized to minimize these undesirable interaction effects. Ultimately, a calibration experiment is performed to obtain the necessary data to generate a mathematical model and determine the force measurement accuracy. In order to set the independent variables of applied load for the calibration experiment, a high-precision mechanical system is required. Manual deadweight systems have been in use at Langley Research Center (LaRC) since the 1940s. These simple methodologies produce high confidence results, but the process is mechanically complex and labor-intensive, requiring three to four weeks to complete. Over the past decade, automated balance calibration systems have been developed. In general, these systems were designed to automate the tedious manual calibration process resulting in an even more complex system which deteriorates load application quality. The current calibration approach relies on a one-factor-at-a-time (OFAT) methodology, where each independent variable is incremented individually throughout its full-scale range, while all other variables are held at a constant magnitude. This OFAT approach has been widely accepted because of its inherent simplicity and intuitive appeal to the balance engineer. LaRC has been conducting research in a "modern design of experiments" (MDOE) approach to force balance calibration. Formal experimental design techniques provide an integrated view to the entire calibration process covering all three major aspects of an experiment: the design of the experiment, the execution of the experiment, and the statistical analyses of the data. In order to overcome the weaknesses in the available mechanical systems and to apply formal experimental techniques, a new mechanical system was required. The SVS enables the complete calibration of a six-component force balance with a series of single force vectors.
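    The mathematical model referred to above is, in essence, a regression of bridge outputs on applied load components and their interactions. The sketch below fits such a model by least squares on synthetic calibration data; the load ranges, interaction magnitudes, and noise level are invented for illustration and do not represent any SVS or LaRC dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic calibration data: 6 applied load components (NF, AF, SF, RM, PM, YM)
# and 6 strain-gauge bridge outputs with small interaction effects and noise.
n_points = 200
loads = rng.uniform(-1, 1, size=(n_points, 6))
true_primary = np.diag([2.0, 1.8, 2.2, 1.5, 1.6, 1.4])
true_interact = 0.05 * rng.normal(size=(6, 6))
np.fill_diagonal(true_interact, 0.0)
outputs = loads @ (true_primary + true_interact).T + 0.01 * rng.normal(size=(n_points, 6))

def design_matrix(L):
    # Regression terms per bridge: 6 primary loads + 15 pairwise load products.
    cross = np.column_stack([L[:, i] * L[:, j]
                             for i in range(6) for j in range(i + 1, 6)])
    return np.column_stack([L, cross])

X = design_matrix(loads)
coeffs, *_ = np.linalg.lstsq(X, outputs, rcond=None)

# Report how well the fitted calibration model reproduces the bridge outputs.
residual = outputs - X @ coeffs
print("RMS residual per bridge:", np.sqrt((residual ** 2).mean(axis=0)).round(4))
```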

  13. Comparison of baseline removal methods for laser-induced breakdown spectroscopy of geological samples

    NASA Astrophysics Data System (ADS)

    Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas

    2016-12-01

    This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (Air-PLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly-developed technique for Custom baseline removal (BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts per million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new technique of Custom BLR produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and varying analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to best prediction accuracy for multivariate analyses. Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable and empower multivariate techniques with the best possible input data for optimal prediction accuracy are shown to be well worth the slight increase in necessary computations and complexity.
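    One of the methods compared above, asymmetric least squares (ALS) baseline removal, is compact enough to sketch directly. The implementation below follows the widely used Eilers-Boelens formulation; the synthetic "LIBS-like" spectrum and the lam/p settings are illustrative assumptions.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline (Eilers & Boelens): a smooth curve
    hugging the lower envelope of the spectrum. lam controls smoothness, p the
    asymmetry of the weights for points above/below the baseline."""
    n = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(n, n - 2))  # 2nd differences
    w = np.ones(n)
    z = y
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, n, n)
        z = spsolve(W + lam * (D @ D.T), w * y)
        w = p * (y > z) + (1 - p) * (y < z)  # small weight above baseline (peaks)
    return z

# Synthetic LIBS-like spectrum: a few narrow emission peaks on a curved continuum.
x = np.linspace(0, 1, 2000)
continuum = 0.5 + 0.4 * np.exp(-3 * x)
peaks = sum(a * np.exp(-((x - c) / 0.004) ** 2) for a, c in [(2, 0.2), (1.5, 0.55), (1, 0.8)])
spectrum = continuum + peaks + 0.01 * np.random.default_rng(0).normal(size=x.size)

corrected = spectrum - als_baseline(spectrum)
print("residual continuum level after subtraction:", corrected[:100].mean().round(3))
```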

  14. Yielding physically-interpretable emulators - A Sparse PCA approach

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Alsahaf, A.; Giuliani, M.; Castelletti, A.

    2015-12-01

    Projection-based techniques, such as Principal Orthogonal Decomposition (POD), are a common approach to surrogate high-fidelity process-based models by lower order dynamic emulators. With POD, the dimensionality reduction is achieved by using observations, or 'snapshots' - generated with the high-fidelity model -, to project the entire set of input and state variables of this model onto a smaller set of basis functions that account for most of the variability in the data. While reduction efficiency and variance control of POD techniques are usually very high, the resulting emulators are structurally highly complex and can hardly be given a physically meaningful interpretation as each basis is a projection of the entire set of inputs and states. In this work, we propose a novel approach based on Sparse Principal Component Analysis (SPCA) that combines the several assets of POD methods with the potential for ex-post interpretation of the emulator structure. SPCA reduces the number of non-zero coefficients in the basis functions by identifying a sparse matrix of coefficients. While the resulting set of basis functions may retain less variance of the snapshots, the presence of a few non-zero coefficients assists in the interpretation of the underlying physical processes. The SPCA approach is tested on the reduction of a 1D hydro-ecological model (DYRESM-CAEDYM) used to describe the main ecological and hydrodynamic processes in Tono Dam, Japan. An experimental comparison against a standard POD approach shows that SPCA achieves the same accuracy in emulating a given output variable - for the same level of dimensionality reduction - while yielding better insights of the main process dynamics.
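    The contrast between dense and sparse bases can be illustrated with off-the-shelf tools. The sketch below builds synthetic "snapshots" driven by two spatially localized latent processes (a stand-in for the DYRESM-CAEDYM state variables, which are not used here) and compares the number of non-zero loadings recovered by PCA and by scikit-learn's SparsePCA.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(0)

# Synthetic snapshots: 300 time steps of 20 state variables driven by two
# latent processes, each affecting only a small subset of variables.
t = np.arange(300)
latent = np.column_stack([np.sin(2 * np.pi * t / 50), np.cos(2 * np.pi * t / 120)])
loadings = np.zeros((2, 20))
loadings[0, :4] = 1.0        # first process drives variables 0-3
loadings[1, 10:13] = 1.0     # second process drives variables 10-12
snapshots = latent @ loadings + 0.1 * rng.normal(size=(300, 20))

pca = PCA(n_components=2).fit(snapshots)
spca = SparsePCA(n_components=2, alpha=1.0, random_state=0).fit(snapshots)

# Dense PCA bases spread weight over all variables; sparse bases concentrate on
# the few variables actually involved, which eases physical interpretation.
print("nonzero loadings, PCA :", (np.abs(pca.components_) > 1e-3).sum(axis=1))
print("nonzero loadings, SPCA:", (np.abs(spca.components_) > 1e-3).sum(axis=1))
```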

  15. Why caution is recommended with post-hoc individual patient matching for estimation of treatment effect in parallel-group randomized controlled trials: the case of acute stroke trials.

    PubMed

    Jafari, Nahid; Hearne, John; Churilov, Leonid

    2013-11-10

    A post-hoc individual patient matching procedure was recently proposed within the context of parallel group randomized clinical trials (RCTs) as a method for estimating treatment effect. In this paper, we consider a post-hoc individual patient matching problem within a parallel group RCT as a multi-objective decision-making problem focussing on the trade-off between the quality of individual matches and the overall percentage of matching. Using acute stroke trials as a context, we utilize exact optimization and simulation techniques to investigate a complex relationship between the overall percentage of individual post-hoc matching, the size of the respective RCT, and the quality of matching on variables highly prognostic for a good functional outcome after stroke, as well as the dispersion in these variables. It is empirically confirmed that a high percentage of individual post-hoc matching can only be achieved when the differences in prognostic baseline variables between individually matched subjects within the same pair are sufficiently large and that the unmatched subjects are qualitatively different to the matched ones. It is concluded that the post-hoc individual matching as a technique for treatment effect estimation in parallel-group RCTs should be exercised with caution because of its propensity to introduce significant bias and reduce validity. If used with appropriate caution and thorough evaluation, this approach can complement other viable alternative approaches for estimating the treatment effect. Copyright © 2013 John Wiley & Sons, Ltd.
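    The trade-off examined above, match quality versus overall matching percentage, can be illustrated with a toy optimal-matching exercise. The sketch below matches hypothetical treated and control subjects on a single prognostic score via the assignment problem and then applies calipers of different widths; the score distributions and caliper values are invented for illustration and are not the acute stroke trial data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)

# Hypothetical baseline prognostic scores (e.g. a combined age/severity index)
# for the treatment and control arms of a parallel-group RCT.
treated = rng.normal(50, 10, size=80)
control = rng.normal(52, 10, size=80)

# Optimal one-to-one matching minimizing the total |difference| in the score.
cost = np.abs(treated[:, None] - control[None, :])
rows, cols = linear_sum_assignment(cost)
diffs = cost[rows, cols]

# Calipers of different widths: tight calipers give high-quality matches but
# discard many subjects; loose calipers match nearly everyone, badly.
for caliper in (0.5, 2.0, 5.0, float("inf")):
    kept = diffs <= caliper
    mean_diff = diffs[kept].mean() if kept.any() else float("nan")
    print(f"caliper {caliper:>4}: matched {kept.mean():5.1%} of pairs, "
          f"mean |diff| among matched = {mean_diff:.2f}")
```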

  16. Maternal Factors Predicting Cognitive and Behavioral Characteristics of Children with Fetal Alcohol Spectrum Disorders

    PubMed Central

    May, Philip A.; Tabachnick, Barbara G.; Gossage, J. Phillip; Kalberg, Wendy O.; Marais, Anna-Susan; Robinson, Luther K.; Manning, Melanie A.; Blankenship, Jason; Buckley, David; Hoyme, H. Eugene; Adnams, Colleen M.

    2013-01-01

    Objective To provide an analysis of multiple predictors of cognitive and behavioral traits for children with fetal alcohol spectrum disorders (FASD). Method Multivariate correlation techniques were employed with maternal and child data from epidemiologic studies in a community in South Africa. Data on 561 first grade children with fetal alcohol syndrome (FAS), partial FAS (PFAS), and not FASD and their mothers were analyzed by grouping 19 maternal variables into categories (physical, demographic, childbearing, and drinking) and employed in structural equation models (SEM) to assess correlates of child intelligence (verbal and non-verbal) and behavior. Results A first SEM utilizing only seven maternal alcohol use variables to predict cognitive/behavioral traits was statistically significant (B = 3.10, p < .05), but explained only 17.3% of the variance. The second model incorporated multiple maternal variables and was statistically significant explaining 55.3% of the variance. Significantly correlated with low intelligence and problem behavior were demographic (B = 3.83, p < .05) (low maternal education, low socioeconomic status (SES), and rural residence) and maternal physical characteristics (B = 2.70, p < .05) (short stature, small head circumference, and low weight). Childbearing history and alcohol use composites were not statistically significant in the final complex model, and were overpowered by SES and maternal physical traits. Conclusions While other analytic techniques have amply demonstrated the negative effects of maternal drinking on intelligence and behavior, this highly-controlled analysis of multiple maternal influences reveals that maternal demographics and physical traits make a significant enabling or disabling contribution to child functioning in FASD. PMID:23751886

  17. Molecular dynamics based enhanced sampling of collective variables with very large time steps.

    PubMed

    Chen, Pei-Yang; Tuckerman, Mark E

    2018-01-14

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  18. Molecular dynamics based enhanced sampling of collective variables with very large time steps

    NASA Astrophysics Data System (ADS)

    Chen, Pei-Yang; Tuckerman, Mark E.

    2018-01-01

    Enhanced sampling techniques that target a set of collective variables and that use molecular dynamics as the driving engine have seen widespread application in the computational molecular sciences as a means to explore the free-energy landscapes of complex systems. The use of molecular dynamics as the fundamental driver of the sampling requires the introduction of a time step whose magnitude is limited by the fastest motions in a system. While standard multiple time-stepping methods allow larger time steps to be employed for the slower and computationally more expensive forces, the maximum achievable increase in time step is limited by resonance phenomena, which inextricably couple fast and slow motions. Recently, we introduced deterministic and stochastic resonance-free multiple time step algorithms for molecular dynamics that solve this resonance problem and allow ten- to twenty-fold gains in the large time step compared to standard multiple time step algorithms [P. Minary et al., Phys. Rev. Lett. 93, 150201 (2004); B. Leimkuhler et al., Mol. Phys. 111, 3579-3594 (2013)]. These methods are based on the imposition of isokinetic constraints that couple the physical system to Nosé-Hoover chains or Nosé-Hoover Langevin schemes. In this paper, we show how to adapt these methods for collective variable-based enhanced sampling techniques, specifically adiabatic free-energy dynamics/temperature-accelerated molecular dynamics, unified free-energy dynamics, and by extension, metadynamics, thus allowing simulations employing these methods to employ similarly very large time steps. The combination of resonance-free multiple time step integrators with free-energy-based enhanced sampling significantly improves the efficiency of conformational exploration.

  19. Developing a Complex Independent Component Analysis (CICA) Technique to Extract Non-stationary Patterns from Geophysical Time Series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael

    2017-12-01

    In recent decades, decomposition techniques have enabled increasingly more applications for dimension reduction, as well as extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and more recently the independent component analysis (ICA) have been applied to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of time series, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the autocovariance matrix and diagonalizing higher (than two) order statistical tensors from centered time series, respectively. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, the complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part, and their Hilbert transformed series as its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex dataset in (a), and finally, (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework, with known truth. Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016), and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans are used with the aim of demonstrating signal separations of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from the Gravity Recovery And Climate Experiment Terrestrial Water Storage (GRACE TWS) with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of errors in extracting NAO or AMO from SST data using the complex EOF (CEOF) approach reaches up to 50% of the signal itself, while it is reduced to 16% when applying CICA. Larger errors with magnitudes of 100% and 30% of the signal itself are found while separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that the CICA is more effective than CEOF in separating non-stationary patterns.
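    A rough sketch of step (a) of the CICA procedure, building the complex dataset from the observed series and its Hilbert transform, is given below. For brevity the subsequent ICA step (diagonalization of fourth-order cumulants) is replaced by a plain complex EOF/SVD decomposition, so this is a simplified stand-in rather than the authors' algorithm; the propagating synthetic field is likewise an illustrative assumption.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)

# Synthetic multichannel series: a propagating oscillation whose amplitude and
# phase vary across channels, plus noise.
n_t, n_x = 600, 40
t = np.arange(n_t)[:, None]
x = np.arange(n_x)[None, :]
data = np.sin(2 * np.pi * (t / 60.0 - x / 15.0)) + 0.3 * rng.normal(size=(n_t, n_x))

# Step (a) of CICA: complex dataset with the observed (centered) series as the
# real part and its Hilbert transform as the imaginary part.
analytic = hilbert(data - data.mean(axis=0), axis=0)   # complex, shape (n_t, n_x)

# Stand-in for steps (b)-(c): a complex EOF (SVD) decomposition; the actual
# CICA would further rotate these modes toward statistical independence.
U, s, Vh = np.linalg.svd(analytic, full_matrices=False)
mode = Vh[0]                       # leading complex spatial mode
amplitude = np.abs(mode)           # spatial amplitude pattern
phase = np.angle(mode)             # spatial phase -> propagation direction

print(f"variance captured by mode 1: {100 * s[0]**2 / (s**2).sum():.1f}%")
print("mean phase gradient (rad per grid step):",
      round(float(np.diff(np.unwrap(phase)).mean()), 3))
```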

  20. Percutaneous treatment of complex biliary stone disease using endourological technique and literature review

    PubMed Central

    Korkes, Fernando; Carneiro, Ariê; Nasser, Felipe; Affonso, Breno Boueri; Galastri, Francisco Leonardo; de Oliveira, Marcos Belotto; Macedo, Antônio Luiz de Vasconcellos

    2015-01-01

    Most biliary stone disease needs to be treated surgically. However, in special cases in which traditional endoscopic access to the biliary tract is not possible, a multidisciplinary approach using a hybrid technique with urologic instruments constitutes a treatment option. We report a case of a patient with complex intrahepatic stones, previously managed unsuccessfully with conventional approaches, whose symptoms resolved after treatment with a hybrid technique using endourologic technology. We also conducted an extensive literature review, up to October 2012, of manuscripts indexed in PubMed on the treatment of complex gallstones with the hybrid technique. The multidisciplinary approach with a hybrid technique using endourologic instruments represents a safe and effective treatment option for patients with complex biliary stones who cannot be treated with conventional methods. PMID:26061073

  1. Adaptive Synchronization of Fractional Order Complex-Variable Dynamical Networks via Pinning Control

    NASA Astrophysics Data System (ADS)

    Ding, Da-Wei; Yan, Jie; Wang, Nian; Liang, Dong

    2017-09-01

    In this paper, the synchronization of fractional order complex-variable dynamical networks is studied using an adaptive pinning control strategy based on close center degree. Some effective criteria for global synchronization of fractional order complex-variable dynamical networks are derived based on the Lyapunov stability theory. From the theoretical analysis, one concludes that under appropriate conditions, the complex-variable dynamical networks can realize global synchronization by using the proper adaptive pinning control method. Meanwhile, we solve the problem of how much coupling strength should be applied to ensure the synchronization of the fractional order complex networks. Therefore, compared with the existing results, the synchronization method in this paper is more general and convenient. This result extends the synchronization condition of real-variable dynamical networks to the complex-valued field, which makes our research more practical. Finally, two simulation examples show that the derived theoretical results are valid and the proposed adaptive pinning method is effective. Supported by National Natural Science Foundation of China under Grant No. 61201227, National Natural Science Foundation of China Guangdong Joint Fund under Grant No. U1201255, the Natural Science Foundation of Anhui Province under Grant No. 1208085MF93, 211 Innovation Team of Anhui University under Grant Nos. KJTD007A and KJTD001B, and also supported by Chinese Scholarship Council

  2. [Measurement of CO diffusion capacity (II): Standardization and quality criteria].

    PubMed

    Salcedo Posadas, A; Villa Asensi, J R; de Mir Messa, I; Sardón Prado, O; Larramona, H

    2015-08-01

    The diffusion capacity is the technique that measures the ability of the respiratory system to exchange gas, thus allowing a diagnosis of malfunction of the alveolar-capillary unit. The most important parameter to assess is the CO diffusion capacity (DLCO). New methods are currently being used to measure diffusion using nitric oxide (NO). There are other methods for measuring diffusion, although this article mainly refers to the single breath technique, as it is the most widely used and best standardized. Its complexity, its reference equations, differences in equipment, inter-patient variability, and the conditions in which the DLCO is performed lead to a wide inter-laboratory variability, although its standardization makes this a more reliable and reproducible method. The practical aspects of the technique are analyzed, specifying the recommendations for carrying out a suitable procedure, the calibration routine, calculations, and adjustments. Clinical applications are also discussed. An increase in the transfer of CO occurs in diseases in which there is an increased volume of blood in the pulmonary capillaries, such as polycythemia and pulmonary hemorrhage. There is a decrease in DLCO in patients with alveolar volume reduction or diffusion defects, either by altered alveolar-capillary membrane (interstitial diseases) or decreased volume of blood in the pulmonary capillaries (pulmonary embolism or primary pulmonary hypertension). Other causes of decreased or increased DLCO are also highlighted. Copyright © 2014 Asociación Española de Pediatría. Published by Elsevier España, S.L.U. All rights reserved.

  3. Mesoscale Convective Complex versus Non-Mesoscale Convective Complex Thunderstorms: A Comparison of Selected Meteorological Variables.

    DTIC Science & Technology

    1986-08-01

    [Only fragments of the report's list of tables survive in this record: mean square errors for selected variables; variable ranges and mean values for MCC and non-MCC cases; and the alpha levels at which the two mean values are determined to be significantly different (e.g., none for vorticity advection, .20 for the 700 mb vertical velocity forecast). These alpha levels express the probability of erroneously concluding that the means differ.]

  4. Minimum Hamiltonian Ascent Trajectory Evaluation (MASTRE) program (update to automatic flight trajectory design, performance prediction, and vehicle sizing for support of Shuttle and Shuttle derived vehicles) engineering manual

    NASA Technical Reports Server (NTRS)

    Lyons, J. T.

    1993-01-01

    The Minimum Hamiltonian Ascent Trajectory Evaluation (MASTRE) program and its predecessors, the ROBOT and the RAGMOP programs, have had a long history of supporting MSFC in the simulation of space boosters for the purpose of performance evaluation. The ROBOT program was used in the simulation of the Saturn 1B and Saturn 5 vehicles in the 1960's and provided the first utilization of the minimum Hamiltonian (or min-H) methodology and the steepest ascent technique to solve the optimum trajectory problem. The advent of the Space Shuttle in the 1970's and its complex airplane design required a redesign of the trajectory simulation code since aerodynamic flight and controllability were required for proper simulation. The RAGMOP program was the first attempt to incorporate the complex equations of the Space Shuttle into an optimization tool by using an optimization method based on steepest ascent techniques (but without the min-H methodology). By developing the complex partial derivatives associated with the Space Shuttle configuration and using techniques from the RAGMOP program, the ROBOT program was redesigned to incorporate these additional complexities. This redesign created the MASTRE program, which was referred to as the Minimum Hamiltonian Ascent Shuttle TRajectory Evaluation program at that time. Unique to this program were first-stage (or booster) nonlinear aerodynamics, upper-stage linear aerodynamics, engine control via moment balance, liquid and solid thrust forces, variable liquid throttling to maintain constant acceleration limits, and a total upgrade of the equations used in the forward and backward integration segments of the program. This modification of the MASTRE code has been used to simulate the new space vehicles associated with the National Launch Systems (NLS). Although not as complicated as the Space Shuttle, the simulation and analysis of the NLS vehicles required additional modifications to the MASTRE program in the areas of providing additional flexibility in the use of the program, allowing additional optimization options, and providing special options for the NLS configuration.

  5. Stage call: Cardiovascular reactivity to audition stress in musicians

    PubMed Central

    Chanwimalueang, Theerasak; Aufegger, Lisa; Adjei, Tricia; Wasley, David; Cruder, Cinzia; Mandic, Danilo P.

    2017-01-01

    Auditioning is at the very center of educational and professional life in music and is associated with significant psychophysical demands. Knowledge of how these demands affect cardiovascular responses to psychosocial pressure is essential for developing strategies to both manage stress and understand optimal performance states. To this end, we recorded the electrocardiograms (ECGs) of 16 musicians (11 violinists and 5 flutists) before and during performances in both low- and high-stress conditions: with no audience and in front of an audition panel, respectively. The analysis consisted of the detection of R-peaks in the ECGs to extract heart rate variability (HRV) from the notoriously noisy real-world ECGs. Our data analysis approach spanned both standard (temporal and spectral) and advanced (structural complexity) techniques. The complexity science approaches—namely, multiscale sample entropy and multiscale fuzzy entropy—indicated a statistically significant decrease in structural complexity in HRV from the low- to the high-stress condition and an increase in structural complexity from the pre-performance to performance period, thus confirming the complexity loss theory and a loss in degrees of freedom due to stress. Results from the spectral analyses also suggest that the stress responses in the female participants were more parasympathetically driven than those of the male participants. In conclusion, our findings suggest that interventions to manage stress are best targeted at the sensitive pre-performance period, before an audition begins. PMID:28437466
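    The multiscale sample entropy used above can be sketched in a few lines. The implementation below follows the usual coarse-graining plus SampEn(m, r) recipe; the synthetic RR-interval series, the tolerance r = 0.2·SD, and the scale range are illustrative assumptions, not the musicians' ECG data.

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r): negative log of the conditional probability
    that sequences matching for m points also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    n = len(x)

    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm)])
        count = 0
        for i in range(len(templates)):
            d = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(d <= r)
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.nan

def multiscale_sample_entropy(x, max_scale=5):
    """Coarse-grain the series at increasing scales and compute SampEn at each."""
    out = []
    for tau in range(1, max_scale + 1):
        n = (len(x) // tau) * tau
        coarse = np.asarray(x[:n]).reshape(-1, tau).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

# Synthetic RR-interval series (seconds): 0.8 s baseline with noise and a slow
# respiratory-like modulation.
rng = np.random.default_rng(0)
beats = np.arange(1200)
rr = 0.8 + 0.03 * rng.standard_normal(1200) + 0.02 * np.sin(2 * np.pi * beats / 60)
print("SampEn at scales 1-5:", [round(v, 3) for v in multiscale_sample_entropy(rr)])
```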

  6. Robust pedestrian detection and tracking from a moving vehicle

    NASA Astrophysics Data System (ADS)

    Tuong, Nguyen Xuan; Müller, Thomas; Knoll, Alois

    2011-01-01

    In this paper, we address the problem of multi-person detection, tracking and distance estimation in a complex scenario using multiple cameras. Specifically, we are interested in a vision system for supporting the driver in avoiding any unwanted collision with a pedestrian. We propose an approach using Histograms of Oriented Gradients (HOG) to detect pedestrians in static images and a particle filter as a robust tracking technique to follow targets from frame to frame. Because the depth map requires expensive computation, we extract depth information of targets using the Direct Linear Transformation (DLT) to reconstruct 3D coordinates of corresponding points found by running Speeded Up Robust Features (SURF) on two input images. Using the particle filter, the proposed tracker can efficiently handle target occlusions in a simple background environment. However, to achieve reliable performance in complex scenarios with frequent target occlusions and complex cluttered backgrounds, results from the detection module are integrated to create feedback and recover the tracker from tracking failures due to the complexity of the environment and target appearance model variability. The proposed approach is evaluated on different data sets, both in a simple background scenario and in a cluttered background environment. The results show that, by integrating detector and tracker, reliable and stable performance is possible even if occlusion occurs frequently in a highly complex environment. A vision-based collision avoidance system for an intelligent car, as a result, can be achieved.
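    The HOG detection stage can be reproduced with OpenCV's pre-trained people detector, as in the hedged sketch below; the particle-filter tracking, SURF matching, and DLT depth reconstruction stages of the full system are omitted, and the input file name is a placeholder.

```python
import cv2

# Minimal HOG-based pedestrian detection using OpenCV's pre-trained people
# detector; tracking and stereo depth estimation are not shown here.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("frame.png")                 # hypothetical input frame
if frame is None:
    raise SystemExit("provide a test image named frame.png")

rects, weights = hog.detectMultiScale(frame, winStride=(8, 8),
                                      padding=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)),
                  (0, 255, 0), 2)

cv2.imwrite("detections.png", frame)
print(f"{len(rects)} pedestrian candidates detected")
```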

  7. Comparison of tunnel variability between trans-portal and outside-in techniques in ACL reconstruction.

    PubMed

    Sim, Jae-Ang; Kim, Jong-Min; Lee, Sahnghoon; Bae, Ji-Yong; Seon, Jong-Keun

    2017-04-01

    Although trans-portal and outside-in techniques are commonly used for anatomical ACL reconstruction, there is very little information on variability in tunnel placement between the two techniques. A total of 103 patients who received ACL reconstruction using the trans-portal (50 patients) or outside-in technique (53 patients) were included in the study. The ACL tunnel location, tunnel length, and graft-femoral tunnel angle were analyzed using 3D CT knee models, and we compared the location and length of the femoral and tibial tunnels and the graft bending angle between the two techniques. The variability of each technique regarding tunnel location, length, and graft tunnel angle was also compared using the range values. There were no differences in average femoral tunnel depth and height between the two groups. The ranges of femoral tunnel depth and height showed no difference between the two groups (36 and 41% in the trans-portal technique vs. 32 and 41% in the outside-in technique). The average values and ranges of tibial tunnel location also showed similar results in the two groups. The outside-in technique showed a longer femoral tunnel than the trans-portal technique (34.0 vs. 36.8 mm, p = 0.001). The range of femoral tunnel length was also wider in the trans-portal technique than in the outside-in technique. Although the outside-in technique showed a significantly more acute graft bending angle than the trans-portal technique in average values, the trans-portal technique showed wider ranges of graft bending angle than the outside-in technique [ranges 73° (SD 13.6) vs. 53° (SD 10.7), respectively]. Although both trans-portal and outside-in techniques in ACL reconstruction can provide relatively consistent femoral and tibial tunnel locations, the trans-portal technique showed higher variability in femoral tunnel length and graft bending angle than the outside-in technique. Therefore, the outside-in technique in ACL reconstruction is considered an effective method for surgeons to create a more consistent femoral tunnel. Level of evidence: III.

  8. Multilevel Mixture Kalman Filter

    NASA Astrophysics Data System (ADS)

    Guo, Dong; Wang, Xiaodong; Chen, Rong

    2004-12-01

    The mixture Kalman filter is a general sequential Monte Carlo technique for conditional linear dynamic systems. It generates samples of some indicator variables recursively based on sequential importance sampling (SIS) and integrates out the linear and Gaussian state variables conditioned on these indicators. Due to the marginalization process, the complexity of the mixture Kalman filter is quite high if the dimension of the indicator sampling space is high. In this paper, we address this difficulty by developing a new Monte Carlo sampling scheme, namely, the multilevel mixture Kalman filter. The basic idea is to make use of the multilevel or hierarchical structure of the space from which the indicator variables take values. That is, we draw samples in a multilevel fashion, beginning with sampling from the highest-level sampling space and then draw samples from the associate subspace of the newly drawn samples in a lower-level sampling space, until reaching the desired sampling space. Such a multilevel sampling scheme can be used in conjunction with the delayed estimation method, such as the delayed-sample method, resulting in delayed multilevel mixture Kalman filter. Examples in wireless communication, specifically the coherent and noncoherent 16-QAM over flat-fading channels, are provided to demonstrate the performance of the proposed multilevel mixture Kalman filter.

  9. The SIMRAND methodology: Theory and application for the simulation of research and development projects

    NASA Technical Reports Server (NTRS)

    Miles, R. F., Jr.

    1986-01-01

    A research and development (R&D) project often involves a number of decisions that must be made concerning which subset of systems or tasks are to be undertaken to achieve the goal of the R&D project. To help in this decision making, SIMRAND (SIMulation of Research ANd Development Projects) is a methodology for the selection of the optimal subset of systems or tasks to be undertaken on an R&D project. Using alternative networks, the SIMRAND methodology models the alternative subsets of systems or tasks under consideration. Each path through an alternative network represents one way of satisfying the project goals. Equations are developed that relate the system or task variables to the measure of reference. Uncertainty is incorporated by treating the variables of the equations probabilistically as random variables, with cumulative distribution functions assessed by technical experts. Analytical techniques of probability theory are used to reduce the complexity of the alternative networks. Cardinal utility functions over the measure of preference are assessed for the decision makers. A run of the SIMRAND Computer I Program combines, in a Monte Carlo simulation model, the network structure, the equations, the cumulative distribution functions, and the utility functions.
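    In the spirit of the SIMRAND methodology, the sketch below uses Monte Carlo simulation to compare two alternative paths whose task variables are random (here triangular distributions standing in for expert-assessed cumulative distribution functions) and ranks the paths by expected utility. The path structure, distributions, and utility function are illustrative assumptions, not the SIMRAND Computer Program itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two alternative paths through a hypothetical R&D network; each path is a set
# of tasks whose costs are uncertain (triangular distributions: low, mode, high).
paths = {
    "path_A": [(2, 4, 9), (1, 3, 6), (5, 7, 12)],
    "path_B": [(3, 5, 8), (4, 6, 15)],
}

def utility(total_cost, risk_aversion=0.15):
    # Exponential (risk-averse) utility over total cost: lower cost is better.
    return -np.exp(risk_aversion * total_cost)

n_trials = 100_000
for name, tasks in paths.items():
    samples = sum(rng.triangular(lo, mode, hi, size=n_trials)
                  for lo, mode, hi in tasks)
    print(f"{name}: mean cost = {samples.mean():6.2f}, "
          f"expected utility = {utility(samples).mean():9.4f}")
```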

  10. Design Optimization of a Variable-Speed Power Turbine

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.; Jones, Scott M.; Gray, Justin S.

    2014-01-01

    NASA's Rotary Wing Project is investigating technologies that will enable the development of revolutionary civil tilt rotor aircraft. Previous studies have shown that for large tilt rotor aircraft to be viable, the rotor speeds need to be slowed significantly during the cruise portion of the flight. This requirement to slow the rotors during cruise presents an interesting challenge to the propulsion system designer as efficient engine performance must be achieved at two drastically different operating conditions. One potential solution to this challenge is to use a transmission with multiple gear ratios and shift to the appropriate ratio during flight. This solution will require a large transmission that is likely to be maintenance intensive and will require a complex shifting procedure to maintain power to the rotors at all times. An alternative solution is to use a fixed gear ratio transmission and require the power turbine to operate efficiently over the entire speed range. This concept is referred to as a variable-speed power-turbine (VSPT) and is the focus of the current study. This paper explores the design of a variable speed power turbine for civil tilt rotor applications using design optimization techniques applied to NASA's new meanline tool, the Object-Oriented Turbomachinery Analysis Code (OTAC).

  11. Time series analysis for psychological research: examining and forecasting change

    PubMed Central

    Jebb, Andrew T.; Tay, Louis; Wang, Wei; Huang, Qiming

    2015-01-01

    Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials. PMID:26106341
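    The paper's software tutorial is in R; as a rough Python analog, the sketch below fits a seasonal ARIMA model to a synthetic monthly series and forecasts the next year with statsmodels. The series, model orders, and forecast horizon are illustrative assumptions, not the online job-search data used in the article.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Synthetic monthly series with a trend, a yearly seasonal cycle, and noise
# (a stand-in for something like online job-search volume).
n = 120
t = np.arange(n)
y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, n)
series = pd.Series(y, index=pd.date_range("2010-01-01", periods=n, freq="MS"))

# Fit a seasonal ARIMA-type model and forecast the next 12 months.
model = ARIMA(series, order=(1, 1, 1), seasonal_order=(1, 0, 1, 12)).fit()
forecast = model.forecast(steps=12)
print(forecast.round(1))
```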

  12. Information transfer and information modification to identify the structure of cardiovascular and cardiorespiratory networks.

    PubMed

    Faes, Luca; Nollo, Giandomenico; Krohova, Jana; Czippelova, Barbora; Turianikova, Zuzana; Javorka, Michal

    2017-07-01

    To fully elucidate the complex physiological mechanisms underlying the short-term autonomic regulation of heart period (H), systolic and diastolic arterial pressure (S, D) and respiratory (R) variability, the joint dynamics of these variables need to be explored using multivariate time series analysis. This study proposes the utilization of information-theoretic measures to measure causal interactions between nodes of the cardiovascular/cardiorespiratory network and to assess the nature (synergistic or redundant) of these directed interactions. Indexes of information transfer and information modification are extracted from the H, S, D and R series measured from healthy subjects in a resting state and during postural stress. Computations are performed in the framework of multivariate linear regression, using bootstrap techniques to assess on a single-subject basis the statistical significance of each measure and of its transitions across conditions. We find patterns of information transfer and modification which are related to specific cardiovascular and cardiorespiratory mechanisms in resting conditions and to their modification induced by the orthostatic stress.
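    Within the multivariate linear regression framework mentioned above, the information transfer from one series to another (conditioned on the rest) reduces, under Gaussian assumptions, to a Granger-type log-ratio of residual variances. The sketch below estimates such a measure on simulated series in which respiration drives heart period; the model order, coupling, and series are illustrative assumptions, and the bootstrap significance testing and information-modification indexes are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated beat-to-beat series: respiration R drives heart period H with a
# one-beat lag; S and D are additional (here uninformative) pressure series.
n = 2000
R = rng.normal(size=n)
H = np.zeros(n)
for k in range(1, n):
    H[k] = 0.6 * H[k - 1] + 0.5 * R[k - 1] + 0.2 * rng.normal()
S = rng.normal(size=n)
D = rng.normal(size=n)

def lagged(x, p):
    # Columns of past values x[t-1], ..., x[t-p], aligned with the target at t.
    return np.column_stack([x[p - i:len(x) - i] for i in range(1, p + 1)])

def transfer_entropy_linear(target, source, conditioning, p=2):
    """Gaussian transfer entropy source -> target given conditioning series:
    0.5 * ln(var of restricted residuals / var of full residuals)."""
    y = target[p:]
    restricted = np.column_stack([lagged(target, p)] +
                                 [lagged(c, p) for c in conditioning])
    full = np.column_stack([restricted, lagged(source, p)])

    def resid_var(X):
        A = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return (y - A @ beta).var()

    return 0.5 * np.log(resid_var(restricted) / resid_var(full))

print("TE R -> H | S,D :", round(transfer_entropy_linear(H, R, [S, D]), 4))
print("TE S -> H | R,D :", round(transfer_entropy_linear(H, S, [R, D]), 4))
```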

  13. Studies in integrated line-and packet-switched computer communication systems

    NASA Astrophysics Data System (ADS)

    Maglaris, B. S.

    1980-06-01

    The problem of efficiently allocating the bandwidth of a trunk to both types of traffic is handled for various system and traffic models. A performance analysis is carried out both for variable and fixed frame schemes. It is shown that variable frame schemes, adjusting the frame length according to the traffic variations, offer better trunk utilization at the cost of the additional hardware and software complexity needed because of the lack of synchronization. An optimization study on the fixed frame schemes follows. The problem of dynamically allocating the fixed frame to both types of traffic is formulated as a Markovian Decision process. It is shown that the movable boundary scheme, suggested for commercial implementations of integrated multiplexors, offers optimal or near optimal performance and simplicity of implementation. Finally, the behavior of the movable boundary integrated scheme is studied for tandem link connections. Under the assumptions made for the line-switched traffic, the forward allocation technique is found to offer the best alternative among different path set-up strategies.

  14. Time series analysis for psychological research: examining and forecasting change.

    PubMed

    Jebb, Andrew T; Tay, Louis; Wang, Wei; Huang, Qiming

    2015-01-01

    Psychological research has increasingly recognized the importance of integrating temporal dynamics into its theories, and innovations in longitudinal designs and analyses have allowed such theories to be formalized and tested. However, psychological researchers may be relatively unequipped to analyze such data, given its many characteristics and the general complexities involved in longitudinal modeling. The current paper introduces time series analysis to psychological research, an analytic domain that has been essential for understanding and predicting the behavior of variables across many diverse fields. First, the characteristics of time series data are discussed. Second, different time series modeling techniques are surveyed that can address various topics of interest to psychological researchers, including describing the pattern of change in a variable, modeling seasonal effects, assessing the immediate and long-term impact of a salient event, and forecasting future values. To illustrate these methods, an illustrative example based on online job search behavior is used throughout the paper, and a software tutorial in R for these analyses is provided in the Supplementary Materials.
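
    As a rough illustration of the kind of modeling the tutorial covers (trend, seasonality, forecasting), the sketch below fits a seasonal ARIMA model to a synthetic monthly series and produces twelve-month-ahead forecasts. The paper's own tutorial is in R; this Python/statsmodels version, the simulated job-search index, and the chosen (1,1,1)x(1,1,1,12) orders are all illustrative assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical monthly job-search index: trend + yearly seasonality + noise.
rng = np.random.default_rng(1)
t = np.arange(120)
y = pd.Series(50 + 0.3 * t + 8 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 2, 120),
              index=pd.date_range("2008-01-01", periods=120, freq="MS"))

# Seasonal ARIMA: (p, d, q) non-seasonal part, (P, D, Q, s) seasonal part.
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
forecast = model.get_forecast(steps=12)
print(forecast.predicted_mean.round(1))        # point forecasts for the next year
print(forecast.conf_int().round(1).head())     # interval forecasts
```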

  15. Postnatal brain development: Structural imaging of dynamic neurodevelopmental processes

    PubMed Central

    Jernigan, Terry L.; Baaré, William F. C.; Stiles, Joan; Madsen, Kathrine Skak

    2013-01-01

    After birth, there is striking biological and functional development of the brain’s fiber tracts as well as remodeling of cortical and subcortical structures. Behavioral development in children involves a complex and dynamic set of genetically guided processes by which neural structures interact constantly with the environment. This is a protracted process, beginning in the third week of gestation and continuing into early adulthood. Reviewed here are studies using structural imaging techniques, with a special focus on diffusion weighted imaging, describing age-related brain maturational changes in children and adolescents, as well as studies that link these changes to behavioral differences. Finally, we discuss evidence for effects on the brain of several factors that may play a role in mediating these brain–behavior associations in children, including genetic variation, behavioral interventions, and hormonal variation associated with puberty. At present longitudinal studies are few, and we do not yet know how variability in individual trajectories of biological development in specific neural systems maps onto similar variability in behavioral trajectories. PMID:21489384

  16. Phase-noise limitations in continuous-variable quantum key distribution with homodyne detection

    NASA Astrophysics Data System (ADS)

    Corvaja, Roberto

    2017-02-01

    In continuous-variable quantum key distribution with coherent states, the advantage of performing the detection with standard telecom components is counterbalanced by the lack of a stable phase reference in homodyne detection, due to the complexity of optical phase-locking circuits and to the unavoidable phase noise of lasers, which degrades the achievable secure key rate. Pilot-assisted phase-noise estimation and postdetection compensation techniques are used to implement a protocol with coherent states in which a local laser is employed that is not locked to the received signal, and a postdetection phase correction is applied. Here the reduction of the secure key rate caused by laser phase noise is evaluated analytically, for both individual and collective attacks, and a pilot-assisted phase estimation scheme is proposed, outlining the tradeoff in the system design between phase noise and spectral efficiency. The optimal modulation variance as a function of the amount of phase noise is derived.

  17. Dynamic Analysis of the Carotid-Kundalini Map

    NASA Astrophysics Data System (ADS)

    Wang, Xingyuan; Liang, Qingyong; Meng, Juan

    The nature of the fixed points of the Carotid-Kundalini (C-K) map is studied, and the boundary equation of the first bifurcation of the C-K map in the parameter plane is presented. Using quantitative criteria for chaotic systems, the paper describes how the C-K map passes from regularity to chaos, with two main conclusions: (i) chaotic patterns of the C-K map may emerge out of period-doubling bifurcations; (ii) chaotic crisis phenomena are observed. The authors also analyze the orbit of the critical point of the complex C-K map and put forward a definition of the Mandelbrot-Julia sets of the complex C-K map. They generalize Welstead and Cromer's periodic scanning technique and use it to construct a series of Mandelbrot-Julia sets of the complex C-K map. Combining the theory of analytic functions of one complex variable with computer-aided drawing as an experimental-mathematics method, they investigate the symmetry of the Mandelbrot-Julia sets, study the topological inflexibility of the distribution of the periodic regions in the Mandelbrot set, and find that the Mandelbrot set contains abundant information about the structure of the Julia sets, obtained by qualitatively mapping out the Julia sets from the Mandelbrot set.
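
    An escape-time scan of the parameter plane is the usual way such Mandelbrot-type sets are rendered. The sketch below assumes the commonly quoted Carotid-Kundalini function f(z) = cos(lambda * z * arccos(z)) as the complex iteration and an arbitrary starting point standing in for the critical point; the paper's exact map, critical orbit, and periodic-scanning refinement are not reproduced here.

```python
import cmath
import numpy as np

def ck_step(z, lam):
    """Assumed complex Carotid-Kundalini iteration: f(z) = cos(lam * z * arccos(z))."""
    return cmath.cos(lam * z * cmath.acos(z))

def escape_time(lam, z0=0.5, max_iter=40, bailout=10.0):
    """Iterations until |z| leaves the bailout disc (Mandelbrot-style scan over lam)."""
    z = complex(z0)
    for k in range(max_iter):
        try:
            z = ck_step(z, lam)
        except (OverflowError, ValueError):
            return k                      # numerical blow-up counts as escape
        if abs(z) > bailout:
            return k
    return max_iter                       # never escaped: candidate "M-set" parameter

# Coarse scan of the parameter plane; parameters that never escape sketch the
# Mandelbrot-set analogue for this map.
re = np.linspace(-2.0, 2.0, 80)
im = np.linspace(-2.0, 2.0, 80)
counts = np.array([[escape_time(complex(a, b)) for a in re] for b in im])
print((counts == 40).sum(), "of", counts.size, "parameters did not escape")
```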

  18. Running Technique is an Important Component of Running Economy and Performance

    PubMed Central

    FOLLAND, JONATHAN P.; ALLEN, SAM J.; BLACK, MATTHEW I.; HANDSAKER, JOSEPH C.; FORRESTER, STEPHANIE E.

    2017-01-01

    Despite an intuitive relationship between technique and both running economy (RE) and performance, and the diverse techniques used by runners to achieve forward locomotion, the objective importance of overall technique and the key components therein remain to be elucidated. Purpose: This study aimed to determine the relationship between individual and combined kinematic measures of technique with both RE and performance. Methods: Ninety-seven endurance runners (47 females) of diverse competitive standards performed a discontinuous protocol of incremental treadmill running (4-min stages, 1-km·h−1 increments). Measurements included three-dimensional full-body kinematics, respiratory gases to determine energy cost, and velocity of lactate turn point. Five categories of kinematic measures (vertical oscillation, braking, posture, stride parameters, and lower limb angles) and locomotory energy cost (LEc) were averaged across 10–12 km·h−1 (the highest common velocity < velocity of lactate turn point). Performance was measured as season's best (SB) time converted to a sex-specific z-score. Results: Numerous kinematic variables were correlated with RE and performance (LEc, 19 variables; SB time, 11 variables). Regression analysis found three variables (pelvis vertical oscillation during ground contact normalized to height, minimum knee joint angle during ground contact, and minimum horizontal pelvis velocity) explained 39% of LEc variability. In addition, four variables (minimum horizontal pelvis velocity, shank touchdown angle, duty factor, and trunk forward lean) combined to explain 31% of the variability in performance (SB time). Conclusions: This study provides novel and robust evidence that technique explains a substantial proportion of the variance in RE and performance. We recommend that runners and coaches are attentive to specific aspects of stride parameters and lower limb angles in part to optimize pelvis movement, and ultimately enhance performance. PMID:28263283

  19. Multivariate analysis: greater insights into complex systems

    USDA-ARS's Scientific Manuscript database

    Many agronomic researchers measure and collect multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate (MV) statistical methods encompass the simultaneous analysis of all random variables (RV) measured on each experimental or sampling ...
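
    A common first step in such a multivariate analysis is a principal component decomposition of the correlated response variables; the sketch below shows the idea on synthetic plot-level data. The variable names and the SVD-based implementation are illustrative assumptions, not taken from the manuscript.

```python
import numpy as np

# Hypothetical plot-level data: rows = experimental units, columns = response
# variables (e.g., yield, protein, biomass, canopy temperature).
rng = np.random.default_rng(2)
X = rng.normal(size=(60, 4))
X[:, 1] += 0.8 * X[:, 0]            # induce correlation between two responses

Xc = X - X.mean(axis=0)             # center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)     # share of total variance per component
scores = Xc @ Vt.T                  # unit scores on the principal components

print(np.round(explained, 2))       # first component captures the shared signal
```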

  20. Effectiveness of the Touch Math Technique in Teaching Basic Addition to Children with Autism

    ERIC Educational Resources Information Center

    Yikmis, Ahmet

    2016-01-01

    This study aims to reveal whether the touch math technique is effective in teaching basic addition to children with autism. The dependent variable of this study is the children's skills to solve addition problems correctly, whereas teaching with the touch math technique is the independent variable. Among the single-subject research models, a…

  1. Benefits of a holistic breathing technique in patients on hemodialysis.

    PubMed

    Stanley, Ruth; Leither, Thomas W; Sindelir, Cathy

    2011-01-01

    Health-related quality of life and heart rate variability are often depressed in patients on hemodialysis. This pilot program used a simple holistic, self-directed breathing technique designed to improve heart rate variability, with the hypothesis that improving heart rate variability would subsequently enhance health-related quality of life. Patient self-reported benefits included reductions in anxiety, fatigue, insomnia, and pain. Using holistic physiologic techniques may offer a unique and alternative tool for nurses to help increase health-related quality of life in patients on hemodialysis.

  2. SOFIA's Choice: Automating the Scheduling of Airborne Observations

    NASA Technical Reports Server (NTRS)

    Frank, Jeremy; Norvig, Peter (Technical Monitor)

    1999-01-01

    This paper describes the problem of scheduling observations for an airborne telescope. Given a set of prioritized observations to choose from, and a wide range of complex constraints governing legitimate choices and orderings, how can we efficiently and effectively create a valid flight plan which supports high priority observations? This problem is quite different from scheduling problems which are routinely solved automatically in industry. For instance, the problem requires making choices which lead to other choices later, and contains many interacting complex constraints over both discrete and continuous variables. Furthermore, new types of constraints may be added as the fundamental problem changes. As a result of these features, this problem cannot be solved by traditional scheduling techniques. The problem resembles other problems in NASA and industry, from observation scheduling for rovers and other science instruments to vehicle routing. The remainder of the paper is organized as follows. In Section 2 we describe the observatory in order to provide some background. In Section 3 we describe the problem of scheduling a single flight. In Section 4 we compare flight planning and other scheduling problems and argue that traditional techniques are not sufficient to solve this problem. We also mention similar complex scheduling problems which may benefit from efforts to solve this problem. In Section 5 we describe an approach for solving this problem based on research into a similar problem, that of scheduling observations for a space-borne probe. In Section 6 we discuss extensions of the flight planning problem as well as other problems which are similar to flight planning. In Section 7 we conclude and discuss future work.

  3. 3D microvascular model recapitulates the diffuse large B-cell lymphoma tumor microenvironment in vitro.

    PubMed

    Mannino, Robert G; Santiago-Miranda, Adriana N; Pradhan, Pallab; Qiu, Yongzhi; Mejias, Joscelyn C; Neelapu, Sattva S; Roy, Krishnendu; Lam, Wilbur A

    2017-01-31

    Diffuse large B-cell lymphoma (DLBCL) is an aggressive cancer that affects ∼22 000 people in the United States yearly. Understanding the complex cellular interactions of the tumor microenvironment is critical to the success and development of DLBCL treatment strategies. In vitro platforms that successfully model the complex tumor microenvironment without introducing the variability of in vivo systems are vital for understanding these interactions. To date, no such in vitro model exists that can accurately recapitulate the interactions that occur between immune cells, cancer cells, and endothelial cells in the tumor microenvironment of DLBCL. To that end, we developed a lymphoma-on-chip model consisting of a hydrogel based tumor model traversed by a vascularized, perfusable, round microchannel that successfully recapitulates key complexities and interactions of the in vivo tumor microenvironment in vitro. We have shown that the perfusion capabilities of this technique allow us to study targeted treatment strategies, as well as to model the diffusion of infused reagents spatiotemporally. Furthermore, this model employs a novel fabrication technique that utilizes common laboratory materials, and allows for the microfabrication of multiplex microvascular environments without the need for advanced microfabrication facilities. Through our facile microfabrication process, we are able to achieve micro vessels within a tumor model that are highly reliable and precise over the length of the vessel. Overall, we have developed a tool that enables researchers from many diverse disciplines to study previously inaccessible aspects of the DLBCL tumor microenvironment, with profound implications for drug delivery and design.

  4. 3D microvascular model recapitulates the diffuse large B-cell lymphoma tumor microenvironment in vitro

    PubMed Central

    Mannino, Robert G.; Santiago-Miranda, Adriana N.; Pradhan, Pallab; Qiu, Yongzhi; Mejias, Joscelyn C.; Neelapu, Sattva S.; Roy, Krishnendu; Lam, Wilbur A.

    2017-01-01

    Diffuse large B-cell lymphoma (DLBCL) is an aggressive cancer that affects ~22,000 people in the United States yearly. Understanding the complex cellular interactions of the tumor microenvironment is critical to the success and development of DLBCL treatment strategies. In vitro platforms that successfully model the complex tumor microenvironment without introducing the variability of in vivo systems are vital for understanding these interactions. To date, no such in vitro model exists that can accurately recapitulate the interactions that occur between immune cells, cancer cells, and endothelial cells in the tumor microenvironment of DLBCL. To that end, we developed a lymphoma-on-chip model consisting of a hydrogel based tumor model traversed by a vascularized, perfusable, round microchannel that successfully recapitulates key complexities and interactions of the in vivo tumor microenvironment in vitro. We have shown that the perfusion capabilities of this technique allow us to study targeted treatment strategies, as well as to model the diffusion of infused reagents spatiotemporally. Furthermore, this model employs a novel fabrication technique that utilizes common laboratory materials, and allows for the microfabrication of multiplex microvascular environments without the need for advanced microfabrication facilities. Through our facile microfabrication process, we are able to achieve micro vessels within a tumor model that are highly reliable and precise over the length of the vessel. Overall, we have developed a tool that enables researchers from many diverse disciplines to study previously inaccessible aspects of the DLBCL tumor microenvironment, with profound implications for drug delivery and design. PMID:28054086

  5. Stochastic Simulation and Forecast of Hydrologic Time Series Based on Probabilistic Chaos Expansion

    NASA Astrophysics Data System (ADS)

    Li, Z.; Ghaith, M.

    2017-12-01

    Hydrological processes are characterized by many complex features, such as nonlinearity, dynamics and uncertainty. How to quantify and address such complexities and uncertainties has been a challenging task for water engineers and managers for decades. To support robust uncertainty analysis, an innovative approach for the stochastic simulation and forecast of hydrologic time series is developed in this study. Probabilistic Chaos Expansions (PCEs) are established through probabilistic collocation to tackle uncertainties associated with the parameters of traditional hydrological models. The uncertainties are quantified in model outputs as Hermite polynomials with regard to standard normal random variables. Subsequently, multivariate analysis techniques are used to analyze the complex nonlinear relationships between meteorological inputs (e.g., temperature, precipitation, evapotranspiration, etc.) and the coefficients of the Hermite polynomials. With the established relationships between model inputs and PCE coefficients, forecasts of hydrologic time series can be generated and the uncertainties in the future time series can be further tackled. The proposed approach is demonstrated using a case study in China and is compared to a traditional stochastic simulation technique, the Markov-Chain Monte-Carlo (MCMC) method. Results show that the proposed approach can serve as a reliable proxy to complicated hydrological models. It can provide probabilistic forecasting in a more computationally efficient manner, compared to the traditional MCMC method. This work provides technical support for addressing uncertainties associated with hydrological modeling and for enhancing the reliability of hydrological modeling results. Applications of the developed approach can be extended to many other complicated geophysical and environmental modeling systems to support the associated uncertainty quantification and risk analysis.
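
    The core of the approach, expanding an uncertain model output in Hermite polynomials of a standard normal variable and fitting the coefficients at collocation points, can be sketched as follows. The toy "hydrological model", the parameter transformation, and the fourth-order truncation are assumptions for illustration only.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

# Toy "hydrological model": a smooth nonlinear response to one uncertain
# parameter xi ~ N(0, 1). Everything here (model, order, sample sizes) is
# illustrative, not the study's setup.
def model(xi):
    return 10.0 / (1.0 + np.exp(0.3 * xi))

deg = 4
xi_colloc = np.random.default_rng(3).standard_normal(200)    # collocation points
Psi = hermevander(xi_colloc, deg)                             # probabilists' Hermite basis
coeffs, *_ = np.linalg.lstsq(Psi, model(xi_colloc), rcond=None)

# The expansion is now a cheap surrogate: compare its mean with direct
# Monte Carlo evaluation of the model on fresh draws.
xi_new = np.random.default_rng(4).standard_normal(10_000)
surrogate = hermevander(xi_new, deg) @ coeffs
print(round(surrogate.mean(), 3), round(model(xi_new).mean(), 3))
```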

  6. Prediction of soil attributes through interpolators in a deglaciated environment with complex landforms

    NASA Astrophysics Data System (ADS)

    Schünemann, Adriano Luis; Inácio Fernandes Filho, Elpídio; Rocha Francelino, Marcio; Rodrigues Santos, Gérson; Thomazini, Andre; Batista Pereira, Antônio; Gonçalves Reynaud Schaefer, Carlos Ernesto

    2017-04-01

    Values of environmental variables at non-sampled sites can be estimated from a minimum data set through interpolation techniques. Kriging and the Random Forest algorithm are examples of predictors used for this purpose. The objective of this work was to compare methods for spatializing soil attributes in a recently deglaciated environment with complex landforms. Prediction of the selected soil attributes (potassium, calcium and magnesium) in ice-free areas was tested using morphometric covariates and using geostatistical models without these covariates. To this end, 106 soil samples were collected at 0-10 cm depth in Keller Peninsula, King George Island, Maritime Antarctica. Soil chemical analysis was performed by the gravimetric method, determining values of potassium, calcium and magnesium for each sampled point. Digital terrain models (DTMs) were obtained with a Terrestrial Laser Scanner and generated from a point cloud at spatial resolutions of 1, 5, 10, 20 and 30 m, from which 40 morphometric covariates were derived. Simple Kriging was performed in R. The same data set, coupled with the morphometric covariates, was used to predict values of the studied attributes at non-sampled sites with the Random Forest interpolator. Little difference was observed between the maps generated by the Simple Kriging and Random Forest interpolators, and DTMs with finer spatial resolution did not improve the quality of the soil attribute predictions. The results revealed that Simple Kriging can be used as the interpolator when morphometric covariates are not available, with little impact on quality. Further work on techniques for predicting soil chemical attributes is needed, especially in periglacial areas with complex landforms.
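
    The Random Forest side of such a comparison amounts to regressing the measured attribute on coordinates plus terrain covariates and predicting at unsampled nodes. The sketch below uses scikit-learn on synthetic data; the covariate names, sample size, and hyperparameters are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for the field data: x, y coordinates plus two terrain
# covariates (e.g., slope, elevation) predicting a soil attribute such as K.
rng = np.random.default_rng(5)
n = 106
coords = rng.uniform(0, 1000, size=(n, 2))
slope = rng.uniform(0, 30, n)
elev = rng.uniform(10, 250, n)
k_soil = 0.02 * elev - 0.5 * slope + rng.normal(0, 2, n)

X = np.column_stack([coords, slope, elev])
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, k_soil)

# Predict at an unsampled grid node from its coordinates and covariates.
print(rf.predict([[500.0, 500.0, 12.0, 120.0]]))
```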

  7. Magnetic-saturation zone model for two semipermeable cracks in magneto-electro-elastic medium

    NASA Astrophysics Data System (ADS)

    Jangid, Kamlesh

    2018-03-01

    An extension of the PS model (Gao et al. [1]) for piezoelectric materials and of the SEMPS model (Fan and Zhao [2]) for MEE materials is proposed for two semi-permeable cracks in a magneto-electro-elastic (MEE) medium. It is assumed that magnetic yielding occurs in the continuations of the cracks under the prescribed loads, and these crack continuations are modeled as zones with a cohesive saturation-limit magnetic induction. Stroh's formalism and complex variable techniques are used to formulate the problem, and closed-form analytical expressions are derived for various fracture parameters. A numerical case study is presented for a cracked BaTiO3-CoFe2O4 ceramic plate.
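
    For orientation, strip-saturation models of this kind generalize the classical Dugdale strip-yield construction, in which requiring finite fields at the end of the cohesive strip fixes the zone length. A generic form of that relation (given as context only, not the paper's closed-form expressions) is:

```latex
% Classical Dugdale-type strip-saturation relation (generic illustration):
% a = physical half-crack length, c = a + r_s = tip of the saturation strip,
% B_\infty = applied magnetic induction, B_s = its saturation (cohesive) value.
\[
  \frac{a}{c} \;=\; \cos\!\left(\frac{\pi B_\infty}{2 B_s}\right),
  \qquad
  r_s \;=\; c - a \;=\; a\!\left[\sec\!\left(\frac{\pi B_\infty}{2 B_s}\right) - 1\right].
\]
```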

  8. BioLayout(Java): versatile network visualisation of structural and functional relationships.

    PubMed

    Goldovsky, Leon; Cases, Ildefonso; Enright, Anton J; Ouzounis, Christos A

    2005-01-01

    Visualisation of biological networks is becoming a common task for the analysis of high-throughput data. These networks correspond to a wide variety of biological relationships, such as sequence similarity, metabolic pathways, gene regulatory cascades and protein interactions. We present a general approach for the representation and analysis of networks of variable type, size and complexity. The application is based on the original BioLayout program (C-language implementation of the Fruchterman-Reingold layout algorithm), entirely re-written in Java to guarantee portability across platforms. BioLayout(Java) provides broader functionality, various analysis techniques, extensions for better visualisation and a new user interface. Examples of analysis of biological networks using BioLayout(Java) are presented.
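
    The layout step can be reproduced with any Fruchterman-Reingold implementation; the sketch below uses networkx, whose spring_layout is such an implementation, on a toy weighted similarity graph. The example graph and styling are assumptions and do not use BioLayout(Java) itself.

```python
import networkx as nx
import matplotlib.pyplot as plt

# Toy similarity network: nodes are proteins, weighted edges are pairwise scores.
edges = [("A", "B", 0.9), ("B", "C", 0.7), ("A", "C", 0.8),
         ("D", "E", 0.95), ("C", "D", 0.2)]
G = nx.Graph()
G.add_weighted_edges_from(edges)

# spring_layout is networkx's Fruchterman-Reingold force-directed layout,
# the same family of algorithm used by BioLayout.
pos = nx.spring_layout(G, weight="weight", seed=42)
nx.draw_networkx(G, pos, node_color="lightblue")
plt.savefig("network_layout.png")
```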

  9. Paragenesis and Geochronology of the Nopal I Uranium Deposit, Mexico

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    M. Fayek; M. Ren

    2007-02-14

    Uranium deposits can, by analogy, provide important information on the long-term performance of radioactive waste forms and radioactive waste repositories. Their complex mineralogy and variable elemental and isotopic compositions can provide important information, provided that analyses are obtained on the scale of several micrometers. Here, we present a structural model of the Nopal I deposit as well as petrography at the nanoscale coupled with preliminary U-Th-Pb ages and O isotopic compositions of uranium-rich minerals obtained by Secondary Ion Mass Spectrometry (SIMS). This multi-technique approach promises to provide "natural system" data on the corrosion rate of uraninite, the natural analogue of spent nuclear fuel.

  10. Postliver transplantation vascular and biliary surgical anatomy.

    PubMed

    Saad, Wael E A; Orloff, Mark C; Davies, Mark G; Waldman, David L; Bozorgzadeh, Adel

    2007-09-01

    Imaging and management of postliver transplantation complications require an understanding of the surgical anatomy of liver transplantation. There are several methods of liver transplantation. Furthermore, liver transplantation is a complex surgery with numerous variables in its 4 anastomoses: (1) arterial anastomosis, (2) venous inflow (portal venous) anastomosis, (3) venous outflow (hepatic vein, inferior vena cava, or both) anastomosis, and (4) biliary/biliary-enteric anastomosis. The aim of this chapter is to introduce the principles of liver transplant surgical anatomy based on anastomotic anatomy. With radiologists as the target readers, the chapter focuses on the inflow and outflow connections and does not detail intricate surgical techniques or intraoperative maneuvers, operative stages, or vascular shunting.

  11. A cubic extended interior penalty function for structural optimization

    NASA Technical Reports Server (NTRS)

    Prasad, B.; Haftka, R. T.

    1979-01-01

    This paper describes an optimization procedure for the minimum weight design of complex structures. The procedure is based on a new cubic extended interior penalty function (CEIPF) used with the sequence of unconstrained minimization technique (SUMT) and Newton's method. The Hessian matrix of the penalty function is approximated using only constraints and their derivatives. The CEIPF is designed to minimize the error in the approximation of the Hessian matrix, and as a result the number of structural analyses required is small and independent of the number of design variables. Three example problems are reported. The number of structural analyses is reduced by as much as 50 per cent below previously reported results.
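
    The SUMT pattern behind the method, minimizing the objective plus a penalty that grows near the constraint boundaries and is smoothly extended into the infeasible region, then shrinking the penalty multiplier, can be sketched as below. The sketch uses the classical linear extended interior penalty rather than the paper's cubic form, and the toy sizing problem, tolerances, and schedule are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Generic SUMT with a *linear* extended interior penalty (the cubic variant
# follows the same pattern with a cubic extension past g = -eps).
# Constraint convention: g_i(x) <= 0 means feasible.
def extended_penalty(g, eps):
    g = np.asarray(g, float)
    interior = -1.0 / np.minimum(g, -eps)          # classic -1/g inside the feasible region
    extension = (2.0 * eps + g) / eps**2           # C1-continuous extension beyond g = -eps
    return np.where(g <= -eps, interior, extension).sum()

def sumt(f, g, x0, r=1.0, eps=0.1, shrink=0.3, cycles=6):
    x = np.asarray(x0, float)
    for _ in range(cycles):
        pseudo = lambda x: f(x) + r * extended_penalty(g(x), eps)
        x = minimize(pseudo, x, method="BFGS").x   # unconstrained minimization stage
        r *= shrink                                # tighten the penalty each SUMT cycle
    return x

# Toy sizing problem: minimize "weight" x1 + x2 subject to a "stress" limit
# x1 * x2 >= 1, i.e. g(x) = 1 - x1*x2 <= 0.
f = lambda x: x[0] + x[1]
g = lambda x: np.array([1.0 - x[0] * x[1]])
print(sumt(f, g, x0=[2.0, 2.0]))                   # approaches x1 = x2 = 1
```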

  12. Statistical and Biophysical Models for Predicting Total and Outdoor Water Use in Los Angeles

    NASA Astrophysics Data System (ADS)

    Mini, C.; Hogue, T. S.; Pincetl, S.

    2012-04-01

    Modeling water demand is a complex exercise in the choice of the functional form, techniques and variables to integrate in the model. The goal of the current research is to identify the determinants that control total and outdoor residential water use in semi-arid cities and to utilize that information in the development of statistical and biophysical models that can forecast spatial and temporal urban water use. The City of Los Angeles is unique in its highly diverse socio-demographic, economic and cultural characteristics across neighborhoods, which introduces significant challenges in modeling water use. Increasing climate variability also contributes to uncertainties in water use predictions in urban areas. Monthly individual water use records were acquired from the Los Angeles Department of Water and Power (LADWP) for the 2000 to 2010 period. Study predictors of residential water use include socio-demographic, economic, climate and landscaping variables at the zip code level collected from the US Census database. Climate variables are estimated from ground-based observations and calculated at the centroid of each zip code by the inverse-distance weighting method. Remotely-sensed products of vegetation biomass and landscape land cover are also utilized. Two linear regression models were developed based on the panel data and variables described: a pooled-OLS regression model and a linear mixed effects model. Both models show income per capita and the percentage of landscape areas in each zip code to be statistically significant predictors. The pooled-OLS model tends to over-estimate water use in higher-use zip codes, and both models provide similar RMSE values. Outdoor water use was estimated at the census tract level as the residual between total water use and indoor use. This residual is being compared with the output from a biophysical model including tree and grass cover areas, climate variables and estimates of evapotranspiration at very high spatial resolution. A genetic algorithm based model (Shuffled Complex Evolution-UA; SCE-UA) is also being developed to provide estimates of the prediction and parameter uncertainties and to compare against the linear regression models. Ultimately, models will be selected to undertake predictions for a range of climate change and landscape scenarios. Finally, project results will contribute to a better understanding of water demand to help predict future water use and implement targeted landscaping conservation programs to maintain sustainable water needs for a growing population under uncertain climate variability.
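
    The two regression specifications can be sketched with statsmodels: a pooled OLS fit and a mixed-effects fit with a random intercept per zip code. All variable names, the synthetic panel, and the model formulas below are illustrative placeholders, not the study's actual data or specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the zip-code panel: monthly water use with income,
# landscape share, and temperature as predictors (names are illustrative).
rng = np.random.default_rng(6)
zips, months = 40, 24
df = pd.DataFrame({
    "zipcode": np.repeat(np.arange(zips), months),
    "income": np.repeat(rng.normal(60, 15, zips), months),
    "landscape_pct": np.repeat(rng.uniform(10, 60, zips), months),
    "temp": np.tile(20 + 8 * np.sin(2 * np.pi * np.arange(months) / 12), zips),
})
df["water_use"] = (0.4 * df.income + 0.8 * df.landscape_pct
                   + 1.5 * df.temp + rng.normal(0, 5, len(df)))

# Pooled OLS: one set of coefficients for all zip codes.
pooled = smf.ols("water_use ~ income + landscape_pct + temp", df).fit()

# Linear mixed-effects model: random intercept per zip code.
mixed = smf.mixedlm("water_use ~ income + landscape_pct + temp",
                    df, groups=df["zipcode"]).fit()
print(pooled.params.round(2))
print(mixed.params.round(2))
```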

  13. Reducing beam shaper alignment complexity: diagnostic techniques for alignment and tuning

    NASA Astrophysics Data System (ADS)

    Lizotte, Todd E.

    2011-10-01

    Safe and efficient optical alignment is a critical requirement for industrial laser systems used in a high-volume manufacturing environment. Of specific interest is the development of techniques to align beam shaping optics within a beam line, with the ability to instantly verify, by qualitative means, that each element is in its proper position as the beam shaper module is being aligned. There is a need to simplify these alignment techniques to the point where even a newcomer to optical alignment can complete the task. Because most laser system manufacturers ship their products worldwide, which introduces a new set of variables including cultural and language barriers, this is a top priority for manufacturers. Tools and methodologies for the alignment of complex optical systems need to cross these barriers to ensure the highest degree of uptime and to reduce the cost of maintenance on the production floor. Customers worldwide who purchase production laser equipment understand that system maintenance accounts for the majority of a manufacturing facility's costs and is typically the largest single controllable expenditure in a production plant. This desire to reduce costs is driving the current trend toward predictive and proactive, rather than reactive, maintenance of laser-based optical beam delivery systems [10]. With proper diagnostic tools, laser system developers can devise proactive approaches to reduce system downtime, safeguard operational performance, and reduce premature or catastrophic optics failures. Analytical data provide quantifiable performance standards that are more precise than qualitative standards, but each has a role in determining overall optical system performance [10]. This paper discusses the use of film and fluorescent mirror devices as diagnostic tools for beam shaper module alignment, offline or in situ. It also provides an overview methodology showing how complex alignment directions can be reduced to a simplified set of instructions for layman service engineers.

  14. MEASURING LENSING MAGNIFICATION OF QUASARS BY LARGE SCALE STRUCTURE USING THE VARIABILITY-LUMINOSITY RELATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauer, Anne H.; Seitz, Stella; Jerke, Jonathan

    2011-05-10

    We introduce a technique to measure gravitational lensing magnification using the variability of type I quasars. Quasars' variability amplitudes and luminosities are tightly correlated, on average. Magnification due to gravitational lensing increases the quasars' apparent luminosity, while leaving the variability amplitude unchanged. Therefore, the mean magnification of an ensemble of quasars can be measured through the mean shift in the variability-luminosity relation. As a proof of principle, we use this technique to measure the magnification of quasars spectroscopically identified in the Sloan Digital Sky Survey (SDSS), due to gravitational lensing by galaxy clusters in the SDSS MaxBCG catalog. The Palomar-QUEST Variability Survey, reduced using the DeepSky pipeline, provides variability data for the sources. We measure the average quasar magnification as a function of scaled distance (r/R200) from the nearest cluster; our measurements are consistent with expectations assuming Navarro-Frenk-White cluster profiles, particularly after accounting for the known uncertainty in the clusters' centers. Variability-based lensing measurements are a valuable complement to shape-based techniques because their systematic errors are very different, and also because the variability measurements are amenable to photometric errors of a few percent and to depths seen in current wide-field surveys. Given the volume of data expected from current and upcoming surveys, this new technique has the potential to be competitive with weak lensing shear measurements of large-scale structure.

  15. Sizing of patent ductus arteriosus in adults for transcatheter closure using the balloon pull-through technique.

    PubMed

    Shafi, Nabil A; Singh, Gagan D; Smith, Thomas W; Rogers, Jason H

    2018-05-01

    To describe a novel balloon sizing technique used during adult transcatheter patent ductus arteriosus (PDA) closure. In addition, to determine the clinical and procedural outcomes in six patients who underwent PDA balloon sizing with subsequent deployment of a PDA occluder device. Transcatheter PDA closure in adults has excellent safety and procedural outcomes. However, PDA sizing in adults can be challenging due to variable defect size, high flow state, or anatomical complexity. We describe a series of six cases where the balloon pull-through technique was successfully performed for PDA sizing prior to transcatheter closure. Consecutive adult patients undergoing adult PDA closure at our institution were studied retrospectively. A partially inflated sizing balloon was pulled through the defect from the aorta into the pulmonary artery and the balloon waist diameter was measured. Procedural success and clinical outcomes were obtained. Six adult patients underwent successful balloon pull-through technique for PDA sizing during transcatheter PDA closure, since conventional angiography often gave suboptimal opacification of the defect. All PDAs were treated with closure devices based on balloon PDA sizing with complete closure and no complications. In three patients who underwent preprocedure computed tomography, the balloon size matched the CT-derived measurements. The balloon pull-through technique for PDA sizing is a safe and accurate sizing modality in adults undergoing transcatheter PDA closure. © 2017 Wiley Periodicals, Inc.

  16. A constrained joint source/channel coder design and vector quantization of nonstationary sources

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Y. C.; Nori, S.; Araj, A.

    1993-01-01

    The emergence of broadband ISDN as the network for the future brings with it the promise of integration of all proposed services in a flexible environment. In order to achieve this flexibility, asynchronous transfer mode (ATM) has been proposed as the transfer technique. During this period a study was conducted on the bridging of network transmission performance and video coding. The successful transmission of variable bit rate video over ATM networks relies on the interaction between the video coding algorithm and the ATM networks. Two aspects of networks that determine the efficiency of video transmission are the resource allocation algorithm and the congestion control algorithm. These are explained in this report. Vector quantization (VQ) is one of the more popular compression techniques to appear in the last twenty years. Numerous compression techniques, which incorporate VQ, have been proposed. While the LBG VQ provides excellent compression, there are also several drawbacks to the use of the LBG quantizers including search complexity and memory requirements, and a mismatch between the codebook and the inputs. The latter mainly stems from the fact that the VQ is generally designed for a specific rate and a specific class of inputs. In this work, an adaptive technique is proposed for vector quantization of images and video sequences. This technique is an extension of the recursively indexed scalar quantization (RISQ) algorithm.
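
    The LBG design loop itself, split every codeword and then run Lloyd iterations until the distortion stops improving, can be sketched as follows. This is a minimal generic LBG, not the adaptive recursively indexed extension developed in the report, and the toy training vectors are assumptions.

```python
import numpy as np

def lbg(train, codebook_size, eps=1e-3, perturb=0.01):
    """Generalized Lloyd / LBG codebook design by repeated splitting."""
    codebook = train.mean(axis=0, keepdims=True)             # start from the centroid
    while len(codebook) < codebook_size:
        codebook = np.vstack([codebook * (1 + perturb),      # split every codeword
                              codebook * (1 - perturb)])
        prev = np.inf
        while True:                                          # Lloyd iterations
            d = ((train[:, None, :] - codebook[None]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)
            dist = d[np.arange(len(train)), nearest].mean()
            for j in range(len(codebook)):                   # update non-empty cells
                members = train[nearest == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
            if prev - dist < eps * dist:
                break
            prev = dist
    return codebook

# Toy example: quantize 2-D "image block" vectors with a 4-entry codebook.
rng = np.random.default_rng(7)
blocks = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0, 2, 4, 6)])
print(lbg(blocks, 4).round(2))
```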

  17. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

    PubMed Central

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enables applicability of the RPU approach to a large class of neural network architectures. PMID:29066942
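
    The mapping of a convolutional layer onto a fully connected array rests on the standard im2col rearrangement: unroll input patches into columns so the layer becomes one weight matrix applied to all patches at once. The sketch below illustrates that rearrangement in plain numpy; the shapes and sizes are arbitrary, and nothing here models the analog noise or bound management discussed in the paper.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll (C, H, W) input patches into columns so a stride-1, no-padding
    convolution (cross-correlation) becomes a single matrix multiply -- the
    mapping that lets a conv layer run on a fully connected crossbar array."""
    C, H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((C * kh * kw, out_h * out_w))
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            cols[:, idx] = x[:, i:i + kh, j:j + kw].ravel()
            idx += 1
    return cols, (out_h, out_w)

# A layer with F filters of shape (C, kh, kw) becomes one weight matrix of
# shape (F, C*kh*kw) -- i.e. one array -- applied to all patch columns.
rng = np.random.default_rng(8)
x = rng.standard_normal((3, 8, 8))            # C=3 input feature maps
W = rng.standard_normal((16, 3 * 3 * 3))      # F=16 filters, 3x3 kernels
cols, (oh, ow) = im2col(x, 3, 3)
y = (W @ cols).reshape(16, oh, ow)            # output feature maps
print(y.shape)                                # (16, 6, 6)
```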

  18. The application of absolute quantitative (1)H NMR spectroscopy in drug discovery and development.

    PubMed

    Singh, Suruchi; Roy, Raja

    2016-07-01

    The identification of a drug candidate and its structural determination is the most important step in the process of drug discovery, and for this, nuclear magnetic resonance (NMR) is one of the most selective analytical techniques. The present review illustrates the various perspectives of absolute quantitative (1)H NMR spectroscopy in drug discovery and development. It deals with the fundamentals of quantitative NMR (qNMR), the physiochemical properties affecting qNMR, and the latest referencing techniques used for quantification. The precise application of qNMR during various stages of drug discovery and development, namely natural product research, drug quantitation in dosage forms, drug metabolism studies, impurity profiling and solubility measurements, is elaborated. To achieve this, the authors explore the literature of NMR in drug discovery and development between 1963 and 2015, taking into account several other reviews on the subject. qNMR experiments are used for drug discovery and development processes because qNMR is a non-destructive, versatile and robust technique with high intra- and interpersonal variability. However, there are also several limitations: qNMR of complex biological samples is hampered by peak overlap and a low limit of quantification, which can be overcome by using hyphenated chromatographic techniques in addition to NMR.
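
    For orientation, absolute quantification against an internal standard is usually written in the following generic form (given here for context, not reproduced from the review): with I the integrated signal area, N the number of contributing nuclei, M the molar mass, m the weighed mass, and P the purity,

```latex
% Standard internal-standard qNMR purity relation (generic form);
% subscript "std" denotes the internal standard.
\[
  P_{\mathrm{analyte}}
  = \frac{I_{\mathrm{analyte}}}{I_{\mathrm{std}}}\cdot
    \frac{N_{\mathrm{std}}}{N_{\mathrm{analyte}}}\cdot
    \frac{M_{\mathrm{analyte}}}{M_{\mathrm{std}}}\cdot
    \frac{m_{\mathrm{std}}}{m_{\mathrm{analyte}}}\cdot
    P_{\mathrm{std}}
\]
```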

  19. Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices.

    PubMed

    Gokmen, Tayfun; Onen, Murat; Haensch, Wilfried

    2017-01-01

    In a previous work we have detailed the requirements for obtaining maximal deep learning performance benefit by implementing fully connected deep neural networks (DNN) in the form of arrays of resistive devices. Here we extend the concept of Resistive Processing Unit (RPU) devices to convolutional neural networks (CNNs). We show how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm. We find that the noise and bound limitations imposed by the analog nature of the computations performed on the arrays significantly affect the training accuracy of the CNNs. Noise and bound management techniques are presented that mitigate these problems without introducing any additional complexity in the analog circuits and that can be addressed by the digital circuits. In addition, we discuss digitally programmable update management and device variability reduction techniques that can be used selectively for some of the layers in a CNN. We show that a combination of all those techniques enables a successful application of the RPU concept for training CNNs. The techniques discussed here are more general and can be applied beyond CNN architectures and therefore enables applicability of the RPU approach to a large class of neural network architectures.

  20. Effects of head-down bed rest on complex heart rate variability: Response to LBNP testing

    NASA Technical Reports Server (NTRS)

    Goldberger, Ary L.; Mietus, Joseph E.; Rigney, David R.; Wood, Margie L.; Fortney, Suzanne M.

    1994-01-01

    Head-down bed rest is used to model physiological changes during spaceflight. We postulated that bed rest would decrease the degree of complex physiological heart rate variability. We analyzed continuous heart rate data from digitized Holter recordings in eight healthy female volunteers (age 28-34 yr) who underwent a 13-day 6 deg head-down bed rest study with serial lower body negative pressure (LBNP) trials. Heart rate variability was measured on 4-min data sets using conventional time and frequency domain measures as well as with a new measure of signal 'complexity' (approximate entropy). Data were obtained pre-bed rest (control), during bed rest (day 4 and day 9 or 11), and 2 days post-bed rest (recovery). Tolerance to LBNP was significantly reduced on both bed rest days vs. pre-bed rest. Heart rate variability was assessed at peak LBNP. Heart rate approximate entropy was significantly decreased at day 4 and day 9 or 11, returning toward normal during recovery. Heart rate standard deviation and the ratio of high- to low-frequency power did not change significantly. We conclude that short-term bed rest is associated with a decrease in the complex variability of heart rate during LBNP testing in healthy young adult women. Measurement of heart rate complexity, using a method derived from nonlinear dynamics ('chaos theory'), may provide a sensitive marker of this loss of physiological variability, complementing conventional time and frequency domain statistical measures.
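
    Approximate entropy itself is straightforward to compute; the sketch below follows the standard Pincus formulation (embedding dimension m, tolerance r as a fraction of the series standard deviation, self-matches included). The parameter choices and the toy regular-versus-noisy comparison are illustrative and not the study's exact settings.

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Pincus approximate entropy ApEn(m, r) of a 1-D series."""
    x = np.asarray(x, float)
    r = r_factor * x.std()

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between every pair of m-length templates
        dist = np.abs(templates[:, None, :] - templates[None, :, :]).max(-1)
        counts = (dist <= r).sum(axis=1) / n     # self-matches included, as in Pincus
        return np.log(counts).mean()

    return phi(m) - phi(m + 1)

# Regular (sine) vs irregular (white noise) heart-rate-like series:
t = np.arange(300)
print(approximate_entropy(np.sin(0.3 * t)))                             # low ApEn
print(approximate_entropy(np.random.default_rng(9).normal(size=300)))   # higher ApEn
```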
