Sample records for multi-model combination techniques

  1. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ajami, N K; Duan, Q; Gao, X

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
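
    The two simplest schemes above are easy to make concrete. Below is a minimal sketch of the Simple Multi-model Average and a least-squares variant of the Weighted Average Method; this is illustrative only, the exact DMIP formulations (and the bias-correction steps of MMSE/M3SE) are not reproduced, and the usage data are synthetic.

    ```python
    import numpy as np

    def simple_multimodel_average(preds):
        """SMA: unweighted mean across the model axis.

        preds: (n_models, n_times) array of individual model forecasts.
        """
        return preds.mean(axis=0)

    def weighted_average_combination(preds, obs):
        """Weights fitted by ordinary least squares against observations,
        in the spirit of the Weighted Average Method (WAM)."""
        w, *_ = np.linalg.lstsq(preds.T, obs, rcond=None)
        return w, preds.T @ w

    # toy usage: three synthetic "models" of one true signal
    rng = np.random.default_rng(0)
    truth = np.sin(np.linspace(0, 6, 200))
    models = np.stack([truth + rng.normal(0, s, 200) for s in (0.1, 0.3, 0.5)])
    w, combined = weighted_average_combination(models, truth)
    print("fitted weights:", np.round(w, 2))  # best model gets most weight
    ```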

  2. Risk assessments using the Strain Index and the TLV for HAL, Part II: Multi-task jobs and prevalence of CTS.

    PubMed

    Kapellusch, Jay M; Silverstein, Barbara A; Bao, Stephen S; Thiese, Mathew S; Merryweather, Andrew S; Hegmann, Kurt T; Garg, Arun

    2018-02-01

    The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) threshold limit value for hand activity level (TLV for HAL) have been shown to be associated with prevalence of distal upper-limb musculoskeletal disorders such as carpal tunnel syndrome (CTS). The SI and TLV for HAL disagree on more than half of task exposure classifications. Similarly, time-weighted average (TWA), peak, and typical exposure techniques used to quantify physical exposure from multi-task jobs have shown between-technique agreement ranging from 61% to 93%, depending upon whether the SI or TLV for HAL model was used. This study compared exposure-response relationships between each model-technique combination and prevalence of CTS. Physical exposure data from 1,834 workers (710 with multi-task jobs) were analyzed using the SI and TLV for HAL and the TWA, typical, and peak multi-task job exposure techniques. Additionally, exposure classifications from the SI and TLV for HAL were combined into a single measure and evaluated. Prevalent CTS cases were identified using symptoms and nerve-conduction studies. Mixed effects logistic regression was used to quantify exposure-response relationships between categorized (i.e., low, medium, and high) physical exposure and CTS prevalence for all model-technique combinations, and for multi-task workers, mono-task workers, and all workers combined. Except for TWA TLV for HAL, all model-technique combinations showed monotonic increases in risk of CTS with increased physical exposure. The combined-models approach showed stronger association than the SI or TLV for HAL for multi-task workers. Despite differences in exposure classifications, nearly all model-technique combinations showed exposure-response relationships with prevalence of CTS for the combined sample of mono-task and multi-task workers. Both the TLV for HAL and the SI, with the TWA or typical techniques, appear useful for epidemiological studies and surveillance. However, the utility of TWA, typical, and peak techniques for job design and intervention is dubious.

  3. Time series forecasting using ERNN and QR based on Bayesian model averaging

    NASA Astrophysics Data System (ADS)

    Pwasong, Augustine; Sathasivam, Saratha

    2017-08-01

    The Bayesian model averaging technique is a multi-model combination technique. It was employed to amalgamate the Elman recurrent neural network (ERNN) technique with the quadratic regression (QR) technique, producing a hybrid known as the hybrid ERNN-QR technique. The forecasting potential of the hybrid technique is compared with the forecasting capabilities of the individual ERNN and QR techniques. The outcome revealed that the hybrid technique is superior to the individual techniques in the mean square error sense.
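
    A common shortcut for the BMA weights is the BIC approximation p(M_k | D) ∝ exp(-BIC_k / 2); the sketch below combines two forecasts this way. It assumes Gaussian errors, and the paper's actual BMA scheme (for example, EM-fitted mixture weights) may differ; all names are illustrative.

    ```python
    import numpy as np

    def bma_weights_bic(residuals_list, n_params_list, n):
        """Approximate posterior model weights via BIC:
        p(M_k | D) proportional to exp(-BIC_k / 2), assuming Gaussian errors."""
        bics = []
        for r, k in zip(residuals_list, n_params_list):
            sigma2 = np.mean(np.asarray(r) ** 2)          # MLE error variance
            loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            bics.append(k * np.log(n) - 2 * loglik)
        bics = np.array(bics)
        w = np.exp(-0.5 * (bics - bics.min()))            # stabilised softmax
        return w / w.sum()

    # usage sketch: combine an "ERNN" and a "QR" forecast with BMA weights
    # w = bma_weights_bic([res_ernn, res_qr], [20, 3], n=len(y))
    # y_bma = w[0] * f_ernn + w[1] * f_qr
    ```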

  4. Multi-scale modeling in cell biology

    PubMed Central

    Meier-Schellersheim, Martin; Fraser, Iain D. C.; Klauschen, Frederick

    2009-01-01

    Biomedical research frequently involves performing experiments and developing hypotheses that link different scales of biological systems such as, for instance, the scales of intracellular molecular interactions to the scale of cellular behavior and beyond to the behavior of cell populations. Computational modeling efforts that aim at exploring such multi-scale systems quantitatively with the help of simulations have to incorporate several different simulation techniques due to the different time and space scales involved. Here, we provide a non-technical overview of how different scales of experimental research can be combined with the appropriate computational modeling techniques. We also show that current modeling software permits building and simulating multi-scale models without having to become involved with the underlying technical details of computational modeling. PMID:20448808

  5. MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS*

    PubMed Central

    CHAHINE, Georges L.; HSIAO, Chao-Tsung

    2012-01-01

    Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary and avoid deleterious effects requires advanced knowledge, which can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to study the dynamics combining viscous-inviscid effects, liquid and structure dynamics, and multi-bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models. PMID:22833696

  6. Multi-time-step ahead daily and hourly intermittent reservoir inflow prediction by artificial intelligent techniques using lumped and distributed data

    NASA Astrophysics Data System (ADS)

    Jothiprakash, V.; Magar, R. B.

    2012-07-01

    In this study, artificial intelligence (AI) techniques such as artificial neural networks (ANN), the adaptive neuro-fuzzy inference system (ANFIS) and linear genetic programming (LGP) are used to predict daily and hourly multi-time-step-ahead intermittent reservoir inflow. To illustrate the applicability of AI techniques, the intermittent Koyna river watershed in Maharashtra, India is chosen as a case study. Based on the observed daily and hourly rainfall and reservoir inflow, various types of time-series, cause-effect and combined models are developed with lumped and distributed input data, and model performance is evaluated using various performance criteria. The results show that the LGP models are superior to the ANN and ANFIS models, especially in predicting peak inflows at both daily and hourly time-steps. A detailed comparison of overall performance indicated that the combined input model (combination of rainfall and inflow) performed better with both lumped and distributed input data. The lumped input data models performed slightly better, owing to reduced noise in the data, the better techniques and their training approach, appropriate selection of network architecture, required inputs, and the training-testing ratios of the data set. The slightly poorer performance of the distributed data is due to larger variations and a smaller number of observed values.

  7. A three-dimensional muscle activity imaging technique for assessing pelvic muscle function

    NASA Astrophysics Data System (ADS)

    Zhang, Yingchun; Wang, Dan; Timm, Gerald W.

    2010-11-01

    A novel multi-channel surface electromyography (EMG)-based three-dimensional muscle activity imaging (MAI) technique has been developed by combining the bioelectrical source reconstruction approach and subject-specific finite element modeling approach. Internal muscle activities are modeled by a current density distribution and estimated from the intra-vaginal surface EMG signals with the aid of a weighted minimum norm estimation algorithm. The MAI technique was employed to minimally invasively reconstruct electrical activity in the pelvic floor muscles and urethral sphincter from multi-channel intra-vaginal surface EMG recordings. A series of computer simulations were conducted to evaluate the performance of the present MAI technique. With appropriate numerical modeling and inverse estimation techniques, we have demonstrated the capability of the MAI technique to accurately reconstruct internal muscle activities from surface EMG recordings. This MAI technique combined with traditional EMG signal analysis techniques is being used to study etiologic factors associated with stress urinary incontinence in women by correlating functional status of muscles characterized from the intra-vaginal surface EMG measurements with the specific pelvic muscle groups that generated these signals. The developed MAI technique described herein holds promise for eliminating the need to place needle electrodes into muscles to obtain accurate EMG recordings in some clinical applications.
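
    The weighted minimum-norm step reduces to a regularized linear inverse solve. A minimal sketch, assuming a lead-field matrix precomputed from the subject-specific finite element model; the Tikhonov parameter and weighting below are illustrative, not the paper's exact algorithm.

    ```python
    import numpy as np

    def weighted_minimum_norm(L, b, w, lam=1e-2):
        """Weighted minimum-norm estimate of a current density distribution.

        L   : (n_sensors, n_sources) lead-field matrix from the FE model
        b   : (n_sensors,) surface EMG measurements
        w   : (n_sources,) positive source weights (e.g. depth compensation)
        lam : Tikhonov regularization parameter

        Returns x = W L^T (L W L^T + lam*I)^{-1} b with W = diag(w).
        """
        W = np.diag(w)
        G = L @ W @ L.T
        return W @ L.T @ np.linalg.solve(G + lam * np.eye(L.shape[0]), b)
    ```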

  8. A Bayesian alternative for multi-objective ecohydrological model specification

    NASA Astrophysics Data System (ADS)

    Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori

    2018-01-01

    Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphasis defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multiobjective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.
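
    The idea of priors playing the role of multi-objective weights can be made concrete with a joint Gaussian log-likelihood over the two objectives, where the error standard deviations control the relative emphasis. A sketch assuming independent Gaussian errors; the study's actual error model may differ. Added to log-priors inside any MCMC sampler, this gives the posterior.

    ```python
    import numpy as np

    def joint_log_likelihood(q_sim, q_obs, lai_sim, lai_obs, sig_q, sig_lai):
        """Gaussian log-likelihood combining streamflow and LAI residuals.
        The relative emphasis of the two objectives enters through the error
        standard deviations sig_q and sig_lai, whose prior distributions play
        the role of the ad-hoc weights of Pareto-style calibration."""
        def ll(sim, obs, sig):
            r = np.asarray(obs) - np.asarray(sim)
            return (-0.5 * len(r) * np.log(2 * np.pi * sig ** 2)
                    - 0.5 * np.sum(r ** 2) / sig ** 2)
        return ll(q_sim, q_obs, sig_q) + ll(lai_sim, lai_obs, sig_lai)
    ```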

  9. Modeling and Assessment of GPS/BDS Combined Precise Point Positioning.

    PubMed

    Chen, Junping; Wang, Jungang; Zhang, Yize; Yang, Sainan; Chen, Qian; Gong, Xiuqiang

    2016-07-22

    The Precise Point Positioning (PPP) technique enables stand-alone receivers to obtain cm-level positioning accuracy. Observations from multi-GNSS systems can augment users with improved positioning accuracy, reliability and availability. In this paper, we present and evaluate GPS/BDS combined PPP models, including the traditional model and a simplified model, in which the inter-system bias (ISB) is treated in different ways. To evaluate the performance of combined GPS/BDS PPP, kinematic and static PPP positions are compared to the IGS daily estimates, using 1 month of GPS/BDS data from 11 IGS Multi-GNSS Experiment (MGEX) stations. The results indicate apparent improvement of GPS/BDS combined PPP solutions in both static and kinematic cases, with much smaller standard deviations in the magnitude distribution of the coordinate RMS statistics. Comparisons between the traditional and simplified combined PPP models show no difference in coordinate estimations, and the inter-system biases between GPS and BDS are assimilated into the receiver clock, ambiguities and pseudo-range residuals accordingly.

  10. Discovering the Sequential Structure of Thought

    ERIC Educational Resources Information Center

    Anderson, John R.; Fincham, Jon M.

    2014-01-01

    Multi-voxel pattern recognition techniques combined with Hidden Markov models can be used to discover the mental states that people go through in performing a task. The combined method identifies both the mental states and how their durations vary with experimental conditions. We apply this method to a task where participants solve novel…

  11. Conservative strategy-based ensemble surrogate model for optimal groundwater remediation design at DNAPLs-contaminated sites

    NASA Astrophysics Data System (ADS)

    Ouyang, Qi; Lu, Wenxi; Lin, Jin; Deng, Wenbing; Cheng, Weiguo

    2017-08-01

    The surrogate-based simulation-optimization techniques are frequently used for optimal groundwater remediation design. When this technique is used, surrogate errors caused by surrogate-modeling uncertainty may lead to generation of infeasible designs. In this paper, a conservative strategy that pushes the optimal design into the feasible region was used to address surrogate-modeling uncertainty. In addition, chance-constrained programming (CCP) was adopted to compare with the conservative strategy in addressing this uncertainty. Three methods, multi-gene genetic programming (MGGP), Kriging (KRG) and support vector regression (SVR), were used to construct surrogate models for a time-consuming multi-phase flow model. To improve the performance of the surrogate model, ensemble surrogates were constructed based on combinations of different stand-alone surrogate models. The results show that: (1) the surrogate-modeling uncertainty was successfully addressed by the conservative strategy, which means that this method is promising for addressing surrogate-modeling uncertainty. (2) The ensemble surrogate model that combines MGGP with KRG showed the most favorable performance, which indicates that this ensemble surrogate can utilize both stand-alone surrogate models to improve the performance of the surrogate model.
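
    One common heuristic for such an ensemble surrogate is to weight the stand-alone surrogates by their inverse cross-validation error; a sketch under that assumption, not necessarily the combination rule used in the paper.

    ```python
    import numpy as np

    def ensemble_surrogate(predictions, cv_rmse):
        """Combine stand-alone surrogates (e.g. MGGP, Kriging, SVR) with
        weights inversely proportional to their cross-validation RMSE.

        predictions: (n_surrogates, n_points) surrogate outputs
        cv_rmse    : (n_surrogates,) cross-validation errors
        """
        w = 1.0 / np.asarray(cv_rmse, float)
        w /= w.sum()                       # weights sum to one
        return w @ np.asarray(predictions)
    ```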

  12. Vibration and acoustic frequency spectra for industrial process modeling using selective fusion multi-condition samples and multi-source features

    NASA Astrophysics Data System (ADS)

    Tang, Jian; Qiao, Junfei; Wu, ZhiWei; Chai, Tianyou; Zhang, Jian; Yu, Wen

    2018-01-01

    Frequency spectral data of mechanical vibration and acoustic signals relate to difficult-to-measure production quality and quantity parameters of complex industrial processes. A selective ensemble (SEN) algorithm can be used to build a soft sensor model of these process parameters by selectively fusing valued information from different perspectives. However, a combination of several optimized ensemble sub-models with SEN cannot guarantee the best prediction model. In this study, we construct a data-driven model of industrial process parameters from mechanical vibration and acoustic frequency spectra, based on selective fusion of multi-condition samples and multi-source features. A multi-layer SEN (MLSEN) strategy is used to simulate the domain expert's cognitive process. A genetic algorithm and kernel partial least squares are used to construct the inside-layer SEN sub-model for each mechanical vibration and acoustic frequency spectral feature subset. Branch-and-bound and adaptive weighted fusion algorithms are integrated to select and combine the outputs of the inside-layer SEN sub-models, and the outside-layer SEN is then constructed. Thus, "sub-sampling training examples"-based and "manipulating input features"-based ensemble construction methods are integrated, thereby realizing selective information fusion based on multi-condition history samples and multi-source input features. This novel approach is applied to a laboratory-scale ball mill grinding process. A comparison with other methods indicates that the proposed MLSEN approach effectively models mechanical vibration and acoustic signals.

  13. A Multi-object Exoplanet Detecting Technique

    NASA Astrophysics Data System (ADS)

    Zhang, K.

    2011-05-01

    Exoplanet exploration is not only a meaningful astronomical endeavour, but is also closely related to the search for extraterrestrial life. The high-resolution echelle spectrograph is the key instrument for measuring stellar radial velocity (RV), but higher precision demands better environmental stability and higher cost. An improved RV measurement technique invented by David J. Erskine in 1997, Externally Dispersed Interferometry (EDI), can increase the RV measuring precision by combining a moderate-resolution spectrograph with a fixed-delay Michelson interferometer. LAMOST, with its large aperture and large field of view, is equipped with 16 multi-object low-resolution fiber spectrographs, and these spectrographs are capable of working in medium-resolution mode (R = 5,000-10,000). By introducing the EDI technique, LAMOST will become one of the most powerful exoplanet detection systems in the world. The EDI technique is new to astronomical instrumentation development in China. Its operating theory was broadly verified by a feasibility experiment done in 2009, and a multi-object exoplanet survey system based on the LAMOST spectrographs was then proposed. Within this project, three important tasks have been completed. Firstly, a simulation of the EDI operating theory was built, containing a stellar spectrum model, an interferometer transmission model, a spectrograph modulation model and an RV solution model. To match the practical situation, two detecting modes, temporal and spatial phase-stepping, are simulated separately. The interference spectrum is analyzed with a Fourier transform algorithm, and a higher-resolution conventional spectrum is recovered. Secondly, an EDI prototype was assembled from a multi-object interferometer prototype and the LAMOST spectrograph. Several design ideas, such as a modular structure and external/internal adjusting frames, reduce the effect of central obscuration. Another feasibility experiment was done at Xinglong Station in 2010, where a related spectrum reduction program and the instrumental stability were tested by obtaining multi-object interference spectra. Thirdly, parameter optimization of the fixed-delay Michelson interferometer was studied to increase its internal thermal stability and relax the external environmental requirements. Referring to the wide-angle Michelson interferometer successfully used for upper-atmosphere wind-field measurements, a glass-pair selection scheme is given: by choosing a suitable glass pair for the interferometer arms, the temperature sensitivity of the RV error can be held to several hundred m·s⁻¹·°C⁻¹. This work thus supports deeper study of the EDI technique and speeds up the development of the multi-object exoplanet survey system. LAMOST will make a greater contribution to astronomy when the combination of its spectrographs with the EDI technique is realized.

  14. Using multi-class queuing network to solve performance models of e-business sites.

    PubMed

    Zheng, Xiao-ying; Chen, De-ren

    2004-01-01

    Due to e-business's variety of customers with different navigational patterns and demands, a multi-class queuing network is a natural performance model for it. Open multi-class queuing network (QN) models are based on the assumption that no service center is saturated as a result of the combined loads of all the classes. Several formulas are used to calculate performance measures, including throughput, residence time, queue length, response time and the average number of requests. The solution technique for closed multi-class QN models is an approximate mean value analysis (MVA) algorithm based on three key equations, because the exact algorithm has huge time and space requirements. As mixed multi-class QN models include both open and closed classes, the open classes should be eliminated to create a closed multi-class QN so that the closed model algorithm can be applied. Corresponding examples are given to show how to apply the algorithms mentioned in this article. These examples indicate that the multi-class QN is a reasonably accurate model of e-business and can be solved efficiently.
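
    The approximate MVA referred to is typically the Bard-Schweitzer fixed-point iteration; below is a sketch for a closed multi-class network of queueing centers (the article's exact three-equation variant may differ).

    ```python
    import numpy as np

    def approx_mva(D, N, Z=None, tol=1e-8, max_iter=10000):
        """Bard-Schweitzer approximate MVA for a closed multi-class QN.

        D : (K, C) service demand of class c at center k
        N : (C,)   population of each closed class
        Z : (C,)   optional per-class think times
        Returns per-class throughputs X and queue lengths Q[k, c].
        """
        D = np.asarray(D, float)
        K, C = D.shape
        N = np.asarray(N, float)
        Z = np.zeros(C) if Z is None else np.asarray(Z, float)
        Q = np.tile(N / K, (K, 1))             # initial guess: even spread
        for _ in range(max_iter):
            Qtot = Q.sum(axis=1, keepdims=True)
            # arrival theorem, approximated: remove one job of class c
            A = Qtot - Q / np.maximum(N, 1)
            R = D * (1.0 + A)                  # residence times
            X = N / (Z + R.sum(axis=0))        # class throughputs
            Q_new = R * X
            if np.abs(Q_new - Q).max() < tol:
                return X, Q_new
            Q = Q_new
        return X, Q

    # usage: 2 service centers, 2 closed classes
    X, Q = approx_mva(D=[[0.2, 0.1], [0.1, 0.3]], N=[5, 3])
    print(np.round(X, 3))
    ```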

  15. Factors affecting the effectiveness of biomedical document indexing and retrieval based on terminologies.

    PubMed

    Dinh, Duy; Tamine, Lynda; Boubekeur, Fatiha

    2013-02-01

    The aim of this work is to evaluate a set of indexing and retrieval strategies based on the integration of several biomedical terminologies on the available TREC Genomics collections for an ad hoc information retrieval (IR) task. We propose a multi-terminology based concept extraction approach to selecting the best concepts from free text by means of voting techniques. We instantiate this general approach on four terminologies (MeSH, SNOMED, ICD-10 and GO). We particularly focus on the effect of integrating terminologies into a biomedical IR process, and the utility of using voting techniques for combining the concepts extracted from each document into a list of unique concepts. Experimental studies conducted on the TREC Genomics collections show that our multi-terminology IR approach based on voting techniques yields statistically significant improvements over the baseline. For example, tested on the 2005 TREC Genomics collection, our multi-terminology based IR approach provides an improvement rate of +6.98% in terms of MAP (mean average precision) (p<0.05) compared to the baseline. In addition, our experimental results show that document expansion using preferred terms, in combination with query expansion using terms from top-ranked expanded documents, improves biomedical IR effectiveness. We have evaluated several voting models for combining concepts issued from multiple terminologies. Through this study, we present several factors affecting the effectiveness of biomedical IR systems, including term weighting, query expansion, and document expansion models. The appropriate combination of those factors can improve IR performance.
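
    As an illustration of the voting step, a Borda count is one of the classical fusion rules for merging per-terminology concept rankings into a single list; the paper's exact voting variants and tie handling may differ, and the concepts in the usage line are made up.

    ```python
    from collections import defaultdict

    def borda_fusion(ranked_lists):
        """Fuse per-terminology ranked concept lists with a Borda count.

        ranked_lists: e.g. one ranked concept list per terminology.
        Returns concepts sorted by total Borda score (higher = better).
        """
        scores = defaultdict(float)
        for ranking in ranked_lists:
            n = len(ranking)
            for pos, concept in enumerate(ranking):
                scores[concept] += n - pos     # top rank gets most points
        return sorted(scores, key=scores.get, reverse=True)

    # usage: concepts extracted from MeSH, SNOMED, ICD-10, GO for one document
    print(borda_fusion([["sepsis", "fever"], ["fever", "sepsis", "infection"]]))
    ```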

  16. Multi-Objective and Multidisciplinary Design Optimisation (MDO) of UAV Systems using Hierarchical Asynchronous Parallel Evolutionary Algorithms

    DTIC Science & Technology

    2007-09-17

    been proposed; these include a combination of variable-fidelity models, parallelisation strategies and hybridisation techniques (Coello, Veldhuizen et al. 2002). … 4.4.2 HIERARCHICAL POPULATION TOPOLOGY: A hierarchical population topology, when integrated into … hybrid parallel Multi-Objective Evolutionary Algorithms (pMOEA) (Cantu-Paz 2000; Veldhuizen, Zydallis et al. 2003); it uses a master-slave …

  17. Surface tension models for a multi-material ALE code with AMR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wangyi; Koniges, Alice; Gott, Kevin

    A number of surface tension models have been implemented in a 3D multi-physics multi-material code, ALE–AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR). ALE–AMR is unique in its ability to model hot radiating plasmas, cold fragmenting solids, and most recently, the deformation of molten material. The surface tension models implemented include a diffuse interface approach with special numerical techniques to remove parasitic flow and a height function approach in conjunction with a volume-fraction interface reconstruction package. These surface tension models are benchmarked with a variety of test problems. In conclusion, based on the results, the height function approach using volume fractions was chosen to simulate droplet dynamics associated with extreme ultraviolet (EUV) lithography.

  18. Surface tension models for a multi-material ALE code with AMR

    DOE PAGES

    Liu, Wangyi; Koniges, Alice; Gott, Kevin; ...

    2017-06-01

    A number of surface tension models have been implemented in a 3D multi-physics multi-material code, ALE–AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR). ALE–AMR is unique in its ability to model hot radiating plasmas, cold fragmenting solids, and most recently, the deformation of molten material. The surface tension models implemented include a diffuse interface approach with special numerical techniques to remove parasitic flow and a height function approach in conjunction with a volume-fraction interface reconstruction package. These surface tension models are benchmarked with a variety of test problems. In conclusion, based on the results, the height function approach using volume fractions was chosen to simulate droplet dynamics associated with extreme ultraviolet (EUV) lithography.

  19. Advanced graphical user interface for multi-physics simulations using AMST

    NASA Astrophysics Data System (ADS)

    Hoffmann, Florian; Vogel, Frank

    2017-07-01

    Numerical modelling of particulate matter has gained much popularity in recent decades. Advanced Multi-physics Simulation Technology (AMST) is a state-of-the-art three-dimensional numerical modelling technique combining the eXtended Discrete Element Method (XDEM) with Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) [1]. One major limitation of this code has been the lack of a graphical user interface (GUI), meaning that all pre-processing had to be done directly in an HDF5 file. This contribution presents the first graphical pre-processor developed for AMST.

  20. A resilient and efficient CFD framework: Statistical learning tools for multi-fidelity and heterogeneous information fusion

    NASA Astrophysics Data System (ADS)

    Lee, Seungjoon; Kevrekidis, Ioannis G.; Karniadakis, George Em

    2017-09-01

    Exascale-level simulations require fault-resilient algorithms that are robust against repeated and expected software and/or hardware failures during computations, which may otherwise render the simulation results unsatisfactory. If each processor can share some global information about the simulation from a coarse, limited-accuracy but relatively costless auxiliary simulator, we can effectively fill in the missing spatial data at the required times by a statistical learning technique, multi-level Gaussian process regression, on the fly; this has been demonstrated in previous work [1]. Building on that work, we also employ another (nonlinear) statistical learning technique, Diffusion Maps, which detects computational redundancy in time and hence accelerates the simulation by projective time integration, giving the overall computation a "patch dynamics" flavor. Furthermore, we are now able to perform information fusion with multi-fidelity and heterogeneous data (including stochastic data). Finally, we set the foundations of a new framework in CFD, called patch simulation, that combines information fusion techniques from, in principle, multiple fidelity and resolution simulations (and even experiments) with a new adaptive timestep refinement technique. We present two benchmark problems (the heat equation and the Navier-Stokes equations) to demonstrate the new capability that statistical learning tools can bring to traditional scientific computing algorithms. For each problem, we rely on heterogeneous and multi-fidelity data, either from a coarse simulation of the same equation or from a stochastic, particle-based, more "microscopic" simulation. We consider, as such "auxiliary" models, a Monte Carlo random walk for the heat equation and a dissipative particle dynamics (DPD) model for the Navier-Stokes equations. More broadly, in this paper we demonstrate the symbiotic and synergistic combination of statistical learning, domain decomposition, and scientific computing in exascale simulations.
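
    A minimal two-fidelity sketch of the fill-in idea: model the discrepancy between a cheap auxiliary solver and sparse high-fidelity data with a Gaussian process, then correct cheap solutions wherever fine data are missing. The paper uses multi-level GP regression; this additive-correction variant and the toy simulators lo_fi/hi_fi are simplifications for illustration.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def lo_fi(x):            # stand-in for a coarse, costless simulator
        return np.sin(8 * x)

    def hi_fi(x):            # stand-in for the expensive simulator
        return np.sin(8 * x) + 0.3 * x ** 2

    x_hi = np.linspace(0, 1, 8)[:, None]           # few expensive samples
    delta = hi_fi(x_hi[:, 0]) - lo_fi(x_hi[:, 0])  # observed discrepancy
    gp = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(1e-6))
    gp.fit(x_hi, delta)

    # fused prediction: cheap solution plus learned fine-scale correction
    x_new = np.linspace(0, 1, 100)[:, None]
    y_fused = lo_fi(x_new[:, 0]) + gp.predict(x_new)
    ```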

  1. Spatial effect of new municipal solid waste landfill siting using different guidelines.

    PubMed

    Ahmad, Siti Zubaidah; Ahamad, Mohd Sanusi S; Yusoff, Mohd Suffian

    2014-01-01

    Proper implementation of landfill siting with the right regulations and constraints can prevent undesirable long-term effects. Different countries have their own guidelines on criteria for new landfill sites. In this article, we perform a comparative study of municipal solid waste landfill siting criteria stated in the policies and guidelines of eight different constitutional bodies from Malaysia, Australia, India, U.S.A., Europe, China and the Middle East, and the World Bank. Subsequently, a geographic information system (GIS) multi-criteria evaluation model was applied to determine suitable new landfill sites for the different criterion parameters, using a constraint mapping technique and weighted linear combination. The Macro Modeler provided in the GIS-IDRISI Andes software helps in building and executing multi-step models. In addition, the analytic hierarchy process technique was used to determine the criterion weights from the decision maker's preferences as part of the weighted linear combination procedure. The differences in the spatial results for suitable sites signify that dissimilarities in guideline specifications and requirements affect the decision-making process.
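
    The two numerical ingredients, AHP criterion weights and the weighted linear combination overlay, are compact to sketch; the pairwise-comparison values below are hypothetical and the consistency-ratio check of full AHP is omitted.

    ```python
    import numpy as np

    def ahp_weights(pairwise):
        """Criterion weights from an AHP pairwise-comparison matrix via the
        principal eigenvector (standard AHP; consistency check omitted)."""
        vals, vecs = np.linalg.eig(np.asarray(pairwise, float))
        v = np.real(vecs[:, np.argmax(np.real(vals))])
        return v / v.sum()

    def weighted_linear_combination(layers, weights, constraints):
        """WLC suitability: weighted sum of standardized criterion layers,
        masked by Boolean constraint layers (0 = excluded, 1 = allowed)."""
        suit = sum(w * layer for w, layer in zip(weights, layers))
        return suit * np.prod(constraints, axis=0)

    # toy 3-criterion example (e.g. distance to rivers, slope, land use)
    pairwise = [[1, 3, 5], [1/3, 1, 3], [1/5, 1/3, 1]]
    print(np.round(ahp_weights(pairwise), 3))   # weights sum to 1
    ```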

  2. VLBI-SLR Combination Solution Using GEODYN

    NASA Technical Reports Server (NTRS)

    MacMillan, Dan; Pavlis, Despina; Lemoine, Frank; Chinn, Douglas; Rowlands, David

    2010-01-01

    We would like to generate a multi-technique solution combining all of the geodetic techniques (VLBI, SLR, GPS, and DORIS) using the same software and the same a priori models. Here we use the GEODYN software, consider only the VLBI-SLR combination, and report initial results of our work. We first performed solutions with GEODYN using only VLBI data and found that VLBI EOP solution results produced with GEODYN agree with results using CALC/SOLVE at the 1-sigma level. We then combined the VLBI normal equations in GEODYN with weekly SLR normal equations for the period 2007-2008. Agreement of the estimated Earth orientation parameters with IERS C04 was not significantly different for the VLBI-only, SLR-only, and VLBI+SLR solutions.
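
    The core of such a combination is the addition of per-technique normal equations before solving. A bare-bones sketch, assuming both systems are already expressed on the same parameter list and datum; the real VLBI+SLR combination also involves relative weighting and datum constraints.

    ```python
    import numpy as np

    def combine_normal_equations(N_list, b_list):
        """Combine per-technique normal equations (N x = b) by addition and
        solve for the common parameters (e.g. Earth orientation parameters).
        """
        N = sum(N_list)   # stacked normal matrices, e.g. [N_vlbi, N_slr]
        b = sum(b_list)   # stacked right-hand sides
        return np.linalg.solve(N, b)
    ```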

  3. Modelling Multi Hazard Mapping in Semarang City Using GIS-Fuzzy Method

    NASA Astrophysics Data System (ADS)

    Nugraha, A. L.; Awaluddin, M.; Sasmito, B.

    2018-02-01

    One important aspect of disaster mitigation planning is hazard mapping, which provides spatial information on the distribution of locations threatened by disaster. Semarang City, the capital of Central Java Province, is one of the cities with high natural disaster intensity; it frequently experiences tidal floods, river floods, landslides, and droughts. Semarang City therefore needs spatial information from multi-hazard mapping to support its disaster mitigation planning. The multi-hazard map is modelled from parameters such as slope, rainfall, land use, and soil type. This modelling is done using a GIS method with scoring and overlay techniques; the accuracy is improved by combining the GIS method with fuzzy logic, which provides better classification of disaster threats. The resulting GIS-Fuzzy method builds a multi-hazard map of Semarang City with good accuracy and an appropriate spread of threat classes, providing disaster information for the city's mitigation planning. The multi-hazard modelling with GIS-Fuzzy shows that the Gaussian membership function gives the best accuracy, with the smallest RMSE (0.404) and the largest VAF (72.909%) among the membership functions tested.
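
    For reference, the Gaussian membership function and the two reported accuracy measures are straightforward to write down; a sketch with illustrative parameter names.

    ```python
    import numpy as np

    def gauss_membership(x, c, sigma):
        """Gaussian fuzzy membership used to grade a hazard criterion."""
        return np.exp(-0.5 * ((x - c) / sigma) ** 2)

    def rmse(pred, obs):
        """Root mean square error of the modelled hazard grades."""
        return np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2))

    def vaf(pred, obs):
        """Variance accounted for, in percent."""
        pred, obs = np.asarray(pred), np.asarray(obs)
        return 100.0 * (1.0 - np.var(obs - pred) / np.var(obs))
    ```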

  4. Estimation of Soil Moisture with L-band Multi-polarization Radar

    NASA Technical Reports Server (NTRS)

    Shi, J.; Chen, K. S.; Kim, Chung-Li Y.; Van Zyl, J. J.; Njoku, E.; Sun, G.; O'Neill, P.; Jackson, T.; Entekhabi, D.

    2004-01-01

    Through analyses of the model-simulated database, we developed a technique to estimate surface soil moisture under the HYDROS radar sensor configuration (L-band multi-polarization and 40° incidence). This technique includes two steps. First, it decomposes the total backscattering signals into two components - the surface scattering components (the bare-surface backscattering signals attenuated by the overlying vegetation layer) and the sum of the direct volume scattering components and surface-volume interaction components at different polarizations. On the model-simulated database, our decomposition technique works quite well in estimating the surface scattering components, with RMSEs of 0.12, 0.25, and 0.55 dB for VV, HH, and VH polarizations, respectively. Then, we use the decomposed surface backscattering signals to estimate the soil moisture and the combined surface roughness and vegetation attenuation correction factors with all three polarizations.

  5. Symbolic Analysis of Concurrent Programs with Polymorphism

    NASA Technical Reports Server (NTRS)

    Rungta, Neha Shyam

    2010-01-01

    The current trend of multi-core and multi-processor computing is causing a paradigm shift from inherently sequential to highly concurrent and parallel applications. Certain thread interleavings, data input values, or combinations of both often cause errors in the system. Systematic verification techniques such as explicit state model checking and symbolic execution are extensively used to detect errors in such systems [7, 9]. Explicit state model checking enumerates possible thread schedules and input data values of a program in order to check for errors [3, 9]. To partially mitigate the state space explosion from data input values, symbolic execution techniques substitute data input values with symbolic values [5, 7, 6]. Explicit state model checking and symbolic execution techniques used in conjunction with exhaustive search techniques such as depth-first search are unable to detect errors in medium to large-sized concurrent programs because the number of behaviors caused by data and thread non-determinism is extremely large. We present an overview of abstraction-guided symbolic execution for concurrent programs that detects errors manifested by a combination of thread schedules and data values [8]. The technique generates a set of key program locations relevant in testing the reachability of the target locations. The symbolic execution is then guided along these locations in an attempt to generate a feasible execution path to the error state. This allows the execution to focus in parts of the behavior space more likely to contain an error.

  6. Evaluating uncertainties in multi-layer soil moisture estimation with support vector machines and ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Liu, Di; Mishra, Ashok K.; Yu, Zhongbo

    2016-07-01

    This paper examines the combination of support vector machines (SVM) and the dual ensemble Kalman filter (EnKF) technique to estimate root zone soil moisture at different soil layers down to 100 cm depth. Multiple experiments are conducted in a data-rich environment to construct and validate the SVM model and to explore the effectiveness and robustness of the EnKF technique. It was observed that the performance of the SVM relies more on the initial length of the training set than on other factors (e.g., cost function, regularization parameter, and kernel parameters). The dual EnKF technique proved efficient at improving the SVM with observed data, either at each time step or at flexible time steps, and reaches its maximum efficiency when the updating ensemble size approaches a certain threshold. It was also observed that the SVM model performance for multi-layer soil moisture estimation can be influenced by the rainfall magnitude (e.g., dry and wet spells).
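
    The EnKF analysis step that corrects the SVM-propagated states with observations can be sketched as the standard stochastic (perturbed-observation) update; details such as localization or the "dual" parameter update are omitted here.

    ```python
    import numpy as np

    def enkf_update(ensemble, H, y_obs, obs_var, rng):
        """Stochastic EnKF analysis step with perturbed observations.

        ensemble : (n_state, n_members) forecast ensemble (e.g. layered
                   soil moisture states propagated by the SVM model)
        H        : (n_obs, n_state) observation operator
        y_obs    : (n_obs,) observation vector
        obs_var  : scalar observation-error variance
        """
        n_obs, n_ens = H.shape[0], ensemble.shape[1]
        X = ensemble - ensemble.mean(axis=1, keepdims=True)
        P = X @ X.T / (n_ens - 1)                      # ensemble covariance
        R = obs_var * np.eye(n_obs)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        Y = y_obs[:, None] + rng.normal(0, np.sqrt(obs_var), (n_obs, n_ens))
        return ensemble + K @ (Y - H @ ensemble)
    ```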

  7. Groundwater management under uncertainty using a stochastic multi-cell model

    NASA Astrophysics Data System (ADS)

    Joodavi, Ata; Zare, Mohammad; Ziaei, Ali Naghi; Ferré, Ty P. A.

    2017-08-01

    The optimization of spatially complex groundwater management models over long time horizons requires the use of computationally efficient groundwater flow models. This paper presents a new stochastic multi-cell lumped-parameter aquifer model that explicitly considers uncertainty in groundwater recharge. To achieve this, the multi-cell model is combined with the constrained-state formulation method. In this method, the lower and upper bounds of groundwater heads are incorporated into the mass balance equation using indicator functions. This provides expressions for the means, variances and covariances of the groundwater heads, which can be included in the constraint set in an optimization model. This method was used to formulate two separate stochastic models: (i) groundwater flow in a two-cell aquifer model with normal and non-normal distributions of groundwater recharge; and (ii) groundwater management in a multiple cell aquifer in which the differences between groundwater abstractions and water demands are minimized. The comparison between the results obtained from the proposed modeling technique with those from Monte Carlo simulation demonstrates the capability of the proposed models to approximate the means, variances and covariances. Significantly, considering covariances between the heads of adjacent cells allows a more accurate estimate of the variances of the groundwater heads. Moreover, this modeling technique requires no discretization of state variables, thus offering an efficient alternative to computationally demanding methods.

  8. Thermal Analysis and Design of Multi-layer Insulation for Re-entry Aerodynamic Heating

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran

    2001-01-01

    The combined radiation/conduction heat transfer in high-temperature multi-layer insulations was modeled using a finite volume numerical model. The numerical model was validated by comparison with steady-state effective thermal conductivity measurements, and by transient thermal tests simulating re-entry aerodynamic heating conditions. A design of experiments technique was used to investigate optimum design of multi-layer insulations for re-entry aerodynamic heating. It was found that use of 2 mm foil spacing and locating the foils near the hot boundary with the top foil 2 mm away from the hot boundary resulted in the most effective insulation design. A 76.2 mm thick multi-layer insulation using 1, 4, or 16 foils resulted in 2.9, 7.2, or 22.2 percent mass per unit area savings compared to a fibrous insulation sample at the same thickness, respectively.

  9. GLO-STIX: Graph-Level Operations for Specifying Techniques and Interactive eXploration

    PubMed Central

    Stolper, Charles D.; Kahng, Minsuk; Lin, Zhiyuan; Foerster, Florian; Goel, Aakash; Stasko, John; Chau, Duen Horng

    2015-01-01

    The field of graph visualization has produced a wealth of visualization techniques for accomplishing a variety of analysis tasks. Therefore analysts often rely on a suite of different techniques, and visual graph analysis application builders strive to provide this breadth of techniques. To provide a holistic model for specifying network visualization techniques (as opposed to considering each technique in isolation) we present the Graph-Level Operations (GLO) model. We describe a method for identifying GLOs and apply it to identify five classes of GLOs, which can be flexibly combined to re-create six canonical graph visualization techniques. We discuss advantages of the GLO model, including potentially discovering new, effective network visualization techniques and easing the engineering challenges of building multi-technique graph visualization applications. Finally, we implement the GLOs that we identified into the GLO-STIX prototype system that enables an analyst to interactively explore a graph by applying GLOs. PMID:26005315

  10. Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques - project status and first results

    NASA Astrophysics Data System (ADS)

    Schmidt, M.; Hugentobler, U.; Jakowski, N.; Dettmering, D.; Liang, W.; Limberger, M.; Wilken, V.; Gerzen, T.; Hoque, M.; Berdermann, J.

    2012-04-01

    Near real-time high resolution and high precision ionosphere models are needed for a large number of applications, e.g. in navigation, positioning, telecommunications or astronautics. Today these ionosphere models are mostly empirical, i.e., based purely on mathematical approaches. In the DFG project 'Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques (MuSIK)' the complex phenomena within the ionosphere are described vertically by combining the Chapman electron density profile with a plasmasphere layer. In order to consider the horizontal and temporal behaviour, the fundamental target parameters of this physics-motivated approach are modelled by series expansions in terms of tensor products of localizing B-spline functions depending on longitude, latitude and time. For testing the procedure the model will be applied to an appropriate region in South America, which covers relevant ionospheric processes and phenomena such as the Equatorial Anomaly. The project connects the expertise of the three project partners, namely Deutsches Geodätisches Forschungsinstitut (DGFI) Munich, the Institute of Astronomical and Physical Geodesy (IAPG) of the Technical University Munich (TUM) and the German Aerospace Center (DLR), Neustrelitz. In this presentation we focus on the current status of the project. In the first year of the project we studied the behaviour of the ionosphere in the test region, set up appropriate test periods covering high and low solar activity as well as winter and summer, and started the data collection, analysis, pre-processing and archiving. We have partly developed the mathematical-physical modelling approach and performed first computations based on simulated input data. Here we present information on the data coverage for the area and the time periods of our investigations, and we outline challenges of the multi-dimensional mathematical-physical modelling approach. We show first results, discuss problems in modelling and possible solution strategies, and finally address open questions.

  11. Extending the Multi-level Method for the Simulation of Stochastic Biological Systems.

    PubMed

    Lester, Christopher; Baker, Ruth E; Giles, Michael B; Yates, Christian A

    2016-08-01

    The multi-level method for discrete-state systems, first introduced by Anderson and Higham (SIAM Multiscale Model Simul 10(1):146-179, 2012), is a highly efficient simulation technique that can be used to elucidate statistical characteristics of biochemical reaction networks. A single point estimator is produced in a cost-effective manner by combining a number of estimators of differing accuracy in a telescoping sum, and, as such, the method has the potential to revolutionise the field of stochastic simulation. In this paper, we present several refinements of the multi-level method which render it easier to understand and implement, and also more efficient. Given the substantial and complex nature of the multi-level method, the first part of this work reviews existing literature, with the aim of providing a practical guide to the use of the multi-level method. The second part provides the means for a deft implementation of the technique and concludes with a discussion of a number of open problems.
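
    The telescoping sum at the heart of the multi-level method is compact: E[P_L] = E[P_0] + sum over l of E[P_l - P_{l-1}], with each correction term estimated independently and ever fewer samples spent on the expensive fine levels. A minimal sketch; sample_level is a user-supplied routine returning coupled-level samples from the stochastic simulation algorithm.

    ```python
    import numpy as np

    def multilevel_estimator(sample_level, n_samples):
        """Telescoping multi-level point estimator.

        sample_level(l, n): returns n samples of (P_l - P_{l-1}) from
                            coupled simulations (or of P_0 when l == 0).
        n_samples         : samples per level, typically decreasing,
                            e.g. [10000, 1000, 100].
        """
        return sum(np.mean(sample_level(l, n))
                   for l, n in enumerate(n_samples))
    ```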

  12. Scalability of surrogate-assisted multi-objective optimization of antenna structures exploiting variable-fidelity electromagnetic simulation models

    NASA Astrophysics Data System (ADS)

    Koziel, Slawomir; Bekasiewicz, Adrian

    2016-10-01

    Multi-objective optimization of antenna structures is a challenging task owing to the high computational cost of evaluating the design objectives as well as the large number of adjustable parameters. Design speed-up can be achieved by means of surrogate-based optimization techniques. In particular, a combination of variable-fidelity electromagnetic (EM) simulations, design space reduction techniques, response surface approximation models and design refinement methods permits identification of the Pareto-optimal set of designs within a reasonable timeframe. Here, a study concerning the scalability of surrogate-assisted multi-objective antenna design is carried out based on a set of benchmark problems, with the dimensionality of the design space ranging from six to 24 and a CPU cost of the EM antenna model from 10 to 20 min per simulation. Numerical results indicate that the computational overhead of the design process increases more or less quadratically with the number of adjustable geometric parameters of the antenna structure at hand, which is a promising result from the point of view of handling even more complex problems.
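
    Identifying the Pareto-optimal set among already-evaluated candidate designs reduces to a non-dominated filter; a minimal sketch assuming all objectives are minimized, separate from the surrogate-assisted search machinery of the paper.

    ```python
    import numpy as np

    def pareto_front(F):
        """Indices of non-dominated points, minimizing every column.

        F: (n_designs, n_objectives) objective values of candidate designs.
        """
        F = np.asarray(F, float)
        keep = np.ones(F.shape[0], bool)
        for i in range(F.shape[0]):
            # design i is dominated if some j is no worse in all
            # objectives and strictly better in at least one
            dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
            keep[i] = not dominates_i.any()
        return np.flatnonzero(keep)
    ```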

  13. Removing flicker based on sparse color correspondences in old film restoration

    NASA Astrophysics Data System (ADS)

    Huang, Xi; Ding, Youdong; Yu, Bing; Xia, Tianran

    2018-04-01

    Archived film is an indispensable part of the long history of human civilization, and digital restoration of damaged film is now a mainstream practice. In this paper, we propose a technique based on sparse color correspondences to remove fading flicker from old films. Our approach uses multiple frames to establish a simple correction model and includes three key steps. Firstly, we recover sparse color correspondences in the input frames to build a matrix with many missing entries. Secondly, we present a low-rank matrix factorization approach to estimate the unknown parameters of this model. Finally, we adopt a two-step strategy that divides the estimated parameters into reference-frame parameters for color recovery correction and other-frame parameters for color consistency correction to remove flicker. By combining multiple frames, our method takes the continuity of the input sequence into account, and the experimental results show that it removes fading flicker efficiently.
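
    The second step, low-rank factorization of a matrix with missing entries, can be sketched with alternating ridge-regularized least squares; a generic stand-in for the paper's estimation procedure, with illustrative rank and regularization values.

    ```python
    import numpy as np

    def als_lowrank(M, mask, rank=2, n_iter=50, lam=1e-3):
        """Rank-r factorization M ~ U @ V.T fitted only on observed entries
        (mask == 1) by alternating least squares with ridge regularization.
        """
        m, n = M.shape
        rng = np.random.default_rng(0)
        U, V = rng.normal(size=(m, rank)), rng.normal(size=(n, rank))
        I = lam * np.eye(rank)
        for _ in range(n_iter):
            for i in range(m):                  # update rows of U
                o = mask[i] > 0
                U[i] = np.linalg.solve(V[o].T @ V[o] + I, V[o].T @ M[i, o])
            for j in range(n):                  # update rows of V
                o = mask[:, j] > 0
                V[j] = np.linalg.solve(U[o].T @ U[o] + I, U[o].T @ M[o, j])
        return U, V
    ```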

  14. A Bayesian Alternative for Multi-objective Ecohydrological Model Specification

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.

    2015-12-01

    Process-based ecohydrological models combine the study of hydrological, physical, biogeochemical and ecological processes of the catchments, and are usually more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov Chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations based on a single-objective likelihood (streamflow or LAI) and on multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and corresponding posterior distributions to examine the parameter sensitivity. Results show that different prior distributions can strongly influence posterior distributions for parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits in different cases based on multi-objective likelihoods vs. single objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to different data types.

  15. Wavelet packets for multi- and hyper-spectral imagery

    NASA Astrophysics Data System (ADS)

    Benedetto, J. J.; Czaja, W.; Ehler, M.; Flake, C.; Hirn, M.

    2010-01-01

    State of the art dimension reduction and classification schemes in multi- and hyper-spectral imaging rely primarily on the information contained in the spectral component. To better capture the joint spatial and spectral data distribution we combine the Wavelet Packet Transform with the linear dimension reduction method of Principal Component Analysis. Each spectral band is decomposed by means of the Wavelet Packet Transform and we consider a joint entropy across all the spectral bands as a tool to exploit the spatial information. Dimension reduction is then applied to the Wavelet Packets coefficients. We present examples of this technique for hyper-spectral satellite imaging. We also investigate the role of various shrinkage techniques to model non-linearity in our approach.
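
    A sketch of the pipeline, assuming the PyWavelets and scikit-learn packages: per-band 2D wavelet-packet decomposition followed by PCA over the stacked coefficients. The joint-entropy packet selection and shrinkage steps of the paper are omitted, and the function name is illustrative.

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    def wp_pca_features(cube, wavelet="db2", level=2, n_components=10):
        """Per-band 2D wavelet-packet coefficients, reduced with PCA.

        cube: (rows, cols, bands) multi-/hyper-spectral image; assumes
              n_components <= number of bands.
        """
        feats = []
        for b in range(cube.shape[2]):
            wp = pywt.WaveletPacket2D(cube[:, :, b], wavelet, maxlevel=level)
            nodes = wp.get_level(level)          # all packets at 'level'
            feats.append(np.concatenate([n.data.ravel() for n in nodes]))
        X = np.stack(feats)                      # one row per spectral band
        return PCA(n_components=min(n_components, X.shape[0])).fit_transform(X)
    ```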

  16. Using Nonlinear Programming in International Trade Theory: The Factor-Proportions Model

    ERIC Educational Resources Information Center

    Gilbert, John

    2004-01-01

    Students at all levels benefit from a multi-faceted approach to learning abstract material. The most commonly used technique in teaching the pure theory of international trade is a combination of geometry and algebraic derivations. Numerical simulation can provide a valuable third support to these approaches. The author describes a simple…

  17. A Generalized Measurement Model to Quantify Health: The Multi-Attribute Preference Response Model

    PubMed Central

    Krabbe, Paul F. M.

    2013-01-01

    After 40 years of deriving metric values for health status or health-related quality of life, the effective quantification of subjective health outcomes is still a challenge. Here, two of the best measurement tools, the discrete choice and the Rasch model, are combined to create a new model for deriving health values. First, existing techniques to value health states are briefly discussed followed by a reflection on the recent revival of interest in patients’ experience with regard to their possible role in health measurement. Subsequently, three basic principles for valid health measurement are reviewed, namely unidimensionality, interval level, and invariance. In the main section, the basic operation of measurement is then discussed in the framework of probabilistic discrete choice analysis (random utility model) and the psychometric Rasch model. It is then shown how combining the main features of these two models yields an integrated measurement model, called the multi-attribute preference response (MAPR) model, which is introduced here. This new model transforms subjective individual rank data into a metric scale using responses from patients who have experienced certain health states. Its measurement mechanism largely prevents biases such as adaptation and coping. Several extensions of the MAPR model are presented. The MAPR model can be applied to a wide range of research problems. If extended with the self-selection of relevant health domains for the individual patient, this model will be more valid than existing valuation techniques. PMID:24278141

  18. Three Dimensional Reconstruction Workflows for Lost Cultural Heritage Monuments Exploiting Public Domain and Professional Photogrammetric Imagery

    NASA Astrophysics Data System (ADS)

    Wahbeh, W.; Nebiker, S.

    2017-08-01

    In our paper, we document experiments and results of image-based 3d reconstructions of famous heritage monuments which were recently damaged or completely destroyed by the so-called Islamic state in Syria and Iraq. The specific focus of our research is on the combined use of professional photogrammetric imagery and of publicly available imagery from the web for optimally 3d reconstructing those monuments. The investigated photogrammetric reconstruction techniques include automated bundle adjustment and dense multi-view 3d reconstruction using public domain and professional imagery on the one hand and an interactive polygonal modelling based on projected panoramas on the other. Our investigations show that the combination of these two image-based modelling techniques delivers better results in terms of model completeness, level of detail and appearance.

  19. UTCI-Fiala multi-node model of human heat transfer and temperature regulation

    NASA Astrophysics Data System (ADS)

    Fiala, Dusan; Havenith, George; Bröde, Peter; Kampmann, Bernhard; Jendritzky, Gerd

    2012-05-01

    The UTCI-Fiala mathematical model of human temperature regulation forms the basis of the new Universal Thermal Climate Index (UTCI). Following extensive validation tests, adaptations and extensions, such as the inclusion of an adaptive clothing model, the model was used to predict human temperature and regulatory responses for combinations of the prevailing outdoor climate conditions. This paper provides an overview of the underlying algorithms and methods that constitute the multi-node dynamic UTCI-Fiala model of human thermal physiology and comfort. Treated topics include modelling heat and mass transfer within the body, numerical techniques, modelling environmental heat exchanges, thermoregulatory reactions of the central nervous system, and perceptual responses. Other contributions of this special issue describe the validation of the UTCI-Fiala model against measured data and the development of the adaptive clothing model for outdoor climates.

  20. A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.

    PubMed

    Khelifi, Lazhar; Mignotte, Max

    2017-08-01

    Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations, in a manner that takes full advantage of the complementarity of each one. Previous relevant research in this field has been impeded by the difficulty of identifying an appropriate single segmentation fusion criterion providing the best possible, i.e., the most informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem, to obtain a final improved result of segmentation. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
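
    The final decision-making step, TOPSIS, is a small computation: rank alternatives by relative closeness to an ideal solution. A generic sketch; the paper's specific criteria and weighting are not reproduced here.

    ```python
    import numpy as np

    def topsis(F, weights, benefit):
        """Rank alternatives by closeness to the ideal solution (TOPSIS).

        F       : (n_alternatives, n_criteria) decision matrix
        weights : (n_criteria,) criterion weights summing to 1
        benefit : (n_criteria,) True where larger values are better
        Returns closeness scores in [0, 1] (higher = better).
        """
        F = np.asarray(F, float)
        V = weights * F / np.linalg.norm(F, axis=0)   # weighted, normalized
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - worst, axis=1)
        return d_neg / (d_pos + d_neg)
    ```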

  1. Remote sensing estimation of the total phosphorus concentration in a large lake using band combinations and regional multivariate statistical modeling techniques.

    PubMed

    Gao, Yongnian; Gao, Junfeng; Yin, Hongbin; Liu, Chuansheng; Xia, Ting; Wang, Jing; Huang, Qi

    2015-03-15

    Remote sensing has been widely used for water quality monitoring, but most of these monitoring studies have only focused on a few water quality variables, such as chlorophyll-a, turbidity, and total suspended solids, which have typically been considered optically active variables. Estimating the phosphorus concentration in water from remote sensing remains a challenge. The total phosphorus (TP) in lakes has been estimated from remotely sensed observations, primarily using simple individual band ratios or their natural logarithms and the statistical regression method based on the field TP data and the spectral reflectance. In this study, we investigated the possibility of establishing a spatial modeling scheme to estimate the TP concentration of a large lake from multi-spectral satellite imagery using band combinations and regional multivariate statistical modeling techniques, and we tested the applicability of the spatial modeling scheme. The results showed that HJ-1A CCD multi-spectral satellite imagery can be used to estimate the TP concentration in a lake. The correlation and regression analysis showed a highly significant positive relationship between the TP concentration and certain remotely sensed combination variables. The proposed modeling scheme had a higher accuracy for the TP concentration estimation in the large lake compared with the traditional individual band ratio method and the whole-lake scale regression-modeling scheme. The TP concentration values showed a clear spatial variability and were high in western Lake Chaohu and relatively low in eastern Lake Chaohu. The northernmost portion, the northeastern coastal zone and the southeastern portion of western Lake Chaohu had the highest TP concentrations, and the other regions had the lowest TP concentration values, except for the coastal zone of eastern Lake Chaohu. These results strongly suggested that the proposed modeling scheme, i.e., the band combinations and the regional multivariate statistical modeling techniques, demonstrated advantages for estimating the TP concentration in a large lake and had a strong potential for universal application to TP concentration estimation in large lake waters worldwide. Copyright © 2014 Elsevier Ltd. All rights reserved.
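
    As an illustration of the general approach, the sketch below fits TP concentration against a few band-combination predictors with ordinary least squares. The band names (b1..b4), the chosen combinations and all data are hypothetical stand-ins, not the study's actual HJ-1A CCD variables or fitted coefficients.

```python
# Hedged sketch: least-squares fit of TP concentration against
# band-combination predictors (a ratio and a log-ratio). All names and
# data here are illustrative, not the study's actual variables.
import numpy as np

rng = np.random.default_rng(0)
n = 50
b1, b2, b3, b4 = rng.uniform(0.02, 0.2, size=(4, n))   # band reflectances
tp = 0.1 + 2.0 * (b3 / b2) + 1.5 * np.log(b4 / b1) + rng.normal(0, 0.02, n)

# Design matrix of candidate band combinations
X = np.column_stack([np.ones(n), b3 / b2, np.log(b4 / b1)])
coef, *_ = np.linalg.lstsq(X, tp, rcond=None)
pred = X @ coef
print("coefficients:", np.round(coef, 3))
print("RMSE:", np.sqrt(np.mean((pred - tp) ** 2)))
```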

  2. Multi-model analysis in hydrological prediction

    NASA Astrophysics Data System (ADS)

    Lanthier, M.; Arsenault, R.; Brissette, F.

    2017-12-01

    Hydrologic modelling is, by nature, a simplification of the real-world hydrologic system. Ensemble hydrological predictions obtained this way therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities and reduces ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These are also combined using multi-model averaging techniques, which generally produce a more accurate hydrograph than the best of the individual models in simulation mode. This new predictive combined hydrograph is added to the ensemble, thus creating a larger ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed on different periods: 2 weeks, 1 month, 3 months and 6 months, using a PIT histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been largely corrected on short-term predictions. For the longer term, the addition of the multi-model member has been beneficial to the quality of the predictions, although it is too early to determine whether the gain comes merely from adding a member or whether the multi-model member adds value in itself.
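
    The PIT histogram used here can be computed directly from ensemble ranks. The sketch below, with synthetic data standing in for the study's streamflow volumes, shows the basic computation; a roughly flat histogram indicates proper dispersion, while a U-shape reveals under-dispersion.

```python
# Minimal sketch of a PIT (probability integral transform) histogram:
# each observation's PIT value is the fraction of ensemble members that
# fall below it. Data are synthetic, not the study's volumes.
import numpy as np

rng = np.random.default_rng(1)
n_forecasts, n_members = 200, 9
ensemble = rng.normal(0.0, 0.8, size=(n_forecasts, n_members))  # under-dispersed
obs = rng.normal(0.0, 1.0, size=n_forecasts)

pit = (ensemble < obs[:, None]).mean(axis=1)
hist, edges = np.histogram(pit, bins=n_members + 1, range=(0.0, 1.0))
print("PIT histogram counts:", hist)  # U-shaped counts reveal under-dispersion
```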

  3. Application of separable parameter space techniques to multi-tracer PET compartment modeling.

    PubMed

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-02-07

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
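
    The core idea, reformulating the fit so the linear parameters are solved exactly inside a search over only the nonlinear parameters, can be illustrated on a simple sum-of-exponentials model. The sketch below is a generic variable-projection example with an exhaustive grid search over the nonlinear rates, not the authors' multi-tracer compartment equations.

```python
# Hedged sketch of separable least squares for y(t) = a1*exp(-k1*t) +
# a2*exp(-k2*t): the nonlinear rates (k1, k2) are found by exhaustive
# grid search while the linear amplitudes are solved exactly at each
# grid point, reducing the dimensionality of the nonlinear problem.
import itertools
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 60)
y = 3.0 * np.exp(-0.3 * t) + 1.0 * np.exp(-2.0 * t) + rng.normal(0, 0.02, t.size)

best = (np.inf, None, None)
grid = np.linspace(0.05, 3.0, 60)
for k1, k2 in itertools.combinations(grid, 2):
    A = np.column_stack([np.exp(-k1 * t), np.exp(-k2 * t)])
    amps, *_ = np.linalg.lstsq(A, y, rcond=None)   # linear sub-problem
    resid = np.sum((A @ amps - y) ** 2)
    if resid < best[0]:
        best = (resid, (k1, k2), amps)
print("rates:", best[1], "amplitudes:", best[2])
```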

  4. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    NASA Astrophysics Data System (ADS)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.

  5. Machine Learning Techniques for Global Sensitivity Analysis in Climate Models

    NASA Astrophysics Data System (ADS)

    Safta, C.; Sargsyan, K.; Ricciuto, D. M.

    2017-12-01

    Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty necessary for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.

  6. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features

    PubMed Central

    Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-01-01

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influences of parameter selection on the classifying performances were investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 or 0.961 for LGG and HGG or grade II, III and IV gliomas was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. Moreover, the performances of the LibSVM, SMO and IBk classifiers were influenced by some key parameters such as kernel type, C, gamma, K, etc. SVM is a promising tool in developing automated preoperative glioma grading systems, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization. PMID:28599282
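
    A minimal sketch of the classifier-plus-attribute-selection pattern described above: a linear-SVM Recursive Feature Elimination step followed by an RBF SVM, scored with leave-one-out cross validation. The synthetic data and the scikit-learn pipeline are assumptions for illustration, not the study's MRI attributes or exact toolchain.

```python
# Hedged sketch: RFE attribute selection wrapped around an SVM,
# evaluated with LOOCV. Synthetic features stand in for the MRI
# histogram/texture attributes; scikit-learn is assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=60, n_features=40, n_informative=8,
                           random_state=0)
clf = make_pipeline(
    StandardScaler(),
    RFE(SVC(kernel="linear"), n_features_to_select=10),  # attribute selection
    SVC(kernel="rbf", C=1.0, gamma="scale"),             # final classifier
)
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {acc:.3f}")
```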

  7. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features.

    PubMed

    Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-07-18

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. In addition, the influences of parameter selection on the classifying performances were investigated. We found that the support vector machine (SVM) exhibited superior performance to the other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 or 0.961 for LGG and HGG or grade II, III and IV gliomas was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. Moreover, the performances of the LibSVM, SMO and IBk classifiers were influenced by some key parameters such as kernel type, C, gamma, K, etc. SVM is a promising tool in developing automated preoperative glioma grading systems, especially when combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization.

  8. Trends in modeling Biomedical Complex Systems

    PubMed Central

    Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Petro

    2009-01-01

    In this paper we provide an introduction to the techniques for multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which is becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts of mathematical modeling methodologies and statistical inference, bioinformatics and standards tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068

  9. Toward a More Robust Pruning Procedure for MLP Networks

    NASA Technical Reports Server (NTRS)

    Stepniewski, Slawomir W.; Jorgensen, Charles C.

    1998-01-01

    Choosing a proper neural network architecture is a problem of great practical importance. Smaller models mean not only simpler designs but also lower variance for parameter estimation and network prediction. The widespread use of neural networks in modeling also highlights a human-factors issue: the procedure for building neural models should find an appropriate level of model complexity in a more or less automatic fashion, making it less prone to human subjectivity. In this paper we present a Singular Value Decomposition (SVD) based node elimination technique and an enhanced implementation of the Optimal Brain Surgeon algorithm. Combining both methods creates a powerful pruning engine that can be used for tuning feedforward connectionist models. The performance of the proposed method is demonstrated by adjusting the structure of a multi-input multi-output model used to calibrate a six-component wind tunnel strain gage.
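
    The principle behind SVD-based node elimination can be sketched briefly: decompose the matrix of hidden-unit activations over the training set and count near-zero singular values, which expose linearly redundant hidden nodes. The example below illustrates only this rank test, not the paper's full pruning engine or its coupling with Optimal Brain Surgeon.

```python
# Hedged sketch: SVD of a hidden-activation matrix reveals how many
# hidden nodes carry independent information; the rest can be pruned.
import numpy as np

rng = np.random.default_rng(3)
n_samples, n_hidden = 200, 8
H = rng.normal(size=(n_samples, n_hidden))
H[:, 6] = 0.5 * H[:, 0] + 0.5 * H[:, 1]   # node 6 is linearly redundant
H[:, 7] = H[:, 2] - H[:, 3]               # node 7 is linearly redundant

u, s, vt = np.linalg.svd(H, full_matrices=False)
rank = int(np.sum(s > 1e-8 * s[0]))
print("singular values:", np.round(s, 3))
print(f"effective rank {rank} of {n_hidden} -> prune {n_hidden - rank} nodes")
```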

  10. Capturing remote mixing due to internal tides using multi-scale modeling tool: SOMAR-LES

    NASA Astrophysics Data System (ADS)

    Santilli, Edward; Chalamalla, Vamsi; Scotti, Alberto; Sarkar, Sutanu

    2016-11-01

    Internal tides that are generated during the interaction of an oscillating barotropic tide with the bottom bathymetry dissipate only a fraction of their energy near the generation region. The rest is radiated away in the form of low- and high-mode internal tides. These internal tides dissipate energy at remote locations when they interact with the upper ocean pycnocline, continental slope, and large scale eddies. Capturing the wide range of length and time scales involved during the life cycle of internal tides is computationally very expensive. A recently developed multi-scale modeling tool called SOMAR-LES combines the adaptive grid refinement features of SOMAR with the turbulence modeling features of a Large Eddy Simulation (LES) to capture multi-scale processes at a reduced computational cost. Numerical simulations of internal tide generation at idealized bottom bathymetries are performed to demonstrate this multi-scale modeling technique. Although each of the remote mixing phenomena has been considered independently in previous studies, this work aims to capture remote mixing processes during the life cycle of an internal tide in more realistic settings, by allowing multi-level (coarse and fine) grids to co-exist and exchange information during the time stepping process.

  11. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    PubMed Central

    Zhang, Jeff L; Morey, A Michael; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. PMID:26788888

  12. Multi-objective evolutionary algorithms for fuzzy classification in survival prediction.

    PubMed

    Jiménez, Fernando; Sánchez, Gracia; Juárez, José M

    2014-03-01

    This paper presents a novel rule-based fuzzy classification methodology for survival/mortality prediction in severely burnt patients. Due to the ethical aspects involved in this medical scenario, physicians tend not to accept a computer-based evaluation unless they understand why and how such a recommendation is given. Therefore, any fuzzy classifier model must be both accurate and interpretable. The proposed methodology is a three-step process: (1) multi-objective constrained optimization of a patient's data set, using Pareto-based elitist multi-objective evolutionary algorithms to maximize accuracy and minimize the complexity (number of rules) of classifiers, subject to interpretability constraints; this step produces a set of alternative (Pareto) classifiers; (2) linguistic labeling, which assigns a linguistic label to each fuzzy set of the classifiers; this step is essential to the interpretability of the classifiers; (3) decision making, whereby a classifier is chosen, if it is satisfactory, according to the preferences of the decision maker. If no classifier is satisfactory for the decision maker, the process starts again in step (1) with a different input parameter set. The performance of three multi-objective evolutionary algorithms, the niched pre-selection multi-objective algorithm, the elitist Pareto-based multi-objective evolutionary algorithm for diversity reinforcement (ENORA) and the non-dominated sorting genetic algorithm (NSGA-II), was tested using a patient data set from an intensive care burn unit and a standard machine learning data set from a public repository. The results are compared using the hypervolume multi-objective metric. In addition, the results have been compared with other non-evolutionary techniques and validated with a multi-objective cross-validation technique. Our proposal improves the classification rate obtained by other non-evolutionary techniques (decision trees, artificial neural networks, Naive Bayes, and case-based reasoning), obtaining with ENORA a classification rate of 0.9298, specificity of 0.9385, and sensitivity of 0.9364, with 14.2 interpretable fuzzy rules on average. Our proposal improves the accuracy and interpretability of the classifiers, compared with other non-evolutionary techniques. We also conclude that ENORA outperforms the niched pre-selection and NSGA-II algorithms. Moreover, given that our multi-objective evolutionary methodology is non-combinatorial and based on real-parameter optimization, the time cost is significantly reduced compared with other evolutionary approaches in the literature based on combinatorial optimization. Copyright © 2014 Elsevier B.V. All rights reserved.
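
    The dominance-based selection step shared by these algorithms reduces to a simple Pareto filter over the two objectives (maximize accuracy, minimize the number of rules). The sketch below uses illustrative candidate values, not the study's classifiers.

```python
# Hedged sketch of Pareto (dominance-based) selection: keep classifiers
# for which no other candidate is at least as good on both objectives
# and strictly better on one. Candidate values are illustrative.
def dominates(a, b):
    """a = (accuracy, n_rules); higher accuracy and fewer rules are better."""
    return (a[0] >= b[0] and a[1] <= b[1]) and (a[0] > b[0] or a[1] < b[1])

candidates = [(0.93, 14), (0.90, 9), (0.93, 20), (0.85, 5), (0.88, 9)]
pareto = [c for c in candidates
          if not any(dominates(o, c) for o in candidates if o != c)]
print("Pareto front:", pareto)  # -> [(0.93, 14), (0.90, 9), (0.85, 5)]
```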

  13. Image-based 3D reconstruction and virtual environmental walk-through

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Fang, Lixiong; Luo, Ying

    2001-09-01

    We present a 3D reconstruction method which combines geometry-based modeling, image-based modeling and rendering techniques. The first component is an interactive geometry modeling method which recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms of walking through a virtual space, then design and implement a high-performance multi-threaded wandering algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.

  14. Ultrasonic inspection of rocket fuel model using laminated transducer and multi-channel step pulser

    NASA Astrophysics Data System (ADS)

    Mihara, T.; Hamajima, T.; Tashiro, H.; Sato, A.

    2013-01-01

    Industrial ultrasonic inspection of the solid fuel packing in a rocket booster is difficult because the signal-to-noise ratio deteriorates due to large attenuation, even when using lower-frequency ultrasound. To improve this situation, we applied two techniques to the ultrasonic inspection: a step-function pulser system with super-wideband frequency properties, and a laminated-element transducer. By combining these two techniques, we developed a new ultrasonic measurement system and demonstrated its advantages on a rocket fuel model specimen.

  15. Experience in using a numerical scheme with artificial viscosity at solving the Riemann problem for a multi-fluid model of multiphase flow

    NASA Astrophysics Data System (ADS)

    Bulovich, S. V.; Smirnov, E. M.

    2018-05-01

    The paper covers the application of the artificial viscosity technique to numerical simulation of unsteady one-dimensional multiphase compressible flows on the basis of the multi-fluid approach. The system of governing equations is written under the assumption of pressure equilibrium between the "fluids" (phases). No interfacial exchange is taken into account. A model for evaluation of the artificial viscosity coefficient that (i) assumes identity of this coefficient for all interpenetrating phases and (ii) uses the multiphase-mixture Wood equation for evaluation of a scale speed of sound has been suggested. Performance of the artificial viscosity technique has been evaluated via numerical solution of a model problem of pressure discontinuity breakdown in a three-fluid medium. It has been shown that a relatively simple numerical scheme, explicit and first-order, combined with the suggested artificial viscosity model, predicts a physically correct behavior of the moving shock and expansion waves, and a subsequent refinement of the computational grid results in a monotonic approach to an asymptotic time-dependent solution, without non-physical oscillations.
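
    The Wood equation mentioned above treats the mixture compressibility as the volume-fraction-weighted sum of the phase compressibilities, 1/(rho_m c_m^2) = sum_i alpha_i / (rho_i c_i^2). The sketch below evaluates it for illustrative air/water-like phases, not the paper's fluids.

```python
# Hedged sketch of the Wood equation for a mixture speed of sound:
# 1/(rho_m c_m^2) = sum_i alpha_i / (rho_i c_i^2). Phase values are
# illustrative (air/water-like), not the paper's three-fluid medium.
import numpy as np

alpha = np.array([0.3, 0.7])      # volume fractions (sum to 1)
rho = np.array([1.2, 1000.0])     # phase densities [kg/m^3]
c = np.array([340.0, 1500.0])     # phase speeds of sound [m/s]

rho_m = np.sum(alpha * rho)                            # mixture density
c_m = 1.0 / np.sqrt(rho_m * np.sum(alpha / (rho * c**2)))
print(f"mixture sound speed: {c_m:.1f} m/s")  # far below either phase's c
```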

  16. On Using Meta-Modeling and Multi-Modeling to Address Complex Problems

    ERIC Educational Resources Information Center

    Abu Jbara, Ahmed

    2013-01-01

    Models, created using different modeling techniques, usually serve different purposes and provide unique insights. While each modeling technique might be capable of answering specific questions, complex problems require multiple models interoperating to complement/supplement each other; we call this Multi-Modeling. To address the syntactic and…

  17. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    NASA Astrophysics Data System (ADS)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model which is a combination of a multiple linear regression model and the fuzzy c-means method. This research involves the relationship between paddy yield and 20 topsoil variates analyzed prior to planting at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model and a combination of the multiple linear regression model and the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally scattered without multicollinearity among the independent variables. Fuzzy c-means analysis clusters the paddy yield into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
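
    A minimal sketch of the hybrid scheme, assuming standard fuzzy c-means update rules: memberships split the samples into two clusters, then a separate multiple linear regression is fitted within each cluster. The data are synthetic stand-ins for the soil variates and paddy yields.

```python
# Hedged sketch: fuzzy c-means (c=2, fuzzifier m=2) followed by one
# multiple linear regression per (hard-assigned) cluster.
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(120, 3))                     # predictor variates
y = np.where(X[:, 0] > 0,
             5 + X @ np.array([1.0, 0.5, -0.2]),
             1 + X @ np.array([-0.5, 1.5, 0.3])) + rng.normal(0, 0.1, 120)

c, m = 2, 2.0
U = rng.dirichlet(np.ones(c), size=len(X))        # initial memberships
for _ in range(100):
    W = U ** m
    centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster centers
    d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
    U = d ** (-2.0 / (m - 1.0))
    U /= U.sum(axis=1, keepdims=True)             # standard FCM update

labels = U.argmax(axis=1)
for k in range(c):                                # one regression per cluster
    mask = labels == k
    A = np.column_stack([np.ones(mask.sum()), X[mask]])
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    mse = np.mean((A @ coef - y[mask]) ** 2)
    print(f"cluster {k}: coef={np.round(coef, 2)}, MSE={mse:.4f}")
```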

  18. Analysis of the Harrier forebody/inlet design using computational techniques

    NASA Technical Reports Server (NTRS)

    Chow, Chuen-Yen

    1993-01-01

    Under the support of this Cooperative Agreement, computations of transonic flow past the complex forebody/inlet configuration of the AV-8B Harrier II have been performed. The actual aircraft configuration was measured and its surface and surrounding domain were defined using computational structured grids. The thin-layer Navier-Stokes equations were used to model the flow along with the Chimera embedded multi-grid technique. A fully conservative, alternating direction implicit (ADI), approximately-factored, partially flux-split algorithm was employed to perform the computation. An existing code was altered to conform with the needs of the study, and some special engine face boundary conditions were developed. The algorithm incorporated the Chimera technique and an algebraic turbulence model in order to deal with the embedded multi-grids and viscous governing equations. Comparison with experimental data has yielded good agreement for the simplifications incorporated into the analysis. The aim of the present research was to provide a methodology for the numerical solution of complex, combined external/internal flows. This is the first time-dependent Navier-Stokes solution for a geometry in which the fuselage and inlet share a wall. The results indicate the methodology used here is a viable tool for transonic aircraft modeling.

  19. MMX-I: data-processing software for multimodal X-ray imaging and tomography.

    PubMed

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-05-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors' knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments.

  20. Multi-level methods and approximating distribution functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wilson, D., E-mail: daniel.wilson@dtc.ox.ac.uk; Baker, R. E.

    2016-07-15

    Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie’s direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie’s direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146–179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
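
    The telescoping estimator at the heart of the multi-level method can be sketched with a simple stochastic process: a crude coarse-level estimate is corrected by the mean differences between successively finer simulations driven by shared randomness. Geometric Brownian motion with Euler steps stands in here for the paper's tau-leap chemical kinetics.

```python
# Hedged sketch of multi-level Monte Carlo: estimate E[X_T] as a coarse
# estimate plus correction terms between coupled coarse/fine simulations.
import numpy as np

rng = np.random.default_rng(5)

def euler_pair(n_paths, n_steps, T=1.0, x0=1.0, mu=0.05, sigma=0.2):
    """Simulate fine paths and coupled coarse paths (half the steps)."""
    dt = T / n_steps
    xf = np.full(n_paths, x0)
    xc = np.full(n_paths, x0)
    for _ in range(n_steps // 2):
        dw1 = rng.normal(0, np.sqrt(dt), n_paths)
        dw2 = rng.normal(0, np.sqrt(dt), n_paths)
        xf += mu * xf * dt + sigma * xf * dw1      # two fine steps
        xf += mu * xf * dt + sigma * xf * dw2
        xc += mu * xc * 2 * dt + sigma * xc * (dw1 + dw2)  # shared noise
    return xf, xc

levels = [8, 16, 32, 64]
xf0, _ = euler_pair(100000, levels[0])
estimate = xf0.mean()                     # crude level-0 estimate
for n_steps in levels[1:]:
    xf, xc = euler_pair(20000, n_steps)   # fewer paths per correction
    estimate += (xf - xc).mean()          # telescoping correction term
print(f"MLMC estimate of E[X_1]: {estimate:.4f} (exact {np.exp(0.05):.4f})")
```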

  1. Behavior Knowledge Space-Based Fusion for Copy-Move Forgery Detection.

    PubMed

    Ferreira, Anselmo; Felipussi, Siovani C; Alfaro, Carlos; Fonseca, Pablo; Vargas-Munoz, John E; Dos Santos, Jefersson A; Rocha, Anderson

    2016-07-20

    The detection of copy-move image tampering is of paramount importance nowadays, mainly due to its potential use for misleading the opinion forming process of the general public. In this paper, we go beyond traditional forgery detectors and aim at combining different properties of copy-move detection approaches by modeling the problem on a multiscale behavior knowledge space, which encodes the output combinations of different techniques as a priori probabilities considering multiple scales of the training data. Afterwards, missing entries of the conditional probabilities are properly estimated through generative models applied to the existing training data. Finally, we propose different techniques that exploit the multi-directionality of the data to generate the final outcome detection map in a machine learning decision-making fashion. Experimental results on complex datasets, comparing the proposed techniques with a gamut of copy-move detection approaches and other fusion methodologies in the literature, show the effectiveness of the proposed method and its suitability for real-world applications.
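
    A minimal sketch of behavior-knowledge-space fusion as described: the joint vector of detector outputs indexes a table of label counts learned from training data, and a test pattern receives the most frequent label for that cell, with a fallback for unseen patterns. The detectors, outputs and labels below are illustrative, not from a real forgery-detection run.

```python
# Hedged sketch of a behavior knowledge space (BKS): each combination
# of detector outputs maps to the most frequent training label.
from collections import Counter, defaultdict

# (detector1, detector2, detector3) outputs -> true label, from "training"
train = [((1, 1, 0), 1), ((1, 1, 0), 1), ((1, 0, 0), 0),
         ((0, 0, 1), 0), ((1, 1, 1), 1), ((1, 1, 0), 0)]

bks = defaultdict(Counter)
for pattern, label in train:
    bks[pattern][label] += 1           # a priori counts per output pattern

def fuse(pattern):
    if pattern in bks:                 # most frequent label for this cell
        return bks[pattern].most_common(1)[0][0]
    return max(pattern, key=pattern.count)  # fallback: simple majority vote

print(fuse((1, 1, 0)))  # -> 1 (seen twice as tampered, once as pristine)
print(fuse((0, 1, 1)))  # unseen pattern -> majority vote -> 1
```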

  2. Efficient Global Aerodynamic Modeling from Flight Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2012-01-01

    A method for identifying global aerodynamic models from flight data in an efficient manner is explained and demonstrated. A novel experiment design technique was used to obtain dynamic flight data over a range of flight conditions with a single flight maneuver. Multivariate polynomials and polynomial splines were used with orthogonalization techniques and statistical modeling metrics to synthesize global nonlinear aerodynamic models directly and completely from flight data alone. Simulation data and flight data from a subscale twin-engine jet transport aircraft were used to demonstrate the techniques. Results showed that global multivariate nonlinear aerodynamic dependencies could be accurately identified using flight data from a single maneuver. Flight-derived global aerodynamic model structures, model parameter estimates, and associated uncertainties were provided for all six nondimensional force and moment coefficients for the test aircraft. These models were combined with a propulsion model identified from engine ground test data to produce a high-fidelity nonlinear flight simulation very efficiently. Prediction testing using a multi-axis maneuver showed that the identified global model accurately predicted aircraft responses.

  3. Constraint Based Modeling Going Multicellular.

    PubMed

    Martins Conde, Patricia do Rosario; Sauter, Thomas; Pfau, Thomas

    2016-01-01

    Constraint based modeling has seen applications in many microorganisms. For example, there are now established methods to determine potential genetic modifications and external interventions to increase the efficiency of microbial strains in chemical production pipelines. In addition, multiple models of multicellular organisms have been created including plants and humans. While initially the focus here was on modeling individual cell types of the multicellular organism, this focus recently started to switch. Models of microbial communities, as well as multi-tissue models of higher organisms have been constructed. These models thereby can include different parts of a plant, like root, stem, or different tissue types in the same organ. Such models can elucidate details of the interplay between symbiotic organisms, as well as the concerted efforts of multiple tissues and can be applied to analyse the effects of drugs or mutations on a more systemic level. In this review we give an overview of the recent development of multi-tissue models using constraint based techniques and the methods employed when investigating these models. We further highlight advances in combining constraint based models with dynamic and regulatory information and give an overview of these types of hybrid or multi-level approaches.
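
    At the core of constraint based modeling is flux balance analysis: maximize an objective flux subject to steady-state mass balance S v = 0 and flux bounds. The sketch below solves a hypothetical three-reaction toy network with scipy's linear programming routine; it is not a real metabolic reconstruction.

```python
# Hedged sketch of flux balance analysis on a toy network:
# uptake -> A, A -> B, B -> biomass, maximizing the biomass flux.
import numpy as np
from scipy.optimize import linprog

S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10

# linprog minimizes, so negate the biomass objective (reaction 3)
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0, 0], bounds=bounds)
print("optimal fluxes:", res.x)   # all fluxes pinned to the uptake limit
```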

  4. Multi-scale model for the hierarchical architecture of native cellulose hydrogels.

    PubMed

    Martínez-Sanz, Marta; Mikkelsen, Deirdre; Flanagan, Bernadine; Gidley, Michael J; Gilbert, Elliot P

    2016-08-20

    The structure of protiated and deuterated cellulose hydrogels has been investigated using a multi-technique approach combining small-angle scattering with diffraction, spectroscopy and microscopy. A model for the multi-scale structure of native cellulose hydrogels is proposed which highlights the essential role of water at different structural levels characterised by: (i) the existence of cellulose microfibrils containing an impermeable crystalline core surrounded by a partially hydrated paracrystalline shell, (ii) the creation of a strong network of cellulose microfibrils held together by hydrogen bonding to form cellulose ribbons and (iii) the differential behaviour of tightly bound water held within the ribbons compared to bulk solvent. Deuterium labelling provides an effective platform on which to further investigate the role of different plant cell wall polysaccharides in cellulose composite formation through the production of selectively deuterated cellulose composite hydrogels. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Multi-stream LSTM-HMM decoding and histogram equalization for noise robust keyword spotting.

    PubMed

    Wöllmer, Martin; Marchi, Erik; Squartini, Stefano; Schuller, Björn

    2011-09-01

    Highly spontaneous, conversational, and potentially emotional and noisy speech is known to be a challenge for today's automatic speech recognition (ASR) systems, which highlights the need for advanced algorithms that improve speech features and models. Histogram equalization is an efficient method to reduce the mismatch between clean and noisy conditions by normalizing all moments of the probability distribution of the feature vector components. In this article, we propose to combine histogram equalization and multi-condition training for robust keyword detection in noisy speech. To better cope with conversational speaking styles, we show how contextual information can be effectively exploited in a multi-stream ASR framework that dynamically models context-sensitive phoneme estimates generated by a long short-term memory neural network. The proposed techniques are evaluated on the SEMAINE database, a corpus containing emotionally colored conversations with a cognitive system for "Sensitive Artificial Listening".
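
    Histogram equalization of a feature component can be sketched as mapping each value through its empirical CDF onto the quantiles of a reference distribution, which normalizes all moments as described above. The data are synthetic stand-ins for speech feature components, and a standard normal reference is assumed.

```python
# Hedged sketch of histogram equalization: map noisy feature values
# through their empirical CDF onto standard normal quantiles.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
noisy = 3.0 + 2.5 * rng.gamma(2.0, 1.0, size=5000)   # shifted, skewed features

ranks = noisy.argsort().argsort()                     # empirical CDF positions
cdf = (ranks + 0.5) / noisy.size
equalized = norm.ppf(cdf)                             # reference quantiles

print("before: mean %.2f std %.2f" % (noisy.mean(), noisy.std()))
print("after:  mean %.2f std %.2f" % (equalized.mean(), equalized.std()))
```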

  6. Combined multi-kernel head computed tomography images optimized for depicting both brain parenchyma and bone.

    PubMed

    Takagi, Satoshi; Nagase, Hiroyuki; Hayashi, Tatsuya; Kita, Tamotsu; Hayashi, Katsumi; Sanada, Shigeru; Koike, Masayuki

    2014-01-01

    The hybrid convolution kernel technique for computed tomography (CT) is known to enable the depiction of an image set using different window settings. Our purpose was to decrease the number of artifacts in the hybrid convolution kernel technique for head CT and to determine whether our improved combined multi-kernel head CT images enabled diagnosis as a substitute for both brain (low-pass kernel-reconstructed) and bone (high-pass kernel-reconstructed) images. Forty-four patients with nondisplaced skull fractures were included. Our improved multi-kernel images were generated so that pixels of >100 Hounsfield units in both brain and bone images were composed of the CT values of the bone images, and all other pixels were composed of the CT values of the brain images. Three radiologists compared the improved multi-kernel images with the bone images. The improved multi-kernel images and the brain images appeared identical when displayed with brain window settings. All three radiologists agreed that the improved multi-kernel images on the bone window settings were sufficient for diagnosing skull fractures in all patients. This improved multi-kernel technique has a simple algorithm and is practical for clinical use. Thus, simplified head CT examinations and a reduction in the number of images that need to be stored can be expected.
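
    The combination rule described can be sketched directly in a few lines: wherever both reconstructions exceed 100 Hounsfield units, take the bone-kernel value; elsewhere keep the brain-kernel value. The arrays below are random stand-ins for co-registered reconstructions of the same slice.

```python
# Hedged sketch of the multi-kernel combination rule described above.
import numpy as np

rng = np.random.default_rng(7)
brain = rng.normal(40, 80, size=(512, 512))          # low-pass kernel image [HU]
bone = brain + rng.normal(0, 15, size=brain.shape)   # high-pass kernel image

mask = (brain > 100) & (bone > 100)                  # likely bone voxels
combined = np.where(mask, bone, brain)
print(f"{mask.mean():.1%} of pixels taken from the bone image")
```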

  7. User Selection Criteria of Airspace Designs in Flexible Airspace Management

    NASA Technical Reports Server (NTRS)

    Lee, Hwasoo E.; Lee, Paul U.; Jung, Jaewoo; Lai, Chok Fung

    2011-01-01


  8. An Initial Multi-Domain Modeling of an Actively Cooled Structure

    NASA Technical Reports Server (NTRS)

    Steinthorsson, Erlendur

    1997-01-01

    A methodology for the simulation of turbine cooling flows is being developed. The methodology seeks to combine numerical techniques that optimize both accuracy and computational efficiency. Key components of the methodology include the use of multiblock grid systems for modeling complex geometries, and multigrid convergence acceleration for enhancing computational efficiency in highly resolved fluid flow simulations. The use of the methodology has been demonstrated in several turbomachinery flow and heat transfer studies. Ongoing and future work involves implementing additional turbulence models, improving computational efficiency, and adding adaptive mesh refinement (AMR).

  9. A Multi-Technique Forensic Experiment for a Nonscience-Major Chemistry Course

    ERIC Educational Resources Information Center

    Szalay, Paul S.; Zook-Gerdau, Lois Anne; Schurter, Eric J.

    2011-01-01

    This multi-technique experiment with a forensic theme was developed for a nonscience-major chemistry course. The students are provided with solid samples and informed that the samples are either cocaine or a combination of drugs designed to mimic the stimulant and anesthetic qualities of cocaine such as caffeine and lidocaine. The students carry…

  10. a Multi-Data Source and Multi-Sensor Approach for the 3d Reconstruction and Visualization of a Complex Archaelogical Site: the Case Study of Tolmo de Minateda

    NASA Astrophysics Data System (ADS)

    Torres-Martínez, J. A.; Seddaiu, M.; Rodríguez-Gonzálvez, P.; Hernández-López, D.; González-Aguilera, D.

    2015-02-01

    The complexity of archaeological sites makes it difficult to obtain an integral model using current geomatic techniques (i.e. aerial and close-range photogrammetry and terrestrial laser scanning) individually, so a multi-sensor approach is proposed as the best solution for the 3D reconstruction and visualization of these complex sites. Sensor registration is a key milestone when automation is required and when aerial and terrestrial datasets must be integrated. To this end, several problems must be solved: coordinate system definition, geo-referencing, co-registration of point clouds, geometric and radiometric homogeneity, etc. Last but not least, safeguarding tangible archaeological heritage and its associated intangible expressions entails a multi-source data approach in which heterogeneous material (historical documents, drawings, archaeological techniques, ways of living, etc.) is collected and combined with the resulting hybrid 3D models. The proposed multi-data source and multi-sensor approach is applied to the case study of the "Tolmo de Minateda" archaeological site. A total extension of 9 ha is reconstructed, with an adapted level of detail, by an ultralight aerial platform (paratrike), an unmanned aerial vehicle, a terrestrial laser scanner and terrestrial photogrammetry. In addition, the defensive nature of the site (i.e. with the presence of three different defensive walls) together with the considerable stratification of the archaeological site (i.e. with different archaeological surfaces and constructive typologies) requires that tangible and intangible archaeological heritage expressions be integrated with the hybrid 3D models obtained, so that different experts and heritage stakeholders can analyse, understand and exploit the archaeological site.

  11. Combination of complementary data mining methods for geographical characterization of extra virgin olive oils based on mineral composition.

    PubMed

    Sayago, Ana; González-Domínguez, Raúl; Beltrán, Rafael; Fernández-Recamales, Ángeles

    2018-09-30

    This work explores the potential of multi-element fingerprinting in combination with advanced data mining strategies to assess the geographical origin of extra virgin olive oil samples. For this purpose, the concentrations of 55 elements were determined in 125 oil samples from multiple Spanish geographic areas. Several unsupervised and supervised multivariate statistical techniques were used to build classification models and investigate the relationship between mineral composition of olive oils and their provenance. Results showed that Spanish extra virgin olive oils exhibit characteristic element profiles, which can be differentiated on the basis of their origin in accordance with three geographical areas: Atlantic coast (Huelva province), Mediterranean coast and inland regions. Furthermore, statistical modelling yielded high sensitivity and specificity, principally when random forest and support vector machines were employed, thus demonstrating the utility of these techniques in food traceability and authenticity research. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Verifying Multi-Agent Systems via Unbounded Model Checking

    NASA Technical Reports Server (NTRS)

    Kacprzak, M.; Lomuscio, A.; Lasica, T.; Penczek, W.; Szreter, M.

    2004-01-01

    We present an approach to the problem of verification of epistemic properties in multi-agent systems by means of symbolic model checking. In particular, it is shown how to extend the technique of unbounded model checking from a purely temporal setting to a temporal-epistemic one. In order to achieve this, we base our discussion on interpreted systems semantics, a popular semantics used in multi-agent systems literature. We give details of the technique and show how it can be applied to the well known train, gate and controller problem. Keywords: model checking, unbounded model checking, multi-agent systems

  13. Multi-scale groundwater flow modeling during temperate climate conditions for the safety assessment of the proposed high-level nuclear waste repository site at Forsmark, Sweden

    NASA Astrophysics Data System (ADS)

    Joyce, Steven; Hartley, Lee; Applegate, David; Hoek, Jaap; Jackson, Peter

    2014-09-01

    Forsmark in Sweden has been proposed as the site of a geological repository for spent high-level nuclear fuel, to be located at a depth of approximately 470 m in fractured crystalline rock. The safety assessment for the repository has required a multi-disciplinary approach to evaluate the impact of hydrogeological and hydrogeochemical conditions close to the repository and in a wider regional context. Assessing the consequences of potential radionuclide releases requires quantitative site-specific information concerning the details of groundwater flow on the scale of individual waste canister locations (1-10 m) as well as details of groundwater flow and composition on the scale of groundwater pathways between the facility and the surface (500 m to 5 km). The purpose of this article is to provide an illustration of multi-scale modeling techniques and the results obtained when combining aspects of local-scale flows in fractures around a potential contaminant source with regional-scale groundwater flow and transport subject to natural evolution of the system. The approach set out is novel, as it incorporates both different scales of model and different levels of detail, combining discrete fracture network and equivalent continuous porous medium representations of fractured bedrock.

  14. MMX-I: data-processing software for multimodal X-ray imaging and tomography

    PubMed Central

    Bergamaschi, Antoine; Medjoubi, Kadda; Messaoudi, Cédric; Marco, Sergio; Somogyi, Andrea

    2016-01-01

    A new multi-platform freeware has been developed for the processing and reconstruction of scanning multi-technique X-ray imaging and tomography datasets. The software platform aims to treat different scanning imaging techniques: X-ray fluorescence, phase, absorption and dark field and any of their combinations, thus providing an easy-to-use data processing tool for the X-ray imaging user community. A dedicated data input stream copes with the input and management of large datasets (several hundred GB) collected during a typical multi-technique fast scan at the Nanoscopium beamline and even on a standard PC. To the authors’ knowledge, this is the first software tool that aims at treating all of the modalities of scanning multi-technique imaging and tomography experiments. PMID:27140159

  15. Multi-technique approach for deriving a VLBI signal extra-path variation model induced by gravity: the example of Medicina

    NASA Astrophysics Data System (ADS)

    Sarti, P.; Abbondanza, C.; Negusini, M.; Vittuari, L.

    2009-09-01

    During measurement sessions, gravity might induce significant deformations in large VLBI telescopes. If neglected or mismodelled, these deformations might bias the phase of the incoming signal, thus corrupting the estimate of some crucial geodetic parameters (e.g. the height component of the VLBI reference point). This paper describes a multi-technique approach implemented for measuring and quantifying the gravity-dependent deformations experienced by the 32-m diameter VLBI antenna of Medicina (Northern Italy). The approach integrates three different methods: Terrestrial Triangulations and Trilaterations (TTT), Laser Scanning (LS) and a Finite Element Model (FEM) of the antenna. The combination of the observations performed with these methods allows an elevation-dependent model of the signal path variation to be accurately defined, which for the Medicina telescope appears to be non-negligible: in the range [0, 90] deg the signal path increases monotonically by almost 2 cm. The effect of such a variation has not yet been introduced in actual VLBI analysis; this is the task we plan to pursue in the near future.

  16. A combined approach of AHP and TOPSIS methods applied in the field of integrated software systems

    NASA Astrophysics Data System (ADS)

    Berdie, A. D.; Osaci, M.; Muscalagiu, I.; Barz, C.

    2017-05-01

    Adopting the most appropriate technology for developing applications on an integrated software system for enterprises may result in great savings in both cost and hours of work. This paper proposes a research study for the determination of a hierarchy between three SAP (System Applications and Products in Data Processing) technologies. The technologies Web Dynpro (WD), Floorplan Manager (FPM) and CRM WebClient UI (CRM WCUI) are evaluated against multiple criteria, in terms of the performance obtained when implementing the same web business application with each. To establish the hierarchy, a multi-criteria analysis model that combines the AHP (Analytic Hierarchy Process) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods was proposed. This model was built with the help of the SuperDecision software, which is based on the AHP method and determines the weights for the selected sets of criteria. The TOPSIS method was used to obtain the final ranking and the technology hierarchy.
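
    The TOPSIS ranking step can be sketched as follows: normalize the decision matrix, apply criteria weights (e.g. AHP-derived), and rank alternatives by relative closeness to the ideal solution. The alternatives, criteria values and weights below are illustrative, not the paper's measured performances.

```python
# Hedged sketch of TOPSIS for three alternatives and three benefit
# criteria; values and weights are illustrative placeholders.
import numpy as np

M = np.array([[7.0, 9.0, 6.0],     # e.g. WD
              [8.0, 7.0, 8.0],     # e.g. FPM
              [9.0, 6.0, 7.0]])    # e.g. CRM WCUI
w = np.array([0.5, 0.3, 0.2])      # criteria weights (e.g. from AHP)

V = w * M / np.linalg.norm(M, axis=0)          # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)     # ideal / anti-ideal points
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)
print("closeness scores:", np.round(closeness, 3))
print("ranking (best first):", np.argsort(-closeness))
```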

  17. VLBI Analysis with the Multi-Technique Software GEOSAT

    NASA Technical Reports Server (NTRS)

    Kierulf, Halfdan Pascal; Andersen, Per-Helge; Boeckmann, Sarah; Kristiansen, Oddgeir

    2010-01-01

    GEOSAT is a multi-technique geodetic analysis software developed at Forsvarets Forsknings Institutt (Norwegian defense research establishment). The Norwegian Mapping Authority has now installed the software and has, together with Forsvarets Forsknings Institutt, adapted the software to deliver datum-free normal equation systems in SINEX format. The goal is to be accepted as an IVS Associate Analysis Center and to provide contributions to the IVS EOP combination on a routine basis. GEOSAT is based on an upper diagonal factorized Kalman filter which allows estimation of time variable parameters like the troposphere and clocks as stochastic parameters. The tropospheric delays in various directions are mapped to tropospheric zenith delay using ray-tracing. Meteorological data from ECMWF with a resolution of six hours is used to perform the ray-tracing which depends both on elevation and azimuth. Other models are following the IERS and IVS conventions. The Norwegian Mapping Authority has submitted test SINEX files produced with GEOSAT to IVS. The results have been compared with the existing IVS combined products. In this paper the outcome of these comparisons is presented.

  18. Development of high-resolution multi-scale modelling system for simulation of coastal-fluvial urban flooding

    NASA Astrophysics Data System (ADS)

    Comer, Joanne; Indiana Olbert, Agnieszka; Nash, Stephen; Hartnett, Michael

    2017-02-01

    Urban developments in coastal zones are often exposed to natural hazards such as flooding. In this research, a state-of-the-art, multi-scale nested flood (MSN_Flood) model is applied to simulate complex coastal-fluvial urban flooding due to combined effects of tides, surges and river discharges. Cork City, on Ireland's southwest coast, is the case study. The flood modelling system comprises a cascade of four dynamically linked models that resolve the hydrodynamics of Cork Harbour and/or its sub-region at four scales: 90, 30, 6 and 2 m. Results demonstrate that the internalization of the nested boundary through the use of ghost cells combined with a tailored adaptive interpolation technique creates a highly dynamic moving boundary that permits flooding and drying of the nested boundary. This novel feature of MSN_Flood provides a high degree of choice regarding the location of the boundaries to the nested domain and therefore flexibility in model application. The nested MSN_Flood model through dynamic downscaling facilitates significant improvements in accuracy of model output without incurring the computational expense of high spatial resolution over the entire model domain. The urban flood model provides full characteristics of water levels and flow regimes necessary for flood hazard identification and flood risk assessment.

  19. Next-generation seismic experiments: wide-angle, multi-azimuth, three-dimensional, full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Morgan, Joanna; Warner, Michael; Bell, Rebecca; Ashley, Jack; Barnes, Danielle; Little, Rachel; Roele, Katarina; Jones, Charles

    2013-12-01

    Full-waveform inversion (FWI) is an advanced seismic imaging technique that has recently become computationally feasible in three dimensions, and that is being widely adopted and applied by the oil and gas industry. Here we explore the potential for 3-D FWI, when combined with appropriate marine seismic acquisition, to recover high-resolution high-fidelity P-wave velocity models for subsedimentary targets within the crystalline crust and uppermost mantle. We demonstrate that FWI is able to recover detailed 3-D structural information within a radially faulted dome using a field data set acquired with a standard 3-D petroleum-industry marine acquisition system. Acquiring low-frequency seismic data is important for successful FWI; we show that current acquisition techniques can routinely acquire field data from airguns at frequencies as low as 2 Hz, and that 1 Hz acquisition is likely to be achievable using ocean-bottom hydrophones in deep water. Using existing geological and geophysical models, we construct P-wave velocity models over three potential subsedimentary targets: the Soufrière Hills Volcano on Montserrat and its associated crustal magmatic system, the crust and uppermost mantle across the continent-ocean transition beneath the Campos Basin offshore Brazil, and the oceanic crust and uppermost mantle beneath the East Pacific Rise mid-ocean ridge. We use these models to generate realistic multi-azimuth 3-D synthetic seismic data, and attempt to invert these data to recover the original models. We explore resolution and accuracy, sensitivity to noise and acquisition geometry, ability to invert elastic data using acoustic inversion codes, and the trade-off between low frequencies and starting velocity model accuracy. We show that FWI applied to multi-azimuth, refracted, wide-angle, low-frequency data can resolve features in the deep crust and uppermost mantle on scales that are significantly better than can be achieved by any other geophysical technique, and that these results can be obtained using relatively small numbers (60-90) of ocean-bottom receivers combined with large numbers of airgun shots. We demonstrate that multi-azimuth 3-D FWI is robust in the presence of noise, that acoustic FWI can invert elastic data successfully, and that the typical errors to be expected in starting models derived using traveltimes will not be problematic for FWI given appropriately designed acquisition. FWI is a rapidly maturing technology; its transfer from the petroleum sector to tackle a much broader range of targets now appears to be entirely achievable.

  20. Language Model Combination and Adaptation Using Weighted Finite State Transducers

    NASA Technical Reports Server (NTRS)

    Liu, X.; Gales, M. J. F.; Hieronymus, J. L.; Woodland, P. C.

    2010-01-01

    In speech recognition systems, language models (LMs) are often constructed by training and combining multiple n-gram models. They can be used either to represent different genres or tasks found in diverse text sources, or to capture stochastic properties of different linguistic symbol sequences, for example, syllables and words. Unsupervised LM adaptation may also be used to further improve robustness to varying styles or tasks. When using these techniques, extensive software changes are often required. In this paper an alternative and more general approach based on weighted finite state transducers (WFSTs) is investigated for LM combination and adaptation. As it is entirely based on well-defined WFST operations, minimal change to decoding tools is needed. A wide range of LM combination configurations can be flexibly supported. An efficient on-the-fly WFST decoding algorithm is also proposed. Significant error rate gains of 7.3% relative were obtained on a state-of-the-art broadcast audio recognition task using a history-dependently adapted multi-level LM modelling both syllable and word sequences.
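
    The combination such WFST operations encode can be illustrated by simple linear interpolation of component LM probabilities. A minimal sketch with toy unigram tables (probabilities and interpolation weights are invented; the paper's components are syllable- and word-level n-grams realized as transducers):

      # Two toy unigram LMs standing in for genre-specific n-gram components.
      lm_news = {"the": 0.06, "market": 0.01, "said": 0.02}
      lm_chat = {"the": 0.04, "lol": 0.03, "said": 0.005}

      def combined_prob(word, lms, weights):
          # Linear interpolation: P(w) = sum_i w_i * P_i(w)
          return sum(w * lm.get(word, 1e-8) for lm, w in zip(lms, weights))

      print(combined_prob("said", [lm_news, lm_chat], [0.7, 0.3]))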

  1. FEA of the clinching process of short fiber reinforced thermoplastic with an aluminum sheet using LS-DYNA

    NASA Astrophysics Data System (ADS)

    Behrens, B.-A.; Bouguecha, A.; Vucetic, M.; Grbic, N.

    2016-10-01

    A structural concept in multi-material design is used in the automotive industry with the aim of achieving significant weight reductions of conventional car bodies. In this respect, the use of aluminum and short fiber reinforced plastics represents an interesting material combination. Wide acceptance of such a material combination requires a suitable joining technique. Among different joining techniques, clinching represents one of the most appealing alternatives for automotive applications. This contribution deals with the FE simulation of the clinching process of two representative materials, PA6GF30 and EN AW 5754, using the FE software LS-DYNA. With regard to the material modelling of the aluminum sheet, an isotropic material model based on von Mises plasticity implemented in LS-DYNA was chosen. The same material model is used for modelling the short fiber reinforced thermoplastic. Additionally, a semi-analytical model for polymers (SAMP-1), also available in LS-DYNA, was applied. Finally, the FEA of the clinching process is carried out and a comparison of the simulation results is presented.

  2. The analysis of forest policy using Landsat multi-spectral scanner data and geographic information systems

    NASA Technical Reports Server (NTRS)

    Peterson, D. L.; Brass, J. A.; Norman, S. D.; Tosta-Miller, N.

    1984-01-01

    The role of Landsat multi-spectral scanner (MSS) data for forest policy analysis in the state of California has been investigated. The combined requirements for physical, socio-economic, and institutional data in policy analysis were studied to explain potential data needs. A statewide MSS data set and general land cover classification was created from which county-wide data sets could be extracted for detailed analyses. The potential to combine point sample data with MSS data was examined as a means to improve specificity in estimations. MSS data were incorporated into geographic information systems to demonstrate modeling techniques using abiotic, biotic, and socio-economic data layers. A review of system configurations to help the California Department of Forestry (CDF) acquire the demonstrated capability resulted in a sequence of options for implementation.

  3. [Multi-mathematical modelings for compatibility optimization of Jiangzhi granules].

    PubMed

    Yang, Ming; Zhang, Li; Ge, Yingli; Lu, Yanliu; Ji, Guang

    2011-12-01

    To investigate a method of multi-activity-index evaluation and combination optimization of multiple components for Chinese herbal formulas. Following a scheme of uniform experimental design, efficacy experiments, multi-index evaluation, least absolute shrinkage and selection operator (LASSO) modeling, an evolutionary optimization algorithm, and validation experiments, we optimized the combination of Jiangzhi granules based on the activity indexes of blood serum ALT, AST, TG, TC, HDL and LDL, the TG level of liver tissue, and the liver-to-body weight ratio. The analytic hierarchy process (AHP) combined with criteria importance through intercriteria correlation (CRITIC) for multi-activity-index evaluation was more reasonable and objective, as it reflected both the rank order of the activity indexes and the objective sample data. LASSO modeling could accurately reflect the relationship between different combinations of Jiangzhi granules and the comprehensive activity indexes. The optimized combination of Jiangzhi granules showed better comprehensive activity index values than the original formula in the validation experiment. AHP combined with CRITIC can be used for multi-activity-index evaluation, and the LASSO algorithm is suitable for combination optimization of Chinese herbal formulas.
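
    As an illustration of the LASSO step, a sparse linear model can map component doses onto a comprehensive activity index; the data below are synthetic stand-ins for the uniform-design experiments, not the study's measurements:

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      X = rng.random((20, 6))                      # 20 uniform-design runs, 6 components
      true_w = np.array([0.8, 0.0, 0.3, 0.0, 0.5, 0.1])
      y = X @ true_w + 0.05 * rng.normal(size=20)  # comprehensive activity index

      model = Lasso(alpha=0.05).fit(X, y)
      print(model.coef_)    # inactive components are shrunk toward zero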

  4. Multi-scale modeling of spin transport in organic semiconductors

    NASA Astrophysics Data System (ADS)

    Hemmatiyan, Shayan; Souza, Amaury; Kordt, Pascal; McNellis, Erik; Andrienko, Denis; Sinova, Jairo

    In this work, we present our theoretical framework to simulate spin and charge transport simultaneously in amorphous organic semiconductors. By combining several techniques, e.g. molecular dynamics, density functional theory and kinetic Monte Carlo, we are able to study spin transport in the presence of anisotropy, thermal effects, and magnetic and electric field effects in realistic morphologies of amorphous organic systems. We apply our multi-scale approach to investigate spin transport in amorphous Alq3 (tris(8-hydroxyquinolinato)aluminum) and address the underlying spin relaxation mechanism in this system as a function of temperature, bias voltage, magnetic field and sample thickness.
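
    The kinetic Monte Carlo ingredient can be sketched generically: given hopping rates for the candidate transfer events, one event is drawn in proportion to its rate and the clock advances by an exponential waiting time. The rates below are illustrative placeholders, not outputs of any DFT parameterization:

      import numpy as np

      rng = np.random.default_rng(0)

      def kmc_step(rates, t):
          # One Gillespie-style step: choose an event and advance the clock.
          total = rates.sum()
          event = rng.choice(len(rates), p=rates / total)
          return event, t + rng.exponential(1.0 / total)

      rates = np.array([1e9, 5e8, 2e7, 1e6])   # placeholder hop rates, 1/s
      event, t = kmc_step(rates, 0.0)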

  5. Adaptive swarm cluster-based dynamic multi-objective synthetic minority oversampling technique algorithm for tackling binary imbalanced datasets in biomedical data classification.

    PubMed

    Li, Jinyan; Fong, Simon; Sung, Yunsick; Cho, Kyungeun; Wong, Raymond; Wong, Kelvin K L

    2016-01-01

    An imbalanced dataset is defined as a training dataset that has imbalanced proportions of data in both interesting and uninteresting classes. Often in biomedical applications, samples from the class of interest are rare in a population, such as medical anomalies, positive clinical tests, and particular diseases. Because the target samples in the primitive dataset are small in number, the induction of a classification model over such training data leads to poor prediction performance due to insufficient training from the minority class. In this paper, we use a novel class-balancing method named adaptive swarm cluster-based dynamic multi-objective synthetic minority oversampling technique (ASCB_DmSMOTE) to solve this imbalanced dataset problem, which is common in biomedical applications. The proposed method combines under-sampling and over-sampling into a swarm optimisation algorithm. It adaptively selects suitable parameters for the rebalancing algorithm to find the best solution. Compared with the other versions of the SMOTE algorithm, significant improvements, which include higher accuracy and credibility, are observed with ASCB_DmSMOTE. Our proposed method tactfully combines two rebalancing techniques. It reasonably re-allocates the majority class and dynamically optimises the two parameters of SMOTE to synthesise a reasonable scale of minority class for each clustered sub-imbalanced dataset. The proposed method ultimately outperforms other conventional methods and attains higher credibility with even greater accuracy of the classification model.
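
    The SMOTE core that the method builds on can be sketched in a few lines: each synthetic minority sample is a random point on the segment between a minority sample and one of its k nearest minority neighbours. ASCB_DmSMOTE additionally clusters the data and lets the swarm optimiser pick the two SMOTE parameters; the sketch below uses fixed, assumed values:

      import numpy as np

      rng = np.random.default_rng(1)

      def smote_samples(minority, n_new, k=5):
          # Interpolate toward a random one of the k nearest minority neighbours.
          out = []
          for _ in range(n_new):
              i = rng.integers(len(minority))
              d = np.linalg.norm(minority - minority[i], axis=1)
              nbrs = np.argsort(d)[1:k + 1]        # skip index 0 (the sample itself)
              j = rng.choice(nbrs)
              gap = rng.random()                   # random point on the segment
              out.append(minority[i] + gap * (minority[j] - minority[i]))
          return np.array(out)

      minority = rng.normal(size=(24, 8))          # e.g. 24 positive cases, 8 features
      synthetic = smote_samples(minority, n_new=50)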

  6. General quantitative genetic methods for comparative biology: phylogenies, taxonomies and multi-trait models for continuous and categorical characters.

    PubMed

    Hadfield, J D; Nakagawa, S

    2010-03-01

    Although many of the statistical techniques used in comparative biology were originally developed in quantitative genetics, subsequent development of comparative techniques has progressed in relative isolation. Consequently, many of the new and planned developments in comparative analysis already have well-tested solutions in quantitative genetics. In this paper, we take three recent publications that develop phylogenetic meta-analysis, either implicitly or explicitly, and show how they can be considered as quantitative genetic models. We highlight some of the difficulties with the proposed solutions, and demonstrate that standard quantitative genetic theory and software offer solutions. We also show how results from Bayesian quantitative genetics can be used to create efficient Markov chain Monte Carlo algorithms for phylogenetic mixed models, thereby extending their generality to non-Gaussian data. Of particular utility is the development of multinomial models for analysing the evolution of discrete traits, and the development of multi-trait models in which traits can follow different distributions. Meta-analyses often include a nonrandom collection of species for which the full phylogenetic tree has only been partly resolved. Using missing data theory, we show how the presented models can be used to correct for nonrandom sampling and show how taxonomies and phylogenies can be combined to give a flexible framework with which to model dependence.

  7. Water Quality Variable Estimation using Partial Least Squares Regression and Multi-Scale Remote Sensing.

    NASA Astrophysics Data System (ADS)

    Peterson, K. T.; Wulamu, A.

    2017-12-01

    Water, essential to all living organisms, is one of the Earth's most precious resources. Remote sensing offers an ideal approach to monitoring water quality compared with traditional in-situ techniques that are highly time- and resource-consuming. Utilizing a multi-scale approach, data from handheld spectroscopy, UAS-based hyperspectral imaging, and satellite multispectral images were collected in coordination with in-situ water quality samples for two midwestern watersheds. The remote sensing data were modeled and correlated to the in-situ water quality variables, including chlorophyll content (Chl), turbidity, and total dissolved solids (TDS), using Normalized Difference Spectral Indices (NDSI) and Partial Least Squares Regression (PLSR). The results of the study supported the original hypothesis that correlating water quality variables with remotely sensed data benefits greatly from the use of more complex modeling and regression techniques such as PLSR. The final results generated from the PLSR analysis yielded much higher R2 values for all variables when compared to NDSI. The combination of NDSI and PLSR analysis also identified key wavelengths that aligned with previous studies' findings. This research displays the advantages of, and future for, complex modeling and machine learning techniques to improve water quality variable estimation from spectral data.
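
    The contrast between the two regression approaches can be sketched as follows; the reflectance spectra, band choices and chlorophyll values are synthetic placeholders:

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(2)
      refl = rng.random((40, 200))        # 40 samples x 200 reflectance bands
      chl = 30.0 * rng.random(40)         # in-situ chlorophyll (placeholder)

      # A single band-pair NDSI, as screened exhaustively in NDSI analysis
      ndsi = (refl[:, 50] - refl[:, 120]) / (refl[:, 50] + refl[:, 120])
      print(np.corrcoef(ndsi, chl)[0, 1])

      # PLSR regresses on the full spectrum at once
      pls = PLSRegression(n_components=5).fit(refl, chl)
      print(pls.score(refl, chl))         # R^2 on the training data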

  8. Masticatory biomechanics in the rabbit: a multi-body dynamics analysis.

    PubMed

    Watson, Peter J; Gröning, Flora; Curtis, Neil; Fitton, Laura C; Herrel, Anthony; McCormack, Steven W; Fagan, Michael J

    2014-10-06

    Multi-body dynamics is a powerful engineering tool which is becoming increasingly popular for the simulation and analysis of skull biomechanics. This paper presents the first application of multi-body dynamics to analyse the biomechanics of the rabbit skull. A model has been constructed through the combination of manual dissection and three-dimensional imaging techniques (magnetic resonance imaging and micro-computed tomography). Individual muscles are represented with multiple layers, thus more accurately modelling muscle fibres with complex lines of action. Model validity was sought through comparing experimentally measured maximum incisor bite forces with those predicted by the model. Simulations of molar biting highlighted the ability of the masticatory system to alter recruitment of two muscle groups, in order to generate shearing or crushing movements. Molar shearing is capable of processing a food bolus in all three orthogonal directions, whereas molar crushing and incisor biting are predominately directed vertically. Simulations also show that the masticatory system is adapted to process foods through several cycles with low muscle activations, presumably in order to prevent rapidly fatiguing fast fibres during repeated chewing cycles. Our study demonstrates the usefulness of a validated multi-body dynamics model for investigating feeding biomechanics in the rabbit, and shows the potential for complementing and eventually reducing in vivo experiments.

  10. An intuitionistic fuzzy multi-objective non-linear programming model for sustainable irrigation water allocation under the combination of dry and wet conditions

    NASA Astrophysics Data System (ADS)

    Li, Mo; Fu, Qiang; Singh, Vijay P.; Ma, Mingwei; Liu, Xiao

    2017-12-01

    Water scarcity causes conflicts among natural resources, society and economy and reinforces the need for optimal allocation of irrigation water resources in a sustainable way. Uncertainties caused by natural conditions and human activities make optimal allocation more complex. An intuitionistic fuzzy multi-objective non-linear programming (IFMONLP) model for irrigation water allocation under the combination of dry and wet conditions is developed to help decision makers mitigate water scarcity. The model is capable of quantitatively solving multiple problems including crop yield increase, blue water saving, and water supply cost reduction to obtain a balanced water allocation scheme using a multi-objective non-linear programming technique. Moreover, it can deal with uncertainty as well as hesitation based on the introduction of intuitionistic fuzzy numbers. Consideration of the combination of dry and wet conditions for water availability and precipitation makes it possible to gain insights into the various irrigation water allocations, and joint probabilities based on copula functions provide decision makers with an average standard for irrigation. A case study on optimally allocating both surface water and groundwater to different growth periods of rice in different subareas of the Heping irrigation area, Qing'an County, northeast China, shows the potential and applicability of the developed model. Results show that the crop yield increase target, especially in the tillering and elongation stages, is a prevailing concern when more water is available, and trading schemes can mitigate water supply cost and save water with an increased grain output. Results also reveal that the water allocation schemes are sensitive to the variation of water availability and precipitation with uncertain characteristics. The IFMONLP model is applicable for most irrigation areas with limited water supplies to determine irrigation water strategies under a fuzzy environment.

  11. Qualitative and quantitative comparison of geostatistical techniques of porosity prediction from the seismic and logging data: a case study from the Blackfoot Field, Alberta, Canada

    NASA Astrophysics Data System (ADS)

    Maurya, S. P.; Singh, K. H.; Singh, N. P.

    2018-05-01

    In the present study, three recently developed geostatistical methods (single-attribute analysis, multi-attribute analysis and the probabilistic neural network algorithm) have been used to predict porosity in the inter-well region of the Blackfoot field, an oil field in Alberta, Canada. These techniques make use of seismic attributes generated by model-based inversion and colored inversion techniques. The principal objective of the study is to find a suitable combination of seismic inversion and geostatistical techniques to predict porosity and to identify prospective zones in a 3D seismic volume. The porosity estimated from these geostatistical approaches is corroborated with the well log porosity. The results suggest that all three implemented geostatistical methods are efficient and reliable for predicting porosity, but the multi-attribute and probabilistic neural network analyses provide more accurate and higher-resolution porosity sections. A low-impedance (6000-8000 m/s g/cc) and high-porosity (> 15%) zone is interpreted from the inverted impedance and porosity sections, respectively, in the 1060-1075 ms time interval and is characterized as a reservoir. The qualitative and quantitative results demonstrate that, of all the employed geostatistical methods, the probabilistic neural network along with model-based inversion is the most efficient method for predicting porosity in the inter-well region.
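
    In its regression form, the probabilistic neural network used for attribute-to-porosity mapping amounts to a Gaussian-kernel weighted average over training wells (a general regression network). A minimal sketch under that assumption, with invented attribute data:

      import numpy as np

      def grnn_predict(attrs_train, por_train, attrs_new, sigma=0.5):
          # Gaussian-kernel weighted average of training-well porosities.
          d2 = ((attrs_new[:, None, :] - attrs_train[None, :, :]) ** 2).sum(-1)
          w = np.exp(-d2 / (2.0 * sigma ** 2))
          return (w @ por_train) / w.sum(axis=1)

      rng = np.random.default_rng(7)
      wells = rng.random((12, 3))          # 3 seismic attributes at 12 wells
      por = 0.05 + 0.25 * rng.random(12)   # porosity logs at those wells
      grid = rng.random((5, 3))            # attributes at inter-well locations
      print(grnn_predict(wells, por, grid))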

  12. Rendezvous with connectivity preservation for multi-robot systems with an unknown leader

    NASA Astrophysics Data System (ADS)

    Dong, Yi

    2018-02-01

    This paper studies the leader-following rendezvous problem with connectivity preservation for multi-agent systems composed of uncertain multi-robot systems subject to external disturbances and an unknown leader, both of which are generated by a so-called exosystem with parametric uncertainty. By combining internal model design, the potential function technique and adaptive control, two distributed control strategies are proposed to maintain the connectivity of the communication network, to achieve asymptotic tracking by all followers of the output of the unknown leader system, and to reject unknown external disturbances. It is also worth mentioning that the uncertain parameters in the multi-robot systems and exosystem are further allowed to belong to unknown and unbounded sets when applying the second, fully distributed control law containing a dynamic gain inspired by high-gain adaptive control and self-tuning regulators.
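
    The potential-function ingredient can be sketched for a single communication edge: an artificial potential that grows without bound as the inter-robot distance approaches the sensing range R, so its negative gradient keeps the edge intact while a tracking term pulls the follower toward the leader. This is an illustrative controller under assumed dynamics, not the paper's internal-model-based design:

      import numpy as np

      def control_input(x_i, x_j, x_leader, R=10.0, eps=1e-3):
          # Tracking term plus the gradient of a barrier-like edge potential
          # that blows up as |x_i - x_j| approaches the sensing range R.
          d = np.linalg.norm(x_i - x_j)
          grad_pot = 2.0 * (x_i - x_j) * R**2 / max((R**2 - d**2) ** 2, eps)
          return -(x_i - x_leader) - grad_pot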

  13. Combining biophysical methods for the analysis of protein complex stoichiometry and affinity in SEDPHAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Huaying, E-mail: zhaoh3@mail.nih.gov; Schuck, Peter, E-mail: zhaoh3@mail.nih.gov

    2015-01-01

    Global multi-method analysis for protein interactions (GMMA) can increase the precision and complexity of binding studies for the determination of the stoichiometry, affinity and cooperativity of multi-site interactions. The principles and recent developments of biophysical solution methods implemented for GMMA in the software SEDPHAT are reviewed, their complementarity in GMMA is described and a new GMMA simulation tool set in SEDPHAT is presented. Reversible macromolecular interactions are ubiquitous in signal transduction pathways, often forming dynamic multi-protein complexes with three or more components. Multivalent binding and cooperativity in these complexes are often key motifs of their biological mechanisms. Traditional solution biophysical techniques for characterizing the binding and cooperativity are very limited in the number of states that can be resolved. A global multi-method analysis (GMMA) approach has recently been introduced that can leverage the strengths and the different observables of different techniques to improve the accuracy of the resulting binding parameters and to facilitate the study of multi-component systems and multi-site interactions. Here, GMMA is described in the software SEDPHAT for the analysis of data from isothermal titration calorimetry, surface plasmon resonance or other biosensing, analytical ultracentrifugation, fluorescence anisotropy and various other spectroscopic and thermodynamic techniques. The basic principles of these techniques are reviewed and recent advances in view of their particular strengths in the context of GMMA are described. Furthermore, a new feature in SEDPHAT is introduced for the simulation of multi-method data. In combination with specific statistical tools for GMMA in SEDPHAT, simulations can be a valuable step in the experimental design.

  14. Novel Plasmonic and Hyperbolic Optical Materials for Control of Quantum Nanoemitters

    DTIC Science & Technology

    2016-12-08

    This project combined studies of plasmonic material properties, metal ion implantation techniques, and multi-physics modeling to produce hyperbolic quantum nanoemitters.

  15. Multi-technique combination of space geodesy observations

    NASA Astrophysics Data System (ADS)

    Zoulida, Myriam; Pollet, Arnaud; Coulot, David; Biancale, Richard; Rebischung, Paul; Collilieux, Xavier

    2014-05-01

    Over the last few years, combination at the observation level (COL) of the different space geodesy techniques has been thoroughly studied. Various studies have shown that this type of combination can take advantage of common parameters. Some of these parameters, such as Zenithal Tropospheric Delays (ZTD), are available at co-location sites, where more than one technique is present. Local ties (LTs) are provided for these sites; they act as inter-technique links and allow the resulting terrestrial reference frames (TRFs) to be homogeneous. However, the use of LTs can be problematic in weekly calculations, where their geographical distribution can be poor, and differences are often observed between available LTs and space geodesy results. Similar co-locations can be found on multi-technique satellites, where more than one technique's receiver is featured. A great advantage of these space ties (STs) is the densification of co-locations, as the orbiting satellite acts as a moving station. The challenge of using space ties lies in the accurate knowledge or estimation of their values, as officially provided values sometimes do not reach the required level of precision for the solution, due to mismodeling of receivers or acting forces and other factors. Thus, the necessity of an estimation and/or weighting strategy for the STs is introduced. To date, on subsets of available data, using STs has shown promising results regarding TRF determination through the estimation of station positions, the orbit determination of the GPS constellation, and the GPS antenna Phase Center Offsets and Variations (PCOs and PCVs). In this study, results from a multi-technique combination including the Jason-2 satellite and its effect on GNSS orbit determination during the CONT2011 period are presented, as well as some preliminary results on station position determination. Comparing the resulting orbits with official solutions provides an assessment of the effect on the orbit calculation of introducing orbiting stations' observations. Moreover, simulated solutions are presented, showing the effect of adding multi-technique observations on the estimation of ST parameter errors, such as Laser Retroreflector Offsets (LROs) or GNSS antenna Phase Center Offsets (PCOs).

  16. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
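
    The emulation strategy (dimension reduction of the output fields plus a statistical emulator per retained mode) can be sketched with off-the-shelf components; PCA and Gaussian-process regressors stand in for the paper's specific choices, and the data are random placeholders:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.gaussian_process import GaussianProcessRegressor

      rng = np.random.default_rng(3)
      X = rng.random((60, 4))                 # boundary-forcing parameters
      fields = rng.random((60, 2048))         # flattened spatio-temporal outputs

      pca = PCA(n_components=5).fit(fields)   # dimensionality reduction of output
      scores = pca.transform(fields)

      # One independent GP emulator per retained principal component
      gps = [GaussianProcessRegressor().fit(X, scores[:, k]) for k in range(5)]

      def emulate(x_new):
          z = np.column_stack([gp.predict(x_new) for gp in gps])
          return pca.inverse_transform(z)     # back to the full field

      fields_new = emulate(rng.random((3, 4)))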

  17. Wearable-Sensor-Based Classification Models of Faller Status in Older Adults.

    PubMed

    Howcroft, Jennifer; Lemaire, Edward D; Kofman, Jonathan

    2016-01-01

    Wearable sensors have potential for quantitative, gait-based, point-of-care fall risk assessment that can be easily and quickly implemented in clinical-care and older-adult living environments. This investigation generated models for wearable-sensor-based fall-risk classification in older adults and identified the optimal sensor type, location, combination, and modelling method, for walking with and without a cognitive load task. A convenience sample of 100 older individuals (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6-month retrospective fall occurrence) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Participants also completed the Activities-specific Balance Confidence scale, the Community Health Activities Model Program for Seniors questionnaire, the six-minute walk test, and ranked their fear of falling. Fall risk classification models were assessed for all sensor combinations and three model types: multi-layer perceptron neural network, naïve Bayesian, and support vector machine. The best performing model was a multi-layer perceptron neural network with input parameters from pressure-sensing insoles and head, pelvis, and left shank accelerometers (accuracy = 84%, F1 score = 0.600, MCC score = 0.521). Head sensor-based models had the best performance of the single-sensor models for single-task gait assessment. Single-task gait assessment models outperformed models based on dual-task walking or clinical assessment data. Support vector machines and neural networks were the best modelling techniques for fall risk classification. Fall risk classification models developed for point-of-care environments should be developed using support vector machines and neural networks, with a multi-sensor single-task gait assessment.
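
    A minimal sketch of the winning model class, a multi-layer perceptron classifier over concatenated insole and accelerometer features; the feature matrix, labels and layer size below are assumptions, not the study's configuration:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(4)
      X = rng.random((100, 30))      # gait features: insoles + accelerometers
      y = rng.integers(0, 2, 100)    # 1 = faller, 0 = non-faller (placeholder)

      clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X, y)
      print(clf.score(X, y))         # training accuracy of the classifier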

  18. High performance multi-spectral interrogation for surface plasmon resonance imaging sensors.

    PubMed

    Sereda, A; Moreau, J; Canva, M; Maillart, E

    2014-04-15

    Surface plasmon resonance (SPR) sensing has proven to be a valuable tool for characterizing surface interactions, especially for biomedical applications where label-free techniques are of particular interest. In order to approach the theoretical resolution limit, most SPR-based systems have turned to either angular or spectral interrogation modes, which both offer very accurate real-time measurements, but at the expense of the 2-dimensional imaging capability, thereby decreasing the data throughput. In this article, we show numerically and experimentally how to combine the multi-spectral interrogation technique with 2D imaging, while finding an optimum in terms of resolution, accuracy, acquisition speed and reduction of data dispersion with respect to the classical reflectivity interrogation mode. This multi-spectral interrogation methodology is based on a robust five-parameter fit of the spectral reflectivity curve, which enables monitoring of the reflectivity spectral shift with a resolution of the order of ten picometers using only five wavelength measurements per point. Ultimately, such a multi-spectral plasmonic imaging system allows biomolecular interaction monitoring in a linear regime independently of variations in buffer optical index, which is illustrated on a DNA-DNA model case. © 2013 Elsevier B.V. All rights reserved.
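
    The five-parameter fitting strategy can be sketched with an illustrative reflectivity model (a sloped baseline plus a resonance dip; the paper's actual parameterization may differ). With five wavelengths the fit is exactly determined, and the fitted dip position is the quantity tracked to picometer-level resolution:

      import numpy as np
      from scipy.optimize import curve_fit

      def reflectivity(lam, a, b, c, lam0, w):
          # Illustrative 5-parameter model: baseline minus a Lorentzian dip.
          return a + b * (lam - lam0) - c / (1.0 + ((lam - lam0) / w) ** 2)

      lam = np.array([630.0, 640.0, 650.0, 660.0, 670.0])  # five wavelengths
      R = reflectivity(lam, 0.8, 1e-3, 0.5, 652.0, 8.0)    # synthetic data
      popt, _ = curve_fit(reflectivity, lam, R, p0=[0.8, 0.0, 0.4, 650.0, 10.0])
      lam_res = popt[3]     # tracking this minimum gives the spectral shift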

  19. Can we use Earth Observations to improve monthly water level forecasts?

    NASA Astrophysics Data System (ADS)

    Slater, L. J.; Villarini, G.

    2017-12-01

    Dynamical-statistical hydrologic forecasting approaches benefit from different strengths in comparison with traditional hydrologic forecasting systems: they are computationally efficient, can integrate and `learn' from a broad selection of input data (e.g., General Circulation Model (GCM) forecasts, Earth Observation time series, teleconnection patterns), and can take advantage of recent progress in machine learning (e.g. multi-model blending, post-processing and ensembling techniques). Recent efforts to develop a dynamical-statistical ensemble approach for forecasting seasonal streamflow using both GCM forecasts and changing land cover have shown promising results over the U.S. Midwest. Here, we use climate forecasts from several GCMs of the North American Multi Model Ensemble (NMME) alongside 15-minute stage time series from the National River Flow Archive (NRFA) and land cover classes extracted from the European Space Agency's Climate Change Initiative 300 m annual Global Land Cover time series. With these data, we conduct systematic long-range probabilistic forecasting of monthly water levels in UK catchments over timescales ranging from one to twelve months ahead. We evaluate the improvement in model fit and model forecasting skill that comes from using land cover classes as predictors in the models. This work opens up new possibilities for combining Earth Observation time series with GCM forecasts to predict a variety of hazards from space using data science techniques.

  20. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated by MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates the optimization process considerably because the linear parameters are not among the fitted ones. Its properties are exemplified by the analysis of a kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions that are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions that are multi-linear combinations of nonlinear functions.
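
    A toy version of the separable GA-MLR scheme for a biexponential decay: the GA population carries only the two nonlinear rate constants, and for each candidate the amplitudes are recovered in closed form by linear least squares. Crossover is omitted for brevity (mutation and selection only), and all data are synthetic:

      import numpy as np

      rng = np.random.default_rng(5)
      t = np.linspace(0.0, 10.0, 200)
      y = 2.0 * np.exp(-0.7 * t) + 1.0 * np.exp(-0.1 * t)   # biexponential decay

      def chi2(taus):
          # Nonlinear rates are the GA genes; amplitudes come from the MLR step.
          A = np.exp(-np.outer(t, taus))                    # design matrix
          amps, *_ = np.linalg.lstsq(A, y, rcond=None)      # linear regression
          return ((A @ amps - y) ** 2).sum()

      # Mutate-and-select toy GA over a small population of rate pairs
      pop = rng.random((30, 2))
      for _ in range(200):
          children = np.abs(pop + 0.05 * rng.normal(size=pop.shape))
          both = np.vstack([pop, children])
          pop = both[np.argsort([chi2(p) for p in both])[:30]]
      print(pop[0])    # best (k1, k2), close to (0.7, 0.1)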

  1. Applications of multi-frequency single beam sonar fisheries analysis methods for seep quantification and characterization

    NASA Astrophysics Data System (ADS)

    Price, V.; Weber, T.; Jerram, K.; Doucet, M.

    2016-12-01

    The analysis of multi-frequency, narrow-band single-beam acoustic data for fisheries applications has long been established, with methodology focusing on characterizing targets in the water column by utilizing complex algorithms and false-color time series data to create and compare frequency response curves for dissimilar biological groups. These methods were built on concepts developed for multi-frequency analysis of satellite imagery for terrestrial analysis and have been applied to a broad range of data types and applications. Single-beam systems operating at multiple frequencies are also used for the detection and identification of seeps in water column data. Here we incorporate the same analysis and visualization techniques used for fisheries applications to attempt to characterize and quantify seeps by creating and comparing frequency response curves and applying false coloration to shallow and deep multi-channel seep data. From this information, we can establish methods to differentiate bubble size in the echogram and differentiate seep composition. These techniques are also useful in differentiating plume content from biological noise (volume reverberation) created by euphausiid layers and fish with gas-filled swim bladders. Combining the multiple frequencies using false coloring and other image analysis techniques, after applying established normalization and beam pattern correction algorithms, is a novel approach to quantitatively describing seeps. Further, this information could be paired with geological models, backscatter, and bathymetry data to assess seep distribution.

  2. Concurrent Probabilistic Simulation of High Temperature Composite Structural Response

    NASA Technical Reports Server (NTRS)

    Abdi, Frank

    1996-01-01

    A computational structural/material analysis and design tool which would meet industry's future demand for expedience and reduced cost is presented. This unique software, 'GENOA', is dedicated to parallel and high speed analysis to perform probabilistic evaluation of the high temperature composite response of aerospace systems. The development is based on detailed integration and modification of diverse fields of specialized analysis techniques and mathematical models to combine their latest innovative capabilities into a commercially viable software package. The technique is specifically designed to exploit the availability of processors to perform computationally intense probabilistic analysis assessing uncertainties in structural reliability analysis and composite micromechanics. The primary objectives which were achieved in performing the development were: (1) Utilization of the power of parallel processing and static/dynamic load balancing optimization to make the complex simulation of structure, material and processing of high temperature composites affordable; (2) Computational integration and synchronization of probabilistic mathematics, structural/material mechanics and parallel computing; (3) Implementation of an innovative multi-level domain decomposition technique to identify the inherent parallelism, increasing convergence rates through high- and low-level processor assignment; (4) Creation of the framework for a portable parallel architecture for machine-independent Multiple Instruction Multiple Data (MIMD), Single Instruction Multiple Data (SIMD), hybrid, and distributed workstation types of computers; and (5) Market evaluation. The results of the Phase-2 effort provide a good basis for continuation and warrant a Phase-3 government and industry partnership.

  3. A multi-species exchange model for fully fluctuating polymer field theory simulations.

    PubMed

    Düchs, Dominik; Delaney, Kris T; Fredrickson, Glenn H

    2014-11-07

    Field-theoretic models have been used extensively to study the phase behavior of inhomogeneous polymer melts and solutions, both in self-consistent mean-field calculations and in numerical simulations of the full theory capturing composition fluctuations. The models commonly used can be grouped into two categories, namely, species models and exchange models. Species models involve integrations of functionals that explicitly depend on fields originating both from species density operators and their conjugate chemical potential fields. In contrast, exchange models retain only linear combinations of the chemical potential fields. In the two-component case, development of exchange models has been instrumental in enabling stable complex Langevin (CL) simulations of the full complex-valued theory. No comparable stable CL approach has yet been established for field theories of the species type. Here, we introduce an extension of the exchange model to an arbitrary number of components, namely, the multi-species exchange (MSE) model, which greatly expands the classes of soft material systems that can be accessed by the complex Langevin simulation technique. We demonstrate the stability and accuracy of the MSE-CL sampling approach using numerical simulations of triblock and tetrablock terpolymer melts, and tetrablock quaterpolymer melts. This method should enable studies of a wide range of fluctuation phenomena in multiblock/multi-species polymer blends and composites.

  4. Real-time Retrieving Atmospheric Parameters from Multi-GNSS Constellations

    NASA Astrophysics Data System (ADS)

    Li, X.; Zus, F.; Lu, C.; Dick, G.; Ge, M.; Wickert, J.; Schuh, H.

    2016-12-01

    The multi-constellation GNSS (e.g. GPS, GLONASS, Galileo, and BeiDou) bring great opportunities and challenges for real-time retrieval of atmospheric parameters for supporting numerical weather prediction (NWP) nowcasting or severe weather event monitoring. In this study, the observations from different GNSS are combined together for atmospheric parameter retrieving based on the real-time precise point positioning technique. The atmospheric parameters retrieved from multi-GNSS observations, including zenith total delay (ZTD), integrated water vapor (IWV), horizontal gradient (especially high-resolution gradient estimates) and slant total delay (STD), are carefully analyzed and evaluated by using the VLBI, radiosonde, water vapor radiometer and numerical weather model to independently validate the performance of individual GNSS and also demonstrate the benefits of multi-constellation GNSS for real-time atmospheric monitoring. Numerous results show that the multi-GNSS processing can provide real-time atmospheric products with higher accuracy, stronger reliability and better distribution, which would be beneficial for atmospheric sounding systems, especially for nowcasting of extreme weather.

  5. A probabilistic multi-criteria decision making technique for conceptual and preliminary aerospace systems design

    NASA Astrophysics Data System (ADS)

    Bandte, Oliver

    It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves, in conjunction with a criterion value range of interest, as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model, on the other hand, is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied with a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals, or POS. The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and a different ranking than the Overall Evaluation Criterion or Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) approach.
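
    POS itself is straightforward to estimate once a joint criterion distribution is available: sample it and count the fraction of samples in which every criterion falls inside its target range. The two-criterion Gaussian below, with an assumed correlation, is purely illustrative:

      import numpy as np

      rng = np.random.default_rng(6)

      # Illustrative joint distribution of two criteria (e.g. range and cost)
      mean = np.array([5500.0, 120.0])
      cov = np.array([[300.0**2, -0.4 * 300.0 * 10.0],
                      [-0.4 * 300.0 * 10.0, 10.0**2]])
      samples = rng.multivariate_normal(mean, cov, size=100_000)

      # POS = joint probability that every criterion meets its target range
      pos = np.mean((samples[:, 0] >= 5000.0) & (samples[:, 1] <= 130.0))
      print(pos)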

  6. Multi-scales region segmentation for ROI separation in digital mammograms

    NASA Astrophysics Data System (ADS)

    Zhang, Dapeng; Zhang, Di; Li, Yue; Wang, Wei

    2017-02-01

    Mammography is currently the most effective imaging modality used by radiologists for the screening of breast cancer. Segmentation is one of the key steps in the process of developing anatomical models for calculation of a safe medical dose of radiation. This paper explores the potential of the statistical region merging (SRM) segmentation technique for breast segmentation in digital mammograms. First, the mammograms are pre-processed for region enhancement; then the enhanced images are segmented using SRM at multiple scales; finally these segmentations are combined for region of interest (ROI) separation and edge detection. The proposed algorithm uses multi-scale region segmentation to separate the breast region from the background, detect region edges and separate ROIs. The experiments are performed using a data set of mammograms from different patients, demonstrating the validity of the proposed criterion. Results show that the statistical region merging segmentation algorithm works well on the segmentation of medical images and is more accurate than other methods. The outcome shows that the technique has great potential to become a method of choice for the segmentation of mammograms.

  7. Application of zonal model on indoor air sensor network design

    NASA Astrophysics Data System (ADS)

    Chen, Y. Lisa; Wen, Jin

    2007-04-01

    Growing concerns over the safety of the indoor environment have made the use of sensors ubiquitous. Sensors that detect chemical and biological warfare agents can offer early warning of dangerous contaminants. However, current sensor system design is informed more by intuition and experience than by systematic design. To develop a sensor system design methodology, a proper indoor airflow modeling approach is needed. Various indoor airflow modeling techniques, from complicated computational fluid dynamics approaches to simplified multi-zone approaches, exist in the literature. In this study, the effects of two airflow modeling techniques, the multi-zone modeling technique and the zonal modeling technique, on indoor air protection sensor system design are discussed. Common building attack scenarios, using a typical CBW agent, are simulated. Both multi-zone and zonal models are used to predict airflows and contaminant dispersion. A genetic algorithm is then applied to optimize the sensor locations and quantity. Differences in the sensor system design resulting from the two airflow models are discussed for a typical office environment and a large hall environment.

  8. Advanced wireless mobile collaborative sensing network for tactical and strategic missions

    NASA Astrophysics Data System (ADS)

    Xu, Hao

    2017-05-01

    In this paper, an advanced wireless mobile collaborative sensing network is developed. By properly combining wireless sensor networks, emerging mobile robots and multi-antenna sensing/communication techniques, we demonstrate the superiority of the developed sensing network. Concretely, heterogeneous mobile robots, including unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs), are equipped with multi-modal sensors and wireless transceiver antennas. Through real-time collaborative formation control, the mobile robots can form the formation that provides the most accurate sensing results. Forming multiple mobile robots into formation can also constitute a multiple-input multiple-output (MIMO) communication system that provides a reliable, high-performance communication network.

  9. Progress in fast, accurate multi-scale climate simulations

    DOE PAGES

    Collins, W. D.; Johansen, H.; Evans, K. J.; ...

    2015-06-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth with these computational improvements include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allowing more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures such as many-core processors and GPUs. As a result, approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  10. Generating Pedestrian Trajectories Consistent with the Fundamental Diagram Based on Physiological and Psychological Factors

    PubMed Central

    Narang, Sahil; Best, Andrew; Curtis, Sean; Manocha, Dinesh

    2015-01-01

    Pedestrian crowds have often been modeled as many-particle systems, including in microscopic multi-agent simulators. One of the key challenges is to unearth governing principles that can model pedestrian movement, and use them to reproduce paths and behaviors that are frequently observed in human crowds. To that effect, we present a novel crowd simulation algorithm that generates pedestrian trajectories that exhibit the speed-density relationships expressed by the Fundamental Diagram. Our approach is based on biomechanical principles and psychological factors. The overall formulation results in better utilization of free space by the pedestrians and can be easily combined with well-known multi-agent simulation techniques with little computational overhead. We are able to generate human-like dense crowd behaviors in large indoor and outdoor environments and validate the results with captured real-world crowd trajectories. PMID:25875932
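
    The speed-density relationship the trajectories are validated against can be written down directly; the sketch below uses the Weidmann parameterization of the Fundamental Diagram, a common choice, though the paper may calibrate its own curve:

      import numpy as np

      def weidmann_speed(rho, v0=1.34, gamma=1.913, rho_max=5.4):
          # Weidmann fundamental diagram: walking speed as a function of density.
          return v0 * (1.0 - np.exp(-gamma * (1.0 / rho - 1.0 / rho_max)))

      rho = np.array([0.5, 1.0, 2.0, 4.0])   # pedestrians per square metre
      print(weidmann_speed(rho))             # speed decays to 0 as rho -> rho_max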

  11. Multi-Material ALE with AMR for Modeling Hot Plasmas and Cold Fragmenting Materials

    NASA Astrophysics Data System (ADS)

    Koniges, Alice; Masters, Nathan; Fisher, Aaron; Eder, David; Liu, Wangyi; Anderson, Robert; Benson, David; Bertozzi, Andrea

    2015-02-01

    We have developed a new 3D multi-physics multi-material code, ALE-AMR, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR) to connect the continuum to the microstructural regimes. The code is unique in its ability to model hot radiating plasmas and cold fragmenting solids. New numerical techniques were developed for many of the physics packages to work efficiently on a dynamically moving and adapting mesh. We use interface reconstruction based on volume fractions of the material components within mixed zones and reconstruct interfaces as needed. This interface reconstruction model is also used for void coalescence and fragmentation. A flexible strength/failure framework allows for pluggable material models, which may require material history arrays to determine the level of accumulated damage or the evolving yield stress in J2 plasticity models. For some applications laser rays are propagating through a virtual composite mesh consisting of the finest resolution representation of the modeled space. A new 2nd order accurate diffusion solver has been implemented for the thermal conduction and radiation transport packages. One application area is the modeling of laser/target effects including debris/shrapnel generation. Other application areas include warm dense matter, EUV lithography, and material wall interactions for fusion devices.

  12. Combining Direct Broadcast Polar Hyper-spectral Soundings with Geostationary Multi-spectral Imagery for Producing Low Latency Sounding Products

    NASA Astrophysics Data System (ADS)

    Smith, W.; Weisz, E.; McNabb, J. M. C.

    2017-12-01

    A technique is described which enables the combination of high vertical resolution (1 to 2-km) JPSS hyper-spectral soundings (i.e., from AIRS, CrIS, and IASI) with high horizontal (2-km) and temporal (15-min) resolution GOES multi-spectral imagery (i.e., provided by ABI) to produce low latency sounding products with the highest possible spatial and temporal resolution afforded by the instruments.

  13. [Non-contrast time-resolved magnetic resonance angiography combining high resolution multiple phase echo planar imaging based signal targeting and alternating radiofrequency contrast inherent inflow enhanced multi phase angiography combining spatial resolution echo planar imaging based signal targeting and alternating radiofrequency in intracranial arteries].

    PubMed

    Nakamura, Masanobu; Yoneyama, Masami; Tabuchi, Takashi; Takemura, Atsushi; Obara, Makoto; Sawano, Seishi

    2012-01-01

    Detailed information on anatomy and hemodynamics in cerebrovascular disorders such as AVM and moyamoya disease is mandatory for definitive diagnosis and treatment planning. The arterial spin labeling technique has come to be applied to magnetic resonance angiography (MRA) and perfusion imaging in recent years. However, those non-contrast techniques are mostly limited to single-frame images. Recently we have proposed a non-contrast time-resolved MRA technique termed contrast inherent inflow enhanced multi phase angiography combining spatial resolution echo planar imaging based signal targeting and alternating radiofrequency (CINEMA-STAR). CINEMA-STAR can depict the blood flow in the major intracranial arteries at 70 ms intervals and thus permits full observation of the vascular structure by preparing MIP images of axial acquisitions with high spatial resolution. This preliminary study demonstrates the usefulness of the CINEMA-STAR technique in evaluating the cerebral vasculature.

  14. Reasoning about real-time systems with temporal interval logic constraints on multi-state automata

    NASA Technical Reports Server (NTRS)

    Gabrielian, Armen

    1991-01-01

    Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high-level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.

  15. Nonlinear aeroservoelastic analysis of a controlled multiple-actuated-wing model with free-play

    NASA Astrophysics Data System (ADS)

    Huang, Rui; Hu, Haiyan; Zhao, Yonghui

    2013-10-01

    In this paper, the effects of structural nonlinearity due to free-play in both leading-edge and trailing-edge outboard control surfaces on the linear flutter control system are analyzed for an aeroelastic model of three-dimensional multiple-actuated-wing. The free-play nonlinearities in the control surfaces are modeled theoretically by using the fictitious mass approach. The nonlinear aeroelastic equations of the presented model can be divided into nine sub-linear modal-based aeroelastic equations according to the different combinations of deflections of the leading-edge and trailing-edge outboard control surfaces. The nonlinear aeroelastic responses can be computed based on these sub-linear aeroelastic systems. To demonstrate the effects of nonlinearity on the linear flutter control system, a single-input and single-output controller and a multi-input and multi-output controller are designed based on the unconstrained optimization techniques. The numerical results indicate that the free-play nonlinearity can lead to either limit cycle oscillations or divergent motions when the linear control system is implemented.

  16. Thermophysical properties of multi-shock compressed dense argon.

    PubMed

    Chen, Q F; Zheng, J; Gu, Y J; Chen, Y L; Cai, L C; Shen, Z J

    2014-02-21

    In contrast to the single-shock compression state, which can be obtained directly from experimental measurements, multi-shock compression states have to be calculated with the aid of theoretical models. In order to determine the multiple-shock states experimentally, a diagnostic approach combining a Doppler pins system (DPS) with a pyrometer was used to probe multiple shocks in dense argon plasmas. The plasma was generated by a shock reverberation technique. The shock was produced by a flyer plate accelerated up to ∼6.1 km/s by a two-stage light gas gun and introduced into the argon gas sample, which was pre-compressed from ambient pressure to about 20 MPa. The time-resolved optical radiation histories were recorded using a multi-wavelength channel optical transient radiance pyrometer. Simultaneously, the particle velocity profiles of the LiF window were measured with the multi-DPS. The states of the multi-shock compressed argon plasma were determined from the measured shock velocities combined with the particle velocity profiles. We performed experiments on dense argon plasmas to determine the principal Hugoniot up to 21 GPa, the re-shock pressure up to 73 GPa, and a maximum measured pressure of the fourth shock up to 158 GPa. The results are used to validate the existing self-consistent variational theory model in the partial-ionization region and to develop new theoretical models.

  17. Thermophysical properties of multi-shock compressed dense argon

    NASA Astrophysics Data System (ADS)

    Chen, Q. F.; Zheng, J.; Gu, Y. J.; Chen, Y. L.; Cai, L. C.; Shen, Z. J.

    2014-02-01

    In contrast to the single-shock compression state, which can be obtained directly from experimental measurements, multi-shock compression states have to be calculated with the aid of theoretical models. In order to determine the multiple-shock states experimentally, a diagnostic approach combining a Doppler pins system (DPS) with a pyrometer was used to probe multiple shocks in dense argon plasmas. The plasma was generated by a shock reverberation technique. The shock was produced by a flyer plate accelerated up to ˜6.1 km/s by a two-stage light gas gun and introduced into the argon gas sample, which was pre-compressed from ambient pressure to about 20 MPa. The time-resolved optical radiation histories were recorded using a multi-wavelength channel optical transient radiance pyrometer. Simultaneously, the particle velocity profiles of the LiF window were measured with the multi-DPS. The states of the multi-shock compressed argon plasma were determined from the measured shock velocities combined with the particle velocity profiles. We performed experiments on dense argon plasmas to determine the principal Hugoniot up to 21 GPa, the re-shock pressure up to 73 GPa, and a maximum measured pressure of the fourth shock up to 158 GPa. The results are used to validate the existing self-consistent variational theory model in the partial-ionization region and to develop new theoretical models.

  18. Enhanced treatment of secondary municipal wastewater effluent: comparing (biological) filtration and ozonation in view of micropollutant removal, unselective effluent toxicity, and the potential for real-time control.

    PubMed

    Chys, Michael; Demeestere, Kristof; Ingabire, Ange Sabine; Dries, Jan; Van Langenhove, Herman; Van Hulle, Stijn W H

    2017-07-01

    Ozonation and three (biological) filtration techniques (trickling filtration (TF), slow sand filtration (SSF) and biological activated carbon (BAC) filtration) have been evaluated in different combinations as tertiary treatment for municipal wastewater effluent. The removal of 18 multi-class pharmaceuticals, as model trace organic contaminants (TrOCs), has been studied. (Biological) activated carbon filtration can reduce TrOC levels significantly (>99%) but is cost-intensive for full-scale applications. Filtration techniques that depend mainly on biodegradation mechanisms (TF and SSF) are found to be inefficient for TrOC removal as stand-alone techniques. Ozonation resulted in 90% removal of the total amount of quantified TrOCs, but a post-ozonation step is needed to cope with an increased unselective toxicity. SSF following ozonation proved to be the only technique able to reduce the unselective toxicity to the same level as before ozonation. In view of process control, innovative correlation models developed for the monitoring and control of TrOC removal during ozonation are verified for their applicability during ozonation in combination with TF, SSF or BAC. Particularly for the poorly ozone-reactive TrOCs, statistically significant models were obtained that correlate TrOC removal with the reduction in UVA254 as an online-measured surrogate parameter.

  19. Multi-objective calibration and uncertainty analysis of hydrologic models; A comparative study between formal and informal methods

    NASA Astrophysics Data System (ADS)

    Shafii, M.; Tolson, B.; Matott, L. S.

    2012-04-01

    Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in higher levels of complexity being built into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementing multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, Bayesian inference built on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study explores a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality, which quantifies the parameter uncertainty using the Pareto solutions; (ii) DDS-AU, which uses the weighted sum of objective functions to derive the prediction limits; and (iii) GLUE, which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden and predictive capacity, evaluated using multiple comparative measures. The measures for comparison are calculated for both calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model called HYMOD.
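
    To make the GLUE idea above concrete, the following minimal sketch samples uniform priors, screens for behavioral parameter sets, and derives likelihood-weighted prediction limits. The one-parameter linear-reservoir model, the Nash-Sutcliffe likelihood, and the 0.5 behavioral threshold are our own illustrative assumptions, not taken from the study.

```python
import numpy as np

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the informal likelihood."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def weighted_quantile(values, weights, q):
    """Quantile of `values` under (unnormalized) `weights`."""
    idx = np.argsort(values)
    v, w = values[idx], weights[idx]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(q, cdf, v)

def glue(model, obs, bounds, n_samples=5000, threshold=0.5, seed=0):
    """Sample uniform priors, keep 'behavioral' parameter sets (NSE above
    a threshold), and return likelihood-weighted 90% prediction limits."""
    rng = np.random.default_rng(seed)
    thetas = rng.uniform(bounds[:, 0], bounds[:, 1],
                         size=(n_samples, bounds.shape[0]))
    sims = np.array([model(t) for t in thetas])
    scores = np.array([nse(s, obs) for s in sims])
    keep = scores > threshold
    w = scores[keep] - threshold          # rescaled informal likelihood
    lower = np.array([weighted_quantile(sims[keep][:, j], w, 0.05)
                      for j in range(obs.size)])
    upper = np.array([weighted_quantile(sims[keep][:, j], w, 0.95)
                      for j in range(obs.size)])
    return lower, upper, thetas[keep]

# toy one-parameter linear-reservoir model and synthetic observations
rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 2.0, size=60)

def reservoir(theta):
    k, storage, flows = theta[0], 0.0, []
    for p in rain:
        storage += p
        q = k * storage
        storage -= q
        flows.append(q)
    return np.array(flows)

obs = reservoir([0.3]) + rng.normal(0.0, 0.1, size=60)
low, high, kept = glue(reservoir, obs, np.array([[0.05, 0.95]]))
print(f"{len(kept)} behavioral sets; mean 90% band width {np.mean(high - low):.3f}")
```

    The formal MCMC alternative discussed in the abstract would replace the uniform sampling and threshold screening with a statistically specified likelihood and a sampler, at correspondingly higher computational cost.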

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romanov, Gennady; /Fermilab

    CST Particle Studio combines electromagnetic field simulation, multi-particle tracking, adequate post-processing and an advanced probabilistic emission model, which is the most important new capability in multipactor simulation. The emission model incorporates the stochastic properties of emission into the simulation and adds elastic and inelastic reflection of primary electrons from the surfaces. Simulations of multipactor in coaxial waveguides have been performed to study the effects of these innovations on the multipactor threshold and the range over which multipactor can occur. The results, compared with available previous experiments and simulations, as well as the technique of multipactor simulation with CST PS, are presented and discussed.

  1. Direct Aerosol Radiative Forcing Based on Combined A-Train Observations: Towards All-sky Estimates and Attribution to Aerosol Type

    NASA Technical Reports Server (NTRS)

    Redemann, Jens; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.; Burton, S.; Livingston, J.

    2014-01-01

    We describe a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) measurements for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Initial calculations of seasonal clear-sky aerosol radiative forcing based on our multi-sensor aerosol retrievals compare well with over-ocean and top of the atmosphere IPCC-2007 model-based results, and with more recent assessments in the "Climate Change Science Program Report: Atmospheric Aerosol Properties and Climate Impacts" (2009). We discuss some of the challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed. We also discuss a methodology for using the multi-sensor aerosol retrievals for aerosol type classification based on advanced clustering techniques. The combination of research results permits conclusions regarding the attribution of aerosol radiative forcing to aerosol type.

  2. Conceptual Model Evaluation using Advanced Parameter Estimation Techniques with Heat as a Tracer

    NASA Astrophysics Data System (ADS)

    Naranjo, R. C.; Morway, E. D.; Healy, R. W.

    2016-12-01

    Temperature measurements made at multiple depths beneath the sediment-water interface have proven useful for estimating seepage rates from surface-water channels and the corresponding subsurface flow direction. Commonly, parsimonious zonal representations of the subsurface structure are defined a priori by interpretation of temperature envelopes, slug tests or analysis of soil cores. However, combining multiple observations into a single zone may limit the inverse model solution and does not take full advantage of the information content within the measured data. Further, simulating the correct thermal gradient, flow paths, and transient behavior of solutes may be biased by inadequacies in the spatial description of subsurface hydraulic properties. The use of pilot points in PEST offers a more sophisticated approach to estimating the structure of subsurface heterogeneity. This presentation evaluates seepage estimation in a cross-sectional model of a trapezoidal canal with intermittent flow representing four typical sedimentary environments. Recent improvements in heat-as-a-tracer measurement techniques (i.e., multi-depth temperature probes), along with the use of modern calibration techniques (i.e., pilot points), provide opportunities for improved calibration of flow models and, subsequently, improved model predictions.

  3. Techniques for High Contrast Imaging in Multi-Star Systems II: Multi-Star Wavefront Control

    NASA Technical Reports Server (NTRS)

    Sirbu, D.; Thomas, S.; Belikov, R.

    2017-01-01

    Direct imaging of exoplanets represents a challenge for astronomical instrumentation due to the high contrast ratio and small angular separation between the host star and the faint planet. Multi-star systems pose additional challenges for coronagraphic instruments because of the diffraction and aberration leakage introduced by the additional stars, and as a result they are not planned to be on direct-imaging target lists. Multi-star wavefront control (MSWC) is a technique that uses a coronagraphic instrument's deformable mirror (DM) to create high-contrast regions in the focal plane in the presence of multiple stars. Our previous paper introduced the Super-Nyquist Wavefront Control (SNWC) technique, which uses a diffraction grating to enable the DM to generate high-contrast regions beyond the nominal controllable region. These two techniques can be combined to generate high-contrast regions for multi-star systems at any angular separation. As a case study, a high-contrast wavefront control (WC) simulation applying these techniques shows that the habitable region of the Alpha Centauri system can be imaged, reaching a mean contrast of 8×10⁻⁹ in 10% broadband light in one-sided dark holes spanning 1.6-5.5 λ/D.

  4. [Computer simulation by passenger wound analysis of vehicle collision].

    PubMed

    Zou, Dong-Hua; Liu, Ning-Guo; Shen, Jie; Zhang, Xiao-Yun; Jin, Xian-Long; Chen, Yi-Jiu

    2006-08-15

    To reconstruct the course of a vehicle collision and thereby provide a reference for forensic identification and the handling of traffic accidents. By analyzing the evidence left on both passengers and vehicles, a momentum-impulse technique combined with multi-body dynamics was applied to simulate the motion and injuries of the passengers as well as the track of the vehicles. The computer simulation model reconstructed the phases of the traffic collision, which coincided with details found by the forensic investigation. Computer simulation is a helpful and feasible tool for forensic identification in traffic accidents.

  5. Fabrication of a multi-layer three-dimensional scaffold with controlled porous micro-architecture for application in small intestine tissue engineering.

    PubMed

    Knight, Toyin; Basu, Joydeep; Rivera, Elias A; Spencer, Thomas; Jain, Deepak; Payne, Richard

    2013-01-01

    Various methods can be employed to fabricate scaffolds with characteristics that promote cell-to-material interaction. This report examines the use of a novel technique combining compression molding with particulate leaching to create a unique multi-layered scaffold with differential porosities and pore sizes that provides a high level of control to influence cell behavior. The cell behavioral responses were primarily characterized by bridging and penetration of two cell types (epithelial and smooth muscle cells) on the scaffold in vitro. Larger pore sizes corresponded to an increase in pore penetration and a decrease in pore bridging. In addition, smaller cells (epithelial) penetrated further into the scaffold than larger cells (smooth muscle cells). In vivo, a multi-layered scaffold was well tolerated for 75 days in a rodent model. These data show the ability of the components of multi-layered scaffolds to influence cell behavior, and demonstrate the potential for these scaffolds to promote desired tissue outcomes in vivo.

  6. Structure of N-(5-ethyl-[1,3,4]-thiadiazole-2-yl)toluenesulfonamide by combined X-ray powder diffraction, 13C solid-state NMR and molecular modelling.

    PubMed

    Hangan, Adriana; Borodi, Gheorghe; Filip, Xenia; Tripon, Carmen; Morari, Cristian; Oprean, Luminita; Filip, Claudiu

    2010-12-01

    The crystal structure solution of the title compound is determined from microcrystalline powder using a multi-technique approach that combines X-ray powder diffraction (XRPD) data analysis based on direct-space methods with information from (13)C solid-state NMR (SSNMR), and molecular modelling using the GIPAW (gauge including projector augmented-wave) method. The space group is Pbca with one molecule in the asymmetric unit. The proposed methodology proves very useful for unambiguously characterizing the supramolecular arrangement adopted by the N-(5-ethyl-[1,3,4]-thiadiazole-2-yl)toluenesulfonamide molecules in the crystal, which consists of extended double strands held together by C-H···π non-covalent interactions.

  7. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jablonowski, Christiane

    The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to model these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed to serve as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The investigations have focused on the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes were chosen: (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project demonstrate significant advances in all six research areas. The major conclusions are that statically-adaptive variable-resolution modeling is currently becoming mature in the climate sciences, and that AMR holds outstanding promise for future-generation weather and climate models on high-performance computing architectures.

  8. Combined inverse-forward artificial neural networks for fast and accurate estimation of the diffusion coefficients of cartilage based on multi-physics models.

    PubMed

    Arbabi, Vahid; Pouran, Behdad; Weinans, Harrie; Zadpoor, Amir A

    2016-09-06

    Analytical and numerical methods have been used to extract essential engineering parameters such as elastic modulus, Poisson's ratio, permeability and diffusion coefficient from experimental data in various types of biological tissues. The major limitation associated with analytical techniques is that they are often only applicable to problems with simplified assumptions. Numerical multi-physics methods, on the other hand, enable minimizing the simplified assumptions but require substantial computational expertise, which is not always available. In this paper, we propose a novel approach that combines inverse and forward artificial neural networks (ANNs) which enables fast and accurate estimation of the diffusion coefficient of cartilage without any need for computational modeling. In this approach, an inverse ANN is trained using our multi-zone biphasic-solute finite-bath computational model of diffusion in cartilage to estimate the diffusion coefficient of the various zones of cartilage given the concentration-time curves. Robust estimation of the diffusion coefficients, however, requires introducing certain levels of stochastic variations during the training process. Determining the required level of stochastic variation is performed by coupling the inverse ANN with a forward ANN that receives the diffusion coefficient as input and returns the concentration-time curve as output. Combined together, forward-inverse ANNs enable computationally inexperienced users to obtain accurate and fast estimation of the diffusion coefficients of cartilage zones. The diffusion coefficients estimated using the proposed approach are compared with those determined using direct scanning of the parameter space as the optimization approach. It has been shown that both approaches yield comparable results. Copyright © 2016 Elsevier Ltd. All rights reserved.
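
    As a toy illustration of the inverse-forward ANN pairing described above, the sketch below replaces the paper's multi-zone biphasic-solute finite-bath computational model with a simple saturating-exponential stand-in; the network sizes, noise level and parameter ranges are illustrative assumptions only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy stand-in for the physics-based forward model: a concentration-time
# curve for a given diffusion coefficient D (the paper instead uses a
# multi-zone biphasic-solute finite-bath computational model).
t = np.linspace(0.1, 10.0, 50)

def forward_physics(D):
    return 1.0 - np.exp(-D * t)

rng = np.random.default_rng(0)
D_train = rng.uniform(0.1, 2.0, size=2000)
curves = np.array([forward_physics(d) for d in D_train])

# Inverse ANN: curve -> D. Stochastic perturbations added during training
# make the estimator robust to measurement noise, as the abstract notes.
noisy = curves + rng.normal(0.0, 0.02, size=curves.shape)
inverse_ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0).fit(noisy, D_train)

# Forward ANN: D -> curve, used to check the consistency of the inverse
# estimate (and, in the paper, to tune the level of stochastic variation).
forward_ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                           random_state=0).fit(D_train.reshape(-1, 1), curves)

D_true = 0.8
measured = forward_physics(D_true) + rng.normal(0.0, 0.02, size=t.size)
D_hat = inverse_ann.predict(measured.reshape(1, -1))[0]
reconstructed = forward_ann.predict([[D_hat]])[0]
print(f"estimated D = {D_hat:.3f} (true {D_true})")
print(f"forward-ANN reconstruction error = "
      f"{np.max(np.abs(reconstructed - measured)):.3f}")
```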

  9. Enhancing multi-spot structured illumination microscopy with fluorescence difference

    NASA Astrophysics Data System (ADS)

    Ward, Edward N.; Torkelsen, Frida H.; Pal, Robert

    2018-03-01

    Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested.
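
    The fluorescence-difference ingredient of the method lends itself to a compact illustration. The one-dimensional sketch below images two closely spaced emitters with a solid spot and with a donut-shaped spot, then subtracts a scaled donut image; the point-spread functions, spacing and subtraction factor are illustrative assumptions, and the structured-illumination deconvolution step of the actual technique is omitted.

```python
import numpy as np

# 1-D toy of difference imaging: the donut image is brightest between the
# emitters, so subtracting it deepens the valley that separates them.
x = np.linspace(-2.0, 2.0, 2001)
dx = x[1] - x[0]
obj = np.zeros_like(x)
obj[np.abs(x - 0.18) < dx / 2] = 1.0   # two point emitters, 0.36 apart
obj[np.abs(x + 0.18) < dx / 2] = 1.0

sigma = 0.2
spot = np.exp(-x**2 / (2 * sigma**2))           # solid excitation spot
donut = x**2 * np.exp(-x**2 / (2 * sigma**2))   # donut, zero on axis
donut /= donut.max()

img_spot = np.convolve(obj, spot, mode="same")
img_donut = np.convolve(obj, donut, mode="same")
img_diff = np.clip(img_spot - 0.5 * img_donut, 0.0, None)  # difference image

def dip(img):
    """Relative depth of the valley between the two emitters."""
    return 1.0 - img[np.abs(x) < 0.05].min() / img.max()

print(f"dip: conventional {dip(img_spot):.3f}, difference {dip(img_diff):.3f}")
# a deeper dip in the difference image means the pair is better resolved
```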

  10. Multi-Site and Multi-Variables Statistical Downscaling Technique in the Monsoon Dominated Region of Pakistan

    NASA Astrophysics Data System (ADS)

    Khan, Firdos; Pilz, Jürgen

    2016-04-01

    South Asia is under severe impacts of changing climate and global warming. The last two decades showed that climate change or global warming is happening, and the first decade of the 21st century was the warmest decade ever recorded over Pakistan, where the temperature reached 53 °C in 2010. Consequently, the spatio-temporal distribution and intensity of precipitation are badly affected, causing floods, cyclones and hurricanes in the region, which further have impacts on agriculture, water, health, etc. To cope with the situation, it is important to conduct impact assessment studies and adopt adaptation and mitigation remedies. For impact assessment studies, we need climate variables at higher resolution. Downscaling techniques are used to produce climate variables at higher resolution; these techniques are broadly divided into two types, statistical downscaling and dynamical downscaling. The target location of this study is the monsoon-dominated region of Pakistan. One reason for choosing this area is that monsoon rains contribute more than 80% of its total rainfall. This study evaluates a statistical downscaling technique which can then be used for downscaling climatic variables. Two statistical techniques, i.e. quantile regression and copula modeling, are combined in order to produce realistic results for climate variables in the area under study. To reduce the dimension of the input data and deal with multicollinearity problems, empirical orthogonal functions will be used. Advantages of this new method are: (1) it is more robust to outliers than ordinary least squares estimates and other estimation methods based on central tendency and dispersion measures; (2) it preserves the dependence among variables and among sites; and (3) it can be used to combine different types of distributions. This is important in our case because we are dealing with climatic variables having different distributions over different meteorological stations. The proposed model will be trained by using the NCEP/NCAR (National Centers for Environmental Prediction / National Center for Atmospheric Research) predictors for the period 1960-1990 and validated for 1990-2000. To investigate the efficiency of the proposed model, it will be compared with the multivariate multiple regression model and with dynamical downscaling climate models by using different climate indices that describe the frequency, intensity and duration of the variables of interest. KEY WORDS: Climate change, Copula, Monsoon, Quantile regression, Spatio-temporal distribution.
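
    The two statistical building blocks named above, EOF reduction of the predictor fields followed by station-level quantile regression, can be sketched in a few lines; the data here are synthetic, and the copula step that re-imposes inter-site dependence is deliberately omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_time, n_grid = 480, 100                 # e.g. 40 years of monthly fields
fields = rng.normal(size=(n_time, n_grid))

# EOFs via SVD of the anomaly matrix; keep the leading 5 as predictors
anom = fields - fields.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
pcs = u[:, :5] * s[:5]                    # principal component time series

# synthetic station precipitation linked to the first PC
precip = 2.0 + 0.8 * pcs[:, 0] + rng.gamma(2.0, 1.0, size=n_time)

# quantile regression at three quantiles; unlike least squares, each fit
# targets a conditional quantile and is robust to outliers
X = sm.add_constant(pcs)
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(precip, X).fit(q=q)
    print(f"q={q}: intercept {res.params[0]:.2f}, PC1 coef {res.params[1]:.2f}")
```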

  11. Direct Aerosol Radiative Forcing from Combined A-Train Observations - Preliminary Comparisons with AeroCom Models and Pathways to Observationally Based All-sky Estimates

    NASA Astrophysics Data System (ADS)

    Redemann, J.; Livingston, J. M.; Shinozuka, Y.; Kacenelenbogen, M. S.; Russell, P. B.; LeBlanc, S. E.; Vaughan, M.; Ferrare, R. A.; Hostetler, C. A.; Rogers, R. R.; Burton, S. P.; Torres, O.; Remer, L. A.; Stier, P.; Schutgens, N.

    2014-12-01

    We describe a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. Initial calculations of seasonal clear-sky aerosol radiative forcing based on our multi-sensor aerosol retrievals compare well with over-ocean and top of the atmosphere IPCC-2007 model-based results, and with more recent assessments in the "Climate Change Science Program Report: Atmospheric Aerosol Properties and Climate Impacts" (2009). For the first time, we present comparisons of our multi-sensor aerosol direct radiative forcing estimates to values derived from a subset of models that participated in the latest AeroCom initiative. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.

  12. Multi-Skyrmions on AdS2 × S2, rational maps and popcorn transitions

    NASA Astrophysics Data System (ADS)

    Canfora, Fabrizio; Tallarita, Gianni

    2017-08-01

    By combining two different techniques to construct multi-soliton solutions of the (3 + 1)-dimensional Skyrme model, the generalized hedgehog and the rational map ansatz, we find multi-Skyrmion configurations in AdS2 × S2. We construct Skyrmionic multi-layered configurations such that the total baryon charge is the product of the number of kinks along the radial AdS2 direction and the degree of the rational map. We show that, for fixed total baryon charge, as one increases the charge density on ∂(AdS2 × S2), it becomes increasingly convenient energetically to have configurations with more peaks in the radial AdS2 direction but a lower degree of the rational map. This has a direct relation with the so-called holographic popcorn transitions in which, when the charge density is high, multi-layered configurations with low charge on each layer are favored over configurations with few layers but with higher charge on each layer. The case in which the geometry is M2 × S2 can also be analyzed.

  13. Modeling Multi-wavelength Stellar Astrometry. III. Determination of the Absolute Masses of Exoplanets and Their Host Stars

    NASA Astrophysics Data System (ADS)

    Coughlin, J. L.; López-Morales, Mercedes

    2012-05-01

    Astrometric measurements of stellar systems are becoming significantly more precise and common, with many ground- and space-based instruments and missions approaching 1 μas precision. We examine the multi-wavelength astrometric orbits of exoplanetary systems via both analytical formulae and numerical modeling. Exoplanets have a combination of reflected and thermally emitted light that causes the photocenter of the system to shift increasingly farther away from the host star with increasing wavelength. We find that, if observed at long enough wavelengths, the planet can dominate the astrometric motion of the system, and thus it is possible to directly measure the orbits of both the planet and star, and thus directly determine the physical masses of the star and planet, using multi-wavelength astrometry. In general, this technique works best for, though is certainly not limited to, systems that have large, high-mass stars and large, low-mass planets, which is a unique parameter space not covered by other exoplanet characterization techniques. Exoplanets that happen to transit their host star present unique cases where the physical radii of the planet and star can be directly determined via astrometry alone. Planetary albedos and day-night contrast ratios may also be probed via this technique due to the unique signature they impart on the observed astrometric orbits. We develop a tool to examine the prospects for near-term detection of this effect, and give examples of some exoplanets that appear to be good targets for detection in the K to N infrared observing bands, if the required precision can be achieved.
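
    The core geometric idea, that the photocenter is the flux-weighted average of the stellar and planetary positions and migrates toward the planet as the planet's flux fraction grows with wavelength, fits in a few lines; the masses, separation and flux ratios below are illustrative, not drawn from the paper.

```python
import numpy as np

def photocenter_semimajor(a_au, m_star, m_planet, flux_ratio):
    """Semi-major axis of the photocenter orbit (AU).

    a_au       : relative star-planet semi-major axis
    flux_ratio : planet flux / star flux at the observing wavelength
                 (reflected + thermal light; grows toward the infrared)
    """
    a_star = a_au * m_planet / (m_star + m_planet)    # star about barycenter
    a_planet = a_au * m_star / (m_star + m_planet)    # planet about barycenter
    f = flux_ratio / (1.0 + flux_ratio)               # planet's flux fraction
    # star and planet sit on opposite sides of the barycenter
    return abs((1.0 - f) * a_star - f * a_planet)

# a Jupiter-mass planet at 0.05 AU around a solar-mass star: as the
# planet/star flux ratio rises with wavelength, the photocenter motion
# grows from the pure stellar wobble toward the planet's own orbit
for fr in (0.0, 1e-4, 1e-3, 1e-2):
    a_pc = photocenter_semimajor(0.05, 1.0, 9.54e-4, fr)
    print(f"flux ratio {fr:g}: photocenter amplitude {a_pc * 1.496e8:.0f} km")
```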

  14. Strategies for efficient numerical implementation of hybrid multi-scale agent-based models to describe biological systems

    PubMed Central

    Cilfone, Nicholas A.; Kirschner, Denise E.; Linderman, Jennifer J.

    2015-01-01

    Biologically related processes operate across multiple spatiotemporal scales. For computational modeling methodologies to mimic this biological complexity, individual scale models must be linked in ways that allow for dynamic exchange of information across scales. A powerful methodology is to combine a discrete modeling approach, agent-based models (ABMs), with continuum models to form hybrid models. Hybrid multi-scale ABMs have been used to simulate emergent responses of biological systems. Here, we review two aspects of hybrid multi-scale ABMs: linking individual scale models and efficiently solving the resulting model. We discuss the computational choices associated with aspects of linking individual scale models while simultaneously maintaining model tractability. We demonstrate implementations of existing numerical methods in the context of hybrid multi-scale ABMs. Using an example model describing Mycobacterium tuberculosis infection, we show relative computational speeds of various combinations of numerical methods. Efficient linking and solution of hybrid multi-scale ABMs is key to model portability, modularity, and their use in understanding biological phenomena at a systems level. PMID:26366228

  15. Top-down estimate of dust emissions through integration of MODIS and MISR aerosol retrievals with the GEOS-Chem adjoint model

    NASA Astrophysics Data System (ADS)

    Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping

    2012-04-01

    Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for the combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in the Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint, constrained by aerosol optical depth (AOD) that is derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance with aerosol single-scattering properties consistent with GEOS-Chem. The adjoint inverse modeling yields an overall 51% decrease in prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD and MODIS Deep Blue AOD over the dust source region and with surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data for constraining dust emissions at various temporal and spatial scales, and hence for improving the quantification of dust effects on climate, air quality, and human health.

  16. Top-down Estimate of Dust Emissions Through Integration of MODIS and MISR Aerosol Retrievals With the Geos-chem Adjoint Model

    NASA Technical Reports Server (NTRS)

    Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping

    2012-01-01

    Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for the combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in the Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint, constrained by aerosol optical depth (AOD) that is derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance with aerosol single-scattering properties consistent with GEOS-Chem. The adjoint inverse modeling yields an overall 51% decrease in prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD and MODIS Deep Blue AOD over the dust source region and with surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data for constraining dust emissions at various temporal and spatial scales, and hence for improving the quantification of dust effects on climate, air quality, and human health.

  17. Multi-gene genetic programming based predictive models for municipal solid waste gasification in a fluidized bed gasifier.

    PubMed

    Pandey, Daya Shankar; Pan, Indranil; Das, Saptarshi; Leahy, James J; Kwapinski, Witold

    2015-03-01

    A multi-gene genetic programming technique is proposed as a new method to predict the syngas yield and the lower heating value for municipal solid waste gasification in a fluidized bed gasifier. The study shows that the predicted outputs of the municipal solid waste gasification process are in good agreement with the experimental dataset and also generalise well to validation (untrained) data. Published experimental datasets are used for model training and validation purposes. The results show the effectiveness of the genetic programming technique for solving complex nonlinear regression problems. The multi-gene genetic programming model is also compared with a single-gene genetic programming model to show the relative merits and demerits of the technique. This study demonstrates that the genetic programming-based data-driven modelling strategy can be a good candidate for developing models for other types of fuels as well. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle

    PubMed Central

    Barriuso, Alberto L.; De Paz, Juan F.; Lozano, Álvaro

    2018-01-01

    Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the need for a technological tool that addresses this limitation of the state of the art. As a novelty, this work presents a multi-agent architecture based on virtual organizations which allows the deployment of a new embedded agent model in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment at which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor their livestock has been developed. PMID:29301310

  19. Combination of Multi-Agent Systems and Wireless Sensor Networks for the Monitoring of Cattle.

    PubMed

    Barriuso, Alberto L; Villarrubia González, Gabriel; De Paz, Juan F; Lozano, Álvaro; Bajo, Javier

    2018-01-02

    Precision breeding techniques have been widely used to optimize expenses and increase livestock yields. Notwithstanding, the joint use of heterogeneous sensors and artificial intelligence techniques for the simultaneous analysis or detection of different problems that cattle may present has not been addressed. This study arises from the need for a technological tool that addresses this limitation of the state of the art. As a novelty, this work presents a multi-agent architecture based on virtual organizations which allows the deployment of a new embedded agent model in computationally limited autonomous sensors, making use of the Platform for Automatic coNstruction of orGanizations of intElligent Agents (PANGEA). To validate the proposed platform, different studies have been performed in which parameters specific to each animal are studied, such as physical activity, temperature, estrus cycle state and the moment at which the animal goes into labor. In addition, a set of applications that allow farmers to remotely monitor their livestock has been developed.

  20. Long-term ground deformation patterns of Bucharest using multi-temporal InSAR and multivariate dynamic analyses: a possible transpressional system?

    PubMed Central

    Armaş, Iuliana; Mendes, Diana A.; Popa, Răzvan-Gabriel; Gheorghe, Mihaela; Popovici, Diana

    2017-01-01

    The aim of this exploratory research is to capture spatial evolution patterns in the Bucharest metropolitan area using sets of single-polarised synthetic aperture radar (SAR) satellite data and multi-temporal radar interferometry. Three sets of SAR data acquired during the years 1992–2010 from ERS-1/-2 and ENVISAT, and 2011–2014 from TerraSAR-X satellites, were used in conjunction with the Small Baseline Subset (SBAS) and persistent scatterers (PS) high-resolution multi-temporal interferometry (InSAR) techniques to provide maps of line-of-sight displacements. The satellite-based remote sensing results were combined with results derived from classical methodologies (i.e., diachronic cartography) and field research to study possible trends in developments over former clay pits, landfill excavation sites, and industrial parks. The ground displacement trend patterns were analysed using several linear and nonlinear models and techniques. Trends based on the estimated ground displacement are characterised by long-term memory, indicated by low-noise Hurst exponents, which in the long term form interesting attractors. We hypothesize these attractors to be tectonic stress fields generated by transpressional movements. PMID:28252103

  1. Long-term ground deformation patterns of Bucharest using multi-temporal InSAR and multivariate dynamic analyses: a possible transpressional system?

    PubMed

    Armaş, Iuliana; Mendes, Diana A; Popa, Răzvan-Gabriel; Gheorghe, Mihaela; Popovici, Diana

    2017-03-02

    The aim of this exploratory research is to capture spatial evolution patterns in the Bucharest metropolitan area using sets of single-polarised synthetic aperture radar (SAR) satellite data and multi-temporal radar interferometry. Three sets of SAR data acquired during the years 1992-2010 from ERS-1/-2 and ENVISAT, and 2011-2014 from TerraSAR-X satellites, were used in conjunction with the Small Baseline Subset (SBAS) and persistent scatterers (PS) high-resolution multi-temporal interferometry (InSAR) techniques to provide maps of line-of-sight displacements. The satellite-based remote sensing results were combined with results derived from classical methodologies (i.e., diachronic cartography) and field research to study possible trends in developments over former clay pits, landfill excavation sites, and industrial parks. The ground displacement trend patterns were analysed using several linear and nonlinear models and techniques. Trends based on the estimated ground displacement are characterised by long-term memory, indicated by low-noise Hurst exponents, which in the long term form interesting attractors. We hypothesize these attractors to be tectonic stress fields generated by transpressional movements.
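
    For readers unfamiliar with the long-term-memory diagnostic used here, the following sketch estimates a Hurst exponent by classical rescaled-range (R/S) analysis; the window scheme and test signals are illustrative, and the paper's own multivariate dynamic analysis is more elaborate.

```python
import numpy as np

def hurst_rs(series, min_win=8):
    """Hurst exponent via rescaled-range analysis: slope of log(R/S)
    against log(window size). H near 0.5 means no memory; H > 0.5
    indicates the long-term memory referred to in the abstract."""
    series = np.asarray(series, dtype=float)
    n = series.size
    sizes, rs = [], []
    win = min_win
    while win <= n // 2:
        vals = []
        for start in range(0, n - win + 1, win):
            w = series[start:start + win]
            dev = np.cumsum(w - w.mean())        # cumulative deviations
            r = dev.max() - dev.min()            # range
            s = w.std()
            if s > 0:
                vals.append(r / s)
        sizes.append(win)
        rs.append(np.mean(vals))
        win *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
white = rng.normal(size=4096)                               # memoryless
smooth = np.convolve(white, np.ones(20) / 20, mode="same")  # persistent
print(f"white noise H ~ {hurst_rs(white):.2f}")   # expected near 0.5
print(f"smoothed series H ~ {hurst_rs(smooth):.2f}")  # expected above 0.5
```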

  2. Next-generation seismic experiments: wide-angle, multi-azimuth, three-dimensional, full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Bell, Rebecca; Morgan, Joanna; Warner, Michael

    2016-04-01

    There are many outstanding plate-tectonic scale questions that require us to know information about sub-surface physical properties, for example ascertaining the geometry and location of magma chambers and estimating the effective stress along plate boundary faults. These important scientific targets are often too deep, impractical and expensive for extensive academic drilling. Full-waveform inversion (FWI) is an advanced seismic imaging technique that has recently become feasible in three dimensions, and has been widely adopted by the oil and gas industry to image reservoir-scale targets at shallow-to-moderate depths. In this presentation we explore the potential for 3-D FWI, when combined with appropriate marine seismic acquisition, to recover high-resolution high-fidelity P-wave velocity models for sub-sedimentary targets within the crystalline crust and uppermost mantle. Using existing geological and geophysical models, we construct P-wave velocity models over three potential sub-sedimentary targets: the Soufrière Hills Volcano on Montserrat and its associated crustal magmatic system, the downgoing oceanic plate beneath the Nankai subduction margin, and the oceanic crust-uppermost mantle beneath the East Pacific Rise mid-ocean ridge. We use these models to generate realistic multi-azimuth 3-D synthetic seismic data, and attempt to invert these data to recover the original models. We explore the resolution and accuracy, sensitivity to noise and acquisition geometry, ability to invert elastic data using acoustic inversion codes, and the trade-off between low frequencies and starting velocity model accuracy. We will show that FWI applied to multi-azimuth, refracted, wide-angle, low-frequency data can resolve features in the deep crust and uppermost mantle on scales that are significantly better than can be achieved by any other geophysical technique, and that these results can be obtained using relatively small numbers (60-90) of ocean-bottom receivers combined with large numbers of air-gun shots. We demonstrate that multi-azimuth 3-D FWI is robust in the presence of noise, that acoustic FWI can invert elastic data successfully, and that the typical errors to be expected in starting models derived using travel times will not be problematic for FWI given appropriately designed acquisition. In this presentation we will also discuss a recent field-example of the use of FWI to image the Endeavour spreading centre in the northeastern Pacific. FWI is a rapidly maturing technology; its transfer from the petroleum sector to tackle a broader range of targets now appears entirely achievable.

  3. Model-Free control performance improvement using virtual reference feedback tuning and reinforcement Q-learning

    NASA Astrophysics Data System (ADS)

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2017-04-01

    This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller using input-output experimental data from the process in a model-reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller, learned with a batch fitted Q iteration algorithm, uses two neural networks, one for the Q-function estimator and one for the controller. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion, open-loop stable, multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.
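
    The batch fitted Q iteration at the heart of the approach can be sketched compactly. The example below uses a tree-based regressor on a toy first-order process with a discretized input; the paper instead trains neural-network Q-function and controller estimators on a MIMO aerodynamic system, so every model and constant here is an illustrative stand-in.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

# toy process x+ = 0.9 x + 0.5 u with reward -(x - 1)^2 (track x = 1)
rng = np.random.default_rng(0)
actions = np.linspace(-1.0, 1.0, 11)             # discretized input

# collect exploratory transitions (standing in for the paper's
# input-state data gathered under perturbed VRFT)
X, U, R, Xn = [], [], [], []
x = 0.0
for _ in range(2000):
    u = rng.choice(actions)
    xn = 0.9 * x + 0.5 * u
    X.append(x); U.append(u); R.append(-(xn - 1.0) ** 2); Xn.append(xn)
    x = xn if abs(xn) < 5.0 else 0.0

X, U, R, Xn = map(np.array, (X, U, R, Xn))
sa = np.column_stack([X, U])
gamma, q = 0.9, None
for _ in range(20):                               # fitted Q iteration
    if q is None:
        target = R
    else:
        # max over the discrete actions of Q(x', a)
        qn = np.column_stack([
            q.predict(np.column_stack([Xn, np.full_like(Xn, a)]))
            for a in actions])
        target = R + gamma * qn.max(axis=1)
    q = ExtraTreesRegressor(n_estimators=30, random_state=0).fit(sa, target)

def policy(x):
    """Greedy controller extracted from the fitted Q-function."""
    return actions[int(np.argmax([q.predict([[x, a]])[0] for a in actions]))]

print("greedy action at x=0:", policy(0.0))      # should push x toward 1
```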

  4. Swarm intelligence for multi-objective optimization of synthesis gas production

    NASA Astrophysics Data System (ADS)

    Ganesan, T.; Vasant, P.; Elamvazuthi, I.; Ku Shaari, Ku Zilati

    2012-11-01

    In the chemical industry, the production of methanol, ammonia, hydrogen and higher hydrocarbons requires synthesis gas (or syngas). The three main syngas production methods are carbon dioxide reforming (CRM), steam reforming (SRM) and partial oxidation of methane (POM). In this work, multi-objective (MO) optimization of the combined CRM and POM was carried out. The empirical model and the MO problem formulation for this combined process were obtained from previous works. The central objectives considered in this problem are methane conversion, carbon monoxide selectivity and the hydrogen-to-carbon-monoxide ratio. The MO nature of the problem was tackled using the Normal Boundary Intersection (NBI) method. Two techniques (the Gravitational Search Algorithm (GSA) and Particle Swarm Optimization (PSO)) were then applied in conjunction with the NBI method. The performance of the two algorithms and the quality of the solutions were gauged using two performance metrics. Comparative studies and results analysis were then carried out on the optimization results.
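
    A bare-bones PSO of the kind applied to the scalarized sub-problems is easy to exhibit. In the sketch below, a simple weighted sum stands in for the NBI scalarization, and the two quadratic objectives are illustrative placeholders for the empirical syngas model.

```python
import numpy as np

def f1(x):  # placeholder objective, e.g. negative methane conversion
    return (x[..., 0] - 1.0) ** 2 + x[..., 1] ** 2

def f2(x):  # placeholder objective, e.g. negative CO selectivity
    return x[..., 0] ** 2 + (x[..., 1] - 1.0) ** 2

def pso(objective, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization with inertia w and
    cognitive/social coefficients c1, c2."""
    rng = np.random.default_rng(seed)
    dim = bounds.shape[0]
    pos = rng.uniform(bounds[:, 0], bounds[:, 1], (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), objective(pos)
    g = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
        val = objective(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        g = pbest[np.argmin(pbest_val)]
    return g, pbest_val.min()

bounds = np.array([[-2.0, 2.0], [-2.0, 2.0]])
for lam in (0.2, 0.5, 0.8):         # sweep weights to trace the trade-off
    x_best, v = pso(lambda x: lam * f1(x) + (1 - lam) * f2(x), bounds)
    print(f"lambda={lam}: x*={np.round(x_best, 3)}, F={v:.4f}")
```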

  5. Residual Shuffling Convolutional Neural Networks for Deep Semantic Image Segmentation Using Multi-Modal Data

    NASA Astrophysics Data System (ADS)

    Chen, K.; Weinmann, M.; Gao, X.; Yan, M.; Hinz, S.; Jutzi, B.; Weinmann, M.

    2018-05-01

    In this paper, we address the deep semantic segmentation of aerial imagery based on multi-modal data. Given multi-modal data composed of true orthophotos and the corresponding Digital Surface Models (DSMs), we extract a variety of hand-crafted radiometric and geometric features which are provided separately and in different combinations as input to a modern deep learning framework. The latter is represented by a Residual Shuffling Convolutional Neural Network (RSCNN) combining the characteristics of a Residual Network with the advantages of atrous convolution and a shuffling operator to achieve a dense semantic labeling. Via performance evaluation on a benchmark dataset, we analyze the value of different feature sets for the semantic segmentation task. The derived results reveal that the use of radiometric features yields better classification results than the use of geometric features for the considered dataset. Furthermore, the consideration of data from both modalities leads to an improvement of the classification results. However, the derived results also indicate that the use of all defined features is less favorable than the use of selected features. Consequently, data representations derived via feature extraction and feature selection techniques still provide a gain if used as the basis for deep semantic segmentation.

  6. Simultaneous identification of optical constants and PSD of spherical particles by multi-wavelength scattering-transmittance measurement

    NASA Astrophysics Data System (ADS)

    Zhang, Jun-You; Qi, Hong; Ren, Ya-Tao; Ruan, Li-Ming

    2018-04-01

    An accurate and stable identification technique is developed to retrieve the optical constants and particle size distributions (PSDs) of a particle system simultaneously from multi-wavelength scattering-transmittance signals by using an improved quantum particle swarm optimization algorithm. Mie theory is used to calculate the directional laser intensity scattered by the particles and the spectral collimated transmittance. Sensitivity and objective-function distribution analyses were conducted to evaluate the mathematical properties (i.e. ill-posedness and multimodality) of the inverse problems under three different combinations of optical signals (i.e. the single-wavelength multi-angle light scattering signal; the single-wavelength multi-angle light scattering and spectral transmittance signals; and the multi-wavelength multi-angle light scattering and spectral transmittance signals). It was found that the best global convergence performance is obtained by using the multi-wavelength scattering-transmittance signals. Meanwhile, the present technique has been tested under different levels of Gaussian measurement noise to prove its feasibility in a large solution space. All the results show that the inverse technique using multi-wavelength scattering-transmittance signals is effective and suitable for retrieving the optical complex refractive indices and PSD of a particle system simultaneously.

  7. Multi-fiber strains measured by micro-Raman spectroscopy: Principles and experiments

    NASA Astrophysics Data System (ADS)

    Lei, Zhenkun; Wang, Yunfeng; Qin, Fuyong; Qiu, Wei; Bai, Ruixiang; Chen, Xiaogang

    2016-02-01

    Based on the widely used method for measuring the axial strain of a single Kevlar fiber, an original theoretical model and measurement principle for applying micro-Raman spectroscopy to multi-fiber strains in a fiber bundle were established. The relationship between the nominal Raman shift of the fiber bundle and the multi-fiber strains was deduced. The proposed principle for multi-fiber strain measurement is consistent with two special cases: single-fiber deformation and multi-fiber deformation under equal strain. It is found experimentally that the Raman scattering intensity of a Kevlar 49 fiber as a function of the distance between the fiber and the laser spot center follows a Gaussian function. Combining the Raman-shift/strain relationship of the Kevlar 49 single fiber with uniaxial tension measured by micro-Raman spectroscopy, the Raman shift as a function of strain was obtained. The Raman peak at 1610 cm-1 for the Kevlar 49 fiber was then fitted to a Lorentzian function, and the FWHM showed a quadratic increase with the fiber strain. Finally, a dual-fiber tensile experiment was performed to verify the adequacy of the Raman technique for the measurement of multi-fiber strains.
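
    The peak-fitting step is standard and worth showing. The sketch below fits a Lorentzian to a synthetic band near 1610 cm-1 and reads off the peak position and FWHM; the spectral parameters and noise level are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, fwhm, amp, base):
    """Lorentzian line shape with peak x0, full width fwhm, and offset."""
    return amp * (fwhm / 2) ** 2 / ((x - x0) ** 2 + (fwhm / 2) ** 2) + base

wavenumber = np.linspace(1580.0, 1640.0, 300)
rng = np.random.default_rng(0)
true_band = lorentzian(wavenumber, 1608.5, 6.0, 1000.0, 50.0)
spectrum = true_band + rng.normal(0.0, 10.0, wavenumber.size)

p0 = [1610.0, 5.0, spectrum.max(), spectrum.min()]     # initial guess
popt, pcov = curve_fit(lorentzian, wavenumber, spectrum, p0=p0)
x0, fwhm = popt[0], abs(popt[1])
print(f"peak at {x0:.2f} cm^-1, FWHM {fwhm:.2f} cm^-1")
# a tensile test repeats this fit at each strain level: the peak position
# shifts (Raman-shift/strain relation) and the FWHM broadens with strain
```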

  8. Hollow-fiber flow field-flow fractionation and multi-angle light scattering investigation of the size, shape and metal-release of silver nanoparticles in aqueous medium for nano-risk assessment.

    PubMed

    Marassi, Valentina; Casolari, Sonia; Roda, Barbara; Zattoni, Andrea; Reschiglian, Pierluigi; Panzavolta, Silvia; Tofail, Syed A M; Ortelli, Simona; Delpivo, Camilla; Blosi, Magda; Costa, Anna Luisa

    2015-03-15

    Due to the increased use of silver nanoparticles in industrial-scale manufacturing, consumer products and nanomedicine, reliable measurement of properties such as the size, shape and distribution of these nanoparticles in aqueous medium is critical. These properties affect both functional properties and biological impacts, especially in quantifying associated risks and identifying suitable risk-mitigation strategies. The feasibility of on-line coupling of a fractionation technique such as hollow-fiber flow field-flow fractionation (HF5) with a light scattering technique such as MALS (multi-angle light scattering) is investigated here for this purpose. Data obtained from such a fractionation technique, and from its combination with MALS, have been compared with those from more conventional but often complementary techniques, e.g. transmission electron microscopy, dynamic light scattering, atomic absorption spectroscopy, and X-ray fluorescence. The combination of fractionation and multi-angle light scattering techniques has been found to offer an ideal, hyphenated methodology for simultaneous size separation and characterization of silver nanoparticles. The hydrodynamic radii determined by the fractionation technique can be conveniently correlated with the mean average diameters determined by multi-angle light scattering, and reliable information on particle morphology in aqueous dispersion has been obtained. The ability to separate silver ions (Ag(+)) from silver nanoparticles (AgNPs) via membrane filtration during size analysis is an added advantage in obtaining quantitative insights into the risk potential. Most importantly, the methodology developed in this article can potentially be extended to similar characterization of metal-based nanoparticles when studying their functional effectiveness and hazard potential. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    NASA Astrophysics Data System (ADS)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable-fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3 DOE), in the context of aircraft wing optimization. M3 DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) the combination of a series of structural and aerodynamic analyses. The modularity of M3 DOE allows it to be a part of other inclusive optimization frameworks. M3 DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3 DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3 DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with the evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of the candidate population is updated iteratively using the evolutionary algorithm technique of fitness-driven retention. This strategy capitalizes on the advantages of evolutionary algorithms as well as POD-based reduced-order modeling, while overcoming the shortcomings inherent in these techniques. When linked with M3 DOE, this strategy offers a computationally efficient methodology for problems with a high level of complexity and a challenging design space. This newly developed framework is demonstrated for its robustness on a nonconventional supersonic tailless air vehicle wing shape optimization problem.
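
    The POD-based order reduction described above reduces, at its core, to an SVD of a snapshot matrix. The sketch below builds a synthetic ensemble, retains the modes carrying 99% of the energy, and shows how a candidate is re-expressed by a handful of modal coefficients; the dimensions and energy threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_design_vars, n_snapshots = 200, 40

# synthetic ensemble: candidates vary mostly along 3 hidden directions
hidden = rng.normal(size=(n_design_vars, 3))
coeffs = rng.normal(size=(3, n_snapshots))
snapshots = hidden @ coeffs + 0.01 * rng.normal(size=(n_design_vars,
                                                      n_snapshots))

# POD = SVD of the mean-centered snapshot matrix
mean = snapshots.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(snapshots - mean, full_matrices=False)

energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1      # modes for 99% energy
modes = u[:, :k]
print(f"retained {k} POD modes out of {n_snapshots} snapshots")

# any candidate is now represented by k coefficients; an evolutionary
# search can mutate and recombine these instead of the full design vector
alpha = modes.T @ (snapshots[:, [0]] - mean)
recon = mean + modes @ alpha
err = np.linalg.norm(recon - snapshots[:, [0]]) \
      / np.linalg.norm(snapshots[:, [0]])
print(f"reconstruction error of first candidate: {err:.2e}")
```

    In the hybrid framework, the snapshot ensemble is refreshed each generation by fitness-driven retention, so the reduced basis tracks the promising region of the design space rather than staying fixed.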

  10. Enhancing multi-spot structured illumination microscopy with fluorescence difference

    PubMed Central

    Torkelsen, Frida H.

    2018-01-01

    Structured illumination microscopy is a super-resolution technique used extensively in biological research. However, this technique is limited in the maximum possible resolution increase. Here we report the results of simulations of a novel enhanced multi-spot structured illumination technique. This method combines the super-resolution technique of difference microscopy with structured illumination deconvolution. Initial results give at minimum a 1.4-fold increase in resolution over conventional structured illumination in a low-noise environment. This new technique also has the potential to be expanded to further enhance axial resolution with three-dimensional difference microscopy. The requirement for precise pattern determination in this technique also led to the development of a new pattern estimation algorithm which proved more efficient and reliable than other methods tested. PMID:29657751

  11. Isotope ratio measurements of pg-size plutonium samples using TIMS in combination with "multiple ion counting" and filament carburization

    NASA Astrophysics Data System (ADS)

    Jakopic, Rozle; Richter, Stephan; Kühn, Heinz; Benedik, Ljudmila; Pihlar, Boris; Aregbe, Yetunde

    2009-01-01

    A sample preparation procedure for isotopic measurements using thermal ionization mass spectrometry (TIMS) was developed which employs the technique of carburization of rhenium filaments. Carburized filaments were prepared in a special vacuum chamber in which the filaments were exposed to benzene vapour as a carbon supply and carburized electrothermally. To find the optimal conditions for the carburization and isotopic measurements using TIMS, the influence of various parameters such as benzene pressure, carburization current and exposure time was tested. As a result, carburization of the filaments improved the overall efficiency by one order of magnitude. Additionally, a new "multi-dynamic" measurement technique was developed for Pu isotope ratio measurements using a "multiple ion counting" (MIC) system. This technique was combined with filament carburization and applied to the NBL-137 isotopic standard and samples of the NUSIMEP 5 inter-laboratory comparison campaign, which included certified plutonium materials at the ppt level. The multi-dynamic measurement technique for plutonium, in combination with filament carburization, has been shown to significantly improve the precision and accuracy of isotopic analysis of environmental samples with low levels of plutonium.

  12. Risk maps of Lassa fever in West Africa.

    PubMed

    Fichet-Calvet, Elisabeth; Rogers, David John

    2009-01-01

    Lassa fever is caused by a viral haemorrhagic arenavirus that affects two to three million people in West Africa, causing between 5,000 and 10,000 deaths each year. The natural reservoir of Lassa virus is the multimammate rat Mastomys natalensis, which lives in houses and surrounding fields. With the aim of gaining more information to control this disease, we here carry out a spatial analysis of Lassa fever data from human cases and infected rodent hosts covering the period 1965-2007. Information on contemporary environmental conditions (temperature, rainfall, vegetation) was derived from NASA Terra MODIS satellite sensor data and other sources, and elevation from the GTOPO30 surface, for the region from Senegal to the Congo. All multi-temporal data were analysed using temporal Fourier techniques to generate images of means, amplitudes and phases, which were used as the predictor variables in the models. In addition, meteorological rainfall data collected between 1951 and 1989 were used to generate a synoptic rainfall surface for the same region. Three different analyses (models) are presented, one superimposing Lassa fever outbreaks on the mean rainfall surface (Model 1) and the other two using non-linear discriminant analytical techniques. Model 2 selected variables in a step-wise inclusive fashion, and Model 3 used an information-theoretic approach in which many different random combinations of 10 variables were fitted to the Lassa fever data. Three combinations of absence:presence clusters were used in each of Models 2 and 3, the 2 absence:1 presence cluster combination giving what appeared to be the best result. Model 1 showed that the recorded outbreaks of Lassa fever in human populations occurred in zones receiving between 1,500 and 3,000 mm of rainfall annually. Rainfall, and to a much lesser extent temperature variables, were most strongly selected in both Models 2 and 3, and neither vegetation nor altitude seemed particularly important. Both Models 2 and 3 produced mean kappa values in excess of 0.91 (Model 2) or 0.86 (Model 3), making them 'Excellent'. The Lassa fever areas predicted by the models cover approximately 80% of each of Sierra Leone and Liberia, 50% of Guinea, 40% of Nigeria, 30% of each of Côte d'Ivoire, Togo and Benin, and 10% of Ghana.
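
    For context, ratings like 'Excellent' above follow common verbal scales for Cohen's kappa, which compares observed agreement p_o with chance agreement p_e: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch of the computation on made-up presence/absence labels, not the paper's data:

      import numpy as np

      def cohens_kappa(y_true, y_pred):
          """Cohen's kappa: (p_o - p_e) / (1 - p_e)."""
          y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
          classes = np.union1d(y_true, y_pred)
          p_o = np.mean(y_true == y_pred)   # observed agreement
          p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
          return (p_o - p_e) / (1 - p_e)

      truth = [1, 1, 0, 0, 1, 0, 1, 0]   # toy presence/absence labels
      pred  = [1, 1, 0, 0, 1, 0, 0, 0]
      print(cohens_kappa(truth, pred))   # 0.75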

  13. Longitudinal in vivo evaluation of bone regeneration by combined measurement of multi-pinhole SPECT and micro-CT for tissue engineering

    NASA Astrophysics Data System (ADS)

    Lienemann, Philipp S.; Metzger, Stéphanie; Kiveliö, Anna-Sofia; Blanc, Alain; Papageorgiou, Panagiota; Astolfo, Alberto; Pinzer, Bernd R.; Cinelli, Paolo; Weber, Franz E.; Schibli, Roger; Béhé, Martin; Ehrbar, Martin

    2015-05-01

    Over the last decades, great strides have been made in the development of novel implants for the treatment of bone defects. The increasing versatility and complexity of these implant designs call for concurrent advances in the means to assess, in vivo, the course of induced bone formation in preclinical models. Since its discovery, micro-computed tomography (micro-CT) has excelled as a powerful high-resolution technique for non-invasive assessment of newly formed bone tissue. However, micro-CT fails to provide spatiotemporal information on the biological processes ongoing during bone regeneration. Conversely, owing to its versatile applicability and cost-effectiveness, single photon emission computed tomography (SPECT) would be an ideal technique for assessing such biological processes with high sensitivity and, for nuclear imaging, a comparably high resolution (<1 mm). Herein, we employ modularly designed poly(ethylene glycol)-based hydrogels that release bone morphogenetic protein to guide the healing of critical-sized calvarial bone defects. By combined in vivo longitudinal multi-pinhole SPECT and micro-CT evaluations we determine the spatiotemporal course of bone formation and remodeling within this synthetic hydrogel implant. End-point evaluations by high-resolution micro-CT and histology confirm the value of this approach for following and optimizing bone-inducing biomaterials.

  14. Robust model predictive control for multi-step short range spacecraft rendezvous

    NASA Astrophysics Data System (ADS)

    Zhu, Shuyi; Sun, Ran; Wang, Jiaolong; Wang, Jihe; Shao, Xiaowei

    2018-07-01

    This work presents a robust model predictive control (MPC) approach for the multi-step short range spacecraft rendezvous problem. During the specific short range phase concerned, the chaser is supposed to be initially outside the line-of-sight (LOS) cone. The rendezvous process therefore naturally includes two steps: the first is to transfer the chaser into the LOS cone, and the second is to transfer the chaser into the aimed region with its motion confined within the LOS cone. A novel MPC framework termed Mixed MPC (M-MPC) is proposed, which combines the Variable-Horizon MPC (VH-MPC) framework and the Fixed-Instant MPC (FI-MPC) framework. The M-MPC framework enables the optimization for the two steps to be implemented jointly rather than artificially separated, and its computational workload is acceptable for the usually low-power processors onboard spacecraft. Then, considering that disturbances including modeling error, sensor noise and thrust uncertainty may induce undesired constraint violations, a robust technique is developed and attached to the M-MPC framework to form a robust M-MPC approach. The robust technique is based on the chance-constrained idea, which ensures that constraints are satisfied with a prescribed probability. It improves on the robust technique proposed by Gavilan et al. by eliminating unnecessary conservativeness, explicitly incorporating known statistical properties of the navigation uncertainty. The efficacy of the robust M-MPC approach is shown in a simulation study.
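
    As background on the chance-constrained idea, a linear constraint under Gaussian uncertainty is commonly replaced by a deterministic surrogate tightened by a quantile-scaled margin. The sketch below shows only this standard reformulation; the vectors and covariance are hypothetical, and the paper's exact formulation may differ.

      import numpy as np
      from scipy.stats import norm

      def tightened_bound(a, b, Sigma, eps):
          """Deterministic surrogate of P(a^T x <= b) >= 1 - eps for
          x = x_nom + w with w ~ N(0, Sigma): enforce
          a^T x_nom <= b - z_{1-eps} * sqrt(a^T Sigma a)."""
          margin = norm.ppf(1.0 - eps) * np.sqrt(a @ Sigma @ a)
          return b - margin

      a = np.array([1.0, 0.0])        # hypothetical LOS half-plane normal
      Sigma = np.diag([0.04, 0.01])   # assumed navigation noise covariance
      print(tightened_bound(a, b=5.0, Sigma=Sigma, eps=0.05))  # ~4.67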

  15. Time series modeling of human operator dynamics in manual control tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
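
    As a generic illustration of time-series identification from short records, the sketch below fits a single-channel ARX model by least squares to synthetic data; the system coefficients and noise level are made up, and this is not the authors' multi-channel method.

      import numpy as np

      rng = np.random.default_rng(1)
      u = rng.standard_normal(400)           # recorded input (e.g., stick signal)
      y = np.zeros(400)
      for k in range(2, 400):                # synthetic "true" system, demo only
          y[k] = 1.2*y[k-1] - 0.5*y[k-2] + 0.8*u[k-1] + 0.01*rng.standard_normal()

      # ARX(2,1): y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]; least-squares fit.
      Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
      theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
      print(theta)                           # close to [1.2, -0.5, 0.8]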

  17. Parallel and Preemptable Dynamically Dimensioned Search Algorithms for Single and Multi-objective Optimization in Water Resources

    NASA Astrophysics Data System (ADS)

    Tolson, B.; Matott, L. S.; Gaffoor, T. A.; Asadzadeh, M.; Shafii, M.; Pomorski, P.; Xu, X.; Jahanpour, M.; Razavi, S.; Haghnegahdar, A.; Craig, J. R.

    2015-12-01

    We introduce asynchronous parallel implementations of the Dynamically Dimensioned Search (DDS) family of algorithms, including DDS, discrete DDS, PA-DDS and DDS-AU. These parallel algorithms are unique among existing parallel optimization algorithms in the water resources field in that parallel DDS is asynchronous and does not require an entire population (set of candidate solutions) to be evaluated before generating and then sending a new candidate solution for evaluation. One key advance in this study is developing the first parallel PA-DDS multi-objective optimization algorithm. The other key advance is enhancing the computational efficiency of solving optimization problems (such as model calibration) by combining a parallel optimization algorithm with the deterministic model pre-emption concept. These two efficiency techniques can only be combined because of the asynchronous nature of parallel DDS. Model pre-emption terminates simulation model runs early, prior to completely simulating the model calibration period for example, when intermediate results indicate the candidate solution is so poor that it will definitely have no influence on the generation of further candidate solutions. The computational savings of deterministic model pre-emption available in serial implementations of population-based algorithms (e.g., PSO) disappear in synchronous parallel implementations of these algorithms. In addition to the key advances above, we implement the algorithms across a range of computation platforms (Windows and Unix-based operating systems, from multi-core desktops to a supercomputer system) and package them for future modellers in a model-independent calibration software package called Ostrich, as well as in MATLAB versions. Results across multiple platforms and multiple case studies (from 4 to 64 processors) demonstrate the vast improvement over serial DDS-based algorithms and highlight the important role model pre-emption plays in the performance of parallel, pre-emptable DDS algorithms. Case studies include single- and multi-objective optimization problems in water resources model calibration, and in many cases linear or near-linear speedups are observed.
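
    For readers unfamiliar with the serial building block, DDS perturbs a randomly chosen, progressively shrinking subset of decision variables around the current best solution and accepts only improvements. Below is a minimal serial sketch of that core loop on a toy objective; it is not the parallel, pre-emptable Ostrich implementation, and bound handling is simplified to clipping.

      import numpy as np

      def dds(f, lo, hi, n_evals=1000, r=0.2, seed=0):
          """Minimal serial Dynamically Dimensioned Search (minimization)."""
          rng = np.random.default_rng(seed)
          d = len(lo)
          x_best = lo + rng.random(d) * (hi - lo)
          f_best = f(x_best)
          for i in range(1, n_evals):
              # Probability of perturbing each dimension shrinks over time.
              p = 1.0 - np.log(i) / np.log(n_evals)
              mask = rng.random(d) < p
              if not mask.any():
                  mask[rng.integers(d)] = True
              x = x_best.copy()
              x[mask] += r * (hi[mask] - lo[mask]) * rng.standard_normal(mask.sum())
              x = np.clip(x, lo, hi)            # simplified bound handling
              fx = f(x)
              if fx < f_best:                   # greedy acceptance
                  x_best, f_best = x, fx
          return x_best, f_best

      # Toy calibration surrogate: sphere function in 10 dimensions.
      lo, hi = np.full(10, -5.0), np.full(10, 5.0)
      print(dds(lambda x: np.sum(x**2), lo, hi)[1])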

  18. Cells, Agents, and Support Vectors in Interaction - Modeling Urban Sprawl based on Machine Learning and Artificial Intelligence Techniques in a Post-Industrial Region

    NASA Astrophysics Data System (ADS)

    Rienow, A.; Menz, G.

    2015-12-01

    Since the beginning of the millennium, artificial intelligence techniques such as cellular automata (CA) and multi-agent systems (MAS) have been incorporated into land-system simulations to address the complex challenges of transitions in urban areas as open, dynamic systems. This study presents a hybrid modeling approach for modeling the two antagonistic processes of urban sprawl and urban decline at once. The simulation power of support vector machines (SVM), cellular automata (CA) and multi-agent systems (MAS) is integrated into one modeling framework and applied to the largest agglomeration of Central Europe: the Ruhr. A modified version of SLEUTH (short for Slope, Land-use, Exclusion, Urban, Transport, and Hillshade) functions as the CA component. SLEUTH makes use of historic urban land-use data sets and growth coefficients for the purpose of modeling physical urban expansion. The machine learning algorithm of SVM is applied in order to enhance SLEUTH. Thus, the stochastic variability of the CA is reduced and information about the human and ecological forces driving the local suitability of urban sprawl is incorporated. Subsequently, the supported CA is coupled with the MAS ReHoSh (Residential Mobility and the Housing Market of Shrinking City Systems). The MAS models population patterns, housing prices, and housing demand in shrinking regions based on interactions between household and city agents. Semi-explicit urban weights are introduced as a way of modeling from and to the pixel simultaneously. Three scenarios of changing housing preferences reveal the urban development of the region in terms of quantity and location. They reflect the dissemination of sustainable thinking among stakeholders versus the steady dream of owning a house in sub- and exurban areas. Additionally, the outcomes are transferred into a digital petri dish reflecting a synthetic environment with perfect conditions for growth. Hence, the generic growth elements affecting the future face of post-industrial cities are revealed. Finally, the advantages and limitations of linking pixels and people by combining AI and machine learning techniques in a multi-scale geosimulation approach are discussed.

  19. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques at co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. Linking the geometric reference points enables the technique-specific coordinates to be combined into a multi-technique site coordinate. The vectors between these reference points are called "local ties". Local ties are usually realized by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly, by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, an accuracy of the local tie at the sub-mm level will be mandatory, which is currently not achievable. To assess the local tie effects on ITRF computations, the error sources will be investigated so that they can be realistically estimated and accounted for. Hence, a reasonable estimate of all the errors included in the various local ties is needed. An appropriate estimate could also improve the separation of local tie errors from technique-specific error contributions to uncertainties, and thus help assess the accuracy of the space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network with respect to the ITRF.

  20. DCE-MRI, DW-MRI, and MRS in Cancer: Challenges and Advantages of Implementing Qualitative and Quantitative Multi-parametric Imaging in the Clinic

    PubMed Central

    Winfield, Jessica M.; Payne, Geoffrey S.; Weller, Alex; deSouza, Nandita M.

    2016-01-01

    Abstract Multi-parametric magnetic resonance imaging (mpMRI) offers a unique insight into tumor biology by combining functional MRI techniques that inform on cellularity (diffusion-weighted MRI), vascular properties (dynamic contrast-enhanced MRI), and metabolites (magnetic resonance spectroscopy) and has scope to provide valuable information for prognostication and response assessment. Challenges in the application of mpMRI in the clinic include the technical considerations in acquiring good quality functional MRI data, development of robust techniques for analysis, and clinical interpretation of the results. This article summarizes the technical challenges in acquisition and analysis of multi-parametric MRI data before reviewing the key applications of multi-parametric MRI in clinical research and practice. PMID:27748710

  1. A robust multi-kernel change detection framework for detecting leaf beetle defoliation using Landsat 7 ETM+ data

    NASA Astrophysics Data System (ADS)

    Anees, Asim; Aryal, Jagannath; O'Reilly, Małgorzata M.; Gale, Timothy J.; Wardlaw, Tim

    2016-12-01

    A robust non-parametric framework, based on multiple Radial Basis Function (RBF) kernels, is proposed in this study for detecting land/forest cover changes using Landsat 7 ETM+ images. One widely used framework is to compute change vectors (a difference image) and use a supervised classifier to differentiate between change and no-change. Bayesian classifiers, e.g. the Maximum Likelihood Classifier (MLC) and Naive Bayes (NB), are widely used probabilistic classifiers which assume parametric models, e.g. a Gaussian function, for the class-conditional distributions. However, their performance can be limited if the data set deviates from the assumed model. The proposed framework exploits the useful properties of the Least Squares Probabilistic Classifier (LSPC) formulation, i.e. its non-parametric and probabilistic nature, to model class posterior probabilities of the difference image using a linear combination of a large number of Gaussian kernels. To this end, a simple technique based on 10-fold cross-validation is also proposed for tuning model parameters automatically, instead of selecting a (possibly) suboptimal combination from pre-specified lists of values. The proposed framework has been tested and compared with the Support Vector Machine (SVM) and NB for detection of defoliation caused by leaf beetles (Paropsisterna spp.) in Eucalyptus nitens and Eucalyptus globulus plantations of two test areas in Tasmania, Australia, using raw bands and band combination indices of Landsat 7 ETM+. It was observed that, due to its multi-kernel non-parametric formulation and probabilistic nature, the LSPC outperforms the parametric NB with Gaussian assumption in the change detection framework, with Overall Accuracy (OA) ranging from 93.6% (κ = 0.87) to 97.4% (κ = 0.94) against 85.3% (κ = 0.69) to 93.4% (κ = 0.85), and is more robust to changing data distributions. Its performance was comparable to SVM, with the added advantages of being probabilistic and capable of handling multi-class problems naturally in its original formulation.
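
    To make the LSPC idea concrete: class posteriors are modeled as linear combinations of Gaussian kernels whose coefficients are fitted by regularized least squares against class indicator vectors. The sketch below implements that basic recipe on synthetic two-band "difference image" pixels; it is a simplified stand-in rather than the authors' implementation, and sigma and lam would in practice be tuned by the cross-validation procedure described above.

      import numpy as np

      def rbf_kernel(A, B, sigma):
          """Gaussian (RBF) kernel matrix between row-sample sets A and B."""
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2 * sigma**2))

      rng = np.random.default_rng(2)
      # Toy two-band "difference image" pixels: class 0 no-change, 1 change.
      X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
      y = np.repeat([0, 1], 100)

      sigma, lam = 1.0, 1e-2          # assumed values; tuned by 10-fold CV
      K = rbf_kernel(X, X, sigma)
      Y = np.eye(2)[y]                # class indicator vectors
      alpha = np.linalg.solve(K + lam * np.eye(len(X)), Y)

      post = np.clip(K @ alpha, 0, None)      # clip negatives, then normalize
      post /= post.sum(axis=1, keepdims=True)
      print("training accuracy:", (post.argmax(axis=1) == y).mean())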

  2. Multivariable PID controller design tuning using bat algorithm for activated sludge process

    NASA Astrophysics Data System (ADS)

    Atikah Nor’Azlan, Nur; Asmiza Selamat, Nur; Mat Yahya, Nafrizuan

    2018-04-01

    This project concerns the design of a multivariable PID controller for a multi-input multi-output (MIMO) system, applying four multivariable PID tuning methods: Davison, Penttinen-Koivo, Maciejowski and a proposed combined method. The aim of this study is to investigate the performance of a selected optimization technique in tuning the parameters of the MPID controller. The selected optimization technique is the Bat Algorithm (BA). All MPID-BA tuning results will be compared and analyzed, and the best MPID-BA will be chosen in order to determine which technique performs better in terms of transient response.

  3. Bimanual Interaction with Interscopic Multi-Touch Surfaces

    NASA Astrophysics Data System (ADS)

    Schöning, Johannes; Steinicke, Frank; Krüger, Antonio; Hinrichs, Klaus; Valkov, Dimitar

    Multi-touch interaction has received considerable attention in the last few years, in particular for natural two-dimensional (2D) interaction. However, many application areas deal with three-dimensional (3D) data and therefore require intuitive 3D interaction techniques. Virtual reality (VR) systems provide sophisticated 3D user interfaces but lack efficient 2D interaction, and are therefore rarely adopted by ordinary users or even by experts. Since multi-touch interfaces represent a good trade-off between intuitive, constrained interaction on a touch surface providing tangible feedback, and unrestricted natural interaction without any instrumentation, they have the potential to form the foundation of the next generation of user interfaces for 2D as well as 3D interaction. In particular, stereoscopic display of 3D data provides an additional depth cue, but until now the challenges and limitations of multi-touch interaction in this context have not been considered. In this paper we present new multi-touch paradigms and interactions that combine both traditional 2D interaction and novel 3D interaction on a touch surface to form a new class of multi-touch systems, which we refer to as interscopic multi-touch surfaces (iMUTS). We discuss iMUTS-based user interfaces that support interaction with 2D content displayed in monoscopic mode and 3D content usually displayed stereoscopically. In order to underline the potential of the proposed iMUTS setup, we have developed and evaluated two example interaction metaphors for different domains. First, we present intuitive navigation techniques for virtual 3D city models, and then we describe a natural metaphor for deforming volumetric datasets in a medical context.

  4. Sound transmission through a poroelastic layered panel

    NASA Astrophysics Data System (ADS)

    Nagler, Loris; Rong, Ping; Schanz, Martin; von Estorff, Otto

    2014-04-01

    Multi-layered panels are often used to improve the acoustics in cars, airplanes, rooms, etc. For such applications these panels include porous and/or fibrous layers. The proposed numerical method is an approach to simulating the acoustical behavior of such multi-layered panels. The model assumes plate-like structures and, hence, combines plate theories for the different layers. The poroelastic layer is modelled with a recently developed plate theory. This theory uses a series expansion in the thickness direction with subsequent analytical integration in this direction to reduce the three dimensions to two. The same idea is used to model either air gaps or fibrous layers. The latter are modeled as an equivalent fluid and can be handled like an air gap, i.e., a kind of 'air plate' is used. The coupling of the layers is done by using the series expansion to express the continuity conditions on the surfaces of the plates. The final system is solved with finite elements, where domain decomposition techniques in combination with preconditioned iterative solvers are applied to solve the final system of equations. In a large frequency range, the comparison with measurements shows very good agreement. From the numerical solution process it can be concluded that different preconditioners for the different layers are necessary. Reusing the Krylov subspace of the iterative solvers pays off if several excitations have to be computed, but less so in the loop over the frequencies.

  5. Sequential combination of multi-source satellite observations for separation of surface deformation associated with serial seismic events

    NASA Astrophysics Data System (ADS)

    Chen, Qiang; Xu, Qian; Zhang, Yijun; Yang, Yinghui; Yong, Qi; Liu, Guoxiang; Liu, Xianwen

    2018-03-01

    A single satellite geodetic technique is of limited use for mapping the sequence of ground deformation associated with serial seismic events; InSAR, for example, has a long revisiting period that readily leads to mixed, complex deformation signals from multiple events. This challenges the observational capability of any single satellite geodetic technique for accurate recognition of individual surface deformation fields and earthquake models. The rapidly increasing availability of various satellite observations provides a good solution to this issue. In this study, we explore a sequential combination of multiple overlapping datasets from ALOS/PALSAR, ENVISAT/ASAR and GPS observations to separate the surface deformation associated with the 2011 Mw 9.0 Tohoku-Oki major quake and two strong aftershocks, the Mw 6.6 Iwaki and Mw 5.8 Ibaraki events. We first estimate the fault slip model of the major shock with ASAR interferometry and GPS displacements as constraints. Because the PALSAR interferogram used spans the period of all the events, we then remove the surface deformation of the major shock through a forward-calculated prediction, thus obtaining the PALSAR InSAR deformation associated with the two strong aftershocks. The inversion for the source parameters of the Iwaki aftershock is conducted using the refined PALSAR deformation, considering that the higher-magnitude Iwaki quake has a dominant deformation contribution relative to the Ibaraki event. After removal of the deformation component of the Iwaki event, we determine the fault slip distribution of the Ibaraki shock using the remaining PALSAR InSAR deformation. Finally, the complete source models for the serial seismic events are clearly identified from the sequential combination of multi-source satellite observations, which suggests that the major quake was a predominantly mega-thrust rupture, whereas the two aftershocks were normal faulting motion. The estimated seismic moment magnitudes for the Tohoku-Oki, Iwaki and Ibaraki events are Mw 9.0, Mw 6.85 and Mw 6.11, respectively.

  6. Visualizing projected Climate Changes - the CMIP5 Multi-Model Ensemble

    NASA Astrophysics Data System (ADS)

    Böttinger, Michael; Eyring, Veronika; Lauer, Axel; Meier-Fleischer, Karin

    2017-04-01

    Large ensembles add an additional dimension to climate model simulations. Internal variability of the climate system can be assessed, for example, by multiple climate model simulations with small variations in the initial conditions, or by analyzing the spread in large ensembles made by multiple climate models under common protocols. This spread is often used as a measure of uncertainty in climate projections. In the context of the fifth phase of the WCRP's Coupled Model Intercomparison Project (CMIP5), more than 40 different coupled climate models were employed to carry out a coordinated set of experiments. Time series of the development of integral quantities, such as the global mean temperature change, for all models visualize the spread in the multi-model ensemble. A similar approach can be applied to 2D visualizations of projected climate changes, such as latitude-longitude maps showing the multi-model mean of the ensemble, by adding a graphical representation of the uncertainty information. This has been demonstrated, for example, with static figures in chapter 12 of the last IPCC report (AR5) using different so-called stippling and hatching techniques. In this work, we focus on animated visualizations of multi-model ensemble climate projections carried out within CMIP5 as a way of communicating climate change results to the scientific community as well as to the public. We take a closer look at measures of robustness or uncertainty used in recent publications that are suitable for animated visualizations. Specifically, we use the ESMValTool [1] to process and prepare the CMIP5 multi-model data, in combination with standard visualization tools such as NCL and the commercial 3D visualization software Avizo, to create the animations. We compare different visualization techniques, such as height fields or shading with transparency, for creating animated visualizations of ensemble mean changes in temperature and precipitation including corresponding robustness measures. [1] Eyring, V., Righi, M., Lauer, A., Evaldsson, M., Wenzel, S., Jones, C., Anav, A., Andrews, O., Cionni, I., Davin, E. L., Deser, C., Ehbrecht, C., Friedlingstein, P., Gleckler, P., Gottschaldt, K.-D., Hagemann, S., Juckes, M., Kindermann, S., Krasting, J., Kunert, D., Levine, R., Loew, A., Mäkelä, J., Martin, G., Mason, E., Phillips, A. S., Read, S., Rio, C., Roehrig, R., Senftleben, D., Sterl, A., van Ulft, L. H., Walton, J., Wang, S., and Williams, K. D.: ESMValTool (v1.0) - a community diagnostic and performance metrics tool for routine evaluation of Earth system models in CMIP, Geosci. Model Dev., 9, 1747-1802, doi:10.5194/gmd-9-1747-2016, 2016.
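
    One common robustness measure of the kind discussed above is the fraction of ensemble members agreeing on the sign of the projected change; grid cells below an agreement threshold are then stippled or hatched. A minimal sketch on synthetic data (the 80% threshold and array sizes are illustrative assumptions, not the settings used in the paper):

      import numpy as np

      rng = np.random.default_rng(3)
      # Hypothetical ensemble: 40 models x 72 lat x 144 lon of projected change.
      change = rng.normal(loc=0.5, scale=1.0, size=(40, 72, 144))

      ens_mean = change.mean(axis=0)
      # Fraction of models agreeing with the sign of the ensemble mean.
      agree = (np.sign(change) == np.sign(ens_mean)).mean(axis=0)

      # Mask for hatching/stippling: mean change robust where >= 80% agree.
      robust = agree >= 0.8
      print(f"{robust.mean():.0%} of grid cells pass the agreement test")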

  7. Multiview photometric stereo.

    PubMed

    Hernández Esteban, Carlos; Vogiatzis, George; Cipolla, Roberto

    2008-03-01

    This paper addresses the problem of obtaining complete, detailed reconstructions of textureless shiny objects. We present an algorithm which uses silhouettes of the object, as well as images obtained under changing illumination conditions. In contrast with previous photometric stereo techniques, ours is not limited to a single viewpoint but produces accurate reconstructions in full 3D. A number of images of the object are obtained from multiple viewpoints, under varying lighting conditions. Starting from the silhouettes, the algorithm recovers camera motion and constructs the object's visual hull. This is then used to recover the illumination and initialise a multi-view photometric stereo scheme to obtain a closed surface reconstruction. There are two main contributions in this paper: firstly, we describe a robust technique to estimate light directions and intensities and, secondly, we introduce a novel formulation of photometric stereo which combines multiple viewpoints and hence allows closed surface reconstructions. The algorithm has been implemented as a practical model acquisition system. A quantitative evaluation of the algorithm on synthetic data is presented together with complete reconstructions of challenging real objects. Finally, we show experimentally how, even in the case of highly textured objects, this technique can greatly improve on correspondence-based multi-view stereo results.
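
    For orientation, classical single-view Lambertian photometric stereo, the building block generalized here, recovers a scaled normal g per pixel by solving I = L g in the least-squares sense, where the rows of L are intensity-scaled light directions; the albedo is |g| and the normal g/|g|. A per-pixel sketch with made-up lights and a synthetic pixel:

      import numpy as np

      # Hypothetical calibrated lights: rows are unit light directions scaled
      # by source intensity (the quantities the paper's first contribution
      # estimates from the visual hull).
      L = np.array([[ 0.0, 0.0, 1.0],
                    [ 0.7, 0.0, 0.7],
                    [ 0.0, 0.7, 0.7],
                    [-0.7, 0.0, 0.7]])

      true_n = np.array([0.2, -0.1, 0.97])
      true_n /= np.linalg.norm(true_n)
      albedo = 0.8
      I = albedo * np.clip(L @ true_n, 0, None)   # Lambertian intensities

      # Least-squares scaled normal: g = argmin ||L g - I||.
      g, *_ = np.linalg.lstsq(L, I, rcond=None)
      print(np.linalg.norm(g), g / np.linalg.norm(g))  # albedo and normal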

  8. QFT Multi-Input, Multi-Output Design with Non-Diagonal, Non-Square Compensation Matrices

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Henderson, D. K.

    1996-01-01

    A technique for obtaining a non-diagonal compensator for the control of a multi-input, multi-output plant is presented. The technique, which uses Quantitative Feedback Theory, provides guaranteed stability and performance robustness in the presence of parametric uncertainty. An example is given involving the lateral-directional control of an uncertain model of a high-performance fighter aircraft in which redundant control effectors are in evidence, i.e. more control effectors than output variables are used.

  9. A novel cost-effective parallel narrowband ANC system with local secondary-path estimation

    NASA Astrophysics Data System (ADS)

    Delegà, Riccardo; Bernasconi, Giancarlo; Piroddi, Luigi

    2017-08-01

    Many noise reduction applications are targeted at multi-tonal disturbances. Active noise control (ANC) solutions for such problems are generally based on the combination of multiple adaptive notch filters. Both the performance and the computational cost are negatively affected by an increase in the number of controlled frequencies. In this work we study a different modeling approach for the secondary path, based on the estimation of various small local models in adjacent frequency subbands, that greatly reduces the impact of reference-filtering operations in the ANC algorithm. Furthermore, in combination with a frequency-specific step size tuning method it provides a balanced attenuation performance over the whole controlled frequency range (and particularly in the high end of the range). Finally, the use of small local models is greatly beneficial for the reactivity of the online secondary path modeling algorithm when the characteristics of the acoustic channels are time-varying. Several simulations are provided to illustrate the positive features of the proposed method compared to other well-known techniques.
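
    As background, each adaptive notch filter in such combinations typically adapts two weights on quadrature (sine/cosine) references using the filtered-x LMS (FxLMS) rule, with the references filtered through a secondary-path model. The single-tone sketch below assumes a toy gain-plus-delay secondary path known exactly; the paper's contribution, the subband local models, is not reproduced here.

      import numpy as np

      fs, f0, n = 8000, 200, 4000       # sample rate, tone frequency, samples
      t = np.arange(n) / fs
      d = np.sin(2 * np.pi * f0 * t + 0.3)   # tonal disturbance at error mic

      delay, gain = 5, 0.9              # toy secondary path: gain plus delay
      xs = np.sin(2 * np.pi * f0 * t)   # quadrature reference signals
      xc = np.cos(2 * np.pi * f0 * t)

      def through_path(x, k):           # secondary-path output at time k
          return gain * x[k - delay] if k >= delay else 0.0

      w = np.zeros(2)                   # two adaptive weights for this tone
      mu = 0.01
      y = np.zeros(n)
      e = np.zeros(n)
      for k in range(n):
          y[k] = w[0] * xs[k] + w[1] * xc[k]      # anti-noise sample
          e[k] = d[k] + through_path(y, k)        # residual at the error mic
          # FxLMS: update with references filtered through the path model.
          xf = np.array([through_path(xs, k), through_path(xc, k)])
          w -= mu * e[k] * xf
      print(np.abs(e[:200]).mean(), np.abs(e[-200:]).mean())  # error shrinks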

  10. On the formalization of multi-scale and multi-science processes for integrative biology

    PubMed Central

    Díaz-Zuccarini, Vanessa; Pichardo-Almarza, César

    2011-01-01

    The aim of this work is to introduce the general concept of 'Bond Graph' (BG) techniques applied in the context of multi-physics and multi-scale processes. BG modelling has a natural place in these developments. BGs are inherently coherent, as the relationships defined between the 'elements' of the graph are strictly governed by causality rules and power (energy) conservation. BGs clearly show how power flows between components of the systems they represent. The 'effort' and 'flow' variables enable bidirectional information flow in the BG model. When the power level of a system is low, BGs degenerate into signal flow graphs in which information is mainly unidirectional and power is minimal, i.e. they find a natural limitation when dealing with populations of individuals or purely kinetic models, as the concept of energy conservation in these systems is no longer relevant. More precisely, the aim is twofold: on the one hand, to introduce the general concept of BG techniques applied in the context of multi-science and multi-scale models and, on the other hand, to highlight some of the most promising features of the BG methodology by comparing with examples developed using well-established modelling techniques/software, which could suggest developments or refinements to the current state-of-the-art tools by providing a consistent framework from a structural and energetic point of view. PMID:22670211

  11. Discrimination of geographical origin and detection of adulteration of kudzu root by fluorescence spectroscopy coupled with multi-way pattern recognition

    NASA Astrophysics Data System (ADS)

    Hu, Leqian; Ma, Shuai; Yin, Chunling

    2018-03-01

    In this work, fluorescence spectroscopy combined with multi-way pattern recognition techniques was developed for determining the geographical origin of kudzu root and for detecting and quantifying adulterants in kudzu root. Excitation-emission matrix (EEM) spectra were obtained for 150 pure kudzu root samples of different geographical origins and 150 fake kudzu roots with different adulteration proportions, by recording emission from 330 to 570 nm with excitation in the range of 320-480 nm. Multi-way principal components analysis (M-PCA) and multilinear partial least squares discriminant analysis (N-PLS-DA) methods were used to decompose the excitation-emission matrix datasets. The 150 pure kudzu root samples could be differentiated exactly from each other according to their geographical origins by the M-PCA and N-PLS-DA models. For the adulterated kudzu root samples, N-PLS-DA gave better and more reliable classification results compared with the M-PCA model. The results obtained in this study indicated that EEM spectroscopy coupled with multi-way pattern recognition could be used as an easy, rapid and novel tool to distinguish the geographical origin of kudzu root and detect adulterated kudzu root. Besides, this method is also suitable for determining the geographic origin and detecting the adulteration of other foodstuffs that produce fluorescence.

  12. Simultaneous fast scanning XRF, dark field, phase-, and absorption contrast tomography

    NASA Astrophysics Data System (ADS)

    Medjoubi, Kadda; Bonissent, Alain; Leclercq, Nicolas; Langlois, Florent; Mercère, Pascal; Somogyi, Andrea

    2013-09-01

    Scanning hard X-ray nanoprobe imaging provides a unique tool for probing specimens with high sensitivity and large penetration depth. Moreover, the combination of complementary techniques such as X-ray fluorescence, absorption, phase contrast and dark field imaging gives complete quantitative information on the sample structure, composition and chemistry. The multi-technique "FLYSCAN" data acquisition scheme developed at Synchrotron SOLEIL permits fast continuous scanning imaging and as such makes scanning tomography techniques feasible in a time frame well adapted to typical user experiments. Here we present recent results of simultaneous fast scanning multi-technique tomography performed at SOLEIL. This fast scanning scheme will be implemented at the Nanoscopium beamline for large field-of-view 2D and 3D multimodal imaging.

  13. Scheduling multirobot operations in manufacturing by truncated Petri nets

    NASA Astrophysics Data System (ADS)

    Chen, Qin; Luh, J. Y.

    1995-08-01

    Scheduling of operational sequences in manufacturing processes is one of the important problems in automation. Methods of applying Petri nets to model and analyze the problem with constraints on precedence relations, multiple resource allocation, etc. are available in the literature. Searching for an optimum schedule can be implemented by combining the branch-and-bound technique with the execution of the timed Petri net. The process usually produces a large Petri net which is practically unmanageable. This disadvantage, however, can be handled by a truncation technique which divides the original large Petri net into several smaller subnets. The complexity involved in the analysis of each subnet individually is greatly reduced. However, when the locally optimum schedules of the resulting subnets are combined, they may not yield an overall optimum schedule for the original Petri net. To circumvent this problem, algorithms are developed based on the concepts of Petri net execution and a modified branch-and-bound process. The developed technique is applied to a multi-robot task scheduling problem in a manufacturing work cell.

  14. Combination of Tls Point Clouds and 3d Data from Kinect v2 Sensor to Complete Indoor Models

    NASA Astrophysics Data System (ADS)

    Lachat, E.; Landes, T.; Grussenmeyer, P.

    2016-06-01

    The combination of data coming from multiple sensors is increasingly applied in remote sensing (multi-sensor imagery) but also in cultural heritage or robotics, since it often results in increased robustness and accuracy of the final data. In this paper, the reconstruction of building elements such as window frames or door jambs scanned with a low-cost 3D sensor (Kinect v2) is presented. Their combination within a global point cloud of an indoor scene acquired with a terrestrial laser scanner (TLS) is considered. While the added elements acquired with the Kinect sensor make it possible to reach a better level of detail in the final model, an adapted acquisition protocol may also provide several benefits, for example time savings. The paper aims at analyzing whether the two measurement techniques can be complementary in this context. The limitations encountered during the acquisition and reconstruction steps are also investigated.

  15. A progress report on the ARRA-funded geotechnical site characterization project

    NASA Astrophysics Data System (ADS)

    Martin, A. J.; Yong, A.; Stokoe, K.; Di Matteo, A.; Diehl, J.; Jack, S.

    2011-12-01

    For the past 18 months, the 2009 American Recovery and Reinvestment Act (ARRA) has funded geotechnical site characterizations at 189 seismographic station sites in California and the central U.S. This ongoing effort applies methods involving surface-wave techniques, which include the horizontal-to-vertical spectral ratio (HVSR) technique and one or more of the following: spectral analysis of surface waves (SASW), active and passive multi-channel analysis of surface waves (MASW), and passive array microtremor techniques. From this multi-method approach, shear-wave velocity profiles (VS) and the time-averaged shear-wave velocity of the upper 30 meters (VS30) are estimated for each site. To accommodate the variability in local conditions (e.g., rural and urban soil locales, as well as weathered and competent rock sites), conventional field procedures are often modified ad hoc to fit the unanticipated complexity at each location. For the majority of sites (>80%), fundamental-mode Rayleigh wave dispersion-based techniques are deployed, and where complex geology is encountered, multiple test locations are used. Due to the presence of high-velocity layers, about five percent of the locations require multi-mode inversion of Rayleigh wave (MASW-based) data or 3-D array-based inversion of SASW dispersion data, in combination with shallow P-wave seismic refraction and/or HVSR results. Where a strong impedance contrast (i.e. soil over rock) exists at shallow depth (about 10% of sites), dominant higher modes limit the use of Rayleigh wave dispersion techniques. Here, use of the Love wave dispersion technique, along with seismic refraction and/or HVSR data, is required to model the presence of shallow bedrock. At a small percentage of the sites, surface wave techniques are found not suitable for stand-alone deployment, and site characterization is limited to the use of the seismic refraction technique. A USGS Open-File Report, describing the surface geology, VS profile and the calculated VS30 for each site, will be prepared after the completion of the project in November 2011.
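
    As a reminder of the quantity estimated above, VS30 is a travel-time average rather than a thickness-weighted mean of layer velocities: VS30 = 30 / sum(h_i / VS_i) over the layers spanning the upper 30 m. A minimal sketch with a made-up three-layer profile:

      # VS30 = 30 / sum(h_i / Vs_i) over layers spanning the upper 30 m.
      layers = [(5.0, 180.0), (10.0, 320.0), (15.0, 550.0)]  # (thickness m, Vs m/s)
      assert sum(h for h, _ in layers) == 30.0
      vs30 = 30.0 / sum(h / v for h, v in layers)
      print(f"VS30 = {vs30:.0f} m/s")  # ~348 m/s, below the 412 m/s naive mean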

  16. Secure multi-party communication with quantum key distribution managed by trusted authority

    DOEpatents

    Nordholt, Jane Elizabeth; Hughes, Richard John; Peterson, Charles Glen

    2013-07-09

    Techniques and tools for implementing protocols for secure multi-party communication after quantum key distribution ("QKD") are described herein. In example implementations, a trusted authority facilitates secure communication between multiple user devices. The trusted authority distributes different quantum keys by QKD under trust relationships with different users. The trusted authority determines combination keys using the quantum keys and makes the combination keys available for distribution (e.g., for non-secret distribution over a public channel). The combination keys facilitate secure communication between two user devices even in the absence of QKD between the two user devices. With the protocols, benefits of QKD are extended to multi-party communication scenarios. In addition, the protocols can retain benefit of QKD even when a trusted authority is offline or a large group seeks to establish secure communication within the group.

  17. Secure multi-party communication with quantum key distribution managed by trusted authority

    DOEpatents

    Hughes, Richard John; Nordholt, Jane Elizabeth; Peterson, Charles Glen

    2015-01-06

    Techniques and tools for implementing protocols for secure multi-party communication after quantum key distribution ("QKD") are described herein. In example implementations, a trusted authority facilitates secure communication between multiple user devices. The trusted authority distributes different quantum keys by QKD under trust relationships with different users. The trusted authority determines combination keys using the quantum keys and makes the combination keys available for distribution (e.g., for non-secret distribution over a public channel). The combination keys facilitate secure communication between two user devices even in the absence of QKD between the two user devices. With the protocols, benefits of QKD are extended to multi-party communication scenarios. In addition, the protocols can retain benefit of QKD even when a trusted authority is offline or a large group seeks to establish secure communication within the group.

  18. Application of Multi-Objective Human Learning Optimization Method to Solve AC/DC Multi-Objective Optimal Power Flow Problem

    NASA Astrophysics Data System (ADS)

    Cao, Jia; Yan, Zheng; He, Guangyu

    2016-06-01

    This paper introduces an efficient algorithm, the multi-objective human learning optimization method (MOHLO), to solve the AC/DC multi-objective optimal power flow problem (MOPF). Firstly, the model of AC/DC MOPF including wind farms is constructed, which includes three objective functions: operating cost, power loss, and pollutant emission. Combining the non-dominated sorting technique and the crowding distance index, the MOHLO method can be derived, which involves an individual learning operator, a social learning operator, a random exploration learning operator and adaptive strategies. Both the proposed MOHLO method and the non-dominated sorting genetic algorithm II (NSGA-II) are tested on an improved IEEE 30-bus AC/DC hybrid system. Simulation results show that the MOHLO method has excellent search efficiency and a powerful ability to find optimal solutions. Above all, the MOHLO method can obtain a more complete Pareto front than the NSGA-II method. However, the choice of the final solution from the Pareto front depends mainly on whether the decision makers take an economic point of view or an energy-saving and emission-reduction point of view.
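
    For reference, the crowding distance index mentioned above (shared with NSGA-II) sums, for each objective, the normalized gap between a solution's two neighbors on the sorted front, assigning boundary solutions infinite distance so they are always retained. A minimal sketch on a toy two-objective front:

      import numpy as np

      def crowding_distance(F):
          """Crowding distance for an (n_points, n_objectives) front F."""
          n, m = F.shape
          dist = np.zeros(n)
          for j in range(m):
              order = np.argsort(F[:, j])
              span = F[order[-1], j] - F[order[0], j]
              dist[order[0]] = dist[order[-1]] = np.inf  # keep boundary points
              if span > 0:
                  dist[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
          return dist

      # Toy two-objective front: cost vs. emission, both minimized.
      front = np.array([[1.0, 9.0], [2.0, 6.0], [3.0, 5.0], [6.0, 2.0], [9.0, 1.0]])
      print(crowding_distance(front))  # [inf, 0.75, 1.0, 1.25, inf]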

  19. Multi-GPU Acceleration of Branchless Distance Driven Projection and Backprojection for Clinical Helical CT.

    PubMed

    Mitra, Ayan; Politte, David G; Whiting, Bruce R; Williamson, Jeffrey F; O'Sullivan, Joseph A

    2017-01-01

    Model-based image reconstruction (MBIR) techniques have the potential to generate high-quality images from noisy measurements and a small number of projections, which can reduce the x-ray dose to patients. These MBIR techniques rely on projection and backprojection to refine an image estimate. One of the widely used projectors for these modern MBIR-based techniques is the branchless distance-driven (DD) projector and backprojector. While this method produces superior-quality images, the computational cost of iterative updates keeps it from being ubiquitous in clinical applications. In this paper, we provide several new parallelization ideas for concurrent execution of the DD projectors on multi-GPU systems using CUDA programming tools. We have introduced some novel schemes for dividing the projection data and image voxels over multiple GPUs to avoid runtime overhead and inter-device synchronization issues. We have also reduced the complexity of the overlap calculation of the algorithm by eliminating the common projection plane and directly projecting the detector boundaries onto image voxel boundaries. To reduce the time required for calculating the overlap between the detector edges and image voxel boundaries, we have proposed a pre-accumulation technique to accumulate image intensities in perpendicular 2D image slabs (from a 3D image) before projection and after backprojection, to ensure our DD kernels run faster in parallel GPU threads. For the implementation of our iterative MBIR technique we use a parallel multi-GPU version of the alternating minimization (AM) algorithm with a penalized-likelihood update. The time performance using our proposed reconstruction method with Siemens Sensation 16 patient scan data shows an average speedup of 24 times using a single TITAN X GPU and 74 times using 3 TITAN X GPUs in parallel for combined projection and backprojection.

  20. A hybrid degradation tendency measurement method for mechanical equipment based on moving window and Grey-Markov model

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han

    2017-11-01

    Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, existing techniques and methodologies for degradation measurement still face challenges, such as the lack of an appropriate degradation indicator, insufficient accuracy, and poor capability to track data fluctuations. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and the Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving window algorithm is integrated with the Grey-Markov model for dynamic updating of the model. Two key parameters, namely the step size and the number of states, contribute to the adaptive modeling and multi-step prediction. Finally, three types of combination prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method has better performance, in terms of both measurement accuracy and data fluctuation tracking, in comparison with other conventional methods.
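
    For background, the grey component of a Grey-Markov hybrid is typically the GM(1,1) model: accumulate the series once, fit the development coefficient a and grey input b by least squares on the whitened equation, predict the accumulated series, and difference back. The sketch below shows this classic GM(1,1) recipe on a made-up degradation index; the moving window and Markov residual correction described above are omitted.

      import numpy as np

      def gm11_forecast(x, steps=1):
          """Classic GM(1,1): fit on a positive series x, forecast ahead."""
          x = np.asarray(x, dtype=float)
          x1 = np.cumsum(x)                        # 1-AGO accumulation
          z = 0.5 * (x1[1:] + x1[:-1])             # background (mean) sequence
          B = np.column_stack([-z, np.ones(len(z))])
          a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
          k = np.arange(len(x) + steps)
          x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
          x_hat = np.diff(x1_hat, prepend=0.0)     # inverse accumulation
          return x_hat[len(x):]

      # Toy degradation index, slowly rising; forecast the next two values.
      series = [0.10, 0.12, 0.15, 0.19, 0.24, 0.30]
      print(gm11_forecast(series, steps=2))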

  1. Multi-objective optimization for generating a weighted multi-model ensemble

    NASA Astrophysics Data System (ADS)

    Lee, H.

    2017-12-01

    Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. There is a consensus on the assignment of weighting factors based on a single evaluation metric. When considering only one evaluation metric, the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, the approach confronts a big challenge when there are multiple metrics under consideration. When considering multiple evaluation metrics, it is obvious that a simple averaging of multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. The current study applies the multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions based on a range of evaluation metrics, to combining multiple performance metrics for the global climate models and their dynamically downscaled regional climate simulations over North America and generating a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.

  2. Data-driven model reference control of MIMO vertical tank systems with model-free VRFT and Q-Learning.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian

    2018-02-01

    This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of the Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure an initial stabilizing controller to be learned from few input-output data and it can be next used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. This data is used to learn significantly superior nonlinear state feedback neural networks controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi input-multi output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  3. Aerodynamic Characteristics of High Speed Trains under Cross Wind Conditions

    NASA Astrophysics Data System (ADS)

    Chen, W.; Wu, S. P.; Zhang, Y.

    2011-09-01

    Numerical simulations of the two train models in cross wind were carried out in this paper. The three-dimensional compressible Reynolds-averaged Navier-Stokes equations (RANS), combined with the standard k-ɛ turbulence model, were solved on multi-block hybrid grids by a second-order upwind finite volume technique. The impact of the fairing on the aerodynamic characteristics of the train models was analyzed. It is shown that the flow separates on the fairing and a strong vortex is generated; the pressure on the upper middle car decreases dramatically, which leads to a large lift force. The fairing changes the basic flow patterns around the trains. In addition, formulas for the aerodynamic force coefficients at small yaw angles of up to 24° were derived.

  4. Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia

    NASA Astrophysics Data System (ADS)

    Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg

    2013-03-01

    Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data-demanding, requiring the processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). The multiple linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level. The optimal input combination was found to comprise the current sea level together with the five previous hourly values. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error and variance-accounted-for statistics were used as comparison criteria. The results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while the adaptive learning rate and Levenberg-Marquardt algorithms were the most suitable for training the ANN models. The ANFIS and ANN models gave similar forecasts and, for all prediction intervals, performed better than the ARMA models developed for the same purpose.
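
    The MLR lag-selection step lends itself to a compact sketch. The code below ranks lagged sea-level inputs by the magnitude of their least-squares coefficients on a synthetic tidal signal; the data, lag count, and ranking criterion are illustrative assumptions, not the paper's exact procedure.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(2000.0)
    # Toy hourly "sea level": one tidal constituent plus noise.
    level = np.sin(2 * np.pi * t / 12.42) + 0.1 * rng.normal(size=t.size)

    max_lag = 8
    # Column k holds level(t - k - 1) for each target time t.
    X = np.column_stack([level[max_lag - k - 1 : -k - 1] for k in range(max_lag)])
    y = level[max_lag:]

    A = np.column_stack([np.ones(len(y)), X])    # intercept + lagged inputs
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Rank lags by absolute regression weight: larger = more informative input.
    order = np.argsort(-np.abs(coef[1:])) + 1
    print("lags ranked by |coefficient|:", order.tolist())
    ```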

  5. An improved lattice Boltzmann scheme for multiphase fluid with multi-range interactions

    NASA Astrophysics Data System (ADS)

    Maquignon, Nicolas; Duchateau, Julien; Roussel, Gilles; Rousselle, François; Renaud, Christophe

    2014-10-01

    Modeling of fluids with liquid to gas phase transition has become important for understanding many environmental and industrial processes. Such simulations need new techniques, because traditional solvers are often limited. The Lattice Boltzmann Model (LBM) makes it possible to simulate complex fluids, because its mesoscopic nature allows additional physics to be incorporated in comparison with usual methods. In this work, an improved lattice Boltzmann model for phase transition flow is introduced. First, the state of the art for the Shan & Chen [1] [2] (SC) type of LBM is reviewed. Then, a link to real thermodynamics is established via the Maxwell equal-areas construction. Convergence to isothermal liquid-vapor equilibrium is shown and discussed. The inclusion of an equation of state for a real fluid and a better incorporation of the force term are presented [4] [5]. Multi-range interactions have been used with the SC model [8], but they have not yet been applied to a real fluid with a non-ideal equation of state. In this work, we evaluate this model when it is applied to real liquid-vapor equilibrium. We show that important differences are found in the evaluation of the gas density. In order to recover thermodynamic consistency, we use a new scheme for the calculation of the force term, which combines the multi-range model with the numerical weighting used by Gong & Cheng [6] [7]. We show the superiority of our new model by studying convergence to equilibrium values over a large temperature range, and we show that the spurious velocities remaining at equilibrium are decreased.
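
    The multi-range idea, interaction forces summed over more than one neighbor shell, can be sketched in one dimension. The pseudopotential, coupling G, and shell weights below are placeholders, not the weighting scheme of the paper.

    ```python
    import numpy as np

    def psi(rho, rho0=1.0):
        # A common Shan-Chen-style pseudopotential (placeholder choice).
        return rho0 * (1.0 - np.exp(-rho / rho0))

    def multi_range_force(rho, G=-5.0, w1=2.0 / 3.0, w2=1.0 / 12.0):
        # F(x) = -G psi(x) * sum over shells of w(|e|) psi(x + e) e, in 1-D.
        p = psi(rho)
        shell1 = np.roll(p, -1) - np.roll(p, 1)            # neighbors at +/-1
        shell2 = 2.0 * (np.roll(p, -2) - np.roll(p, 2))    # neighbors at +/-2
        return -G * p * (w1 * shell1 + w2 * shell2)

    rho = np.ones(100)
    rho[40:60] = 3.0                             # a "liquid" slab in "vapor"
    F = multi_range_force(rho)
    print("largest |force| at nodes:", np.argsort(-np.abs(F))[:4])  # interfaces
    ```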

  6. An improved lattice Boltzmann scheme for multiphase fluid with multi-range interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maquignon, Nicolas; Duchateau, Julien; Roussel, Gilles

    2014-10-06

    Modeling of fluids with liquid to gas phase transition has become important for understanding many environmental and industrial processes. Such simulations need new techniques, because traditional solvers are often limited. The Lattice Boltzmann Model (LBM) makes it possible to simulate complex fluids, because its mesoscopic nature allows additional physics to be incorporated in comparison with usual methods. In this work, an improved lattice Boltzmann model for phase transition flow is introduced. First, the state of the art for the Shan and Chen (SC) type of LBM is reviewed. Then, a link to real thermodynamics is established via the Maxwell equal-areas construction. Convergence to isothermal liquid-vapor equilibrium is shown and discussed. The inclusion of an equation of state for a real fluid and a better incorporation of the force term are presented. Multi-range interactions have been used with the SC model, but they have not yet been applied to a real fluid with a non-ideal equation of state. In this work, we evaluate this model when it is applied to real liquid-vapor equilibrium. We show that important differences are found in the evaluation of the gas density. In order to recover thermodynamic consistency, we use a new scheme for the calculation of the force term, which combines the multi-range model with the numerical weighting used by Gong and Cheng. We show the superiority of our new model by studying convergence to equilibrium values over a large temperature range, and we show that the spurious velocities remaining at equilibrium are decreased.

  7. Multi-layer plastic/glass microfluidic systems containing electrical and mechanical functionality.

    PubMed

    Han, Arum; Wang, Olivia; Graff, Mason; Mohanty, Swomitra K; Edwards, Thayne L; Han, Ki-Ho; Bruno Frazier, A

    2003-08-01

    This paper describes an approach for fabricating multi-layer microfluidic systems from a combination of glass and plastic materials. Methods and characterization results for the microfabrication technologies underlying the process flow are presented. The approach is used to fabricate and characterize multi-layer plastic/glass microfluidic systems containing electrical and mechanical functionality. Hot embossing, heat staking of plastics, injection molding, microstenciling of electrodes, and stereolithography were combined with conventional MEMS fabrication techniques to realize the multi-layer systems. The approach enabled the integration of multiple plastic/glass materials into a single monolithic system, provided a solution for the integration of electrical functionality throughout the system, provided a mechanism for the inclusion of microactuators such as micropumps/valves, and provided an interconnect technology for interfacing fluids and electrical components between the micro system and the macro world.

  8. A general Bayesian framework for calibrating and evaluating stochastic models of annual multi-site hydrological data

    NASA Astrophysics Data System (ADS)

    Frost, Andrew J.; Thyer, Mark A.; Srikanthan, R.; Kuczera, George

    2007-07-01

    Multi-site simulation of hydrological data is required for drought risk assessment of large multi-reservoir water supply systems. In this paper, a general Bayesian framework is presented for calibrating and evaluating stochastic models of multi-site hydrological data at annual timescales. Models included within this framework are the hidden Markov model (HMM) and the widely used lag-1 autoregressive (AR(1)) model. These models are extended by the inclusion of a Box-Cox transformation and a spatial correlation function in a multi-site setting. Parameter uncertainty is evaluated using Markov chain Monte Carlo techniques. Models are evaluated by their ability to reproduce a range of important extreme statistics and are compared using Bayesian model selection techniques that evaluate model probabilities. The case study, using multi-site annual rainfall data from catchments which contribute to Sydney's main water supply, provided the following results. Firstly, in terms of model probabilities and diagnostics, the inclusion of the Box-Cox transformation was preferred. Secondly, the AR(1) and HMM performed similarly, while some other proposed AR(1)/HMM models with regionally pooled parameters had greater posterior probability than these two models. The practical significance of parameter and model uncertainty was illustrated using a case study involving drought security analysis for urban water supply. It was shown that ignoring parameter uncertainty resulted in a significant overestimation of reservoir yield and an underestimation of system vulnerability to severe drought.
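
    As a minimal sketch of the MCMC parameter-uncertainty step, assuming a single site, a fixed Box-Cox exponent, flat priors, and the conditional AR(1) likelihood, a random-walk Metropolis sampler can be written as follows; the full multi-site framework of the paper is much richer.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def boxcox(x, lam=0.3):
        return (x ** lam - 1.0) / lam

    def log_like(theta, z):
        # Conditional AR(1) log-likelihood; theta = (phi, log sigma).
        phi, log_sig = theta
        if not -1.0 < phi < 1.0:
            return -np.inf
        sig = np.exp(log_sig)
        resid = z[1:] - phi * z[:-1]
        return -0.5 * np.sum(resid ** 2) / sig ** 2 - resid.size * np.log(sig)

    x = rng.gamma(shape=5.0, scale=100.0, size=80)   # toy annual rainfall record
    z = boxcox(x)
    z -= z.mean()

    theta, ll, samples = np.array([0.0, 0.0]), -np.inf, []
    for _ in range(20000):
        prop = theta + rng.normal(0.0, 0.05, size=2)  # random-walk proposal
        ll_prop = log_like(prop, z)
        if np.log(rng.random()) < ll_prop - ll:       # Metropolis acceptance
            theta, ll = prop, ll_prop
        samples.append(theta.copy())

    post = np.array(samples[5000:])                   # discard burn-in
    print("phi posterior mean +/- sd:", post[:, 0].mean().round(3),
          post[:, 0].std().round(3))
    ```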

  9. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses

    PubMed Central

    Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy

    2015-01-01

    Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579
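
    The weighting step itself is simple to illustrate. In the conceptual sketch below, binning and supergene-tree re-estimation are stubbed out with mock Newick strings, and each bin's tree is replicated in proportion to the bin size before being handed to a summary method; this mirrors the weighting described above but is not the authors' pipeline.

    ```python
    # bin -> (genes assigned to the bin, tree re-estimated from the supergene)
    bins = {
        "bin1": (["g1", "g2", "g3"], "((A,B),(C,D));"),
        "bin2": (["g4"], "((A,C),(B,D));"),
        "bin3": (["g5", "g6"], "((A,D),(B,C));"),
    }

    weighted_trees = []
    for genes, tree in bins.values():
        weighted_trees.extend([tree] * len(genes))   # weight = bin size

    # 'weighted_trees' now has one entry per original gene, so a summary
    # method (e.g., MP-EST) sees bin trees in proportion to their support.
    print(len(weighted_trees), "trees passed to the summary method")
    ```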

  10. Multi-sensory integration in a small brain

    NASA Astrophysics Data System (ADS)

    Gepner, Ruben; Wolk, Jason; Gershow, Marc

    Understanding how fluctuating multi-sensory stimuli are integrated and transformed in neural circuits has proved a difficult task. To address this question, we study the sensorimotor transformations happening in the brain of the Drosophila larva, a tractable model system with about 10,000 neurons. Using genetic tools that allow us to manipulate the activity of individual brain cells through the larva's transparent body, we observe the stochastic decisions made by freely behaving animals as their visual and olfactory environments fluctuate independently. We then use simple linear-nonlinear models to correlate outputs with relevant features in the inputs, and adaptive filtering processes to track changes in the relevant parameters used by the larva's brain to make decisions. We show how these techniques allow us to probe how statistics of stimuli from different sensory modalities combine to affect behavior, and can potentially guide our understanding of how neural circuits are anatomically and functionally integrated. Supported by NIH Grant 1DP2EB022359 and NSF Grant PHY-1455015.

  11. Multi-frequency inversion-charge pumping for charge separation and mobility analysis in high-k/InGaAs metal-oxide-semiconductor field-effect transistors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djara, V.; Cherkaoui, K.; Negara, M. A.

    2015-11-28

    An alternative multi-frequency inversion-charge pumping (MFICP) technique was developed to directly separate the inversion charge density (N_inv) from the trapped charge density in high-k/InGaAs metal-oxide-semiconductor field-effect transistors (MOSFETs). This approach relies on fitting the frequency response of border traps, obtained from inversion-charge pumping measurements performed over a wide range of frequencies at room temperature on a single MOSFET, using a modified charge trapping model. The obtained model yielded the capture time constant and the density of border traps located at energy levels aligned with the InGaAs conduction band. Moreover, the combination of MFICP and pulsed I_d-V_g measurements enabled an accurate extraction and analysis of effective mobility vs. N_inv. The data obtained using the MFICP approach are consistent with the most recent reports on high-k/InGaAs.

  12. Application of multi-grid methods for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    The application of a class of multi-grid methods to the solution of the Navier-Stokes equations for two-dimensional laminar flow problems is discussed. The methods consist of combining the full approximation scheme-full multi-grid technique (FAS-FMG) with point-, line-, or plane-relaxation routines for solving the Navier-Stokes equations in primitive variables. The performance of the multi-grid methods is compared to that of several single-grid methods. The results show that much faster convergence can be achieved through the use of the multi-grid approach than through the various suggestions for improving single-grid methods. The importance of the choice of relaxation scheme for the multi-grid method is illustrated.
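
    The FAS-FMG scheme itself is involved, but the smooth-restrict-correct-prolong skeleton it shares with all multi-grid methods can be sketched with a linear two-grid V-cycle for a 1-D Poisson problem; the smoother counts, transfer operators, and coarse solve below are illustrative simplifications.

    ```python
    import numpy as np

    def gauss_seidel(u, f, h, sweeps):
        # Relaxation for the 1-D discrete u'' = f with zero boundary values.
        for _ in range(sweeps):
            for i in range(1, len(u) - 1):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] - h * h * f[i])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (u[:-2] - 2 * u[1:-1] + u[2:]) / h ** 2
        return r

    def v_cycle(u, f, h):
        u = gauss_seidel(u, f, h, sweeps=3)               # pre-smoothing
        rc = residual(u, f, h)[::2].copy()                # restrict residual
        ec = gauss_seidel(np.zeros_like(rc), rc, 2 * h, sweeps=50)  # coarse solve
        u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # prolong
        return gauss_seidel(u, f, h, sweeps=3)            # post-smoothing

    n = 129
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    f = -np.pi ** 2 * np.sin(np.pi * x)                   # exact u = sin(pi x)
    u = np.zeros(n)
    for _ in range(10):
        u = v_cycle(u, f, h)
    print("max error vs exact solution:", np.abs(u - np.sin(np.pi * x)).max())
    ```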

  13. Application of multi-grid methods for solving the Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Demuren, A. O.

    1989-01-01

    This paper presents the application of a class of multi-grid methods to the solution of the Navier-Stokes equations for two-dimensional laminar flow problems. The methods consist of combining the full approximation scheme-full multi-grid technique (FAS-FMG) with point-, line- or plane-relaxation routines for solving the Navier-Stokes equations in primitive variables. The performance of the multi-grid methods is compared to that of several single-grid methods. The results show that much faster convergence can be achieved through the use of the multi-grid approach than through the various suggestions for improving single-grid methods. The importance of the choice of relaxation scheme for the multi-grid method is illustrated.

  14. The Geomatics Contribution for the Valorisation Project in the Rocca of San Silvestro Landscape Site

    NASA Astrophysics Data System (ADS)

    Brocchini, D.; Chiabrando, F.; Colucci, E.; Sammartano, G.; Spanò, A.; Teppati Losè, L.; Villa, A.

    2017-05-01

    This paper presents an emblematic project in which several multi-sensor strategies for spatial data acquisition and management, range-based and image-based, were combined to create a series of integrated territorial- and architectural-scale products characterized by a rich multi-content nature. The work presented here was carried out at a test site composed of an ensemble of diversified cultural deposits; the objects that were surveyed and modelled range from the landscape with its widespread mining sites, through the main tower with its defensive role, the urban configuration of the settlement, and the building systems and techniques, to a medieval mine. For this reason, the Rocca of San Silvestro represented a perfect test case, due to its complex and multi-stratified character. This archaeological site is a medieval fortified village near the municipality of Campiglia Marittima (LI), Italy. The Rocca is part of an Archaeological Mines Park and is included in the Parchi della Val di Cornia (a system of archaeological parks, natural parks and museums in the south-west of Tuscany). The fundamental role of deep knowledge of a cultural artefact before planning a restoration and valorisation project is widely recognized; the qualitative and quantitative knowledge provided by geomatics techniques is part of this process. The paper presents the different techniques that were used and the products that were obtained, and focuses on some of the mapping and WEB GIS applications and analyses that were performed and the considerations that were made.

  15. A generative model for segmentation of tumor and organs-at-risk for radiation therapy planning of glioblastoma patients

    NASA Astrophysics Data System (ADS)

    Agn, Mikael; Law, Ian; Munck af Rosenschöld, Per; Van Leemput, Koen

    2016-03-01

    We present a fully automated generative method for simultaneous brain tumor and organs-at-risk segmentation in multi-modal magnetic resonance images. The method combines an existing whole-brain segmentation technique with a spatial tumor prior, which uses convolutional restricted Boltzmann machines to model tumor shape. The method is not tuned to any specific imaging protocol and can simultaneously segment the gross tumor volume, peritumoral edema and healthy tissue structures relevant for radiotherapy planning. We validate the method on a manually delineated clinical data set of glioblastoma patients by comparing segmentations of gross tumor volume, brainstem and hippocampus. The preliminary results demonstrate the feasibility of the method.

  16. A PageRank-based reputation model for personalised manufacturing service recommendation

    NASA Astrophysics Data System (ADS)

    Zhang, W. Y.; Zhang, S.; Guo, S. S.

    2017-05-01

    The number of manufacturing services for cross-enterprise business collaborations is increasing rapidly because of the explosive growth of Web service technologies. This trend demands intelligent and robust models to address information overload in order to enable efficient discovery of manufacturing services. In this paper, we present a personalised manufacturing service recommendation approach, which combines a PageRank-based reputation model and a collaborative filtering technique in a unified framework for recommending the right manufacturing services to an active service user for supply chain deployment. The novel aspect of this research is adapting the PageRank algorithm to a network of service-oriented multi-echelon supply chain in order to determine both user reputation and service reputation. In addition, it explores the use of these methods in alleviating data sparsity and cold start problems that hinder traditional collaborative filtering techniques. A case study is conducted to validate the practicality and effectiveness of the proposed approach in recommending the right manufacturing services to active service users.
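
    The core of the reputation model, PageRank itself, is compact enough to sketch. The toy graph and damping factor below are illustrative; the paper's adaptation to a service-oriented multi-echelon supply chain network is not reproduced here.

    ```python
    import numpy as np

    def pagerank(A, d=0.85, tol=1e-10):
        n = A.shape[0]
        out = A.sum(axis=1, keepdims=True)
        # Row-normalize; dangling nodes get a uniform distribution.
        P = np.where(out > 0, A / np.where(out == 0, 1, out), 1.0 / n)
        r = np.full(n, 1.0 / n)
        while True:                              # power iteration
            r_next = (1 - d) / n + d * r @ P
            if np.abs(r_next - r).sum() < tol:
                return r_next
            r = r_next

    # Nodes: users and services in one graph; an edge = "invoked / rated".
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print("reputation scores:", np.round(pagerank(A), 3))
    ```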

  17. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials.

    PubMed

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A; Burgueño, Juan; Bandeira E Sousa, Massaine; Crossa, José

    2018-03-28

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines ([Formula: see text]) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (four model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines with the GK method gave important savings in computing time compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. Copyright © 2018 Cuevas et al.
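
    The two kernels being compared can be sketched directly. Below, a simulated marker matrix yields the linear genomic relationship kernel used by GB and a Gaussian kernel (GK) built from squared marker distances; the centering, scaling, and median-distance bandwidth are common choices assumed here, not necessarily those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n_lines, n_markers = 50, 500
    X = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)  # 0/1/2 codes
    X = (X - X.mean(axis=0)) / X.std(axis=0)     # center and scale markers

    G = X @ X.T / n_markers                      # GB: linear kernel (GRM)

    D2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    h = np.median(D2[np.triu_indices(n_lines, 1)])   # bandwidth: median distance
    K = np.exp(-D2 / h)                          # GK: Gaussian kernel

    print("GB diagonal mean:", G.diagonal().mean().round(3))
    print("GK off-diagonal mean:", K[np.triu_indices(n_lines, 1)].mean().round(3))
    ```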

  18. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    PubMed Central

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using GB and GK kernels (four model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the model-method combinations with G×E (MDs and MDe) including the random intercepts of the lines with the GK method gave important savings in computing time compared with the G×E interaction multi-environment models with unstructured variance-covariances, but with lower genomic prediction accuracy. PMID:29476023

  19. A novel hybrid MCDM model for performance evaluation of research and technology organizations based on BSC approach.

    PubMed

    Varmazyar, Mohsen; Dehghanbaghi, Maryam; Afkhami, Mehdi

    2016-10-01

    The Balanced Scorecard (BSC) is a strategic evaluation tool that uses both financial and non-financial indicators to determine the business performance of organizations or companies. In this paper, a new integrated approach based on the BSC and multi-criteria decision making (MCDM) methods is proposed to evaluate the performance of the research centers of a research and technology organization (RTO) in Iran. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is employed to reflect the interdependencies among BSC perspectives. Then, the Analytic Network Process (ANP) is utilized to weight the indices influencing the considered problem. In the next step, we apply four MCDM methods, namely Additive Ratio Assessment (ARAS), Complex Proportional Assessment (COPRAS), Multi-Objective Optimization by Ratio Analysis (MOORA), and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), to rank the alternatives. Finally, the utility interval technique is applied to combine the ranking results of the MCDM methods. Weighted utility intervals are computed by constructing a correlation matrix between the ranking methods. A real case is presented to show the efficacy of the proposed approach. Copyright © 2016 Elsevier Ltd. All rights reserved.
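
    One of the four ranking methods, TOPSIS, can be illustrated in a few lines. The decision matrix and weights below are placeholders (in the paper the weights come from DEMATEL/ANP), and all criteria are treated as benefit-type.

    ```python
    import numpy as np

    X = np.array([[7., 9., 6., 8.],        # alternative 1: scores per criterion
                  [8., 7., 8., 6.],        # alternative 2
                  [6., 8., 9., 7.]])       # alternative 3
    w = np.array([0.4, 0.3, 0.2, 0.1])     # criterion weights (placeholder)

    V = w * X / np.linalg.norm(X, axis=0)  # weighted, vector-normalized matrix
    ideal, anti = V.max(axis=0), V.min(axis=0)
    d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal
    closeness = d_neg / (d_pos + d_neg)    # relative closeness to the ideal
    print("ranking (best first):", (np.argsort(-closeness) + 1).tolist())
    ```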

  20. QSAR prediction of additive and non-additive mixture toxicities of antibiotics and pesticide.

    PubMed

    Qin, Li-Tang; Chen, Yu-Han; Zhang, Xin; Mo, Ling-Yun; Zeng, Hong-Hu; Liang, Yan-Peng

    2018-05-01

    Antibiotics and pesticides may exist as mixtures in the real environment. The combined effect of a mixture can be either additive or non-additive (synergistic or antagonistic). However, no effective approach exists for predicting the synergistic and antagonistic toxicities of mixtures. In this study, we developed a quantitative structure-activity relationship (QSAR) model for the toxicities (half-effect concentration, EC50) of 45 binary and multi-component mixtures composed of two antibiotics and four pesticides. The acute toxicities of the single compounds and the mixtures toward Aliivibrio fischeri were tested. A genetic algorithm was used to obtain the optimized model with three theoretical descriptors. Various internal and external validation techniques indicated that the QSAR model, with a coefficient of determination of 0.9366 and a root mean square error of 0.1345, predicted the toxicities of the 45 mixtures, which presented additive, synergistic, and antagonistic effects. Compared with the traditional concentration addition and independent action models, the QSAR model exhibited an advantage in predicting mixture toxicity. Thus, the presented approach may be able to fill the gaps in predicting the non-additive toxicities of binary and multi-component mixtures. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Düchs, Dominik; Delaney, Kris T., E-mail: kdelaney@mrl.ucsb.edu; Fredrickson, Glenn H., E-mail: ghf@mrl.ucsb.edu

    Field-theoretic models have been used extensively to study the phase behavior of inhomogeneous polymer melts and solutions, both in self-consistent mean-field calculations and in numerical simulations of the full theory capturing composition fluctuations. The models commonly used can be grouped into two categories, namely, species models and exchange models. Species models involve integrations of functionals that explicitly depend on fields originating both from species density operators and their conjugate chemical potential fields. In contrast, exchange models retain only linear combinations of the chemical potential fields. In the two-component case, development of exchange models has been instrumental in enabling stable complex Langevin (CL) simulations of the full complex-valued theory. No comparable stable CL approach has yet been established for field theories of the species type. Here, we introduce an extension of the exchange model to an arbitrary number of components, namely, the multi-species exchange (MSE) model, which greatly expands the classes of soft material systems that can be accessed by the complex Langevin simulation technique. We demonstrate the stability and accuracy of the MSE-CL sampling approach using numerical simulations of triblock and tetrablock terpolymer melts, and tetrablock quaterpolymer melts. This method should enable studies of a wide range of fluctuation phenomena in multiblock/multi-species polymer blends and composites.

  2. An enhanced multi-channel bacterial foraging optimization algorithm for MIMO communication system

    NASA Astrophysics Data System (ADS)

    Palanimuthu, Senthilkumar Jayalakshmi; Muthial, Chandrasekaran

    2017-04-01

    Channel estimation and optimisation are the main challenging tasks in Multi Input Multi Output (MIMO) wireless communication systems. In this work, a Multi-Channel Bacterial Foraging Optimization Algorithm approach is proposed for antenna selection in a transmission area. The main advantage of this method is that it effectively reduces the loss of bandwidth during data transmission. Here, we considered channel estimation and optimisation for improving the transmission speed and reducing the unused bandwidth. Initially, the message is given to the input of the communication system. Then, a symbol mapping process is performed to convert the message into signals, which are encoded using a space-time encoding technique. The single signal is divided into multiple signals that are given to the input of the space-time precoder, and multiplexing is applied for transmission channel estimation. In this paper, the Rayleigh channel, a Gaussian-distribution-type channel, is selected based on the bandwidth range. Demultiplexing, the reverse function of multiplexing, is then applied to the obtained signal to split the combined signal arriving from the medium back into the original information signals. Furthermore, the long-term evolution technique is used for scheduling time to channels during transmission, and a hidden Markov model is employed to predict the channel state information. Finally, the signals are decoded and the reconstructed signal is obtained after performing the scheduling process. The experimental results evaluate the performance of the proposed MIMO communication system in terms of bit error rate, mean squared error, average throughput, outage capacity and signal-to-interference-noise ratio.

  3. Using Dynamic Multi-Task Non-Negative Matrix Factorization to Detect the Evolution of User Preferences in Collaborative Filtering

    PubMed Central

    Ju, Bin; Qian, Yuntao; Ye, Minchao; Ni, Rong; Zhu, Chenxi

    2015-01-01

    Predicting what items will be selected by a target user in the future is an important function for recommendation systems. Matrix factorization techniques have been shown to achieve good performance on temporal rating-type data, but little is known about temporal item selection data. In this paper, we developed a unified model that combines Multi-task Non-negative Matrix Factorization and Linear Dynamical Systems to capture the evolution of user preferences. Specifically, user and item features are projected into latent factor space by factoring co-occurrence matrices into a common basis item-factor matrix and multiple factor-user matrices. Moreover, we represented both within and between relationships of multiple factor-user matrices using a state transition matrix to capture the changes in user preferences over time. The experiments show that our proposed algorithm outperforms the other algorithms on two real datasets, which were extracted from Netflix movies and Last.fm music. Furthermore, our model provides a novel dynamic topic model for tracking the evolution of the behavior of a user over time. PMID:26270539
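
    The factorization at the core of this model is standard NMF, sketched below with multiplicative (Lee-Seung) updates under the Frobenius objective; the multi-task coupling and the linear-dynamical state transition between time steps are omitted, and the data are random placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    V = rng.random((30, 20))                     # toy item-by-user co-occurrence
    k = 5                                        # number of latent factors
    W = rng.random((30, k)) + 0.1                # basis item-factor matrix
    H = rng.random((k, 20)) + 0.1                # factor-user matrix

    for _ in range(200):                         # multiplicative updates
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)

    err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
    print("relative reconstruction error:", round(float(err), 4))
    ```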

  4. Using Dynamic Multi-Task Non-Negative Matrix Factorization to Detect the Evolution of User Preferences in Collaborative Filtering.

    PubMed

    Ju, Bin; Qian, Yuntao; Ye, Minchao; Ni, Rong; Zhu, Chenxi

    2015-01-01

    Predicting what items will be selected by a target user in the future is an important function for recommendation systems. Matrix factorization techniques have been shown to achieve good performance on temporal rating-type data, but little is known about temporal item selection data. In this paper, we developed a unified model that combines Multi-task Non-negative Matrix Factorization and Linear Dynamical Systems to capture the evolution of user preferences. Specifically, user and item features are projected into latent factor space by factoring co-occurrence matrices into a common basis item-factor matrix and multiple factor-user matrices. Moreover, we represented both within and between relationships of multiple factor-user matrices using a state transition matrix to capture the changes in user preferences over time. The experiments show that our proposed algorithm outperforms the other algorithms on two real datasets, which were extracted from Netflix movies and Last.fm music. Furthermore, our model provides a novel dynamic topic model for tracking the evolution of the behavior of a user over time.

  5. Continuous separation of breast cancer cells from blood samples using multi-orifice flow fractionation (MOFF) and dielectrophoresis (DEP).

    PubMed

    Moon, Hui-Sung; Kwon, Kiho; Kim, Seung-Il; Han, Hyunju; Sohn, Joohyuk; Lee, Soohyeon; Jung, Hyo-Il

    2011-03-21

    Circulating tumor cells (CTCs) are highly correlated with the invasive behavior of cancer, so their isolation and quantification are important for biomedical applications such as cancer prognosis and measuring responses to drug treatments. In this paper, we present the development of a microfluidic device for the separation of CTCs from blood cells based on the physical properties of cells. As a CTC model, we successfully separated human breast cancer cells (MCF-7) from a spiked blood cell sample by combining the multi-orifice flow fractionation (MOFF) and dielectrophoretic (DEP) cell separation techniques. Hydrodynamic separation takes advantage of massive, high-throughput filtration of blood cells, as it can accommodate a very high flow rate. DEP separation acts as a precise post-processing step to enhance the efficiency of the separation. The serial combination of these two different sorting techniques enabled high-speed continuous flow-through separation without labeling. We observed up to a 162-fold increase in MCF-7 cells at a 126 µL min(-1) flow rate. Red and white blood cells were efficiently removed, with separation efficiencies of 99.24% and 94.23%, respectively. We therefore suggest that our system could be used for the separation and detection of CTCs from blood cells for biomedical applications. This journal is © The Royal Society of Chemistry 2011

  6. Use of Multi-class Empirical Orthogonal Function for Identification of Hydrogeological Parameters and Spatiotemporal Pattern of Multiple Recharges in Groundwater Modeling

    NASA Astrophysics Data System (ADS)

    Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.

    2017-12-01

    This study develops an innovative calibration method for regional groundwater modeling using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial patterns of the multiple recharges. The initial guesses of the hydrogeological parameters are also assigned according to in-situ pumping experiments. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e., MODFLOW) with the initial-guess or adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for deriving the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, the values of the recharges and parameters are adjusted and the procedure is repeated until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan, for a study period from January 1st to December 2nd, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. This indicates that the iterative approach using EOF techniques can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
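
    The EOF decomposition that drives the method can be sketched with an SVD: a synthetic time-by-wells matrix of storage hydrographs is split into spatial EOF patterns and temporal expansion coefficients. Data and dimensions are illustrative only.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_time, n_wells = 365, 12
    t = np.arange(n_time)
    # Toy storage hydrographs: a shared seasonal signal with per-well amplitude.
    seasonal = np.sin(2 * np.pi * t / 365)[:, None] * rng.uniform(0.5, 2, n_wells)
    S = seasonal + 0.1 * rng.normal(size=(n_time, n_wells))

    A = S - S.mean(axis=0)                      # remove temporal mean per well
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    eofs = Vt                                   # rows: spatial EOF patterns
    pcs = U * s                                 # columns: expansion coefficients
    var_frac = s ** 2 / np.sum(s ** 2)
    print("variance explained by EOF1:", var_frac[0].round(3))
    ```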

  7. Secure multi-party communication with quantum key distribution managed by trusted authority

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Richard John; Nordholt, Jane Elizabeth; Peterson, Charles Glen

    Techniques and tools for implementing protocols for secure multi-party communication after quantum key distribution ("QKD") are described herein. In example implementations, a trusted authority facilitates secure communication between multiple user devices. The trusted authority distributes different quantum keys by QKD under trust relationships with different users. The trusted authority determines combination keys using the quantum keys and makes the combination keys available for distribution (e.g., for non-secret distribution over a public channel). The combination keys facilitate secure communication between two user devices even in the absence of QKD between the two user devices. With the protocols, benefits of QKD are extended to multi-party communication scenarios. In addition, the protocols can retain the benefit of QKD even when a trusted authority is offline or a large group seeks to establish secure communication within the group.
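
    A minimal sketch of the combination-key idea, assuming an XOR combination of symmetric keys (the record does not specify the exact construction): the trusted authority publishes the XOR of two users' QKD keys, and either user can then recover a shared pairwise secret locally.

    ```python
    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    k_alice = secrets.token_bytes(32)      # QKD-established key: TA <-> Alice
    k_bob = secrets.token_bytes(32)        # QKD-established key: TA <-> Bob

    # The trusted authority publishes the combination key; it reveals
    # neither individual key, only their XOR relation.
    combo = xor(k_alice, k_bob)

    shared_by_alice = xor(combo, k_alice)  # Alice locally recovers k_bob
    shared_by_bob = k_bob                  # Bob already holds k_bob
    assert shared_by_alice == shared_by_bob
    print("pairwise secret established without direct QKD between the users")
    ```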

  8. Multi-technique comparison of troposphere zenith delays and gradients during CONT08

    NASA Astrophysics Data System (ADS)

    Teke, Kamil; Böhm, Johannes; Nilsson, Tobias; Schuh, Harald; Steigenberger, Peter; Dach, Rolf; Heinkelmann, Robert; Willis, Pascal; Haas, Rüdiger; García-Espada, Susana; Hobiger, Thomas; Ichikawa, Ryuichi; Shimizu, Shingo

    2011-07-01

    CONT08 was a 15-day campaign of continuous Very Long Baseline Interferometry (VLBI) sessions during the second half of August 2008 carried out by the International VLBI Service for Geodesy and Astrometry (IVS). In this study, VLBI estimates of troposphere zenith total delays (ZTD) and gradients during CONT08 were compared with those derived from observations with the Global Positioning System (GPS), Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), and water vapor radiometers (WVR) co-located with the VLBI radio telescopes. Similar geophysical models were used for the analysis of the space geodetic data, whereas the parameterization for the least-squares adjustment of the space geodetic techniques was optimized for each technique. In addition to space geodetic techniques and WVR, ZTD and gradients from numerical weather models (NWM) were used from the European Centre for Medium-Range Weather Forecasts (ECMWF) (all sites), the Japan Meteorological Agency (JMA) and Cloud Resolving Storm Simulator (CReSS) (Tsukuba), and the High Resolution Limited Area Model (HIRLAM) (European sites). Biases, standard deviations, and correlation coefficients were computed between the troposphere estimates of the various techniques for all eleven CONT08 co-located sites. ZTD estimates from space geodetic techniques generally agree at the sub-centimetre level during CONT08, and, as expected, the best agreement is found for intra-technique comparisons: between the Vienna VLBI Software and the combined IVS solutions, as well as between the Center for Orbit Determination (CODE) solution and an IGS PPP time series; both intra-technique comparisons show standard deviations of about 3-6 mm. The best inter-technique agreement of ZTD among the space geodetic solutions during CONT08 is found between the combined IVS and the IGS solutions, with a mean standard deviation of about 6 mm over all sites, whereas the agreement with numerical weather models is between 6 and 20 mm. The standard deviations are generally larger at low latitude sites because of higher humidity, and the latter is also the reason why the standard deviations are larger at northern hemisphere stations during CONT08 in comparison to CONT02, which was observed in October 2002. The assessment of the troposphere gradients from the different techniques is not as clear because of different time intervals, different estimation properties, or different observables. However, the best inter-technique agreement is found between the IVS combined gradients and the GPS solutions, with standard deviations between 0.2 and 0.7 mm.

  9. Enabling Analytics on Sensitive Medical Data with Secure Multi-Party Computation.

    PubMed

    Veeningen, Meilof; Chatterjea, Supriyo; Horváth, Anna Zsófia; Spindler, Gerald; Boersma, Eric; van der Spek, Peter; van der Galiën, Onno; Gutteling, Job; Kraaij, Wessel; Veugen, Thijs

    2018-01-01

    While there is a clear need to apply data analytics in the healthcare sector, this is often difficult because it requires combining sensitive data from multiple data sources. In this paper, we show how the cryptographic technique of secure multi-party computation can enable such data analytics by performing analytics without the need to share the underlying data. We discuss the issue of compliance to European privacy legislation; report on three pilots bringing these techniques closer to practice; and discuss the main challenges ahead to make fully privacy-preserving data analytics in the medical sector commonplace.
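
    As a toy sketch of one building block of secure multi-party computation (additive secret sharing; real deployments use full MPC protocols), three parties below jointly compute a sum without any party revealing its input.

    ```python
    import secrets

    P = 2 ** 61 - 1                            # public prime modulus

    def share(value, n_parties=3):
        """Split a value into n additive shares that sum to it mod P."""
        shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % P)
        return shares

    inputs = [120, 340, 215]                   # each party's private count
    all_shares = [share(v) for v in inputs]

    # Party j locally sums the j-th share of every input; the partial sums
    # are then combined. No party ever sees another party's raw input.
    partials = [sum(s[j] for s in all_shares) % P for j in range(3)]
    total = sum(partials) % P
    print("joint sum computed obliviously:", total)   # 675
    ```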

  10. A-Train Aerosol Observations Preliminary Comparisons with AeroCom Models and Pathways to Observationally Based All-Sky Estimates

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Livingston, J.; Shinozuka, Y.; Kacenelenbogen, M.; Russell, P.; LeBlanc, S.; Vaughan, M.; Ferrare, R.; Hostetler, C.; Rogers, R.

    2014-01-01

    We have developed a technique for combining CALIOP aerosol backscatter, MODIS spectral AOD (aerosol optical depth), and OMI AAOD (absorption aerosol optical depth) retrievals for the purpose of estimating full spectral sets of aerosol radiative properties, and ultimately for calculating the 3-D distribution of direct aerosol radiative forcing. We present results using one year of data collected in 2007 and show comparisons of the aerosol radiative property estimates to collocated AERONET retrievals. Use of the recently released MODIS Collection 6 data for aerosol optical depths derived with the dark target and deep blue algorithms has extended the coverage of the multi-sensor estimates towards higher latitudes. We compare the spatio-temporal distribution of our multi-sensor aerosol retrievals and calculations of seasonal clear-sky aerosol radiative forcing based on the aerosol retrievals to values derived from four models that participated in the latest AeroCom model intercomparison initiative. We find significant inter-model differences, in particular for the aerosol single scattering albedo, which can be evaluated using the multi-sensor A-Train retrievals. We discuss the major challenges that exist in extending our clear-sky results to all-sky conditions. On the basis of comparisons to suborbital measurements, we present some of the limitations of the MODIS and CALIOP retrievals in the presence of adjacent or underlying clouds. Strategies for meeting these challenges are discussed.

  11. Multi-Scale Computational Models for Electrical Brain Stimulation

    PubMed Central

    Seo, Hyeon; Jun, Sung C.

    2017-01-01

    Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have proposed computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we overview here recent multi-scale modeling studies, focusing on approaches that couple a simplified or high-resolution volume conductor head model with multi-compartmental models of cortical neurons and construct realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476

  12. A variable-gain output feedback control design approach

    NASA Technical Reports Server (NTRS)

    Haylo, Nesim

    1989-01-01

    A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighting the performance at many operating points. The solution is obtained by embedding the problem into the Multi-Configuration Control (MCC) framework, a multi-model robust control design technique. In contrast to conventional gain scheduling, which uses a curve fit of single-model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.

  13. MULTI: a shared memory approach to cooperative molecular modeling.

    PubMed

    Darden, T; Johnson, P; Smith, H

    1991-03-01

    A general purpose molecular modeling system, MULTI, based on the UNIX shared memory and semaphore facilities for interprocess communication, is described. In addition to the normal querying or monitoring of geometric data, MULTI also provides processes for manipulating conformations, and for displaying peptide or nucleic acid ribbons, Connolly surfaces, close nonbonded contacts, crystal-symmetry-related images, least-squares superpositions, and so forth. This paper outlines the basic techniques used in MULTI to ensure cooperation among these specialized processes, and then describes how they can work together to provide a flexible modeling environment.

  14. Lagrangian Modeling of Evaporating Sprays at Diesel Engine Conditions: Effects of Multi-Hole Injector Nozzles With JP-8 Surrogates

    DTIC Science & Technology

    2014-05-01

    …solver to treat the spray process. An Adaptive Mesh Refinement (AMR) and fixed embedding technique is employed to capture the gas-liquid interface with high fidelity while keeping the cell … in single and multi-hole nozzle configurations. The models were added to the present CONVERGE liquid fuel database and validated extensively.

  15. Terrain Categorization using LIDAR and Multi-Spectral Data

    DTIC Science & Technology

    2007-01-01

    …the same spatial resolution cell will be distinguished. The LIDAR data set used in this study was from a discrete-return … smoothing in the spatial dimension. While it was possible to distinguish different classes of materials using this technique, the spatial resolution was … alone and a combination of the two data types. Results are compared to significant ground truth information. Keywords: LIDAR, multi-spectral

  16. Techniques for High-contrast Imaging in Multi-star Systems. II. Multi-star Wavefront Control

    NASA Astrophysics Data System (ADS)

    Sirbu, D.; Thomas, S.; Belikov, R.; Bendek, E.

    2017-11-01

    Direct imaging of exoplanets represents a challenge for astronomical instrumentation due to the high-contrast ratio and small angular separation between the host star and the faint planet. Multi-star systems pose additional challenges for coronagraphic instruments due to the diffraction and aberration leakage caused by companion stars. Consequently, many scientifically valuable multi-star systems are excluded from direct imaging target lists for exoplanet surveys and characterization missions. Multi-star Wavefront Control (MSWC) is a technique that uses a coronagraphic instrument's deformable mirror (DM) to create high-contrast regions in the focal plane in the presence of multiple stars. MSWC uses "non-redundant" modes on the DM to independently control speckles from each star in the dark zone. Our previous paper also introduced the Super-Nyquist wavefront control technique, which uses a diffraction grating to generate high-contrast regions beyond the Nyquist limit (the nominal region correctable by the DM). These two techniques can be combined as MSWC-s to generate high-contrast regions for multi-star systems at wide (Super-Nyquist) angular separations, while MSWC-0 refers to close (Sub-Nyquist) angular separations. As a case study, a high-contrast wavefront control simulation that applies these techniques shows that the habitable region of the Alpha Centauri system can be imaged with a small aperture at 8×10^-9 mean raw contrast in 10% broadband light in one-sided dark holes from 1.6-5.5 λ/D. Another case study using a larger 2.4 m aperture telescope such as the Wide-Field Infrared Survey Telescope uses these techniques to image the habitable zone of Alpha Centauri at 3.2×10^-9 mean raw contrast in monochromatic light.

  17. Fusion-based multi-target tracking and localization for intelligent surveillance systems

    NASA Astrophysics Data System (ADS)

    Rababaah, Haroun; Shirkhodaie, Amir

    2008-04-01

    In this paper, we present two approaches addressing visual target tracking and localization in complex urban environments: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on a nearest four-neighbor method; in fine estimation, Euclidean interpolation is used to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for the tracking and localization of targets of interest in complex urban environments.

  18. Big genomics and clinical data analytics strategies for precision cancer prognosis.

    PubMed

    Ow, Ghim Siong; Kuznetsov, Vladimir A

    2016-11-07

    The field of personalized and precise medicine in the era of big data analytics is growing rapidly. Previously, we proposed our model of patient classification, termed Prognostic Signature Vector Matching (PSVM), and identified a 37-variable signature comprising 36 let-7b-associated prognostically significant mRNAs and the age risk factor, which stratified large high-grade serous ovarian cancer patient cohorts into three survival-significant risk groups. Here, we investigated the predictive performance of PSVM via optimization of the prognostic variable weights, which represent the relative importance of one prognostic variable over the others. In addition, we compared several multivariate prognostic models based on PSVM with classical machine learning techniques such as K-nearest-neighbor, support vector machine, random forest, neural networks and logistic regression. Our results revealed that negative log-rank p-values provide more robust weight values than other quantities such as hazard ratios, fold changes, or combinations of those factors. PSVM and the classical machine learning classifiers were then combined in an ensemble (multi-test) voting system, which collectively provides a more precise and reproducible patient stratification. The use of the multi-test system approach, rather than the search for the ideal classification/prediction method, might help to address the limitations of individual classification algorithms in specific situations.

  19. Enhancing membrane protein subcellular localization prediction by parallel fusion of multi-view features.

    PubMed

    Yu, Dongjun; Wu, Xiaowei; Shen, Hongbin; Yang, Jian; Tang, Zhenmin; Qi, Yong; Yang, Jingyu

    2012-12-01

    Membrane proteins are encoded by ~30% of the genome and play important roles in living organisms. Previous studies have revealed that membrane proteins' structures and functions show obvious cell-organelle-specific properties. Hence, it is highly desirable to predict a membrane protein's subcellular location from its primary sequence, considering the extreme difficulty of membrane-protein wet-lab studies. Although many models have been developed for predicting protein subcellular locations, only a few are specific to membrane proteins. Existing prediction approaches were constructed based on statistical machine learning algorithms with serial combination of multi-view features, i.e., different feature vectors are simply concatenated to form a super feature vector. However, such a simple combination of features simultaneously increases information redundancy, which can in turn deteriorate the final prediction accuracy. This is why prediction success rates in the serial super space were often found to be even lower than those in a single-view space. The purpose of this paper is to investigate a proper method for fusing multiple multi-view protein sequential features for subcellular location prediction. Instead of the serial strategy, we propose a novel parallel framework for fusing multiple membrane-protein multi-view attributes that represents protein samples in complex spaces. We also propose generalized principal component analysis (GPCA) for feature reduction in the complex geometry. All the experimental results, obtained with different machine learning algorithms on benchmark membrane protein subcellular localization datasets, demonstrate that the newly proposed parallel strategy outperforms the traditional serial approach. We also demonstrate the efficacy of the parallel strategy on a soluble-protein subcellular localization dataset, indicating that the parallel technique is flexible enough to suit other computational biology problems. The software and datasets are available at: http://www.csbio.sjtu.edu.cn/bioinf/mpsp.
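
    The contrast between serial and parallel fusion is easy to sketch: serial fusion concatenates feature views, while parallel fusion pairs them as the real and imaginary parts of one complex vector (padding the shorter view), after which a complex-space analogue of PCA such as the paper's GPCA can operate. The feature values below are random placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    view_a = rng.random(20)                  # e.g., amino-acid composition view
    view_b = rng.random(14)                  # e.g., a second sequence-derived view

    n = max(view_a.size, view_b.size)
    a = np.pad(view_a, (0, n - view_a.size))
    b = np.pad(view_b, (0, n - view_b.size))

    z = a + 1j * b                           # parallel fusion: one complex vector
    serial = np.concatenate([view_a, view_b])  # serial fusion, for contrast

    print("parallel dim:", z.size, "serial dim:", serial.size)
    ```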

  20. Modeling and simulation of multi-physics multi-scale transport phenomenain bio-medical applications

    NASA Astrophysics Data System (ADS)

    Kenjereš, Saša

    2014-08-01

    We present a short overview of some of our most recent work that combines mathematical modeling, advanced computer simulations and state-of-the-art experimental techniques for physical transport phenomena in various bio-medical applications. In the first example, we tackle predictions of complex blood flow patterns in a patient-specific vascular system (carotid artery bifurcation) and the transfer of so-called "bad" cholesterol (low-density lipoprotein, LDL) within the multi-layered artery wall. This two-way coupling between the blood flow and the corresponding mass transfer of LDL within the artery wall is essential for predicting regions where atherosclerosis can develop. It is demonstrated that a recently developed mathematical model, which takes into account the complex multi-layer arterial-wall structure, produced LDL profiles within the artery wall in good agreement with in-vivo experiments in rabbits, and it can be used to predict locations where the initial stage of atherosclerosis may take place. The second example includes a combination of pulsating blood flow and medical drug delivery and deposition controlled by external magnetic field gradients in a patient-specific carotid artery bifurcation. The results of numerical simulations are compared with our own PIV (Particle Image Velocimetry) and MRI (Magnetic Resonance Imaging) measurements in a PDMS (silicon-based organic polymer) phantom. A very good agreement between simulations and experiments is obtained for different stages of the pulsating cycle. Application of magnetic drug targeting resulted in an up to tenfold increase in the efficiency of local deposition of the medical drug at desired locations. Finally, an LES (Large Eddy Simulation) of the aerosol distribution within the human respiratory system that includes up to eight bronchial generations is performed. A very good agreement between simulations and MRV (Magnetic Resonance Velocimetry) measurements is obtained. Magnetic steering of aerosols towards the left or right part of the lungs proved to be possible, which can open new strategies for the medical treatment of respiratory diseases.

  1. Multi-Modal Nano-Probes for Radionuclide and 5-color Near Infrared Optical Lymphatic Imaging

    PubMed Central

    Kobayashi, Hisataka; Koyama, Yoshinori; Barrett, Tristan; Hama, Yukihiro; Regino, Celeste A. S.; Shin, In Soo; Jang, Beom-Su; Le, Nhat; Paik, Chang H.; Choyke, Peter L.; Urano, Yasuteru

    2008-01-01

    Current contrast agents generally have one function and can only be imaged in monochrome; therefore, most imaging methods can impart only uniparametric information. A single nano-particle, however, has the potential to be loaded with multiple payloads. Such multi-modality probes can be imaged by more than one imaging technique, which could compensate for the weaknesses, or even combine the advantages, of each individual modality. Furthermore, optical imaging using different optical probes enables multi-color in vivo imaging, wherein multiple parameters can be read from a single image. To allow differentiation of multiple optical signals in vivo, each probe should have a closely spaced but distinct near-infrared emission. To this end, we synthesized nano-probes with multi-modal and multi-color potential, which employ a polyamidoamine dendrimer platform linked to both radionuclides and optical probes, permitting dual-modality scintigraphic and 5-color near-infrared optical lymphatic imaging using a multiple-excitation spectrally resolved fluorescence imaging technique. PMID:19079788

  2. 'Enzyme Test Bench': A biochemical application of multi-rate modeling

    NASA Astrophysics Data System (ADS)

    Rachinskiy, K.; Schultze, H.; Boy, M.; Büchs, J.

    2008-11-01

    In the expanding field of 'white biotechnology', enzymes are frequently applied to catalyze the biochemical reaction from a raw material to a valuable product. Evolutionarily designed to catalyze metabolism in all life forms, they selectively accelerate complex reactions under physiological conditions. Modern techniques, such as directed evolution, have been developed to satisfy the increasing demand for enzymes. Applied together with rational protein design, these techniques aim at improving enzyme activity, selectivity, and stability. To tap their full potential, it is essential to combine them with adequate screening methods. Nowadays, a great number of colorimetric and fluorescent enzyme assays are used to measure initial enzyme activity with high throughput. However, predicting enzyme long-term stability from short experiments is still a challenge. A new high-throughput technique for enzyme characterization with specific attention to long-term stability, called the 'Enzyme Test Bench', is presented. The concept of the Enzyme Test Bench consists of short-term enzyme tests conducted under partly extreme conditions to predict enzyme long-term stability under moderate conditions. The technique is based on mathematical modeling of temperature-dependent enzyme activation and deactivation. By adapting the temperature profiles in sequential experiments through optimal non-linear experimental design, long-term deactivation effects can be purposefully accelerated and detected within hours. During the experiment the enzyme activity is measured online to estimate the model parameters from the obtained data. Thus, enzyme activity and long-term stability can be calculated as functions of temperature. The results of this characterization, based on microliter-scale experiments lasting hours, are in good agreement with the results of long-term experiments in 1 L format. The new technique therefore allows both enzyme screening with regard to long-term stability and selection of the optimal process temperature. The presented article gives a successful example of the application of multi-rate modeling, experimental design, and parameter estimation in biochemical engineering. At the same time, it shows the limitations of the state-of-the-art methods and poses the current problems to the applied mathematics community.
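
    The core of the approach, temperature-dependent deactivation kinetics fitted from short, hot experiments and extrapolated to moderate conditions, can be illustrated with a generic stand-in model. The sketch below assumes simple first-order deactivation with Arrhenius temperature dependence and synthetic data; it is not the authors' model or experimental design, and the reference temperature and rate constants are arbitrary.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314  # gas constant, J/(mol*K)

def residual_activity(X, k_ref, Ea):
    """First-order deactivation with Arrhenius temperature dependence.
    X = (t, T): time in hours and absolute temperature; k_ref is the
    deactivation rate at the (arbitrary) reference temperature T_ref."""
    t, T = X
    T_ref = 323.15  # 50 C reference, an assumption of this sketch
    k = k_ref * np.exp(-Ea / R * (1.0 / T - 1.0 / T_ref))
    return np.exp(-k * t)

# synthetic 'short, hot' experiments: a few hours at 50-65 C
t = np.tile(np.linspace(0.5, 4.0, 8), 4)
T = np.repeat([323.15, 328.15, 333.15, 338.15], 8)
true = residual_activity((t, T), 0.15, 80e3)
obs = true + np.random.default_rng(1).normal(0, 0.01, true.size)

(k_ref, Ea), _ = curve_fit(residual_activity, (t, T), obs, p0=(0.1, 60e3))

# extrapolate: half-life at a moderate process temperature (37 C)
k37 = k_ref * np.exp(-Ea / R * (1 / 310.15 - 1 / 323.15))
print(f"estimated half-life at 37 C: {np.log(2) / k37:.1f} h")
```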

  3. Scalable multi-objective control for large scale water resources systems under uncertainty

    NASA Astrophysics Data System (ADS)

    Giuliani, Matteo; Quinn, Julianne; Herman, Jonathan; Castelletti, Andrea; Reed, Patrick

    2016-04-01

    The use of mathematical models to support the optimal management of environmental systems has been expanding rapidly in recent years due to advances in scientific knowledge of natural processes, in the efficiency of optimization techniques, and in the availability of computational resources. However, ongoing changes in climate and society introduce additional challenges for controlling these systems, ultimately motivating the emergence of complex models to explore key causal relationships and dependencies on uncontrolled sources of variability. In this work, we contribute a novel implementation of the evolutionary multi-objective direct policy search (EMODPS) method for controlling environmental systems under uncertainty. The proposed approach combines direct policy search (DPS) with hierarchical parallelization of multi-objective evolutionary algorithms (MOEAs) and offers a threefold advantage. First, DPS simulation-based optimization can be combined with any simulation model and does not constrain the modeled information, allowing the use of exogenous information in conditioning the decisions. Second, the combination of DPS and MOEAs enables the generation of a Pareto-approximate set of solutions for up to 10 objectives, thus overcoming the decision biases produced by cognitive myopia, where narrow or restrictive definitions of optimality strongly limit the discovery of decision-relevant alternatives. Finally, large-scale MOEA parallelization improves the ability of the designed solutions to handle the uncertainty due to severe natural variability. The proposed approach is demonstrated on a challenging water resources management problem: the optimal control of a network of four multipurpose water reservoirs in the Red River basin (Vietnam). As part of the medium- to long-term energy and food security national strategy, four large reservoirs have been constructed on the Red River tributaries, operated mainly for hydropower production, flood control, and water supply. Numerical results under historical as well as synthetically generated hydrologic conditions show that our approach is able to discover key tradeoffs in the operations of the system. The ability of the algorithm to find near-optimal solutions increases with the number of islands in the adopted hierarchical parallelization scheme. In addition, although significant performance degradation is observed when the solutions designed over the historical record are re-evaluated over synthetically generated inflows, we successfully reduced these vulnerabilities by identifying alternative solutions that are more robust to hydrologic uncertainties, while also addressing the tradeoffs across the Red River multi-sector services.
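
    A full EMODPS implementation couples a simulator with a parallel MOEA such as Borg; the skeleton below only illustrates the direct policy search idea under toy assumptions: a radial-basis release policy for a single reservoir, two objectives (supply deficit and flooding), and a crude scalarized random search standing in for the multi-objective evolutionary algorithm. All dynamics and parameter names are invented for illustration.

```python
import numpy as np

def policy_release(storage, theta):
    """Radial-basis policy: release as a function of current storage."""
    centers, scales, weights = theta[:3], np.abs(theta[3:6]) + 1e-6, theta[6:9]
    phi = np.exp(-((storage - centers) / scales) ** 2)
    return np.clip(weights @ phi, 0.0, storage)  # cannot release more than stored

def simulate(theta, inflows, demand=0.8, s_max=10.0):
    """Simulate the mass balance; return (deficit, flood) objectives to minimize."""
    s, deficit, flood = 5.0, 0.0, 0.0
    for q in inflows:
        r = policy_release(s, theta)
        s = min(s + q - r, s_max)
        deficit += max(demand - r, 0.0) ** 2      # squared supply deficit
        flood += max(s - 0.9 * s_max, 0.0)        # time spent near capacity
    return deficit, flood

rng = np.random.default_rng(2)
inflows = np.abs(rng.normal(0.8, 0.4, 365))

# crude scalarized random search stands in for a multi-objective EA
best_theta, best_f = None, np.inf
for _ in range(2000):
    theta = rng.uniform(-2, 10, 9)
    d, f = simulate(theta, inflows)
    if d + f < best_f:
        best_theta, best_f = theta, d + f
print("best scalarized objective:", round(best_f, 3))
```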

  4. Predicting the impact of combined therapies on myeloma cell growth using a hybrid multi-scale agent-based model.

    PubMed

    Ji, Zhiwei; Su, Jing; Wu, Dan; Peng, Huiming; Zhao, Weiling; Nlong Zhao, Brian; Zhou, Xiaobo

    2017-01-31

    Multiple myeloma is a malignant, still-incurable plasma cell disorder, owing to refractory disease relapse, immune impairment, and the development of multi-drug resistance. The growth of malignant plasma cells depends on the bone marrow (BM) microenvironment and on evasion of the host's anti-tumor immune response. We therefore hypothesized that targeting the tumor-stromal cell interaction and the endogenous immune system in the BM will potentially improve the treatment response of multiple myeloma (MM). Accordingly, we propose a computational simulation of myeloma development in its complicated microenvironment, including immune cell components and bone marrow stromal cells, and predict the effects of combined multi-drug treatment on myeloma cell growth. We constructed a hybrid multi-scale agent-based model (HABM) that combines an ODE system with an agent-based model (ABM). The ODE system models the dynamic changes of intracellular signal transduction, and the ABM models the cell-cell interactions between stromal cells, tumor cells, and immune components in the BM. The model simulates myeloma growth in the bone marrow microenvironment and reveals the important role of the immune system in this process. The predicted outcomes are consistent with experimental observations from previous studies. Moreover, we applied the model to predict the treatment effects of three key therapeutic drugs used for MM and found that their combination potentially suppresses the growth of myeloma cells and reactivates the immune response. In summary, the proposed model may serve as a novel computational platform for simulating the formation of MM and evaluating its treatment response to multiple drugs.
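
    The hybrid structure, an intracellular ODE embedded in every agent, with agent-level rules for division and death, can be sketched as follows. This is a toy skeleton, not the authors' HABM equations: the signaling ODE, rates, and drug schedule are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

class MyelomaCell:
    def __init__(self, signal=0.5):
        self.signal = signal  # intracellular pro-growth signal, in [0, 1]

    def step_ode(self, drug, dt=0.1):
        # toy intracellular ODE: activation saturates, drug suppresses it
        dsdt = 0.8 * self.signal * (1 - self.signal) - 1.5 * drug * self.signal
        self.signal = float(np.clip(self.signal + dsdt * dt, 0.0, 1.0))

def step_abm(cells, drug):
    """Agent rules: divide with probability tied to the ODE state; die under drug."""
    next_gen = []
    for c in cells:
        c.step_ode(drug)
        if rng.random() < 0.05 * c.signal:          # proliferation
            next_gen += [c, MyelomaCell(c.signal)]
        elif rng.random() < 0.03 * drug:            # drug-induced death
            continue
        else:
            next_gen.append(c)
    return next_gen

cells = [MyelomaCell() for _ in range(100)]
for day in range(60):
    drug = 1.0 if 20 <= day < 40 else 0.0          # one treatment window
    cells = step_abm(cells, drug)
print("final tumor burden:", len(cells))
```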

  5. Multi-fluid CFD analysis in Process Engineering

    NASA Astrophysics Data System (ADS)

    Hjertager, B. H.

    2017-12-01

    An overview of modelling and simulation of flow processes in gas/particle and gas/liquid systems is presented. Particular emphasis is given to computational fluid dynamics (CFD) models that use multi-dimensional multi-fluid techniques. Turbulence modelling strategies for gas/particle flows based on the kinetic theory of granular flow are given. Sub-models for the interfacial transfer processes and chemical kinetics modelling are presented. Examples are shown for gas/particle systems, including flow and chemical reaction in risers, as well as for gas/liquid systems, including bubble columns and stirred tanks.

  6. Predictability of extreme weather events for NE U.S.: improvement of the numerical prediction using a Bayesian regression approach

    NASA Astrophysics Data System (ADS)

    Yang, J.; Astitha, M.; Anagnostou, E. N.; Hartman, B.; Kallos, G. B.

    2015-12-01

    Weather prediction accuracy has become very important for the Northeast U.S. given the devastating effects of extreme weather events in recent years. Weather forecasting systems are used to build strategies that prevent catastrophic losses of human life and damage to the environment. Concurrently, weather forecast tools and techniques have evolved with improved forecast skill, as numerical prediction techniques are strengthened by increased super-computing resources. In this study, we examine the combination of two state-of-the-science atmospheric models (WRF and RAMS/ICLAMS) using a Bayesian regression approach to improve the prediction of extreme weather events for the NE U.S. The basic concept behind the Bayesian regression approach is to exploit the strengths of the two atmospheric modeling systems and, similar to the multi-model ensemble approach, to limit their weaknesses, which stem from systematic and random errors in the numerical prediction of physical processes. The first part of this study focuses on retrospective simulations of seventeen storms that affected the region in the period 2004-2013. Optimal variances are estimated by minimizing the root mean square error and are applied to out-of-sample weather events. The applicability and usefulness of this approach are demonstrated by an error analysis based on in-situ observations from National Weather Service (NWS) meteorological stations for wind speed and wind direction, and on NCEP Stage IV precipitation data, a mosaic of regional multi-sensor analyses. Preliminary results indicate a significant improvement in the statistical metrics of the modeled-observed pairs for meteorological variables when various combinations of sixteen of the events are used as predictors of the seventeenth. This presentation will illustrate the implemented methodology and the results obtained for wind speed, wind direction, and precipitation, and will set out the research steps to be followed in the future.
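
    The essential combination step can be illustrated with inverse-variance (inverse-MSE) weighting of the two model forecasts, with weights trained on all storms except the one being predicted, echoing the sixteen-predict-the-seventeenth design. This is a simplified stand-in for the paper's Bayesian regression, and the synthetic error structure is invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic wind speeds for 17 storms x 50 stations, and two model forecasts
truth = rng.uniform(5, 25, size=(17, 50))
wrf   = truth + rng.normal(1.0, 2.0, truth.shape)   # biased, less noisy "WRF"
rams  = truth + rng.normal(-0.5, 3.0, truth.shape)  # different error structure

def combine_loo(storm):
    """Train weights on the other 16 storms, apply to the held-out storm."""
    train = [i for i in range(17) if i != storm]
    mse1 = np.mean((wrf[train]  - truth[train]) ** 2)
    mse2 = np.mean((rams[train] - truth[train]) ** 2)
    w1 = (1 / mse1) / (1 / mse1 + 1 / mse2)          # inverse-MSE weight
    return w1 * wrf[storm] + (1 - w1) * rams[storm]

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
combined = np.array([combine_loo(s) for s in range(17)])
print("WRF RMSE:     ", rmse(wrf, truth))
print("RAMS RMSE:    ", rmse(rams, truth))
print("combined RMSE:", rmse(combined, truth))
```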

  7. A multi-model fusion strategy for multivariate calibration using near and mid-infrared spectra of samples from brewing industry

    NASA Astrophysics Data System (ADS)

    Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo

    2013-03-01

    Near- and mid-infrared (NIR/MIR) spectroscopy techniques have gained great acceptance in industry due to their many applications and versatility. However, successful application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It is a combination of the Kohonen self-organizing map (KSOM), mutual information (MI), and partial least squares (PLS), and is therefore named KMICPLS. It works as follows: first, the original training set is fed into a KSOM for unsupervised clustering of samples, from which a series of training subsets is constructed. Thereafter, on each training subset, an MI spectrum is calculated, and only the variables with MI values above the mean are retained, from which a candidate PLS model is constructed. Finally, a fixed number of PLS models is selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for the experiments. The results confirm its superior performance against two reference algorithms, i.e., conventional PLS and genetic algorithm-PLS (GAPLS). It can build more accurate and stable calibration models without increasing complexity, and can be generalized to other NIR/MIR applications.
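
    Because the abstract spells out the algorithm step by step, a compact sketch is possible with standard tools. In the version below, KMeans is a deliberate stand-in for the Kohonen map (scikit-learn ships no SOM), mutual_info_regression supplies the MI screening, and the consensus is a plain average of the member PLS models; the data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 200))                 # 120 spectra, 200 wavelengths
y = X[:, 10] - 0.5 * X[:, 50] + rng.normal(0, 0.1, 120)

# 1) unsupervised clustering of samples (KMeans standing in for a Kohonen map)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

models = []
for k in range(4):
    subset = labels == k
    if subset.sum() < 10:                       # skip clusters too small for PLS
        continue
    # 2) MI screening: keep variables with above-average mutual information
    mi = mutual_info_regression(X[subset], y[subset], random_state=0)
    keep = mi > mi.mean()
    # 3) one candidate PLS model per training subset
    pls = PLSRegression(n_components=3).fit(X[subset][:, keep], y[subset])
    models.append((keep, pls))

# 4) consensus prediction: average the member models
X_new = rng.normal(size=(5, 200))
pred = np.mean([m.predict(X_new[:, keep]).ravel() for keep, m in models], axis=0)
print(pred)
```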

  8. Interactive Visual Analysis within Dynamic Ocean Models

    NASA Astrophysics Data System (ADS)

    Butkiewicz, T.

    2012-12-01

    The many observation- and simulation-based ocean models available today can provide crucial insights for all fields of marine research and can serve as valuable references when planning data collection missions. However, the increasing size and complexity of these models make leveraging their contents difficult for end users. Through a combination of data visualization techniques, interactive analysis tools, and new hardware technologies, the data within these models can be made more accessible to domain scientists. We present an interactive system that supports exploratory visual analysis within large-scale ocean flow models. The currents and eddies within the models are illustrated using effective, particle-based flow visualization techniques. Stereoscopic displays and rendering methods are employed to ensure that the user correctly perceives the complex 3D structures of depth-dependent flow patterns. Interactive analysis tools allow the user to experiment by introducing customizable virtual dye particles into the models to explore regions of interest. A multi-touch interface provides natural, efficient interaction, with custom multi-touch gestures simplifying the otherwise challenging tasks of navigating and positioning tools within a 3D environment. We demonstrate the potential applications of our visual analysis environment with two examples of real-world significance: first, the use of customized particles with physics-based behaviors to simulate pollutant release scenarios, including predicting the oil plume path for the 2010 Deepwater Horizon oil spill disaster; and second, an interactive tool for plotting and revising proposed autonomous underwater vehicle mission pathlines with respect to the surrounding flow patterns predicted by the model; as these survey vessels have extremely limited energy budgets, designing more efficient paths allows for greater survey areas.

  9. Multi-atlas segmentation with joint label fusion and corrective learning—an open source implementation

    PubMed Central

    Wang, Hongzhi; Yushkevich, Paul A.

    2013-01-01

    Label fusion based multi-atlas segmentation has proven to be one of the most competitive techniques for medical image segmentation. This technique transfers segmentations from expert-labeled images, called atlases, to a novel image using deformable image registration. Errors produced by label transfer are further reduced by label fusion, which combines the results produced by all atlases into a consensus solution. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity is a simple and highly effective technique. However, one limitation of most weighted voting methods is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this problem, we recently developed the joint label fusion technique and the corrective learning technique, which won first place in the 2012 MICCAI Multi-Atlas Labeling Challenge and was among the top performers in the 2013 MICCAI Segmentation: Algorithms, Theory and Applications (SATA) challenge. To make our techniques more accessible to the scientific research community, we describe an Insight Toolkit based open source implementation of our label fusion methods. Our implementation extends our methods to work with multi-modality imaging data and is more suitable for segmentation problems with multiple labels. We demonstrate the usage of our tools by applying them to the 2012 MICCAI Multi-Atlas Labeling Challenge brain image dataset and the 2013 SATA challenge canine leg image dataset, reporting the best results on these two datasets so far. PMID:24319427
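
    The joint label fusion and corrective learning methods themselves live in the authors' ITK implementation; the baseline they improve on, intensity-weighted voting, is simple enough to sketch. The toy below uses 1D "images" and a Gaussian intensity-similarity kernel; the kernel width beta and all data are illustrative.

```python
import numpy as np

def weighted_voting(target, atlas_images, atlas_labels, beta=0.5):
    """Per-voxel weighted voting.

    Each registered atlas votes for its label with a weight that decays
    with the squared intensity difference to the target (Gaussian kernel).
    """
    atlas_images = np.asarray(atlas_images, dtype=float)
    atlas_labels = np.asarray(atlas_labels)
    # weight of atlas i at voxel v: exp(-beta * (I_i(v) - I_target(v))^2)
    w = np.exp(-beta * (atlas_images - target) ** 2)
    labels = np.unique(atlas_labels)
    votes = np.stack([(w * (atlas_labels == L)).sum(axis=0) for L in labels])
    return labels[np.argmax(votes, axis=0)]

# toy 1D "images": 4 registered atlases, 12 voxels, binary labels
rng = np.random.default_rng(6)
target = np.concatenate([np.zeros(6), np.ones(6)]) + rng.normal(0, 0.1, 12)
atlases = target + rng.normal(0, 0.2, (4, 12))
labels = (atlases > 0.5).astype(int)
print(weighted_voting(target, atlases, labels))
```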

  10. Modeling activity recognition of multi resident using label combination of multi label classification in smart home

    NASA Astrophysics Data System (ADS)

    Mohamed, Raihani; Perumal, Thinagaran; Sulaiman, Md Nasir; Mustapha, Norwati; Zainudin, M. N. Shah

    2017-10-01

    Owing to human-centric concerns and the need for non-obtrusiveness, ambient sensor technology has been selected, accepted, and embedded in smart environments in a resilient style. Everyday human activities are gradually becoming more complex, which complicates activity inference when multiple residents share the same smart environment. Current solutions focus on separate models for residents, activities, and their interactions. Some studies use data association and extra auxiliary graphical nodes to model human tracking information in an environment, and some produce a separate framework to incorporate auxiliary interaction features. Recognizing activities, and which resident performs each activity at a given time, is therefore vital for smart home development and future applications. This paper addresses the above issue with a simple and efficient method based on the multi-label classification framework. This approach eliminates time-consuming steps and simplifies many pre-processing tasks compared with previous approaches. Application to multi-resident multi-label learning problems in smart homes shows that Label Combination (LC) using a Decision Tree (DT) as the base classifier can tackle the above problems.
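
    The Label Combination (label powerset) transform reduces multi-label learning to a single multi-class problem: each distinct label vector becomes one class. A minimal sketch with a Decision Tree base classifier, on invented sensor data, might look like this.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)

# toy sensor snapshots: 300 samples x 12 binary ambient sensors
X = rng.integers(0, 2, size=(300, 12))
# two residents x two activities -> a 4-bit multi-label target (synthetic rules)
Y = np.column_stack([X[:, 0] & X[:, 1], X[:, 2] | X[:, 3],
                     X[:, 4], X[:, 5] & X[:, 6]])

# Label Combination (label powerset): each distinct label vector = one class
combos, y_lc = np.unique(Y, axis=0, return_inverse=True)

clf = DecisionTreeClassifier(random_state=0).fit(X, y_lc)

# decode: predicted class index -> original multi-label vector
X_new = rng.integers(0, 2, size=(3, 12))
print(combos[clf.predict(X_new)])
```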

  11. Full-color high-definition CGH reconstructing hybrid scenes of physical and virtual objects

    NASA Astrophysics Data System (ADS)

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji; Nakahara, Sumio; Yamaguchi, Masahiro; Sakamoto, Yuji

    2017-03-01

    High-definition CGHs can reconstruct high-quality 3D images comparable to those of conventional optical holography. However, it has been difficult to exhibit full-color images reconstructed by these high-definition CGHs, because three CGHs for the RGB colors and a bulky image combiner were needed to produce full-color images. Recently, we reported a novel technique for full-color reconstruction using RGB color filters similar to those used in liquid-crystal panels. This technique allows us to produce full-color high-definition CGHs composed of a single plate and to place them on exhibition. Using this technique, we demonstrate in this paper full-color CGHs that reconstruct hybrid scenes comprising real physical objects and CG-modeled virtual objects. Here, the wave field of the physical object is obtained from dense multi-viewpoint images by employing the ray-sampling (RS) plane technique. In addition to the technique for full-color capture and reconstruction of real object fields, the principle and simulation technique for full-color CGHs using RGB color filters are presented.

  12. A multi-resolution strategy for a multi-objective deformable image registration framework that accommodates large anatomical differences

    NASA Astrophysics Data System (ADS)

    Alderliesten, Tanja; Bosman, Peter A. N.; Sonke, Jan-Jakob; Bel, Arjan

    2014-03-01

    Currently, two major challenges dominate the field of deformable image registration. The first is the tuning of the developed methods to specific problems (i.e., how best to combine different objectives such as similarity measure and transformation effort). This is one of the reasons why, despite significant progress, clinical implementation of such techniques has proven to be difficult. The second challenge is to account for large anatomical differences (e.g., large deformations, (dis)appearing structures) that occurred between image acquisitions. In this paper, we study a framework based on multi-objective optimization to improve registration robustness and to simplify tuning for specific applications. Within this framework we specifically consider the use of an advanced model-based evolutionary algorithm for optimization and a dual-dynamic transformation model (i.e., two "non-fixed" grids: one for the source image and one for the target image) to accommodate large anatomical differences. The framework computes and presents multiple outcomes that represent efficient trade-offs between the different objectives (a so-called Pareto front). In image processing it is common practice, for reasons of robustness and accuracy, to use a multi-resolution strategy; this is, however, only well established for single-objective registration methods. Here we describe how such a strategy can be realized for our multi-objective approach and compare its results with those of a single-resolution strategy. For this study we selected the case of prone-supine breast MRI registration. Results show that the well-known advantages of a multi-resolution strategy are successfully transferred to our multi-objective approach, resulting in superior (i.e., Pareto-dominating) outcomes.

  13. Determination of adhesion between thermoplastic and liquid silicone rubbers in hard-soft-combinations via mechanical peeling test

    NASA Astrophysics Data System (ADS)

    Kühr, C.; Spörrer, A.; Altstädt, V.

    2014-05-01

    The production of hard-soft combinations via multi-injection molding has gained more and more importance in recent years. This is attributable to several factors. One principal reason is that the two-component injection molding technique has many advantages, such as eliminating complex subsequent steps and shortening the process chain. Furthermore, this technique allows combining the properties of the single components, such as the high stiffness of the hard component and the elasticity of the soft component. Because some polymers are incompatible, the adhesion at the interface has to be determined. Adhesion is influenced not only by the applied polymers, but also by the injection molding parameters and the characteristics of the mold. Besides the already known combinations of thermoplastics with thermoplastic elastomers (TPE), liquid silicone rubber (LSR) can be applied as the soft component. Thermoplastic/LSR combinations are gaining importance due to the specific advantages of LSR over TPE. The weak adhesion between LSR and thermoplastics is currently one of the key challenges when dealing with these combinations, so it is strictly necessary to improve adhesion between the two components by adding an adhesion promoter. To determine the promoter's influence, a suitable testing method must be developed to investigate, e.g., the peel resistance. The current German standard 'VDI Richtlinie 2019', which at present is only employed for thermoplastic/TPE combinations, can serve as a model for determining the adhesion of thermoplastic/LSR combinations.

  14. A dynamic multi-scale Markov model based methodology for remaining life prediction

    NASA Astrophysics Data System (ADS)

    Yan, Jihong; Guo, Chaozhong; Wang, Xing

    2011-05-01

    The ability to accurately predict the remaining life of partially degraded components is crucial in prognostics. In this paper, a performance degradation index is designed using multi-feature fusion techniques to represent the deterioration severity of a facility. Based on this indicator, an improved Markov model is proposed for remaining life prediction. The Fuzzy C-Means (FCM) algorithm is employed to perform state division for the Markov model, in order to avoid the uncertainty of state division caused by hard division approaches. Considering the influence of both historical and real-time data, a dynamic prediction method is introduced into the Markov model via a weighted coefficient. Multi-scale theory is employed to solve the state-division problem of multi-sample prediction. Consequently, a dynamic multi-scale Markov model is constructed. An experiment based on a Bently-RK4 rotor testbed was designed to validate the dynamic multi-scale Markov model; the experimental results illustrate the effectiveness of the methodology.
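
    Two of the building blocks, FCM-based state division and a Markov transition matrix estimated from the resulting state sequence, can be sketched on a synthetic degradation index. The compact fuzzy c-means below and the absorbing-state remaining-life estimate are generic stand-ins, not the paper's full dynamic multi-scale formulation.

```python
import numpy as np

def fuzzy_cmeans(x, c=4, m=2.0, iters=100):
    """Compact 1-D fuzzy c-means: returns cluster centers and memberships."""
    rng = np.random.default_rng(8)
    U = rng.dirichlet(np.ones(c), size=x.size)        # membership matrix (n x c)
    for _ in range(iters):
        centers = (U ** m).T @ x / (U ** m).sum(axis=0)
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))                  # standard FCM update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# synthetic degradation index rising over time
t = np.arange(300)
index = 0.003 * t + 0.05 * np.random.default_rng(9).normal(size=t.size)

centers, U = fuzzy_cmeans(index)
rank = np.argsort(np.argsort(centers))     # relabel: state 0 = healthy ... 3 = failed
states = rank[np.argmax(U, axis=1)]

# Markov transition matrix estimated from the state sequence
P = np.zeros((4, 4))
for a, b in zip(states[:-1], states[1:]):
    P[a, b] += 1
P /= P.sum(axis=1, keepdims=True)

# expected steps from each transient state until first reaching failure state 3
Q = P[:3, :3]
N = np.linalg.inv(np.eye(3) - Q)           # fundamental matrix
print("expected remaining steps from each state:", N.sum(axis=1))
```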

  15. Optical interferometry and Gaia parallaxes for a robust calibration of the Cepheid distance scale

    NASA Astrophysics Data System (ADS)

    Kervella, Pierre; Mérand, Antoine; Gallenne, Alexandre; Trahin, Boris; Borgniet, Simon; Pietrzynski, Grzegorz; Nardetto, Nicolas; Gieren, Wolfgang

    2018-04-01

    We present the modeling tool we developed to incorporate multi-technique observations of Cepheids in a single pulsation model: the Spectro-Photo-Interferometry of Pulsating Stars (SPIPS). The combination of angular diameters from optical interferometry, radial velocities, and photometry with the forthcoming Gaia DR2 parallaxes of nearby Galactic Cepheids will soon enable us to calibrate the projection factor of the classical Parallax-of-Pulsation method. This will extend its applicability to Cepheids too distant for accurate Gaia parallax measurements and allow us to precisely calibrate the zero point of the Leavitt law. As an example application, we present the SPIPS model of the long-period Cepheid RS Pup, which provides a measurement of its projection factor using the independent distance estimated from its light echoes.

  16. Suppression of nonlinear oscillations in combustors with partial length acoustic liners

    NASA Technical Reports Server (NTRS)

    Espander, W. R.; Mitchell, C. E.; Baer, M. R.

    1975-01-01

    An analytical model is formulated for a three-dimensional nonlinear stability problem in a rocket motor combustion chamber. The chamber is modeled as a right circular cylinder with a short (multi-orifice) nozzle and an acoustic liner covering an arbitrary portion of the cylindrical periphery. The combustion is concentrated at the injector, and the gas flow field is characterized by a mean Mach number. The unsteady combustion processes are formulated using the Crocco time-lag model. The resulting equations are solved using a Green's function method combined with numerical evaluation techniques. The influence of acoustic liners on the nonlinear waveforms is predicted. Nonlinear stability limits, and regions where triggering is possible, are also predicted for both lined and unlined combustors in terms of the combustion parameters.

  17. Recovering Long-wavelength Velocity Models using Spectrogram Inversion with Single- and Multi-frequency Components

    NASA Astrophysics Data System (ADS)

    Ha, J.; Chung, W.; Shin, S.

    2015-12-01

    Many waveform inversion algorithms have been proposed for constructing subsurface velocity structures from seismic data sets. These algorithms suffer from computational burden, local-minima problems, and the lack of low-frequency components. Computational efficiency can be improved by applying back-propagation techniques and by advances in computing hardware. In addition, waveform inversion algorithms aimed at obtaining long-wavelength velocity models can avoid both the local-minima problem and the effects of missing low-frequency components in seismic data. In this study, we propose spectrogram inversion as a technique for recovering long-wavelength velocity models. In spectrogram inversion, frequency components decomposed from the spectrograms of traces in the observed and calculated data are utilized to generate traces with reproduced low-frequency components. Moreover, since each decomposed component can reveal different characteristics of a subsurface structure, several frequency components were utilized to analyze the velocity features of the subsurface. We performed spectrogram inversion using a modified SEG/EAGE salt A-A' line. Numerical results demonstrate that spectrogram inversion can recover long-wavelength velocity features, although the inversion results vary according to the frequency components utilized. Based on the results of inversion using a decomposed single-frequency component, we observed that robust inversion results are obtained when a dominant frequency component of the spectrogram is utilized. In addition, detailed information on the recovered long-wavelength velocity models is obtained using a multi-frequency component combined with single-frequency components. Numerical examples indicate that various detailed analyses of long-wavelength velocity models can be carried out utilizing several frequency components.

  18. Experimental and Numerical Analysis of Hydroformed Tubular Materials for Superconducting Radio Frequency (SRF) Cavities

    NASA Astrophysics Data System (ADS)

    Kim, Hyun Sung

    Superconducting radio frequency (SRF) cavities represent a well-established technology benefiting from some 40 years of research and development. An increasing demand for electron and positron accelerators leads to continuing interest in improved cavity performance and fabrication techniques. Several seamless cavity fabrication techniques have therefore been proposed to eliminate the multitude of electron-beam welded seams that contribute to performance-reducing defects. Among them, hydroforming using hydraulic pressure is a promising fabrication technique for producing the desired seamless cavities while reducing manufacturing cost. This study focused on experimental and numerical analysis of hydroformed niobium (Nb) tubes for the successful application of the hydroforming technique to the seamless fabrication of multi-cell SRF cavities for particle acceleration. Heat treatment, tensile testing, and bulge testing of Cu and Nb tubes were carried out to provide starting data for models of hydroforming Nb tubes into seamless SRF cavities. Based on the results of these experiments, numerical analyses using finite element modeling were conducted for the bulge deformation of Cu and Nb. In the experimental part of the study, samples removed from representative tubes were prepared for heat treatment, tensile testing, residual resistance ratio (RRR) measurement, and orientation imaging electron microscopy (OIM). After being optimally heat treated, Cu and Nb tubes were subjected to hydraulic bulge testing and the results analyzed. For the numerical analysis of the hydroforming process, two different simulation approaches were used. The first was a macro-scale continuum model using constitutive equations (stress-strain relationships) as simulation input; the constitutive equations were obtained from tensile and tube bulge tests in order to investigate the influence of the loading condition on deformation behavior. The second was a multi-scale model using both a macroscopic continuum model and a microscopic crystal plasticity (CP) model: the constitutive equation was first obtained from a microscopic simulation (CP-FEM) using microstructural information (i.e., orientations) from OIM together with simple tensile test data, and continuum FE analysis based on this constitutive equation was then performed. Several conclusions can be drawn from the experimental and numerical analysis: 1) the stress-strain relationship from the bulge test represents a more accurate description of the deformation behavior under hydroforming than that from tensile tests made on segments cut from the tubular materials; 2) for anisotropic material, incorporating anisotropic effects using the anisotropy coefficient from the tensile test led to even more accurate results; and 3) a multi-scale simulation strategy combining continuum and CP models can give high-quality predictions of the deformation of Cu and Nb tubes under hydroforming.

  19. Integrated performance and reliability specification for digital avionics systems

    NASA Technical Reports Server (NTRS)

    Brehm, Eric W.; Goettge, Robert T.

    1995-01-01

    This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool-set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool-set structure. ADTS research and development to date has focused on a language for specifying system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.

  20. Direct G-code manipulation for 3D material weaving

    NASA Astrophysics Data System (ADS)

    Koda, S.; Tanaka, H.

    2017-04-01

    The process of conventional 3D printing begins by first building a 3D model, then converting the model to G-code via slicer software, feeding the G-code to the printer, and finally starting the print. The simplest and most popular 3D printing technique is Fused Deposition Modeling. In this method, however, the printing path that the printer head can take is restricted by the G-code, so printed 3D models with complex patterns have structural errors such as holes or gaps between the printed material lines. In addition, the structural density and the material's position within the printed model are difficult to control. We realized a G-code editing tool, Fabrix, for making more precise and functional printed models with both single and multiple materials. With our method, models with different stiffness are fabricated by controlling the printing density of the filament materials. In addition, multi-material 3D printing has the potential to expand the achievable physical properties through material combination and the corresponding G-code editing. These results demonstrate a new printing method that enables more creative and functional 3D printing techniques.
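
    As a flavor of direct G-code manipulation, the sketch below rescales the extrusion (E) values of G1 moves within a chosen height band, a crude way to change local material density. It assumes Marlin-style G-code and is not the Fabrix tool itself; all sample lines are invented.

```python
import re

def scale_extrusion(gcode_lines, factor, z_min=0.0, z_max=1e9):
    """Rescale the E (extrusion) value of G1 moves within a height band.

    Multiplying E changes how much filament is laid down along the same
    path, a simple density control (Marlin-style G-code assumed).
    """
    out, z = [], 0.0
    for line in gcode_lines:
        zm = re.search(r"\bZ([-\d.]+)", line)
        if zm:
            z = float(zm.group(1))            # track the current layer height
        if line.startswith("G1") and z_min <= z <= z_max:
            line = re.sub(r"\bE([-\d.]+)",
                          lambda m: f"E{float(m.group(1)) * factor:.5f}", line)
        out.append(line)
    return out

sample = ["G1 Z0.20 F3000", "G1 X10 Y0 E0.4215", "G1 X10 Y10 E0.8430"]
print("\n".join(scale_extrusion(sample, factor=0.6)))
```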

  1. Investigating lithium-ion battery materials during overcharge-induced thermal runaway: an operando and multi-scale X-ray CT study.

    PubMed

    Finegan, Donal P; Scheel, Mario; Robinson, James B; Tjaden, Bernhard; Di Michiel, Marco; Hinds, Gareth; Brett, Dan J L; Shearing, Paul R

    2016-11-16

    Catastrophic failure of lithium-ion batteries occurs across multiple length scales and over very short time periods. A combination of high-speed operando tomography, thermal imaging, and electrochemical measurements is used to probe the degradation mechanisms leading up to overcharge-induced thermal runaway of a LiCoO2 pouch cell, through its interrelated dynamic structural, thermal, and electrical responses. Failure mechanisms across multiple length scales are explored using a post-mortem multi-scale tomography approach, revealing significant morphological and phase changes in the LiCoO2 electrode microstructure and location-dependent degradation. This combined operando and multi-scale X-ray computed tomography (CT) technique is demonstrated as a comprehensive approach to understanding battery degradation and failure.

  2. Horizontal and vertical combination of multi-tenancy patterns in service-oriented applications

    NASA Astrophysics Data System (ADS)

    Mietzner, Ralph; Leymann, Frank; Unger, Tobias

    2011-02-01

    Software as a service (SaaS) providers exploit economies of scale by offering the same instance of an application to multiple customers, typically in a single-instance multi-tenant architecture model. Such applications must therefore be scalable, multi-tenant aware, and configurable. In this article, we show how the services in a service-oriented SaaS application can be deployed using different multi-tenancy patterns, and we describe how services in different multi-tenancy patterns can be composed at the application level. We also describe how these multi-tenancy patterns can be applied to middleware and hardware components. We then show, with real-world examples, how the different multi-tenancy patterns can be combined.

  3. Multi-echo acquisition

    PubMed Central

    Posse, Stefan

    2011-01-01

    The rapid development of fMRI was paralleled early on by the adaptation of MR spectroscopic imaging (MRSI) methods to quantify water relaxation changes during brain activation. This review describes the evolution of multi-echo acquisition from high-speed MRSI to multi-echo EPI and beyond. It highlights milestones in the development of multi-echo acquisition methods, such as the discovery of considerable gains in fMRI sensitivity when combining echo images, advances in quantification of the BOLD effect using analytical biophysical modeling and interleaved multi-region shimming. The review conveys the insight gained from combining fMRI and MRSI methods and concludes with recent trends in ultra-fast fMRI, which will significantly increase temporal resolution of multi-echo acquisition. PMID:22056458

  4. Estimation of Solar Radiation on Building Roofs in Mountainous Areas

    NASA Astrophysics Data System (ADS)

    Agugiaro, G.; Remondino, F.; Stevanato, G.; De Filippi, R.; Furlanello, C.

    2011-04-01

    The aim of this study is to estimate solar radiation on building roofs in complex mountain landscapes. A multi-scale solar radiation estimation methodology is proposed that combines 3D data ranging from the regional scale to the architectural one. Both terrain and nearby-building shadowing effects are considered. The approach is modular, and several alternative roof models, obtained by surveying and modelling techniques at varying levels of detail, can be embedded in a DTM, e.g., that of an Alpine valley surrounded by mountains. The solar radiation maps obtained from raster models at different resolutions are compared and evaluated in order to obtain information on the benefits and disadvantages of each roof modelling approach. The solar radiation estimation is performed within the open-source GRASS GIS environment using r.sun and its ancillary modules.

  5. Using the SWAT model to improve process descriptions and define hydrologic partitioning in South Korea

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Maharjan, G. R.; Tenhunen, J.; Seo, B.; Kim, K.; Riley, J.; Arnhold, S.; Koellner, T.; Ok, Y. S.; Peiffer, S.; Kim, B.; Park, J.-H.; Huwe, B.

    2014-02-01

    Watershed-scale modeling can be a valuable tool to aid in the quantification of water quality and yield; however, several challenges remain. In many watersheds it is difficult to adequately quantify hydrologic partitioning: data scarcity is prevalent, the accuracy of spatially distributed meteorology is difficult to quantify, forest encroachment and land use issues are common, and surface water and groundwater abstractions substantially modify watershed-based processes. Our objective is to assess the capability of the Soil and Water Assessment Tool (SWAT) model to capture event-based and long-term monsoonal rainfall-runoff processes in complex mountainous terrain. To accomplish this, we developed a unique quality-control, gap-filling algorithm for the interpolation of high-frequency meteorological data, and we used a novel multi-location, multi-optimization calibration technique to improve estimates of catchment-wide hydrologic partitioning. The interdisciplinary model was calibrated against a unique combination of statistical, hydrologic, and plant growth metrics. Our results indicate scale-dependent sensitivity of hydrologic partitioning and a substantial influence of engineered features. The addition of hydrologic and plant growth objective functions identified the importance of culverts in catchment-wide flow distribution. While this study shows the challenges of applying the SWAT model to complex terrain and extreme environments, it also shows that by incorporating anthropogenic features into modeling scenarios we can enhance our understanding of the hydroecological impact.

  6. Analytical transmissibility based transfer path analysis for multi-energy-domain systems using four-pole parameter theory

    NASA Astrophysics Data System (ADS)

    Mashayekhi, Mohammad Jalali; Behdinan, Kamran

    2017-10-01

    The increasing demand to minimize undesired vibration and noise levels in several high-tech industries has generated renewed interest in vibration transfer path analysis. Analyzing vibration transfer paths within a system is of crucial importance in designing an effective vibration isolation strategy. Most existing vibration transfer path analysis techniques are empirical and are suitable for diagnosis and troubleshooting purposes; the lack of an analytical transfer path analysis that can be used at the design stage is the main motivation behind this research. In this paper, an analytical transfer path analysis based on the four-pole theory is proposed for multi-energy-domain systems. The bond graph modeling technique, an effective approach to modeling multi-energy-domain systems, is used to develop the system model. An electro-mechanical system is used as a benchmark example to elucidate the effectiveness of the proposed technique. An algorithm to obtain the equivalent four-pole representation of a dynamical system from the corresponding bond graph model is also presented.

  7. A robust computational technique for model order reduction of two-time-scale discrete systems via genetic algorithms.

    PubMed

    Alsmadi, Othman M K; Abo-Hammour, Zaer S

    2015-01-01

    A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach uses genetic algorithms (GA) and has the advantages of obtaining a reduced-order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix, defined in state-space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function corresponding to the response deviation between the full- and reduced-order models. The proposed computational intelligence MOR method is compared with recently published work on MOR techniques; simulation results show the potential and advantages of the new approach.
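
    The fitness-driven reduction loop can be sketched with SciPy, using differential evolution as an evolutionary stand-in for the paper's GA: a second-order discrete model is fitted so its step response matches a stable fourth-order "full" system. The systems and bounds are invented for illustration.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.signal import dlti, dstep

# full-order discrete system (stable 4th order, SISO)
full = dlti([0.1, 0.05], [1, -1.4, 0.71, -0.154, 0.012], dt=1.0)
_, (y_full,) = dstep(full, n=50)

def response_error(p):
    """Deviation between the full and a candidate 2nd-order step response."""
    b0, b1, a1, a2 = p
    try:
        _, (y_red,) = dstep(dlti([b0, b1], [1, a1, a2], dt=1.0), n=50)
    except Exception:
        return 1e9
    if not np.all(np.isfinite(y_red)):   # penalize unstable candidates
        return 1e9
    return float(np.sum((y_full - y_red) ** 2))

# differential evolution stands in for the paper's genetic algorithm
res = differential_evolution(response_error,
                             [(-1, 1), (-1, 1), (-2, 2), (-1, 1)],
                             seed=0, maxiter=200, tol=1e-8)
print("reduced model params:", np.round(res.x, 4), "error:", round(res.fun, 6))
```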

  8. A technique for measuring the quality of an elliptically bent pentaerythritol [PET(002)] crystal

    DOE PAGES

    Haugh, M. J.; Jacoby, K. D.; Barrios, M. A.; ...

    2016-08-23

    Here, we present a technique for determining the X-ray spectral quality from each region of an elliptically curved PET(002) crystal. The investigative technique utilizes the shape of the crystal rocking curve, which changes significantly as the radius of curvature changes. This quality information enables the spectroscopist to verify where in the spectral range the spectrometer performance is satisfactory and where there are regions that would show spectral distortion. A collection of rocking curve measurements for elliptically curved PET(002) has been built up in our X-ray laboratory. The multi-lamellar model from the XOP software has been used as a guide, and corrections were applied to the model based upon measurements. However, the measured integrated reflectivity (RI) at small radii of curvature shows an anomalous behavior that the multi-lamellar model fails to reproduce. The effect of this anomalous RI behavior on an X-ray spectrometer calibration is calculated and compared to the multi-lamellar model calculation, which is completely inadequate for predicting RI for this range of curvatures and spectral energies.

  9. A technique for measuring the quality of an elliptically bent pentaerythritol [PET(002)] crystal

    NASA Astrophysics Data System (ADS)

    Haugh, M. J.; Jacoby, K. D.; Barrios, M. A.; Thorn, D.; Emig, J. A.; Schneider, M. B.

    2016-11-01

    We present a technique for determining the X-ray spectral quality from each region of an elliptically curved PET(002) crystal. The investigative technique utilizes the shape of the crystal rocking curve, which changes significantly as the radius of curvature changes. This quality information enables the spectroscopist to verify where in the spectral range the spectrometer performance is satisfactory and where there are regions that would show spectral distortion. A collection of rocking curve measurements for elliptically curved PET(002) has been built up in our X-ray laboratory. The multi-lamellar model from the XOP software has been used as a guide, and corrections were applied to the model based upon measurements. However, the measured integrated reflectivity (RI) at small radii of curvature shows an anomalous behavior that the multi-lamellar model fails to reproduce. The effect of this anomalous RI behavior on an X-ray spectrometer calibration is calculated and compared to the multi-lamellar model calculation, which is completely inadequate for predicting RI for this range of curvatures and spectral energies.

  10. The design of multi-core DSP parallel model based on message passing and multi-level pipeline

    NASA Astrophysics Data System (ADS)

    Niu, Jingyu; Hu, Jian; He, Wenjing; Meng, Fanrong; Li, Chuanrong

    2017-10-01

    Currently, the design of embedded signal processing systems is often based on a specific application, but this approach is not conducive to the rapid development of signal processing technology. In this paper, a parallel processing model architecture based on a multi-core DSP platform is designed, mainly suited to complex algorithms composed of different modules. This model combines the ideas of multi-level pipeline parallelism and message passing, and incorporates the advantages of the mainstream multi-core DSP models (the Master-Slave model and the Data Flow model), giving it better performance. A three-dimensional image generation algorithm is used to validate the efficiency of the proposed model by comparison with the Master-Slave and Data Flow models.
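
    A multi-level pipeline with message passing can be mocked up in a few lines, with threads and queues standing in for DSP cores and inter-core messages; the stage functions and frame data are placeholders.

```python
import threading
import queue

def stage(fn, q_in, q_out):
    """One pipeline stage: receive a message, process it, pass it on."""
    while True:
        item = q_in.get()
        if item is None:              # poison pill: forward it and exit
            if q_out is not None:
                q_out.put(None)
            break
        if q_out is not None:
            q_out.put(fn(item))

# a two-stage pipeline fed by a "master"; stage functions are placeholders
q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
stages = [
    threading.Thread(target=stage, args=(lambda x: x * 2, q1, q2)),
    threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)),
]
for t in stages:
    t.start()

for frame in range(5):                # the master feeds frames into the pipeline
    q1.put(frame)
q1.put(None)

while (out := q3.get()) is not None:  # collect results in FIFO order
    print("processed frame:", out)
for t in stages:
    t.join()
```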

  11. A Web-GIS Procedure Based on Satellite Multi-Spectral and Airborne LIDAR Data to Map the Road Blockage Due to Seismic Damages of Built-Up Urban Areas

    NASA Astrophysics Data System (ADS)

    Costanzo, Antonio; Montuori, Antonio; Silva, Juan Pablo; Silvestri, Malvina; Musacchio, Massimo; Buongiorno, Maria Fabrizia; Stramondo, Salvatore

    2016-08-01

    In this work, a web-GIS procedure for mapping the risk of road blockage in urban environments through the combined use of space-borne and airborne remote sensing sensors is presented. The methodology involves (1) the provision of a geo-database through the integration of space-borne multispectral images and airborne LiDAR data products; (2) the modeling of building vulnerability, based on the corresponding 3D geometry and construction-period information; and (3) the GIS-based mapping of road closures due to seismic building collapses, based on the characteristic building heights and road widths. Experimental results, gathered for the Cosenza urban area, demonstrate the benefits of both the proposed approach and the GIS-based integration of multi-platform remote sensing sensors and techniques for seismic road assessment purposes.

  12. Multi-Fluid Block-Adaptive-Tree Solar Wind Roe-Type Upwind Scheme: Magnetospheric Composition and Dynamics During Geomagnetic Storms, Initial Results

    NASA Technical Reports Server (NTRS)

    Glocer, A.; Toth, G.; Ma, Y.; Gombosi, T.; Zhang, J. C.; Kistler, L. M.

    2010-01-01

    The magnetosphere contains a significant amount of ionospheric O+, particularly during geomagnetically active times. The presence of ionospheric plasma in the magnetosphere has a notable impact on magnetospheric composition and processes. We present a new multi-fluid MHD version of the BATS-R-US model of the magnetosphere to track the fate and consequences of ionospheric outflow. The multi-fluid MHD equations are presented, as are the novel techniques for overcoming the formidable challenges associated with solving them. Our new model is then applied to the May 4, 1998 and March 31, 2001 geomagnetic storms. The results are juxtaposed with traditional single-fluid MHD and multi-species MHD simulations from a previous study, thereby allowing us to assess the benefits of using a more complex model with additional physics. We find that our multi-fluid MHD model (with outflow) gives results comparable to the multi-species MHD model (with outflow), including a more strongly negative Dst, reduced CPCP, and a drastically improved magnetic field at geosynchronous orbit, as compared to single-fluid MHD with no outflow. Significant differences in composition and magnetic field are found between the multi-species and multi-fluid approaches further away from the Earth. We further demonstrate the ability to explore pressure and bulk velocity differences between H+ and O+, which is not possible with the other techniques considered.

  13. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    NASA Astrophysics Data System (ADS)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  14. Improving Scene Classifications with Combined Active/Passive Measurements

    NASA Astrophysics Data System (ADS)

    Hu, Y.; Rodier, S.; Vaughan, M.; McGill, M.

    The uncertainties in cloud and aerosol physical properties derived from passive instruments such as MODIS are not insignificant, and the uncertainty increases as the optical depths decrease. Lidar observations do much better for thin clouds and aerosols. Unfortunately, space-based lidar measurements, such as the one onboard the CALIPSO satellite, are limited to the nadir view only and thus have limited spatial coverage. To produce climatologically meaningful thin cloud and aerosol data products, it is necessary to combine the spatial coverage of MODIS with the highly sensitive CALIPSO lidar measurements. Can we improve the quality of cloud and aerosol remote sensing data products by extending the knowledge about thin clouds and aerosols learned from CALIPSO-type lidar measurements to a larger portion of the off-nadir MODIS-like multi-spectral pixels? To answer this question, we studied collocated Cloud Physics Lidar (CPL) and MODIS Airborne Simulator (MAS) observations and established an effective data fusion technique that will be applied in the combined CALIPSO/MODIS cloud-aerosol product algorithms. This technique performs k-means and Kohonen self-organizing map cluster analyses on the entire swath of MAS data, as well as on the combined CPL/MAS data along the nadir track. Interestingly, the clusters generated by the two approaches are almost identical, indicating that the MAS multi-spectral data may already capture most of the cloud and aerosol scene types, such as cloud ice/water phase, multi-layer information, and aerosols.
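
    The reported comparison, clustering the multispectral swath alone versus the combined lidar-plus-multispectral nadir data, can be imitated on synthetic data. The sketch below uses scikit-learn's KMeans (the SOM variant is omitted) and an adjusted Rand index to quantify how similar the two clusterings are; all channels and scene types are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(10)

# synthetic nadir track: 500 pixels x 8 "MAS" spectral channels,
# plus 2 "CPL" lidar channels available only at nadir
n = 500
scene = rng.integers(0, 4, n)                       # 4 latent scene types
mas = rng.normal(scene[:, None], 0.6, (n, 8))       # multispectral signal
cpl = rng.normal(scene[:, None], 0.4, (n, 2))       # lidar adds sharper info

# cluster the multispectral-only data and the combined data separately
km_mas  = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(mas)
km_both = KMeans(n_clusters=4, n_init=10,
                 random_state=0).fit_predict(np.hstack([mas, cpl]))

# agreement between the two clusterings (the record reports near-identity)
print("adjusted Rand index:", round(adjusted_rand_score(km_mas, km_both), 3))
```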

  15. Development and Advanced Analysis of Dynamic and Static Casing Strain Monitoring to Characterize the Orientation and Dimensions of Hydraulic Fractures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruno, Michael; Ramos, Juan; Lao, Kang

    Horizontal wells combined with multi-stage hydraulic fracturing have been applied to significantly increase production from low permeability formations, contributing to expanded total US production of oil and gas. Not all applications are successful, however. Field observations indicate that poorly designed or placed fracture stages in horizontal wells can result in significant well casing deformation and damage. In some instances, early fracture stages have deformed the casing enough so that it is not possible to drill out plugs in order to complete subsequent fracture stages. Improved fracture characterization techniques are required to identify potential problems early in the development of themore » field. Over the past decade, several new technologies have been presented as alternatives to characterize the fracture geometry for unconventional reservoirs. Monitoring dynamic casing strain and deformation during hydraulic fracturing represents one of these new techniques. The objective of this research is to evaluate dynamic and static strains imposed on a well casing by single and multiple stage fractures, and to use that information in combination with numerical inversion techniques to estimate fracture characteristics such as length, orientation and post treatment opening. GeoMechanics Technologies, working in cooperation with the Department of Energy, Small Business Innovation Research through DOE SBIR Grant No: DE-SC-0017746, is conducting a research project to complete an advanced analysis of dynamic and static casing strain monitoring to characterize the orientation and dimensions of hydraulic fractures. This report describes our literature review and technical approach. The following conclusions summarize our review and simulation results to date: A literature review was performed related to the fundamental theoretical and analytical developments of stress and strain imposed by hydraulic fracturing along casing completions and deformation monitoring techniques. Analytical solutions have been developed to understand the mechanisms responsible for casing deformation induced by hydraulic fracturing operations. After reviewing a range of casing deformation techniques, including fiber optic sensors, borehole ultrasonic tools and electromagnetic tools, we can state that challenges in deployment, data acquisition and interpretation must still be overcome to ensure successful application of strain measurement and inversion techniques to characterize hydraulic fractures in the field. Numerical models were developed to analyze induced strain along casing, cement and formation interfaces. The location of the monitoring sensor around the completion, mechanical properties of the cement and its condition in the annular space can impact the strain measurement. Field data from fiber optic sensors were evaluated to compare against numerical models. A reasonable match for the fracture height characterization was obtained. Discrepancies in the strain magnitude between the field data and the numerical model was observed and can be caused by temperature effects, the cement condition in the well and the perturbation at the surface during injection. To avoid damage in the fiber optic cable during the perforation (e.g. when setting up multi stage HF scenarios), oriented perforation technologies are suggested. This issue was evidenced in the analyzed field data, where it was not possible to obtain strain measurement below the top of the perforation. 
This limited the ability to characterize the entire fracture geometry. The comparison of numerical modeling results against field data for fracture characterization shows that the proposed methodology should be validated with alternative field demonstration techniques, using measurements in an offset observation well to monitor and measure the induced strain. We propose to expand on this research in Phase II with a further study of multi-fracture characterization and field demonstration for horizontal wells.
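The inversion step described in this record can be illustrated with a minimal sketch: given a forward model mapping fracture parameters (half-length, orientation, opening) to strain along the casing, the parameters are recovered by nonlinear least squares. The forward kernel below is a hypothetical stand-in, not the project's geomechanical model.

```python
import numpy as np
from scipy.optimize import least_squares

def forward_strain(params, z):
    """Hypothetical forward model: strain along the casing at depths z for
    fracture half-length L, orientation theta, and opening w. A real study
    would call a geomechanical simulator here."""
    L, theta, w = params
    return w * np.cos(theta) * np.exp(-(z / L) ** 2)

rng = np.random.default_rng(0)
z = np.linspace(-50.0, 50.0, 101)              # sensor positions (m)
true_params = np.array([20.0, 0.3, 1e-3])      # "true" fracture parameters
measured = forward_strain(true_params, z) + rng.normal(0.0, 5e-6, z.size)

# Invert for the fracture parameters by nonlinear least squares.
fit = least_squares(lambda p: forward_strain(p, z) - measured,
                    x0=[10.0, 0.0, 5e-4],
                    bounds=([1.0, -np.pi / 2, 0.0], [100.0, np.pi / 2, 1e-2]))
print("estimated (L, theta, w):", fit.x)
```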

  16. Combining chemometric tools for assessing hazard sources and factors acting simultaneously in contaminated areas. Case study: "Mar Piccolo" Taranto (South Italy).

    PubMed

    Mali, Matilda; Dell'Anna, Maria Michela; Notarnicola, Michele; Damiani, Leonardo; Mastrorilli, Piero

    2017-10-01

    Almost all marine coastal ecosystems possess complex structural and dynamic characteristics, which are influenced by anthropogenic causes and natural processes as well. Revealing the impact of sources and factors controlling the spatial distributions of contaminants within highly polluted areas is a fundamental propaedeutic step of their quality evaluation. The combination of different pattern recognition techniques, applied to one of the most polluted Mediterranean coastal basins, resulted in a more reliable hazard assessment. PCA/CA and factorial ANOVA were exploited as complementary techniques for apprehending the impact of multiple sources and factors acting simultaneously and leading to similarities or differences in the spatial contamination pattern. The combination of PCA/CA and factorial ANOVA allowed, on the one hand, to determine the main processes and factors controlling the contamination trend within different layers and different basins, and, on the other hand, to ascertain possible synergistic effects. This approach showed the significance of a spatially representative overview given by the combination of PCA-CA/ANOVA in inferring the historical anthropogenic sources loading on the area. Copyright © 2017 Elsevier Ltd. All rights reserved.
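A schematic of the PCA-CA/ANOVA combination might look like the sketch below: condense a contaminant matrix with PCA, then run a factorial ANOVA on the leading score with layer and basin as factors, the interaction term standing in for possible synergistic effects. The data and factor names are synthetic placeholders, not the Mar Piccolo dataset.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "layer": rng.choice(["surface", "deep"], n),
    "basin": rng.choice(["first", "second"], n),
})
X = rng.lognormal(mean=0.0, sigma=1.0, size=(n, 6))  # 6 contaminant levels

# Step 1: PCA condenses the contamination pattern into a few scores.
scores = PCA(n_components=2).fit_transform(np.log(X))
df["PC1"] = scores[:, 0]

# Step 2: factorial ANOVA on the leading score, testing the main effects
# of layer and basin plus their interaction (a possible synergistic effect).
model = ols("PC1 ~ C(layer) * C(basin)", data=df).fit()
print(anova_lm(model, typ=2))
```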

  17. New Insight into Combined Model and Revised Model for RTD Curves in a Multi-strand Tundish

    NASA Astrophysics Data System (ADS)

    Lei, Hong

    2015-12-01

    The analysis of the residence time distribution (RTD) curve is one of the important experimental technologies used to optimize tundish design. But there are some issues with the RTD analysis models. First, the combined (or mixed) model and the revised model give different analysis results for the same RTD curve. Second, different upper limits of the integral in the numerator of the mean residence time give different results for the same RTD curve. Third, a negative dead volume fraction sometimes appears at the outer strand of the multi-strand tundish. In order to solve the above problems, it is necessary to have a deep insight into the RTD curve and to propose a reasonable method to analyze it. The results show that (1) the revised model is not appropriate for treating the RTD curve; (2) the conception of the visual single-strand tundish and the combined model with the dimensionless time at the cut-off point can be applied to estimate the flow characteristics in the multi-strand tundish; and (3) the mean residence time at each exit is the key parameter for estimating the similarity of fluid flow among strands.
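The sensitivity to the upper integration limit can be made concrete: in dimensionless form, the mean residence time is the first moment of the RTD curve, and in the combined model the dead volume fraction follows as one minus that mean. The tracer curve below is a toy example, not a measured tundish RTD.

```python
import numpy as np

def trapezoid(y, x):
    """Trapezoidal rule (kept explicit to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

# Toy dimensionless RTD curve C(theta), with theta = t / tau and tau = V / Q.
theta = np.linspace(0.0, 3.0, 601)
C = np.exp(-((theta - 0.8) ** 2) / 0.1)

def mean_residence_time(theta, C, theta_cut):
    """First moment of the RTD with an explicit upper integration limit;
    the record above notes that the choice of this limit changes the result."""
    m = theta <= theta_cut
    return trapezoid(theta[m] * C[m], theta[m]) / trapezoid(C[m], theta[m])

for cut in (1.5, 2.0, 3.0):
    tbar = mean_residence_time(theta, C, cut)
    # In the combined model the dead volume fraction is 1 - (dimensionless
    # mean residence time); a negative value signals an inconsistent analysis.
    print(f"cut-off {cut:.1f}: mean {tbar:.3f}, dead fraction {1 - tbar:.3f}")
```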

  18. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning

    PubMed Central

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-01-01

    We present quad-constellation (namely, GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly figured out, and the equivalence of the TGD and DCB correction models is demonstrated by combining theory with practice. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation combination SPP performs better after DCB/TGD correction; for example, for GPS-only b1-based SPP, the positioning accuracies can be improved by 25.0%, 30.6% and 26.7%, respectively, in the N, E, and U components after differential code bias correction, while GPS/GLONASS/BDS b1-based SPP can be improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies are improved by 2.0%, 2.0% and 0.4%, respectively, in the N, E, and U components after Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction, respectively, in the N, E, and U components. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation combination SPP, the accuracy of single-frequency is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the 2nd and 3rd frequency combination; for example, for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB parameter correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and positioning accuracy is slightly better with TGD/DCB correction. PMID:28300787
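For GPS specifically, the broadcast-TGD convention can be sketched as follows: under IS-GPS-200, the broadcast clock refers to the ionosphere-free P1/P2 combination, so a single-frequency user must apply TGD. Other constellations use analogous but differently defined parameters (e.g., BDS TGD1/TGD2, Galileo BGD), which this sketch does not cover.

```python
# Minimal sketch of GPS broadcast-TGD handling for single-frequency SPP:
# an L1-only user subtracts TGD from the satellite clock offset, and an
# L2-only user subtracts gamma * TGD, with gamma = (f1/f2)^2.
C = 299_792_458.0                      # speed of light (m/s)
F1, F2 = 1575.42e6, 1227.60e6          # GPS L1 / L2 frequencies (Hz)
GAMMA = (F1 / F2) ** 2                 # ~1.6469

def sat_clock_offset(dt_clock_s, tgd_s, freq="L1"):
    """Satellite clock offset (s) to apply to a single-frequency pseudorange."""
    if freq == "L1":
        return dt_clock_s - tgd_s
    if freq == "L2":
        return dt_clock_s - GAMMA * tgd_s
    raise ValueError(freq)

# Ignoring a 5 ns TGD biases the corrected range by c * TGD ~ 1.5 m.
dt = sat_clock_offset(1.0e-4, -5.0e-9, "L1")
print(f"L1 clock offset: {dt:.12f} s; range effect of TGD: {C * 5e-9:.2f} m")
```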

  19. The Impact of Satellite Time Group Delay and Inter-Frequency Differential Code Bias Corrections on Multi-GNSS Combined Positioning.

    PubMed

    Ge, Yulong; Zhou, Feng; Sun, Baoqi; Wang, Shengli; Shi, Bo

    2017-03-16

    We present quad-constellation (namely, GPS, GLONASS, BeiDou and Galileo) time group delay (TGD) and differential code bias (DCB) correction models to fully exploit the code observations of all four global navigation satellite systems (GNSSs) for navigation and positioning. The relationship between TGDs and DCBs for multi-GNSS is clearly figured out, and the equivalence of the TGD and DCB correction models is demonstrated by combining theory with practice. Meanwhile, the TGD/DCB correction models have been extended to various standard point positioning (SPP) and precise point positioning (PPP) scenarios in a multi-GNSS and multi-frequency context. To evaluate the effectiveness and practicability of broadcast TGDs in the navigation message and DCBs provided by the Multi-GNSS Experiment (MGEX), both single-frequency GNSS ionosphere-corrected SPP and dual-frequency GNSS ionosphere-free SPP/PPP tests are carried out with quad-constellation signals. Furthermore, we investigate the influence of differential code biases on GNSS positioning estimates. The experiments show that multi-constellation combination SPP performs better after DCB/TGD correction; for example, for GPS-only b1-based SPP, the positioning accuracies can be improved by 25.0%, 30.6% and 26.7%, respectively, in the N, E, and U components after differential code bias correction, while GPS/GLONASS/BDS b1-based SPP can be improved by 16.1%, 26.1% and 9.9%. For GPS/BDS/Galileo third-frequency-based SPP, the positioning accuracies are improved by 2.0%, 2.0% and 0.4%, respectively, in the N, E, and U components after Galileo satellite DCB correction. The accuracy of Galileo-only b1-based SPP is improved by about 48.6%, 34.7% and 40.6% with DCB correction, respectively, in the N, E, and U components. The estimates of multi-constellation PPP are subject to different degrees of influence. For multi-constellation combination SPP, the accuracy of single-frequency is slightly better than that of dual-frequency combinations. Dual-frequency combinations are more sensitive to the differential code biases, especially the 2nd and 3rd frequency combination; for example, for GPS/BDS SPP, accuracy improvements of 60.9%, 26.5% and 58.8% in the three coordinate components are achieved after DCB parameter correction. For multi-constellation PPP, the convergence time can be reduced significantly with differential code bias correction, and positioning accuracy is slightly better with TGD/DCB correction.

  20. A clinical decision-making mechanism for context-aware and patient-specific remote monitoring systems using the correlations of multiple vital signs.

    PubMed

    Forkan, Abdur Rahim Mohammad; Khalil, Ibrahim

    2017-02-01

    In home-based context-aware monitoring, a patient's real-time data on multiple vital signs (e.g. heart rate, blood pressure) are continuously generated by wearable sensors. The changes in such vital parameters are highly correlated; they are also patient-centric and can either recur or fluctuate. The objective of this study is to develop an intelligent method for personalized monitoring and clinical decision support through early estimation of patient-specific vital sign values, and prediction of anomalies using the interrelation among multiple vital signs. In this paper, multi-label classification algorithms are applied in classifier design to forecast these values and related abnormalities. We propose a completely new approach to patient-specific vital sign prediction that uses the correlations among the vitals. The developed technique can guide healthcare professionals to make accurate clinical decisions. Moreover, our model can support many patients with various clinical conditions concurrently by utilizing the power of cloud computing technology. The developed method also reduces the rate of false predictions in remote monitoring centres. In the experimental settings, the statistical features and correlations of six vital signs are formulated as a multi-label classification problem. Eight multi-label classification algorithms along with three fundamental machine learning algorithms are used and tested on a public dataset of 85 patients. Different multi-label classification evaluation measures such as Hamming score, F1-micro average, and accuracy are used for interpreting the prediction performance of patient-specific situation classifications. We achieved 90-95% Hamming score values across 24 classifier combinations for the 85 patients used in our experiment. The results are compared with single-label classifiers and with classifiers that do not consider the correlations among the vitals. The comparisons show that the multi-label method is the best technique for this problem domain. The evaluation results reveal that multi-label classification techniques using the correlations among multiple vitals are an effective way to estimate future values of those vitals early. In context-aware remote monitoring this process can greatly help doctors make quick diagnostic decisions. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
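A binary-relevance baseline for this kind of problem can be sketched with scikit-learn: one classifier per vital-sign abnormality, evaluated with the Hamming score mentioned above. Features and labels are simulated placeholders, not the 85-patient dataset.

```python
import numpy as np
from sklearn.multioutput import MultiOutputClassifier  # binary relevance
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import hamming_loss

rng = np.random.default_rng(2)
# Hypothetical features: statistics and pairwise correlations of six vitals
# over a sliding window; labels: one abnormality flag per vital sign.
X = rng.normal(size=(500, 21))
Y = (rng.random((500, 6)) < 0.2).astype(int)

Xtr, Xte, Ytr, Yte = train_test_split(X, Y, test_size=0.3, random_state=0)
clf = MultiOutputClassifier(RandomForestClassifier(n_estimators=100,
                                                   random_state=0))
clf.fit(Xtr, Ytr)
# Hamming score = 1 - Hamming loss, one of the measures cited above.
print("Hamming score:", 1.0 - hamming_loss(Yte, clf.predict(Xte)))
```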

  1. Development of multi-component explosive lenses for arbitrary phase velocity generation

    NASA Astrophysics Data System (ADS)

    Loiseau, Jason; Huneault, Justin; Petel, Oren; Goroshin, Sam; Frost, David; Higgins, Andrew; Zhang, Fan

    2013-06-01

    The combination of explosives with different detonation velocities and lens-like geometric shaping is a well-established technique for producing structured detonation waves. This technique can be extended to produce nearly arbitrary detonation phase velocities for the purposes of sequentially imploding pressurized tubes or driving Mach disks through high-density metalized explosives. The current study presents the experimental development of accelerating, multi-component lenses designed using simple geometric optics and idealized front curvature. The fast explosive component is either Composition C4 (VOD = 8 km/s) or Primasheet 1000 (VOD = 7 km/s), while the slow component varies from heavily amine-diluted nitromethane (amine mass fraction exceeding 20%) to packed metal and glass particle beds wetted with amine-sensitized nitromethane. The applicability of the geometric optic analog to such highly heterogeneous explosives is also investigated. The multi-layered lens technique is further developed as a means of generating a directed mass and momentum flux of metal particles via Mach-disk formation and jetting in circular and oval planar lenses.
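The geometric-optics analog mentioned in this record reduces to two relations: a Snell's-law refraction between components, sin(theta_1)/sin(theta_2) = v_1/v_2, and an apparent phase velocity v/sin(phi) swept along a boundary inclined at angle phi to the detonation front. The sketch below evaluates both with illustrative VOD values; it is not a lens design tool.

```python
import numpy as np

# Geometric-optics analog for explosive lens design: a detonation crossing
# from a fast component (VOD v1) into a slow one (VOD v2) refracts like a
# light ray, and a front sweeping an inclined boundary produces an apparent
# phase velocity v / sin(phi). VOD values below are illustrative.
def refraction_angle_deg(theta1_deg, v1, v2):
    return np.degrees(np.arcsin((v2 / v1) * np.sin(np.radians(theta1_deg))))

def phase_velocity(vod, phi_deg):
    return vod / np.sin(np.radians(phi_deg))

v_fast, v_slow = 8.0, 4.0   # km/s, e.g. C4 driving a heavily diluted slurry
print(refraction_angle_deg(30.0, v_fast, v_slow))  # wave bends toward normal
print(phase_velocity(v_slow, 30.0))                # 8 km/s apparent sweep
```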

  2. A Summary of the Naval Postgraduate School Research Program

    DTIC Science & Technology

    1989-08-30

    Research topics include: Fundamental Theory for Automatically Combining Changes to Software Systems; Database-System Approach to Software Engineering Environments (SEEs); Multilevel Database Security; Temporal Database Management and Real-Time Database Computers; and The Multi-Lingual, Multi-Model, Multi-Backend Database.

  3. Coherent beam control through inhomogeneous media in multi-photon microscopy

    NASA Astrophysics Data System (ADS)

    Paudel, Hari Prasad

    Multi-photon fluorescence microscopy has become a primary tool for high-resolution deep tissue imaging because of its sensitivity to ballistic excitation photons in comparison to scattered excitation photons. The imaging depth of multi-photon microscopes in tissue imaging is limited primarily by background fluorescence that is generated by scattered light due to random fluctuations in refractive index inside the media, and by reduced intensity in the ballistic focal volume due to aberrations within the tissue and at its interface. We built two multi-photon adaptive optics (AO) correction systems, one for combating scattering and aberration problems, and another for compensating interface aberrations. For scattering correction, a MEMS segmented deformable mirror (SDM) was inserted at a plane conjugate to the objective back-pupil plane. The SDM can pre-compensate for light scattering by coherent combination of the scattered light to make an apparent focus even at depths where negligible ballistic light remains (i.e., the ballistic limit). This problem was approached by investigating the spatial and temporal focusing characteristics of a broad-band light source through strongly scattering media. A new model was developed for coherent focus enhancement through or inside strongly scattering media based on the initial speckle contrast. A layer of fluorescent beads under a mouse skull was imaged using an iterative coherent beam control method in the prototype two-photon microscope to demonstrate the technique. We also adapted an AO correction system to an existing three-photon microscope in a collaborator's lab at Cornell University. In the second AO correction approach, a continuous deformable mirror (CDM) is placed at a plane conjugate to the plane of an interface aberration. We demonstrated that this "Conjugate AO" technique yields a large field-of-view (FOV) advantage in comparison to Pupil AO. Further, we showed that the extended FOV in conjugate AO is maintained over a relatively large axial misalignment of the conjugate planes of the CDM and the aberrating interface. This dissertation advances the field of microscopy by providing new models and techniques for imaging deeply within strongly scattering tissue, and by describing new adaptive optics approaches to extending the imaging FOV in the presence of sample aberrations.

  4. Predicting Power Outages Using Multi-Model Ensemble Forecasts

    NASA Astrophysics Data System (ADS)

    Cerrai, D.; Anagnostou, E. N.; Yang, J.; Astitha, M.

    2017-12-01

    Power outages affect millions of people in the United States every year, disrupting the economy and everyday life. An Outage Prediction Model (OPM) has been developed at the University of Connecticut to help utilities quickly restore outages and limit their adverse consequences on the population. The OPM, operational since 2015, combines several non-parametric machine learning (ML) models that use historical weather storm simulations and high-resolution weather forecasts, satellite remote sensing data, and infrastructure and land cover data to predict the number and spatial distribution of power outages. This study presents a new methodology for improving outage model performance by combining weather- and soil-related variables from three different weather models (WRF 3.7, WRF 3.8 and RAMS/ICLAMS). First, we present a performance evaluation of each model variable, comparing historical weather analyses with station data or reanalysis over the entire storm data set. Each variable of the new outage model version is then extracted from the best-performing weather model for that variable, and sensitivity tests are performed to investigate the most efficient variable combination for outage prediction purposes. Although the final variable combination is extracted from different weather models, this multi-weather-forcing, multi-statistical-model ensemble for power outage prediction outperforms the currently operational OPM version, which is based on a single weather forcing (WRF 3.7), because each model component is the closest to the actual atmospheric state.
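A stripped-down version of the per-variable selection step follows: for each predictor, the weather model with the best hindcast skill supplies that variable, and the selected predictors then feed a non-parametric outage regressor. All data, error levels, and model labels here are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
n_storms = 80
obs_gust = rng.gamma(4.0, 3.0, n_storms)          # "observed" wind gusts

# Hypothetical hindcasts of the same variable from three weather models.
hindcasts = {
    "WRF3.7": obs_gust + rng.normal(0, 2.0, n_storms),
    "WRF3.8": obs_gust + rng.normal(0, 1.2, n_storms),
    "RAMS":   obs_gust + rng.normal(0, 3.0, n_storms),
}

# Step 1: per-variable evaluation -- keep the model with the lowest RMSE.
rmse = {m: np.sqrt(np.mean((f - obs_gust) ** 2)) for m, f in hindcasts.items()}
best = min(rmse, key=rmse.get)
print("best model for wind gust:", best)

# Step 2: the selected variable feeds the non-parametric outage regressor.
X = hindcasts[best].reshape(-1, 1)
y = 5.0 * obs_gust + rng.normal(0, 10.0, n_storms)  # toy outage counts
print("fit R^2:", RandomForestRegressor(random_state=0).fit(X, y).score(X, y))
```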

  5. Photometric Supernova Classification with Machine Learning

    NASA Astrophysics Data System (ADS)

    Lochner, Michelle; McEwen, Jason D.; Peiris, Hiranya V.; Lahav, Ofer; Winter, Max K.

    2016-08-01

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
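As a rough illustration of the second pipeline stage, the sketch below trains a boosted-decision-tree classifier on stand-in light-curve features and scores it with the AUC metric used in the study. The feature values are synthetic Gaussians, not SALT2 or wavelet coefficients.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
# Synthetic stand-ins for extracted light-curve features; class 1 ("Ia")
# is drawn from a slightly shifted distribution.
n = 1000
y = (rng.random(n) < 0.5).astype(int)
X = rng.normal(size=(n, 10)) + y[:, None] * 0.8

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
bdt = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
# AUC of the ROC curve, the metric quoted in the record above.
print("AUC:", roc_auc_score(yte, bdt.predict_proba(Xte)[:, 1]))
```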

  6. A multi-criteria model for the comparison of building envelope energy retrofits

    NASA Astrophysics Data System (ADS)

    Donnarumma, Giuseppe; Fiore, Pierfrancesco

    2017-02-01

    In light of the current EU guidelines in the energy field, improving building envelope performance cannot be separated from satisfying environmental sustainability requirements, reducing the costs associated with the life cycle of the building, and ensuring economic and financial feasibility. Therefore, identifying the "optimal" energy retrofit solutions requires the simultaneous assessment of several factors and thus becomes a problem of choice between several possible alternatives. To facilitate the work of decision-makers, public or private, adequate decision support tools are of great importance. Starting from this need, a model based on the multi-criteria analysis "AHP" technique is proposed, along with the definition of three synthetic indices associated with the three requirements of "Energy Performance", "Sustainability Performance" and "Cost". From the weighted aggregation of the three indices, a global index of preference is obtained that allows one to "quantify" the satisfaction level of the i-th alternative from the point of view of a particular group of decision-makers. The model is then applied, by way of example, to the case study of the energy retrofitting of a former factory, assuming its functional conversion. Twenty possible alternative interventions on the opaque vertical closures, resulting from the combination of three thermal insulator families (synthetic, natural and mineral) with four energy retrofitting techniques, are compared, and the results obtained are critically discussed by considering the points of view of three different groups of decision-makers.
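The final aggregation step can be sketched as a weighted sum of the three synthetic indices. The alternatives, index values, and weights below are illustrative inventions, not those of the case study.

```python
import numpy as np

# Weighted aggregation of three synthetic indices into a global preference
# index, one row per retrofit alternative. All numbers are made up.
alternatives = ["EPS + ETICS", "rock wool + ventilated facade",
                "cork + internal insulation"]
# columns: Energy Performance, Sustainability Performance, Cost
# (all scaled to [0, 1], higher = better, cost already inverted)
indices = np.array([[0.85, 0.40, 0.90],
                    [0.80, 0.70, 0.60],
                    [0.65, 0.95, 0.50]])
weights = np.array([0.5, 0.3, 0.2])   # one decision-maker group's priorities

global_index = indices @ weights
for name, g in sorted(zip(alternatives, global_index), key=lambda t: -t[1]):
    print(f"{g:.3f}  {name}")
```

Changing the weight vector reproduces the paper's point that different decision-maker groups can rank the same alternatives differently.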

  7. Multi-fidelity Gaussian process regression for prediction of random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Parussini, L.; Venturi, D., E-mail: venturi@ucsc.edu; Perdikaris, P.

    We propose a new multi-fidelity Gaussian process regression (GPR) approach for prediction of random fields based on observations of surrogate models or hierarchies of surrogate models. Our method builds upon recent work on recursive Bayesian techniques, in particular recursive co-kriging, and extends it to vector-valued fields and various types of covariances, including separable and non-separable ones. The framework we propose is general and can be used to perform uncertainty propagation and quantification in model-based simulations, multi-fidelity data fusion, and surrogate-based optimization. We demonstrate the effectiveness of the proposed recursive GPR techniques through various examples. Specifically, we study the stochastic Burgers equation and the stochastic Oberbeck–Boussinesq equations describing natural convection within a square enclosure. In both cases we find that the standard deviation of the Gaussian predictors as well as the absolute errors relative to benchmark stochastic solutions are very small, suggesting that the proposed multi-fidelity GPR approaches can yield highly accurate results.
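A minimal two-fidelity sketch in the spirit of recursive co-kriging: a GP is trained on many cheap low-fidelity samples, then a scaling factor rho and a discrepancy GP are fitted on a few high-fidelity samples. The toy functions, kernel, and sample counts are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# AR(1)-style co-kriging sketch: y_high(x) ~ rho * GP_low(x) + delta(x).
f_low = lambda x: np.sin(8 * x)
f_high = lambda x: 1.2 * np.sin(8 * x) + 0.3 * x

x_lo = np.linspace(0, 1, 40)[:, None]          # many cheap samples
x_hi = np.linspace(0, 1, 8)[:, None]           # few expensive samples

gp_lo = GaussianProcessRegressor(RBF(0.1)).fit(x_lo, f_low(x_lo.ravel()))

# Estimate rho by least squares at the high-fidelity sites, then fit the
# discrepancy GP on the residuals.
mu_lo = gp_lo.predict(x_hi)
y_hi = f_high(x_hi.ravel())
rho = (mu_lo @ y_hi) / (mu_lo @ mu_lo)
gp_d = GaussianProcessRegressor(RBF(0.1)).fit(x_hi, y_hi - rho * mu_lo)

x = np.linspace(0, 1, 5)[:, None]
pred = rho * gp_lo.predict(x) + gp_d.predict(x)
print(np.c_[f_high(x.ravel()), pred])          # truth vs. co-kriging estimate
```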

  8. NASA Langley Distributed Propulsion VTOL Tilt-Wing Aircraft Testing, Modeling, Simulation, Control, and Flight Test Development

    NASA Technical Reports Server (NTRS)

    Rothhaar, Paul M.; Murphy, Patrick C.; Bacon, Barton J.; Gregory, Irene M.; Grauer, Jared A.; Busan, Ronald C.; Croom, Mark A.

    2014-01-01

    Control of complex Vertical Take-Off and Landing (VTOL) aircraft traversing from hovering to wing born flight mode and back poses notoriously difficult modeling, simulation, control, and flight-testing challenges. This paper provides an overview of the techniques and advances required to develop the GL-10 tilt-wing, tilt-tail, long endurance, VTOL aircraft control system. The GL-10 prototype's unusual and complex configuration requires application of state-of-the-art techniques and some significant advances in wind tunnel infrastructure automation, efficient Design Of Experiments (DOE) tunnel test techniques, modeling, multi-body equations of motion, multi-body actuator models, simulation, control algorithm design, and flight test avionics, testing, and analysis. The following compendium surveys key disciplines required to develop an effective control system for this challenging vehicle in this on-going effort.

  9. Multi-Array Detection, Association and Location of Infrasound and Seismo-Acoustic Events in Utah

    DTIC Science & Technology

    2008-09-30

    techniques for detecting, associating, and locating infrasound signals at single and multiple arrays and then combining the processed results with... was detected and located by both infrasound and seismic instruments (Figure 3). Infrasound signals at all three arrays, from one of the explosions, are...

  10. Multi-scale heat and mass transfer modelling of cell and tissue cryopreservation

    PubMed Central

    Xu, Feng; Moon, Sangjun; Zhang, Xiaohui; Shao, Lei; Song, Young Seok; Demirci, Utkan

    2010-01-01

    Cells and tissues undergo complex physical processes during cryopreservation. Understanding the underlying physical phenomena is critical to improve current cryopreservation methods and to develop new techniques. Here, we describe multi-scale approaches for modelling cell and tissue cryopreservation including heat transfer at macroscale level, crystallization, cell volume change and mass transport across cell membranes at microscale level. These multi-scale approaches allow us to study cell and tissue cryopreservation. PMID:20047939

  11. Fluorescence multi-scale endoscopy and its applications in the study and diagnosis of gastro-intestinal diseases: set-up design and software implementation

    NASA Astrophysics Data System (ADS)

    Gómez-García, Pablo Aurelio; Arranz, Alicia; Fresno, Manuel; Desco, Manuel; Mahmood, Umar; Vaquero, Juan José; Ripoll, Jorge

    2015-06-01

    Endoscopy is frequently used in the diagnosis of several gastro-intestinal pathologies such as Crohn's disease, ulcerative colitis and colorectal cancer. It has great potential as a non-invasive screening technique capable of detecting suspicious alterations in the intestinal mucosa, such as inflammatory processes. However, these early lesions usually cannot be detected with conventional endoscopes, due to the lack of cellular detail and the absence of specific markers. Because of this lack of specificity, the development of new endoscopy technologies able to show microscopic changes in the mucosa structure is necessary. We here present a confocal endomicroscope which, in combination with a wide-field fluorescence endoscope, offers fast and specific macroscopic information through the use of activatable probes, together with a detailed analysis at the cellular level of possibly altered tissue areas. This multi-modal and multi-scale imaging module, compatible with commercial endoscopes, combines near-infrared fluorescence (NIRF) measurements (enabling specific imaging of markers of disease and prognosis) and confocal endomicroscopy making use of a fiber bundle, providing cellular-level resolution. The system will be used in animal models exhibiting gastro-intestinal diseases in order to analyze the use of potential diagnostic markers in colorectal cancer. In this work, we present in detail the set-up design and the software implementation for obtaining simultaneous RGB/NIRF measurements and short confocal scanning times.

  12. Resolving anatomical and functional structure in human brain organization: identifying mesoscale organization in weighted network representations.

    PubMed

    Lohse, Christian; Bassett, Danielle S; Lim, Kelvin O; Carlson, Jean M

    2014-10-01

    Human brain anatomy and function display a combination of modular and hierarchical organization, suggesting the importance of both cohesive structures and variable resolutions in the facilitation of healthy cognitive processes. However, tools to simultaneously probe these features of brain architecture require further development. We propose and apply a set of methods to extract cohesive structures in network representations of brain connectivity using multi-resolution techniques. We employ a combination of soft thresholding, windowed thresholding, and resolution variation in community detection, which enables us to identify and isolate structures associated with different weights. One such mesoscale structure is bipartivity, which quantifies the extent to which the brain is divided into two partitions with high connectivity between partitions and low connectivity within partitions. A second, complementary mesoscale structure is modularity, which quantifies the extent to which the brain is divided into multiple communities with strong connectivity within each community and weak connectivity between communities. Our methods lead to multi-resolution curves of these network diagnostics over a range of spatial, geometric, and structural scales. For statistical comparison, we contrast our results with those obtained for several benchmark null models. Our work demonstrates that multi-resolution diagnostic curves capture complex organizational profiles in weighted graphs. We apply these methods to the identification of resolution-specific characteristics of healthy weighted graph architecture and altered connectivity profiles in psychiatric disease.
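A toy version of one such multi-resolution diagnostic curve: sweep an edge-weight threshold over a synthetic weighted connectivity matrix and record the modularity at each threshold. The planted two-module matrix is illustrative only, not brain data.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import (greedy_modularity_communities,
                                           modularity)

rng = np.random.default_rng(5)
# Synthetic weighted "connectivity" matrix with two planted modules.
W = rng.random((40, 40)) * 0.2
W[:20, :20] += 0.6
W[20:, 20:] += 0.6
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)

# Threshold sweep: keep edges above w and recompute modularity each time,
# tracing a multi-resolution diagnostic curve as in the record above.
for w in (0.2, 0.4, 0.6):
    G = nx.from_numpy_array(np.where(W >= w, W, 0.0))
    comms = greedy_modularity_communities(G, weight="weight")
    print(f"threshold {w:.1f}: Q = {modularity(G, comms, weight='weight'):.3f}")
```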

  13. MULTI-OBJECTIVE OPTIMIZATION OF MICROSTRUCTURE IN WROUGHT MAGNESIUM ALLOYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Radhakrishnan, Balasubramaniam; Gorti, Sarma B; Simunovic, Srdjan

    2013-01-01

    The microstructural features that govern the mechanical properties of wrought magnesium alloys include grain size, crystallographic texture, and twinning. Several processes based on shear deformation have been developed that promote grain refinement, weakening of the basal texture, and a shift of the peak intensity away from the center of the basal pole figure - features that promote room temperature ductility in Mg alloys. At ORNL, we are currently exploring the concept of introducing nano-twins within sub-micron grains as a possible mechanism for simultaneously improving strength and ductility by exploiting potential dislocation glide along the twin-matrix interface, a mechanism that was originally proposed for face-centered cubic materials. Specifically, we have developed an integrated modeling and optimization framework in order to identify the combinations of grain size, texture and twin spacing that can maximize strength-ductility combinations. A micromechanical model that relates microstructure to material strength is coupled with a failure model that relates ductility to a critical shear strain and a critical hydrostatic stress. The micromechanical model is combined with an optimization tool based on a genetic algorithm. A multi-objective optimization technique is used to explore the strength-ductility space in a systematic fashion and identify optimum combinations of the microstructural parameters that simultaneously maximize strength and ductility in the alloy.

  14. On multi-site damage identification using single-site training data

    NASA Astrophysics Data System (ADS)

    Barthorpe, R. J.; Manson, G.; Worden, K.

    2017-11-01

    This paper proposes a methodology for developing multi-site damage location systems for engineering structures that can be trained using single-site damaged-state data only. The methodology involves training a sequence of binary classifiers based upon single-site damage data and combining the developed classifiers into a robust multi-class damage locator. In this way, the multi-site damage identification problem may be decomposed into a sequence of binary decisions. In this paper Support Vector Classifiers are adopted as the means of making these binary decisions. The proposed methodology represents an advancement on state-of-the-art approaches to multi-site damage identification, which require either: (1) full damaged-state data from single- and multi-site damage cases or (2) the development of a physics-based model to make multi-site model predictions. The potential benefit of the proposed methodology is that a significantly reduced number of recorded damage states may be required to train a multi-site damage locator without recourse to physics-based model predictions. In this paper it is first demonstrated that Support Vector Classification represents an appropriate approach to the multi-site damage location problem, with methods for combining binary classifiers discussed. Next, the proposed methodology is demonstrated and evaluated through application to a real engineering structure - a Piper Tomahawk trainer aircraft wing - with its performance compared to classifiers trained using the full damaged-state dataset.
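The decomposition into binary decisions might look like the following sketch: one support vector classifier per candidate damage site, each trained only on that site's single-site data against the undamaged state, with the combined locator reading off the binary scores. The features and class geometry are synthetic, not aircraft-wing data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Synthetic features for the undamaged state (0) and three single-site
# damage states (1-3), e.g. stand-ins for transmissibility features.
centers = {0: (0.0, 0.0), 1: (3.0, 0.0), 2: (0.0, 3.0), 3: (3.0, 3.0)}
X = np.vstack([rng.normal(c, 0.5, (50, 2)) for c in centers.values()])
y = np.repeat(list(centers), 50)

# One binary classifier per site, trained on single-site data only
# (site s vs. undamaged), as in the methodology above.
clfs = {}
for s in (1, 2, 3):
    m = (y == s) | (y == 0)
    clfs[s] = SVC(kernel="rbf").fit(X[m], (y[m] == s).astype(int))

x = np.array([2.9, 0.1])                       # test point near site 1
scores = {s: float(c.decision_function(x.reshape(1, -1))[0])
          for s, c in clfs.items()}
print(scores)                                  # site 1 should score highest
```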

  15. Climatological Observations for Maritime Prediction and Analysis Support Service (COMPASS)

    NASA Astrophysics Data System (ADS)

    OConnor, A.; Kirtman, B. P.; Harrison, S.; Gorman, J.

    2016-02-01

    Current US Navy forecasting systems cannot easily incorporate extended-range forecasts that can improve mission readiness and effectiveness; ensure safety; and reduce cost, labor, and resource requirements. If Navy operational planners had systems that incorporated these forecasts, they could plan missions using more reliable and longer-term weather and climate predictions. Further, using multi-model forecast ensembles instead of single forecasts would produce higher predictive performance. Extended-range multi-model forecast ensembles, such as those available in the North American Multi-Model Ensemble (NMME), are ideal for system integration because of their high-skill predictions; however, even higher-skill predictions can be produced if forecast model ensembles are combined correctly. While many methods for weighting models exist, choosing the best method in a given environment requires expert knowledge of the models and combination methods. We present an innovative approach that uses machine learning to combine extended-range predictions from multi-model forecast ensembles and generate a probabilistic forecast for any region of the globe up to 12 months in advance. Our machine-learning approach uses 30 years of hindcast predictions to learn patterns of forecast model successes and failures. Each model is assigned a weight for each environmental condition, 100 km2 region, and day, given any expected environmental information. These weights are then applied to the respective predictions for the region and time of interest to effectively stitch together a single, coherent probabilistic forecast. Our experimental results demonstrate the benefits of our approach for producing extended-range probabilistic forecasts for regions and time periods of interest that are superior, in terms of skill, to individual NMME forecast models and commonly weighted models. The probabilistic forecast leverages the strengths of three NMME forecast models to predict environmental conditions for an area spanning from San Diego, CA to Honolulu, HI, seven months in advance. Key findings include: weighted combinations of models are strictly better than individual models; machine-learned combinations are especially better; and forecasts produced using our approach have the highest rank probability skill score most often.
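A stripped-down version of the combination idea: learn per-model weights from hindcast skill and stitch the weighted forecasts into one prediction. Inverse-MSE weights stand in for the machine-learned, condition-dependent weights described above; all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical hindcasts: three models forecasting an environmental
# variable over many past cases, plus the verifying observations.
obs = rng.normal(25.0, 2.0, 360)
fcst = np.stack([obs + rng.normal(0, s, 360) for s in (1.0, 2.0, 3.0)])

# Learn non-negative weights from hindcast skill (inverse MSE here) and
# renormalise them to sum to one.
mse = np.mean((fcst - obs) ** 2, axis=1)
w = (1.0 / mse) / np.sum(1.0 / mse)
print("weights:", w.round(3))

# Stitch the weighted forecasts into a single combined prediction.
combined = np.tensordot(w, fcst, axes=1)
print("combined RMSE:", np.sqrt(np.mean((combined - obs) ** 2)).round(3))
print("best single RMSE:", np.sqrt(mse.min()).round(3))
```

With independent errors the weighted combination typically beats the best single model, which mirrors the record's finding that weighted combinations are strictly better than individual models.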

  16. Large area sub-micron chemical imaging of magnesium in sea urchin teeth.

    PubMed

    Masic, Admir; Weaver, James C

    2015-03-01

    The heterogeneous and site-specific incorporation of inorganic ions can profoundly influence the local mechanical properties of damage tolerant biological composites. Using the sea urchin tooth as a research model, we describe a multi-technique approach to spatially map the distribution of magnesium in this complex multiphase system. Through the combined use of 16-bit backscattered scanning electron microscopy, multi-channel energy dispersive spectroscopy elemental mapping, and diffraction-limited confocal Raman spectroscopy, we demonstrate a new set of high throughput, multi-spectral, high resolution methods for the large scale characterization of mineralized biological materials. In addition, instrument hardware and data collection protocols can be modified such that several of these measurements can be performed on irregularly shaped samples with complex surface geometries and without the need for extensive sample preparation. Using these approaches, in conjunction with whole animal micro-computed tomography studies, we have been able to spatially resolve micron and sub-micron structural features across macroscopic length scales on entire urchin tooth cross-sections and correlate these complex morphological features with local variability in elemental composition. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Application of multi-resolution 3D techniques in crime scene documentation with bloodstain pattern analysis.

    PubMed

    Hołowko, Elwira; Januszkiewicz, Kamil; Bolewicki, Paweł; Sitnik, Robert; Michoński, Jakub

    2016-10-01

    In forensic documentation with bloodstain pattern analysis (BPA) it is highly desirable to obtain overall documentation of a crime scene non-invasively, but also to register single evidence objects, such as bloodstains, in high resolution. In this study, we propose a hierarchical 3D scanning platform designed according to the top-down approach known from traditional forensic photography. The overall 3D model of a scene is obtained via integration of laser scans registered from different positions. Parts of a scene that are of particular interest are documented using a midrange scanner, and the smallest details are added in the highest resolution as close-up scans. The scanning devices are controlled using developed software equipped with advanced algorithms for point cloud processing. To verify the feasibility and effectiveness of multi-resolution 3D scanning in crime scene documentation, our platform was applied to document a murder scene simulated by the BPA experts from the Central Forensic Laboratory of the Police R&D, Warsaw, Poland. Applying the 3D scanning platform proved beneficial in the documentation of a crime scene combined with BPA. The multi-resolution 3D model enables virtual exploration of a scene in a three-dimensional environment and distance measurement, and gives a more realistic preservation of the evidence together with its surroundings. Moreover, high-resolution close-up scans aligned in a 3D model can be used to analyze bloodstains revealed at the crime scene. The results of BPA, such as trajectories and the area of origin, are visualized and analyzed in an accurate model of the scene. At this stage, a simplified approach considering the trajectory of a blood drop as a straight line is applied. Although the 3D scanning platform offers a new quality of crime scene documentation with BPA, some limitations of the technique are also mentioned. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. Multi-level analysis in information systems research: the case of enterprise resource planning system usage in China

    NASA Astrophysics Data System (ADS)

    Sun, Yuan; Bhattacherjee, Anol

    2011-11-01

    Information technology (IT) usage within organisations is a multi-level phenomenon that is influenced by individual-level and organisational-level variables. Yet, current theories, such as the unified theory of acceptance and use of technology, describe IT usage as solely an individual-level phenomenon. This article integrates salient organisational-level variables such as user training, top management support and technical support within an individual-level model to postulate a multi-level model of organisational IT usage. The multi-level model was then empirically validated using multi-level data collected from 128 end users and 26 managers in 26 firms in China regarding their use of enterprise resource planning systems, analysed using the multi-level structural equation modelling (MSEM) technique. We demonstrate the utility of MSEM analysis of multi-level data relative to the more common structural equation modelling analysis of single-level data and show how single-level data can be aggregated to approximate multi-level analysis when multi-level data collection is not possible. We hope that this article will motivate future scholars to employ multi-level data and multi-level analysis for understanding organisational phenomena that are truly multi-level in nature.

  19. Gaussian functional regression for output prediction: Model assimilation and experimental design

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.

    2016-03-01

    In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
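The greedy sampling idea can be sketched independently of the GFR machinery: repeatedly add the candidate input where the surrogate's posterior standard deviation is largest, i.e. where the model is least certain. The 1-D target function and kernel below are assumptions, not the paper's test cases.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Greedy experimental design: grow the training set at the point of
# maximum posterior uncertainty of the current GP surrogate.
f = lambda x: np.sin(6 * x) + 0.5 * x
candidates = np.linspace(0, 1, 200)[:, None]
X = [[0.5]]                                   # initial training input
for _ in range(6):
    gp = GaussianProcessRegressor(RBF(0.2)).fit(np.array(X), f(np.ravel(X)))
    _, sd = gp.predict(candidates, return_std=True)
    X.append(candidates[np.argmax(sd)].tolist())
print("selected training inputs:", np.round(np.ravel(X), 3))
```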

  20. Single-Pulse Multi-Point Multi-Component Interferometric Rayleigh Scattering Velocimeter

    NASA Technical Reports Server (NTRS)

    Bivolaru, Daniel; Danehy, Paul M.; Lee, Joseph W.; Gaffney, Richard L., Jr.; Cutler, Andrew D.

    2006-01-01

    A simultaneous multi-point, multi-component velocimeter using interferometric detection of the Doppler shift of Rayleigh, Mie, and Rayleigh-Brillouin scattered light in supersonic flow is described. The system uses up to three sets of collection optics and one beam combiner for the reference laser light to form a single collimated beam. The planar Fabry-Perot interferometer, used in the imaging mode for frequency detection, preserves the spatial distribution of the signal reasonably well. Single-pulse multi-point measurements of up to two orthogonal and one non-orthogonal components of velocity in a Mach 2 free jet were performed to demonstrate the technique. The average velocity measurements show close agreement with CFD calculations using the VULCAN code.

  1. MT+, integrating magnetotellurics to determine earth structure, physical state, and processes

    USGS Publications Warehouse

    Bedrosian, P.A.

    2007-01-01

    As one of the few deep-earth imaging techniques, magnetotellurics provides information on both the structure and physical state of the crust and upper mantle. Magnetotellurics is sensitive to electrical conductivity, which varies within the earth by many orders of magnitude and is modified by a range of earth processes. As with all geophysical techniques, magnetotellurics has a non-unique inverse problem and has limitations in resolution and sensitivity. As such, an integrated approach, either via the joint interpretation of independent geophysical models, or through the simultaneous inversion of independent data sets is valuable, and at times essential to an accurate interpretation. Magnetotelluric data and models are increasingly integrated with geological, geophysical and geochemical information. This review considers recent studies that illustrate the ways in which such information is combined, from qualitative comparisons to statistical correlation studies to multi-property inversions. Also emphasized are the range of problems addressed by these integrated approaches, and their value in elucidating earth structure, physical state, and processes. © Springer Science+Business Media B.V. 2007.

  2. Multi-fidelity stochastic collocation method for computation of statistical moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Xueyu, E-mail: xueyu-zhu@uiowa.edu; Linebarger, Erin M., E-mail: aerinline@sci.utah.edu; Xiu, Dongbin, E-mail: xiu.16@osu.edu

    We present an efficient numerical algorithm to approximate the statistical moments of stochastic problems in the presence of models with different fidelities. The method extends a previously developed multi-fidelity approximation method. By combining the efficiency of low-fidelity models and the accuracy of high-fidelity models, our method exhibits fast convergence with a limited number of high-fidelity simulations. We establish an error bound for the method and present several numerical examples to demonstrate the efficiency and applicability of the multi-fidelity algorithm.

  3. Conserving analyst attention units: use of multi-agent software and CEP methods to assist information analysis

    NASA Astrophysics Data System (ADS)

    Rimland, Jeffrey; McNeese, Michael; Hall, David

    2013-05-01

    Although the capability of computer-based artificial intelligence techniques for decision-making and situational awareness has seen notable improvement over the last several decades, the current state of the art still falls short of computer systems capable of autonomously making complex decisions and judgments in many domains where data is nuanced and accountability is high. However, there is a great deal of potential for hybrid systems in which software applications augment human capabilities by focusing the analyst's attention on relevant information elements, based both on a priori knowledge of the analyst's goals and on the processing and correlation of a series of data streams too numerous and heterogeneous for the analyst to digest without assistance. Researchers at Penn State University are exploring ways in which an information framework influenced by Klein's Recognition-Primed Decision (RPD) model, Endsley's model of situational awareness, and the Joint Directors of Laboratories (JDL) data fusion process model can be implemented through a novel combination of Complex Event Processing (CEP) and Multi-Agent Software (MAS). Though originally designed for stock market and financial applications, the high-performance, data-driven nature of CEP techniques provides a natural complement to the proven capabilities of MAS systems for modeling naturalistic decision-making, performing process adjudication, and optimizing networked processing and cognition via the use of "mobile agents." This paper addresses the challenges and opportunities of such a framework for augmenting human observational capability as well as enabling collaborative context-aware reasoning in both human teams and hybrid human/software agent teams.

  4. Multi-criteria decision models for forestry and natural resources management: an annotated bibliography

    Treesearch

    Joseph E. de Steiguer; Leslie Liberti; Albert Schuler; Bruce Hansen

    2003-01-01

    Foresters and natural resource managers must balance conflicting objectives when developing land-management plans. Conflicts may encompass economic, environmental, social, cultural, technical, and aesthetic objectives. Selecting the best combination of management uses from numerous objectives is difficult and challenging. Multi-Criteria Decision Models (MCDM) provide a...

  5. Cooling Technology for Large Space Telescopes

    NASA Technical Reports Server (NTRS)

    DiPirro, Michael; Cleveland, Paul; Durand, Dale; Klavins, Andy; Muheim, Daniella; Paine, Christopher; Petach, Mike; Tenerelli, Domenick; Tolomeo, Jason; Walyus, Keith

    2007-01-01

    NASA's New Millennium Program funded an effort to develop a system cooling technology, which is applicable to all future infrared, sub-millimeter and millimeter cryogenic space telescopes. In particular, this technology is necessary for the proposed large space telescope Single Aperture Far-Infrared Telescope (SAFIR) mission. This technology will also enhance the performance and lower the risk and cost for other cryogenic missions. The new paradigm for cooling to low temperatures will involve passive cooling using lightweight deployable membranes that serve both as sunshields and V-groove radiators, in combination with active cooling using mechanical coolers operating down to 4 K. The Cooling Technology for Large Space Telescopes (LST) mission planned to develop and demonstrate a multi-layered sunshield, which is actively cooled by a multi-stage mechanical cryocooler, and further the models and analyses critical to scaling to future missions. The outer four layers of the sunshield cool passively by radiation, while the innermost layer is actively cooled to enable the sunshield to decrease the incident solar irradiance by a factor of more than one million. The cryocooler cools the inner layer of the sunshield to 20 K, and provides cooling to 6 K at a telescope mounting plate. The technology readiness level (TRL) of 7 will be achieved by the active cooling technology following the technology validation flight in Low Earth Orbit. In accordance with the New Millennium charter, tests and modeling are tightly integrated to advance the technology and the flight design for "ST-class" missions. Commercial off-the-shelf engineering analysis products are used to develop validated modeling capabilities to allow the techniques and results from LST to apply to a wide variety of future missions. The LST mission plans to "rewrite the book" on cryo-thermal testing and modeling techniques, and validate modeling techniques to scale to future space telescopes such as SAFIR.

  6. Evaluation of accuracy in implant site preparation performed in single- or multi-step drilling procedures.

    PubMed

    Marheineke, Nadine; Scherer, Uta; Rücker, Martin; von See, Constantin; Rahlf, Björn; Gellrich, Nils-Claudius; Stoetzer, Marcus

    2018-06-01

    Dental implant failure and insufficient osseointegration are proven results of mechanical and thermal damage during the surgery process. We herein performed a comparative study of a less invasive single-step drilling preparation protocol and a conventional multiple drilling sequence. The accuracy of the drilled holes was precisely analyzed, and the influence of the operators' level of expertise and of additional drill template guidance was evaluated. Six experimental groups, deployed in an osseous study model, represented template-guided and freehand drilling in a stepwise drilling procedure in comparison to a single-drill protocol. Each experimental condition was studied with three persons without surgical knowledge as well as three highly experienced oral surgeons. Drilling actions were performed and diameters were recorded with a precision measuring instrument. Less experienced operators were able to significantly increase drilling accuracy using a guiding template, especially when multi-step preparations were performed. Improved accuracy without template guidance was observed when experienced operators executed the single-step rather than the multi-step technique. Single-step drilling protocols have been shown to produce more accurate results than multi-step procedures. The outcome of any protocol can be further improved by the use of guiding templates. Operator experience can be a contributing factor. Single-step preparations are less invasive and promote osseointegration. Even highly experienced surgeons achieve higher levels of accuracy by combining this technique with template guidance. Template guidance thereby reduces hands-on time and side effects during surgery and leads to a more predictable clinical diameter.

  7. Using Ensemble Decisions and Active Selection to Improve Low-Cost Labeling for Multi-View Data

    NASA Technical Reports Server (NTRS)

    Rebbapragada, Umaa; Wagstaff, Kiri L.

    2011-01-01

    This paper seeks to improve low-cost labeling in terms of training set reliability (the fraction of correctly labeled training items) and test set performance for multi-view learning methods. Co-training is a popular multi-view learning method that combines high-confidence example selection with low-cost (self) labeling. However, co-training with certain base learning algorithms significantly reduces training set reliability, causing an associated drop in prediction accuracy. We propose the use of ensemble labeling to improve reliability in such cases. We also discuss and show promising results on combining low-cost ensemble labeling with active (low-confidence) example selection. We unify these example selection and labeling strategies under collaborative learning, a family of techniques for multi-view learning that we are developing for distributed, sensor-network environments.
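A compact sketch of co-training with ensemble labeling: two views of the data, an ensemble per view, and self-labeling only where all ensemble members agree, which is what protects training-set reliability. The data, views, and base learners are synthetic stand-ins, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)
n = 400
y = (rng.random(n) < 0.5).astype(int)          # ground truth (held back)
view1 = rng.normal(size=(n, 5)) + y[:, None]   # two redundant "views"
view2 = rng.normal(size=(n, 5)) - y[:, None]

labels = np.full(n, -1)                        # -1 = unlabeled
labels[:20] = y[:20]                           # small labeled seed set

for _ in range(5):                             # a few co-training rounds
    idx = np.flatnonzero(labels >= 0)
    pool = np.flatnonzero(labels < 0)
    ens1 = [m.fit(view1[idx], labels[idx]) for m in
            (LogisticRegression(), DecisionTreeClassifier(max_depth=3))]
    ens2 = [m.fit(view2[idx], labels[idx]) for m in
            (LogisticRegression(), DecisionTreeClassifier(max_depth=3))]
    p1 = np.mean([m.predict(view1[pool]) for m in ens1], axis=0)
    p2 = np.mean([m.predict(view2[pool]) for m in ens2], axis=0)
    # Self-label only items on which every ensemble member in both views
    # agrees; unanimity is what protects training-set reliability.
    agree = (p1 == p2) & np.isin(p1, (0.0, 1.0))
    move = pool[agree][:20]
    labels[move] = p1[agree][:20].astype(int)

mask = labels >= 0
print("training set size:", mask.sum(),
      "reliability:", float(np.mean(labels[mask] == y[mask])))
```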

  8. Optimal spectral structure for simultaneous Stimulated Brillouin Scattering suppression and coherent property preservation in high power coherent beam combination system

    NASA Astrophysics Data System (ADS)

    Han, Kai; Xu, Xiaojun; Liu, Zejin

    2013-05-01

    Based on the spectral manipulation technique, the Stimulated Brillouin Scattering (SBS) suppression effect and the coherent beam combination (CBC) effect in a multi-tone CBC system are investigated theoretically and experimentally. To get satisfactory SBS suppression, the frequency interval of the multi-tone seed laser should be large enough, at least larger than the SBS gain bandwidth. To attain an excellent CBC effect, the spectra of the multi-tone seed laser need to be matched with the optical path differences among the amplifier chains. Hence, a sufficiently separated, matched spectrum is capable of both SBS mitigation and coherent property preservation. By comparing the SBS suppression effect and the CBC effect at various spectra, the optimal spectral structure for simultaneous SBS suppression and an excellent CBC effect is found.

  9. Risk assessments using the Strain Index and the TLV for HAL, Part I: Task and multi-task job exposure classifications.

    PubMed

    Kapellusch, Jay M; Bao, Stephen S; Silverstein, Barbara A; Merryweather, Andrew S; Thiese, Mathew S; Hegmann, Kurt T; Garg, Arun

    2017-12-01

    The Strain Index (SI) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value for Hand Activity Level (TLV for HAL) use different constituent variables to quantify task physical exposures. Similarly, time-weighted-average (TWA), Peak, and Typical exposure techniques to quantify physical exposure from multi-task jobs make different assumptions about each task's contribution to the whole job exposure. Thus, task and job physical exposure classifications differ depending upon which model and technique are used for quantification. This study examines exposure classification agreement, disagreement, correlation, and magnitude of classification differences between these models and techniques. Data from 710 multi-task job workers performing 3,647 tasks were analyzed using the SI and TLV for HAL models, as well as with the TWA, Typical and Peak job exposure techniques. Physical exposures were classified as low, medium, and high using each model's recommended, or a priori limits. Exposure classification agreement and disagreement between models (SI, TLV for HAL) and between job exposure techniques (TWA, Typical, Peak) were described and analyzed. Regardless of technique, the SI classified more tasks as high exposure than the TLV for HAL, and the TLV for HAL classified more tasks as low exposure. The models agreed on 48.5% of task classifications (kappa = 0.28) with 15.5% of disagreement between low and high exposure categories. Between-technique (i.e., TWA, Typical, Peak) agreement ranged from 61-93% (kappa: 0.16-0.92) depending on whether the SI or TLV for HAL was used. There was disagreement between the SI and TLV for HAL and between the TWA, Typical and Peak techniques. Disagreement creates uncertainty for job design, job analysis, risk assessments, and developing interventions. Task exposure classifications from the SI and TLV for HAL might complement each other. However, TWA, Typical, and Peak job exposure techniques all have limitations. Part II of this article examines whether the observed differences between these models and techniques produce different exposure-response relationships for predicting prevalence of carpal tunnel syndrome.
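Between-model agreement of the kind reported in this record is typically summarized with percent agreement and Cohen's kappa. The sketch below computes both for synthetic low/medium/high classifications, skewed so that the "TLV" rater assigns more low-exposure labels; the numbers are not the study's data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(9)
# Synthetic task classifications (0 = low, 1 = medium, 2 = high) from two
# raters, built so the second rater drifts toward lower categories.
si = rng.integers(0, 3, 500)
noise = rng.integers(-1, 2, 500)
tlv = np.clip(si - np.abs(noise), 0, 2)

print("% agreement:", np.mean(si == tlv))
print("kappa:", cohen_kappa_score(si, tlv))
print(confusion_matrix(si, tlv))             # rows: SI, cols: TLV for HAL
```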

  10. Development of multi-metal interaction model for Daphnia magna: Significance of metallothionein in cellular redistribution.

    PubMed

    Wang, Xiangrui; Liu, Jianyu; Tan, Qiaoguo; Ren, Jinqian; Liang, Dingyuan; Fan, Wenhong

    2018-04-30

    Despite the great progress made in metal-induced toxicity mechanisms, a critical knowledge gap still exists in predicting adverse effects of heavy metals on living organisms in the natural environment, particularly during exposure to multiple metals. In this study, a multi-metal interaction model of Daphnia magna was developed in an effort to provide reasonable explanations regarding the joint effects resulting from exposure to multi-metals. Metallothionein (MT), a widely used biomarker, was selected. In this model, MT was assumed to act as a crucial transfer protein rather than a detoxifying protein. Therefore, competitive complexation of metals to MT could strongly affect the cellular metal redistribution. Thus, competitive complexation of MT in D. magna with metals such as Pb²⁺, Cd²⁺ and Cu²⁺ was qualitatively studied. The results suggested that Cd²⁺ had the highest affinity towards MT, followed by Pb²⁺ and Cu²⁺. On the other hand, the combination of MT with Cu²⁺ appeared to alter its structure, which resulted in a higher affinity towards Pb²⁺. Overall, the predicted bioaccumulation of metals under multi-metal exposure was consistent with earlier reported studies. This model provides an alternative angle on joint effects through a combination of kinetic processes and internal interactions, which could help in developing future models predicting toxicity under multi-metal exposure. Copyright © 2017 Elsevier Inc. All rights reserved.
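
    A minimal way to picture the competitive complexation step of such a model is a single-site competitive (Langmuir-type) occupancy calculation. The affinity constants and free-metal concentrations below are invented and only respect the reported affinity order Cd²⁺ > Pb²⁺ > Cu²⁺; the structural change induced by Cu²⁺ is not modelled.

    ```python
    # Hypothetical conditional affinity constants (L/mol) and free-metal
    # concentrations (mol/L); values are invented for illustration.
    K = {"Cd": 1e9, "Pb": 1e8, "Cu": 1e7}
    C = {"Cd": 2e-9, "Pb": 5e-9, "Cu": 1e-8}

    # single-site competitive (Langmuir-type) occupancy of MT binding sites
    denom = 1.0 + sum(K[m] * C[m] for m in K)
    for m in K:
        print(f"{m}: {K[m] * C[m] / denom:.1%} of MT sites")
    ```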

  11. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best an accuracy similar to that of linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
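
    To give a flavour of the kernel-based non-linear approach, the sketch below fits an RBF kernel ridge regression (a close relative of the RBF kernel models compared in the study) on a toy genotype matrix; the marker data, phenotype model, bandwidth, and regularisation are all invented.

    ```python
    import numpy as np

    def rbf_kernel(A, B, gamma):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    rng = np.random.default_rng(1)
    p = 300                                                  # toy marker count
    X_tr = rng.integers(0, 3, size=(120, p)).astype(float)   # genotypes coded 0/1/2
    y_tr = X_tr[:, :20].sum(1) + rng.standard_normal(120)    # toy phenotype
    X_va = rng.integers(0, 3, size=(40, p)).astype(float)
    y_va = X_va[:, :20].sum(1) + rng.standard_normal(40)

    gamma, lam = 1.0 / (2 * p), 1.0
    K = rbf_kernel(X_tr, X_tr, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_tr)  # kernel ridge fit
    y_hat = rbf_kernel(X_va, X_tr, gamma) @ alpha
    print("accuracy (r):", np.corrcoef(y_hat, y_va)[0, 1])
    ```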

  12. Final Technical Report: Mathematical Foundations for Uncertainty Quantification in Materials Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plechac, Petr; Vlachos, Dionisios G.

    We developed path-wise information theory-based and goal-oriented sensitivity analysis and parameter identification methods for complex high-dimensional dynamics, in particular of non-equilibrium extended molecular systems. The combination of these novel methodologies provided the first methods in the literature capable of handling UQ questions for stochastic complex systems with some or all of the following features: (a) multi-scale stochastic models such as (bio)chemical reaction networks with a very large number of parameters; (b) spatially distributed systems such as Kinetic Monte Carlo or Langevin Dynamics; (c) non-equilibrium processes typically associated with coupled physico-chemical mechanisms, driven boundary conditions, hybrid micro-macro systems, etc. A particular computational challenge arises in simulations of multi-scale reaction networks and molecular systems. Mathematical techniques were applied to in silico prediction of novel materials with emphasis on the effect of microstructure on model uncertainty quantification (UQ). We outline acceleration methods to make calculations of real chemistry feasible, followed by two complementary tasks on structure optimization and microstructure-induced UQ.

  13. Toward irrigation retrieval by combining multi-sensor remote sensing data into a land surface model over a semi-arid region

    NASA Astrophysics Data System (ADS)

    Malbéteau, Y.; Lopez, O.; Houborg, R.; McCabe, M.

    2017-12-01

    Agriculture places considerable pressure on water resources, with the relationship between water availability and food production being critical for sustaining population growth. Monitoring water resources is particularly important in arid and semi-arid regions, where irrigation can represent up to 80% of the consumptive uses of water. In this context, it is necessary to optimize on-farm irrigation management by adjusting irrigation to crop water requirements throughout the growing season. However, in situ point measurements are not routinely available over extended areas and may not be representative at the field scale. Remote sensing offers a cost-effective technique for mapping and monitoring broad areas. By taking advantage of multi-sensor remote sensing methodologies, such as those provided by MODIS, Landsat, Sentinel and CubeSats, we propose a new method to estimate irrigation input at the pivot scale. Here we explore the development of crop-water use estimates via these remote sensing data and integrate them into a land surface modeling framework, using a farm in Saudi Arabia as a demonstration of what can be achieved at larger scales.

  14. Combining a multi deposition multi annealing technique with a scavenging (Ti) to improve the high-k/metal gate stack performance for a gate-last process

    NASA Astrophysics Data System (ADS)

    ShuXiang, Zhang; Hong, Yang; Bo, Tang; Zhaoyun, Tang; Yefeng, Xu; Jing, Xu; Jiang, Yan

    2014-10-01

    ALD HfO2 films fabricated by a novel multi deposition multi annealing (MDMA) technique are investigated; samples both with and without a Ti scavenging layer are included. Compared to the reference gate stack treated by conventional one-time deposition and annealing (D&A), devices receiving MDMA show a significant reduction in leakage current. Meanwhile, EOT growth is effectively controlled by the Ti scavenging layer. This improvement correlates strongly with the cycle number of D&A (while keeping the total annealing time and total dielectric thickness the same). Transmission electron microscopy and energy-dispersive X-ray spectroscopy analyses suggest that oxygen incorporation into both the high-k film and the interfacial layer is likely to be responsible for the improvement of the device. This novel MDMA technique is promising for the development of gate stack technology in a gate-last integration scheme.

  15. Cardiac Light-Sheet Fluorescent Microscopy for Multi-Scale and Rapid Imaging of Architecture and Function

    NASA Astrophysics Data System (ADS)

    Fei, Peng; Lee, Juhyun; Packard, René R. Sevag; Sereti, Konstantina-Ioanna; Xu, Hao; Ma, Jianguo; Ding, Yichen; Kang, Hanul; Chen, Harrison; Sung, Kevin; Kulkarni, Rajan; Ardehali, Reza; Kuo, C.-C. Jay; Xu, Xiaolei; Ho, Chih-Ming; Hsiai, Tzung K.

    2016-03-01

    Light Sheet Fluorescence Microscopy (LSFM) enables multi-dimensional and multi-scale imaging by illuminating specimens with a separate, thin sheet of laser light. It allows rapid plane illumination for reduced photo-damage and superior axial resolution and contrast. We demonstrate cardiac LSFM (c-LSFM) imaging to assess the functional architecture of zebrafish embryos with a retrospective cardiac synchronization algorithm for four-dimensional reconstruction (3-D space + time). By combining our approach with tissue clearing techniques, we reveal the entire cardiac structures and hypertrabeculation of adult zebrafish hearts in response to doxorubicin treatment. By integrating the resolution enhancement technique with c-LSFM to increase the resolving power under a large field-of-view, we demonstrate the use of a low-power objective to resolve the entire architecture of large-scale neonatal mouse hearts, revealing the helical orientation of individual myocardial fibers. Therefore, our c-LSFM imaging approach provides multi-scale visualization of architecture and function to drive cardiovascular research with translational implications in congenital heart diseases.

  16. Provenance Establishment of Stingless Bee Honey Using Multi-element Analysis in Combination with Chemometrics Techniques.

    PubMed

    Shadan, Aidil Fahmi; Mahat, Naji A; Wan Ibrahim, Wan Aini; Ariffin, Zaiton; Ismail, Dzulkiflee

    2018-01-01

    As consumption of stingless bee honey has been gaining popularity in many countries including Malaysia, the ability to accurately identify its geographical origin proves pertinent for investigating fraudulent activities and for consumer protection. Because a chemical signature can be location-specific, multi-element distribution patterns may prove useful for provenancing such a product. Using inductively coupled plasma optical emission spectrometry, as well as principal component analysis (PCA) and linear discriminant analysis (LDA), the distributions of multi-elements in stingless bee honey collected at four different geographical locations (North, West, East, and South) in Johor, Malaysia, were investigated. While cross-validation using PCA demonstrated an 87.0% correct classification rate, this was improved (96.2%) with the use of LDA, indicating that discrimination among the different geographical regions was possible. Therefore, the utilization of multi-element analysis coupled with chemometrics techniques for assigning the provenance of stingless bee honeys for forensic applications is supported. © 2017 American Academy of Forensic Sciences.
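
    A minimal sketch of the cross-validated LDA step, assuming scikit-learn and invented element-concentration data (the 0.8 offset simply makes the four regions separable):

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # hypothetical multi-element concentrations (10 elements, 20 honeys per region)
    X = rng.normal(size=(80, 10)) + 0.8 * np.repeat(np.arange(4), 20)[:, None]
    y = np.repeat(["North", "West", "East", "South"], 20)

    lda = LinearDiscriminantAnalysis()
    print("cross-validated accuracy:", cross_val_score(lda, X, y, cv=5).mean())
    ```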

  17. Digital equalization of time-delay array receivers on coherent laser communications.

    PubMed

    Belmonte, Aniceto

    2017-01-15

    Field conjugation arrays use adaptive combining techniques on multi-aperture receivers to improve the performance of coherent laser communication links by mitigating the consequences of atmospheric turbulence on the down-converted coherent power. However, this motivates the use of complex receivers, as optical signals collected by different apertures need to be adaptively processed, co-phased, and scaled before they are combined. Here, we show that multiple apertures, coupled with optical delay lines, combine retarded versions of a signal at a single coherent receiver, which uses digital equalization to obtain diversity gain against atmospheric fading. Our analysis shows that, instead of field conjugation arrays, digital equalization of time-delay multi-aperture receivers is a simpler and more versatile approach to reducing atmospheric fading.
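
    The idea of combining retarded replicas at a single receiver and undoing the channel digitally can be sketched with a least-squares-trained FIR equalizer; the delays, gains, noise level, and tap count below are invented, and the paper's actual equalizer design is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 400
    delays, gains = [0, 3, 7], [1.0, 0.6, 0.3]   # hypothetical aperture delays/fades
    s = rng.choice([-1.0, 1.0], size=n)          # known training symbols

    # the single receiver sees the sum of delayed, faded replicas plus noise
    r = np.zeros(n)
    for d, g in zip(delays, gains):
        r[d:] += g * s[:n - d]
    r += 0.05 * rng.standard_normal(n)

    # least-squares FIR equalizer trained on the known sequence
    L = 12
    X = np.column_stack([np.concatenate([np.zeros(k), r[:n - k]]) for k in range(L)])
    w, *_ = np.linalg.lstsq(X, s, rcond=None)
    print("training symbol error rate:", np.mean(np.sign(X @ w) != s))
    ```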

  18. Exploration of Force Transition in Stability Operations Using Multi-Agent Simulation

    DTIC Science & Technology

    2006-09-01

    … risk, mission failure risk, and time in the context of the operational threat environment. The Pythagoras Multi-Agent Simulation and Data Farming techniques are used to investigate force-level transitions. Subject terms: Stability Operations, Peace Operations, Data Farming, Pythagoras, Agent-Based Model, Multi-Agent Simulation.

  19. On combining multi-normalization and ancillary measures for the optimal score level fusion of fingerprint and voice biometrics

    NASA Astrophysics Data System (ADS)

    Mohammed Anzar, Sharafudeen Thaha; Sathidevi, Puthumangalathu Savithri

    2014-12-01

    In this paper, we consider the utility of multi-normalization and ancillary measures for the optimal score-level fusion of fingerprint and voice biometrics. An efficient matching-score preprocessing technique based on multi-normalization is employed to improve the performance of the multimodal system under various noise conditions. Ancillary measures derived from the feature space and the score space are used in addition to the matching score vectors for weighting the modalities based on their relative degradation. Reliability (dispersion) and separability (inter-/intra-class distance and d-prime statistics) measures under various noise conditions are estimated from the individual modalities during the training/validation stage. The 'best integration weights' are then computed by algebraically combining these measures using the weighted sum rule. The computed integration weights are then optimized against the recognition accuracy using techniques such as grid search, genetic algorithms and particle swarm optimization. The experimental results show that the proposed biometric solution leads to considerable improvement in recognition performance even under low signal-to-noise ratio (SNR) conditions and reduces the false acceptance rate (FAR) and false rejection rate (FRR), making the system useful for security as well as forensic applications.
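
    The weighted sum rule at the core of the fusion step reduces to the sketch below; the scores are invented, a single min-max normalization stands in for the paper's multi-normalization, and the ancillary-measure-driven, optimized weights are replaced by fixed ones.

    ```python
    import numpy as np

    def min_max(scores):
        s = np.asarray(scores, dtype=float)
        return (s - s.min()) / (s.max() - s.min())

    # hypothetical matcher scores for the same set of probes
    fingerprint = [0.62, 0.80, 0.35, 0.91]
    voice       = [0.40, 0.75, 0.20, 0.66]

    # integration weights; in the paper these come from reliability/separability
    # measures and are then optimized (grid search, GA, PSO) -- fixed here
    w_f, w_v = 0.7, 0.3
    fused = w_f * min_max(fingerprint) + w_v * min_max(voice)
    print(fused)
    ```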

  20. Regional Climate and Streamflow Projections in North America Under IPCC CMIP5 Scenarios

    NASA Astrophysics Data System (ADS)

    Chang, H. I.; Castro, C. L.; Troch, P. A. A.; Mukherjee, R.

    2014-12-01

    The Colorado River system is the predominant source of water supply for the Southwest U.S. and is already fully allocated, making the region's environmental and economic health particularly sensitive to annual and multi-year streamflow variability. Observed streamflow declines in the Colorado Basin in recent years are likely due to a synergistic combination of anthropogenic global warming and natural climate variability, which are creating an overall warmer and more extreme climate. IPCC assessment reports have projected warmer and drier conditions in arid to semi-arid regions (e.g. Solomon et al. 2007). Precipitation related to the North American monsoon (NAM) contributes substantially to Colorado streamflow. Recent climate change studies for the Southwest U.S. region project a dire future, with chronic drought and substantially reduced Colorado River flows. These regional effects reflect the general observation that climate is becoming more extreme globally, with areas climatologically favored to be wet getting wetter and areas favored to be dry getting drier (Wang et al. 2012). Multi-scale downscaling modeling experiments are designed using recent IPCC AR5 global climate projections, incorporating regional climate and hydrologic modeling components. The Weather Research and Forecasting model (WRF) has been selected as the main regional modeling tool; the Variable Infiltration Capacity model (VIC) will be used to generate streamflow projections for the Colorado River Basin. The WRF domain is set up to follow the CORDEX-North America guideline with 25 km grid spacing, and the VIC model is individually calibrated for the upper and lower Colorado River basins at 1/8° resolution. The multi-scale climate and hydrology study aims to characterize how the combination of climate change and natural climate variability is changing cool- and warm-season precipitation. Further, to preserve the downscaled RCM sensitivity while maintaining a reasonable mean climatology based on the observed record, a new bias-correction technique is applied when passing the RCM climatology to the streamflow model. Of specific interest is how major droughts associated with La Niña-like conditions may worsen in the future, as these are the times when the Colorado River system is most critically stressed and would define the "worst case" scenario for water resource planning.
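
    The abstract does not detail the new bias-correction technique, so as a generic stand-in the sketch below shows empirical quantile mapping between RCM output and observations, a common choice for this step; all names and the quantile grid are assumptions.

    ```python
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_series):
        """Empirical quantile mapping: replace each model value with the observed
        value at the same quantile of the common historical period."""
        q = np.linspace(0.01, 0.99, 99)
        model_q = np.quantile(model_hist, q)
        obs_q = np.quantile(obs_hist, q)
        return np.interp(model_series, model_q, obs_q)
    ```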

  1. Towards Symbolic Model Checking for Multi-Agent Systems via OBDDs

    NASA Technical Reports Server (NTRS)

    Raimondi, Franco; Lomuscio, Alessio

    2004-01-01

    We present an algorithm for model checking temporal-epistemic properties of multi-agent systems, expressed in the formalism of interpreted systems. We first introduce a technique for the translation of interpreted systems into boolean formulae, and then present a model-checking algorithm based on this translation. The algorithm is based on OBDDs, as they offer a compact and efficient representation for boolean formulae.

  2. Massive integration of diverse protein quality assessment methods to improve template based modeling in CASP11

    PubMed Central

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2015-01-01

    Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed for recognizing some specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and consequently tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods that was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improving the accuracy, diversity, and consequently robustness of template-based protein structure prediction. PMID:26369671

  3. MASS ESTIMATES OF RAPIDLY MOVING PROMINENCE MATERIAL FROM HIGH-CADENCE EUV IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, David R.; Baker, Deborah; Van Driel-Gesztelyi, Lidia, E-mail: d.r.williams@ucl.ac.uk

    We present a new method for determining the column density of erupting filament material using state-of-the-art multi-wavelength imaging data. Much of the prior work on filament/prominence structure can be divided between studies that use a polychromatic approach with targeted campaign observations and those that use synoptic observations, frequently in only one or two wavelengths. The superior time resolution, sensitivity, and near-synchronicity of data from the Solar Dynamics Observatory's Atmospheric Imaging Assembly allow us to combine these two techniques using photoionization continuum opacity to determine the spatial distribution of hydrogen in filament material. We apply the combined techniques to SDO/AIA observations of a filament that erupted during the spectacular coronal mass ejection on 2011 June 7. The resulting 'polychromatic opacity imaging' method offers a powerful way to track partially ionized gas as it erupts through the solar atmosphere on a regular basis, without the need for coordinated observations, thereby readily offering regular, realistic mass-distribution estimates for models of these erupting structures.
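
    At its core, opacity-based mass estimation inverts the Beer-Lambert relation I = I0 exp(-σN). The snippet below shows that bare inversion with an approximate hydrogen photoionization cross-section; the actual method combines multiple AIA wavelengths and further corrections not shown here.

    ```python
    import numpy as np

    sigma = 6.3e-18      # approx. H photoionization cross-section at 912 A, cm^2
    I_ratio = 0.6        # hypothetical observed/unattenuated EUV intensity
    N_H = -np.log(I_ratio) / sigma
    print(f"hydrogen column density ~ {N_H:.2e} cm^-2")
    ```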

  4. Multi-Sensor Documentation of Metric and Qualitative Information of Historic Stone Structures

    NASA Astrophysics Data System (ADS)

    Adamopoulos, E.; Tsilimantou, E.; Keramidas, V.; Apostolopoulou, M.; Karoglou, M.; Tapinaki, S.; Ioannidis, C.; Georgopoulos, A.; Moropoulou, A.

    2017-08-01

    This paper focuses on the integration of multi-sensor techniques for the acquisition, processing, visualisation and management of data on historic stone structures. The interdisciplinary methodology carried out here comprises two parts. The first part discusses the acquisition of qualitative and quantitative data concerning the geometry, the materials and the degradation of the tangible heritage asset at hand. The second part refers to the analysis, management and visualization of the interrelated data using spatial information technologies. Through the example of the surveying of the ancient temple of Pythian Apollo at the Acropolis of Rhodes, Rhodes Island, Greece, we aim to highlight the issues deriving from the separate application of documentation procedures and how the fusion of these methods can contribute effectively to ensuring the completeness of the measurements for complex structures. The surveying results are further processed to be compatible and integrated with GIS. The geometric documentation derivatives are also combined with environmental data and the results of the application of non-destructive testing and evaluation techniques in situ and of analytical techniques in the lab after sampling. GIS operations are utilized to document the building materials, but also to model and analyse the decay extent and patterns. Detailed surface measurements and geo-processing analyses are executed. This integrated approach helps assess past interventions on the monument, identify the main causes of damage and decay, and finally assists decision making on the most compatible materials and techniques for protection and restoration works.

  5. The key technique study of a kind of personal navigation oriented LBS system

    NASA Astrophysics Data System (ADS)

    Yan, Lei; Zheng, Jianghua; Zhang, Xin; Peng, Chunhua; He, Lina

    2005-11-01

    With the integration of GIS, IT and wireless communication techniques, LBS is developing fast and has attracted wide attention. Personal navigation is a critical application of LBS, with high requirements on data quality, positioning accuracy and multi-model services. The study discusses the key techniques of a personal-navigation-oriented LBS system. Taking NAVISTAR, a service platform of China Unicom, as an example, it especially emphasizes the importance of spatial data organization. Based on the CDMA 1X network, NAVISTAR adopts the gpsOne\\MS-Assisted dynamic positioning technique and puts forward a data organization solution to realize multi-scale representation.

  6. A multi-data stream assimilation framework for the assessment of volcanic unrest

    NASA Astrophysics Data System (ADS)

    Gregg, Patricia M.; Pettijohn, J. Cory

    2016-01-01

    Active volcanoes pose a constant risk to populations living in their vicinity. Significant effort has been spent to increase monitoring and data collection campaigns to mitigate potential volcano disasters. To utilize these datasets to their fullest extent, a new generation of model-data fusion techniques is required that combine multiple, disparate observations of volcanic activity with cutting-edge modeling techniques to provide efficient assessment of volcanic unrest. The purpose of this paper is to develop a data assimilation framework for volcano applications. Specifically, the Ensemble Kalman Filter (EnKF) is adapted to assimilate GPS and InSAR data into viscoelastic, time-forward, finite element models of an evolving magma system to provide model forecasts and error estimations. Since the goal of this investigation is to provide a methodological framework, our efforts are focused on theoretical development and synthetic tests to illustrate the effectiveness of the EnKF and its applicability in physical volcanology. The synthetic tests provide two critical results: (1) a proof of concept for using the EnKF for multi-dataset assimilation in investigations of volcanic activity; and (2) the comparison of spatially limited, but temporally dense, GPS data with temporally limited InSAR observations for evaluating magma chamber dynamics during periods of volcanic unrest. Results indicate that the temporally dense information provided by GPS observations results in faster convergence and more accurate model predictions. However, most importantly, the synthetic tests illustrate that the EnKF is able to swiftly respond to data updates by changing the model forecast trajectory to match incoming observations. The synthetic results demonstrate a great potential for utilizing the EnKF model-data fusion method to assess volcanic unrest and provide model forecasts. The development of these new techniques provides: (1) a framework for future applications of rapid data assimilation and model development during volcanic crises; (2) a method for hind-casting to investigate previous volcanic eruptions, including potential eruption triggering mechanisms and precursors; and (3) an approach for optimizing survey designs for future data collection campaigns at active volcanic systems.
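
    For readers unfamiliar with the EnKF analysis step at the heart of this framework, a minimal stochastic-EnKF update in NumPy might look as follows (generic textbook form, not the authors' implementation):

    ```python
    import numpy as np

    def enkf_update(ensemble, H, y_obs, obs_err, rng):
        """One stochastic-EnKF analysis step.
        ensemble: (n_state, n_members); H: (n_obs, n_state) observation operator."""
        n_obs, n_mem = H.shape[0], ensemble.shape[1]
        X = ensemble - ensemble.mean(axis=1, keepdims=True)
        P = X @ X.T / (n_mem - 1)                      # sample state covariance
        R = (obs_err ** 2) * np.eye(n_obs)
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        Y = y_obs[:, None] + rng.normal(0.0, obs_err, size=(n_obs, n_mem))
        return ensemble + K @ (Y - H @ ensemble)       # shifted toward the data
    ```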

  7. The identification of multi-cave combinations in carbonate reservoirs based on sparsity constraint inverse spectral decomposition

    NASA Astrophysics Data System (ADS)

    Li, Qian; Di, Bangrang; Wei, Jianxin; Yuan, Sanyi; Si, Wenpeng

    2016-12-01

    Sparsity constraint inverse spectral decomposition (SCISD) is a time-frequency analysis method based on the convolution model, in which minimizing the l1 norm of the time-frequency spectrum of the seismic signal is adopted as a sparsity constraint term. The SCISD method has higher time-frequency resolution and more concentrated time-frequency distribution than the conventional spectral decomposition methods, such as short-time Fourier transformation (STFT), continuous-wavelet transform (CWT) and S-transform. Due to these good features, the SCISD method has gradually been used in low-frequency anomaly detection, horizon identification and random noise reduction for sandstone and shale reservoirs. However, it has not yet been used in carbonate reservoir prediction. The carbonate fractured-vuggy reservoir is the major hydrocarbon reservoir in the Halahatang area of the Tarim Basin, north-west China. If reasonable predictions for the type of multi-cave combinations are not made, it may lead to an incorrect explanation for seismic responses of the multi-cave combinations. Furthermore, it will result in large errors in reserves estimation of the carbonate reservoir. In this paper, the energy and phase spectra of the SCISD are applied to identify the multi-cave combinations in carbonate reservoirs. The examples of physical model data and real seismic data illustrate that the SCISD method can detect the combination types and the number of caves of multi-cave combinations and can provide a favourable basis for the subsequent reservoir prediction and quantitative estimation of the cave-type carbonate reservoir volume.
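
    The l1-constrained inversion underlying such sparsity-constrained decompositions can be sketched with the generic iterative soft-thresholding algorithm (ISTA); the dictionary of time-frequency atoms and all parameters here are assumptions, not the authors' algorithm.

    ```python
    import numpy as np

    def ista(D, s, lam=0.1, n_iter=200):
        """Iterative soft-thresholding for min ||D a - s||^2 + lam * ||a||_1,
        where the columns of D are time-frequency atoms and s is the trace."""
        a = np.zeros(D.shape[1])
        step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant
        for _ in range(n_iter):
            grad = D.T @ (D @ a - s)
            z = a - step * grad
            a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
        return a                                   # sparse time-frequency spectrum
    ```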

  8. Multi-model ensemble projections of European river floods and high flows at 1.5, 2, and 3 degrees global warming

    NASA Astrophysics Data System (ADS)

    Thober, Stephan; Kumar, Rohini; Wanders, Niko; Marx, Andreas; Pan, Ming; Rakovec, Oldrich; Samaniego, Luis; Sheffield, Justin; Wood, Eric F.; Zink, Matthias

    2018-01-01

    Severe river floods often result in huge economic losses and fatalities. Since 1980, almost 1500 such events have been reported in Europe. This study investigates climate change impacts on European floods under 1.5, 2, and 3 K global warming. The impacts are assessed employing a multi-model ensemble containing three hydrologic models (HMs: mHM, Noah-MP, PCR-GLOBWB) forced by five CMIP5 general circulation models (GCMs) under three Representative Concentration Pathways (RCPs 2.6, 6.0, and 8.5). This multi-model ensemble is unprecedented with respect to the combination of its size (45 realisations) and its spatial resolution, which is 5 km over the entirety of Europe. Climate change impacts are quantified for high flows and flood events, represented by 10% exceedance probability and annual maxima of daily streamflow, respectively. The multi-model ensemble points to the Mediterranean region as a hotspot of changes with significant decrements in high flows from -11% at 1.5 K up to -30% at 3 K global warming mainly resulting from reduced precipitation. Small changes (< ±10%) are observed for river basins in Central Europe and the British Isles under different levels of warming. Projected higher annual precipitation increases high flows in Scandinavia, but reduced snow melt equivalent decreases flood events in this region. Neglecting uncertainties originating from internal climate variability, downscaling technique, and hydrologic model parameters, the contribution by the GCMs to the overall uncertainties of the ensemble is in general higher than that by the HMs. The latter, however, have a substantial share in the Mediterranean and Scandinavia. Adaptation measures for limiting the impacts of global warming could be similar under 1.5 K and 2 K global warming, but have to account for significantly higher changes under 3 K global warming.
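
    The two flow indicators used in the study are straightforward to extract from a daily streamflow series; in the sketch below the input file name and the 365-day reshape (ignoring leap days) are simplifying assumptions.

    ```python
    import numpy as np

    q = np.loadtxt("daily_streamflow.txt")   # hypothetical daily series, m^3/s
    high_flow = np.quantile(q, 0.90)         # flow exceeded with 10% probability
    annual_max = q[: len(q) // 365 * 365].reshape(-1, 365).max(axis=1)  # flood proxy
    ```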

  9. Combination of Vlbi, GPS and Slr Observations At The Observation Level For The Realization of Terrestrial and Celestial Reference Frames

    NASA Astrophysics Data System (ADS)

    Andersen, P. H.

    Forsvarets forskningsinstitutt (FFI, the Norwegian Defence Research Establishment) has during the last 17 years developed a software system called GEOSAT for the analysis of any type of high-precision space geodetic observations. A unique feature of GEOSAT is the possibility of combining any combination of different space geodetic data at the observation level with one consistent model and one consistent strategy. This is a much better strategy than the one in use today, where different types of observations are processed separately using analysis software developed specifically for each technique, and the results from each technique are finally combined a posteriori. In practice, the models implemented in the software packages differ at the 1-cm level, which is almost one order of magnitude larger than the internal precision of the most precise techniques. Another advantage of the proposed combination method is that, for example, VLBI and GPS can use the same tropospheric model with common parameterization. The same is the case for the Earth orientation parameters, the geocenter coordinates and other geodetic or geophysical parameters, where VLBI, GPS and SLR can share a common estimate for each parameter. The analysis with GEOSAT is automated for the combination of VLBI, SLR and GPS observations. The data are analyzed in batches of one day, where the result from each daily arc is a SRIF array (Square Root Information Filter). A large number of SRIF arrays can be combined into a multi-year solution using the CSRIFS program (Combination Square Root Information Filter and Smoother). Four parameter levels are available, and any parameter can, at each level, be represented either as a constant or as a stochastic parameter (white noise, colored noise, or random walk). The batch length (i.e. the time interval between the addition of noise to the SRIF array) can be made time- and parameter-dependent. GEOSAT and CSRIFS have been applied in the analysis of selected VLBI and SLR data (LAGEOS I & II) from the period January 1993 to July 2001. A selected number of arcs also include GPS data. Earth orientation parameters, geocenter motion, station coordinates and velocities were estimated simultaneously with the coordinates of the radio sources and satellite orbital parameters. Recent software improvements and results of analyses will be presented at the meeting.

  10. Multiple Criteria Decision Analysis (MCDA) for evaluating new medicines in Health Technology Assessment and beyond: The Advance Value Framework.

    PubMed

    Angelis, Aris; Kanavos, Panos

    2017-09-01

    Escalating drug prices have catalysed the generation of numerous "value frameworks" with the aim of informing payers, clinicians and patients on the assessment and appraisal process of new medicines for the purpose of coverage and treatment selection decisions. Although this is an important step towards a more inclusive Value Based Assessment (VBA) approach, aspects of these frameworks are based on weak methodologies and could potentially result in misleading recommendations or decisions. In this paper, a Multiple Criteria Decision Analysis (MCDA) methodological process, based on Multi Attribute Value Theory (MAVT), is adopted for building a multi-criteria evaluation model. A five-stage model-building process is followed, using a top-down "value-focused thinking" approach, involving literature reviews and expert consultations. A generic value tree is structured capturing decision-makers' concerns for assessing the value of new medicines in the context of Health Technology Assessment (HTA) and in alignment with decision theory. The resulting value tree (Advance Value Tree) consists of three levels of criteria (top-level criteria clusters, mid-level criteria, bottom-level sub-criteria or attributes) relating to five key domains that can be explicitly measured and assessed: (a) burden of disease, (b) therapeutic impact, (c) safety profile, (d) innovation level and (e) socioeconomic impact. A number of MAVT modelling techniques are introduced for operationalising (i.e. estimating) the model, for scoring the alternative treatment options, assigning relative weights of importance to the criteria, and combining scores and weights. Overall, the combination of these MCDA modelling techniques for the elicitation and construction of value preferences across the generic value tree provides a new value framework (Advance Value Framework) enabling the comprehensive measurement of value in a structured and transparent way. Given its flexibility to meet diverse requirements and become readily adaptable across different settings, the Advance Value Framework could be offered as a decision-support tool for evaluators and payers to aid coverage and reimbursement of new medicines. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
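
    In its simplest additive form, the MAVT aggregation step of such a framework is a weighted sum of partial value scores, as in the sketch below; the criteria weights and scores are invented, not those elicited for the Advance Value Framework.

    ```python
    # additive MAVT aggregation; criteria weights and 0-100 partial value
    # scores are invented stand-ins for the elicited quantities
    weights = {"burden": 0.15, "therapeutic": 0.35, "safety": 0.25,
               "innovation": 0.10, "socioeconomic": 0.15}
    scores = {"burden": 60, "therapeutic": 80, "safety": 55,
              "innovation": 70, "socioeconomic": 40}

    overall = sum(weights[c] * scores[c] for c in weights)
    print(f"overall value score: {overall:.1f} / 100")
    ```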

  11. Leading with the Heart.

    ERIC Educational Resources Information Center

    Linik, Joyce Riha

    1998-01-01

    Describes techniques used in a multi-age class at Coupeville Elementary School, Washington, to boost reading comprehension and inspire students' love of books: access to an abundance of books, challenges to students, skills reinforcement, combined phonics and whole-language instruction, running-record assessment, paired reading, independent…

  12. A scale space feature based registration technique for fusion of satellite imagery

    NASA Technical Reports Server (NTRS)

    Raghavan, Srini; Cromp, Robert F.; Campbell, William C.

    1997-01-01

    Feature-based registration is one of the most reliable methods to register multi-sensor images (both active and passive imagery), since features are often more reliable than intensity or radiometric values. The only situation where a feature-based approach will fail is when the scene is completely homogeneous or densely textural, in which case a combination of feature- and intensity-based methods may yield better results. In this paper, we present some preliminary results of testing our scale-space feature-based registration technique, a modified version of the feature-based method developed earlier for classification of multi-sensor imagery. The proposed approach removes the sensitivity to parameter selection experienced in the earlier version, as explained later.

  13. Measurement of in situ sulfur isotopes by laser ablation multi-collector ICPMS: opening Pandora’s Box

    USGS Publications Warehouse

    Ridley, William I.; Pribil, Michael; Koenig, Alan E.; Slack, John F.

    2015-01-01

    Laser ablation multi-collector ICPMS is a modern tool for in situ measurement of S isotopes. Advantages of the technique are speed of analysis and relatively minor matrix effects combined with spatial resolution sufficient for many applications. The main disadvantage is a more destructive sampling mechanism relative to the ion microprobe technique. Recent advances in instrumentation allow precise measurement with spatial resolutions down to 25 microns. We describe specific examples from economic geology where increased spatial resolution has greatly expanded insights into the sources and evolution of fluids that cause mineralization and illuminated genetic relations between individual deposits in single mineral districts.

  14. An Integer Programming Model for Multi-Echelon Supply Chain Decision Problem Considering Inventories

    NASA Astrophysics Data System (ADS)

    Harahap, Amin; Mawengkang, Herman; Siswadi; Effendi, Syahril

    2018-01-01

    In this paper we address a problem that is of significance to the industry, namely the optimal decision of a multi-echelon supply chain and the associated inventory systems. By using the guaranteed service approach to model the multi-echelon inventory system, we develop a mixed integer programming model to simultaneously optimize the transportation, inventory and network structure of a multi-echelon supply chain. To solve the model we develop a direct search approach using a strategy of releasing nonbasic variables from their bounds, combined with the "active constraint" method. This strategy is used to force the appropriate non-integer basic variables to move to their neighbourhood integer points.
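
    A heavily simplified facility-location flavour of such a mixed integer programme, assuming the PuLP library and invented costs and demands (the guaranteed-service inventory terms and the authors' direct search strategy are omitted):

    ```python
    from pulp import LpBinary, LpMinimize, LpProblem, LpVariable, lpSum

    # toy network: two candidate warehouses serving three customers
    wh, cust = ["W1", "W2"], ["C1", "C2", "C3"]
    demand = {"C1": 40, "C2": 30, "C3": 50}
    open_cost = {"W1": 100, "W2": 120}
    ship_cost = {(w, c): 2 + i + j for i, w in enumerate(wh)
                                   for j, c in enumerate(cust)}

    m = LpProblem("supply_chain", LpMinimize)
    y = {w: LpVariable(f"open_{w}", cat=LpBinary) for w in wh}
    x = {(w, c): LpVariable(f"ship_{w}_{c}", lowBound=0) for w in wh for c in cust}

    m += lpSum(open_cost[w] * y[w] for w in wh) + lpSum(ship_cost[k] * x[k] for k in x)
    for c in cust:
        m += lpSum(x[w, c] for w in wh) == demand[c]        # meet each demand
    for w in wh:
        m += lpSum(x[w, c] for c in cust) <= 200 * y[w]     # ship only from open sites
    m.solve()
    ```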

  15. A multi-model fusion strategy for multivariate calibration using near and mid-infrared spectra of samples from brewing industry.

    PubMed

    Tan, Chao; Chen, Hui; Wang, Chao; Zhu, Wanping; Wu, Tong; Diao, Yuanbo

    2013-03-15

    Near- and mid-infrared (NIR/MIR) spectroscopy techniques have gained great acceptance in industry due to their multiple applications and versatility. However, successful application often depends heavily on the construction of accurate and stable calibration models. For this purpose, a simple multi-model fusion strategy is proposed. It is the combination of a Kohonen self-organizing map (KSOM), mutual information (MI) and partial least squares (PLS), and is therefore named KMICPLS. It works as follows: first, the original training set is fed into a KSOM for unsupervised clustering of samples, from which a series of training subsets are constructed. Thereafter, on each of the training subsets, an MI spectrum is calculated and only the variables with MI values higher than the mean value are retained, based on which a candidate PLS model is constructed. Finally, a fixed number of PLS models are selected to produce a consensus model. Two NIR/MIR spectral datasets from the brewing industry are used for experiments. The results confirm its superior performance to two reference algorithms, i.e., the conventional PLS and genetic algorithm-PLS (GAPLS). It can build more accurate and stable calibration models without increasing the complexity, and can be generalized to other NIR/MIR applications. Copyright © 2012 Elsevier B.V. All rights reserved.
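
    A rough sketch of the fusion strategy (sample clustering, MI-based variable selection, consensus of PLS members) is given below, assuming scikit-learn; KMeans stands in for the Kohonen map, and all sizes and thresholds are invented.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 200))                   # toy spectra
    y = X[:, :10].sum(1) + 0.1 * rng.standard_normal(120)

    # cluster the training samples (KMeans stands in for the Kohonen map)
    labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

    members = []
    for k in range(4):
        Xk, yk = X[labels == k], y[labels == k]
        mi = mutual_info_regression(Xk, yk)
        mask = mi > mi.mean()                         # keep above-mean-MI variables
        members.append((mask, PLSRegression(n_components=3).fit(Xk[:, mask], yk)))

    def consensus_predict(X_new):
        # consensus = average of the member model predictions
        return np.mean([m.predict(X_new[:, mask]).ravel()
                        for mask, m in members], axis=0)
    ```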

  16. MUSE: the Multi-Slit Solar Explorer

    NASA Astrophysics Data System (ADS)

    Tarbell, Theodore D.; De Pontieu, Bart

    2017-08-01

    The Multi-Slit Solar Explorer is a proposed Small Explorer mission for studying the dynamics of the corona and transition region using both conventional and novel spectral imaging techniques. The physical processes that heat the multi-million degree solar corona, accelerate the solar wind and drive solar activity (CMEs and flares) remain poorly known. A breakthrough in these areas can only come from radically innovative instrumentation and state-of-the-art numerical modeling and will lead to better understanding of space weather origins. MUSE’s multi-slit coronal spectroscopy will use a 100x improvement in spectral raster cadence to fill a crucial gap in our knowledge of Sun-Earth connections; it will reveal temperatures, velocities and non-thermal processes over a wide temperature range to diagnose physical processes that remain invisible to current or planned instruments. MUSE will contain two instruments: an EUV spectrograph (SG) and EUV context imager (CI). Both have similar spatial resolution and leverage extensive heritage from previous high-resolution instruments such as IRIS and the HiC rocket payload. The MUSE investigation will build on the success of IRIS by combining numerical modeling with a uniquely capable observatory: MUSE will obtain EUV spectra and images with the highest resolution in space (1/3 arcsec) and time (1-4 s) ever achieved for the transition region and corona, along 35 slits and a large context FOV simultaneously. The MUSE consortium includes LMSAL, SAO, Stanford, ARC, HAO, GSFC, MSFC, MSU, ITA Oslo and other institutions.

  17. Modeling rainfall-runoff process using soft computing techniques

    NASA Astrophysics Data System (ADS)

    Kisi, Ozgur; Shiri, Jalal; Tombul, Mustafa

    2013-02-01

    The rainfall-runoff process was modeled for a small catchment in Turkey, using 4 years (1987-1991) of measurements of the independent variables of rainfall and runoff values. The models used in the study were Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Gene Expression Programming (GEP), which are Artificial Intelligence (AI) approaches. The applied models were trained and tested using various combinations of the independent variables. The goodness of fit for the models was evaluated in terms of the coefficient of determination (R2), root mean square error (RMSE), mean absolute error (MAE), coefficient of efficiency (CE) and scatter index (SI). A comparison was also made between these models and the traditional Multi Linear Regression (MLR) model. The study provides evidence that GEP (with RMSE=17.82 l/s, MAE=6.61 l/s, CE=0.72 and R2=0.978) is capable of modeling the rainfall-runoff process and is a viable alternative to the other applied artificial intelligence and MLR time-series methods.
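
    Three of the goodness-of-fit measures used in the comparison are implemented below (CE is the Nash-Sutcliffe coefficient of efficiency):

    ```python
    import numpy as np

    def ce(obs, sim):
        """Coefficient of efficiency (Nash-Sutcliffe)."""
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def rmse(obs, sim):
        return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

    def mae(obs, sim):
        return float(np.mean(np.abs(np.asarray(obs) - np.asarray(sim))))
    ```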

  18. Unbiased multi-fidelity estimate of failure probability of a free plane jet

    NASA Astrophysics Data System (ADS)

    Marques, Alexandre; Kramer, Boris; Willcox, Karen; Peherstorfer, Benjamin

    2017-11-01

    Estimating failure probability related to fluid flows is a challenge because it requires a large number of evaluations of expensive models. We address this challenge by leveraging multiple low fidelity models of the flow dynamics to create an optimal unbiased estimator. In particular, we investigate the effects of uncertain inlet conditions in the width of a free plane jet. We classify a condition as failure when the corresponding jet width is below a small threshold, such that failure is a rare event (failure probability is smaller than 0.001). We estimate failure probability by combining the frameworks of multi-fidelity importance sampling and optimal fusion of estimators. Multi-fidelity importance sampling uses a low fidelity model to explore the parameter space and create a biasing distribution. An unbiased estimate is then computed with a relatively small number of evaluations of the high fidelity model. In the presence of multiple low fidelity models, this framework offers multiple competing estimators. Optimal fusion combines all competing estimators into a single estimator with minimal variance. We show that this combined framework can significantly reduce the cost of estimating failure probabilities, and thus can have a large impact in fluid flow applications. This work was funded by DARPA.
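
    A one-dimensional toy version of multi-fidelity importance sampling conveys the mechanics; both "models", the failure threshold, and the Gaussian biasing density are invented stand-ins, not the jet problem itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    threshold = -2.5                        # hypothetical failure level

    f_low  = lambda x: x - 0.3              # cheap surrogate model (stand-in)
    f_high = lambda x: x                    # expensive high-fidelity model (stand-in)

    # 1) explore with the low-fidelity model to locate the failure region
    x = rng.standard_normal(100_000)
    fail_lo = x[f_low(x) < threshold]

    # 2) fit a Gaussian biasing density to the low-fidelity failures
    mu, sd = fail_lo.mean(), fail_lo.std()

    # 3) few high-fidelity evaluations under the biasing density, reweighted
    #    by the density ratio so the estimate stays unbiased
    z = rng.normal(mu, sd, 500)
    pdf = lambda v, m, s: np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    w = pdf(z, 0.0, 1.0) / pdf(z, mu, sd)
    p_fail = np.mean((f_high(z) < threshold) * w)
    print(f"estimated failure probability: {p_fail:.2e}")
    ```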

  19. Virtual Patients and Sensitivity Analysis of the Guyton Model of Blood Pressure Regulation: Towards Individualized Models of Whole-Body Physiology

    PubMed Central

    Moss, Robert; Grosse, Thibault; Marchant, Ivanny; Lassau, Nathalie; Gueyffier, François; Thomas, S. Randall

    2012-01-01

    Mathematical models that integrate multi-scale physiological data can offer insight into physiological and pathophysiological function, and may eventually assist in individualized predictive medicine. We present a methodology for performing systematic analyses of multi-parameter interactions in such complex, multi-scale models. Human physiology models are often based on or inspired by Arthur Guyton's whole-body circulatory regulation model. Despite the significance of this model, it has not been the subject of a systematic and comprehensive sensitivity study. Therefore, we use this model as a case study for our methodology. Our analysis of the Guyton model reveals how the multitude of model parameters combine to affect the model dynamics, and how interesting combinations of parameters may be identified. It also includes a “virtual population” from which “virtual individuals” can be chosen, on the basis of exhibiting conditions similar to those of a real-world patient. This lays the groundwork for using the Guyton model for in silico exploration of pathophysiological states and treatment strategies. The results presented here illustrate several potential uses for the entire dataset of sensitivity results and the “virtual individuals” that we have generated, which are included in the supplementary material. More generally, the presented methodology is applicable to modern, more complex multi-scale physiological models. PMID:22761561

  20. Single-voxel and multi-voxel spectroscopy yield comparable results in the normal juvenile canine brain when using 3 Tesla magnetic resonance imaging.

    PubMed

    Lee, Alison M; Beasley, Michaela J; Barrett, Emerald D; James, Judy R; Gambino, Jennifer M

    2018-06-10

    Conventional magnetic resonance imaging (MRI) characteristics of canine brain diseases are often nonspecific. Single- and multi-voxel spectroscopy techniques allow quantification of chemical biomarkers for tissues of interest and may help to improve diagnostic specificity. However, published information is currently lacking for the in vivo performance of these two techniques in dogs. The aim of this prospective, methods comparison study was to compare the performance of single- and multi-voxel spectroscopy in the brains of eight healthy, juvenile dogs using 3 Tesla MRI. Ipsilateral regions of single- and multi-voxel spectroscopy were performed in symmetric regions of interest of each brain in the parietal (n = 3), thalamic (n = 2), and piriform lobes (n = 3). In vivo single-voxel spectroscopy and multi-voxel spectroscopy metabolite ratios from the same size and multi-voxel spectroscopy ratios from different sized regions of interest were compared. No significant difference was seen between single-voxel spectroscopy and multi-voxel spectroscopy metabolite ratios for any lobe when regions of interest were similar in size and shape. Significant lobar single-voxel spectroscopy and multi-voxel spectroscopy differences were seen between the parietal lobe and thalamus (P = 0.047) for the choline to N-acetyl aspartate ratios when large multi-voxel spectroscopy regions of interest were compared to very small multi-voxel spectroscopy regions of interest within the same lobe; and for the N-acetyl aspartate to creatine ratios in all lobes when single-voxel spectroscopy was compared to combined (pooled) multi-voxel spectroscopy datasets. Findings from this preliminary study indicated that single- and multi-voxel spectroscopy techniques using 3T MRI yield comparable results for similar sized regions of interest in the normal canine brain. Findings also supported using the contralateral side as an internal control for dogs with brain lesions. © 2018 American College of Veterinary Radiology.

  1. Path analysis and multi-criteria decision making: an approach for multivariate model selection and analysis in health.

    PubMed

    Vasconcelos, A G; Almeida, R M; Nobre, F F

    2001-08-01

    This paper introduces an approach that includes non-quantitative factors for the selection and assessment of multivariate complex models in health. A goodness-of-fit-based methodology combined with a fuzzy multi-criteria decision-making approach is proposed for model selection. Models were obtained using the Path Analysis (PA) methodology in order to explain the interrelationship between health determinants and the post-neonatal component of infant mortality in 59 municipalities of Brazil in the year 1991. Socioeconomic and demographic factors were used as exogenous variables, and environmental, health service and agglomeration factors as endogenous variables. Five PA models were developed and accepted by statistical criteria of goodness-of-fit. These models were then submitted to a group of experts, seeking to characterize their preferences according to predefined criteria that evaluated model relevance and plausibility. Fuzzy set techniques were used to rank the alternative models according to the number of times a model was superior to ("dominated") the others. The best-ranked model explained above 90% of the variation of the endogenous variables, and showed the favorable influences of income and education levels on post-neonatal mortality. It also showed the unfavorable effect on mortality of fast population growth, through precarious dwelling conditions and decreased access to sanitation. It was possible to aggregate expert opinions in model evaluation. The proposed procedure for model selection allowed the inclusion of subjective information in a clear and systematic manner.

  2. Multi-objective vs. single-objective calibration of a hydrologic model using single- and multi-objective screening

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Cuntz, Matthias; Shafii, Mahyar; Zink, Matthias; Schäfer, David; Thober, Stephan; Samaniego, Luis; Tolson, Bryan

    2016-04-01

    Hydrologic models are traditionally calibrated against observed streamflow. Recent studies have shown, however, that only a few global model parameters are constrained using this kind of integral signal. They can be identified using prior screening techniques. Since different objectives might constrain different parameters, it is advisable to use multiple sources of information to calibrate such models. One common approach is to combine these multiple objectives (MO) into one single-objective (SO) function and allow the use of a SO optimization algorithm. Another strategy is to consider the different objectives separately and apply a MO Pareto optimization algorithm. In this study, two major research questions are addressed: 1) How do multi-objective calibrations compare with corresponding single-objective calibrations? 2) How much do calibration results deteriorate when the number of calibrated parameters is reduced by a prior screening technique? The hydrologic model employed in this study is a distributed hydrologic model (mHM) with 52 model parameters, i.e. transfer coefficients. The model uses grid cells as a primary hydrologic unit, and accounts for processes like snow accumulation and melting, soil moisture dynamics, infiltration, surface runoff, evapotranspiration, subsurface storage and discharge generation. The model is applied in three distinct catchments over Europe. The SO calibrations are performed using the Dynamically Dimensioned Search (DDS) algorithm with a fixed budget, while the MO calibrations are achieved using the Pareto Dynamically Dimensioned Search (PA-DDS) algorithm with the same budget. The two objectives used here are the Nash-Sutcliffe Efficiency (NSE) of the simulated streamflow and the NSE of its logarithmic transformation. It is shown that the SO DDS results are located close to the edges of the Pareto fronts of the PA-DDS. The MO calibrations are hence preferable, since they supply multiple equivalent solutions from which the user can choose according to specific needs. The sequential single-objective parameter screening was employed prior to the calibrations, reducing the number of parameters by at least 50% in the different catchments and for the different single objectives. The single-objective calibrations led to a faster convergence of the objectives and are hence beneficial when using DDS on single objectives. The above-mentioned parameter screening technique is generalized for multiple objectives and applied before calibration using the PA-DDS algorithm. Two different alternatives of this MO screening are tested. The comparison of the calibration results using all parameters and using only screened parameters shows, for both alternatives, that the PA-DDS algorithm does not profit in terms of trade-off size and function evaluations required to achieve converged Pareto fronts. This is because the PA-DDS algorithm automatically reduces the search space as the calibration run progresses. This automatic reduction may differ for other search algorithms. It is therefore hypothesized that prior screening can, but need not, be beneficial for parameter estimation, depending on the chosen optimization algorithm.

  3. Multi-wavelength Raman spectroscopy study of supported vanadia catalysts: Structure identification and quantification

    DOE PAGES

    Wu, Zili

    2014-10-20

    Revealing the structure of supported metal oxide catalysts is a prerequisite for establishing the structure-catalysis relationship. Among a variety of characterization techniques, multi-wavelength Raman spectroscopy, combining resonance Raman and non-resonance Raman with different excitation wavelengths, has recently emerged as a particularly powerful tool for not only identifying but also quantifying the structure of supported metal oxide clusters. In this review, we use two supported vanadia systems, VOx/SiO2 and VOx/CeO2, as examples to showcase how one can employ this technique to investigate the heterogeneous structure of active oxide clusters and to understand the complex interaction between the oxide clusters and the support. Moreover, the qualitative and quantitative structural information gained from multi-wavelength Raman spectroscopy can be utilized to provide fundamental insights for designing more efficient supported metal oxide catalysts.

  4. HPMA-based block copolymers promote differential drug delivery kinetics for hydrophobic and amphiphilic molecules.

    PubMed

    Tomcin, Stephanie; Kelsch, Annette; Staff, Roland H; Landfester, Katharina; Zentel, Rudolf; Mailänder, Volker

    2016-04-15

    We describe a method in which polymeric nanoparticles stabilized with (2-hydroxypropyl)methacrylamide (HPMA)-based block copolymers are used as drug delivery systems for a fast release of hydrophobic molecules and a controlled release of an amphiphilic molecule. The versatile miniemulsion solvent-evaporation technique was used to prepare polystyrene (PS) as well as poly-d/l-lactide (PDLLA) nanoparticles. Covalently bound or physically adsorbed fluorescent dyes labeled the particles' core and their block copolymer corona. Confocal laser scanning microscopy (CLSM) in combination with flow cytometry measurements was applied to demonstrate the burst release of a fluorescent hydrophobic drug model without the necessity of nanoparticle uptake. In addition, CLSM studies and quantitative calculations using the image processing program Volocity® show the intracellular detachment of the amphiphilic block copolymer from the particles' core after uptake. Our findings offer the possibility of combining the advantages of a fast release for hydrophobic and a controlled release for amphiphilic molecules, therefore pointing to the possibility of 'multi-step and multi-site' targeting by one nanocarrier. We describe thoroughly how different components of a nanocarrier end up in cells. This enables a nanocarrier's different cargos to undergo consecutive release and delivery of distinct components. Most interestingly, we demonstrate the individual kinetics of distinct components of such a system: first, the release of a fluorescent hydrophobic drug model on contact with the cell membrane, without the necessity of nanoparticle uptake; second, the intracellular detachment of the amphiphilic block copolymer from the particles' core after uptake. This offers the possibility of combining the advantages of a fast release for a hydrophobic substance at the time of interaction of the nanoparticle with the cell surface and a controlled release for an amphiphilic molecule later on, therefore pointing to the possibility of 'multi-step and multi-site' targeting by one nanocarrier. We therefore feel that this could be used for many cellular systems where the combined and orchestrated delivery of components is a prerequisite to obtaining the highest efficiency. Copyright © 2016 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  5. A Robust Absorbing Boundary Condition for Compressible Flows

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Jorgenson, Philip C. E.

    2005-01-01

    An absorbing non-reflecting boundary condition (NRBC) for practical computations in fluid dynamics and aeroacoustics is presented with theoretical proof. This paper is a continuation and improvement of a previous paper by the author. The absorbing NRBC technique is based on a first principle of non-reflection, which contains the essential physics that a plane wave solution of the Euler equations remains intact across the boundary. The technique is theoretically shown to work for a large class of finite volume approaches. When combined with the hyperbolic conservation laws, the NRBC is simple, robust, and truly multi-dimensional; no additional implementation is needed beyond the prescribed physical boundary conditions. Several numerical examples in multi-dimensional spaces using two different finite volume schemes illustrate its robustness in practical computations. Limitations and remedies of the technique are also discussed.

  6. The Politics of Pleasure: An Ethnographic Examination Exploring the Dominance of the Multi-Activity Sport-Based Physical Education Model

    ERIC Educational Resources Information Center

    Gerdin, Göran; Pringle, Richard

    2017-01-01

    Kirk warns that physical education (PE) exists in a precarious situation as the dominance of the multi-activity sport-techniques model, and its associated problems, threatens the long-term educational survival of PE. Yet he also notes that although the model is problematic it is highly resistant to change. In this paper, we draw on the results of…

  7. THE FUNDAMENTAL SOLUTIONS FOR MULTI-TERM MODIFIED POWER LAW WAVE EQUATIONS IN A FINITE DOMAIN.

    PubMed

    Jiang, H; Liu, F; Meerschaert, M M; McGough, R J

    2013-01-01

    Fractional partial differential equations with more than one fractional derivative term in time, such as the Szabo wave equation or the power law wave equation, describe important physical phenomena. However, studies of these multi-term time-space or time fractional wave equations are still under development. In this paper, multi-term modified power law wave equations in a finite domain are considered. The multi-term time fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals (1, 2], [2, 3), [2, 4) or (0, n) (n > 2), respectively. Analytical solutions of the multi-term modified power law wave equations are derived. These new techniques are based on Luchko's theorem, a spectral representation of the Laplacian operator, a method of separating variables, and fractional derivative techniques. These general methods are then applied to the special cases of the Szabo wave equation and the power law wave equation. The methods and techniques can also be extended to other kinds of multi-term time-space fractional models, including those with a fractional Laplacian.
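
    To fix notation (a schematic in our own symbols, not reproduced from the paper), the solution strategy can be summarized as follows: separation of variables reduces the multi-term Caputo wave equation to a fractional ordinary differential equation per spatial eigenmode, which Luchko's theorem solves in terms of multivariate Mittag-Leffler functions.

        % Generic multi-term time-fractional wave equation (schematic form):
        \[
          \sum_{i=1}^{m} a_i \, {}^{C}\!D_t^{\alpha_i} u(x,t)
            = c^2 \, \Delta u(x,t), \qquad x \in \Omega,\; t > 0 .
        \]
        % With Laplacian eigenpairs  -\Delta \varphi_n = \lambda_n \varphi_n
        % (homogeneous boundary conditions) and  u = \sum_n T_n(t)\,\varphi_n(x),
        % each mode satisfies the multi-term fractional ODE
        \[
          \sum_{i=1}^{m} a_i \, {}^{C}\!D_t^{\alpha_i} T_n(t)
            = -\, c^2 \lambda_n \, T_n(t),
        \]
        % solved via Luchko's theorem using multivariate Mittag-Leffler functions.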

  8. Measuring short-term post-fire forest recovery across a burn severity gradient in a mixed pine-oak forest using multi-sensor remote sensing techniques

    DOE PAGES

    Meng, Ran; Wu, Jin; Zhao, Feng; ...

    2018-06-01

    Understanding post-fire forest recovery is pivotal to the study of forest dynamics and the global carbon cycle. Field-based studies have indicated a convex response of forest recovery rate to burn severity at the individual tree level, related to fire-induced tree mortality; however, these findings were constrained in spatial/temporal extent and are not detectable by traditional optical remote sensing studies, largely owing to contamination from understory recovery. For this work, we examined whether the combined use of multi-sensor remote sensing techniques (i.e., 1 m simultaneous airborne imaging spectroscopy and LiDAR, and 2 m satellite multi-spectral imagery) to separate canopy recovery from understory recovery would enable quantification of post-fire forest recovery rate spanning a large gradient in burn severity over large scales. Our study was conducted in a mixed pine-oak forest in Long Island, NY, three years after a top-killing fire. We remotely detected an initial increase and then decline of forest recovery rate with burn severity across the burned area, with a maximum canopy area-based recovery rate of 10% per year at the moderate forest burn severity class. More intriguingly, such remotely detected convex relationships also held at the species level, with pine trees being more resilient to high burn severity and having a higher maximum recovery rate (12% per year) than oak trees (4% per year). These results provide some of the first quantitative evidence of the effects of fire-adaptive strategies on post-fire forest recovery derived from relatively large spatial-temporal domains. Our study thus provides a methodological advance in linking multi-sensor remote sensing techniques to monitor forest dynamics in a spatially explicit manner over large scales, with important implications for fire-related forest management and for constraining/benchmarking fire effect schemes in ecological process models.

  9. Measuring short-term post-fire forest recovery across a burn severity gradient in a mixed pine-oak forest using multi-sensor remote sensing techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Ran; Wu, Jin; Zhao, Feng

    Understanding post-fire forest recovery is pivotal to the study of forest dynamics and the global carbon cycle. Field-based studies have indicated a convex response of forest recovery rate to burn severity at the individual tree level, related to fire-induced tree mortality; however, these findings were constrained in spatial/temporal extent and are not detectable by traditional optical remote sensing studies, largely owing to contamination from understory recovery. For this work, we examined whether the combined use of multi-sensor remote sensing techniques (i.e., 1 m simultaneous airborne imaging spectroscopy and LiDAR, and 2 m satellite multi-spectral imagery) to separate canopy recovery from understory recovery would enable quantification of post-fire forest recovery rate spanning a large gradient in burn severity over large scales. Our study was conducted in a mixed pine-oak forest in Long Island, NY, three years after a top-killing fire. We remotely detected an initial increase and then decline of forest recovery rate with burn severity across the burned area, with a maximum canopy area-based recovery rate of 10% per year at the moderate forest burn severity class. More intriguingly, such remotely detected convex relationships also held at the species level, with pine trees being more resilient to high burn severity and having a higher maximum recovery rate (12% per year) than oak trees (4% per year). These results provide some of the first quantitative evidence of the effects of fire-adaptive strategies on post-fire forest recovery derived from relatively large spatial-temporal domains. Our study thus provides a methodological advance in linking multi-sensor remote sensing techniques to monitor forest dynamics in a spatially explicit manner over large scales, with important implications for fire-related forest management and for constraining/benchmarking fire effect schemes in ecological process models.

  10. A Method for Calculating Strain Energy Release Rates in Preliminary Design of Composite Skin/Stringer Debonding Under Multi-Axial Loading

    NASA Technical Reports Server (NTRS)

    Krueger, Ronald; Minguet, Pierre J.; O'Brien, T. Kevin

    1999-01-01

    Three simple procedures were developed to determine strain energy release rates, G, in composite skin/stringer specimens for various combinations of uniaxial and biaxial (in-plane/out-of-plane) loading conditions. These procedures may be used for parametric design studies in such a way that only a few finite element computations are necessary for a study of many load combinations. The results were compared with mixed mode strain energy release rates calculated directly from nonlinear two-dimensional plane-strain finite element analyses using the virtual crack closure technique. The first procedure involved solving for three unknown parameters needed to determine the energy release rates. Good agreement was obtained when the external loads were used in the derived expression. This superposition technique is only applicable if the structure exhibits linear load/deflection behavior. Consequently, a second technique was derived that is applicable in the case of nonlinear load/deformation behavior. The technique involved calculating six unknown parameters from a set of six simultaneous linear equations, with data from six nonlinear analyses, to determine the energy release rates. This procedure was not time efficient and hence less appealing. A third procedure was developed to calculate mixed mode energy release rates as a function of delamination length. This procedure required only one nonlinear finite element analysis of the specimen with a single delamination length to obtain a reference solution for the energy release rates and the scale factors. The delamination was extended in three separate linear models of the local area in the vicinity of the delamination, subjected to unit loads, to obtain the distribution of G with delamination length. Although additional modeling effort is required to create the sub-models, this local technique is efficient for parametric studies.

  11. A Generalized Mixture Framework for Multi-label Classification

    PubMed Central

    Hong, Charmgil; Batal, Iyad; Hauskrecht, Milos

    2015-01-01

    We develop a novel probabilistic ensemble framework for multi-label classification that is based on the mixtures-of-experts architecture. In this framework, we combine multi-label classification models in the classifier chains family that decompose the class posterior distribution P(Y1, …, Yd|X) using a product of posterior distributions over components of the output space. Our approach captures different input–output and output–output relations that tend to change across data. As a result, we can recover a rich set of dependency relations among inputs and outputs that a single multi-label classification model cannot capture due to its modeling simplifications. We develop and present algorithms for learning the mixtures-of-experts models from data and for performing multi-label predictions on unseen data instances. Experiments on multiple benchmark datasets demonstrate that our approach achieves highly competitive results and outperforms the existing state-of-the-art multi-label classification methods. PMID:26613069
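
    As a minimal sketch of the chain-combination idea (not the authors' implementation: the mixtures-of-experts gating is replaced by a uniform mixture over randomly ordered chains, and the data are synthetic), one can average the posterior estimates of several classifier chains:

        # Uniform mixture of classifier chains: a simplified stand-in for the
        # paper's mixtures-of-experts combination of chain experts.
        import numpy as np
        from sklearn.datasets import make_multilabel_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split
        from sklearn.multioutput import ClassifierChain
        from sklearn.metrics import f1_score

        X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                              n_classes=5, random_state=0)
        X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

        # Each chain factorizes P(Y1..Yd | X) with a different label order.
        chains = [ClassifierChain(LogisticRegression(max_iter=1000),
                                  order="random", random_state=k)
                  for k in range(10)]
        for chain in chains:
            chain.fit(X_tr, Y_tr)

        # Average the chains' posterior estimates and threshold at 0.5.
        P = np.mean([chain.predict_proba(X_te) for chain in chains], axis=0)
        print("micro-F1:", f1_score(Y_te, (P >= 0.5).astype(int), average="micro"))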

  12. A Multi-Cycle Q-Modulation for Dynamic Optimization of Inductive Links.

    PubMed

    Lee, Byunghun; Yeon, Pyungwoo; Ghovanloo, Maysam

    2016-08-01

    This paper presents a new method, called multi-cycle Q-modulation, which can be used in wireless power transmission (WPT) to modulate the quality factor (Q) of the receiver (Rx) coil and dynamically optimize the load impedance to maximize the power transfer efficiency (PTE) in two-coil links. A key advantage of the proposed method is that it can be easily implemented using off-the-shelf components without requiring fast switching at or above the carrier frequency, which is more suitable for integrated circuit design. Moreover, the proposed technique does not need any sophisticated synchronization between the power carrier and the Q-modulation switching pulses. The multi-cycle Q-modulation is analyzed theoretically with a lumped circuit model, and verified in simulation and measurement using an off-the-shelf prototype. Automatic resonance tuning (ART) in the Rx, combined with multi-cycle Q-modulation, helped maximize the PTE of the inductive link dynamically in the presence of environmental and loading variations, which can otherwise significantly degrade the PTE in multi-coil settings. In the prototype conventional two-coil link, the proposed method increased the power amplifier (PA) plus inductive link efficiency from 4.8% to 16.5% at (RL = 1 kΩ, d23 = 3 cm), and from 23% to 28.2% at (RL = 100 Ω, d23 = 3 cm) after an 11% change in the resonance capacitance, while delivering 168.1 mW of power to the load (PDL).

  13. Influence of Natural Convection and Thermal Radiation Multi-Component Transport in MOCVD Reactors

    NASA Technical Reports Server (NTRS)

    Lowry, S.; Krishnan, A.; Clark, I.

    1999-01-01

    The influence of Grashof and Reynolds numbers in Metal Organic Chemical Vapor Deposition (MOCVD) reactors is being investigated in a combined empirical/numerical study. As part of that research, the deposition of indium phosphide in an MOCVD reactor is modeled using the computational code CFD-ACE. The model includes the effects of convection, conduction, and radiation as well as multi-component diffusion and multi-step surface/gas phase chemistry. The predictions are compared with experimental data for a commercial reactor and analyzed with respect to model accuracy.

  14. SU-F-R-46: Predicting Distant Failure in Lung SBRT Using Multi-Objective Radiomics Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Z; Folkert, M; Iyengar, P

    2016-06-15

    Purpose: To predict distant failure in lung stereotactic body radiation therapy (SBRT) for early stage non-small cell lung cancer (NSCLC) using a new multi-objective radiomics model. Methods: Most available radiomics models use overall accuracy as the objective function. However, due to data imbalance, a single objective may not reflect the performance of a predictive model. Therefore, we developed a multi-objective radiomics model that considers both sensitivity and specificity as objective functions simultaneously. The new model is used to predict distant failure in lung SBRT using 52 patients treated at our institute. Quantitative imaging features of PET and CT as well as clinical parameters are utilized to build the predictive model. Image features include intensity features (9), textural features (12) and geometric features (8). Clinical parameters for each patient include demographic parameters (4), tumor characteristics (8), treatment fraction schemes (4) and pretreatment medicines (6). The modelling procedure consists of two steps: extracting features from segmented tumors in PET and CT, and selecting features and training model parameters based on the multiple objectives. A support vector machine (SVM) is used as the predictive model, while the non-dominated sorting genetic algorithm II (NSGA-II) is used to solve the multi-objective optimization. Results: The accuracies for PET, clinical, CT, PET+clinical, PET+CT, CT+clinical, and PET+CT+clinical are 71.15%, 84.62%, 84.62%, 85.54%, 82.69%, 84.62%, and 86.54%, respectively. The sensitivities for the above seven combinations are 41.76%, 58.33%, 50.00%, 50.00%, 41.67%, 41.67%, and 58.33%, while the specificities are 80.00%, 92.50%, 90.00%, 97.50%, 92.50%, 97.50%, and 97.50%. Conclusion: A new multi-objective radiomics model for predicting distant failure in NSCLC treated with SBRT was developed. The experimental results show that the best performance is obtained by combining all features.
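
    A hedged sketch of the core selection idea follows: candidate feature subsets are scored on the two objectives (sensitivity, specificity) with a cross-validated SVM, and the non-dominated (Pareto) subsets are kept. The data are synthetic and the full NSGA-II evolutionary loop is omitted for brevity:

        # Score candidate feature subsets on (sensitivity, specificity) with a
        # cross-validated SVM and keep the non-dominated subsets.
        import numpy as np
        from itertools import combinations
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_predict
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=200, n_features=8, weights=[0.75],
                                   random_state=1)      # imbalanced, like the cohort

        def objectives(cols):
            y_hat = cross_val_predict(SVC(), X[:, cols], y, cv=5)
            tp = np.sum((y_hat == 1) & (y == 1)); fn = np.sum((y_hat == 0) & (y == 1))
            tn = np.sum((y_hat == 0) & (y == 0)); fp = np.sum((y_hat == 1) & (y == 0))
            return tp / (tp + fn), tn / (tn + fp)       # sensitivity, specificity

        subsets = [list(c) for r in (2, 3) for c in combinations(range(8), r)]
        scores = np.array([objectives(c) for c in subsets])

        # Pareto front: subsets not dominated in both objectives at once.
        front = [i for i, s in enumerate(scores)
                 if not any((t >= s).all() and (t > s).any() for t in scores)]
        for i in front:
            print("features", subsets[i], "-> sens/spec", np.round(scores[i], 2))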

  15. COMPUTATIONAL CHALLENGES IN BUILDING MULTI-SCALE AND MULTI-PHYSICS MODELS OF CARDIAC ELECTRO-MECHANICS

    PubMed Central

    Plank, G; Prassl, AJ; Augustin, C

    2014-01-01

    Despite the evident multi-physics nature of the heart (it is an electrically controlled mechanical pump), most modeling studies have considered electrophysiology and mechanics in isolation. In no small part, this is due to the formidable modeling challenges involved in building strongly coupled, anatomically accurate, and biophysically detailed multi-scale multi-physics models of cardiac electro-mechanics. Among the main challenges are the selection of model components and their adjustment to achieve integration into a consistent organ-scale model; technical difficulties such as the exchange of data between the electro-physiological and mechanical models, particularly when different spatio-temporal grids are used for discretization; and, finally, the implementation of advanced numerical techniques to deal with the substantial computational burden. In this study we report on progress made in developing a novel modeling framework suited to tackling these challenges. PMID:24043050

  16. Analysis of Thick Sandwich Shells with Embedded Ceramic Tiles

    NASA Technical Reports Server (NTRS)

    Davila, Carlos G.; Smith, C.; Lumban-Tobing, F.

    1996-01-01

    The Composite Armored Vehicle (CAV) is an advanced technology demonstrator of an all-composite ground combat vehicle. The CAV upper hull is made of a tough light-weight S2-glass/epoxy laminate with embedded ceramic tiles that serve as armor. The tiles are bonded to a rubber mat with a carefully selected, highly viscoelastic adhesive. The integration of armor and structure offers an efficient combination of ballistic protection and structural performance. The analysis of this anisotropic construction, with its inherent discontinuous and periodic nature, however, poses several challenges. The present paper describes a shell-based 'element-layering' technique that properly accounts for these effects and for the concentrated transverse shear flexibility in the rubber mat. One of the most important advantages of the element-layering technique over advanced higher-order elements is that it is based on conventional elements. This advantage allows the models to be portable to other structural analysis codes, a prerequisite in a program that involves the computational facilities of several manufacturers and government laboratories. The element-layering technique was implemented into an auto-layering program that automatically transforms a conventional shell model into a multi-layered model. The effects of tile layer homogenization, tile placement patterns, and tile gap size on the analysis results are described.

  17. A dynamic approach merging network theory and credit risk techniques to assess systemic risk in financial networks.

    PubMed

    Petrone, Daniele; Latora, Vito

    2018-04-03

    The interconnectedness of financial institutions affects instability and credit crises. To quantify systemic risk we introduce the PD model, a dynamic model that combines credit risk techniques with a contagion mechanism on the network of exposures among banks. A potential loss distribution is obtained through a multi-period Monte Carlo simulation that considers the probability of default (PD) of the banks and their tendency to default in the same time interval. A contagion process increases the PD of banks exposed to distressed counterparties. The systemic risk is measured by statistics of the loss distribution, while the contribution of each node is quantified by the new measures PDRank and PDImpact. We illustrate how the model works on the network of the European Global Systemically Important Banks. For a certain range of the banks' capital and asset volatility, our results reveal the emergence of a strong contagion regime in which lower default correlation between banks corresponds to higher losses. This is the opposite of the diversification benefits postulated by standard credit risk models used by banks and regulators, who could therefore underestimate the capital needed to overcome a period of crisis, thereby contributing to financial system instability.
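
    The general mechanism can be illustrated with a toy simulation (all parameters, the network, and the one-factor Gaussian copula are invented stand-ins, not the paper's calibrated model): correlated defaults, a contagion step that raises the PD of creditors of defaulted banks, and loss statistics accumulated over Monte Carlo runs:

        # Toy Monte Carlo of correlated defaults plus contagion.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n = 10                                   # banks
        pd0 = np.full(n, 0.02)                   # baseline per-period PDs
        expo = rng.uniform(0, 50, (n, n)) * (rng.random((n, n)) < 0.3)
        np.fill_diagonal(expo, 0.0)              # expo[i, j]: exposure of bank i to bank j
        rho, contagion_mult, n_sims = 0.3, 3.0, 20_000

        losses = np.empty(n_sims)
        for s in range(n_sims):
            pd = pd0.copy()
            z = rng.standard_normal()            # common systematic factor
            x = np.sqrt(rho) * z + np.sqrt(1 - rho) * rng.standard_normal(n)
            defaulted = x < norm.ppf(pd)
            # contagion: creditors exposed to defaulted banks become more fragile
            hit = expo[:, defaulted].sum(axis=1) > 0
            pd = np.minimum(1.0, np.where(hit, contagion_mult * pd, pd))
            x2 = np.sqrt(rho) * z + np.sqrt(1 - rho) * rng.standard_normal(n)
            defaulted |= x2 < norm.ppf(pd)
            losses[s] = expo[:, defaulted].sum() # system-wide credit loss
        print("expected loss:", losses.mean(), "  99% VaR:", np.quantile(losses, 0.99))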

  18. A GIS-based multi-source and multi-box modeling approach (GMSMB) for air pollution assessment--a North American case study.

    PubMed

    Wang, Bao-Zhen; Chen, Zhi

    2013-01-01

    This article presents a GIS-based multi-source and multi-box modeling approach (GMSMB) to predict the spatial concentration distributions of airborne pollutants on local and regional scales. In this method, an extended multi-box model combined with a multi-source, multi-grid Gaussian model is developed within the GIS framework to examine the contributions from both point- and area-source emissions. By using GIS, a large amount of data required for air quality modeling, including emission sources, air quality monitoring, meteorological data, and spatial location information, is brought into an integrated modeling environment, allowing spatial variations in source distribution and meteorological conditions to be analyzed in more quantitative detail. The developed modeling approach was used to predict the spatial concentration distributions of four air pollutants (CO, NO2, SO2 and PM2.5) for the State of California. The modeling results are compared with monitoring data. Good agreement is obtained, demonstrating that the developed approach can deliver an effective air pollution assessment on both regional and local scales to support air pollution control and management planning.
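
    A minimal sketch of the multi-source Gaussian component (emission rates, stack heights, and Briggs-style rural class-D dispersion coefficients are placeholder values, not the study's) sums standard plume contributions from several point sources on a ground-level grid:

        # Multi-source Gaussian plume on a ground-level grid (illustrative).
        import numpy as np

        def plume(q, u, xs, ys, h, X, Y, z=0.0):
            """Gaussian plume from one point source, wind along +x (SI units)."""
            dx = np.maximum(X - xs, 1.0)                 # downwind distance
            dy = Y - ys
            sig_y = 0.08 * dx * (1 + 0.0001 * dx) ** -0.5
            sig_z = 0.06 * dx * (1 + 0.0015 * dx) ** -0.5
            c = (q / (2 * np.pi * u * sig_y * sig_z)
                 * np.exp(-dy ** 2 / (2 * sig_y ** 2))
                 * (np.exp(-(z - h) ** 2 / (2 * sig_z ** 2))
                    + np.exp(-(z + h) ** 2 / (2 * sig_z ** 2))))  # ground reflection
            return np.where(X > xs, c, 0.0)              # no upwind impact

        x = np.linspace(0, 5000, 200)
        y = np.linspace(-1000, 1000, 100)
        X, Y = np.meshgrid(x, y)
        sources = [(80.0, -500.0, -300.0, 40.0),         # (q [g/s], xs, ys, h [m])
                   (50.0, 500.0, 200.0, 25.0)]
        C = sum(plume(q, 4.0, xs, ys, h, X, Y) for q, xs, ys, h in sources)
        print("peak ground-level concentration:", C.max())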

  19. Multi-modality imaging of tumor phenotype and response to therapy

    NASA Astrophysics Data System (ADS)

    Nyflot, Matthew J.

    2011-12-01

    Imaging and radiation oncology have historically been closely linked. However, the vast majority of techniques used in the clinic involve anatomical imaging. Biological imaging offers the potential for innovation in the areas of cancer diagnosis and staging, radiotherapy target definition, and treatment response assessment. Some relevant imaging techniques are FDG PET (for imaging cellular metabolism), FLT PET (proliferation), CuATSM PET (hypoxia), and contrast-enhanced CT (vasculature and perfusion). Here, a technique for quantitative spatial correlation of tumor phenotype is presented for FDG PET, FLT PET, and CuATSM PET images. Additionally, multimodality imaging of treatment response with FLT PET, CuATSM, and dynamic contrast-enhanced CT is presented, in a trial of patients receiving an antiangiogenic agent (Avastin) combined with cisplatin and radiotherapy. Results are also presented for translational applications in animal models, including quantitative assessment of proliferative response to cetuximab with FLT PET and quantification of vascular volume with a blood-pool contrast agent (Fenestra). These techniques have clear applications to radiobiological research and optimized treatment strategies, and may eventually be used for personalized therapy for patients.

  20. Systems Biology Approaches for Host–Fungal Interactions: An Expanding Multi-Omics Frontier

    PubMed Central

    Culibrk, Luka; Croft, Carys A.

    2016-01-01

    Opportunistic fungal infections are an increasing threat for global health, and for immunocompromised patients in particular. These infections are characterized by interaction between the fungal pathogen and host cells. The exact mechanisms, and the attendant variability in host and fungal pathogen interaction, remain to be fully elucidated. The field of systems biology aims to characterize a biological system and utilize this knowledge to predict the system's response to stimuli such as fungal exposures. A multi-omics approach, combining, for example, data from genomics, proteomics, and metabolomics, would allow a more comprehensive and pan-optic "two systems" biology of both the host and the fungal pathogen. In this review and literature analysis, we present highly specialized and nascent methods for the analysis of multiple -omes of biological systems, in addition to emerging single-molecule visualization techniques that may assist in determining the biological relevance of multi-omics data. We provide an overview of computational methods for the modeling of gene regulatory networks, including some that have been applied to the study of an interacting host and pathogen. In sum, comprehensive characterizations of host-fungal pathogen systems are now possible, and utilization of these cutting-edge multi-omics strategies may yield advances in the understanding of both host biology and fungal pathogens at a systems scale. PMID:26885725

  1. The Fukushima-137Cs deposition case study: properties of the multi-model ensemble.

    PubMed

    Solazzo, E; Galmarini, S

    2015-01-01

    In this paper we analyse the properties of an eighteen-member ensemble generated by the combination of five atmospheric dispersion modelling systems and six meteorological data sets. The models have been applied to the total deposition of (137)Cs following the nuclear accident at the Fukushima power plant in March 2011. The analysis is carried out to determine whether the ensemble is reliable and sufficiently diverse, and whether its accuracy and precision can be improved. Although ensemble practice is becoming more and more popular in many geophysical applications, good-practice guidelines are missing as to how models should be combined for the ensemble to offer an improvement over single-model realisations. We show that the models in the ensemble share large portions of bias and variance, and we use several techniques to further show that subsets of models can explain the same amount of variance as the full ensemble mean, with the advantage of being poorly correlated, allowing computational resources to be saved and noise to be reduced (thus improving accuracy). We further propose and discuss two methods for selecting subsets of skilful and diverse members, and show that, in the context of the present analysis, their mean outperforms the full ensemble mean in terms of both accuracy (error) and precision (variance). Copyright © 2014. Published by Elsevier Ltd.
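
    The subset-selection idea can be sketched as follows (a greedy forward selection of ours on synthetic members, not the paper's exact selection methods): members are added to the subset only while the RMSE of the subset mean against the observations keeps improving:

        # Greedy forward selection of ensemble members on synthetic data.
        import numpy as np

        rng = np.random.default_rng(3)
        obs = rng.normal(0, 1, 200)                      # pseudo-observations
        # 18 members sharing bias and variance, as the paper diagnoses
        members = obs + rng.normal(0.4, 0.8, (18, 200)) + rng.normal(0, 0.3, (18, 1))

        rmse = lambda pred: np.sqrt(np.mean((pred - obs) ** 2))
        selected, pool = [], list(range(18))
        while pool:
            best = min(pool, key=lambda m: rmse(members[selected + [m]].mean(axis=0)))
            if selected and rmse(members[selected + [best]].mean(axis=0)) >= \
                    rmse(members[selected].mean(axis=0)):
                break                                    # adding members no longer helps
            selected.append(best)
            pool.remove(best)

        print("full ensemble RMSE:", rmse(members.mean(axis=0)))
        print("subset", selected, "RMSE:", rmse(members[selected].mean(axis=0)))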

  2. A multi-technique phytoremediation approach to purify metals contaminated soil from e-waste recycling site.

    PubMed

    Luo, Jie; Cai, Limei; Qi, Shihua; Wu, Jian; Sophie Gu, Xiaowen

    2017-12-15

    Multiple soil decontamination techniques were combined to enhance the phytoremediation efficiency of Eucalyptus globulus and alleviate the corresponding environmental risks. The approach, consisting of chelating agent application, electrokinetic remediation, foliar application of plant hormones, and phytoremediation, was designed to remediate multi-metal contaminated soils from a notorious e-waste recycling town. The decontamination ability of E. globulus increased from 1.35, 58.47 and 119.18 mg per plant for Cd, Pb and Cu in planting controls to 7.57, 198.68 and 174.34 mg per plant in individual EDTA treatments, respectively; simultaneously, however, 0.9-11.5 times more metals leached from chelator treatments relative to controls. Low (2 V) and moderate (4 V) voltage electric fields promoted the growth of the species while a high voltage (10 V) had the opposite effect, and metal concentrations in the plants rose with increasing voltage. The volume of leachate decreased from 1224 to 134 mL as the voltage increased from 0 to 10 V, due to electroosmosis and electrolysis. Compared with phytoremediation alone, foliar cytokinin treatments produced 56% more biomass and intercepted 2.5 times more leachate, attributable to the enhanced transpiration rate. The synergistic combination of the individual techniques resulted in the greatest biomass production and metal accumulation by the species under stress conditions relative to the other methods. The time required for the multi-technique approach to decontaminate Cd, Pb and Cu from soil was 2.1-10.4 times less than for individual chelator addition, electric field application or plant hormone utilization. Importantly, nearly no leachate (60 mL in total) was collected from the multi-technique system. This approach is a suitable method for remediating metal-polluted sites considering its decontamination efficiency and negligible associated environmental risk. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Extended behavioural modelling of FET and lattice-mismatched HEMT devices

    NASA Astrophysics Data System (ADS)

    Khawam, Yahya; Albasha, Lutfi

    2017-07-01

    This study presents an improved large-signal model that can be used for high electron mobility transistors (HEMTs) and field effect transistors using measurement-based behavioural modelling techniques. The steps for accurate large- and small-signal modelling of transistors are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltage, and to extend the improved DC model to account for the soft breakdown and kink effect found in some variants of HEMT devices. A hybrid Newton's-Genetic algorithm is used to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large-signal model is identified using multi-bias s-parameter measurements, employing a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) combined with a local minimum search (multivariable Newton's method) for parasitic element extraction. Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.
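
    The hybrid global-plus-local extraction strategy can be sketched as below, with an Angelov-style drain-current expression standing in for the Fager model (the paper's exact equations and algorithm are not reproduced): an evolutionary global search seeds a Newton-type least-squares refinement:

        # Hybrid global + local parameter extraction on synthetic I-V data.
        import numpy as np
        from scipy.optimize import differential_evolution, least_squares

        def ids(p, vgs, vds):
            ipk, p1, vpk, alpha, lam = p
            return (ipk * (1 + np.tanh(p1 * (vgs - vpk)))
                    * np.tanh(alpha * vds) * (1 + lam * vds))

        vgs, vds = np.meshgrid(np.linspace(-2, 0, 9), np.linspace(0, 10, 21))
        true = np.array([0.25, 1.8, -0.9, 1.2, 0.02])
        meas = ids(true, vgs, vds) + np.random.default_rng(0).normal(0, 1e-3, vgs.shape)

        resid = lambda p: (ids(p, vgs, vds) - meas).ravel()
        bounds = [(0.01, 1), (0.1, 5), (-3, 0), (0.1, 5), (0, 0.2)]

        # Evolutionary global search (GA-like), then Newton-type refinement.
        coarse = differential_evolution(lambda p: np.sum(resid(p) ** 2), bounds, seed=1)
        fine = least_squares(resid, coarse.x)
        print("extracted:", np.round(fine.x, 3), " true:", true)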

  4. On-Line Multi-Damage Scanning Spatial-Wavenumber Filter Based Imaging Method for Aircraft Composite Structure.

    PubMed

    Ren, Yuanqiang; Qiu, Lei; Yuan, Shenfang; Bao, Qiao

    2017-05-11

    Structural health monitoring (SHM) of aircraft composite structure is helpful to increase reliability and reduce maintenance costs. Due to the great effectiveness in distinguishing particular guided wave modes and identifying the propagation direction, the spatial-wavenumber filter technique has emerged as an interesting SHM topic. In this paper, a new scanning spatial-wavenumber filter (SSWF) based imaging method for multiple damages is proposed to conduct on-line monitoring of aircraft composite structures. Firstly, an on-line multi-damage SSWF is established, including the fundamental principle of SSWF for multiple damages based on a linear piezoelectric (PZT) sensor array, and a corresponding wavenumber-time imaging mechanism by using the multi-damage scattering signal. Secondly, through combining the on-line multi-damage SSWF and a PZT 2D cross-shaped array, an image-mapping method is proposed to conduct wavenumber synthesis and convert the two wavenumber-time images obtained by the PZT 2D cross-shaped array to an angle-distance image, from which the multiple damages can be directly recognized and located. In the experimental validation, both simulated multi-damage and real multi-damage introduced by repeated impacts are performed on a composite plate structure. The maximum localization error is less than 2 cm, which shows good performance of the multi-damage imaging method. Compared with the existing spatial-wavenumber filter based damage evaluation methods, the proposed method requires no more than the multi-damage scattering signal and can be performed without depending on any wavenumber modeling or measuring. Besides, this method locates multiple damages by imaging instead of the geometric method, which helps to improve the signal-to-noise ratio. Thus, it can be easily applied to on-line multi-damage monitoring of aircraft composite structures.
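
    The underlying idea of mapping array signals into a wavenumber-time image can be sketched as follows (a plain spatial FFT on simulated signals, not the authors' SSWF implementation):

        # Spatial FFT across a simulated linear PZT array: each time sample is
        # transformed over the array axis, yielding a wavenumber-time image
        # from which the scattered wave packet's wavenumber can be read off.
        import numpy as np

        fs, dx, n_el, n_t = 1e6, 0.005, 31, 1024   # sampling, pitch, array, samples
        t = np.arange(n_t) / fs
        x = (np.arange(n_el) - n_el // 2) * dx
        k_true, f0 = 400.0, 50e3                   # rad/m and Hz of one mode

        env = np.exp(-((t - 300e-6) / 60e-6) ** 2) # wave packet crossing the array
        sig = env * np.cos(2 * np.pi * f0 * t - k_true * x[:, None])
        sig += 0.1 * np.random.default_rng(0).normal(size=sig.shape)

        K = np.fft.fftshift(np.fft.fft(sig, axis=0), axes=0)   # wavenumber-time image
        k_axis = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n_el, d=dx))
        k_peak = abs(k_axis[np.abs(K).max(axis=1).argmax()])   # magnitude only
        print(f"estimated wavenumber: {k_peak:.0f} rad/m (true {k_true:.0f})")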

  5. On-Line Multi-Damage Scanning Spatial-Wavenumber Filter Based Imaging Method for Aircraft Composite Structure

    PubMed Central

    Ren, Yuanqiang; Qiu, Lei; Yuan, Shenfang; Bao, Qiao

    2017-01-01

    Structural health monitoring (SHM) of aircraft composite structure is helpful to increase reliability and reduce maintenance costs. Due to the great effectiveness in distinguishing particular guided wave modes and identifying the propagation direction, the spatial-wavenumber filter technique has emerged as an interesting SHM topic. In this paper, a new scanning spatial-wavenumber filter (SSWF) based imaging method for multiple damages is proposed to conduct on-line monitoring of aircraft composite structures. Firstly, an on-line multi-damage SSWF is established, including the fundamental principle of SSWF for multiple damages based on a linear piezoelectric (PZT) sensor array, and a corresponding wavenumber-time imaging mechanism by using the multi-damage scattering signal. Secondly, through combining the on-line multi-damage SSWF and a PZT 2D cross-shaped array, an image-mapping method is proposed to conduct wavenumber synthesis and convert the two wavenumber-time images obtained by the PZT 2D cross-shaped array to an angle-distance image, from which the multiple damages can be directly recognized and located. In the experimental validation, both simulated multi-damage and real multi-damage introduced by repeated impacts are performed on a composite plate structure. The maximum localization error is less than 2 cm, which shows good performance of the multi-damage imaging method. Compared with the existing spatial-wavenumber filter based damage evaluation methods, the proposed method requires no more than the multi-damage scattering signal and can be performed without depending on any wavenumber modeling or measuring. Besides, this method locates multiple damages by imaging instead of the geometric method, which helps to improve the signal-to-noise ratio. Thus, it can be easily applied to on-line multi-damage monitoring of aircraft composite structures. PMID:28772879

  6. Coherent beam combining of collimated fiber array based on target-in-the-loop technique

    NASA Astrophysics Data System (ADS)

    Li, Xinyang; Geng, Chao; Zhang, Xiaojun; Rao, Changhui

    2011-11-01

    Coherent beam combining (CBC) of a fiber array is a promising way to generate high-power, high-quality laser beams. The target-in-the-loop (TIL) technique might be an effective way to achieve compensation of atmospheric propagation without wavefront sensors. In this paper, we present recent research on CBC of a collimated fiber array using the TIL technique at the Key Lab on Adaptive Optics (KLAO), CAS. A novel Adaptive Fiber Optics Collimator (AFOC) composed of a phase-locking module and a tip/tilt control module was developed, and a CBC experimental setup with a three-element fiber array was established. Feedback control is realized using the stochastic parallel gradient descent (SPGD) algorithm. CBC based on TIL with simultaneous piston and tip/tilt correction is demonstrated, and beam pointing, to locate or sweep the position of the combined spot on the target, was also achieved through the TIL technique. The goal of our work is to achieve multi-element CBC for long-distance transmission in the atmosphere.
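
    A toy sketch of SPGD phase locking for a three-element array follows (the gains, perturbation amplitude, and on-axis intensity metric are invented for illustration):

        # Toy SPGD phase locking: Bernoulli piston perturbations, two metric
        # reads per iteration, gradient-estimate update.
        import numpy as np

        rng = np.random.default_rng(0)
        true_piston = rng.uniform(-np.pi, np.pi, 3)    # unknown path-length errors

        def metric(u):
            """Normalized on-axis combined intensity for piston controls u."""
            return np.abs(np.exp(1j * (true_piston + u)).sum()) ** 2 / 9.0

        u, gain, amp = np.zeros(3), 1.5, 0.1
        for _ in range(300):
            delta = amp * rng.choice([-1.0, 1.0], 3)   # random perturbation
            dJ = metric(u + delta) - metric(u - delta) # two-sided metric read
            u += gain * dJ * delta                     # SPGD update
        print("combining efficiency after locking:", round(metric(u), 4))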

  7. Finite Volume Numerical Methods for Aeroheating Rate Calculations from Infrared Thermographic Data

    NASA Technical Reports Server (NTRS)

    Daryabeigi, Kamran; Berry, Scott A.; Horvath, Thomas J.; Nowak, Robert J.

    2006-01-01

    The use of multi-dimensional finite volume heat conduction techniques for calculating aeroheating rates from measured global surface temperatures on hypersonic wind tunnel models was investigated. Both direct and inverse finite volume techniques were investigated and compared with the standard one-dimensional semi-infinite technique. Global transient surface temperatures were measured using an infrared thermographic technique on a 0.333-scale model of the Hyper-X forebody in the NASA Langley Research Center 20-Inch Mach 6 Air tunnel. In these tests, the effectiveness of vortices generated via gas injection for initiating hypersonic transition on the Hyper-X forebody was investigated. An array of streamwise-orientated heating striations was generated and visualized downstream of the gas injection sites. In regions without significant spatial temperature gradients, one-dimensional techniques provided accurate aeroheating rates. In regions with sharp temperature gradients caused by the striation patterns, multi-dimensional heat transfer techniques were necessary to obtain more accurate heating rates. The use of the one-dimensional technique resulted in differences of 20% in the calculated heating rates compared to the 2-D analysis because it did not account for lateral heat conduction in the model.
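
    For reference, the standard one-dimensional semi-infinite technique against which the finite volume methods are compared is often evaluated with the Cook-Felderman formula; a sketch with placeholder material properties (not the wind-tunnel model's) is:

        # Cook-Felderman evaluation of the 1-D semi-infinite solution: surface
        # heat flux from a surface-temperature history.
        import numpy as np

        def cook_felderman(t, T, rho_c_k):
            """q(t_n) from the temperature history T(t) of a semi-infinite solid."""
            q = np.zeros_like(T)
            coef = 2.0 * np.sqrt(rho_c_k / np.pi)
            for n in range(1, len(t)):
                i = np.arange(1, n + 1)
                q[n] = coef * np.sum((T[i] - T[i - 1]) /
                                     (np.sqrt(t[n] - t[i]) + np.sqrt(t[n] - t[i - 1])))
            return q

        rho_c_k = 2.9e6               # rho*c*k product, Macor-like ceramic (SI)
        q_true = 5.0e4                # W/m^2, constant heating for the check
        t = np.linspace(0, 5, 251)
        T = 300.0 + 2 * q_true * np.sqrt(t / np.pi) / np.sqrt(rho_c_k)
        q = cook_felderman(t, T, rho_c_k)
        print("recovered q at t = 5 s:", round(float(q[-1]), 1), "W/m^2")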

  8. On the Optimization of Aerospace Plane Ascent Trajectory

    NASA Astrophysics Data System (ADS)

    Al-Garni, Ahmed; Kassem, Ayman Hamdy

    A hybrid heuristic optimization technique based on genetic algorithms and particle swarm optimization has been developed and tested on trajectory optimization problems with multiple constraints and a multi-objective cost function. The technique is used to calculate control settings for two types of ascent trajectory (constant dynamic pressure and minimum-fuel-minimum-heat) for a two-dimensional model of an aerospace plane. A thorough statistical analysis of the hybrid technique was performed to compare it with both basic genetic algorithms and particle swarm optimization with respect to convergence and execution time. Genetic algorithm optimization showed better execution-time performance, while particle swarm optimization showed better convergence performance. The hybrid technique, benefiting from both, showed superior and robust performance, balancing convergence behavior against execution time.
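
    A conceptual sketch of the hybrid idea (not the paper's implementation) on a standard test function: a short genetic-algorithm search provides the initial swarm for a particle swarm refinement:

        # GA stage followed by a PSO stage seeded with the GA population, on a
        # Rastrigin test function standing in for the trajectory cost.
        import numpy as np

        rng = np.random.default_rng(7)
        cost = lambda x: np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x) + 10, axis=-1)

        pop = rng.uniform(-5, 5, (40, 4))              # GA: tournament + blend + mutate
        for _ in range(30):
            f = cost(pop)
            idx = np.array([min(rng.choice(40, 2), key=lambda i: f[i]) for _ in range(40)])
            childs = 0.5 * (pop[idx] + pop[idx][rng.permutation(40)])
            childs += rng.normal(0, 0.3, childs.shape)
            pop = np.where(cost(childs)[:, None] < cost(pop)[:, None], childs, pop)

        x, v = pop.copy(), np.zeros_like(pop)          # PSO seeded by GA survivors
        pbest, pf = x.copy(), cost(x)
        g = pbest[pf.argmin()].copy()
        for _ in range(100):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
            x += v
            f = cost(x)
            better = f < pf
            pbest[better], pf[better] = x[better], f[better]
            g = pbest[pf.argmin()].copy()
        print("best cost found:", round(float(pf.min()), 4))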

  9. The use of multi-temporal Landsat Normalized Difference Vegetation Index (NDVI) data for mapping fuels in Yosemite National Park, USA

    USGS Publications Warehouse

    Van Wagtendonk, Jan W.; Root, Ralph R.

    2003-01-01

    The objective of this study was to test the applicability of using Normalized Difference Vegetation Index (NDVI) values derived from a temporal sequence of six Landsat Thematic Mapper (TM) scenes to map fuel models for Yosemite National Park, USA. An unsupervised classification algorithm was used to define 30 unique spectral-temporal classes of NDVI values. A combination of graphical, statistical and visual techniques was used to characterize the 30 classes and identify those that responded similarly and could be combined into fuel models. The final classification comprised six fuel model types: short annual and perennial grasses; tall perennial grasses; medium brush and evergreen hardwoods; short-needled conifers with no heavy fuels; long-needled conifers and deciduous hardwoods; and short-needled conifers with a component of heavy fuels. The NDVI, when analysed over a season of phenologically distinct periods along with ancillary data, can yield the information necessary to distinguish fuel model types. Fuels information derived from remote sensors has proven useful for initial classification of fuels and has been applied to fire management situations on the ground.
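
    The unsupervised step can be sketched as below (on synthetic NDVI trajectories, not the Yosemite TM scenes): per-pixel NDVI time series from six dates are clustered into 30 spectral-temporal classes, whose seasonal profiles an analyst would then merge into fuel models:

        # Cluster per-pixel NDVI trajectories (six dates) into 30 classes.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(42)
        ndvi = np.clip(rng.normal(0.4, 0.2, (10_000, 6)), -1.0, 1.0)

        km = KMeans(n_clusters=30, n_init=10, random_state=0).fit(ndvi)
        for k in range(3):                       # first few classes, for brevity
            print(f"class {k:2d} seasonal NDVI profile:",
                  np.round(km.cluster_centers_[k], 2))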

  10. Multi-physics CFD simulations in engineering

    NASA Astrophysics Data System (ADS)

    Yamamoto, Makoto

    2013-08-01

    Nowadays, Computational Fluid Dynamics (CFD) software is adopted as a design and analysis tool in a great number of engineering fields, and single-physics CFD can be considered sufficiently mature from a practical point of view; the main target of existing CFD software is single-phase flows such as water and air. However, many multi-physics problems exist in engineering. Most of them consist of flow coupled with other physics, and the interactions between the different physics are very important. Obviously, multi-physics phenomena are critical in developing machines and processes. A multi-physics phenomenon is typically complex and difficult to predict by simply adding other physics to a flow simulation. Multi-physics CFD techniques are therefore still under research and development, partly because the processing speed of current computers is not fast enough for multi-physics simulations, and partly because physical models other than flow physics have not been suitably established. In the near future, we will therefore have to develop various physical models and efficient CFD techniques in order to carry out successful multi-physics simulations in engineering. In the present paper, I describe the present state of multi-physics CFD simulations, and then show some numerical results, such as ice accretion and the electro-chemical machining of a three-dimensional compressor blade, which were obtained in my laboratory. Multi-physics CFD simulation is likely to be a key technology in the near future.

  11. Quantitative, depth-resolved determination of particle motion using multi-exposure, spatial frequency domain laser speckle imaging.

    PubMed

    Rice, Tyler B; Kwan, Elliott; Hayakawa, Carole K; Durkin, Anthony J; Choi, Bernard; Tromberg, Bruce J

    2013-01-01

    Laser Speckle Imaging (LSI) is a simple, noninvasive technique for rapid imaging of particle motion in scattering media such as biological tissue. LSI is generally used to derive a qualitative index of relative blood flow because of the unknown impact of several variables that affect speckle contrast. These variables may include the optical absorption and scattering coefficients, multi-layer dynamics including static, non-ergodic regions, and systematic effects such as laser coherence length. In order to account for these effects and move toward quantitative, depth-resolved LSI, we have developed a method that combines Monte Carlo modeling, multi-exposure speckle imaging (MESI), spatial frequency domain imaging (SFDI), and careful instrument calibration. Monte Carlo models were used to generate total and layer-specific fractional momentum transfer distributions, and this information was used to predict speckle contrast as a function of exposure time, spatial frequency, layer thickness, and layer dynamics. For verification with experimental data, controlled phantom experiments with characteristic tissue optical properties were performed using a structured-light speckle imaging system. Three main geometries were explored: 1) a diffusive dynamic layer beneath a static layer, 2) a static layer beneath a diffusive dynamic layer, and 3) directed flow (a tube) submerged in a dynamic scattering layer. Data fits were performed using the Monte Carlo model, which accurately reconstructed the type of particle flow (diffusive or directed) in each layer, the layer thickness, and absolute flow speeds to within 15% or better.
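
    A hedged sketch of the multi-exposure idea, using a simplified single-dynamic-layer contrast model that ignores the static and non-ergodic terms handled by the paper's Monte Carlo approach: the decorrelation time tau_c is fitted from contrast measured at several exposure times:

        # Fit tau_c from multi-exposure speckle contrast using the simplified
        # model K^2(T) = beta * (exp(-2x) - 1 + 2x) / (2x^2), x = T / tau_c.
        import numpy as np
        from scipy.optimize import curve_fit

        def contrast_sq(T, tau_c, beta):
            x = T / tau_c
            return beta * (np.exp(-2 * x) - 1 + 2 * x) / (2 * x ** 2)

        exposures = np.logspace(-5, -2, 12)          # 10 us .. 10 ms
        true_tau, true_beta = 2e-4, 0.8
        k2 = contrast_sq(exposures, true_tau, true_beta)
        k2 += np.random.default_rng(1).normal(0, 0.005, k2.shape)

        popt, _ = curve_fit(contrast_sq, exposures, k2, p0=(1e-3, 0.5))
        print(f"fitted tau_c = {popt[0]:.2e} s, beta = {popt[1]:.2f}")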

  12. Multi-scale Modeling of Plasticity in Tantalum.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lim, Hojun; Battaile, Corbett Chandler.; Carroll, Jay

    In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature- and strain-rate-dependent yield stresses of single and polycrystalline tantalum, which are compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature- and strain-rate-dependent flow behavior are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model against experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron back-scatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high-resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct and quantitative comparisons between experimental measurements and simulation show that the proposed model accurately captures plasticity in the deformation of polycrystalline tantalum.

  13. Flaw investigation in a multi-layered, multi-material composite: Using air-coupled ultrasonic resonance imaging

    NASA Astrophysics Data System (ADS)

    Livings, R. A.; Dayal, V.; Barnard, D. J.; Hsu, D. K.

    2012-05-01

    Ceramic tiles are the main ingredient of a multi-material, multi-layered composite being considered for the modernization of tank armors. The high stiffness, low attenuation, and precise dimensions of these uniform tiles make them remarkable resonators when driven to vibrate. Defects in the tile, during manufacture or after usage, are expected to change the resonance frequencies and resonance images of the tile. The comparison of the resonance frequencies and resonance images of a pristine tile/lay-up to a defective tile/lay-up will thus be a quantitative damage metric. By examining the vibrational behavior of these tiles and the composite lay-up with Finite Element Modeling and analytical plate vibration equations, the development of a new Nondestructive Evaluation technique is possible. This study examines the development of the Air-Coupled Ultrasonic Resonance Imaging technique as applied to a hexagonal ceramic tile and a multi-material, multi-layered composite.

  14. Co-rotational thermo-mechanically coupled multi-field framework and finite element for the large displacement analysis of multi-layered shape memory alloy beam-like structures

    NASA Astrophysics Data System (ADS)

    Solomou, Alexandros G.; Machairas, Theodoros T.; Karakalas, Anargyros A.; Saravanos, Dimitris A.

    2017-06-01

    A thermo-mechanically coupled finite element (FE) for the simulation of multi-layered shape memory alloy (SMA) beams admitting large displacements and rotations (LDRs) is developed to capture the geometrically nonlinear effects present in many SMA applications. A generalized multi-field beam theory, implementing an SMA constitutive model based on small-strain theory, thermo-mechanically coupled governing equations, and multi-field kinematic hypotheses combining first-order shear deformation assumptions with a sixth-order polynomial temperature field through the thickness of the beam section, is extended to admit LDRs. The co-rotational formulation is adopted, in which the motion of the beam is decomposed into rigid body motion and relatively small deformation in the local frame. A new generalized multi-layered SMA FE is formulated. The nonlinear transient spatially discretized equations of motion of the SMA structure are synthesized and solved using the Newton-Raphson method combined with an implicit time integration scheme. Correlations of models incorporating the present beam FE with respective results of models incorporating plane-stress SMA FEs demonstrate excellent agreement in the predicted LDR response, temperature, and phase transformation fields, as well as significant gains in computational time.

  15. Research into a distributed fault diagnosis system and its application

    NASA Astrophysics Data System (ADS)

    Qian, Suxiang; Jiao, Weidong; Lou, Yongjian; Shen, Xiaomei

    2005-12-01

    CORBA (Common Object Request Broker Architecture) is a solution for distributed computing over heterogeneous systems that establishes a communication protocol between distributed objects, placing great emphasis on interoperation. However, only after suitable application approaches and practical technology for monitoring and diagnosis have been developed can customers share monitoring and diagnosis information, so that remote multi-expert cooperative diagnosis online can be realized. This paper aims at building an open fault monitoring and diagnosis platform combining CORBA, Web, and agent technologies. Heterogeneous diagnosis objects interoperate in independent threads through the CORBA soft-bus, enabling resource sharing and online multi-expert cooperative diagnosis and overcoming shortcomings such as limited diagnosis knowledge, reliance on a single diagnosis technique, and incomplete analysis functions, so that more complex and deeper diagnosis can be carried out. Taking a high-speed centrifugal air compressor set as an example, we demonstrate distributed diagnosis based on CORBA. This shows that, by integrating CORBA, Web techniques, and an agent framework, more efficient approaches can be found to problems such as real-time monitoring and diagnosis over the network and the decomposition of complicated tasks. In this system, a multi-diagnosis intelligent agent helps improve diagnosis efficiency. Moreover, the system offers an open environment in which diagnosis objects are easy to upgrade and new diagnosis server objects can readily join.

  16. Feature Fusion of ICP-AES, UV-Vis and FT-MIR for Origin Traceability of Boletus edulis Mushrooms in Combination with Chemometrics.

    PubMed

    Qi, Luming; Liu, Honggao; Li, Jieqing; Li, Tao; Wang, Yuanzhong

    2018-01-15

    Origin traceability is an important step in controlling the nutritional and pharmacological quality of food products. The Boletus edulis mushroom is a well-known food resource worldwide, and its nutritional and medicinal properties vary drastically depending on geographical origin. In this study, three sensor systems (inductively coupled plasma atomic emission spectrophotometry (ICP-AES), ultraviolet-visible (UV-Vis) spectroscopy and Fourier transform mid-infrared spectroscopy (FT-MIR)) were applied to the origin traceability of 192 mushroom samples (caps and stipes) in combination with chemometrics. The difference between cap and stipe was clearly illustrated by each single-sensor technique. Feature variables from the three instruments were then used for origin traceability. Two supervised classification methods, partial least squares discriminant analysis (PLS-DA) and grid-search support vector machine (GS-SVM), were applied to develop mathematical models. Two steps (internal cross-validation and external prediction for unknown samples) were used to evaluate the performance of each classification model. The results are satisfactory, with high accuracies ranging from 90.625% to 100%, and the models show excellent generalization ability with the optimal parameters. Based on the combination of the three sensor systems, our study provides a multi-sensory and comprehensive origin traceability of B. edulis mushrooms.
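
    A minimal sketch of the fusion and GS-SVM steps on synthetic stand-in data (block sizes and class structure are invented): the three feature blocks are concatenated, standardized, and classified with a grid-searched RBF SVM:

        # Feature-level fusion of three synthetic sensor blocks, then GS-SVM.
        import numpy as np
        from sklearn.model_selection import GridSearchCV, cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n = 192                                       # caps + stipes
        y = rng.integers(0, 4, n)                     # four hypothetical origins
        icp = rng.normal(loc=0.8 * y[:, None], size=(n, 15))
        uv = rng.normal(size=(n, 40))
        mir = rng.normal(loc=0.3 * y[:, None], size=(n, 120))
        X = np.hstack([icp, uv, mir])                 # fused feature matrix

        gs = GridSearchCV(make_pipeline(StandardScaler(), SVC()),
                          {"svc__C": [0.1, 1, 10, 100],
                           "svc__gamma": ["scale", 0.01, 0.001]}, cv=5)
        print("nested-CV accuracy:", cross_val_score(gs, X, y, cv=5).mean())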

  17. Feature Fusion of ICP-AES, UV-Vis and FT-MIR for Origin Traceability of Boletus edulis Mushrooms in Combination with Chemometrics

    PubMed Central

    Qi, Luming; Liu, Honggao; Li, Jieqing; Li, Tao

    2018-01-01

    Origin traceability is an important step in controlling the nutritional and pharmacological quality of food products. The Boletus edulis mushroom is a well-known food resource worldwide, and its nutritional and medicinal properties vary drastically depending on geographical origin. In this study, three sensor systems (inductively coupled plasma atomic emission spectrophotometry (ICP-AES), ultraviolet-visible (UV-Vis) spectroscopy and Fourier transform mid-infrared spectroscopy (FT-MIR)) were applied to the origin traceability of 184 mushroom samples (caps and stipes) in combination with chemometrics. The difference between cap and stipe was clearly illustrated by each single-sensor technique. Feature variables from the three instruments were then used for origin traceability. Two supervised classification methods, partial least squares discriminant analysis (PLS-DA) and grid-search support vector machine (GS-SVM), were applied to develop mathematical models. Two steps (internal cross-validation and external prediction for unknown samples) were used to evaluate the performance of each classification model. The results are satisfactory, with high accuracies ranging from 90.625% to 100%, and the models show excellent generalization ability with the optimal parameters. Based on the combination of the three sensor systems, our study provides a multi-sensory and comprehensive origin traceability of B. edulis mushrooms. PMID:29342969

  18. Coupled numerical approach combining finite volume and lattice Boltzmann methods for multi-scale multi-physicochemical processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Li; He, Ya-Ling; Kang, Qinjun

    2013-12-15

    A coupled (hybrid) simulation strategy spatially combining the finite volume method (FVM) and the lattice Boltzmann method (LBM), called CFVLBM, is developed to simulate coupled multi-scale multi-physicochemical processes. In the CFVLBM, the computational domain of a multi-scale problem is divided into two sub-domains, i.e., an open, free-fluid region and a region filled with porous materials. The FVM and LBM are used for these two regions, respectively, with information exchanged at the interface between the two sub-domains. A general reconstruction operator (RO) is proposed to derive the distribution functions in the LBM from the corresponding macroscopic scalar, whose governing equation obeys the convection-diffusion equation. The CFVLBM and the RO are validated on several typical physicochemical problems and then applied to simulate complex multi-scale coupled fluid flow, heat transfer, mass transport, and chemical reaction in a wall-coated micro reactor. The maximum ratio of the grid size between the FVM and LBM regions is explored and discussed. Highlights: a coupled simulation strategy for simulating multi-scale phenomena is developed; the finite volume method and lattice Boltzmann method are coupled; a reconstruction operator is derived to transfer information at the sub-domain interface; coupled multi-scale multi-physicochemical processes in a micro reactor are simulated; techniques to save computational resources and improve efficiency are discussed.

  19. A New Multi-Criteria Evaluation Model Based on the Combination of Non-Additive Fuzzy AHP, Choquet Integral and Sugeno λ-Measure

    NASA Astrophysics Data System (ADS)

    Nadi, S.; Samiei, M.; Salari, H. R.; Karami, N.

    2017-09-01

    This paper proposes a new model for multi-criteria evaluation under uncertainty. The model addresses the interaction between criteria, one of the most challenging issues, especially in the presence of uncertainty. In this case, the usual pairwise comparisons and weighted sums cannot be used to calculate the importance of criteria and to aggregate them. Our model is based on the combination of non-additive fuzzy linguistic preference relation AHP (FLPRAHP), the Choquet integral, and the Sugeno λ-measure. The proposed model captures fuzzy preferences of users and fuzzy values of criteria, and uses the Sugeno λ-measure to determine the importance of criteria and their interaction. Then, integrating the Choquet integral and FLPRAHP, all the interactions between criteria are taken into account with the least number of comparisons, and the final score for each alternative is determined. In this way we model a comprehensive set of interactions between criteria, leading to more reliable results. An illustrative example demonstrates the effectiveness and capability of the proposed model in evaluating different alternatives in a multi-criteria decision problem.
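
    The aggregation core can be made concrete with a worked sketch (criterion densities and scores are invented): solve for λ from the fuzzy densities, build the λ-measure, and aggregate one alternative's scores with the Choquet integral:

        # Sugeno lambda-measure from fuzzy densities, then Choquet aggregation.
        import numpy as np
        from scipy.optimize import brentq

        g = np.array([0.3, 0.25, 0.2, 0.15])     # fuzzy densities (sum < 1 => lambda > 0)
        f_lam = lambda lam: np.prod(1 + lam * g) - (1 + lam)
        lam = brentq(f_lam, 1e-9, 50.0)          # root of prod(1 + lam*g) = 1 + lam

        def measure(subset):
            """Sugeno lambda-measure of a set of criterion indices."""
            if len(subset) == 0:
                return 0.0
            return (np.prod(1 + lam * g[list(subset)]) - 1) / lam

        x = np.array([0.7, 0.4, 0.9, 0.6])       # one alternative's criterion scores
        order = np.argsort(-x)                   # criteria sorted by descending score
        score, prev = 0.0, 0.0
        for i in range(len(x)):
            gi = measure(order[: i + 1])
            score += x[order[i]] * (gi - prev)   # Choquet increment
            prev = gi
        print("lambda =", round(lam, 4), " Choquet score =", round(score, 4))

    When the densities sum to more than one, λ instead lies in (-1, 0) and the root bracket changes accordingly.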

  20. PHOTOMETRIC SUPERNOVA CLASSIFICATION WITH MACHINE LEARNING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lochner, Michelle; Peiris, Hiranya V.; Lahav, Ofer

    Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves, and classification using a machine learning algorithm. Our feature extraction methods range from model-dependent techniques, namely SALT2 fits, through more independent techniques that fit parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks, and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDT algorithm, each achieve an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature set with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
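
    The second stage of such a pipeline can be sketched as follows (random stand-in features replace the SALT2/parametric/wavelet feature sets): boosted decision trees are trained and scored with the AUC metric:

        # Boosted decision trees scored with AUC on stand-in features.
        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 2000
        y = rng.integers(0, 2, n)                        # 1 = type Ia, 0 = non-Ia
        X = rng.normal(size=(n, 20)) + 0.6 * y[:, None]  # separable stand-in features

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        bdt = GradientBoostingClassifier(n_estimators=200).fit(X_tr, y_tr)
        auc = roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1])
        print("AUC:", round(auc, 3))                     # 1.0 = perfect classification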

  1. The Dynamic Atmospheres of Carbon Rich Giants: Constraining Models Via Interferometry

    NASA Astrophysics Data System (ADS)

    Rau, Gioia; Hron, Josef; Paladini, Claudia; Aringer, Bernard; Eriksson, Kjell; Marigo, Paola

    2016-07-01

    Dynamic models for the atmospheres of C-rich Asymptotic Giant Branch stars are quite advanced and have been broadly successful in reproducing spectroscopic and photometric observations. Interferometry provides independent information and is thus an important technique for studying the atmospheric stratification and further constraining the dynamic models. We observed a sample of six C-rich AGB stars with the mid-infrared interferometer VLTI/MIDI. These observations, combined with photometric and spectroscopic data from the literature, are compared with synthetic observables derived from dynamic model atmospheres (DMA, Eriksson et al. 2014). The SEDs can be reasonably well modelled, and the interferometry supports the extended and multi-component structure of the atmospheres, but some differences remain. We discuss the possible reasons for these differences and compare the stellar parameters derived from this comparison with stellar evolution models. Finally, we point out the high potential of MATISSE, the second-generation VLTI instrument allowing interferometric imaging in the L, M, and N bands, for further progress in this field.

  2. Single Channel EEG Artifact Identification Using Two-Dimensional Multi-Resolution Analysis.

    PubMed

    Taherisadr, Mojtaba; Dehzangi, Omid; Parsaei, Hossein

    2017-12-13

    As a diagnostic monitoring approach, electroencephalogram (EEG) signals can be decoded by signal processing methodologies for various health monitoring purposes. However, EEG recordings are contaminated by other interferences, particularly facial and ocular artifacts generated by the user. This is specifically an issue during continuous EEG recording sessions, and separating such artifacts from useful EEG components is therefore a key step in using EEG signals for either physiological monitoring and diagnosis or brain-computer interfaces. In this study, we aim to design a new generic framework to process and characterize an EEG recording as a multi-component and non-stationary signal, with the aim of localizing and identifying its components (e.g., artifacts). In the proposed method, we bring together three complementary algorithms to enhance the efficiency of the system: time-frequency (TF) analysis and representation, two-dimensional multi-resolution analysis (2D MRA), and feature extraction and classification. A combination of spectro-temporal and geometric features is then extracted by combining key instantaneous TF space descriptors, which enables the system to characterize the non-stationarities in the EEG dynamics. We fit a curvelet transform (as an MRA method) to the 2D TF representation of EEG segments to decompose the given space into various levels of resolution. Such a decomposition efficiently improves the analysis of TF spaces with different characteristics (e.g., resolution). Our experimental results demonstrate that the combination of expansion to TF space, analysis using MRA, and extraction of a set of suitable features with a proper predictive model is effective in enhancing EEG artifact identification performance. We also compare the performance of the designed system with another common EEG signal processing technique, namely the 1D wavelet transform. Our experimental results reveal that the proposed method outperforms the 1D wavelet approach.
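    A heavily simplified sketch of the pipeline's flavor follows: a 2D time-frequency representation is reduced to a few spectro-temporal features that feed a classifier. A plain spectrogram stands in for the curvelet-based 2D MRA (which has no standard SciPy implementation), and the EEG segments are synthetic.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

FS = 256  # assumed sampling rate (Hz)

def tf_features(segment):
    """A few spectro-temporal features from a 2D time-frequency map."""
    f, t, S = spectrogram(segment, fs=FS, nperseg=64, noverlap=32)
    S = np.log(S + 1e-12)
    return np.array([S.mean(), S.std(),
                     f[S.mean(axis=1).argmax()],          # dominant frequency
                     np.abs(np.diff(S, axis=1)).mean()])  # non-stationarity proxy

rng = np.random.default_rng(1)
clean = rng.normal(size=(100, FS))                        # 1-s EEG-like segments
drift = 5 * np.sin(2 * np.pi * 2 * np.arange(FS) / FS)    # ocular-like 2 Hz artifact
X = np.array([tf_features(s) for s in np.vstack([clean, clean + drift])])
y = np.array([0] * 100 + [1] * 100)                       # 0 = clean, 1 = artifact

clf = RandomForestClassifier(random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```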

  3. Cloud and aerosol studies using combined CPL and MAS data

    NASA Astrophysics Data System (ADS)

    Vaughan, Mark A.; Rodier, Sharon; Hu, Yongxiang; McGill, Matthew J.; Holz, Robert E.

    2004-11-01

    Current uncertainties in the role of aerosols and clouds in the Earth's climate system limit our ability to model the climate system and predict climate change. These limitations are due primarily to the difficulty of adequately measuring aerosols and clouds on a global scale. The A-train satellites (Aqua, CALIPSO, CloudSat, PARASOL, and Aura) will provide an unprecedented opportunity to address these uncertainties. The various active and passive sensors of the A-train will use a variety of measurement techniques to provide comprehensive observations of the multi-dimensional properties of clouds and aerosols. However, fully achieving the potential of this ensemble requires a robust data analysis framework to optimally and efficiently map these individual measurements into a comprehensive set of cloud and aerosol physical properties. In this work we introduce the Multi-Instrument Data Analysis and Synthesis (MIDAS) project, whose goal is to develop a suite of physically sound and computationally efficient algorithms that combine active and passive remote sensing data to produce improved assessments of aerosol and cloud radiative and microphysical properties. These algorithms include (a) an intelligent feature detection algorithm that combines inputs from both active and passive sensors, and (b) the identification of recognizable multi-instrument signatures related to aerosol and cloud type, derived from clusters of image pixels and the associated vertical profile information. Classification of these signatures will lead to the automated identification of aerosol and cloud types. Testing of these new algorithms is done using currently existing and readily available active and passive measurements from the Cloud Physics Lidar and the MODIS Airborne Simulator, which simulate, respectively, the CALIPSO and MODIS A-train instruments.

  4. Onboard planning for geological investigations using a rover team

    NASA Technical Reports Server (NTRS)

    Estlin, Tara; Gaines, Daniel; Fisher, Forest; Castano, Rebecca

    2004-01-01

    This paper describes an integrated system for coordinating the behavior of multiple rovers with the overall goal of collecting planetary surface data. The Multi-Rover Integrated Science Understanding System (MISUS) combines techniques from planning and scheduling with machine learning to perform autonomous scientific exploration with cooperating rovers.

  5. Combining multi-atlas segmentation with brain surface estimation

    NASA Astrophysics Data System (ADS)

    Huo, Yuankai; Carass, Aaron; Resnick, Susan M.; Pham, Dzung L.; Prince, Jerry L.; Landman, Bennett A.

    2016-03-01

    Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitations in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.

  6. Combining Multi-atlas Segmentation with Brain Surface Estimation.

    PubMed

    Huo, Yuankai; Carass, Aaron; Resnick, Susan M; Pham, Dzung L; Prince, Jerry L; Landman, Bennett A

    2016-02-27

    Whole brain segmentation (with comprehensive cortical and subcortical labels) and cortical surface reconstruction are two essential techniques for investigating the human brain. The two tasks are typically conducted independently, however, which leads to spatial inconsistencies and hinders further integrated cortical analyses. To obtain self-consistent whole brain segmentations and surfaces, FreeSurfer segregates the subcortical and cortical segmentations before and after the cortical surface reconstruction. However, this "segmentation to surface to parcellation" strategy has shown limitations in various situations. In this work, we propose a novel "multi-atlas segmentation to surface" method called Multi-atlas CRUISE (MaCRUISE), which achieves self-consistent whole brain segmentations and cortical surfaces by combining multi-atlas segmentation with the cortical reconstruction method CRUISE. To our knowledge, this is the first work that achieves the reliability of state-of-the-art multi-atlas segmentation and labeling methods together with accurate and consistent cortical surface reconstruction. Compared with previous methods, MaCRUISE has three features: (1) MaCRUISE obtains 132 cortical/subcortical labels simultaneously from a single multi-atlas segmentation before reconstructing volume consistent surfaces; (2) Fuzzy tissue memberships are combined with multi-atlas segmentations to address partial volume effects; (3) MaCRUISE reconstructs topologically consistent cortical surfaces by using the sulci locations from multi-atlas segmentation. Two data sets, one consisting of five subjects with expertly traced landmarks and the other consisting of 100 volumes from elderly subjects are used for validation. Compared with CRUISE, MaCRUISE achieves self-consistent whole brain segmentation and cortical reconstruction without compromising on surface accuracy. MaCRUISE is comparably accurate to FreeSurfer while achieving greater robustness across an elderly population.

  7. Multi-phase SPH model for simulation of erosion and scouring by means of the shields and Drucker-Prager criteria.

    NASA Astrophysics Data System (ADS)

    Zubeldia, Elizabeth H.; Fourtakas, Georgios; Rogers, Benedict D.; Farias, Márcio M.

    2018-07-01

    A two-phase numerical model using Smoothed Particle Hydrodynamics (SPH) is developed to model the scouring of two-phase liquid-sediment flows with large deformation. The rheology of sediment scouring due to flows with slow kinematics and high shear forces presents a challenge in terms of spurious numerical fluctuations. This paper bridges the gap between non-Newtonian and Newtonian flows by proposing a model that combines the yielding, shear and suspension layer mechanics needed to predict the local erosion phenomena accurately. A critical bed-mobility condition based on the Shields criterion is imposed on the particles located at the sediment surface. Thus, the onset of the erosion process is independent of the pressure field, which eliminates the numerical problem of pressure-dependent erosion at the interface. This is combined with the Drucker-Prager yield criterion to predict the onset of yielding of the sediment surface and a concentration suspension model. The multi-phase model has been implemented in the open-source DualSPHysics code accelerated with a graphics processing unit (GPU). The multi-phase model has been compared with 2-D reference numerical models and new experimental data for scour, with convergent results. Numerical results for a dry-bed dam break over an erodible bed show improved agreement with experimental scour and water surface profiles compared to well-known SPH multi-phase models.
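    The Shields bed-mobility check described above reduces to comparing a dimensionless shear stress against a critical value; a minimal sketch, with illustrative grain and fluid parameters rather than those of the paper, is:

```python
RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81   # water/sediment density (kg/m^3), gravity
D50 = 0.5e-3                             # median grain diameter (m), illustrative
THETA_CR = 0.047                         # assumed critical Shields number

def erodes(tau_bed):
    """True if the bed shear stress (Pa) mobilises a surface particle."""
    theta = tau_bed / ((RHO_S - RHO_W) * G * D50)
    return theta > THETA_CR

print(erodes(0.2), erodes(1.5))          # below vs above the mobility threshold
```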

  8. Ultrasonic multi-skip tomography for pipe inspection

    NASA Astrophysics Data System (ADS)

    Volker, Arno; Vos, Rik; Hunter, Alan; Lorenz, Maarten

    2012-05-01

    The inspection of wall loss corrosion is difficult at pipe support locations due to limited accessibility. However, the recently developed ultrasonic Multi-Skip screening technique is suitable for this problem. The method employs ultrasonic transducers in a pitch-catch geometry positioned on opposite sides of the pipe support. Shear waves are transmitted in the axial direction within the pipe wall, reflecting multiple times between the inner and outer surfaces before reaching the receivers. Along this path, the signals accumulate information on the integral wall thickness (e.g., via variations in travel time). The method is very sensitive in detecting the presence of wall loss, but it is difficult to quantify both the extent and depth of the loss. If the extent is unknown, then only a conservative estimate of the depth can be made due to the cumulative nature of the travel time variations. Multi-Skip tomography is an extension of Multi-Skip screening and has shown promise as a complementary follow-up inspection technique. In recent work, we have developed the technique and demonstrated its use for reconstructing high-resolution estimates of pipe wall thickness profiles. The method operates via a model-based full wave field inversion; this consists of a forward model for predicting the measured wave field and an iterative process that compares the predicted and measured wave fields and minimizes the differences with respect to the model parameters (i.e., the wall thickness profile). This paper presents our recent developments in Multi-Skip tomographic inversion, focusing on the initial localization of corrosion regions for efficient parameterization of the surface profile model and utilization of the signal phase information for improving resolution.

  9. Hetero-cellular prototyping by synchronized multi-material bioprinting for rotary cell culture system.

    PubMed

    Snyder, Jessica; Son, Ae Rin; Hamid, Qudus; Wu, Honglu; Sun, Wei

    2016-01-13

    Bottom-up tissue engineering requires methodological progress in biofabrication to capture key design facets of anatomical arrangements across the micro, meso and macro scales. The diffusive mass transfer properties necessary to elicit stability and functionality require hetero-typic contact, cell-to-cell signaling and uniform nutrient diffusion. Bioprinting techniques successfully build mathematically defined porous architecture to diminish resistance to mass transfer. Current limitations of bioprinted cell assemblies include poor micro-scale formability of cell-laden soft gels and asymmetrical macro-scale diffusion through 3D volumes. The objective of this work is to engineer a synchronized multi-material bioprinter (SMMB) system which improves the resolution and expands the capability of existing bioprinting systems by packaging multiple cell types in heterotypic arrays prior to deposition. This unit-cell approach to arranging multiple cell-laden solutions is integrated with a motion system to print heterogeneous filaments as tissue-engineered scaffolds and nanoliter droplets. The set of SMMB process parameters controls the geometric arrangement of the combined flow's internal features and the constituent materials' volume fractions. SMMB-printed hepatocyte-endothelial-laden 200 nl droplets were cultured in a rotary cell culture system (RCCS) to study the effect of microgravity on an in vitro model of the human hepatic lobule. RCCS conditioning for 48 h increased hepatocyte cytoplasm diameter by 2 μm, increased the metabolic rate, and decreased drug half-life. SMMB hetero-cellular models present a 10-fold increase in metabolic rate compared to SMMB mono-culture models. Improved bioprinting resolution due to process control of cell-laden matrix packaging, as well as nanoliter droplet printing capability, identify SMMB as a viable technique to improve in vitro model efficacy.

  10. Dynamic Analysis of Recalescence Process and Interface Growth of Eutectic Fe82B17Si1 Alloy

    NASA Astrophysics Data System (ADS)

    Fan, Y.; Liu, A. M.; Chen, Z.; Li, P. Z.; Zhang, C. H.

    2018-03-01

    By employing the glass fluxing technique in combination with cyclical superheating, the microstructural evolution of the undercooled Fe82B17Si1 alloy was studied over the obtained undercooling range. With increasing undercooling, a transition of the cooling curves was detected from one recalescence to two recalescences, followed by a return to one recalescence. The two types of cooling curves were fitted by the break equation and the Johnson-Mehl-Avrami-Kolmogorov model. Based on the cooling curves at different undercoolings, the recalescence rate was calculated with the multi-logistic growth model and the Boettinger-Coriell-Trivedi model. Both the recalescence features and the interface growth kinetics of the eutectic Fe82B17Si1 alloy were explored. The fitted results were consistent with the microstructural evolution observed by TEM (SAED), SEM and XRD. Finally, the relationship between microstructure and hardness was also investigated.

  11. Modeling of solid-state and excimer laser processes for 3D micromachining

    NASA Astrophysics Data System (ADS)

    Holmes, Andrew S.; Onischenko, Alexander I.; George, David S.; Pedder, James E.

    2005-04-01

    An efficient simulation method has recently been developed for multi-pulse ablation processes. This is based on pulse-by-pulse propagation of the machined surface according to one of several phenomenological models for the laser-material interaction. The technique allows quantitative predictions to be made about the surface shapes of complex machined parts, given only a minimal set of input data for parameter calibration. In the case of direct-write machining of polymers or glasses with ns-duration pulses, this data set can typically be limited to the surface profiles of a small number of standard test patterns. The use of phenomenological models for the laser-material interaction, calibrated by experimental feedback, allows fast simulation, and can achieve a high degree of accuracy for certain combinations of material, laser and geometry. In this paper, the capabilities and limitations of the approach are discussed, and recent results are presented for structures machined in SU8 photoresist.
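    A minimal sketch of the pulse-by-pulse surface-propagation idea follows, using a simple Beer-Lambert-type phenomenological ablation law with illustrative parameters; the calibrated models, incidence-angle handling, and geometry effects of a real simulator are omitted.

```python
import numpy as np

ALPHA = 5.0    # effective absorption coefficient (1/um), illustrative
F_TH = 0.5     # ablation threshold fluence (J/cm^2), illustrative
F0 = 2.0       # peak fluence (J/cm^2)
W = 10.0       # Gaussian beam radius (um)

x = np.linspace(-30.0, 30.0, 601)   # lateral coordinate (um)
z = np.zeros_like(x)                # machined depth profile (um)

for _ in range(50):                 # 50 pulses at a fixed spot
    F = F0 * np.exp(-2.0 * (x / W) ** 2)              # Gaussian fluence profile
    d = np.where(F > F_TH, np.log(np.maximum(F, F_TH) / F_TH) / ALPHA, 0.0)
    z += d                          # propagate the surface pulse by pulse

print("crater depth (um):", round(float(z.max()), 2))
```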

  12. Irrigation water allocation optimization using multi-objective evolutionary algorithm (MOEA) - a review

    NASA Astrophysics Data System (ADS)

    Fanuel, Ibrahim Mwita; Mushi, Allen; Kajunguri, Damian

    2018-03-01

    This paper analyzes more than 40 papers with a restricted area of application of the Multi-Objective Genetic Algorithm, Non-Dominated Sorting Genetic Algorithm-II and Multi-Objective Differential Evolution (MODE) to solve multi-objective problems in agricultural water management. The paper focuses on different application aspects, which include water allocation, irrigation planning, crop pattern and allocation of available land. The performance and results of these techniques are discussed. The review finds that there is potential to use MODE to analyze multi-objective problems; its application is all the more attractive because it is a simpler and more powerful technique than many other evolutionary algorithms. The paper concludes with a promising new trend of research that demands effective use of MODE: the inclusion of benefits derived from farm byproducts and production costs into the model.

  13. Real-Time Identification of Smoldering and Flaming Combustion Phases in Forest Using a Wireless Sensor Network-Based Multi-Sensor System and Artificial Neural Network

    PubMed Central

    Yan, Xiaofei; Cheng, Hong; Zhao, Yandong; Yu, Wenhua; Huang, Huan; Zheng, Xiaoliang

    2016-01-01

    Diverse sensing techniques have been developed and combined with machine learning methods for forest fire detection, but none of them has addressed identifying smoldering and flaming combustion phases. This study attempts to identify different combustion phases in real time using a developed wireless sensor network (WSN)-based multi-sensor system and an artificial neural network (ANN). Sensors (CO, CO2, smoke, air temperature and relative humidity) were integrated into one node of the WSN. An experiment was conducted using burning materials from forest residue to test the responses of each node under no-combustion, smoldering-dominated and flaming-dominated conditions. The results showed that the five sensors have reasonable responses to artificial forest fire. To reduce the cost of the nodes, the smoke, CO2 and temperature sensors were selected through correlation analysis. To achieve a higher identification rate, an ANN model was built and trained with inputs of four sensor groups: smoke; smoke and CO2; smoke and temperature; and smoke, CO2 and temperature. The model test results showed that multi-sensor input yielded higher prediction accuracy (≥82.5%) than single-sensor input (50.9%–92.5%). On this basis, it is possible to reduce the cost while retaining a relatively high fire identification rate, and potential application of the system can be tested in the future under real forest conditions. PMID:27527175

  14. Real-Time Identification of Smoldering and Flaming Combustion Phases in Forest Using a Wireless Sensor Network-Based Multi-Sensor System and Artificial Neural Network.

    PubMed

    Yan, Xiaofei; Cheng, Hong; Zhao, Yandong; Yu, Wenhua; Huang, Huan; Zheng, Xiaoliang

    2016-08-04

    Diverse sensing techniques have been developed and combined with machine learning methods for forest fire detection, but none of them has addressed identifying smoldering and flaming combustion phases. This study attempts to identify different combustion phases in real time using a developed wireless sensor network (WSN)-based multi-sensor system and an artificial neural network (ANN). Sensors (CO, CO₂, smoke, air temperature and relative humidity) were integrated into one node of the WSN. An experiment was conducted using burning materials from forest residue to test the responses of each node under no-combustion, smoldering-dominated and flaming-dominated conditions. The results showed that the five sensors have reasonable responses to artificial forest fire. To reduce the cost of the nodes, the smoke, CO₂ and temperature sensors were selected through correlation analysis. To achieve a higher identification rate, an ANN model was built and trained with inputs of four sensor groups: smoke; smoke and CO₂; smoke and temperature; and smoke, CO₂ and temperature. The model test results showed that multi-sensor input yielded higher prediction accuracy (≥82.5%) than single-sensor input (50.9%-92.5%). On this basis, it is possible to reduce the cost while retaining a relatively high fire identification rate, and potential application of the system can be tested in the future under real forest conditions.
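    A minimal sketch of the classification step follows, training a small ANN on synthetic stand-ins for the selected smoke/CO₂/temperature readings; the class means and spreads are invented, not the paper's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic stand-ins for node readings (smoke, CO2 in ppm, temperature in C).
# Classes: 0 = no combustion, 1 = smoldering-dominated, 2 = flaming-dominated.
n = 300
X = np.vstack([rng.normal([0.1, 400, 20], [0.05, 20, 2], size=(n, 3)),
               rng.normal([0.8, 900, 35], [0.20, 80, 5], size=(n, 3)),
               rng.normal([0.5, 1500, 80], [0.20, 150, 10], size=(n, 3))])
y = np.repeat([0, 1, 2], n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = make_pipeline(StandardScaler(),                      # scale mixed units
                    MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                  random_state=0))
ann.fit(X_tr, y_tr)
print("test accuracy:", round(ann.score(X_te, y_te), 3))
```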

  15. Profile of Students’ Mental Model Change on Archimedes’ Law Concepts as an Impact of a Multi-Representation Approach

    NASA Astrophysics Data System (ADS)

    Taher, M.; Hamidah, I.; Suwarma, I. R.

    2017-09-01

    This paper outlines the results of an experimental study on the effects of a multi-representation approach to learning Archimedes’ Law on the improvement of students’ mental models. The multi-representation techniques implemented in the study were verbal, pictorial, mathematical, and graphical representations. Students’ mental models were classified into three levels, i.e. scientific, synthetic, and initial, based on the students’ level of understanding. The study employed a pre-experimental methodology, using a one-group pretest-posttest design. The subjects were 32 eleventh-grade students in a public senior high school in Riau Province. The research instrument included a mental model test on the hydrostatic pressure concept, in the form of an essay test judged by experts. The findings showed a positive change in students’ mental models, indicating that the multi-representation approach was effective in improving students’ mental models.

  16. Trade-off analysis of discharge-desiltation-turbidity and ANN analysis on sedimentation of a combined reservoir-reach system under multi-phase and multi-layer conjunctive releasing operation

    NASA Astrophysics Data System (ADS)

    Huang, Chien-Lin; Hsu, Nien-Sheng; Wei, Chih-Chiang; Yao, Chun-Hao

    2017-10-01

    Multi-objective reservoir operation considering the discharge-desiltation-turbidity trade-off during typhoons, together with sediment concentration (SC) simulation modeling, are vital components of sustainable reservoir management. The purposes of this study were (1) to analyze the multi-layer release trade-offs between reservoir desiltation and the intake turbidity of downstream purification plants, and thus propose a superior conjunctive operation strategy, and (2) to develop ANFIS-based (adaptive network-based fuzzy inference system) and RTRLNN-based (real-time recurrent learning neural networks) substitute SC simulation models. To this end, this study proposed a methodology to develop (1) a series of multi-phase and multi-layer sediment-flood conjunctive release modes and (2) a specialized SC numerical model for a combined reservoir-reach system. The conjunctive release modes involve (1) an optimization model in which the decision variables are multi-phase reduction/scaling ratios and their timings, used to generate a superior total release hydrograph for flood control (Phase I: prior to flood arrival; Phase II/III: prior to/subsequent to peak flow), and (2) a combination method with physical limitations for separating the singular hydrograph into multi-layer release hydrographs for sediment control. This study employed featured signals obtained from statistical quartiles/the sediment duration curve for mesh segmentation, and an iterative optimization model with a sediment unit response matrix and corresponding geophysical-based acceleration factors for efficient parameter calibration. The developed methodology was applied to the Shihmen Reservoir basin in Taiwan. The trade-off analysis, using Typhoons Sinlaku and Jangmi as case examples, revealed that owing to gravity-current and re-suspension effects, Phases I + II can de-silt safely without violating the intake's turbidity limitation before reservoir discharge reaches 2238 m3/s; Phase III, however, can only de-silt after the release at the spillway reaches 827 m3/s and before reservoir discharge reaches 1924 m3/s, with corresponding maximum desiltation ratios of 0.221 and 0.323, respectively. Moreover, the model construction results demonstrated that the self-adaptation/fuzzy inference of ANFIS can effectively simulate the SC hydrograph in an unsteady state for suspended-load-dominated water bodies, and that the real-time recurrent deterministic routing of RTRLNN can accurately simulate that of a bedload-dominated flow regime.

  17. Accelerating electrostatic surface potential calculation with multi-scale approximation on graphics processing units.

    PubMed

    Anandakrishnan, Ramu; Scogland, Tom R W; Fenley, Andrew T; Gordon, John C; Feng, Wu-chun; Onufriev, Alexey V

    2010-06-01

    Tools that compute and visualize biomolecular electrostatic surface potential have been used extensively for studying biomolecular function. However, determining the surface potential for large biomolecules on a typical desktop computer can take days or longer using currently available tools and methods. Two commonly used techniques to speed up these types of electrostatic computations are approximations based on multi-scale coarse-graining and parallelization across multiple processors. This paper demonstrates that for the computation of electrostatic surface potential, these two techniques can be combined to deliver significantly greater speed-up than either one separately, something that is in general not always possible. Specifically, the electrostatic potential computation, using an analytical linearized Poisson-Boltzmann (ALPB) method, is approximated using the hierarchical charge partitioning (HCP) multi-scale method, and parallelized on an ATI Radeon 4870 graphics processing unit (GPU). The implementation delivers a combined 934-fold speed-up for a 476,040-atom viral capsid, compared to an equivalent non-parallel implementation on an Intel E6550 CPU without the approximation. This speed-up is significantly greater than the 42-fold speed-up for the HCP approximation alone or the 182-fold speed-up for the GPU alone. Copyright (c) 2010 Elsevier Inc. All rights reserved.

  18. A multi-technique approach for characterizing the geomorphological evolution of a Villerville-Cricqueboeuf coastal landslide (Normandy, France).

    NASA Astrophysics Data System (ADS)

    Lissak Borges, Candide; Maquaire, Olivier; Malet, Jean-Philippe; Gomez, Christopher; Lavigne, Franck

    2010-05-01

    The Villerville and Cricqueboeuf coastal landslides (Calvados, Normandy, North-West France) have occurred in marly, sandy and chalky formations. The slope instability probably started during the late Quaternary and is still active over the recent historic period. Since 1982, the slope has been affected by permanent activity (following the Varnes classification), with an annual average displacement of 5-10 cm per year depending on the season. Three major events occurred in 1988, 1995 and 2001, controlled by the hydro-climatic conditions. These events induced displacements of several decimetres to several metres (e.g. 5 m horizontal displacements were observed in 2001 at Cricqueboeuf) and generated economic and physical damage to buildings and roads. The landslide morphology is characterized by multi-metre scarps, reverse slopes caused by the tilting of landslide blocks, and evolving cracks. The objective of this paper is to present the methodology used to characterize the recent historical (since 1808) geomorphological evolution of the landslides, and to discuss the spatio-temporal pattern of observed displacements. A multi-technique research approach has been applied, consisting of historical research, geomorphological mapping, geodetic monitoring and geotechnical engineering investigation. Information gained from the different documents and techniques has been combined to propose a conceptual model of landslide evolution: - a retrospective study of landslide events inventoried in the historic period (archive investigation, newspapers); - a multi-temporal (1955-2006) analysis of aerial photographs (image processing, traditional stereoscopic techniques and image orthorectification), ancient maps and cadastres; - the creation of a detailed geomorphological map in 2009; - an analysis of recent displacements monitored since 1985 with traditional geodetic techniques (tacheometry, dGPS, micro-levelling); - a geophysical investigation by ground-penetrating radar along the main road to assess the subsidence of the road according to the thickness of the filling material. Integration of this knowledge allows the landscape changes over historical time to be characterized. Displacement values obtained over nearly 200 years reflect slow annual movement punctuated by accelerations during crises. Values are dispersed in space and time. A total displacement averaging 12.30 m (σ = 8.50) between 1829 and 2006 is observed for the Villerville landslide, corresponding to an annual displacement of about 0.07 m, which can be compared with the data recorded since 1985 and with the annual dGPS measurements made between 2008 and 2009.

  19. Massive integration of diverse protein quality assessment methods to improve template based modeling in CASP11.

    PubMed

    Cao, Renzhi; Bhattacharya, Debswapna; Adhikari, Badri; Li, Jilong; Cheng, Jianlin

    2016-09-01

    Model evaluation and selection is an important step and a big challenge in template-based protein structure prediction. Individual model quality assessment methods designed to recognize specific properties of protein structures often fail to consistently select good models from a model pool because of their limitations. Therefore, combining multiple complementary quality assessment methods is useful for improving model ranking and, consequently, tertiary structure prediction. Here, we report the performance and analysis of our human tertiary structure predictor (MULTICOM) based on the massive integration of 14 diverse complementary quality assessment methods, which was successfully benchmarked in the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11). The predictions of MULTICOM for 39 template-based domains were rigorously assessed by six scoring metrics covering global topology of the Cα trace, local all-atom fitness, side-chain quality, and physical reasonableness of the model. The results show that the massive integration of complementary, diverse single-model and multi-model quality assessment methods can effectively leverage the strength of single-model methods in distinguishing quality variation among similar good models and the advantage of multi-model quality assessment methods in identifying reasonable average-quality models. The overall excellent performance of the MULTICOM predictor demonstrates that integrating a large number of model quality assessment methods in conjunction with model clustering is a useful approach to improve the accuracy, diversity, and consequently robustness of template-based protein structure prediction. Proteins 2016; 84(Suppl 1):247-259. © 2015 Wiley Periodicals, Inc.
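    One simple way to integrate many complementary quality scores, sketched below on a hypothetical score matrix, is to z-score each method's outputs and average them into a consensus ranking; this illustrates the integration idea only, not MULTICOM's actual weighting scheme.

```python
import numpy as np

rng = np.random.default_rng(3)
scores = rng.random((5, 14))   # 5 candidate models x 14 QA methods (hypothetical)

z = (scores - scores.mean(axis=0)) / scores.std(axis=0)  # normalise per method
consensus = z.mean(axis=1)                               # integrate across methods
print("best model index:", int(np.argmax(consensus)))
```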

  20. Combining patient journey modelling and visual multi-agent computer simulation: a framework to improving knowledge translation in a healthcare environment.

    PubMed

    Curry, Joanne; Fitzgerald, Anneke; Prodan, Ante; Dadich, Ann; Sloan, Terry

    2014-01-01

    This article focuses on a framework that will investigate the integration of two disparate methodologies: patient journey modelling and visual multi-agent simulation, and its impact on the speed and quality of knowledge translation to healthcare stakeholders. Literature describes patient journey modelling and visual simulation as discrete activities. This paper suggests that their combination and their impact on translating knowledge to practitioners are greater than the sum of the two technologies. The test-bed is ambulatory care and the goal is to determine if this approach can improve health services delivery, workflow, and patient outcomes and satisfaction. The multidisciplinary research team is comprised of expertise in patient journey modelling, simulation, and knowledge translation.

  1. Viewing zone duplication of multi-projection 3D display system using uniaxial crystal.

    PubMed

    Lee, Chang-Kun; Park, Soon-Gi; Moon, Seokil; Lee, Byoungho

    2016-04-18

    We propose a novel multiplexing technique for increasing the viewing zone of a multi-view, multi-projection 3D display system by employing double refraction in a uniaxial crystal. When linearly polarized images from the projector pass through the uniaxial crystal, two possible optical paths exist according to the polarization state of the image. The optical path of the image can therefore be switched, shifting the viewing zone in the lateral direction. Modulating the polarization of the image from a single projection unit thus generates two viewing zones at different positions. To realize full-color images at each viewing zone, a polarization-based temporal multiplexing technique is adopted with a conventional polarization-switching device from a liquid crystal (LC) display. Through experiments, a prototype ten-view multi-projection 3D display system presenting full-color view images is implemented by combining five laser scanning projectors, an optically clear calcite (CaCO3) crystal, and an LC polarization rotator. For each time sequence of the temporal multiplexing, the luminance distribution of the proposed system is measured and analyzed.

  2. Monolithic, multi-bandgap, tandem, ultra-thin, strain-counterbalanced, photovoltaic energy converters with optimal subcell bandgaps

    DOEpatents

    Wanlass, Mark W [Golden, CO; Mascarenhas, Angelo [Lakewood, CO

    2012-05-08

    Modeling a monolithic, multi-bandgap, tandem, solar photovoltaic converter or thermophotovoltaic converter by constraining the bandgap value for the bottom subcell to no less than a particular value produces an optimum combination of subcell bandgaps that provide theoretical energy conversion efficiencies nearly as good as unconstrained maximum theoretical conversion efficiency models, but which are more conducive to actual fabrication to achieve such conversion efficiencies than unconstrained model optimum bandgap combinations. Achieving such constrained or unconstrained optimum bandgap combinations includes growth of a graded layer transition from larger lattice constant on the parent substrate to a smaller lattice constant to accommodate higher bandgap upper subcells and at least one graded layer that transitions back to a larger lattice constant to accommodate lower bandgap lower subcells and to counter-strain the epistructure to mitigate epistructure bowing.

  3. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.

  4. Probabilistic Open Set Recognition

    NASA Astrophysics Data System (ADS)

    Jain, Lalit Prithviraj

    Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds that existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize that the cause is weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT-based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between the positive training sample and the closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases.
We evaluate our algorithm on open set variations of standard visual learning benchmarks, as well as on an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
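    In the spirit of the W-SVM's EVT calibration, the sketch below fits a Weibull distribution to the tail of positive decision scores nearest the boundary and uses its CDF as a probability of class inclusion; the scores are synthetic and the tail size of 50 is an arbitrary assumption.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(4)
pos_scores = rng.gamma(4.0, 1.0, size=500)   # stand-in positive decision scores

# Fit a Weibull to the extrema nearest the decision boundary (lowest scores).
tail = np.sort(pos_scores)[:50]
shape, loc, scale = weibull_min.fit(tail, floc=0.0)

def p_inclusion(score):
    """Calibrated probability that a test score belongs to the positive class."""
    return weibull_min.cdf(score, shape, loc=loc, scale=scale)

print(round(p_inclusion(0.5), 3), round(p_inclusion(5.0), 3))  # low vs high score
```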

  5. Reaching multi-nanosecond timescales in combined QM/MM molecular dynamics simulations through parallel horsetail sampling.

    PubMed

    Martins-Costa, Marilia T C; Ruiz-López, Manuel F

    2017-04-15

    We report an enhanced sampling technique that makes it possible to reach the multi-nanosecond timescale in quantum mechanics/molecular mechanics molecular dynamics simulations. The proposed technique, called horsetail sampling, is a specific type of multiple-molecular-dynamics approach exhibiting high parallel efficiency. It couples a main simulation with a large number of shorter trajectories launched on independent processors at periodic time intervals. The technique is applied to study hydrogen peroxide at the water liquid-vapor interface, a system of considerable atmospheric relevance. A total simulation time of a little more than 6 ns was attained for a total CPU time of 5.1 years, representing only about 20 days of wall-clock time. The discussion of the results highlights the strong influence of solvation effects at the interface on the structure and electronic properties of the solute. © 2017 Wiley Periodicals, Inc.

  6. Correlative Tomography

    PubMed Central

    Burnett, T. L.; McDonald, S. A.; Gholinia, A.; Geurts, R.; Janus, M.; Slater, T.; Haigh, S. J.; Ornek, C.; Almuaili, F.; Engelberg, D. L.; Thompson, G. E.; Withers, P. J.

    2014-01-01

    Increasingly, researchers are looking to bring together perspectives across multiple scales, or to combine insights from different techniques, for the same region of interest. To this end, correlative microscopy has already yielded substantial new insights in two dimensions (2D). Here we develop correlative tomography, where the correlative task is somewhat more challenging because the volume of interest is typically hidden beneath the sample surface. We have threaded together X-ray computed tomography, serial-section FIB-SEM tomography, electron backscatter diffraction and finally TEM elemental analysis, all for the same 3D region. This has allowed observation of the competition between pitting corrosion and intergranular corrosion at multiple scales, revealing the structural hierarchy, crystallography and chemistry of veiled corrosion pits in stainless steel. With automated correlative workflows and co-visualization of the multi-scale or multi-modal datasets, the technique promises to provide insights across biological, geological and materials science that are impossible using either individual or multiple uncorrelated techniques. PMID:24736640

  7. Zero-forcing pre-coding for MIMO WiMAX transceivers: Performance analysis and implementation issues

    NASA Astrophysics Data System (ADS)

    Cattoni, A. F.; Le Moullec, Y.; Sacchi, C.

    Next-generation wireless communication networks are expected to achieve ever-increasing data rates. Multi-User Multiple-Input-Multiple-Output (MU-MIMO) is a key technique for obtaining the expected performance, because it combines the high capacity achievable with the MIMO channel with the benefits of space-division multiple access. In MU-MIMO systems, the base stations transmit signals to two or more users over the same channel; for this reason, every user can experience inter-user interference. This paper provides a capacity analysis of an online, interference-based pre-coding algorithm able to mitigate the multi-user interference of MU-MIMO systems in the context of a realistic WiMAX application scenario. Simulation results show that pre-coding can significantly increase the channel capacity. Furthermore, the paper presents several feasibility considerations for the implementation of the analyzed technique in a possible FPGA-based software-defined radio.
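    A minimal sketch of zero-forcing pre-coding itself: the precoder is the right pseudo-inverse of the channel matrix, so the effective channel becomes a scaled identity and inter-user interference is nulled. The i.i.d. Rayleigh channel here is an assumption, not a WiMAX channel model.

```python
import numpy as np

rng = np.random.default_rng(5)
Nt, K = 4, 4   # 4 base-station antennas serving 4 single-antenna users
H = (rng.normal(size=(K, Nt)) + 1j * rng.normal(size=(K, Nt))) / np.sqrt(2)

W = H.conj().T @ np.linalg.inv(H @ H.conj().T)  # ZF: right pseudo-inverse of H
W /= np.linalg.norm(W)                          # total transmit-power constraint

# Effective channel seen by the users: a scaled identity, i.e. each user
# receives only its own stream and inter-user interference is nulled.
print(np.round(np.abs(H @ W), 3))
```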

  8. Behavioral modeling and digital compensation of nonlinearity in DFB lasers for multi-band directly modulated radio-over-fiber systems

    NASA Astrophysics Data System (ADS)

    Li, Jianqiang; Yin, Chunjing; Chen, Hao; Yin, Feifei; Dai, Yitang; Xu, Kun

    2014-11-01

    The envisioned C-RAN concept in the wireless communication sector relies on distributed antenna systems (DAS), which consist of a central unit (CU), multiple remote antenna units (RAUs) and the fronthaul links between them. As legacy and emerging wireless communication standards will coexist for a long time, the fronthaul links are preferred to carry multi-band multi-standard wireless signals. Directly modulated radio-over-fiber (ROF) links can serve as a low-cost option for fronthaul connections conveying multi-band wireless signals. However, directly modulated ROF systems often suffer from the inherent nonlinearities of directly modulated lasers. Unlike ROF systems working in single-band mode, the modulation nonlinearities in multi-band ROF systems can result in both in-band and cross-band nonlinear distortions. To address this issue, we have recently investigated the multi-band nonlinear behavior of directly modulated DFB lasers based on a multi-dimensional memory polynomial model. Based on this model, an efficient multi-dimensional baseband digital predistortion technique was developed and experimentally demonstrated for the linearization of multi-band directly modulated ROF systems.
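    A single-band memory polynomial fit, sketched below with a toy nonlinearity, shows the behavioral-modeling core; the paper's multi-dimensional (cross-band) model adds cross terms between bands, which this sketch omits, and all signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
N, K, M = 4000, 3, 2                  # samples, max (odd) order, memory depth

# Complex baseband drive and a toy "laser" with mild compression plus one
# memory tap; real DFB behaviour would be measured, not synthesised.
x = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
y = (x - 0.1 * x * np.abs(x)**2 + 0.05 * np.roll(x, 1)
     + 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N)))

def mp_basis(x, K, M):
    """Regressor columns x[n-m] * |x[n-m]|^(k-1) for odd k up to K."""
    cols = [np.roll(x, m) * np.abs(np.roll(x, m))**(k - 1)
            for m in range(M + 1) for k in range(1, K + 1, 2)]
    return np.column_stack(cols)

Phi = mp_basis(x, K, M)
a, *_ = np.linalg.lstsq(Phi, y, rcond=None)       # behavioural coefficients
nmse = np.mean(np.abs(Phi @ a - y)**2) / np.mean(np.abs(y)**2)
print("model NMSE (dB):", round(10 * np.log10(nmse), 1))
```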

  9. A new approach to spike sorting for multi-neuronal activities recorded with a tetrode--how ICA can be practical.

    PubMed

    Takahashi, Susumu; Anzai, Yuichiro; Sakurai, Yoshio

    2003-07-01

    Multi-neuronal recording with a tetrode is a powerful technique for revealing neuronal interactions in local circuits. However, it is difficult to detect precise spike timings among closely neighboring neurons because the spike waveforms of individual neurons overlap on the electrode when more than two neurons fire simultaneously. In addition, the spike waveforms of single neurons, especially in the presence of complex spikes, are often non-stationary. These problems limit the ability of ordinary spike sorting to sort multi-neuronal activities recorded using tetrodes into their single-neuron components. Though sorting with independent component analysis (ICA) can solve these problems, it has one serious limitation: the number of separated neurons must be less than the number of electrodes. Using a combination of ICA and an efficient ordinary spike-sorting technique (k-means clustering), we developed an automatic procedure that solves the spike-overlapping and non-stationarity problems with no limitation on the number of separated neurons. The results of applying the procedure to real multi-neuronal data demonstrated that some outliers, which would be assigned to distinct clusters by ordinary spike-sorting methods, can be identified as overlapping spikes, and that there are functional connections between a putative pyramidal neuron and its putative dendrite. These findings suggest that the combination of ICA and k-means clustering can provide insights into the precise nature of functional circuits among neurons, i.e. cell assemblies.
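    A schematic sketch of the two ingredients, ICA unmixing followed by k-means clustering, on synthetic four-channel data; it does not reproduce the paper's overlap-resolution procedure, and the sources and mixing matrix are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)

# Synthetic 4-channel "tetrode" mixture of 3 super-Gaussian sources.
n_samples = 20000
sources = rng.laplace(size=(3, n_samples))
mixing = rng.normal(size=(4, 3))
X = (mixing @ sources).T                      # shape: (samples, electrodes)

ica = FastICA(n_components=3, random_state=0)
S = ica.fit_transform(X)                      # unmixed component activity

# Crude event detection, then k-means on component amplitudes of events.
events = np.abs(S).max(axis=1) > 4 * S.std()
labels = KMeans(n_clusters=3, n_init=10,
                random_state=0).fit_predict(np.abs(S[events]))
print("events per cluster:", np.bincount(labels))
```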

  10. Multicolor Super-Resolution Fluorescence Imaging via Multi-Parameter Fluorophore Detection

    PubMed Central

    Bates, Mark; Dempsey, Graham T; Chen, Kok Hao; Zhuang, Xiaowei

    2012-01-01

    Understanding the complexity of the cellular environment will benefit from the ability to unambiguously resolve multiple cellular components, simultaneously and with nanometer-scale spatial resolution. Multicolor super-resolution fluorescence microscopy techniques have been developed to achieve this goal, yet challenges remain in terms of the number of targets that can be simultaneously imaged and the crosstalk between color channels. Herein, we demonstrate multicolor stochastic optical reconstruction microscopy (STORM) based on a multi-parameter detection strategy, which uses both the fluorescence activation wavelength and the emission color to discriminate between photo-activatable fluorescent probes. First, we obtained two-color super-resolution images using the near-infrared cyanine dye Alexa 750 in conjunction with a red cyanine dye Alexa 647, and quantified color crosstalk levels and image registration accuracy. Combinatorial pairing of these two switchable dyes with fluorophores which enhance photo-activation enabled multi-parameter detection of six different probes. Using this approach, we obtained six-color super-resolution fluorescence images of a model sample. The combination of multiple fluorescence detection parameters for improved fluorophore discrimination promises to substantially enhance our ability to visualize multiple cellular targets with sub-diffraction-limit resolution. PMID:22213647

  11. Short, multi-needle FDR sensor suitable for measuring soil water content

    USDA-ARS?s Scientific Manuscript database

    Time domain reflectometry (TDR) is a well-established electromagnetic technique used to measure soil water content. TDR sensors have been combined with heat pulse sensors to produce thermo-TDR sensors. Thermo-TDR sensors are restricted to having relatively short needles in order to accurately measur...

  12. Rapid Multi-Tracer PET Tumor Imaging With 18F-FDG and Secondary Shorter-Lived Tracers.

    PubMed

    Black, Noel F; McJames, Scott; Kadrmas, Dan J

    2009-10-01

    Rapid multi-tracer PET, where two to three PET tracers are rapidly scanned with staggered injections, can recover certain imaging measures for each tracer based on differences in tracer kinetics and decay. We previously showed that single-tracer imaging measures can be recovered to a certain extent from rapid dual-tracer 62Cu-PTSM (blood flow) + 62Cu-ATSM (hypoxia) tumor imaging. In this work, the feasibility of rapidly imaging 18F-FDG plus one or two of these shorter-lived secondary tracers was evaluated in the same tumor model. Dynamic PET imaging was performed in four dogs with pre-existing tumors, and the raw scan data were combined to emulate 60-minute-long dual- and triple-tracer scans, using the single-tracer scans as gold standards. The multi-tracer data were processed for static (SUV) and kinetic (K1, Knet) endpoints for each tracer, followed by linear regression analysis of multi-tracer versus single-tracer results. Static and quantitative dynamic imaging measures of FDG were both accurately recovered from the multi-tracer scans, closely matching the single-tracer FDG standards (R > 0.99). Quantitative blood flow information, as measured by PTSM K1 and SUV, was also accurately recovered from the multi-tracer scans (R = 0.97). Recovery of ATSM kinetic parameters proved more difficult, though the ATSM SUV was reasonably well recovered (R = 0.92). We conclude that certain additional information from one to two shorter-lived PET tracers may be measured in a rapid multi-tracer scan alongside FDG without compromising the assessment of glucose metabolism. Such additional and complementary information has the potential to improve tumor characterization in vivo, warranting further investigation of rapid multi-tracer techniques.

  13. Rapid Multi-Tracer PET Tumor Imaging With 18F-FDG and Secondary Shorter-Lived Tracers

    PubMed Central

    Black, Noel F.; McJames, Scott; Kadrmas, Dan J.

    2009-01-01

    Rapid multi-tracer PET, where two to three PET tracers are rapidly scanned with staggered injections, can recover certain imaging measures for each tracer based on differences in tracer kinetics and decay. We previously showed that single-tracer imaging measures can be recovered to a certain extent from rapid dual-tracer 62Cu-PTSM (blood flow) + 62Cu-ATSM (hypoxia) tumor imaging. In this work, the feasibility of rapidly imaging 18F-FDG plus one or two of these shorter-lived secondary tracers was evaluated in the same tumor model. Dynamic PET imaging was performed in four dogs with pre-existing tumors, and the raw scan data were combined to emulate 60-minute-long dual- and triple-tracer scans, using the single-tracer scans as gold standards. The multi-tracer data were processed for static (SUV) and kinetic (K1, Knet) endpoints for each tracer, followed by linear regression analysis of multi-tracer versus single-tracer results. Static and quantitative dynamic imaging measures of FDG were both accurately recovered from the multi-tracer scans, closely matching the single-tracer FDG standards (R > 0.99). Quantitative blood flow information, as measured by PTSM K1 and SUV, was also accurately recovered from the multi-tracer scans (R = 0.97). Recovery of ATSM kinetic parameters proved more difficult, though the ATSM SUV was reasonably well recovered (R = 0.92). We conclude that certain additional information from one to two shorter-lived PET tracers may be measured in a rapid multi-tracer scan alongside FDG without compromising the assessment of glucose metabolism. Such additional and complementary information has the potential to improve tumor characterization in vivo, warranting further investigation of rapid multi-tracer techniques. PMID:20046800
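    The decay-difference idea underlying multi-tracer separation can be sketched as a linear least-squares problem: with known half-lives (about 109.8 min for 18F and 9.7 min for 62Cu), the measured signal constrains each tracer's amplitude. Tracer kinetics are ignored here and the amplitudes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

# Known decay constants (1/min) from half-lives: 18F ~109.8 min, 62Cu ~9.7 min.
lam_f18, lam_cu62 = np.log(2) / 109.8, np.log(2) / 9.7
t = np.arange(0.0, 60.0, 1.0)            # one frame per minute over a 60-min scan

A_true = np.array([100.0, 80.0])         # hypothetical tracer amplitudes
basis = np.column_stack([np.exp(-lam_f18 * t), np.exp(-lam_cu62 * t)])
signal = basis @ A_true + rng.normal(0.0, 1.0, t.size)   # noisy combined signal

A_hat, *_ = np.linalg.lstsq(basis, signal, rcond=None)   # separate by decay alone
print("recovered amplitudes:", np.round(A_hat, 1))
```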

  14. Diffusion-Based Design of Multi-Layered Ophthalmic Lenses for Controlled Drug Release

    PubMed Central

    Pimenta, Andreia F. R.; Serro, Ana Paula; Paradiso, Patrizia; Saramago, Benilde

    2016-01-01

    The study of ocular drug delivery systems has been one of the most covered topics in drug delivery research. One potential drug carrier solution is the use of materials that are already commercially available in ophthalmic lenses for the correction of refractive errors. In this study, we present a diffusion-based mathematical model whose parameters can be adjusted based on experimental results obtained under controlled conditions. The model allows for the design of multi-layered therapeutic ophthalmic lenses for controlled drug delivery. We show that the proper combination of materials with adequate drug diffusion coefficients, thicknesses and interfacial transport characteristics allows the delivery of drugs from multi-layered ophthalmic lenses to be controlled, such that drug bursts can be minimized and the release time maximized. As far as we know, this combination of a mathematical modelling approach with experimental validation of non-constant activity source lamellar structures, made of layers of different materials and accounting for the interface resistance to drug diffusion, is a novel approach to the design of drug-loaded multi-layered contact lenses. PMID:27936138
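    As a single-layer baseline for the multi-layered design problem, the sketch below evaluates Crank's series solution for fractional drug release from a uniformly loaded slab with perfect-sink boundaries and constant diffusivity; layer-wise diffusivities and interface resistances, which the paper's model includes, are omitted, and the parameter values are illustrative.

```python
import numpy as np

def fractional_release(t, D, L, terms=200):
    """Crank series: fraction of drug released from a slab of thickness L."""
    k = 2 * np.arange(terms) + 1
    series = (8.0 / (k**2 * np.pi**2))[:, None] * \
             np.exp(-(k**2 * np.pi**2 * D)[:, None] * t[None, :] / L**2)
    return 1.0 - series.sum(axis=0)

t = np.linspace(0.0, 24 * 3600.0, 200)               # one day, in seconds
frac = fractional_release(t, D=1e-12, L=100e-6)      # illustrative D (m^2/s), L (m)
print("fraction released after 24 h:", round(float(frac[-1]), 3))
```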

  15. A novel application of artificial neural network for wind speed estimation

    NASA Astrophysics Data System (ADS)

    Fang, Da; Wang, Jianzhou

    2017-05-01

    Providing accurate multi-step wind speed estimation models has increasing significance because of the important technical and economic impacts of wind speed on power grid security and environmental benefits. In this study, combined strategies for wind speed forecasting are proposed based on an intelligent data processing system using artificial neural networks (ANN). A generalized regression neural network and an Elman neural network are employed to form two hybrid models. The approach employs one ANN to model the samples, achieving data denoising and assimilation, and applies the other to predict wind speed using the pre-processed samples. The proposed method is demonstrated in terms of the predictive improvements of the hybrid models compared with a single ANN and a typical forecasting method. To give sufficient cases for the study, four observation sites with monthly average wind speed over four given years in Western China were used to test the models. Multiple evaluation methods demonstrated that the proposed method provides a promising alternative technique for monthly average wind speed estimation.
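
    A minimal sketch of the two-stage idea follows. It substitutes simple stand-ins for the paper's networks: a GRNN-style Nadaraya-Watson kernel smoother for the denoising stage and a ridge-regularised autoregressive model in place of the Elman predictor; the wind-speed series itself is synthetic.

        import numpy as np

        rng = np.random.default_rng(0)

        # Synthetic monthly wind-speed series: seasonal cycle plus noise,
        # standing in for the observed site data (illustrative numbers only).
        months = np.arange(96, dtype=float)
        speed = (5.0 + 1.5 * np.sin(2 * np.pi * months / 12)
                 + rng.normal(0, 0.6, months.size))

        def grnn_smooth(t, y, sigma=1.5):
            # GRNN-style Nadaraya-Watson smoother: the denoising stage.
            w = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * sigma**2))
            return w @ y / w.sum(axis=1)

        denoised = grnn_smooth(months, speed)

        # Prediction stage: ridge-regularised AR model on the denoised series,
        # a simple stand-in for the Elman recurrent network.
        p = 12                                        # one year of lags
        X = np.array([denoised[i:i + p] for i in range(len(denoised) - p)])
        y = speed[p:]                                 # target: observed next month
        A = np.hstack([X, np.ones((len(X), 1))])
        coef = np.linalg.solve(A.T @ A + 1e-2 * np.eye(A.shape[1]), A.T @ y)

        pred = A @ coef
        rmse = np.sqrt(np.mean((pred - y) ** 2))
        persistence = np.sqrt(np.mean((y[1:] - y[:-1]) ** 2))
        print(f"hybrid in-sample RMSE = {rmse:.3f}, "
              f"persistence RMSE = {persistence:.3f}")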

  16. Adaptive surrogate model based multi-objective transfer trajectory optimization between different libration points

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Wei

    2016-10-01

    An adaptive surrogate model-based multi-objective optimization strategy that combines the benefits of invariant manifolds and low-thrust control toward developing a low-computational-cost transfer trajectory between libration orbits around the L1 and L2 libration points in the Sun-Earth system is proposed in this paper. A new structure for the multi-objective transfer trajectory optimization model is established that divides the transfer trajectory into several segments and assigns the dominant roles to invariant manifolds and low-thrust control in different segments. To reduce the computational cost of multi-objective transfer trajectory optimization, a mixed sampling strategy-based adaptive surrogate model is proposed. Numerical simulations show that the results obtained from the adaptive surrogate-based multi-objective optimization agree with the results obtained using direct multi-objective optimization methods, while the computational workload of the adaptive surrogate-based approach is only approximately 10% of that of direct multi-objective optimization. Furthermore, the generating efficiency of Pareto points with the adaptive surrogate-based approach is approximately 8 times that of direct multi-objective optimization. Therefore, the proposed adaptive surrogate-based multi-objective optimization provides obvious advantages over direct multi-objective optimization methods.
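
    The core loop of surrogate-assisted multi-objective optimization can be sketched compactly. The toy below is illustrative only: a cheap Gaussian-RBF surrogate (rather than the paper's mixed sampling strategy) is refit each iteration, a randomly weighted scalarization picks the next infill point, and an analytic bi-objective test function stands in for the expensive trajectory objectives (e.g. fuel mass vs transfer time).

        import numpy as np

        rng = np.random.default_rng(1)

        def f_true(x):
            # Stand-in "expensive" bi-objective problem (Schaffer function).
            return np.array([x**2, (x - 2.0) ** 2])

        def rbf_fit(xs, ys, eps=1.0):
            # Fit a Gaussian RBF interpolant; return it as a callable surrogate.
            K = np.exp(-eps * (xs[:, None] - xs[None, :]) ** 2)
            w = np.linalg.solve(K + 1e-8 * np.eye(len(xs)), ys)
            return lambda x: np.exp(-eps * (x[:, None] - xs[None, :]) ** 2) @ w

        def nondominated(F):
            # Mask of Pareto-optimal rows of F (minimisation of both columns).
            keep = np.ones(len(F), dtype=bool)
            for i in range(len(F)):
                dom = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
                keep[i] = not dom.any()
            return keep

        xs = rng.uniform(-1.0, 3.0, 6)          # small initial design of experiments
        F = np.array([f_true(x) for x in xs])
        cand = np.linspace(-1.0, 3.0, 401)      # cheap candidate pool

        for _ in range(20):                     # adaptive infill loop
            s1, s2 = rbf_fit(xs, F[:, 0]), rbf_fit(xs, F[:, 1])
            w = rng.uniform()                   # random scalarisation weight
            x_new = cand[np.argmin(w * s1(cand) + (1 - w) * s2(cand))]
            if np.min(np.abs(xs - x_new)) < 1e-9:
                continue                        # skip duplicates, keep K invertible
            xs = np.append(xs, x_new)           # true model evaluated here only
            F = np.vstack([F, f_true(x_new)])

        front = F[nondominated(F)]
        print(f"{len(xs)} true evaluations -> {len(front)} Pareto points")

    The pattern mirrors the reported benefit: nearly all objective evaluations hit the cheap surrogate, while the true model is called only once per infill point.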

  17. Designs for the combination of group- and individual-level data

    PubMed Central

    Haneuse, Sebastien; Bartell, Scott

    2012-01-01

    Background Studies of ecologic or aggregate data suffer from a broad range of biases when scientific interest lies with individual-level associations. To overcome these biases, epidemiologists can choose from a range of designs that combine these group-level data with individual-level data. The individual-level data provide information to identify, evaluate, and control bias, while the group-level data are often readily accessible and provide gains in efficiency and power. Within this context, the literature on developing models, particularly multi-level models, is well-established, but little work has been published to help researchers choose among competing designs and plan additional data collection. Methods We review recently proposed “combined” group- and individual-level designs and methods that collect and analyze data at two levels of aggregation. These include aggregate data designs, hierarchical related regression, two-phase designs, and hybrid designs for ecologic inference. Results The various methods differ in (i) the data elements available at the group and individual levels and (ii) the statistical techniques used to combine the two data sources. Implementing these techniques requires care, and it may often be simpler to ignore the group-level data once the individual-level data are collected. A simulation study, based on birth-weight data from North Carolina, is used to illustrate the benefit of incorporating group-level information. Conclusions Our focus is on settings where there are individual-level data to supplement readily accessible group-level data. In this context, no single design is ideal. Choosing which design to adopt depends primarily on the model of interest and the nature of the available group-level data. PMID:21490533

  18. Imaging and machine learning techniques for diagnosis of Alzheimer's disease.

    PubMed

    Mirzaei, Golrokh; Adeli, Anahita; Adeli, Hojjat

    2016-12-01

    Alzheimer's disease (AD) is a common health problem in elderly people. There has been considerable research toward the diagnosis and early detection of this disease in the past decade. The sensitivity of biomarkers and the accuracy of detection techniques are key to an accurate diagnosis. This paper presents a state-of-the-art review of research on the diagnosis of AD based on imaging and machine learning techniques. Different segmentation and machine learning techniques used for the diagnosis of AD are reviewed, including thresholding, supervised and unsupervised learning, probabilistic techniques, atlas-based approaches, and fusion of different image modalities. More recent and powerful classification techniques, such as the enhanced probabilistic neural network of Ahmadlou and Adeli, should be investigated with the goal of improving diagnosis accuracy. A combination of different image modalities can help improve the diagnosis accuracy rate. Research is needed on the combination of modalities to discover multi-modal biomarkers.

  19. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    NASA Astrophysics Data System (ADS)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses for sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting problems, although the best performance is achieved when correlation is added to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the root mean square error (RMSE) of the MMSE analysis evaluated against observed satellite SST. The lowest RMSE estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and a posteriori spatial filtering of the least-squares output.
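
    At its core, the super-ensemble is a multi-linear regression of member analyses onto observations over a learning window. A minimal sketch on synthetic SST-like data follows; member biases and noise levels are invented for illustration, and the EOF filtering step is omitted.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic "truth" SST series and three biased, noisy members.
        days = 60
        truth = 20 + 2 * np.sin(2 * np.pi * np.arange(days) / 30)
        members = np.stack([
            truth + 0.8 + rng.normal(0, 0.3, days),        # warm-biased member
            truth - 0.5 + rng.normal(0, 0.4, days),        # cold-biased member
            0.9 * truth + 2.0 + rng.normal(0, 0.2, days),  # amplitude-biased member
        ])

        train = 15                           # learning-period length (days)
        X = np.vstack([members[:, :train], np.ones(train)]).T
        coef, *_ = np.linalg.lstsq(X, truth[:train], rcond=None)

        Xall = np.vstack([members, np.ones(days)]).T
        sse = Xall @ coef                    # super-ensemble analysis
        mean = members.mean(axis=0)          # simple ensemble mean for comparison

        rmse = lambda a: np.sqrt(np.mean((a[train:] - truth[train:]) ** 2))
        print(f"ensemble-mean RMSE  = {rmse(mean):.3f}")
        print(f"super-ensemble RMSE = {rmse(sse):.3f}")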

  20. On the Impact of Execution Models: A Case Study in Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chavarría-Miranda, Daniel; Halappanavar, Mahantesh; Krishnamoorthy, Sriram

    2015-05-25

    Efficient utilization of high-performance computing (HPC) platforms is an important and complex problem. Execution models, abstract descriptions of the dynamic runtime behavior of the execution stack, have significant impact on the utilization of HPC systems. Using a computational chemistry kernel as a case study and a wide variety of execution models combined with load balancing techniques, we explore the impact of execution models on the utilization of an HPC system. We demonstrate a 50 percent improvement in performance by using work stealing relative to a more traditional static scheduling approach. We also use a novel semi-matching technique for load balancing that has comparable performance to a traditional hypergraph-based partitioning implementation, which is computationally expensive. Using this study, we found that execution model design choices and assumptions can limit critical optimizations such as global, dynamic load balancing and finding the correct balance between available work units and different system and runtime overheads. With the emergence of multi- and many-core architectures and the consequent growth in the complexity of HPC platforms, we believe that these lessons will be beneficial to researchers tuning diverse applications on modern HPC platforms, especially on emerging dynamic platforms with energy-induced performance variability.
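
    The gap between static scheduling and work stealing is easy to reproduce in a toy scheduling simulation. The sketch below is not the paper's runtime: it compares a static round-robin partition against an idealised steal-on-idle (zero-overhead) dynamic scheduler on a skewed task-cost distribution.

        import heapq
        import numpy as np

        rng = np.random.default_rng(3)

        # Skewed task costs, a common situation in computational-chemistry kernels.
        tasks = rng.lognormal(mean=0.0, sigma=1.5, size=400)
        workers = 8

        # Static scheduling: tasks pre-assigned round-robin; the makespan is the
        # completion time of the most heavily loaded worker.
        static_makespan = max(tasks[w::workers].sum() for w in range(workers))

        # Dynamic scheduling: an idle worker immediately grabs the next pending
        # task (equivalent in makespan to idealised work stealing with zero
        # stealing overhead).
        free_at = [0.0] * workers
        heapq.heapify(free_at)
        for cost in tasks:
            t = heapq.heappop(free_at)       # earliest-idle worker takes the task
            heapq.heappush(free_at, t + cost)
        dynamic_makespan = max(free_at)

        print(f"static  makespan: {static_makespan:8.1f}")
        print(f"dynamic makespan: {dynamic_makespan:8.1f}")
        print(f"perfect balance : {tasks.sum() / workers:8.1f}")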

  1. Energy-aware Thread and Data Management in Heterogeneous Multi-core, Multi-memory Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Su, Chun-Yi

    By 2004, microprocessor design focused on multicore scaling, increasing the number of cores per die in each generation, as the primary strategy for improving performance. These multicore processors typically include multiple memory subsystems to improve data throughput. In addition, these systems employ heterogeneous processors such as GPUs and heterogeneous memories such as non-volatile memory to improve performance, capacity, and energy efficiency. With the increasing volume of hardware resources and the system complexity caused by heterogeneity, future systems will require intelligent ways to manage hardware resources. Early research to improve performance and energy efficiency on heterogeneous, multi-core, multi-memory systems focused on tuning a single primitive or at best a few primitives in the systems. The key limitation of past efforts is their lack of a holistic approach to resource management that balances the tradeoff between performance and energy consumption. In addition, the shift from simple, homogeneous systems to these heterogeneous, multi-core, multi-memory systems requires in-depth understanding of efficient resource management for scalable execution, including new models that capture the interchange between performance and energy, smarter resource management strategies, and novel low-level performance/energy tuning primitives and runtime systems. Tuning an application to control available resources efficiently has become a daunting challenge; managing resources automatically is still a dark art since the tradeoffs among programming, energy, and performance remain insufficiently understood. In this dissertation, I have developed theories, models, and resource management techniques to enable energy-efficient execution of parallel applications through thread and data management in these heterogeneous multi-core, multi-memory systems. I study the effect of dynamic concurrency throttling on the performance and energy of multi-core, non-uniform memory access (NUMA) systems. I use critical path analysis to quantify memory contention in the NUMA memory system and determine thread mappings. In addition, I implement a runtime system that combines concurrency throttling and a novel thread mapping algorithm to manage thread resources and improve energy-efficient execution in multi-core, NUMA systems.

  2. Multi-technique approach to assess the effects of microbial biofilms involved in copper plumbing corrosion.

    PubMed

    Vargas, Ignacio T; Alsina, Marco A; Pavissich, Juan P; Jeria, Gustavo A; Pastén, Pablo A; Walczak, Magdalena; Pizarro, Gonzalo E

    2014-06-01

    Microbially influenced corrosion (MIC) is recognized as an unusual and severe type of corrosion that causes costly failures around the world. A microbial biofilm could enhance the copper release from copper plumbing into the water by forming a reactive interface. The biofilm increases the corrosion rate, the mobility of labile copper from its matrix and the detachment of particles enriched with copper under variable shear stress due to flow conditions. MIC is currently considered as a series of interdependent processes occurring at the metal-liquid interface. The presence of a biofilm results in the following effects: (a) the formation of localized microenvironments with distinct pH, dissolved oxygen concentrations, and redox conditions; (b) sorption and desorption of labile copper bonded to organic compounds under changing water chemistry conditions; (c) change in morphology by deposition of solid corrosion by-products; (d) diffusive transport of reactive chemical species from or towards the metal surface; and (e) detachment of scale particles under flow conditions. Using a multi-technique approach that combines pipe and coupon experiments this paper reviews the effects of microbial biofilms on the corrosion of copper plumbing systems, and proposes an integrated conceptual model for this phenomenon supported by new experimental data. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. Assessing the impact of land use change on hydrology by ensemble modelling (LUCHEM) II: Ensemble combinations and predictions

    USGS Publications Warehouse

    Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.

    2009-01-01

    This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows), generally yield little improvement over the weighted mean ensemble. However, a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in multi-model ensembles. The reasons behind these observations may relate to the effects of the weighting schemes, non-stationarity of the climate series and possible cross-correlations between models. Crown Copyright © 2008.
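
    The two best-performing combination schemes reported here are simple to state precisely. The sketch below demonstrates both on synthetic streamflow-like data, with model biases and noise invented for illustration: a trimmed mean over the central six of ten daily predictions, and a weighted mean with weights derived from calibration-period skill.

        import numpy as np

        rng = np.random.default_rng(4)

        # Ten synthetic "single-model ensembles" with per-model bias and noise.
        days, models = 365, 10
        truth = 10 + 5 * np.sin(2 * np.pi * np.arange(days) / 365) ** 2
        bias = rng.normal(0, 1.5, models)
        noise = (rng.normal(0, 1.0, (models, days))
                 * rng.uniform(0.5, 2.0, models)[:, None])
        preds = truth + bias[:, None] + noise

        # Trimmed mean: average the central six of the ten predictions each day.
        trimmed = np.sort(preds, axis=0)[2:8].mean(axis=0)

        # Weighted mean: weights from calibration-period skill
        # (inverse mean squared error), favouring the better models.
        calib = slice(0, 180)
        mse = np.mean((preds[:, calib] - truth[calib]) ** 2, axis=1)
        w = (1.0 / mse) / (1.0 / mse).sum()
        weighted = w @ preds

        valid = slice(180, days)
        rmse = lambda a: np.sqrt(np.mean((a[valid] - truth[valid]) ** 2))
        print(f"equal-weight mean RMSE: {rmse(preds.mean(axis=0)):.3f}")
        print(f"trimmed mean RMSE     : {rmse(trimmed):.3f}")
        print(f"weighted mean RMSE    : {rmse(weighted):.3f}")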

  4. Model Predictive Control techniques with application to photovoltaic, DC Microgrid, and a multi-sourced hybrid energy system

    NASA Astrophysics Data System (ADS)

    Shadmand, Mohammad Bagher

    Renewable energy sources continue to gain popularity. However, two major limitations prevent widespread adoption: availability and variability of the electricity generated, and the cost of the equipment. The focus of this dissertation is Model Predictive Control (MPC) for optimally sized photovoltaic (PV), DC microgrid, and multi-sourced hybrid energy systems. The main applications considered are: maximum power point tracking (MPPT) by MPC, droop predictive control of a DC microgrid, MPC of a grid-interaction inverter, and MPC of a capacitor-less VAR compensator based on a matrix converter (MC). This dissertation first investigates a multi-objective optimization technique for a hybrid distribution system. The variability of a high-penetration PV scenario is also studied when incorporated into the microgrid concept. Emerging PV technologies have enabled the creation of contoured and conformal PV surfaces; the effect of using non-planar PV modules on variability is also analyzed. The proposed predictive control for achieving the maximum power point in isolated and grid-tied PV systems speeds up the control loop since it predicts the error before the switching signal is applied to the converter. The low conversion efficiency of PV cells means the system should always operate at the maximum possible power point to be economical; the proposed MPPT technique can therefore capture more energy than conventional MPPT techniques from the same amount of installed solar panels. Because of the MPPT requirement, the output voltage of the converter may vary, so droop control is needed to feed multiple arrays of photovoltaic systems to a DC bus in a microgrid community. Development of a droop control technique by means of predictive control is another application in this dissertation. Reactive power, denoted as volt-ampere reactive (VAR), has several undesirable consequences for AC power networks, such as reduced power transfer capability and increased transmission loss, if not controlled appropriately. Inductive loads, which operate with lagging power factor, consume VARs; load compensation techniques therefore employ capacitor banks to locally supply the VARs needed by the load. Capacitors are highly unreliable components due to their failure modes and inherent aging. Approximately 60% of failures in power electronic devices such as the voltage-source-inverter-based static synchronous compensator (STATCOM) are due to the use of aluminum electrolytic DC capacitors. Therefore, capacitor-less VAR compensation is desired. This dissertation also investigates a capacitor-less STATCOM reactive power compensator that uses only inductors combined with a predictively controlled matrix converter.
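
    The finite-control-set flavour of MPC mentioned above, where the error of each admissible switching action is predicted before the action is applied, can be shown in a few lines. This is a generic toy, not the dissertation's converter models: an RL load driven by three discrete inverter voltage levels, with hypothetical circuit parameters.

        import numpy as np

        # Hypothetical RL-load and inverter parameters.
        R, L, Vdc, dt = 2.0, 10e-3, 100.0, 1e-4
        levels = np.array([-Vdc, 0.0, Vdc])      # finite control set

        def predict(i, v):
            # One-step Euler prediction of the RL-load current.
            return i + dt / L * (v - R * i)

        i, log = 0.0, []
        for k in range(600):
            # Square-wave current reference at 50 Hz.
            ref = 5.0 * np.sign(np.sin(2 * np.pi * 50 * k * dt))
            cost = (predict(i, levels) - ref) ** 2  # predicted error per level
            v = levels[np.argmin(cost)]             # pick the best switch state
            i = predict(i, v)                       # plant update (model = plant)
            log.append((k * dt, ref, i))

        for t, r, cur in log[::100]:
            print(f"t = {t:6.4f} s   ref = {r:+5.2f} A   i = {cur:+5.2f} A")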

  5. Scaling dimensions in spectroscopy of soil and vegetation

    NASA Astrophysics Data System (ADS)

    Malenovský, Zbyněk; Bartholomeus, Harm M.; Acerbi-Junior, Fausto W.; Schopfer, Jürg T.; Painter, Thomas H.; Epema, Gerrit F.; Bregt, Arnold K.

    2007-05-01

    The paper revises and clarifies definitions of the term scale and scaling conversions for imaging spectroscopy of soil and vegetation. We demonstrate a new four-dimensional scale concept that includes not only spatial but also spectral, directional and temporal components. Three scaling remote sensing techniques are reviewed: (1) radiative transfer, (2) spectral (un)mixing, and (3) data fusion. Relevant case studies are given in the context of their up- and/or down-scaling abilities over soil/vegetation surfaces, and a multi-source approach is proposed for their integration. Radiative transfer (RT) models are described to show their capacity for spatial and spectral up-scaling and directional down-scaling within a heterogeneous environment. Spectral information and spectral derivatives, like vegetation indices (e.g. TCARI/OSAVI), can be scaled and even tested by these means. Radiative transfer of an experimental Norway spruce (Picea abies (L.) Karst.) research plot in the Czech Republic was simulated by the Discrete Anisotropic Radiative Transfer (DART) model to demonstrate the relevance of correctly scaling object optical properties up to image data at two different spatial resolutions. Interconnection of the successive modelling levels in vegetation is shown. Future developments in measurement and simulation of leaf directional spectral properties are discussed. We describe linear and/or non-linear spectral mixing techniques and unmixing methods that demonstrate spatial down-scaling. The relevance of proper selection or acquisition of spectral endmembers using spectral libraries, field measurements, and pure pixels of the hyperspectral image is highlighted. An extensive list of advanced unmixing techniques, a particular example of unmixing a reflective optics system imaging spectrometer (ROSIS) image from Spain, and examples of other mixture applications give insight into the present status of scaling capabilities. Simultaneous spatial and temporal down-scaling by means of a data fusion technique is described. A demonstrative example is given for moderate resolution imaging spectroradiometer (MODIS) and LANDSAT Thematic Mapper (TM) data from Brazil. Corresponding spectral bands of both sensors were fused via a pyramidal wavelet transform in Fourier space. The new spectral and temporal information of the resultant image can be used for thematic classification or qualitative mapping. All three described scaling techniques can be integrated as the relevant methodological steps within a complex multi-source approach. We present this concept of combining numerous optical remote sensing data and methods to generate inputs for ecosystem process models.
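
    Of the three techniques, linear spectral unmixing is the most compact to demonstrate. The sketch below solves the per-pixel abundance problem with non-negative least squares; the six-band endmember spectra and the mixed pixel are invented for illustration.

        import numpy as np
        from scipy.optimize import nnls

        # Columns: hypothetical soil, vegetation and shadow endmember spectra
        # over six bands (invented reflectance values).
        E = np.array([
            [0.30, 0.05, 0.02],
            [0.35, 0.08, 0.02],
            [0.40, 0.06, 0.03],
            [0.45, 0.45, 0.03],
            [0.50, 0.50, 0.04],
            [0.55, 0.30, 0.04],
        ])

        # Simulate a mixed pixel with known fractions plus sensor noise.
        true_frac = np.array([0.6, 0.3, 0.1])
        pixel = E @ true_frac + np.random.default_rng(5).normal(0, 0.005, 6)

        frac, resid = nnls(E, pixel)        # non-negative abundance estimates
        frac = frac / frac.sum()            # enforce sum-to-one a posteriori
        print("estimated fractions:", np.round(frac, 3),
              " residual:", round(resid, 4))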

  6. Urban soil exploration through multi-receiver electromagnetic induction and stepped-frequency ground penetrating radar.

    PubMed

    Van De Vijver, Ellen; Van Meirvenne, Marc; Vandenhaute, Laura; Delefortrie, Samuël; De Smedt, Philippe; Saey, Timothy; Seuntjens, Piet

    2015-07-01

    In environmental assessments, the characterization of urban soils relies heavily on invasive investigation, which is often insufficient to capture their full spatial heterogeneity. Non-invasive geophysical techniques enable rapid collection of high-resolution data and provide a cost-effective alternative to investigate soil in a spatially comprehensive way. This paper presents the results of combining multi-receiver electromagnetic induction and stepped-frequency ground penetrating radar to characterize a former garage site contaminated with petroleum hydrocarbons. The sensor combination showed the ability to identify and accurately locate building remains and a high-density soil layer, thus demonstrating the high potential to investigate anthropogenic disturbances of physical nature. In addition, a correspondence was found between an area of lower electrical conductivity and elevated concentrations of petroleum hydrocarbons, suggesting the potential to detect specific chemical disturbances. We conclude that the sensor combination provides valuable information for preliminary assessment of urban soils.

  7. Laser effects based optimal laser parameter identifications for paint removal from metal substrate at 1064 nm: a multi-pulse model

    NASA Astrophysics Data System (ADS)

    Han, Jinghua; Cui, Xudong; Wang, Sha; Feng, Guoying; Deng, Guoliang; Hu, Ruifeng

    2017-10-01

    Paint removal by laser ablation is favoured among cleaning techniques due to its high efficiency. How to predict the optimal laser parameters without producing damage to substrate still remains challenging for accurate paint stripping. On the basis of ablation morphologies and combining experiments with numerical modelling, the underlying mechanisms and the optimal conditions for paint removal by laser ablation are thoroughly investigated. Our studies suggest that laser paint removal is dominated by the laser vaporization effect, thermal stress effect and laser plasma effect, in which thermal stress effect is the most favoured while laser plasma effect should be avoided during removal operations. Based on the thermodynamic equations, we numerically evaluated the spatial distribution of the temperature as well as thermal stress in the paint and substrate under the irradiation of laser pulse at 1064 nm. The obtained curves of the paint thickness vs. threshold fluences can provide the reference standard of laser parameter selection in view of the paint layer with different thickness. A multi-pulse model is proposed and validated under a constant laser fluence to perfectly remove a thicker paint layer. The investigations and the methods proposed here might give hints to the efficient operations on the paint removal and lowering the risk of substrate damages.

  8. A New Approach to Observing Coronal Dynamics: MUSE, the Multi-Slit Solar Explorer

    NASA Astrophysics Data System (ADS)

    Tarbell, T. D.

    2017-12-01

    The Multi-Slit Solar Explorer is a Small Explorer mission recently selected for a Phase A study, which could lead to a launch in 2022. It will provide unprecedented observations of the dynamics of the corona and transition region using both conventional and novel spectral imaging techniques. The physical processes that heat the multi-million degree solar corona, accelerate the solar wind and drive solar activity (CMEs and flares) remain poorly known. A breakthrough in these areas can only come from radically innovative instrumentation and state-of-the-art numerical modeling and will lead to better understanding of space weather origins. MUSE's multi-slit coronal spectroscopy will exploit a 100x improvement in spectral raster cadence to fill a crucial gap in our knowledge of Sun-Earth connections; it will reveal temperatures, velocities and non-thermal processes over a wide temperature range to diagnose physical processes that remain invisible to current or planned instruments. MUSE will contain two instruments: an EUV spectrograph (SG) and an EUV context imager (CI). Both have similar spatial resolution and leverage extensive heritage from previous high-resolution instruments such as IRIS and the Hi-C rocket payload. The MUSE investigation will build on the success of IRIS by combining numerical modeling with a uniquely capable observatory: MUSE will obtain EUV spectra and images with the highest resolution in space (1/3 arcsec) and time (1-4 s) ever achieved for the transition region and corona, along 35 slits and a large context FOV simultaneously. The MUSE consortium includes LMSAL, SAO, Stanford, ARC, HAO, GSFC, MSFC, MSU, ITA Oslo and other institutions.

  9. Multi-Population Invariance with Dichotomous Measures: Combining Multi-Group and MIMIC Methodologies in Evaluating the General Aptitude Test in the Arabic Language

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.; Tsaousis, Ioannis; Al-harbi, Khaleel A.

    2015-01-01

    The purpose of the present study was to extend the model of measurement invariance by simultaneously estimating invariance across multiple populations in the dichotomous instrument case using multi-group confirmatory factor analytic and multiple indicator multiple causes (MIMIC) methodologies. Using the Arabic version of the General Aptitude Test…

  10. A split-step method to include electron–electron collisions via Monte Carlo in multiple rate equation simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel

    2016-10-01

    A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free carriers. The second step is stochastic and models electron–electron collisions using Monte Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
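
    The structure of the split-step scheme, a deterministic rate-equation update followed by a stochastic collision update, can be conveyed with a deliberately simplified toy, shown below. The physics here is schematic (uniform free-carrier heating, random pairwise energy repartition) and all rates are invented; only the operator-splitting pattern matches the method described above.

        import numpy as np

        rng = np.random.default_rng(6)

        n_e = 5000
        energy = np.full(n_e, 0.5)       # eV, initially monoenergetic electrons
        dt, heat_rate = 1e-15, 2e14      # step size and heating rate, invented

        for step in range(300):
            # Deterministic half-step: uniform free-carrier heating by the field
            # (stand-in for the extended multi-rate-equation update).
            energy += heat_rate * dt

            # Stochastic half-step: Monte Carlo electron-electron collisions.
            # Random pairs repartition their summed energy, conserving the total
            # while relaxing the distribution toward equilibrium.
            idx = rng.permutation(n_e)
            a, b = idx[: n_e // 2], idx[n_e // 2 :]
            total = energy[a] + energy[b]
            split = rng.uniform(0, 1, a.size)
            energy[a], energy[b] = split * total, (1 - split) * total

        print(f"mean energy = {energy.mean():.3f} eV, std = {energy.std():.3f} eV")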

  11. Multi-Innovation Gradient Iterative Locally Weighted Learning Identification for A Nonlinear Ship Maneuvering System

    NASA Astrophysics Data System (ADS)

    Bai, Wei-wei; Ren, Jun-sheng; Li, Tie-shan

    2018-06-01

    This paper explores a highly accurate identification modeling approach for ship maneuvering motion with full-scale trial data. A multi-innovation gradient iterative (MIGI) approach is proposed to optimize the distance metric of locally weighted learning (LWL), and a novel non-parametric modeling technique is developed for a nonlinear ship maneuvering system. The proposed method has the following advantages: first, it avoids the unmodeled dynamics and multicollinearity inherent in conventional parametric models; second, it eliminates over-learning or under-learning and obtains the optimal distance metric; and third, the MIGI is not sensitive to the initial parameter values and requires less time during the training phase. These advantages result in a highly accurate mathematical modeling technique that can be conveniently implemented in applications. To verify the characteristics of this mathematical model, two examples are used as model platforms to study ship maneuvering.
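
    The locally weighted learning core is compact to sketch. Below, prediction at a query point is a kernel-weighted local linear least-squares fit; the fixed Gaussian bandwidth h plays the role of the distance metric that the MIGI step would optimise, and the data are synthetic.

        import numpy as np

        rng = np.random.default_rng(7)

        # Toy nonlinear response standing in for ship-maneuvering trial data.
        X = rng.uniform(-3, 3, (200, 1))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

        def lwl_predict(xq, X, y, h=0.5):
            # Gaussian locality weights around the query point.
            d2 = np.sum((X - xq) ** 2, axis=1)
            w = np.exp(-d2 / (2 * h**2))
            # Weighted least-squares fit of a local linear model.
            A = np.hstack([X, np.ones((len(X), 1))])
            W = np.diag(w)
            beta = np.linalg.solve(A.T @ W @ A + 1e-8 * np.eye(2), A.T @ W @ y)
            return np.array([xq[0], 1.0]) @ beta

        for xq in (-2.0, 0.0, 1.5):
            print(f"x = {xq:+.1f}  LWL = {lwl_predict(np.array([xq]), X, y):+.3f}"
                  f"  true = {np.sin(xq):+.3f}")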

  12. Modeling Emergence in Neuroprotective Regulatory Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanfilippo, Antonio P.; Haack, Jereme N.; McDermott, Jason E.

    2013-01-05

    The use of predictive modeling in the analysis of gene expression data can greatly accelerate the pace of scientific discovery in biomedical research by enabling in silico experimentation to test disease triggers and potential drug therapies. Techniques that focus on modeling emergence, such as agent-based modeling and multi-agent simulations, are of particular interest as they support the discovery of pathways that may have never been observed in the past. Thus far, these techniques have been primarily applied at the multi-cellular level, or have focused on signaling and metabolic networks. We present an approach where emergence modeling is extended to regulatory networks and demonstrate its application to the discovery of neuroprotective pathways. An initial evaluation of the approach indicates that emergence modeling provides novel insights for the analysis of regulatory networks that can advance the discovery of acute treatments for stroke and other diseases.

  13. Novel techniques for optical sensor using single core multi-layer structures for electric field detection

    NASA Astrophysics Data System (ADS)

    Ali, Amir R.; Kamel, Mohamed A.

    2017-05-01

    This paper studies the effect of the electrostriction force on a single optical dielectric core coated with multiple layers, based on whispering gallery modes (WGM). The sensing element is a dielectric core made of polymeric material coated with multiple layers having different dielectric and mechanical properties. The external electric field deforms the sensing element, causing shifts in its WGM spectrum. The multi-layer structure enhances the body and pressure forces acting on the core of the sensing element. Due to the gradient in dielectric permittivity, pressure forces are created at the interface between adjacent layers. Also, the gradient in Young's modulus affects the overall stiffness of the optical sensor. In turn, the sensitivity of the optical sensor to the electric field is increased when the material of each layer is selected properly. A mathematical model is used to test the effect of these multi-layer structures. Two layering techniques are considered to increase the sensor's sensitivity: (i) a pressure force enhancement technique, and (ii) a Young's modulus reduction technique. In the first technique, Young's modulus is kept constant for all layers while the dielectric permittivity varies. Here the results are affected by the dielectric permittivity of the outer medium surrounding the cavity: if the medium's dielectric permittivity is greater than that of the cavity, then layers in ascending order of permittivity (with the core having the smallest dielectric permittivity) yield the highest sensitivity to the applied electric field, and vice versa. In the second technique, Young's modulus varies across the layers while the dielectric permittivity is held constant per layer; here, the descending order enhances the sensitivity. Overall, results show that a multi-layer cavity based on these techniques enhances the sensitivity compared to a typical polymeric optical sensor.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Collins, William D; Johansen, Hans; Evans, Katherine J

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  15. Simultaneous measurement and modulation of multiple physiological parameters in the isolated heart using optical techniques

    PubMed Central

    Lee, Peter; Yan, Ping; Ewart, Paul; Kohl, Peter

    2012-01-01

    Whole-heart multi-parametric optical mapping has provided valuable insight into the interplay of electro-physiological parameters, and this technology will continue to thrive as dyes are improved and technical solutions for imaging become simpler and cheaper. Here, we show the advantage of using improved 2nd-generation voltage dyes, provide a simple solution to panoramic multi-parametric mapping, and illustrate the application of flash photolysis of caged compounds for studies in the whole heart. For proof of principle, we used the isolated rat whole-heart model. After characterising the blue and green isosbestic points of di-4-ANBDQBS and di-4-ANBDQPQ, respectively, two voltage and calcium mapping systems are described. With two newly custom-made multi-band optical filters, (1) di-4-ANBDQBS and fluo-4 and (2) di-4-ANBDQPQ and rhod-2 mapping are demonstrated. Furthermore, we demonstrate three-parameter mapping using di-4-ANBDQPQ, rhod-2 and NADH. Using off-the-shelf optics and the di-4-ANBDQPQ and rhod-2 combination, we demonstrate panoramic multi-parametric mapping, affording a 360° spatiotemporal record of activity. Finally, local optical perturbation of calcium dynamics in the whole heart is demonstrated using the caged compound, o-nitrophenyl ethylene glycol tetraacetic acid (NP-EGTA), with an ultraviolet light-emitting diode (LED). Calcium maps (heart loaded with di-4-ANBDQPQ and rhod-2) demonstrate successful NP-EGTA loading and local flash photolysis. All imaging systems were built using only a single camera. In conclusion, using novel 2nd-generation voltage dyes, we developed scalable techniques for multi-parametric optical mapping of the whole heart from one point of view and panoramically. In addition to these parameter imaging approaches, we show that it is possible to use caged compounds and ultraviolet LEDs to locally perturb electrophysiological parameters in the whole heart. PMID:22886365

  16. Multi-scale Modeling in Clinical Oncology: Opportunities and Barriers to Success.

    PubMed

    Yankeelov, Thomas E; An, Gary; Saut, Oliver; Luebeck, E Georg; Popel, Aleksander S; Ribba, Benjamin; Vicini, Paolo; Zhou, Xiaobo; Weis, Jared A; Ye, Kaiming; Genin, Guy M

    2016-09-01

    Hierarchical processes spanning several orders of magnitude of both space and time underlie nearly all cancers. Multi-scale statistical, mathematical, and computational modeling methods are central to designing, implementing and assessing treatment strategies that account for these hierarchies. The basic science underlying these modeling efforts is maturing into a new discipline that is close to influencing and facilitating clinical successes. The purpose of this review is to capture the state-of-the-art as well as the key barriers to success for multi-scale modeling in clinical oncology. We begin with a summary of the long-envisioned promise of multi-scale modeling in clinical oncology, including the synthesis of disparate data types into models that reveal underlying mechanisms and allow for experimental testing of hypotheses. We then evaluate the mathematical techniques employed most widely and present several examples illustrating their application as well as the current gap between pre-clinical and clinical applications. We conclude with a discussion of what we view to be the key challenges and opportunities for multi-scale modeling in clinical oncology.

  17. Multi-scale Modeling in Clinical Oncology: Opportunities and Barriers to Success

    PubMed Central

    Yankeelov, Thomas E.; An, Gary; Saut, Oliver; Luebeck, E. Georg; Popel, Aleksander S.; Ribba, Benjamin; Vicini, Paolo; Zhou, Xiaobo; Weis, Jared A.; Ye, Kaiming; Genin, Guy M.

    2016-01-01

    Hierarchical processes spanning several orders of magnitude of both space and time underlie nearly all cancers. Multi-scale statistical, mathematical, and computational modeling methods are central to designing, implementing and assessing treatment strategies that account for these hierarchies. The basic science underlying these modeling efforts is maturing into a new discipline that is close to influencing and facilitating clinical successes. The purpose of this review is to capture the state-of-the-art as well as the key barriers to success for multi-scale modeling in clinical oncology. We begin with a summary of the long-envisioned promise of multi-scale modeling in clinical oncology, including the synthesis of disparate data types into models that reveal underlying mechanisms and allow for experimental testing of hypotheses. We then evaluate the mathematical techniques employed most widely and present several examples illustrating their application as well as the current gap between pre-clinical and clinical applications. We conclude with a discussion of what we view to be the key challenges and opportunities for multi-scale modeling in clinical oncology. PMID:27384942

  18. Whole abdominal wall segmentation using augmented active shape models (AASM) with multi-atlas label fusion and level set

    NASA Astrophysics Data System (ADS)

    Xu, Zhoubing; Baucom, Rebeccah B.; Abramson, Richard G.; Poulose, Benjamin K.; Landman, Bennett A.

    2016-03-01

    The abdominal wall is an important structure differentiating subcutaneous and visceral compartments and intimately involved with maintaining abdominal structure. Segmentation of the whole abdominal wall on routinely acquired computed tomography (CT) scans remains challenging due to variations and complexities of the wall and surrounding tissues. In this study, we propose a slice-wise augmented active shape model (AASM) approach to robustly segment both the outer and inner surfaces of the abdominal wall. Multi-atlas label fusion (MALF) and level set (LS) techniques are integrated into the traditional ASM framework. The AASM approach globally optimizes the landmark updates in the presence of complicated underlying local anatomical contexts. The proposed approach was validated on 184 axial slices of 20 CT scans. The Hausdorff distance against the manual segmentation was significantly reduced using the proposed approach compared to using ASM, MALF, and LS individually. Our segmentation of the whole abdominal wall enables subcutaneous and visceral fat measurement, with high correlation to the measurement derived from manual segmentation. This study presents the first generic algorithm that combines ASM, MALF, and LS, and demonstrates practical application for automatically capturing visceral and subcutaneous fat volumes.

  19. IFSM fractal image compression with entropy and sparsity constraints: A sequential quadratic programming approach

    NASA Astrophysics Data System (ADS)

    Kunze, Herb; La Torre, Davide; Lin, Jianyi

    2017-01-01

    We consider the inverse problem associated with IFSM: given a target function f, find an IFSM such that its fixed point f̄ is sufficiently close to f in the L^p distance. Forte and Vrscay [1] showed how to reduce this problem to a quadratic optimization model. In this paper, we extend the collage-based method developed by Kunze, La Torre and Vrscay ([2][3][4]) by proposing the minimization of the l1 norm instead of the l0 norm. In fact, optimization problems involving the l0 norm are combinatorial in nature, and hence in general NP-hard. To overcome these difficulties, we introduce the l1 norm and propose a sequential quadratic programming algorithm to solve the corresponding inverse problem. As in Kunze, La Torre and Vrscay [3], in our formulation the minimization of collage error is treated as a multi-criteria problem that includes three different and conflicting criteria: collage error, entropy and sparsity. This multi-criteria program is solved by means of a scalarization technique which reduces the model to a single-criterion program by combining all objective functions with different trade-off weights. The results of some numerical computations are presented.
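
    The scalarization step can be sketched in isolation. In the toy below the three functionals are simple stand-ins, not the paper's IFSM collage operator, and a derivative-free method replaces the paper's sequential quadratic programming since the l1 term is nonsmooth; varying the trade-off weights traces out different Pareto-optimal solutions.

        import numpy as np
        from scipy.optimize import minimize

        # Stand-in criteria: a quadratic "collage error", negative entropy
        # (so that minimising it maximises entropy), and an l1 sparsity term.
        def collage_error(a):
            return np.sum((a - np.array([1.0, 0.5, 0.2])) ** 2)

        def neg_entropy(a):
            p = np.abs(a) / np.sum(np.abs(a))
            return np.sum(p * np.log(p + 1e-12))

        def sparsity_l1(a):
            return np.sum(np.abs(a))

        # Trade-off weights: varying these traces the Pareto set.
        w = (1.0, 0.1, 0.05)
        scalarised = lambda a: (w[0] * collage_error(a)
                                + w[1] * neg_entropy(a)
                                + w[2] * sparsity_l1(a))

        res = minimize(scalarised, x0=np.ones(3), method="Nelder-Mead")
        print("optimal coefficients:", np.round(res.x, 4))
        print("criteria:", round(collage_error(res.x), 4),
              round(neg_entropy(res.x), 4), round(sparsity_l1(res.x), 4))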

  20. Image-based multi-scale simulation and experimental validation of thermal conductivity of lanthanum zirconate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Xingye; Hu, Bin; Wei, Changdong

    Lanthanum zirconate (La2Zr2O7) is a promising candidate material for thermal barrier coating (TBC) applications due to its low thermal conductivity and high-temperature phase stability. In this work, a novel image-based multi-scale simulation framework combining molecular dynamics (MD) and finite element (FE) calculations is proposed to study the thermal conductivity of La2Zr2O7 coatings. Since there are no experimental data on single-crystal La2Zr2O7 thermal conductivity, a reverse non-equilibrium molecular dynamics (reverse NEMD) approach is first employed to compute the temperature-dependent thermal conductivity of single-crystal La2Zr2O7. The single-crystal data are then passed to a FE model which takes into account realistic thermal barrier coating microstructures. The predicted thermal conductivities from the FE model are in good agreement with experimental validations using both the flash laser technique and pulsed thermal imaging-multilayer analysis. The framework proposed in this work provides a powerful tool for future design of advanced coating systems. (C) 2016 Elsevier Ltd. All rights reserved.

  1. Development of an Open Rotor Cycle Model in NPSS Using a Multi-Design Point Approach

    NASA Technical Reports Server (NTRS)

    Hendricks, Eric S.

    2011-01-01

    NASA's Environmentally Responsible Aviation Project and Subsonic Fixed Wing Project are focused on developing concepts and technologies which may enable dramatic reductions to the environmental impact of future generation subsonic aircraft (Refs. 1 and 2). The open rotor concept (also referred to as the Unducted Fan or advanced turboprop) may allow the achievement of this objective by reducing engine emissions and fuel consumption. To evaluate its potential impact, an open rotor cycle modeling capability is needed. This paper presents the initial development of an open rotor cycle model in the Numerical Propulsion System Simulation (NPSS) computer program which can then be used to evaluate the potential benefit of this engine. The development of this open rotor model necessitated addressing two modeling needs within NPSS. First, a method for evaluating the performance of counter-rotating propellers was needed. Therefore, a new counter-rotating propeller NPSS component was created. This component uses propeller performance maps developed from historic counter-rotating propeller experiments to determine the thrust delivered and power required. Second, several methods for modeling a counter-rotating power turbine within NPSS were explored. These techniques used several combinations of turbine components within NPSS to provide the necessary power to the propellers. Ultimately, a single turbine component with a conventional turbine map was selected. Using these modeling enhancements, an open rotor cycle model was developed in NPSS using a multi-design point approach. The multi-design point (MDP) approach improves the engine cycle analysis process by making it easier to properly size the engine to meet a variety of thrust targets throughout the flight envelope. A number of design points are considered including an aerodynamic design point, sea-level static, takeoff and top of climb. The development of this MDP model was also enabled by the selection of a simple power management scheme which schedules propeller blade angles with the freestream Mach number. Finally, sample open rotor performance results and areas for further model improvements are presented.

  2. [Do different interpretative methods used for evaluation of checkerboard synergy test affect the results?].

    PubMed

    Ozseven, Ayşe Gül; Sesli Çetin, Emel; Ozseven, Levent

    2012-07-01

    In recent years, owing to the presence of multi-drug resistant nosocomial bacteria, combination therapies are more frequently applied. Thus there is more need to investigate the in vitro activity of drug combinations against multi-drug resistant bacteria. Checkerboard synergy testing is among the most widely used standard techniques to determine the activity of antibiotic combinations. It is based on microdilution susceptibility testing of antibiotic combinations. Although this test has a standardised procedure, there are many different methods for interpreting the results. In many previous studies carried out with multi-drug resistant bacteria, different rates of synergy have been reported with various antibiotic combinations using the checkerboard technique. These differences might be attributed to the different features of the strains. However, different synergy rates detected by the checkerboard method have also been reported in other studies using the same drug combinations and same types of bacteria. It was thought that these differences in synergy rates might be due to the different methods of interpretation of synergy test results. In recent years, multi-drug resistant Acinetobacter baumannii has been the most commonly encountered nosocomial pathogen, especially in intensive-care units. For this reason, multi-drug resistant A. baumannii has been the subject of a considerable amount of research about antimicrobial combinations. In the present study, the in vitro activities of combinations frequently preferred in A. baumannii infections, namely imipenem plus ampicillin/sulbactam and meropenem plus ampicillin/sulbactam, were tested by the checkerboard synergy method against 34 multi-drug resistant A. baumannii isolates. Minimum inhibitory concentration (MIC) values for imipenem, meropenem and ampicillin/sulbactam were determined by the broth microdilution method. Subsequently, the activity of the two combinations was tested in the dilution range of 4 x MIC to 0.03 x MIC in 96-well checkerboard plates. The results were obtained separately using the four different interpretation methods frequently preferred by researchers. Thus, it was aimed to detect to what extent the rates of synergistic, indifferent and antagonistic interactions were affected by different interpretation methods. The differences between the interpretation methods were tested by chi-square analysis for each combination used. Statistically significant differences were detected between the four different interpretation methods for the determination of synergistic and indifferent interactions (p < 0.0001). The highest rates of synergy were observed with both combinations by the method that used the lowest fractional inhibitory concentration index of all the non-turbid wells along the turbidity/non-turbidity interface. There was no statistically significant difference between the four methods for the detection of antagonism (p > 0.05). In conclusion, although there is a standard procedure for checkerboard synergy testing, it fails to yield standard results owing to different methods of interpreting the results. Thus, there is a need to standardise the interpretation method for checkerboard synergy testing. To determine the most appropriate method of interpretation, further studies investigating the clinical benefits of synergistic combinations, and additionally comparing the consistency of the results with those of other standard combination tests such as time-kill studies, are required.
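
    For concreteness, the index underlying most of these interpretation methods is the fractional inhibitory concentration index (FICI), sketched below with the breakpoints most commonly used in the literature (synergy at FICI <= 0.5, antagonism at FICI > 4). The MIC values are hypothetical, and, as the study stresses, which wells are read off the checkerboard is exactly where the interpretation methods diverge.

        # FICI for one checkerboard well: the sum of each drug's combination MIC
        # divided by its MIC alone.
        def fici(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
            return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

        def interpret(index):
            if index <= 0.5:
                return "synergy"
            if index <= 4.0:   # 0.5 < FICI <= 4 is usually read as indifference
                return "indifference"
            return "antagonism"

        # Hypothetical example: imipenem MIC falls from 32 to 4 mg/L and
        # ampicillin/sulbactam from 64 to 8 mg/L when combined.
        idx = fici(4, 32, 8, 64)
        print(f"FICI = {idx:.3f} -> {interpret(idx)}")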

  3. Fatigue damage monitoring for basalt fiber reinforced polymer composites using acoustic emission technique

    NASA Astrophysics Data System (ADS)

    Wang, Wentao; Li, Hui; Qu, Zhi

    2012-04-01

    Basalt fiber reinforced polymer (BFRP) is a structural material with superior mechanical properties. In this study, unidirectional BFRP laminates with 14 layers were made with the hand lay-up method. The acoustic emission (AE) technique, combined with scanning electron microscopy (SEM), was then employed to monitor the fatigue damage evolution of the BFRP plates in fatigue loading tests. Time-frequency analysis using the wavelet transform is proposed to analyze the received AE signal instead of the peak-frequency method. A comparison between AE signals and SEM images indicates that the multi-frequency peaks picked from the time-frequency curves of AE signals reflect the accumulated fatigue damage evolution and fatigue damage patterns. Furthermore, seven damage patterns, namely matrix cracking, delamination, fiber fracture and their combinations, are identified from the time-frequency curves of the AE signals.

  4. Automatic Prediction of Protein 3D Structures by Probabilistic Multi-template Homology Modeling.

    PubMed

    Meier, Armin; Söding, Johannes

    2015-10-01

    Homology modeling predicts the 3D structure of a query protein based on the sequence alignment with one or more template proteins of known structure. Its great importance for biological research is owed to its speed, simplicity, reliability and wide applicability, covering more than half of the residues in protein sequence space. Although multiple templates have been shown to generally increase model quality over single templates, the information from multiple templates has so far been combined using empirically motivated, heuristic approaches. We present here a rigorous statistical framework for multi-template homology modeling. First, we find that the query proteins' atomic distance restraints can be accurately described by two-component Gaussian mixtures. This insight allowed us to apply the standard laws of probability theory to combine restraints from multiple templates. Second, we derive theoretically optimal weights to correct for the redundancy among related templates. Third, a heuristic template selection strategy is proposed. We improve the average GDT-ha model quality score by 11% over single template modeling and by 6.5% over a conventional multi-template approach on a set of 1000 query proteins. Robustness with respect to wrong constraints is likewise improved. We have integrated our multi-template modeling approach with the popular MODELLER homology modeling software in our free HHpred server http://toolkit.tuebingen.mpg.de/hhpred and also offer open source software for running MODELLER with the new restraints at https://bitbucket.org/soedinglab/hh-suite.
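
    The probabilistic core of the approach, combining per-template distance restraints by the standard laws of probability with weights correcting for template redundancy, reduces in the single-Gaussian case to a precision-weighted mean. The sketch below shows that reduced case with invented numbers; the paper's actual restraints are two-component Gaussian mixtures.

        import numpy as np

        # Hypothetical template-derived restraints on one inter-atomic distance.
        mu = np.array([8.2, 8.6, 9.4])      # template distances (angstroms)
        sigma = np.array([0.5, 0.7, 1.2])   # per-template uncertainties
        w = np.array([0.5, 0.5, 1.0])       # down-weighting two redundant templates

        # Weight-adjusted product of Gaussians = precision-weighted mean.
        precision = w / sigma**2
        mu_comb = np.sum(precision * mu) / np.sum(precision)
        sigma_comb = 1.0 / np.sqrt(np.sum(precision))
        print(f"combined restraint: {mu_comb:.2f} +/- {sigma_comb:.2f} angstroms")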

  5. Multi-object segmentation framework using deformable models for medical imaging analysis.

    PubMed

    Namías, Rafael; D'Amato, Juan Pablo; Del Fresno, Mariana; Vénere, Marcelo; Pirró, Nicola; Bellemare, Marc-Emmanuel

    2016-08-01

    Segmenting structures of interest in medical images is an important step in different tasks such as visualization, quantitative analysis, simulation, and image-guided surgery, among several other clinical applications. Numerous segmentation methods have been developed in the past three decades for extraction of anatomical or functional structures on medical imaging. Deformable models, which include the active contour models or snakes, are among the most popular methods for image segmentation combining several desirable features such as inherent connectivity and smoothness. Even though different approaches have been proposed and significant work has been dedicated to the improvement of such algorithms, there are still challenging research directions as the simultaneous extraction of multiple objects and the integration of individual techniques. This paper presents a novel open-source framework called deformable model array (DMA) for the segmentation of multiple and complex structures of interest in different imaging modalities. While most active contour algorithms can extract one region at a time, DMA allows integrating several deformable models to deal with multiple segmentation scenarios. Moreover, it is possible to consider any existing explicit deformable model formulation and even to incorporate new active contour methods, allowing to select a suitable combination in different conditions. The framework also introduces a control module that coordinates the cooperative evolution of the snakes and is able to solve interaction issues toward the segmentation goal. Thus, DMA can implement complex object and multi-object segmentations in both 2D and 3D using the contextual information derived from the model interaction. These are important features for several medical image analysis tasks in which different but related objects need to be simultaneously extracted. Experimental results on both computed tomography and magnetic resonance imaging show that the proposed framework has a wide range of applications especially in the presence of adjacent structures of interest or under intra-structure inhomogeneities giving excellent quantitative results.

  6. Combinatorial techniques to efficiently investigate and optimize organic thin film processing and properties.

    PubMed

    Wieberger, Florian; Kolb, Tristan; Neuber, Christian; Ober, Christopher K; Schmidt, Hans-Werner

    2013-04-08

    In this article we present several developed and improved combinatorial techniques to optimize processing conditions and material properties of organic thin films. The combinatorial approach allows investigation of multi-variable dependencies and is well suited to studying organic thin films for high-performance applications. In this context we develop and establish the reliable preparation of gradients of material composition, temperature, exposure, and immersion time. Furthermore, we demonstrate the smart application of combinations of composition and processing gradients to create combinatorial libraries. First, a binary combinatorial library is created by applying two gradients perpendicular to each other. A third gradient is carried out in very small areas and arranged matrix-like over the entire binary combinatorial library, resulting in a ternary combinatorial library. Ternary combinatorial libraries allow identification of precise trends for the optimization of multi-variable-dependent processes, which is demonstrated on the lithographic patterning process. Here we verify conclusively the strong interaction, and thus the interdependency, of variables in the preparation and properties of complex organic thin film systems. The established gradient preparation techniques are not limited to lithographic patterning. It is possible to utilize and transfer the reported combinatorial techniques to other multi-variable-dependent processes and to investigate and optimize thin film layers and devices for optical, electro-optical, and electronic applications.

  7. Application of dragonfly algorithm for optimal performance analysis of process parameters in turn-mill operations- A case study

    NASA Astrophysics Data System (ADS)

    Vikram, K. Arun; Ratnam, Ch; Lakshmi, VVK; Kumar, A. Sunny; Ramakanth, RT

    2018-02-01

    Meta-heuristic multi-response optimization methods are widely used to solve multi-objective problems and obtain Pareto-optimal solutions. This work focuses on the optimal multi-response evaluation of process parameters in generating responses like surface roughness (Ra), surface hardness (H) and tool vibration displacement amplitude (Vib) while performing tangential and orthogonal turn-mill processes on an A-axis Computer Numerical Control vertical milling center. Tool speed, feed rate and depth of cut are considered as process parameters; brass material is machined under dry conditions with high-speed steel end-milling cutters using a Taguchi design of experiments (DOE). A meta-heuristic, the dragonfly algorithm, is used to optimize the objectives 'Ra', 'H' and 'Vib' and identify the optimal multi-response process parameter combination. The results obtained from the multi-objective dragonfly algorithm (MODA) are then compared with another multi-response optimization technique, viz. grey relational analysis (GRA).
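
    Since the abstract compares MODA against grey relational analysis, a compact GRA pass over a response table may help fix ideas (a sketch under the usual GRA recipe; the response values, and the choice of smaller-is-better for Ra and Vib and larger-is-better for H, are illustrative assumptions):

        import numpy as np

        # Rows: parameter combinations from the DOE; columns: Ra, H, Vib (toy values).
        R = np.array([[1.8, 210.0, 12.0],
                      [1.2, 195.0, 15.0],
                      [1.5, 230.0, 10.0]])
        smaller_better = np.array([True, False, True])

        norm = np.where(smaller_better,
                        (R.max(axis=0) - R) / (R.max(axis=0) - R.min(axis=0)),
                        (R - R.min(axis=0)) / (R.max(axis=0) - R.min(axis=0)))
        delta = 1.0 - norm                         # deviation from the ideal sequence
        zeta = 0.5                                 # distinguishing coefficient
        grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        grade = grc.mean(axis=1)                   # grey relational grade per run
        print(int(np.argmax(grade)))               # index of the best parameter combination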

  8. Inferring the most probable maps of underground utilities using Bayesian mapping model

    NASA Astrophysics Data System (ADS)

    Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony

    2018-03-01

    Mapping the Underworld (MTU), a major initiative in the UK, is focused on addressing the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time using automated data processing techniques and statutory records. Statutory records, even though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, integrating information from multiple sensors (raw data) with these qualitative maps, and visualizing the result, is challenging and requires robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model by integrating knowledge extracted from raw sensor data with the available statutory records. Statutory records were combined with the hypotheses from the sensors to give an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and various Bayesian classification techniques (segment recognition and the expectation-maximization (EM) algorithm), provided robust performance on various simulated as well as real sites in terms of predicting linear/non-linear segments and constructing refined 2D/3D maps.
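
    At its core, the Bayesian step combines a prior from statutory records with sensor evidence; a one-line Bayes update (all probabilities invented for illustration, not taken from the paper) makes the mechanics concrete:

        # Prior from statutory records that a hypothesized segment connects to a manhole,
        # updated with a sensor detection of known hit/false-alarm rates (values illustrative).
        p_connect = 0.6            # prior from (inaccurate, incomplete) statutory records
        p_det_given_connect = 0.9  # sensor hit rate
        p_det_given_not = 0.2      # sensor false-alarm rate

        posterior = (p_det_given_connect * p_connect) / (
            p_det_given_connect * p_connect + p_det_given_not * (1 - p_connect))
        print(f"P(segment-manhole connection | detection) = {posterior:.3f}")  # ~0.871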

  9. A Geostatistical Data Fusion Technique for Merging Remote Sensing and Ground-Based Observations of Aerosol Optical Thickness

    NASA Technical Reports Server (NTRS)

    Chatterjee, Abhishek; Michalak, Anna M.; Kahn, Ralph A.; Paradise, Susan R.; Braverman, Amy J.; Miller, Charles E.

    2010-01-01

    Particles in the atmosphere reflect incoming sunlight, tending to cool the Earth below. Some particles, such as soot, also absorb sunlight, which tends to warm the ambient atmosphere. Aerosol optical depth (AOD) is a measure of the amount of particulate matter in the atmosphere, and is a key input to computer models that simulate and predict Earth's changing climate. The global AOD products from the Multi-angle Imaging SpectroRadiometer (MISR) and the MODerate resolution Imaging Spectroradiometer (MODIS), both of which fly on the NASA Earth Observing System's Terra satellite, provide complementary views of the particles in the atmosphere. Whereas MODIS offers global coverage about four times as frequently as MISR, MISR's multi-angle data make it possible to separate the surface and atmospheric contributions to the observed top-of-atmosphere radiances, and also to discriminate particle type more effectively. Surface-based AERONET sun photometers retrieve AOD with smaller uncertainties than the satellite instruments, but only at a few fixed locations. So there are clear reasons to combine these data sets in a way that takes advantage of their respective strengths. This paper represents an effort at combining the MISR, MODIS and AERONET AOD products over the continental US, using a common spatial statistical technique called kriging. The technique uses the correlation between the satellite data and the "ground-truth" sun photometer observations to assign uncertainty to the satellite data on a region-by-region basis. The larger the fraction of the sun photometer variance that is duplicated by the satellite data, the higher the confidence assigned to the satellite data in that region. In the Western and Central US, MISR AOD correlations with AERONET are significantly higher than those with MODIS, likely due to bright surfaces in these regions, which pose greater challenges for the single-view MODIS retrievals. In the east, MODIS correlations are higher, due to more frequent sampling of the varying AOD. These results demonstrate how the MISR and MODIS aerosol products are complementary. The underlying technique also provides one method for combining these products in a way that takes advantage of the strengths of each, in the places and times where they are greatest, and in addition yields an estimate of the associated uncertainties in space and time.
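
    The region-by-region confidence rule described here (trust a satellite product in proportion to the fraction of sun-photometer variance it reproduces) can be sketched as an r-squared-weighted average; this is a simplification of the kriging machinery, and the function name and numbers are illustrative:

        import numpy as np

        def fuse_aod(misr, modis, aeronet_misr_r, aeronet_modis_r):
            """Weight each satellite AOD by the fraction of AERONET variance
            it reproduces (r squared), per region."""
            w_misr = aeronet_misr_r ** 2
            w_modis = aeronet_modis_r ** 2
            return (w_misr * misr + w_modis * modis) / (w_misr + w_modis)

        # Western-US-style case: MISR correlates better with AERONET (values illustrative).
        print(fuse_aod(misr=0.12, modis=0.18, aeronet_misr_r=0.9, aeronet_modis_r=0.6))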

  10. Multi-level, multi-scale resource selection functions and resistance surfaces for conservation planning: Pumas as a case study

    PubMed Central

    Vickers, T. Winston; Ernest, Holly B.; Boyce, Walter M.

    2017-01-01

    The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource use patches and resistant kernel corridors. Across levels, we found that pumas avoided urban areas, agricultural areas, and roads, and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S. Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species. PMID:28609466
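
    The combination step can be pictured as blending rescaled resistance rasters from the two hierarchical levels (a minimal sketch; the paper's actual weighting of genetic versus movement surfaces is not specified here, so the equal-weight blend is an assumption):

        import numpy as np

        def combined_resistance(genetic_surface, movement_surface, w_genetic=0.5):
            """Rescale each raster to [0, 1] and blend the two hierarchical levels."""
            def rescale(r):
                return (r - r.min()) / (r.max() - r.min())
            return (w_genetic * rescale(genetic_surface)
                    + (1 - w_genetic) * rescale(movement_surface))

        rng = np.random.default_rng(0)
        combo = combined_resistance(rng.random((100, 100)), rng.random((100, 100)))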

  11. Multi-level, multi-scale resource selection functions and resistance surfaces for conservation planning: Pumas as a case study.

    PubMed

    Zeller, Katherine A; Vickers, T Winston; Ernest, Holly B; Boyce, Walter M

    2017-01-01

    The importance of examining multiple hierarchical levels when modeling resource use for wildlife has been acknowledged for decades. Multi-level resource selection functions have recently been promoted as a method to synthesize resource use across nested organizational levels into a single predictive surface. Analyzing multiple scales of selection within each hierarchical level further strengthens multi-level resource selection functions. We extend this multi-level, multi-scale framework to modeling resistance for wildlife by combining multi-scale resistance surfaces from two data types, genetic and movement. Resistance estimation has typically been conducted with one of these data types, or compared between the two. However, we contend it is not an either/or issue and that resistance may be better modeled using a combination of resistance surfaces that represent processes at different hierarchical levels. Resistance surfaces estimated from genetic data characterize temporally broad-scale dispersal and successful breeding over generations, whereas resistance surfaces estimated from movement data represent fine-scale travel and contextualized movement decisions. We used telemetry and genetic data from a long-term study on pumas (Puma concolor) in a highly developed landscape in southern California to develop a multi-level, multi-scale resource selection function and a multi-level, multi-scale resistance surface. We used these multi-level, multi-scale surfaces to identify resource use patches and resistant kernel corridors. Across levels, we found that pumas avoided urban areas, agricultural areas, and roads, and preferred riparian areas and more rugged terrain. For other landscape features, selection differed among levels, as did the scales of selection for each feature. With these results, we developed a conservation plan for one of the most isolated puma populations in the U.S. Our approach captured a wide spectrum of ecological relationships for a population, resulted in effective conservation planning, and can be readily applied to other wildlife species.

  12. PROTAX-Sound: A probabilistic framework for automated animal sound identification

    PubMed Central

    Somervuo, Panu; Ovaskainen, Otso

    2017-01-01

    Autonomous audio recording is a stimulating new field in bioacoustics, with great promise for conducting cost-effective species surveys. One major current challenge is the lack of reliable classifiers capable of multi-species identification. We present PROTAX-Sound, a statistical framework to perform probabilistic classification of animal sounds. PROTAX-Sound is based on a multinomial regression model, and it can utilize as predictors any kind of sound features or classifications produced by other existing algorithms. PROTAX-Sound combines audio and image processing techniques to scan environmental audio files. It identifies regions of interest (a segment of the audio file that contains a vocalization to be classified), extracts acoustic features from them and compares them with samples in a reference database. The output of PROTAX-Sound is the probabilistic classification of each vocalization, including the possibility that it represents a species not present in the reference database. We demonstrate the performance of PROTAX-Sound by classifying audio from a species-rich case study of tropical birds. The best performing classifier achieved 68% classification accuracy for 200 bird species. PROTAX-Sound improves the classification power of current techniques by combining information from multiple classifiers in a manner that yields calibrated classification probabilities. PMID:28863178
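
    The fusion idea (feed the outputs of existing classifiers into a multinomial regression whose classes include "unknown") can be sketched with scikit-learn; the features and labels below are synthetic stand-ins, and PROTAX-Sound's actual predictors and fitting procedure differ:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        n, n_base = 600, 3                      # vocalizations, base classifiers
        X = rng.random((n, n_base))             # stand-in for base-classifier scores
        y = rng.integers(0, 4, size=n)          # 0..2 = known species, 3 = "unknown"

        model = LogisticRegression(max_iter=1000).fit(X, y)  # multinomial by default
        proba = model.predict_proba(X[:5])      # calibrated probabilities, incl. "unknown"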

  13. PROTAX-Sound: A probabilistic framework for automated animal sound identification.

    PubMed

    de Camargo, Ulisses Moliterno; Somervuo, Panu; Ovaskainen, Otso

    2017-01-01

    Autonomous audio recording is a stimulating new field in bioacoustics, with great promise for conducting cost-effective species surveys. One major current challenge is the lack of reliable classifiers capable of multi-species identification. We present PROTAX-Sound, a statistical framework to perform probabilistic classification of animal sounds. PROTAX-Sound is based on a multinomial regression model, and it can utilize as predictors any kind of sound features or classifications produced by other existing algorithms. PROTAX-Sound combines audio and image processing techniques to scan environmental audio files. It identifies regions of interest (a segment of the audio file that contains a vocalization to be classified), extracts acoustic features from them and compares them with samples in a reference database. The output of PROTAX-Sound is the probabilistic classification of each vocalization, including the possibility that it represents a species not present in the reference database. We demonstrate the performance of PROTAX-Sound by classifying audio from a species-rich case study of tropical birds. The best performing classifier achieved 68% classification accuracy for 200 bird species. PROTAX-Sound improves the classification power of current techniques by combining information from multiple classifiers in a manner that yields calibrated classification probabilities.

  14. [Research on the methods for multi-class kernel CSP-based feature extraction].

    PubMed

    Wang, Jinjia; Zhang, Lingzhi; Hu, Bei

    2012-04-01

    To relax the assumption of strictly linear patterns in common spatial patterns (CSP), we studied the kernel CSP (KCSP). A new multi-class KCSP (MKCSP) approach is proposed in this paper, which combines the kernel approach with the multi-class CSP technique. In this approach, we used kernel spatial patterns for each class against all others, and extracted signal components specific to one condition from EEG data sets of multiple conditions. We then performed classification using a logistic linear classifier. Data set IIIa from Brain-Computer Interface (BCI) Competition III was used in the experiment. The experiment showed that this approach can decompose raw EEG signals into spatial patterns extracted from multiple classes of single-trial EEG, and can obtain good classification results.
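
    The one-vs-rest construction can be sketched with ordinary (linear) CSP via a generalized eigenproblem; the kernelized variant the paper proposes replaces the covariance step with kernel matrices, which is omitted here (toy data; all sizes illustrative):

        import numpy as np
        from scipy.linalg import eigh

        def csp_ovr_filters(epochs, labels, n_filters=2):
            """One-vs-rest CSP: per class, spatial filters maximizing its variance
            against all other classes (linear CSP shown for brevity)."""
            filters = {}
            for c in np.unique(labels):
                cov_c = np.mean([x @ x.T / np.trace(x @ x.T)
                                 for x, l in zip(epochs, labels) if l == c], axis=0)
                cov_rest = np.mean([x @ x.T / np.trace(x @ x.T)
                                    for x, l in zip(epochs, labels) if l != c], axis=0)
                vals, vecs = eigh(cov_c, cov_c + cov_rest)   # generalized eigenproblem
                filters[c] = vecs[:, -n_filters:].T          # components specific to class c
            return filters

        # Toy data: 30 epochs, 8 channels, 200 samples each, 4 classes.
        rng = np.random.default_rng(0)
        epochs = rng.standard_normal((30, 8, 200))
        labels = rng.integers(0, 4, size=30)
        W = csp_ovr_filters(epochs, labels)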

  15. Fabrication of hierarchical hybrid structures using bio-enabled layer-by-layer self-assembly.

    PubMed

    Hnilova, Marketa; Karaca, Banu Taktak; Park, James; Jia, Carol; Wilson, Brandon R; Sarikaya, Mehmet; Tamerler, Candan

    2012-05-01

    Development of versatile and flexible assembly systems for the fabrication of functional hybrid nanomaterials with well-defined hierarchical and spatial organization is of significant importance in practical nanobiotechnology applications. Here we demonstrate a bio-enabled self-assembly technique for the fabrication of multi-layered protein and nanometallic assemblies utilizing a modular gold-binding (AuBP1) fusion tag. To accomplish the bottom-up assembly, we first genetically fused the AuBP1 peptide sequence to the C-terminus of maltose-binding protein (MBP) using two different linkers to produce hetero-functional MBP-AuBP1 constructs. Using various spectroscopic techniques, surface plasmon resonance (SPR) and localized surface plasmon resonance (LSPR), we verified the exceptional binding and self-assembly characteristics of the AuBP1 peptide. The AuBP1 peptide tag can direct the organization of recombinant MBP protein on various gold surfaces through efficient control of the organic-inorganic interface at the molecular level. Furthermore, using a combination of soft lithography, self-assembly techniques and the AuBP1 peptide-tag technology, we produced spatially and hierarchically controlled protein multi-layered assemblies on gold nanoparticle arrays with high molecular packing density and patterning efficiency in simple, reproducible steps. This model system offers layer-by-layer assembly capability based on the specific AuBP1 peptide tag and constitutes a novel biological route for the biofabrication of various protein arrays, plasmon-active nanometallic assemblies and devices with controlled organization, packing density and architecture. Copyright © 2011 Wiley Periodicals, Inc.

  16. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information

    PubMed Central

    Wang, Xiaohong; Wang, Lizhi

    2017-01-01

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system. PMID:28926930
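
    A toy version of the integration idea: a conditional probability table couples two failure mechanisms at the system node, and enumeration gives the system failure probability (a three-node illustration with invented numbers, far smaller than the paper's model):

        # Two failure mechanisms feed a system node: accidental (A) and degradation (D).
        # P(system fails) via enumeration of a three-node BN (all numbers illustrative).
        p_a, p_d = 0.05, 0.20
        p_fail = {  # P(fail | A, D): inter-coupled mechanisms, not independent ORs
            (0, 0): 0.01, (0, 1): 0.60, (1, 0): 0.70, (1, 1): 0.95}

        p_sys = sum(p_fail[(a, d)]
                    * (p_a if a else 1 - p_a)
                    * (p_d if d else 1 - p_d)
                    for a in (0, 1) for d in (0, 1))
        print(f"P(system failure) = {p_sys:.4f}")   # 0.1591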

  17. Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information.

    PubMed

    Wang, Jingbin; Wang, Xiaohong; Wang, Lizhi

    2017-09-15

    Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system.

  18. Improved NSGA model for multi objective operation scheduling and its evaluation

    NASA Astrophysics Data System (ADS)

    Li, Weining; Wang, Fuyu

    2017-09-01

    Reasonable operation scheduling can increase hospital income and improve patient satisfaction. In this paper, a multi-objective operation scheduling method using an improved NSGA algorithm is applied to shorten operation time, reduce operation cost and lower operation risk. A multi-objective optimization model is established for flexible operation scheduling, the Pareto solution set is obtained through MATLAB simulation, and the data are standardized. The optimal scheduling scheme is then selected using a combined entropy-weight and TOPSIS method. The results show that the algorithm is feasible for solving the multi-objective operation scheduling problem and provides a reference for hospital operation scheduling.
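
    The entropy-weight TOPSIS selection step can be sketched directly (a minimal sketch; the candidate schedules and the all-smaller-is-better assumption for time, cost and risk are illustrative):

        import numpy as np

        # Pareto candidates x criteria (time, cost, risk); all smaller-better here.
        P = np.array([[4.2, 1.8, 0.30],
                      [3.6, 2.4, 0.35],
                      [5.0, 1.5, 0.20]])

        norm = P / P.sum(axis=0)                          # column-wise proportions
        k = -1.0 / np.log(len(P))
        entropy = k * (norm * np.log(norm)).sum(axis=0)
        weights = (1 - entropy) / (1 - entropy).sum()     # entropy weights

        V = weights * (P / np.linalg.norm(P, axis=0))     # weighted, vector-normalized
        ideal, anti = V.min(axis=0), V.max(axis=0)        # smaller-better criteria
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)
        best = int(np.argmax(closeness))                  # schedule to adopt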

  19. Combined stamping-forging for non-axisymmetric product

    NASA Astrophysics Data System (ADS)

    Taureza, Muhammad; Danno, Atsushi; Song, Xu; Oh, Jin An

    2016-10-01

    Successive combined stamping-forging (CSF) is proposed to produce multi-thickness non-axisymmetric components. This method involves successive compression to create exclusively outward metal flow. Hitherto, the development of CSF has mostly been done for axisymmetric geometries. Using this technique, a defect-free rectangular case component with a length-to-thickness ratio of 40 is produced at lower forging pressure. This technology has potential for the high-throughput production of parts with multiple thicknesses and high width-to-thickness ratios.

  20. Optics clustered to output unique solutions: A multi-laser facility for combined single molecule and ensemble microscopy

    NASA Astrophysics Data System (ADS)

    Clarke, David T.; Botchway, Stanley W.; Coles, Benjamin C.; Needham, Sarah R.; Roberts, Selene K.; Rolfe, Daniel J.; Tynan, Christopher J.; Ward, Andrew D.; Webb, Stephen E. D.; Yadav, Rahul; Zanetti-Domingues, Laura; Martin-Fernandez, Marisa L.

    2011-09-01

    Optics clustered to output unique solutions (OCTOPUS) is a microscopy platform that combines single molecule and ensemble imaging methodologies. A novel aspect of OCTOPUS is its laser excitation system, which consists of a central core of interlocked continuous wave and pulsed laser sources, launched into optical fibres and linked via laser combiners. Fibres are plugged into wall-mounted patch panels that reach microscopy end-stations in adjacent rooms. This allows multiple tailor-made combinations of laser colours and time characteristics to be shared by different end-stations, minimising the need to duplicate lasers. This setup brings significant benefits in terms of cost effectiveness, ease of operation, and user safety. The modular nature of OCTOPUS also facilitates the addition of new techniques as required, allowing the use of existing lasers in new microscopes while retaining the ability to run the established parts of the facility. To date, the interlinked techniques include multi-photon/multicolour confocal fluorescence lifetime imaging for several modalities of fluorescence resonance energy transfer (FRET) and time-resolved anisotropy; total internal reflection fluorescence; single-molecule imaging of single-pair FRET; single-molecule fluorescence polarisation; particle tracking; and optical tweezers. Here, we use a well-studied system, the epidermal growth factor receptor network, to illustrate how OCTOPUS can aid in the investigation of complex biological phenomena.

  1. Active spectral shaping with polarization-encoded Ti:sapphire amplifiers for sub-20 fs multi-terawatt systems

    NASA Astrophysics Data System (ADS)

    Cao, H.; Kalashnikov, M.; Osvay, K.; Khodakovskiy, N.; Nagymihaly, R. S.; Chvykov, V.

    2018-04-01

    A combination of a polarization-encoded (PE) and a conventional multi-pass amplifier was studied to overcome gain narrowing in the Ti:sapphire active medium. The seed spectrum was pre-shaped and blue-shifted during PE amplification and was then further broadened in a conventional, saturated multi-pass amplifier, resulting in an overall increase of the amplified bandwidth. Using this technique, seed pulses of 44 nm bandwidth were amplified and simultaneously spectrally broadened to 57 nm without the use of passive spectral corrections. The amplified pulse after the PE amplifier was recompressed to 19 fs. Supporting simulations confirm all aspects of the experimental operation.

  2. Perturbational treatment of spin-orbit coupling for generally applicable high-level multi-reference methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mai, Sebastian; Marquetand, Philipp; González, Leticia

    2014-08-21

    An efficient perturbational treatment of spin-orbit coupling within the framework of high-level multi-reference techniques has been implemented in the most recent version of the COLUMBUS quantum chemistry package, extending the existing fully variational two-component (2c) multi-reference configuration interaction singles and doubles (MRCISD) method. The proposed scheme follows related implementations of quasi-degenerate perturbation theory (QDPT) model space techniques. Our model space is built either from uncontracted, large-scale scalar-relativistic MRCISD wavefunctions or based on the scalar-relativistic solutions of the linear-response-theory-based multi-configurational averaged quadratic coupled cluster method (LRT-MRAQCC). The latter approach allows for a consistent, approximately size-consistent and size-extensive treatment of spin-orbit coupling. The approach is described in detail and compared to a number of related techniques. The inherent accuracy of the QDPT approach is validated by comparing cuts of the potential energy surfaces of acrolein and its S, Se, and Te analogues with the corresponding data obtained from matching fully variational spin-orbit MRCISD calculations. The conceptual availability of approximate analytic gradients with respect to geometrical displacements is an attractive feature of the 2c-QDPT-MRCISD and 2c-QDPT-LRT-MRAQCC methods for structure optimization and ab initio molecular dynamics simulations.
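
    For orientation (our notation, not quoted from the paper), QDPT-style model-space schemes build an effective Hamiltonian in the basis of scalar-relativistic states \Psi_I with energies E_I,

        H^{\mathrm{eff}}_{IJ} = \delta_{IJ} E_I + \langle \Psi_I | \hat{H}_{\mathrm{SO}} | \Psi_J \rangle ,

    and diagonalize it to obtain the spin-orbit coupled states and energies.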

  3. A mathematical model approach toward combining information from multiple image projections of the same patient

    NASA Astrophysics Data System (ADS)

    Chawla, Amarpreet S.; Samei, Ehsan; Abbey, Craig

    2007-03-01

    In this study, we used a mathematical observer model to combine information obtained from multiple angular projections of the same breast, to determine the overall detection performance of a multi-projection breast imaging system in the detectability of a simulated mass. 82 subjects participated in the study and 25 angular projections of each breast were acquired. Projections from a simulated 3 mm 3-D lesion were added to the projection images. The lesion was assumed to be embedded in the compressed breast at a distance of 3 cm from the detector. A Hotelling observer with Laguerre-Gauss channels (LG CHO) was applied to each image. Detectability was analyzed in terms of ROC curves and the area under the ROC curve (AUC). The critical question studied is how best to integrate the individual decision variables across multiple (correlated) views. Toward that end, three different methods were investigated: 1) the ROCs from different projections were simply averaged; 2) the test statistics from different projections were averaged; and 3) a Bayesian decision fusion rule was used. Finally, the AUC of the combined ROC was used as a parameter to optimize the acquisition parameters and maximize the performance of the system. It was found that the Bayesian decision fusion technique performs better than the other two techniques and likely offers the best approximation of the diagnostic process. Furthermore, if the total dose level is held constant at 1/25th of the dual-view mammographic screening dose, the highest detectability performance is observed when considering only two projections spread along an angular span of 11.4°.
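
    Method 2 (averaging test statistics across views) and its comparison against a single view can be reproduced on synthetic correlated decision variables in a few lines; the Gaussian shift, correlation, and sample sizes below are invented, and the paper's Bayesian fusion rule is only paraphrased in the comment:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        n, n_views = 500, 2
        # Correlated per-view test statistics; lesion-present cases shifted by 0.8.
        labels = rng.integers(0, 2, size=n)
        shared = rng.standard_normal(n)[:, None]
        t = 0.7 * shared + 0.7 * rng.standard_normal((n, n_views)) + 0.8 * labels[:, None]

        auc_single = roc_auc_score(labels, t[:, 0])       # one projection alone
        auc_avg = roc_auc_score(labels, t.mean(axis=1))   # method 2: averaged statistics
        # A Bayesian fusion rule under a joint-Gaussian model reduces to a weighted sum
        # of the statistics; with equal variances the plain average is already close.
        print(auc_single, auc_avg)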

  4. Action detection by double hierarchical multi-structure space-time statistical matching model

    NASA Astrophysics Data System (ADS)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To address the complex information in videos and low detection efficiency, an action detection model based on neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model via both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. In addition, the multi-scale composite template extends the model's application to multi-view settings. Experimental results for DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  5. Action detection by double hierarchical multi-structure space–time statistical matching model

    NASA Astrophysics Data System (ADS)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-06-01

    To address the complex information in videos and low detection efficiency, an action detection model based on neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) for temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to obtain two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model via both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. In addition, the multi-scale composite template extends the model's application to multi-view settings. Experimental results for DMSM on the complex visual tracker benchmark data sets and the THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  6. Integrating multi-criteria evaluation techniques with geographic information systems for landfill site selection: a case study using ordered weighted average.

    PubMed

    Gorsevski, Pece V; Donevska, Katerina R; Mitrovski, Cvetko D; Frizado, Joseph P

    2012-02-01

    This paper presents a GIS-based multi-criteria decision analysis approach for evaluating the suitability for landfill site selection in the Polog Region, Macedonia. The multi-criteria decision framework considers environmental and economic factors which are standardized by fuzzy membership functions and combined by integration of analytical hierarchy process (AHP) and ordered weighted average (OWA) techniques. The AHP is used for the elicitation of attribute weights while the OWA operator function is used to generate a wide range of decision alternatives for addressing uncertainty associated with interaction between multiple criteria. The usefulness of the approach is illustrated by different OWA scenarios that report landfill suitability on a scale between 0 and 1. The OWA scenarios are intended to quantify the level of risk taking (i.e., optimistic, pessimistic, and neutral) and to facilitate a better understanding of patterns that emerge from decision alternatives involved in the decision making process. Copyright © 2011 Elsevier Ltd. All rights reserved.
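
    The OWA operator that generates the optimistic-to-pessimistic scenarios applies order weights to the rank-sorted criterion values of each cell (a minimal sketch; the criterion values and weight vectors are illustrative):

        import numpy as np

        def owa(scores, order_weights):
            """Ordered weighted average: weights apply to rank-ordered criterion values."""
            ordered = np.sort(scores, axis=-1)[..., ::-1]       # descending per location
            return ordered @ np.asarray(order_weights)

        cell = np.array([0.9, 0.5, 0.2])    # standardized criteria for one raster cell
        print(owa(cell, [1.0, 0.0, 0.0]))   # optimistic ("OR-like"): 0.9
        print(owa(cell, [1/3, 1/3, 1/3]))   # neutral (WLC-like average): ~0.533
        print(owa(cell, [0.0, 0.0, 1.0]))   # pessimistic ("AND-like"): 0.2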

  7. Using Nominal Group Technique to Develop a Consensus Derived Model for Peer Review of Teaching across a Multi-School Faculty

    ERIC Educational Resources Information Center

    Burrows, Tracy; Findlay, Naomi; Killen, Chloe; Dempsey, Shane E.; Hunter, Sharyn; Chiarelli, Pauline; Snodgrass, Suzanne

    2011-01-01

    This paper describes the development of a peer review of teaching model for the Faculty of Health at the University of Newcastle, Australia. The process involved using the nominal group technique to engage Faculty academic staff to consider seven key decision points that informed the development of the peer review of teaching model. Use of the…

  8. Deep frequency modulation interferometry.

    PubMed

    Gerberding, Oliver

    2015-06-01

    Laser interferometry with pm/√Hz precision and multi-fringe dynamic range at low frequencies is a core technology for measuring the motion of various objects (test masses) in space- and ground-based experiments for gravitational wave detection and geodesy. Even though available interferometer schemes are well understood, their construction remains complex, often involving, for example, the need to build quasi-monolithic optical benches with dozens of components. In recent years, techniques have been investigated that aim to reduce this complexity by combining phase modulation techniques with sophisticated digital readout algorithms. This article presents a new scheme that uses strong laser frequency modulations in combination with the deep phase modulation readout algorithm to construct simpler and easily scalable interferometers.

  9. Deep Impact Sequence Planning Using Multi-Mission Adaptable Planning Tools With Integrated Spacecraft Models

    NASA Technical Reports Server (NTRS)

    Wissler, Steven S.; Maldague, Pierre; Rocca, Jennifer; Seybold, Calina

    2006-01-01

    The Deep Impact mission was ambitious and challenging. JPL's well proven, easily adaptable multi-mission sequence planning tools combined with integrated spacecraft subsystem models enabled a small operations team to develop, validate, and execute extremely complex sequence-based activities within very short development times. This paper focuses on the core planning tool used in the mission, APGEN. It shows how the multi-mission design and adaptability of APGEN made it possible to model spacecraft subsystems as well as ground assets throughout the lifecycle of the Deep Impact project, starting with models of initial, high-level mission objectives, and culminating in detailed predictions of spacecraft behavior during mission-critical activities.

  10. Detection of geothermal anomalies in Tengchong, Yunnan Province, China from MODIS multi-temporal night LST imagery

    NASA Astrophysics Data System (ADS)

    Li, H.; Kusky, T. M.; Peng, S.; Zhu, M.

    2012-12-01

    Thermal infrared (TIR) remote sensing is an important technique in the exploration of geothermal resources. In this study, a geothermal survey was conducted in the Tengchong area of Yunnan province, China, using multi-temporal MODIS LST (Land Surface Temperature) data. Monthly night MODIS LST data for the study area from March 2000 to March 2011 were collected and analyzed. The 132-month average LST map was derived and three geothermal anomalies were identified. The findings of this study agree well with the results of relative geothermal gradient measurements. We conclude that TIR remote sensing is a cost-effective technique for detecting geothermal anomalies, and that combining TIR remote sensing with geological analysis and an understanding of the geothermal mechanism is an accurate and efficient approach to geothermal area detection.
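
    The processing chain (average 132 monthly night scenes, then flag pixels that stand out from the regional background) can be sketched as follows; the synthetic scenes, the implanted warm patch, and the 2-sigma threshold are assumptions, not the paper's criterion:

        import numpy as np

        rng = np.random.default_rng(0)
        lst = rng.normal(290, 3, size=(132, 64, 64))   # 132 monthly night LST scenes (K)
        lst[:, 20:24, 30:34] += 6                      # implanted warm patch (illustrative)

        mean_map = lst.mean(axis=0)                    # 132-month average LST map
        z = (mean_map - mean_map.mean()) / mean_map.std()
        anomaly_mask = z > 2.0                         # candidate geothermal anomalies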

  11. Taxonomy of multi-focal nematode image stacks by a CNN based image fusion approach.

    PubMed

    Liu, Min; Wang, Xueping; Zhang, Hongzhong

    2018-03-01

    In the biomedical field, digital multi-focal images are very important for the documentation and communication of specimen data, because the morphological information for a transparent specimen can be captured as a stack of high-quality images. Given biomedical image stacks containing multi-focal images, how to efficiently extract effective features from all layers to classify the image stacks is still an open question. We present a deep convolutional neural network (CNN) image-fusion-based multilinear approach for the taxonomy of multi-focal image stacks. A deep CNN-based image fusion technique is used to combine the relevant information of the multi-focal images within a given stack into a single image, which is more informative and complete than any single image in the stack. In addition, the multi-focal images within a stack are fused along three orthogonal directions, and the multiple features extracted from the fused images along the different directions are combined by canonical correlation analysis (CCA). Because multi-focal image stacks represent the effect of different factors - texture, shape, different instances within the same class and different classes of objects - we embed the deep CNN-based image fusion method within a multilinear framework to propose an image-fusion-based multilinear classifier. Experimental results on nematode multi-focal image stacks demonstrated that the deep CNN image-fusion-based multilinear classifier reaches a higher classification rate (95.7%) than the previous multilinear-based approach (88.7%), even though we only use the texture feature instead of the combination of texture and shape features as in the previous work. The proposed deep CNN image-fusion-based multilinear approach shows great potential for building an automated nematode taxonomy system for nematologists and is effective for classifying multi-focal image stacks. Copyright © 2018 Elsevier B.V. All rights reserved.
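
    The CCA combination step can be sketched with scikit-learn: project feature sets extracted from two of the direction-wise fused images into a shared canonical space and concatenate the variates for a downstream classifier (random stand-in features; all dimensions illustrative):

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        n = 200
        feat_xy = rng.standard_normal((n, 64))   # features from the XY-fused image
        feat_xz = rng.standard_normal((n, 64))   # features from the XZ-fused image

        cca = CCA(n_components=10).fit(feat_xy, feat_xz)
        u, v = cca.transform(feat_xy, feat_xz)
        fused = np.hstack([u, v])                # combined representation for a classifier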

  12. Observations and modeling of cool, evolved stars: from chromospheric to wind regions

    NASA Astrophysics Data System (ADS)

    Rau, Gioia; Carpenter, Ken G.; Nielsen, Krister E.; Kober, Gladys V.; Hron, Josef; Aringer, Bernard; Eriksson, Kjell; Marigo, Paola; Paladini, Claudia

    2018-01-01

    Evolved stars are fundamental contributors to the enrichment of the interstellar medium, via their mass loss, with heavy elements produced in their interiors and with the dust formed in their envelopes. We present the results of the first systematic comparison (Rau et al. 2017, 2015) of multi-technique observations of a sample of C-rich Mira, semi-regular and irregular stars with the predictions of dynamic model atmospheres (Mattsson et al. 2010) and of simpler models based on hydrostatic atmospheres combined with dusty envelopes. The chromosphere, located in the outer atmosphere of these stars, plays a crucial role in driving the mass loss of evolved K-M giant stars (see e.g. Carpenter et al. 2014, 1988). Despite recent efforts, details of the mass-loss scenario remain elusive, as does a complete understanding of the dynamic line formation regions, profiles, and structures. To address these questions, we present observations of flow and turbulent velocities, together with a preliminary derivation of thermodynamic constraints for theoretical models (Rau, Carpenter, et al., in prep.).

  13. A Multi-Resolution Data Structure for Two-Dimensional Morse Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bremer, P-T; Edelsbrunner, H; Hamann, B

    2003-07-30

    The efficient construction of simplified models is a central problem in the field of visualization. We combine topological and geometric methods to construct a multi-resolution data structure for functions over two-dimensional domains. Starting with the Morse-Smale complex we build a hierarchy by progressively canceling critical points in pairs. The data structure supports mesh traversal operations similar to traditional multi-resolution representations.

  14. Assessment of Molecular Acoustic Angiography for Combined Microvascular and Molecular Imaging in Preclinical Tumor Models

    PubMed Central

    Lindsey, Brooks D.; Shelton, Sarah E.; Foster, F. Stuart; Dayton, Paul A.

    2017-01-01

    Purpose: To evaluate a new ultrasound molecular imaging approach in its ability to image a preclinical tumor model and to investigate the capacity to visualize and quantify co-registered microvascular and molecular imaging volumes. Procedures: Molecular imaging using the new technique was compared with a conventional ultrasound molecular imaging technique (multi-pulse imaging) by varying the injected microbubble dose and scanning each animal using both techniques. Each of the 14 animals was randomly assigned one of three doses; bolus dose was varied, and the animals were imaged for three consecutive days so that each animal received every dose. A microvascular scan was also acquired for each animal by administering an infusion of non-targeted microbubbles. These scans were paired with co-registered molecular images (VEGFR2-targeted microbubbles), the vessels were segmented, and the spatial relationships between vessels and VEGFR2 targeting locations were analyzed. In 5 animals, an additional scan was performed in which the animal received a bolus of microbubbles targeted to E- and P-selectin. Vessel tortuosity as a function of distance from VEGF and selectin targeting was analyzed in these animals. Results: Although resulting differences in image intensity due to varying microbubble dose were not significant between the two lowest doses, superharmonic imaging had a significantly higher contrast-to-tissue ratio (CTR) than multi-pulse imaging (mean across all doses: 13.98 dB for molecular acoustic angiography vs. 0.53 dB for multi-pulse imaging; p = 4.9 × 10−10). Analysis of registered microvascular and molecular imaging volumes indicated that vessel tortuosity decreases with increasing distance from both VEGFR2 and selectin targeting sites. Conclusions: Molecular acoustic angiography (superharmonic molecular imaging) exhibited a significant increase in CTR at all doses tested due to superior rejection of tissue artifact signals. Due to the high resolution of acoustic angiography molecular imaging, it is possible to analyze spatial relationships in aligned microvascular and molecular superharmonic imaging volumes. Future studies are required to separate the effects of biomarker expression and blood flow kinetics in comparing local tortuosity differences between different endothelial markers such as VEGFR2, E-selectin and P-selectin. PMID:27519522

  15. Exploiting physical constraints for multi-spectral exo-planet detection

    NASA Astrophysics Data System (ADS)

    Thiébaut, Éric; Devaney, Nicholas; Langlois, Maud; Hanley, Kenneth

    2016-07-01

    We derive a physical model of the on-axis PSF for a high-contrast imaging system such as GPI or SPHERE. This model is based on a multi-spectral Taylor series expansion of the diffraction pattern and predicts that the speckles should be a combination of spatial modes with deterministic chromatic magnification and weighting. We propose to remove most of the residuals by fitting this model on a set of images at multiple wavelengths and times. On simulated data, we demonstrate that our approach achieves very good speckle suppression without additional heuristic parameters. Residual speckles set the most serious limitation on the detection of exo-planets in high-contrast coronagraphic images provided by instruments such as SPHERE at the VLT, GPI at Gemini, or SCExAO at Subaru. A number of post-processing methods have been proposed to remove as much of the residual speckle as possible while preserving the signal from the planets. These methods exploit the fact that the speckles and the planetary signal have different temporal and spectral behaviors. Some methods, like LOCI, are based on angular differential imaging (ADI), spectral differential imaging (SDI), or a combination of ADI and SDI. Instead of working on image differences, we propose to tackle exo-planet detection as an inverse problem in which a model of the residual speckles is fit to the set of multi-spectral images and, possibly, multiple exposures. In order to reduce the number of degrees of freedom, we impose specific constraints on the spatio-spectral distribution of stellar speckles. These constraints are deduced from a multi-spectral Taylor series expansion of the diffraction pattern for an on-axis source, which implies that the speckles are a combination of spatial modes with deterministic chromatic magnification and weighting. Using simulated data, the efficiency of speckle removal by fitting the proposed multi-spectral model is compared to the result of using an approximation based on the singular value decomposition of the rescaled images. We show how the difficult problem of fitting a bilinear model can be solved in practice. The results are promising for further developments, including application to real data and joint planet detection in multi-variate data (multi-spectral and multiple-exposure images).
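
    The structure the abstract describes can be summarized (in our notation, consistent with the description but not quoted from the paper) as

        I(\mathbf{x}, \lambda, t) \approx \sum_k w_k(\lambda)\, a_k(t)\, M_k\!\big((\lambda_0/\lambda)\,\mathbf{x}\big),

    where the M_k are spatial modes, the chromatic weights w_k(\lambda) and the magnification factor \lambda_0/\lambda are deterministic, and the coefficients a_k(t) capture the temporal variation fitted from the data.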

  16. Multi-beam range imager for autonomous operations

    NASA Technical Reports Server (NTRS)

    Marzwell, Neville I.; Lee, H. Sang; Ramaswami, R.

    1993-01-01

    For space operations from Space Station Freedom, a real-time range imager would be very valuable for refuelling, docking and space exploration operations. For these applications, as well as many other robotics and remote-ranging applications, a small, portable, power-efficient, robust range imager capable of ranging over a few tens of km with 10 cm accuracy is needed. The system developed is based on a well-known pseudo-random modulation technique applied to a laser transmitter, combined with a novel range-resolution enhancement technique. In this technique, the transmitter is modulated at a relatively low frequency, of the order of a few MHz, to enhance the signal-to-noise ratio and to ease the stringent systems engineering requirements while achieving very high resolution. The desired resolution cannot easily be attained by other conventional approaches. The engineering model of the system is being designed to obtain better than 10 cm range accuracy simply by implementing a high-precision clock circuit. In this paper we present the principle of the pseudo-random noise (PN) lidar system and the results of the proof-of-concept experiment.
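
    The PN ranging principle (correlate the received echo against shifted replicas of the transmitted maximal-length sequence and read the range off the correlation peak) fits in a short sketch; the chip rate, noise level, and LFSR taps are illustrative, and the paper's sub-chip clock interpolation for 10 cm accuracy is omitted:

        import numpy as np

        def mls(n_bits=10):
            """Maximal-length sequence from an LFSR (taps for x^10 + x^3 + 1)."""
            state = [1] * n_bits
            seq = []
            for _ in range(2 ** n_bits - 1):
                seq.append(state[-1])
                fb = state[-1] ^ state[2]          # taps at stages 10 and 3
                state = [fb] + state[:-1]
            return 2 * np.array(seq) - 1           # map {0, 1} -> {-1, +1}

        c = 3.0e8                                  # speed of light (m/s)
        chip_rate = 2.0e6                          # MHz-class modulation, as in the text
        pn = mls()
        true_delay = 137                           # chips
        rng = np.random.default_rng(0)
        echo = np.roll(pn, true_delay) + 0.5 * rng.standard_normal(pn.size)

        corr = np.array([np.dot(echo, np.roll(pn, k)) for k in range(pn.size)])
        est_delay = int(np.argmax(corr))           # chip-level estimate; a high-precision
        range_m = c * est_delay / chip_rate / 2.0  # clock refines it below the chip width
        print(est_delay, f"{range_m / 1e3:.2f} km one-way")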

  17. Modelling a suitable location for Urban Solid Waste Management using AHP method and GIS -A geospatial approach and MCDM Model

    NASA Astrophysics Data System (ADS)

    Iqbal, M.; Islam, A.; Hossain, A.; Mustaque, S.

    2016-12-01

    Multi-criteria decision making (MCDM) is an advanced analytical method for deriving an appropriate result or decision in a multiple-criteria environment. Geospatial approaches (e.g., remote sensing and GIS) are likewise advanced technical means of collecting, processing and analyzing various kinds of spatial data. GIS and remote sensing, together with MCDM techniques, provide an effective platform for solving complex decision-making problems such as site selection for solid waste management in urban policy. The most popular MCDM technique is the weighted linear combination (WLC) method, while the analytical hierarchy process (AHP) is another popular and consistent technique used worldwide for dependable decision making. The main objective of this study is to develop an AHP model as an MCDM technique, combined with a geographic information system (GIS), to select a suitable landfill site for urban solid waste management. To protect the urban environment in a sustainable way, municipal waste needs an appropriate landfill site chosen with regard to the environmental, geological, social and technical aspects of the region. An MCDM model was generated from five related criteria classes (covering environmental, geological, social and technical factors) using the AHP method, and the result set was input to GIS to produce the final model of suitable locations for urban solid waste management. The final suitable locations amount to 12.2% of the total study area, corresponding to 22.89 km2. The study area is Keraniganj sub-district of Dhaka district in Bangladesh, a densely populated area that currently has an unmanaged waste management system and, in particular, lacks suitable landfill sites for waste dumping.
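
    The AHP weighting step at the heart of such models reduces to the principal eigenvector of a pairwise comparison matrix plus a consistency check (the comparison values below are invented; the random index 1.12 for a 5x5 matrix is from Saaty's table):

        import numpy as np

        # Pairwise comparison matrix for five criteria (Saaty 1-9 scale; illustrative).
        A = np.array([
            [1,   3,   5,   3,   7],
            [1/3, 1,   3,   1,   5],
            [1/5, 1/3, 1,   1/3, 3],
            [1/3, 1,   3,   1,   5],
            [1/7, 1/5, 1/3, 1/5, 1]])

        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        weights = np.abs(vecs[:, k].real)
        weights /= weights.sum()                  # criterion weights

        n = A.shape[0]
        ci = (vals.real[k] - n) / (n - 1)         # consistency index
        cr = ci / 1.12                            # random index for n = 5
        print(weights.round(3), f"CR = {cr:.3f}") # CR < 0.1 => acceptably consistent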

  18. Methodology for quantitative rapid multi-tracer PET tumor characterizations.

    PubMed

    Kadrmas, Dan J; Hoffman, John M

    2013-10-04

    Positron emission tomography (PET) can image a wide variety of functional and physiological parameters in vivo using different radiotracers. As more is learned about the molecular basis for disease and treatment, the potential value of molecular imaging for characterizing and monitoring disease status has increased. Characterizing multiple aspects of tumor physiology by imaging multiple PET tracers in a single patient provides additional complementary information, and there is a significant body of literature supporting the potential value of multi-tracer PET imaging in oncology. However, imaging multiple PET tracers in a single patient presents a number of challenges. A number of techniques are under development for rapidly imaging multiple PET tracers in a single scan, where signal-recovery processing algorithms are employed to recover various imaging endpoints for each tracer. Dynamic imaging is generally used with tracer injections staggered in time, and kinetic constraints are utilized to estimate each tracers' contribution to the multi-tracer imaging signal. This article summarizes past and ongoing work in multi-tracer PET tumor imaging, and then organizes and describes the main algorithmic approaches for achieving multi-tracer PET signal-recovery. While significant advances have been made, the complexity of the approach necessitates protocol design, optimization, and testing for each particular tracer combination and application. Rapid multi-tracer PET techniques have great potential for both research and clinical cancer imaging applications, and continued research in this area is warranted.
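
    The signal-recovery idea (staggered injections plus kinetic constraints that let a fit apportion the mixed signal between tracers) can be sketched with a toy two-tracer time-activity model; the kinetic form, injection times, and parameters are invented for illustration:

        import numpy as np
        from scipy.optimize import curve_fit

        def tracer_tac(t, k, amp, t0):
            """Toy single-tracer time-activity curve: uptake after injection at t0."""
            dt = np.clip(t - t0, 0, None)
            return amp * (1 - np.exp(-k * dt)) * np.exp(-0.01 * dt)

        def dual_signal(t, k1, a1, k2, a2):
            # Injections staggered in time (t0 = 0 and 20 min); kinetics constrain the split.
            return tracer_tac(t, k1, a1, 0.0) + tracer_tac(t, k2, a2, 20.0)

        t = np.linspace(0, 60, 121)                       # minutes
        rng = np.random.default_rng(0)
        y = dual_signal(t, 0.4, 1.0, 0.6, 0.7) + 0.02 * rng.standard_normal(t.size)

        popt, _ = curve_fit(dual_signal, t, y, p0=[0.3, 0.8, 0.5, 0.5])
        tracer1 = tracer_tac(t, popt[0], popt[1], 0.0)    # recovered single-tracer signals
        tracer2 = tracer_tac(t, popt[2], popt[3], 20.0)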

  19. Methodology for Quantitative Rapid Multi-Tracer PET Tumor Characterizations

    PubMed Central

    Kadrmas, Dan J.; Hoffman, John M.

    2013-01-01

    Positron emission tomography (PET) can image a wide variety of functional and physiological parameters in vivo using different radiotracers. As more is learned about the molecular basis for disease and treatment, the potential value of molecular imaging for characterizing and monitoring disease status has increased. Characterizing multiple aspects of tumor physiology by imaging multiple PET tracers in a single patient provides additional complementary information, and there is a significant body of literature supporting the potential value of multi-tracer PET imaging in oncology. However, imaging multiple PET tracers in a single patient presents a number of challenges. A number of techniques are under development for rapidly imaging multiple PET tracers in a single scan, where signal-recovery processing algorithms are employed to recover various imaging endpoints for each tracer. Dynamic imaging is generally used with tracer injections staggered in time, and kinetic constraints are utilized to estimate each tracers' contribution to the multi-tracer imaging signal. This article summarizes past and ongoing work in multi-tracer PET tumor imaging, and then organizes and describes the main algorithmic approaches for achieving multi-tracer PET signal-recovery. While significant advances have been made, the complexity of the approach necessitates protocol design, optimization, and testing for each particular tracer combination and application. Rapid multi-tracer PET techniques have great potential for both research and clinical cancer imaging applications, and continued research in this area is warranted. PMID:24312149

  20. Artificial intelligence techniques for scheduling Space Shuttle missions

    NASA Technical Reports Server (NTRS)

    Henke, Andrea L.; Stottler, Richard H.

    1994-01-01

    Planning and scheduling of NASA Space Shuttle missions is a complex, labor-intensive process requiring the expertise of experienced mission planners. We have developed a planning and scheduling system using combinations of artificial intelligence knowledge representations and planning techniques to capture mission planning knowledge and automate the multi-mission planning process. Our integrated object oriented and rule-based approach reduces planning time by orders of magnitude and provides planners with the flexibility to easily modify planning knowledge and constraints without requiring programming expertise.

  1. MMX-I: A data-processing software for multi-modal X-ray imaging and tomography

    NASA Astrophysics Data System (ADS)

    Bergamaschi, A.; Medjoubi, K.; Messaoudi, C.; Marco, S.; Somogyi, A.

    2017-06-01

    Scanning hard X-ray imaging allows the simultaneous acquisition of multimodal information, including X-ray fluorescence, absorption, phase and dark-field contrasts, providing structural and chemical details of the samples. Combining these scanning techniques with the infrastructure developed for fast data acquisition at Synchrotron Soleil permits multimodal imaging and tomography to be performed during routine user experiments at the Nanoscopium beamline. A main challenge of such imaging techniques is the online processing and analysis of the generated very large (several hundred gigabytes) multimodal data-sets. This is especially important for the wide user community foreseen at the user-oriented Nanoscopium beamline (e.g. from the fields of biology, life sciences, geology, and geobiology), which has no experience in such data handling. MMX-I is a new multi-platform open-source freeware for the processing and reconstruction of scanning multi-technique X-ray imaging and tomographic datasets. The MMX-I project aims to offer both expert users and beginners the possibility of processing and analysing raw data, either on-site or off-site. We have therefore developed a multi-platform (Mac, Windows and Linux 64-bit) data-processing tool that is easy to install, comprehensive, intuitive, extendable and user-friendly. MMX-I is now routinely used by the Nanoscopium user community and has demonstrated its performance in treating big data.

  2. The Monitoring, Detection, Isolation and Assessment of Information Warfare Attacks Through Multi-Level, Multi-Scale System Modeling and Model Based Technology

    DTIC Science & Technology

    2004-01-01

    The abstract survives only in fragments. It discusses intrusion detection based on monitoring system calls - the login identity versus the identity under which a system call is executed, and the parameters of the system call execution, such as file names including the full path - and compares detection systems in a flattened table: COAST-EIMDT (distributed on target hosts; anomaly detection) and EMERALD (distributed on target hosts and security servers; signature recognition and anomaly detection). The fragment breaks off while noting that one system uses a centralized architecture and employs an anomaly detection technique for intrusion detection, and that "The EMERALD project [80] proposes a ..."

  3. Brain plasticity and functionality explored by nonlinear optical microscopy

    NASA Astrophysics Data System (ADS)

    Sacconi, L.; Allegra, L.; Buffelli, M.; Cesare, P.; D'Angelo, E.; Gandolfi, D.; Grasselli, G.; Lotti, J.; Mapelli, J.; Strata, P.; Pavone, F. S.

    2010-02-01

    In combination with fluorescent protein (XFP) expression techniques, two-photon microscopy has become an indispensable tool for imaging cortical plasticity in living mice. In parallel to its application in imaging, multi-photon absorption has also been used as a tool for the dissection of single neurites with submicrometric precision without causing any visible collateral damage to the surrounding neuronal structures. In this work, multi-photon nanosurgery is applied to dissect single climbing fibers expressing GFP in the cerebellar cortex. The morphological consequences are then characterized with time-lapse three-dimensional two-photon imaging over a period of minutes to days after the procedure. Preliminary investigations show that the laser-induced fiber dissection elicits a regenerative process in the fiber itself over a period of days. These results demonstrate the potential of this innovative technique for investigating regenerative processes in the adult brain. In parallel with imaging and manipulation techniques, non-linear microscopy offers the opportunity to optically record electrical activity in intact neuronal networks. In this work, we combined the advantages of second-harmonic generation (SHG) with a random access (RA) excitation scheme to realize a new microscope (RASH) capable of optically recording fast membrane potential events occurring in a wide field of view. The RASH microscope, in combination with bulk loading of tissue with FM4-64 dye, was used to simultaneously record electrical activity from clusters of Purkinje cells in acute cerebellar slices. Complex spikes, both synchronous and asynchronous, were optically recorded simultaneously across a given population of neurons. Spontaneous electrical activity was also monitored simultaneously in pairs of neurons, where action potentials were recorded without averaging across trials. These results show the strength of this technique in describing the temporal dynamics of neuronal assemblies, opening promising perspectives for understanding the computations of neuronal networks.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coletti, Chiara, E-mail: chiara.coletti@studenti.u

    During the firing of bricks, mineralogical and textural transformations produce an artificial aggregate characterised by significant porosity. Porosity, particularly pore-size distribution and the interconnection model, is an important parameter for evaluating and predicting the durability of bricks. The pore system is in fact the main element correlating building materials with their environment (especially in cases of aggressive weathering, e.g., salt crystallisation and freeze-thaw cycles) and determines their durability. Four industrial bricks with differing compositions and firing temperatures were analysed with “direct” and “indirect” techniques, traditional methods (mercury intrusion porosimetry, hydric tests, nitrogen adsorption) and new analytical approaches based on digital image reconstruction of 2D and 3D models (back-scattered electrons and computerised X-ray micro-tomography, respectively). The comparison of results from different analytical methods in the “overlapping ranges” of porosity, and the careful reconstruction of a cumulative curve, made it possible to overcome their specific limitations and achieve a better knowledge of the pore system of bricks. - Highlights: • Pore-size distribution and structure of the pore system in four commercial bricks • A multi-analytical approach combining “direct” and “indirect” techniques • Traditional methods vs. new approaches based on 2D/3D digital image reconstruction • The use of “overlapping ranges” to overcome the limitations of various techniques.

  5. Proper orthogonal decomposition-based spectral higher-order stochastic estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.

    A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectral Analysis. The latter is based on known methods for characterizing nonlinear systems by way of Volterra series, in which both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This essentially reduces to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this restriction, as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
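
    When only first-order terms are retained, the estimation collapses to a cross-spectral division. Below is a minimal sketch of that linear special case only (not the full HOSE machinery); the synthetic signals and the use of scipy routines are illustrative assumptions, not taken from the paper.

        import numpy as np
        from scipy.signal import csd, welch

        rng = np.random.default_rng(0)
        fs = 1000.0                                  # sampling rate, Hz
        t = np.arange(0, 10, 1 / fs)
        x = rng.standard_normal(t.size)              # "input", e.g. a POD coefficient
        y = np.convolve(x, np.ones(8) / 8, mode="same") \
            + 0.1 * rng.standard_normal(t.size)      # "output" with a linear response

        # First-order spectral kernel H1(f) = S_xy(f) / S_xx(f); this is
        # spectral Linear Stochastic Estimation. Higher-order kernels would
        # be built analogously from bispectra.
        f, Sxy = csd(x, y, fs=fs, nperseg=1024)
        _, Sxx = welch(x, fs=fs, nperseg=1024)
        H1 = Sxy / Sxx
        print(np.abs(H1[:5]))                        # low-frequency gain ~1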

  6. An adaptive Hidden Markov Model for activity recognition based on a wearable multi-sensor device

    USDA-ARS?s Scientific Manuscript database

    Human activity recognition is important in the study of personal health, wellness and lifestyle. In order to acquire human activity information from the personal space, many wearable multi-sensor devices have been developed. In this paper, a novel technique for automatic activity recognition based o...

  7. Multi-object segmentation using coupled nonparametric shape and relative pose priors

    NASA Astrophysics Data System (ADS)

    Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep

    2009-02-01

    We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.

  8. NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION.

    PubMed

    Liu, F; Meerschaert, M M; McGough, R J; Zhuang, P; Liu, Q

    2013-03-01

    In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time-fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of the theoretical analysis. These methods and techniques can also be extended to other kinds of multi-term fractional time-space models with a fractional Laplacian.
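
    As a concrete instance of the kind of discretization involved, the following is a generic L1 scheme for a Caputo derivative of order between 0 and 1, a standard building block for such solvers; it is a minimal sketch and not necessarily the scheme proposed in the paper.

        import numpy as np
        from math import gamma

        def caputo_l1(u, dt, alpha):
            """L1 approximation of the Caputo derivative of order alpha in (0,1)
            of the sampled function u at the final time t_n = n*dt."""
            n = len(u) - 1
            k = np.arange(n)
            b = (k + 1) ** (1 - alpha) - k ** (1 - alpha)   # L1 weights
            du = np.diff(u)[::-1]                           # u_{n-j} - u_{n-j-1}
            return (dt ** -alpha / gamma(2 - alpha)) * np.dot(b, du)

        # Check against the exact Caputo derivative of u(t) = t^2:
        # D^alpha t^2 = 2 t^(2-alpha) / Gamma(3-alpha)
        alpha, T, N = 0.5, 1.0, 2000
        t = np.linspace(0.0, T, N + 1)
        approx = caputo_l1(t ** 2, T / N, alpha)
        exact = 2 * T ** (2 - alpha) / gamma(3 - alpha)
        print(approx, exact)   # should agree to a few decimal places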

  9. NUMERICAL METHODS FOR SOLVING THE MULTI-TERM TIME-FRACTIONAL WAVE-DIFFUSION EQUATION

    PubMed Central

    Liu, F.; Meerschaert, M.M.; McGough, R.J.; Zhuang, P.; Liu, Q.

    2013-01-01

    In this paper, the multi-term time-fractional wave-diffusion equations are considered. The multi-term time-fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals [0,1], [1,2), [0,2), [0,3), [2,3) and [2,4), respectively. Some computationally effective numerical methods are proposed for simulating the multi-term time-fractional wave-diffusion equations. The numerical results demonstrate the effectiveness of the theoretical analysis. These methods and techniques can also be extended to other kinds of multi-term fractional time-space models with a fractional Laplacian. PMID:23772179

  10. Application of advanced cytometric and molecular technologies to minimal residual disease monitoring

    NASA Astrophysics Data System (ADS)

    Leary, James F.; He, Feng; Reece, Lisa M.

    2000-04-01

    Minimal residual disease monitoring presents a number of theoretical and practical challenges. Recently it has been possible to meet some of these challenges by combining a number of new advanced biotechnologies. Monitoring the number of residual tumor cells requires complex cocktails of molecular probes that collectively provide sensitivities of detection on the order of one residual tumor cell per million total cells. Ultra-high-speed, multi-parameter flow cytometry is capable of analyzing cells at rates in excess of 100,000 cells/sec. Residual tumor selection marker cocktails can be optimized by use of receiver operating characteristic analysis. New data-minimizing techniques, when combined with multivariate statistical or neural network classifications of tumor cells, can more accurately predict residual tumor cell frequencies. The combination of these techniques can, under at least some circumstances, detect frequencies of tumor cells as low as one cell in a million with an accuracy of over 98 percent correct classification. Detection of mutations in tumor suppressor genes requires isolation of these rare tumor cells and single-cell DNA sequencing. Rare residual tumor cells can be isolated at the single-cell level by high-resolution single-cell sorting. Molecular characterization of tumor suppressor gene mutations can be accomplished using a combination of single-cell polymerase chain reaction amplification of specific gene sequences followed by TA cloning techniques and DNA sequencing. Mutations as small as a single base pair in a tumor suppressor gene of a single sorted tumor cell have been detected using these methods. Using new amplification procedures and DNA microarrays, it should be possible to extend the capabilities shown in this paper to screening for multiple DNA mutations in tumor suppressor and other genes on small numbers of sorted metastatic tumor cells.

  11. Projected changes in precipitation intensity and frequency over complex topography: a multi-model perspective

    NASA Astrophysics Data System (ADS)

    Fischer, Andreas; Keller, Denise; Liniger, Mark; Rajczak, Jan; Schär, Christoph; Appenzeller, Christof

    2014-05-01

    Fundamental changes in the hydrological cycle are expected in a future warmer climate. This is of particular relevance for the Alpine region, which is a source and reservoir of several major rivers in Europe and prone to extreme events such as flooding. For this region, climate change assessments based on the ENSEMBLES regional climate models (RCMs) project a significant decrease in summer mean precipitation under the A1B emission scenario by the mid-to-end of this century, while winter mean precipitation is expected to rise slightly. From an impact perspective, however, projected changes in seasonal means are often insufficient to adequately address the multifaceted challenges of climate change adaptation. In this study, we revisit the full matrix of the ENSEMBLES RCM projections regarding changes in frequency and intensity, precipitation type (convective versus stratiform) and temporal structure (wet/dry spells and transition probabilities) over Switzerland and its surroundings. As proxies for rain-type changes, we rely on the model-parameterized convective and large-scale precipitation components. Part of the analysis involves a Bayesian multi-model combination algorithm to infer changes from the multi-model ensemble. The analysis suggests a summer drying that evolves in an altitude-specific manner: over low-land regions it is associated with wet-day frequency decreases of convective and large-scale precipitation, while over elevated regions it is primarily associated with a decline in large-scale precipitation only. As a consequence, almost all the models project an increase in the convective fraction at elevated Alpine altitudes. The decrease in the number of wet days during summer is accompanied by decreases (increases) in multi-day wet (dry) spells. This shift in multi-day episodes also lowers the likelihood of short dry spell occurrence in all of the models. For spring and autumn the combined multi-model projections indicate higher mean precipitation intensity north of the Alps, and a similar tendency is expected for the winter season over most of Switzerland.
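
    The Bayesian combination algorithm of the study is not reproduced here; as a minimal sketch of the underlying idea, a precision-weighted multi-model estimate of a projected change can be formed as follows (all numbers are illustrative, not ENSEMBLES results).

        import numpy as np

        # Illustrative projected summer wet-day frequency changes (%) from an
        # ensemble of RCMs, with a spread estimate for each model.
        change = np.array([-14.0, -9.0, -20.0, -11.0, -16.0])
        sigma  = np.array([  4.0,  6.0,   5.0,   3.0,   7.0])

        w = 1.0 / sigma**2                  # precision weights
        w /= w.sum()
        combined_mean = np.sum(w * change)  # weighted multi-model estimate
        combined_sd = np.sqrt(1.0 / np.sum(1.0 / sigma**2))
        print(f"combined change: {combined_mean:.1f}% +/- {combined_sd:.1f}%")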

  12. A Numerical Investigation of the Startup Transient in a Wave Rotor

    NASA Technical Reports Server (NTRS)

    Paxson, Daniel E.

    1996-01-01

    The startup process is investigated for a hypothetical four-port wave rotor, envisioned as a topping cycle for a small gas turbine engine. The investigation is conducted numerically using a multi-passage, one-dimensional CFD-based wave rotor simulation in combination with lumped-volume models for the combustor, exhaust valve plenum, and rotor center cavity components. The simulation is described and several startup transients are presented which illustrate potential difficulties for the specific cycle design investigated. In particular, it is observed that, prior to combustor light-off or just after, the flow through the combustor loop is reversed from the design direction. The phenomenon is demonstrated and several possible modification techniques that avoid or overcome the problem are discussed.

  13. The BRITE spectropolarimetric survey

    NASA Astrophysics Data System (ADS)

    Neiner, C.; Lèbre, A.

    2014-12-01

    The BRITE constellation of nanosatellites observes very bright stars to perform seismology. We have set up a spectropolarimetric survey of all BRITE targets, i.e. all ~600 stars brighter than V=4, with Narval at TBL, ESPaDOnS at CFHT and HarpsPol at ESO. We plan to reach a magnetic detection threshold of B_{pol} = 50 G for stars hotter than F5 and B_{pol} = 5 G for cooler stars. This program will allow us to combine magnetic information with the BRITE seismic information and obtain a better interpretation and modelling of the internal structure of the stars. It will also lead to new discoveries of very bright magnetic stars, which are unique targets for follow-up and multi-technique studies.

  14. Superresolution fluorescence imaging by pump-probe setup using repetitive stimulated transition process

    NASA Astrophysics Data System (ADS)

    Dake, Fumihiro; Fukutake, Naoki; Hayashi, Seri; Taki, Yusuke

    2018-02-01

    We propose superresolution nonlinear fluorescence microscopy with a pump-probe setup that utilizes repetitive stimulated absorption and stimulated emission caused by two-color laser beams. The resulting nonlinear fluorescence that undergoes such a repetitive stimulated transition is detectable as a signal via the lock-in technique. As the nonlinear fluorescence signal is produced by the multi-ply combination of the incident beams, the optical resolution can be improved. A theoretical model of the nonlinear optical process is provided using rate equations, which offers a phenomenological interpretation of the nonlinear fluorescence and an estimation of the signal properties. The proposed method is demonstrated to offer scalable optical resolution. The theoretical resolution and a bead image are also estimated to validate the experimental results.

  15. Validation of a Predictive Scoring System for Deep Sternal Wound Infection after Bilateral Internal Thoracic Artery Grafting in a Cohort of French Patients.

    PubMed

    Perrotti, Andrea; Gatti, Giuseppe; Dorigo, Enrica; Sinagra, Gianfranco; Pappalardo, Aniello; Chocron, Sidney

    The Gatti score is a weighted scoring system based on risk factors for deep sternal wound infection (DSWI) that was created in an Italian center to predict DSWI risk after bilateral internal thoracic artery (BITA) grafting. No external evaluation based on validation samples derived from other surgical centers has been performed; the aim of this study is to perform this validation. During 2015, BITA grafts were used as skeletonized conduits in all 255 consecutive patients with multi-vessel coronary disease who underwent isolated coronary bypass surgery at the Department of Thoracic and Cardio-Vascular Surgery, University Hospital Jean Minjoz, Besançon, France. Baseline characteristics, operative data, and immediate outcomes of every patient were collected prospectively. A DSWI risk score was assigned to each patient pre-operatively. The discrimination power of both the pre-operative and the combined models of the Gatti score was assessed by calculating the area under the receiver operating characteristic curve. Fourteen (5.5%) patients had DSWI. Major differences in both the baseline characteristics of patients and the surgical techniques were found between this series and the original series from which the Gatti score was derived. The area under the receiver operating characteristic curve was 0.78 (95% confidence interval: 0.64-0.92) for the pre-operative model and 0.84 (95% confidence interval: 0.69-0.98) for the combined model. The Gatti score has proven to be effective even in a cohort of French patients despite major differences from the original Italian series. Multi-center validation studies must be performed before introducing the score into clinical practice.
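
    The area under the receiver operating characteristic curve used here can be computed directly from its rank-statistic (Mann-Whitney) interpretation: the probability that a randomly chosen case receives a higher score than a randomly chosen non-case. A minimal sketch with made-up scores and outcomes (not the study's data):

        import numpy as np

        def roc_auc(scores, labels):
            """AUC via the rank-sum formulation: the probability that a
            randomly chosen case scores higher than a non-case."""
            scores = np.asarray(scores, dtype=float)
            labels = np.asarray(labels, dtype=bool)
            pos, neg = scores[labels], scores[~labels]
            # Count wins and half-count ties over all case/non-case pairs.
            wins = (pos[:, None] > neg[None, :]).sum()
            ties = (pos[:, None] == neg[None, :]).sum()
            return (wins + 0.5 * ties) / (len(pos) * len(neg))

        # Illustrative pre-operative risk scores; 1 = developed DSWI
        scores = [2, 5, 1, 7, 3, 6, 8, 2, 4, 9]
        labels = [0, 0, 0, 1, 0, 1, 1, 0, 0, 0]
        print(roc_auc(scores, labels))   # ~0.857 for this toy data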

  16. Multi-spectral imaging of oxygen saturation

    NASA Astrophysics Data System (ADS)

    Savelieva, Tatiana A.; Stratonnikov, Aleksander A.; Loschenov, Victor B.

    2008-06-01

    The system of multi-spectral imaging of oxygen saturation is an instrument that can record both spectral and spatial information about a sample. In this project, the spectral imaging technique is used for monitoring the oxygen saturation of hemoglobin in human tissues. This system can be used for monitoring the spatial distribution of oxygen saturation in photodynamic therapy, surgery or sports medicine. Diffuse reflectance spectroscopy in the visible range is an effective and extensively used technique for the non-invasive study and characterization of various biological tissues. In this article, a short review of modeling techniques currently in use for diffuse reflection from semi-infinite turbid media is presented. A simple and practical model for use with a real-time imaging system is proposed. This model is based on a linear approximation of the dependence of the diffuse reflectance coefficient on the relation between the absorption and reduced scattering coefficients. This dependence was obtained with Monte Carlo simulation of photon propagation in turbid media. Spectra of the oxygenated and deoxygenated forms of hemoglobin differ mostly in the 520-600 nm region and have several characteristic points there, so four band-pass filters were used for multi-spectral imaging. After the reflectance has been measured, the data are used to fit the concentrations of oxygenated and free hemoglobin, and hence the hemoglobin oxygen saturation.
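
    The final fitting step can be posed as a small linear least-squares problem per pixel: the absorbance derived from the four band images is modeled as a linear mixture of the two hemoglobin spectra. A minimal sketch, with hypothetical extinction coefficients rather than calibrated instrument values:

        import numpy as np

        # Hypothetical molar extinction of HbO2 and Hb at the four filter bands
        # (rows: bands in 520-600 nm; columns: [HbO2, Hb]); illustrative only.
        E = np.array([[ 9.7,  7.3],
                      [11.0,  9.2],
                      [ 5.8, 10.5],
                      [ 3.2,  4.4]])

        def oxygen_saturation(absorbance):
            """Fit [HbO2, Hb] concentrations for one pixel and return SO2."""
            c, *_ = np.linalg.lstsq(E, absorbance, rcond=None)
            c = np.clip(c, 0.0, None)        # concentrations are non-negative
            return c[0] / (c[0] + c[1] + 1e-12)

        a = E @ np.array([0.8, 0.2])         # synthetic pixel with SO2 = 0.8
        print(oxygen_saturation(a))          # ~0.8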

  17. Multi-View Multi-Instance Learning Based on Joint Sparse Representation and Multi-View Dictionary Learning.

    PubMed

    Li, Bing; Yuan, Chunfeng; Xiong, Weihua; Hu, Weiming; Peng, Houwen; Ding, Xinmiao; Maybank, Steve

    2017-12-01

    In multi-instance learning (MIL), the relations among instances in a bag convey important contextual information in many applications. Previous studies on MIL either ignore such relations or simply model them with a fixed graph structure, so that the overall performance inevitably degrades in complex environments. To address this problem, this paper proposes a novel multi-view multi-instance learning algorithm (M²IL) that combines multiple context structures in a bag into a unified framework. The novel aspects are: (i) we propose a sparse ε-graph model that can generate different graphs with different parameters to represent various context relations in a bag, (ii) we propose a multi-view joint sparse representation that integrates these graphs into a unified framework for bag classification, and (iii) we propose a multi-view dictionary learning algorithm to obtain a multi-view graph dictionary that considers cues from all views simultaneously to improve the discrimination of the M²IL. Experiments and analyses in many practical applications prove the effectiveness of the M²IL.

  18. Identifying natural compounds as multi-target-directed ligands against Alzheimer's disease: an in silico approach.

    PubMed

    Ambure, Pravin; Bhat, Jyotsna; Puzyn, Tomasz; Roy, Kunal

    2018-04-23

    Alzheimer's disease (AD) is a multi-factorial disease, which can be simply outlined as an irreversible and progressive neurodegenerative disorder with an unclear root cause. It is a major cause of dementia in elderly people. In the present study, utilizing the structural and biological activity information of ligands for five important and much-studied targets believed to be effective against AD (i.e. cyclin-dependent kinase 5, β-secretase, monoamine oxidase B, glycogen synthase kinase 3β, and acetylcholinesterase), we have developed five classification models using the linear discriminant analysis (LDA) technique. Considering the importance of data curation, we have given particular attention to chemical and biological data curation, which is a difficult task especially for big datasets. To ease the curation process we have designed Konstanz Information Miner (KNIME) workflows, which are made available at http://teqip.jdvu.ac.in/QSAR_Tools/ . The developed models were appropriately validated based on predictions for experiment-derived data from test sets, as well as true external compounds including known multi-target compounds. The domain of applicability of each classification model was checked based on a confidence estimation approach. These validated models were then employed to screen natural compounds collected from the InterBioScreen natural database ( https://www.ibscreen.com/natural-compounds ). The natural compounds categorized as 'actives' in at least two of the five classification models were considered multi-target leads; these compounds were further screened using a drug-likeness filter and the molecular docking technique, and then thoroughly analyzed using molecular dynamics studies. Finally, the most promising multi-target natural compounds against AD are suggested.

  19. Computational identification of potential multi-drug combinations for reduction of microglial inflammation in Alzheimer disease

    PubMed Central

    Anastasio, Thomas J.

    2015-01-01

    Like other neurodegenerative diseases, Alzheimer Disease (AD) has a prominent inflammatory component mediated by brain microglia. Reducing microglial inflammation could potentially halt, or at least slow, the neurodegenerative process. A major challenge in the development of treatments targeting brain inflammation is the sheer complexity of the molecular mechanisms that determine whether microglia become inflammatory or take on a more neuroprotective phenotype. The process is highly multifactorial, raising the possibility that a multi-target/multi-drug strategy could be more effective than conventional monotherapy. This study takes a computational approach to finding combinations of approved drugs that are potentially more effective than single drugs in reducing microglial inflammation in AD. This novel approach exploits the distinct advantages of two different computer programming languages, one imperative and the other declarative. Existing programs written in both languages implement the same model of microglial behavior, and the input/output relationships of both programs agree with each other and with data on microglia over an extensive test battery. Here the imperative program is used to screen the model efficiently for the most efficacious combinations of 10 drugs, while the declarative program is used to analyze in detail the mechanisms of action of the most efficacious combinations. Of the 1024 possible drug combinations, the simulated screen identifies only 7 that are able to move simulated microglia at least 50% of the way from a neurotoxic to a neuroprotective phenotype. Subsequent analysis shows that of the 7 most efficacious combinations, 2 stand out as superior in both strength and reliability. The model offers many experimentally testable and therapeutically relevant predictions concerning effective drug combinations and their mechanisms of action. PMID:26097457
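
    The exhaustive screen over the 2^10 = 1024 on/off combinations of 10 drugs is straightforward to express; the scoring function below is a toy stand-in for the published microglia model, and the drug names and effect sizes are invented for illustration:

        from itertools import product

        DRUGS = ["d%d" % i for i in range(10)]   # placeholder drug names

        def phenotype_shift(combo):
            """Stand-in for the simulated microglia model: returns the fraction
            (0..1) of the way the combination moves cells from a neurotoxic to
            a neuroprotective phenotype. Purely illustrative scoring."""
            effect = {"d0": 0.30, "d3": 0.25, "d7": 0.20}
            s = sum(effect.get(d, 0.02) for d in combo)
            return min(s, 1.0)

        hits = []
        for mask in product([0, 1], repeat=len(DRUGS)):
            combo = [d for d, on in zip(DRUGS, mask) if on]
            if phenotype_shift(combo) >= 0.5:
                hits.append(combo)

        print(len(hits), "of 1024 combinations pass the 50% criterion")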

  20. A combined approach for the attribution of handwriting: the case of Antonio Stradivari's manuscripts

    NASA Astrophysics Data System (ADS)

    Fichera, Giusj Valentina; Dondi, Piercarlo; Licchelli, Maurizio; Lombardi, Luca; Ridolfi, Stefano; Malagodi, Marco

    2016-11-01

    Numerous artefacts from Antonio Stradivari's workshop are currently preserved in the "Museo del Violino" (Museum of the Violin) in Cremona, Italy. A large number of them are paper models containing instructions and technical notes by the great violin maker. Since his death, this collection has had several owners, and new annotations were added to the original ones, sometimes imitating Stradivari's handwriting, causing problems of authenticity. The attribution of these relics is a complex task and, until now, only a small part of them has been examined by palaeographers. This paper introduces a multi-analytical approach able to facilitate the study of handwriting in manuscripts through the combined use of image processing and X-ray fluorescence spectroscopy: the former provides a fast and automatic screening of documents; the latter allows analysis of the chemical composition of the inks. For our tests, 17 paper relics, dated between 1684 and 1729, were chosen. Palaeographic analysis was used as reference. The results obtained showed the validity of the combined approach proposed herein: the two techniques proved to be complementary and useful for clarifying the attribution of different pieces of handwriting.

  1. Swarm Intelligence for Urban Dynamics Modelling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ghnemat, Rawan; Bertelle, Cyrille; Duchamp, Gerard H. E.

    2009-04-16

    In this paper, we propose swarm intelligence algorithms to deal with the emergence of dynamical and spatial organization. The goal is to model and simulate the development of spatial centers using multiple criteria. We combine a decentralized approach based on emergent clustering with spatial constraints or attractions. We propose an extension of the ant nest building algorithm with multi-center and adaptive processes. Typically, this model is suitable for analysing and simulating urban dynamics like gentrification or the dynamics of cultural facilities in urban areas.

  2. Swarm Intelligence for Urban Dynamics Modelling

    NASA Astrophysics Data System (ADS)

    Ghnemat, Rawan; Bertelle, Cyrille; Duchamp, Gérard H. E.

    2009-04-01

    In this paper, we propose swarm intelligence algorithms to deal with the emergence of dynamical and spatial organization. The goal is to model and simulate the development of spatial centers using multiple criteria. We combine a decentralized approach based on emergent clustering with spatial constraints or attractions. We propose an extension of the ant nest building algorithm with multi-center and adaptive processes. Typically, this model is suitable for analysing and simulating urban dynamics like gentrification or the dynamics of cultural facilities in urban areas.

  3. Microphysics in the Multi-Scale Modeling Systems with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Chern, J.; Lamg, S.; Matsui, T.; Shen, B.; Zeng, X.; Shi, R.

    2011-01-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km² in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-surface interactive processes are applied throughout this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, the microphysics developments of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study heavy precipitation processes will be presented.

  4. Multi-element array signal reconstruction with adaptive least-squares algorithms

    NASA Technical Reports Server (NTRS)

    Kumar, R.

    1992-01-01

    Two versions of the adaptive least-squares algorithm are presented for combining signals from multiple feeds placed in the focal plane of a mechanical antenna whose reflector surface is distorted due to various deformations. Coherent signal combining techniques based on the adaptive least-squares algorithm are examined for nearly optimally and adaptively combining the outputs of the feeds. The performance of the two versions is evaluated by simulations. It is demonstrated for the example considered that both of the adaptive least-squares algorithms are capable of offsetting most of the loss in the antenna gain incurred due to reflector surface deformations.
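
    As a minimal sketch of adaptive signal combining (an LMS-style gradient update rather than the exact least-squares recursion of the paper; all signals below are synthetic assumptions), the feed outputs are weighted, summed, and the weights adapted against a reference:

        import numpy as np

        rng = np.random.default_rng(1)
        n_feeds, n_samples, mu = 4, 5000, 0.01

        s = np.sign(rng.standard_normal(n_samples))      # reference symbols
        gains = np.array([0.9, 0.5, -0.3, 0.2])          # unknown feed responses
        x = gains[:, None] * s + 0.1 * rng.standard_normal((n_feeds, n_samples))

        w = np.zeros(n_feeds)                            # combining weights
        for k in range(n_samples):
            y = w @ x[:, k]                              # combined output
            e = s[k] - y                                 # error vs. reference
            w += mu * e * x[:, k]                        # LMS weight update

        print("learned weights:", np.round(w, 3))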

  5. Performance Evaluation of Localization Accuracy for a Log-Normal Shadow Fading Wireless Sensor Network under Physical Barrier Attacks

    PubMed Central

    Abdulqader Hussein, Ahmed; Rahman, Tharek A.; Leow, Chee Yen

    2015-01-01

    Localization is an essential aspect of wireless sensor networks and the focus of much ongoing research. One of the severe conditions that needs to be taken into consideration is localizing a mobile target through a dispersed sensor network in the presence of physical barrier attacks. These attacks confuse the localization process and cause location estimation errors. Range-based methods, like those using the received signal strength indication (RSSI), are strongly affected by this kind of attack. This paper proposes a solution based on a combination of multi-frequency multi-power localization (C-MFMPL) and step-function multi-frequency multi-power localization (SF-MFMPL), including the fingerprint matching technique and lateration, to provide robust and accurate localization. In addition, this paper proposes a grid coloring algorithm to detect the signal hole map of the network, which refers to the attack-prone regions, in order to carry out corrective actions. The simulation results show the enhancement and robustness of RSS localization performance in the face of log-normal shadow fading effects and the presence of physical barrier attacks, through detecting, filtering and eliminating the effect of these attacks. PMID:26690159
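
    The range-based ingredient of such schemes can be sketched as follows: a log-normal shadowing model converts RSSI to distance, and linearized least-squares lateration then recovers the position. The path-loss constants and geometry below are illustrative assumptions, not the paper's parameters.

        import numpy as np

        P0, n = -40.0, 2.7            # dBm at 1 m, path-loss exponent (assumed)

        def rssi_to_distance(rssi_dbm):
            return 10 ** ((P0 - rssi_dbm) / (10 * n))

        def laterate(anchors, d):
            """Linearized least-squares lateration against the last anchor."""
            A = 2 * (anchors[:-1] - anchors[-1])
            b = (d[-1]**2 - d[:-1]**2
                 + np.sum(anchors[:-1]**2, axis=1) - np.sum(anchors[-1]**2))
            pos, *_ = np.linalg.lstsq(A, b, rcond=None)
            return pos

        anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
        true = np.array([12.0, 8.0])
        d = np.linalg.norm(anchors - true, axis=1)   # ideal (noise-free) ranges
        print(laterate(anchors, d))                  # ~[12, 8]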

  6. A multi-strategy approach to informative gene identification from gene expression data.

    PubMed

    Liu, Ziying; Phan, Sieu; Famili, Fazel; Pan, Youlian; Lenferink, Anne E G; Cantin, Christiane; Collins, Catherine; O'Connor-McCourt, Maureen D

    2010-02-01

    An unsupervised multi-strategy approach has been developed to identify informative genes from high-throughput genomic data. Several statistical methods have been used in the field to identify differentially expressed genes. Since different methods generate different lists of genes, it is very challenging to determine the most reliable gene list and the appropriate method. This paper presents a multi-strategy method in which a combination of several data analysis techniques is applied to a given dataset and a confidence measure is established to select genes from the gene lists generated by these techniques to form the core of our final selection. The remaining genes, which form the peripheral region, are subject to exclusion from or inclusion in the final selection. This paper demonstrates the methodology through its application to an in-house cancer genomics dataset and a public dataset. The results indicate that our method provides a more reliable list of genes, which are validated using biological knowledge, biological experiments, and literature search. We further evaluated our multi-strategy method by consolidating two pairs of independent datasets, each pair for the same disease but generated by different labs using different platforms. The results showed that our method produced far better results.
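
    The core/peripheral selection can be sketched as a vote count over the gene lists returned by the individual techniques: genes called by most methods form the high-confidence core, and the rest of the union forms the peripheral region. A minimal sketch with placeholder method names and genes (not the paper's data or exact confidence measure):

        from collections import Counter

        # Placeholder gene lists from three selection techniques
        lists = {
            "t_test":   {"BRCA1", "TP53", "EGFR", "MYC"},
            "sam":      {"TP53", "EGFR", "KRAS", "MYC"},
            "fold_chg": {"EGFR", "MYC", "PTEN"},
        }

        votes = Counter(g for genes in lists.values() for g in genes)
        core = {g for g, v in votes.items() if v >= 2}   # called by >= 2 of 3
        peripheral = set(votes) - core                   # subject to review

        print("core:", sorted(core))
        print("peripheral:", sorted(peripheral))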

  7. Quantitative multi-modal NDT data analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heideklang, René; Shokouhi, Parisa

    2014-02-18

    A single NDT technique is often not adequate to provide assessments about the integrity of test objects with the required coverage or accuracy. In such situations, one often resorts to multi-modal testing, where complementary and overlapping information from different NDT techniques is combined for a more comprehensive evaluation. Multi-modal material and defect characterization is an interesting task which involves several diverse fields of research, including signal and image processing, statistics and data mining. The fusion of different modalities may improve quantitative nondestructive evaluation by effectively exploiting the augmented set of multi-sensor information about the material. It is the redundant information in particular whose quantification is expected to lead to increased reliability and robustness of the inspection results. There are different systematic approaches to data fusion, each with its specific advantages and drawbacks. In our contribution, these will be discussed in the context of nondestructive materials testing. A practical study adopting a high-level scheme for the fusion of Eddy Current, GMR and Thermography measurements on a reference metallic specimen with built-in grooves will be presented. Results show that fusion is able to outperform the best single sensor regarding detection specificity, while retaining the same level of sensitivity.

  8. Performance Evaluation of Localization Accuracy for a Log-Normal Shadow Fading Wireless Sensor Network under Physical Barrier Attacks.

    PubMed

    Hussein, Ahmed Abdulqader; Rahman, Tharek A; Leow, Chee Yen

    2015-12-04

    Localization is an essential aspect of wireless sensor networks and the focus of much ongoing research. One of the severe conditions that needs to be taken into consideration is localizing a mobile target through a dispersed sensor network in the presence of physical barrier attacks. These attacks confuse the localization process and cause location estimation errors. Range-based methods, like those using the received signal strength indication (RSSI), are strongly affected by this kind of attack. This paper proposes a solution based on a combination of multi-frequency multi-power localization (C-MFMPL) and step-function multi-frequency multi-power localization (SF-MFMPL), including the fingerprint matching technique and lateration, to provide robust and accurate localization. In addition, this paper proposes a grid coloring algorithm to detect the signal hole map of the network, which refers to the attack-prone regions, in order to carry out corrective actions. The simulation results show the enhancement and robustness of RSS localization performance in the face of log-normal shadow fading effects and the presence of physical barrier attacks, through detecting, filtering and eliminating the effect of these attacks.

  9. Saliency detection using mutual consistency-guided spatial cues combination

    NASA Astrophysics Data System (ADS)

    Wang, Xin; Ning, Chen; Xu, Lizhong

    2015-09-01

    Saliency detection has received extensive interest due to its remarkable contribution to a wide range of computer vision and pattern recognition applications. However, most existing computational models are designed for detecting saliency in visible images or videos. When applied to infrared images, they may suffer from limitations in saliency detection accuracy and robustness. In this paper, we propose a novel algorithm to detect visual saliency in infrared images by a mutual consistency-guided combination of spatial cues. First, based on the luminance contrast and contour characteristics of infrared images, two effective saliency maps, the luminance contrast saliency map and the contour saliency map, are constructed. Afterwards, an adaptive combination scheme guided by mutual consistency is exploited to integrate these two maps into the spatial saliency map. This idea is motivated by the observation that different maps are actually related to each other, and the fusion scheme should present a logically consistent view of them. Finally, an enhancement technique is adopted to incorporate spatial saliency maps at various scales into a unified multi-scale framework to improve the reliability of the final saliency map. Comprehensive evaluations on real-life infrared images and comparisons with many state-of-the-art saliency models demonstrate the effectiveness and superiority of the proposed method for saliency detection in infrared images.

  10. Benchmarking GPU and CPU codes for Heisenberg spin glass over-relaxation

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Parisi, G.; Parisi, L.

    2011-06-01

    We present a set of possible implementations for Graphics Processing Units (GPU) of the Over-relaxation technique applied to the 3D Heisenberg spin glass model. The results show that a carefully tuned code can achieve more than 100 GFlops/s of sustained performance and update a single spin in about 0.6 nanoseconds. A multi-hit technique that exploits the GPU shared memory further reduces this time. Such results are compared with those obtained by means of a highly-tuned vector-parallel code on latest generation multi-core CPUs.
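
    The over-relaxation move itself is a tiny, energy-conserving update: each spin is reflected about its local molecular field, which is why the sweep parallelizes so well over non-interacting sublattices. A minimal single-spin sketch (plain CPU Python, illustrative only, not the tuned GPU kernel):

        import numpy as np

        def overrelax(spin, field):
            """Reflect a unit spin about its local field h: s' = 2(s.h)h/|h|^2 - s.
            The energy -s.h is unchanged, so the move is microcanonical."""
            h2 = np.dot(field, field)
            return 2.0 * np.dot(spin, field) / h2 * field - spin

        s = np.array([0.0, 0.0, 1.0])
        h = np.array([1.0, 1.0, 0.0])
        s_new = overrelax(s, h)
        print(np.dot(s, h), np.dot(s_new, h))   # equal energies before/after
        print(np.linalg.norm(s_new))            # still a unit vector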

  11. Method of Obtaining High Resolution Intrinsic Wire Boom Damping Parameters for Multi-Body Dynamics Simulations

    NASA Technical Reports Server (NTRS)

    Yew, Alvin G.; Chai, Dean J.; Olney, David J.

    2010-01-01

    The goal of NASA's Magnetospheric MultiScale (MMS) mission is to understand magnetic reconnection with sensor measurements from four spinning satellites flown in a tight tetrahedron formation. Four of the six electric field sensors on each satellite are located at the ends of 60-meter wire booms, to increase measurement sensitivity in the spin plane and to minimize motion coupling from perturbations on the main body. A propulsion burn, however, might induce boom oscillations that could impact science measurements if the oscillations do not damp to values on the order of 0.1 degree in a timely fashion. Large damping time constants could also adversely affect flight dynamics and attitude control performance. In this paper, we discuss the implementation of a high-resolution method for calculating the boom's intrinsic damping, which was used in multi-body dynamics simulations. In summary, experimental data were obtained with a scaled-down boom, which was suspended as a pendulum in vacuum. Optical techniques were designed to accurately measure the natural decay of angular position, and data processing algorithms subsequently yielded excellent spatial and temporal resolution. This method was repeated in a parametric study for various lengths, root tensions and vacuum levels. For all data sets, regression models for damping were applied, including nonlinear viscous, frequency-independent hysteretic, Coulomb, and combinations of them. Our data analysis and dynamics models have shown that the intrinsic damping of the baseline boom is insufficient, thereby forcing project management to explore mitigation strategies.
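
    One standard way to extract a damping parameter from such free-decay data, consistent with the viscous term in the regression suite described above, is the logarithmic decrement of successive oscillation peaks. A minimal sketch with synthetic peak amplitudes (not the experiment's data):

        import numpy as np

        def damping_ratio(peaks):
            """Estimate the viscous damping ratio from successive peak
            amplitudes of a free decay via the logarithmic decrement."""
            peaks = np.asarray(peaks, dtype=float)
            delta = np.mean(np.log(peaks[:-1] / peaks[1:]))   # log decrement
            return delta / np.sqrt(4 * np.pi**2 + delta**2)

        # Synthetic decay with zeta = 0.02: amplitude drops by exp(-2*pi*zeta)
        # per cycle (small-damping approximation).
        zeta = 0.02
        peaks = np.exp(-2 * np.pi * zeta * np.arange(10))
        print(damping_ratio(peaks))   # ~0.02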

  12. Performance and Stability Analyses of Rocket Thrust Chambers with Oxygen/Methane Propellants

    NASA Technical Reports Server (NTRS)

    Hulka, James R.; Jones, Gregg W.

    2010-01-01

    Liquid rocket engines using oxygen and methane propellants are being considered by the National Aeronautics and Space Administration (NASA) for future in-space vehicles. This propellant combination has not been previously used in flight-qualified engine systems developed by NASA, so limited test data and analysis results are available at this stage of early development. As part of activities for the Propulsion and Cryogenic Advanced Development (PCAD) project funded under the Exploration Technology Development Program, the NASA Marshall Space Flight Center (MSFC) has been evaluating capability to model combustion performance and stability for oxygen and methane propellants. This activity has been proceeding for about two years and this paper is a summary of results to date. Hot-fire test results of oxygen/methane propellant rocket engine combustion devices for the modeling investigations have come from several sources, including multi-element injector tests with gaseous methane from the 1980s, single element tests with gaseous methane funded through the Constellation University Institutes Program, and multi-element injector tests with both gaseous and liquid methane conducted at the NASA MSFC funded by PCAD. For the latter, test results of both impinging and coaxial element injectors using liquid oxygen and liquid methane propellants are included. Configurations were modeled with two one-dimensional liquid rocket combustion analysis codes, the Rocket Combustor Interactive Design and Analysis code and the Coaxial Injector Combustion Model. Special effort was focused on how these codes can be used to model combustion and performance with oxygen/methane propellants a priori, and what anchoring or calibrating features need to be applied, improved or developed in the future. Low frequency combustion instability (chug) occurred, with frequencies ranging from 150 to 250 Hz, with several multi-element injectors with liquid/liquid propellants, and was modeled using techniques from Wenzel and Szuch. High-frequency combustion instability also occurred at the first tangential (1T) mode, at about 4500 Hz, with several multi-element injectors with liquid/liquid propellants. Analyses of the transverse mode instability were conducted by evaluating injector resonances and empirical methods developed by Hewitt.

  13. Semiconductor Laser Multi-Spectral Sensing and Imaging

    PubMed Central

    Le, Han Q.; Wang, Yang

    2010-01-01

    Multi-spectral laser imaging is a technique that can offer a combination of the laser capability of accurate spectral sensing with the desirable features of passive multispectral imaging. The technique can be used for detection, discrimination, and identification of objects by their spectral signature. This article describes and reviews the development and evaluation of semiconductor multi-spectral laser imaging systems. Although the method is certainly not specific to any laser technology, the use of semiconductor lasers is significant with respect to practicality and affordability. More relevantly, semiconductor lasers have their own characteristics; they offer excellent wavelength diversity but usually with modest power. Thus, system design and engineering issues are analyzed for approaches and trade-offs that can make the best use of semiconductor laser capabilities in multispectral imaging. A few systems were developed and the technique was tested and evaluated on a variety of natural and man-made objects. It was shown capable of high spectral resolution imaging which, unlike non-imaging point sensing, allows detecting and discriminating objects of interest even without a priori spectroscopic knowledge of the targets. Examples include material and chemical discrimination. It was also shown capable of dealing with the complexity of interpreting diffuse scattered spectral images and produced results that could otherwise be ambiguous with conventional imaging. Examples with glucose and spectral imaging of drug pills were discussed. Lastly, the technique was shown with conventional laser spectroscopy such as wavelength modulation spectroscopy to image a gas (CO). These results suggest the versatility and power of multi-spectral laser imaging, which can be practical with the use of semiconductor lasers. PMID:22315555

  14. Semiconductor laser multi-spectral sensing and imaging.

    PubMed

    Le, Han Q; Wang, Yang

    2010-01-01

    Multi-spectral laser imaging is a technique that can offer a combination of the laser capability of accurate spectral sensing with the desirable features of passive multispectral imaging. The technique can be used for detection, discrimination, and identification of objects by their spectral signature. This article describes and reviews the development and evaluation of semiconductor multi-spectral laser imaging systems. Although the method is certainly not specific to any laser technology, the use of semiconductor lasers is significant with respect to practicality and affordability. More relevantly, semiconductor lasers have their own characteristics; they offer excellent wavelength diversity but usually with modest power. Thus, system design and engineering issues are analyzed for approaches and trade-offs that can make the best use of semiconductor laser capabilities in multispectral imaging. A few systems were developed and the technique was tested and evaluated on a variety of natural and man-made objects. It was shown capable of high spectral resolution imaging which, unlike non-imaging point sensing, allows detecting and discriminating objects of interest even without a priori spectroscopic knowledge of the targets. Examples include material and chemical discrimination. It was also shown capable of dealing with the complexity of interpreting diffuse scattered spectral images and produced results that could otherwise be ambiguous with conventional imaging. Examples with glucose and spectral imaging of drug pills were discussed. Lastly, the technique was shown with conventional laser spectroscopy such as wavelength modulation spectroscopy to image a gas (CO). These results suggest the versatility and power of multi-spectral laser imaging, which can be practical with the use of semiconductor lasers.

  15. Data Fusion in Wind Tunnel Testing; Combined Pressure Paint and Model Deformation Measurements (Invited)

    NASA Technical Reports Server (NTRS)

    Bell, James H.; Burner, Alpheus W.

    2004-01-01

    As the benefit-to-cost ratio of advanced optical techniques for wind tunnel measurements, such as Video Model Deformation (VMD), Pressure-Sensitive Paint (PSP), and others, increases, these techniques are being used more and more often in large-scale production-type facilities. Further benefits might be achieved if multiple optical techniques could be deployed in a wind tunnel test simultaneously. The present study discusses the problems and benefits of combining VMD and PSP systems. The desirable attributes of useful optical techniques for wind tunnels, including the ability to accommodate the myriad optical techniques available today, are discussed. The VMD and PSP techniques are briefly reviewed. Commonalities and differences between the two techniques are discussed. Recent wind tunnel experiences and problems when combining PSP and VMD are presented, as are suggestions for future developments in combined PSP and deformation measurements.

  16. Communication and cooperation in underwater acoustic networks

    NASA Astrophysics Data System (ADS)

    Yerramalli, Srinivas

    In this thesis, we present a study of several problems related to underwater point-to-point communications and network formation. We explore techniques to improve the achievable data rate on a point-to-point link using better physical layer techniques, and then study sensor cooperation, which improves the throughput and reliability in an underwater network. Robust point-to-point communications in underwater networks have become increasingly critical in several military and civilian applications. We present several physical layer signaling and detection techniques tailored to the underwater channel model to improve the reliability of data detection. First, we consider a simplified underwater channel model in which the time-scale distortion on each path is assumed to be the same (a single-scale channel model, in contrast to the more general multi-scale model). A novel technique called Partial FFT Demodulation, which exploits the nature of OFDM signaling and the time-scale distortion, is derived. It is observed that this new technique has some unique interference suppression properties and performs better than traditional equalizers in several scenarios of interest. Next, we consider the multi-scale model for the underwater channel and assume that single-scale processing is performed at the receiver. We then derive optimized front-end pre-processing techniques to reduce the interference caused during single-scale processing of signals transmitted on a multi-scale channel. We also propose an improved channel estimation technique using dictionary optimization methods for compressive sensing and show that significant performance gains can be obtained with it. In the next part of this thesis, we consider the problem of cooperation among rational sensor nodes whose objective is to improve their individual data rates. We first consider transmitter cooperation in a multiple access channel, investigate the stability of the grand coalition of transmitters using tools from cooperative game theory, and show that the grand coalition is stable in both the asymptotic regimes of high and low SNR. Towards studying receiver cooperation for a broadcast channel, we propose a game-theoretic model for the broadcast channel, derive a game-theoretic duality between the multiple access and broadcast channels, and show how the equilibria of the broadcast channel are related to those of the multiple access channel and vice versa.

  17. Using synchronization in multi-model ensembles to improve prediction

    NASA Astrophysics Data System (ADS)

    Hiemstra, P.; Selten, F.

    2012-04-01

    In recent decades, many climate models have been developed to understand and predict the behavior of the Earth's climate system. Although these models are all based on the same basic physical principles, they still show different behavior, caused for example by different choices in how to parametrize sub-grid-scale processes. One method to combine these imperfect models is to run a multi-model ensemble: the models are given identical initial conditions and are integrated forward in time, and a multi-model estimate can be formed, for example, as a weighted mean of the ensemble members. We propose to go a step further and try to obtain synchronization between the imperfect models by connecting the members of the multi-model ensemble and exchanging information. The connected multi-model ensemble is also known as a supermodel. The supermodel has learned from observations how to optimally exchange information between the ensemble members. In this study we focused on the density and formulation of the connections within the supermodel. The main question was whether we could obtain synchronization between two climate models when connecting only a subset of their state spaces. Limiting the connected subspace has two advantages: 1) it limits the transfer of data (bytes) between the ensemble members, which can be a limiting factor in large-scale climate models, and 2) learning the optimal connection strategy from observations becomes easier. To answer the research question, we connected two identical quasi-geostrophic (QG) atmospheric models with different initial conditions to each other. The QG model is a qualitatively realistic simulation of the winter flow on the Northern hemisphere, has three layers and uses a spectral implementation. We connected the models in the original spherical harmonic state space, and in linear combinations of these spherical harmonics, i.e. Empirical Orthogonal Functions (EOFs). We show that when connecting through spherical harmonics, we only need to connect 28% of the state variables to obtain synchronization. When connecting through EOFs, we can reduce this percentage even further, to 12%. This reduction is due to the more efficient description of the model state when using EOFs. The connected state variables center around the medium-scale structures in the model; small- and large-scale structures need not be connected in order to obtain synchronization. This could be related to the baroclinic instabilities in the QG model, which are located at the medium scales and are the main source of divergence between the two connected models.
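
    The partial-coupling idea can be illustrated on a much smaller chaotic system: two Lorenz-63 models started from different states synchronize completely when only one of their three state variables is connected. This is a minimal sketch only (the supermodel couples spectral coefficients of QG models, not Lorenz variables; the coupling strength and step size below are illustrative assumptions):

        import numpy as np

        def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
            x, y, z = s
            return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

        dt, k = 0.002, 8.0              # Euler step; coupling strength (assumed)
        a = np.array([1.0, 1.0, 1.0])
        b = np.array([-5.0, 0.0, 20.0])

        for step in range(50000):
            da, db = lorenz(a), lorenz(b)
            # Connect only the x-component: a subset of the state space
            da[0] += k * (b[0] - a[0])
            db[0] += k * (a[0] - b[0])
            a, b = a + dt * da, b + dt * db

        print(np.abs(a - b))            # all components ~0: full synchronization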

  18. Lameness detection in dairy cattle: single predictor v. multivariate analysis of image-based posture processing and behaviour and performance sensing.

    PubMed

    Van Hertem, T; Bahr, C; Schlageter Tello, A; Viazzi, S; Steensels, M; Romanini, C E B; Lokhorst, C; Maltz, E; Halachmi, I; Berckmans, D

    2016-09-01

    The objective of this study was to evaluate whether a multi-sensor system (milk, activity, body posture) is a better classifier for lameness than single-sensor-based detection models. Between September 2013 and August 2014, 3629 cow observations were collected on a commercial dairy farm in Belgium. Human locomotion scoring was used as the reference for model development and evaluation. Cow behaviour and performance were measured with existing sensors already present on the farm. A prototype three-dimensional video recording system was used to automatically quantify the back posture of a cow. For the single-predictor comparisons, a receiver operating characteristic curve was made. For the multivariate detection models, logistic regression and generalized linear mixed models (GLMM) were developed. The best lameness classification model was obtained by the multi-sensor analysis (area under the receiver operating characteristic curve (AUC)=0.757±0.029), containing a combination of milk and milking variables, activity, and gait and posture variables from videos. Second, the multivariate video-based system (AUC=0.732±0.011) performed better than the multivariate milk sensors (AUC=0.604±0.026) and the multivariate behaviour sensors (AUC=0.633±0.018). The video-based system also performed better than the combined behaviour- and performance-based detection model (AUC=0.669±0.028), indicating that it is worthwhile to consider a video-based lameness detection system, regardless of the presence of other sensors on the farm. The results suggest that Θ2, the feature variable for the back curvature around the hip joints (AUC of 0.719), is the best single predictor variable for lameness detection based on locomotion scoring. In general, this study showed that the video-based back posture monitoring system outperforms the behaviour and performance sensing techniques for locomotion scoring-based lameness detection. A GLMM with seven specific variables (walking speed, back posture measurement, daytime activity, milk yield, lactation stage, milk peak flow rate and milk peak conductivity) is the best combination of variables for lameness classification. The accuracy of four-level lameness classification was 60.3%; it improved to 79.8% for binary lameness classification. The binary GLMM obtained a sensitivity of 68.5% and a specificity of 87.6%, both exceeding the sensitivity (52.1%±4.7%) and specificity (83.2%±2.3%) of the multi-sensor logistic regression model. This shows that the repeated-measures analysis in the GLMM, which takes into account the individual history of the animal, outperforms classification using thresholds based on herd level (a statistical population).

  19. Location and Navigation with Ultra-Wideband Signals

    DTIC Science & Technology

    2012-06-07

    [Indexed excerpt fragments:] Coherent vs. Noncoherent Combination ... Ranging with Multi-Band UWB Signals: Random Phase Rotation ... MB-OFDM System Model ... adopted to combine the channel information from subbands: the coherent combining and the noncoherent combining. For the coherent combining, estimates of ... channel frequency response coefficients for all subbands are jointly used to estimate the time domain channel with Eq. (33). For the noncoherent ...

  20. Radiation-Spray Coupling for Realistic Flow Configurations

    NASA Technical Reports Server (NTRS)

    El-Asrag, Hossam; Iannetti, Anthony C.

    2011-01-01

    Three Large Eddy Simulations (LES) of a lean direct injection (LDI) combustor are performed and compared. In addition to the cold-flow simulation, the effect of coupling radiation with the multi-physics reactive flow is analyzed. The flamelet progress variable approach is used as the subgrid combustion model, combined with a stochastic subgrid model for spray atomization and an optically thin radiation model. For accurate chemistry modeling, a detailed Jet-A surrogate mechanism is utilized. To achieve realistic inflow, a simple recycling technique is applied at the inflow section upstream of the swirler. Good agreement is shown with the experimental mean and root-mean-square profiles. The effect of combustion is found to change the shape and size of the central recirculation zone. Radiation is found to change the spray dynamics and atomization by altering the heat release distribution and the local temperature values that govern the evaporation process. The simulation with radiation modeling shows a wider droplet size distribution, as radiation alters the evaporation rate. The current study demonstrates the importance of radiation modeling for accurate prediction in realistic spray combustion configurations, even for low-pressure systems.
