Sample records for exploitation model performance

  1. Exploiting Thread Parallelism for Ocean Modeling on Cray XC Supercomputers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarje, Abhinav; Jacobsen, Douglas W.; Williams, Samuel W.

    The incorporation of increasing core counts in modern processors used to build state-of-the-art supercomputers is driving application development towards exploitation of thread parallelism, in addition to distributed memory parallelism, with the goal of delivering efficient high-performance codes. In this work we describe the exploitation of threading, and our experiences with it, in the context of a real-world ocean modeling application code, MPAS-Ocean. We present detailed performance analysis and comparisons of various approaches and configurations for threading on the Cray XC series supercomputers.

  2. Minimizing Actuator-Induced Residual Error in Active Space Telescope Primary Mirrors

    DTIC Science & Technology

    2010-09-01

    actuator geometry, and rib-to-facesheet intersection geometry are exploited to achieve improved performance in silicon carbide (SiC) mirrors. A parametric finite element model is used to explore the trade space ... (MOST) finite element model. The move to lightweight actively-controlled silicon carbide (SiC) mirrors is traced back to previous generations of space

  3. Identifying potential disaster zones around the Verkhnekamskoye potash deposit (Russia) using advanced information technology (IT)

    NASA Astrophysics Data System (ADS)

    Royer, J. J.; Filippov, L. O.

    2017-07-01

    This work aims at improving the exploitation of the K and Mg salt ores of the Verkhnekamskoye deposit using advanced information technology (IT), such as 3D geostatistical modeling techniques, together with high-performance flotation. It is expected to make the exploitation of the deposit more profitable while avoiding the formation of dramatic sinkholes, through better knowledge of the deposit. The GeoChron modelling method for sedimentary formations (Mallet, 2014) was used to improve the knowledge of the Verkhnekamskoye potash deposit, Perm region, Russia. After a short introduction to the modern theory of mathematical modelling applied to mineral resources exploitation and geology, new results are presented on the sedimentary architecture of the ore deposit. They clarify the structural geology and the fault orientations, a key point for avoiding catastrophic water inflows from recharge zones during exploitation. These results are important for avoiding catastrophic sinkholes during exploitation.

  4. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amadio, G.; et al.

    An intensive R&D and programming effort is required to accomplish new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading) and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests in order to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.

  5. ATR performance modeling concepts

    NASA Astrophysics Data System (ADS)

    Ross, Timothy D.; Baker, Hyatt B.; Nolan, Adam R.; McGinnis, Ryan E.; Paulson, Christopher R.

    2016-05-01

    Performance models are needed for automatic target recognition (ATR) development and use. ATRs consume sensor data and produce decisions about the scene observed. ATR performance models (APMs), on the other hand, consume operating conditions (OCs) and produce probabilities about what the ATR will produce. APMs are needed for many modeling roles and many kinds of ATRs (each with different combinations of sensing modality and exploitation functionality); moreover, there are different approaches to constructing APMs. Therefore, although many APMs have been developed, there is rarely one that fits a particular need. Clarified APM concepts may allow us to recognize new uses of existing APMs and to identify new APM technologies and components that better support coverage of the needed APMs. The concepts begin with thinking of ATRs as mapping OCs of the real scene (including the sensor data) to reports. An APM is then a mapping from explicit quantized OCs (represented with less resolution than the real OCs) and latent OC distributions to report distributions. The roles of APMs can be distinguished by the explicit OCs they consume. APMs used in simulations consume the true state that the ATR is attempting to report. APMs used online with the exploitation consume the sensor signal and derivatives, such as match scores. APMs used in sensor management consume neither of those, but estimate performance from other OCs. This paper summarizes the major building blocks for APMs, including knowledge sources, OC models, look-up tables, analytical and learned mappings, and tools for signal synthesis and exploitation.
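
    Below is a minimal sketch of the look-up-table building block this abstract names: quantized OC tuples index empirical distributions over ATR reports. The class, the OC encoding, and the example values are illustrative assumptions, not the authors' implementation.

    ```python
    # Hypothetical look-up-table APM: maps quantized operating conditions (OCs)
    # to an empirical distribution over ATR reports.
    from collections import defaultdict

    class LookupTableAPM:
        def __init__(self):
            # table[quantized_oc] -> {report: count}
            self.table = defaultdict(lambda: defaultdict(int))

        def update(self, oc, report):
            """Accumulate an observed (OC, ATR report) pair from a trial."""
            self.table[oc][report] += 1

        def predict(self, oc):
            """Return the estimated P(report | quantized OC) as a dict."""
            counts = self.table[oc]
            total = sum(counts.values())
            return {r: c / total for r, c in counts.items()}

    apm = LookupTableAPM()
    apm.update(("SAR", "depression_15deg", "clutter_high"), "target")
    apm.update(("SAR", "depression_15deg", "clutter_high"), "no_target")
    print(apm.predict(("SAR", "depression_15deg", "clutter_high")))
    # {'target': 0.5, 'no_target': 0.5}
    ```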

  6. Evaluation methodology for query-based scene understanding systems

    NASA Astrophysics Data System (ADS)

    Huster, Todd P.; Ross, Timothy D.; Culbertson, Jared L.

    2015-05-01

    In this paper, we propose a method for the principled evaluation of scene understanding systems in a query-based framework. We can think of a query-based scene understanding system as a generalization of typical sensor exploitation systems: instead of performing a narrowly defined task (e.g., detect, track, classify, etc.), the system can perform general user-defined tasks specified in a query language. Examples of this type of system have been developed as part of DARPA's Mathematics of Sensing, Exploitation, and Execution (MSEE) program. There is a body of literature on the evaluation of typical sensor exploitation systems, but the open-ended nature of the query interface introduces new aspects of the evaluation problem that have not been widely considered before. In this paper, we state the evaluation problem and propose an approach to efficiently learn about the quality of the system under test. We consider the objective of the evaluation to be to build a performance model of the system under test, and we rely on the principles of Bayesian experiment design to help construct and select optimal queries for learning about the parameters of that model.
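
    As a toy illustration of the Bayesian experiment design principle invoked here (not the MSEE evaluation itself), the sketch below keeps a Beta posterior over each candidate query type's success rate and poses the query whose parameter is currently most uncertain. Query names, priors, and the selection criterion are illustrative assumptions.

    ```python
    # Hedged sketch: pick the next query by posterior uncertainty (Beta variance).
    posteriors = {"detect": (3.0, 1.0), "track": (1.0, 1.0), "classify": (5.0, 5.0)}

    def beta_variance(a, b):
        """Variance of a Beta(a, b) posterior over a success rate."""
        return a * b / ((a + b) ** 2 * (a + b + 1.0))

    best = max(posteriors, key=lambda q: beta_variance(*posteriors[q]))
    print("next query to pose:", best)      # the least-known query type

    # after scoring the system's answer, update the chosen posterior
    a, b = posteriors[best]
    posteriors[best] = (a + 1.0, b)         # e.g. the answer was judged correct
    ```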

  7. The Exploration-Exploitation Dilemma: A Multidisciplinary Framework

    PubMed Central

    Berger-Tal, Oded; Meron, Ehud; Saltz, David

    2014-01-01

    The trade-off between the need to obtain new knowledge and the need to use that knowledge to improve performance is one of the most basic trade-offs in nature, and optimal performance usually requires some balance between exploratory and exploitative behaviors. Researchers in many disciplines have been searching for the optimal solution to this dilemma. Here we present a novel model in which the exploration strategy itself is dynamic and varies with time in order to optimize a definite goal, such as the acquisition of energy, money, or prestige. Our model produced four very distinct phases: Knowledge establishment, Knowledge accumulation, Knowledge maintenance, and Knowledge exploitation, giving rise to a multidisciplinary framework that applies equally to humans, animals, and organizations. The framework can be used to explain a multitude of phenomena in various disciplines, such as the movement of animals in novel landscapes, the most efficient resource allocation for a start-up company, or the effects of old age on knowledge acquisition in humans. PMID:24756026

  8. Exploration–exploitation trade-off features a saltatory search behaviour

    PubMed Central

    Volchenkov, Dimitri; Helbach, Jonathan; Tscherepanow, Marko; Kühnel, Sina

    2013-01-01

    Searching experiments conducted in different virtual environments over a gender-balanced group of people revealed a gender-independent, scale-free spread of searching activity on large spatio-temporal scales. We have suggested and solved analytically a simple statistical model of the coherent-noise type describing the exploration–exploitation trade-off in humans ('should I stay' or 'should I go'). The model exhibits a variety of saltatory behaviours, ranging from Lévy flights occurring under uncertainty to Brownian walks performed by a treasure hunter confident of eventual success. PMID:23782535

  9. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that captures spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.
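
    The 3D decorrelation idea can be illustrated with an off-the-shelf multilevel 3D wavelet transform. Note that ICER-3D uses its own transform and entropy coder, so this PyWavelets snippet is only a stand-in for the decomposition step; the cube dimensions and wavelet are illustrative.

    ```python
    # Illustrative 3-D wavelet decomposition of a hyperspectral cube, exploiting
    # correlation along rows, columns and the spectral axis at once.
    import numpy as np
    import pywt

    cube = np.random.rand(32, 64, 64)        # toy (bands, rows, cols) data
    coeffs = pywt.wavedecn(cube, wavelet="db2", level=2)

    approx = coeffs[0]                       # coarsest approximation subband
    print(approx.shape)
    print(sorted(coeffs[1].keys()))          # detail subbands: 'aad' ... 'ddd'
    ```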

  10. Numerical Modeling of Exploitation Relics and Faults Influence on Rock Mass Deformations

    NASA Astrophysics Data System (ADS)

    Wesołowski, Marek

    2016-12-01

    This article presents numerical modeling results on the influence of fault planes and exploitation relics on the size and distribution of rock mass and ground surface deformations. Numerical calculations were performed using the finite difference program FLAC. To assess the changes taking place in the rock mass, an anisotropic elasto-plastic ubiquitous-joint model was used, into which the Coulomb-Mohr strength (plasticity) condition was implemented. The article takes as an example the actual exploitation of the longwall 225 area in the seam 502wg of the "Pokój" coal mine. Computer simulations have shown that it is possible to determine the influence of fault planes and exploitation relics on the size and distribution of rock mass and surface deformations. The main factor causing additional deformation of the ground surface is the abandoned workings in the seam 502wd. These abandoned workings are the activation factor that caused additional subsidence and also, due to their significant dip, they form a layer along which the rock mass slides down towards the extracted space. These factors are not taken into account by the geometrical and integral theories.

  11. Main principles of developing exploitation models of semiconductor devices

    NASA Astrophysics Data System (ADS)

    Gradoboev, A. V.; Simonova, A. V.

    2018-05-01

    The paper presents the primary tasks whose solutions make it possible to develop exploitation models of semiconductor devices that take into account the complex and combined influence of ionizing irradiation and operating factors. The structure of the exploitation model of a semiconductor device is presented, which is based on radiation and reliability models. Furthermore, it is shown that the exploitation model should take into account the complex and combined influence of various types of ionizing irradiation and operating factors. An algorithm for developing the exploitation model of semiconductor devices is proposed. The possibility of creating radiation models of the Schottky barrier diode, the Schottky field-effect transistor and the Gunn diode is shown on the basis of the available experimental data. A basic exploitation model of IR LEDs based upon double AlGaAs heterostructures is presented. The practical application of the exploitation models will make it possible to produce electronic products with guaranteed operational properties.

  12. Block sparsity-based joint compressed sensing recovery of multi-channel ECG signals.

    PubMed

    Singh, Anurag; Dandapat, Samarendra

    2017-04-01

    In recent years, compressed sensing (CS) has emerged as an effective alternative to conventional wavelet-based data compression techniques. This is due to its simple and energy-efficient data reduction procedure, which makes it suitable for resource-constrained wireless body area network (WBAN)-enabled electrocardiogram (ECG) telemonitoring applications. Both spatial and temporal correlations exist simultaneously in multi-channel ECG (MECG) signals. Exploitation of both types of correlations is very important in CS-based ECG telemonitoring systems for better performance. However, most of the existing CS-based works exploit only one of the correlations, which results in suboptimal performance. In this work, within a CS framework, the authors propose to exploit both types of correlations simultaneously using a sparse Bayesian learning-based approach. A spatiotemporal sparse model is employed for joint compression/reconstruction of MECG signals. Discrete wavelet transform domain block sparsity of MECG signals is exploited for simultaneous reconstruction of all the channels. Performance evaluations using the Physikalisch-Technische Bundesanstalt MECG diagnostic database show a significant gain in the diagnostic reconstruction quality of the MECG signals compared with state-of-the-art techniques at a reduced number of measurements. The low measurement requirement may lead to significant savings in the energy cost of existing CS-based WBAN systems.

  13. Compressive Sensing via Nonlocal Smoothed Rank Function

    PubMed Central

    Fan, Ya-Ru; Liu, Jun; Zhao, Xi-Le

    2016-01-01

    Compressive sensing (CS) theory asserts that we can reconstruct signals and images with only a small number of samples or measurements. Recent works exploiting the nonlocal similarity have led to better results in various CS studies. To better exploit the nonlocal similarity, in this paper, we propose a non-convex smoothed rank function based model for CS image reconstruction. We also propose an efficient alternating minimization method to solve the proposed model, which reduces a difficult and coupled problem to two tractable subproblems. Experimental results have shown that the proposed method performs better than several existing state-of-the-art CS methods for image reconstruction. PMID:27583683
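
    The paper's smoothed rank function is non-convex and specific to its derivation; as a simpler, hedged stand-in, the sketch below shows the two tractable subproblems such alternating schemes typically cycle between: a singular-value shrinkage step on a matrix of grouped nonlocal patches, and a data-consistency step on the CS measurements. Function names and the shrinkage rule are illustrative.

    ```python
    # Hedged sketch of the alternating structure (not the authors' exact model).
    import numpy as np

    def low_rank_step(patch_matrix, tau):
        """Shrink singular values: a convex surrogate for rank reduction."""
        U, s, Vt = np.linalg.svd(patch_matrix, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def data_consistency_step(x, A, y, step):
        """Gradient step on ||Ax - y||^2 to stay faithful to the measurements."""
        return x - step * A.T @ (A @ x - y)

    # A full reconstruction would loop: group similar patches into matrices,
    # apply low_rank_step to each, reassemble the image, then apply
    # data_consistency_step, until the iterates stabilize.
    ```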

  14. [Ecotourism exploitation model in Bita Lake Natural Reserve of Yunnan].

    PubMed

    Yang, G; Wang, Y; Zhong, L

    2000-12-01

    The Bita Lake provincial natural reserve is located in the Shangri-La region of north-western Yunnan and was designated a demonstration area for ecotourism exploitation in 1998. After a year of exploitation construction and half a year of operation as a branch of the '99 Kunming International Horticulture Exposition receiving tourists, the ecotourism demonstration area was shown to attain the four integrated functions of ecotourism, i.e., tourism, protection, poverty alleviation and environmental education. Five exploitation and management models, including a function-zoned exploitation model, a featured tourism communication model, a signs system designing model, a local Tibetan family reception model and an environmental monitoring model, were also successful, and were demonstrated and spread to the whole province. The Bita Lake provincial natural reserve could be a good example for ecotourism exploitation in natural reserves across the whole country.

  15. A Deep Learning Architecture for Temporal Sleep Stage Classification Using Multivariate and Multimodal Time Series.

    PubMed

    Chambon, Stanislas; Galtier, Mathieu N; Arnal, Pierrick J; Wainrib, Gilles; Gramfort, Alexandre

    2018-04-01

    Sleep stage classification constitutes an important preliminary exam in the diagnosis of sleep disorders. It is traditionally performed by a sleep expert who assigns a sleep stage to each 30 s of signal, based on visual inspection of signals such as electroencephalograms (EEGs), electrooculograms (EOGs), electrocardiograms, and electromyograms (EMGs). We introduce here the first deep learning approach for sleep stage classification that learns end-to-end without computing spectrograms or extracting handcrafted features, that exploits all multivariate and multimodal polysomnography (PSG) signals (EEG, EMG, and EOG), and that can exploit the temporal context of each 30-s window of data. For each modality, the first layer learns linear spatial filters that exploit the array of sensors to increase the signal-to-noise ratio, and the last layer feeds the learnt representation to a softmax classifier. Our model is compared to alternative automatic approaches based on convolutional networks or decision trees. Results obtained on 61 publicly available PSG records with up to 20 EEG channels demonstrate that our network architecture yields state-of-the-art performance. Our study reveals a number of insights on the spatiotemporal distribution of the signal of interest: a good tradeoff for optimal classification performance measured with balanced accuracy is to use 6 EEG channels with 2 EOG (left and right) and 3 EMG chin channels. Also, exploiting 1 min of data before and after each data segment offers the strongest improvement when a limited number of channels is available. Like sleep experts, our system exploits the multivariate and multimodal nature of PSG signals in order to deliver state-of-the-art classification performance with a small computational cost.
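
    A hedged PyTorch sketch of the architecture pattern described: a first layer of linear spatial filters across channels, temporal convolutions, and a linear read-out trained with a softmax loss. Layer sizes, kernel widths, and the sampling rate are illustrative, not the authors' exact configuration.

    ```python
    # Toy end-to-end sleep stager: spatial filters -> temporal conv -> classifier.
    import torch
    import torch.nn as nn

    class TinySleepNet(nn.Module):
        def __init__(self, n_channels=11, n_spatial=8, n_classes=5):
            # 11 channels echoes the abstract's 6 EEG + 2 EOG + 3 EMG tradeoff
            super().__init__()
            self.spatial = nn.Conv1d(n_channels, n_spatial, kernel_size=1)
            self.temporal = nn.Sequential(
                nn.Conv1d(n_spatial, 16, kernel_size=64, stride=8), nn.ReLU(),
                nn.AdaptiveAvgPool1d(16),
            )
            self.head = nn.Linear(16 * 16, n_classes)  # softmax lives in the loss

        def forward(self, x):          # x: (batch, channels, time), raw 30-s windows
            z = self.temporal(self.spatial(x))
            return self.head(z.flatten(1))

    model = TinySleepNet()
    logits = model(torch.randn(4, 11, 3000))   # 4 windows at 100 Hz * 30 s
    print(logits.shape)                        # torch.Size([4, 5])
    ```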

  16. Joint modality fusion and temporal context exploitation for semantic video analysis

    NASA Astrophysics Data System (ADS)

    Papadopoulos, Georgios Th; Mezaris, Vasileios; Kompatsiaris, Ioannis; Strintzis, Michael G.

    2011-12-01

    In this paper, a multi-modal context-aware approach to semantic video analysis is presented. Overall, the examined video sequence is initially segmented into shots and for every resulting shot appropriate color, motion and audio features are extracted. Then, Hidden Markov Models (HMMs) are employed for performing an initial association of each shot with the semantic classes that are of interest separately for each modality. Subsequently, a graphical modeling-based approach is proposed for jointly performing modality fusion and temporal context exploitation. Novelties of this work include the combined use of contextual information and multi-modal fusion, and the development of a new representation for providing motion distribution information to HMMs. Specifically, an integrated Bayesian Network is introduced for simultaneously performing information fusion of the individual modality analysis results and exploitation of temporal context, contrary to the usual practice of performing each task separately. Contextual information is in the form of temporal relations among the supported classes. Additionally, a new computationally efficient method for providing motion energy distribution-related information to HMMs, which supports the incorporation of motion characteristics from previous frames to the currently examined one, is presented. The final outcome of this overall video analysis framework is the association of a semantic class with every shot. Experimental results as well as comparative evaluation from the application of the proposed approach to four datasets belonging to the domains of tennis, news and volleyball broadcast video are presented.

  17. Taking movement data to new depths: Inferring prey availability and patch profitability from seabird foraging behavior.

    PubMed

    Chimienti, Marianna; Cornulier, Thomas; Owen, Ellie; Bolton, Mark; Davies, Ian M; Travis, Justin M J; Scott, Beth E

    2017-12-01

    Detailed information acquired using tracking technology has the potential to provide accurate pictures of the types of movements and behaviors performed by animals. To date, such data have not been widely exploited to provide inferred information about the foraging habitat. We collected data using multiple sensors (GPS, time depth recorders, and accelerometers) from two species of diving seabirds, razorbills (Alca torda, N = 5, from Fair Isle, UK) and common guillemots (Uria aalge, N = 2 from Fair Isle and N = 2 from Colonsay, UK). We used a clustering algorithm to identify pursuit and catching events and the time spent pursuing and catching underwater, which we then used as indicators for inferring prey encounters throughout the water column and responses to changes in prey availability of the areas visited at two levels: individual dives and groups of dives. For each individual dive (N = 661 for guillemots, 6214 for razorbills), we modeled the number of pursuit and catching events, in relation to dive depth, duration, and type of dive performed (benthic vs. pelagic). For groups of dives (N = 58 for guillemots, 156 for razorbills), we modeled the total time spent pursuing and catching in relation to time spent underwater. Razorbills performed only pelagic dives, most likely exploiting prey available at shallow depths as indicated by the vertical distribution of pursuit and catching events. In contrast, guillemots were more flexible in their behavior, switching between benthic and pelagic dives. Capture attempt rates indicated that they were exploiting deep prey aggregations. The study highlights how novel analysis of movement data can give new insights into how animals exploit food patches, offering a unique opportunity to comprehend the behavioral ecology behind different movement patterns and understand how animals might respond to changes in prey distributions.

  18. The Shepherd equation and chaos identification.

    PubMed

    Gregson, Robert A M

    2010-04-01

    An equation created by Shepherd (1982) to model stability in exploited fish populations has been found to have wider application, and it exhibits complicated internal dynamics, including phases of strict periodicity and of chaos. It may be potentially applicable to other psychophysiological contexts. The problems of determining goodness-of-fit, and the comparative performance of alternative models including the Shepherd model, are briefly addressed.
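
    For readers unfamiliar with the equation, Shepherd's (1982) stock-recruitment form is commonly written R = aS / (1 + (S/K)^b); iterating it as a population map already exhibits the periodic and chaotic phases the abstract mentions. The parameter values below are illustrative, not taken from the paper.

    ```python
    # Hedged sketch: iterate the Shepherd map S_{t+1} = a*S_t / (1 + (S_t/K)**b).
    # Large a and b push the map from stable cycles toward chaotic regimes.
    def shepherd_orbit(s0, a, K, b, n=200):
        s, orbit = s0, []
        for _ in range(n):
            s = a * s / (1.0 + (s / K) ** b)
            orbit.append(s)
        return orbit

    tail = shepherd_orbit(s0=0.5, a=20.0, K=1.0, b=6.0)[-8:]
    print(["%.3f" % v for v in tail])  # irregular tail suggests aperiodic dynamics
    ```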

  19. Fish tracking by combining motion based segmentation and particle filtering

    NASA Astrophysics Data System (ADS)

    Bichot, E.; Mascarilla, L.; Courtellemont, P.

    2006-01-01

    In this paper, we suggest a new importance sampling scheme to improve a particle filtering based tracking process. This scheme relies on the exploitation of motion segmentation. More precisely, we propagate hypotheses from particle filtering to blobs whose motion is similar to that of the target. Hence, the search is driven toward regions of interest in the state space and prediction is more accurate. We also propose to exploit segmentation to update the target model. Once the moving target has been identified, a representative model is learnt from its spatial support. We refer to this model in the correction step of the tracking process. The importance sampling scheme and the strategy to update the target model improve the performance of particle filtering in complex situations of occlusion compared to a simple bootstrap approach, as shown by our experiments on real fish tank sequences.
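
    A hedged sketch of the importance-sampling idea: a fraction of particles is proposed around motion-segmented blobs that move like the target, while the rest diffuse from the previous particle set as in a plain bootstrap filter. Blob detection and the observation likelihood are stubbed out; all values are illustrative.

    ```python
    # Segmentation-guided proposal for a 2-D particle filter.
    import numpy as np

    rng = np.random.default_rng(0)

    def propose(particles, blobs, mix=0.5, sigma=5.0):
        """Draw new particles: a fraction around motion-consistent blobs,
        the rest by ordinary diffusion around the previous particles."""
        n = len(particles)
        k = int(mix * n)
        guided = blobs[rng.integers(len(blobs), size=k)]      # blob-guided part
        diffused = particles[rng.integers(n, size=n - k)]     # bootstrap part
        return np.vstack([guided, diffused]) + rng.normal(0, sigma, (n, 2))

    particles = rng.uniform(0, 100, (200, 2))
    blobs = np.array([[40.0, 60.0], [40.5, 59.0]])  # blobs moving like the target
    particles = propose(particles, blobs)
    # weights would then be set by the observation likelihood and normalized
    ```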

  20. Bio-inspired computational heuristics to study Lane-Emden systems arising in astrophysics model.

    PubMed

    Ahmad, Iftikhar; Raja, Muhammad Asif Zahoor; Bilal, Muhammad; Ashraf, Farooq

    2016-01-01

    This study reports novel hybrid computational methods for the solution of the nonlinear singular Lane-Emden type differential equations arising in astrophysics models, exploiting the strength of unsupervised neural network models and stochastic optimization techniques. In the scheme, the neural network, a sub-part of the large field called soft computing, is exploited for modelling the equation in an unsupervised manner. The proposed approximate solutions of the higher-order ordinary differential equation are calculated with the weights of neural networks trained with a genetic algorithm, and pattern search hybridized with sequential quadratic programming for rapid local convergence. The results of the proposed solvers for the nonlinear singular systems are in good agreement with the standard solutions. Accuracy and convergence of the designed schemes are demonstrated by the results of statistical performance measures based on a sufficiently large number of independent runs.
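
    A hedged sketch of the unsupervised formulation for the Lane-Emden equation y'' + (2/x)y' + y^m = 0 with y(0) = 1, y'(0) = 0: a small parametric trial solution is scored by its mean-squared equation residual, which a GA (global) hybridized with SQP (local) would then minimize. The trial form and the tiny parameter vector are illustrative, not the authors' network.

    ```python
    # Residual fitness for an unsupervised Lane-Emden solver.
    import numpy as np

    def trial(x, w):
        # y_hat(x) = 1 + x^2 * g(x) satisfies y(0)=1, y'(0)=0 by construction
        a, b, c = w
        return 1.0 + x**2 * np.tanh(a * x + b) * c

    def residual_fitness(w, m=5, xs=np.linspace(1e-3, 5, 200), h=1e-4):
        y = trial(xs, w)
        dy = (trial(xs + h, w) - trial(xs - h, w)) / (2 * h)
        d2y = (trial(xs + h, w) - 2 * y + trial(xs - h, w)) / h**2
        return np.mean((d2y + 2.0 / xs * dy + np.sign(y) * np.abs(y) ** m) ** 2)

    # a GA (global) hybridized with SQP (local) would minimize residual_fitness
    print(residual_fitness(np.array([0.1, 0.0, -0.05])))
    ```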

  1. Informatic analysis for hidden pulse attack exploiting spectral characteristics of optics in plug-and-play quantum key distribution system

    NASA Astrophysics Data System (ADS)

    Ko, Heasin; Lim, Kyongchun; Oh, Junsang; Rhee, June-Koo Kevin

    2016-10-01

    Quantum channel loopholes due to imperfect implementations of practical devices expose quantum key distribution (QKD) systems to potential eavesdropping attacks. Even though QKD systems are implemented with optical devices that are highly selective in their spectral characteristics, an information-theoretic analysis of a pertinent attack strategy built on a reasonable framework exploiting them has never been clarified. This paper proposes a new type of Trojan horse attack, called the hidden pulse attack, that can be applied to a plug-and-play QKD system, using general and optimal attack strategies that can extract quantum information from the phase-disturbed quantum states of the eavesdropper's hidden pulses. The attack exploits the spectral characteristics of a photodiode used in a plug-and-play QKD system in order to probe the modulation states of photon qubits. We analyze the security performance of the decoy-state BB84 QKD system under the optimal hidden pulse attack model, which shows enormous performance degradation in terms of both secret key rate and transmission distance.

  2. A spatially adaptive spectral re-ordering technique for lossless coding of hyper-spectral images

    NASA Technical Reports Server (NTRS)

    Memon, Nasir D.; Galatsanos, Nikolas

    1995-01-01

    In this paper, we propose a new approach, applicable to lossless compression of hyper-spectral images, that alleviates some limitations of linear prediction as applied to this problem. According to this approach, an adaptive re-ordering of the spectral components of each pixel is performed prior to prediction and encoding. This re-ordering adaptively exploits, on a pixel-by-pixel basis, the presence of inter-band correlations for prediction. Furthermore, the proposed approach takes advantage of spatial correlations, and does not introduce any coding overhead to transmit the order of the spectral bands. This is accomplished by using the assumption that two spatially adjacent pixels are expected to have similar spectral relationships. We thus have a simple technique to exploit spectral and spatial correlations in hyper-spectral data sets, leading to compression performance improvements as compared to our previously reported techniques for lossless compression. We also look at some simple error modeling techniques for further exploiting any structure that remains in the prediction residuals prior to entropy coding.
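
    A hedged sketch of the re-ordering idea: the spectral ordering (and band-to-band steps) observed at an already-decoded neighboring pixel drives the prediction of the current pixel's bands, so the decoder can recompute the order and no side information is sent. The exact predictor below is an illustrative variant, not the authors' scheme.

    ```python
    # Order bands by a decoded neighbor; predict each band from the previous one
    # in that order, using the neighbor's band-to-band step.
    import numpy as np

    def residuals_with_neighbor_order(cur, left):
        order = np.argsort(left)              # spectral order induced by neighbor
        res = np.empty_like(cur)
        res[order[0]] = cur[order[0]] - left[order[0]]   # seed from the neighbor
        for prev, nxt in zip(order[:-1], order[1:]):
            pred = cur[prev] + (left[nxt] - left[prev])  # neighbor's spectral step
            res[nxt] = cur[nxt] - pred
        return res                            # small residuals -> entropy coder

    cur = np.array([10.0, 52.0, 49.0, 7.0])   # current pixel's spectrum
    left = np.array([11.0, 50.0, 51.0, 8.0])  # spatially adjacent, decoded pixel
    print(residuals_with_neighbor_order(cur, left))   # [ 0.  3. -4. -1.]
    ```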

  3. Inverse Modeling of the Thermal Hydrodynamic and Chemical Processes During Exploitation of the Mutnovsky Geothermal Field (Kamchatka, Russia)

    NASA Astrophysics Data System (ADS)

    Kiryukhin, A. V.

    2012-12-01

    A TOUGH2-EOS1 3D rectangular numerical model of the Mutnovsky geothermal field (Kiryukhin, 1996) was re-calibrated using natural-state and exploitation history data for the period 1984-2006. Recalibration using iTOUGH2-EOS1+tracer inversion modeling capabilities was useful to remove outliers from the calibration data, identify the sets of estimated parameters of the model, and perform the estimations. The chloride ion was used as a "tracer" in this modeling. The thermal hydrodynamic observational data used for model recalibration are as follows: 37 temperature and 1 pressure calibration points for the natural state; 13 production wells with monthly averaged enthalpies (650 values during 1983-1987 and 2000-2006) and 1 transient pressure monitoring well (57 values during 2003-2006) for the exploitation history match. The chemical observational data include transient chloride mass concentrations from 10 production wells and chloride hot spring sampling data (149 values during 1999-2006). The following features of the Mutnovsky geothermal reservoir were estimated and better understood, based on the integrated inverse modeling analysis of natural-state and exploitation data: 1. Reservoir permeability was found to be one order of magnitude greater than in the 1996 model, especially in the lower part coinciding with the intrusion contact zone (600-800 mD at -750 to -1250 masl); 2. A local meteoric inflow in the central part of the field accounts for 45-80 kg/s since 2002; 3. Reinjection rates were estimated to be significantly lower than the officially reported 100% of total fluid withdrawal; 4. Upflow fluid was estimated to be hotter (314 °C) and its rates larger (+50%) than assumed before; 5. Global double-porosity parameter estimates are: fracture spacing 5-10 m, void fraction ≈ 10⁻³; 6. The main upflow zone chloride mass concentration is estimated at 150 ppm. Conversion of the calibrated TOUGH2-EOS1+tracer model into an electrical resistivity model using TOUGH2-EOS9 (L. Magnusdottir, 2012) may significantly improve the efficiency of Electrical Resistivity Tomography (ERT) applications to detect spatial features of infiltration downflows and chloride-enriched reinjected flows during reservoir exploitation.

  4. Memetic computing through bio-inspired heuristics integration with sequential quadratic programming for nonlinear systems arising in different physical models.

    PubMed

    Raja, Muhammad Asif Zahoor; Kiani, Adiqa Kausar; Shehzad, Azam; Zameer, Aneela

    2016-01-01

    In this study, bio-inspired computing is exploited for solving systems of nonlinear equations using variants of genetic algorithms (GAs) as a global search method, hybridized with sequential quadratic programming (SQP) for efficient local search. The fitness function is constructed by defining the error function for the system of nonlinear equations in the mean-square sense. The design parameters of the mathematical models are trained by exploiting the competency of GAs, and refinement is carried out by the viable SQP algorithm. Twelve versions of the memetic approach GA-SQP are designed by taking different sets of reproduction routines in the optimization process. The performance of the proposed variants is evaluated on six numerical problems comprising systems of nonlinear equations arising in the interval arithmetic benchmark model, kinematics, neurophysiology, combustion and chemical equilibrium. Comparative studies of the proposed results in terms of accuracy, convergence and complexity are performed with the help of statistical performance indices to establish the worth of the schemes. The accuracy and convergence of the memetic computing GA-SQP are found to be better in each case of the simulation study, and the effectiveness of the scheme is further established through statistical results based on different performance indices for accuracy and complexity.
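
    A hedged sketch of the memetic pipeline using SciPy stand-ins: a population-based global stage (differential evolution here, in place of the paper's GA variants) minimizes the mean-square error of the system, then SLSQP performs the SQP-type local refinement. The two-equation test system is illustrative.

    ```python
    # Global population search, then SQP-style refinement on an MSE fitness.
    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def mse_fitness(v):
        x, y = v
        # mean-squared error of the nonlinear system F(x, y) = 0
        f1 = x**2 + y**2 - 4.0
        f2 = np.exp(x) + y - 1.0
        return (f1**2 + f2**2) / 2.0

    coarse = differential_evolution(mse_fitness, bounds=[(-3, 3), (-3, 3)], seed=1)
    refined = minimize(mse_fitness, coarse.x, method="SLSQP")
    print(refined.x, refined.fun)   # near-zero residual => solution of the system
    ```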

  5. Availability Control for Means of Transport in Decisive Semi-Markov Models of Exploitation Process

    NASA Astrophysics Data System (ADS)

    Migawa, Klaudiusz

    2012-12-01

    The issues presented in this paper concern the control of the exploitation process implemented in complex systems of exploitation for technical objects. The article describes a method for controlling the availability of technical objects (means of transport) on the basis of a mathematical model of the exploitation process, implemented with semi-Markov decision processes. The method consists of preparing a decision model of the exploitation process for technical objects (a semi-Markov model) and then selecting the best control strategy (the optimal strategy) from among the possible decision variants, in accordance with the adopted criterion (or criteria) for evaluating the operation of the system of exploitation for technical objects. In the presented method, determining the optimal strategy for availability control of the technical objects means choosing a sequence of control decisions, made in the individual states of the modelled exploitation process, for which the criterion function reaches its extreme value. A genetic algorithm was chosen to find the optimal control strategy. The approach is illustrated with the example of the exploitation process of means of transport in a real municipal bus transport system. The model of the exploitation process for the means of transport was prepared on the basis of results obtained from the real transport system. The mathematical model of the exploitation process was built under the assumption that the process is a homogeneous semi-Markov process.

  6. A Spectral Element Ocean Model on the Cray T3D: the interannual variability of the Mediterranean Sea general circulation

    NASA Astrophysics Data System (ADS)

    Molcard, A. J.; Pinardi, N.; Ansaloni, R.

    A new numerical model, SEOM (Spectral Element Ocean Model (Iskandarani et al., 1994)), has been implemented for the Mediterranean Sea. Spectral element methods combine the geometric flexibility of finite element techniques with the rapid convergence rate of spectral schemes. The current version solves the shallow water equations with a fifth (or sixth) order accurate spectral scheme and about 50,000 nodes. The domain decomposition philosophy makes it possible to exploit the power of parallel machines. The original MIMD master/slave version of SEOM, written in F90 and PVM, has been ported to the Cray T3D. Where critical for performance, Cray-specific high-performance one-sided communication routines (SHMEM) have been adopted to fully exploit the Cray T3D interprocessor network. Tests performed with highly unstructured and irregular grids, on up to 128 processors, show almost linear scalability even with unoptimized domain decomposition techniques. Results from various case studies of the Mediterranean Sea are shown, involving realistic coastline geometry and monthly mean 1000 mb winds from the ECMWF atmospheric model operational analysis for the period January 1987 to December 1994. The simulation results show that variability in the wind forcing considerably affects the circulation dynamics of the Mediterranean Sea.

  7. Elder Fraud and Financial Exploitation: Application of Routine Activity Theory.

    PubMed

    DeLiema, Marguerite

    2017-03-10

    Elder financial exploitation, committed by individuals in positions of trust, and elder fraud, committed by predatory strangers, are two forms of financial victimization that target vulnerable older adults. This study analyzes differences between fraud and financial exploitation victims and tests routine activity theory as a contextual model for victimization. Routine activity theory predicts that criminal opportunities arise when a motivated offender and suitable target meet in the absence of capable guardians. Fifty-three financial exploitation and fraud cases were sampled from an elder abuse forensic center. Data include law enforcement and caseworker investigation reports, victim medical records, perpetrator demographic information, and forensic assessments of victim health and cognitive functioning. Fraud and financial exploitation victims performed poorly on tests of cognitive functioning and financial decision making administered by a forensic neuropsychologist following the allegations. Based on retrospective record review, there were few significant differences in physical health and cognitive functioning at the time victims' assets were taken, although their social contexts were different. Significantly more fraud victims were childless compared with financial exploitation victims. Fraud perpetrators took advantage of elders when they had no trustworthy friends or relatives to safeguard their assets. Findings support an adapted routine activity theory as a contextual model for financial victimization. Fraud most often occurred when a vulnerable elder was solicited by a financial predator in the absence of capable guardians. Prevention efforts should focus on reducing social isolation to enhance protection.

  8. Artificial neural networks in models of specialization, guild evolution and sympatric speciation.

    PubMed

    Holmgren, Noél M A; Norrström, Niclas; Getz, Wayne M

    2007-03-29

    Sympatric speciation can arise as a result of disruptive selection with assortative mating as a pleiotropic by-product. Studies on host choice, employing artificial neural networks as models for the host recognition system in exploiters, illustrate how disruptive selection on host choice coupled with assortative mating can arise as a consequence of selection for specialization. Our studies demonstrate that a generalist exploiter population can evolve into a guild of specialists with an 'ideal free' frequency distribution across hosts. The ideal free distribution arises from variability in host suitability and density-dependent exploiter fitness on different host species. Specialists are less subject to inter-phenotypic competition than generalists and to harmful mutations that are common in generalists exploiting multiple hosts. When host signals used as cues by exploiters coevolve with exploiter recognition systems, our studies show that evolutionary changes may be continuous and cyclic. Selection changes back and forth between specialization and generalization in the exploiters, and between weak and strong mimicry in the hosts, where non-defended hosts use the host investing in defence as a model. Thus, host signals and exploiter responses are engaged in a red-queen mimicry process that is ultimately cyclic rather than directional. In one phase, evolving signals of exploitable hosts mimic those of hosts less suitable for exploitation (i.e. the model). Signals in the model hosts also evolve through selection to escape the mimic and its exploiters. Response saturation constraints in the model hosts lead to the mimic hosts finally perfecting their mimicry, after which specialization in the exploiter guild is lost. This loss of exploiter specialization provides an opportunity for the model hosts to escape their mimics, and the cycle then repeats. We suggest that a species can readily evolve sympatrically when disruptive selection for specialization on hosts is the first step. In a sexual reproduction setting, partial reproductive isolation may first evolve by mate choice being confined to individuals on the same host. Secondly, this disruptive selection will favour assortative mate choice on genotype, thereby leading to increased reproductive isolation.

  9. R&D 100, 2016: Pyomo 4.0 – Python Optimization Modeling Objects

    ScienceCinema

    Hart, William; Laird, Carl; Siirola, John

    2018-06-13

    Pyomo provides a rich software environment for formulating and analyzing optimization applications. Pyomo supports the algebraic specification of complex sets of objectives and constraints, which enables optimization solvers to exploit problem structure to efficiently perform optimization.
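
    A minimal Pyomo model in the algebraic style the summary describes; the toy objective, constraints, and choice of GLPK are illustrative (any installed solver Pyomo supports would do).

    ```python
    # Small algebraic model: two variables, one objective, two constraints.
    from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                               NonNegativeReals, maximize, SolverFactory)

    m = ConcreteModel()
    m.x = Var(domain=NonNegativeReals)
    m.y = Var(domain=NonNegativeReals)
    m.profit = Objective(expr=3 * m.x + 2 * m.y, sense=maximize)
    m.capacity = Constraint(expr=m.x + m.y <= 10)
    m.ratio = Constraint(expr=m.x <= 2 * m.y)

    SolverFactory("glpk").solve(m)   # the algebraic structure is visible to solvers
    print(m.x(), m.y())
    ```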

  10. Ambidextrous Leadership and Employees' Self-reported Innovative Performance: The Role of Exploration and Exploitation Behaviors

    ERIC Educational Resources Information Center

    Zacher, Hannes; Robinson, Alecia J.; Rosing, Kathrin

    2016-01-01

    The ambidexterity theory of leadership for innovation proposes that leaders' opening and closing behaviors positively predict employees' exploration and exploitation behaviors, respectively. The interaction of exploration and exploitation behaviors, in turn, is assumed to influence employee innovative performance, such that innovative performance…

  11. Organizational Learning, Strategic Flexibility and Business Model Innovation: An Empirical Research Based on Logistics Enterprises

    NASA Astrophysics Data System (ADS)

    Bao, Yaodong; Cheng, Lin; Zhang, Jian

    Using data from 237 Jiangsu logistics firms, this paper empirically studies the relationship among organizational learning capability, business model innovation, and strategic flexibility. The results are as follows: organizational learning capability has a positive impact on business model innovation performance; strategic flexibility plays a mediating role in the relationship between organizational learning capability and business model innovation; and the interaction among strategic flexibility, explorative learning and exploitative learning plays a significant role in both radical and incremental business model innovation.

  12. Enhanced Self Tuning On-Board Real-Time Model (eSTORM) for Aircraft Engine Performance Health Tracking

    NASA Technical Reports Server (NTRS)

    Volponi, Al; Simon, Donald L. (Technical Monitor)

    2008-01-01

    A key technological concept for producing reliable engine diagnostics and prognostics exploits the benefits of fusing sensor data, information, and/or processing algorithms. This report describes the development of a hybrid engine model for a propulsion gas turbine engine, which is the result of fusing two diverse modeling methodologies: a physics-based model approach and an empirical model approach. The report describes the process and methods involved in deriving and implementing a hybrid model configuration for a commercial turbofan engine. Among the intended uses for such a model is real-time, on-board tracking of engine module performance changes and engine parameter synthesis for fault detection and accommodation.
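
    A hedged sketch of the fusion pattern (not the eSTORM formulation): a physics-based baseline is corrected by an empirical model fitted to the residuals between measured and predicted engine parameters. The physics relation, data, and polynomial corrector are toy stand-ins.

    ```python
    # Hybrid model = physics baseline + empirical residual correction.
    import numpy as np

    def physics_model(u):
        # toy component-level relation: predicted exhaust gas temp from fuel flow
        return 300.0 + 40.0 * u

    rng = np.random.default_rng(3)
    u = rng.uniform(5, 10, 50)                     # operating inputs
    measured = physics_model(u) + 8.0 * np.sin(u) + rng.normal(0, 1, 50)

    residual = measured - physics_model(u)         # what physics fails to explain
    coef = np.polyfit(u, residual, deg=3)          # empirical residual model

    def hybrid_model(u_new):
        return physics_model(u_new) + np.polyval(coef, u_new)

    print(hybrid_model(np.array([6.0, 9.0])))      # fused prediction
    ```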

  13. Evaluation of genome-enabled selection for bacterial cold water disease resistance using progeny performance data in Rainbow Trout: Insights on genotyping methods and genomic prediction models

    USDA-ARS?s Scientific Manuscript database

    Bacterial cold water disease (BCWD) causes significant economic losses in salmonid aquaculture, and traditional family-based breeding programs aimed at improving BCWD resistance have been limited to exploiting only between-family variation. We used genomic selection (GS) models to predict genomic br...

  14. Building entity models through observation and learning

    NASA Astrophysics Data System (ADS)

    Garcia, Richard; Kania, Robert; Fields, MaryAnne; Barnes, Laura

    2011-05-01

    To support the missions and tasks of mixed robotic/human teams, future robotic systems will need to adapt to the dynamic behavior of both teammates and opponents. One of the basic elements of this adaptation is the ability to exploit both long- and short-term temporal data. This adaptation allows robotic systems to predict/anticipate, as well as influence, the future behavior of both opponents and teammates, and affords the system the ability to adjust its own behavior in order to optimize its ability to achieve the mission goals. This work is a preliminary step in the effort to develop online entity behavior models through a combination of learning techniques and observations. As knowledge is extracted from the system through sensor and temporal feedback, agents within the multi-agent system attempt to develop and exploit a basic movement model of an opponent. For the purposes of this work, extraction and exploitation are performed through the use of a discretized two-dimensional game. The game consists of a predetermined number of sentries attempting to keep an unknown intruder agent from penetrating their territory. The sentries utilize temporal data coupled with past opponent observations to hypothesize the probable locations of the opponent and thus optimize their guarding locations.

  15. Performance of the Heavy Flavor Tracker (HFT) detector in the STAR experiment at RHIC

    NASA Astrophysics Data System (ADS)

    Alruwaili, Manal

    With growing technology, the number of processors is becoming massive. Current supercomputer processing will be available on desktops in the next decade. For mass-scale application software development on the massively parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massively parallel computing and distributed memory models while retaining user-friendliness. Currently available object-oriented languages for massively parallel computing, such as Chapel, X10 and UPC++, exploit distributed computing, data-parallel computing and thread-level parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they do not incorporate: 1) any extension for object distribution to exploit the PGAS model; 2) the flexibility of migrating or cloning an object between places to exploit load balancing; or 3) the programming paradigms that result from integrating data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution, object migration and object cloning; and integrate PGAS-based process constructs with these extensions on distributed objects. A new paradigm, MIDD (Multiple Invocation Distributed Data), is also presented, in which different copies of the same class can be invoked and work on different elements of distributed data concurrently using remote method invocations. I present the new constructs, their grammar and their behavior.

  16. Disentangling and modeling interactions in fish with burst-and-coast swimming reveal distinct alignment and attraction behaviors

    PubMed Central

    Calovi, Daniel S.; Litchinko, Alexandra; Lopez, Ugo; Chaté, Hugues; Sire, Clément

    2018-01-01

    The development of tracking methods for automatically quantifying individual behavior and social interactions in animal groups has opened up new perspectives for building quantitative and predictive models of collective behavior. In this work, we combine extensive data analyses with a modeling approach to measure, disentangle, and reconstruct the actual functional form of interactions involved in the coordination of swimming in rummy-nose tetra (Hemigrammus rhodostomus). This species of fish performs a burst-and-coast swimming behavior that consists of sudden heading changes combined with brief accelerations followed by quasi-passive, straight decelerations. We quantify the spontaneous stochastic behavior of a fish and the interactions that govern wall avoidance and the reaction to a neighboring fish, the latter by exploiting general symmetry constraints on the interactions. In contrast with previous experimental works, we find that both attraction and alignment behaviors control the reaction of fish to a neighbor. We then exploit these results to build a model of spontaneous burst-and-coast swimming and interactions of fish, with all parameters being estimated or directly measured from experiments. This model quantitatively reproduces the key features of the motion and spatial distributions observed in experiments with a single fish and with two fish. This demonstrates the power of our method, which exploits large amounts of data for disentangling and fully characterizing the interactions that govern collective behaviors in animal groups. PMID:29324853

  17. Challenging data and workload management in CMS Computing with network-aware systems

    NASA Astrophysics Data System (ADS)

    Bonacorsi, D.; Wildish, T.

    2014-06-01

    After a successful first run at the LHC, and during the Long Shutdown (LS1) of the accelerator, the workload and data management sectors of the CMS Computing Model are entering an operational review phase in order to concretely assess areas of possible improvement and paths to exploit new promising technology trends. In particular, since the preparation activities for the LHC start-up, networks have constantly been of paramount importance for the execution of CMS workflows, exceeding the original expectations (as from the MONARC model) in terms of performance, stability and reliability. The low-latency transfer of petabytes of CMS data among dozens of WLCG Tiers worldwide using the PhEDEx dataset replication system is an example of the importance of reliable networks. Another example is the exploitation of WAN data access over data federations in CMS. A new emerging area of work is the exploitation of intelligent network services, including bandwidth-on-demand concepts. In this paper, we review the work done in CMS on this front, and the next steps.

  18. Exploiting different active silicon detectors in the International Space Station: ALTEA and DOSTEL galactic cosmic radiation (GCR) measurements

    NASA Astrophysics Data System (ADS)

    Narici, Livo; Berger, Thomas; Burmeister, Sönke; Di Fino, Luca; Rizzo, Alessandro; Matthiä, Daniel; Reitz, Günther

    2017-08-01

    Human exploration of the solar system requires successfully dealing with the radiation exposure issue. The scientific aspect of this issue is twofold: knowing the radiation environment the astronauts are going to face, and linking radiation exposure to health risks. Here we focus on the first part. It is generally agreed that the final tool to describe the radiation environment in a space habitat will be a model featuring the amount of detail needed to perform a meaningful risk assessment. The model should also take into account the shield changes due to the movement of materials inside the habitat, which in turn produce changes in the radiation environment. This model will have to undergo a final validation with a radiation field of similar complexity. The International Space Station (ISS) is a space habitat whose internal radiation environment is similar to what will be found in habitats in deep space, if we use measurements acquired only during high-latitude passages (where the effects of the Earth's magnetic field are reduced). Active detectors, which provide time information and can easily select data from different orbital sections, are the ones best fulfilling the requirements for these kinds of measurements. The exploitation of the radiation measurements performed in the ISS by all the available instruments is therefore mandatory to provide the largest possible database to the scientific community, to be merged with detailed Computer Aided Design (CAD) models, in the quest for a full model validation. While some efforts at comparing results from multiple active detectors have been attempted, a thorough study of a procedure to merge data into a single data matrix, in order to provide the best validation set for radiation environment models, has never been attempted. The aim of this paper is to provide such a procedure; to apply it to two of the best-performing active detector systems in the ISS, the Anomalous Long Term Effects in Astronauts (ALTEA) instrument and the DOSimetry TELescope (DOSTEL) detectors, operated in the frame of the DOSIS and DOSIS 3D projects onboard the ISS; and to present combined results exploiting the features of each of the two apparatuses.

  19. Computer Aided Evaluation of Higher Education Tutors' Performance

    ERIC Educational Resources Information Center

    Xenos, Michalis; Papadopoulos, Thanos

    2007-01-01

    This article presents a method for computer-aided tutor evaluation: Bayesian Networks are used for organizing the collected data about tutors and for enabling accurate estimations and predictions about future tutor behavior. The model provides indications about each tutor's strengths and weaknesses, which enables the evaluator to exploit strengths…

  20. Automated Decomposition of Model-based Learning Problems

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Millar, Bill

    1996-01-01

    A new generation of sensor-rich, massively distributed autonomous systems is being developed that has the potential for unprecedented performance, such as smart buildings, reconfigurable factories, adaptive traffic systems and remote earth ecosystem monitoring. To achieve high performance, these massive systems will need to accurately model themselves and their environment from sensor information. Accomplishing this on a grand scale requires automating the art of large-scale modeling. This paper presents a formalization of decompositional model-based learning (DML), a method developed by observing a modeler's expertise at decomposing large-scale model estimation tasks. The method exploits a striking analogy between learning and consistency-based diagnosis. Moriarty, an implementation of DML, has been applied to thermal modeling of a smart building, demonstrating a significant improvement in learning rate.

  1. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task.

    PubMed

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-12-01

    The recently developed 'two-step' behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects' investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues.
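
    A hedged sketch of a purely model-free learner on a simplified two-step task, the kind of simulation the authors use to probe what behavioural analyses can conclude: the transition structure (common vs. rare) is ignored by the TD update, yet the resulting choice log is exactly what stay/switch analyses consume. All parameters are illustrative.

    ```python
    # Model-free Q-learner on a toy two-step task.
    import numpy as np

    rng = np.random.default_rng(0)
    q = np.zeros(2)                   # model-free values of first-stage actions
    alpha, beta = 0.5, 3.0            # learning rate, softmax inverse temperature
    p_reward = np.array([0.8, 0.2])   # reward probability of each second stage

    for trial in range(1000):
        p = np.exp(beta * q) / np.exp(beta * q).sum()
        a = rng.choice(2, p=p)                      # first-stage choice
        common = rng.random() < 0.7
        stage2 = a if common else 1 - a             # common vs rare transition
        r = float(rng.random() < p_reward[stage2])
        q[a] += alpha * (r - q[a])                  # TD update ignores transition
        # a model-based agent would instead credit the action leading to stage2

    print(q)   # stay/switch analyses are then run on the simulated choice log
    ```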

  2. Simple Plans or Sophisticated Habits? State, Transition and Learning Interactions in the Two-Step Task

    PubMed Central

    Akam, Thomas; Costa, Rui; Dayan, Peter

    2015-01-01

    The recently developed ‘two-step’ behavioural task promises to differentiate model-based from model-free reinforcement learning, while generating neurophysiologically-friendly decision datasets with parametric variation of decision variables. These desirable features have prompted its widespread adoption. Here, we analyse the interactions between a range of different strategies and the structure of transitions and outcomes in order to examine constraints on what can be learned from behavioural performance. The task involves a trade-off between the need for stochasticity, to allow strategies to be discriminated, and a need for determinism, so that it is worth subjects’ investment of effort to exploit the contingencies optimally. We show through simulation that under certain conditions model-free strategies can masquerade as being model-based. We first show that seemingly innocuous modifications to the task structure can induce correlations between action values at the start of the trial and the subsequent trial events in such a way that analysis based on comparing successive trials can lead to erroneous conclusions. We confirm the power of a suggested correction to the analysis that can alleviate this problem. We then consider model-free reinforcement learning strategies that exploit correlations between where rewards are obtained and which actions have high expected value. These generate behaviour that appears model-based under these, and also more sophisticated, analyses. Exploiting the full potential of the two-step task as a tool for behavioural neuroscience requires an understanding of these issues. PMID:26657806

  3. High Performance Polymer Memory and Its Formation

    DTIC Science & Technology

    2007-04-26

    ...the retention time of the device was performed to estimate the barrier height of the charge trap. The activation energy was approximated to be about... characteristics and presented a model to explain the mechanism of electrical switching in the device. By exploiting an electric-field induced charge transfer... electrical current in the high conductivity state would be due to some temperature-independent charge tunneling processes. The I-V curves could be...

  4. Center for Technology for Advanced Scientific Component Software (TASCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damevski, Kostadin

    A resounding success of the Scientific Discovery through Advanced Computing (SciDAC) program is that high-performance computational science is now universally recognized as a critical aspect of scientific discovery [71], complementing both theoretical and experimental research. As scientific communities prepare to exploit unprecedented computing capabilities of emerging leadership-class machines for multi-model simulations at the extreme scale [72], it is more important than ever to address the technical and social challenges of geographically distributed teams that combine expertise in domain science, applied mathematics, and computer science to build robust and flexible codes that can incorporate changes over time. The Center for Technology for Advanced Scientific Component Software (TASCS) tackles these issues by exploiting component-based software development to facilitate collaborative high-performance scientific computing.

  5. Optimal exploitation strategies for an animal population in a Markovian environment: A theory and an example

    USGS Publications Warehouse

    Anderson, D.R.

    1975-01-01

    Optimal exploitation strategies were studied for an animal population in a Markovian (stochastic, serially correlated) environment. This is a general case and encompasses a number of important special cases as simplifications. Extensive empirical data on the Mallard (Anas platyrhynchos) were used as an example of the general theory. The number of small ponds on the central breeding grounds was used as an index to the state of the environment. A general mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. The literature and analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, two hypotheses were explored: (1) exploitation mortality represents a largely additive form of mortality, and (2) exploitation mortality is compensatory with other forms of mortality, at least up to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models, and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component of the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. If we assume that exploitation is largely an additive force of mortality in Mallards, then optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the Mallard breeding population. Dynamic programming is suggested as a very general formulation for realistic solutions to the general optimal exploitation problem. The concepts of state vectors and stage transformations are completely general. Populations can be modeled stochastically, and the objective function can include extra-biological factors. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t, unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest or harvest rate, or designed to maintain a constant breeding population size, is inefficient.
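
    As a concrete illustration of the stochastic dynamic programming idea described above, the following sketch derives a feedback harvest policy by backward induction over a discretized population grid and a two-state Markov environment. All dynamics, rewards, and parameters are illustrative placeholders, not the Mallard model estimated in the paper.

    ```python
    import numpy as np

    # Minimal stochastic-DP sketch: choose a harvest rate each year from the
    # observed population size and environment state (illustrative only).
    pop = np.linspace(1.0, 10.0, 40)          # discretized breeding-population sizes
    env = [0, 1]                              # environment states (e.g. dry / wet)
    P_env = np.array([[0.7, 0.3],             # Markov (serially correlated) weather
                      [0.4, 0.6]])
    harvest = np.linspace(0.0, 0.5, 11)       # candidate harvest rates
    growth = {0: 1.05, 1: 1.25}               # environment-dependent growth factors

    def step(n, e, h):
        """Population after harvest h and environment-driven growth (additive mortality)."""
        return np.clip(n * (1.0 - h) * growth[e], pop[0], pop[-1])

    T = 50
    V = np.zeros((len(pop), len(env)))        # terminal value
    policy = np.zeros((T, len(pop), len(env)))

    for t in reversed(range(T)):
        V_new = np.zeros_like(V)
        for i, n in enumerate(pop):
            for e in env:
                best, best_h = -np.inf, 0.0
                for h in harvest:
                    j = np.abs(pop - step(n, e, h)).argmin()  # nearest grid point
                    val = h * n + 0.95 * (P_env[e] @ V[j])    # yield + expected future
                    if val > best:
                        best, best_h = val, h
                V_new[i, e], policy[t, i, e] = best, best_h
        V = V_new
    # policy[0] maps (observed population, observed environment) -> harvest rate,
    # i.e. the direct feedback control the paper describes.
    ```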

  6. Pursuit-evasion games with information uncertainties for elusive orbital maneuver and space object tracking

    NASA Astrophysics Data System (ADS)

    Shen, Dan; Jia, Bin; Chen, Genshe; Blasch, Erik; Pham, Khanh

    2015-05-01

    This paper develops and evaluates a pursuit-evasion (PE) game approach for elusive orbital maneuver and space object tracking. Unlike PE games in the literature, which assume either that both players have perfect knowledge of the opponents' positions or that primitive sensing models are used, the proposed PE approach addresses the realistic space situational awareness (SSA) problem with imperfect information: the evaders exploit the pursuers' sensing and tracking models to confuse their opponents, maneuvering their orbits to increase the uncertainties that the pursuers, in turn, perform orbital maneuvers to minimize. In the game setup, each game player, P (pursuer) and E (evader), has its own motion equations with a small continuous low thrust. The magnitude of the low thrust is fixed and its direction can be controlled by the associated game player. The entropic uncertainty is used to generate the cost functions of the game players. The Nash or mixed Nash equilibrium is composed of the directional controls of the low thrusts. Numerical simulations demonstrate the performance. Simplified perturbations models (SGP4/SDP4) are exploited to calculate the ground truth of the satellite states (position and velocity).

  7. Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.

    PubMed

    Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo

    Unsupervised object discovery and localization aims to discover dominant object classes and localize all object instances in a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge about the given image collection is exploited to facilitate object discovery. Moreover, the topic models used in those methods suffer from the topic coherence issue: some inferred topics have no clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined so that one must-link constrains only one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes, so the must-links in our approach are semantic-specific, which allows discriminative prior knowledge from Web images to be exploited more efficiently. Extensive experiments validated the effectiveness of our proposed approach on several datasets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.
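
    For context, here is a minimal sketch of the vanilla-LDA baseline that the paper improves upon, using scikit-learn's LatentDirichletAllocation on a hypothetical image-by-visual-word count matrix (random stand-in data); the knowledge-based Dirichlet-tree prior itself is not reproduced.

    ```python
    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    # Documents are images, "words" are quantized visual words, and topics play
    # the role of object classes. The count matrix is random stand-in data.
    rng = np.random.default_rng(0)
    X = rng.poisson(0.3, size=(200, 500))      # 200 images x 500 visual words

    lda = LatentDirichletAllocation(n_components=5, max_iter=20, random_state=0)
    doc_topic = lda.fit_transform(X)           # per-image topic (object-class) mixture

    labels = doc_topic.argmax(axis=1)          # dominant "object class" per image
    top_words = lda.components_.argsort(axis=1)[:, -10:]  # top visual words per topic
    ```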

  8. Profile control simulations and experiments on TCV: a controller test environment and results using a model-based predictive controller

    NASA Astrophysics Data System (ADS)

    Maljaars, E.; Felici, F.; Blanken, T. C.; Galperti, C.; Sauter, O.; de Baar, M. R.; Carpanese, F.; Goodman, T. P.; Kim, D.; Kim, S. H.; Kong, M.; Mavkov, B.; Merle, A.; Moret, J. M.; Nouailletas, R.; Scheffer, M.; Teplukhina, A. A.; Vu, N. M. T.; The EUROfusion MST1-team; The TCV-team

    2017-12-01

    The successful performance of a model predictive profile controller is demonstrated in simulations and experiments on the TCV tokamak, employing a profile controller test environment. Stable high-performance tokamak operation in hybrid and advanced plasma scenarios requires control over the safety factor profile (q-profile) and kinetic plasma parameters such as the plasma beta. This demands reliable profile control routines in presently operational tokamaks. We present a model predictive profile controller that controls the q-profile and plasma beta using power requests to two clusters of gyrotrons and the plasma current request. The performance of the controller is analyzed in both simulations and TCV L-mode discharges, where successful tracking of the estimated inverse q-profile as well as plasma beta is demonstrated under uncertain plasma conditions and in the presence of disturbances. The controller exploits knowledge of the time-varying actuator limits in the actuator input calculation itself, such that fast transitions between targets are achieved without overshoot. A software environment is employed to prepare and test this and three other profile controllers in parallel in simulations and experiments on TCV. This set of tools includes the rapid plasma transport simulator RAPTOR and various algorithms to reconstruct the plasma equilibrium and plasma profiles by merging the available measurements with model-based predictions. In this work the estimated q-profile is based solely on RAPTOR model predictions, owing to the absence of internal current density measurements in TCV. These results encourage further exploitation of model predictive profile control in experiments on TCV and other (future) tokamaks.
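
    The way actuator limits can enter the input calculation directly is easiest to see in a toy receding-horizon problem. The sketch below assumes cvxpy and a scalar linear model standing in for the profile dynamics (not RAPTOR); the known time-varying actuator bound is embedded in the optimization so the solver cannot command overshoot-inducing inputs.

    ```python
    import numpy as np
    import cvxpy as cp

    # Toy linear system; the point is only how time-varying actuator limits
    # enter the MPC problem itself.
    A, B = 0.95, 0.1
    N = 20                                     # prediction horizon
    x0, target = 0.0, 1.0
    u_max = np.linspace(0.5, 2.0, N)           # known time-varying actuator limit

    x = cp.Variable(N + 1)
    u = cp.Variable(N)
    cost = cp.sum_squares(x[1:] - target) + 0.01 * cp.sum_squares(u)
    constraints = [x[0] == x0]
    for k in range(N):
        constraints += [x[k + 1] == A * x[k] + B * u[k],
                        u[k] >= 0, u[k] <= u_max[k]]   # limits inside the QP
    cp.Problem(cp.Minimize(cost), constraints).solve()
    print(u.value[:5])   # first actuator requests; in MPC only u[0] is applied
    ```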

  9. Model-based development of a fault signature matrix to improve solid oxide fuel cell systems on-site diagnosis

    NASA Astrophysics Data System (ADS)

    Polverino, Pierpaolo; Pianese, Cesare; Sorrentino, Marco; Marra, Dario

    2015-04-01

    The paper focuses on the design of a procedure for the development of an on-field diagnostic algorithm for solid oxide fuel cell (SOFC) systems. The diagnosis design phase relies on an in-depth analysis of the mutual interactions among all system components, exploiting the physical knowledge of the SOFC system as a whole. This phase consists of the Fault Tree Analysis (FTA), which identifies the correlations among possible faults and their corresponding symptoms at the system component level. The main outcome of the FTA is an inferential isolation tool (the Fault Signature Matrix, FSM), which univocally links the faults to the symptoms detected during system monitoring. In this work the FTA is taken as a starting point to develop an improved FSM. Making use of a model-based investigation, a fault-to-symptoms dependency study is performed. To this end, a dynamic model previously developed by the authors is exploited to simulate the system under faulty conditions. Five faults are simulated, one for the stack and four occurring at the balance-of-plant (BOP) level. Moreover, the robustness of the FSM design is increased by exploiting symptom thresholds defined for the investigation of the quantitative effects of the simulated faults on the affected variables.
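
    The inferential use of a fault signature matrix can be illustrated with a toy example: each fault maps to a binary symptom signature, and isolation reduces to matching the observed symptom vector against the rows. The faults, symptoms, and matrix entries below are generic placeholders, not the paper's FSM.

    ```python
    import numpy as np

    # Rows are faults, columns are symptoms (1 = symptom expected under fault).
    faults = ["stack degradation", "blower fault", "valve leak",
              "heat-exchanger fouling", "reformer fault"]
    symptoms = ["low stack voltage", "high cathode dT", "low air flow", "high fuel use"]

    FSM = np.array([[1, 1, 0, 1],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1],
                    [0, 1, 0, 0],
                    [1, 0, 0, 1]])

    def isolate(observed):
        """Return faults whose signature matches the observed symptom vector."""
        observed = np.asarray(observed)
        return [f for f, sig in zip(faults, FSM) if np.array_equal(sig, observed)]

    print(isolate([0, 1, 1, 0]))   # -> ['blower fault']
    ```

    Univocal isolation of the kind the paper describes requires the rows to be pairwise distinct, which is what the symptom thresholds help guarantee in practice.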

  10. Mathematical models and photogrammetric exploitation of image sensing

    NASA Astrophysics Data System (ADS)

    Puatanachokchai, Chokchai

    Mathematical models of image sensing are generally categorized into physical/geometrical sensor models and replacement sensor models. While the former are determined from the image sensing geometry, the latter are based on knowledge of the physical/geometric sensor models and use such models in their implementation. The main thrust of this research is replacement sensor models, which have three important characteristics: (1) highly accurate ground-to-image functions; (2) rigorous error propagation that is essentially of the same accuracy as the physical model; and (3) adjustability, or the ability to upgrade the replacement sensor model parameters when additional control information becomes available after the replacement sensor model has replaced the physical model. In this research, such replacement sensor models are termed True Replacement Models, or TRMs. TRMs provide a significant advantage of universality, particularly for image exploitation functions. There have been several writings about replacement sensor models, and except for the so-called RSM (Replacement Sensor Model, a product described in the Manual of Photogrammetry), almost all of them pay very little or no attention to errors and their propagation. This is, it is suspected, because the few physical sensor parameters are usually replaced by many more parameters, thus presenting a potential error estimation difficulty. The third characteristic, adjustability, is perhaps the most demanding. It provides flexibility equivalent to that of triangulation using the physical model. Primary contributions of this thesis include not only "the eigen-approach", a novel means of replacing the original sensor parameter covariance matrices at the time of estimating the TRM, but also the implementation of a hybrid approach that combines the eigen-approach with the added-parameters approach used in the RSM. Using either the eigen-approach or the hybrid approach, rigorous error propagation can be performed during image exploitation. Further, adjustment can be performed when additional control information becomes available after the TRM has been implemented. The TRM is shown to apply to imagery from sensors having different geometries, including an aerial frame camera, a spaceborne linear array sensor, an airborne pushbroom sensor, and an airborne whiskbroom sensor. TRM results show essentially negligible differences compared to those from rigorous physical sensor models, both for geopositioning from single and overlapping images. Simulated as well as real image data are used to address all three characteristics of the TRM.

  11. Using optimal control methods with constraints to generate singlet states in NMR

    NASA Astrophysics Data System (ADS)

    Rodin, Bogdan A.; Kiryutin, Alexey S.; Yurkovskaya, Alexandra V.; Ivanov, Konstantin L.; Yamamoto, Satoru; Sato, Kazunobu; Takui, Takeji

    2018-06-01

    A method is proposed for optimizing the performance of the APSOC (Adiabatic-Passage Spin Order Conversion) technique, which can be exploited in NMR experiments with singlet spin states. In this technique, magnetization-to-singlet conversion (and singlet-to-magnetization conversion) is performed by using adiabatically ramped RF-fields. Optimization utilizes the GRAPE (Gradient Ascent Pulse Engineering) approach, in which, for a fixed search area, we assume monotonicity of the envelope of the RF-field. Such an approach achieves much better performance for APSOC; consequently, the efficiency of magnetization-to-singlet conversion is greatly improved as compared to simple model RF-ramps, e.g., linear ramps. We also demonstrate that the optimization method is reasonably robust to possible inaccuracies in determining the NMR parameters of the spin system under study and in setting the RF-field parameters. The present approach can be exploited in other NMR and EPR applications using adiabatic switching of spin Hamiltonians.

  12. How to use MPI communication in highly parallel climate simulations more easily and more efficiently.

    NASA Astrophysics Data System (ADS)

    Behrens, Jörg; Hanke, Moritz; Jahns, Thomas

    2014-05-01

    In this talk we present a way to facilitate efficient use of MPI communication for developers of climate models. Exploiting the performance potential of today's highly parallel supercomputers with real-world simulations is a complex task. This is partly caused by the low-level nature of the MPI communication library, which is the dominant communication tool, at least for inter-node communication. In order to manage the complexity of the task, climate simulations with non-trivial communication patterns often use an internal abstraction layer above MPI without exploiting the benefits of communication aggregation or MPI datatypes. The solution we propose for this complexity and performance problem is the communication library YAXT. This library is built on top of MPI and takes high-level descriptions of arbitrary domain decompositions and automatically derives an efficient collective data exchange. Several exchanges can be aggregated in order to reduce latency costs. Examples are given that demonstrate the simplicity and the performance gains for selected climate applications.

  13. GPU accelerated particle visualization with Splotch

    NASA Astrophysics Data System (ADS)

    Rivi, M.; Gheller, C.; Dykes, T.; Krokos, M.; Dolag, K.

    2014-07-01

    Splotch is a rendering algorithm for exploration and visual discovery in particle-based datasets coming from astronomical observations or numerical simulations. The strengths of the approach are production of high-quality imagery and support for very large-scale datasets through an effective mix of the OpenMP and MPI parallel programming paradigms. This article reports our experiences in re-designing Splotch to exploit emerging HPC architectures, nowadays increasingly populated with GPUs. A performance model is introduced to guide our re-factoring of Splotch. A number of parallelization issues are discussed, in particular relating to race conditions and workload balancing, towards achieving optimal performance. Our implementation was accomplished by using the CUDA programming paradigm. Our strategy is founded on novel schemes achieving optimized data organization and classification of particles. We deploy a reference cosmological simulation to present performance results on acceleration gains and scalability. We finally outline our vision for future work, including possibilities for further optimizations and exploitation of hybrid systems and emerging accelerators.

  14. Automated UAV-based video exploitation using service oriented architecture framework

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  15. Pupil Diameter Tracks the Exploration-Exploitation Trade-off during Analogical Reasoning and Explains Individual Differences in Fluid Intelligence.

    PubMed

    Hayes, Taylor R; Petrov, Alexander A

    2016-02-01

    The ability to adaptively shift between exploration and exploitation control states is critical for optimizing behavioral performance. Converging evidence from primate electrophysiology and computational neural modeling has suggested that this ability may be mediated by the broad norepinephrine projections emanating from the locus coeruleus (LC) [Aston-Jones, G., & Cohen, J. D. An integrative theory of locus coeruleus-norepinephrine function: Adaptive gain and optimal performance. Annual Review of Neuroscience, 28, 403-450, 2005]. There is also evidence that pupil diameter covaries systematically with LC activity. Although imperfect and indirect, this link makes pupillometry a useful tool for studying the locus coeruleus-norepinephrine system in humans and in high-level tasks. Here, we present a novel paradigm that examines how the pupillary response during exploration and exploitation covaries with individual differences in fluid intelligence during analogical reasoning on Raven's Advanced Progressive Matrices. Pupillometry was used as a noninvasive proxy for LC activity, and concurrent think-aloud verbal protocols were used to identify exploratory and exploitative solution periods. This novel combination of pupillometry and verbal protocols from 40 participants revealed a decrease in pupil diameter during exploitation and an increase during exploration. The temporal dynamics of the pupillary response were characterized by a steep increase during the transition to exploratory periods, sustained dilation for many seconds afterward, and a gradual return to baseline. Moreover, individual differences in the relative magnitude of pupillary dilation accounted for 16% of the variance in Advanced Progressive Matrices scores. Assuming that pupil diameter is a valid index of LC activity, these results establish promising preliminary connections between the literature on locus coeruleus-norepinephrine-mediated cognitive control and the literature on analogical reasoning and fluid intelligence.

  16. Quantitative Evaluation of Performance during Robot-assisted Treatment.

    PubMed

    Peri, E; Biffi, E; Maghini, C; Servodio Iammarrone, F; Gagliardi, C; Germiniasi, C; Pedrocchi, A; Turconi, A C; Reni, G

    2016-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Methodologies, Models and Algorithms for Patients Rehabilitation". The great potential of robots in extracting quantitative and meaningful data is not always exploited in clinical practice. The aim of the present work is to describe a simple parameter to assess the performance of subjects during upper limb robotic training, exploiting data automatically recorded by the robot, with no additional effort for patients and clinicians. Fourteen children affected by cerebral palsy (CP) performed training with Armeo®Spring. Each session was evaluated with P, a simple parameter that depends on the overall performance recorded, and median and interquartile values were computed to perform a group analysis. Median (interquartile) values of P significantly increased from 0.27 (0.21) at T0 to 0.55 (0.27) at T1. This improvement was functionally validated by a significant increase of the Melbourne Assessment of Unilateral Upper Limb Function. The parameter described here was able to show variations in performance over time and enabled a quantitative evaluation of motion abilities in a way that is reliable with respect to a well-known clinical scale.

  17. Multivariate curve resolution based chromatographic peak alignment combined with parallel factor analysis to exploit second-order advantage in complex chromatographic measurements.

    PubMed

    Parastar, Hadi; Akvan, Nadia

    2014-03-13

    In the present contribution, a new combination of multivariate curve resolution-correlation optimized warping (MCR-COW) with trilinear parallel factor analysis (PARAFAC) is developed to exploit the second-order advantage in complex chromatographic measurements. In MCR-COW, the complexity of the chromatographic data is reduced by arranging the data in a column-wise augmented matrix, analyzing it using the MCR bilinear model, and aligning the resolved elution profiles using COW in a component-wise manner. The aligned chromatographic data are then decomposed using the trilinear model of PARAFAC in order to exploit pure chromatographic and spectroscopic information. The performance of this strategy is evaluated using simulated and real high-performance liquid chromatography-diode array detection (HPLC-DAD) datasets. The obtained results showed that MCR-COW can efficiently correct elution-time shifts of target compounds that are completely overlapped by coeluted interferences in complex chromatographic data. In addition, the PARAFAC analysis of aligned chromatographic data has the advantage of unique decomposition of overlapped chromatographic peaks, allowing identification and quantification of the target compounds in the presence of interferences. Finally, to confirm the reliability of the proposed strategy, the performance of MCR-COW-PARAFAC is compared with the frequently used methods PARAFAC, COW-PARAFAC, multivariate curve resolution-alternating least squares (MCR-ALS), and MCR-COW-MCR. In general, in most cases MCR-COW-PARAFAC showed an improvement in terms of lack of fit (LOF), relative error (RE), and spectral correlation coefficients in comparison to the PARAFAC, COW-PARAFAC, MCR-ALS, and MCR-COW-MCR results. Copyright © 2014 Elsevier B.V. All rights reserved.
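
    A minimal sketch of the trilinear decomposition step, assuming the tensorly library and synthetic two-component data in place of real aligned HPLC-DAD chromatograms; the MCR-COW alignment stage is not reproduced here.

    ```python
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    # Data cube: samples x elution time x wavelength, built from two components.
    rng = np.random.default_rng(1)
    elution = np.exp(-0.5 * ((np.arange(50)[:, None] - [15, 30]) / 3.0) ** 2)  # 50 x 2
    spectra = rng.random((40, 2))                                              # 40 x 2
    conc = rng.random((10, 2))                                                 # 10 x 2
    X = np.einsum('ir,jr,kr->ijk', conc, elution, spectra)                     # 10x50x40

    cp_tensor = parafac(tl.tensor(X), rank=2)
    weights, (C, E, S) = cp_tensor            # concentration, elution, spectral modes

    # Lack of fit of the trilinear model against the raw cube:
    lof = tl.norm(tl.tensor(X) - tl.cp_to_tensor(cp_tensor)) / tl.norm(tl.tensor(X))
    ```

    Uniqueness of the trilinear decomposition is what lets the recovered elution and spectral modes be matched to target compounds even when peaks overlap, which is the second-order advantage the abstract refers to.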

  18. A semi-analytical refrigeration cycle modelling approach for a heat pump hot water heater

    NASA Astrophysics Data System (ADS)

    Panaras, G.; Mathioulakis, E.; Belessiotis, V.

    2018-04-01

    The use of heat pump systems in applications like the production of hot water or space heating makes modelling of the underlying processes important, both for evaluating the performance of existing systems and for design purposes. The proposed semi-analytical model offers the opportunity to estimate the performance of a heat pump system producing hot water without using detailed geometrical data or any performance data. This is important because, for many commercial systems, the type and characteristics of the subcomponents involved can hardly be determined, which prevents the implementation of more analytical approaches or the exploitation of the manufacturers' catalogue performance data. The analysis addresses the issues related to the development of the models of the subcomponents involved in the studied system. Issues not discussed thoroughly in the existing literature, such as the refrigerant mass inventory when an accumulator is present, are examined effectively.

  19. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    NASA Astrophysics Data System (ADS)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of the size of objects, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods solely exploit sparsity priors in the spatial domain. When the CT projection suffers from serious data deficiency or various noises, obtaining reconstructed images that meet quality requirements becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from wavelet transformation, aims at exploiting sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data in the process of iterative reconstruction, providing optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm shows better performance in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm, applying the learning strategy, performs better than dual-domain algorithms without a learned regularization model.
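
    A rough sketch of the spatial-domain part of such a scheme: gradient descent on a least-squares data term plus a smoothed total-variation penalty, with a random matrix standing in for the limited-angle projection operator. The Radon-domain tight-frame learning and the alternating direction solver of the paper are not reproduced.

    ```python
    import numpy as np

    def tv_gradient(img, eps=1e-8):
        """Gradient of smoothed TV = sum sqrt(dx^2 + dy^2 + eps), periodic bounds."""
        dx = np.roll(img, -1, 0) - img
        dy = np.roll(img, -1, 1) - img
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag
        return -((px - np.roll(px, 1, 0)) + (py - np.roll(py, 1, 1)))  # -divergence

    rng = np.random.default_rng(2)
    n = 16                                     # tiny image for illustration
    truth = np.zeros((n, n)); truth[4:12, 4:12] = 1.0
    A = rng.standard_normal((150, n * n)) / n  # stand-in for limited-angle projections
    b = A @ truth.ravel()

    x, lam, lr = np.zeros((n, n)), 0.05, 0.05
    for _ in range(300):                       # minimize ||Ax - b||^2 + lam * TV(x)
        data_grad = (A.T @ (A @ x.ravel() - b)).reshape(n, n)
        x -= lr * (data_grad + lam * tv_gradient(x))
    ```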

  20. How ecology shapes exploitation: a framework to predict the behavioural response of human and animal foragers along exploration-exploitation trade-offs.

    PubMed

    Monk, Christopher T; Barbier, Matthieu; Romanczuk, Pawel; Watson, James R; Alós, Josep; Nakayama, Shinnosuke; Rubenstein, Daniel I; Levin, Simon A; Arlinghaus, Robert

    2018-06-01

    Understanding how humans and other animals behave in response to changes in their environments is vital for predicting population dynamics and the trajectory of coupled social-ecological systems. Here, we present a novel framework for identifying emergent social behaviours in foragers (including humans engaged in fishing or hunting) in predator-prey contexts based on the exploration difficulty and exploitation potential of a renewable natural resource. A qualitative framework is introduced that predicts when foragers should behave territorially, search collectively, act independently or switch among these states. To validate it, we derived quantitative predictions from two models of different structure: a generic mathematical model, and a lattice-based evolutionary model emphasising exploitation and exclusion costs. These models independently identified that the exploration difficulty and exploitation potential of the natural resource controls the social behaviour of resource exploiters. Our theoretical predictions were finally compared to a diverse set of empirical cases focusing on fisheries and aquatic organisms across a range of taxa, substantiating the framework's predictions. Understanding social behaviour for given social-ecological characteristics has important implications, particularly for the design of governance structures and regulations to move exploited systems, such as fisheries, towards sustainability. Our framework provides concrete steps in this direction. © 2018 John Wiley & Sons Ltd/CNRS.

  1. Multiple attractors and dynamics in an OLG model with productive environment

    NASA Astrophysics Data System (ADS)

    Caravaggio, Andrea; Sodini, Mauro

    2018-05-01

    This work analyses an overlapping generations model in which economic activity depends on the exploitation of a free-access natural resource. In addition, public expenditure for environmental maintenance is assumed. By characterising some properties of the map and performing numerical simulations, we investigate the consequences of the interplay between environmental public expenditure and the private sector. In particular, we identify different scenarios in which multiple equilibria as well as complex dynamics may arise.

  2. Photons Revisited

    NASA Astrophysics Data System (ADS)

    Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg

    2014-06-01

    A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.

  3. Surrogate Model Application to the Identification of Optimal Groundwater Exploitation Scheme Based on Regression Kriging Method—A Case Study of Western Jilin Province

    PubMed Central

    An, Yongkai; Lu, Wenxi; Cheng, Weiguo

    2015-01-01

    This paper introduces a surrogate model to identify an optimal exploitation scheme; the western Jilin province was selected as the study area. A numerical simulation model of groundwater flow was established first, and four exploitation wells were set in Tongyu county and Qian Gorlos county, respectively, to supply water to Daan county. Second, the Latin Hypercube Sampling (LHS) method was used to collect data in the feasible region for the input variables. A surrogate of the numerical groundwater flow simulation model was developed using the regression kriging method. An optimization model was established to search for an optimal groundwater exploitation scheme, using the minimum average drawdown of the groundwater table and the minimum cost of groundwater exploitation as multi-objective functions. Finally, the surrogate model was invoked by the optimization model in the process of solving the optimization problem. Results show that the relative error and root mean square error of the groundwater table drawdown between the simulation model and the surrogate model for 10 validation samples are both lower than 5%, which is a high approximation accuracy. A comparison between the surrogate-based simulation optimization model and the conventional simulation optimization model for solving the same optimization problem shows that the former needs only 5.5 hours, whereas the latter needs 25 days. The above results indicate that the surrogate model developed in this study can not only considerably reduce the computational burden of the simulation optimization process but also maintain high computational accuracy. This can thus provide an effective method for identifying an optimal groundwater exploitation scheme quickly and accurately. PMID:26264008
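
    The surrogate-building loop can be sketched as follows, assuming scipy's Latin Hypercube sampler and scikit-learn's Gaussian process regressor as a kriging-type stand-in (the paper's regression kriging additionally fits an explicit trend); the analytic drawdown function replaces the expensive flow simulator.

    ```python
    import numpy as np
    from scipy.stats import qmc
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    def expensive_groundwater_model(q):
        """Placeholder for the numerical flow model: returns table drawdown."""
        return 0.8 * q[0] + 0.5 * q[1] + 0.2 * np.sin(3 * q[0]) * q[1]

    # Latin Hypercube samples of the decision variables (two pumping rates).
    sampler = qmc.LatinHypercube(d=2, seed=0)
    Q = qmc.scale(sampler.random(50), [0.0, 0.0], [2.0, 2.0])   # 50 pumping schemes
    drawdown = np.array([expensive_groundwater_model(q) for q in Q])

    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(Q, drawdown)

    # The optimizer now queries the cheap surrogate instead of the simulator:
    pred, std = gp.predict([[0.3, 0.7]], return_std=True)
    ```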

  4. To defer or to stand up? How offender formidability affects third party moral outrage.

    PubMed

    Jensen, Niels Holm; Petersen, Michael Bang

    2011-03-16

    According to models of animal behavior, the relative formidability of conspecifics determines the utility of deferring versus aggressing in situations of conflict. Here we apply and extend these models by investigating how the formidability of exploiters shapes third party moral outrage in humans. Deciding whether to defer to or stand up against a formidable exploiter is a complicated decision as there is both much to lose (formidable individuals are able and prone to retaliate) and much to gain (formidable individuals pose a great future threat). An optimally designed outrage system should, therefore, be sensitive to these cost-benefit trade-offs. To test this argument, participants read scenarios containing exploitative acts (trivial vs. serious) and were presented with head-shot photos of the apparent exploiters (formidable vs. non-formidable). As predicted, results showed that, compared to the non-formidable exploiter, the formidable exploiter activated significantly more outrage in male participants when the exploitative act was serious. Conversely, when it was trivial, the formidable exploiter activated significantly less outrage in male participants. However, these findings were conditioned by the exploiters' perceived trustworthiness. Among female participants, the results showed that moral outrage was not modulated by exploiter formidability.

  5. Performance Analysis of GFDL's GCM Line-By-Line Radiative Transfer Model on GPU and MIC Architectures

    NASA Astrophysics Data System (ADS)

    Menzel, R.; Paynter, D.; Jones, A. L.

    2017-12-01

    Due to their relatively low computational cost, radiative transfer models in global climate models (GCMs) run on traditional CPU architectures generally consist of shortwave and longwave parameterizations over a small number of wavelength bands. With the rise of newer GPU and MIC architectures, however, the performance of high-resolution line-by-line radiative transfer models may soon approach that of the physical parameterizations currently employed in GCMs. Here we present a performance analysis of a new line-by-line radiative transfer model under development at GFDL. Although originally designed to exploit GPU architectures through the use of CUDA, the radiative transfer model has recently been extended with OpenMP in an effort to also target MIC architectures such as Intel's Xeon Phi effectively. Using input data provided by the upcoming Radiative Forcing Model Intercomparison Project (RFMIP, part of CMIP6), we compare model results and performance data for various model configurations and spectral resolutions run on both GPU and Intel Knights Landing architectures against analogous runs of the standard Oxford Reference Forward Model on traditional CPUs.

  6. Geometric saliency to characterize radar exploitation performance

    NASA Astrophysics Data System (ADS)

    Nolan, Adam; Keserich, Brad; Lingg, Andrew; Goley, Steve

    2014-06-01

    Based on the fundamental scattering mechanisms of facetized computer-aided design (CAD) models, we are able to define expected contributions (EC) to the radar signature. The net result of this analysis is the prediction of the salient aspects and the contributing vehicle morphology at each aspect. Although this approach does not provide the fidelity of an asymptotic electromagnetic (EM) simulation, it does provide very fast estimates of the unique scattering that can be consumed by a signature exploitation algorithm. The speed of this approach is particularly relevant given the high dimensionality of target configuration variability due to articulating parts, which are computationally burdensome to predict. The key scattering phenomena considered in this work are the specular response from single-bounce interactions with surfaces and the dihedral response formed between the ground plane and the vehicle. Results of this analysis are demonstrated for a set of civilian target models.
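
    One hedged reading of the specular-contribution computation is sketched below: each facet is scored by how closely its normal points back at the radar, weighted by facet area. The random mesh, the visibility test, and the sharpness heuristic are illustrative assumptions, not the authors' EC definition.

    ```python
    import numpy as np

    # Random stand-in for a facetized CAD mesh: unit normals and facet areas.
    rng = np.random.default_rng(3)
    normals = rng.standard_normal((1000, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    areas = rng.random(1000)

    def specular_saliency(los, sharpness=200.0):
        """los: unit vector from target to radar (monostatic: incidence = return)."""
        alignment = normals @ los            # cos(angle) to perfect back-reflection
        visible = alignment > 0              # facets facing the sensor
        return np.where(visible, areas * np.exp(sharpness * (alignment - 1.0)), 0.0)

    los = np.array([1.0, 0.0, 0.0])
    ec = specular_saliency(los)              # per-facet expected contributions
    salient = np.argsort(ec)[-20:]           # most salient facets at this aspect
    ```

    Sweeping `los` over aspect angles yields the fast, per-aspect saliency map that an exploitation algorithm can consume without running a full EM simulation.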

  7. Physical and numerical modeling of hydrophysical processes on the site of underwater pipelines

    NASA Astrophysics Data System (ADS)

    Garmakova, M. E.; Degtyarev, V. V.; Fedorova, N. N.; Shlychkov, V. A.

    2018-03-01

    The paper outlines issues related to ensuring the safe exploitation of underwater pipelines that are at risk of accidents. The research is based on physical and mathematical modeling of local bottom erosion in the area of the pipeline location. The experimental studies were performed at the Hydraulics Laboratory of the Department of Hydraulic Engineering Construction, Safety and Ecology of NSUACE (Sibstrin). The physical experiments revealed that the intensity of bottom soil reforming depends on how deeply the pipeline is embedded. The ANSYS software was used for numerical modeling, in which the erosion of the sandy bottom beneath the pipeline was simulated. Computational results were compared at various mass flow rates.

  8. Electromagnetic Physics Models for Parallel Computing Architectures

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.

  9. Intensively exploited Mediterranean aquifers: resilience to seawater intrusion and proximity to critical thresholds

    NASA Astrophysics Data System (ADS)

    Mazi, K.; Koussis, A. D.; Destouni, G.

    2014-05-01

    We investigate seawater intrusion in three prominent Mediterranean aquifers that are subject to intensive exploitation and modified hydrologic regimes by human activities: the Nile Delta, Israel Coastal and Cyprus Akrotiri aquifers. Using a generalized analytical sharp interface model, we review the salinization history and current status of these aquifers, and quantify their resilience/vulnerability to current and future seawater intrusion forcings. We identify two different critical limits of seawater intrusion under groundwater exploitation and/or climatic stress: a limit of well intrusion, at which intruded seawater reaches key locations of groundwater pumping, and a tipping point of complete seawater intrusion up to the prevailing groundwater divide of a coastal aquifer. Either limit can be reached, and ultimately crossed, under intensive aquifer exploitation and/or climate-driven change. We show that seawater intrusion vulnerability for different aquifer cases can be directly compared in terms of normalized intrusion performance curves. The site-specific assessments show that (a) the intruding seawater currently seriously threatens the Nile Delta aquifer, (b) in the Israel Coastal aquifer the sharp interface toe approaches the well location and (c) the Cyprus Akrotiri aquifer is currently somewhat less threatened by increased seawater intrusion.

  10. Intensively exploited Mediterranean aquifers: resilience and proximity to critical points of seawater intrusion

    NASA Astrophysics Data System (ADS)

    Mazi, K.; Koussis, A. D.; Destouni, G.

    2013-11-01

    Here we investigate seawater intrusion in three prominent Mediterranean aquifers that are subject to intensive exploitation and modified hydrologic regimes by human activities: the Nile Delta Aquifer, the Israel Coastal Aquifer and the Cyprus Akrotiri Aquifer. Using a generalized analytical sharp-interface model, we review the salinization history and current status of these aquifers, and quantify their resilience/vulnerability to current and future sea intrusion forcings. We identify two different critical limits of sea intrusion under groundwater exploitation and/or climatic stress: a limit of well intrusion, at which intruded seawater reaches key locations of groundwater pumping, and a tipping point of complete sea intrusion up to the prevailing groundwater divide of a coastal aquifer. Either limit can be reached, and ultimately crossed, under intensive aquifer exploitation and/or climate-driven change. We show that sea intrusion vulnerability for different aquifer cases can be directly compared in terms of normalized intrusion performance curves. The site-specific assessments show that the advance of seawater currently seriously threatens the Nile Delta Aquifer and the Israel Coastal Aquifer. The Cyprus Akrotiri Aquifer is currently somewhat less threatened by increased seawater intrusion.

  11. Improving link prediction in complex networks by adaptively exploiting multiple structural features of networks

    NASA Astrophysics Data System (ADS)

    Ma, Chuang; Bao, Zhong-Kui; Zhang, Hai-Feng

    2017-10-01

    So far, many network-structure-based link prediction methods have been proposed. However, these methods each highlight only one or two structural features of a network and then apply them to predict missing links in different networks. The performance of these existing methods is not always satisfactory, since each network has its own unique underlying structural features. In this paper, by analyzing different real networks, we find that the structural features of different networks are remarkably different; indeed, even within the same network, the local structural features can be utterly different. Therefore, more structural features should be considered. However, owing to these remarkably different structural features, the contributions of the different features are hard to specify in advance. Inspired by these facts, an adaptive fusion model for link prediction is proposed that incorporates multiple structural features. In the model, a logistic function combining multiple structural features is defined, and the weight of each feature in the logistic function is adaptively determined by exploiting the known structure information. Finally, we use the learnt logistic function to predict the connection probabilities of missing links. According to our experimental results, the performance of our adaptive fusion model is better than that of many similarity indices.
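
    A minimal sketch of the fusion idea, assuming networkx and scikit-learn: three standard structural indices serve as features of a logistic function whose weights are fitted from the observed structure and then used to score node pairs. The paper's exact feature set and training protocol are not reproduced.

    ```python
    import networkx as nx
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    G = nx.karate_club_graph()
    pairs = [(u, v) for u in G for v in G if u < v]
    y = np.array([G.has_edge(u, v) for u, v in pairs], dtype=int)

    def features(graph, ebunch):
        # Three classic structural similarity indices per node pair.
        cn = [len(list(nx.common_neighbors(graph, u, v))) for u, v in ebunch]
        jc = [p for _, _, p in nx.jaccard_coefficient(graph, ebunch)]
        pa = [p for _, _, p in nx.preferential_attachment(graph, ebunch)]
        return np.column_stack([cn, jc, pa])

    X = features(G, pairs)
    clf = LogisticRegression().fit(X, y)       # learnt per-feature contributions
    scores = clf.predict_proba(X)[:, 1]        # connection probabilities per pair
    print(dict(zip(["common-neighbors", "jaccard", "pref-attach"], clf.coef_[0])))
    ```

    The fitted coefficients play the role of the adaptively determined weights: a network where, say, common neighbors carry more signal will receive a larger weight on that feature than a network where degree (preferential attachment) dominates.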

  12. Combining Psychological Models with Machine Learning to Better Predict People’s Decisions

    DTIC Science & Technology

    2012-03-09

    ...us know that certain activities are unhealthy: smoking, eating unhealthy foods, and not exercising enough. However, we prefer these behaviors as... health, e.g., accept small discounts, while fewer are willing to make drastic lifestyle changes. As was true with the AAT studies, alternative models... the game, the driver must balance between exploitation, or choosing the arm which performed best until the current time, and exploration, or trying new...
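
    The exploitation/exploration balance the fragment refers to is the classic multi-armed bandit trade-off; a minimal epsilon-greedy sketch (arm payoff probabilities are arbitrary) reads:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    true_p = np.array([0.3, 0.5, 0.7])         # unknown to the agent
    counts = np.zeros(3)
    values = np.zeros(3)                       # running mean reward per arm
    eps = 0.1

    for t in range(1000):
        if rng.random() < eps:
            arm = rng.integers(3)              # explore: try a new arm
        else:
            arm = values.argmax()              # exploit: best arm so far
        reward = float(rng.random() < true_p[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]

    print(values, counts)                      # empirical means concentrate on arm 2
    ```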

  13. PREMOR: a point reactor exposure model computer code for survey analysis of power plant performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vondy, D.R.

    1979-10-01

    The PREMOR computer code was written to exploit a simple two-group point nuclear reactor power plant model for survey analysis. Up to thirteen actinides, fourteen fission products, and one lumped absorber nuclide density are followed over a reactor history. Successive feed batches are accounted for, with provision for one to twenty resident batches. The effect of exposing each of the batches to the same neutron flux is determined.
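
    The point-model bookkeeping such a code performs can be sketched as a linear depletion system dN/dt = M N advanced with a matrix exponential; the chain, cross sections, and flux below are illustrative one-group placeholders, not PREMOR's two-group data.

    ```python
    import numpy as np
    from scipy.linalg import expm

    phi = 3e13                                     # flux [n/cm^2/s]
    sigma = np.array([600e-24, 50e-24, 100e-24])   # absorption cross sections [cm^2]
    branch = 0.9                                   # captures feeding the next nuclide

    M = np.diag(-sigma * phi)                      # losses by absorption
    M[1, 0] = branch * sigma[0] * phi              # build-up along the chain
    M[2, 1] = branch * sigma[1] * phi

    N = np.array([1e21, 0.0, 0.0])                 # initial densities [atoms/cm^3]
    dt = 30 * 86400.0                              # one 30-day exposure step
    for step in range(12):                         # one year of exposure
        N = expm(M * dt) @ N                       # advance the nuclide densities
    print(N)
    ```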

  14. Effects of anthropogenic groundwater exploitation on land surface processes: A case study of the Haihe River Basin, Northern China

    NASA Astrophysics Data System (ADS)

    Xie, Z.; Zou, J.; Qin, P.; Sun, Q.

    2014-12-01

    In this study, we incorporated a groundwater exploitation scheme into the land surface model CLM3.5 to investigate the effects of anthropogenic groundwater exploitation on land surface processes in a river basin. Simulations of the Haihe River Basin in northern China were conducted for the years 1965-2000 using the model. A control simulation without exploitation and three exploitation simulations with different water demands, derived from socioeconomic data related to the Basin, were conducted. The results showed that groundwater exploitation for human activities resulted in increased wetting and cooling effects at the land surface and reduced groundwater storage. A lowering of the groundwater table, increased upper soil moisture, reduced 2 m air temperature, and enhanced latent heat flux were detected by the end of the simulated period, and the changes at the land surface were related linearly to the water demands. To determine the possible responses of the land surface processes in extreme cases (i.e., in which the exploitation process either continued or ceased), additional hypothetical simulations for the coming 200 years with constant climate forcing were conducted. The simulations revealed that the local groundwater storage on the plains could not sustain high-intensity exploitation for long if the exploitation process continued at the current rate. Changes attributable to groundwater exploitation would reach extreme values and then weaken within decades as groundwater resources were depleted, and the exploitation process would therefore cease. However, if exploitation were stopped completely to allow groundwater to recover, drying and warming effects, such as increased temperature, reduced soil moisture, and reduced total runoff, would occur in the Basin within the early decades of the simulation period. The effects of exploitation would then gradually disappear, and the land surface variables would approach the natural state and stabilize at different rates. Simulations were also conducted for the cases in which exploitation either continues or ceases, using future climate scenario outputs from a general circulation model. The resulting trends were almost the same as those of the simulations with constant climate forcing.

  15. Exploitation of Infrared Radiance and Retrieval Product Data to Improve Numerical Dust Modeling

    DTIC Science & Technology

    2017-12-20

    Holz, Robert (PI); University of Wisconsin, Madison

  16. Comparing barrier algorithms

    NASA Technical Reports Server (NTRS)

    Arenstorf, Norbert S.; Jordan, Harry F.

    1987-01-01

    A barrier is a method for synchronizing a large number of concurrent computer processes. After considering some basic synchronization mechanisms, a collection of barrier algorithms with either linear or logarithmic depth is presented. A graphical model is described that profiles the execution of the barriers and other parallel programming constructs. This model shows how the interaction between the barrier algorithms and the work that they synchronize can impact their performance. One result is that logarithmic tree-structured barriers show good performance when synchronizing fixed-length work, while linear self-scheduled barriers show better performance when synchronizing fixed-length work with an embedded critical section. The linear barriers are better able to exploit the process skew associated with critical sections. Timing experiments, performed on an eighteen-processor Flex/32 shared-memory multiprocessor, that support these conclusions are detailed.
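
    A linear (central-counter) barrier of the kind compared in the paper can be sketched in a few lines; the sense-reversing flag makes it reusable across repeated synchronization episodes. This is a generic illustration in Python threading, not the paper's Flex/32 implementation.

    ```python
    import threading

    class SenseBarrier:
        def __init__(self, n):
            self.n = n
            self.count = n
            self.sense = False
            self.cond = threading.Condition()

        def wait(self):
            with self.cond:
                local_sense = not self.sense   # the sense this episode flips to
                self.count -= 1
                if self.count == 0:            # last arrival releases everyone
                    self.count = self.n
                    self.sense = local_sense
                    self.cond.notify_all()
                else:
                    while self.sense != local_sense:
                        self.cond.wait()

    barrier = SenseBarrier(4)

    def worker(i):
        for phase in range(3):                 # three synchronized work phases
            barrier.wait()

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
    for t in threads: t.start()
    for t in threads: t.join()
    ```

    A logarithmic-depth alternative replaces the single shared counter with a tree of such pairwise rendezvous points, trading contention on one counter for O(log n) sequential steps.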

  17. An adaptable neuromorphic model of orientation selectivity based on floating gate dynamics

    PubMed Central

    Gupta, Priti; Markan, C. M.

    2014-01-01

    The biggest challenge that the neuromorphic community faces today is to build systems that can be considered truly cognitive. Adaptation and self-organization are the two basic principles that underlie any cognitive function that the brain performs. If we can replicate this behavior in hardware, we move a step closer to our goal of having cognitive neuromorphic systems. Adaptive feature selectivity is a mechanism by which nature optimizes resources so as to have greater acuity for more abundant features. Developing neuromorphic feature maps can help design generic machines that can emulate this adaptive behavior. Most neuromorphic models that have attempted to build self-organizing systems follow the approach of modeling abstract theoretical frameworks in hardware. While this is good from a modeling and analysis perspective, it may not lead to the most efficient hardware. On the other hand, exploiting hardware dynamics to build adaptive systems, rather than forcing the hardware to behave like mathematical equations, seems to be a more robust methodology when it comes to developing actual hardware for real-world applications. In this paper we use a novel time-staggered Winner-Take-All circuit, which exploits the adaptation dynamics of floating gate transistors, to model an adaptive cortical cell that demonstrates orientation selectivity, a well-known biological phenomenon observed in the visual cortex. The cell performs competitive learning, refining its weights in response to input patterns resembling differently oriented bars and becoming selective to a particular oriented pattern. Different analyses performed on the cell, such as orientation tuning, application of abnormal inputs, and response to spatial frequency and periodic patterns, reveal a close similarity between our cell and its biological counterpart. Embedded in an RC grid, these cells interact diffusively, exhibiting cluster formation and making way for adaptively building orientation-selective maps in silicon. PMID:24765062

  18. Conditional High-Order Boltzmann Machines for Supervised Relation Learning.

    PubMed

    Huang, Yan; Wang, Wei; Wang, Liang; Tan, Tieniu

    2017-09-01

    Relation learning is a fundamental problem in many vision tasks. Recently, the high-order Boltzmann machine and its variants have shown great potential in learning various types of data relation in a range of tasks. But most of these models are learned in an unsupervised way, i.e., without using relation class labels, and are therefore not very discriminative for some challenging tasks, e.g., face verification. In this paper, with the goal of performing supervised relation learning, we introduce relation class labels into conventional high-order multiplicative interactions with pairwise input samples, and propose a conditional high-order Boltzmann machine (CHBM), which can learn to classify data relations in a binary classification way. To deal with more complex data relations, we develop two improved variants of the CHBM: 1) the latent CHBM, which jointly performs relation feature learning and classification by using a set of latent variables to block the pathway from pairwise input samples to output relation labels, and 2) the gated CHBM, which untangles factors of variation in data relations by exploiting a set of latent variables to multiplicatively gate the classification of the CHBM. To reduce the large number of model parameters generated by the multiplicative interactions, we approximately factorize the high-order parameter tensors into multiple matrices. We then develop efficient supervised learning algorithms, first pretraining the models using the joint likelihood to provide good parameter initialization, and then finetuning them using the conditional likelihood to enhance their discriminant ability. We apply the proposed models to a series of tasks including invariant recognition, face verification, and action similarity labeling. Experimental results demonstrate that, by exploiting supervised relation labels, our models can greatly improve performance.

  19. The Impact of School Improvement Grants on Achievement: Plans for a National Evaluation Using a Regression Discontinuity Design

    ERIC Educational Resources Information Center

    Deke, John; Dragoset, Lisa

    2015-01-01

    Does receipt of School Improvement Grants (SIG) funding to implement a school intervention model have an impact on outcomes for low-performing schools? This study answers this question using a regression discontinuity design (RDD) that exploits cutoff values on the continuous variables used to define SIG eligibility tiers, comparing outcomes in…
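
    A toy regression discontinuity estimate on synthetic data may clarify the design; the cutoff, bandwidth, and effect size below are invented for illustration and have no connection to the study's data.

    ```python
    import numpy as np

    # Schools just below an eligibility cutoff receive funding; the
    # treatment effect is the jump in outcomes at the cutoff, estimated
    # here with separate local linear fits on each side.
    rng = np.random.default_rng(2)
    cutoff, effect, n = 0.0, 0.5, 2000
    running = rng.uniform(-1, 1, n)                 # eligibility score
    treated = running < cutoff                      # lowest performers get the grant
    outcome = 1.0 * running + effect * treated + rng.normal(0, 0.3, n)

    h = 0.3                                         # bandwidth around the cutoff
    left = (running >= cutoff - h) & treated
    right = (running <= cutoff + h) & ~treated
    fit_l = np.polyfit(running[left], outcome[left], 1)
    fit_r = np.polyfit(running[right], outcome[right], 1)
    est = np.polyval(fit_l, cutoff) - np.polyval(fit_r, cutoff)
    print(f"estimated effect at cutoff: {est:.3f} (true {effect})")
    ```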

  20. Lattice Boltzmann Methods for Fluid Structure Interaction

    DTIC Science & Technology

    2012-09-01

    Dissertation, Monterey, California, by Stuart R. Blair, September 2012. The use of lattice Boltzmann methods (LBM) for fluid flow and its coupling with finite element method (FEM) structural models for fluid-structure interaction (FSI) is investigated. A body of high-performance LBM software that exploits graphics processing unit (GPU) and multiprocessor…

  1. Uncertainty in tsunami sediment transport modeling

    USGS Publications Warehouse

    Jaffe, Bruce E.; Goto, Kazuhisa; Sugawara, Daisuke; Gelfenbaum, Guy R.; La Selle, SeanPaul M.

    2016-01-01

    Erosion and deposition from tsunamis record information about tsunami hydrodynamics and size that can be interpreted to improve tsunami hazard assessment. We explore sources and methods for quantifying uncertainty in tsunami sediment transport modeling. Uncertainty varies with tsunami, study site, available input data, sediment grain size, and model. Although uncertainty has the potential to be large, published case studies indicate that both forward and inverse tsunami sediment transport models perform well enough to be useful for deciphering tsunami characteristics, including size, from deposits. New techniques for quantifying uncertainty, such as Ensemble Kalman Filtering inversion, and more rigorous reporting of uncertainties will advance the science of tsunami sediment transport modeling. Uncertainty may be decreased with additional laboratory studies that increase our understanding of the semi-empirical parameters and physics of tsunami sediment transport, standardized benchmark tests to assess model performance, and development of hybrid modeling approaches to exploit the strengths of forward and inverse models.

  2. Risk assessment by dynamic representation of vulnerability, exploitation, and impact

    NASA Astrophysics Data System (ADS)

    Cam, Hasan

    2015-05-01

    Assessing and quantifying cyber risk accurately in real-time is essential to providing security and mission assurance in any system and network. This paper presents a modeling and dynamic analysis approach to assessing cyber risk of a network in real-time by representing dynamically its vulnerabilities, exploitations, and impact using integrated Bayesian network and Markov models. Given the set of vulnerabilities detected by a vulnerability scanner in a network, this paper addresses how its risk can be assessed by estimating in real-time the exploit likelihood and impact of vulnerability exploitation on the network, based on real-time observations and measurements over the network. The dynamic representation of the network in terms of its vulnerabilities, sensor measurements, and observations is constructed dynamically using the integrated Bayesian network and Markov models. The transition rates of outgoing and incoming links of states in hidden Markov models are used in determining exploit likelihood and impact of attacks, whereas emission rates help quantify the attack states of vulnerabilities. Simulation results show the quantification and evolving risk scores over time for individual and aggregated vulnerabilities of a network.
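
    A toy forward pass over a hidden Markov model may make the exploit-likelihood computation concrete. The states, transition probabilities, and emission probabilities below are invented for illustration; the paper integrates such a model with a Bayesian network and real sensor measurements.

    ```python
    import numpy as np

    # Hidden states are stages of vulnerability exploitation; observations
    # are alert levels. The forward recursion yields the likelihood that
    # an attack has reached each state given the alerts seen so far.
    states = ["dormant", "probed", "exploited"]
    A = np.array([[0.90, 0.08, 0.02],          # state transition probabilities
                  [0.10, 0.70, 0.20],
                  [0.00, 0.05, 0.95]])
    B = np.array([[0.80, 0.15, 0.05],          # emission: P(alert level | state)
                  [0.30, 0.50, 0.20],
                  [0.05, 0.25, 0.70]])
    pi = np.array([1.0, 0.0, 0.0])             # start in "dormant"

    obs = [0, 1, 1, 2, 2]                      # observed alert levels over time
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]          # forward recursion
    belief = alpha / alpha.sum()
    print(dict(zip(states, belief.round(3))))  # current exploit likelihood
    ```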

  3. Electromagnetic physics models for parallel computing architectures

    DOE PAGES

    Amadio, G.; Ananya, A.; Apostolakis, J.; ...

    2016-11-21

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as a part of the GeantV project. Finally, the results of preliminary performance evaluation and physics validation are presented as well.

  4. Novel prescribed performance neural control of a flexible air-breathing hypersonic vehicle with unknown initial errors.

    PubMed

    Bu, Xiangwei; Wu, Xiaoyan; Zhu, Fujing; Huang, Jiaqi; Ma, Zhen; Zhang, Rui

    2015-11-01

    A novel prescribed performance neural controller with unknown initial errors is addressed for the longitudinal dynamic model of a flexible air-breathing hypersonic vehicle (FAHV) subject to parametric uncertainties. Unlike traditional prescribed performance control (PPC), which requires the initial errors to be known accurately, this paper investigates tracking control without accurate initial errors by exploiting a new performance function. A combined neural back-stepping and minimal learning parameter (MLP) technique is employed to develop a prescribed performance controller that provides robust tracking of velocity and altitude reference trajectories. The highlights are that the transient performance of the velocity and altitude tracking errors is satisfactory and that the computational load of the neural approximation is low. Finally, numerical simulation results from a nonlinear FAHV model demonstrate the efficacy of the proposed strategy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Tug-of-war model for the two-bandit problem: nonlocally-correlated parallel exploration via resource conservation.

    PubMed

    Kim, Song-Ju; Aono, Masashi; Hara, Masahiko

    2010-07-01

    We propose a model - the "tug-of-war (TOW) model" - to conduct unique parallel searches using many nonlocally-correlated search agents. The model is based on the property of a single-celled amoeba, the true slime mold Physarum, which maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a "nonlocal correlation" among the branches, i.e., a volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). This nonlocal correlation was shown to be useful for decision making in the case of a dilemma. The multi-armed bandit problem is to determine the optimal strategy for maximizing the total reward under incompatible demands: either exploiting the rewards obtained using already collected information, or exploring new information to acquire higher payoffs at some risk. Our model can efficiently manage this "exploration-exploitation dilemma" and exhibits good performance. The average accuracy rate of our model is higher than those of well-known algorithms such as the modified ε-greedy algorithm and the modified softmax algorithm, especially for solving relatively difficult problems. Moreover, our model flexibly adapts to changing environments, a property essential for living organisms surviving in uncertain environments.
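
    A heavily simplified rendering of the TOW dynamics for two arms is sketched below; the displacement update, noise term, and conservation step are illustrative stand-ins for the paper's formulation, not its actual equations.

    ```python
    import numpy as np

    # Each branch keeps a "displacement" that grows with rewards; a
    # conservation step keeps the total constant, so gains on one branch
    # immediately push the other down (the nonlocal correlation).
    rng = np.random.default_rng(3)
    p = [0.4, 0.6]                      # unknown reward probabilities
    X = np.zeros(2)                     # branch displacements
    omega = 1.0                         # penalty weight for a miss
    pulls, wins = np.zeros(2), 0

    for t in range(2000):
        k = int(np.argmax(X + rng.normal(0, 0.1, 2)))   # pick the longer branch
        reward = rng.random() < p[k]
        X[k] += 1.0 if reward else -omega               # stretch or shrink branch k
        X -= X.mean()                                   # conservation: zero-sum update
        pulls[k] += 1
        wins += reward

    print("pull fractions:", pulls / pulls.sum(), "hit rate:", wins / 2000)
    ```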

  6. Optimal exploitation strategies for an animal population in a stochastic serially correlated environment

    USGS Publications Warehouse

    Anderson, D.R.

    1974-01-01

    Optimal exploitation strategies were studied for an animal population in a stochastic, serially correlated environment. This is a general case and encompasses a number of important cases as simplifications. Data on the mallard (Anas platyrhynchos) were used to explore the exploitation strategies and test several hypotheses because comparatively much is known about the life history and general ecology of this species and extensive empirical data are available for analysis. The number of small ponds on the central breeding grounds was used as an index of the state of the environment. Desirable properties of an optimal exploitation strategy were defined. A mathematical model was formulated to provide a synthesis of the existing literature, estimates of parameters developed from an analysis of data, and hypotheses regarding the specific effect of exploitation on total survival. Both the literature and the analysis of data were inconclusive concerning the effect of exploitation on survival. Therefore, alternative hypotheses were formulated: (1) exploitation mortality represents a largely additive form of mortality, or (2) exploitation mortality is compensatory with other forms of mortality, at least to some threshold level. Models incorporating these two hypotheses were formulated as stochastic dynamic programming models and optimal exploitation strategies were derived numerically on a digital computer. Optimal exploitation strategies were found to exist under rather general conditions. Direct feedback control was an integral component in the optimal decision-making process. Optimal exploitation was found to be substantially different depending upon the hypothesis regarding the effect of exploitation on the population. Assuming that exploitation is largely an additive force of mortality, optimal exploitation decisions are a convex function of the size of the breeding population and a linear or slightly concave function of the environmental conditions. Optimal exploitation under this hypothesis tends to reduce the variance of the size of the population. Under the hypothesis of compensatory mortality forces, optimal exploitation decisions are approximately linearly related to the size of the breeding population. Environmental variables may be somewhat more important than the size of the breeding population to the production of young mallards. In contrast, the size of the breeding population appears to be more important in the exploitation process than is the state of the environment. The form of the exploitation strategy appears to be relatively insensitive to small changes in the production rate. In general, the relative importance of the size of the breeding population may decrease as fecundity increases. The optimal level of exploitation in year t must be based on the observed size of the population and the state of the environment in year t unless the dynamics of the population, the state of the environment, and the result of the exploitation decisions are completely deterministic. Exploitation based on an average harvest, harvest rate, or designed to maintain a constant breeding population size is inefficient.
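
    A stylized dynamic-programming sketch may clarify how such optimal strategies are derived numerically. The population grid, survival and recruitment constants below are invented, and recruitment is deterministic here for brevity, where the paper uses stochastic dynamic programming in a serially correlated environment.

    ```python
    import numpy as np

    # Value iteration over discretized population states with a harvest
    # decision each year and additive harvest mortality; the policy is a
    # direct-feedback rule on the observed breeding population size.
    pop = np.linspace(0, 100, 51)             # discretized breeding population
    harvest = np.linspace(0, 0.5, 11)         # candidate harvest rates
    survival, growth = 0.6, 1.8               # natural survival, recruitment factor
    V = np.zeros_like(pop)

    for _ in range(200):                      # value iteration to a fixed policy
        Q = np.empty((pop.size, harvest.size))
        for j, h in enumerate(harvest):
            yield_now = pop * h               # immediate harvest return
            next_pop = np.clip(pop * (1 - h) * survival * growth, 0, 100)
            idx = np.searchsorted(pop, next_pop).clip(0, pop.size - 1)
            Q[:, j] = yield_now + 0.95 * V[idx]
        V = Q.max(axis=1)

    policy = harvest[Q.argmax(axis=1)]
    print("optimal harvest rate at N=20, 60, 100:",
          policy[np.searchsorted(pop, [20, 60, 100])])
    ```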

  7. An alternative method for centrifugal compressor loading factor modelling

    NASA Astrophysics Data System (ADS)

    Galerkin, Y.; Drozdov, A.; Rekstin, A.; Soldatova, K.

    2017-08-01

    In classical design methods, the loading factor at the design point is calculated by one or another empirical formula; performance modelling as a whole is out of consideration. Test data from compressor stages demonstrate that the loading factor versus the flow coefficient at the impeller exit has a linear character, independent of compressibility. The well-known Universal Modelling Method exploits this fact. Two points define the function: the loading factor at the design point and at zero flow rate. The corresponding formulae include empirical coefficients, and a good modelling result is possible if the choice of coefficients is based on experience and close analogues. Earlier, Y. Galerkin and K. Soldatova proposed to define the loading factor performance by the angle of its inclination to the ordinate axis and by the loading factor at zero flow rate. Simple and definite equations with four geometry parameters were proposed for the loading factor performance calculated for inviscid flow. The authors of this publication have studied the test performance of thirteen stages of different types. Equations with universal empirical coefficients are proposed; the calculation error lies in the range of ±1.5%. This alternative model of loading factor performance is included in new versions of the Universal Modelling Method.

  8. Modeling of Karachaganak field development

    NASA Astrophysics Data System (ADS)

    Sadvakasov, A. A.; Shamsutdinova, G. F.; Almukhametova, E. M.; Gabdrakhmanov, N. Kh

    2018-05-01

    Management of a geological deposit includes the study and analysis of oil recovery, identification of factors influencing production performance and oil-bearing rock flooding, reserve recovery, and other indicators characterizing field development in general. Regulating the exploitation of an oil deposit essentially means controlling fluid flow within the reservoir. This is ensured through the designed development system: continuous improvement of production and injection well placement, optimal operating modes, and the service conditions of downhole and surface oil-field equipment, taking into account changes in the physical-geological properties of the field and using modern equipment to obtain the best performance indicators.

  9. The Peace and Power Conceptual Model: An Assessment Guide for School Nurses Regarding Commercial Sexual Exploitation of Children.

    PubMed

    Fraley, Hannah E; Aronowitz, Teri

    2017-10-01

    Human trafficking is a global problem; more than half of all victims are children. In the United States (US), at-risk youth continue to attend school. School nurses are on the frontlines, presenting a window of opportunity to identify and prevent exploitation. The available literature targeting school nurses reports that they may lack awareness of commercial sexual exploitation and may hold attitudes and misperceptions about the behaviors of at-risk school children. This is a theoretical paper applying the Peace and Power Conceptual Model to understand the role of school nurses in addressing the commercial sexual exploitation of children.

  10. A learning perspective on individual differences in skilled reading: Exploring and exploiting orthographic and semantic discrimination cues.

    PubMed

    Milin, Petar; Divjak, Dagmar; Baayen, R Harald

    2017-11-01

    The goal of the present study is to understand the role orthographic and semantic information play in the behavior of skilled readers. Reading latencies from a self-paced sentence reading experiment in which Russian near-synonymous verbs were manipulated appear well-predicted by a combination of bottom-up sublexical letter triplets (trigraphs) and top-down semantic generalizations, modeled using the Naive Discrimination Learner. The results reveal a complex interplay of bottom-up and top-down support from orthography and semantics to the target verbs, whereby activations from orthography only are modulated by individual differences. Using performance on a serial reaction time (SRT) task for a novel operationalization of the mental speed hypothesis, we explain the observed individual differences in reading behavior in terms of the exploration/exploitation hypothesis from reinforcement learning, where initially slower and more variable behavior leads to better performance overall. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Performance analysis of distributed symmetric sparse matrix vector multiplication algorithm for multi-core architectures

    DOE PAGES

    Oryspayev, Dossay; Aktulga, Hasan Metin; Sosonkina, Masha; ...

    2015-07-14

    Sparse matrix-vector multiplication (SpMVM) is an important kernel that frequently arises in high performance computing applications. Due to its low arithmetic intensity, several approaches have been proposed in the literature to improve its scalability and efficiency in large-scale computations. In this paper, our target systems are high-end multi-core architectures, and we use a hybrid message passing interface (MPI) + open multiprocessing (OpenMP) programming model for parallelism. We analyze the performance of a recently proposed implementation of distributed symmetric SpMVM, originally developed for large sparse symmetric matrices arising in ab initio nuclear structure calculations. We also study important features of this implementation and compare it with previously reported implementations that do not exploit the underlying symmetry. Our SpMVM implementations leverage the hybrid paradigm to efficiently overlap expensive communications with computations. Our main comparison criterion is the "CPU core hours" metric, which is the main measure of resource usage on supercomputers. We analyze the effects of a topology-aware mapping heuristic using a simplified network load model. Furthermore, we have tested the different SpMVM implementations on two large clusters with 3D Torus and Dragonfly topology. Our results show that the distributed SpMVM implementation that exploits matrix symmetry and hides communication yields the best value for the "CPU core hours" metric and significantly reduces data movement overheads.
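
    The payoff from exploiting symmetry can be seen in a few lines: store only the upper triangle and accumulate two contributions per stored entry, roughly halving the stored matrix data. The pure-Python sketch below is for clarity only; the paper's MPI+OpenMP implementation is far more involved.

    ```python
    import numpy as np
    from scipy.sparse import random as sprandom, triu

    # Build a random symmetric sparse matrix, keep its upper triangle,
    # and perform y = A x using only the stored (upper) entries.
    rng = np.random.default_rng(4)
    A = sprandom(200, 200, density=0.05, random_state=4)
    A = A + A.T                                # make it symmetric
    U = triu(A).tocoo()                        # keep upper triangle only

    x = rng.normal(size=200)
    y = np.zeros(200)
    for i, j, a in zip(U.row, U.col, U.data):
        y[i] += a * x[j]
        if i != j:                             # mirror the off-diagonal entry
            y[j] += a * x[i]

    print("symmetric SpMV matches full SpMV:", np.allclose(y, A @ x))
    ```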

  12. Effects of anthropogenic groundwater exploitation on land surface processes: A case study of the Haihe River Basin, northern China

    NASA Astrophysics Data System (ADS)

    Zou, Jing; Xie, Zhenghui; Zhan, Chesheng; Qin, Peihua; Sun, Qin; Jia, Binghao; Xia, Jun

    2015-05-01

    In this study, we incorporated a groundwater exploitation scheme into the land surface model CLM3.5 to investigate the effects of the anthropogenic exploitation of groundwater on land surface processes in a river basin. Simulations of the Haihe River Basin in northern China were conducted for the years 1965-2000 using the model. A control simulation without exploitation and three exploitation simulations with different water demands derived from socioeconomic data related to the Basin were conducted. The results showed that groundwater exploitation for human activities resulted in increased wetting and cooling effects at the land surface and reduced groundwater storage. A lowering of the groundwater table, increased upper soil moisture, reduced 2 m air temperature, and enhanced latent heat flux were detected by the end of the simulated period, and the changes at the land surface were related linearly to the water demands. To determine the possible responses of the land surface processes in extreme cases (i.e., in which the exploitation process either continued or ceased), additional hypothetical simulations for the coming 200 years with constant climate forcing were conducted, regardless of changes in climate. The simulations revealed that the local groundwater storage on the plains could not sustain high-intensity exploitation for long if the process continued at the current rate: changes attributable to groundwater exploitation would reach extreme values and then weaken within decades as groundwater resources were depleted, and exploitation would therefore cease. However, if exploitation is stopped completely to allow groundwater to recover, drying and warming effects, such as increased temperature, reduced soil moisture, and reduced total runoff, would occur in the Basin within the early decades of the simulation period. The effects of exploitation would then gradually disappear, and the variables would approach the natural state and stabilize at different rates. Simulations were also conducted, using future climate scenario outputs from a general circulation model, for cases in which exploitation either continues or ceases. The resulting trends were almost the same as those of the simulations with constant climate forcing, despite differences in the climate data input. Therefore, a balance between slow groundwater restoration and rapid human development of the land must be achieved to maintain a sustainable water resource.

  13. Trade-off between learning and exploitation: the Pareto-optimal versus evolutionarily stable learning schedule in cumulative cultural evolution.

    PubMed

    Wakano, Joe Yuichiro; Miura, Chiaki

    2014-02-01

    Inheritance of culture is achieved by social learning and improvement is achieved by individual learning. To realize cumulative cultural evolution, social and individual learning should be performed in this order in one's life. However, it is not clear whether such a learning schedule can evolve by the maximization of individual fitness. Here we study optimal allocation of lifetime to learning and exploitation in a two-stage life history model under a constant environment. We show that the learning schedule by which high cultural level is achieved through cumulative cultural evolution is unlikely to evolve as a result of the maximization of individual fitness, if there exists a trade-off between the time spent in learning and the time spent in exploiting the knowledge that has been learned in earlier stages of one's life. Collapse of a fully developed culture is predicted by a game-theoretical analysis where individuals behave selfishly, e.g., less learning and more exploiting. The present study suggests that such factors as group selection, the ability of learning-while-working ("on the job training"), or environmental fluctuation might be important in the realization of rapid and cumulative cultural evolution that is observed in humans. Copyright © 2013 Elsevier Inc. All rights reserved.

  14. Cloud Model Bat Algorithm

    PubMed Central

    Zhou, Yongquan; Xie, Jian; Li, Liangliang; Ma, Mingzhi

    2014-01-01

    Bat algorithm (BA) is a novel stochastic global optimization algorithm. The cloud model is an effective tool for transforming between qualitative concepts and their quantitative representations. Based on the bat echolocation mechanism and the excellent characteristics of the cloud model for representing uncertain knowledge, a new cloud model bat algorithm (CBA) is proposed. This paper focuses on remodeling the echolocation model based on the living and preying characteristics of bats, utilizing the transformation theory of the cloud model to depict the qualitative concept "bats approach their prey." Furthermore, a Lévy flight mode and a population information communication mechanism are introduced to balance exploration and exploitation. The simulation results show that the cloud model bat algorithm performs well on function optimization. PMID:24967425
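
    For reference, a minimal standard bat algorithm, which the CBA modifies with cloud-model operators, Lévy flights, and information sharing, can be sketched as follows; all constants and the toy objective are illustrative.

    ```python
    import numpy as np

    # Frequency-tuned velocities pull bats toward the current best,
    # balancing exploration (random frequencies) with exploitation
    # (local random walks around the best solution).
    rng = np.random.default_rng(5)
    n, dim, fmin, fmax = 20, 2, 0.0, 2.0
    x = rng.uniform(-5, 5, (n, dim))           # bat positions
    v = np.zeros((n, dim))                     # bat velocities

    def sphere(p):                             # toy objective to minimize
        return (p ** 2).sum(axis=-1)

    best = x[sphere(x).argmin()].copy()
    for _ in range(200):
        f = fmin + (fmax - fmin) * rng.random((n, 1))   # random frequencies
        v += (x - best) * f                             # frequency-tuned update
        x = x + v
        cand = best + 0.01 * rng.normal(size=dim)       # local walk around best
        if sphere(cand) < sphere(best):
            best = cand
        i = sphere(x).argmin()
        if sphere(x[i]) < sphere(best):
            best = x[i].copy()

    print("best found:", best, "f =", sphere(best))
    ```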

  15. Quantitative petri net model of gene regulated metabolic networks in the cell.

    PubMed

    Chen, Ming; Hofestädt, Ralf

    2011-01-01

    A method to exploit hybrid Petri nets (HPN) for quantitatively modeling and simulating gene-regulated metabolic networks is demonstrated. A global kinetic modeling strategy and a Petri net modeling algorithm are applied to model bioprocess functioning and perform model analysis. With the model, the interrelations between pathway analysis and metabolic control mechanisms are outlined. Diagrammatic results for the dynamics of metabolites are simulated and observed by implementing an HPN tool, Visual Object Net ++. An explanation of the observed behavior of the urea cycle is proposed to indicate possibilities for metabolic engineering and medical care. Finally, the perspective of Petri nets on modeling and simulation of metabolic networks is discussed.

  16. Sustainable exploitation and management of autogenic ecosystem engineers: application to oysters in Chesapeake Bay.

    PubMed

    Wilberg, Michael J; Wiedenmann, John R; Robinson, Jason M

    2013-06-01

    Autogenic ecosystem engineers are critically important parts of many marine and estuarine systems because of their substantial effect on ecosystem services. Oysters are of particular importance because of their capacity to modify coastal and estuarine habitats and the highly degraded status of their habitats worldwide. However, models to predict dynamics of ecosystem engineers have not previously included the effects of exploitation. We developed a linked population and habitat model for autogenic ecosystem engineers undergoing exploitation. We parameterized the model to represent eastern oyster (Crassostrea virginica) in upper Chesapeake Bay by selecting sets of parameter values that matched observed rates of change in abundance and habitat. We used the model to evaluate the effects of a range of management and restoration options including sustainability of historical fishing pressure, effectiveness of a newly enacted sanctuary program, and relative performance of two restoration approaches. In general, autogenic ecosystem engineers are expected to be substantially less resilient to fishing than an equivalent species that does not rely on itself for habitat. Historical fishing mortality rates in upper Chesapeake Bay for oysters were above the levels that would lead to extirpation. Reductions in fishing or closure of the fishery were projected to lead to long-term increases in abundance and habitat. For fisheries to become sustainable outside of sanctuaries, a substantial larval subsidy would be required from oysters within sanctuaries. Restoration efforts using high-relief reefs were predicted to allow recovery within a shorter period of time than low-relief reefs. Models such as ours, that allow for feedbacks between population and habitat dynamics, can be effective tools for guiding management and restoration of autogenic ecosystem engineers.

  17. Sparsity-Cognizant Algorithms with Applications to Communications, Signal Processing, and the Smart Grid

    NASA Astrophysics Data System (ADS)

    Zhu, Hao

    Sparsity plays an instrumental role in a plethora of scientific fields, including statistical inference for variable selection, parsimonious signal representations, and solving under-determined systems of linear equations, which has led to the ground-breaking result of compressive sampling (CS). This Thesis leverages exciting ideas of sparse signal reconstruction to develop sparsity-cognizant algorithms, and analyze their performance. The vision is to devise tools exploiting the 'right' form of sparsity for the 'right' application domain of multiuser communication systems, array signal processing systems, and the emerging challenges in the smart power grid. Two important power system monitoring tasks are addressed first by capitalizing on the hidden sparsity. To robustify power system state estimation, a sparse outlier model is leveraged to capture the possible corruption in every datum, while the problem nonconvexity due to nonlinear measurements is handled using the semidefinite relaxation technique. Different from existing iterative methods, the proposed algorithm approximates well the global optimum regardless of the initialization. In addition, for enhanced situational awareness, a novel sparse overcomplete representation is introduced to capture (possibly multiple) line outages, and develop real-time algorithms for solving the combinatorially complex identification problem. The proposed algorithms exhibit near-optimal performance while incurring only linear complexity in the number of lines, which makes it possible to quickly bring contingencies to attention. This Thesis also accounts for two basic issues in CS, namely fully-perturbed models and the finite alphabet property. The sparse total least-squares (S-TLS) approach is proposed to furnish CS algorithms for fully-perturbed linear models, leading to statistically optimal and computationally efficient solvers. The S-TLS framework is well motivated for grid-based sensing applications and exhibits higher accuracy than existing sparse algorithms. On the other hand, exploiting the finite alphabet of unknown signals emerges naturally in communication systems, along with sparsity coming from the low activity of each user. Compared to approaches only accounting for either one of the two, joint exploitation of both leads to statistically optimal detectors with improved error performance.
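
    A basic sparse-recovery solver illustrates the kind of machinery underlying these contributions. The ISTA sketch below solves the plain l1-regularized least-squares problem on synthetic data; the S-TLS variant discussed above additionally models perturbations in the mixing matrix itself, which this sketch does not.

    ```python
    import numpy as np

    # Iterative shrinkage-thresholding (ISTA) for
    #   min_s 0.5 * ||y - A s||^2 + lam * ||s||_1
    rng = np.random.default_rng(6)
    m, n, k = 40, 100, 5
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    s_true = np.zeros(n)
    s_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    y = A @ s_true + 0.01 * rng.normal(size=m)

    lam, step = 0.01, 1.0 / np.linalg.norm(A, 2) ** 2   # step from spectral norm
    s = np.zeros(n)
    for _ in range(500):
        grad = A.T @ (A @ s - y)
        z = s - step * grad
        s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # soft threshold

    print("support recovered:",
          set(np.flatnonzero(np.abs(s) > 0.05)) == set(np.flatnonzero(s_true)))
    ```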

  18. Initial Results from an Energy-Aware Airborne Dynamic, Data-Driven Application System Performing Sampling in Coherent Boundary-Layer Structures

    NASA Astrophysics Data System (ADS)

    Frew, E.; Argrow, B. M.; Houston, A. L.; Weiss, C.

    2014-12-01

    The energy-aware airborne dynamic, data-driven application system (EA-DDDAS) performs persistent sampling in complex atmospheric conditions by exploiting wind energy using the dynamic data-driven application system paradigm. The main challenge for future airborne sampling missions is operation with tight integration of physical and computational resources over wireless communication networks, in complex atmospheric conditions. The physical resources considered here include sensor platforms, particularly mobile Doppler radar and unmanned aircraft, the complex conditions in which they operate, and the region of interest. Autonomous operation requires distributed computational effort connected by layered wireless communication. Onboard decision-making and coordination algorithms can be enhanced by atmospheric models that assimilate input from physics-based models and wind fields derived from multiple sources. These models are generally too complex to be run onboard the aircraft, so they need to be executed in ground vehicles in the field, and connected over broadband or other wireless links back to the field. Finally, the wind field environment drives strong interaction between the computational and physical systems, both as a challenge to autonomous path planning algorithms and as a novel energy source that can be exploited to improve system range and endurance. Implementation details of a complete EA-DDDAS will be provided, along with preliminary flight test results targeting coherent boundary-layer structures.

  19. A geographically weighted regression model for geothermal potential assessment in mediterranean cultural landscape

    NASA Astrophysics Data System (ADS)

    D'Arpa, S.; Zaccarelli, N.; Bruno, D. E.; Leucci, G.; Uricchio, V. F.; Zurlini, G.

    2012-04-01

    Geothermal heat can be used directly in many applications (agro-industrial processes, sanitary hot water production, heating/cooling systems, etc.). These applications respond to energetic and environmental sustainability criteria, ensuring substantial energy savings with low environmental impacts. In particular, in Mediterranean cultural landscapes the exploitation of geothermal energy offers a valuable alternative to other energy systems that consume more land and have greater visual impact. However, low-enthalpy geothermal energy applications at the regional scale require careful design and planning to fully exploit the benefits and reduce the drawbacks. We propose a first example of the application of Geographically Weighted Regression (GWR) to the modeling of geothermal potential in the Apulia Region (South Italy), integrating hydrological (e.g., depth to water table, water speed and temperature) and geological-geotechnical (e.g., lithology, thermal conductivity) parameters with land-use indicators. The GWR model can effectively cope with data quality, spatial anisotropy, lack of stationarity, and the presence of discontinuities in the underlying data maps. The geothermal potential assessment required good knowledge of the space-time variation of the numerous parameters related to the status of the geothermal resource, a contextual analysis of spatial and environmental features, and the presence and nature of regulatory or infrastructural constraints. We created an ad hoc geodatabase within ArcGIS 10, collecting relevant data and performing a quality assessment. Cross-validation shows a high level of consistency for the local spatial models, and error maps depict areas of lower reliability. Based on the low-enthalpy geothermal potential map created, a first zoning of the study area is proposed, considering four levels of possible exploitation. This zoning is linked to and refined by the legal constraints acting at the regional or province level, as enforced by the regional plan for the protection of the landscape ("Piano Urbanistico Territoriale Tematico Paesaggio"), the regional plan for the protection of water and groundwater ("Piano di Tutela delle Acque"), the regional plan of hydrogeological risk ("Piano di Assetto Idrogeologico"), and the province-level master plan for development ("Piano Territoriale di Coordinamento Provinciale"). We believe our results can be a substantial contribution to the ongoing regional debate on the exploitation of geothermal potential, as well as an important knowledge base for integrating this topic into the new regional energy and environment plan ("Piano Energetico Ambientale Regionale").
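
    A bare-bones GWR fit may clarify the local-regression idea: at each target location a weighted least-squares fit uses a spatial kernel over distance, so coefficients vary smoothly in space. The synthetic data, single covariate, and Gaussian kernel bandwidth below are illustrative; the study combines many hydrological and geological covariates.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 300
    coords = rng.uniform(0, 10, (n, 2))                  # site locations
    x = rng.normal(size=n)                               # one covariate
    beta_true = 1.0 + 0.3 * coords[:, 0]                 # coefficient drifts eastward
    y = beta_true * x + rng.normal(0, 0.2, n)

    def gwr_coef(target, bandwidth=1.5):
        """Local weighted least-squares fit centred on `target`."""
        d2 = ((coords - target) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2 * bandwidth ** 2))           # Gaussian spatial kernel
        X = np.column_stack([np.ones(n), x])
        XtW = X.T * w
        return np.linalg.solve(XtW @ X, XtW @ y)         # [intercept, slope]

    for tx in (1.0, 5.0, 9.0):
        print(f"local slope near x={tx}:", gwr_coef(np.array([tx, 5.0]))[1].round(2))
    ```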

  20. Rich client data exploration and research prototyping for NOAA

    NASA Astrophysics Data System (ADS)

    Grossberg, Michael; Gladkova, Irina; Guch, Ingrid; Alabi, Paul; Shahriar, Fazlul; Bonev, George; Aizenman, Hannah

    2009-08-01

    Data from satellites and model simulations is increasing exponentially as observations and model computing power improve rapidly. Not only is technology producing more data, but it often comes from sources all over the world. Researchers and scientists who must collaborate are also located globally. This work presents a software design and technologies which will make it possible for groups of researchers to explore large data sets visually together without the need to download these data sets locally. The design will also make it possible to exploit high performance computing remotely and transparently to analyze and explore large data sets. Computer power, high quality sensing, and data storage capacity have improved at a rate that outstrips our ability to develop software applications that exploit these resources. It is impractical for NOAA scientists to download all of the satellite and model data that may be relevant to a given problem and the computing environments available to a given researcher range from supercomputers to only a web browser. The size and volume of satellite and model data are increasing exponentially. There are at least 50 multisensor satellite platforms collecting Earth science data. On the ground and in the sea there are sensor networks, as well as networks of ground based radar stations, producing a rich real-time stream of data. This new wealth of data would have limited use were it not for the arrival of large-scale high-performance computation provided by parallel computers, clusters, grids, and clouds. With these computational resources and vast archives available, it is now possible to analyze subtle relationships which are global, multi-modal and cut across many data sources. Researchers, educators, and even the general public, need tools to access, discover, and use vast data center archives and high performance computing through a simple yet flexible interface.

  1. Forward flight of birds revisited. Part 1: aerodynamics and performance.

    PubMed

    Iosilevskii, G

    2014-10-01

    This paper is the first part of a two-part exposition addressing the performance and dynamic stability of birds. The aerodynamic model underlying the entire study is presented in this part. It exploits the simplicity of the lifting line approximation to furnish the forces and moments acting on a single wing in closed analytical form. The accuracy of the model is corroborated by comparison with numerical simulations based on the vortex lattice method. Performance is studied both in tethered (as on a sting in a wind tunnel) and in free flight. Wing twist is identified as the main parameter affecting flight performance: at high speeds it improves efficiency, the rate of climb, and the maximal level speed; at low speeds it allows flying slower. It is demonstrated that, under most circumstances, the difference in performance between tethered and free flight is small.

  2. Parallel computing of a climate model on the dawn 1000 by domain decomposition method

    NASA Astrophysics Data System (ADS)

    Bi, Xunqiang

    1997-12-01

    In this paper the parallel computing of a grid-point nine-level atmospheric general circulation model on the Dawn 1000 is introduced. The model was developed by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences (CAS). The Dawn 1000 is a MIMD massive parallel computer made by National Research Center for Intelligent Computer (NCIC), CAS. A two-dimensional domain decomposition method is adopted to perform the parallel computing. The potential ways to increase the speed-up ratio and exploit more resources of future massively parallel supercomputation are also discussed.

  3. Automated Assume-Guarantee Reasoning by Abstraction Refinement

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Giannakopoulou, Dimitra

    2008-01-01

    Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counter-examples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves similar or better performance than a previous learning-based implementation.

  4. Plasma brake model for preliminary mission analysis

    NASA Astrophysics Data System (ADS)

    Orsini, Leonardo; Niccolai, Lorenzo; Mengali, Giovanni; Quarta, Alessandro A.

    2018-03-01

    Plasma brake is an innovative propellantless propulsion system concept that exploits the Coulomb collisions between a charged tether and the ions in the surrounding environment (typically, the ionosphere) to generate an electrostatic force orthogonal to the tether direction. Previous studies on the plasma brake effect have emphasized the existence of a number of different parameters necessary to obtain an accurate description of the propulsive acceleration from a physical viewpoint. The aim of this work is to discuss an analytical model capable of estimating, with the accuracy required by a preliminary mission analysis, the performance of a spacecraft equipped with a plasma brake in a (near-circular) low Earth orbit. The simplified mathematical model is first validated through numerical simulations, and is then used to evaluate the plasma brake performance in some typical mission scenarios, in order to quantify the influence of the system parameters on the mission performance index.

  5. Public–private interaction in pharmaceutical research

    PubMed Central

    Cockburn, Iain; Henderson, Rebecca

    1996-01-01

    We empirically examine interaction between the public and private sectors in pharmaceutical research using qualitative data on the drug discovery process and quantitative data on the incidence of coauthorship between public and private institutions. We find evidence of significant reciprocal interaction, and reject a simple “linear” dichotomous model in which the public sector performs basic research and the private sector exploits it. Linkages to the public sector differ across firms, reflecting variation in internal incentives and policy choices, and the nature of these linkages correlates with their research performance. PMID:8917485

  6. Generating performance portable geoscientific simulation code with Firedrake (Invited)

    NASA Astrophysics Data System (ADS)

    Ham, D. A.; Bercea, G.; Cotter, C. J.; Kelly, P. H.; Loriant, N.; Luporini, F.; McRae, A. T.; Mitchell, L.; Rathgeber, F.

    2013-12-01

    This presentation will demonstrate how a change in simulation programming paradigm can be exploited to deliver sophisticated simulation capability which is far easier to programme than are conventional models, is capable of exploiting different emerging parallel hardware, and is tailored to the specific needs of geoscientific simulation. Geoscientific simulation represents a grand challenge computational task: many of the largest computers in the world are devoted to this field, and scientists' requirements for resolution and complexity are far from sated. However, single thread performance has stalled, even sometimes decreased, over the last decade, and has been replaced by ever more parallel systems: both as conventional multicore CPUs and in the emerging world of accelerators. At the same time, the needs of scientists to couple ever-more complex dynamics and parametrisations into their models make the model development task vastly more complex. The conventional approach of writing code in low level languages such as Fortran or C/C++ and then hand-coding parallelism for different platforms by adding library calls and directives forces the intermingling of the numerical code with its implementation. This results in an almost impossible set of skill requirements for developers, who must simultaneously be domain science experts, numericists, software engineers and parallelisation specialists. Even more critically, it requires code to be essentially rewritten for each emerging hardware platform. Since new platforms are emerging constantly, and since code owners do not usually control the procurement of the supercomputers on which they must run, this represents an unsustainable development load. The Firedrake system, conversely, offers the developer the opportunity to write PDE discretisations in the high-level mathematical language UFL from the FEniCS project (http://fenicsproject.org). Non-PDE model components, such as parametrisations, can be written as short C kernels operating locally on the underlying mesh, with no explicit parallelism. The executable code is then generated in C, CUDA or OpenCL and executed in parallel on the target architecture. The system also offers features of special relevance to the geosciences. In particular, the large scale separation between the vertical and horizontal directions in many geoscientific processes can be exploited to offer the flexibility of unstructured meshes in the horizontal direction, without the performance penalty usually associated with those methods.
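
    For concreteness, a minimal Firedrake/UFL example shows how the PDE is written as mathematics while parallel low-level code is generated by the framework. A plain Poisson problem stands in for the far richer geoscientific discretizations discussed above, and the sketch assumes a working Firedrake installation.

    ```python
    from firedrake import (UnitSquareMesh, FunctionSpace, TrialFunction,
                           TestFunction, Function, DirichletBC,
                           SpatialCoordinate, inner, grad, dx, sin, pi, solve)

    mesh = UnitSquareMesh(32, 32)
    V = FunctionSpace(mesh, "CG", 1)
    u, v = TrialFunction(V), TestFunction(V)

    x, y = SpatialCoordinate(mesh)
    f = Function(V).interpolate(sin(pi * x) * sin(pi * y))   # source term

    a = inner(grad(u), grad(v)) * dx        # weak form, written as mathematics
    L = f * v * dx
    u_h = Function(V, name="solution")
    solve(a == L, u_h, bcs=DirichletBC(V, 0.0, "on_boundary"))
    print("max of solution:", u_h.dat.data.max())
    ```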

  7. What is in the feedback? Effect of induced happiness vs. sadness on probabilistic learning with vs. without exploration.

    PubMed

    Bakic, Jasmina; De Raedt, Rudi; Jepma, Marieke; Pourtois, Gilles

    2015-01-01

    According to dominant neuropsychological theories of affect, emotions signal the salience of events and in turn facilitate a wide spectrum of response options or action tendencies. The valence of an emotional experience is pivotal here, as it alters reward and punishment processing, as well as the balance between safety and risk taking, which can be translated into changes in the exploration-exploitation trade-off during reinforcement learning (RL). To test this idea, we compared the behavioral performance of three groups of participants that all completed a variant of a standard probabilistic learning task, but differed in which mood state was induced and maintained (happy, sad, or neutral). To foster a change from an exploration-based to an exploitation-based mode, we removed feedback information once learning was reliably established. Although the mood inductions were successful, learning performance did not differ between the three groups. Critically, when focusing on exploitation-driven learning only, they did not differ either. Moreover, mood valence did not alter the learning rate or exploration per se, when assessed using complementary computational modeling. By systematically comparing these results to our previous study (Bakic et al., 2014), we found that arousal levels differed between studies, which might account for the limited modulatory effects of (positive) mood on RL in the present case. These results challenge the assumption that mood valence alone is enough to create strong shifts in the way exploitation or exploration is carried out during (probabilistic) learning. In this context, we discuss the possibility that both valence and arousal are necessary components of the emotional mood state to yield changes in the use and exploration of incentive cues during RL.

  8. CamOptimus: a tool for exploiting complex adaptive evolution to optimize experiments and processes in biotechnology.

    PubMed

    Cankorur-Cetinkaya, Ayca; Dias, Joao M L; Kludas, Jana; Slater, Nigel K H; Rousu, Juho; Oliver, Stephen G; Dikicioglu, Duygu

    2017-06-01

    Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or are inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple-to-use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a Genetic Algorithm to discover the subspace containing the optimal combination of parameters, and Symbolic Regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257).

  9. Autonomous learning in humanoid robotics through mental imagery.

    PubMed

    Di Nuovo, Alessandro G; Marocco, Davide; Di Nuovo, Santo; Cangelosi, Angelo

    2013-05-01

    In this paper we focus on modeling autonomous learning to improve the performance of a humanoid robot through a modular artificial neural network architecture. A model of a neural controller is presented, which allows the humanoid robot iCub to autonomously improve its sensorimotor skills. This is achieved by endowing the neural controller with a secondary neural system that, by exploiting the sensorimotor skills already acquired by the robot, is able to generate additional imaginary examples that can be used by the controller itself to improve performance through simulated mental training. Results and analysis presented in the paper provide evidence of the viability of the proposed approach and help to clarify the rationale behind the chosen model and its implementation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. A methodology for least-squares local quasi-geoid modelling using a noisy satellite-only gravity field model

    NASA Astrophysics Data System (ADS)

    Klees, R.; Slobbe, D. C.; Farahani, H. H.

    2018-04-01

    The paper is about a methodology to combine a noisy satellite-only global gravity field model (GGM) with other noisy datasets to estimate a local quasi-geoid model using weighted least-squares techniques. In this way, we attempt to improve the quality of the estimated quasi-geoid model and to complement it with a full noise covariance matrix for quality control and further data processing. The methodology goes beyond the classical remove-compute-restore approach, which does not account for the noise in the satellite-only GGM. We suggest and analyse three different approaches of data combination. Two of them are based on a local single-scale spherical radial basis function (SRBF) model of the disturbing potential, and one is based on a two-scale SRBF model. Using numerical experiments, we show that a single-scale SRBF model does not fully exploit the information in the satellite-only GGM. We explain this by a lack of flexibility of a single-scale SRBF model to deal with datasets of significantly different bandwidths. The two-scale SRBF model performs well in this respect, provided that the model coefficients representing the two scales are estimated separately. The corresponding methodology is developed in this paper. Using the statistics of the least-squares residuals and the statistics of the errors in the estimated two-scale quasi-geoid model, we demonstrate that the developed methodology provides a two-scale quasi-geoid model, which exploits the information in all datasets.

  11. Conceptual model and map of financial exploitation of older adults.

    PubMed

    Conrad, Kendon J; Iris, Madelyn; Ridings, John W; Fairman, Kimberly P; Rosen, Abby; Wilber, Kathleen H

    2011-10-01

    This article describes the processes and outcomes of three-dimensional concept mapping to conceptualize financial exploitation of older adults. Statements were generated from a literature review and by local and national panels consisting of 16 experts in the field of financial exploitation. These statements were sorted and rated using Concept Systems software, which grouped the statements into clusters and depicted them as a map. Statements were grouped into six clusters, and ranked by the experts as follows in descending severity: (a) theft and scams, (b) financial victimization, (c) financial entitlement, (d) coercion, (e) signs of possible financial exploitation, and (f) money management difficulties. The hierarchical model can be used to identify elder financial exploitation and differentiate it from related but distinct areas of victimization. The severity hierarchy may be used to develop measures that will enable more precise screening for triage of clients into appropriate interventions.

  12. Unsupervised chunking based on graph propagation from bilingual corpus.

    PubMed

    Zhu, Ling; Wong, Derek F; Chao, Lidia S

    2014-01-01

    This paper presents a novel approach for training an unsupervised shallow parsing model on the unannotated Chinese text of a parallel Chinese-English corpus. In this approach, no annotated information on the Chinese side is used. The exploitation of graph-based label propagation for bilingual knowledge transfer, along with the use of the projected labels as features in the unsupervised model, contributes to better performance. Experimental comparisons with state-of-the-art algorithms show that the proposed approach achieves notably higher accuracy in terms of F-score.

  13. JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning

    NASA Astrophysics Data System (ADS)

    Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro

    2015-12-01

    We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.

  14. Characterization of the IEC 61000-4-6 Electromagnetic Clamp for Conducted-Immunity Testing

    NASA Astrophysics Data System (ADS)

    Grassi, F.; Pignari, S. A.; Spadacini, G.; Toscani, N.; Pelissou, P.

    2016-05-01

    A multiconductor transmission line model (MTL) is used to investigate the operation of the IEC 61000-4-6 electromagnetic (EM) clamp in a conducted-immunity test setup for aerospace applications. Aspects of interest include the performance of such a coupling device at very high frequencies (up to 1 GHz), and for extreme values of the common-mode impedance of equipment (short circuits, open circuits). The MTL model is finally exploited to predict the frequency response of coupling and decoupling factors defined in the IEC 61000-4-6 standard.

  15. Advanced Video Activity Analytics (AVAA): Human Performance Model Report

    DTIC Science & Technology

    2017-12-01

    …the Advanced Video Activity Analytics (AVAA) system. AVAA was designed to help US Army Intelligence Analysts exploit full-motion video more efficiently and…

  16. Principals of Design for High Performing Organizations: A Suggested Research Program. Appendixes

    DTIC Science & Technology

    1994-03-01

    …keiretsu are subjected to the measure of the market and the sting of competition. The Japanese model combines fierce internal competition with a… reasonable that such a quasi-market can exploit the benefits of competition and price signals to provide internal customers with better service at a… system. Would the restriction of market entry necessarily lead to monopolistic competition that ends up extracting unnecessary rents from captive…

  17. Integrated Fusion, Performance Prediction, and Sensor Management for Automatic Target Exploitation

    DTIC Science & Technology

    2007-05-30

    …with a large region of attraction about the true minimum. The physical optics models provide features for high-confidence identification of stationary… the detection test are used to estimate 3D object scattering; multiple images can be noncoherently combined to reconstruct a more complete object… Proc. SPIE Algorithms for Synthetic Aperture Radar Imagery XIII, The International Society for Optical Engineering, April 2006. [40] K. Varshney, M. C…

  18. Explore or Exploit? A Generic Model and an Exactly Solvable Case

    NASA Astrophysics Data System (ADS)

    Gueudré, Thomas; Dobrinevski, Alexander; Bouchaud, Jean-Philippe

    2014-02-01

    Finding a good compromise between the exploitation of known resources and the exploration of unknown, but potentially more profitable choices, is a general problem, which arises in many different scientific disciplines. We propose a stylized model for these exploration-exploitation situations, including population or economic growth, portfolio optimization, evolutionary dynamics, or the problem of optimal pinning of vortices or dislocations in disordered materials. We find the exact growth rate of this model for treelike geometries and prove the existence of an optimal migration rate in this case. Numerical simulations in the one-dimensional case confirm the generic existence of an optimum.

  19. Explore or exploit? A generic model and an exactly solvable case.

    PubMed

    Gueudré, Thomas; Dobrinevski, Alexander; Bouchaud, Jean-Philippe

    2014-02-07

    Finding a good compromise between the exploitation of known resources and the exploration of unknown, but potentially more profitable choices, is a general problem, which arises in many different scientific disciplines. We propose a stylized model for these exploration-exploitation situations, including population or economic growth, portfolio optimization, evolutionary dynamics, or the problem of optimal pinning of vortices or dislocations in disordered materials. We find the exact growth rate of this model for treelike geometries and prove the existence of an optimal migration rate in this case. Numerical simulations in the one-dimensional case confirm the generic existence of an optimum.

  20. From Desktop to Teraflop: Exploiting the U.S. Lead in High Performance Computing. NSF Blue Ribbon Panel on High Performance Computing.

    ERIC Educational Resources Information Center

    National Science Foundation, Washington, DC.

    This report addresses an opportunity to accelerate progress in virtually every branch of science and engineering concurrently, while also boosting the American economy as business firms also learn to exploit these new capabilities. The successful rapid advancement in both science and technology creates its own challenges, four of which are…

  1. The Bi-Directional Prediction of Carbon Fiber Production Using a Combination of Improved Particle Swarm Optimization and Support Vector Machine.

    PubMed

    Xiao, Chuncai; Hao, Kuangrong; Ding, Yongsheng

    2014-12-30

    This paper develops a bi-directional prediction model, based on a support vector machine (SVM) and an improved particle swarm optimization (IPSO) algorithm (SVM-IPSO), that predicts both the performance of carbon fiber from productive parameters and the productive parameters from target performance. The predictive accuracy of an SVM depends strongly on its parameters, so IPSO is exploited to seek the optimal parameters and thereby improve the SVM's prediction capability. Inspired by a cell communication mechanism, IPSO incorporates information from the global best solution into the search strategy to improve exploitation. The model works in two directions: in the forward direction, productive parameters are the input and property indexes are the output; in the backward direction, property indexes are the input and productive parameters are the output, in which case the model becomes a design scheme for new styles of carbon fiber. Results on a set of experimental data show that the proposed model outperforms the radial basis function neural network (RNN), basic particle swarm optimization (PSO), and a hybrid of genetic algorithm and improved particle swarm optimization (GA-IPSO) in most experiments, demonstrating the effectiveness and advantages of the SVM-IPSO model for this forecasting problem.
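
    Below is a minimal sketch of the core idea: a particle swarm tunes SVM regression hyperparameters by cross-validation. The synthetic data, parameter ranges and plain global-best PSO are assumptions standing in for the paper's carbon-fiber data and its cell-communication-inspired IPSO variant.

```python
# Sketch: PSO tuning of SVR hyperparameters (stand-in for SVM-IPSO).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 4))          # stand-in productive parameters
y = X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)  # stand-in property index

def fitness(pos):
    """Negative CV-MSE of an SVR with log-scaled (C, gamma) taken from pos."""
    C, gamma = 10.0 ** pos
    model = SVR(C=C, gamma=gamma)
    return cross_val_score(model, X, y, cv=3, scoring="neg_mean_squared_error").mean()

# Global-best PSO over log10(C) in [-2, 3] and log10(gamma) in [-4, 1] (assumed ranges).
lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])
n, dim, w, c1, c2 = 20, 2, 0.7, 1.5, 1.5
pos = rng.uniform(lo, hi, size=(n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(30):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()   # global best guides exploitation

print("best log10(C), log10(gamma):", gbest)
```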

  2. Pore-Scale Simulation and Sensitivity Analysis of Apparent Gas Permeability in Shale Matrix

    PubMed Central

    Zhang, Pengwei; Hu, Liming; Meegoda, Jay N.

    2017-01-01

    Extremely low permeability due to nano-scale pores is a distinctive feature of gas transport in a shale matrix. The permeability of shale depends on pore pressure, porosity, pore throat size and gas type. The pore network model is a practical way to explain the macro flow behavior of porous media from a microscopic point of view. In this research, gas flow in a shale matrix is simulated using a previously developed three-dimensional pore network model that includes the typical bimodal pore size distribution, anisotropy and low connectivity of the pore structure in shale. The apparent gas permeability of the shale matrix was calculated under different reservoir pressures corresponding to different gas exploitation stages. Results indicate that gas permeability is strongly related to reservoir gas pressure; the apparent permeability is therefore not a single value over the course of shale gas exploitation, and the simulations suggest that assuming a constant permeability in continuum-scale simulation is not accurate: the reservoir pressures of the different exploitation stages should be considered. In addition, a sensitivity analysis was performed to determine the contributions to the apparent permeability of a shale matrix from petro-physical properties of shale such as pore throat size and porosity. Finally, the impact of the connectivity of nano-scale pores on shale gas flux was analyzed. These results provide insight into nano/micro scale flows of shale gas in the shale matrix. PMID:28772465

  3. Pore-Scale Simulation and Sensitivity Analysis of Apparent Gas Permeability in Shale Matrix.

    PubMed

    Zhang, Pengwei; Hu, Liming; Meegoda, Jay N

    2017-01-25

    Extremely low permeability due to nano-scale pores is a distinctive feature of gas transport in a shale matrix. The permeability of shale depends on pore pressure, porosity, pore throat size and gas type. The pore network model is a practical way to explain the macro flow behavior of porous media from a microscopic point of view. In this research, gas flow in a shale matrix is simulated using a previously developed three-dimensional pore network model that includes the typical bimodal pore size distribution, anisotropy and low connectivity of the pore structure in shale. The apparent gas permeability of the shale matrix was calculated under different reservoir pressures corresponding to different gas exploitation stages. Results indicate that gas permeability is strongly related to reservoir gas pressure; the apparent permeability is therefore not a single value over the course of shale gas exploitation, and the simulations suggest that assuming a constant permeability in continuum-scale simulation is not accurate: the reservoir pressures of the different exploitation stages should be considered. In addition, a sensitivity analysis was performed to determine the contributions to the apparent permeability of a shale matrix from petro-physical properties of shale such as pore throat size and porosity. Finally, the impact of the connectivity of nano-scale pores on shale gas flux was analyzed. These results provide insight into nano/micro scale flows of shale gas in the shale matrix.
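
    As a back-of-the-envelope illustration of why apparent permeability rises as reservoir pressure falls, the sketch below uses a classical Klinkenberg-style slip correction as a stand-in for the paper's pore-network simulation; all constants (intrinsic permeability, pore radius, gas properties) are illustrative assumptions.

```python
# Sketch: pressure dependence of apparent gas permeability via slip flow.
import numpy as np

k_inf = 1e-19          # intrinsic (liquid-equivalent) permeability, m^2 (assumed)
r_pore = 10e-9         # representative pore-throat radius, m (assumed)
T, R = 350.0, 8.314    # temperature (K), gas constant

def mean_free_path(p):
    """Kinetic-theory mean free path at pressure p (Pa) for methane."""
    d = 0.38e-9        # kinetic diameter of methane, m (assumed)
    return R * T / (np.sqrt(2.0) * np.pi * d ** 2 * 6.022e23 * p)

def k_apparent(p):
    """First-order slip correction: k_app grows as pressure drops."""
    Kn = mean_free_path(p) / (2.0 * r_pore)   # Knudsen number
    return k_inf * (1.0 + 4.0 * Kn)

for p in [30e6, 20e6, 10e6, 5e6, 1e6]:        # depletion stages, Pa
    print(f"p = {p/1e6:5.1f} MPa  ->  k_app/k_inf = {k_apparent(p)/k_inf:.2f}")
```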

  4. Colonialism in Modern America: The Appalachian Case.

    ERIC Educational Resources Information Center

    Lewis, Helen Matthews, Ed.; And Others

    The essays in this book illustrate a conceptual model for analyzing the social and economic problems of the Appalachian region. The model is variously called Colonialism, Internal Colonialism, Exploitation, or External Oppression. It highlights the process through which dominant outside industrial interests establish control, exploit the region,…

  5. Metrological analysis of a virtual flowmeter-based transducer for cryogenic helium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arpaia, P., E-mail: pasquale.arpaia@unina.it; Technology Department, European Organization for Nuclear Research; Girone, M., E-mail: mario.girone@cern.ch

    2015-12-15

    The metrological performance of a virtual flowmeter-based transducer for monitoring helium under cryogenic conditions is assessed. To this aim, an uncertainty model of the transducer, mainly based on a valve model exploiting a finite-element approach and a virtual flowmeter model based on the Sereg-Schlumberger method, is presented. The models are validated experimentally on a case study for helium monitoring in cryogenic systems at the European Organization for Nuclear Research (CERN). The impact of uncertainty sources on the transducer metrological performance is assessed by a sensitivity analysis, based on statistical experiment design and analysis of variance. In this way, the uncertainty sources most influencing the metrological performance of the transducer are singled out over the input range as a whole, at varying operating and setting conditions. This analysis turns out to be important for CERN cryogenics operation because the metrological design of the transducer is validated, and its components and working conditions with critical specifications for future improvements are identified.

  6. Implementation and performance of parallel Prolog interpreter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, S.; Kale, L.V.; Balkrishna, R.

    1988-01-01

    In this paper, the authors discuss the implementation of a parallel Prolog interpreter on different parallel machines. The implementation is based on the REDUCE-OR process model, which exploits both AND and OR parallelism in logic programs. It is machine independent as it runs on top of the chare kernel, a machine-independent parallel programming system. The authors also give the performance of the interpreter running a diverse set of benchmark programs on parallel machines including shared memory systems (an Alliant FX/8, a Sequent and a MultiMax) and a non-shared-memory system (the Intel iPSC/32 hypercube), in addition to its performance on a multiprocessor simulation system.

  7. Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays

    NASA Astrophysics Data System (ADS)

    Thanh, Vo Hong; Priami, Corrado; Zunino, Roberto

    2014-10-01

    We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and limited to a small number of reactions, saving computation time without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.
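
    A minimal sketch of the rejection mechanism, for a toy two-reaction system: candidates are drawn from precomputed propensity upper bounds and accepted with probability (exact propensity)/(bound), so exact propensities are evaluated only at acceptance tests and infrequent bound refreshes. The reaction network and the 20% bound inflation are assumptions; the authors' algorithm additionally handles delays and uses lower bounds to skip many exact evaluations.

```python
# Sketch: rejection-based stochastic simulation with propensity bounds.
import numpy as np

rng = np.random.default_rng(1)
x = np.array([100, 0])                 # species counts: A, B
nu = np.array([[-1, +1], [+1, -1]])    # reactions: A -> B, B -> A
k = np.array([1.0, 0.5])

def propensity(x):
    return k * np.array([x[0], x[1]])

a_ub = propensity(x) * 1.2             # upper bounds (assumed 20% band)
t, t_end = 0.0, 1.0
while t < t_end:
    a0_ub = a_ub.sum()
    t += rng.exponential(1.0 / a0_ub)  # waiting time drawn from bound total
    j = rng.choice(2, p=a_ub / a0_ub)  # candidate reaction from bounds
    if rng.random() < propensity(x)[j] / a_ub[j]:   # rejection test
        x += nu[j]
        if not np.all(propensity(x) <= a_ub):       # bound violated:
            a_ub = propensity(x) * 1.2              # infrequent refresh
print("final state:", x, "at t >", t_end)
```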

  8. Rational Exploitation and Utilizing of Groundwater in Jiangsu Coastal Area

    NASA Astrophysics Data System (ADS)

    Kang, B.; Lin, X.

    2017-12-01

    The Jiangsu coastal area is located on the southeast coast of China and is a new industrial base and an important coastal land-resources development zone of China. In areas with intense human exploitation activity, regional groundwater evolution is strongly affected by human activities. To fundamentally solve the environmental geological problems caused by groundwater exploitation, we must determine the forming conditions of the regional groundwater hydrodynamic field and the impact of human activities on its evolution and on hydrogeochemical evolution. Based on these results, scientific management and reasonable exploitation of the regional groundwater resources can be supported. Taking the coastal area of Jiangsu as the research area, we investigated and analyzed the regional hydrogeological conditions. A numerical simulation model of groundwater flow was established according to hydraulic, chemical and isotopic methods, the conditions of water flow, and the influence of the hydrodynamic field on the hydrochemical field. We predict the evolution of regional groundwater dynamics under the influence of human activities and climate change, and evaluate the influence of hydrodynamic-field evolution on the environmental geological problems caused by groundwater exploitation under various conditions. We reach the following conclusions: three optimal groundwater exploitation schemes were established, with groundwater salinization as the primary control condition. A substitution model, built with a BP (back-propagation) network, was proposed to model groundwater exploitation and water-level changes, and a genetic algorithm was then used to solve the optimization problem. The three schemes were submitted to the local water resource management authority: the first addresses the groundwater salinization problem, the second focuses on dual water supply, and the third concerns emergency water supply. This is the first time an environmental problem has been taken as a water management objective in this coastal area.
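
    A minimal sketch of the substitution-model-plus-GA scheme described above: a cheap learned surrogate (an MLP standing in for the BP network) replaces the flow model inside a genetic algorithm that searches pumping schemes. The response function, drawdown constraint and all constants are invented for illustration.

```python
# Sketch: surrogate model + genetic algorithm for pumping-scheme optimization.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

def groundwater_sim(q):
    """Stand-in for the expensive flow model: head drop vs pumping rates."""
    return 0.8 * q[..., 0] + 0.5 * q[..., 1] + 0.3 * q[..., 2] \
        + 0.02 * (q ** 2).sum(axis=-1)

Q = rng.uniform(0, 10, size=(300, 3))             # sampled pumping schemes
surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                         random_state=0).fit(Q, groundwater_sim(Q))

def fitness(q):
    drawdown = surrogate.predict(q)
    supply = q.sum(axis=1)
    return supply - 100.0 * np.maximum(drawdown - 12.0, 0.0)  # salinization proxy penalty

pop = rng.uniform(0, 10, size=(40, 3))
for _ in range(60):                                # simple GA loop
    f = fitness(pop)
    idx = rng.integers(0, 40, size=(40, 2))        # binary tournaments
    parents = pop[np.where(f[idx[:, 0]] > f[idx[:, 1]], idx[:, 0], idx[:, 1])]
    alpha = rng.random((40, 1))                    # blend crossover
    children = alpha * parents + (1 - alpha) * parents[rng.permutation(40)]
    children += rng.normal(0, 0.3, children.shape) # Gaussian mutation
    pop = np.clip(children, 0, 10)

best = pop[fitness(pop).argmax()]
print("best pumping scheme:", best, "predicted drawdown:", surrogate.predict(best[None]))
```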

  9. The Intersection of Financial Exploitation and Financial Capacity

    PubMed Central

    Lichtenberg, P.A.

    2016-01-01

    Research in the past decade has documented that financial exploitation of older adults has become a major problem, and Psychology is only recently increasing its presence in efforts to reduce exploitation. During the same period, Psychology has been a leader in setting best practices for the assessment of diminished capacity in older adults, culminating in the 2008 ABA/APA joint publication of a handbook for psychologists. Assessment of financial decision-making capacity is often the cornerstone assessment needed in cases of financial exploitation. This paper examines the intersection of financial exploitation and decision-making capacity, and introduces a new conceptual model and new tools for both the investigation and prevention of financial exploitation. PMID:27159438

  10. Sensor modeling and demonstration of a multi-object spectrometer for performance-driven sensing

    NASA Astrophysics Data System (ADS)

    Kerekes, John P.; Presnar, Michael D.; Fourspring, Kenneth D.; Ninkov, Zoran; Pogorzala, David R.; Raisanen, Alan D.; Rice, Andrew C.; Vasquez, Juan R.; Patel, Jeffrey P.; MacIntyre, Robert T.; Brown, Scott D.

    2009-05-01

    A novel multi-object spectrometer (MOS) is being explored for use as an adaptive performance-driven sensor that tracks moving targets. Developed originally for astronomical applications, the instrument utilizes an array of micromirrors to reflect light to a panchromatic imaging array. When an object of interest is detected, the individual micromirrors imaging the object are tilted to reflect the light to a spectrometer to collect a full spectrum. This paper presents example sensor performance from empirical data collected in laboratory experiments, as well as our approach to designing optical and radiometric models of the MOS channels and the micromirror array. Simulation of moving vehicles in a high-fidelity hyperspectral scene is used to generate a dynamic video input for the adaptive sensor. Performance-driven algorithms for feature-aided target tracking and modality selection exploit multiple electromagnetic observables to track moving vehicle targets.

  11. Transnational gestational surrogacy: does it have to be exploitative?

    PubMed

    Kirby, Jeffrey

    2014-01-01

    This article explores the controversial practice of transnational gestational surrogacy and poses a provocative question: Does it have to be exploitative? Various existing models of exploitation are considered, and a novel exploitation-evaluation heuristic is introduced to assist in the analysis of the potentially exploitative dimensions and elements of complex health-related practices. On the basis of applying the heuristic, I conclude that transnational gestational surrogacy, as currently practiced in low-income country settings (such as rural western India), is exploitative of surrogate women. Arising out of consideration of the heuristic's exploitation conditions, a set of reforms to transnational gestational surrogacy practice is proposed, covering public education and enabled choice, enhanced protections, and empowerment, that, if incorporated into a national regulatory framework and actualized within a low-income country, could possibly render such practice nonexploitative.

  12. Clique Relaxations in Biological and Social Network Analysis Foundations and Algorithms

    DTIC Science & Technology

    2015-10-26

    study of clique relaxation models arising in biological and social networks. This project examines the elementary clique-defining properties inherently exploited in the available clique relaxation models and proposes a taxonomic framework that...

  13. The alarming decline of Mediterranean fish stocks.

    PubMed

    Vasilakopoulos, Paraskevas; Maravelias, Christos D; Tserpes, George

    2014-07-21

    In recent years, fisheries management has succeeded in stabilizing and even improving the state of many global fisheries resources [1-5]. This is particularly evident in areas where stocks are exploited in compliance with scientific advice and strong institutional structures are in place [1, 5]. In Europe, the well-managed northeast (NE) Atlantic fish stocks have been recovering in response to decreasing fishing pressure over the past decade [3-6], albeit with a long way to go for a universal stock rebuild [3, 7]. Meanwhile, little is known about the temporal development of the European Mediterranean stocks, whose management relies on input controls that are often poorly enforced. Here, we perform a meta-analysis of 42 European Mediterranean stocks of nine species in 1990-2010, showing that exploitation rate has been steadily increasing, selectivity (proportional exploitation of juveniles) has been deteriorating, and stocks have been shrinking. We implement species-specific simulation models to quantify changes in exploitation rate and selectivity that would maximize long-term yields and halt stock depletion. We show that stocks would be more resilient to fishing and produce higher long-term yields if harvested a few years after maturation because current selectivity is far from optimal, especially for demersal stocks. The European Common Fisheries Policy that has assisted in improving the state of NE Atlantic fish stocks in the past 10 years has failed to deliver similar results for Mediterranean stocks managed under the same policy. Limiting juvenile exploitation, advancing management plans, and strengthening compliance, control, and enforcement could promote fisheries sustainability in the Mediterranean. Copyright © 2014 Elsevier Ltd. All rights reserved.

  14. Ringed Seal Search for Global Optimization via a Sensitive Search Model.

    PubMed

    Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar

    2016-01-01

    The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup, mimicking the pup's movement and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair constructed for this purpose. The seal pup's strategy consists of searching for and selecting the best lair by performing a random walk. Reflecting the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states: a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Levy walk. The switch between these two states is triggered by the random noise emitted by predators, and the algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum, and that it improves the balance between exploration (extensive) and exploitation (intensive) of the search space. The RSS can efficiently mimic seal pup behavior to find the best lair and provides a new algorithm for global optimization problems.
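
    A minimal sketch of the two-state walk at the heart of RSS: Brownian steps in the normal state, heavy-tailed Levy jumps (via Mantegna's algorithm) in the urgent state, with random switching standing in for predator noise. The sphere test function, step scales and switching probability are assumptions, not the paper's settings.

```python
# Sketch: Brownian/Levy two-state random search (RSS-style state switching).
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(3)
f = lambda x: (x ** 2).sum()          # sphere test function (minimize)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Levy-stable-like step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

x = rng.uniform(-5, 5, 2)
best, best_val = x.copy(), f(x)
for _ in range(2000):
    urgent = rng.random() < 0.1       # "predator noise" flips the state
    step = levy_step(2) if urgent else rng.normal(0, 0.1, 2)  # extensive vs intensive
    cand = np.clip(x + step, -5, 5)
    if f(cand) < f(x):
        x = cand
    if f(x) < best_val:
        best, best_val = x.copy(), f(x)
print("best point:", best, "value:", best_val)
```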

  15. Modeling work of the dispatching service of high-rise building as queuing system

    NASA Astrophysics Data System (ADS)

    Dement'eva, Marina; Dement'eva, Anastasiya

    2018-03-01

    The article presents the results of calculating the performance indicators of the dispatcher service of a high-rise building modeled as a queuing system with an unlimited queue. The calculation was carried out for three models: a single control room with a general-service brigade, a single control room with specialized services, and several dispatch centers with specialized services. The aim of the work was to investigate the influence of the structural scheme of the dispatcher service of a high-rise building on operating costs and on the time to process and fulfill applications. The problems of high-rise construction and their impact on the complication of exploitation are analyzed, as is the composition of exploitation activities for high-rise buildings. The relevance of the study is justified by the need to review the role of dispatch services in the structure of building quality management: the dispatching service is evolving from the lower level of management of individual engineering systems into the main link in the centralized automated management of the exploitation of high-rise buildings. With the transition to market relations, profitability becomes one of the main criteria of the effectiveness of the dispatching service. A mathematical model for assessing the efficiency of the dispatching service on a set of quality-of-service indicators is proposed, the structure of operating costs is presented, and a decision-making algorithm is given for choosing the optimal structural scheme of the dispatching service of a high-rise building.
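
    The unlimited-queue models above can be made concrete with standard M/M/c formulas. The sketch below computes the Erlang-C waiting probability and mean queueing delay for three hypothetical dispatcher configurations; the arrival and service rates are assumed values, not the paper's data.

```python
# Sketch: comparing dispatcher configurations as M/M/c queues (Erlang C).
import math

def erlang_c(lam, mu, c):
    """Waiting probability and mean queueing delay for an M/M/c queue."""
    a = lam / mu                      # offered load
    rho = a / c                       # utilization, must be < 1 for stability
    assert rho < 1, "unstable queue"
    s = sum(a ** k / math.factorial(k) for k in range(c))
    tail = a ** c / (math.factorial(c) * (1 - rho))
    p_wait = tail / (s + tail)        # Erlang-C probability of waiting
    w_q = p_wait / (c * mu - lam)     # mean time spent in the queue
    return p_wait, w_q

lam = 12.0                            # applications per hour (assumed)
for c, mu in [(1, 15.0), (2, 8.0), (3, 6.0)]:   # crews x per-crew service rate
    p, w = erlang_c(lam, mu, c)
    print(f"c={c}, mu={mu}: P(wait)={p:.2f}, mean queue delay={60*w:.1f} min")
```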

  16. Modeling Cooperative Threads to Project GPU Performance for Adaptive Parallelism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Jiayuan; Uram, Thomas; Morozov, Vitali A.

    Most accelerators, such as graphics processing units (GPUs) and vector processors, are particularly suitable for accelerating massively parallel workloads. On the other hand, conventional workloads are developed for multi-core parallelism, which often scales to only a few dozen OpenMP threads. When hardware threads significantly outnumber the degree of parallelism in the outer loop, programmers are challenged with efficient hardware utilization. A common solution is to further exploit the parallelism hidden deep in the code structure. Such parallelism is less structured: parallel and sequential loops may be imperfectly nested within each other, and neighboring inner loops may exhibit different concurrency patterns (e.g., Reduction vs. Forall), yet have to be parallelized in the same parallel section. Many input-dependent transformations have to be explored. A programmer often employs a larger group of hardware threads to cooperatively walk through a smaller outer loop partition and adaptively exploit any encountered parallelism. This process is time-consuming and error-prone, yet the risk of gaining little or no performance remains high for such workloads. To reduce risk and guide implementation, we propose a technique to model workloads with limited parallelism that can automatically explore and evaluate transformations involving cooperative threads. Eventually, our framework projects the best achievable performance and the most promising transformations without implementing GPU code or using physical hardware. We envision our technique being integrated into future compilers or optimization frameworks for autotuning.

  17. Improving Search Algorithms by Using Intelligent Coordinates

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

    We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function g_η. Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  18. Improving search algorithms by using intelligent coordinates

    NASA Astrophysics Data System (ADS)

    Wolpert, David; Tumer, Kagan; Bandari, Esfandiar

    2004-01-01

    We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function gη. Three factors govern such a distributed algorithm’s performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm’s exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based “player” engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.

  19. Radar-acoustic interaction for IFF applications

    NASA Astrophysics Data System (ADS)

    Saffold, James A.; Williamson, Frank R.; Ahuja, Krishan; Stein, Lawrence R.; Muller, Marjorie

    1998-08-01

    This paper describes the results of internal development program (IDP) No. 97-1, conducted from August 1 to October 1, 1996, at the Georgia Tech Research Institute. The IDP program was implemented to establish theoretical relationships and verify the interaction between X-band radar waves and ultrasonic acoustics. Low-cost, off-the-shelf components were used for the verification in order to illustrate the cost-savings potential of developing and utilizing these systems. The measured data were used to calibrate the developed models of the phenomenology and to support extrapolation to radar systems that can exploit these interactions. One such exploitation is soldier-identification IFF and radar taggant concepts. The IDP program provided the phenomenological data that are being used to extrapolate concept-system performance based on technological limitations and battlefield conditions for low-cost IFF and taggant configurations.

  20. Waveform design for detection of weapons based on signature exploitation

    NASA Astrophysics Data System (ADS)

    Ahmad, Fauzia; Amin, Moeness G.; Dogaru, Traian

    2010-04-01

    We present waveform design based on signature exploitation techniques for improved detection of weapons in urban sensing applications. A single-antenna monostatic radar system is considered. Under the assumption of exact knowledge of the target orientation and, hence, a known impulse response, a matched illumination approach is used for optimal target detection. For the case of unknown target orientation, we treat the target signatures as random processes and perform signal-to-noise-ratio-based waveform optimization. Numerical electromagnetic modeling is used to provide the impulse responses of an AK-47 assault rifle for various target aspect angles relative to the radar. Simulation results show an improvement in the signal-to-noise ratio at the output of the matched filter receiver for both matched illumination and stochastic waveforms as compared to a chirp waveform of the same duration and energy.
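
    A minimal sketch of matched illumination for a known impulse response: the unit-energy transmit waveform maximizing matched-filter SNR is the dominant eigenvector of H^T H, where H is the convolution matrix of the target response. The impulse response and the chirp reference below are invented stand-ins, not the paper's AK-47 responses.

```python
# Sketch: matched-illumination waveform from the target correlation matrix.
import numpy as np

rng = np.random.default_rng(4)
h = rng.standard_normal(16) * np.exp(-0.2 * np.arange(16))  # assumed target impulse response
N = 64                                                      # transmit waveform length

# Convolution matrix: received signal y = H @ s for transmit waveform s.
H = np.zeros((N + len(h) - 1, N))
for i in range(N):
    H[i:i + len(h), i] = h

# Dominant eigenvector of H^T H is the SNR-optimal unit-energy waveform.
w, V = np.linalg.eigh(H.T @ H)
s_opt = V[:, -1]

chirp = np.cos(np.pi * 0.5 * np.arange(N) ** 2 / N)         # reference chirp
chirp /= np.linalg.norm(chirp)                              # same energy
gain = (np.linalg.norm(H @ s_opt) / np.linalg.norm(H @ chirp)) ** 2
print(f"SNR gain of matched illumination over chirp: {10*np.log10(gain):.1f} dB")
```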

  1. Exploiting salient semantic analysis for information retrieval

    NASA Astrophysics Data System (ADS)

    Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui

    2016-11-01

    Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representations of words or documents. However, its feasibility and effectiveness in information retrieval are mostly unknown. In this paper, we study how to use SSA efficiently to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations of documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations are used in combination to estimate the language models of queries and documents. Experiments on several standard Text REtrieval Conference (TREC) collections show that the proposed models consistently outperform existing Wikipedia-based retrieval methods.
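
    A minimal sketch of the combination step: Jelinek-Mercer-smoothed language-model scores computed in the word space and in a concept space are linearly interpolated. The toy concept mapping and the interpolation weight are assumptions; real SSA concepts come from Wikipedia salience analysis.

```python
# Sketch: interpolating BOW and concept-space language-model scores.
import math
from collections import Counter

def lm_score(query_terms, doc_terms, collection, lam=0.7):
    """Jelinek-Mercer smoothed log P(query | doc) over any term space."""
    d, c = Counter(doc_terms), Counter(collection)
    dn, cn = sum(d.values()), sum(c.values())
    return sum(math.log(lam * d[t] / dn + (1 - lam) * (c[t] + 1) / cn)
               for t in query_terms)

# Toy concept mapping (stand-in for SSA's Wikipedia-derived concepts).
concept = {"car": "vehicle", "automobile": "vehicle", "bank": "finance"}
to_concepts = lambda terms: [concept.get(t, t) for t in terms]

docs = [["car", "engine", "repair"], ["bank", "loan", "rate"]]
collection = [t for d in docs for t in d]
query = ["automobile", "repair"]

for i, d in enumerate(docs):
    score = 0.5 * lm_score(query, d, collection) + \
            0.5 * lm_score(to_concepts(query), to_concepts(d),
                           to_concepts(collection))
    print(f"doc {i}: combined score = {score:.2f}")
```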

  2. Computer simulation of multiple pilots flying a modern high performance helicopter

    NASA Technical Reports Server (NTRS)

    Zipf, Mark E.; Vogt, William G.; Mickle, Marlin H.; Hoelzeman, Ronald G.; Kai, Fei; Mihaloew, James R.

    1988-01-01

    A computer simulation of a human-response pilot mechanism within the flight control loop of a high-performance modern helicopter is presented. A human-response mechanism, implemented by a low-order linear transfer function, is used in a decoupled single-variable configuration that exploits the dominant vehicle characteristics by associating cockpit controls and instrumentation with specific vehicle dynamics. Low-order helicopter models, obtained from evaluations of the time- and frequency-domain responses of a nonlinear simulation model provided by NASA Lewis Research Center, are presented and considered in the discussion of the pilot development. Pilot responses and reactions to test maneuvers are presented and discussed. Higher-level implementations using the pilot mechanisms are discussed and considered for use in a comprehensive control structure.

  3. Modelling and temporal performances evaluation of networked control systems using (max, +) algebra

    NASA Astrophysics Data System (ADS)

    Ammour, R.; Amari, S.

    2015-01-01

    In this paper, we address the problem of temporal performance evaluation of producer/consumer networked control systems. The aim is to develop a formal method for evaluating the response time of this type of control system. Our approach consists of modelling, using Petri net classes, the behaviour of the whole architecture, including the switches that support the multicast communications used by this protocol. The (max, +) algebra formalism is then exploited to obtain analytical formulas for the response time and its maximal and minimal bounds. The main novelty is that our approach takes into account all delays experienced at the different stages of networked automation systems. Finally, we show how to apply the obtained results through an example of a networked control system.
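
    A minimal sketch of (max, +) evaluation: with addition replaced by max and multiplication by +, the recursion x(k+1) = A ⊗ x(k) propagates event dates through a timed system. The two-state matrix and its delays below describe an invented producer/consumer loop, not the paper's Petri-net model.

```python
# Sketch: event-date propagation in the (max, +) algebra.
import numpy as np

NEG_INF = -np.inf

def maxplus_matvec(A, x):
    """(max,+) product: y_i = max_j (A_ij + x_j)."""
    return np.max(A + x[None, :], axis=1)

# Event dates: x[0] = producer ready, x[1] = consumer done (assumed delays).
A = np.array([[2.0, NEG_INF],      # producing takes 2 time units
              [3.0, 1.0]])         # network + consuming takes 3; re-arm takes 1
x = np.zeros(2)
for k in range(5):
    x = maxplus_matvec(A, x)
    print(f"cycle {k+1}: event dates = {x}")
# The per-cycle growth of x approaches the (max,+) eigenvalue of A, i.e.
# the steady-state cycle time that bounds the system's response time.
```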

  4. Groundwater vulnerability indices conditioned by Supervised Intelligence Committee Machine (SICM).

    PubMed

    Nadiri, Ata Allah; Gharekhani, Maryam; Khatibi, Rahman; Sadeghfam, Sina; Moghaddam, Asghar Asghari

    2017-01-01

    This research presents a Supervised Intelligent Committee Machine (SICM) model to assess the groundwater vulnerability indices of an aquifer. SICM uses Artificial Neural Networks (ANN) to overarch three Artificial Intelligence (AI) models: Support Vector Machine (SVM), Neuro-Fuzzy (NF) and Gene Expression Programming (GEP). Each model uses the DRASTIC index, the acronym of seven geological, hydrological and hydrogeological parameters, which collectively represents intrinsic (or natural) vulnerability and gives a sense of how contaminants, such as nitrate-N, penetrate aquifers from the surface. These models are trained to modify or condition their DRASTIC index values against measured nitrate-N concentrations. The three AI techniques often perform similarly but have differences as well, and SICM exploits this situation to improve the modeled values, producing hybrid results by selecting the better-performing of the SVM, NF and GEP components. The models of the study area at the Ardabil aquifer show that the vulnerability indices from the DRASTIC framework produce sharp fronts, whereas the AI models smooth the fronts and correlate better with observed nitrate values; SICM improves on the performance of the three AI models and copes well with heterogeneity and uncertain parameters. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Receding horizon online optimization for torque control of gasoline engines.

    PubMed

    Kang, Mingxin; Shen, Tielong

    2016-11-01

    This paper proposes a model-based nonlinear receding horizon optimal control scheme for the engine torque tracking problem. The controller design directly employs a nonlinear model built on the mean-value modeling principle of engine systems, without any linearizing reformulation, and the online optimization is achieved by applying the Continuation/GMRES (generalized minimum residual) approach. Several receding horizon control schemes are designed to investigate the effects of the integral action and integral gain selection. Simulation analyses and experimental validations demonstrate the real-time optimization performance and control effects of the proposed torque tracking controllers. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  6. A simple mathematical model of society collapse applied to Easter Island

    NASA Astrophysics Data System (ADS)

    Bologna, M.; Flores, J. C.

    2008-02-01

    In this paper we consider a mathematical model for the evolution and collapse of the Easter Island society. Based on historical reports, the available primary resources consisted almost exclusively of trees, so we describe the inhabitants and the resources as an isolated dynamical system. A mathematical and numerical analysis of the Easter Island community collapse is performed. In particular, we analyze the critical values of the fundamental parameters, and a demographic curve is presented. The technological parameter quantifying the exploitation of the resources is calculated and applied to the case of another extinguished civilization (the Copán Maya), confirming the consistency of the adopted model.
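
    A minimal sketch in the spirit of the model above (close to Brander-Taylor resource-population dynamics, not the paper's exact equations): the resource regrows logistically, harvest scales with a technology parameter, and the population rises and then declines as the resource is drawn down. All parameter values are illustrative assumptions.

```python
# Sketch: logistic resource + harvesting population, Euler-integrated.
r, K = 0.04, 12000.0      # resource regrowth rate and carrying capacity (assumed)
alpha = 4e-6              # harvesting "technology" parameter (assumed)
b, d = 1.0, 0.01          # births per unit per-capita harvest; base death rate

R, N, dt = K, 40.0, 1.0   # initial resource stock, initial settlers, 1-year steps
history = []
for year in range(1000):
    harvest = alpha * R * N               # total resource harvested this year
    R += dt * (r * R * (1.0 - R / K) - harvest)
    N += dt * N * (b * alpha * R - d)     # per-capita growth tracks the resource
    history.append((year, R, N))

peak_year, _, peak_N = max(history, key=lambda s: s[2])
print(f"population peaks at {peak_N:.0f} in year {peak_year}; "
      f"final population {history[-1][2]:.0f}, final resource {history[-1][1]:.0f}")
```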

  7. Assessing the performance of a covert automatic target recognition algorithm

    NASA Astrophysics Data System (ADS)

    Ehrman, Lisa M.; Lanterman, Aaron D.

    2005-05-01

    Passive radar systems exploit illuminators of opportunity, such as TV and FM radio, to illuminate potential targets. Doing so allows them to operate covertly and inexpensively. Our research seeks to enhance passive radar systems by adding automatic target recognition (ATR) capabilities. In previous papers we proposed conducting ATR by comparing the radar cross section (RCS) of aircraft detected by a passive radar system to the precomputed RCS of aircraft in the target class. To effectively model the low-frequency setting, the comparison is made via a Rician likelihood model. Monte Carlo simulations indicate that the approach is viable. This paper builds on that work by developing a method for quickly assessing the potential performance of the ATR algorithm without using exhaustive Monte Carlo trials. This method exploits the relation between the probability of error in a binary hypothesis test under the Bayesian framework to the Chernoff information. Since the data are well-modeled as Rician, we begin by deriving a closed-form approximation for the Chernoff information between two Rician densities. This leads to an approximation for the probability of error in the classification algorithm that is a function of the number of available measurements. We conclude with an application that would be particularly cumbersome to accomplish via Monte Carlo trials, but that can be quickly addressed using the Chernoff information approach. This application evaluates the length of time that an aircraft must be tracked before the probability of error in the ATR algorithm drops below a desired threshold.
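
    A minimal sketch of the Chernoff-information shortcut described above: for two hypothesis densities p and q, C = -min_s log ∫ p^s q^(1-s) dx, and the Bayes error after n independent measurements decays roughly like exp(-nC). The Rician parameters below are invented for illustration, not aircraft RCS statistics.

```python
# Sketch: numerical Chernoff information between two Rician densities.
import numpy as np
from scipy.stats import rice
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar

x = np.linspace(1e-6, 20, 4000)
p = rice.pdf(x, b=2.0)          # hypothesis 1: Rician shape b = 2.0 (assumed)
q = rice.pdf(x, b=3.5)          # hypothesis 2: Rician shape b = 3.5 (assumed)

def neg_log_chernoff(s):
    return np.log(trapezoid(p ** s * q ** (1 - s), x))

res = minimize_scalar(neg_log_chernoff, bounds=(1e-3, 1 - 1e-3), method="bounded")
C = -res.fun                    # Chernoff information

for n in [1, 5, 10, 20]:        # measurements accumulated while tracking
    print(f"n={n:2d}: error bound ~ exp(-nC) = {np.exp(-n * C):.3e}")
```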

  8. Fluency Heuristic: A Model of How the Mind Exploits a By-Product of Information Retrieval

    ERIC Educational Resources Information Center

    Hertwig, Ralph; Herzog, Stefan M.; Schooler, Lael J.; Reimer, Torsten

    2008-01-01

    Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the…

  9. Generating Billion-Edge Scale-Free Networks in Seconds: Performance Study of a Novel GPU-based Preferential Attachment Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perumalla, Kalyan S.; Alam, Maksudul

    A novel parallel algorithm is presented for generating random scale-free networks using the preferential-attachment model. The algorithm, named cuPPA, is custom-designed for the single instruction multiple data (SIMD) style of parallel processing supported by modern processors such as graphical processing units (GPUs). To the best of our knowledge, our algorithm is the first to exploit GPUs, and is also the fastest implementation available today, for generating scale-free networks using the preferential attachment model. A detailed performance study is presented to understand the scalability and runtime characteristics of the cuPPA algorithm. In one of the best cases, when executed on an NVidia GeForce 1080 GPU, cuPPA generates a scale-free network of a billion edges in less than 2 seconds.
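
    For reference, a minimal sequential sketch of the preferential-attachment model that cuPPA parallelizes: each new vertex picks m distinct neighbors with probability proportional to degree, implemented with the usual repeated-endpoint pool. The sizes are toy values, and the GPU algorithm reorganizes these draws into SIMD-friendly independent work.

```python
# Sketch: sequential Barabasi-Albert preferential attachment.
import numpy as np

rng = np.random.default_rng(5)
n, m = 10000, 2                    # vertices, edges added per new vertex (toy)

edges = [(0, 1)]
pool = [0, 1]                      # each occurrence = one unit of degree
for v in range(2, n):
    nbrs = set()
    while len(nbrs) < m:           # degree-proportional sampling, no repeats
        nbrs.add(pool[rng.integers(len(pool))])
    for u in nbrs:
        edges.append((v, u))
        pool += [v, u]             # both endpoints gain a degree unit

degrees = np.bincount(np.array(edges).ravel())
print("max degree:", degrees.max(), "edges:", len(edges))
```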

  10. Predicting chroma from luma with frequency domain intra prediction

    NASA Astrophysics Data System (ADS)

    Egge, Nathan E.; Valin, Jean-Marc

    2015-03-01

    This paper describes a technique for performing intra prediction of the chroma planes based on the reconstructed luma plane in the frequency domain. This prediction exploits the fact that, while RGB to YUV color conversion decorrelates the color planes globally across an image, there is still some correlation locally at the block level [1]. Previous proposals compute a linear model of the spatial relationship between the luma plane (Y) and the two chroma planes (U and V) [2]. In codecs that use lapped transforms this is not possible, since transform support extends across the block boundaries [3] and thus neighboring blocks are unavailable during intra prediction. We design a frequency-domain intra predictor for chroma that exploits the same local correlation with lower complexity than the spatial predictor and that works with lapped transforms. We then describe a low-complexity algorithm that directly uses luma coefficients as a chroma predictor based on gain-shape quantization and band partitioning. An experiment comparing these two techniques inside the experimental Daala video codec shows the lower-complexity algorithm to be the better chroma predictor.
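
    A minimal sketch of frequency-domain chroma-from-luma on one transform block: chroma AC coefficients are modeled as alpha times the luma AC coefficients, with alpha fit by least squares. The synthetic block is an assumption, and Daala's actual scheme codes the relationship with gain-shape quantization and band partitioning rather than an explicit alpha.

```python
# Sketch: frequency-domain chroma-from-luma prediction on an 8x8 block.
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(6)
patch = rng.standard_normal((8, 8)).cumsum(0).cumsum(1)  # smooth synthetic patch
luma = dctn(patch, norm="ortho")
chroma = dctn(0.6 * patch + 0.05 * rng.standard_normal((8, 8)), norm="ortho")

l_ac = luma.ravel()[1:]                      # drop DC: predicted separately
c_ac = chroma.ravel()[1:]
alpha = (l_ac @ c_ac) / (l_ac @ l_ac)        # least-squares scale factor
residual = c_ac - alpha * l_ac               # what the codec would transmit
print(f"alpha = {alpha:.3f}, residual energy ratio = "
      f"{(residual @ residual) / (c_ac @ c_ac):.3f}")
```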

  11. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing "original pixel intensity"-based coding approaches using traditional image coders (e.g., JPEG2000) to "residual"-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in their spectral characteristics and in the shape domain of the panchromatic imagery. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of the pixel-vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply HEVC. Every spectral band of an HS image is treated like an individual frame of a video. We compare the proposed method with mainstream encoders on three types of HS dataset with different wavelength ranges; the proposed method outperforms the existing mainstream HS encoders in terms of the rate-distortion performance of HS image compression. PMID:27695102

  12. Exploitation and Optimization of Reservoir Performance in Hunton Formation, Oklahoma, Budget Period I, Class Revisit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kelkar, Mohan

    2002-04-02

    This report explains the unusual characteristics of West Carney Field based on detailed geological and engineering analyses. A geological history that explains the presence of mobile water and oil in the reservoir is proposed. The combination of matrix and fractures in the reservoir explains the reservoir's flow behavior. We confirm our hypothesis by matching observed performance with a simulated model and develop procedures for correlating core data to log data so that the analysis can be extended to other, similar fields where core coverage may be limited.

  13. Modeling summer month hydrological drought probabilities in the United States using antecedent flow conditions

    USGS Publications Warehouse

    Austin, Samuel H.; Nelms, David L.

    2017-01-01

    Climate change raises concern that risks of hydrological drought may be increasing. We estimate hydrological drought probabilities for rivers and streams in the United States (U.S.) using maximum likelihood logistic regression (MLLR). Streamflow data from winter months are used to estimate the chance of hydrological drought during summer months. Daily streamflow data collected from 9,144 stream gages from January 1, 1884 through January 9, 2014 provide hydrological drought streamflow probabilities for July, August, and September as functions of streamflows during October, November, December, January, and February, estimating outcomes 5-11 months ahead of their occurrence. Few drought prediction methods exploit temporal links among streamflows. We find MLLR modeling of drought streamflow probabilities exploits the explanatory power of temporally linked water flows. MLLR models with strong correct classification rates were produced for streams throughout the U.S. One ad hoc test of correct prediction rates of September 2013 hydrological droughts exceeded 90% correct classification. Some of the best-performing models coincide with areas of high concern including the West, the Midwest, Texas, the Southeast, and the Mid-Atlantic. Using hydrological drought MLLR probability estimates in a water management context can inform understanding of drought streamflow conditions, provide warning of future drought conditions, and aid water management decision making.
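
    A minimal sketch of the MLLR idea: a logistic regression fit by maximum likelihood maps winter streamflow statistics to the probability of a low-flow summer. The synthetic gage record and the drought definition are assumptions standing in for the multi-gage USGS dataset.

```python
# Sketch: winter flows -> probability of summer hydrological drought.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_years = 200
winter = rng.lognormal(mean=3.0, sigma=0.5, size=(n_years, 2))  # Oct-Dec, Jan-Feb means (assumed)

# Synthetic truth: low winter flows tend to precede low summer flows.
latent = 0.8 * np.log(winter[:, 0]) + 1.2 * np.log(winter[:, 1])
summer_drought = (latent + 0.3 * rng.standard_normal(n_years)) < np.quantile(latent, 0.2)

model = LogisticRegression().fit(np.log(winter), summer_drought)

new_winter = np.array([[15.0, 12.0], [30.0, 28.0]])   # a dry and a wet winter
probs = model.predict_proba(np.log(new_winter))[:, 1]
for w, p in zip(new_winter, probs):
    print(f"winter flows {w}: P(summer drought) = {p:.2f}")
```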

  14. Rational selective exploitation and distress: employee reactions to performance-based and mobility-based reward allocations.

    PubMed

    Rusbult, C E; Campbell, M A; Price, M E

    1990-09-01

    Prior research has demonstrated that allocators frequently distribute greater rewards to persons with high professional and geographic mobility than to persons with constrained mobility, especially among the very competent. This phenomenon has been termed rational selective exploitation. Do the recipients of such allocations actually experience this distribution rule as unjust and distressing, or is it a misnomer to refer to this phenomenon as exploitation? Two studies were conducted to explore this question. Study 1 was a laboratory experiment in which we manipulated relative performance level, relative mobility level, and allocation standard: performance based versus mobility based. Study 2 was a cross-sectional survey of actual employees in which subjects reported the degree to which performance and mobility were the basis for pay decisions at their places of employment, as well as the degree to which they perceived each standard to be fair. Both studies demonstrated that people regard mobility-based allocations as less fair and more distressing than performance-based allocations. Furthermore, the degree of distress resulting from mobility-based allocations is greater among persons who are disadvantaged by that standard: among people with constrained mobility, especially those who perform at high levels. These findings provide good support for the assertion that so-called rational selective exploitation is indeed distressing to employees. Reactions to this form of distress are also explored, and the implications of these findings for the allocation process are discussed.

  15. Pioneering topological methods for network-based drug-target prediction by exploiting a brain-network self-organization theory.

    PubMed

    Durán, Claudio; Daminelli, Simone; Thomas, Josephine M; Haupt, V Joachim; Schroeder, Michael; Cannistraci, Carlo Vittorio

    2017-04-26

    The bipartite network representation of the drug-target interactions (DTIs) in a biosystem enhances understanding of the drugs' multifaceted action modes, suggests therapeutic switching for approved drugs and unveils possible side effects. As experimental testing of DTIs is costly and time-consuming, computational predictors are of great aid. Here, for the first time, state-of-the-art DTI supervised predictors custom-made in network biology were compared, using standard and innovative validation frameworks, with unsupervised purely topological models designed for general-purpose link prediction in bipartite networks. Surprisingly, our results show that the bipartite topology alone, if adequately exploited by means of the recently proposed local-community-paradigm (LCP) theory (initially detected in brain-network topological self-organization and afterwards generalized to any complex network), is able to suggest highly reliable predictions, with performance comparable to the state-of-the-art supervised methods that exploit additional (non-topological, for instance biochemical) DTI knowledge. Furthermore, a detailed analysis of the novel predictions revealed that each class of methods prioritizes distinct true interactions; hence, combining methodologies based on diverse principles represents a promising strategy to improve drug-target discovery. To conclude, this study promotes the power of bio-inspired computing, demonstrating that simple unsupervised rules inspired by principles of topological self-organization and adaptiveness arising during learning in living intelligent systems (like the brain) can equal the performance of complicated algorithms based on advanced, supervised and knowledge-based engineering. © The Author 2017. Published by Oxford University Press.
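
    A minimal sketch of purely topological prediction in a bipartite DTI network, scoring candidate pairs by length-3 path counts (the bipartite analogue of common neighbors) via A A^T A. This simple stand-in is not the authors' LCP-based method, and the toy interactions are invented.

```python
# Sketch: length-3 path counts as a bipartite link-prediction score.
import numpy as np

drugs = ["d1", "d2", "d3"]
targets = ["t1", "t2", "t3"]
known = {("d1", "t1"), ("d1", "t2"), ("d2", "t1"), ("d3", "t3"), ("d2", "t3")}

A = np.zeros((len(drugs), len(targets)))
for i, d in enumerate(drugs):
    for j, t in enumerate(targets):
        A[i, j] = (d, t) in known

# (A A^T A)_ij counts paths drug_i -> target -> drug -> target_j of length 3.
score = A @ A.T @ A
for i, d in enumerate(drugs):
    for j, t in enumerate(targets):
        if (d, t) not in known:
            print(f"{d}-{t}: path-count score = {score[i, j]:.0f}")
```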

  16. CamOptimus: a tool for exploiting complex adaptive evolution to optimize experiments and processes in biotechnology

    PubMed Central

    Cankorur-Cetinkaya, Ayca; Dias, Joao M. L.; Kludas, Jana; Slater, Nigel K. H.; Rousu, Juho; Dikicioglu, Duygu

    2017-01-01

    Multiple interacting factors affect the performance of engineered biological systems in synthetic biology projects. The complexity of these biological systems means that experimental design should often be treated as a multiparametric optimization problem. However, the available methodologies are either impractical, due to a combinatorial explosion in the number of experiments to be performed, or are inaccessible to most experimentalists due to the lack of publicly available, user-friendly software. Although evolutionary algorithms may be employed as alternative approaches to optimize experimental design, the lack of simple-to-use software again restricts their use to specialist practitioners. In addition, the lack of subsidiary approaches to further investigate critical factors and their interactions prevents the full analysis and exploitation of the biotechnological system. We have addressed these problems and, here, provide a simple-to-use and freely available graphical user interface to empower a broad range of experimental biologists to employ complex evolutionary algorithms to optimize their experimental designs. Our approach exploits a Genetic Algorithm to discover the subspace containing the optimal combination of parameters, and Symbolic Regression to construct a model to evaluate the sensitivity of the experiment to each parameter under investigation. We demonstrate the utility of this method using an example in which the culture conditions for the microbial production of a bioactive human protein are optimized. CamOptimus is available through: (https://doi.org/10.17863/CAM.10257). PMID:28635591

  17. CryoSat Plus For Oceans: an ESA Project for CryoSat-2 Data Exploitation Over Ocean

    NASA Astrophysics Data System (ADS)

    Benveniste, J.; Cotton, D.; Clarizia, M.; Roca, M.; Gommenginger, C. P.; Naeije, M. C.; Labroue, S.; Picot, N.; Fernandes, J.; Andersen, O. B.; Cancet, M.; Dinardo, S.; Lucas, B. M.

    2012-12-01

    The ESA CryoSat-2 mission is the first space mission to carry a space-borne radar altimeter able to operate both in the conventional pulse-width-limited (LRM) mode and in the novel Synthetic Aperture Radar (SAR) mode. Although the prime objective of the CryoSat-2 mission is monitoring land and marine ice, the SAR mode capability of the CryoSat-2 SIRAL altimeter also presents the possibility of demonstrating significant potential benefits of SAR altimetry for ocean applications, based on expected performance enhancements that include improved range precision and finer along-track spatial resolution. With this scope in mind, the "CryoSat Plus for Oceans" (CP4O) project, dedicated to the exploitation of CryoSat-2 data over the ocean and supported by the ESA STSE (Support To Science Element) programme, brings together an expert European consortium comprising DTU Space, isardSAT, the National Oceanography Centre, Noveltis, SatOC, Starlab, TU Delft, the University of Porto and CLS (supported by CNES). The objectives of CP4O are: to build a sound scientific basis for new scientific and operational applications of CryoSat-2 data over the open ocean, polar ocean and coastal seas and for sea-floor mapping; to generate and evaluate new methods and products that will enable the full exploitation of the capabilities of the CryoSat-2 SIRAL altimeter, extending their application beyond the initial mission objectives; and to ensure that the scientific return of the CryoSat-2 mission is maximised. In particular, four themes will be addressed:
    - Open Ocean Altimetry: combining the GOCE geoid model with CryoSat oceanographic LRM products to retrieve a CryoSat MSS/MDT model over open ocean surfaces and to analyse mesoscale and large-scale open ocean features. Under this priority the project will also foster the exploitation of the finer resolution and higher SNR of the novel CryoSat SAR data to detect short-spatial-scale open ocean features.
    - High Resolution Polar Ocean Altimetry: combining the GOCE geoid model with CryoSat oceanographic SAR products over polar oceans to retrieve the CryoSat MSS/MDT and current circulation systems, improving polar tide models and studying the coupling between wind and current patterns.
    - High Resolution Coastal Zone Altimetry: exploiting the finer resolution and higher SNR of the novel CryoSat SAR data to bring radar altimetry closer to the shore, using the SARIn mode to discriminate off-nadir land targets (e.g., steep cliffs) in the radar footprint from the nadir sea return.
    - High Resolution Sea-Floor Altimetry: exploiting the finer resolution and higher SNR of the novel CryoSat SAR data to resolve the weak short-wavelength sea surface signals caused by sea-floor topography and to map uncharted sea-mounts and trenches.
    One of the first project activities is the consolidation of preliminary scientific requirements for the four themes under investigation. This paper presents the CP4O project content and objectives and addresses the first results from the ongoing work to define the scientific requirements.

  18. Compression of contour data through exploiting curve-to-curve dependence

    NASA Technical Reports Server (NTRS)

    Yalabik, N.; Cooper, D. B.

    1975-01-01

    An approach to exploiting curve-to-curve dependencies in order to achieve high data compression is presented. An existing approach to along-curve compression, based on cubic spline approximation, is taken and extended by investigating the additional compressibility achievable through exploiting curve-to-curve structure. One of the models under investigation is reported on.

  19. On Taylor-Series Approximations of Residual Stress

    NASA Technical Reports Server (NTRS)

    Pruett, C. David

    1999-01-01

    Although subgrid-scale models of similarity type are insufficiently dissipative for practical applications to large-eddy simulation, in recently published a priori analyses, they perform remarkably well in the sense of correlating highly against exact residual stresses. Here, Taylor-series expansions of residual stress are exploited to explain the observed behavior and "success" of similarity models. Until very recently, little attention has been given to issues related to the convergence of such expansions. Here, we re-express the convergence criterion of Vasilyev [J. Comput. Phys., 146 (1998)] in terms of the transfer function and the wavenumber cutoff of the grid filter.

  20. Efficient rejection-based simulation of biochemical reactions with stochastic noise and delays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanh, Vo Hong, E-mail: vo@cosbi.eu; Priami, Corrado, E-mail: priami@cosbi.eu; Department of Mathematics, University of Trento

    2014-10-07

    We propose a new exact stochastic rejection-based simulation algorithm for biochemical reactions and extend it to systems with delays. Our algorithm accelerates the simulation by pre-computing reaction propensity bounds to select the next reaction to perform. Exploiting such bounds, we are able to avoid recomputing propensities every time a (delayed) reaction is initiated or finished, as is typically necessary in standard approaches. Propensity updates in our approach are still performed, but only infrequently and limited to a small number of reactions, saving computation time without sacrificing exactness. We evaluate the performance improvement of our algorithm by experimenting with concrete biological models.

  1. ICCD: interactive continuous collision detection between deformable models using connectivity-based culling.

    PubMed

    Tang, Min; Curtis, Sean; Yoon, Sung-Eui; Manocha, Dinesh

    2009-01-01

    We present an interactive algorithm for continuous collision detection between deformable models. We introduce multiple techniques to improve the culling efficiency and the overall performance of continuous collision detection. First, we present a novel formulation for continuous normal cones and use these normal cones to efficiently cull large regions of the mesh as part of self-collision tests. Second, we introduce the concept of "procedural representative triangles" to remove all redundant elementary tests between nonadjacent triangles. Finally, we exploit the mesh connectivity and introduce the concept of "orphan sets" to eliminate redundant elementary tests between adjacent triangle primitives. In practice, we can reduce the number of elementary tests by two orders of magnitude. These culling techniques have been combined with bounding volume hierarchies and can result in one order of magnitude performance improvement as compared to prior collision detection algorithms for deformable models. We highlight the performance of our algorithm on several benchmarks, including cloth simulations, N-body simulations, and breaking objects.

  2. Brain MRI analysis for Alzheimer's disease diagnosis using an ensemble system of deep convolutional neural networks.

    PubMed

    Islam, Jyoti; Zhang, Yanqing

    2018-05-31

    Alzheimer's disease is an incurable, progressive neurological brain disorder. Earlier detection of Alzheimer's disease can help with proper treatment and prevent brain tissue damage. Several statistical and machine learning models have been exploited by researchers for Alzheimer's disease diagnosis. Analyzing magnetic resonance imaging (MRI) is a common practice for Alzheimer's disease diagnosis in clinical research. Detection of Alzheimer's disease is challenging due to the similarity between Alzheimer's disease MRI data and the MRI data of healthy older people. Recently, advanced deep learning techniques have successfully demonstrated human-level performance in numerous fields including medical image analysis. We propose a deep convolutional neural network for Alzheimer's disease diagnosis using brain MRI data analysis. While most of the existing approaches perform binary classification, our model can identify different stages of Alzheimer's disease and obtains superior performance for early-stage diagnosis. We conducted extensive experiments to demonstrate that our proposed model outperformed comparative baselines on the Open Access Series of Imaging Studies dataset.

  3. Support for ICES International Symposium: Recruitment Dynamics of Exploited Marine Populations: Physical-biological Interactions

    DTIC Science & Technology

    1997-09-30

    Recruitment Dynamics of Exploited Marine Populations: Physical-Biological Interactions. Michael J. Fogarty, University of Maryland Center for Environmental Science, Chesapeake Biological Laboratory, PO Box 38, Solomons, MD 20688.

  4. Life-history plasticity and sustainable exploitation: a theory of growth compensation applied to walleye management.

    PubMed

    Lester, Nigel P; Shuter, Brian J; Venturelli, Paul; Nadeau, Daniel

    2014-01-01

    A simple population model was developed to evaluate the role of plastic and evolutionary life-history changes on sustainable exploitation rates. Plastic changes are embodied in density-dependent compensatory adjustments to somatic growth rate and larval/juvenile survival, which can compensate for the reductions in reproductive lifetime and mean population fecundity that accompany the higher adult mortality imposed by exploitation. Evolutionary changes are embodied in the selective pressures that higher adult mortality imposes on age at maturity, length at maturity, and reproductive investment. Analytical development, based on a biphasic growth model, led to simple equations that show explicitly how sustainable exploitation rates are bounded by each of these effects. We show that density-dependent growth combined with a fixed length at maturity and fixed reproductive investment can support exploitation-driven mortality that is 80% of the level supported by evolutionary changes in maturation and reproductive investment. Sustainable fishing mortality is proportional to natural mortality (M) times the degree of density-dependent growth, as modified by both the degree of density-dependent early survival and the minimum harvestable length. We applied this model to estimate sustainable exploitation rates for North American walleye populations (Sander vitreus). Our analysis of demographic data from walleye populations spread across a broad latitudinal range indicates that density-dependent variation in growth rate can vary by a factor of 2. Implications of this growth response are generally consistent with empirical studies suggesting that optimal fishing mortality is approximately 0.75M for teleosts. This approach can be adapted to the management of other species, particularly when significant exploitation is imposed on many, widely distributed, but geographically isolated populations.

  5. Fixed-topology Lorentzian triangulations: Quantum Regge Calculus in the Lorentzian domain

    NASA Astrophysics Data System (ADS)

    Tate, Kyle; Visser, Matt

    2011-11-01

    A key insight used in developing the theory of Causal Dynamical Triangulations (CDTs) is to use the causal (or light-cone) structure of Lorentzian manifolds to restrict the class of geometries appearing in the Quantum Gravity (QG) path integral. By exploiting this structure the models developed in CDTs differ from the analogous models developed in the Euclidean domain, models of (Euclidean) Dynamical Triangulations (DT), and the corresponding Lorentzian results are in many ways more "physical". In this paper we use this insight to formulate a Lorentzian signature model that is analogous to the Quantum Regge Calculus (QRC) approach to Euclidean Quantum Gravity. We exploit another crucial fact about the structure of Lorentzian manifolds, namely that certain simplices are not constrained by the triangle inequalities present in Euclidean signature. We show that this model is not related to QRC by a naive Wick rotation; this serves as another demonstration that the sum over Lorentzian geometries is not simply related to the sum over Euclidean geometries. By removing the triangle inequality constraints, there is more freedom to perform analytical calculations, and in addition numerical simulations are more computationally efficient. We first formulate the model in 1 + 1 dimensions, and derive scaling relations for the pure gravity path integral on the torus using two different measures. It appears relatively easy to generate "large" universes, both in spatial and temporal extent. In addition, loop-to-loop amplitudes are discussed, and a transfer matrix is derived. We then also discuss the model in higher dimensions.

  6. An Analysis of CNO Availability Performance Metrics and Their Relation to Availability Performance

    DTIC Science & Technology

    2013-06-01

    2. Decide how to exploit the constraint. 3. Subordinate everything else to the above decision. 4. Restructure the system to exploit the system's constraint. ... 4. Significance of WIP Management: Along with the production bow wave, WIP is actively managed but not directly controlled.

  7. New Enhanced Artificial Bee Colony (JA-ABC5) Algorithm with Application for Reactive Power Optimization

    PubMed Central

    2015-01-01

    The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm named JA-ABC5 to enhance convergence speed and improve the ability to reach the global optimum by balancing exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase the exploitation process. Besides that, modified mutation equations have also been introduced in the employed and onlooker-bees phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested to optimize the reactive power optimization problem. The performance results have clearly shown that the newly proposed algorithm has outperformed other compared algorithms in terms of convergence speed and global optimum achievement. PMID:25879054

  8. New enhanced artificial bee colony (JA-ABC5) algorithm with application for reactive power optimization.

    PubMed

    Sulaiman, Noorazliza; Mohamad-Saleh, Junita; Abro, Abdul Ghani

    2015-01-01

    The standard artificial bee colony (ABC) algorithm involves exploration and exploitation processes which need to be balanced for enhanced performance. This paper proposes a new modified ABC algorithm named JA-ABC5 to enhance convergence speed and improve the ability to reach the global optimum by balancing exploration and exploitation processes. New stages have been proposed at the earlier stages of the algorithm to increase the exploitation process. Besides that, modified mutation equations have also been introduced in the employed and onlooker-bees phases to balance the two processes. The performance of JA-ABC5 has been analyzed on 27 commonly used benchmark functions and tested to optimize the reactive power optimization problem. The performance results have clearly shown that the newly proposed algorithm has outperformed other compared algorithms in terms of convergence speed and global optimum achievement.
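
    For reference, the standard ABC position update that the modified mutation equations replace can be sketched as below. This is a generic sketch with hypothetical names (`objective` is the function being minimized), not the exact JA-ABC5 operators:

    ```python
    import numpy as np

    def employed_bee_phase(pop, fitness, objective, rng):
        """Standard ABC update: each food source x_i is perturbed along one
        random dimension toward or away from a random neighbor x_k, with a
        greedy replacement. JA-ABC5 modifies this mutation equation to
        rebalance exploration and exploitation."""
        n, d = pop.shape
        for i in range(n):
            k = rng.choice([j for j in range(n) if j != i])
            j = rng.integers(d)
            phi = rng.uniform(-1.0, 1.0)
            cand = pop[i].copy()
            cand[j] += phi * (pop[i, j] - pop[k, j])
            f = objective(cand)
            if f < fitness[i]:  # greedy selection keeps the better source
                pop[i], fitness[i] = cand, f
        return pop, fitness

    # Usage: rng = np.random.default_rng(0), pop an (n, d) array of sources.
    ```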

  9. Exploiting Multiple Levels of Parallelism in Sparse Matrix-Matrix Multiplication

    DOE PAGES

    Azad, Ariful; Ballard, Grey; Buluc, Aydin; ...

    2016-11-08

    Sparse matrix-matrix multiplication (or SpGEMM) is a key primitive for many high-performance graph algorithms as well as for some linear solvers, such as algebraic multigrid. The scaling of existing parallel implementations of SpGEMM is heavily bound by communication. Even though 3D (or 2.5D) algorithms have been proposed and theoretically analyzed in the flat MPI model on Erdős-Rényi matrices, those algorithms had not been implemented in practice and their complexities had not been analyzed for the general case. In this work, we present the first implementation of the 3D SpGEMM formulation that exploits multiple (intranode and internode) levels of parallelism, achieving significant speedups over the state-of-the-art publicly available codes at all levels of concurrency. We extensively evaluate our implementation and identify bottlenecks that should be subject to further research.
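
    To make the primitive concrete, the local (single-node) SpGEMM computation can be illustrated with SciPy. The paper's contribution concerns how this product is decomposed across a 3D process grid with MPI and threads, which this sketch does not attempt:

    ```python
    import numpy as np
    from scipy import sparse

    # The SpGEMM primitive: multiply two random sparse matrices.
    A = sparse.random(10_000, 10_000, density=1e-3, format="csr", random_state=0)
    B = sparse.random(10_000, 10_000, density=1e-3, format="csr", random_state=1)
    C = A @ B  # sparse-sparse product; output sparsity is data-dependent
    print(C.nnz, "nonzeros in the product")
    ```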

  10. Region-Based Prediction for Image Compression in the Cloud.

    PubMed

    Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine

    2018-04-01

    Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-images correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with the current image inter-coding solutions such as high efficiency video coding.
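
    A single-region simplification of the geometric compensation step can be written with OpenCV feature matching and one global homography. The paper estimates per-region models from super-pixels and matched local features; the names and parameters below are illustrative and assume the images contain enough matchable features:

    ```python
    import cv2
    import numpy as np

    def compensate_reference(reference, current):
        """Warp the reference image toward the current image using matched
        local features and a RANSAC-estimated homography (single-region
        sketch of the per-region compensation described above)."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(reference, None)
        k2, d2 = orb.detectAndCompute(current, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        h, w = current.shape[:2]
        return cv2.warpPerspective(reference, H, (w, h))
    ```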

  11. Action detection by double hierarchical multi-structure space-time statistical matching model

    NASA Astrophysics Data System (ADS)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-03-01

    To address the complex information in videos and low detection efficiency, an action detection model based on neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) in temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to achieve two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model via both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, the multi-scale composite template extends the model application to multi-view settings. Experimental results of DMSM on the complex visual tracker benchmark data sets and THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  12. Action detection by double hierarchical multi-structure space–time statistical matching model

    NASA Astrophysics Data System (ADS)

    Han, Jing; Zhu, Junwei; Cui, Yiyin; Bai, Lianfa; Yue, Jiang

    2018-06-01

    To address the complex information in videos and low detection efficiency, an action detection model based on neighboring Gaussian structure and 3D LARK features is put forward. We exploit a double hierarchical multi-structure space-time statistical matching model (DMSM) in temporal action localization. First, a neighboring Gaussian structure is presented to describe the multi-scale structural relationship. Then, a space-time statistical matching method is proposed to achieve two similarity matrices on both large and small scales, which combines double hierarchical structural constraints in the model via both the neighboring Gaussian structure and the 3D LARK local structure. Finally, the double hierarchical similarity is fused and analyzed to detect actions. Besides, the multi-scale composite template extends the model application to multi-view settings. Experimental results of DMSM on the complex visual tracker benchmark data sets and THUMOS 2014 data sets show promising performance. Compared with other state-of-the-art algorithms, DMSM achieves superior performance.

  13. SemiBoost: boosting for semi-supervised learning.

    PubMed

    Mallapragada, Pavan Kumar; Jin, Rong; Jain, Anil K; Liu, Yi

    2009-11-01

    Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit the unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from the existing approaches. We design a meta semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm with a multitude of unlabeled data, 2) efficient computation by the iterative boosting algorithm, and 3) exploiting both manifold and cluster assumptions in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to the state-of-the-art semi-supervised learning algorithms.
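
    The wrapper idea can be sketched as confidence-based pseudo-labeling around an arbitrary base learner. Note this is a generic self-training sketch under simplifying assumptions, not the exact SemiBoost update, which weights pseudo-labels with a pairwise similarity matrix and boosting coefficients:

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def semiboost_like(X_l, y_l, X_u, rounds=10):
        """At each round, pseudo-label the most confidently predicted
        unlabeled points and retrain the base learner on the grown set."""
        X, y = X_l.copy(), y_l.copy()
        pool = X_u.copy()
        clf = DecisionTreeClassifier(max_depth=3)
        for _ in range(rounds):
            clf.fit(X, y)
            if len(pool) == 0:
                break
            proba = clf.predict_proba(pool)
            conf = proba.max(axis=1)
            take = conf.argsort()[-max(1, len(pool) // 10):]  # top ~10% confident
            X = np.vstack([X, pool[take]])
            y = np.concatenate([y, proba[take].argmax(axis=1)])
            pool = np.delete(pool, take, axis=0)
        return clf
    ```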

  14. The COPERNIC3 project: how AREVA is successfully developing an advanced global fuel rod performance code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garnier, Ch.; Mailhe, P.; Sontheimer, F.

    2007-07-01

    Fuel performance is a key factor for minimizing operating costs in nuclear plants. One of the important aspects of fuel performance is fuel rod design, based upon reliable tools able to verify the safety of current fuel solutions, prevent potential issues in new core managements and guide the invention of tomorrow's fuels. AREVA is developing its future global fuel rod code COPERNIC3, which is able to calculate the thermal-mechanical behavior of advanced fuel rods in nuclear plants. Some of the best practices to achieve this goal are described, by reviewing the three pillars of a fuel rod code: the database, the modelling, and the computer and numerical aspects. First, the COPERNIC3 database content is described, accompanied by the tools developed to effectively exploit the data. Then an overview of the main modelling aspects is given, emphasizing the thermal, fission gas release and mechanical sub-models. In the last part, numerical solutions are detailed in order to increase the computational performance of the code, with a presentation of software configuration management solutions. (authors)

  15. Microseismic event location by master-event waveform stacking

    NASA Astrophysics Data System (ADS)

    Grigoli, F.; Cesca, S.; Dahm, T.

    2016-12-01

    Waveform stacking location methods are nowadays extensively used to monitor induced seismicity associated with several underground industrial activities, such as mining, oil and gas production, and geothermal energy exploitation. In the last decade a significant effort has been spent to develop or improve methodologies able to perform automated seismological analysis for weak events at a local scale. This effort was accompanied by the improvement of monitoring systems, resulting in an increasing number of large microseismicity catalogs. The analysis of microseismicity is challenging because of the large number of recorded events, often characterized by a low signal-to-noise ratio. A significant limitation of the traditional location approaches is that automated picking is often done on each seismogram individually, making little or no use of the coherency information between stations. In order to improve on the performance of the traditional location methods, alternative approaches have been proposed in recent years. These methods exploit the coherence of the waveforms recorded at different stations and do not require any automated picking procedure. Their main advantage is their robustness even when the recorded waveforms are very noisy. On the other hand, like any other location method, their performance strongly depends on the accuracy of the available velocity model: with inaccurate velocity models, location results can be affected by large errors. Here we introduce a new automated waveform stacking location method that is less dependent on knowledge of the velocity model and presents several benefits that improve the location accuracy: 1) it accounts for phase delays due to local site effects, e.g. surface topography or variable sediment thickness; 2) theoretical velocity models are only used to estimate travel times within the source volume, and not along the whole source-sensor path. We finally compare the location results for both synthetic and real data with those obtained by using classical waveform stacking approaches.

  16. Minimum entropy deconvolution optimized sinusoidal synthesis and its application to vibration based fault detection

    NASA Astrophysics Data System (ADS)

    Li, Gang; Zhao, Qing

    2017-03-01

    In this paper, a minimum entropy deconvolution based sinusoidal synthesis (MEDSS) filter is proposed to improve the fault detection performance of the regular sinusoidal synthesis (SS) method. The SS filter is an efficient linear predictor that exploits the frequency properties during model construction. The phase information of the harmonic components is not used in the regular SS filter. However, the phase relationships are important in differentiating noise from characteristic impulsive fault signatures. Therefore, in this work, the minimum entropy deconvolution (MED) technique is used to optimize the SS filter during the model construction process. A time-weighted-error Kalman filter is used to estimate the MEDSS model parameters adaptively. Three simulation examples and a practical application case study are provided to illustrate the effectiveness of the proposed method. The regular SS method and the autoregressive MED (ARMED) method are also implemented for comparison. The MEDSS model has demonstrated superior performance compared to the regular SS method and it also shows comparable or better performance with much less computational intensity than the ARMED method.
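
    As background, minimum entropy deconvolution selects an inverse filter f that makes the output y = f * x maximally impulsive. The classical (Wiggins) objective it maximizes is the normalized fourth moment (varimax norm), shown here in generic form rather than the paper's exact MEDSS formulation:

    ```latex
    O(f) \;=\; \frac{\sum_{n} y_n^4}{\left(\sum_{n} y_n^2\right)^2},
    \qquad
    y_n \;=\; \sum_{k} f_k\, x_{n-k}.
    ```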

  17. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture

    USDA-ARS?s Scientific Manuscript database

    Previously we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative enabling exploitation...

  18. Analytical framework for reconstructing heterogeneous environmental variables from mammal community structure.

    PubMed

    Louys, Julien; Meloro, Carlo; Elton, Sarah; Ditchfield, Peter; Bishop, Laura C

    2015-01-01

    We test the performance of two models that use mammalian communities to reconstruct multivariate palaeoenvironments. While both models exploit the correlation between mammal communities (defined in terms of functional groups) and arboreal heterogeneity, the first uses a multiple multivariate regression of community structure and arboreal heterogeneity, while the second uses a linear regression of the principal components of each ecospace. The success of these methods means the palaeoenvironment of a particular locality can be reconstructed in terms of the proportions of heavy, moderate, light, and absent tree canopy cover. The linear regression is less biased, and more precisely and accurately reconstructs heavy tree canopy cover than the multiple multivariate model. However, the multiple multivariate model performs better than the linear regression for all other canopy cover categories. Both models consistently perform better than randomly generated reconstructions. We apply both models to the palaeocommunity of the Upper Laetolil Beds, Tanzania. Our reconstructions indicate that there was very little heavy tree cover at this site (likely less than 10%), with the palaeo-landscape instead comprising a mixture of light and absent tree cover. These reconstructions help resolve the previous conflicting palaeoecological reconstructions made for this site.
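
    A minimal sketch of the second (linear-regression-of-principal-components) model, assuming hypothetical arrays `communities` (sites x functional-group proportions), `canopy` (sites x four cover categories) and `fossil_community` (1 x functional-group proportions):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    # Project each ecospace onto its principal components, then regress
    # the environmental components on the community components.
    pca_c, pca_e = PCA(n_components=3), PCA(n_components=3)
    Xc = pca_c.fit_transform(communities)
    Ye = pca_e.fit_transform(canopy)
    model = LinearRegression().fit(Xc, Ye)

    # Reconstruct canopy-cover proportions for a fossil community.
    pred = pca_e.inverse_transform(model.predict(pca_c.transform(fossil_community)))
    ```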

  19. Microworlds of the dynamic balanced scorecard for university (DBSC-UNI)

    NASA Astrophysics Data System (ADS)

    Hawari, Nurul Nazihah; Tahar, Razman Mat

    2015-12-01

    This research focuses on the development of a Microworld of the dynamic balanced scorecard for university (DBSC-UNI) in order to enhance the university strategic planning process. To develop the model, we integrated the balanced scorecard method and the system dynamics modelling method. In contrast to traditional university planning tools, the developed model addresses university management problems holistically and dynamically. It is found that, using the system dynamics modelling method, the cause-and-effect relationships among variables related to the four conventional balanced scorecard perspectives are better understood. The dynamic processes that give rise to differences between targeted and actual performance can also be better understood. Decision quality is therefore expected to improve as decisions become better informed. The developed Microworld can be exploited by university management to design policies that positively influence the future in the direction of desired goals with minimal side effects. This paper integrates the balanced scorecard and system dynamics modelling methods in analyzing university performance, and thereby demonstrates the effectiveness and strength of system dynamics modelling in solving strategic planning problems, particularly in the higher education sector.

  20. A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run.

    PubMed

    Armeanu, Daniel; Andrei, Jean Vasile; Lache, Leonard; Panait, Mirela

    2017-01-01

    The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets.

  1. A multifactor approach to forecasting Romanian gross domestic product (GDP) in the short run

    PubMed Central

    Armeanu, Daniel; Lache, Leonard; Panait, Mirela

    2017-01-01

    The purpose of this paper is to investigate the application of a generalized dynamic factor model (GDFM) based on dynamic principal components analysis to forecasting short-term economic growth in Romania. We have used a generalized principal components approach to estimate a dynamic model based on a dataset comprising 86 economic and non-economic variables that are linked to economic output. The model exploits the dynamic correlations between these variables and uses three common components that account for roughly 72% of the information contained in the original space. We show that it is possible to generate reliable forecasts of quarterly real gross domestic product (GDP) using just the common components while also assessing the contribution of the individual variables to the dynamics of real GDP. In order to assess the relative performance of the GDFM to standard models based on principal components analysis, we have also estimated two Stock-Watson (SW) models that were used to perform the same out-of-sample forecasts as the GDFM. The results indicate significantly better performance of the GDFM compared with the competing SW models, which empirically confirms our expectations that the GDFM produces more accurate forecasts when dealing with large datasets. PMID:28742100
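
    As an illustration of the factor-model idea, static principal components can stand in for the generalized dynamic principal components of the GDFM. The array names below are hypothetical: `panel` is a (T x 86) matrix of standardized indicators and `gdp` the aligned quarterly real GDP growth series:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    # Extract three common factors (the paper's three components capture
    # roughly 72% of the information in the original space).
    factors = PCA(n_components=3).fit_transform(panel)

    # One-step-ahead forecast: regress next-quarter GDP on current factors.
    reg = LinearRegression().fit(factors[:-1], gdp[1:])
    forecast = reg.predict(factors[-1:])
    ```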

  2. Unifying Model-Based and Reactive Programming within a Model-Based Executive

    NASA Technical Reports Server (NTRS)

    Williams, Brian C.; Gupta, Vineet; Norvig, Peter (Technical Monitor)

    1999-01-01

    Real-time, model-based, deduction has recently emerged as a vital component in AI's tool box for developing highly autonomous reactive systems. Yet one of the current hurdles towards developing model-based reactive systems is the number of methods simultaneously employed, and their corresponding melange of programming and modeling languages. This paper offers an important step towards unification. We introduce RMPL, a rich modeling language that combines probabilistic, constraint-based modeling with reactive programming constructs, while offering a simple semantics in terms of hidden state Markov processes. We introduce probabilistic, hierarchical constraint automata (PHCA), which allow Markov processes to be expressed in a compact representation that preserves the modularity of RMPL programs. Finally, a model-based executive, called Reactive Burton is described that exploits this compact encoding to perform efficIent simulation, belief state update and control sequence generation.

  3. A novel spatial performance metric for robust pattern optimization of distributed hydrological models

    NASA Astrophysics Data System (ADS)

    Stisen, S.; Demirel, C.; Koch, J.

    2017-12-01

    Evaluation of performance is an integral part of model development and calibration, and it is of paramount importance when communicating modelling results to stakeholders and the scientific community. The hydrological modelling community has a comprehensive and well-tested toolbox of metrics to assess temporal model performance. In contrast, experience in evaluating spatial performance has not kept pace with the wide availability of spatial observations or with the sophisticated model codes that simulate the spatial variability of complex hydrological processes. This study aims at making a contribution towards advancing spatial-pattern-oriented model evaluation for distributed hydrological models. This is achieved by introducing a novel spatial performance metric which provides robust pattern performance during model calibration. The promoted SPAtial EFficiency (spaef) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multi-component approach is necessary in order to adequately compare spatial patterns. spaef, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and fractions skill score, are tested in a spatial-pattern-oriented model calibration of a catchment model in Denmark. The calibration is constrained by a remote-sensing-based spatial pattern of evapotranspiration and discharge time series at two stations. Our results stress that stand-alone metrics tend to fail to provide holistic pattern information to the optimizer, which underlines the importance of multi-component metrics. The three spaef components are independent, which allows them to complement each other in a meaningful way. This study promotes the use of bias-insensitive metrics which allow comparing variables that are related but may differ in unit, in order to optimally exploit spatial observations made available by remote sensing platforms. We see great potential for spaef across environmental disciplines dealing with spatially distributed modelling.
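
    A compact implementation of the three-component metric as described, following the published SPAEF formulation; the histogram overlap is computed on z-scored fields so that variables differing in unit remain comparable:

    ```python
    import numpy as np

    def spaef(sim, obs, bins=100):
        """SPAtial EFficiency: 1 is a perfect spatial-pattern match.
        alpha = Pearson correlation, beta = ratio of coefficients of
        variation, gamma = overlap of z-scored histograms."""
        sim, obs = sim.ravel(), obs.ravel()
        alpha = np.corrcoef(sim, obs)[0, 1]
        beta = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
        zs = (sim - sim.mean()) / sim.std()
        zo = (obs - obs.mean()) / obs.std()
        lo, hi = min(zs.min(), zo.min()), max(zs.max(), zo.max())
        hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
        ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
        gamma = np.minimum(hs, ho).sum() / ho.sum()
        return 1 - np.sqrt((alpha - 1) ** 2 + (beta - 1) ** 2 + (gamma - 1) ** 2)
    ```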

  4. Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes

    NASA Astrophysics Data System (ADS)

    Hirsch, Damian; Gharib, Morteza

    2016-11-01

    Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher input fluidic requirement (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, which both have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (i.e., steady or sweeping jets). The model-incorporated assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight to the AFC technology and its physical limitations. Supported by Boeing.
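
    For reference, the momentum coefficient in its commonly used form relates the jet momentum flux to the freestream dynamic pressure and a reference area; the difficulty noted above lies in reliably obtaining the jet velocity or momentum flux entering this expression:

    ```latex
    C_\mu \;=\; \frac{\dot{m}\,U_j}{q_\infty\,S_{\mathrm{ref}}}
    \;=\; \frac{\rho_j\,U_j^2\,A_j}{\tfrac{1}{2}\,\rho_\infty\,U_\infty^2\,S_{\mathrm{ref}}}.
    ```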

  5. Modeling the Office of Science Ten Year FacilitiesPlan: The PERI Architecture Tiger Team

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Supinski, B R; Alam, S R; Bailey, D H

    2009-05-27

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort to the optimization of key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  6. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Tiger Team

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Supinski, Bronis R.; Alam, Sadaf; Bailey, David H.

    2009-06-26

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  7. Modeling the Office of Science Ten Year Facilities Plan: The PERI Architecture Team

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Supinski, Bronis R.; Alam, Sadaf R; Bailey, David

    2009-01-01

    The Performance Engineering Institute (PERI) originally proposed a tiger team activity as a mechanism to target significant effort optimizing key Office of Science applications, a model that was successfully realized with the assistance of two JOULE metric teams. However, the Office of Science requested a new focus beginning in 2008: assistance in forming its ten year facilities plan. To meet this request, PERI formed the Architecture Tiger Team, which is modeling the performance of key science applications on future architectures, with S3D, FLASH and GTC chosen as the first application targets. In this activity, we have measured the performance of these applications on current systems in order to understand their baseline performance and to ensure that our modeling activity focuses on the right versions and inputs of the applications. We have applied a variety of modeling techniques to anticipate the performance of these applications on a range of anticipated systems. While our initial findings predict that Office of Science applications will continue to perform well on future machines from major hardware vendors, we have also encountered several areas in which we must extend our modeling techniques in order to fulfill our mission accurately and completely. In addition, we anticipate that models of a wider range of applications will reveal critical differences between expected future systems, thus providing guidance for future Office of Science procurement decisions, and will enable DOE applications to exploit machines in future facilities fully.

  8. Towards Cloud-Resolving European-Scale Climate Simulations using a fully GPU-enabled Prototype of the COSMO Regional Model

    NASA Astrophysics Data System (ADS)

    Leutwyler, David; Fuhrer, Oliver; Cumming, Benjamin; Lapillonne, Xavier; Gysi, Tobias; Lüthi, Daniel; Osuna, Carlos; Schär, Christoph

    2014-05-01

    The representation of moist convection is a major shortcoming of current global and regional climate models. State-of-the-art global models usually operate at grid spacings of 10-300 km, and therefore cannot fully resolve the relevant upscale and downscale energy cascades. Therefore parametrization of the relevant sub-grid scale processes is required. Several studies have shown that this approach entails major uncertainties for precipitation processes, which raises concerns about the models' ability to represent precipitation statistics and associated feedback processes, as well as their sensitivities to large-scale conditions. Further refining the model resolution to the kilometer scale allows representing these processes much closer to first principles and thus should yield an improved representation of the water cycle, including the drivers of extreme events. Although cloud-resolving simulations are very useful tools for climate simulations and numerical weather prediction, their high horizontal resolution, and consequently the small time steps needed, challenge current supercomputers to model large domains and long time scales. The recent innovations in the domain of hybrid supercomputers have led to mixed node designs with a conventional CPU and an accelerator such as a graphics processing unit (GPU). GPUs relax the necessity for cache coherency and complex memory hierarchies, but have a larger system memory bandwidth. This is highly beneficial for low-compute-intensity codes such as atmospheric stencil-based models. However, to efficiently exploit these hybrid architectures, climate models need to be ported and/or redesigned. Within the framework of the Swiss High Performance High Productivity Computing initiative (HP2C), a project to port the COSMO model to hybrid architectures has recently come to an end. The product of these efforts is a version of COSMO with improved performance on traditional x86-based clusters as well as on hybrid architectures with GPUs. We present our redesign and porting approach as well as our experience and lessons learned. Furthermore, we discuss relevant performance benchmarks obtained on the new hybrid Cray XC30 system "Piz Daint" installed at the Swiss National Supercomputing Centre (CSCS), both in terms of time-to-solution and energy consumption. We will demonstrate a first set of short cloud-resolving climate simulations at the European scale using the GPU-enabled COSMO prototype and elaborate on our future plans for exploiting this new model capability.

  9. Teotihuacan, tepeapulco, and obsidian exploitation.

    PubMed

    Charlton, T H

    1978-06-16

    Current cultural ecological models of the development of civilization in central Mexico emphasize the role of subsistence production techniques and organization. The recent use of established and productive archeological surface survey techniques along natural corridors of communication between favorable niches for cultural development within the Central Mexican symbiotic region resulted in the location of sites that indicate an early development of a decentralized resource exploitation, manufacturing, and exchange network. The association of the development of this system with Teotihuacán indicates the importance such nonsubsistence production and exchange had in the evolution of this first central Mexican civilization. The later expansion of Teotihuacán into more distant areas of Mesoamerica was based on this resource exploitation model. Later civilizations centered at Tula and Tenochtitlán also used such a model in their expansion.

  10. In Silico Neuro-Oncology: Brownian Motion-Based Mathematical Treatment as a Potential Platform for Modeling the Infiltration of Glioma Cells into Normal Brain Tissue.

    PubMed

    Antonopoulos, Markos; Stamatakos, Georgios

    2015-01-01

    Intensive glioma tumor infiltration into the surrounding normal brain tissues is one of the most critical causes of glioma treatment failure. To quantitatively understand and mathematically simulate this phenomenon, several diffusion-based mathematical models have appeared in the literature. The majority of them ignore the anisotropic character of diffusion of glioma cells since availability of pertinent truly exploitable tomographic imaging data is limited. Aiming at enriching the anisotropy-enhanced glioma model weaponry so as to increase the potential of exploiting available tomographic imaging data, we propose a Brownian motion-based mathematical analysis that could serve as the basis for a simulation model estimating the infiltration of glioblastoma cells into the surrounding brain tissue. The analysis is based on clinical observations and exploits diffusion tensor imaging (DTI) data. Numerical simulations and suggestions for further elaboration are provided.
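
    Such DTI-informed infiltration models typically build on an anisotropic reaction-diffusion equation of the following standard form (shown as background, not the paper's exact Brownian-motion formulation), where D(x) is a diffusion tensor derived from the DTI data, rho a net proliferation rate, and c the tumor cell density:

    ```latex
    \frac{\partial c}{\partial t}
    \;=\; \nabla \cdot \big( \mathbf{D}(\mathbf{x})\, \nabla c \big)
    \;+\; \rho\, c \left( 1 - \frac{c}{c_{\max}} \right).
    ```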

  11. Host discrimination in modular mutualisms: a theoretical framework for meta-populations of mutualists and exploiters

    PubMed Central

    Steidinger, Brian S.; Bever, James D.

    2016-01-01

    Plants in multiple symbioses are exploited by symbionts that consume their resources without providing services. Discriminating hosts are thought to stabilize mutualism by preferentially allocating resources into anatomical structures (modules) where services are generated, with examples of modules including the entire inflorescences of figs and the root nodules of legumes. Modules are often colonized by multiple symbiotic partners, such that exploiters that co-occur with mutualists within mixed modules can share rewards generated by their mutualist competitors. We developed a meta-population model to answer how the population dynamics of mutualists and exploiters change when they interact with hosts with different module occupancies (number of colonists per module) and functionally different patterns of allocation into mixed modules. We find that as module occupancy increases, hosts must increase the magnitude of preferentially allocated resources in order to sustain comparable populations of mutualists. Further, we find that mixed colonization can result in the coexistence of mutualist and exploiter partners, but only when preferential allocation follows a saturating function of the number of mutualists in a module. Finally, using published data from the fig–wasp mutualism as an illustrative example, we derive model predictions that approximate the proportion of exploiter, non-pollinating wasps observed in the field. PMID:26740613

  12. Constructing Neuronal Network Models in Massively Parallel Environments.

    PubMed

    Ippen, Tammo; Eppler, Jochen M; Plesser, Hans E; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers.

  13. Constructing Neuronal Network Models in Massively Parallel Environments

    PubMed Central

    Ippen, Tammo; Eppler, Jochen M.; Plesser, Hans E.; Diesmann, Markus

    2017-01-01

    Recent advances in the development of data structures to represent spiking neuron network models enable us to exploit the complete memory of petascale computers for a single brain-scale network simulation. In this work, we investigate how well we can exploit the computing power of such supercomputers for the creation of neuronal networks. Using an established benchmark, we divide the runtime of simulation code into the phase of network construction and the phase during which the dynamical state is advanced in time. We find that on multi-core compute nodes network creation scales well with process-parallel code but exhibits a prohibitively large memory consumption. Thread-parallel network creation, in contrast, exhibits speedup only up to a small number of threads but has little overhead in terms of memory. We further observe that the algorithms creating instances of model neurons and their connections scale well for networks of ten thousand neurons, but do not show the same speedup for networks of millions of neurons. Our work uncovers that the lack of scaling of thread-parallel network creation is due to inadequate memory allocation strategies and demonstrates that thread-optimized memory allocators recover excellent scaling. An analysis of the loop order used for network construction reveals that more complex tests on the locality of operations significantly improve scaling and reduce runtime by allowing construction algorithms to step through large networks more efficiently than in existing code. The combination of these techniques increases performance by an order of magnitude and harnesses the increasingly parallel compute power of the compute nodes in high-performance clusters and supercomputers. PMID:28559808

  14. Establishing the relationship between manufacturing and component performance in stretch formed thermoplastic composites

    NASA Technical Reports Server (NTRS)

    Santare, Michael H.; Pipes, R. Byron; Beaussart, A. J.; Coffin, D. W.; Otoole, B. J.; Shuler, S. F.

    1993-01-01

    Flexible manufacturing methods are needed to reduce the cost of using advanced composites in structural applications. One method that allows for this is the stretch forming of long discontinuous fiber materials with thermoplastic matrices. In order to exploit this flexibility in an economical way, a thorough understanding of the relationship between manufacturing and component performance must be developed. This paper reviews some of the recent work geared toward establishing this understanding. Micromechanics models have been developed to predict the formability of the material during processing. The latest improvement of these models includes the viscoelastic nature of the matrix and comparison with experimental data. A finite element scheme is described which can be used to model the forming process. This model uses equivalent anisotropic viscosities from the micromechanics models and predicts the microstructure in the formed part. In addition, structural models have been built to account for the material property gradients that can result from the manufacturing procedures. Recent developments in this area include the analysis of stress concentrations and a failure model each accounting for the heterogeneous material fields.

  15. Electrical performances of pyroelectric bimetallic strip heat engines describing a Stirling cycle

    NASA Astrophysics Data System (ADS)

    Arnaud, A.; Boughaleb, J.; Monfray, S.; Boeuf, F.; Cugat, O.; Skotnicki, T.

    2015-12-01

    This paper deals with the analytical modeling of pyroelectric bimetallic strip heat engines. These devices are designed to exploit the snap-through of a thermo-mechanically bistable membrane to transform a part of the heat flowing through the membrane into mechanical energy and to convert it into electric energy by means of a piezoelectric layer deposited on the surface of the bistable membrane. In this paper, we describe the properties of these heat engines when they complete a Stirling cycle, and we evaluate the performance (available energy, Carnot efficiency...) of these harvesters at the macro- and micro-scale.

  16. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  17. Going ballistic: Graphene hot electron transistors

    NASA Astrophysics Data System (ADS)

    Vaziri, S.; Smith, A. D.; Östling, M.; Lupina, G.; Dabrowski, J.; Lippert, G.; Mehr, W.; Driussi, F.; Venica, S.; Di Lecce, V.; Gnudi, A.; König, M.; Ruhl, G.; Belete, M.; Lemme, M. C.

    2015-12-01

    This paper reviews the experimental and theoretical state of the art in ballistic hot electron transistors that utilize two-dimensional base contacts made from graphene, i.e. graphene base transistors (GBTs). Early performance predictions that indicated potential for THz operation still hold true today, even with improved models that take non-idealities into account. Experimental results clearly demonstrate the basic functionality, with on/off current switching over several orders of magnitude, but further developments are required to exploit the full potential of the GBT device family. In particular, interfaces between graphene and semiconductors or dielectrics are far from perfect and thus limit experimental device integrity, reliability and performance.

  18. Numerical and analytical modeling of the end-loaded split (ELS) test specimens made of multi-directional coupled composite laminates

    NASA Astrophysics Data System (ADS)

    Samborski, Sylwester; Valvo, Paolo S.

    2018-01-01

    The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.
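
    In the VCCT, the strain energy release rate at a delamination-front node is estimated from the nodal force at the crack tip and the relative displacement one element behind it. For the mode II component that dominates in the ELS test, a standard form is the following (F_x the shear nodal force, Delta u_x the relative sliding displacement, b the width associated with the node, Delta a the element length; shown as the generic VCCT relation, not the paper's exact notation):

    ```latex
    G_{II} \;=\; \frac{F_x\,\Delta u_x}{2\,b\,\Delta a},
    \qquad
    G_{\mathrm{tot}} \;=\; G_I + G_{II} + G_{III}.
    ```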

  19. Dipole response of the odd-proton nucleus 205Tl up to the neutron-separation energy

    NASA Astrophysics Data System (ADS)

    Benouaret, N.; Beller, J.; Pai, H.; Pietralla, N.; Ponomarev, V. Yu; Romig, C.; Schnorrenberger, L.; Zweidinger, M.; Scheck, M.; Isaak, J.; Savran, D.; Sonnabend, K.; Raut, R.; Rusev, G.; Tonchev, A. P.; Tornow, W.; Weller, H. R.; Kelley, J. H.

    2016-11-01

    The low-lying electromagnetic dipole strength of the odd-proton nuclide 205Tl has been investigated up to the neutron separation energy exploiting the method of nuclear resonance fluorescence. In total, 61 levels of 205Tl have been identified. The measured strength distribution of 205Tl is discussed and compared to those of even-even and even-odd mass nuclei in the same mass region as well as to calculations that have been performed within the quasi-particle phonon model.

  20. Rapid Elemental Analysis and Provenance Study of Blumea balsamifera DC Using Laser-Induced Breakdown Spectroscopy

    PubMed Central

    Liu, Xiaona; Zhang, Qiao; Wu, Zhisheng; Shi, Xinyuan; Zhao, Na; Qiao, Yanjiang

    2015-01-01

    Laser-induced breakdown spectroscopy (LIBS) was applied to perform a rapid elemental analysis and provenance study of Blumea balsamifera DC. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were implemented to exploit the multivariate nature of the LIBS data. Scores and loadings of computed principal components visually illustrated the differing spectral data. The PLS-DA algorithm showed good classification performance. The PLS-DA model using complete spectra as input variables had similar discrimination performance to using selected spectral lines as input variables. The down-selection of spectral lines was specifically focused on the major elements of B. balsamifera samples. Results indicated that LIBS could be used to rapidly analyze elements and to perform provenance study of B. balsamifera. PMID:25558999
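
    PLS-DA is commonly run as PLS regression on indicator-coded class labels. A minimal scikit-learn sketch, assuming three or more provenance classes and hypothetical array names (`spectra`, `labels`, `X_new`):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.preprocessing import LabelBinarizer

    def plsda_fit_predict(spectra, labels, X_new, n_components=5):
        """PLS-DA sketch: encode provenance classes as indicator columns,
        fit PLS regression on the LIBS spectra, and assign each new
        spectrum to the class with the largest predicted response."""
        lb = LabelBinarizer()
        Y = lb.fit_transform(labels)            # one column per class
        pls = PLSRegression(n_components=n_components).fit(spectra, Y)
        return lb.classes_[np.argmax(pls.predict(X_new), axis=1)]
    ```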

  1. Coastal Thematic Exploitation Platform (C-TEP): An innovative and collaborative platform to facilitate Big Data coastal research

    NASA Astrophysics Data System (ADS)

    Tuohy, Eimear; Clerc, Sebastien; Politi, Eirini; Mangin, Antoine; Datcu, Mihai; Vignudelli, Stefano; Illuzzi, Diomede; Craciunescu, Vasile; Aspetsberger, Michael

    2017-04-01

    The Coastal Thematic Exploitation Platform (C-TEP) is an on-going European Space Agency (ESA) funded project to develop a web service dedicated to the observation of the coastal environment and to support coastal management and monitoring. For over 20 years ESA satellites have provided a wealth of environmental data. The availability of an ever increasing volume of environmental data from satellite remote sensing provides a unique opportunity for exploratory science and the development of coastal applications. However, the diversity and complexity of available EO data, the need for efficient data access, information extraction, data management and high-specification processing tools pose major challenges to achieving its full potential in terms of Big Data exploitation. C-TEP will provide a new means to handle the technical challenges of the observation of coastal areas and contribute to improved understanding and decision-making with respect to coastal resources and environments. C-TEP will unlock coastal knowledge and innovation as a collaborative, virtual work environment providing access to a comprehensive database of coastal Earth Observation (EO) data, in-situ data, model data and the tools and processors necessary to fully exploit these vast and heterogeneous datasets. The cloud processing capabilities provided allow users to perform heavy processing tasks through a user-friendly Graphical User Interface (GUI). A connection to the PEPS (Plateforme pour l'Exploitation des Produits Sentinel) archive will provide data from Sentinel missions 1, 2 and 3. Automatic comparison tools will be provided to exploit the in-situ datasets in synergy with EO data. In addition, users may develop, test and share their own advanced algorithms for the extraction of coastal information. Algorithm validation will be facilitated by capabilities to compute statistics over long time series. Finally, C-TEP subscription services will allow users to perform automatic monitoring of some key indicators (water quality, water level, vegetation stress) from near-real-time data. To demonstrate the benefits of C-TEP, three pilot cases have been implemented, each addressing specific, and highly topical, coastal research needs. These applications include change detection in land and seabed cover, water quality monitoring and reporting, and a coastal altimetry processor. The pilot cases demonstrate the wide scope of C-TEP and how it may contribute to European projects and international coastal networks. In conclusion, C-TEP aims to provide new services and tools which will revolutionise accessibility to EO datasets, support multi-disciplinary research collaboration, and provide long-term data series and innovative services for the monitoring of coastal regions.

  2. Interleaved numerical renormalization group as an efficient multiband impurity solver

    NASA Astrophysics Data System (ADS)

    Stadler, K. M.; Mitchell, A. K.; von Delft, J.; Weichselbaum, A.

    2016-06-01

    Quantum impurity problems can be solved using the numerical renormalization group (NRG), which involves discretizing the free conduction electron system and mapping to a "Wilson chain." It was shown recently that Wilson chains for different electronic species can be interleaved by use of a modified discretization, dramatically increasing the numerical efficiency of the RG scheme [Phys. Rev. B 89, 121105(R) (2014), 10.1103/PhysRevB.89.121105]. Here we systematically examine the accuracy and efficiency of the "interleaved" NRG (iNRG) method in the context of the single impurity Anderson model, the two-channel Kondo model, and a three-channel Anderson-Hund model. The performance of iNRG is explicitly compared with "standard" NRG (sNRG): when the average number of states kept per iteration is the same in both calculations, the accuracy of iNRG is equivalent to that of sNRG but the computational costs are significantly lower in iNRG when the same symmetries are exploited. Although iNRG weakly breaks SU(N ) channel symmetry (if present), both accuracy and numerical cost are entirely competitive with sNRG exploiting full symmetries. iNRG is therefore shown to be a viable and technically simple alternative to sNRG for high-symmetry models. Moreover, iNRG can be used to solve a range of lower-symmetry multiband problems that are inaccessible to sNRG.

  3. Determining similarity of scientific entities in annotation datasets

    PubMed Central

    Palma, Guillermo; Vidal, Maria-Esther; Haag, Eric; Raschid, Louiqa; Thor, Andreas

    2015-01-01

    Linked Open Data initiatives have made available a diversity of scientific collections where scientists have annotated entities in the datasets with controlled vocabulary terms from ontologies. Annotations encode scientific knowledge, which is captured in annotation datasets. Determining relatedness between annotated entities becomes a building block for pattern mining, e.g. identifying drug–drug relationships may depend on the similarity of the targets that interact with each drug. A diversity of similarity measures has been proposed in the literature to compute relatedness between a pair of entities. Each measure exploits some knowledge including the name, function, relationships with other entities, taxonomic neighborhood and semantic knowledge. We propose a novel general-purpose annotation similarity measure called ‘AnnSim’ that measures the relatedness between two entities based on the similarity of their annotations. We model AnnSim as a 1–1 maximum weight bipartite match and exploit properties of existing solvers to provide an efficient solution. We empirically study the performance of AnnSim on real-world datasets of drugs and disease associations from clinical trials and relationships between drugs and (genomic) targets. Using baselines that include a variety of measures, we identify where AnnSim can provide a deeper understanding of the semantics underlying the relatedness of a pair of entities or where it could lead to predicting new links or identifying potential novel patterns. Although AnnSim does not exploit knowledge or properties of a particular domain, its performance compares well with a variety of state-of-the-art domain-specific measures. Database URL: http://www.yeastgenome.org/ PMID:25725057
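
    The core of AnnSim is a 1-1 maximum-weight bipartite matching over the two annotation sets; a hedged sketch follows, using SciPy's assignment solver. The term-similarity function and the normalisation by the larger annotation set are illustrative assumptions, not necessarily the paper's exact formulation.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def ann_sim(annots_a, annots_b, term_sim):
            """Similarity of two annotated entities via a 1-1 matching."""
            annots_a, annots_b = list(annots_a), list(annots_b)
            w = np.array([[term_sim(a, b) for b in annots_b] for a in annots_a])
            rows, cols = linear_sum_assignment(w, maximize=True)
            # Normalise the total matched weight by the larger annotation set.
            return w[rows, cols].sum() / max(len(annots_a), len(annots_b))

        # Toy usage with a trivial 0/1 term similarity (exact term match).
        sim = ann_sim(["GO:1", "GO:2"], ["GO:2", "GO:3"],
                      lambda a, b: 1.0 if a == b else 0.0)
        print(sim)  # 0.5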

  5. GPU Based N-Gram String Matching Algorithm with Score Table Approach for String Searching in Many Documents

    NASA Astrophysics Data System (ADS)

    Srinivasa, K. G.; Shree Devi, B. N.

    2017-10-01

    String searching in documents has become a tedious task with the evolution of Big Data. The generation of large data sets demands high-performance search algorithms in areas such as text mining, information retrieval and many others. The popularity of GPUs for general-purpose computing has been increasing for various applications. It is therefore of great interest to exploit the thread feature of a GPU to provide a high-performance search algorithm. This paper proposes an optimized new approach to the N-gram model for string search in a number of lengthy documents and its GPU implementation. The algorithm exploits GPGPUs for searching strings in many documents, employing character-level N-gram matching with a parallel Score Table approach and search using the CUDA API. The new Score Table approach, used for frequency storage of N-grams in a document, makes the search independent of the document's length and allows faster access to the frequency values, thus decreasing the search complexity. The extensive thread feature of a GPU has been exploited to enable parallel pre-processing of trigrams in a document for Score Table creation and parallel search in a huge number of documents, thus speeding up the whole search process even for a large pattern size. Experiments were carried out on many documents of varied length and search strings from the standard Lorem Ipsum text on NVIDIA's GeForce GT 540M GPU with 96 cores. Results show that the parallel approach to Score Table creation and searching gives a good speedup over the same approach executed serially.
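
    The score-table idea can be shown in a few lines of plain Python: trigram frequencies are precomputed once per document, so each query reduces to cheap table lookups whose cost is independent of document length. This CPU sketch only illustrates the data structure; the paper's contribution is the CUDA parallelisation of both table construction and search.

        from collections import Counter

        def trigram_table(doc):
            """Score table: frequency of every character trigram in the document."""
            return Counter(doc[i:i + 3] for i in range(len(doc) - 2))

        def score(pattern, table):
            """Sum of table frequencies over the pattern's trigrams."""
            return sum(table[pattern[i:i + 3]] for i in range(len(pattern) - 2))

        docs = ["lorem ipsum dolor sit amet", "ipsum lorem amet sit dolor"]
        tables = [trigram_table(d) for d in docs]   # built once, reused for every query
        print([score("lorem", t) for t in tables])  # per-query cost independent of length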

  6. Mathematics of Sensing, Exploitation, and Execution (MSEE) Hierarchical Representations for the Evaluation of Sensed Data

    DTIC Science & Technology

    2016-06-01

    theories of the mammalian visual system, and exploiting descriptive text that may accompany a still image for improved inference. The focus of the Brown team was on single images. Keywords: computer vision, semantic description, street scenes, belief propagation, generative models, nonlinear filtering, sufficient statistics.

  7. A practical technique for quantifying the performance of acoustic emission systems on plate-like structures.

    PubMed

    Scholey, J J; Wilcox, P D; Wisnom, M R; Friswell, M I

    2009-06-01

    A model for quantifying the performance of acoustic emission (AE) systems on plate-like structures is presented. Employing a linear transfer function approach, the model is applicable to both isotropic and anisotropic materials. The model requires several inputs including source waveforms, phase velocity and attenuation. It is recognised that these variables may not be readily available, thus efficient measurement techniques are presented for obtaining phase velocity and attenuation in a form that can be exploited directly in the model. Inspired by previously documented methods, the application of these techniques is examined and some important implications for propagation characterisation in plates are discussed. Example measurements are made on isotropic and anisotropic plates and, where possible, comparisons with numerical solutions are made. By inputting experimentally obtained data into the model, quantitative system metrics are examined for different threshold values and sensor locations. By producing plots describing areas of hit success and source location error, the ability to measure the performance of different AE system configurations is demonstrated. This quantitative approach will help to place AE testing on a more solid foundation, underpinning its use in industrial AE applications.

  8. Learning accurate very fast decision trees from uncertain data streams

    NASA Astrophysics Data System (ADS)

    Liang, Chunquan; Zhang, Yang; Shi, Peng; Hu, Zhengguo

    2015-12-01

    Most existing works on data stream classification assume the streaming data is precise and definite. Such assumption, however, does not always hold in practice, since data uncertainty is ubiquitous in data stream applications due to imprecise measurement, missing values, privacy protection, etc. The goal of this paper is to learn accurate decision tree models from uncertain data streams for classification analysis. On the basis of very fast decision tree (VFDT) algorithms, we proposed an algorithm for constructing an uncertain VFDT tree with classifiers at tree leaves (uVFDTc). The uVFDTc algorithm can exploit uncertain information effectively and efficiently in both the learning and the classification phases. In the learning phase, it uses Hoeffding bound theory to learn from uncertain data streams and yield fast and reasonable decision trees. In the classification phase, at tree leaves it uses uncertain naive Bayes (UNB) classifiers to improve the classification performance. Experimental results on both synthetic and real-life datasets demonstrate the strong ability of uVFDTc to classify uncertain data streams. The use of UNB at tree leaves has improved the performance of uVFDTc, especially the any-time property, the benefit of exploiting uncertain information, and the robustness against uncertainty.
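
    The Hoeffding bound at the heart of VFDT-style learners can be stated in a few lines; the sketch below shows the standard split test (the values are illustrative, and the uncertain-data extensions of uVFDTc are not reproduced here).

        import math

        def hoeffding_bound(R, delta, n):
            """Deviation bound after n observations, metric range R, confidence 1 - delta."""
            return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

        # Split when the gain gap between the two best attributes exceeds the bound;
        # as n grows the bound shrinks, so decisions are eventually forced.
        eps = hoeffding_bound(R=1.0, delta=1e-7, n=2000)
        best_gain, second_gain = 0.32, 0.18
        if best_gain - second_gain > eps:
            print("split on the best attribute (eps = %.4f)" % eps)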

  9. FDD Massive MIMO Channel Estimation With Arbitrary 2D-Array Geometry

    NASA Astrophysics Data System (ADS)

    Dai, Jisheng; Liu, An; Lau, Vincent K. N.

    2018-05-01

    This paper addresses the problem of downlink channel estimation in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems. The existing methods usually exploit hidden sparsity under a discrete Fourier transform (DFT) basis to estimate the downlink channel. However, there are at least two shortcomings of these DFT-based methods: 1) they are applicable to uniform linear arrays (ULAs) only, since the DFT basis requires a special structure of ULAs, and 2) they always suffer from a performance loss due to the leakage of energy over some DFT bins. To deal with the above shortcomings, we introduce an off-grid model for downlink channel sparse representation with arbitrary 2D-array antenna geometry, and propose an efficient sparse Bayesian learning (SBL) approach for the sparse channel recovery and off-grid refinement. The main idea of the proposed off-grid method is to consider the sampled grid points as adjustable parameters. Utilizing an inexact block majorization-minimization (MM) algorithm, the grid points are refined iteratively to minimize the off-grid gap. Finally, we further extend the solution to uplink-aided channel estimation by exploiting the angular reciprocity between downlink and uplink channels, which brings enhanced recovery performance.

  10. Application of Peterson's stray light model to complex optical instruments

    NASA Astrophysics Data System (ADS)

    Fray, S.; Goepel, M.; Kroneberger, M.

    2016-07-01

    Gary L. Peterson (Breault Research Organization) presented a simple analytical model for in-field stray light evaluation of axial optical systems. We exploited this idea for more complex optical instruments of the Meteosat Third Generation (MTG) mission. For the Flexible Combined Imager (FCI) we evaluated the in-field stray light of its three-mirror anastigmat telescope, while for the Infrared Sounder (IRS) we performed an end-to-end analysis including the front telescope, interferometer and back telescope assembly and the cold optics. A comparison to simulations will be presented. The authors acknowledge the support by ESA and Thales Alenia Space through the MTG satellites program.

  11. A bi-population based scheme for an explicit exploration/exploitation trade-off in dynamic environments

    NASA Astrophysics Data System (ADS)

    Ben-Romdhane, Hajer; Krichen, Saoussen; Alba, Enrique

    2017-05-01

    Optimisation in changing environments is a challenging research topic since many real-world problems are inherently dynamic. Inspired by the natural evolution process, evolutionary algorithms (EAs) are among the most successful and promising approaches that have addressed dynamic optimisation problems. However, managing the exploration/exploitation trade-off in EAs is still a prevalent issue, due to the difficulties associated with the control and measurement of such behaviour. The proposal of this paper is to achieve a balance between exploration and exploitation in an explicit manner. The idea is to use two equally sized populations: the first one performs exploration while the second one is responsible for exploitation. These tasks are alternated from one generation to the next in a regular pattern, so as to obtain a balanced search engine. Besides, we reinforce the ability of our algorithm to adapt quickly after changes by means of a memory of past solutions. Such a combination aims to restrain premature convergence, to broaden the search area, and to speed up the optimisation. We show through computational experiments, based on a series of dynamic problems and many performance measures, that our approach improves the performance of EAs and outperforms competing algorithms.
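
    A toy sketch of the bi-population scheme follows: two equal-size populations swap the explorer/exploiter roles each generation, with large random moves for exploration and tight sampling around the incumbent best for exploitation. All operators and constants are simplifications for a real-valued minimisation problem, not the paper's exact algorithm (the change-memory component is omitted).

        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda pop: np.sum(pop ** 2, axis=1)          # sphere objective (minimise)
        pop_a = rng.normal(size=(20, 5))
        pop_b = rng.normal(size=(20, 5))

        for gen in range(100):
            # Roles alternate between the two populations in a regular pattern.
            explorer, exploiter = (pop_a, pop_b) if gen % 2 == 0 else (pop_b, pop_a)
            explorer += rng.normal(scale=0.5, size=explorer.shape)   # broad moves
            best = exploiter[np.argmin(f(exploiter))]
            exploiter[:] = best + rng.normal(scale=0.05, size=exploiter.shape)  # refinement

        print("best value found:", min(f(pop_a).min(), f(pop_b).min()))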

  12. Ringed Seal Search for Global Optimization via a Sensitive Search Model

    PubMed Central

    Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar

    2016-01-01

    The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Reflecting the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Levy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than the Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. RSS shows an improvement in the balance between exploration (extensive) and exploitation (intensive) of the search space. RSS can efficiently mimic seal pup behavior to find the best lair, and provides a new algorithm for use in global optimization problems. PMID:26790131
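
    The two-state random walk is straightforward to emulate; in this hedged sketch a random "predator noise" event switches a greedy search from Gaussian (Brownian) steps to heavy-tailed Levy-like steps. The step scales, the noise probability and the Cauchy stand-in for the Levy walk are illustrative choices, not the published parameterisation.

        import numpy as np

        rng = np.random.default_rng(1)
        f = lambda x: np.sum(x ** 2)                  # objective to minimise
        x = rng.normal(size=2)                        # initial lair position

        for _ in range(2000):
            if rng.random() < 0.1:                    # predator noise: urgent state
                step = rng.standard_cauchy(size=2)    # heavy-tailed, Levy-like jump
            else:                                     # normal state
                step = rng.normal(scale=0.1, size=2)  # Brownian local step
            cand = x + step
            if f(cand) < f(x):                        # keep the better lair
                x = cand

        print("best lair:", x, "value:", f(x))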

  13. Nano- and micro-electromechanical switch dynamics

    NASA Astrophysics Data System (ADS)

    Pulskamp, Jeffrey S.; Proie, Robert M.; Polcawich, Ronald G.

    2013-01-01

    This paper reports theoretical analysis and experimental results on the dynamics of piezoelectric MEMS mechanical logic relays. The multiple degree of freedom analytical model, based on modal decomposition, utilizes modal parameters obtained from finite element analysis and an analytical model of piezoelectric actuation. The model accounts for exact device geometry, damping, drive waveform variables, and high electric field piezoelectric nonlinearity. The piezoelectrically excited modal force is calculated directly and provides insight into design optimization for switching speed. The model accurately predicts the propagation delay dependence on actuation voltage of mechanically distinct relay designs. The model explains the observed discrepancies in switching speed of these devices relative to single degree of freedom switching speed models and suggests the strong potential for improved switching speed performance in relays designed for mechanical logic and RF circuits through the exploitation of higher order vibrational modes.

  14. Nonlinear hyperspectral unmixing based on sparse non-negative matrix factorization

    NASA Astrophysics Data System (ADS)

    Li, Jing; Li, Xiaorun; Zhao, Liaoying

    2016-01-01

    Hyperspectral unmixing aims at extracting pure material spectra, accompanied by their corresponding proportions, from a mixed pixel. Because they model the distribution of real materials more accurately, nonlinear mixing models (non-LMMs) are usually considered to perform better than LMMs in complicated scenarios. In the past years, numerous nonlinear models have been successfully applied to hyperspectral unmixing. However, most non-LMMs only consider the sum-to-one or positivity constraints, while the sparsity widespread in real material mixing is a factor that cannot be ignored. That is, a pixel is usually composed of the spectral signatures of only a few materials from the full pure-pixel set. Thus, in this paper, a smooth sparsity constraint is incorporated into the state-of-the-art Fan nonlinear model to exploit the sparsity feature in the nonlinear model and use it to enhance the unmixing performance. This sparsity-constrained Fan model is solved with non-negative matrix factorization. The algorithm was implemented on synthetic and real hyperspectral data and demonstrated its advantage over competing algorithms in the experiments.
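
    For illustration, a linear-mixing simplification with multiplicative updates shows how an L1 term steers NMF abundances toward sparsity; the Fan model adds bilinear interaction terms that this sketch omits, and all sizes and weights are arbitrary.

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.random((50, 200))                # pixels x bands (toy scene)
        k, lam, eps = 4, 0.1, 1e-9               # endmembers, sparsity weight, guard term
        W = rng.random((50, k))                  # abundances (to be sparsified)
        H = rng.random((k, 200))                 # endmember spectra

        for _ in range(200):
            W *= (X @ H.T) / (W @ (H @ H.T) + lam + eps)   # L1 penalty enters here
            H *= (W.T @ X) / ((W.T @ W) @ H + eps)

        print("reconstruction error:", np.linalg.norm(X - W @ H))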

  15. Virtual Reconstruction of Lost Architectures: from the Tls Survey to AR Visualization

    NASA Astrophysics Data System (ADS)

    Quattrini, R.; Pierdicca, R.; Frontoni, E.; Barcaglioni, R.

    2016-06-01

    The exploitation of high-quality 3D models for the dissemination of archaeological heritage is currently a topic of active investigation, although Mobile Augmented Reality (MAR) platforms for historical architecture, allowing low-cost pipelines for effective content, are not yet available. The paper presents a virtual anastylosis, starting from historical sources and from a 3D model based on a TLS survey. Several efforts and outputs in augmented or immersive environments exploiting this reconstruction are discussed. The work demonstrates the feasibility of a 3D reconstruction approach for complex architectural shapes starting from point clouds, and its AR/VR exploitation, allowing superimposition with archaeological evidence. The major contributions consist in the presentation and discussion of a pipeline from the virtual model to its simplification, showing several outcomes and comparing the supported data qualities and the advantages and disadvantages arising from MAR and VR limitations.

  16. Balancing the Budget through Social Exploitation: Why Hard Times Are Even Harder for Some.

    PubMed

    Tropman, John; Nicklett, Emily

    2012-01-01

    In all societies, needs and wants regularly exceed resources. Thus societies are always in deficit; demand always exceeds supply, and "balancing the budget" is a constant social problem. To make matters somewhat worse, research suggests that need- and want-fulfillment tends to further stimulate the cycle of want-seeking rather than satiating desire. Societies use various resource-allocation mechanisms, including price, to cope with gaps between wants and resources. Social exploitation is a second mechanism, securing labor from population segments that can be coerced or convinced to perform necessary work for free or at below-market compensation. Using practical examples, this article develops a theoretical framework for understanding social exploitation. It then offers case examples of how different segments of the population emerge as exploited groups in the United States due to changes in social policies. These exploitative processes have been exacerbated and accelerated by the economic downturn that began in 2007.

  17. Probabilistic Modeling and Visualization of the Flexibility in Morphable Models

    NASA Astrophysics Data System (ADS)

    Lüthi, M.; Albrecht, T.; Vetter, T.

    Statistical shape models, and in particular morphable models, have gained widespread use in computer vision, computer graphics and medical imaging. Researchers have started to build models of almost any anatomical structure in the human body. While these models provide a useful prior for many image analysis tasks, relatively little information about the shape represented by the morphable model is exploited. We propose a method for computing and visualizing the remaining flexibility when a part of the shape is fixed. Our method, which is based on Probabilistic PCA, not only leads to an approach for reconstructing the full shape from partial information, but also allows us to investigate and visualize the uncertainty of a reconstruction. To show the feasibility of our approach, we performed experiments on a statistical model of the human face and the femur bone. The visualization of the remaining flexibility allows for greater insight into the statistical properties of the shape.

  18. Thermofluid Analysis of Magnetocaloric Refrigeration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdelaziz, Omar; Gluesenkamp, Kyle R; Vineyard, Edward Allan

    While there have been extensive studies on the thermofluid characteristics of different magnetocaloric refrigeration systems, a conclusive optimization study using non-dimensional parameters which can be applied to a generic system has not been reported yet. In this study, a numerical model has been developed for the optimization of an active magnetic refrigerator (AMR). This model is computationally efficient and robust, making it appropriate for running the thousands of simulations required for parametric study and optimization. The governing equations have been non-dimensionalized and numerically solved using the finite difference method. A parametric study on a wide range of non-dimensional numbers has been performed. While the goal of AMR systems is to improve the performance of competitive parameters including COP, cooling capacity and temperature span, new parameters called AMR performance index-1 have been introduced in order to perform multi-objective optimization and simultaneously exploit all these parameters. The multi-objective optimization is carried out for a wide range of the non-dimensional parameters. The results of this study will provide general guidelines for designing high-performance AMR systems.

  19. Exploiting range imagery: techniques and applications

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-07-01

    Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.

  20. Study on the groundwater sustainable problem by numerical simulation in a multi-layered coastal aquifer system of Zhanjiang, China

    NASA Astrophysics Data System (ADS)

    Zhou, Pengpeng; Li, Ming; Lu, Yaodong

    2017-10-01

    Assessing sustainability of coastal groundwater is significant for groundwater management as coastal groundwater is vulnerable to over-exploitation and contamination. To address the issues of serious groundwater level drawdown and potential seawater intrusion risk of a multi-layered coastal aquifer system in Zhanjiang, China, this paper presents a numerical modelling study to research groundwater sustainability of this aquifer system. The transient modelling results show that the groundwater budget was negative (-3826×10^4 to -4502×10^4 m^3/a) during the years 2008-2011, revealing that this aquifer system was over-exploited. Meanwhile, the groundwater sustainability was assessed by evaluating the negative hydraulic pressure area (NHPA) of the unconfined aquifer and the groundwater level dynamic and flow velocity of the offshore boundaries of the confined aquifers. The results demonstrate that the Nansan Island is most influenced by NHPA and that the local groundwater should not be exploited. The results also suggest that, with the current groundwater exploitation scheme, the sustainable yield should be 1.784×10^8 m^3/a (i.e., decreased by 20% from the current exploitation amount). To satisfy public water demands, the 20% decrease of the exploitation amount can be offset by the groundwater sourced from the Taiping groundwater resource field. These results provide valuable guidance for groundwater management of Zhanjiang.

  1. Execution models for mapping programs onto distributed memory parallel computers

    NASA Technical Reports Server (NTRS)

    Sussman, Alan

    1992-01-01

    The problem of exploiting the parallelism available in a program to efficiently employ the resources of the target machine is addressed. The problem is discussed in the context of building a mapping compiler for a distributed memory parallel machine. The paper describes using execution models to drive the process of mapping a program in the most efficient way onto a particular machine. Through analysis of the execution models for several mapping techniques for one class of programs, we show that the selection of the best technique for a particular program instance can make a significant difference in performance. On the other hand, the results of benchmarks from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.

  2. Linear fitting of multi-threshold counting data with a pixel-array detector for spectral X-ray imaging

    PubMed Central

    Muir, Ryan D.; Pogranichney, Nicholas R.; Muir, J. Lewis; Sullivan, Shane Z.; Battaile, Kevin P.; Mulichak, Anne M.; Toth, Scott J.; Keefe, Lisa J.; Simpson, Garth J.

    2014-01-01

    Experiments and modeling are described to perform spectral fitting of multi-threshold counting measurements on a pixel-array detector. An analytical model was developed for describing the probability density function of detected voltage in X-ray photon-counting arrays, utilizing fractional photon counting to account for edge/corner effects from voltage plumes that spread across multiple pixels. Each pixel was mathematically calibrated by fitting the detected voltage distributions to the model at both 13.5 keV and 15.0 keV X-ray energies. The model and established pixel responses were then exploited to statistically recover images of X-ray intensity as a function of X-ray energy in a simulated multi-wavelength and multi-counting threshold experiment. PMID:25178010

  4. Development of a model counter-rotating type horizontal-axis tidal turbine

    NASA Astrophysics Data System (ADS)

    Huang, B.; Yoshida, K.; Kanemoto, T.

    2016-05-01

    In the past decade, tidal energy has attracted worldwide attention as it can provide a regular and predictable renewable energy resource for power generation. The majority of technologies for exploiting tidal stream energy are based on the concept of the horizontal axis tidal turbine (HATT). A unique counter-rotating type HATT was proposed in the present work. The original blade profiles were designed according to the developed blade element momentum theory (BEMT). CFD simulations and experimental tests were adopted to evaluate the performance of the model counter-rotating type HATT. The experimental data provide validation evidence for the CFD model. Further optimization of the blade profiles was also carried out based on the CFD results.

  5. First experience of vectorizing electromagnetic physics models for detector simulation

    NASA Astrophysics Data System (ADS)

    Amadio, G.; Apostolakis, J.; Bandieramonte, M.; Bianchini, C.; Bitzes, G.; Brun, R.; Canal, P.; Carminati, F.; de Fine Licht, J.; Duhem, L.; Elvira, D.; Gheata, A.; Jun, S. Y.; Lima, G.; Novak, M.; Presbyterian, M.; Shadura, O.; Seghal, R.; Wenzel, S.

    2015-12-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. The GeantV vector prototype for detector simulations has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth, parallelization needed to achieve optimal performance or memory access latency and speed. An additional challenge is to avoid the code duplication often inherent to supporting heterogeneous platforms. In this paper we present the first experience of vectorizing electromagnetic physics models developed for the GeantV project.

  6. Empirical estimation of recreational exploitation of burbot, Lota lota, in the Wind River drainage of Wyoming using a multistate capture–recapture model

    USGS Publications Warehouse

    Lewandoski, S. A.; Guy, Christopher S.; Zale, Alexander V.; Gerrity, Paul C.; Deromedi, J. W.; Johnson, K.M.; Skates, D. L.

    2017-01-01

    Burbot, Lota lota (Linnaeus), is a regionally popular sportfish in the Wind River drainage of Wyoming, USA, at the southern boundary of the range of the species. Recent declines in burbot abundances were hypothesised to be caused by overexploitation, entrainment in irrigation canals and habitat loss. This study addressed the overexploitation hypothesis using tagging data to generate reliable exploitation, abundance and density estimates from a multistate capture–recapture model that accounted for incomplete angler reporting and tag loss. Exploitation rate μ was variable among the study lakes and inversely correlated with density. Exploitation thresholds μ40 associated with population densities remaining above 40% of carrying capacity were generated to characterise risk of overharvest using exploitation and density estimates from tagging data and a logistic surplus-production model parameterised with data from other burbot populations. Bull Lake (μ = 0.06, 95% CI: 0.03–0.11; μ40 = 0.18) and Torrey Lake (μ = 0.02, 95% CI: 0.00–0.11; μ40 = 0.18) had a low risk of overfishing, Upper Dinwoody Lake had intermediate risk (μ = 0.08, 95% CI: 0.02–0.32; μ40 = 0.18) and Lower Dinwoody Lake had high risk (μ = 0.32, 95% CI: 0.10–0.67; μ40 = 0.08). These exploitation and density estimates can be used to guide sustainable management of the Wind River drainage recreational burbot fishery and inform management of other burbot fisheries elsewhere.
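
    The role of the exploitation threshold is easy to see in a logistic surplus-production update, where biomass grows logistically and harvest removes a fixed fraction mu each year; equilibrium biomass is K(1 - mu/r), so density falls as mu rises. The parameter values below are hypothetical, not the Wind River estimates.

        def project(B, r=0.3, K=1000.0, mu=0.1, years=50):
            """Project biomass under logistic growth and exploitation rate mu."""
            for _ in range(years):
                B = B + r * B * (1.0 - B / K) - mu * B   # growth minus harvest
            return B

        # A population equilibrates near K * (1 - mu / r); mu >= r drives collapse.
        for mu in (0.06, 0.18, 0.32):
            print("mu = %.2f -> biomass after 50 years ~ %.0f" % (mu, project(500.0, mu=mu)))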

  7. Hierarchical Recurrent Neural Hashing for Image Retrieval With Hierarchical Convolutional Features.

    PubMed

    Lu, Xiaoqiang; Chen, Yaxiong; Li, Xuelong

    Hashing has been an important and effective technology in image retrieval due to its computational efficiency and fast search speed. The traditional hashing methods usually learn hash functions to obtain binary codes by exploiting hand-crafted features, which cannot optimally represent the information of the sample. Recently, deep learning methods can achieve better performance, since deep learning architectures can learn more effective image representation features. However, these methods only use semantic features to generate hash codes by shallow projection but ignore texture details. In this paper, we proposed a novel hashing method, namely hierarchical recurrent neural hashing (HRNH), to exploit hierarchical recurrent neural network to generate effective hash codes. There are three contributions of this paper. First, a deep hashing method is proposed to extensively exploit both spatial details and semantic information, in which, we leverage hierarchical convolutional features to construct image pyramid representation. Second, our proposed deep network can exploit directly convolutional feature maps as input to preserve the spatial structure of convolutional feature maps. Finally, we propose a new loss function that considers the quantization error of binarizing the continuous embeddings into the discrete binary codes, and simultaneously maintains the semantic similarity and balanceable property of hash codes. Experimental results on four widely used data sets demonstrate that the proposed HRNH can achieve superior performance over other state-of-the-art hashing methods.

  8. Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers

    NASA Astrophysics Data System (ADS)

    Samiei-Esfahany, Sami; Hanssen, Ramon F.

    2012-01-01

    The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS we altered the model from a single master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.

  9. zipHMMlib: a highly optimised HMM library exploiting repetitions in the input to speed up the forward algorithm.

    PubMed

    Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas

    2013-11-22

    Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood hundreds of evaluations are often needed. A time efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models so one preprocessing can be used to run a number of different models.Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
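
    For reference, the unoptimised computation that zipHMM accelerates is the standard scaled forward recursion sketched below; zipHMM's speedup comes from caching the forward operators for substrings that repeat in the input, which this plain version does not attempt.

        import numpy as np

        def forward_loglik(pi, A, B, obs):
            """log P(obs) given initial dist pi, transition matrix A, emission matrix B."""
            alpha = pi * B[:, obs[0]]
            loglik = np.log(alpha.sum())
            alpha /= alpha.sum()                      # rescale to avoid underflow
            for o in obs[1:]:
                alpha = (alpha @ A) * B[:, o]
                s = alpha.sum()
                loglik += np.log(s)
                alpha /= s
            return loglik

        pi = np.array([0.5, 0.5])
        A = np.array([[0.9, 0.1], [0.2, 0.8]])
        B = np.array([[0.7, 0.3], [0.1, 0.9]])
        print(forward_loglik(pi, A, B, [0, 1, 1, 0, 1]))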

  10. Programming for 1.6 Million cores: Early experiences with IBM's BG/Q SMP architecture

    NASA Astrophysics Data System (ADS)

    Glosli, James

    2013-03-01

    With the stall in clock cycle improvements a decade ago, the drive for computational performance has continued along a path of increasing core counts on a processor. The multi-core evolution has been expressed in both symmetric multiprocessor (SMP) architectures and CPU/GPU architectures. Debates rage in the high performance computing (HPC) community over which architecture best serves HPC. In this talk I will not attempt to resolve that debate but perhaps fuel it. I will discuss the experience of exploiting Sequoia, a 98,304-node IBM Blue Gene/Q SMP at Lawrence Livermore National Laboratory. The advantages and challenges of leveraging the computational power of BG/Q will be detailed through the discussion of two applications. The first application is a Molecular Dynamics code called ddcMD. This is a code developed over the last decade at LLNL and ported to BG/Q. The second application is a cardiac modeling code called Cardioid. This is a code that was recently designed and developed at LLNL to exploit the fine-scale parallelism of BG/Q's SMP architecture. Through the lenses of these efforts I'll illustrate the need to rethink how we express and implement our computational approaches. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  11. An exploratory model of girls' vulnerability to commercial sexual exploitation in prostitution.

    PubMed

    Reid, Joan A

    2011-05-01

    Due to inaccessibility of child victims of commercial sexual exploitation, the majority of emergent research on the problem lacks theoretical framing or sufficient data for quantitative analysis. Drawing from Agnew's general strain theory, this study utilized structural equation modeling to explore: whether caregiver strain is linked to child maltreatment, if experiencing maltreatment is associated with risk-inflating behaviors or sexual denigration of self/others, and if these behavioral and psychosocial dysfunctions are related to vulnerability to commercial sexual exploitation. The proposed model was tested with data from 174 predominately African American women, 12% of whom indicated involvement in prostitution while a minor. Findings revealed child maltreatment worsened with increased caregiver strain. Experiencing child maltreatment was linked to running away, initiating substance use at earlier ages, and higher levels of sexual denigration of self/others. Sexual denigration of self/others was significantly related to the likelihood of prostitution as a minor. The network of variables in the model accounted for 34% of the variance in prostitution as a minor.

  12. Diversity in livestock resources in pastoral systems in Africa.

    PubMed

    Kaufmann, B A; Lelea, M A; Hulsebusch, C G

    2016-11-01

    Pastoral systems are important producers and repositories of livestock diversity. Pastoralists use variability in their livestock resources to manage high levels of environmental variability in economically advantageous ways. In pastoral systems, human-animal-environment interactions are the basis of production and the key to higher productivity and efficiency. In other words, pastoralists manage a production system that exploits variability and keeps production costs low. When differentiating, characterising and evaluating pastoral breeds, this context-specific, functional dimension of diversity in livestock resources needs to be considered. The interaction of animals with their environment is determined not only by morphological and physiological traits but also by experience and socially learned behaviour. This high proportion of non-genetic components determining the performance of livestock means that current models for analysing livestock diversity and performance, which are based on genetic inheritance, have limited ability to describe pastoral performance. There is a need for methodological innovations to evaluate pastoral breeds and animals, since comparisons based on performance 'under optimal conditions' are irrelevant within this production system. Such innovations must acknowledge that livestock or breed performance is governed by complex human-animal-environment interactions, and varies through time and space due to the mobile and seasonal nature of the pastoral system. Pastoralists' breeding concepts and selection strategies seem to be geared towards improving their animals' capability to exploit variability, by - among other things - enhancing within-breed diversity. In-depth studies of these concepts and strategies could contribute considerably towards developing methodological innovations for the characterisation and evaluation of pastoral livestock resources.

  13. Sign: large-scale gene network estimation environment for high performance computing.

    PubMed

    Tamada, Yoshinori; Shimamura, Teppei; Yamaguchi, Rui; Imoto, Seiya; Nagasaki, Masao; Miyano, Satoru

    2011-01-01

    Our research group is currently developing software for estimating large-scale gene networks from gene expression data. The software, called SiGN, is specifically designed for the Japanese flagship supercomputer "K computer" which is planned to achieve 10 petaflops in 2012, and other high performance computing environments including Human Genome Center (HGC) supercomputer system. SiGN is a collection of gene network estimation software with three different sub-programs: SiGN-BN, SiGN-SSM and SiGN-L1. In these three programs, five different models are available: static and dynamic nonparametric Bayesian networks, state space models, graphical Gaussian models, and vector autoregressive models. All these models require a huge amount of computational resources for estimating large-scale gene networks and therefore are designed to be able to exploit the speed of 10 petaflops. The software will be available freely for "K computer" and HGC supercomputer system users. The estimated networks can be viewed and analyzed by Cell Illustrator Online and SBiP (Systems Biology integrative Pipeline). The software project web site is available at http://sign.hgc.jp/ .

  14. Simple Deterministically Constructed Recurrent Neural Networks

    NASA Astrophysics Data System (ADS)

    Rodan, Ali; Tiňo, Peter

    A large number of models for time series processing, forecasting or modeling follows a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both the researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
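
    A minimal sketch of such a deterministically constructed reservoir follows: the recurrent matrix is a single cycle with one shared weight, input signs are fixed by a deterministic rule, and only the linear readout is trained by ridge regression. Sizes, weights and the sign rule are illustrative assumptions, not the paper's tuned values.

        import numpy as np

        N, r, v = 100, 0.9, 0.5
        W = np.zeros((N, N))
        W[np.arange(1, N), np.arange(N - 1)] = r       # simple cycle topology, weight r
        W[0, N - 1] = r                                # close the cycle
        signs = np.where(np.sin(np.arange(N)) > 0, 1.0, -1.0)  # deterministic input signs
        w_in = v * signs

        u = np.sin(np.arange(500) * 0.2)               # toy input sequence
        x, states = np.zeros(N), []
        for t in range(len(u)):
            x = np.tanh(W @ x + w_in * u[t])
            states.append(x.copy())

        X = np.array(states[100:-1])                   # drop washout, align with targets
        y = u[101:]                                    # one-step-ahead prediction target
        w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)  # ridge readout
        print("train MSE:", np.mean((X @ w_out - y) ** 2))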

  15. Laser vibrometry exploitation for vehicle identification

    NASA Astrophysics Data System (ADS)

    Nolan, Adam; Lingg, Andrew; Goley, Steve; Sigmund, Kevin; Kangas, Scott

    2014-06-01

    Vibration signatures sensed from distant vehicles using laser vibrometry systems provide valuable information that may be used to help identify key vehicle features such as engine type, engine speed, and number of cylinders. Through the use of physics models of the vibration phenomenology, features are chosen to support classification algorithms. Various individual exploitation algorithms were developed using these models to classify vibration signatures by engine type (piston vs. turbine), engine configuration (Inline 4 vs. Inline 6 vs. V6 vs. V8 vs. V12) and vehicle type. The results of these algorithms will be presented for an 8-class problem. Finally, the benefits of using a factor graph representation to link these independent algorithms together will be presented; this constructs a classification hierarchy for the vibration exploitation problem.

  16. A performance model for GPUs with caches

    DOE PAGES

    Dao, Thanh Tuan; Kim, Jungwon; Seo, Sangmin; ...

    2014-06-24

    To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution to the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques which improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching as implemented in modern GPUs have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime performance estimation for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, on average, the proposed model estimates the runtime performance with less than 7 percent error for a second-generation GTX 280 with no on-chip caches and less than 5 percent for the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On the AMD Radeon HD 6970 architecture, the model estimates with an error rate of 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
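
    The flavour of a sampling-based linear model can be conveyed with ordinary least squares: time a kernel at a few small work sizes, fit runtime as an affine function of work, and extrapolate. The measurements below are fabricated placeholders, and the paper's sampling-point selection from scheduling analysis is not reproduced.

        import numpy as np

        work = np.array([1e5, 2e5, 4e5, 8e5])         # sampled work-item counts
        time = np.array([2.1, 3.9, 7.8, 15.2])        # hypothetical measured runtimes (ms)

        # Fit time = a * work + b by least squares and extrapolate.
        A = np.vstack([work, np.ones_like(work)]).T
        (a, b), *_ = np.linalg.lstsq(A, time, rcond=None)
        print("predicted runtime for 1e7 work items: %.1f ms" % (a * 1e7 + b))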

  17. Software reliability report

    NASA Technical Reports Server (NTRS)

    Wilson, Larry

    1991-01-01

    There are many software reliability models which try to predict future performance of software based on data generated by the debugging process. Unfortunately, the models appear to be unable to account for the random nature of the data. If the same code is debugged multiple times and one of the models is used to make predictions, intolerable variance is observed in the resulting reliability predictions. It is believed that data replication can remove this variance in lab-type situations and that it is less than scientific to talk about validating a software reliability model without considering replication. It is also believed that data replication may prove to be cost effective in the real world; thus the research centered on verification of the need for replication and on methodologies for generating replicated data in a cost-effective manner. The context of the debugging graph was pursued by simulation and experimentation. Simulation was done for the Basic model and the Log-Poisson model. Reasonable values of the parameters were assigned and used to generate simulated data which were then processed by the models in order to determine limitations on their accuracy. These experiments exploit the existing software and program specimens which are in AIR-LAB to measure the performance of reliability models.

  18. Final Report, “Exploiting Global View for Resilience”

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chien, Andrew

    2017-03-29

    Final technical report for the "Exploiting Global View for Resilience" project. The GVR project aims to create a new approach to portable, resilient applications. The GVR approach builds on a global view data model, adding versioning (multi-version), user control of timing and rate (multi-stream), and flexible cross-layer error signalling and recovery. With a versioned array as a portable abstraction, GVR enables application programmers to exploit deep scientific and application code insights to manage resilience (and its overhead) in a flexible, portable fashion.

  19. Observational and Modeling-based Study of Corsica Thunderstorms: Preparation of the EXAEDRE Airborne Campaign

    NASA Astrophysics Data System (ADS)

    Defer, E.; Coquillat, S.; Lambert, D.; Pinty, J. P.; Prieur, S.; Caumont, O.; Labatut, L.; Nuret, M.; Blanchet, P.; Buguet, M.; Lalande, P.; Labrouche, G.; Pedeboy, S.; Lojou, J. Y.; Schwarzenboeck, A.; Delanoë, J.; Bourdon, A.; Guiraud, L.

    2017-12-01

    The 4-year EXAEDRE (EXploiting new Atmospheric Electricity Data for Research and the Environment; Oct 2016-Sept 2020) project is sponsored by the French Science Foundation ANR (Agence Nationale de la Recherche). This project is a French contribution to the HyMeX (HYdrological cycle in the Mediterranean EXperiment) program. The EXAEDRE activities rely on innovative multi-disciplinary and state-of-the-art instrumentation and modeling tools to provide a comprehensive description of the electrical activity in thunderstorms. The EXAEDRE observational part is based on i) existing lightning records collected during the HyMeX Special Observation Period (SOP1; Sept-Nov 2012), and permanent lightning observations provided by the research Lightning Mapping Array SAETTA and the operational Météorage lightning locating systems, ii) additional lightning observations mapped with a new VHF interferometer especially developed within the EXAEDRE project, and iii) a dedicated airborne campaign over Corsica. The modeling part of the EXAEDRE project exploits the electrification and lightning schemes developed in the cloud resolving model MesoNH and promotes an innovative technique of flash data assimilation in the French operational model AROME of Météo-France. An overview of the EXAEDRE project will be given with an emphasis on the instrumental, observational and modeling activities performed during the first year of the project. The preparation of the EXAEDRE airborne campaign scheduled for September 2018 over Corsica will then be discussed. Acknowledgements. The EXAEDRE project is sponsored by grant ANR-16-CE04-0005 with support from the MISTRALS/HyMeX meta program.

  20. Development Issues on Linked Data Weblog Enrichment

    NASA Astrophysics Data System (ADS)

    Ruiz-Rube, Iván; Cornejo, Carlos M.; Dodero, Juan Manuel; García, Vicente M.

    In this paper, we describe the issues found during the development of LinkedBlog, a Linked Data extension for WordPress blogs. This extension makes it possible to enrich text-based and video information contained in blog entries with RDF triples that are suitable to be stored, managed and exploited by other web-based applications. The issues have to do with the generality, usability, tracking, depth, security, trustworthiness and performance of the linked data enrichment process. The presented annotation approach aims at keeping web-based content independent of the underlying ontological model, by providing a loosely coupled RDFa-based approach in the linked data application. Finally, we detail how the performance of annotations can be improved through a semantic reasoner.

  1. Positioning the endoscope in laparoscopic surgery by foot: Influential factors on surgeons' performance in virtual trainer.

    PubMed

    Abdi, Elahe; Bouri, Mohamed; Burdet, Etienne; Himidan, Sharifa; Bleuler, Hannes

    2017-07-01

    We have investigated how surgeons can use the foot to position a laparoscopic endoscope, a task that normally requires an extra assistant. Surgeons need to train in order to exploit the possibilities offered by this new technique and to safely manipulate the endoscope in coordination with their hand movements. A realistic abdominal cavity has been developed as a training simulator to investigate this multi-arm manipulation. In this virtual environment, the surgeon's biological hands are modelled as laparoscopic graspers while the viewpoint is controlled by the dominant foot. Twenty-three surgeons and medical students performed single-handed and bimanual manipulation in this environment. The results show that residents had superior performance compared to both medical students and more experienced surgeons, suggesting that residency is an ideal period for this training. Performing the single-handed task improved performance in the bimanual task, whereas the converse was not true.

  2. Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities (MICW - Metagenomics Informatics Challenges Workshop: 10K Genomes at a Time)

    ScienceCinema

    Canon, Shane

    2018-01-24

    DOE JGI's Zhong Wang, chair of the High-performance Computing session, gives a brief introduction before Berkeley Lab's Shane Canon talks about "Exploiting HPC Platforms for Metagenomics: Challenges and Opportunities" at the Metagenomics Informatics Challenges Workshop held at the DOE JGI on October 12-13, 2011.

  3. Leveraging tagging and rating for recommendation: RMF meets weighted diffusion on tripartite graphs

    NASA Astrophysics Data System (ADS)

    Li, Jianguo; Tang, Yong; Chen, Jiemin

    2017-10-01

    Recommender systems (RSs) have been a widely exploited approach to solving the information overload problem. However, their performance is still limited by the extreme sparsity of the rating data. With the popularity of Web 2.0, social tagging systems provide additional external information that can improve recommendation accuracy. Although some existing approaches combine matrix factorization models with tag co-occurrence and the context of tags, they neglect the issue of tag sparsity, which would also result in inaccurate recommendations. Consequently, in this paper, we propose a novel hybrid collaborative filtering model named WUDiff_RMF, which improves the regularized matrix factorization (RMF) model by integrating a Weighted User-Diffusion-based CF algorithm (WUDiff) that obtains information about similar users from the weighted tripartite user-item-tag graph. This model aims to capture the degree correlation of the user-item-tag tripartite network to enhance the performance of recommendation. Experiments conducted on four real-world datasets demonstrate that our approach performs significantly better than widely used methods in recommendation accuracy. Moreover, results show that WUDiff_RMF can alleviate the data sparsity problem, especially when users have provided few ratings and few tags.
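
    As an illustration of the RMF core that WUDiff_RMF builds on (the diffusion-based similarity weighting is omitted), a minimal stochastic gradient descent sketch with hypothetical ratings:

    ```python
    import numpy as np

    def rmf_sgd(ratings, n_users, n_items, k=8, lr=0.01, reg=0.05, epochs=50):
        """Plain regularized matrix factorization, R ~ U @ V.T, trained by SGD.
        `ratings` is a list of (user, item, value) triples."""
        rng = np.random.default_rng(0)
        U = 0.1 * rng.standard_normal((n_users, k))
        V = 0.1 * rng.standard_normal((n_items, k))
        for _ in range(epochs):
            for u, i, r in ratings:
                err = r - U[u] @ V[i]
                u_old = U[u].copy()
                # Gradient step on the squared error with L2 regularization.
                U[u] += lr * (err * V[i] - reg * U[u])
                V[i] += lr * (err * u_old - reg * V[i])
        return U, V

    # Hypothetical toy ratings: (user, item, rating).
    U, V = rmf_sgd([(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0)], n_users=2, n_items=2)
    print(U @ V.T)  # reconstructed rating matrix
    ```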

  4. PANTHER. Pattern ANalytics To support High-performance Exploitation and Reasoning.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czuchlewski, Kristina Rodriguez; Hart, William E.

    Sandia has approached the analysis of big datasets with an integrated methodology that uses computer science, image processing, and human factors to exploit critical patterns and relationships in large datasets despite the variety and rapidity of information. The work is part of a three-year LDRD Grand Challenge called PANTHER (Pattern ANalytics To support High-performance Exploitation and Reasoning). To maximize data analysis capability, Sandia pursued scientific advances across three key technical domains: (1) geospatial-temporal feature extraction via image segmentation and classification; (2) geospatial-temporal analysis capabilities tailored to identify and process new signatures more efficiently; and (3) domain-relevant models of human perception and cognition informing the design of analytic systems. Our integrated results include advances in geographical information systems (GIS) in which we discover activity patterns in noisy, spatial-temporal datasets using geospatial-temporal semantic graphs. We employed computational geometry and machine learning to allow us to extract and predict spatial-temporal patterns and outliers from large aircraft and maritime trajectory datasets. We automatically extracted static and ephemeral features from real, noisy synthetic aperture radar imagery for ingestion into a geospatial-temporal semantic graph. We worked with analysts and investigated analytic workflows to (1) determine how experiential knowledge evolves and is deployed in high-demand, high-throughput visual search workflows, and (2) better understand visual search performance and attention. Through PANTHER, Sandia's fundamental rethinking of key aspects of geospatial data analysis permits the extraction of much richer information from large amounts of data. The project results enable analysts to examine mountains of historical and current data that would otherwise go untouched, while also gaining meaningful, measurable, and defensible insights into overlooked relationships and patterns. The capability is directly relevant to the nation's nonproliferation remote-sensing activities and has broad national security applications for military and intelligence-gathering organizations.

  5. Design Sketches For Optical Crossbar Switches Intended For Large-Scale Parallel Processing Applications

    NASA Astrophysics Data System (ADS)

    Hartmann, Alfred; Redfield, Steve

    1989-04-01

    This paper discusses the design of large-scale (1000×1000) optical crossbar switching networks for use in parallel processing supercomputers. Alternative design sketches for an optical crossbar switching network are presented using free-space optical transmission with either a beam spreading/masking model or a beam steering model for internodal communications. The performance of alternative multiple access channel communications protocols (unslotted and slotted ALOHA, and carrier sense multiple access (CSMA)) is compared with the performance of the classic arbitrated bus crossbar of conventional electronic parallel computing. These comparisons indicate an almost inverse relationship between ease of implementation and speed of operation. Practical issues of optical system design are addressed, and an optically addressed, composite spatial light modulator design is presented for fabrication at arbitrarily large scale. The wide range of switch architecture, communications protocol, optical systems design, device fabrication, and system performance problems presented by these design sketches poses a serious challenge to practical exploitation of highly parallel optical interconnects in advanced computer designs.
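
    For reference, the classical throughput curves behind the ALOHA comparison can be stated directly. With normalized offered load G, the expected throughputs are:

    ```latex
    S_{\text{pure}} = G\,e^{-2G}, \qquad S_{\text{slotted}} = G\,e^{-G}
    ```

    so pure (unslotted) ALOHA peaks at 1/(2e), roughly 0.18 of channel capacity, at G = 1/2, while slotted ALOHA peaks at 1/e, roughly 0.37, at G = 1; CSMA improves on both at the cost of carrier-sensing complexity.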

  6. Performance analysis of improved methodology for incorporation of spatial/spectral variability in synthetic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Scanlan, Neil W.; Schott, John R.; Brown, Scott D.

    2004-01-01

    Synthetic imagery has traditionally been used to support sensor design by enabling design engineers to pre-evaluate image products during the design and development stages. Increasingly, exploitation analysts are looking to synthetic imagery as a way to develop and test exploitation algorithms before image data are available from new sensors. Even when sensors are available, synthetic imagery can significantly aid in algorithm development by providing a wide range of "ground truthed" images with varying illumination, atmospheric, viewing and scene conditions. One limitation of synthetic data is that the background variability is often too bland. It does not exhibit the spatial and spectral variability present in real data. In this work, four fundamentally different texture modeling algorithms will first be implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model environment. Two of the models to be tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed comparative performance analysis of each model will then be carried out on several texturally significant regions of the resultant synthetic hyperspectral imagery. The quantitative assessment of each model will utilize a set of three performance metrics that have been derived from spatial Gray Level Co-Occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric which permits the simultaneous measurement of spatial and spectral texture. Previous research efforts on the validation and performance analysis of texture characterization models have been largely qualitative in nature, based on conducting visual inspections of synthetic textures in order to judge the degree of similarity to the original sample texture imagery. The quantitative measures used in this study will in combination attempt to determine which texture characterization models best capture the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that can accurately account for these complexities can eventually be implemented into a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the candidate texture models are able to achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.
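
    As a concrete illustration of one of the cited metrics, a minimal NumPy sketch of a single-offset GLCM and its contrast statistic (the record's full metric set also includes SCR and the SCM):

    ```python
    import numpy as np

    def glcm(img, dx=1, dy=0, levels=8):
        """Gray Level Co-Occurrence Matrix for a single pixel offset.
        `img` must already be quantized to integers in [0, levels)."""
        P = np.zeros((levels, levels))
        h, w = img.shape
        for y in range(h - dy):
            for x in range(w - dx):
                P[img[y, x], img[y + dy, x + dx]] += 1
        return P / P.sum()  # normalize to joint probabilities

    def contrast(P):
        """GLCM contrast: expected squared gray-level difference."""
        i, j = np.indices(P.shape)
        return np.sum(P * (i - j) ** 2)

    img = np.random.default_rng(1).integers(0, 8, size=(64, 64))  # synthetic texture
    print(contrast(glcm(img)))
    ```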

  7. Performance analysis of improved methodology for incorporation of spatial/spectral variability in synthetic hyperspectral imagery

    NASA Astrophysics Data System (ADS)

    Scanlan, Neil W.; Schott, John R.; Brown, Scott D.

    2003-12-01

    Synthetic imagery has traditionally been used to support sensor design by enabling design engineers to pre-evaluate image products during the design and development stages. Increasingly, exploitation analysts are looking to synthetic imagery as a way to develop and test exploitation algorithms before image data are available from new sensors. Even when sensors are available, synthetic imagery can significantly aid in algorithm development by providing a wide range of "ground truthed" images with varying illumination, atmospheric, viewing and scene conditions. One limitation of synthetic data is that the background variability is often too bland. It does not exhibit the spatial and spectral variability present in real data. In this work, four fundamentally different texture modeling algorithms will first be implemented as necessary into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model environment. Two of the models to be tested are variants of a statistical Z-Score selection model, while the remaining two involve a texture synthesis and a spectral end-member fractional abundance map approach, respectively. A detailed comparative performance analysis of each model will then be carried out on several texturally significant regions of the resultant synthetic hyperspectral imagery. The quantitative assessment of each model will utilize a set of three performance metrics that have been derived from spatial Gray Level Co-Occurrence Matrix (GLCM) analysis, hyperspectral Signal-to-Clutter Ratio (SCR) measures, and a new concept termed the Spectral Co-Occurrence Matrix (SCM) metric which permits the simultaneous measurement of spatial and spectral texture. Previous research efforts on the validation and performance analysis of texture characterization models have been largely qualitative in nature, based on conducting visual inspections of synthetic textures in order to judge the degree of similarity to the original sample texture imagery. The quantitative measures used in this study will in combination attempt to determine which texture characterization models best capture the correct statistical and radiometric attributes of the corresponding real image textures in both the spatial and spectral domains. The motivation for this work is to refine our understanding of the complexities of texture phenomena so that an optimal texture characterization model that can accurately account for these complexities can eventually be implemented into a synthetic image generation (SIG) model. Further, conclusions will be drawn regarding which of the candidate texture models are able to achieve realistic levels of spatial and spectral clutter, thereby permitting more effective and robust testing of hyperspectral algorithms in synthetic imagery.

  8. Confidence and psychosis: a neuro-computational account of contingency learning disruption by NMDA blockade

    PubMed Central

    Vinckier, F; Gaillard, R; Palminteri, S; Rigoux, L; Salvador, A; Fornito, A; Adapa, R; Krebs, M O; Pessiglione, M; Fletcher, P C

    2016-01-01

    A state of pathological uncertainty about environmental regularities might represent a key step in the pathway to psychotic illness. Early psychosis can be investigated in healthy volunteers under ketamine, an NMDA receptor antagonist. Here, we explored the effects of ketamine on contingency learning using a placebo-controlled, double-blind, crossover design. During functional magnetic resonance imaging, participants performed an instrumental learning task, in which cue-outcome contingencies were probabilistic and reversed between blocks. Bayesian model comparison indicated that in such an unstable environment, reinforcement learning parameters are downregulated depending on confidence level, an adaptive mechanism that was specifically disrupted by ketamine administration. Drug effects were underpinned by altered neural activity in a fronto-parietal network, which reflected the confidence-based shift to exploitation of learned contingencies. Our findings suggest that an early characteristic of psychosis lies in a persistent doubt that undermines the stabilization of behavioral policy resulting in a failure to exploit regularities in the environment. PMID:26055423
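
    A minimal toy sketch (not the authors' Bayesian model) of the adaptive mechanism described, in which the learning rate is down-regulated as confidence in the learned contingency grows:

    ```python
    import numpy as np

    def confidence_weighted_learner(outcomes, base_lr=0.5):
        """Value update with a learning rate scaled down by confidence.
        Confidence is proxied here by the inverse of a running error average."""
        v, avg_err = 0.5, 1.0
        for o in outcomes:
            err = o - v
            avg_err = 0.9 * avg_err + 0.1 * abs(err)
            confidence = 1.0 - min(avg_err, 1.0)   # high when prediction errors are small
            lr = base_lr * (1.0 - confidence)      # exploit learned contingencies when confident
            v += lr * err
            yield v

    rng = np.random.default_rng(0)
    trials = (rng.random(100) < 0.8).astype(float)  # hypothetical 80% reward contingency
    print(list(confidence_weighted_learner(trials))[-1])
    ```

    Under the account in the record, ketamine would correspond to clamping confidence low, so the learning rate never settles and the behavioral policy never stabilizes.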

  9. Confidence and psychosis: a neuro-computational account of contingency learning disruption by NMDA blockade.

    PubMed

    Vinckier, F; Gaillard, R; Palminteri, S; Rigoux, L; Salvador, A; Fornito, A; Adapa, R; Krebs, M O; Pessiglione, M; Fletcher, P C

    2016-07-01

    A state of pathological uncertainty about environmental regularities might represent a key step in the pathway to psychotic illness. Early psychosis can be investigated in healthy volunteers under ketamine, an NMDA receptor antagonist. Here, we explored the effects of ketamine on contingency learning using a placebo-controlled, double-blind, crossover design. During functional magnetic resonance imaging, participants performed an instrumental learning task, in which cue-outcome contingencies were probabilistic and reversed between blocks. Bayesian model comparison indicated that in such an unstable environment, reinforcement learning parameters are downregulated depending on confidence level, an adaptive mechanism that was specifically disrupted by ketamine administration. Drug effects were underpinned by altered neural activity in a fronto-parietal network, which reflected the confidence-based shift to exploitation of learned contingencies. Our findings suggest that an early characteristic of psychosis lies in a persistent doubt that undermines the stabilization of behavioral policy resulting in a failure to exploit regularities in the environment.

  10. Homography-based visual servo regulation of mobile robots.

    PubMed

    Fang, Yongchun; Dixon, Warren E; Dawson, Darren M; Chawda, Prakash

    2005-10-01

    A monocular camera-based vision system attached to a mobile robot (i.e., the camera-in-hand configuration) is considered in this paper. By comparing corresponding target points of an object from two different camera images, geometric relationships are exploited to derive a transformation that relates the actual position and orientation of the mobile robot to a reference position and orientation. This transformation is used to synthesize a rotation and translation error system from the current position and orientation to the fixed reference position and orientation. Lyapunov-based techniques are used to construct an adaptive estimate to compensate for a constant, unmeasurable depth parameter, and to prove asymptotic regulation of the mobile robot. The contribution of this paper is that Lyapunov techniques are exploited to craft an adaptive controller that enables mobile robot position and orientation regulation despite the lack of an object model and the lack of depth information. Experimental results are provided to illustrate the performance of the controller.
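
    The paper's Lyapunov-based controller is not reproduced here; the sketch below shows only the geometric front end it relies on, estimating and decomposing a homography between current and reference views with OpenCV, using synthetic correspondences:

    ```python
    import cv2
    import numpy as np

    # Hypothetical matched feature points in the reference and current images.
    rng = np.random.default_rng(0)
    ref_pts = rng.uniform(0, 640, size=(8, 2)).astype(np.float32)
    H_true = np.array([[1.0, 0.02, 5.0], [-0.02, 1.0, -3.0], [0.0, 0.0, 1.0]])
    cur_h = np.c_[ref_pts, np.ones(8)] @ H_true.T
    cur_pts = (cur_h[:, :2] / cur_h[:, 2:]).astype(np.float32)

    # Estimate the homography relating the current view to the reference view.
    H, mask = cv2.findHomography(cur_pts, ref_pts, cv2.RANSAC)

    # Decompose into candidate rotations/translations given camera intrinsics K.
    K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
    n, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
    print("candidate motions:", n)
    ```

    Note the translation recovered this way is only known up to the unmeasurable depth scale, which is exactly the parameter the paper's adaptive estimate compensates for.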

  11. On the derivation of flow rating curves in data-scarce environments

    NASA Astrophysics Data System (ADS)

    Manfreda, Salvatore

    2018-07-01

    River monitoring is a critical issue for hydrological modelling that relies strongly on the use of flow rating curves (FRCs). In most cases, these functions are derived by least-squares fitting which usually leads to good performance indices, even when based on a limited range of data that especially lack high flow observations. In this context, cross-section geometry is a controlling factor which is not fully exploited in classical approaches. In fact, river discharge is obtained as the product of two factors: 1) the area of the wetted cross-section and 2) the cross-sectionally averaged velocity. Both factors can be expressed as a function of the river stage, defining a viable alternative in the derivation of FRCs. This makes it possible to exploit information about cross-section geometry limiting, at least partially, the uncertainty in the extrapolation of discharge at higher flow values. Numerical analyses and field data confirm the reliability of the proposed procedure for the derivation of FRCs.
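
    For contrast, the classical least-squares approach the paper seeks to improve on can be sketched in a few lines; the power-law form and the gauging data below are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Classical power-law rating curve: Q = c * (h - h0) ** b
    def rating(h, c, b, h0):
        return c * np.clip(h - h0, 1e-6, None) ** b

    # Hypothetical stage (m) / discharge (m^3/s) gaugings, low flows only.
    h = np.array([0.4, 0.6, 0.8, 1.0, 1.3, 1.6])
    Q = np.array([1.2, 3.0, 5.9, 9.8, 17.5, 27.0])

    params, _ = curve_fit(rating, h, Q, p0=(10.0, 1.7, 0.1), maxfev=10000)
    print("c, b, h0 =", params)
    print("extrapolated Q at h = 3.0 m:", rating(3.0, *params))
    ```

    The paper's point is that such a fit extrapolates poorly above the gauged range, whereas factoring Q into wetted area A(h) times mean velocity v(h) lets surveyed cross-section geometry constrain the high-flow end.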

  12. Exploiting Listener Gaze to Improve Situated Communication in Dynamic Virtual Environments.

    PubMed

    Garoufi, Konstantina; Staudte, Maria; Koller, Alexander; Crocker, Matthew W

    2016-09-01

    Beyond the observation that both speakers and listeners rapidly inspect the visual targets of referring expressions, it has been argued that such gaze may constitute part of the communicative signal. In this study, we investigate whether a speaker may, in principle, exploit listener gaze to improve communicative success. In the context of a virtual environment where listeners follow computer-generated instructions, we provide two kinds of support for this claim. First, we show that listener gaze provides a reliable real-time index of understanding even in dynamic and complex environments, and on a per-utterance basis. Second, we show that a language generation system that uses listener gaze to provide rapid feedback improves overall task performance in comparison with two systems that do not use gaze. Aside from demonstrating the utility of listener gaze in situated communication, our findings open the door to new methods for developing and evaluating multi-modal models of situated interaction.

  13. Parallel and fault-tolerant algorithms for hypercube multiprocessors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aykanat, C.

    1988-01-01

    Several techniques for increasing the performance of parallel algorithms on distributed-memory message-passing multiprocessor systems are investigated. These techniques are effectively implemented for the parallelization of the Scaled Conjugate Gradient (SCG) algorithm on a hypercube-connected message-passing multiprocessor. Significant performance improvement is achieved by using these techniques. The SCG algorithm is used for the solution phase of an FE modeling system. Almost linear speed-up is achieved, and it is shown that the hypercube topology is scalable for this class of FE problems. The SCG algorithm is also shown to be suitable for vectorization, and near-supercomputer performance is achieved on a vector hypercube multiprocessor by exploiting both parallelization and vectorization. Fault-tolerance issues for the parallel SCG algorithm and for the hypercube topology are also addressed.
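
    A serial sketch of the conjugate gradient kernel that was parallelized (the scaling/preconditioning and the hypercube communication are omitted):

    ```python
    import numpy as np

    def conjugate_gradient(A, b, tol=1e-8, max_iter=200):
        """Solve A x = b for symmetric positive definite A."""
        x = np.zeros_like(b)
        r = b - A @ x          # residual
        p = r.copy()           # search direction
        rs = r @ r
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    print(conjugate_gradient(A, np.array([1.0, 2.0])))
    ```

    On a hypercube, the matrix-vector product and the two inner products are the communication points, which is where the techniques in the report concentrate.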

  14. Deception in plants: mimicry or perceptual exploitation?

    PubMed

    Schaefer, H Martin; Ruxton, Graeme D

    2009-12-01

    Mimicry involves adaptive resemblance between a mimic and a model. However, despite much recent research, it remains contentious in plants. Here, we review recent progress on studying deception by flowers, distinguishing between plants relying on mimicry to achieve pollination and those relying on the exploitation of the perceptual biases of animals. We disclose fundamental differences between both mechanisms and explain why the evolution of exploitation is less constrained than that of mimicry. Exploitation of perceptual biases might thus be a precursor for the gradual evolution of mimicry. Increasing knowledge on the sensory and cognitive filters in animals, and on the selective pressures that maintain them, should aid researchers in tracing the evolutionary dynamics of deception in plants.

  15. Constructing an Efficient Self-Tuning Aircraft Engine Model for Control and Health Management Applications

    NASA Technical Reports Server (NTRS)

    Armstrong, Jeffrey B.; Simon, Donald L.

    2012-01-01

    Self-tuning aircraft engine models can be applied for control and health management applications. The self-tuning feature of these models minimizes the mismatch between any given engine and the underlying engineering model describing an engine family. This paper provides details of the construction of a self-tuning engine model centered on a piecewise linear Kalman filter design. Starting from a nonlinear transient aerothermal model, a piecewise linear representation is first extracted. The linearization procedure creates a database of trim vectors and state-space matrices that are subsequently scheduled for interpolation based on engine operating point. A series of steady-state Kalman gains can next be constructed from a reduced-order form of the piecewise linear model. Reduction of the piecewise linear model to an observable dimension with respect to available sensed engine measurements can be achieved using either a subset or an optimal linear combination of "health" parameters, which describe engine performance. The resulting piecewise linear Kalman filter is then implemented for faster-than-real-time processing of sensed engine measurements, generating outputs appropriate for trending engine performance, estimating both measured and unmeasured parameters for control purposes, and performing on-board gas-path fault diagnostics. Computational efficiency is achieved by designing multidimensional interpolation algorithms that exploit the shared scheduling of multiple trim vectors and system matrices. An example application illustrates the accuracy of a self-tuning piecewise linear Kalman filter model when applied to a nonlinear turbofan engine simulation. Additional discussions focus on the issue of transient response accuracy and the advantages of a piecewise linear Kalman filter in the context of validation and verification. The techniques described provide a framework for constructing efficient self-tuning aircraft engine models from complex nonlinear simulations.
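
    A minimal sketch of the scheduling idea: gains and trim values designed offline at a few operating points and interpolated at run time. All numbers and the scalar dynamics below are illustrative, not from the paper:

    ```python
    import numpy as np

    # Offline: trim values and steady-state Kalman gains at two operating points,
    # indexed by a scheduling variable (e.g., a corrected speed). Illustrative values.
    sched_pts = np.array([0.0, 1.0])
    x_trim = np.array([0.0, 2.0])   # trim state at each operating point
    K_gain = np.array([0.3, 0.6])   # steady-state Kalman gain at each point

    def estimate(s, x_hat, y_meas, a=0.95, c=1.0):
        """One scalar piecewise-linear Kalman update about the local trim point."""
        xt = np.interp(s, sched_pts, x_trim)   # interpolated trim state
        K = np.interp(s, sched_pts, K_gain)    # interpolated steady-state gain
        dx_pred = a * (x_hat - xt)             # predict deviation from trim
        innov = y_meas - (xt + c * dx_pred)    # measurement residual
        return xt + dx_pred + K * innov        # corrected state estimate

    print(estimate(s=0.4, x_hat=0.9, y_meas=1.1))
    ```

    Using precomputed steady-state gains avoids the on-line Riccati update, which is what buys the faster-than-real-time execution mentioned in the record.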

  16. Evolution of a designless nanoparticle network into reconfigurable Boolean logic

    NASA Astrophysics Data System (ADS)

    Bose, S. K.; Lawrence, C. P.; Liu, Z.; Makarenko, K. S.; van Damme, R. M. J.; Broersma, H. J.; van der Wiel, W. G.

    2015-12-01

    Natural computers exploit the emergent properties and massive parallelism of interconnected networks of locally active components. Evolution has resulted in systems that compute quickly and that use energy efficiently, utilizing whatever physical properties are exploitable. Man-made computers, on the other hand, are based on circuits of functional units that follow given design rules. Hence, physical processes that could potentially be exploited to solve a problem, such as capacitive crosstalk, are left out. Until now, designless nanoscale networks of inanimate matter that exhibit robust computational functionality had not been realized. Here we artificially evolve the electrical properties of a disordered nanomaterials system (by optimizing the values of control voltages using a genetic algorithm) to perform computational tasks reconfigurably. We exploit the rich behaviour that emerges from interconnected metal nanoparticles, which act as strongly nonlinear single-electron transistors, and find that this nanoscale architecture can be configured in situ into any Boolean logic gate. This universal, reconfigurable gate would require about ten transistors in a conventional circuit. Our system meets the criteria for the physical realization of (cellular) neural networks: universality (arbitrary Boolean functions), compactness, robustness and evolvability, which implies scalability to perform more advanced tasks. Our evolutionary approach works around device-to-device variations and the accompanying uncertainties in performance. Moreover, it bears a great potential for more energy-efficient computation, and for solving problems that are very hard to tackle in conventional architectures.
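
    A toy sketch of the evolutionary loop described, with a stand-in nonlinear map playing the role of the physical nanoparticle network and a genetic algorithm searching control voltages for an XOR gate:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    INPUTS = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    TARGET = np.array([0, 1, 1, 0], dtype=float)   # XOR truth table

    def device(inp, v):
        """Stand-in for the nanoparticle network: a fixed nonlinear map from
        two logic inputs plus five control voltages to one thresholded output."""
        z = np.tanh(inp[0] * v[0] + inp[1] * v[1] + v[2]) * np.tanh(inp[0] * v[3] - inp[1] * v[4])
        return float(z > 0)

    def fitness(v):
        out = np.array([device(i, v) for i in INPUTS])
        return -np.sum((out - TARGET) ** 2)        # 0 means a perfect gate

    pop = rng.uniform(-2, 2, size=(40, 5))         # population of voltage vectors
    for gen in range(100):
        scores = np.array([fitness(v) for v in pop])
        elite = pop[np.argsort(scores)[-10:]]      # keep the 10 fittest
        children = elite[rng.integers(0, 10, 30)] + 0.2 * rng.standard_normal((30, 5))
        pop = np.vstack([elite, children])         # mutate elites to refill population

    print("best fitness:", max(fitness(v) for v in pop))
    ```

    Re-running the search with a different TARGET truth table mirrors the in situ reconfigurability the paper reports: the same "device" realizes a different gate under different control voltages.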

  17. Long-range laser scanning and 3D imaging for the Gneiss quarries survey

    NASA Astrophysics Data System (ADS)

    Schenker, Filippo Luca; Spataro, Alessio; Pozzoni, Maurizio; Ambrosi, Christian; Cannata, Massimiliano; Günther, Felix; Corboud, Federico

    2016-04-01

    In Canton Ticino (Southern Switzerland), the exploitation of natural stone, mostly gneisses, is an important part of the valleys' economies. Nowadays, these economic activities are threatened by (i) exploitation costs related to geological phenomena such as fractures, faults and heterogeneous rocks that hinder the processing of the stone product, (ii) continuously changing demand driven by evolving natural stone fashions and (iii) increasing administrative limits and rules acting to protect the environment. Therefore, the sustainable development of the sector over the next decades needs new and effective strategies to regulate and plan the quarries. A fundamental step in this process is the building of a 3D geological model of the quarries to constrain the volume of commercial natural stone and the volume of waste. In this context, we conducted Terrestrial Laser Scanning surveys of the quarries in the Maggia Valley to obtain a detailed 3D topography onto which the geological units were mapped. The topographic 3D model was obtained with a long-range laser scanner, a Riegl VZ4000, which can measure from distances of up to 4 km at a speed of 147,000 points per second. It operates with the new V-line technology, which defines the surface relief by sensing differentiated signals (echoes), even in the presence of obstacles such as vegetation. Depending on the aesthetics of the gneisses, we defined seven types of natural stone that, together with faults and joints, were mapped onto the 3D models of the exploitation sites. According to the orientation of the geological limits and structures, we projected the different rock units and fractures onto the excavation front. This way, we obtained a 3D geological model from which we can quantitatively estimate the volume of the seven different natural stones (with different commercial values) and of the waste (with low commercial value). To verify the 3D geological models and to quantify exploited rock and waste volumes, the same procedure will be repeated after ca. 6 months. Finally, these 3D geological models can be useful to (i) decrease exploitation costs, because they yield the extraction potential of the quarry, (ii) become more efficient in exploitation and more dynamic in the market, because they permit better planning, and (iii) decrease waste by limiting excavation in regions with low-quality rocks.

  18. Global/local stress analysis of composite panels

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Knight, Norman F., Jr.

    1989-01-01

    A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independent of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.
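
    A generic sketch of the boundary-condition transfer step; a standard cubic spline stands in for the paper's splines that satisfy the plate bending equation, and the displacement values are hypothetical:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    # Coarse global-model displacements along the local-model boundary
    # (hypothetical values along the boundary arc length).
    s_global = np.linspace(0.0, 1.0, 6)
    w_global = np.array([0.0, 0.8, 1.5, 1.4, 0.7, 0.0])   # out-of-plane deflection

    spline = CubicSpline(s_global, w_global)

    # Interpolate displacements and rotations (dw/ds) at refined local-model nodes.
    s_local = np.linspace(0.0, 1.0, 41)
    w_bc = spline(s_local)         # displacement boundary conditions
    theta_bc = spline(s_local, 1)  # rotation boundary conditions (first derivative)
    print(w_bc[:5], theta_bc[:5])
    ```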

  19. Global/local stress analysis of composite structures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    1989-01-01

    A method for performing a global/local stress analysis is described and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independent of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.

  20. Shape analysis modeling for character recognition

    NASA Astrophysics Data System (ADS)

    Khan, Nadeem A. M.; Hegt, Hans A.

    1998-10-01

    Optimal shape modeling of character classes is crucial for achieving high recognition performance on mixed-font, hand-written, or poor-quality text. A novel scheme is presented in this regard, focusing on constructing structural models that can be examined hierarchically. These models utilize a carefully chosen set of shape primitives. They are simplified enough to ignore the inter-class variations in font type or writing style, yet retain enough detail for discrimination between samples of similar classes. Thus the number of models required per class can be kept minimal without sacrificing recognition accuracy. In this connection, a flexible multi-stage matching scheme exploiting the proposed modeling is also described. This leads to a system which is robust against various distortions and degradations, including those related to touching and broken characters. Finally, we present some examples and test results as a proof-of-concept demonstrating the validity and robustness of the approach.

  1. Two-dimensional hidden semantic information model for target saliency detection and eyetracking identification

    NASA Astrophysics Data System (ADS)

    Wan, Weibing; Yuan, Lingfeng; Zhao, Qunfei; Fang, Tao

    2018-01-01

    Saliency detection has been applied to the target acquisition case. This paper proposes a two-dimensional hidden Markov model (2D-HMM) that exploits the hidden semantic information of an image to detect its salient regions. A spatial pyramid histogram of oriented gradient descriptors is used to extract features. After encoding the image with a learned dictionary, the 2D-Viterbi algorithm is applied to infer the saliency map. This model can predict fixation of the targets and further creates robust and effective depictions of the targets' changes in posture and viewpoint. To validate the model against the human visual search mechanism, two eye-tracking experiments are employed to train our model directly from eye movement data. The results show that our model achieves better performance than visual attention models. Moreover, this indicates the plausibility of utilizing visual track data to identify targets.
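
    The paper's 2D-Viterbi variant is specific to its 2D-HMM; the standard one-dimensional Viterbi recursion it generalizes looks like this:

    ```python
    import numpy as np

    def viterbi(obs, pi, A, B):
        """Most likely hidden-state path for an HMM, computed in log space.
        pi: initial probs (S,), A: transitions (S, S), B: emissions (S, O)."""
        S, T = len(pi), len(obs)
        logd = np.log(pi) + np.log(B[:, obs[0]])    # best log-prob ending in each state
        back = np.zeros((T, S), dtype=int)          # backpointers
        for t in range(1, T):
            scores = logd[:, None] + np.log(A)      # (from_state, to_state)
            back[t] = np.argmax(scores, axis=0)
            logd = scores[back[t], np.arange(S)] + np.log(B[:, obs[t]])
        path = [int(np.argmax(logd))]               # backtrack from the best end state
        for t in range(T - 1, 0, -1):
            path.append(int(back[t, path[-1]]))
        return path[::-1]

    pi = np.array([0.6, 0.4])
    A = np.array([[0.7, 0.3], [0.4, 0.6]])
    B = np.array([[0.9, 0.1], [0.2, 0.8]])
    print(viterbi([0, 0, 1, 1], pi, A, B))
    ```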

  2. Reinforcement learning for routing in cognitive radio ad hoc networks.

    PubMed

    Al-Rawi, Hasan A A; Yau, Kok-Lim Alvin; Mohamad, Hafizal; Ramli, Nordin; Hashim, Wahidah

    2014-01-01

    Cognitive radio (CR) enables unlicensed users (or secondary users, SUs) to sense for and exploit underutilized licensed spectrum owned by the licensed users (or primary users, PUs). Reinforcement learning (RL) is an artificial intelligence approach that enables a node to observe, learn, and make appropriate decisions on action selection in order to maximize network performance. Routing enables a source node to search for a least-cost route to its destination node. While there have been increasing efforts to enhance the traditional RL approach for routing in wireless networks, this research area remains largely unexplored in the domain of routing in CR networks. This paper applies RL in routing and investigates the effects of various features of RL (i.e., reward function, exploitation and exploration, as well as learning rate) through simulation. New approaches and recommendations are proposed to enhance these features in order to improve the network performance brought about by RL to routing. Simulation results show that the RL parameters of the reward function, exploitation and exploration, as well as learning rate, must be well regulated, and that the new approaches proposed in this paper improve SUs' network performance without significantly jeopardizing PUs' network performance, specifically SUs' interference to PUs.
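
    A minimal sketch of the RL ingredients the paper tunes (reward, learning rate, and the exploration/exploitation balance), here as epsilon-greedy Q-learning over a toy route choice with hypothetical link costs:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N_ROUTES = 3
    true_cost = np.array([5.0, 3.0, 8.0])   # hypothetical mean cost of each route

    Q = np.zeros(N_ROUTES)
    alpha, epsilon = 0.1, 0.2               # learning rate / exploration rate

    for step in range(2000):
        # Exploration vs. exploitation trade-off.
        if rng.random() < epsilon:
            a = int(rng.integers(N_ROUTES))  # explore a random route
        else:
            a = int(np.argmax(Q))            # exploit the best-known route
        reward = -(true_cost[a] + rng.normal(0, 0.5))  # lower cost, higher reward
        Q[a] += alpha * (reward - Q[a])      # stateless Q-learning update

    print("learned route preference:", int(np.argmax(Q)), Q)
    ```

    The paper's point is visible even in this toy: set epsilon too low and the learner can lock onto a poor route; set alpha too high and the estimates never settle.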

  3. Reinforcement Learning for Routing in Cognitive Radio Ad Hoc Networks

    PubMed Central

    Al-Rawi, Hasan A. A.; Mohamad, Hafizal; Hashim, Wahidah

    2014-01-01

    Cognitive radio (CR) enables unlicensed users (or secondary users, SUs) to sense for and exploit underutilized licensed spectrum owned by the licensed users (or primary users, PUs). Reinforcement learning (RL) is an artificial intelligence approach that enables a node to observe, learn, and make appropriate decisions on action selection in order to maximize network performance. Routing enables a source node to search for a least-cost route to its destination node. While there have been increasing efforts to enhance the traditional RL approach for routing in wireless networks, this research area remains largely unexplored in the domain of routing in CR networks. This paper applies RL in routing and investigates the effects of various features of RL (i.e., reward function, exploitation and exploration, as well as learning rate) through simulation. New approaches and recommendations are proposed to enhance these features in order to improve the network performance brought about by RL to routing. Simulation results show that the RL parameters of the reward function, exploitation and exploration, as well as learning rate, must be well regulated, and that the new approaches proposed in this paper improve SUs' network performance without significantly jeopardizing PUs' network performance, specifically SUs' interference to PUs. PMID:25140350

  4. A Fast MHD Code for Gravitationally Stratified Media using Graphical Processing Units: SMAUG

    NASA Astrophysics Data System (ADS)

    Griffiths, M. K.; Fedun, V.; Erdélyi, R.

    2015-03-01

    Parallelization techniques have been exploited most successfully by the gaming/graphics industry with the adoption of graphical processing units (GPUs), possessing hundreds of processor cores. The opportunity has been recognized by the computational sciences and engineering communities, who have recently harnessed successfully the numerical performance of GPUs. For example, parallel magnetohydrodynamic (MHD) algorithms are important for numerical modelling of highly inhomogeneous solar, astrophysical and geophysical plasmas. Here, we describe the implementation of SMAUG, the Sheffield Magnetohydrodynamics Algorithm Using GPUs. SMAUG is a 1-3D MHD code capable of modelling magnetized and gravitationally stratified plasma. The objective of this paper is to present the numerical methods and techniques used for porting the code to this novel and highly parallel compute architecture. The methods employed are justified by the performance benchmarks and validation results demonstrating that the code successfully simulates the physics for a range of test scenarios including a full 3D realistic model of wave propagation in the solar atmosphere.

  5. Accelerating a three-dimensional eco-hydrological cellular automaton on GPGPU with OpenCL

    NASA Astrophysics Data System (ADS)

    Senatore, Alfonso; D'Ambrosio, Donato; De Rango, Alessio; Rongo, Rocco; Spataro, William; Straface, Salvatore; Mendicino, Giuseppe

    2016-10-01

    This work presents an effective implementation of a numerical model for complete eco-hydrological Cellular Automata modeling on Graphical Processing Units (GPU) with OpenCL (Open Computing Language) for heterogeneous computation (i.e., on CPUs and/or GPUs). Different types of parallel implementations were carried out (e.g., use of fast local memory, loop unrolling, etc.), showing increasing performance improvements in terms of speedup and adopting some original optimization strategies. Moreover, numerical analysis of the results (i.e., comparison of CPU and GPU outcomes in terms of rounding errors) has proven satisfactory. Experiments were carried out on a workstation with two CPUs (Intel Xeon E5440 at 2.83GHz), one AMD R9 280X GPU and one nVIDIA Tesla K20c GPU. Results have been extremely positive, but further testing should be performed to assess the functionality of the adopted strategies on other complete models and their ability to fruitfully exploit the resources of parallel systems.

  6. Evaluating harvest-based control of invasive fish with telemetry: Performance of sea lamprey traps in the Great Lakes

    USGS Publications Warehouse

    Holbrook, Christopher; Bergstedt, Roger A.; Barber, Jessica M.; Bravener, Gale A; Jones, Michael L.; Krueger, Charles C.

    2016-01-01

    Physical removal (e.g., harvest via traps or nets) of mature individuals may be a cost-effective or socially acceptable alternative to chemical control strategies for invasive species, but requires knowledge of the spatial distribution of a population over time. We used acoustic telemetry to determine the current and possible future role of traps to control and assess invasive sea lampreys, Petromyzon marinus, in the St. Marys River, the connecting channel between Lake Superior and Lake Huron. Exploitation rates (i.e., fractions of an adult sea lamprey population removed by traps) at two upstream locations were compared among three years and two points of entry to the system. Telemetry receivers throughout the drainage allowed trap performance (exploitation rate) to be partitioned into two components: proportion of migrating sea lampreys that visited trap sites (availability) and proportion of available sea lampreys that were caught by traps (local trap efficiency). Estimated exploitation rates were well below those needed to provide population control in the absence of lampricides and were limited by availability and local trap efficiency. Local trap efficiency estimates for acoustic-tagged sea lampreys were lower than analogous estimates regularly obtained using traditional mark–recapture methods, suggesting that abundance had been previously underestimated. Results suggested major changes would be required to substantially increase catch, including improvements to existing traps, installation of new traps, or other modifications to attract and retain more sea lampreys. This case study also shows how bias associated with telemetry tags can be estimated and incorporated in models to improve inferences about parameters that are directly relevant to fishery management.
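
    The partitioning used in the study is simple to state directly; with hypothetical telemetry counts (not the paper's data):

    ```python
    # Exploitation rate partitioned as in the study: the fraction of tagged
    # sea lampreys removed equals availability times local trap efficiency.
    tagged_released = 200
    visited_trap_site = 120   # detected near a trap by telemetry receivers
    caught_in_trap = 30

    availability = visited_trap_site / tagged_released       # 0.60
    local_efficiency = caught_in_trap / visited_trap_site    # 0.25
    exploitation = availability * local_efficiency           # 0.15
    print(availability, local_efficiency, exploitation)
    ```

    The decomposition shows why a low exploitation rate alone is ambiguous: it can reflect fish never reaching the traps, traps failing to retain visitors, or both, and each cause calls for a different remedy.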

  7. Evaluating harvest-based control of invasive fish with telemetry: performance of sea lamprey traps in the Great Lakes.

    PubMed

    Holbrook, Christopher M; Bergstedt, Roger A; Barber, Jessica; Bravener, Gale A; Jones, Michael L; Krueger, Charles C

    2016-09-01

    Physical removal (e.g., harvest via traps or nets) of mature individuals may be a cost-effective or socially acceptable alternative to chemical control strategies for invasive species, but requires knowledge of the spatial distribution of a population over time. We used acoustic telemetry to determine the current and possible future role of traps to control and assess invasive sea lampreys, Petromyzon marinus, in the St. Marys River, the connecting channel between Lake Superior and Lake Huron. Exploitation rates (i.e., fractions of an adult sea lamprey population removed by traps) at two upstream locations were compared among three years and two points of entry to the system. Telemetry receivers throughout the drainage allowed trap performance (exploitation rate) to be partitioned into two components: proportion of migrating sea lampreys that visited trap sites (availability) and proportion of available sea lampreys that were caught by traps (local trap efficiency). Estimated exploitation rates were well below those needed to provide population control in the absence of lampricides and were limited by availability and local trap efficiency. Local trap efficiency estimates for acoustic-tagged sea lampreys were lower than analogous estimates regularly obtained using traditional mark-recapture methods, suggesting that abundance had been previously underestimated. Results suggested major changes would be required to substantially increase catch, including improvements to existing traps, installation of new traps, or other modifications to attract and retain more sea lampreys. This case study also shows how bias associated with telemetry tags can be estimated and incorporated in models to improve inferences about parameters that are directly relevant to fishery management.

  8. Benchmarking GPU and CPU codes for Heisenberg spin glass over-relaxation

    NASA Astrophysics Data System (ADS)

    Bernaschi, M.; Parisi, G.; Parisi, L.

    2011-06-01

    We present a set of possible implementations for Graphics Processing Units (GPU) of the Over-relaxation technique applied to the 3D Heisenberg spin glass model. The results show that a carefully tuned code can achieve more than 100 GFlops/s of sustained performance and update a single spin in about 0.6 nanoseconds. A multi-hit technique that exploits the GPU shared memory further reduces this time. Such results are compared with those obtained by means of a highly-tuned vector-parallel code on latest generation multi-core CPUs.
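
    The over-relaxation move itself is a standard, energy-conserving update: each Heisenberg spin is reflected about its local molecular field,

    ```latex
    \mathbf{s}_i' = 2\,\frac{\mathbf{s}_i \cdot \mathbf{h}_i}{\lVert \mathbf{h}_i \rVert^{2}}\,\mathbf{h}_i - \mathbf{s}_i,
    \qquad
    \mathbf{h}_i = \sum_{j \in \mathcal{N}(i)} J_{ij}\,\mathbf{s}_j
    ```

    which leaves the local energy term unchanged and can be applied to every spin of one sublattice of a checkerboard decomposition independently, which is what makes the kernel such a good fit for GPU threads.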

  9. Dipole response of the odd-proton nucleus 205Tl up to the neutron-separation energy

    DOE PAGES

    Benouaret, N.; Beller, J.; Pai, H.; ...

    2016-10-17

    The low-lying electromagnetic dipole strength of the odd-proton nuclide 205Tl has been investigated up to the neutron separation energy, exploiting the method of nuclear resonance fluorescence. In total, 61 levels of 205Tl have been identified. Finally, the measured strength distribution of 205Tl was discussed and compared to those of even–even and even–odd mass nuclei in the same mass region, as well as to calculations performed within the quasi-particle phonon model.

  10. Bell nonlocality: a resource for device-independent quantum information protocols

    NASA Astrophysics Data System (ADS)

    Acin, Antonio

    2015-05-01

    Bell nonlocality is not only one of the most fundamental properties of quantum physics, but has also recently acquired the status of an information resource for device-independent quantum information protocols. In the device-independent approach, protocols are designed so that their performance is independent of the internal workings of the devices used in the implementation. We discuss all these ideas and argue that device-independent protocols are especially relevant for cryptographic applications, as they are insensitive to hacking attacks that exploit imperfections in the modelling of the devices.
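
    Device independence rests on the violation of a Bell inequality. In the CHSH form, with correlators E(x, y) between measurement settings x in {a, a'} and y in {b, b'}:

    ```latex
    S = \bigl|\, E(a,b) + E(a,b') + E(a',b) - E(a',b') \,\bigr| \le 2 \quad \text{(any local model)},
    \qquad S \le 2\sqrt{2} \quad \text{(quantum mechanics)}
    ```

    Observing S > 2 certifies nonlocal correlations regardless of how the devices are built, which is exactly what makes the resource device-independent.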

  11. A channel differential EZW coding scheme for EEG data compression.

    PubMed

    Dehkordi, Vahid R; Daou, Hoda; Labeau, Fabrice

    2011-11-01

    In this paper, a method is proposed to compress multichannel electroencephalographic (EEG) signals in a scalable fashion. Correlation between EEG channels is exploited through clustering using a k-means method. Representative channels for each of the clusters are encoded individually while other channels are encoded differentially, i.e., with respect to their respective cluster representatives. The compression is performed using the embedded zero-tree wavelet encoding adapted to 1-D signals. Simulations show that the scalable features of the scheme lead to a flexible quality/rate tradeoff, without requiring detailed EEG signal modeling.
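
    A sketch of the channel-grouping step using scikit-learn's k-means, with each channel differenced against its cluster representative (the wavelet/EZW stage is omitted and the data are synthetic):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    eeg = rng.standard_normal((16, 1024))    # hypothetical 16-channel record
    eeg[8:] += eeg[:8] * 0.9                 # induce inter-channel correlation

    # Cluster channels by their signal similarity; 4 clusters as an example.
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(eeg)

    residuals = np.empty_like(eeg)
    for c in range(4):
        members = np.where(km.labels_ == c)[0]
        rep = members[0]                     # representative: first member channel
        residuals[rep] = eeg[rep]            # representative is coded individually
        for ch in members[1:]:
            residuals[ch] = eeg[ch] - eeg[rep]  # others are coded differentially

    # Differential residuals carry less energy, so they compress better downstream.
    print(np.var(eeg), np.var(residuals))
    ```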

  12. Unstructured grids on SIMD torus machines

    NASA Technical Reports Server (NTRS)

    Bjorstad, Petter E.; Schreiber, Robert

    1994-01-01

    Unstructured grids lead to unstructured communication on distributed memory parallel computers, a problem that has been considered difficult. Here, we consider adaptive, offline communication routing for a SIMD processor grid. Our approach is empirical. We use large data sets drawn from supercomputing applications instead of an analytic model of communication load. The chief contribution of this paper is an experimental demonstration of the effectiveness of certain routing heuristics. Our routing algorithm is adaptive, nonminimal, and is generally designed to exploit locality. We have a parallel implementation of the router, and we report on its performance.

  13. Epos TCS Satellite Data

    NASA Astrophysics Data System (ADS)

    Manunta, Michele; Mandea, Mioara; Fernández-Turiel, José Luis; Stramondo, Salvatore; Wright, Tim; Walter, Thomas; Bally, Philippe; Casu, Francesco; Zeni, Giovanni; Buonanno, Sabatino; Zinno, Ivana; Tizzani, Pietro; Castaldo, Raffaele; Ostanciaux, Emilie; Diament, Michel; Hooper, Andy; Maccaferri, Francesco; Lanari, Riccardo

    2016-04-01

    TCS Satellite Data is devoted to providing Earth Observation (EO) services that are transversal with respect to the large EPOS community and suitable for use in several application scenarios. In particular, the main goal is to contribute mature services that have already demonstrated their effectiveness and relevance in investigating the physical processes controlling earthquakes, volcanic eruptions and unrest episodes, as well as those driving tectonics and Earth surface dynamics. TCS Satellite Data will provide two kinds of services: satellite products/services, and value-added satellite products/services. The satellite products/services are composed of three well-identified and partly already operational elements (EPOSAR, GDM and COMET) for delivering Level 1 products. These services will be devoted to the generation of SAR interferograms, DTMs and ground displacement maps through the exploitation of different advanced EO techniques for InSAR and optical data analysis. The value-added satellite products/services are composed of four elements (EPOSAR, 3D-Def, Mod and COMET) delivering Level 2 and 3 products. These services integrate satellite and in situ measurements and observations to retrieve information on the source mechanism, such as the geometry (spatial location, depth, volume changes) and the physical parameters of the deformation sources, through the exploitation of modelling approaches. TCS Satellite Data will provide products in two different processing and delivery modes: (1) surveillance mode, with routine product generation, and (2) on-demand mode, with product generation performed on demand by the user. Concerning the surveillance mode, the goal is to provide continuous satellite measurements in areas of particular interest from a geophysical perspective (supersites). The objective is the detection of displacement patterns changing over time and their geophysical explanation. This is a valid approach for inter-seismic movements and volcanic unrest, post-seismic and post-eruptive displacements, urban subsidence, and coastal movements. The on-demand mode will allow users to process available satellite data stacks by selecting the scenes and the area of interest and properly setting some processing parameters, or to perform modelling analyses. This processing mode will guarantee the possibility of analysing areas of interest to the users, thus exploiting as much as possible the global coverage strategy of satellites, as well as performing user-driven processing that benefits from knowledge of the characteristics of the particular investigated area and/or deformation phenomenon.

  14. High Performance Programming Using Explicit Shared Memory Model on Cray T3D1

    NASA Technical Reports Server (NTRS)

    Simon, Horst D.; Saini, Subhash; Grassi, Charles

    1994-01-01

    The Cray T3D system is the first-phase system in Cray Research, Inc.'s (CRI) three-phase massively parallel processing (MPP) program. This system features a heterogeneous architecture that closely couples DEC's Alpha microprocessors and CRI's parallel-vector technology, i.e., the Cray Y-MP and Cray C90. An overview of the Cray T3D hardware and available programming models is presented. Under the Cray Research adaptive Fortran (CRAFT) model, four programming methods (data parallel, work sharing, message-passing using PVM, and the explicit shared memory model) are available to users. However, at this time the data parallel and work sharing programming models are not available to the user community. The differences between standard PVM and CRI's PVM are highlighted with performance measurements such as latencies and communication bandwidths. We have found that neither standard PVM nor CRI's PVM exploits the hardware capabilities of the T3D. The reasons for the poor performance of PVM as a native message-passing library are presented. This is illustrated by the performance of the NAS Parallel Benchmarks (NPB) programmed in the explicit shared memory model on the Cray T3D. In general, the performance of standard PVM is about 4 to 5 times lower than that obtained using the explicit shared memory model. This degradation in performance is also seen on the CM-5, where the performance of applications using the native message-passing library CMMD is likewise about 4 to 5 times lower than that of data parallel methods. The issues involved in programming in the explicit shared memory model (such as barriers, synchronization, and invalidating and aligning the data cache) are discussed. The comparative performance of the NPB using the explicit shared memory programming model on the Cray T3D and other highly parallel systems such as the TMC CM-5, Intel Paragon, Cray C90, and IBM SP-1 is presented.

  15. Search for neutral resonances decaying into a Z boson and a pair of b jets or τ leptons

    NASA Astrophysics Data System (ADS)

    Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Asilar, E.; Bergauer, T.; Brandstetter, J.; Brondolin, E.; Dragicevic, M.; Erö, J.; Flechl, M.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Knünz, V.; König, A.; Krammer, M.; Krätschmer, I.; Liko, D.; Matsushita, T.; Mikulec, I.; Rabady, D.; Rahbaran, B.; Rohringer, H.; Schieck, J.; Schöfbeck, R.; Strauss, J.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; Cornelis, T.; De Wolf, E. A.; Janssen, X.; Knutsson, A.; Lauwers, J.; Luyckx, S.; Van De Klundert, M.; Van Haevermaet, H.; Van Mechelen, P.; Van Remortel, N.; Van Spilbeeck, A.; Abu Zeid, S.; Blekman, F.; D'Hondt, J.; Daci, N.; De Bruyn, I.; Deroover, K.; Heracleous, N.; Keaveney, J.; Lowette, S.; Moreels, L.; Olbrechts, A.; Python, Q.; Strom, D.; Tavernier, S.; Van Doninck, W.; Van Mulders, P.; Van Onsem, G. P.; Van Parijs, I.; Barria, P.; Brun, H.; Caillol, C.; Clerbaux, B.; De Lentdecker, G.; Fasanella, G.; Favart, L.; Grebenyuk, A.; Karapostoli, G.; Lenzi, T.; Léonard, A.; Maerschalk, T.; Marinov, A.; Perniè, L.; Randle-Conde, A.; Seva, T.; Vander Velde, C.; Vanlaer, P.; Yonamine, R.; Zenoni, F.; Zhang, F.; Beernaert, K.; Benucci, L.; Cimmino, A.; Crucy, S.; Dobur, D.; Fagot, A.; Garcia, G.; Gul, M.; Mccartin, J.; Ocampo Rios, A. A.; Poyraz, D.; Ryckbosch, D.; Salva, S.; Sigamani, M.; Tytgat, M.; Van Driessche, W.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Beluffi, C.; Bondu, O.; Brochet, S.; Bruno, G.; Caudron, A.; Ceard, L.; Da Silveira, G. G.; Delaere, C.; Favart, D.; Forthomme, L.; Giammanco, A.; Hollar, J.; Jafari, A.; Jez, P.; Komm, M.; Lemaitre, V.; Mertens, A.; Musich, M.; Nuttens, C.; Perrini, L.; Pin, A.; Piotrzkowski, K.; Popov, A.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Beliy, N.; Hammad, G. H.; Aldá Júnior, W. L.; Alves, F. L.; Alves, G. A.; Brito, L.; Correa Martins, M.; Hamer, M.; Hensel, C.; Moraes, A.; Pol, M. E.; Rebello Teles, P.; Belchior Batista Das Chagas, E.; Carvalho, W.; Chinellato, J.; Custódio, A.; Da Costa, E. M.; De Jesus Damiao, D.; De Oliveira Martins, C.; Fonseca De Souza, S.; Huertas Guativa, L. M.; Malbouisson, H.; Matos Figueiredo, D.; Mora Herrera, C.; Mundim, L.; Nogima, H.; Prado Da Silva, W. L.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Ahuja, S.; Bernardes, C. A.; De Souza Santos, A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Moon, C. S.; Novaes, S. F.; Padula, Sandra S.; Romero Abad, D.; Ruiz Vargas, J. C.; Aleksandrov, A.; Hadjiiska, R.; Iaydjiev, P.; Rodozov, M.; Stoykova, S.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Ahmad, M.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Cheng, T.; Du, R.; Jiang, C. H.; Plestina, R.; Romeo, F.; Shaheen, S. M.; Spiezia, A.; Tao, J.; Wang, C.; Wang, Z.; Zhang, H.; Asawatangtrakuldee, C.; Ban, Y.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; Gomez Moreno, B.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Puljak, I.; Ribeiro Cipriano, P. M.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Kadija, K.; Luetic, J.; Micanovic, S.; Sudic, L.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. 
A.; Rykaczewski, H.; Bodlak, M.; Finger, M.; Finger, M.; El-khateeb, E.; Elkafrawy, T.; Mohamed, A.; Salama, E.; Calpas, B.; Kadastik, M.; Murumaa, M.; Raidal, M.; Tiko, A.; Veelken, C.; Eerola, P.; Pekkanen, J.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Peltola, T.; Tuominen, E.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. L.; Favaro, C.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Machet, M.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Zghiche, A.; Antropov, I.; Baffioni, S.; Beaudette, F.; Busson, P.; Cadamuro, L.; Chapon, E.; Charlot, C.; Davignon, O.; Filipovic, N.; Granier de Cassagnac, R.; Jo, M.; Lisniak, S.; Mastrolorenzo, L.; Miné, P.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Pigard, P.; Regnard, S.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Strebler, T.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Buttignol, M.; Chabert, E. C.; Chanon, N.; Collard, C.; Conte, E.; Coubez, X.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Goetzmann, C.; Le Bihan, A.-C.; Merlin, J. A.; Skovpen, K.; Van Hove, P.; Gadrat, S.; Beauceron, S.; Bernet, C.; Boudoul, G.; Bouvier, E.; Carrillo Montoya, C. A.; Chierici, R.; Contardo, D.; Courbon, B.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Lagarde, F.; Laktineh, I. B.; Lethuillier, M.; Mirabito, L.; Pequegnot, A. L.; Perries, S.; Ruiz Alvarez, J. D.; Sabes, D.; Sgandurra, L.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Toriashvili, T.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Feld, L.; Heister, A.; Kiesel, M. K.; Klein, K.; Lipinski, M.; Ostapchuk, A.; Preuten, M.; Raupach, F.; Schael, S.; Schulte, J. F.; Verlage, T.; Weber, H.; Zhukov, V.; Ata, M.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Endres, M.; Erdmann, M.; Erdweg, S.; Esch, T.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Knutzen, S.; Kreuzer, P.; Merschmeyer, M.; Meyer, A.; Millet, P.; Olschewski, M.; Padeken, K.; Papacz, P.; Pook, T.; Radziej, M.; Reithler, H.; Rieger, M.; Scheuch, F.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Cherepanov, V.; Erdogan, Y.; Flügge, G.; Geenen, H.; Geisler, M.; Hoehle, F.; Kargoll, B.; Kress, T.; Kuessel, Y.; Künsken, A.; Lingemann, J.; Nehrkorn, A.; Nowack, A.; Nugent, I. M.; Pistone, C.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Asin, I.; Bartosik, N.; Behnke, O.; Behrens, U.; Bell, A. J.; Borras, K.; Burgmeier, A.; Campbell, A.; Choudhury, S.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Dooling, S.; Dorland, T.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Flucke, G.; Gallo, E.; Garay Garcia, J.; Geiser, A.; Gizhko, A.; Gunnellini, P.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Karacheban, O.; Kasemann, M.; Katsas, P.; Kieseler, J.; Kleinwort, C.; Korol, I.; Lange, W.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Mankel, R.; Marfin, I.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Naumann-Emme, S.; Nayak, A.; Ntomari, E.; Perrey, H.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Roland, B.; Sahin, M. Ö.; Saxena, P.; Schoerner-Sadenius, T.; Schröder, M.; Seitz, C.; Spannagel, S.; Trippkewitz, K. D.; Walsh, R.; Wissing, C.; Blobel, V.; Centis Vignali, M.; Draeger, A. 
R.; Erfle, J.; Garutti, E.; Goebel, K.; Gonzalez, D.; Görner, M.; Haller, J.; Hoffmann, M.; Höing, R. S.; Junkes, A.; Klanner, R.; Kogler, R.; Kovalchuk, N.; Lapsien, T.; Lenz, T.; Marchesini, I.; Marconi, D.; Meyer, M.; Nowatschin, D.; Ott, J.; Pantaleo, F.; Peiffer, T.; Perieanu, A.; Pietsch, N.; Poehlsen, J.; Rathjens, D.; Sander, C.; Scharf, C.; Schettler, H.; Schleper, P.; Schlieckau, E.; Schmidt, A.; Schwandt, J.; Sola, V.; Stadie, H.; Steinbrück, G.; Tholen, H.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Vormwald, B.; Barth, C.; Baus, C.; Berger, J.; Böser, C.; Butz, E.; Chwalek, T.; Colombo, F.; De Boer, W.; Descroix, A.; Dierlamm, A.; Fink, S.; Frensch, F.; Friese, R.; Giffels, M.; Gilbert, A.; Haitz, D.; Hartmann, F.; Heindl, S. M.; Husemann, U.; Katkov, I.; Kornmayer, A.; Lobelle Pardo, P.; Maier, B.; Mildner, H.; Mozer, M. U.; Müller, T.; Müller, Th.; Plagge, M.; Quast, G.; Rabbertz, K.; Röcker, S.; Roscher, F.; Sieber, G.; Simonis, H. J.; Stober, F. M.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weber, M.; Weiler, T.; Williamson, S.; Wöhrmann, C.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Psallidas, A.; Topsis-Giotis, I.; Agapitos, A.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Tziaferi, E.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Loukas, N.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Strologas, J.; Bencze, G.; Hajdu, C.; Hazi, A.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Molnar, J.; Szillasi, Z.; Bartók, M.; Makovec, A.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Mal, P.; Mandal, K.; Sahoo, D. K.; Sahoo, N.; Swain, S. K.; Bansal, S.; Beri, S. B.; Bhatnagar, V.; Chawla, R.; Gupta, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, A.; Kaur, M.; Kumar, R.; Mehta, A.; Mittal, M.; Singh, J. B.; Walia, G.; Kumar, Ashok; Bhardwaj, A.; Choudhary, B. C.; Garg, R. B.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Nishu, N.; Ranjan, K.; Sharma, R.; Sharma, V.; Bhattacharya, S.; Chatterjee, K.; Dey, S.; Dutta, S.; Jain, Sa.; Majumdar, N.; Modak, A.; Mondal, K.; Mukherjee, S.; Mukhopadhyay, S.; Roy, A.; Roy, D.; Roy Chowdhury, S.; Sarkar, S.; Sharan, M.; Abdulsalam, A.; Chudasama, R.; Dutta, D.; Jha, V.; Kumar, V.; Mohanty, A. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Banerjee, S.; Bhowmik, S.; Chatterjee, R. M.; Dewanjee, R. K.; Dugad, S.; Ganguly, S.; Ghosh, S.; Guchait, M.; Gurtu, A.; Kole, G.; Kumar, S.; Mahakud, B.; Maity, M.; Majumder, G.; Mazumdar, K.; Mitra, S.; Mohanty, G. B.; Parida, B.; Sarkar, T.; Sur, N.; Sutar, B.; Wickramage, N.; Chauhan, S.; Dube, S.; Kapoor, A.; Kothekar, K.; Sharma, S.; Bakhshiansohi, H.; Behnamian, H.; Etesami, S. M.; Fahim, A.; Goldouzian, R.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Caputo, C.; Colaleo, A.; Creanza, D.; Cristella, L.; De Filippis, N.; De Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; Miniello, G.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Ranieri, A.; Selvaggi, G.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Battilana, C.; Benvenuti, A. C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Chhibra, S. S.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. 
M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Rossi, A. M.; Rovelli, T.; Siroli, G. P.; Tosi, N.; Travaglini, R.; Cappello, G.; Chiorboli, M.; Costa, S.; Di Mattia, A.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Gori, V.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Viliani, L.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Primavera, F.; Calvelli, V.; Ferro, F.; Lo Vetere, M.; Monge, M. R.; Robutti, E.; Tosi, S.; Brianza, L.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Gerosa, R.; Ghezzi, A.; Govoni, P.; Malvezzi, S.; Manzoni, R. A.; Marzocchi, B.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; Di Guida, S.; Esposito, M.; Fabozzi, F.; Iorio, A. O. M.; Lanza, G.; Lista, L.; Meola, S.; Merola, M.; Paolucci, P.; Sciacca, C.; Thyssen, F.; Azzi, P.; Bacchetta, N.; Bellato, M.; Benato, L.; Bisello, D.; Boletti, A.; Carlin, R.; Checchia, P.; Dall'Osso, M.; Dorigo, T.; Dosselli, U.; Gasparini, F.; Gasparini, U.; Gozzelino, A.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. T.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Tosi, M.; Vanini, S.; Ventura, S.; Zanetti, M.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Braghieri, A.; Magnani, A.; Montagna, P.; Ratti, S. P.; Re, V.; Riccardi, C.; Salvini, P.; Vai, I.; Vitulo, P.; Alunni Solestizi, L.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Mantovani, G.; Menichelli, M.; Saha, A.; Santocchia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fedi, G.; Foà, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Serban, A. T.; Spagnolo, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Barone, L.; Cavallari, F.; D'imperio, G.; Del Re, D.; Diemoz, M.; Gelli, S.; Jorda, C.; Longo, E.; Margaroli, F.; Meridiani, P.; Organtini, G.; Paramatti, R.; Preiato, F.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Traczyk, P.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bellan, R.; Biino, C.; Cartiglia, N.; Costa, M.; Covarelli, R.; Degano, A.; Demaria, N.; Finco, L.; Kiani, B.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Monteil, E.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Ravera, F.; Romero, A.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Belforte, S.; Candelise, V.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Gobbo, B.; La Licata, C.; Marone, M.; Schizzi, A.; Zanetti, A.; Kropivnitskaya, A.; Nam, S. K.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Kong, D. J.; Lee, S.; Oh, Y. D.; Sakharov, A.; Son, D. C.; Brochero Cifuentes, J. A.; Kim, H.; Kim, T. J.; Song, S.; Choi, S.; Go, Y.; Gyun, D.; Hong, B.; Kim, H.; Kim, Y.; Lee, B.; Lee, K.; Lee, K. S.; Lee, S.; Park, S. K.; Roh, Y.; Yoo, H. D.; Choi, M.; Kim, H.; Kim, J. H.; Lee, J. S. H.; Park, I. C.; Ryu, G.; Ryu, M. S.; Choi, Y.; Goh, J.; Kim, D.; Kwon, E.; Lee, J.; Yu, I.; Dudenas, V.; Juodagalvis, A.; Vaitkus, J.; Ahmed, I.; Ibrahim, Z. A.; Komaragiri, J. R.; Md Ali, M. A. B.; Mohamad Idris, F.; Wan Abdullah, W. A. T.; Yusli, M. 
N.; Casimiro Linares, E.; Castilla-Valdez, H.; De La Cruz-Burelo, E.; Heredia-De La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Vazquez Valencia, F.; Pedraza, I.; Salazar Ibarguen, H. A.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. A.; Khurshid, T.; Shoaib, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Brona, G.; Bunkowski, K.; Byszuk, A.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Walczak, M.; Bargassa, P.; Beirão Da Cruz E Silva, C.; Di Francesco, A.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Leonardo, N.; Lloret Iglesias, L.; Nguyen, F.; Rodrigues Antunes, J.; Seixas, J.; Toldaiev, O.; Vadruccio, D.; Varela, J.; Vischia, P.; Bunin, P.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Konoplyanikov, V.; Kozlov, G.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Savina, M.; Shmatov, S.; Shulha, S.; Skatchkov, N.; Smirnov, V.; Zarubin, A.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Kuznetsova, E.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Karneyeu, A.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Spiridonov, A.; Vlasov, E.; Zhokin, A.; Bylinkin, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Mesyats, G.; Rusakov, S. V.; Baskakov, A.; Belyaev, A.; Boos, E.; Bunichev, V.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Myagkov, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Tourtchanovitch, L.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Cirkovic, P.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; De La Cruz, B.; Delgado Peris, A.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro De Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Santaolalla, J.; Soares, M. S.; Albajar, C.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Cuevas, J.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; Palencia Cortezon, E.; Vizan Garcia, J. M.; Cabrillo, I. J.; Calderon, A.; Castiñeiras De Saa, J. R.; De Castro Manzano, P.; Fernandez, M.; Garcia-Ferrero, J.; Gomez, G.; Lopez Virto, A.; Marco, J.; Marco, R.; Martinez Rivero, C.; Matorras, F.; Piedra Gomez, J.; Rodrigo, T.; Rodríguez-Marrero, A. Y.; Ruiz-Jimeno, A.; Scodellaro, L.; Trevisani, N.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Benaglia, A.; Bendavid, J.; Benhabib, L.; Benitez, J. F.; Berruti, G. 
M.; Bloch, P.; Bocci, A.; Bonato, A.; Botta, C.; Breuker, H.; Camporesi, T.; Castello, R.; Cerminara, G.; D'Alfonso, M.; d'Enterria, D.; Dabrowski, A.; Daponte, V.; David, A.; De Gruttola, M.; De Guio, F.; De Roeck, A.; De Visscher, S.; Di Marco, E.; Dobson, M.; Dordevic, M.; Dorney, B.; du Pree, T.; Duggan, D.; Dünser, M.; Dupont, N.; Elliott-Peisert, A.; Franzoni, G.; Fulcher, J.; Funk, W.; Gigi, D.; Gill, K.; Giordano, D.; Girone, M.; Glege, F.; Guida, R.; Gundacker, S.; Guthoff, M.; Hammer, J.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kirschenmann, H.; Kortelainen, M. J.; Kousouris, K.; Krajczar, K.; Lecoq, P.; Lourenço, C.; Lucchini, M. T.; Magini, N.; Malgeri, L.; Mannelli, M.; Martelli, A.; Masetti, L.; Meijers, F.; Mersi, S.; Meschi, E.; Moortgat, F.; Morovic, S.; Mulders, M.; Nemallapudi, M. V.; Neugebauer, H.; Orfanelli, S.; Orsini, L.; Pape, L.; Perez, E.; Peruzzi, M.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pierini, M.; Piparo, D.; Racz, A.; Reis, T.; Rolandi, G.; Rovere, M.; Ruan, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Seidel, M.; Sharma, A.; Silva, P.; Simon, M.; Sphicas, P.; Steggemann, J.; Stieger, B.; Stoye, M.; Takahashi, Y.; Treille, D.; Triossi, A.; Tsirou, A.; Veres, G. I.; Wardle, N.; Wöhri, H. K.; Zagozdzinska, A.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Langenegger, U.; Renker, D.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Casal, B.; Dissertori, G.; Dittmar, M.; Donegà, M.; Eller, P.; Grab, C.; Heidegger, C.; Hits, D.; Hoss, J.; Kasieczka, G.; Lustermann, W.; Mangano, B.; Marionneau, M.; Martinez Ruiz del Arbol, P.; Masciovecchio, M.; Meister, D.; Micheli, F.; Musella, P.; Nessi-Tedaldi, F.; Pandolfi, F.; Pata, J.; Pauss, F.; Perrozzi, L.; Quittnat, M.; Rossini, M.; Schönenberger, M.; Starodumov, A.; Takahashi, M.; Tavolaro, V. R.; Theofilatos, K.; Wallny, R.; Aarrestad, T. K.; Amsler, C.; Caminada, L.; Canelli, M. F.; Chiochia, V.; De Cosa, A.; Galloni, C.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Lange, C.; Ngadiuba, J.; Pinna, D.; Rauco, G.; Robmann, P.; Ronga, F. J.; Salerno, D.; Yang, Y.; Cardaci, M.; Chen, K. H.; Doan, T. H.; Jain, Sh.; Khurana, R.; Konyushikhin, M.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Pozdnyakov, A.; Yu, S. S.; Kumar, Arun; Bartek, R.; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Chen, P. H.; Dietz, C.; Fiori, F.; Grundler, U.; Hou, W.-S.; Hsiung, Y.; Liu, Y. F.; Lu, R.-S.; Miñano Moya, M.; Petrakou, E.; Tsai, J. f.; Tzeng, Y. M.; Asavapibhop, B.; Kovitanggoon, K.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Cerci, S.; Demiroglu, Z. S.; Dozen, C.; Dumanoglu, I.; Gecit, F. H.; Girgis, S.; Gokbulut, G.; Guler, Y.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Kayis Topaksu, A.; Onengut, G.; Ozcan, M.; Ozdemir, K.; Ozturk, S.; Tali, B.; Topakli, H.; Vergili, M.; Zorbilmez, C.; Akin, I. V.; Bilin, B.; Bilmis, S.; Isildak, B.; Karapinar, G.; Yalvac, M.; Zeyrek, M.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, E. A.; Yetkin, T.; Cakir, A.; Cankocak, K.; Sen, S.; Vardarlı, F. I.; Grynyov, B.; Levchuk, L.; Sorokin, P.; Aggleton, R.; Ball, F.; Beck, L.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Meng, Z.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Seif El Nasr-Storey, S.; Senkin, S.; Smith, D.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Calligaris, L.; Cieri, D.; Cockerill, D. J. A.; Coughlan, J. 
A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Williams, T.; Worm, S. D.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Bundock, A.; Burton, D.; Casasso, S.; Citron, M.; Colling, D.; Corpe, L.; Cripps, N.; Dauncey, P.; Davies, G.; De Wit, A.; Della Negra, M.; Dunne, P.; Elwood, A.; Ferguson, W.; Futyan, D.; Hall, G.; Iles, G.; Kenzie, M.; Lane, R.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Petridis, K.; Raymond, D. M.; Richards, A.; Rose, A.; Seez, C.; Tapper, A.; Uchida, K.; Vazquez Acosta, M.; Virdee, T.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leggat, D.; Leslie, D.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Borzou, A.; Call, K.; Dittmann, J.; Hatakeyama, K.; Liu, H.; Pastika, N.; Charaf, O.; Cooper, S. I.; Henderson, C.; Rumerio, P.; Arcaro, D.; Avetisyan, A.; Bose, T.; Fantasia, C.; Gastler, D.; Lawson, P.; Rankin, D.; Richardson, C.; Rohlf, J.; St. John, J.; Sulak, L.; Zou, D.; Alimena, J.; Berry, E.; Bhattacharya, S.; Cutts, D.; Ferapontov, A.; Garabedian, A.; Hakala, J.; Heintz, U.; Laird, E.; Landsberg, G.; Mao, Z.; Narain, M.; Piperov, S.; Sagir, S.; Syarif, R.; Breedon, R.; Breto, G.; Calderon De La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Funk, G.; Gardner, M.; Ko, W.; Lander, R.; Mclean, C.; Mulhearn, M.; Pellett, D.; Pilot, J.; Ricci-Tam, F.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Wilbur, S.; Yohay, R.; Cousins, R.; Everaerts, P.; Florent, A.; Hauser, J.; Ignatenko, M.; Saltzberg, D.; Takasugi, E.; Valuev, V.; Weber, M.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Hanson, G.; Heilman, J.; Ivova Paneva, M.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Luthra, A.; Malberti, M.; Olmedo Negrete, M.; Shrinivas, A.; Wei, H.; Wimpenny, S.; Yates, B. R.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; D'Agnolo, R. T.; Derdzinski, M.; Holzner, A.; Kelley, R.; Klein, D.; Letts, J.; Macneill, I.; Olivito, D.; Padhi, S.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tadel, M.; Vartak, A.; Wasserbaech, S.; Welke, C.; Würthwein, F.; Yagil, A.; Zevi Della Porta, G.; Bradmiller-Feld, J.; Campagnari, C.; Dishaw, A.; Dutta, V.; Flowers, K.; Franco Sevilla, M.; Geffert, P.; George, C.; Golf, F.; Gouskos, L.; Gran, J.; Incandela, J.; Mccoll, N.; Mullin, S. D.; Richman, J.; Stuart, D.; Suarez, I.; West, C.; Yoo, J.; Anderson, D.; Apresyan, A.; Bornheim, A.; Bunn, J.; Chen, Y.; Duarte, J.; Mott, A.; Newman, H. B.; Pena, C.; Spiropulu, M.; Vlimant, J. R.; Xie, S.; Zhu, R. Y.; Andrews, M. B.; Azzolini, V.; Calamba, A.; Carlson, B.; Ferguson, T.; Paulini, M.; Russ, J.; Sun, M.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Ford, W. T.; Gaz, A.; Jensen, F.; Johnson, A.; Krohn, M.; Mulholland, T.; Nauenberg, U.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Chaves, J.; Chu, J.; Dittmer, S.; Eggert, N.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Rinkevicius, A.; Ryd, A.; Skinnari, L.; Soffi, L.; Sun, W.; Tan, S. M.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Weng, Y.; Wittich, P.; Abdullin, S.; Albrow, M.; Apollinari, G.; Banerjee, S.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hanlon, J.; Hare, D.; Harris, R. 
M.; Hasegawa, S.; Hirschauer, J.; Hu, Z.; Jayatilaka, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Klima, B.; Kreis, B.; Lammel, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, T.; Lopes De Sá, R.; Lykken, J.; Maeshima, K.; Marraffino, J. M.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mishra, K.; Mrenna, S.; Nahn, S.; Newman-Holmes, C.; O'Dell, V.; Pedro, K.; Prokofyev, O.; Rakness, G.; Sexton-Kennedy, E.; Soha, A.; Spalding, W. J.; Spiegel, L.; Strobbe, N.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vernieri, C.; Verzocchi, M.; Vidal, R.; Weber, H. A.; Whitbeck, A.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Carnes, A.; Carver, M.; Curry, D.; Das, S.; Field, R. D.; Furic, I. K.; Gleyzer, S. V.; Konigsberg, J.; Korytov, A.; Kotov, K.; Low, J. F.; Ma, P.; Matchev, K.; Mei, H.; Milenovic, P.; Mitselmakher, G.; Rank, D.; Rossin, R.; Shchutska, L.; Snowball, M.; Sperka, D.; Terentyev, N.; Thomas, L.; Wang, J.; Wang, S.; Yelton, J.; Hewamanage, S.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Ackert, A.; Adams, J. R.; Adams, T.; Askew, A.; Bein, S.; Bochenek, J.; Diamond, B.; Haas, J.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Khatiwada, A.; Prosper, H.; Weinberg, M.; Baarmand, M. M.; Bhopatkar, V.; Colafranceschi, S.; Hohlmann, M.; Kalakhety, H.; Noonan, D.; Roy, T.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Kurt, P.; O'Brien, C.; Sandoval Gonzalez, I. D.; Turner, P.; Varelas, N.; Wu, Z.; Zakaria, M.; Bilki, B.; Clarida, W.; Dilsiz, K.; Durgut, S.; Gandrajula, R. P.; Haytmyradov, M.; Khristenko, V.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Snyder, C.; Tiras, E.; Wetzel, J.; Yi, K.; Anderson, I.; Barnett, B. A.; Blumenfeld, B.; Eminizer, N.; Fehling, D.; Feng, L.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Osherson, M.; Roskes, J.; Sady, A.; Sarica, U.; Swartz, M.; Xiao, M.; Xin, Y.; You, C.; Baringer, P.; Bean, A.; Benelli, G.; Bruner, C.; Kenny, R. P.; Majumder, D.; Malek, M.; Murray, M.; Sanders, S.; Stringer, R.; Wang, Q.; Ivanov, A.; Kaadze, K.; Khalil, S.; Makouski, M.; Maravin, Y.; Mohammadi, A.; Saini, L. K.; Skhirtladze, N.; Toda, S.; Lange, D.; Rebassoo, F.; Wright, D.; Anelli, C.; Baden, A.; Baron, O.; Belloni, A.; Calvert, B.; Eno, S. C.; Ferraioli, C.; Gomez, J. A.; Hadley, N. J.; Jabeen, S.; Kellogg, R. G.; Kolberg, T.; Kunkle, J.; Lu, Y.; Mignerey, A. C.; Shin, Y. H.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Barbieri, R.; Baty, A.; Bierwagen, K.; Brandt, S.; Busza, W.; Cali, I. A.; Demiragli, Z.; Di Matteo, L.; Gomez Ceballos, G.; Goncharov, M.; Gulhan, D.; Iiyama, Y.; Innocenti, G. M.; Klute, M.; Kovalskyi, D.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Marini, A. C.; Mcginn, C.; Mironov, C.; Narayanan, S.; Niu, X.; Paus, C.; Roland, C.; Roland, G.; Salfeld-Nebgen, J.; Stephans, G. S. F.; Sumorok, K.; Varma, M.; Velicanu, D.; Veverka, J.; Wang, J.; Wang, T. W.; Wyslouch, B.; Yang, M.; Zhukova, V.; Dahmes, B.; Evans, A.; Finkel, A.; Gude, A.; Hansen, P.; Kalafut, S.; Kao, S. C.; Klapoetke, K.; Kubota, Y.; Lesko, Z.; Mans, J.; Nourbakhsh, S.; Ruckstuhl, N.; Rusack, R.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Fangmeier, C.; Gonzalez Suarez, R.; Kamalieddin, R.; Knowlton, D.; Kravchenko, I.; Meier, F.; Monroy, J.; Ratnikov, F.; Siado, J. 
E.; Snow, G. R.; Alyari, M.; Dolen, J.; George, J.; Godshalk, A.; Harrington, C.; Iashvili, I.; Kaisen, J.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Roozbahani, B.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Hortiangtham, A.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Teixeira De Lima, R.; Trocino, D.; Wang, R.-J.; Wood, D.; Zhang, J.; Hahn, K. A.; Kubik, A.; Mucia, N.; Odell, N.; Pollack, B.; Schmitt, M.; Stoynev, S.; Sung, K.; Trovato, M.; Velasco, M.; Brinkerhoff, A.; Dev, N.; Hildreth, M.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Marinelli, N.; Meng, F.; Mueller, C.; Musienko, Y.; Planer, M.; Reinsvold, A.; Ruchti, R.; Smith, G.; Taroni, S.; Valls, N.; Wayne, M.; Wolf, M.; Woodard, A.; Antonelli, L.; Brinson, J.; Bylsma, B.; Durkin, L. S.; Flowers, S.; Hart, A.; Hill, C.; Hughes, R.; Ji, W.; Ling, T. Y.; Liu, B.; Luo, W.; Puigh, D.; Rodenburg, M.; Winer, B. L.; Wulsin, H. W.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Koay, S. A.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Palmer, C.; Piroué, P.; Saka, H.; Stickland, D.; Tully, C.; Zuranski, A.; Malik, S.; Barker, A.; Barnes, V. E.; Benedetti, D.; Bortoletto, D.; Gutay, L.; Jha, M. K.; Jones, M.; Jung, A. W.; Jung, K.; Miller, D. H.; Neumeister, N.; Radburn-Smith, B. C.; Shi, X.; Shipsey, I.; Silvers, D.; Sun, J.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Chen, Z.; Ecklund, K. M.; Geurts, F. J. M.; Guilbaud, M.; Li, W.; Michlin, B.; Northup, M.; Padley, B. P.; Redjimi, R.; Roberts, J.; Rorie, J.; Tu, Z.; Zabel, J.; Betchart, B.; Bodek, A.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Galanti, M.; Garcia-Bellido, A.; Han, J.; Harel, A.; Hindrichs, O.; Khukhunaishvili, A.; Petrillo, G.; Tan, P.; Verzetti, M.; Arora, S.; Chou, J. P.; Contreras-Campana, C.; Contreras-Campana, E.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Hughes, E.; Kaplan, S.; Kunnawalkam Elayavalli, R.; Lath, A.; Nash, K.; Panwalkar, S.; Park, M.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Foerster, M.; Riley, G.; Rose, K.; Spanier, S.; Bouhali, O.; Castaneda Hernandez, A.; Celik, A.; Dalchenko, M.; De Mattia, M.; Delgado, A.; Dildick, S.; Eusebi, R.; Gilmore, J.; Huang, T.; Kamon, T.; Krutelyov, V.; Mueller, R.; Osipenkov, I.; Pakhotin, Y.; Patel, R.; Perloff, A.; Rose, A.; Safonov, A.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Kunori, S.; Lamichhane, K.; Lee, S. W.; Libeiro, T.; Undleeb, S.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Janjam, R.; Johns, W.; Maguire, C.; Mao, Y.; Melo, A.; Ni, H.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Xu, Q.; Arenton, M. W.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Lin, C.; Neu, C.; Sinthuprasith, T.; Sun, X.; Wang, Y.; Wolfe, E.; Wood, J.; Xia, F.; Clarke, C.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sturdy, J.; Belknap, D. A.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Dodd, L.; Duric, S.; Gomber, B.; Grothe, M.; Hall-Wilton, R.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Levine, A.; Long, K.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ruggles, T.; Sarangi, T.; Savin, A.; Sharma, A.; Smith, N.; Smith, W. H.; Taylor, D.; Woods, N.

    2016-08-01

    A search is performed for a new resonance decaying into a lighter resonance and a Z boson. Two channels are studied, targeting the decay of the lighter resonance into either a pair of oppositely charged τ leptons or a bb̄ pair. The Z boson is identified via its decays to electrons or muons. The search exploits data collected by the CMS experiment at a centre-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 19.8 fb⁻¹. No significant deviations are observed from the standard model expectation and limits are set on production cross sections and parameters of two-Higgs-doublet models.

  16. Cloud Cover

    ERIC Educational Resources Information Center

    Schaffhauser, Dian

    2012-01-01

    This article features a major statewide initiative in North Carolina that is showing how a consortium model can minimize risks for districts and help them exploit the advantages of cloud computing. Edgecombe County Public Schools in Tarboro, North Carolina, intends to exploit a major cloud initiative being refined in the state and involving every…

  17. Benefiting from networks by occupying central positions: an empirical study of the Taiwan health care industry.

    PubMed

    Peng, Tzu-Ju Ann; Lo, Fang-Yi; Lin, Chin-Shien; Yu, Chwo-Ming Joseph

    2006-01-01

    At issue is whether network resources imply resources available to all members of a network or only to those occupying structurally central positions within it. In this article, two conceptual models of the firm, the additive and the interaction model, are empirically tested regarding the impact of hospital resources, network resources, and centrality on hospital performance in the Taiwan health care industry. The results demonstrate that: (1) in the additive model, hospital resources and centrality independently affect performance, whereas network resources do not; and (2) no evidence supports an interaction effect of centrality and resources on performance. Based on our findings from Taiwanese practice, we suggest that when adopting interorganizational strategies, hospitals should clearly identify which important resources reside in-house and which are transferred from network partners. How hospitals access resources from central positions is more important than what network resources they can acquire from networks. Hospitals should improve performance by exploiting their in-house resources rather than obtaining network resources externally. In addition, hospitals should not only invest in hospital resources for better performance but should also move to central positions in networks to benefit from collaborations.
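
    To make the two specifications concrete, here is a hedged sketch of how an additive and an interaction regression model of this kind can be compared; the variable names and synthetic data are placeholders, not the study's data.

        # Additive model (resources + centrality enter independently) versus
        # interaction model (centrality moderates network resources).
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 200
        df = pd.DataFrame({
            "hospital_resources": rng.normal(size=n),
            "network_resources": rng.normal(size=n),
            "centrality": rng.normal(size=n),
        })
        df["performance"] = (0.5 * df.hospital_resources
                             + 0.3 * df.centrality
                             + rng.normal(scale=0.5, size=n))

        additive = smf.ols("performance ~ hospital_resources + network_resources"
                           " + centrality", data=df).fit()
        interaction = smf.ols("performance ~ hospital_resources + network_resources"
                              " + centrality + centrality:network_resources",
                              data=df).fit()
        print(additive.params, interaction.params, sep="\n")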

  18. Querying and Extracting Timeline Information from Road Traffic Sensor Data

    PubMed Central

    Imawan, Ardi; Indikawati, Fitri Indra; Kwon, Joonho; Rao, Praveen

    2016-01-01

    The escalation of traffic congestion in urban cities has urged many countries to use intelligent transportation system (ITS) centers to collect historical traffic sensor data from multiple heterogeneous sources. By analyzing historical traffic data, we can obtain valuable insights into traffic behavior. Many existing applications have been proposed, but with limited analysis results because of an inability to cope with several types of analytical queries. In this paper, we propose the QET (querying and extracting timeline information) system—a novel analytical query processing method based on a timeline model for road traffic sensor data. To address query performance, we build a TQ-index (timeline query-index) that exploits spatio-temporal features of timeline modeling. We also propose an intuitive timeline visualization method to display congestion events obtained from specified query parameters. In addition, we demonstrate the benefit of our system through a performance evaluation using a Busan ITS dataset and a Seattle freeway dataset. PMID:27563900
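
    As an illustration of the timeline idea, the sketch below extracts congestion events (intervals of sustained low speed) from a stream of traffic sensor readings; the threshold and data are invented for the example, not taken from the QET system.

        # Turn (timestamp, speed) readings into congestion events: maximal
        # intervals during which speed stays below a threshold.
        from datetime import datetime, timedelta

        def congestion_events(readings, threshold_kmh=20.0):
            """readings: iterable of (datetime, speed) sorted by time."""
            events, start, last = [], None, None
            for t, speed in readings:
                if speed < threshold_kmh:
                    start = start or t       # open an event if none is open
                    last = t
                elif start is not None:
                    events.append((start, last))
                    start = None
            if start is not None:            # close an event still open at the end
                events.append((start, last))
            return events

        t0 = datetime(2016, 1, 1, 8, 0)
        readings = [(t0 + timedelta(minutes=5 * i), s)
                    for i, s in enumerate([60, 18, 15, 17, 55, 12, 58])]
        print(congestion_events(readings))   # two congestion intervals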

  19. Composing problem solvers for simulation experimentation: a case study on steady state estimation.

    PubMed

    Leye, Stefan; Ewald, Roland; Uhrmacher, Adelinde M

    2014-01-01

    Simulation experiments involve various sub-tasks, e.g., parameter optimization, simulation execution, or output data analysis. Many algorithms can be applied to such tasks, but their performance depends on the given problem. Steady state estimation in systems biology is a typical example for this: several estimators have been proposed, each with its own (dis-)advantages. Experimenters, therefore, must choose from the available options, even though they may not be aware of the consequences. To support those users, we propose a general scheme to aggregate such algorithms to so-called synthetic problem solvers, which exploit algorithm differences to improve overall performance. Our approach subsumes various aggregation mechanisms, supports automatic configuration from training data (e.g., via ensemble learning or portfolio selection), and extends the plugin system of the open source modeling and simulation framework James II. We show the benefits of our approach by applying it to steady state estimation for cell-biological models.
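
    A minimal sketch of the aggregation idea follows: candidate algorithms are benchmarked on training problems and a portfolio picks the empirically best performer per problem class. The estimators and problem classes are toy stand-ins, not James II components.

        # Portfolio selection over candidate steady-state estimators.
        import statistics
        from collections import defaultdict

        def mean_last_half(series):          # crude steady-state estimator 1
            return statistics.mean(series[len(series) // 2:])

        def mean_last_tenth(series):         # crude steady-state estimator 2
            return statistics.mean(series[-max(1, len(series) // 10):])

        CANDIDATES = {"last_half": mean_last_half, "last_tenth": mean_last_tenth}

        def train_portfolio(training_set):
            """training_set: list of (problem_class, series, true_value)."""
            errors = defaultdict(lambda: defaultdict(list))
            for cls, series, truth in training_set:
                for name, algo in CANDIDATES.items():
                    errors[cls][name].append(abs(algo(series) - truth))
            # pick the lowest-mean-error algorithm per problem class
            return {cls: min(errs, key=lambda n: statistics.mean(errs[n]))
                    for cls, errs in errors.items()}

        def solve(portfolio, cls, series):
            return CANDIDATES[portfolio[cls]](series)

        training = [("fast_mixing", [5, 3, 2, 2, 2, 2], 2.0),
                    ("slow_mixing", [9, 8, 7, 6, 5, 4.5], 4.5)]
        portfolio = train_portfolio(training)
        print(portfolio, solve(portfolio, "fast_mixing", [6, 4, 3, 3, 3, 3]))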

  20. Statistical iterative material image reconstruction for spectral CT using a semi-empirical forward model

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.

    2017-03-01

    In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.
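
    The calibration-then-inversion idea can be sketched as follows: low-order polynomials in the material line integrals are fitted to calibration measurements and then inverted per ray by least squares. The polynomial basis, coefficients and data are illustrative assumptions; the paper's actual forward model and the SIR optimization with separable surrogates are not reproduced here.

        # Semi-empirical projection-domain material decomposition, toy version.
        import numpy as np
        from scipy.optimize import least_squares

        def design(A1, A2):
            # polynomial basis: 1, A1, A2, A1^2, A1*A2, A2^2
            return np.stack([np.ones_like(A1), A1, A2,
                             A1**2, A1 * A2, A2**2], axis=-1)

        # calibration: known thickness combinations, measured log-signals per bin
        rng = np.random.default_rng(1)
        A1c, A2c = np.meshgrid(np.linspace(0, 3, 7), np.linspace(0, 3, 7))
        A1c, A2c = A1c.ravel(), A2c.ravel()
        true_c = np.array([[0.0, 0.50, 0.20, -0.010, 0.005, -0.004],   # bin 1
                           [0.0, 0.30, 0.35, -0.008, 0.004, -0.012]])  # bin 2
        meas = design(A1c, A2c) @ true_c.T \
               + rng.normal(scale=1e-3, size=(A1c.size, 2))
        coeffs, *_ = np.linalg.lstsq(design(A1c, A2c), meas, rcond=None)

        # decomposition: recover the line integrals (A1, A2) for a new ray
        def residual(x, y):
            return design(np.array(x[0]), np.array(x[1])) @ coeffs - y

        y_new = design(np.array(1.2), np.array(0.7)) @ coeffs
        sol = least_squares(residual, x0=[0.5, 0.5], args=(y_new,))
        print(sol.x)   # approximately [1.2, 0.7]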

  1. Coupled ecosystem/supply chain modelling of fish products from sea to shelf: the Peruvian anchoveta case.

    PubMed

    Avadí, Angel; Fréon, Pierre; Tam, Jorge

    2014-01-01

    Sustainability assessment of food supply chains is relevant for global sustainable development. A framework is proposed for analysing fishfood (fish products for direct human consumption) supply chains with local or international scopes. It combines a material flow model (including an ecosystem dimension) of the supply chains, calculation of sustainability indicators (environmental, socio-economic, nutritional), and finally multi-criteria comparison of alternative supply chains (e.g. fates of landed fish) and future exploitation scenarios. The Peruvian anchoveta fishery is the starting point for various local and global supply chains, especially via reduction of anchoveta into fishmeal and oil, used worldwide as a key input in livestock and fish feeds. The Peruvian anchoveta supply chains are described, and the proposed methodology is used to model them. Three scenarios were explored: status quo of fish exploitation (Scenario 1), increase in anchoveta landings for food (Scenario 2), and radical decrease in total anchoveta landings to allow other fish stocks to prosper (Scenario 3). It was found that Scenario 2 provided the best balance of sustainability improvements among the three scenarios, but further refinement of the assessment is recommended. In the long term, the best opportunities for improving the environmental and socio-economic performance of Peruvian fisheries are related to sustainability-improving management and policy changes affecting the reduction industry. Our approach provides the tools and quantitative results to identify these best improvement opportunities.
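
    The final multi-criteria comparison step can be illustrated with a simple weighted-sum scoring of the three scenarios; the indicator values and weights below are invented placeholders, not the study's results.

        # Weighted-sum multi-criteria ranking of exploitation scenarios.
        import numpy as np

        scenarios = ["Scenario 1 (status quo)", "Scenario 2 (more food use)",
                     "Scenario 3 (reduced landings)"]
        # rows: scenarios; columns: environmental, socio-economic, nutritional
        # indicators, normalized to [0, 1] with higher = better
        scores = np.array([[0.4, 0.6, 0.3],
                           [0.6, 0.7, 0.8],
                           [0.8, 0.3, 0.4]])
        weights = np.array([0.4, 0.3, 0.3])   # criterion weights, sum to 1

        overall = scores @ weights
        for name, s in sorted(zip(scenarios, overall), key=lambda p: -p[1]):
            print(f"{name}: {s:.2f}")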

  2. Coupled Ecosystem/Supply Chain Modelling of Fish Products from Sea to Shelf: The Peruvian Anchoveta Case

    PubMed Central

    Avadí, Angel; Fréon, Pierre; Tam, Jorge

    2014-01-01

    Sustainability assessment of food supply chains is relevant for global sustainable development. A framework is proposed for analysing fishfood (fish products for direct human consumption) supply chains with local or international scopes. It combines a material flow model (including an ecosystem dimension) of the supply chains, calculation of sustainability indicators (environmental, socio-economic, nutritional), and finally multi-criteria comparison of alternative supply chains (e.g. fates of landed fish) and future exploitation scenarios. The Peruvian anchoveta fishery is the starting point for various local and global supply chains, especially via reduction of anchoveta into fishmeal and oil, used worldwide as a key input in livestock and fish feeds. The Peruvian anchoveta supply chains are described, and the proposed methodology is used to model them. Three scenarios were explored: status quo of fish exploitation (Scenario 1), increase in anchoveta landings for food (Scenario 2), and radical decrease in total anchoveta landings to allow other fish stocks to prosper (Scenario 3). It was found that Scenario 2 provided the best balance of sustainability improvements among the three scenarios, but further refinement of the assessment is recommended. In the long term, the best opportunities for improving the environmental and socio-economic performance of Peruvian fisheries are related to sustainability-improving management and policy changes affecting the reduction industry. Our approach provides the tools and quantitative results to identify these best improvement opportunities. PMID:25003196

  3. Balancing the Budget through Social Exploitation: Why Hard Times Are Even Harder for Some

    PubMed Central

    Tropman, John; Nicklett, Emily

    2013-01-01

    In all societies needs and wants regularly exceed resources. Thus societies are always in deficit; demand always exceeds supply and “balancing the budget” is a constant social problem. To make matters somewhat worse, research suggests that need- and want-fulfillment tends to further stimulate the cycle of want-seeking rather than satiating desire. Societies use various resource-allocation mechanisms, including price, to cope with gaps between wants and resources. Social exploitation is a second mechanism, securing labor from population segments that can be coerced or convinced to perform necessary work for free or at below-market compensation. Using practical examples, this article develops a theoretical framework for understanding social exploitation. It then offers case examples of how different segments of the population emerge as exploited groups in the United States, due to changes in social policies. These exploitative processes have been exacerbated and accelerated by the economic downturn that began in 2007. PMID:23936753

  4. Misreporting behaviour in iterated prisoner's dilemma game with combined trust strategy

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Zhang, Bin; Wu, Hua-qing

    2015-01-01

    Effects of agents' misreporting behaviour on system cooperation are studied in a multi-agent iterated prisoner's dilemma game. Agents adopting a combined trust strategy (denoted CTS) are classified into three groups: honest CTS, positive-reporting CTS, and negative-reporting CTS. The differences in cooperation frequency and pay-off are compared across three systems: a system with only honest CTS, a system with honest and positive-reporting CTS, and a system with honest and negative-reporting CTS. Furthermore, we investigate the effects of misreporting behaviour on an exploiter adopting an exploiting strategy (denoted EXPL) in a system with two CTSs and one EXPL. Finally, numerical simulations are performed to understand the effects of misreporting behaviour on CTS. The results reveal that positive-reporting behaviour can strengthen system cooperation, while negative-reporting behaviour cannot. When an EXPL exists in a system, positive-reporting behaviour helps the exploiter reduce its exploiting cost and encourages agents to adopt the exploiting strategy, but hurts other agents' interests.
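
    A toy version of the setup is sketched below: trust agents decide whether to cooperate based on (possibly biased) reported cooperation rates, so that positive or negative reporting shifts the cooperation frequency and pay-off. The payoffs, threshold, noise level and reporting rule are simplified assumptions, not the paper's exact CTS.

        # Iterated prisoner's dilemma with honest, positive- and negative-reporting
        # agents; reports are observed cooperation rates shifted by a bias.
        import random
        random.seed(42)

        PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
                  ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
        THRESHOLD, NOISE = 0.8, 0.1   # cooperation threshold, observation noise

        class Agent:
            def __init__(self, bias=0.0):      # bias > 0: positive-reporting,
                self.bias = bias               # bias < 0: negative-reporting
                self.obs = []                  # noisy observations of the partner

            def report(self):                  # (mis)reported cooperation rate
                rate = sum(self.obs) / len(self.obs) if self.obs else 1.0
                return min(1.0, max(0.0, rate + self.bias))

            def move(self):
                return "C" if self.report() >= THRESHOLD else "D"

        def play(a, b, rounds=500):
            coop = pay = 0
            for _ in range(rounds):
                ma, mb = a.move(), b.move()
                # each agent records a noisy observation of the partner's move
                a.obs.append(int((mb == "C") ^ (random.random() < NOISE)))
                b.obs.append(int((ma == "C") ^ (random.random() < NOISE)))
                coop += (ma == "C") + (mb == "C")
                pay += sum(PAYOFF[(ma, mb)])
            return coop / (2 * rounds), pay / (2 * rounds)

        for label, bias in [("honest", 0.0), ("positive", 0.3), ("negative", -0.3)]:
            freq, avg_pay = play(Agent(bias), Agent(bias))
            print(f"{label:8s} cooperation {freq:.2f}, mean pay-off {avg_pay:.2f}")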

  5. How to do research fairly in an unjust world.

    PubMed

    Ballantyne, Angela J

    2010-06-01

    International research, sponsored by for-profit companies, is regularly criticised as unethical on the grounds that it exploits research subjects in developing countries. Many commentators agree that exploitation occurs when the benefits of cooperative activity are unfairly distributed between the parties. To determine whether international research is exploitative we therefore need an account of fair distribution. Procedural accounts of fair bargaining have been popular solutions to this problem, but I argue that they are insufficient to protect against exploitation. I argue instead that a maximin principle of fair distribution provides a more compelling normative account of fairness in relationships characterised by extreme vulnerability and inequality of bargaining potential between the parties. A global tax on international research would provide a mechanism for implementing the maximin account of fair benefits. This model has the capacity to ensure fair benefits and thereby prevent exploitation in international research.

  6. Optimizing the stimulus presentation paradigm design for the P300-based brain-computer interface using performance prediction.

    PubMed

    Mainsah, B O; Reeves, G; Collins, L M; Throckmorton, C S

    2017-08-01

    The role of a brain-computer interface (BCI) is to discern a user's intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional row-column paradigm. By accounting for refractory effects, an information-theoretic approach can be exploited to significantly improve BCI performance across a wide range of performance levels.
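
    The rate-reliability trade-off described above can be made concrete with the standard Wolpaw information-transfer-rate formula for an N-choice speller: a faster paradigm only pays off if the accuracy it sacrifices to refractory effects does not eat the gain in selections per minute. The numbers below are illustrative, not the paper's results.

        # Wolpaw ITR for an N-choice speller, in bits per selection.
        import math

        def bits_per_selection(n, p):
            """log2(N) + P log2 P + (1 - P) log2((1 - P)/(N - 1))."""
            if p >= 1.0:
                return math.log2(n)
            return (math.log2(n) + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))

        # 36-character speller: raising speed at the cost of accuracy can lower
        # the effective communication rate
        for sel_per_min, acc in [(4, 0.95), (6, 0.85), (8, 0.70)]:
            rate = sel_per_min * bits_per_selection(36, acc)
            print(f"{sel_per_min} sel/min at P={acc:.2f}: {rate:5.1f} bits/min")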

  7. Optimizing the stimulus presentation paradigm design for the P300-based brain-computer interface using performance prediction

    NASA Astrophysics Data System (ADS)

    Mainsah, B. O.; Reeves, G.; Collins, L. M.; Throckmorton, C. S.

    2017-08-01

    Objective. The role of a brain-computer interface (BCI) is to discern a user’s intended message or action by extracting and decoding relevant information from brain signals. Stimulus-driven BCIs, such as the P300 speller, rely on detecting event-related potentials (ERPs) in response to a user attending to relevant or target stimulus events. However, this process is error-prone because the ERPs are embedded in noisy electroencephalography (EEG) data, representing a fundamental problem in communication of the uncertainty in the information that is received during noisy transmission. A BCI can be modeled as a noisy communication system and an information-theoretic approach can be exploited to design a stimulus presentation paradigm to maximize the information content that is presented to the user. However, previous methods that focused on designing error-correcting codes failed to provide significant performance improvements due to underestimating the effects of psycho-physiological factors on the P300 ERP elicitation process and a limited ability to predict online performance with their proposed methods. Maximizing the information rate favors the selection of stimulus presentation patterns with increased target presentation frequency, which exacerbates refractory effects and negatively impacts performance within the context of an oddball paradigm. An information-theoretic approach that seeks to understand the fundamental trade-off between information rate and reliability is desirable. Approach. We developed a performance-based paradigm (PBP) by tuning specific parameters of the stimulus presentation paradigm to maximize performance while minimizing refractory effects. We used a probabilistic-based performance prediction method as an evaluation criterion to select a final configuration of the PBP. Main results. With our PBP, we demonstrate statistically significant improvements in online performance, both in accuracy and spelling rate, compared to the conventional row-column paradigm. Significance. By accounting for refractory effects, an information-theoretic approach can be exploited to significantly improve BCI performance across a wide range of performance levels.

  8. Life Cycle Assessment of the MBT plant in Ano Liossia, Athens, Greece

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abeliotis, Konstadinos, E-mail: kabeli@hua.gr; Kalogeropoulos, Alexandros; Lasaridi, Katia

    2012-01-15

    Highlights: • We model the operation of an MBT plant in Greece based on LCA. • We compare four different MBT operating scenarios (among them and with landfilling). • Even the current operation of the MBT plant is preferable to landfilling. • Utilization of the MBT compost and metals generates the most environmental gains. • Thermal exploitation of RDF improves further the environmental performance of the plant. - Abstract: The aim of this paper is the application of Life Cycle Assessment to the operation of the MBT facility of Ano Liossia in the region of Attica in Greece. The region of Attica is home to almost half the population of Greece and the management of its waste is a major issue. In order to explicitly analyze the operation of the MBT plant, five scenarios were generated. Actual operation data of the MBT plant for the year 2008 were provided by the region of Attica and the LCA modeling was performed via the SimaPro 5.1 software while impact assessment was performed utilizing the Eco-indicator'99 method. The results of our analysis indicate that even the current operation of the MBT plant is preferable to landfilling. Among the scenarios of MBT operation, the one with complete utilization of the MBT outputs, i.e. compost, RDF, ferrous and non-ferrous metals, is the one that generates the most environmental gains. Our analysis indicates that the exploitation of RDF via incineration is the key factor towards improving the environmental performance of the MBT plant. Our findings provide a quantitative understanding of the MBT plant. Interpretation of results showed that proper operation of modern waste management systems can lead to substantial reduction of environmental impacts and savings of resources.

  9. The use of the Finite Element method for the earthquakes modelling in different geodynamic environments

    NASA Astrophysics Data System (ADS)

    Castaldo, Raffaele; Tizzani, Pietro

    2016-04-01

    Many numerical models have been developed to simulate the deformation and stress changes associated with the faulting process, an important topic in fracture mechanics. In the proposed study, we investigate the impact of the deep fault geometry and tectonic setting on the co-seismic ground deformation pattern associated with different earthquake phenomena. We exploit structural-geological data in a Finite Element environment through an optimization procedure. In this framework, we model the failure processes in a physical mechanical scenario to evaluate the kinematics associated with the Mw 6.1 L'Aquila 2009 earthquake (Italy), the Mw 5.9 Ferrara and Mw 5.8 Mirandola 2012 earthquakes (Italy) and the Mw 8.3 Gorkha 2015 earthquake (Nepal). These seismic events are representative of different tectonic scenarios: normal, reverse and thrust faulting processes, respectively. In order to simulate the kinematics of the analyzed natural phenomena, we assume, under the plane stress approximation (a state of stress in which the normal stress σz and the shear stresses σxz and σyz, directed perpendicular to the x-y plane, are assumed to be zero), linear elastic behavior of the involved media. The performed finite element procedure consists of two stages: (i) compacting under the weight of the rock successions (gravity loading), until the deformation model reaches a stable equilibrium; (ii) the co-seismic stage, which simulates, through a distributed slip along the active fault, the released stresses. To constrain the model solutions, we exploit the DInSAR deformation velocity maps retrieved from satellite data acquired by old and new generation sensors, such as ENVISAT, RADARSAT-2 and SENTINEL-1A, encompassing the studied earthquakes. More specifically, we first generate several 2D forward mechanical models and then compare them with the recorded ground deformation fields in order to select the best boundary settings and parameters. Finally, the performed multi-parametric finite element models allow us to verify the effect of the crustal structures on the ground deformation and to evaluate the stress-drop associated with the studied earthquakes on the surrounding structures.
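
    In place of the full finite element workflow, the sketch below illustrates the idea of constraining a fault model with DInSAR-like data, using the classical 2D antiplane screw-dislocation solution u(x) = (s/π)·arctan(x/D) (Savage and Burford, 1973) as the forward model and recovering slip and locking depth by nonlinear least squares. All values are synthetic.

        # Fit fault slip and locking depth to synthetic surface displacements.
        import numpy as np
        from scipy.optimize import curve_fit

        def forward(x, slip, depth):
            """Fault-parallel surface displacement for a buried strike-slip fault."""
            return (slip / np.pi) * np.arctan2(x, depth)

        rng = np.random.default_rng(7)
        x = np.linspace(-50e3, 50e3, 200)          # distance from the fault (m)
        observed = forward(x, slip=1.2, depth=12e3) \
                   + rng.normal(0, 0.01, x.size)   # synthetic "DInSAR" profile

        params, cov = curve_fit(forward, x, observed, p0=[1.0, 10e3])
        print(f"estimated slip = {params[0]:.2f} m, "
              f"locking depth = {params[1] / 1e3:.1f} km")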

  10. SMART: A Spatially Explicit Bio-Economic Model for Assessing and Managing Demersal Fisheries, with an Application to Italian Trawlers in the Strait of Sicily

    PubMed Central

    Russo, Tommaso; Parisi, Antonio; Garofalo, Germana; Gristina, Michele; Cataudella, Stefano; Fiorentino, Fabio

    2014-01-01

    Management of catches, effort and exploitation pattern are considered the most effective measures to control fishing mortality and ultimately ensure productivity and sustainability of fisheries. Despite growing concerns about the spatial dimension of fisheries, the distribution of resources and fishing effort in space is seldom considered in assessment and management processes. Here we propose SMART (Spatial MAnagement of demersal Resources for Trawl fisheries), a tool for assessing bio-economic feedback in different management scenarios. SMART combines information from different tasks gathered within the European Data Collection Framework on fisheries and is composed of: 1) spatial models of fishing effort, environmental characteristics and distribution of demersal resources; 2) an Artificial Neural Network which captures the relationships among these aspects in a spatially explicit way and uses them to predict resource abundances; 3) a deterministic module which analyzes the size structure of catches and the associated revenues, according to different spatially-based management scenarios. SMART is applied to the demersal fishery in the Strait of Sicily, one of the most productive fisheries of the Mediterranean Sea. Three of the main target species are used as proxies for the whole range exploited by trawlers. After training, SMART is used to evaluate different management scenarios, including spatial closures, using a simulation approach that mimics the recent exploitation patterns. Results show good model performance, with noteworthy coherence and reliability of outputs across the different components. Among others, the main finding is that a partial improvement in resource conditions can be achieved by means of nursery closures, even if the overall fishing effort in the area remains stable. Accordingly, a series of strategically designed trawling closure areas could significantly improve the resource conditions of demersal fisheries in the Strait of Sicily, while also supporting sustainable economic returns for fishermen, provided the closures are not applied simultaneously for different species. PMID:24465971
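
    The neural component can be illustrated with a small feed-forward regressor that learns abundance from effort and environmental covariates; the features and data below are synthetic placeholders, not SMART's trained network.

        # Toy analogue of the abundance-prediction network in a spatial grid.
        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n = 2000
        X = np.column_stack([
            rng.uniform(0, 1, n),      # normalized fishing effort in the cell
            rng.uniform(0, 200, n),    # depth (m)
            rng.uniform(13, 26, n),    # sea-surface temperature (deg C)
        ])
        y = 50 - 30 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 2, n)  # toy abundance

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        net = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(32, 16),
                                         max_iter=2000, random_state=0))
        net.fit(X_tr, y_tr)
        print("R^2 on held-out cells:", net.score(X_te, y_te))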

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flores-Tlalpa, A.; Montano, J.; Ramirez-Zavaleta, F.

    We perform a complete calculation at the one-loop level for the Zggg and Z′ggg couplings in the context of the minimal 331 model, which predicts the existence of a new Z′ gauge boson and new exotic quarks. Bose symmetry is exploited to write a compact and manifestly SU_C(3)-invariant vertex function for the Vggg (V = Z, Z′) coupling. Previous results on the Z → ggg decay in the standard model are reproduced. It is found that this decay is insensitive to the effects of the new exotic quarks. This is in contrast with the Z′ → ggg decay, which is sensitive to both the standard model and exotic quarks, and whose branching ratio is larger than that of the Z → ggg transition by about a factor of 4.

  12. A reactive transport model for the quantification of risks induced by groundwater heat pump systems in urban aquifers

    NASA Astrophysics Data System (ADS)

    García-Gil, Alejandro; Epting, Jannis; Ayora, Carlos; Garrido, Eduardo; Vázquez-Suñé, Enric; Huggenberger, Peter; Gimenez, Ana Cristina

    2016-11-01

    Shallow geothermal resource exploitation through the use of groundwater heat pump systems not only has hydraulic and thermal effects on the environment but also induces physicochemical changes that can compromise the operability of installations. This study focuses on chemical clogging and dissolution-induced subsidence processes observed during the geothermal re-injection of pumped groundwater into an urban aquifer. To explain these phenomena, two transient reactive transport models of a groundwater heat pump installation in an alluvial aquifer were used to reproduce groundwater-solid matrix interactions occurring in the surrounding aquifer environment during system operation. The models couple groundwater flow, heat and solute transport together with chemical reactions. In these models, the permeability distribution in space changes with precipitation-dissolution reactions over time. The simulations allowed us to estimate the calcite precipitation rates and porosity variations over space and time as a function of the hydraulic gradients existing in the aquifer as well as the intensity of CO2 exchanges with the atmosphere. The results obtained from the numerical model show how CO2 exsolution processes occurring during groundwater reinjection into the aquifer, together with calcite precipitation, are related to hydraulic efficiency losses in exploitation systems. Finally, the performance of reinjection wells was evaluated over time according to different scenarios until the systems were fully obstructed. Our simulations also show a reduction in hydraulic conductivity that forces re-injected water to flow downwards, thereby enhancing the dissolution of the evaporitic bedrock and producing subsidence that can ultimately result in a dramatic collapse of the injection well infrastructure.
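
    The clogging feedback at the core of such models can be sketched with a simple loop in which calcite precipitation reduces porosity and permeability is updated through a Kozeny-Carman relation; the rates and constants are illustrative assumptions, not calibrated values.

        # Porosity loss from calcite precipitation, with Kozeny-Carman feedback
        # on permeability, iterated until the injection zone is effectively plugged.
        phi0, k0 = 0.25, 1e-11      # initial porosity (-) and permeability (m^2)
        precip_rate = 1e-4          # porosity lost per day to calcite precipitation
        phi, k, day = phi0, k0, 0

        while k > 0.01 * k0:        # stop when 99% of permeability is lost
            phi -= precip_rate
            # Kozeny-Carman scaling of permeability with porosity
            k = k0 * (phi / phi0) ** 3 * ((1 - phi0) / (1 - phi)) ** 2
            day += 1

        print(f"injection well effectively clogged after {day} days (phi = {phi:.3f})")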

  13. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    PubMed

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown great performance in low-level vision tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity alone, however, introduces disturbance and inaccuracy into the estimation of the true image. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
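
    The weighted singular-value thresholding step that the authors show to be equivalent to the graph-based regularization can be sketched compactly in NumPy. This is a generic illustration of the operator, not the paper's trained model; the weighting scheme and threshold below are assumptions chosen for the toy example.

```python
import numpy as np

def weighted_svt(Y, weights, tau):
    """Weighted singular-value thresholding of a patch-group matrix Y.

    Soft-thresholds each singular value sigma_i by tau * w_i, which solves
    min_X 0.5*||Y - X||_F^2 + tau * sum_i w_i * sigma_i(X)
    for non-descending weights.
    """
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_thr = np.maximum(s - tau * weights, 0.0)
    return (U * s_thr) @ Vt

# Toy low-rank group of 32 similar patches (64 pixels each) plus noise
rng = np.random.default_rng(0)
L = rng.standard_normal((64, 3)) @ rng.standard_normal((3, 32))
Y = L + 0.5 * rng.standard_normal((64, 32))

# Assumed weighting: penalize small singular values more strongly
w = 1.0 / (np.linalg.svd(Y, compute_uv=False) + 1e-8)
X = weighted_svt(Y, w, tau=5.0)
print("rank before/after:", np.linalg.matrix_rank(Y), np.linalg.matrix_rank(X, tol=1e-6))
```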

  14. Magnetic field feature extraction and selection for indoor location estimation.

    PubMed

    Galván-Tejada, Carlos E; García-Vázquez, Juan Pablo; Brena, Ramon F

    2014-06-20

    User indoor positioning has been under constant improvement, especially with the availability of new sensors integrated into modern mobile devices, which allow us to exploit not only infrastructure made for everyday use, such as WiFi, but also natural infrastructure, such as the Earth's magnetic field. In this paper we present an extension and improvement of our current indoor localization model based on the extraction of 46 magnetic field signal features. The extension adds a feature selection phase to our methodology, performed through a Genetic Algorithm (GA) with the aim of optimizing the fitness of our current model. In addition, we present an evaluation of the final model in two different scenarios: a home and an office building. The results indicate that performing a feature selection process allows us to reduce the number of signal features of the model from 46 to 5, regardless of the scenario and room location distribution. Further, we verified that reducing the number of features increases the probability of our estimator correctly detecting the user's location (sensitivity) and its capacity to reject false positives (specificity) in both scenarios.
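
    As an illustration of the kind of GA-driven feature selection described here, the following sketch evolves binary feature masks against a cross-validated classifier. The synthetic data, the KNN fitness proxy, and all GA settings (population size, mutation rate, selection rule) are assumptions made for the example, not the study's actual setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Stand-in for the 46 magnetic-field features and room labels (synthetic data)
X, y = make_classification(n_samples=400, n_features=46, n_informative=5, random_state=1)

def fitness(mask):
    """Cross-validated accuracy of a KNN classifier, penalizing large subsets."""
    if mask.sum() == 0:
        return 0.0
    score = cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y, cv=3).mean()
    return score - 0.01 * mask.sum()

pop = (rng.random((30, 46)) < 0.2).astype(int)   # 30 random feature masks
for gen in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]       # truncation selection
    children = []
    while len(children) < len(pop):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 46)
        child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
        flip = rng.random(46) < 0.02                 # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```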

  15. The modular modality frame model: continuous body state estimation and plausibility-weighted information fusion.

    PubMed

    Ehrenfeld, Stephan; Butz, Martin V

    2013-02-01

    Humans show admirable capabilities in movement planning and execution. They can perform complex tasks in various contexts, using the available sensory information very effectively. Body models and continuous body state estimations appear necessary to realize such capabilities. We introduce the Modular Modality Frame (MMF) model, which maintains a highly distributed, modularized body model, continuously updating modularized probabilistic body state estimations over time. Modularization is realized with respect to modality frames, that is, sensory modalities in particular frames of reference, and with respect to particular body parts. We evaluate MMF performance on a simulated, nine-degree-of-freedom arm in 3D space. The results show that MMF is able to maintain accurate body state estimations despite high sensor and motor noise. Moreover, by comparing the sensory information available in different modality frames, MMF can identify faulty sensory measurements on the fly. In the near future, applications to lightweight robot control should be pursued. Moreover, MMF may be enhanced with neural encodings by introducing neural population codes and learning techniques. Finally, more dexterous goal-directed behavior should be realized by exploiting the available redundant state representations.

  16. Where neuroscience and dynamic system theory meet autonomous robotics: a contracting basal ganglia model for action selection.

    PubMed

    Girard, B; Tabareau, N; Pham, Q C; Berthoz, A; Slotine, J-J

    2008-05-01

    Action selection, the problem of choosing what to do next, is central to any autonomous agent architecture. We use here a multi-disciplinary approach at the convergence of neuroscience, dynamical system theory and autonomous robotics, in order to propose an efficient action selection mechanism based on a new model of the basal ganglia. We first describe new developments of contraction theory regarding locally projected dynamical systems. We exploit these results to design a stable computational model of the cortico-baso-thalamo-cortical loops. Based on recent anatomical data, we include usually neglected neural projections, which participate in performing accurate selection. Finally, the efficiency of this model as an autonomous robot action selection mechanism is assessed in a standard survival task. The model exhibits valuable dithering avoidance and energy-saving properties, when compared with a simple if-then-else decision rule.

  17. Electricity Load Forecasting Using Support Vector Regression with Memetic Algorithms

    PubMed Central

    Hu, Zhongyi; Xiong, Tao

    2013-01-01

    Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature and in the literature on commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Because the performance of SVR depends strongly on its parameters, this study proposes a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than four other evolutionary-algorithm-based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature. PMID:24459425
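
    A minimal sketch of the memetic idea, global firefly moves plus pattern-search local refinement, applied to SVR hyperparameter tuning is given below. The synthetic load series, the log-space search bounds, and the swarm settings are all assumptions made for illustration; the paper's own FA-MA operators and data are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(2)
# Synthetic stand-in for a load series, recast as a 24-lag autoregression
load = np.sin(2 * np.pi * np.arange(500) / 24) + 0.1 * rng.standard_normal(500)
n = len(load) - 24
X = np.column_stack([load[i:i + n] for i in range(24)])
y = load[24:]

def fitness(z):
    """Cross-validated score of an SVR with log10-encoded (C, gamma, epsilon)."""
    C, gamma, eps = 10.0 ** z
    return cross_val_score(SVR(C=C, gamma=gamma, epsilon=eps), X, y,
                           cv=3, scoring="neg_mean_squared_error").mean()

def pattern_search(z, step=0.25, rounds=5):
    """Individual learning: coordinate-wise local refinement of one solution."""
    best, fb = z.copy(), fitness(z)
    for _ in range(rounds):
        for d in range(3):
            for s in (step, -step):
                cand = best.copy()
                cand[d] += s
                fc = fitness(cand)
                if fc > fb:
                    best, fb = cand, fc
        step *= 0.5
    return best, fb

# Firefly search over log10(C) in [-1,3], log10(gamma) in [-3,1], log10(eps) in [-3,0]
lo, hi = np.array([-1.0, -3.0, -3.0]), np.array([3.0, 1.0, 0.0])
swarm = lo + rng.random((6, 3)) * (hi - lo)
for _ in range(5):
    bright = np.array([fitness(z) for z in swarm])
    for i in range(len(swarm)):
        for j in range(len(swarm)):
            if bright[j] > bright[i]:              # move i toward brighter j
                r2 = np.sum((swarm[i] - swarm[j]) ** 2)
                swarm[i] += np.exp(-r2) * (swarm[j] - swarm[i]) \
                            + 0.1 * (rng.random(3) - 0.5)
    swarm = np.clip(swarm, lo, hi)
    # Memetic step: refine the current best firefly with pattern search
    k = int(np.argmax([fitness(z) for z in swarm]))
    swarm[k], _ = pattern_search(swarm[k])

best = swarm[int(np.argmax([fitness(z) for z in swarm]))]
print("tuned (C, gamma, epsilon):", 10.0 ** best)
```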

  18. Electricity load forecasting using support vector regression with memetic algorithms.

    PubMed

    Hu, Zhongyi; Bao, Yukun; Xiong, Tao

    2013-01-01

    Electricity load forecasting is an important issue that is widely explored and examined in the power systems operation literature and in the literature on commercial transactions in electricity markets. Among the existing forecasting models, support vector regression (SVR) has gained much attention. Because the performance of SVR depends strongly on its parameters, this study proposes a firefly algorithm (FA) based memetic algorithm (FA-MA) to appropriately determine the parameters of the SVR forecasting model. In the proposed FA-MA algorithm, the FA is applied to explore the solution space, and pattern search is used to conduct individual learning and thus enhance the exploitation of the FA. Experimental results confirm that the proposed FA-MA based SVR model can not only yield more accurate forecasting results than four other evolutionary-algorithm-based SVR models and three well-known forecasting models but also outperform the hybrid algorithms in the related existing literature.

  19. Unscented Kalman Filter-Trained Neural Networks for Slip Model Prediction

    PubMed Central

    Li, Zhencai; Wang, Yang; Liu, Zhen

    2016-01-01

    The purpose of this work is to investigate the accurate trajectory tracking control of a wheeled mobile robot (WMR) based on slip model prediction. Generally, a nonholonomic WMR faces an increased risk of slippage when traveling on outdoor unstructured terrain (such as longitudinal and lateral slippage of wheels). In order to control a WMR stably and accurately under the effect of slippage, an unscented Kalman filter and neural networks (NNs) are applied to estimate the slip model in real time. This method exploits the model-approximating capabilities of a nonlinear state-space NN, and the unscented Kalman filter is used to train the NN's weights online. The slip parameters can be estimated and used to predict the time series of deviation velocity, which can be used to compensate the control inputs of the WMR. The results of numerical simulation show that the desired trajectory tracking control can be performed by predicting the nonlinear slip model. PMID:27467703

  20. System analysis through bond graph modeling

    NASA Astrophysics Data System (ADS)

    McBride, Robert Thomas

    2005-07-01

    Modeling and simulation play an integral role in the engineering design process. An accurate mathematical description of a system gives the design engineer the flexibility to perform trade studies quickly and accurately, expediting the design process. Most often, the mathematical model of the system contains components from different engineering disciplines. A modeling methodology that can handle these types of systems might be used in an indirect fashion to extract added information from the model. This research examines the ability of a modeling methodology to provide added insight into system analysis and design. The modeling methodology used is bond graph modeling. An investigation into the creation of a bond graph model using the Lagrangian of the system is provided. Upon creation of the bond graph, system analysis is performed. To aid in the system analysis, an object-oriented approach to bond graph modeling is introduced. A framework is provided to simulate the bond graph directly. Through object-oriented simulation of a bond graph, the information contained within the bond graph can be exploited to create a measurement of system efficiency. A definition of system efficiency is given. This measurement of efficiency is used in the design of different controllers of varying architectures. Optimal control of a missile autopilot is discussed within the framework of the calculated system efficiency.

  1. Demographic threats to the sustainability of Brazil nut exploitation.

    PubMed

    Peres, Carlos A; Baider, Claudia; Zuidema, Pieter A; Wadt, Lúcia H O; Kainer, Karen A; Gomes-Silva, Daisy A P; Salomão, Rafael P; Simões, Luciana L; Franciosi, Eduardo R N; Cornejo Valverde, Fernando; Gribel, Rogério; Shepard, Glenn H; Kanashiro, Milton; Coventry, Peter; Yu, Douglas W; Watkinson, Andrew R; Freckleton, Robert P

    2003-12-19

    A comparative analysis of 23 populations of the Brazil nut tree (Bertholletia excelsa) across the Brazilian, Peruvian, and Bolivian Amazon shows that the history and intensity of Brazil nut exploitation are major determinants of population size structure. Populations subjected to persistent levels of harvest lack juvenile trees less than 60 centimeters in diameter at breast height; only populations with a history of either light or recent exploitation contain large numbers of juvenile trees. A harvesting model confirms that intensive exploitation levels over the past century are such that juvenile recruitment is insufficient to maintain populations over the long term. Without management, intensively harvested populations will succumb to a process of senescence and demographic collapse, threatening this cornerstone of the Amazonian extractive economy.
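
    The kind of harvesting model described can be illustrated with a small stage-structured projection in which nut harvest suppresses recruitment into the seedling stage. All vital rates below are hypothetical placeholders, not the paper's fitted demographic parameters.

```python
import numpy as np

# Three stages: seedling, juvenile (<60 cm DBH), adult. All rates are
# illustrative assumptions, not the paper's fitted values.
def project(harvest_fraction, years=100):
    n = np.array([500.0, 200.0, 100.0])        # initial stage abundances
    recruit = 0.02 * (1.0 - harvest_fraction)  # seed-to-seedling rate after nut harvest
    A = np.array([
        [0.50, 0.00, recruit * 300.0],  # seedlings: survival + recruits per adult
        [0.05, 0.90, 0.00],             # juveniles: promotion + stasis
        [0.00, 0.02, 0.97],             # adults: promotion + survival
    ])
    for _ in range(years):
        n = A @ n                        # one year of stage-structured dynamics
    return n

for h in (0.0, 0.5, 0.93):
    n = project(h)
    print(f"harvest {h:4.0%}: juveniles {n[1]:8.1f}, adults {n[2]:8.1f}")
```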

  2. Empirical Analysis of Exploiting Review Helpfulness for Extractive Summarization of Online Reviews

    ERIC Educational Resources Information Center

    Xiong, Wenting; Litman, Diane

    2014-01-01

    We propose a novel unsupervised extractive approach for summarizing online reviews by exploiting review helpfulness ratings. In addition to using the helpfulness ratings for review-level filtering, we suggest using them as the supervision of a topic model for sentence-level content scoring. The proposed method is metadata-driven, requiring no…

  3. Multifidelity-CMA: a multifidelity approach for efficient personalisation of 3D cardiac electromechanical models.

    PubMed

    Molléro, Roch; Pennec, Xavier; Delingette, Hervé; Garny, Alan; Ayache, Nicholas; Sermesant, Maxime

    2018-02-01

    Personalised computational models of the heart are of increasing interest for clinical applications due to their discriminative and predictive abilities. However, the simulation of a single heartbeat with a 3D cardiac electromechanical model can be long and computationally expensive, which makes some practical applications, such as the estimation of model parameters from clinical data (the personalisation), very slow. Here we introduce an original multifidelity approach between a 3D cardiac model and a simplified "0D" version of this model, which enables reliable (and extremely fast) approximations of the global behaviour of the 3D model using 0D simulations. We then use this multifidelity approximation to speed up an efficient parameter estimation algorithm, leading to a fast and computationally efficient personalisation method for the 3D model. In particular, we show results on a cohort of 121 different heart geometries and measurements. Finally, an exploitable code of the 0D model, with scripts to perform parameter estimation, will be released to the community.

  4. Impact of Chaos Functions on Modern Swarm Optimizers.

    PubMed

    Emary, E; Zawbaa, Hossam M

    2016-01-01

    Exploration and exploitation are two essential components of any optimization algorithm. Too much exploration leads to oscillation and slow convergence, while too much exploitation can cause premature convergence, leaving the optimizer stuck in local minima. Therefore, balancing the rates of exploration and exploitation over the optimization lifetime is a challenge. This study evaluates the impact of using chaos-based control of exploration/exploitation rates against using the systematic native control. Three modern algorithms were used in the study, namely the grey wolf optimizer (GWO), the antlion optimizer (ALO) and the moth-flame optimizer (MFO), in the domain of machine learning for feature selection. Results on a set of standard machine learning datasets, using a set of assessment indicators, show improved optimizer performance when using chaotically varying, repeated periods of declining exploration rates rather than systematically decreased exploration rates.
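
    A minimal sketch of the comparison, in the style of the grey wolf optimizer, is shown below: the exploration parameter a either decays linearly (the systematic native control) or is driven by a logistic chaotic map. The test function, swarm size, and map seed are assumptions made for illustration.

```python
import numpy as np

def sphere(x):
    """Simple convex test function: sum of squares."""
    return np.sum(x ** 2, axis=-1)

def gwo(chaotic=False, dim=10, wolves=20, iters=200, seed=3):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (wolves, dim))
    c = 0.7                                   # chaotic-map state (assumed seed)
    for t in range(iters):
        if chaotic:
            c = 4.0 * c * (1.0 - c)           # logistic map in (0, 1)
            a = 2.0 * c                       # chaos-driven exploration rate
        else:
            a = 2.0 * (1.0 - t / iters)       # systematic linear decay
        order = np.argsort(sphere(X))
        alpha, beta, delta = X[order[:3]]     # three best wolves lead the pack
        Xnew = np.zeros_like(X)
        for leader in (alpha, beta, delta):
            A = a * (2.0 * rng.random(X.shape) - 1.0)
            C = 2.0 * rng.random(X.shape)
            Xnew += leader - A * np.abs(C * leader - X)
        X = Xnew / 3.0                        # average of the three leader moves
    return sphere(X).min()

print("linear decay :", gwo(chaotic=False))
print("chaotic drive:", gwo(chaotic=True))
```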

  5. Least-squares model-based halftoning

    NASA Astrophysics Data System (ADS)

    Pappas, Thrasyvoulos N.; Neuhoff, David L.

    1992-08-01

    A least-squares model-based approach to digital halftoning is proposed. It exploits both a printer model and a model for visual perception. It attempts to produce an 'optimal' halftoned reproduction by minimizing the squared error between the response of the cascade of the printer and visual models to the binary image and the response of the visual model to the original gray-scale image. Conventional methods, such as clustered ordered dither, use the properties of the eye only implicitly, and resist printer distortions at the expense of spatial and gray-scale resolution. In previous work we showed that our printer model can be used to modify error diffusion to account for printer distortions. The modified error diffusion algorithm has better spatial and gray-scale resolution than conventional techniques, but produces some well-known artifacts and asymmetries because it does not make use of an explicit eye model. Least-squares model-based halftoning uses explicit eye models and relies on printer models that predict distortions and exploit them to increase, rather than decrease, both spatial and gray-scale resolution. We have shown that the one-dimensional least-squares problem, in which each row or column of the image is halftoned independently, can be solved with the Viterbi algorithm. Unfortunately, no closed-form solution can be found in two dimensions; the two-dimensional least-squares solution is obtained by iterative techniques. Experiments show that least-squares model-based halftoning produces more gray levels and better spatial resolution than conventional techniques. We also show that the least-squares approach eliminates the problems associated with error diffusion. Model-based halftoning can be especially useful in the transmission of high-quality documents using high-fidelity gray-scale image encoders. As we have shown, in such cases halftoning can be performed at the receiver, just before printing. Apart from coding efficiency, this approach permits the halftoner to be tuned to the individual printer, whose characteristics may vary considerably from those of other printers, for example, write-black vs. write-white laser printers.

  6. An ecology-oriented exploitation mode of groundwater resources in the northern Tianshan Mountains, China

    NASA Astrophysics Data System (ADS)

    Shang, Haimin; Wang, Wenke; Dai, Zhenxue; Duan, Lei; Zhao, Yaqian; Zhang, Jing

    2016-12-01

    In recent years, ecological degradation caused by irrational groundwater exploitation has been of growing concern in arid and semiarid regions. To address these groundwater-ecological issues, this paper proposes a groundwater-resource exploitation mode to evaluate the tradeoff between groundwater development and the ecological environment in the northern Tianshan Mountains, in northwest China's Xinjiang Uygur Autonomous Region. Field surveys and remote sensing studies were conducted to analyze the relation between the distribution of hydrological conditions and the occurrence of ecological types. The results show that there is a good correlation between groundwater depth and the supergene ecological type. Numerical simulations and ecological assessment models were applied to develop an ecology-oriented exploitation mode of groundwater resources. The mode allows the groundwater levels in different zones to be regulated by optimizing groundwater exploitation schemes. The prediction results show that the supergene ecological quality will improve by 2020 and that even more groundwater can be exploited under this mode. This study provides guidance for regional groundwater management, especially in regions with an obvious water scarcity.

  7. Toward automatic time-series forecasting using neural networks.

    PubMed

    Yan, Weizhong

    2012-07-01

    Over the past few decades, the application of artificial neural networks (ANN) to time-series forecasting (TSF) has been growing rapidly due to several unique features of ANN models. However, to date, consistent ANN performance over different studies has not been achieved. Many factors contribute to the inconsistency in the performance of neural network models. One such factor is that ANN modeling involves determining a large number of design parameters, and the current design practice is essentially heuristic and ad hoc, which does not exploit the full potential of neural networks. Systematic ANN modeling processes and strategies for TSF are, therefore, greatly needed. Motivated by this need, this paper attempts to develop an automatic ANN modeling scheme. It is based on the generalized regression neural network (GRNN), a special type of neural network. By taking advantage of several GRNN properties (i.e., a single design parameter and fast learning) and by incorporating several design strategies (e.g., fusing multiple GRNNs), we have been able to make the proposed modeling scheme effective for modeling large-scale business time series. The initial model was entered into the NN3 time-series competition and was awarded the best prediction on the reduced dataset among approximately 60 different models submitted by scholars worldwide.
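
    The GRNN at the heart of this scheme is essentially kernel-weighted regression with a single bandwidth parameter, which is what makes automatic design tractable. Below is a minimal sketch on a toy lagged time series; the lag embedding and the bandwidth values tried are illustrative assumptions.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma):
    """Generalized regression neural network: a kernel-weighted average of
    training targets, with the bandwidth sigma as its single design parameter."""
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / (w.sum(axis=1) + 1e-12)

# Toy time series: embed with 6 lags and forecast one step ahead
rng = np.random.default_rng(4)
s = np.sin(np.arange(300) * 0.2) + 0.05 * rng.standard_normal(300)
lags = 6
X = np.column_stack([s[i:i + len(s) - lags] for i in range(lags)])
y = s[lags:]
X_tr, y_tr, X_te, y_te = X[:250], y[:250], X[250:], y[250:]

for sigma in (0.05, 0.2, 1.0):     # the one parameter to tune
    pred = grnn_predict(X_tr, y_tr, X_te, sigma)
    print(f"sigma={sigma:4.2f}  RMSE={np.sqrt(np.mean((pred - y_te) ** 2)):.4f}")
```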

  8. Performance and robustness of hybrid model predictive control for controllable dampers in building models

    NASA Astrophysics Data System (ADS)

    Johnson, Erik A.; Elhaddad, Wael M.; Wojtkiewicz, Steven F.

    2016-04-01

    A variety of strategies have been developed over the past few decades to determine controllable damping device forces to mitigate the response of structures and mechanical systems to natural hazards and other excitations. These "smart" damping devices produce forces through passive means but have properties that can be controlled in real time, based on sensor measurements of response across the structure, to dramatically reduce structural motion by exploiting more than the local "information" that is available to purely passive devices. A common strategy is to design optimal damping forces using active control approaches and then try to reproduce those forces with the smart damper. However, these design forces, for some structures and performance objectives, may achieve high performance by selectively adding energy, which cannot be replicated by a controllable damping device, causing the smart damper performance to fall far short of what an active system would provide. The authors have recently demonstrated that a model predictive control strategy using hybrid system models, which utilize both continuous and binary states (the latter to capture the switching behavior between dissipative and non-dissipative forces), can provide reductions in structural response on the order of 50% relative to the conventional clipped-optimal design strategy. This paper explores the robustness of this newly proposed control strategy by evaluating controllable damper performance when the structure model differs from the nominal one used to design the damping strategy. Results from the application to a two-degree-of-freedom structure model confirm the robustness of the proposed strategy.

  9. Human-Assisted Machine Information Exploitation: a crowdsourced investigation of information-based problem solving

    NASA Astrophysics Data System (ADS)

    Kase, Sue E.; Vanni, Michelle; Caylor, Justine; Hoye, Jeff

    2017-05-01

    The Human-Assisted Machine Information Exploitation (HAMIE) investigation utilizes large-scale online data collection for developing models of information-based problem solving (IBPS) behavior in a simulated time-critical operational environment. These types of environments are characteristic of intelligence workflow processes conducted during geopolitical unrest situations, when the ability to make the best decision at the right time ensures strategic overmatch. The project takes a systems approach to Human Information Interaction (HII) by harnessing the expertise of crowds to model the interaction of the information consumer and the information required to solve a problem at different levels of system restrictiveness and decisional guidance. The design variables derived from Decision Support Systems (DSS) research represent the experimental conditions in this online single-player against-the-clock game where the player, acting in the role of an intelligence analyst, is tasked with a Commander's Critical Information Requirement (CCIR) in an information overload scenario. The player performs a sequence of three information processing tasks (annotation, relation identification, and link diagram formation) with the assistance of 'HAMIE the robot', who offers varying levels of information understanding depending on question complexity. We provide preliminary results from a pilot study conducted with Amazon Mechanical Turk (AMT) participants on the Volunteer Science scientific research platform.

  10. Sparse estimation of model-based diffuse thermal dust emission

    NASA Astrophysics Data System (ADS)

    Irfan, Melis O.; Bobin, Jérôme

    2018-03-01

    Component separation for the Planck High Frequency Instrument (HFI) data is primarily concerned with the estimation of thermal dust emission, which requires the separation of thermal dust from the cosmic infrared background (CIB). For that purpose, current estimation methods rely on filtering techniques to decouple thermal dust emission from CIB anisotropies, which tend to yield a smooth, low-resolution estimation of the dust emission. In this paper, we present a new parameter estimation method, premise: Parameter Recovery Exploiting Model Informed Sparse Estimates. This method exploits the sparse nature of thermal dust emission to calculate all-sky maps of thermal dust temperature, spectral index, and optical depth at 353 GHz. premise is evaluated and validated on full-sky simulated data. We find the percentage difference between the premise results and the true values to be 2.8, 5.7, and 7.2 per cent at the 1σ level across the full sky for thermal dust temperature, spectral index, and optical depth at 353 GHz, respectively. A comparison between premise and a GNILC-like method over selected regions of our sky simulation reveals that both methods perform comparably within high signal-to-noise regions. However, outside of the Galactic plane, premise is seen to outperform the GNILC-like method with increasing success as the signal-to-noise ratio worsens.

  11. Joint reconstruction of PET-MRI by exploiting structural similarity

    NASA Astrophysics Data System (ADS)

    Ehrhardt, Matthias J.; Thielemans, Kris; Pizarro, Luis; Atkinson, David; Ourselin, Sébastien; Hutton, Brian F.; Arridge, Simon R.

    2015-01-01

    Recent advances in technology have enabled the combination of positron emission tomography (PET) with magnetic resonance imaging (MRI). These PET-MRI scanners simultaneously acquire functional PET and anatomical or functional MRI data. As function and anatomy are not independent of one another, the images to be reconstructed are likely to have shared structures. We aim to exploit this inherent structural similarity by reconstructing from both modalities in a joint reconstruction framework. The structural similarity between two modalities can be modelled in two different ways: edges are more likely to be at similar positions and/or to have similar orientations. We analyse the diffusion process generated by minimizing priors that encapsulate these different models. It turns out that the class of parallel level set priors always corresponds to anisotropic diffusion which is sometimes forward and sometimes backward diffusion. We perform numerical experiments where we jointly reconstruct from blurred Radon data with Poisson noise (PET) and under-sampled Fourier data with Gaussian noise (MRI). Our results show that both modalities benefit from each other in areas of shared edge information. The joint reconstructions have fewer artefacts and sharper edges compared to separate reconstructions, and the ℓ2-error can be reduced in all of the considered cases of under-sampling.

  12. The SPAtial EFficiency metric (SPAEF): multiple-component evaluation of spatial patterns for optimization of hydrological models

    NASA Astrophysics Data System (ADS)

    Koch, Julian; Cüneyd Demirel, Mehmet; Stisen, Simon

    2018-05-01

    The process of model evaluation is not only an integral part of model development and calibration but also of paramount importance when communicating modelling results to the scientific community and stakeholders. The modelling community has a large and well-tested toolbox of metrics to evaluate temporal model performance. In contrast, spatial performance evaluation has not kept pace with the wide availability of spatial observations or with the sophisticated model codes simulating the spatial variability of complex hydrological processes. This study makes a contribution towards advancing spatial-pattern-oriented model calibration by rigorously testing a multiple-component performance metric. The promoted SPAtial EFficiency (SPAEF) metric reflects three equally weighted components: correlation, coefficient of variation and histogram overlap. This multiple-component approach is found to be advantageous for the complex task of comparing spatial patterns. SPAEF, its three components individually, and two alternative spatial performance metrics, i.e. connectivity analysis and the fractions skill score, are applied in a spatial-pattern-oriented model calibration of a catchment model in Denmark. Results suggest the importance of multiple-component metrics because stand-alone metrics tend to fail to provide holistic pattern information. The three SPAEF components are found to be independent, which allows them to complement each other in a meaningful way. In order to optimally exploit spatial observations made available by remote sensing platforms, this study suggests applying bias-insensitive metrics, which further allow for a comparison of variables that are related but may differ in unit. This study applies SPAEF in the hydrological context using the mesoscale Hydrologic Model (mHM; version 5.8), but we see great potential across disciplines related to spatially distributed earth system modelling.
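
    Following the published definition of SPAEF (one minus the Euclidean distance of the three components from their ideal value of one), a compact implementation might look as follows; the synthetic fields are placeholders for real observed and simulated spatial patterns.

```python
import numpy as np

def spaef(obs, sim, bins=100):
    """SPAtial EFficiency: 1 - sqrt((A-1)^2 + (B-1)^2 + (C-1)^2), where
    A = correlation, B = ratio of coefficients of variation, and
    C = histogram intersection of z-scored patterns. Perfect score is 1."""
    obs, sim = obs.ravel(), sim.ravel()
    A = np.corrcoef(obs, sim)[0, 1]
    B = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))
    zo = (obs - obs.mean()) / obs.std()   # z-scoring makes C bias-insensitive
    zs = (sim - sim.mean()) / sim.std()
    lo, hi = min(zo.min(), zs.min()), max(zo.max(), zs.max())
    ho, _ = np.histogram(zo, bins=bins, range=(lo, hi))
    hs, _ = np.histogram(zs, bins=bins, range=(lo, hi))
    C = np.minimum(ho, hs).sum() / ho.sum()   # histogram overlap
    return 1.0 - np.sqrt((A - 1) ** 2 + (B - 1) ** 2 + (C - 1) ** 2)

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 1.0, (100, 100))         # stand-in spatial pattern
sim = obs + 0.3 * rng.standard_normal(obs.shape) + 0.5   # biased, noisy "simulation"
print(f"SPAEF = {spaef(obs, sim):.3f}")
```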

  13. Emerging from the bottleneck: Benefits of the comparative approach to modern neuroscience

    PubMed Central

    Brenowitz, Eliot A.; Zakon, Harold H.

    2015-01-01

    Neuroscience historically exploited a wide diversity of animal taxa. Recently, however, research has focused increasingly on a few model species. This trend accelerated with the genetic revolution, as genomic sequences and genetic tools became available for a few species, which formed a bottleneck. This coalescence on a small set of model species comes with several costs that are often not considered, especially in the current drive to use mice explicitly as models for human diseases. Comparative studies of strategically chosen non-model species can complement model-species research and yield more rigorous studies. As genetic sequences and tools become available for many more species, we are poised to emerge from the bottleneck and once again exploit the rich biological diversity offered by comparative studies. PMID:25800324

  14. Simulated population responses of common carp to commercial exploitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weber, Michael J.; Hennen, Matthew J.; Brown, Michael L.

    2011-12-01

    Common carp Cyprinus carpio is a widespread invasive species that can become highly abundant and impose deleterious ecosystem effects. Thus, aquatic resource managers are interested in controlling common carp populations. Control of invasive common carp populations is difficult, due in part to the inherent uncertainty of how populations respond to exploitation. To understand how common carp populations respond to exploitation, we evaluated common carp population dynamics (recruitment, growth, and mortality) in three natural lakes in eastern South Dakota. Common carp exhibited similar population dynamics across these three systems that were characterized by consistent recruitment (ages 3 to 15 years present), fast growth (K = 0.37 to 0.59), and low mortality (A = 1 to 7%). We then modeled the effects of commercial exploitation on size structure, abundance, and egg production to determine its utility as a management tool to control populations. All three populations responded similarly to exploitation simulations with a 575-mm length restriction, representing commercial gear selectivity. Simulated common carp size structure modestly declined (9 to 37%) in all simulations. Abundance of common carp declined dramatically (28 to 56%) at low levels of exploitation (0 to 20%) but exploitation >40% had little additive effect and populations were only reduced by 49 to 79% despite high exploitation (>90%). Maximum lifetime egg production was reduced from 77 to 89% at a moderate level of exploitation (40%), indicating the potential for recruitment overfishing. Exploitation further reduced common carp size structure, abundance, and egg production when simulations were not size selective. Our results provide insights to how common carp populations may respond to exploitation. Although commercial exploitation may be able to partially control populations, an integrated removal approach that removes all sizes of common carp has a greater chance of controlling population abundance and reducing perturbations induced by this invasive species.
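
    The size-selective simulations can be illustrated with a simple per-recruit calculation in which only fish above the commercial length limit are vulnerable. The growth, mortality, and fecundity values below are loosely inspired by the abstract but are illustrative assumptions, not the study's fitted parameters.

```python
import numpy as np

# Illustrative parameters loosely based on the abstract (K within 0.37-0.59,
# low natural mortality); not the study's fitted values.
ages = np.arange(3, 16)                        # ages 3-15 present
Linf, K, t0 = 750.0, 0.37, 0.0                 # von Bertalanffy growth [mm]
length = Linf * (1.0 - np.exp(-K * (ages - t0)))
nat_surv = 0.96                                # low annual natural mortality (A ~ 4%)

def equilibrium_metrics(u, min_length=None):
    """Per-recruit abundance and relative egg production at exploitation rate u."""
    vulnerable = np.ones_like(length) if min_length is None else (length >= min_length)
    surv = nat_surv * (1.0 - u * vulnerable)
    n = np.concatenate([[1000.0], 1000.0 * np.cumprod(surv[:-1])])  # cohort survivorship
    eggs = n * (length / Linf) ** 3            # fecundity roughly scales with body mass
    return n.sum(), eggs.sum()

n0, e0 = equilibrium_metrics(0.0)
for u in (0.2, 0.4, 0.9):
    n_sel, e_sel = equilibrium_metrics(u, min_length=575.0)   # commercial selectivity
    n_all, e_all = equilibrium_metrics(u)                     # remove all sizes
    print(f"u={u:.0%}: size-selective N -{1-n_sel/n0:.0%}, eggs -{1-e_sel/e0:.0%} | "
          f"non-selective N -{1-n_all/n0:.0%}, eggs -{1-e_all/e0:.0%}")
```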

  15. Effective deep learning training for single-image super-resolution in endomicroscopy exploiting video-registration-based reconstruction.

    PubMed

    Ravì, Daniele; Szczotka, Agnieszka Barbara; Shakir, Dzhoshkun Ismail; Pereira, Stephen P; Vercauteren, Tom

    2018-06-01

    Probe-based confocal laser endomicroscopy (pCLE) is a recent imaging modality that allows performing in vivo optical biopsies. The design of pCLE hardware, and its reliance on an optical fibre bundle, fundamentally limits the image quality: a few tens of thousands of fibres, each acting as the equivalent of a single-pixel detector, are assembled into a single fibre bundle. Video registration techniques can be used to estimate high-resolution (HR) images by exploiting the temporal information contained in a sequence of low-resolution (LR) images. However, the alignment of LR frames, required for the fusion, is computationally demanding and prone to artefacts. In this work, we propose a novel synthetic data generation approach to train exemplar-based Deep Neural Networks (DNNs). HR pCLE images with enhanced quality are recovered by models trained on pairs of estimated HR images (generated by the video registration algorithm) and realistic synthetic LR images. The performance of three different state-of-the-art DNN techniques was analysed on a Smart Atlas database of 8806 images from 238 pCLE video sequences. The results were validated through an extensive image quality assessment that takes into account different quality scores, including a Mean Opinion Score (MOS). Results indicate that the proposed solution produces an effective improvement in the quality of the reconstructed images. The proposed training strategy and associated DNNs allow us to perform convincing super-resolution of pCLE images.

  16. Animation control of surface motion capture.

    PubMed

    Tejera, Margara; Casas, Dan; Hilton, Adrian

    2013-12-01

    Surface motion capture (SurfCap) of actor performance from multiple-view video provides reconstruction of the natural nonrigid deformation of skin and clothing. This paper introduces techniques for interactive animation control of SurfCap sequences which allow the flexibility in editing and interactive manipulation associated with existing tools for animation from skeletal motion capture (MoCap). Laplacian mesh editing is extended using a basis model learned from SurfCap sequences to constrain the surface shape to reproduce natural deformation. Three novel approaches for animation control of SurfCap sequences, which exploit the constrained Laplacian mesh editing, are introduced: 1) space-time editing for interactive sequence manipulation; 2) skeleton-driven animation to achieve natural nonrigid surface deformation; and 3) hybrid combination of skeletal MoCap-driven and SurfCap sequences to extend the range of movement. These approaches are combined with high-level parametric control of SurfCap sequences in a hybrid surface and skeleton-driven animation control framework to achieve natural surface deformation with an extended range of movement by exploiting existing MoCap archives. Evaluation of each approach and of the integrated animation framework is presented on real SurfCap sequences for actors performing multiple motions with a variety of clothing styles. Results demonstrate that these techniques enable flexible control for interactive animation with the natural nonrigid surface dynamics of the captured performance and provide a powerful tool to extend current SurfCap databases by incorporating new motions from MoCap sequences.

  17. An integrated logit model for contamination event detection in water distribution systems.

    PubMed

    Housh, Mashor; Ostfeld, Avi

    2015-05-15

    The problem of contamination event detection in water distribution systems has become one of the most challenging research topics in water distribution systems analysis. Current attempts at event detection utilize a variety of approaches, including statistical, heuristic, machine learning, and optimization methods. Several existing event detection systems share a common feature in which alarms are obtained separately for each of the water quality indicators. Unifying those single alarms from different indicators is usually performed by means of simple heuristics. A salient feature of the developed approach is the use of a statistically oriented model for discrete choice prediction, estimated using the maximum likelihood method, to integrate the single alarms. The discrete choice model is jointly calibrated with other components of the event detection system framework on a training dataset using genetic algorithms. The fusion of the individual indicator probabilities, which is overlooked in many existing event detection system models, is confirmed to be a crucial part of the system and can be modelled with a discrete choice model to improve performance. The developed methodology is tested on real water quality data, showing improved performance in decreasing the number of false positive alarms and in its ability to detect events with higher probabilities, compared to previous studies.
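
    The fusion step can be sketched as a logit model over the log-odds of the single-indicator alarms. The snippet below uses synthetic alarm probabilities and scikit-learn's maximum likelihood logistic regression as a stand-in for the paper's jointly calibrated discrete choice model; all data and thresholds are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
# Synthetic per-indicator event probabilities (e.g., chlorine, turbidity, pH):
# each single-indicator detector outputs a probability that an event is ongoing
n = 2000
event = rng.random(n) < 0.05
p_ind = np.clip(
    0.15 + 0.6 * event[:, None] + 0.2 * rng.standard_normal((n, 3)), 0.01, 0.99
)

# Discrete-choice fusion: a logit model on the log-odds of the single alarms,
# estimated by maximum likelihood, replaces ad hoc alarm-unification heuristics
logit = np.log(p_ind / (1.0 - p_ind))
model = LogisticRegression().fit(logit, event)
p_fused = model.predict_proba(logit)[:, 1]

# Compare false-positive rates at a fixed 90% detection rate
thr = np.quantile(p_fused[event], 0.10)
fused_fp = np.mean(p_fused[~event] >= thr)
naive_fp = np.mean(p_ind[~event].max(axis=1)
                   >= np.quantile(p_ind[event].max(axis=1), 0.10))
print(f"false-positive rate: fused {fused_fp:.3f} vs max-rule {naive_fp:.3f}")
```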

  18. Fast and Accurate Poisson Denoising With Trainable Nonlinear Diffusion.

    PubMed

    Feng, Wensen; Qiao, Peng; Chen, Yunjin

    2018-06-01

    The degradation of the acquired signal by Poisson noise is a common problem for various imaging applications, such as medical imaging, night vision, and microscopy. Up to now, many state-of-the-art Poisson denoising techniques have mainly concentrated on achieving utmost performance, with little consideration for computational efficiency. In this paper we therefore aim to propose an efficient Poisson denoising model with both high computational efficiency and high recovery quality. To this end, we exploit the newly developed trainable nonlinear reaction diffusion (TNRD) model, which has proven to be an extremely fast image restoration approach with performance surpassing recent state-of-the-art methods. However, the straightforward direct gradient descent employed in the original TNRD-based denoising task is not applicable here. To solve this problem, we resort to the proximal gradient descent method. We retrain the model parameters, including the linear filters and influence functions, by taking into account the Poisson noise statistics, and end up with a well-trained nonlinear diffusion model specialized for Poisson denoising. The trained model provides strongly competitive results against state-of-the-art approaches, while bearing the properties of simple structure and high efficiency. Furthermore, our proposed model comes with an additional advantage: the diffusion process is well suited for parallel computation on graphics processing units (GPUs). For images of size , our GPU implementation takes less than 0.1 s to produce state-of-the-art Poisson denoising performance.
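
    One reason proximal gradient descent suits the Poisson data term is that its proximal operator has a closed form (the positive root of a quadratic). The sketch below pairs that prox with a simple Laplacian smoothness prior standing in for the trained diffusion filters; all parameter values are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def poisson_prox(u, y, tau):
    """Closed-form prox of the Poisson negative log-likelihood tau*(x - y*log x):
    the positive root of x^2 + (tau - u)x - tau*y = 0."""
    b = u - tau
    return 0.5 * (b + np.sqrt(b * b + 4.0 * tau * y))

def denoise(y, lam=0.3, step=0.2, iters=200):
    """Proximal gradient descent with a quadratic smoothness prior
    (a stand-in for trained diffusion filters, not TNRD itself)."""
    x = y.astype(float).copy()
    for _ in range(iters):
        # gradient step on the smoothness prior: discrete Laplacian
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
        x = poisson_prox(x + step * lam * lap, y, step)
    return x

rng = np.random.default_rng(7)
truth = np.ones((64, 64)) * 5.0
truth[16:48, 16:48] = 20.0                  # bright square on a dim background
y = rng.poisson(truth).astype(float)
x = denoise(y)
print("RMSE noisy   :", np.sqrt(np.mean((y - truth) ** 2)))
print("RMSE denoised:", np.sqrt(np.mean((x - truth) ** 2)))
```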

  19. Accelerating Sequences in the Presence of Metal by Exploiting the Spatial Distribution of Off-Resonance

    PubMed Central

    Smith, Matthew R.; Artz, Nathan S.; Koch, Kevin M.; Samsonov, Alexey; Reeder, Scott B.

    2014-01-01

    Purpose: To demonstrate feasibility of exploiting the spatial distribution of off-resonance surrounding metallic implants for accelerating multispectral imaging techniques. Theory: Multispectral imaging (MSI) techniques perform time-consuming independent 3D acquisitions with varying RF frequency offsets to address the extreme off-resonance from metallic implants. Each off-resonance bin provides a unique spatial sensitivity that is analogous to the sensitivity of a receiver coil, and therefore provides a unique opportunity for acceleration. Methods: Fully sampled MSI was performed to demonstrate retrospective acceleration. A uniform sampling pattern across off-resonance bins was compared to several adaptive sampling strategies using a total hip replacement phantom. Monte Carlo simulations were performed to compare noise propagation of two of these strategies. With a total knee replacement phantom, positive and negative off-resonance bins were strategically sampled with respect to the B0 field to minimize aliasing. Reconstructions were performed with a parallel imaging framework to demonstrate retrospective acceleration. Results: An adaptive sampling scheme dramatically improved reconstruction quality, which was supported by the noise propagation analysis. Independent acceleration of negative and positive off-resonance bins demonstrated reduced overlapping of aliased signal to improve the reconstruction. Conclusion: This work presents the feasibility of acceleration in the presence of metal by exploiting the spatial sensitivities of off-resonance bins. PMID:24431210

  20. Exploiting GPUs in Virtual Machine for BioCloud

    PubMed Central

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computational performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications can therefore move into the cloud to enhance their computational performance and utilize virtually unlimited cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Much of the previous research has focused on mechanisms for sharing GPUs among VMs, which cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By allowing each VM to access the underlying GPUs directly, applications achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment. PMID:23710465

  1. Exploiting GPUs in virtual machine for BioCloud.

    PubMed

    Jo, Heeseung; Jeong, Jinkyu; Lee, Myoungho; Choi, Dong Hoon

    2013-01-01

    Recently, biological applications have started to be reimplemented to exploit the many cores of GPUs for better computational performance. By providing virtualized GPUs to VMs in a cloud computing environment, many biological applications can therefore move into the cloud to enhance their computational performance and utilize virtually unlimited cloud computing resources while reducing computation expenses. In this paper, we propose a BioCloud system architecture that enables VMs to use GPUs in a cloud environment. Much of the previous research has focused on mechanisms for sharing GPUs among VMs, which cannot achieve sufficient performance for biological applications, for which computational throughput is more crucial than sharing. The proposed system exploits the pass-through mode of the PCI Express (PCI-E) channel. By allowing each VM to access the underlying GPUs directly, applications achieve almost the same performance as in a native environment. In addition, our scheme multiplexes GPUs by using the hot plug-in/out device features of the PCI-E channel. By adding or removing GPUs in each VM in an on-demand manner, VMs in the same physical host can time-share the GPUs. We implemented the proposed system using the Xen VMM and NVIDIA GPUs and showed that our prototype is highly effective for biological GPU applications in a cloud environment.

  2. Intensive time series data exploitation: the Multi-sensor Evolution Analysis (MEA) platform

    NASA Astrophysics Data System (ADS)

    Mantovani, Simone; Natali, Stefano; Folegani, Marco; Scremin, Alessandro

    2014-05-01

    The monitoring of the temporal evolution of natural phenomena must be performed to ensure their correct description and to allow improvements in modelling and forecast capabilities. This requirement, obvious for ground-based measurements, has not always been satisfiable with data collected from space-based platforms. Geostationary satellites and sensors provide very effective monitoring of phenomena at geographic scales from regional to global, but smaller phenomena (with characteristic dimensions below a few kilometres) were long monitored by instruments that could collect data only at intervals of several days; for years, bi-temporal techniques were the most widely used means of characterising temporal changes and identifying specific phenomena. As the number of flying sensors has grown and their performance improved, so has their capability to monitor natural phenomena at smaller geographic scales: we can now count on tens of years of remotely sensed data, collected by hundreds of sensors and accessible to a wide user community, and data processing techniques have to adapt to move toward data-intensive exploitation. Starting in 2008, the European Space Agency initiated the development of the Multi-sensor Evolution Analysis (MEA) platform (https://mea.eo.esa.int), whose first aim was to permit the access and exploitation of long-term remotely sensed satellite data from different platforms: 15 years of global (A)ATSR data together with 5 years of regional AVNIR-2 data were loaded into the system and used, through a web-based graphical user interface, for land cover change analysis. The data available in MEA have grown over the years to integrate multi-disciplinary data featuring spatial and temporal dimensions: so far, tens of terabytes of data in the land and atmosphere domains are available and can be visualized and exploited, keeping the time dimension as the most relevant one (https://mea.eo.esa.int/data_availability.html). MEA is also used as a Climate Data gateway in the framework of the FP7 EarthServer Project. In the present work, the principles of the MEA platform are presented, emphasizing the general concept and the methods implemented for data access (including OGC standard data access) and exploitation. To show its effectiveness, use cases focused on multi-field and multi-temporal data analysis are presented.

  3. Twin Neurons for Efficient Real-World Data Distribution in Networks of Neural Cliques: Applications in Power Management in Electronic Circuits.

    PubMed

    Boguslawski, Bartosz; Gripon, Vincent; Seguin, Fabrice; Heitzmann, Frédéric

    2016-02-01

    Associative memories are data structures that allow retrieval of previously stored messages given part of their content. They, thus, behave similarly to the human brain's memory that is capable, for instance, of retrieving the end of a song, given its beginning. Among different families of associative memories, sparse ones are known to provide the best efficiency (ratio of the number of bits stored to that of the bits used). Recently, a new family of sparse associative memories achieving almost optimal efficiency has been proposed. Their structure, relying on binary connections and neurons, induces a direct mapping between input messages and stored patterns. Nevertheless, it is well known that nonuniformity of the stored messages can lead to a dramatic decrease in performance. In this paper, we show the impact of nonuniformity on the performance of this recent model, and we exploit the structure of the model to improve its performance in practical applications, where data are not necessarily uniform. In order to approach the performance of networks with uniformly distributed messages presented in theoretical studies, twin neurons are introduced. To assess the adapted model, twin neurons are used with the real-world data to optimize power consumption of electronic circuits in practical test cases.

  4. Application of territorial GIS to study of natural environment for regions under mining exploitation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirsanov, A.

    1996-07-01

    Mineral resources exploitation has become one of the leading factors of technogenic impact on the natural environment. The processes accompanying exploitation lead to changes in geological/geomorphological, engineering/geological, hydrogeological, geochemical and landscape conditions over large territories surrounding mining districts. The types of environmental changes and disturbances are determined by several factors, such as the kind of exploited resource (ore, petroleum, gas, coal, peat, building materials, etc.); the extraction method (open, by quarry, or closed, by mine); and the natural zone (tundra, taiga, steppe, desert, etc.). Effective detection and monitoring of these environmental changes are impossible without wide use and analysis of various types of multi-temporal materials of airborne and satellite surveys (MASS). They are the basis of a systems approach to environmental study, because an image is a scaled-down spatial model of the territory. For an integrated estimation of natural resources and the perspectives for their economically profitable use, as well as examination of the influence of extraction objects on the natural environment, it is necessary to involve diverse data. Only a territorial GIS permits solving the tasks of collection, storage, processing and analysis of these data, as well as modelling of situations and presentation of the information necessary for decision making. The core of the GIS is the database, which consists of initial remote sensing and cartographic data and allows various information to be obtained in full, providing completeness and objectivity of investigations.

  5. Exploiting similarity in turbulent shear flows for turbulence modeling

    NASA Technical Reports Server (NTRS)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-01-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set, and comparison with measured shear stress and velocity profiles, yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  6. Exploiting similarity in turbulent shear flows for turbulence modeling

    NASA Astrophysics Data System (ADS)

    Robinson, David F.; Harris, Julius E.; Hassan, H. A.

    1992-12-01

    It is well known that current k-epsilon models cannot predict the flow over a flat plate and its wake. In an effort to address this issue and other issues associated with turbulence closure, a new approach for turbulence modeling is proposed which exploits similarities in the flow field. Thus, if we consider the flow over a flat plate and its wake, then in addition to taking advantage of the log-law region, we can exploit the fact that the flow becomes self-similar in the far wake. This latter behavior makes it possible to cast the governing equations as a set of total differential equations. Solutions of this set, and comparison with measured shear stress and velocity profiles, yield the desired set of model constants. Such a set is, in general, different from other sets of model constants. The rationale for such an approach is that if we can correctly model the flow over a flat plate and its far wake, then we have a better chance of predicting the behavior in between. It is to be noted that the approach does not appeal, in any way, to the decay of homogeneous turbulence. This is because the asymptotic behavior of the flow under consideration is not representative of the decay of homogeneous turbulence.

  7. Teaching Students about Biodiversity by Studying the Correlation between Plants & Arthropods

    ERIC Educational Resources Information Center

    Richardson, Matthew L.; Hari, Janice

    2008-01-01

    On Earth there is a huge diversity of arthropods, many of which are highly adaptive and able to exploit virtually every terrestrial habitat. Because of their prevalence even in urban environments, they make an excellent model system for any life science class. Since plants also exploit virtually every terrestrial habitat, studying the relationship…

  8. Learning Compositional Simulation Models

    DTIC Science & Technology

    2010-01-01

    techniques developed by social scientists, economists, and medical researchers over the past four decades. Quasi-experimental designs (QEDs) are...statistical techniques from the social sciences known as quasi-experimental design (QED). QEDs allow a researcher to exploit unique characteristics...can be grouped under the rubric "quasi-experimental design" (QED), and they attempt to exploit inherent characteristics of observational data sets

  9. Toward intensifying design of experiments in upstream bioprocess development: An industrial Escherichia coli feasibility study.

    PubMed

    von Stosch, Moritz; Hamelink, Jan-Martijn; Oliveira, Rui

    2016-09-01

    In this study, step variations in temperature, pH, and carbon substrate feeding rate were performed within five high-cell-density Escherichia coli fermentations to assess whether intraexperiment step changes can, in principle, be used to exploit the process operation space in a design-of-experiments manner. A dynamic process modeling approach was adopted to determine parameter interactions. A bioreactor model was integrated with an artificial neural network that describes biomass and product formation rates as functions of the varied fed-batch fermentation conditions for heterologous protein production. A model reliability measure was introduced to assess in which process region the model can be expected to predict process states accurately. It was found that the model could accurately predict the process states of multiple fermentations performed at fixed conditions within the determined validity domain. The results suggest that intraexperimental variations of process conditions could be used to reduce the number of experiments by a factor which, in the limit, would be equivalent to the number of intraexperimental variations per experiment. © 2016 American Institute of Chemical Engineers. Biotechnol. Prog., 32:1343-1352, 2016.

  10. Closed-form solution of decomposable stochastic models

    NASA Technical Reports Server (NTRS)

    Sjogren, Jon A.

    1990-01-01

    Markov and semi-Markov processes are increasingly being used in the modeling of complex reconfigurable systems (fault tolerant computers). The estimation of the reliability (or some measure of performance) of the system reduces to solving the process for its state probabilities. Such a model may exhibit numerous states and complicated transition distributions, contributing to an expensive and numerically delicate solution procedure. Thus, when a system exhibits a decomposition property, either structurally (autonomous subsystems), or behaviorally (component failure versus reconfiguration), it is desirable to exploit this decomposition in the reliability calculation. In interesting cases there can be failure states which arise from non-failure states of the subsystems. Equations are presented which allow the computation of failure probabilities of the total (combined) model without requiring a complete solution of the combined model. This material is presented within the context of closed-form functional representation of probabilities as utilized in the Symbolic Hierarchical Automated Reliability and Performance Evaluator (SHARPE) tool. The techniques adopted enable one to compute such probability functions for a much wider class of systems at a reduced computational cost. Several examples show how the method is used, especially in enhancing the versatility of the SHARPE tool.
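
    The decomposition idea can be illustrated numerically: solve each small subsystem CTMC in isolation and combine the resulting state probabilities, including failure states that arise from non-failure subsystem states, without ever forming the product-space model. The generators and the combination rule below are hypothetical examples, not SHARPE's own models.

```python
import numpy as np
from scipy.linalg import expm

def state_probs(Q, p0, t):
    """Transient state probabilities of a CTMC with generator Q at time t."""
    return p0 @ expm(Q * t)

# Subsystem A: processor pair (states: 2 up, 1 up, failed), lambda = 1e-3/h (assumed)
Qa = np.array([[-2e-3, 2e-3, 0.0],
               [0.0, -1e-3, 1e-3],
               [0.0, 0.0, 0.0]])
# Subsystem B: bus pair, lambda = 5e-4/h (assumed)
Qb = np.array([[-1e-3, 1e-3, 0.0],
               [0.0, -5e-4, 5e-4],
               [0.0, 0.0, 0.0]])

t = 1000.0
pa = state_probs(Qa, np.array([1.0, 0.0, 0.0]), t)
pb = state_probs(Qb, np.array([1.0, 0.0, 0.0]), t)

# The combined model would have 9 states, but independence lets us combine the
# closed-form subsystem solutions directly. Suppose the system also fails when
# both subsystems are simultaneously in their degraded (1-up) state -- a
# failure state arising from non-failure subsystem states:
p_fail = pa[2] + pb[2] - pa[2] * pb[2] + pa[1] * pb[1]
print(f"system unreliability at t={t:.0f} h: {p_fail:.6f}")
```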

  11. Multiscale modelling for tokamak pedestals

    NASA Astrophysics Data System (ADS)

    Abel, I. G.

    2018-04-01

    Pedestal modelling is crucial to predict the performance of future fusion devices. Current modelling efforts suffer either from a lack of kinetic physics, or an excess of computational complexity. To ameliorate these problems, we take a first-principles multiscale approach to the pedestal. We will present three separate sets of equations, covering the dynamics of edge localised modes (ELMs), the inter-ELM pedestal and pedestal turbulence, respectively. Precisely how these equations should be coupled to each other is covered in detail. This framework is completely self-consistent; it is derived from first principles by means of an asymptotic expansion of the fundamental Vlasov-Landau-Maxwell system in appropriate small parameters. The derivation exploits the narrowness of the pedestal region, the smallness of the thermal gyroradius, and the low plasma β (the ratio of thermal to magnetic pressures) typical of current pedestal operation to achieve its simplifications. The relationship between this framework and gyrokinetics is analysed, and possibilities to directly match our systems of equations onto multiscale gyrokinetics are explored. A detailed comparison between our model and other models in the literature is performed. Finally, the potential for matching this framework onto an open-field-line region is briefly discussed.

  12. Comparative study on the efficiency of some optical methods for artwork diagnostics

    NASA Astrophysics Data System (ADS)

    Schirripa Spagnolo, Giuseppe; Ambrosini, Dario; Paoletti, Domenica

    2001-10-01

    Scientific investigation methods are founding their place besides the stylistic-historical study methods in art research works. In particular, optical techniques, transferred from other fields or developed ad hoc, can make a strong contribution to the safeguarding and exploitation of cultural heritage. This paper describes the use of different optical techniques, such as holographic interferometry, decorrelation, shearography and ESPI, in the diagnostics of works of art. A comparison between different methods is obtained by performing tests on specially designed models, prepared using typical techniques and materials. Inside the model structure, a number of defects of known types, form and extension are inserted. The different features of each technique are outlined and a comparison with IR thermography is also carried out.

  13. Search for neutral resonances decaying into a Z boson and a pair of b jets or τ leptons

    DOE PAGES

    Khachatryan, Vardan

    2016-05-31

    A search is performed for a new resonance decaying into a lighter resonance and a Z boson. Two channels are studied, targeting the decay of the lighter resonance into either a pair of oppositely charged tau leptons or a b-bbar pair. The Z boson is identified via its decays to electrons or muons. The search exploits data collected by the CMS experiment at a centre-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 19.8 fb⁻¹. No significant deviations are observed from the standard model expectation, and limits are set on production cross sections and parameters of two-Higgs-doublet models.

  14. Genetically Engineered Microelectronic Infrared Filters

    NASA Technical Reports Server (NTRS)

    Cwik, Tom; Klimeck, Gerhard

    1998-01-01

    A genetic algorithm is used for the design of infrared filters and in the understanding of the material structure of a resonant tunneling diode. These two components are examples of microdevices and nanodevices that can be numerically simulated using fundamental mathematical and physical models. Because the number of parameters that can be used in the design of one of these devices is large, and because experimental exploration of the design space is infeasible, reliable software models integrated with global optimization methods are examined. The genetic algorithm and engineering design codes have been implemented on massively parallel computers to exploit their high performance. Design results are presented for the infrared filter, showing a new and optimized device design. Results for nanodevices are presented in a companion paper at this workshop.
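
    As a sketch of how a genetic algorithm drives such a design loop, the snippet below evolves a vector of filter layer thicknesses against a fitness function. The fitness used here is a made-up placeholder (the actual design codes evaluate full electromagnetic simulations of the filter response), and the population size, mutation scale, and target values are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        N_LAYERS, POP, GENS = 8, 40, 100

        def fitness(thicknesses):
            # placeholder objective: in a real design code this would be a
            # full electromagnetic simulation of the filter's spectral response
            target = np.linspace(0.5, 2.0, N_LAYERS)   # hypothetical target stack
            return -np.sum((thicknesses - target) ** 2)

        pop = rng.uniform(0.1, 3.0, size=(POP, N_LAYERS))  # thicknesses, microns
        for gen in range(GENS):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][: POP // 2]]  # truncation selection
            kids = []
            for _ in range(POP - len(parents)):
                a, b = parents[rng.integers(len(parents), size=2)]
                cut = rng.integers(1, N_LAYERS)        # single-point crossover
                kids.append(np.concatenate([a[:cut], b[cut:]]))
            pop = np.vstack([parents, kids])
            pop += rng.normal(scale=0.02, size=pop.shape)  # Gaussian mutation
        print("best design:", pop[np.argmax([fitness(i) for i in pop])])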

  15. Dissipative environment may improve the quantum annealing performances of the ferromagnetic p-spin model

    NASA Astrophysics Data System (ADS)

    Passarelli, G.; De Filippis, G.; Cataudella, V.; Lucignano, P.

    2018-02-01

    We investigate the quantum annealing of the ferromagnetic p-spin model in a dissipative environment (p = 5 and p = 7). This model, in the large-p limit, codifies Grover's algorithm for searching in an unsorted database [L. K. Grover, Proceedings of the 28th Annual ACM Symposium on Theory of Computing (ACM, New York, 1996), pp. 212-219]. The dissipative environment is described by a phonon bath in thermal equilibrium at finite temperature. The dynamics is studied in the framework of a Lindblad master equation for the reduced density matrix describing only the spins. Exploiting the symmetries of our model Hamiltonian, we can describe many spins and extrapolate expected trends for large N and p. While at weak system-bath coupling the dissipative environment has detrimental effects on the annealing results, we show that in the intermediate-coupling regime, the phonon bath seems to speed up the annealing at low temperatures. This improvement in the performance is likely not due to thermal fluctuation but rather arises from a correlated spin-bath state and persists even at zero temperature. This result may pave the way to a new scenario in which, by appropriately engineering the system-bath coupling, one may optimize quantum annealing performances below either the purely quantum or the classical limit.

  16. Change Detection Analysis of Water Pollution in Coimbatore Region using Different Color Models

    NASA Astrophysics Data System (ADS)

    Jiji, G. Wiselin; Devi, R. Naveena

    2017-12-01

    The data acquired through remote sensing satellites furnish facts about the land and water at varying resolutions and have been widely used for several change detection studies. Although many change detection methodologies and techniques already exist, new ones continue to emerge. Existing change detection techniques exploit images that are either in gray scale or in the RGB color model. In this paper we introduce additional color models for performing change detection for water pollution. Polluted lakes are classified, post-classification change detection techniques are applied to the RGB images, and the results are analysed for whether changes exist. Furthermore, RGB images obtained after classification, when converted to either of the two color models YCbCr and YIQ, are found to produce the same results as the RGB model images. Thus it can be concluded that other color models like YCbCr and YIQ can be used as substitutes for the RGB color model when analysing change detection with regard to water pollution.
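
    The color-space conversions the study relies on are linear transforms of RGB. Below is a minimal sketch using the standard ITU-R BT.601 coefficients; the paper does not specify which coefficient set it used, so that choice is an assumption.

        import numpy as np

        def rgb_to_ycbcr(img):  # img: H x W x 3 float array in [0, 255]
            m = np.array([[ 0.299,  0.587,  0.114],    # Y  (BT.601 luma)
                          [-0.169, -0.331,  0.500],    # Cb
                          [ 0.500, -0.419, -0.081]])   # Cr
            out = img @ m.T
            out[..., 1:] += 128.0                      # offset chroma channels
            return out

        def rgb_to_yiq(img):
            m = np.array([[0.299,  0.587,  0.114],     # Y
                          [0.596, -0.274, -0.322],     # I
                          [0.211, -0.523,  0.312]])    # Q
            return img @ m.T

        # post-classification change detection then reduces to comparing the
        # two dates' classified maps channel by channel in the chosen space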

  17. GPU-Accelerated Molecular Modeling Coming Of Age

    PubMed Central

    Stone, John E.; Hardy, David J.; Ufimtsev, Ivan S.

    2010-01-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. PMID:20675161

  18. Unsupervised Unmixing of Hyperspectral Images Accounting for Endmember Variability.

    PubMed

    Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2015-12-01

    This paper presents an unsupervised Bayesian algorithm for hyperspectral image unmixing, accounting for endmember variability. The pixels are modeled by a linear combination of endmembers weighted by their corresponding abundances. However, the endmembers are assumed random in order to capture their variability across the image. Additive noise is also included in the proposed model, generalizing the normal compositional model. The proposed algorithm exploits the whole image to benefit from both spectral and spatial information. It estimates both the mean and the covariance matrix of each endmember in the image. This allows the behavior of each material to be analyzed and its variability to be quantified in the scene. A spatial segmentation is also obtained based on the estimated abundances. In order to estimate the parameters associated with the proposed Bayesian model, we propose to use a Hamiltonian Monte Carlo algorithm. The performance of the resulting unmixing strategy is evaluated through simulations conducted on both synthetic and real data.
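
    A minimal generative sketch of the model described above, i.e., per-pixel random endmembers plus additive noise generalizing the normal compositional model. The band count, endmember statistics, and noise level are illustrative assumptions, and the actual inference (Hamiltonian Monte Carlo over means, covariances, and abundances) is omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        L, R, N = 50, 3, 1000                 # bands, endmembers, pixels

        m_mean = rng.uniform(0, 1, size=(L, R))        # endmember means
        m_cov = [0.001 * np.eye(L) for _ in range(R)]  # per-endmember variability

        def draw_pixel():
            a = rng.dirichlet(np.ones(R))              # abundances on the simplex
            # each pixel sees its own random realization of every endmember
            M = np.column_stack([rng.multivariate_normal(m_mean[:, r], m_cov[r])
                                 for r in range(R)])
            noise = rng.normal(scale=0.01, size=L)     # additive noise term
            return M @ a + noise

        Y = np.column_stack([draw_pixel() for _ in range(N)])
        print(Y.shape)   # Bayesian inference would then run on Y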

  19. GPU-accelerated molecular modeling coming of age.

    PubMed

    Stone, John E; Hardy, David J; Ufimtsev, Ivan S; Schulten, Klaus

    2010-09-01

    Graphics processing units (GPUs) have traditionally been used in molecular modeling solely for visualization of molecular structures and animation of trajectories resulting from molecular dynamics simulations. Modern GPUs have evolved into fully programmable, massively parallel co-processors that can now be exploited to accelerate many scientific computations, typically providing about one order of magnitude speedup over CPU code and in special cases providing speedups of two orders of magnitude. This paper surveys the development of molecular modeling algorithms that leverage GPU computing, the advances already made and remaining issues to be resolved, and the continuing evolution of GPU technology that promises to become even more useful to molecular modeling. Hardware acceleration with commodity GPUs is expected to benefit the overall computational biology community by bringing teraflops performance to desktop workstations and in some cases potentially changing what were formerly batch-mode computational jobs into interactive tasks. (c) 2010 Elsevier Inc. All rights reserved.

  20. Investigation of transfer length, development length, flexural strength, and prestress losses in lightweight prestressed concrete girders.

    DOT National Transportation Integrated Search

    2003-01-01

    Encouraged by the performance of high performance normal weight composite girders, the Virginia Department of Transportation has sought to exploit the use of high performance lightweight composite concrete (HPLWC) girders to achieve economies brought...

  1. Transformation-aware Exploit Generation using a HI-CFG

    DTIC Science & Technology

    2013-05-16

    testing has many limitations of its own: it can require significant target-specific setup to perform well; it is unlikely to trigger vulnerabilities...check fails represents a potential vulnerability, but a conservative analysis can produce false positives, so we can use exploit generation to find...warnings that correspond to true positives. We can also find potentially vulnerable instructions in the course of a manual binary-level security audit

  2. Experiments on Adaptive Techniques for Host-Based Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DRAELOS, TIMOTHY J.; COLLINS, MICHAEL J.; DUGGAN, DAVID P.

    2001-09-01

    This research explores four experiments of adaptive host-based intrusion detection (ID) techniques in an attempt to develop systems that can detect novel exploits. The technique considered to have the most potential is adaptive critic designs (ACDs) because of their utilization of reinforcement learning, which allows learning exploits that are difficult to pinpoint in sensor data. Preliminary results of ID using an ACD, an Elman recurrent neural network, and a statistical anomaly detection technique demonstrate an ability to learn to distinguish between clean and exploit data. We used the Solaris Basic Security Module (BSM) as a data source and performed considerable preprocessing on the raw data. A detection approach called generalized signature-based ID is recommended as a middle ground between signature-based ID, which has an inability to detect novel exploits, and anomaly detection, which detects too many events including events that are not exploits. The primary results of the ID experiments demonstrate the use of custom data for generalized signature-based intrusion detection and the ability of neural network-based systems to learn in this application environment.

  3. INDIGO-DataCloud solutions for Earth Sciences

    NASA Astrophysics Data System (ADS)

    Aguilar Gómez, Fernando; de Lucas, Jesús Marco; Fiore, Sandro; Monna, Stephen; Chen, Yin

    2017-04-01

    INDIGO-DataCloud (https://www.indigo-datacloud.eu/) is a European Commission funded project aiming to develop a data and computing platform targeting scientific communities, deployable on multiple hardware platforms and provisioned over hybrid (private or public) e-infrastructures. The development of INDIGO solutions covers the different layers of cloud computing (IaaS, PaaS, SaaS) and provides tools to exploit resources like HPC or GPGPUs. INDIGO is oriented to support European scientific research communities, which are well represented in the project. Twelve Case Studies from different fields have been analyzed in detail: Biological & Medical sciences, Social sciences & Humanities, Environmental and Earth sciences, and Physics & Astrophysics. INDIGO-DataCloud provides solutions to emerging challenges in Earth Science such as:
    - Enabling an easy deployment of community services at different cloud sites. Many Earth Science research infrastructures involve observation stations distributed across countries, with distributed data centers supporting the corresponding data acquisition and curation. There is a need to easily deploy new data center services as the research infrastructure continues to expand. As an example, LifeWatch (ESFRI, Ecosystems and Biodiversity) uses INDIGO solutions to manage the deployment of services performing complex hydrodynamics and water quality modelling over a cloud computing environment, predicting algae blooms, using Docker technology: TOSCA requirement descriptions, a Docker repository, an Orchestrator for deployment, AAI (AuthN, AuthZ) and OneData (a distributed storage system).
    - Supporting Big Data analysis. Many Earth Science research communities produce large amounts of data and are challenged by the difficulties of processing and analysing it. A climate-model intercomparison data analysis case study for the European Network for Earth System Modelling (ENES) community has been set up, based on the Ophidia big data analysis framework and the Kepler workflow management system. Such services normally involve a large and distributed set of data and computing resources; this case study exploits the INDIGO PaaS for a flexible and dynamic allocation of resources at the infrastructural level.
    - Providing distributed data storage solutions. To allow scientific communities to perform heavy computation on huge datasets, INDIGO provides global data access solutions allowing researchers to access data in a distributed environment regardless of its location, and to publish and share their research results with public or closed communities. INDIGO solutions supporting access to distributed data storage (OneData) are being tested on data from the EMSO infrastructure (Ocean Sciences and Geohazards). Another aspect of interest for the EMSO community is efficient data processing exploiting INDIGO services like the PaaS Orchestrator. Further, for HPC exploitation, a new solution named Udocker has been implemented, enabling users to execute Docker containers on supercomputers without requiring administration privileges.
    This presentation overviews the INDIGO solutions that are interesting and useful for Earth science communities and shows how they can be applied to other Case Studies.

  4. Defining the uncertainty of electro-optical identification system performance estimates using a 3D optical environment derived from satellite

    NASA Astrophysics Data System (ADS)

    Ladner, S. D.; Arnone, R.; Casey, B.; Weidemann, A.; Gray, D.; Shulman, I.; Mahoney, K.; Giddings, T.; Shirron, J.

    2009-05-01

    Current United States Navy Mine-Counter-Measure (MCM) operations primarily use electro-optical identification (EOID) sensors to identify underwater targets after detection via acoustic sensors. These EOID sensors, which are based on laser underwater imaging, by design work best in "clear" waters and are limited in coastal waters, especially where strong optical layers are present. Optical properties, in particular scattering and absorption, play an important role in system performance. Surface optical properties from satellite alone are not adequate to determine how well a system will perform at depth, due to the existence of optical layers. Knowledge of the spatial and temporal characteristics of the 3-D optical variability of coastal waters, along with the strength and location of subsurface optical layers, maximizes the chances of identifying underwater targets by enabling optimum sensor deployment. Advanced methods have been developed to fuse the optical measurements from gliders, optical properties from "surface" satellite snapshots, and 3-D ocean circulation models to extend the two-dimensional (2-D) surface satellite optical image into a three-dimensional (3-D) optical volume with subsurface optical layers. Modifications were made to an EOID performance model to take as input a 3-D optical volume covering an entire region of interest and to derive a system performance field. These enhancements extend the present capability, based on glider optics and EOID sensor models, to estimate the system's "image quality", which otherwise yields system performance information only for a single glider profile location in a very large operational region. Finally, we define the uncertainty of the system performance by coupling the EOID performance model with the 3-D optical volume uncertainties. Knowing the ensemble spread of the EOID performance field provides a new and unique capability for tactical decision makers and Navy operations.

  5. Isosemantic rendering of clinical information using formal ontologies and RDF.

    PubMed

    Martínez-Costa, Catalina; Bosca, Diego; Legaz-García, Mari Carmen; Tao, Cui; Fernández Breis, Jesualdo Tomás; Schulz, Stefan; Chute, Christopher G

    2013-01-01

    The generation of a semantic clinical infostructure requires linking ontologies, clinical models and terminologies [1]. Here we describe an approach that would permit data coming from different sources and represented in different standards to be queried in a homogeneous and integrated way. Our assumption is that data providers should be able to agree and share the meaning of the data they want to exchange and to exploit. We will describe how Clinical Element Model (CEM) and OpenEHR datasets can be jointly exploited in Semantic Web environments.

  6. Game theoretic approach for cooperative feature extraction in camera networks

    NASA Astrophysics Data System (ADS)

    Redondi, Alessandro E. C.; Baroffio, Luca; Cesana, Matteo; Tagliasacchi, Marco

    2016-07-01

    Visual sensor networks (VSNs) consist of several camera nodes with wireless communication capabilities that can perform visual analysis tasks such as object identification, recognition, and tracking. Often, VSN deployments result in many camera nodes with overlapping fields of view. In the past, such redundancy has been exploited in two different ways: (1) to improve the accuracy/quality of the visual analysis task by exploiting multiview information or (2) to reduce the energy consumed for performing the visual task, by applying temporal scheduling techniques among the cameras. We propose a game theoretic framework based on the Nash bargaining solution to bridge the gap between the two aforementioned approaches. The key tenet of the proposed framework is for cameras to reduce the consumed energy in the analysis process by exploiting the redundancy in the reciprocal fields of view. Experimental results in both simulated and real-life scenarios confirm that the proposed scheme is able to increase the network lifetime, with a negligible loss in terms of visual analysis accuracy.

  7. Study on the stress changes due to the regional groundwater exploitation based on a 3-D fully coupled poroelastic model: An example of the North China Plain

    NASA Astrophysics Data System (ADS)

    Cheng, H.; Zhang, H.; Pang, Y. J.; Shi, Y.

    2017-12-01

    With rapid urban development, over-exploitation of groundwater resources has become more and more intense, leading not only to widespread groundwater depression cones but also to a series of severe environmental and geological hazards. Among these, the most visible phenomenon is ground subsidence in loose sediments. However, another direct consequence of groundwater depletion is substantial crustal deformation and potential modulation of crustal stress underneath the groundwater over-pumping zones. In our previous 3-D viscoelastic finite element model, we found that continuous over-exploitation of groundwater resources in the North China Plain during the past 60 years gave rise to crustal-scale uplift reaching 4.9 cm, with the Coulomb failure stress decreasing by up to 12 kPa, which may inhibit the nucleation of possible big earthquake events. Furthermore, according to the effective pressure principle and lab experiments, the pore pressure may also have changed due to the reduced water level. In order to quantitatively analyze the stress changes due to regional groundwater exploitation in the North China Plain, a three-dimensional fully coupled poroelastic finite element model is developed in this study. High-resolution topography, groundwater level fluctuations, fault parameters, etc., are taken into consideration. Further, the changes of Coulomb failure stress corresponding to the elastic stress and pore pressure changes induced by fluid diffusion are calculated, along with the elastic strain energy accumulation in the region due to groundwater exploitation. Finally, we analyze the seismic risk of major faults within the North China Plain to further discuss regional seismic activity.
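
    For reference, the Coulomb failure stress change evaluated in such models is commonly written as follows (sign conventions vary between studies, so attributing this exact form to the paper is an assumption):

        \Delta\mathrm{CFS} = \Delta\tau + \mu\,(\Delta\sigma_n + \Delta p)

    where Δτ is the shear stress change in the slip direction, Δσ_n the fault-normal stress change (tension positive), Δp the pore pressure change, and μ the friction coefficient; positive ΔCFS moves a fault toward failure, while negative values, as reported above, move it away.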

  8. Online Farsi digit recognition using their upper half structure

    NASA Astrophysics Data System (ADS)

    Ghods, Vahid; Sohrabi, Mohammad Karim

    2015-03-01

    In this paper, we investigated the efficiency of using the upper half of the Farsi numerical digit structure. In other words, half of the data (the upper half of the digit shapes) was exploited for the recognition of Farsi numerical digits. This method can be used for both offline and online recognition. Using half of the data is more efficient for processing speed and data transfer and, in this application, for accuracy. A hidden Markov model (HMM) was used to classify online Farsi digits. Evaluation was performed on the TMU dataset, which contains more than 1200 samples of online handwritten Farsi digits. The proposed method yielded a higher recognition rate.
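
    A minimal sketch of the classification scheme, one Gaussian HMM per digit class with a maximum-likelihood decision, here using the hmmlearn package. The feature extraction from the upper-half pen trajectory is reduced to synthetic placeholder sequences, and the state count is an illustrative assumption.

        import numpy as np
        from hmmlearn import hmm

        def train_digit_models(features_by_digit, n_states=3):
            # features_by_digit: dict digit -> list of (T_i x D) trajectory
            # arrays, assumed here to be samples from the upper half-stroke
            models = {}
            for digit, seqs in features_by_digit.items():
                X = np.vstack(seqs)
                lengths = [len(s) for s in seqs]
                m = hmm.GaussianHMM(n_components=n_states,
                                    covariance_type="diag", n_iter=20)
                m.fit(X, lengths)
                models[digit] = m
            return models

        def classify(models, seq):
            # pick the digit whose HMM assigns the highest log-likelihood
            return max(models, key=lambda d: models[d].score(seq))

        rng = np.random.default_rng(7)
        demo = {d: [rng.normal(d, 0.3, size=(20, 2)) for _ in range(5)]
                for d in range(3)}          # synthetic stand-in trajectories
        models = train_digit_models(demo)
        print(classify(models, demo[1][0])) # expect 1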

  9. Robust Kriged Kalman Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baingana, Brian; Dall'Anese, Emiliano; Mateos, Gonzalo

    2015-11-11

    Although the kriged Kalman filter (KKF) has well-documented merits for prediction of spatial-temporal processes, its performance degrades in the presence of outliers due to anomalous events, or measurement equipment failures. This paper proposes a robust KKF model that explicitly accounts for presence of measurement outliers. Exploiting outlier sparsity, a novel l1-regularized estimator that jointly predicts the spatial-temporal process at unmonitored locations, while identifying measurement outliers is put forth. Numerical tests are conducted on a synthetic Internet protocol (IP) network, and real transformer load data. Test results corroborate the effectiveness of the novel estimator in joint spatial prediction and outlier identification.
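
    The flavour of the outlier-aware estimation can be conveyed by a toy linear-regression analogue: alternately fit the nominal model by least squares and soft-threshold the residuals to isolate sparse outliers. This block-coordinate sketch mimics the l1 mechanism but is not the authors' KKF algorithm; the problem sizes and penalty value are assumptions.

        import numpy as np

        def soft(z, t):
            return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

        def robust_fit(H, y, lam=0.5, iters=50):
            # model: y = H x + o + noise, with o sparse (the outliers)
            o = np.zeros_like(y)
            for _ in range(iters):
                x, *_ = np.linalg.lstsq(H, y - o, rcond=None)  # nominal part
                o = soft(y - H @ x, lam)                       # sparse outliers
            return x, o

        rng = np.random.default_rng(2)
        H = rng.normal(size=(100, 5))
        y = H @ rng.normal(size=5) + 0.05 * rng.normal(size=100)
        y[[3, 42, 77]] += 5.0                 # inject anomalous measurements
        x_hat, o_hat = robust_fit(H, y)
        print(np.nonzero(o_hat)[0])           # indices flagged as outliers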

  10. Parallel solution of sparse one-dimensional dynamic programming problems

    NASA Technical Reports Server (NTRS)

    Nicol, David M.

    1989-01-01

    Parallel computation offers the potential for quickly solving large computational problems. However, it is often a non-trivial task to effectively use parallel computers. Solution methods must sometimes be reformulated to exploit parallelism; the reformulations are often more complex than their slower serial counterparts. We illustrate these points by studying the parallelization of sparse one-dimensional dynamic programming problems, those which do not obviously admit substantial parallelization. We propose a new method for parallelizing such problems, develop analytic models which help us to identify problems which parallelize well, and compare the performance of our algorithm with existing algorithms on a multiprocessor.

  11. ATLAS computing on Swiss Cloud SWITCHengines

    NASA Astrophysics Data System (ADS)

    Haug, S.; Sciacca, F. G.; ATLAS Collaboration

    2017-10-01

    Consolidation towards more computing at flat budgets, beyond what pure chip technology can offer, is a requirement for the full scientific exploitation of the future data from the Large Hadron Collider at CERN in Geneva. One consolidation measure is to exploit cloud infrastructures whenever they are financially competitive. We report on the technical solutions used and the performances achieved running simulation tasks for the ATLAS experiment on SWITCHengines, a new infrastructure-as-a-service offering to Swiss academia by the National Research and Education Network SWITCH. While the solutions and performances are general, the financial considerations and policies, on which we also report, are country specific.

  12. Multi-Criteria Approach in Multifunctional Building Design Process

    NASA Astrophysics Data System (ADS)

    Gerigk, Mateusz

    2017-10-01

    The paper presents a new approach to the multifunctional building design process and defines problems related to the design of complex multifunctional buildings. Contemporary urban areas are characterized by very intensive use of space: buildings are being built bigger and contain more diverse functions to meet the needs of a large number of users within one volume. These trends show the need to treat design objects as an organized structure that must meet current design criteria. The design process in terms of a complex system is a theoretical model, which is the basis for optimizing solutions over the entire life cycle of the building. From the concept phase through the exploitation phase to the disposal phase, multipurpose spaces should guarantee aesthetics, functionality, system efficiency, system safety and environmental protection in the best possible way. The result of the analysis of the design process is presented as a theoretical model of the multifunctional structure. Expressing the multi-criteria model in the form of a Cartesian product allows a holistic representation of the designed building to be created in the form of a graph model. The proposed network is the theoretical base that can be used in the design process of complex engineering systems. The systematic multi-criteria approach makes it possible to maintain control over the entire design process and to provide the best possible performance. With respect to current design requirements, there are no established design rules for multifunctional buildings in relation to their operating phase. Enriching the basic criteria with a functional flexibility criterion makes it possible to extend the exploitation phase, which brings advantages on many levels.

  13. Delay-induced Turing-like waves for one-species reaction-diffusion model on a network

    NASA Astrophysics Data System (ADS)

    Petit, Julien; Carletti, Timoteo; Asllani, Malbor; Fanelli, Duccio

    2015-09-01

    A one-species time-delay reaction-diffusion system defined on a complex network is studied. Traveling waves are predicted to occur following a symmetry-breaking instability of a homogeneous stationary stable solution, subject to an external nonhomogeneous perturbation. These are generalized Turing-like waves that materialize in a single-species population dynamics model, as the unexpected byproduct of the imposed delay in the diffusion part. Sufficient conditions for the onset of the instability are mathematically provided by performing a linear stability analysis adapted to time-delayed differential equations. The method developed here exploits the properties of the Lambert W-function. The predictions of the theory are confirmed by direct numerical simulation carried out for a modified version of the classical Fisher model, defined on a Watts-Strogatz network and with the inclusion of the delay.
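
    The role of the Lambert W-function is already visible in the scalar linearized delay equation x'(t) = -a x(t - tau), whose dominant characteristic root is W(-a*tau)/tau, so stability is read off the real part of the principal branch. A sketch of this toy scalar case (not the network model itself) follows.

        import numpy as np
        from scipy.special import lambertw

        def dominant_root(a, tau):
            # characteristic equation: lambda = -a * exp(-lambda * tau)
            # => lambda*tau * exp(lambda*tau) = -a*tau
            # => lambda = W(-a*tau) / tau  (principal branch dominates)
            return lambertw(-a * tau, k=0) / tau

        for atau in (0.3, 1.0, 1.6):           # instability sets in at pi/2
            lam = dominant_root(1.0, atau)     # fix a = 1, vary the delay
            print(f"a*tau = {atau}: Re(lambda) = {lam.real:+.3f}",
                  "stable" if lam.real < 0 else "unstable")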

  14. Understanding intratumor heterogeneity by combining genome analysis and mathematical modeling.

    PubMed

    Niida, Atsushi; Nagayama, Satoshi; Miyano, Satoru; Mimori, Koshi

    2018-04-01

    Cancer is composed of multiple cell populations with different genomes. This phenomenon called intratumor heterogeneity (ITH) is supposed to be a fundamental cause of therapeutic failure. Therefore, its principle-level understanding is a clinically important issue. To achieve this goal, an interdisciplinary approach combining genome analysis and mathematical modeling is essential. For example, we have recently performed multiregion sequencing to unveil extensive ITH in colorectal cancer. Moreover, by employing mathematical modeling of cancer evolution, we demonstrated that it is possible that this ITH is generated by neutral evolution. In this review, we introduce recent advances in a research field related to ITH and also discuss strategies for exploiting novel findings on ITH in a clinical setting. © 2018 The Authors. Cancer Science published by John Wiley & Sons Australia, Ltd on behalf of Japanese Cancer Association.

  15. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    NASA Astrophysics Data System (ADS)

    Gelß, Patrick; Matera, Sebastian; Schütte, Christof

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first-principles-based reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.

  16. Exploiting Publication Contents and Collaboration Networks for Collaborator Recommendation

    PubMed Central

    Kong, Xiangjie; Jiang, Huizhen; Yang, Zhuo; Xu, Zhenzhen; Xia, Feng; Tolba, Amr

    2016-01-01

    Thanks to the proliferation of online social networks, it has become conventional for researchers to communicate and collaborate with each other. Meanwhile, one critical challenge arises, that is, how to find the most relevant and potential collaborators for each researcher? In this work, we propose a novel collaborator recommendation model called CCRec, which combines the information on researchers’ publications and collaboration network to generate better recommendation. In order to effectively identify the most potential collaborators for researchers, we adopt a topic clustering model to identify the academic domains, as well as a random walk model to compute researchers’ feature vectors. Using DBLP datasets, we conduct benchmarking experiments to examine the performance of CCRec. The experimental results show that CCRec outperforms other state-of-the-art methods in terms of precision, recall and F1 score. PMID:26849682

  17. A semiparametric graphical modelling approach for large-scale equity selection.

    PubMed

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
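
    A minimal sketch of such a rank-based pipeline: estimate a latent Gaussian correlation matrix from Kendall's tau via the sine bridge used for elliptical copulas, then infer a sparse independence structure with the graphical lasso. The data, universe size, and regularization level are illustrative assumptions, and the paper's actual estimator is more elaborate.

        import numpy as np
        from scipy.stats import kendalltau
        from sklearn.covariance import graphical_lasso

        def latent_correlation(returns):
            # nonparanormal bridge: Sigma_jk = sin(pi/2 * Kendall tau_jk)
            # (project to the nearest PSD matrix first if needed)
            p = returns.shape[1]
            S = np.eye(p)
            for j in range(p):
                for k in range(j + 1, p):
                    tau, _ = kendalltau(returns[:, j], returns[:, k])
                    S[j, k] = S[k, j] = np.sin(0.5 * np.pi * tau)
            return S

        rng = np.random.default_rng(3)
        X = rng.standard_t(df=4, size=(500, 10))   # heavy-tailed toy returns
        S = latent_correlation(X)
        cov, prec = graphical_lasso(S, alpha=0.1)  # sparse precision matrix
        # stocks whose precision rows have few off-diagonal nonzeros are the
        # most nearly independent: candidates for the rebalancing portfolio
        degree = (np.abs(prec) > 1e-6).sum(axis=1) - 1
        print(np.argsort(degree)[:5])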

  18. Development of a simulation model of semi-active suspension for monorail

    NASA Astrophysics Data System (ADS)

    Hasnan, K.; Didane, D. H.; Kamarudin, M. A.; Bakhsh, Qadir; Abdulmalik, R. E.

    2016-11-01

    The new Kuala Lumpur Monorail Fleet Expansion Project (KLMFEP) uses semi-active technology in its suspension system. It is recognized that the suspension system influences ride quality; thus, among the ways to further improve ride quality is fine-tuning the semi-active suspension system on the new KL Monorail. The semi-active suspension for the monorail could be exploited further, specifically in terms of improving ride quality. Hence a simulation model is required that will act as a platform to test the design of a complete suspension system, particularly to investigate ride comfort performance. MSC Adams software was chosen as the tool to develop the simulation platform, where all parameters and data are represented by mathematical equations, with the new KL Monorail as the reference model. In the simulation, the model went through a step disturbance on the guideway for stability and ride comfort analysis. The model has shown positive results: the monorail is in stable condition as the outcome of the stability analysis, and it scores a Rating 1 classification in ISO 2631 ride comfort performance (very comfortable) as the overall outcome of the ride comfort analysis. The model is also adjustable, flexible, and understandable by engineers in the field for the purpose of further development.

  19. Numerical simulations of highly buoyant flows in the Castel Giorgio - Torre Alfina deep geothermal reservoir

    NASA Astrophysics Data System (ADS)

    Volpi, Giorgio; Crosta, Giovanni B.; Colucci, Francesca; Fischer, Thomas; Magri, Fabien

    2017-04-01

    Geothermal heat is a viable source of energy and its environmental impact in terms of CO2 emissions is significantly lower than that of conventional fossil fuels. However, its present utilization is inconsistent with the enormous amount of energy available underneath the surface of the earth. This is mainly due to the uncertainties associated with it, for example the lack of appropriate computational tools necessary to perform effective analyses. The aim of the present study is to build an accurate 3D numerical model to simulate the exploitation process of the deep geothermal reservoir of Castel Giorgio - Torre Alfina (central Italy), and to compare the results and performance of parallel simulations performed with TOUGH2 (Pruess et al. 1999), FEFLOW (Diersch 2014) and the open-source software OpenGeoSys (Kolditz et al. 2012). Detailed geological, structural and hydrogeological data, available for the selected area since the early 1970s, show that Castel Giorgio - Torre Alfina is a potential geothermal reservoir with high thermal characteristics (120-150 °C) and fluids such as pressurized water and gas, mainly CO2, hosted in a carbonate formation. Our two-step simulations first recreate the undisturbed natural state of the system and then perform a predictive analysis of the industrial exploitation process. The three adopted codes showed strong numerical accuracy, which was verified by comparing simulated and measured temperature and pressure values at the geothermal wells in the area. The results of our simulations demonstrate the sustainability of the investigated geothermal field for the development of a 5 MW pilot plant with total fluid reinjection into the original formation. From the thermal point of view, a very efficient buoyant circulation inside the geothermal system has been observed, allowing the reservoir to support the hypothesis of a 50-year production period with a flow rate of 1050 t/h. Furthermore, at the modeled distances our simulations showed no interference effects between the production and reinjection wells. Besides providing valuable guidelines for future exploitation of the Castel Giorgio - Torre Alfina deep geothermal reservoir, this example also highlights the broad applicability and high performance of the OpenGeoSys open-source code in handling coupled hydro-thermal simulations. REFERENCES: Diersch, H. J. (2014). FEFLOW Finite Element Modeling of Flow, Mass and Heat Transport in Porous and Fractured Media. Springer-Verlag Berlin Heidelberg, ISBN 978-3-642-38738-8. Kolditz, O., Bauer, S., Bilke, L., Böttcher, N., Delfs, J. O., Fischer, T., Görke, U. J., Kalbacher, T., Kosakowski, G., McDermott, C. I., Park, C. H., Radu, F., Rink, K., Shao, H., Shao, H. B., Sun, F., Sun, Y., Sun, A., Singh, K., Taron, J., Walther, M., Wang, W., Watanabe, N., Wu, Y., Xie, M., Xu, W., Zehner, B. (2012). OpenGeoSys: an open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THM/C) processes in porous media. Environmental Earth Sciences, 67(2), 589-599. Pruess, K., Oldenburg, C. M., & Moridis, G. J. (1999). TOUGH2 user's guide version 2. Lawrence Berkeley National Laboratory.

  20. Semantic Repositories for eGovernment Initiatives: Integrating Knowledge and Services

    NASA Astrophysics Data System (ADS)

    Palmonari, Matteo; Viscusi, Gianluigi

    In recent years, public sector investments in eGovernment initiatives have depended on making existing governmental ICT systems and infrastructures more reliable. Furthermore, we are witnessing a change in the focus of public sector management, from the disaggregation, competition and performance measurement typical of the New Public Management (NPM) to new models of governance aiming at the reintegration of services under a new perspective on bureaucracy, namely a holistic approach to policy making which exploits the extensive digitalization of administrative operations. In this scenario, major challenges relate to supporting effective access to information both at the front-end level, by means of highly modular and customizable content provision, and at the back-end level, by means of information integration initiatives. Repositories of information about data and services that exploit semantic models and technologies can support these goals by bridging the gap between data-level representations and the human-level knowledge involved in accessing information and in searching for services. Moreover, semantic repository technologies can reach a new level of automation for different tasks involved in interoperability programs, related both to data integration techniques and to service-oriented computing approaches. In this chapter, we discuss the above topics by referring to techniques and experiences where repositories based on conceptual models and ontologies are used at different levels in eGovernment initiatives: at the back-end level to produce a comprehensive view of the information managed in the public administrations' (PA) information systems, and at the front-end level to support effective service delivery.

  1. Performance and scalability of Fourier domain optical coherence tomography acceleration using graphics processing units.

    PubMed

    Li, Jian; Bloch, Pavel; Xu, Jing; Sarunic, Marinko V; Shannon, Lesley

    2011-05-01

    Fourier domain optical coherence tomography (FD-OCT) provides faster line rates, better resolution, and higher sensitivity for noninvasive, in vivo biomedical imaging compared to traditional time domain OCT (TD-OCT). However, because the signal processing for FD-OCT is computationally intensive, real-time FD-OCT applications demand powerful computing platforms to deliver acceptable performance. Graphics processing units (GPUs) have been used as coprocessors to accelerate FD-OCT by leveraging their relatively simple programming model to exploit thread-level parallelism. Unfortunately, GPUs do not "share" memory with their host processors, requiring additional data transfers between the GPU and CPU. In this paper, we implement a complete FD-OCT accelerator on a consumer grade GPU/CPU platform. Our data acquisition system uses spectrometer-based detection and a dual-arm interferometer topology with numerical dispersion compensation for retinal imaging. We demonstrate that the maximum line rate is dictated by the memory transfer time and not the processing time due to the GPU platform's memory model. Finally, we discuss how the performance trends of GPU-based accelerators compare to the expected future requirements of FD-OCT data rates.
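
    The bottleneck identified here, host-device transfers dominating the FFT-based processing, is easy to probe. The sketch below uses CuPy as a stand-in for the paper's CUDA implementation; the line count, sample count, and the omission of dispersion compensation are simplifying assumptions.

        import time
        import numpy as np
        import cupy as cp

        lines, samples = 10000, 2048                 # one frame of spectra
        spectra = np.random.rand(lines, samples).astype(np.float32)

        t0 = time.perf_counter()
        d = cp.asarray(spectra)                      # host -> device transfer
        cp.cuda.Stream.null.synchronize()
        t1 = time.perf_counter()

        # core FD-OCT step: spectrum -> depth profile via FFT per line
        ascans = cp.abs(cp.fft.fft(d, axis=1))[:, : samples // 2]
        cp.cuda.Stream.null.synchronize()
        t2 = time.perf_counter()

        out = cp.asnumpy(ascans)                     # device -> host transfer
        t3 = time.perf_counter()

        print(f"upload {t1-t0:.4f}s  fft {t2-t1:.4f}s  download {t3-t2:.4f}s")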

  2. Metabolic robustness in young roots underpins a predictive model of maize hybrid performance in the field.

    PubMed

    de Abreu E Lima, Francisco; Westhues, Matthias; Cuadros-Inostroza, Álvaro; Willmitzer, Lothar; Melchinger, Albrecht E; Nikoloski, Zoran

    2017-04-01

    Heterosis has been extensively exploited for yield gain in maize (Zea mays L.). Here we conducted a comparative metabolomics-based analysis of young roots from in vitro germinating seedlings and from leaves of field-grown plants in a panel of inbred lines from the Dent and Flint heterotic patterns as well as selected F 1 hybrids. We found that metabolite levels in hybrids were more robust than in inbred lines. Using state-of-the-art modeling techniques, the most robust metabolites from roots and leaves explained up to 37 and 44% of the variance in the biomass from plants grown in two distinct field trials. In addition, a correlation-based analysis highlighted the trade-off between defense-related metabolites and hybrid performance. Therefore, our findings demonstrated the potential of metabolic profiles from young maize roots grown under tightly controlled conditions to predict hybrid performance in multiple field trials, thus bridging the greenhouse-field gap. © 2017 The Authors The Plant Journal © 2017 John Wiley & Sons Ltd.

  3. Improved l1-SPIRiT using 3D Walsh transform-based sparsity basis.

    PubMed

    Feng, Zhen; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart; Guo, He; Wang, Yuxin

    2014-09-01

    l1-SPIRiT is a fast magnetic resonance imaging (MRI) method which combines parallel imaging (PI) with compressed sensing (CS) by performing a joint l1-norm and l2-norm optimization procedure. The original l1-SPIRiT method uses two-dimensional (2D) Wavelet transform to exploit the intra-coil data redundancies and a joint sparsity model to exploit the inter-coil data redundancies. In this work, we propose to stack all the coil images into a three-dimensional (3D) matrix, and then a novel 3D Walsh transform-based sparsity basis is applied to simultaneously reduce the intra-coil and inter-coil data redundancies. Both the 2D Wavelet transform-based and the proposed 3D Walsh transform-based sparsity bases were investigated in the l1-SPIRiT method. The experimental results show that the proposed 3D Walsh transform-based l1-SPIRiT method outperformed the original l1-SPIRiT in terms of image quality and computational efficiency. Copyright © 2014 Elsevier Inc. All rights reserved.
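
    The 3D Walsh transform over the stacked coil images can be sketched as a separable normalized Hadamard transform along each axis. SciPy provides the Hadamard matrix in natural rather than sequency (Walsh) order, which spans the same basis; the array sizes here are illustrative and must be powers of two.

        import numpy as np
        from scipy.linalg import hadamard

        def walsh3d(vol):
            # separable transform: apply a normalized Hadamard matrix along
            # each of the three axes of the stacked coil-image volume
            for ax in range(3):
                Hm = hadamard(vol.shape[ax]) / np.sqrt(vol.shape[ax])
                vol = np.moveaxis(np.tensordot(Hm, np.moveaxis(vol, ax, 0),
                                               axes=(1, 0)), 0, ax)
            return vol

        coils = np.random.rand(64, 64, 8)     # x, y, coil (powers of two)
        w = walsh3d(coils)
        # sparsify: keep the largest 10% of coefficients, zero the rest
        thresh = np.quantile(np.abs(w), 0.9)
        w[np.abs(w) < thresh] = 0.0
        recon = walsh3d(w)   # the normalized transform is its own inverse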

  4. CubiCal - Fast radio interferometric calibration suite exploiting complex optimisation

    NASA Astrophysics Data System (ADS)

    Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.

    2018-05-01

    It has recently been shown that radio interferometric gain calibration can be expressed succinctly in the language of complex optimisation. In addition to providing an elegant framework for further development, it exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares solvers such as Gauss-Newton and Levenberg-Marquardt. We extend existing derivations to chains of Jones terms: products of several gains which model different aberrant effects. In doing so, we find that the useful properties found in the single term case still hold. We also develop several specialised solvers which deal with complex gains parameterised by real values. The newly developed solvers have been implemented in a Python package called CubiCal, which uses a combination of Cython, multiprocessing and shared memory to leverage the power of modern hardware. We apply CubiCal to both simulated and real data, and perform both direction-independent and direction-dependent self-calibration. Finally, we present the results of some rudimentary profiling to show that CubiCal is competitive with respect to existing calibration tools such as MeqTrees.
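
    The flavour of such complex gain solvers can be seen in a StEFCal-style alternating least-squares update for direction-independent diagonal gains, a simpler relative of the Gauss-Newton and Levenberg-Marquardt machinery CubiCal actually implements; the antenna count and test data below are illustrative assumptions.

        import numpy as np

        def solve_gains(V, M, iters=50):
            # V: observed visibilities (N x N), M: model visibilities,
            # assuming V_pq ~ g_p * M_pq * conj(g_q); returns antenna gains
            N = V.shape[0]
            g = np.ones(N, dtype=complex)
            for _ in range(iters):
                g_new = np.empty_like(g)
                for p in range(N):
                    z = M[p] * np.conj(g)      # predicted contribution per q
                    g_new[p] = np.vdot(z, V[p]) / np.vdot(z, z)
                g = 0.5 * (g + g_new)          # averaging damps oscillations
            return g

        rng = np.random.default_rng(4)
        N = 8
        g_true = np.exp(1j * rng.uniform(-1, 1, N)) * rng.uniform(0.8, 1.2, N)
        M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
        M = (M + M.conj().T) / 2               # Hermitian model, like real data
        V = np.outer(g_true, g_true.conj()) * M
        g = solve_gains(V, M)
        print(np.abs(g / g_true))              # unity, up to a global phase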

  5. [Advance in researches on the effect of forest on hydrological process].

    PubMed

    Zhang, Zhiqiang; Yu, Xinxiao; Zhao, Yutao; Qin, Yongsheng

    2003-01-01

    According to the effects of forest on hydrological process, forest hydrology can be divided into three related aspects: experimental research on the effects of forest changing on hydrological process quantity and water quality; mechanism study on the effects of forest changing on hydrological cycle, and establishing and exploitating physical-based distributed forest hydrological model for resource management and engineering construction. Orientation experiment research can not only support the first-hand data for forest hydrological model, but also make clear the precipitation-runoff mechanisms. Research on runoff mechanisms can be valuable for the exploitation and improvement of physical based hydrological models. Moreover, the model can also improve the experimental and runoff mechanism researches. A review of above three aspects are summarized in this paper.

  6. Amoeba-inspired Tug-of-War algorithms for exploration-exploitation dilemma in extended Bandit Problem.

    PubMed

    Aono, Masashi; Kim, Song-Ju; Hara, Masahiko; Munakata, Toshinori

    2014-03-01

    The true slime mold Physarum polycephalum, a single-celled amoeboid organism, is capable of efficiently allocating a constant amount of intracellular resource to its pseudopod-like branches that best fit the environment where dynamic light stimuli are applied. Inspired by the resource allocation process, the authors formulated a concurrent search algorithm, called the Tug-of-War (TOW) model, for maximizing the profit in the multi-armed Bandit Problem (BP). A player (gambler) of the BP should decide as quickly and accurately as possible which slot machine to invest in out of the N machines and faces an "exploration-exploitation dilemma." The dilemma is a trade-off between the speed and accuracy of the decision making that are conflicted objectives. The TOW model maintains a constant intracellular resource volume while collecting environmental information by concurrently expanding and shrinking its branches. The conservation law entails a nonlocal correlation among the branches, i.e., volume increment in one branch is immediately compensated by volume decrement(s) in the other branch(es). Owing to this nonlocal correlation, the TOW model can efficiently manage the dilemma. In this study, we extend the TOW model to apply it to a stretched variant of BP, the Extended Bandit Problem (EBP), which is a problem of selecting the best M-tuple of the N machines. We demonstrate that the extended TOW model exhibits better performances for 2-tuple-3-machine and 2-tuple-4-machine instances of EBP compared with the extended versions of well-known algorithms for BP, the ϵ-Greedy and SoftMax algorithms, particularly in terms of its short-term decision-making capability that is essential for the survival of the amoeba in a hostile environment. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
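
    A heavily simplified sketch of the TOW idea for the standard BP: each arm's branch displacement accumulates reward relative to a common reference level, and the conservation law couples the branches by making one branch's gain the others' loss. The reference choice (mean of the two best empirical payoffs) follows the TOW literature, but the rest is an illustrative reduction, not the authors' model, and the payoff probabilities are made up.

        import numpy as np

        rng = np.random.default_rng(5)
        probs = np.array([0.2, 0.5, 0.7])      # hidden Bernoulli payoffs
        N = len(probs)
        wins, plays, X = np.zeros(N), np.zeros(N), np.zeros(N)

        for t in range(2000):
            k = int(np.argmax(X + 1e-6 * rng.normal(size=N)))  # tie-breaker
            r = float(rng.random() < probs[k])
            wins[k] += r
            plays[k] += 1
            means = wins / np.maximum(plays, 1)
            omega = np.sort(means)[-2:].mean() # common reference level
            X[k] += r - omega                  # tug on the chosen branch
            # conservation: what one branch gains, the others jointly lose
            X[np.arange(N) != k] -= (r - omega) / (N - 1)

        print(plays, means.round(2))           # most plays go to the best arm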

  7. Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling

    NASA Astrophysics Data System (ADS)

    Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji

    We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.

  8. Combined DEM Extration Method from StereoSAR and InSAR

    NASA Astrophysics Data System (ADS)

    Zhao, Z.; Zhang, J. X.; Duan, M. Y.; Huang, G. M.; Yang, S. C.

    2015-06-01

    A pair of SAR images acquired from different positions can be used to generate a digital elevation model (DEM). Two techniques exploiting this characteristic have been introduced: stereo SAR (StereoSAR) and interferometric SAR (InSAR). They permit recovery of the third dimension (topography) and, at the same time, identification of the absolute position (geolocation) of pixels in the imaged area, thus allowing the generation of DEMs. In this paper, a combined StereoSAR and InSAR adjustment model is constructed, which unifies DEM extraction from InSAR and StereoSAR into the same coordinate system and improves the three-dimensional positioning accuracy of the target. We assume that there are four images 1, 2, 3 and 4. One pair of SAR images, 1 and 2, meets the conditions required for InSAR, while the other pair, 3 and 4, forms a stereo image pair. The phase model is based on the rigorous InSAR imaging geometry. The master image 1 and the slave image 2 are used in InSAR processing, but the slave image 2 is only used during model establishment: its pixels are related to the corresponding pixels of the master image 1 through image coregistration coefficients, from which the corresponding phase is calculated, so the slave image is not required in the construction of the phase model itself. In the Range-Doppler (RD) model, the range equation and the Doppler equation are functions of the target geolocation, while in the phase equation the phase is also a function of the target geolocation. We exploit the combined adjustment model to solve for the target geolocation; the problem thus reduces to solving for three unknowns from seven equations. The model was tested for DEM extraction on spaceborne InSAR and StereoSAR data and compared with the InSAR and StereoSAR methods separately. The results showed that the model delivered better performance on the experimental imagery and can be used for DEM extraction applications.

  9. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Goebel, Kai

    2011-01-01

    Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain knowledge of the system, its components, and how they fail by casting the underlying physical phenomena in a physics-based model that is derived from first principles. Uncertainty cannot be avoided in prediction, therefore, algorithms are employed that help in managing these uncertainties. The particle filtering algorithm has become a popular choice for model-based prognostics due to its wide applicability, ease of implementation, and support for uncertainty management. We develop a general model-based prognostics methodology within a robust probabilistic framework using particle filters. As a case study, we consider a pneumatic valve from the Space Shuttle cryogenic refueling system. We develop a detailed physics-based model of the pneumatic valve, and perform comprehensive simulation experiments to illustrate our prognostics approach and evaluate its effectiveness and robustness. The approach is demonstrated using historical pneumatic valve data from the refueling system.
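
    A minimal particle-filter prognostics sketch on a toy degradation model (linear damage growth with an unknown wear rate and noisy observations; the pneumatic-valve physics in the paper is far richer, and all constants below are assumptions):

        import numpy as np

        rng = np.random.default_rng(6)
        NP, THRESH = 1000, 1.0                 # particles, failure threshold

        # each particle carries [damage, wear_rate]; the rate is unknown
        parts = np.column_stack([np.zeros(NP), rng.uniform(0.005, 0.03, NP)])

        def step(parts):
            parts = parts.copy()
            parts[:, 0] += parts[:, 1] + rng.normal(0, 0.002, NP)  # process noise
            return parts

        true_d, true_rate = 0.0, 0.02
        for t in range(25):                    # filter on noisy measurements
            true_d += true_rate
            y = true_d + rng.normal(0, 0.01)
            parts = step(parts)
            w = np.exp(-0.5 * ((y - parts[:, 0]) / 0.01) ** 2)
            w /= w.sum()
            parts = parts[rng.choice(NP, NP, p=w)]  # multinomial resampling

        # prognosis: propagate each particle until it crosses the threshold
        rul, sim = np.zeros(NP), parts.copy()
        alive, k = np.ones(NP, dtype=bool), 0
        while alive.any() and k < 500:         # horizon cap; uncrossed stay 0
            sim = step(sim)
            k += 1
            crossed = alive & (sim[:, 0] >= THRESH)
            rul[crossed] = k
            alive &= ~crossed
        print(f"median RUL = {np.median(rul):.0f} steps")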

  10. Towards a Semantically-Enabled Control Strategy for Building Simulations: Integration of Semantic Technologies and Model Predictive Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delgoshaei, Parastoo; Austin, Mark A.; Pertzborn, Amanda J.

    State-of-the-art building simulation control methods incorporate physical constraints into their mathematical models, but omit implicit constraints associated with policies of operation and dependency relationships among rules representing those constraints. To overcome these shortcomings, there is a recent trend in enabling the control strategies with inference-based rule checking capabilities. One solution is to exploit semantic web technologies in building simulation control. Such approaches provide the tools for semantic modeling of domains, and the ability to deduce new information based on the models through use of Description Logic (DL). In a step toward enabling this capability, this paper presents a cross-disciplinary data-driven control strategy for building energy management simulation that integrates semantic modeling and formal rule checking mechanisms into a Model Predictive Control (MPC) formulation. The results show that MPC provides superior levels of performance when initial conditions and inputs are derived from inference-based rules.

  11. Assessing the applicability of template-based protein docking in the twilight zone.

    PubMed

    Negroni, Jacopo; Mosca, Roberto; Aloy, Patrick

    2014-09-02

    The structural modeling of protein interactions in the absence of close homologous templates is a challenging task. Recently, template-based docking methods have emerged to exploit local structural similarities and help ab-initio protocols provide reliable 3D models for protein interactions. In this work, we critically assess the performance of template-based docking in the twilight zone. Our results show that, while it is possible to find templates for nearly all known interactions, the quality of the obtained models is rather limited. We can increase the precision of the models at the expense of coverage, but this drastically reduces the potential applicability of the method, as illustrated by the whole-interactome modeling of nine organisms. Template-based docking is likely to play an important role in the structural characterization of the interaction space, but we still need to improve the repertoire of structural templates onto which we can reliably model protein complexes. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Updraft Model for Development of Autonomous Soaring Uninhabited Air Vehicles

    NASA Technical Reports Server (NTRS)

    Allen, Michael J.

    2006-01-01

    Large birds and glider pilots commonly use updrafts caused by convection in the lower atmosphere to extend flight duration, increase cross-country speed, improve range, or simply to conserve energy. Uninhabited air vehicles may also have the ability to exploit updrafts to improve performance. An updraft model was developed at NASA Dryden Flight Research Center (Edwards, California) to investigate the use of convective lift for uninhabited air vehicles in desert regions. Balloon and surface measurements obtained at the National Oceanic and Atmospheric Administration Surface Radiation station (Desert Rock, Nevada) enabled the model development. The data were used to create a statistical representation of the convective velocity scale, w*, and the convective mixing-layer thickness, zi. These parameters were then used to determine updraft size, vertical velocity profile, spacing, and maximum height. This paper gives a complete description of the updraft model and its derivation. Computer code for running the model is also given in conjunction with a check case for model verification.
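
    As a rough illustration of the kind of parameterization described above, the sketch below evaluates an updraft vertical-velocity profile scaled by the convective velocity scale w* and the mixing-layer thickness zi, with a bell-shaped radial falloff. The shape coefficients here are illustrative, not the calibrated fits given in the paper.

    ```python
    import numpy as np

    def updraft_velocity(z, r, w_star, z_i, r_max):
        """Illustrative updraft profile: vertical velocity scaled by the
        convective velocity scale w* and mixing-layer thickness zi, with a
        bell-shaped radial falloff. Coefficients are made up, not the
        paper's calibrated fits."""
        zn = np.clip(z / z_i, 0.0, 1.0)
        # Vertical shape: vanishes at the surface and near the mixed-layer top.
        w_core = w_star * zn ** (1.0 / 3.0) * (1.0 - 1.1 * zn)
        # Radial shape: smooth decay toward the updraft edge.
        return w_core * np.exp(-((r / r_max) ** 2))

    # Example: core velocity at mid-layer for w* = 3 m/s and zi = 1500 m.
    v = updraft_velocity(z=750.0, r=0.0, w_star=3.0, z_i=1500.0, r_max=100.0)
    print(f"{v:.2f} m/s")
    ```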

  13. Collaborative real-time motion video analysis by human observer and image exploitation algorithms

    NASA Astrophysics Data System (ADS)

    Hild, Jutta; Krüger, Wolfgang; Brüstle, Stefan; Trantelle, Patrick; Unmüßig, Gabriel; Heinze, Norbert; Peinsipp-Byma, Elisabeth; Beyerer, Jürgen

    2015-05-01

    Motion video analysis is a challenging task, especially in real-time applications. In most safety- and security-critical applications, a human observer is an obligatory part of the overall analysis system. In recent years, substantial progress has been made in the development of automated image exploitation algorithms. Hence, we investigate how the benefits of automated video analysis can be suitably integrated into current video exploitation systems. In this paper, a system design is introduced which strives to combine the qualities of the human observer's perception with those of the automated algorithms, thus aiming to improve the overall performance of a real-time video analysis system. The system design builds on prior work where we showed the benefits for the human observer of a user interface which utilizes the human visual focus of attention, revealed by the eye gaze direction, for interaction with the image exploitation system; eye tracker-based interaction allows much faster, more convenient, and equally precise moving-target acquisition in video images than traditional computer mouse selection. The system design also builds on our prior work on automated target detection, segmentation, and tracking algorithms. Besides the system design, a first pilot study is presented, in which we investigated how the participants (all non-experts in video analysis) performed in initializing an object tracking subsystem by selecting a target for tracking. Preliminary results show that the gaze + key press technique is an effective, efficient, and easy-to-use interaction technique for performing selection operations on moving targets in videos in order to initialize an object tracking function.

  14. Hybrid Communication Architectures for Distributed Smart Grid Applications

    DOE PAGES

    Zhang, Jianhua; Hasandka, Adarsh; Wei, Jin; ...

    2018-04-09

    Wired and wireless communications both play an important role in the blend of communications technologies necessary to enable future smart grid communications. Hybrid networks exploit independent mediums to extend network coverage and improve performance. However, whereas individual technologies have been applied in simulation networks, as far as we know only limited attention has been paid to the development of a suite of hybrid communication simulation models for communications system design. Hybrid simulation models are needed to capture the mixed communication technologies and IP address mechanisms in one simulation. To close this gap, we have developed a suite of hybrid communication system simulation models to validate the critical system design criteria for a distributed solar photovoltaic (PV) communications system, including a single-trip latency of 300 ms, throughput of 9.6 Kbps, and packet loss rate of 1%. In conclusion, the results show that three low-power wireless personal area network (LoWPAN)-based hybrid architectures can satisfy three performance metrics that are critical for distributed energy resource communications.

  15. A Generalizable Methodology for Quantifying User Satisfaction

    NASA Astrophysics Data System (ADS)

    Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung

    Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times, rather than a specific performance indicator such as the level of distortion of voice signals, the effects of other factors, like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction, and discuss how they can be exploited to improve user experience and optimize resource allocation.
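
    A minimal sketch of the survival-analysis idea: estimate a Kaplan-Meier survival curve over session times, treating sessions still open at the end of measurement as censored. The session data are invented, and ties are handled per-observation for brevity.

    ```python
    import numpy as np

    def kaplan_meier(durations, observed):
        """Kaplan-Meier estimate of S(t) from session times. `observed` is 1
        when the session ended during measurement and 0 when it was censored
        (the user was still connected when measurement stopped)."""
        order = np.argsort(durations)
        t = np.asarray(durations, dtype=float)[order]
        d = np.asarray(observed)[order]
        n_at_risk, s = len(t), 1.0
        times, surv = [], []
        for ti, di in zip(t, d):
            if di:                      # an actual session end at time ti
                s *= 1.0 - 1.0 / n_at_risk
                times.append(ti)
                surv.append(s)
            n_at_risk -= 1              # leaves the risk set either way
        return times, surv

    # Toy session times in minutes; longer survival suggests higher satisfaction.
    times, surv = kaplan_meier([5, 8, 8, 12, 30, 45, 60], [1, 1, 0, 1, 1, 0, 1])
    for ti, si in zip(times, surv):
        print(f"S({ti:g} min) = {si:.3f}")
    ```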

  16. 3D optimization of a polymer MOEMS for active focusing of VCSEL beam

    NASA Astrophysics Data System (ADS)

    Abada, S.; Camps, T.; Reig, B.; Doucet, J. B.; Daran, E.; Bardinal, V.

    2014-05-01

    We report on the optimized design of a polymer-based actuator that can be directly integrated on a VCSEL for vertical beam scanning. Its operating principle is based on the vertical displacement of an SU-8 membrane including a polymer microlens. Under an applied thermal gradient, the membrane is shifted vertically due to thermal expansion in the actuation arms induced by the Joule effect. This leads to a modification of the microlens position and thus to a vertical scan of the laser beam. Membrane vertical displacements as large as 8 μm for an applied voltage of only 3 V were recently obtained experimentally. To explain this performance, we developed a comprehensive three-dimensional thermo-mechanical model that takes into account SU-8 material properties and the precise MOEMS geometry. Out-of-plane mechanical coefficients and thermal conductivity were thus integrated in our 3D model (COMSOL Multiphysics). Vertical displacements extracted from these data for different actuation powers were successfully compared to experimental values, validating this modelling tool. It was thereby exploited to increase the MOEMS electrothermal performance by a factor greater than 5.

  17. Exploiting the chaotic behaviour of atmospheric models with reconfigurable architectures

    NASA Astrophysics Data System (ADS)

    Russell, Francis P.; Düben, Peter D.; Niu, Xinyu; Luk, Wayne; Palmer, T. N.

    2017-12-01

    Reconfigurable architectures are becoming mainstream: Amazon, Microsoft and IBM are supporting such architectures in their data centres. The computationally intensive nature of atmospheric modelling is an attractive target for hardware acceleration using reconfigurable computing. Performance of hardware designs can be improved through the use of reduced-precision arithmetic, but maintaining appropriate accuracy is essential. We explore reduced-precision optimisation for simulating chaotic systems, targeting atmospheric modelling, in which even minor changes in arithmetic behaviour will cause simulations to diverge quickly. The possibility of equally valid simulations having differing outcomes means that standard techniques for comparing numerical accuracy are inappropriate. We use the Hellinger distance to compare statistical behaviour between reduced-precision CPU implementations to guide reconfigurable designs of a chaotic system, then analyse accuracy, performance and power efficiency of the resulting implementations. Our results show that with only a limited loss in accuracy corresponding to less than 10% uncertainty in input parameters, the throughput and energy efficiency of a single-precision chaotic system implemented on a Xilinx Virtex-6 SX475T Field Programmable Gate Array (FPGA) can be more than doubled.
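
    The sketch below illustrates the Hellinger-distance comparison on a stand-in chaotic system: the logistic map is iterated in double and single precision, and the distance between the two long-run histograms is computed. The map, bin count, and trajectory length are arbitrary choices, not the paper's setup.

    ```python
    import numpy as np

    def hellinger(p, q):
        """Hellinger distance between two discrete distributions."""
        p = np.asarray(p, dtype=float) / np.sum(p)
        q = np.asarray(q, dtype=float) / np.sum(q)
        return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

    def trajectory(dtype, n=100000, x0=0.314159, r=3.9):
        """Logistic-map trajectory stored at the requested precision, so each
        step is rounded to that precision (a stand-in for a reduced-precision
        model run)."""
        x = np.empty(n, dtype=dtype)
        x[0] = x0
        for i in range(1, n):
            x[i] = r * x[i - 1] * (1.0 - x[i - 1])
        return x

    # Individual trajectories diverge quickly, so compare long-run statistics.
    bins = np.linspace(0.0, 1.0, 101)
    h64, _ = np.histogram(trajectory(np.float64), bins=bins)
    h32, _ = np.histogram(trajectory(np.float32), bins=bins)
    print(f"Hellinger distance between precisions: {hellinger(h64, h32):.4f}")
    ```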

  18. Optimizing the separation performance of a gas centrifuge

    NASA Astrophysics Data System (ADS)

    Wood, H. G.

    1997-11-01

    Gas centrifuges were originally developed for the enrichment of U-235 from naturally occurring uranium for the purpose of providing fuel for nuclear power reactors and material for nuclear weapons. This required the separation of a binary mixture composed of U-235 and U-238. Since the end of the cold war, a surplus of enriched uranium exists on the world market, but many centrifuge plants exist in numerous countries. These circumstances, together with the growing demand for stable isotopes in chemical and physical research and in medical science, have led to the exploration of alternate applications of gas centrifuge technology. In order to achieve these multi-component separations, existing centrifuges must be modified or new centrifuges must be designed. In either case, it is important to have models of the internal flow fields to predict the separation performance, and algorithms to seek the optimal operating conditions of the centrifuges. Here, we use the Onsager pancake model of the internal flow field, and we present an optimization strategy which exploits a similarity parameter in the pancake model. Numerical examples will be presented.

  19. Equipment for fully homologous bulb turbine model testing in Laval University

    NASA Astrophysics Data System (ADS)

    Fraser, R.; Vallée, D.; Jean, Y.; Deschênes, C.

    2014-03-01

    Within the context of liberalisation of the energy market, hydroelectricity remains a first-class source of clean and renewable energy. Given the growing demand for energy, its increasing value, and the attention paid to sustainable development, low-head sites formerly considered unprofitable are now exploitable. Bulb turbines likely to equip such sites are traditionally developed on models using a right-angle transmission, leading to pier enlargement for passage of the power take-off shaft and thus restricting the possibility of fully homologous hydraulic passages. Aiming to sustain good-quality development on fully homologous scale models of bulb turbines, the Hydraulic Machines Laboratory (LAMH) of Laval University has developed a brake with an enhanced power-to-weight ratio. This powerful brake is small enough to be located in the bulb shell while dissipating power without requiring a reduction in test head. This paper first presents the basic technology of this brake and its application. Both its main performance capabilities and dimensional characteristics are then detailed. Finally, the instrumentation used to perform accurate measurements is presented.

  1. Qualitative modelling for the Caeté Mangrove Estuary (North Brazil): a preliminary approach to an integrated eco-social analysis

    NASA Astrophysics Data System (ADS)

    Ortiz, Marco; Wolff, Matthias

    2004-10-01

    The sustainability of different integrated management regimes for the mangrove ecosystem of the Caeté Estuary (North Brazil) was assessed using a holistic theoretical framework. As a way to demonstrate that the behaviour and trajectory of complex whole systems are not epiphenomenal to the properties of their small parts, a set of conceptual models ranging from more reductionistic to more holistic was formulated. These models integrate the scientific information published to date for this mangrove ecosystem. The sustainability of different management scenarios (forestry and fishery) was assessed. Since the exploitation of mangrove trees is not allowed under Brazilian law, forestry was included only for simulation purposes. The model simulations revealed that sustainability predictions of reductionistic models should not be extrapolated to holistic approaches. Forestry and fishery activities seem to be sustainable only if they are self-damped. The exploitation of the two mangrove species Rhizophora mangle and Avicennia germinans does not appear to be sustainable, so a rotation harvest is recommended. A similar conclusion holds for the exploitation of invertebrate species. Our results suggest that more studies should be focused on the estimation of maximum sustainable yield based on a multispecies approach. Any reference to holistic sustainability based on reductionistic approaches may distort our understanding of natural complex ecosystems.

  2. Polarimetric Intensity Parameterization of Radar and Other Remote Sensing Sources for Advanced Exploitation and Data Fusion: Theory

    DTIC Science & Technology

    2008-10-01

    is theoretically similar to the concept of “partial or compact polarimetry”, yields comparable results to full or quadrature-polarized systems by...to the emerging “compact polarimetry” methodology [9]-[13] that exploits scattering system response to an incomplete set of input EM field components...a scattering operator or matrix. Although as theoretically discussed earlier, performance of such fully-polarized radar system (i.e., quadrature

  3. Interactive degraded document enhancement and ground truth generation

    NASA Astrophysics Data System (ADS)

    Bal, G.; Agam, G.; Frieder, O.; Frieder, G.

    2008-01-01

    Degraded documents are frequently obtained in various situations. Examples of degraded document collections include historical document depositories, documents obtained in legal and security investigations, and legal and medical archives. Degraded document images are hard to read and hard to analyze using computerized techniques. There is hence a need for systems that are capable of enhancing such images. We describe a language-independent semi-automated system for enhancing degraded document images that is capable of exploiting inter- and intra-document coherence. The system is capable of processing document images with high levels of degradation and can be used for ground truthing of degraded document images. Ground truthing of degraded document images is extremely important in several respects: it enables quantitative performance measurement of enhancement systems and facilitates model estimation that can be used to improve performance. Performance evaluation is provided using the historical Frieder diaries collection.

  4. Contention Modeling for Multithreaded Distributed Shared Memory Machines: The Cray XMT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Secchi, Simone; Tumeo, Antonino; Villa, Oreste

    Distributed Shared Memory (DSM) machines are a wide class of multi-processor computing systems where a large virtually-shared address space is mapped on a network of physically distributed memories. High memory latency and network contention are two of the main factors that limit performance scaling of such architectures. Modern high-performance computing DSM systems have evolved toward exploitation of massive hardware multi-threading and fine-grained memory hashing to tolerate irregular latencies, avoid network hot-spots and enable high scaling. In order to model the performance of such large-scale machines, parallel simulation has been proved to be a promising approach to achieve good accuracy in reasonable times. One of the most critical factors in solving the simulation speed-accuracy trade-off is network modeling. The Cray XMT is a massively multi-threaded supercomputing architecture that belongs to the DSM class, since it implements a globally-shared address space abstraction on top of a physically distributed memory substrate. In this paper, we discuss the development of a contention-aware network model intended to be integrated in a full-system XMT simulator. We start by measuring the effects of network contention in a 128-processor XMT machine and then investigate the trade-off that exists between simulation accuracy and speed, by comparing three network models which operate at different levels of accuracy. The comparison and model validation is performed by executing a string-matching algorithm on the full-system simulator and on the XMT, using three datasets that generate noticeably different contention patterns.

  5. Weakly supervised classification in high energy physics

    DOE PAGES

    Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; ...

    2017-05-01

    As machine learning algorithms become increasingly sophisticated to exploit subtle features of the data, they often become more dependent on simulations. This paper presents a new approach called weakly supervised classification, in which class proportions are the only input into the machine learning algorithm. Using one of the most challenging binary classification tasks in high energy physics - quark versus gluon tagging - we show that weakly supervised classification can match the performance of fully supervised algorithms. Furthermore, by design, the new algorithm is insensitive to any mis-modeling of discriminating features in the data by the simulation. Weakly supervised classification is a general procedure that can be applied to a wide variety of learning problems to boost performance and robustness when detailed simulations are not reliable or not available.
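
    A minimal sketch of learning from label proportions in this spirit: a logistic model is trained on two synthetic mixed samples so that its average prediction per sample matches the known class fraction, which is the only supervision used. The proportion-matching squared loss here is one simple choice, not necessarily the paper's loss function.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def make_batch(n, frac_signal):
        """A mixed sample with a known signal fraction but no per-event labels
        (synthetic 2-D Gaussians standing in for jet features)."""
        n_sig = int(n * frac_signal)
        sig = rng.normal(+1.0, 1.0, (n_sig, 2))
        bkg = rng.normal(-1.0, 1.0, (n - n_sig, 2))
        return np.vstack([sig, bkg])

    batches, fractions = [make_batch(2000, 0.7), make_batch(2000, 0.3)], [0.7, 0.3]

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Train a logistic model so that its *average* prediction on each batch
    # matches the known class fraction -- the only supervision available.
    w, b, lr = np.zeros(2), 0.0, 0.5
    for _ in range(2000):
        gw, gb = np.zeros(2), 0.0
        for X, f in zip(batches, fractions):
            p = sigmoid(X @ w + b)
            err = p.mean() - f                    # proportion mismatch
            gw += err * ((p * (1.0 - p)) @ X) / len(X)
            gb += err * (p * (1.0 - p)).mean()
        w -= lr * gw
        b -= lr * gb

    # The result is a per-event classifier, evaluated here on labeled test data.
    score_sig = sigmoid(rng.normal(+1.0, 1.0, (1000, 2)) @ w + b).mean()
    score_bkg = sigmoid(rng.normal(-1.0, 1.0, (1000, 2)) @ w + b).mean()
    print(f"mean score: signal {score_sig:.2f}, background {score_bkg:.2f}")
    ```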

  6. Performance evaluation and simulation of a Compound Parabolic Concentrator (CPC) trough Solar Thermal Power Plant in Puerto Rico under solar transient conditions

    NASA Astrophysics Data System (ADS)

    Feliciano-Cruz, Luisa I.

    Increasing fossil fuel costs, as well as the need to move toward a more sustainable future, have led the world on a quest to exploit the free and naturally available energy of the Sun to produce electric power, and Puerto Rico is no exception. This thesis proposes the design of a simulation model for the analysis and performance evaluation of a solar thermal power plant in Puerto Rico and suggests the use of the compound parabolic concentrator as the solar collector of choice. Optical and thermal analyses of such collectors are made using local solar radiation data to determine the viability of the proposed project in terms of the electric power produced and its cost.

  7. Probabilistic Low-Rank Multitask Learning.

    PubMed

    Kong, Yu; Shao, Ming; Li, Kang; Fu, Yun

    2018-03-01

    In this paper, we consider the problem of learning multiple related tasks simultaneously with the goal of improving the generalization performance of individual tasks. The key challenge is to effectively exploit the shared information across multiple tasks as well as preserve the discriminative information for each individual task. To address this, we propose a novel probabilistic model for multitask learning (MTL) that can automatically balance between low-rank and sparsity constraints. The former assumes a low-rank structure of the underlying predictive hypothesis space to explicitly capture the relationship of different tasks and the latter learns the incoherent sparse patterns private to each task. We derive and perform inference via variational Bayesian methods. Experimental results on both regression and classification tasks on real-world applications demonstrate the effectiveness of the proposed method in dealing with the MTL problems.

  8. Progress report on LBL's numerical modeling studies on Cerro Prieto

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halfman-Dooley, S.E.; Lippman, M.J.; Bodvarsson, G.S.

    1989-04-01

    An exploitation model of the Cerro Prieto geothermal system is needed to assess the energy capacity of the field, estimate its productive lifetime and develop an optimal reservoir management plan. The model must consider the natural state (i.e., pre-exploitation) conditions of the system and be able to predict changes in the reservoir thermodynamic conditions (and fluid chemistry) in response to fluid production (and injection). This paper discusses the results of a three-dimensional numerical simulation of the natural state conditions of the Cerro Prieto field and compares computed and observed pressure and temperature/enthalpy changes for the 1973--1987 production period. 16 refs., 24 figs., 2 tabs.

  9. Planar junctionless phototransistor: A potential high-performance and low-cost device for optical-communications

    NASA Astrophysics Data System (ADS)

    Ferhati, H.; Djeffal, F.

    2017-12-01

    In this paper, a new junctionless optically controlled field effect transistor (JL-OCFET) and its comprehensive theoretical model are proposed to achieve high optical performance and a low-cost fabrication process. An exhaustive study of the device characteristics and a comparison between the proposed junctionless design and the conventional inversion-mode structure (IM-OCFET) for similar dimensions are performed. Our investigation reveals that the proposed design has an outstanding capability to serve as an alternative to the IM-OCFET, owing to its high performance and its ability to detect weak signals. Moreover, the developed analytical expressions are exploited to formulate objective functions for optimizing the device performance using a Genetic Algorithms (GAs) approach. The optimized JL-OCFET not only demonstrates good performance in terms of derived drain current and responsivity, but also exhibits superior signal-to-noise ratio, low power consumption, high sensitivity, high ION/IOFF ratio and high detectivity compared to the conventional IM-OCFET counterpart. These characteristics make the optimized JL-OCFET potentially suitable for developing low-cost and ultrasensitive photodetectors for high-performance, low-cost inter-chip data communication applications.
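
    As a generic illustration of the GA step described above, the sketch below runs a small genetic algorithm (tournament selection, uniform crossover, Gaussian mutation) over a two-parameter box. The objective is a made-up stand-in with a known optimum; the actual work scores candidate device parameters with the analytical device model instead.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def fitness(pop):
        """Stand-in objective with a known optimum at (0.6, 0.3); the real
        optimization targets responsivity and ION/IOFF from the device model."""
        x, y = pop[:, 0], pop[:, 1]
        return -((x - 0.6) ** 2 + (y - 0.3) ** 2)

    POP, GENS, MUT = 40, 100, 0.05
    pop = rng.uniform(0.0, 1.0, (POP, 2))       # normalized device parameters

    for _ in range(GENS):
        f = fitness(pop)
        # Tournament selection: pairwise contests pick the parents.
        a, b = rng.integers(0, POP, (2, POP))
        parents = np.where((f[a] > f[b])[:, None], pop[a], pop[b])
        # Uniform crossover between consecutive parents.
        mask = rng.random((POP, 2)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped to the feasible box.
        pop = np.clip(children + rng.normal(0.0, MUT, (POP, 2)), 0.0, 1.0)

    best = pop[np.argmax(fitness(pop))]
    print(f"best parameters found: {best.round(3)}")
    ```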

  10. Load Balancing Using Time Series Analysis for Soft Real Time Systems with Statistically Periodic Loads

    NASA Technical Reports Server (NTRS)

    Hailperin, Max

    1993-01-01

    This thesis provides design and analysis of techniques for global load balancing on ensemble architectures running soft-real-time object-oriented applications with statistically periodic loads. It focuses on estimating the instantaneous average load over all the processing elements. The major contribution is the use of explicit stochastic process models for both the loading and the averaging itself. These models are exploited via statistical time-series analysis and Bayesian inference to provide improved average load estimates, and thus to facilitate global load balancing. This thesis explains the distributed algorithms used and provides some optimality results. It also describes the algorithms' implementation and gives performance results from simulation. These results show that our techniques allow more accurate estimation of the global system loading, resulting in fewer object migrations than local methods. Our method is shown to provide superior performance, relative not only to static load-balancing schemes but also to many adaptive methods.
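
    A minimal sketch of the model-based averaging idea, assuming a random-walk state model rather than the thesis's full stochastic process models: a scalar Kalman filter tracks a noisy, statistically periodic load signal and is compared against the raw measurements. The load shape and noise levels are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "statistically periodic" load: a sinusoid plus noise stands in
    # for the instantaneous average load; all constants are illustrative.
    T = 200
    t = np.arange(T)
    true_load = 10.0 + 3.0 * np.sin(2.0 * np.pi * t / 50.0)
    obs = true_load + rng.normal(0.0, 1.5, T)   # noisy local measurements

    # Scalar Kalman filter with a random-walk model for the mean load.
    Q, R = 0.05, 1.5 ** 2                       # process / measurement variance
    x, P = obs[0], 1.0
    est = np.empty(T)
    for k, z in enumerate(obs):
        P += Q                                  # predict
        K = P / (P + R)                         # Kalman gain
        x += K * (z - x)                        # update with the new measurement
        P *= 1.0 - K
        est[k] = x

    print(f"mean abs error: raw {np.abs(obs - true_load).mean():.2f}, "
          f"filtered {np.abs(est - true_load).mean():.2f}")
    ```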

  11. Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model

    PubMed Central

    Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz

    2014-01-01

    Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels by the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from estimates of intermediate variables, and do not yield an optimum estimate of oxygen tension values, due to their nonlinear dependence on the ratio of intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits knowledge available in studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach where knowledge about intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of earlier methods, and its superior performance both quantitatively and qualitatively is demonstrated. PMID:23732915
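
    The sketch below illustrates the regularized direct least-squares idea on a generic ill-conditioned problem: a Tikhonov (ridge) penalty stabilizes a direct fit. The paper's phosphorescence lifetime model and spatial regularizer are replaced here by a toy linear system.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def regularized_direct_lsq(A, y, lam):
        """argmin_x ||Ax - y||^2 + lam * ||x||^2, solved in closed form."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

    # Ill-conditioned toy problem: nearly collinear columns plus noise.
    A = rng.normal(size=(100, 2))
    A[:, 1] = A[:, 0] + 0.01 * rng.normal(size=100)
    y = A @ np.array([1.0, 2.0]) + 0.1 * rng.normal(size=100)

    for lam in (0.0, 1e-2, 1.0):
        x_hat = regularized_direct_lsq(A, y, lam)
        print(f"lambda = {lam:g}: estimate {x_hat.round(2)}")
    ```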

  12. Performance Characteristics of the Multi-Zone NAS Parallel Benchmarks

    NASA Technical Reports Server (NTRS)

    Jin, Haoqiang; VanderWijngaart, Rob F.

    2003-01-01

    We describe a new suite of computational benchmarks that models applications featuring multiple levels of parallelism. Such parallelism is often available in realistic flow computations on systems of grids, but had not previously been captured in benchmarks. The new suite, named NPB Multi-Zone, is extended from the NAS Parallel Benchmarks suite, and involves solving the application benchmarks LU, BT and SP on collections of loosely coupled discretization meshes. The solutions on the meshes are updated independently, but after each time step they exchange boundary value information. This strategy provides relatively easily exploitable coarse-grain parallelism between meshes. Three reference implementations are available: one serial, one hybrid using the Message Passing Interface (MPI) and OpenMP, and another hybrid using a shared-memory multi-level programming model (SMP+OpenMP). We examine the effectiveness of hybrid parallelization paradigms in these implementations on three different parallel computers. We also use an empirical formula to investigate the performance characteristics of the multi-zone benchmarks.

  13. LENSED: a code for the forward reconstruction of lenses and sources from strong lensing observations

    NASA Astrophysics Data System (ADS)

    Tessore, Nicolas; Bellagamba, Fabio; Metcalf, R. Benton

    2016-12-01

    Robust modelling of strong lensing systems is fundamental to exploit the information they contain about the distribution of matter in galaxies and clusters. In this work, we present LENSED, a new code which performs forward parametric modelling of strong lenses. LENSED takes advantage of a massively parallel ray-tracing kernel to perform the necessary calculations on a modern graphics processing unit (GPU). This makes the precise rendering of the background lensed sources much faster, and allows the simultaneous optimization of tens of parameters for the selected model. With a single run, the code is able to obtain the full posterior probability distribution for the lens light, the mass distribution and the background source at the same time. LENSED is first tested on mock images which reproduce realistic space-based observations of lensing systems. In this way, we show that it is able to recover unbiased estimates of the lens parameters, even when the sources do not follow exactly the assumed model. Then, we apply it to a subsample of the Sloan Lens ACS Survey lenses, in order to demonstrate its use on real data. The results generally agree with the literature, and highlight the flexibility and robustness of the algorithm.

  14. Reconstruction of Complex Directional Networks with Group Lasso Nonlinear Conditional Granger Causality.

    PubMed

    Yang, Guanxue; Wang, Lin; Wang, Xiaofan

    2017-06-07

    Reconstruction of the networks underlying complex systems is one of the most crucial problems in many areas of engineering and science. In this paper, rather than identifying parameters of complex systems governed by pre-defined models or taking some polynomial and rational functions as prior information for subsequent model selection, we put forward a general framework for nonlinear causal network reconstruction from time series with limited observations. Obtaining multi-source datasets through a data-fusion strategy, we propose a novel method to handle the nonlinearity and directionality of complex networked systems, namely group lasso nonlinear conditional Granger causality. Specifically, our method can exploit different sets of radial basis functions to approximate the nonlinear interactions between each pair of nodes and integrate sparsity into grouped variable selection. The performance of our approach is first assessed with two types of simulated datasets from nonlinear vector autoregressive models and nonlinear dynamic models, and then verified on the benchmark datasets from DREAM3 Challenge 4. Effects of data size and noise intensity are also discussed. All of the results demonstrate that the proposed method performs better in terms of a higher area under the precision-recall curve.
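
    For intuition, the sketch below implements a plain linear Granger-causality score: predicting y from its own past is compared with predicting it from its own past plus the past of x, via the relative reduction in residual sum of squares. The paper's method is nonlinear, with radial basis functions and group-lasso selection; this linear version only shows the underlying comparison.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    def granger_score(x, y, lag=2):
        """Relative drop in residual sum of squares when the past of x is
        added to an autoregression of y (linear Granger-causality score)."""
        n = len(y) - lag
        past_y = np.column_stack([y[lag - k - 1: lag - k - 1 + n] for k in range(lag)])
        past_x = np.column_stack([x[lag - k - 1: lag - k - 1 + n] for k in range(lag)])
        target = y[lag:]

        def rss(design):
            beta, *_ = np.linalg.lstsq(design, target, rcond=None)
            resid = target - design @ beta
            return resid @ resid

        rss_restricted = rss(np.column_stack([np.ones(n), past_y]))
        rss_full = rss(np.column_stack([np.ones(n), past_y, past_x]))
        return (rss_restricted - rss_full) / rss_full   # > 0: x helps predict y

    # Coupled pair: x drives y with a one-step delay, but not vice versa.
    x = rng.normal(size=500)
    y = np.zeros(500)
    for t in range(1, 500):
        y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

    print(f"x -> y: {granger_score(x, y):.3f}   y -> x: {granger_score(y, x):.3f}")
    ```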

  15. Resilience of Cyber Systems with Over- and Underregulation.

    PubMed

    Gisladottir, Viktoria; Ganin, Alexander A; Keisler, Jeffrey M; Kepner, Jeremy; Linkov, Igor

    2017-09-01

    Recent cyber attacks provide evidence of increased threats to our critical systems and infrastructure. A common reaction to a new threat is to harden the system by adding new rules and regulations. As federal and state governments request new procedures to follow, each of their organizations implements their own cyber defense strategies. This unintentionally increases time and effort that employees spend on training and policy implementation and decreases the time and latitude to perform critical job functions, thus raising overall levels of stress. People's performance under stress, coupled with an overabundance of information, results in even more vulnerabilities for adversaries to exploit. In this article, we embed a simple regulatory model that accounts for cybersecurity human factors and an organization's regulatory environment in a model of a corporate cyber network under attack. The resulting model demonstrates the effect of under- and overregulation on an organization's resilience with respect to insider threats. Currently, there is a tendency to use ad-hoc approaches to account for human factors rather than to incorporate them into cyber resilience modeling. It is clear that using a systematic approach utilizing behavioral science, which already exists in cyber resilience assessment, would provide a more holistic view for decisionmakers. © 2016 Society for Risk Analysis.

  16. Biocellion: accelerating computer simulation of multicellular biological system models

    PubMed Central

    Kang, Seunghwa; Kahan, Simon; McDermott, Jason; Flann, Nicholas; Shmulevich, Ilya

    2014-01-01

    Motivation: Biological system behaviors are often the outcome of complex interactions among a large number of cells and their biotic and abiotic environment. Computational biologists attempt to understand, predict and manipulate biological system behavior through mathematical modeling and computer simulation. Discrete agent-based modeling (in combination with high-resolution grids to model the extracellular environment) is a popular approach for building biological system models. However, the computational complexity of this approach forces computational biologists to resort to coarser resolution approaches to simulate large biological systems. High-performance parallel computers have the potential to address the computing challenge, but writing efficient software for parallel computers is difficult and time-consuming. Results: We have developed Biocellion, a high-performance software framework, to solve this computing challenge using parallel computers. To support a wide range of multicellular biological system models, Biocellion asks users to provide their model specifics by filling the function body of pre-defined model routines. Using Biocellion, modelers without parallel computing expertise can efficiently exploit parallel computers with less effort than writing sequential programs from scratch. We simulate cell sorting, microbial patterning and a bacterial system in soil aggregate as case studies. Availability and implementation: Biocellion runs on x86 compatible systems with the 64 bit Linux operating system and is freely available for academic use. Visit http://biocellion.com for additional information. Contact: seunghwa.kang@pnnl.gov PMID:25064572

  17. Target specific proteochemometric model development for BACE1 - protein flexibility and structural water are critical in virtual screening.

    PubMed

    Manoharan, Prabu; Chennoju, Kiranmai; Ghoshal, Nanda

    2015-07-01

    BACE1 is an attractive target in Alzheimer's disease (AD) treatment. A rational drug design effort for the inhibition of BACE1 is actively pursued by researchers in both academic and pharmaceutical industries. This continued effort led to the steady accumulation of BACE1 crystal structures, co-complexed with different classes of inhibitors. This wealth of information is used in this study to develop target specific proteochemometric models and these models are exploited for predicting the prospective BACE1 inhibitors. The models developed in this study have performed excellently in predicting the computationally generated poses, separately obtained from single and ensemble docking approaches. The simple protein-ligand contact (SPLC) model outperforms other sophisticated high end models, in virtual screening performance, developed during this study. In an attempt to account for BACE1 protein active site flexibility information in predictive models, we included the change in the area of solvent accessible surface and the change in the volume of solvent accessible surface in our models. The ensemble and single receptor docking results obtained from this study indicate that the structural water mediated interactions improve the virtual screening results. Also, these waters are essential for recapitulating bioactive conformation during docking study. The proteochemometric models developed in this study can be used for the prediction of BACE1 inhibitors, during the early stage of AD drug discovery.

  18. Some Problems and Solutions in Transferring Ecosystem Simulation Codes to Supercomputers

    NASA Technical Reports Server (NTRS)

    Skiles, J. W.; Schulbach, C. H.

    1994-01-01

    Many computer codes for the simulation of ecological systems have been developed in the last twenty-five years. This development took place initially on main-frame computers, then mini-computers, and more recently, on micro-computers and workstations. Recent recognition of ecosystem science as a High Performance Computing and Communications Program Grand Challenge area emphasizes supercomputers (both parallel and distributed systems) as the next set of tools for ecological simulation. Transferring ecosystem simulation codes to such systems is not a matter of simply compiling and executing existing code on the supercomputer since there are significant differences in the system architectures of sequential, scalar computers and parallel and/or vector supercomputers. To more appropriately match the application to the architecture (necessary to achieve reasonable performance), the parallelism (if it exists) of the original application must be exploited. We discuss our work in transferring a general grassland simulation model (developed on a VAX in the FORTRAN computer programming language) to a Cray Y-MP. We show the Cray shared-memory vector-architecture, and discuss our rationale for selecting the Cray. We describe porting the model to the Cray and executing and verifying a baseline version, and we discuss the changes we made to exploit the parallelism in the application and to improve code execution. As a result, the Cray executed the model 30 times faster than the VAX 11/785 and 10 times faster than a Sun 4 workstation. We achieved an additional speed-up of approximately 30 percent over the original Cray run by using the compiler's vectorizing capabilities and the machine's ability to put subroutines and functions "in-line" in the code. With the modifications, the code still runs at only about 5% of the Cray's peak speed because it makes ineffective use of the vector processing capabilities of the Cray. We conclude with a discussion and future plans.

  19. Synergy in spreading processes: from exploitative to explorative foraging strategies.

    PubMed

    Pérez-Reche, Francisco J; Ludlam, Jonathan J; Taraskin, Sergei N; Gilligan, Christopher A

    2011-05-27

    An epidemiological model which incorporates synergistic effects that allow the infectivity and/or susceptibility of hosts to be dependent on the number of infected neighbors is proposed. Constructive synergy induces an exploitative behavior which results in a rapid invasion that infects a large number of hosts. Interfering synergy leads to a slower and sparser explorative foraging strategy that traverses larger distances by infecting fewer hosts. The model can be mapped to a dynamical bond percolation with spatial correlations that affect the mechanism of spread but do not influence the critical behavior of epidemics. © 2011 American Physical Society
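
    A minimal lattice sketch of synergy in spreading, under an assumed rate form: each susceptible site is infected with a rate that grows (or shrinks) with its number of infected neighbours k through a factor exp(sigma*(k-1)). Positive sigma mimics constructive synergy, negative sigma interfering synergy; all constants are illustrative, not the paper's model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    S, I, R = 0, 1, 2

    def epidemic(sigma, L=60, beta=0.6, steps=80):
        """SIR spread on a lattice: a susceptible site with k infected
        neighbours is infected with rate beta * k * exp(sigma * (k - 1)),
        and infected sites recover after one step. Illustrative form only."""
        grid = np.full((L, L), S)
        grid[L // 2, L // 2] = I
        for _ in range(steps):
            inf = (grid == I).astype(float)
            k = (np.roll(inf, 1, 0) + np.roll(inf, -1, 0)
                 + np.roll(inf, 1, 1) + np.roll(inf, -1, 1))
            rate = beta * k * np.exp(sigma * np.maximum(k - 1.0, 0.0))
            p = 1.0 - np.exp(-rate)
            new_inf = (grid == S) & (rng.random((L, L)) < p)
            grid[grid == I] = R
            grid[new_inf] = I
        return int(((grid == R) | (grid == I)).sum())

    for sigma in (-1.0, 0.0, 0.5):
        print(f"sigma = {sigma:+.1f}: {epidemic(sigma)} hosts ever infected")
    ```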

  1. Emerging from the bottleneck: benefits of the comparative approach to modern neuroscience.

    PubMed

    Brenowitz, Eliot A; Zakon, Harold H

    2015-05-01

    Neuroscience has historically exploited a wide diversity of animal taxa. Recently, however, research has focused increasingly on a few model species. This trend has accelerated with the genetic revolution, as genomic sequences and genetic tools became available for a few species, which formed a bottleneck. This coalescence on a small set of model species comes with several costs that are often not considered, especially in the current drive to use mice explicitly as models for human diseases. Comparative studies of strategically chosen non-model species can complement model species research and yield more rigorous studies. As genetic sequences and tools become available for many more species, we are poised to emerge from the bottleneck and once again exploit the rich biological diversity offered by comparative studies. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Vectorization for Molecular Dynamics on Intel Xeon Phi Coprocessors

    NASA Astrophysics Data System (ADS)

    Yi, Hongsuk

    2014-03-01

    Many modern processors are capable of exploiting data-level parallelism through the use of single instruction multiple data (SIMD) execution. The Intel Xeon Phi coprocessor supports 512-bit vector registers for high performance computing. In this paper, we have developed a hierarchical parallelization scheme for accelerated molecular dynamics simulations with the Tersoff potential for covalently bonded solid crystals on Intel Xeon Phi coprocessor systems. The scheme exploits multiple levels of parallelism, combining tightly coupled thread-level and task-level parallelism with the 512-bit vector registers. The simulation results show that the parallel performance of the SIMD implementations on the Xeon Phi is clearly superior to that of the x86 CPU architecture.

  3. 3D Tracking of individual growth factor receptors on polarized cells

    NASA Astrophysics Data System (ADS)

    Werner, James; Stich, Dominik; Cleyrat, Cedric; Phipps, Mary; Wadinger-Ness, Angela; Wilson, Bridget

    We have been developing methods for following the 3D motion of selected biomolecular species throughout mammalian cells. Our approach exploits a custom-designed confocal microscope that uses a unique spatial filter geometry and active feedback 200 times per second to follow fast 3D motion. By exploiting new non-blinking quantum dots as fluorescence labels, individual molecular trajectories can be observed for several minutes. We will also discuss recent instrument upgrades, including the ability to perform whole-cell spinning-disk fluorescence microscopy simultaneously with 3D molecular tracking experiments. These instrument upgrades were used to quantify the 3D heterogeneous transport of individual growth factor receptors (EGFR) on live human renal cortical epithelial cells.

  4. LBT observations of the HR8799 planetary system

    NASA Astrophysics Data System (ADS)

    Mesa, D.; Arcidiacono, C.; Claudi, R. U.; Desidera, S.; Esposito, S.; Gratton, R.; Masciadri, E.

    2013-09-01

    We present here observations of the HR8799 planetary system performed in the H and Ks bands exploiting the AO system at the Large Binocular Telescope and the PISCES camera. Thanks to the excellent performance of the instrument, we were able to detect the innermost known planet of the system (HR8799e) in the H band for the first time. Precise photometric and astrometric measurements have been taken for all four planets. Furthermore, exploiting our astrometric results together with previous ones, we were able to put some limits on the orbits of the four planets. The analysis of the dynamical stability of the system seems to indicate lower planetary masses than the ones adopted until now.

  5. Robust visual tracking via multiple discriminative models with object proposals

    NASA Astrophysics Data System (ADS)

    Zhang, Yuanqiang; Bi, Duyan; Zha, Yufei; Li, Huanyu; Ku, Tao; Wu, Min; Ding, Wenshan; Fan, Zunlin

    2018-04-01

    Model drift is an important cause of tracking failure. In this paper, multiple discriminative models with object proposals are used to improve model discrimination and relieve this problem. Firstly, the target location and scale changes are captured by lots of high-quality object proposals, which are represented by deep convolutional features for target semantics. Then, by sharing a feature map obtained from a pre-trained network, ROI pooling is exploited to warp object proposals of various sizes into vectors of the same length, which are used to learn a discriminative model conveniently. Lastly, these historical snapshot vectors are trained by models of different lifetimes. Based on an entropy decision mechanism, a bad model suffering from model drift can be corrected by selecting the best discriminative model. This improves the robustness of the tracker significantly. We extensively evaluate our tracker on two popular benchmarks, the OTB 2013 benchmark and the UAV20L benchmark. On both benchmarks, our tracker achieves the best performance on precision and success rate compared with state-of-the-art trackers.

  6. State-of-charge estimation in lithium-ion batteries: A particle filter approach

    NASA Astrophysics Data System (ADS)

    Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.

    2016-11-01

    The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The Pseudo two-dimensional model (P2D) is one model that performs sufficiently accurately under various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. Partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the pseudo two-dimensional model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
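
    The P2D equations themselves are far richer, but the discretize-then-reformulate step can be illustrated with a 1-D diffusion stand-in: backward-Euler (implicit) time stepping turns the PDE into a linear state-space update, as sketched below with made-up grid and diffusivity values.

    ```python
    import numpy as np

    # Backward-Euler discretization of 1-D diffusion (a stand-in for the ionic
    # concentration PDEs): (I - dt*D*L) c_{k+1} = c_k, with L the standard
    # second-difference operator. Grid size, dt and D are made-up values.
    N, dt, D, dx = 50, 1.0, 1e-3, 0.1
    L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / dx ** 2
    A = np.linalg.inv(np.eye(N) - dt * D * L)   # implicit update matrix (stable)

    c = np.zeros(N)
    c[N // 2] = 1.0                             # initial concentration spike
    for _ in range(100):
        c = A @ c                               # one implicit time step

    print(f"total mass: {c.sum():.3f}, peak concentration: {c.max():.4f}")
    ```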

  7. Noise-robust speech triage.

    PubMed

    Bartos, Anthony L; Cipr, Tomas; Nelson, Douglas J; Schwarz, Petr; Banowetz, John; Jerabek, Ladislav

    2018-04-01

    A method is presented in which conventional speech algorithms are applied, with no modifications, to improve their performance in extremely noisy environments. It has been demonstrated that, for eigen-channel algorithms, pre-training multiple speaker identification (SID) models at a lattice of signal-to-noise-ratio (SNR) levels and then performing SID using the appropriate SNR dependent model was successful in mitigating noise at all SNR levels. In those tests, it was found that SID performance was optimized when the SNR of the testing and training data were close or identical. In this current effort multiple i-vector algorithms were used, greatly improving both processing throughput and equal error rate classification accuracy. Using identical approaches in the same noisy environment, performance of SID, language identification, gender identification, and diarization were significantly improved. A critical factor in this improvement is speech activity detection (SAD) that performs reliably in extremely noisy environments, where the speech itself is barely audible. To optimize SAD operation at all SNR levels, two algorithms were employed. The first maximized detection probability at low levels (-10 dB ≤ SNR < +10 dB) using just the voiced speech envelope, and the second exploited features extracted from the original speech to improve overall accuracy at higher quality levels (SNR ≥ +10 dB).

  8. Guiding pancreatic beta cells to target electrodes in a whole-cell biosensor for diabetes.

    PubMed

    Pedraza, Eileen; Karajić, Aleksandar; Raoux, Matthieu; Perrier, Romain; Pirog, Antoine; Lebreton, Fanny; Arbault, Stéphane; Gaitan, Julien; Renaud, Sylvie; Kuhn, Alexander; Lang, Jochen

    2015-10-07

    We are developing a cell-based bioelectronic glucose sensor that exploits the multi-parametric sensing ability of pancreatic islet cells for the treatment of diabetes. These cells sense changes in the concentration of glucose and physiological hormones and immediately react by generating electrical signals. In our sensor, signals from multiple cells are recorded as field potentials by a micro-electrode array (MEA). Thus, cell response to various factors can be assessed rapidly and with high throughput. However, signal quality and consequently overall sensor performance rely critically on close cell-electrode proximity. Therefore, we present here a non-invasive method of further exploiting the electrical properties of these cells to guide them towards multiple micro-electrodes via electrophoresis. Parameters were optimized by measuring the cell's zeta potential and modeling the electric field distribution. Clonal and primary mouse or human β-cells migrated directly to target electrodes during the application of a 1 V potential between MEA electrodes for 3 minutes. The morphology, insulin secretion, and electrophysiological characteristics were not altered compared to controls. Thus, cell manipulation on standard MEAs was achieved without introducing any external components and while maintaining the performance of the biosensor. Since the analysis of the cells' electrical activity was performed in real time via on-chip recording and processing, this work demonstrates that our biosensor is operational from the first step of electrically guiding cells to the final step of automatic recognition. Our favorable results with pancreatic islets, which are highly sensitive and fragile cells, are encouraging for the extension of this technique to other cell types and microarray devices.

  9. Rationalising predictors of child sexual exploitation and sex-trading.

    PubMed

    Klatt, Thimna; Cavner, Della; Egan, Vincent

    2014-02-01

    Although there is evidence for specific risk factors leading to child sexual exploitation and prostitution, these influences overlap and have rarely been examined concurrently. The present study examined case files for 175 young persons who attended a voluntary organization in Leicester, United Kingdom, which supports people who are sexually exploited or at risk of sexual exploitation. Based on the case files, the presence or absence of known risk factors for becoming a sex worker was coded. Data were analyzed using t-test, logistic regression, and smallest space analysis. Users of the voluntary organization's services who had been sexually exploited exhibited a significantly greater number of risk factors than service users who had not been victims of sexual exploitation. The logistic regression produced a significant model fit. However, of the 14 potential predictors--many of which were associated with each other--only four variables significantly predicted actual sexual exploitation: running away, poverty, drug and/or alcohol use, and having friends or family members in prostitution. Surprisingly, running away was found to significantly decrease the odds of becoming involved in sexual exploitation. Smallest space analysis of the data revealed 5 clusters of risk factors. Two of the clusters, which reflected a desperation and need construct and immature or out-of-control lifestyles, were significantly associated with sexual exploitation. Our research suggests that some risk factors (e.g. physical and emotional abuse, early delinquency, and homelessness) for becoming involved in sexual exploitation are common but are part of the problematic milieu of the individuals affected and not directly associated with sex trading itself. Our results also indicate that it is important to engage with the families and associates of young persons at risk of becoming (or remaining) a sex worker if one wants to reduce the numbers of persons who engage in this activity. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Mixed Poisson distributions in exact solutions of stochastic autoregulation models.

    PubMed

    Iyer-Biswas, Srividya; Jayaprakash, C

    2014-11-01

    In this paper we study the interplay between stochastic gene expression and system design using simple stochastic models of autoactivation and autoinhibition. Using the Poisson representation, a technique whose particular usefulness in the context of nonlinear gene regulation models we elucidate, we find exact results for these feedback models in the steady state. Further, we exploit this representation to analyze the parameter spaces of each model, determine which dimensionless combinations of rates are the shape determinants for each distribution, and thus demarcate where in the parameter space qualitatively different behaviors arise. These behaviors include power-law-tailed distributions, bimodal distributions, and sub-Poisson distributions. We also show how these distribution shapes change when the strength of the feedback is tuned. Using our results, we reexamine how well the autoinhibition and autoactivation models serve their conventionally assumed roles as paradigms for noise suppression and noise exploitation, respectively.
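
    The exact results in the paper are analytic; purely for intuition, the sketch below simulates an autoactivation model with the Gillespie algorithm, using an assumed Hill-type birth rate, and launches trajectories in both basins to expose the bimodal (mixed-Poisson-like) occupancy. All rate constants are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def gillespie_autoactivation(t_end, n0, b0=0.2, b1=20.0, K=10.0, gamma=1.0):
        """Gillespie simulation of a self-activating gene: the birth rate rises
        with copy number n through a Hill function, death is first order. All
        rate constants are illustrative."""
        t, n = 0.0, n0
        while t < t_end:
            birth = b0 + b1 * n * n / (K * K + n * n)
            death = gamma * n
            total = birth + death
            t += rng.exponential(1.0 / total)
            n += 1 if rng.random() < birth / total else -1
        return n

    # Launch trajectories in both basins: with strong positive feedback the
    # low and high states are metastable, so the occupancy is bimodal.
    samples = np.array([gillespie_autoactivation(30.0, n0) for n0 in [0, 20] * 150])
    print(f"fraction near low state (n <= 5): {(samples <= 5).mean():.2f}, "
          f"near high state (n >= 12): {(samples >= 12).mean():.2f}")
    ```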

  11. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
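
    The sketch below illustrates the mode-counting step on a toy fast/slow kinetic system: finite-difference sensitivities of the state with respect to the initial conditions are assembled into a matrix whose singular values above a tolerance count the locally active dynamical modes. The mechanism, rates, and tolerances are invented, and the paper's algorithm uses error-controlled generalized sensitivities rather than this crude finite-difference version.

    ```python
    import numpy as np

    K_FAST, K_SLOW = 50.0, 0.1

    def rhs(c):
        """Toy kinetics: fast equilibration c0 <-> c1 and slow transfer and
        decay through c2; rate constants are illustrative."""
        return np.array([
            -K_FAST * c[0] + K_FAST * c[1],
            +K_FAST * c[0] - K_FAST * c[1] - K_SLOW * c[1],
            +K_SLOW * c[1] - K_SLOW * c[2],
        ])

    def integrate(c0, t_end, dt=1e-3):
        c = np.array(c0, dtype=float)
        for _ in range(int(round(t_end / dt))):
            c = c + dt * rhs(c)        # explicit Euler; dt is well below 1/K_FAST
        return c

    def active_modes(c0, t_end, eps=1e-4, tol=1e-2):
        """Finite-difference sensitivity of the state at t_end with respect to
        the initial state; singular values above tol * max mark active modes."""
        c0 = np.asarray(c0, dtype=float)
        base = integrate(c0, t_end)
        S = np.column_stack([(integrate(c0 + eps * e, t_end) - base) / eps
                             for e in np.eye(3)])
        sv = np.linalg.svd(S, compute_uv=False)
        return sv, int(np.sum(sv > tol * sv[0]))

    # The fast mode drops out of the active set once it has equilibrated.
    for t_end in (0.001, 0.5, 30.0):
        sv, m = active_modes([1.0, 0.0, 0.0], t_end)
        print(f"t = {t_end:>6}: singular values {sv.round(4)}, active modes ~ {m}")
    ```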

  12. Interconnection network architectures based on integrated orbital angular momentum emitters

    NASA Astrophysics Data System (ADS)

    Scaffardi, Mirco; Zhang, Ning; Malik, Muhammad Nouman; Lazzeri, Emma; Klitis, Charalambos; Lavery, Martin; Sorel, Marc; Bogoni, Antonella

    2018-02-01

    Novel architectures for two-layer interconnection networks based on concentric OAM emitters are presented. A scalability analysis is carried out in terms of device characteristics, power budget and optical signal-to-noise ratio, by exploiting experimentally measured parameters. The analysis shows that by exploiting optical amplification, the proposed interconnection networks can support more than 100 ports. The OAM crosstalk-induced penalty, evaluated through an experimental characterization, does not significantly affect the interconnection network performance.

  13. Pheromone evolution and sexual behavior in Drosophila are shaped by male sensory exploitation of other males.

    PubMed

    Ng, Soon Hwee; Shankar, Shruti; Shikichi, Yasumasa; Akasaka, Kazuaki; Mori, Kenji; Yew, Joanne Y

    2014-02-25

    Animals exhibit a spectacular array of traits to attract mates. Understanding the evolutionary origins of sexual features and preferences is a fundamental problem in evolutionary biology, and the mechanisms remain highly controversial. In some species, females choose mates based on direct benefits conferred by the male to the female and her offspring. Thus, female preferences are thought to originate and coevolve with male traits. In contrast, sensory exploitation occurs when expression of a male trait takes advantage of preexisting sensory biases in females. Here, we document in Drosophila a previously unidentified example of sensory exploitation of males by other males through the use of the sex pheromone CH503. We use mass spectrometry, high-performance liquid chromatography, and behavioral analysis to demonstrate that an antiaphrodisiac produced by males of the melanogaster subgroup also is effective in distant Drosophila relatives that do not express the pheromone. We further show that species that produce the pheromone have become less sensitive to the compound, illustrating that sensory adaptation occurs after sensory exploitation. Our findings provide a mechanism for the origin of a sex pheromone and show that sensory exploitation changes male sexual behavior over evolutionary time.

  14. Efficiency of bulk-heterojunction organic solar cells

    PubMed Central

    Scharber, M.C.; Sariciftci, N.S.

    2013-01-01

    In recent years the performance of bulk heterojunction solar cells has improved significantly. For a large-scale application of this technology further improvements are required. This article reviews the basic working principles and the state-of-the-art device design of bulk heterojunction solar cells. The importance of high power conversion efficiencies for commercial exploitation is outlined and different efficiency models for bulk heterojunction solar cells are discussed. Assuming state-of-the-art materials and device architectures, several models predict power conversion efficiencies in the range of 10–15%. A more general approach, assuming device operation close to the Shockley–Queisser limit, leads to even higher efficiencies. Bulk heterojunction devices exhibiting only radiative recombination of charge carriers could be as efficient as ideal inorganic photovoltaic devices. PMID:24302787

  15. Deep and Structured Robust Information Theoretic Learning for Image Analysis.

    PubMed

    Deng, Yue; Bao, Feng; Deng, Xuesong; Wang, Ruiping; Kong, Youyong; Dai, Qionghai

    2016-07-07

    This paper presents a robust information theoretic (RIT) model to reduce the uncertainties, i.e., missing and noisy labels, in general discriminative data representation tasks. The fundamental pursuit of our model is to simultaneously learn a transformation function and a discriminative classifier that maximize the mutual information of data and their labels in the latent space. In this general paradigm, we respectively discuss three types of RIT implementations: linear subspace embedding, deep transformation and structured sparse learning. In practice, the RIT and deep RIT are exploited to solve the image categorization task, and their performance is verified on various benchmark datasets. The structured sparse RIT is further applied to a medical image analysis task for brain MRI segmentation that allows group-level feature selection on the brain tissues.

  16. A Charrelation Matrix-Based Blind Adaptive Detector for DS-CDMA Systems

    PubMed Central

    Luo, Zhongqiang; Zhu, Lidong

    2015-01-01

    In this paper, a blind adaptive detector is proposed for blind separation of user signals and blind estimation of spreading sequences in DS-CDMA systems. The blind separation scheme exploits a charrelation matrix for simple computation and effective extraction of information from observation signal samples. The system model of DS-CDMA signals is formulated as a blind separation framework. The unknown user information and spreading sequences of DS-CDMA systems can be estimated only from the sampled observation signals. Theoretical analysis and simulation results show the improved performance of the proposed algorithm in comparison with existing conventional algorithms used in DS-CDMA systems. In particular, the proposed scheme is suitable when the number of observation samples is small and the signal-to-noise ratio (SNR) is low. PMID:26287209

  17. A Charrelation Matrix-Based Blind Adaptive Detector for DS-CDMA Systems.

    PubMed

    Luo, Zhongqiang; Zhu, Lidong

    2015-08-14

    In this paper, a blind adaptive detector is proposed for blind separation of user signals and blind estimation of spreading sequences in DS-CDMA systems. The blind separation scheme exploits a charrelation matrix for simple computation and effective extraction of information from observation signal samples. The system model of DS-CDMA signals is formulated as a blind separation framework. The unknown user information and spreading sequences of DS-CDMA systems can be estimated only from the sampled observation signals. Theoretical analysis and simulation results show the improved performance of the proposed algorithm in comparison with existing conventional algorithms used in DS-CDMA systems. In particular, the proposed scheme is suitable when the number of observation samples is small and the signal-to-noise ratio (SNR) is low.

  18. Development of In Vitro-In Vivo Correlation/Relationship Modeling Approaches for Immediate Release Formulations Using Compartmental Dynamic Dissolution Data from “Golem”: A Novel Apparatus

    PubMed Central

    Tuszyński, Paweł K.; Polak, Sebastian; Jachowicz, Renata; Mendyk, Aleksander; Dohnal, Jiří

    2015-01-01

    Different batches of atorvastatin, represented by two immediate release formulation designs, were studied using a novel dynamic dissolution apparatus simulating the stomach and small intestine. A universal dissolution method was employed which simulated the physiology of the human gastrointestinal tract, including precise chyme transit behavior and biorelevant conditions. The multicompartmental dissolution data allowed direct observation and qualitative discrimination of the differences resulting from the highly pH-dependent dissolution behavior of the tested batches. Further evaluation of the results was performed using IVIVC/IVIVR development. While a satisfactory correlation could not be achieved using a conventional deconvolution-based model, promising results were obtained through the use of a nonconventional approach exploiting the complex compartmental dissolution data. PMID:26120580

  19. An ecosystem model of an exploited southern Mediterranean shelf region (Gulf of Gabes, Tunisia) and a comparison with other Mediterranean ecosystem model properties

    NASA Astrophysics Data System (ADS)

    Hattab, Tarek; Ben Rais Lasram, Frida; Albouy, Camille; Romdhane, Mohamed Salah; Jarboui, Othman; Halouani, Ghassen; Cury, Philippe; Le Loc'h, François

    2013-12-01

    In this paper, we describe an exploited continental shelf ecosystem (Gulf of Gabes) in the southern Mediterranean Sea using an Ecopath mass-balance model. This allowed us to determine the structure and functioning of this ecosystem and assess the impacts of fishing upon it. The model represents the average state of the ecosystem between 2000 and 2005. It includes 41 functional groups, which encompass the entire trophic spectrum from phytoplankton to higher trophic levels (e.g., fishes, birds, and mammals), and also considers the fishing activities in the area (five fleets). Model results highlight an important bentho-pelagic coupling in the system due to the links between plankton and benthic invertebrates through detritus. A comparison of this model with those developed for other continental shelf regions in the Mediterranean (i.e., the southern Catalan, the northern-central Adriatic, and the northern Aegean Seas) emphasizes similar patterns in their trophic functioning. Low and medium trophic levels (i.e., zooplankton, benthic molluscs, and polychaetes) and sharks were identified as playing key ecosystem roles and were classified as keystone groups. An analysis of ecosystem attributes indicated that the Gulf of Gabes is the least mature (i.e., in the earliest stages of ecosystem development) of the four ecosystems that were compared and it is suggested that this is due, at least in part, to the impacts of fishing. Bottom trawling was identified as having the widest-ranging impacts across the different functional groups and the largest impacts on some commercially-targeted demersal fish species. Several exploitation indices highlighted that the Gulf of Gabes ecosystem is highly exploited, a finding which is supported by stock assessment outcomes. This suggests that it is unlikely that the gulf can be fished at sustainable levels, a situation which is similar to other marine ecosystems in the Mediterranean Sea.

  20. GPU-Accelerated Forward and Back-Projections with Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction.

    PubMed

    Ha, S; Matej, S; Ispiryan, M; Mueller, K

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis-aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators is slightly faster than, or approximately comparable to, FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, neither of which an FFT-based approach can cope with.

  1. Challenge toward the prediction of typhoon behaviour and downpour

    NASA Astrophysics Data System (ADS)

    Takahashi, K.; Onishi, R.; Baba, Y.; Kida, S.; Matsuda, K.; Goto, K.; Fuchigami, H.

    2013-08-01

    Mechanisms of interactions among phenomena at different scales play an important role in forecasting weather and climate. The Multi-scale Simulator for the Geoenvironment (MSSG), which deals with multi-scale multi-physics phenomena, is a coupled non-hydrostatic atmosphere-ocean model designed to run efficiently on the Earth Simulator. We present simulation results with the world's highest 1.9 km horizontal resolution for the entire globe, as well as regional heavy-rain simulations with 1 km horizontal resolution and 5 m horizontal/vertical resolution for an urban area. To gain high performance by exploiting the system capabilities, we apply performance evaluation metrics, introduced in previous studies, that incorporate the effects of the data caching mechanism between CPU and memory. With a useful code optimization guideline based on such metrics, we demonstrate that MSSG can achieve an excellent peak performance ratio of 32.2% on the Earth Simulator, with single-core performance found to be a key to a reduced time-to-solution.

  2. GPU-Accelerated Forward and Back-Projections With Spatially Varying Kernels for 3D DIRECT TOF PET Reconstruction

    NASA Astrophysics Data System (ADS)

    Ha, S.; Matej, S.; Ispiryan, M.; Mueller, K.

    2013-02-01

    We describe a GPU-accelerated framework that efficiently models spatially (shift) variant system response kernels and performs forward- and back-projection operations with these kernels for the DIRECT (Direct Image Reconstruction for TOF) iterative reconstruction approach. Inherent challenges arise from the poor memory cache performance at non-axis-aligned TOF directions. Focusing on the GPU memory access patterns, we utilize different kinds of GPU memory according to these patterns in order to maximize the memory cache performance. We also exploit the GPU instruction-level parallelism to efficiently hide long latencies from the memory operations. Our experiments indicate that our GPU implementation of the projection operators is slightly faster than, or approximately comparable to, FFT-based approaches using state-of-the-art FFTW routines. However, most importantly, our GPU framework can also efficiently handle any generic system response kernels, such as spatially symmetric and shift-variant as well as spatially asymmetric and shift-variant, neither of which an FFT-based approach can cope with.

  3. Pain expressiveness and altruistic behavior: an exploration using agent-based modeling.

    PubMed

    de C Williams, Amanda C; Gallagher, Elizabeth; Fidalgo, Antonio R; Bentley, Peter J

    2016-03-01

    Predictions that invoke evolutionary mechanisms are hard to test. Agent-based modeling in artificial life offers a way to simulate behaviors and interactions in specific physical or social environments over many generations. The outcomes have implications for understanding the adaptive value of behaviors in context. Pain-related behavior in animals is communicated to other animals that might protect or help, or might exploit or predate. An agent-based model simulated the effects of displaying or not displaying pain (expresser/nonexpresser strategies) when injured and of helping, ignoring, or exploiting another in pain (altruistic/nonaltruistic/selfish strategies). Agents modeled in MATLAB interacted at random while foraging (gaining energy); random injury interrupted foraging for a fixed time unless help from an altruistic agent, who paid an energy cost, speeded recovery. Environmental and social conditions also varied, and each model ran for 10,000 iterations. The findings were meaningful in that contingencies evident from experimental work with a variety of mammals, over a few interactions, were generally replicated in the agent-based model after selection pressure over many generations. More energy-demanding expression of pain reduced its frequency in successive generations, and increasing injury frequency resulted in fewer expressers and altruists. Allowing exploitation of injured agents decreased expression of pain to near zero, but altruists remained. Decreasing costs or increasing benefits of helping hardly changed its frequency, whereas increasing the interaction rate between injured agents and helpers diminished the benefits to both. Agent-based modeling allows simulation of complex behaviors and environmental pressures over evolutionary time.
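
    A toy, stripped-down analogue of the simulated contingencies (the original model was written in MATLAB; every rate below is invented for illustration): expressing pain costs energy but can recruit an altruist, who pays an energy cost to shorten the injured agent's recovery.

      import random

      FORAGE_GAIN, EXPRESS_COST, HELP_COST = 1.0, 0.2, 0.5
      INJURY_P, RECOVERY_T = 0.05, 10

      class Agent:
          def __init__(self, expresser, altruist):
              self.expresser, self.altruist = expresser, altruist
              self.energy, self.injured_for = 0.0, 0

      def step(agents):
          for a in agents:
              if a.injured_for > 0:                # recovering: no foraging
                  a.injured_for -= 1
                  if a.expresser:
                      a.energy -= EXPRESS_COST     # displaying pain costs energy
                      helpers = [h for h in agents
                                 if h.altruist and h.injured_for == 0]
                      if helpers:                  # help speeds recovery
                          random.choice(helpers).energy -= HELP_COST
                          a.injured_for = max(0, a.injured_for - 2)
              else:
                  a.energy += FORAGE_GAIN          # foraging gains energy
                  if random.random() < INJURY_P:   # random injury
                      a.injured_for = RECOVERY_T

      agents = [Agent(expresser=i % 2 == 0, altruist=i % 3 == 0)
                for i in range(20)]
      for _ in range(10_000):
          step(agents)
      print(max(agents, key=lambda a: a.energy).expresser)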

  4. The Neuropsychological Function of Older First-Time Child Exploitation Material Offenders: A Pilot Study.

    PubMed

    Rodriguez, Marcelo; Ellis, Andrew

    2018-06-01

    Despite the growing incidence of child exploitation offences, there is little knowledge of the neuropsychological function of older child exploitation material offenders (CEMOs). Given that studies have reported that sex offenders demonstrate deficits attributed to frontal and temporal lobe function, the aim of this pilot study was to investigate the frontotemporal function of older first-time child exploitation material offenders (FTCEMOs). The neuropsychological performance of 11 older FTCEMOs was compared with 34 older historical sex offenders (HSOs) and 32 older nonsex offender (NSO) controls. Forty-five percent of FTCEMOs admitted to a pedophilic interest, which was significantly lower than those reported by HSOs. FTCEMOs provided significantly higher intellectual function scores than HSOs. Results revealed no evidence of mild or major neurocognitive disorder in FTCEMOs. Although the groups were not significantly different, compared with normative data, FTCEMOs reported a high incidence of impairment on a measure of decision making and on a measure of facial emotional recognition.

  5. Training and business performance: the mediating role of absorptive capacities.

    PubMed

    Hernández-Perlines, Felipe; Moreno-García, Juan; Yáñez-Araque, Benito

    2016-01-01

    Training has received considerable conceptual and empirical attention and is considered a relevant factor for competitive edge in companies because it has a positive impact on business performance. This study is justified by the need for a deeper analysis of the process by which training is transferred into performance. This paper's originality lies in the implementation of the absorptive-capacities approach as an appropriate conceptual framework for designing a model that reflects the connection between training and business performance through absorptive capacities. Based on the above conceptual framework and using a dual methodological implementation, a new method of analyzing the relationship between training and performance was obtained: efforts in training will not lead to performance without the mediation of absorptive capacities. Training turns into performance only if absorptive capacities are involved in this process. The suggested model thus becomes an appropriate framework for explaining the transformation of training into organizational performance, in which absorptive capacities play a key role. The fs/QCA analysis takes the findings further: of the different absorptive capacities, that of exploitation is a necessary condition for achieving better organizational performance. Therefore, training based on absorptive capacity will guide and facilitate the design of appropriate human resource strategies so that training results in improved performance. This conclusion is relevant for the development of a new facet of absorptive capacities by relating it to training, with first-level implications for human resource management.

  6. Bifocal Stereo for Multipath Person Re-Identification

    NASA Astrophysics Data System (ADS)

    Blott, G.; Heipke, C.

    2017-11-01

    This work presents an approach for the task of person re-identification that exploits bifocal stereo cameras. Existing monocular person re-identification approaches suffer from a decreasing working distance when the image resolution is increased to obtain higher re-identification performance. We propose a novel 3D multipath bifocal approach, containing a rectilinear lens with a larger focal length for long-range distances and a fisheye lens with a smaller focal length for the near range. The person re-identification performance is at least on par with 2D re-identification approaches, but the working distance of the approach is increased, and on average 10% higher re-identification performance can be achieved in the overlapping field of view compared to a single camera. In addition, the 3D information from the overlapping field of view is exploited to resolve potential 2D ambiguities.

  7. An Advanced Hierarchical Hybrid Environment for Reliability and Performance Modeling

    NASA Technical Reports Server (NTRS)

    Ciardo, Gianfranco

    2003-01-01

    The key issue we intended to address in our proposed research project was the ability to model and study logical and probabilistic aspects of large computer systems. In particular, we wanted to focus mostly on automatic solution algorithms based on state-space exploration as their first step, in addition to the more traditional discrete-event simulation approaches commonly employed in industry. One explicitly-stated goal was to extend by several orders of magnitude the size of models that can be solved exactly, using a combination of techniques: 1) Efficient exploration and storage of the state space using new data structures that require an amount of memory sublinear in the number of states; and 2) Exploitation of the existing symmetries in the matrices describing the system behavior using Kronecker operators. Not only have we been successful in achieving the above goals, but we have exceeded them in many respects.
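
    The Kronecker technique named in point 2) can be sketched briefly (a NumPy illustration under our own toy assumptions, not the project's tools): the generator of two weakly coupled subsystems is a Kronecker sum of the small per-subsystem matrices, so matrix-vector products never require forming the full state-space matrix.

      import numpy as np

      Q1 = np.array([[-1.0, 1.0], [2.0, -2.0]])   # toy 2-state CTMC generators
      Q2 = np.array([[-3.0, 3.0], [1.0, -1.0]])

      def apply_generator(v):
          """Compute v @ (Q1 (+) Q2), i.e. v @ (kron(Q1, I) + kron(I, Q2)),
          using only the small factors."""
          V = v.reshape(2, 2)
          return (Q1.T @ V + V @ Q2).reshape(-1)

      v = np.array([0.25, 0.25, 0.25, 0.25])      # distribution over 4 states
      full = np.kron(Q1, np.eye(2)) + np.kron(np.eye(2), Q2)
      print(np.allclose(apply_generator(v), v @ full))  # True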

  8. Solving the master equation without kinetic Monte Carlo: Tensor train approximations for a CO oxidation model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gelß, Patrick, E-mail: p.gelss@fu-berlin.de; Matera, Sebastian, E-mail: matera@math.fu-berlin.de; Schütte, Christof, E-mail: schuette@mi.fu-berlin.de

    2016-06-01

    In multiscale modeling of heterogeneous catalytic processes, one crucial point is the solution of a Markovian master equation describing the stochastic reaction kinetics. Usually, this is too high-dimensional to be solved with standard numerical techniques and one has to rely on sampling approaches based on the kinetic Monte Carlo method. In this study we break the curse of dimensionality for the direct solution of the Markovian master equation by exploiting the Tensor Train Format for this purpose. The performance of the approach is demonstrated on a first-principles based, reduced model for the CO oxidation on the RuO2(110) surface. We investigate the complexity for increasing system size and for various reaction conditions. The advantage over the stochastic simulation approach is illustrated by a problem with increased stiffness.
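
    For illustration of the tensor-train idea (our sketch, not the paper's solver), a small TT-SVD in NumPy: a probability tensor over lattice-site states is factorized into low-rank cores by successive truncated SVDs, which is what breaks the curse of dimensionality when the ranks stay small.

      import numpy as np

      def tt_svd(tensor, tol=1e-10):
          """Decompose an ndarray into TT cores G_k of shape (r_{k-1}, n_k, r_k)."""
          dims, cores, r = tensor.shape, [], 1
          mat = tensor.reshape(r * dims[0], -1)
          for k, n in enumerate(dims[:-1]):
              U, s, Vt = np.linalg.svd(mat, full_matrices=False)
              rank = max(1, int(np.sum(s > tol * s[0])))   # truncate
              cores.append(U[:, :rank].reshape(r, n, rank))
              mat = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[k + 1], -1)
              r = rank
          cores.append(mat.reshape(r, dims[-1], 1))
          return cores

      # Rank-1 toy distribution over 4 lattice sites with 3 local states each
      p = np.ones((3, 3, 3, 3)) / 81.0
      print([c.shape for c in tt_svd(p)])   # all TT ranks equal 1 here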

  9. Gain scheduled linear quadratic control for quadcopter

    NASA Astrophysics Data System (ADS)

    Okasha, M.; Shah, J.; Fauzi, W.; Hanouf, Z.

    2017-12-01

    This study investigates the dynamics and control of quadcopters using the Linear Quadratic Regulator (LQR) control approach. The quadcopter’s mathematical model is derived using the Newton-Euler method. It is a highly manoeuvrable, nonlinear, coupled six-degrees-of-freedom (DOF) model, which includes aerodynamics and detailed gyroscopic moments that are often ignored in the literature. The linearized model is obtained and characterized by the heading angle (i.e. yaw angle) of the quadcopter. The adopted control approach utilizes the LQR method to track several reference trajectories, including circle and helix curves with significant variation in the yaw angle. The controller is modified to overcome difficulties related to continuous changes in the operating points and to eliminate the chattering and discontinuity observed in the control input signal. Numerical nonlinear simulations are performed using MATLAB and Simulink to illustrate the accuracy and effectiveness of the proposed controller.
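
    The LQR design step can be sketched generically (the quadcopter's own A, B, Q, R matrices depend on the derivation above and are not reproduced here; the double integrator below is a stand-in for one translational axis):

      import numpy as np
      from scipy.linalg import solve_continuous_are

      def lqr_gain(A, B, Q, R):
          """Solve the continuous algebraic Riccati equation and return K
          such that u = -K x minimizes the quadratic cost."""
          P = solve_continuous_are(A, B, Q, R)
          return np.linalg.solve(R, B.T @ P)

      A = np.array([[0.0, 1.0], [0.0, 0.0]])   # toy double integrator
      B = np.array([[0.0], [1.0]])
      K = lqr_gain(A, B, Q=np.eye(2), R=np.eye(1))
      print(K)                                  # state-feedback gain

    In a gain-scheduled setting, such gains would be recomputed (or interpolated) as the linearization point, here parameterized by the yaw angle, changes.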

  10. Computer vision research with new imaging technology

    NASA Astrophysics Data System (ADS)

    Hou, Guangqi; Liu, Fei; Sun, Zhenan

    2015-12-01

    Light field imaging is capable of capturing dense multi-view 2D images in one snapshot, recording both the intensity values and directions of rays simultaneously. As an emerging 3D device, the light field camera has been widely used in digital refocusing, depth estimation, stereoscopic display, etc. Traditional multi-view stereo (MVS) methods only perform well on strongly textured surfaces; the depth map contains numerous holes and large ambiguities in textureless or low-textured regions. In this paper, we exploit light field imaging technology for 3D face modeling in computer vision. Based on a 3D morphable model, we estimate the pose parameters from facial feature points. The depth map is then estimated through the epipolar plane images (EPIs) method. Finally, the high-quality 3D face model is recovered via a fusion strategy. We evaluate the effectiveness and robustness on face images captured by a light field camera with different poses.

  11. A semiparametric graphical modelling approach for large-scale equity selection

    PubMed Central

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption. PMID:28316507
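
    One standard rank-based route to such a latent Gaussian structure (a sketch consistent with, but not necessarily identical to, the authors' estimator) converts Kendall's tau to a latent correlation via the sine transform and then fits a sparse graphical model; the returns data and regularization value below are synthetic placeholders.

      import numpy as np
      from scipy.stats import kendalltau
      from sklearn.covariance import graphical_lasso

      rng = np.random.default_rng(1)
      returns = rng.standard_normal((250, 8))    # 250 days x 8 hypothetical stocks

      p = returns.shape[1]
      R = np.eye(p)
      for i in range(p):
          for j in range(i + 1, p):
              tau, _ = kendalltau(returns[:, i], returns[:, j])
              R[i, j] = R[j, i] = np.sin(np.pi * tau / 2)   # latent correlation

      cov, prec = graphical_lasso(R, alpha=0.2)  # sparse inverse correlation
      degree = (np.abs(prec) > 1e-6).sum(axis=1) - 1
      print(np.argsort(degree)[:4])              # 4 most independent candidates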

  12. Applying Reduced Generator Models in the Coarse Solver of Parareal in Time Parallel Power System Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, Nan; Dimitrovski, Aleksandar D; Simunovic, Srdjan

    2016-01-01

    The development of high-performance computing techniques and platforms has provided many opportunities for real-time or even faster-than-real-time implementation of power system simulations. One approach uses the Parareal in time framework. The Parareal algorithm has shown promising theoretical simulation speedups by temporally decomposing a simulation run into a coarse simulation on the entire simulation interval and fine simulations on sequential sub-intervals linked through the coarse simulation. However, it has been found that the time cost of the coarse solver needs to be reduced to fully exploit the potential of the Parareal algorithm. This paper studies a Parareal implementation using reduced generator models for the coarse solver and reports the testing results on the IEEE 39-bus system and a 327-generator 2383-bus Polish system model.
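
    A generic Parareal sketch (not the paper's power-system code) shows the structure being exploited: a cheap coarse propagator G corrects expensive fine propagators F that can run independently over the sub-intervals; here both are forward-Euler on dy/dt = -y, with the coarse solver's single step standing in for the reduced generator models.

      import numpy as np

      def euler(f, y0, t0, t1, nsteps):
          y, h = y0, (t1 - t0) / nsteps
          for k in range(nsteps):
              y = y + h * f(t0 + k * h, y)
          return y

      f = lambda t, y: -y
      T, N, K = 2.0, 8, 5                          # horizon, sub-intervals, iterations
      ts = np.linspace(0.0, T, N + 1)
      G = lambda y, a, b: euler(f, y, a, b, 1)     # coarse: 1 step
      F = lambda y, a, b: euler(f, y, a, b, 100)   # fine: 100 steps (parallelizable)

      U = np.zeros(N + 1)
      U[0] = 1.0
      for n in range(N):                           # initial coarse sweep
          U[n + 1] = G(U[n], ts[n], ts[n + 1])
      for _ in range(K):                           # Parareal corrections
          Fv = [F(U[n], ts[n], ts[n + 1]) for n in range(N)]   # independent in time
          Gv = [G(U[n], ts[n], ts[n + 1]) for n in range(N)]
          for n in range(N):                       # sequential update
              U[n + 1] = G(U[n], ts[n], ts[n + 1]) + Fv[n] - Gv[n]
      print(U[-1], np.exp(-T))                     # Parareal result vs exact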

  13. Sampling limits for electron tomography with sparsity-exploiting reconstructions.

    PubMed

    Jiang, Yi; Padgett, Elliot; Hovden, Robert; Muller, David A

    2018-03-01

    Electron tomography (ET) has become a standard technique for 3D characterization of materials at the nano-scale. Traditional reconstruction algorithms such as weighted back projection suffer from disruptive artifacts with insufficient projections. Popularized by compressed sensing, sparsity-exploiting algorithms have been applied to experimental ET data and show promise for improving reconstruction quality or reducing the total beam dose applied to a specimen. Nevertheless, theoretical bounds for these methods have been less explored in the context of ET applications. Here, we perform numerical simulations to investigate the performance of ℓ1-norm and total-variation (TV) minimization under various imaging conditions. From 36,100 different simulated structures, our results show that specimens with more complex structures generally require more projections for exact reconstruction. However, once sufficient data are acquired, dividing the beam dose over more projections provides no improvement, analogous to the traditional dose-fraction theorem. Moreover, a limited tilt range of ±75° or less can result in distorting artifacts in sparsity-exploiting reconstructions. The influence of optimization parameters on reconstructions is also discussed. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. "Soft Technology" and Criticism of the Western Model of Development

    ERIC Educational Resources Information Center

    Harper, Peter

    1973-01-01

    Alternatives to the capitalistic Western model of development are suggested. Three problems afflicting Western society--alienation, resource exploitation, and environmental stability--are discussed and a model which advocates both political and technological change is proposed. (SM)

  15. Panmictic and Clonal Evolution on a Single Patchy Resource Produces Polymorphic Foraging Guilds

    PubMed Central

    Getz, Wayne M.; Salter, Richard; Lyons, Andrew J.; Sippl-Swezey, Nicolas

    2015-01-01

    We develop a stochastic, agent-based model to study how genetic traits and experiential changes in the state of agents and available resources influence individuals’ foraging and movement behaviors. These behaviors are manifest as decisions on when to stay and exploit a current resource patch or move to a particular neighboring patch, based on information about the resource qualities of the patches and the anticipated level of intraspecific competition within patches. We use a genetic algorithm approach and an individual’s biomass as a fitness surrogate to explore the foraging strategy diversity of evolving guilds under clonal versus hermaphroditic sexual reproduction. We first present the resource exploitation processes, movement on cellular arrays, and genetic algorithm components of the model. We then discuss their implementation on the Nova software platform. This platform seamlessly combines the dynamical systems modeling of consumer-resource interactions with agent-based modeling of individuals moving over a landscape, using an architecture that makes transparent the following four hierarchical simulation levels: 1.) within-patch consumer-resource dynamics, 2.) within-generation movement and competition mitigation processes, 3.) across-generation evolutionary processes, and 4.) multiple runs to generate the statistics needed for comparative analyses. The focus of our analysis is on the question of how the biomass production efficiency and the diversity of guilds of foraging strategy types, exploiting resources over a patchy landscape, evolve under clonal versus random hermaphroditic sexual reproduction. Our results indicate greater biomass production efficiency under clonal reproduction only at higher population densities, and demonstrate that polymorphisms evolve and are maintained under random mating systems. The latter result questions the notion that some type of associative mating structure is needed to maintain genetic polymorphisms among individuals exploiting a common patchy resource on an otherwise spatially homogeneous landscape. PMID:26274613

  16. An effective PSO-based memetic algorithm for flow shop scheduling.

    PubMed

    Liu, Bo; Wang, Ling; Jin, Yi-Hui

    2007-02-01

    This paper proposes an effective particle swarm optimization (PSO)-based memetic algorithm (MA) for the permutation flow shop scheduling problem (PFSSP) with the objective of minimizing the maximum completion time, which is a typical non-deterministic polynomial-time (NP) hard combinatorial optimization problem. In the proposed PSO-based MA (PSOMA), both PSO-based searching operators and some special local searching operators are designed to balance the exploration and exploitation abilities. In particular, the PSOMA applies the evolutionary searching mechanism of PSO, which is characterized by individual improvement, population cooperation, and competition to effectively perform exploration. On the other hand, the PSOMA utilizes several adaptive local searches to perform exploitation. First, to make PSO suitable for solving PFSSP, a ranked-order value rule based on random key representation is presented to convert the continuous position values of particles to job permutations, as sketched below. Second, to generate an initial swarm with certain quality and diversity, the famous Nawaz-Enscore-Ham (NEH) heuristic is incorporated into the initialization of population. Third, to balance the exploration and exploitation abilities, after the standard PSO-based searching operation, a new local search technique named NEH_1 insertion is probabilistically applied to some good particles selected by using a roulette wheel mechanism with a specified probability. Fourth, to enrich the searching behaviors and to avoid premature convergence, a simulated annealing (SA)-based local search with multiple different neighborhoods is designed and incorporated into the PSOMA. Meanwhile, an effective adaptive meta-Lamarckian learning strategy is employed to decide which neighborhood to be used in SA-based local search. Finally, to further enhance the exploitation ability, a pairwise-based local search is applied after the SA-based search. Simulation results based on benchmarks demonstrate the effectiveness of the PSOMA. Additionally, the effects of some parameters on optimization performances are also discussed.
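
    The ranked-order value (ROV) step is easy to sketch: a particle's continuous position vector is mapped to a job permutation by ranking its components (the paper's exact tie-breaking rule may differ from this common argsort variant).

      import numpy as np

      def rov_permutation(position):
          """Smallest position value -> first job, and so on."""
          return np.argsort(position)

      x = np.array([1.8, 0.2, 2.5, 0.9])   # a particle's continuous position
      print(rov_permutation(x))            # -> [1 3 0 2], a job processing order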

  17. Gene selection using hybrid binary black hole algorithm and modified binary particle swarm optimization.

    PubMed

    Pashaei, Elnaz; Pashaei, Elham; Aydin, Nizamettin

    2018-04-14

    In cancer classification, gene selection is an important data preprocessing technique, but it is a difficult task due to the large search space. Accordingly, the objective of this study is to develop a hybrid meta-heuristic Binary Black Hole Algorithm (BBHA) and Binary Particle Swarm Optimization (BPSO) (4-2) model that emphasizes gene selection. In this model, the BBHA is embedded in the BPSO (4-2) algorithm to make the BPSO (4-2) more effective and to facilitate the exploration and exploitation of the BPSO (4-2) algorithm to further improve the performance. This model has been associated with Random Forest Recursive Feature Elimination (RF-RFE) pre-filtering technique. The classifiers which are evaluated in the proposed framework are Sparse Partial Least Squares Discriminant Analysis (SPLSDA); k-nearest neighbor and Naive Bayes. The performance of the proposed method was evaluated on two benchmark and three clinical microarrays. The experimental results and statistical analysis confirm the better performance of the BPSO (4-2)-BBHA compared with the BBHA, the BPSO (4-2) and several state-of-the-art methods in terms of avoiding local minima, convergence rate, accuracy and number of selected genes. The results also show that the BPSO (4-2)-BBHA model can successfully identify known biologically and statistically significant genes from the clinical datasets. Copyright © 2018 Elsevier Inc. All rights reserved.

  18. Exploitation of Cytotoxicity of Some Essential Oils for Translation in Cancer Therapy

    PubMed Central

    Russo, Rossella; Corasaniti, Maria Tiziana; Bagetta, Giacinto; Morrone, Luigi Antonio

    2015-01-01

    Essential oils are complex mixtures of several components endowed with a wide range of biological activities, including antiseptic, anti-inflammatory, spasmolytic, sedative, analgesic, and anesthetic properties. A growing body of scientific reports has recently focused on the potential of essential oils as anticancer treatment in the attempt to overcome the development of multidrug resistance and important side effects associated with the antitumor drugs currently used. In this review we discuss the literature on the effects of essential oils in in vitro and in vivo models of cancer, focusing on the studies performed with the whole phytocomplex rather than single constituents. PMID:25722735

  19. Development of a kernel function for clinical data.

    PubMed

    Daemen, Anneleen; De Moor, Bart

    2009-01-01

    For most diseases and examinations, clinical data such as age, gender and medical history guide clinical management, despite the rise of high-throughput technologies. To fully exploit such clinical information, appropriate modeling of the relevant parameters is required. As the widely used linear kernel function has several disadvantages when applied to clinical data, we propose a new kernel function specifically developed for this data. This "clinical kernel function" more accurately represents similarities between patients. Three data sets were studied, and significantly better performance was obtained with a Least Squares Support Vector Machine based on the clinical kernel function compared to the linear kernel function.
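
    A hedged reconstruction of the idea (the published kernel may differ in detail): per-variable similarities are averaged over the features, with a range-normalized similarity for continuous variables and equality matching for categorical ones. The variables and ranges below are invented for illustration.

      import numpy as np

      def clinical_kernel(x, z, ranges, categorical):
          """x, z: 1D patient records; ranges[i]: max - min of variable i in
          the training data; categorical[i]: True if variable i is nominal."""
          sims = []
          for xi, zi, r, cat in zip(x, z, ranges, categorical):
              if cat:
                  sims.append(1.0 if xi == zi else 0.0)   # exact match
              else:
                  sims.append((r - abs(xi - zi)) / r)     # range-normalized
          return float(np.mean(sims))

      # age (years), gender (coded), tumour size (mm) -- illustrative variables
      a = np.array([63.0, 1.0, 22.0])
      b = np.array([58.0, 1.0, 30.0])
      print(clinical_kernel(a, b, ranges=[60.0, 1.0, 80.0],
                            categorical=[False, True, False]))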

  20. An exploitation-competition system with negative effect of prey on its predator.

    PubMed

    Wang, Yuanshi

    2015-05-01

    This paper considers an exploitation-competition system in which exploitation is the dominant interaction when the prey is at low density, while competition is dominant when the prey is at high density due to its negative effect on the predator. The two-species system is characterized by differential equations, which are the combination of Lotka-Volterra competitive and predator-prey models. Global dynamics of the model demonstrate some basic properties of exploitation-competition systems: (i) When the growth rate of prey is extremely small, the prey cannot promote the growth of predator. (ii) When the growth rate is small, an obligate predator can survive by preying on the prey, while a facultative predator can approach a high density by the predation. (iii) When the growth rate is intermediate, the predator can approach the maximal density by an intermediate predation. (iv) When the growth rate is large, the predator can persist only if it has a large density and its predation on the prey is big. (v) Intermediate predation is beneficial to the predator over a certain parameter range, while over- or under-predation is not; extremely large or small predation would lead to extinction of species. Numerical simulations confirm and extend our results. Copyright © 2015 Elsevier Inc. All rights reserved.
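
    The abstract does not give the equations, so the sketch below integrates an assumed illustrative form (not the paper's model) in which prey density x aids the predator y at low density (exploitation) but harms it at high density (competition); all parameters are invented.

      import numpy as np
      from scipy.integrate import solve_ivp

      r, a, d, b, c = 1.0, 0.5, 0.4, 1.0, 0.3    # made-up parameters

      def rhs(t, u):
          x, y = u
          dx = x * (r - x - a * y)               # logistic prey, competition with y
          dy = y * (-d + b * x - c * x**2)       # gain at low x, loss at high x
          return [dx, dy]

      sol = solve_ivp(rhs, (0.0, 200.0), [0.5, 0.2])
      print(sol.y[:, -1])                        # long-run densities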

  1. Role of social interactions in dynamic patterns of resource patches and forager aggregation.

    PubMed

    Tania, Nessy; Vanderlei, Ben; Heath, Joel P; Edelstein-Keshet, Leah

    2012-07-10

    The dynamics of resource patches and species that exploit such patches are of interest to ecologists, conservation biologists, modelers, and mathematicians. Here we consider how social interactions can create unique, evolving patterns in space and time. Whereas simple prey taxis (with consumable prey) promotes spatially uniform distributions, here we show that taxis in producer-scrounger groups can lead to pattern formation. We consider two types of foragers: those that search directly ("producers") and those that exploit other foragers to find food ("scroungers" or exploiters). We show that such groups can sustain fluctuating spatiotemporal patterns, akin to "waves of pursuit." Investigating the relative benefits to the individuals, we observed conditions under which either strategy leads to enhanced success, defined as net food consumption. Foragers that search for food directly have an advantage when food patches are localized. Those that seek aggregations of group mates do better when their ability to track group mates exceeds the foragers' food-sensing acuity. When behavioral switching or reproductive success of the strategies is included, the relative abundance of foragers and exploiters is dynamic over time, in contrast with classic models that predict stable frequencies. Our work shows the importance of considering two-way interaction--i.e., how food distribution both influences and is influenced by social foraging and aggregation of predators.

  2. Behavioural system identification of visual flight speed control in Drosophila melanogaster

    PubMed Central

    Rohrseitz, Nicola; Fry, Steven N.

    2011-01-01

    Behavioural control in many animals involves complex mechanisms with intricate sensory-motor feedback loops. Modelling allows functional aspects to be captured without relying on a description of the underlying complex, and often unknown, mechanisms. A wide range of engineering techniques are available for modelling, but their ability to describe time-continuous processes is rarely exploited to describe sensory-motor control mechanisms in biological systems. We performed a system identification of visual flight speed control in the fruitfly Drosophila, based on an extensive dataset of open-loop responses previously measured under free flight conditions. We identified a second-order under-damped control model with just six free parameters that well describes both the transient and steady-state characteristics of the open-loop data. We then used the identified control model to predict flight speed responses after a visual perturbation under closed-loop conditions and validated the model with behavioural measurements performed in free-flying flies under the same closed-loop conditions. Our system identification of the fruitfly's flight speed response uncovers the high-level control strategy of a fundamental flight control reflex without depending on assumptions about the underlying physiological mechanisms. The results are relevant for future investigations of the underlying neuromotor processing mechanisms, as well as for the design of biomimetic robots, such as micro-air vehicles. PMID:20525744

  3. A spherical harmonics intensity model for 3D segmentation and 3D shape analysis of heterochromatin foci.

    PubMed

    Eck, Simon; Wörz, Stefan; Müller-Ott, Katharina; Hahn, Matthias; Biesdorf, Andreas; Schotta, Gunnar; Rippe, Karsten; Rohr, Karl

    2016-08-01

    The genome is partitioned into regions of euchromatin and heterochromatin. The organization of heterochromatin is important for the regulation of cellular processes such as chromosome segregation and gene silencing, and their misregulation is linked to cancer and other diseases. We present a model-based approach for automatic 3D segmentation and 3D shape analysis of heterochromatin foci from 3D confocal light microscopy images. Our approach employs a novel 3D intensity model based on spherical harmonics, which analytically describes the shape and intensities of the foci. The model parameters are determined by fitting the model to the image intensities using least-squares minimization. To characterize the 3D shape of the foci, we exploit the computed spherical harmonics coefficients and determine a shape descriptor. We applied our approach to 3D synthetic image data as well as real 3D static and real 3D time-lapse microscopy images, and compared the performance with that of previous approaches. It turned out that our approach yields accurate 3D segmentation results and performs better than previous approaches. We also show that our approach can be used for quantifying 3D shape differences of heterochromatin foci. Copyright © 2016 Elsevier B.V. All rights reserved.
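
    As an illustration of an SH-based shape descriptor (our sketch, not the paper's fitted intensity model), one can expand the radius function of a star-shaped focus in spherical harmonics and keep the rotation-invariant per-degree energies; grid size and degree cutoff below are arbitrary choices.

      import numpy as np
      from scipy.special import sph_harm  # renamed sph_harm_y in newer SciPy

      def sh_descriptor(radius_fn, lmax=4, n=64):
          theta = np.linspace(0, 2 * np.pi, n, endpoint=False)   # azimuth
          phi = np.linspace(0, np.pi, n)                         # polar angle
          T, P = np.meshgrid(theta, phi)
          R = radius_fn(T, P)
          dA = (2 * np.pi / n) * (np.pi / n) * np.sin(P)         # surface weight
          desc = []
          for l in range(lmax + 1):
              # energy in degree l: sum over orders m of |coefficient|^2,
              # which is invariant to 3D rotations of the shape
              e = sum(abs(np.sum(R * np.conj(sph_harm(m, l, T, P)) * dA)) ** 2
                      for m in range(-l, l + 1))
              desc.append(e)
          return np.array(desc)

      # A slightly ellipsoidal blob: energy concentrates in degrees 0 and 2
      print(sh_descriptor(lambda t, p: 1.0 + 0.1 * np.cos(p) ** 2))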

  4. Behavioural system identification of visual flight speed control in Drosophila melanogaster.

    PubMed

    Rohrseitz, Nicola; Fry, Steven N

    2011-02-06

    Behavioural control in many animals involves complex mechanisms with intricate sensory-motor feedback loops. Modelling allows functional aspects to be captured without relying on a description of the underlying complex, and often unknown, mechanisms. A wide range of engineering techniques are available for modelling, but their ability to describe time-continuous processes is rarely exploited to describe sensory-motor control mechanisms in biological systems. We performed a system identification of visual flight speed control in the fruitfly Drosophila, based on an extensive dataset of open-loop responses previously measured under free flight conditions. We identified a second-order under-damped control model with just six free parameters that well describes both the transient and steady-state characteristics of the open-loop data. We then used the identified control model to predict flight speed responses after a visual perturbation under closed-loop conditions and validated the model with behavioural measurements performed in free-flying flies under the same closed-loop conditions. Our system identification of the fruitfly's flight speed response uncovers the high-level control strategy of a fundamental flight control reflex without depending on assumptions about the underlying physiological mechanisms. The results are relevant for future investigations of the underlying neuromotor processing mechanisms, as well as for the design of biomimetic robots, such as micro-air vehicles.
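
    The identified model is described only as "second-order under-damped"; the sketch below (with guessed gain, natural frequency and damping ratio) simulates the step response of such a transfer function H(s) = K wn^2 / (s^2 + 2 zeta wn s + wn^2) to illustrate the transient behaviour the model must capture.

      import numpy as np
      from scipy import signal

      K, wn, zeta = 1.0, 8.0, 0.4                  # gain, natural freq, damping < 1
      H = signal.TransferFunction([K * wn**2], [1.0, 2 * zeta * wn, wn**2])
      t, y = signal.step(H, T=np.linspace(0, 2, 500))
      print(f"overshoot: {y.max() - 1:.2f}")       # under-damped => overshoot > 0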

  5. Managing human resources in healthcare: learning from world class practices--Part I.

    PubMed

    Zairi, M

    1998-01-01

    This paper, which is presented in two parts, is intended to demonstrate that practices related to the area of human resources management, adopted by model organisations that have dominated their markets consistently, can lend themselves very well to the healthcare sector, which is primarily a "people-oriented" sector. As change in a modern business context is set to continue in an unrelenting way, most organisations will be presented with the challenge of developing the necessary skills and areas of expertise to enable them to cope with the demands on them, master technological opportunities at their disposal, learn how to exploit modern management concepts and optimise value to all the stakeholders they intend to serve. This paper draws from best practices using the experiences of quality recognised organisations and many admired names through pioneering human resource policies and practices and through clear demonstrations on the benefits of relying on people as the major "asset". Part I of this article addresses the importance of human resources as revealed through models of management for organisational excellence. In particular, the paper refers to the criteria for excellence in relation to people management using the following prestigious and integrative management models: Deming Prize (Japan); European Quality Award Model (Europe); and Malcolm Baldrige National Quality Award (USA). In addition, this paper illustrates several case studies using organisations known for their pioneering approaches to people management and which led them to win very prestigious quality awards and various international accolades. The paper concludes by reinforcing the point that human resource management in a healthcare context has to be viewed as an integrated set of processes and practices which need to be adhered to from an integrated perspective in order to optimise individuals' performance levels and so that the human potential can be exploited fully.

  6. Multi-objective optimization for conjunctive water use using coupled hydrogeological and agronomic models: a case study in Heihe mid-reach (China)

    NASA Astrophysics Data System (ADS)

    LI, Y.; Kinzelbach, W.; Pedrazzini, G.

    2017-12-01

    Groundwater is a vital water resource for buffering unexpected drought risk in agricultural production, but it is prone to unsustainable exploitation because of its open-access character and a much-underestimated marginal cost. Groundwater management is a wicked problem of general water resource management, and the fact that groundwater remains hidden below the surface terrain further amplifies the difficulties. China has been facing this challenge over recent decades, particularly in the northern part of the country, where irrigated agriculture resides despite scarce surface water compared to the south. Farmers have therefore been increasingly exploiting groundwater as an alternative in order to meet Chinese food self-sufficiency requirements and feed fast socio-economic development. In this work, we studied the Heihe mid-reach located in northern China, which represents one of a few regions suffering from symptoms of unsustainable groundwater use, such as a large drawdown of the groundwater table in some irrigation districts, or soil salinization due to phreatic evaporation in others. We focus on solving a multi-objective optimization problem of conjunctive water use in order to find an alternative management scheme that fits decision makers' preferences. The methodology starts with a global sensitivity analysis to determine the most influential decision variables. A state-of-the-art multi-objective evolutionary algorithm (MOEA) is then employed to search a hyper-dimensional Pareto front. The aquifer system is simulated with a distributed Modflow model, which is able to capture the main phenomena of interest. Results show that the current water allocation scheme exploits the water resources in an inefficient way: both areas with depression cones and areas with salinization or groundwater table rise can be mitigated with an alternative management scheme. When assuming uncertain boundary conditions according to future climate change, the optimal solutions can yield better economic productivity by reducing the opportunity cost under unexpected drought conditions.

  7. Exploiting the oxidizing capabilities of laccases for sustainable chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannatelli, Mark D.

    Part one of this dissertation research has focused on harnessing the ability of laccases to generate reactive para-quinones in situ from the corresponding hydroquinones, followed by reaction with a variety of nucleophiles to perform novel carbon-carbon, carbon-nitrogen, and carbon-sulfur bond forming reactions for the synthesis of new and existing compounds. In part two of this dissertation, the fundamental laccase-catalyzed coupling chemistry developed in part one was applied to functionalize the surface of kraft lignin.

  8. Polysilicon for everything?

    NASA Astrophysics Data System (ADS)

    Ward, M. C. L.; McNie, Mark E.; Bunyan, Robert J.; King, David O.; Carline, Roger T.; Wilson, Rebecca; Gillham, J. P.

    1998-09-01

    We review some of the attractive attributes of microengineering and relate them to features of the highly successful silicon microelectronics industry. We highlight the need for cost effective functionality rather than ultimate performance as a driver for success and review key examples of polysilicon devices from this point of view. The effective exploitation of the data generated by the cost effective polysilicon sensors is also considered and we conclude that `non traditional' data analysis will need to be exploited if full use is to be made of polysilicon devices.

  9. Solving the Traveling Salesman's Problem Using the African Buffalo Optimization.

    PubMed

    Odili, Julius Beneoluchi; Mohmad Kahar, Mohd Nizam

    2016-01-01

    This paper proposes the African Buffalo Optimization (ABO) which is a new metaheuristic algorithm that is derived from careful observation of the African buffalos, a species of wild cows, in the African forests and savannahs. This animal displays uncommon intelligence, strategic organizational skills, and exceptional navigational ingenuity in its traversal of the African landscape in search for food. The African Buffalo Optimization builds a mathematical model from the behavior of this animal and uses the model to solve 33 benchmark symmetric Traveling Salesman's Problem instances and six difficult asymmetric instances from the TSPLIB. This study shows that buffalos are able to ensure excellent exploration and exploitation of the search space through regular communication, cooperation, and good memory of their previous personal exploits as well as tapping from the herd's collective exploits. The results obtained by using the ABO to solve these TSP cases were benchmarked against the results obtained by using other popular algorithms. The results obtained using the African Buffalo Optimization algorithm are very competitive.

  10. Solving the Traveling Salesman's Problem Using the African Buffalo Optimization

    PubMed Central

    Odili, Julius Beneoluchi; Mohmad Kahar, Mohd Nizam

    2016-01-01

    This paper proposes the African Buffalo Optimization (ABO) which is a new metaheuristic algorithm that is derived from careful observation of the African buffalos, a species of wild cows, in the African forests and savannahs. This animal displays uncommon intelligence, strategic organizational skills, and exceptional navigational ingenuity in its traversal of the African landscape in search for food. The African Buffalo Optimization builds a mathematical model from the behavior of this animal and uses the model to solve 33 benchmark symmetric Traveling Salesman's Problem instances and six difficult asymmetric instances from the TSPLIB. This study shows that buffalos are able to ensure excellent exploration and exploitation of the search space through regular communication, cooperation, and good memory of their previous personal exploits as well as tapping from the herd's collective exploits. The results obtained by using the ABO to solve these TSP cases were benchmarked against the results obtained by using other popular algorithms. The results obtained using the African Buffalo Optimization algorithm are very competitive. PMID:26880872

  11. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China

    PubMed Central

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-01-01

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%–19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides. PMID:27187430
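
    A condensed, hypothetical version of the tuning loop described above: a plain PSO searches (log C, log gamma) for an RBF SVM by cross-validated accuracy. In the paper this would run per GWR prediction region; here a synthetic dataset stands in for a region's landslide factors and labels, and all swarm constants are generic defaults.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=200, n_features=8, random_state=0)
      rng = np.random.default_rng(0)

      def fitness(p):                      # p = (log10 C, log10 gamma)
          clf = SVC(C=10 ** p[0], gamma=10 ** p[1])
          return cross_val_score(clf, X, y, cv=3).mean()

      n, pos = 10, rng.uniform(-3, 3, (10, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_f = pos.copy(), np.array([fitness(p) for p in pos])
      for _ in range(15):
          gbest = pbest[pbest_f.argmax()]
          r1, r2 = rng.random((2, n, 1))
          vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
          pos = np.clip(pos + vel, -3, 3)
          f = np.array([fitness(p) for p in pos])
          better = f > pbest_f
          pbest[better], pbest_f[better] = pos[better], f[better]
      print(pbest[pbest_f.argmax()], pbest_f.max())   # best (log C, log gamma)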

  12. A Combination of Geographically Weighted Regression, Particle Swarm Optimization and Support Vector Machine for Landslide Susceptibility Mapping: A Case Study at Wanzhou in the Three Gorges Area, China.

    PubMed

    Yu, Xianyu; Wang, Yi; Niu, Ruiqing; Hu, Youjian

    2016-05-11

    In this study, a novel coupling model for landslide susceptibility mapping is presented. In practice, environmental factors may have different impacts at a local scale in study areas. To provide better predictions, a geographically weighted regression (GWR) technique is firstly used in our method to segment study areas into a series of prediction regions with appropriate sizes. Meanwhile, a support vector machine (SVM) classifier is exploited in each prediction region for landslide susceptibility mapping. To further improve the prediction performance, the particle swarm optimization (PSO) algorithm is used in the prediction regions to obtain optimal parameters for the SVM classifier. To evaluate the prediction performance of our model, several SVM-based prediction models are utilized for comparison on a study area of the Wanzhou district in the Three Gorges Reservoir. Experimental results, based on three objective quantitative measures and visual qualitative evaluation, indicate that our model can achieve better prediction accuracies and is more effective for landslide susceptibility mapping. For instance, our model can achieve an overall prediction accuracy of 91.10%, which is 7.8%-19.1% higher than the traditional SVM-based models. In addition, the obtained landslide susceptibility map by our model can demonstrate an intensive correlation between the classified very high-susceptibility zone and the previously investigated landslides.

  13. Leader-based and self-organized communication: modelling group-mass recruitment in ants.

    PubMed

    Collignon, Bertrand; Deneubourg, Jean Louis; Detrain, Claire

    2012-11-21

    For collective decisions to be made, the information acquired by experienced individuals about resources' location has to be shared with naïve individuals through recruitment. Here, we investigate the properties of collective responses arising from a leader-based recruitment and a self-organized communication by chemical trails. We develop a generalized model based on biological data drawn from the ant species Tetramorium caespitum, whose collective foraging relies on the coupling of group leading and trail recruitment. We show that for leader-based recruitment, small groups of recruits have to be guided in a very efficient way to allow a collective exploitation of food, while large groups require less attention from their leader. In the case of self-organized recruitment through a chemical trail, a critical value of trail amount has to be laid per forager in order to launch collective food exploitation. Thereafter, ants can maintain collective foraging by emitting signal intensity below this threshold. Finally, we demonstrate how the coupling of both recruitment mechanisms may benefit collectively foraging species. These theoretical results are then compared with experimental data from recruitment by T. caespitum ant colonies performing group-mass recruitment towards a single food source. We highlight the key role of leaders as initiators and catalysts of recruitment before this leader-based process is overtaken by self-organised communication through trails. This model brings new insights as well as a theoretical background to empirical studies about cooperative foraging in group-living species. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Manpower Planning Models. 5. Optimization Models

    DTIC Science & Technology

    1975-10-01

    Keywords: Manpower Planning; Modelling; Optimization. Abstract fragment: "…notation resulting from the previous maximum M. We exploit the probabilistic interpretation of the flow process whenever it eases the exposition."

  15. Design and performance of optimal detectors for guided wave structural health monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dib, G.; Udpa, L.

    2016-01-01

    Ultrasonic guided wave measurements in a long term structural health monitoring system are affected by measurement noise, environmental conditions, transducer aging and malfunction. This results in measurement variability which affects detection performance, especially in complex structures where baseline data comparison is required. This paper derives the optimal detector structure, within the framework of detection theory, where a guided wave signal at the sensor is represented by a single feature value that can be used for comparison with a threshold. Three different types of detectors are derived depending on the underlying structure's complexity: (i) simple structures where defect reflections can be identified without the need for baseline data; (ii) simple structures that require baseline data due to overlap of defect scatter with scatter from structural features; (iii) complex structures with dense structural features that require baseline data. The detectors are derived by modeling the effects of variabilities and uncertainties as random processes. Analytical solutions for the performance of detectors in terms of the probability of detection and false alarm are derived. A finite element model is used to generate guided wave signals and the performance results of a Monte-Carlo simulation are compared with the theoretical performance. Initial results demonstrate that the problems of signal complexity and environmental variability can in fact be exploited to improve detection performance.
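
    For intuition, the two detector metrics named above can be computed in closed form for the simplest case of a scalar Gaussian feature: the threshold is set from a target false-alarm probability and the probability of detection follows. The noise level and defect-induced mean shift below are assumed values.

        # Scalar feature x ~ N(0, s^2) under H0 (no defect), N(m, s^2) under H1
        from scipy.stats import norm

        s, m, pfa = 1.0, 2.5, 1e-3          # noise std, defect mean shift, target Pfa
        tau = norm.isf(pfa, scale=s)        # threshold achieving the requested Pfa
        pd = norm.sf(tau, loc=m, scale=s)   # resulting probability of detection
        print(f"tau={tau:.3f}, Pd={pd:.3f}")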

  16. Making it Easy to Construct Accurate Hydrological Models that Exploit High Performance Computers (Invited)

    NASA Astrophysics Data System (ADS)

    Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.

    2013-12-01

    This presentation will focus on two barriers to progress in the hydrological modeling community, and on research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of a computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers depend on an entire distribution, possibly involving multiple compilers and special instructions depending on the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.

  17. Stock assessment of fishery target species in Lake Koka, Ethiopia.

    PubMed

    Tesfaye, Gashaw; Wolff, Matthias

    2015-09-01

    Effective management is essential for small-scale fisheries to continue providing food and livelihoods for households, particularly in developing countries where other options are often limited. Studies on the population dynamics and stock assessment of fishery target species are thus imperative to sustain their fisheries and the benefits for society. In Lake Koka (Ethiopia), very little is known about the vital population parameters and exploitation status of the fishery target species: tilapia Oreochromis niloticus, common carp Cyprinus carpio and catfish Clarias gariepinus. Our study, therefore, aimed at determining the vital population parameters and assessing the status of these target species in Lake Koka using length frequency data collected quarterly from commercial catches from 2007-2012. A total of 20,097 fish specimens (7,933 tilapia, 6,025 catfish and 6,139 common carp) were measured for the analysis. Von Bertalanffy growth parameters and their confidence intervals were determined from modal progression analysis using ELEFAN I and applying the jackknife technique. Mortality parameters were determined from length-converted catch curves and empirical models. The exploitation status of these target species was then assessed by computing exploitation rates (E) from mortality parameters as well as from size indicators, i.e., by assessing the size distribution of fish catches relative to the size at maturity (Lm), the size that provides maximum cohort biomass (Lopt) and the abundance of mega-spawners. The mean values of the growth parameters L∞, K and the growth performance index ø' were 44.5 cm, 0.41/year and 2.90 for O. niloticus, 74.1 cm, 0.28/year and 3.19 for C. carpio and 121.9 cm, 0.16/year and 3.36 for C. gariepinus, respectively. The 95% confidence intervals of the estimates were also computed. Total mortality (Z) estimates were 1.47, 0.83 and 0.72/year for O. niloticus, C. carpio and C. gariepinus, respectively. Our study suggests that O. niloticus is in a healthy state, while C. gariepinus shows signs of growth overfishing (when both the exploitation rate (E) and size indicators are considered). In the case of C. carpio, the low exploitation rate encountered would point to underfishing, while the size indicators of the catches suggest that too-small fish are harvested, leading to growth overfishing. We conclude that fisheries production in Lake Koka could be enhanced by increasing E toward the optimum level of exploitation (Eopt) for the underexploited C. carpio and by increasing the size at first capture (Lc) toward Lopt for all target species.
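
    A worked example of the quantities reported above, using the abstract's tilapia estimates for the von Bertalanffy growth function L(t) = L∞(1 - exp(-K(t - t0))) and the exploitation rate E = F/Z with F = Z - M. The natural mortality M and t0 below are assumed for illustration only; the paper derives its mortality parameters from catch curves and empirical models.

        import math

        Linf, K, Z = 44.5, 0.41, 1.47   # from the abstract (O. niloticus)
        t0, M = -0.3, 0.9               # assumed values for illustration

        F = Z - M                        # fishing mortality
        E = F / Z                        # exploitation rate (Eopt often taken ~0.5)
        print(f"F={F:.2f}/yr, E={E:.2f}")

        for t in (1, 2, 3, 5):           # von Bertalanffy length-at-age
            L = Linf * (1 - math.exp(-K * (t - t0)))
            print(f"age {t} yr: {L:.1f} cm")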

  18. 3D geological modeling of the transboundary basin Berzdof-Radomierzyce in Upper Lusatia (Germany/Poland)

    NASA Astrophysics Data System (ADS)

    Woloszyn, Iwona; Merkel, Broder; Stanek, Klaus

    2015-04-01

    Keywords: Numerical modeling, Paradigm GOCAD, Berzdorf basin (Germany), Radomierzyce basin (Poland), Upper Lusatia. The accuracy of three-dimensional (3D) models depends on their data density and quality. Regions with a complex geology can be a challenge to model, especially if detailed models are required to support further economic exploitation of a region. In this research, a 3D model was created for a region with complicated geological conditions. The focus area, the Berzdorf-Radomierzyce basin, located in Upper Lusatia on the Polish-German border to the south of the city of Görlitz-Zgorzelec, is such a region. The basin is divided by a volcanic threshold into the western part (Berzdorf basin) and its eastern extension (Radomierzyce basin); the connection between both parts is the so-called "lignite bridge". The Berzdorf deposit was exploited from 1830 until 1997. In contrast, the Radomierzyce deposit has never been exploited and is still considered a prospective deposit for the operating Turów coal mine, which is located only around 15 km away. To represent the geology of the area, 3D modeling of the transboundary deposit was carried out. Moreover, some strategies to overcome numerical interpolation instability of a geological model with many faults were developed. Due to the large amount of data and its compatibility with other software, the 3D geomodeling software Paradigm GOCAD was used. A total of 10,102 boreholes, 60 cross sections and geological maps converted into digital format were implemented in the model. The data density of the German part of the area of interest was much higher than that of the Polish part. The results demonstrate a good fit between the modeled surfaces and the real geological conditions. This is particularly evident in the match between the modeled surfaces and the borehole data and geological cross sections. Furthermore, simplification of the model does not decrease its accuracy, and the applied techniques lead to a stable and reliable model. The geological model can be used for planning and full-scale mining operations in its eastern part (Radomierzyce). In addition, the detailed geological model can serve as a basis for hydrogeological and heat transfer models of the Berzdorf-Radomierzyce basin, in order to identify points where geothermal energy can best be exploited. It can aid in improving the planned geothermal installations in the region.

  19. NATO Operational Record: Collective Analytical Exploitation to Inform Operational Analysis Models and Common Operational Planning Factors (Archives operationnelles de l’OTAN: Exploitation analytique collective visant a alimenter les modeles d’analyse operationnelle et les facteurs de planification operationnelle commune)

    DTIC Science & Technology

    2014-05-01

    OCR fragments: "scientific and technological work is carried out by Technical Teams, created under one or more of these eight bodies, for specific research activities…"; "…level records, with a secondary focus on strategic level records. Its work covered NATO records as well as NATO Troop Contributing Nation (TCN)…"; and, translated from the French summary, "…search, retrieval and visualization functions can be fast and user-friendly. It also concludes that the obstacles to the creation of a…"

  20. 3D tracking of laparoscopic instruments using statistical and geometric modeling.

    PubMed

    Wolf, Rémi; Duchateau, Josselin; Cinquin, Philippe; Voros, Sandrine

    2011-01-01

    During laparoscopic surgery, the endoscope can be manipulated by an assistant or a robot. Several teams have worked on the tracking of surgical instruments, based on approaches ranging from the development of specific devices to image processing methods. We propose to exploit the instruments' insertion points, which are fixed on the patient's abdominal cavity, as a geometric constraint for the localization of the instruments. A simple geometric model of a laparoscopic instrument is described, as well as a parametrization that exploits a spherical geometric grid, which offers attractive homogeneity and isotropy properties. The general architecture of our proposed approach is based on the probabilistic Condensation algorithm.

  1. Optimization-based image reconstruction in x-ray computed tomography by sparsity exploitation of local continuity and nonlocal spatial self-similarity

    NASA Astrophysics Data System (ADS)

    Han-Ming, Zhang; Lin-Yuan, Wang; Lei, Li; Bin, Yan; Ai-Long, Cai; Guo-En, Hu

    2016-07-01

    The additional sparse prior of images has been the subject of much research in problems of sparse-view computed tomography (CT) reconstruction. A method employing image gradient sparsity is often used to reduce the sampling rate and is shown to remove unwanted artifacts while preserving sharp edges, but it may cause blocky or patchy artifacts. To eliminate this drawback, we propose a novel sparsity-exploitation-based model for CT image reconstruction. In the presented model, the sparse representation and sparsity exploitation of both the gradient and the nonlocal gradient are investigated. The new model is shown to offer the potential for better results by introducing a similarity prior on the image structure. Then, an effective alternating direction minimization algorithm is developed to optimize the objective function with a robust convergence result. Qualitative and quantitative evaluations have been carried out on both simulated and real data in terms of accuracy and resolution properties. The results indicate that the proposed method can achieve better image quality with the theoretically expected preservation of detailed features. Project supported by the National Natural Science Foundation of China (Grant No. 61372172).
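
    As a stand-in for the paper's full CT model, the gradient-sparsity prior can be illustrated with one-dimensional total-variation denoising solved by a standard alternating direction (ADMM) scheme with a soft-thresholding step. The signal, noise level and penalty parameters are assumptions.

        # 1-D total-variation denoising via ADMM (illustrative, not the paper's model)
        import numpy as np

        rng = np.random.default_rng(1)
        x_true = np.repeat([0.0, 1.0, 0.3], 50)      # piecewise-constant signal
        y = x_true + 0.1 * rng.standard_normal(x_true.size)

        n = y.size
        D = np.diff(np.eye(n), axis=0)               # first-difference operator
        lam, rho = 0.5, 1.0
        x, z, u = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
        A = np.eye(n) + rho * D.T @ D                # x-update system matrix

        for _ in range(100):                         # ADMM iterations
            x = np.linalg.solve(A, y + rho * D.T @ (z - u))
            w = D @ x + u
            z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # soft-threshold
            u += D @ x - z

        print("RMSE noisy   :", np.sqrt(np.mean((y - x_true) ** 2)))
        print("RMSE denoised:", np.sqrt(np.mean((x - x_true) ** 2)))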

  2. Robust lateral blended-wing-body aircraft feedback control design using a parameterized LFR model and DGK-iteration

    NASA Astrophysics Data System (ADS)

    Schirrer, A.; Westermayer, C.; Hemedi, M.; Kozek, M.

    2013-12-01

    This paper shows control design results, performance, and limitations of robust lateral control law designs based on the DGK-iteration mixed-μ-synthesis procedure for a large, flexible blended wing body (BWB) passenger aircraft. The aircraft dynamics is preshaped by a low-complexity inner loop control law providing stabilization, basic response shaping, and flexible mode damping. The μ controllers are designed to further improve vibration damping of the main flexible modes by exploiting the structure of the arising significant parameter-dependent plant variations. This is achieved by utilizing parameterized Linear Fractional Representations (LFR) of the aircraft rigid and flexible dynamics. Designs with various levels of LFR complexity are carried out and discussed, showing the achieved performance improvement over the initial controller and their robustness and complexity properties.

  3. Sputnik: a database platform for comparative plant genomics.

    PubMed

    Rudd, Stephen; Mewes, Hans-Werner; Mayer, Klaus F X

    2003-01-01

    Two million plant ESTs, from 20 different plant species and totalling more than 1000 Mbp of DNA sequence, represent a formidable transcriptomic resource. Sputnik uses the potential of this sequence resource to fill some of the information gap in the un-sequenced plant genomes and to serve as the foundation for in silico comparative plant genomics. The complexity of the individual EST collections has been reduced using optimised EST clustering techniques. Annotation of cluster sequences is performed by exploiting and transferring information from the comprehensive knowledgebase already produced for the completed model plant genome (Arabidopsis thaliana) and by performing additional state-of-the-art sequence analyses relevant to today's plant biologist. Functional predictions, comparative analyses and associative annotations for 500 000 plant EST-derived peptides make Sputnik (http://mips.gsf.de/proj/sputnik/) a valid platform for contemporary plant genomics.

  4. Sputnik: a database platform for comparative plant genomics

    PubMed Central

    Rudd, Stephen; Mewes, Hans-Werner; Mayer, Klaus F.X.

    2003-01-01

    Two million plant ESTs, from 20 different plant species and totalling more than 1000 Mbp of DNA sequence, represent a formidable transcriptomic resource. Sputnik uses the potential of this sequence resource to fill some of the information gap in the un-sequenced plant genomes and to serve as the foundation for in silico comparative plant genomics. The complexity of the individual EST collections has been reduced using optimised EST clustering techniques. Annotation of cluster sequences is performed by exploiting and transferring information from the comprehensive knowledgebase already produced for the completed model plant genome (Arabidopsis thaliana) and by performing additional state-of-the-art sequence analyses relevant to today's plant biologist. Functional predictions, comparative analyses and associative annotations for 500 000 plant EST-derived peptides make Sputnik (http://mips.gsf.de/proj/sputnik/) a valid platform for contemporary plant genomics. PMID:12519965

  5. Centralized PI control for high dimensional multivariable systems based on equivalent transfer function.

    PubMed

    Luan, Xiaoli; Chen, Qiang; Liu, Fei

    2014-09-01

    This article presents a new scheme to design a full matrix controller for high dimensional multivariable processes based on an equivalent transfer function (ETF). Differing from existing ETF methods, the proposed ETF is derived directly by exploiting the relationship between the equivalent closed-loop transfer function and the inverse of the open-loop transfer function. Based on the obtained ETF, the full matrix controller is designed utilizing existing PI tuning rules. The newly proposed ETF model can more accurately represent the original processes. Furthermore, the full matrix centralized controller design method proposed in this paper is applicable to high dimensional multivariable systems with satisfactory performance. Comparison with other multivariable controllers shows that the designed ETF-based controller is superior with respect to design complexity and obtained performance. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  6. An Analytical Approach for Performance Enhancement of FSO Communication System Using Array of Receivers in Adverse Weather Conditions

    NASA Astrophysics Data System (ADS)

    Nagpal, Shaina; Gupta, Amit

    2017-08-01

    Free Space Optics (FSO) links exploit tremendous network capacity and are capable of offering wireless communications similar to communications through optical fibres. However, an FSO link is extremely weather dependent, and the major effects on FSO links are due to adverse weather conditions like fog and snow. In this paper, an FSO link is designed using an array of receivers. The performance of the link under very high attenuation conditions due to fog and snow is analysed using the aperture averaging technique. The effect of aperture averaging is further investigated by comparing systems that use the technique with systems that do not. The performance of the proposed FSO link model has been evaluated in terms of Q factor, bit error rate (BER) and eye diagram.
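
    The Q-factor/BER relation commonly used to report such results (assumed here to be the definition intended in the abstract) is BER = 0.5 * erfc(Q / sqrt(2)); for example:

        import math

        for Q in (3, 6, 7):
            ber = 0.5 * math.erfc(Q / math.sqrt(2))   # Gaussian-noise approximation
            print(f"Q = {Q}: BER = {ber:.2e}")        # Q = 6 gives roughly 1e-9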

  7. Differential GNSS and Vision-Based Tracking to Improve Navigation Performance in Cooperative Multi-UAV Systems.

    PubMed

    Vetrella, Amedeo Rodi; Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio

    2016-12-17

    Autonomous navigation of micro-UAVs is typically based on the integration of low-cost Global Navigation Satellite System (GNSS) receivers and Micro-Electro-Mechanical Systems (MEMS)-based inertial and magnetic sensors to stabilize and control the flight. The resulting navigation performance in terms of position and attitude accuracy may not suffice for other mission needs, such as those relevant to fine sensor pointing. In this framework, this paper presents a cooperative UAV navigation algorithm that allows a chief vehicle, equipped with inertial and magnetic sensors, a Global Positioning System (GPS) receiver, and a vision system, to improve its navigation performance (in real time or in the post-processing phase) by exploiting formation-flying deputy vehicles equipped with GPS receivers. The focus is set on outdoor environments and the key concept is to exploit differential GPS among vehicles and vision-based tracking (DGPS/Vision) to build a virtual additional navigation sensor whose information is then integrated in a sensor fusion algorithm based on an Extended Kalman Filter. The developed concept and processing architecture are described, with a focus on the DGPS/Vision attitude determination algorithm. Performance assessment is carried out on the basis of both numerical simulations and flight tests. In the latter, navigation estimates derived from the DGPS/Vision approach are compared with those provided by the onboard autopilot system of a customized quadrotor. The analysis shows the potential of the developed approach, mainly deriving from the possibility to exploit magnetic- and inertial-independent accurate attitude information.
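
    A minimal predict/update cycle of the Kalman filtering that underlies such sensor fusion, reduced to the linear special case of a 1-D position/velocity toy problem rather than the paper's actual DGPS/Vision attitude filter; all matrices and measurements below are assumptions.

        import numpy as np

        dt = 0.1
        F = np.array([[1, dt], [0, 1]])     # constant-velocity motion model
        H = np.array([[1.0, 0.0]])          # we observe position only
        Q = 1e-3 * np.eye(2)                # process noise covariance
        R = np.array([[0.25]])              # measurement noise (DGPS-like, assumed)

        x, P = np.zeros(2), np.eye(2)
        for z in (0.1, 0.22, 0.35, 0.41):   # synthetic position measurements
            x, P = F @ x, F @ P @ F.T + Q               # predict
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
            x = x + K @ (np.array([z]) - H @ x)         # update state
            P = (np.eye(2) - K @ H) @ P                 # update covariance
        print("state estimate (pos, vel):", x)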

  8. Enhancing instruction scheduling with a block-structured ISA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melvin, S.; Patt, Y.

    It is now generally recognized that not enough parallelism exists within the small basic blocks of most general purpose programs to satisfy high performance processors. Thus, a wide variety of techniques have been developed to exploit instruction level parallelism across basic block boundaries. In this paper we discuss some previous techniques along with their hardware and software requirements. Then we propose a new paradigm for an instruction set architecture (ISA): block-structuring. This new paradigm is presented, its hardware and software requirements are discussed and the results from a simulation study are presented. We show that a block-structured ISA utilizes both dynamic and compile-time mechanisms for exploiting instruction level parallelism and has significant performance advantages over a conventional ISA.

  9. Exploiting Phase Diversity for CDMA2000 1X Smart Antenna Base Stations

    NASA Astrophysics Data System (ADS)

    Kim, Seongdo; Hyeon, Seungheon; Choi, Seungwon

    2004-12-01

    A performance analysis of an access channel decoder is presented which exploits the diversity gain due to the independent magnitudes of the received signal energy at each of the antenna elements of a smart-antenna base-station transceiver subsystem (BTS) operating in a CDMA2000 1X signal environment. The objective is to enhance data retrieval at the cell site during the access period, for which the optimal weight vector of the smart antenna BTS is not available. It is shown that the proposed access channel decoder outperforms the conventional one, which is based on a single antenna channel, in terms of detection probability of the access probe, access channel failure probability, and Walsh-code demodulation performance.

  10. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  11. Using SpF to Achieve Petascale for Legacy Pseudospectral Applications

    NASA Technical Reports Server (NTRS)

    Clune, Thomas L.; Jiang, Weiyuan

    2014-01-01

    Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is constrained only by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting the legacy pseudospectral models MoSST and DYNAMO to SpF, and present preliminary performance results showing the improved scalability.

  12. Two-species occupancy modeling accounting for species misidentification and nondetection

    USGS Publications Warehouse

    Chambert, Thierry; Grant, Evan H. Campbell; Miller, David A. W.; Nichols, James; Mulder, Kevin P.; Brand, Adrianne B.

    2018-01-01

    In occupancy studies, species misidentification can lead to false-positive detections, which can cause severe estimator biases. Currently, all models that account for false-positive errors only consider omnibus sources of false detections and are limited to single-species occupancy. However, false detections for a given species often occur because of misidentification with another, closely related species. To exploit this explicit source of false-positive detection error, we develop a two-species occupancy model that accounts for misidentifications between two species of interest. As with other false-positive models, identifiability is greatly improved by the availability of unambiguous detections at a subset of site × occasion combinations. Here, we consider the case where some of the field observations can be confirmed using laboratory or other independent identification methods ("confirmatory data"). We performed three simulation studies to (1) assess the model's performance under various realistic scenarios, (2) investigate the influence of the proportion of confirmatory data on estimator accuracy and (3) compare the performance of this two-species model with that of the single-species false-positive model. The model shows good performance under all scenarios, even when only small proportions of detections are confirmed (e.g. 5%). It also clearly outperforms the single-species model. We illustrate application of this model using a 4-year dataset on two sympatric species of lungless salamanders: the US federally endangered Shenandoah salamander Plethodon shenandoah, and its presumed competitor, the red-backed salamander Plethodon cinereus. Occupancy of red-backed salamanders appeared very stable across the 4 years of study, whereas the Shenandoah salamander displayed substantial turnover in occupancy of forest habitats among years. Given the extent of species misidentification issues in occupancy studies, this modelling approach should help improve the reliability of estimates of species distribution, which is the goal of many studies and monitoring programmes. Further developments, to account for different forms of state uncertainty, can be readily undertaken under our general approach.

  13. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    NASA Astrophysics Data System (ADS)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
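
    The selection criteria mentioned above are, for a model with k parameters, n observations and maximized log-likelihood log L: AIC = 2k - 2 log L and BIC = k log n - 2 log L. A toy comparison (all numbers assumed):

        import math

        def aic(logL, k):
            return 2 * k - 2 * logL

        def bic(logL, k, n):
            return k * math.log(n) - 2 * logL

        candidates = {"3-room layout": (-120.4, 7), "4-room layout": (-118.9, 10)}
        n = 60                              # number of sparse observations (assumed)
        for name, (logL, k) in candidates.items():
            print(name, "AIC:", round(aic(logL, k), 1),
                  "BIC:", round(bic(logL, k, n), 1))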

  14. Parameter identification of piezoelectric hysteresis model based on improved artificial bee colony algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Geng; Zhou, Kexin; Zhang, Yeming

    2018-04-01

    The widely used Bouc-Wen hysteresis model can be utilized to accurately simulate the voltage-displacement curves of piezoelectric actuators. In order to identify the unknown parameters of the Bouc-Wen model, an improved artificial bee colony (IABC) algorithm is proposed in this paper. A guiding strategy for searching the current optimal position of the food source is proposed in the method, which helps balance the local search ability and global exploitation capability, and the formula for the scout bees to search for the food source is modified to increase the convergence speed. Experiments were conducted to verify the effectiveness of the IABC algorithm. The results show that the identified hysteresis model agrees well with the actual actuator response. Moreover, the identification results were compared with the standard particle swarm optimization (PSO) method, and the convergence rate of the IABC algorithm is better than that of the standard PSO method.
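
    A skeleton of the standard artificial bee colony loop that the IABC builds on, minimizing a toy parameter-identification error; the paper's improvements (the guided search strategy and modified scout formula) are not reproduced here, and all settings are assumptions.

        # Standard ABC skeleton (employed/onlooker phases merged for brevity)
        import numpy as np

        rng = np.random.default_rng(0)
        dim, n_food, limit, iters = 3, 10, 20, 200
        lo, hi = -5.0, 5.0
        target = np.array([1.2, -0.7, 2.0])            # "true" parameters (assumed)

        def cost(p):                                    # model-fit error to minimize
            return np.sum((p - target) ** 2)

        foods = rng.uniform(lo, hi, (n_food, dim))
        costs = np.array([cost(f) for f in foods])
        trials = np.zeros(n_food, dtype=int)

        for _ in range(iters):
            for i in range(n_food):
                k = rng.integers(n_food - 1)
                k += k >= i                             # random partner != i
                j = rng.integers(dim)
                cand = foods[i].copy()
                cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
                cand = np.clip(cand, lo, hi)
                if cost(cand) < costs[i]:
                    foods[i], costs[i], trials[i] = cand, cost(cand), 0
                else:
                    trials[i] += 1
            worn = trials > limit                       # scout phase: reset stale sources
            foods[worn] = rng.uniform(lo, hi, (worn.sum(), dim))
            costs[worn] = [cost(f) for f in foods[worn]]
            trials[worn] = 0

        print("identified parameters:", np.round(foods[costs.argmin()], 3))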

  15. Anomalies in target-controlled infusion: an analysis after 20 years of clinical use.

    PubMed

    Engbers, F H M; Dahan, A

    2018-05-01

    Although target-controlled infusion has been in use for more than two decades, its benefits are being obscured by anomalies in clinical practice caused by a number of important problems. These include: a variety of pharmacokinetic models available in open target-controlled infusion systems, which often confuse the user; the extrapolation of anthropomorphic data which provokes anomalous adjustments of dosing by such systems; and the uncertainty of regulatory requirements for the application of target-controlled infusion which causes uncontrolled exploitation of drugs and pharmacokinetic models in target-controlled infusion devices. Comparison of performance of pharmacokinetic models is complex and mostly inconclusive. However, a specific behaviour of a model in a target-controlled infusion system that is neither intended nor supported by scientific data can be considered an artefact or anomaly. Several of these anomalies can be identified in the current commercially available target-controlled infusion systems and are discussed in this review. © 2018 The Association of Anaesthetists of Great Britain and Ireland.

  16. Smile detectors correlation

    NASA Astrophysics Data System (ADS)

    Yuksel, Kivanc; Chang, Xin; Skarbek, Władysław

    2017-08-01

    A novel smile recognition algorithm is presented based on the extraction of 68 facial salient points (fp68) using an ensemble of regression trees. The smile detector exploits a Support Vector Machine linear model. It is trained with a few hundred exemplar images by the SVM algorithm working in a 136-dimensional space. It is shown by strict statistical data analysis that such a geometric detector strongly depends on the geometry of the mouth opening area, measured by triangulation of the outer lip contour. To this end, two Bayesian detectors were developed and compared with the SVM detector. The first uses the mouth area in the 2D image, while the second refers to the mouth area in a 3D animated face model. The 3D modeling is based on the Candide-3 model and is performed in real time along with the three smile detectors and statistics estimators. The mouth-area/Bayesian detectors exhibit high correlation with the fp68/SVM detector, in a range [0.8, 1.0], depending mainly on light conditions and individual features, with an advantage for the 3D technique, especially in hard light conditions.

  17. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    NASA Astrophysics Data System (ADS)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  18. High-fidelity gravity modeling applied to spacecraft trajectories and lunar interior analysis

    NASA Astrophysics Data System (ADS)

    Chappaz, Loic P. R.

    As the complexity and boldness of emerging mission proposals increase, and with the rapid evolution of the available computational capabilities, high-accuracy and high-resolution gravity models and the tools to exploit such models are increasingly attractive within the context of spaceflight mechanics, mission design and analysis, and planetary science in general. First, in trajectory design applications, a gravity representation for the bodies of interest is, in general, assumed and exploited to determine the motion of a spacecraft in any given system. The focus is the exploration of trajectories in the vicinity of a system comprised of two small irregular bodies. Within this context, the primary bodies are initially modeled as massive ellipsoids and tools to construct third-body trajectories are developed. However, these dynamical models are idealized representations of the actual dynamical regime and do not account for any perturbing effects. Thus, a robust strategy to maintain a spacecraft near reference third-body trajectories is constructed. Further, it is important to assess the perturbing effect that dominates the dynamics of the spacecraft in such a region as a function of the baseline orbit. Alternatively, the motion of the spacecraft around a given body may be known to extreme precision enabling the derivation of a very high-accuracy gravity field for that body. Such knowledge can subsequently be exploited to gain insight into specific properties of the body. The success of the NASA's GRAIL mission ensures that the highest resolution and most accurate gravity data for the Moon is now available. In the GRAIL investigation, the focus is on the specific task of detecting the presence and extent of subsurface features, such as empty lava tubes beneath the mare surface. In addition to their importance for understanding the emplacement of the mare flood basalts, open lava tubes are of interest as possible habitation sites safe from cosmic radiation and micrometeorite impacts. Tools are developed to best exploit the rich gravity data toward the numerical detection of such small features.

  19. Balancing Information Analysis and Decision Value: A Model to Exploit the Decision Process

    DTIC Science & Technology

    2011-12-01

    technical intelligence, e.g. signals and sensors (SIGINT and MASINT), imagery (IMINT), as well as human and open source intelligence (HUMINT and OSINT) …Clark 2006). The ability to capture large amounts of data and the plenitude of modern intelligence information sources provides a rich cache of…many techniques for managing information collected and derived from these sources, the exploitation of intelligence assets for decision-making

  20. Improved Functional Properties and Efficiencies of Nitinol Wires Under High-Performance Shape Memory Effect (HP-SME)

    NASA Astrophysics Data System (ADS)

    Casati, R.; Saghafi, F.; Biffi, C. A.; Vedani, M.; Tuissi, A.

    2017-10-01

    Martensitic Ti-rich NiTi intermetallics are broadly used in various cyclic applications as actuators, which exploit the shape memory effect (SME). Recently, a new approach for exploiting austenitic Ni-rich NiTi shape memory alloys as actuators was proposed and named the high-performance shape memory effect (HP-SME). HP-SME is based on thermal recovery of de-twinned martensite produced by mechanical loading of the parent phase. The aim of this work is to evaluate and compare the fatigue and actuation properties of austenitic HP-SME wires and conventional martensitic SME wires. The effect of thermomechanical cycling on the actuation response and the changes in the electrical resistivity of both shape memory materials were studied by performing actuation tests at different stages of the fatigue life. Finally, the changes in the transition temperatures before and after cycling were also investigated by differential calorimetric tests.

  1. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    PubMed Central

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314
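
    The underlying statistic that link-correlation measures capture can be illustrated as a conditional packet-reception probability estimated from 0/1 reception traces; LACE itself additionally combines long- and short-term link behaviors. The traces below are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)
        shared = rng.random(1000) < 0.7           # common channel state (assumed)
        a = shared & (rng.random(1000) < 0.9)     # receptions on link A
        b = shared & (rng.random(1000) < 0.85)    # receptions on link B

        p_b = b.mean()                            # unconditional reception rate of B
        p_b_given_a = b[a].mean()                 # reception rate of B given A received
        print(f"P(B)={p_b:.2f}, P(B|A)={p_b_given_a:.2f}  -> correlated links")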

  2. The Media As Opiate: Blacks in the Performing Arts.

    ERIC Educational Resources Information Center

    Staples, Robert

    1986-01-01

    It is ironic that the barometer of progress for Blacks has been their success in the performing arts. The film industry has historically shaped and reflected racist attitudes toward Blacks, while the popular music industry still segregates and exploits Black artists. (Author/GC)

  3. Low-Quality Structural and Interaction Data Improves Binding Affinity Prediction via Random Forest.

    PubMed

    Li, Hongjian; Leung, Kwong-Sak; Wong, Man-Hon; Ballester, Pedro J

    2015-06-12

    Docking scoring functions can be used to predict the strength of protein-ligand binding. It is widely believed that training a scoring function with low-quality data is detrimental for its predictive performance. Nevertheless, there is a surprising lack of systematic validation experiments in support of this hypothesis. In this study, we investigated to what extent training a scoring function with data containing low-quality structural and binding information is detrimental for predictive performance. We found that low-quality data is not only non-detrimental, but beneficial for the predictive performance of machine-learning scoring functions, although the improvement is less important than that coming from high-quality data. Furthermore, we observed that classical scoring functions are not able to effectively exploit data beyond an early threshold, regardless of its quality. This demonstrates that exploiting a larger data volume is more important for the performance of machine-learning scoring functions than restricting to a smaller set of higher data quality.
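
    A sketch of the machine-learning scoring-function idea: a random forest regressor mapping protein-ligand descriptors to binding affinity. The descriptors and affinities below are random placeholders, not docking data.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.random((2000, 36))                 # stand-in per-complex descriptors
        y = X[:, :5].sum(axis=1) + 0.3 * rng.standard_normal(2000)  # toy affinities

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
        print("R^2 on held-out complexes:", round(model.score(X_te, y_te), 3))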

  4. On the Usage of GPUs for Efficient Motion Estimation in Medical Image Sequences

    PubMed Central

    Thiyagalingam, Jeyarajan; Goodman, Daniel; Schnabel, Julia A.; Trefethen, Anne; Grau, Vicente

    2011-01-01

    Images are ubiquitous in biomedical applications from basic research to clinical practice. With the rapid increase in resolution, dimensionality of the images and the need for real-time performance in many applications, computational requirements demand proper exploitation of multicore architectures. Towards this, GPU-specific implementations of image analysis algorithms are particularly promising. In this paper, we investigate the mapping of an enhanced motion estimation algorithm to novel GPU-specific architectures, the resulting challenges and benefits therein. Using a database of three-dimensional image sequences, we show that the mapping leads to substantial performance gains, up to a factor of 60, and can provide near-real-time experience. We also show how architectural peculiarities of these devices can be best exploited in the benefit of algorithms, most specifically for addressing the challenges related to their access patterns and different memory configurations. Finally, we evaluate the performance of the algorithm on three different GPU architectures and perform a comprehensive analysis of the results. PMID:21869880

  5. High performance in silico virtual drug screening on many-core processors.

    PubMed

    McIntosh-Smith, Simon; Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-05-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel's Xeon Phi and multi-core CPUs with SIMD instruction sets.

  6. High performance in silico virtual drug screening on many-core processors

    PubMed Central

    Price, James; Sessions, Richard B; Ibarra, Amaurys A

    2015-01-01

    Drug screening is an important part of the drug development pipeline for the pharmaceutical industry. Traditional, lab-based methods are increasingly being augmented with computational methods, ranging from simple molecular similarity searches through more complex pharmacophore matching to more computationally intensive approaches, such as molecular docking. The latter simulates the binding of drug molecules to their targets, typically protein molecules. In this work, we describe BUDE, the Bristol University Docking Engine, which has been ported to the OpenCL industry standard parallel programming language in order to exploit the performance of modern many-core processors. Our highly optimized OpenCL implementation of BUDE sustains 1.43 TFLOP/s on a single Nvidia GTX 680 GPU, or 46% of peak performance. BUDE also exploits OpenCL to deliver effective performance portability across a broad spectrum of different computer architectures from different vendors, including GPUs from Nvidia and AMD, Intel’s Xeon Phi and multi-core CPUs with SIMD instruction sets. PMID:25972727

  7. Statistical and Probabilistic Extensions to Ground Operations' Discrete Event Simulation Modeling

    NASA Technical Reports Server (NTRS)

    Trocine, Linda; Cummings, Nicholas H.; Bazzana, Ashley M.; Rychlik, Nathan; LeCroy, Kenneth L.; Cates, Grant R.

    2010-01-01

    NASA's human exploration initiatives will invest in technologies, public/private partnerships, and infrastructure, paving the way for the expansion of human civilization into the solar system and beyond. As it has been for the past half century, the Kennedy Space Center will be the embarkation point for humankind's journey into the cosmos. Functioning as a next-generation space launch complex, Kennedy's launch pads, integration facilities, processing areas, and launch and recovery ranges will bustle with the activities of the world's space transportation providers. In developing this complex, KSC teams work through the potential operational scenarios: conducting trade studies, planning and budgeting for expensive and limited resources, and simulating alternative operational schemes. Numerous tools, among them discrete event simulation (DES), were matured during the Constellation Program to conduct such analyses with the purpose of optimizing the launch complex for maximum efficiency, safety, and flexibility while minimizing life cycle costs. Discrete event simulation is a computer-based modeling technique for complex and dynamic systems where the state of the system changes at discrete points in time and whose inputs may include random variables. DES is used to assess timelines and throughput, and to support operability studies and contingency analyses. It is applicable to any space launch campaign and informs decision-makers of the effects of varying numbers of expensive resources and the impact of off-nominal scenarios on measures of performance. In order to develop representative DES models, methods were adopted, exploited, or created to extend traditional uses of DES. The Delphi method was adopted and utilized for task duration estimation. DES software was exploited for probabilistic event variation. A roll-up process, developed to reuse models and model elements in other less-detailed models, was also used. The DES team continues to innovate and expand DES capabilities to address KSC's planning needs.

  8. Measuring and Predicting Tag Importance for Image Retrieval.

    PubMed

    Li, Shangwen; Purushotham, Sanjay; Chen, Chen; Ren, Yuzhuo; Kuo, C-C Jay

    2017-12-01

    Textual data such as tags, sentence descriptions are combined with visual cues to reduce the semantic gap for image retrieval applications in today's Multimodal Image Retrieval (MIR) systems. However, all tags are treated as equally important in these systems, which may result in misalignment between visual and textual modalities during MIR training. This will further lead to degenerated retrieval performance at query time. To address this issue, we investigate the problem of tag importance prediction, where the goal is to automatically predict the tag importance and use it in image retrieval. To achieve this, we first propose a method to measure the relative importance of object and scene tags from image sentence descriptions. Using this as the ground truth, we present a tag importance prediction model to jointly exploit visual, semantic and context cues. The Structural Support Vector Machine (SSVM) formulation is adopted to ensure efficient training of the prediction model. Then, the Canonical Correlation Analysis (CCA) is employed to learn the relation between the image visual feature and tag importance to obtain robust retrieval performance. Experimental results on three real-world datasets show a significant performance improvement of the proposed MIR with Tag Importance Prediction (MIR/TIP) system over other MIR systems.
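
    The CCA step can be sketched with scikit-learn: learn a shared space relating an image-feature matrix to a tag-importance matrix and inspect the canonical correlations. The stand-in data below are random; the paper uses learned visual features and predicted importances.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        latent = rng.standard_normal((200, 2))            # shared structure (assumed)
        visual = latent @ rng.standard_normal((2, 50)) \
            + 0.1 * rng.standard_normal((200, 50))        # image features
        tags = latent @ rng.standard_normal((2, 10)) \
            + 0.1 * rng.standard_normal((200, 10))        # tag-importance vectors

        cca = CCA(n_components=2).fit(visual, tags)
        U, V = cca.transform(visual, tags)
        print("canonical correlations:",
              [round(float(np.corrcoef(U[:, i], V[:, i])[0, 1]), 3) for i in range(2)])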

  9. On Designing Multicore-Aware Simulators for Systems Biology Endowed with OnLine Statistics

    PubMed Central

    Calcagno, Cristina; Coppo, Mario

    2014-01-01

    This paper discusses enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support bioinformatics scientists. In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool that performs the modeling, tuning, and sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories, turning into big data that should be analysed by statistics and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage, which immediately produces a partial result. The simulation-analysis workflow is validated, for performance and for effectiveness of the online analysis in capturing biological systems behavior, on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming, which provide key features to software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems, exhibiting multistable and oscillatory behavior, are used as a testbed. PMID:25050327

  10. Massive Exploration of Perturbed Conditions of the Blood Coagulation Cascade through GPU Parallelization

    PubMed Central

    Cazzaniga, Paolo; Nobile, Marco S.; Besozzi, Daniela; Bellini, Matteo; Mauri, Giancarlo

    2014-01-01

    The introduction of general-purpose Graphics Processing Units (GPUs) is boosting scientific applications in Bioinformatics, Systems Biology, and Computational Biology. In these fields, the use of high-performance computing solutions is motivated by the need of performing large numbers of in silico analysis to study the behavior of biological systems in different conditions, which necessitate a computing power that usually overtakes the capability of standard desktop computers. In this work we present coagSODA, a CUDA-powered computational tool that was purposely developed for the analysis of a large mechanistic model of the blood coagulation cascade (BCC), defined according to both mass-action kinetics and Hill functions. coagSODA allows the execution of parallel simulations of the dynamics of the BCC by automatically deriving the system of ordinary differential equations and then exploiting the numerical integration algorithm LSODA. We present the biological results achieved with a massive exploration of perturbed conditions of the BCC, carried out with one-dimensional and bi-dimensional parameter sweep analysis, and show that GPU-accelerated parallel simulations of this model can increase the computational performances up to a 181× speedup compared to the corresponding sequential simulations. PMID:25025072
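
    The LSODA integrator named above is what SciPy's odeint wraps, so the simulation kernel can be sketched on a toy mass-action pair with a one-dimensional parameter sweep (species, rates and ranges are assumptions; the real model is the full coagulation cascade):

        import numpy as np
        from scipy.integrate import odeint    # odeint wraps the LSODA integrator

        def cascade(y, t, k1, k2):
            a, b = y
            return [-k1 * a, k1 * a - k2 * b]   # A -> B -> (degraded), mass action

        t = np.linspace(0, 50, 200)
        for k1 in np.linspace(0.05, 0.5, 4):    # one-dimensional parameter sweep
            traj = odeint(cascade, [1.0, 0.0], t, args=(k1, 0.1))
            print(f"k1={k1:.2f}: peak [B] = {traj[:, 1].max():.3f}")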

  11. Unifying practice schedules in the timescales of motor learning and performance.

    PubMed

    Verhoeven, F Martijn; Newell, Karl M

    2018-06-01

    In this article, we elaborate on a multiple-timescales model of motor learning to examine the independent and integrated effects of massed and distributed practice schedules, within and between sessions, on the persistent (learning) and transient (warm-up, fatigue) processes of performance change. The timescales framework reveals the influence of practice distribution on four learning-related processes: the persistent processes of learning and forgetting, and the transient processes of warm-up decrement and fatigue. The superposition of the different processes of practice leads to a unified set of effects for massed and distributed practice within and between sessions in learning motor tasks. This analysis of the interaction between the duration of the interval between practice trials or sessions and the parameters of the introduced timescale model captures the unified influence of between-trial and between-session scheduling of practice on learning and performance. It provides a starting point for new theoretically based hypotheses, and for scheduling practice so as to minimize the negative effects of warm-up decrement, fatigue and forgetting while exploiting the positive effects of learning and retention. Copyright © 2018 Elsevier B.V. All rights reserved.
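
    A qualitative sketch of the superposition idea: a slow persistent learning curve plus a fast warm-up decrement at each session start and a slow within-session fatigue term. The functional forms and constants are assumptions, not the article's fitted model.

        import numpy as np

        trials = np.arange(1, 201)
        session = (trials - 1) % 50                 # 4 sessions of 50 trials

        learning = 1.0 - np.exp(-trials / 80.0)     # slow persistent gain
        warmup = -0.15 * np.exp(-session / 5.0)     # fast decrement at session start
        fatigue = -0.002 * session                  # slow within-session decline

        performance = learning + warmup + fatigue   # superposition of timescales
        print("end-of-session performance:", np.round(performance[49::50], 3))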

  12. On designing multicore-aware simulators for systems biology endowed with OnLine statistics.

    PubMed

    Aldinucci, Marco; Calcagno, Cristina; Coppo, Mario; Damiani, Ferruccio; Drocco, Maurizio; Sciacca, Eva; Spinella, Salvatore; Torquati, Massimo; Troina, Angelo

    2014-01-01

    The paper arguments are on enabling methodologies for the design of a fully parallel, online, interactive tool aiming to support the bioinformatics scientists .In particular, the features of these methodologies, supported by the FastFlow parallel programming framework, are shown on a simulation tool to perform the modeling, the tuning, and the sensitivity analysis of stochastic biological models. A stochastic simulation needs thousands of independent simulation trajectories turning into big data that should be analysed by statistic and data mining tools. In the considered approach the two stages are pipelined in such a way that the simulation stage streams out the partial results of all simulation trajectories to the analysis stage that immediately produces a partial result. The simulation-analysis workflow is validated for performance and effectiveness of the online analysis in capturing biological systems behavior on a multicore platform and representative proof-of-concept biological systems. The exploited methodologies include pattern-based parallel programming and data streaming that provide key features to the software designers such as performance portability and efficient in-memory (big) data management and movement. Two paradigmatic classes of biological systems exhibiting multistable and oscillatory behavior are used as a testbed.

  13. Strategic rehabilitation planning of piped water networks using multi-criteria decision analysis.

    PubMed

    Scholten, Lisa; Scheidegger, Andreas; Reichert, Peter; Maurer, Max; Lienert, Judit

    2014-02-01

    To overcome the difficulties of strategic asset management of water distribution networks, a pipe failure and a rehabilitation model are combined to predict the long-term performance of rehabilitation strategies. Bayesian parameter estimation is performed to calibrate the failure and replacement model based on a prior distribution inferred from three large water utilities in Switzerland. Multi-criteria decision analysis (MCDA) and scenario planning build the framework for evaluating 18 strategic rehabilitation alternatives under future uncertainty. Outcomes for three fundamental objectives (low costs, high reliability, and high intergenerational equity) are assessed. Exploitation of stochastic dominance concepts helps to identify twelve non-dominated alternatives and local sensitivity analysis of stakeholder preferences is used to rank them under four scenarios. Strategies with annual replacement of 1.5-2% of the network perform reasonably well under all scenarios. In contrast, the commonly used reactive replacement is not recommendable unless cost is the only relevant objective. Exemplified for a small Swiss water utility, this approach can readily be adapted to support strategic asset management for any utility size and based on objectives and preferences that matter to the respective decision makers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Kokkos: Enabling manycore performance portability through polymorphic memory access patterns

    DOE PAGES

    Carter Edwards, H.; Trott, Christian R.; Sunderland, Daniel

    2014-07-22

    The manycore revolution can be characterized by increasing thread counts, decreasing memory per thread, and diversity of continually evolving manycore architectures. High performance computing (HPC) applications and libraries must exploit increasingly finer levels of parallelism within their codes to sustain scalability on these devices. We found that a major obstacle to performance portability is the diverse and conflicting set of constraints on memory access patterns across devices. Contemporary portable programming models address manycore parallelism (e.g., OpenMP, OpenACC, OpenCL) but fail to address memory access patterns. The Kokkos C++ library enables applications and domain libraries to achieve performance portability on diverse manycore architectures by unifying abstractions for both fine-grain data parallelism and memory access patterns. In this paper we describe Kokkos' abstractions, summarize its application programmer interface (API), present performance results for unit-test kernels and mini-applications, and outline an incremental strategy for migrating legacy C++ codes to Kokkos. Furthermore, the Kokkos library is under active research and development to incorporate capabilities from new generations of manycore architectures, and to address a growing list of applications and domain libraries.
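
    Kokkos expresses this idea in C++ through polymorphic array layouts; as a concept-only illustration in Python (not the Kokkos API), the sketch below shows why the memory access pattern problem exists: the same logical 2-D array stored row-major versus column-major makes the same traversal fast on one layout and strided (slow) on the other.

```python
import time
import numpy as np

# Concept illustration (not Kokkos): the fast traversal direction of a 2-D
# array depends on whether it is stored row-major ("C") or column-major ("F").
# A portability layer like Kokkos abstracts the layout so kernels need not change.

n = 2000
a_c = np.zeros((n, n), order="C")   # rows contiguous in memory
a_f = np.zeros((n, n), order="F")   # columns contiguous in memory

def sum_by_rows(a):
    t0 = time.perf_counter()
    total = sum(a[i, :].sum() for i in range(a.shape[0]))
    return time.perf_counter() - t0

print("row-wise traversal, C order:", sum_by_rows(a_c))   # contiguous reads
print("row-wise traversal, F order:", sum_by_rows(a_f))   # strided reads, typically slower
```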

  15. Performance Analysis of a Hybrid Overset Multi-Block Application on Multiple Architectures

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Biswas, Rupak

    2003-01-01

    This paper presents a detailed performance analysis of a multi-block overset grid computational fluid dynamics application on multiple state-of-the-art computer architectures. The application is implemented using a hybrid MPI+OpenMP programming paradigm that exploits both coarse- and fine-grain parallelism; the former via MPI message passing and the latter via OpenMP directives. The hybrid model also extends the applicability of multi-block programs to large clusters of SMP nodes by overcoming the restriction that the number of processors be less than the number of grid blocks. A key kernel of the application, namely the LU-SGS linear solver, had to be modified to enhance the performance of the hybrid approach on the target machines. Investigations were conducted on cacheless Cray SX6 vector processors, cache-based IBM Power3 and Power4 architectures, and single-system-image SGI Origin3000 platforms. Overall results for complex vortex dynamics simulations demonstrate that the SX6 achieves the highest performance and outperforms the RISC-based architectures; however, the best scaling performance was achieved on the Power3.
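
    The cited solver is compiled MPI+OpenMP code; purely to illustrate the two-level decomposition, the Python sketch below distributes whole grid blocks across MPI ranks (coarse grain, via mpi4py) and shares the loop work inside each block among threads (fine grain). The block contents and per-point kernel are hypothetical placeholders, and Python threads here only show the structure (real codes use OpenMP for the fine-grain level).

```python
from concurrent.futures import ThreadPoolExecutor
from mpi4py import MPI   # requires an MPI install; run with: mpiexec -n 4 python hybrid.py

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Coarse grain: each MPI rank owns a subset of the grid blocks.
blocks = [list(range(1000)) for _ in range(8)]   # 8 hypothetical grid blocks
my_blocks = blocks[rank::size]

def relax(block):
    # Fine grain: stand-in for the per-block sweep done by OpenMP threads.
    return sum(x * 0.25 for x in block)

with ThreadPoolExecutor(max_workers=4) as pool:
    local = sum(pool.map(relax, my_blocks))

total = comm.allreduce(local, op=MPI.SUM)        # combine per-rank partial results
if rank == 0:
    print("global residual:", total)
```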

  16. Exploration and exploitation of Victorian science in Darwin's reading notebooks.

    PubMed

    Murdock, Jaimie; Allen, Colin; DeDeo, Simon

    2017-02-01

    Search in an environment with an uncertain distribution of resources involves a trade-off between exploitation of past discoveries and further exploration. This extends to information foraging, where a knowledge-seeker shifts between reading in depth and studying new domains. To study this decision-making process, we examine the reading choices made by one of the most celebrated scientists of the modern era: Charles Darwin. From the full-text of books listed in his chronologically-organized reading journals, we generate topic models to quantify his local (text-to-text) and global (text-to-past) reading decisions using Kullback-Leibler divergence, a cognitively-validated, information-theoretic measure of relative surprise. Rather than a pattern of surprise-minimization, corresponding to a pure exploitation strategy, Darwin's behavior shifts from early exploitation to later exploration, seeking unusually high levels of cognitive surprise relative to previous eras. These shifts, detected by an unsupervised Bayesian model, correlate with major intellectual epochs of his career as identified both by qualitative scholarship and Darwin's own self-commentary. Our methods allow us to compare his consumption of texts with their publication order. We find Darwin's consumption more exploratory than the culture's production, suggesting that underneath gradual societal changes are the explorations of individual synthesis and discovery. Our quantitative methods advance the study of cognitive search through a framework for testing interactions between individual and collective behavior and between short- and long-term consumption choices. This novel application of topic modeling to characterize individual reading complements widespread studies of collective scientific behavior. Copyright © 2016 Elsevier B.V. All rights reserved.
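
    A minimal sketch of the surprise measure: the Kullback-Leibler divergence D(p || q) between a text's topic distribution p and a baseline q (for instance, the average topic mix of previously read texts). The example distributions below are hypothetical, not taken from the Darwin corpus.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) in bits; eps guards against zero-probability topics."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log2(p / q)))

past_avg = [0.50, 0.30, 0.15, 0.05]   # q: topic mix of texts read so far
new_text = [0.10, 0.10, 0.30, 0.50]   # p: topic mix of the candidate next text
print(kl_divergence(new_text, past_avg))   # high divergence = exploratory choice
```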

  17. Quantitative evaluation of the risk induced by dominant geomorphological processes on different land uses, based on GIS spatial analysis models

    NASA Astrophysics Data System (ADS)

    Ştefan, Bilaşco; Sanda, Roşca; Ioan, Fodorean; Iuliu, Vescan; Sorin, Filip; Dănuţ, Petrea

    2017-12-01

    Maramureş Land is mostly characterized by agricultural and forestry land use due to its specific configuration of topography and its specific pedoclimatic conditions. Considering the land-management trend of the last century, a decrease in the surface of agricultural land to the advantage of built-up and grass land, as well as an accelerated decrease in forest cover due to uncontrolled and irrational forest exploitation, has become obvious. The field analysis performed on the territory of Maramureş Land has highlighted a high frequency of two geomorphological processes, landslides and soil erosion, which have a major negative impact on land use due to their rate of occurrence. The main aim of the present study is the GIS modeling of the two geomorphological processes to determine a state of vulnerability (using the USLE model for soil erosion and a quantitative model based on the morphometric characteristics of the territory, derived from HG 447/2003) and their integration into a complex model of cumulated vulnerability identification. The modeling of risk exposure was performed using a quantitative approach based on spatial analysis models and equations, developed from modeled raster data structures and primary vector data, through a matrix highlighting the correspondence between vulnerability and land-use classes. The quantitative risk analysis combined the modeled exposure classes with land price as a primary alphanumeric database, applying spatial analysis techniques to each class by means of the attribute table. The spatial results highlight the territories at high risk from frequently occurring geomorphological processes and represent a useful tool in the process of spatial planning.
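
    The USLE component reduces to the standard factor product A = R x K x LS x C x P evaluated cell-by-cell on raster grids. The sketch below shows that computation in Python; the factor rasters are random placeholders, whereas in the study they would be derived from rainfall, soil, terrain, and land-use data.

```python
import numpy as np

# Minimal sketch of the USLE soil-loss estimate A = R * K * LS * C * P on
# raster grids. All factor values below are hypothetical placeholders.

rng = np.random.default_rng(0)
shape = (100, 100)                        # hypothetical raster extent
R = rng.uniform(400, 800, shape)          # rainfall erosivity
K = rng.uniform(0.1, 0.5, shape)          # soil erodibility
LS = rng.uniform(0.5, 5.0, shape)         # slope length/steepness
C = rng.uniform(0.01, 0.4, shape)         # cover management
P = rng.uniform(0.5, 1.0, shape)          # support practices

A = R * K * LS * C * P                    # estimated annual soil loss per cell
high_vuln = A > np.percentile(A, 90)      # e.g., top decile as a high-vulnerability class
print(A.mean(), int(high_vuln.sum()))
```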

  18. Oceanographic Determinants of Bycatch Patterns in the California Drift Gillnet Fishery: Building an EBFM Tool for Sustainable Fisheries.

    NASA Astrophysics Data System (ADS)

    Hahlbeck, N.; Scales, K. L.; Hazen, E. L.; Bograd, S. J.

    2016-12-01

    The reduction of bycatch, or incidental capture of non-target species in a fishery, is a key objective of ecosystem-based fisheries management (EBFM) and critical to the conservation of many threatened marine species. Prediction of bycatch events is therefore of great importance to EBFM efforts. Here, bycatch of the ocean sunfish (Mola mola) and bluefin tuna (Thunnus thynnus) in the California drift gillnet fishery is modeled using a suite of remotely sensed environmental variables as predictors. Data from 8321 gillnet sets were aggregated by month to reduce zero inflation and autocorrelation among sets, and a set of a priori generalized additive models (GAMs) was created for each species based on literature review and preliminary data exploration. Each of the models was fit using a binomial family with a logit link in R, and Akaike's Information Criterion with correction (AICc) was used in the first stage of model selection. K-fold cross validation was used in the second stage of model selection and performance assessment, using the least-squares linear model of predicted vs. observed values as the performance metric. The best-performing mola model indicated a strong, nearly linear negative correlation with sea surface temperature, as well as weaker nonlinear correlations with eddy kinetic energy, chlorophyll-a concentration, and rugosity. These findings are consistent with current understanding of ocean sunfish habitat use; for example, previous studies suggest seasonal movement patterns and exploitation of dynamic, highly productive areas characteristic of upwelling regions. Preliminary results from the bluefin models also indicate seasonal fluctuation and correlation with environmental variables. These models can be used with near-real-time satellite data as bycatch avoidance tools for both fishers and managers, allowing for the use of more dynamic ocean management strategies to improve sustainability of the fishery.
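
    The first-stage criterion has a simple closed form: AICc adds a small-sample correction term to the usual AIC penalty. The study fit its GAMs in R; the helper below is a Python sketch with hypothetical inputs.

```python
# Minimal sketch of the first-stage selection criterion. AICc = AIC plus a
# small-sample correction; k = number of fitted parameters, n = observations.

def aicc(log_likelihood, k, n):
    aic = 2 * k - 2 * log_likelihood
    return aic + (2 * k * (k + 1)) / (n - k - 1)

# Example: compare two hypothetical candidate models on n = 96 monthly records.
print(aicc(log_likelihood=-41.2, k=4, n=96))
print(aicc(log_likelihood=-39.8, k=6, n=96))   # extra parameters are penalized harder
```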
