A Generalized Quantum-Inspired Decision Making Model for Intelligent Agent
Loo, Chu Kiong
2014-01-01
A novel decision-making model for intelligent agents using a quantum-inspired approach is proposed. A formal, generalized solution to the problem is given. Mathematically, the proposed model can represent higher-dimensional decision problems than previous research. Four experiments are conducted, and both the empirical results and the proposed model's results are reported for each experiment. The experiments show that the results of the proposed model agree with the empirical results perfectly. The proposed model provides a new direction for researchers to address the cognitive basis of intelligent agent design. PMID:24778580
General Blending Models for Data From Mixture Experiments
Brown, L.; Donev, A. N.; Bissett, A. C.
2015-01-01
We propose a new class of models providing a powerful unification and extension of existing statistical methodology for analysis of data obtained in mixture experiments. These models, which integrate models proposed by Scheffé and Becker, extend considerably the range of mixture component effects that may be described. They become complex when the studied phenomenon requires it, but remain simple whenever possible. This article has supplementary material online. PMID:26681812
DEM Calibration Approach: design of experiment
NASA Astrophysics Data System (ADS)
Boikov, A. V.; Savelev, R. V.; Payor, V. A.
2018-05-01
The problem of calibrating DEM models is considered in this article. It is proposed to divide the model's input parameters into those that require iterative calibration and those that are best measured directly. A new method for model calibration, based on design of experiments for the iteratively calibrated parameters, is proposed. The experiment is conducted using a specially designed stand, and the results are processed with computer vision algorithms. Approximating functions are obtained and the error of the implemented software and hardware complex is estimated. The prospects of the obtained results are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Justin; Hund, Lauren
2017-02-01
Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
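The likelihood scaling described above can be sketched generically. The snippet below is an illustrative assumption, not the authors' code: it downweights a Gaussian log-likelihood for a functional velocity trace by an effective sample size estimated from the lag-1 autocorrelation of the residuals; the ESS estimator and noise level are assumptions.

```python
# Hedged sketch: ESS-scaled Gaussian log-likelihood for correlated functional output.
import numpy as np

def effective_sample_size(residuals):
    """Crude ESS estimate from the lag-1 autocorrelation of the residuals."""
    r = residuals - residuals.mean()
    rho1 = np.corrcoef(r[:-1], r[1:])[0, 1]
    n = len(r)
    return min(n, max(1.0, n * (1.0 - rho1) / (1.0 + rho1)))

def scaled_log_likelihood(observed, simulated, sigma):
    """Gaussian log-likelihood (up to a constant) downweighted by ESS/n."""
    resid = observed - simulated
    n = len(resid)
    n_eff = effective_sample_size(resid)
    full_ll = -0.5 * np.sum((resid / sigma) ** 2) - n * np.log(sigma)
    return (n_eff / n) * full_ll

# Example with synthetic velocity traces (placeholder data).
t = np.linspace(0, 1, 500)
obs = np.sin(2 * np.pi * t) + 0.05 * np.random.randn(500)
sim = np.sin(2 * np.pi * t)
print(scaled_log_likelihood(obs, sim, sigma=0.05))
```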
On the Bayesian Treed Multivariate Gaussian Process with Linear Model of Coregionalization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Lin, Guang
2015-02-01
The Bayesian treed Gaussian process (BTGP) has gained popularity in recent years because it provides a straightforward mechanism for modeling non-stationary data and can alleviate computational demands by fitting models to less data. The extension of BTGP to the multivariate setting requires us to model the cross-covariance and to propose efficient algorithms that can deal with trans-dimensional MCMC moves. In this paper we extend the cross-covariance of the Bayesian treed multivariate Gaussian process (BTMGP) to that of the linear model of coregionalization (LMC). Different strategies have been developed to improve the MCMC mixing and to invert smaller matrices in the Bayesian inference. Moreover, we compare the proposed BTMGP with existing multiple BTGP and BTMGP models in test cases and in a multiphase flow computer experiment in a full-scale regenerator of a carbon capture unit. The use of the BTMGP with the LMC cross-covariance helped to predict the computer experiments better than existing competitors. The proposed model has a wide variety of applications, such as computer experiments and environmental data. In the case of computer experiments we also develop an adaptive sampling strategy for the BTMGP with the LMC cross-covariance function.
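For readers unfamiliar with the LMC construction, a minimal numerical sketch follows. It is an assumed illustration, not the paper's implementation: the joint covariance over several outputs is built as a sum of Kronecker products of coregionalization matrices B_q with single-output RBF kernels k_q; the loading matrices, lengthscales, and design points are placeholders.

```python
# Hedged sketch: linear model of coregionalization (LMC) cross-covariance.
import numpy as np

def rbf(X, lengthscale):
    """Squared-exponential kernel matrix over the rows of X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def lmc_covariance(X, A_list, lengthscales):
    """K = sum_q (A_q A_q^T) kron k_q(X, X); outputs vary slowest in this ordering."""
    K = 0.0
    for A, ls in zip(A_list, lengthscales):
        B = A @ A.T                      # coregionalization matrix, PSD by construction
        K = K + np.kron(B, rbf(X, ls))
    return K

X = np.random.rand(20, 3)                                 # 20 design points, 3-D input
A_list = [np.random.randn(2, 1), np.random.randn(2, 1)]   # two latent processes, two outputs
K = lmc_covariance(X, A_list, lengthscales=[0.3, 1.0])
print(K.shape)  # (40, 40): 2 outputs x 20 points
```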
A visual model for object detection based on active contours and level-set method.
Satoh, Shunji
2006-09-01
A visual model for object detection is proposed. In order to make the detection ability comparable with existing technical methods for object detection, an evolution equation of neurons in the model is derived from the computational principle of active contours. The hierarchical structure of the model emerges naturally from the evolution equation. One drawback involved with initial values of active contours is alleviated by introducing and formulating convexity, which is a visual property. Numerical experiments show that the proposed model detects objects with complex topologies and that it is tolerant of noise. A visual attention model is introduced into the proposed model. Other simulations show that the visual properties of the model are consistent with the results of psychological experiments that disclose the relation between figure-ground reversal and visual attention. We also demonstrate that the model tends to perceive smaller regions as figures, which is a characteristic observed in human visual perception.
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
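The Bayesian FDR control mentioned above follows a generic rule: reject the hypotheses with the smallest posterior probabilities of being null as long as their running average stays below the target FDR. The sketch below illustrates that rule with simulated posterior probabilities; it does not reproduce the authors' model for the test statistics, and the Beta-distributed probabilities are a stand-in.

```python
# Hedged sketch: generic Bayesian FDR rejection rule on posterior null probabilities.
import numpy as np

def bayesian_fdr_reject(post_null_prob, alpha=0.05):
    """Return a boolean mask of rejected hypotheses at Bayesian FDR <= alpha."""
    order = np.argsort(post_null_prob)                     # most likely non-null first
    running_mean = np.cumsum(post_null_prob[order]) / np.arange(1, len(order) + 1)
    k = np.sum(running_mean <= alpha)                      # largest rejection set meeting the bound
    reject = np.zeros(len(post_null_prob), dtype=bool)
    reject[order[:k]] = True
    return reject

post_null = np.random.beta(0.5, 0.5, size=1000)            # stand-in posterior null probabilities
print(bayesian_fdr_reject(post_null, alpha=0.10).sum(), "rejections")
```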
A review of active learning approaches to experimental design for uncovering biological networks
2017-01-01
Various types of biological knowledge describe networks of interactions among elementary entities. For example, transcriptional regulatory networks consist of interactions among proteins and genes. Current knowledge about the exact structure of such networks is highly incomplete, and laboratory experiments that manipulate the entities involved are conducted to test hypotheses about these networks. In recent years, various automated approaches to experiment selection have been proposed. Many of these approaches can be characterized as active machine learning algorithms. Active learning is an iterative process in which a model is learned from data, hypotheses are generated from the model to propose informative experiments, and the experiments yield new data that is used to update the model. This review describes the various models, experiment selection strategies, validation techniques, and successful applications described in the literature; highlights common themes and notable distinctions among methods; and identifies likely directions of future research and open problems in the area. PMID:28570593
Springback Mechanism Analysis and Experiments on Robotic Bending of Rectangular Orthodontic Archwire
NASA Astrophysics Data System (ADS)
Jiang, Jin-Gang; Han, Ying-Shuai; Zhang, Yong-De; Liu, Yan-Jv; Wang, Zhao; Liu, Yi
2017-11-01
Fixed-appliance technology is the most common and effective malocclusion orthodontic treatment method, and its key step is the bending of orthodontic archwire. Existing springback analyses of archwire do not consider the movement of the stress-strain neutral layer. To solve this problem, a springback calculation model for rectangular orthodontic archwire is proposed. A bending springback experiment is conducted using an orthodontic archwire bending springback measurement device. The springback experimental results show that the theoretical calculations from the proposed model coincide better with the experimental measurements than those that do not consider movement of the stress-strain neutral layer. A bending experiment with rectangular orthodontic archwire is conducted using a robotic orthodontic archwire bending system. The patient experiment results show that the maximum and minimum error ratios of formed orthodontic archwire parameters are 22.46% and 10.23% without considering springback and decrease to 11.35% and 6.13% using the proposed model. The proposed springback calculation model, which considers the movement of the stress-strain neutral layer, greatly improves the orthodontic archwire bending precision.
Is there something quantum-like about the human mental lexicon?
Bruza, Peter; Kitto, Kirsty; Nelson, Douglas; McEvoy, Cathy
2010-01-01
Following an early claim by Nelson & McEvoy (35) suggesting that word associations can display ‘spooky action at a distance’ behaviour, a serious investigation of the potentially quantum nature of such associations is currently underway. In this paper quantum theory is proposed as a framework suitable for modelling the human mental lexicon, specifically the results obtained from both intralist and extralist word association experiments. Some initial models exploring this hypothesis are discussed, and experiments capable of testing these models are proposed. PMID:20224806
Elucidating the role of recovery experiences in the job demands-resources model.
Moreno-Jiménez, Bernardo; Rodríguez-Muñoz, Alfredo; Sanz-Vergel, Ana Isabel; Garrosa, Eva
2012-07-01
Based on the Job Demands-Resources (JD-R) model, the current study examined the moderating role of recovery experiences (i.e., psychological detachment from work, relaxation, mastery experiences, and control over leisure time) on the relationship between one job demand (i.e., role conflict) and work- and health-related outcomes. Results from our sample of 990 employees from Spain showed that psychological detachment from work and relaxation buffered the negative impact of role conflict on some of the proposed outcomes. Contrary to our expectations, we did not find significant results for mastery and control regarding moderating effects. Overall, findings suggest a differential pattern of the recovery experiences in the health impairment process proposed by the JD-R model.
Kalman Filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry.
Zhang, Yuxin; Chen, Shuo; Deng, Kexin; Chen, Bingyao; Wei, Xing; Yang, Jiafei; Wang, Shi; Ying, Kui
2017-01-01
The aim is to develop a self-adaptive and fast thermometry method by combining the original hybrid magnetic resonance thermometry method and the bio heat transfer equation (BHTE) model. The proposed Kalman filtered Bio Heat Transfer Model Based Self-adaptive Hybrid Magnetic Resonance Thermometry, abbreviated as the KalBHT hybrid method, introduces the BHTE model to synthesize a window on the regularization term of the hybrid algorithm, which leads to a self-adaptive regularization both spatially and temporally as the temperature changes. Further, to decrease the sensitivity to the accuracy of the BHTE model, a Kalman filter is utilized to update the window at each iteration. To investigate the effect of the proposed model, a computer heating simulation, a phantom microwave heating experiment, and dynamic in-vivo model validation on liver and thoracic tumor were conducted in this study. The heating simulation indicates that the KalBHT hybrid algorithm achieves more accurate results without adjusting λ to a proper value, in comparison to the hybrid algorithm. The results of the phantom heating experiment illustrate that the proposed model is able to follow temperature changes in the presence of motion, and the estimated temperature also shows less noise in the background and surrounding the hot spot. The dynamic in-vivo model validation with heating simulation demonstrates that the proposed model has a higher convergence rate, more robustness to the susceptibility problem surrounding the hot spot, and more accurate temperature estimation. In the healthy liver experiment with heating simulation, the RMSE at the hot spot of the proposed model is reduced to about 50% of the RMSE of the original hybrid model, and the convergence time becomes only about one fifth of that of the hybrid model. The proposed model is able to improve the accuracy of the original hybrid algorithm and accelerate the convergence rate of MR temperature estimation.
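The per-iteration update of the window follows the standard Kalman predict/correct cycle. The scalar sketch below is only an illustration of that cycle under assumed noise levels and a placeholder prediction function; it is not the KalBHT implementation, and the state, Q, R, and `predict` are all assumptions.

```python
# Hedged sketch: one scalar Kalman predict/correct step, as an illustration of the
# kind of model-driven update described in the abstract (not the KalBHT code).
def kalman_update(x_prev, P_prev, z_meas, predict, Q=1e-3, R=1e-2):
    """x_prev, P_prev: previous estimate and variance; z_meas: new measurement;
    predict: physical-model prediction function (e.g., a BHTE step); Q, R assumed."""
    x_pred = predict(x_prev)          # predict with the physical model
    P_pred = P_prev + Q
    K = P_pred / (P_pred + R)         # Kalman gain
    x_new = x_pred + K * (z_meas - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

x, P = 37.0, 1.0                      # placeholder initial temperature (deg C) and variance
x, P = kalman_update(x, P, z_meas=38.2, predict=lambda T: T + 0.5)
print(x, P)
```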
A "Uses and Gratification Expectancy Model" to Predict Students' "Perceived e-Learning Experience"
ERIC Educational Resources Information Center
Mondi, Makingu; Woods, Peter; Rafi, Ahmad
2008-01-01
This study investigates "how and why" students' "Uses and Gratification Expectancy" (UGE) for e-learning resources influences their "Perceived e-Learning Experience." A "Uses and Gratification Expectancy Model" (UGEM) framework is proposed to predict students' "Perceived e-Learning Experience," and…
Search for Hidden Particles: a new experiment proposal
NASA Astrophysics Data System (ADS)
De Lellis, G.
2015-08-01
Searches for new physics with accelerators are being performed at the LHC, looking for high-mass particles coupled to matter with ordinary strength. We propose a new experiment meant to search for very weakly coupled particles in the few-GeV mass domain. The existence of such particles, foreseen in different models beyond the Standard Model, is largely unexplored from the experimental point of view. A beam dump facility built in the CERN North Area, using 400 GeV protons, is a copious factory of charmed hadrons and could be used to probe the existence of such particles. The beam dump is also an ideal source of tau neutrinos, the least known particle in the Standard Model. In particular, tau anti-neutrinos have not been observed so far. We therefore propose an experiment to search for hidden particles and study tau neutrino physics at the same time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allwine, K Jerry; Flaherty, Julia E.
2007-08-01
This report provides an experimental plan for a proposed Asian long-range tracer study as part of the international Tracer Experiment and Atmospheric Modeling (TEAM) Project. The TEAM partners are China, Japan, South Korea and the United States. Optimal times of year to conduct the study, meteorological measurements needed, proposed tracer release locations, proposed tracer sampling locations and the proposed durations of tracer releases and subsequent sampling are given. Also given are the activities necessary to prepare for the study and the schedule for completing the preparation activities leading to conducting the actual field operations. This report is intended to provide the TEAM members with the information necessary for planning and conducting the Asian long-range tracer study. The experimental plan is proposed, at this time, to describe the efforts necessary to conduct the Asian long-range tracer study, and the plan will undoubtedly be revised and refined as the planning goes forward over the next year.
Asymmetries in visual search for conjunctive targets.
Cohen, A
1993-08-01
Asymmetry is demonstrated between conjunctive targets in visual search with no detectable asymmetries between the individual features that compose these targets. Experiment 1 demonstrated this phenomenon for targets composed of color and shape. Experiments 2 and 4 demonstrate this asymmetry for targets composed of size and orientation and for targets composed of contrast level and orientation, respectively. Experiment 3 demonstrates that the search rate for individual features cannot predict the search rate for conjunctive targets. These results demonstrate the need for 2 levels of representation: one of features and one of conjunctions of features. A model related to the modified feature integration theory is proposed to account for these results. The proposed model and other models of visual search are discussed.
Kravitz, Benjamin S.; Robock, Alan; Tilmes, S.; ...
2015-10-27
We present a suite of new climate model experiment designs for the Geoengineering Model Intercomparison Project (GeoMIP). This set of experiments, named GeoMIP6 (to be consistent with the Coupled Model Intercomparison Project Phase 6), builds on the previous GeoMIP project simulations, and has been expanded to address several further important topics, including key uncertainties in extreme events, the use of geoengineering as part of a portfolio of responses to climate change, and the relatively new idea of cirrus cloud thinning to allow more long wave radiation to escape to space. We discuss experiment designs, as well as the rationale for those designs, showing preliminary results from individual models when available. We also introduce a new feature, called the GeoMIP Testbed, which provides a platform for simulations that will be performed with a few models and subsequently assessed to determine whether the proposed experiment designs will be adopted as core (Tier 1) GeoMIP experiments. In conclusion, this is meant to encourage various stakeholders to propose new targeted experiments that address their key open science questions, with the goal of making GeoMIP more relevant to a broader set of communities.
Similarity Theory of Withdrawn Water Temperature Experiment
2015-01-01
Selective withdrawal from a thermally stratified reservoir has been widely utilized in managing reservoir water withdrawal. Besides theoretical analysis and numerical simulation, model tests are also necessary in studying the temperature of withdrawn water. However, information on the similarity theory of the withdrawn water temperature model remains lacking. Considering the flow features of selective withdrawal, the similarity theory of the withdrawn water temperature model was analyzed theoretically based on the modification of the governing equations, the Boussinesq approximation, and some simplifications. The similarity conditions between the model and the prototype were suggested. The conversion of withdrawn water temperature between the model and the prototype was proposed. Meanwhile, the fundamental theory of temperature distribution conversion was first proposed, which can significantly improve experiment efficiency when the basic temperature of the model differs from that of the prototype. Based on the similarity theory, an experiment was performed on the withdrawn water temperature, which was verified by a numerical method. PMID:26065020
Rondeau, Virginie; Schaffner, Emmanuel; Corbière, Fabien; Gonzalez, Juan R; Mathoulin-Pélissier, Simone
2013-06-01
Owing to the natural evolution of a disease, several events often arise after a first treatment for the same subject. For example, patients with a primary invasive breast cancer and treated with breast conserving surgery may experience breast cancer recurrences, metastases or death. A certain proportion of subjects in the population who are not expected to experience the events of interest are considered to be 'cured' or non-susceptible. To model correlated failure time data incorporating a surviving fraction, we compare several forms of cure rate frailty models. In the first model, already proposed in the literature, non-susceptible patients are those who are not expected to experience the event of interest over a sufficiently long period of time. The other proposed models account for the possibility of cure after each event. We illustrate the cure frailty models with two data sets: first, to analyse time-dependent prognostic factors associated with breast cancer recurrences, metastases, new primary malignancy and death; and second, to analyse successive rehospitalizations of patients diagnosed with colorectal cancer. Estimates were obtained by maximization of the likelihood using SAS proc NLMIXED for a piecewise constant hazards model. As opposed to the simple frailty model, the proposed methods demonstrate great potential in modelling multivariate survival data with long-term survivors ('cured' individuals).
Three-dimensional computer model for the atmospheric general circulation experiment
NASA Technical Reports Server (NTRS)
Roberts, G. O.
1984-01-01
An efficient, flexible, three-dimensional, hydrodynamic, computer code has been developed for a spherical cap geometry. The code will be used to simulate NASA's Atmospheric General Circulation Experiment (AGCE). The AGCE is a spherical, baroclinic experiment which will model the large-scale dynamics of our atmosphere; it has been proposed to NASA for future Spacelab flights. In the AGCE a radial dielectric body force will simulate gravity, with hot fluid tending to move outwards. In order that this force be dominant, the AGCE must be operated in a low gravity environment such as Spacelab. The full potential of the AGCE will only be realized by working in conjunction with an accurate computer model. Proposed experimental parameter settings will be checked first using model runs. Then actual experimental results will be compared with the model predictions. This interaction between experiment and theory will be very valuable in determining the nature of the AGCE flows and hence their relationship to analytical theories and actual atmospheric dynamics.
Modeling Valuations from Experience: A Comment on Ashby and Rakow (2014)
ERIC Educational Resources Information Center
Wulff, Dirk U.; Pachur, Thorsten
2016-01-01
What are the cognitive mechanisms underlying subjective valuations formed on the basis of sequential experiences of an option's possible outcomes? Ashby and Rakow (2014) have proposed a sliding window model (SWIM), according to which people's valuations represent the average of a limited sample of recent experiences (the size of which is estimated…
A robust and fast active contour model for image segmentation with intensity inhomogeneity
NASA Astrophysics Data System (ADS)
Ding, Keyan; Weng, Guirong
2018-04-01
In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing local image intensity fitting functions before the evolution of the curve, the proposed model can effectively segment images with intensity inhomogeneity. The computational cost is low because the fitting functions do not need to be updated in each iteration. Experiments have shown that the proposed model has a higher segmentation efficiency than some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.
User Preference-Based Dual-Memory Neural Model With Memory Consolidation Approach.
Nasir, Jauwairia; Yoo, Yong-Ho; Kim, Deok-Hwa; Kim, Jong-Hwan
2018-06-01
Memory modeling has been a popular topic of research for improving the performance of autonomous agents in cognition-related problems. Apart from learning distinct experiences correctly, significant or recurring experiences are expected to be learned better and retrieved more easily. In order to achieve this objective, this paper proposes a user preference-based dual-memory adaptive resonance theory network model, which makes use of a user preference to encode memories with various strengths and to learn and forget at various rates. Over a period of time, memories undergo a consolidation-like process at a rate proportional to the user preference at the time of encoding and the frequency of recall of a particular memory. Consolidated memories are easier to recall and are more stable. This dual-memory neural model generates distinct episodic memories and a flexible semantic-like memory component. This leads to an enhanced retrieval mechanism of experiences through two routes. Simulation results are presented to evaluate the proposed memory model based on various kinds of cues over a number of trials. Experimental results on Mybot are also presented. The results verify that not only are distinct experiences learned correctly but also that experiences associated with higher user preference and recall frequency are consolidated earlier. Thus, these experiences are recalled more easily relative to the unconsolidated experiences.
Modeling of coherent ultrafast magneto-optical experiments: Light-induced molecular mean-field model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hinschberger, Y.; Hervieux, P.-A.
2015-12-28
We present calculations which aim to describe coherent ultrafast magneto-optical effects observed in time-resolved pump-probe experiments. Our approach is based on a nonlinear semi-classical Drude-Voigt model and is used to interpret experiments performed on nickel ferromagnetic thin film. Within this framework, a phenomenological light-induced coherent molecular mean-field depending on the polarizations of the pump and probe pulses is proposed whose microscopic origin is related to a spin-orbit coupling involving the electron spins of the material sample and the electric field of the laser pulses. Theoretical predictions are compared to available experimental data. The model successfully reproduces the observed experimental trends and gives meaningful insight into the understanding of magneto-optical rotation behavior in the ultrafast regime. Theoretical predictions for further experimental studies are also proposed.
Lim, Hojun; Battaile, Corbett C.; Brown, Justin L.; ...
2016-06-14
In this work, we develop a tantalum strength model that incorporates effects of temperature, strain rate and pressure. Dislocation kink-pair theory is used to incorporate temperature and strain rate effects while the pressure dependent yield is obtained through the pressure dependent shear modulus. Material constants used in the model are parameterized from tantalum single crystal tests and polycrystalline ramp compression experiments. It is shown that the proposed strength model agrees well with the temperature and strain rate dependent yield obtained from polycrystalline tantalum experiments. Furthermore, the model accurately reproduces the pressure dependent yield stresses up to 250 GPa. The proposed strength model is then used to conduct simulations of a Taylor cylinder impact test and validated with experiments. This approach provides a physically-based multi-scale strength model that is able to predict the plastic deformation of polycrystalline tantalum through a wide range of temperature, strain and pressure regimes.
Phase II Study Proposal Briefs.
ERIC Educational Resources Information Center
National Center for the Study of Postsecondary Educational Supports, Honolulu, HI.
This document collects 23 study proposal briefs presented to the National Center for the Study of Postsecondary Educational Supports. The proposals address the following topics concerned with postsecondary services for students with disabilities: cultural empowerment, longitudinal analysis of postsecondary students' experience, effective models of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Atamturktur, Sez; Unal, Cetin; Hemez, Francois
The project proposed to provide a Predictive Maturity Framework with its companion metrics that (1) introduce a formalized, quantitative means to communicate information between interested parties, (2) provide scientifically dependable means to claim completion of Validation and Uncertainty Quantification (VU) activities, and (3) guide the decision makers in the allocation of Nuclear Energy's resources for code development and physical experiments. The project team proposed to develop this framework based on two complementary criteria: (1) the extent of experimental evidence available for the calibration of simulation models and (2) the sophistication of the physics incorporated in simulation models. The proposed framework is capable of quantifying the interaction between the required number of physical experiments and the degree of physics sophistication. The project team has developed this framework and implemented it with a multi-scale model for simulating creep of a core reactor cladding. The multi-scale model is composed of the viscoplastic self-consistent (VPSC) code at the meso-scale, which represents the visco-plastic behavior and changing properties of a highly anisotropic material, and a Finite Element (FE) code at the macro-scale to represent the elastic behavior and apply the loading. The framework developed takes advantage of the transparency provided by partitioned analysis, where independent constituent codes are coupled in an iterative manner. This transparency allows model developers to better understand and remedy the source of biases and uncertainties, whether they stem from the constituents or the coupling interface, by exploiting separate-effect experiments conducted within the constituent domain and integral-effect experiments conducted within the full-system domain. The project team has implemented this procedure with the multi-scale VPSC-FE model and demonstrated its ability to improve the predictive capability of the model. Within this framework, the project team has focused on optimizing resource allocation for improving numerical models through further code development and experimentation. Related to further code development, we have developed a code prioritization index (CPI) for coupled numerical models. CPI is implemented to effectively improve the predictive capability of the coupled model by increasing the sophistication of constituent codes. In relation to designing new experiments, we investigated the information gained by the addition of each new experiment used for calibration and bias correction of a simulation model. Additionally, the variability of ‘information gain’ through the design domain has been investigated in order to identify the experiment settings where maximum information gain occurs and thus guide the experimenters in the selection of the experiment settings. This idea was extended to evaluate how the information gain from each experiment can be improved by intelligently selecting the experiments, leading to the development of the Batch Sequential Design (BSD) technique. Additionally, we evaluated the importance of sufficiently exploring the domain of applicability in experiment-based validation of high-consequence modeling and simulation by developing a new metric to quantify coverage. This metric has also been incorporated into the design of new experiments. Finally, we have proposed a data-aware calibration approach for the calibration of numerical models.
This new method considers the complexity of a numerical model (the number of parameters to be calibrated, parameter uncertainty, and form of the model) and seeks to identify the number of experiments necessary to calibrate the model based on the level of sophistication of the physics. The final component in the project team's work to improve model calibration and validation methods is the incorporation of robustness to non-probabilistic uncertainty in the input parameters. This is an improvement to model validation and uncertainty quantification extending beyond the originally proposed scope of the project. We have introduced a new metric for incorporating the concept of robustness into experiment-based validation of numerical models. This project has accounted for the graduation of two Ph.D. students (Kendra Van Buren and Josh Hegenderfer) and two M.S. students (Matthew Egeberg and Parker Shields). One of the doctoral students is now working in the nuclear engineering field and the other one is a post-doctoral fellow at the Los Alamos National Laboratory. Additionally, two more Ph.D. students (Garrison Stevens and Tunc Kulaksiz) who are working towards graduation have been supported by this project.
Small intestinal model for electrically propelled capsule endoscopy
2011-01-01
The aim of this research is to propose a small intestine model for electrically propelled capsule endoscopy. The electrical stimulus can cause contraction of the small intestine and propel the capsule along the lumen. The proposed model considered the drag and friction from the small intestine using a thin-walled model and Stokes' drag equation. Further, the contraction force from the small intestine was modeled by using regression analysis. From the proposed model, the acceleration and velocity of capsules with various exterior shapes were calculated, and two exterior shapes of capsules were proposed based on the internal volume of the capsules. The proposed capsules were fabricated and animal experiments were conducted. One of the proposed capsules showed an average (SD) velocity of 2.91 ± 0.99 mm/s in the forward direction and 2.23 ± 0.78 mm/s in the backward direction, which was 5.2 times faster than that obtained in previous research. The proposed model can predict the locomotion of the capsule based on various exterior shapes of the capsule. PMID:22177218
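A simplified force-balance sketch in the spirit of the model described above is shown below: forward-Euler integration of a capsule driven by a propulsive force against a constant friction force and Stokes-type viscous drag. All parameter values and the force balance itself are illustrative assumptions, not the paper's measured values or contraction-force regression.

```python
# Hedged sketch: capsule velocity under propulsion, friction, and Stokes' drag.
import numpy as np

def simulate_capsule(F_prop, F_fric, mu, R, mass, t_end=2.0, dt=1e-3):
    """Return time and velocity arrays for a capsule of effective radius R [m]."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    v = np.zeros(n)
    for i in range(1, n):
        drag = 6.0 * np.pi * mu * R * v[i - 1]        # Stokes' drag
        accel = (F_prop - F_fric - drag) / mass        # net force / mass
        v[i] = max(0.0, v[i - 1] + accel * dt)         # no backward motion in this toy case
    return t, v

t, v = simulate_capsule(F_prop=5e-3, F_fric=3e-3, mu=1.0, R=5e-3, mass=4e-3)  # assumed values
print(f"terminal velocity ~ {v[-1]*1e3:.2f} mm/s")
```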
Causal Modeling the Delayed-Choice Experiment
NASA Astrophysics Data System (ADS)
Chaves, Rafael; Lemos, Gabriela Barreto; Pienaar, Jacques
2018-05-01
Wave-particle duality has become one of the flagships of quantum mechanics. This counterintuitive concept is highlighted in a delayed-choice experiment, where the experimental setup that reveals either the particle or wave nature of a quantum system is decided after the system has entered the apparatus. Here we consider delayed-choice experiments from the perspective of device-independent causal models and show their equivalence to a prepare-and-measure scenario. Within this framework, we consider Wheeler's original proposal and its variant using a quantum control and show that a simple classical causal model is capable of reproducing the quantum mechanical predictions. Nonetheless, among other results, we show that, in a slight variant of Wheeler's gedanken experiment, a photon in an interferometer can indeed generate statistics incompatible with any nonretrocausal hidden variable model, whose dimensionality is the same as that of the quantum system it is supposed to mimic. Our proposal tolerates arbitrary losses and inefficiencies, making it specially suited to loophole-free experimental implementations.
NASA Astrophysics Data System (ADS)
Mazurowski, Maciej A.; Zhang, Jing; Lo, Joseph Y.; Kuzmiak, Cherie M.; Ghate, Sujata V.; Yoon, Sora
2014-03-01
Providing high quality mammography education to radiology trainees is essential, as good interpretation skills potentially ensure the highest benefit of screening mammography for patients. We have previously proposed a computer-aided education system that utilizes trainee models, which relate human-assessed image characteristics to interpretation error. We proposed that these models be used to identify the most difficult and therefore the most educationally useful cases for each trainee. In this study, as a next step in our research, we propose to build trainee models that utilize features that are automatically extracted from images using computer vision algorithms. To predict error, we used a logistic regression which accepts imaging features as input and returns error as output. Reader data from 3 experts and 3 trainees were used. Receiver operating characteristic analysis was applied to evaluate the proposed trainee models. Our experiments showed that, for three trainees, our models were able to predict error better than chance. This is an important step in the development of adaptive computer-aided education systems since computer-extracted features will allow for faster and more extensive search of imaging databases in order to identify the most educationally beneficial cases.
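As a toy illustration of this modeling step (not the authors' data or code), the sketch below fits a logistic regression that maps computer-extracted image features to a binary trainee-error label and evaluates it with ROC analysis; the features and labels are synthetic stand-ins.

```python
# Hedged sketch: logistic regression from image features to trainee error, with ROC AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                                           # 5 image features per case
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=300) > 0).astype(int)    # 1 = trainee error (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}   (better than chance if > 0.5)")
```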
Segmenting Continuous Motions with Hidden Semi-markov Models and Gaussian Processes
Nakamura, Tomoaki; Nagai, Takayuki; Mochihashi, Daichi; Kobayashi, Ichiro; Asoh, Hideki; Kaneko, Masahide
2017-01-01
Humans divide perceived continuous information into segments to facilitate recognition. For example, humans can segment speech waves into recognizable morphemes. Analogously, continuous motions are segmented into recognizable unit actions. People can divide continuous information into segments without using explicit segment points. This capacity for unsupervised segmentation is also useful for robots, because it enables them to flexibly learn languages, gestures, and actions. In this paper, we propose a Gaussian process-hidden semi-Markov model (GP-HSMM) that can divide continuous time series data into segments in an unsupervised manner. Our proposed method consists of a generative model based on the hidden semi-Markov model (HSMM), the emission distributions of which are Gaussian processes (GPs). Continuous time series data is generated by connecting segments generated by the GP. Segmentation can be achieved by using forward filtering-backward sampling to estimate the model's parameters, including the lengths and classes of the segments. In an experiment using the CMU motion capture dataset, we tested GP-HSMM with motion capture data containing simple exercise motions; the results of this experiment showed that the proposed GP-HSMM was comparable with other methods. We also conducted an experiment using karate motion capture data, which is more complex than exercise motion capture data; in this experiment, the segmentation accuracy of GP-HSMM was 0.92, which outperformed other methods. PMID:29311889
Modeling and experiments of the adhesion force distribution between particles and a surface.
You, Siming; Wan, Man Pun
2014-06-17
Due to the existence of surface roughness in real surfaces, the adhesion force between particles and the surface where the particles are deposited exhibits certain statistical distributions. Despite the importance of adhesion force distribution in a variety of applications, the current understanding of modeling adhesion force distribution is still limited. In this work, an adhesion force distribution model based on integrating the root-mean-square (RMS) roughness distribution (i.e., the variation of RMS roughness on the surface in terms of location) into recently proposed mean adhesion force models was proposed. The integration was accomplished by statistical analysis and Monte Carlo simulation. A series of centrifuge experiments were conducted to measure the adhesion force distributions between polystyrene particles (146.1 ± 1.99 μm) and various substrates (stainless steel, aluminum and plastic, respectively). The proposed model was validated against the measured adhesion force distributions from this work and another previous study. Based on the proposed model, the effect of RMS roughness distribution on the adhesion force distribution of particles on a rough surface was explored, showing that both the median and standard deviation of adhesion force distribution could be affected by the RMS roughness distribution. The proposed model could predict both van der Waals force and capillary force distributions and consider the multiscale roughness feature, greatly extending the current capability of adhesion force distribution prediction.
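The Monte Carlo idea in this abstract can be sketched as follows. Everything below is an illustrative assumption rather than the paper's model: local RMS roughness is drawn from an assumed lognormal distribution and pushed through a Rabinovich-type roughness-modified van der Waals expression standing in for the mean adhesion force model, yielding a distribution of adhesion forces.

```python
# Hedged sketch: propagate an RMS-roughness distribution into an adhesion-force distribution.
import numpy as np

A_H = 1e-19       # Hamaker constant [J] (assumed)
R = 73e-6         # particle radius [m] (~146 um diameter polystyrene)
H0 = 0.3e-9       # minimum separation [m] (assumed)

def mean_adhesion_force(rms):
    """Roughness-modified van der Waals force for a given RMS roughness [m] (stand-in model)."""
    return (A_H * R / (6.0 * H0**2)) * (1.0 / (1.0 + R / (1.48 * rms))
                                        + 1.0 / (1.0 + 1.48 * rms / H0)**2)

rng = np.random.default_rng(1)
rms_samples = rng.lognormal(mean=np.log(100e-9), sigma=0.5, size=100_000)  # roughness varies over the surface
forces = mean_adhesion_force(rms_samples)

print(f"median adhesion force: {np.median(forces)*1e9:.1f} nN")
print(f"16th-84th percentile:  {np.percentile(forces, 16)*1e9:.1f} - {np.percentile(forces, 84)*1e9:.1f} nN")
```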
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babic, Miroslav; Kljenak, Ivo; Mavko, Borut
2006-07-01
The CFD code CFX4.4 was used to simulate an experiment in the ThAI facility, which was designed for investigation of thermal-hydraulic processes during a severe accident inside a Light Water Reactor containment. In the considered experiment, air was initially present in the vessel, and helium and steam were injected during different phases of the experiment at various mass flow rates and at different locations. The main purpose of the proposed work was to assess the capabilities of the CFD code to reproduce the atmosphere structure with a three-dimensional model, coupled with condensation models proposed by the authors. A three-dimensional model of the ThAI vessel for the CFX4.4 code was developed. The flow in the simulation domain was modeled as single-phase. Steam condensation on vessel walls was modeled as a sink of mass and energy using a correlation that was originally developed for an integral approach. A simple model of bulk phase change was also included. Calculated time-dependent variables together with temperature and volume fraction distributions at the end of different experiment phases are compared to experimental results. (authors)
A Didactic Experiment and Model of a Flat-Plate Solar Collector
ERIC Educational Resources Information Center
Gallitto, Aurelio Agliolo; Fiordilino, Emilio
2011-01-01
We report on an experiment performed with a home-made flat-plate solar collector, carried out together with high-school students. To explain the experimental results, we propose a model that describes the heating process of the solar collector. The model accounts quantitatively for the experimental data. We suggest that solar-energy topics should…
Energy model for rumor propagation on social networks
NASA Astrophysics Data System (ADS)
Han, Shuo; Zhuang, Fuzhen; He, Qing; Shi, Zhongzhi; Ao, Xiang
2014-01-01
With the development of social networks, the impact of rumor propagation on human lives is increasingly significant. Due to the change of propagation mode, traditional rumor propagation models designed for word-of-mouth processes may not be suitable for describing rumor spreading on social networks. To overcome this shortcoming, we carefully analyze the mechanisms of rumor propagation and the topological properties of large-scale social networks, then propose a novel model based on physical theory. In this model, a heat energy calculation formula and the Metropolis rule are introduced to formalize the problem, and the amount of heat energy is used to measure a rumor’s impact on a network. Finally, we conduct tracking experiments to show the evolution of rumor propagation, comparison experiments to contrast the proposed model with traditional models, and simulation experiments to study the dynamics of rumor spreading. The experiments show that (1) the rumor propagation simulated by our model goes through three stages: rapid growth, fluctuant persistence and slow decline; (2) individuals can spread a rumor repeatedly, which leads to the rumor’s resurgence; (3) rumor propagation is greatly influenced by a rumor’s attraction, the initial rumormonger and the sending probability.
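A toy sketch of a Metropolis-style acceptance rule for rumor transmission on a network is given below. The energy function, temperature, sending probability and graph are all illustrative assumptions and do not reproduce the paper's heat energy formula.

```python
# Hedged sketch: Metropolis-style acceptance of rumor transmission on a scale-free graph.
import math
import random
import networkx as nx

def rumor_energy(node, infected, G):
    """Toy 'heat energy': fraction of a node's neighbours that already hold the rumor."""
    nbrs = list(G.neighbors(node))
    return sum(n in infected for n in nbrs) / max(1, len(nbrs))

def metropolis_spread(G, seed, steps=10_000, T=0.5, p_send=0.3):
    infected = {seed}
    for _ in range(steps):
        sender = random.choice(list(infected))
        if random.random() > p_send:
            continue                                   # sender chooses not to transmit this step
        target = random.choice(list(G.neighbors(sender)))
        dE = rumor_energy(target, infected, G) - rumor_energy(sender, infected, G)
        # Metropolis rule: accept "downhill" moves always, "uphill" moves with prob exp(-dE/T)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            infected.add(target)
    return infected

G = nx.barabasi_albert_graph(2000, 3, seed=0)          # scale-free stand-in for a social network
print(len(metropolis_spread(G, seed=0)), "nodes reached")
```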
Modeling and characterization of supercapacitors for wireless sensor network applications
NASA Astrophysics Data System (ADS)
Zhang, Ying; Yang, Hengzhao
A simple circuit model is developed to describe supercapacitor behavior, which uses two resistor-capacitor branches with different time constants to characterize the charging and redistribution processes, and a variable leakage resistance to characterize the self-discharge process. The parameter values of a supercapacitor can be determined by a charging-redistribution experiment and a self-discharge experiment. The modeling and characterization procedures are illustrated using a 22F supercapacitor. The accuracy of the model is compared with that of other models often used in power electronics applications. The results show that the proposed model has better accuracy in characterizing the self-discharge process while maintaining similar performance as other models during charging and redistribution processes. Additionally, the proposed model is evaluated in a simplified energy storage system for self-powered wireless sensors. The model performance is compared with that of a commonly used energy recursive equation (ERE) model. The results demonstrate that the proposed model can predict the evolution profile of voltage across the supercapacitor more accurately than the ERE model, and therefore provides a better alternative for supporting research on storage system design and power management for wireless sensor networks.
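The circuit structure described above can be sketched with a forward-Euler simulation of two parallel resistor-capacitor branches plus a leakage path. Component values are illustrative assumptions, and the leakage resistance is held constant here even though the paper's model makes it variable.

```python
# Hedged sketch: two-branch RC supercapacitor model with a (constant) leakage resistance.
import numpy as np

def simulate(I_charge, t_charge, t_total, dt=0.01,
             R1=0.05, C1=20.0, R2=5.0, C2=2.0, R_leak=5e3):   # assumed component values
    n = int(t_total / dt)
    v1 = np.zeros(n)   # fast (immediate) branch capacitor voltage
    v2 = np.zeros(n)   # slow (redistribution) branch capacitor voltage
    vt = np.zeros(n)   # terminal voltage
    for k in range(1, n):
        i_src = I_charge if k * dt < t_charge else 0.0         # constant-current charge, then rest
        # Kirchhoff's current law: i_src = (vt - v1)/R1 + (vt - v2)/R2 + vt/R_leak
        vt[k] = (i_src + v1[k-1] / R1 + v2[k-1] / R2) / (1/R1 + 1/R2 + 1/R_leak)
        v1[k] = v1[k-1] + dt * (vt[k] - v1[k-1]) / (R1 * C1)
        v2[k] = v2[k-1] + dt * (vt[k] - v2[k-1]) / (R2 * C2)
    return np.arange(n) * dt, vt

t, v = simulate(I_charge=1.0, t_charge=30.0, t_total=300.0)
print(f"peak terminal voltage: {v.max():.2f} V, after 270 s rest: {v[-1]:.2f} V")
```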
Bayesian model calibration of ramp compression experiments on Z
NASA Astrophysics Data System (ADS)
Brown, Justin; Hund, Lauren
2017-06-01
Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantify dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, and animal studies, are necessary.
Multi-modal gesture recognition using integrated model of motion, audio and video
NASA Astrophysics Data System (ADS)
Goutsu, Yusuke; Kobayashi, Takaki; Obara, Junya; Kusajima, Ikuo; Takeichi, Kazunari; Takano, Wataru; Nakamura, Yoshihiko
2015-07-01
Gesture recognition is used in many practical applications such as human-robot interaction, medical rehabilitation and sign language. With the increasing development of motion sensors, multiple data sources have become available, which leads to the rise of multi-modal gesture recognition. Since our previous approach to gesture recognition depends on a unimodal system, it is difficult to classify similar motion patterns. In order to solve this problem, a novel approach which integrates motion, audio and video models is proposed, using a dataset captured by Kinect. The proposed system can recognize observed gestures by using three models. The recognition results of the three models are integrated by using the proposed framework, and the output becomes the final result. The motion and audio models are learned by using Hidden Markov Models. Random Forest, which is the video classifier, is used to learn the video model. In the experiments to test the performance of the proposed system, the motion and audio models most suitable for gesture recognition are chosen by varying feature vectors and learning methods. Additionally, the unimodal and multi-modal models are compared with respect to recognition accuracy. All the experiments are conducted on the dataset provided by the competition organizer of MMGRC, which is a workshop for the Multi-Modal Gesture Recognition Challenge. The comparison results show that the multi-modal model composed of three models scores the highest recognition rate. This improvement of recognition accuracy means that the complementary relationship among the three models improves the accuracy of gesture recognition. The proposed system provides the application technology to understand human actions of daily life more precisely.
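The integration step can be illustrated with a simple weighted late fusion of per-class scores from the three models. The class labels, scores and weights below are placeholders, and the fusion rule is an assumption; the abstract's actual classifiers are HMMs for motion and audio and a Random Forest for video.

```python
# Hedged sketch: weighted late fusion of per-class scores from three modality models.
def late_fusion(score_dicts, weights):
    """Each score_dict maps class label -> normalized score/probability."""
    classes = sorted(score_dicts[0])
    fused = {c: sum(w * s[c] for w, s in zip(weights, score_dicts)) for c in classes}
    return max(fused, key=fused.get), fused

motion = {"wave": 0.6, "point": 0.3, "clap": 0.1}   # e.g. normalized HMM likelihoods (placeholder)
audio  = {"wave": 0.2, "point": 0.5, "clap": 0.3}
video  = {"wave": 0.5, "point": 0.4, "clap": 0.1}   # e.g. Random Forest class probabilities

label, fused = late_fusion([motion, audio, video], weights=[0.4, 0.2, 0.4])
print(label, fused)
```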
Dynamic model of target charging by short laser pulse interactions
NASA Astrophysics Data System (ADS)
Poyé, A.; Dubois, J.-L.; Lubrano-Lavaderci, F.; D'Humières, E.; Bardon, M.; Hulin, S.; Bailly-Grandvaux, M.; Ribolzi, J.; Raffestin, D.; Santos, J. J.; Nicolaï, Ph.; Tikhonchuk, V.
2015-10-01
A model providing an accurate estimate of the charge accumulation on the surface of a metallic target irradiated by a high-intensity laser pulse of fs-ps duration is proposed. The model is confirmed by detailed comparisons with specially designed experiments. Such a model is useful for understanding the electromagnetic pulse emission and the quasistatic magnetic field generation in laser-plasma interaction experiments.
Malik, Sarah A.; McCabe, Christopher; Araujo, Henrique; ...
2015-05-18
In our White Paper we present and discuss a concrete proposal for the consistent interpretation of Dark Matter searches at colliders and in direct detection experiments. Furthermore, based on a specific implementation of simplified models of vector and axial-vector mediator exchanges, this proposal demonstrates how the two search strategies can be compared on an equal footing.
Analyzing gene expression time-courses based on multi-resolution shape mixture model.
Li, Ying; He, Ye; Zhang, Yu
2016-11-01
Biological processes are dynamic molecular processes that unfold over time. Time-course gene expression experiments provide opportunities to explore patterns of gene expression change over time and to understand the dynamic behavior of gene expression, which is crucial for studying the development and progression of biology and disease. Analysis of gene expression time-course profiles has not been fully exploited so far and remains a challenging problem. We propose a novel shape-based mixture model clustering method for gene expression time-course profiles to explore significant gene groups. Based on multi-resolution fractal features and a mixture clustering model, we propose a multi-resolution shape mixture model algorithm. The multi-resolution fractal features are computed by wavelet decomposition, which captures patterns of change in gene expression over time at different resolutions. Our proposed multi-resolution shape mixture model algorithm is a probabilistic framework which offers a more natural and robust way of clustering time-course gene expression. We assessed the performance of our proposed algorithm using yeast time-course gene expression profiles, compared with several popular clustering methods for gene expression profiles. The grouped genes identified by the different methods are evaluated by enrichment analysis of biological pathways and known protein-protein interactions from experimental evidence. The grouped genes identified by our proposed algorithm have stronger biological significance. A novel multi-resolution shape mixture model algorithm based on multi-resolution fractal features is proposed. Our proposed model provides new horizons and an alternative tool for visualization and analysis of time-course gene expression profiles. The R and Matlab programs are available upon request. Copyright © 2016 Elsevier Inc. All rights reserved.
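A generic sketch of the two ingredients named above, wavelet-based multi-resolution features followed by mixture-model clustering, is given below with synthetic profiles. It is an assumed illustration, not the authors' shape-based mixture model; the wavelet, feature summaries, and number of components are placeholders.

```python
# Hedged sketch: wavelet multi-resolution features + Gaussian mixture clustering of time courses.
import numpy as np
import pywt
from sklearn.mixture import GaussianMixture

def wavelet_features(profile, wavelet="db1", level=3):
    """Concatenate summary statistics of the wavelet coefficients at each resolution."""
    coeffs = pywt.wavedec(profile, wavelet, level=level)
    return np.concatenate([[c.mean(), c.std()] for c in coeffs])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 32)                            # 32 time points per gene (synthetic)
profiles = np.vstack([np.sin(2 * np.pi * (k % 3 + 1) * t) + 0.2 * rng.normal(size=32)
                      for k in range(300)])          # 300 synthetic genes, 3 shape groups

X = np.array([wavelet_features(p) for p in profiles])
labels = GaussianMixture(n_components=3, random_state=0).fit_predict(X)
print(np.bincount(labels))
```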
NASA Astrophysics Data System (ADS)
Watanabe, S.; Kim, H.; Utsumi, N.
2017-12-01
This study aims to develop a new approach that projects hydrology under climate change using super-ensemble experiments. The use of multiple ensemble members is essential for estimating extremes, which is a major issue in the impact assessment of climate change. Hence, super-ensemble experiments have recently been conducted by several research programs. While it is necessary to use multiple ensemble members, running a hydrological simulation for each output of the ensemble simulations entails considerable computational cost. To use super-ensemble experiments effectively, we adopt a strategy of using the runoff projected by climate models directly. The general approach to hydrological projection is to run hydrological model simulations, which include land-surface and river-routing processes, using atmospheric boundary conditions projected by climate models as inputs. This study, on the other hand, simulates only a river routing model using the runoff projected by climate models. In general, climate model output is systematically biased, so a preprocessing step that corrects such bias is necessary for impact assessments. Various bias correction methods have been proposed but, to the best of our knowledge, no method has been proposed for variables other than surface meteorology. Here, we newly propose a method for utilizing the projected future runoff directly. The developed method estimates and corrects the bias based on a pseudo-observation, which is the result of a retrospective offline simulation. We show an application of this approach to the super-ensemble experiments conducted under the program Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI). More than 400 ensemble experiments from multiple climate models are available. The results of validation using the historical simulations by HAPPI indicate that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super-ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.
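As a stand-in for the bias-correction step described above (the exact method is not specified in this abstract), the sketch below applies empirical quantile mapping of simulated runoff against a pseudo-observation from a retrospective offline simulation. All series are synthetic and the quantile-mapping choice is an assumption.

```python
# Hedged sketch: empirical quantile mapping of climate-model runoff to a pseudo-observation.
import numpy as np

def quantile_map(sim_hist, pseudo_obs, sim_future):
    """Map each future simulated value through historical-simulation -> pseudo-observation quantiles."""
    q = np.linspace(0, 1, 101)
    sim_q = np.quantile(sim_hist, q)
    obs_q = np.quantile(pseudo_obs, q)
    ranks = np.interp(sim_future, sim_q, q)      # empirical CDF of the historical simulation
    return np.interp(ranks, q, obs_q)            # inverse CDF of the pseudo-observation

rng = np.random.default_rng(0)
pseudo_obs = rng.gamma(2.0, 2.0, size=3650)       # retrospective offline-simulation runoff (synthetic)
sim_hist   = rng.gamma(2.0, 3.0, size=3650)       # biased historical climate-model runoff (synthetic)
sim_future = rng.gamma(2.2, 3.0, size=3650)       # one future ensemble member (synthetic)

corrected = quantile_map(sim_hist, pseudo_obs, sim_future)
print(f"raw future mean {sim_future.mean():.2f}  corrected mean {corrected.mean():.2f}")
```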
Working memory for braille is shaped by experience.
Cohen, Henri; Scherzer, Peter; Viau, Robert; Voss, Patrice; Lepore, Franco
2011-03-01
Tactile working memory was found to be more developed in completely blind (congenital and acquired) than in semi-sighted subjects, indicating that experience plays a crucial role in shaping working memory. A model of working memory, adapted from the classical model proposed by Baddeley and Hitch [1] and Baddeley [2], is presented where the connection strengths of a highly cross-modal network are altered through experience.
An exploration for research-oriented teaching model in biology teaching.
Xing, Wanjin; Mo, Morigen; Su, Huimin
2014-07-01
Training innovative talents, one of the major aims of Chinese universities, requires reform of traditional teaching methods. The research-oriented teaching method has been introduced, and its connotation and significance for Chinese university teaching have been discussed for years. However, few practical teaching methods for routine class teaching have been proposed. In this paper, a comprehensive and concrete research-oriented teaching model for class teaching, with reference contents and an evaluation method, is proposed based on the current teacher-guided teaching model in China. We propose that the research-oriented teaching model should include at least seven aspects: (1) telling the scientific history to develop the skills to identify scientific questions; (2) replaying the experiments to develop the skills to solve scientific problems; (3) analyzing experimental data to learn how to draw a conclusion; (4) designing virtual experiments to learn how to construct a proposal; (5) teaching the lesson as detectives solve a crime to learn the logic of scientific exploration; (6) guiding students in how to read and consult the relevant references; and (7) teaching students differently according to their aptitude and learning ability. In addition, we also discuss how to evaluate the effects of the research-oriented teaching model in examinations.
Novel Real-Time Facial Wound Recovery Synthesis Using Subsurface Scattering
Chin, Seongah
2014-01-01
We propose a wound recovery synthesis model that illustrates the appearance of a wound healing on a 3-dimensional (3D) face. The H3 model is used to determine the size of the recovering wound. Furthermore, we present our subsurface scattering model that is designed to take the multilayered skin structure of the wound into consideration to represent its color transformation. We also propose a novel real-time rendering method based on the results of an analysis of the characteristics of translucent materials. Finally, we validate the proposed methods with 3D wound-simulation experiments using shading models. PMID:25197721
Background studies for the MINER Coherent Neutrino Scattering reactor experiment
NASA Astrophysics Data System (ADS)
Agnolet, G.; Baker, W.; Barker, D.; Beck, R.; Carroll, T. J.; Cesar, J.; Cushman, P.; Dent, J. B.; De Rijck, S.; Dutta, B.; Flanagan, W.; Fritts, M.; Gao, Y.; Harris, H. R.; Hays, C. C.; Iyer, V.; Jastram, A.; Kadribasic, F.; Kennedy, A.; Kubik, A.; Lang, K.; Mahapatra, R.; Mandic, V.; Marianno, C.; Martin, R. D.; Mast, N.; McDeavitt, S.; Mirabolfathi, N.; Mohanty, B.; Nakajima, K.; Newhouse, J.; Newstead, J. L.; Ogawa, I.; Phan, D.; Proga, M.; Rajput, A.; Roberts, A.; Rogachev, G.; Salazar, R.; Sander, J.; Senapati, K.; Shimada, M.; Soubasis, B.; Strigari, L.; Tamagawa, Y.; Teizer, W.; Vermaak, J. I. C.; Villano, A. N.; Walker, J.; Webb, B.; Wetzel, Z.; Yadavalli, S. A.
2017-05-01
The proposed Mitchell Institute Neutrino Experiment at Reactor (MINER) experiment at the Nuclear Science Center at Texas A&M University will search for coherent elastic neutrino-nucleus scattering within close proximity (about 2 m) of a 1 MW TRIGA nuclear reactor core using low threshold, cryogenic germanium and silicon detectors. Given the Standard Model cross section of the scattering process and the proposed experimental proximity to the reactor, as many as 5-20 events/kg/day are expected. We discuss the status of preliminary measurements to characterize the main backgrounds for the proposed experiment. Both in situ measurements at the experimental site and simulations using the MCNP and GEANT4 codes are described. A strategy for monitoring backgrounds during data taking is briefly discussed.
An integrated model for adolescent inpatient group therapy.
Garrick, D; Ewashen, C
2001-04-01
This paper proposes an integrated group therapy model to be utilized by psychiatric and mental health nurses, one innovatively designed to meet the therapeutic needs of adolescents admitted to inpatient psychiatric programs. The writers suggest a model of group therapy composed primarily of interpersonal approaches within a feminist perspective. The proposed group focus is on active therapeutic engagement with adolescents to further interpersonal learning and to critically examine their contextualized lived experiences. Specific client and setting factors relevant to the selection of therapeutic techniques are reviewed, and selected theoretical models of group therapy are critiqued in relation to group therapy with adolescents. This integrated model of group therapy provides a safe and therapeutic forum that enriches clients' personal and interpersonal experiences and promotes healthy exploration, change, and empowerment.
Jerath, Ravinder; Crawford, Molly W.; Barnes, Vernon A.
2015-01-01
The Global Workspace Theory and the Information Integration Theory are two of the most widely accepted models of consciousness; however, these models do not address many aspects of conscious experience. We compare these models to our previously proposed consciousness model, in which the thalamus fills in processed sensory information from corticothalamic feedback loops within a proposed 3D default space, resulting in the recreation of the internal and external worlds within the mind. This 3D default space is composed of all cells of the body, which communicate via gap junctions and electrical potentials to create this unified space. We use 3D illustrations to explain how both visual and non-visual sensory information may be filled in within this dynamic space, creating a unified, seamless conscious experience. This neural sensory memory space is likely generated by baseline neural oscillatory activity from the default mode network, other salience networks, the brainstem, and the reticular activating system. PMID:26379573
Contact analysis and experimental investigation of a linear ultrasonic motor.
Lv, Qibao; Yao, Zhiyuan; Li, Xiang
2017-11-01
The effects of surface roughness are not considered in the traditional motor model, which therefore fails to reflect the actual contact mechanism between the stator and slider. An analytical model for calculating the tangential force of a linear ultrasonic motor is proposed in this article. The presented model differs from the previous spring contact model in that the asperities in contact between the stator and slider are considered. The influences of preload and exciting voltage on the tangential force in the moving direction are analyzed. An experiment is performed to verify the feasibility of the proposed model by comparing simulation results with measured data. Moreover, the proposed model and the spring model are compared, and the results reveal that the proposed model is more accurate. The discussion is helpful for the design and modeling of linear ultrasonic motors. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cheng, Jun; Gong, Yadong; Wang, Jinsheng
2013-11-01
Current research on micro-grinding mainly focuses on the optimal processing technology for different materials, yet the material removal mechanism in micro-grinding is the basis for achieving a high-quality machined surface. A novel method is therefore proposed for predicting surface roughness in micro-grinding of hard brittle materials that accounts for the grain protrusion topography of the micro-grinding tool. The differences in material removal mechanism between conventional grinding and micro-grinding are analyzed. The topography of micro-grinding tools fabricated by electroplating is characterized, models of grain density and grain interval are built, and a new model for predicting micro-ground surface roughness is developed. To verify the precision and applicability of the proposed prediction model, an orthogonal micro-grinding experiment on soda-lime glass is designed and conducted, producing micro-machined surfaces of this brittle material with roughness from 78 nm to 0.98 μm. The experimental roughness results agree closely with the predicted values, and the model parameter describing size effects is determined to be 1.5×10^7 by an inverse method based on the experimental results. The proposed model uses a distribution that accounts for grain densities at different protrusion heights, and the micro-grinding tools used in the experiment are characterized on the basis of this distribution. The close agreement between the predictions of the proposed model and the experimental measurements demonstrates the effectiveness of the model, which provides a theoretical and experimental reference for studying the material removal mechanism in micro-grinding of soda-lime glass.
NASA Astrophysics Data System (ADS)
Moslemipour, Ghorbanali
2018-07-01
This paper proposes a quadratic assignment-based mathematical model for the stochastic dynamic facility layout problem, in which product demands are assumed to be dependent, normally distributed random variables with known probability density functions and covariances that change randomly from period to period. To solve the proposed model, a novel hybrid intelligent algorithm is proposed that combines simulated annealing and clonal selection. The proposed model and the hybrid algorithm are verified and validated using design of experiments and benchmark methods. The results show that the hybrid algorithm performs outstandingly in terms of both solution quality and computational time. In addition, the proposed model can be used in both stochastic and deterministic situations.
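The hybrid algorithm couples simulated annealing with clonal selection; the abstract gives no implementation detail, so the following is only a bare simulated-annealing skeleton for a quadratic-assignment-type layout cost (flow times distance). The tiny instance, cooling schedule, and parameter values are illustrative, and the clonal-selection component is omitted:

```python
import math
import random

def qap_cost(perm, flow, dist):
    """Quadratic-assignment cost: sum of flow[i][j] * dist[perm[i]][perm[j]]."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def simulated_annealing(flow, dist, t0=100.0, cooling=0.95, iters=2000, seed=1):
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    best = perm[:]
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(n), 2)
        cand = perm[:]
        cand[i], cand[j] = cand[j], cand[i]        # swap two facilities
        delta = qap_cost(cand, flow, dist) - qap_cost(perm, flow, dist)
        if delta < 0 or rng.random() < math.exp(-delta / t):
            perm = cand
            if qap_cost(perm, flow, dist) < qap_cost(best, flow, dist):
                best = perm[:]
        t *= cooling
    return best, qap_cost(best, flow, dist)

# Illustrative 4-facility / 4-location instance
flow = [[0, 3, 0, 2], [3, 0, 0, 1], [0, 0, 0, 4], [2, 1, 4, 0]]
dist = [[0, 1, 2, 3], [1, 0, 1, 2], [2, 1, 0, 1], [3, 2, 1, 0]]
print(simulated_annealing(flow, dist))
```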
Reduced-order modeling approach for frictional stick-slip behaviors of joint interface
NASA Astrophysics Data System (ADS)
Wang, Dong; Xu, Chao; Fan, Xuanhua; Wan, Qiang
2018-03-01
The complex frictional stick-slip behavior of mechanical joint interfaces has a great effect on the dynamic properties of assembled structures. In this paper, a reduced-order modeling approach based on the constitutive Iwan model is proposed to describe the stick-slip behavior of a joint interface. An improved Iwan model is developed to describe the non-zero residual stiffness in the macro-slip regime, the smooth transition of joint stiffness from the micro-slip to the macro-slip regime, and the power-law relationship of energy dissipation in the micro-slip regime. To capture these nonlinear behaviors, the finite element method is used to calculate the restoring force under monotonic loading and the energy dissipation per cycle under oscillatory loading. The proposed model is then used to predict the nonlinear stick-slip behavior of the joint interface by curve-fitting to the finite element results, and the predictions show good agreement with the finite element analysis. A comparison with experimental results from the literature is also made, and the proposed model agrees very well with the experiments.
Zilinskas, Julius; Lančinskas, Algirdas; Guarracino, Mario Rosario
2014-01-01
In this paper we propose mathematical models to plan a Next Generation Sequencing (NGS) experiment to detect rare mutations in pools of patients. A mathematical optimization problem is formulated for optimal pooling with respect to minimization of the experiment cost. Two strategies for replicating patients across pools are then proposed, which have the advantage of decreasing overall costs. Finally, a multi-objective optimization formulation is proposed in which the trade-off between the probability of detecting a mutation and the overall cost is taken into account. The proposed solutions are devised to offer the following advantages: (i) mutations are guaranteed to be detectable in the experimental setting, and (ii) the cost of the NGS experiment and its biological validation using Sanger sequencing is minimized. Simulations show that replicating pools can decrease overall experimental cost, making pooling an interesting option.
A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.
Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio
2017-11-01
Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
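The generative model described in the abstract (Gaussian white noise whose variance follows an inverse gamma distribution) can be sketched as follows. The estimation step shown here is simple moment matching on per-window signal power, not the marginal-likelihood maximization the paper uses, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generative model: variance ~ InvGamma(alpha, beta); EMG | variance ~ N(0, variance)
alpha_true, beta_true = 4.0, 3.0
n_windows, samples_per_window = 500, 200
variances = 1.0 / rng.gamma(alpha_true, 1.0 / beta_true, n_windows)  # inverse-gamma draws
emg = rng.normal(0.0, np.sqrt(variances)[:, None], (n_windows, samples_per_window))

# Crude estimate: per-window power as a stand-in for the rectified & smoothed signal,
# then moment matching of the inverse-gamma parameters (mean and variance formulas
# hold for alpha > 2).
window_var = np.mean(emg ** 2, axis=1)
m, v = window_var.mean(), window_var.var()
alpha_hat = m ** 2 / v + 2.0
beta_hat = m * (alpha_hat - 1.0)
print(f"alpha ~ {alpha_hat:.2f} (true {alpha_true}), beta ~ {beta_hat:.2f} (true {beta_true})")
```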
A Geophysical Inversion Model Enhancement Technique Based on the Blind Deconvolution
NASA Astrophysics Data System (ADS)
Zuo, B.; Hu, X.; Li, H.
2011-12-01
A model-enhancement technique is proposed to sharpen the edges and details of geophysical inversion models without introducing any additional information. First, the theoretical correctness of the proposed technique is discussed: a method that approximates the point spread function (PSF) by convolution with the inversion model resolution matrix (MRM) is designed to demonstrate the correctness of the deconvolution enhancement. A total-variation regularized blind-deconvolution enhancement algorithm for geophysical inversion models is then proposed. In previous research, Oldenburg et al. demonstrated the connection between the PSF and the geophysical inverse solution, and Alumbaugh et al. proposed that more information could be extracted from the PSF by treating it as an averaging or low-pass filter. We likewise treat the PSF as a low-pass filter and enhance the inversion model on the basis of the PSF convolution approximation. Both a 1D linear inversion example and a 2D magnetotelluric inversion example are used to assess the validity of the theory and the algorithm. For the 1D linear inversion problem, used to test the proposed PSF convolution approximation, the relative approximation error is only 0.15%. A 2D synthetic model enhancement experiment is also presented: after deconvolution enhancement, the edges of the conductive prism and the resistive host become sharper, the enhanced result is closer to the actual model than the original inversion model according to the statistical analysis, artifacts in the inversion model are suppressed, and the overall model precision increases by 75%. All experiments show that the structural details and numerical precision of the inversion model are significantly improved, especially in the anomalous region. The correlation coefficients between the enhanced inversion model and the actual model (Fig. 1) illustrate that more information and finer structural details of the actual model are recovered by the proposed enhancement algorithm. The proposed enhancement method can provide clearer insight into inversion results and support better-informed decisions.
Search for Hidden Particles (SHiP): a new experiment proposal
NASA Astrophysics Data System (ADS)
De Lellis, G.
2015-06-01
Searches for new physics with accelerators are being performed at the LHC, looking for high-mass particles coupled to matter with ordinary strength. We propose a new experimental facility to search for very weakly coupled particles in the few-GeV mass domain. The existence of such particles, foreseen in various theoretical models beyond the Standard Model, is largely unexplored experimentally. A beam dump facility built at CERN in the North Area, using 400 GeV protons, would be a copious factory of charmed hadrons and could be used to probe the existence of such particles. The beam dump is also an ideal source of tau neutrinos, the least known particle in the Standard Model; in particular, tau anti-neutrinos have not been observed so far. We therefore propose an experiment to search for hidden particles and to study tau neutrino physics at the same time.
Manipulators with flexible links: A simple model and experiments
NASA Technical Reports Server (NTRS)
Shimoyama, Isao; Oppenheim, Irving J.
1989-01-01
A simple dynamic model proposed for flexible links is briefly reviewed and experimental control results are presented for different flexible systems. A simple dynamic model is useful for rapid prototyping of manipulators and their control systems, for possible application to manipulator design decisions, and for real time computation as might be applied in model based or feedforward control. Such a model is proposed, with the further advantage that clear physical arguments and explanations can be associated with its simplifying features and with its resulting analytical properties. The model is mathematically equivalent to Rayleigh's method. Taking the example of planar bending, the approach originates in its choice of two amplitude variables, typically chosen as the link end rotations referenced to the chord (or the tangent) motion of the link. This particular choice is key in establishing the advantageous features of the model, and it was used to support the series of experiments reported.
NASA Technical Reports Server (NTRS)
Kohl, R. E.
1973-01-01
The effectiveness of various vortex dissipation devices proposed for installation on or near aircraft runways is evaluated on the basis of experiments conducted with a 0.03-scale model of a Boeing 747 transport aircraft in conjunction with a simulated runway. The test variables included the type of vortex dissipation device, the mode of operation of the powered devices, and the altitude, lift coefficient, and speed of the generating aircraft. A total of fifteen devices was investigated. The evaluation is based on time-sequence photographs taken in the vertical and horizontal planes during each run.
Working memory for braille is shaped by experience
Scherzer, Peter; Viau, Robert; Voss, Patrice; Lepore, Franco
2011-01-01
Tactile working memory was found to be more developed in completely blind (congenital and acquired) than in semi-sighted subjects, indicating that experience plays a crucial role in shaping working memory. A model of working memory, adapted from the classical model proposed by Baddeley and Hitch [1] and Baddeley [2], is presented where the connection strengths of a highly cross-modal network are altered through experience. PMID:21655448
RACEWAY REACTOR FOR MICROALGAL BIODIESEL PRODUCTION
The proposed mathematical model incorporating mass transfer, hydraulics, carbonate/aquatic chemistry, biokinetics, biology and reactor design will be calibrated and validated using the data to be generated from the experiments. The practical feasibility of the proposed reactor...
Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai
2015-01-01
To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
A game theory-based trust measurement model for social networks.
Wang, Yingjie; Cai, Zhipeng; Yin, Guisheng; Gao, Yang; Tong, Xiangrong; Han, Qilong
2016-01-01
In online social networks, trust is a complex relationship among participants, who want to share information and experiences with as many reliable users as possible. However, the modeling of trust is complicated and application dependent; it needs to consider interaction history, recommendations, user behaviors, and so on. Modeling trust is therefore an important focus for online social networks. We propose a game theory-based trust measurement model for social networks in which the trust degree is calculated from three aspects (service reliability, feedback effectiveness, and recommendation credibility) to obtain a more accurate result. In addition, to alleviate the free-riding problem, we propose a game theory-based punishment mechanism for specific trust and global trust, respectively. We prove that the proposed trust measurement model is effective and that the free-riding problem can be resolved effectively by adding the proposed punishment mechanism.
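The abstract names the three components of the trust degree but not the combination rule. A minimal weighted-aggregation sketch, with purely illustrative weights and without the game-theoretic punishment mechanism, might be:

```python
def trust_degree(service_reliability, feedback_effectiveness,
                 recommendation_credibility, weights=(0.4, 0.3, 0.3)):
    """Combine the three components into a single trust degree in [0, 1].

    The weighting scheme is illustrative; the paper derives its own
    combination and a game-theoretic punishment mechanism on top of it.
    """
    components = (service_reliability, feedback_effectiveness,
                  recommendation_credibility)
    return sum(w * c for w, c in zip(weights, components))

print(trust_degree(0.9, 0.7, 0.8))  # -> 0.81
```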
Research on the water-entry attitude of a submersible aircraft.
Xu, BaoWei; Li, YongLi; Feng, JinFu; Hu, JunHua; Qi, Duo; Yang, Jian
2016-01-01
The water entry of a submersible aircraft is transient, highly coupled, nonlinear, and complicated. After analyzing the mechanics of this process, the rates of change of the relevant variables are considered, and a dynamic model is built and employed to study vehicle attitude and the overturn phenomenon during water entry. Experiments are carried out and a method for organizing the experimental data is proposed. The accuracy of the method is confirmed by comparing simulations of the dynamic model with experiments under the same conditions. The analysis of the experiments and simulations shows that the initial attack angle and angular velocity largely influence the water entry of the vehicle. Simulations of water entry with different initial velocities and angular velocities are completed and analyzed, and the motion law of the vehicle is obtained. To address vehicle stability and control during water entry, an approach is proposed in which the vehicle sails with a zero attack angle after entering the water by controlling the initial angular velocity. Using the dynamic model and an optimization algorithm, the optimal initial water-entry angular velocity is obtained. The simulation results confirm the effectiveness of the proposed approach in which the initial water-entry angular velocity is controlled.
NASA Astrophysics Data System (ADS)
Huang, Zhaohui; Huang, Xiemin
2018-04-01
This paper first introduces the trend toward integrating multi-channel interactions in automotive HMI (Human Machine Interface), starting from the complex information models faced by existing automotive HMI, and describes the various interaction modes. By comparing voice interaction with touch screens, gestures, and other interaction modes, the potential and feasibility of voice interaction in automotive HMI experience design are established. The related theories of voice interaction, recognition technologies, human cognitive models of voice, and voice design methods are then explored further, and the research priority of this paper is proposed: how to design voice interaction that creates more humane task-oriented dialogue scenarios to enhance the interactive experience of automotive HMI. The specific driving scenarios suitable for the use of voice interaction are studied and classified, and usability principles and key elements for automotive HMI voice design are proposed according to the scenario features. Then, through a user-participatory usability testing experiment, the dialogue processes of voice interaction in automotive HMI are defined. The logic and grammar of the voice interactions are classified according to the experimental results, and the mental models in the interaction processes are analyzed. Finally, a voice interaction design method for creating humane task-oriented dialogue scenarios in the driving environment is proposed.
Rapid performance modeling and parameter regression of geodynamic models
NASA Astrophysics Data System (ADS)
Brown, J.; Duplyakin, D.
2016-12-01
Geodynamic models run in a parallel environment have many parameters with complicated effects on performance and on scientifically relevant functionals. Manually choosing an efficient machine configuration and mapping out the parameter space requires a great deal of expert knowledge and time-consuming experiments. We propose an active learning technique based on Gaussian process regression to automatically select experiments that map out the performance landscape with respect to scientific and machine parameters. The resulting performance model is then used to select optimal experiments for improving the accuracy of a reduced-order model per unit of computational cost. We present the framework and evaluate its quality and capability using popular lithospheric dynamics models.
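A minimal active-learning loop with scikit-learn, assuming a toy one-dimensional "performance landscape" and plain uncertainty sampling (the cost-aware criterion described in the abstract is not reproduced), could look like:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Illustrative "performance landscape" over one machine parameter
def runtime(x):
    return np.sin(3 * x) + 0.1 * rng.normal(size=np.shape(x))

X = rng.uniform(0, 2, (5, 1))                 # initial experiments
y = runtime(X).ravel()
candidates = np.linspace(0, 2, 200).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
for _ in range(10):                           # active-learning loop
    gp.fit(X, y)
    _, std = gp.predict(candidates, return_std=True)
    x_next = candidates[np.argmax(std)].reshape(1, -1)   # most uncertain point
    X = np.vstack([X, x_next])
    y = np.append(y, runtime(x_next).ravel())
```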
Threshold flux-controlled memristor model and its equivalent circuit implementation
NASA Astrophysics Data System (ADS)
Wu, Hua-Gan; Bao, Bo-Cheng; Chen, Mo
2014-11-01
Modeling a memristor is an effective way to explore memristor properties, because memristor devices are still not commercially available to most researchers. In this paper, a physical memristive device is assumed to exist whose ionic drift direction is perpendicular to the direction of the applied voltage; based on this assumption, and corresponding to the HP charge-controlled memristor model, a novel threshold flux-controlled memristor model with a window function is proposed. The fingerprints of the proposed model are analyzed. In particular, a practical equivalent circuit of the proposed model is realized, from which the corresponding experimental fingerprints are captured. The equivalent circuit of the threshold memristor model is suitable for various memristor-based breadboard experiments.
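As a generic illustration of a flux-controlled memristor (not the paper's specific threshold model or window function), the pinched-hysteresis fingerprint can be reproduced by integrating dphi/dt = v and evaluating i = W(phi) v for a cubic charge-flux curve; the constants below are illustrative:

```python
import numpy as np

# Memductance W(phi) = dq/dphi for a cubic q(phi) = a*phi + b*phi**3.
# The paper's threshold behavior and window function are not reproduced here.
a, b = 0.4e-3, 0.8e-3

def memductance(phi):
    return a + 3.0 * b * phi ** 2

# Drive with a sinusoidal voltage and integrate dphi/dt = v (Euler steps)
f = 1.0                     # Hz
dt = 1e-4
t = np.arange(0.0, 2.0 / f, dt)
v = 1.5 * np.sin(2 * np.pi * f * t)
phi = np.cumsum(v) * dt     # flux is the time integral of voltage
i = memductance(phi) * v    # memristor current

# The (v, i) curve traced by these arrays is the pinched hysteresis loop
# ("fingerprint") that the equivalent-circuit experiments aim to reproduce.
```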
Silicon and Germanium (111) Surface Reconstruction
NASA Astrophysics Data System (ADS)
Hao, You Gong
Silicon (111) surface (7 x 7) reconstruction has been a long-standing puzzle. Over the last twenty years, various models have been put forward to explain this reconstruction, but the problem remains unsolved. Recent ion scattering and channeling (ISC), scanning tunneling microscopy (STM), and transmission electron diffraction (TED) experiments reveal new results about the surface which greatly help investigators establish better models. This work proposes a silicon (111) surface reconstruction mechanism, the raising and lowering mechanism, which leads to benzene-like ring and flower (raised atom) building units. Based on these building units a (7 x 7) model is proposed, which is capable of explaining the STM and ISC experiments and several others. Furthermore, the building units of the model naturally account for the germanium (111) surface c(2 x 8) reconstruction and other observed structures, including (2 x 2), (5 x 5) and (7 x 7) for germanium, the (√3 x √3)R30° and (√19 x √19)R23.5° impurity-induced structures for silicon, and the higher-temperature disordered (1 x 1) structure for silicon. The model is closely related to the pi-bonded chain model of the silicon (111) surface (2 x 1) reconstruction, which is the most successful model for that reconstruction; this provides an explanation for the rather low conversion temperature (560 K) of the (2 x 1) to the (7 x 7). The model encounters some difficulty in explaining the TED result, which is explained very well by the dimer, adatom, and stacking fault (DAS) model proposed by Takayanagi. In order to explain the TED result, a variation of the atomic scattering factor is proposed. Comparing the benzene-like ring model with the DAS model, the former needs more work to explain the TED result and the latter has to find a way to explain the silicon (111) surface (1 x 1) disorder experiment.
Continuous Human Action Recognition Using Depth-MHI-HOG and a Spotter Model
Eum, Hyukmin; Yoon, Changyong; Lee, Heejin; Park, Mignon
2015-01-01
In this paper, we propose a new method for spotting and recognizing continuous human actions using a vision sensor. The method comprises depth-MHI-HOG (DMH) feature extraction, action modeling, action spotting, and recognition. First, to effectively separate the foreground from the background, we propose a method called DMH, a standard structure for segmenting images and extracting features using depth information, MHI, and HOG. Second, action modeling is performed to model various actions using the extracted features; sequences of actions are created through k-means clustering and constitute the HMM input. Third, an action spotting method is proposed to filter meaningless actions from continuous actions and to identify precise start and end points of actions; by employing the spotter model, the proposed method improves action recognition performance. Finally, the proposed method recognizes actions based on the start and end points. We evaluate recognition performance by applying input sequences to the action models and the spotter model and comparing the resulting probabilities. Through various experiments, we demonstrate that the proposed method is efficient for recognizing continuous human actions in real environments. PMID:25742172
An internal variable constitutive model for the large deformation of metals at high temperatures
NASA Technical Reports Server (NTRS)
Brown, Stuart; Anand, Lallit
1988-01-01
The advent of large-deformation finite element methodologies is beginning to permit the numerical simulation of hot working processes whose design until recently has been based on prior industrial experience. Proper application of such finite element techniques requires realistic constitutive equations which more accurately model material behavior during hot working. A simple constitutive model for hot working is the single scalar internal variable model for isotropic thermal elastoplasticity proposed by Anand. The model is recalled, and the specific scalar functions presented for the equivalent plastic strain rate and the evolution equation of the internal variable are slight modifications of those proposed by Anand; the modified functions are better able to represent high-temperature material behavior. The monotonic constant true strain rate and strain rate jump compression experiments on a 2 percent silicon iron are briefly described. The model is implemented in the general-purpose finite element program ABAQUS.
On the melting temperature measurements of metals under shock compression by pyrometry
NASA Astrophysics Data System (ADS)
Dai, Chengda; Hu, Jianbo; Tan, Hua
2009-06-01
High-pressure melting temperatures are of interest for validating equations of state and for constitutive modeling. The determination of melting temperatures of metals at megabar pressures by pyrometry experiments relies principally on one-dimensional models of heat flow through dissimilar media: the Grover-Urtiew model (J. Appl. Phys. 1974, 45: 146-152) and the Tan-Ahrens model (High Press. Res. 1990, 2: 159-182). In the present work, we analyze the insufficiency of the Grover-Urtiew model for determining melting temperatures from observed interface temperatures. Based on the Tan-Ahrens model, we extract upper and lower bounds on the melting temperature at the interface pressure and propose that the median of the two bounds is a good approximation to the melting temperature at that pressure. Pyrometry experiments were performed on tantalum, and the high-pressure melting temperatures were evaluated by applying the proposed approximation. The results are compared with available theoretical calculations.
ERIC Educational Resources Information Center
Cox, Cody B.; Yang, Yan; Dicke-Bohmann, Amy K.
2014-01-01
The purpose of this study was to propose and test a model of the effects of cultural factors on Hispanic protégés' expectations for and experiences with their mentors. Specifically, the proposed model posits that cultural orientation predicts the mentorship functions protégés desire, and the positive impact of these mentorship functions depends on…
NASA Astrophysics Data System (ADS)
Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin
2016-12-01
This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of the numerical control (NC) machine tool are used to avoid installation problems. A mathematical model for estimating the cutting error is proposed that computes the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. To verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generating grinding process. The cutting error of the gear was estimated, and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the work-piece during the machining process.
A stochastic HMM-based forecasting model for fuzzy time series.
Li, Sheng-Tun; Cheng, Yi-Chung
2010-10-01
Recently, fuzzy time series have attracted more academic attention than traditional time series due to their capability of dealing with the uncertainty and vagueness inherent in the data collected. The formulation of fuzzy relations is one of the key issues affecting forecasting results. Most of the present works adopt IF-THEN rules for relationship representation, which leads to higher computational overhead and rule redundancy. Sullivan and Woodall proposed a Markov-based formulation and a forecasting model to reduce computational overhead; however, its applicability is limited to handling one-factor problems. In this paper, we propose a novel forecasting model based on the hidden Markov model by enhancing Sullivan and Woodall's work to allow handling of two-factor forecasting problems. Moreover, in order to make the nature of conjecture and randomness of forecasting more realistic, the Monte Carlo method is adopted to estimate the outcome. To test the effectiveness of the resulting stochastic model, we conduct two experiments and compare the results with those from other models. The first experiment consists of forecasting the daily average temperature and cloud density in Taipei, Taiwan, and the second experiment is based on the Taiwan Weighted Stock Index by forecasting the exchange rate of the New Taiwan dollar against the U.S. dollar. In addition to improving forecasting accuracy, the proposed model adheres to the central limit theorem, and thus, the result statistically approximates to the real mean of the target value being forecast.
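As a single-factor simplification of the Markov-based idea (the paper's hidden Markov model and two-factor handling are not reproduced), a fuzzify-then-Monte-Carlo forecast can be sketched as follows; the number of states, the Laplace smoothing, and the toy temperature series are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuzzify(series, n_states):
    """Assign each observation to one of n_states equal-width intervals."""
    edges = np.linspace(series.min(), series.max(), n_states + 1)
    states = np.clip(np.digitize(series, edges[1:-1]), 0, n_states - 1)
    midpoints = (edges[:-1] + edges[1:]) / 2
    return states, midpoints

def transition_matrix(states, n_states):
    counts = np.ones((n_states, n_states))          # Laplace smoothing
    for s, s_next in zip(states[:-1], states[1:]):
        counts[s, s_next] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def mc_forecast(series, n_states=7, n_samples=5000):
    states, mid = fuzzify(series, n_states)
    P = transition_matrix(states, n_states)
    last = states[-1]
    draws = rng.choice(n_states, size=n_samples, p=P[last])
    return mid[draws].mean()                        # Monte Carlo point forecast

temps = np.array([27.1, 27.8, 28.4, 28.0, 27.5, 28.9, 29.3, 28.6, 27.9, 28.2])
print(mc_forecast(temps))
```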
NASA Astrophysics Data System (ADS)
Roth, Christian; Vorderer, Peter; Klimmt, Christoph
A conceptual account of the quality of the user experience that interactive storytelling intends to facilitate is introduced. Building on social-scientific research on 'old' entertainment media, the experiential qualities of curiosity, suspense, aesthetic pleasantness, self-enhancement, and optimal task engagement ("flow") are proposed as key elements of a theory of user experience in interactive storytelling. Perspectives for the evolution of the model, research, and application are briefly discussed.
Analytical Model For Fluid Dynamics In A Microgravity Environment
NASA Technical Reports Server (NTRS)
Naumann, Robert J.
1995-01-01
Report presents an analytical approximation methodology for solving the coupled fluid-flow, heat-transfer, and mass-transfer equations in a microgravity environment. Engineering estimates accurate to within a factor of 2 can be made quickly and easily, eliminating the need for time-consuming and costly numerical modeling. Any proposed experiment can be reviewed to see how it would perform in a microgravity environment. The model has been applied in a commercial setting for the preliminary design of low-Grashof/Rayleigh-number experiments.
Development of an Implantable WBAN Path-Loss Model for Capsule Endoscopy
NASA Astrophysics Data System (ADS)
Aoyagi, Takahiro; Takizawa, Kenichi; Kobayashi, Takehiko; Takada, Jun-Ichi; Hamaguchi, Kiyoshi; Kohno, Ryuji
An implantable WBAN path-loss model for capsule endoscopy, which is used for examining the digestive organs, is developed by conducting simulations and experiments. First, we performed FDTD simulations of implant WBAN propagation using a numerical human model. Second, we performed FDTD simulations of a vessel that represents the human body. Third, we performed experiments using a vessel of the same dimensions as that used in the simulations. On the basis of the results of these simulations and experiments, we propose the gradient and intercept parameters of a simple path-loss model for in-body propagation.
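The abstract refers to the gradient and intercept of a simple path-loss model; the standard log-distance form assumed here (d_0 is a reference distance, n the gradient, PL(d_0) the intercept, and X_sigma an optional shadowing term that may or may not appear in the authors' model) is:

```latex
\mathrm{PL}(d)\;[\mathrm{dB}] \;=\; \mathrm{PL}(d_0) \;+\; 10\,n\,\log_{10}\!\left(\frac{d}{d_0}\right) \;+\; X_\sigma
```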
AFFINE-CORRECTED PARADISE: FREE-BREATHING PATIENT-ADAPTIVE CARDIAC MRI WITH SENSITIVITY ENCODING
Sharif, Behzad; Bresler, Yoram
2013-01-01
We propose a real-time cardiac imaging method with parallel MRI that allows for free breathing during imaging and does not require cardiac or respiratory gating. The method is based on the recently proposed PARADISE (Patient-Adaptive Reconstruction and Acquisition Dynamic Imaging with Sensitivity Encoding) scheme. The new acquisition method adapts the PARADISE k-t space sampling pattern according to an affine model of the respiratory motion. The reconstruction scheme involves multi-channel time-sequential imaging with time-varying channels. All model parameters are adapted to the imaged patient as part of the experiment and drive both data acquisition and cine reconstruction. Simulated cardiac MRI experiments using the realistic NCAT phantom show high quality cine reconstructions and robustness to modeling inaccuracies. PMID:24390159
Improving the performances of autofocus based on adaptive retina-like sampling model
NASA Astrophysics Data System (ADS)
Hao, Qun; Xiao, Yuqing; Cao, Jie; Cheng, Yang; Sun, Ce
2018-03-01
An adaptive retina-like sampling model (ARSM) is proposed to balance autofocusing accuracy and efficiency. Based on the model, we carry out comparative experiments between the proposed method and the traditional method in terms of accuracy, full width at half maximum (FWHM), and time consumption. Results show that the performance of our method is better than that of the traditional method. Meanwhile, typical autofocus functions, including the sum-modified-Laplacian (SML), Laplacian (LAP), mid-frequency DCT (MDCT), and absolute Tenengrad (ATEN), are compared through experiments. The smallest FWHM is obtained with LAP, which is therefore more suitable for evaluating accuracy than the other autofocus functions, while MDCT is the most suitable for evaluating real-time performance.
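For reference, the sum-modified-Laplacian focus measure mentioned in the abstract can be sketched as below on a uniformly sampled grayscale image; the adaptive retina-like sampling itself is not reproduced, and the synthetic checkerboard test is illustrative only:

```python
import numpy as np

def sml_focus(img, step=1):
    """Sum-modified-Laplacian (SML) sharpness score of a 2-D grayscale array.

    Higher scores indicate better focus.
    """
    lx = np.abs(2 * img[:, step:-step] - img[:, :-2*step] - img[:, 2*step:])
    ly = np.abs(2 * img[step:-step, :] - img[:-2*step, :] - img[2*step:, :])
    return lx[step:-step, :].sum() + ly[:, step:-step].sum()

# Synthetic check: a sharp checkerboard should score higher than a blurred one
sharp = np.indices((64, 64)).sum(axis=0) % 2 * 1.0
blurred = (sharp + np.roll(sharp, 1, axis=0) + np.roll(sharp, 1, axis=1)
           + np.roll(sharp, (1, 1), axis=(0, 1))) / 4.0
print(sml_focus(sharp) > sml_focus(blurred))   # expected: True
```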
Chae, Yoojin; Goodman, Gail S; Edelstein, Robin S
2011-01-01
The authors propose a novel model of autobiographical memory development that features the fundamental role of attachment orientations and negative life events. In the model, it is proposed that early autobiographical memory derives in part from the need to express and remember negative experiences, a need that has adaptive value, and that attachment orientations create individual differences in children's recollections of negative experiences. Specifically, the role of attachment in the processing of negative information is discussed in regard to the mnemonic stages of encoding, storage, and retrieval. This model sheds light on several areas of contradictory data in the memory development literature, such as concerning earliest memories and children's and adults' memory/suggestibility for stressful events.
Work Experience, Socialization, and Civil Liberties
ERIC Educational Resources Information Center
Korman, Abraham K.
1975-01-01
Examines the effects of work experience on attitudes and behaviors in the area of civil liberties; (1) noting that hierarchical structure, rigidity and specialization seem to generate negative effect toward civil libertarian concerns, and (2) proposing a theoretical model designed to predict the conditions under which work experience may be…
Blending Student Technology Experiences in Formal and Informal Learning
ERIC Educational Resources Information Center
Lai, K.-W.; Khaddage, F.; Knezek, Gerald
2013-01-01
In this article, we discuss the importance of recognizing students' technology-enhanced informal learning experiences and develop pedagogies to connect students' formal and informal learning experiences, in order to meet the demands of the knowledge society. The Mobile-Blended Collaborative Learning model is proposed as a framework to…
NASA Technical Reports Server (NTRS)
Sidik, S. M.
1972-01-01
The error variance of the process, the prior multivariate normal distributions of the model parameters, and the prior probabilities of the models being correct are assumed to be specified. A rule for termination of sampling is proposed. Upon termination, the model with the largest posterior probability is chosen as correct. If sampling is not terminated, the posterior probabilities of the models and the posterior distributions of the parameters are computed, and the next experiment is chosen to maximize the expected Kullback-Leibler information function. Monte Carlo simulation experiments were performed to investigate the large- and small-sample behavior of the sequential adaptive procedure.
ERIC Educational Resources Information Center
Erden, Ali
2017-01-01
Lifelong education is a process including positive and negative experiences at the same time. Negative experiences mostly appear as impediments to the overseas students. They need to overcome impediments they experience throughout their education. The paper discussed the key findings of a two-year research project for identifying the impediments…
Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms.
Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan
2015-08-14
High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms.
Measurement Model and Precision Analysis of Accelerometers for Maglev Vibration Isolation Platforms
Wu, Qianqian; Yue, Honghao; Liu, Rongqiang; Zhang, Xiaoyou; Ding, Liang; Liang, Tian; Deng, Zongquan
2015-01-01
High precision measurement of acceleration levels is required to allow active control for vibration isolation platforms. It is necessary to propose an accelerometer configuration measurement model that yields such a high measuring precision. In this paper, an accelerometer configuration to improve measurement accuracy is proposed. The corresponding calculation formulas of the angular acceleration were derived through theoretical analysis. A method is presented to minimize angular acceleration noise based on analysis of the root mean square noise of the angular acceleration. Moreover, the influence of installation position errors and accelerometer orientation errors on the calculation precision of the angular acceleration is studied. Comparisons of the output differences between the proposed configuration and the previous planar triangle configuration under the same installation errors are conducted by simulation. The simulation results show that installation errors have a relatively small impact on the calculation accuracy of the proposed configuration. To further verify the high calculation precision of the proposed configuration, experiments are carried out for both the proposed configuration and the planar triangle configuration. On the basis of the results of simulations and experiments, it can be concluded that the proposed configuration has higher angular acceleration calculation precision and can be applied to different platforms. PMID:26287203
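The abstracts above do not give the configuration's formulas. For a generic pair of linear accelerometers mounted a distance L apart and sensing along a common axis perpendicular to the line joining them (an assumption for illustration, not necessarily the paper's exact layout), the angular acceleration estimate and its noise follow:

```latex
\ddot{\theta} \;\approx\; \frac{a_1 - a_2}{L},
\qquad
\sigma_{\ddot{\theta}} \;=\; \frac{\sqrt{2}\,\sigma_a}{L}
```

where sigma_a is the RMS noise of each accelerometer, assumed independent and identical; minimizing a quantity of this kind over the sensor placement is what the root-mean-square noise analysis described in the abstract targets.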
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler
Battery life estimation is one of the key inputs required for all GM Hybrid/EV/EREV/PHEV programs. For each hybrid vehicle program, GM has instituted multi-parameter Design of Experiments generating test data at the cell level and, on a reduced basis, at the pack level. Based on experience, generating test data at the pack level is found to be very expensive, resource intensive, and sometimes less reliable. The proposed collaborative project will focus on a methodology to estimate battery life based on cell degradation data combined with pack thermal modeling. NREL has previously developed cell-level battery aging models and pack-level thermal/electrical network models, though these models are currently not integrated. When coupled together, the models are expected to describe the pack-level thermal and aging response of individual cells. GM and NREL will use data collected for GM's Bas+ battery system to evaluate the proposed methodology and assess to what degree these models can replace pack-level aging experiments in the future.
Applying deep bidirectional LSTM and mixture density network for basketball trajectory prediction
NASA Astrophysics Data System (ADS)
Zhao, Yu; Yang, Rennong; Chevalier, Guillaume; Shah, Rajiv C.; Romijnders, Rob
2018-04-01
Data analytics helps basketball teams to create tactics. However, manual data collection and analytics are costly and ineffective. Therefore, we applied a deep bidirectional long short-term memory (BLSTM) and mixture density network (MDN) approach. This model is not only capable of predicting a basketball trajectory based on real data, but it also can generate new trajectory samples. It is an excellent application to help coaches and players decide when and where to shoot. Its structure is particularly suitable for dealing with time series problems. BLSTM receives forward and backward information at the same time, while stacking multiple BLSTMs further increases the learning ability of the model. Combined with BLSTMs, MDN is used to generate a multi-modal distribution of outputs. Thus, the proposed model can, in principle, represent arbitrary conditional probability distributions of output variables. We tested our model with two experiments on three-pointer datasets from NBA SportVu data. In the hit-or-miss classification experiment, the proposed model outperformed other models in terms of the convergence speed and accuracy. In the trajectory generation experiment, eight model-generated trajectories at a given time closely matched real trajectories.
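The abstract does not give the network sizes or the number of mixture components; a minimal PyTorch sketch of a stacked bidirectional LSTM feeding a Gaussian mixture density head (all sizes illustrative, trajectory generation omitted) might look like:

```python
import torch
import torch.nn as nn

class BLSTM_MDN(nn.Module):
    """Stacked bidirectional LSTM with a mixture-density output head.

    Sizes and the number of mixture components are illustrative; the paper's
    exact architecture is not specified in the abstract.
    """
    def __init__(self, in_dim=3, hidden=64, layers=2, n_mix=5, out_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=layers,
                            batch_first=True, bidirectional=True)
        self.n_mix, self.out_dim = n_mix, out_dim
        self.head = nn.Linear(2 * hidden, n_mix * (1 + 2 * out_dim))

    def forward(self, x):                      # x: (batch, time, in_dim)
        h, _ = self.lstm(x)
        params = self.head(h[:, -1])           # use the last time step
        pi = torch.softmax(params[:, :self.n_mix], dim=-1)
        mu, log_sigma = params[:, self.n_mix:].chunk(2, dim=-1)
        return (pi,
                mu.view(-1, self.n_mix, self.out_dim),
                log_sigma.view(-1, self.n_mix, self.out_dim).exp())

def mdn_nll(pi, mu, sigma, y):
    """Negative log-likelihood of y under the predicted Gaussian mixture."""
    comp = torch.distributions.Normal(mu, sigma)
    log_prob = comp.log_prob(y.unsqueeze(1)).sum(-1)      # (batch, n_mix)
    return -torch.logsumexp(torch.log(pi) + log_prob, dim=-1).mean()

model = BLSTM_MDN()
x = torch.randn(8, 20, 3)        # 8 trajectories, 20 time steps, (x, y, z)
y = torch.randn(8, 3)            # next position to predict
pi, mu, sigma = model(x)
loss = mdn_nll(pi, mu, sigma, y)
loss.backward()
```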
García Rodríguez, Y
1997-06-01
Various studies have explored the relationships between unemployment and expectation of success, commitment to work, motivation, causal attributions, self-esteem and depression. A model is proposed that assumes the relationships between these variables are moderated by (a) whether or not the unemployed individual is seeking a first job and (b) age. It is proposed that for the unemployed who are seeking their first job (seekers) the relationships among these variables will be consistent with expectancy-value theory, but for those who have had a previous job (losers), the relationships will be more consistent with learned helplessness theory. It is further assumed that within this latter group the young losers will experience "universal helplessness" whereas the adult losers will experience "personal helplessness".
Bell Test experiments explained without entanglement
NASA Astrophysics Data System (ADS)
Boyd, Jeffrey
2011-04-01
by Jeffrey H. Boyd (Jeffreyhboyd@gmail.com). John Bell proposed a test of what was called "local realism." However, that is a different view of reality than we hold. Bell incorrectly assumed the validity of wave-particle dualism. According to our model, waves are independent of particles; wave interference precedes the emission of a particle. This leads to two conclusions. First, the inequalities proposed for "local realism" in Bell's theorem do not apply to this model; the alleged mathematics of "local realism" is therefore wrong. Second, we can explain the Bell test experimental results (such as the experiments done at Innsbruck) without any need for entanglement, non-locality, or particle superposition.
[Medical image segmentation based on the minimum variation snake model].
Zhou, Changxiong; Yu, Shenglin
2007-02-01
It is difficult for the traditional parametric active contour (snake) model to automatically segment medical images with weak edges. After analyzing the snake and geometric active contour models, a minimum variation snake model is proposed and successfully applied to weak-edge medical image segmentation. The proposed model replaces the constant force in the balloon snake model with a variable force incorporating information from the foreground and background regions, and it drives the curve to evolve under the criterion of minimum variation of the foreground and background regions. Experiments show that the proposed model is robust to the placement of initial contours and can segment weak-edge medical images automatically. In addition, tests on noisy medical images filtered by an edge-preserving curvature flow filter show a significant effect.
Vaughn, Leigh Ann
2017-03-01
This article introduces the need-support model, which proposes that regulatory focus can affect subjective support for the needs proposed by self-determination theory (autonomy, competence, and relatedness), and support of these needs can affect subjective labeling of experiences as promotion-focused and prevention-focused. Three studies tested these hypotheses ( N = 2,114). Study 1 found that people recall more need support in promotion-focused experiences than in prevention-focused experiences, and need support in their day yesterday (with no particular regulatory focus) fell in between. Study 2 found that experiences of higher need support were more likely to be labeled as promotion-focused rather than prevention-focused, and that each need accounted for distinct variance in the labeling of experiences. Study 3 varied regulatory focus within a performance task and found that participants in the promotion condition engaged in need-support inflation, whereas participants in the prevention condition engaged in need-support deflation. Directions for future research are discussed.
Intonation in unaccompanied singing: accuracy, drift, and a model of reference pitch memory.
Mauch, Matthias; Frieler, Klaus; Dixon, Simon
2014-07-01
This paper presents a study on intonation and intonation drift in unaccompanied singing, and proposes a simple model of reference pitch memory that accounts for many of the effects observed. Singing experiments were conducted with 24 singers of varying ability under three conditions (Normal, Masked, Imagined). Over the duration of a recording, ∼50 s, a median absolute intonation drift of 11 cents was observed. While smaller than the median note error (19 cents), drift was significant in 22% of recordings. Drift magnitude did not correlate with other measures of singing accuracy, singing experience, or the presence of conditions tested. Furthermore, it is shown that neither a static intonation memory model nor a memoryless interval-based intonation model can account for the accuracy and drift behavior observed. The proposed causal model provides a better explanation as it treats the reference pitch as a changing latent variable.
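The proposed causal reference-pitch model is not specified in detail in the abstract. As a hedged illustration, a drifting latent reference that is updated as a weighted average of its previous value and the reference implied by the note just sung can be simulated as follows (the memory weight and noise level are illustrative, not fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_drift(intended_midi, motor_noise_cents=19.0, memory_weight=0.9):
    """Simulate unaccompanied singing with a drifting reference pitch.

    Each produced note = intended pitch (relative to the current reference)
    + motor noise; the reference is then updated as a weighted average of the
    old reference and the reference implied by the note just sung.
    """
    reference = 0.0                       # reference offset in cents
    produced = []
    for note in intended_midi:
        sung = note * 100 + reference + rng.normal(0.0, motor_noise_cents)
        produced.append(sung)
        implied_ref = sung - note * 100   # reference implied by the sung note
        reference = memory_weight * reference + (1 - memory_weight) * implied_ref
    return np.array(produced)

melody = [60, 62, 64, 65, 67, 65, 64, 62, 60] * 6      # a simple scale figure
cents = simulate_drift(melody)
drift = cents - np.array(melody) * 100
print(f"final drift: {drift[-1]:.1f} cents")
```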
DOE Office of Scientific and Technical Information (OSTI.GOV)
Studer, Anthony
Current pressures on the global food supply have accelerated the urgency for a second green revolution using novel and sustainable approaches to increase crop yield and efficiency. This proposal outlines experiments to address fundamental questions regarding the biology of C4 photosynthesis, the method of carbon fixation utilized by the most productive food, feed, and bioenergy crops. Carbonic anhydrase (CA) has been implicated in multiple cellular functions including nitrogen metabolism, water use efficiency, and photosynthesis. CA catalyzes the first dedicated step in C4 photosynthesis, the hydration of CO2 into bicarbonate, and is potentially rate limiting in C4 grasses. Using insertional mutagenesis, we have generated CA mutants in maize, and we propose the characterization of these mutants using phenotypic, physiological, and transcriptomic profiling to assay the plant's response to altered CA activity. In addition, fluorescent protein tagging experiments will be employed to study the subcellular localization of CA paralogs, providing critical data for modeling carbon fixation in C4 plants. Finally, I propose parallel experiments in Setaria viridis to explore its relevance as a model C4 grass. Using a multifaceted approach, this proposal addresses important questions in basic biology, as well as the need for translational research in response to looming global food challenges.
NASA Technical Reports Server (NTRS)
Russell, Richard A.; Waiss, Richard D.
1988-01-01
A study was conducted to identify the common support equipment and Space Station interface requirements for the IOC (initial operating capabilities) model technology experiments. In particular, each principal investigator for the proposed model technology experiment was contacted and visited for technical understanding and support for the generation of the detailed technical backup data required for completion of this study. Based on the data generated, a strong case can be made for a dedicated technology experiment command and control work station consisting of a command keyboard, cathode ray tube, data processing and storage, and an alert/annunciator panel located in the pressurized laboratory.
The Yes-No Question Answering System and Statement Verification.
ERIC Educational Resources Information Center
Akiyama, M. Michael; And Others
1979-01-01
Two experiments investigated the relationship of verification to the answering of yes-no questions. Subjects verified simple statements or answered simple questions. Various proposals concerning the relative difficulty of answering questions and verifying statements were considered, and a model was proposed. (SW)
McCracken, Lance M; Trost, Zina
2014-01-01
Accumulating evidence suggests that the experience of injustice in patients with chronic pain is associated with poorer pain-related outcomes. Despite this evidence, a theoretical framework to understand this relationship is presently lacking. This review is the first to propose that the psychological flexibility model underlying Acceptance and Commitment Therapy (ACT) may provide a clinically useful conceptual framework to understand the association between the experience of injustice and chronic pain outcomes. A literature review was conducted to identify research and theory on the injustice experience in chronic pain, chronic pain acceptance, and ACT. Research relating injustice to chronic pain outcomes is summarised, the relevance of psychological flexibility to the injustice experience is discussed, and the subprocesses of psychological flexibility are proposed as potential mediating factors in the relationship between injustice and pain outcomes. Application of the psychological flexibility model to the experience of pain-related injustice may provide new avenues for future research and clinical interventions for patients with pain. Summary points • Emerging research links the experience of pain-related injustice to problematic pain outcomes. • A clinically relevant theoretical framework is currently lacking to guide future research and intervention on pain-related injustice. • The psychological flexibility model would suggest that the overarching process of psychological inflexibility mediates between the experience of injustice and adverse chronic pain outcomes. • Insofar as the processes of psychological inflexibility account for the association between injustice experiences and pain outcomes, methods of Acceptance and Commitment Therapy (ACT) may reduce the impact of injustice on pain outcomes. • Future research is needed to empirically test the proposed associations between the experience of pain-related injustice, psychological flexibility and pain outcomes, and whether ACT interventions mitigate the impact of pain-related injustice on pain outcomes. PMID:26516537
An Office Automation Needs Assessment Model
1985-08-01
Indexed excerpts from the report include CSD office systems analysis worksheets, AMO evaluations of the proposed model, and discussion of who should plan for office automated systems, for which a checklist of attributes should be evaluated, including experience, expertise, and availability, when weighing in-house versus outside resources.
A spectral-spatial-dynamic hierarchical Bayesian (SSD-HB) model for estimating soybean yield
NASA Astrophysics Data System (ADS)
Kazama, Yoriko; Kujirai, Toshihiro
2014-10-01
A method called a "spectral-spatial-dynamic hierarchical-Bayesian (SSD-HB) model," which can deal with many parameters (such as spectral and weather information all together) by reducing the occurrence of multicollinearity, is proposed. Experiments conducted on soybean yields in Brazil fields with a RapidEye satellite image indicate that the proposed SSD-HB model can predict soybean yield with a higher degree of accuracy than other estimation methods commonly used in remote-sensing applications. In the case of the SSD-HB model, the mean absolute error between estimated yield of the target area and actual yield is 0.28 t/ha, compared to 0.34 t/ha when conventional PLS regression was applied, showing the potential effectiveness of the proposed model.
Toward a Trust Evaluation Mechanism in the Social Internet of Things.
Truong, Nguyen Binh; Lee, Hyunwoo; Askwith, Bob; Lee, Gyu Myoung
2017-06-09
In the blooming era of the Internet of Things (IoT), trust has been accepted as a vital factor for provisioning secure, reliable, seamless communications and services. However, a large number of challenges still remain unsolved due to the ambiguity of the concept of trust as well as the variety of divergent trust models in different contexts. In this research, we augment the trust concept and the trust definition, and provide a general conceptual model in the context of the Social IoT (SIoT) environment by breaking down all attributes influencing trust. Then, we propose a trust evaluation model called REK, comprised of the triad of trust indicators (TIs) Reputation, Experience and Knowledge. The REK model covers multi-dimensional aspects of trust by incorporating heterogeneous information from direct observation (as Knowledge TI) and personal experiences (as Experience TI) to global opinions (as Reputation TI). The associated evaluation models for the three TIs are also proposed and provisioned. We then come up with an aggregation mechanism for deriving trust values as the final outcome of the REK evaluation model. We believe this article offers a better understanding of trust as well as several prospective approaches for trust evaluation in the SIoT environment.
Toward a Trust Evaluation Mechanism in the Social Internet of Things
Truong, Nguyen Binh; Lee, Hyunwoo; Askwith, Bob; Lee, Gyu Myoung
2017-01-01
In the blooming era of the Internet of Things (IoT), trust has been accepted as a vital factor for provisioning secure, reliable, seamless communications and services. However, a large number of challenges still remain unsolved due to the ambiguity of the concept of trust as well as the variety of divergent trust models in different contexts. In this research, we augment the trust concept and the trust definition, and provide a general conceptual model in the context of the Social IoT (SIoT) environment by breaking down all attributes influencing trust. Then, we propose a trust evaluation model called REK, comprised of the triad of trust indicators (TIs) Reputation, Experience and Knowledge. The REK model covers multi-dimensional aspects of trust by incorporating heterogeneous information from direct observation (as Knowledge TI) and personal experiences (as Experience TI) to global opinions (as Reputation TI). The associated evaluation models for the three TIs are also proposed and provisioned. We then come up with an aggregation mechanism for deriving trust values as the final outcome of the REK evaluation model. We believe this article offers a better understanding of trust as well as several prospective approaches for trust evaluation in the SIoT environment. PMID:28598401
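The abstract above does not give the REK aggregation formula explicitly; as a minimal illustrative sketch in Python, assuming the three trust indicators are pre-normalized to [0, 1] and combined by a simple weighted sum with hypothetical weights (not the authors' specification):

def rek_trust(reputation, experience, knowledge, w_r=0.3, w_e=0.3, w_k=0.4):
    """Combine the three REK trust indicators into one trust value.

    All indicators are assumed to be pre-normalized to [0, 1]; the weights are
    illustrative and would normally be tuned per application context.
    """
    for name, value in (("reputation", reputation), ("experience", experience),
                        ("knowledge", knowledge)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} must lie in [0, 1], got {value}")
    return w_r * reputation + w_e * experience + w_k * knowledge

# Example: a device with strong direct knowledge but a thin interaction history.
print(rek_trust(reputation=0.7, experience=0.4, knowledge=0.9))  # 0.69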
Gas leak detection in infrared video with background modeling
NASA Astrophysics Data System (ADS)
Zeng, Xiaoxia; Huang, Likun
2018-03-01
Background modeling plays an important role in the task of gas detection based on infrared video. The VIBE algorithm has been a widely used background modeling algorithm in recent years. However, the processing speed of the VIBE algorithm sometimes cannot meet the requirements of real-time detection applications. Therefore, based on the traditional VIBE algorithm, we propose a fast foreground model and optimize the results by combining the connected-domain algorithm and the nine-spaces algorithm in the subsequent processing steps. Experiments show the effectiveness of the proposed method.
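For readers unfamiliar with the baseline being accelerated, the following Python sketch shows a much-simplified ViBe-style sample-based background model (per-pixel sample sets, match counting, and conservative random update). It is not the authors' fast foreground model, and the connected-domain and nine-spaces post-processing steps are omitted; the frame data and all thresholds are illustrative placeholders.

import numpy as np

rng = np.random.default_rng(0)

N_SAMPLES = 20      # background samples kept per pixel
RADIUS = 20         # intensity distance below which a sample "matches"
MIN_MATCHES = 2     # matches required to label a pixel as background
SUBSAMPLING = 16    # a background pixel refreshes the model with probability 1/16

def init_model(first_frame):
    """Fill each pixel's sample set with noisy copies of the first frame."""
    model = np.repeat(first_frame[None, :, :], N_SAMPLES, axis=0).astype(np.int16)
    model += rng.integers(-10, 11, size=model.shape, dtype=np.int16)
    return model

def segment_and_update(model, frame):
    """Return a boolean foreground mask and update the model in place."""
    dist = np.abs(model - frame.astype(np.int16))      # (N_SAMPLES, h, w)
    matches = (dist < RADIUS).sum(axis=0)
    foreground = matches < MIN_MATCHES
    # Conservative random update: only background pixels refresh the model.
    update = (~foreground) & (rng.integers(0, SUBSAMPLING, frame.shape) == 0)
    slot = rng.integers(0, N_SAMPLES)
    model[slot][update] = frame[update]
    return foreground

frames = rng.integers(0, 256, size=(5, 48, 64), dtype=np.uint8)  # toy "video"
model = init_model(frames[0])
for frame in frames[1:]:
    mask = segment_and_update(model, frame)
    print("foreground pixels:", int(mask.sum()))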
Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation
NASA Astrophysics Data System (ADS)
Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua
2015-09-01
Cable-driven exoskeletons have used active cables to actuate the system and are worn on subjects to provide motion assistance. However, this kind of wearable device usually contains uncertain kinematic parameters. In this paper, a model-based identification method has been proposed for a cable-driven arm exoskeleton to estimate its uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. An experiment has been conducted to demonstrate the feasibility of the proposed model-based method in practical application.
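A minimal sketch of this kind of identification, assuming a linearized error model dx ≈ J·dp (measured pose errors linear in the unknown kinematic parameter errors) solved over many poses by least squares; the Jacobians and noise levels below are synthetic placeholders, not the exoskeleton's actual kinematics:

import numpy as np

rng = np.random.default_rng(1)

true_dp = np.array([0.004, -0.002, 0.0015])            # unknown parameter errors
J_stack, dx_stack = [], []
for _ in range(30):                                    # 30 measured poses
    J = rng.normal(size=(3, 3))                        # identification Jacobian (placeholder)
    dx = J @ true_dp + rng.normal(scale=1e-4, size=3)  # noisy measured pose error
    J_stack.append(J)
    dx_stack.append(dx)

A = np.vstack(J_stack)                                 # stacked Jacobians, (90, 3)
b = np.concatenate(dx_stack)                           # stacked measured errors, (90,)
dp_hat, *_ = np.linalg.lstsq(A, b, rcond=None)         # least-squares estimate
print("estimated parameter errors:", dp_hat)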
NASA Technical Reports Server (NTRS)
DeCarvalho, N. V.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Ratcliffe, J. G.; Tay, T. E.
2013-01-01
A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
NASA Technical Reports Server (NTRS)
DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.
2013-01-01
A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry
2013-05-01
Disturbance data recorded by phasor measurement units (PMU) offers opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of EKF with parameter calibration is discussed. Case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective, complementary to traditional equipment testing for improving dynamic model quality.
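A toy illustration of EKF-based parameter calibration in the same spirit (not the power-system dynamic models or PMU data of the paper): the unknown parameter is appended to the state vector and estimated jointly from noisy measurements. The plant model, noise levels, and inputs are all assumed for the example.

import numpy as np

rng = np.random.default_rng(2)

# Toy first-order plant x' = -a*x + u with unknown parameter a; the state is
# augmented to s = [x, a] and estimated jointly from noisy measurements of x.
dt, a_true, steps = 0.02, 1.5, 2000
u = np.sin(0.5 * dt * np.arange(steps)) + 0.5 * np.sin(2.3 * dt * np.arange(steps))

x, z = 0.0, np.empty(steps)
for k in range(steps):                       # simulate the "true" plant
    x += dt * (-a_true * x + u[k])
    z[k] = x + rng.normal(scale=0.01)        # noisy measurement

s = np.array([0.0, 0.5])                     # poor initial guess for a
P = np.diag([0.1, 1.0])
Q = np.diag([1e-8, 1e-7])                    # small random walk on x and a
R = np.array([[0.01 ** 2]])
H = np.array([[1.0, 0.0]])

for k in range(steps):
    # Predict: x <- x + dt*(-a*x + u), a assumed constant; F is the Jacobian.
    F = np.array([[1.0 - dt * s[1], -dt * s[0]],
                  [0.0, 1.0]])
    s = np.array([s[0] + dt * (-s[1] * s[0] + u[k]), s[1]])
    P = F @ P @ F.T + Q
    # Update with the measurement of x.
    y = z[k] - H @ s
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    s = s + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"calibrated parameter a ≈ {s[1]:.2f} (true value {a_true})")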
NASA Astrophysics Data System (ADS)
Shutov, A. V.; Larichkin, A. Yu
2017-10-01
A cyclic creep damage model, previously proposed by the authors, is modified for a better description of the transient creep of D16T alloy observed in the finite strain range under rapidly changing stresses. The new model encompasses the concept of kinematic hardening, which allows us to account for the creep-induced anisotropy. The model kinematics is based on the nested multiplicative split of the deformation gradient, proposed by Lion. The damage evolution is accounted for by the classical Kachanov-Rabotnov approach. The material parameters are identified using experimental data on cyclic torsion of thick-walled samples with different holding times between load reversals. For the validation of the proposed material model, an additional experiment is analyzed. Although this additional test is not involved in the identification procedure, the proposed cyclic creep damage model describes it accurately.
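The damage evolution referred to above follows the classical Kachanov-Rabotnov approach; a minimal uniaxial, small-strain sketch under constant stress is given below (explicit Euler integration with purely illustrative material constants; it does not reproduce the authors' finite-strain kinematics, kinematic hardening, or Lion's multiplicative split).

# Classical uniaxial Kachanov-Rabotnov creep-damage law under constant stress:
#   d(eps)/dt   = B * (sigma / (1 - omega))**n     (creep rate)
#   d(omega)/dt = A * (sigma / (1 - omega))**r     (damage rate, rupture as omega -> 1)
# Material constants are purely illustrative.
A, r = 1e-13, 4.0
B, n = 1e-13, 3.5
sigma = 120.0            # MPa, held constant
dt, t_max = 1.0, 2.0e4   # s

t, eps, omega = 0.0, 0.0, 0.0
while t < t_max and omega < 0.99:
    s_eff = sigma / (1.0 - omega)            # net-stress concept
    eps += dt * B * s_eff ** n
    omega = min(1.0, omega + dt * A * s_eff ** r)
    t += dt

print(f"rupture after ~{t / 3600:.2f} h, creep strain {eps:.3f}, damage {omega:.2f}")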
Dynamic access control model for privacy preserving personalized healthcare in cloud environment.
Son, Jiseong; Kim, Jeong-Dong; Na, Hong-Seok; Baik, Doo-Kwon
2015-01-01
When sharing and storing healthcare data in a cloud environment, access control is a central issue for preserving data privacy as a patient's personal health data may be accessed without permission from many stakeholders. Specifically, dynamic authorization for the access of data is required because personal health data is stored in cloud storage via wearable devices. Therefore, we propose a dynamic access control model for preserving the privacy of personal healthcare data in a cloud environment. The proposed model considers context information for dynamic access. According to the proposed model, access control can be dynamically determined by changing the context information; this means that even for a subject with the same role in the cloud, access permission is defined differently depending on the context information and access condition. Furthermore, we test the ability of the proposed model to provide correct responses by representing dynamic access decisions in real-life personalized healthcare system scenarios.
Local Intrinsic Dimension Estimation by Generalized Linear Modeling.
Hino, Hideitsu; Fujiki, Jun; Akaho, Shotaro; Murata, Noboru
2017-07-01
We propose a method for intrinsic dimension estimation. By fitting a regression model to the relationship between the distance from an inspection point and the number of samples contained inside a ball with a radius equal to that distance, we estimate the goodness of fit. Then, by using the maximum likelihood method, we estimate the local intrinsic dimension around the inspection point. The proposed method is shown to be comparable to conventional methods in global intrinsic dimension estimation experiments. Furthermore, we experimentally show that the proposed method outperforms a conventional local dimension estimation method.
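A simplified stand-in for the scaling idea behind such estimators (not the paper's generalized linear model or its maximum-likelihood refinement): the number of samples inside a ball of radius r around the inspection point grows roughly like r raised to the intrinsic dimension, so the local dimension can be read off a log-log regression of neighbor counts against distances.

import numpy as np

def local_dimension(data, point, k=100):
    """Slope of log N(r) versus log r over the k nearest neighbors of `point`,
    used here as a rough local intrinsic dimension estimate."""
    dists = np.sort(np.linalg.norm(data - point, axis=1))
    radii = dists[1:k + 1]                 # skip the zero distance to the point itself
    counts = np.arange(1, k + 1)
    slope, _ = np.polyfit(np.log(radii), np.log(counts), 1)
    return slope

# 3-D data linearly embedded in 10-D: the estimate should come out near 3.
rng = np.random.default_rng(0)
latent = rng.normal(size=(5000, 3))
embedding = np.hstack([latent, np.zeros((5000, 7))]) @ rng.normal(size=(10, 10))
print(f"estimated local dimension: {local_dimension(embedding, embedding[0]):.2f}")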
Tokiwa, Tatsuji; Zimin, Lev; Ishizuka, Satoru; Inoue, Takao; Fujii, Masami; Ishiguro, Hiroshi; Kajigaya, Hiroshi; Owada, Yuji; Suzuki, Michiyasu; Yamakawa, Takeshi
2015-08-01
The purpose of this study is to propose a palm-sized cryoprobe system based on a new concept and to suggest that the freezing technique could be used for the treatment of epilepsy. We propose herein a cryoprobe system based on the boiling effect that uses a specific refrigerant with a boiling point higher than that of liquid nitrogen yet low enough to result in cell necrosis. To evaluate and verify the effectiveness of the proposed system, cooling characteristics are investigated in agar. In addition, the system is applied to a Wistar rat brain model, in which epileptic activities are induced in advance by a potent epileptogenic substance. The design concept yielded the following benefits: 1) the selected refrigerant promotes sealing in the tank; 2) the tank can be made as compact as possible, limited only by the volume required for the refrigerant; 3) because the tank and probe units can be separated by a nonconducting, flexible, high-pressure tube, the tank unit can be manipulated without disturbing the probe tip with mechanical vibrations and electrical noise. Through the agar experiments, we verified that the proposed system can uniquely and reproducibly create an ice ball. Moreover, in the rat experiments in vivo, it was confirmed that penicillin G-induced epileptic activities disappeared on freezing with the proposed system. The palm-sized system has the desired characteristics and can be applied to an animal model of epilepsy. Results of the in vivo experiments suggest that cryosurgery may be an effective treatment for epilepsy.
On-orbit technology experiment facility definition
NASA Technical Reports Server (NTRS)
Russell, Richard A.; Buchan, Robert W.; Gates, Richard M.
1988-01-01
A study was conducted to identify on-orbit integrated facility needs to support in-space technology experiments on the Space Station and associated free flyers. In particular, the first task was to examine the proposed technology development missions (TDMX's) from the model mission set and other proposed experimental facilities, both individually and by theme, to determine how and if the experiments might be combined, what equipment might be shared, what equipment might be used as generic equipment for continued experimentation, and what experiments will conflict with the conduct of other experiments or Space Station operations. Then using these results, to determine on-orbit facility needs to optimize the implementation of technology payloads. Finally, to develop one or more scenarios, design concepts, and outfitting requirements for implementation of onboard technology experiments.
An approach of traffic signal control based on NLRSQP algorithm
NASA Astrophysics Data System (ADS)
Zou, Yuan-Yang; Hu, Yu
2017-11-01
This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective of the model is to minimize the weighted total queue length at the end of each cycle. Then, a combination algorithm based on nonlinear least regression and sequential quadratic programming (NLRSQP) is proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are proposed to study how to set the initial solution of the algorithm so that a better local optimal solution can be obtained more quickly. In particular, the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the lower the initial solution, the better the local optimal solution that can be obtained.
Long, Chengjiang; Hua, Gang; Kapoor, Ashish
2015-01-01
We present a noise resilient probabilistic model for active learning of a Gaussian process classifier from crowds, i.e., a set of noisy labelers. It explicitly models both the overall label noise and the expertise level of each individual labeler with two levels of flip models. Expectation propagation is adopted for efficient approximate Bayesian inference of our probabilistic model for classification, based on which, a generalized EM algorithm is derived to estimate both the global label noise and the expertise of each individual labeler. The probabilistic nature of our model immediately allows the adoption of the prediction entropy for active selection of data samples to be labeled, and active selection of high quality labelers based on their estimated expertise to label the data. We apply the proposed model for four visual recognition tasks, i.e., object category recognition, multi-modal activity recognition, gender recognition, and fine-grained classification, on four datasets with real crowd-sourced labels from the Amazon Mechanical Turk. The experiments clearly demonstrate the efficacy of the proposed model. In addition, we extend the proposed model with the Predictive Active Set Selection Method to speed up the active learning system, whose efficacy is verified by conducting experiments on the first three datasets. The results show our extended model can not only preserve a higher accuracy, but also achieve a higher efficiency. PMID:26924892
Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool
NASA Astrophysics Data System (ADS)
Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo
2017-05-01
Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced into the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experiment system is developed, and the prediction results of LSR (least squares regression), ANN and ABC-NN are compared with the measurement results of spindle thermal errors. Experiment results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, and the residual error is smaller than 3 μm; the new modeling method is feasible. The proposed research provides instruction to compensate thermal errors and improve the machining accuracy of NC machine tools.
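The grey relational analysis used for the temperature-variable selection step can be sketched as follows (standard min-max normalization, grey relational coefficients with resolution coefficient 0.5, and a mean grade per candidate sensor; the synthetic sequences are illustrative, not machine-tool data):

import numpy as np

def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational grade of each candidate sequence w.r.t. the reference.

    `reference` is the thermal-error sequence (length T); `candidates` is an
    (m, T) array of temperature-sensor sequences. A higher grade means a
    stronger relation, so high-grade sensors would be kept as modeling inputs.
    """
    def norm(x):
        return (x - x.min(axis=-1, keepdims=True)) / np.ptp(x, axis=-1, keepdims=True)
    delta = np.abs(norm(candidates) - norm(reference))     # (m, T) absolute differences
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)  # grey relational coefficients
    return coeff.mean(axis=1)

rng = np.random.default_rng(3)
t = np.linspace(0, 4, 200)
error = 1 - np.exp(-t)                                     # synthetic thermal error
sensors = np.vstack([1 - np.exp(-t) + 0.05 * rng.normal(size=t.size),   # related
                     np.sin(3 * t) + 0.05 * rng.normal(size=t.size),    # unrelated
                     0.8 * (1 - np.exp(-1.2 * t))])                     # related
print(np.round(grey_relational_grades(error, sensors), 3))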
A Model of Mental State Transition Network
NASA Astrophysics Data System (ADS)
Xiang, Hua; Jiang, Peilin; Xiao, Shuang; Ren, Fuji; Kuroiwa, Shingo
Emotion is one of the most essential and basic attributes of human intelligence. Current AI (Artificial Intelligence) research concentrates on the physical components of emotion and is rarely carried out directly from the viewpoint of psychology (1). Study of a model of artificial psychology is the first step in the development of human-computer interaction. As affective computing remains unpredictable, creating a reasonable mental model becomes the primary task for building a hybrid system. A pragmatic mental model is also the foundation of some key topics such as the recognition and synthesis of emotions. In this paper a Mental State Transition Network Model (2) is proposed to detect human emotions. Through a series of psychological experiments, we present a new way to predict a person's upcoming emotions from the various current emotional states under various stimuli. In addition, differences in gender and character are taken into consideration in our investigation. From the psychological experiment data derived from 200 questionnaires, a Mental State Transition Network Model describing the distribution of transitions among emotions and the relationships between internal mental states and external stimuli is derived, and the coefficients of the mental state transition network model are obtained. Across seven comparative evaluation experiments, an average precision rate of 0.843 is achieved for the proposed model on a set of samples.
Image Quality Assessment Based on Local Linear Information and Distortion-Specific Compensation.
Wang, Hanli; Fu, Jie; Lin, Weisi; Hu, Sudeng; Kuo, C-C Jay; Zuo, Lingxuan
2016-12-14
Image Quality Assessment (IQA) is a fundamental yet constantly developing task for computer vision and image processing. Most IQA evaluation mechanisms are based on the pertinence of subjective and objective estimation. Each image distortion type has its own property correlated with human perception. However, this intrinsic property may not be fully exploited by existing IQA methods. In this paper, we make two main contributions to the IQA field. First, a novel IQA method is developed based on a local linear model that examines the distortion between the reference and the distorted images for better alignment with human visual experience. Second, a distortion-specific compensation strategy is proposed to offset the negative effect on IQA modeling caused by different image distortion types. These score offsets are learned from several known distortion types. Furthermore, for an image with an unknown distortion type, a Convolutional Neural Network (CNN) based method is proposed to compute the score offset automatically. Finally, an integrated IQA metric is proposed by combining the aforementioned two ideas. Extensive experiments are performed to verify the proposed IQA metric, which demonstrate that the local linear model is useful in human perception modeling, especially for individual image distortion, and the overall IQA method outperforms several state-of-the-art IQA approaches.
Decomposition of timed automata for solving scheduling problems
NASA Astrophysics Data System (ADS)
Nishi, Tatsushi; Wakatake, Masato
2014-03-01
A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposable condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through the iterated computation of solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.
An entropy-assisted musculoskeletal shoulder model.
Xu, Xu; Lin, Jia-Hua; McGorry, Raymond W
2017-04-01
Optimization combined with a musculoskeletal shoulder model has been used to estimate mechanical loading of musculoskeletal elements around the shoulder. Traditionally, the objective function is to minimize the summation of the total activities of the muscles with forces, moments, and stability constraints. Such an objective function, however, tends to neglect antagonist muscle co-contraction. In this study, an objective function including an entropy term is proposed to address muscle co-contractions. A musculoskeletal shoulder model is developed to apply the proposed objective function. To find the optimal weight for the entropy term, an experiment was conducted. In the experiment, participants generated various 3-D shoulder moments in six shoulder postures. The surface EMG of 8 shoulder muscles was measured and compared with the predicted muscle activities based on the proposed objective function using Bhattacharyya distance and concordance ratio under different weights of the entropy term. The results show that a small weight of the entropy term can improve the predictability of the model in terms of muscle activities. Such a result suggests that the concept of entropy could be helpful for further understanding the mechanism of muscle co-contractions as well as developing a shoulder biomechanical model with greater validity. Copyright © 2017 Elsevier Ltd. All rights reserved.
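The exact objective of the study is not reproduced in the abstract; the following sketch shows the general idea on a toy single-joint problem, assuming a sum-of-squared-activations cost augmented with an entropy term that rewards load sharing, subject to a moment-balance constraint (the moment arms, maximum forces, and entropy weight are all assumed for illustration):

import numpy as np
from scipy.optimize import minimize

# Toy single-joint problem: four muscles with moment arms r must produce a net
# joint moment M. The entropy term (weight w) rewards load sharing among muscles.
r = np.array([0.04, 0.03, -0.025, -0.035])        # moment arms (m); two antagonists
f_max = np.array([400.0, 300.0, 350.0, 250.0])    # maximum muscle forces (N)
M_target = 8.0                                    # required net joint moment (N*m)
w = 0.05                                          # assumed weight of the entropy term

def cost(a):
    p = a / (a.sum() + 1e-12)                     # activation shares
    entropy = -(p * np.log(p + 1e-12)).sum()
    return (a ** 2).sum() - w * entropy           # squared activations minus entropy

constraints = ({"type": "eq", "fun": lambda a: r @ (a * f_max) - M_target},)
res = minimize(cost, x0=np.full(4, 0.2), bounds=[(0.0, 1.0)] * 4,
               constraints=constraints)
print("activations:", np.round(res.x, 3),
      "net moment:", round(float(r @ (res.x * f_max)), 3))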
Vehicle logo recognition using multi-level fusion model
NASA Astrophysics Data System (ADS)
Ming, Wei; Xiao, Jianli
2018-04-01
Vehicle logo recognition plays an important role in manufacturer identification and vehicle recognition. This paper proposes a new vehicle logo recognition algorithm. It has a hierarchical framework, which consists of two fusion levels. At the first level, a feature fusion model is employed to map the original features to a higher dimension feature space. In this space, the vehicle logos become more recognizable. At the second level, a weighted voting strategy is proposed to promote the accuracy and the robustness of the recognition results. To evaluate the performance of the proposed algorithm, extensive experiments are performed, which demonstrate that the proposed algorithm can achieve high recognition accuracy and work robustly.
Lee, Chun-Chia; Chang, Jen-Wei
2013-11-01
The need for teamwork has grown significantly in today's organizations. Especially in online game communities, teamwork is an important means of online game players' engagement. This study aims to investigate the impacts of trust on players' teamwork with affective commitment and normative commitment as mediators. Furthermore, this research includes team experience as a moderator to compare the difference between different player groups. A model was proposed and tested on data from 296 online game players using structural equation modeling. Findings revealed that team experience moderated the relationship between trust and teamwork. The results indicated that trust promotes teamwork through affective commitment only for players with high experience, not for those with low experience. Implications of the findings are discussed.
ERIC Educational Resources Information Center
Kanukollu, Shanta N.; Mahalingam, Ramaswami
2011-01-01
In this paper, we propose an interdisciplinary framework to study perceptions of child sexual abuse and help-seeking among South Asians living in the United States. We integrate research on social marginality, intersectionality, and cultural psychology to understand how marginalized social experience accentuates South Asian immigrants' desire to…
NASA Astrophysics Data System (ADS)
Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia
2016-04-01
Chaotic time series prediction based on nonlinear systems showed a superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SETAR) model in umbilical cord blood in an electronic waste (e-waste) contaminated area. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the proposed scheme's validity was further verified by numerical simulation experiments. Experiment results show: 1) seven kinds of PCB congeners negatively correlate with five different indices for birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) the prenatal PCB-exposed group is at greater risk compared to the reference group; 3) PCBs increasingly accumulated with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future was greater. The desirable numerical simulation experiment results demonstrated the feasibility of applying the mathematical model in the environmental toxicology field.
Xu, Xijin; Tang, Qian; Xia, Haiyue; Zhang, Yuling; Li, Weiqiu; Huo, Xia
2016-01-01
Chaotic time series prediction based on nonlinear systems showed a superior performance in the prediction field. We studied prenatal exposure to polychlorinated biphenyls (PCBs) by chaotic time series prediction using the least squares self-exciting threshold autoregressive (SETAR) model in umbilical cord blood in an electronic waste (e-waste) contaminated area. The specific prediction steps based on the proposed methods for prenatal PCB exposure were put forward, and the proposed scheme's validity was further verified by numerical simulation experiments. Experiment results show: 1) seven kinds of PCB congeners negatively correlate with five different indices for birth status: newborn weight, height, gestational age, Apgar score and anogenital distance; 2) the prenatal PCB-exposed group is at greater risk compared to the reference group; 3) PCBs increasingly accumulated with time in newborns; and 4) the possibility of newborns suffering from related diseases in the future was greater. The desirable numerical simulation experiment results demonstrated the feasibility of applying the mathematical model in the environmental toxicology field. PMID:27118260
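A minimal least-squares SETAR fit, reduced to a single lag and two regimes with the threshold found by grid search, is sketched below on synthetic data; it only illustrates the class of model named above and is not the paper's full prediction scheme for the PCB congener series.

import numpy as np

def fit_setar1(y):
    """Least-squares fit of a two-regime SETAR(1) model with delay 1:
        y[t] = a0 + a1*y[t-1]  if y[t-1] <= c,  else  y[t] = b0 + b1*y[t-1],
    with the threshold c chosen by grid search over candidate quantiles."""
    x, target = y[:-1], y[1:]
    best = (np.inf, None, None, None)
    for c in np.quantile(x, np.linspace(0.15, 0.85, 50)):
        low = x <= c
        if low.sum() < 5 or (~low).sum() < 5:
            continue
        sse, coefs = 0.0, []
        for mask in (low, ~low):
            A = np.column_stack([np.ones(mask.sum()), x[mask]])
            beta, *_ = np.linalg.lstsq(A, target[mask], rcond=None)
            sse += float(((A @ beta - target[mask]) ** 2).sum())
            coefs.append(beta)
        if sse < best[0]:
            best = (sse, c, coefs[0], coefs[1])
    return best

# Synthetic two-regime series; the fit should recover the regimes approximately.
rng = np.random.default_rng(4)
y = np.zeros(800)
for t in range(1, 800):
    if y[t - 1] <= 0.0:
        y[t] = 0.5 + 0.6 * y[t - 1] + 0.1 * rng.normal()
    else:
        y[t] = -0.3 + 0.4 * y[t - 1] + 0.1 * rng.normal()

sse, c, low_coef, high_coef = fit_setar1(y)
print(f"threshold ≈ {c:.2f}, low regime {np.round(low_coef, 2)}, high regime {np.round(high_coef, 2)}")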
NASA Astrophysics Data System (ADS)
Majdalani, Samer; Guinot, Vincent; Delenne, Carole; Gebran, Hicham
2018-06-01
This paper is devoted to theoretical and experimental investigations of solute dispersion in heterogeneous porous media. Dispersion in heterogeneous porous media has been reported to be scale-dependent, a likely indication that the proposed dispersion models are incompletely formulated. A high quality experimental data set of breakthrough curves in periodic model heterogeneous porous media is presented. In contrast with most previously published experiments, the present experiments involve numerous replicates. This allows the statistical variability of experimental data to be accounted for. Several models are benchmarked against the data set: the Fickian-based advection-dispersion, mobile-immobile, multirate, multiple region advection dispersion models, and a newly proposed transport model based on pure advection. A salient property of the latter model is that its solutions exhibit a ballistic behaviour for small times, while tending to the Fickian behaviour for large time scales. Model performance is assessed using a novel objective function accounting for the statistical variability of the experimental data set, while putting equal emphasis on both small and large time scale behaviours. Besides being as accurate as the other models, the new purely advective model has the advantages that (i) it does not exhibit the undesirable effects associated with the usual Fickian operator (namely the infinite solute front propagation speed), and (ii) it allows dispersive transport to be simulated on every heterogeneity scale using scale-independent parameters.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1977-01-01
A statistical decision procedure called chain pooling had been developed for model selection in fitting the results of a two-level fixed-effects full or fractional factorial experiment not having replication. The basic strategy included the use of one nominal level of significance for a preliminary test and a second nominal level of significance for the final test. The subject has been reexamined from the point of view of using as many as three successive statistical model deletion procedures in fitting the results of a single experiment. The investigation consisted of random number studies intended to simulate the results of a proposed aircraft turbine-engine rotor-burst-protection experiment. As a conservative approach, population model coefficients were chosen to represent a saturated 2 to the 4th power experiment with a distribution of parameter values unfavorable to the decision procedures. Three model selection strategies were developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheehey, P.T.; Faehl, R.J.; Kirkpatrick, R.C.
1997-12-31
Magnetized Target Fusion (MTF) experiments, in which a preheated and magnetized target plasma is hydrodynamically compressed to fusion conditions, present some challenging computational modeling problems. Recently, joint experiments relevant to MTF (Russian acronym MAGO, for Magnitnoye Obzhatiye, or magnetic compression) have been performed by Los Alamos National Laboratory and the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF). Modeling of target plasmas must accurately predict plasma densities, temperatures, fields, and lifetime; dense plasma interactions with wall materials must be characterized. Modeling of magnetically driven imploding solid liners, for compression of target plasmas, must address issues such as Rayleigh-Taylor instability growth in the presence of material strength, and glide plane-liner interactions. Proposed experiments involving liner-on-plasma compressions to fusion conditions will require integrated target plasma and liner calculations. Detailed comparison of the modeling results with experiment will be presented.
Context, Learning, and Extinction
ERIC Educational Resources Information Center
Gershman, Samuel J.; Blei, David M.; Niv, Yael
2010-01-01
A. Redish et al. (2007) proposed a reinforcement learning model of context-dependent learning and extinction in conditioning experiments, using the idea of "state classification" to categorize new observations into states. In the current article, the authors propose an interpretation of this idea in terms of normative statistical inference. They…
Design of a 4-DOF MR haptic master for application to robot surgery: virtual environment work
NASA Astrophysics Data System (ADS)
Oh, Jong-Seok; Choi, Seung-Hyun; Choi, Seung-Bok
2014-09-01
This paper presents the design and control performance of a novel type of 4-degrees-of-freedom (4-DOF) haptic master in cyberspace for a robot-assisted minimally invasive surgery (RMIS) application. By using a controllable magnetorheological (MR) fluid, the proposed haptic master can have a feedback function for a surgical robot. Due to the difficulty in utilizing real human organs in the experiment, the cyberspace that features the virtual object is constructed to evaluate the performance of the haptic master. In order to realize the cyberspace, a volumetric deformable object is represented by a shape-retaining chain-linked (S-chain) model, which is a fast volumetric model and is suitable for real-time applications. In the haptic architecture for an RMIS application, the desired torque and position induced from the virtual object of the cyberspace and the haptic master of real space are transferred to each other. In order to validate the superiority of the proposed master and volumetric model, a tracking control experiment is implemented with a nonhomogeneous volumetric cubic object to demonstrate that the proposed model can be utilized in real-time haptic rendering architecture. A proportional-integral-derivative (PID) controller is then designed and empirically implemented to accomplish the desired torque trajectories. It has been verified from the experiment that tracking the control performance for torque trajectories from a virtual slave can be successfully achieved.
NASA Astrophysics Data System (ADS)
Jiang, Shanchao; Wang, Jing; Sui, Qingmei
2015-11-01
One novel distinguishable circumferential inclined-direction tilt sensor is demonstrated by incorporating two strain-sensitive fiber Bragg gratings (FBGs) with two orthogonal triangular cantilever beams and using one fiber Bragg grating (FBG) as a temperature compensation element. According to spatial vectors and space geometry, a theoretical calculation model of the proposed FBG tilt sensor, which can be used to obtain the azimuth and tilt angle of the inclined direction, is established. To obtain its measuring characteristics, a calibration experiment on a prototype of the proposed FBG tilt sensor is carried out. Analysis of the temperature sensitivity experiment data shows that the proposed FBG tilt sensor exhibits excellent temperature compensation characteristics. In the 2-D tilt angle experiment, the tilt measurement sensitivities of the two strain-sensitive FBGs are 140.85°/nm and 101.01°/nm over a wide range of 60°. Further, the azimuth and tilt angle of the inclined direction can be obtained by the proposed FBG tilt sensor, which is verified in the circumferential angle experiment. Experiment data show that the relative errors of azimuth are 0.55% (positive direction) and 1.14% (negative direction), respectively, and the relative errors of tilt angle are all less than 3%. The experiment results confirm that the proposed distinguishable circumferential inclined-direction tilt sensor based on FBG can achieve azimuth and tilt angle measurement with a wide measuring range and high accuracy.
Yang, Yongji; Moser, Michael A J; Zhang, Edwin; Zhang, Wenjun; Zhang, Bing
2018-01-01
The aim of this study was to develop a statistical model for cell death by irreversible electroporation (IRE) and to show that the statistical model is more accurate than the electric field threshold model in the literature using cervical cancer cells in vitro. The HeLa cell line was cultured and treated with different IRE protocols in order to obtain data for modeling the statistical relationship between cell death and pulse-setting parameters. In total, 340 in vitro experiments were performed with a commercial IRE pulse system, including a pulse generator and an electric cuvette. The trypan blue staining technique was used to evaluate cell death after 4 hours of incubation following IRE treatment. The Peleg-Fermi model was used in the study to build the statistical relationship using the cell viability data obtained from the in vitro experiments. A finite element model of IRE for the electric field distribution was also built. Comparison of ablation zones between the statistical model and the electric threshold model (drawn from the finite element model) was used to show the accuracy of the proposed statistical model in the description of the ablation zone and its applicability for different pulse-setting parameters. The statistical models describing the relationships between HeLa cell death and pulse length and the number of pulses, respectively, were built. The values of the curve fitting parameters were obtained using the Peleg-Fermi model for the treatment of cervical cancer with IRE. The difference in the ablation zone between the statistical model and the electric threshold model was also illustrated to show the accuracy of the proposed statistical model in the representation of the ablation zone in IRE. This study concluded that: (1) the proposed statistical model accurately described the ablation zone of IRE with cervical cancer cells, and was more accurate compared with the electric field model; (2) the proposed statistical model was able to estimate the value of the electric field threshold for the computer simulation of IRE in the treatment of cervical cancer; and (3) the proposed statistical model was able to express the change in ablation zone with the change in pulse-setting parameters.
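The Peleg-Fermi survival curve named above has the form S(E) = 1 / (1 + exp((E - Ec)/A)); a small fitting sketch on illustrative (not experimental) viability data is given below, where the fitted Ec plays the role of an effective field threshold for a subsequent finite element simulation:

import numpy as np
from scipy.optimize import curve_fit

def peleg_fermi(E, Ec, A):
    """Survival (viability) fraction after exposure to field strength E (V/cm);
    Ec is the 50%-kill field strength and A controls the transition steepness."""
    return 1.0 / (1.0 + np.exp((E - Ec) / A))

# Illustrative viability data for one pulse protocol (not the paper's measurements).
E = np.array([200, 400, 600, 800, 1000, 1200, 1400, 1600], dtype=float)   # V/cm
viability = np.array([0.97, 0.93, 0.80, 0.55, 0.30, 0.12, 0.05, 0.02])

(Ec_hat, A_hat), _ = curve_fit(peleg_fermi, E, viability, p0=(800.0, 150.0))
print(f"Ec ≈ {Ec_hat:.0f} V/cm, A ≈ {A_hat:.0f} V/cm")
# The fitted Ec can serve as an effective field threshold in a finite element
# simulation of the ablation zone.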
NASA Astrophysics Data System (ADS)
Rakshit, Suman; Khare, Swanand R.; Datta, Biswa Nath
2018-07-01
One of the most important yet difficult aspects of the Finite Element Model Updating Problem is to preserve the finite element inherited structures in the updated model. Finite element matrices are in general symmetric, positive definite (or semi-definite) and banded (tridiagonal, diagonal, penta-diagonal, etc.). Though a large number of papers have been published in recent years on various aspects of solutions of this problem, papers dealing with structure preservation are almost nonexistent. A novel optimization-based approach that preserves the symmetric tridiagonal structures of the stiffness and damping matrices is proposed in this paper. An analytical expression for the global minimum solution of the associated optimization problem, along with the results of numerical experiments obtained both by the analytical expression and by an appropriate numerical optimization algorithm, is presented. The results of the numerical experiments support the validity of the proposed method.
Time domain simulation of novel photovoltaic materials
NASA Astrophysics Data System (ADS)
Chung, Haejun
Thin-film silicon-based solar cells have operated far from the Shockley-Queisser limit in all experiments to date. Novel light-trapping structures, however, may help address this limitation. Finite-difference time-domain simulation methods offer the potential to accurately determine the light-trapping potential of arbitrary dielectric structures, but suffer from materials modeling problems. In this thesis, existing dispersion models for novel photovoltaic materials will be reviewed, and a novel dispersion model, known as the quadratic complex rational function (QCRF), will be proposed. It has the advantage of accurately fitting experimental semiconductor dielectric values over a wide bandwidth in a numerically stable fashion. Applying the proposed dispersion model, a statistically correlated surface texturing method will be suggested, and its light absorption rates will be explained. In future work, these designs will be combined with other structures and optimized to help guide future experiments.
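The QCRF idea can be illustrated as a rational-function fit of a complex permittivity curve that becomes linear in its coefficients after multiplying through by the denominator; the sketch below fits a synthetic single-pole Lorentz medium on a normalized frequency axis (the functional form and normalization are assumptions for illustration, not the thesis' fitting procedure):

import numpy as np

# QCRF form on a normalized frequency axis s = j*w/w_ref:
#     eps(w) ≈ (A0 + A1*s + A2*s**2) / (1 + B1*s + B2*s**2)
# Multiplying through by the denominator makes the fit linear in the five real
# coefficients, which are then found by least squares. The "measured" curve is
# a synthetic single-pole Lorentz medium, not real thin-film silicon data.
w = 2 * np.pi * np.linspace(1e14, 6e14, 200)                 # rad/s
w0, wp, gamma = 2 * np.pi * 3.5e14, 2 * np.pi * 3.0e14, 2 * np.pi * 2e13
eps = 1.0 + wp**2 / (w0**2 - w**2 - 1j * gamma * w)          # Lorentz permittivity

s = 1j * w / w.max()                                         # normalize for conditioning
# Unknowns [A0, A1, A2, B1, B2] from: A0 + A1*s + A2*s^2 - eps*(B1*s + B2*s^2) = eps
M = np.column_stack([np.ones_like(s), s, s**2, -eps * s, -eps * s**2])
A_ls = np.vstack([M.real, M.imag])
b_ls = np.concatenate([eps.real, eps.imag])
coef, *_ = np.linalg.lstsq(A_ls, b_ls, rcond=None)
A0, A1, A2, B1, B2 = coef
eps_fit = (A0 + A1 * s + A2 * s**2) / (1 + B1 * s + B2 * s**2)
print("max |fit error| over the band:", float(np.abs(eps_fit - eps).max()))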
Benchmarking Multilayer-HySEA model for landslide generated tsunami. NTHMP validation process.
NASA Astrophysics Data System (ADS)
Macias, J.; Escalante, C.; Castro, M. J.
2017-12-01
Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated NTHMP to address the need for benchmarking models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models when the source is seismic. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven benchmarks. The Multilayer-HySEA model, including non-hydrostatic effects, has been used to perform all the benchmarking problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston, on January 9-11, 2017 by NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements. This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).
Evaluation of two models for predicting elemental accumulation by arthropods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webster, J.R.; Crossley, D.A. Jr.
1978-06-15
Two different models have been proposed for predicting elemental accumulation by arthropods. Parameters of both models can be quantified from radioisotope elimination experiments. Our analysis of the 2 models shows that both predict identical elemental accumulation for a whole organism, though differing in the accumulation in body and gut. We quantified both models with experimental data from ¹³⁴Cs and ⁸⁵Sr elimination by crickets. Computer simulations of radioisotope accumulation were then compared with actual accumulation experiments. Neither model showed exact fit to the experimental data, though both showed the general pattern of elemental accumulation.
Marković, Slobodan
2012-01-01
In this paper aesthetic experience is defined as an experience qualitatively different from everyday experience and similar to other exceptional states of mind. Three crucial characteristics of aesthetic experience are discussed: fascination with an aesthetic object (high arousal and attention), appraisal of the symbolic reality of an object (high cognitive engagement), and a strong feeling of unity with the object of aesthetic fascination and aesthetic appraisal. In a proposed model, two parallel levels of aesthetic information processing are proposed. On the first level two sub-levels of narrative are processed, story (theme) and symbolism (deeper meanings). The second level includes two sub-levels, perceptual associations (implicit meanings of object's physical features) and detection of compositional regularities. Two sub-levels are defined as crucial for aesthetic experience, appraisal of symbolism and compositional regularities. These sub-levels require some specific cognitive and personality dispositions, such as expertise, creative thinking, and openness to experience. Finally, feedback of emotional processing is included in our model: appraisals of everyday emotions are specified as a matter of narrative content (eg, empathy with characters), whereas the aesthetic emotion is defined as an affective evaluation in the process of symbolism appraisal or the detection of compositional regularities. PMID:23145263
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jing Yanfei, E-mail: yanfeijing@uestc.edu.c; Huang Tingzhu, E-mail: tzhuang@uestc.edu.c; Duan Yong, E-mail: duanyong@yahoo.c
This study is mainly focused on iterative solutions with simple diagonal preconditioning to two complex-valued nonsymmetric systems of linear equations arising from a computational chemistry model problem proposed by Sherry Li of NERSC. Numerical experiments show the feasibility of iterative methods to some extent when applied to the problems and reveal the competitiveness of our recently proposed Lanczos biconjugate A-orthonormalization methods relative to other classic and popular iterative methods. In addition, the experiment results indicate that application-specific preconditioners may be required to accelerate convergence.
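SciPy does not ship the Lanczos biconjugate A-orthonormalization solvers studied here, so the sketch below only illustrates the effect of simple diagonal (Jacobi) preconditioning on a stand-in complex nonsymmetric system, using BiCGSTAB in place of the paper's methods; the test matrix is random and made diagonally dominant by construction.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(5)
n = 2000
# Stand-in complex nonsymmetric sparse system, made diagonally dominant.
A = sp.random(n, n, density=2e-3, random_state=5) \
    + 1j * sp.random(n, n, density=2e-3, random_state=6) \
    + sp.diags(10.0 + rng.random(n) + 1j * rng.random(n))
A = A.tocsr()
b = rng.random(n) + 1j * rng.random(n)

diag = A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda x: x / diag, dtype=complex)  # Jacobi

iters = {"plain": 0, "jacobi": 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

x_plain, info_plain = spla.bicgstab(A, b, callback=counter("plain"))
x_jac, info_jac = spla.bicgstab(A, b, M=M, callback=counter("jacobi"))
print("no preconditioner: iterations =", iters["plain"], "converged =", info_plain == 0)
print("Jacobi (diagonal): iterations =", iters["jacobi"], "converged =", info_jac == 0)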
NHPP-Based Software Reliability Models Using Equilibrium Distribution
NASA Astrophysics Data System (ADS)
Xiao, Xiao; Okamura, Hiroyuki; Dohi, Tadashi
Non-homogeneous Poisson processes (NHPPs) have gained much popularity in actual software testing phases to estimate the software reliability, the number of remaining faults in software and the software release timing. In this paper, we propose a new modeling approach for the NHPP-based software reliability models (SRMs) to describe the stochastic behavior of software fault-detection processes. The fundamental idea is to apply the equilibrium distribution to the fault-detection time distribution in NHPP-based modeling. We also develop efficient parameter estimation procedures for the proposed NHPP-based SRMs. Through numerical experiments, it can be concluded that the proposed NHPP-based SRMs outperform the existing ones in many data sets from the perspective of goodness-of-fit and prediction performance.
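The equilibrium (stationary-excess) distribution used in this modeling approach is Fe(t) = (1/mu) * integral from 0 to t of (1 - F(x)) dx for a fault-detection time distribution F with mean mu; a small numerical sketch with an assumed Weibull F and illustrative parameters (not one of the paper's SRMs) is given below:

import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.stats import weibull_min

# Equilibrium distribution of the fault-detection time F and an NHPP mean value
# function m(t) = omega * Fe(t), with omega the total fault content.
shape, scale, omega = 1.8, 50.0, 120.0        # assumed Weibull F and fault content
t = np.linspace(0.0, 300.0, 1201)             # testing time (e.g., days)
survival = weibull_min.sf(t, shape, scale=scale)
mu = weibull_min.mean(shape, scale=scale)
Fe = cumulative_trapezoid(survival, t, initial=0.0) / mu
m = omega * Fe                                # expected cumulative detected faults
print(f"expected faults detected by t=100: {np.interp(100.0, t, m):.1f} of {omega:.0f}")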
A Novel BA Complex Network Model on Color Template Matching
Han, Risheng; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of a template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved based on the proposed algorithm. To the best of our knowledge, this is the first study about how to model the color space of images using a proper complex network model and apply the complex network model to template matching. PMID:25243235
A novel BA complex network model on color template matching.
Han, Risheng; Shen, Shigen; Yue, Guangxue; Ding, Hui
2014-01-01
A novel BA complex network model of color space is proposed based on two fundamental rules of the BA scale-free network model: growth and preferential attachment. The scale-free characteristic of color space is discovered by analyzing the evolving process of a template's color distribution. The template's BA complex network model can then be used to select important color pixels, which have much larger effects than other color pixels in the matching process. The proposed BA complex network model of color space can be easily integrated into many traditional template matching algorithms, such as SSD-based matching and SAD-based matching. Experiments show that the performance of color template matching can be improved based on the proposed algorithm. To the best of our knowledge, this is the first study about how to model the color space of images using a proper complex network model and apply the complex network model to template matching.
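Only the two generic BA rules named above (growth and preferential attachment) are sketched here; the construction of the color-space network from a template and the subsequent matching step are not reproduced.

import random

def ba_network(n_nodes, m_edges, seed=0):
    """Grow a Barabasi-Albert graph: each new node attaches to m_edges existing
    nodes chosen with probability proportional to their degree."""
    random.seed(seed)
    targets = list(range(m_edges + 1))
    adjacency = {i: set(targets) - {i} for i in targets}       # fully connected seed
    degree_bag = [i for i in targets for _ in range(m_edges)]  # nodes repeated by degree
    for new in range(m_edges + 1, n_nodes):
        chosen = set()
        while len(chosen) < m_edges:
            chosen.add(random.choice(degree_bag))              # preferential attachment
        adjacency[new] = set(chosen)
        for c in chosen:
            adjacency[c].add(new)
        degree_bag.extend(chosen)
        degree_bag.extend([new] * m_edges)
    return adjacency

g = ba_network(2000, 3)
degrees = sorted((len(neighbors) for neighbors in g.values()), reverse=True)
# A heavy-tailed degree distribution emerges: a few hub nodes dominate.
print("five largest degrees:", degrees[:5], " median degree:", degrees[len(degrees) // 2])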
Kimura, Shuhei; Sato, Masanao; Okada-Hatakeyama, Mariko
2013-01-01
The inference of a genetic network is a problem in which mutual interactions among genes are inferred from time-series of gene expression levels. While a number of models have been proposed to describe genetic networks, this study focuses on a mathematical model proposed by Vohradský. Because of its advantageous features, several researchers have proposed the inference methods based on Vohradský's model. When trying to analyze large-scale networks consisting of dozens of genes, however, these methods must solve high-dimensional non-linear function optimization problems. In order to resolve the difficulty of estimating the parameters of the Vohradský's model, this study proposes a new method that defines the problem as several two-dimensional function optimization problems. Through numerical experiments on artificial genetic network inference problems, we showed that, although the computation time of the proposed method is not the shortest, the method has the ability to estimate parameters of Vohradský's models more effectively with sufficiently short computation times. This study then applied the proposed method to an actual inference problem of the bacterial SOS DNA repair system, and succeeded in finding several reasonable regulations. PMID:24386175
Lu, Ji; Pan, Junhao; Zhang, Qiang; Dubé, Laurette; Ip, Edward H
2015-01-01
With intensively collected longitudinal data, recent advances in the experience-sampling method (ESM) benefit social science empirical research, but also pose important methodological challenges. As traditional statistical models are not generally well equipped to analyze a system of variables that contain feedback loops, this paper proposes the utility of an extended hidden Markov model to model the reciprocal relationship between momentary emotion and eating behavior. This paper revisited an ESM data set (Lu, Huet, & Dube, 2011) that observed 160 participants' food consumption and momentary emotions 6 times per day over 10 days. Focusing on the analysis of the feedback loop between mood and meal-healthiness decisions, the proposed reciprocal Markov model (RMM) can accommodate both hidden ("general" emotional states: positive vs. negative state) and observed states (meal: healthier, same or less healthy than usual) without presuming independence between observations and smooth trajectories of mood or behavior changes. The results of the RMM analyses illustrated the reciprocal chains of meal consumption and mood as well as the effect of contextual factors that moderate the interrelationship between eating and emotion. A simulation experiment that generated data consistent with the empirical study further demonstrated that the procedure is promising in terms of recovering the parameters.
Perea, Manuel; Acha, Joana
2009-02-01
Recently, a number of input coding schemes (e.g., SOLAR model, SERIOL model, open-bigram model, overlap model) have been proposed that capture the transposed-letter priming effect (i.e., faster response times for jugde-JUDGE than for jupte-JUDGE). In their current version, these coding schemes do not assume any processing differences between vowels and consonants. However, in a lexical decision task, Perea and Lupker (2004, JML; Lupker, Perea, & Davis, 2008, L&CP) reported that transposed-letter priming effects occurred for consonant transpositions but not for vowel transpositions. This finding poses a challenge for these recently proposed coding schemes. Here, we report four masked priming experiments that examine whether this consonant/vowel dissociation in transposed-letter priming is task-specific. In Experiment 1, we used a lexical decision task and found a transposed-letter priming effect only for consonant transpositions. In Experiments 2-4, we employed a same-different task - a task which taps early perceptual processes - and found a robust transposed-letter priming effect that did not interact with consonant/vowel status. We examine the implications of these findings for the front-end of the models of visual word recognition.
Toward Reducing Ageism: PEACE (Positive Education about Aging and Contact Experiences) Model.
Levy, Sheri R
2018-03-19
The population of older adults is growing worldwide. Negative ageism (negative attitudes and behavior toward older adults) is a serious international concern that negatively influences not only older adults but also individuals across the age continuum. This article proposes and examines the application of an integrative theoretical model across empirical evidence in the literature on ageism in psychology, medicine, social work, and sociology. The proposed Positive Education about Aging and Contact Experiences (PEACE) model focuses on 2 key contributing factors expected to reduce negative ageism: (a) education about aging including facts on aging along with positive older role models that dispel negative and inaccurate images of older adulthood; and (b) positive contact experiences with older adults that are individualized, provide or promote equal status, are cooperative, involve sharing of personal information, and are sanctioned within the setting. These 2 key contributing factors have the potential to be interconnected and work together to reduce negative stereotypes, aging anxiety, prejudice, and discrimination associated with older adults and aging. This model has implications for policies and programs that can improve the health and well-being of individuals, as well as expand the residential, educational, and career options of individuals across the age continuum.
Zhang, Jian-Hua; Böhme, Johann F
2007-11-01
In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.
Method for six-legged robot stepping on obstacles by indirect force estimation
NASA Astrophysics Data System (ADS)
Xu, Yilin; Gao, Feng; Pan, Yang; Chai, Xun
2016-07-01
Adaptive gaits for legged robots often require force sensors installed on the foot tips; however, impact, temperature, or humidity can affect or even damage those sensors. Efforts have been made to realize indirect force estimation on legged robots whose leg structures are based on planar mechanisms. Robot Octopus III is a six-legged robot using spatial parallel mechanism (UP-2UPS) legs. This paper proposes a novel method to realize indirect force estimation on a walking robot based on a spatial parallel mechanism. The direct and inverse kinematics models are established, and the force Jacobian matrix is derived from the kinematics model, yielding the indirect force estimation model. The relation between the output torques of the three motors installed on one leg and the external force exerted on the foot tip is then described. Furthermore, an adaptive tripod static gait is designed: the robot alters its leg trajectory to step on obstacles by using the proposed adaptive gait. Both the indirect force estimation model and the adaptive gait are implemented and optimized in a real-time control system. One experiment is carried out to validate the indirect force estimation model, and the adaptive gait is tested in another. Results show that the robot can successfully step onto a 0.2 m-high obstacle. The proposed method enables the six-legged robot with spatial parallel mechanism legs to overcome obstacles while avoiding the installation of electric force sensors in the harsh environment at the robot's foot tips.
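As an illustration of the indirect estimation step described above, the static mapping between actuator torques and the foot-tip force can be sketched as follows; the Jacobian values and torques are hypothetical placeholders, not the UP-2UPS kinematics derived in the paper.

```python
import numpy as np

def estimate_foot_force(J, tau):
    """Estimate the external force at the foot tip from motor torques.

    Uses the static relation tau = J^T * F, so F = (J^T)^-1 * tau.
    J   : 3x3 force Jacobian of one leg (from the kinematics model)
    tau : torques of the three leg motors
    """
    return np.linalg.solve(J.T, tau)

# Hypothetical numbers, for illustration only.
J = np.array([[0.12, 0.00, 0.05],
              [0.00, 0.10, 0.02],
              [0.03, 0.01, 0.15]])   # assumed Jacobian at one leg posture
tau = np.array([1.8, 0.9, 2.4])      # measured motor torques, N*m
print(estimate_foot_force(J, tau))   # estimated foot-tip force components, N
```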
A Field-Based Curriculum Model for Earth Science Teacher-Preparation Programs.
ERIC Educational Resources Information Center
Dubois, David D.
1979-01-01
This study proposed a model set of cognitive-behavioral objectives for field-based teacher education programs for earth science teachers. It describes field experience integration into teacher education programs. The model is also applicable for evaluation of earth science teacher education programs. (RE)
Yang, Tao; Sezer, Hayri; Celik, Ismail B.; ...
2015-06-02
In the present paper, a physics-based procedure combining experiments and multi-physics numerical simulations is developed for overall analysis of SOFCs operational diagnostics and performance predictions. In this procedure, essential information for the fuel cell is extracted first by utilizing empirical polarization analysis in conjunction with experiments and refined by multi-physics numerical simulations via simultaneous analysis and calibration of polarization curve and impedance behavior. The performance at different utilization cases and operating currents is also predicted to confirm the accuracy of the proposed model. It is demonstrated that, with the present electrochemical model, three air/fuel flow conditions are needed to produce a set of complete data for better understanding of the processes occurring within SOFCs. After calibration against button cell experiments, the methodology can be used to assess the performance of planar cells without further calibration. The proposed methodology would accelerate the calibration process and improve the efficiency of design and diagnostics.
Integrating Women's Issues in the Social Work Curriculum: A Proposal.
ERIC Educational Resources Information Center
Carter, Carolyn; And Others
1994-01-01
Social work faculty revising courses at Arizona State University's School of Social Work, attempting to integrate content on women, propose that the development of new models reflecting women's experiences is required. Examples of curricular changes made using this approach are offered. They address direct practice, family practice,…
Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.
2015-01-01
In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is used on the Shanghai stock market index, and the experiments show that the model achieves a good level of fitness. The proposed model is then compared with one that uses the traditional dimension reduction method principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as alternative models based on ICA and on the multilayer perceptron. PMID:25849483
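A minimal sketch of the forecasting pipeline described in this abstract is given below, assuming toy data; the (2D)2PCA step follows the standard row/column projection construction, while kernel ridge regression with an RBF kernel stands in for the paper's RBF neural network.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def two_directional_2dpca(mats, k_rows=5, k_cols=5):
    """(2D)^2 PCA: project each window matrix from both directions."""
    X = np.stack(mats)                                      # (n, w, d) sliding-window matrices
    Xc = X - X.mean(axis=0)
    G_col = np.mean(Xc.transpose(0, 2, 1) @ Xc, axis=0)     # d x d column covariance
    G_row = np.mean(Xc @ Xc.transpose(0, 2, 1), axis=0)     # w x w row covariance
    W = np.linalg.eigh(G_col)[1][:, -k_cols:]               # right projection (top eigenvectors)
    Z = np.linalg.eigh(G_row)[1][:, -k_rows:]               # left projection
    return np.stack([Z.T @ M @ W for M in X]), Z, W

# Toy data: 200 windows of 20 days x 36 technical indicators (random stand-in).
rng = np.random.default_rng(0)
mats = rng.normal(size=(200, 20, 36))
target = rng.normal(size=200)                               # next-day return, illustrative only

feats, Z, W = two_directional_2dpca(mats)
X_flat = feats.reshape(len(feats), -1)

# Stand-in for the paper's RBF neural network: kernel ridge with an RBF kernel.
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.01).fit(X_flat[:150], target[:150])
pred = model.predict(X_flat[150:])
```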
A Model for Effective Teaching and Learning in Research Methods.
ERIC Educational Resources Information Center
Poindexter, Paula M.
1998-01-01
Proposes a teaching model for making research relevant. Presents a case study of the model as used in advertising and public relations research classes. Notes that the model consists of a knowledge base, team process, a realistic goal-oriented experience, self-management, expert consultation, and evaluation and synthesis. Discusses resulting…
Sediment-transport experiments in zero-gravity
NASA Technical Reports Server (NTRS)
Iversen, James D.; Greeley, Ronald
1987-01-01
One of the important parameters in the analysis of sediment entrainment and transport is gravitational attraction. The availability of a laboratory in Earth orbit would afford an opportunity to conduct experiments in zero and variable gravity environments. Elimination of gravitational attraction as a factor in such experiments would enable other critical parameters (such as particle cohesion and aerodynamic forces) to be evaluated much more accurately. A Carousel Wind Tunnel (CWT) is proposed for use in conducting experiments concerning sediment particle entrainment and transport in a space station. In order to test the concept of this wind tunnel design, a one-third scale model CWT was constructed and calibrated. Experiments were conducted in the prototype to determine the feasibility of studying various aeolian processes, and the results were compared with various numerical analyses. Several types of experiments appear to be feasible utilizing the proposed apparatus.
Sediment-transport experiments in zero-gravity
NASA Technical Reports Server (NTRS)
Iversen, J. D.; Greeley, R.
1986-01-01
One of the important parameters in the analysis of sediment entrainment and transport is gravitational attraction. The availability of a laboratory in Earth orbit would afford an opportunity to conduct experiments in zero and variable gravity environments. Elimination of gravitational attraction as a factor in such experiments would enable other critical parameters (such as particle cohesion and aerodynamic forces) to be evaluated much more accurately. A Carousel Wind Tunnel (CWT) is proposed for use in conducting experiments concerning sediment particle entrainment and transport in a space station. In order to test the concept of this wind tunnel design, a one-third scale model CWT was constructed and calibrated. Experiments were conducted in the prototype to determine the feasibility of studying various aeolian processes, and the results were compared with various numerical analyses. Several types of experiments appear to be feasible utilizing the proposed apparatus.
Marsh, T; Wright, P; Smith, S
2001-04-01
New and emerging media technologies have the potential to induce a variety of experiences in users. In this paper, it is argued that the inducement of experience presupposes that users are absorbed in the illusion created by these media. Looking to another successful visual medium, film, this paper borrows from the techniques used in "shaping experience" to hold spectators' attention in the illusion of film, and identifies what breaks the illusion/experience for spectators. This paper focuses on one medium, virtual reality (VR), and advocates a transparent or "invisible style" of interaction. We argue that transparency keeps users in the "flow" of their activities and consequently enhances experience in users. Breakdown in activities breaks the experience and subsequently provides opportunities to identify and analyze potential causes of usability problems. Adopting activity theory, we devise a model of interaction with VR--through consciousness and activity--and introduce the concept of breakdown in illusion. From this, a model of effective interaction with VR is devised and the occurrence of breakdown in interaction and illusion is identified along a continuum of engagement. Evaluation guidelines for the design of experience are proposed and applied to usability problems detected in an empirical study of a head-mounted display (HMD) VR system. This study shows that the guidelines are effective in the evaluation of VR. Finally, we look at the potential experiences that may be induced in users and propose a way to evaluate user experience in virtual environments (VEs) and other new and emerging media.
ERIC Educational Resources Information Center
Boylan, Mark; Coldwell, Mike; Maxwell, Bronwen; Jordan, Julie
2018-01-01
One approach to designing, researching or evaluating professional learning experiences is to use models of learning processes. Here we analyse and critique five significant contemporary analytical models: three variations on path models, proposed by Guskey, by Desimone and by Clarke and Hollingsworth; a model using a systemic conceptualisation of…
NASA Astrophysics Data System (ADS)
Baba, S.; Sakai, T.; Sawada, K.; Kubota, C.; Wada, Y.; Shinmoto, Y.; Ohta, H.; Asano, H.; Kawanami, O.; Suzuki, K.; Imai, R.; Kawasaki, H.; Fujii, K.; Takayanagi, M.; Yoda, S.
2011-12-01
Boiling is one of the most efficient modes of heat transfer owing to phase change and is regarded as a promising means for thermal management systems handling a large amount of waste heat at high heat flux. However, gravity effects on two-phase flow phenomena and the corresponding heat transfer characteristics have not been clarified in detail. Experiments on boiling two-phase flow onboard the Japanese Experiment Module "KIBO" of the International Space Station are proposed to clarify both the heat transfer and the flow characteristics under microgravity conditions. To verify the feasibility of the ISS experiments, the Bread Board Model is assembled, and its performance and the function of the components installed in a test loop are examined.
Learning LM Specificity for Ganglion Cells
NASA Technical Reports Server (NTRS)
Ahumada, Albert J.
2015-01-01
Unsupervised learning models based on visual experience have been proposed (Ahumada and Mulligan, 1990; Wachtler, Doi, Lee and Sejnowski, 2007) that allow the cortex to develop units with LM-specific color-opponent receptive fields like the blob cells reported by Hubel and Wiesel. These models used ganglion cells with LM-indiscriminate wiring as inputs to the learning mechanism, which was presumed to occur at the cortical level.
Mechanistic Understanding of Microbial Plugging for Improved Sweep Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steven Bryant; Larry Britton
2008-09-30
Microbial plugging has been proposed as an effective low cost method of permeability reduction. Yet there is a dearth of information on the fundamental processes of microbial growth in porous media, and there are no suitable data to model the process of microbial plugging as it relates to sweep efficiency. To optimize the field implementation, better mechanistic and volumetric understanding of biofilm growth within a porous medium is needed. In particular, the engineering design hinges upon a quantitative relationship between amount of nutrient consumption, amount of growth, and degree of permeability reduction. In this project experiments were conducted to obtain new data to elucidate this relationship. Experiments in heterogeneous (layered) beadpacks showed that microbes could grow preferentially in the high permeability layer. Ultimately this caused flow to be equally divided between high and low permeability layers, precisely the behavior needed for MEOR. Remarkably, classical models of microbial nutrient uptake in batch experiments do not explain the nutrient consumption by the same microbes in flow experiments. We propose a simple extension of classical kinetics to account for the self-limiting consumption of nutrient observed in our experiments, and we outline a modeling approach based on architecture and behavior of biofilms. Such a model would account for the changing trend of nutrient consumption by bacteria with the increasing biomass and the onset of biofilm formation. However no existing model can explain the microbial preference for growth in high permeability regions, nor is there any obvious extension of the model for this observation. An attractive conjecture is that quorum sensing is involved in the heterogeneous bead packs.
Exploiting salient semantic analysis for information retrieval
NASA Astrophysics Data System (ADS)
Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui
2016-11-01
Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representation for words or documents. However, its feasibility and effectiveness in information retrieval is mostly unknown. In this paper, we study how to efficiently use SSA to improve the information retrieval performance, and propose a SSA-based retrieval method under the language model framework. First, SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations can be used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard text retrieval conference (TREC) collections. Experiment results on standard TREC collections show the proposed models consistently outperform the existing Wikipedia-based retrieval methods.
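A minimal sketch of the combination step described above, assuming a simple Jelinek-Mercer query-likelihood estimator; the interpolation weight beta and the word/concept inputs are illustrative, not the paper's exact estimator.

```python
import math
from collections import Counter

def lm_score(query_terms, doc_terms, collection_terms, lam=0.5):
    """Query-likelihood score with Jelinek-Mercer smoothing."""
    doc, coll = Counter(doc_terms), Counter(collection_terms)
    dlen, clen = sum(doc.values()), sum(coll.values())
    score = 0.0
    for t in query_terms:
        p = (1 - lam) * doc[t] / max(dlen, 1) + lam * coll[t] / max(clen, 1)
        score += math.log(p) if p > 0 else -1e9   # unseen term: heavy penalty
    return score

def combined_score(q_words, d_words, coll_words,
                   q_concepts, d_concepts, coll_concepts, beta=0.4):
    """Interpolate the bag-of-words and the SSA concept-based language models."""
    return ((1 - beta) * lm_score(q_words, d_words, coll_words)
            + beta * lm_score(q_concepts, d_concepts, coll_concepts))
```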
A behavior model for blood donors and marketing strategies to retain and attract them
Aldamiz-echevarria, Covadonga; Aguirre-Garcia, Maria Soledad
2014-01-01
Objective: analyze and propose a theoretical model that describes blood donor decisions to help staff working in blood banks (nurses and others) in their efforts to capture and retain donors. Methods: analysis of several studies on the motivations to give blood in Spain over the last six years, as well as past literature on the topic, the authors' experiences in the last 25 years in over 15 Non Governmental Organizations with different levels of responsibilities, their experiences as blood donors and the informal interviews developed during those 25 years. Results: a model is proposed with different internal and external factors that influence blood donation, as well as the different stages of the decision-making process. Conclusion: the knowledge of the donation process permits the development of marketing strategies that help to increase donors and donations. PMID:25029059
A kinetic model for the transport of electrons in a graphene layer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fermanian Kammerer, Clotilde, E-mail: Clotilde.Fermanian@u-pec.fr; Méhats, Florian, E-mail: florian.mehats@univ-rennes1.fr
In this article, we propose a new numerical scheme for the computation of the transport of electrons in a graphene device. The underlying quantum model for graphene is a massless Dirac equation, whose eigenvalues display a conical singularity responsible for non-adiabatic transitions between the two modes. We first derive a kinetic model which takes the form of two Boltzmann equations coupled by a collision operator modeling the non-adiabatic transitions. This collision term includes a Landau–Zener transfer term and a jump operator whose presence is essential in order to ensure a good energy conservation during the transitions. We propose an algorithmic realization of the semi-group solving the kinetic model, by a particle method. We give analytic justification of the model and propose a series of numerical experiments studying the influences of the various sources of errors between the quantum and the kinetic models.
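For orientation, the transfer term alluded to above can be related to the textbook two-level Landau-Zener probability; the sketch below evaluates that standard formula with hypothetical numbers and is not the specific jump operator used in the kinetic model.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def landau_zener_probability(gap, sweep_rate):
    """Textbook two-level Landau-Zener diabatic transition probability.

    gap        : minimum energy gap between the two levels at the crossing (J)
    sweep_rate : |d(E1 - E2)/dt| at the crossing (J/s)
    """
    return math.exp(-math.pi * gap**2 / (2.0 * HBAR * sweep_rate))

# Hypothetical values, for illustration only.
print(landau_zener_probability(gap=1e-21, sweep_rate=1e-9))
```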
Optimization of time-course experiments for kinetic model discrimination.
Lages, Nuno F; Cordeiro, Carlos; Sousa Silva, Marta; Ponces Freire, Ana; Ferreira, António E N
2012-01-01
Systems biology relies heavily on the construction of quantitative models of biochemical networks. These models must have predictive power to help unveil the underlying molecular mechanisms of cellular physiology, but it is also paramount that they are consistent with the data resulting from key experiments. Often, it is possible to find several models that describe the data equally well, but provide significantly different quantitative predictions regarding particular variables of the network. In those cases, one is faced with a problem of model discrimination, the procedure of rejecting inappropriate models from a set of candidates in order to elect one as the best model to use for prediction. In this work, a method is proposed to optimize the design of enzyme kinetic assays with the goal of selecting a model among a set of candidates. We focus on models with systems of ordinary differential equations as the underlying mathematical description. The method provides a design where an extension of the Kullback-Leibler distance, computed over the time courses predicted by the models, is maximized. Given the asymmetric nature of this measure, a generalized differential evolution algorithm for multi-objective optimization problems was used. The kinetics of yeast glyoxalase I (EC 4.4.1.5) was chosen as a difficult test case to evaluate the method. Although a single-substrate kinetic model is usually considered, a two-substrate mechanism has also been proposed for this enzyme. We designed an experiment capable of discriminating between the two models by optimizing the initial substrate concentrations of glyoxalase I, in the presence of the subsequent pathway enzyme, glyoxalase II (EC 3.1.2.6). This discriminatory experiment was conducted in the laboratory and the results indicate a two-substrate mechanism for the kinetics of yeast glyoxalase I.
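A minimal sketch of the design idea, assuming hypothetical one- and two-substrate rate laws and equal Gaussian measurement noise; these stand-ins are not the glyoxalase kinetics or the generalized differential evolution search used in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def model_one_substrate(t, s, vmax=1.0, km=0.5):
    """Hypothetical single-substrate Michaelis-Menten depletion."""
    return [-vmax * s[0] / (km + s[0])]

def model_two_substrate(t, s, vmax=1.0, ka=0.3, kb=0.7):
    """Hypothetical two-substrate rate law, for illustration only."""
    a, b = s
    v = vmax * a * b / ((ka + a) * (kb + b))
    return [-v, -v]

def discrimination_score(design, t_grid, sigma2=0.01):
    """Distance between the two predicted time courses for a candidate design.

    Under equal Gaussian noise this is proportional to the symmetrized
    Kullback-Leibler divergence between the model predictions.
    """
    a0, b0 = design
    y1 = solve_ivp(model_one_substrate, (0, t_grid[-1]), [a0 + b0], t_eval=t_grid).y[0]
    y2 = solve_ivp(model_two_substrate, (0, t_grid[-1]), [a0, b0], t_eval=t_grid).y.sum(axis=0)
    return np.sum((y1 - y2) ** 2) / sigma2

t_grid = np.linspace(0, 10, 50)
designs = [(0.1, 1.0), (0.5, 0.5), (1.0, 0.1)]          # candidate initial concentrations
best = max(designs, key=lambda d: discrimination_score(d, t_grid))
```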
The SAGE Model of Social Psychological Research.
Power, Séamus A; Velez, Gabriel; Qadafi, Ahmad; Tennant, Joseph
2018-05-01
We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded in multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed.
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2012-01-01
This paper reviews the derivation of an equation for scaling response surface modeling experiments. The equation represents the smallest number of data points required to fit a linear regression polynomial so as to achieve certain specified model adequacy criteria. Specific criteria are proposed which simplify an otherwise rather complex equation, generating a practical rule of thumb for the minimum volume of data required to adequately fit a polynomial with a specified number of terms in the model. This equation and the simplified rule of thumb it produces can be applied to minimize the cost of wind tunnel testing.
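As a rough illustration of the bookkeeping involved, the sketch below counts the terms of a full d-th order polynomial in k factors and applies a purely hypothetical safety margin; the margin is an assumption for illustration and is not the specific criterion or rule of thumb derived in the paper.

```python
from math import comb

def n_terms_full_polynomial(k, d):
    """Number of coefficients in a full polynomial of order d in k factors."""
    return comb(k + d, d)            # e.g. 21 terms for a quadratic in 5 factors

def min_data_volume(k, d, margin=1.5):
    """Hypothetical sizing rule: fit p terms with a safety margin.
    NOT the specific adequacy criterion derived in the paper."""
    p = n_terms_full_polynomial(k, d)
    return int(round(margin * p))

print(n_terms_full_polynomial(5, 2), min_data_volume(5, 2))
```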
Background stratospheric aerosol and polar stratospheric cloud reference models
NASA Technical Reports Server (NTRS)
Mccormick, M. P.; Wang, P.-H.; Pitts, M. C.
1993-01-01
A global aerosol climatology is evolving from the NASA satellite experiments SAM II, SAGE I, and SAGE II. In addition, polar stratospheric cloud (PSC) data have been obtained from these experiments over the last decade. An updated reference model of the optical characteristics of the background aerosol is described and a new aerosol reference model derived from the latest available data is proposed. The aerosol models are referenced to the height above the tropopause. The impact of a number of volcanic eruptions is described. In addition, a model describing the seasonal, longitudinal, and interannual variations in PSCs is presented.
Software reliability: Additional investigations into modeling with replicated experiments
NASA Technical Reports Server (NTRS)
Nagel, P. M.; Schotz, F. M.; Skirvan, J. A.
1984-01-01
The effects of programmer experience level, different program usage distributions, and programming languages are explored. All these factors affect performance, and some tentative relational hypotheses are presented. An analytic framework for replicated and non-replicated (traditional) software experiments is presented. A method of obtaining an upper bound on the error rate of the next error is proposed. The method was validated empirically by comparing forecasts with actual data. In all 14 cases the bound exceeded the observed parameter, albeit somewhat conservatively. Two other forecasting methods are proposed and compared to observed results. Although it is demonstrated within this framework that stages are neither independent nor exponentially distributed, empirical estimates show that the exponential assumption is nearly valid for all but the extreme tails of the distribution. Except for the dependence in the stage probabilities, Cox's model approximates to a degree what is being observed.
Direct detection of exothermic dark matter with light mediator
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Chao-Qiang; Department of Physics, National Tsing Hua University,Hsinchu, Taiwan; Physics Division, National Center for Theoretical Sciences,Hsinchu, Taiwan
2016-08-05
We study the dark matter (DM) direct detection for the models with the effects of the isospin-violating couplings, exothermic scatterings, and/or the lightness of the mediator, proposed to relax the tension between the CDMS-Si signals and null experiments. In the light of the new updates of the LUX and CDMSlite data, we find that many of the previous proposals are now ruled out, including the Ge-phobic exothermic DM model and the Xe-phobic DM one with a light mediator. We also examine the exothermic DM models with a light mediator but without the isospin violation, and we are unable to identify any available parameter space that could simultaneously satisfy all the experiments. The only models that can partially relax the inconsistencies are the Xe-phobic exothermic DM models with or without a light mediator. But even in this case, a large portion of the CDMS-Si regions of interest has been constrained by the LUX and SuperCDMS data.
NASA Astrophysics Data System (ADS)
Shi, Ao; Lu, Bo; Yang, Dangguo; Wang, Xiansheng; Wu, Junqiang; Zhou, Fangqi
2018-05-01
Coupling between aero-acoustic noise and structural vibration under high-speed, flow-induced oscillation in open cavities may cause severe random vibration of the structure and even fatigue failure, which threatens flight safety. Vibro-acoustic experiments on scaled-down models are an effective means to clarify the effects of high-intensity cavity noise on structural vibration. Therefore, for vibro-acoustic wind-tunnel experiments on cavities, and taking a typical elastic cavity as the research object, dimensional analysis and the finite element method were adopted to establish the similitude relations of structural inherent characteristics and dynamics for a distorted model, and the proposed similitude relations were verified by experiments and numerical simulation. The analysis of the scaled-down model shows that the established similitude relations can accurately reproduce the structural dynamic characteristics of the actual model, which provides theoretical guidance for the structural design and vibro-acoustic experiments of scaled-down elastic cavity models.
NASA Astrophysics Data System (ADS)
Yang, F.; Dong, Z. H.; Ye, X.
2018-05-01
Space robots have become a very important means of on-orbit maintenance and support, and many countries are conducting in-depth research and experiments in this area. Because on-orbit operating conditions are very complicated, they are difficult to model in a research laboratory. This paper builds a complete equivalent experiment framework according to the requirements of the proposed space soft-contact technology. It also carries out verification of the flexible multi-body dynamics parameters for the on-orbit soft-contact mechanism, combining on-orbit experiment data, the equivalent model of the soft-contact mechanism, and a flexible multi-body dynamics equivalent model based on the Kane equation. The experiment results confirm the correctness of the on-orbit soft-contact flexible multi-body dynamics model.
Curriculum Development: A Philosophical Model.
ERIC Educational Resources Information Center
Bruening, William H.
Presenting models based on the philosophies of Carl Rogers, John Dewey, Erich Fromm, and Jean-Paul Sartre, this paper proposes a philosophical approach to education and concludes with pragmatic suggestions concerning teaching based on a fully-functioning-person model. The fully-functioning person is characterized as being open to experience,…
ERIC Educational Resources Information Center
Graybill, Emily C.; Varjas, Kris; Meyers, Joel; Greenberg, Daphne; Roach, Andrew T.
2013-01-01
The Participatory Culture-Specific Model of Course Development (PCSMCD), adapted from the Participatory Culture-Specific Intervention Model, is a proposed framework to address challenges to social justice education by addressing the following four course variables: instructor characteristics, instructor experiences, student characteristics, and…
The Seven Faces of Information Literacy.
ERIC Educational Resources Information Center
Bruce, Christine
This book examines the varying experiences of information literacy among higher educators and proposes a relational model of information literacy as an alternative to the behavioral model that dominates the education and research. The metaphor of an "information literacy wheel" is used to examine problems associated with the behavioral model and…
Simulation of the communication system between an AUV group and a surface station
NASA Astrophysics Data System (ADS)
Burtovaya, D.; Demin, A.; Demeshko, M.; Moiseev, A.; Kudryashova, A.
2017-01-01
An object model for simulating the communications system between a group of autonomous underwater vehicles (AUVs) and a surface station is proposed in the paper. The model is implemented on the basis of the software package “Object Distribution Simulation”. All structural relationships and behavior details are described. An application was developed on the basis of the proposed model and is now used for computational experiments simulating the communications system between the AUV group and a surface station.
A Maxwell Demon Model Connecting Information and Thermodynamics
NASA Astrophysics Data System (ADS)
Peng, Pei-Yan; Duan, Chang-Kui
2016-08-01
In the past decade, several theoretical Maxwell's demon models have been proposed that exhibit effects such as refrigeration and doing work at the cost of information, and some experiments have been performed to realise these effects. Here we propose a model with a two-level demon, information represented by a sequence of bits, and two heat reservoirs. Which reservoir the demon interacts with depends on the bit. If the information is pure, one reservoir will be refrigerated; on the other hand, the information can be erased if the temperature difference is large. Genuine examples of such a system are discussed.
A control method for bilateral teleoperating systems
NASA Astrophysics Data System (ADS)
Strassberg, Yesayahu
1992-01-01
The thesis focuses on control of bilateral master-slave teleoperators. The bilateral control issue of teleoperators is studied and a new scheme that overcomes basic unsolved problems is proposed. A performance measure, based on the multiport modeling method, is introduced in order to evaluate and understand the limitations of earlier published bilateral control laws. Based on the study evaluating the different methods, the objective of the thesis is stated. The proposed control law is then introduced, its ideal performance is demonstrated, and conditions for stability and robustness are derived. It is shown that stability, desired performance, and robustness can be obtained under the assumption that the deviation of the model from the actual system satisfies certain norm inequalities and the measurement uncertainties are bounded. The proposed scheme is validated by numerical simulation. The simulated system is based on the configuration of the RAL (Robotics and Automation Laboratory) telerobot. From the simulation results it is shown that good tracking performance can be obtained. In order to verify the performance of the proposed scheme when applied to a real hardware system, an experimental setup of a three degree of freedom master-slave teleoperator (i.e. three degree of freedom master and three degree of freedom slave robot) was built. Three basic experiments were conducted to verify the performance of the proposed control scheme. The first experiment verified the master control law and its contribution to the robustness and performance of the entire system. The second experiment demonstrated the actual performance of the system while performing a free motion teleoperating task. From the experimental results, it is shown that the control law has good performance and is robust to uncertainties in the models of the master and slave.
NASA Astrophysics Data System (ADS)
Lin, Xin; Wang, Feiming; Xu, Jianyuan; Xia, Yalong; Liu, Weidong
2016-03-01
Based on the stream theory, this paper proposes a mathematical model of the dielectric recovery characteristic built on the two-temperature ionization equilibrium equation. Taking the dynamic variation of the ionization and attachment of charged particles into account, this model can be used together with the Coulomb collision model, which relates the heavy-particle temperature to the electron temperature, to calculate the electron density and temperature under different pressure and electric field conditions and thus to deliver the breakdown electric field strength under different pressures. Meanwhile, an experimental loop of the circuit breaker has been built to measure the breakdown voltage. The calculated results agree with the experimental results on the whole, whereas results based on the stream criterion are larger than the experimental results. This indicates that the mathematical model proposed here, derived from the stream model with some improvement and refinement, is more accurate for calculating the dielectric recovery characteristic and is of great significance for increasing the simulation accuracy of the circuit breaker's interruption characteristic. Supported by the Science and Technology Project of State Grid Corporation of China (No. GY17201200063), the National Natural Science Foundation of China (No. 51277123), and the Basic Research Project of Liaoning Key Laboratory of Education Department (LZ2015055).
Payload Planning for the International Space Station
NASA Technical Reports Server (NTRS)
Johnson, Tameka J.
1995-01-01
A review of the evolution of the International Space Station (ISS) was performed for the purpose of understanding the project objectives. It was requested that an analysis of the current Office of Space Access and Technology (OSAT) Partnership Utilization Plan (PUP) traffic model be completed to monitor the process through which the scientific experiments called payloads are manifested for flight to the ISS. A viewing analysis of the ISS was also proposed to identify the capability to observe the United States Laboratory (US LAB) during the assembly sequence. Observations of the Drop-Tower experiment and nondestructive testing procedures were also performed to maximize the intern's technical experience. Contributions were made to the meeting in which the 1996 OSAT or Code X PUP traffic model was generated using the software tool, Filemaker Pro. The current OSAT traffic model satisfies the requirement for manifesting and delivering the proposed payloads to station. The current viewing capability of station provides the ability to view the US LAB during station assembly sequence. The Drop Tower experiment successfully simulates the effect of microgravity and conveniently documents the results for later use. The non-destructive test proved effective in determining stress in various components tested.
Moving target detection method based on improved Gaussian mixture model
NASA Astrophysics Data System (ADS)
Ma, J. Y.; Jie, F. R.; Hu, Y. J.
2017-07-01
The Gaussian mixture model is often employed to build the background model in background-difference methods for moving target detection. This paper puts forward an adaptive moving target detection algorithm based on an improved Gaussian mixture model. According to the gray-level convergence for each pixel, the number of Gaussian distributions is chosen adaptively to learn and update the background model, and a morphological reconstruction method is adopted to eliminate shadows. Experiments proved that the proposed method not only has good robustness and detection performance but also has good adaptability. Even in special cases, such as when the gray level changes greatly, the proposed method performs outstandingly.
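As a baseline for comparison with the improved model described above, a standard Gaussian-mixture background-subtraction pipeline with shadow suppression and a morphological clean-up can be sketched as follows; this uses OpenCV's stock MOG2 model, not the paper's adaptive-component variant, and the video filename is a placeholder.

```python
import cv2

cap = cv2.VideoCapture("surveillance.avi")          # hypothetical input video
mog2 = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = mog2.apply(frame)                        # 0 background, 127 shadow, 255 foreground
    mask[mask == 127] = 0                           # drop detected shadows
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    # connected regions left in `mask` are the moving-target candidates

cap.release()
```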
Is Substance Abuse an Issue for Creative People?
ERIC Educational Resources Information Center
Mabee, Bev
1985-01-01
The author proposes a model for lessons on substance abuse that gives children alternative ways to satisfy the natural desire for altered states of consciousness. The model incorporates stages of progressive relaxation, visualization/guided fantasy, sensory experiences, information, and experimentation. (CL)
ERIC Educational Resources Information Center
Valiente, Carlos; Eisenberg, Nancy; Shepard, Stephanie A.; Fabes, Richard A.; Cumberland, Amanda J.; Losoya, Sandra H.; Spinrad, Tracy L.
2004-01-01
Guided by the heuristic model proposed by Eisenberg et al. [Psychol. Inq. 9 (1998) 241], we examined the relations of mothers' reported and observed negative expressivity to children's (N = 159; 74 girls; M age = 7.67 years) experience and expression of emotion. Children's experience and/or expression of emotion in response to a distressing film…
Seeing the World Anew: A Case Study of Ideas, Engagement, and Transfer in a 3 Year Old.
ERIC Educational Resources Information Center
Pugh, Kevin
According to the philosophy of John Dewey, the goal of education is to provide students with an increased capacity for having worthwhile experiences. This paper draws on Dewey's writings to develop a theory of worthwhile experience, termed "idea-based experience." A model is proposed of how individuals are apprenticed into having an…
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods which developed in the context of deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model, but several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde, 26 that different constituencies have different objectives for the validation process and therefore their acceptance criteria differ also.
NASA Technical Reports Server (NTRS)
Fowlis, W. W. (Editor); Davis, M. H. (Editor)
1981-01-01
The atmospheric general circulation experiment (AGCE) numerical design for Spacelab flights was studied. A spherical baroclinic flow experiment which models the large scale circulations of the Earth's atmosphere was proposed. Gravity is simulated by a radial dielectric body force. The major objective of the AGCE is to study nonlinear baroclinic wave flows in spherical geometry. Numerical models must be developed which accurately predict the basic axisymmetric states and the stability of nonlinear baroclinic wave flows. A three dimensional, fully nonlinear, numerical model and the AGCE based on the complete set of equations is required. Progress in the AGCE numerical design studies program is reported.
NASA Astrophysics Data System (ADS)
Kozono, Y.; Takahashi, T.; Sakuraba, M.; Nojima, K.
2016-12-01
A lot of debris by tsunami, such as cars, ships and collapsed buildings were generated in the 2011 Tohoku tsunami. It is useful for rescue and recovery after tsunami disaster to predict the amount and final position of disaster debris. The transport form of disaster debris varies as drifting, rolling and sliding. These transport forms need to be considered comprehensively in tsunami simulation. In this study, we focused on the following three points. Firstly, the numerical model considering various transport forms of disaster debris was developed. The proposed numerical model was compared with the hydraulic experiment by Okubo et al. (2004) in order to verify transport on the bottom surface such as rolling and sliding. Secondly, a numerical experiment considering transporting on the bottom surface and drifting was studied. Finally, the numerical model was applied for Kesennuma city where serious damage occurred by the 2011 Tohoku tsunami. In this model, the influence of disaster debris was considered as tsunami flow energy loss. The hydraulic experiments conducted in a water tank which was 10 m long by 30 cm wide. The gate confined water in a storage tank, and acted as a wave generator. A slope was set at downstream section. The initial position of a block (width: 3.2 cm, density: 1.55 g/cm3) assuming the disaster debris was placed in front of the slope. The proposed numerical model simulated well the maximum transport distance and the final stop position of the block. In the second numerical experiment, the conditions were the same as the hydraulic experiment, except for the density of the block. The density was set to various values (from 0.30 to 4.20 g/cm3). This model was able to estimate various transport forms including drifting and sliding. In the numerical simulation of the 2011 Tohoku tsunami, the condition of buildings was modeled as follows: (i)the resistance on the bottom using Manning roughness coefficient (conventional method), and (ii)structure of buildings with collapsing and washing-away due to tsunami wave pressure. In this calculation, disaster debris of collapsed buildings, cars and ships was considered. As a result, the proposed model showed that it is necessary to take the disaster debris into account in order to predict tsunami inundation accurately.
PSNet: prostate segmentation on MRI based on a convolutional neural network.
Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Fei, Baowei
2018-04-01
Automatic segmentation of the prostate on magnetic resonance images (MRI) has many applications in prostate cancer diagnosis and therapy. We proposed a deep fully convolutional neural network (CNN) to segment the prostate automatically. Our deep CNN model is trained end-to-end in a single learning stage, which uses prostate MRI and the corresponding ground truths as inputs. The learned CNN model can be used to make an inference for pixel-wise segmentation. Experiments were performed on three data sets, which contain prostate MRI of 140 patients. The proposed CNN model of prostate segmentation (PSNet) obtained a mean Dice similarity coefficient of [Formula: see text] as compared to the manually labeled ground truth. Experimental results show that the proposed model could yield satisfactory segmentation of the prostate on MRI.
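The reported metric is the Dice similarity coefficient between the predicted and manually labeled masks; a minimal sketch of its computation with toy masks is:

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example: two overlapping rectangular masks.
a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[15:45, 15:45] = 1
print(dice_coefficient(a, b))
```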
Resilience in adults with cancer: development of a conceptual model.
Deshields, Teresa L; Heiland, Mark F; Kracen, Amanda C; Dua, Priya
2016-01-01
Resilience is a construct addressed in the psycho-oncology literature and is especially relevant to cancer survivorship. The purpose of this paper is to propose a model for resilience that is specific to adults diagnosed with cancer. To establish the proposed model, a brief review of the various definitions of resilience and of the resilience literature in oncology is provided. The proposed model includes baseline attributes (personal and environmental) which impact how an individual responds to an adverse event, which in this paper is cancer-related. The survivor has an initial response that fits somewhere on the distress-resilience continuum; however, post-cancer experiences (and interventions) can modify the initial response through a process of recalibration. The literature reviewed indicates that resilience is a common response to cancer diagnosis or treatment. The proposed model supports the view of resilience as both an outcome and a dynamic process. Given the process of recalibration, a discussion is provided of interventions that might facilitate resilience in adults with cancer.
Energy Systems Integration News | Energy Systems Integration Facility
Newsletter items describe distribution feeder models for use in hardware-in-the-loop (HIL) experiments and propose additional control loops to improve frequency support while ensuring stable operation and to smooth wind power output.
Ocular-Motor Function and Information Processing: Implications for the Reading Process.
ERIC Educational Resources Information Center
Leisman, Gerald; Schwartz, Joddy
This paper discusses the dichotomy between continually moving eyes and the lack of blurred visual experience. A discontinuous model of visual perception is proposed, with the discontinuities being phase and temporally related to saccadic eye movements. It is further proposed that deviant duration and angular velocity characteristics of saccades in…
Damage evaluation of reinforced concrete frame based on a combined fiber beam model
NASA Astrophysics Data System (ADS)
Shang, Bing; Liu, ZhanLi; Zhuang, Zhuo
2014-04-01
In order to analyze and simulate the impact collapse or seismic response of reinforced concrete (RC) structures, a combined fiber beam model is proposed by dividing the cross section of the RC beam into concrete fibers and steel fibers. The stress-strain relationship of the concrete fibers is based on a model specified in design codes for concrete structures, and the stress-strain behavior of the steel fibers is based on a previously suggested model. These constitutive models are implemented in the general finite element program ABAQUS through user-defined subroutines to provide effective computational tools for the inelastic analysis of RC frame structures. The proposed fiber model is validated by comparison with experimental data for an RC column under cyclic lateral loading. The damage evolution of a three-dimensional frame subjected to impact loading is also investigated.
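A minimal fiber-section sketch in the spirit of the combined model is given below; the parabolic concrete law, elastic-perfectly-plastic steel law, section dimensions, and neutral-axis position are simplified assumptions for illustration, not the code-specified constitutive models implemented in the paper's ABAQUS subroutines.

```python
import numpy as np

def concrete_stress(strain, fc=30e6, eps0=0.002):
    """Parabolic compression law, no tensile strength (simplified stand-in)."""
    e = -strain                                   # compression taken as positive here
    if e <= 0:
        return 0.0
    e = min(e, eps0)                              # no descending branch in this sketch
    return -fc * (2 * e / eps0 - (e / eps0) ** 2)

def steel_stress(strain, fy=400e6, Es=200e9):
    """Elastic-perfectly-plastic steel (simplified stand-in)."""
    return float(np.clip(Es * strain, -fy, fy))

def section_moment(curvature, y_na, conc_fibers, steel_fibers):
    """Sum fiber stress * area * lever arm for a given curvature and neutral axis."""
    M = 0.0
    for y, area in conc_fibers:                   # y measured from the section centroid
        M += concrete_stress(curvature * (y - y_na)) * area * y
    for y, area in steel_fibers:
        M += steel_stress(curvature * (y - y_na)) * area * y
    return M

# Hypothetical 300 x 500 mm section: 50 concrete layers plus two rebar layers.
h, b = 0.5, 0.3
conc = [(-h / 2 + (i + 0.5) * h / 50, b * h / 50) for i in range(50)]
steel = [(-0.2, 6e-4), (0.2, 6e-4)]
print(section_moment(curvature=0.005, y_na=0.05, conc_fibers=conc, steel_fibers=steel))
```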
A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine
Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun
2017-01-01
In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic-based motion synthesis algorithm and biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human–machine–environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines. PMID:28178184
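A minimal sketch of the Gaussian process regression step used for motion synthesis, with hypothetical design variables and a single scalar motion descriptor standing in for the full squat trajectory model:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Hypothetical training data: design variables -> a knee-angle descriptor.
# Columns: bar load (kg), grip width (m), squat depth (deg); illustrative only.
X = np.array([[40, 0.6, 90], [60, 0.6, 100], [80, 0.7, 110],
              [100, 0.7, 120], [60, 0.8, 95], [80, 0.8, 105]], dtype=float)
y = np.array([1.35, 1.52, 1.71, 1.88, 1.49, 1.66])   # e.g. peak knee flexion (rad)

kernel = ConstantKernel(1.0) * RBF(length_scale=[20.0, 0.1, 10.0])
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gpr.predict(np.array([[70, 0.7, 105]]), return_std=True)
print(mean[0], std[0])   # predicted descriptor and its uncertainty
```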
Linskell, Jeremy; Bouamrane, Matt-Mouley
2012-09-01
An assisted living space (ALS) is a technology-enabled environment designed to allow people with complex health or social care needs to remain, and live independently, in their own home for longer. However, many challenges remain in order to deliver usable systems acceptable to a diverse range of stakeholders, including end-users, and their families and carers, as well as health and social care services. ALSs need to support activities of daily-living while allowing end-users to maintain important social connections. They must be dynamic, flexible and adaptable living environments. In this article, we provide an overview of the technological landscape of assisted-living technology (ALT) and recent policies to promote an increased adoption of ALT in Scotland. We discuss our experiences in implementing technology-supported ALSs and emphasise key lessons. Finally, we propose an iterative and pragmatic user-centred implementation model for delivering ALSs in complex-needs scenarios. This empirical model is derived from our past ALS implementations. The proposed model allows project stakeholders to identify requirements, allocate tasks and responsibilities, and identify appropriate technological solutions for the delivery of functional ALS systems. The model is generic and makes no assumptions on needs or technology solutions, nor on the technical knowledge, skills and experience of the stakeholders involved in the ALS design process.
The Whole Learner: The Role of Imagination in Developing Disciplinary Understanding
ERIC Educational Resources Information Center
Anderson, Kirsteen
2010-01-01
This article challenges the predominance of modularization across the UK university system, arguing that the fragmentation of the learning experience which results from this model undermines the possibility of a disciplinary understanding. It proposes instead a practice of imaginative writing which, by engaging students' experience, interest and…
ERIC Educational Resources Information Center
Hirschy, Amy S.; Wilson, Maureen E.; Liddell, Debora L.; Boyle, Kathleen M.; Pasquesi, Kira
2015-01-01
In this study, the authors propose and test a model of professional identity development among early career student affairs professionals. Using survey data from 173 new professionals (0-5 years of experience), factor analysis revealed 3 dimensions of professional identity: commitment, values congruence, and intellectual investment. Multivariate…
NASA Astrophysics Data System (ADS)
Grebenev, Igor V.; Lebedeva, Olga V.; Polushkina, Svetlana V.
2018-07-01
The article proposes a new research object for a general physics course—the vapour Cartesian diver, designed to study the properties of saturated water vapour. Physics education puts great importance on the study of the saturated vapour state, as it is related to many fundamental laws and theories. For example, the temperature dependence of the saturated water vapour pressure allows the teacher to demonstrate the Le Chatelier’s principle: increasing the temperature of a system in a dynamic equilibrium favours the endothermic change. That means that increasing the temperature increases the amount of vapour present, and so increases the saturated vapour pressure. The experimental setup proposed in this paper can be used as an example of an auto-oscillatory system, based on the properties of saturated vapour. The article describes a mathematical model of physical processes that occur in the experiment, and proposes a numerical solution method for the acquired system of equations. It shows that the results of numerical simulation coincide with the self-oscillation parameters from the real experiment. The proposed installation can also be considered as a model of a thermal engine.
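The temperature dependence of the saturated water-vapour pressure that drives the diver can be illustrated with the standard Antoine correlation for water (constants valid roughly from 1 to 100 °C); this is a textbook correlation used here for illustration, not the mathematical model developed in the article.

```python
import math

def p_sat_water(T_celsius):
    """Saturated water-vapour pressure (Pa) from the Antoine equation.

    Antoine constants for water, pressure in mmHg, roughly valid for 1-100 degC.
    """
    A, B, C = 8.07131, 1730.63, 233.426
    p_mmHg = 10 ** (A - B / (C + T_celsius))
    return p_mmHg * 133.322          # convert mmHg to Pa

for T in (20, 40, 60, 80, 100):
    print(T, round(p_sat_water(T)))  # pressure rises steeply with temperature
```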
A degradation model for high kitchen waste content municipal solid waste.
Chen, Yunmin; Guo, Ruyang; Li, Yu-Chao; Liu, Hailong; Zhan, Tony Liangtong
2016-12-01
Municipal solid waste (MSW) in developing countries has a high content of kitchen waste (KW), and therefore contains large quantities of water and non-holocellulose degradable organics. The degradation of high KW content MSW cannot be well simulated by the existing degradation models, which are mostly established for low KW content MSW in developed countries. This paper presents a two-stage anaerobic degradation model for high KW content MSW with degradations of holocellulose, sugars, proteins and lipids considered. The ranges of the proportions of chemical compounds in MSW components are summarized with the recommended values given. Waste components are grouped into rapidly or slowly degradable categories in terms of the degradation rates under optimal water conditions for degradation. In the proposed model, the unionized VFA inhibitions of hydrolysis/acidogenesis and methanogenesis are considered as well as the pH inhibition of methanogenesis. Both modest and serious VFA inhibitions can be modeled by the proposed model. Default values for the parameters in the proposed method can be used for predictions of degradations of both low and high KW content MSW. The proposed model was verified by simulating two laboratory experiments, in which low and high KW content MSW were used, respectively. The simulated results are in good agreement with the measured data of the experiments. The results show that under low VFA concentrations, the pH inhibition of methanogenesis is the main inhibition to be considered, while the inhibitions of both hydrolysis/acidogenesis and methanogenesis caused by unionized VFA are significant under high VFA concentrations. The model is also used to compare the degradation behaviors of low and high KW content MSW under a favorable environmental condition, and it shows that the gas potential of high KW content MSW is released more quickly.
NASA Technical Reports Server (NTRS)
Liu, F. C.
1986-01-01
The objective of this investigation is to analytically determine the acceleration produced by crew motion in an orbiting space station and to define design parameters for the suspension system of microgravity experiments. A simple structural model for simulation of the IOC space station is proposed. Mathematical formulation of this model provides engineers with a simple and direct tool for designing an effective suspension system.
A Cognitive Model Based on Neuromodulated Plasticity
Ruan, Xiaogang
2016-01-01
Associative learning, including classical conditioning and operant conditioning, is regarded as the most fundamental type of learning for animals and human beings. Many models have been proposed for classical conditioning or operant conditioning. However, a unified and integrated model explaining both types of conditioning is much less studied. Here, a model based on neuromodulated synaptic plasticity is presented. The model is bio-inspired, including a multistore memory module and simulated VTA dopaminergic neurons that produce a reward signal. The synaptic weights are modified according to the reward signal, which simulates the change of associative strengths in associative learning. Experimental results on real robots demonstrate the suitability and validity of the proposed model. PMID:27872638
Improved understanding of the acoustophoretic focusing of dense suspensions in a microchannel
NASA Astrophysics Data System (ADS)
Karthick, S.; Sen, A. K.
2017-11-01
We provide improved understanding of acoustophoretic focusing of a dense suspension (volume fraction φ > 10%) in a microchannel subjected to an acoustic standing wave using a proposed theoretical model and experiments. The model is based on the theory of interacting continua and utilizes a momentum transport equation for the mixture, continuity equation, and transport equation for the solid phase. The model demonstrates the interplay between acoustic radiation and shear-induced diffusion (SID) forces that is critical in the focusing of dense suspensions. The shear-induced particle migration model of Leighton and Acrivos, coupled with the acoustic radiation force, is employed to simulate the continuum behavior of particles. In the literature, various closures for the diffusion coefficient Dφ* are available for rigid spheres at high concentrations and nonspherical deformable particles [e.g., red blood cells (RBCs)] at low concentrations. Here we propose a closure for Dφ* for dense suspension of RBCs and validate the proposed model with experimental data. While the available closures for Dφ* fail to predict the acoustic focusing of a dense suspension of nonspherical deformable particles like RBCs, the predictions of the proposed model match experimental data within 15%. Both the model and experiments reveal a competition between acoustic radiation and SID forces that gives rise to an equilibrium width w* of a focused stream of particles at some distance L_eq* along the flow direction. Using different shear rates, acoustic energy densities, and particle concentrations, we show that the equilibrium width is governed by the Péclet number Pe and the Strouhal number St as w* = 1.4(Pe St)^(-0.5), while the length required to obtain the equilibrium-focused width depends on St as L_eq* = 3.8/(St)^0.6. The proposed model and correlations would find significance in the design of microchannels for acoustic focusing of dense suspensions such as undiluted blood.
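The two reported correlations are easy to evaluate numerically. The short Python sketch below simply encodes them; w*, L_eq*, Pe, and St are the dimensionless quantities defined in the paper, and the example operating point is purely illustrative.

```python
def equilibrium_width(peclet, strouhal):
    """Dimensionless equilibrium width of the focused stream, w* = 1.4 (Pe*St)^-0.5."""
    return 1.4 * (peclet * strouhal) ** -0.5

def equilibrium_length(strouhal):
    """Dimensionless distance needed to reach the equilibrium width, L_eq* = 3.8 / St^0.6."""
    return 3.8 / strouhal ** 0.6

# Hypothetical operating point (values chosen only for illustration).
Pe, St = 50.0, 0.2
print(f"w*    = {equilibrium_width(Pe, St):.3f}")
print(f"L_eq* = {equilibrium_length(St):.2f}")
```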
Gao, Changwei; Liu, Xiaoming; Chen, Hai
2017-08-22
This paper focuses on the power fluctuations of the virtual synchronous generator (VSG) during the transition process. An improved virtual synchronous generator (IVSG) control strategy based on feed-forward compensation is proposed. The adjustable parameter of the compensation section can be modified to reduce the order of the system, which effectively suppresses the power fluctuations of the VSG in the transient process. To verify the effectiveness of the proposed control strategy for distributed energy resource inverters, a simulation model is set up on the MATLAB/Simulink platform and a physical experiment platform is established. Simulation and experiment results demonstrate the effectiveness of the proposed IVSG control strategy.
He, Ning; Sun, Hechun; Dai, Miaomiao
2014-05-01
To evaluate the influence of temperature and humidity on drug stability by the initial average rate experiment, and to obtain the kinetic parameters. The effects of concentration error, drug degradation extent, the number of humidity and temperature levels, the humidity and temperature range, and the average humidity and temperature on the accuracy and precision of the kinetic parameters in the initial average rate experiment were explored. The stability of vitamin C, as a solid-state model, was investigated by an initial average rate experiment. Under the same experimental conditions, the kinetic parameters obtained from the proposed method were comparable to those from the classical isothermal experiment at constant humidity. The estimates were more accurate and precise when the extent of drug degradation was controlled, the humidity and temperature range was changed, or the average temperature was set closer to room temperature. Compared with isothermal experiments at constant humidity, the proposed method saves time, labor, and materials.
Tamir, Maya; Bigman, Yochanan E; Rhodes, Emily; Salerno, James; Schreier, Jenna
2015-02-01
According to expectancy-value models of self-regulation, people are motivated to act in ways they expect to be useful to them. For instance, people are motivated to run when they believe running is useful, even when they have nothing to run away from. Similarly, we propose an expectancy-value model of emotion regulation, according to which people are motivated to emote in ways they expect to be useful to them, regardless of immediate contextual demands. For instance, people may be motivated to get angry when they believe anger is useful, even when there is nothing to be angry about. In 5 studies, we demonstrate that leading people to expect an emotion to be useful increased their motivation to experience that emotion (Studies 1-5), led them to up-regulate the experience of that emotion (Studies 3-4), and led to emotion-consistent behavior (Study 4). Our hypotheses were supported when we manipulated the expected value of anxiety (Study 1) and anger (Studies 2-5), both consciously (Studies 1-4) and unconsciously (Study 5). We discuss the theoretical and pragmatic implications of the proposed model. PsycINFO Database Record (c) 2015 APA, all rights reserved.
NASA Astrophysics Data System (ADS)
Avci, Mesut
A practical cost and energy efficient model predictive control (MPC) strategy is proposed for HVAC load control under dynamic real-time electricity pricing. The MPC strategy is built based on a proposed model that jointly minimizes the total energy consumption and hence, cost of electricity for the user, and the deviation of the inside temperature from the consumer's preference. An algorithm that assigns temperature set-points (reference temperatures) to price ranges based on the consumer's discomfort tolerance index is developed. A practical parameter prediction model is also designed for mapping between the HVAC load and the inside temperature. The prediction model and the produced temperature set-points are integrated as inputs into the MPC controller, which is then used to generate signal actions for the AC unit. To investigate and demonstrate the effectiveness of the proposed approach, a simulation based experimental analysis is presented using real-life pricing data. An actual prototype for the proposed HVAC load control strategy is then built and a series of prototype experiments are conducted similar to the simulation studies. The experiments reveal that the MPC strategy can lead to significant reductions in overall energy consumption and cost savings for the consumer. Results suggest that by providing an efficient response strategy for the consumers, the proposed MPC strategy can enable the utility providers to adopt efficient demand management policies using real-time pricing. Finally, a cost-benefit analysis is performed to display the economic feasibility of implementing such a controller as part of a building energy management system, and the payback period is identified considering cost of prototype build and cost savings to help the adoption of this controller in the building HVAC control industry.
Scale-up of ecological experiments: Density variation in the mobile bivalve Macomona liliana
Schneider, David C.; Walters, R.; Thrush, S.; Dayton, P.
1997-01-01
At present the problem of scaling up from controlled experiments (necessarily at a small spatial scale) to questions of regional or global importance is perhaps the most pressing issue in ecology. Most of the proposed techniques recommend iterative cycling between theory and experiment. We present a graphical technique that facilitates this cycling by allowing the scope of experiments, surveys, and natural history observations to be compared to the scope of models and theory. We apply the scope analysis to the problem of understanding the population dynamics of a bivalve exposed to environmental stress at the scale of a harbour. Previous lab and field experiments were found not to be 1:1 scale models of harbour-wide processes. Scope analysis allowed small scale experiments to be linked to larger scale surveys and to a spatially explicit model of population dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1986-06-02
Experimental research covered includes involvement in SLAC and Fermilab accelerator experiments and construction of the "Muon String" of the DUMAND project. Activities also included planning of future experiments at the SLC and Tevatron. Experiments addressed the search for the free quark, gluon radiation, and reduced upper limits for the mass of neutrinos. The theoretical program includes exact calculation of flavor-changing processes within the standard model, constraints on the weak coupling of heavy quarks, neutrino oscillation, the role of DEMONS in superconductivity, extended electroweak models, gauge models, the origin of electron/muon asymmetry in the beam dump, and SU(5) and departures in unification. QCD and vector dominance predictions were reconciled in the electromagnetic decays of neutral pions and eta mesons, and it was proposed that the electron plus jet events seen by UA1 along with their W events are to be interpreted as the production and decay of the top quark. The possibility of observable particle-antiparticle rate differences in hyperon decays as a test of CP invariance was proposed. (LEW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malczynski, Leonard A.
This guide addresses software quality in the construction of Powersim® Studio 8 system dynamics simulation models. It is the result of almost ten years of experience with the Powersim suite of system dynamics modeling tools (Constructor and earlier Studio versions). It is a guide that proposes a common look and feel for the construction of Powersim Studio system dynamics models.
Reduction of Air Pollution Levels Downwind of a Road with an Upwind Noise Barrier
We propose a dispersion model to characterize the impact of an upwind solid noise barrier next to a highway on air pollution concentrations downwind of the road. The model is based on data from wind tunnel experiments conducted by Heist et al. (2009). The model assumes that the...
Estimating Independent Locally Shifted Random Utility Models for Ranking Data
ERIC Educational Resources Information Center
Lam, Kar Yin; Koning, Alex J.; Franses, Philip Hans
2011-01-01
We consider the estimation of probabilistic ranking models in the context of conjoint experiments. By using approximate rather than exact ranking probabilities, we avoided the computation of high-dimensional integrals. We extended the approximation technique proposed by Henery (1981) in the context of the Thurstone-Mosteller-Daniels model to any…
Simulation-based modeling of building complexes construction management
NASA Astrophysics Data System (ADS)
Shepelev, Aleksandr; Severova, Galina; Potashova, Irina
2018-03-01
The study reported here examines the experience in the development and implementation of business simulation games based on network planning and management of high-rise construction. Appropriate network models of different types and levels of detail have been developed; a simulation model including 51 blocks (11 stages combined in 4 units) is proposed.
A Formal Model of Capacity Limits in Working Memory
ERIC Educational Resources Information Center
Oberauer, Klaus; Kliegl, Reinhold
2006-01-01
A mathematical model of working-memory capacity limits is proposed on the key assumption of mutual interference between items in working memory. Interference is assumed to arise from overwriting of features shared by these items. The model was fit to time-accuracy data of memory-updating tasks from four experiments using nonlinear mixed effect…
A new RISE-based adaptive control of PKMs: design, stability analysis and experiments
NASA Astrophysics Data System (ADS)
Bennehar, M.; Chemori, A.; Bouri, M.; Jenni, L. F.; Pierrot, F.
2018-03-01
This paper deals with the development of a new adaptive control scheme for parallel kinematic manipulators (PKMs) based on robust integral of the sign of the error (RISE) control theory. The original RISE control law is based only on state feedback and does not take advantage of the modelled dynamics of the manipulator. Consequently, the overall performance of the resulting closed-loop system may be poor compared to modern advanced model-based control strategies. We propose in this work to extend RISE by including the nonlinear dynamics of the PKM in the control loop to improve its overall performance. More precisely, we augment the original RISE control scheme with a model-based adaptive control term to account for the inherent nonlinearities in the closed-loop system. To demonstrate the relevance of the proposed controller, real-time experiments are conducted on the Delta robot, a three-degree-of-freedom (3-DOF) PKM.
NASA Astrophysics Data System (ADS)
Cho, G. S.
2017-09-01
For performance optimization of refrigerated warehouses, design parameters are selected based on physical parameters, such as the number of pieces of equipment and aisles and the speeds of forklifts, for ease of modification. This paper provides a comprehensive framework for the system design of refrigerated warehouses. We propose a modeling approach aimed at simulation optimization, so as to meet the required design specifications using Design of Experiments (DOE), and analyze the simulation model using an integrated aspect-oriented modeling approach (i-AOMA). As a result, the suggested method can evaluate the performance of a variety of refrigerated warehouse operations.
Dynamic modeling and characteristics analysis of a modal-independent linear ultrasonic motor.
Li, Xiang; Yao, Zhiyuan; Zhou, Shengli; Lv, Qibao; Liu, Zhen
2016-12-01
In this paper, an integrated model is developed to analyze the fundamental characteristics of a modal-independent linear ultrasonic motor with double piezoelectric vibrators. The energy method is used to model the dynamics of the two piezoelectric vibrators. The interface forces are coupled into the dynamic equations of the two vibrators and the moving platform, forming a whole machine model of the motor. The behavior of the force transmission of the motor is analyzed via the resulting model to understand the drive mechanism. In particular, the relative contact length is proposed to describe the intermittent contact characteristic between the stator and the mover, and its role in evaluating motor performance is discussed. The relations between the output speed and various inputs to the motor and the start-stop transients of the motor are analyzed by numerical simulations, which are validated by experiments. Furthermore, the dead-zone behavior is predicted and clarified analytically using the proposed model, which is also observed in experiments. These results are useful for designing servo control scheme for the motor. Copyright © 2016 Elsevier B.V. All rights reserved.
Quantifying (dis)agreement between direct detection experiments in a halo-independent way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldstein, Brian; Kahlhoefer, Felix, E-mail: brian.feldstein@physics.ox.ac.uk, E-mail: felix.kahlhoefer@physics.ox.ac.uk
We propose an improved method to study recent and near-future dark matter direct detection experiments with small numbers of observed events. Our method determines in a quantitative and halo-independent way whether the experiments point towards a consistent dark matter signal and identifies the best-fit dark matter parameters. To achieve true halo independence, we apply a recently developed method based on finding the velocity distribution that best describes a given set of data. For a quantitative global analysis we construct a likelihood function suitable for small numbers of events, which allows us to determine the best-fit particle physics properties of dark matter considering all experiments simultaneously. Based on this likelihood function we propose a new test statistic that quantifies how well the proposed model fits the data and how large the tension between different direct detection experiments is. We perform Monte Carlo simulations in order to determine the probability distribution function of this test statistic and to calculate the p-value for both the dark matter hypothesis and the background-only hypothesis.
Du, Yanjun; Ding, Yanjun; Liu, Yufeng; Lan, Lijuan; Peng, Zhimin
2014-08-01
The effect of self-absorption on emission intensity distributions can be used for species concentration measurements. A calculation model is developed based on the Beer-Lambert law to quantify this effect. A calibration-free measurement method is then proposed on the basis of this model by establishing the relationship between gas concentration and absorption strength. The effect of collision parameters and rotational temperature on the method is also discussed. The proposed method is verified by investigating the nitric oxide emission bands (A²Σ⁺→X²Π) generated by a pulsed corona discharge at various gas concentrations. The experimental results agree well with expectations, confirming the precision and accuracy of the proposed measurement method.
Direct Importance Estimation with Gaussian Mixture Models
NASA Astrophysics Data System (ADS)
Yamada, Makoto; Sugiyama, Masashi
The ratio of two probability densities is called the importance, and its estimation has gathered a great deal of attention these days since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method — which we call the Gaussian mixture KLIEP (GM-KLIEP) — is expected to work well when the true importance function has high correlation. Through experiments, we show the validity of the proposed approach.
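For readers unfamiliar with KLIEP, the sketch below is a minimal NumPy implementation of the kernel-based KLIEP that GM-KLIEP extends: the importance w(x) is modeled as a non-negative combination of fixed Gaussian kernels, the numerator log-likelihood is increased by gradient steps, and the constraint that w averages to one over the denominator samples is enforced after every step. The kernel width, learning rate, and number of centres are illustrative choices; GM-KLIEP would additionally learn the mixture covariances via EM, which is not shown here.

```python
import numpy as np

def kliep(x_nu, x_de, sigma=0.5, n_iter=200, lr=1e-3, rng=np.random.default_rng(0)):
    """Kernel-based KLIEP sketch: estimate w(x) = p_nu(x)/p_de(x)."""
    centres = x_nu[rng.choice(len(x_nu), size=min(50, len(x_nu)), replace=False)]

    def kernel(x):
        d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    A = kernel(x_nu)               # kernel values on numerator samples
    b = kernel(x_de).mean(axis=0)  # constraint: mean importance over x_de equals 1
    alpha = np.ones(A.shape[1])
    alpha /= b @ alpha
    for _ in range(n_iter):
        alpha += lr * A.T @ (1.0 / (A @ alpha))   # gradient of sum log w(x_nu)
        alpha += b * (1.0 - b @ alpha) / (b @ b)  # project back onto the constraint
        alpha = np.maximum(alpha, 0.0)            # keep the importance non-negative
        alpha /= b @ alpha
    return lambda x: kernel(x) @ alpha

# Toy check: both densities are Gaussian, so the shape of the ratio is known.
rng = np.random.default_rng(1)
x_nu = rng.normal(0.0, 1.0, size=(500, 1))
x_de = rng.normal(0.5, 1.2, size=(500, 1))
w = kliep(x_nu, x_de)
print(w(np.array([[0.0], [2.0]])))
```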
The early universe as a probe of new physics
NASA Astrophysics Data System (ADS)
Bird, Christopher Shane
The Standard Model of Particle Physics has been verified to unprecedented precision in the last few decades. However there are still phenomena in nature which cannot be explained, and as such new theories will be required. Since terrestrial experiments are limited in both the energy and precision that can be probed, new methods are required to search for signs of physics beyond the Standard Model. In this dissertation, I demonstrate how these theories can be probed by searching for remnants of their effects in the early Universe. In particular I focus on three possible extensions of the Standard Model: the addition of massive neutral particles as dark matter, the addition of charged massive particles, and the existence of higher dimensions. For each new model, I review the existing experimental bounds and the potential for discovering new physics in the next generation of experiments. For dark matter, I introduce six simple models which I have developed, and which involve a minimum amount of new physics, as well as reviewing one existing model of dark matter. For each model I calculate the latest constraints from astrophysics experiments, nuclear recoil experiments, and collider experiments. I also provide motivations for studying sub-GeV mass dark matter, and propose the possibility of searching for light WIMPs in the decay of B-mesons and other heavy particles. For charged massive relics, I introduce and review the recently proposed model of catalyzed Big Bang nucleosynthesis. In particular I review the production of 6Li by this mechanism, and calculate the abundance of 7Li after destruction of 7Be by charged relics. The result is that for certain natural relics CBBN is capable of removing tensions between the predicted and observed 6Li and 7Li abundances which are present in the standard model of BBN. For extra dimensions, I review the constraints on the ADD model from both astrophysics and collider experiments. I then calculate the constraints on this model from Big Bang nucleosynthesis in the early Universe. I also calculate the bounds on this model from Kaluza-Klein gravitons trapped in the galaxy which decay to electron-positron pairs, using the measured 511 keV gamma-ray flux. For each example of new physics, I find that remnants of the early Universe provide constraints on the models which are complementary to the existing constraints from colliders and other terrestrial experiments.
Fern, Lorna A; Taylor, Rachel M; Whelan, Jeremy; Pearce, Susie; Grew, Tom; Brooman, Katie; Starkey, Carol; Millington, Hannah; Ashton, James; Gibson, Faith
2013-01-01
There is recognition that teenagers and young adults with cancer merit age-appropriate specialist care. However, outcomes associated with such specialist care are not defined. Patient experience and patient-reported outcomes such as quality of life are gaining importance. Nevertheless, there is a lack of theoretical basis and patient involvement in experience surveys for young people. We previously proposed a conceptual model of the lived experience of cancer. We aimed to refine this model adding to areas that were lacking or underreported. The proposed conceptual framework will inform a bespoke patient experience survey for young people. Using participatory research, 11 young people aged 13 to 25 years at diagnosis, participated in a 1-day workshop consisting of semistructured peer-to-peer interviews. Eight core themes emerged: impact of cancer diagnosis, information provision, place of care, role of health professionals, coping, peers, psychological support, and life after cancer. The conceptual framework has informed survey development for a longitudinal cohort study examining patient experience and outcomes associated with specialist cancer care. Young people must be kept at the center of interactions in recognition of their stated needs of engagement, of individually tailored information and support unproxied by parents/family. Age-appropriate information and support services that help young people deal with the impact of cancer on daily life and life after cancer must be made available. If we are to develop services that meet need, patient experience surveys must be influenced by patient involvement. Young people can be successfully involved in planning research relevant to their experience.
Self-charging of identical grains in the absence of an external field.
Yoshimatsu, R; Araújo, N A M; Wurm, G; Herrmann, H J; Shinbrot, T
2017-01-06
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study.
Self-charging of identical grains in the absence of an external field
NASA Astrophysics Data System (ADS)
Yoshimatsu, R.; Araújo, N. A. M.; Wurm, G.; Herrmann, H. J.; Shinbrot, T.
2017-01-01
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study.
Something from nothing: self-charging of identical grains
NASA Astrophysics Data System (ADS)
Shinbrot, Troy; Yoshimatsu, Ryuta; Nuno Araujo, Nuno; Wurm, Gerhard; Herrmann, Hans
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study. I acknowledge support from NSF/DMR, award 1404792.
Self-charging of identical grains in the absence of an external field
Yoshimatsu, R.; Araújo, N. A. M.; Wurm, G.; Herrmann, H. J.; Shinbrot, T.
2017-01-01
We investigate the electrostatic charging of an agitated bed of identical grains using simulations, mathematical modeling, and experiments. We simulate charging with a discrete-element model including electrical multipoles and find that infinitesimally small initial charges can grow exponentially rapidly. We propose a mathematical Turing model that defines conditions for exponential charging to occur and provides insights into the mechanisms involved. Finally, we confirm the predicted exponential growth in experiments using vibrated grains under microgravity, and we describe novel predicted spatiotemporal states that merit further study. PMID:28059124
Iliotibial band friction syndrome
2010-01-01
Published articles on iliotibial band friction syndrome have been reviewed. These articles cover the epidemiology, etiology, anatomy, pathology, prevention, and treatment of the condition. This article describes (1) the various etiological models that have been proposed to explain iliotibial band friction syndrome; (2) some of the imaging methods, research studies, and clinical experiences that support or call into question these various models; (3) commonly proposed treatment methods for iliotibial band friction syndrome; and (4) the rationale behind these methods and the clinical outcome studies that support their efficacy. PMID:21063495
Reliability of Memories Protected by Multibit Error Correction Codes Against MBUs
NASA Astrophysics Data System (ADS)
Ming, Zhu; Yi, Xiao Li; Chang, Liu; Wei, Zhang Jian
2011-02-01
As technology scales, more and more memory cells can be placed in a die. Therefore, the probability that a single event induces multiple bit upsets (MBUs) in adjacent memory cells becomes greater. Generally, multibit error correction codes (MECCs) are effective approaches to mitigate MBUs in memories. In order to evaluate the robustness of protected memories, reliability models have been widely studied. Instead of irradiation experiments, these models can be used to quickly evaluate the reliability of memories in the early design stage. To build an accurate model, several situations should be considered. First, when MBUs are present in memories, the errors induced by several events may overlap each other, which happens more frequently than in the single event upset (SEU) case. Furthermore, radiation experiments show that the probability of MBUs strongly depends on the angle of the radiation event. However, reliability models that consider both the overlap of multiple bit errors and the angle of the radiation event have not been proposed in the present literature. In this paper, a more accurate model of memories with MECCs is presented. Both the overlap of multiple bit errors and the angle of the event are considered in the model, which produces a more precise analysis in the calculation of the mean time to failure (MTTF) for memory systems under MBUs. In addition, memories with and without scrubbing are analyzed in the proposed model. Finally, we evaluate the reliability of memories under MBUs in Matlab. The simulation results verify the validity of the proposed model.
Lu, Ji; Pan, Junhao; Zhang, Qiang; Dubé, Laurette; Ip, Edward H.
2015-01-01
With intensively collected longitudinal data, recent advances in the Experience Sampling Method (ESM) benefit empirical research in the social sciences, but also pose important methodological challenges. As traditional statistical models are not generally well equipped to analyze a system of variables that contains feedback loops, this paper proposes an extended hidden Markov model for the reciprocal relationship between momentary emotion and eating behavior. This paper revisits an ESM data set (Lu, Huet & Dube, 2011) that observed 160 participants' food consumption and momentary emotions six times per day over 10 days. Focusing on the analysis of the feedback loop between mood and meal healthiness decisions, the proposed Reciprocal Markov Model (RMM) can accommodate both hidden states ("general" emotional states: positive vs. negative) and observed states (meal: healthier, the same, or less healthy than usual) without presuming independence between observations or smooth trajectories of mood or behavior changes. The results of the RMM analyses illustrate the reciprocal chains of meal consumption and mood as well as the effect of contextual factors that moderate the interrelationship between eating and emotion. A simulation experiment that generated data consistent with the empirical study further demonstrated that the procedure is promising in terms of recovering the parameters. PMID:26717120
A Sarsa(λ)-based control model for real-time traffic light coordination.
Zhou, Xiaoke; Zhu, Fei; Liu, Quan; Fu, Yuchen; Huang, Wei
2014-01-01
Traffic problems often occur due to the traffic demands of the growing number of vehicles on the road. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction wants to obtain a larger traffic flow. In the process, junctions form a coordination policy, as well as constraints on adjacent junctions, to maximize their own interests. A good traffic signal timing policy is helpful in solving the problem. However, as there are many factors that can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experience leaves them unable to adapt to dynamic changes in traffic flow. Considering the dynamic characteristics of the actual traffic environment, a reinforcement learning based traffic control approach can be applied to obtain an optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model learns a traffic cost, which considers delay time, the number of waiting vehicles, and the integrated saturation, from its experience and uses it to determine the optimal actions. The experimental results show an inspiring improvement in traffic control, indicating that the proposed model is capable of facilitating real-time dynamic traffic control.
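To make the learning loop concrete, here is a minimal tabular Sarsa(λ) sketch in Python on a toy two-approach intersection. The environment, the reward (negative total queue length), the state discretization, and all parameter values are illustrative assumptions rather than the paper's simulator or cost definition; the sketch only shows how eligibility traces propagate the reward signal to recently visited state-action pairs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-intersection environment: two approaches with queues; action 0/1 gives
# green to approach 0/1; each green serves up to 2 vehicles; arrivals are Bernoulli.
MAX_Q = 5

def step(state, action):
    q = list(state)
    q[action] = max(0, q[action] - 2)                 # serve the green approach
    for i in range(2):                                # random arrivals
        q[i] = min(MAX_Q, q[i] + rng.binomial(1, 0.4))
    return tuple(q), -sum(q)                          # reward = minus total queue

n_states, n_actions = (MAX_Q + 1) ** 2, 2
def s_index(state):
    return state[0] * (MAX_Q + 1) + state[1]

# Tabular Sarsa(lambda) with replacing eligibility traces.
alpha, gamma, lam, eps = 0.1, 0.95, 0.8, 0.1
Q = np.zeros((n_states, n_actions))

def policy(s):
    return rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))

for episode in range(200):
    E = np.zeros_like(Q)
    state = (0, 0)
    s = s_index(state); a = policy(s)
    for t in range(200):
        state, r = step(state, a)
        s2 = s_index(state); a2 = policy(s2)
        delta = r + gamma * Q[s2, a2] - Q[s, a]
        E[s, a] = 1.0                                 # replacing trace
        Q += alpha * delta * E
        E *= gamma * lam
        s, a = s2, a2

print("greedy action when queues are (4, 1):", np.argmax(Q[s_index((4, 1))]))
```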
The SAGE Model of Social Psychological Research
Power, Séamus A.; Velez, Gabriel; Qadafi, Ahmad; Tennant, Joseph
2018-01-01
We propose a SAGE model for social psychological research. Encapsulated in our acronym is a proposal to have a synthetic approach to social psychological research, in which qualitative methods are augmentative to quantitative ones, qualitative methods can be generative of new experimental hypotheses, and qualitative methods can capture experiences that evade experimental reductionism. We remind social psychological researchers that psychology was founded in multiple methods of investigation at multiple levels of analysis. We discuss historical examples and our own research as contemporary examples of how a SAGE model can operate in part or as an integrated whole. The implications of our model are discussed. PMID:29361241
Accuracy comparison among different machine learning techniques for detecting malicious codes
NASA Astrophysics Data System (ADS)
Narang, Komal
2016-03-01
In this paper, a machine learning based model for malware detection is proposed. It can detect newly released malware, i.e. zero-day attacks, by analyzing operation codes on the Android operating system. The accuracies of Naïve Bayes, Support Vector Machine (SVM) and Neural Network classifiers for detecting malicious code are compared for the proposed model. In the experiment, 400 benign files, 100 system files and 500 malicious files were used to construct the model. The model yields the best accuracy, 88.9%, when a neural network is used as the classifier, and achieves 95% sensitivity and 82.8% specificity.
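The comparison itself is straightforward to reproduce in outline. The Python sketch below, using scikit-learn, trains the three classifier families named above and reports accuracy, sensitivity, and specificity; the opcode-frequency features and labels are synthetic stand-ins, since the paper's Android file corpus is not available here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix

# Synthetic stand-in for opcode-frequency features of benign vs malicious files.
rng = np.random.default_rng(0)
X_benign = rng.poisson(3.0, size=(500, 20))
X_malicious = rng.poisson(4.0, size=(500, 20))
X = np.vstack([X_benign, X_malicious]).astype(float)
y = np.array([0] * 500 + [1] * 500)            # 1 = malicious
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM", SVC(kernel="rbf")),
                  ("Neural Network", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))]:
    clf.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)               # true positive rate on malicious files
    specificity = tn / (tn + fp)               # true negative rate on benign files
    print(f"{name:15s} acc={accuracy:.3f} sens={sensitivity:.3f} spec={specificity:.3f}")
```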
DOE Office of Scientific and Technical Information (OSTI.GOV)
Awida, Mohamed; Chen, Alex; Khabiboulline, Timergali
High intensity proton accelerators that support several simultaneous physics experiments require sharing the beam. A bunch-by-bunch beam chopper system located after the Radio Frequency Quadrupole (RFQ) is required in this case to structure the beam in the proper bunch format required by the several experiments. The unused beam needs to be kicked out of the beam path and is disposed of in a beam dump. In this paper, we report on the RF modeling results of a proposed helical kicker. Two beam kickers constitute the proposed chopper. The beam sequence is formed by kicking the beam bunches in or out of the streamline. The chopper was developed for the Project X Injection Experiment (PXIE).
Asymmetric latent semantic indexing for gene expression experiments visualization.
González, Javier; Muñoz, Alberto; Martos, Gabriel
2016-08-01
We propose a new method to visualize gene expression experiments inspired by the latent semantic indexing technique originally proposed in the textual analysis context. By using the correspondence word-gene and document-experiment, we define an asymmetric similarity measure of association for genes that accounts for potential hierarchies in the data, the key to obtaining meaningful gene mappings. We use the polar decomposition to obtain the sources of asymmetry of the similarity matrix, which are later combined with previous knowledge. Genetic classes of genes are identified by means of a mixture model applied in the latent space of genes. We describe the steps of the procedure and show its utility on the Human Cancer dataset.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boveia, Antonio; Buchmueller, Oliver; Busoni, Giorgio
2016-03-14
This document summarises the proposal of the LHC Dark Matter Working Group on how to present LHC results on s-channel simplified dark matter models and to compare them to direct (indirect) detection experiments.
Accurate and dynamic predictive model for better prediction in medicine and healthcare.
Alanazi, H O; Abdullah, A H; Qureshi, K N; Ismail, A S
2018-05-01
Information and communication technologies (ICTs) have brought new integrated operations and methods to all fields of life. The health sector has also adopted new technologies to improve its systems and provide better services to customers. Predictive models in health care have likewise been influenced by new technologies for predicting different disease outcomes. However, existing predictive models still suffer from limitations in the performance of their predictive outcomes. In order to improve predictive performance, this paper proposes a predictive model that classifies disease predictions into different categories. To evaluate the model's performance, this paper uses traumatic brain injury (TBI) datasets. TBI is one of the most serious conditions worldwide and needs more attention due to its severe impact on human life. The proposed predictive model improves the predictive performance for TBI. The TBI data set was developed and approved by neurologists to set its features. The experimental results show that the proposed model achieves significant results in terms of accuracy, sensitivity, and specificity.
Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie
2010-10-10
The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and potential of the proposed model for simulating the photon transport process in free space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrari, Jose A.; Perciante, Cesar D
2008-07-10
The behavior of photochromic glasses during activation and bleaching is investigated. A two-state phenomenological model describing light-induced activation (darkening) and thermal bleaching is presented. The proposed model is based on first-order kinetics. We demonstrate that the time behavior in the activation process (acting simultaneously with the thermal fading) can be characterized by two relaxation times that depend on the intensity of the activating light. These characteristic times are lower than the decay times of the pure thermal bleaching process. We study the temporal evolution of the glass optical density and its dependence on the activating intensity. We also present a series of activation and bleaching experiments that validate the proposed model. Our approach may be used to gain more insight into the transmittance behavior of photosensitive glasses, which could be potentially relevant in a broad range of applications, e.g., real-time holography and reconfigurable optical memories.
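The qualitative claim that the activation-stage relaxation times shorten as the activating intensity grows can be checked with a toy first-order scheme. The Python sketch below assumes a hypothetical rate structure (one ground state feeding two activated-centre populations, each with its own thermal bleaching rate); the rate constants are arbitrary and the equations are not the authors' exact model, but the eigenvalue analysis shows both relaxation times falling below the pure thermal decay times once the light is on.

```python
import numpy as np

# Hypothetical two-population first-order kinetics (illustrative assumption only):
# a ground state feeds activated-centre fractions n1, n2 under illumination of
# intensity I; each decays thermally at rate b1, b2.
k1, k2 = 0.8, 0.3          # activation rate constants (per unit intensity)
b1, b2 = 0.05, 0.01        # thermal bleaching rate constants

def relaxation_times(I):
    """Relaxation times 1/|eigenvalue| of the linear kinetics at intensity I."""
    J = np.array([[-k1 * I - b1, -k1 * I],
                  [-k2 * I, -k2 * I - b2]])
    return np.sort(1.0 / np.abs(np.linalg.eigvals(J).real))

print("pure thermal decay times:", relaxation_times(0.0))
for I in (0.5, 1.0, 2.0):
    print(f"I = {I}: activation relaxation times = {relaxation_times(I)}")
```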
Kinetic model of water disinfection using peracetic acid including synergistic effects.
Flores, Marina J; Brandi, Rodolfo J; Cassano, Alberto E; Labas, Marisol D
2016-01-01
The disinfection efficiencies of a commercial mixture of peracetic acid against Escherichia coli were studied in laboratory scale experiments. The joint and separate actions of two disinfectant agents, hydrogen peroxide and peracetic acid, were evaluated in order to observe synergistic effects. A kinetic model for each component of the mixture and for the commercial mixture was proposed. Through simple mathematical equations, the model describes different stages of attack by disinfectants during the inactivation process. Based on the experiments and the kinetic parameters obtained, it could be established that the efficiency of hydrogen peroxide was much lower than that of peracetic acid alone. However, the contribution of hydrogen peroxide was very important in the commercial mixture. It should be noted that this improvement occurred only after peracetic acid had initiated the attack on the cell. This synergistic effect was successfully explained by the proposed scheme and was verified by experimental results. Besides providing a clearer mechanistic understanding of water disinfection, such models may improve our ability to design reactors.
LDA-Based Unified Topic Modeling for Similar TV User Grouping and TV Program Recommendation.
Pyo, Shinjee; Kim, Eunhui; Kim, Munchurl
2015-08-01
Social TV is a social media service via TV and social networks through which TV users exchange their experiences about TV programs that they are viewing. For social TV service, two technical aspects are envisioned: grouping of similar TV users to create social TV communities and recommending TV programs based on group and personal interests for personalizing TV. In this paper, we propose a unified topic model based on grouping of similar TV users and recommending TV programs as a social TV service. The proposed unified topic model employs two latent Dirichlet allocation (LDA) models. One is a topic model of TV users, and the other is a topic model of the description words for viewed TV programs. The two LDA models are then integrated via a topic proportion parameter for TV programs, which enforces the grouping of similar TV users and associated description words for watched TV programs at the same time in a unified topic modeling framework. The unified model identifies the semantic relation between TV user groups and TV program description word groups so that more meaningful TV program recommendations can be made. The unified topic model also overcomes an item ramp-up problem such that new TV programs can be reliably recommended to TV users. Furthermore, from the topic model of TV users, TV users with similar tastes can be grouped as topics, which can then be recommended as social TV communities. To verify our proposed method of unified topic-modeling-based TV user grouping and TV program recommendation for social TV services, in our experiments, we used real TV viewing history data and electronic program guide data from a seven-month period collected by a TV poll agency. The experimental results show that the proposed unified topic model yields an average 81.4% precision for 50 topics in TV program recommendation and its performance is an average of 6.5% higher than that of the topic model of TV users only. For TV user prediction with new TV programs, the average prediction precision was 79.6%. Also, we showed the superiority of our proposed model in terms of both topic modeling performance and recommendation performance compared to two related topic models such as polylingual topic model and bilingual topic model.
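The building block of the approach is a standard LDA over user viewing histories. The Python sketch below, using scikit-learn, fits a single LDA over toy viewing "documents" and reads off soft user-group memberships; it shows only one of the two linked topic models, and the shared topic-proportion parameter that unifies them in the paper is not implemented here. The toy histories and topic count are purely illustrative.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy viewing histories: each "document" lists the programs one user watched.
histories = [
    "news news weather documentary",
    "drama drama romance talkshow",
    "sports sports news highlights",
    "cartoon cartoon kids movie",
    "documentary news weather sports",
    "romance drama movie talkshow",
]
counts = CountVectorizer().fit(histories)
X = counts.transform(histories)

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(X)
vocab = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = vocab[np.argsort(topic)[::-1][:3]]
    print(f"user group {k}: {', '.join(top)}")

# Soft group membership per user; users sharing a dominant topic form a community.
print(np.round(lda.transform(X), 2))
```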
An improved finite element modeling of the cerebrospinal fluid layer in the head impact analysis.
Wu, John Z; Pan, Christopher S; Wimer, Bryan M; Rosen, Charles L
2017-01-01
The finite element (FE) method has been widely used to investigate the mechanism of traumatic brain injuries (TBIs), because it is technically difficult to quantify the responses of brain tissues to impact in experiments. One of the technical challenges in building an FE model of a human head is modeling the cerebrospinal fluid (CSF) of the brain. In the current study, we propose to use membrane elements to construct the CSF layer. Using the proposed approach, we demonstrate that a head model can be built using existing meshes available in commercial databases, without any advanced meshing software tool, and with the sole use of native functions of the FE package Abaqus. The calculated time histories of the intracranial pressures at the frontal, posterior fossa, parietal, and occipital positions agree well with the experimental data and the simulations in the literature, indicating that the physical effects of the CSF layer have been accounted for in the proposed modeling approach. The proposed modeling approach would be useful for bioengineers solving practical problems.
Classification of complex networks based on similarity of topological network features
NASA Astrophysics Data System (ADS)
Attar, Niousha; Aliakbary, Sadegh
2017-09-01
Over the past few decades, networks have been widely used to model real-world phenomena. Real-world networks exhibit nontrivial topological characteristics and therefore, many network models are proposed in the literature for generating graphs that are similar to real networks. Network models reproduce nontrivial properties such as long-tail degree distributions or high clustering coefficients. In this context, we encounter the problem of selecting the network model that best fits a given real-world network. The need for a model selection method reveals the network classification problem, in which a target-network is classified into one of the candidate network models. In this paper, we propose a novel network classification method which is independent of the network size and employs an alignment-free metric of network comparison. The proposed method is based on supervised machine learning algorithms and utilizes the topological similarities of networks for the classification task. The experiments show that the proposed method outperforms state-of-the-art methods with respect to classification accuracy, time efficiency, and robustness to noise.
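As a sketch of the supervised, alignment-free classification idea, the Python snippet below (using networkx and scikit-learn) computes a few size-independent topological features for graphs generated from two candidate models and trains a random forest to assign a target network to one of the model classes. The feature subset, candidate models, and classifier are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier

def features(G):
    """Size-independent topological features (an illustrative subset)."""
    degrees = np.array([d for _, d in G.degree()])
    return [
        nx.density(G),
        nx.average_clustering(G),
        nx.degree_assortativity_coefficient(G),
        degrees.std() / degrees.mean(),          # coefficient of variation of degree
    ]

# Training graphs from two candidate generative models, at varying sizes.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(30):
    n = int(rng.integers(100, 300))
    X.append(features(nx.barabasi_albert_graph(n, 3)));    y.append("BA")
    X.append(features(nx.watts_strogatz_graph(n, 6, 0.1))); y.append("WS")

clf = RandomForestClassifier(random_state=0).fit(X, y)

# Classify a "target" network into one of the candidate model classes.
target = nx.barabasi_albert_graph(500, 3)
print(clf.predict([features(target)]))   # expected: ['BA']
```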
Testing light dark matter coannihilation with fixed-target experiments
Izaguirre, Eder; Kahn, Yonatan; Krnjaic, Gordan; ...
2017-09-01
In this paper, we introduce a novel program of fixed-target searches for thermal-origin Dark Matter (DM), which couples inelastically to the Standard Model. Since the DM only interacts by transitioning to a heavier state, freeze-out proceeds via coannihilation and the unstable heavier state is depleted at later times. For sufficiently large mass splittings, direct detection is kinematically forbidden and indirect detection is impossible, so this scenario can only be tested with accelerators. Here we propose new searches at proton and electron beam fixed-target experiments to probe sub-GeV coannihilation, exploiting the distinctive signals of up- and downscattering as well as decay of the excited state inside the detector volume. We focus on a representative model in which DM is a pseudo-Dirac fermion coupled to a hidden gauge field (dark photon), which kinetically mixes with the visible photon. We define theoretical targets in this framework and determine the existing bounds by reanalyzing results from previous experiments. We find that LSND, E137, and BaBar data already place strong constraints on the parameter space consistent with a thermal freeze-out origin, and that future searches at Belle II and MiniBooNE, as well as recently-proposed fixed-target experiments such as LDMX and BDX, can cover nearly all remaining gaps. We also briefly comment on the discovery potential for proposed beam dump and neutrino experiments which operate at much higher beam energies.
Testing light dark matter coannihilation with fixed-target experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Izaguirre, Eder; Kahn, Yonatan; Krnjaic, Gordan
In this paper, we introduce a novel program of fixed-target searches for thermal-origin Dark Matter (DM), which couples inelastically to the Standard Model. Since the DM only interacts by transitioning to a heavier state, freeze-out proceeds via coannihilation and the unstable heavier state is depleted at later times. For sufficiently large mass splittings, direct detection is kinematically forbidden and indirect detection is impossible, so this scenario can only be tested with accelerators. Here we propose new searches at proton and electron beam fixed-target experiments to probe sub-GeV coannihilation, exploiting the distinctive signals of up- and downscattering as well as decay of the excited state inside the detector volume. We focus on a representative model in which DM is a pseudo-Dirac fermion coupled to a hidden gauge field (dark photon), which kinetically mixes with the visible photon. We define theoretical targets in this framework and determine the existing bounds by reanalyzing results from previous experiments. We find that LSND, E137, and BaBar data already place strong constraints on the parameter space consistent with a thermal freeze-out origin, and that future searches at Belle II and MiniBooNE, as well as recently-proposed fixed-target experiments such as LDMX and BDX, can cover nearly all remaining gaps. We also briefly comment on the discovery potential for proposed beam dump and neutrino experiments which operate at much higher beam energies.
Jealousy and the threatened self: getting to the heart of the green-eyed monster.
DeSteno, David; Valdesolo, Piercarlo; Bartlett, Monica Y
2006-10-01
Several theories specifying the causes of jealousy have been put forth in the past few decades. Firm support for any proposed theory, however, has been limited by the difficulties inherent in inducing jealousy and examining any proposed mediating mechanisms in real time. In support of a theory of jealousy centering on threats to the self-system, 2 experiments are presented that address these past limitations and argue for a model based on context-induced variability in self-evaluation. Experiment 1 presents a method for evoking jealousy through the use of highly orchestrated social encounters and demonstrates that threatened self-esteem functions as a principal mediator of jealousy. In addition to replicating these findings, Experiment 2 provides direct evidence for jealousy as a cause of aggression. The ability of the proposed theory of jealousy to integrate other extant findings in the literature is also discussed. 2006 APA, all rights reserved
A universal test for gravitational decoherence
Pfister, C.; Kaniewski, J.; Tomamichel, M.; Mantri, A.; Schmucker, R.; McMahon, N.; Milburn, G.; Wehner, S.
2016-01-01
Quantum mechanics and the theory of gravity are presently not compatible. A particular question is whether gravity causes decoherence. Several models for gravitational decoherence have been proposed, not all of which can be described quantum mechanically. Since quantum mechanics may need to be modified, one may question the use of quantum mechanics as a calculational tool to draw conclusions from the data of experiments concerning gravity. Here we propose a general method to estimate gravitational decoherence in an experiment that allows us to draw conclusions in any physical theory where the no-signalling principle holds, even if quantum mechanics needs to be modified. As an example, we propose a concrete experiment using optomechanics. Our work raises the interesting question whether other properties of nature could similarly be established from experimental observations alone—that is, without already having a rather well-formed theory of nature to make sense of experimental data. PMID:27694976
Detecting nonsense for Chinese comments based on logistic regression
NASA Astrophysics Data System (ADS)
Zhuolin, Ren; Guang, Chen; Shu, Chen
2016-07-01
To understand cyber citizens' opinions accurately from Chinese news comments, a clear definition of nonsense is presented, and a detection model based on logistic regression (LR) is proposed. The detection of nonsense can be treated as a binary classification problem. Besides traditional lexical features, we propose three kinds of features concerning emotion, structure and relevance. With these features, we train an LR model and demonstrate its effectiveness in understanding Chinese news comments. We find that each of the proposed features significantly improves the result. In our experiments, we achieve a prediction accuracy of 84.3%, which improves on the 77.3% baseline by 7 percentage points.
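As a sketch of how such a detector could be assembled, the Python snippet below fits a logistic regression over a feature vector that concatenates a few lexical scores with single emotion, structure, and relevance scores. All features and labels here are synthetic stand-ins invented for illustration; the paper's actual feature extraction from Chinese text is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical per-comment features: lexical scores plus the three proposed groups.
rng = np.random.default_rng(0)
n = 1000
lexical = rng.normal(size=(n, 3))
emotion = rng.normal(size=(n, 1))
structure = rng.normal(size=(n, 1))
relevance = rng.normal(size=(n, 1))
X = np.hstack([lexical, emotion, structure, relevance])

# Synthetic labels: nonsense (1) is more likely for low relevance and low structure.
logit = -1.5 * relevance[:, 0] - 1.0 * structure[:, 0] + 0.3 * rng.normal(size=n)
y = (logit > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
print("feature weights:", np.round(clf.coef_[0], 2))
```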
Zhu, Xiaoning
2014-01-01
Rail mounted gantry crane (RMGC) scheduling is important in reducing the makespan of handling operations and improving container handling efficiency. In this paper, we present an RMGC scheduling optimization model whose objective is to determine an optimized handling sequence that minimizes RMGC idle load time in handling tasks. An ant colony optimization algorithm is proposed to obtain near-optimal solutions. Computational experiments on a specific railway container terminal are conducted to illustrate the proposed model and solution algorithm. The results show that the proposed method is effective in reducing the idle load time of the RMGC. PMID:25538768
The Intelligent Control System and Experiments for an Unmanned Wave Glider.
Liao, Yulei; Wang, Leifeng; Li, Yiming; Li, Ye; Jiang, Quanquan
2016-01-01
Designing the control system of an Unmanned Wave Glider (UWG) is challenging since the system is weakly maneuverable, has a large time lag, and is subject to large disturbances, which makes it difficult to establish an accurate mathematical model. Meanwhile, to complete marine environment monitoring autonomously over long time scales and large spatial scales, the UWG demands high intelligence and reliability. This paper focuses on the "Ocean Rambler" UWG. First, the intelligent control system architecture is designed based on the cerebrum basic function combination zone theory and a hierarchic control method. The hardware and software design of the embedded motion control system is mainly discussed. A motion control system based on a four-layer rational behavior model is proposed. Then, combined with the line-of-sight (LOS) method, a self-adapting PID guidance law is proposed to compensate for the steady-state error in path following of the UWG caused by marine environment disturbances, especially current. Based on the S-surface control method, an improved S-surface heading controller is proposed to solve the heading control problem of the weakly maneuverable vehicle under large disturbances. Finally, simulation experiments were carried out and the UWG completed autonomous path following and marine environment monitoring in sea trials. The simulation experiments and sea trial results show that the proposed intelligent control system, guidance law, and controller have favorable control performance, and the feasibility and reliability of the designed intelligent control system of the UWG are verified.
The Intelligent Control System and Experiments for an Unmanned Wave Glider
Liao, Yulei; Wang, Leifeng; Li, Yiming; Li, Ye; Jiang, Quanquan
2016-01-01
Designing the control system of an Unmanned Wave Glider (UWG) is challenging because the vehicle is weakly maneuverable and subject to large time lags and large disturbances, which makes it difficult to establish an accurate mathematical model. At the same time, to carry out marine environment monitoring autonomously over long time scales and large spatial scales, a UWG imposes high requirements on intelligence and reliability. This paper focuses on the “Ocean Rambler” UWG. First, the intelligent control system architecture is designed based on the cerebrum basic-function combination zone theory and a hierarchical control method. The hardware and software design of the embedded motion control system is discussed, and a motion control system based on a four-layer rational behavior model is proposed. Then, combined with the line-of-sight (LOS) method, a self-adapting PID guidance law is proposed to compensate the steady-state error in path following caused by marine environment disturbances, especially currents. Based on the S-surface control method, an improved S-surface heading controller is proposed to solve the heading control problem of the weakly maneuverable vehicle under large disturbances. Finally, simulation experiments were carried out, and the UWG completed autonomous path following and marine environment monitoring in sea trials. The simulation and sea trial results show that the proposed intelligent control system, guidance law, and controller have favorable control performance, verifying the feasibility and reliability of the designed intelligent control system of the UWG. PMID:28005956
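The sketch below illustrates the general guidance-plus-heading-control loop in simplified form: line-of-sight guidance produces a desired heading, and a PID loop steers a crude first-order yaw model toward it. It is not the paper's S-surface controller or self-adapting law; the gains, lookahead distance, and vehicle model are assumptions.

```python
# Simplified sketch of line-of-sight (LOS) guidance feeding a PID heading
# controller for a slow surface vehicle. Gains, lookahead distance and the
# first-order yaw model are illustrative assumptions.
import math

def los_heading(x, y, wp_prev, wp_next, lookahead=15.0):
    """Desired heading toward a point `lookahead` metres ahead on the path."""
    px, py = wp_prev
    qx, qy = wp_next
    path_angle = math.atan2(qy - py, qx - px)
    # Signed cross-track error relative to the path segment
    e = -(x - px) * math.sin(path_angle) + (y - py) * math.cos(path_angle)
    return path_angle + math.atan2(-e, lookahead)

# PID heading loop on a crude first-order yaw response, with a constant
# current-induced disturbance acting on the heading rate.
kp, ki, kd, dt = 1.2, 0.05, 0.8, 0.1
x, y, psi, r, integ, prev_err = 0.0, 5.0, 0.0, 0.0, 0.0, 0.0
for step in range(600):
    psi_d = los_heading(x, y, (0.0, 0.0), (100.0, 0.0))
    err = math.atan2(math.sin(psi_d - psi), math.cos(psi_d - psi))  # wrap angle
    integ += err * dt
    rudder = kp * err + ki * integ + kd * (err - prev_err) / dt
    prev_err = err
    r += dt * (-r + rudder + 0.05) / 2.0       # yaw dynamics + disturbance
    psi += r * dt
    x += 0.5 * math.cos(psi) * dt              # ~0.5 m/s forward speed
    y += 0.5 * math.sin(psi) * dt
print("final cross-track offset (m):", round(y, 2))
```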
Passive Acoustic Leak Detection for Sodium Cooled Fast Reactors Using Hidden Markov Models
NASA Astrophysics Data System (ADS)
Marklund, A. Riber; Kishore, S.; Prakash, V.; Rajan, K. K.; Michel, F.
2016-06-01
Acoustic leak detection for steam generators of sodium fast reactors has been an active research topic since the early 1970s, and several methods have been tested over the years. Inspired by its success in the field of automatic speech recognition, we here apply hidden Markov models (HMM) in combination with Gaussian mixture models (GMM) to the problem. To achieve this, we propose a new feature calculation scheme based on the temporal evolution of the power spectral density (PSD) of the signal. The proposed method is tested using acoustic signals recorded during steam/water injection experiments performed at the Indira Gandhi Centre for Atomic Research (IGCAR). We perform parametric studies on the HMM+GMM model size and demonstrate that the proposed method a) performs well without a priori knowledge of injection noise, b) can incorporate several noise models, and c) has an output distribution that simplifies false alarm rate control.
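A rough sketch of the HMM+GMM scoring idea is given below, using frame-wise log-PSD features; the use of hmmlearn, the model sizes, and the synthetic signals are all assumptions for illustration only.

```python
# Sketch of the HMM+GMM idea: frame a signal, use the log power spectral
# density (PSD) per frame as the feature sequence, and score test data against
# class models. Requires scipy and hmmlearn (tooling assumption).
import numpy as np
from scipy.signal import welch
from hmmlearn.hmm import GMMHMM

fs = 10_000  # Hz, hypothetical sampling rate

def psd_features(signal, frame_len=1000, nperseg=128):
    """Log-PSD of consecutive frames -> (n_frames, n_freq_bins) feature matrix."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    feats = [np.log(welch(f, fs=fs, nperseg=nperseg)[1] + 1e-12) for f in frames]
    return np.vstack(feats)

rng = np.random.default_rng(0)
background = rng.normal(size=fs * 10)                       # stand-in background noise
injection = background + 0.5 * rng.standard_t(3, fs * 10)   # stand-in leak noise

model_bg = GMMHMM(n_components=2, n_mix=2, covariance_type="diag",
                  n_iter=50, random_state=0).fit(psd_features(background))
model_leak = GMMHMM(n_components=2, n_mix=2, covariance_type="diag",
                    n_iter=50, random_state=0).fit(psd_features(injection))

test = psd_features(background + 0.5 * rng.standard_t(3, fs * 10))
print("log-likelihood ratio (leak - background):",
      model_leak.score(test) - model_bg.score(test))
```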
Information dissemination model for social media with constant updates
NASA Astrophysics Data System (ADS)
Zhu, Hui; Wu, Heng; Cao, Jin; Fu, Gang; Li, Hui
2018-07-01
With the development of social media tools and the pervasiveness of smart terminals, social media has become a significant source of information for many individuals. However, false information can spread rapidly, which may result in negative social impacts and serious economic losses. Thus, reducing the unfavorable effects of false information has become an urgent challenge. In this paper, a new competitive model called DMCU is proposed to describe the dissemination of information with constant updates in social media. In the model, we focus on the competitive relationship between the original false information and the updated information, and then propose the priority of related information. To evaluate the effectiveness of the proposed model, data sets containing actual social media activity are utilized in experiments. Simulation results demonstrate that the DMCU model can precisely describe the process of information dissemination with constant updates, and that it can be used to forecast information dissemination trends on social media.
NASA Astrophysics Data System (ADS)
Kajiwara, Itsuro; Furuya, Keiichiro; Ishizuka, Shinichi
2018-07-01
Model-based controllers with adaptive design variables are often used to control an object with time-dependent characteristics. However, the controller's performance is influenced by many factors such as modeling accuracy and fluctuations in the object's characteristics. One way to overcome these negative factors is to tune the model-based controller. Herein we propose an online tuning method to maintain control performance for an object that exhibits time-dependent variations. The proposed method employs the poles of the controller as design variables because the poles significantly impact performance. Specifically, we use simultaneous perturbation stochastic approximation (SPSA) to optimize a model-based controller with multiple design variables. Moreover, a vibration control experiment on an object whose characteristics vary with temperature demonstrates that the proposed method allows adaptive control and stably maintains the closed-loop characteristics.
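For reference, a minimal SPSA loop of the kind used for such online tuning is sketched below; the noisy quadratic cost and the gain-schedule constants are illustrative assumptions rather than the paper's plant or controller.

```python
# Minimal simultaneous perturbation stochastic approximation (SPSA) sketch:
# tune two controller parameters (abstract stand-ins for pole locations) to
# minimize a noisy performance cost.
import numpy as np

rng = np.random.default_rng(0)

def cost(theta):
    """Hypothetical noisy closed-loop cost, minimized at theta = (1.0, -2.0)."""
    target = np.array([1.0, -2.0])
    return float(np.sum((theta - target) ** 2) + 0.01 * rng.normal())

theta = np.array([0.0, 0.0])
a0, c0, A, alpha, gamma = 0.5, 0.1, 10.0, 0.602, 0.101  # common SPSA gain choices
for k in range(1, 201):
    a_k = a0 / (k + A) ** alpha
    c_k = c0 / k ** gamma
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1 perturbation
    # Two-sided simultaneous perturbation gradient estimate
    g_hat = (cost(theta + c_k * delta) - cost(theta - c_k * delta)) / (2 * c_k * delta)
    theta = theta - a_k * g_hat

print("tuned parameters:", np.round(theta, 3))
```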
Mao, Ling-Feng; Ning, H.; Hu, Changjun; Lu, Zhaolin; Wang, Gaofeng
2016-01-01
Field effect mobility in an organic device is determined by the activation energy. A new physical model of the activation energy is proposed by virtue of the energy and momentum conservation equations. The dependencies of the activation energy on the gate voltage and the drain voltage, observed in experiments reported in previous independent literature, can be well explained using the proposed model. Moreover, the expression in the proposed model, in which all parameters have clear physical meanings, can take the same mathematical form as the well-known Meyer-Neldel relation, which, being a phenomenological model, lacks clear physical meaning for some of its parameters. Thus the proposed model not only describes a physical mechanism but also offers a possibility to design the next generation of high-performance optoelectronics and integrated flexible circuits by optimizing device physical parameters. PMID:27103586
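For context, a commonly quoted textbook form of the Meyer-Neldel (compensation) rule is reproduced below; the symbols are generic and are not taken from the proposed model.

```latex
% Thermally activated quantity with activation energy E_A, whose prefactor
% itself grows exponentially with E_A (the Meyer-Neldel rule):
%   X(T) = X_0 \exp(-E_A / k_B T),   X_0 = X_{00} \exp(E_A / E_{MN}),
% where E_{MN} is the empirical Meyer-Neldel characteristic energy.
\[
  X(T) \;=\; X_{00}\,
  \exp\!\left(\frac{E_A}{E_{MN}}\right)
  \exp\!\left(-\frac{E_A}{k_B T}\right).
\]
```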
Generic framework for mining cellular automata models on protein-folding simulations.
Diaz, N; Tischer, I
2016-05-13
Cellular automata model identification is an important way of building simplified simulation models. In this study, we describe a generic architectural framework to ease the development of new metaheuristic-based algorithms for cellular automata model identification in protein-folding trajectories. Our framework was developed using a methodology based on design patterns that improves the experience of developing new algorithms. The usefulness of the proposed framework is demonstrated by the implementation of four algorithms, able to obtain extremely precise cellular automata models of the protein-folding process with a protein contact map representation. Dynamic rules obtained by the proposed approach are discussed, and future use of the new tool is outlined.
An acoustic glottal source for vocal tract physical models
NASA Astrophysics Data System (ADS)
Hannukainen, Antti; Kuortti, Juha; Malinen, Jarmo; Ojalammi, Antti
2017-11-01
A sound source is proposed for the acoustic measurement of physical models of the human vocal tract. The physical models are produced by fast prototyping, based on magnetic resonance imaging during prolonged vowel production. The sound source, accompanied by custom signal processing algorithms, is used for two kinds of measurements from physical models of the vocal tract: (i) amplitude frequency response and resonant frequency measurements, and (ii) signal reconstructions at the source output according to a target pressure waveform with measurements at the mouth position. The proposed source and the software are validated by computational acoustics experiments and measurements on a physical model of the vocal tract corresponding to the vowels [] of a male speaker.
A collaborative molecular modeling environment using a virtual tunneling service.
Lee, Jun; Kim, Jee-In; Kang, Lin-Woo
2012-01-01
Collaborative research on three-dimensional molecular modeling can be limited by differences in time zones and locations. A networked virtual environment can be utilized to overcome the problems caused by these temporal and spatial differences. However, traditional approaches did not sufficiently consider the integration of different computing environments, which are characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment that integrates different molecular modeling systems using a virtual tunneling service. We integrated Co-Coot, a collaborative crystallographic object-oriented toolkit, with VRMMS, a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results in pilot experiments.
Many-body physics using cold atoms
NASA Astrophysics Data System (ADS)
Sundar, Bhuvanesh
Advances in experiments on dilute ultracold atomic gases have given us access to highly tunable quantum systems. In particular, there have been substantial improvements in achieving different kinds of interaction between atoms. As a result, ultracold atomic gases offer an ideal platform to simulate many-body phenomena in condensed matter physics, and to engineer other novel phenomena that result from the exotic interactions produced between atoms. In this dissertation, I present a series of studies that explore the physics of dilute ultracold atomic gases in different settings. In each setting, I explore a different form of the inter-particle interaction. Motivated by experiments which induce artificial spin-orbit coupling for cold fermions, I explore this system in my first project. In this project, I propose a method to perform universal quantum computation using the excitations of interacting spin-orbit coupled fermions, in which effective p-wave interactions lead to the formation of a topological superfluid. Motivated by experiments which explore the physics of exotic interactions between atoms trapped inside optical cavities, I explore this system in a second project. I calculate the phase diagram of lattice bosons trapped in an optical cavity, where the cavity modes mediate effective global-range checkerboard interactions between the atoms. I compare this phase diagram with one that was recently measured experimentally. In two other projects, I explore quantum simulation of condensed matter phenomena due to spin-dependent interactions between particles. I propose a method to produce tunable spin-dependent interactions between atoms, using an optical Feshbach resonance. In one project, I use these spin-dependent interactions in an ultracold Bose-Fermi system, and propose a method to produce the Kondo model. I propose an experiment to directly observe the Kondo effect in this system. In another project, I propose using lattice bosons with a large hyperfine spin, which have Feshbach-induced spin-dependent interactions, to produce a quantum dimer model. I propose an experiment to detect the ground state in this system. In a final project, I develop tools to simulate the dynamics of fermionic superfluids in which fermions interact via a short-range interaction.
ERIC Educational Resources Information Center
Sung, Kyongje
2008-01-01
Participants searched a visual display for a target among distractors. Each of 3 experiments tested a condition proposed to require attention and for which certain models propose a serial search. Serial versus parallel processing was tested by examining effects on response time means and cumulative distribution functions. In 2 conditions, the…
ERIC Educational Resources Information Center
Kuhl, Julius
1978-01-01
A formal elaboration of the original theory of achievement motivation (Atkinson, 1957; Atkinson & Feather, 1966) is proposed that includes personal standards as determinants of motivational tendencies. The results of an experiment are reported that examines the validity of some of the implications of the elaborated model proposed here. (Author/RK)
A new discriminative kernel from probabilistic models.
Tsuda, Koji; Kawanabe, Motoaki; Rätsch, Gunnar; Sonnenburg, Sören; Müller, Klaus-Robert
2002-10-01
Recently, Jaakkola and Haussler (1999) proposed a method for constructing kernel functions from probabilistic models. Their so-called Fisher kernel has been combined with discriminative classifiers such as support vector machines and applied successfully in, for example, DNA and protein analysis. Whereas the Fisher kernel is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from tangent vectors of posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
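The sketch below illustrates the general "kernel from a probabilistic model" construction in the Fisher-kernel style, mapping samples to score vectors of a fitted Gaussian; the TOP kernel itself instead uses tangent vectors of the posterior log-odds, which is not reproduced here, and the fitted parameters are assumptions.

```python
# Illustrative kernel built from a probabilistic model: map each sample to the
# gradient of the model log-likelihood with respect to the parameters, then
# take inner products of those score vectors (Fisher-kernel style).
import numpy as np

def score_vector(x, mu, sigma):
    """Analytic gradient of log N(x; mu, sigma^2) w.r.t. (mu, sigma)."""
    d_mu = (x - mu) / sigma**2
    d_sigma = ((x - mu) ** 2 - sigma**2) / sigma**3
    return np.array([d_mu, d_sigma])

mu_hat, sigma_hat = 0.0, 1.0        # generative model assumed fitted beforehand
rng = np.random.default_rng(0)
xs = rng.normal(0.5, 1.2, size=5)   # a few samples to embed

U = np.array([score_vector(x, mu_hat, sigma_hat) for x in xs])
K = U @ U.T                          # kernel matrix of score-vector inner products
print(np.round(K, 3))
```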
An Inherited Efficiencies Model of Non-Genomic Evolution
NASA Technical Reports Server (NTRS)
New, Michael H.; Pohorille, Andrew
1999-01-01
A model for the evolution of biological systems in the absence of a nucleic acid-like genome is proposed and applied to model the earliest living organisms -- protocells composed of membrane encapsulated peptides. Assuming that the peptides can make and break bonds between amino acids, and bonds in non-functional peptides are more likely to be destroyed than in functional peptides, it is demonstrated that the catalytic capabilities of the system as a whole can increase. This increase is defined to be non-genomic evolution. The relationship between the proposed mechanism for evolution and recent experiments on self-replicating peptides is discussed.
Electrostatic Discharge Initiation Experiments using PVDF Pressure Transducers
1991-12-01
ignition sensitivity. The results are discussed within the context of a preliminary model of electrostatic initiation (NAVSWC TR 91-666; report front matter and table of contents omitted). It is necessary to establish effective techniques to reduce the hazards associated with ESD ignition. A two-phase ignition model has been proposed.
A globally accurate theory for a class of binary mixture models
NASA Astrophysics Data System (ADS)
Dickman, Adriana G.; Stell, G.
The self-consistent Ornstein-Zernike approximation results for the 3D Ising model are used to obtain phase diagrams for binary mixtures described by decorated models, yielding the plait point, binodals, and closed-loop coexistence curves for the models proposed by Widom, Clark, Neece, and Wheeler. The results are in good agreement with series expansions and experiments.
ERIC Educational Resources Information Center
Kalechofsky, Robert
This research paper proposes several mathematical models which help clarify Piaget's theory of cognition on the concrete and formal operational stages. Some modified lattice models were used for the concrete stage and a combined Boolean Algebra and group theory model was used for the formal stage. The researcher used experiments cited in the…
NASA Astrophysics Data System (ADS)
Ni, Jun; Hu, Jibin
2017-06-01
In this paper, a novel dynamics controller that simultaneously drives an autonomous vehicle at its handling limits and follows the desired path is proposed. The dynamics controller consists of longitudinal and lateral controllers. In the longitudinal controller, the G-G diagram is utilized to describe the driving and handling limits of the vehicle. An accurate G-G diagram is obtained based on a phase-plane approach and a nonlinear vehicle dynamics model with an accurate tyre model. In the lateral controller, the tyre cornering stiffness is estimated to improve the robustness of the controller. The stability analysis of the closed-loop error dynamics shows that the controller remains stable against parameter uncertainties in extreme conditions such as tyre saturation. Finally, an electric autonomous Formula race car developed by the authors is used to validate the proposed controller. An autonomous driving experiment on an oval race track shows the efficiency and robustness of the proposed controller.
Aha Malawi! Envisioning Field Experiences That Nurture Cultural Competencies for Preservice Teachers
ERIC Educational Resources Information Center
Talbot, Patricia A.
2011-01-01
This theoretical study uses the context of the writer's personal encounters in Malawi, Africa, to propose a conceptual model for creating diverse field experiences based on best practices in critical pedagogy, service learning, and the underpinnings of transformational learning theory, for the purpose of increasing the probability of meaningful…
A Case of Reform: The Undergraduate Research Collaboratives
ERIC Educational Resources Information Center
Horsch, Elizabeth; St. John, Mark; Christensen, Ronald L.
2012-01-01
Despite numerous calls for reform, the early chemistry experience for most college students has remained unchanged for decades. In 2004 the National Science Foundation (NSF) issued a call for proposals to create new models of chemical education that would infuse authentic research into the early stages of a student's college experience. Under this…
Yeast Biocontrol of a Fungal Plant Disease: A Model for Studying Organism Interrelationships
ERIC Educational Resources Information Center
Chanchaichaovivat, Arun; Panijpan, Bhinyo; Ruenwongsa, Pintip
2008-01-01
An experiment on the action of the yeast, "Saccharomyces cerevisiae", against a fungal plant disease is proposed for secondary students (Grade 11) to support their study of organism interrelationship. This biocontrol experiment serves as the basis for discussing relationships among three organisms (red chilli fruit, "Saccharomyces cerevisiae," and…
Trends in Interdisciplinary and Integrative Graduate Training: An NSF IGERT Example
ERIC Educational Resources Information Center
Martin, Philip E.; Umberger, Brian R.
2003-01-01
In a report entitled "Reshaping the Graduate Education of Scientists and Engineers" (National Academy of Sciences, 1995), the Committee on Science, Engineering, and Public Policy proposed a modified PhD training model that retains an emphasis on intensive research experiences, while incorporating additional experiences to prepare graduates for an…
Polarimetric SAR Models for Oil Fields Monitoring in China Seas
NASA Astrophysics Data System (ADS)
Buono, A.; Nunziata, F.; Li, X.; Wei, Y.; Ding, X.
2014-11-01
In this study, physical-based models for polarimetric Synthetic Aperture Radar (SAR) oil field monitoring are proposed. They all share a physical rationale relying on the different scattering mechanisms that characterize a free sea surface, an oil slick-covered sea surface, and a metallic target. In fact, sea surface scattering is well modeled by a Bragg-like behaviour, while a strong departure from Bragg scattering is in place when dealing with oil slicks and targets. Furthermore, the proposed polarimetric models aim at addressing target and oil slick detection simultaneously, providing useful extra information with respect to single-pol SAR data in order to approach oil discrimination and classification. Experiments undertaken over the East and South China Seas with actual C-band RadarSAT-2 full-pol SAR data demonstrate the soundness of the proposed rationale.
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, using the proposed model, we select the optimal supplier and calculate the optimal product volume to purchase from that supplier so that the inventory level stays as close as possible to the reference point at minimal cost. We use stochastic dynamic programming to solve this problem and present several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier and the inventory level tracked the reference point well.
The transformation of multi-sensory experiences into memories during sleep.
Rothschild, Gideon
2018-03-26
Our everyday lives present us with a continuous stream of multi-modal sensory inputs. While most of this information is soon forgotten, sensory information associated with salient experiences can leave long-lasting memories in our minds. Extensive human and animal research has established that the hippocampus is critically involved in this process of memory formation and consolidation. However, the underlying mechanistic details are still only partially understood. Specifically, the hippocampus has often been suggested to encode information during experience, temporarily store it, and gradually transfer this information to the cortex during sleep. In rodents, ample evidence has supported this notion in the context of spatial memory, yet whether this process adequately describes the consolidation of multi-sensory experiences into memories is unclear. Here, focusing on rodent studies, I examine how multi-sensory experiences are consolidated into long term memories by hippocampal and cortical circuits during sleep. I propose that in contrast to the classical model of memory consolidation, the cortex is a "fast learner" that has a rapid and instructive role in shaping hippocampal-dependent memory consolidation. The proposed model may offer mechanistic insight into memory biasing using sensory cues during sleep. Copyright © 2018 Elsevier Inc. All rights reserved.
Near infrared spectrum simulation applied to human skin for diagnosis
NASA Astrophysics Data System (ADS)
Tsai, Chen-Mu; Fang, Yi-Chin; Wang, Chih-Yu; Chiu, Pin-Chun; Wu, Guo-Ying; Zheng, Wei-Chi; Chemg, Shih-Hao
2007-11-01
This research proposes a new method for skin diagnosis using near infrared light as the source (750 nm~1300 nm). Compared to UV and visible light, near infrared can penetrate relatively deep into biological soft tissue in some cases, although the NIR absorption properties of tissue are not constant across water, fat, collagen, etc. In this research, the NIR absorption and scattering properties of skin are first discussed using the theory of molecular vibration from quantum physics and solid state physics; second, practical models of the NIR absorption spectra of skin tissue are built by optical simulation of human skin. Finally, experiments are performed to further validate the proposed model of human skin and its response to near infrared light. The results show successful identification from both theory and experiments.
Xu, Zheng; Wang, Sheng; Li, Yeqing; Zhu, Feiyun; Huang, Junzhou
2018-02-08
The most recent history of parallel Magnetic Resonance Imaging (pMRI) has in large part been devoted to finding ways to reduce acquisition time. While the joint total variation (JTV) regularized model has been demonstrated to be a powerful tool for increasing sampling speed in pMRI, the major bottleneck is the inefficiency of the optimization method. Whereas all present state-of-the-art optimizations for the JTV model reach only a sublinear convergence rate, in this paper we improve performance by proposing a linearly convergent optimization method for the JTV model. The proposed method is based on the Iterative Reweighted Least Squares algorithm. Due to the complexity of the tangled JTV objective, we design a novel preconditioner to further accelerate the proposed method. Extensive experiments demonstrate the superior performance of the proposed algorithm for pMRI regarding both accuracy and efficiency compared with state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Georgiou, Katerina; Abramoff, Rose; Harte, John; Riley, William; Torn, Margaret
2017-04-01
Climatic, atmospheric, and land-use changes all have the potential to alter soil microbial activity via abiotic effects on soil or mediated by changes in plant inputs. Recently, many promising microbial models of soil organic carbon (SOC) decomposition have been proposed to advance understanding and prediction of climate and carbon (C) feedbacks. Most of these models, however, exhibit unrealistic oscillatory behavior and SOC insensitivity to long-term changes in C inputs. Here we diagnose the sources of instability in four models that span the range of complexity of these recent microbial models, by sequentially adding complexity to a simple model to include microbial physiology, a mineral sorption isotherm, and enzyme dynamics. We propose a formulation that introduces density-dependence of microbial turnover, which acts to limit population sizes and reduce oscillations. We compare these models to results from 24 long-term C-input field manipulations, including the Detritus Input and Removal Treatment (DIRT) experiments, to show that there are clear metrics that can be used to distinguish and validate the inherent dynamics of each model structure. We find that widely used first-order models and microbial models without density-dependence cannot readily capture the range of long-term responses observed across the DIRT experiments as a direct consequence of their model structures. The proposed formulation improves predictions of long-term C-input changes, and implies greater SOC storage associated with CO2-fertilization-driven increases in C inputs over the coming century compared to common microbial models. Finally, we discuss our findings in the context of improving microbial model behavior for inclusion in Earth System Models.
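A toy sketch of a microbial decomposition model with density-dependent microbial turnover is given below; the two-pool structure and all parameter values are illustrative assumptions, not the four models or the DIRT data analysed in the study.

```python
# Toy two-pool microbial soil-carbon model with density-dependent microbial
# turnover (death rate ~ B**beta with beta > 1), the kind of formulation
# proposed to damp oscillations. Parameters are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

I = 1.0                    # C input flux; change to mimic input manipulations
V_max, K_m = 8.0, 200.0    # Michaelis-Menten uptake parameters
CUE = 0.4                  # carbon use efficiency
k_d, beta = 0.02, 2.0      # density-dependent turnover: death = k_d * B**beta

def rhs(t, y):
    C, B = y                                  # substrate C and microbial biomass
    uptake = V_max * B * C / (K_m + C)
    death = k_d * B ** beta
    dC = I - uptake + death                   # dead microbes return to substrate
    dB = CUE * uptake - death
    return [dC, dB]

sol = solve_ivp(rhs, (0, 2000), [100.0, 2.0], max_step=1.0)
print("final substrate C:", round(sol.y[0, -1], 1),
      " final biomass:", round(sol.y[1, -1], 2))
```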
Dendritic Alloy Solidification Experiment (DASE)
NASA Technical Reports Server (NTRS)
Beckermann, C.; Karma, A.; Steinbach, I.; deGroh, H. C., III
2001-01-01
A space experiment, and supporting ground-based research, is proposed to study the microstructural evolution in free dendritic growth from a supercooled melt of the transparent model alloy succinonitrile-acetone (SCN-ACE). The research is relevant to equiaxed solidification of metal alloy castings. The microgravity experiment will establish a benchmark for testing of equiaxed dendritic growth theories, scaling laws, and models in the presence of purely diffusive, coupled heat and solute transport, without the complicating influences of melt convection. The specific objectives are to: determine the selection of the dendrite tip operating state, i.e. the growth velocity and tip radius, for free dendritic growth of succinonitrile-acetone alloys; determine the growth morphology and sidebranching behavior for freely grown alloy dendrites; determine the effects of the thermal/solutal interactions in the growth of an assemblage of equiaxed alloy crystals; determine the effects of melt convection on the free growth of alloy dendrites; measure the surface tension anisotropy strength of succinonitrile-acetone alloys; and establish a theoretical and modeling framework for the experiments. Microgravity experiments on equiaxed dendritic growth of alloy dendrites have not been performed in the past. The proposed experiment builds on the Isothermal Dendritic Growth Experiment (IDGE) of Glicksman and coworkers, which focused on the steady growth of a single crystal from pure supercooled melts (succinonitrile and pivalic acid). It also extends the Equiaxed Dendritic Solidification Experiment (EDSE) of the present investigators, which is concerned with the interactions and transients arising in the growth of an assemblage of equiaxed crystals (succinonitrile). However, these experiments with pure substances are not able to address the issues related to coupled heat and solute transport in growth of alloy dendrites.
Schädler, Marc René; Warzybok, Anna; Ewert, Stephan D; Kollmeier, Birger
2016-05-01
A framework for simulating auditory discrimination experiments, based on an approach from Schädler, Warzybok, Hochmuth, and Kollmeier [(2015). Int. J. Audiol. 54, 100-107], which was originally designed to predict speech recognition thresholds, is extended to also predict psychoacoustic thresholds. The proposed framework is used to assess the suitability of different auditory-inspired feature sets for a range of auditory discrimination experiments that included psychoacoustic as well as speech recognition experiments in noise. The considered experiments were 2 kHz tone-in-broadband-noise simultaneous masking depending on the tone length, spectral masking with simultaneously presented tone signals and narrow-band noise maskers, and German Matrix sentence test reception threshold in stationary and modulated noise. The employed feature sets included spectro-temporal Gabor filter bank features, Mel-frequency cepstral coefficients, logarithmically scaled Mel-spectrograms, and the internal representation of the Perception Model from Dau, Kollmeier, and Kohlrausch [(1997). J. Acoust. Soc. Am. 102(5), 2892-2905]. The proposed framework was successfully employed to simulate all experiments with a common parameter set and obtain objective thresholds with fewer assumptions than traditional modeling approaches. Depending on the feature set, the simulated reference-free thresholds were found to agree with, and hence predict, empirical data from the literature. Across-frequency processing was found to be crucial to accurately modeling the lower speech reception threshold observed in modulated noise conditions compared to stationary noise conditions.
Optimum design of a novel pounding tuned mass damper under harmonic excitation
NASA Astrophysics Data System (ADS)
Wang, Wenxi; Hua, Xugang; Wang, Xiuyong; Chen, Zhengqing; Song, Gangbing
2017-05-01
In this paper, a novel pounding tuned mass damper (PTMD) utilizing pounding damping is proposed to reduce structural vibration by increasing the damping ratio of a lightly damped structure. The pounding boundary covered by viscoelastic material is fixed right next to the tuned mass when the spring-mass system is in the equilibrium position. The dynamic properties of the proposed PTMD, including the natural frequency and the equivalent damping ratio, are derived theoretically. Moreover, the numerical simulation method by using an impact force model to study the PTMD is proposed and validated by pounding experiments. To minimize the maximum dynamic magnification factor under harmonic excitations, an optimum design of the PTMD is developed. Finally, the optimal PTMD is implemented to control a lightly damped frame structure. A comparison of experimental and simulated results reveals that the proposed impact force model can accurately model the pounding force. Furthermore, the proposed PTMD is effective to control the vibration in a wide frequency range, as demonstrated experimentally.
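A sketch of one widely used pounding force formulation (a Hertz contact spring with a nonlinear damper) driving a single-degree-of-freedom tuned mass is shown below; the stiffness, damping, and structural parameters are illustrative assumptions, not the values identified in these experiments.

```python
# Hertz-damp type pounding force model often used for impacts between a tuned
# mass and a viscoelastic-covered boundary:
#   F = beta * d**1.5 + c * d**1.5 * d_dot   for penetration d > 0, else F = 0.
# All parameter values are illustrative assumptions.
import numpy as np

beta = 4.0e6      # Hertz contact stiffness (N / m^1.5)
c = 2.0e4         # nonlinear damping coefficient (N s / m^2.5)
gap = 0.0         # pounding boundary at the equilibrium position

def pounding_force(x, v):
    """Impact force on the tuned mass for displacement x and velocity v."""
    d = x - gap                       # penetration into the viscoelastic layer
    if d <= 0.0:
        return 0.0
    return beta * d ** 1.5 + c * d ** 1.5 * v

# Integrate a lightly damped 1-DOF tuned mass hitting the boundary,
# using a simple semi-implicit Euler scheme.
m, k, zeta = 10.0, 4.0e3, 0.01
c_lin = 2 * zeta * np.sqrt(k * m)
x, v, dt = -0.02, 0.0, 1e-5           # released 20 mm away from the boundary
excursions = []
for _ in range(200_000):              # 2 s of simulated time
    f = -k * x - c_lin * v - pounding_force(x, v)
    v += dt * f / m
    x += dt * v
    excursions.append(abs(x))
print("max excursion in the second half (mm):",
      round(1000 * max(excursions[100_000:]), 2))
```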
On Productive Knowledge and Levels of Questions.
ERIC Educational Resources Information Center
Andre, Thomas
A model is proposed for memory that stresses a distinction between episodic memory for encoded personal experience and semantic memory for abstractions and generalizations. Basically, the model holds that questions influence the nature of memory representations formed during instruction, and that memory representation controls the way in which…
A Model for Establishing Learning Communities at a HBCU in Graduate Classes
ERIC Educational Resources Information Center
Duncan, Bernadine; Barber-Freeman, Pamela T.
2008-01-01
Because of the positive effects of learning communities with undergraduates, these researchers proposed the Collaborative Learning Initiatives that Motivate Bi-cultural experiences model (CLIMB) to implement learning communities within graduate counseling and educational administration courses. This article examines the concept of learning…
The Probability Heuristics Model of Syllogistic Reasoning.
ERIC Educational Resources Information Center
Chater, Nick; Oaksford, Mike
1999-01-01
Proposes a probability heuristic model for syllogistic reasoning and confirms the rationality of this heuristic by an analysis of the probabilistic validity of syllogistic reasoning that treats logical inference as a limiting case of probabilistic inference. Meta-analysis and two experiments involving 40 adult participants and using generalized…
Pilipino American Identity Development Model
ERIC Educational Resources Information Center
Nadal, Kevin L.
2004-01-01
This article examines the identity development of F/Pilipino Americans. Because of a distinct history and culture that differentiates them from other Asian groups, F/Pilipino Americans may experience a different ethnic identity development than other Asian Americans. A nonlinear 6-stage ethnic identity development model is proposed to promote…
Cognitive Development during the College Years.
ERIC Educational Resources Information Center
Van Hecke, Madeleine L.
The use of William Perry's (1970) model of cognitive development during the college years to restructure an abnormal psychology course is described. The model provides a framework for students and teachers to understand the confusion and frustration they sometimes experience. Perry proposed that students enter college with tacit epistemological…
Decontaminate feature for tracking: adaptive tracking via evolutionary feature subset
NASA Astrophysics Data System (ADS)
Liu, Qiaoyuan; Wang, Yuru; Yin, Minghao; Ren, Jinchang; Li, Ruizhi
2017-11-01
Although various visual tracking algorithms have been proposed in the last two to three decades, effective tracking under fast motion, deformation, occlusion, etc. remains a challenging problem. Under complex tracking conditions, most tracking models are not discriminative and adaptive enough. When combined feature vectors are input to the visual models, this may lead to redundancy, causing low efficiency, and ambiguity, causing poor performance. An effective tracking algorithm is proposed to decontaminate features for each video sequence adaptively, where the visual modeling is treated as an optimization problem from the perspective of evolution. Each feature vector is treated as a biological individual and then decontaminated via classical evolutionary algorithms. With the optimized subsets of features, the "curse of dimensionality" is avoided while the accuracy of the visual model is improved. The proposed algorithm has been tested on several publicly available datasets with various tracking challenges and benchmarked against a number of state-of-the-art approaches. The comprehensive experiments demonstrate the efficacy of the proposed methodology.
Efficient Regressions via Optimally Combining Quantile Information*
Zhao, Zhibiao; Xiao, Zhijie
2014-01-01
We develop a generally applicable framework for constructing efficient estimators of regression models via quantile regressions. The proposed method is based on optimally combining information over multiple quantiles and can be applied to a broad range of parametric and nonparametric settings. When combining information over a fixed number of quantiles, we derive an upper bound on the distance between the efficiency of the proposed estimator and the Fisher information. As the number of quantiles increases, this upper bound decreases and the asymptotic variance of the proposed estimator approaches the Cramér-Rao lower bound under appropriate conditions. In the case of non-regular statistical estimation, the proposed estimator leads to super-efficient estimation. We illustrate the proposed method for several widely used regression models. Both asymptotic theory and Monte Carlo experiments show the superior performance over existing methods. PMID:25484481
Adaptive time-variant models for fuzzy-time-series forecasting.
Wong, Wai-Keung; Bai, Enjian; Chu, Alice Wai-Ching
2010-12-01
A fuzzy time series has been applied to the prediction of enrollment, temperature, stock indices, and other domains. Related studies mainly focus on three factors, namely, the partition of discourse, the content of forecasting rules, and the methods of defuzzification, all of which greatly influence the prediction accuracy of forecasting models. These studies use fixed analysis window sizes for forecasting. In this paper, an adaptive time-variant fuzzy-time-series forecasting model (ATVF) is proposed to improve forecasting accuracy. The proposed model automatically adapts the analysis window size of fuzzy time series based on the prediction accuracy in the training phase and uses heuristic rules to generate forecasting values in the testing phase. The performance of the ATVF model is tested using both simulated and actual time series including the enrollments at the University of Alabama, Tuscaloosa, and the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX). The experiment results show that the proposed ATVF model achieves a significant improvement in forecasting accuracy as compared to other fuzzy-time-series forecasting models.
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs with a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
Superstatistics model for T₂ distribution in NMR experiments on porous media.
Correia, M D; Souza, A M; Sinnecker, J P; Sarthour, R S; Santos, B C C; Trevizan, W; Oliveira, I S
2014-07-01
We propose analytical functions for the T2 distribution to describe transverse relaxation in high- and low-field NMR experiments on porous media. The method is based on superstatistics theory and allows one to find the mean and standard deviation of T2 directly from measurements. It is an alternative to multiexponential models for the inversion of data decay in NMR experiments. We exemplify the method with q-exponential functions and χ(2)-distributions describing, respectively, the data decay and the T2 distribution for high-field experiments on fully water-saturated glass-microsphere bed packs and outcrop sedimentary rocks, as well as for a noisy low-field experiment on rocks. The method is general and can also be applied to biological systems. Copyright © 2014 Elsevier Inc. All rights reserved.
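For reference, the generic Tsallis q-exponential used in superstatistics is written out below; the notation is assumed here and may differ from the paper's exact parameterization.

```latex
% Generic q-exponential decay (superstatistics form; notation assumed):
\[
  M(t) \;=\; M_0 \, e_q\!\left(-\frac{t}{T_{2q}}\right),
  \qquad
  e_q(x) \;=\; \bigl[\, 1 + (1-q)\,x \,\bigr]_{+}^{\frac{1}{1-q}},
\]
% which reduces to the ordinary single-exponential decay as $q \to 1$; for
% $q > 1$ it corresponds to averaging exponential decays over a
% $\chi^2$ (Gamma) distribution of relaxation rates.
```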
Advances in electrophoretic separations
NASA Technical Reports Server (NTRS)
Snyder, R. S.; Rhodes, P. H.
1984-01-01
Free fluid electrophoresis is described using laboratory and space experiments combined with extensive mathematical modeling. Buoyancy driven convective flows due to thermal and concentration gradients are absent in the reduced gravity environment of space. The elimination of convection in weightlessness offers possible improvements in electrophoresis and other separation methods which occur in fluid media. The mathematical modeling suggests new ways of doing electrophoresis in space and explains various phenomena observed during past experiments. The extent to which ground based separation techniques are limited by gravity induced convection is investigated and space experiments are designed to evaluate specific characteristics of the fluid/particle environment. A series of experiments are proposed that require weightlessness and apparatus is developed that can be used to carry out these experiments in the near future.
Modelling of resonant MEMS magnetic field sensor with electromagnetic induction sensing
NASA Astrophysics Data System (ADS)
Liu, Song; Xu, Huaying; Xu, Dehui; Xiong, Bin
2017-06-01
This paper presents an analytical model of resonant MEMS magnetic field sensor with electromagnetic induction sensing. The resonant structure vibrates in square extensional (SE) mode. By analyzing the vibration amplitude and quality factor of the resonant structure, the magnetic field sensitivity as a function of device structure parameters and encapsulation pressure is established. The developed analytical model has been verified by comparing calculated results with experiment results and the deviation between them is only 10.25%, which shows the feasibility of the proposed device model. The model can provide theoretical guidance for further design optimization of the sensor. Moreover, a quantitative study of the magnetic field sensitivity is conducted with respect to the structure parameters and encapsulation pressure based on the proposed model.
A cascading failure model for analyzing railway accident causation
NASA Astrophysics Data System (ADS)
Liu, Jin-Tao; Li, Ke-Ping
2018-01-01
In this paper, a new cascading failure model is proposed for quantitatively analyzing the railway accident causation. In the model, the loads of nodes are redistributed according to the strength of the causal relationships between the nodes. By analyzing the actual situation of the existing prevention measures, a critical threshold of the load parameter in the model is obtained. To verify the effectiveness of the proposed cascading model, simulation experiments of a train collision accident are performed. The results show that the cascading failure model can describe the cascading process of the railway accident more accurately than the previous models, and can quantitatively analyze the sensitivities and the influence of the causes. In conclusion, this model can assist us to reveal the latent rules of accident causation to reduce the occurrence of railway accidents.
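A generic sketch of the load-redistribution mechanism behind such cascading-failure models is given below; the small causation graph, edge weights, and failure threshold are illustrative assumptions, not the accident data or parameters used in the paper.

```python
# Generic cascading-failure simulation on a weighted causation network: when a
# node fails, its load is redistributed to downstream nodes in proportion to
# edge weights (strength of the causal relationship); any node whose load then
# exceeds a threshold fails in turn. Graph, weights and threshold are made up.
from collections import deque

# adjacency: cause -> {effect: causal strength}
edges = {
    "rail_defect":       {"derailment": 0.7, "speed_restriction": 0.3},
    "signal_fault":      {"driver_error": 0.6, "speed_restriction": 0.4},
    "driver_error":      {"collision": 1.0},
    "derailment":        {"collision": 0.5, "line_blockage": 0.5},
    "speed_restriction": {},
    "collision":         {},
    "line_blockage":     {},
}
load = {n: 0.4 for n in edges}      # initial load of every causation node
threshold = 1.0                      # critical load above which a node "fails"

def cascade(initial_failure):
    failed, queue = set(), deque([initial_failure])
    load[initial_failure] = threshold
    while queue:
        n = queue.popleft()
        if n in failed:
            continue
        failed.add(n)
        total_w = sum(edges[n].values())
        for m, w in edges[n].items():        # redistribute along causal edges
            load[m] += load[n] * w / total_w
            if load[m] >= threshold and m not in failed:
                queue.append(m)
    return failed

print("nodes failed after initial 'rail_defect':", sorted(cascade("rail_defect")))
```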
ERIC Educational Resources Information Center
Reider, David; Knestis, Kirk; Malyn-Smith, Joyce
2016-01-01
This article proposes a STEM workforce education logic model, tailored to the particular context of the National Science Foundation's Innovative Technology Experiences for Students and Teachers (ITEST) program. This model aims to help program designers and researchers address challenges particular to designing, implementing, and studying education…
A Model for the Branding of Higher Education in South Africa
ERIC Educational Resources Information Center
Hay, H. R.; van Gensen, G. A.
2008-01-01
In this article a proposed model for the branding of higher education institutions is provided. The model describes, among others, the internal practices that have a profound impact on branding and on an institution's overall reputation and image. The authors argue that a strong internal focus is necessary before a meaningful brand experience can…
ERIC Educational Resources Information Center
Baayen, R. Harald; Milin, Petar; Durdevic, Dusica Filipovic; Hendrix, Peter; Marelli, Marco
2011-01-01
A 2-layer symbolic network model based on the equilibrium equations of the Rescorla-Wagner model (Danks, 2003) is proposed. The study first presents 2 experiments in Serbian, which reveal for sentential reading the inflectional paradigmatic effects previously observed by Milin, Filipovic Durdevic, and Moscoso del Prado Martin (2009) for unprimed…
Stochastic modelling of infectious diseases for heterogeneous populations.
Ming, Rui-Xing; Liu, Ji-Ming; W Cheung, William K; Wan, Xiang
2016-12-22
Infectious diseases such as SARS and H1N1 can significantly impact people's lives and cause severe social and economic damages. Recent outbreaks have stressed the urgency of effective research on the dynamics of infectious disease spread. However, it is difficult to predict when and where outbreaks may emerge and how infectious diseases spread because many factors affect their transmission, and some of them may be unknown. One feasible means to promptly detect an outbreak and track the progress of disease spread is to implement surveillance systems in regional or national health and medical centres. The accumulated surveillance data, including temporal, spatial, clinical, and demographic information can provide valuable information that can be exploited to better understand and model the dynamics of infectious disease spread. The aim of this work is to develop and empirically evaluate a stochastic model that allows the investigation of transmission patterns of infectious diseases in heterogeneous populations. We test the proposed model on simulation data and apply it to the surveillance data from the 2009 H1N1 pandemic in Hong Kong. In the simulation experiment, our model achieves high accuracy in parameter estimation (less than 10.0 % mean absolute percentage error). In terms of the forward prediction of case incidence, the mean absolute percentage errors are 17.3 % for the simulation experiment and 20.0 % for the experiment on the real surveillance data. We propose a stochastic model to study the dynamics of infectious disease spread in heterogeneous populations from temporal-spatial surveillance data. The proposed model is evaluated using both simulated data and the real data from the 2009 H1N1 epidemic in Hong Kong and achieves acceptable prediction accuracy. We believe that our model can provide valuable insights for public health authorities to predict the effect of disease spread and analyse its underlying factors and to guide new control efforts.
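As a minimal illustration of stochastic transmission modelling, the sketch below runs a homogeneous stochastic SIR model with Gillespie's direct method; the rates and population size are assumptions, and the paper's heterogeneous-population structure and parameter estimation are not reproduced.

```python
# Minimal stochastic SIR epidemic (Gillespie's direct method). Population size
# and rates are illustrative assumptions; the model is homogeneous, unlike the
# heterogeneous-population formulation proposed in the paper.
import numpy as np

rng = np.random.default_rng(42)
N, beta, gamma = 10_000, 0.4, 0.2       # population, transmission, recovery rates
S, I, R, t = N - 10, 10, 0, 0.0
times, infected = [t], [I]

while I > 0 and t < 365:
    rate_inf = beta * S * I / N          # S -> I events
    rate_rec = gamma * I                 # I -> R events
    total = rate_inf + rate_rec
    t += rng.exponential(1.0 / total)    # time to the next event
    if rng.uniform() < rate_inf / total:
        S, I = S - 1, I + 1
    else:
        I, R = I - 1, R + 1
    times.append(t)
    infected.append(I)

print("final size (total ever infected):", N - S,
      " peak prevalence:", max(infected))
```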
ERIC Educational Resources Information Center
Al-Roubaiy, Najwan S.; Owen-Pugh, Valerie; Wheeler, Sue
2017-01-01
The psychotherapy experiences of a sample of Iraqi refugee men, in later stages of exile, were explored with the aim of shedding some light on how this client group can experience therapy. Ten adult male Iraqi refugees--who had lived in Sweden for at least five years and had been psychotherapy clients at some point during that time--were recruited…
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2015-10-01
In this paper, a new spectral-unmixing-based approach using Nonnegative Matrix Factorization (NMF) is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in that zone. This variance is compared to a threshold value, and the adequate linear/linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix the hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The spectral and spatial information thus extracted from the hyperspectral and multispectral images, respectively, are then recombined in the considered zone according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and of literature linear/linear-quadratic approaches applied to the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Overall, the proposed approach yields good spatial and spectral fidelity for the multi-sharpened data and significantly outperforms the literature methods used.
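The sketch below shows plain linear spectral unmixing with nonnegative matrix factorization on synthetic data; the local linear/linear-quadratic model selection driven by DSM variance is not reproduced, and all dimensions and values are assumptions.

```python
# Linear spectral unmixing with NMF: approximate a (pixels x bands) matrix as
# nonnegative abundances times nonnegative endmember spectra. Synthetic data.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
n_pixels, n_bands, n_endmembers = 400, 50, 3

endmembers = rng.uniform(0.1, 1.0, (n_endmembers, n_bands))      # spectra
abund = rng.dirichlet(np.ones(n_endmembers), size=n_pixels)      # sum-to-one abundances
X = abund @ endmembers + 0.01 * rng.random((n_pixels, n_bands))  # linear mixing + noise

model = NMF(n_components=n_endmembers, init="nndsvda", max_iter=500, random_state=0)
A_hat = model.fit_transform(X)      # estimated abundances (up to scaling/permutation)
E_hat = model.components_           # estimated endmember spectra
print("reconstruction RMSE:", round(np.sqrt(np.mean((X - A_hat @ E_hat) ** 2)), 4))
```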
The Satellite Clock Bias Prediction Method Based on Takagi-Sugeno Fuzzy Neural Network
NASA Astrophysics Data System (ADS)
Cai, C. L.; Yu, H. G.; Wei, Z. C.; Pan, J. D.
2017-05-01
The continuous improvement of the prediction accuracy of Satellite Clock Bias (SCB) is the key problem of precision navigation. In order to improve the precision of SCB prediction and better reflect the change characteristics of SCB, this paper proposes an SCB prediction method based on the Takagi-Sugeno fuzzy neural network. Firstly, the SCB values are pre-treated based on their characteristics. Then, an accurate Takagi-Sugeno fuzzy neural network model is established based on the preprocessed data to predict SCB. This paper uses the precise SCB data with different sampling intervals provided by IGS (International Global Navigation Satellite System Service) to realize the short-time prediction experiment, and the results are compared with the ARIMA (Auto-Regressive Integrated Moving Average) model, GM(1,1) model, and the quadratic polynomial model. The results show that the Takagi-Sugeno fuzzy neural network model is feasible and effective for the SCB short-time prediction experiment, and performs well for different types of clocks. The prediction results for the proposed method are better than the conventional methods obviously.
A novel approach for pilot error detection using Dynamic Bayesian Networks.
Saada, Mohamad; Meng, Qinggang; Huang, Tingwen
2014-06-01
In the last decade, Dynamic Bayesian Networks (DBNs) have become one of the most attractive probabilistic modelling extensions of Bayesian Networks (BNs) for working under uncertainty from a temporal perspective. Despite this popularity, not many researchers have attempted to study the use of these networks in anomaly detection or the implications of data anomalies for the outcome of such models. An abnormal change in the modelled environment's data at a given time will cause a trailing chain effect on the data of all related environment variables in current and consecutive time slices. Although this effect fades with time, it can still adversely affect the outcome of such models. In this paper we propose an algorithm for pilot error detection, using DBNs as the modelling framework for learning and detecting anomalous data. We base our experiments on the actions of an aircraft pilot, and a flight simulator is created for running the experiments. The proposed anomaly detection algorithm has achieved good results in detecting pilot errors and their effects on the whole system.
Dynamic elementary mode modelling of non-steady state flux data.
Folch-Fortuny, Abel; Teusink, Bas; Hoefsloot, Huub C J; Smilde, Age K; Ferrer, Alberto
2018-06-18
A novel framework is proposed to analyse metabolic fluxes in non-steady state conditions, based on the new concept of a dynamic elementary mode (dynEM): an elementary mode that is activated partially depending on the time point of the experiment. Two methods are introduced here: dynamic elementary mode analysis (dynEMA) and dynamic elementary mode regression discriminant analysis (dynEMR-DA). The former is an extension of the recently proposed principal elementary mode analysis (PEMA) method from steady state to non-steady state scenarios. The latter is a discriminant model that permits the identification of which dynEMs behave strongly differently depending on the experimental conditions. Two case studies of Saccharomyces cerevisiae, with fluxes derived from simulated and real concentration data sets, are presented to highlight the benefits of this dynamic modelling. This methodology permits the analysis of metabolic fluxes at early stages with the aim of i) creating reduced dynamic models of flux data, ii) combining many experiments in a single biologically meaningful model, and iii) identifying the metabolic pathways that drive the organism from one state to another when the environmental conditions change.
Shutin, Dmitriy; Zlobinskaya, Olga
2010-02-01
The goal of this contribution is to apply model-based information-theoretic measures to the quantification of relative differences between immunofluorescent signals. Several models for approximating the empirical fluorescence intensity distributions are considered, namely Gaussian, Gamma, Beta, and kernel densities. As a distance measure the Hellinger distance and the Kullback-Leibler divergence are considered. For the Gaussian, Gamma, and Beta models the closed-form expressions for evaluating the distance as a function of the model parameters are obtained. The advantages of the proposed quantification framework as compared to simple mean-based approaches are analyzed with numerical simulations. Two biological experiments are also considered. The first is the functional analysis of the p8 subunit of the TFIIH complex responsible for a rare hereditary multi-system disorder--trichothiodystrophy group A (TTD-A). In the second experiment the proposed methods are applied to assess the UV-induced DNA lesion repair rate. A good agreement between our in vivo results and those obtained with an alternative in vitro measurement is established. We believe that the computational simplicity and the effectiveness of the proposed quantification procedure will make it very attractive for different analysis tasks in functional proteomics, as well as in high-content screening. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
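As an example of the closed-form distances involved, the snippet below evaluates the squared Hellinger distance between two univariate Gaussian intensity models; the parameter values are hypothetical.

```python
# Closed-form squared Hellinger distance between two univariate Gaussians,
# the kind of model-based distance used to compare intensity distributions.
import math

def hellinger2_gauss(mu1, s1, mu2, s2):
    """Squared Hellinger distance between N(mu1, s1^2) and N(mu2, s2^2)."""
    coeff = math.sqrt(2.0 * s1 * s2 / (s1**2 + s2**2))
    expo = math.exp(-((mu1 - mu2) ** 2) / (4.0 * (s1**2 + s2**2)))
    return 1.0 - coeff * expo

# Example: control vs. treated intensity distributions (hypothetical parameters)
print("H^2 =", round(hellinger2_gauss(120.0, 25.0, 150.0, 30.0), 4))
```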
Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M
2014-01-01
This paper forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with the nonparametric method of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are conducted for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.
Distance measurement based on light field geometry and ray tracing.
Chen, Yanqin; Jin, Xin; Dai, Qionghai
2017-01-09
In this paper, we propose a geometric optical model to measure the distances of object planes in a light field image. The proposed geometric optical model is composed of two sub-models based on ray tracing: an object space model and an image space model. The two theoretical sub-models are derived for on-axis point light sources. In the object space model, light rays propagate into the main lens and refract inside it following the refraction theorem. In the image space model, light rays exit from emission positions on the main lens and subsequently impinge on the image sensor with different imaging diameters. The relationships between the imaging diameters of objects and their corresponding emission positions on the main lens are investigated using refocusing and the similar-triangle principle. By combining the two sub-models and tracing light rays back to object space, the relationships between objects' imaging diameters and the corresponding distances of the object planes are derived. The performance of the proposed geometric optical model is compared with existing approaches using different configurations of hand-held plenoptic 1.0 cameras, and real experiments are conducted using a preliminary imaging system. Results demonstrate that the proposed model outperforms existing approaches in terms of accuracy and exhibits good performance over a general imaging range.
NASA Technical Reports Server (NTRS)
Moran, M. S.; Goodrich, D. C.; Kustas, W. P.
1994-01-01
A research and modeling strategy is presented for the development of distributed hydrologic models based on a combination of remotely sensed and ground-based data. In support of this strategy, two experiments, Monsoon'90 and Walnut Gulch'92, were conducted in a semiarid rangeland southeast of Tucson, Arizona (U.S.), and a third experiment, the SALSA-MEX (Semi-Arid Land Surface Atmospheric Mountain Experiment), was proposed. Results from the Monsoon'90 experiment substantially advanced the understanding of the hydrologic and atmospheric fluxes in an arid environment and provided insight into the use of remote sensing data for hydrologic modeling. The Walnut Gulch'92 experiment addressed the seasonal hydrologic dynamics of the region and the potential of combined optical-microwave remote sensing for hydrologic applications. SALSA-MEX will combine measurements and modeling to study hydrologic processes influenced by surrounding mountains, such as enhanced precipitation, snowmelt, and recharge to groundwater aquifers. The results from these experiments, along with the extensive experimental databases, should aid the research community in large-scale modeling of mass and energy exchanges across the soil-plant-atmosphere interface.
NASA Astrophysics Data System (ADS)
Fiore, S.; Płóciennik, M.; Doutriaux, C.; Blanquer, I.; Barbera, R.; Williams, D. N.; Anantharaj, V. G.; Evans, B. J. K.; Salomoni, D.; Aloisio, G.
2017-12-01
The increased model resolution in the development of comprehensive Earth System Models is rapidly leading to very large climate simulation output that poses significant scientific data management challenges in terms of data sharing, processing, analysis, visualization, preservation, curation, and archiving. Large-scale global experiments for the Climate Model Intercomparison Projects (CMIP) have led to the development of the Earth System Grid Federation (ESGF), a federated data infrastructure which has been serving the CMIP5 experiment, providing access to 2 PB of data for the IPCC Assessment Reports. In such a context, running a multi-model data analysis experiment is very challenging, as it requires the availability of a large amount of data related to multiple climate model simulations and scientific data management tools for large-scale data analytics. To address these challenges, a case study on climate model intercomparison data analysis has been defined and implemented in the context of the EU H2020 INDIGO-DataCloud project. The case study has been tested and validated on CMIP5 datasets, in the context of a large-scale, international testbed involving several ESGF sites (LLNL, ORNL and CMCC), one orchestrator site (PSNC) and one more hosting INDIGO PaaS services (UPV). Additional ESGF sites, such as NCI (Australia) and a couple more in Europe, are also joining the testbed. The added value of the proposed solution is summarized in the following: it implements a server-side paradigm which limits data movement; it relies on a High-Performance Data Analytics (HPDA) stack to address performance; it exploits the INDIGO PaaS layer to support flexible, dynamic and automated deployment of software components; it provides user-friendly web access based on the INDIGO Future Gateway; and finally it integrates, complements and extends the support currently available through ESGF. Overall it provides a new "tool" for climate scientists to run multi-model experiments. At the time this contribution is being written, the proposed testbed represents the first implementation of a distributed large-scale, multi-model experiment in the ESGF/CMIP context, joining together server-side approaches for scientific data analysis, HPDA frameworks, end-to-end workflow management, and cloud computing.
Searching for long-lived particles: A compact detector for exotics at LHCb
NASA Astrophysics Data System (ADS)
Gligorov, Vladimir V.; Knapen, Simon; Papucci, Michele; Robinson, Dean J.
2018-01-01
We advocate for the construction of a new detector element at the LHCb experiment, designed to search for displaced decays of beyond Standard Model long-lived particles, taking advantage of a large shielded space in the LHCb cavern that is expected to soon become available. We discuss the general features and putative capabilities of such an experiment, as well as its various advantages and complementarities with respect to the existing LHC experiments and proposals such as SHiP and MATHUSLA. For two well-motivated beyond Standard Model benchmark scenarios—Higgs decay to dark photons and B meson decays via a Higgs mixing portal—the reach either complements or exceeds that predicted for other LHC experiments.
Ahmad, Sahar; Khan, Muhammad Faisal
2015-12-01
In this paper, we present a new non-rigid image registration method that imposes a topology preservation constraint on the deformation. We propose to incorporate the time varying elasticity model into the deformable image matching procedure and constrain the Jacobian determinant of the transformation over the entire image domain. The motion of elastic bodies is governed by a hyperbolic partial differential equation, generally termed as elastodynamics wave equation, which we propose to use as a deformation model. We carried out clinical image registration experiments on 3D magnetic resonance brain scans from IBSR database. The results of the proposed registration approach in terms of Kappa index and relative overlap computed over the subcortical structures were compared against the existing topology preserving non-rigid image registration methods and non topology preserving variant of our proposed registration scheme. The Jacobian determinant maps obtained with our proposed registration method were qualitatively and quantitatively analyzed. The results demonstrated that the proposed scheme provides good registration accuracy with smooth transformations, thereby guaranteeing the preservation of topology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Decentering and Related Constructs: A Critical Review and Metacognitive Processes Model
Bernstein, Amit; Hadash, Yuval; Lichtash, Yael; Tanay, Galia; Shepherd, Kathrine; Fresco, David M.
2016-01-01
The capacity to shift experiential perspective—from within one’s subjective experience onto that experience—is fundamental to being human. Scholars have long theorized that this metacognitive capacity—which we refer to as decentering—may play an important role in mental health. To help illuminate this mental phenomenon and its links to mental health, we critically examine decentering-related constructs and their respective literatures (e.g., self-distanced perspective, cognitive distancing, cognitive defusion). First, we introduce a novel metacognitive processes model of decentering. Specifically, we propose that, to varying degrees, decentering-related constructs reflect a common mental phenomenon subserved by three interrelated metacognitive processes: meta-awareness, disidentification from internal experience, and reduced reactivity to thought content. Second, we examine extant research linking decentering-related constructs and their underlying metacognitive processes to mental health. We conclude by proposing future directions for research that transcends decentering-related constructs in an effort to advance the field’s understanding of this facet of human experience and its role in (mal)adaptation. PMID:26385999
Bröder, A
2000-09-01
The boundedly rational "Take-The-Best" heuristic (TTB) was proposed by G. Gigerenzer, U. Hoffrage, and H. Kleinbölting (1991) as a model of fast and frugal probabilistic inferences. Although the simple lexicographic rule proved to be successful in computer simulations, direct empirical demonstrations of its adequacy as a psychological model are lacking because of several methodological problems. In 4 experiments with a total of 210 participants, this question was addressed. Whereas Experiment 1 showed that TTB is not valid as a universal hypothesis about probabilistic inferences, up to 28% of participants in Experiment 2 and 53% of participants in Experiment 3 were classified as TTB users. Experiment 4 revealed that investment costs for information seem to be a relevant factor leading participants to switch to a noncompensatory TTB strategy. The observed individual differences in strategy use imply the recommendation of an idiographic approach to decision-making research.
Combining universal beauty and cultural context in a unifying model of visual aesthetic experience.
Redies, Christoph
2015-01-01
In this work, I propose a model of visual aesthetic experience that combines formalist and contextual aspects of aesthetics. The model distinguishes between two modes of processing. First, perceptual processing is based on the intrinsic form of an artwork, which may or may not be beautiful. If it is beautiful, a beauty-responsive mechanism is activated in the brain. This bottom-up mechanism is universal amongst humans; it is widespread in the visual brain and responsive across visual modalities. Second, cognitive processing is based on contextual information, such as the depicted content, the intentions of the artist or the circumstances of the presentation of the artwork. Cognitive processing is partially top-down and varies between individuals according to their cultural experience. Processing in the two channels is parallel and largely independent. In the general case, an aesthetic experience is induced if processing in both channels is favorable, i.e., if there is resonance in the perceptual processing channel ("aesthetics of perception"), and successful mastering in the cognitive processing channel ("aesthetics of cognition"). I speculate that this combinatorial mechanism has evolved to mediate social bonding between members of a (cultural) group of people. Primary emotions can be elicited via both channels and modulate the degree of the aesthetic experience. Two special cases are discussed. First, in a subset of (post-)modern art, beauty no longer plays a prominent role. Second, in some forms of abstract art, beautiful form can be enjoyed with minimal cognitive processing. The model is applied to examples of Western art. Finally, implications of the model are discussed. In summary, the proposed model resolves the seeming contradiction between formalist perceptual approaches to aesthetic experience, which are based on the intrinsic beauty of artworks, and contextual approaches, which account for highly individual and culturally dependent aspects of aesthetics.
Combining universal beauty and cultural context in a unifying model of visual aesthetic experience
Redies, Christoph
2015-01-01
In this work, I propose a model of visual aesthetic experience that combines formalist and contextual aspects of aesthetics. The model distinguishes between two modes of processing. First, perceptual processing is based on the intrinsic form of an artwork, which may or may not be beautiful. If it is beautiful, a beauty-responsive mechanism is activated in the brain. This bottom–up mechanism is universal amongst humans; it is widespread in the visual brain and responsive across visual modalities. Second, cognitive processing is based on contextual information, such as the depicted content, the intentions of the artist or the circumstances of the presentation of the artwork. Cognitive processing is partially top–down and varies between individuals according to their cultural experience. Processing in the two channels is parallel and largely independent. In the general case, an aesthetic experience is induced if processing in both channels is favorable, i.e., if there is resonance in the perceptual processing channel (“aesthetics of perception”), and successful mastering in the cognitive processing channel (“aesthetics of cognition”). I speculate that this combinatorial mechanism has evolved to mediate social bonding between members of a (cultural) group of people. Primary emotions can be elicited via both channels and modulate the degree of the aesthetic experience. Two special cases are discussed. First, in a subset of (post-)modern art, beauty no longer plays a prominent role. Second, in some forms of abstract art, beautiful form can be enjoyed with minimal cognitive processing. The model is applied to examples of Western art. Finally, implications of the model are discussed. In summary, the proposed model resolves the seeming contradiction between formalist perceptual approaches to aesthetic experience, which are based on the intrinsic beauty of artworks, and contextual approaches, which account for highly individual and culturally dependent aspects of aesthetics. PMID:25972799
Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio
2013-08-01
The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.
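The abstract couples identifiability analysis with model-based design of experiments; a generic illustration of the underlying computation is a Fisher information matrix built from finite-difference sensitivities and scored with a D-optimality criterion. The sketch below is a hedged, generic version under the assumption of additive Gaussian noise; `model` and `sigma` are placeholders, not the paper's in vitro growth and killing models.

```python
import numpy as np

def fisher_information(model, theta, times, sigma=0.1, eps=1e-5):
    """Approximate Fisher information matrix for candidate sampling times using
    finite-difference sensitivities; assumes additive Gaussian noise of s.d. sigma.
    'model(theta, times)' is a placeholder for the PK-PD response."""
    theta = np.asarray(theta, dtype=float)
    y0 = model(theta, times)
    S = np.zeros((len(times), len(theta)))
    for j in range(len(theta)):
        th = theta.copy()
        th[j] += eps
        S[:, j] = (model(th, times) - y0) / eps   # sensitivity of the response to theta_j
    return S.T @ S / sigma ** 2

def d_optimality(model, theta, times, **kw):
    """log-determinant of the FIM; larger values indicate a more informative design."""
    sign, logdet = np.linalg.slogdet(fisher_information(model, theta, times, **kw))
    return logdet if sign > 0 else -np.inf
```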
Toward a general psychological model of tension and suspense
Lehne, Moritz; Koelsch, Stefan
2015-01-01
Tension and suspense are powerful emotional experiences that occur in a wide variety of contexts (e.g., in music, film, literature, and everyday life). The omnipresence of tension and suspense suggests that they build on very basic cognitive and affective mechanisms. However, the psychological underpinnings of tension experiences remain largely unexplained, and tension and suspense are rarely discussed from a general, domain-independent perspective. In this paper, we argue that tension experiences in different contexts (e.g., musical tension or suspense in a movie) build on the same underlying psychological processes. We discuss key components of tension experiences and propose a domain-independent model of tension and suspense. According to this model, tension experiences originate from states of conflict, instability, dissonance, or uncertainty that trigger predictive processes directed at future events of emotional significance. We also discuss possible neural mechanisms underlying tension and suspense. The model provides a theoretical framework that can inform future empirical research on tension phenomena. PMID:25717309
Toward a general psychological model of tension and suspense.
Lehne, Moritz; Koelsch, Stefan
2015-01-01
Tension and suspense are powerful emotional experiences that occur in a wide variety of contexts (e.g., in music, film, literature, and everyday life). The omnipresence of tension and suspense suggests that they build on very basic cognitive and affective mechanisms. However, the psychological underpinnings of tension experiences remain largely unexplained, and tension and suspense are rarely discussed from a general, domain-independent perspective. In this paper, we argue that tension experiences in different contexts (e.g., musical tension or suspense in a movie) build on the same underlying psychological processes. We discuss key components of tension experiences and propose a domain-independent model of tension and suspense. According to this model, tension experiences originate from states of conflict, instability, dissonance, or uncertainty that trigger predictive processes directed at future events of emotional significance. We also discuss possible neural mechanisms underlying tension and suspense. The model provides a theoretical framework that can inform future empirical research on tension phenomena.
Edwards, Clementine J; Cella, Matteo; Tarrier, Nicholas; Wykes, Til
2015-10-01
Anhedonia and amotivation are substantial predictors of poor functional outcomes in people with schizophrenia and often present a formidable barrier to returning to work or building relationships. The Temporal Experience of Pleasure Model proposes constructs which should be considered therapeutic targets for these symptoms in schizophrenia e.g. anticipatory pleasure, memory, executive functions, motivation and behaviours related to the activity. Recent reviews have highlighted the need for a clear evidence base to drive the development of targeted interventions. To review systematically the empirical evidence for each TEP model component and propose evidence-based therapeutic targets for anhedonia and amotivation in schizophrenia. Following PRISMA guidelines, PubMed and PsycInfo were searched using the terms "schizophrenia" and "anhedonia". Studies were included if they measured anhedonia and participants had a diagnosis of schizophrenia. The methodology, measures and main findings from each study were extracted and critically summarised for each TEP model construct. 80 independent studies were reviewed and executive functions, emotional memory and the translation of motivation into actions are highlighted as key deficits with a strong evidence base in people with schizophrenia. However, there are many relationships that are unclear because the empirical work is limited by over-general tasks and measures. Promising methods for research which have more ecological validity include experience sampling and behavioural tasks assessing motivation. Specific adaptations to Cognitive Remediation Therapy, Cognitive Behavioural Therapy and the utilisation of mobile technology to enhance representations and emotional memory are recommended for future development. Copyright © 2015. Published by Elsevier B.V.
Efficient airport detection using region-based fully convolutional neural networks
NASA Astrophysics Data System (ADS)
Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao
2018-04-01
This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we shared the convolutional layers between the region proposal procedure and the airport detection procedure and used graphics processing units (GPUs) to speed up the training and testing time. Owing to the lack of labeled data, we transferred the convolutional layers of the ZF net pretrained on ImageNet to initialize the shared convolutional layers, and then retrained the model using the alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes in near real time with high accuracy, which is much better than traditional methods.
Structural model constructing for optical handwritten character recognition
NASA Astrophysics Data System (ADS)
Khaustov, P. A.; Spitsyn, V. G.; Maksimova, E. I.
2017-02-01
The article is devoted to the development of algorithms for optical handwritten character recognition based on the construction of structural models. The main advantage of these algorithms is the low requirement on the number of reference images. A one-pass approach to the thinning of the binary character representation is proposed, based on the joint use of the Zhang-Suen and Wu-Tsai algorithms. The effectiveness of the proposed approach is confirmed by the results of the experiments. The article includes a detailed description of the steps of the structural model construction algorithm. The proposed algorithm has been implemented in a character processing application and validated on the MNIST handwritten character database. Algorithms suitable for a limited number of reference images were used for the comparison.
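For reference, the Zhang-Suen part of the thinning step mentioned above follows a standard two-sub-iteration rule; a minimal, unoptimized sketch is shown below. The combination with the Wu-Tsai algorithm described in the paper is not reproduced here.

```python
import numpy as np

def zhang_suen_thin(img):
    """Zhang-Suen thinning of a binary image (1 = foreground); simplified sketch."""
    img = img.copy().astype(np.uint8)
    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, img.shape[0] - 1):
                for x in range(1, img.shape[1] - 1):
                    if img[y, x] != 1:
                        continue
                    # neighbors P2..P9, clockwise starting from the north pixel
                    p = [img[y-1, x], img[y-1, x+1], img[y, x+1], img[y+1, x+1],
                         img[y+1, x], img[y+1, x-1], img[y, x-1], img[y-1, x-1]]
                    b = sum(p)                                              # nonzero neighbors
                    a = sum(p[i] == 0 and p[(i + 1) % 8] == 1 for i in range(8))  # 0->1 transitions
                    c1 = p[0] * p[2] * p[4] if step == 0 else p[0] * p[2] * p[6]
                    c2 = p[2] * p[4] * p[6] if step == 0 else p[0] * p[4] * p[6]
                    if 2 <= b <= 6 and a == 1 and c1 == 0 and c2 == 0:
                        to_delete.append((y, x))
            for y, x in to_delete:
                img[y, x] = 0
            changed = changed or bool(to_delete)
    return img
```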
Internal Model-Based Robust Tracking Control Design for the MEMS Electromagnetic Micromirror.
Tan, Jiazheng; Sun, Weijie; Yeow, John T W
2017-05-26
The micromirror based on micro-electro-mechanical systems (MEMS) technology is widely employed in different areas, such as scanning, imaging and optical switching. This paper studies the MEMS electromagnetic micromirror for scanning or imaging application. In these application scenarios, the micromirror is required to track the command sinusoidal signal, which can be converted to an output regulation problem theoretically. In this paper, based on the internal model principle, the output regulation problem is solved by designing a robust controller that is able to force the micromirror to track the command signal accurately. The proposed controller relies little on the accuracy of the model. Further, the proposed controller is implemented, and its effectiveness is examined by experiments. The experimental results demonstrate that the performance of the proposed controller is satisfying.
Internal Model-Based Robust Tracking Control Design for the MEMS Electromagnetic Micromirror
Tan, Jiazheng; Sun, Weijie; Yeow, John T. W.
2017-01-01
The micromirror based on micro-electro-mechanical systems (MEMS) technology is widely employed in different areas, such as scanning, imaging and optical switching. This paper studies the MEMS electromagnetic micromirror for scanning or imaging application. In these application scenarios, the micromirror is required to track the command sinusoidal signal, which can be converted to an output regulation problem theoretically. In this paper, based on the internal model principle, the output regulation problem is solved by designing a robust controller that is able to force the micromirror to track the command signal accurately. The proposed controller relies little on the accuracy of the model. Further, the proposed controller is implemented, and its effectiveness is examined by experiments. The experimental results demonstrate that the performance of the proposed controller is satisfying. PMID:28587105
2016-06-01
zones with ice concentrations up to 40%. To achieve this goal, the Navy must determine safe operational speeds as a function of ice concentration...and full-scale experience with ice-capable hull forms that have shallow entry angles to promote flexural ice failure preferentially over crushing...plan view) of the proposed large-scale ice–hull impact experiment to be conducted in CRREL's refrigerated towing basin. Shown here is a side-panel
Ice detection and classification on an aircraft wing with ultrasonic shear horizontal guided waves.
Gao, Huidong; Rose, Joseph L
2009-02-01
Ice accumulation on airfoils has been identified as a primary cause of many accidents in commercial and military aircraft. To improve aviation safety as well as reduce the cost and environmental threats related to aircraft icing, sensitive, reliable, and aerodynamically compatible ice detection techniques are in great demand. Ultrasonic guided-wave-based techniques have proved reliable for "go" and "no go" types of ice detection in some systems, including the HALO system, to which the second author of this paper is a primary contributor. In this paper, we propose a new model that incorporates the ice layer into the guided-wave modeling. Using this model, the thickness and type of ice formation can be determined from guided-wave signals. Five experimental schemes are also proposed in this paper based on some unique features identified from the guided-wave dispersion curves. A sample experiment is also presented, in which a 1 mm thick glaze ice layer on a 2 mm aluminum plate is clearly detected. The quantitative match of the experimental data to the theoretical prediction serves as strong support for future implementation of the other testing schemes proposed in this paper.
SOURCES OF ORGANIC AEROSOL: SEMIVOLATILE EMISSIONS AND PHOTOCHEMICAL AGING
The proposed research integrates emissions testing, smog chamber experiments, and regional chemical transport models (CTMs) to investigate the sources of organic aerosol in urban and regional environments.
Learning general phonological rules from distributional information: a computational model.
Calamaro, Shira; Jarosz, Gaja
2015-04-01
Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony (Peperkamp, Le Calvez, Nadal, & Dupoux, 2006). This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles. Copyright © 2014 Cognitive Science Society, Inc.
Discriminating Among Probability Weighting Functions Using Adaptive Design Optimization
Cavagnaro, Daniel R.; Pitt, Mark A.; Gonzalez, Richard; Myung, Jay I.
2014-01-01
Probability weighting functions relate objective probabilities and their subjective weights, and play a central role in modeling choices under risk within cumulative prospect theory. While several different parametric forms have been proposed, their qualitative similarities make it challenging to discriminate among them empirically. In this paper, we use both simulation and choice experiments to investigate the extent to which different parametric forms of the probability weighting function can be discriminated using adaptive design optimization, a computer-based methodology that identifies and exploits model differences for the purpose of model discrimination. The simulation experiments show that the correct (data-generating) form can be conclusively discriminated from its competitors. The results of an empirical experiment reveal heterogeneity between participants in terms of the functional form, with two models (Prelec-2, Linear in Log Odds) emerging as the most common best-fitting models. The findings shed light on assumptions underlying these models. PMID:24453406
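The two best-fitting forms named in the abstract have standard parameterizations; a short sketch of both weighting functions is given below, with purely illustrative parameter values.

```python
import numpy as np

def prelec2(p, gamma, delta):
    """Two-parameter Prelec weighting function: w(p) = exp(-delta * (-ln p)^gamma)."""
    return np.exp(-delta * (-np.log(p)) ** gamma)

def lin_log_odds(p, gamma, delta):
    """Linear-in-log-odds form: w(p) = delta*p^gamma / (delta*p^gamma + (1-p)^gamma)."""
    return delta * p ** gamma / (delta * p ** gamma + (1 - p) ** gamma)

p = np.linspace(0.01, 0.99, 99)
w_prelec = prelec2(p, 0.7, 1.0)        # illustrative parameters, not fitted values
w_linlog = lin_log_odds(p, 0.6, 0.8)
```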
Integral Nursing: An Emerging Framework for Engaging the Evolution of the Profession.
ERIC Educational Resources Information Center
Fiandt, Kathryn; Forman, John; Megel, Mary Erickson; Pakieser, Ruth A.; Burge, Stephanie
2003-01-01
Proposes the Integral Nursing framework, which combines Wilber's All-Quadrant/All-Level model, a heuristic device to organize human experience, and the Spiral Dynamics model of human development organized around value memes or cultural units of information. Includes commentary by Beth L. Rodgers. (Contains 17 references.) (JOW)
Influences of High School Curriculum on Determinants of Labor Market Experiences.
ERIC Educational Resources Information Center
Gardner, John A.; And Others
This study extends previous research on labor market effects of vocational education by explicitly modeling the intervening factors in the relationship between secondary vocational education and labor market outcomes. The strategy is to propose and estimate a simplified, recursive model that can contribute to understanding why positive earnings…
Social and Collaborative Interactions for Educational Content Enrichment in ULEs
ERIC Educational Resources Information Center
Araújo, Rafael D.; Brant-Ribeiro, Taffarel; Mendonça, Igor E. S.; Mendes, Miller M.; Dorça, Fabiano A.; Cattelan, Renan G.
2017-01-01
This article presents a social and collaborative model for content enrichment in Ubiquitous Learning Environments. Designed as a loosely coupled software architecture, the proposed model was implemented and integrated into the Classroom eXperience, a multimedia capture platform for educational environments. After automatically recording a lecture…
Resistivity of liquid metals on Veljkovic-Slavic pseudopotential
NASA Astrophysics Data System (ADS)
Abdel-Azez, Khalef
1996-04-01
An empirical form of a screened model pseudopotential, proposed by Veljkovic and Slavic, is exploited for the calculation of the resistivity of seven liquid metals through the correct redetermination of its parameters. The model derives qualitative support from the close agreement obtained between the computed results and experiment.
Acquisition of Automatic Imitation Is Sensitive to Sensorimotor Contingency
ERIC Educational Resources Information Center
Cook, Richard; Press, Clare; Dickinson, Anthony; Heyes, Cecilia
2010-01-01
The associative sequence learning model proposes that the development of the mirror system depends on the same mechanisms of associative learning that mediate Pavlovian and instrumental conditioning. To test this model, two experiments used the reduction of automatic imitation through incompatible sensorimotor training to assess whether mirror…
Dynamics of Affective States during Complex Learning
ERIC Educational Resources Information Center
D'Mello, Sidney; Graesser, Art
2012-01-01
We propose a model to explain the dynamics of affective states that emerge during deep learning activities. The model predicts that learners in a state of engagement/flow will experience cognitive disequilibrium and confusion when they face contradictions, incongruities, anomalies, obstacles to goals, and other impasses. Learners revert into the…
NASA Astrophysics Data System (ADS)
Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria
2016-11-01
Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model are time-varying with operating condition variation and battery aging. Existing co-estimation methods address the model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. Thus this paper proposes the decoupling of model identification and state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman Filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively compresses the filter order, which leads to a substantial improvement in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on the ordering of the data as a research dataset. The proposed time-series forecasting model has three main foci. First, this study applies five imputation methods to handle the missing values rather than directly deleting them. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, and the result is compared with the listed benchmark methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection can help the five forecasting methods used here to improve their forecasting capability.
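A minimal sketch of the final modeling step, fitting a Random Forest to selected predictors of daily water level with scikit-learn. The file name and predictor columns are hypothetical placeholders, and simple interpolation stands in for the paper's imputation step.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# "reservoir_daily.csv" and the column names below are hypothetical placeholders
df = pd.read_csv("reservoir_daily.csv").interpolate()            # simple stand-in for imputation
selected = ["rainfall", "inflow", "temperature", "lag1_level"]   # stand-in for the selected variables
X_train, X_test, y_train, y_test = train_test_split(
    df[selected], df["water_level"], test_size=0.2, shuffle=False)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, rf.predict(X_test)))
```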
SIMULTANEOUS MULTISLICE MAGNETIC RESONANCE FINGERPRINTING WITH LOW-RANK AND SUBSPACE MODELING
Zhao, Bo; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A.; Wald, Lawrence L.; Setsompop, Kawin
2018-01-01
Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3x speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice. PMID:29060594
Sentiments Analysis of Reviews Based on ARCNN Model
NASA Astrophysics Data System (ADS)
Xu, Xiaoyu; Xu, Ming; Xu, Jian; Zheng, Ning; Yang, Tao
2017-10-01
Sentiment analysis of product reviews is designed to help customers understand the status of a product. The traditional method of sentiment analysis relies on the input of a fixed feature vector, which is the performance bottleneck of the basic codec architecture. In this paper, we propose an attention mechanism with a BRNN-CNN model, referred to as the ARCNN model. To analyze the semantic relations between words and avoid the curse of dimensionality, we use the GloVe algorithm to train the vector representations for words. Then, the ARCNN model is proposed to deal with the problem of deep feature training. Specifically, the BRNN model is proposed to handle non-fixed-length vectors and preserve time-series information, and the CNN can capture deeper semantic connections. Moreover, the attention mechanism can automatically learn from the data and optimize the allocation of weights. Finally, a softmax classifier is designed to complete the sentiment classification of reviews. Experiments show that the proposed method can improve the accuracy of sentiment classification compared with benchmark methods.
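A hedged sketch of the overall pipeline (pretrained word vectors, a bidirectional recurrent layer, and a convolutional layer feeding a softmax classifier), written with the Keras API. The attention layer described in the abstract is omitted for brevity, and the vocabulary size, embedding dimension, and layer widths are assumptions, not the paper's settings.

```python
import numpy as np
from tensorflow.keras import layers, models

vocab_size, embed_dim, max_len, num_classes = 20000, 100, 200, 2    # assumed sizes
glove_matrix = np.zeros((vocab_size, embed_dim))   # placeholder; fill with pretrained GloVe vectors

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, embed_dim, weights=[glove_matrix], trainable=False),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),   # BRNN stage
    layers.Conv1D(128, 3, activation="relu"),                       # CNN stage
    layers.GlobalMaxPooling1D(),
    layers.Dense(num_classes, activation="softmax"),                # softmax classifier
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```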
NASA Astrophysics Data System (ADS)
Cai, Tao; Guo, Songtao; Li, Yongzeng; Peng, Di; Zhao, Xiaofeng; Liu, Yingzheng
2018-04-01
The mechanoluminescent (ML) sensor is a newly developed non-invasive technique for stress/strain measurement. However, its application has been mostly restricted to qualitative measurement due to the lack of a well-defined relationship between ML intensity and stress. To achieve accurate stress measurement, an intensity ratio model was proposed in this study to establish a quantitative relationship between the stress condition and its ML intensity in elastic deformation. To verify the proposed model, experiments were carried out on a ML measurement system using resin samples mixed with the sensor material SrAl2O4:Eu2+, Dy3+. The ML intensity ratio was found to be dependent on the applied stress and strain rate, and the relationship acquired from the experimental results agreed well with the proposed model. The current study provided a physical explanation for the relationship between ML intensity and its stress condition. The proposed model was applicable in various SrAl2O4:Eu2+, Dy3+-based ML measurement in elastic deformation, and could provide a useful reference for quantitative stress measurement using the ML sensor in general.
Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling.
Bo Zhao; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin
2017-07-01
Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T1, T2, and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3× speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice.
Li, Yong; Cai, Rui; Yan, Bei; Zainal Abidin, Ilham Mukriz; Jing, Haoqing; Wang, Yi
2018-05-28
For fuel transmission and structural strengthening, small-diameter pipes of nonmagnetic materials are extensively adopted in engineering fields including aerospace, energy, transportation, etc. However, the hostile and corrosive environment leaves them vulnerable to external corrosion which poses a severe threat to structural integrity of pipes. Therefore, it is imperative to nondestructively detect and evaluate the external corrosion in nonmagnetic pipes. In light of this, a capsule-type Electromagnetic Acoustic Transducer (EMAT) for in-situ nondestructive evaluation of nonmagnetic pipes and fast screening of external corrosion is proposed in this paper. A 3D hybrid model for efficient prediction of responses from the proposed transducer to external corrosion is established. Closed-form expressions of field quantities of electromagnetics and EMAT signals are formulated. Simulations based on the hybrid model indicate feasibility of the proposed transducer in detection and evaluation of external corrosion in nonmagnetic pipes. In parallel, experiments with the fabricated transducer have been carried out. Experimental results are supportive of the conclusion drawn from simulations. The investigation via simulations and experiments implies that the proposed capsule-type EMAT is capable of fast screening of external corrosion, which is beneficial to the in-situ nondestructive evaluation of small-diameter nonmagnetic pipes.
DEM generation from contours and a low-resolution DEM
NASA Astrophysics Data System (ADS)
Li, Xinghua; Shen, Huanfeng; Feng, Ruitao; Li, Jie; Zhang, Liangpei
2017-12-01
A digital elevation model (DEM) is a virtual representation of topography, where the terrain is represented by three-dimensional coordinates. In the framework of sparse representation, this paper investigates DEM generation from contours. Since contours are usually sparsely distributed and closely related in space, sparse spatial regularization (SSR) is enforced on them. In order to make up for the lack of spatial information, another DEM of lower spatial resolution from the same geographical area is introduced. In this way, the sparse representation implements the spatial constraints in the contours and extracts the complementary information from the auxiliary DEM. Furthermore, the proposed method integrates the advantage of the unbiased estimation of kriging. For brevity, the proposed method is called the kriging and sparse spatial regularization (KSSR) method. The performance of the proposed KSSR method is demonstrated by experiments on Shuttle Radar Topography Mission (SRTM) 30 m DEM and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) 30 m global digital elevation model (GDEM) generation from the corresponding contours and a 90 m DEM. The experiments confirm that the proposed KSSR method outperforms the traditional kriging and SSR methods, and it can be successfully used for DEM generation from contours.
Measurement of Vehicle-Bridge-Interaction force using dynamic tire pressure monitoring
NASA Astrophysics Data System (ADS)
Chen, Zhao; Xie, Zhipeng; Zhang, Jian
2018-05-01
The Vehicle-Bridge-Interaction (VBI) force, i.e., the normal contact force of a tire, is a key component in the VBI mechanism. The VBI force measurement can facilitate experimental studies of the VBI as well as input-output bridge structural identification. This paper introduces an innovative method for calculating the interaction force by using dynamic tire pressure monitoring. The core idea of the proposed method combines the ideal gas law and a basic force model to build a relationship between the tire pressure and the VBI force. Then, unknown model parameters are identified by the Extended Kalman Filter using calibration data. A signal filter based on the wavelet analysis is applied to preprocess the effect that the tire rotation has on the pressure data. Two laboratory tests were conducted to check the proposed method's validity. The effects of different road irregularities, loads and forward velocities were studied. Under the current experiment setting, the proposed method was robust to different road irregularities, and the increase in load and velocity benefited the performance of the proposed method. A high-speed test further supported the use of this method in rapid bridge tests. Limitations of the derived theories and experiment were also discussed.
John G. Michopoulos; Tomonari Furukawa; John C. Hermanson; Samuel G. Lambrakos
2011-01-01
The goal of this paper is to propose and demonstrate a multi-level design optimization approach for the coordinated determination of a material constitutive model synchronously with the design of the experimental procedure needed to acquire the necessary data. The methodology achieves both online (real-time) and offline design of the optimum experiments required for...
Real Learning Connections: Questioning the Learner in the LIS Internship
ERIC Educational Resources Information Center
Bird, Nora J.; Crumpton, Michael A.
2014-01-01
The focus of literature on the role of internship has been on whether and how such activity benefits the student. A model is proposed that examines what happens for both the practitioner supervisor and the LIS educator during an internship experience. Is it possible that all participants learn from the experience and how can that learning be…
A New Variational Approach for Multiplicative Noise and Blur Removal
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad; Sun, HongGuang
2017-01-01
This paper proposes a new variational model for joint multiplicative denoising and deblurring. It combines a total generalized variation filter (which has been proved to be able to reduce blocky effects by being aware of high-order smoothness) and the shearlet transform (which effectively preserves anisotropic image features such as sharp edges, curves and so on). The new model takes advantage of both regularizers since it is able to minimize the staircase effects while preserving sharp edges, textures and other fine image details. The existence and uniqueness of a solution to the proposed variational model is also discussed. The resulting energy functional is then solved by using the alternating direction method of multipliers. Numerical experiments show that the proposed model achieves satisfactory restoration results, both visually and quantitatively, in handling the blur (motion, Gaussian, disk, and Moffat) and multiplicative noise (Gaussian, Gamma, or Rayleigh) reduction. A comparison with other recent methods in this field is provided as well. The proposed model can also be applied for restoring both single and multi-channel images contaminated with multiplicative noise, and permits cross-channel blurs when the underlying image has more than one channel. Numerical tests on color images are conducted to demonstrate the effectiveness of the proposed model. PMID:28141802
A Discrete Model for Color Naming
NASA Astrophysics Data System (ADS)
Menegaz, G.; Le Troter, A.; Sequeira, J.; Boi, J. M.
2006-12-01
The ability to associate labels to colors is very natural for human beings. However, this apparently simple task hides very complex and still unsolved problems, spreading over many different disciplines ranging from neurophysiology to psychology and imaging. In this paper, we propose a discrete model for computational color categorization and naming. Starting from the 424 color specimens of the OSA-UCS set, we propose a fuzzy partitioning of the color space. Each of the 11 basic color categories identified by Berlin and Kay is modeled as a fuzzy set whose membership function is implicitly defined by fitting the model to the results of an ad hoc psychophysical experiment (Experiment 1). Each OSA-UCS sample is represented by a feature vector whose components are the memberships to the different categories. The discrete model consists of a three-dimensional Delaunay triangulation of the CIELAB color space which associates each OSA-UCS sample to a vertex of a 3D tetrahedron. Linear interpolation is used to estimate the membership values of any other point in the color space. Model validation is performed both directly, through the comparison of the predicted membership values to the subjective counterparts, as evaluated via another psychophysical test (Experiment 2), and indirectly, through the investigation of its exploitability for image segmentation. The model has proved to be successful in both cases, providing an estimation of the membership values in good agreement with the subjective measures as well as a semantically meaningful color-based segmentation map.
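The Delaunay-plus-linear-interpolation step described above maps directly onto standard scientific Python tooling; the sketch below uses scipy's LinearNDInterpolator, which triangulates the sample points and interpolates barycentrically inside each simplex. The sample coordinates and membership values are random placeholders, not the OSA-UCS data.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

rng = np.random.default_rng(0)
# Placeholder stand-ins for the 424 specimens: CIELAB coordinates and 11 category memberships
lab_points = rng.random((424, 3)) * [100, 200, 200] - [0, 100, 100]
memberships = rng.dirichlet(np.ones(11), size=424)

# LinearNDInterpolator builds a Delaunay triangulation of the samples and
# interpolates linearly (barycentrically) inside each tetrahedron.
interp = LinearNDInterpolator(lab_points, memberships)
print(interp([[50.0, 10.0, -20.0]]))   # estimated category memberships for a query color
```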
Adaptive correction procedure for TVL1 image deblurring under impulse noise
NASA Astrophysics Data System (ADS)
Bai, Minru; Zhang, Xiongjun; Shao, Qianqian
2016-08-01
For the problem of image restoration of observed images corrupted by blur and impulse noise, the widely used TVL1 model may deviate from both the data-acquisition model and the prior model, especially for high noise levels. In order to seek a solution of high recovery quality beyond the reach of the TVL1 model, we propose an adaptive correction procedure for TVL1 image deblurring under impulse noise. Then, a proximal alternating direction method of multipliers (ADMM) is presented to solve the corrected TVL1 model and its convergence is also established under very mild conditions. It is verified by numerical experiments that our proposed approach outperforms the TVL1 model in terms of signal-to-noise ratio (SNR) values and visual quality, especially for high noise levels: it can handle salt-and-pepper noise as high as 90% and random-valued noise as high as 70%. In addition, a comparison with a state-of-the-art method, the two-phase method, demonstrates the superiority of the proposed approach.
A model of traffic signs recognition with convolutional neural network
NASA Astrophysics Data System (ADS)
Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing
2016-10-01
In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions, occlusion, and so on. All of these factors are challenging for automated recognition algorithms of traffic signs. Deep learning has recently provided a new way to solve this kind of problem. A deep network can automatically learn features from a large number of data samples and obtain an excellent recognition performance. We therefore approach this task of recognition of traffic signs as a general vision problem, with few assumptions related to road signs. We propose a model of Convolutional Neural Network (CNN) and apply the model to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as the input, alternates convolutional layers and subsampling layers, and automatically extracts the features for the recognition of the traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully-connected layer, and an output layer. To validate the proposed model, experiments are implemented using the public dataset of the China competition of fuzzy image processing. Experimental results show that the proposed model produces a recognition accuracy of 99.01% on the training dataset and achieves 92% in the preliminary contest, ranking among the top four.
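A hedged Keras sketch of the layer sequence described above (input, three convolution/subsampling pairs, a fully-connected layer, and a softmax output). Image size, filter counts, kernel sizes, and the number of classes are illustrative assumptions; they are not given in the abstract.

```python
from tensorflow.keras import layers, models

num_classes = 43   # assumed number of sign classes

model = models.Sequential([
    layers.Input(shape=(48, 48, 3)),                              # input layer (assumed image size)
    layers.Conv2D(32, (5, 5), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),                                  # subsampling layer 1
    layers.Conv2D(64, (5, 5), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),                                  # subsampling layer 2
    layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),                                  # subsampling layer 3
    layers.Flatten(),
    layers.Dense(256, activation="relu"),                         # fully-connected layer
    layers.Dense(num_classes, activation="softmax"),              # output layer
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```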
Development of an Assessment Model for Sustainable Supply Chain Management in Batik Industry
NASA Astrophysics Data System (ADS)
Mubiena, G. F.; Ma’ruf, A.
2018-03-01
This research proposes a dynamic assessment model for sustainable supply chain management in the batik industry. The proposed model identifies the dynamic relationships between the economic, environmental, and social aspects. The economic aspect refers to the supply chain operations reference model. The environmental aspect uses carbon emissions and liquid waste as the assessment attributes, while the social aspect focuses on employees' welfare. The lean manufacturing concept was implemented as an alternative approach to sustainability. The simulation result shows that the average sustainability score over 5 years increased from 65.3% to 70%. Future experiments will be conducted on design improvements to reach the company's target sustainability score.
Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.
Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot
2013-10-01
Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between the model and experimental or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed conduction velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
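As a hedged illustration of what such an iterative parameterization loop can look like: in monodomain/bidomain tissue models the conduction velocity scales approximately with the square root of the bulk conductivity, which suggests a simple fixed-point update. The simulate_cv callable is a placeholder for a run of the tissue simulator; this is not necessarily the paper's exact algorithm.

```python
def tune_conductivity(sigma0, v_target, simulate_cv, tol=1e-3, max_iter=20):
    """Fixed-point tuning of a bulk conductivity so that the simulated conduction
    velocity matches a prescribed value. Assumes CV scales roughly with sqrt(sigma);
    'simulate_cv(sigma)' is a placeholder for a call to the tissue simulator."""
    sigma = sigma0
    for _ in range(max_iter):
        v = simulate_cv(sigma)
        if abs(v - v_target) / v_target < tol:
            break
        sigma *= (v_target / v) ** 2   # sqrt(sigma) ~ CV  =>  sigma ~ CV^2
    return sigma
```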
[The future of clinical laboratory database management system].
Kambe, M; Imidy, D; Matsubara, A; Sugimoto, Y
1999-09-01
To assess the present status of the clinical laboratory database management system, the difference between the Clinical Laboratory Information System and Clinical Laboratory System was explained in this study. Although three kinds of database management systems (DBMS) were shown including the relational model, tree model and network model, the relational model was found to be the best DBMS for the clinical laboratory database based on our experience and developments of some clinical laboratory expert systems. As a future clinical laboratory database management system, the IC card system connected to an automatic chemical analyzer was proposed for personal health data management and a microscope/video system was proposed for dynamic data management of leukocytes or bacteria.
Problem-posing in education: transformation of the practice of the health professional.
Casagrande, L D; Caron-Ruffino, M; Rodrigues, R A; Vendrúsculo, D M; Takayanagui, A M; Zago, M M; Mendes, M D
1998-02-01
This study was developed by a group of professionals from different areas (nurses and educators) concerned with health education. It proposes the use of a problem-posing model for the transformation of professional practice. The concept and functions of the model and their relationships with the educative practice of health professionals are discussed. The model of problem-posing education is presented (compared to traditional, "banking" education), and four innovative experiences of teaching-learning are reported based on this model. These experiences, carried out in areas of environmental and occupational health and patient education have shown the applicability of the problem-posing model to the practice of the health professional, allowing transformation.
Polymer Physics of the Large-Scale Structure of Chromatin.
Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario
2016-01-01
We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments.
A Multiobjective Sparse Feature Learning Model for Deep Neural Networks.
Gong, Maoguo; Liu, Jia; Li, Hao; Cai, Qing; Su, Linzhi
2015-12-01
Hierarchical deep neural networks are currently popular learning models for imitating the hierarchical architecture of the human brain. Single-layer feature extractors are the bricks used to build deep networks. Sparse feature learning models are popular models that can learn useful representations. However, most of those models need a user-defined constant to control the sparsity of the representations. In this paper, we propose a multiobjective sparse feature learning model based on the autoencoder. The parameters of the model are learnt by optimizing two objectives, the reconstruction error and the sparsity of the hidden units, simultaneously to find a reasonable compromise between them automatically. We design a multiobjective induced learning procedure for this model based on a multiobjective evolutionary algorithm. In the experiments, we demonstrate that the learning procedure is effective, and the proposed multiobjective model can learn useful sparse features.
Farina, Marco; Pappadopulo, Duccio; Rompineve, Fabrizio; ...
2017-01-23
Here, we propose a framework in which the QCD axion has an exponentially large coupling to photons, relying on the “clockwork” mechanism. We discuss the impact of present and future axion experiments on the parameter space of the model. In addition to the axion, the model predicts a large number of pseudoscalars which can be light and observable at the LHC. In the most favorable scenario, axion Dark Matter will give a signal in multiple axion detection experiments and the pseudo-scalars will be discovered at the LHC, allowing us to determine most of the parameters of the model.
NASA Astrophysics Data System (ADS)
Wu, Hongjie; Yuan, Shifei; Zhang, Xi; Yin, Chengliang; Ma, Xuerui
2015-08-01
To improve the suitability of the lithium-ion battery model under varying scenarios, such as fluctuating temperature and SoC variation, a dynamic model with parameters updated in real time should be developed. In this paper, an incremental analysis-based auto regressive exogenous (I-ARX) modeling method is proposed to eliminate the modeling error caused by the OCV effect and improve the accuracy of parameter estimation. Then, its numerical stability, modeling error, and parametric sensitivity are analyzed at different sampling rates (0.02, 0.1, 0.5 and 1 s). To identify the model parameters recursively, a bias-correction recursive least squares (CRLS) algorithm is applied. Finally, pseudo random binary sequence (PRBS) and urban dynamic driving sequence (UDDS) profiles are applied to verify the real-time performance and robustness of the newly proposed model and algorithm. Different sampling rates (1 Hz and 10 Hz) and multiple temperature points (5, 25, and 45 °C) are covered in our experiments. The experimental and simulation results indicate that the proposed I-ARX model can provide high accuracy and suitability for parameter identification without using the open circuit voltage.
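A minimal sketch of recursive least squares with a forgetting factor, the kind of update on which an (I-)ARX battery identifier is built. The regressor layout and numerical values are assumptions for illustration; the paper's incremental analysis and bias-correction steps are not reproduced here.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=0.995):
    """One recursive-least-squares step with forgetting factor lam.
    theta: parameter vector, P: covariance matrix, phi: regressor, y: measured output."""
    K = P @ phi / (lam + phi @ P @ phi)        # gain vector
    theta = theta + K * (y - phi @ theta)      # parameter update from the prediction error
    P = (P - np.outer(K, phi) @ P) / lam       # covariance update
    return theta, P

# Hypothetical first-order ARX regressor: [ -V(k-1), I(k), I(k-1) ]
theta = np.zeros(3)
P = np.eye(3) * 1e3
theta, P = rls_update(theta, P, np.array([-3.6, 1.2, 1.1]), 3.58)   # illustrative sample
```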
Automatic Detection of Welding Defects using Deep Neural Network
NASA Astrophysics Data System (ADS)
Hou, Wenhui; Wei, Ye; Guo, Jie; Jin, Yi; Zhu, Chang'an
2018-01-01
In this paper, we propose an automatic detection scheme with three stages for weld defects in x-ray images. First, a preprocessing procedure is applied to the image to locate the weld region. Then, a classification model based on a deep neural network is constructed, trained, and tested with patches cropped from the x-ray images; this model can learn the intrinsic features of the images without extra calculation. Finally, a sliding-window approach is used to detect defects over whole images with the trained model. In order to evaluate the performance of the model, we carry out several experiments. The results demonstrate that the proposed classification model is effective in the detection of welded joint quality.
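The sliding-window stage can be pictured as below; this is a generic sketch rather than the authors' pipeline, and classify_patch stands in for whatever trained classifier is available.

```python
import numpy as np

def classify_patch(patch):
    """Placeholder for a trained classifier; here, a naive intensity rule."""
    return float(patch.mean() > 0.7)   # pretend bright patches are defects

def sliding_window_detect(image, patch_size=32, stride=16, threshold=0.5):
    """Scan the image and return (row, col) of windows flagged as defects."""
    detections = []
    h, w = image.shape
    for r in range(0, h - patch_size + 1, stride):
        for c in range(0, w - patch_size + 1, stride):
            patch = image[r:r + patch_size, c:c + patch_size]
            if classify_patch(patch) > threshold:
                detections.append((r, c))
    return detections

if __name__ == "__main__":
    img = np.random.rand(128, 128)
    img[40:80, 40:80] += 0.5           # synthetic "defect" region
    print(sliding_window_detect(np.clip(img, 0, 1)))
```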
Geomechanical Modeling of Gas Hydrate Bearing Sediments
NASA Astrophysics Data System (ADS)
Sanchez, M. J.; Gai, X., Sr.
2015-12-01
This contribution focuses on an advanced geomechanical model for methane hydrate-bearing soils that is based on concepts of elasto-plasticity for strain hardening/softening soils and incorporates bonding and damage effects. The core of the proposed model includes: a hierarchical single surface critical state framework, sub-loading concepts for modeling the plastic strains generally observed inside the yield surface, and a hydrate enhancement factor to account for the cementing effects provided by the presence of hydrates in sediments. The proposed framework has been validated against recently published experiments involving both synthetic and natural hydrate soils, as well as different sediment types (i.e., different hydrate saturations and different hydrate morphologies) and confinement conditions. The performance of the model in these different case studies was very satisfactory.
A Collaborative Molecular Modeling Environment Using a Virtual Tunneling Service
Lee, Jun; Kim, Jee-In; Kang, Lin-Woo
2012-01-01
Collaborative research on three-dimensional molecular modeling can be limited by different time zones and locations. A networked virtual environment can be used to overcome the problems caused by these temporal and spatial differences. However, traditional approaches did not sufficiently consider the integration of different computing environments, which are characterized by types of applications, roles of users, and so on. We propose a collaborative molecular modeling environment that integrates different molecular modeling systems using a virtual tunneling service. We integrated Co-Coot, a collaborative crystallographic object-oriented toolkit, with VRMMS, a virtual reality molecular modeling system, through a collaborative tunneling system. The proposed system showed reliable quantitative and qualitative results in pilot experiments. PMID:22927721
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibatov, R. T., E-mail: ren-sib@bk.ru; Morozova, E. V., E-mail: kat-valezhanina@yandex.ru
2015-05-15
A model of dispersive transport in disordered nanostructured semiconductors has been proposed taking into account the percolation structure of a sample and the joint action of several mechanisms. Topological and energy disorders have been simultaneously taken into account within the multiple trapping model on a comb structure modeling the percolation character of trajectories. The joint action of several mechanisms has been described within random walks with a mixture of waiting time distributions. Integral transport equations with fractional derivatives have been obtained for an arbitrary density of localized states. The kinetics of the transient current has been calculated within the proposed new model in order to analyze time-of-flight experiments for nanostructured semiconductors.
Adaptation of hidden Markov models for recognizing speech of reduced frame rate.
Lee, Lee-Min; Jean, Fu-Rong
2013-12-01
The frame rate of the observation sequence in distributed speech recognition applications may be reduced to suit a resource-limited front-end device. In order to use models trained on full-frame-rate data for the recognition of reduced frame-rate (RFR) data, we propose a method for adapting the transition probabilities of hidden Markov models (HMMs) to match the frame rate of the observations. Experiments on the recognition of clean and noisy connected digits are conducted to evaluate the proposed method. Experimental results show that the proposed method can effectively compensate for the frame-rate mismatch between the training and the test data. Using our adapted model to recognize the RFR speech data, one can significantly reduce the computation time and achieve the same level of accuracy as a method that restores the frame rate using data interpolation.
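One simple way to adapt HMM transition probabilities to a decimated observation stream (keeping every k-th frame) is to raise the full-frame-rate transition matrix to the k-th power, so that each remaining step accounts for the skipped frames. The sketch below illustrates that idea; it is not claimed to be the exact adaptation rule of the paper.

```python
import numpy as np

def adapt_transitions(A, k):
    """Return the k-step transition matrix for an HMM whose observations
    are decimated by a factor k (only every k-th frame is kept)."""
    return np.linalg.matrix_power(A, k)

if __name__ == "__main__":
    # A simple 3-state left-to-right transition matrix at the full frame rate.
    A = np.array([[0.9, 0.1, 0.0],
                  [0.0, 0.9, 0.1],
                  [0.0, 0.0, 1.0]])
    A_rfr = adapt_transitions(A, k=3)
    print(A_rfr)
    print(A_rfr.sum(axis=1))   # rows still sum to 1
```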
BgCut: automatic ship detection from UAV images.
Xu, Chao; Zhang, Dongping; Zhang, Zhengning; Feng, Zhiyong
2014-01-01
Ship detection in static UAV aerial images is a fundamental challenge in sea target detection and precise positioning. In this paper, an improved universal background model based on the GrabCut algorithm is proposed to segment foreground objects from the sea automatically. First, a sea template library including images taken in different natural conditions is built to provide an initial template for the model. Then the background trimap is obtained by combining template matching with a region-growing algorithm. The resulting trimap initializes the GrabCut background instead of manual intervention, and the segmentation proceeds without iteration. The effectiveness of the proposed model is demonstrated by extensive experiments on real UAV aerial images of a certain area acquired with an airborne Canon 5D Mark camera. The proposed algorithm is not only adaptive but also yields good segmentation. Furthermore, the model in this paper can be applied to the automated processing of industrial images in related research.
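A minimal OpenCV sketch of initializing GrabCut from a trimap-style mask (rather than a user-drawn rectangle) is shown below; the mask values and image path are illustrative assumptions, and this is not the paper's full template-matching pipeline.

```python
import numpy as np
import cv2

def grabcut_from_trimap(image, trimap):
    """Run GrabCut with mask initialization.

    trimap: uint8 array with 0 = certain background, 1 = certain foreground,
            2 = unknown. It is mapped to OpenCV's GC_* labels.
    """
    mask = np.full(trimap.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    mask[trimap == 0] = cv2.GC_BGD
    mask[trimap == 1] = cv2.GC_FGD
    mask[trimap == 2] = cv2.GC_PR_FGD
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, None, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_MASK)
    # Pixels labeled (probably) foreground form the segmentation.
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

if __name__ == "__main__":
    img = cv2.imread("uav_frame.jpg")              # hypothetical input image
    trimap = np.full(img.shape[:2], 2, np.uint8)   # everything unknown here
    trimap[:10, :] = 0                             # top rows assumed background
    segmentation = grabcut_from_trimap(img, trimap)
    print(segmentation.sum(), "foreground pixels")
```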
A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models
NASA Astrophysics Data System (ADS)
Li, Qia; Micchelli, Charles A.; Shen, Lixin; Xu, Yuesheng
2012-09-01
Our goal in this paper is to improve the computational performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed-point equations expressed in terms of the proximity operators. Based upon this observation we develop an algorithm for solving the model and establish its convergence. Furthermore, we demonstrate that the proposed algorithm can be accelerated through the use of the componentwise Gauss-Seidel iteration so that the CPU time consumed is significantly reduced. Numerical experiments using the proposed algorithm for impulsive noise removal are included, with a comparison to three recently developed algorithms. The numerical results show that while the proposed algorithm enjoys a high quality of the restored images, as the other three known algorithms do, it performs significantly better in terms of computational efficiency measured in the CPU time consumed.
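For context, the proximity operator of the weighted l1 norm, a basic building block of such algorithms, reduces to componentwise soft thresholding; the sketch below shows this operator and a trivial use, without reproducing the paper's full L1/TV fixed-point scheme.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximity operator of lam * ||.||_1 (componentwise soft thresholding):
    argmin_x 0.5*||x - v||^2 + lam*||x||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

if __name__ == "__main__":
    v = np.array([3.0, -0.2, 0.5, -2.5])
    print(prox_l1(v, lam=1.0))   # -> [ 2.  -0.   0.  -1.5]
```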
A Target Aware Texture Mapping for Sculpture Heritage Modeling
NASA Astrophysics Data System (ADS)
Yang, C.; Zhang, F.; Huang, X.; Li, D.; Zhu, Y.
2017-08-01
In this paper, we propose a target-aware image-to-model registration method that uses silhouettes as matching clues. The target sculpture object in a natural environment can be automatically detected in an image with a complex background with the assistance of 3D geometric data. The silhouette can then be automatically extracted and applied to image-to-model matching. Because the user does not need to deliberately draw the target area, the time required for precise image-to-model matching is greatly reduced. To enhance the method, we also improved the silhouette matching algorithm to support conditional silhouette matching. Two experiments, using a stone lion sculpture of the Ming Dynasty and a portable relic in a museum, are given to evaluate the proposed method. The method proposed in this paper has been extended and developed into mature software applied in many cultural heritage documentation projects.
Multi-mode clustering model for hierarchical wireless sensor networks
NASA Astrophysics Data System (ADS)
Hu, Xiangdong; Li, Yongfu; Xu, Huifen
2017-03-01
The topology management, i.e., cluster maintenance, of wireless sensor networks (WSNs) is still a challenge due to their numerous nodes, diverse application scenarios, limited resources, and complex dynamics. To address this issue, a multi-mode clustering model (M2CM) is proposed to maintain the clusters of hierarchical WSNs in this study. In particular, unlike the traditional time-triggered model based on whole-network, periodic operations, the M2CM is based on local, event-triggered operations. In addition, an adaptive local maintenance algorithm is designed for broken clusters in the WSNs according to spatial-temporal changes in demand. Numerical experiments are performed using the NS2 network simulation platform. Results validate the effectiveness of the proposed model with respect to network maintenance costs, node energy consumption, and transmitted data, as well as network lifetime.
NASA Astrophysics Data System (ADS)
Brattico, Elvira; Brattico, Pauli; Vuust, Peter
2017-07-01
In their target article published in this journal issue, Pelowski et al. [1] address the question of how humans experience, and respond to, visual art. They propose a multi-layered model of the representations and processes involved in assessing visual art objects that, furthermore, involves both bottom-up and top-down elements. Their model provides predictions for seven different outcomes of human aesthetic experience, based on a few distinct features (schema congruence, self-relevance, and coping necessity), and connects the underlying processing stages to "specific correlates of the brain" (a similar attempt was previously made for music by [2-4]). In doing this, the model aims to account for the (often profound) experience of an individual viewer in front of an art object.
Robust small area prediction for counts.
Tzavidis, Nikos; Ranalli, M Giovanna; Salvati, Nicola; Dreassi, Emanuela; Chambers, Ray
2015-06-01
A new semiparametric approach to model-based small area prediction for counts is proposed and used for estimating the average number of visits to physicians for Health Districts in Central Italy. The proposed small area predictor can be viewed as an outlier robust alternative to the more commonly used empirical plug-in predictor that is based on a Poisson generalized linear mixed model with Gaussian random effects. Results from the real data application and from a simulation experiment confirm that the proposed small area predictor has good robustness properties and in some cases can be more efficient than alternative small area approaches. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
NASA Astrophysics Data System (ADS)
Rutqvist, J.; Rinaldi, A. P.
2017-12-01
The exploitation of a geothermal system is one of the most promising clean and almost inexhaustible forms of energy production. However, the exploitation of hot dry rock (HDR) reservoirs at depth requires circulation of a large amount of fluids. Indeed, the conceptual model of an Enhanced Geothermal System (EGS) requires that the circulation be enhanced by fluid injection. The pioneering experiments at Fenton Hill demonstrated the feasibility of EGS by producing the world's first HDR reservoirs. This pioneering project demonstrated that fluid circulation can be effectively enhanced by stimulating a preexisting fracture zone. The so-called "hydroshearing" process, involving shear activation of preexisting fractures, is recognized as one of the main processes effectively enhancing permeability. The goal of this work is to quantify the effect of shear reactivation on permeability by proposing a model that accounts for fracture opening and shearing. We develop a case based on a pressure stimulation experiment at Fenton Hill, in which observations suggest that a fracture was jacked open by a pressure increase. The proposed model can successfully reproduce such behavior, and we compare the base case of pure elastic opening with the hydroshearing model to demonstrate that the latter could have occurred in the field, although no "felt" seismicity was observed. We then investigate the sensitivity of the proposed model by varying some of the critical parameters, such as the maximum aperture, the dilation angle, and the fracture density.
Li, Tingting; Cheng, Zhengguo; Zhang, Le
2017-01-01
Since they can provide a natural and flexible description of the nonlinear dynamic behavior of complex systems, agent-based models (ABMs) have been commonly used for immune system simulation. However, it is crucial for an ABM to obtain an appropriate estimate of the key parameters of the model by incorporating experimental data. In this paper, a systematic procedure for immune system simulation that integrates the ABM and a regression method under the framework of history matching is developed. A novel parameter estimation method incorporating the experimental data for the ABM simulator is proposed. First, we employ the ABM as a simulator of the immune system. Then, a dimension-reduced generalized additive model (GAM) is trained as a statistical regression model using the input and output data of the ABM, and it plays the role of an emulator during history matching. Next, we reduce the input parameter space by introducing an implausibility measure to discard implausible input values. Finally, the model parameters are estimated with the particle swarm optimization (PSO) algorithm by fitting the experimental data among the non-implausible input values. A real Influenza A Virus (IAV) data set is employed to demonstrate the performance of our proposed method, and the results show that the proposed method not only has good fitting and predictive accuracy but also favorable computational efficiency. PMID:29194393
Active Player Modeling in the Iterated Prisoner's Dilemma
Park, Hyunsoo; Kim, Kyung-Joong
2016-01-01
The iterated prisoner's dilemma (IPD) is well known within the domain of game theory. Although it is relatively simple, it can also elucidate important problems related to cooperation and trust. Generally, players can predict their opponents' actions when they are able to build a precise model of their behavior based on their game playing experience. However, it is difficult to make such predictions based on a limited number of games. The creation of a precise model requires the use of not only an appropriate learning algorithm and framework but also a good dataset. Active learning approaches have recently been introduced to machine learning communities. The approach can usually produce informative datasets with relatively little effort. Therefore, we have proposed an active modeling technique to predict the behavior of IPD players. The proposed method can model the opponent player's behavior while taking advantage of interactive game environments. This experiment used twelve representative types of players as opponents, and an observer used an active modeling algorithm to model these opponents. This observer actively collected data and modeled the opponent's behavior online. Most of our data showed that the observer was able to build, through direct actions, a more accurate model of an opponent's behavior than when the data were collected through random actions. PMID:26989405
Liou, Shwu-Ru
2009-01-01
To systematically analyse the Organizational Commitment model and Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to nursing shortage. However, the appropriateness of applying these two models in nursing was not analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly whereas they are operationally defined in the Organizational Commitment model. Predictability of the Theory of Reasoned Action is questionable whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.
A propagation experiment for modelling high elevation angle land mobile satellite channels
NASA Technical Reports Server (NTRS)
Richharia, M.; Evans, B. G.; Butt, G.
1990-01-01
This paper summarizes the results of a feasibility study for conducting high elevation angle propagation experiments in the European region for land mobile satellite communication. The study addresses various aspects of a proposed experiment. These include the selection of a suitable source for transmission, possibility of gathering narrow and wide band propagation data in various frequency bands, types of useful data, data acquisition technique, possible experimental configuration, and other experimental details.
Butterfill, Stephen A
2015-11-01
What evidence could bear on questions about whether humans ever perceptually experience any of another's mental states, and how might those questions be made precise enough to test experimentally? This paper focusses on emotions and their expression. It is proposed that research on perceptual experiences of physical properties provides one model for thinking about what evidence concerning expressions of emotion might reveal about perceptual experiences of others' mental states. This proposal motivates consideration of the hypothesis that categorical perception of expressions of emotion occurs, can be facilitated by information about agents' emotions, and gives rise to phenomenal expectations. It is argued that the truth of this hypothesis would support a modest version of the claim that humans sometimes perceptually experience some of another's mental states. Much available evidence is consistent with, but insufficient to establish, the truth of the hypothesis. We are probably not yet in a position to know whether humans ever perceptually experience others' mental states. Copyright © 2015 Elsevier Inc. All rights reserved.
An Improved Perturb and Observe Algorithm for Photovoltaic Motion Carriers
NASA Astrophysics Data System (ADS)
Peng, Lele; Xu, Wei; Li, Liming; Zheng, Shubin
2018-03-01
An improved perturbation and observation algorithm for photovoltaic motion carriers is proposed in this paper. The model of the proposed algorithm is derived using the Lambert W function and a tangent error method. Moreover, the tracking performance of the proposed algorithm is tested using MATLAB and experiments on a photovoltaic system. The results demonstrate that the improved algorithm has fast tracking speed and high efficiency. Furthermore, the energy conversion efficiency of the improved method is increased by nearly 8.2%.
Tunable overlapping long-period fiber grating and its bending vector sensing application
NASA Astrophysics Data System (ADS)
Hu, Wei; Zhang, Weigang; Chen, Lei; Wang, Song; Zhang, Yunshan; Zhang, Yanxin; Kong, Lingxin; Yu, Lin; Yan, Tieyi; Li, Yanping
2018-03-01
A novel overlapping long-period fiber grating (OLPFG) is proposed and experimentally demonstrated in this paper. The OLPFG is composed of two partially overlapping long-period fiber gratings (LPFGs). Based on coupled-mode theory and the transfer matrix method, it is found that the phase-shifted LPFG and interfering LPFGs are two special cases of the proposed OLPFG. Moreover, confirmation experiments verify that the proposed OLPFG has a high bending sensitivity in opposite directions and that the temperature crosstalk can be compensated spontaneously.
ERIC Educational Resources Information Center
Zhang, Jinguang
2010-01-01
Research suggests that first- and third-person perceptions are driven by the motive to self-enhance and cognitive processes involving the perception of social norms. This article proposes and tests a dual-process model that predicts an interaction between cognition and motivation. Consistent with the model, Experiment 1 (N = 112) showed that…
Accurate object tracking system by integrating texture and depth cues
NASA Astrophysics Data System (ADS)
Chen, Ju-Chin; Lin, Yu-Hang
2016-03-01
A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminative texture information between the object and the background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement the texture information to cope with both appearance variations and background clutter. Moreover, to reduce the increased risk of drifting for textureless depth templates, an update mechanism is proposed to select more precise tracking results and avoid incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.
Heterogeneous Tensor Decomposition for Clustering via Manifold Optimization.
Sun, Yanfeng; Gao, Junbin; Hong, Xia; Mishra, Bamdev; Yin, Baocai
2016-03-01
Tensor clustering is an important tool that exploits intrinsically rich structures in real-world multiarray or tensor datasets. Often, in dealing with those datasets, standard practice is to use subspace clustering based on vectorizing the multiarray data. However, vectorization of tensorial data does not exploit the complete structure information. In this paper, we propose a subspace clustering algorithm without adopting any vectorization process. Our approach is based on a novel heterogeneous Tucker decomposition model that takes cluster membership information into account. We propose a new clustering algorithm that alternates between the different modes of the proposed heterogeneous tensor model. All but the last mode have closed-form updates. Updating the last mode reduces to optimizing over the multinomial manifold, for which we investigate second-order Riemannian geometry and propose a trust-region algorithm. Numerical experiments show that our proposed algorithm competes effectively with state-of-the-art clustering algorithms that are based on tensor factorization.
Development of oil canning index model for sheet metal forming products with large curvature
NASA Astrophysics Data System (ADS)
Kim, Honglae; Lee, Seonggi; Murugesan, Mohanraj; Hong, Seokmoo; Lee, Shanghun; Ki, Juncheol; Jung, Hunchul; Kim, Naksoo
2017-09-01
Oil canning is predominantly caused by unequal stretching and heterogeneous stress distributions in steel sheets, which affect the appearance of components and cause noise and vibration problems. This paper proposes the formulation of an oil canning index (OCI) model that can predict the occurrence of oil canning in sheet metal. To investigate the influence of material properties, we used electro-galvanized (EGI) and galvanized (GI) steel sheets with different thicknesses and processing conditions. Furthermore, this paper presents an appropriate experimental and numerical procedure for determining the sheet stiffness and indentation properties to evaluate the oil canning results. Experiments were carried out by varying the tensile force over different materials, thicknesses, and bead forces. Comparison of the discrete results obtained from these experiments confirmed that product shape characteristics, such as curvature, have a significant influence on the occurrence of oil canning. Based on the results, we propose the new OCI model, which can effectively predict the occurrence of oil canning owing to the shape curvature. The accuracy and usability of our model were verified by simulating the experiments performed with the sheet metal, and good agreement between the experimental and numerical results was observed. This research can be considered a very effective method for eliminating appearance defects from automobile products.
A Sarsa(λ)-Based Control Model for Real-Time Traffic Light Coordination
Zhu, Fei; Liu, Quan; Fu, Yuchen; Huang, Wei
2014-01-01
Traffic problems often occur due to the traffic demands of the large number of vehicles on the road. Maximizing traffic flow and minimizing the average waiting time are the goals of intelligent traffic control. Each junction wants to obtain a larger traffic flow. During the process, junctions form a policy of coordination as well as constraints for adjacent junctions to maximize their own interests. A good traffic signal timing policy is helpful to solve the problem. However, as there are so many factors that can affect the traffic control model, it is difficult to find the optimal solution. The inability of traffic light controllers to learn from past experiences leaves them unable to adaptively fit dynamic changes in traffic flow. Considering the dynamic characteristics of the actual traffic environment, a reinforcement learning based traffic control approach can be applied to obtain the optimal scheduling policy. The proposed Sarsa(λ)-based real-time traffic control optimization model can maintain the traffic signal timing policy more effectively. The Sarsa(λ)-based model obtains the traffic cost of each vehicle, which considers delay time, the number of waiting vehicles, and the integrated saturation, from its experiences to learn and determine the optimal actions. The experiment results show an inspiring improvement in traffic control, indicating the proposed model is capable of facilitating real-time dynamic traffic control. PMID:24592183
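A generic tabular Sarsa(λ) update with accumulating eligibility traces looks like the sketch below; the state/action encoding and the reward signal of a real traffic-light controller are application-specific and are only stubbed here.

```python
import numpy as np

def sarsa_lambda_step(Q, E, s, a, r, s_next, a_next,
                      alpha=0.1, gamma=0.95, lam=0.9):
    """One Sarsa(lambda) update over tabular Q-values and eligibility traces E."""
    delta = r + gamma * Q[s_next, a_next] - Q[s, a]
    E[s, a] += 1.0                 # accumulating trace
    Q += alpha * delta * E         # update all recently visited state-action pairs
    E *= gamma * lam               # decay traces
    return Q, E

if __name__ == "__main__":
    n_states, n_actions = 16, 4    # e.g. discretized queue lengths x signal phases
    Q = np.zeros((n_states, n_actions))
    E = np.zeros_like(Q)
    rng = np.random.default_rng(0)
    s, a = 0, 0
    for _ in range(1000):
        r = -rng.random()          # stub: negative waiting-time cost
        s_next = int(rng.integers(n_states))
        a_next = int(np.argmax(Q[s_next])) if rng.random() > 0.1 else int(rng.integers(n_actions))
        Q, E = sarsa_lambda_step(Q, E, s, a, r, s_next, a_next)
        s, a = s_next, a_next
    print(Q.round(2))
```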
ERIC Educational Resources Information Center
Groccia, James E.
2018-01-01
This chapter reviews the history and various definitions of student engagement and proposes a multidimensional model from which one can develop a variety of engagement opportunities that lead to a rich and challenging higher education experience.
DOE R&D Accomplishments Database
Davis, R. Jr.; Evans, J. C.; Cleveland, B. T.
1978-04-28
A summary of the results of the Brookhaven solar neutrino experiment is given and discussed in relation to solar model calculations. A review is given of the merits of various new solar neutrino detectors that were proposed.
Microdose Induced Drain Leakage Effects in Power Trench MOSFETs: Experiment and Modeling
NASA Astrophysics Data System (ADS)
Zebrev, Gennady I.; Vatuev, Alexander S.; Useinov, Rustem G.; Emeliyanov, Vladimir V.; Anashin, Vasily S.; Gorbunov, Maxim S.; Turin, Valentin O.; Yesenkov, Kirill A.
2014-08-01
We study experimentally and theoretically the microdose-induced drain-source leakage current in trench power MOSFETs under irradiation with high-LET heavy ions. We find experimentally that the cumulative increase of the leakage current occurs through stochastic spikes, each corresponding to a strike of a single heavy ion into the MOSFET gate oxide. We simulate this effect with the proposed analytic model, which describes (including via Monte Carlo methods) both the deterministic (cumulative dose) and stochastic (single event) aspects of the problem. Based on this model, an assessment of the survival probability in a space heavy-ion environment with high LET is proposed.
The Student Spaceflight Experiments Program: Access to the ISS for K-14 Students
NASA Astrophysics Data System (ADS)
Livengood, Timothy A.; Goldstein, J. J.; Vanhala, H. A. T.; Johnson, M.; Hulslander, M.
2012-10-01
The Student Spaceflight Experiments Program (SSEP) has flown 42 experiments to space, on behalf of students from middle school through community college, on 3 missions: each of the last 2 Space Shuttle flights, and the first SpaceX resupply flight to the International Space Station (ISS). SSEP plans 2 missions to the ISS per year for the foreseeable future, and is expanding the program to include 4-year undergraduate college students and home-schooled students. SSEP experiments have explored biological, chemical, and physical phenomena within self-contained enclosures developed by NanoRacks, currently in the form of MixStix Fluid Mixing Enclosures. Over 9000 students participated in the initial 3 missions of SSEP, directly experiencing the entire lifecycle of space science experimentation through community-wide participation in SSEP, taking research from a nascent idea through developing competitive research proposals, down-selecting to three proposals from each participating community and further selection of a single proposal for flight, actual space flight, sample recovery, analysis, and reporting. The National Air and Space Museum has hosted 2 National Conferences for SSEP student teams to report results in keeping with the model of professional research. Student teams have unflinchingly reported on success, failure, and groundbased efforts to develop proposals for future flight opportunities. Community participation extends outside the sciences and the immediate proposal efforts to include design competitions for mission patches (that also fly to space). Student experimenters have rallied around successful proposal teams to support a successful experiment on behalf of the entire community. SSEP is a project of the National Center for Earth and Space Science Education enabled through NanoRacks LLC, working in partnership with NASA under a Space Act Agreement as part of the utilization of the International Space Station as a National Laboratory.
Antoniotti, M; Park, F; Policriti, A; Ugel, N; Mishra, B
2003-01-01
The analysis of large amounts of data, produced as (numerical) traces of in vivo, in vitro and in silico experiments, has become a central activity for many biologists and biochemists. Recent advances in the mathematical modeling and computation of biochemical systems have moreover increased the prominence of in silico experiments; such experiments typically involve the simulation of sets of Differential Algebraic Equations (DAE), e.g., Generalized Mass Action systems (GMA) and S-systems. In this paper we reason about the necessary theoretical and pragmatic foundations for a query and simulation system capable of analyzing large amounts of such trace data. To this end, we propose to combine in a novel way several well-known tools from numerical analysis (approximation theory), temporal logic and verification, and visualization. The result is a preliminary prototype system: simpathica/xssys. When dealing with simulation data, simpathica/xssys exploits the special structure of the underlying DAE, and reduces the search space in an efficient way so as to facilitate any queries about the traces. The proposed system is designed to give the user the possibility to systematically analyze and simultaneously query different possible timed evolutions of the modeled system.
Pan, Chong; Zhang, Dali; Kon, Audrey Wan Mei; Wai, Charity Sue Lea; Ang, Woo Boon
2015-06-01
Continuous improvement in process efficiency for specialist outpatient clinic (SOC) systems is increasingly being demanded due to the growth of the patient population in Singapore. In this paper, we propose a discrete event simulation (DES) model to represent the patient and information flow in an ophthalmic SOC system in the Singapore National Eye Centre (SNEC). Different improvement strategies to reduce the turnaround time for patients in the SOC were proposed and evaluated with the aid of the DES model and the Design of Experiment (DOE). Two strategies for better patient appointment scheduling and one strategy for dilation-free examination are estimated to have a significant impact on turnaround time for patients. One of the improvement strategies has been implemented in the actual SOC system in the SNEC with promising improvement reported.
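A toy discrete event simulation of one clinic stage, in the spirit of the DES model mentioned above, can be written with the SimPy library; the arrival rate, consultation time and number of doctors below are purely illustrative and are not the SNEC parameters.

```python
import random
import simpy

def patient(env, doctors, results):
    arrival = env.now
    with doctors.request() as req:                       # wait for a free doctor
        yield req
        yield env.timeout(random.expovariate(1 / 15.0))  # ~15 min consultation
    results.append(env.now - arrival)                    # turnaround time

def arrivals(env, doctors, results):
    while True:
        yield env.timeout(random.expovariate(1 / 5.0))   # ~1 arrival per 5 min
        env.process(patient(env, doctors, results))

if __name__ == "__main__":
    random.seed(42)
    env = simpy.Environment()
    doctors = simpy.Resource(env, capacity=3)
    results = []
    env.process(arrivals(env, doctors, results))
    env.run(until=8 * 60)                                # one 8-hour clinic session
    print(f"{len(results)} patients seen, mean turnaround "
          f"{sum(results) / len(results):.1f} min")
```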
Evaluation of an imputed pitch velocity model of the auditory kappa effect.
Henry, Molly J; McAuley, J Devin
2009-04-01
Three experiments evaluated an imputed pitch velocity model of the auditory kappa effect. Listeners heard 3-tone sequences and judged the timing of the middle (target) tone relative to the timing of the 1st and 3rd (bounding) tones. Experiment 1 held pitch constant but varied the time (T) interval between bounding tones (T = 728, 1,000, or 1,600 ms) in order to establish baseline performance levels for the 3 values of T. Experiments 2 and 3 combined the values of T tested in Experiment 1 with a pitch manipulation in order to create fast (8 semitones/728 ms), medium (8 semitones/1,000 ms), and slow (8 semitones/1,600 ms) velocity conditions. Consistent with an auditory motion hypothesis, distortions in perceived timing were larger for fast than for slow velocity conditions for both ascending sequences (Experiment 2) and descending sequences (Experiment 3). Overall, results supported the proposed imputed pitch velocity model of the auditory kappa effect. (c) 2009 APA, all rights reserved.
Visual anticipation biases conscious decision making but not bottom-up visual processing.
Mathews, Zenon; Cetnarski, Ryszard; Verschure, Paul F M J
2014-01-01
Prediction plays a key role in the control of attention, but it is not clear which aspects of prediction are most prominent in conscious experience. An evolving view on the brain is that it can be seen as a prediction machine that optimizes its ability to predict states of the world and the self through the top-down propagation of predictions and the bottom-up presentation of prediction errors. There are competing views, though, on whether prediction or prediction errors dominate the formation of conscious experience. Yet, the dynamic effects of prediction on perception, decision making and consciousness have been difficult to assess and to model. We propose a novel mathematical framework and a psychophysical paradigm that allow us to assess the hierarchical structuring of perceptual consciousness and its content, as well as the impact of predictions and/or errors on conscious experience, attention and decision-making. Using a displacement detection task combined with reverse correlation, we reveal signatures of the usage of prediction at three different levels of perceptual processing: bottom-up fast saccades, top-down driven slow saccades and conscious decisions. Our results suggest that the brain employs multiple parallel mechanisms at different levels of perceptual processing in order to shape effective sensory consciousness within a predicted perceptual scene. We further observe that bottom-up sensory and top-down predictive processes can be dissociated through cognitive load. We propose a probabilistic data association model from dynamical systems theory to model the predictive multi-scale bias in perceptual processing that we observe and its role in the formation of conscious experience. We propose that these results support the hypothesis that consciousness provides a time-delayed description of a task that is used to prospectively optimize real-time control structures, rather than being engaged in the real-time control of behavior itself.
Chaibub Neto, Elias; Bare, J. Christopher; Margolin, Adam A.
2014-01-01
New algorithms are continuously proposed in computational biology. Performance evaluation of novel methods is important in practice. Nonetheless, the field experiences a lack of rigorous methodology aimed to systematically and objectively evaluate competing approaches. Simulation studies are frequently used to show that a particular method outperforms another. Often times, however, simulation studies are not well designed, and it is hard to characterize the particular conditions under which different methods perform better. In this paper we propose the adoption of well established techniques in the design of computer and physical experiments for developing effective simulation studies. By following best practices in planning of experiments we are better able to understand the strengths and weaknesses of competing algorithms leading to more informed decisions about which method to use for a particular task. We illustrate the application of our proposed simulation framework with a detailed comparison of the ridge-regression, lasso and elastic-net algorithms in a large scale study investigating the effects on predictive performance of sample size, number of features, true model sparsity, signal-to-noise ratio, and feature correlation, in situations where the number of covariates is usually much larger than sample size. Analysis of data sets containing tens of thousands of features but only a few hundred samples is nowadays routine in computational biology, where “omics” features such as gene expression, copy number variation and sequence data are frequently used in the predictive modeling of complex phenotypes such as anticancer drug response. The penalized regression approaches investigated in this study are popular choices in this setting and our simulations corroborate well established results concerning the conditions under which each one of these methods is expected to perform best while providing several novel insights. PMID:25289666
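A scaled-down version of one such simulation cell, comparing ridge, lasso and elastic net on synthetic data with controlled sparsity, feature correlation and signal-to-noise ratio, might look like the following; the specific settings are arbitrary and are not those of the study.

```python
import numpy as np
from sklearn.linear_model import Ridge, Lasso, ElasticNet
from sklearn.metrics import mean_squared_error

def simulate(n=100, p=1000, k_nonzero=10, snr=3.0, rho=0.3, seed=0):
    """One simulation cell: n samples, p features, k_nonzero true effects,
    pairwise feature correlation rho, and a given signal-to-noise ratio."""
    rng = np.random.default_rng(seed)
    cov = np.full((p, p), rho) + (1 - rho) * np.eye(p)
    X = rng.multivariate_normal(np.zeros(p), cov, size=n)
    beta = np.zeros(p)
    beta[:k_nonzero] = 1.0
    signal = X @ beta
    y = signal + rng.normal(scale=np.std(signal) / snr, size=n)
    return X, y

if __name__ == "__main__":
    X, y = simulate()
    X_test, y_test = simulate(seed=1)
    for name, model in [("ridge", Ridge(alpha=1.0)),
                        ("lasso", Lasso(alpha=0.1)),
                        ("enet", ElasticNet(alpha=0.1, l1_ratio=0.5))]:
        model.fit(X, y)
        mse = mean_squared_error(y_test, model.predict(X_test))
        print(f"{name}: test MSE = {mse:.3f}")
```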
Design Approaches to Support Preservice Teachers in Scientific Modeling
NASA Astrophysics Data System (ADS)
Kenyon, Lisa; Davis, Elizabeth A.; Hug, Barbara
2011-02-01
Engaging children in scientific practices is hard for beginning teachers. One such scientific practice with which beginning teachers may have limited experience is scientific modeling. We have iteratively designed preservice teacher learning experiences and materials intended to help teachers achieve learning goals associated with scientific modeling. Our work has taken place across multiple years at three university sites, with preservice teachers focused on early childhood, elementary, and middle school teaching. Based on results from our empirical studies supporting these design decisions, we discuss design features of our modeling instruction in each iteration. Our results suggest some successes in supporting preservice teachers in engaging students in modeling practice. We propose design principles that can guide science teacher educators in incorporating modeling in teacher education.
NASA Astrophysics Data System (ADS)
Shi, Jinfei; Zhu, Songqing; Chen, Ruwen
2017-12-01
An order selection method based on multiple stepwise regressions is proposed for the General expression of the Nonlinear AutoRegressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to assess the improvements that both the newly introduced and the originally existing variables bring to the model characteristics, and these are used to determine which model variables to retain or eliminate. The optimal model is thus obtained through measurement of the data fitting effect or significance tests. The simulation and classic time-series data experiment results show that the proposed method is simple, reliable and can be applied to practical engineering.
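A bare-bones forward selection of autoregressive lag terms by adjusted R², loosely in the spirit of the stepwise procedure described, could be sketched as follows; the selection statistic and lag pool are illustrative choices, not the paper's.

```python
import numpy as np

def adjusted_r2(y, y_hat, n_params):
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_params - 1)

def forward_select_lags(y, max_lag=6):
    """Greedily add lag terms y[t-k] while adjusted R^2 keeps improving."""
    n = len(y)
    target = y[max_lag:]
    lags = {k: y[max_lag - k:n - k] for k in range(1, max_lag + 1)}
    selected, best_score = [], -np.inf
    while True:
        best_candidate = None
        for k in set(lags) - set(selected):
            cols = [lags[j] for j in selected + [k]]
            X = np.column_stack(cols + [np.ones_like(target)])  # lags + intercept
            coef, *_ = np.linalg.lstsq(X, target, rcond=None)
            score = adjusted_r2(target, X @ coef, n_params=len(cols))
            if score > best_score:
                best_score, best_candidate = score, k
        if best_candidate is None:
            return selected, best_score
        selected.append(best_candidate)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = np.zeros(500)
    for t in range(2, 500):                  # AR(2) test series
        y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
    print(forward_select_lags(y))            # expect lags 1 and 2 to be chosen first
```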
A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology
Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi
2015-01-01
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement of high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. Similar to the cameras of traditional MDCSs, calibration is also essential for the TSC of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-position accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, results of the comparison between the traditional (MADC II) and proposed MDCS demonstrate that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also believe that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of the higher-level photogrammetric products. PMID:25835187
Evolution of fairness in the one-shot anonymous Ultimatum Game
Rand, David G.; Tarnita, Corina E.; Ohtsuki, Hisashi; Nowak, Martin A.
2013-01-01
Classical economic models assume that people are fully rational and selfish, while experiments often point to different conclusions. A canonical example is the Ultimatum Game: one player proposes a division of a sum of money between herself and a second player, who either accepts or rejects. Based on rational self-interest, responders should accept any nonzero offer and proposers should offer the smallest possible amount. Traditional, deterministic models of evolutionary game theory agree: in the one-shot anonymous Ultimatum Game, natural selection favors low offers and demands. Experiments instead show a preference for fairness: often responders reject low offers and proposers make higher offers than needed to avoid rejection. Here we show that using stochastic evolutionary game theory, where agents make mistakes when judging the payoffs and strategies of others, natural selection favors fairness. Across a range of parameters, the average strategy matches the observed behavior: proposers offer between 30% and 50%, and responders demand between 25% and 40%. Rejecting low offers increases relative payoff in pairwise competition between two strategies and is favored when selection is sufficiently weak. Offering more than you demand increases payoff when many strategies are present simultaneously and is favored when mutation is sufficiently high. We also perform a behavioral experiment and find empirical support for these theoretical findings: uncertainty about the success of others is associated with higher demands and offers; and inconsistency in the behavior of others is associated with higher offers but not predictive of demands. In an uncertain world, fairness finishes first. PMID:23341593
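To give a concrete feel for stochastic evolutionary dynamics in the Ultimatum Game, the snippet below evolves a population of (offer, demand) strategies under noisy pairwise imitation (a Fermi rule) and mutation. This is a deliberately simplified sketch, not the analysis of the paper: payoffs are sampled against a single random opponent, which is one crude way of introducing mistakes in judging payoffs.

```python
import numpy as np

def ug_payoff(p1, p2):
    """Total payoff of strategy p1 = (offer, demand) against p2, playing both
    roles once. An offer is accepted if it meets the responder's demand."""
    pay = 0.0
    if p1[0] >= p2[1]:                 # p1 proposes, p2 responds
        pay += 1.0 - p1[0]
    if p2[0] >= p1[1]:                 # p2 proposes, p1 responds
        pay += p2[0]
    return pay

def evolve(pop_size=100, generations=5000, beta=10.0, mu=0.01, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.random((pop_size, 2))    # columns: offer, demand in [0, 1]
    for _ in range(generations):
        i, j = rng.integers(pop_size, size=2)
        opp = pop[rng.integers(pop_size)]
        fi, fj = ug_payoff(pop[i], opp), ug_payoff(pop[j], opp)
        # Fermi (pairwise comparison) rule: i imitates j with this probability.
        if rng.random() < 1.0 / (1.0 + np.exp(-beta * (fj - fi))):
            pop[i] = pop[j]
        if rng.random() < mu:          # mutation: random new strategy
            pop[i] = rng.random(2)
    return pop.mean(axis=0)            # average (offer, demand)

if __name__ == "__main__":
    print(evolve())
```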
Pomerantsev, Alexey L; Kutsenova, Alla V; Rodionova, Oxana Ye
2017-02-01
A novel non-linear regression method for modeling non-isothermal thermogravimetric data is proposed. Experiments for several heating rates are analyzed simultaneously. The method is applicable to complex multi-stage processes when the number of stages is unknown. Prior knowledge of the type of kinetics is not required. The main idea is a consequent estimation of parameters when the overall model is successively changed from one level of modeling to another. At the first level, the Avrami-Erofeev functions are used. At the second level, the Sestak-Berggren functions are employed with the goal to broaden the overall model. The method is tested using both simulated and real-world data. A comparison of the proposed method with a recently published 'model-free' deconvolution method is presented.
A Non-Intrusive Pressure Sensor by Detecting Multiple Longitudinal Waves
Zhou, Hongliang; Lin, Weibin; Ge, Xiaocheng; Zhou, Jian
2016-01-01
Pressure vessels are widely used in industrial fields, and some of them are safety-critical components in the system, for example those that contain flammable or explosive material. Therefore, the pressure of these vessels becomes one of the critical measurements for operational management. In this paper, we introduce a new approach to the design of non-intrusive pressure sensors based on ultrasonic waves. The model of this sensor is built upon the change with pressure of the travel times of the critically refracted longitudinal wave (LCR wave) and the reflected longitudinal waves. To evaluate the model, experiments are carried out to compare the proposed model with other existing models. The results show that the proposed model can improve the accuracy compared to models based on a single wave. PMID:27527183
Efficient micromagnetic modelling of spin-transfer torque and spin-orbit torque
NASA Astrophysics Data System (ADS)
Abert, Claas; Bruckner, Florian; Vogler, Christoph; Suess, Dieter
2018-05-01
While the spin-diffusion model is considered one of the most complete and accurate tools for the description of spin transport and spin torque, its solution in the context of dynamical micromagnetic simulations is numerically expensive. We propose a procedure to retrieve the free parameters of a simple macro-spin-like spin-torque model through the spin-diffusion model. In the case of spin-transfer torque, the simplified model agrees with the model of Slonczewski. A similar model can be established for the description of spin-orbit torque. In both cases, the spin-diffusion model enables the retrieval of the free model parameters from the geometry and the material parameters of the system. Since these parameters usually have to be determined phenomenologically through experiments, the proposed method combines the strength of the diffusion model to resolve material parameters and geometry with the high performance of simple torque models.
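For reference, the macro-spin Landau-Lifshitz-Gilbert equation with a Slonczewski-type spin-transfer-torque term is commonly written as below; prefactor and sign conventions vary between references, so this should be read as a schematic form rather than the exact parametrization used by the authors.

```latex
\frac{\mathrm{d}\mathbf{m}}{\mathrm{d}t}
  = -\gamma\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}}
    + \alpha\,\mathbf{m}\times\frac{\mathrm{d}\mathbf{m}}{\mathrm{d}t}
    + \frac{\gamma\hbar\,\eta\,j}{2 e\,M_{\mathrm{s}}\,d}\,
      \mathbf{m}\times\left(\mathbf{m}\times\mathbf{p}\right)
```

Here m is the unit magnetization, p the spin-polarization direction, j the current density, η the polarization efficiency, d the free-layer thickness, M_s the saturation magnetization, α the damping constant and γ the gyromagnetic ratio.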
Continuing education for general practice. 2. Systematic learning from experience.
al-Shehri, A; Stanley, I; Thomas, P
1993-01-01
Prompted by evidence that the recently-adopted arrangements for ongoing education among established general practitioners are unsatisfactory, the first of a pair of papers examined the theoretical basis of continuing education for general practice and proposed a model of self-directed learning in which the experience of established practitioners is connected, through the media of reading, reflection and audit, with competence for the role. In this paper a practical, systematic approach to self-directed learning by general practitioners is described based on the model. The contribution which appropriate participation in continuing medical education can make to enhancing learning from experience is outlined. PMID:8373649
Modeling of Flow Transition Using an Intermittency Transport Equation
NASA Technical Reports Server (NTRS)
Suzen, Y. B.; Huang, P. G.
1999-01-01
A new transport equation for intermittency factor is proposed to model transitional flows. The intermittent behavior of the transitional flows is incorporated into the computations by modifying the eddy viscosity, μ_t, obtainable from a turbulence model, with the intermittency factor, γ: μ_t* = γ·μ_t. In this paper, Menter's SST model (Menter, 1994) is employed to compute μ_t and other turbulent quantities. The proposed intermittency transport equation can be considered as a blending of two models - Steelant and Dick (1996) and Cho and Chung (1992). The former was proposed for near-wall flows and was designed to reproduce the streamwise variation of the intermittency factor in the transition zone following Dhawan and Narasimha correlation (Dhawan and Narasimha, 1958) and the latter was proposed for free shear flows and was used to provide a realistic cross-stream variation of the intermittency profile. The new model was used to predict the T3 series experiments assembled by Savill (1993a, 1993b) including flows with different freestream turbulence intensities and two pressure-gradient cases. For all test cases good agreements between the computed results and the experimental data are observed.
Fundamental incorporation of the density change during melting of a confined phase change material
NASA Astrophysics Data System (ADS)
Hernández, Ernesto M.; Otero, José A.
2018-02-01
The modeling of thermal diffusion processes taking place in a phase change material presents a challenge when the dynamics of the phase transition is coupled to the mechanical properties of the container. Thermo-mechanical models have been developed by several authors; however, it is shown that these models only explain the phase transition dynamics at low pressures, when the density of each phase experiences negligible changes. In our proposal, a new energy-mass balance equation at the interface is derived and found to be a consequence of mass conservation. The density change experienced by each phase is predicted by the proposed formulation of the problem. Numerical and semi-analytical solutions to the proposed model are presented for an example of a high temperature phase change material. The solutions to the models presented by other authors are observed to be well behaved close to the isobaric limit. However, compared to the results obtained from our model, the change in the fusion temperature, latent heat, and absolute pressure is found to be greatly overestimated by other proposals when the phase transition is studied close to the isochoric regime.
Data-Driven Modeling and Rendering of Force Responses from Elastic Tool Deformation
Rakhmatov, Ruslan; Ogay, Tatyana; Jeon, Seokhee
2018-01-01
This article presents a new data-driven model design for rendering force responses from elastic tool deformation. The new design incorporates a six-dimensional input describing the initial position of the contact, as well as the state of the tool deformation. The input-output relationship of the model was represented by a radial basis function network, which was optimized based on training data collected from real tool-surface contact. Since the input space of the model is represented in the local coordinate system of a tool, the model is independent of recording and rendering devices and can be easily deployed to an existing simulator. The model also supports complex interactions, such as self-contact and multi-contact collisions. In order to assess the proposed data-driven model, we built a custom data acquisition setup and developed a proof-of-concept rendering simulator. The simulator was evaluated through numerical and psychophysical experiments with four different real tools. The numerical evaluation demonstrated the perceptual soundness of the proposed model, while the user study revealed the force feedback of the proposed simulator to be realistic. PMID:29342964
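As a rough illustration of the kind of mapping this abstract describes (a six-dimensional contact/deformation state mapped to a force response through a radial basis function network), the following Python sketch fits an RBF interpolant to synthetic training pairs. The data, kernel choice and dimensions are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch of a radial-basis-function mapping from a six-dimensional
# contact/deformation state to a force response, in the spirit of the
# data-driven model described above. Uses SciPy's RBFInterpolator on synthetic
# training pairs; the real system's input encoding and training data differ.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(500, 6))    # contact position + deformation state
F_train = np.column_stack([np.sin(X_train[:, :3]).sum(axis=1),
                           X_train[:, 3:5].prod(axis=1),
                           X_train[:, 5] ** 2])     # synthetic 3D force responses

rbf = RBFInterpolator(X_train, F_train, kernel="thin_plate_spline", smoothing=1e-3)

x_query = rng.uniform(-1.0, 1.0, size=(1, 6))       # tool state during rendering
print(rbf(x_query))                                 # force fed to the haptic loop
```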
A simple analytical infiltration model for short-duration rainfall
NASA Astrophysics Data System (ADS)
Wang, Kaiwen; Yang, Xiaohua; Liu, Xiaomang; Liu, Changming
2017-12-01
Many infiltration models have been proposed to simulate the infiltration process. Different initial soil conditions and non-uniform initial water content can lead to infiltration simulation errors, especially for short-duration rainfall (SHR). Few infiltration models are specifically derived to eliminate the errors caused by complex initial soil conditions. We present a simple analytical infiltration model for SHR infiltration simulation, the Short-duration Infiltration Process (SHIP) model. The infiltration simulated by 5 models (SHIP (high), SHIP (middle), SHIP (low), Philip and Parlange) was compared based on numerical experiments and soil column experiments. In the numerical experiments, the SHIP (middle) and Parlange models had robust solutions for SHR infiltration simulation of 12 typical soils under different initial soil conditions. The absolute values of percent bias were less than 12% and the values of Nash-Sutcliffe efficiency were greater than 0.83. Additionally, in the soil column experiments, the infiltration rate fluctuated within a range because of non-uniform initial water content. The SHIP (high) and SHIP (low) models can simulate an infiltration range, which successfully covered the fluctuation range of the observed infiltration rate. Given the robustness of its solutions and its coverage of the fluctuation range of the infiltration rate, the SHIP model can be integrated into hydrologic models to simulate the SHR infiltration process and benefit flood forecasting.
NASA Astrophysics Data System (ADS)
Zhang, Jianfeng; Zhu, Yan; Zhang, Xiaoping; Ye, Ming; Yang, Jinzhong
2018-06-01
Predicting water table depth over the long term in agricultural areas presents great challenges because these areas have complex and heterogeneous hydrogeological characteristics, boundary conditions, and human activities, and nonlinear interactions occur among these factors. Therefore, a new time series model based on Long Short-Term Memory (LSTM) was developed in this study as an alternative to computationally expensive physical models. The proposed model is composed of an LSTM layer with another fully connected layer on top of it, with a dropout method applied in the first LSTM layer. In this study, the proposed model was applied and evaluated in five sub-areas of the Hetao Irrigation District in arid northwestern China using 14 years of data (2000-2013). The proposed model uses monthly water diversion, evaporation, precipitation, temperature, and time as input data to predict water table depth. A simple but effective standardization method was employed to pre-process the data to ensure they were on the same scale. The 14 years of data were separated into a training set (2000-2011) and a validation set (2012-2013) in the experiment. As expected, the proposed model achieves higher R2 scores (0.789-0.952) in water table depth prediction than the traditional feed-forward neural network (FFNN), which only reaches relatively low R2 scores (0.004-0.495), showing that the proposed model can preserve and learn from previous information well. Furthermore, the validity of the dropout method and of the proposed model's architecture are discussed. The experiments show that the dropout method can prevent overfitting significantly. In addition, comparisons between the R2 scores of the proposed model and a Double-LSTM model (R2 scores ranging from 0.170 to 0.864) further show that the proposed model's architecture is reasonable and contributes to a strong learning ability on time series data. Thus, the proposed model can serve as an alternative approach for predicting water table depth, especially in areas where hydrogeological data are difficult to obtain.
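A minimal sketch of the architecture summarized above (one LSTM layer with dropout followed by a fully connected output layer), assuming TensorFlow/Keras; the layer size, dropout rate, synthetic data and training settings are illustrative placeholders, not the study's values.

```python
# Minimal sketch of the LSTM water-table-depth model described above.
# Assumes TensorFlow/Keras; layer sizes, dropout rate and training settings
# are illustrative placeholders, not the values used in the study.
import numpy as np
import tensorflow as tf

n_timesteps, n_features = 12, 5   # e.g. 12 months of (diversion, evaporation,
                                  # precipitation, temperature, time) inputs

model = tf.keras.Sequential([
    tf.keras.Input(shape=(n_timesteps, n_features)),
    tf.keras.layers.LSTM(64, dropout=0.2),   # LSTM layer with dropout
    tf.keras.layers.Dense(1),                # predicted water table depth
])
model.compile(optimizer="adam", loss="mse")

# x_train, y_train would be standardized sequences split into
# training (2000-2011) and validation (2012-2013) periods.
x_train = np.random.rand(100, n_timesteps, n_features).astype("float32")
y_train = np.random.rand(100, 1).astype("float32")
model.fit(x_train, y_train, epochs=5, batch_size=16, verbose=0)
```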
Cellular automaton model of crowd evacuation inspired by slime mould
NASA Astrophysics Data System (ADS)
Kalogeiton, V. S.; Papadopoulos, D. P.; Georgilas, I. P.; Sirakoulis, G. Ch.; Adamatzky, A. I.
2015-04-01
In all living organisms, self-preservation behaviour is almost universal. Even the simplest of living organisms, like slime mould, are typically under intense selective pressure to evolve responses that ensure their survival and safety in the best possible way. On the other hand, evacuation of a place can easily be characterized as one of the most stressful situations for the individuals taking part in it. Taking inspiration from slime mould behaviour, we introduce a bio-inspired computational model of crowd evacuation. Cellular Automata (CA) were selected as a fully parallel advanced computational tool able to mimic the Physarum's behaviour. In particular, while mimicking the Physarum foraging process, the proposed CA model takes into account the food diffusion, the organism's growth, the creation of tubes for each organism, the selection of the optimum tube for each human in correspondence with the crowd evacuation under study and, finally, the movement of all humans at each time step towards the nearest exit. To test the model's efficiency and robustness, several simulation scenarios were proposed both in virtual and real-life indoor environments (namely, the first floor of office building B of the Department of Electrical and Computer Engineering of Democritus University of Thrace). The proposed model is further evaluated in a purely quantitative way by comparing the simulation results with corresponding ones from the bibliography based on real data. The examined fundamental diagrams of velocity-density and flow-density are found to be in full agreement with many already published results, proving the adequacy, the fitness and the resulting dynamics of the model. Finally, several real Physarum experiments were conducted in an archetype of the aforementioned real-life environment, proving that the proposed model succeeded in sufficiently reproducing the Physarum's recorded behaviour derived from observation of the aforementioned biological laboratory experiments.
Topological phases in the Haldane model with spin–spin on-site interactions
NASA Astrophysics Data System (ADS)
Rubio-García, A.; García-Ripoll, J. J.
2018-04-01
Ultracold atom experiments allow the study of topological insulators, such as the non-interacting Haldane model. In this work we study a generalization of the Haldane model with spin–spin on-site interactions that can be implemented on such experiments. We focus on measuring the winding number, a topological invariant, of the ground state, which we compute using a mean-field calculation that effectively captures long-range correlations and a matrix product state computation in a lattice with 64 sites. Our main result is that we show how the topological phases present in the non-interacting model survive until the interactions are comparable to the kinetic energy. We also demonstrate the accuracy of our mean-field approach in efficiently capturing long-range correlations. Based on state-of-the-art ultracold atom experiments, we propose an implementation of our model that can give information about the topological phases.
Du, Dongping; Yang, Hui; Ednie, Andrew R; Bennett, Eric S
2016-09-01
Glycan structures account for up to 35% of the mass of cardiac sodium (Nav) channels. To question whether and how reduced sialylation affects Nav activity and cardiac electrical signaling, we conducted a series of in vitro experiments on ventricular apex myocytes under two different glycosylation conditions, reduced protein sialylation (ST3Gal4(-/-)) and full glycosylation (control). Although aberrant electrical signaling is observed with reduced sialylation, gaining a better understanding of the mechanistic details of pathological variations in INa and AP is difficult without performing in silico studies. However, computer models of Nav channels and cardiac myocytes involve great complexity, e.g., a high-dimensional parameter space and nonlinear, nonconvex equations. Traditional linear and nonlinear optimization methods have encountered many difficulties in model calibration. This paper presents a new statistical metamodeling approach for efficient computer experiments and optimization of Nav models. First, we utilize a fractional factorial design to identify control variables from the large set of model parameters, thereby reducing the dimensionality of the parametric space. Further, we develop a Gaussian process model as a surrogate of the expensive and time-consuming computer model and then identify the next best design point as the one that yields the maximal probability of improvement. This process iterates until convergence, and the performance is evaluated and validated with real-world experimental data. Experimental results show the proposed algorithm achieves superior performance in modeling the kinetics of Nav channels under a variety of glycosylation conditions. As a result, in silico models provide a better understanding of glyco-altered mechanistic details in state transitions and distributions of Nav channels. Notably, ST3Gal4(-/-) myocytes are shown to have higher probabilities accumulated in intermediate inactivation during repolarization and yield a shorter refractory period than WT myocytes. The proposed statistical design of computer experiments is generally extensible to many other disciplines that involve large-scale and computationally expensive models.
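The following Python sketch illustrates the metamodeling loop described above under stated assumptions: a Gaussian process surrogate (scikit-learn) is fitted to runs of an expensive simulator, and the next design point is chosen by maximal probability of improvement. The objective `run_nav_simulation` is a hypothetical stand-in for the Nav-channel model, and all dimensions and settings are illustrative.

```python
# Sketch of a Gaussian-process surrogate with a probability-of-improvement
# acquisition, in the spirit of the metamodeling loop described above.
# `run_nav_simulation` is a hypothetical stand-in for the expensive
# Nav-channel computer model; all settings are illustrative.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel


def run_nav_simulation(theta):
    # placeholder objective: discrepancy between simulated and measured INa
    return float(np.sum((theta - 0.3) ** 2))


rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(10, 3))           # initial design over screened parameters
y = np.array([run_nav_simulation(x) for x in X])

for _ in range(20):
    gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True)
    gp.fit(X, y)
    cand = rng.uniform(0.0, 1.0, size=(1000, 3))  # candidate design points
    mu, sd = gp.predict(cand, return_std=True)
    best = y.min()
    pi = norm.cdf((best - mu) / np.maximum(sd, 1e-9))  # probability of improvement
    x_next = cand[np.argmax(pi)]
    X = np.vstack([X, x_next])
    y = np.append(y, run_nav_simulation(x_next))
```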
Wang, Likun; Du, Zhijiang; Dong, Wei; Shen, Yi; Zhao, Guangyu
2018-01-01
To achieve strength augmentation, endurance enhancement, and human assistance in a functional autonomous exoskeleton, control precision, back drivability, low output impedance, and mechanical compactness are desired. In our previous work, two elastic modules were designed for human–robot interaction sensing and compliant control, respectively. According to the intrinsic sensing properties of the elastic module, in this paper, only one compact elastic module is applied to realize both purposes. Thus, the corresponding control strategy is required and evolving internal model control is proposed to address this issue. Moreover, the input signal to the controller is derived from the deflection of the compact elastic module. The human–robot interaction is considered as the disturbance which is approximated by the output error between the exoskeleton control plant and evolving forward learning model. Finally, to verify our proposed control scheme, several experiments are conducted with our robotic exoskeleton system. The experiment shows a satisfying result and promising application feasibility. PMID:29562684
Tuarob, Suppawong; Tucker, Conrad S; Salathe, Marcel; Ram, Nilam
2014-06-01
The role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0. Multiple studies have illustrated the use of information in social media to discover biomedical and health-related knowledge. Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words. These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data, where sparsity and noise are the norm. This paper aims to address the limitations posed by the traditional bag-of-words based methods and proposes to use heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large scale social media data. Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data. Social media data are characterized by an abundance of short social-oriented messages that do not conform to standard languages, both grammatically and syntactically. The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise. We first identify the limitations of the traditional methods, which train machines with N-gram word features, then propose to overcome these limitations by utilizing a collaboration of machine learning based classifiers, each of which is trained to learn a semantically different aspect of the data. The parameter analysis for tuning each classifier is also reported. Three data sets are used in this research. The first data set comprises approximately 5000 hand-labeled tweets and is used for cross validation of the classification models in the small scale experiment, and for training the classifiers in the real-world large scale experiment. The second data set is a random sample of real-world Twitter data in the US. The third data set is a random sample of real-world Facebook Timeline posts. Two sets of evaluations are conducted to investigate the proposed model's ability to discover health-related information in the social media domain: small scale and large scale evaluations. The small scale evaluation employs 10-fold cross validation on the labeled data, and aims to tune parameters of the proposed models and to compare with the state-of-the-art method. The large scale evaluation tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity in real-world social media. The small scale experiment reveals that the proposed method is able to mitigate the limitations of the well established techniques existing in the literature, resulting in a performance improvement of 18.61% (F-measure). The large scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of heterogeneity, while the proposed method is able to yield reasonably good performance and outperform the baseline by 46.62% (F-measure) on average.
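As a hedged sketch of the ensemble idea described above, the snippet below combines two classifiers trained on different feature views of short posts (word and character n-grams stand in for the paper's heterogeneous features) with soft voting in scikit-learn; the data and feature choices are illustrative, not the paper's.

```python
# Sketch of an ensemble that combines classifiers trained on different
# feature views of short social-media posts (here word and character
# n-grams as stand-ins for the paper's heterogeneous features).
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["got my flu shot today", "new phone arrived", "awful migraine again"]
labels = [1, 0, 1]   # 1 = health-related, 0 = not

word_view = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
char_view = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
                          LogisticRegression(max_iter=1000))

ensemble = VotingClassifier([("words", word_view), ("chars", char_view)],
                            voting="soft")
ensemble.fit(texts, labels)
print(ensemble.predict(["my blood pressure is finally down"]))
```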
Numerical Simulation of the Perrin-Like Experiments
ERIC Educational Resources Information Center
Mazur, Zygmunt; Grech, Dariusz
2008-01-01
A simple model of the random Brownian walk of a spherical mesoscopic particle in viscous liquids is proposed. The model can be solved analytically and simulated numerically. The analytic solution gives the known Einstein-Smoluchowski diffusion law r² = 2Dt, where the diffusion constant D is expressed by the mass and geometry of a…
A Theory for the Neural Basis of Language Part 2: Simulation Studies of the Model
ERIC Educational Resources Information Center
Baron, R. J.
1974-01-01
Computer simulation studies of the proposed model are presented. Processes demonstrated are (1) verbally directed recall of visual experience; (2) understanding of verbal information; (3) aspects of learning and forgetting; (4) the dependence of recognition and understanding on context; and (5) elementary concepts of sentence production. (Author)
Assessing the New Competencies for Resident Education: A Model from an Emergency Medicine Program.
ERIC Educational Resources Information Center
Reisdorff, Earl J.; Hayes, Oliver W.; Carlson, Dale J.; Walker, Gregory L.
2001-01-01
Based on the experience of Michigan State University's emergency medicine residency program, proposes a practical method for modifying an existing student evaluation format. The model provides a template other programs could use in assessing residents' acquisition of the knowledge, skills, and attitudes reflected in the six general competencies…
Research Vitality as Sustained Excellence: What Keeps the Plates Spinning?
ERIC Educational Resources Information Center
Gilstrap, J. Bruce; Harvey, Jaron; Novicevic, Milorad M.; Buckley, M. Ronald
2011-01-01
Purpose: Research vitality addresses the perseverance that faculty members in the organization sciences experience in maintaining their research quantity and quality over an extended period of time. The purpose of this paper is to offer a theoretical model of research vitality. Design/methodology/approach: The authors propose a model consisting of…
The Model Technology School: Toward Literacy through Technology. Technology.
ERIC Educational Resources Information Center
Schneider, Raymond J.
This paper describes one Florida school's experience with the Model Technology Schools (MTS) pilot program, and proposes a poetry curriculum for K-12 education that incorporates laserdisc technology for student presentations. Webster Elementary School in St. Augustine was the smallest of five schools chosen for the MTS program to demonstrate the…
From Children's Perspectives: A Model of Aesthetic Processing in Theatre
ERIC Educational Resources Information Center
Klein, Jeanne
2005-01-01
While several developmental models of aesthetic understanding, experience, and appreciation exist in the realms of visual art and music education, few examples have been proposed in regard to theatre, particularly for child audiences. This author argues that children gaze upon theatre in differential ways by including age as a variable…
Assessment and Innovation: One Darn Thing Leads to Another
ERIC Educational Resources Information Center
Rutz, Carol; Lauer-Glebov, Jacqulyn
2005-01-01
Using recent experience at Carleton College in Minnesota as a case history, the authors offer a model for assessment that provides more flexibility than the well-known assessment feedback loop, which assumes a linear progression within a hierarchical administrative structure. The proposed model is based on a double helix, with values and feedback…
NASA Astrophysics Data System (ADS)
Antonopoulou, Evangelia; Rohmann-Shaw, Connor F.; Sykes, Thomas C.; Cayre, Olivier J.; Hunter, Timothy N.; Jimack, Peter K.
2018-03-01
Understanding the sedimentation behaviour of colloidal suspensions is crucial in determining their stability. Since sedimentation rates are often very slow, centrifugation is used to expedite sedimentation experiments. The effect of centrifugal acceleration on sedimentation behaviour is not fully understood. Furthermore, in sedimentation models, interparticle interactions are usually omitted by using the hard-sphere assumption. This work proposes a one-dimensional model for sedimentation using an effective maximum volume fraction, with an extension for sedimentation under centrifugal force. A numerical implementation of the model using an adaptive finite difference solver is described. Experiments with silica suspensions are carried out using an analytical centrifuge. The model is shown to be a good fit with experimental data for 480 nm spherical silica, with the effects of centrifugation at 705 rpm studied. A conversion of data to Earth gravity conditions is proposed, which is shown to recover Earth gravity sedimentation rates well. This work suggests that the effective maximum volume fraction accurately captures interparticle interactions and provides insights into the effect of centrifugation on sedimentation.
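A generic sketch of the kind of one-dimensional sedimentation model described above is given below: an explicit upwind finite-difference scheme for a Kynch-type conservation law with a hindered-settling flux that vanishes at an effective maximum volume fraction. The flux form, parameters and the simple scaling used to mimic centrifugation are assumptions for illustration, not the authors' formulation.

```python
# Generic explicit upwind solver for a 1D Kynch-type sedimentation equation
#   d(phi)/dt + d(phi * v(phi))/dz = 0,
# with a hindered-settling velocity that vanishes at an *effective* maximum
# volume fraction phi_max_eff. This is only a sketch of the kind of model
# described above, not the authors' formulation or parameter values.
import numpy as np

v_stokes = 1e-6          # single-particle settling speed (m/s), illustrative
phi_max_eff = 0.55       # effective maximum volume fraction (captures interactions)
g_factor = 1.0           # >1 would mimic enhanced settling under centrifugation

nz, dz, dt = 200, 1e-4, 1.0
phi = np.full(nz, 0.05)  # initially uniform suspension, z increasing downwards


def flux(p):
    hindrance = np.clip(1.0 - p / phi_max_eff, 0.0, None) ** 4.65  # Richardson-Zaki-like
    return p * v_stokes * g_factor * hindrance


for _ in range(5000):
    f = flux(phi)
    f_in = np.concatenate(([0.0], f[:-1]))    # flux entering each cell from above
    f_out = np.concatenate((f[:-1], [0.0]))   # flux leaving downwards; closed bottom
    phi = phi + dt / dz * (f_in - f_out)
```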
A developmental social neuroscience model for understanding loneliness in adolescence.
Wong, Nichol M L; Yeung, Patcy P S; Lee, Tatia M C
2018-02-01
Loneliness is prevalent in adolescents. Although it can be a normative experience, children and adolescents who experience loneliness are often at risk for anxiety, depression, and suicide. Research efforts have been made to identify the neurobiological basis of such distressful feelings in our social brain. In adolescents, the social brain is still undergoing significant development, which may contribute to their increased and differential sensitivity to the social environment. Many behavioral studies have shown the significance of attachment security and social skills in adolescents' interactions with the social world. In this review, we propose a developmental social neuroscience model that extends from the social neuroscience model of loneliness. In particular, we argue that the social brain and social skills are both important for the development of adolescents' perceived loneliness and that adolescents' familial attachment sets the baseline for neurobiological development. By reviewing the related behavioral and neuroimaging literature, we propose a developmental social neuroscience model to explain the heightened perception of loneliness in adolescents using social skills and attachment style as neurobiological moderators. We encourage future researchers to investigate adolescents' perceived social connectedness from the developmental neuroscience perspective.
Suzuki, Ryo; Ito, Kohta; Lee, Taeyong; Ogihara, Naomichi
2017-12-01
Identifying the viscous properties of the plantar soft tissue is crucial not only for understanding the dynamic interaction of the foot with the ground during locomotion, but also for development of improved footwear products and therapeutic footwear interventions. In the present study, the viscous and hyperelastic material properties of the plantar soft tissue were experimentally identified using a spherical indentation test and an analytical contact model of the spherical indentation test. Force-relaxation curves of the heel pads were obtained from the indentation experiment. The curves were fit to the contact model incorporating a five-element Maxwell model to identify the viscous material parameters. The finite element method with the experimentally identified viscoelastic parameters could successfully reproduce the measured force-relaxation curves, indicating the material parameters were correctly estimated using the proposed method. Although there are some methodological limitations, the proposed framework to identify the viscous material properties may facilitate the development of subject-specific finite element modeling of the foot and other biological materials.
Design of a Virtual Player for Joint Improvisation with Humans in the Mirror Game
Zhai, Chao; Alderisio, Francesco; Tsaneva-Atanasova, Krasimira; di Bernardo, Mario
2016-01-01
Joint improvisation is often observed among humans performing joint action tasks. Exploring the underlying cognitive and neural mechanisms behind the emergence of joint improvisation is an open research challenge. This paper investigates jointly improvised movements between two participants in the mirror game, a paradigmatic joint task example. First, experiments involving movement coordination of different dyads of human players are performed in order to build a human benchmark. No designation of leader and follower is given beforehand. We find that joint improvisation is characterized by the lack of a leader and high levels of movement synchronization. Then, a theoretical model is proposed to capture some features of their interaction, and a set of experiments is carried out to test and validate the model ability to reproduce the experimental observations. Furthermore, the model is used to drive a computer avatar able to successfully improvise joint motion with a human participant in real time. Finally, a convergence analysis of the proposed model is carried out to confirm its ability to reproduce joint movements between the participants. PMID:27123927
Design of a Virtual Player for Joint Improvisation with Humans in the Mirror Game.
Zhai, Chao; Alderisio, Francesco; Słowiński, Piotr; Tsaneva-Atanasova, Krasimira; di Bernardo, Mario
2016-01-01
Joint improvisation is often observed among humans performing joint action tasks. Exploring the underlying cognitive and neural mechanisms behind the emergence of joint improvisation is an open research challenge. This paper investigates jointly improvised movements between two participants in the mirror game, a paradigmatic joint task example. First, experiments involving movement coordination of different dyads of human players are performed in order to build a human benchmark. No designation of leader and follower is given beforehand. We find that joint improvisation is characterized by the lack of a leader and high levels of movement synchronization. Then, a theoretical model is proposed to capture some features of their interaction, and a set of experiments is carried out to test and validate the model ability to reproduce the experimental observations. Furthermore, the model is used to drive a computer avatar able to successfully improvise joint motion with a human participant in real time. Finally, a convergence analysis of the proposed model is carried out to confirm its ability to reproduce joint movements between the participants.
Physical Model of the Dynamic Instability in an Expanding Cell Culture
Mark, Shirley; Shlomovitz, Roie; Gov, Nir S.; Poujade, Mathieu; Grasland-Mongrain, Erwan; Silberzan, Pascal
2010-01-01
Collective cell migration is of great significance in many biological processes. The goal of this work is to give a physical model for the dynamics of cell migration during the wound healing response. Experiments demonstrate that an initially uniform cell-culture monolayer expands in a nonuniform manner, developing fingerlike shapes. These fingerlike shapes of the cell culture front are composed of columns of cells that move collectively. We propose a physical model to explain this phenomenon, based on the notion of dynamic instability. In this model, we treat the first layers of cells at the front of the moving cell culture as a continuous one-dimensional membrane (contour), with the usual elasticity of a membrane: curvature and surface tension. This membrane is active, due to the forces of cellular motility of the cells, and we propose that this motility is related to the local curvature of the culture interface; larger convex curvature correlates with a stronger cellular motility force. This shape-force relation gives rise to a dynamic instability, which we then compare to the patterns observed in the wound healing experiments. PMID:20141748
The Complex Action Recognition via the Correlated Topic Model
Tu, Hong-bin; Xia, Li-min; Wang, Zheng-wu
2014-01-01
Human complex action recognition is an important research area within action recognition. Among the various obstacles to human complex action recognition, one of the most challenging is dealing with self-occlusion, where one body part occludes another. This paper presents a new method for human complex action recognition based on optical flow and the correlated topic model (CTM). Firstly, a Markov random field is used to represent the occlusion relationship between human body parts in terms of an occlusion state variable. Secondly, structure from motion (SFM) is used to reconstruct the missing data of point trajectories. Then, key frames are extracted based on motion features from the optical flow, and the ratios of width and height are extracted from the human silhouette. Finally, the correlated topic model (CTM) is used to classify actions. Experiments were performed on the KTH, Weizmann, and UIUC action datasets to test and evaluate the proposed method. The comparative experimental results showed that the proposed method was more effective than the compared methods. PMID:24574920
Multiphysics modeling of two-phase film boiling within porous corrosion deposits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Miaomiao, E-mail: mmjin@mit.edu; Short, Michael, E-mail: hereiam@mit.edu
2016-07-01
Porous corrosion deposits on nuclear fuel cladding, known as CRUD, can cause multiple operational problems in light water reactors (LWRs). CRUD can cause accelerated corrosion of the fuel cladding, increase radiation fields and hence greater exposure risk to plant workers once activated, and induce a downward axial power shift causing an imbalance in core power distribution. In order to facilitate a better understanding of CRUD's effects, such as localized high cladding surface temperatures related to accelerated corrosion rates, we describe an improved, fully-coupled, multiphysics model to simulate heat transfer, chemical reactions and transport, and two-phase fluid flow within these deposits. Our new model features a reformed assumption of 2D, two-phase film boiling within the CRUD, correcting earlier models' assumptions of single-phase coolant flow with wick boiling under high heat fluxes. This model helps to better explain observed experimental values of the effective CRUD thermal conductivity. Finally, we propose a more complete set of boiling regimes, or a more detailed mechanism, to explain recent CRUD deposition experiments by suggesting the new concept of double dryout specifically in thick porous media with boiling chimneys. - Highlights: • A two-phase model of CRUD's effects on fuel cladding is developed and improved. • This model eliminates the formerly erroneous assumption of wick boiling. • Higher fuel cladding temperatures are predicted when accounting for two-phase flow. • Double-peaks in thermal conductivity vs. heat flux in experiments are explained. • A “double dryout” mechanism in CRUD is proposed based on the model and experiments.
Evaluating Variability and Uncertainty of Geological Strength Index at a Specific Site
NASA Astrophysics Data System (ADS)
Wang, Yu; Aladejare, Adeyemi Emman
2016-09-01
Geological Strength Index (GSI) is an important parameter for estimating rock mass properties. GSI can be estimated from the quantitative GSI chart, as an alternative to the direct observational method, which requires vast geological experience of rock. The GSI chart was developed from past observations and engineering experience, with either empiricism or some theoretical simplifications. The GSI chart thereby contains model uncertainty arising from its development. The presence of such model uncertainty affects the GSI estimated from the GSI chart at a specific site; it is, therefore, imperative to quantify and incorporate the model uncertainty when estimating GSI from the GSI chart. A major challenge in quantifying the GSI chart model uncertainty is the lack of the original datasets that were used to develop the GSI chart, since the GSI chart was developed from past experience without reference to specific datasets. This paper tackles this problem by developing a Bayesian approach for quantifying the model uncertainty in the GSI chart when using it to estimate GSI at a specific site. The model uncertainty in the GSI chart and the inherent spatial variability in GSI are modeled explicitly in the Bayesian approach. The Bayesian approach generates equivalent samples of GSI from the integrated knowledge of the GSI chart, prior knowledge and observation data available from site investigation. Equations are derived for the Bayesian approach, and the proposed approach is illustrated using data from a drill and blast tunnel project. The proposed approach effectively tackles the problem of how to quantify the model uncertainty that arises from using the GSI chart for characterization of site-specific GSI in a transparent manner.
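The snippet below is only a generic illustration of the Bayesian idea described above: chart-based GSI observations are modeled as the true, spatially variable GSI plus a chart model-uncertainty error, and a Metropolis sampler generates equivalent samples of the site mean. All priors, error scales and data are illustrative assumptions, not the quantities derived in the paper.

```python
# Generic illustration of Bayesian updating for site-specific GSI:
# chart-based observations are modeled as the true (spatially variable) GSI
# plus a chart model-uncertainty error, and a Metropolis sampler draws
# equivalent samples of the site mean. Priors, error scales and data are
# illustrative placeholders, not the values derived in the paper.
import numpy as np

rng = np.random.default_rng(1)
obs = np.array([42.0, 48.0, 45.0, 51.0, 44.0])  # GSI values read off the chart

prior_mean, prior_sd = 50.0, 15.0    # prior knowledge of the site mean GSI
sigma_spatial = 5.0                  # inherent spatial variability
sigma_model = 4.0                    # GSI-chart model uncertainty
sigma_total = np.hypot(sigma_spatial, sigma_model)


def log_post(mu):
    log_prior = -0.5 * ((mu - prior_mean) / prior_sd) ** 2
    log_like = -0.5 * np.sum(((obs - mu) / sigma_total) ** 2)
    return log_prior + log_like


samples, mu = [], prior_mean
for _ in range(20000):
    prop = mu + rng.normal(0.0, 2.0)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop
    samples.append(mu)

print("posterior mean GSI:", np.mean(samples[5000:]))
```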
Adeniyi, D A; Wei, Z; Yang, Y
2018-01-30
A wealth of data is available within the health care system; however, effective analysis tools for exploring the hidden patterns in these datasets are lacking. To alleviate this limitation, this paper proposes a simple but promising hybrid predictive model that suitably combines the Chi-square distance measurement with the case-based reasoning technique. The study presents the realization of an automated risk calculator and death prediction for some life-threatening ailments using a Chi-square case-based reasoning (χ2 CBR) model. The proposed predictive engine is capable of reducing runtime and speeds up the execution process through the use of a critical χ2 distribution value. This work also showcases the development of a novel feature selection method referred to as the frequent item based rule (FIBR) method. This FIBR method is used for selecting the best features for the proposed χ2 CBR model at the preprocessing stage of the predictive procedure. The implementation of the proposed risk calculator is achieved through the use of an in-house developed PHP program, experimented with the XAMP/Apache HTTP server as the hosting server. The process of data acquisition and case-base development is implemented using the MySQL application. Performance comparison between our system, the NBY, the ED-KNN, the ANN, the SVM, the Random Forest and the traditional CBR techniques shows that the quality of predictions produced by our system outperforms the baseline methods studied. The results of our experiment show that the precision rate and predictive quality of our system are in most cases equal to or greater than 70%. Our results also show that the proposed system executes faster than the baseline methods studied. Therefore, the proposed risk calculator is capable of providing useful, consistent, faster, accurate and efficient risk level prediction to both patients and physicians at any time, online and on a real-time basis.
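As a hedged sketch of the core retrieval step in a Chi-square case-based reasoning predictor, the snippet below compares a query case with stored cases using the chi-square distance and reuses the risk label of the closest case. The features and cases are invented for illustration; the paper's FIBR feature selection and critical chi-square cutoff are not reproduced.

```python
# Sketch of chi-square-distance case retrieval, the core of a chi-square
# case-based reasoning (CBR) predictor: the query is compared with stored
# cases and the risk label of the closest case is reused. Features and
# cases are illustrative; the paper's FIBR feature selection and critical
# chi-square cutoff are not reproduced here.
import numpy as np

# stored cases: non-negative feature vectors with known risk levels
case_base = np.array([[0.8, 0.1, 0.3],
                      [0.2, 0.7, 0.6],
                      [0.5, 0.5, 0.1]])
risk_labels = ["high", "low", "medium"]


def chi2_distance(a, b, eps=1e-12):
    return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))


def predict_risk(query):
    dists = [chi2_distance(query, case) for case in case_base]
    return risk_labels[int(np.argmin(dists))]


print(predict_risk(np.array([0.75, 0.2, 0.25])))   # -> "high"
```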
Searching for long-lived particles: A compact detector for exotics at LHCb
Gligorov, Vladimir V.; Knapen, Simon; Papucci, Michele; ...
2018-01-31
We advocate for the construction of a new detector element at the LHCb experiment, designed to search for displaced decays of beyond Standard Model long-lived particles, taking advantage of a large shielded space in the LHCb cavern that is expected to soon become available. We discuss the general features and putative capabilities of such an experiment, as well as its various advantages and complementarities with respect to the existing LHC experiments and proposals such as SHiP and MATHUSLA. For two well-motivated beyond Standard Model benchmark scenarios—Higgs decay to dark photons and B meson decays via a Higgs mixing portal—the reach either complements or exceeds that predicted for other LHC experiments.
An experimental approach to the fundamental principles of hemodynamics.
Pontiga, Francisco; Gaytán, Susana P
2005-09-01
An experimental model has been developed to give students hands-on experience with the fundamental laws of hemodynamics. The proposed experimental setup is of simple construction but permits precise measurement of the physical variables involved. The model consists of a series of experiments in which different basic phenomena are quantitatively investigated, such as the pressure drop in a long straight vessel and in an obstructed vessel, the transition from laminar to turbulent flow, the association of vessels in vascular networks, or the generation of a critical stenosis. Through these experiments, students acquire a direct appreciation of the importance of the parameters involved in the relationship between pressure and flow rate, thus facilitating the comprehension of more complex problems in hemodynamics.
Bae, Gi-Yeul; Olkkonen, Maria; Allred, Sarah R; Flombaum, Jonathan I
2015-08-01
Categorization with basic color terms is an intuitive and universal aspect of color perception. Yet research on visual working memory capacity has largely assumed that only continuous estimates within color space are relevant to memory. As a result, the influence of color categories on working memory remains unknown. We propose a dual content model of color representation in which color matches to objects that are either present (perception) or absent (memory) integrate category representations along with estimates of specific values on a continuous scale ("particulars"). We develop and test the model through 4 experiments. In a first experiment pair, participants reproduce a color target, both with and without a delay, using a recently influential estimation paradigm. In a second experiment pair, we use standard methods in color perception to identify boundary and focal colors in the stimulus set. The main results are that responses drawn from working memory are significantly biased away from category boundaries and toward category centers. Importantly, the same pattern of results is present without a memory delay. The proposed dual content model parsimoniously explains these results, and it should replace prevailing single content models in studies of visual working memory. More broadly, the model and the results demonstrate how the main consequence of visual working memory maintenance is the amplification of category-related biases and stimulus-specific variability that originate in perception.
Verma, Gyanendra K; Tiwary, Uma Shanker
2014-11-15
The purpose of this paper is twofold: (i) to investigate emotion representation models and find out the possibility of a model with a minimum number of continuous dimensions, and (ii) to recognize and predict emotion from measured physiological signals using a multiresolution approach. The multimodal physiological signals are: electroencephalogram (EEG) (32 channels) and peripheral signals (8 channels: galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We discuss the theories of emotion modeling based on (i) basic emotions, (ii) the cognitive appraisal and physiological response approach and (iii) the dimensional approach, and propose a three continuous dimensional representation model for emotions. A clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions is also proposed. The Discrete Wavelet Transform, a classical transform for multiresolution analysis of signals, has been used in this study. Experiments are performed to classify the different emotions with four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for the SVM, MLP, KNN and MMC classifiers, respectively. The best accuracy is for 'Depressing', with 85.46% using SVM. The 32 EEG channels are considered as independent modes and features from each channel are given equal importance. Some of the channel data may be correlated, but they may also contain supplementary information. In comparison with the results reported by others, the high accuracy of 85% with 13 emotions and 32 subjects obtained by our proposed method clearly proves the potential of our multimodal fusion approach.
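A minimal sketch of the multiresolution pipeline described above, assuming PyWavelets and scikit-learn: each channel is decomposed with a discrete wavelet transform, sub-band energies form the feature vector, and an SVM classifies the emotion. The wavelet, decomposition level, synthetic data and labels are illustrative, not the study's choices.

```python
# Minimal sketch of the multiresolution approach described above: each
# physiological channel is decomposed with a discrete wavelet transform,
# sub-band energies are used as features, and an SVM classifies the emotion.
# Assumes PyWavelets and scikit-learn; wavelet, level and data are illustrative.
import numpy as np
import pywt
from sklearn.svm import SVC


def wavelet_energy_features(trial, wavelet="db4", level=4):
    feats = []
    for channel in trial:                      # trial: (n_channels, n_samples)
        coeffs = pywt.wavedec(channel, wavelet, level=level)
        feats.extend(np.sum(c ** 2) for c in coeffs)   # energy per sub-band
    return np.array(feats)


rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 32, 512))    # 40 trials, 32 EEG channels
labels = rng.integers(0, 4, size=40)           # illustrative emotion classes

X = np.array([wavelet_energy_features(t) for t in trials])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```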
Mathematical models of carbon-carbon composite deformation
NASA Astrophysics Data System (ADS)
Golovin, N. N.; Kuvyrkin, G. N.
2016-09-01
Mathematical models of carbon-carbon composites (CCC) intended to describe the processes of deformation of structures produced using CCC under high-temperature loading are considered. A phenomenological theory of CCC inelastic deformation is proposed, where such materials are considered as homogeneous ones with effective characteristics and where their high anisotropy of mechanical characteristics and their different resistance to extension and compression are taken into account. Micromechanical models are proposed for spatially reinforced CCC, where the difference between the mechanical characteristics of the components and the reinforcement scheme are taken into account. The model parameters are determined from the results of experiments on composite macrospecimens in the directions typical of the material. A version of an endochronic-type theory with several internal times "launched" for each composite component and related to some damage accumulation mechanisms is proposed for describing the inelastic deformation. Some practical examples are considered.
Menolascina, Filippo; Bellomo, Domenico; Maiwald, Thomas; Bevilacqua, Vitoantonio; Ciminelli, Caterina; Paradiso, Angelo; Tommasi, Stefania
2009-10-15
Mechanistic models are becoming more and more popular in Systems Biology; identification and control of models underlying biochemical pathways of interest in oncology is a primary goal in this field. Unfortunately the scarce availability of data still limits our understanding of the intrinsic characteristics of complex pathologies like cancer: acquiring information for a system understanding of complex reaction networks is time consuming and expensive. Stimulus response experiments (SRE) have been used to gain a deeper insight into the details of biochemical mechanisms underlying cell life and functioning. Optimisation of the input time-profile, however, still remains a major area of research due to the complexity of the problem and its relevance for the task of information retrieval in systems biology-related experiments. We have addressed the problem of quantifying the information associated to an experiment using the Fisher Information Matrix and we have proposed an optimal experimental design strategy based on evolutionary algorithm to cope with the problem of information gathering in Systems Biology. On the basis of the theoretical results obtained in the field of control systems theory, we have studied the dynamical properties of the signals to be used in cell stimulation. The results of this study have been used to develop a microfluidic device for the automation of the process of cell stimulation for system identification. We have applied the proposed approach to the Epidermal Growth Factor Receptor pathway and we observed that it minimises the amount of parametric uncertainty associated to the identified model. A statistical framework based on Monte-Carlo estimations of the uncertainty ellipsoid confirmed the superiority of optimally designed experiments over canonical inputs. The proposed approach can be easily extended to multiobjective formulations that can also take advantage of identifiability analysis. Moreover, the availability of fully automated microfluidic platforms explicitly developed for the task of biochemical model identification will hopefully reduce the effects of the 'data rich--data poor' paradox in Systems Biology.
Walking Distance Estimation Using Walking Canes with Inertial Sensors
Suh, Young Soo
2018-01-01
A walking distance estimation algorithm for cane users is proposed using an inertial sensor unit attached to various positions on the cane. A standard inertial navigation algorithm using an indirect Kalman filter was applied to update the velocity and position of the cane during movement. For quadripod canes, a standard zero-velocity measurement-updating method is proposed. For standard canes, a velocity-updating method based on an inverted pendulum model is proposed. The proposed algorithms were verified by three walking experiments with two different types of canes and different positions of the sensor module. PMID:29342971
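The snippet below is a heavily simplified, one-dimensional illustration of zero-velocity updating for a cane-mounted inertial sensor: forward acceleration is integrated to velocity and distance, and velocity is reset whenever the cane is detected as stationary. It does not reproduce the paper's indirect Kalman filter or full 3D inertial navigation; signals and thresholds are synthetic assumptions.

```python
# Heavily simplified 1D illustration of zero-velocity updating (ZUPT) for a
# cane-mounted inertial sensor: forward acceleration is integrated to velocity
# and distance, and the velocity is reset whenever the cane is detected as
# stationary. The paper's indirect Kalman filter and full 3D navigation are
# not reproduced; thresholds and signals are illustrative.
import numpy as np

dt = 0.01                      # 100 Hz sampling
t = np.arange(0, 10, dt)
accel = 0.6 * np.sin(2 * np.pi * 1.0 * t)      # synthetic forward acceleration
gyro = np.abs(np.sin(2 * np.pi * 1.0 * t))     # synthetic angular-rate magnitude

velocity, distance = 0.0, 0.0
for a, w in zip(accel, gyro):
    velocity += a * dt
    stationary = abs(a) < 0.05 and w < 0.05    # crude stance detection
    if stationary:
        velocity = 0.0                          # zero-velocity update
    distance += abs(velocity) * dt

print(f"estimated walking distance: {distance:.2f} m")
```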
What can one learn from experiments about the elusive transition state?
Chang, Iksoo; Cieplak, Marek; Banavar, Jayanth R.; Maritan, Amos
2004-01-01
We present the results of an exact analysis of a model energy landscape of a protein to clarify the idea of the transition state and the physical meaning of the φ values determined in protein engineering experiments. We benchmark our findings to various theoretical approaches proposed in the literature for the identification and characterization of the transition state. PMID:15295118
An Information Perception-Based Emotion Contagion Model for Fire Evacuation
NASA Astrophysics Data System (ADS)
Liu, Ting Ting; Liu, Zhen; Ma, Minhua; Xuan, Rongrong; Chen, Tian; Lu, Tao; Yu, Lipeng
2017-03-01
In fires, people easily lose their presence of mind. Panic can lead to irrational behavior and irreparable tragedy. Making contingency plans for crowd evacuation in fires therefore has great practical significance. However, existing studies of crowd simulation have paid much attention to crowd density but little to the emotional contagion that may cause panic. Based on settings for information space and information sharing, this paper proposes an emotional contagion model for crowds in panic situations. With the proposed model, a behavior mechanism is constructed for agents in the crowd and a prototype system is developed for crowd simulation. Experiments are carried out to verify the proposed model. The results show that the spread of panic is related not only to the crowd density and the individual comfort level, but also to people's prior knowledge of fire evacuation. The model provides a new way to support safety education and evacuation management, making it possible to avoid and reduce unsafe factors in the crowd at the lowest cost.
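As a toy illustration of emotional contagion among evacuating agents, the snippet below applies a distance-weighted panic update damped by each agent's prior evacuation knowledge. This is a generic sketch, not the paper's information-space formulation; all parameters are invented.

```python
# Toy update step illustrating distance-weighted panic contagion among
# evacuating agents, damped by each agent's prior evacuation knowledge.
# This is only a generic illustration, not the paper's information-space model.
import numpy as np

rng = np.random.default_rng(3)
pos = rng.uniform(0, 20, size=(50, 2))      # agent positions in a room (m)
panic = rng.uniform(0, 0.3, size=50)        # initial panic level in [0, 1]
knowledge = rng.uniform(0, 1, size=50)      # prior fire-evacuation knowledge

for _ in range(100):
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    w = np.exp(-d / 2.0)                    # nearby agents influence more
    np.fill_diagonal(w, 0.0)
    perceived = w @ panic / w.sum(axis=1)   # panic perceived from neighbours
    # contagion is weaker for knowledgeable agents
    panic += 0.1 * (1.0 - knowledge) * (perceived - panic)
    panic = np.clip(panic, 0.0, 1.0)
```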
Cross-cultural re-entry for missionaries: a new application for the Dual Process Model.
Selby, Susan; Clark, Sheila; Braunack-Mayer, Annette; Jones, Alison; Moulding, Nicole; Beilby, Justin
Nearly half a million foreign aid workers currently work worldwide, including over 140,000 missionaries. During re-entry these workers may experience significant psychological distress. This article positions previous research about psychological distress during re-entry, emphasizing loss and grief. At present there is no identifiable theoretical framework to provide a basis for assessment, management, and prevention of re-entry distress in the clinical setting. The development of theoretical concepts and frameworks surrounding loss and grief including the Dual Process Model (DPM) are discussed. All the parameters of the DPM have been shown to be appropriate for the proposed re-entry model, the Dual Process Model applied to Re-entry (DPMR). It is proposed that the DPMR is an appropriate framework to address the processes and strategies of managing re-entry loss and grief. Possible future clinical applications and limitations of the proposed model are discussed. The DPMR is offered for further validation and use in clinical practice.
Improved Neural Networks with Random Weights for Short-Term Load Forecasting
Lang, Kun; Zhang, Mingyuan; Yuan, Yongbo
2015-01-01
An effective forecasting model for short-term load plays a significant role in promoting the management efficiency of an electric power system. This paper proposes a new forecasting model based on the improved neural networks with random weights (INNRW). The key is to introduce a weighting technique to the inputs of the model and use a novel neural network to forecast the daily maximum load. Eight factors are selected as the inputs. A mutual information weighting algorithm is then used to allocate different weights to the inputs. The neural networks with random weights and kernels (KNNRW) is applied to approximate the nonlinear function between the selected inputs and the daily maximum load due to the fast learning speed and good generalization performance. In the application of the daily load in Dalian, the result of the proposed INNRW is compared with several previously developed forecasting models. The simulation experiment shows that the proposed model performs the best overall in short-term load forecasting. PMID:26629825
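The sketch below illustrates, under stated assumptions, a neural network with random weights in the ELM style: hidden-layer weights are random and only the output weights are solved by ridge regression, with inputs pre-weighted by a mutual-information score as a stand-in for the paper's weighting step. It is not the exact INNRW/KNNRW formulation, and the data are synthetic.

```python
# Sketch of a neural network with random weights (ELM-style): hidden-layer
# weights are drawn at random and only the output weights are solved for by
# ridge regression. Inputs are pre-weighted by a mutual-information score as
# a stand-in for the paper's weighting step; this is not the exact
# INNRW/KNNRW formulation.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
X = rng.standard_normal((365, 8))            # 8 daily input factors
y = X @ rng.standard_normal(8) + 0.1 * rng.standard_normal(365)  # daily max load

mi = mutual_info_regression(X, y)
Xw = X * (mi / mi.sum())                     # weight inputs by mutual information

n_hidden, ridge = 100, 1e-3
W = rng.standard_normal((Xw.shape[1], n_hidden))   # random input-to-hidden weights
b = rng.standard_normal(n_hidden)
H = np.tanh(Xw @ W + b)                      # random hidden-layer features
beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)

y_hat = H @ beta
print("training RMSE:", float(np.sqrt(np.mean((y - y_hat) ** 2))))
```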
Passive acoustic leak detection for sodium cooled fast reactors using hidden Markov models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Riber Marklund, A.; Kishore, S.; Prakash, V.
2015-07-01
Acoustic leak detection for steam generators of sodium fast reactors has been an active research topic since the early 1970s, and several methods have been tested over the years. Inspired by its success in the field of automatic speech recognition, we here apply hidden Markov models (HMM) in combination with Gaussian mixture models (GMM) to the problem. To achieve this, we propose a new feature calculation scheme based on the temporal evolution of the power spectral density (PSD) of the signal. The proposed method is tested using acoustic signals recorded during steam/water injection experiments done at the Indira Gandhi Centre for Atomic Research (IGCAR). We perform parametric studies on the HMM+GMM model size and demonstrate that the proposed method a) performs well without a priori knowledge of injection noise, b) can incorporate several noise models and c) has an output distribution that simplifies false alarm rate control.
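A hedged sketch of the feature/classification chain described above: the power spectral density is tracked over sliding windows with Welch's method and a Gaussian-mixture HMM is fitted to the feature sequence. It assumes SciPy and the hmmlearn package; window sizes, band splits and model sizes are illustrative, not the study's settings.

```python
# Sketch of the feature/classification chain described above: the power
# spectral density (PSD) is tracked over sliding windows with Welch's method
# and a Gaussian-mixture HMM is fitted to the resulting feature sequence.
# Assumes SciPy and the hmmlearn package; window sizes, band edges and model
# sizes are illustrative, not those of the study.
import numpy as np
from scipy.signal import welch
from hmmlearn.hmm import GMMHMM

fs = 50_000                                   # sampling rate (Hz), illustrative
signal = np.random.default_rng(0).standard_normal(fs * 10)

win = fs // 10                                # 100 ms analysis windows
features = []
for start in range(0, len(signal) - win, win // 2):
    _, pxx = welch(signal[start:start + win], fs=fs, nperseg=1024)
    bands = np.array_split(pxx, 8)            # coarse PSD shape per window
    features.append([np.log(np.mean(b)) for b in bands])
features = np.asarray(features)

model = GMMHMM(n_components=3, n_mix=2, covariance_type="diag", n_iter=50)
model.fit(features)                           # train on background/injection data
print(model.score(features))                  # log-likelihood used for detection
```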
A biodynamic feedthrough model based on neuromuscular principles.
Venrooij, Joost; Abbink, David A; Mulder, Mark; van Paassen, Marinus M; Mulder, Max; van der Helm, Frans C T; Bulthoff, Heinrich H
2014-07-01
A biodynamic feedthrough (BDFT) model is proposed that describes how vehicle accelerations feed through the human body, causing involuntary limb motions and so involuntary control inputs. BDFT dynamics strongly depend on limb dynamics, which can vary between persons (between-subject variability), but also within one person over time, e.g., due to the control task performed (within-subject variability). The proposed BDFT model is based on physical neuromuscular principles and is derived from an established admittance model (describing limb dynamics), which was extended to include control device dynamics and account for acceleration effects. The resulting BDFT model serves primarily the purpose of increasing the understanding of the relationship between neuromuscular admittance and biodynamic feedthrough. An added advantage of the proposed model is that its parameters can be estimated using a two-stage approach, making the parameter estimation more robust, as the procedure is largely based on the well documented procedure required for the admittance model. To estimate the parameter values of the BDFT model, data are used from an experiment in which both neuromuscular admittance and biodynamic feedthrough were measured. The quality of the BDFT model is evaluated in the frequency and time domains. Results provide strong evidence that the BDFT model and the proposed method of parameter estimation put forward in this paper allow for accurate BDFT modeling across different subjects (accounting for between-subject variability) and across control tasks (accounting for within-subject variability).
Sex work and the claim for grassroots legislation.
Fassi, Marisa N
2015-01-01
The aim of this paper is to contribute to understanding of legal models that aim to control sex work, and the policy implications of these, by discussing the experience of developing a grassroots legislation bill proposal by organised sex workers in Córdoba, Argentina. The term 'grassroots legislation' here refers to a legal response that derives from the active involvement of local social movements and thus incorporates the experiential knowledge and claims of these particular social groupings in the proposal. The experience described in this paper excludes approaches that render sex workers as passive victims or as deviant perpetrators; instead, it conceives of sex workers in terms of their political subjectivity and of political subjectivity in its capacity to speak, to decide, to act and to propose. This means challenging current patterns of knowledge/power that give superiority to 'expert knowledge' above and beyond the claims, experiences, knowledge and needs of sex workers themselves as meaningful sources for law making.
Deformation modeling and constitutive modeling for anisotropic superalloys
NASA Technical Reports Server (NTRS)
Milligan, Walter W.; Antolovich, Stephen D.
1989-01-01
A study of deformation mechanisms in the single crystal superalloy PWA 1480 was conducted. Monotonic and cyclic tests were conducted from 20 to 1093 C. Both (001) and near-(123) crystals were tested, at strain rates of 0.5 and 50 percent/minute. The deformation behavior could be grouped into two temperature regimes: low temperatures, below 760 C; and high temperatures, above 820 to 950 C depending on the strain rate. At low temperatures, the mechanical behavior was very anisotropic. An orientation dependent CRSS, a tension-compression asymmetry, and anisotropic strain hardening were all observed. The material was deformed by planar octahedral slip. The anisotropic properties were correlated with the ease of cube cross-slip, as well as the number of active slip systems. At high temperatures, the material was isotropic, and deformed by homogeneous gamma by-pass. It was found that the temperature dependence of the formation of superlattice-intrinsic stacking faults was responsible for the local minimum in the CRSS of this alloy at 400 C. It was proposed that the cube cross-slip process must be reversible. This was used to explain the reversible tension-compression asymmetry, and was used to study models of cross-slip. As a result, the cross-slip model proposed by Paidar, Pope and Vitek was found to be consistent with the proposed slip reversibility. The results were related to anisotropic viscoplastic constitutive models. The model proposed by Walter and Jordan was found to be capable of modeling all aspects of the material anisotropy. Temperature and strain rate boundaries for the model were proposed, and guidelines for numerical experiments were proposed.
Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues
NASA Astrophysics Data System (ADS)
Adams, W. H.; Iyengar, Giridharan; Lin, Ching-Yung; Naphade, Milind Ramesh; Neti, Chalapathy; Nock, Harriet J.; Smith, John R.
2003-12-01
We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.
Waltemath, Dagmar; Adams, Richard; Bergmann, Frank T; Hucka, Michael; Kolpakov, Fedor; Miller, Andrew K; Moraru, Ion I; Nickerson, David; Sahle, Sven; Snoep, Jacky L; Le Novère, Nicolas
2011-12-15
The increasing use of computational simulation experiments to inform modern biological research creates new challenges to annotate, archive, share and reproduce such experiments. The recently published Minimum Information About a Simulation Experiment (MIASE) proposes a minimal set of information that should be provided to allow the reproduction of simulation experiments among users and software tools. In this article, we present the Simulation Experiment Description Markup Language (SED-ML). SED-ML encodes in a computer-readable exchange format the information required by MIASE to enable reproduction of simulation experiments. It has been developed as a community project and it is defined in a detailed technical specification and additionally provides an XML schema. The version of SED-ML described in this publication is Level 1 Version 1. It covers the description of the most frequent type of simulation experiments in the area, namely time course simulations. SED-ML documents specify which models to use in an experiment, modifications to apply on the models before using them, which simulation procedures to run on each model, what analysis results to output, and how the results should be presented. These descriptions are independent of the underlying model implementation. SED-ML is a software-independent format for encoding the description of simulation experiments; it is not specific to particular simulation tools. Here, we demonstrate that with the growing software support for SED-ML we can effectively exchange executable simulation descriptions. With SED-ML, software can exchange simulation experiment descriptions, enabling the validation and reuse of simulation experiments in different tools. Authors of papers reporting simulation experiments can make their simulation protocols available for other scientists to reproduce the results. Because SED-ML is agnostic about exact modeling language(s) used, experiments covering models from different fields of research can be accurately described and combined.
Hollow-Fiber Cartridges: Model Systems for Virus Removal from Blood
NASA Astrophysics Data System (ADS)
Jacobitz, Frank; Menon, Jeevan
2005-11-01
Aethlon Medical is developing a hollow-fiber hemodialysis device designed to remove viruses and toxins from blood. Possible target viruses include HIV and pox-viruses. The filter could reduce virus and viral toxin concentrations in the patient's blood, delaying illness so the patient's immune system can fight off the virus. In order to optimize the design of such a filter, the fluid mechanics of the device is both modeled analytically and investigated experimentally. The flow configuration of the proposed device is that of Starling flow. Polysulfone hollow-fiber dialysis cartridges were used. The cartridges are charged with water as a model fluid for blood, and fluorescent latex beads are used in the experiments as a model for viruses. In the experiments, properties of the flow through the cartridge are determined through pressure and volume flow rate measurements of water. The removal of latex beads, which are captured in the porous walls of the fibers, was measured spectrophotometrically. Coefficients derived from these experiments are used in the analytical model of the flow, and removal predictions from the model are compared with those obtained from the experiments.
NASA Astrophysics Data System (ADS)
Matsumoto, Jun; Okaya, Shunichi; Igoh, Hiroshi; Kawaguchi, Junichiro
2017-04-01
A new propellant feed system referred to as a self-pressurized feed system is proposed for liquid rocket engines. The self-pressurized feed system is a type of gas-pressure feed system; however, the pressurization source is retained in the liquid state to reduce tank volume. The liquid pressurization source is heated and gasified using heat exchange from the hot propellant using a regenerative cooling strategy. The liquid pressurization source is raised to critical pressure by a pressure booster referred to as a charger in order to avoid boiling and improve the heat exchange efficiency. The charger is driven by a part of the generated pressurization gas using a closed-loop self-pressurized feed system. The purpose of this study is to propose a propellant feed system that is lighter and simpler than traditional gas pressure feed systems. The proposed system can be applied to all liquid rocket engines that use the regenerative cooling strategy. The concept and mathematical models of the self-pressurized feed system are presented first. Experiment results for verification are then shown and compared with the mathematical models.
Numerical Optimization Using Computer Experiments
NASA Technical Reports Server (NTRS)
Trosset, Michael W.; Torczon, Virginia
1997-01-01
Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
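As an illustration of the surrogate-guided search described above, the following sketch fits a Gaussian-process (kriging) surrogate to the evaluated points and minimizes it on a coarse grid before each new expensive evaluation. The objective function, grid and loop counts are placeholders, not the paper's test problem or implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    # placeholder for an expensive objective (assumed; not the paper's test problem)
    return float(np.sum((x - 0.3) ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5, 2))           # initial design points
y = np.array([objective(x) for x in X])

grid = np.array([[a, b] for a in np.linspace(0, 1, 21)
                        for b in np.linspace(0, 1, 21)])

for _ in range(10):                               # sequential surrogate updates
    gp = GaussianProcessRegressor(alpha=1e-6).fit(X, y)   # kriging surrogate of f
    x_next = grid[np.argmin(gp.predict(grid))]    # grid search on the surrogate
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next))           # evaluate the true objective

print("best point:", X[np.argmin(y)], "value:", y.min())
```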
Comparison of retention models for polymers 1. Poly(ethylene glycol)s.
Bashir, Mubasher A; Radke, Wolfgang
2006-10-27
The suitability of three different retention models to predict the retention times of poly(ethylene glycol)s (PEGs) in gradient and isocratic chromatography was investigated. The models investigated were the linear (LSSM) and the quadratic solvent strength model (QSSM). In addition, a model describing the retention behaviour of polymers was extended to account for gradient elution (PM). It was found that all models are suited to properly predict gradient retention volumes provided that the extraction of the analyte-specific parameters is performed from gradient experiments as well. The LSSM and QSSM in principle cannot describe retention behaviour under critical or SEC conditions. Since the PM is designed to cover all three modes of polymer chromatography, it is superior to the other models. However, the determination of the analyte-specific parameters, which are needed to calibrate the retention behaviour, depends strongly on a suitable selection of the initial experiments. A useful strategy for a purposeful selection of these calibration experiments is proposed.
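For orientation, the solvent strength models referred to above are commonly written in the following textbook forms (the authors' exact parameterization of the LSSM and QSSM may differ):

```latex
% Textbook forms of the solvent strength models (the exact parameterization
% used by the authors may differ):
\[
\ln k = \ln k_w - S\,\varphi ,
\qquad
\ln k = \ln k_w + c_1\,\varphi + c_2\,\varphi^{2},
\]
% where k is the retention factor, k_w its value in the pure weak eluent,
% \varphi the volume fraction of the strong eluent, and S, c_1, c_2 are
% analyte-specific parameters (linear and quadratic models, respectively).
```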
Pattern recognition tool based on complex network-based approach
NASA Astrophysics Data System (ADS)
Casanova, Dalcimar; Backes, André Ricardo; Martinez Bruno, Odemir
2013-02-01
This work proposes a generalization of the method introduced by the authors in 'A complex network-based approach for boundary shape analysis'. Instead of modelling a contour as a graph and using complex network rules to characterize it, here we generalize the technique: the work proposes a mathematical tool for characterizing signals, curves and sets of points. To evaluate the pattern description power of the proposal, an experiment on plant identification based on leaf vein images is conducted. Leaf venation is a taxonomic characteristic used for plant identification, and these structures are complex and difficult to represent as a signal or curve, and therefore difficult to analyze with a classical pattern recognition approach. Here, we represent the veins as a set of points and model them as graphs. As features, we use the degree and joint degree measurements in a dynamic evolution. The results demonstrate that the technique has good discrimination power and can be used for plant identification, as well as for other complex pattern recognition tasks.
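A minimal sketch of the point-set-to-network idea, assuming a simple distance-threshold graph whose threshold is evolved to produce degree-based features; it is illustrative only and not the authors' implementation:

```python
import numpy as np
import networkx as nx

def degree_signature(points, thresholds):
    """Characterize a 2-D point set (e.g. leaf-vein pixels) by the mean node
    degree of a distance-threshold graph as the threshold evolves.
    Simplified sketch of the complex-network idea, not the authors' code."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    features = []
    for t in thresholds:                        # dynamic evolution of the network
        g = nx.Graph()
        g.add_nodes_from(range(len(pts)))
        i, j = np.where((d < t) & (d > 0))
        g.add_edges_from(zip(i.tolist(), j.tolist()))
        degs = [deg for _, deg in g.degree()]
        features.append(np.mean(degs))
        # joint-degree statistics could be appended here as additional features
    return np.array(features)

pts = np.random.default_rng(1).uniform(0, 1, size=(50, 2))
print(degree_signature(pts, thresholds=np.linspace(0.05, 0.5, 10)))
```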
Cone beam x-ray luminescence computed tomography: a feasibility study.
Chen, Dongmei; Zhu, Shouping; Yi, Huangjian; Zhang, Xianghan; Chen, Duofang; Liang, Jimin; Tian, Jie
2013-03-01
The appearance of x-ray luminescence computed tomography (XLCT) opens new possibilities to perform molecular imaging by x ray. In the previous XLCT system, the sample was irradiated by a sequence of narrow x-ray beams and the x-ray luminescence was measured by a highly sensitive charge coupled device (CCD) camera. This resulted in a relatively long sampling time and relatively low utilization of the x-ray beam. In this paper, a novel cone beam x-ray luminescence computed tomography strategy is proposed, which can fully utilize the x-ray dose and shorten the scanning time. The imaging model and reconstruction method are described. The validity of the imaging strategy has been studied in this paper. In the cone beam XLCT system, the cone beam x ray was adopted to illuminate the sample and a highly sensitive CCD camera was utilized to acquire luminescent photons emitted from the sample. Photons scattering in biological tissues makes it an ill-posed problem to reconstruct the 3D distribution of the x-ray luminescent sample in the cone beam XLCT. In order to overcome this issue, the authors used the diffusion approximation model to describe the photon propagation in tissues, and employed the sparse regularization method for reconstruction. An incomplete variables truncated conjugate gradient method and permissible region strategy were used for reconstruction. Meanwhile, traditional x-ray CT imaging could also be performed in this system. The x-ray attenuation effect has been considered in their imaging model, which is helpful in improving the reconstruction accuracy. First, simulation experiments with cylinder phantoms were carried out to illustrate the validity of the proposed compensated method. The experimental results showed that the location error of the compensated algorithm was smaller than that of the uncompensated method. The permissible region strategy was applied and reduced the reconstruction error to less than 2 mm. The robustness and stability were then evaluated from different view numbers, different regularization parameters, different measurement noise levels, and optical parameters mismatch. The reconstruction results showed that the settings had a small effect on the reconstruction. The nonhomogeneous phantom simulation was also carried out to simulate a more complex experimental situation and evaluated their proposed method. Second, the physical cylinder phantom experiments further showed similar results in their prototype XLCT system. With the discussion of the above experiments, it was shown that the proposed method is feasible to the general case and actual experiments. Utilizing numerical simulation and physical experiments, the authors demonstrated the validity of the new cone beam XLCT method. Furthermore, compared with the previous narrow beam XLCT, the cone beam XLCT could more fully utilize the x-ray dose and the scanning time would be shortened greatly. The study of both simulation experiments and physical phantom experiments indicated that the proposed method was feasible to the general case and actual experiments.
Wang, Xun; Sun, Beibei; Liu, Boyang; Fu, Yaping; Zheng, Pan
2017-01-01
Experimental design focuses on describing or explaining the multifactorial interactions that are hypothesized to reflect the variation. The design introduces conditions that may directly affect the variation, where particular conditions are purposely selected for observation. Combinatorial design theory deals with the existence, construction and properties of systems of finite sets whose arrangements satisfy generalized concepts of balance and/or symmetry. In this work, borrowing the concept of "balance" from combinatorial design theory, a novel method for designing multifactorial biochemical experiments is proposed, in which balanced templates from combinatorial design are used to select the conditions for observation. Balanced experimental data that cover all the influencing factors of the experiments can be obtained for further processing, for example as a training set for machine learning models. Finally, software based on the proposed method is developed for designing experiments in which each influencing factor is covered a specified number of times.
Bayesian cross-entropy methodology for optimal design of validation experiments
NASA Astrophysics Data System (ADS)
Jiang, X.; Mahadevan, S.
2006-07-01
An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
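A schematic sketch of the design loop described above, assuming Gaussian summaries of the model prediction and experimental output so the cross-entropy has a closed form; the response functions, bounds and cooling schedule are placeholders rather than the paper's structural models:

```python
import numpy as np

rng = np.random.default_rng(0)

def model_prediction(x):
    # placeholder surrogate of the computational model output (assumed)
    return 2.0 * x + 1.0, 0.5           # mean, std of predicted response

def expected_observation(x):
    # placeholder distribution of the experimental output (assumed)
    return 2.2 * x + 0.8, 0.7           # mean, std

def cross_entropy(x):
    """Gaussian cross-entropy H(p, q) between prediction p and observation q,
    used here as the design utility (sketch of the idea, not the paper's code)."""
    mp, sp = model_prediction(x)
    mq, sq = expected_observation(x)
    return 0.5 * np.log(2 * np.pi * sq**2) + (sp**2 + (mp - mq)**2) / (2 * sq**2)

# simulated annealing over a scalar design variable x in [0, 10]
x, T = 5.0, 1.0
for _ in range(2000):
    cand = np.clip(x + rng.normal(scale=0.5), 0.0, 10.0)
    delta = cross_entropy(cand) - cross_entropy(x)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        x = cand
    T *= 0.995                           # cooling schedule
print("selected design input:", round(float(x), 3))
```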
Tilmes, S.; Mills, Mike; Niemeier, Ulrike; ...
2015-01-15
A new Geoengineering Model Intercomparison Project (GeoMIP) experiment "G4 specified stratospheric aerosols" (short name: G4SSA) is proposed to investigate the impact of stratospheric aerosol geoengineering on atmosphere, chemistry, dynamics, climate, and the environment. In contrast to the earlier G4 GeoMIP experiment, which requires an emission of sulfur dioxide (SO₂) into the model, a prescribed aerosol forcing file is provided to the community, to be consistently applied to future model experiments between 2020 and 2100. This stratospheric aerosol distribution, with a total burden of about 2 Tg S, has been derived using the ECHAM5-HAM microphysical model, based on a continuous annual tropical emission of 8 Tg SO₂ yr⁻¹. A ramp-up of geoengineering in 2020 and a ramp-down in 2070 over a period of 2 years are included in the distribution, while a background aerosol burden should be used for the last 3 decades of the experiment. The performance of this experiment using climate and chemistry models in a multi-model comparison framework will allow us to better understand the impact of geoengineering and its abrupt termination after 50 years in a changing environment. The zonal and monthly mean stratospheric aerosol input data set is available at https://www2.acd.ucar.edu/gcm/geomip-g4-specified-stratospheric-aerosol-data-set.
Effect of motor dynamics on nonlinear feedback robot arm control
NASA Technical Reports Server (NTRS)
Tarn, Tzyh-Jong; Li, Zuofeng; Bejczy, Antal K.; Yun, Xiaoping
1991-01-01
A nonlinear feedback robot controller that incorporates the robot manipulator dynamics and the robot joint motor dynamics is proposed. The manipulator dynamics and the motor dynamics are coupled to obtain a third-order-dynamic model, and differential geometric control theory is applied to produce a linearized and decoupled robot controller. The derived robot controller operates in the robot task space, thus eliminating the need for decomposition of motion commands into robot joint space commands. Computer simulations are performed to verify the feasibility of the proposed robot controller. The controller is further experimentally evaluated on the PUMA 560 robot arm. The experiments show that the proposed controller produces good trajectory tracking performances and is robust in the presence of model inaccuracies. Compared with a nonlinear feedback robot controller based on the manipulator dynamics only, the proposed robot controller yields conspicuously improved performance.
Graph-based sensor fusion for classification of transient acoustic signals.
Srinivas, Umamahesh; Nasrabadi, Nasser M; Monga, Vishal
2015-03-01
Advances in acoustic sensing have enabled the simultaneous acquisition of multiple measurements of the same physical event via co-located acoustic sensors. We exploit the inherent correlation among such multiple measurements for acoustic signal classification, to identify the launch/impact of munition (i.e., rockets, mortars). Specifically, we propose a probabilistic graphical model framework that can explicitly learn the class conditional correlations between the cepstral features extracted from these different measurements. Additionally, we employ symbolic dynamic filtering-based features, which offer improvements over the traditional cepstral features in terms of robustness to signal distortions. Experiments on real acoustic data sets show that our proposed algorithm outperforms conventional classifiers as well as the recently proposed joint sparsity models for multisensor acoustic classification. Additionally our proposed algorithm is less sensitive to insufficiency in training samples compared to competing approaches.
Tourism forecasting using modified empirical mode decomposition and group method of data handling
NASA Astrophysics Data System (ADS)
Yahya, N. A.; Samsudin, R.; Shabri, A.
2017-09-01
In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial and error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasts for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model used as benchmarks. Empirical results show that the proposed model produces better forecasts than the benchmark models.
Hysteresis Modeling of Magnetic Shape Memory Alloy Actuator Based on Krasnosel'skii-Pokrovskii Model
Zhou, Miaolei; Wang, Shoubin; Gao, Wei
2013-01-01
As a new type of intelligent material, magnetically shape memory alloy (MSMA) performs well in actuator manufacturing applications. Compared with traditional actuators, the MSMA actuator has the advantages of fast response and large deformation; however, the hysteresis nonlinearity of the MSMA actuator limits further improvement of its control precision. In this paper, an improved Krasnosel'skii-Pokrovskii (KP) model is used to establish the hysteresis model of the MSMA actuator. To identify the weighting parameters of the KP operators, an improved gradient correction algorithm and a variable step-size recursive least squares estimation algorithm are proposed. To demonstrate the validity of the proposed modeling approach, simulation experiments are performed with both the improved gradient correction algorithm and the variable step-size recursive least squares estimation algorithm. The simulation results of both identification algorithms demonstrate that the proposed approach can establish an effective and accurate hysteresis model for the MSMA actuator, and it provides a foundation for improving the control precision of the MSMA actuator. PMID:23737730
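A minimal numpy sketch of a recursive least squares identification of weighting parameters, with a constant forgetting factor standing in for the paper's variable step-size rule; the regressor matrix is a placeholder for the KP operator outputs:

```python
import numpy as np

def rls_identify(Phi, y, lam=0.98, delta=1e3):
    """Generic recursive least squares with forgetting factor lam.
    Phi[k] stands in for the KP operator outputs at step k (placeholder);
    y[k] is the measured actuator displacement. Sketch only."""
    n = Phi.shape[1]
    w = np.zeros(n)                     # weighting parameters to be identified
    P = delta * np.eye(n)
    for phi, d in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        w = w + k * (d - phi @ w)              # weight update
        P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return w

rng = np.random.default_rng(0)
Phi = rng.normal(size=(200, 4))         # stand-in for KP operator outputs
w_true = np.array([0.5, -1.0, 2.0, 0.3])
y = Phi @ w_true + 0.01 * rng.normal(size=200)
print(rls_identify(Phi, y))             # should approach w_true
```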
No-Reference Image Quality Assessment by Wide-Perceptual-Domain Scorer Ensemble Method.
Liu, Tsung-Jung; Liu, Kuan-Hsien
2018-03-01
A no-reference (NR) learning-based approach to assess image quality is presented in this paper. The devised features are extracted from wide perceptual domains, including brightness, contrast, color, distortion, and texture. These features are used to train a model (scorer) which can predict scores. The scorer selection algorithms are utilized to help simplify the proposed system. In the final stage, the ensemble method is used to combine the prediction results from selected scorers. Two multiple-scale versions of the proposed approach are also presented along with the single-scale one. They turn out to have better performances than the original single-scale method. Because of having features from five different domains at multiple image scales and using the outputs (scores) from selected score prediction models as features for multi-scale or cross-scale fusion (i.e., ensemble), the proposed NR image quality assessment models are robust with respect to more than 24 image distortion types. They also can be used on the evaluation of images with authentic distortions. The extensive experiments on three well-known and representative databases confirm the performance robustness of our proposed model.
Potential Follow on Experiments for the Zero Boil Off Tank Experiment
NASA Technical Reports Server (NTRS)
Chato, David; Kassemi, Mohammad
2014-01-01
Cryogenic storage and transfer are enabling propulsion technologies on the direct path of nearly all future human and robotic missions, and they are identified by NASA as an area with the greatest potential for cost savings. This proposal aims at resolving fundamental scientific issues behind the engineering development of the storage tanks. We propose to use the ISS laboratory to generate and collect archival scientific data, to raise our current state-of-the-art understanding of the transport and phase change issues affecting storage tank cryogenic fluid management (CFM), and to develop and validate state-of-the-art CFD models to innovate, optimize, and advance future engineering designs.
Multi-scale Modeling of Plasticity in Tantalum.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lim, Hojun; Battaile, Corbett Chandler.; Carroll, Jay
In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature and strain rate dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature and strain rate dependent yield stresses of single and polycrystalline tantalum and compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature and strain rate dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformation at the grain scale and to engineering-scale applications. Furthermore, direct and quantitative comparisons between experimental measurements and simulations show that the proposed model accurately captures plasticity in the deformation of polycrystalline tantalum.
NASA Astrophysics Data System (ADS)
Maghsoudi, Mohammad Javad; Mohamed, Z.; Sudin, S.; Buyamin, S.; Jaafar, H. I.; Ahmad, S. M.
2017-08-01
This paper proposes an improved input shaping scheme for efficient sway control of a nonlinear three dimensional (3D) overhead crane with friction using the particle swarm optimization (PSO) algorithm. Using this approach, a higher payload sway reduction is obtained as the input shaper is designed based on a complete nonlinear model, as compared to the analytical input shaping scheme derived using a linear second order model. Zero Vibration (ZV) and Distributed Zero Vibration (DZV) shapers are designed using both analytical and PSO approaches for sway control of rail and trolley movements. To test the effectiveness of the proposed approach, MATLAB simulations and experiments on a laboratory 3D overhead crane are performed under various conditions involving different cable lengths and sway frequencies. Their performances are studied based on the maximum residual payload sway and Integrated Absolute Error (IAE) values, which indicate the total payload sway of the crane. In the experiments, the superiority of the proposed approach over the analytical scheme is shown by 30-50% reductions of the IAE values for rail and trolley movements, for both ZV and DZV shapers. In addition, simulation results show higher sway reductions with the proposed approach. It is revealed that the proposed PSO-based input shaping design provides higher payload sway reductions for a 3D overhead crane with friction as compared to the commonly designed input shapers.
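For reference, the analytical ZV shaper mentioned above can be computed from an estimated sway frequency and damping ratio as follows (standard textbook form; the PSO-based shapers in the paper are instead obtained by numerical optimization over the full nonlinear model):

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Zero Vibration (ZV) input shaper for a second-order sway mode with
    natural frequency wn [rad/s] and damping ratio zeta (textbook form)."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    wd = wn * np.sqrt(1.0 - zeta**2)           # damped sway frequency
    times = np.array([0.0, np.pi / wd])        # impulse times: 0 and half period
    amps = np.array([1.0, K]) / (1.0 + K)      # impulse amplitudes (sum to 1)
    return times, amps

print(zv_shaper(wn=2.0, zeta=0.05))
```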
Adaptive classifier for steel strip surface defects
NASA Astrophysics Data System (ADS)
Jiang, Mingming; Li, Guangyao; Xie, Li; Xiao, Mang; Yi, Li
2017-01-01
Surface defect detection systems have been receiving increased attention for their precision, speed and low cost. One of the biggest challenges is reacting to accuracy deterioration over time caused by aging equipment and changed processes. These variables make only a tiny change to the real-world model but have a big impact on the classification result. In this paper, we propose a new adaptive classifier with a Bayes kernel (BYEC) that updates the model with small samples so that it adapts to accuracy deterioration. First, abundant features are introduced to cover a large amount of information about the defects. Second, we construct a series of SVMs on random subspaces of the features. Then, a Bayes classifier is trained as an evolutionary kernel to fuse the results from the base SVMs. Finally, we propose a method to update the Bayes evolutionary kernel. The proposed algorithm is experimentally compared with different algorithms, and the results demonstrate that the proposed method can be updated with small samples and fits the changed model well. Robustness, low sample requirements and adaptability are demonstrated in the experiments.
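A simplified sketch of the random-subspace SVM ensemble with a Bayes fusion stage, using scikit-learn classifiers as stand-ins; the paper's specific update rule for the Bayes evolutionary kernel is not reproduced here:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

def fit_byec(X, y, n_svm=10, subspace=8, seed=0):
    """Train SVMs on random feature subspaces, then train a Bayes classifier
    on their outputs as a fusion stage. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    subs, svms = [], []
    for _ in range(n_svm):
        idx = rng.choice(X.shape[1], size=subspace, replace=False)
        svms.append(SVC(kernel="rbf").fit(X[:, idx], y))
        subs.append(idx)
    meta = np.column_stack([clf.predict(X[:, idx]) for clf, idx in zip(svms, subs)])
    bayes = GaussianNB().fit(meta, y)           # fusion (Bayes kernel)
    return subs, svms, bayes

def predict_byec(model, X):
    subs, svms, bayes = model
    meta = np.column_stack([clf.predict(X[:, idx]) for clf, idx in zip(svms, subs)])
    return bayes.predict(meta)

X = np.random.default_rng(1).normal(size=(200, 30))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = fit_byec(X, y)
print((predict_byec(model, X) == y).mean())     # training accuracy of the sketch
```

In this simplified form, updating with a small new sample could be mimicked by refitting only the fusion stage on the augmented meta-features, which is far cheaper than retraining the base SVMs.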
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of the traditional algorithms which are only applicable to an isotropic network, therefore has a strong adaptability to the complex deployment environment. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In data acquisition stage, the training information between nodes of the given network is collected. In modeling stage, the model among the hop-counts and the physical distances between nodes is constructed using regularized extreme learning. In location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to the different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
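A compact sketch of a regularized extreme learning machine of the kind described above, mapping hop-count vectors to physical distances; the network data, hidden-layer size and regularization constant are placeholders:

```python
import numpy as np

def relm_fit(hop_counts, dist_train, n_hidden=100, lam=1e-2, seed=0):
    """Regularized extreme learning machine: random hidden layer plus a
    ridge-regression output solve. Illustrative sketch with toy data."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(hop_counts.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(hop_counts @ W + b)                    # random hidden layer
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ dist_train)
    return W, b, beta

def relm_predict(model, hop_counts):
    W, b, beta = model
    return np.tanh(hop_counts @ W + b) @ beta

# toy data: hop counts to 5 anchor nodes -> distances to the same anchors
rng = np.random.default_rng(1)
hops = rng.integers(1, 10, size=(300, 5)).astype(float)
dists = 25.0 * hops + rng.normal(scale=5.0, size=hops.shape)
model = relm_fit(hops, dists)
print(np.abs(relm_predict(model, hops) - dists).mean())   # mean absolute error
```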
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ron Warren
2006-12-01
An assessment of the potential radiation dose that residents offsite of the Nevada Test Site (NTS) might receive from the proposed Divine Strake experiment was made to determine compliance with Subpart H of Part 61 of Title 40 of the Code of Federal Regulations, National Emission Standards for Emissions of Radionuclides Other than Radon from Department of Energy Facilities. The Divine Strake experiment, proposed by the Defense Threat Reduction Agency, consists of a detonation of 700 tons of heavy ammonium nitrate fuel oil-emulsion above the U16b Tunnel complex in Area 16 of the NTS. Both natural radionuclides suspended, and historic fallout radionuclides resuspended, from the detonation have the potential to be transported outside the NTS boundary by wind. They may, therefore, contribute radiological dose to the public. Subpart H states ''Emissions of radionuclides to the ambient air from Department of Energy facilities shall not exceed those amounts that would cause any member of the public to receive in any year an effective dose equivalent of 10 mrem/yr'' (Title 40 of the Code of Federal Regulations [CFR] 61.92), where mrem/yr is millirem per year. Furthermore, application for U.S. Environmental Protection Agency (EPA) approval of construction of a new source or modification of an existing source is required if the effective dose equivalent, caused by all emissions from the new construction or modification, is greater than or equal to 0.1 mrem/yr (40 CFR 61.96). In accordance with Section 61.93, a dose assessment was conducted with the computer model CAP88-PC, Version 3.0. In addition to this model, a dose assessment was also conducted by the National Atmospheric Release Advisory Center (NARAC) at the Lawrence Livermore National Laboratory. This modeling was conducted to obtain dose estimates from a model designed for acute releases, which addresses terrain effects and uses meteorology from multiple locations. The potential radiation dose to a hypothetical maximally exposed individual at the closest NTS boundary to the proposed Divine Strake experiment, as estimated by the CAP88-PC model, was 0.005 mrem with wind blowing directly towards that location. The boundary dose, as modeled by NARAC, ranged from about 0.006 to 0.007 mrem. Potential doses to actual offsite populated locations were generally two to five times lower still, or about 40 to 100 times lower than the 0.1 mrem level at which EPA approval is required pursuant to Section 61.96.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acikel, Volkan, E-mail: vacik@ee.bilkent.edu.tr; Atalar, Ergin; Uslubas, Ali
Purpose: The authors’ purpose is to model the case of an implantable pulse generator (IPG) and the electrode of an active implantable medical device using lumped circuit elements in order to analyze their effect on radio frequency induced tissue heating problem during a magnetic resonance imaging (MRI) examination. Methods: In this study, IPG case and electrode are modeled with a voltage source and impedance. Values of these parameters are found using the modified transmission line method (MoTLiM) and the method of moments (MoM) simulations. Once the parameter values of an electrode/IPG case model are determined, they can be connected to any lead, and tip heating can be analyzed. To validate these models, both MoM simulations and MR experiments were used. The induced currents on the leads with the IPG case or electrode connections were solved using the proposed models and the MoTLiM. These results were compared with the MoM simulations. In addition, an electrode was connected to a lead via an inductor. The dissipated power on the electrode was calculated using the MoTLiM by changing the inductance and the results were compared with the specific absorption rate results that were obtained using MoM. Then, MRI experiments were conducted to test the IPG case and the electrode models. To test the IPG case, a bare lead was connected to the case and placed inside a uniform phantom. During a MRI scan, the temperature rise at the lead was measured by changing the lead length. The power at the lead tip for the same scenario was also calculated using the IPG case model and MoTLiM. Then, an electrode was connected to a lead via an inductor and placed inside a uniform phantom. During a MRI scan, the temperature rise at the electrode was measured by changing the inductance and compared with the dissipated power on the electrode resistance. Results: The induced currents on leads with the IPG case or electrode connection were solved for using the combination of the MoTLiM and the proposed lumped circuit models. These results were compared with those from the MoM simulations. The mean square error was less than 9%. During the MRI experiments, when the IPG case was introduced, the resonance lengths were calculated to have an error less than 13%. Also the change in tip temperature rise at resonance lengths was predicted with less than 4% error. For the electrode experiments, the value of the matching impedance was predicted with an error less than 1%. Conclusions: Electrical models for the IPG case and electrode are suggested, and the method is proposed to determine the parameter values. The concept of matching of the electrode to the lead is clarified using the defined electrode impedance and the lead Thevenin impedance. The effect of the IPG case and electrode on tip heating can be predicted using the proposed theory. With these models, understanding the tissue heating due to the implants becomes easier. Also, these models are beneficial for implant safety testers and designers. Using these models, worst case conditions can be determined and the corresponding implant test experiments can be planned.
NASA Astrophysics Data System (ADS)
Mao, Chao; Chen, Shou
2017-01-01
Because the traditional entropy value method still has low evaluation accuracy when evaluating the performance of mining projects, a performance evaluation model for mineral projects founded on an improved entropy value method is proposed. First, a new weight assignment model is established, founded on the compatibility matrix analysis of the analytic hierarchy process (AHP) and the entropy value method: when the compatibility matrix analysis achieves the consistency requirement, if differences remain between the subjective and objective weights, both proportions are moderately adjusted; on this basis, the fuzzy evaluation matrix is used for the performance evaluation. Simulation experiments show that, compared with the traditional entropy value and compatibility matrix analysis methods, the proposed performance evaluation model of mining projects based on the improved entropy value method has higher assessment accuracy.
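The objective (entropy) part of the weighting scheme can be sketched as follows, using the textbook entropy weight formulas and a simple convex combination standing in for the paper's compatibility-matrix-based adjustment:

```python
import numpy as np

def entropy_weights(X):
    """Objective weights from the entropy value method for a decision matrix X
    (rows: alternatives, columns: indicators). Textbook formulation; the
    paper's compatibility-matrix adjustment of AHP weights is not shown."""
    P = X / X.sum(axis=0)                          # normalize each indicator
    n = X.shape[0]
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -(P * logs).sum(axis=0) / np.log(n)        # entropy of each indicator
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()

def combine(w_subjective, w_objective, alpha=0.5):
    # moderate adjustment of subjective (AHP) vs. objective (entropy) weights
    w = alpha * w_subjective + (1 - alpha) * w_objective
    return w / w.sum()

X = np.array([[0.8, 120, 3.2], [0.6, 150, 2.9], [0.9, 100, 3.5]], dtype=float)
w_obj = entropy_weights(X)
print(combine(np.array([0.5, 0.3, 0.2]), w_obj))
```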
A general method for the inclusion of radiation chemistry in astrochemical models.
Shingledecker, Christopher N; Herbst, Eric
2018-02-21
In this paper, we propose a general formalism that allows for the estimation of radiolysis decomposition pathways and rate coefficients suitable for use in astrochemical models, with a focus on solid phase chemistry. Such a theory can help increase the connection between laboratory astrophysics experiments and astrochemical models by providing a means for modelers to incorporate radiation chemistry into chemical networks. The general method proposed here is targeted particularly at the majority of species now included in chemical networks for which little radiochemical data exist; however, the method can also be used as a starting point for considering better studied species. We here apply our theory to the irradiation of H2O ice and compare the results with previous experimental data.
Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations
Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot
2014-01-01
Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between the model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios. PMID:24729986
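A sketch of an iterative conductivity update of the kind described above, assuming the common approximation that conduction velocity scales with the square root of the bulk conductivity; `simulate_velocity` is a placeholder for a bidomain simulation and the exact update used in the study may differ:

```python
def tune_conductivity(sigma0, v_target, simulate_velocity, n_iter=5):
    """Iteratively adjust a bulk conductivity so the simulated conduction
    velocity matches a prescribed one, assuming v ~ sqrt(sigma) (a common
    approximation; illustrative only)."""
    sigma = sigma0
    for _ in range(n_iter):
        v = simulate_velocity(sigma)           # placeholder for a bidomain run
        sigma *= (v_target / v) ** 2           # sqrt(sigma) ~ velocity
    return sigma

# toy stand-in for the simulator: v = c * sqrt(sigma)
print(tune_conductivity(0.1, v_target=0.6,
                        simulate_velocity=lambda s: 1.5 * s ** 0.5))
```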
An adaptive time-stepping strategy for solving the phase field crystal model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Zhengru, E-mail: zrzhang@bnu.edu.cn; Ma, Yuan, E-mail: yuner1022@gmail.com; Qiao, Zhonghua, E-mail: zqiao@polyu.edu.hk
2013-09-15
In this work, we propose an adaptive time step method for simulating the dynamics of the phase field crystal (PFC) model. The numerical simulation of the PFC model needs a long time to reach the steady state, so a large time-stepping method is necessary. Unconditionally energy stable schemes are used to solve the PFC model. The time steps are adaptively determined based on the time derivative of the corresponding energy. It is found that the proposed time step adaptivity can resolve not only the steady state solution but also the dynamical development of the solution efficiently and accurately. The numerical experiments demonstrate that CPU time is significantly reduced for long time simulations.
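The energy-based adaptivity mentioned above is often written in the following form (a rule widely used in the adaptive time-stepping literature; the constants and the exact expression adopted in this work may differ):

```latex
% Adaptive time step driven by the decay rate of the free energy E(t):
\[
\Delta t_{n+1} \;=\; \max\!\left( \Delta t_{\min},\;
\frac{\Delta t_{\max}}{\sqrt{\,1 + \alpha \,\bigl|E'(t_n)\bigr|^{2}\,}} \right),
\]
% so that small steps are taken while the energy changes rapidly and the step
% grows toward \Delta t_{\max} as the solution approaches steady state.
```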
Reducing the Dynamical Degradation by Bi-Coupling Digital Chaotic Maps
NASA Astrophysics Data System (ADS)
Liu, Lingfeng; Liu, Bocheng; Hu, Hanping; Miao, Suoxia
A chaotic map which is realized on a computer will suffer dynamical degradation. Here, a coupled chaotic model is proposed to reduce the dynamical degradation. In this model, the state variable of one digital chaotic map is used to control the parameter of the other digital map. This coupled model is universal and can be used for all chaotic maps. In this paper, two coupled models (one is coupled by two logistic maps, the other is coupled by Chebyshev map and Baker map) are performed, and the numerical experiments show that the performances of these two coupled chaotic maps are greatly improved. Furthermore, a simple pseudorandom bit generator (PRBG) based on coupled digital logistic maps is proposed as an application for our method.
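A toy sketch of the coupling idea, in which the state of one digital logistic map drives the parameter of a second map and bits are emitted by comparing the two states; the seeds, parameter range and bit-extraction rule are illustrative, not the paper's scheme:

```python
def coupled_prbg(n_bits, x0=0.345, y0=0.678, mu=3.99):
    """Pseudorandom bit generator from two coupled logistic maps:
    the state of the first map controls the parameter of the second."""
    x, y = x0, y0
    bits = []
    for _ in range(n_bits):
        x = mu * x * (1.0 - x)                 # first logistic map
        r = 3.6 + 0.4 * x                      # its state sets the parameter
        y = r * y * (1.0 - y)                  # of the second logistic map
        bits.append(1 if x > y else 0)         # simple bit-extraction rule
    return bits

print("".join(map(str, coupled_prbg(64))))
```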
Measurement system and model for simultaneously measuring 6DOF geometric errors.
Zhao, Yuqiong; Zhang, Bin; Feng, Qibo
2017-09-04
A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.
Upscaling pore pressure-dependent gas permeability in shales
NASA Astrophysics Data System (ADS)
Ghanbarian, Behzad; Javadpour, Farzam
2017-04-01
Upscaling pore pressure dependence of shale gas permeability is of great importance and interest in the investigation of gas production in unconventional reservoirs. In this study, we apply the Effective Medium Approximation, an upscaling technique from statistical physics, and modify the Doyen model for unconventional rocks. We develop an upscaling model to estimate the pore pressure-dependent gas permeability from pore throat size distribution, pore connectivity, tortuosity, porosity, and gas characteristics. We compare our adapted model with six data sets: three experiments, one pore-network model, and two lattice-Boltzmann simulations. Results showed that the proposed model estimated the gas permeability within a factor of 3 of the measurements/simulations in all data sets except the Eagle Ford experiment for which we discuss plausible sources of discrepancies.
An Empirical Model of the Variation of the Solar Lyman-α Spectral Irradiance
NASA Astrophysics Data System (ADS)
Kretzschmar, Matthieu; Snow, Martin; Curdt, Werner
2018-03-01
We propose a simple model that computes the spectral profile of the solar irradiance in the hydrogen Lyman alpha line, H Ly-α (121.567 nm), from 1947 to present. Such a model is relevant for the study of many astronomical environments, from planetary atmospheres to interplanetary medium. This empirical model is based on the SOlar Heliospheric Observatory/Solar Ultraviolet Measurement of Emitted Radiation observations of the Ly-α irradiance over solar cycle 23 and the Ly-α disk-integrated irradiance composite. The model reproduces the temporal variability of the spectral profile and matches the independent SOlar Radiation and Climate Experiment/SOLar-STellar Irradiance Comparison Experiment spectral observations from 2003 to 2007 with an accuracy better than 10%.
Multi-hole pressure probes to wind tunnel experiments and air data systems
NASA Astrophysics Data System (ADS)
Shevchenko, A. M.; Shmakov, A. S.
2017-10-01
The problems to develop a multihole pressure system to measure flow angularity, Mach number and dynamic head for wind tunnel experiments or air data systems are discussed. A simple analytical model with separation of variables is derived for the multihole spherical pressure probe. The proposed model is uniform for small subsonic and supersonic speeds. An error analysis was performed. The error functions are obtained, allowing to estimate the influence of the Mach number, the pitch angle, the location of the pressure ports on the uncertainty of determining the flow parameters.
Multi-GNSS precise point positioning (MGPPP) using raw observations
NASA Astrophysics Data System (ADS)
Liu, Teng; Yuan, Yunbin; Zhang, Baocheng; Wang, Ningbo; Tan, Bingfeng; Chen, Yongchang
2017-03-01
A joint-processing model for multi-GNSS (GPS, GLONASS, BDS and GALILEO) precise point positioning (PPP) is proposed, in which raw code and phase observations are used. In the proposed model, inter-system biases (ISBs) and GLONASS code inter-frequency biases (IFBs) are carefully considered, among which GLONASS code IFBs are modeled as a linear function of frequency numbers. To get the full rank function model, the unknowns are re-parameterized and the estimable slant ionospheric delays and ISBs/IFBs are derived and estimated simultaneously. One month of data in April, 2015 from 32 stations of the International GNSS Service (IGS) Multi-GNSS Experiment (MGEX) tracking network have been used to validate the proposed model. Preliminary results show that RMS values of the positioning errors (with respect to external double-difference solutions) for static/kinematic solutions (four systems) are 6.2 mm/2.1 cm (north), 6.0 mm/2.2 cm (east) and 9.3 mm/4.9 cm (up). One-day stabilities of the estimated ISBs described by STD values are 0.36 and 0.38 ns, for GLONASS and BDS, respectively. Significant ISB jumps are identified between adjacent days for all stations, which are caused by the different satellite clock datums in different days and for different systems. Unlike ISBs, the estimated GLONASS code IFBs are quite stable for all stations, with an average STD of 0.04 ns over a month. Single-difference experiment of short baseline shows that PPP ionospheric delays are more precise than traditional leveling ionospheric delays.
Morales Urrea, Diego Alberto; Haure, Patricia Mónica; García Einschlag, Fernando Sebastián; Contreras, Edgardo Martín
2018-05-09
Enzymatic decolourization of azo-dyes could be a cost-competitive alternative compared to physicochemical or microbiological methods. Stoichiometric and kinetic features of peroxidase-mediated decolourization of azo-dyes by hydrogen peroxide (P) are central for design purposes. In this work, a modified version of the Dunford mechanism of peroxidases was developed. The proposed model takes into account the inhibition of peroxidases by high concentrations of P, the substrate-dependent catalatic activity of peroxidases (e.g. the decomposition of P to water and oxygen), the generation of oxidation products (OP) and the effect of pH on the decolourization kinetics of the azo-dye Orange II (OII). To obtain the parameters of the proposed model, two series of experiments were performed. In the first set, the effects of initial P concentration (0.01-0.12 mM) and pH (5-10) on the decolourization degree were studied at a constant initial OII concentration (0.045 mM). The results showed that at pH 9-10 and low initial P concentrations, the consumption of P was mainly due to the oxidation of OII. From the proposed model, an expression for the decolourization degree was obtained. In the second set of experiments, the effect of the initial concentrations of OII (0.023-0.090 mM), P (0.02-4.7 mM), HRP (34-136 mg/L) and pH (5-10) on the initial specific decolourization rate (q0) was studied. As a general rule, a noticeable increase in q0 was observed for pH values higher than 7. For a given pH, q0 increased as a function of the initial OII concentration. In addition, there was an inhibitory effect of high P concentrations on q0. To assess the possibility of reusing the enzyme, repeated additions of OII and P were performed. Results showed that the enzyme remained active after six reuse cycles. A satisfactory agreement between the change of the absorbance during these experiments and the absorbances calculated using the proposed model was obtained. Considering that this set of data was not used during the fitting procedure of the model, the agreement between predicted and experimental absorbances provides a powerful validation of the model developed in the present work.
Improved Denoising via Poisson Mixture Modeling of Image Sensor Noise.
Zhang, Jiachao; Hirakawa, Keigo
2017-04-01
This paper describes a study aimed at comparing the real image sensor noise distribution to the models of noise often assumed in image denoising designs. A quantile analysis in pixel, wavelet transform, and variance stabilization domains reveal that the tails of Poisson, signal-dependent Gaussian, and Poisson-Gaussian models are too short to capture real sensor noise behavior. A new Poisson mixture noise model is proposed to correct the mismatch of tail behavior. Based on the fact that noise model mismatch results in image denoising that undersmoothes real sensor data, we propose a mixture of Poisson denoising method to remove the denoising artifacts without affecting image details, such as edge and textures. Experiments with real sensor data verify that denoising for real image sensor data is indeed improved by this new technique.
A hybrid group method of data handling with discrete wavelet transform for GDP forecasting
NASA Astrophysics Data System (ADS)
Isa, Nadira Mohamed; Shabri, Ani
2013-09-01
This study proposes the application of a hybrid model using the Group Method of Data Handling (GMDH) and the Discrete Wavelet Transform (DWT) to time series forecasting. The objective of this paper is to examine the flexibility of the hybrid GMDH in time series forecasting using Gross Domestic Product (GDP). A time series data set is used in this study to demonstrate the effectiveness of the forecasting model. These data are used to forecast through an application aimed at handling real-life time series. This experiment compares the performance of the hybrid model with single models: Wavelet-Linear Regression (WR), Artificial Neural Network (ANN) and conventional GMDH. It is shown that the proposed model can provide a promising alternative technique for GDP forecasting.
Dynamic model of the force driving kinesin to move along microtubule-Simulation with a model system
NASA Astrophysics Data System (ADS)
Chou, Y. C.; Hsiao, Yi-Feng; To, Kiwing
2015-09-01
A dynamic model for the motility of kinesin, including stochastic-force generation and step formation is proposed. The force driving the motion of kinesin motor is generated by the impulse from the collision between the randomly moving long-chain stalk and the ratchet-shaped outer surface of microtubule. Most of the dynamical and statistical features of the motility of kinesin are reproduced in a simulation system, with (a) ratchet structures similar to the outer surface of microtubule, (b) a bead chain connected to two heads, similarly to the stalk of the real kinesin motor, and (c) the interaction between the heads of the simulated kinesin and microtubule. We also propose an experiment to discriminate between the conventional hand-over-hand model and the dynamic model.
An Elasto-Plastic Damage Model for Rocks Based on a New Nonlinear Strength Criterion
NASA Astrophysics Data System (ADS)
Huang, Jingqi; Zhao, Mi; Du, Xiuli; Dai, Feng; Ma, Chao; Liu, Jingbo
2018-05-01
The strength and deformation characteristics of rocks are the most important mechanical properties for rock engineering constructions. A new nonlinear strength criterion is developed for rocks by combining the Hoek-Brown (HB) criterion and the nonlinear unified strength criterion (NUSC). The proposed criterion takes account of the intermediate principal stress effect, unlike the HB criterion, and is nonlinear in the meridian plane, unlike the NUSC. Only three parameters need to be determined by experiments, including the two HB parameters σc and mi. The failure surface of the proposed criterion is continuous, smooth and convex. The proposed criterion fits true triaxial test data well and performs better than the three other existing criteria. Then, by introducing the Geological Strength Index, the proposed criterion is extended to rock masses and predicts the test data well. Finally, based on the proposed criterion, a triaxial elasto-plastic damage model for intact rock is developed. The plastic part is based on the effective stress, whose yield function is developed from the proposed criterion. For the damage part, the evolution function is assumed to have an exponential form. The performance of the constitutive model shows good agreement with the results of experimental tests.
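For reference, the Hoek-Brown criterion that the proposed criterion builds on is usually written as follows (standard forms from the literature; the new criterion modifies these to include the intermediate principal stress):

```latex
% Hoek-Brown criterion for intact rock, with the two HB parameters
% sigma_c (uniaxial compressive strength) and m_i mentioned above:
\[
\sigma_1 \;=\; \sigma_3 \;+\; \sigma_c
\left( m_i \,\frac{\sigma_3}{\sigma_c} \;+\; 1 \right)^{0.5},
\]
% and the generalized form for rock masses, with s and a obtained from the
% Geological Strength Index (GSI):
\[
\sigma_1 \;=\; \sigma_3 \;+\; \sigma_c
\left( m_b \,\frac{\sigma_3}{\sigma_c} \;+\; s \right)^{a}.
\]
```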
Comparison of Data on Mutation Frequencies of Mice Caused by Radiation with Low Dose Model
NASA Astrophysics Data System (ADS)
Manabe, Yuichiro; Bando, Masako
2013-09-01
We propose the low dose (LD) model, an extension of the LDM model proposed in a previous paper [Y. Manabe et al.: J. Phys. Soc. Jpn. 81 (2012) 104004], to estimate biological damage caused by irradiation. The LD model takes into account the cell death effect in addition to the proliferation, apoptosis and repair effects that were included in the LDM model. As a typical example, we apply the LD model to experiments on the mutation frequency induced by exposure to low levels of ionizing radiation. The most famous and extensive experiments are those summarized by Russell and Kelly [Proc. Natl. Acad. Sci. U.S.A. 79 (1982) 539], known as the ``mega-mouse project''. This provides us with important information on the frequencies of transmitted specific-locus mutations induced in mouse spermatogonia stem-cells. It is found that the numerical results for the mutation frequency of mice are in reasonable agreement with the experimental data: the LD model reproduces the total dose and dose rate dependence of the data reasonably well. In order to see this dose-rate dependence more explicitly, we introduce the dose-rate effectiveness factor (DREF). This represents a dose rate dependent effect that competes with the proliferation effect of broken cells induced by irradiation.
Dynamical phase separation using a microfluidic device: experiments and modeling
NASA Astrophysics Data System (ADS)
Aymard, Benjamin; Vaes, Urbain; Radhakrishnan, Anand; Pradas, Marc; Gavriilidis, Asterios; Kalliadasis, Serafim; Complex Multiscale Systems Team
2017-11-01
We study the dynamical phase separation of a binary fluid by a microfluidic device both from the experimental and from the modeling points of view. The experimental device consists of a main channel (600 μm wide) leading into an array of 276 trapezoidal capillaries of 5 μm width arranged on both sides and separating the lateral channels from the main channel. Due to geometrical effects as well as wetting properties of the substrate, and under well chosen pressure boundary conditions, a multiphase flow introduced into the main channel gets separated at the capillaries. Understanding this dynamics via modeling and numerical simulation is a crucial step in designing future efficient micro-separators. We propose a diffuse-interface model, based on the classical Cahn-Hilliard-Navier-Stokes system, with a new nonlinear mobility and new wetting boundary conditions. We also propose a novel numerical method using a finite-element approach, together with an adaptive mesh refinement strategy. The complex geometry is captured using the same computer-aided design files as the ones adopted in the fabrication of the actual device. Numerical simulations reveal a very good qualitative agreement between model and experiments, demonstrating also a clear separation of phases.
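A generic Cahn-Hilliard-Navier-Stokes system of the kind referred to above is written below for orientation; the paper's model modifies the mobility M(φ) and the wetting boundary conditions, which are not shown here:

```latex
% Generic incompressible Cahn-Hilliard-Navier-Stokes system (textbook form;
% the nonlinear mobility M(\phi) and wetting conditions of the paper differ):
\[
\partial_t \phi + \mathbf{u}\cdot\nabla\phi
   = \nabla\cdot\bigl( M(\phi)\,\nabla\mu \bigr),
\qquad
\mu = f'(\phi) - \varepsilon^{2}\nabla^{2}\phi,
\]
\[
\rho\,\bigl( \partial_t \mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u} \bigr)
   = -\nabla p + \nabla\cdot\bigl( \eta\,\nabla\mathbf{u} \bigr)
     + \mu\,\nabla\phi,
\qquad
\nabla\cdot\mathbf{u} = 0 .
\]
```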
Impact of Information based Classification on Network Epidemics
Mishra, Bimal Kumar; Haldar, Kaushik; Sinha, Durgesh Nandini
2016-01-01
Formulating mathematical models for accurate approximation of malicious propagation in a network is a difficult process because of our inherent lack of understanding of several underlying physical processes that intrinsically characterize the broader picture. The aim of this paper is to understand the impact of available information in the control of malicious network epidemics. A 1-n-n-1 type differential epidemic model is proposed, where the differentiality allows a symptom-based classification. This is the first attempt to add such a classification into the existing epidemic framework. The model is incorporated into a five-class system called the DifEpGoss architecture. Analysis reveals an epidemic threshold, based on which the long-term behavior of the system is analyzed. In this work, three real network datasets with 22002, 22469 and 22607 undirected edges, respectively, are used. The datasets show that the classification-based prevention given in the model can play a significant role in containing network epidemics. Further simulation-based experiments use a three-category classification of attack and defense strengths, which allows 27 different combinations to be considered. These experiments further corroborate the utility of the proposed model. The paper concludes with several interesting results. PMID:27329348
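As a point of reference, the classical SIR compartment model that differential epidemic models of this kind extend can be sketched in a few lines; the rates and initial state below are illustrative assumptions, and the 1-n-n-1 classification structure of the proposed model is not reproduced.

```python
# Minimal sketch of the classical SIR model; beta, gamma and the initial
# state are assumed for illustration only.
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

beta, gamma = 0.4, 0.1              # assumed transmission and removal rates
y0 = [0.99, 0.01, 0.0]              # initial susceptible, infective, removed fractions
t = np.linspace(0.0, 100.0, 1001)
S, I, R = odeint(sir, y0, t, args=(beta, gamma)).T

# Epidemic threshold of the classical model: spreading dies out if R0 <= 1.
print("R0 =", beta / gamma)
```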
León, Larry F; Cai, Tianxi
2012-04-01
In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well-known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power for detecting misspecification while controlling the size of the test.
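The core idea of the cumulative-residual check can be illustrated on an uncensored linear model; the sketch below is simplified (it ignores the correction for parameter estimation and the Kaplan-Meier integration used for censored data), and the simulated data stand in for a real dataset.

```python
# Simplified cumulative-sum-of-residuals check for functional form on an
# uncensored linear model; the censored version in the paper replaces the
# ordinary residuals with Kaplan-Meier based analogs.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)        # data generated under the fitted linear form

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat

order = np.argsort(x)
obs_process = np.cumsum(resid[order]) / np.sqrt(n)   # observed cumulative residual process

# Zero-mean Gaussian multiplier realizations approximating the null distribution
realizations = []
for _ in range(200):
    g = rng.normal(size=n)
    realizations.append(np.cumsum((resid * g)[order]) / np.sqrt(n))

sup_obs = np.max(np.abs(obs_process))
sup_null = np.array([np.max(np.abs(r)) for r in realizations])
print("approximate p-value:", np.mean(sup_null >= sup_obs))
```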
NASA Astrophysics Data System (ADS)
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for computing the global transformation between two images. However, its hardware implementation is challenging because of the large number of coefficients with different precision requirements for fixed-point representation. A VLSI hardware architecture is proposed for computing a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model, and a simpler projective mapping. This decomposition makes the hardware implementation feasible and considerably reduces the required number of bits for the fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for computing a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow in 180-nm CMOS technology as well as on a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
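A software sketch of projective-model (homography) estimation with RANSAC, the algorithmic core that the hardware architecture implements, is given below; the thresholds, iteration counts, and helper names are illustrative assumptions, not the paper's fixed-point design.

```python
# Sketch of projective-model (homography) estimation with RANSAC.
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform from >= 4 point correspondences (Nx2 arrays)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=1000, thresh=3.0, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)       # minimal 4-point sample
        H = fit_homography(src[idx], dst[idx])
        proj = np.c_[src, np.ones(len(src))] @ H.T          # project all source points
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the largest consensus set
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```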
Jin, Long; Zhang, Yunong
2015-07-01
In this brief, a discrete-time Zhang neural network (DTZNN) model is first proposed, developed, and investigated for online time-varying nonlinear optimization (OTVNO). Newton iteration is then shown to be derivable from the proposed DTZNN model. In addition, to eliminate the explicit matrix-inversion operation, the quasi-Newton Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is introduced, which effectively approximates the inverse of the Hessian matrix. A DTZNN-BFGS model, the combination of the DTZNN model and the quasi-Newton BFGS method, is thus proposed and investigated for OTVNO. Theoretical analyses show that, with step size h=1 and/or with zero initial error, the maximal residual error of the DTZNN model has an O(τ²) pattern, whereas the maximal residual error of the Newton iteration has an O(τ) pattern, with τ denoting the sampling gap. Moreover, when h ≠ 1 and h ∈ (0,2), the maximal steady-state residual error of the DTZNN model has an O(τ²) pattern. Finally, an illustrative numerical experiment and an application example of manipulator motion generation are provided and analyzed to substantiate the efficacy of the proposed DTZNN and DTZNN-BFGS models for OTVNO.
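The BFGS ingredient of the DTZNN-BFGS model can be illustrated by the standard inverse-Hessian update, which avoids explicit matrix inversion; the quadratic test function and variable names below are illustrative, not the paper's formulation.

```python
# Standard BFGS update of an inverse-Hessian approximation H, followed by one
# illustrative quasi-Newton step on f(x) = 0.5 * x^T A x - b^T x.
import numpy as np

def bfgs_inverse_update(H, s, y):
    """H_{k+1} = (I - rho s y^T) H_k (I - rho y s^T) + rho s s^T, rho = 1/(y^T s)."""
    s = s.reshape(-1, 1)
    y = y.reshape(-1, 1)
    rho = 1.0 / float(y.T @ s)
    I = np.eye(H.shape[0])
    return (I - rho * s @ y.T) @ H @ (I - rho * y @ s.T) + rho * s @ s.T

A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive-definite test matrix
b = np.array([1.0, 1.0])
x = np.zeros(2)
H = np.eye(2)                            # initial inverse-Hessian guess
g = A @ x - b                            # gradient at x
x_new = x - H @ g                        # quasi-Newton step (no matrix inversion)
g_new = A @ x_new - b
H = bfgs_inverse_update(H, x_new - x, g_new - g)
```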
Two-dimensional lift-up problem for a rigid porous bed
NASA Astrophysics Data System (ADS)
Chang, Y.; Huang, L. H.; Yang, F. P. Y.
2015-05-01
The present study analytically reinvestigates the two-dimensional lift-up problem for a rigid porous bed that was studied by Mei, Yeung, and Liu ["Lifting of a large object from a porous seabed," J. Fluid Mech. 152, 203 (1985)]. Mei, Yeung, and Liu proposed a model that treats the bed as a rigid porous medium and performed relevant experiments. In their model, they assumed the gap flow comes from the periphery of the gap, and there is a shear layer in the porous medium; the flow in the gap is described by adhesion approximation [D. J. Acheson, Elementary Fluid Dynamics (Clarendon, Oxford, 1990), pp. 243-245.] and the pore flow by Darcy's law, and the slip-flow condition proposed by Beavers and Joseph ["Boundary conditions at a naturally permeable wall," J. Fluid Mech. 30, 197 (1967)] is applied to the bed interface. In this problem, however, the gap flow initially mainly comes from the porous bed, and the shear layer may not exist. Although later the shear effect becomes important, the empirical slip-flow condition might not physically respond to the shear effect, and the existence of the vertical velocity affects the situation so greatly that the slip-flow condition might not be appropriate. In contrast, the present study proposes a more general model for the problem, applying Stokes flow to the gap, the Brinkman equation to the porous medium, and Song and Huang's ["Laminar poroelastic media flow," J. Eng. Mech. 126, 358 (2000)] complete interfacial conditions to the bed interface. The exact solution to the problem is found and fits Mei's experiments well. The breakout phenomenon is examined for different soil beds, mechanics that cannot be illustrated by Mei's model are revealed, and the theoretical breakout times obtained using Mei's model and our model are compared. The results show that the proposed model is more compatible with physics and provides results that are more precise.
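For context, the two descriptions of the pore flow contrasted above can be written in their standard forms (the complete interfacial conditions used in the paper are not reproduced here):

\[ \text{Darcy:}\quad \mathbf{u} = -\frac{k}{\mu}\,\nabla p, \qquad \text{Brinkman:}\quad \nabla p = -\frac{\mu}{k}\,\mathbf{u} + \mu_{e}\,\nabla^{2}\mathbf{u}, \]

where k is the bed permeability, μ the fluid viscosity, and μ_e an effective viscosity. The additional viscous term in the Brinkman equation allows shear to penetrate the porous bed, so no empirical slip-flow condition is needed at the interface.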
NASA Astrophysics Data System (ADS)
Sotokoba, Yasumasa; Okajima, Kenji; Iida, Toshiaki; Tanaka, Tadatsugu
We propose the trenchless box culvert construction method to construct box culverts under shallow soil cover while keeping roads or tracks open. When this construction method is used, it is necessary to clarify the deformation and shear failure caused by excavation of the ground. In order to investigate the soil behavior, model experiments and elasto-plastic finite element analyses were performed. The model experiments showed that shear failure developed from the end of the roof to the toe of the boundary surface. In the finite element analysis, a shear band effect was introduced. Comparing the observed shear bands in the model experiments with the computed maximum shear strain contours, it was found that the observed direction of the shear band could be simulated reasonably well by the finite element analysis. We may therefore say that the finite element method used in this study is a useful tool for this construction method.
SPIR: The potential spreaders involved SIR model for information diffusion in social networks
NASA Astrophysics Data System (ADS)
Rui, Xiaobin; Meng, Fanrong; Wang, Zhixiao; Yuan, Guan; Du, Changjiang
2018-09-01
The Susceptible-Infective-Removed (SIR) model is one of the most widely used models in information diffusion research in social networks. Many researchers have devoted themselves to improving the classic SIR model in different aspects. However, on the one hand, the equations of these improved models are treated as continuous functions, while the corresponding simulation experiments use discrete time, leading to a mismatch between the numerical solutions obtained mathematically and the experimental results obtained by simulating the spreading behaviour of each node. On the other hand, if the equations of these improved models are solved discretely, susceptible nodes are counted repeatedly, resulting in a large deviation from the actual values. To solve these problems, this paper proposes a Susceptible-Potential-Infective-Removed (SPIR) model that analyses the diffusion process in discrete time, consistent with the simulations. Moreover, the model introduces a potential-spreader set, which effectively solves the problem of repeated counting. To test the SPIR model, various experiments were carried out from different angles on both artificial and real-world networks. The Pearson correlation coefficient between the numerical solutions of the SPIR equations and the corresponding simulation results is mostly above 0.95, which shows that the proposed SPIR model is able to depict the information diffusion process with high accuracy.
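A minimal sketch of the discrete-time SIR updates that such models are compared against is shown below (homogeneous-mixing form; the potential-spreader compartment of the SPIR model itself is not reproduced), with assumed parameters.

```python
# Discrete-time SIR difference equations (homogeneous mixing), with assumed
# per-step infection and removal probabilities; for illustration only.
beta, gamma = 0.3, 0.1
S, I, R = 0.99, 0.01, 0.0
history = []
for t in range(100):
    new_infections = beta * S * I
    new_removals = gamma * I
    S = S - new_infections
    I = I + new_infections - new_removals
    R = R + new_removals
    history.append((t, S, I, R))
print(history[-1])
```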
Poisson-Boltzmann-Nernst-Planck model
NASA Astrophysics Data System (ADS)
Zheng, Qiong; Wei, Guo-Wei
2011-05-01
The Poisson-Nernst-Planck (PNP) model is based on a mean-field approximation of ion interactions and continuum descriptions of concentration and electrostatic potential. It provides qualitative explanation and increasingly quantitative predictions of experimental measurements for ion transport problems in many areas such as semiconductor devices, nanofluidic systems, and biological systems, despite many limitations. While the PNP model gives a good prediction of the ion transport phenomenon for chemical, physical, and biological systems, the number of equations to be solved and the number of diffusion coefficient profiles to be determined for the calculation directly depend on the number of ion species in the system, since each ion species corresponds to one Nernst-Planck equation and one position-dependent diffusion coefficient profile. In a complex system with multiple ion species, the PNP can be computationally expensive and parameter demanding, as experimental measurements of diffusion coefficient profiles are generally quite limited for most confined regions such as ion channels, nanostructures and nanopores. We propose an alternative model to reduce the number of Nernst-Planck equations to be solved in complex chemical and biological systems with multiple ion species by substituting the Nernst-Planck equations with Boltzmann distributions of ion concentrations. As such, we solve the coupled Poisson-Boltzmann and Nernst-Planck (PBNP) equations instead of the PNP equations. The proposed PBNP equations are derived from a total energy functional by using the variational principle. We design a number of computational techniques, including the Dirichlet to Neumann mapping, the matched interface and boundary method, and a relaxation-based iterative procedure, to ensure efficient solution of the proposed PBNP equations. Two protein molecules, cytochrome c551 and Gramicidin A, are employed to validate the proposed model under a wide range of bulk ion concentrations and external voltages. Extensive numerical experiments show that there is excellent consistency between the results predicted by the present PBNP model and those obtained from the PNP model in terms of the electrostatic potentials, ion concentration profiles, and current-voltage (I-V) curves. The present PBNP model is further validated by a comparison with experimental measurements of I-V curves under various ion bulk concentrations. Numerical experiments indicate that the proposed PBNP model is more efficient than the original PNP model in terms of simulation time.
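The substitution described above replaces each Nernst-Planck equation by the equilibrium Boltzmann distribution of the corresponding ion concentration (standard form):

\[ c_{i}(\mathbf{r}) = c_{i}^{\mathrm{bulk}} \exp\!\left( -\frac{q_{i}\,\phi(\mathbf{r})}{k_{B}T} \right), \]

where c_i is the concentration of species i, q_i its charge, φ the electrostatic potential, k_B the Boltzmann constant, and T the absolute temperature; inserting this into the Poisson equation yields the Poisson-Boltzmann part of the coupled PBNP system.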
NASA Astrophysics Data System (ADS)
Le Touz, N.; Toullier, T.; Dumoulin, J.
2017-05-01
The present study addresses the thermal behaviour of a modified pavement structure designed to prevent icing at its surface in adverse winter conditions and overheating in hot summer conditions. First, a multi-physics model based on the infinite element method was built to predict the evolution of the surface temperature. Then, laboratory experiments on small specimens were carried out, and the surface temperature was monitored by infrared thermography. The results obtained are analyzed, and the performance of the numerical model for real-scale outdoor applications is discussed. Finally, conclusions and perspectives are proposed.
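As background, models of this kind typically solve the transient heat-conduction equation in the pavement layers subject to a surface energy balance; a generic form (not necessarily the exact formulation used by the authors) is:

\[ \rho c_{p}\,\frac{\partial T}{\partial t} = \nabla\cdot(\lambda\,\nabla T), \qquad -\lambda\,\frac{\partial T}{\partial n}\Big|_{\mathrm{surf}} = h\,(T_{\mathrm{surf}} - T_{\mathrm{air}}) + \varepsilon\sigma\,(T_{\mathrm{surf}}^{4} - T_{\mathrm{sky}}^{4}) - \alpha\,q_{\mathrm{sun}}, \]

where ρ, c_p, and λ are the density, specific heat, and thermal conductivity of the layer, h a convective exchange coefficient, ε the surface emissivity, σ the Stefan-Boltzmann constant, α the solar absorptivity, and q_sun the incident solar flux.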
Stock market index prediction using neural networks
NASA Astrophysics Data System (ADS)
Komo, Darmadi; Chang, Chein-I.; Ko, Hanseok
1994-03-01
A neural network approach to stock market index prediction is presented. Actual data from the Wall Street Journal's Dow Jones Industrial Index have been used as a benchmark in our experiments, where Radial Basis Function based neural networks were designed to model these indices over the period from January 1988 to December 1992. Notable success has been achieved, with the proposed model producing over 90% prediction accuracy on monthly Dow Jones Industrial Index predictions. The model has also captured both moderate and heavy index fluctuations. The experiments conducted in this study demonstrate that the Radial Basis Function neural network is an excellent candidate for predicting stock market indices.
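A minimal sketch of a Radial Basis Function network for index prediction is shown below; the synthetic series, window length, number of centers, and regularization are illustrative assumptions, not the authors' configuration.

```python
# RBF network sketch: Gaussian hidden units centered on training inputs,
# linear output weights fit by regularized least squares.
import numpy as np

def rbf_features(X, centers, gamma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
index = 100 + np.cumsum(rng.normal(0, 1, 200))       # synthetic stand-in for an index series

# Predict the next value from a window of the previous 5 values (assumed setup)
window = 5
X = np.array([index[i:i + window] for i in range(len(index) - window)])
y = index[window:]

centers = X[rng.choice(len(X), 20, replace=False)]   # 20 RBF centers chosen from the data
Phi = rbf_features(X, centers, gamma=0.01)
w = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(len(centers)), Phi.T @ y)
pred = Phi @ w
print("RMSE on training data:", np.sqrt(np.mean((pred - y) ** 2)))
```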