Sample records for system gmm model

  1. A Bayesian framework based on a Gaussian mixture model and radial-basis-function Fisher discriminant analysis (BayGmmKda V1.1) for spatial prediction of floods

    NASA Astrophysics Data System (ADS)

    Tien Bui, Dieu; Hoang, Nhat-Duc

    2017-09-01

    In this study, a probabilistic model named BayGmmKda is proposed for flood susceptibility assessment in a study area in central Vietnam. The new model is a Bayesian framework constructed from a combination of a Gaussian mixture model (GMM), radial-basis-function Fisher discriminant analysis (RBFDA), and a geographic information system (GIS) database. In the Bayesian framework, the GMM is used to model the data distribution of flood-influencing factors in the GIS database, whereas RBFDA is utilized to construct a latent variable that enhances the model performance. The posterior probabilistic output of the BayGmmKda model is then used as the flood susceptibility index. Experimental results showed that the proposed hybrid framework is superior to other benchmark models, including the adaptive neuro-fuzzy inference system and the support vector machine. To facilitate the model implementation, a software program for BayGmmKda has been developed in MATLAB. The BayGmmKda program can accurately establish a flood susceptibility map for the study region. Accordingly, local authorities can overlay this susceptibility map onto various land-use maps for the purpose of land-use planning or management.
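
    The Bayesian use of a GMM described above can be sketched with scikit-learn: class-conditional densities p(x | flood) and p(x | non-flood) are each fitted as GMMs and combined by Bayes' rule into a posterior susceptibility index. The data, priors, and component counts below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical 2-D flood-influencing factors (e.g. slope, elevation)
flood = rng.normal([0.0, 0.0], 0.5, size=(300, 2))
non_flood = rng.normal([2.0, 2.0], 0.7, size=(300, 2))

# Class-conditional densities p(x | flood) and p(x | non-flood) as GMMs
gmm_f = GaussianMixture(n_components=2, random_state=0).fit(flood)
gmm_n = GaussianMixture(n_components=2, random_state=0).fit(non_flood)

prior_f = prior_n = 0.5  # equal class priors, an assumption for the sketch

def susceptibility(x):
    """Posterior P(flood | x), used as the susceptibility index."""
    lf = np.exp(gmm_f.score_samples(x)) * prior_f
    ln = np.exp(gmm_n.score_samples(x)) * prior_n
    return lf / (lf + ln)

print(susceptibility(np.array([[0.0, 0.0], [2.0, 2.0]])))
```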

  2. Experimental study on GMM-based speaker recognition

    NASA Astrophysics Data System (ADS)

    Ye, Wenxing; Wu, Dapeng; Nucci, Antonio

    2010-04-01

    Speaker recognition plays a very important role in the field of biometric security. In order to improve the recognition performance, many pattern recognition techniques have been explored in the literature. Among these techniques, the Gaussian Mixture Model (GMM) has proved to be an effective statistical model for speaker recognition and is used in most state-of-the-art speaker recognition systems. The GMM is used to represent the 'voice print' of a speaker by modeling the spectral characteristics of the speaker's speech signals. In this paper, we implement a speaker recognition system, which consists of preprocessing, Mel-Frequency Cepstrum Coefficient (MFCC) based feature extraction, and GMM based classification. We test our system with the TIDIGITS data set (325 speakers) and our own recordings of more than 200 speakers; our system achieves a 100% correct recognition rate. Moreover, we also test our system under the scenario that training samples are from one language but test samples are from a different language; our system again achieves a 100% correct recognition rate, which indicates that our system is language independent.
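
    The GMM classification stage can be sketched as follows: one GMM "voice print" is trained per speaker, and a test utterance is assigned to the speaker whose model gives the highest log-likelihood. The random vectors below merely stand in for real MFCC frames; the component count is an illustrative assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic stand-ins for 13-D MFCC frames of two speakers
spk_a = rng.normal(0.0, 1.0, size=(500, 13))
spk_b = rng.normal(1.5, 1.0, size=(500, 13))

# One GMM "voice print" per speaker, trained on that speaker's frames
models = {
    "A": GaussianMixture(n_components=4, covariance_type="diag",
                         random_state=0).fit(spk_a),
    "B": GaussianMixture(n_components=4, covariance_type="diag",
                         random_state=0).fit(spk_b),
}

def identify(frames):
    # Decide by the total log-likelihood over the utterance's frames
    return max(models, key=lambda s: models[s].score(frames))

test_utterance = rng.normal(0.0, 1.0, size=(50, 13))  # drawn like speaker A
print(identify(test_utterance))  # -> "A"
```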

  3. Audio-visual imposture

    NASA Astrophysics Data System (ADS)

    Karam, Walid; Mokbel, Chafic; Greige, Hanna; Chollet, Gerard

    2006-05-01

    A GMM based audio visual speaker verification system is described, and an Active Appearance Model with a linear speaker transformation system is used to evaluate the robustness of the verification. An Active Appearance Model (AAM) is used to automatically locate and track a speaker's face in a video recording. A Gaussian Mixture Model (GMM) based classifier (BECARS) is used for face verification. GMM training and testing is accomplished on DCT based features extracted from the detected faces. On the audio side, speech features are extracted and used for speaker verification with the GMM based classifier. Fusion of the audio and video modalities for audio visual speaker verification is compared with the face verification and speaker verification systems. To improve the robustness of the multimodal biometric identity verification system, an audio visual imposture system is envisioned. It consists of an automatic voice transformation technique that an impostor may use to assume the identity of an authorized client. Features of the transformed voice are then combined with the corresponding appearance features and fed into the GMM based system BECARS for training. An attempt is made to increase the acceptance rate of the impostor and to analyze the robustness of the verification system. Experiments are being conducted on the BANCA database, with a prospect of experimenting on the PDAtabase newly developed within the scope of the SecurePhone project.

  4. A Novel GMM-Based Behavioral Modeling Approach for Smartwatch-Based Driver Authentication.

    PubMed

    Yang, Ching-Han; Chang, Chin-Chun; Liang, Deron

    2018-03-28

    All drivers have their own distinct driving habits, and usually hold and operate the steering wheel differently in different driving scenarios. In this study, we propose a novel Gaussian mixture model (GMM)-based method that improves on the traditional GMM in modeling driving behavior. This new method can be applied to build a better driver authentication system based on the accelerometer and orientation sensor of a smartwatch. To demonstrate the feasibility of the proposed method, we created an experimental system that analyzes driving behavior using the built-in sensors of a smartwatch. The experimental results for driver authentication, an equal error rate (EER) of 4.62% in the simulated environment and an EER of 7.86% in the real-traffic environment, confirm the feasibility of this approach.
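
    The reported EER is the operating point where the false-accept and false-reject rates coincide. It can be computed from verification scores as sketched below; the score distributions are synthetic, not the paper's data.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: threshold at which false-accept rate equals false-reject rate."""
    eer, gap = 0.5, np.inf
    for t in np.sort(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < gap:
            gap, eer = abs(far - frr), (far + frr) / 2
    return eer

rng = np.random.default_rng(2)
genuine = rng.normal(2.0, 1.0, 1000)    # hypothetical genuine-match scores
impostor = rng.normal(0.0, 1.0, 1000)   # hypothetical impostor scores
print(equal_error_rate(genuine, impostor))
```

    For two unit-variance Gaussian score distributions separated by 2, the EER should come out near 16%.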

  5. Planning and design of a knowledge based system for green manufacturing management

    NASA Astrophysics Data System (ADS)

    Kamal Mohd Nawawi, Mohd; Mohd Zuki Nik Mohamed, Nik; Shariff Adli Aminuddin, Adam

    2013-12-01

    This paper presents a conceptual design approach to the development of a hybrid Knowledge Based (KB) system for Green Manufacturing Management (GMM) at the planning and design stages. The research concentrates on GMM using a hybrid KB system, which is a blend of a KB system and Gauging Absences of Pre-requisites (GAP). The hybrid KB/GAP system identifies all potential elements of green manufacturing management issues throughout the development of the system. The KB system used in the planning and design stages analyses the gap between the existing and the benchmark organizations for an effective implementation through the GAP analysis technique. The proposed KBGMM model at the design stage explores two components, namely the Competitive Priority and Lean Environment modules. Through the simulated results, the KBGMM system identified, for each module and sub-module, the problem categories in a prioritized manner. The system finalized all the Bad Points (BP) that need to be improved to achieve a benchmark implementation of GMM at the design stage. The system provides valuable decision-making information for the planning and design of GMM in terms of business organization.

  6. The application of a Grey Markov Model to forecasting annual maximum water levels at hydrological stations

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong

    2012-03-01

    Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines the Grey System and Markov theory into a higher precision model. The GMM takes advantage of the Grey System to predict the trend values and uses the Markov theory to forecast fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM (1, 1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors caused in step 2, and then obtain the relative errors of the second order estimation; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model have been demonstrated in this paper.
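
    Steps 1 and 2 of the procedure, the GM(1,1) trend model, can be sketched as follows: accumulate the series, fit the grey development and control coefficients by least squares on the mean (background) sequence, then extrapolate and difference back. The series below is illustrative, not the Yuqiao record.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) grey model to series x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])            # background (mean) sequence
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # development/control coefficients
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # trend of the accumulated series
    x0_hat = np.diff(x1_hat, prepend=0.0)    # inverse accumulation restores the series
    return x0_hat[n:]

# A roughly exponential series: GM(1,1) should extrapolate the growth trend
series = [100, 110, 121, 133, 146]
print(gm11_forecast(series, steps=2))
```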

  7. Gaussian-input Gaussian mixture model for representing density maps and atomic models.

    PubMed

    Kawabata, Takeshi

    2018-07-01

    A new Gaussian mixture model (GMM) has been developed for better representations of both atomic models and electron microscopy 3D density maps. The standard GMM algorithm employs an EM algorithm to determine the parameters. It accepted a set of 3D points with weights, corresponding to voxel or atomic centers. Although the standard algorithm worked reasonably well, it had three problems. First, it ignored the size (voxel width or atomic radius) of the input, and thus it could lead to a GMM with a smaller spread than the input. Second, the algorithm had a singularity problem: it sometimes stopped the iterative procedure due to a Gaussian function with almost zero variance. Third, a map with a large number of voxels required a long computation time for conversion to a GMM. To solve these problems, we have introduced a Gaussian-input GMM algorithm, which considers the input atoms or voxels as a set of Gaussian functions. The standard EM algorithm of the GMM was extended to optimize the new GMM. The new GMM has a radius of gyration identical to that of the input, and does not suddenly stop due to the singularity problem. For fast computation, we have introduced down-sampled Gaussian functions (DSG) by merging neighboring voxels into an anisotropic Gaussian function. This provides a GMM with thousands of Gaussian functions in a short computation time. We have also introduced a DSG-input GMM: the Gaussian-input GMM with the DSG as the input. This new algorithm is much faster than the standard algorithm. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.
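
    The spread correction at the heart of the Gaussian-input idea can be illustrated in one dimension with a single component: when each datum is itself a Gaussian of width s, the model's second moment must include s², which a point-input fit omits. This is a minimal sketch of the principle, not the paper's full EM algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
centers = rng.normal(0.0, 2.0, size=5000)   # e.g. atomic centers
s = 1.5                                      # per-atom Gaussian width (radius)

# Point-input fit of one Gaussian: ignores the widths of the inputs,
# so it underestimates the spread of the underlying density
var_point = centers.var()

# Gaussian-input fit: each datum is N(center, s^2), so the model's
# second moment must include the input spread as well
var_gauss = centers.var() + s**2

# Variance (squared radius of gyration) of the true input density
true_var = 2.0**2 + s**2
print(var_point, var_gauss, true_var)
```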

  8. Progress in the development of the GMM-2 gravity field model for Mars

    NASA Technical Reports Server (NTRS)

    Lemoine, F. G.; Smith, D. E.; Lerch, F. J.; Zuber, M. T.; Patel, G. B.

    1994-01-01

    Last year we published the GMM-1 (Goddard Mars Model-1) gravity model for Mars. We have completely re-analyzed the Viking and Mariner 9 tracking data in the development of the new field, designated GMM-2. The model is complete to degree and order 70. Various aspects of the model are discussed.

  9. Systems GMM estimates of the health care spending and GDP relationship: a note.

    PubMed

    Kumar, Saten

    2013-06-01

    This paper utilizes the systems generalized method of moments (GMM) [Arellano and Bover (1995) J Econometrics 68:29-51; Blundell and Bond (1998) J Econometrics 87:115-143] and panel Granger causality [Hurlin and Venet (2001) Granger causality tests in panel data models with fixed coefficients. Mimeo, University Paris IX] to investigate the health care spending and gross domestic product (GDP) relationship for Organisation for Economic Co-operation and Development (OECD) countries over the period 1960-2007. The system GMM estimates confirm that the contribution of real GDP to health spending is significant and positive. The panel Granger causality tests imply that a bi-directional causality exists between health spending and GDP. To this end, policies aimed at raising health spending will eventually improve the well-being of the population in the long run.

  10. Parallelization strategies for continuum-generalized method of moments on the multi-thread systems

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Handhika, T.; Ernastuti; Kerami, D.

    2017-07-01

    The Continuum-Generalized Method of Moments (C-GMM) addresses the shortfall of the Generalized Method of Moments (GMM), which is not as efficient as the maximum likelihood estimator, by using a continuum set of moment conditions in a GMM framework. However, this computation takes a very long time because of the optimization of the regularization parameter. Unfortunately, these calculations are usually processed sequentially, whereas all modern computers are now supported by hierarchical memory systems and hyperthreading technology, allowing for parallel computing. This paper aims to speed up the calculation of C-GMM by designing a parallel algorithm for C-GMM on multi-thread systems. First, parallel regions are detected in the original C-GMM algorithm. There are two parallel regions in the original C-GMM algorithm that contribute significantly to the reduction of computational time: the outer loop and the inner loop. This parallel algorithm is then implemented with the standard shared-memory application programming interface, i.e. Open Multi-Processing (OpenMP). The experiment shows that outer-loop parallelization is the best strategy for any number of observations.
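
    The outer-loop strategy can be sketched in Python, with shared-memory threads standing in for the paper's OpenMP threads: each candidate regularization parameter is evaluated independently. The objective below is a toy stand-in for one expensive C-GMM criterion evaluation, not the actual estimator.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def objective(alpha):
    # Toy stand-in for one expensive C-GMM criterion evaluation
    # at a candidate regularization parameter alpha
    grid = np.linspace(0.0, 1.0, 200_000)
    return float(np.sum(np.exp(-alpha * grid**2)) * (grid[1] - grid[0]))

def outer_loop_search(alphas, workers=4):
    # Outer-loop, shared-memory parallelism (mirroring the OpenMP strategy):
    # candidate parameters are scored concurrently, then gathered
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(alphas, pool.map(objective, alphas)))

print(outer_loop_search([0.1, 0.5, 1.0, 2.0]))
```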

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morita, Daisuke; Miyamoto, Ayumi; Hattori, Yuki

    Highlights: •Glucose monomycolate (GMM) is a marker glycolipid for active tuberculosis. •Tissue responses to GMM involved up-regulation of Th1-attracting chemokines. •Th1-skewed local responses were mounted at the GMM-injected tissue. -- Abstract: Trehalose 6,6′-dimycolate (TDM) is a major glycolipid of the cell wall of mycobacteria with remarkable adjuvant functions. To avoid detection by the host innate immune system, invading mycobacteria down-regulate the expression of TDM by utilizing host-derived glucose as a competitive substrate for their mycolyltransferases; however, this enzymatic reaction results in the concomitant biosynthesis of glucose monomycolate (GMM), which is recognized by the acquired immune system. GMM-specific, CD1-restricted T cell responses have been detected in the peripheral blood of infected human subjects and monkeys as well as in secondary lymphoid organs of small animals, such as guinea pigs and human CD1-transgenic mice. Nevertheless, it remains to be determined how tissues respond at the site where GMM is produced. Here we found that rhesus macaques vaccinated with Mycobacterium bovis bacillus Calmette–Guerin mounted a chemokine response in GMM-challenged skin that was favorable for recruiting T helper (Th)1 T cells. Indeed, the expression of interferon-γ, but not Th2 or Th17 cytokines, was prominent in the GMM-injected tissue. The GMM-elicited tissue response was also associated with the expression of monocyte/macrophage-attracting CC chemokines, such as CCL2, CCL4 and CCL8. Furthermore, the skin response to GMM involved the up-regulated expression of granulysin and perforin. Given that GMM is produced primarily by pathogenic mycobacteria proliferating within the host, the Th1-skewed tissue response to GMM may function efficiently at the site of infection.

  12. Statistical modeling, detection, and segmentation of stains in digitized fabric images

    NASA Astrophysics Data System (ADS)

    Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.

    2007-02-01

    This paper will describe a novel and automated system, based on a computer vision approach, for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision theoretic problem, where the null hypothesis corresponds to the absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first-order GMM and a second-order GMM, respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, Ragu sauce, Revlon makeup, and grape juice. The decision theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on this set of images.
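
    The model-order test (first-order vs. second-order GMM) can be sketched with BIC, an MDL-style model-selection criterion, in place of the paper's exact statistic: a stain adds a second mode to the pixel distribution, so two components describe stained fabric more cheaply than one. The pixel data below are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def stain_present(pixels):
    """Compare 1- vs 2-component GMM fits of pixel values by BIC."""
    pixels = np.asarray(pixels).reshape(-1, 1)
    bic1 = GaussianMixture(n_components=1, random_state=0).fit(pixels).bic(pixels)
    bic2 = GaussianMixture(n_components=2, random_state=0).fit(pixels).bic(pixels)
    return bool(bic2 < bic1)   # second mode justifies its extra parameters

rng = np.random.default_rng(4)
clean = rng.normal(0.6, 0.02, 2000)                      # uniform fabric shade
stained = np.concatenate([rng.normal(0.6, 0.02, 1800),
                          rng.normal(0.3, 0.02, 200)])   # darker stain pixels
print(stain_present(clean), stain_present(stained))
```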

  13. Portable Brain-Computer Interface for the Intensive Care Unit Patient Communication Using Subject-Dependent SSVEP Identification.

    PubMed

    Dehzangi, Omid; Farooq, Muhamed

    2018-01-01

    A major predicament for Intensive Care Unit (ICU) patients is inconsistent and ineffective means of communication. Patients rated most communication sessions as difficult and unsuccessful. This, in turn, can cause distress, unrecognized pain, anxiety, and fear. As such, we designed a portable BCI system for ICU communications (BCI4ICU) optimized to operate effectively in an ICU environment. The system utilizes a wearable EEG cap coupled with an Android app, designed on a mobile device, that serves as the visual stimulation and data processing module. Furthermore, to overcome the challenges that BCI systems face today in real-world scenarios, we propose a novel subject-specific Gaussian Mixture Model- (GMM-) based training and adaptation algorithm. First, we incorporate subject-specific information in the training phase of the SSVEP identification model using GMM-based training and adaptation. We evaluate subject-specific models against other subjects. Subsequently, from the GMM discriminative scores, we generate the transformed vectors, which are passed to our predictive model. Finally, the adapted mixture mean scores of the subject-specific GMMs are utilized to generate the high-dimensional supervectors. Our experimental results demonstrate that the proposed system achieved 98.7% average identification accuracy, which is promising for providing effective and consistent communication for patients in intensive care.

  14. An improved gravity model for Mars: Goddard Mars Model-1 (GMM-1)

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Lerch, F. J.; Nerem, R. S.; Zuber, M. T.; Patel, G. B.; Fricke, S. K.; Lemoine, F. G.

    1993-01-01

    Doppler tracking data of three orbiting spacecraft have been reanalyzed to develop a new gravitational field model for the planet Mars, GMM-1 (Goddard Mars Model-1). This model employs nearly all available data, consisting of approximately 1100 days of S-band tracking data collected by NASA's Deep Space Network from the Mariner 9, Viking 1, and Viking 2 spacecraft, in seven different orbits, between 1971 and 1979. GMM-1 is complete to spherical harmonic degree and order 50, which corresponds to a half-wavelength spatial resolution of 200-300 km where the data permit. GMM-1 represents satellite orbits with considerably better accuracy than previous Mars gravity models and shows greater resolution of identifiable geological structures. The notable improvement of GMM-1 over previous models is a consequence of several factors: improved computational capabilities; the use of optimum weighting and least-squares collocation solution techniques, which stabilized the behavior of the solution at high degree and order; and the use of longer satellite arcs than employed in previous solutions, made possible by improved force and measurement models. The inclusion of X-band tracking data from the 379-km altitude, near-polar orbiting Mars Observer spacecraft should provide a significant improvement over GMM-1, particularly at high latitudes where the current data poorly resolve the gravitational signature of the planet.
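
    The quoted resolution follows from the usual half-wavelength rule for spherical harmonics, resolution ≈ πR/ℓ, which can be checked directly (the Mars radius value below is a nominal figure, not taken from the paper):

```python
import math

R_MARS_KM = 3396.2   # nominal equatorial radius of Mars
l_max = 50           # maximum spherical harmonic degree of GMM-1

# Half-wavelength spatial resolution of a degree-l harmonic: pi * R / l
resolution_km = math.pi * R_MARS_KM / l_max
print(round(resolution_km))  # -> 213, at the fine end of the quoted 200-300 km
```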

  15. A Gaussian Mixture Model Representation of Endmember Variability in Hyperspectral Unmixing

    NASA Astrophysics Data System (ADS)

    Zhou, Yuan; Rangarajan, Anand; Gader, Paul D.

    2018-05-01

    Hyperspectral unmixing while considering endmember variability is usually performed by the normal compositional model (NCM), where the endmembers for each pixel are assumed to be sampled from unimodal Gaussian distributions. However, in real applications, the distribution of a material is often not Gaussian. In this paper, we use Gaussian mixture models (GMM) to represent the endmember variability. We show, given the GMM starting premise, that the distribution of the mixed pixel (under the linear mixing model) is also a GMM (and this is shown from two perspectives). The first perspective originates from the random variable transformation and gives a conditional density function of the pixels given the abundances and GMM parameters. With proper smoothness and sparsity prior constraints on the abundances, the conditional density function leads to a standard maximum a posteriori (MAP) problem which can be solved using generalized expectation maximization. The second perspective originates from marginalizing over the endmembers in the GMM, which provides us with a foundation to solve for the endmembers at each pixel. Hence, our model can not only estimate the abundances and distribution parameters, but also the distinct endmember set for each pixel. We tested the proposed GMM on several synthetic and real datasets, and showed its potential by comparing it to current popular methods.
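
    The paper's starting claim, that a linear mixture of GMM-distributed endmembers is again a GMM, can be checked numerically: the mixed pixel's density is a GMM over component combinations, so its mean must equal the abundance-weighted sum of the endmember GMM means. The abundances and endmember parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
a = np.array([0.7, 0.3])   # abundances (sum to 1)

# Each endmember's variability as a 1-D two-component GMM: (weight, mean, std)
gmms = [[(0.5, 0.0, 0.1), (0.5, 1.0, 0.1)],
        [(0.8, 2.0, 0.2), (0.2, 3.0, 0.2)]]

def sample_endmember(gmm, n):
    comp = rng.choice(len(gmm), size=n, p=[w for w, _, _ in gmm])
    mu = np.array([m for _, m, _ in gmm])[comp]
    sd = np.array([s for _, _, s in gmm])[comp]
    return rng.normal(mu, sd)

# Linear mixing model: pixel = sum_j a_j * endmember_j
n = 200_000
pixel = sum(a_j * sample_endmember(g, n) for a_j, g in zip(a, gmms))

# The resulting density is itself a GMM over component combinations;
# its mean is the abundance-weighted sum of the endmember GMM means
analytic_mean = sum(a_j * sum(w * m for w, m, _ in g) for a_j, g in zip(a, gmms))
print(pixel.mean(), analytic_mean)
```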

  16. GMM-based speaker age and gender classification in Czech and Slovak

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Matoušek, Jindřich

    2017-01-01

    The paper describes an experiment using Gaussian mixture models (GMM) for automatic classification of speaker age and gender. It analyses and compares the influence of different numbers of mixtures and different types of speech features used for GMM gender/age classification. The dependence of the computational complexity on the number of mixtures used is also analysed. Finally, the GMM classification accuracy is compared with the output of conventional listening tests. The results of these objective and subjective evaluations are in good agreement.

  17. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McClanahan, Richard; De Leon, Phillip L.

    The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of these systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.
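
    The shortlisting idea behind a tree-structured UBM can be sketched in one dimension: score a coarse merged layer first, then evaluate only the children of the best-scoring parents. The toy parameters below are illustrative, and simple averaging stands in for Runnalls' mixture reduction.

```python
import numpy as np
from scipy.stats import norm

# A toy 1-D "UBM" with 8 equal-weight components, and a merged layer above it
leaf_means = np.array([-7., -5., -3., -1., 1., 3., 5., 7.])
parent_means = leaf_means.reshape(4, 2).mean(axis=1)   # coarse (reduced) layer

def top_leaf(x, keep=2):
    """Score the coarse layer, then expand only the best `keep` parents.
    Evaluates 4 + 2*keep Gaussians instead of all 8."""
    parent_ll = norm.logpdf(x, parent_means, 2.0)
    best = np.argsort(parent_ll)[-keep:]
    leaves = np.concatenate([[2 * b, 2 * b + 1] for b in best])
    leaf_ll = norm.logpdf(x, leaf_means[leaves], 1.0)
    return leaves[np.argmax(leaf_ll)]

print(leaf_means[top_leaf(2.8)])  # -> 3.0, the nearest leaf mean
```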

  18. Reducing computation in an i-vector speaker recognition system using a tree-structured universal background model

    DOE PAGES

    McClanahan, Richard; De Leon, Phillip L.

    2014-08-20

    The majority of state-of-the-art speaker recognition (SR) systems utilize speaker models that are derived from an adapted universal background model (UBM) in the form of a Gaussian mixture model (GMM). This is true for GMM supervector systems, joint factor analysis systems, and most recently i-vector systems. In all of these systems, the posterior probability and sufficient statistics calculations represent a computational bottleneck in both enrollment and testing. We propose a multi-layered hash system, employing a tree-structured GMM–UBM which uses Runnalls' Gaussian mixture reduction technique, in order to reduce the number of these calculations. Moreover, with this tree-structured hash, we can trade off a reduction in computation against a corresponding degradation of equal error rate (EER). As an example, we reduce this computation by a factor of 15× while incurring less than 10% relative degradation of EER (or 0.3% absolute EER) when evaluated with NIST 2010 speaker recognition evaluation (SRE) telephone data.

  19. The dynamic relationships between economic status and health measures among working-age adults in the United States.

    PubMed

    Meraya, Abdulkarim M; Dwibedi, Nilanjana; Tan, Xi; Innes, Kim; Mitra, Sophie; Sambamoorthi, Usha

    2018-04-18

    We examine the dynamic relationships between economic status and health measures using data from 8 waves of the Panel Study of Income Dynamics from 1999 to 2013. Health measures are self-rated health (SRH) and functional limitations; economic status measures are labor income (earnings), family income, and net wealth. We use 3 different types of models: (a) ordinary least squares regression, (b) first-difference models, and (c) the system generalized method of moments (GMM). Using ordinary least squares regression and first-difference models, we find that higher levels of economic status are associated with better SRH and functional status among both men and women, although declines in income and wealth are associated with a decline in health for men only. Using system-GMM estimators, we find evidence of a causal link from labor income to SRH and functional status for both genders. Among men only, system-GMM results indicate that there is a causal link from net wealth to SRH and functional status. Results overall highlight the need for integrated economic and health policies, and for policies that mitigate the potential adverse health effects of short-term changes in economic status. Copyright © 2018 John Wiley & Sons, Ltd.

  20. Adaptive Elastic Net for Generalized Methods of Moments.

    PubMed

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in a generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets, such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimator's lack of a closed-form solution. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables, with the redundant parameters set to zero via a data-dependent technique. This method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit while the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.

  1. An improved gravity model for Mars: Goddard Mars Model 1

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Lerch, F. J.; Nerem, R. S.; Zuber, M. T.; Patel, G. B.; Fricke, S. K.; Lemoine, F. G.

    1993-01-01

    Doppler tracking data of three orbiting spacecraft have been reanalyzed to develop a new gravitational field model for the planet Mars, Goddard Mars Model 1 (GMM-1). This model employs nearly all available data, consisting of approximately 1100 days of S band tracking data collected by NASA's Deep Space Network from the Mariner 9, Viking 1, and Viking 2 spacecraft, in seven different orbits, between 1971 and 1979. GMM-1 is complete to spherical harmonic degree and order 50, which corresponds to a half-wavelength spatial resolution of 200-300 km where the data permit. GMM-1 represents satellite orbits with considerably better accuracy than previous Mars gravity models and shows greater resolution of identifiable geological structures. The notable improvement in GMM-1 over previous models is a consequence of several factors: improved computational capabilities, the use of optimum weighting and least squares collocation solution techniques which stabilized the behavior of the solution at high degree and order, and the use of longer satellite arcs than employed in previous solutions that were made possible by improved force and measurement models. The inclusion of X band tracking data from the 379-km altitude, near-polar orbiting Mars Observer spacecraft should provide a significant improvement over GMM-1, particularly at high latitudes where current data poorly resolve the gravitational signature of the planet.

  2. The Potential of Growth Mixture Modelling

    ERIC Educational Resources Information Center

    Muthen, Bengt

    2006-01-01

    The authors of the paper on growth mixture modelling (GMM) give a description of GMM and related techniques as applied to antisocial behaviour. They bring up the important issue of choice of model within the general framework of mixture modelling, especially the choice between latent class growth analysis (LCGA) techniques developed by Nagin and…

  3. Selecting salient frames for spatiotemporal video modeling and segmentation.

    PubMed

    Song, Xiaomu; Fan, Guoliang

    2007-12-01

    We propose a new statistical generative model for spatiotemporal video segmentation. The objective is to partition a video sequence into homogeneous segments that can be used as "building blocks" for semantic video segmentation. The baseline framework is a Gaussian mixture model (GMM)-based video modeling approach that involves a six-dimensional spatiotemporal feature space. Specifically, we introduce the concept of frame saliency to quantify the relevancy of a video frame to the GMM-based spatiotemporal video modeling. This helps us use a small set of salient frames to facilitate the model training by reducing data redundancy and irrelevance. A modified expectation maximization algorithm is developed for simultaneous GMM training and frame saliency estimation, and the frames with the highest saliency values are extracted to refine the GMM estimation for video segmentation. Moreover, it is interesting to find that frame saliency can imply some object behaviors. This makes the proposed method also applicable to other frame-related video analysis tasks, such as key-frame extraction, video skimming, etc. Experiments on real videos demonstrate the effectiveness and efficiency of the proposed method.

  4. Identification of damage in composite structures using Gaussian mixture model-processed Lamb waves

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Ma, Shuxian; Yue, Dong

    2018-04-01

    Composite materials have comprehensively better properties than traditional materials and have therefore been used more and more widely, especially because of their higher strength-to-weight ratio. However, damage in composite structures is usually varied and complicated. In order to ensure the security of these structures, it is necessary to monitor and distinguish structural damage in a timely manner. Lamb wave-based structural health monitoring (SHM) has been proved to be effective in online structural damage detection and evaluation; furthermore, the characteristic parameters of the multi-mode Lamb wave vary in response to different types of damage in the composite material. This paper studies a damage identification approach for composite structures using the Lamb wave and the Gaussian mixture model (GMM). The algorithm and principle of the GMM, and its parameter estimation, are introduced. Multiple statistical characteristic parameters of the excited Lamb waves are extracted, and a parameter space with reduced dimensions is adopted by principal component analysis (PCA). The damage identification system using the GMM is then established through training. Experiments on a glass fiber-reinforced epoxy composite laminate plate are conducted to verify the feasibility of the proposed approach in terms of damage classification. The experimental results show that different types of damage can be identified according to the value of the likelihood function of the GMM.

  5. Accelerated Gaussian mixture model and its application on image segmentation

    NASA Astrophysics Data System (ADS)

    Zhao, Jianhui; Zhang, Yuanyuan; Ding, Yihua; Long, Chengjiang; Yuan, Zhiyong; Zhang, Dengyi

    2013-03-01

The Gaussian mixture model (GMM) has been widely used for image segmentation in recent years due to its superior adaptability and simplicity of implementation. However, the traditional GMM has the disadvantage of high computational complexity. In this paper an accelerated GMM is designed, for which the following approaches are adopted: a lookup table is established for the Gaussian probability matrix to avoid repetitive probability calculations over all pixels; a blocking detection method is employed on each block of pixels to further decrease the complexity; and the structure of the lookup table is changed from 3D to 1D, with a simpler data type, to reduce the space requirement. The accelerated GMM is applied to image segmentation with the help of the OTSU method, which decides the threshold value automatically. Our algorithm has been tested on segmentation of flames and faces from a set of real pictures, and the experimental results prove its efficiency in both segmentation precision and computational cost.

  6. Transition Flight Control Room Automation

    NASA Technical Reports Server (NTRS)

    Welborn, Curtis Ray

    1990-01-01

The Workstation Prototype Laboratory is currently working on a number of projects which we feel can have a direct impact on ground operations automation. These projects include: The Fuel Cell Monitoring System (FCMS), which will monitor and detect problems with the fuel cells on the Shuttle. FCMS will use a combination of rules (forward/backward) and multi-threaded procedures which run concurrently with the rules to implement the malfunction algorithms of the EGIL flight controllers. The combination of rule-based reasoning and procedural reasoning allows us to more easily map the malfunction algorithms into a real-time system implementation. A graphical computation language (AGCOMPL). AGCOMPL is an experimental prototype to determine the benefits and drawbacks of using a graphical language to design computations (algorithms) to work on Shuttle or Space Station telemetry and trajectory data. The design of a system which will allow a model of an electrical system, including telemetry sensors, to be configured on the screen graphically using previously defined electrical icons. This electrical model would then be used to generate rules and procedures for detecting malfunctions in the electrical components of the model. A generic message management (GMM) system. GMM is being designed as a message management system for real-time applications which send advisory messages to a user. The primary purpose of GMM is to reduce the risk of overloading a user with information when multiple failures occur and to assist the developer in devising an explanation facility. The emphasis of our work is to develop practical tools and techniques while determining the feasibility of a given approach, including identification of appropriate software tools to support research, application and tool-building activities.

  7. High-resolution compact spectrometer based on a custom-printed varied-line-spacing concave blazed grating.

    PubMed

    Chen, Jianwei; Chen, Wang; Zhang, Guodong; Lin, Hui; Chen, Shih-Chi

    2017-05-29

We present the modeling, design and characterization of a compact spectrometer achieving a resolution better than 1.5 nm throughout the visible spectrum (360-825 nm). The key component in the spectrometer is a custom-printed varied-line-spacing (VLS) concave blazed grating, where the groove density decreases linearly from the center of the grating (530 g/mm) to the edge (528 g/mm), the line spacing varying at a rate of 0.58 nm/mm. Parametric models have been established to deterministically link the system performance with the VLS grating design parameters, e.g., groove density and line-space varying rate, and to minimize the system footprint. Simulations have been performed in ZEMAX to confirm the results, indicating a 15% enhancement in system resolution versus common constant line-space (CLS) gratings. Next, the VLS concave blazed grating is fabricated via our vacuum nanoimprinting system, where a polydimethylsiloxane (PDMS) stamp is non-uniformly expanded to form the varied-line-spacing pattern from a planar commercial grating master (600 g/mm) for precision imprinting. The concave blazed grating is measured to have an absolute diffraction efficiency of 43%, higher than the typical holographic gratings (~30%) used in commercial compact spectrometers. The completed compact spectrometer contains only one optical component, i.e., the VLS concave grating, together with an entrance slit and a linear photodetector array, achieving a footprint of 11 × 11 × 3 cm³, which makes it the most compact, highest-resolution (1.46 nm) spectrometer of its kind.

  8. Methods and Measures: Growth Mixture Modeling--A Method for Identifying Differences in Longitudinal Change among Unobserved Groups

    ERIC Educational Resources Information Center

    Ram, Nilam; Grimm, Kevin J.

    2009-01-01

    Growth mixture modeling (GMM) is a method for identifying multiple unobserved sub-populations, describing longitudinal change within each unobserved sub-population, and examining differences in change among unobserved sub-populations. We provide a practical primer that may be useful for researchers beginning to incorporate GMM analysis into their…

  9. Evaluation of Spectral and Prosodic Features of Speech Affected by Orthodontic Appliances Using the Gmm Classifier

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Ďuračkoá, Daniela

    2014-01-01

The paper describes our experiment with using Gaussian mixture models (GMM) for classification of speech uttered by a person wearing orthodontic appliances. For the GMM classification, the input feature vectors comprise the basic and complementary spectral properties as well as the supra-segmental parameters. The dependence of classification correctness on the number of parameters in the input feature vector and on the computational complexity is also evaluated. In addition, the influence of the initial parameter settings on the GMM training process was analyzed. The obtained recognition results are compared visually in the form of graphs as well as numerically in the form of tables and confusion matrices for the tested sentences uttered using three configurations of orthodontic appliances.

  10. Multi-atlas segmentation for abdominal organs with Gaussian mixture models

    NASA Astrophysics Data System (ADS)

    Burke, Ryan P.; Xu, Zhoubing; Lee, Christopher P.; Baucom, Rebeccah B.; Poulose, Benjamin K.; Abramson, Richard G.; Landman, Bennett A.

    2015-03-01

Abdominal organ segmentation with clinically acquired computed tomography (CT) is drawing increasing interest in the medical imaging community. Gaussian mixture models (GMM) have been used extensively in medical image segmentation, most notably in the brain for cerebrospinal fluid / gray matter / white matter differentiation. Because abdominal CT exhibits strong localized intensity characteristics, GMM have recently been incorporated in multi-stage abdominal segmentation algorithms. In the context of variable abdominal anatomy and rich algorithms, it is difficult to assess the marginal contribution of GMM. Herein, we characterize the efficacy of an a posteriori framework that integrates a GMM of organ-wise intensity likelihood with spatial priors from multiple target-specific registered labels. In our study, we first manually labeled 100 CT images. Then, we assigned 40 images as training data for constructing target-specific spatial priors and intensity likelihoods. The remaining 60 images were evaluated as test targets for segmenting 12 abdominal organs. The overlap between the true and the automatic segmentations was measured by the Dice similarity coefficient (DSC). A median improvement of 145% was achieved by integrating the GMM intensity likelihood with the target-specific spatial prior. The proposed framework opens opportunities for abdominal organ segmentation by efficiently using both the spatial and appearance information from the atlases, and creates a benchmark for large-scale automatic abdominal segmentation.
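The a posteriori combination can be illustrated with a toy 1-D example: an organ-wise intensity likelihood multiplied by a spatial prior recovers boundaries that intensity alone cannot. All numbers below are assumed, and a single Gaussian stands in for the per-organ GMM:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 1-D "scan": organ 0 occupies the left half, organ 1 the right half,
# with heavily overlapping intensities (all values assumed).
n = 400
true = (np.arange(n) >= n // 2).astype(int)
intensity = np.where(true == 0, rng.normal(40, 15, n), rng.normal(60, 15, n))

def gauss(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

# Organ-wise intensity likelihood (a single Gaussian per organ here;
# the paper fits a GMM per organ).
like = np.stack([gauss(intensity, 40.0, 225.0), gauss(intensity, 60.0, 225.0)])

# Spatial prior: a smooth ramp standing in for averaged registered labels.
p1 = np.clip((np.arange(n) - n // 2 + 50) / 100, 0.01, 0.99)
prior = np.stack([1 - p1, p1])

post = like * prior                                  # unnormalized posterior
acc_like = (like.argmax(axis=0) == true).mean()      # intensity alone
acc_post = (post.argmax(axis=0) == true).mean()      # likelihood x prior
```

Away from the organ boundary the spatial prior dominates and suppresses intensity-driven errors; near the boundary the likelihood still decides.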

  11. An Improved Gaussian Mixture Model for Damage Propagation Monitoring of an Aircraft Wing Spar under Changing Structural Boundary Conditions.

    PubMed

    Qiu, Lei; Yuan, Shenfang; Mei, Hanfei; Fang, Fang

    2016-02-26

Structural Health Monitoring (SHM) technology is considered to be a key technology to reduce maintenance cost while ensuring the operational safety of aircraft structures. It has gradually developed from theoretical and fundamental research to real-world engineering applications in recent decades. The problem of reliable damage monitoring under time-varying conditions is a main issue for aerospace engineering applications of SHM technology. Among the existing SHM methods, the Guided Wave (GW) and piezoelectric sensor-based SHM technique is a promising one due to its high damage sensitivity and long monitoring range; nevertheless, its reliability problem must be addressed. Several methods, including environmental parameter compensation, baseline signal dependency reduction and data normalization, have been well studied, but limitations remain. This paper proposes a damage propagation monitoring method based on an improved Gaussian Mixture Model (GMM). It can be used on-line without any structural mechanical model or a priori knowledge of damage and time-varying conditions. With this method, a baseline GMM is constructed first, based on the GW features obtained under time-varying conditions while the structure under monitoring is in the healthy state. When a new GW feature is obtained during the on-line damage monitoring process, the GMM can be updated by an adaptive migration mechanism including dynamic learning and Gaussian component split-merge, so that the mixture probability distribution structure of the GMM and the number of Gaussian components are optimized adaptively, yielding an on-line GMM. Finally, a best-match-based Kullback-Leibler (KL) divergence is studied to measure the migration degree between the baseline GMM and the on-line GMM, revealing the weak cumulative changes of damage propagation mixed into the time-varying influence. A wing spar of an aircraft is used to validate the proposed method. The results indicate that crack propagation under changing structural boundary conditions can be monitored reliably. The method is not limited by the properties of the structure, and thus it is feasible to apply it to composite structures.
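A sketch of a matching-based KL approximation in the spirit of the paper's best-match divergence (exact KL between two GMMs has no closed form): each baseline component is paired with its closest on-line component. The component values below are invented for illustration:

```python
import numpy as np

def kl_gauss(m0, v0, m1, v1):
    """Closed-form KL divergence between two diagonal Gaussians."""
    return 0.5 * np.sum(np.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1)

def kl_gmm_best_match(w_a, mu_a, var_a, w_b, mu_b, var_b):
    """Matching-based approximation of KL(A || B): each component of A is
    paired with the component of B that explains it best."""
    total = 0.0
    for wa, ma, va in zip(w_a, mu_a, var_a):
        costs = [kl_gauss(ma, va, mb, vb) - np.log(wb)
                 for wb, mb, vb in zip(w_b, mu_b, var_b)]
        total += wa * (min(costs) + np.log(wa))
    return total

# Baseline GMM vs. an "on-line" GMM whose components have drifted,
# mimicking cumulative damage mixed into time-varying variation.
w = np.array([0.5, 0.5])
mu0 = np.array([[0.0, 0.0], [3.0, 3.0]])
var0 = np.ones((2, 2))
mu1 = mu0 + 0.5                                  # drifted means
kl_same = kl_gmm_best_match(w, mu0, var0, w, mu0, var0)
kl_drift = kl_gmm_best_match(w, mu0, var0, w, mu1, var0)
```

The divergence is zero for an unchanged mixture and grows monotonically with component drift, which is what makes it usable as a damage indicator.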

  12. Crack propagation monitoring in a full-scale aircraft fatigue test based on guided wave-Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Qiu, Lei; Yuan, Shenfang; Bao, Qiao; Mei, Hanfei; Ren, Yuanqiang

    2016-05-01

For aerospace application of structural health monitoring (SHM) technology, the problem of reliable damage monitoring under time-varying conditions must be addressed, and the SHM technology has to be fully validated on real aircraft structures under realistic load conditions on the ground before it can reach the status of flight test. In this paper, the guided wave (GW) based SHM method is applied to a full-scale aircraft fatigue test, one of the test conditions most similar to flight test. To deal with the time-varying problem, a GW-Gaussian mixture model (GW-GMM) is proposed. The probability characteristics of the GW features introduced by time-varying conditions are modeled by the GW-GMM. The weak cumulative variation trend of crack propagation, which is mixed in with the time-varying influence, can be tracked through GW-GMM migration during the on-line damage monitoring process. A best-match-based Kullback-Leibler divergence is proposed to measure the GW-GMM migration degree and thereby reveal the crack propagation. The method is validated in the full-scale aircraft fatigue test. The validation results indicate that reliable crack propagation monitoring of the left landing gear spar and the right wing panel under realistic load conditions is achieved.

  13. High-speed cell recognition algorithm for ultrafast flow cytometer imaging system.

    PubMed

    Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang

    2018-04-01

An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented speed and resolution, but it produces a significant amount of raw image data. A high-speed cell recognition algorithm is therefore in high demand to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates the distance transform and the watershed algorithm to separate clustered cells. Finally, the detected cells are classified by the GMM. We compared the performance of our algorithm with a support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
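The second detection stage can be sketched with scipy: the distance transform of a binary mask peaks once per cell, so touching cells that plain connected-component labeling sees as one blob become separable markers. The geometry below is synthetic, and the final watershed step (which would grow the markers back to full cell boundaries) is omitted for brevity:

```python
import numpy as np
from scipy import ndimage as ndi

# Two touching "cells": overlapping disks in a binary mask.
yy, xx = np.mgrid[0:60, 0:60]
cell1 = (yy - 30) ** 2 + (xx - 20) ** 2 <= 121
cell2 = (yy - 30) ** 2 + (xx - 40) ** 2 <= 121
mask = cell1 | cell2
_, n_connected = ndi.label(mask)          # plain labeling sees one blob

# The distance transform peaks once per cell; cutting it at a fraction
# of its maximum yields one marker ("core") per cell.
dist = ndi.distance_transform_edt(mask)
cores, n_cells = ndi.label(dist > 0.6 * dist.max())
```

The separated cores would then serve as watershed markers, and each resulting cell region would be classified by the GMM as in the final stage above.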

  14. High-speed cell recognition algorithm for ultrafast flow cytometer imaging system

    NASA Astrophysics Data System (ADS)

    Zhao, Wanyue; Wang, Chao; Chen, Hongwei; Chen, Minghua; Yang, Sigang

    2018-04-01

An optical time-stretch flow imaging system enables high-throughput examination of cells/particles with unprecedented speed and resolution, but it produces a significant amount of raw image data. A high-speed cell recognition algorithm is therefore in high demand to analyze large amounts of data efficiently. A high-speed cell recognition algorithm consisting of two-stage cascaded detection and Gaussian mixture model (GMM) classification is proposed. The first stage of detection extracts cell regions. The second stage integrates the distance transform and the watershed algorithm to separate clustered cells. Finally, the detected cells are classified by the GMM. We compared the performance of our algorithm with a support vector machine. Results show that our algorithm increases the running speed by over 150% without sacrificing recognition accuracy. This algorithm provides a promising solution for high-throughput and automated cell imaging and classification in the ultrafast flow cytometer imaging platform.

  15. The Use of Growth Mixture Modeling for Studying Resilience to Major Life Stressors in Adulthood and Old Age: Lessons for Class Size and Identification and Model Selection.

    PubMed

    Infurna, Frank J; Grimm, Kevin J

    2017-12-15

Growth mixture modeling (GMM) combines latent growth curve and mixture modeling approaches and is typically used to identify discrete trajectories following major life stressors (MLS). However, GMM is often applied to data that do not meet the statistical assumptions of the model (e.g., within-class normality), and researchers often do not test additional model constraints (e.g., homogeneity of variance across classes), which can lead to incorrect conclusions regarding the number and nature of the trajectories. We evaluate how these methodological assumptions influence trajectory size and identification in the study of resilience to MLS. We use data on changes in subjective well-being and depressive symptoms following spousal loss from the HILDA and HRS. Findings differ drastically when constraining the variances to be homogeneous versus heterogeneous across trajectories, with overextraction being more common under the homogeneity constraint. When the data are non-normally distributed, assuming normally distributed data increases the extraction of latent classes. Our findings show that the assumptions typically underlying GMM are not tenable, influencing trajectory size and identification and, most importantly, misinforming conceptual models of resilience. The discussion focuses on how GMM can be leveraged to effectively examine trajectories of adaptation following MLS and on avenues for future research. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Research on hysteresis loop considering the prestress effect and electrical input dynamics for a giant magnetostrictive actuator

    NASA Astrophysics Data System (ADS)

    Zhu, Yuchuan; Yang, Xulei; Wereley, Norman M.

    2016-08-01

In this paper we focus on the application-oriented giant magnetostrictive material (GMM)-based electro-hydrostatic actuator, which features an applied magnetic field of high frequency and high amplitude, and on the static and dynamic characteristics of a giant magnetostrictive actuator (GMA), taking into account the prestress effect on the GMM rod and the electrical input dynamics involving the power amplifier and the inductive coil. A methodology for studying the static and dynamic characteristics of a GMA using the hysteresis loop as a tool is developed. A GMA that can display the pre-force on the GMM rod in real time is designed, and a magnetostrictive model dependent on the prestress on the GMM rod, instead of the existing quadratic domain rotation model, is proposed. Additionally, an electrical input dynamics model to excite the GMA is developed according to the simplified circuit diagram, and the corresponding parameters are identified from experimental data. A dynamic magnetization model with the eddy current effect is deduced from the Jiles-Atherton model and the Maxwell equations. Next, all of the parameters, covering the electrical input characteristics, the dynamic magnetization and the mechanical structure of the GMA, are identified from the experimental current, magnetization and displacement responses, respectively. Finally, a comprehensive comparison between the model results and experimental data is performed, and the results show that the test data agree well with the presented model. An analysis of the relation between the GMA displacement response and the parameters of the electrical input dynamics, magnetization dynamics and mechanical structural dynamics is also performed.

  17. High degree gravitational sensitivity from Mars orbiters for the GMM-1 gravity model

    NASA Technical Reports Server (NTRS)

    Lerch, F. J.; Smith, D. E.; Chan, J. C.; Patel, G. B.; Chinn, D. S.

    1994-01-01

Orbital sensitivity of the gravity field for high-degree terms (greater than 30) is analyzed on the satellites employed in the Goddard Mars Model GMM-1, complete in spherical harmonics through degree and order 50. The model is obtained from S-band Doppler data on the Mariner 9 (M9), Viking Orbiter 1 (VO1), and Viking Orbiter 2 (VO2) spacecraft, which were tracked by the NASA Deep Space Network on seven different highly eccentric orbits. The main sensitivity of the high-degree terms is obtained from the VO1 and VO2 low orbits (300 km periapsis altitude), where significant spectral sensitivity is seen for all degrees out through degree 50. The velocity perturbations show a dominant effect at periapsis and significant effects out beyond the semi-latus rectum, covering over 180 degrees of the orbital groundtrack for the low-altitude orbits. Because of the wide band of periapsis motion, covering nearly 180 degrees in the argument of periapsis (ω) and ±39 degrees in latitude, the VO1 300 km periapsis altitude orbit with an inclination of 39 degrees gave the dominant sensitivity in the GMM-1 solution for the high-degree terms. Although the VO2 low periapsis orbit has a smaller band of periapsis mapping coverage, it strongly complements the VO1 orbit sensitivity in the GMM-1 solution, with Doppler tracking coverage at a different inclination of 80 degrees.

  18. Analysis of the human female foot in two different measurement systems: from geometric morphometrics to functional morphology.

    PubMed

    Bookstein, Fred L; Domjanić, Jacqueline

    2014-09-01

The relationship of geometric morphometrics (GMM) to functional analysis of the same morphological resources is currently a topic of active interest among functional morphologists. Although GMM is typically advertised as free of prior assumptions about shape features or morphological theories, it is common for GMM findings to be concordant with findings from studies based on a-priori lists of shape features whenever prior insights or theories have been properly accounted for in the study design. The present paper demonstrates this happy possibility by revisiting a previously published GMM analysis of footprint outlines for which there is also functionally relevant information in the form of a-priori foot measurements. We show how to convert the conventional measurements into the language of shape, thereby affording two parallel statistical analyses. One is the classic multivariate analysis of "shape features", the other the equally classic GMM of semilandmark coordinates. In this example, the two data sets, analyzed by protocols that are remarkably different in both their geometry and their algebra, nevertheless result in one common biometrical summary: wearing high heels is bad for women inasmuch as it leads to the need for orthotic devices to treat the consequently flattened arch. This concordance bears implications for other branches of applied anthropology. To carry out a good biomedical analysis of applied anthropometric data it may not matter whether one uses GMM or instead an adequate assortment of conventional measurements. What matters is whether the conventional measurements have been selected in order to match the natural spectrum of functional variation.

  19. Mixture class recovery in GMM under varying degrees of class separation: frequentist versus Bayesian estimation.

    PubMed

    Depaoli, Sarah

    2013-06-01

Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight into the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
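The full growth-mixture simulation cannot be reproduced in a few lines, but the contrast between maximum likelihood EM and a Bayesian fit with an informative prior can be illustrated with scikit-learn's mixture estimators on a deliberately poorly separated mixture; all data and prior values below are assumed, and this is a plain Gaussian mixture rather than a latent growth curve model:

```python
import numpy as np
from sklearn.mixture import GaussianMixture, BayesianGaussianMixture

rng = np.random.default_rng(0)
# Two poorly separated latent classes (means 0 and 1.5, unit variance),
# with a 3:1 class imbalance; all values assumed for illustration.
X = np.vstack([rng.normal(0.0, 1.0, (150, 1)),
               rng.normal(1.5, 1.0, (50, 1))])

# Maximum likelihood via EM.
ml = GaussianMixture(n_components=2, random_state=0).fit(X)

# Variational Bayesian fit with an informative prior on component means,
# centred between the two classes (a stand-in for an "accurate" prior).
bayes = BayesianGaussianMixture(n_components=2,
                                mean_prior=np.array([0.75]),
                                mean_precision_prior=1.0,
                                random_state=0).fit(X)
weights_ml, weights_bayes = ml.weights_, bayes.weights_
```

Under weak separation, the prior regularizes where the components land; comparing `weights_ml` and `weights_bayes` across seeds shows how estimation choices shift the recovered class proportions.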

  20. Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation

    PubMed Central

    Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.

    2010-01-01

This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimal Kullback–Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral-domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation–maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253

  1. Differentiation of low-attenuation intracranial hemorrhage and calcification using dual-energy computed tomography in a phantom system

    PubMed Central

    Nute, Jessica L.; Roux, Lucia Le; Chandler, Adam G.; Baladandayuthapani, Veera; Schellingerhout, Dawid; Cody, Dianna D.

    2015-01-01

    Objectives Calcific and hemorrhagic intracranial lesions with attenuation levels of <100 Hounsfield Units (HU) cannot currently be reliably differentiated by single-energy computed tomography (SECT). The proper differentiation of these lesion types would have a multitude of clinical applications. A phantom model was used to test the ability of dual-energy CT (DECT) to differentiate such lesions. Materials and Methods Agar gel-bound ferric oxide and hydroxyapatite were used to model hemorrhage and calcification, respectively. Gel models were scanned using SECT and DECT and organized into SECT attenuation-matched pairs at 16 attenuation levels between 0 and 100 HU. DECT data were analyzed using 3D Gaussian mixture models (GMMs), as well as a simplified threshold plane metric derived from the 3D GMM, to assign voxels to hemorrhagic or calcific categories. Accuracy was calculated by comparing predicted voxel assignments with actual voxel identities. Results We measured 6,032 voxels from each gel model, for a total of 193,024 data points (16 matched model pairs). Both the 3D GMM and its more clinically implementable threshold plane derivative yielded similar results, with >90% accuracy at matched SECT attenuation levels ≥50 HU. Conclusions Hemorrhagic and calcific lesions with attenuation levels between 50 and 100 HU were differentiable using DECT in a clinically relevant phantom system with >90% accuracy. This method warrants further testing for potential clinical applications. PMID:25162534
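The classification step can be sketched with a two-component GMM on synthetic dual-energy attenuation pairs (the study used 3D GMMs on phantom data; the means and covariances below are invented):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Invented stand-ins for (low-kVp, high-kVp) attenuation pairs: the two
# materials overlap in single-energy HU but separate in dual-energy space.
hem = rng.multivariate_normal([70, 55], [[25, 15], [15, 25]], 3000)
cal = rng.multivariate_normal([55, 70], [[25, 15], [15, 25]], 3000)
X = np.vstack([hem, cal])
y = np.array([0] * 3000 + [1] * 3000)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
pred = gmm.predict(X)
# Unsupervised component labels are arbitrary; align before scoring.
acc = max((pred == y).mean(), (pred != y).mean())
```

The fitted component boundary plays the role of the paper's simplified threshold plane: a fixed decision surface in dual-energy space that separates voxel classes the single-energy axis cannot.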

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soffientini, Chiara Dolores, E-mail: chiaradolores.soffientini@polimi.it; Baselli, Giuseppe; De Bernardi, Elisabetta

    Purpose: Quantitative {sup 18}F-fluorodeoxyglucose positron emission tomography is limited by the uncertainty in lesion delineation due to poor SNR, low resolution, and partial volume effects, subsequently impacting oncological assessment, treatment planning, and follow-up. The present work develops and validates a segmentation algorithm based on statistical clustering. The introduction of constraints based on background features and contiguity priors is expected to improve robustness vs clinical image characteristics such as lesion dimension, noise, and contrast level. Methods: An eight-class Gaussian mixture model (GMM) clustering algorithm was modified by constraining the mean and variance parameters of four background classes according to the previousmore » analysis of a lesion-free background volume of interest (background modeling). Hence, expectation maximization operated only on the four classes dedicated to lesion detection. To favor the segmentation of connected objects, a further variant was introduced by inserting priors relevant to the classification of neighbors. The algorithm was applied to simulated datasets and acquired phantom data. Feasibility and robustness toward initialization were assessed on a clinical dataset manually contoured by two expert clinicians. Comparisons were performed with respect to a standard eight-class GMM algorithm and to four different state-of-the-art methods in terms of volume error (VE), Dice index, classification error (CE), and Hausdorff distance (HD). Results: The proposed GMM segmentation with background modeling outperformed standard GMM and all the other tested methods. Medians of accuracy indexes were VE <3%, Dice >0.88, CE <0.25, and HD <1.2 in simulations; VE <23%, Dice >0.74, CE <0.43, and HD <1.77 in phantom data. Robustness toward image statistic changes (±15%) was shown by the low index changes: <26% for VE, <17% for Dice, and <15% for CE. 
Finally, robustness toward the user-dependent volume initialization was demonstrated. The inclusion of the spatial prior improved segmentation accuracy only for lesions surrounded by heterogeneous background: in the relevant simulation subset, the median VE significantly decreased from 13% to 7%. Results on clinical data were found in accordance with simulations, with absolute VE <7%, Dice >0.85, CE <0.30, and HD <0.81. Conclusions: The sole introduction of constraints based on background modeling outperformed standard GMM and the other tested algorithms. Insertion of a spatial prior improved the accuracy for realistic cases of objects in heterogeneous backgrounds. Moreover, robustness against initialization supports the applicability in a clinical setting. In conclusion, application-driven constraints can generally improve the capabilities of GMM and statistical clustering algorithms.« less

  3. Probabilistic Elastic Part Model: A Pose-Invariant Representation for Real-World Face Verification.

    PubMed

    Li, Haoxiang; Hua, Gang

    2018-04-01

Pose variation remains a major challenge for real-world face recognition. We approach this problem through a probabilistic elastic part model. We extract local descriptors (e.g., LBP or SIFT) from densely sampled multi-scale image patches. By augmenting each descriptor with its location, a Gaussian mixture model (GMM) is trained to capture the spatial-appearance distribution of the face parts of all face images in the training corpus, namely the probabilistic elastic part (PEP) model. Each mixture component of the GMM is confined to be a spherical Gaussian to balance the influence of the appearance and the location terms, which naturally defines a part. Given one or multiple face images of the same subject, the PEP-model builds its PEP representation by sequentially concatenating descriptors identified by each Gaussian component in a maximum likelihood sense. We further propose a joint Bayesian adaptation algorithm to adapt the universally trained GMM to better model the pose variations between the target pair of faces/face tracks, which consistently improves face verification accuracy. Our experiments show that we achieve state-of-the-art face verification accuracy with the proposed representations on the Labeled Face in the Wild (LFW) dataset, the YouTube video face database, and the CMU MultiPIE dataset.
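A minimal sketch of the PEP construction: descriptors augmented with location, a spherical-covariance GMM, and one maximum-likelihood descriptor kept per component. The descriptors here are random stand-ins, not LBP or SIFT features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-ins for 8-D local descriptors, each augmented with
# its normalized (x, y) patch location.
n_patches, d_desc = 500, 8
feats = np.hstack([rng.normal(size=(n_patches, d_desc)),
                   rng.uniform(0, 1, size=(n_patches, 2))])

# Spherical components balance the appearance and location terms, so each
# Gaussian behaves like a localized "part".
K = 5
pep = GaussianMixture(n_components=K, covariance_type='spherical',
                      random_state=0).fit(feats)

# PEP representation of one image: for each part, keep the single
# location-augmented descriptor most likely under that component.
diff = feats[:, None, :] - pep.means_                 # (N, K, D)
logp = -0.5 * ((diff ** 2).sum(-1) / pep.covariances_
               + feats.shape[1] * np.log(2 * np.pi * pep.covariances_))
best = logp.argmax(axis=0)                            # best patch per part
pep_repr = feats[best].ravel()                        # concatenated vector
```

Because each component selects its own best-matching patch regardless of where the face sits in the frame, the concatenated vector tolerates pose-induced shifts of the parts.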

  4. Investigating the environmental Kuznets curve hypothesis: the role of tourism and ecological footprint.

    PubMed

    Ozturk, Ilhan; Al-Mulali, Usama; Saboori, Behnaz

    2016-01-01

The main objective of this study is to examine the environmental Kuznets curve (EKC) hypothesis by utilizing the ecological footprint as an environment indicator and GDP from tourism as the economic indicator. To achieve this goal, an environmental degradation model is estimated over the period 1988-2008 for 144 countries. The results from the time-series generalized method of moments (GMM) and the system panel GMM revealed that countries exhibiting a negative relationship between the ecological footprint and its determinants (GDP growth from tourism, energy consumption, trade openness, and urbanization) are more numerous among the upper middle- and high-income countries. Moreover, the EKC hypothesis is more evident in the upper middle- and high-income countries than in the other income groups. Based on the outcome of this research, a number of policy recommendations are provided for the investigated countries.

  5. Safe electrode trajectory planning in SEEG via MIP-based vessel segmentation

    NASA Astrophysics Data System (ADS)

    Scorza, Davide; Moccia, Sara; De Luca, Giuseppe; Plaino, Lisa; Cardinale, Francesco; Mattos, Leonardo S.; Kabongo, Luis; De Momi, Elena

    2017-03-01

    Stereo-ElectroEncephaloGraphy (SEEG) is a surgical procedure that allows brain exploration in patients affected by focal epilepsy by placing intra-cerebral multi-lead electrodes. Electrode trajectory planning is challenging and time consuming: various constraints have to be taken into account simultaneously, such as the absence of vessels at the electrode Entry Point (EP), where bleeding is more likely to occur. In this paper, we propose a novel framework to help clinicians define a safe trajectory, focusing our attention on the EP. For each electrode, a Maximum Intensity Projection (MIP) image was obtained from Computed Tomography Angiography (CTA) slices of the first centimeter of brain tissue measured along the electrode trajectory. A Gaussian Mixture Model (GMM), modified to include a neighborhood prior through Markov Random Fields (GMM-MRF), is used to robustly segment vessels and deal with the noisy nature of MIP images. Results are compared with a simple GMM and manual global Thresholding (Th) by computing sensitivity, specificity, accuracy, and the Dice similarity index against manual segmentation performed under the supervision of an expert surgeon. The proposed framework can be easily integrated into manual and automatic planners to help the surgeon during the planning phase. GMM-MRF qualitatively outperformed GMM in reproducing the connected nature of brain vessels, even in the presence of the noise and image intensity drops typical of MIP images. With respect to Th, it is a completely automatic method and is not influenced by inter-subject variability.
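    The plain-GMM baseline that GMM-MRF is compared against amounts to fitting a two-component intensity mixture and assigning each pixel by posterior probability. A minimal NumPy sketch on a synthetic image (no MRF prior, invented intensities, not the authors' code):

```python
import numpy as np

def gmm2_segment(img, n_iter=60):
    """Segment an intensity image with a 2-component 1-D GMM fit by EM.

    Each pixel is assigned to the brighter or darker Gaussian by its
    posterior probability; label True marks the bright ("vessel") class.
    """
    x = img.ravel().astype(float)
    mu = np.array([x.min(), x.max()])      # init at the intensity extremes
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: per-pixel responsibilities of the two Gaussians
        p = w / np.sqrt(2 * np.pi * var) * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: update weights, means, variances
        nk = r.sum(0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(0) / nk
    p = w / np.sqrt(2 * np.pi * var) * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
    bright = int(np.argmax(mu))
    return (p[:, bright] / p.sum(axis=1) > 0.5).reshape(img.shape)

# Synthetic "MIP": dark noisy background with one bright 10x10 region
rng = np.random.default_rng(0)
img = rng.normal(20, 5, (64, 64))
img[20:30, 20:30] += 100
mask = gmm2_segment(img)
```

    The MRF extension in the record above would add a spatial smoothness term on top of these per-pixel posteriors; this sketch shows only the appearance model.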

  6. Estimating 4D CBCT from prior information and extremely limited angle projections using structural PCA and weighted free-form deformation for lung radiotherapy

    PubMed Central

    Harris, Wendy; Zhang, You; Yin, Fang-Fang; Ren, Lei

    2017-01-01

    Purpose To investigate the feasibility of using structural-based principal component analysis (PCA) motion-modeling and weighted free-form deformation to estimate on-board 4D-CBCT using prior information and extremely limited angle projections for potential 4D target verification of lung radiotherapy. Methods A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In the previous method, each phase of the 4D-CBCT was generated by deforming a prior CT volume. The DFM was solved by a motion model extracted by a global PCA and free-form deformation (GMM-FD) technique, using a data fidelity constraint and deformation energy minimization. In this study, a new structural-PCA method was developed to build a structural motion model (SMM) by accounting for potential relative motion pattern changes between different anatomical structures from simulation to treatment. The motion model extracted from planning 4D-CT was divided into two structures, tumor and body excluding tumor, and the parameters of both structures were optimized together. Weighted free-form deformation (WFD) was employed afterwards to introduce flexibility in adjusting the weightings of different structures in the data fidelity constraint based on clinical interests. An XCAT (computerized patient model) simulation with a 30 mm diameter lesion was generated with various anatomical and respiratory changes from planning 4D-CT to on-board volume to evaluate the method. The estimation accuracy was evaluated by the volume percent difference (VPD)/center-of-mass shift (COMS) between lesions in the estimated and “ground-truth” on-board 4D-CBCT. Different on-board projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. The method was also evaluated on three lung patients.
Results The SMM-WFD method achieved substantially better accuracy than the GMM-FD method for CBCT estimation using extremely small scan angles or few projections. Using orthogonal 15° scanning angles, the VPD/COMS were 3.47±2.94% and 0.23±0.22mm for SMM-WFD and 25.23±19.01% and 2.58±2.54mm for GMM-FD among all 8 XCAT scenarios. Compared to GMM-FD, SMM-WFD was more robust against reduction of the scanning angles down to orthogonal 10°, with VPD/COMS of 6.21±5.61% and 0.39±0.49mm, and more robust against reduction of projection numbers down to only 8 projections in total for both orthogonal-view 30° and orthogonal-view 15° scan angles. The SMM-WFD method was also more robust than the GMM-FD method against increasing levels of noise in the projection images. Additionally, the SMM-WFD technique provided better tumor estimation for all three lung patients compared to the GMM-FD technique. Conclusion Compared to the GMM-FD technique, the SMM-WFD technique can substantially improve the 4D-CBCT estimation accuracy using extremely small scan angles and a low number of projections to provide fast, low-dose 4D target verification. PMID:28079267

  7. Deep neural network and noise classification-based speech enhancement

    NASA Astrophysics Data System (ADS)

    Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei

    2017-07-01

    In this paper, a speech enhancement method using noise classification and a Deep Neural Network (DNN) is proposed. A Gaussian mixture model (GMM) is employed to determine the noise type in speech-absent frames, and a DNN is used to model the relationship between the noisy observation and clean speech. Once the noise type is determined, the corresponding DNN model is applied to enhance the noisy speech. The GMM is trained on mel-frequency cepstrum coefficients (MFCC), with parameters estimated by the iterative expectation-maximization (EM) algorithm; the noise type is updated by spectrum-entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method achieves better objective speech quality and smaller distortion under both stationary and non-stationary noise conditions.
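    The noise-type selection step can be sketched as likelihood scoring of feature frames under one model per noise class. As a reduced illustration (not the paper's system), each class below is a single diagonal-covariance Gaussian, i.e. a 1-component GMM, and the random vectors stand in for MFCC frames:

```python
import numpy as np

class GaussClassifier:
    """Noise-type classifier scoring frames under one Gaussian per class."""

    def fit(self, frames_by_class):
        # ML mean/variance per class; small floor avoids zero variances
        self.stats = {c: (f.mean(0), f.var(0) + 1e-6)
                      for c, f in frames_by_class.items()}
        return self

    def predict(self, frames):
        def loglik(mu, var):
            # diagonal-Gaussian log-likelihood of each frame
            return (-0.5 * (np.log(2 * np.pi * var)
                            + (frames - mu) ** 2 / var)).sum(1)
        classes = list(self.stats)
        scores = [loglik(mu, var) for mu, var in self.stats.values()]
        return [classes[i] for i in np.argmax(scores, 0)]

# Invented "MFCC" training frames for two noise types
rng = np.random.default_rng(0)
train = {"babble": rng.normal(0, 1, (200, 13)),
         "engine": rng.normal(3, 1, (200, 13))}
clf = GaussClassifier().fit(train)
test = np.vstack([rng.normal(0, 1, (5, 13)), rng.normal(3, 1, (5, 13))])
labels = clf.predict(test)
```

    A full GMM per class would replace each single Gaussian with an EM-fitted mixture; the argmax-over-class-likelihood decision rule stays the same.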

  8. Two Methods of Automatic Evaluation of Speech Signal Enhancement Recorded in the Open-Air MRI Environment

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Frollo, Ivan

    2017-12-01

    The paper focuses on two methods for evaluating the success of enhancement of speech signals recorded in an open-air magnetic resonance imager during phonation for 3D human vocal tract modeling. The first approach provides a comparison based on statistical analysis using ANOVA and hypothesis tests. The second method is based on classification by Gaussian mixture models (GMM). The performed experiments confirm that the proposed ANOVA and GMM classifiers for automatic evaluation of speech quality are functional and produce results fully comparable with the standard evaluation based on the listening test method.

  9. Improving the Accuracy and Training Speed of Motor Imagery Brain-Computer Interfaces Using Wavelet-Based Combined Feature Vectors and Gaussian Mixture Model-Supervectors.

    PubMed

    Lee, David; Park, Sang-Hoon; Lee, Sang-Goog

    2017-10-07

    In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
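    The GMM-supervector construction referenced above is commonly built by MAP-adapting the means of a universal background model (UBM) to the target data and stacking them into one long vector. A minimal NumPy sketch with an invented 2-component UBM (mean-only adaptation, not the authors' full pipeline):

```python
import numpy as np

def gmm_supervector(X, ubm_w, ubm_mu, ubm_var, r=16.0):
    """Build a GMM-supervector by MAP-adapting UBM means to data X.

    Compute UBM responsibilities, pull each component mean toward the
    data it explains (relevance factor r), and stack the adapted means.
    """
    # responsibilities under the diagonal-covariance UBM
    logp = (np.log(ubm_w)
            - 0.5 * (np.log(2 * np.pi * ubm_var)
                     + (X[:, None, :] - ubm_mu) ** 2 / ubm_var).sum(-1))
    logp -= logp.max(axis=1, keepdims=True)
    resp = np.exp(logp)
    resp /= resp.sum(axis=1, keepdims=True)
    # zeroth/first-order statistics and MAP mean update
    nk = resp.sum(0)
    ex = (resp.T @ X) / np.maximum(nk, 1e-10)[:, None]
    alpha = (nk / (nk + r))[:, None]
    mu_adapted = alpha * ex + (1 - alpha) * ubm_mu
    return mu_adapted.ravel()                 # (k*d,) supervector

ubm_w = np.array([0.5, 0.5])
ubm_mu = np.array([[0.0, 0.0], [4.0, 4.0]])
ubm_var = np.ones((2, 2))
rng = np.random.default_rng(0)
X = rng.normal([0.5, 0.5], 0.3, (100, 2))   # data near the first component
sv = gmm_supervector(X, ubm_w, ubm_mu, ubm_var)
```

    Only the first component's mean moves toward the data; the unobserved component stays at its UBM prior, which is what makes supervectors a stable fixed-length representation for an SVM.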

  10. Pain referral and regional deep tissue hyperalgesia in experimental human hip pain models.

    PubMed

    Izumi, Masashi; Petersen, Kristian Kjær; Arendt-Nielsen, Lars; Graven-Nielsen, Thomas

    2014-04-01

    Hip disorder patients typically present with extensive pain referral and hyperalgesia. To better understand the underlying mechanisms, an experimental hip pain model was established in which pain referrals and hyperalgesia could be studied under standardized conditions. In 16 healthy subjects, pain was induced by hypertonic saline injection into the gluteus medius tendon (GMT), adductor longus tendon (ALT), or gluteus medius muscle (GMM). Isotonic saline was injected contralaterally as control. Pain intensity was assessed on a visual analogue scale (VAS), and subjects mapped the pain distribution. Before, during, and after injections, passive hip joint pain provocation tests were completed, together with quantitative sensory testing as follows: pressure pain thresholds (PPTs), cuff algometry pain thresholds (cuff PPTs), cutaneous pin-prick sensitivity, and thermal pain thresholds. Hypertonic saline injected into the GMT resulted in higher VAS scores than hypertonic injections into the ALT and GMM (P<.05). Referred pain areas spread to larger parts of the leg after GMT and GMM injections compared with the more regionalized pain pattern after ALT injections (P<.05). PPTs at the injection site were decreased after hypertonic saline injections into the GMT and GMM compared with baseline, ALT injections, and isotonic saline. Cuff PPTs from the thigh were decreased after hypertonic saline injections into the ALT compared with baseline, GMT injections, and isotonic saline (P<.05). More subjects had positive joint pain provocation tests after hypertonic compared with isotonic saline injections (P<.05), indicating that this provocation test also assessed hyperalgesia in extra-articular soft tissues. These experimental models may open the way to a better understanding of the pain mechanisms associated with painful hip disorders. Copyright © 2014 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  11. Estimating 4D-CBCT from prior information and extremely limited angle projections using structural PCA and weighted free-form deformation for lung radiotherapy.

    PubMed

    Harris, Wendy; Zhang, You; Yin, Fang-Fang; Ren, Lei

    2017-03-01

    To investigate the feasibility of using structural-based principal component analysis (PCA) motion-modeling and weighted free-form deformation to estimate on-board 4D-CBCT using prior information and extremely limited angle projections for potential 4D target verification of lung radiotherapy. A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In the previous method, each phase of the 4D-CBCT was generated by deforming a prior CT volume. The DFM was solved by a motion model extracted by a global PCA and free-form deformation (GMM-FD) technique, using a data fidelity constraint and deformation energy minimization. In this study, a new structural PCA method was developed to build a structural motion model (SMM) by accounting for potential relative motion pattern changes between different anatomical structures from simulation to treatment. The motion model extracted from planning 4DCT was divided into two structures: tumor and body excluding tumor, and the parameters of both structures were optimized together. Weighted free-form deformation (WFD) was employed afterwards to introduce flexibility in adjusting the weightings of different structures in the data fidelity constraint based on clinical interests. XCAT (computerized patient model) simulation with a 30 mm diameter lesion was simulated with various anatomical and respiratory changes from planning 4D-CT to on-board volume to evaluate the method. The estimation accuracy was evaluated by the volume percent difference (VPD)/center-of-mass-shift (COMS) between lesions in the estimated and "ground-truth" on-board 4D-CBCT. Different on-board projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. The method was also evaluated against three lung patients. 
The SMM-WFD method achieved substantially better accuracy than the GMM-FD method for CBCT estimation using extremely small scan angles or few projections. Using orthogonal 15° scanning angles, the VPD/COMS were 3.47 ± 2.94% and 0.23 ± 0.22 mm for SMM-WFD and 25.23 ± 19.01% and 2.58 ± 2.54 mm for GMM-FD among all eight XCAT scenarios. Compared to GMM-FD, SMM-WFD was more robust against reduction of the scanning angles down to orthogonal 10°, with VPD/COMS of 6.21 ± 5.61% and 0.39 ± 0.49 mm, and more robust against reduction of projection numbers down to only 8 projections in total for both orthogonal-view 30° and orthogonal-view 15° scan angles. The SMM-WFD method was also more robust than the GMM-FD method against increasing levels of noise in the projection images. Additionally, the SMM-WFD technique provided better tumor estimation for all three lung patients compared to the GMM-FD technique. Compared to the GMM-FD technique, the SMM-WFD technique can substantially improve the 4D-CBCT estimation accuracy using extremely small scan angles and a low number of projections to provide fast, low-dose 4D target verification. © 2017 American Association of Physicists in Medicine.

  12. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  13. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  14. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-06-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
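    The Gaussian-basis reduction described in these records can be sketched in one dimension: the native-resolution state vector is projected onto a few Gaussian pdfs and reconstructed from the reduced coefficients. A toy NumPy illustration with an invented smooth "emission field" (not the authors' multi-dimensional implementation):

```python
import numpy as np

def gaussian_basis_reduce(x, centers, width):
    """Reduce a native-resolution state vector with Gaussian basis functions.

    Native elements at grid positions 0..n-1 are projected onto Gaussian
    pdfs by least squares; the reduced state vector is the coefficient
    vector w, and Phi @ w maps it back to native resolution.
    """
    pos = np.arange(len(x))
    Phi = np.exp(-0.5 * ((pos[:, None] - centers) / width) ** 2)  # (n, k)
    w, *_ = np.linalg.lstsq(Phi, x, rcond=None)
    return w, Phi @ w

# Smooth field on 200 grid cells, reduced to 12 Gaussian basis functions
pos = np.arange(200)
x = np.sin(pos / 30.0) + 0.5 * np.exp(-(((pos - 120) / 15.0) ** 2))
w, x_hat = gaussian_basis_reduce(x, centers=np.linspace(0, 199, 12), width=15.0)
err = np.abs(x - x_hat).max()
```

    A 200-element state vector is compressed to 12 coefficients while the reconstruction stays close to the original, which is the sense in which the GMM/RBF reduction preserves major local features.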

  15. Cough event classification by pretrained deep neural network.

    PubMed

    Liu, Jia-Ming; You, Mingyu; Wang, Zheng; Li, Guo-Zheng; Xu, Xianghuai; Qiu, Zhongmin

    2015-01-01

    Cough is an essential symptom in respiratory diseases, and an accurate, objective cough monitor is needed by the respiratory disease community for measuring cough severity. This paper introduces a better-performing algorithm, the pretrained deep neural network (DNN), to the cough classification problem, which is a key step in such a monitor. The deep neural network models are built in two steps, pretraining and fine-tuning, followed by a Hidden Markov Model (HMM) decoder to capture the temporal information of the audio signals. By unsupervised pretraining of a deep belief network, a good initialization for the deep neural network is learned. The fine-tuning step then uses back-propagation so that the network can predict the observation probability associated with each HMM state, where the HMM states are originally obtained by forced alignment with a Gaussian Mixture Model Hidden Markov Model (GMM-HMM) on the training samples. Three cough HMMs and one noncough HMM are employed to model coughs and noncoughs, respectively. The final decision is made by the Viterbi decoding algorithm, which generates the most likely HMM sequence for each sample; a sample is labeled as cough if a cough HMM is found in the sequence. The experiments were conducted on a dataset collected from 22 patients with respiratory diseases. Patient-dependent (PD) and patient-independent (PI) experimental settings were used to evaluate the models. Five criteria, sensitivity, specificity, F1, macro average, and micro average, are reported to depict different aspects of the models. On the overall evaluation criteria, the DNN-based methods are superior to the traditional GMM-HMM-based method on F1 and micro average, with maximal error reductions of 14% and 11% in PD and 7% and 10% in PI, while keeping similar performance on macro average. They also surpass the GMM-HMM model on specificity, with a maximal 14% error reduction on both PD and PI. 
In this paper, we applied pretrained deep neural networks to the cough classification problem. Our results showed that, compared with the conventional GMM-HMM framework, the DNN-HMM approach achieves better overall performance on the cough classification task.
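    The Viterbi decoding step at the heart of both the GMM-HMM and DNN-HMM systems can be shown in miniature. The sketch below uses two states with single-Gaussian emissions (a reduced stand-in for full GMM state models) and invented parameters:

```python
import numpy as np

def viterbi_gauss(obs, pi, A, means, vars_):
    """Viterbi decoding for an HMM with 1-D Gaussian emissions.

    Returns the most likely hidden state sequence for a 1-D observation
    sequence, given initial probabilities pi, transition matrix A, and
    per-state Gaussian emission parameters.
    """
    T, S = len(obs), len(pi)
    # per-frame emission log-likelihoods, shape (T, S)
    log_b = -0.5 * (np.log(2 * np.pi * vars_)
                    + (obs[:, None] - means) ** 2 / vars_)
    delta = np.log(pi) + log_b[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] + np.log(A)     # (prev, next) path scores
        back[t] = trans.argmax(0)
        delta = trans.max(0) + log_b[t]
    # backtrack the best path from the final state
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# State 0 = "noncough" (emits near 0), state 1 = "cough" burst (near 5)
pi = np.array([0.9, 0.1])
A = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = np.array([0.1, -0.2, 5.1, 4.8, 5.3, 0.0, 0.2])
states = viterbi_gauss(obs, pi, A,
                       means=np.array([0.0, 5.0]), vars_=np.array([1.0, 1.0]))
print(states)  # [0, 0, 1, 1, 1, 0, 0]
```

    In the DNN-HMM variant, the Gaussian emission log-likelihoods `log_b` would simply be replaced by scaled DNN posterior scores; the decoding itself is unchanged.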

  16. [Methods of identification and safety assessment of genetically modified microorganisms used in food production].

    PubMed

    Khovaev, A A; Nesterenko, L N; Naroditskiĭ, B S

    2011-01-01

    Methods for the identification of genetically modified microorganisms (GMM) used in food production, based on control samples, are presented. Results of microbiological and molecular-genetic analyses of food products and their components, relevant to the microbiological and genetic expert examination of GMM in foods, are considered. Approaches to the biosafety assessment of GMM are outlined.

  17. Growth Mixture Modeling of Academic Achievement in Children of Varying Birth Weight Risk

    PubMed Central

    Espy, Kimberly Andrews; Fang, Hua; Charak, David; Minich, Nori; Taylor, H. Gerry

    2009-01-01

    The extremes of birth weight and preterm birth are known to result in a host of adverse outcomes, yet studies to date largely have used cross-sectional designs and variable-centered methods to understand long-term sequelae. Growth mixture modeling (GMM) that utilizes an integrated person- and variable-centered approach was applied to identify latent classes of achievement from a cohort of school-age children born at varying birth weights. GMM analyses revealed two latent achievement classes for calculation, problem-solving, and decoding abilities. The classes differed substantively and persistently in proficiency and in growth trajectories. Birth weight was a robust predictor of class membership for the two mathematics achievement outcomes and a marginal predictor of class membership for decoding. Neither visuospatial-motor skills nor environmental risk at study entry added to class prediction for any of the achievement skills. Among children born preterm, neonatal medical variables predicted class membership uniquely beyond birth weight. More generally, GMM is useful in revealing coherence in the developmental patterns of academic achievement in children of varying weight at birth, and is well suited to investigations of sources of heterogeneity. PMID:19586210

  18. Mining patterns in persistent surveillance systems with smart query and visual analytics

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.; Shirkhodaie, Amir

    2013-05-01

    In Persistent Surveillance Systems (PSS), the ability to detect and characterize events geospatially helps analysts take pre-emptive steps to counter an adversary's actions. An interactive Visual Analytics (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. Identifying and offsetting these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require filtering before being processed further. In this paper, we introduce an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from semantically annotated messages generated by the sensors. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a temporal Dynamic Time Warping (DTW) method with a Gaussian Mixture Model (GMM) fit by Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. GMM with EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. We also present a new visual analytic tool for testing and evaluating group activities detected under this control scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.

  19. Intertwined nanocarbon and manganese oxide hybrid foam for high-energy supercapacitors.

    PubMed

    Wang, Wei; Guo, Shirui; Bozhilov, Krassimir N; Yan, Dong; Ozkan, Mihrimah; Ozkan, Cengiz S

    2013-11-11

    Rapidly charging and discharging supercapacitors are promising alternative energy storage systems for applications such as portable electronics and electric vehicles. Integration of pseudocapacitive metal oxides with single-structured materials has received much attention recently due to their superior electrochemical performance. In order to realize high-energy-density supercapacitors, a simple and scalable method is developed to fabricate a graphene/MWNT/MnO2 nanowire (GMM) hybrid nanostructured foam via a two-step process. The 3D few-layer graphene/MWNT (GM) architecture is grown on foamed metal foils (nickel foam) via ambient pressure chemical vapor deposition. Hydrothermally synthesized α-MnO2 nanowires are conformally coated onto the GM foam by a simple bath deposition. The as-prepared hierarchical GMM foam consists of a graphene foam conformally covered with an intertwined, densely packed CNT/MnO2 nanowire nanocomposite network. Symmetrical electrochemical capacitors (ECs) based on GMM foam electrodes show an extended operational voltage window of 1.6 V in aqueous electrolyte. A superior energy density of 391.7 Wh kg(-1) is obtained for the supercapacitor based on the GMM foam, much higher than for ECs based on GM foam only (39.72 Wh kg(-1)). A high specific capacitance (1108.79 F g(-1)) and power density (799.84 kW kg(-1)) are also achieved. Moreover, the excellent capacitance retention (97.94%) after 13 000 charge-discharge cycles and high current-handling capability demonstrate the stability of the supercapacitor electrodes. These excellent performances enable the innovative 3D hierarchical GMM foam to serve as EC electrodes, resulting in energy-storage devices with high stability and power density in neutral aqueous electrolyte. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Transition flight control room automation

    NASA Technical Reports Server (NTRS)

    Welborn, Curtis Ray

    1990-01-01

    The Workstation Prototype Laboratory is currently working on a number of projects which can have a direct impact on ground operations automation. These projects include: (1) The fuel cell monitoring system (FCMS), which will monitor and detect problems with the fuel cells on the shuttle. FCMS will use a combination of rules (forward/backward) and multithreaded procedures, which run concurrently with the rules, to implement the malfunction algorithms of the EGIL flight controllers. The combination of rule-based reasoning and procedural reasoning allows us to more easily map the malfunction algorithms into a real-time system implementation. (2) A graphical computation language (AGCOMPL), an experimental prototype to determine the benefits and drawbacks of using a graphical language to design computations (algorithms) that work on shuttle or space station telemetry and trajectory data. (3) The design of a system that will allow a model of an electrical system, including telemetry sensors, to be configured on the screen graphically using previously defined electrical icons. This electrical model would then be used to generate rules and procedures for detecting malfunctions in the electrical components of the model. (4) A generic message management (GMM) system being designed for real-time applications to send advisory messages to a user. The primary purpose of GMM is to reduce the risk of overloading a user with information when multiple failures occur and to assist the developer in devising an explanation facility. The emphasis of our work is to develop practical tools and techniques, including identification of appropriate software tools to support research, application, and tool-building activities, while determining the feasibility of a given approach.

  1. Passive Acoustic Leak Detection for Sodium Cooled Fast Reactors Using Hidden Markov Models

    NASA Astrophysics Data System (ADS)

    Marklund, A. Riber; Kishore, S.; Prakash, V.; Rajan, K. K.; Michel, F.

    2016-06-01

    Acoustic leak detection for steam generators of sodium-cooled fast reactors has been an active research topic since the early 1970s, and several methods have been tested over the years. Inspired by its success in the field of automatic speech recognition, we here apply hidden Markov models (HMM) in combination with Gaussian mixture models (GMM) to the problem. To achieve this, we propose a new feature calculation scheme based on the temporal evolution of the power spectral density (PSD) of the signal. The proposed method is tested using acoustic signals recorded during steam/water injection experiments done at the Indira Gandhi Centre for Atomic Research (IGCAR). We perform parametric studies on the HMM+GMM model size and demonstrate that the proposed method (a) performs well without a priori knowledge of the injection noise, (b) can incorporate several noise models, and (c) has an output distribution that simplifies false alarm rate control.

  2. X-Eye: a novel wearable vision system

    NASA Astrophysics Data System (ADS)

    Wang, Yuan-Kai; Fan, Ching-Tang; Chen, Shao-Ang; Chen, Hou-Ye

    2011-03-01

    This paper proposes a smart portable device, named the X-Eye, which provides a gesture interface with a small size but a large display for photo capture and management. The wearable vision system is implemented on embedded hardware and achieves real-time performance. The hardware includes an asymmetric dual-core processor with an ARM core and a DSP core. The display device is a pico projector, which has a small physical volume but can project a large screen. A triple-buffering mechanism is designed for efficient memory management, and software functions are partitioned and pipelined for effective parallel execution. Gesture recognition is achieved first by a color classification based on the expectation-maximization algorithm and a Gaussian mixture model (GMM). To improve the performance of the GMM, we devise a LUT (Look-Up Table) technique. Fingertips are then extracted, and geometrical features of the fingertip shapes are matched to recognize the user's gesture commands. To verify the accuracy of the gesture recognition module, experiments were conducted in eight scenes with 400 test videos covering the challenges of colorful backgrounds, low illumination, and flicker. The whole system, including gesture recognition, runs at a frame rate of 22.9 FPS and achieves a 99% recognition rate. These results demonstrate that this small-size, large-screen wearable system provides an effective gesture interface with real-time performance.
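    The LUT trick mentioned above, precomputing the color-model decision once per quantized color bin so that per-pixel classification becomes a table lookup, can be sketched as follows. The color models here are single Gaussians with invented parameters (a stand-in for the paper's GMM) over a 2-D (r, g) color space:

```python
import numpy as np

def build_color_lut(mu_fg, var_fg, mu_bg, var_bg, bins=32):
    """Precompute a foreground/background decision LUT over quantized colors.

    Evaluates the two (here, single-Gaussian) color models once at each
    quantized bin center; runtime classification is then a pure lookup.
    """
    step = 256 // bins
    grid = (np.arange(bins) * step + step // 2).astype(float)   # bin centers
    r, g = np.meshgrid(grid, grid, indexing="ij")
    def loglik(mu, var):
        return -0.5 * (np.log(2 * np.pi * var[0]) + (r - mu[0]) ** 2 / var[0]
                       + np.log(2 * np.pi * var[1]) + (g - mu[1]) ** 2 / var[1])
    return loglik(mu_fg, var_fg) > loglik(mu_bg, var_bg)   # (bins, bins) bool

# Invented skin-like foreground vs broad background color models
lut = build_color_lut(mu_fg=np.array([180.0, 120.0]),
                      var_fg=np.array([400.0, 400.0]),
                      mu_bg=np.array([80.0, 80.0]),
                      var_bg=np.array([2000.0, 2000.0]))

# Runtime classification: quantize each pixel and index the table
pixels = np.array([[185, 125], [60, 70]])      # one skin-like, one background
idx = pixels // (256 // 32)
labels = lut[idx[:, 0], idx[:, 1]]
print(labels)  # [ True False]
```

    The expensive Gaussian evaluations happen only 32x32 times at build time, which is why the LUT variant keeps a GMM-based classifier within a real-time frame budget on embedded hardware.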

  3. Automated EEG sleep staging in the term-age baby using a generative modelling approach.

    PubMed

    Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten

    2018-06-01

    We develop a method for automated four-state sleep classification of preterm and term-born babies at term age (38-40 weeks postmenstrual age, the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from the broader quiet sleep (QS) and active sleep (AS) stages into four, more complex states, and the quality and timing of this differentiation are indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS classification. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling was used to correct for some of the inter-recording variability by standardizing each recording's feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep-state transition probabilities. The performance of the GMM and HMM (with and without scaling) was compared, and Cohen's kappa agreement was calculated between the estimates and clinicians' visual labels. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16), compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. The results suggest a benefit in incorporating transition information using an HMM and in correcting for inter-recording variability through personalized feature scaling. The timing and quality of these states are indicative of developmental delays in both preterm and term-born babies, delays that may lead to learning problems by school age.
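Personalized feature scaling as described (standardizing each recording's feature data by its own per-feature mean and standard deviation) can be sketched as:

```python
import statistics

def personalized_scale(recording_feats):
    """Standardize one recording's feature matrix (rows = epochs, columns =
    features) by that recording's own per-feature mean and standard deviation,
    reducing inter-recording variability before classification."""
    n_feats = len(recording_feats[0])
    means, sds = [], []
    for j in range(n_feats):
        col = [row[j] for row in recording_feats]
        means.append(statistics.fmean(col))
        sds.append(statistics.pstdev(col) or 1.0)  # guard against zero variance
    return [[(v - m) / s for v, m, s in zip(row, means, sds)]
            for row in recording_feats]
```

Each recording is scaled independently, so a baby with globally higher-amplitude EEG contributes features on the same scale as the others.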

  4. Automated EEG sleep staging in the term-age baby using a generative modelling approach

    NASA Astrophysics Data System (ADS)

    Pillay, Kirubin; Dereymaeker, Anneleen; Jansen, Katrien; Naulaers, Gunnar; Van Huffel, Sabine; De Vos, Maarten

    2018-06-01

    Objective. We develop a method for automated four-state sleep classification of preterm and term-born babies at term age (38-40 weeks postmenstrual age, the age since the last menstrual cycle of the mother) using multichannel electroencephalogram (EEG) recordings. At this critical age, EEG differentiates from the broader quiet sleep (QS) and active sleep (AS) stages into four, more complex states, and the quality and timing of this differentiation are indicative of the level of brain development. However, existing methods for automated sleep classification remain focussed only on QS and AS classification. Approach. EEG features were calculated from 16 EEG recordings, in 30 s epochs, and personalized feature scaling was used to correct for some of the inter-recording variability by standardizing each recording's feature data using its mean and standard deviation. Hidden Markov models (HMMs) and Gaussian mixture models (GMMs) were trained, with the HMM incorporating knowledge of the sleep-state transition probabilities. The performance of the GMM and HMM (with and without scaling) was compared, and Cohen's kappa agreement was calculated between the estimates and clinicians' visual labels. Main results. For four-state classification, the HMM proved superior to the GMM. With the inclusion of personalized feature scaling, mean kappa (±standard deviation) was 0.62 (±0.16), compared to the GMM value of 0.55 (±0.15). Without feature scaling, kappas for the HMM and GMM dropped to 0.56 (±0.18) and 0.51 (±0.15), respectively. Significance. This is the first study to present a successful method for the automated staging of four states in term-age sleep using multichannel EEG. The results suggest a benefit in incorporating transition information using an HMM and in correcting for inter-recording variability through personalized feature scaling. The timing and quality of these states are indicative of developmental delays in both preterm and term-born babies, delays that may lead to learning problems by school age.

  5. Adaptive Gaussian mixture models for pre-screening in GPR data

    NASA Astrophysics Data System (ADS)

    Torrione, Peter; Morton, Kenneth, Jr.; Besaw, Lance E.

    2011-06-01

    Due to the large amount of data generated by vehicle-mounted ground penetrating radar (GPR) antenna arrays, advanced feature extraction and classification can only be performed on a small subset of data during real-time operation. As a result, most GPR-based landmine detection systems implement "pre-screening" algorithms that process all of the data generated by the antenna array and identify locations with anomalous signatures for more advanced processing. These pre-screening algorithms must be computationally efficient and obtain a high probability of detection, but can permit a false alarm rate higher than the total system requirements. Many approaches to pre-screening have previously been proposed, including linear prediction coefficients, the LMS algorithm, and CFAR-based approaches. Similar pre-screening techniques have also been developed in the field of video processing to identify anomalous behavior or anomalous objects. One such algorithm, an online k-means approximation to an adaptive Gaussian mixture model (GMM), is particularly well-suited to pre-screening in GPR data due to its computational efficiency, its non-linear nature, and the relevance of the logic underlying the algorithm to GPR processing. In this work we explore the application of this adaptive GMM-based anomaly detection approach from the video processing literature to pre-screening in GPR data. Results with the ARA Nemesis landmine detection system demonstrate significant pre-screening performance improvements compared to alternative approaches, and indicate that the proposed algorithm is a complementary technique to existing methods.
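An online adaptive-GMM update of the kind the abstract refers to (in the style of the Stauffer-Grimson background model from video processing) can be sketched for a 1-D signal; the learning rate, match threshold, weight cutoff, and replacement variance below are illustrative assumptions, not parameters from the paper:

```python
def update_background_model(model, x, lr=0.05, match_sigmas=2.5):
    """One online update step of an adaptive GMM background model.
    `model` is a list of components, each a dict with keys 'w', 'mean', 'var'.
    The sample is matched to the first component within match_sigmas standard
    deviations; if none matches, the weakest component is replaced by a new one
    centered on the sample. Returns True if x looks anomalous (it matched no
    component, or only a low-weight one)."""
    matched = None
    for comp in model:
        if (x - comp['mean']) ** 2 <= match_sigmas ** 2 * comp['var']:
            matched = comp
            break
    anomalous = matched is None or matched['w'] < 0.2
    if matched is None:
        weakest = min(model, key=lambda c: c['w'])
        weakest.update(w=lr, mean=x, var=10.0)
    else:
        # Exponentially-weighted updates (online k-means approximation to EM).
        matched['mean'] += lr * (x - matched['mean'])
        matched['var'] += lr * ((x - matched['mean']) ** 2 - matched['var'])
        matched['w'] += lr * (1.0 - matched['w'])
    total = sum(c['w'] for c in model)
    for c in model:
        c['w'] /= total
    return anomalous
```

Samples explained by a strong component are treated as background clutter; samples that match nothing (or only a weak, recently-created component) are flagged for the more expensive downstream classifier.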

  6. Embedded security system for multi-modal surveillance in a railway carriage

    NASA Astrophysics Data System (ADS)

    Zouaoui, Rhalem; Audigier, Romaric; Ambellouis, Sébastien; Capman, François; Benhadda, Hamid; Joudrier, Stéphanie; Sodoyer, David; Lamarque, Thierry

    2015-10-01

    Public transport security is one of the main priorities of the public authorities when fighting against crime and terrorism. In this context, there is a great demand for autonomous systems able to detect abnormal events such as violent acts aboard passenger cars and intrusions when the train is parked at the depot. To this end, we present an innovative approach which aims at providing efficient automatic event detection by fusing video and audio analytics, reducing the false alarm rate compared to classical stand-alone video detection. The multi-modal system is composed of two microphones and one camera and integrates onboard video and audio analytics and fusion capabilities. On the one hand, for detecting intrusion, the system relies on the fusion of "unusual" audio event detection with intrusion detections from video processing. The audio analysis consists of modeling the normal ambience and detecting deviations from the trained models during testing. This unsupervised approach is based on clustering of automatically extracted segments of acoustic features and statistical Gaussian mixture model (GMM) modeling of each cluster. The intrusion detection is based on the three-dimensional (3D) detection and tracking of individuals in the videos. On the other hand, for violent event detection, the system fuses unsupervised and supervised audio algorithms with video event detection. The supervised audio technique detects specific events such as shouts; a GMM is used to capture the formant structure of a shout signal. The video analytics use an original approach for detecting aggressive motion by focusing on erratic motion patterns specific to violent events. As data with violent events are not easily available, a normality model with structured motions from non-violent videos is learned for one-class classification. A fusion algorithm based on Dempster-Shafer theory analyses the asynchronous detection outputs and computes the degree of belief of each probable event.

  7. A probabilistic approach to emission-line galaxy classification

    NASA Astrophysics Data System (ADS)

    de Souza, R. S.; Dantas, M. L. L.; Costa-Duarte, M. V.; Feigelson, E. D.; Killedar, M.; Lablanche, P.-Y.; Vilalta, R.; Krone-Martins, A.; Beck, R.; Gieseke, F.

    2017-12-01

    We invoke a Gaussian mixture model (GMM) to jointly analyse two traditional emission-line classification schemes of galaxy ionization sources: the Baldwin-Phillips-Terlevich (BPT) and WH α versus [N II]/H α (WHAN) diagrams, using spectroscopic data from the Sloan Digital Sky Survey Data Release 7 and SEAGal/STARLIGHT data sets. We apply a GMM to empirically define classes of galaxies in a three-dimensional space spanned by the log [O III]/H β, log [N II]/H α and log EW(H α) optical parameters. The best-fitting GMM based on several statistical criteria suggests a solution around four Gaussian components (GCs), which are capable of explaining up to 97 per cent of the data variance. Using elements of information theory, we compare each GC to its respective astronomical counterpart. GC1 and GC4 are associated with star-forming galaxies, suggesting the need to define a new starburst subgroup. GC2 is associated with BPT's active galactic nuclei (AGN) class and WHAN's weak AGN class. GC3 is associated with BPT's composite class and WHAN's strong AGN class. Conversely, there is no statistical evidence - based on four GCs - for the existence of a Seyfert/low-ionization nuclear emission-line region (LINER) dichotomy in our sample. Notwithstanding, the inclusion of an additional GC5 unravels it. The GC5 appears associated with the LINER and passive galaxies on the BPT and WHAN diagrams, respectively. This indicates that if the Seyfert/LINER dichotomy is there, it does not contribute significantly to the global data variance and may be overlooked by standard metrics of goodness of fit. Subtleties aside, we demonstrate the potential of our methodology to recover/unravel different objects inside the wilderness of astronomical data sets, without losing the ability to convey physically interpretable results. The probabilistic classifications from the GMM analysis are publicly available within the COINtoolbox at https://cointoolbox.github.io/GMM_Catalogue/.
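Choosing among candidate numbers of Gaussian components is typically done with criteria such as the BIC, which penalizes the fitted log-likelihood by the parameter count. A sketch of the BIC bookkeeping for a full-covariance GMM (the study's exact criteria and fitting pipeline are not reproduced here; the log-likelihood value would come from an EM fit):

```python
import math

def gmm_bic(log_likelihood, n_components, n_dims, n_samples):
    """BIC for a full-covariance GMM. Free parameters: (n_components - 1)
    mixing weights, n_components * n_dims means, and n_components symmetric
    covariance matrices with n_dims*(n_dims+1)/2 entries each.
    Lower BIC is better; scanning n_components and taking the minimum is one
    common way a ~4-component solution would be selected."""
    cov_params = n_components * n_dims * (n_dims + 1) / 2
    mean_params = n_components * n_dims
    weight_params = n_components - 1
    k = cov_params + mean_params + weight_params
    return k * math.log(n_samples) - 2 * log_likelihood
```

With the same log-likelihood, more components always cost more BIC, so extra components (such as a fifth GC capturing a weak Seyfert/LINER split) only win if they raise the likelihood enough to pay for their parameters.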

  8. Robust generative asymmetric GMM for brain MR image segmentation.

    PubMed

    Ji, Zexuan; Xia, Yong; Zheng, Yuhui

    2017-11-01

    Accurate segmentation of brain tissues from magnetic resonance (MR) images based on unsupervised statistical models such as the Gaussian mixture model (GMM) has been widely studied during the last decades. However, most GMM-based segmentation methods suffer from limited accuracy due to the influences of noise and intensity inhomogeneity in brain MR images. To further improve the accuracy of brain MR image segmentation, this paper presents a Robust Generative Asymmetric GMM (RGAGMM) for simultaneous brain MR image segmentation and intensity inhomogeneity correction. First, we develop an asymmetric distribution to fit the data shapes, and thus construct a spatially constrained asymmetric model. Then, we incorporate two pseudo-likelihood quantities and bias field estimation into the model's log-likelihood, aiming to exploit the within-cluster and between-cluster neighboring priors and to alleviate the impact of intensity inhomogeneity, respectively. Finally, an expectation-maximization algorithm is derived to iteratively maximize the approximation of the data log-likelihood function, overcoming the intensity inhomogeneity in the image while segmenting the brain MR images. To demonstrate the performance of the proposed algorithm, we first applied it to a synthetic brain MR image to show the intermediate illustrations and the estimated distribution. The next group of experiments was carried out on clinical 3T brain MR images, which contain quite serious intensity inhomogeneity and noise. We then quantitatively compared our algorithm to state-of-the-art segmentation approaches using the Dice coefficient (DC) on benchmark images obtained from IBSR and BrainWeb with different levels of noise and intensity inhomogeneity. The comparison results on various brain MR images demonstrate the superior performance of the proposed algorithm in dealing with noise and intensity inhomogeneity. In summary, the proposed RGAGMM algorithm can simply and efficiently incorporate spatial constraints into an EM framework to simultaneously segment brain MR images and estimate the intensity inhomogeneity. The algorithm is flexible enough to fit the data shapes, can simultaneously overcome the influence of noise and intensity inhomogeneity, and is hence capable of improving segmentation accuracy by over 5% compared with several state-of-the-art algorithms. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Passive acoustic leak detection for sodium cooled fast reactors using hidden Markov models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riber Marklund, A.; Kishore, S.; Prakash, V.

    2015-07-01

    Acoustic leak detection for steam generators of sodium fast reactors has been an active research topic since the early 1970s, and several methods have been tested over the years. Inspired by its success in the field of automatic speech recognition, we here apply hidden Markov models (HMM) in combination with Gaussian mixture models (GMM) to the problem. To achieve this, we propose a new feature calculation scheme, based on the temporal evolution of the power spectral density (PSD) of the signal. Using acoustic signals recorded during steam/water injection experiments done at the Indira Gandhi Centre for Atomic Research (IGCAR), the proposed method is tested. We perform parametric studies on the HMM+GMM model size and demonstrate that the proposed method a) performs well without a priori knowledge of injection noise, b) can incorporate several noise models, and c) has an output distribution that simplifies false alarm rate control. (authors)

  10. Development and Implementation of Metrics for Identifying Military Impulse Noise

    DTIC Science & Technology

    2010-09-01

    [Fragment of the report's list of abbreviations: "... False Negative Rate; FP, False Positive; FPR, False Positive Rate; FtC, Fort Carson, CO; GIS, Geographic Information System; GMM, Gaussian mixture model; Hz, ..."] [Figure 8, a plot of typical neuron activations (axes: bin number vs. number of data points mapped to bin), is not recoverable from this record.] The signal metrics and the waveform itself were saved and transmitted to the home base. There is also a provision to download the entire recorded waveform.

  11. Class Enumeration and Parameter Recovery of Growth Mixture Modeling and Second-Order Growth Mixture Modeling in the Presence of Measurement Noninvariance between Latent Classes

    PubMed Central

    Kim, Eun Sook; Wang, Yan

    2017-01-01

    Population heterogeneity in growth trajectories can be detected with growth mixture modeling (GMM). It is common that researchers compute composite scores of repeated measures and use them as multiple indicators of growth factors (baseline performance and growth) assuming measurement invariance between latent classes. Considering that the assumption of measurement invariance does not always hold, we investigate the impact of measurement noninvariance on class enumeration and parameter recovery in GMM through a Monte Carlo simulation study (Study 1). In Study 2, we examine the class enumeration and parameter recovery of the second-order growth mixture modeling (SOGMM) that incorporates measurement models at the first order level. Thus, SOGMM estimates growth trajectory parameters with reliable sources of variance, that is, common factor variance of repeated measures and allows heterogeneity in measurement parameters between latent classes. The class enumeration rates are examined with information criteria such as AIC, BIC, sample-size adjusted BIC, and hierarchical BIC under various simulation conditions. The results of Study 1 showed that the parameter estimates of baseline performance and growth factor means were biased to the degree of measurement noninvariance even when the correct number of latent classes was extracted. In Study 2, the class enumeration accuracy of SOGMM depended on information criteria, class separation, and sample size. The estimates of baseline performance and growth factor mean differences between classes were generally unbiased but the size of measurement noninvariance was underestimated. Overall, SOGMM is advantageous in that it yields unbiased estimates of growth trajectory parameters and more accurate class enumeration compared to GMM by incorporating measurement models. PMID:28928691

  12. Robust speaker's location detection in a vehicle environment using GMM models.

    PubMed

    Hu, Jwu-Sheng; Cheng, Chieh-Cheng; Liu, Wei-Han

    2006-04-01

    Human-computer interaction (HCI) using speech communication is becoming increasingly important, especially in driving, where safety is the primary concern. Knowing the speaker's location (i.e., speaker localization) not only improves the enhancement results for a corrupted signal, but also provides assistance to speaker identification. Since conventional speech localization algorithms suffer from the uncertainties of environmental complexity and noise, as well as from the microphone mismatch problem, they are frequently not robust in practice; without high reliability, the acceptance of speech-based HCI would never be realized. This work presents a novel speaker-location detection method and demonstrates high accuracy within a vehicle cabin using a single linear microphone array. The proposed approach utilizes Gaussian mixture models (GMM) to model the distributions of the phase differences among the microphones caused by the complex characteristics of room acoustics and microphone mismatch. The model can be applied in both near-field and far-field situations in a noisy environment. Each individual Gaussian component of a GMM represents a general location-dependent but content- and speaker-independent phase difference distribution. Moreover, the scheme performs well not only in non-line-of-sight cases, but also when the speakers are aligned toward the microphone array but at different distances from it. This strong performance can be achieved by exploiting the fact that the phase difference distributions at different locations are distinguishable in the environment of a car. The experimental results also show that the proposed method outperforms the conventional multiple signal classification (MUSIC) technique at various SNRs.
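Inter-microphone phase differences at a frequency bin, the raw material for the GMM in this approach, can be computed per frame as sketched below; the single-bin DFT, the reference-microphone convention, and the choice of bin are assumptions made for the sketch:

```python
import cmath
import math

def phase_difference_features(frames, freq_bin):
    """Phase differences between microphone 0 and every other microphone at one
    frequency bin, wrapped to (-pi, pi]. `frames` is a list of per-microphone
    sample lists for one analysis frame. These location-dependent differences
    are the kind of feature a localization GMM would be trained on."""
    n = len(frames[0])
    spectra = []
    for mic in frames:
        # Single-bin DFT (Goertzel-style would be more efficient; this is a sketch).
        spectra.append(sum(s * cmath.exp(-2j * math.pi * freq_bin * t / n)
                           for t, s in enumerate(mic)))
    ref = cmath.phase(spectra[0])
    return [(cmath.phase(sp) - ref + math.pi) % (2 * math.pi) - math.pi
            for sp in spectra[1:]]
```

A source delayed by d samples at the second microphone shifts its phase by -2π·k·d/N at bin k, so the feature directly encodes the inter-microphone delay.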

  13. Specific acoustic models for spontaneous and dictated style in indonesian speech recognition

    NASA Astrophysics Data System (ADS)

    Vista, C. B.; Satriawan, C. H.; Lestari, D. P.; Widyantoro, D. H.

    2018-03-01

    The performance of an automatic speech recognition system is affected by differences in speech style between the data on which the model was originally trained and the incoming speech to be recognized. In this paper, the use of GMM-HMM acoustic models for specific speech styles is investigated. We develop two systems for the experiments; the first employs a speech style classifier to predict the style of incoming speech, either spontaneous or dictated, and then decodes this speech using an acoustic model specifically trained for that style. The second system uses both acoustic models to recognize incoming speech and decides upon a final result by calculating a confidence score for each decoding. Results show that training specific acoustic models for spontaneous and dictated speech styles confers a slight recognition advantage compared to a baseline model trained on a mixture of spontaneous and dictated training data. In addition, the speech style classifier approach of the first system produced slightly more accurate results than the confidence scoring employed in the second system.

  14. Age group classification and gender detection based on forced expiratory spirometry.

    PubMed

    Cosgun, Sema; Ozbek, I Yucel

    2015-08-01

    This paper investigates the utility of the forced expiratory spirometry (FES) test with efficient machine learning algorithms for the purpose of gender detection and age group classification. The proposed method has three main stages: feature extraction, training of the models, and detection. In the first stage, features are extracted from the volume-time curve and the expiratory flow-volume loop obtained from the FES test. In the second stage, probabilistic models for each gender and age group are constructed by training Gaussian mixture models (GMMs) and a support vector machine (SVM) algorithm. In the final stage, the gender (or age group) of a test subject is estimated using the trained GMM (or SVM) model. Experiments have been evaluated on a large database of 4571 subjects. The experimental results show that the average correct classification rates of both the GMM and SVM methods based on the FES test are more than 99.3% and 96.8% for gender and age group classification, respectively.
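Likelihood-based classification with one generative model per class can be sketched with a single Gaussian per class: a 1-component stand-in for the paper's GMMs, operating on 1-D features invented for illustration.

```python
import math
import statistics

class GaussianClassModel:
    """Minimal per-class generative classifier: fit a mean and variance for each
    class, then label a test sample by the class with the highest log-likelihood.
    A real GMM would use several weighted components per class; one suffices
    to show the decision rule."""

    def __init__(self):
        self.params = {}

    def fit(self, data_by_class):
        for label, xs in data_by_class.items():
            self.params[label] = (statistics.fmean(xs),
                                  statistics.pvariance(xs) or 1.0)

    def log_lik(self, x, label):
        m, v = self.params[label]
        return -0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)

    def predict(self, x):
        return max(self.params, key=lambda lbl: self.log_lik(x, lbl))
```

The same maximum-likelihood rule generalizes directly to multivariate spirometry features and multi-component mixtures.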

  15. A novel material detection algorithm based on 2D GMM-based power density function and image detail addition scheme in dual energy X-ray images.

    PubMed

    Pourghassem, Hossein

    2012-01-01

    Material detection is a vital need in dual-energy X-ray luggage inspection systems for security at airports and strategic places. In this paper, a novel material detection algorithm based on statistical trainable models, using the 2-dimensional power density function (PDF) of three material categories in dual-energy X-ray images, is proposed. In this algorithm, the PDF of each material category, as a statistical model, is estimated from the transmission measurement values of low- and high-energy X-ray images by Gaussian mixture models (GMM). The material label of each pixel of an object is determined based on the probability of its low- and high-energy transmission measurement values under the PDFs of the three material categories (metallic, organic and mixed materials). The performance of the material detection algorithm is improved by a maximum-voting scheme in a neighborhood of the image as a post-processing stage. Using background removal and denoising stages, the high- and low-energy X-ray images are enhanced as a pre-processing procedure. To improve the discrimination capability of the proposed material detection algorithm, the details of the low- and high-energy X-ray images are added to the constructed color image, which uses three colors (orange, blue and green) to represent the organic, metallic and mixed materials. The proposed algorithm is evaluated on real images captured from a commercial dual-energy X-ray luggage inspection system. The obtained results show that the proposed algorithm is effective in detecting metallic, organic and mixed materials with acceptable accuracy.
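The maximum-voting post-processing step, where each pixel takes the most common material label in its neighborhood, can be sketched in 1-D; the paper presumably votes over a 2-D window, and the radius here is an assumption:

```python
from collections import Counter

def majority_vote_smooth(labels, radius=1):
    """Replace each label with the most common label in its neighborhood,
    suppressing isolated per-pixel misclassifications. 1-D sketch of the
    maximum-voting scheme; a 2-D version would use a square window."""
    smoothed = []
    for i in range(len(labels)):
        window = labels[max(0, i - radius): i + radius + 1]
        smoothed.append(Counter(window).most_common(1)[0][0])
    return smoothed
```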

  16. Intoxicated Speech Detection: A Fusion Framework with Speaker-Normalized Hierarchical Functionals and GMM Supervectors

    PubMed Central

    Bone, Daniel; Li, Ming; Black, Matthew P.; Narayanan, Shrikanth S.

    2013-01-01

    Segmental and suprasegmental speech signal modulations offer information about paralinguistic content such as affect, age and gender, pathology, and speaker state. Speaker state encompasses medium-term, temporary physiological phenomena influenced by internal or external biochemical actions (e.g., sleepiness, alcohol intoxication). Perceptual and computational research indicates that detecting speaker state from speech is a challenging task. In this paper, we present a system constructed with multiple representations of prosodic and spectral features that provided the best result at the Intoxication Subchallenge of Interspeech 2011 on the Alcohol Language Corpus. We discuss the details of each classifier and show that fusion improves performance. We additionally address the question of how best to construct a speaker state detection system in terms of robust and practical marginalization of associated variability such as through modeling speakers, utterance type, gender, and utterance length. As is the case in human perception, speaker normalization provides significant improvements to our system. We show that a held-out set of baseline (sober) data can be used to achieve comparable gains to other speaker normalization techniques. Our fused frame-level statistic-functional systems, fused GMM systems, and final combined system achieve unweighted average recalls (UARs) of 69.7%, 65.1%, and 68.8%, respectively, on the test set. More consistent numbers compared to development set results occur with matched-prompt training, where the UARs are 70.4%, 66.2%, and 71.4%, respectively. The combined system improves over the Challenge baseline by 5.5% absolute (8.4% relative), also improving upon our previously best result. PMID:24376305

  17. Influence of climate variability on anchovy reproductive timing off northern Chile

    NASA Astrophysics Data System (ADS)

    Contreras-Reyes, Javier E.; Canales, T. Mariella; Rojas, Pablo M.

    2016-12-01

    We investigated the relationship between environmental variables and the Gonadosomatic Monthly Mean (GMM) index of anchovy (Engraulis ringens) to understand how the environment affects the dynamics of anchovy reproductive timing. The data examined correspond to biological information collected from samples of the landings off northern Chile (18°21‧S, 24°00‧S) during the period 1990-2010. We used the Humboldt Current Index (HCI) and the Multivariate ENSO Index (MEI), which combine several physical-oceanographic factors in the Tropical and South Pacific regions. Using the GMM index, we studied the dynamics of anchovy reproductive timing at different intervals of length, specifically females with a length between 11.5 and 14 cm (medium class) and longer than 14 cm (large class). Seasonal Autoregressive Integrated Moving Average (SARIMA) modeling was used to predict missing observations. The trends of the environmental and reproductive indexes were explored via the Breaks For Additive Season and Trend (BFAST) statistical technique, and the relationship between these indexes via cross-correlation function (CCF) analysis. Our results showed that the habitat of anchovy switched from cool to warm conditions, which also influenced gonad development. This was revealed by two and three significant changes (breaks) in the trends of the HCI and MEI indexes, and two significant breaks in the GMM of each time series of anchovy females (medium and large). A negative cross-correlation between the MEI index and the GMM of medium- and large-class females was found, indicating that as the environment gets warmer (positive values of MEI) a decrease in the reproductive activity of anchovy can be expected. The correlation between the MEI index and larger females was stronger than with medium females. Additionally, our results indicate that the GMM index of anchovy for both length classes reaches two maximums per year; the first from August to September and the second from December to January.
The intensity (maximum GMM values at rise point) of reproductive activity was not equal though, with the August-September peak being the highest. We also discuss how the synchronicity between environment and reproductive timing, the negative correlation found between MEI and GMM indexes, and the two increases per year of anchovy GMM relate to previous studies. Based on these findings we propose ways to advance in the understanding of how anchovy synchronize gonad development with the environment.
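A sample cross-correlation at a chosen lag, of the kind used here to relate the MEI and GMM series, can be sketched as follows; normalizing by the full-series standard deviations regardless of lag is a simplification:

```python
import statistics

def cross_correlation(x, y, lag):
    """Sample cross-correlation of two equal-length series at a given lag
    (positive lag pairs x[t] with y[t + lag]). Values near -1 indicate that
    one series rises as the other falls, as reported for MEI vs. GMM."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    pairs = [(x[t], y[t + lag]) for t in range(len(x)) if 0 <= t + lag < len(y)]
    cov = sum((a - mx) * (b - my) for a, b in pairs)
    return cov / (sx * sy)
```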

  18. Acoustics based assessment of respiratory diseases using GMM classification.

    PubMed

    Mayorga, P; Druzgalski, C; Morelos, R L; Gonzalez, O H; Vidales, J

    2010-01-01

    The focus of this paper is to present a method utilizing lung sounds for a quantitative assessment of patient health as it relates to respiratory disorders. To accomplish this, applicable traditional techniques from the speech processing domain were utilized to evaluate lung sounds obtained with a digital stethoscope. Traditional methods for the evaluation of asthma involve auscultation and spirometry, but the utilization of more sensitive electronic stethoscopes, which are currently available, and the application of quantitative signal analysis methods offer opportunities for improved diagnosis. In particular, we propose an acoustic evaluation methodology based on Gaussian mixture models (GMM), which should assist in broader analysis, identification, and diagnosis of asthma based on frequency-domain analysis of wheezing and crackles.

  19. Planning the City Logistics Terminal Location by Applying the Green p-Median Model and Type-2 Neurofuzzy Network

    PubMed Central

    Pamučar, Dragan; Vasin, Ljubislav; Atanasković, Predrag; Miličić, Milica

    2016-01-01

    This paper presents the green p-median problem (GMP), which uses an adaptive type-2 neural network for the processing of environmental and sociological parameters, including the costs of logistics operators, and demonstrates the influence of these parameters on planning the location of the city logistics terminal (CLT) within a discrete network. CLTs have direct effects on the increase of traffic volume, especially in urban areas, which further results in negative environmental effects such as air pollution and noise, as well as an increased number of urban residents suffering from bronchitis, asthma, and similar respiratory infections. By applying the green p-median model (GMM), negative effects on the environment and on health in urban areas caused by delivery vehicles may be reduced to a minimum. This model creates real possibilities for making proper investment decisions so that profitable investments may be realized in the field of transport infrastructure. The paper also includes testing of the GMM under real conditions on four CLT locations in the Belgrade City zone. PMID:27195005
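The underlying p-median objective (choose p facility sites minimizing the total distance from every demand point to its nearest chosen site) can be sketched by brute force for tiny instances; the paper's green variant folds environmental and social costs into this objective, which the sketch omits:

```python
from itertools import combinations

def p_median(dist, p):
    """Brute-force p-median: enumerate all p-subsets of candidate sites and
    return (sites, cost) minimizing the sum over demand points of the distance
    to the nearest chosen site. Exponential in p; illustration only."""
    n = len(dist)
    best = None
    for sites in combinations(range(n), p):
        cost = sum(min(dist[i][j] for j in sites) for i in range(n))
        if best is None or cost < best[1]:
            best = (sites, cost)
    return best
```

Real instances use metaheuristics or integer programming; the brute-force version just makes the objective concrete.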

  20. Planning the City Logistics Terminal Location by Applying the Green p-Median Model and Type-2 Neurofuzzy Network.

    PubMed

    Pamučar, Dragan; Vasin, Ljubislav; Atanasković, Predrag; Miličić, Milica

    2016-01-01

    This paper presents the green p-median problem (GMP), which uses an adaptive type-2 neural network for the processing of environmental and sociological parameters, including the costs of logistics operators, and demonstrates the influence of these parameters on planning the location of the city logistics terminal (CLT) within a discrete network. CLTs have direct effects on the increase of traffic volume, especially in urban areas, which further results in negative environmental effects such as air pollution and noise, as well as an increased number of urban residents suffering from bronchitis, asthma, and similar respiratory infections. By applying the green p-median model (GMM), negative effects on the environment and on health in urban areas caused by delivery vehicles may be reduced to a minimum. This model creates real possibilities for making proper investment decisions so that profitable investments may be realized in the field of transport infrastructure. The paper also includes testing of the GMM under real conditions on four CLT locations in the Belgrade City zone.

  1. Modeling variability in dendritic ice crystal backscattering cross sections at millimeter wavelengths using a modified Rayleigh-Gans theory

    NASA Astrophysics Data System (ADS)

    Lu, Yinghui; Clothiaux, Eugene E.; Aydin, Kültegin; Botta, Giovanni; Verlinde, Johannes

    2013-12-01

    Using the Generalized Multi-particle Mie-method (GMM), Botta et al. (in this issue) [7] created a database of backscattering cross sections for 412 different ice crystal dendrites at X-, Ka- and W-band wavelengths for different incident angles. The Rayleigh-Gans theory, which accounts for interference effects but ignores interactions between different parts of an ice crystal, explains much, but not all, of the variability in the database of backscattering cross sections. Differences between it and the GMM range from -3.5 dB to +2.5 dB and are highly dependent on the incident angle. To explain the residual variability a physically intuitive iterative method was developed to estimate the internal electric field within an ice crystal that accounts for interactions between the neighboring regions within it. After modifying the Rayleigh-Gans theory using this estimated internal electric field, the difference between the estimated backscattering cross sections and those from the GMM method decreased to within 0.5 dB for most of the ice crystals. The largest percentage differences occur when the form factor from the Rayleigh-Gans theory is close to zero. Both interference effects and neighbor interactions are sensitive to the morphology of ice crystals. Improvements in ice-microphysical models are necessary to predict or diagnose internal structures within ice crystals to aid in more accurate interpretation of radar returns. Observations of the morphology of ice crystals are, in turn, necessary to guide the development of such ice-microphysical models and to better understand the statistical properties of ice crystal morphologies in different environmental conditions.
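
    For intuition about the interference effect the abstract describes, the classical Rayleigh-Gans form factor has a closed form for a homogeneous sphere; this is only an illustrative sketch (the paper's dendrites require integrating over the actual crystal geometry, and the band wavelengths below are nominal values, not taken from the paper):

```python
import numpy as np

# Rayleigh-Gans form factor of a homogeneous sphere at backscatter,
# where the scattering wavenumber is q = 2k = 2 * (2*pi/wavelength).
# f -> 1 for particles small compared to the wavelength (Rayleigh limit).
def form_factor_sphere(wavelength_mm, radius_mm):
    q = 2.0 * (2.0 * np.pi / wavelength_mm)
    u = q * radius_mm
    return 3.0 * (np.sin(u) - u * np.cos(u)) / u ** 3

# Nominal radar-band wavelengths (mm) for a 1 mm radius sphere.
for band, wl in [("X", 32.1), ("Ka", 8.6), ("W", 3.2)]:
    print(band, round(float(form_factor_sphere(wl, 1.0)), 3))
```

The drop of the form factor toward zero at the shorter W-band wavelength is where the abstract notes the largest percentage differences from the full GMM computation occur.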

  2. Gaussian mixture model based identification of arterial wall movement for computation of distension waveform.

    PubMed

    Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram

    2015-01-01

    This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using Radio Frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired using a prototype ultrasound system from an artery mimicking flow phantom. The effectiveness of the proposed algorithm is demonstrated by comparing with existing wall tracking algorithms. The experimental results show that the proposed method provides 20% reduction in the error margin compared to the existing approaches in tracking the arterial wall movement. This approach coupled with ultrasound system can be used to estimate the arterial compliance parameters required for screening of cardiovascular related disorders.

  3. Design and model for the giant magnetostrictive actuator used on an electronic controlled injector

    NASA Astrophysics Data System (ADS)

    Xue, Guangming; Zhang, Peilin; He, Zhongbo; Li, Ben; Rong, Ce

    2017-05-01

    A giant magnetostrictive actuator (GMA) is a promising candidate for driving an electronic controlled injector, as giant magnetostrictive material (GMM) offers excellent performance: large output, fast response, and high operating stability. To meet the driving requirement of the injector, the GMA should produce its maximal shortening displacement when energized. An unbiased GMA with a ‘T’-shaped output rod is designed to reach this target. Furthermore, an open-hold-fall driving voltage is applied to the actuator coil to accelerate the response of the coil current. The actuator displacement is modeled by establishing sub-models of the coil current, the magnetic field within the GMM rod, the magnetization, and the magnetostrictive strain in sequence. Two modifications are made to improve the model's accuracy. Firstly, because the basic model fails to compute the transient-state response precisely, dead-zone and delay links are embedded into the coil current sub-model. Secondly, as the magnetization and magnetostrictive strain sub-models only influence the shape of the transient-state response, a linear magnetostrictive strain-magnetic field sub-model is introduced. Experimental results show that the modified model with the linear magnetostrictive strain expression predicts the actuator displacement quite effectively.

  4. Identification and Control of Aircrafts using Multiple Models and Adaptive Critics

    NASA Technical Reports Server (NTRS)

    Principe, Jose C.

    2007-01-01

    We compared two possible implementations of local linear models for control. One approach is based on a self-organizing map (SOM) to cluster the dynamics, followed by a set of linear models operating at each cluster. The gating function is therefore hard (a single local model represents the regional dynamics), which simplifies the controller design since there is a one-to-one mapping between controllers and local models. The second approach uses a soft gate within a probabilistic framework based on a Gaussian Mixture Model (also called a dynamic mixture of experts). In this approach several models may be active at a given time; we can expect a smaller number of models, but the controller design is more involved, with potentially better noise rejection characteristics. Our experiments showed that the SOM provides the best overall performance at high SNRs, but its performance degrades faster than the GMM's under the same noise conditions. The SOM approach required about an order of magnitude more models than the GMM, so in terms of implementation cost the GMM is preferable. The design of the SOM is straightforward, while the design of the GMM controllers, although still reasonable, is more involved and needs more care in the selection of parameters. Either of these locally linear approaches outperforms global nonlinear controllers based on neural networks, such as the time delay neural network (TDNN); in essence, the local model approach warrants practical implementation. In order to call the attention of the control community to this design methodology, we successfully extended the multiple model approach to PID controllers (still today the most widely used control scheme in industry) and wrote a paper on this subject. The echo state network (ESN) is a recurrent neural network with the special characteristic that only the output parameters are trained. The recurrent connections are preset according to the problem domain and are fixed. In a nutshell, the states of the reservoir of recurrent processing elements implement a projection space where the desired response is optimally projected. This architecture trades training efficiency for a large increase in the dimension of the recurrent layer. However, the power of recurrent neural networks can be brought to bear on practical, difficult problems. Our goal was to implement an adaptive critic architecture implementing Bellman's approach to optimal control. However, we could only characterize the ESN performance as a critic in value function evaluation, which is just one of the pieces of the overall adaptive critic controller. The results were very convincing, and the simplicity of the implementation was unparalleled.
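
    The hard-versus-soft gating contrast can be sketched on a toy piecewise-linear system; the data are invented, and here the hard gate is simply the argmax of the GMM responsibilities rather than a true SOM, so this is only a sketch of the gating idea, not of the paper's aircraft models:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy 1-D system with two regimes; fit one linear model per regime and
# compare hard (winner-take-all) vs soft (responsibility-weighted) gating.
x = rng.uniform(-1, 1, size=(400, 1))
y = np.where(x[:, 0] < 0, 2.0 * x[:, 0], -1.0 * x[:, 0])

gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
resp = gmm.predict_proba(x)       # soft gate: responsibilities
labels = resp.argmax(axis=1)      # hard gate: single active model

# One least-squares linear model (slope + intercept) per component.
coefs = []
for k in range(2):
    Xk = np.c_[x[labels == k], np.ones((labels == k).sum())]
    coefs.append(np.linalg.lstsq(Xk, y[labels == k], rcond=None)[0])

X1 = np.c_[x, np.ones(len(x))]
local_preds = np.stack([X1 @ c for c in coefs], axis=1)
hard_pred = local_preds[np.arange(len(x)), labels]   # winner only
soft_pred = (resp * local_preds).sum(axis=1)         # blended output
print("hard MSE:", np.mean((hard_pred - y) ** 2))
print("soft MSE:", np.mean((soft_pred - y) ** 2))
```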

  5. A comparison of heuristic and model-based clustering methods for dietary pattern analysis.

    PubMed

    Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia

    2016-02-01

    Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible compared with the commonly applied k-means and Ward's method. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance, knowing the true cluster membership of observations. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72% to 100% of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64-100% of cases. Applied to real data, all methods identified three similar dietary patterns which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences for single foods and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. The k-means method seems to be a good alternative, being easier to use while giving similar results when applied to real data.
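
    A minimal comparison in the spirit of the simulation study, using scikit-learn and invented anisotropic clusters; the free covariance orientation of the GMM is exactly what k-means and Ward's method lack:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(42)
# Two elongated (correlated) Gaussian clusters -- the kind of structure
# where the GMM's flexible cluster shape should pay off.
cov = [[2.0, 1.8], [1.8, 2.0]]
X = np.vstack([rng.multivariate_normal([0, 0], cov, 200),
               rng.multivariate_normal([4, 0], cov, 200)])
truth = np.repeat([0, 1], 200)

labels = {
    "gmm": GaussianMixture(n_components=2, covariance_type="full",
                           random_state=0).fit(X).predict(X),
    "kmeans": KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X),
    "ward": AgglomerativeClustering(n_clusters=2, linkage="ward").fit_predict(X),
}
# Adjusted Rand index against the true membership (1.0 = perfect).
for name, lab in labels.items():
    print(name, round(adjusted_rand_score(truth, lab), 3))
```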

  6. Poisson Growth Mixture Modeling of Intensive Longitudinal Data: An Application to Smoking Cessation Behavior

    ERIC Educational Resources Information Center

    Shiyko, Mariya P.; Li, Yuelin; Rindskopf, David

    2012-01-01

    Intensive longitudinal data (ILD) have become increasingly common in the social and behavioral sciences; count variables, such as the number of daily smoked cigarettes, are frequently used outcomes in many ILD studies. We demonstrate a generalized extension of growth mixture modeling (GMM) to Poisson-distributed ILD for identifying qualitatively…
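
    A two-component Poisson mixture can be fit with a few lines of EM; the rates and counts below are invented (e.g. light versus heavy smokers' daily cigarette counts), and this sketch omits the growth-curve (trajectory) part of the GMM described in the abstract:

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
# Simulated daily counts from two latent classes with rates 3 and 15.
counts = np.concatenate([rng.poisson(3, 300), rng.poisson(15, 300)])

w = np.array([0.5, 0.5])     # mixing weights
lam = np.array([1.0, 10.0])  # initial Poisson rates
for _ in range(100):
    # E-step: responsibility of each class for each observation.
    r = w * poisson.pmf(counts[:, None], lam)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and rates.
    w = r.mean(axis=0)
    lam = (r * counts[:, None]).sum(axis=0) / r.sum(axis=0)
print("estimated rates:", np.round(np.sort(lam), 2))
```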

  7. Evaluation of speaker de-identification based on voice gender and age conversion

    NASA Astrophysics Data System (ADS)

    Přibil, Jiří; Přibilová, Anna; Matoušek, Jindřich

    2018-03-01

    Two basic tasks are covered in this paper. The first consists in the design and practical testing of a new method for voice de-identification that changes the apparent age and/or gender of a speaker by multi-segmental frequency scale transformation combined with prosody modification. The second task is aimed at verifying the applicability of a classifier based on Gaussian mixture models (GMM) to detect the original Czech and Slovak speakers after the applied voice de-identification. The performed experiments confirm the functionality of the developed gender and age conversion for all selected types of de-identification, which can be objectively evaluated by the GMM-based open-set classifier. The original speaker detection accuracy was also compared for sentences uttered by German and English speakers, showing the language independence of the proposed method.

  8. Class Identification Efficacy in Piecewise GMM with Unknown Turning Points

    ERIC Educational Resources Information Center

    Ning, Ling; Luo, Wen

    2018-01-01

    Piecewise GMM with unknown turning points is a new procedure to investigate heterogeneous subpopulations' growth trajectories consisting of distinct developmental phases. Unlike the conventional PGMM, which relies on theory or experiment design to specify turning points a priori, the new procedure allows for an optimal location of turning points…

  9. Multiscale Processes of Hurricane Sandy (2012) as Revealed by the CAMVis-MAP

    NASA Astrophysics Data System (ADS)

    Shen, B.; Li, J. F.; Cheung, S.

    2013-12-01

    In late October 2012, Storm Sandy made landfall near Brigantine, New Jersey, devastating surrounding areas and causing tremendous economic loss and hundreds of fatalities (Blake et al., 2013). An estimated damage of $50 billion made Sandy the second costliest tropical cyclone (TC) in US history, surpassed only by Hurricane Katrina (2005). Central questions to be addressed include (1) to what extent the lead time of severe storm prediction, such as for Sandy, can be extended (e.g., Emanuel 2012); and (2) whether and how advanced global models, supercomputing technology, and numerical algorithms can help effectively illustrate the complicated physical processes associated with the evolution of such storms. In this study, the predictability of Sandy is addressed with a focus on short-term (or extended-range) genesis prediction as the first step toward the goal of understanding the relationship between extreme events, such as Sandy, and the current climate. The newly deployed Coupled Advanced global mesoscale Modeling (GMM) and concurrent Visualization (CAMVis) system is used for this study. We show remarkable simulations of Hurricane Sandy with the GMM, including realistic 7-day track and intensity forecasts and genesis predictions with a lead time of up to 6 days (e.g., Shen et al., 2013, GRL, submitted). We then discuss the enabling role of high-resolution 4-D (time-X-Y-Z) visualizations in illustrating the TC's transient dynamics and its interaction with tropical waves. In addition, we have finished the parallel implementation of the ensemble empirical mode decomposition (PEEMD, Cheung et al., 2013, AGU13, submitted) method, which will soon be integrated into the multiscale analysis package (MAP) for the analysis of tropical weather systems such as TCs and tropical waves. While the original EEMD has previously shown superior performance in decomposing nonlinear and non-stationary data into intrinsic modes that stay within natural filter period windows, the PEEMD achieves a speedup of over 100 times compared to the original EEMD. The advanced GMM, 4-D visualizations, and the PEEMD method are being used to examine the multiscale processes of Sandy and its environmental flows that may contribute to the extended lead-time predictability of Hurricane Sandy. Figure 1: Evolution of Hurricane Sandy (2012) as revealed by the advanced visualization.

  10. An adaptive data-driven method for accurate prediction of remaining useful life of rolling bearings

    NASA Astrophysics Data System (ADS)

    Peng, Yanfeng; Cheng, Junsheng; Liu, Yanfei; Li, Xuejun; Peng, Zhihua

    2018-06-01

    A novel data-driven method based on Gaussian mixture model (GMM) and distance evaluation technique (DET) is proposed to predict the remaining useful life (RUL) of rolling bearings. The data sets are clustered by GMM to divide all data sets into several health states adaptively and reasonably. The number of clusters is determined by the minimum description length principle. Thus, either the health state of the data sets or the number of the states is obtained automatically. Meanwhile, the abnormal data sets can be recognized during the clustering process and removed from the training data sets. After obtaining the health states, appropriate features are selected by DET for increasing the classification and prediction accuracy. In the prediction process, each vibration signal is decomposed into several components by empirical mode decomposition. Some common statistical parameters of the components are calculated first and then the features are clustered using GMM to divide the data sets into several health states and remove the abnormal data sets. Thereafter, appropriate statistical parameters of the generated components are selected using DET. Finally, least squares support vector machine is utilized to predict the RUL of rolling bearings. Experimental results indicate that the proposed method reliably predicts the RUL of rolling bearings.
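
    The component-count selection step can be sketched with BIC, which plays a role analogous to the minimum description length principle the authors use; the 1-D "health state" features below are simulated, not bearing data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Simulated features drawn from three distinct regimes; pick the number
# of GMM components by minimum BIC over a candidate range.
X = np.concatenate([rng.normal(0.0, 0.3, 150),
                    rng.normal(3.0, 0.3, 150),
                    rng.normal(6.0, 0.3, 150)]).reshape(-1, 1)

bics = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
        for k in range(1, 7)}
best_k = min(bics, key=bics.get)
print("selected number of components:", best_k)
```

Like MDL, BIC trades goodness of fit against model complexity, so the number of health states does not have to be fixed in advance.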

  11. Low cross-reactivity of T-cell responses against lipids from Mycobacterium bovis and M. avium paratuberculosis during natural infection

    PubMed Central

    Van Rhijn, Ildiko; Nguyen, Thi Kim Anh; Michel, Anita; Cooper, Dave; Govaerts, Marc; Cheng, Tan-Yun; van Eden, Willem; Moody, D. Branch; Coetzer, Jacobus A. W.; Rutten, Victor; Koets, Ad P.

    2011-01-01

    Although CD1 proteins are known to present mycobacterial lipid antigens to T cells, there is little understanding of the in vivo behavior of T cells restricted by CD1a, CD1b and CD1c, and the relative immunogenicity and immunodominance of individual lipids within the total array of lipids that comprise a bacterium. Because bovines express multiple CD1 proteins and are natural hosts of Mycobacterium bovis and Mycobacterium avium paratuberculosis (MAP), we used them as a new animal model of CD1 function. Here, we report the surprisingly divergent responses against lipids produced by these two pathogens during infection. Despite considerable overlap in lipid content, only three out of 69 animals cross-react with M. bovis and MAP total lipid preparations. The unidentified immunodominant compound of M. bovis is a hydrophilic compound, whereas the immunodominant lipid of MAP is presented by CD1b and was identified as glucose monomycolate (GMM). The preferential recognition of GMM antigen by MAP-infected cattle may be explained by the higher expression of GMM by MAP than by M. bovis. The bacterial species-specific nature of the CD1-restricted, adaptive T-cell response affects the approach to development of lipid based immunodiagnostic tests. PMID:19688747

  12. A Novel Degradation Identification Method for Wind Turbine Pitch System

    NASA Astrophysics Data System (ADS)

    Guo, Hui-Dong

    2018-04-01

    It is difficult for the traditional threshold value method to identify the degradation of operating equipment accurately. A novel degradation evaluation method suitable for implementing a wind turbine condition maintenance strategy is proposed in this paper. Based on an analysis of the typical variable-speed pitch-to-feather control principle and the monitoring parameters of the pitch system, a multi-input multi-output (MIMO) regression model was applied to the pitch system, with wind speed and power generation as input parameters, and wheel rotation speed, pitch angle, and motor driving current for the three blades as output parameters. Then, the difference between the on-line measurement and the value calculated by the MIMO regression model, fitted using the least squares support vector machines (LSSVM) method, was defined as the Observed Vector of the system. A Gaussian mixture model (GMM) was applied to fit the distribution of the multi-dimensional Observed Vectors. Applying the established model, the Degradation Index was calculated using the SCADA data of a wind turbine with a damaged pitch bearing retainer and rolling body, which illustrated the feasibility of the proposed method.
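
    A likelihood-based degradation index in this spirit can be sketched as follows; the residual ("Observed Vector") samples are simulated stand-ins, not SCADA data, and the index here is simply the negative mean log-likelihood under a GMM fitted to healthy operation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
# Fit a GMM to residual vectors from healthy operation, then score new
# batches by negative log-likelihood: larger index = more degraded.
healthy = rng.normal(0, 1, size=(500, 3))
gmm = GaussianMixture(n_components=2, random_state=0).fit(healthy)

normal_batch = rng.normal(0, 1, size=(50, 3))
faulty_batch = rng.normal(0, 1, size=(50, 3)) + 4.0  # shifted residuals

def degradation_index(batch):
    # gmm.score returns the mean log-likelihood per sample.
    return -gmm.score(batch)

print("normal:", degradation_index(normal_batch))
print("faulty:", degradation_index(faulty_batch))
```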

  13. Particle Filtering for Model-Based Anomaly Detection in Sensor Networks

    NASA Technical Reports Server (NTRS)

    Solano, Wanda; Banerjee, Bikramjit; Kraemer, Landon

    2012-01-01

    A novel technique has been developed for anomaly detection of rocket engine test stand (RETS) data. The objective was to develop a system that postprocesses a CSV file containing the sensor readings and activities (time-series) from a rocket engine test, and detects any anomalies that might have occurred during the test. The output consists of the names of the sensors that show anomalous behavior, and the start and end time of each anomaly. In order to reduce the involvement of domain experts significantly, several data-driven approaches have been proposed where models are automatically acquired from the data, thus bypassing the cost and effort of building system models. Many supervised learning methods can efficiently learn operational and fault models, given large amounts of both nominal and fault data. However, for domains such as RETS data, the amount of anomalous data that is actually available is relatively small, making most supervised learning methods rather ineffective; in general they have met with limited success in anomaly detection. The fundamental problem with existing approaches is that they assume that the data are iid, i.e., independent and identically distributed, which is violated in typical RETS data. None of these techniques naturally exploits the temporal information inherent in time series data from the sensor networks. There are correlations among the sensor readings, not only at the same time, but also across time. However, these approaches have not explicitly identified and exploited such correlations. Given these limitations of model-free methods, there has been renewed interest in model-based methods, specifically graphical methods that explicitly reason temporally. The Gaussian Mixture Model (GMM) in a Linear Dynamic System approach assumes that the multi-dimensional test data is a mixture of multi-variate Gaussians, and fits a given number of Gaussian clusters with the help of the well-known Expectation Maximization (EM) algorithm. 
The parameters thus learned are used for calculating the joint distribution of the observations. However, this GMM assumption is essentially an approximation and signals the potential viability of non-parametric density estimators. This is the key idea underlying the new approach.
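
    The density-estimation step can be sketched alongside the non-parametric alternative the passage points toward (here kernel density estimation); the bimodal "nominal sensor readings" below are invented:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(3)
# Nominal readings from a bimodal process; score a point by its
# log-density under (a) a fitted GMM and (b) a non-parametric KDE.
nominal = np.concatenate([rng.normal(-2, 0.5, 300),
                          rng.normal(2, 0.5, 300)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(nominal)
kde = KernelDensity(bandwidth=0.3).fit(nominal)

typical, anomaly = np.array([[2.0]]), np.array([[6.0]])
for name, model in [("gmm", gmm), ("kde", kde)]:
    print(name,
          float(model.score_samples(typical)[0]),
          float(model.score_samples(anomaly)[0]))
```

Under either estimator, a low log-density flags the reading as anomalous; the KDE drops the GMM's parametric mixture-of-Gaussians assumption at the cost of keeping the training data around.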

  14. Coupled Electro-Magneto-Mechanical-Acoustic Analysis Method Developed by Using 2D Finite Element Method for Flat Panel Speaker Driven by Magnetostrictive-Material-Based Actuator

    NASA Astrophysics Data System (ADS)

    Yoo, Byungjin; Hirata, Katsuhiro; Oonishi, Atsurou

    In this study, a coupled analysis method for flat panel speakers driven by giant magnetostrictive material (GMM) based actuator was developed. The sound field produced by a flat panel speaker that is driven by a GMM actuator depends on the vibration of the flat panel, this vibration is a result of magnetostriction property of the GMM. In this case, to predict the sound pressure level (SPL) in the audio-frequency range, it is necessary to take into account not only the magnetostriction property of the GMM but also the effect of eddy current and the vibration characteristics of the actuator and the flat panel. In this paper, a coupled electromagnetic-structural-acoustic analysis method is presented; this method was developed by using the finite element method (FEM). This analysis method is used to predict the performance of a flat panel speaker in the audio-frequency range. The validity of the analysis method is verified by comparing with the measurement results of a prototype speaker.

  15. Bayesian Inference for Growth Mixture Models with Latent Class Dependent Missing Data

    ERIC Educational Resources Information Center

    Lu, Zhenqiu Laura; Zhang, Zhiyong; Lubke, Gitta

    2011-01-01

    "Growth mixture models" (GMMs) with nonignorable missing data have drawn increasing attention in research communities but have not been fully studied. The goal of this article is to propose and to evaluate a Bayesian method to estimate the GMMs with latent class dependent missing data. An extended GMM is first presented in which class…

  16. Uncertainty analysis in geospatial merit matrix–based hydropower resource assessment

    DOE PAGES

    Pasha, M. Fayzul K.; Yeasmin, Dilruba; Saetern, Sen; ...

    2016-03-30

    Hydraulic head and mean annual streamflow, two main input parameters in hydropower resource assessment, are not measured at every point along the stream. Translation and interpolation are used to derive these parameters, resulting in uncertainties. This study estimates the uncertainties and their effects on model output parameters: the total potential power and the number of potential locations (stream-reach). These parameters are quantified through Monte Carlo Simulation (MCS) linked with a geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. The methodology is applied to flat, mild, and steep terrains. Results show that the uncertainty associated with the hydraulic head is within 20% for mild and steep terrains, and the uncertainty associated with streamflow is around 16% for all three terrains. Output uncertainty increases as input uncertainty increases. However, output uncertainty is around 10% to 20% of the input uncertainty, demonstrating the robustness of the GMM-HRA model. Hydraulic head is more sensitive to output parameters in steep terrain than in flat and mild terrains. Furthermore, mean annual streamflow is more sensitive to output parameters in flat terrain.
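
    Monte Carlo propagation of head and streamflow uncertainty can be sketched as follows; the hydropower relation P = ρgQHη and all nominal values are generic illustrations, not the GMM-HRA model itself:

```python
import numpy as np

rng = np.random.default_rng(0)
# Propagate relative input uncertainties through a power estimate.
rho, g, eta = 1000.0, 9.81, 0.85   # water density, gravity, efficiency
Q_nom, H_nom = 12.0, 25.0          # streamflow (m^3/s), head (m)

n = 100_000
Q = Q_nom * (1 + rng.normal(0, 0.16, n))   # ~16% streamflow uncertainty
H = H_nom * (1 + rng.normal(0, 0.20, n))   # ~20% head uncertainty

P = rho * g * eta * Q * H / 1e6            # potential power in MW
print("mean %.2f MW, cv %.1f%%" % (P.mean(), 100 * P.std() / P.mean()))
```

The coefficient of variation of the output reflects both input uncertainties combined (roughly the root sum of squares of the relative input uncertainties for this multiplicative model).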

  17. Recognizing visual focus of attention from head pose in natural meetings.

    PubMed

    Ba, Sileye O; Odobez, Jean-Marc

    2009-02-01

    We address the problem of recognizing the visual focus of attention (VFOA) of meeting participants based on their head pose. To this end, the head pose observations are modeled using a Gaussian mixture model (GMM) or a hidden Markov model (HMM) whose hidden states correspond to the VFOA. The novelties of this paper are threefold. First, contrary to previous studies on the topic, in our setup, the potential VFOA of a person is not restricted to other participants only. It includes environmental targets as well (a table and a projection screen), which increases the complexity of the task, with more VFOA targets spread in the pan as well as tilt gaze space. Second, we propose a geometric model to set the GMM or HMM parameters by exploiting results from cognitive science on saccadic eye motion, which allows the prediction of the head pose given a gaze target. Third, an unsupervised parameter adaptation step not using any labeled data is proposed, which accounts for the specific gazing behavior of each participant. Using a publicly available corpus of eight meetings featuring four persons, we analyze the above methods by evaluating, through objective performance measures, the recognition of the VFOA from head pose information obtained either using a magnetic sensor device or a vision-based tracking system. The results clearly show that in such complex but realistic situations, the VFOA recognition performance is highly dependent on how well the visual targets are separated for a given meeting participant. In addition, the results show that the use of a geometric model with unsupervised adaptation achieves better results than the use of training data to set the HMM parameters.

  18. A Longitudinal Test of the Parent-Adolescent Family Functioning Discrepancy Hypothesis: A Trend toward Increased HIV Risk Behaviors Among Immigrant Hispanic Adolescents.

    PubMed

    Córdova, David; Schwartz, Seth J; Unger, Jennifer B; Baezconde-Garbanati, Lourdes; Villamar, Juan A; Soto, Daniel W; Des Rosiers, Sabrina E; Lee, Tae Kyoung; Meca, Alan; Cano, Miguel Ángel; Lorenzo-Blanco, Elma I; Oshri, Assaf; Salas-Wright, Christopher P; Piña-Watson, Brandy; Romero, Andrea J

    2016-10-01

    Parent-adolescent discrepancies in family functioning play an important role in HIV risk behaviors among adolescents, yet longitudinal research with recent immigrant Hispanic families remains limited. This study tested the effects of trajectories of parent-adolescent family functioning discrepancies on HIV risk behaviors among recent-immigrant Hispanic adolescents. Additionally, we examined whether and to what extent trajectories of parent-adolescent family functioning discrepancies vary as a function of gender. We assessed family functioning of 302 Hispanic adolescents (47 % female) and their parent (70 % female) at six time points over a three-year period and computed latent discrepancy scores between parent and adolescent reports at each timepoint. Additionally, adolescents completed measures of sexual risk behaviors and alcohol use. We conducted a confirmatory factor analysis to determine the feasibility of collapsing parent and adolescent reported family functioning indicators onto a single latent discrepancy variable, tested model invariance over time, and conducted growth mixture modeling (GMM). GMM yielded a three-class solution for discrepancies: High-Increasing, High-Stable, and Low-Stable. Relative to the Low-Stable class, parent-adolescent dyads in the High-Increasing and High-Stable classes were at greater risk for adolescents reporting sexual debut at time 6. Additionally, the High-Stable class was at greater risk, relative to the Low-Stable class, in terms of adolescent lifetime alcohol use at 30 months post-baseline. Multiple group GMM indicated that trajectories of parent-adolescent family functioning trajectories did not vary by gender. Implications for future research and practice are discussed.

  19. A Longitudinal Test of the Parent–Adolescent Family Functioning Discrepancy Hypothesis: A Trend toward Increased HIV Risk Behaviors among Immigrant Hispanic Adolescents

    PubMed Central

    Cordova, David; Schwartz, Seth J.; Unger, Jennifer B.; Baezconde-Garbanati, Lourdes; Villamar, Juan A.; Soto, Daniel W.; Des Rosiers, Sabrina E.; Lee, Tae Kyoung; Meca, Alan; Cano, Miguel Ángel; Lorenzo-Blanco, Elma I.; Oshri, Assaf; Salas-Wright, Christopher P.; Piña-Watson, Brandy M.; Romero, Andrea J.

    2016-01-01

    Parent-adolescent discrepancies in family functioning play an important role in HIV risk behaviors among adolescents, yet longitudinal research with recent immigrant Hispanic families remains limited. This study tested the effects of trajectories of parent–adolescent family functioning discrepancies on HIV risk behaviors among recent-immigrant Hispanic adolescents. Additionally, we examined whether and to what extent trajectories of parent-adolescent family functioning discrepancies vary as a function of gender. We assessed family functioning of 302 Hispanic adolescents (47% female) and their parent (70% female) at six time points over a three-year period and computed latent discrepancy scores between parent and adolescent reports at each timepoint. Additionally, adolescents completed measures of sexual risk behaviors and alcohol use. We conducted a confirmatory factor analysis to determine the feasibility of collapsing parent and adolescent reported family functioning indicators onto a single latent discrepancy variable, tested model invariance over time, and conducted growth mixture modeling (GMM). GMM yielded a three-class solution for discrepancies: High-Increasing, High-Stable, and Low-Stable. Relative to the Low-Stable class, parent–adolescent dyads in the High-Increasing and High-Stable classes were at greater risk for adolescents reporting sexual debut at time 6. Additionally, the High-Stable class was at greater risk, relative to the Low-Stable class, in terms of adolescent lifetime alcohol use at 30 months post-baseline. Multiple group GMM indicated that trajectories of parent-adolescent family functioning trajectories did not vary by gender. Implications for future research and practice are discussed. PMID:27216199

  20. Capturing the Temporal Sequence of Interaction in Young Siblings

    PubMed Central

    Steele, Fiona; Jenkins, Jennifer

    2015-01-01

    We explored whether young children exhibit subtypes of behavioral sequences during sibling interaction. Ten-minute, free-play observations of over 300 sibling dyads were coded for positivity, negativity and disengagement. The data were analyzed using growth mixture modeling (GMM). Younger (18-month-old) children’s temporal behavioral sequences showed a harmonious (53%) and a casual (47%) class. Older (approximately four-year-old) children’s behavior was more differentiated revealing a harmonious (25%), a deteriorating (31%), a recovery (22%) and a casual (22%) class. A more positive maternal affective climate was associated with more positive patterns. Siblings’ sequential behavioral patterns tended to be complementary rather than reciprocal in nature. The study illustrates a novel use of GMM and makes a theoretical contribution by showing that young children exhibit distinct types of temporal behavioral sequences that are related to parenting processes. PMID:25996957

  1. A Support Vector Machine-Based Gender Identification Using Speech Signal

    NASA Astrophysics Data System (ADS)

    Lee, Kye-Hwan; Kang, Sang-Ick; Kim, Deok-Hwan; Chang, Joon-Hyuk

    We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that classifies two groups by finding a nonlinear decision boundary in a feature space and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM)-based method using mel-frequency cepstral coefficients (MFCCs). A novel approach incorporating a feature fusion scheme based on a combination of the MFCCs and the fundamental frequency is proposed with the aim of improving the performance of gender identification. Experimental results demonstrate that the gender identification performance using the SVM is significantly better than that of the GMM-based scheme. Moreover, the performance is substantially improved when the proposed feature fusion technique is applied.

  2. Foreign Language Analysis and Recognition (FLARe)

    DTIC Science & Technology

    2016-10-08

    ...Rates (CERs) were obtained with each feature set: (1) 19.2%, (2) 17.3%, and (3) 15.3%. Based on these results, a GMM-HMM speech recognition system... These systems were evaluated on the HUB4 and HKUST test partitions. Table 7 shows the CER obtained on each test set. Whereas including the HKUST data

  3. SU-G-JeP3-04: Estimating 4D CBCT from Prior Information and Extremely Limited Angle Projections Using Structural PCA and Weighted Free-Form Deformation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, W; Yin, F; Zhang, Y

    Purpose: To investigate the feasibility of using structure-based principal component analysis (PCA) motion-modeling and weighted free-form deformation to estimate on-board 4D-CBCT using prior information and extremely limited angle projections for potential 4D target verification of lung radiotherapy. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In the previous method, each phase of the 4D-CBCT was generated by deforming a prior CT volume. The DFM was solved by a motion model extracted by global PCA and a free-form deformation (GMM-FD) technique, using a data fidelity constraint and deformation energy minimization. In this study, a new structural-PCA method was developed to build a structural motion model (SMM) by accounting for potential relative motion pattern changes between different anatomical structures from simulation to treatment. The motion model extracted from the planning 4DCT was divided into two structures: tumor and body excluding tumor, and the parameters of both structures were optimized together. Weighted free-form deformation (WFD) was employed afterwards to introduce flexibility in adjusting the weightings of different structures in the data fidelity constraint based on clinical interests. An XCAT (computerized patient model) phantom with a 30 mm diameter lesion was simulated with various anatomical and respirational changes from planning 4D-CT to onboard volume. The estimation accuracy was evaluated by the Volume-Percent-Difference (VPD)/Center-of-Mass-Shift (COMS) between lesions in the estimated and “ground-truth” on-board 4D-CBCT. Results: Among 6 different XCAT scenarios corresponding to respirational and anatomical changes from planning CT to on-board, using single 30° on-board projections, the VPD/COMS for SMM-WFD was reduced to 10.64±3.04%/1.20±0.45mm from 21.72±9.24%/1.80±0.53mm for GMM-FD.
Using 15° orthogonal projections, the VPD/COMS was further reduced to 1.91±0.86%/0.31±0.42mm based on SMM-WFD. Conclusion: Compared to the GMM-FD technique, the SMM-WFD technique can substantially improve the 4D-CBCT estimation accuracy using extremely small scan angles to provide ultra-fast 4D verification. This work was supported by the National Institutes of Health under Grant No. R01-CA184173 and a research grant from Varian Medical Systems.
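    The record above reduces an on-board deformation field map (DFM) to a handful of weights on principal motion modes extracted by PCA. A minimal numpy sketch of that idea (all sizes, names, and values are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 10 deformation field maps (DFMs), each
# flattened to a vector of displacement components (sizes illustrative).
n_samples, n_dofs = 10, 300
dfms = rng.normal(size=(n_samples, n_dofs))

# Motion model: mean DFM plus the top-k principal motion modes.
mean_dfm = dfms.mean(axis=0)
_, _, vt = np.linalg.svd(dfms - mean_dfm, full_matrices=False)
k = 3
modes = vt[:k]                       # (k, n_dofs), orthonormal rows

# A new on-board DFM is parameterized by only k weights w,
#     dfm_new = mean_dfm + w @ modes,
# so the data-fidelity search runs over k numbers instead of n_dofs.
w = np.array([1.5, -0.5, 0.2])
dfm_new = mean_dfm + w @ modes

# Projection onto the modes recovers the weights exactly.
w_recovered = (dfm_new - mean_dfm) @ modes.T
print(np.allclose(w, w_recovered))   # True
```

    The dimensionality reduction is what makes the limited-angle inverse problem tractable: the data fidelity constraint only has to determine a few mode weights rather than a full voxel-wise deformation.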

  4. Construction of Early and Midlife Work Trajectories in Women and Their Association With Birth Weight

    PubMed Central

    Mutambudzi, Miriam

    2014-01-01

    Objectives. We derived trajectories of the substantive complexity (SC) of work across mid-adult life in women and determined their association with term birth weight. SC is a concept that encompasses decision latitude, active learning, and ability to use and expand one’s abilities at work. Methods. Using occupational data from the National Longitudinal Survey of Youth 1979 and O*NET work variables, we used growth mixture modeling (GMM) to construct longitudinal trajectories of work SC from the ages of 18 to 34 years. The association between work trajectories and birth weight of infants born to study participants was modeled using generalized estimating equations, adjusting for education, income, and relevant covariates. Results. GMM yielded a 5-class solution for work trajectories in women. Higher work trajectories were associated with higher term birth weight and were robust to the inclusion of both education and income. A work trajectory that showed a sharp rise after age 24 years was associated with marked improvement in birth weight. Conclusions. Longitudinal modeling of work characteristics might improve capacity to integrate occupation into a life-course model that examines antecedents and consequences for maternal and child health. PMID:24354827

  5. Sequential updating of multimodal hydrogeologic parameter fields using localization and clustering techniques

    NASA Astrophysics Data System (ADS)

    Sun, Alexander Y.; Morris, Alan P.; Mohanty, Sitakanta

    2009-07-01

    Estimated parameter distributions in groundwater models may contain significant uncertainties because of data insufficiency. Therefore, adaptive uncertainty reduction strategies are needed to continuously improve model accuracy by fusing new observations. In recent years, various ensemble Kalman filters have been introduced as viable tools for updating high-dimensional model parameters. However, their usefulness is largely limited by the inherent assumption of Gaussian error statistics. Hydraulic conductivity distributions in alluvial aquifers, for example, are usually non-Gaussian as a result of complex depositional and diagenetic processes. In this study, we combine an ensemble Kalman filter with grid-based localization and Gaussian mixture model (GMM) clustering techniques for updating high-dimensional, multimodal parameter distributions via dynamic data assimilation. We introduce innovative strategies (e.g., block updating and dimension reduction) to effectively reduce the computational costs associated with these modified ensemble Kalman filter schemes. The developed data assimilation schemes are demonstrated numerically for identifying the multimodal heterogeneous hydraulic conductivity distributions in a binary facies alluvial aquifer. Our results show that localization and GMM clustering are very promising techniques for assimilating high-dimensional, multimodal parameter distributions, and they outperform the corresponding global ensemble Kalman filter analysis scheme in all scenarios considered.
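    The GMM clustering step above rests on fitting a mixture to multimodal parameter samples. A self-contained 1-D sketch of expectation-maximization for a two-component mixture, with synthetic bimodal data standing in for, say, log-conductivity of a binary-facies aquifer (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic bimodal log-conductivity sample: two facies, e.g. clay and sand.
x = np.concatenate([rng.normal(-2.0, 0.3, 500), rng.normal(1.0, 0.3, 500)])

# Two-component 1-D Gaussian mixture fitted by EM (means, stds, weights).
mu = np.array([-1.0, 0.5])
sd = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibility of each component for each sample.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    nk = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    w = nk / len(x)

print(np.round(np.sort(mu), 1))  # ≈ [-2.  1.], the two facies means
```

    In the assimilation scheme of the record, the fitted responsibilities would assign each ensemble member (or grid block) to a facies so that the Kalman update can be applied per mode rather than globally.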

  6. Accelerometry-based classification of human activities using Markov modeling.

    PubMed

    Mannini, Andrea; Sabatini, Angelo Maria

    2011-01-01

    Accelerometers are a popular choice as body-motion sensors: the reason is partly in their capability of extracting information that is useful for automatically inferring the physical activity in which the human subject is involved, besides their role in feeding biomechanical parameter estimators. Automatic classification of human physical activities is highly attractive for pervasive computing systems, where contextual awareness may ease the human-machine interaction, and in biomedicine, where wearable sensor systems are proposed for long-term monitoring. This paper is concerned with the machine learning algorithms needed to perform the classification task. Hidden Markov Model (HMM) classifiers are studied by contrasting them with Gaussian Mixture Model (GMM) classifiers. HMMs incorporate the statistical information available on movement dynamics into the classification process, without discarding the time history of previous outcomes as GMMs do. An example of the benefits of the obtained statistical leverage is illustrated and discussed by analyzing two datasets of accelerometer time series.
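    The contrast drawn above, that GMMs discard temporal order while HMMs exploit it, can be made concrete with a toy example in which two activity classes share the same per-frame emission statistics and differ only in their transition dynamics (all distributions and numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two hidden states with Gaussian emissions (means 0 and 5, std 1) that
# are shared by both activity classes.
means = np.array([0.0, 5.0])
std = 1.0

def frame_loglik(x):
    # Per-frame, per-state emission log-likelihood (up to a constant).
    return -0.5 * ((x[:, None] - means) / std) ** 2

# Class-specific transition matrices (rows: from-state, cols: to-state).
T_alt = np.array([[0.1, 0.9], [0.9, 0.1]])   # "alternating" activity
T_stay = np.array([[0.9, 0.1], [0.1, 0.9]])  # "persistent" activity

def viterbi_score(x, T):
    # Best-path log-probability: emissions plus transition log-probs.
    ll = frame_loglik(x)
    score = ll[0]
    for t in range(1, len(x)):
        score = ll[t] + np.max(score[:, None] + np.log(T), axis=0)
    return score.max()

# A sequence generated by the alternating dynamics.
states = np.array([0, 1] * 10)
x = means[states] + rng.normal(0.0, std, size=states.size)

# GMM-style evidence ignores order, so it carries no class information
# here; the HMM-style score separates the classes via transitions alone.
gmm_score = frame_loglik(x).max(axis=1).sum()
print(viterbi_score(x, T_alt) > viterbi_score(x, T_stay))  # True
```

    Because both classes emit from the same Gaussians, any order-blind (GMM-style) score is identical for them; only the Markov transition terms, i.e. the "time history" the abstract mentions, can tell the two activities apart.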

  7. Fusion and Gaussian mixture based classifiers for SONAR data

    NASA Astrophysics Data System (ADS)

    Kotari, Vikas; Chang, KC

    2011-06-01

    Underwater mines are inexpensive and highly effective weapons that are difficult to detect and classify. Detection and classification of underwater mines is therefore essential for the safety of naval vessels, which necessitates highly efficient classifiers and detection techniques. Current techniques primarily focus on signals from one source. Data fusion is known to increase the accuracy of detection and classification. In this paper, we formulated a fusion-based classifier and a Gaussian mixture model (GMM) based classifier for classification of underwater mines. The emphasis has been on sound navigation and ranging (SONAR) signals due to their extensive use in current naval operations. The classifiers have been tested on real SONAR data obtained from the University of California Irvine (UCI) repository. The performance of both the GMM-based and the fusion-based classifiers clearly demonstrates their superior classification accuracy over conventional single-source cases and validates our approach.

  8. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge to camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by a factor of 8. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
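    The forward model behind TCI is a coded temporal integration: one measured frame is the mask-weighted sum of T high-speed frames. A minimal sketch of that measurement model (sizes illustrative; the reconstruction itself is the hard inverse problem the record addresses):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical high-speed video patch: T = 8 frames of 8x8 pixels.
T = 8
video = rng.random((T, 8, 8))

# One binary coded mask per frame (the per-frame modulation of a TCI camera).
masks = rng.integers(0, 2, size=(T, 8, 8)).astype(float)

# A single compressive measurement integrates the 8 coded frames in time:
#     y = sum_t mask_t * x_t
y = (masks * video).sum(axis=0)

# Reconstruction (TwIST, GMM, ...) must invert this 8-to-1 mapping; here
# we only verify the forward model's shape and compression ratio.
print(y.shape, video.size // y.size)  # (8, 8) 8
```

    Patch-wise processing, as in the record's 8×8 patches, keeps the per-patch inverse problem small enough for dictionary- or GMM-based priors to be practical.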

  9. Damage/fault diagnosis in an operating wind turbine under uncertainty via a vibration response Gaussian mixture random coefficient model based framework

    NASA Astrophysics Data System (ADS)

    Avendaño-Valencia, Luis David; Fassois, Spilios D.

    2017-07-01

    The study focuses on vibration response based health monitoring for an operating wind turbine, which features time-dependent dynamics under environmental and operational uncertainty. A Gaussian Mixture Model Random Coefficient (GMM-RC) model based Structural Health Monitoring framework postulated in a companion paper is adopted and assessed. The assessment is based on vibration response signals obtained from a simulated offshore 5 MW wind turbine. The non-stationarity in the vibration signals originates from the inertial properties that continually evolve due to blade rotation, as well as from the wind characteristics, while uncertainty is introduced by random variations of the wind speed within the range of 10-20 m/s. Monte Carlo simulations are performed using six distinct structural states, including the healthy state and five types of damage/fault in the tower, the blades, and the transmission, each characterized by four distinct levels. Random vibration response modeling and damage diagnosis are illustrated, along with pertinent comparisons with state-of-the-art diagnosis methods. The results demonstrate consistently good performance of the GMM-RC model based framework, offering significant performance improvements over state-of-the-art methods. Most damage types and levels are shown to be properly diagnosed using a single vibration sensor.

  10. Detection and inpainting of facial wrinkles using texture orientation fields and Markov random field modeling.

    PubMed

    Batool, Nazre; Chellappa, Rama

    2014-09-01

    Facial retouching is widely used in the media and entertainment industry. Professional software usually requires a minimum level of user expertise to achieve desirable results. In this paper, we present an algorithm to detect facial wrinkles/imperfections. We believe that any such algorithm would be amenable to facial retouching applications. The detection of wrinkles/imperfections can allow these skin features to be processed differently than the surrounding skin without much user interaction. For detection, Gabor filter responses along with texture orientation fields are used as image features. A bimodal Gaussian mixture model (GMM) represents the distributions of Gabor features of normal skin versus skin imperfections. Then, a Markov random field model is used to incorporate the spatial relationships among neighboring pixels for their GMM distributions and texture orientations. An expectation-maximization algorithm then classifies skin versus skin wrinkles/imperfections. Once detected automatically, wrinkles/imperfections are removed completely instead of being blended or blurred. We propose an exemplar-based constrained texture synthesis algorithm to inpaint the irregularly shaped gaps left by the removal of detected wrinkles/imperfections. We present results on images downloaded from the Internet to show the efficacy of our algorithms.

  11. Identifying high energy density stream-reaches through refined geospatial resolution in hydropower resource assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pasha, M. Fayzul K.; Yang, Majntxov; Yeasmin, Dilruba

    Benefiting from the rapid development of multiple geospatial data sets on topography, hydrology, and existing energy-water infrastructure, reconnaissance-level hydropower resource assessment can now be conducted using geospatial models in all regions of the US. Furthermore, the updated techniques can be used to estimate the total undeveloped hydropower potential across all regions, and may eventually help identify hydropower opportunities that were previously overlooked. To enhance the characterization of higher energy density stream-reaches, this paper explored the sensitivity of hydropower stream-reach identification to geospatial resolution using the geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. GMM-HRA model simulation was conducted with eight different spatial resolutions on six U.S. Geological Survey (USGS) 8-digit hydrologic units (HUC8) located in three different terrains: Flat, Mild, and Steep. The results showed that more hydropower potential from higher energy density stream-reaches can be identified with increasing spatial resolution. Both the Flat and Mild terrains exhibited lower impacts compared to the Steep terrain. Consequently, greater attention should be applied when selecting the discretization resolution for future hydropower resource assessments.

  12. Identifying high energy density stream-reaches through refined geospatial resolution in hydropower resource assessment

    DOE PAGES

    Pasha, M. Fayzul K.; Yang, Majntxov; Yeasmin, Dilruba; ...

    2016-01-07

    Benefiting from the rapid development of multiple geospatial data sets on topography, hydrology, and existing energy-water infrastructure, reconnaissance-level hydropower resource assessment can now be conducted using geospatial models in all regions of the US. Furthermore, the updated techniques can be used to estimate the total undeveloped hydropower potential across all regions, and may eventually help identify hydropower opportunities that were previously overlooked. To enhance the characterization of higher energy density stream-reaches, this paper explored the sensitivity of hydropower stream-reach identification to geospatial resolution using the geospatial merit matrix based hydropower resource assessment (GMM-HRA) model. GMM-HRA model simulation was conducted with eight different spatial resolutions on six U.S. Geological Survey (USGS) 8-digit hydrologic units (HUC8) located in three different terrains: Flat, Mild, and Steep. The results showed that more hydropower potential from higher energy density stream-reaches can be identified with increasing spatial resolution. Both the Flat and Mild terrains exhibited lower impacts compared to the Steep terrain. Consequently, greater attention should be applied when selecting the discretization resolution for future hydropower resource assessments.

  13. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines

    PubMed Central

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J.; Raboso, Mariano

    2015-01-01

    Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters, and finally classifies them based on a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering, segmentation (based on a Gaussian Mixture Model (GMM) to separate the person from the background), masking (to reduce the dimensions of the images), and binarization (to reduce the size of each image). An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements. PMID:26091392

  14. Acoustic Biometric System Based on Preprocessing Techniques and Linear Support Vector Machines.

    PubMed

    del Val, Lara; Izquierdo-Fuente, Alberto; Villacorta, Juan J; Raboso, Mariano

    2015-06-17

    Drawing on the results of an acoustic biometric system based on an MSE classifier, a new biometric system has been implemented. This new system preprocesses acoustic images, extracts several parameters, and finally classifies them based on a Support Vector Machine (SVM). The preprocessing techniques used are spatial filtering, segmentation (based on a Gaussian Mixture Model (GMM) to separate the person from the background), masking (to reduce the dimensions of the images), and binarization (to reduce the size of each image). An analysis of classification error and a study of the sensitivity of the error versus the computational burden of each implemented algorithm are presented. This allows the selection of the most relevant algorithms, according to the benefits required by the system. A significant improvement of the biometric system has been achieved by reducing the classification error, the computational burden and the storage requirements.

  15. Validation of a CD1b tetramer assay for studies of human mycobacterial infection or vaccination.

    PubMed

    Layton, Erik D; Yu, Krystle K Q; Smith, Malisa T; Scriba, Thomas J; De Rosa, Stephen C; Seshadri, Chetan

    2018-07-01

    CD1 tetramers loaded with lipid antigens facilitate the identification of rare lipid-antigen specific T cells present in human blood and tissue. Because CD1 proteins are structurally non-polymorphic, these tetramers can be applied to genetically diverse human populations, unlike MHC-I and MHC-II tetramers. However, there are no standardized assays to quantify and characterize lipid antigen-specific T cells present within clinical samples. We incorporated CD1b tetramers loaded with the mycobacterial lipid glucose monomycolate (GMM) into a multi-parameter flow cytometry assay. Using a GMM-specific T-cell line, we demonstrate that the assay is linear, reproducible, repeatable, precise, accurate, and has a limit of detection of approximately 0.007%. Having formally validated this assay, we performed a cross-sectional study of healthy U.S. controls and South African adolescents with and without latent tuberculosis infection (LTBI). We show that GMM-specific T cells are specifically detected in South African subjects with LTBI and not in U.S. healthy controls. This assay can be expanded to include additional tetramers or phenotypic markers to characterize GMM-specific T cells in studies of mycobacterial infection, disease, or vaccination. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Does industrial waste taxation contribute to reduction of landfilled waste? Dynamic panel analysis considering industrial waste category in Japan.

    PubMed

    Sasao, Toshiaki

    2014-11-01

    Waste taxes, such as landfill and incineration taxes, have emerged as a popular option in developed countries to promote the 3Rs (reduce, reuse, and recycle). However, few studies have examined the effectiveness of waste taxes. In addition, few studies have considered both dynamic relationships among dependent variables and unobserved individual heterogeneity among jurisdictions. If dependent variables are persistent, omitted variables cause a bias, or common characteristics exist across the jurisdictions that have introduced waste taxes, the standard fixed effects model may lead to biased estimation results and misunderstood causal relationships. In addition, most existing studies have examined waste in terms of total amounts rather than by categories. Even if significant reductions in total waste amounts are not observed, some reduction within each category may, nevertheless, become evident. Therefore, this study analyzes the effects of industrial waste taxation on quantities of waste in landfill in Japan by applying the bias-corrected least-squares dummy variable (LSDVC) estimators, the generalized method of moments in first differences (difference GMM), and the system GMM. In addition, the study investigates effect differences attributable to industrial waste categories and taxation types. This paper shows that industrial waste taxes in Japan have had minimal, though significant, effects on the reduction of final disposal amounts thus far, once dynamic relationships and waste categories are considered. Copyright © 2014 Elsevier Ltd. All rights reserved.
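    The difference-GMM idea referenced above, removing fixed effects by differencing and instrumenting the differenced lagged dependent variable with earlier levels, can be illustrated in its simplest just-identified (Anderson-Hsiao style) form on simulated panel data. This is a sketch of the estimator's logic, not of the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated dynamic panel: y_it = rho * y_i,t-1 + alpha_i + eps_it,
# with unit fixed effects alpha_i (all values illustrative).
N, T, rho = 2000, 6, 0.5
alpha = rng.normal(0.0, 1.0, N)
y = np.zeros((N, T))
y[:, 0] = alpha + rng.normal(0.0, 1.0, N)
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + alpha + rng.normal(0.0, 1.0, N)

# First differencing removes alpha_i, but OLS on the differenced equation
# is inconsistent because dy_i,t-1 is correlated with d(eps_it).
dy = y[:, 2:] - y[:, 1:-1]        # dy_it for t = 2..T-1
dy_lag = y[:, 1:-1] - y[:, :-2]   # dy_i,t-1
z = y[:, :-2]                     # instrument: the level y_i,t-2

rho_fd = (dy_lag * dy).sum() / (dy_lag ** 2).sum()  # naive FD-OLS, biased
rho_iv = (z * dy).sum() / (z * dy_lag).sum()        # just-identified IV
print(round(rho_fd, 2), round(rho_iv, 2))           # rho_iv is near 0.5
```

    Difference GMM generalizes this by stacking all available lagged levels as instruments and weighting them optimally; system GMM adds the level equation instrumented by lagged differences.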

  17. Canadian Field Soils IV: Modeling Thermal Conductivity at Dryness and Saturation

    NASA Astrophysics Data System (ADS)

    Tarnawski, V. R.; McCombie, M. L.; Leong, W. H.; Coppa, P.; Corasaniti, S.; Bovesecchi, G.

    2018-03-01

    The thermal conductivity data of 40 Canadian soils at dryness (λ_dry) and at full saturation (λ_sat) were used to verify 13 predictive models, i.e., four mechanistic, four semi-empirical and five empirical equations. The performance of each model, for λ_dry and λ_sat, was evaluated using a standard deviation (SD) formula. Among the mechanistic models applied to dry soils, the closest λ_dry estimates were obtained by MaxRTCM (SD = ±0.018 W·m⁻¹·K⁻¹), followed by de Vries and a series-parallel model (S-‖). Among the semi-empirical equations (deVries-ave, Advanced Geometric Mean Model (A-GMM), Chaudhary and Bhandari (C-B) and Chen's equation), the closest λ_dry estimates were obtained by the C-B model (±0.022 W·m⁻¹·K⁻¹). Among the empirical equations, the top λ_dry estimates were given by CDry-40 (±0.021 W·m⁻¹·K⁻¹ and ±0.018 W·m⁻¹·K⁻¹ for 18 coarse and 22 fine soils, respectively). In addition, λ_dry and λ_sat models were applied to the λ_sat database of 21 other soils. From all the models tested, only the MaxRTCM and the CDry-40 models provided the closest λ_dry estimates for the 40 Canadian soils as well as the 21 soils. The best λ_sat estimates for the 40 Canadian soils and the 21 soils were given by the A-GMM and the S-‖ model.
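    For orientation, the basic geometric mean model that the A-GMM above refines combines the conductivities of solids and water weighted by porosity: λ_sat = λ_s^(1−n) · λ_w^n. The values below are illustrative assumptions, not data from the study:

```python
# Basic geometric mean model for saturated soil thermal conductivity:
#     lambda_sat = lambda_s**(1 - n) * lambda_w**n
# (the A-GMM in the record refines this; values here are assumed).
lambda_w = 0.594   # water, W/(m K), ~20 degC
lambda_s = 3.0     # soil solids, W/(m K), assumed (quartz-rich soils higher)
n = 0.4            # porosity, assumed

lambda_sat = lambda_s ** (1 - n) * lambda_w ** n
print(round(lambda_sat, 2))  # ≈ 1.57
```

    The geometric (rather than arithmetic) mean reflects the roughly series-parallel mixing of heat paths through solids and pore water, which is also why the S-‖ model above behaves similarly at saturation.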

  18. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regard to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
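    The probabilistic point-set registration described above can be sketched in a toy CPD-style form: soft Gaussian responsibilities between the two point sets, followed by a weighted Procrustes (Kabsch) solve for the rigid transform. Everything here (point counts, annealing schedule, transform) is illustrative, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(5)

# Model points X (e.g. a sampled centerline) and their transformed copy Y;
# the point-to-point correspondences are unknown to the algorithm.
X = rng.random((60, 3))
theta = 0.2
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.05, 0.02])
Y = X @ R_true.T + t_true

R, t, sigma2 = np.eye(3), np.zeros(3), 0.3
for _ in range(40):
    Xt = X @ R.T + t
    d2 = ((Y[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)      # (M, N)
    # Row-shifted exponentials keep exp() from underflowing to all-zero.
    P = np.exp(-0.5 * (d2 - d2.min(axis=1, keepdims=True)) / sigma2)
    P /= P.sum(axis=1, keepdims=True)                         # responsibilities
    # M-step: weighted Procrustes (Kabsch) solve for the rigid transform.
    w = P.sum(axis=0)
    mx = (w @ X) / w.sum()
    my = Y.mean(axis=0)               # P rows sum to 1
    H = (X - mx).T @ P.T @ (Y - my)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                # proper rotation, no reflection
    t = my - R @ mx
    sigma2 = max(sigma2 * 0.85, 1e-4)                         # annealing

print(np.allclose(R, R_true, atol=1e-2), np.allclose(t, t_true, atol=1e-2))
```

    The paper's extension adds local orientation to each Gaussian component and up-weights bifurcation points in the responsibilities; the EM-plus-Procrustes skeleton stays the same.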

  19. One-Shot Learning of Human Activity With an MAP Adapted GMM and Simplex-HMM.

    PubMed

    Rodriguez, Mario; Orrite, Carlos; Medrano, Carlos; Makris, Dimitrios

    2016-05-10

    This paper presents a novel activity class representation using a single sequence for training. The contribution of this representation lies in the ability to train a one-shot learning recognition system, useful in new scenarios where capturing and labeling sequences is expensive or impractical. The method uses a universal background model of local descriptors obtained from source databases available online and adapts it to a new sequence in the target scenario through maximum a posteriori adaptation. Each activity sample is encoded in a sequence of normalized bags of features and modeled by a new hidden Markov model formulation, where the expectation-maximization algorithm for training is modified to deal with observations consisting of vectors in a unit simplex. Extensive one-shot learning recognition experiments have been performed on the public datasets Weizmann, KTH, and IXMAS. These experiments demonstrate the discriminative properties of the representation and the validity of its application in recognition systems, achieving state-of-the-art results.
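    The MAP adaptation step above, adapting a universal background model (UBM) to a single target sequence, follows the classic relevance-factor update for component means. The sketch below uses an invented 1-D UBM and data; it is not the paper's descriptor pipeline:

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented 1-D universal background model (UBM): three components.
ubm_means = np.array([-4.0, 0.0, 4.0])
ubm_vars = np.ones(3)
ubm_w = np.ones(3) / 3.0

# The single target sequence: samples around shifted outer components.
x = np.concatenate([rng.normal(-3.5, 0.3, 15), rng.normal(4.5, 0.3, 15)])

# E-step under the UBM: posterior responsibility of each component.
dens = ubm_w * np.exp(-0.5 * (x[:, None] - ubm_means) ** 2 / ubm_vars)
post = dens / dens.sum(axis=1, keepdims=True)

# MAP update of the means with relevance factor r: components that see
# little data stay at the UBM; well-observed ones move toward the data.
r = 4.0
nk = post.sum(axis=0)
xbar = (post * x[:, None]).sum(axis=0) / np.maximum(nk, 1e-12)
w_adapt = nk / (nk + r)
map_means = w_adapt * xbar + (1 - w_adapt) * ubm_means
print(np.round(map_means, 1))  # middle component barely moves
```

    This shrinkage toward the UBM is what makes one-shot adaptation stable: a single sequence cannot re-estimate a full mixture, but it can nudge the well-supported components.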

  20. The impact of compression of speech signal, background noise and acoustic disturbances on the effectiveness of speaker identification

    NASA Astrophysics Data System (ADS)

    Kamiński, K.; Dobrowolski, A. P.

    2017-04-01

    The paper presents the architecture and the results of optimization of selected elements of an Automatic Speaker Recognition (ASR) system that uses Gaussian Mixture Models (GMM) in the classification process. Optimization was performed on the selection of individual features, using a genetic algorithm, and on the parameters of the Gaussian distributions used to describe individual voices. The developed system was tested to evaluate the impact of different compression methods used, among others, in landline, mobile, and VoIP telephony systems on the effectiveness of speaker identification. Also presented are results on the effectiveness of speaker identification at specific levels of noise in the speech signal and in the presence of other disturbances that can appear during phone calls, which made it possible to specify the spectrum of applications of the presented ASR system.

  1. Scattering and Diffraction of Electromagnetic Radiation: An Effective Probe to Material Structure

    NASA Technical Reports Server (NTRS)

    Xu, Yu-Lin

    2016-01-01

    Scattered electromagnetic waves from material bodies of different forms contain, in an intricate way, precise information on the intrinsic, geometrical and physical properties of the objects. Scattering theories, ever deepening, aim to provide dependable interpretation and prediction of the complicated interaction of electromagnetic radiation with matter. There are well-established multiple-scattering formulations based on classical electromagnetic theories. An example is the Generalized Multi-particle Mie-solution (GMM), which has recently been extended to a special version, the GMM-PA approach, applicable to finite periodic arrays consisting of a huge number (e.g., >>10^6) of identical scattering centers [1]. The framework of the GMM-PA is nearly complete. When the size of the constituent unit scatterers becomes considerably small in comparison with the incident wavelength, an appropriate array of such small element volumes may well be a satisfactory representation of a material entity having an arbitrary structure. X-ray diffraction is a powerful characterization tool used in a variety of scientific and technical fields, including material science. A diffraction pattern is nothing more than the spatial distribution of scattered intensity, determined by the distribution of scattering matter by way of its Fourier transform [1]. Since all linear dimensions entering Maxwell's equations are normalized by wavelength, an analogy exists between optical and X-ray diffraction patterns. A large set of optical diffraction patterns obtained experimentally can be found in the literature [e.g., 2,3]. Theoretical results from the GMM-PA have been scrutinized using a large collection of publicly accessible, experimentally obtained Fraunhofer diffraction patterns. As far as characteristic structures of the patterns are concerned, theoretical and experimental results are in uniform agreement; no exception has been found so far.
Closely connected with the spatial distribution of scattered intensities are cross sections, such as those for extinction, scattering, absorption, and radiation pressure, a critical type of key quantity addressed in most theoretical and experimental studies of radiative scattering. Cross sections predicted from different scattering theories are supposed to be in general agreement. For objects of irregular shape, the GMM-PA solutions can be compared with the highly flexible Discrete Dipole Approximation (DDA) [4,5] when dividing a target into no more than 10^6 unit cells. Also, there are different ways to calculate the cross sections in the GMM-PA, providing an additional means to examine the accuracy of the numerical solutions and to unveil potential issues concerning the theoretical formulations and numerical aspects. To solve multiple scattering by an assembly of material volumes through classical theories such as the GMM-PA, the radiative properties of the component scatterers, the complex refractive index in particular, must be provided as input parameters. When using a PA to characterize a material body, this involves the use of an adequate theoretical tool, an effective medium theory, to connect Maxwell's phenomenological theory with the atomistic theory of matter. In the atomic theory, one regards matter as composed of interacting particles (atoms and molecules) embedded in the vacuum [6]. However, the radiative properties of atomic-scale particles are known to be substantially different from those of bulk materials. Intensive research efforts in the fields of cluster science and nanoscience attempt to bridge the gap between bulk and atom and to understand the transition from classical to quantum physics. The GMM-PA calculations, which place virtually no restriction on the component-particle size, might help to gain certain insight into this transition.

  2. Novel positioning method using Gaussian mixture model for a monolithic scintillator-based detector in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Bae, Seungbin; Lee, Kisung; Seo, Changwoo; Kim, Jungmin; Joo, Sung-Kwan; Joung, Jinhun

    2011-09-01

    We developed a high-precision position decoding method for a positron emission tomography (PET) detector that consists of a thick slab scintillator coupled with a multichannel photomultiplier tube (PMT). The DETECT2000 simulation package was used to validate the light response characteristics of a 48.8 mm×48.8 mm×10 mm slab of lutetium oxyorthosilicate coupled to a 64-channel PMT. The data are then combined to produce light collection histograms. We employed a Gaussian mixture model (GMM) to parameterize the composite light response with multiple Gaussian mixtures. In the training step, the light photons acquired by the N PMT channels were used as an N-dimensional feature vector and fed into a GMM training model to generate optimal parameters for M mixtures. In the positioning step, we decoded the spatial locations of incident photons by evaluating a sample feature vector against the trained mixture parameters. The average spatial resolutions after positioning with four mixtures were 1.1 mm full width at half maximum (FWHM) at the corner and 1.0 mm FWHM at the center section. This indicates that the proposed algorithm achieves high performance in both spatial resolution and positioning bias, especially at the corner section of the detector.
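The train-then-decode pipeline described above can be sketched as a maximum-likelihood decision over per-position mixtures. This is an illustrative reconstruction, not the authors' code: the channel count, mixture count, and synthetic Gaussian "light responses" below are assumptions standing in for DETECT2000 simulation data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
N = 8            # number of PMT channels (feature dimension); illustrative
M = 4            # Gaussian mixtures per position class, as in the paper

# Hypothetical training data: light-response vectors for two incident
# positions; real data would come from scintillator-slab simulations.
center = rng.normal(loc=5.0, scale=1.0, size=(500, N))
corner = rng.normal(loc=2.0, scale=1.0, size=(500, N))

# One M-component GMM per candidate position.
models = {
    "center": GaussianMixture(n_components=M, random_state=0).fit(center),
    "corner": GaussianMixture(n_components=M, random_state=0).fit(corner),
}

def decode(x):
    """Assign an event to the position whose GMM gives the highest likelihood."""
    x = np.atleast_2d(x)
    return max(models, key=lambda k: models[k].score_samples(x)[0])
```

A realistic decoder would train one mixture per grid cell over the slab; two positions suffice to show the decision rule.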

  3. The Association of Latino Children's Kindergarten School Readiness Profiles with Grade 2-5 Literacy Achievement Trajectories

    ERIC Educational Resources Information Center

    Quirk, Matthew; Grimm, Ryan; Furlong, Michael J.; Nylund-Gibson, Karen; Swami, Sruthi

    2016-01-01

    This study utilized latent class analysis (LCA) to identify 5 discernible profiles of Latino children's (N = 1,253) social-emotional, physical, and cognitive school readiness at the time of kindergarten entry. In addition, a growth mixture modeling (GMM) approach was used to identify 3 unique literacy achievement trajectories, across Grades 2-5,…

  4. An Anomalous Noise Events Detector for Dynamic Road Traffic Noise Mapping in Real-Life Urban and Suburban Environments.

    PubMed

    Socoró, Joan Claudi; Alías, Francesc; Alsina-Pagès, Rosa Ma

    2017-10-12

    One of the main aspects affecting the quality of life of people living in urban and suburban areas is their continued exposure to high Road Traffic Noise (RTN) levels. Until now, noise measurements in cities have been performed by professionals, recording data in certain locations to build a noise map afterwards. However, the deployment of Wireless Acoustic Sensor Networks (WASN) has enabled automatic noise mapping in smart cities. In order to obtain a reliable picture of the RTN levels affecting citizens, Anomalous Noise Events (ANE) unrelated to road traffic should be removed from the noise map computation. To this aim, this paper introduces an Anomalous Noise Event Detector (ANED) designed to differentiate between RTN and ANE in real time within a predefined interval running on the distributed low-cost acoustic sensors of a WASN. The proposed ANED follows a two-class audio event detection and classification approach, instead of multi-class or one-class classification schemes, taking advantage of the collection of representative acoustic data in real-life environments. The experiments conducted within the DYNAMAP project, implemented on ARM-based acoustic sensors, show the feasibility of the proposal both in terms of computational cost and classification performance using standard Mel cepstral coefficients and Gaussian Mixture Models (GMM). The two-class GMM core classifier relatively improves the baseline universal GMM one-class classifier F1 measure by 18.7% and 31.8% for suburban and urban environments, respectively, within the 1-s integration interval. Nevertheless, according to the results, the classification performance of the current ANED implementation still has room for improvement.
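The two-class GMM decision at the core of the ANED can be sketched as a frame-wise log-likelihood comparison aggregated over an integration window. This is a hedged reconstruction: the feature dimension, mixture order, and synthetic features below are placeholders for the real Mel-cepstral data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
D = 13   # e.g., 13 Mel cepstral coefficients per frame (illustrative)

# Hypothetical training features; in the paper these are MFCCs from
# real-life WASN recordings labeled road traffic noise (RTN) or ANE.
rtn_train = rng.normal(0.0, 1.0, size=(1000, D))
ane_train = rng.normal(3.0, 1.0, size=(1000, D))

gmm_rtn = GaussianMixture(n_components=4, random_state=0).fit(rtn_train)
gmm_ane = GaussianMixture(n_components=4, random_state=0).fit(ane_train)

def classify_window(frames):
    """Two-class decision over all frames in an integration window:
    label the window ANE if its total ANE log-likelihood dominates."""
    ll_rtn = gmm_rtn.score_samples(frames).sum()
    ll_ane = gmm_ane.score_samples(frames).sum()
    return "ANE" if ll_ane > ll_rtn else "RTN"
```

Summing frame log-likelihoods over the window is what makes the decision per integration interval (e.g., 1 s) rather than per frame.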

  5. Modeling Concept Dependencies for Event Detection

    DTIC Science & Technology

    2014-04-04

    Gaussian Mixture Model (GMM). Jiang et al. [8] provide a summary of experiments for TRECVID MED 2010. They employ low-level features such as SIFT and...event detection literature. Ballan et al. [2] present a method to introduce temporal information for video event detection with a BoW (bag-of-words...approach. Zhou et al. [24] study video event detection by encoding a video with a set of bag of SIFT feature vectors and describe the distribution with a

  6. Modeling sports highlights using a time-series clustering framework and model interpretation

    NASA Astrophysics Data System (ADS)

    Radhakrishnan, Regunathan; Otsuka, Isao; Xiong, Ziyou; Divakaran, Ajay

    2005-01-01

    In our past work on sports highlights extraction, we have shown the utility of detecting audience reaction using an audio classification framework. The audio classes in the framework were chosen based on intuition. In this paper, we present a systematic way of identifying the key audio classes for sports highlights extraction using a time series clustering framework. We treat the low-level audio features as a time series and model the highlight segments as "unusual" events in a background of a "usual" process. The set of audio classes to characterize the sports domain is then identified by analyzing the consistent patterns in each of the clusters output from the time series clustering framework. The distribution of features from the training data so obtained for each of the key audio classes is parameterized by a Minimum Description Length Gaussian Mixture Model (MDL-GMM). We also interpret the meaning of each of the mixture components of the MDL-GMM for the key audio class (the "highlight" class) that is correlated with highlight moments. Our results show that the "highlight" class is a mixture of audience cheering and commentator's excited speech. Furthermore, we show that the precision-recall performance for highlights extraction based on this "highlight" class is better than that of our previous approach, which uses only audience cheering as the key highlight class.
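The MDL-based choice of mixture order can be approximated with the Bayesian Information Criterion, a common practical surrogate for MDL. This sketch (synthetic one-dimensional features, an off-the-shelf GMM) is an illustration, not the paper's MDL-GMM implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic 1-D audio-feature frames drawn from two well-separated
# components, standing in for frames of a key audio class.
X = np.vstack([rng.normal(-5.0, 0.4, (500, 1)),
               rng.normal(5.0, 0.4, (500, 1))])

def select_order(X, max_k=5):
    """Pick the mixture order minimizing BIC, a practical surrogate
    for the minimum-description-length criterion."""
    fits = [GaussianMixture(k, random_state=0).fit(X) for k in range(1, max_k + 1)]
    return int(np.argmin([m.bic(X) for m in fits])) + 1
```

On this two-cluster toy data the criterion recovers two components; interpreting each fitted component (e.g., cheering vs. excited speech) is then a matter of inspecting its mean and assignments.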

  7. An efficient background modeling approach based on vehicle detection

    NASA Astrophysics Data System (ADS)

    Wang, Jia-yan; Song, Li-mei; Xi, Jiang-tao; Guo, Qing-hua

    2015-10-01

    The Gaussian Mixture Model (GMM) widely used in vehicle detection is inefficient at detecting the foreground during the modeling phase, because it needs quite a long time to blend shadows into the background. To overcome this problem, an improved method is proposed in this paper. First, each frame is divided into several areas (A, B, C, and D), decided by the frequency and scale of vehicle access. For each area, a different learning rate for the weight, mean, and variance is applied to accelerate the elimination of shadows. At the same time, the number of Gaussian distributions is adapted per area, decreasing the total number of distributions and saving memory effectively. With this method, different threshold values and different numbers of Gaussian distributions are adopted for different areas. The results show that the learning speed and model accuracy of the proposed algorithm surpass the traditional GMM. By roughly the 50th frame, interference from vehicles has essentially been eliminated; the number of model distributions is only 35% to 43% of the standard GMM's, and per-frame processing is approximately 20% faster than the standard. The proposed algorithm performs well in shadow elimination and processing speed for vehicle detection, can promote the development of intelligent transportation, and is also meaningful for other background modeling methods.
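The region-dependent learning-rate idea can be illustrated with a minimal per-pixel running-Gaussian background model in NumPy. This is a deliberate simplification of the paper's multi-distribution GMM (one Gaussian per pixel, a hypothetical two-region rate map), meant only to show how a larger learning rate absorbs changes such as shadows faster.

```python
import numpy as np

H, W = 4, 4
mean = np.zeros((H, W))          # per-pixel background mean
var = np.full((H, W), 1.0)       # per-pixel background variance

# Hypothetical region map: a fast learning rate where vehicles and their
# shadows appear frequently, a slow one elsewhere (cf. the paper's areas A-D).
alpha = np.where(np.arange(W) < W // 2, 0.5, 0.05)
alpha = np.broadcast_to(alpha, (H, W))

def update(frame, k=2.5):
    """Flag pixels deviating by more than k sigma, then blend the frame
    into the background model at the region-dependent rate."""
    global mean, var
    foreground = np.abs(frame - mean) > k * np.sqrt(var)
    mean = (1 - alpha) * mean + alpha * frame
    var = (1 - alpha) * var + alpha * (frame - mean) ** 2
    return foreground

# Feed a sudden uniform change (e.g., a shadow covering the scene): the
# fast-rate half of the image absorbs it into the background within a few
# frames, while the slow half still remembers the old background.
for _ in range(10):
    update(np.full((H, W), 10.0))
```

After ten frames the fast-rate columns have means near the new value while the slow-rate columns lag well behind, which is exactly the lever the paper tunes per area.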

  8. The Probabilistic Admissible Region with Additional Constraints

    NASA Astrophysics Data System (ADS)

    Roscoe, C.; Hussein, I.; Wilkins, M.; Schumacher, P.

    The admissible region, in the space surveillance field, is defined as the set of physically acceptable orbits (e.g., orbits with negative energies) consistent with one or more observations of a space object. Given additional constraints on orbital semimajor axis, eccentricity, etc., the admissible region can be constrained, resulting in the constrained admissible region (CAR). Based on known statistics of the measurement process, one can replace hard constraints with a probabilistic representation of the admissible region. This results in the probabilistic admissible region (PAR), which can be used for orbit initiation in Bayesian tracking and prioritization of tracks in a multiple hypothesis tracking framework. The PAR concept was introduced by the authors at the 2014 AMOS conference. In that paper, a Monte Carlo approach was used to show how to construct the PAR in the range/range-rate space based on known statistics of the measurement, semimajor axis, and eccentricity. An expectation-maximization algorithm was proposed to convert the particle cloud into a Gaussian Mixture Model (GMM) representation of the PAR. This GMM can be used to initialize a Bayesian filter. The PAR was found to be significantly non-uniform, invalidating an assumption frequently made in CAR-based filtering approaches. Using the GMM or particle cloud representations of the PAR, orbits can be prioritized for propagation in a multiple hypothesis tracking (MHT) framework. In this paper, the authors focus on expanding the PAR methodology to allow additional constraints, such as a constraint on perigee altitude, to be modeled in the PAR. This requires re-expressing the joint probability density function for the attributable vector as well as the (constrained) orbital parameters and range and range-rate. The final PAR is derived by accounting for any interdependencies between the parameters. 
Noting that the concepts presented are general and can be applied to any measurement scenario, the idea will be illustrated using a short-arc, angles-only observation scenario.
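The expectation-maximization step that converts the particle cloud into a GMM can be sketched with an off-the-shelf EM implementation. The two-component cloud below is synthetic and its range/range-rate values are arbitrary placeholders for the Monte Carlo samples of a probabilistic admissible region.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Hypothetical particle cloud in (range [km], range-rate [km/s]) space,
# deliberately non-uniform: two regions of differing sample density.
cloud = np.vstack([
    rng.normal([7000.0, 1.0], [50.0, 0.2], size=(2000, 2)),
    rng.normal([7400.0, -1.5], [80.0, 0.3], size=(1000, 2)),
])

# EM converts the cloud into a GMM that can seed a Bayesian filter;
# the fitted weights capture the non-uniformity of the region.
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(cloud)
```

The mixture weights (here roughly 2/3 and 1/3, matching the sample counts) are what make the GMM representation preferable to a uniform-density assumption over the admissible region.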

  9. Partially supervised speaker clustering.

    PubMed

    Tang, Hao; Chu, Stephen Mingyu; Hasegawa-Johnson, Mark; Huang, Thomas S

    2012-05-01

    Content-based multimedia indexing, retrieval, and processing as well as multimedia databases demand the structuring of the media content (image, audio, video, text, etc.), one significant goal being to associate the identity of the content to the individual segments of the signals. In this paper, we specifically address the problem of speaker clustering, the task of assigning every speech utterance in an audio stream to its speaker. We offer a complete treatment to the idea of partially supervised speaker clustering, which refers to the use of our prior knowledge of speakers in general to assist the unsupervised speaker clustering process. By means of an independent training data set, we encode the prior knowledge at the various stages of the speaker clustering pipeline via 1) learning a speaker-discriminative acoustic feature transformation, 2) learning a universal speaker prior model, and 3) learning a discriminative speaker subspace, or equivalently, a speaker-discriminative distance metric. We study the directional scattering property of the Gaussian mixture model (GMM) mean supervector representation of utterances in the high-dimensional space, and advocate exploiting this property by using the cosine distance metric instead of the Euclidean distance metric for speaker clustering in the GMM mean supervector space. We propose to perform discriminant analysis based on the cosine distance metric, which leads to a novel distance metric learning algorithm, linear spherical discriminant analysis (LSDA). We show that the proposed LSDA formulation can be systematically solved within the elegant graph embedding general dimensionality reduction framework. 
Our speaker clustering experiments on the GALE database clearly indicate that 1) our speaker clustering methods based on the GMM mean supervector representation and vector-based distance metrics outperform traditional speaker clustering methods based on the “bag of acoustic features” representation and statistical model-based distance metrics, 2) our advocated use of the cosine distance metric yields consistent increases in the speaker clustering performance as compared to the commonly used Euclidean distance metric, 3) our partially supervised speaker clustering concept and strategies significantly improve the speaker clustering performance over the baselines, and 4) our proposed LSDA algorithm further leads to state-of-the-art speaker clustering performance.
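The case for the cosine metric in the supervector space can be made with a toy example: scaling an utterance's supervector (as, for instance, a gain-like nuisance factor might) moves it far in Euclidean distance but not at all in cosine distance. The vectors below are purely illustrative, not real adapted-GMM means.

```python
import numpy as np

def supervector(gmm_means):
    """Stack the component means of an adapted GMM into one long vector."""
    return np.concatenate([np.asarray(m).ravel() for m in gmm_means])

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Two hypothetical utterances from the same speaker, one with a scaled
# supervector (a crude stand-in for a direction-preserving nuisance).
quiet = supervector([[1.0, 2.0], [3.0, 4.0]])
loud = 2.5 * quiet
```

Here `cosine_distance(quiet, loud)` is essentially zero while the Euclidean distance is large; that invariance to radial scatter is the directional-scattering argument made in the paper.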

  10. Bayesian Nonlinear Assimilation of Eulerian and Lagrangian Coastal Flow Data

    DTIC Science & Technology

    2015-09-30

    Lagrangian Coastal Flow Data Dr. Pierre F.J. Lermusiaux Department of Mechanical Engineering Center for Ocean Science and Engineering Massachusetts...Develop and apply theory, schemes and computational systems for rigorous Bayesian nonlinear assimilation of Eulerian and Lagrangian coastal flow data...coastal ocean fields, both in Eulerian and Lagrangian forms. - Further develop and implement our GMM-DO schemes for robust Bayesian nonlinear estimation

  11. An Anomalous Noise Events Detector for Dynamic Road Traffic Noise Mapping in Real-Life Urban and Suburban Environments

    PubMed Central

    2017-01-01

    One of the main aspects affecting the quality of life of people living in urban and suburban areas is their continued exposure to high Road Traffic Noise (RTN) levels. Until now, noise measurements in cities have been performed by professionals, recording data in certain locations to build a noise map afterwards. However, the deployment of Wireless Acoustic Sensor Networks (WASN) has enabled automatic noise mapping in smart cities. In order to obtain a reliable picture of the RTN levels affecting citizens, Anomalous Noise Events (ANE) unrelated to road traffic should be removed from the noise map computation. To this aim, this paper introduces an Anomalous Noise Event Detector (ANED) designed to differentiate between RTN and ANE in real time within a predefined interval running on the distributed low-cost acoustic sensors of a WASN. The proposed ANED follows a two-class audio event detection and classification approach, instead of multi-class or one-class classification schemes, taking advantage of the collection of representative acoustic data in real-life environments. The experiments conducted within the DYNAMAP project, implemented on ARM-based acoustic sensors, show the feasibility of the proposal both in terms of computational cost and classification performance using standard Mel cepstral coefficients and Gaussian Mixture Models (GMM). The two-class GMM core classifier relatively improves the baseline universal GMM one-class classifier F1 measure by 18.7% and 31.8% for suburban and urban environments, respectively, within the 1-s integration interval. Nevertheless, according to the results, the classification performance of the current ANED implementation still has room for improvement. PMID:29023397

  12. Facial Shape Analysis Identifies Valid Cues to Aspects of Physiological Health in Caucasian, Asian, and African Populations.

    PubMed

    Stephen, Ian D; Hiew, Vivian; Coetzee, Vinet; Tiddeman, Bernard P; Perrett, David I

    2017-01-01

    Facial cues contribute to attractiveness, including shape cues such as symmetry, averageness, and sexual dimorphism. These cues may represent cues to objective aspects of physiological health, thereby conferring an evolutionary advantage to individuals who find them attractive. The link between facial cues and aspects of physiological health is therefore central to evolutionary explanations of attractiveness. Previously, studies linking facial cues to aspects of physiological health have been infrequent, have had mixed results, and have tended to focus on individual facial cues in isolation. Geometric morphometric methodology (GMM) allows a bottom-up approach to identifying shape correlates of aspects of physiological health. Here, we apply GMM to facial shape data, producing models that successfully predict aspects of physiological health in 272 Asian, African, and Caucasian faces - percentage body fat (21.0% of variance explained), body mass index (BMI; 31.9%) and blood pressure (BP; 21.3%). Models successfully predict percentage body fat and blood pressure even when controlling for BMI, suggesting that they are not simply measuring body size. Predicted values of BMI and BP, but not percentage body fat, correlate with health ratings. When asked to manipulate the shape of faces along the physiological health variable axes (as determined by the models), participants reduced predicted BMI, body fat and (marginally) BP, suggesting that facial shape provides a valid cue to aspects of physiological health.

  13. Facial Shape Analysis Identifies Valid Cues to Aspects of Physiological Health in Caucasian, Asian, and African Populations

    PubMed Central

    Stephen, Ian D.; Hiew, Vivian; Coetzee, Vinet; Tiddeman, Bernard P.; Perrett, David I.

    2017-01-01

    Facial cues contribute to attractiveness, including shape cues such as symmetry, averageness, and sexual dimorphism. These cues may represent cues to objective aspects of physiological health, thereby conferring an evolutionary advantage to individuals who find them attractive. The link between facial cues and aspects of physiological health is therefore central to evolutionary explanations of attractiveness. Previously, studies linking facial cues to aspects of physiological health have been infrequent, have had mixed results, and have tended to focus on individual facial cues in isolation. Geometric morphometric methodology (GMM) allows a bottom–up approach to identifying shape correlates of aspects of physiological health. Here, we apply GMM to facial shape data, producing models that successfully predict aspects of physiological health in 272 Asian, African, and Caucasian faces – percentage body fat (21.0% of variance explained), body mass index (BMI; 31.9%) and blood pressure (BP; 21.3%). Models successfully predict percentage body fat and blood pressure even when controlling for BMI, suggesting that they are not simply measuring body size. Predicted values of BMI and BP, but not percentage body fat, correlate with health ratings. When asked to manipulate the shape of faces along the physiological health variable axes (as determined by the models), participants reduced predicted BMI, body fat and (marginally) BP, suggesting that facial shape provides a valid cue to aspects of physiological health. PMID:29163270

  14. Adenovirus-mediated interleukin-18 mutant in vivo gene transfer inhibits tumor growth through the induction of T cell immunity and activation of natural killer cell cytotoxicity.

    PubMed

    Hwang, Kyung-Sun; Cho, Won-Kyung; Yoo, Jinsang; Seong, Young Rim; Kim, Bum-Kyeng; Kim, Samyong; Im, Dong-Soo

    2004-06-01

    We report here that gene transfer using recombinant adenoviruses encoding interleukin (IL)-18 mutants induces potent antitumor activity in vivo. The precursor form of IL-18 (ProIL-18) is processed by caspase-1 to produce bioactive IL-18, but its cleavage by caspase-3 (CPP32) produces an inactive form. To prepare IL-18 molecules with effective antitumor activity, a murine IL-18 mutant with the signal sequence of murine granulocyte-macrophage colony-stimulating factor (GM-CSF) at the 5'-end of the mature IL-18 cDNA (GMmIL-18) and a human IL-18 mutant with the prepro leader sequence of trypsin (PPT), which is not cleaved by caspase-3 (PPThIL-18CPP32-), were constructed. Adenovirus vectors carrying GMmIL-18 or PPThIL-18CPP32- produced bioactive IL-18. Ad.GMmIL-18 had a more potent antitumor effect than Ad.mProIL-18 encoding immature IL-18 in renal cell adenocarcinoma (Renca) tumor-bearing mice. Tumor-specific cytotoxic T lymphocytes, the induction of Th1 cytokines, and an augmented natural killer (NK) cell activity were detected in Renca tumor-bearing mice treated with Ad.GMmIL-18. An immunohistological analysis revealed that CD4+ and CD8+ T cells abundantly infiltrated into tumors of mice treated with Ad.GMmIL-18. Huh-7 human hepatoma tumor growth in nude mice with defective T cell function was significantly inhibited by Ad.PPThIL-18CPP32- compared with Ad.hProIL-18 encoding immature IL-18. Nude mice treated with Ad.PPThIL-18CPP32- contained NK cells with increased cytotoxicity. The results suggest that the release of mature IL-18 in tumors is required for achieving an antitumor effect including tumor-specific cellular immunity and augmented NK cell-mediated cytotoxicity. These optimally designed IL-18 mutants could be useful for improving the antitumor effectiveness of wild-type IL-18. Copyright 2004 Nature Publishing Group

  15. Dielectric constant adjustments in computations of the scattering properties of solid ice crystals using the Generalized Multi-particle Mie method

    NASA Astrophysics Data System (ADS)

    Lu, Yinghui; Aydin, Kültegin; Clothiaux, Eugene E.; Verlinde, Johannes

    2014-03-01

    Ice crystal scattering properties at microwave radar wavelengths can be modeled with the Generalized Multi-particle Mie (GMM) method by decomposing an ice crystal into a cluster of tiny spheres composed of solid ice. In this decomposition the mass distribution of the tiny spheres in the cluster is no longer equivalent to that in the original ice crystal because of gaps between the tiny spheres. To compensate for the gaps in the cluster representation of an ice crystal in the GMM computation of crystal scattering properties, the Maxwell Garnett approximation is used to estimate what the dielectric function of the tiny spheres (i.e., the inclusions) in the cluster must be to make the cluster of tiny spheres with associated air gaps (i.e., the background matrix) dielectrically equivalent to the original solid ice crystal. Overall, compared with the T-matrix method for spheroids outside resonance regions, this approach agrees mostly to within 0.3 dB (and often better) in the horizontal backscattering cross section σhh and the ratio of horizontal to vertical backscattering cross sections σhh/σvv, and to within 6% for the amplitude scattering matrix elements Re{S22-S11} and Im{S22} in the forward direction. For crystal sizes and wavelengths near resonances, where the scattering parameters are highly sensitive to the crystal shape, the differences are generally within 1.2 dB for σhh and σhh/σvv, 20% for Re{S22-S11} and 6% for Im{S22}. The Discrete Dipole Approximation (DDA) results for the same spheroids are generally closer than those of GMM to the T-matrix results. For hexagonal plates the differences between GMM and the DDA at a W-band wavelength (3.19 mm) are mostly within 0.6 dB for σhh, 1 dB for σhh/σvv, 11% for Re{S22-S11} and 12% for Im{S22}. For columns the differences are within 0.3 dB for σhh and σhh/σvv, 8% for Re{S22-S11} and 4% for Im{S22}. 
This method shows higher accuracy than an alternative method that artificially increases the thickness of ice plates to provide the same mass as the original ice crystal.
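The Maxwell Garnett rule invoked above has a standard closed form; writing ε_i for the inclusion (tiny-sphere) permittivity, ε_m for the background (air-gap) permittivity, and f for the inclusion volume fraction:

```latex
% Maxwell Garnett mixing rule (standard form)
\frac{\varepsilon_{\mathrm{eff}} - \varepsilon_m}{\varepsilon_{\mathrm{eff}} + 2\varepsilon_m}
  = f\,\frac{\varepsilon_i - \varepsilon_m}{\varepsilon_i + 2\varepsilon_m}
\quad\Longrightarrow\quad
\varepsilon_{\mathrm{eff}}
  = \varepsilon_m\,
    \frac{\varepsilon_i + 2\varepsilon_m + 2f(\varepsilon_i - \varepsilon_m)}
         {\varepsilon_i + 2\varepsilon_m - f(\varepsilon_i - \varepsilon_m)}
```

In the compensation described in the abstract the relation is used in reverse: ε_eff is fixed to the permittivity of solid ice and the formula is solved for the ε_i assigned to the tiny spheres, given their packing fraction f. Any further adjustments in the paper's implementation are beyond what the abstract states.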

  16. A Hierarchical Convolutional Neural Network for vesicle fusion event classification.

    PubMed

    Li, Haohan; Mao, Yunxiang; Yin, Zhaozheng; Xu, Yingke

    2017-09-01

    Quantitative analysis of vesicle exocytosis and classification of different modes of vesicle fusion from fluorescence microscopy are of primary importance for biomedical research. In this paper, we propose a novel Hierarchical Convolutional Neural Network (HCNN) method to automatically identify vesicle fusion events in time-lapse Total Internal Reflection Fluorescence Microscopy (TIRFM) image sequences. Firstly, a detection and tracking method is developed to extract image patch sequences containing potential fusion events. Then, a Gaussian Mixture Model (GMM) is applied on each image patch of the patch sequence, with outliers rejected for robust Gaussian fitting. By utilizing the high-level time-series intensity change features introduced by the GMM and the visual appearance features embedded in key moments of the fusion process, the proposed HCNN architecture is able to classify each candidate patch sequence into three classes: full fusion event, partial fusion event and non-fusion event. Finally, we validate the performance of our method on 9 challenging datasets that have been annotated by cell biologists, and our method achieves better performance than three previous methods. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Utterance independent bimodal emotion recognition in spontaneous communication

    NASA Astrophysics Data System (ADS)

    Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng

    2011-12-01

    Emotion expressions are sometimes mixed with the utterance expression in spontaneous face-to-face communication, which makes emotion recognition difficult. This article introduces methods for reducing the utterance influences in visual parameters for audio-visual-based emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). Then, the utterance reduction is performed by taking the residual between the real visual parameters and the outputs of the utterance-related visual parameters. This article introduces a Fused Hidden Markov Model Inversion method, trained on a neutrally expressed audio-visual corpus, to solve the problem. To reduce the computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.

  18. The Effect of Poverty, Gender Exclusion, and Child Labor on Out-of-School Rates for Female Children

    ERIC Educational Resources Information Center

    Laborda Castillo, Leopoldo; Sotelsek Salem, Daniel; Sarr, Leopold Remi

    2014-01-01

    In this article, the authors analyze the effect of poverty, social exclusion, and child labor on out-of-school rates for female children. This empirical study is based on a dynamic panel model for a sample of 216 countries over the period 1970 to 2010. Results based on the generalized method of moments (GMM) of Arellano and Bond (1991) and the…

  19. Classification of SD-OCT volumes for DME detection: an anomaly detection approach

    NASA Astrophysics Data System (ADS)

    Sankar, S.; Sidibé, D.; Cheung, Y.; Wong, T. Y.; Lamoureux, E.; Milea, D.; Meriaudeau, F.

    2016-03-01

    Diabetic Macular Edema (DME) is the leading cause of blindness amongst diabetic patients worldwide. It is characterized by accumulation of water molecules in the macula leading to swelling. Early detection of the disease helps prevent further loss of vision. Naturally, automated detection of DME from Optical Coherence Tomography (OCT) volumes plays a key role. To this end, a pipeline for detecting DME diseases in OCT volumes is proposed in this paper. The method is based on anomaly detection using Gaussian Mixture Model (GMM). It starts with pre-processing the B-scans by resizing, flattening, filtering and extracting features from them. Both intensity and Local Binary Pattern (LBP) features are considered. The dimensionality of the extracted features is reduced using PCA. As the last stage, a GMM is fitted with features from normal volumes. During testing, features extracted from the test volume are evaluated with the fitted model for anomaly and classification is made based on the number of B-scans detected as outliers. The proposed method is tested on two OCT datasets achieving a sensitivity and a specificity of 80% and 93% on the first dataset, and 100% and 80% on the second one. Moreover, experiments show that the proposed method achieves better classification performances than other recently published works.
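The anomaly-detection stage can be sketched as: fit a GMM to features of normal B-scans, flag low-likelihood B-scans as outliers, and classify the volume by outlier count. The feature dimension, mixture order, percentile threshold, and outlier cutoff below are assumptions for illustration; the abstract does not specify them.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
D = 8   # reduced feature dimension (after PCA in the paper); illustrative

# Fit the GMM only on features from normal B-scans, as the paper does.
normal_feats = rng.normal(0.0, 1.0, size=(2000, D))
gmm = GaussianMixture(n_components=3, random_state=0).fit(normal_feats)

# Threshold taken from the training log-likelihoods (1st percentile here;
# the paper's actual thresholding rule is not given in the abstract).
thresh = np.percentile(gmm.score_samples(normal_feats), 1.0)

def classify_volume(bscan_feats, max_outliers=5):
    """Flag a volume as DME if too many of its B-scans look anomalous."""
    outliers = int((gmm.score_samples(bscan_feats) < thresh).sum())
    return "DME" if outliers > max_outliers else "normal"
```

Because the model is trained on normal data only, no DME examples are needed at training time, which is the point of the anomaly-detection framing.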

  20. Predicting Tropical Cyclogenesis with a Global Mesoscale Model: Preliminary Results with Very Severe Cyclonic Storm Nargis (2008)

    NASA Astrophysics Data System (ADS)

    Shen, B.; Tao, W.; Atlas, R.

    2008-12-01

    Very Severe Cyclonic Storm Nargis, the deadliest named tropical cyclone (TC) in the North Indian Ocean Basin, devastated Burma (Myanmar) in May 2008, causing tremendous damage and numerous fatalities. An increased lead time in the prediction of TC Nargis would have increased the warning time and may therefore have saved lives and reduced economic damage. Recent advances in high-resolution global models and supercomputers have shown the potential for improving TC track and intensity forecasts, presumably by improving multi-scale simulations. The key but challenging questions to be answered include: (1) if and how realistic, in terms of timing, location and TC general structure, the global mesoscale model (GMM) can simulate TC genesis and (2) under what conditions can the model extend the lead time of TC genesis forecasts. In this study, we focus on genesis prediction for TCs in the Indian Ocean with the GMM. Preliminary real-data simulations show that the initial formation and intensity variations of TC Nargis can be realistically predicted at a lead time of up to 5 days. These simulations also suggest that the accurate representations of a westerly wind burst (WWB) and an equatorial trough, associated with monsoon circulations and/or a Madden-Julian Oscillation (MJO), are important for predicting the formation of this kind of TC. In addition to the WWB and equatorial trough, other favorable environmental conditions will be examined, which include enhanced monsoonal circulation, upper-level outflow, low- and middle-level moistening, and surface fluxes.

  1. Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard

    This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which involves two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchased from the day-ahead market with stochastic optimization. Historical data of day-ahead hourly electric power consumption are used to provide forecast results with a forecasting error, which is represented by a chance constraint and formulated into a deterministic form by a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the method.
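The GMM-based deterministic reformulation of the chance constraint can be illustrated as a quantile computation: a constraint of the form Pr(error <= r) >= p becomes r >= F_inv(p), where F is the mixture CDF, which is monotone and so invertible numerically. The mixture parameters below are hypothetical, standing in for an EM fit to historical forecast errors.

```python
import numpy as np
from scipy.stats import norm

# Hand-specified 2-component GMM of day-ahead forecast errors (MW);
# in practice an EM fit to historical consumption data would supply
# these weights, means, and standard deviations.
w = np.array([0.8, 0.2])
mu = np.array([0.0, 10.0])
sd = np.array([5.0, 2.0])

def gmm_cdf(x):
    """CDF of the Gaussian mixture: a weighted sum of normal CDFs."""
    return float(np.sum(w * norm.cdf(x, loc=mu, scale=sd)))

def gmm_quantile(p, lo=-100.0, hi=100.0, tol=1e-8):
    """Deterministic equivalent of Pr(error <= r) >= p: the mixture's
    p-quantile, found by bisection on the monotone mixture CDF."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if gmm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

reserve = gmm_quantile(0.95)  # margin covering 95% of forecast-error cases
```

The resulting scalar `reserve` can enter the scheduling problem as an ordinary deterministic constraint, which is how the chance constraint disappears from the optimization.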

  2. Are smokers rational addicts? Empirical evidence from the Indonesian Family Life Survey

    PubMed Central

    2011-01-01

    Background Indonesia is one of the largest consumers of tobacco in the world, yet there has been little work on the economics of tobacco addiction. This study provides an empirical test of a rational addiction (henceforth RA) hypothesis of cigarette demand in Indonesia. Methods Four estimators (OLS, 2SLS, GMM, and System-GMM) were explored to test the RA hypothesis. The author adopted several diagnostic tests to select the best estimator to overcome the econometric problems posed by the presence of past and future cigarette consumption (suspected endogenous variables). Short-run and long-run price elasticities of cigarette demand were then calculated. The model was applied to pooled individual data derived from three waves of a panel, the Indonesian Family Life Survey, spanning the period 1993-2000. Results The past cigarette consumption coefficients turned out to be positive with a p-value < 1%, implying that cigarettes are indeed an addictive good. The rational addiction hypothesis was rejected in favour of the myopic one. The short-run cigarette price elasticity for males and females was estimated to be -0.38 and -0.57, respectively, and the long-run one -0.4 and -3.85, respectively. Conclusions Health policymakers should redesign the current public health campaign against cigarette smoking in the country. Given that the demand for cigarettes is more price sensitive in the long run (and for females) than in the short run (and for males), an increase in the price of cigarettes could lead to a significant fall in cigarette consumption in the long run, rather than serving as a constant source of government revenue. PMID:21345229
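The demand equation behind the four estimators can be sketched in its naive OLS form (the baseline the paper compares against; 2SLS/GMM instrument the endogenous lead and lag terms, which is not shown here). All coefficients and data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000

# Synthetic data generated directly from a rational-addiction demand
# equation  C_t = a + b1*C_{t-1} + b2*C_{t+1} + g*P_t + e
# (hypothetical coefficients; a positive b1 marks an addictive good,
# and a significant b2 is the rational-addiction signature).
a, b1, b2, g = 1.0, 0.4, 0.2, -0.5
C_lag, C_lead, P = rng.normal(size=(3, n))
C = a + b1 * C_lag + b2 * C_lead + g * P + 0.1 * rng.normal(size=n)

# Naive OLS baseline (no instruments), via least squares.
X = np.column_stack([np.ones(n), C_lag, C_lead, P])
coef, *_ = np.linalg.lstsq(X, C, rcond=None)

# Long-run price effect follows the standard multiplier g / (1 - b1 - b2),
# which is why long-run elasticities exceed short-run ones in magnitude.
long_run = coef[3] / (1.0 - coef[1] - coef[2])
```

With real survey data the lead and lag consumption terms correlate with the error, which is exactly why the paper runs the diagnostic tests to choose among OLS, 2SLS, GMM, and System-GMM.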

  3. Are smokers rational addicts? Empirical evidence from the Indonesian Family Life Survey.

    PubMed

    Hidayat, Budi; Thabrany, Hasbullah

    2011-02-23

    Indonesia is one of the largest consumers of tobacco in the world; however, little work has been done on the economics of tobacco addiction. This study provides an empirical test of the rational addiction (henceforth RA) hypothesis of cigarette demand in Indonesia. Four estimators (OLS, 2SLS, GMM, and System-GMM) were explored to test the RA hypothesis. The author adopted several diagnostic tests to select the best estimator to overcome econometric problems faced in the presence of past and future cigarette consumption (suspected endogenous variables). Short-run and long-run price elasticities of cigarette demand were then calculated. The model was applied to pooled individual data derived from three waves of the Indonesian Family Life Survey panel spanning the period 1993-2000. The past cigarette consumption coefficients turned out to be positive with a p-value < 1%, implying that cigarettes are indeed an addictive good. The rational addiction hypothesis was rejected in favour of the myopic one. The short-run cigarette price elasticities for males and females were estimated to be -0.38 and -0.57, respectively, and the long-run ones were -0.4 and -3.85, respectively. Health policymakers should redesign the current public health campaign against cigarette smoking in the country. Given that the demand for cigarettes is more price sensitive in the long run (and for females) than in the short run (and for males), an increase in the price of cigarettes could lead to a significant fall in cigarette consumption in the long run rather than serving as a constant source of government revenue.
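
The simplest of the four estimators, OLS of current consumption on its own lag and price, can be illustrated on simulated data. All coefficients and data below are invented; the actual study instruments the lag (2SLS, GMM, System-GMM) to address the endogeneity this naive regression ignores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a myopic-addiction process: consumption depends on its own lag
# and on price (illustrative coefficients, not estimates from the survey).
T, beta_lag, beta_price = 500, 0.6, -0.3
price = rng.uniform(1.0, 2.0, T)
c = np.zeros(T)
for t in range(1, T):
    c[t] = 1.0 + beta_lag * c[t - 1] + beta_price * price[t] \
           + 0.1 * rng.standard_normal()

# OLS of C_t on an intercept, C_{t-1}, and price.
X = np.column_stack([np.ones(T - 1), c[:-1], price[1:]])
y = c[1:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
lag_coef, price_coef = coef[1], coef[2]
```

A positive and significant lag coefficient is what the study reads as evidence of addictive behaviour; the RA test additionally includes future consumption as a regressor.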

  4. Kinematics and force analysis of a robot hand based on an artificial biological control scheme

    NASA Astrophysics Data System (ADS)

    Kim, Man Guen

    An artificial biological control scheme (ABCS) is used to study the kinematics and statics of a multifingered hand with a view to developing an efficient control scheme for grasping. The ABCS is based on observation of human grasping, intuitively taking it as the optimum model for robotic grasping. A final chapter proposes several grasping measures to be applied to the design and control of a robot hand. The ABCS leads to the definition of two modes of the grasping action: natural grasping (NG), which is the human motion to grasp the object without any special task command, and forced grasping (FG), which is the motion with a specific task. The grasping direction line (GDL) is defined to determine the position and orientation of the object in the hand. The kinematic model of a redundant robot arm and hand is developed by reconstructing the human upper extremity and using anthropometric measurement data. The inverse kinematic analyses of various types of precision and power grasping are studied by replacing the three-link with one virtual link and using the GDL. The static force analysis for grasping with fingertips is studied by applying the ABCS. A measure of grasping stability, that maintains the positions of contacts as well as the configurations of the redundant fingers, is derived. The grasping stability measure (GSM), a measure of how well the hand maintains grasping under the existence of external disturbance, is derived by the torque vector of the hand calculated from the external force applied to the object. The grasping manipulability measure (GMM), a measure of how well the hand manipulates the object for the task, is derived by the joint velocity vector of the hand calculated from the object velocity. The grasping performance measure (GPM) is defined by the sum of the directional components of the GSM and the GMM. 
Finally, a planar redundant hand with two fingers is examined in order to study the various postures of the hand performing pinch grasping by applying the GSM and the GMM.

  5. Physical Human Activity Recognition Using Wearable Sensors.

    PubMed

    Attal, Ferhat; Mohammed, Samer; Dedabrishvili, Mariam; Chamroukhi, Faicel; Oukhellou, Latifa; Amirat, Yacine

    2015-12-11

    This paper presents a review of different classification techniques used to recognize human activities from wearable inertial sensor data. Three inertial sensor units were used in this study and were worn by healthy subjects at key points of upper/lower body limbs (chest, right thigh and left ankle). Three main steps describe the activity recognition process: sensors' placement, data pre-processing and data classification. Four supervised classification techniques namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Random Forest (RF) as well as three unsupervised classification techniques namely, k-Means, Gaussian mixture models (GMM) and Hidden Markov Model (HMM), are compared in terms of correct classification rate, F-measure, recall, precision, and specificity. Raw data and extracted features are used separately as inputs of each classifier. The feature selection is performed using a wrapper approach based on the RF algorithm. Based on our experiments, the results obtained show that the k-NN classifier provides the best performance compared to other supervised classification algorithms, whereas the HMM classifier is the one that gives the best results among unsupervised classification algorithms. This comparison highlights which approach gives better performance in both supervised and unsupervised contexts. It should be noted that the obtained results are limited to the context of this study, which concerns the classification of the main daily living human activities using three wearable accelerometers placed at the chest, right shank and left ankle of the subject.
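
The best-performing supervised method in the comparison, k-NN, can be sketched from scratch on synthetic two-class features. The data, dimensions, and k value below are illustrative, not the study's actual sensor features.

```python
import numpy as np

rng = np.random.default_rng(6)

# Two synthetic "activity" classes in a 3-D feature space (illustrative).
X0 = rng.normal(0.0, 0.5, (100, 3))
X1 = rng.normal(2.0, 0.5, (100, 3))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

def knn_predict(Xtr, ytr, q, k=5):
    """Majority vote among the k nearest training samples (Euclidean)."""
    idx = np.argsort(((Xtr - q) ** 2).sum(axis=1))[:k]
    return np.bincount(ytr[idx]).argmax()

pred0 = knn_predict(X, y, np.zeros(3))
pred1 = knn_predict(X, y, np.full(3, 2.0))
```

k-NN's strength here matches the paper's finding: with well-separated extracted features, a simple local-distance vote is hard to beat.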

  6. Physical Human Activity Recognition Using Wearable Sensors

    PubMed Central

    Attal, Ferhat; Mohammed, Samer; Dedabrishvili, Mariam; Chamroukhi, Faicel; Oukhellou, Latifa; Amirat, Yacine

    2015-01-01

    This paper presents a review of different classification techniques used to recognize human activities from wearable inertial sensor data. Three inertial sensor units were used in this study and were worn by healthy subjects at key points of upper/lower body limbs (chest, right thigh and left ankle). Three main steps describe the activity recognition process: sensors’ placement, data pre-processing and data classification. Four supervised classification techniques namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Random Forest (RF) as well as three unsupervised classification techniques namely, k-Means, Gaussian mixture models (GMM) and Hidden Markov Model (HMM), are compared in terms of correct classification rate, F-measure, recall, precision, and specificity. Raw data and extracted features are used separately as inputs of each classifier. The feature selection is performed using a wrapper approach based on the RF algorithm. Based on our experiments, the results obtained show that the k-NN classifier provides the best performance compared to other supervised classification algorithms, whereas the HMM classifier is the one that gives the best results among unsupervised classification algorithms. This comparison highlights which approach gives better performance in both supervised and unsupervised contexts. It should be noted that the obtained results are limited to the context of this study, which concerns the classification of the main daily living human activities using three wearable accelerometers placed at the chest, right shank and left ankle of the subject. PMID:26690450

  7. Does capitation matter? Impacts on access, use, and quality.

    PubMed

    Zuvekas, Samuel H; Hill, Steven C

    2004-01-01

    Provider capitation may constrain costs, but it also may reduce access and quality of care. We examine the impacts of capitating the usual source of care of enrollees in health maintenance organizations (HMOs). We account for the endogeneity of capitation and other characteristics using generalized method of moments (GMM) estimation on a sample from the Medical Expenditure Panel Survey for 1996 and 1997. Being organized as a group/staff HMO generally has a stronger impact on access and quality than capitation. Capitation by itself may increase access to consumers' usual sources of care, improve primary preventive care, and reduce coordination, but estimates with GMM were not statistically significant.
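
The core idea of the GMM estimation used here, exploiting moment conditions with instruments to handle endogeneity, can be sketched in the just-identified case, where GMM reduces to instrumental variables. The data and coefficients below are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta_true = 2000, 1.5

# x is endogenous: it shares the error term u; z is a valid instrument.
z = rng.standard_normal(n)
u = rng.standard_normal(n)
x = 0.8 * z + 0.5 * u + rng.standard_normal(n)
y = beta_true * x + u

# Sample analogue of the moment condition E[z * (y - beta * x)] = 0;
# in the just-identified case GMM coincides with the IV estimator.
beta_gmm = (z @ y) / (z @ x)

# Naive OLS ignores the endogeneity and is biased upward here.
beta_ols = (x @ y) / (x @ x)
```

The gap between `beta_ols` and `beta_gmm` is the endogeneity bias that motivates GMM in studies like this one, where capitation status is not randomly assigned.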

  8. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification.

    PubMed

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V; Robles, Montserrat; Aparici, F; Martí-Bonmatí, L; García-Gómez, Juan M

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach results comparable to those of supervised methods. To this end, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation.
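
The GMM piece of such a pipeline can be sketched as a 1-D expectation-maximization fit over image intensities followed by a hard assignment. The synthetic intensities and class parameters below are illustrative; the paper works on multiparametric MR volumes, not a single channel.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic 1-D "intensities" for two tissue classes (illustrative values).
x = np.concatenate([rng.normal(60.0, 5.0, 1500), rng.normal(120.0, 8.0, 500)])

# EM for a two-component 1-D Gaussian mixture.
w = np.array([0.5, 0.5])
mu = np.array([40.0, 150.0])
var = np.array([100.0, 100.0])
for _ in range(100):
    # E-step: responsibility of each component for each voxel
    dens = w / np.sqrt(2 * np.pi * var) * np.exp(-((x[:, None] - mu) ** 2) / (2 * var))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weight, mean, and variance updates
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

labels = r.argmax(axis=1)  # hard segmentation by maximum responsibility
```

The structured GHMRF variant differs in that the E-step assignment also considers neighbouring voxels' labels rather than each intensity independently.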

  9. Processor core for real time background identification of HD video based on OpenCV Gaussian mixture model algorithm

    NASA Astrophysics Data System (ADS)

    Genovese, Mariangela; Napoli, Ettore

    2013-05-01

    The identification of moving objects is a fundamental step in computer vision processing chains. The development of low-cost and lightweight smart cameras steadily increases the demand for efficient, high-performance circuits able to process high-definition video in real time. The paper proposes two processor cores aimed at performing real-time background identification on High Definition (HD, 1920×1080 pixels) video streams. The implemented algorithm is the OpenCV version of the Gaussian Mixture Model (GMM), a high-performance probabilistic algorithm for the segmentation of the background that is, however, computationally intensive and impossible to implement on a general-purpose CPU under real-time constraints. In this paper, the equations of the OpenCV GMM algorithm are optimized in such a way that a lightweight and low-power implementation of the algorithm is obtained. The reported performances are also the result of the use of state-of-the-art truncated binary multipliers and ROM compression techniques for the implementation of the non-linear functions. The first circuit targets commercial FPGA devices and provides speed and logic resource occupation that surpass previously proposed implementations. The second circuit is oriented to an ASIC (UMC-90nm) standard cell implementation. Both implementations are able to process more than 60 frames per second in 1080p format, a frame rate compatible with HD television.
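
A minimal single-pixel version of the adaptive mixture update underlying this family of algorithms can be sketched as below (a simplified Stauffer-Grimson-style rule; the actual OpenCV MOG2 model adds an adaptive number of components and shadow detection, and every constant here is illustrative).

```python
import numpy as np

class PixelGMM:
    """Per-pixel adaptive Gaussian mixture (simplified update rule)."""

    def __init__(self, k=3, alpha=0.05):
        self.w = np.full(k, 1.0 / k)          # component weights
        self.mu = np.linspace(0.0, 255.0, k)  # component means (intensity)
        self.var = np.full(k, 400.0)          # component variances
        self.alpha = alpha                    # learning rate

    def update(self, x):
        """Update the mixture with observation x; return True if background."""
        d2 = (x - self.mu) ** 2 / self.var
        match = np.where(d2 < 2.5 ** 2)[0]    # within 2.5 sigma
        self.w *= (1.0 - self.alpha)
        if match.size:
            k = match[np.argmax(self.w[match])]   # strongest matching component
            self.w[k] += self.alpha
            self.mu[k] += self.alpha * (x - self.mu[k])
            self.var[k] += self.alpha * ((x - self.mu[k]) ** 2 - self.var[k])
            is_bg = self.w[k] > 0.25          # matched a dominant component
        else:
            k = np.argmin(self.w)             # replace the weakest component
            self.mu[k], self.var[k], self.w[k] = x, 400.0, self.alpha
            is_bg = False
        self.w /= self.w.sum()
        return bool(is_bg)

pix = PixelGMM()
for _ in range(300):
    pix.update(100.0)   # static background around intensity 100
```

After training on a stable background, the pixel classifies values near 100 as background and a sudden jump (e.g. a moving object at intensity 200) as foreground; the hardware cores in the paper implement optimized fixed-point versions of exactly this per-pixel arithmetic.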

  10. Multinomial logistic regression analysis for differentiating 3 treatment outcome trajectory groups for headache-associated disability.

    PubMed

    Lewis, Kristin Nicole; Heckman, Bernadette Davantes; Himawan, Lina

    2011-08-01

    Growth mixture modeling (GMM) identified latent groups based on treatment outcome trajectories of headache disability measures in patients in headache subspecialty treatment clinics. Using a longitudinal design, 219 patients in headache subspecialty clinics in 4 large cities throughout Ohio provided data on their headache disability at pretreatment and 3 follow-up assessments. GMM identified 3 treatment outcome trajectory groups: (1) patients who initiated treatment with elevated disability levels and who reported statistically significant reductions in headache disability (high-disability improvers; 11%); (2) patients who initiated treatment with elevated disability but who reported no reductions in disability (high-disability nonimprovers; 34%); and (3) patients who initiated treatment with moderate disability and who reported statistically significant reductions in headache disability (moderate-disability improvers; 55%). Based on the final multinomial logistic regression model, a dichotomized treatment appointment attendance variable was a statistically significant predictor for differentiating high-disability improvers from high-disability nonimprovers. Three-fourths of patients who initiated treatment with elevated disability levels did not report reductions in disability after 5 months of treatment with new preventive pharmacotherapies. Preventive headache agents may be most efficacious for patients with moderate levels of disability and for patients with high disability levels who attend all treatment appointments. Copyright © 2011 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  11. Real-Time EEG Signal Enhancement Using Canonical Correlation Analysis and Gaussian Mixture Clustering

    PubMed Central

    Huang, Chih-Sheng; Yang, Wen-Yu; Chuang, Chun-Hsiang; Wang, Yu-Kai

    2018-01-01

    Electroencephalogram (EEG) signals are usually contaminated with various artifacts, such as signal associated with muscle activity, eye movement, and body motion, which have a noncerebral origin. The amplitude of such artifacts is larger than that of the electrical activity of the brain, so they mask the cortical signals of interest, resulting in biased analysis and interpretation. Several blind source separation methods have been developed to remove artifacts from the EEG recordings. However, the iterative process for measuring separation within multichannel recordings is computationally intractable. Moreover, manually excluding the artifact components requires a time-consuming offline process. This work proposes a real-time artifact removal algorithm that is based on canonical correlation analysis (CCA), feature extraction, and the Gaussian mixture model (GMM) to improve the quality of EEG signals. The CCA was used to decompose EEG signals into components followed by feature extraction to extract representative features and GMM to cluster these features into groups to recognize and remove artifacts. The feasibility of the proposed algorithm was demonstrated by effectively removing artifacts caused by blinks, head/body movement, and chewing from EEG recordings while preserving the temporal and spectral characteristics of the signals that are important to cognitive research. PMID:29599950

  12. A Sampling-Based Bayesian Approach for Cooperative Multiagent Online Search With Resource Constraints.

    PubMed

    Xiao, Hu; Cui, Rongxin; Xu, Demin

    2018-06-01

    This paper presents a cooperative multiagent search algorithm to solve the problem of searching for a target on a 2-D plane under multiple constraints. A Bayesian framework is used to update the local probability density functions (PDFs) of the target when the agents obtain observation information. To obtain the global PDF used for decision making, a sampling-based logarithmic opinion pool algorithm is proposed to fuse the local PDFs, and a particle sampling approach is used to represent the continuous PDF. Then the Gaussian mixture model (GMM) is applied to reconstitute the global PDF from the particles, and a weighted expectation maximization algorithm is presented to estimate the parameters of the GMM. Furthermore, we propose an optimization objective which aims to guide agents to find the target with less resource consumption and to keep the resource consumption of each agent balanced. To this end, a utility function-based optimization problem is put forward, and it is solved by a gradient-based approach. Several contrastive simulations demonstrate that compared with other existing approaches, the proposed one uses fewer overall resources and shows a better performance in balancing resource consumption.

  13. Recognizing Whispered Speech Produced by an Individual with Surgically Reconstructed Larynx Using Articulatory Movement Data

    PubMed Central

    Cao, Beiming; Kim, Myungjong; Mau, Ted; Wang, Jun

    2017-01-01

    Individuals with larynx (vocal folds) impaired have problems in controlling their glottal vibration, producing whispered speech with extreme hoarseness. Standard automatic speech recognition using only acoustic cues is typically ineffective for whispered speech because the corresponding spectral characteristics are distorted. Articulatory cues such as the tongue and lip motion may help in recognizing whispered speech since articulatory motion patterns are generally not affected. In this paper, we investigated whispered speech recognition for patients with reconstructed larynx using articulatory movement data. A data set with both acoustic and articulatory motion data was collected from a patient with surgically reconstructed larynx using an electromagnetic articulograph. Two speech recognition systems, Gaussian mixture model-hidden Markov model (GMM-HMM) and deep neural network-HMM (DNN-HMM), were used in the experiments. Experimental results showed adding either tongue or lip motion data to acoustic features such as mel-frequency cepstral coefficient (MFCC) significantly reduced the phone error rates on both speech recognition systems. Adding both tongue and lip data achieved the best performance. PMID:29423453

  14. Deep Ensemble Learning for Monaural Speech Separation

    DTIC Science & Technology

    2015-02-01

    typically predict the ideal binary mask (IBM) or ideal ratio mask (IRM). For the IBM [21], a T-F unit is assigned 1, if the signal... dominance. For the IRM [17], a T-F unit is assigned some ratio of target energy and mixture energy. Kim et al. [15] used Gaussian mixture models (GMM)... significantly outperforms earlier separation methods. Subsequently, Wang et al. [23] examined a number of training targets and suggested that the IRM should be

  15. Motion generation of robotic surgical tasks: learning from expert demonstrations.

    PubMed

    Reiley, Carol E; Plaku, Erion; Hager, Gregory D

    2010-01-01

    Robotic surgical assistants offer the possibility of automating portions of a task that are time consuming and tedious in order to reduce the cognitive workload of a surgeon. This paper proposes using programming by demonstration to build generative models and generate smooth trajectories that capture the underlying structure of the motion data recorded from expert demonstrations. Specifically, motion data from Intuitive Surgical's da Vinci Surgical System of a panel of expert surgeons performing three surgical tasks are recorded. The trials are decomposed into subtasks or surgemes, which are then temporally aligned through dynamic time warping. Next, a Gaussian Mixture Model (GMM) encodes the experts' underlying motion structure. Gaussian Mixture Regression (GMR) is then used to extract a smooth reference trajectory to reproduce a trajectory of the task. The approach is evaluated through an automated skill assessment measurement. Results suggest that this paper presents a means to (i) extract important features of the task, (ii) create a metric to evaluate robot imitative performance, and (iii) generate smoother trajectories for reproduction of three common medical tasks.
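
The GMM/GMR step can be sketched in 1-D: fit a joint Gaussian over (time, position) per time segment (a crude stand-in for a full EM fit of the mixture), then condition on time to obtain a smooth reference trajectory. The data here are synthetic; the paper uses recorded da Vinci motion.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic "demonstration": noisy samples of a smooth reference motion.
t = np.linspace(0.0, 1.0, 400)
y = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)

# Crude mixture: one joint Gaussian over (t, y) per time segment.
K = 8
edges = np.linspace(0.0, 1.0, K + 1)
comps = []
for k in range(K):
    m = (t >= edges[k]) & (t <= edges[k + 1])
    d = np.column_stack([t[m], y[m]])
    comps.append((m.mean(), d.mean(axis=0), np.cov(d.T) + 1e-9 * np.eye(2)))

def gmr(tq):
    """Gaussian mixture regression: condition the mixture on time tq."""
    hs, preds = [], []
    for w, mu, S in comps:
        # component responsibility for tq (shared constants cancel below)
        hs.append(w * np.exp(-0.5 * (tq - mu[0]) ** 2 / S[0, 0]) / np.sqrt(S[0, 0]))
        # conditional mean of y given t = tq for this component
        preds.append(mu[1] + S[1, 0] / S[0, 0] * (tq - mu[0]))
    hs = np.array(hs)
    return float((hs / hs.sum()) @ np.array(preds))
```

The responsibility-weighted blend of per-component conditional means is what makes the reproduced trajectory smooth even where the raw demonstrations are noisy.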

  16. The Effect of Environmental Regulation on Employment in Resource-Based Areas of China-An Empirical Research Based on the Mediating Effect Model.

    PubMed

    Cao, Wenbin; Wang, Hui; Ying, Huihui

    2017-12-19

    While environmental pollution is becoming more and more serious, many countries are adopting policies to control pollution. At the same time, environmental regulation will inevitably affect economic and social development, especially employment growth. Environmental regulation will not only affect the scale of employment directly, but will also have indirect effects by stimulating upgrades in the industrial structure and in technological innovation. This paper examines the impact of environmental regulation on employment, using a mediating model based on data from five typical resource-based provinces in China from 2000 to 2015. The estimation is performed with the system GMM (Generalized Method of Moments) estimator. The results show that the implementation of environmental regulation in resource-based areas has both a direct effect and a mediating effect on employment. These findings provide policy implications for these resource-based areas to promote coordinated development between the environment and employment.

  17. Improved dense trajectories for action recognition based on random projection and Fisher vectors

    NASA Astrophysics Data System (ADS)

    Ai, Shihui; Lu, Tongwei; Xiong, Yudian

    2018-03-01

    As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. In order to improve the accuracy of action recognition in video with improved dense trajectories, an advanced encoding method is introduced that combines improved dense trajectories with Fisher Vectors and Random Projection. The method reduces the dimensionality of the trajectory descriptor by projecting the high-dimensional descriptor into a low-dimensional subspace via Random Projection, after defining and analysing a Gaussian mixture model, and a GMM-FV hybrid model is introduced to encode the trajectory feature vector and reduce its dimension. The computational complexity is reduced by the Random Projection, which shrinks the Fisher coding vector. Finally, a linear SVM is used as the classifier to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with existing algorithms, the results showed that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
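
The Random Projection step rests on the Johnson-Lindenstrauss property: a Gaussian projection approximately preserves pairwise distances between descriptors. A minimal sketch with stand-in descriptors (all dimensions below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, k = 50, 2000, 128   # n descriptors of dimension d, projected to k

X = rng.standard_normal((n, d))               # stand-in trajectory descriptors
R = rng.standard_normal((d, k)) / np.sqrt(k)  # Gaussian random projection
Y = X @ R

def pdist(A):
    """All pairwise Euclidean distances between rows of A."""
    G = (A * A).sum(1)
    return np.sqrt(np.maximum(G[:, None] + G[None, :] - 2 * A @ A.T, 0.0))

orig, proj = pdist(X), pdist(Y)
iu = np.triu_indices(n, 1)
distortion = np.abs(proj[iu] / orig[iu] - 1.0)
```

Because distances survive the projection, downstream steps such as GMM fitting and Fisher Vector encoding can run in the cheaper low-dimensional space with little loss.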

  18. The Effect of Environmental Regulation on Employment in Resource-Based Areas of China—An Empirical Research Based on the Mediating Effect Model

    PubMed Central

    Cao, Wenbin; Wang, Hui; Ying, Huihui

    2017-01-01

    While environmental pollution is becoming more and more serious, many countries are adopting policies to control pollution. At the same time, environmental regulation will inevitably affect economic and social development, especially employment growth. Environmental regulation will not only affect the scale of employment directly, but will also have indirect effects by stimulating upgrades in the industrial structure and in technological innovation. This paper examines the impact of environmental regulation on employment, using a mediating model based on data from five typical resource-based provinces in China from 2000 to 2015. The estimation is performed with the system GMM (Generalized Method of Moments) estimator. The results show that the implementation of environmental regulation in resource-based areas has both a direct effect and a mediating effect on employment. These findings provide policy implications for these resource-based areas to promote coordinated development between the environment and employment. PMID:29257068

  19. An Architecture for Measuring Joint Angles Using a Long Period Fiber Grating-Based Sensor

    PubMed Central

    Perez-Ramirez, Carlos A.; Almanza-Ojeda, Dora L.; Guerrero-Tavares, Jesus N.; Mendoza-Galindo, Francisco J.; Estudillo-Ayala, Julian M.; Ibarra-Manzano, Mario A.

    2014-01-01

    The implementation of signal filters in real time requires a tradeoff between computational resources and system performance. Therefore, taking advantage of its low-lag response and reduced resource consumption, in this article the Recursive Least Squares (RLS) algorithm is used to filter a signal acquired from a fiber-optic sensor. In particular, a Long-Period Fiber Grating (LPFG) sensor is used to measure the bending movement of a finger. After that, the Gaussian Mixture Model (GMM) technique allows us to classify the corresponding finger position along the motion range. For these measurements to help in the development of an autonomous robotic hand, the proposed technique can be straightforwardly implemented on real-time platforms such as Field Programmable Gate Arrays (FPGAs) or Digital Signal Processors (DSPs). Different angle measurements of the finger's motion are carried out by the prototype, and a detailed analysis of the system performance is presented. PMID:25536002
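
The RLS recursion at the core of this architecture can be sketched as a system-identification loop; the "unknown system" and every constant below are illustrative stand-ins, not parameters of the LPFG setup.

```python
import numpy as np

rng = np.random.default_rng(5)

# Unknown FIR system the filter should learn (illustrative coefficients).
h_true = np.array([0.8, -0.4, 0.2])
n, order, lam = 2000, 3, 0.99   # samples, filter order, forgetting factor

u = rng.standard_normal(n)                          # raw input signal
d = np.convolve(u, h_true)[:n] + 0.01 * rng.standard_normal(n)  # noisy target

# Recursive Least Squares
w = np.zeros(order)             # filter weights
P = np.eye(order) * 1000.0      # inverse correlation matrix
for t in range(order, n):
    x = u[t - order + 1:t + 1][::-1]   # most recent samples first
    k = P @ x / (lam + x @ P @ x)      # gain vector
    e = d[t] - w @ x                   # a priori error
    w += k * e                         # weight update
    P = (P - np.outer(k, x @ P)) / lam
```

The low-lag property the abstract mentions comes from this per-sample update: each new measurement refines the weights immediately, with no batch recomputation, which is why RLS maps well onto FPGA/DSP targets.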

  20. Automated Glioblastoma Segmentation Based on a Multiparametric Structured Unsupervised Classification

    PubMed Central

    Juan-Albarracín, Javier; Fuster-Garcia, Elies; Manjón, José V.; Robles, Montserrat; Aparici, F.; Martí-Bonmatí, L.; García-Gómez, Juan M.

    2015-01-01

    Automatic brain tumour segmentation has become a key component for the future of brain tumour treatment. Currently, most of brain tumour segmentation approaches arise from the supervised learning standpoint, which requires a labelled training dataset from which to infer the models of the classes. The performance of these models is directly determined by the size and quality of the training corpus, whose retrieval becomes a tedious and time-consuming task. On the other hand, unsupervised approaches avoid these limitations but often do not reach comparable results than the supervised methods. In this sense, we propose an automated unsupervised method for brain tumour segmentation based on anatomical Magnetic Resonance (MR) images. Four unsupervised classification algorithms, grouped by their structured or non-structured condition, were evaluated within our pipeline. Considering the non-structured algorithms, we evaluated K-means, Fuzzy K-means and Gaussian Mixture Model (GMM), whereas as structured classification algorithms we evaluated Gaussian Hidden Markov Random Field (GHMRF). An automated postprocess based on a statistical approach supported by tissue probability maps is proposed to automatically identify the tumour classes after the segmentations. We evaluated our brain tumour segmentation method with the public BRAin Tumor Segmentation (BRATS) 2013 Test and Leaderboard datasets. Our approach based on the GMM model improves the results obtained by most of the supervised methods evaluated with the Leaderboard set and reaches the second position in the ranking. Our variant based on the GHMRF achieves the first position in the Test ranking of the unsupervised approaches and the seventh position in the general Test ranking, which confirms the method as a viable alternative for brain tumour segmentation. PMID:25978453

  1. Interpolation on the manifold of K component GMMs.

    PubMed

    Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

    2015-12-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof-of-principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, that may be of independent interest.
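
For contrast with the geometry-respecting algorithms the paper proposes, the naive baseline, component-wise linear interpolation of matched components, does at least return a K-component GMM, which is the closure property the paper asks for. Both input GMMs below are hypothetical.

```python
import numpy as np

# Two hypothetical K=2 component 1-D GMMs (weights, means, variances).
w0, mu0, v0 = np.array([0.3, 0.7]), np.array([-1.0, 2.0]), np.array([0.5, 1.0])
w1, mu1, v1 = np.array([0.5, 0.5]), np.array([0.0, 4.0]), np.array([1.0, 2.0])

def interpolate(s):
    """Naive component-wise interpolation at s in [0, 1]. Assumes the
    components of the two GMMs are already matched by index."""
    w = (1.0 - s) * w0 + s * w1
    return w / w.sum(), (1.0 - s) * mu0 + s * mu1, (1.0 - s) * v0 + s * v1

w, mu, v = interpolate(0.5)   # midpoint GMM: still two components
```

The weakness of this baseline, which motivates the paper's manifold-based approach, is that it ignores the geometry of the space of densities: linearly averaged parameters need not trace a geodesic between the two distributions.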

  2. Transgenic Mosquitoes - Fact or Fiction?

    PubMed

    Wilke, André B B; Beier, John C; Benelli, Giovanni

    2018-06-01

    Technologies for controlling mosquito vectors based on genetic manipulation and the release of genetically modified mosquitoes (GMMs) are gaining ground. However, concrete epidemiological evidence of their effectiveness, sustainability, and impact on the environment and nontarget species is lacking; no reliable ecological evidence on the potential interactions among GMMs, target populations, and other mosquito species populations exists; and no GMM technology has yet been approved by the WHO Vector Control Advisory Group. Our opinion is that, although GMMs may be considered a promising control tool, more studies are needed to assess their true effectiveness, risks, and benefits. Overall, several lines of evidence must be provided before GMM-based control strategies can be used under the integrated vector management framework. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. A new high-resolution 3-D quantitative method for analysing small morphological features: an example using a Cambrian trilobite.

    PubMed

    Esteve, Jorge; Zhao, Yuan-Long; Maté-González, Miguel Ángel; Gómez-Heras, Miguel; Peng, Jin

    2018-02-12

    Taphonomic processes play an important role in the preservation of small morphological features such as granulation or pits. However, the assessment of these features may face the issue of the small size of the specimens and, sometimes, the destructiveness of the analyses, which makes it impossible to carry them out on singular specimens such as holotypes or lectotypes. This paper takes a new approach to analysing small morphological features, using an optical surface roughness (OSR) meter to create a high-resolution three-dimensional digital elevation model (DEM). This non-destructive technique allows the DEM to be analysed quantitatively using geometric morphometric methods (GMM). We created a number of DEMs from three populations putatively belonging to the same species of trilobite (Oryctocephalus indicus) that present the same cranidial outline but differ in the presence or absence of the second and third transglabellar furrows. Profile analysis of the DEMs demonstrates that all three populations show similar preservational variation in the glabellar furrows and lobes. The GMM shows that all populations exhibit the same range of variation. Differences in preservation are a consequence of different degrees of cementation and rates of dissolution: fast cementation enhances the preservation of glabellar furrows and lobes, while fast dissolution hampers preservation of the same structures.

  4. Driving profile modeling and recognition based on soft computing approach.

    PubMed

    Wahab, Abdul; Quek, Chai; Tan, Chin Keong; Takeda, Kazuya

    2009-04-01

Advancements in biometrics-based authentication have led to its increasing prominence and incorporation into everyday tasks. Existing vehicle security systems rely only on alarms or smart cards as forms of protection. A biometric driver recognition system utilizing driving behaviors is a highly novel and personalized approach and could be incorporated into existing vehicle security systems to form a multimodal identification system offering a greater degree of multilevel protection. In this paper, detailed studies have been conducted to model individual driving behavior in order to identify features that may be efficiently and effectively used to profile each driver. Feature extraction techniques based on Gaussian mixture models (GMMs) are proposed and implemented. Features extracted from the accelerator and brake pedal pressures were then used as inputs to a fuzzy neural network (FNN) system to ascertain the identity of the driver. Two fuzzy neural networks, namely, the evolving fuzzy neural network (EFuNN) and the adaptive network-based fuzzy inference system (ANFIS), are used to demonstrate the viability of the two proposed feature extraction techniques. The performances were compared against an artificial neural network (NN) implementation using the multilayer perceptron (MLP) network and a statistical method based on the GMM. Extensive testing was conducted, and the results show great potential in the use of the FNN for real-time driver identification and verification. In addition, the profiling of driver behaviors has numerous other potential applications for use by law enforcement and companies dealing with bus and truck drivers.

  5. Speaker diarization system on the 2007 NIST rich transcription meeting recognition evaluation

    NASA Astrophysics Data System (ADS)

    Sun, Hanwu; Nwe, Tin Lay; Koh, Eugene Chin Wei; Bin, Ma; Li, Haizhou

    2007-09-01

This paper presents a speaker diarization system developed at the Institute for Infocomm Research (I2R) for the NIST Rich Transcription 2007 (RT-07) evaluation task. We describe in detail our primary approaches to speaker diarization under the Multiple Distant Microphones (MDM) condition in the conference-room scenario. Our proposed system consists of six modules: 1) a normalized least-mean-square (NLMS) adaptive filter for speaker direction estimation via Time Difference of Arrival (TDOA); 2) initial speaker clustering via a two-stage TDOA histogram distribution quantization approach; 3) multiple-microphone speaker data alignment via GCC-PHAT Time Delay Estimation (TDE) among all the distant microphone channel signals; 4) a speaker clustering algorithm based on a GMM modeling approach; 5) non-speech removal via a speech/non-speech verification mechanism; and 6) silence removal via a "Double-Layer Windowing" (DLW) method. We achieved an error rate of 31.02% on the 2006 Spring (RT-06s) MDM evaluation task and a competitive overall error rate of 15.32% on the NIST Rich Transcription 2007 (RT-07) MDM evaluation task.
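The GCC-PHAT time-delay estimate used in module 3 can be sketched in a few lines. The following is an illustrative numpy version (not the I2R implementation), using a circularly shifted synthetic signal so the recovered delay is exact.

```python
import numpy as np

def gcc_phat_delay(sig, ref):
    """Estimate the delay (in samples) of `sig` relative to `ref` via GCC-PHAT."""
    n = len(ref)
    cross = np.fft.rfft(sig) * np.conj(np.fft.rfft(ref))
    cross /= np.abs(cross) + 1e-12           # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)            # generalized cross-correlation
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift]))
    return int(np.argmax(np.abs(cc))) - max_shift

rng = np.random.default_rng(0)
ref = rng.standard_normal(4096)
sig = np.roll(ref, 7)                        # simulate a 7-sample arrival lag
print(gcc_phat_delay(sig, ref))              # -> 7
```

For real microphone pairs the signals would be zero-padded before the FFT to avoid circular wrap-around, and the estimated delay converted to a direction via the microphone spacing and the speed of sound.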

  6. Vehicle Detection with Occlusion Handling, Tracking, and OC-SVM Classification: A High Performance Vision-Based System

    PubMed Central

    Velazquez-Pupo, Roxana; Sierra-Romero, Alberto; Torres-Roman, Deni; Shkvarko, Yuriy V.; Romero-Delgado, Misael

    2018-01-01

This paper presents a high-performance vision-based system with a single static camera for traffic surveillance, performing moving-vehicle detection with occlusion handling, tracking, counting, and One-Class Support Vector Machine (OC-SVM) classification. In this approach, moving objects are first segmented from the background using an adaptive Gaussian Mixture Model (GMM). After that, several geometric features are extracted, such as vehicle area, height, width, centroid, and bounding box. When occlusion is present, an algorithm is applied to reduce it. Tracking is performed with an adaptive Kalman filter. Finally, the selected geometric features (estimated area, height, and width) are used by different classifiers to sort vehicles into three classes: small, midsize, and large. Extensive experiments on eight real traffic videos with more than 4000 ground-truth vehicles have shown that the improved system can run in real time under an occlusion index of 0.312 and classify vehicles with a global detection rate (recall), precision, and F-measure of up to 98.190%, and an F-measure of up to 99.051% for midsize vehicles. PMID:29382078
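The segmentation-plus-features front end described above can be illustrated with a toy numpy sketch. Note that this simplifies the paper's adaptive GMM to a single adaptive Gaussian per pixel, and all array sizes and threshold values below are invented for the example.

```python
import numpy as np

def update_background(mean, var, frame, alpha=0.05, k=2.5):
    """Flag pixels beyond k sigma of the background model as foreground,
    then adapt the per-pixel mean and variance with learning rate alpha."""
    fg = np.abs(frame - mean) > k * np.sqrt(var)
    mean = (1 - alpha) * mean + alpha * frame
    var = (1 - alpha) * var + alpha * (frame - mean) ** 2
    return fg, mean, var

def bbox_features(fg):
    """Area, bounding-box height/width, and centroid of a foreground mask."""
    ys, xs = np.nonzero(fg)
    if len(ys) == 0:
        return None
    return {"area": int(len(ys)),
            "height": int(ys.max() - ys.min() + 1),
            "width": int(xs.max() - xs.min() + 1),
            "centroid": (float(ys.mean()), float(xs.mean()))}

# Toy scene: a flat grey background with one bright 10x15 "vehicle" blob.
mean = np.full((60, 80), 100.0)
var = np.full((60, 80), 4.0)
frame = mean.copy()
frame[20:30, 40:55] = 200.0
fg, mean, var = update_background(mean, var, frame)
print(bbox_features(fg))   # area 150, height 10, width 15, centroid (24.5, 47.0)
```

In the paper's full pipeline, these per-blob features would then feed the occlusion handling, Kalman tracking, and OC-SVM classification stages.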

  7. Finite element modelling of squirrel, guinea pig and rat skulls: using geometric morphometrics to assess sensitivity.

    PubMed

    Cox, P G; Fagan, M J; Rayfield, E J; Jeffery, N

    2011-12-01

    Rodents are defined by a uniquely specialized dentition and a highly complex arrangement of jaw-closing muscles. Finite element analysis (FEA) is an ideal technique to investigate the biomechanical implications of these specializations, but it is essential to understand fully the degree of influence of the different input parameters of the FE model to have confidence in the model's predictions. This study evaluates the sensitivity of FE models of rodent crania to elastic properties of the materials, loading direction, and the location and orientation of the models' constraints. Three FE models were constructed of squirrel, guinea pig and rat skulls. Each was loaded to simulate biting on the incisors, and the first and the third molars, with the angle of the incisal bite varied over a range of 45°. The Young's moduli of the bone and teeth components were varied between limits defined by findings from our own and previously published tests of material properties. Geometric morphometrics (GMM) was used to analyse the resulting skull deformations. Bone stiffness was found to have the strongest influence on the results in all three rodents, followed by bite position, and then bite angle and muscle orientation. Tooth material properties were shown to have little effect on the deformation of the skull. The effect of bite position varied between species, with the mesiodistal position of the biting tooth being most important in squirrels and guinea pigs, whereas bilateral vs. unilateral biting had the greatest influence in rats. A GMM analysis of isolated incisor deformations showed that, for all rodents, bite angle is the most important parameter, followed by elastic properties of the tooth. The results here elucidate which input parameters are most important when defining the FE models, but also provide interesting glimpses of the biomechanical differences between the three skulls, which will be fully explored in future publications. © 2011 The Authors. 
Journal of Anatomy © 2011 Anatomical Society of Great Britain and Ireland.

  8. Automated identification of diabetic type 2 subjects with and without neuropathy using wavelet transform on pedobarograph.

    PubMed

    Acharya, Rajendra; Tan, Peck Ha; Subramaniam, Tavintharan; Tamura, Toshiyo; Chua, Kuang Chua; Goh, Seach Chyr Ernest; Lim, Choo Min; Goh, Shu Yi Diana; Chung, Kang Rui Conrad; Law, Chelsea

    2008-02-01

Diabetes is a disorder of metabolism, the way our bodies use digested food for growth and energy; the most common form is Type 2 diabetes. Abnormal plantar pressures are considered to play a major role in the pathologies of neuropathic ulcers in the diabetic foot. The purpose of this study was to examine the plantar pressure distribution in normal subjects and in Type 2 diabetic subjects with and without neuropathy. Foot scans were obtained using the F-scan (Tekscan, USA) pressure measurement system. Various discrete wavelet coefficients were evaluated from the foot images; these parameters were extracted using the discrete wavelet transform (DWT) and presented to a Gaussian mixture model (GMM) and a four-layer feed-forward neural network for classification. We demonstrated a sensitivity of 100% and a specificity of more than 85% for the classifiers.
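As a rough illustration of the DWT-plus-GMM pipeline (not the authors' code), the sketch below extracts one-level Haar wavelet band energies from synthetic one-dimensional profiles and classifies a new profile by comparing per-class Gaussian-mixture likelihoods; scikit-learn's GaussianMixture stands in for the paper's classifier, and the "smooth" and "spiky" signal classes are invented surrogates for the pressure data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def haar_features(signal):
    """One-level Haar DWT; return log-energies of the approximation
    and detail bands as a two-dimensional feature vector."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)   # approximation band
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)   # detail band
    return np.log([np.sum(a ** 2) + 1e-9, np.sum(d ** 2) + 1e-9])

rng = np.random.default_rng(1)
# Synthetic stand-ins for pressure profiles: smooth vs. spiky signals.
smooth = [haar_features(np.sin(np.linspace(0, 3, 64))
                        + 0.05 * rng.standard_normal(64)) for _ in range(40)]
spiky = [haar_features(rng.standard_normal(64)) for _ in range(40)]

gmm_smooth = GaussianMixture(n_components=2, random_state=0).fit(smooth)
gmm_spiky = GaussianMixture(n_components=2, random_state=0).fit(spiky)

test = haar_features(np.sin(np.linspace(0, 3, 64)))   # an unseen smooth profile
label = "smooth" if gmm_smooth.score([test]) > gmm_spiky.score([test]) else "spiky"
print(label)   # -> smooth
```

The real system would use several DWT levels over two-dimensional pedobarograph images, but the classify-by-maximum-class-likelihood structure is the same.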

  9. Robust vehicle detection in different weather conditions: Using MIPM

    PubMed Central

    Menéndez, José Manuel; Jiménez, David

    2018-01-01

Intelligent Transportation Systems (ITS) allow us to obtain high-quality traffic information and reduce the risk of potentially critical situations. Conventional image-based traffic detection methods have difficulty acquiring good images due to perspective and background noise, poor lighting and weather conditions. In this paper, we propose a new method to accurately segment and track vehicles. After removing perspective using Modified Inverse Perspective Mapping (MIPM), the Hough transform is applied to extract road lines and lanes. Then, Gaussian Mixture Models (GMM) are used to segment moving objects; to tackle car shadow effects, we apply a chromaticity-based strategy. Finally, performance is evaluated on three different video benchmarks: our own recorded videos in Madrid and Tehran (with different weather conditions in urban and interurban areas) and two well-known public datasets (KITTI and DETRAC). Our results indicate that the proposed algorithms are robust and more accurate than others, especially when facing occlusions, lighting variations and adverse weather conditions. PMID:29513664

  10. Efficient ensemble forecasting of marine ecology with clustered 1D models and statistical lateral exchange: application to the Red Sea

    NASA Astrophysics Data System (ADS)

    Dreano, Denis; Tsiaras, Kostas; Triantafyllou, George; Hoteit, Ibrahim

    2017-07-01

Forecasting the state of large marine ecosystems is important for many economic and public health applications. However, advanced three-dimensional (3D) ecosystem models, such as the European Regional Seas Ecosystem Model (ERSEM), are computationally expensive, especially when implemented within an ensemble data assimilation system requiring several parallel integrations. As an alternative to 3D ecological forecasting systems, we propose to implement a set of regional one-dimensional (1D) water-column ecological models that run at a fraction of the computational cost. The 1D model domains are determined using a Gaussian mixture model (GMM)-based clustering method and satellite chlorophyll-a (Chl-a) data. Regionally averaged Chl-a data are assimilated into the 1D models using the singular evolutive interpolated Kalman (SEIK) filter. To laterally exchange information between subregions and improve the forecasting skill, we introduce a new correction step to the assimilation scheme, in which we assimilate a statistical forecast of future Chl-a observations based on information from neighbouring regions. We apply this approach to the Red Sea and show that the assimilative 1D ecological models can forecast surface Chl-a concentration with high accuracy. The statistical assimilation step further improves the forecasting skill by as much as 50%. This general approach of clustering large marine areas and running several interacting 1D ecological models is very flexible. It allows many combinations of clustering, filtering and regression techniques to be used and can be applied to build efficient forecasting systems in other large marine ecosystems.
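The clustering step that defines the 1D-model subregions can be mimicked with scikit-learn on synthetic seasonal Chl-a profiles; the two "regimes" below are invented for illustration and stand in for the satellite climatology.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
months = np.arange(12)
# Two invented regimes: a seasonal-bloom region and a flat oligotrophic one.
bloom = (0.5 + 0.4 * np.cos(2 * np.pi * months / 12)
         + 0.02 * rng.standard_normal((50, 12)))
flat = 0.15 + 0.02 * rng.standard_normal((50, 12))
profiles = np.vstack([bloom, flat])       # one Chl-a annual cycle per grid point

gmm = GaussianMixture(n_components=2, covariance_type="diag", random_state=0)
labels = gmm.fit_predict(profiles)        # subregion assignment per grid point
print(len(set(labels[:50])), len(set(labels[50:])))   # -> 1 1
```

Each cluster of grid points would then be served by one 1D water-column model, with the SEIK filter assimilating the regionally averaged Chl-a.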

  11. Gaussian mixture modeling of acoustic emissions for structural health monitoring of reinforced concrete structures

    NASA Astrophysics Data System (ADS)

    Farhidzadeh, Alireza; Dehghan-Niri, Ehsan; Salamone, Salvatore

    2013-04-01

    Reinforced Concrete (RC) has been widely used in construction of infrastructures for many decades. The cracking behavior in concrete is crucial due to the harmful effects on structural performance such as serviceability and durability requirements. In general, in loading such structures until failure, tensile cracks develop at the initial stages of loading, while shear cracks dominate later. Therefore, monitoring the cracking modes is of paramount importance as it can lead to the prediction of the structural performance. In the past two decades, significant efforts have been made toward the development of automated structural health monitoring (SHM) systems. Among them, a technique that shows promises for monitoring RC structures is the acoustic emission (AE). This paper introduces a novel probabilistic approach based on Gaussian Mixture Modeling (GMM) to classify AE signals related to each crack mode. The system provides an early warning by recognizing nucleation of numerous critical shear cracks. The algorithm is validated through an experimental study on a full-scale reinforced concrete shear wall subjected to a reversed cyclic loading. A modified conventional classification scheme and a new criterion for crack classification are also proposed.

  12. Evaluation of a speaker identification system with and without fusion using three databases in the presence of noise and handset effects

    NASA Astrophysics Data System (ADS)

Al-Kaltakchi, Musab T. S.; Woo, Wai L.; Dlay, Satnam; Chambers, Jonathon A.

    2017-12-01

    In this study, a speaker identification system is considered consisting of a feature extraction stage which utilizes both power normalized cepstral coefficients (PNCCs) and Mel frequency cepstral coefficients (MFCC). Normalization is applied by employing cepstral mean and variance normalization (CMVN) and feature warping (FW), together with acoustic modeling using a Gaussian mixture model-universal background model (GMM-UBM). The main contributions are comprehensive evaluations of the effect of both additive white Gaussian noise (AWGN) and non-stationary noise (NSN) (with and without a G.712 type handset) upon identification performance. In particular, three NSN types with varying signal to noise ratios (SNRs) were tested corresponding to street traffic, a bus interior, and a crowded talking environment. The performance evaluation also considered the effect of late fusion techniques based on score fusion, namely, mean, maximum, and linear weighted sum fusion. The databases employed were TIMIT, SITW, and NIST 2008; and 120 speakers were selected from each database to yield 3600 speech utterances. As recommendations from the study, mean fusion is found to yield overall best performance in terms of speaker identification accuracy (SIA) with noisy speech, whereas linear weighted sum fusion is overall best for original database recordings.
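The three late-fusion rules compared in the study are simple to state; a minimal numpy sketch (with hypothetical per-speaker scores and an illustrative weight) is:

```python
import numpy as np

def fuse(scores_a, scores_b, rule="mean", w=0.7):
    """Late fusion of two systems' per-speaker score vectors."""
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    if rule == "mean":
        return (a + b) / 2
    if rule == "max":
        return np.maximum(a, b)
    if rule == "weighted":
        return w * a + (1 - w) * b        # w is an illustrative weight
    raise ValueError(rule)

mfcc_scores = [0.2, 0.9, 0.4]             # hypothetical per-speaker scores
pncc_scores = [0.3, 0.7, 0.8]
fused = fuse(mfcc_scores, pncc_scores, "mean")
print(int(np.argmax(fused)))              # identified speaker index -> 1
```

In practice the score vectors would come from the two GMM-UBM front ends (MFCC and PNCC), and the identified speaker is the index of the highest fused score.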

  13. Vehicle speed detection based on gaussian mixture model using sequential of images

    NASA Astrophysics Data System (ADS)

    Setiyono, Budi; Ratna Sulistyaningrum, Dwi; Soetrisno; Fajriyah, Farah; Wahyu Wicaksono, Danang

    2017-09-01

Intelligent Transportation Systems are one of the important components in the development of smart cities. Detection of vehicle speed on the highway supports traffic engineering and management. The purpose of this study is to detect the speed of moving vehicles using digital image processing. The inputs are a sequence of frames, the frame rate (fps) and a region of interest (ROI). The steps are as follows: first, we separate foreground and background using a Gaussian Mixture Model (GMM) in each frame; then, in each frame, we calculate the location of each object and its centroid; next, we determine the speed by computing the movement of the centroid across the sequence of frames. In the speed calculation, we only consider frames in which the centroid lies inside the predefined ROI. Finally, we transform the pixel displacement into a speed in km/h. The system was validated by comparing the speed calculated manually with that obtained by the system. In software testing, the system detected vehicle speeds with accuracies ranging from 77.41% to 97.52%; the tests also include detection results on real road video footage compared against the vehicles' true speeds.
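The final conversion step described above (centroid displacement to km/h) amounts to a few lines; the sketch below assumes a hypothetical metres-per-pixel calibration and frame rate, which in the real system come from camera calibration and the video metadata.

```python
import numpy as np

def speed_kmh(centroids, fps, metres_per_pixel):
    """Average speed from per-frame centroid positions (row, col) in pixels."""
    c = np.asarray(centroids, dtype=float)
    steps = np.linalg.norm(np.diff(c, axis=0), axis=1)   # pixels per frame
    return steps.mean() * metres_per_pixel * fps * 3.6   # m/s -> km/h

# A vehicle moving 4 px/frame at 30 fps with a 0.05 m/pixel calibration:
centroids = [(100, 10 + 4 * i) for i in range(6)]
print(round(speed_kmh(centroids, fps=30, metres_per_pixel=0.05), 1))   # -> 21.6
```

Only centroids recorded while the object is inside the ROI would be passed to this function.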

  14. Predicting fundamental frequency from mel-frequency cepstral coefficients to enable speech reconstruction.

    PubMed

    Shao, Xu; Milner, Ben

    2005-08-01

    This work proposes a method to reconstruct an acoustic speech signal solely from a stream of mel-frequency cepstral coefficients (MFCCs) as may be encountered in a distributed speech recognition (DSR) system. Previous methods for speech reconstruction have required, in addition to the MFCC vectors, fundamental frequency and voicing components. In this work the voicing classification and fundamental frequency are predicted from the MFCC vectors themselves using two maximum a posteriori (MAP) methods. The first method enables fundamental frequency prediction by modeling the joint density of MFCCs and fundamental frequency using a single Gaussian mixture model (GMM). The second scheme uses a set of hidden Markov models (HMMs) to link together a set of state-dependent GMMs, which enables a more localized modeling of the joint density of MFCCs and fundamental frequency. Experimental results on speaker-independent male and female speech show that accurate voicing classification and fundamental frequency prediction is attained when compared to hand-corrected reference fundamental frequency measurements. The use of the predicted fundamental frequency and voicing for speech reconstruction is shown to give very similar speech quality to that obtained using the reference fundamental frequency and voicing.
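The first scheme can be illustrated by fitting a joint GMM over (feature, fundamental-frequency) pairs; the sketch below uses a single synthetic feature instead of an MFCC vector and predicts via the mixture of per-component conditional means (an MMSE variant of the MAP prediction described in the paper). All data and model sizes here are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 500)                        # a single synthetic feature
y = 120 + 30 * np.sin(x) + rng.normal(0, 2, 500)   # toy fundamental frequency
gmm = GaussianMixture(n_components=8, covariance_type="full",
                      random_state=0).fit(np.column_stack([x, y]))

def predict_f0(x0):
    """E[y | x0] under the joint GMM (mixture of conditional means)."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for x0, from the marginal over x.
    resp = np.array([w[k] * np.exp(-0.5 * (x0 - mu[k, 0]) ** 2 / cov[k, 0, 0])
                     / np.sqrt(cov[k, 0, 0]) for k in range(len(w))])
    resp /= resp.sum()
    # Per-component conditional mean of y given x0.
    cond = mu[:, 1] + cov[:, 1, 0] / cov[:, 0, 0] * (x0 - mu[:, 0])
    return float(resp @ cond)

print(abs(predict_f0(1.0) - (120 + 30 * np.sin(1.0))) < 10)   # -> True
```

With full MFCC vectors the same formulas apply, with the scalar conditional-mean expression replaced by its matrix form; the paper's second scheme additionally gates the GMMs by HMM state.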

  15. The Analysis for Regulation Performance of a Variable Thrust Rocket Engine Control System,

    DTIC Science & Technology

    1982-06-29

valve: Q(t) = K·W(t) ± K·ΔP_N(t) (14), where K is defined through Equations (15)-(18). Equations (13) and (14) can be expressed as one equation for the net flow Q(t) = Q_c(t) − Q_a(t). P_H: hydraulic pressure when the needle valve starts to rise (g/mm²); ΔP_H(t): hydraulic pressure increment; A_H: hydraulic pressure function area (mm²); B: needle-... The flow-rate gain K_e and the solenoid-valve pressure coefficient K_PH are related through Equations (15), (16), (17) and (18). If we use the parameters of the exhaust

  16. Environmental quality indicators and financial development in Malaysia: unity in diversity.

    PubMed

    Alam, Arif; Azam, Muhammad; Abdullah, Alias Bin; Malik, Ihtisham Abdul; Khan, Anwar; Hamzah, Tengku Adeline Adura Tengku; Faridullah; Khan, Muhammad Mushtaq; Zahoor, Hina; Zaman, Khalid

    2015-06-01

Environmental quality indicators are crucial for responsive and cost-effective policies. The objective of the study is to examine the relationship between environmental quality indicators and financial development in Malaysia. For this purpose, a number of environmental quality indicators were used, i.e., air pollution measured by carbon dioxide emissions, population density per square kilometer of land area, agricultural production measured by cereal and livestock production, and energy resources represented by energy use and fossil fuel energy consumption, each of which may affect the financial development of the country. The study used four main financial indicators, i.e., broad money supply (M2), domestic credit provided by the financial sector (DCFS), domestic credit to the private sector (DCPC), and inflation (CPI); each financial indicator was estimated separately against the environmental quality indicators over the period 1975-2013. The study used the generalized method of moments (GMM) technique to minimize simultaneity bias in the model. The results show that carbon dioxide emissions are positively correlated with M2, DCFS, and DCPC, and negatively correlated with the CPI. However, these correlations vanish in the GMM estimates, where carbon emissions have no significant relationship with any of the four financial indicators in Malaysia. The GMM results show that population density has a negative relationship with all four financial indicators; however, in the case of M2, this relationship is insignificant. Cereal production has a positive relationship with the DCPC and a negative relationship with the CPI. 
Livestock production has a positive relationship with all four financial indicators; the relationship with the CPI is more elastic, while the relationships with the other three financial indicators are less elastic. Energy resources comprise energy use and fossil fuel energy consumption, and the two show distinct results: energy use has a positive and significant relationship with the DCFS, DCPC, and CPI, while fossil fuel energy consumption has a negative relationship with these three financial indicators. The results of the study are of value to both environmentalists and policy makers.

  17. Semantic Indexing of Multimedia Content Using Visual, Audio, and Text Cues

    NASA Astrophysics Data System (ADS)

    Adams, W. H.; Iyengar, Giridharan; Lin, Ching-Yung; Naphade, Milind Ramesh; Neti, Chalapathy; Nock, Harriet J.; Smith, John R.

    2003-12-01

    We present a learning-based approach to the semantic indexing of multimedia content using cues derived from audio, visual, and text features. We approach the problem by developing a set of statistical models for a predefined lexicon. Novel concepts are then mapped in terms of the concepts in the lexicon. To achieve robust detection of concepts, we exploit features from multiple modalities, namely, audio, video, and text. Concept representations are modeled using Gaussian mixture models (GMM), hidden Markov models (HMM), and support vector machines (SVM). Models such as Bayesian networks and SVMs are used in a late-fusion approach to model concepts that are not explicitly modeled in terms of features. Our experiments indicate promise in the proposed classification and fusion methodologies: our proposed fusion scheme achieves more than 10% relative improvement over the best unimodal concept detector.

  18. Heterogeneity in the pharmacodynamics of two long-acting methylphenidate formulations for children with attention deficit/hyperactivity disorder. A growth mixture modelling analysis.

    PubMed

    Sonuga-Barke, Edmund J S; Van Lier, Pol; Swanson, James M; Coghill, David; Wigal, Sharon; Vandenberghe, Mieke; Hatch, Simon

    2008-06-01

To use growth mixture modelling (GMM) to identify subgroups of children with attention deficit/hyperactivity disorder (ADHD) who have different pharmacodynamic profiles in response to extended-release methylphenidate as assessed in a laboratory classroom setting. GMM analysis was performed on data from the COMACS study (Comparison of Methylphenidates in the Analog Classroom Setting): a large (n = 184) placebo-controlled cross-over study comparing three treatment conditions in the Laboratory School Protocol (with a 1.5-h cycle of attention and deportment assessments). Two orally administered, once-daily methylphenidate (MPH) bioequivalent formulations [Metadate CD/Equasym XL (MCD-EQXL) and Concerta XL (CON)] were compared with placebo (PLA). Three classes of children with distinct severity profiles in the PLA condition were identified. For both MCD-EQXL and CON, the more severe the children's PLA symptoms, the better their response. However, the formulations produced different growth curves by class, with CON having an essentially flat profile for all three classes (i.e. no effect of PLA severity) and MCD-EQXL showing a marked decline in symptoms immediately post-dosing in the two most severe classes compared with the least severe. Comparison of daily doses matched for immediate-release (IR) components accounted for this difference. The results suggest considerable heterogeneity in the pharmacodynamics of MPH response in children with ADHD. When treatment responses to near-equal, bioequivalent daily doses of the two formulations were compared, marked differences were seen for children in the most severe classes, with a strong curvilinear trajectory for MCD-EQXL related to the greater IR component.

  19. Excitation-scanning hyperspectral imaging as a means to discriminate various tissues types

    NASA Astrophysics Data System (ADS)

    Deal, Joshua; Favreau, Peter F.; Lopez, Carmen; Lall, Malvika; Weber, David S.; Rich, Thomas C.; Leavesley, Silas J.

    2017-02-01

Little is currently known about the fluorescence excitation spectra of disparate tissues and how these spectra change with pathological state. Current imaging diagnostic techniques have limited capacity to investigate fluorescence excitation spectral characteristics. This study utilized excitation-scanning hyperspectral imaging to perform a comprehensive assessment of the fluorescence spectral signatures of various tissues. Immediately following tissue harvest, a custom inverted microscope (TE-2000, Nikon Instruments) with a Xe arc lamp and thin-film tunable filter array (VersaChrome, Semrock, Inc.) was used to acquire hyperspectral image data from each sample. Scans utilized excitation wavelengths from 340 nm to 550 nm in 5 nm increments. Hyperspectral images were analyzed with custom MATLAB scripts including linear spectral unmixing (LSU), principal component analysis (PCA), and Gaussian mixture modeling (GMM). Spectra were examined for potential characteristic features such as consistent intensity peaks at specific wavelengths or intensity ratios among significant wavelengths. The resultant spectral features were conserved among tissues of similar molecular composition. Additionally, excitation spectra appear to be mixtures of pure endmembers with commonalities across tissues of varied molecular composition, potentially identifiable through GMM. These results suggest the presence of common autofluorescent molecules in most tissues and indicate that excitation-scanning hyperspectral imaging may serve as an approach for characterizing tissue composition as well as pathologic state. Future work will test the feasibility of excitation-scanning hyperspectral imaging as a contrast mode for discriminating normal and pathological tissues.

  20. The JPL Mars gravity field, Mars50c, based upon Viking and Mariner 9 Doppler tracking data

    NASA Technical Reports Server (NTRS)

    Konopliv, Alexander S.; Sjogren, William L.

    1995-01-01

This report summarizes current JPL efforts to generate a Mars gravity field from Viking 1 and 2 and Mariner 9 Doppler tracking data. The Mars50c solution is a complete gravity field to degree and order 50, with solutions as well for the gravitational mass of Mars, Phobos, and Deimos. The constants and models used to obtain the solution are given, and the method for determining the gravity field is presented. The gravity field is compared to GMM-1, the best current gravity field from Goddard Space Flight Center.

  1. Automatic identification of individual killer whales.

    PubMed

    Brown, Judith C; Smaragdis, Paris; Nousek-McGregor, Anna

    2010-09-01

    Following the successful use of HMM and GMM models for classification of a set of 75 calls of northern resident killer whales into call types [Brown, J. C., and Smaragdis, P., J. Acoust. Soc. Am. 125, 221-224 (2009)], the use of these same methods has been explored for the identification of vocalizations from the same call type N2 of four individual killer whales. With an average of 20 vocalizations from each of the individuals the pairwise comparisons have an extremely high success rate of 80 to 100% and the identifications within the entire group yield around 78%.

  2. KSC-20160908-RV-GMM01_0003-OSIRIS_REx_Launch_Broadcast_Ground_ISO-3126827

    NASA Image and Video Library

    2016-09-08

Liftoff of OSIRIS-REx: A United Launch Alliance Atlas V rocket lifts off from Space Launch Complex 41 at Cape Canaveral Air Force Station carrying NASA’s Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer, or OSIRIS-REx, spacecraft on the first U.S. mission to sample an asteroid, retrieve at least two ounces of surface material and return it to Earth for study. Liftoff was at 7:05 p.m. EDT. The asteroid, Bennu, may hold clues to the origin of the solar system and the source of water and organic molecules found on Earth.

  3. KSC-20160908-RV-GMM01_0002-OSIRIS_REx_Launch_Broadcast_VIF_ISO-3126827

    NASA Image and Video Library

    2016-09-08

Liftoff of OSIRIS-REx: A United Launch Alliance Atlas V rocket lifts off from Space Launch Complex 41 at Cape Canaveral Air Force Station carrying NASA’s Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer, or OSIRIS-REx, spacecraft on the first U.S. mission to sample an asteroid, retrieve at least two ounces of surface material and return it to Earth for study. Liftoff was at 7:05 p.m. EDT. The asteroid, Bennu, may hold clues to the origin of the solar system and the source of water and organic molecules found on Earth.

  4. KSC-20160908-RV-GMM01_0001-OSIRIS_REx_Launch_Broadcast_VAB_Roof_ISO-3126827

    NASA Image and Video Library

    2016-09-08

Liftoff of OSIRIS-REx: A United Launch Alliance Atlas V rocket lifts off from Space Launch Complex 41 at Cape Canaveral Air Force Station carrying NASA’s Origins, Spectral Interpretation, Resource Identification, Security-Regolith Explorer, or OSIRIS-REx, spacecraft on the first U.S. mission to sample an asteroid, retrieve at least two ounces of surface material and return it to Earth for study. Liftoff was at 7:05 p.m. EDT. The asteroid, Bennu, may hold clues to the origin of the solar system and the source of water and organic molecules found on Earth.

  5. Variability of Suitable Habitat of Western Winter-Spring Cohort for Neon Flying Squid in the Northwest Pacific under Anomalous Environments.

    PubMed

    Yu, Wei; Chen, Xinjun; Yi, Qian; Chen, Yong; Zhang, Yang

    2015-01-01

We developed a habitat suitability index (HSI) model to evaluate the variability of suitable habitat for neon flying squid (Ommastrephes bartramii) under anomalous environments in the Northwest Pacific Ocean. Commercial fisheries data from the Chinese squid-jigging vessels on the traditional fishing ground bounded by 35°-45°N and 150°-175°E from July to November during 1998-2009 were used for the analyses, together with environmental variables including sea surface temperature (SST), chlorophyll-a (Chl-a) concentration, sea surface height anomaly (SSHA) and sea surface salinity (SSS). Two empirical HSI models (arithmetic mean model, AMM; geometric mean model, GMM) were established according to the frequency distribution of fishing effort. The AMM was found to perform better than the GMM. The AMM-based HSI model was further validated with the fishery and environmental data of 2010. The predicted HSI values in 1998 (high catch), 2008 (average catch) and 2009 (low catch) indicated that squid habitat quality was strongly associated with ENSO-induced variability in the oceanic conditions on the fishing ground. The La Niña event in 1998 tended to yield warm SST and favorable ranges of Chl-a concentration and SSHA, resulting in high-quality habitats for O. bartramii. In contrast, the fishing ground in the El Niño year of 2009 experienced anomalously cool waters and unfavorable ranges of Chl-a concentration and SSHA, leading to relatively low-quality squid habitats. Our findings suggest that the La Niña event in 1998 tended to result in more favorable habitats for O. bartramii in the Northwest Pacific, with the gravity centers of fishing effort falling within the defined suitable habitat and yielding a high squid catch, whereas the El Niño event in 2009 yielded less favorable habitat areas, with the fishing effort distribution mismatching the suitable habitat and a dramatic decline in the catch of O. bartramii. 
This study might provide some potentially valuable insights into exploring the relationship between the underlying squid habitat and the inter-annual environmental change.
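The two empirical HSI formulations compared above differ only in how per-variable suitability indices are combined; a minimal sketch with hypothetical index values:

```python
import numpy as np

def hsi_amm(si):
    """Arithmetic mean model: average of the suitability indices."""
    return float(np.mean(si))

def hsi_gmm(si):
    """Geometric mean model: n-th root of the product of the indices."""
    return float(np.prod(si) ** (1.0 / len(si)))

# Hypothetical suitability indices for SST, Chl-a, SSHA and SSS at one site:
si = [0.9, 0.6, 0.8, 0.1]
print(round(hsi_amm(si), 3), round(hsi_gmm(si), 3))   # -> 0.6 0.456
```

The geometric mean penalizes a single poor variable far more than the arithmetic mean, which is one reason the two formulations can rank habitats differently and why the study compares them.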

  7. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking need a tedious calibration process, in which subjects are required to fixate on one or several specific points in space. However, such cooperation is hard to obtain, especially from children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To the best of our knowledge, this is the first application of vision-based gaze tracking to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust and sufficient for the measurement of visual acuity in human infants.
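
    The classification step can be sketched as fitting one Gaussian per labeled gaze-behavior class and assigning a new sample to the class with the higher likelihood. The sketch below uses a single Gaussian per class (the one-component special case of a GMM) on a scalar feature; the class names and feature values are hypothetical, not taken from the paper:

```python
import math
from statistics import mean, stdev

def fit_gaussian(samples):
    """Estimate mean and standard deviation of a class from labeled samples."""
    return mean(samples), stdev(samples)

def log_pdf(x, mu, sigma):
    """Log of the univariate normal density."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# Hypothetical 1-D PCCR-derived features from labeled training recordings.
train = {
    "fixating":     [0.10, 0.12, 0.09, 0.11, 0.13],
    "not_fixating": [0.40, 0.45, 0.38, 0.43, 0.41],
}
models = {label: fit_gaussian(xs) for label, xs in train.items()}

def classify(x):
    """Assign x to the class whose Gaussian gives it the higher likelihood."""
    return max(models, key=lambda label: log_pdf(x, *models[label]))

print(classify(0.12))  # fixating
print(classify(0.42))  # not_fixating
```

    A full GMM would replace each class's single Gaussian with a weighted mixture of several, which matters when a class's feature distribution is multi-modal.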

  8. Analysis and automatic identification of sleep stages using higher order spectra.

    PubMed

    Acharya, U Rajendra; Chua, Eric Chern-Pin; Chua, Kuang Chua; Min, Lim Choo; Tamura, Toshiyo

    2010-12-01

    Electroencephalogram (EEG) signals are widely used to study the activity of the brain, such as to determine sleep stages. These EEG signals are nonlinear and non-stationary in nature. It is difficult to perform sleep staging by visual interpretation and linear techniques. Thus, we use a nonlinear technique, higher order spectra (HOS), to extract hidden information in the sleep EEG signal. In this study, unique bispectrum and bicoherence plots for various sleep stages were proposed. These can be used as visual aids for various diagnostic applications. A number of HOS-based features were extracted from these plots during the various sleep stages (Wakefulness, Rapid Eye Movement (REM), non-REM Stages 1-4), and they were found to be statistically significant, with p-values lower than 0.001 in an ANOVA test. These features were fed to a Gaussian mixture model (GMM) classifier for automatic identification. Our results indicate that the proposed system is able to identify sleep stages with an accuracy of 88.7%.

  9. Color-magnitude distribution of face-on nearby galaxies in Sloan digital sky survey DR7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shuo-Wen; Feng, Long-Long; Gu, Qiusheng

    2014-05-20

    We have analyzed the distributions in the color-magnitude diagram (CMD) of a large sample of face-on galaxies to minimize the effect of dust extinctions on galaxy color. About 300,000 galaxies with log (a/b) < 0.2 and redshift z < 0.2 are selected from the Sloan Digital Sky Survey DR7 catalog. Two methods are employed to investigate the distributions of galaxies in the CMD, including one-dimensional (1D) Gaussian fitting to the distributions in individual magnitude bins and two-dimensional (2D) Gaussian mixture model (GMM) fitting to galaxies as a whole. We find that in the 1D fitting, two Gaussians are not enough to fit the galaxies, with an excess present between the blue cloud and the red sequence. The fitting to this excess defines the center of the green valley in the local universe to be (u − r)_0.1 = −0.121 M_r,0.1 − 0.061. The fraction of blue cloud and red sequence galaxies turns over around M_r,0.1 ∼ −20.1 mag, corresponding to a stellar mass of 3 × 10^10 M_☉. For the 2D GMM fitting, a total of four Gaussians are required: one for the blue cloud, one for the red sequence, and the additional two for the green valley. The fact that two Gaussians are needed to describe the distributions of galaxies in the green valley is consistent with some models that argue for two different evolutionary paths from the blue cloud to the red sequence.
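
    Choosing how many Gaussians a mixture needs (two, or four as found here) is typically done with a model-selection criterion such as the Bayesian information criterion, which penalizes extra components by their parameter count. A sketch, assuming BIC is the criterion; the log-likelihood values are invented for illustration:

```python
import math

def bic(log_likelihood, n_params, n_samples):
    """Bayesian information criterion; lower is better."""
    return n_params * math.log(n_samples) - 2.0 * log_likelihood

# Hypothetical maximized log-likelihoods of 2-, 3- and 4-component 2-D GMMs
# fitted to n CMD points. A k-component full-covariance 2-D mixture has
# 6k - 1 free parameters (2 means and 3 covariance entries per component,
# plus k - 1 mixture weights).
n = 300_000
loglikes = {2: -1_200_000.0, 3: -1_195_000.0, 4: -1_194_980.0}
scores = {k: bic(ll, 6 * k - 1, n) for k, ll in loglikes.items()}
best = min(scores, key=scores.get)
print(best)  # 3: the fourth component does not pay for its parameter cost
```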

  10. Popular song and lyrics synchronization and its application to music information retrieval

    NASA Astrophysics Data System (ADS)

    Chen, Kai; Gao, Sheng; Zhu, Yongwei; Sun, Qibin

    2006-01-01

    An automatic system for synchronizing a popular song with its lyrics is presented in this paper. The system includes two main components: a) automatic detection of vocal/non-vocal segments in the audio signal, and b) automatic alignment of the acoustic signal of the song with its lyrics using speech recognition techniques, positioning the boundaries of the lyrics in the acoustic realization at multiple levels simultaneously (e.g., the word/syllable level and the phrase level). GMM models and a set of HMM-based acoustic model units are carefully designed and trained for the detection and alignment. To eliminate the severe mismatch due to the diversity of musical signals and the sparse training data available, an unsupervised adaptation technique, maximum likelihood linear regression (MLLR), is exploited to tailor the models to the real environment, which improves the robustness of the synchronization system. To further reduce the effect of missed non-vocal music on alignment, a novel grammar net is built to direct the alignment. To our knowledge, this is the first automatic synchronization system based only on low-level acoustic features such as MFCCs. We evaluate the system on a dataset of Chinese songs collected from 3 popular singers. We obtain 76.1% boundary accuracy at the syllable level (BAS) and 81.5% boundary accuracy at the phrase level (BAP) using fully automatic vocal/non-vocal detection and alignment. The synchronization system has many applications, such as multi-modality (audio and textual) content-based popular song browsing and retrieval. Through this study, we would like to open up the discussion of some challenging problems in developing a robust synchronization system for a large-scale database.

  11. Motor-Enriched Learning Activities Can Improve Mathematical Performance in Preadolescent Children.

    PubMed

    Beck, Mikkel M; Lind, Rune R; Geertsen, Svend S; Ritz, Christian; Lundbye-Jensen, Jesper; Wienecke, Jacob

    2016-01-01

    Objective: An emerging field of research indicates that physical activity can benefit cognitive functions and academic achievement in children. However, less is known about how academic achievement can benefit from specific types of motor activity (e.g., fine and gross) integrated into learning activities. Thus, the aim of this study was to investigate whether fine or gross motor activity integrated into math lessons (i.e., motor enrichment) could improve children's mathematical performance. Methods: A 6-week within-school cluster-randomized intervention study investigated the effects of motor-enriched mathematical teaching in Danish preadolescent children (n = 165, age = 7.5 ± 0.02 years). Three groups were included: a control group (CON), which received non-motor-enriched conventional mathematical teaching; a fine motor math group (FMM); and a gross motor math group (GMM), which received mathematical teaching enriched with fine and gross motor activity, respectively. The children were tested before (T0), immediately after (T1), and 8 weeks after the intervention (T2). A standardized mathematical test (50 tasks) was used to evaluate mathematical performance. Furthermore, it was investigated whether motor-enriched math was accompanied by different effects in low and normal math performers. Additionally, the study investigated the potential contribution of cognitive functions and motor skills to mathematical performance. Results: All groups improved their mathematical performance from T0 to T1. However, from T0 to T1, the improvement was significantly greater in GMM compared to FMM (1.87 ± 0.71 correct answers) (p = 0.02). At T2, no significant differences in mathematical performance were observed. A subgroup analysis revealed that normal math performers benefitted from GMM compared to both CON (1.78 ± 0.73 correct answers, p = 0.04) and FMM (2.14 ± 0.72 correct answers, p = 0.008). These effects were not observed in low math performers. The effects were partly accounted for by visuo-spatial short-term memory and gross motor skills. Conclusion: The study demonstrates that motor-enriched learning activities can improve mathematical performance. In normal math performers, GMM led to larger improvements than FMM and CON. This was not the case for the low math performers. Future studies should further elucidate the neurophysiological mechanisms underlying the observed behavioral effects.

  13. Fusion of multi-source remote sensing data for agriculture monitoring tasks

    NASA Astrophysics Data System (ADS)

    Skakun, S.; Franch, B.; Vermote, E.; Roger, J. C.; Becker Reshef, I.; Justice, C. O.; Masek, J. G.; Murphy, E.

    2016-12-01

    Remote sensing data are an essential source of information for monitoring and quantifying crop state at global and regional scales. Crop mapping, state assessment, area estimation and yield forecasting are the main tasks being addressed within GEO-GLAM. The efficiency of agriculture monitoring can be improved when heterogeneous multi-source remote sensing datasets are integrated. Here, we present several case studies of utilizing MODIS, Landsat-8 and Sentinel-2 data along with meteorological data (growing degree days - GDD) for winter wheat yield forecasting, mapping and area estimation. Archived coarse spatial resolution data, such as MODIS, VIIRS and AVHRR, can provide daily global observations that, coupled with statistical data on crop yield, enable the development of empirical models for timely yield forecasting at the national level. With the availability of high-temporal and high-spatial-resolution Landsat-8 and Sentinel-2A imagery, coarse-resolution empirical yield models can be downscaled to provide yield estimates at the regional and field scale. In particular, we present a case study of downscaling the MODIS CMG based generalized winter wheat yield forecasting model to high spatial resolution data sets, namely the harmonized Landsat-8 - Sentinel-2A surface reflectance product (HLS). Since the yield model requires corresponding in-season crop masks, we propose an automatic approach to extract winter crop maps from MODIS NDVI and MERRA2-derived GDD using a Gaussian mixture model (GMM). Validation for the state of Kansas (US) and Ukraine showed that the approach can yield accuracies > 90% without using reference (ground truth) data sets. Another application of yearly derived winter crop maps is their use for stratification purposes within area frame sampling for crop area estimation. In particular, one can simulate the dependence of the error (coefficient of variation) on the number of samples and strata size. This approach was used for estimating the area of winter crops in Ukraine for 2013-2016. The GMM-GDD approach is further extended to HLS data to provide automatic winter crop mapping at 30 m resolution for the crop yield model and area estimation. In cases of persistent cloudiness, the addition of Sentinel-1A synthetic aperture radar (SAR) images is explored for automatic winter crop mapping.
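
    The unsupervised GMM step can be sketched as a one-dimensional, two-component expectation-maximization fit that separates crop from non-crop pixels along an NDVI-like feature. Everything below (data, initialization, iteration count) is synthetic and not from the paper:

```python
import math
import random

def em_two_gaussians(xs, iters=200):
    """Fit a two-component 1-D Gaussian mixture by expectation-maximization."""
    mu = [min(xs), max(xs)]   # crude initialization at the data extremes
    var = [0.1, 0.1]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            r.append(p[1] / (p[0] + p[1]))
        # M-step: re-estimate weights, means and variances.
        n1 = sum(r)
        n0 = len(xs) - n1
        w = [n0 / len(xs), n1 / len(xs)]
        mu = [sum((1 - ri) * x for ri, x in zip(r, xs)) / n0,
              sum(ri * x for ri, x in zip(r, xs)) / n1]
        var = [max(1e-6, sum((1 - ri) * (x - mu[0]) ** 2 for ri, x in zip(r, xs)) / n0),
               max(1e-6, sum(ri * (x - mu[1]) ** 2 for ri, x in zip(r, xs)) / n1)]
    return mu, var, w

random.seed(0)
# Synthetic peak-season NDVI: non-crop pixels near 0.2, winter crop near 0.7.
xs = [random.gauss(0.2, 0.05) for _ in range(500)] + \
     [random.gauss(0.7, 0.05) for _ in range(500)]
mu, var, w = em_two_gaussians(xs)
print(sorted(round(m, 2) for m in mu))  # component means recovered near 0.2 and 0.7
```

    Pixels would then be labeled winter crop or not by comparing the two components' posterior responsibilities, with no reference data required.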

  14. Galileo NIMS Observations of Europa

    NASA Astrophysics Data System (ADS)

    Shirley, J. H.; Ocampo, A. C.; Carlson, R. W.

    2000-10-01

    The Galileo spacecraft began its tour of the Jovian system in December, 1995. The Galileo Millennium Mission (GMM) is scheduled to end in January, 2003. The opportunities to observe Europa in the remaining orbits are severely limited. Thus the catalog of NIMS observations of Europa is virtually complete. We summarize and describe this extraordinary dataset, which consists of 77 observations. The observations may be grouped in three categories, based on the scale of the data (km/pixel). The highest-resolution observations, with projected scales of 1-9 km/pixel, comprise one important subset of the catalog. These 29 observations sample both leading and trailing hemispheres at low and high latitudes. They have been employed in studies exploring the chemical composition of the non-ice surface materials on Europa (McCord et al., 1999, JGR 104, 11,827; Carlson et al., 1999, Science 286, 97). A second category consists of regional observations at moderate resolution. These 15 observations image Europa's surface at scales of 15-50 km/pixel, appropriate for construction of regional and global mosaics. A gap in coverage for longitudes 270-359 W may be partially filled during the 34th orbit of GMM. The final category consists of 33 global observations with scales ranging upward from 150 km/pixel. The noise levels are typically much reduced in comparison to observations taken deep within Jupiter's magnetosphere. Distant observations obtained during the 11th orbit revealed the presence of hydrogen peroxide on Europa's surface (Carlson et al., 1999b, Science 283, 2062). NIMS observations are archived in ISIS-format "cubes," which are available to researchers through the Planetary Data System (http://www-pdsimage.jpl.nasa.gov/PDS/Public/Atlas/Atlas.html). Detailed guides to every NIMS observation may be downloaded from the NIMS web site (http://jumpy.igpp.ucla.edu/ nims/).

  15. Two-year trajectory of fall risk in people with Parkinson’s disease: a latent class analysis

    PubMed Central

    Paul, Serene S; Thackeray, Anne; Duncan, Ryan P; Cavanaugh, James T; Ellis, Theresa D; Earhart, Gammon M; Ford, Matthew P; Foreman, K Bo; Dibble, Leland E

    2015-01-01

    Objective: To examine fall risk trajectories occurring naturally in a sample of individuals with early to middle stage Parkinson’s disease (PD). Design: Latent class analysis, specifically growth mixture modeling (GMM) of longitudinal fall risk trajectories. Setting: Not applicable. Participants: 230 community-dwelling PD participants of a longitudinal cohort study who attended at least two of five assessments over a two-year period. Interventions: Not applicable. Main Outcome Measures: Fall risk trajectory (low, medium or high risk) and stability of fall risk trajectory (stable or fluctuating). Fall risk was determined at 6-monthly intervals using a simple clinical tool based on fall history, freezing of gait, and gait speed. Results: The GMM optimally grouped participants into three fall risk trajectories that closely mirrored baseline fall risk status (p=.001). The high fall risk trajectory was most common (42.6%) and included participants with longer and more severe disease and with higher postural instability and gait disability (PIGD) scores than the low and medium risk trajectories (p<.001). Fluctuating fall risk (posterior probability <0.8 of belonging to any trajectory) was found in only 22.6% of the sample, most commonly among individuals who were transitioning to PIGD predominance. Conclusions: Regardless of their baseline characteristics, most participants had clear and stable fall risk trajectories over two years. Further investigation is required to determine whether interventions to improve gait and balance may improve fall risk trajectories in people with PD. PMID:26606871

  16. Long-term health-related quality-of-life and symptom response profiles with arformoterol in COPD: results from a 52-week trial.

    PubMed

    Donohue, James F; Bollu, Vamsi K; Stull, Donald E; Nelson, Lauren M; Williams, Valerie Sl; Stensland, Michael D; Hanania, Nicola A

    2018-01-01

    Symptom severity is the largest factor in determining subjective health in COPD. Symptoms (eg, chronic cough, dyspnea) are associated with decreased health-related quality of life (HRQoL). We evaluated the impact of arformoterol on HRQoL in COPD patients, measured by St George's Respiratory Questionnaire (SGRQ). Post hoc growth mixture model (GMM) analysis examined symptom response profiles. We examined data from a randomized, double-blind, parallel-group, 12-month safety trial of twice-daily nebulized arformoterol 15 µg (n=420) versus placebo (n=421). COPD severity was assessed by Global Initiative for Chronic Obstructive Lung Disease (GOLD) status. GMM analysis identified previously unknown patient subgroups and examined the heterogeneity in response to SGRQ Symptoms scores. SGRQ Total score improved by 4.24 points with arformoterol and 2.02 points with placebo (P=0.006). Significantly greater improvements occurred for arformoterol versus placebo in SGRQ Symptoms (6.34 vs 4.25, P=0.031) and Impacts (3.91 vs 0.97, P=0.001) scores, but not in Activity score (3.57 vs 1.75, P=0.057). GMM identified responders and nonresponders based on the SGRQ Symptoms score. The end-of-study mean difference in SGRQ Symptoms scores between these latent classes was 20.7 points (P<0.001; 95% confidence interval: 17.6-23.9). Compared with nonresponders, responders were more likely to be current smokers (55.52% vs 44.02%, P=0.0021) and had more severe COPD (forced expiratory volume in 1 second [FEV1]: 1.16 vs 1.23 L, P=0.0419), more exacerbations (0.96 vs 0.69, P=0.0018), and worse mean SGRQ Total (59.81 vs 40.57, P<0.0001), Clinical COPD Questionnaire (3.29 vs 2.05, P<0.0001), and Modified Medical Research Council Dyspnea Scale (3.13 vs 2.75, P<0.0001) scores. Arformoterol-receiving responders exhibited significantly greater improvements in FEV1 (0.09 vs 0.008, P=0.03) and fewer hospitalizations (0.13 vs 0.24, P=0.02) than those receiving placebo. In this study, arformoterol treatment significantly improved HRQoL as reflected by the SGRQ. Based on these analyses, arformoterol may be particularly effective in improving lung function and reducing hospitalizations among patients who are unable to quit smoking or present with more severe symptoms.

  17. Acoustic landmarks contain more information about the phone string than other frames for automatic speech recognition with deep neural network acoustic model

    NASA Astrophysics Data System (ADS)

    He, Di; Lim, Boon Pang; Yang, Xuesong; Hasegawa-Johnson, Mark; Chen, Deming

    2018-06-01

    Most mainstream Automatic Speech Recognition (ASR) systems consider all feature frames equally important. However, acoustic landmark theory is based on a contradictory idea: that some frames are more important than others. Acoustic landmark theory exploits quantal non-linearities in the articulatory-acoustic and acoustic-perceptual relations to define landmark times at which the speech spectrum abruptly changes or reaches an extremum; frames overlapping landmarks have been demonstrated to be sufficient for speech perception. In this work, we conduct experiments on the TIMIT corpus with both GMM- and DNN-based ASR systems and find that frames containing landmarks are more informative for ASR than other frames. We find that altering the level of emphasis on landmarks by re-weighting the acoustic likelihood tends to reduce the phone error rate (PER). Furthermore, by leveraging landmarks as a heuristic, one of our hybrid DNN frame-dropping strategies maintained a PER within 0.44% of optimal while scoring less than half (45.8%, to be precise) of the frames. This hybrid strategy outperforms other non-heuristic-based methods and demonstrates the potential of landmarks for reducing computation.
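
    The likelihood re-weighting idea can be sketched as scaling per-frame acoustic log-likelihoods by a factor greater than one on landmark frames before accumulating the utterance score; the frame scores, landmark flags and weight below are invented:

```python
def reweighted_score(frame_loglikes, landmark_flags, gamma=1.5):
    """Accumulate per-frame acoustic log-likelihoods, emphasizing landmark
    frames by scaling their contribution by gamma."""
    return sum(ll * (gamma if is_landmark else 1.0)
               for ll, is_landmark in zip(frame_loglikes, landmark_flags))

# Invented per-frame log-likelihoods and landmark detections.
loglikes = [-2.0, -1.5, -3.0, -1.0]
landmarks = [False, True, False, True]
print(reweighted_score(loglikes, landmarks))  # -8.75
```

    A frame-dropping variant would instead score only the flagged frames, trading a small accuracy loss for roughly halved acoustic-model computation.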

  18. Fault Network Reconstruction using Agglomerative Clustering: Applications to South Californian Seismicity

    NASA Astrophysics Data System (ADS)

    Kamer, Yavor; Ouillon, Guy; Sornette, Didier; Wössner, Jochen

    2014-05-01

    We present applications of a new clustering method for fault network reconstruction based on the spatial distribution of seismicity. Unlike common approaches that start from the simplest large scale and gradually increase the complexity to try to explain the small scales, our method uses a bottom-up approach: an initial sampling of the small scales, followed by a reduction of the complexity. The new approach also exploits the location uncertainty associated with each event in order to obtain a more accurate representation of the spatial probability distribution of the seismicity. For a given dataset, we first construct an agglomerative hierarchical cluster (AHC) tree based on Ward's minimum variance linkage. Such a tree starts out with one cluster and progressively branches out into an increasing number of clusters. To atomize the structure into its constitutive protoclusters, we initialize a Gaussian Mixture Model (GMM) at a given level of the hierarchical clustering tree. We then let the GMM converge using an Expectation Maximization (EM) algorithm. The kernels that become ill-defined (fewer than 4 points) at the end of the EM are discarded. By incrementing the number of initialization clusters (by atomizing at increasingly populated levels of the AHC tree) and repeating the procedure above, we are able to determine the maximum number of Gaussian kernels the structure can hold. The kernels in this configuration constitute our protoclusters. In this setting, merging any pair will lessen the likelihood (calculated over the pdf of the kernels) but in turn will reduce the model's complexity. The information loss/gain of any possible merging can thus be quantified based on the Minimum Description Length (MDL) principle. Similar to an inter-distance matrix, in which the element d(i,j) gives the distance between points i and j, we can construct an MDL gain/loss matrix in which m(i,j) gives the information gain/loss resulting from the merging of kernels i and j.
Based on this matrix, merging events resulting in MDL gain are performed in descending order until no gainful merging is possible anymore. We envision that the results of this study could lead to a better understanding of the complex interactions within the Californian fault system and hopefully use the acquired insights for earthquake forecasting.
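
    The merge decision can be sketched as a two-part description-length comparison: merging two kernels loses some log-likelihood but saves the cost of encoding one kernel's parameters. The sketch uses a BIC-style code length; all numbers are hypothetical:

```python
import math

def mdl(log_likelihood, n_params, n_points):
    """Two-part description length (BIC-style, in nats): data given the model
    plus the cost of encoding the model parameters."""
    return -log_likelihood + 0.5 * n_params * math.log(n_points)

n_points = 5000
params_per_kernel = 10   # a 3-D Gaussian kernel: 3 means, 6 covariances, 1 weight

# Hypothetical fits: two separate kernels versus a single merged kernel.
ll_separate = -41_000.0  # better fit, more parameters
ll_merged = -41_030.0    # worse fit, half the parameters

gain = (mdl(ll_separate, 2 * params_per_kernel, n_points)
        - mdl(ll_merged, params_per_kernel, n_points))
print(gain > 0)  # True: the saved parameter cost outweighs the likelihood loss
```

    Filling a matrix with such gains for every kernel pair and greedily applying the most gainful merges reproduces the procedure described above.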

  19. Does competition improve financial stability of the banking sector in ASEAN countries? An empirical analysis.

    PubMed

    Noman, Abu Hanifa Md; Gee, Chan Sok; Isa, Che Ruhana

    2017-01-01

    This study examines the influence of competition on the financial stability of the commercial banks of Association of Southeast Asian Nation (ASEAN) over the 1990 to 2014 period. Panzar-Rosse H-statistic, Lerner index and Herfindahl-Hirschman Index (HHI) are used as measures of competition, while Z-score, non-performing loan (NPL) ratio and equity ratio are used as measures of financial stability. Two-step system Generalized Method of Moments (GMM) estimates demonstrate that competition measured by H-statistic is positively related to Z-score and equity ratio, and negatively related to non-performing loan ratio. Conversely, market power measured by Lerner index is negatively related to Z-score and equity ratio and positively related to NPL ratio. These results strongly support the competition-stability view for ASEAN banks. We also capture the non-linear relationship between competition and financial stability by incorporating a quadratic term of competition in our models. The results show that the coefficient of the quadratic term of H-statistic is negative for the Z-score model given a positive coefficient of the linear term in the same model. These results support the non-linear relationship between competition and financial stability of the banking sector. The study contains significant policy implications for improving the financial stability of the commercial banks.
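
    The Z-score stability measure used here is commonly computed as the sum of mean return on assets (ROA) and the equity-to-assets ratio, divided by the standard deviation of ROA, i.e., how many ROA standard deviations returns can fall before equity is exhausted. A sketch with invented bank data:

```python
from statistics import mean, stdev

def z_score(roa_series, equity_to_assets):
    """Bank insolvency-risk Z-score: the number of ROA standard deviations
    that returns can fall before equity is wiped out. Higher = more stable."""
    return (mean(roa_series) + equity_to_assets) / stdev(roa_series)

# Invented return-on-assets history (as fractions) and equity-to-assets ratio.
roa = [0.012, 0.015, 0.010, 0.014, 0.009]
print(round(z_score(roa, 0.08), 1))  # 36.1
```

    Because the score divides by ROA volatility, more stable earnings or a thicker capital cushion both raise it, which is why it serves as the stability outcome alongside the NPL and equity ratios.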

  1. Geometric morphometrics reveals sex-differential shape allometry in a spider.

    PubMed

    Fernández-Montraveta, Carmen; Marugán-Lobón, Jesús

    2017-01-01

    Common scientific wisdom assumes that spider sexual dimorphism (SD) mostly results from sexual selection operating on males. However, testing predictions from this hypothesis, particularly male size hyperallometry, has been restricted by methodological constraints. Here, using geometric morphometrics (GMM), we studied for the first time sex-differential shape allometry in a spider (Donacosa merlini, Araneae: Lycosidae) known to exhibit the reverse pattern (i.e., male-biased) of spider sexual size dimorphism. GMM reveals previously undetected sex-differential shape allometry and sex-related shape differences that are size independent (i.e., associated with the y-intercept, not with size scaling). Sexual shape dimorphism affects both the relative carapace-to-opisthosoma size and the carapace geometry, arguably resulting from sex differences in both reproductive roles (female egg load and male competition) and life styles (wandering males and burrowing females). Our results demonstrate that body portions may vary modularly in response to different selection pressures, giving rise to sex differences in shape, which reconciles previously considered mutually exclusive interpretations about the origins of spider SD.

  2. Flux density measurement of radial magnetic bearing with a rotating rotor based on fiber Bragg grating-giant magnetostrictive material sensors.

    PubMed

    Ding, Guoping; Zhang, Songchao; Cao, Hao; Gao, Bin; Zhang, Biyun

    2017-06-10

    The rotational magnetic field of radial magnetic bearings exhibits remarkable temporal and spatial nonlinearity due to eddy currents and the induced electromagnetic field. It is important to experimentally characterize the rotational magnetic field of radial magnetic bearings in order to validate theoretical analyses and reveal the behavior of the rotational field. This paper developed thin-slice fiber Bragg grating-giant magnetostrictive material (FBG-GMM) magnetic sensors to measure the air-gap flux density of a radial magnetic bearing with a rotating rotor; a radial magnetic bearing test rig was constructed, and the rotational magnetic field at different rotation speeds was measured. Moreover, the finite element method (FEM) was used to simulate the rotational magnetic field; the measurement and FEM results were compared, and it was concluded that the FBG-GMM sensors are capable of measuring the radial magnetic bearing's air-gap flux density with a rotating rotor, with the measurement results showing a reasonable degree of accuracy.

  3. Induction of cell expansion of goldfish melanocytoma cells (GMM-1) by epinephrine and dexamethasone requires external calcium.

    PubMed

    Shih, Y L; Lo, S J

    1993-05-01

    Treatment of GMM-1 (a goldfish melanocytoma cell line) cells with epinephrine induced a rapid cell expansion (flattening of cells, extension and broadening of cellular processes) similar to the effect of dexamethasone reported previously (Shih et al., 1990). Studies on the possible involvement of second messengers in cell expansion indicated that (i) both 8-bromo-cAMP and forskolin caused cell shrinking (the opposite of cell expansion); (ii) TPA also caused cell shrinking; (iii) phospholipid derivatives, such as 1,2-dioctanoyl-sn-glycerol, lysophosphatidic acid, and arachidonic acid caused cell expansion; and (iv) EGTA (calcium chelator) and nifedipine (calcium channel blocker) inhibited the effect of epinephrine. Together with the previous findings, these observations indicate that epinephrine and dexamethasone may share a common pathway in triggering an external calcium influx to cause cell expansion. The results of the effects of epinephrine agonists and antagonists, together with those of other workers, also show that there are multiple isoforms of adrenoceptor in the goldfish.

  4. [History of Medical Mycology in the former German Democratic Republic].

    PubMed

    Seebacher, C; Blaschke-Hellmessen, Renate; Kielstein, P

    2002-01-01

    After the Second World War, the development of medical mycology took very different courses in the eastern and western parts of Germany, reflecting the country's political division. Our contribution deals with the situation in the former German Democratic Republic. Efficient mycological centres were founded step by step in almost all medical universities, built on the mycological laboratories of dermatological hospitals and responsible for diagnostic work as well as teaching and scientific research. In this context, biologists were a mainstay of mycology and were eventually integrated into the universities to the same degree as physicians. The effectiveness of the Gesellschaft für Medizinische Mykologie der DDR (GMM), its board of directors and its working groups, as well as the topics of human and animal mycology during this period, are described. Particular emphasis is given to the smooth merger of the GMM with the Deutschsprachige Mykologische Gesellschaft after the reunification of Germany, and to the kind cooperation of Prof. Dr. Johannes Müller during this process.

  5. High-Level Event Recognition in Unconstrained Videos

    DTIC Science & Technology

    2013-01-01

    frames performs well for urban soundscapes but not for polyphonic music. In place of GMM, Lu et al. [78] adopted spectral clustering to generate... Aucouturier JJ, Defreville B, Pachet F (2007) The bag-of-frames approach to audio pattern recognition: a sufficient model for urban soundscapes but not

  6. Using Statistical Multivariable Models to Understand the Relationship Between Interplanetary Coronal Mass Ejecta and Magnetic Flux Ropes

    NASA Technical Reports Server (NTRS)

    Riley, P.; Richardson, I. G.

    2012-01-01

    In-situ measurements of interplanetary coronal mass ejections (ICMEs) display a wide range of properties. A distinct subset, "magnetic clouds" (MCs), are readily identifiable by a smooth rotation in an enhanced magnetic field, together with an unusually low solar wind proton temperature. In this study, we analyze Ulysses spacecraft measurements to systematically investigate five possible explanations for why some ICMEs are observed to be MCs and others are not: i) an observational selection effect; that is, all ICMEs do in fact contain MCs, but the trajectory of the spacecraft through the ICME determines whether the MC is actually encountered; ii) interactions of an erupting flux rope (FR) with itself or between neighboring FRs, which produce complex structures in which the coherent magnetic structure has been destroyed; iii) an evolutionary process, such as relaxation to a low plasma-beta state, that leads to the formation of an MC; iv) the existence of two (or more) intrinsic initiation mechanisms, some of which produce MCs and some that do not; or v) MCs are just an easily identifiable limit in an otherwise continuous spectrum of structures. We apply quantitative statistical models to assess these ideas. In particular, we use the Akaike information criterion (AIC) to rank the candidate models and a Gaussian mixture model (GMM) to uncover any intrinsic clustering of the data. Using a logistic regression, we find that plasma-beta, CME width, and the ratio O⁷/O⁶ are the most significant predictor variables for the presence of an MC. Moreover, the propensity for an event to be identified as an MC decreases with heliocentric distance. These results tend to refute ideas ii) and iii). GMM clustering analysis further identifies three distinct groups of ICMEs, two of which match (at the 86% level) with events independently identified as MCs, and a third that matches with non-MCs (68% overlap). Thus, idea v) is not supported. Choosing between ideas i) and iv) is more challenging, since they may effectively be indistinguishable from one another by a single in-situ spacecraft. We offer some suggestions on how future studies may address this.
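    The AIC model-ranking step described above can be sketched with a toy logistic regression. Everything here (the synthetic predictors, their coefficients, and the learning rate) is illustrative and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the predictors named in the abstract:
# plasma beta, CME width, and the O7/O6 charge-state ratio.
n = 400
X = rng.normal(size=(n, 3))
# Invented ground truth: low beta and high O7/O6 favor MC identification.
logits_true = -1.5 * X[:, 0] + 0.5 * X[:, 1] + 1.0 * X[:, 2]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits_true))).astype(float)

def fit_logistic(X, y, iters=2000, lr=0.1):
    """Fit logistic regression by gradient ascent; return weights and log-likelihood."""
    Xb = np.column_stack([np.ones(len(y)), X])   # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)        # gradient of the log-likelihood
    p = np.clip(1.0 / (1.0 + np.exp(-Xb @ w)), 1e-12, 1 - 1e-12)
    ll = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return w, ll

# Rank a full model against an intercept-only model with AIC = 2k - 2 ln(L);
# the model with the lower AIC is preferred.
_, ll_full = fit_logistic(X, y)
_, ll_null = fit_logistic(X[:, :0], y)   # no predictors, intercept only
aic_full = 2 * 4 - 2 * ll_full
aic_null = 2 * 1 - 2 * ll_null
print(aic_full < aic_null)
```

A model whose predictors carry real signal should beat the intercept-only baseline despite the penalty for its extra parameters.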

  7. Segmentation of multiple heart cavities in 3-D transesophageal ultrasound images.

    PubMed

    Haak, Alexander; Vegas-Sánchez-Ferrero, Gonzalo; Mulder, Harriët W; Ren, Ben; Kirişli, Hortense A; Metz, Coert; van Burken, Gerard; van Stralen, Marijn; Pluim, Josien P W; van der Steen, Antonius F W; van Walsum, Theo; Bosch, Johannes G

    2015-06-01

    Three-dimensional transesophageal echocardiography (TEE) is an excellent modality for real-time visualization of the heart and monitoring of interventions. To improve the usability of 3-D TEE for intervention monitoring and catheter guidance, automated segmentation is desired. However, 3-D TEE segmentation is still a challenging task due to the complex anatomy with multiple cavities, the limited TEE field of view, and typical ultrasound artifacts. We propose to segment all cavities within the TEE view with a multi-cavity active shape model (ASM) in conjunction with a tissue/blood classification based on a gamma mixture model (GMM). 3-D TEE image data of twenty patients were acquired with a Philips X7-2t matrix TEE probe. Tissue probability maps were estimated by a two-class (blood/tissue) GMM. A statistical shape model containing the left ventricle, right ventricle, left atrium, right atrium, and aorta was derived from computed tomography angiography (CTA) segmentations by principal component analysis. ASMs of the whole heart and individual cavities were generated and consecutively fitted to tissue probability maps. First, an average whole-heart model was aligned with the 3-D TEE based on three manually indicated anatomical landmarks. Second, pose and shape of the whole-heart ASM were fitted by a weighted update scheme excluding parts outside of the image sector. Third, pose and shape of ASM for individual heart cavities were initialized by the previous whole heart ASM and updated in a regularized manner to fit the tissue probability maps. The ASM segmentations were validated against manual outlines by two observers and CTA derived segmentations. Dice coefficients and point-to-surface distances were used to determine segmentation accuracy. ASM segmentations were successful in 19 of 20 cases. The median Dice coefficient for all successful segmentations versus the average observer ranged from 90% to 71% compared with an inter-observer range of 95% to 84%. 
The agreement against the CTA segmentations was slightly lower with a median Dice coefficient between 85% and 57%. In this work, we successfully showed the accuracy and robustness of the proposed multi-cavity segmentation scheme. This is a promising development for intraoperative procedure guidance, e.g., in cardiac electrophysiology.
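    The tissue/blood classification step above rests on a gamma mixture fitted to image intensities. A numpy-only sketch follows; the intensity distributions are invented, and the M-step uses a weighted method-of-moments update for (shape, scale) as a closed-form stand-in for the exact maximum-likelihood update:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(7)

# Synthetic ultrasound-like intensities: a dark "blood" pool and brighter
# "tissue", each roughly gamma distributed (shapes/scales invented).
n = 4000
is_tissue = rng.random(n) < 0.6
blood = rng.gamma(2.0, 8.0, n)     # mean 16
tissue = rng.gamma(6.0, 15.0, n)   # mean 90
x = np.where(is_tissue, tissue, blood)

def log_gamma_pdf(x, k, theta):
    return (k - 1) * np.log(x) - x / theta - k * np.log(theta) - lgamma(k)

# EM for a two-class gamma mixture.
w = np.array([0.5, 0.5])
k = np.array([1.0, 1.0])
theta = np.array([np.median(x) / 2, np.median(x) * 2])
for _ in range(200):
    # E-step: responsibilities (log-domain for stability)
    logp = np.log(w) + np.stack([log_gamma_pdf(x, k[j], theta[j]) for j in range(2)], axis=1)
    logp -= logp.max(axis=1, keepdims=True)
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: weights, then moment-matched shape/scale per component
    nk = r.sum(axis=0)
    w = nk / n
    m = (r * x[:, None]).sum(0) / nk
    v = (r * (x[:, None] - m) ** 2).sum(0) / nk
    k, theta = m ** 2 / v, v / m

# Tissue-probability map: posterior of the brighter component.
tissue_comp = m.argmax()
p_tissue = r[:, tissue_comp]
acc = np.mean((p_tissue > 0.5) == is_tissue)
print(round(acc, 3))
```

The posterior `p_tissue` plays the role of the tissue probability map that the ASM is subsequently fitted to.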

  8. Hydrologic risk analysis in the Yangtze River basin through coupling Gaussian mixtures into copulas

    NASA Astrophysics Data System (ADS)

    Fan, Y. R.; Huang, W. W.; Huang, G. H.; Li, Y. P.; Huang, K.; Li, Z.

    2016-02-01

    In this study, a bivariate hydrologic risk framework is proposed through coupling Gaussian mixtures into copulas, leading to a coupled GMM-copula method. In the coupled GMM-copula method, the marginal distributions of flood peak, volume and duration are quantified through Gaussian mixture models, and the joint probability distributions of flood peak-volume, peak-duration and volume-duration are established through copulas. The bivariate hydrologic risk is then derived based on the joint return period of flood variable pairs. The proposed method is applied to risk analysis for the Yichang station on the main stream of the Yangtze River, China. The results indicate that (i) the bivariate risk for flood peak-volume remains constant for flood volumes less than 1.0 × 10⁵ (m³/s) day but shows a significant decreasing trend for flood volumes larger than 1.7 × 10⁵ (m³/s) day; and (ii) the bivariate risk for flood peak-duration does not change significantly for flood durations less than 8 days, and then decreases significantly as the duration becomes larger. The probability density functions (pdfs) of the flood volume and duration conditional on flood peak can also be generated through the fitted copulas. The results indicate that the conditional pdfs of flood volume and duration follow bimodal distributions, with the occurrence frequency of the first mode decreasing and that of the second increasing as the flood peak increases. The conclusions obtained from the bivariate hydrologic analysis can provide decision support for flood control and mitigation.
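    A minimal numpy-only sketch of the two building blocks named above: a GMM marginal fitted by EM, and a joint "AND" return period. The flood data, the 60-unit threshold, and the empirical joint exceedance standing in for a fitted copula are all assumptions for illustration:

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

# Synthetic annual flood peaks (bimodal) and correlated volumes as stand-ins.
n = 2000
mode = rng.random(n) < 0.4
peak = np.where(mode, rng.normal(30, 4, n), rng.normal(55, 6, n))
vol = 0.8 * peak + rng.normal(0, 5, n)

def fit_gmm_1d(x, k=2, iters=200):
    """EM for a 1-D Gaussian mixture; returns weights, means, stds."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.2, 0.8, k))
    sd = np.full(k, x.std())
    for _ in range(iters):
        # E-step: responsibilities
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weights, means, standard deviations
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

w, mu, sd = fit_gmm_1d(peak)

def gmm_cdf(x, w, mu, sd):
    """Mixture CDF from component normal CDFs."""
    return sum(wi * 0.5 * (1 + erf((x - mi) / (si * np.sqrt(2)))) for wi, mi, si in zip(w, mu, sd))

# Marginal return period of a 60-unit peak from the fitted GMM, and an
# empirical "AND" joint return period standing in for the copula-based one.
p_exceed = 1 - gmm_cdf(60.0, w, mu, sd)
T_marginal = 1 / p_exceed
T_joint = 1 / np.mean((peak > 60) & (vol > 45))
print(round(T_marginal, 1), round(T_joint, 1))
```

The joint ("AND") event is rarer than the marginal one, so its return period is at least as long, which is the mechanism behind the bivariate risk curves described in the abstract.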

  9. Two-Year Trajectory of Fall Risk in People With Parkinson Disease: A Latent Class Analysis.

    PubMed

    Paul, Serene S; Thackeray, Anne; Duncan, Ryan P; Cavanaugh, James T; Ellis, Theresa D; Earhart, Gammon M; Ford, Matthew P; Foreman, K Bo; Dibble, Leland E

    2016-03-01

    To examine fall risk trajectories occurring naturally in a sample of individuals with early to middle stage Parkinson disease (PD). Latent class analysis, specifically growth mixture modeling (GMM), of longitudinal fall risk trajectories. Assessments were conducted at 1 of 4 universities. Community-dwelling participants with PD of a longitudinal cohort study who attended at least 2 of 5 assessments over a 2-year follow-up period (N=230). Not applicable. Fall risk trajectory (low, medium, or high risk) and stability of fall risk trajectory (stable or fluctuating). Fall risk was determined at 6 monthly intervals using a simple clinical tool based on fall history, freezing of gait, and gait speed. The GMM optimally grouped participants into 3 fall risk trajectories that closely mirrored baseline fall risk status (P=.001). The high fall risk trajectory was most common (42.6%) and included participants with longer and more severe disease and with higher postural instability and gait disability (PIGD) scores than the low and medium fall risk trajectories (P<.001). Fluctuating fall risk (posterior probability <0.8 of belonging to any trajectory) was found in only 22.6% of the sample, most commonly among individuals who were transitioning to PIGD predominance. Regardless of their baseline characteristics, most participants had clear and stable fall risk trajectories over 2 years. Further investigation is required to determine whether interventions to improve gait and balance may improve fall risk trajectories in people with PD. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  10. Using cystoscopy to segment bladder tumors with a multivariate approach in different color spaces.

    PubMed

    Freitas, Nuno R; Vieira, Pedro M; Lima, Estevao; Lima, Carlos S

    2017-07-01

    Nowadays the diagnosis of bladder lesions relies upon cystoscopy examination and depends on the interpreter's experience. State-of-the-art bladder tumor identification methods are based on 3D reconstruction, using CT images (virtual cystoscopy) or images in which the structures are enhanced with pigmentation, but none uses white-light cystoscopy images. The authors have previously made an initial attempt to identify tumoral tissue automatically, and this paper develops that idea. Traditional cystoscopy image processing has great potential to improve early tumor detection and allow more effective treatment. This paper describes a multivariate approach to the segmentation of bladder cystoscopy images, which will be used to detect tumors automatically and support the physician's diagnosis. Each region can be assumed to follow a normal distribution with specific parameters, leading to the assumption that the distribution of intensities is a Gaussian mixture model (GMM). Regions of high-grade and low-grade tumors usually appear with higher intensity than normal regions. This paper proposes a maximum a posteriori (MAP) approach based on pixel intensities read simultaneously in different color channels from the RGB, HSV and CIELab color spaces. The expectation-maximization (EM) algorithm is used to estimate the best multivariate GMM parameters. Experimental results show that the proposed method segments bladder tumors into two classes more efficiently in RGB, even in cases where the tumor shape is not well defined. Results also show that eliminating the L component from the CIELab color space does not allow definition of the tumor shape.
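    The EM/MAP pipeline described above can be sketched as follows. This is a diagonal-covariance simplification on synthetic three-channel "pixels", not the authors' implementation; the intensity distributions are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic 3-channel "pixels": tumor regions brighter than normal mucosa
# (illustrative values only).
n = 3000
is_tumor = rng.random(n) < 0.3
normal_px = rng.normal([90, 60, 50], [12, 10, 8], size=(n, 3))
tumor_px = rng.normal([150, 110, 90], [15, 12, 10], size=(n, 3))
X = np.where(is_tumor[:, None], tumor_px, normal_px)

def em_gmm_diag(X, k=2, iters=100):
    """EM for a diagonal-covariance multivariate Gaussian mixture."""
    n, d = X.shape
    w = np.full(k, 1.0 / k)
    mu = np.quantile(X, np.linspace(0.25, 0.75, k), axis=0)  # spread-out init
    var = np.tile(X.var(axis=0), (k, 1))
    for _ in range(iters):
        # E-step: per-component log-densities, then responsibilities
        diff2 = (X[:, None, :] - mu) ** 2 / var
        logp = -0.5 * (diff2.sum(-1) + np.log(2 * np.pi * var).sum(-1)) + np.log(w)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r[:, :, None] * X[:, None, :]).sum(0) / nk[:, None]
        var = (r[:, :, None] * (X[:, None, :] - mu) ** 2).sum(0) / nk[:, None]
    return w, mu, var

w, mu, var = em_gmm_diag(X)

# MAP segmentation: assign each pixel to the highest-posterior component.
diff2 = (X[:, None, :] - mu) ** 2 / var
post = np.log(w) - 0.5 * (diff2.sum(-1) + np.log(2 * np.pi * var).sum(-1))
labels = post.argmax(axis=1)
tumor_comp = mu[:, 0].argmax()          # the brighter component plays the "tumor" role
acc = np.mean((labels == tumor_comp) == is_tumor)
print(round(acc, 3))
```
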

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Dengwang; Wang, Jie; Kapp, Daniel S.

    Purpose: The aim of this work is to develop a robust algorithm for accurate segmentation of the liver, with special attention paid to problems with fuzzy edges and tumors. Methods: 200 CT images were collected from a radiotherapy treatment planning system. 150 datasets were selected as the panel data for the shape dictionary and parameter estimation. The remaining 50 datasets were used as test images. In our study, liver segmentation was formulated as an optimization process of an implicit function. The liver region was optimized via local and global optimization during iterations. Our method consists of five steps: 1) The livers from the panel data were segmented manually by physicians, and then we estimated the parameters of a GMM (Gaussian mixture model) and MRF (Markov random field). A shape dictionary was built by utilizing the 3D liver shapes. 2) The outlines of the chest and abdomen were located according to rib structure in the input images, and the liver region was initialized based on the GMM. 3) The liver shape for each 2D slice was adjusted using the MRF within the neighborhood of the liver edge for local optimization. 4) The 3D liver shape was corrected by employing SSR (sparse shape representation) based on the liver shape dictionary for global optimization. Furthermore, H-PSO (hybrid particle swarm optimization) was employed to solve the SSR equation. 5) The corrected 3D liver was divided into 2D slices as input data for the third step. The iteration was repeated within the local and global optimization until it satisfied the stopping conditions (maximum iterations and rate of change). Results: The experiments indicated that our method performed well even for CT images with fuzzy edges and tumors. Compared with physician-delineated results, the segmentation accuracy with the 50 test datasets (VOE, volume overlap percentage) was on average 91%–95%. Conclusion: The proposed automatic segmentation method provides a sensible technique for segmentation of CT images.
This work is supported by NIH/NIBIB (1R01-EB016777), National Natural Science Foundation of China (No.61471226 and No.61201441), Research funding from Shandong Province (No.BS2012DX038 and No.J12LN23), and Research funding from Jinan City (No.201401221 and No.20120109).

  12. Predicting Adolescents' Bullying Participation from Developmental Trajectories of Social Status and Behavior.

    PubMed

    Pouwels, J Loes; Salmivalli, Christina; Saarento, Silja; van den Berg, Yvonne H M; Lansu, Tessa A M; Cillessen, Antonius H N

    2017-03-28

    The aim of this study was to determine how trajectory clusters of social status (social preference and perceived popularity) and behavior (direct aggression and prosocial behavior) from age 9 to age 14 predicted adolescents' bullying participant roles at age 16 and 17 (n = 266). Clusters were identified with multivariate growth mixture modeling (GMM). The findings showed that participants' developmental trajectories of social status and social behavior across childhood and early adolescence predicted their bullying participant role involvement in adolescence. Practical implications and suggestions for further research are discussed. © 2017 The Authors. Child Development published by Wiley Periodicals, Inc. on behalf of Society for Research in Child Development.

  13. VDES J2325-5229 a z = 2.7 gravitationally lensed quasar discovered using morphology-independent supervised machine learning

    NASA Astrophysics Data System (ADS)

    Ostrovski, Fernanda; McMahon, Richard G.; Connolly, Andrew J.; Lemon, Cameron A.; Auger, Matthew W.; Banerji, Manda; Hung, Johnathan M.; Koposov, Sergey E.; Lidman, Christopher E.; Reed, Sophie L.; Allam, Sahar; Benoit-Lévy, Aurélien; Bertin, Emmanuel; Brooks, David; Buckley-Geer, Elizabeth; Carnero Rosell, Aurelio; Carrasco Kind, Matias; Carretero, Jorge; Cunha, Carlos E.; da Costa, Luiz N.; Desai, Shantanu; Diehl, H. Thomas; Dietrich, Jörg P.; Evrard, August E.; Finley, David A.; Flaugher, Brenna; Fosalba, Pablo; Frieman, Josh; Gerdes, David W.; Goldstein, Daniel A.; Gruen, Daniel; Gruendl, Robert A.; Gutierrez, Gaston; Honscheid, Klaus; James, David J.; Kuehn, Kyler; Kuropatkin, Nikolay; Lima, Marcos; Lin, Huan; Maia, Marcio A. G.; Marshall, Jennifer L.; Martini, Paul; Melchior, Peter; Miquel, Ramon; Ogando, Ricardo; Plazas Malagón, Andrés; Reil, Kevin; Romer, Kathy; Sanchez, Eusebio; Santiago, Basilio; Scarpine, Vic; Sevilla-Noarbe, Ignacio; Soares-Santos, Marcelle; Sobreira, Flavia; Suchyta, Eric; Tarle, Gregory; Thomas, Daniel; Tucker, Douglas L.; Walker, Alistair R.

    2017-03-01

    We present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift z_s = 2.74 and image separation of 2.9 arcsec, lensed by a foreground z_l = 0.40 elliptical galaxy. Since optical observations of gravitationally lensed quasars show the lens system as a superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology-independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian mixture model (GMM) supervised machine learning. Using this technique and gi multicolour photometric observations from the Dark Energy Survey (DES), near-IR JK photometry from the VISTA Hemisphere Survey (VHS) and WISE mid-IR photometry, we have identified a candidate system with two catalogue components with i_AB = 18.61 and i_AB = 20.44, comprising an elliptical galaxy and two blue point sources. Spectroscopic follow-up with the NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission-line redshift of z = 2.739 ± 0.003 and a foreground early-type galaxy with z = 0.400 ± 0.002. We model the system as a singular isothermal ellipsoid and find an Einstein radius θ_E ≈ 1.47 arcsec, enclosed mass M_enc ≈ 4 × 10^11 M⊙ and a time delay of ≈52 d. The relatively wide separation, month-scale time delay and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1.
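    The GMM-based photometric selection above amounts to modeling each class's colour distribution and classifying new objects by likelihood. A heavily simplified sketch follows, with a single Gaussian per class and invented 2-D "colour" distributions (the real pipeline uses full mixtures over DES+VHS+WISE colours):

```python
import numpy as np

rng = np.random.default_rng(6)

# Invented 2-D colour distributions for two training classes.
n = 500
quasar = rng.multivariate_normal([0.2, 0.5], [[0.04, 0.0], [0.0, 0.04]], n)
star = rng.multivariate_normal([1.0, 1.2], [[0.09, 0.02], [0.02, 0.09]], n)

def fit_gauss(X):
    """Fit a single Gaussian (mean, covariance) to the training colours."""
    return X.mean(axis=0), np.cov(X.T)

def log_density(X, mu, cov):
    """Log of the bivariate normal density at each row of X."""
    d = X - mu
    inv = np.linalg.inv(cov)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (np.einsum('ni,ij,nj->n', d, inv, d) + logdet + 2 * np.log(2 * np.pi))

params = {c: fit_gauss(X) for c, X in [("quasar", quasar), ("star", star)]}

# Classify held-out quasar-like objects by the larger class likelihood
# (equal priors assumed).
test_q = rng.multivariate_normal([0.2, 0.5], [[0.04, 0], [0, 0.04]], 200)
scores = {c: log_density(test_q, *p) for c, p in params.items()}
pred_quasar = scores["quasar"] > scores["star"]
print(round(pred_quasar.mean(), 3))
```
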

  14. Bulk silica transmission grating made by reactive ion etching for NIR space instruments

    NASA Astrophysics Data System (ADS)

    Caillat, Amandine; Pascal, Sandrine; Tisserand, Stéphane; Dohlen, Kjetil; Grange, Robert; Sauget, Vincent; Gautier, Sophie

    2014-07-01

    A GRISM, made of a grating on a prism, allows combining imaging and spectroscopy of the same field of view with the same optical system and detector, thus simplifying the instrument concept. New GRISM designs impose technical specifications that are difficult to reach with classical grating manufacturing processes: large useful aperture (>100 mm), low groove frequency (<30 g/mm), small blaze angle (<3°) and, last but not least, line curvature allowing wavefront corrections. In addition, gratings are commonly made of resin, which may not be suitable for withstanding the extreme space environment. Therefore, within the framework of an R&D project funded by CNES, SILIOS Technologies developed a new resin-free grating manufacturing process and realized a first 80 mm diameter prototype, which was optically tested at LAM. We present detailed specifications of this resin-free grating, the manufacturing process, the optical setups and models for optical performance verification, and the very encouraging results obtained on the first 80 mm diameter grating prototype: >80% transmitted efficiency, <30 nm RMS wavefront error, and groove shape and roughness very close to theory and uniform over the useful aperture.

  15. Trajectories of depression in adults with newly diagnosed type 1 diabetes: results from the German Multicenter Diabetes Cohort Study.

    PubMed

    Kampling, Hanna; Petrak, Frank; Farin, Erik; Kulzer, Bernd; Herpertz, Stephan; Mittag, Oskar

    2017-01-01

    There is a paucity of longitudinal data on type 1 diabetes and depression, especially in adults. The present study prospectively analysed trajectories of depressive symptoms in adults during the first 5 years of living with type 1 diabetes. We aimed to identify distinct trajectories of depressive symptoms and to examine how they affect diabetes outcome. We reanalysed data from a prospective multicentre observational cohort study including 313 adults with newly diagnosed type 1 diabetes. At baseline and in annual postal surveys over 5 consecutive years, we gathered patient characteristics and behavioural and psychosocial data (e.g. Symptom Checklist-90-R [SCL-90-R]). Medical data (e.g. HbA1c levels) was obtained from the treating physicians. We applied growth mixture modelling (GMM) to identify distinct trajectories of depression over time. Five years after diagnosis, 7.8% (n = 20) of patients were moderately depressed and 10.2% (n = 26) were severely depressed. GMM statistics identified three possible models of trajectories (class 1, 'no depressive symptoms'; class 2, 'worsening depressive symptoms that improve after 2 years'; class 3, 'worsening depressive symptoms'). Severity of depression symptoms at baseline (subscale of the SCL-90-R questionnaire) significantly predicted membership of classes 2 and 3 vs class 1. After 5 years, higher HbA1c values were detected in class 3 patients (mean = 8.2%, 66 mmol/mol) compared with class 1 and class 2 (both: mean = 7.2%, 55 mmol/mol). We identified distinct trajectories of depressive symptoms that are also relevant for diabetes outcome. Patients with worsening depressive symptoms over time exhibited poor glycaemic control after the first 5 years of living with diabetes. They also exhibited a reduced quality of life and increased diabetes-related distress.

  16. An Efficient and Robust Moving Shadow Removal Algorithm and Its Applications in ITS

    NASA Astrophysics Data System (ADS)

    Lin, Chin-Teng; Yang, Chien-Ting; Shou, Yu-Wen; Shen, Tzu-Kuei

    2010-12-01

    We propose an efficient algorithm for removing shadows of moving vehicles caused by non-uniform distributions of light reflections in the daytime. This paper presents a brand-new and complete structure for feature combination and analysis, for orienting and labeling moving shadows, so as to extract the defined foreground objects more easily in each snapshot of the original video files, which are acquired in real traffic situations. Moreover, we make use of a Gaussian mixture model (GMM) for background removal and detection of moving shadows in our tested images, and we define two indices for characterizing non-shadowed regions: one indicates the characteristics of lines, and the other is derived from the gray-scale information of the images, which helps us to build a newly defined set of darkening ratios (modified darkening factors) based on Gaussian models. To prove the effectiveness of our moving shadow algorithm, we apply it to a practical ITS (Intelligent Transportation System) application: traffic flow detection by vehicle counting. Our algorithm achieves a fast processing speed of 13.84 ms/frame and improves the counting accuracy rate by 4%–10% for our three tested videos.
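    A per-pixel background model of the kind used above can be sketched in a few lines. For brevity this uses a single running Gaussian per pixel rather than a full mixture, and the scene, learning rate, and threshold are all invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# A 20x20 grayscale scene: static background plus a bright "vehicle" patch
# appearing in the final frame (synthetic stand-in for traffic video).
H = W = 20
background = rng.normal(100, 2, (H, W))
frames = [background + rng.normal(0, 2, (H, W)) for _ in range(50)]
moving = background.copy()
moving[5:10, 5:10] += 80            # the foreground object
frames.append(moving)

# Per-pixel running Gaussian background model: a single-Gaussian
# simplification of the per-pixel mixture used in MOG-style subtraction.
alpha = 0.05                        # learning rate
mean = frames[0].copy()
var = np.full((H, W), 16.0)
for f in frames[:-1]:
    mean = (1 - alpha) * mean + alpha * f
    var = (1 - alpha) * var + alpha * (f - mean) ** 2

# Pixels more than 2.5 sigma from the background model are foreground.
last = frames[-1]
fg = np.abs(last - mean) > 2.5 * np.sqrt(var)
print(int(fg[5:10, 5:10].sum()), int(fg.sum()))
```

The detected foreground mask is what a shadow-removal stage (like the darkening-ratio analysis above) would then refine.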

  17. Predicting Driver Behavior during the Yellow Interval Using Video Surveillance

    PubMed Central

    Li, Juan; Jia, Xudong; Shao, Chunfu

    2016-01-01

    At a signalized intersection, drivers must make a stop/go decision at the onset of the yellow signal. Incorrect decisions would lead to red light running (RLR) violations or crashes. This study aims to predict drivers’ stop/go decisions and RLR violations during yellow intervals. Traffic data such as vehicle approaching speed, acceleration, distance to the intersection, and occurrence of RLR violations are gathered by a Vehicle Data Collection System (VDCS). An enhanced Gaussian Mixture Model (GMM) is used to extract moving vehicles from target lanes, and the Kalman Filter (KF) algorithm is utilized to acquire vehicle trajectories. The data collected from the VDCS are further analyzed by a sequential logit model, and the relationship between drivers’ stop/go decisions and RLR violations is identified. The results indicate that the distance of vehicles to the stop line at the onset of the yellow signal is an important predictor for both drivers’ stop/go decisions and RLR violations. In addition, vehicle approaching speed is a contributing factor for stop/go decisions. Furthermore, the accelerations of vehicles after the onset of the yellow signal are positively related to RLR violations. The findings of this study can be used to predict the probability of drivers’ RLR violations and improve traffic safety at signalized intersections. PMID:27929447
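    The trajectory-acquisition step above (Kalman filtering of noisy per-frame vehicle positions) can be sketched with a constant-velocity filter. The motion model, noise levels, and simulated detections are assumptions for illustration, not the authors' settings:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated vehicle approaching the stop line at 15 m/s, with noisy per-frame
# position detections (stand-ins for GMM-extracted vehicle centroids).
dt, n = 0.1, 50
true_pos = 100 - 15 * dt * np.arange(n)
z = true_pos + rng.normal(0, 2.0, n)      # noisy measurements

# Constant-velocity Kalman filter: state x = [position, velocity].
F = np.array([[1, dt], [0, 1]])           # state transition
Hm = np.array([[1.0, 0.0]])               # we observe position only
Q = np.diag([0.01, 0.01])                 # process noise
R = np.array([[4.0]])                     # measurement noise (sd = 2 m)

x = np.array([z[0], 0.0])
P = np.diag([4.0, 25.0])
est = []
for zi in z:
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = Hm @ P @ Hm.T + R
    K = P @ Hm.T @ np.linalg.inv(S)
    x = x + K @ (np.array([zi]) - Hm @ x)
    P = (np.eye(2) - K @ Hm) @ P
    est.append(x.copy())
est = np.array(est)

# The filter smooths the detections and recovers the approach speed (~ -15 m/s),
# which is exactly the quantity the stop/go model needs.
kf_err = np.abs(est[:, 0] - true_pos).mean()
print(round(kf_err, 2), round(est[-1, 1], 1))
```
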

  18. Predicting Driver Behavior during the Yellow Interval Using Video Surveillance.

    PubMed

    Li, Juan; Jia, Xudong; Shao, Chunfu

    2016-12-06

    At a signalized intersection, drivers must make a stop/go decision at the onset of the yellow signal. Incorrect decisions would lead to red light running (RLR) violations or crashes. This study aims to predict drivers' stop/go decisions and RLR violations during yellow intervals. Traffic data such as vehicle approaching speed, acceleration, distance to the intersection, and occurrence of RLR violations are gathered by a Vehicle Data Collection System (VDCS). An enhanced Gaussian Mixture Model (GMM) is used to extract moving vehicles from target lanes, and the Kalman Filter (KF) algorithm is utilized to acquire vehicle trajectories. The data collected from the VDCS are further analyzed by a sequential logit model, and the relationship between drivers' stop/go decisions and RLR violations is identified. The results indicate that the distance of vehicles to the stop line at the onset of the yellow signal is an important predictor for both drivers' stop/go decisions and RLR violations. In addition, vehicle approaching speed is a contributing factor for stop/go decisions. Furthermore, the accelerations of vehicles after the onset of the yellow signal are positively related to RLR violations. The findings of this study can be used to predict the probability of drivers' RLR violations and improve traffic safety at signalized intersections.

  19. A Descriptive, Retrospective Study of Using an Oblique Downward-design Gluteus Maximus Myocutaneous Flap for Reconstruction of Ischial Pressure Ulcers.

    PubMed

    Chou, Chang-Yi; Sun, Yu-Shan; Shih, Yu-Jen; Tzeng, Yuan-Sheng; Chang, Shun-Cheng; Dai, Niann-Tzyy; Lin, Chin-Ta

    2018-03-01

    Despite advances in reconstruction techniques, ischial pressure ulcers continue to present a challenge for the plastic surgeon. The purpose of this retrospective study was to evaluate outcomes of using an oblique downward gluteus maximus myocutaneous (GMM) flap for coverage of grade IV ischial ulcers. Data regarding defect size, flap size, operation time, duration of wound healing, and surgical outcome were abstracted from the medical records of patients whose ischial pressure ulcers had been reconstructed using GMM island flaps between January 2010 and December 2015. The 22 patients comprised 15 men and 7 women with a mean age of 52 (range 16-81) years. Twenty (20) had paraplegia, 6 had a recurrent ischial ulcer, 2 were bedridden following a cerebrovascular accident, 1 had a myelomeningocele status post operation, and 19 were spinal cord injury patients. Follow-up time ranged from 6 to 40 months. Pressure ulcer size ranged from 3 cm x 2 cm to 10 cm x 5 cm (average 22.3 cm2). The average flap size was 158 cm2 (15.9 cm x 9.7 cm); the largest was 286 cm2 (22 cm x 13 cm). The operating time ranged from 52 minutes to 110 minutes (average, 80 minutes). In 2 cases, wound dehiscence occurred but completely healed after resuturing. One (1) ischial pressure ulcer recurred 6 months following surgery and was successfully covered with a pedicled anterolateral thigh flap. No recurrences or problems were observed in the remaining 20 patients. Time to complete wound healing ranged from 14 to 24 days (average 17.8 days). Treatment of ischial pressure ulcers with GMM flaps allowed for an easy, simple procedure that provided the adequate thickness of soft tissue needed to cover the bony prominence, fill dead space, and cover the lesion. This technique was a reliable and safe reconstructive modality for the management of ischial pressure ulcers, even in recurrent cases.

  20. Principles for the risk assessment of genetically modified microorganisms and their food products in the European Union.

    PubMed

    Aguilera, Jaime; Gomes, Ana R; Olaru, Irina

    2013-10-01

    Genetically modified microorganisms (GMMs) are involved in the production of a variety of food and feed. The release and consumption of these products can raise questions about health and environmental safety. Therefore, the European Union has different legislative instruments in place in order to ensure the safety of such products. A key requirement is to conduct a scientific risk assessment as a prerequisite for the product to be placed on the market. This risk assessment is performed by the European Food Safety Authority (EFSA), through its Scientific Panels. The EFSA Panel on Genetically Modified Organisms has published complete and comprehensive guidance for the risk assessment of GMMs and their products for food and/or feed use, in which the strategy and the criteria to conduct the assessment are explained, as well as the scientific data to be provided in applications for regulated products. This Guidance follows the main risk assessment principles developed by various international organisations (Codex Alimentarius, 2003; OECD, 2010). The assessment considers two aspects: the characterisation of the GMM and the possible effects of its modification with respect to safety, and the safety of the product itself. Due to the existing diversity of GMMs and their products, a categorisation is recommended to optimise the assessment and to determine the extent of the required data. The assessment starts with a comprehensive characterisation of the GMM, covering the recipient/parental organism, the donor(s) of the genetic material, the genetic modification, and the final GMM and its phenotype. Evaluation of the composition, potential toxicity and/or allergenicity, nutritional value and environmental impact of the product constitute further cornerstones of the process. The outcome of the assessment is reflected in a scientific opinion which indicates whether the product raises any safety issues. 
This opinion is taken into account by the different European regulatory authorities prior to a decision regarding authorisation to commercialise the product. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. A Copula-Based Conditional Probabilistic Forecast Model for Wind Power Ramps

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hodge, Brian S; Krishnan, Venkat K; Zhang, Jie

Efficient management of wind ramping characteristics can significantly reduce wind integration costs for balancing authorities. By considering the stochastic dependence of wind power ramp (WPR) features, this paper develops a conditional probabilistic wind power ramp forecast (cp-WPRF) model based on Copula theory. The WPRs dataset is constructed by extracting ramps from a large dataset of historical wind power. Each WPR feature (e.g., rate, magnitude, duration, and start-time) is separately forecasted by considering the coupling effects among different ramp features. To accurately model the marginal distributions with a copula, a Gaussian mixture model (GMM) is adopted to characterize the WPR uncertainty and features. The Canonical Maximum Likelihood (CML) method is used to estimate the parameters of the multivariate copula. The optimal copula model is chosen from each copula family based on the Bayesian information criterion (BIC). Finally, the best condition-based cp-WPRF model is determined by predictive interval (PI) based evaluation metrics. Numerical simulations on publicly available wind power data show that the developed copula-based cp-WPRF model can predict WPRs with a high level of reliability and sharpness.
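The copula idea at the core of this record can be sketched in a few lines: a Gaussian copula generates dependent uniforms that are then pushed through arbitrary marginals. Everything below is illustrative, not the paper's code; the exponential marginals and the correlation value are invented stand-ins (the paper fits GMM marginals and estimates the copula by CML).

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)
rho = 0.8                                  # assumed dependence between two ramp features
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=5000)

phi = np.vectorize(lambda t: 0.5 * (1 + erf(t / np.sqrt(2))))  # standard normal CDF
u = phi(z)                                 # Gaussian-copula uniforms in (0, 1)^2

# Push through illustrative exponential marginals for two ramp features
magnitude = -np.log1p(-u[:, 0])            # mean-1 exponential "ramp magnitude"
duration = 2.0 * -np.log1p(-u[:, 1])       # mean-2 exponential "ramp duration"
r = np.corrcoef(magnitude, duration)[0, 1]
print(round(r, 2))                         # dependence survives the marginal transforms
```

The point of the sketch is that the dependence structure (copula) and the marginal distributions are modelled independently, which is exactly what lets the paper swap in GMM marginals per feature.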

  2. Promoting the development of resilient academic functioning in maltreated children.

    PubMed

    Holmes, Megan R; Yoon, Susan; Berg, Kristen A; Cage, Jamie L; Perzynski, Adam T

    2018-01-01

    This study examined (a) the extent of heterogeneity in the patterns of developmental trajectories of language development and academic functioning in children who have experienced maltreatment, (b) how maltreatment type (i.e., neglect or physical abuse) and timing of abuse explained variation in developmental trajectories, and (c) the extent to which individual protective factors (i.e., preschool attendance, prosocial skills), relationship protective factors (i.e., parental warmth, absence of past-year depressive episode, cognitive/verbal responsiveness) and community protective factors (i.e., neighborhood safety) promoted the development of resilient language/academic functioning trajectories. Longitudinal data analyses were conducted using cohort sequential Growth Mixture Model (CS-GMM) with a United States national representative sample of children reported to Child Protective Services (n=1,776). Five distinct developmental trajectories from birth to age 10 were identified including two resilient groups. Children who were neglected during infancy/toddlerhood or physically abused during preschool age were more likely to be in the poorer language/academic functioning groups (decreasing/recovery/decreasing and high decreasing) than the resilient high stable group. Child prosocial skills, caregiver warmth, and caregiver cognitive stimulation significantly predicted membership in the two resilient academic functioning groups (low increasing and high stable), after controlling for demographics and child physical abuse and neglect. Results suggest that it is possible for a maltreated child to successfully achieve competent academic functioning, despite the early adversity, and identifies three possible avenues of intervention points. This study also makes a significant contribution to the field of child development research through the novel use of CS-GMM, which has implications for future longitudinal data collection methodology. Copyright © 2017 Elsevier Ltd. 
All rights reserved.

  3. A novel content-based medical image retrieval method based on query topic dependent image features (QTDIF)

    NASA Astrophysics Data System (ADS)

    Xiong, Wei; Qiu, Bo; Tian, Qi; Mueller, Henning; Xu, Changsheng

    2005-04-01

Medical image retrieval is still mainly a research domain with a large variety of applications and techniques. With the ImageCLEF 2004 benchmark, an evaluation framework has been created that includes a database, query topics and ground truth data. Eleven systems (with a total of more than 50 runs) compared their performance in various configurations. The results show that there is not any one feature that performs well on all query tasks. Key to successful retrieval is rather the selection of features and feature weights based on a specific set of input features, thus on the query task. In this paper we propose a novel method based on query topic dependent image features (QTDIF) for content-based medical image retrieval. These feature sets are designed to capture both inter-category and intra-category statistical variations to achieve good retrieval performance in terms of recall and precision. We have used Gaussian Mixture Models (GMM) and blob representation to model medical images and construct the proposed novel QTDIF for CBIR. Finally, trained multi-class support vector machines (SVM) are used for image similarity ranking. The proposed methods have been tested over the Casimage database with around 9000 images, for the 26 given query topics used in ImageCLEF 2004. The retrieval performance has been compared with the medGIFT system, which is based on the GNU Image Finding Tool (GIFT). The experimental results show that the proposed QTDIF-based CBIR can provide significantly better performance than systems based on general features only.

  4. Tracking and people counting using Particle Filter Method

    NASA Astrophysics Data System (ADS)

    Sulistyaningrum, D. R.; Setiyono, B.; Rizky, M. S.

    2018-03-01

In recent years, technology has developed rapidly, especially in the field of object tracking; the task becomes considerably harder when the objects are people and their number is large. The purpose of this research is to apply the Particle Filter method for tracking and counting people in a given area. Tracking people is difficult when obstacles arise, one of which is occlusion. The stages of the tracking and people-counting scheme in this study include pre-processing, segmentation using a Gaussian Mixture Model (GMM), tracking using a particle filter, and counting based on centroids. The Particle Filter method uses the motion estimate included in the model. The test results show that tracking and people counting can be done well, with average accuracies of 89.33% and 77.33%, respectively, over six test videos. In the tracking process, results remain good under partial occlusion and no occlusion.
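The GMM segmentation stage of such pipelines can be illustrated with a deliberately simplified sketch: one Gaussian per pixel rather than a full mixture, and tiny synthetic frames rather than video. All names and parameter values here are invented for the illustration.

```python
import numpy as np

def update_background(mean, var, frame, lr=0.05, k=2.5):
    """One step of a per-pixel Gaussian background model (a single-component
    simplification of GMM background subtraction). Pixels deviating more than
    k standard deviations are flagged foreground and excluded from the update."""
    fg = np.abs(frame - mean) > k * np.sqrt(var)            # foreground mask
    mean = np.where(fg, mean, (1 - lr) * mean + lr * frame)
    var = np.where(fg, var, (1 - lr) * var + lr * (frame - mean) ** 2)
    return mean, var, fg

rng = np.random.default_rng(1)
mean = np.full((8, 8), 100.0)
var = np.full((8, 8), 4.0)
for _ in range(20):                                         # learn a static scene
    frame = 100 + rng.normal(0, 1, (8, 8))
    mean, var, _ = update_background(mean, var, frame)

frame = 100 + rng.normal(0, 1, (8, 8))
frame[2:5, 2:5] = 200                                       # a bright "person" enters
_, _, fg = update_background(mean, var, frame)
print(int(fg.sum()))                                        # roughly the 9 intruder pixels
```

A real tracker would fit several Gaussians per pixel and then hand the foreground blobs to the particle filter; this sketch only shows why a per-pixel statistical model separates moving people from a static background.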

  5. Spot counting on fluorescence in situ hybridization in suspension images using Gaussian mixture model

    NASA Astrophysics Data System (ADS)

    Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin

    2015-03-01

Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. Rotational positioning of cells can occur, leading to overlapping spots and discordance in spot counts. To address counting errors caused by overlapping spots, this study proposes a Gaussian Mixture Model based classification method. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Via a Random Forest classifier, the results show that the proposed method is able to detect closely overlapping spots that cannot be separated by existing image-segmentation-based spot detection methods, and experiments show a significant improvement in spot counting accuracy.
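A minimal sketch of the underlying idea, choosing a spot count by comparing information criteria of fitted Gaussian mixtures, can be run on synthetic 2-D points (not FISH-IS images; the spot positions, widths, and counts are made up):

```python
# Two closely overlapping synthetic "spots"; BIC across candidate mixture
# sizes recovers the spot count even where blob segmentation would merge them.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
spots = np.vstack([rng.normal([0.0, 0.0], 0.5, (150, 2)),
                   rng.normal([1.5, 0.0], 0.5, (150, 2))])

bic = {k: GaussianMixture(k, random_state=0).fit(spots).bic(spots)
       for k in (1, 2, 3)}
n_spots = min(bic, key=bic.get)            # smallest BIC wins
print(n_spots)
```

The paper goes further, feeding AIC/BIC values as features into a Random Forest; the sketch only shows why those criteria carry information about overlapping-spot multiplicity.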

  6. Assessment of Severe Apnoea through Voice Analysis, Automatic Speech, and Speaker Recognition Techniques

    NASA Astrophysics Data System (ADS)

    Fernández Pozo, Rubén; Blanco Murillo, Jose Luis; Hernández Gómez, Luis; López Gonzalo, Eduardo; Alcázar Ramírez, José; Toledano, Doroteo T.

    2009-12-01

    This study is part of an ongoing collaborative effort between the medical and the signal processing communities to promote research on applying standard Automatic Speech Recognition (ASR) techniques for the automatic diagnosis of patients with severe obstructive sleep apnoea (OSA). Early detection of severe apnoea cases is important so that patients can receive early treatment. Effective ASR-based detection could dramatically cut medical testing time. Working with a carefully designed speech database of healthy and apnoea subjects, we describe an acoustic search for distinctive apnoea voice characteristics. We also study abnormal nasalization in OSA patients by modelling vowels in nasal and nonnasal phonetic contexts using Gaussian Mixture Model (GMM) pattern recognition on speech spectra. Finally, we present experimental findings regarding the discriminative power of GMMs applied to severe apnoea detection. We have achieved an 81% correct classification rate, which is very promising and underpins the interest in this line of inquiry.
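The GMM pattern-recognition step described in this record can be illustrated with a toy two-class likelihood classifier. The features below are synthetic stand-ins for real speech spectra, and the class separation and dimensions are invented; only the classification mechanism (one mixture per class, decide by log-likelihood) follows the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Hypothetical 4-dimensional "spectral" features for two speaker groups
healthy = rng.normal(0.0, 1.0, (200, 4))
apnoea = rng.normal(1.5, 1.0, (200, 4))

gm_h = GaussianMixture(2, random_state=0).fit(healthy)   # one GMM per class
gm_a = GaussianMixture(2, random_state=0).fit(apnoea)

test = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
                  rng.normal(1.5, 1.0, (50, 4))])
labels = np.array([0] * 50 + [1] * 50)
# Classify each sample by which class model assigns it higher log-likelihood
pred = (gm_a.score_samples(test) > gm_h.score_samples(test)).astype(int)
acc = (pred == labels).mean()
print(acc)
```

In a real system the features would be cepstral coefficients extracted from speech, and priors or a decision threshold would be tuned for the screening application.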

  7. The Impact of Employment during School on College Student Academic Performance. NBER Working Paper No. 14006

    ERIC Educational Resources Information Center

    DeSimone, Jeffrey S.

    2008-01-01

    This paper estimates the effect of paid employment on grades of full-time, four-year students from four nationally representative cross sections of the Harvard College Alcohol Study administered during 1993-2001. The relationship could be causal in either direction and is likely contaminated by unobserved heterogeneity. Two-stage GMM regressions…

  8. VDES J2325-5229 a z = 2.7 gravitationally lensed quasar discovered using morphology-independent supervised machine learning

    DOE PAGES

    Ostrovski, Fernanda; McMahon, Richard G.; Connolly, Andrew J.; ...

    2016-11-17

In this paper, we present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift z_s = 2.74 and image separation of 2.9 arcsec, lensed by a foreground z_l = 0.40 elliptical galaxy. Since optical observations of gravitationally lensed quasars show the lens system as a superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology-independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian Mixture Model (GMM) supervised machine learning. Using this technique and gi multicolour photometric observations from the Dark Energy Survey (DES), near-IR JK photometry from the VISTA Hemisphere Survey (VHS), and WISE mid-IR photometry, we have identified a candidate system with two catalogue components with i_AB = 18.61 and i_AB = 20.44, comprising an elliptical galaxy and two blue point sources. Spectroscopic follow-up with the NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission-line redshift of z = 2.739 ± 0.003 and a foreground early-type galaxy with z = 0.400 ± 0.002. We model the system as a single isothermal ellipsoid and find an Einstein radius θ_E ~ 1.47 arcsec, enclosed mass M_enc ~ 4 × 10^11 M_⊙, and a time delay of ~52 d. Finally, the relatively wide separation, month-scale time delay, and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1.

  9. VDES J2325-5229 a z = 2.7 gravitationally lensed quasar discovered using morphology-independent supervised machine learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ostrovski, Fernanda; McMahon, Richard G.; Connolly, Andrew J.

In this paper, we present the discovery and preliminary characterization of a gravitationally lensed quasar with a source redshift z_s = 2.74 and image separation of 2.9 arcsec, lensed by a foreground z_l = 0.40 elliptical galaxy. Since optical observations of gravitationally lensed quasars show the lens system as a superposition of multiple point sources and a foreground lensing galaxy, we have developed a morphology-independent multi-wavelength approach to the photometric selection of lensed quasar candidates based on Gaussian Mixture Model (GMM) supervised machine learning. Using this technique and gi multicolour photometric observations from the Dark Energy Survey (DES), near-IR JK photometry from the VISTA Hemisphere Survey (VHS), and WISE mid-IR photometry, we have identified a candidate system with two catalogue components with i_AB = 18.61 and i_AB = 20.44, comprising an elliptical galaxy and two blue point sources. Spectroscopic follow-up with the NTT and the use of an archival AAT spectrum show that the point sources can be identified as a lensed quasar with an emission-line redshift of z = 2.739 ± 0.003 and a foreground early-type galaxy with z = 0.400 ± 0.002. We model the system as a single isothermal ellipsoid and find an Einstein radius θ_E ~ 1.47 arcsec, enclosed mass M_enc ~ 4 × 10^11 M_⊙, and a time delay of ~52 d. Finally, the relatively wide separation, month-scale time delay, and high redshift make this an ideal system for constraining the expansion rate beyond a redshift of 1.

  10. Robust Framework to Combine Diverse Classifiers Assigning Distributed Confidence to Individual Classifiers at Class Level

    PubMed Central

    Arshad, Sannia; Rho, Seungmin

    2014-01-01

We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes whilst identifying and filtering noisy training data. This noise-free data is then used to learn models for other classifiers such as GMM and SVM. A weight learning method is introduced to learn per-class weights for the different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method over its competitors, especially in the presence of class label noise and imbalanced classes. PMID:25295302

  11. Robust framework to combine diverse classifiers assigning distributed confidence to individual classifiers at class level.

    PubMed

    Khalid, Shehzad; Arshad, Sannia; Jabbar, Sohail; Rho, Seungmin

    2014-01-01

We have presented a classification framework that combines multiple heterogeneous classifiers in the presence of class label noise. An extension of m-Mediods based modeling is presented that generates models of the various classes whilst identifying and filtering noisy training data. This noise-free data is then used to learn models for other classifiers such as GMM and SVM. A weight learning method is introduced to learn per-class weights for the different classifiers to construct an ensemble. For this purpose, we applied a genetic algorithm to search for an optimal weight vector on which the classifier ensemble is expected to give the best accuracy. The proposed approach is evaluated on a variety of real-life datasets. It is also compared with existing standard ensemble techniques such as AdaBoost, Bagging, and Random Subspace Methods. Experimental results show the superiority of the proposed ensemble method over its competitors, especially in the presence of class label noise and imbalanced classes.

  12. A black hole quartet: New solutions and applications to string theory

    NASA Astrophysics Data System (ADS)

    Padi, Megha

In this thesis, we study a zoo of black hole solutions which help us connect string theory to the universe we live in. The intuition for how to attack fundamental problems can often be found in a toy model. In Chapter 2, we show that three-dimensional topologically massive gravity with a negative cosmological constant -ℓ^(-2) and coupling constant μ has "warped AdS3" solutions with SL(2, R) × U(1) isometry. For μℓ > 3, we show that certain discrete quotients of warped AdS3 lead to black holes. Their thermodynamics is consistent with the existence of a holographic dual CFT with central charges c_R = (15μ²ℓ² + 81)/(Gμ(μ²ℓ² + 27)) and c_L = 12μℓ²/(G(μ²ℓ² + 27)). The entropy of many supersymmetric black holes has been accounted for, but more realistic non-supersymmetric black holes have been largely overlooked. In Chapter 3, we derive new single-centered and multi-centered non-BPS black hole solutions for several four-dimensional models which, after Kaluza-Klein reduction, admit a description in terms of a sigma model with symmetric target space. In particular, we provide the exact solution with generic charges and asymptotic moduli in N=2 supergravity coupled to one vector multiplet. As it stands, the current formulation of string theory allows for an extremely large number of possible solutions (or vacua). We first analyze this landscape by looking for universal characteristics. In Chapter 4, we provide evidence for the conjecture that gravity is always the weakest force in any string compactification. We show that, in several examples arising in string theory, higher-derivative corrections always make extremal non-supersymmetric black holes lighter than the classical bound M/Q = 1. In Chapter 5, we construct novel black hole bound states, called orientiholes, that are T-dual to IIB orientifold compactifications. The gravitational entropy of such orientiholes provides an "experimental" estimate of the number of vacua in various sectors of the IIB landscape. 
Furthermore, basic physical properties of orientiholes map to (sometimes subtle) microscopic features, thus providing a useful alternative viewpoint on a number of issues arising in D-brane model building. We also suggest a relation to the topological string analogous to the OSV conjecture.

  13. 16 CFR § 1632.4 - Mattress test procedure.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... threads per square inch and fabric weight of 3.7±0.8 oz/yd2 (125±28 gm/m2). The size of the sheet or.... The cigarettes shall be positioned directly over the thread or in the depression created by the quilting process on the half of the test surface reserved for bare mattress tests. If the quilt design is...

  14. [Presentation of the Editor of Gaceta Médica de México].

    PubMed

    Treviño-Becerra, Alejandro

La Gaceta Médica de México (GMM) is our official organ of scientific dissemination and reflects the values of the Academia Nacional de Medicina de México. It serves as a point of identity between Mexican physicians and the Academy's members, and it disseminates the scientific foundations of national medical practice.

  15. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power and are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs on power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start-time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
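The scenario-generation chain in this record (GMM error density, then a CDF, then inverse-transform Monte Carlo sampling) can be sketched numerically. The two-component mixture parameters below are assumed for illustration only, and the CDF is tabulated on a grid rather than deduced analytically:

```python
import numpy as np

rng = np.random.default_rng(4)
w, mu, sd = [0.7, 0.3], [0.0, 0.5], [0.1, 0.2]      # hypothetical GMM of forecast errors

x = np.linspace(-1.0, 1.5, 2001)
pdf = sum(wi * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
          for wi, m, s in zip(w, mu, sd))
cdf = np.cumsum(pdf)
cdf /= cdf[-1]                                      # numerical CDF on the grid

u = rng.uniform(size=100_000)                       # Monte Carlo uniforms
scenarios = np.interp(u, cdf, x)                    # inverse-transform sampling

print(round(scenarios.mean(), 2))                   # ≈ 0.7*0.0 + 0.3*0.5 = 0.15
```

Each sampled value plays the role of one forecast-error scenario; the paper then layers these onto the base forecast and extracts ramps with a swinging door algorithm.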

  16. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Qin; Florita, Anthony R; Krishnan, Venkat K

    2017-08-31

Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs on power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.

  17. Effect of e-health on medical expenditures of outpatients with lifestyle-related diseases.

    PubMed

    Minetaki, Kazunori; Akematsu, Yuji; Tsuji, Masatsugu

    2011-10-01

We analyzed the effect of e-health on medical expenditures in Nishi-aizu Town, Fukushima Prefecture, Japan, using panel data of medical expenditures for about 400 residents from 2002 to 2006. The Nishi-aizu Town system was introduced in 1994 and is still successfully operating as one of the longest running implementations of e-health in Japan. The town office maintains a register of receipts for medical expenditures paid by the National Health Insurance system and provides data on e-health users, allowing users and nonusers of e-health and their respective costs to be distinguished. Here, we focus on patients with lifestyle-related diseases such as high blood pressure, diabetes, stroke, and heart failure. This article postulates that e-health reduces medical expenditures via two mechanisms: decreasing travel expenses and preventing symptoms from worsening. The former implies that e-health monitoring allows patients at home to visit medical institutions less frequently, and the latter that the symptoms experienced by e-health users are less severe than those experienced by nonusers. We termed these the travel cost effect and opportunity cost effect, respectively. Chronic conditions tend not to occur singly, and many patients have more than one; for example, patients with high blood pressure or diabetes likely also have heart disease at the same time. This multiplicity of conditions hampers cost analysis. Among methodological issues, a number of recent empirical health analyses have focused on the endogeneity of explanatory variables. Here, we addressed this problem using the system generalized method of moments (GMM) estimator, which allows treatment not only of the endogeneity of explanatory variables but also of the dynamic relationships among variables that arise due to the chronic, time-lagged effect of lifestyle-related diseases on patients. 
We also examined a second important methodological problem related to reverse correlation between the medical expenditures of an outpatient and e-health and took sampling biases into consideration. We concluded that this control of endogeneity through system GMM confirms that the relationship between the medical expenditures of an outpatient and e-health shows causation rather than simple correlation and that e-health use, duration of e-health use, and frequency of e-health use can reduce outpatient medical expenditures for lifestyle-related diseases.

  18. Audio visual speech source separation via improved context dependent association model

    NASA Astrophysics Data System (ADS)

    Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz

    2014-12-01

    In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on mean square error (MSE) measure between estimated and target visual parameters. This function is minimized for estimation of the de-mixing vector/filters to separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We have also proposed a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to existing GMM-based model and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.

  19. A novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China

    PubMed Central

    Yang, Yanzheng; Zhu, Qiuan; Peng, Changhui; Wang, Han; Xue, Wei; Lin, Guanghui; Wen, Zhongming; Chang, Jie; Wang, Meng; Liu, Guobin; Li, Shiqing

    2016-01-01

Increasing evidence indicates that current dynamic global vegetation models (DGVMs) suffer from insufficient realism and are difficult to improve, particularly because they are built on plant functional type (PFT) schemes. Therefore, new approaches, such as plant trait-based methods, are urgently needed to replace PFT schemes when predicting the distribution of vegetation and investigating vegetation sensitivity. As an important step towards constructing next-generation DGVMs based on plant functional traits, we propose a novel approach for modelling vegetation distributions and analysing vegetation sensitivity through trait-climate relationships in China. The results demonstrated that a Gaussian mixture model (GMM) trained with a LMA-Nmass-LAI data combination yielded an accuracy of 72.82% in simulating vegetation distribution, providing more detailed parameter information regarding community structures and ecosystem functions. The new approach also performed well in analyses of vegetation sensitivity to different climatic scenarios. Although the trait-climate relationship is not the only candidate useful for predicting vegetation distributions and analysing climatic sensitivity, it sheds new light on the development of next-generation trait-based DGVMs. PMID:27052108

  20. Using Fuzzy Gaussian Inference and Genetic Programming to Classify 3D Human Motions

    NASA Astrophysics Data System (ADS)

    Khoury, Mehdi; Liu, Honghai

    This research introduces and builds on the concept of Fuzzy Gaussian Inference (FGI) (Khoury and Liu in Proceedings of UKCI, 2008 and IEEE Workshop on Robotic Intelligence in Informationally Structured Space (RiiSS 2009), 2009) as a novel way to build Fuzzy Membership Functions that map to hidden Probability Distributions underlying human motions. This method is now combined with a Genetic Programming Fuzzy rule-based system in order to classify boxing moves from natural human Motion Capture data. In this experiment, FGI alone is able to recognise seven different boxing stances simultaneously with an accuracy superior to a GMM-based classifier. Results seem to indicate that adding an evolutionary Fuzzy Inference Engine on top of FGI improves the accuracy of the classifier in a consistent way.

  1. Growth Mixture Modeling of Depression Symptoms Following Traumatic Brain Injury

    PubMed Central

    Gomez, Rapson; Skilbeck, Clive; Thomas, Matt; Slatyer, Mark

    2017-01-01

    Growth Mixture Modeling (GMM) was used to investigate the longitudinal trajectory of groups (classes) of depression symptoms, and how these groups were predicted by the covariates of age, sex, severity, and length of hospitalization following Traumatic Brain Injury (TBI) in a group of 1074 individuals (696 males, and 378 females) from the Royal Hobart Hospital, who sustained a TBI. The study began in late December 2003 and recruitment continued until early 2007. Ages ranged from 14 to 90 years, with a mean of 35.96 years (SD = 16.61). The study also examined the associations between the groups and causes of TBI. Symptoms of depression were assessed using the Hospital Anxiety and Depression Scale within 3 weeks of injury, and at 1, 3, 6, 12, and 24 months post-injury. The results revealed three groups: low, high, and delayed depression. In the low group depression scores remained below the clinical cut-off at all assessment points during the 24-months post-TBI, and in the high group, depression scores were above the clinical cut-off at all assessment points. The delayed group showed an increase in depression symptoms to 12 months after injury, followed by a return to initial assessment level during the following 12 months. Covariates were found to be differentially associated with the three groups. For example, relative to the low group, the high depression group was associated with more severe TBI, being female, and a shorter period of hospitalization. The delayed group also had a shorter period of hospitalization, were younger, and sustained less severe TBI. Our findings show considerable fluctuation of depression over time, and that a non-clinical level of depression at any one point in time does not necessarily mean that the person will continue to have non-clinical levels in the future. As we used GMM, we were able to show new findings and also bring clarity to contradictory past findings on depression and TBI. 
Consequently, we recommend the use of this approach in future studies in this area. PMID:28878700

  2. Three dimensional indoor positioning based on visible light with Gaussian mixture sigma-point particle filter technique

    NASA Astrophysics Data System (ADS)

    Gu, Wenjun; Zhang, Weizhi; Wang, Jin; Amini Kashani, M. R.; Kavehrad, Mohsen

    2015-01-01

Over the past decade, location-based services (LBS) have found wide application in indoor environments, such as large shopping malls, hospitals, warehouses, and airports. Current technologies provide a wide choice of available solutions, including radio-frequency identification (RFID), ultra-wideband (UWB), wireless local area network (WLAN), and Bluetooth. With the rapid development of light-emitting-diode (LED) technology, visible light communications (VLC) also bring a practical approach to LBS. As visible light has better immunity against multipath effects than radio waves, higher positioning accuracy is achieved. LEDs are utilized both for illumination and for positioning, keeping infrastructure cost relatively low. In this paper, an indoor positioning system using VLC is proposed, with LEDs as transmitters and photodiodes as receivers. The estimation algorithm is based on received-signal-strength (RSS) information collected from the photodiodes and on the trilateration technique. By appropriately making use of the characteristics of receiver movements and the properties of trilateration, estimation of three-dimensional (3-D) coordinates is attained. A filtering technique is applied to give the algorithm tracking capability, reaching higher accuracy compared to raw estimates. A Gaussian mixture sigma-point particle filter (GM-SPPF) is proposed for this 3-D system, which introduces the notion of the Gaussian Mixture Model (GMM). The number of particles in the filter is reduced by approximating the probability distribution with Gaussian components.
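The trilateration step common to such systems can be illustrated in 2-D with noise-free synthetic ranges (anchor positions and the receiver location are invented; real RSS-derived distances would be noisy and the paper works in 3-D with a particle filter on top):

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]])   # hypothetical LED positions
true_pos = np.array([1.0, 2.0])
d = np.linalg.norm(anchors - true_pos, axis=1)             # ideal range measurements

# Subtracting the first range equation ||x - a_i||^2 = d_i^2 from the others
# cancels the quadratic term and leaves a linear system in x.
A = 2 * (anchors[1:] - anchors[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(est.round(3))                                        # recovers [1. 2.]
```

With noisy ranges the least-squares solution becomes an estimate rather than exact, which is why a filter (here a GM-SPPF) is layered on top to smooth and track the receiver.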

  3. Image-based Modeling of PSF Deformation with Application to Limited Angle PET Data

    PubMed Central

    Matej, Samuel; Li, Yusheng; Panetta, Joseph; Karp, Joel S.; Surti, Suleman

    2016-01-01

    The point-spread-functions (PSFs) of reconstructed images can be deformed due to detector effects such as resolution blurring and parallax error, data acquisition geometry such as insufficient sampling or limited angular coverage in dual-panel PET systems, or reconstruction imperfections/simplifications. PSF deformation decreases quantitative accuracy, and its spatial variation lowers the consistency of lesion uptake measurement across the imaging field-of-view (FOV). This can be a significant problem with dual-panel PET systems even when using TOF data and image reconstruction models of the detector and data acquisition process. To correct for the spatially variant reconstructed PSF distortions, we propose to use an image-based resolution model (IRM) that includes such image PSF deformation effects. Originally the IRM was mostly used for approximating data resolution effects of standard PET systems with full angular coverage in a computationally efficient way, but recently it was also used to mitigate effects of simplified geometric projectors. Our work goes beyond this by including into the IRM reconstruction imperfections caused by a combination of the limited angular coverage, parallax errors, and any other (residual) deformation effects, and by testing it on challenging dual-panel data with strongly asymmetric and variable PSF deformations. We applied and tested these concepts using simulated data based on our design for a dedicated breast imaging geometry (B-PET) consisting of dual-panel, time-of-flight (TOF) detectors. We compared two image-based resolution models: (i) a simple spatially invariant approximation to PSF deformation, which captures only the general PSF shape through an elongated 3D Gaussian function, and (ii) a spatially variant model using a Gaussian mixture model (GMM) to more accurately capture the asymmetric PSF shape in images reconstructed from data acquired with the B-PET scanner geometry. 
Results demonstrate that while both IRMs decrease the overall uptake bias in the reconstructed image, the second one with the spatially variant and accurate PSF shape model is also able to ameliorate the spatially variant deformation effects to provide consistent uptake results independent of the lesion location within the FOV. PMID:27812222

  4. Classification of Anticipatory Signals for Grasp and Release from Surface Electromyography.

    PubMed

    Siu, Ho Chit; Shah, Julie A; Stirling, Leia A

    2016-10-25

    Surface electromyography (sEMG) is a technique for recording natural muscle activation signals, which can serve as control inputs for exoskeletons and prosthetic devices. Previous experiments have incorporated these signals using both classical and pattern-recognition control methods in order to actuate such devices. We used the results of an experiment incorporating grasp and release actions with object contact to develop an intent-recognition system based on Gaussian mixture models (GMM) and continuous-emission hidden Markov models (HMM) of sEMG data. We tested this system with data collected from 16 individuals using a forearm band with distributed sEMG sensors. The data contain trials with shifted band alignments to assess robustness to sensor placement. This study evaluated whether pattern-recognition-based methods could classify transient anticipatory sEMG signals in the presence of shifted sensor placement and object contact, and found that they could. With the best-performing classifier, the effect of label lengths in the training data was also examined. A mean classification accuracy of 75.96% was achieved through a unigram HMM method with five mixture components. Classification accuracy on different sub-movements was found to be limited by the length of the shortest sub-movement, which means that shorter sub-movements within dynamic sequences require larger training sets to be classified correctly. This classification of user intent is a potential control mechanism for a dynamic grasping task involving user contact with external objects and noise. Further work is required to test its performance as part of an exoskeleton controller, which involves contact with actuated external surfaces.
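A deliberately simplified stand-in for the GMM/HMM intent recognizer can illustrate likelihood-based classification of sEMG feature windows: here each class is modeled by a single diagonal Gaussian rather than a mixture-emission HMM, and the toy feature values (RMS-like energies per channel) are invented.

```python
import math
from statistics import mean, pvariance

def fit_class(frames):
    """Per-dimension mean/variance of a list of feature vectors."""
    dims = list(zip(*frames))
    return [(mean(d), pvariance(d) + 1e-6) for d in dims]  # variance floor

def loglik(frame, params):
    """Diagonal-Gaussian log-likelihood of one feature frame."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
               for x, (m, v) in zip(frame, params))

def classify(window, models):
    """Label whose summed frame log-likelihood over the window is highest."""
    return max(models, key=lambda c: sum(loglik(f, models[c]) for f in window))

# Invented training frames: 'grasp' high-energy, 'release' low-energy
grasp   = [[0.9, 0.8], [1.0, 0.7], [0.8, 0.9], [0.95, 0.85]]
release = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15], [0.1, 0.1]]
models = {"grasp": fit_class(grasp), "release": fit_class(release)}
label = classify([[0.85, 0.8], [0.9, 0.75]], models)
```

The paper's system adds mixture emissions and HMM temporal structure on top of this per-frame likelihood idea, which matters for the transient anticipatory signals it targets.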

  5. Classification of Anticipatory Signals for Grasp and Release from Surface Electromyography

    PubMed Central

    Siu, Ho Chit; Shah, Julie A.; Stirling, Leia A.

    2016-01-01

    Surface electromyography (sEMG) is a technique for recording natural muscle activation signals, which can serve as control inputs for exoskeletons and prosthetic devices. Previous experiments have incorporated these signals using both classical and pattern-recognition control methods in order to actuate such devices. We used the results of an experiment incorporating grasp and release actions with object contact to develop an intent-recognition system based on Gaussian mixture models (GMM) and continuous-emission hidden Markov models (HMM) of sEMG data. We tested this system with data collected from 16 individuals using a forearm band with distributed sEMG sensors. The data contain trials with shifted band alignments to assess robustness to sensor placement. This study evaluated whether pattern-recognition-based methods could classify transient anticipatory sEMG signals in the presence of shifted sensor placement and object contact, and found that they could. With the best-performing classifier, the effect of label lengths in the training data was also examined. A mean classification accuracy of 75.96% was achieved through a unigram HMM method with five mixture components. Classification accuracy on different sub-movements was found to be limited by the length of the shortest sub-movement, which means that shorter sub-movements within dynamic sequences require larger training sets to be classified correctly. This classification of user intent is a potential control mechanism for a dynamic grasping task involving user contact with external objects and noise. Further work is required to test its performance as part of an exoskeleton controller, which involves contact with actuated external surfaces. PMID:27792155

  6. Development of a Kalman Filter in the Gauss-Helmert Model for Reliability Analysis in Orientation Determination with Smartphone Sensors.

    PubMed

    Ettlinger, Andreas; Neuner, Hans; Burgess, Thomas

    2018-01-31

    The topic of indoor positioning and indoor navigation using observations from smartphone sensors is very challenging, as the determined trajectories can be subject to significant deviations compared to the route travelled in reality. In particular, the calculation of the direction of movement is the critical part of pedestrian positioning approaches such as Pedestrian Dead Reckoning ("PDR"). Due to distinct systematic effects in filtered trajectories, it can be assumed that there are systematic deviations present in the observations from smartphone sensors. This article has two aims: the first is to enable the estimation of partial redundancies for each observation as well as for observation groups. Partial redundancies are a measure of reliability, indicating how well systematic deviations can be detected in single observations used in PDR. The second aim is to analyze the behavior of partial redundancy by modifying the stochastic and functional model of the Kalman filter. The equations relating the observations to the orientation are condition equations, which do not exhibit the typical structure of the Gauss-Markov model ("GMM"), wherein the observations can be formulated as functions of the states. To calculate and analyze the partial redundancy of the observations from smartphone sensors used in PDR, the system equation and the measurement equation of a Kalman filter as well as the redundancy matrix need to be derived in the Gauss-Helmert model ("GHM"). These derivations are introduced in this article and lead to a novel Kalman filter structure based on condition equations, enabling reliability assessment of each observation.
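Partial redundancy has a compact form in the standard least-squares setting: r = diag(I - A (A^T P A)^(-1) A^T P), and the partial redundancies sum to the total redundancy n - u. The sketch below computes this for the simplest single-parameter Gauss-Markov model (repeated observation of one unknown); the article's contribution, the corresponding derivation inside a Kalman filter in the Gauss-Helmert model, is not reproduced here.

```python
def partial_redundancies(design, weights):
    """Partial redundancies r_i = diag(I - A (A^T P A)^{-1} A^T P) for a
    single-parameter Gauss-Markov model, where A is an n x 1 design 'matrix'
    given as a list and P = diag(weights)."""
    N = sum(w * a * a for a, w in zip(design, weights))  # A^T P A (scalar)
    # i-th diagonal element of A N^{-1} A^T P is a_i^2 * w_i / N
    return [1.0 - a * a * w / N for a, w in zip(design, weights)]

# Four direct observations of one unknown, equal weights:
# each observation carries redundancy 3/4, summing to n - u = 3.
r = partial_redundancies([1.0, 1.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0])
```

An observation with a partial redundancy near zero is poorly controlled by the others, so a systematic deviation in it is essentially undetectable, which is exactly the diagnostic the article extends to filtered smartphone observations.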

  7. An anomaly detection approach for the identification of DME patients using spectral domain optical coherence tomography images.

    PubMed

    Sidibé, Désiré; Sankar, Shrinivasan; Lemaître, Guillaume; Rastgoo, Mojdeh; Massich, Joan; Cheung, Carol Y; Tan, Gavin S W; Milea, Dan; Lamoureux, Ecosse; Wong, Tien Y; Mériaudeau, Fabrice

    2017-02-01

    This paper proposes a method for automatic classification of spectral domain OCT data for the identification of patients with retinal diseases such as Diabetic Macular Edema (DME). We address this issue as an anomaly detection problem and propose a method that not only allows the classification of the OCT volume, but also allows the identification of the individual diseased B-scans inside the volume. Our approach is based on modeling the appearance of normal OCT images with a Gaussian Mixture Model (GMM) and detecting abnormal OCT images as outliers. The classification of an OCT volume is based on the number of detected outliers. Experimental results with two different datasets show that the proposed method achieves a sensitivity and a specificity of 80% and 93% on the first dataset, and 100% and 80% on the second one. Moreover, the experiments show that the proposed method achieves better classification performance than other recently published works. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
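The anomaly-detection scheme can be sketched in miniature: model the appearance of normal B-scans, flag low-likelihood scans as outliers, and label the volume by the outlier count. For brevity this sketch uses a single Gaussian over one scalar feature instead of a full GMM over image descriptors, and the feature values and outlier budget are invented.

```python
from statistics import mean, pstdev

def fit_normal(features):
    """Mean/std of a scalar appearance feature over normal training B-scans."""
    return mean(features), pstdev(features) + 1e-9

def is_outlier(x, mu, sd, k=3.0):
    """Flag a B-scan as abnormal if its feature lies beyond k standard
    deviations of the normal model (a stand-in for a GMM likelihood test)."""
    return abs(x - mu) > k * sd

def classify_volume(volume, mu, sd, max_outliers=2):
    """Label an OCT volume by counting its outlying B-scans."""
    n = sum(is_outlier(x, mu, sd) for x in volume)
    return ("DME" if n > max_outliers else "normal"), n

# Invented scalar features from normal B-scans, then a test volume in which
# four B-scans deviate strongly from the normal appearance model
normal_train = [0.48, 0.52, 0.50, 0.49, 0.51, 0.50, 0.47, 0.53]
mu, sd = fit_normal(normal_train)
label, n_out = classify_volume([0.50, 0.90, 0.95, 0.49, 0.88, 0.92], mu, sd)
```

Besides the volume label, the per-scan outlier flags identify which individual B-scans appear diseased, mirroring the paper's two-level output.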

  8. Fuzzy membership functions for analysis of high-resolution CT images of diffuse pulmonary diseases.

    PubMed

    Almeida, Eliana; Rangayyan, Rangaraj M; Azevedo-Marques, Paulo M

    2015-08-01

    We propose the use of fuzzy membership functions to analyze images of diffuse pulmonary diseases (DPDs) based on fractal and texture features. The features were extracted from preprocessed regions of interest (ROIs) selected from high-resolution computed tomography images. The ROIs represent five different patterns of DPDs and normal lung tissue. A Gaussian mixture model (GMM) was constructed for each feature, with six Gaussians modeling the six patterns. Feature selection was performed and the GMMs of the five significant features were used. From the GMMs, fuzzy membership functions were obtained by a probability-possibility transformation, and further statistical analysis was performed. An average classification accuracy of 63.5% was obtained for the six classes. For four of the six classes, the classification accuracy was above 65%, and the best classification accuracy was 75.5% for one class. The use of fuzzy membership functions to assist in pattern classification is an alternative to deterministic approaches in exploring strategies for medical diagnosis.
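One standard probability-possibility transformation (a Dubois-Prade-style transform, pi_i = sum_j min(p_i, p_j)) can be sketched directly. The six probabilities below stand in for one feature value's class probabilities over the six tissue patterns and are invented; whether the paper uses this exact variant of the transform is an assumption.

```python
def prob_to_poss(p):
    """Probability-to-possibility transform pi_i = sum_j min(p_i, p_j).
    The most probable class always receives possibility 1."""
    return [sum(min(pi, pj) for pj in p) for pi in p]

# Invented class probabilities (sorted descending) for one feature value
# across the six tissue patterns (five DPD patterns + normal tissue)
probs = [0.40, 0.25, 0.15, 0.10, 0.07, 0.03]
poss = prob_to_poss(probs)
```

The resulting possibility degrees are what the paper treats as fuzzy membership values for the subsequent statistical analysis.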

  9. Distant Speech Recognition Using a Microphone Array Network

    NASA Astrophysics Data System (ADS)

    Nakano, Alberto Yoshihiro; Nakagawa, Seiichi; Yamamoto, Kazumasa

    In this work, spatial information consisting of the position and orientation angle of an acoustic source is estimated by an artificial neural network (ANN). The estimated position of a speaker in an enclosed space is used to refine the estimated time delays for a delay-and-sum beamformer, thus enhancing the output signal. On the other hand, the orientation angle is used to restrict the lexicon used in the recognition phase, assuming that the speaker faces a particular direction while speaking. To compensate for the effect of the transmission channel within a short frame-analysis window, a new cepstral mean normalization (CMN) method based on a Gaussian mixture model (GMM) is investigated; it shows better performance than conventional CMN for short utterances. The performance of the proposed method is evaluated through Japanese digit/command recognition experiments.
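For reference, conventional CMN (the baseline the proposed GMM-based variant improves on for short utterances) simply subtracts the per-coefficient mean over the utterance, cancelling a stationary convolutive channel, which is additive in the cepstral domain. The toy cepstra and channel offset below are invented.

```python
def cmn(cepstra):
    """Conventional cepstral mean normalization: subtract the per-coefficient
    mean over the utterance (list of frames, each a list of coefficients)."""
    n = len(cepstra)
    n_coef = len(cepstra[0])
    means = [sum(frame[k] for frame in cepstra) / n for k in range(n_coef)]
    return [[c - m for c, m in zip(frame, means)] for frame in cepstra]

# Toy 4-frame, 3-coefficient cepstrogram whose columns average to zero,
# plus a constant channel offset (convolutive channel -> additive in cepstrum)
clean = [[1.0, 0.0, -1.0],
         [0.0, 1.0, 0.0],
         [-1.0, 0.0, 1.0],
         [0.0, -1.0, 0.0]]
channel = [0.5, -0.2, 0.3]
observed = [[c + d for c, d in zip(frame, channel)] for frame in clean]
restored = cmn(observed)
```

The weakness addressed by the paper is visible here: with very few frames the utterance mean is a poor estimate of the channel, which is why a GMM-based mean estimate helps for short utterances.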

  10. Freedom and the State: Kant on Revolution and International Interface

    DTIC Science & Technology

    1999-11-11

    physical and psychological nature and many of them make us selfish and disagreeable beings. In fact, Kant contends that human beings are naturally...circumstances include a "defective education, bad company,... the viciousness of a natural disposition insensitive to shame,... levity and thoughtlessness...possibility of ethical action. Unless we are capable of choosing duty over inclinations, we cannot be held responsible for our actions (GMM 49). This

  11. Multiscale Data Assimilation

    DTIC Science & Technology

    2014-09-30

    good test 3 case to study the multiscale data assimilation capabilities of our GMM-DO filter. We also performed stochastic simulations with our DO...Morakot and internal tides. The ignorance score and Kullback - Leibler divergence were employed to measure the skill of the multiscale pdf forecasts...read off from the posterior of the augmented state vector. We implemented this new smoother and tested it using a 2D-in-space stochastic flow exiting

  12. Torsional strength of computer-aided design/computer-aided manufacturing-fabricated esthetic orthodontic brackets.

    PubMed

    Alrejaye, Najla; Pober, Richard; Giordano II, Russell

    2017-01-01

    To fabricate orthodontic brackets from esthetic materials and determine their fracture resistance during archwire torsion. Computer-aided design/computer-aided manufacturing technology (Cerec inLab, Sirona) was used to mill brackets with a 0.018 × 0.025-inch slot. Materials used were Paradigm MZ100 and Lava Ultimate resin composite (3M ESPE), Mark II feldspathic porcelain (Vita Zahnfabrik), and In-Ceram YZ zirconia (Vita Zahnfabrik). Ten brackets of each material were subjected to torque by a 0.018 × 0.025-inch stainless steel archwire (G&H) using a specially designed apparatus. The average moments and degrees of torsion necessary to fracture the brackets were determined and compared with those of commercially available alumina brackets, Mystique MB (Dentsply GAC). The YZ brackets were statistically significantly stronger than any other tested material in their resistance to torsion (P < .05). The mean torques at failure ranged from 3467 g.mm for Mark II to 11,902 g.mm for YZ. The mean torsion angles at failure ranged from 15.3° to 40.9°. Zirconia had the highest torsional strength among the tested esthetic brackets. Resistance of MZ100 and Lava Ultimate composite resin brackets to archwire torsion was comparable to commercially available alumina ceramic brackets.

  13. Limit Theory for Panel Data Models with Cross Sectional Dependence and Sequential Exogeneity.

    PubMed

    Kuersteiner, Guido M; Prucha, Ingmar R

    2013-06-01

    The paper derives a general Central Limit Theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow for the data to be cross sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of Maximum Likelihood (ML) estimators that can be analyzed using our CLT.

  14. Behavioural and Emotional Problems in Children and Educational Outcomes: A Dynamic Panel Data Analysis.

    PubMed

    Khanam, Rasheda; Nghiem, Son

    2018-05-01

    This study investigates the effects of behavioural and emotional problems in children on their educational outcomes using data from the Longitudinal Survey of Australian Children (LSAC). We contribute to the extant literature using a dynamic specification to test the hypothesis of knowledge accumulation. Further, we apply the system generalised method of moments (GMM) estimator to minimise biases due to unobserved factors. We find that mental disorders in children have a negative effect on the National Assessment Program-Literacy and Numeracy (NAPLAN) test scores. Among all mental disorders, having emotional problems is found to be the most influential, with a one standard deviation (SD) increase in emotional problems being associated with a 0.05 SD reduction in NAPLAN reading, writing and spelling; a 0.04 SD reduction in matrix reasoning and grammar; and a 0.03 SD reduction in NAPLAN numeracy.

  15. Student-oriented learning: an inquiry-based developmental biology lecture course.

    PubMed

    Malacinski, George M

    2003-01-01

    In this junior-level undergraduate course, developmental life cycles exhibited by various organisms are reviewed, with special attention--where relevant--to the human embryo. Morphological features and processes are described and recent insights into the molecular biology of gene expression are discussed. Ways are studied in which model systems, including marine invertebrates, amphibia, fruit flies and other laboratory species, are employed to elucidate general principles which apply to fertilization, cleavage, gastrulation and organogenesis. Special attention is given to insights into those topics which will soon be researched with data from the Human Genome Project. The learning experience is divided into three parts: Part I is a phase in which the Socratic (inquiry) method is employed by the instructor (GMM) to organize a review of classical developmental phenomena; Part II is a phase in which students study the details related to the surveys included in Part I as they have been reported in research journals; Part III focuses on a class project--the preparation of a spiral-bound book on a topic of relevance to human developmental biology (e.g., Textbook of Embryonal Stem Cells). Student response to the use of the Socratic method increases as the course progresses and represents the most successful aspect of the course.

  16. Investigation of the Minimum Deployment Time of a Foam/Fabric Composite Material.

    DTIC Science & Technology

    1980-09-01

    Kevlar Fabric! use xperienced, trained personnel. The pres- Polyurethane Foam Composites. TR M-272/ADA076310 sure containers should be adequately...evaluated. High molecular ponent foam producing materials. (Polyurethanes, weight resin performed best because its solubility char- epoxies, phenolics , and...that was coated to a total Because earlier CERL tests had established the weight of about 10 oz/sq yd (237 gm/m 2 ). strength of Kevlar * fabric, it was

  17. Real time lobster posture estimation for behavior research

    NASA Astrophysics Data System (ADS)

    Yan, Sheng; Alfredsen, Jo Arve

    2017-02-01

    In animal behavior research, the main task of observing the behavior of an animal is usually done manually. The measurement of the trajectory of an animal and its real-time posture description is often omitted due to the lack of automatic computer vision tools. Even though there are many publications on pose estimation, few are efficient enough for real-time use, or usable without a machine learning algorithm that trains a classifier on a large set of samples. In this paper, we propose a novel strategy for real-time lobster posture estimation to overcome those difficulties. In our proposed algorithm, we use the Gaussian mixture model (GMM) for lobster segmentation. The posture estimation is then based on the distance transform and skeleton calculated from the segmentation. We tested the algorithm on a series of lobster videos with different sizes and lighting conditions. The results show that our proposed algorithm is efficient and robust under various conditions.
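The distance-transform step used for posture estimation can be sketched with the classic two-pass algorithm. A city-block metric is used here for simplicity (the actual implementation may use a Euclidean transform), and the binary mask stands in for a GMM segmentation result; pixels deepest inside the blob, where the transform peaks, are the skeleton candidates.

```python
def distance_transform(mask):
    """Two-pass city-block distance transform of a binary mask: distance of
    each foreground pixel to the nearest background pixel."""
    INF = 10 ** 9
    h, w = len(mask), len(mask[0])
    d = [[INF if mask[y][x] else 0 for x in range(w)] for y in range(h)]
    for y in range(h):                      # forward pass (top-left sweep)
        for x in range(w):
            if d[y][x]:
                if y > 0:
                    d[y][x] = min(d[y][x], d[y - 1][x] + 1)
                if x > 0:
                    d[y][x] = min(d[y][x], d[y][x - 1] + 1)
    for y in range(h - 1, -1, -1):          # backward pass (bottom-right sweep)
        for x in range(w - 1, -1, -1):
            if y < h - 1:
                d[y][x] = min(d[y][x], d[y + 1][x] + 1)
            if x < w - 1:
                d[y][x] = min(d[y][x], d[y][x + 1] + 1)
    return d

# Toy 5x5 segmentation blob; the centre pixel is deepest inside the shape
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
d = distance_transform(mask)
```

Tracing the ridge of this transform along the lobster's body yields the skeleton from which posture (body axis, claw directions) can be read off.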

  18. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.
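The false-match elimination step can be illustrated with a deliberately crude substitute: instead of FG-GMM point-set registration with an RKHS deformation model, this sketch keeps only ORB-style match pairs consistent with the dominant (median) translation. All coordinates are invented, and the approach only handles near-pure translation.

```python
from statistics import median

def filter_matches(pairs, tol=2.0):
    """Reject false feature matches by consensus on the dominant translation;
    a crude stand-in for FG-GMM point-set registration."""
    dx = median(q[0] - p[0] for p, q in pairs)
    dy = median(q[1] - p[1] for p, q in pairs)
    keep = [(p, q) for p, q in pairs
            if abs(q[0] - p[0] - dx) <= tol and abs(q[1] - p[1] - dy) <= tol]
    return (dx, dy), keep

# Four matches consistent with a (10, 5) shift plus one gross mismatch
pairs = [((0, 0), (10, 5)), ((3, 1), (13, 6)), ((5, 5), (15, 10)),
         ((2, 7), (12, 12)), ((1, 1), (40, -3))]
(dx, dy), keep = filter_matches(pairs)
```

The surviving pairs would seed the per-POI initial guesses; the full FG-GMM scheme additionally estimates the nonuniform deformation field rather than a single translation.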

  19. Multi-sectorial convergence in greenhouse gas emissions.

    PubMed

    Oliveira, Guilherme de; Bourscheidt, Deise Maria

    2017-07-01

    This paper uses the World Input-Output Database (WIOD) to test the hypothesis of per capita convergence in greenhouse gas (GHG) emissions for a multi-sectorial panel of countries. The empirical strategy applies conventional estimators of random and fixed effects and Arellano and Bond's (1991) GMM to the main pollutants related to the greenhouse effect. For reasonable empirical specifications, the model revealed robust evidence of per capita convergence in CH4 emissions in the agriculture, food, and services sectors. The evidence of convergence in CO2 emissions was moderate in the following sectors: agriculture, food, non-durable goods manufacturing, and services. In all cases, the time for convergence was less than 15 years. Regarding emissions by energy use, the largest source of global warming, there was only moderate evidence in the extractive industry sector; all other pollutants presented little or no evidence. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Predicting the Trajectories of Perceived Pain Intensity in Southern Community-Dwelling Older Adults: The Role of Religiousness.

    PubMed

    Sun, Fei; Park, Nan Sook; Wardian, Jana; Lee, Beom S; Roff, Lucinda L; Klemmack, David L; Parker, Michael W; Koenig, Harold G; Sawyer, Patricia L; Allman, Richard M

    2013-11-01

    This study focuses on the identification of multiple latent trajectories of pain intensity, and it examines how religiousness is related to different classes of pain trajectory. Participants were 720 community-dwelling older adults who were interviewed at four time points over a 3-year period. Overall, intensity of pain decreased over 3 years. Analysis using latent growth mixture modeling (GMM) identified three classes of pain: (1) increasing (n = 47); (2) consistently unchanging (n = 292); and (3) decreasing (n = 381). Higher levels of intrinsic religiousness (IR) at baseline were associated with higher levels of pain at baseline, although IR attenuated the slope of pain trajectories in the increasing pain group. Higher service attendance at baseline was associated with a higher probability of being in the decreasing pain group. The increasing pain group and the consistently unchanging group reported more negative physical and mental health outcomes than the decreasing pain group.

  1. Parameter estimation and forecasting for multiplicative log-normal cascades.

    PubMed

    Leövey, Andrés E; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.
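The flavor of moment-based estimation can be shown on the exactly identified case: for i.i.d. log-normal data, matching the first two raw moments recovers (mu, sigma^2) in closed form, since E[X] = exp(mu + sigma^2/2) and E[X^2] = exp(2 mu + 2 sigma^2). This is a toy stand-in for the paper's GMM procedure, which instead uses moment conditions of the cascade process itself with a weighting matrix.

```python
import math
import random

def lognormal_moment_estimates(x):
    """Moment-based (exactly identified GMM-style) estimates of (mu, sigma^2)
    for i.i.d. log-normal data, from the first two raw sample moments."""
    m1 = sum(x) / len(x)
    m2 = sum(v * v for v in x) / len(x)
    sigma2 = math.log(m2 / (m1 * m1))   # from E[X^2]/E[X]^2 = exp(sigma^2)
    mu = math.log(m1) - sigma2 / 2.0    # from E[X] = exp(mu + sigma^2/2)
    return mu, sigma2

random.seed(0)
sample = [random.lognormvariate(0.0, 0.5) for _ in range(200000)]
mu_hat, s2_hat = lognormal_moment_estimates(sample)  # true values: (0, 0.25)
```

With more moment conditions than parameters, GMM would minimize a weighted quadratic form in the moment discrepancies instead of solving them exactly.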

  2. Ifosfamide in the treatment of recurrent or disseminated lung cancer: a phase II study of two dose schedules.

    PubMed

    Costanzi, J J; Gagliano, R; Loukas, D; Panettiere, F J; Hokanson, J A

    1978-05-01

    Ifosfamide was administered to 21 patients with recurrent or disseminated lung cancer at a dose of 4.0 gm/M2 iv every 3 weeks. The response rate was 33% with an additional 14% showing no response or stable disease. At a dose of 1.2 gm/M2 daily for 5 days every 4 weeks, 57% of 14 patients responded with 35% showing no response or stable disease. The majority of the patients (28) had epidermoid carcinoma. Two (7%) had complete response with 9 (32%) showing partial responses. Other responses included 1/2 oat cell carcinomas and 3/6 large cell undifferentiated carcinomas. Toxicity was equal in both regimens for nausea, vomiting, increased serum LDH and neutropenia but the 5 day program had significantly less hemorrhagic cystitis. Survival was greatly influenced by response. There was no statistical difference in overall length of response between responders and the non responding/stable disease patients. But these two groups had a very significant survival advantage when compared to those patients with increasing disease. Similarly, there was a significant improvement in response duration for the low dosage regimen. Therefore, the low dose 5 day regimen is recommended because of its response rate, it has less hemorrhagic cystitis and it has better patient acceptance in that it can be given as an outpatient and does not require a Foley catheter.

  3. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update.

    PubMed

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-04-15

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the "good" models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm.
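The online background-model update can be sketched per pixel; for brevity a single running Gaussian replaces the mixture: a pixel deviating by more than k sigma from its background model is declared foreground, otherwise the mean and variance are adapted exponentially. Pixel values and constants below are illustrative, not the paper's settings.

```python
def update_background(mu, var, frame, alpha=0.05, k=2.5):
    """Per-pixel running-Gaussian background model (a 1-component
    simplification of a GMM background model). Mutates mu/var in place
    for background pixels and returns the foreground mask."""
    fg = []
    for i, x in enumerate(frame):
        d = x - mu[i]
        if d * d > k * k * var[i]:
            fg.append(1)                       # foreground: model untouched
        else:
            fg.append(0)                       # background: exponential update
            mu[i] += alpha * d
            var[i] += alpha * (d * d - var[i])
    return fg

# Four pixels with background mean 100 and variance 25; pixel 2 jumps to 200
mu = [100.0, 100.0, 100.0, 100.0]
var = [25.0, 25.0, 25.0, 25.0]
fg = update_background(mu, var, [101.0, 99.0, 200.0, 100.0])
```

A full GMM background model keeps several such Gaussians per pixel with weights, which is what lets it absorb multimodal backgrounds (e.g., flickering or swaying scene elements).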

  4. Enhancement of ELDA Tracker Based on CNN Features and Adaptive Model Update

    PubMed Central

    Gao, Changxin; Shi, Huizhang; Yu, Jin-Gang; Sang, Nong

    2016-01-01

    Appearance representation and the observation model are the most important components in designing a robust visual tracking algorithm for video-based sensors. Additionally, the exemplar-based linear discriminant analysis (ELDA) model has shown good performance in object tracking. Based on that, we improve the ELDA tracking algorithm by deep convolutional neural network (CNN) features and adaptive model update. Deep CNN features have been successfully used in various computer vision tasks. Extracting CNN features on all of the candidate windows is time consuming. To address this problem, a two-step CNN feature extraction method is proposed by separately computing convolutional layers and fully-connected layers. Due to the strong discriminative ability of CNN features and the exemplar-based model, we update both object and background models to improve their adaptivity and to deal with the tradeoff between discriminative ability and adaptivity. An object updating method is proposed to select the “good” models (detectors), which are quite discriminative and uncorrelated to other selected models. Meanwhile, we build the background model as a Gaussian mixture model (GMM) to adapt to complex scenes, which is initialized offline and updated online. The proposed tracker is evaluated on a benchmark dataset of 50 video sequences with various challenges. It achieves the best overall performance among the compared state-of-the-art trackers, which demonstrates the effectiveness and robustness of our tracking algorithm. PMID:27092505

  5. Does economic, financial and institutional developments matter for environmental quality? A comparative analysis of EU and MEA countries.

    PubMed

    Abid, Mehdi

    2017-03-01

    The aim of this study is to test the hypothesis of the Environmental Kuznets Curve (EKC) with a sample of 58 MEA (Middle East & African) and 41 EU (European Union) countries for the period 1990 to 2011. The empirical analysis is carried out using the GMM-system method to solve the problem of endogenous variables. We focused on direct and indirect effects of institutional quality (through the efficiency of public expenditure, financial development, trade openness and foreign direct investment) on the income-emission relationship. We found a monotonically increasing relationship between CO2 emissions and GDP in both the MEA and EU regions. The policy implication is clear: in order to have sustainable positive economic performance and to reduce carbon dioxide emissions in a country at the same time, policy makers should regulate and enhance the role and efficiency of domestic institutions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Audio feature extraction using probability distribution function

    NASA Astrophysics Data System (ADS)

    Suhaib, A.; Wan, Khairunizam; Aziz, Azri A.; Hazry, D.; Razlan, Zuradzman M.; Shahriman A., B.

    2015-05-01

    Voice recognition has been one of the popular applications in the robotics field. It is also known to be used recently for biometric and multimedia information retrieval systems. This technology stems from successive research on audio feature extraction analysis. The Probability Distribution Function (PDF) is a statistical method which is usually used as one of the processes in complex feature extraction methods such as GMM and PCA. In this paper, a new method for audio feature extraction is proposed which uses only the PDF itself as the feature extraction method for speech analysis purposes. Certain pre-processing techniques are performed prior to the proposed feature extraction method. Subsequently, the PDF values for each frame of sampled voice signals obtained from a number of individuals are plotted. From the experimental results obtained, it can be seen visually from the plotted data that each individual's voice has comparable PDF values and shapes.
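A minimal sketch of using the empirical PDF itself as the feature: histogram each frame's amplitudes over a fixed range and normalize, so the feature vector is the per-bin probability mass. The bin count, amplitude range, and the synthetic sine frame are assumptions, not the paper's settings.

```python
import math

def frame_pdf(samples, n_bins=8, lo=-1.0, hi=1.0):
    """Empirical amplitude distribution of one audio frame: a fixed-range
    histogram normalized to sum to 1."""
    counts = [0] * n_bins
    width = (hi - lo) / n_bins
    for s in samples:
        i = min(int((s - lo) / width), n_bins - 1)  # clamp the top edge
        counts[max(i, 0)] += 1
    total = len(samples)
    return [c / total for c in counts]

# Synthetic 256-sample frame: a 220 Hz tone at 8 kHz with amplitude 0.5,
# so no mass can fall in the outermost bins (|amplitude| > 0.75)
frame = [0.5 * math.sin(2 * math.pi * 220 * t / 8000) for t in range(256)]
pdf = frame_pdf(frame)
```

Plotting such per-frame PDFs across speakers is the visual comparison the paper reports; a classifier could equally consume the normalized bin vector directly.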

  7. Getting stuck in the blues: persistence of mental health problems in Australia.

    PubMed

    Roy, John; Schurer, Stefanie

    2013-09-01

    Do episodes of mental health (MH) problems cause future MH problems, and if so, how strong are these dynamics? We quantify the degree of persistence in MH problems using nationally representative, longitudinal data from Australia. System generalized method of moments (GMM) and correlated random effects approaches are applied to separate true from spurious state dependence. Our results suggest only a moderate degree of persistence in MH problems when persistence is assumed to be constant across the MH distribution and individual-specific heterogeneity is accounted for. However, individuals who once fell below a threshold indicating an episode of depression are up to five times more likely to experience such a low score again a year later, indicating a strong element of state dependence in depression. Low income is a strong risk factor for state dependence for both men and women, which has important policy implications. Copyright © 2013 John Wiley & Sons, Ltd.

  8. Effect of segmentation algorithms on the performance of computerized detection of lung nodules in CT

    PubMed Central

    Guo, Wei; Li, Qiang

    2014-01-01

    Purpose: The purpose of this study is to reveal how the performance of a lung nodule segmentation algorithm impacts the performance of lung nodule detection, and to provide guidelines for choosing an appropriate segmentation algorithm with appropriate parameters in a computer-aided detection (CAD) scheme. Methods: The database consisted of 85 CT scans with 111 nodules of 3 mm or larger in diameter from the standard CT lung nodule database created by the Lung Image Database Consortium. The initial nodule candidates were identified as those with strong response to a selective nodule enhancement filter. A uniform viewpoint reformation technique was applied to each three-dimensional nodule candidate to generate 24 two-dimensional (2D) reformatted images, which would be used to effectively distinguish between true nodules and false positives. Six different algorithms were employed to segment the initial nodule candidates in the 2D reformatted images. Finally, 2D features from the segmented areas in the 24 reformatted images were determined, selected, and classified for removal of false positives. Therefore, there were six similar CAD schemes, in which only the segmentation algorithms were different. The six segmentation algorithms included fixed thresholding (FT), Otsu thresholding (OTSU), fuzzy C-means (FCM), the Gaussian mixture model (GMM), the Chan and Vese model (CV), and local binary fitting (LBF). The mean Jaccard index and the mean absolute distance (Dmean) were employed to evaluate the performance of the segmentation algorithms, and the number of false positives at a fixed sensitivity was employed to evaluate the performance of the CAD schemes. Results: For the segmentation algorithms of FT, OTSU, FCM, GMM, CV, and LBF, the highest mean Jaccard indices between the segmented nodule and the ground truth were 0.601, 0.586, 0.588, 0.563, 0.543, and 0.553, respectively, and the corresponding Dmean values were 1.74, 1.80, 2.32, 2.80, 3.48, and 3.18 pixels, respectively. 
With these segmentation results of the six segmentation algorithms, the six CAD schemes reported 4.4, 8.8, 3.4, 9.2, 13.6, and 10.4 false positives per CT scan at a sensitivity of 80%. Conclusions: When multiple algorithms are available for segmenting nodule candidates in a CAD scheme, the “optimal” segmentation algorithm did not necessarily lead to the “optimal” CAD detection performance. PMID:25186393
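    The Jaccard index used above to score segmentations is straightforward to compute for binary masks. The toy masks below are hypothetical, chosen only to make the arithmetic visible:

```python
import numpy as np

def jaccard_index(seg, truth):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary masks."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    union = np.logical_or(seg, truth).sum()
    return float(np.logical_and(seg, truth).sum() / union) if union else 1.0

# Hypothetical 8x8 masks: a "ground truth" nodule and an offset segmentation.
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1   # 16 pixels
seg = np.zeros((8, 8), dtype=int); seg[3:7, 3:7] = 1       # 16 pixels, 9 overlap
print(round(jaccard_index(seg, truth), 4))   # → 0.3913 (9 / 23)
```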

  9. Tracking the visual focus of attention for a varying number of wandering people.

    PubMed

    Smith, Kevin; Ba, Sileye O; Odobez, Jean-Marc; Gatica-Perez, Daniel

    2008-07-01

    We define and address the problem of finding the visual focus of attention for a varying number of wandering people (VFOA-W), that is, determining where people are looking when their movement is unconstrained. VFOA-W estimation is a new and important problem with implications for behavior understanding and cognitive science, as well as real-world applications. One such application, which we present in this article, monitors the attention passers-by pay to an outdoor advertisement. Our approach to the VFOA-W problem proposes a multi-person tracking solution based on a dynamic Bayesian network that simultaneously infers the (variable) number of people in a scene, their body locations, their head locations, and their head pose. For efficient inference in the resulting large variable-dimensional state space we propose a Reversible Jump Markov Chain Monte Carlo (RJMCMC) sampling scheme, as well as a novel global observation model which determines the number of people in the scene and localizes them. We propose a Gaussian Mixture Model (GMM) and Hidden Markov Model (HMM)-based VFOA-W model which uses head pose and location information to determine people's focus state. Our models are evaluated for tracking performance and for the ability to recognize people looking at an outdoor advertisement, with results indicating good performance on sequences where a moderate number of people pass in front of an advertisement.

  10. Easy-interactive and quick psoriasis lesion segmentation

    NASA Astrophysics Data System (ADS)

    Ma, Guoli; He, Bei; Yang, Wenming; Shu, Chang

    2013-12-01

    This paper proposes an interactive psoriasis lesion segmentation algorithm based on the Gaussian Mixture Model (GMM). Psoriasis is an incurable skin disease that affects a large population worldwide. PASI (Psoriasis Area and Severity Index) is the gold standard used by dermatologists to monitor the severity of psoriasis. Computer-aided methods of calculating PASI are more objective and accurate than human visual assessment, and psoriasis lesion segmentation is the basis of the whole calculation. This segmentation differs from common foreground/background segmentation problems. Our algorithm is inspired by GrabCut and consists of three main stages. First, the skin area is extracted from the background scene by transforming the RGB values into the YCbCr color space. Second, a rough segmentation of normal skin and psoriasis lesion is produced by thresholding a single Gaussian model; the thresholds are adjustable, which enables user interaction. Third, two GMMs, one for the initial normal skin and one for the psoriasis lesion, are built to refine the segmentation. Experimental results demonstrate the effectiveness of the proposed algorithm.
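    The first stage (skin extraction via YCbCr) can be sketched as below. The BT.601 conversion is standard, but the Cb/Cr skin ranges and the sample pixels are illustrative assumptions, not values from the paper:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def skin_mask(rgb, cb_range=(77, 127), cr_range=(133, 173)):
    """Flag pixels whose Cb/Cr fall in commonly cited skin ranges
    (illustrative thresholds, not the paper's)."""
    ycbcr = rgb_to_ycbcr(rgb.astype(float))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

img = np.array([[[224, 172, 138],   # skin-like pixel
                 [30, 60, 200]]])   # blue background pixel
print(skin_mask(img)[0].tolist())   # → [True, False]
```

    Working in Cb/Cr rather than RGB makes the skin test largely independent of brightness, which is why this color space is the usual choice for skin detection.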

  11. Offline handwritten word recognition using MQDF-HMMs

    NASA Astrophysics Data System (ADS)

    Ramachandrula, Sitaram; Hambarde, Mangesh; Patial, Ajay; Sahoo, Dushyant; Kochar, Shaivi

    2015-01-01

    We propose an improved HMM formulation for offline handwriting recognition (HWR). The main contribution of this work is the use of the modified quadratic discriminant function (MQDF) [1] within the HMM framework. In an MQDF-HMM, the state observation likelihood is calculated by a weighted combination of the MQDF likelihoods of the individual Gaussians of a GMM (Gaussian Mixture Model). The quadratic discriminant function (QDF) of a multivariate Gaussian can be rewritten to avoid inverting the covariance matrix by using its eigenvalues and eigenvectors. The MQDF is derived from the QDF by substituting a few of the badly estimated smallest eigenvalues with an appropriate constant; this controls the estimation errors of the non-dominant eigenvectors and eigenvalues of the covariance matrix when the training data are insufficient. MQDF has been shown to improve character recognition performance [1]. Using MQDF in an HMM improves the computation, storage, and modeling power of the HMM when training data are limited. We obtained encouraging results on offline handwritten character (NIST database) and word recognition in English using MQDF-HMMs.
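    A minimal sketch of the MQDF idea: keep the k dominant eigenvalues of the class covariance and replace the rest with a constant delta. The data, k, and delta here are arbitrary stand-ins, not the paper's settings:

```python
import numpy as np

def mqdf_score(x, mean, cov, k, delta):
    """MQDF: quadratic discriminant with the covariance's smallest
    eigenvalues replaced by the constant delta (smaller = closer)."""
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1]          # dominant eigenvalues first
    lam, v = vals[order][:k], vecs[:, order[:k]]
    d = x - mean
    proj = v.T @ d                          # components along dominant axes
    resid = d @ d - proj @ proj             # energy left in truncated subspace
    return (np.sum(proj ** 2 / lam) + resid / delta
            + np.sum(np.log(lam)) + (len(x) - k) * np.log(delta))

# Arbitrary synthetic "class" with an estimated mean and covariance.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
mean, cov = X.mean(axis=0), np.cov(X.T)
s_near = mqdf_score(mean, mean, cov, k=3, delta=0.1)
s_far = mqdf_score(mean + 5.0, mean, cov, k=3, delta=0.1)
print(s_near < s_far)   # → True
```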

  12. Detecting ship targets in spaceborne infrared image based on modeling radiation anomalies

    NASA Astrophysics Data System (ADS)

    Wang, Haibo; Zou, Zhengxia; Shi, Zhenwei; Li, Bo

    2017-09-01

    Using infrared imaging sensors to detect ship targets in the ocean environment has many advantages over other sensor modalities, such as better thermal sensitivity and all-weather detection capability. We propose a new ship detection method for spaceborne infrared images based on modeling radiation anomalies. The proposed method can be decomposed into two stages: in the first stage, a test infrared image is densely divided into a set of image patches and the radiation anomaly of each patch is estimated by a Gaussian Mixture Model (GMM), so that target candidates are obtained from anomalous image patches. In the second stage, target candidates are further checked by a more discriminative criterion to obtain the final detection result. The main innovation of the proposed method is inspired by the biological mechanism that human eyes are sensitive to unusual and anomalous patches among a complex background. Experimental results on the short-wavelength infrared band (1.560-2.300 μm) and the long-wavelength infrared band (10.30-12.50 μm) of the Landsat-8 satellite show that the proposed method achieves the desired ship detection accuracy with higher recall than other classical ship detection methods.
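    The first-stage idea (score patches by their likelihood under a background GMM and flag low-likelihood patches as anomalies) can be sketched with scikit-learn. The two-feature patch representation and all numbers below are invented for illustration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic two-feature patch descriptors (e.g., mean and std of patch
# intensity) for a sea background with two radiometric "modes".
rng = np.random.default_rng(2)
background = np.vstack([rng.normal([0.20, 0.05], 0.02, (300, 2)),
                        rng.normal([0.40, 0.08], 0.02, (300, 2))])
gmm = GaussianMixture(n_components=2, random_state=0).fit(background)

patches = np.array([[0.21, 0.05],    # ordinary background patch
                    [0.95, 0.40]])   # radiometrically anomalous patch
loglik = gmm.score_samples(patches)  # log-likelihood under background model
print(bool(loglik[0] > loglik[1]))   # → True: the anomaly scores far lower
```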

  13. Limit Theory for Panel Data Models with Cross Sectional Dependence and Sequential Exogeneity

    PubMed Central

    Kuersteiner, Guido M.; Prucha, Ingmar R.

    2013-01-01

    The paper derives a general Central Limit Theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow for the data to be cross sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of Maximum Likelihood (ML) estimators that can be analyzed using our CLT. PMID:23794781

  14. COCHISE Observations of Argon Rydberg Emission from 2 to 16 Micrometers.

    DTIC Science & Technology

    1983-08-05

    [Abstract garbled by OCR during extraction; only fragments are recoverable: a distribution statement (qualified requestors may obtain additional copies from the Defense Technical Information Center), the report type ("Scientific. Interim.", covering the period of the report), and the finding that comparisons of observed and simulated spectra show that substantial LWIR emission arises from Rydberg states.]

  15. Automatic classification of unexploded ordnance applied to Spencer Range live site for 5x5 TEMTADS sensor

    NASA Astrophysics Data System (ADS)

    Sigman, John B.; Barrowes, Benjamin E.; O'Neill, Kevin; Shubitidze, Fridon

    2013-06-01

    This paper details methods for the automatic classification of Unexploded Ordnance (UXO) as applied to sensor data from the Spencer Range live site, a former military weapons range in Spencer, Tennessee. Electromagnetic Induction (EMI) sensing is carried out using the 5x5 Time-domain Electromagnetic Multi-sensor Towed Array Detection System (5x5 TEMTADS), which has 25 receivers and 25 co-located transmitters. Each transmitter is activated in sequence, and the magnetic field is then measured at all 25 receivers, from 100 microseconds to 25 milliseconds. From these data, target extrinsic and intrinsic parameters are extracted using the Differential Evolution (DE) algorithm and the Ortho-Normalized Volume Magnetic Source (ONVMS) algorithm, respectively. Namely, the inversion provides x, y, and z locations and a time series of the total ONVMS principal eigenvalues, which are intrinsic properties of the objects. The eigenvalues are fit to an empirical power-decay model, the Pasion-Oldenburg model, providing three coefficients (k, b, and g) for each object. The objects are grouped geometrically into variably sized clusters in the k-b-g space using clustering algorithms. Clusters matching a priori characteristics are identified as Targets of Interest (TOI), and larger clusters are automatically subclustered. Ground truths (GT) at the center of each class are requested, and probability density functions are created for clusters that have centroid TOI using a Gaussian Mixture Model (GMM). The probability functions are applied to all remaining anomalies, and all objects with a UXO probability higher than a chosen threshold are placed in a ranked dig list. This prioritized list is scored, and the results are demonstrated and analyzed.
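    Assuming the Pasion-Oldenburg power-decay form A(t) = k·t^(-b)·e^(-g·t) (a common parameterization; the abstract does not spell out the exact form used), the k, b, g coefficients can be recovered by linear least squares in log space:

```python
import numpy as np

def fit_pasion_oldenburg(t, amp):
    """Fit amp(t) = k * t**(-b) * exp(-g*t) by least squares in log
    space: ln(amp) = ln(k) - b*ln(t) - g*t."""
    A = np.column_stack([np.ones_like(t), -np.log(t), -t])
    coef, *_ = np.linalg.lstsq(A, np.log(amp), rcond=None)
    return np.exp(coef[0]), coef[1], coef[2]   # k, b, g

# Noise-free synthetic decay over 100 µs .. 25 ms gates (arbitrary k, b, g).
t = np.logspace(-4, np.log10(25e-3), 30)
amp = 2.0 * t ** (-0.9) * np.exp(-50.0 * t)
k, b, g = fit_pasion_oldenburg(t, amp)
print(round(float(b), 3), round(float(g), 1))   # → 0.9 50.0
```

    With noisy field data a weighted or nonlinear fit would be preferable, but the log-linear form shows why three decay coefficients suffice as a clustering feature.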

  16. Gaussian Mixture Models of Between-Source Variation for Likelihood Ratio Computation from Multivariate Data

    PubMed Central

    Franco-Pedroso, Javier; Ramos, Daniel; Gonzalez-Rodriguez, Joaquin

    2016-01-01

    In forensic science, trace evidence found at a crime scene and on a suspect has to be evaluated from the measurements performed on it, usually in the form of multivariate data (for example, several chemical compounds or physical characteristics). In order to assess the strength of that evidence, the likelihood ratio framework is being increasingly adopted. Several methods have been derived to obtain likelihood ratios directly from univariate or multivariate data by modelling both the variation appearing between observations (or features) coming from the same source (within-source variation) and that appearing between observations coming from different sources (between-source variation). In the widely used multivariate kernel likelihood ratio, the within-source distribution is assumed to be normally distributed and constant among different sources, and the between-source variation is modelled through a kernel density function (KDF). In order to better fit the observed distribution of the between-source variation, this paper presents a different approach in which a Gaussian mixture model (GMM) is used instead of a KDF. As shown, this approach provides better-calibrated likelihood ratios as measured by the log-likelihood-ratio cost (Cllr) in experiments performed on freely available forensic datasets involving different types of trace evidence: inks, glass fragments and car paints. PMID:26901680
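    A drastically simplified, score-based sketch of computing likelihood ratios from two fitted GMM densities. Real forensic LR systems, including the paper's, model within- and between-source variation separately, which is not reproduced here; all data are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic 1-D comparison scores: same-source vs different-source pairs.
rng = np.random.default_rng(3)
same = rng.normal(2.0, 0.5, (400, 1))
diff = rng.normal(-1.0, 1.0, (400, 1))
g_same = GaussianMixture(n_components=2, random_state=0).fit(same)
g_diff = GaussianMixture(n_components=2, random_state=0).fit(diff)

def log10_lr(x):
    """log10 likelihood ratio of the two fitted GMM densities."""
    x = np.atleast_2d(x)
    return (g_same.score_samples(x) - g_diff.score_samples(x)) / np.log(10)

print(bool(log10_lr([[2.0]])[0] > 0), bool(log10_lr([[-1.0]])[0] < 0))
```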

  17. Supply and demand in physician markets: a panel data analysis of GP services in Australia.

    PubMed

    McRae, Ian; Butler, James R G

    2014-09-01

    To understand the trends in any physician services market it is necessary to understand the nature of both supply and demand, but few studies have jointly examined supply and demand in these markets. This study uses aggregate panel data on general practitioner (GP) services at the Statistical Local Area level in Australia spanning eight years to estimate supply and demand equations for GP services. The structural equations of the model are estimated separately using population-weighted fixed effects panel modelling with the two stage least squares formulation of the generalised method of moments approach (GMM (2SLS)). The estimated price elasticity of demand of [Formula: see text] is comparable with other studies. The direct impact of GP density on demand, while significant, proves almost immaterial in the context of near vertical supply curves. Supply changes are therefore due to shifts in the position of the curves, partly determined by a time trend. The model is validated by comparing post-panel model predictions with actual market outcomes over a period of three years and is found to provide surprisingly accurate projections over a period of significant policy change. The study confirms the need to jointly consider supply and demand in exploring the behaviour of physician services markets.

  18. Device-Free Passive Identity Identification via WiFi Signals.

    PubMed

    Lv, Jiguang; Yang, Wu; Man, Dapeng

    2017-11-02

    Device-free passive identity identification has attracted much attention in recent years as a representative application of sensorless sensing, with uses in intrusion detection and smart buildings. Previous studies show the sensing potential of WiFi signals in a device-free passive manner, and it is established that each person's gait is unique, much like a fingerprint or iris. However, the identification accuracy of existing approaches is not satisfactory in practice. In this paper, we present Wii, a device-free WiFi-based identity identification approach that exploits human gait based on the Channel State Information (CSI) of WiFi signals. Principal Component Analysis (PCA) and a low-pass filter are applied to remove noise from the signals. We then extract several gait features from both the time and frequency domains, and select the most effective features according to information gain. Based on these features, Wii realizes stranger recognition through a Gaussian Mixture Model (GMM) and identity identification through a Support Vector Machine (SVM) with a Radial Basis Function (RBF) kernel. It is implemented using commercial WiFi devices and evaluated on a dataset with more than 1500 gait instances collected from eight subjects walking in a room. The results indicate that Wii can effectively recognize strangers and achieves high identification accuracy with low computational cost. As a result, Wii has the potential to work in typical home security systems.

  20. Age-Related Effects and Sex Differences in Gray Matter Density, Volume, Mass, and Cortical Thickness from Childhood to Young Adulthood.

    PubMed

    Gennatas, Efstathios D; Avants, Brian B; Wolf, Daniel H; Satterthwaite, Theodore D; Ruparel, Kosha; Ciric, Rastko; Hakonarson, Hakon; Gur, Raquel E; Gur, Ruben C

    2017-05-17

    Developmental structural neuroimaging studies in humans have long described decreases in gray matter volume (GMV) and cortical thickness (CT) during adolescence. Gray matter density (GMD), a measure often assumed to be highly related to volume, has not been systematically investigated in development. We used T1 imaging data collected on the Philadelphia Neurodevelopmental Cohort to study age-related effects and sex differences in four regional gray matter measures in 1189 youths ranging in age from 8 to 23 years. Custom T1 segmentation and a novel high-resolution gray matter parcellation were used to extract GMD, GMV, gray matter mass (GMM; defined as GMD × GMV), and CT from 1625 brain regions. Nonlinear models revealed that each modality exhibits unique age-related effects and sex differences. While GMV and CT generally decrease with age, GMD increases and shows the strongest age-related effects, while GMM shows a slight decline overall. Females have lower GMV but higher GMD than males throughout the brain. Our findings suggest that GMD is a prime phenotype for the assessment of brain development and likely cognition and that periadolescent gray matter loss may be less pronounced than previously thought. This work highlights the need for combined quantitative histological MRI studies. SIGNIFICANCE STATEMENT This study demonstrates that different MRI-derived gray matter measures show distinct age and sex effects and should not be considered equivalent but complementary. It is shown for the first time that gray matter density increases from childhood to young adulthood, in contrast with gray matter volume and cortical thickness, and that females, who are known to have lower gray matter volume than males, have higher density throughout the brain. A custom preprocessing pipeline and a novel high-resolution parcellation were created to analyze brain scans of 1189 youths collected as part of the Philadelphia Neurodevelopmental Cohort. 
A clear understanding of normal structural brain development is essential for the examination of brain-behavior relationships, the study of brain disease, and, ultimately, clinical applications of neuroimaging. Copyright © 2017 the authors 0270-6474/17/375065-09$15.00/0.

  1. Automated 3D Damaged Cavity Model Builder for Lower Surface Acreage Tile on Orbiter

    NASA Technical Reports Server (NTRS)

    Belknap, Shannon; Zhang, Michael

    2013-01-01

    The 3D Automated Thermal Tool for Damaged Acreage Tile Math Model builder was developed to quickly and accurately perform 3D thermal analyses on damaged lower surface acreage tiles and the structures beneath the damaged locations on a Space Shuttle Orbiter. The 3D model builder created both TRASYS geometric math models (GMMs) and SINDA thermal math models (TMMs) to simulate an idealized damaged cavity in the damaged tile(s). The GMMs are processed in TRASYS to generate radiation conductors between the surfaces in the cavity. The radiation conductors are inserted into the TMMs, which are processed in SINDA to generate temperature histories for all of the nodes on each layer of the TMM. The invention allows a thermal analyst to quickly and accurately create a 3D model of a damaged lower surface tile on the orbiter. The 3D model builder can generate a GMM and the corresponding TMM in one or two minutes, with the damaged cavity included in the tile material. A separate program creates a configuration file, which takes only a couple of minutes to edit. This configuration file is read by the model builder program to determine the location of the damage and the correct tile type, tile thickness, structure thickness, and SIP thickness at that location, so that the model builder program can build an accurate model at the specified location. Once the models are built, they are processed by TRASYS and SINDA.

  2. Parameter estimation and forecasting for multiplicative log-normal cascades

    NASA Astrophysics Data System (ADS)

    Leövey, Andrés E.; Lux, Thomas

    2012-04-01

    We study the well-known multiplicative log-normal cascade process in which the multiplication of Gaussian and log-normally distributed random variables yields time series with intermittent bursts of activity. Due to the nonstationarity of this process and the combinatorial nature of such a formalism, its parameters have been estimated mostly by fitting the numerical approximation of the associated non-Gaussian probability density function to empirical data, cf. Castaing et al. [Physica D 46, 177 (1990)]. More recently, alternative estimators based upon various moments have been proposed by Beck [Physica D 193, 195 (2004)] and Kiyono et al. [Phys. Rev. E 76, 041113 (2007)]. In this paper, we pursue this moment-based approach further and develop a more rigorous generalized method of moments (GMM) estimation procedure to cope with the documented difficulties of previous methodologies. We show that even under uncertainty about the actual number of cascade steps, our methodology yields very reliable results for the estimated intermittency parameter. Employing the Levinson-Durbin algorithm for best linear forecasts, we also show that the estimated parameters can be used for forecasting the evolution of the turbulent flow. We compare forecasting results from the GMM and Kiyono et al.'s procedure via Monte Carlo simulations. We finally test the applicability of our approach by estimating the intermittency parameter and forecasting volatility for a sample of financial data from stock and foreign exchange markets.

  3. Use of next generation sequencing data to develop a qPCR method for specific detection of EU-unauthorized genetically modified Bacillus subtilis overproducing riboflavin.

    PubMed

    Barbau-Piednoir, Elodie; De Keersmaecker, Sigrid C J; Delvoye, Maud; Gau, Céline; Philipp, Patrick; Roosens, Nancy H

    2015-11-11

    Recently, the presence of an unauthorized genetically modified (GM) Bacillus subtilis bacterium overproducing vitamin B2 in a feed additive was notified through the Rapid Alert System for Food and Feed (RASFF). This demonstrated that contamination by a GM micro-organism (GMM) may occur in feed additives and, for the first time, confronted enforcement laboratories with this type of RASFF. As no sequence information for this GMM nor any specific detection or identification method was available, Next Generation Sequencing (NGS) was used to generate sequence information. However, NGS data analysis often requires appropriate tools and bioinformatics expertise that is not always present in the average enforcement laboratory, which hampers the use of this technology to rapidly obtain the critical sequence information needed to develop a specific qPCR detection method. Data generated by NGS were exploited using a simple BLAST approach. A TaqMan® qPCR method was developed and tested on isolated bacterial strains and on the feed additive directly. In this study, a very simple strategy based on common BLAST tools that can be used by any enforcement laboratory without profound bioinformatics expertise was successfully used to analyse the B. subtilis data generated by NGS. The results were used to design and assess a new TaqMan® qPCR method that specifically detects this GM vitamin B2-overproducing bacterium. The method complies with EU critical performance parameters for specificity, sensitivity, PCR efficiency and repeatability. The VitB2-UGM method could also detect the B. subtilis strain in genomic DNA extracted from the feed additive, without a prior culturing step. The proposed method provides a crucial tool for enforcement laboratories to specifically and rapidly identify this unauthorized GM bacterium in food and feed additives. Moreover, this work can be seen as a case study of how NGS data can offer added value in gaining access to the sequence information needed to develop qPCR methods to detect unknown and unauthorized GMOs in food and feed.

  4. Profiling pleural effusion cells by a diffraction imaging method

    NASA Astrophysics Data System (ADS)

    Al-Qaysi, Safaa; Hong, Heng; Wen, Yuhua; Lu, Jun Q.; Feng, Yuanming; Hu, Xin-Hua

    2018-02-01

    Assay of cells in pleural effusion (PE) is an important means of disease diagnosis. Conventional cytology of effusion samples, however, has low sensitivity and depends heavily on the expertise of cytopathologists. We applied a polarization diffraction imaging flow cytometry method to effusion cells to investigate their features. Diffraction imaging was performed on 6000 to 12000 cells for each effusion cell sample from three patients. After prescreening to remove images of cellular debris and aggregated non-cellular particles, image textures were extracted with a gray level co-occurrence matrix (GLCM) algorithm. The distribution of the imaged cells in the GLCM parameter space was analyzed with a Gaussian Mixture Model (GMM) to determine the number of clusters among the effusion cells. These results yield insight into the textural features of diffraction images and the related cellular morphology in effusion samples, and can be used toward the development of a label-free method for effusion cell assay.
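    A GLCM for a single pixel offset is just a normalized co-occurrence table. The tiny image and the Haralick contrast feature below are illustrative, not from the paper:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset,
    normalized to a joint probability table."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])          # tiny 3-level "texture"
p = glcm(img, levels=3)
i, j = np.indices(p.shape)
contrast = float(np.sum((i - j) ** 2 * p))   # Haralick contrast feature
print(round(contrast, 4))   # → 0.3333
```

    Features such as contrast, energy, and homogeneity computed from p are what would populate the GLCM parameter space that the GMM then clusters.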

  5. The global move toward Internet shopping and its influence on pollution: an empirical analysis.

    PubMed

    Al-Mulali, Usama; Sheau-Ting, Low; Ozturk, Ilhan

    2015-07-01

    This study investigates the influence of Internet retailing on carbon dioxide (CO2) emission in 77 countries categorized into developed and developing countries during the period of 2000-2013. To realize the aims of the study, a model that represents pollution is established utilizing the panel two-stage least square (TSLS) and the generalized method of moments (GMM). The results for both regressions similarly indicated that GDP growth, electricity consumption, urbanization, and trade openness are the main factors that increase CO2 emission in the investigated countries. Although the results show that Internet retailing reduces CO2 emission in general, a disaggregation occurs between developed and developing countries whereby Internet retailing has a significant negative effect on CO2 emission in the developed countries while it has no significant impact on CO2 emission in the developing countries. From the outcome of this study, a number of policy implications are provided for the investigated countries.

  6. State-vector formalism and the Legendre polynomial solution for modelling guided waves in anisotropic plates

    NASA Astrophysics Data System (ADS)

    Zheng, Mingfang; He, Cunfu; Lu, Yan; Wu, Bin

    2018-01-01

    We present a numerical method for solving the phase dispersion curves of general anisotropic plates. The approach involves an exact solution to the problem in the form of a Legendre polynomial expansion of multiple integrals, which we substitute into the state-vector formalism. To improve the efficiency of the proposed method, we demonstrate the analytical methodology in detail and analyze the algebraic symmetries of the matrices in the state-vector formalism for anisotropic plates. The basic feature of the proposed method is the expansion of the field quantities in Legendre polynomials, which avoids solving the transcendental dispersion equation that otherwise can only be solved numerically. The state-vector formalism combined with the Legendre polynomial expansion distinguishes adjacent dispersion modes clearly, even when the modes are very close. We then illustrate the theoretical dispersion curves obtained by this method for isotropic and anisotropic plates. Finally, we compare the proposed method with the global matrix method (GMM), which shows excellent agreement.

  7. Automated flow cytometric analysis across large numbers of samples and cell types.

    PubMed

    Chen, Xiaoyi; Hasan, Milena; Libri, Valentina; Urrutia, Alejandra; Beitz, Benoît; Rouilly, Vincent; Duffy, Darragh; Patin, Étienne; Chalmond, Bernard; Rogge, Lars; Quintana-Murci, Lluis; Albert, Matthew L; Schwikowski, Benno

    2015-04-01

    Multi-parametric flow cytometry is a key technology for characterization of immune cell phenotypes. However, robust high-dimensional post-analytic strategies for automated data analysis in large numbers of donors are still lacking. Here, we report a computational pipeline, called FlowGM, which minimizes operator input, is insensitive to compensation settings, and can be adapted to different analytic panels. A Gaussian Mixture Model (GMM)-based approach was utilized for initial clustering, with the number of clusters determined using Bayesian Information Criterion. Meta-clustering in a reference donor permitted automated identification of 24 cell types across four panels. Cluster labels were integrated into FCS files, thus permitting comparisons to manual gating. Cell numbers and coefficient of variation (CV) were similar between FlowGM and conventional gating for lymphocyte populations, but notably FlowGM provided improved discrimination of "hard-to-gate" monocyte and dendritic cell (DC) subsets. FlowGM thus provides rapid high-dimensional analysis of cell phenotypes and is amenable to cohort studies. Copyright © 2015. Published by Elsevier Inc.
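
FlowGM itself is a full pipeline, but its initial step (fit GMMs with increasing component counts and pick the count minimizing the Bayesian Information Criterion) can be sketched on synthetic data using scikit-learn, which is an assumption here, not the authors' implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D "cytometry" data: three well-separated cell populations.
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.3, size=(200, 2)),
    rng.normal(loc=[4, 0], scale=0.3, size=(200, 2)),
    rng.normal(loc=[0, 4], scale=0.3, size=(200, 2)),
])

# Fit GMMs with 1..6 components and select the model minimizing BIC.
bics = []
for k in range(1, 7):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bics.append(gmm.bic(X))
best_k = int(np.argmin(bics)) + 1   # BIC should recover 3 populations
```

BIC penalizes extra components, so on well-separated populations it recovers the true cluster count without operator input.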

  8. Robust Target Tracking with Multi-Static Sensors under Insufficient TDOA Information.

    PubMed

    Shin, Hyunhak; Ku, Bonhwa; Nelson, Jill K; Ko, Hanseok

    2018-05-08

    This paper focuses on underwater target tracking based on a multi-static sonar network composed of passive sonobuoys and an active ping. In the multi-static sonar network, the location of the target can be estimated using TDOA (Time Difference of Arrival) measurements. However, since the sensor network may obtain insufficient and inaccurate TDOA measurements due to ambient noise and other harsh underwater conditions, target tracking performance can be significantly degraded. We propose a robust target tracking algorithm designed to operate in such a scenario. First, track management with track splitting is applied to reduce the performance degradation caused by insufficient measurements. Second, the target location is estimated by fusing multiple TDOA measurements using a Gaussian mixture model (GMM). In addition, the target trajectory is refined by a stack-based data association method that uses measurements from multiple frames. The effectiveness of the proposed method is verified through simulations.

  9. Study of wavelet packet energy entropy for emotion classification in speech and glottal signals

    NASA Astrophysics Data System (ADS)

    He, Ling; Lech, Margaret; Zhang, Jing; Ren, Xiaomei; Deng, Lihua

    2013-07-01

    Automatic speech emotion recognition has important applications in human-machine communication. The majority of current research in this area is focused on finding optimal feature parameters. In recent studies, several glottal features were examined as potential cues for emotion differentiation. In this study, a new type of feature parameter is proposed, which calculates energy entropy on values within selected wavelet packet frequency bands. The modeling and classification tasks are conducted using the classical GMM algorithm. The experiments use two data sets: the Speech Under Simulated Emotion (SUSE) data set, annotated with three different emotions (angry, neutral and soft), and the Berlin Emotional Speech (BES) database, annotated with seven different emotions (angry, bored, disgust, fear, happy, sad and neutral). The average classification accuracy achieved for the SUSE data (74%-76%) is significantly higher than the accuracy achieved for the BES data (51%-54%). In both cases, the accuracy was significantly higher than the respective random guessing levels (33% for SUSE and 14.3% for BES).
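
The band-energy-entropy idea can be sketched in a few lines. The paper computes entropy over wavelet packet bands; as a stand-in assumption, the sketch below uses uniform FFT sub-bands, which preserves the key behaviour (tonal signals concentrate energy, so entropy is low; noise spreads energy, so entropy is high):

```python
import numpy as np

def band_energy_entropy(signal, n_bands=8):
    """Shannon entropy (bits) of the normalized energy distribution
    across frequency sub-bands. Uniform FFT bands stand in here for
    the wavelet packet bands used in the paper."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    energies = np.array([b.sum() for b in bands])
    p = energies / energies.sum()
    p = p[p > 0]                      # avoid log(0)
    return float(-(p * np.log2(p)).sum())

# A pure tone concentrates energy in one band (entropy near 0);
# white noise spreads it evenly (entropy near log2(8) = 3 bits).
t = np.linspace(0, 1, 4000, endpoint=False)
rng = np.random.default_rng(1)
h_tone = band_energy_entropy(np.sin(2 * np.pi * 50 * t))
h_noise = band_energy_entropy(rng.standard_normal(4000))
```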

  10. DSA Image Blood Vessel Skeleton Extraction Based on Anti-concentration Diffusion and Level Set Method

    NASA Astrophysics Data System (ADS)

    Xu, Jing; Wu, Jian; Feng, Daming; Cui, Zhiming

    Serious vascular diseases such as carotid stenosis, aneurysm and vascular malformation may lead to brain stroke, the third leading cause of death and the number one cause of disability. In the clinical diagnosis and treatment of cerebral vascular diseases, effective detection and description of the vascular structure in two-dimensional angiography sequence images, that is, blood vessel skeleton extraction, has long been a difficult problem. This paper discusses two-dimensional blood vessel skeleton extraction based on the level set method. First, the DSA image is preprocessed: an anti-concentration diffusion model is applied for effective enhancement, and an improved Otsu local threshold segmentation technique based on regional division is used for binarization. Then, vascular skeleton extraction is performed using the group marching method (GMM) with fast sweeping. Experiments show that our approach not only improves the time complexity but also produces good extraction results.

  11. Predicting the Trajectories of Perceived Pain Intensity in Southern Community-Dwelling Older Adults: The Role of Religiousness

    PubMed Central

    Sun, Fei; Park, Nan Sook; Wardian, Jana; Lee, Beom S.; Roff, Lucinda L.; Klemmack, David L.; Parker, Michael W.; Koenig, Harold G.; Sawyer, Patricia L.; Allman, Richard M.

    2013-01-01

    This study focuses on the identification of multiple latent trajectories of pain intensity, and it examines how religiousness is related to different classes of pain trajectory. Participants were 720 community-dwelling older adults who were interviewed at four time points over a 3-year period. Overall, intensity of pain decreased over 3 years. Analysis using latent growth mixture modeling (GMM) identified three classes of pain: (1) increasing (n = 47); (2) consistently unchanging (n = 292); and (3) decreasing (n = 381). Higher levels of intrinsic religiousness (IR) at baseline were associated with higher levels of pain at baseline, although it attenuated the slope of pain trajectories in the increasing pain group. Higher service attendance at baseline was associated with a higher probability of being in the decreasing pain group. The increasing pain group and the consistently unchanging group reported more negative physical and mental health outcomes than the decreasing pain group. PMID:24187410

  12. Speech Enhancement Using Gaussian Scale Mixture Models

    PubMed Central

    Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.

    2011-01-01

    This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM, and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), and those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
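
The generative assumption stated above (coefficient x is zero-mean Gaussian with variance exp(s), where the log-spectrum s is itself random) can be illustrated numerically. This is a toy sketch, not the authors' model; the lognormal scale distribution is an assumption chosen for simplicity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Scale variable: the log-spectrum s. Coefficient x | s ~ N(0, exp(s)),
# i.e. the standard deviation is exp(s / 2).
s = rng.normal(loc=0.0, scale=1.0, size=n)
x = rng.normal(loc=0.0, scale=np.exp(0.5 * s))

# Marginally, x is a Gaussian scale mixture: heavier-tailed than a
# Gaussian, so its excess kurtosis is clearly positive (a pure
# Gaussian has excess kurtosis 0). Its variance is E[exp(s)] = e^0.5.
var_x = x.var()
kurt = ((x - x.mean()) ** 4).mean() / var_x**2 - 3.0
```

The positive excess kurtosis is exactly why scale mixtures model the peaky, heavy-tailed statistics of speech coefficients better than a single Gaussian.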

  13. Globalization and economic growth: empirical evidence on the role of complementarities.

    PubMed

    Samimi, Parisa; Jenatabadi, Hashem Salarzadeh

    2014-01-01

    This study was carried out to investigate the effect of economic globalization on economic growth in OIC countries. Furthermore, the study examined the effect of complementary policies on the growth effect of globalization. It also investigated whether the growth effect of globalization depends on the income level of countries. Utilizing the generalized method of moments (GMM) estimator within the framework of a dynamic panel data approach, we provide evidence which suggests that economic globalization has a statistically significant impact on economic growth in OIC countries. The results indicate that this positive effect is increased in countries with better-educated workers and well-developed financial systems. Our findings show that the effect of economic globalization also depends on the country's level of income: high- and middle-income countries benefit from globalization, whereas low-income countries do not. In fact, countries must reach an appropriate income level to benefit from globalization. Economic globalization not only directly promotes growth but also does so indirectly via complementary reforms.
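
GMM estimation is used here (and in several other records on this page) because regressors in growth equations are endogenous. The simplest instance of a GMM estimator is the just-identified instrumental-variables case, sketched below on synthetic data; this is a textbook illustration, not the dynamic panel (Arellano-Bond-style) estimator used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Simulate an endogenous regressor: x is correlated with the error u,
# while the instrument z is correlated with x but not with u.
z = rng.standard_normal(n)
u = rng.standard_normal(n)
x = 0.8 * z + 0.5 * u + 0.3 * rng.standard_normal(n)
beta_true = 2.0
y = beta_true * x + u

# OLS is inconsistent because cov(x, u) != 0.
beta_ols = (x @ y) / (x @ x)

# The just-identified GMM/IV estimator solves the sample moment
# condition E[z * (y - beta * x)] = 0  =>  beta = (z'y) / (z'x).
beta_iv = (z @ y) / (z @ x)
```

With a valid instrument, `beta_iv` recovers the true coefficient while `beta_ols` stays biased upward by the endogeneity.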

  14. Globalization and Economic Growth: Empirical Evidence on the Role of Complementarities

    PubMed Central

    Samimi, Parisa; Jenatabadi, Hashem Salarzadeh

    2014-01-01

    This study was carried out to investigate the effect of economic globalization on economic growth in OIC countries. Furthermore, the study examined the effect of complementary policies on the growth effect of globalization. It also investigated whether the growth effect of globalization depends on the income level of countries. Utilizing the generalized method of moments (GMM) estimator within the framework of a dynamic panel data approach, we provide evidence which suggests that economic globalization has a statistically significant impact on economic growth in OIC countries. The results indicate that this positive effect is increased in countries with better-educated workers and well-developed financial systems. Our findings show that the effect of economic globalization also depends on the country’s level of income: high- and middle-income countries benefit from globalization, whereas low-income countries do not. In fact, countries must reach an appropriate income level to benefit from globalization. Economic globalization not only directly promotes growth but also does so indirectly via complementary reforms. PMID:24721896

  15. Land use impact on water quality: valuing forest services in terms of the water supply sector.

    PubMed

    Fiquepron, Julien; Garcia, Serge; Stenger, Anne

    2013-09-15

    The aim of this paper is to quantify the impact of the forest on raw water quality within the framework of other land uses. On the basis of measurements of quality parameters that were identified as being the most problematic (i.e., pesticides and nitrates), we modeled how water quality is influenced by land uses. In order to assess the benefits provided by the forest in terms of improved water quality, we used variations of drinking water prices that were determined by the operating costs of water supply services (WSS). Given the variability of links between forests and water quality, we chose to cover all of France using data observed in each administrative department (France is divided into 95 départements), including a description of WSS and information on land uses. We designed a model that describes the impact of land uses on water quality, as well as the operation of WSS and prices. This bioeconomic model was estimated by the generalized method of moments (GMM) to account for endogeneity and heteroscedasticity issues. We showed that the forest has a positive effect on raw water quality compared to other land uses, with an indirect impact on water prices, making them lower for consumers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. An Advanced Deep Learning Approach for Ki-67 Stained Hotspot Detection and Proliferation Rate Scoring for Prognostic Evaluation of Breast Cancer.

    PubMed

    Saha, Monjoy; Chakraborty, Chandan; Arun, Indu; Ahmed, Rosina; Chatterjee, Sanjoy

    2017-06-12

    Being a non-histone protein, Ki-67 is one of the essential biomarkers for the immunohistochemical assessment of proliferation rate in breast cancer screening and grading. The Ki-67 signature is always sensitive to radiotherapy and chemotherapy. Due to random morphological, color and intensity variations of cell nuclei (immunopositive and immunonegative), manual/subjective assessment of Ki-67 scoring is error-prone and time-consuming. Hence, several machine learning approaches have been reported; nevertheless, none of them has addressed deep-learning-based hotspot detection and proliferation scoring. In this article, we propose an advanced deep learning model for computerized recognition of candidate hotspots and subsequent proliferation rate scoring by quantifying Ki-67 appearance in breast cancer immunohistochemical images. Unlike existing Ki-67 scoring techniques, our methodology uses a gamma mixture model (GMM) with expectation-maximization for seed-point detection and patch selection, and a deep learning model with a decision layer for hotspot detection and proliferation scoring. Experimental results show a precision of 0.93, a recall of 0.88, and an F-score of 0.91. The model performance has also been compared with pathologists' manual annotations and recently published articles. The proposed deep learning framework should prove reliable and beneficial to junior and senior pathologists for fast and efficient Ki-67 scoring.

  17. A Robust Wireless Sensor Network Localization Algorithm in Mixed LOS/NLOS Scenario.

    PubMed

    Li, Bing; Cui, Wei; Wang, Bin

    2015-09-16

    Localization algorithms based on received signal strength indication (RSSI) are widely used in the field of target localization due to their convenient application and independence from dedicated hardware. Unfortunately, RSSI values are susceptible to fluctuation under the influence of non-line-of-sight (NLOS) propagation in indoor spaces. Existing algorithms often produce unreliable estimated distances, leading to low accuracy and low effectiveness in indoor target localization. Moreover, these approaches require extra prior knowledge about the propagation model. As such, we focus on the problem of localization in mixed LOS/NLOS scenarios and propose a novel localization algorithm: Gaussian-mixture-model-based non-metric multidimensional scaling (GMDS). In GMDS, the RSSI is estimated using a Gaussian mixture model (GMM). A dissimilarity matrix is built to generate relative coordinates of the nodes via a multidimensional scaling (MDS) approach. Finally, based on the anchor nodes' actual coordinates and the target's relative coordinates, the target's actual coordinates can be computed via coordinate transformation. Our algorithm performs localization estimation well without prior knowledge. Experimental verification shows that GMDS effectively reduces NLOS error, achieves higher accuracy in indoor mixed LOS/NLOS localization, and remains effective when single NLOS is extended to multiple NLOS.
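
The MDS step (recovering relative node coordinates from a dissimilarity matrix) can be sketched with classical metric MDS via double centering; the paper uses a non-metric variant, so this is an illustrative simplification, not the GMDS algorithm itself:

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover relative coordinates (up to rotation, reflection and
    translation) from a matrix of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # Gram matrix
    vals, vecs = np.linalg.eigh(B)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:dim]           # keep top eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Sensor-network style example: recover a layout from distances alone.
pts = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 4.0], [3.0, 4.0]])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
rec = classical_mds(D)
# The recovered layout reproduces all pairwise distances exactly.
D_rec = np.linalg.norm(rec[:, None, :] - rec[None, :, :], axis=-1)
```

The final coordinate transformation mentioned in the abstract (aligning relative coordinates to anchors) would then be a Procrustes-style rigid fit.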

  18. The mass of Mars, Phobos, and Deimos, from the analysis of the Mariner 9 and Viking Orbiter tracking data

    NASA Technical Reports Server (NTRS)

    Smith, D. E.; Lemoine, F. G.; Fricke, S. K.

    1994-01-01

    We have estimated the mass of Phobos, Deimos, and Mars using the Viking Orbiter and Mariner 9 tracking data. We divided the data into 282 arcs and sorted the data by periapse height, by inclination, and by satellite. The data were processed with the GEODYN/SOLVE orbit determination programs, which have previously been used to analyze planetary tracking data. The a priori Mars gravity field applied in this study was the 50th degree and order GMM-1 (Goddard Mars Model-1) model. The subsets of data were carefully edited to remove any arcs with close encounters of less than 500 km with either Phobos or Deimos. Whereas previous investigators have used close flybys (less than 500 km) to estimate the satellite masses, we have attempted to estimate the masses of Phobos and Deimos from multiday arcs which only included more distant encounters. The subsets of data were further edited to eliminate spurious data near solar conjunction (Nov.-Dec. 1976 and January 1979). In addition, the Viking-1 data from Oct. through Dec. 1978 were also excluded because of the low periapse altitude (as low as 232 km) and thus high sensitivity to atmospheric drag.

  19. Investigation of Parametric Influence on the Properties of Al6061-SiCp Composite

    NASA Astrophysics Data System (ADS)

    Adebisi, A. A.; Maleque, M. A.; Bello, K. A.

    2017-03-01

    The influence of process parameters in stir casting plays a major role in the development of aluminium reinforced silicon carbide particle (Al-SiCp) composites. This study investigates the influence of process parameters on the wear and density properties of Al-SiCp composite produced by the stir casting technique. Experimental data are generated based on a four-factor, five-level central composite design of response surface methodology. Analysis of variance is used to confirm the adequacy and validity of the developed models, considering the significant model terms. Optimization of the process parameters adequately predicts the Al-SiCp composite properties, with stirring speed as the most influential factor. The aim of the optimization is to minimize wear and maximize density. The multiple objective optimization (MOO) achieved optimal values of 14 wt% reinforcement fraction (RF), 460 rpm stirring speed (SS), 820 °C processing temperature (PTemp) and 150 s processing time (PT). At the optimum parametric combination, wear mass loss reached a minimum of 1 x 10^-3 g and density a maximum of 2.780 g/mm3, with a confidence and desirability level of 95.5%.

  20. Complex Sequencing Rules of Birdsong Can be Explained by Simple Hidden Markov Processes

    PubMed Central

    Katahira, Kentaro; Suzuki, Kenta; Okanoya, Kazuo; Okada, Masato

    2011-01-01

    Complex sequencing rules observed in birdsongs provide an opportunity to investigate the neural mechanism for generating complex sequential behaviors. To relate the findings from studying birdsongs to other sequential behaviors such as human speech and musical performance, it is crucial to characterize the statistical properties of the sequencing rules in birdsongs. However, the properties of the sequencing rules in birdsongs have not yet been fully addressed. In this study, we investigate the statistical properties of the complex birdsong of the Bengalese finch (Lonchura striata var. domestica). Based on manually annotated syllable labels, we first show that there are significant higher-order context dependencies in Bengalese finch songs; that is, which syllable appears next depends on more than one previous syllable. We then analyze acoustic features of the song and show that higher-order context dependencies can be explained using first-order hidden state transition dynamics with redundant hidden states. This model corresponds to hidden Markov models (HMMs), well-known statistical models widely applied to time-series modeling. Song annotation with these first-order hidden-state models agreed well with manual annotation; the score was comparable to that of a second-order HMM and surpassed the zeroth-order model (the Gaussian mixture model; GMM), which does not use context information. Our results imply that a hierarchical representation with hidden state dynamics may underlie the neural implementation for generating complex behavioral sequences with higher-order dependencies. PMID:21915345
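
The central claim (first-order hidden dynamics with redundant hidden states can produce higher-order dependencies in the observed symbols) admits a tiny constructed demonstration. This is a toy chain, not the finch data: two hidden states both emit the syllable 'a', so what follows 'a' depends on what preceded it:

```python
# Hidden chain (first-order): B -> A_b -> C -> A_c -> B -> ...
# States A_b (1) and A_c (3) are "redundant": both emit syllable 'a'.
emit = {0: 'b', 1: 'a', 2: 'c', 3: 'a'}
next_state = {0: 1, 1: 2, 2: 3, 3: 0}

state, seq = 0, []
for _ in range(400):
    seq.append(emit[state])
    state = next_state[state]

# Conditioned only on the previous syllable 'a', the next syllable
# looks ambiguous (both 'b' and 'c' occur)...
after_a = [seq[i + 1] for i in range(len(seq) - 1) if seq[i] == 'a']
# ...but conditioned on the PAIR of preceding syllables it is
# deterministic: a second-order dependency in the observations,
# generated by purely first-order hidden dynamics.
after_ba = {seq[i + 2] for i in range(len(seq) - 2)
            if seq[i] == 'b' and seq[i + 1] == 'a'}
after_ca = {seq[i + 2] for i in range(len(seq) - 2)
            if seq[i] == 'c' and seq[i + 1] == 'a'}
```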

  1. The development of a Miocene extensional and short-lived basin in the Andean broken foreland: The Conglomerado Los Patos, Northwestern Argentina

    NASA Astrophysics Data System (ADS)

    del Papa, Cecilia E.; Petrinovic, Ivan A.

    2017-01-01

    The Conglomerado Los Patos is a coarse-grained clastic unit that crops out irregularly in the San Antonio de los Cobres Valley in the Puna, Northwestern Argentina. It covers different units of the Cretaceous-Paleogene Salta Group by means of an angular unconformity and, in turn, is overlain in angular unconformity by the Viscachayoc Ignimbrite (13 ± 0.3 Ma) or by late Miocene tuffs. Three lithofacies have been identified in the Corte Blanco locality: 1) bouldery matrix-supported conglomerate (Gmm); 2) clast-supported conglomerate (Gch); and 3) imbricated clast-supported conglomerate (Gci). The stratigraphic pattern displays a general fining-upward trend. The sedimentary facies association suggests gravitational flow processes and sedimentation in alluvial fan settings, from proximal to medial fan positions, together with a slope decrease upsection. Provenance studies reveal sediments sourced from Precambrian to Ordovician units located to the southwest, except for volcanic clasts in the Gmm facies that show a U/Pb age of 14.5 ± 0.5 Ma. This new age represents the maximum depositional age for the Conglomerado Los Patos, and it documents that deposition took place during a period of increased tectonic and volcanic activity in the area. The structural analysis of the San Antonio de los Cobres Valley and the available thermochronological ages indicate active N-S main thrusts and NW-SE transpressive and locally normal faults during the middle Miocene. In this context, we interpret the Conglomerado Los Patos to represent sedimentation in a small, extensional and short-lived basin associated with the compressional Andean setting.

  2. Pharmacological characterisation of extracts of coffee dusts.

    PubMed Central

    Zuskin, E; Duncan, P G; Douglas, J S

    1983-01-01

    The contractile or relaxant activities or both of aqueous extracts of green and roasted coffees were assayed on isolated guinea pig tracheal spirals. Contractile and relaxant activities were compared with histamine and theophylline, respectively. Green coffee extracts induced concentration dependent contraction, but the maximal tension never exceeded 76.3% +/- 5.2 of a maximal histamine contraction (0.69 +/- 0.07 g/mm2 v 0.52 +/- 0.05 g/mm2; p < 0.01). One gram of green coffee dust had a biological activity equivalent to 1.23 +/- 0.1 mg of histamine. The pD2 value of histamine was -5.17 +/- 0.05. The potency of green coffee was unaffected by mepyramine maleate (1 microgram/ml, final bath concentration) while that of histamine was reduced 500 fold. Tissues contracted with histamine were not significantly relaxed by green coffee extracts. By contrast, roasted coffee extracts induced concentration dependent relaxation of uncontracted and histamine contracted tissues. Tissues contracted with green coffee extracts were also completely relaxed by roasted coffee extracts. The pD2 value of theophylline was -4.10 +/- 0.03. The relaxant activity of 1 g of roasted coffee was equivalent to 1.95 +/- 0.16 mg of theophylline. The potency of these extracts was significantly reduced after propranolol (1 microgram/ml; dose ratio 1.56). Our results show that coffee dust extracts have considerable biological activity which changes from a contractile to a relaxant action as a consequence of processing. The greater incidence of adverse reactions to green coffee dust(s) in coffee workers may be related to the contractile activity present in green coffee dust. PMID:6830717

  3. OCCURRENCE OF HIGH-SPEED SOLAR WIND STREAMS OVER THE GRAND MODERN MAXIMUM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mursula, K.; Holappa, L.; Lukianova, R., E-mail: kalevi.mursula@oulu.fi

    2015-03-01

    In the declining phase of the solar cycle (SC), when the new-polarity fields of the solar poles are strengthened by the transport of same-signed magnetic flux from lower latitudes, the polar coronal holes expand and form non-axisymmetric extensions toward the solar equator. These extensions enhance the occurrence of high-speed solar wind (SW) streams (HSS) and related co-rotating interaction regions in the low-latitude heliosphere, and cause moderate, recurrent geomagnetic activity (GA) in the near-Earth space. Here, using a novel definition of GA at high (polar cap) latitudes and the longest record of magnetic observations at a polar cap station, we calculate the annually averaged SW speeds as proxies for the effective annual occurrence of HSS over the whole Grand Modern Maximum (GMM) from the 1920s onward. We find that a period of high annual speeds (frequent occurrence of HSS) occurs in the declining phase of each of SCs 16-23. For most cycles the HSS activity clearly reaches a maximum in one year, suggesting that typically only one strong activation leading to a coronal hole extension is responsible for the HSS maximum. We find that the most persistent HSS activity occurred in the declining phase of SC 18. This suggests that cycle 19, which marks the sunspot maximum period of the GMM, was preceded by exceptionally strong polar fields during the previous sunspot minimum. This gives interesting support for the validity of solar dynamo theory during this dramatic period of solar magnetism.

  4. De novo identification of replication-timing domains in the human genome by deep learning.

    PubMed

    Liu, Feng; Ren, Chao; Li, Hao; Zhou, Pingkun; Bo, Xiaochen; Shu, Wenjie

    2016-03-01

    The de novo identification of the initiation and termination zones-regions that replicate earlier or later than their upstream and downstream neighbours, respectively-remains a key challenge in DNA replication. Building on advances in deep learning, we developed a novel hybrid architecture combining a pre-trained, deep neural network and a hidden Markov model (DNN-HMM) for the de novo identification of replication domains using replication timing profiles. Our results demonstrate that DNN-HMM can significantly outperform strong, discriminatively trained Gaussian mixture model-HMM (GMM-HMM) systems and six other reported methods that can be applied to this challenge. We applied our trained DNN-HMM to identify distinct replication domain types, namely the early replication domain (ERD), the down transition zone (DTZ), the late replication domain (LRD) and the up transition zone (UTZ), using newly replicated DNA sequencing (Repli-Seq) data across 15 human cells. A subsequent integrative analysis revealed that these replication domains harbour unique genomic and epigenetic patterns, transcriptional activity and higher-order chromosomal structure. Our findings support the 'replication-domain' model, which states (1) that ERDs and LRDs, connected by UTZs and DTZs, are spatially compartmentalized structural and functional units of higher-order chromosomal structure, (2) that the adjacent DTZ-UTZ pairs form chromatin loops and (3) that intra-interactions within ERDs and LRDs tend to be short-range and long-range, respectively. Our model reveals an important chromatin organizational principle of the human genome and represents a critical step towards understanding the mechanisms regulating replication timing. Our DNN-HMM method and three additional algorithms can be freely accessed at https://github.com/wenjiegroup/DNN-HMM. The replication domain regions identified in this study are available in GEO under the accession ID GSE53984. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  5. Robust group-wise rigid registration of point sets using t-mixture model

    NASA Astrophysics Data System (ADS)

    Ravikumar, Nishant; Gooya, Ali; Frangi, Alejandro F.; Taylor, Zeike A.

    2016-03-01

    We propose a probabilistic framework for robust, group-wise rigid alignment of point sets using a mixture of Student's t-distributions, designed especially for cases where the point sets are of varying lengths, are corrupted by an unknown degree of outliers, or contain missing data. Medical images (in particular magnetic resonance (MR) images), their segmentations and, consequently, the point sets generated from these are highly susceptible to corruption by outliers. This poses a problem for robust correspondence estimation and accurate alignment of shapes, necessary for training statistical shape models (SSMs). To address these issues, this study proposes a t-mixture model (TMM) to approximate the underlying joint probability density of a group of similar shapes and align them to a common reference frame. The heavy-tailed nature of t-distributions provides a more robust registration framework than state-of-the-art algorithms. Significant reduction in alignment errors is achieved in the presence of outliers using the proposed TMM-based group-wise rigid registration method, in comparison to its Gaussian mixture model (GMM) counterparts. The proposed TMM framework is compared with a group-wise variant of the well-known Coherent Point Drift (CPD) algorithm and two other group-wise methods using GMMs, on both synthetic and real data sets. Rigid alignment errors for groups of shapes are quantified using the Hausdorff distance (HD) and quadratic surface distance (QSD) metrics.
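
The Hausdorff distance used above as an evaluation metric is straightforward to compute directly; a minimal NumPy sketch on two toy point sets (not the study's shape data):

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between two point sets: the largest
    distance from any point in one set to its nearest neighbour in the
    other set."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 2.0]])
hd = hausdorff_distance(A, B)   # worst mismatch is the point (0, 2)
```

Because it takes a maximum over points, the metric is itself sensitive to outliers, which is precisely why robust alignment matters before evaluating it.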

  6. Ground motion models used in the 2014 U.S. National Seismic Hazard Maps

    USGS Publications Warehouse

    Rezaeian, Sanaz; Petersen, Mark D.; Moschetti, Morgan P.

    2015-01-01

    The National Seismic Hazard Maps (NSHMs) are an important component of seismic design regulations in the United States. This paper compares hazard using the new suite of ground motion models (GMMs) relative to hazard using the suite of GMMs applied in the previous version of the maps. The new source characterization models are used for both cases. A previous paper (Rezaeian et al. 2014) discussed the five NGA-West2 GMMs used for shallow crustal earthquakes in the Western United States (WUS), which are also summarized here. Our focus in this paper is on GMMs for earthquakes in stable continental regions in the Central and Eastern United States (CEUS), as well as subduction interface and deep intraslab earthquakes. We consider building code hazard levels for peak ground acceleration (PGA), 0.2-s, and 1.0-s spectral accelerations (SAs) on uniform firm-rock site conditions. The GMM modifications in the updated version of the maps created changes in hazard within 5% to 20% in WUS; decreases within 5% to 20% in CEUS; changes within 5% to 15% for subduction interface earthquakes; and changes involving decreases of up to 50% and increases of up to 30% for deep intraslab earthquakes for most U.S. sites. These modifications were combined with changes resulting from modifications in the source characterization models to obtain the new hazard maps.

  7. Short-term monitoring of a gas seep field in the Katakolo bay (Western Greece) using Raman spectra DTS and DAS fibre-optic methods

    NASA Astrophysics Data System (ADS)

    Chalari, A.; Mondanos, M.; Finfer, D.; Christodoulou, D.; Kordella, S.; Papatheodorou, G.; Geraga, M.; Ferentinos, G.

    2012-12-01

    A wide submarine seep of thermogenic gas in the Katakolo bay, Western Greece, was monitored passively using the intelligent Distributed Acoustic Sensor (iDAS) and the Ultima Raman spectra Distributed Temperature Sensor (DTS), in order to study the thermal and noise signal of the bubble plumes released from the seafloor. Katakolo is one of the most prolific thermogenic gas seepage zones in Europe and the biggest methane seep ever reported in Greece. Very detailed repetitive offshore gas surveys, including marine remote sensing (sub-bottom profiling, side scan sonar), underwater exploration by a towed instrumented system (MEDUSA), a long-term monitoring benthic station (GMM), compositional and isotopic analyses, and flux measurements of gas, showed that: (a) gas seepage takes place over an extended area in the Katakolo harbour and along two main normal faults off the harbour; (b) at least 823 gas bubble plumes (10-20 cm in diameter) escape over an area of 94,200 m2, at depths ranging from 5.5 to 16 m; (c) the gas consists mainly of methane, has H2S levels of hundreds to thousands of ppmv, and shows significant amounts of other light hydrocarbons such as ethane, propane, iso-butane and C6 alkanes; (d) offshore and onshore seeps release the same type of thermogenic gas; (e) due to the shallow depth, more than 90% of the CH4 released at the seabed enters the atmosphere; and (f) the gas seeps may produce severe geohazards for people, buildings and construction facilities due to the explosive and toxicological properties of methane and hydrogen sulfide, respectively. For the short-term monitoring, the deployment took place on a site inside the harbour of Katakolo, within a thermogenic gas seepage area where active faults intersect. The iDAS system makes it possible to observe the acoustic signal along the entire length of an unmodified optical cable without introducing any form of point sensors such as Bragg gratings.
When the bubble plumes are released from the seabed into the water column, they ring at their resonance frequency in a manner consistent with standard bubble acoustics. This bubble ringing can be detected by iDAS, allowing seepage detection and quantification as well as correlation with seismic activity. The DTS system makes it possible to observe temporal variations of the gas plumes and their relationship with the meteorological factors of the area. Moreover, the DTS and iDAS data must be interpreted in detail against the long-term GMM monitoring data (O2, CH4, H2S, temperature, pressure and conductivity) collected from the same location. The processing chain used to observe this phenomenon can have applications in both industrial and environmental monitoring capacities.
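The "bubble ringing" invoked above is standard bubble acoustics: the Minnaert resonance f0 = (1 / (2*pi*a)) * sqrt(3*gamma*p0 / rho) for a gas bubble of radius a in water. The sketch below plugs in assumed values (methane gamma ~ 1.3, seawater density, ~10 m depth) that are not taken from the study, just to show the order of magnitude of the expected acoustic signature:

```python
import math

# Minnaert resonance frequency of a gas bubble of radius a (m) at depth (m).
# gamma and rho are assumptions (methane gas, seawater), not the paper's values.
def minnaert_frequency(radius_m, depth_m, gamma=1.3, rho=1025.0):
    p0 = 101325.0 + rho * 9.81 * depth_m      # hydrostatic pressure at depth
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

f_low = minnaert_frequency(0.10, 10.0)    # 20 cm diameter bubble
f_high = minnaert_frequency(0.05, 10.0)   # 10 cm diameter bubble
# Larger bubbles ring at lower frequency; both land in the tens of Hz,
# comfortably inside the band a distributed acoustic sensor can record.
```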

  8. Application of recurrence quantification analysis for the automated identification of epileptic EEG signals.

    PubMed

    Acharya, U Rajendra; Sree, S Vinitha; Chattopadhyay, Subhagata; Yu, Wenwei; Ang, Peng Chuan Alvin

    2011-06-01

    Epilepsy is a common neurological disorder that is characterized by the recurrence of seizures. Electroencephalogram (EEG) signals are widely used to diagnose seizures. Because of the non-linear and dynamic nature of EEG signals, it is difficult to decipher the subtle changes in these signals by visual inspection or with linear techniques. Therefore, non-linear methods are being researched to analyze EEG signals. In this work, we represent the recorded EEG signals as Recurrence Plots (RPs), and extract Recurrence Quantification Analysis (RQA) parameters from the RPs in order to classify the EEG signals into normal, ictal, and interictal classes. A Recurrence Plot is a graph that shows all the times at which a state of the dynamical system recurs. Studies have reported significantly different RQA parameters for the three classes. However, more studies are needed to develop classifiers that use these promising features and achieve good classification accuracy in differentiating the three types of EEG segments. Therefore, in this work, we have used ten RQA parameters to quantify the important features in the EEG signals. These features were fed to seven different classifiers: Support Vector Machine (SVM), Gaussian Mixture Model (GMM), Fuzzy Sugeno Classifier, K-Nearest Neighbor (KNN), Naive Bayes Classifier (NBC), Decision Tree (DT), and Radial Basis Probabilistic Neural Network (RBPNN). Our results show that the SVM classifier was able to identify the EEG class with an average efficiency of 95.6%, and sensitivity and specificity of 98.9% and 97.8%, respectively.
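As a sketch of the final classification stage, the snippet below runs one of the seven classifiers listed above, a k-nearest-neighbour rule, on synthetic stand-ins for the ten-dimensional RQA feature vectors. The data and class separations are made up; real features would come from recurrence plots of EEG segments.

```python
import numpy as np

# A plain-numpy k-nearest-neighbour classifier: majority vote among the
# k training points closest (in Euclidean distance) to each test point.
def knn_predict(train_X, train_y, test_X, k=5):
    d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = train_y[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(1)
# Three well-separated synthetic classes standing in for normal / interictal /
# ictal feature vectors (10 features each, mimicking the ten RQA parameters).
X = np.vstack([rng.normal(c, 0.3, size=(60, 10)) for c in (0.0, 1.5, 3.0)])
y = np.repeat([0, 1, 2], 60)
pred = knn_predict(X, y, X + rng.normal(0, 0.05, X.shape))
accuracy = (pred == y).mean()
```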

  9. Oil prices, fiscal policy, and economic growth in oil-exporting countries

    NASA Astrophysics Data System (ADS)

    El-Anshasy, Amany A.

    This dissertation argues that in oil-exporting countries fiscal policy could play an important role in transmitting oil shocks to the economy and that the indirect effects of changes in oil prices via the fiscal channel could be quite significant. The study comprises three distinct, yet related, essays. In the first essay, I study the fiscal policy response to changes in oil prices and to their growing volatility. In a dynamic general equilibrium framework, a fiscal policy reaction function is derived and is empirically tested for a panel of 15 oil-exporters covering the period 1970--2000. After the link between oil price shocks and fiscal policy is established, the second essay investigates the impact of the highly volatile oil prices on economic growth for the same sample, controlling for the fiscal channel. In both essays the study employs a recent dynamic panel-data estimation technique, System GMM. This approach has the potential advantages of minimizing the bias resulting from estimating dynamic panel models, exploiting the time series properties of the data, controlling for the unobserved country-specific effects, and correcting for any simultaneity bias. In the third essay, I focus on the case of Venezuela for the period 1950--2001. Recent developments in the cointegrated vector autoregression (CVAR) technique are applied to provide a suitable framework for analyzing the short-run dynamics and the long-run relationships among oil prices, government revenues, government consumption, investment, and output.

  10. Cost sharing and hospitalizations for ambulatory care sensitive conditions.

    PubMed

    Arrieta, Alejandro; García-Prado, Ariadna

    2015-01-01

    During the last decade, Chile's private health sector has experienced a dramatic increase in hospitalization rates, growing at four times the rate of ambulatory visits. Such evolution has raised concern among policy-makers. We studied the effect of ambulatory and hospital co-insurance rates on hospitalizations for ambulatory care sensitive conditions (ACSC) among individuals with private insurance in Chile. We used a large administrative dataset of private insurance claims for the period 2007-8 and a final sample of 2,792,662 individuals to estimate a structural model of two equations. The first equation was for ambulatory visits and the second for future hospitalizations for ACSC. We estimated the system by Two-Stage Least Squares (2SLS), corrected for heteroskedasticity via Generalized Method of Moments (GMM) estimation. Results show that increased ambulatory visits reduced the probability of future hospitalizations, and increased ambulatory co-insurance decreased ambulatory visits for the adult population (19-65 years old). Both findings indicate the need to reduce ambulatory co-insurance as a way to reduce hospitalizations for ACSC. Results also showed that increasing hospital co-insurance significantly reduces hospitalizations for the adult group, while it does not seem to have a significant effect on hospitalizations for the children (1-18 years old) group. This paper's contribution is twofold: first, it shows how the level of co-insurance can be a determinant in avoiding unnecessary hospitalizations for certain conditions; second, it highlights the relevance for policy-making of using data on ACSC to improve the efficiency of health systems by promoting ambulatory care as well as population health. Copyright © 2014 Elsevier Ltd. All rights reserved.
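The two-stage least squares idea behind the estimation above can be sketched on synthetic data. Everything below (variable names, coefficients, the instrument) is illustrative and not taken from the study; it only shows why instrumenting an endogenous regressor removes the confounding bias that plain OLS suffers:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
z = rng.normal(size=n)                       # instrument (exogenous shifter)
u = rng.normal(size=n)                       # unobserved confounder
x = 1.0 * z + u + rng.normal(size=n)         # endogenous regressor (e.g. visits)
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # outcome; true effect of x is 2.0

def ols(X, target):
    # Least-squares coefficients for design matrix X.
    return np.linalg.lstsq(X, target, rcond=None)[0]

ones = np.ones(n)
beta_naive = ols(np.column_stack([ones, x]), y)[1]     # biased upward by u
# Stage 1: project x on the instrument; Stage 2: regress y on the fitted x.
x_hat = np.column_stack([ones, z]) @ ols(np.column_stack([ones, z]), x)
beta_2sls = ols(np.column_stack([ones, x_hat]), y)[1]  # close to the true 2.0
```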

  11. Identification of trail pheromone of larva of eastern tent caterpillar Malacosoma americanum (Lepidoptera: Lasiocampidae).

    PubMed

    Crump, D; Silverstein, R M; Williams, H J; Fitzgerald, T D

    1987-03-01

    Previous studies have shown that larvae of the eastern tent caterpillar (Malacosoma americanum F.) mark trails, leading from their tent to feeding sites on host trees, with a pheromone secreted from the posterior tip of the abdominal sternum. 5β-Cholestane-3,24-dione (1) has been identified as an active component of the trail. The larvae have a threshold sensitivity to the pheromone of 10(-11) g/mm of trail. Several related compounds elicit the trail-following response. Two other species of tent caterpillars also responded positively to the pheromone in preliminary laboratory tests.

  12. Daily exercise prevents diastolic dysfunction and oxidative stress in a female mouse model of western diet induced obesity by maintaining cardiac heme oxygenase-1 levels.

    PubMed

    Bostick, Brian; Aroor, Annayya R; Habibi, Javad; Durante, William; Ma, Lixin; DeMarco, Vincent G; Garro, Mona; Hayden, Melvin R; Booth, Frank W; Sowers, James R

    2017-01-01

    Obesity is a global epidemic with profound cardiovascular disease (CVD) complications. Obese women are particularly vulnerable to CVD, suffering higher rates of CVD compared to non-obese females. Diastolic dysfunction is the earliest manifestation of CVD in obese women but remains poorly understood with no evidence-based therapies. We have shown early diastolic dysfunction in obesity is associated with oxidative stress and myocardial fibrosis. Recent evidence suggests exercise may increase levels of the antioxidant heme oxygenase-1 (HO-1). Accordingly, we hypothesized that diastolic dysfunction in female mice consuming a western diet (WD) could be prevented by daily volitional exercise with reductions in oxidative stress, myocardial fibrosis and maintenance of myocardial HO-1 levels. Four-week-old female C57BL/6J mice were fed a high-fat/high-fructose WD for 16 weeks (N=8) alongside control diet fed mice (N=8). A separate cohort of WD fed females was allowed a running wheel for the entire study (N=7). Cardiac function was assessed at 20 weeks by high-resolution cardiac magnetic resonance imaging (MRI). Functional assessment was followed by immunohistochemistry, transmission electron microscopy (TEM) and Western blotting to identify pathologic mechanisms and assess HO-1 protein levels. There was no significant body weight decrease in exercising mice, normalized body weight 14.3 g/mm, compared to sedentary mice, normalized body weight 13.6 g/mm (p=0.38). Total body fat was also unchanged in exercising mice, fat mass of 6.6 g, compared to sedentary mice, fat mass 7.4 g (p=0.55). Exercise prevented diastolic dysfunction with a significant reduction in left ventricular relaxation time to 23.8 ms for the exercising group compared to 33.0 ms in the sedentary group (p<0.01). Exercise markedly reduced oxidative stress and myocardial fibrosis with improved mitochondrial architecture. HO-1 protein levels were increased in the hearts of exercising mice compared to sedentary WD fed females.
This study provides seminal evidence that exercise can prevent diastolic dysfunction in WD-induced obesity in females even without changes in body weight. Furthermore, the reduction in myocardial oxidative stress and fibrosis and improved HO-1 levels in exercising mice suggests a novel mechanism for the antioxidant effect of exercise. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Correlation between cardiac remodelling, function, and myocardial contractility in rat hearts 5 weeks after myocardial infarction.

    PubMed

    Gosselin, H; Qi, X; Rouleau, J L

    1998-01-01

    Early after infarction, ventricular dysfunction occurs as a result of loss of myocardial tissue. Although papillary muscle studies suggest that reduced myocardial contractility contributes to this ventricular dysfunction, in vivo studies indicate that at rest, cardiac output is normal or near normal, suggesting that contractility of the remaining viable myocardium of the ventricular wall is preserved. However, this has never been verified. To explore this further, 100 rats with various-sized myocardial infarctions had ventricular function assessed by Langendorff preparation or by isolated papillary muscle studies 5 weeks after infarction. Morphologic studies were also done. Rats with large infarctions (54%) had marked ventricular dilatation (dilatation index from 0.23 to 0.75, p < 0.01) and papillary muscle dysfunction (total tension from 6.7 to 3.2 g/mm2, p < 0.01) but only moderate left ventricular dysfunction (maximum developed pressure from 206 to 151 mmHg (1 mmHg = 133.3 Pa), p < 0.01), a decrease less than one would expect with an infarct size of 54%. The contractility of the remaining viable myocardium of the ventricle was also moderately depressed (peak systolic midwall stress 91 to 60 mmHg, p < 0.01). Rats with moderate infarctions (32%) had less marked but still moderate ventricular dilatation (dilatation index 0.37, p < 0.001) and moderate papillary muscle dysfunction (total tension 4.2 g/mm2, p < 0.01). However, their decrease in ventricular function was only mild (maximum developed pressure 178 mmHg, p < 0.01) and less than one would expect with an infarct size of 32%. The remaining viable myocardium of the ventricular wall appeared to have normal contractility (peak systolic midwall stress = 86 mmHg, ns).
We conclude that in this postinfarction model, in large myocardial infarctions, a loss of contractility of the remaining viable myocardium of the ventricular wall occurs as early as 5 weeks after infarction and that papillary muscle studies slightly overestimate the degree of ventricular dysfunction. In moderate infarctions, the remaining viable myocardium of the ventricular wall has preserved contractility while papillary muscle function is depressed. In this relatively early postinfarction phase, ventricular remodelling appears to help maintain left ventricular function in both moderate and large infarctions.

  14. Rational ideation and empiric validation of an innovative digital dermographic tester.

    PubMed

    Lembo, C; Patruno, C; Balato, N; Ayala, F; Balato, A; Lembo, S

    2018-04-01

    Dermographism is a condition characterized by a weal response to a combination of pressure and traction on the skin surface, and its diagnosis is based on medical history, clinical criteria and a provocation test. The Dermographic Tester®, a pen-sized tool containing a spring-loaded blunt tip, is the most widely used instrument for the provocation test, and it exerts increasing pressures on the skin surface according to an arbitrary units (AU) scale. Analysing its mechanism of function and trying to convert the AUs to SI units (g/mm2), we found that this instrument had some defects and limits that would compromise a true and repeatable quantification of the weal response threshold. Consequently, we decided to develop a new instrument, the Digital Dermographic Tester (DDT), which is engineered with an internal force sensor to implement features lacking in current tools, in the hope of enhancing the precision of the provocation test. To validate the effectiveness and accuracy of the DDT, we tested it on 213 participants purposely sampled to obtain three groups, each with a different pattern of reaction to mechanical stimuli. Based on anamnestic, diagnostic and symptomatic criteria, patients were divided into dermographic urticaria (DU), spontaneous urticaria (SU) and healthy control (HC) groups. The DDT was used to apply 12 levels of pressure to the skin surface, and a frequency distribution of positive reactions was displayed for each group. A force of 36-40 g/mm2 appropriately differentiated physiological from pathological conditions with high sensitivity and specificity. The DDT was found to be capable of differentiating patients with DU from those with SU and from HCs, and was able to precisely identify the weal elicitation threshold. © 2017 British Association of Dermatologists.

  15. The effects of topical diclofenac, topical flurbiprofen, and humidity on corneal sensitivity in normal dogs.

    PubMed

    Dorbandt, Daniel M; Labelle, Amber L; Mitchell, Mark A; Hamor, Ralph E

    2017-03-01

    To determine the immediate and chronic effects of topical 0.1% diclofenac and 0.03% flurbiprofen on corneal sensitivity in normal canine eyes. Eighteen normal, nonbrachycephalic dogs. A prospective, randomized, masked, crossover study was performed. To determine the immediate effects associated with treatment, the study drug was instilled into the eye every 5 min for five doses, and corneal sensitivity of treated and untreated eyes was obtained prior to treatment and every 15 min post-treatment for 60 min. To determine the chronic effects, the study drug was instilled every 12 h for 30 days, and corneal sensitivity of treated and untreated eyes was obtained prior to treatment on days 0 and 30. A washout period of at least 30 days occurred between drug crossovers. Ambient temperature and humidity were measured throughout the study. After multiple instillations, there was no difference in corneal sensitivity between eyes over time for diclofenac (P = 0.67) or flurbiprofen (P = 0.54), with a median sensitivity of 25 mm (1.8 g/mm2). After chronic dosing, there was no difference in corneal sensitivity between eyes over time for diclofenac (P = 0.82) or flurbiprofen (P = 0.56), with a median sensitivity of 35 mm (1.0 g/mm2). Decreasing ambient humidity was associated with an increase in sensitivity measurements (P = 0.0001). Neither diclofenac nor flurbiprofen had an effect on corneal sensitivity after multiple drops or twice-daily dosing for 30 days. Ambient humidity may have an effect on corneal sensitivity measurements, with a longer filament length eliciting a blink response at lower humidity. © 2016 American College of Veterinary Ophthalmologists.

  16. Quantification of left ventricular myocardial mass in humans by nuclear magnetic resonance imaging.

    PubMed

    Ostrzega, E; Maddahi, J; Honma, H; Crues, J V; Resser, K J; Charuzi, Y; Berman, D S

    1989-02-01

    The ability of NMRI to assess LV mass was studied in 20 normal males. By means of a 1.5 Tesla GE superconducting magnet and a standard spin-echo pulse sequence, multiple gated short-axis and axial slices of the entire left ventricle were obtained. LV mass was determined by Simpson's rule with the use of a previously experimentally validated method. The weight of the LV apex (subject to partial volume effect in the short-axis images) was derived from axial slices and that of the remaining left ventricle from short-axis slices. The weight of each slice was calculated by multiplying the planimetered surface area of the LV myocardium by slice thickness and by myocardial specific gravity (1.05). Mean +/- standard deviation of LV mass and LV mass index were 146 +/- 23.1 gm (range 92.3 to 190.4 gm) and 78.4 +/- 7.8 gm/m2 (range 57.7 to 89.4 gm/m2), respectively. Interobserver agreement as assessed by ICC was high for determining 161 individual slice masses (ICC = 0.99) and for total LV mass (ICC = 0.97). Intraobserver agreement for total LV mass was also high (ICC = 0.96). NMRI-determined LV mass correlated with body surface area: LV mass = 55 + 108 body surface area, r = 0.83; with body weight: LV mass = 26 + 0.77 body weight, r = 0.82; and with body height: LV mass = -262 + 5.9 body height, r = 0.75. Normal limits were developed for these relationships. NMRI-determined LV mass as related to body weight was in agreement with normal limits derived from autopsy literature data.(ABSTRACT TRUNCATED AT 250 WORDS)
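The slice-summation ("Simpson's rule") mass computation described above reduces to a few lines: each slice's planimetered myocardial area times the slice thickness times the specific gravity of myocardium. The slice areas below are made-up example values, not measurements from the study.

```python
# Per-slice mass = area (cm^2) * thickness (cm) * specific gravity (g/cm^3);
# total LV mass is the sum over all gated slices.
SPECIFIC_GRAVITY = 1.05   # g/cm^3, myocardium (as used in the abstract)
SLICE_THICKNESS = 1.0     # cm, assumed uniform slice spacing

areas_cm2 = [8.0, 11.5, 13.0, 13.5, 12.5, 10.0, 6.5]   # illustrative values
lv_mass_g = sum(a * SLICE_THICKNESS * SPECIFIC_GRAVITY for a in areas_cm2)
```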

  17. The final Galileo SSI observations of Io: Orbits G28-I33

    USGS Publications Warehouse

    Turtle, E.P.; Keszthelyi, L.P.; McEwen, A.S.; Radebaugh, J.; Milazzo, M.; Simonelli, D.P.; Geissler, P.; Williams, D.A.; Perry, J.; Jaeger, W.L.; Klaasen, K.P.; Breneman, H.H.; Denk, T.; Phillips, C.B.

    2004-01-01

    We present the observations of Io acquired by the Solid State Imaging (SSI) experiment during the Galileo Millennium Mission (GMM) and the strategy we used to plan the exploration of Io. Despite Galileo's tight restrictions on data volume and downlink capability and several spacecraft and camera anomalies due to the intense radiation close to Jupiter, there were many successful SSI observations during GMM. Four giant, high-latitude plumes, including the largest plume ever observed on Io, were documented over a period of eight months; only faint evidence of such plumes had been seen since the Voyager 2 encounter, despite monitoring by Galileo during the previous five years. Moreover, the source of one of the plumes was Tvashtar Catena, demonstrating that a single site can exhibit remarkably diverse eruption styles - from a curtain of lava fountains, to extensive surface flows, and finally an approximately 400 km high plume - over a relatively short period of time (approximately 13 months between orbits I25 and G29). Despite this substantial activity, no evidence of any truly new volcanic center was seen during the six years of Galileo observations. The recent observations also revealed details of mass wasting processes acting on Io. Slumping and landsliding dominate and occur in close proximity to each other, demonstrating spatial variation in material properties over distances of several kilometers. However, despite the ubiquitous evidence for mass wasting, the rate of volcanic resurfacing seems to dominate; the floors of paterae in proximity to mountains are generally free of debris. Finally, the highest resolution observations obtained during Galileo's final encounters with Io provided further evidence for a wide diversity of surface processes at work on Io. © 2003 Elsevier Inc. All rights reserved.

  18. Is CO2 emission a side effect of financial development? An empirical analysis for China.

    PubMed

    Hao, Yu; Zhang, Zong-Yong; Liao, Hua; Wei, Yi-Ming; Wang, Shuo

    2016-10-01

    Based on panel data for 29 Chinese provinces from 1995 to 2012, this paper explores the relationship between financial development and environmental quality in China. A comprehensive framework is utilized to estimate both the direct and indirect effects of financial development on CO2 emissions in China using a carefully designed two-stage regression model. The first-difference and orthogonal-deviation Generalized Method of Moments (GMM) methods are used to control for potential endogeneity and introduce dynamics. To ensure the robustness of the estimations, two indicators of financial development, financial depth and financial efficiency, are used. The empirical results indicate that the direct effects of financial depth and financial efficiency on environmental quality are positive and negative, respectively. The indirect effects of both indicators are U-shaped and dominate the shape of the total effects. These findings suggest that the influence of financial development on the environment depends on the level of economic development. At the early stage of economic growth, financial development is environmentally friendly. When the economy is highly developed, a higher level of financial development is harmful to environmental quality.

  19. Congestion detection of pedestrians using the velocity entropy: A case study of Love Parade 2010 disaster

    NASA Astrophysics Data System (ADS)

    Huang, Lida; Chen, Tao; Wang, Yan; Yuan, Hongyong

    2015-12-01

    Gatherings of large human crowds often result in crowd disasters such as the Love Parade disaster in Duisburg, Germany on July 24, 2010. To avoid such tragedies, video surveillance and early warning are becoming more and more significant. In this paper, the velocity entropy is first defined as the criterion for congestion detection, representing the motion magnitude distribution and the motion direction distribution simultaneously. The detection method is then verified on simulation data generated with the AnyLogic software. To test the generalization performance of this method, video recordings of a real-world case, the Love Parade disaster, are also used in the experiments. The velocity histograms of the foreground objects in the videos are extracted by a Gaussian Mixture Model (GMM) and optical flow computation. With a sequential change-point detection algorithm, the velocity entropy can be applied to detect congestion at the Love Parade festival. It turned out that, without recognizing and tracking individual pedestrians, our method can detect abnormal crowd behaviors in real time.
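A velocity-entropy criterion in the spirit of the abstract can be sketched as follows. The binning choices and the synthetic flow fields below are assumptions, not the paper's exact settings: build a joint histogram over motion-magnitude and motion-direction bins, then take the Shannon entropy of the normalized histogram. Ordered flow concentrates in few bins (low entropy); congested, disordered motion spreads out (high entropy).

```python
import numpy as np

# Entropy of the joint (magnitude, direction) histogram of a velocity field.
# Bin counts and ranges are illustrative choices.
def velocity_entropy(vx, vy, mag_bins=8, dir_bins=8, max_mag=3.0):
    mag = np.hypot(vx, vy)
    ang = np.arctan2(vy, vx)
    hist, _, _ = np.histogram2d(mag, ang, bins=[mag_bins, dir_bins],
                                range=[[0.0, max_mag], [-np.pi, np.pi]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(3)
# Ordered flow: everyone moves right at a similar speed -> low entropy.
ordered = velocity_entropy(rng.normal(1.0, 0.05, 500), rng.normal(0.0, 0.05, 500))
# Congested, disordered motion: speeds and directions scatter -> high entropy.
congested = velocity_entropy(rng.normal(0.0, 1.0, 500), rng.normal(0.0, 1.0, 500))
```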

  20. Bayesian Lagrangian Data Assimilation and Drifter Deployment Strategies

    NASA Astrophysics Data System (ADS)

    Dutt, A.; Lermusiaux, P. F. J.

    2017-12-01

    Ocean currents transport a variety of natural (e.g. water masses, phytoplankton, zooplankton, sediments, etc.) and man-made materials and other objects (e.g. pollutants, floating debris, search and rescue, etc.). Lagrangian Coherent Structures (LCSs) or the most influential/persistent material lines in a flow, provide a robust approach to characterize such Lagrangian transports and organize classic trajectories. Using the flow-map stochastic advection and a dynamically-orthogonal decomposition, we develop uncertainty prediction schemes for both Eulerian and Lagrangian variables. We then extend our Bayesian Gaussian Mixture Model (GMM)-DO filter to a joint Eulerian-Lagrangian Bayesian data assimilation scheme. The resulting nonlinear filter allows the simultaneous non-Gaussian estimation of Eulerian variables (e.g. velocity, temperature, salinity, etc.) and Lagrangian variables (e.g. drifter/float positions, trajectories, LCSs, etc.). Its results are showcased using a double-gyre flow with a random frequency, a stochastic flow past a cylinder, and realistic ocean examples. We further show how our Bayesian mutual information and adaptive sampling equations provide a rigorous efficient methodology to plan optimal drifter deployment strategies and predict the optimal times, locations, and types of measurements to be collected.

  1. Encoding the local connectivity patterns of fMRI for cognitive task and state classification.

    PubMed

    Onal Ertugrul, Itir; Ozay, Mete; Yarman Vural, Fatos T

    2018-06-15

    In this work, we propose a novel framework to encode the local connectivity patterns of the brain, using Fisher vectors (FV), vector of locally aggregated descriptors (VLAD) and bag-of-words (BoW) methods. We first obtain local descriptors, called mesh arc descriptors (MADs), from fMRI data, by forming local meshes around anatomical regions and estimating their relationship within a neighborhood. Then, we extract a dictionary of relationships, called the brain connectivity dictionary, by fitting a generative Gaussian mixture model (GMM) to a set of MADs and selecting codewords at the mean of each component of the mixture. Codewords represent connectivity patterns among anatomical regions. We also encode MADs by VLAD and BoW methods using k-means clustering. We classify cognitive tasks using the Human Connectome Project (HCP) task fMRI dataset and cognitive states using the Emotional Memory Retrieval (EMR) dataset. We train support vector machines (SVMs) using the encoded MADs. Results demonstrate that FV encoding of MADs can be successfully employed for classification of cognitive tasks and outperforms the VLAD and BoW representations. Moreover, we identify the significant Gaussians in the mixture models by computing the energy of their corresponding FV parts, and analyze their effect on classification accuracy. Finally, we suggest a new method to visualize the codewords of the learned brain connectivity dictionary.
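The BoW and VLAD encoding steps can be sketched as follows, with a toy two-codeword codebook fixed by hand (in the paper the codebook is learned, e.g. from a GMM or k-means fit to mesh arc descriptors): each local descriptor is assigned to its nearest codeword; BoW counts assignments, VLAD sums the residuals per codeword.

```python
import numpy as np

# BoW and VLAD encodings of a set of local descriptors against a fixed codebook.
def encode(descriptors, codebook):
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)                    # nearest codeword per descriptor
    k, dim = codebook.shape
    bow = np.bincount(assign, minlength=k).astype(float)   # assignment counts
    vlad = np.zeros((k, dim))
    for i, a in enumerate(assign):
        vlad[a] += descriptors[i] - codebook[a]   # residual sums per codeword
    return bow, vlad.ravel()

codebook = np.array([[0.0, 0.0], [5.0, 5.0]])     # toy 2-D codebook
desc = np.array([[0.1, -0.1], [0.2, 0.0], [4.9, 5.2]])
bow, vlad = encode(desc, codebook)
# bow counts two descriptors near the first codeword, one near the second;
# vlad concatenates the per-codeword residual sums into a length k*dim vector.
```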

  2. Landslide Detection in the Carlyon Beach, WA Peninsula: Analysis Of High Resolution DEMs

    NASA Astrophysics Data System (ADS)

    Fayne, J.; Tran, C.; Mora, O. E.

    2017-12-01

    Landslides are geological events caused by slope instability and degradation, leading to the sliding of large masses of rock and soil down a mountain or hillside. These events are influenced by topography, geology, weather and human activity, and can cause extensive damage to the environment and infrastructure, such as the destruction of transportation networks, homes, and businesses. It is therefore imperative to detect early-warning signs of landslide hazards as a means of mitigation and disaster prevention. Traditional landslide surveillance consists of field mapping, but the process is expensive and time consuming. This study uses Light Detection and Ranging (LiDAR)-derived Digital Elevation Models (DEMs) together with k-means clustering and a Gaussian Mixture Model (GMM) to analyze surface roughness and extract spatial features and patterns of landslides and landslide-prone areas. The methodology, based on several feature extractors, applies an unsupervised classifier to the Carlyon Beach Peninsula in the state of Washington in an attempt to identify slide-prone terrain. When compared with the independently compiled landslide inventory map, the proposed algorithm correctly classifies up to 87% of the terrain. These results suggest that the proposed methods and LiDAR-derived DEMs can provide important surface information and serve as efficient tools for digital terrain analysis to create accurate landslide maps.
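The unsupervised clustering step can be sketched with a tiny Lloyd's k-means on a single synthetic roughness feature. The study clusters several LiDAR-DEM features with k-means and a GMM; the values and the two-cluster choice here are illustrative only, with hummocky slide debris assumed to be rougher than intact slopes.

```python
import numpy as np

# A minimal 1-D Lloyd's k-means: alternate nearest-center assignment and
# center re-estimation until (here, a fixed number of) iterations elapse.
def kmeans_1d(x, k=2, iters=50):
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

rng = np.random.default_rng(4)
roughness = np.concatenate([rng.normal(0.2, 0.05, 200),   # intact slopes
                            rng.normal(0.9, 0.10, 50)])   # slide-prone debris
labels, centers = kmeans_1d(roughness)
# The two recovered centers separate the smooth and rough terrain populations.
```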

  3. Epileptic Seizure Detection Based on Time-Frequency Images of EEG Signals using Gaussian Mixture Model and Gray Level Co-Occurrence Matrix Features.

    PubMed

    Li, Yang; Cui, Weigang; Luo, Meilin; Li, Ke; Wang, Lina

    2018-01-25

    The electroencephalogram (EEG) signal analysis is a valuable tool in the evaluation of neurological disorders, which is commonly used for the diagnosis of epileptic seizures. This paper presents a novel automatic EEG signal classification method for epileptic seizure detection. The proposed method first employs a continuous wavelet transform (CWT) method for obtaining the time-frequency images (TFI) of EEG signals. The processed EEG signals are then decomposed into five sub-band frequency components of clinical interest since these sub-band frequency components indicate much better discriminative characteristics. Both Gaussian Mixture Model (GMM) features and Gray Level Co-occurrence Matrix (GLCM) descriptors are then extracted from these sub-band TFI. Additionally, in order to improve classification accuracy, a compact feature selection method by combining the ReliefF and the support vector machine-based recursive feature elimination (RFE-SVM) algorithm is adopted to select the most discriminative feature subset, which is an input to the SVM with the radial basis function (RBF) for classifying epileptic seizure EEG signals. The experimental results from a publicly available benchmark database demonstrate that the proposed approach provides better classification accuracy than the recently proposed methods in the literature, indicating the effectiveness of the proposed method in the detection of epileptic seizures.
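A minimal sketch of the GLCM descriptor step on toy quantized images follows (the paper computes GLCMs on time-frequency images of EEG sub-bands; the single-pixel horizontal offset and the two descriptors chosen here, contrast and energy, are illustrative):

```python
import numpy as np

# Gray-level co-occurrence matrix for a horizontal offset of one pixel, plus
# two classic GLCM descriptors: contrast (local variation) and energy
# (uniformity). `img` must contain integer gray levels in [0, levels).
def glcm_features(img, levels):
    glcm = np.zeros((levels, levels))
    for row in img:
        for a, b in zip(row[:-1], row[1:]):
            glcm[a, b] += 1
    p = glcm / glcm.sum()
    i, j = np.indices(p.shape)
    contrast = ((i - j) ** 2 * p).sum()
    energy = (p ** 2).sum()
    return contrast, energy

uniform = np.zeros((4, 4), dtype=int)          # flat texture: no transitions
checker = np.indices((4, 4)).sum(axis=0) % 2   # alternating texture
c_u, e_u = glcm_features(uniform, levels=2)
c_c, e_c = glcm_features(checker, levels=2)
# Flat texture: zero contrast, maximal energy; checkerboard: high contrast.
```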

  4. Seasonal and Static Gravity Field of Mars from MGS, Mars Odyssey and MRO Radio Science

    NASA Technical Reports Server (NTRS)

    Genova, Antonio; Goossens, Sander; Lemoine, Frank G.; Mazarico, Erwan; Neumann, Gregory A.; Smith, David E.; Zuber, Maria T.

    2016-01-01

    We present a spherical harmonic solution of the static gravity field of Mars to degree and order 120, GMM-3, that has been calculated using the Deep Space Network tracking data of the NASA Mars missions, Mars Global Surveyor (MGS), Mars Odyssey (ODY), and the Mars Reconnaissance Orbiter (MRO). We have also jointly determined spherical harmonic solutions for the static and time-variable gravity field of Mars, and the Mars k2 Love numbers, exclusive of the gravity contribution of the atmosphere. Consequently, the retrieved time-varying gravity coefficients and the Love number k2 solely yield seasonal variations in the mass of the polar caps and the solid tides of Mars, respectively. We obtain a Mars Love number k2 of 0.1697 +/- 0.0027 (3-sigma). The inclusion of MRO tracking data results in improved seasonal gravity field coefficients C30 and, for the first time, C50. Refinements of the atmospheric model in our orbit determination program have allowed us to monitor the odd zonal harmonic C30 for approximately 1.5 solar cycles (16 years). This gravity model shows improved correlations with MOLA topography, up to 15% larger at higher harmonics (l = 60-80) than previous solutions.

  6. Fireball multi object spectrograph: as-built optic performances

    NASA Astrophysics Data System (ADS)

    Grange, R.; Milliard, B.; Lemaitre, G.; Quiret, S.; Pascal, S.; Origné, A.; Hamden, E.; Schiminovich, D.

    2016-07-01

    Fireball (Faint Intergalactic Redshifted Emission Balloon) is a NASA/CNES balloon-borne experiment to study the faint diffuse circumgalactic medium from line emissions in the ultraviolet (200 nm) at flight altitudes above 37 km. Fireball relies on a Multi Object Spectrograph (MOS) that takes full advantage of a new high-QE, low-noise, 13 μm pixel UV EMCCD. The MOS is fed by a 1 meter diameter parabola with an extended field (1000 arcmin2) obtained using a highly aspherized two-mirror corrector. The entire optical train works at F/2.5 to maintain a high signal-to-noise ratio. The spectrograph (R about 2200 and 1.5 arcsec FWHM) is based on two identical Schmidt systems acting as collimator and camera, sharing a 2400 g/mm aspherized reflective Schmidt grating. This grating is manufactured using active optics methods, by a double replication technique from a metal deformable matrix whose active clear aperture is built into a rigid elliptical contour. The payload and gondola are presently under integration at LAM. We will present the alignment procedure and the as-built optical performance of the Fireball instrument.

  7. Impact of economic growth, nonrenewable and renewable energy consumption, and urbanization on carbon emissions in Sub-Saharan Africa.

    PubMed

    Hanif, Imran

    2018-05-01

    The present study explores the impact of economic growth; urban expansion; and consumption of fossil fuels, solid fuels, and renewable energy on environmental degradation in developing economies of Sub-Saharan Africa. To demonstrate its findings in detail, the study applies a system generalized method of moments (GMM) estimator to a panel of 34 emerging economies for the period from 1995 to 2015. The results indicate that the consumption of fossil and solid fuels for cooking and the expansion of urban areas contribute significantly to carbon dioxide emissions on the one hand and stimulate air pollution on the other. The results also exhibit an inverted U-shaped relationship between per capita economic growth and carbon emissions. This relation confirms the existence of an environmental Kuznets curve (EKC) in middle- and low-income economies of Sub-Saharan Africa. Furthermore, the findings reveal that the use of renewable energy alternatives improves air quality by controlling carbon emissions and lowering households' direct exposure to toxic gases. Thus, renewable energy alternatives help these economies achieve sustainable development targets.
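    The GMM estimators used in panel studies like this one build on moment conditions of the form E[z(y - x'beta)] = 0. As a minimal sketch of the underlying idea (not the Blundell-Bond system GMM specification of the study), the following fits a linear model with an endogenous regressor using instruments and the first-step 2SLS weighting matrix; all data and parameters are invented.

```python
# Sketch of a linear GMM/IV estimator with moment condition E[z*(y - x*beta)] = 0.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=(n, 2))                      # two instruments
u = rng.normal(size=n)                           # structural error
x = z @ np.array([1.0, 0.5]) + 0.8 * u + rng.normal(size=n)   # endogenous regressor
beta_true = 2.0
y = beta_true * x + u

X, Z = x.reshape(-1, 1), z
W = np.linalg.inv(Z.T @ Z)                       # first-step (2SLS) weighting matrix
A = X.T @ Z
beta_hat = np.linalg.solve(A @ W @ A.T, A @ W @ (Z.T @ y))
beta_ols = (x @ y) / (x @ x)                     # biased upward by the endogeneity
print(beta_hat[0], beta_ols)                     # GMM/IV estimate is close to 2.0
```

    System GMM additionally stacks level and first-difference equations and instruments each with lags of the other, which is what makes it suitable for short dynamic panels.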

  8. Nanodevices for spintronics and methods of using same

    DOEpatents

    Zaliznyak, Igor; Tsvelik, Alexei; Kharzeev, Dmitri

    2013-02-19

    Graphene magnet multilayers (GMMs) are employed to facilitate development of spintronic devices. The GMMs can include a sheet of monolayer (ML) or few-layer (FL) graphene in contact with a magnetic material, such as a ferromagnetic (FM) or an antiferromagnetic material. Electrode terminals can be disposed on the GMMs to be in electrical contact with the graphene. A magnetic field effect is induced in the graphene sheet based on an exchange magnetic field resulting from a magnetization of the magnetic material which is in contact with graphene. Electrical characteristics of the graphene can be manipulated based on the magnetization of the magnetic material in the GMM.

  9. Self-organization comprehensive real-time state evaluation model for oil pump unit on the basis of operating condition classification and recognition

    NASA Astrophysics Data System (ADS)

    Liang, Wei; Yu, Xuchao; Zhang, Laibin; Lu, Wenqing

    2018-05-01

    In an oil transmission station, the operating condition (OC) of an oil pump unit sometimes switches, which leads to changes in operating parameters. If the switching of OCs is not taken into account when performing a state evaluation of the pump unit, the accuracy of the evaluation is largely compromised. Hence, in this paper, a self-organization Comprehensive Real-Time State Evaluation Model (self-organization CRTSEM) is proposed based on OC classification and recognition. The underlying CRTSEM is first built by combining the advantages of the Gaussian Mixture Model (GMM) and a Fuzzy Comprehensive Evaluation Model (FCEM): independent state models are established for every state characteristic parameter according to its distribution type (i.e., Gaussian or logistic), while the Analytic Hierarchy Process (AHP) is utilized to calculate the weights of the state characteristic parameters. OC classes are then determined by the types of oil delivery tasks, and CRTSEMs for the different standard OCs are built to constitute the CRTSEM matrix. In parallel, OC recognition is realized by a self-organization model established on the basis of the Back Propagation (BP) model. After the self-organization CRTSEM is derived through integration, real-time monitoring data can be input for OC recognition, and the current state of the pump unit can then be evaluated with the appropriate CRTSEM. A case study demonstrates that the proposed self-organization CRTSEM provides reasonable and accurate state evaluation results for the pump unit, and verifies the assumption that the switching of OCs influences the results of state evaluation.
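    The AHP weighting step mentioned above is conventionally computed as the normalized principal eigenvector of a pairwise comparison matrix. A minimal sketch follows; the 3x3 judgment matrix is invented for illustration and is not from the paper.

```python
# AHP weights: normalized principal eigenvector of a pairwise-comparison matrix.
import numpy as np

# a_ij = judged importance of parameter i relative to parameter j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

vals, vecs = np.linalg.eig(A)
principal = np.real(vecs[:, np.argmax(np.real(vals))])
weights = principal / principal.sum()            # normalize to sum to 1
print(weights)                                   # descending importance here
```

    In practice one would also check the consistency index (lambda_max - n)/(n - 1) of the judgment matrix before accepting the weights.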

  10. Estimating unbiased economies of scale of HIV prevention projects: a case study of Avahan.

    PubMed

    Lépine, Aurélia; Vassall, Anna; Chandrashekar, Sudha; Blanc, Elodie; Le Nestour, Alexis

    2015-04-01

    Governments and donors are investing considerable resources in HIV prevention in order to scale up these services rapidly. Given the current economic climate, providers of HIV prevention services increasingly need to demonstrate that these investments offer good 'value for money'. One of the primary routes to efficiency is to take advantage of economies of scale (a reduction in the average cost of a health service as provision scales up), yet empirical evidence on economies of scale is scarce. Methodologically, their estimation is hampered by several statistical issues that prevent causal inference and thus make the estimation complex. In order to estimate unbiased economies of scale when scaling up HIV prevention services, we apply our analysis to one of the few HIV prevention programmes globally delivered at a large scale: the Indian Avahan initiative. We costed the project by collecting data from the 138 Avahan NGOs and the supporting partners over the first four years of its scale-up, between 2004 and 2007. We develop a parsimonious empirical model and apply system Generalized Method of Moments (GMM) and fixed-effects Instrumental Variable (IV) estimators to estimate unbiased economies of scale. At the programme level, we find that, after controlling for the endogeneity of scale, the scale-up of Avahan has generated high economies of scale. Our findings suggest that average cost reductions per person reached are achievable when scaling up HIV prevention in low- and middle-income countries. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. The Effect of Physical Resistance Training on Baroreflex Sensitivity of Hypertensive Rats.

    PubMed

    Gomes, Moisés Felipe Pereira; Borges, Mariana Eiras; Rossi, Vitor de Almeida; Moura, Elizabeth de Orleans C de; Medeiros, Alessandra

    2017-01-01

    Baroreceptors act as regulators of blood pressure (BP); however, their sensitivity is impaired in hypertensive patients. Among the recommendations for BP reduction, exercise training has become an important adjuvant therapy in this population, yet many doubts remain about the effects of resistance exercise training. To evaluate the effect of resistance exercise training on BP and baroreceptor sensitivity in spontaneously hypertensive rats (SHR), SHR (n = 16) and Wistar (n = 16) rats, 8 weeks old at the beginning of the experiment, were randomly divided into 4 groups: sedentary control (CS, n = 8); trained control (CT, n = 8); sedentary SHR (HS, n = 8); and trained SHR (HT, n = 8). Resistance exercise training was performed on stairmaster-type equipment (1.1 × 0.18 m, 2 cm between steps, 80° incline) with weights attached to the animals' tails (5 days/week, 8 weeks). Baroreceptor reflex control of heart rate (HR) was tested by loading/unloading of baroreceptors with phenylephrine and sodium nitroprusside. Resistance exercise training increased the soleus muscle mass in SHR when compared to HS (HS 0.027 ± 0.002 g/mm and HT 0.056 ± 0.003 g/mm). Resistance exercise training did not alter BP. On the other hand, regarding baroreflex sensitivity, the bradycardic response was improved in the HT group when compared to HS (HS -1.3 ± 0.1 bpm/mmHg and HT -2.6 ± 0.2 bpm/mmHg), whereas the tachycardic response was not altered by resistance exercise (CS -3.3 ± 0.2 bpm/mmHg, CT -3.3 ± 0.1 bpm/mmHg, HS -1.47 ± 0.06 bpm/mmHg and HT -1.6 ± 0.1 bpm/mmHg). Resistance exercise training was able to promote improvements in the baroreflex sensitivity of SHR rats through the improvement of the bradycardic response, despite not having reduced BP.

  12. An evaluation of talker localization based on direction of arrival estimation and statistical sound source identification

    NASA Astrophysics Data System (ADS)

    Nishiura, Takanobu; Nakamura, Satoshi

    2002-11-01

    It is very important to capture distant-talking speech with high quality for a hands-free speech interface, and a microphone array is an ideal candidate for this purpose. However, this approach requires localizing the target talker. Conventional talker localization algorithms in multiple-sound-source environments not only have difficulty localizing the multiple sound sources accurately, but also have difficulty localizing the target talker among known multiple sound source positions. To cope with these problems, we propose a new talker localization algorithm consisting of two parts. One is a DOA (direction of arrival) estimation algorithm for multiple sound source localization based on the CSP (cross-power spectrum phase) coefficient addition method. The other is a statistical sound source identification algorithm based on a GMM (Gaussian mixture model) for localizing the target talker position among the localized multiple sound sources. In this paper, we focus particularly on talker localization performance based on the combination of these two algorithms with a microphone array. We conducted evaluation experiments in real noisy reverberant environments. As a result, we confirmed that multiple sound signals can be accurately identified as ''speech'' or ''non-speech'' by the proposed algorithm. [Work supported by ATR and MEXT of Japan.]
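    The CSP coefficient at the heart of the DOA stage is the inverse transform of the phase-only cross-power spectrum (also known as GCC-PHAT); its peak gives the inter-microphone time delay, from which the arrival direction follows given the array geometry. A minimal single-pair sketch on synthetic signals, ignoring the geometry and the GMM identification stage:

```python
# CSP (cross-power spectrum phase) delay estimate for one microphone pair.
import numpy as np

def csp_delay(x1, x2):
    """Return the integer-sample delay of x1 relative to x2 via CSP."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n), np.fft.rfft(x2, n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12               # phase-only (PHAT) weighting
    csp = np.fft.irfft(cross, n)
    lag = np.argmax(csp)
    return lag if lag < n // 2 else lag - n      # map wrapped bins to negative lags

rng = np.random.default_rng(2)
s = rng.normal(size=1024)
delay = 7
x1 = np.concatenate([np.zeros(delay), s])        # mic 1 hears the source 7 samples late
x2 = s
print(csp_delay(x1, x2))   # 7
```

    The CSP coefficient addition method of the paper sums such CSP functions over several microphone pairs so that peaks from multiple sources reinforce each other.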

  13. Segmentation of 3D microPET images of the rat brain via the hybrid gaussian mixture method with kernel density estimation.

    PubMed

    Chen, Tai-Been; Chen, Jyh-Cheng; Lu, Henry Horng-Shing

    2012-01-01

    Segmentation of positron emission tomography (PET) images is typically achieved using the K-Means method or related approaches. In preclinical and clinical applications, the K-Means method requires prior estimation of parameters such as the number of clusters and appropriate initial values. This work segments microPET images using a hybrid method combining the Gaussian mixture model (GMM) with kernel density estimation. Segmentation is crucial for registering disordered 2-deoxy-2-fluoro-D-glucose (FDG) accumulation locations with functional diagnoses and for estimating standardized uptake values (SUVs) of regions of interest (ROIs) in PET images. Simulation studies with spherical targets are therefore conducted to evaluate segmentation accuracy based on Tanimoto's definition of similarity. The proposed method generates a higher degree of similarity than the K-Means method. PET images of a rat brain are used to compare, via volume rendering, the segmented shape and area of the cerebral cortex obtained by the K-Means method and the proposed method. The proposed method provides clearer and more detailed activity structures of an FDG accumulation location in the cerebral cortex than the K-Means method.
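    As a minimal illustration of why a GMM can outperform K-Means on intensity data (this toy 1-D example is not the paper's hybrid GMM+KDE method), a mixture fit recovers unequal class spreads that K-Means' hard, variance-blind assignment cannot represent:

```python
# Toy comparison: GMM vs K-Means on voxel intensities from two "tissue" classes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
background = rng.normal(1.0, 0.3, 800)           # narrow low-uptake class
hotspot = rng.normal(3.0, 0.9, 200)              # broader high-uptake class
voxels = np.concatenate([background, hotspot]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(voxels)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(voxels)

# The GMM recovers the unequal cluster spreads; K-Means only returns centers.
print(sorted(np.sqrt(gmm.covariances_.ravel())))   # approx. [0.3, 0.9]
```

    The paper's kernel density estimation step additionally relaxes the Gaussian shape assumption for each class.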

  14. Recognition of Activities of Daily Living Based on Environmental Analyses Using Audio Fingerprinting Techniques: A Systematic Review

    PubMed Central

    Santos, Rui; Pombo, Nuno; Flórez-Revuelta, Francisco

    2018-01-01

    Increasing the accuracy of identification of Activities of Daily Living (ADL) is very important for the goals of Enhanced Living Environments and for Ambient Assisted Living (AAL) tasks. This increase may be achieved through identification of the surrounding environment. Although environment identification is usually used to determine location, ADL recognition can be improved by identifying the sounds in a particular environment. This paper reviews audio fingerprinting techniques that can be applied to acoustic data acquired from mobile devices. A comprehensive literature search was conducted to identify relevant English-language works, published between 2002 and 2017, aimed at identifying the environment of ADLs using data acquired with mobile devices. In total, 40 studies were selected and analyzed from 115 citations. The results highlight several audio fingerprinting techniques, including the modified discrete cosine transform (MDCT), Mel-frequency cepstrum coefficients (MFCC), principal component analysis (PCA), the fast Fourier transform (FFT), Gaussian mixture models (GMM), likelihood estimation, the logarithmic modulated complex lapped transform (LMCLT), support vector machines (SVM), the constant Q transform (CQT), symmetric pairwise boosting (SPB), Philips robust hash (PRH), linear discriminant analysis (LDA) and the discrete cosine transform (DCT). PMID:29315232

  15. Gaussian mixture models for detection of autism spectrum disorders (ASD) in magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Almeida, Javier; Velasco, Nelson; Alvarez, Charlens; Romero, Eduardo

    2017-11-01

    Autism Spectrum Disorder (ASD) is a complex neurological condition characterized by a triad of signs including stereotyped behaviors and verbal and non-verbal communication problems. The scientific community has been interested in quantifying anatomical brain alterations of this disorder, and several studies have focused on measuring brain cortical and sub-cortical volumes. This article presents a fully automatic method that finds differences between patients diagnosed with autism and control patients. After the usual pre-processing, a template (MNI152) is registered to each evaluated brain, which then becomes a set of regions. Each of these regions is represented by its normalized intensity histogram, which is approximated by a Gaussian mixture model (GMM). Gray and white matter are separated to calculate the mean and standard deviation of each Gaussian. These features are then used to train, region by region, a binary SVM classifier. The method was evaluated on an adult population aged 18 to 35 years from the public Autism Brain Imaging Data Exchange (ABIDE) database. The highest discrimination was found for the right middle temporal gyrus, with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.72.

  16. Chemoorientation of eastern tent caterpillars to trail pheromone, 5β-Cholestane-3,24-dione.

    PubMed

    Peterson, S C; Fitzgerald, T D

    1991-10-01

    Chemoorientation behavior of the larval eastern tent caterpillar, Malacosoma americanum, was studied using the synthetic trail pheromone 5β-cholestane-3,24-dione. Divergent arms of Y mazes were treated with various concentration ratios of the pheromone. At application rates of 10^-10 to 10^-9 g/mm of trail, larvae showed a significant preference for stronger trails when concentration ratios differed by as little as 4:1. At application rates of 10^-8 g/mm and greater, there was no significant difference in trail choice even when trails differed in strength by a full order of magnitude. Other studies showed that the caterpillars abandon the pattern of choosing stronger over weaker trails when they repeatedly fail to find food at the end of a stronger trail. Experiments in which larvae were required to choose trails separated by a gap demonstrated orientation by chemoklinotaxis. Caterpillars that had one of the maxillary palps ablated looped in the direction of their intact chemoreceptor when placed on filter paper treated uniformly with pheromone, indicating that they may also orient by tropotaxis. The relevance of these findings to the tent caterpillar communication system is discussed.

  17. [Artificial vision for the human blind].

    PubMed

    Ortigoza-Ayala, Luis Octavio; Ruiz-Huerta, Leopoldo; Caballero-Ruiz, Alberto; Kussul, Ernst

    2009-01-01

    Since 1960 many attempts have been made to develop visual prostheses for the blind. Most devices based on producing phosphenes through electrical stimulation with microelectrodes at the retina, optic nerve, lateral geniculate nucleus or occipital lobe are incapable of reconstructing a coherent retinotopic map (a coordinate match between the image and the patient's visual perception); furthermore, they display important restrictions at the biomaterial level that hinder their final surgical implantation, which at present offers more risks than benefits to the patient. In light of new theories of intermodal perception, visual information may instead be acquired through other senses. In this paper, the Micromechanics and Mechatronics Group (GMM) of the Center of Applied Sciences and Technological Development at the National Autonomous University of Mexico describes the experimental design and psychophysical data necessary for the construction of a visual sensory-substitution prosthesis with a vibrotactile system. The vibrotactile mechanism places bars on the epidermis in a predetermined point-by-point matrix order, in a logical sequence of rows and columns, allowing the construction of an image with an external device that requires no invasive procedures.

  18. Distributed Energy Generation Systems Based on Renewable Energy and Natural Gas Blending: New Business Models for Economic Incentives, Electricity Market Design and Regulatory Innovation

    NASA Astrophysics Data System (ADS)

    Nyangon, Joseph

    Expansion of distributed energy resources (DERs), including solar photovoltaics, small- and medium-sized wind farms, gas-fired distributed generation, demand-side management, and energy storage, poses significant complications for the design, operation, business models, and regulation of electricity systems. Using statistical regression analysis, this dissertation assesses whether increased use of natural gas results in reduced renewable energy capacity, and whether natural gas growth is correlated with increased or decreased demand for non-fossil renewable fuels. System Generalized Method of Moments (System GMM) estimation of the dynamic relationship was performed on the indicators in the econometric model for the ten U.S. states with the fastest growth in solar generation capacity (California, North Carolina, Arizona, Nevada, New Jersey, Utah, Massachusetts, Georgia, Texas, and New York) to analyze, over the period 2001-2016, the effect of natural gas on renewable energy diffusion and the response of the fossil-fuel share to policy-driven solar demand. The study identified ten major drivers of change in electricity systems, including growth in distributed energy generation systems such as intermittent renewable electricity and gas-fired distributed generation; flat to declining electricity demand growth; aging electricity infrastructure and investment gaps; proliferation of affordable information and communications technologies (e.g., advanced or interval meters); increasing innovation in data and system optimization; and greater customer engagement. In this ongoing electric power sector transformation, natural gas and fast-flexing renewable resources (mostly solar and wind energy) complement each other in several sectors of the economy.
The dissertation concludes that natural gas has a positive impact on solar and wind energy development: a 1% rise in natural gas capacity produces a 0.0304% increase in the share of renewable energy in the short run (monthly), compared to a long-term effect estimated at 0.9696% (over a 15-year period). Evidence from the main policy, environmental, and economic indicators for solar and wind-power development, such as feed-in tariffs, state renewable portfolio standards, public benefits funds, net metering, interconnection standards, environmental quality, electricity import ratio, per-capita energy-related carbon dioxide emissions, average electricity price, per-capita real gross domestic product, and energy intensity, is discussed and evaluated in detail in order to elucidate their effectiveness in supporting the utility industry transformation. The discussion is followed by consideration of a plausible distributed utility framework tailored to major DERs development that has emerged in New York, called Reforming the Energy Vision. This framework provides a conceptual base with which to imagine the utility of the future as well as a practical way to study the potential of DERs in other states. The dissertation finds that this grid and market modernization initiative has considerable influence and importance beyond New York in the development of a new market economy in which customer choice and distributed utilities are prominent.

  19. Infrared video based gas leak detection method using modified FAST features

    NASA Astrophysics Data System (ADS)

    Wang, Min; Hong, Hanyu; Huang, Likun

    2018-03-01

    To detect invisible leaking gas, which is dangerous and can easily lead to fire or explosion, many new technologies have arisen in recent years, among which infrared video based gas leak detection is widely recognized as a viable tool. However, existing infrared video based methods detect all moving regions of a video frame as leaking gas, without discriminating the properties of each detected region; a walking person, for example, may also be detected as gas. To solve this problem, we propose a novel infrared video based gas leak detection method that effectively suppresses strong motion disturbances. First, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features from Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. Since the statistical properties of the mFAST features extracted from gas regions differ from those of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm effectively suppresses most strong motion disturbances and achieves real-time leaking gas detection.

  20. Can Network Linkage Effects Determine Return? Evidence from Chinese Stock Market

    PubMed Central

    Qiao, Haishu; Xia, Yue; Li, Ying

    2016-01-01

    This study used the dynamic conditional correlations (DCC) method to identify the linkage effects of the Chinese stock market, and further detected the influence of network linkage effects on the magnitude of security returns across different industries. Applying two physics-derived techniques, the minimum spanning tree and the hierarchical tree, we analyzed the stock interdependence within the network of the China Securities Index (CSI) industry index basket. We observed that obvious linkage effects existed among stock networks. CII and CCE, CAG and ITH, as well as COU, CHA and REI were confirmed as the core nodes in the three different networks, respectively. We also investigated the stability of linkage effects by estimating the mean correlations and mean distances, as well as the normalized tree length of these indices. In addition, using a GMM approach, we found inter-node influence within the stock network had a pronounced effect on stock returns. Our results generally suggested that there was greater clustering among the indexes belonging to related industrial sectors than among those of diverse sectors, and that network comovement was significantly affected by major financial events. Moreover, stocks that were more central within the stock market network usually had higher returns as compensation, because they endured greater exposure to correlation risk. PMID:27257816
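    The minimum spanning tree step used in such network studies is commonly computed by mapping correlations to the Mantegna distance d = sqrt(2(1 - rho)), so that highly correlated assets become short edges, and then extracting the MST. A sketch with an invented 4-asset correlation matrix (not the CSI data):

```python
# MST of a correlation network via the Mantegna distance d = sqrt(2(1 - rho)).
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

corr = np.array([[1.0, 0.8, 0.3, 0.1],
                 [0.8, 1.0, 0.4, 0.2],
                 [0.3, 0.4, 1.0, 0.6],
                 [0.1, 0.2, 0.6, 1.0]])
dist = np.sqrt(2.0 * (1.0 - corr))               # high correlation -> short edge
mst = minimum_spanning_tree(dist).toarray()      # upper-triangular edge weights
edges = sorted(zip(*np.nonzero(mst)))
print(edges)                                     # the n-1 = 3 shortest linking edges
```

    Node centrality within this tree (e.g., degree or betweenness) is then what proxies a stock's exposure to correlation risk.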

  1. Robust detection of multiple sclerosis lesions from intensity-normalized multi-channel MRI

    NASA Astrophysics Data System (ADS)

    Karpate, Yogesh; Commowick, Olivier; Barillot, Christian

    2015-03-01

    Multiple sclerosis (MS) is a disease with heterogeneous evolution among patients. Quantitative analysis of longitudinal Magnetic Resonance Images (MRI) provides a spatial analysis of the brain tissues that may lead to the discovery of biomarkers of disease evolution; better understanding of the disease will in turn clarify pathogenic mechanisms, allowing for patient-adapted therapeutic strategies. To characterize MS lesions, we propose a novel paradigm for detecting white matter lesions based on a statistical framework. It aims at studying the benefits of using multi-channel MRI to detect statistically significant differences between each individual MS patient and a database of control subjects. This framework consists of two components. First, intensity standardization is conducted to minimize inter-subject intensity differences arising from variability of the acquisition process and different scanners. The intensity normalization parameters are obtained using a robust Gaussian mixture model (GMM) estimation that is not affected by the presence of MS lesions. The second component compares the multi-channel MRI of MS patients with an atlas built from the control subjects, thereby allowing us to look for differences in normal-appearing white matter, in and around the lesions of each patient. Experimental results demonstrate that our technique accurately detects significant differences in lesions, consequently improving the results of MS lesion detection.
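    The patient-versus-control-atlas comparison can be caricatured voxelwise: voxels whose z-score against the control mean and standard deviation exceeds a threshold are flagged as candidate lesions. This is a simplified sketch with synthetic data; the paper's robust GMM normalization and multi-channel statistics are omitted.

```python
# Voxelwise z-map of a patient "slice" against a small control database.
import numpy as np

rng = np.random.default_rng(4)
controls = rng.normal(100.0, 5.0, size=(20, 32, 32))   # 20 control images
mu, sigma = controls.mean(axis=0), controls.std(axis=0)

patient = rng.normal(100.0, 5.0, size=(32, 32))
patient[10:14, 10:14] += 40.0                # synthetic hyperintense 4x4 lesion

z = (patient - mu) / sigma
lesion_mask = z > 4.0
print(int(lesion_mask.sum()))                # dominated by the implanted lesion
```

    In the actual framework, intensities are first standardized with the robust GMM so that z-scores are comparable across scanners and subjects.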

  3. The Data Base for the May 1979 Marine Surface Layer Micrometeorological Experiment at San Nicolas Island, California.

    DTIC Science & Technology

    1982-05-07

    (The abstract of this record is unrecoverable OCR output of tabulated marine surface layer data; the legible column headings include drag coefficient, flux ratios, velocity, humidity, temperature, and length.)

  4. On the assimilation of SWOT type data into 2D shallow-water models

    NASA Astrophysics Data System (ADS)

    Frédéric, Couderc; Denis, Dartus; Pierre-André, Garambois; Ronan, Madec; Jérôme, Monnier; Jean-Paul, Villa

    2013-04-01

    In river hydraulics, assimilation of water level measurements at gauging stations is well controlled, while assimilation of images is still delicate. In the present talk, we address the richness of satellite-mapped information for constraining a 2D shallow-water model, but also the related difficulties. 2D shallow-water models may be necessary for small-scale modelling, in particular for low-water and flood plain flows. Since in both cases the dynamics of the wet-dry front is essential, one has to develop robust and accurate solvers. In this contribution we introduce a robust and stable second-order finite volume scheme [CoMaMoViDaLa]. Comparisons of realistic test cases with more classical solvers highlight the importance of accurate flood plain modelling. A preliminary inverse study is presented in a flood plain flow case [LaMo] [HoLaMoPu]. As a first step, a 0th-order data processing model improves the observation operator and produces more reliable water levels derived from rough measurements [PuRa]. Then, both model and flow behaviours can be better understood thanks to variational sensitivities based on a gradient computation and adjoint equations, which can reveal several difficulties that a model designer has to tackle. Next, a 4D-Var data assimilation algorithm used with spatialized data leads to improved model calibration and can potentially identify river discharges. All the algorithms are implemented in the DassFlow software (Fortran, MPI, adjoint) [Da]. All these experiments (accurate wet-dry front dynamics, sensitivity analysis, identification of discharges, and calibration of the model) are currently being performed in view of using data from the future SWOT mission. [CoMaMoViDaLa] F. Couderc, R. Madec, J. Monnier, J.-P. Vila, D. Dartus, K. Larnier. "Sensitivity analysis and variational data assimilation for geophysical shallow water flows". Submitted. [Da] DassFlow - Data Assimilation for Free Surface Flows. Computational software. http://www-gmm.insa-toulouse.fr/~monnier/DassFlow/ [HoLaMoPu] R. Hostache, X. Lai, J. Monnier, C. Puech. "Assimilation of spatially distributed water levels into a shallow-water flood model. Part II: using a remote sensing image of Mosel river". J. Hydrology (2010). [LaMo] X. Lai, J. Monnier. "Assimilation of spatially distributed water levels into a shallow-water flood model. Part I: mathematical method and test case". J. Hydrology (2009). [PuRa] C. Puech, D. Raclot. "Using geographic information systems and aerial photographs to determine water levels during floods". Hydrol. Process., 16, 1593-1602 (2002). [RoDa] H. Roux, D. Dartus. "Use of Parameter Optimization to Estimate a Flood Wave: Potential Applications to Remote Sensing of Rivers". J. Hydrology (2006).
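
    The variational calibration idea (minimize an observation-misfit cost using its gradient, which in practice is supplied by an adjoint model) can be illustrated on a toy scalar model x_t = a^t x_0: gradient descent on the squared misfit recovers the parameter a. This is a didactic sketch, not DassFlow's 4D-Var; the model and all names are invented for illustration.

```python
import numpy as np

def cost_and_grad(a, x0, y):
    """Misfit cost J(a) and its analytic gradient for the toy model
    x_t = a**t * x0 (the gradient plays the role of the adjoint run)."""
    t = np.arange(len(y))
    pred = a ** t * x0
    resid = pred - y
    J = 0.5 * np.sum(resid ** 2)
    dJ = np.sum(resid * t * a ** (t - 1) * x0)
    return J, dJ

def assimilate(x0, y, a0=0.5, lr=0.01, steps=2000):
    """Recover the model parameter by plain gradient descent on J."""
    a = a0
    for _ in range(steps):
        _, g = cost_and_grad(a, x0, y)
        a -= lr * g
    return a
```

    With noise-free observations the descent converges to the true parameter; noisy observations would instead yield a least-squares compromise.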

  5. More Health Expenditure, Better Economic Performance? Empirical Evidence From OECD Countries.

    PubMed

    Wang, Fuhmei

    2015-01-01

    Recent economic downturns have led many countries to reduce health spending dramatically, with the World Health Organization raising concerns over the effects of this, in particular among the poor and vulnerable. With the provision of appropriate health care, the population of a country could enjoy better health, thus strengthening the nation's human capital, which could contribute to economic growth through improved productivity. How much should countries spend on health care? This study aims to estimate the optimal health care expenditure in a growing economy. Drawing on the experiences of countries from the Organization for Economic Co-operation and Development (OECD) over the period 1990 to 2009, this research applies the system generalized method of moments (GMM) to estimate the focal variables. Empirical evidence indicates that when the ratio of health spending to gross domestic product (GDP) is less than the optimal level of 7.55%, increases in health spending effectively lead to better economic performance. Above this level, more spending does not equate to better care. The actual level of health spending in OECD countries is 5.48% of GDP, with a 1.87% economic growth rate. The question posed by this study is a pertinent one, especially in the current context of financially constrained health systems around the world. The analytical results of this work will allow policymakers to better allocate scarce resources to achieve their macroeconomic goals. © The Author(s) 2015.
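
    The inverted-U logic behind an optimal spending share can be sketched by fitting a quadratic growth-spending relation and taking its turning point -b1/(2*b2). The data below are synthetic and illustrative, not the OECD panel, and the sketch ignores the dynamics and instruments of the system GMM estimator.

```python
import numpy as np

def optimal_share(spending, growth):
    """Fit growth = b2*s^2 + b1*s + b0 by least squares and return the
    turning point -b1/(2*b2): the share that maximizes predicted growth."""
    b2, b1, b0 = np.polyfit(spending, growth, 2)
    return -b1 / (2.0 * b2)
```

    On a concave (b2 < 0) fitted relation, spending below the turning point predicts growth gains and spending above it predicts losses.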

  6. More Health Expenditure, Better Economic Performance? Empirical Evidence From OECD Countries

    PubMed Central

    Wang, Fuhmei

    2015-01-01

    Recent economic downturns have led many countries to reduce health spending dramatically, with the World Health Organization raising concerns over the effects of this, in particular among the poor and vulnerable. With the provision of appropriate health care, the population of a country could enjoy better health, thus strengthening the nation’s human capital, which could contribute to economic growth through improved productivity. How much should countries spend on health care? This study aims to estimate the optimal health care expenditure in a growing economy. Drawing on the experiences of countries from the Organization for Economic Co-operation and Development (OECD) over the period 1990 to 2009, this research applies the system generalized method of moments (GMM) to estimate the focal variables. Empirical evidence indicates that when the ratio of health spending to gross domestic product (GDP) is less than the optimal level of 7.55%, increases in health spending effectively lead to better economic performance. Above this level, more spending does not equate to better care. The actual level of health spending in OECD countries is 5.48% of GDP, with a 1.87% economic growth rate. The question posed by this study is a pertinent one, especially in the current context of financially constrained health systems around the world. The analytical results of this work will allow policymakers to better allocate scarce resources to achieve their macroeconomic goals. PMID:26310501

  7. High ink absorption performance of inkjet printing based on SiO2@Al13 core-shell composites

    NASA Astrophysics Data System (ADS)

    Chen, YiFan; Jiang, Bo; Liu, Li; Du, Yunzhe; Zhang, Tong; Zhao, LiWei; Huang, YuDong

    2018-04-01

    The continued growth of the inkjet market makes improved inkjet printing materials increasingly necessary. A composite material based on a core-shell structure has been developed and applied to prepare an inkjet printing layer. In this contribution, ink printing record layers based on a SiO2@Al13 core-shell composite were elaborated. The prepared core-shell composite materials were characterized by X-ray photoelectron spectroscopy (XPS), zeta potential, X-ray diffraction (XRD), and scanning electron microscopy (SEM). The results confirmed electrostatic adsorption between SiO2 and Al13, with the formation of a well-dispersed system. In addition, based on the adsorption and liquid permeability analyses, the SiO2@Al13 ink printing record layer achieved a relatively high ink uptake (2.5 gmm-1) and permeability (87%), respectively. The smoothness and glossiness of the SiO2@Al13 record layers were higher than those of SiO2 record layers. The core-shell structure facilitated the dispersion of the silica, thereby improving its ink absorption performance and yielding clear printed images. Thus, the proposed procedure based on the SiO2@Al13 core-shell structure of dye particles could be applied as a promising strategy for inkjet printing.

  8. Physical and mechanical properties of spinach for whole-surface online imaging inspection

    NASA Astrophysics Data System (ADS)

    Tang, Xiuying; Mo, Chang Y.; Chan, Diane E.; Peng, Yankun; Qin, Jianwei; Yang, Chun-Chieh; Kim, Moon S.; Chao, Kuanglin

    2011-06-01

    The physical and mechanical properties of baby spinach were investigated, including density, Young's modulus, fracture strength, and friction coefficient. The average apparent density of baby spinach leaves was 0.5666 g/mm3. Tensile tests were performed in the parallel, perpendicular, and diagonal directions with respect to the midrib of each leaf. The test results showed that the mechanical properties of spinach are anisotropic. For the parallel, diagonal, and perpendicular test directions, the average Young's modulus values were found to be 2.137 MPa, 1.0841 MPa, and 0.3914 MPa, respectively, and the average fracture strength values were 0.2429 MPa, 0.1396 MPa, and 0.1113 MPa, respectively. The static and kinetic friction coefficients between baby spinach and a conveyor belt were also investigated; the results showed that the average coefficients of kinetic and maximum static friction between the adaxial (front side) leaf surface and the conveyor belt were 1.2737 and 1.3635, respectively, and between the abaxial (back side) leaf surface and the conveyor belt were 1.1780 and 1.2451, respectively. This work provides the basis for future development of a whole-surface online imaging inspection system that can be used by the commercial vegetable processing industry to reduce food safety risks.

  9. Quality of institution and the FEG (forest, energy intensity, and globalization) -environment relationships in sub-Saharan Africa.

    PubMed

    Amuakwa-Mensah, Franklin; Adom, Philip Kofi

    2017-07-01

    The current share of sub-Saharan Africa in global carbon dioxide emissions is negligible compared to major contributors like Asia, the Americas, and Europe. This trend is, however, likely to change, given that both economic growth and the rate of urbanization in the region are projected to be robust in the future. The current study contributes to the literature by examining both the direct and the indirect impacts of the quality of institutions on the environment. Specifically, we investigate whether the institutional setting in the region plays a complementary role in the environment-FEG relationships. We use the panel two-step system generalized method of moments (GMM) technique to deal with the simultaneity problem. The data cover 43 sub-Saharan African countries. The results show that energy inefficiency compromises environmental standards. However, the quality of the institutional setting helps moderate these negative consequences; countries with good institutions show greater prospects than countries with poor institutions. On the other hand, globalization of the region and increased forest size generate positive environmental outcomes in the region. Their impacts are, however, independent of the quality of institutions. Afforestation programs, promotion of other clean energy types, and investment in energy efficiency, basic city infrastructure, and regulatory and institutional structures are desirable policies to pursue to safeguard the environment.

  10. Early Season Large-Area Winter Crop Mapping Using MODIS NDVI Data, Growing Degree Days Information and a Gaussian Mixture Model

    NASA Technical Reports Server (NTRS)

    Skakun, Sergii; Franch, Belen; Vermote, Eric; Roger, Jean-Claude; Becker-Reshef, Inbal; Justice, Christopher; Kussul, Nataliia

    2017-01-01

    Knowledge of the geographical location and distribution of crops at global, national and regional scales is an extremely valuable source of information for many applications. Traditional approaches to crop mapping using remote sensing data rely heavily on reference or ground truth data in order to train/calibrate classification models. As a rule, such models are only applicable to a single vegetation season and must be recalibrated to be applicable to other seasons. This paper addresses the problem of early season large-area winter crop mapping using Moderate Resolution Imaging Spectroradiometer (MODIS) derived Normalized Difference Vegetation Index (NDVI) time series and growing degree day (GDD) information derived from the Modern-Era Retrospective analysis for Research and Applications (MERRA-2) product. The model is based on the assumption that winter crops have developed biomass during early spring while other crops (spring and summer) have not. As winter crop development is temporally and spatially non-uniform due to the presence of different agro-climatic zones, we use GDD to account for such discrepancies. A Gaussian mixture model (GMM) is applied to discriminate winter crops from other crops (spring and summer). The proposed method has the following advantages: low input data requirements, robustness, applicability at global scale, and the ability to provide winter crop maps 1.5-2 months before harvest. The model is applied to two study regions, the State of Kansas in the US and Ukraine, and for multiple seasons (2001-2014). Validation using the US Department of Agriculture (USDA) Crop Data Layer (CDL) for Kansas and ground measurements for Ukraine shows that accuracies greater than 90% can be achieved in mapping winter crops 1.5-2 months before harvest. Results also show good correspondence to official statistics, with average coefficients of determination R2 greater than 0.85.
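
    The GMM discrimination step can be sketched with a small EM routine for a two-component 1D mixture over early-spring NDVI values: one component captures winter crops (high NDVI), the other the remaining classes. The NDVI values below are synthetic; this is a minimal sketch, not the paper's calibrated model.

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """EM for a two-component 1D Gaussian mixture.
    Returns (weights, means, stds, responsibilities of component 1)."""
    # Initialize from the data quartiles so the components start apart.
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sd = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point
        pdf = (w / (sd * np.sqrt(2 * np.pi)) *
               np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    return w, mu, sd, r[:, 1]
```

    Pixels whose responsibility for the high-NDVI component exceeds 0.5 would be labeled winter crop.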

  11. Improving the safety of high-dose methotrexate for children with hematologic cancers in settings without access to MTX levels using extended hydration and additional leucovorin.

    PubMed

    Vaishnavi, Kalthi; Bansal, Deepak; Trehan, Amita; Jain, Richa; Attri, Savita Verma

    2018-05-16

    A lack of access to methotrexate levels is common in low- and middle-income countries (LMIC), relevant for 80% of children with cancer worldwide. We evaluated whether high-dose methotrexate (HD-MTX) can be administered safely with extended hydration and leucovorin rescue, with monitoring of serum creatinine and urine pH. The prospective study was conducted at a single centre in Chandigarh, India in 2015. Patients with B-cell acute lymphoblastic leukemia (ALL) or with T-cell ALL or non-Hodgkin lymphoma (T-NHL) were administered 3 and 5 gm/m2 of MTX (24 hr infusion), respectively. Six doses of leucovorin (15 mg/m2/dose), instead of the recommended three (for optimally reduced levels) at standard timing (42 hr from start of HD-MTX), were administered. Hydration (125 ml/m2/hr) was continued for 72 hr, instead of the recommended 30 hr. Hydration fluid consisted of 0.45% sodium chloride, 5% dextrose, 7.5% sodium bicarbonate (50 mmol/l) and potassium chloride (20 mmol/l). Serum creatinine and urine pH were measured at baseline, 24 and 48 hr. The volume of hydration was increased (200 ml/m2/hr) for a serum creatinine > 1.25 times the baseline. The study included 100 cycles of HD-MTX in 53 patients: B-ALL 25 patients (51 cycles), T-ALL 16 patients (28 cycles), T-NHL 10 patients (18 cycles), and relapsed ALL 2 patients (3 cycles). The mean age was 6.8 ± 3.2 years. Patients were underweight in 15 (15%) cycles. Patients in 23% of cycles had a rise in creatinine to >1.25 times the baseline. Toxicities (NCI CTCAE v4.0) included mucositis (32%), diarrhoea (10%), and febrile neutropenia (9%). One patient died from dengue shock syndrome. It is safe to administer 3 or 5 gm/m2 of MTX (24 hr infusion) without measuring MTX levels, with extended hydration, additional doses of leucovorin, and monitoring of serum creatinine and urine pH. © 2018 Wiley Periodicals, Inc.

  12. [Variations of the nutritional condition of lobsters Panulirus argus (Decapoda: Palinuridae) in Eastern region of the Gulf of Batabanó, Cuba].

    PubMed

    Lopeztegui Castillo, Alexander; Capetillo Piñar, Norberto; Betanzos Vega, Abel

    2012-03-01

    Nutritional condition can affect survival and growth rate of crustaceans, and is itself largely affected by habitat conditions. This study describes the spatio-temporal nutritional changes in this commercially important species. With this aim, the variations in the nutritional condition (K) of lobsters from four zones (1, 2, 4 and 5) in the Gulf of Batabanó, Cuba, were determined. For this, the weight/length ratio (K=Pt/Lt) was calculated using animals captured in 1981 and 2010. The nutritional condition between areas and sexes, and between years and sexes, was contrasted by a bifactorial ANOVA, and the overall length and weight of lobsters were compared using a t-test for independent samples and a unifactorial ANOVA. Nutritional condition was significantly greater in males than in females. In addition, significant variations between zones were detected for both years: nutritional condition was highest in Zone 5 in 1981 and in Zone 2 in 2010. The lobsters' nutritional state also showed significant variations between years, being greater in 1981 (2.34 +/- 0.84 g/mm) than in 2010 (1.96 +/- 0.49 g/mm). Both the inter-zone and inter-annual variations appear to be related to reported changes in bottom type and vegetation cover. Seasonal variations in the abundance and distribution of the benthic organisms that constitute food for lobsters could also be an influence. The differences between sexes, however, were assumed to be a consequence of the methodology used and the sexual dimorphism of the species; other K estimation methods, which do not include morphometric measurements, would not detect these differences. We suggest that the P. argus nutritional condition is a good estimator of habitat condition. Moreover, under the applied K estimation methodology, groups of lobsters with similar nutritional condition did not necessarily show similar overall mean length or weight, and so could exist under different habitat conditions.

  13. Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation

    PubMed Central

    Peter, Adrian M.; Rangarajan, Anand

    2010-01-01

    Shape matching plays a prominent role in the comparison of similar structures. We present a unifying framework for shape matching that uses mixture models to couple both the shape representation and deformation. The theoretical foundation is drawn from information geometry wherein information matrices are used to establish intrinsic distances between parametric densities. When a parameterized probability density function is used to represent a landmark-based shape, the modes of deformation are automatically established through the information matrix of the density. We first show that given two shapes parameterized by Gaussian mixture models (GMMs), the well-known Fisher information matrix of the mixture model is also a Riemannian metric (actually, the Fisher-Rao Riemannian metric) and can therefore be used for computing shape geodesics. The Fisher-Rao metric has the advantage of being an intrinsic metric and invariant to reparameterization. The geodesic—computed using this metric—establishes an intrinsic deformation between the shapes, thus unifying both shape representation and deformation. A fundamental drawback of the Fisher-Rao metric is that it is not available in closed form for the GMM. Consequently, shape comparisons are computationally very expensive. To address this, we develop a new Riemannian metric based on generalized ϕ-entropy measures. In sharp contrast to the Fisher-Rao metric, the new metric is available in closed form. Geodesic computations using the new metric are considerably more efficient. We validate the performance and discriminative capabilities of these new information geometry-based metrics by pairwise matching of corpus callosum shapes. We also study the deformations of fish shapes that have various topological properties. 
A comprehensive comparative analysis is also provided using other landmark-based distances, including the Hausdorff distance, the Procrustes metric, landmark-based diffeomorphisms, and the bending energies of the thin-plate (TPS) and Wendland splines. PMID:19110497
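
    The Fisher information construction underlying these metrics can be illustrated on a single Gaussian, where the matrix is known in closed form: for N(mu, sigma^2) parameterized by (mu, sigma), I = diag(1/sigma^2, 2/sigma^2). A Monte Carlo estimate from score vectors reproduces it, and this kind of numerical estimate is what one must fall back on for mixtures, where no closed form exists. The code is a didactic sketch, not the authors' implementation.

```python
import numpy as np

def fisher_info_gaussian_mc(mu, sigma, n=200_000, seed=0):
    """Monte Carlo estimate of the Fisher information matrix of
    N(mu, sigma^2) w.r.t. (mu, sigma): the expected outer product of
    the score (gradient of the log-density)."""
    x = np.random.default_rng(seed).normal(mu, sigma, n)
    s_mu = (x - mu) / sigma**2                      # d log p / d mu
    s_sig = ((x - mu) ** 2 - sigma**2) / sigma**3   # d log p / d sigma
    scores = np.stack([s_mu, s_sig])
    return scores @ scores.T / n
```

    For sigma = 2 the analytic matrix is diag(0.25, 0.5), which the estimate approaches at the usual 1/sqrt(n) Monte Carlo rate.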

  14. The environmental Kuznets curve in the presence of corruption in developing countries.

    PubMed

    Masron, Tajul Ariffin; Subramaniam, Yogeeswari

    2018-05-01

    Environmental degradation is at an alarming level in developing economies. The present paper examines the direct and indirect impacts of corruption on environmental deterioration using panel data for 64 developing countries. Adopting the generalized method of moments (GMM) technique, the paper finds evidence that corruption exhibits a positive impact on pollution. Subsequently, there is also evidence indicating that the level of pollution tends to be higher in countries with a higher level of corruption, undermining the effectiveness of the income effect on environmental preservation. These results also suggest that environmental degradation increases monotonically with corruption, invalidating the presence of the EKC. Hence, anti-corruption policy, particularly in the environmental and natural resources sector, needs to be emphasized and enforced in order to reduce, or possibly eliminate entirely, the rents from corruption.

  15. Use of a Hybrid Edge Node-Centroid Node Approach to Thermal Modeling

    NASA Technical Reports Server (NTRS)

    Peabody, Hume L.

    2010-01-01

    A recent proposal submitted for an ESA mission required that models be delivered in ESARAD/ESATAN formats. ThermalDesktop was the preferable analysis code to be used for model development with a conversion done as the final step before delivery. However, due to some differences between the capabilities of the two codes, a unique approach was developed to take advantage of the edge node capability of ThermalDesktop while maintaining the centroid node approach used by ESARAD. In essence, two separate meshes were used: one for conduction and one for radiation. The conduction calculations were eliminated from the radiation surfaces and the capacitance and radiative calculations were eliminated from the conduction surfaces. The resulting conduction surface nodes were coincident with all nodes of the radiation surface and were subsequently merged, while the nodes along the edges remained free. Merging of nodes on the edges of adjacent surfaces provided the conductive links between surfaces. Lastly, all nodes along edges were placed into the subnetwork and the resulting supernetwork included only the nodes associated with radiation surfaces. This approach had both benefits and disadvantages. The use of centroid, surface based radiation reduces the overall size of the radiation network, which is often the most computationally intensive part of the modeling process. Furthermore, using the conduction surfaces and allowing ThermalDesktop to calculate the conduction network can save significant time by not having to manually generate the couplings. Lastly, the resulting GMM/TMM models can be exported to formats which do not support edge nodes. One drawback, however, is the necessity to maintain two sets of surfaces. This requires additional care on the part of the analyst to ensure communication between the conductive and radiative surfaces in the resulting overall network. 
However, with more frequent use of this technique, the benefits of this approach can far outweigh the additional effort.

  17. Robust estimation of mammographic breast density: a patient-based approach

    NASA Astrophysics Data System (ADS)

    Heese, Harald S.; Erhard, Klaus; Gooßen, Andre; Bulow, Thomas

    2012-02-01

    Breast density has become an established risk indicator for developing breast cancer. Current clinical practice reflects this by grading mammograms patient-wise as entirely fat, scattered fibroglandular, heterogeneously dense, or extremely dense based on visual perception. Existing (semi-)automated methods work on a per-image basis and mimic clinical practice by calculating an area fraction of fibroglandular tissue (mammographic percent density). We suggest a method that follows clinical practice more strictly by segmenting the fibroglandular tissue portion directly from the joint data of all four available mammographic views (cranio-caudal and medio-lateral oblique, left and right), and by subsequently calculating a consistently patient-based mammographic percent density estimate. In particular, each mammographic view is first processed separately to determine a region of interest (ROI) for segmentation into fibroglandular and adipose tissue. ROI determination includes breast outline detection via edge-based methods, peripheral tissue suppression via geometric breast height modeling, and - for medio-lateral oblique views only - pectoral muscle outline detection based on optimizing a three-parameter analytic curve with respect to local appearance. Intensity harmonization based on separately acquired calibration data is performed with respect to compression height and tube voltage to facilitate joint segmentation of the available mammographic views. A Gaussian mixture model (GMM) on the joint histogram data with a posteriori calibration guided plausibility correction is finally employed for tissue separation. The proposed method was tested on patient data from 82 subjects. Results show excellent correlation (r = 0.86) to radiologists' grading with deviations ranging between -28% (q = 0.025) and +16% (q = 0.975).
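
    Once the GMM component parameters are estimated from the joint histogram, per-pixel tissue separation reduces to a MAP rule: assign each intensity to the component with the highest weighted density, then report the dense fraction. The parameters below are invented for illustration and are not the study's calibrated values.

```python
import numpy as np

def classify_intensity(x, w, mu, sd):
    """Assign each intensity to the mixture component with the highest
    weighted Gaussian density (the MAP rule after a GMM fit)."""
    x = np.asarray(x, dtype=float)
    dens = (w / (sd * np.sqrt(2 * np.pi)) *
            np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2))
    return dens.argmax(axis=1)

def percent_density(labels, dense_label=1):
    """Mammographic percent density: fraction of pixels labeled dense."""
    return 100.0 * np.mean(labels == dense_label)
```

    In the study's pipeline the labels would additionally pass through the calibration-guided plausibility correction before the percent density is reported.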

  18. Comorbid Trajectories of Postpartum Depression and PTSD among Mothers with Childhood Trauma History: Course, Predictors, Processes and Child Adjustment

    PubMed Central

    Oh, Wonjung; Muzik, Maria; McGinnis, Ellen Waxler; Hamilton, Lindsay; Menke, Rena A.; Rosenblum, Katherine Lisa

    2016-01-01

    Background: Both postpartum depression and posttraumatic stress disorder (PTSD) have been identified as unique risk factors for poor maternal psychopathology. Little is known, however, regarding the longitudinal processes of co-occurring depression and PTSD among mothers with childhood adversity. The present study addressed this research gap by examining co-occurring postpartum depression and PTSD trajectories among mothers with childhood trauma history. Methods: 177 mothers with childhood trauma history reported depression and PTSD symptoms at 4, 6, 12, 15 and 18 months postpartum, as well as individual (shame, posttraumatic cognitions, dissociation) and contextual (social support, childhood and postpartum trauma experiences) factors. Results: Growth mixture modeling (GMM) identified three comorbid change patterns: the Resilient group (64%) showed the lowest levels of depression and PTSD, which remained stable over time; the Vulnerable group (23%) displayed moderately high levels of comorbid depression and PTSD; and the Chronic High-Risk group (14%) showed the highest levels of comorbid depression and PTSD. Further, a path model revealed that postpartum dissociation, negative posttraumatic cognitions, and shame, as well as social support and childhood and postpartum trauma experiences, differentiated membership in the Chronic High-Risk and Vulnerable groups. Finally, we found that children of mothers in the Vulnerable group were reported as having more externalizing and total problem behaviors. Limitations: Generalizability is limited given the sample of mothers with childhood trauma history and demographic risk. Conclusions: The results highlight the strong comorbidity of postpartum depression and PTSD among mothers with childhood trauma history, and also emphasize its aversive impact on the offspring. PMID:27131504

  19. Comorbid trajectories of postpartum depression and PTSD among mothers with childhood trauma history: Course, predictors, processes and child adjustment.

    PubMed

    Oh, Wonjung; Muzik, Maria; McGinnis, Ellen Waxler; Hamilton, Lindsay; Menke, Rena A; Rosenblum, Katherine Lisa

    2016-08-01

    Both postpartum depression and posttraumatic stress disorder (PTSD) have been identified as unique risk factors for poor maternal psychopathology. Little is known, however, regarding the longitudinal processes of co-occurring depression and PTSD among mothers with childhood adversity. The present study addressed this research gap by examining co-occurring postpartum depression and PTSD trajectories among mothers with childhood trauma history. 177 mothers with childhood trauma history reported depression and PTSD symptoms at 4, 6, 12, 15 and 18 months postpartum, as well as individual (shame, posttraumatic cognitions, dissociation) and contextual (social support, childhood and postpartum trauma experiences) factors. Growth mixture modeling (GMM) identified three comorbid change patterns: the Resilient group (64%) showed the lowest levels of depression and PTSD, which remained stable over time; the Vulnerable group (23%) displayed moderately high levels of comorbid depression and PTSD; and the Chronic High-Risk group (14%) showed the highest levels of comorbid depression and PTSD. Further, a path model revealed that postpartum dissociation, negative posttraumatic cognitions, and shame, as well as social support and childhood and postpartum trauma experiences, differentiated membership in the Chronic High-Risk and Vulnerable groups. Finally, we found that children of mothers in the Vulnerable group were reported as having more externalizing and total problem behaviors. Generalizability is limited, given this is a sample of mothers with childhood trauma history and demographic risk. The results highlight the strong comorbidity of postpartum depression and PTSD among mothers with childhood trauma history, and also emphasize its aversive impact on the offspring. Copyright © 2016 Elsevier B.V. All rights reserved.
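
    The trajectory-clustering idea behind growth mixture modeling can be caricatured in two steps: summarize each mother's symptom course by a fitted intercept and slope, then cluster the summaries. A true GMM fits the mixture likelihood jointly; the two-step sketch below (per-subject least squares followed by a minimal 2-means) is only an illustration on synthetic trajectories.

```python
import numpy as np

def trajectory_features(times, scores):
    """Per-subject (intercept, slope) of a linear growth curve."""
    return np.array([np.polyfit(times, s, 1)[::-1] for s in scores])

def two_means(feats, iters=50):
    """Minimal 2-means clustering of the trajectory features."""
    centers = feats[[0, -1]].astype(float)   # seed from first/last subject
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        d = ((feats[:, None, :] - centers) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = feats[labels == k].mean(axis=0)
    return labels
```

    With three centers instead of two, the same sketch would mimic the study's Resilient/Vulnerable/Chronic High-Risk partition.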

  20. MUSIC-Expected maximization gaussian mixture methodology for clustering and detection of task-related neuronal firing rates.

    PubMed

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A

    2017-01-15

    Researchers often rely on simple methods to identify the involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate them into groups based on the expertise of the investigator. In cases where neuron populations are small, it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for the automatic, objective classification of neurons recorded in association with behavioral tasks into groups. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on the integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, this methodology was used to inspect behavioral elements of neurons to physiologically validate the model. The methodology was tested using a set of data collected from awake, behaving non-human primates. Copyright © 2016 Elsevier B.V. All rights reserved.
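
    The clustering stage described above can be sketched with scikit-learn, whose GaussianMixture estimator is fit by expectation-maximization. The two-dimensional "firing-rate features" below are synthetic stand-ins (the paper derives its features from a MUSIC pseudospectrum), and BIC model selection replaces the subjective choice of group count:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical 2-D feature vectors summarizing each neuron's firing-rate
# profile (e.g. modulation depth and peak latency); purely illustrative.
group_a = rng.normal(loc=[5.0, 0.2], scale=0.3, size=(30, 2))
group_b = rng.normal(loc=[1.0, 0.8], scale=0.3, size=(30, 2))
features = np.vstack([group_a, group_b])

# EM-fitted Gaussian mixtures; BIC over candidate component counts picks
# the number of clusters instead of an expert-driven split.
best = min(
    (GaussianMixture(n_components=k, random_state=0).fit(features)
     for k in (1, 2, 3, 4)),
    key=lambda m: m.bic(features),
)
labels = best.predict(features)
print(best.n_components, len(set(labels)))
```

    With well-separated feature clusters, BIC selects two components and the predicted labels recover the two groups.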

  1. Water Mapping Using Multispectral Airborne LIDAR Data

    NASA Astrophysics Data System (ADS)

    Yan, W. Y.; Shaker, A.; LaRocque, P. E.

    2018-04-01

    This study investigates the use of the world's first multispectral airborne LiDAR sensor, Optech Titan, manufactured by Teledyne Optech, for automatic land-water classification, with a particular focus on near-shore regions and river environments. Although recent studies have utilized airborne LiDAR data for shoreline detection and water surface mapping, the majority of them only perform experimental testing on clipped data subsets or rely on data fusion with aerial/satellite imagery. In addition, most existing approaches require manual intervention or existing tidal/datum data for the collection of training samples. To tackle the drawbacks of previous approaches, we propose and develop an automatic data processing workflow for land-water classification using multispectral airborne LiDAR data. Depending on the nature of the study scene, two methods are proposed for automatic training data selection. The first method fits the elevation/intensity histogram with a Gaussian mixture model (GMM) to preliminarily split the land and water bodies. The second method mainly relies on the use of a newly developed scan line elevation intensity ratio (SLIER) to estimate the water surface data points. Regardless of the training method being used, feature spaces can be constructed using the multispectral LiDAR intensity, elevation and other features derived from these parameters. The comprehensive workflow was tested with two datasets covering different near-shore region and river environments, where the overall accuracy was better than 96%.
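
    The first training-data selection method, a two-component GMM fitted to a single-channel histogram, can be sketched with scikit-learn. The intensity values below are simulated stand-ins, not Optech Titan data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Simulated single-channel LiDAR intensity: water returns weak, land
# returns strong (illustrative values only).
water = rng.normal(loc=20.0, scale=5.0, size=5000)
land = rng.normal(loc=120.0, scale=25.0, size=5000)
intensity = np.concatenate([water, land]).reshape(-1, 1)

# Fit a two-component GMM to the intensity distribution, then label
# each return by its most likely component, as in the first
# training-data selection method.
gmm = GaussianMixture(n_components=2, random_state=0).fit(intensity)
water_comp = int(np.argmin(gmm.means_.ravel()))  # low-intensity = water
is_water = gmm.predict(intensity) == water_comp
print(is_water[:5000].mean(), is_water[5000:].mean())
```

    The component with the lower mean is taken as the water class; in practice the same split would seed training-sample collection rather than serve as the final classification.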

  2. Assessing links between energy consumption, freight transport, and economic growth: evidence from dynamic simultaneous equation models.

    PubMed

    Nasreen, Samia; Saidi, Samir; Ozturk, Ilhan

    2018-06-01

    This study examines the relationship between economic growth, freight transport, and energy consumption for 63 developing countries over the period 1990-2016. In order to make the panel data analysis more homogeneous, we use the income level of countries to divide the global panel into three sub-panels, namely, lower-middle-income countries (LMIC), upper-middle-income countries (UMIC), and high-income countries (HIC). Using the generalized method of moments (GMM), the results provide evidence of a bidirectional causal relationship between economic growth and freight transport for all selected panels and between economic growth and energy consumption for the high- and upper-middle-income panels. For the lower-middle-income panel, the causality is unidirectional, running from energy consumption to economic growth. Also, the results indicate that the relationship between freight transport and energy use is bidirectional for the high-income countries and unidirectional from freight transport to energy consumption for the upper-middle- and lower-middle-income countries. The empirical evidence demonstrates the importance of energy for economic activity and rejects the neo-classical assumption that energy is neutral for growth. An important policy recommendation is that advancements in vehicle technology are needed to reduce the energy intensity of the transport sector and improve the energy efficiency of transport activity, which in turn would allow transport to play a greater positive role in global economic activity.
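
    Dynamic panel GMM estimators of the kind used here address the bias that arises when a lagged dependent variable is regressed alongside country fixed effects. A minimal numpy sketch of the core idea, on simulated data, instruments the differenced lag with a deeper lagged level — the simplest Arellano-Bond-style moment condition; real applications stack many lags and use two-step weighting:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, true_rho = 5000, 8, 0.5

# Simulated dynamic panel: y_it = rho * y_i,t-1 + alpha_i + eps_it.
# Pooled OLS is biased because the fixed effect alpha_i is correlated
# with the lagged dependent variable.
alpha = rng.normal(size=N)
eps = rng.normal(size=(N, T))
y = np.zeros((N, T))
y[:, 0] = alpha + eps[:, 0]
for t in range(1, T):
    y[:, t] = true_rho * y[:, t - 1] + alpha + eps[:, t]

# First-differencing removes alpha_i; the lagged difference is then
# instrumented by the second lag of the level, y_i,t-2.
dy = np.diff(y, axis=1)                  # dy[:, k] = y_{k+1} - y_k
num = den = 0.0
for t in range(2, T):
    num += y[:, t - 2] @ dy[:, t - 1]    # instrument * Delta y_t
    den += y[:, t - 2] @ dy[:, t - 2]    # instrument * Delta y_{t-1}
rho_hat = num / den
print(round(rho_hat, 2))
```

    The instrument is valid because y_i,t-2 is uncorrelated with the differenced error, yet correlated with the differenced lag.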

  3. Would environmental pollution affect home prices? An empirical study based on China's key cities.

    PubMed

    Hao, Yu; Zheng, Shaoqing

    2017-11-01

    With the development of China's economy, the problem of environmental pollution has become increasingly serious, affecting the sustained and healthy development of Chinese cities and the willingness of residents to invest in fixed assets. In this paper, a panel data set of 70 of China's key cities from 2003 to 2014 is used to study the effect of environmental pollution on home prices in China's key cities. In addition to the static panel data regression model, this paper uses the generalized method of moments (GMM) to control for potential endogeneity and to introduce dynamics. To ensure the robustness of the results, this paper uses four typical pollutants: per capita SO2 emissions, industrial soot (dust) emissions, industrial wastewater discharge, and industrial chemical oxygen demand discharge. The analysis shows that environmental pollution does have a negative impact on home prices, and the magnitude of this effect depends on the level of economic development. As GDP per capita increases, the size of the negative impact on home prices tends to shrink. Industrial soot (dust) has the greatest impact, and the impact of industrial wastewater is relatively small. It is also found that some other social and economic factors, including greening, public transport, citizen income, fiscal situation, loans, FDI, and population density, have positive effects on home prices, whereas the effect of employment on home prices is relatively weak.

  4. Quality evaluation of millet-soy blended extrudates formulated through linear programming.

    PubMed

    Balasubramanian, S; Singh, K K; Patil, R T; Onkar, Kolhe K

    2012-08-01

    Whole pearl millet, finger millet and decorticated soy bean blended (millet-soy) extrudate formulations were designed using a linear programming (LP) model to minimize the total cost of the finished product. The LP-formulated composite flour was extruded through a twin-screw food extruder at different feed rates (6.5-13.5 kg/h) and screw speeds (200-350 rpm), with constant feed moisture (14% wb), barrel temperature (120 °C) and cutter speed (15 rpm). The physical, functional, textural and pasting characteristics of the extrudates were examined and their responses were studied. Expansion index (2.31) and sectional expansion index (5.39) were found to be maximum for the feed rate and screw speed combination of 9.5 kg/h and 250 rpm. However, density (0.25 × 10^-3 g/mm^3) was maximum for the 9.5 kg/h and 300 rpm combination. Maximum color change (10.32) was found for 9.5 kg/h feed rate and 200 rpm screw speed. The lowest hardness was obtained for samples extruded at the lowest feed rate (6.5 kg/h) at all screw speeds, and at a feed rate of 9.5 kg/h for 300-350 rpm screw speeds. Peak viscosity decreased across all screw speeds at the 9.5 kg/h feed rate.
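
    A least-cost blend of the three flours can be posed as a small linear program and solved with scipy. The ingredient costs, protein contents and constraints below are hypothetical placeholders, not the study's data:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative least-cost blend in the spirit of the paper's LP model.
# Order: [pearl millet, finger millet, decorticated soy]; all numbers
# are assumed values for demonstration only.
cost = np.array([20.0, 25.0, 45.0])      # currency units per kg
protein = np.array([11.0, 7.0, 42.0])    # % protein

# Minimize cost subject to: fractions sum to 1, blend protein >= 16%,
# each ingredient present at >= 5%, soy capped at 35%.
res = linprog(
    c=cost,
    A_ub=[-protein],             # -protein @ x <= -16  <=>  protein >= 16
    b_ub=[-16.0],
    A_eq=[[1.0, 1.0, 1.0]],
    b_eq=[1.0],
    bounds=[(0.05, 1.0), (0.05, 1.0), (0.05, 0.35)],
    method="highs",
)
print(res.x.round(3), round(res.fun, 2))
```

    The solver pushes the expensive, protein-rich soy down to the smallest share that still meets the protein floor, which is exactly the behavior a blend-formulation LP is meant to exploit.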

  5. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning.

    PubMed

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-03-15

    Improving the effectiveness of spatial shape feature classification from 3D lidar data is very relevant because it is widely used as a fundamental step towards higher-level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhoods for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood computation.
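
    The PCA-based scatter/tubular/planar features can be computed from the eigenvalues of the covariance of the points in a voxel's support region. The saliency definitions below follow a common eigenvalue-ratio convention and are an assumption for illustration, not necessarily the paper's exact five feature vectors:

```python
import numpy as np

def shape_features(points):
    """Eigenvalue-based saliencies for the points supporting one voxel;
    returns (scatter, tubular, planar) scores, each in [0, 1]."""
    lam = np.sort(np.linalg.eigvalsh(np.cov(points.T)))[::-1]
    lam = np.clip(lam, 1e-12, None)          # guard degenerate voxels
    tubular = (lam[0] - lam[1]) / lam[0]     # one dominant direction
    planar = (lam[1] - lam[2]) / lam[0]      # two dominant directions
    scatter = lam[2] / lam[0]                # no dominant direction
    return scatter, tubular, planar

rng = np.random.default_rng(3)
# A wire-like and a wall-like point set: synthetic stand-ins for the
# contents of two voxels' support regions.
wire = np.column_stack([rng.normal(size=200),
                        rng.normal(scale=0.01, size=200),
                        rng.normal(scale=0.01, size=200)])
wall = np.column_stack([rng.normal(size=200),
                        rng.normal(size=200),
                        rng.normal(scale=0.01, size=200)])
print(shape_features(wire), shape_features(wall))
```

    A classifier such as the NN, SVM, GP or GMM compared in the paper would then be trained on feature vectors of this kind, one per voxel.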

  6. Evidence of Associations between Cytokine Genes and Subjective Reports of Sleep Disturbance in Oncology Patients and Their Family Caregivers

    PubMed Central

    Miaskowski, Christine; Cooper, Bruce A.; Dhruva, Anand; Dunn, Laura B.; Langford, Dale J.; Cataldo, Janine K.; Baggott, Christina R.; Merriman, John D.; Dodd, Marylin; Lee, Kathryn; West, Claudia; Paul, Steven M.; Aouizerat, Bradley E.

    2012-01-01

    The purposes of this study were to identify distinct latent classes of individuals based on subjective reports of sleep disturbance; to examine differences in demographic, clinical, and symptom characteristics between the latent classes; and to evaluate for variations in pro- and anti-inflammatory cytokine genes between the latent classes. Among 167 oncology outpatients with breast, prostate, lung, or brain cancer and 85 of their family caregivers (FCs), growth mixture modeling (GMM) was used to identify latent classes of individuals based on General Sleep Disturbance Scale (GSDS) scores obtained prior to, during, and for four months following completion of radiation therapy. Single nucleotide polymorphisms (SNPs) and haplotypes in candidate cytokine genes were interrogated for differences between the two latent classes. Multiple logistic regression was used to assess the effect of phenotypic and genotypic characteristics on GSDS group membership. Two latent classes were identified: lower sleep disturbance (88.5%) and higher sleep disturbance (11.5%). Participants who were younger and had a lower Karnofsky Performance Status score were more likely to be in the higher sleep disturbance class. Variation in two cytokine genes (i.e., IL6, NFKB) predicted latent class membership. Evidence was found for latent classes with distinct sleep disturbance trajectories. Unique genetic markers in cytokine genes may partially explain the interindividual heterogeneity characterizing these trajectories. PMID:22844404
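
    Growth mixture models of this kind are typically fit with specialized software (e.g. Mplus, or lcmm in R). A rough two-stage approximation, clustering per-subject growth parameters with an ordinary Gaussian mixture, can nonetheless illustrate how trajectory classes are recovered; all numbers below are synthetic:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
months = np.array([4.0, 6.0, 12.0, 15.0, 18.0])   # assessment points

# Synthetic symptom trajectories: a large stable-low class and a
# smaller chronic-high class (illustrative, not the study's data).
def simulate(n, intercept, slope):
    return intercept + slope * months + rng.normal(scale=1.0, size=(n, len(months)))

scores = np.vstack([simulate(80, 5.0, 0.0), simulate(20, 20.0, 0.2)])

# Stage 1: summarize each subject by a fitted intercept and slope.
growth = np.array([np.polyfit(months, y, deg=1)[::-1] for y in scores])

# Stage 2: cluster the growth parameters with an EM-fitted mixture.
gmm = GaussianMixture(n_components=2, random_state=0).fit(growth)
classes = gmm.predict(growth)
print(np.bincount(classes))
```

    A full growth mixture model estimates both stages jointly (with per-class variance structure), but the two-stage version conveys the idea of latent trajectory classes.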

  7. The Multifaceted Effects of Agmatine on Functional Recovery after Spinal Cord Injury through Modulations of BMP-2/4/7 Expressions in Neurons and Glial Cells

    PubMed Central

    Park, Yu Mi; Lee, Won Taek; Bokara, Kiran Kumar; Seo, Su Kyoung; Park, Seung Hwa; Kim, Jae Hwan; Yenari, Midori A.; Park, Kyung Ah; Lee, Jong Eun

    2013-01-01

    Presently, few treatments for spinal cord injury (SCI) are available and none have facilitated neural regeneration and/or significant functional improvement. Agmatine (Agm), a guanidinium compound formed from decarboxylation of L-arginine by arginine decarboxylase, is a neurotransmitter/neuromodulator and has been reported to exert neuroprotective effects in central nervous system injury models, including SCI. The purpose of this study was to demonstrate the multifaceted effects of Agm on functional recovery and remyelinating events following SCI. Compression SCI in mice was produced by placing a 15 g/mm2 weight for 1 min at the thoracic vertebra (Th) 9 segment. Mice that received an intraperitoneal (i.p.) injection of Agm (100 mg/kg/day) within 1 hour after SCI until day 35 showed improvement in locomotor recovery and bladder function. Emphasis was placed on the analysis of remyelination events, neuronal cell preservation and ablation of the glial scar area following SCI. Agm treatment significantly inhibited demyelination events, neuronal loss and glial scarring around the lesion site. In light of recent findings that expression of bone morphogenetic proteins (BMPs) is modulated in the neuronal and glial cell populations after SCI, we hypothesized that Agm could modulate BMP-2/4/7 expression in neurons, astrocytes and oligodendrocytes and play a key role in promoting neuronal and glial cell survival in the injured spinal cord. The results from computer-assisted stereological toolbox (CAST) analysis demonstrate that Agm treatment dramatically increased BMP-2/7 expression in neurons and oligodendrocytes. On the other hand, BMP-4 expression was significantly decreased in astrocytes and oligodendrocytes around the lesion site. Together, our results reveal that Agm treatment improved neurological and histological outcomes, induced oligodendrogenesis, protected neurons, and decreased glial scar formation through modulating BMP-2/4/7 expression following SCI. PMID:23349763

  8. Energy consumption habits and human health nexus in Sub-Saharan Africa.

    PubMed

    Hanif, Imran

    2018-05-22

    This study explores the impact of fossil fuel consumption, solid fuel consumption for cooking purposes, economic growth, and carbon emissions on human health, with a key emphasis on the occurrence of tuberculosis and the high mortality rate in Sub-Saharan Africa. The study develops a system generalized method of moments (GMM) estimator for a panel of 34 middle- and lower-middle-income countries from 1995 to 2015, adopting a flexible methodology to tackle endogeneity in the variables. The robust results report that the use of solid fuels (charcoal, peat, wood, wood pellets, crop residues) for cooking purposes and the consumption of fossil fuels (oil, coal, gas) significantly increase the occurrence of tuberculosis. In addition, the results highlight that the consumption of both solid fuels and fossil fuels has adverse effects on life expectancy by increasing the mortality rate in Sub-Saharan African countries. Renewable energy sources such as sun, wind, and water (all with the potential to prevent households from direct exposure to particulate matter and harmful gases), as well as a rise in economic growth, help to control the occurrence of tuberculosis and to decrease the mortality rate. Moreover, the use of renewable energy sources helps to lessen emissions of carbon dioxide, nitrogen dioxides, and particulate matter, which can ultimately decrease the mortality rate and extend life expectancy in Sub-Saharan Africa.

  9. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach.

    PubMed

    Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) features and the Gaussian Mixture Model (GMM), culminating in the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches to ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, as compared to the SA-ELM LID accuracy of only 95.00%.
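
    The classic GMM baseline mentioned above trains one mixture per language on acoustic feature frames and scores an utterance against each model. A minimal scikit-learn sketch, with synthetic Gaussian "frames" standing in for real MFCC/SDC features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Stand-in "MFCC" frames for two languages; real systems extract
# MFCC/SDC frames from speech, these Gaussians are purely illustrative.
def frames(center, n=500):
    return rng.normal(loc=center, scale=1.0, size=(n, 6))

train = {"lang_a": frames(0.0), "lang_b": frames(2.0)}

# Classic GMM-LID baseline: one mixture per language; classify an
# utterance by the highest average frame log-likelihood.
models = {lang: GaussianMixture(n_components=4, random_state=0).fit(X)
          for lang, X in train.items()}

def identify(utterance):
    return max(models, key=lambda lang: models[lang].score(utterance))

print(identify(frames(0.0, n=50)), identify(frames(2.0, n=50)))
```

    The i-vector and ELM-based systems discussed in the abstract build on exactly this per-frame generative scoring idea.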

  10. Spoken language identification based on the enhanced self-adjusting extreme learning machine approach

    PubMed Central

    Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.

    2018-01-01

    Spoken Language Identification (LID) is the process of determining and classifying natural language from given content and a dataset. Typically, data must be processed to extract useful features to perform LID. Feature extraction for LID is, according to the literature, a mature process: standard features have been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC) features and the Gaussian Mixture Model (GMM), culminating in the i-vector based framework. However, the process of learning from the extracted features remains to be improved (i.e. optimised) to capture all the knowledge embedded in them. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful for training a single-hidden-layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches to ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed by incorporating both the Split-Ratio and K-Tournament methods; the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results showed the clear superiority of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, as compared to the SA-ELM LID accuracy of only 95.00%. PMID:29672546

  11. Sample Training Based Wildfire Segmentation by 2D Histogram θ-Division with Minimum Error

    PubMed Central

    Dong, Erqian; Sun, Mingui; Jia, Wenyan; Zhang, Dengyi; Yuan, Zhiyong

    2013-01-01

    A novel wildfire segmentation algorithm is proposed with the help of sample-training-based 2D histogram θ-division and minimum error. Based on the minimum-error principle and the 2D color histogram, θ-division methods have been presented recently, but the application of prior knowledge to them has not been explored. For the specific problem of wildfire segmentation, we collect sample images with manually labeled fire pixels. We then define the probability function of erroneous division to evaluate θ-division segmentations, and the optimal angle θ is determined by sample training. Performances in different color channels are compared, and the most suitable channel is selected. To further improve accuracy, a combination approach is presented using both θ-division and other segmentation methods such as GMM. Our approach is tested on real images, and the experiments prove its efficiency for wildfire segmentation. PMID:23878526

  12. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many moving object detection methods have been proposed, moving object extraction remains the core task in video surveillance. With the complex scenes of the real world, however, false detections, missed detections and holes inside the detected object still exist. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make the moving object detection more complete and accurate, image repair and morphological processing techniques, which are spatial compensations, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared to four other moving object detection methods, namely GMM, ViBe, frame difference and a method from the literature, the proposed method improves the efficiency and accuracy of detection.
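
    A minimal numpy sketch of the hybrid idea, combining a per-pixel running-Gaussian background model (a deliberate simplification of the mixture-of-Gaussians model used in the paper) with frame differencing via logical OR; thresholds and the toy scene are assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)
H, W = 32, 32

# Per-pixel background mean/variance, updated only where the pixel
# currently looks like background (single Gaussian for simplicity).
mean = np.full((H, W), 100.0)
var = np.full((H, W), 25.0)
alpha = 0.05                     # learning rate
prev = None

def detect(frame):
    global prev
    bg_mask = np.abs(frame - mean) > 2.5 * np.sqrt(var)
    fd_mask = np.zeros_like(bg_mask) if prev is None else np.abs(frame - prev) > 15
    upd = ~bg_mask               # adapt the model on background pixels only
    mean[upd] += alpha * (frame[upd] - mean[upd])
    var[upd] += alpha * ((frame[upd] - mean[upd]) ** 2 - var[upd])
    prev = frame.copy()
    return bg_mask | fd_mask     # OR-combine the two detectors

# Static noisy scene, then a bright moving "object" enters.
for _ in range(20):
    detect(100 + rng.normal(scale=2, size=(H, W)))
frame = 100 + rng.normal(scale=2, size=(H, W))
frame[10:16, 10:16] = 200.0      # the moving object
mask = detect(frame)
print(mask[10:16, 10:16].mean(), mask.mean())
```

    The paper's method additionally repairs holes in the OR-combined mask with image inpainting and morphological operations, which this sketch omits.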

  13. Towards an unsupervised device for the diagnosis of childhood pneumonia in low resource settings: automatic segmentation of respiratory sounds.

    PubMed

    Sola, J; Braun, F; Muntane, E; Verjus, C; Bertschi, M; Hugon, F; Manzano, S; Benissa, M; Gervaix, A

    2016-08-01

    Pneumonia remains the worldwide leading cause of mortality in children under the age of five, with 1.4 million deaths every year. Unfortunately, in low-resource settings, very limited diagnostic support aids are provided to point-of-care practitioners. The current UNICEF/WHO case management algorithm relies on the use of a chronometer to manually count breath rates in pediatric patients: there is thus a major need for more sophisticated tools to diagnose pneumonia that increase the sensitivity and specificity of breath-rate-based algorithms. These tools should be low cost and adapted to practitioners with limited training. In this work, a novel concept of an unsupervised tool for the diagnosis of childhood pneumonia is presented. The concept relies on the automated analysis of respiratory sounds as recorded by a point-of-care electronic stethoscope. By identifying the presence of auscultation sounds at different chest locations, this diagnostic tool is intended to estimate a pneumonia likelihood score. After presenting the overall architecture of an algorithm to estimate pneumonia scores, the importance of a robust unsupervised method to identify the inspiratory and expiratory phases of a respiratory cycle is highlighted. Based on data from an ongoing study involving pediatric pneumonia patients, a first algorithm to segment respiratory sounds is suggested. The unsupervised algorithm relies on a Mel-frequency filter bank, a two-step Gaussian Mixture Model (GMM) description of the data, and a final Hidden Markov Model (HMM) interpretation of inspiratory-expiratory sequences. Finally, illustrative results on the first recruited patients are provided. The presented algorithm opens the door to a new family of unsupervised respiratory sound analyzers that could improve future versions of case management algorithms for the diagnosis of pneumonia in low-resource settings.

  14. Do Consumers Substitute Opium for Hashish? An Economic Analysis of Simultaneous Cannabinoid and Opiate Consumption in a Legal Regime

    PubMed Central

    Chandra, Madhur

    2015-01-01

    Aim To analyze interrelationships in the consumption of opiates and cannabinoids in a legal regime and, specifically, whether consumers of opiates and cannabinoids treat them as substitutes for each other. Method Econometric dynamic panel data models for opium consumption are estimated using the generalized method of moments (GMM). A unique dataset containing information about opiate (opium) consumption from the Punjab province of British India for the years 1907–1918 is analyzed (n=272) as a function of its own price, the prices of two forms of cannabis (the leaf (bhang), and the resin (charas, or hashish)), and wage income. Cross-price elasticities are examined to reveal substitution or complementarity between opium and cannabis. Results Opium is a substitute for charas (or hashish), with a cross price elasticity (β3) of 0.14 (p < 0.05), but not for bhang (cannabis leaves; cross price elasticity = 0.00, p > 0.10). Opium consumption (β1 = 0.47 to 0.49, p < 0.01) shows properties of habit persistence consistent with addiction. The consumption of opium is slightly responsive (inelastic) to changes in its own price (β2 = −0.34 to −0.35, p < 0.05 to 0.01) and consumer wages (β4 = 0.15, p < 0.05). Conclusion Opium and hashish, a form of cannabis, are substitutes. In addition, opium consumption displays properties of habit persistence and slight price and wage income responsiveness (inelasticity) consistent with an addictive substance. PMID:26455552

  15. Voxel-Based Neighborhood for Spatial Shape Pattern Classification of Lidar Point Clouds with Supervised Learning

    PubMed Central

    Plaza-Leiva, Victoria; Gomez-Ruiz, Jose Antonio; Mandow, Anthony; García-Cerezo, Alfonso

    2017-01-01

    Improving the effectiveness of spatial shape feature classification from 3D lidar data is very relevant because it is widely used as a fundamental step towards higher-level scene understanding challenges of autonomous vehicles and terrestrial robots. In this sense, computing neighborhoods for points in dense scans becomes a costly process for both training and classification. This paper proposes a new general framework for implementing and comparing different supervised learning classifiers with a simple voxel-based neighborhood computation where points in each non-overlapping voxel in a regular grid are assigned to the same class by considering features within a support region defined by the voxel itself. The contribution provides offline training and online classification procedures as well as five alternative feature vector definitions based on principal component analysis for scatter, tubular and planar shapes. Moreover, the feasibility of this approach is evaluated by implementing a neural network (NN) method previously proposed by the authors as well as three other supervised learning classifiers found in scene processing methods: support vector machines (SVM), Gaussian processes (GP), and Gaussian mixture models (GMM). A comparative performance analysis is presented using real point clouds from both natural and urban environments and two different 3D rangefinders (a tilting Hokuyo UTM-30LX and a Riegl). Classification performance metrics and processing time measurements confirm the benefits of the NN classifier and the feasibility of voxel-based neighborhood computation. PMID:28294963

  16. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

    NASA Astrophysics Data System (ADS)

    Khawaja, Taimoor Saleem

    A high-belief, low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful LS-SVM algorithm, set within a Bayesian inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much-coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel anomaly detector is suggested based on LS-SVMs.
The proposed scheme uses only baseline data to construct a 1-class LS-SVM which, when presented with online data, is able to distinguish between normal behavior and any abnormal or novel data during real-time operation. The output of the scheme is interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. The failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian mixture modeling (GMM) scheme, the algorithm can estimate "possibly" non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected. The leading contributions of this thesis are (a) the development of a novel Bayesian anomaly detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term failure prognosis using Least Squares Support Vector Machines, (c) uncertainty representation and management using Bayesian inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
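
    The baseline-only 1-class training idea can be illustrated with scikit-learn's OneClassSVM, which stands in here for the thesis' Bayesian LS-SVM formulation (it is a standard, not least-squares, variant and gives no posterior probability); the two-dimensional "sensor" data are synthetic:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(6)

# Train on baseline (healthy) operating data only, as in the 1-class
# anomaly-detection scheme; the feature values are illustrative.
baseline = rng.normal(loc=[1.0, 0.0], scale=0.1, size=(500, 2))
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(baseline)

# Online data: some nominal, some drifted (faulty) operating points.
nominal = rng.normal(loc=[1.0, 0.0], scale=0.1, size=(100, 2))
faulty = rng.normal(loc=[1.6, 0.5], scale=0.1, size=(100, 2))
print((detector.predict(nominal) == 1).mean(),   # +1 = normal
      (detector.predict(faulty) == -1).mean())   # -1 = anomaly
```

    The `nu` parameter caps the fraction of baseline points treated as outliers; the thesis' Bayesian formulation would additionally map the decision value to a probability of health.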

  17. Using finite-difference waveform modeling to better understand rupture kinematics and path effects in ground motion modeling: an induced seismicity case study at the Groningen Gas field

    NASA Astrophysics Data System (ADS)

    Zurek, B.; Burnett, W. A.; deMartin, B.

    2017-12-01

    Ground motion models (GMMs) have historically been used as input in the development of probabilistic seismic hazard analysis (PSHA) and as an engineering tool to assess risk in building design. Generally, these equations are developed from empirical analysis of observations drawn from fairly complete catalogs of seismic events. One of the challenges in performing a PSHA analysis in a region where earthquakes are anthropogenically induced is that the catalog of observations is not complete enough to develop a set of equations covering all expected outcomes. For example, PSHA analysis at the Groningen gas field, an area of known induced seismicity, requires estimates of ground motions from tremors up to a maximum magnitude of 6.5 ML. Of the roughly 1300 recordable earthquakes, the maximum observed magnitude to date has been 3.6 ML. This paper is part of a broader study in which we use a deterministic finite-difference waveform modeling tool to complement the traditional development of GMMs. Of particular interest is the sensitivity of the GMMs to uncertainty in the rupture process and how this scales to larger magnitude events that have not been observed. A kinematic fault rupture model is introduced into our waveform simulations to test the sensitivity of the GMMs to variability in the fault rupture process that is physically consistent with observations. These tests will aid in constraining the degree of variability in modeled ground motions due to a realistic range of fault parameters and properties. From this study we conclude that, in order to properly capture the uncertainty of the GMMs under magnitude up-scaling, one needs to address the impact of uncertainty in the near field (<10 km) imposed by the lack of constraint on the finite rupture model. By tracing the uncertainty back to physical principles, we believe it can be better constrained, thus reducing exposure to risk.
Further, by investigating and constraining the effects of a realistic range of fault rupture scenarios and earthquake magnitudes on ground motion models, hazard and risk analyses in regions with incomplete earthquake catalogs, such as the Groningen gas field, can be better understood.
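As a rough illustration of the deterministic finite-difference approach, the sketch below advances the 1D acoustic wave equation with an explicit leapfrog scheme. It is a minimal toy, not the 3D code used in the study; grid size, velocity, and source are arbitrary.

```python
import numpy as np

# 1D acoustic wave equation u_tt = c^2 u_xx, explicit leapfrog scheme.
nx, nt = 401, 300
dx, c = 10.0, 2000.0                 # grid spacing (m), velocity (m/s)
dt = 0.4 * dx / c                    # time step satisfying the CFL condition
r2 = (c * dt / dx) ** 2              # squared Courant number

u_prev = np.zeros(nx)
u = np.zeros(nx)
u[nx // 2] = 1.0                     # impulsive "source" at the centre
u_prev[:] = u                        # zero initial velocity

for _ in range(nt):
    u_next = np.zeros(nx)            # fixed (zero) boundaries
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

# The physical wavefront has travelled roughly c * nt * dt metres,
# i.e. about 120 grid cells from the centre in each direction.
dist = c * nt * dt
```

With the Courant number below 1 the scheme is stable, and no significant energy appears ahead of the physical wavefront.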

  18. Properties of polyvinyl alcohol/xylan composite films with citric acid.

    PubMed

    Wang, Shuaiyang; Ren, Junli; Li, Weiying; Sun, Runcang; Liu, Shijie

    2014-03-15

    Composite films of xylan and polyvinyl alcohol were produced with citric acid as a new plasticizer or cross-linking agent. The effects of citric acid content and polyvinyl alcohol/xylan weight ratio on the mechanical properties, thermal stability, solubility, degree of swelling, and water vapor permeability of the composite films were investigated. The intermolecular interactions and morphology of the composite films were characterized by FTIR spectroscopy and SEM. The results indicated that the polyvinyl alcohol/xylan composite films had good compatibility. With an increase in citric acid content from 10% to 50%, the tensile strength decreased from 35.1 to 11.6 MPa; however, the elongation at break increased sharply from 15.1% to 249.5%. The values of water vapor permeability ranged from 2.35 to 2.95 × 10⁻⁷ g/(mm² h). Interactions between xylan and polyvinyl alcohol became stronger in the presence of citric acid, caused by hydrogen-bond and ester-bond formation among the components during film forming. Copyright © 2013. Published by Elsevier Ltd.

  19. Effect of polyethyleneimine modified graphene on the mechanical and water vapor barrier properties of methyl cellulose composite films.

    PubMed

    Liu, Hongyu; Liu, Cuiyun; Peng, Shuge; Pan, Bingli; Lu, Chang

    2018-02-15

    A series of novel methyl cellulose (MC) composite films were prepared using polyethyleneimine-reduced graphene oxide (PEI-RGO) as an effective filler for water vapor barrier applications. The as-prepared PEI-RGO/MC composites were characterized by Fourier transform infrared spectroscopy, X-ray diffraction, thermogravimetric analysis, tensile testing, and scanning electron microscopy. The experimental and theoretical results showed that PEI-RGO was uniformly dispersed in the MC matrix without aggregation and formed an aligned dispersion. The addition of PEI-RGO resulted in enhanced surface hydrophobicity and a tortuous diffusion pathway for water molecules. The water vapor permeability of PEI-RGO/MC at a loading of 3.0% surface-modified graphene was as low as 5.98 × 10⁻¹¹ g·mm⁻²·s⁻¹·Pa⁻¹. The synergistic effects of enhanced surface hydrophobicity and a tortuous diffusion pathway accounted for the improved water vapor barrier performance of the PEI-RGO/MC composite films. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Current status of adjuvant chemotherapy after radical cystectomy for deeply invasive bladder cancer.

    PubMed

    Skinner, D G; Daniels, J R; Lieskovsky, G

    1984-07-01

    Between March, 1976, and December, 1982, 70 of 157 patients (45%) undergoing single-stage radical cystectomy with pelvic lymphadenectomy and urinary diversion with the intent to cure invasive bladder cancer were found to have pathologic Stage P3B, P4, and/or N+ disease. Thirty-four of the 70 patients received adjuvant prophylactic chemotherapy after cystectomy and 36 patients were followed expectantly. From 1976 through 1977, adjuvant chemotherapy consisted of cyclophosphamide 1 Gm/M2 each month for six months; from 1978 through June, 1980, adjuvant chemotherapy consisted of cis-platinum 100 mg/M2 each month for four months, with the exception of 1 patient treated more aggressively with combination chemotherapy (CISCA). Since July, 1980, a prospective study has been utilized in which patients were randomized into two groups, Group A receiving combination chemotherapy and Group B followed expectantly; adjuvant chemotherapy appears to result in a slight delay in time to relapse, but no influence on overall survival was observed.

  1. Risk stratification of ambulatory patients with advanced heart failure undergoing evaluation for heart transplantation.

    PubMed

    Kato, Tomoko S; Stevens, Gerin R; Jiang, Jeffrey; Schulze, P Christian; Gukasyan, Natalie; Lippel, Matthew; Levin, Alison; Homma, Shunichi; Mancini, Donna; Farr, Maryjane

    2013-03-01

    Risk stratification of ambulatory heart failure (HF) patients has relied on peak VO2 < 14 ml/kg/min. We investigated whether additional clinical variables might further specify risk of death, ventricular assist device (VAD) implantation (INTERMACS < 4) or heart transplantation (HTx, Status 1A or 1B) within 1 year after HTx evaluation. We hypothesized that right ventricular stroke work index (RVSWI), pulmonary capillary wedge pressure (PCWP) and the model for end-stage liver disease-albumin score (MELD-A) would be additive prognostic predictors. We retrospectively collected data on 151 ambulatory patients undergoing HTx evaluation. Primary outcomes were defined as HTx, LVAD or death within 1 year after evaluation. Average age in our cohort was 55 ± 11.1 years, 79.1% were male and 39% had an ischemic etiology (LVEF 21 ± 10.5% and peak VO2 12.6 ± 3.5 ml/kg/min). Fifty outcomes (33.1%) were observed (27 HTxs, 15 VADs and 8 deaths). Univariate logistic regression showed a significant association of RVSWI (OR 0.47, p = 0.036), PCWP (OR 2.65, p = 0.007) and MELD-A (OR 2.73, p = 0.006) with 1-year events. Stepwise regression showed an independent correlation of RVSWI < 5 g·m/m²/beat (OR 6.70, p < 0.01), PCWP > 20 mm Hg (OR 5.48, p < 0.01), MELD-A > 14 (OR 3.72, p < 0.01) and peak VO2 < 14 ml/kg/min (OR 3.36, p = 0.024) with 1-year events. A scoring system was developed: MELD-A > 14 and peak VO2 < 14 scored 1 point each; PCWP > 20 and RVSWI < 5 scored 2 points each. A cut-off at ≥4 demonstrated 54% sensitivity and 88% specificity for 1-year events. Ambulatory HF patients have significant 1-year event rates. Risk stratification based on exercise performance, left-sided congestion, right ventricular dysfunction and liver congestion allows prediction of 1-year prognosis. Our findings support early and timely referral for VAD and/or transplant. Copyright © 2013 International Society for Heart and Lung Transplantation. Published by Elsevier Inc. All rights reserved.
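The scoring rule described in the abstract can be transcribed directly; the function name and interface below are ours, not the authors'.

```python
def transplant_risk_score(meld_a, peak_vo2, pcwp, rvswi):
    """Point score from the abstract: MELD-A > 14 and peak VO2 < 14 ml/kg/min
    score 1 point each; PCWP > 20 mm Hg and RVSWI < 5 score 2 points each.
    A total of >= 4 flagged 1-year events (54% sensitivity, 88% specificity).
    Returns (score, high_risk)."""
    score = 0
    score += 1 if meld_a > 14 else 0
    score += 1 if peak_vo2 < 14 else 0
    score += 2 if pcwp > 20 else 0
    score += 2 if rvswi < 5 else 0
    return score, score >= 4

# Hypothetical patient exceeding every threshold: maximal score of 6.
score, high_risk = transplant_risk_score(meld_a=16, peak_vo2=12.0,
                                         pcwp=24, rvswi=4.2)
```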

  2. Semi-Supervised Sparse Representation Based Classification for Face Recognition With Insufficient Labeled Samples

    NASA Astrophysics Data System (ADS)

    Gao, Yuan; Ma, Jiayi; Yuille, Alan L.

    2017-05-01

    This paper addresses the problem of face recognition when there are only a few labeled examples, or even just one, of the face that we wish to recognize. Moreover, these examples are typically corrupted by nuisance variables, both linear (i.e., additive nuisance variables such as bad lighting or the wearing of glasses) and non-linear (i.e., non-additive pixel-wise nuisance variables such as expression changes). The small number of labeled examples means that it is hard to remove these nuisance variables between the training and testing faces to obtain good recognition performance. To address the problem we propose a method called Semi-Supervised Sparse Representation based Classification (S³RC). This is based on recent work on sparsity in which faces are represented in terms of two dictionaries: a gallery dictionary consisting of one or more examples of each person, and a variation dictionary representing linear nuisance variables (e.g., different lighting conditions, different glasses). The main idea is that (i) we use the variation dictionary to characterize the linear nuisance variables via the sparsity framework, and then (ii) prototype face images are estimated as a gallery dictionary via a Gaussian Mixture Model (GMM), with mixed labeled and unlabeled samples in a semi-supervised manner, to deal with the non-linear nuisance variations between labeled and unlabeled samples. We have performed experiments with insufficient labeled samples, even when there is only a single labeled sample per person. Our results on the AR, Multi-PIE, CAS-PEAL, and LFW databases demonstrate that the proposed method delivers significantly improved performance over existing methods.

  3. Co-occurrence of Anxiety and Depressive Symptoms Following Breast Cancer Surgery and Its Impact on Quality of Life

    PubMed Central

    Gold, Marshall; Dunn, Laura B.; Phoenix, Bethany; Paul, Steven M.; Hamolsky, Deborah; Levine, Jon D.; Miaskowski, Christine

    2015-01-01

    Purpose Little is known about the prevalence of combined anxiety and depressive symptoms (CADS) in breast cancer patients. The purpose was to evaluate differences in demographic and clinical characteristics and quality of life (QOL) prior to breast cancer surgery among women classified into one of four distinct anxiety and/or depressive symptom groups. Methods A total of 335 patients completed measures of anxiety and depressive symptoms and QOL prior to and for 6 months following breast cancer surgery. Growth Mixture Modelling (GMM) was used to identify subgroups of women with distinct trajectories of anxiety and depressive symptoms. These results were used to create four distinct anxiety and/or depressive symptom groups. Differences in demographic, clinical, and symptom characteristics among these groups were evaluated using analyses of variance and Chi-square analyses. Results A total of 44.5% of patients were categorized with CADS. Women with CADS were younger, non-white, had lower performance status, received neoadjuvant or adjuvant chemotherapy, had greater difficulty dealing with their disease and treatment, and reported less support from others to meet their needs. These women had lower physical, psychological, social well-being, and total QOL scores. Higher levels of anxiety with or without subsyndromal depressive symptoms were associated with increased fears of recurrence, hopelessness, uncertainty, loss of control, and a decrease in life satisfaction. Conclusions Findings suggest that CADS occurs in a high percentage of women following breast cancer surgery and results in a poorer QOL. Assessments of anxiety and depressive symptoms are warranted prior to surgery for breast cancer. PMID:26187660

  4. Co-occurrence of anxiety and depressive symptoms following breast cancer surgery and its impact on quality of life.

    PubMed

    Gold, Marshall; Dunn, Laura B; Phoenix, Bethany; Paul, Steven M; Hamolsky, Deborah; Levine, Jon D; Miaskowski, Christine

    2016-02-01

    Little is known about the prevalence of combined anxiety and depressive symptoms (CADS) in breast cancer patients. The purpose was to evaluate differences in demographic and clinical characteristics and quality of life (QOL) prior to breast cancer surgery among women classified into one of four distinct anxiety and/or depressive symptom groups. A total of 335 patients completed measures of anxiety and depressive symptoms and QOL prior to and for 6 months following breast cancer surgery. Growth Mixture Modelling (GMM) was used to identify subgroups of women with distinct trajectories of anxiety and depressive symptoms. These results were used to create four distinct anxiety and/or depressive symptom groups. Differences in demographic, clinical, and symptom characteristics among these groups were evaluated using analyses of variance and Chi-square analyses. A total of 44.5% of patients were categorized with CADS. Women with CADS were younger, non-white, had lower performance status, received neoadjuvant or adjuvant chemotherapy, had greater difficulty dealing with their disease and treatment, and reported less support from others to meet their needs. These women had lower physical, psychological, social well-being, and total QOL scores. Higher levels of anxiety with or without subsyndromal depressive symptoms were associated with increased fears of recurrence, hopelessness, uncertainty, loss of control, and a decrease in life satisfaction. Findings suggest that CADS occurs in a high percentage of women following breast cancer surgery and results in a poorer QOL. Assessments of anxiety and depressive symptoms are warranted prior to surgery for breast cancer. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. 3D characterization of EMT cell density in developing cardiac cushions using optical coherence tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Yu, Siyao; Gu, Shi; Zhao, Xiaowei; Liu, Yehe; Jenkins, Michael W.; Watanabe, Michiko; Rollins, Andrew M.

    2017-02-01

    Congenital heart defects (CHDs) are the most common birth defect, affecting between 4 and 75 per 1,000 live births depending on the inclusion criteria. Many of these defects can be traced to defects of the cardiac cushions, critical structures during development that serve as precursors to many structures in the mature heart, including the atrial and ventricular septa and all four sets of cardiac valves. Epithelial-mesenchymal transition (EMT) is the process through which cardiac cushions become populated with cells. Altered cushion size or altered cushion cell density has been linked to many forms of CHDs; however, quantitation of cell density in the complex 3D cushion structure poses a significant challenge to conventional histology. Optical coherence tomography (OCT) is a technique capable of 3D imaging of the developing heart, but it typically lacks the resolution to differentiate individual cells. Our goal is to develop an algorithm to quantitatively characterize the density of cells in the developing cushion using 3D OCT imaging. First, in a heart volume, the atrioventricular (AV) cushions were manually segmented. Next, all voxel values in the region of interest were pooled together to generate a histogram. Finally, two populations of voxels were classified using either K-means classification or a Gaussian mixture model (GMM). The voxel population with higher values represents cells in the cushion. To test the algorithm, we imaged and evaluated avian embryonic hearts at looping stages. As expected, our results suggested that cell density increases with developmental stage. We validated the technique against scoring by expert readers.
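    The histogram-based two-population split can be sketched with a small EM fit of a two-component 1D Gaussian mixture on synthetic voxel intensities. This is a generic GMM illustration under invented intensities, not the study's code.

```python
import numpy as np

def fit_gmm_1d(x, n_iter=100):
    """EM for a two-component 1D Gaussian mixture (background vs. cells)."""
    # Initialise the two means at low/high percentiles of the histogram.
    mu = np.percentile(x, [25, 75]).astype(float)
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each voxel value.
        dens = pi / (sigma * np.sqrt(2 * np.pi)) * np.exp(
            -0.5 * ((x[:, None] - mu) / sigma) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma, resp

# Synthetic voxel intensities: dark background plus a brighter cell population.
rng = np.random.default_rng(1)
voxels = np.concatenate([rng.normal(20, 4, 5000),    # background
                         rng.normal(50, 6, 2000)])   # cells
pi, mu, sigma, resp = fit_gmm_1d(voxels)
cells = resp[:, np.argmax(mu)] > 0.5   # voxels assigned to the brighter mode
```

The fraction of voxels assigned to the brighter component then serves as the cell-density estimate for the segmented cushion.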

  6. The heterogeneity of antipsychotic response in the treatment of schizophrenia

    PubMed Central

    Case, M.; Stauffer, V. L.; Ascher-Svanum, H.; Conley, R.; Kapur, S.; Kane, J. M.; Kollack-Walker, S.; Jacob, J.; Kinon, B. J.

    2011-01-01

    Background Schizophrenia is a heterogeneous disorder in terms of patient response to antipsychotic treatment. Understanding the heterogeneity of treatment response may help to guide treatment decisions. This study was undertaken to capture inherent patterns of response to antipsychotic treatment in patients with schizophrenia, characterize the subgroups of patients with similar courses of response, and examine illness characteristics at baseline as possible predictors of response. Method Growth mixture modeling (GMM) was applied to data from a randomized, double-blind, 12-week study of 628 patients with schizophrenia or schizo-affective disorder treated with risperidone or olanzapine. Results Four distinct response trajectories based on Positive and Negative Syndrome Scale (PANSS) total score over 12 weeks were identified: Class 1 (420 patients, 80.6%) with moderate average baseline PANSS total score showing gradual symptom improvement; Class 2 (65 patients, 12.5%) showing rapid symptom improvement; Class 3 (24 patients, 4.6%) with high average baseline PANSS total score showing gradual symptom improvement; and Class 4 (12 patients, 2.3%) showing unsustained symptom improvement. Latent class membership of early responders (ER) and early non-responders (ENR) was determined based on 20% symptom improvement criteria at 2 weeks and ultimate responders (UR) and ultimate non-responders (UNR) based on 40% symptom improvement criteria at 12 weeks. Baseline factors with potential influence on latent class membership were identified. Conclusions This study identified four distinct treatment response patterns with predominant representation of responders or non-responders to treatment in these classes. This heterogeneity may represent discrete endophenotypes of response to treatment with different etiologic underpinnings. PMID:20925971

  7. Effect of Governance Indicators on Under-Five Mortality in OECD Nations: Generalized Method of Moments.

    PubMed

    Emamgholipour, Sara; Asemane, Zahra

    2016-01-01

    Today, it is recognized that factors other than health services are involved in health improvement and decreased inequality, so identifying them is a main concern of policy makers and health authorities. The aim of this study was to investigate the effect of governance indicators on health outcomes. A panel data study was conducted to investigate the effect of governance indicators on the child mortality rate in 27 OECD countries from 1996 to 2012 using the Generalized Method of Moments (GMM) model and EViews 8 software. According to the results obtained, the under-five mortality rate was significantly related to all of the research variables (p < 0.05). A 1% increase in under-five mortality in the previous period resulted in a 0.83% increase in the mortality rate in the next period, and a 1% increase in the total fertility rate increased the under-five mortality rate by 0.09%. In addition, a 1% increase in GDP per capita decreased the under-five mortality rate by 0.07%, and a 1% improvement in the control-of-corruption and rule-of-law indicators decreased the child mortality rate by 0.05% and 0.08%, respectively. Furthermore, a 1% increase in public health expenditure per capita resulted in a 0.03% decrease in the under-five mortality rate. The results of the study suggest that, considering control variables including GDP per capita, public health expenditure per capita, and total fertility rate, improvement of governance indicators (control of corruption and rule of law) would decrease the child mortality rate.
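    The core idea behind GMM for dynamic panels like this one (where last period's mortality predicts this period's) is to first-difference away the country fixed effect and instrument the lagged dependent variable with an earlier level. The sketch below shows that idea in its simplest Anderson-Hsiao form on simulated data; the study itself used EViews 8 and a richer specification.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, rho = 2000, 8, 0.5              # units, periods, true persistence

# Dynamic panel with unit fixed effects: y_it = rho*y_{i,t-1} + u_i + e_it.
u = rng.normal(0, 1, N)
e = rng.normal(0, 1, (N, T))
y = np.zeros((N, T))
y[:, 0] = u + e[:, 0]
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + u + e[:, t]

# Pooled OLS of y_it on y_{i,t-1}: biased upward by the fixed effect u_i.
ols = (y[:, 1:] * y[:, :-1]).sum() / (y[:, :-1] ** 2).sum()

# Difference-GMM idea in its simplest (Anderson-Hsiao) form:
# first-difference away u_i, then instrument Delta y_{i,t-1} with the
# lagged level y_{i,t-2}, which is uncorrelated with Delta e_it.
dy = np.diff(y, axis=1)               # Delta y_it for t = 1..T-1
dy_t, dy_lag = dy[:, 1:], dy[:, :-1]  # Delta y_it (t >= 2) and its lag
z = y[:, :T - 2]                      # instrument y_{i,t-2}
iv = (z * dy_t).sum() / (z * dy_lag).sum()
```

Full difference GMM (Arellano-Bond) uses all available lagged levels as instruments and weights the moment conditions optimally; the single-instrument estimator above is the minimal version of the same moment condition.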

  8. Do consumers substitute opium for hashish? An economic analysis of simultaneous cannabinoid and opiate consumption in a legal regime.

    PubMed

    Chandra, Siddharth; Chandra, Madhur

    2015-11-01

    To analyze interrelationships in the consumption of opiates and cannabinoids in a legal regime and, specifically, whether consumers of opiates and cannabinoids treat them as substitutes for each other. Econometric dynamic panel data models for opium consumption are estimated using the generalized method of moments (GMM). A unique dataset containing information about opiate (opium) consumption from the Punjab province of British India for the years 1907-1918 is analyzed (n=252) as a function of its own price, the prices of two forms of cannabis (the leaf (bhang) and the resin (charas, or hashish)), and wage income. Cross-price elasticities are examined to reveal substitution or complementarity between opium and cannabis. Opium is a substitute for charas (or hashish), with a cross-price elasticity (β̂3) of 0.14 (p<0.05), but not for bhang (cannabis leaves; cross-price elasticity = 0.00, p>0.10). Opium consumption (β̂1 = 0.47 to 0.49, p<0.01) shows properties of habit persistence consistent with addiction. The consumption of opium is slightly responsive (inelastic) to changes in its own price (β̂2 = -0.34 to -0.35, p<0.05 to 0.01) and consumer wages (β̂1 = 0.15, p<0.05). Opium and hashish, a form of cannabis, are substitutes. In addition, opium consumption displays properties of habit persistence and slight price and wage income responsiveness (inelasticity) consistent with an addictive substance. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  9. Trajectories of Marijuana Use from Adolescence to Adulthood as Predictors of Unemployment Status in the Early Forties

    PubMed Central

    Zhang, Chenshu; Brook, Judith S.; Leukefeld, Carl G.; Brook, David W.

    2016-01-01

    Objectives To study the degree to which individuals in different trajectories of marijuana use are similar or different in terms of unemployment status at mean age 43. Methods We gathered longitudinal data on a prospective cohort taken from a community sample (N = 548). Forty nine percent of the original participants were females. Over 90% of the participants were white. The participants were followed from adolescence to early midlife. The mean ages of participants at the follow-up interviews were 14.1, 16.3, 22.3, 27.0, 31.9, 36.6, and 43.0, respectively. We used the growth mixture modeling (GMM) approach to identify the trajectories of marijuana use over a 29 year period. Results Five trajectories of marijuana use were identified: chronic users/decreasers (8.3%), quitters (18.6%), increasing users (7.3%), chronic occasional users (25.6%), and nonusers/experimenters (40.2%). Compared with nonusers/experimenters, chronic users/decreasers had a significantly higher likelihood of unemployment at mean age 43 (Adjusted Odds Ratio =3.51, 95% Confidence Interval = 1.13 – 10.91), even after controlling for the covariates. Conclusions and Scientific Significance The results of the associations between the distinct trajectories of marijuana use and unemployment in early midlife indicate that it is important to develop intervention programs targeting chronic marijuana use as well as unemployment in individuals at this stage of development. Results from this study should encourage clinicians, teachers, and parents to assess and treat chronic marijuana use in adolescents. PMID:26991779
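    Growth mixture modeling is normally fit as a latent-class model in dedicated software (e.g., Mplus), jointly estimating class membership and growth factors. The toy two-stage sketch below only approximates the idea, by fitting each person's linear growth curve and then clustering the (intercept, slope) pairs; the class labels, scores, and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
ages = np.array([14.0, 16.0, 22.0, 27.0, 32.0, 37.0, 43.0])

def simulate(n, intercept, slope):
    """Noisy linear trajectories of a use-level score over the ages above."""
    t = ages - ages[0]
    return intercept + slope * t + rng.normal(0, 1.0, (n, len(ages)))

# Two invented classes: "quitters" (declining use) and "chronic users".
data = np.vstack([simulate(60, 8.0, -0.25),
                  simulate(40, 8.0, 0.05)])
truth = np.array([0] * 60 + [1] * 40)

# Stage 1: summarise each person by a fitted intercept and slope.
t = ages - ages[0]
X = np.column_stack([np.ones_like(t), t])
coefs, *_ = np.linalg.lstsq(X, data.T, rcond=None)  # rows: intercept, slope
feats = coefs.T
feats = (feats - feats.mean(0)) / feats.std(0)      # standardise features

# Stage 2: two-cluster k-means (Lloyd's algorithm) on (intercept, slope).
centers = feats[[0, -1]].copy()    # seed with one person from each end
for _ in range(25):
    d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(axis=1)
    for k in range(2):
        centers[k] = feats[labels == k].mean(axis=0)

# Agreement with the generating classes (up to label swap).
acc = max((labels == truth).mean(), (labels != truth).mean())
```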

  10. Three essays on agricultural price volatility and the linkages between agricultural and energy markets

    NASA Astrophysics Data System (ADS)

    Wu, Feng

    This dissertation contains three essays. In the first essay, I use a volatility spillover model to find evidence of significant spillovers from crude oil prices to corn cash and futures prices, and that these spillover effects are time-varying. Results reveal that corn markets have become much more connected to crude oil markets after the introduction of the Energy Policy Act of 2005. Furthermore, crude oil prices transmit positive volatility spillovers into corn prices, and movements in corn prices become more energy-driven as the ethanol-gasoline consumption ratio increases. Based on this strong volatility link between crude oil and corn prices, a new cross-hedging strategy for managing corn price risk using oil futures is examined and its performance studied. Results show that this cross-hedging strategy provides only slightly better hedging performance compared to traditional hedging in corn futures markets alone. The implication is that hedging corn price risk in corn futures markets alone can still provide relatively satisfactory performance in the biofuel era. The second essay studies the spillover effect of biofuel policy on participation in the Conservation Reserve Program. Landowners' participation decisions are modeled using a real options framework. A novel aspect of the model is that it captures the structural change in agriculture caused by rising biofuel production. The resulting model is used to simulate the spillover effect under various conditions. In particular, I simulate how increased growth in agricultural returns, persistence of the biofuel production boom, and the volatility surrounding agricultural returns affect conservation program participation decisions. Policy implications of these results are also discussed. The third essay proposes a methodology to construct a risk-adjusted implied volatility measure that removes the forecasting bias of the model-free implied volatility measure.
The risk adjustment is based on a closed-form relationship between the expectation of future volatility and the model-free implied volatility under a jump-diffusion model. I use a GMM estimation framework to identify the key model parameters needed to apply the model. An empirical application to corn futures implied volatility is used to illustrate the methodology and demonstrate differences between my approach and the model-free implied volatility using observed corn option prices. I compare the risk-adjusted forecast with the unadjusted forecast as well as other alternatives; the results suggest that the risk-adjusted volatility is unbiased, informationally more efficient, and has superior predictive power over the alternatives considered.

  11. The Principal Components of Adult Female Insole Shape Align Closely with Two of Its Classic Indicators.

    PubMed

    Bookstein, Fred L; Domjanic, Jacqueline

    2015-01-01

    The plantar surface of the human foot transmits the weight and dynamic force of the owner's lower limbs to the ground and the reaction forces back to the musculoskeletal system. Its anatomical variation is intensely studied in such fields as sports medicine and orthopedic dysmorphology. Yet, strangely, the shape of the insole that accommodates this surface and elastically buffers these forces is neither an aspect of the conventional anthropometrics of feet nor an informative label on the packet that markets supplementary insoles. In this paper we pursue an earlier suggestion that insole form in vertical view be quantified in terms of the shape of the foot not at the plane of support (the "footprint") but some two millimeters above that level. Using such sections extracted from laser scans of 158 feet of adult women from the University of Zagreb, in conjunction with an appropriate modification of today's standard geometric morphometrics (GMM), we find that the sectioned form can be described by its size together with two meaningful relative warps of shape. The pattern of this shape variation is not novel. It is closely aligned with two of the standard footprint measurements, the Chippaux-Šmiřák arch index and the Clarke arch angle, whose geometrical foci (the former in the ball of the foot, the latter in the arch) it apparently combines. Thus a strong contemporary analysis complements but does not supplant the simpler anthropometric analyses of half a century ago, with implications for applied anthropology.
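    The Procrustes superimposition step at the core of standard geometric morphometrics (removing translation, scale, and rotation from landmark configurations before any relative-warps analysis) can be sketched as follows. This is a generic illustration on synthetic landmarks, not the authors' pipeline.

```python
import numpy as np

def procrustes_align(A, B):
    """Align landmark set B to A: remove translation, scale, and rotation.

    Works for 2D or 3D landmark matrices (n_landmarks x dims).
    Returns (B aligned to A, A normalised for comparison)."""
    A0 = A - A.mean(axis=0)            # centre both configurations
    B0 = B - B.mean(axis=0)
    A0 = A0 / np.linalg.norm(A0)       # unit centroid size
    B0 = B0 / np.linalg.norm(B0)
    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(A0.T @ B0)
    R = U @ Vt
    return B0 @ R.T, A0

# Synthetic "insole outline" landmarks and a rotated/scaled/shifted copy.
rng = np.random.default_rng(3)
shape = rng.normal(size=(20, 2))
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
copy = 2.5 * shape @ rot.T + np.array([10.0, -4.0])

aligned, ref = procrustes_align(shape, copy)
dist = np.linalg.norm(aligned - ref)   # Procrustes distance, ~0 here
```

After superimposing all 158 sections this way, the residual coordinates are what a relative-warps (principal components) analysis would operate on.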

  12. The Principal Components of Adult Female Insole Shape Align Closely with Two of Its Classic Indicators

    PubMed Central

    Bookstein, Fred L.; Domjanic, Jacqueline

    2015-01-01

    The plantar surface of the human foot transmits the weight and dynamic force of the owner’s lower limbs to the ground and the reaction forces back to the musculoskeletal system. Its anatomical variation is intensely studied in such fields as sports medicine and orthopedic dysmorphology. Yet, strangely, the shape of the insole that accommodates this surface and elastically buffers these forces is neither an aspect of the conventional anthropometrics of feet nor an informative label on the packet that markets supplementary insoles. In this paper we pursue an earlier suggestion that insole form in vertical view be quantified in terms of the shape of the foot not at the plane of support (the “footprint”) but some two millimeters above that level. Using such sections extracted from laser scans of 158 feet of adult women from the University of Zagreb, in conjunction with an appropriate modification of today’s standard geometric morphometrics (GMM), we find that the sectioned form can be described by its size together with two meaningful relative warps of shape. The pattern of this shape variation is not novel. It is closely aligned with two of the standard footprint measurements, the Chippaux-Šmiřák arch index and the Clarke arch angle, whose geometrical foci (the former in the ball of the foot, the latter in the arch) it apparently combines. Thus a strong contemporary analysis complements but does not supplant the simpler anthropometric analyses of half a century ago, with implications for applied anthropology. PMID:26308442

  13. Effect of the Prevalence of HIV/AIDS and the Life Expectancy Rate on Economic Growth in SSA Countries: Difference GMM Approach.

    PubMed

    Waziri, Salisu Ibrahim; Mohamed Nor, Norashidah; Raja Abdullah, Nik Mustapha; Adamu, Peter

    2015-09-01

    The productivity of countries around the globe is adversely affected by the health-related problems of their labour force. This study examined the effect of the prevalence of human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS) and life expectancy on the economic growth of 33 Sub-Saharan African (SSA) countries over a period of 11 years (2002-2012). The study employed a dynamic panel approach as opposed to the static traditional approach utilised in the literature. The dynamic approach is necessary because HIV/AIDS is a dynamic variable: its prevalence today depends on that of previous years. The results revealed that HIV/AIDS is negatively correlated with economic growth in the region, with a coefficient of 0.014, significant at the 1% level. That is, a 10% increase in HIV/AIDS prevalence leads to a 0.14% decrease in the GDP of the region. Tackling HIV/AIDS is therefore imperative for the developing Sub-Saharan African region, and all hands must be on deck to end the menace globally.

  14. Experimental investigation of effects of stitching orientation on forming behaviors of 2D P-aramid multilayer woven preform

    NASA Astrophysics Data System (ADS)

    Abtew, Mulat Alubel; Boussu, François; Bruniaux, Pascal; Loghin, Carmen; Cristian, Irina; Chen, Yan; Wang, Lichuan

    2018-05-01

    In many textile applications, stitching is one of the most widely used methods for joining multi-layer fabric plies, not only because of its easy applicability and flexible production but also because it provides structural integrity through the thickness of the material. In this research, the influence of stitching pattern on various molding characteristics of multi-layer 2D para-aramid plain woven fabrics during deformation was investigated. The fabrics were made of high-performance fiber with a yarn linear density of 930 dtex and a fabric areal density of 200 g/m². First, different stitch patterns (orientations) were applied to join the multi-layered fabrics, keeping other stitching parameters such as stitch gap, stitch thread tension, stitch length, stitch type, and stitch thread type constant throughout the study. Then, a pneumatic molding device with a low-speed forming process, specially designed for preforming of textiles with a punch of predefined hemispherical shape, was used. The results show that stitching pattern is one of the parameters that influence molding behavior and should be considered when molding stitched multi-layer fabrics.

  15. Effect of the Prevalence of HIV/AIDS and the Life Expectancy Rate on Economic Growth in SSA Countries: Difference GMM Approach

    PubMed Central

    Waziri, Salisu Ibrahim; Nor, Norashidah Mohamed; Abdullah, Nik Mustapha Raja; Adamu, Peter

    2016-01-01

    The productivity of countries around the globe is adversely affected by the health-related problems of their labour force. This study examined the effect of the prevalence of human immunodeficiency virus/acquired immune deficiency syndrome (HIV/AIDS) and life expectancy on the economic growth of 33 Sub-Saharan African (SSA) countries over a period of 11 years (2002–2012). The study employed a dynamic panel approach, as opposed to the static approach traditionally utilised in the literature. The dynamic approach is appropriate because HIV/AIDS is a dynamic variable: its prevalence today depends on that of previous years. The results revealed that HIV/AIDS is negatively correlated with economic growth in the region, with a coefficient of -0.014, significant at the 1% level. That is, a 10% increase in HIV/AIDS prevalence leads to a 0.14% decrease in the GDP of the region. Tackling HIV/AIDS is therefore imperative for the developing Sub-Saharan African region, and all hands must be on deck to end the menace globally. PMID:26573032
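    The abstract's worked example follows from reading the coefficient as a log-log elasticity. A one-line check; only the -0.014 coefficient and the 10% shock come from the abstract, everything else is purely illustrative:

```python
# Hypothetical back-of-envelope check of the abstract's elasticity reading.
elasticity = -0.014           # estimated elasticity of GDP w.r.t. HIV/AIDS prevalence
pct_change_prevalence = 10.0  # the abstract's example: a 10% rise in prevalence

# In a log-log specification, %change(GDP) ~= elasticity * %change(prevalence)
pct_change_gdp = elasticity * pct_change_prevalence
print(round(pct_change_gdp, 2))  # -0.14, i.e. a 0.14% fall in GDP
```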

  16. Scattering and propagation of a Laguerre-Gaussian vortex beam by uniaxial anisotropic bispheres

    NASA Astrophysics Data System (ADS)

    Qu, Tan; Wu, Zhensen; Shang, Qingchao; Li, Zhengjun; Wu, Jiaji; Li, Haiying

    2018-04-01

    Within the framework of the generalized multi-particle Mie (GMM) theory, an analytical solution for the electromagnetic scattering of a Laguerre-Gaussian (LG) vortex beam by two interacting homogeneous uniaxial anisotropic spheres is investigated. The particles, with different sizes and dielectric parameter tensor elements, are arbitrarily configured. Based on the continuous boundary conditions at each sphere surface, the interactive scattering coefficients are derived. The internal and near-surface fields are investigated to describe the propagation of an LG vortex beam through a NaCl crystal. In addition, the far fields of bispheres of typical anisotropic media such as LiNbO3 and TiO2 illuminated by an LG vortex beam are numerically presented in detail to analyze the influence of the anisotropic parameters, sphere positions, separation distance, topological charge, etc. The results show that an LG vortex beam recovers better after interacting with a spherical particle than a Gaussian beam does. The study is useful for further research on the scattering and propagation characteristics of arbitrary vortex beams in anisotropic chains and periodic structures.

  17. The social network of international health aid.

    PubMed

    Han, Lu; Koenig-Archibugi, Mathias; Opsahl, Tore

    2018-06-01

    International development assistance for health generates an emergent social network in which policy makers in recipient countries are connected to numerous bilateral and multilateral aid agencies and to other aid recipients. Ties in this global network are channels for the transmission of knowledge, norms and influence in addition to material resources, and policy makers in centrally situated governments receive information faster and are exposed to a more diverse range of sources and perspectives. Since diversity of perspectives improves problem-solving capacity, the structural position of aid-receiving governments in the health aid network can affect the health outcomes that those governments are able to attain. We apply a recently developed Social Network Analysis measure to health aid data for 1990-2010 to investigate the relationship between country centrality in the health aid network and improvements in child health. A generalized method of moments (GMM) analysis indicates that, controlling for the volume of health aid and other factors, higher centrality in the health aid network is associated with better child survival rates in a sample of 110 low and middle income countries. Copyright © 2018 Elsevier Ltd. All rights reserved.
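    The network framing can be made concrete with a toy example: recipient governments are tied to the donors funding them, and a recipient's degree (number of distinct donors) is the simplest centrality measure. The ties below are invented for illustration; the paper uses a more specialised two-mode Social Network Analysis measure.

```python
# Toy sketch: count distinct donor ties per recipient as a degree centrality.
# All country/donor names are hypothetical.
from collections import Counter

aid_ties = [  # (recipient, donor) pairs
    ("CountryA", "DonorX"), ("CountryA", "DonorY"), ("CountryA", "DonorZ"),
    ("CountryB", "DonorX"),
]

degree = Counter(recipient for recipient, _donor in aid_ties)
print(degree)  # CountryA is the more central recipient (3 donors vs 1)
```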

  18. VizieR Online Data Catalog: Line list for seven target PAndAS clusters (Sakari+, 2015)

    NASA Astrophysics Data System (ADS)

    Sakari, C. M.; Venn, K. A.; Mackey, D.; Shetrone, M. D.; Dotter, A.; Ferguson, A. M. N.; Huxor, A.

    2017-11-01

    The targets were observed with the Hobby-Eberly Telescope (HET; Ramsey et al. 1998, Proc. SPIE, 3352, 34; Shetrone et al. 2007PASP..119..556S) at McDonald Observatory in Fort Davis, TX in 2011 and early 2012. The High Resolution Spectrograph (HRS; Tull 1998, Proc. SPIE, 3355, 387) was utilized with the 3-arcsec fibre and a slit width of 1 arcsec, yielding an instrumental spectral resolution of R=30000. With the 600 g/mm cross-disperser set to a central wavelength of 6302.9Å, wavelength coverages of ~5320-6290 and ~6360-7340Å were achieved in the blue and the red, respectively. The 3-arcsec fibre provided coverage of the clusters past their half-light radii; the additional sky fibres (located 10 arcsec from the central object fibre) provided simultaneous observations for sky subtraction. Exposure times were calculated to obtain a total signal-to-noise ratio (S/N)=80 (per resolution element), although not all targets received sufficient time to meet this goal. (2 data files).

  19. An Android based location service using GSM CellID and GPS to obtain a graphical guide to the nearest cash machine

    NASA Astrophysics Data System (ADS)

    Jacobsen, Jurma; Edlich, Stefan

    2009-02-01

    There is a broad range of potentially useful mobile location-based applications. One crucial point is making them available to the public at large. This case illustrates the ability of Android, the operating system for mobile devices, to fulfill this demand in the mashup way, using special geocoding web services and one integrated web service for retrieving data on the nearest cash machines. It shows an exemplary approach for building mobile location-based mashups for everyone: 1. As a basis for reaching as many people as possible, the open source Android OS is assumed to spread widely. 2. "Everyone" also means that the handset does not have to be an expensive GPS device. This is realized by re-utilizing the existing GSM infrastructure with the Cell of Origin (COO) method, which looks up the CellID in one of the growing number of web-accessible CellID databases. Some of these databases are still undocumented and not yet published. Furthermore, the Google Maps API for Mobile (GMM) and the open source counterpart OpenCellID are used. Localizing the user's current position by looking up the closest cell to which the handset is currently connected (COO) is not as precise as GPS, but appears to be sufficient for many applications. For this reason the GPS user is the most pleased one: for this user the system is fully automated. By contrast, users who do not own a GPS-equipped handset can refine their location with one click on the map inside the determined circular region. Users are then shown and guided along a path to the nearest cash machine by integrating the Google Maps API with an overlay. Additionally, the GPS user can keep track of him- or herself through a frequently updated view based on constantly requested precise GPS position data.

  20. Pulsed and Tissue Doppler Echocardiographic Changes in Hypertensive Crisis with and without End Organ Damage

    PubMed Central

    Garadah, Taysir; Kassab, Salah; Gabani, Saleh; Abu-Taleb, Ahmed; Abdelatif, Ahmed; Asef, Aysha; Shoroqi, Issa; Jamsheer, Anwer

    2011-01-01

    Background: Hypertensive crisis (HC) is a common medical emergency associated with an acute rise in arterial blood pressure that leads to end-organ damage (EOD). Therefore, it is imperative to find markers that may help in the prediction of EOD in acute hypertensive crisis. Aim: To assess the clinical presentations on admission; the pulsed and tissue Doppler echocardiographic changes in patients with EOD compared with those without; and the risk of developing end-organ damage associated with clinical and biochemical variables in hypertensive crisis. Material and Methods: The data of 241 patients with hypertensive crisis, with systolic blood pressure (SBP) >180 mmHg or diastolic blood pressure (DBP) >120 mmHg, were extracted from patients' files. Patients were divided into hypertensive emergency (HE) with EOD (n = 62) and hypertensive urgency (HU) without EOD (n = 179). LV hypertrophy on ECG, echo parameters for wall thickness, left ventricular mass index (LVMI), body mass index (BMI), the pulsed Doppler ratio of early filling velocity (E wave) to late filling velocity (A wave) (E/A), and the ratio of E wave velocity to tissue Doppler Em velocity (E/Em) were evaluated. Serum creatinine, hemoglobin, age, gender, BMI, and history of diabetes mellitus, smoking, hypertension, stroke and hyperlipidemia were recorded. Multiple logistic regression analysis was applied to predict the risk of end-organ damage from clinical variables. Results: Patients with HE, compared with those with HU, were significantly older and had significantly higher SBP on admission, BMI and LVMI. Further, they had a significantly higher E/A ratio on pulsed Doppler echo and a higher E/Em ratio on tissue Doppler echocardiography. Multiple regression analysis adjusted for age and sex showed positive predictive value, with odds ratios of 1.98 for SBP on admission >220 mmHg, 1.43 for serum creatinine >120 µg/L, 1.304 for age >60 years, 1.9 for obesity (BMI ≥ 30), 2.26 for male gender and 1.92 for left ventricular hypertrophy on ECG. The hemoglobin level and history of smoking, hyperlipidemia and DM had no significant predictive value. The pulsed Doppler E/A ratio was ≥1.6, E/Em was >15 and LVMI was >125 gm/m2 in patients with EOD compared with those without. Conclusion: In patients presenting with hypertensive crisis, the echo indices of E/A ratio and tissue Doppler E/Em ratio are significantly higher in patients with hypertensive emergency than in those with hypertensive urgency. Left ventricular hypertrophy on ECG, high LV mass index (>125 gm/m2), BMI >30, age >60 years, male gender and history of hypertension and stroke were positive predictors of poor outcome and end-organ damage. PMID:26949338

  1. Pulsed and Tissue Doppler Echocardiographic Changes in Hypertensive Crisis with and without End Organ Damage.

    PubMed

    Garadah, Taysir; Kassab, Salah; Gabani, Saleh; Abu-Taleb, Ahmed; Abdelatif, Ahmed; Asef, Aysha; Shoroqi, Issa; Jamsheer, Anwer

    2011-01-01

    Hypertensive crisis (HC) is a common medical emergency associated with an acute rise in arterial blood pressure that leads to end-organ damage (EOD). Therefore, it is imperative to find markers that may help in the prediction of EOD in acute hypertensive crisis. We aimed to assess the clinical presentations on admission; the pulsed and tissue Doppler echocardiographic changes in patients with EOD compared with those without; and the risk of developing end-organ damage associated with clinical and biochemical variables in hypertensive crisis. The data of 241 patients with hypertensive crisis, with systolic blood pressure (SBP) >180 mmHg or diastolic blood pressure (DBP) >120 mmHg, were extracted from patients' files. Patients were divided into hypertensive emergency (HE) with EOD (n = 62) and hypertensive urgency (HU) without EOD (n = 179). LV hypertrophy on ECG, echo parameters for wall thickness, left ventricular mass index (LVMI), body mass index (BMI), the pulsed Doppler ratio of early filling velocity (E wave) to late filling velocity (A wave) (E/A), and the ratio of E wave velocity to tissue Doppler Em velocity (E/Em) were evaluated. Serum creatinine, hemoglobin, age, gender, BMI, and history of diabetes mellitus, smoking, hypertension, stroke and hyperlipidemia were recorded. Multiple logistic regression analysis was applied to predict the risk of end-organ damage from clinical variables. Patients with HE, compared with those with HU, were significantly older and had significantly higher SBP on admission, BMI and LVMI. Further, they had a significantly higher E/A ratio on pulsed Doppler echo and a higher E/Em ratio on tissue Doppler echocardiography. Multiple regression analysis adjusted for age and sex showed positive predictive value, with odds ratios of 1.98 for SBP on admission >220 mmHg, 1.43 for serum creatinine >120 µg/L, 1.304 for age >60 years, 1.9 for obesity (BMI ≥ 30), 2.26 for male gender and 1.92 for left ventricular hypertrophy on ECG. The hemoglobin level and history of smoking, hyperlipidemia and DM had no significant predictive value. The pulsed Doppler E/A ratio was ≥1.6, E/Em was >15 and LVMI was >125 gm/m2 in patients with EOD compared with those without. In patients presenting with hypertensive crisis, the echo indices of E/A ratio and tissue Doppler E/Em ratio are significantly higher in patients with hypertensive emergency than in those with hypertensive urgency. Left ventricular hypertrophy on ECG, high LV mass index (>125 gm/m2), BMI >30, age >60 years, male gender and history of hypertension and stroke were positive predictors of poor outcome and end-organ damage.

  2. Trajectories of depressive symptoms in the acute phase of psychosis: Implications for treatment.

    PubMed

    Kjelby, E; Gjestad, R; Sinkeviciute, I; Kroken, R A; Løberg, E-M; Jørgensen, H A; Johnsen, E

    2018-06-02

    Depression is common in schizophrenia and associated with negative outcomes. Previous studies have identified heterogeneity in treatment response in schizophrenia. We aimed to investigate different trajectories of depression in patients suffering from psychosis, and predictors of change in depressive symptoms during antipsychotic treatment. Two hundred and twenty-six patients >18 years of age, acutely admitted due to psychosis, were consecutively included, and the follow-up was 27 weeks. The Calgary Depression Scale for Schizophrenia (CDSS) sum score was the primary outcome. Latent growth curve models (LGCM) and growth mixture models (GMM) were estimated. Predictors were the positive sum score of the Positive and Negative Syndrome Scale for Schizophrenia (PANSS), schizophrenia spectrum versus non-spectrum psychoses, gender, and being antipsychotic naive at inclusion. We found support for three depression trajectories: a high depression-level class (14.7%), a low depression-level class (69.6%), and a third, initially depressed class quickly decreasing to a low level (15.7%). Change in CDSS was associated with change in PANSS positive score in all time intervals (4 weeks: b = 0.18, p < 0.001; 3 months: 0.21, p < 0.023; 6 months: 0.43, p < 0.001) and with a diagnosis within the schizophrenia spectrum, but not with antipsychotic naivety or gender. The schizophrenia-spectrum patients had fewer depressive symptoms at inclusion (-2.63, p < 0.001). In conclusion, an early-responding and a treatment-refractory group were identified. The treatment-refractory patients are candidates for enhanced antidepressive treatment, for which current evidence is limited. The post-psychotic depression group was characterized by depressive symptoms in the acute phase as well. We could not identify differentiating characteristics of the depression trajectories. Copyright © 2018 Elsevier Ltd. All rights reserved.
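    The trajectory-class idea behind a growth mixture model can be sketched in miniature: summarise each patient's symptom series by a fitted intercept and slope, then cluster those growth summaries with a Gaussian mixture. All data below are synthetic and the two-stage shortcut is an assumption for illustration; it is not the authors' latent-variable pipeline.

```python
# Toy trajectory clustering in the spirit of a growth mixture model.
# Three synthetic classes echo the abstract: persistently high, persistently
# low, and initially high but quickly decreasing. Numbers are invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
weeks = np.array([0.0, 4.0, 12.0, 27.0])  # assessment times

def simulate(n, intercept, slope):
    """Simulate n patients' CDSS-like scores with linear trend plus noise."""
    return intercept + slope * weeks + rng.normal(0, 1, size=(n, weeks.size))

scores = np.vstack([
    simulate(70, 14.0, -0.02),   # persistently high class
    simulate(300, 4.0, 0.0),     # persistently low class
    simulate(70, 12.0, -0.35),   # quickly decreasing class
])

# Per-patient growth summaries: least-squares [intercept, slope].
X = np.column_stack([np.ones_like(weeks), weeks])
coefs, *_ = np.linalg.lstsq(X, scores.T, rcond=None)
summaries = coefs.T                      # shape (n_patients, 2)

gmm = GaussianMixture(n_components=3, random_state=0).fit(summaries)
labels = gmm.predict(summaries)
print(np.bincount(labels))               # class sizes
```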

  3. Volumetric image classification using homogeneous decomposition and dictionary learning: A study using retinal optical coherence tomography for detecting age-related macular degeneration.

    PubMed

    Albarrak, Abdulrahman; Coenen, Frans; Zheng, Yalin

    2017-01-01

    Three-dimensional (3D, volumetric) diagnostic imaging techniques are indispensable to the diagnosis and management of many medical conditions. However, there is a lack of automated diagnosis techniques to facilitate such 3D image analysis (although some support tools do exist). This paper proposes a novel framework for volumetric medical image classification founded on homogeneous decomposition and dictionary learning. In the proposed framework each image (volume) is recursively decomposed until homogeneous regions are arrived at. Each region is represented using a Histogram of Oriented Gradients (HOG), which is transformed into a set of feature vectors. A Gaussian Mixture Model (GMM) is then used to generate a "dictionary", and the Improved Fisher Kernel (IFK) approach is used to encode the feature vectors into a single feature vector for each volume, which can then be fed into a classifier generator. The principal advantage offered by the framework is that it does not require the detection (segmentation) of specific objects within the input data. The nature of the framework is fully described. A wide range of experiments was conducted to analyse the operation of the proposed framework, and these are reported fully in the paper. Although the proposed approach is generally applicable to 3D volumetric images, the focus of the work is 3D retinal Optical Coherence Tomography (OCT) images in the context of the diagnosis of Age-related Macular Degeneration (AMD). The results indicate that excellent diagnostic predictions can be produced using the proposed framework. Copyright © 2016 Elsevier Ltd. All rights reserved.
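    The dictionary-learning stage can be sketched as follows: fit a GMM "dictionary" over pooled local descriptors, then encode a volume's variable-size descriptor set as one fixed-length vector. For brevity the sketch uses mean posterior soft assignments rather than the full Improved Fisher Kernel encoding used in the paper, and the random descriptors stand in for HOG features:

```python
# Minimal GMM-dictionary encoding sketch. Descriptors are random stand-ins
# for HOG vectors; the encoding is a simplified soft-assignment histogram,
# not the paper's IFK encoding.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Stand-ins for descriptors pooled from many training regions.
train_descriptors = rng.normal(size=(500, 16))

dictionary = GaussianMixture(n_components=8, covariance_type="diag",
                             random_state=0).fit(train_descriptors)

def encode(volume_descriptors):
    """Encode a variable-size descriptor set as one fixed-length vector."""
    posteriors = dictionary.predict_proba(volume_descriptors)
    return posteriors.mean(axis=0)  # length == n_components

feature = encode(rng.normal(size=(120, 16)))
print(feature.shape)  # (8,)
```

The resulting fixed-length vector is what would be fed to a classifier generator, regardless of how many regions the decomposition produced.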

  4. Medical expenditure and unmet need of the pre-elderly and the elderly according to job status in Korea: Are the elderly indeed most vulnerable?

    PubMed Central

    Lee, Hwa-Young; Kondo, Naoki

    2018-01-01

    An increasing elderly population and early retirement impose an immense economic burden on societies. Previous studies on the association between medical expenditure and working status in the elderly population have not adequately addressed the reverse-causality problem. In addition, the pre-elderly group has hardly been discussed in this regard. This study assessed the possible causal association of employment status with medical expenditure and with unmet medical needs in a representative sample of Korean elderly (aged ≥65) and pre-elderly (aged ≥50 and <65) adults from the Korea Health Panel Data (KHP). Dynamic panel generalized method of moments (GMM) estimation was employed for the analysis of medical expenditure to address reverse causality, and fixed-effect panel logistic regression was used for the analysis of unmet need. The results showed no significant association between job status and medical expenditure in the elderly, but a negative and significant influence on the level of medical expenditure in the pre-elderly. Unemployment was a significant determinant of lower unmet need due to lack of time, while it was not associated with unmet need due to financial burden, in the fixed-effect panel model for both the elderly and pre-elderly groups. The pre-elderly adults were more likely than the elderly group to reduce necessary health service utilization due to unemployment, because there is no proper financial safety net for the pre-elderly, which may cause non-adherence to treatment and therefore lead to negative health effects. The policy dialogue on the safety net currently centers only on the elderly, but should be extended to the pre-elderly population. PMID:29570736

  5. The impact of environmental pollution on public health expenditure: dynamic panel analysis based on Chinese provincial data.

    PubMed

    Hao, Yu; Liu, Shuang; Lu, Zhi-Nan; Huang, Junbing; Zhao, Mingyuan

    2018-05-01

    In recent years, along with rapid economic growth, China's environmental problems have become increasingly prominent. At the same time, the level of China's pollution has been growing rapidly, causing huge damage to residents' health. In this regard, public health expenditure has ballooned as environmental quality has deteriorated in China. In this study, the effect of environmental pollution on residents' health expenditure is empirically investigated by employing the first-order difference generalized method of moments (GMM) method to control for potential endogeneity. Using panel data on Chinese provinces for the period 1998-2015, this study found that environmental pollution (represented by SO2 and soot emissions) does indeed lead to an increase in the medical expenses of Chinese residents. At the current stage of economic development, an increase in SO2 and soot emissions per capita would push up public health expenditure per capita significantly. The estimation results are quite robust across different types of regression specifications and different combinations of control variables. Social and economic variables such as public services and education may also have remarkable influences on residential medical expenses through different channels.

  6. Does consumption of processed foods explain disparities in the body weight of individuals? The case of Guatemala.

    PubMed

    Asfaw, Abay

    2011-02-01

    Overweight/obesity, caused by the 'nutrition transition', is identified as one of the leading risk factors for non-communicable mortality. The nutrition transition in developing countries is associated with a major shift from the consumption of staple crops and whole grains to highly and partially processed foods. This study examines the contribution of processed-food consumption to the prevalence of overweight/obesity in Guatemala using generalized method of moments (GMM) regression. The results show that, all other things remaining constant, a 10-percentage-point increase in the share of partially processed foods in total household food expenditure increases the BMI of family members (aged 10 years and above) by 3.95%. The impact of highly processed foods is much stronger: a 10-percentage-point increase in the share of highly processed food items increases the BMI of individuals by 4.25%, ceteris paribus. The results are robust when body weight is measured by overweight/obesity indicators. These findings suggest that the increasing shares of partially and highly processed foods in total consumption expenditure could be one of the major risk factors for the high prevalence of overweight/obesity in the country.

  7. Effects of multiple scattering on radiative properties of soot fractal aggregates

    NASA Astrophysics Data System (ADS)

    Yon, Jérôme; Liu, Fengshan; Bescond, Alexandre; Caumont-Prim, Chloé; Rozé, Claude; Ouf, François-Xavier; Coppalle, Alexis

    2014-01-01

    The in situ optical characterization of smokes composed of soot particles relies on light extinction, angular static light scattering (SLS), or laser-induced incandescence (LII). These measurements are usually interpreted by using the Rayleigh-Debye-Gans theory for Fractal Aggregates (RDG-FA). RDG-FA is simple to use but completely neglects the impact of multiple scattering (MS) within soot aggregates. In this paper, based on a scaling approach that takes MS effects into account, an extended form of the RDG-FA theory is proposed. The parameters of this extended theory and their dependency on the number of primary spheres inside the aggregate (1

  8. Preliminary assessment of framework conditions for release of genetically modified mosquitoes in Burkina Faso.

    PubMed

    De Freece, Chenoa; Paré Toé, Léa; Esposito, Fulvio; Diabaté, Abdoulaye; Favia, Guido

    2014-09-01

    Genetically modified mosquitoes (GMMs) are emerging as a measure to control mosquito-borne diseases, but before any genetically modified organisms (GMOs) are released into the environment, it is imperative to establish regulatory standards incorporating public engagement. A previous project in Burkina Faso introduced a type of genetically modified cotton [Bacillus thuringiensis (Bt) cotton] that produces insecticide, and incorporated policies on public engagement. We explored the perspectives of Burkinabè (citizens of Burkina Faso) on bio-agricultural exposure to GMOs and their receptiveness to the use of GMOs. Interviews were conducted in a village (Bondoukuy) and with representatives from stakeholder organizations. The population may be very receptive to the use of GMMs against malaria, but may voice unfounded concerns that GMMs can transmit other diseases. It is important to constantly supply the population with correct and factual information. Investigating the application of Burkina Faso's biotechnology policies with regard to Bt cotton has shown that it may be conceivable in the future to have open discussions about the merits of GMM release. © The Author 2014. Published by Oxford University Press on behalf of Royal Society of Tropical Medicine and Hygiene. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. The wire material and cross-section effect on double delta closing loops regarding load and spring rate magnitude: an in vitro study.

    PubMed

    Ferreira, M do A

    1999-03-01

    The mechanical behavior of orthodontic closing loops with three different wire materials (stainless steel, cobalt-chromium and titanium-molybdenum), different cross-sections and a double delta design was studied in tension tests. The springs were stress-relieved, except for the titanium-molybdenum wires. There were 72 sample springs, divided into 33 stainless steel, 26 cobalt-chromium and 13 titanium-molybdenum, activated at 0.5 mm intervals from the neutral position to 3.0 mm. It was hypothesized that the loads after spring activation, and the spring rate, depend on cross-section, wire material and activation. Analysis of variance and the Tukey-Kramer test were applied to verify the differences between all paired averages of the loads. Regression analysis was also used to verify whether closing-loop behavior was in accordance with Hooke's law and to obtain the spring rate. The results show that the loads depend on activation, cross-section and wire material. Titanium-molybdenum 0.017 x 0.025 inch (Ormco) springs showed the smallest loads and the best spring rate (beta = 84.9 g/mm).
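    The regression step behind a reported spring rate like 84.9 g/mm is just Hooke's law, load = rate × activation, fitted by least squares. A minimal sketch; the load values below are idealised to match that slope and are not the study's measurements:

```python
# Hypothetical Hookean load-activation data fitted to recover a spring rate.
import numpy as np

activation_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])  # 0.5 mm steps
load_g = 84.9 * activation_mm                             # idealised response

# Degree-1 fit: slope is the spring rate (g/mm), intercept should be ~0.
rate, intercept = np.polyfit(activation_mm, load_g, 1)
print(round(rate, 1))  # 84.9
```

On real measurements the intercept and residuals would indicate how closely the loop follows Hooke's law over the tested activation range.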

  10. Hypocalcemia Following Resuscitation from Cardiac Arrest Revisited

    PubMed Central

    Youngquist, Scott T.; Heyming, Theodore; Rosborough, John P.; Niemann, James T.

    2009-01-01

    Objective: Hypocalcemia associated with cardiac arrest has been reported. However, mechanistic hypotheses for the decrease in ionized calcium (iCa) vary, and its importance is unknown. The objective of this study was to assess the relationships among iCa, pH, base excess (BE) and lactate in two porcine cardiac arrest models, and to determine the effect of exogenous calcium administration on postresuscitation hemodynamics. Methods: Swine were instrumented, and VF was induced either electrically (EVF, n=65) or spontaneously, i.e., ischemically induced (IVF) with balloon occlusion of the LAD (n=37). Animals were resuscitated after 7 minutes of VF. BE, iCa and pH were determined prearrest and at 15, 30, 60, 90 and 120 min after ROSC. Lactate was also measured in 26 animals in the EVF group. Twelve EVF animals were randomized to receive either 1 gm of CaCl2 infused over 20 min after ROSC or normal saline. Results: iCa, BE and pH declined significantly over the 60 min following ROSC, regardless of VF type, with the lowest levels observed at the nadir of left ventricular stroke work post resuscitation. Lactate was strongly correlated with BE (r = −0.89, p<0.0001) and iCa (r = −0.40, p<0.0001). In a multivariate generalized linear mixed model, iCa was 0.005 mg/dL higher for every one-unit increase in BE (95% CI 0.003–0.007, p<0.0001), while controlling for the type of induced VF. While there was a univariate correlation between iCa and BE, when BE was included in the regression analysis with lactate, only lactate showed a statistically significant relationship with iCa (p=0.02). Postresuscitation CaCl2 infusion improved post-ROSC hemodynamics when compared to saline infusion (LV stroke work 8 ± 5 gm-m in controls vs. 23 ± 4, p = 0.014, at 30 min), with no significant difference in tau between groups. Conclusions: Ionized hypocalcemia occurs following ROSC. CaCl2 improves post-ROSC hemodynamics, suggesting that hypocalcemia may play a role in early post-resuscitation myocardial dysfunction. PMID:19913975

  11. Storm Identification and Tracking for Hydrologic Modeling Using Hourly Accumulated NEXRAD Precipitation Data

    NASA Astrophysics Data System (ADS)

    Olivera, F.; Choi, J.; Socolofsky, S.

    2006-12-01

    Watershed responses to storm events are strongly affected by the spatial and temporal patterns of rainfall; that is, the spatial distribution of the precipitation intensity and its evolution over time. Although real storms are moving entities with non-uniform intensities in both space and time, hydrological applications often synthesize these attributes by assuming storms that are uniformly distributed and have variable intensity according to a pre-defined hyetograph shape. As one considers watersheds of greater size, the non-uniformity of rainfall becomes more important, because a storm may not cover the watershed's entire area and may not stay in the watershed for its full duration. In order to incorporate parameters such as storm area, propagation velocity and direction, and intensity distribution in the definition of synthetic storms, it is necessary to determine these storm characteristics from spatially distributed precipitation data. To date, most algorithms for identifying and tracking storms have been applied to short time-step radar reflectivity data (i.e., 15 minutes or less), where storm features are captured in an effectively synoptic manner. For the entire United States, however, the most reliable distributed precipitation data are the one-hour accumulated 4 km × 4 km gridded NEXRAD data of the U.S. National Weather Service (NWS 2005). The one-hour aggregation level of the data, though, makes it more difficult to identify and track storms than when using sequences of synoptic radar reflectivity data, because storms can traverse a number of NEXRAD cells and change size and shape appreciably between consecutive data maps. In this paper, we present a methodology to overcome these identification and tracking difficulties and to extract the characteristics of moving storms (e.g., size, propagation velocity and direction, and intensity distribution) from one-hour accumulated distributed rainfall data. The algorithm uses Gaussian Mixture Models (GMM) for storm identification and image processing for storm tracking. The method has been successfully applied to Brazos County in Texas using the 2003 Multi-sensor Precipitation Estimator (MPE) NEXRAD rainfall data.
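    The GMM identification step can be illustrated with a toy two-storm example: treat rainfall-weighted cell coordinates as samples from a 2-D density and fit a Gaussian mixture, so each fitted component approximates one storm's location and spread. The synthetic grid below is an assumption for illustration, not the paper's NEXRAD processing:

```python
# Toy storm identification: fit a 2-component Gaussian mixture to point
# samples whose multiplicity stands in for accumulated rainfall depth.
# Coordinates and storm locations are invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Two synthetic "storms" on a km-scale grid of (x, y) cell centres.
storm_a = rng.normal(loc=(20.0, 30.0), scale=3.0, size=(400, 2))
storm_b = rng.normal(loc=(60.0, 55.0), scale=5.0, size=(250, 2))
points = np.vstack([storm_a, storm_b])

gmm = GaussianMixture(n_components=2, random_state=0).fit(points)
centres = gmm.means_[np.argsort(gmm.means_[:, 0])]  # sort by x for readability
print(np.round(centres))  # approximately [[20. 30.] [60. 55.]]
```

The component means recover the storm centres, and the covariances describe each storm's areal extent; tracking then matches components between consecutive hourly maps.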

  12. Three Essays in Energy Economics and Industrial Organization, with Applications to Electricity and Distribution Networks

    NASA Astrophysics Data System (ADS)

    Dimitropoulos, Dimitrios

    Electricity industries are experiencing upward cost pressures in many parts of the world. Chapter 1 of this thesis studies the production technology of electricity distributors. Although production and cost functions are mathematical duals, practitioners typically estimate only one or the other. This chapter proposes an approach for the joint estimation of production and costs. Combining such quantity and price data has the effect of adding statistical information without introducing additional parameters into the model. We define a GMM estimator that produces internally consistent parameter estimates for both the production function and the cost function. We consider a multi-output framework and show how to account for the presence of certain types of simultaneity and measurement error. The methodology is applied to data on 73 Ontario distributors for the period 2002-2012. As expected, the joint model results in a substantial improvement in the precision of parameter estimates. Chapter 2 focuses on productivity trends in electricity distribution. We apply two methodologies for estimating productivity growth, an index-based approach and an econometric cost-based approach, to our data on the 73 Ontario distributors for the period 2002 to 2012. The resulting productivity growth estimates are approximately -1% per year, suggesting a reversal of the positive estimates that have generally been reported in previous periods. We implement flexible semi-parametric variants to assess the robustness of these conclusions and discuss the use of such statistical analyses for calibrating productivity and relative efficiencies within a price-cap framework. In chapter 3, I turn to the historically important problem of vertical contractual relations. While the existing literature has established that resale price maintenance is sufficient to coordinate the distribution network of a manufacturer, this chapter asks whether such vertical restraints are necessary. Specifically, I study the vertical contracting problem between an upstream manufacturer and its downstream distributors in a setting where spot market contracts fail, but resale price maintenance cannot be appealed to due to legal prohibition. I show that a bonus scheme based on retail revenues is sufficient to provide incentives for decentralized retailers to elicit the correct levels of both price and service.

  13. Three Essays in Energy Economics and Industrial Organization, with Applications to Electricity and Distribution Networks

    NASA Astrophysics Data System (ADS)

    Dimitropoulos, Dimitrios

    Electricity industries are experiencing upward cost pressures in many parts of the world. Chapter 1 of this thesis studies the production technology of electricity distributors. Although production and cost functions are mathematical duals, practitioners typically estimate only one or the other. This chapter proposes an approach for joint estimation of production and costs. Combining such quantity and price data has the effect of adding statistical information without introducing additional parameters into the model. We define a GMM estimator that produces internally consistent parameter estimates for both the production function and the cost function. We consider a multi-output framework, and show how to account for the presence of certain types of simultaneity and measurement error. The methodology is applied to data on 73 Ontario distributors for the period 2002-2012. As expected, the joint model results in a substantial improvement in the precision of parameter estimates. Chapter 2 focuses on productivity trends in electricity distribution. We apply two methodologies for estimating productivity growth---an index based approach, and an econometric cost based approach---to our data on the 73 Ontario distributors for the period 2002 to 2012. The resulting productivity growth estimates are approximately -1% per year, suggesting a reversal of the positive estimates that have generally been reported in previous periods. We implement flexible semi-parametric variants to assess the robustness of these conclusions and discuss the use of such statistical analyses for calibrating productivity and relative efficiencies within a price-cap framework. In chapter 3, I turn to the historically important problem of vertical contractual relations. While the existing literature has established that resale price maintenance is sufficient to coordinate the distribution network of a manufacturer, this chapter asks whether such vertical restraints are necessary. 
Specifically, I study the vertical contracting problem between an upstream manufacturer and its downstream distributors in a setting where spot market contracts fail, but resale price maintenance cannot be appealed to due to legal prohibition. I show that a bonus scheme based on retail revenues is sufficient to provide incentives to decentralized retailers to elicit the correct levels of both price and service.
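
    The GMM machinery behind such an estimator can be illustrated on a much simpler problem. Below is a minimal two-step GMM sketch for a linear model with one endogenous regressor and valid instruments, on synthetic data; it is not the thesis's joint production-cost system, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: x is endogenous (correlated with the error u);
# z holds two valid instruments (correlated with x, independent of u).
z = rng.normal(size=(n, 2))
u = rng.normal(size=n)
x = z @ np.array([1.0, 0.5]) + 0.8 * u + rng.normal(size=n)
beta_true = 2.0
y = beta_true * x + u

X = x[:, None]
Z = np.column_stack([np.ones(n), z])      # constant + instruments

def gmm_linear(y, X, Z, W):
    # Minimizes gbar(b)' W gbar(b) with gbar(b) = Z'(y - X b)/n;
    # for a linear model this has the closed form below.
    A = X.T @ Z @ W @ Z.T @ X
    c = X.T @ Z @ W @ Z.T @ y
    return np.linalg.solve(A, c)

# Step 1: identity weighting matrix.
b1 = gmm_linear(y, X, Z, np.eye(Z.shape[1]))

# Step 2: efficient weighting matrix built from step-1 residuals.
e = y - X @ b1
S = (Z * (e**2)[:, None]).T @ Z / n       # covariance of the moments
b2 = gmm_linear(y, X, Z, np.linalg.inv(S))

print(round(float(b2[0]), 2))             # close to the true beta of 2.0
```

    Step 1 uses an identity weighting matrix; step 2 re-weights the moment conditions E[z(y - xb)] = 0 by the inverse of their estimated covariance, which is what makes two-step GMM efficient.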

  14. Wood fuel consumption and mortality rates in Sub-Saharan Africa: Evidence from a dynamic panel study.

    PubMed

    Sulaiman, Chindo; Abdul-Rahim, A S; Chin, Lee; Mohd-Shahwahid, H O

    2017-06-01

    This study examined the impact of wood fuel consumption on health outcomes, specifically under-five and adult mortality, in Sub-Saharan Africa, where the use of wood for cooking and heating is on the increase. Generalized method of moments (GMM) estimators were used to estimate the impact of wood fuel consumption on under-five and adult mortality (and also on male and female mortality) in the region. The findings revealed that wood fuel consumption had a significant positive impact on under-five and adult mortality, suggesting that, over the study period, increases in wood fuel consumption raised both. Importantly, the magnitude of the effect was greater for under-fives than for adults. Similarly, when the effect was assessed by gender, it was greater for female than for male adults. These findings suggest that mortality from wood-smoke-related infections falls more heavily on under-five children than on adults, and more on female than on male adults. We therefore recommend that affordable, clean alternative energy sources for cooking and heating be provided to reduce wood fuel consumption. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Information and communication technology use and economic growth.

    PubMed

    Farhadi, Maryam; Ismail, Rahmah; Fooladi, Masood

    2012-01-01

    In recent years, progress in information and communication technology (ICT) has caused many structural changes, such as the reorganization of economies, globalization, and trade extension, which lead to capital flows and enhanced information availability. Moreover, ICT plays a significant role in the development of each economic sector, especially during liberalization processes. Growth economists predict that economic growth is driven by investments in ICT. However, empirical studies on this issue have produced mixed results, owing to differences in research methodology and in the geographical scope of the studies. This paper examines the impact of ICT use on economic growth using the Generalized Method of Moments (GMM) estimator within the framework of a dynamic panel data approach, applied to 159 countries over the period 2000 to 2009. The results indicate a positive relationship between the growth rate of real GDP per capita and an ICT use index (measured by the number of internet users, fixed broadband internet subscribers, and mobile subscriptions per 100 inhabitants). We also find that the effect of ICT use on economic growth is higher in the high-income group than in the other groups. This implies that if these countries seek to enhance their economic growth, they need to implement specific policies that facilitate ICT use.

  16. Comparison of Classification Methods for Detecting Emotion from Mandarin Speech

    NASA Astrophysics Data System (ADS)

    Pao, Tsang-Long; Chen, Yu-Te; Yeh, Jun-Heng

    It is said that technology comes from humanity. What is humanity? The very definition of humanity is emotion. Emotion is the basis for all human expression and the underlying theme behind everything that is done, said, thought or imagined. If computers can perceive and respond to human emotion, human-computer interaction will become more natural. Several classifiers have been adopted for automatically assigning an emotion category, such as anger, happiness or sadness, to a speech utterance. These classifiers were designed independently and tested on various emotional speech corpora, making it difficult to compare and evaluate their performance. In this paper, we first compare several popular classification methods and evaluate their performance by applying them to a Mandarin speech corpus consisting of five basic emotions: anger, happiness, boredom, sadness and neutral. The extracted feature streams contain MFCC, LPCC, and LPC. The experimental results show that the proposed WD-MKNN classifier achieves an accuracy of 81.4% for the 5-class emotion recognition and outperforms other classification techniques, including KNN, MKNN, DW-KNN, LDA, QDA, GMM, HMM, SVM, and BPNN. Then, to verify the advantage of the proposed method, we compare these classifiers by applying them to another Mandarin expressive speech corpus consisting of two emotions. The experimental results still show that the proposed WD-MKNN outperforms the others.
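
    Among the compared classifiers, the GMM models each emotion's feature distribution and assigns an utterance to the most likely class. Below is a minimal sketch of that decision rule, simplified to one Gaussian per class (a one-component GMM with diagonal covariance) on synthetic 2-D "MFCC-like" features; the class names and data are illustrative, not the paper's corpus.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 2-D "MFCC-like" features for three illustrative emotions.
means = {"anger": [2.0, 0.0], "happiness": [0.0, 2.0], "sadness": [-2.0, -2.0]}
train = {c: rng.normal(m, 1.0, size=(200, 2)) for c, m in means.items()}

# Fit one Gaussian per class (a one-component, diagonal-covariance GMM).
params = {c: (X.mean(axis=0), X.var(axis=0)) for c, X in train.items()}

def log_likelihood(x, mu, var):
    # Diagonal-covariance Gaussian log-density (constant terms dropped).
    return float(-0.5 * np.sum(np.log(var) + (x - mu) ** 2 / var))

def classify(x):
    # Maximum-likelihood decision rule, as in GMM speech classification.
    return max(params, key=lambda c: log_likelihood(x, *params[c]))

print(classify(np.array([2.1, 0.2])))     # point near the "anger" mean
```

    A full GMM would use several weighted components per class and fit them by expectation-maximization; the decision rule, comparing per-class log-likelihoods, is the same.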

  17. Gravity Acceleration and Gravity Paradox

    NASA Astrophysics Data System (ADS)

    Hanyongquan, Han; Yuteng, Tang

    2017-10-01

    The magnitude of the gravitational acceleration of the Earth is derived from the law of universal gravitation. If the gravitational force were proportional to mass in every situation, then gravity at a body's surface would be greater than gravity near its center, which does not match the objective facts. The specific derivation is as follows: F = GMm/R^2 = mg, so g = GM/R^2 (1), where G is the gravitational constant and M is the mass of the Earth; this finally gives g = 9.8 m/s^2. We assume that the Earth is a perfect sphere of volume V = 4πR^3/3 and uniform density ρ, so that M = 4πρR^3/3 (2). Substituting (2) into (1) gives g = 4πGρR/3 (3). Taking the density of the Earth as constant and analyzing formula (3) carefully, we can conclude that the gravitational acceleration g is proportional to the radius R, since everything on the right-hand side of (3) other than R is constant. That is, the gravitational acceleration in the Earth's outer layers is greater than in its inner layers. We are at a high school in Huairou District, Beijing, China.
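
    The relations g = GM/R^2 and g = 4πGρR/3 in the abstract can be checked numerically with standard values for the Earth (constants below are the usual approximate ones):

```python
import math

# Numerical check of g = G*M/R^2 and of the equivalent uniform-sphere
# form g = 4*pi*G*rho*R/3, using standard approximate values for Earth.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of Earth, kg
R = 6.371e6          # mean radius of Earth, m

g = G * M / R**2
print(round(g, 2))   # ~9.82 m/s^2

# Same result from the uniform-density form: rho cancels back out.
rho = M / (4 / 3 * math.pi * R**3)
g2 = 4 * math.pi * G * rho * R / 3
```

    The two forms agree exactly, which is the point of the derivation: for a uniform sphere, g grows linearly with R.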

  18. Reaction mechanism of guanidinoacetate methyltransferase, concerted or step-wise.

    PubMed

    Zhang, Xiaodong; Bruice, Thomas C

    2006-10-31

    We describe a quantum mechanics/molecular mechanics investigation of the guanidinoacetate methyltransferase catalyzed reaction, which shows that proton transfer from guanidinoacetate (GAA) to Asp-134 and methyl transfer from S-adenosyl-L-methionine (AdoMet) to GAA are concerted. By self-consistent-charge density functional tight binding/molecular mechanics, the bond lengths in the concerted mechanism's transition state are 1.26 Å for both the OD1 (Asp-134)-H(E) (GAA) and H(E) (GAA)-N(E) (GAA) bonds, and 2.47 and 2.03 Å for the S8 (AdoMet)-C9 (AdoMet) and C9 (AdoMet)-N(E) (GAA) bonds, respectively. The potential-energy barrier (ΔE‡) determined by single-point B3LYP/6-31+G*//MM is 18.9 kcal/mol. The contributions of the entropy (−TΔS‡) and zero-point energy corrections Δ(ZPE)‡ by normal-mode analysis are 2.3 kcal/mol and −1.7 kcal/mol, respectively. Thus, the activation enthalpy of this concerted mechanism is predicted to be ΔH‡ = ΔE‡ + Δ(ZPE)‡ = 17.2 kcal/mol. The calculated free-energy barrier for the concerted mechanism is ΔG‡ = 19.5 kcal/mol, which is in excellent agreement with the value of 19.0 kcal/mol calculated from the experimental rate constant (3.8 ± 0.2 min⁻¹).
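
    The closing comparison converts an experimental rate constant into a free-energy barrier via the Eyring equation of transition-state theory. A quick numerical check, assuming T = 298 K (the abstract does not state the temperature used):

```python
import math

# Eyring equation: k = (kB*T/h) * exp(-dG/(R*T))
#   =>  dG = R*T*ln(kB*T/(h*k))
kB = 1.380649e-23     # Boltzmann constant, J/K
h  = 6.62607015e-34   # Planck constant, J*s
Rg = 1.98720425e-3    # gas constant, kcal/(mol*K)
T  = 298.15           # assumed temperature, K

k = 3.8 / 60.0        # experimental rate constant: 3.8 min^-1 -> s^-1
dG = Rg * T * math.log(kB * T / (h * k))
print(round(dG, 1))   # ~19.1 kcal/mol, close to the quoted 19.0 kcal/mol
```

    The ~0.1 kcal/mol difference from the quoted 19.0 kcal/mol is within what the assumed temperature and rounded rate constant allow.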

  19. Development and testing of a human collagen graft material.

    PubMed

    Quteish, D; Singh, G; Dolby, A E

    1990-06-01

    Human Type I collagen was extracted from placenta using pepsin and salt fractionation. The collagen was characterized by SDS-PAGE, dispersed in acidic medium, freeze-dried, and cross-linked in a 0.25% glutaraldehyde solution (pH 4.5) for 2 days. After washing for 7 days and freeze-drying, the resultant collagen sponge was tested for mechanical, physical, and enzymatic-degradation properties and for biological responses. The modulus of elasticity was found to be 289 +/- 10 g/mm2, and the sponge was insoluble in water, buffered saline, or tissue culture medium over a period of 6 weeks, with swelling of less than 5% of volume. The sponge had a high fluid-binding capacity, amounting to 56 +/- 5 mL of tissue culture medium per gram of dry weight. Bacterial collagenase produced slow degradation of the sponge, with complete disappearance by 24 h only when high concentrations (200 units of enzyme per mg of collagen sponge) were used. Cytotoxicity studies using human gingival and periodontal ligament fibroblasts revealed less than 5% apparent cytotoxicity or proliferation. Subcutaneous implantation was followed by resorption and vascularization over a period of 6-8 weeks. It was concluded that the collagen sponge prepared from human Type I collagen has potential as a graft material in oral surgical procedures.

  20. Efficient gas barrier properties of multi-layer films based on poly(lactic acid) and fish gelatin.

    PubMed

    Hosseini, Seyed Fakhreddin; Javidi, Zahra; Rezaei, Masoud

    2016-11-01

    Multi-layer film structures of poly(lactic acid) (PLA) and fish gelatin (FG), prepared using the solvent casting technique, were studied in an effort to produce bio-based films with low oxygen permeability (OP) and water vapor permeability (WVP). Scanning electron microscopy (SEM) images of the triple-layer film showed that the outer PLA layers are closely attached to the inner FG layer, forming a continuous film. The OP of the multi-layer film (5.02 cm3/(m2 day bar)) decreased more than 8-fold compared with that of the PLA film, and the WVP of the multi-layer film (0.125 g mm/(kPa h m2)) decreased 11-fold compared with that of the FG film. Lamination with PLA profoundly increased the water resistance of the bare gelatin film. Meanwhile, the tensile strength of the triple-layer film (25 ± 2.13 MPa) was greater than that of the FG film (7.48 ± 1.70 MPa). At the same time, the resulting film maintains high optical clarity. Differential scanning calorimetry (DSC) analysis also revealed that the materials were compatible, showing only one Tg, which decreased with FG deposition. This material shows potential as an environmentally friendly and highly versatile food-packaging material. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Analytical description of lateral binding force exerted on bi-sphere induced by high-order Bessel beams

    NASA Astrophysics Data System (ADS)

    Bai, J.; Wu, Z. S.; Ge, C. X.; Li, Z. J.; Qu, T.; Shang, Q. C.

    2018-07-01

    Based on the generalized multi-particle Mie (GMM) solution and electromagnetic momentum (EM) theory, the lateral binding force (BF) exerted on a bi-sphere by an arbitrarily polarized high-order Bessel beam (HOBB) is investigated, with particular emphasis on the half-conical angle of the wave-number components and the order (or topological charge) of the beam. The illuminating HOBB with arbitrary polarization angle is described in terms of beam shape coefficients (BSCs) within the framework of generalized Lorenz-Mie theory (GLMT). Utilizing the vector addition theorem of the spherical vector wave functions (SVWFs), the interactive scattering coefficients are derived from the continuity boundary conditions, through which the interaction of the bi-sphere is taken into account. The effects of various parameters, such as beam polarization angle, incident wavelength, particle size, material loss, and refractive index, including cases where it is lower than, close to, or higher than that of the surrounding medium, are numerically analyzed in detail. The observed dependence of the separation of optically bound particles on the incident HOBB agrees with earlier theoretical predictions. Accurate investigation of the BF induced by a HOBB provides an effective test for further research on binding forces between more complex particles, which play an important role in the use of optical manipulation for particle self-assembly.

  2. Information and Communication Technology Use and Economic Growth

    PubMed Central

    Farhadi, Maryam; Ismail, Rahmah; Fooladi, Masood

    2012-01-01

    In recent years, progress in information and communication technology (ICT) has caused many structural changes, such as the reorganization of economies, globalization, and trade extension, which lead to capital flows and enhanced information availability. Moreover, ICT plays a significant role in the development of each economic sector, especially during liberalization processes. Growth economists predict that economic growth is driven by investments in ICT. However, empirical studies on this issue have produced mixed results, owing to differences in research methodology and in the geographical scope of the studies. This paper examines the impact of ICT use on economic growth using the Generalized Method of Moments (GMM) estimator within the framework of a dynamic panel data approach, applied to 159 countries over the period 2000 to 2009. The results indicate a positive relationship between the growth rate of real GDP per capita and an ICT use index (measured by the number of internet users, fixed broadband internet subscribers, and mobile subscriptions per 100 inhabitants). We also find that the effect of ICT use on economic growth is higher in the high-income group than in the other groups. This implies that if these countries seek to enhance their economic growth, they need to implement specific policies that facilitate ICT use. PMID:23152817

  3. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model.

    PubMed

    Plotnikov, Nikolay V

    2014-08-12

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at the reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed, but at a lower level of accuracy, from coarse-physics sampling. The method is derived analytically in terms of the umbrella sampling and free-energy perturbation methods, combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of the high-accuracy free-energy surface are computed locally at the selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with the multistep linear response approximation method, which is shown analytically to reproduce the results of the thermodynamic integration and free-energy interpolation methods while being extremely simple to implement. Incorporating metadynamics sampling into the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing activation free energies at a significantly reduced computational cost but at the same level of accuracy as computing the full potential of mean force.
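
    The free-energy perturbation step at the heart of this protocol can be illustrated with Zwanzig's exponential-averaging formula on synthetic data. The sketch below uses Gaussian-distributed energy gaps, for which the exact free-energy difference is known in closed form; it is a toy illustration, not the paradynamics implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Free-energy perturbation (Zwanzig): dA = -kT * ln < exp(-dU/kT) >_ref
# For Gaussian-distributed dU ~ N(mu, sigma^2), the exact result is
# dA = mu - sigma^2/(2*kT), which lets us check the estimator.
kT, mu, sigma = 1.0, 1.0, 0.5
dU = rng.normal(mu, sigma, size=200_000)   # sampled on the reference potential

dA_fep = -kT * np.log(np.mean(np.exp(-dU / kT)))
dA_exact = mu - sigma**2 / (2 * kT)
print(round(float(dA_fep), 3), dA_exact)   # both ~0.875 (in units of kT)
```

    In practice the perturbations are computed between the coarse reference potential and the fine target potential at the selected regions, and convergence degrades rapidly as sigma grows, which is why the protocol restricts fine-physics sampling to small targeted segments.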

  4. Three-year latent class trajectories of attention-deficit/hyperactivity disorder (ADHD) symptoms in a clinical sample not selected for ADHD.

    PubMed

    Arnold, L Eugene; Ganocy, Stephen J; Mount, Katherine; Youngstrom, Eric A; Frazier, Thomas; Fristad, Mary; Horwitz, Sarah M; Birmaher, Boris; Findling, Robert; Kowatch, Robert A; Demeter, Christine; Axelson, David; Gill, Mary Kay; Marsh, Linda

    2014-07-01

    This study aims to examine trajectories of attention-deficit/hyperactivity disorder (ADHD) symptoms in the Longitudinal Assessment of Manic Symptoms (LAMS) sample. The LAMS study assessed 684 children aged 6 to 12 years with the Kiddie-Schedule for Affective Disorders and Schizophrenia (K-SADS) and rating scales semi-annually for 3 years. Although they were selected for elevated manic symptoms, 526 children had baseline ADHD diagnoses. With growth mixture modeling (GMM), we separately analyzed inattentive and hyperactive/impulsive symptoms, with baseline age as a covariate. Multiple standard methods determined optimal fit. χ² tests and Kruskal-Wallis analyses of variance compared the resulting latent classes/trajectories on clinical characteristics and medication. Three latent class trajectories best described inattentive symptoms, and 4 classes best described hyperactive/impulsive symptoms. Inattentive trajectories maintained their relative position over time. Hyperactive/impulsive symptoms had 2 consistent trajectories (least and most severe). A third trajectory (4.5%) started mild, then escalated; a fourth (14%) started severe but improved dramatically. The improving trajectory was associated with the highest rate of ADHD and the lowest rate of bipolar diagnoses. Three-fourths of the mildest inattention class were also in the mildest hyperactive/impulsive class; 72% of the severest inattentive class were in the severest hyperactive/impulsive class, but the severest inattention class also included 62% of the improving hyperactive/impulsive class. An ADHD rather than bipolar diagnosis prognosticates a better course of hyperactive/impulsive, but not inattentive, symptoms. High overlap of relative severity between inattention and hyperactivity/impulsivity confirms the link between these symptom clusters. Hyperactive/impulsive symptoms wane more over time. Group means are insufficient to understand individual ADHD prognosis. 
A small subgroup deteriorates over time in hyperactivity/impulsivity and needs better treatments than currently provided. Copyright © 2014 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
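
    Growth mixture modeling jointly estimates latent trajectory classes from repeated measures; a common crude approximation is to fit a line to each subject's scores and cluster the resulting (intercept, slope) pairs. Below is a toy sketch of that two-stage shortcut on synthetic "stable mild" and "severe but improving" trajectories (all numbers invented; this is not the LAMS analysis).

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic semi-annual symptom scores (7 visits over 3 years) for two
# invented trajectory classes: "stable mild" and "severe but improving".
t = np.arange(7) * 0.5
mild    = 2 + 0.0 * t + rng.normal(0, 0.5, size=(60, 7))
improve = 8 - 2.0 * t + rng.normal(0, 0.5, size=(40, 7))
scores = np.vstack([mild, improve])

# Stage 1: per-subject growth parameters (slope, intercept) by least squares.
slopes, intercepts = np.polyfit(t, scores.T, 1)

# Stage 2: cluster subjects in (intercept, slope) space with 2-means, a
# crude stand-in for the mixture step of growth mixture modeling.
feats = np.column_stack([intercepts, slopes])
centers = feats[[0, -1]].copy()          # seed with one subject of each kind
for _ in range(20):
    labels = np.argmin(((feats[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([feats[labels == k].mean(axis=0) for k in range(2)])

# Center of the improving class: high intercept, strongly negative slope.
print(np.round(centers[labels[-1]], 1))
```

    A real GMM instead fits the mixture by maximum likelihood, propagating within-subject uncertainty into class membership, which is why it can separate classes this shortcut would blur.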

  5. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model

    PubMed Central

    2015-01-01

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at the reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed, but at a lower level of accuracy, from coarse-physics sampling. The method is derived analytically in terms of the umbrella sampling and free-energy perturbation methods, combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of the high-accuracy free-energy surface are computed locally at the selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with the multistep linear response approximation method, which is shown analytically to reproduce the results of the thermodynamic integration and free-energy interpolation methods while being extremely simple to implement. Incorporating metadynamics sampling into the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing activation free energies at a significantly reduced computational cost but at the same level of accuracy as computing the full potential of mean force. PMID:25136268

  6. Preventive role of exercise training in autonomic, hemodynamic, and metabolic parameters in rats under high risk of metabolic syndrome development.

    PubMed

    Moraes-Silva, Ivana Cinthya; Mostarda, Cristiano; Moreira, Edson Dias; Silva, Kleiton Augusto Santos; dos Santos, Fernando; de Angelis, Kátia; Farah, Vera de Moura Azevedo; Irigoyen, Maria Claudia

    2013-03-15

    High fructose consumption contributes to metabolic syndrome incidence, whereas exercise training promotes several beneficial adaptations. In this study, we demonstrated the preventive role of exercise training against metabolic syndrome derangements in a rat model. Wistar rats receiving a fructose overload in drinking water (100 g/l) were concomitantly trained on a treadmill (FT) or kept sedentary (F) for 10 wk. Control rats receiving normal water were also submitted to exercise training (CT) or sedentarism (C). Metabolic evaluations consisted of the Lee index, glycemia, and an insulin tolerance test (kITT). Blood pressure (BP) was directly measured, whereas heart rate (HR) and BP variabilities were evaluated in the time and frequency domains. Renal sympathetic nerve activity was also recorded. F rats presented significant alterations compared with all the other groups in insulin resistance (in mg · dl(-1) · min(-1): F: 3.4 ± 0.2; C: 4.7 ± 0.2; CT: 5.0 ± 0.5; FT: 4.6 ± 0.4), mean BP (in mmHg: F: 117 ± 2; C: 100 ± 2; CT: 98 ± 2; FT: 105 ± 2), and the Lee index (in g/mm: F: 0.31 ± 0.001; C: 0.29 ± 0.001; CT: 0.27 ± 0.002; FT: 0.28 ± 0.002), confirming the metabolic syndrome diagnosis. Exercise training blunted all these derangements. Additionally, the F group presented autonomic dysfunction relative to the others, as seen in a ≈50% decrease in baroreflex sensitivity, a 24% decrease in HR variability, and increases in sympathovagal balance (140%) and renal sympathetic nerve activity (45%). These impairments were not observed in the FT group, nor in C and CT. Correlation analysis showed that both the Lee index and kITT were associated with the vagal impairment caused by fructose. Therefore, exercise training plays a preventive role against both the autonomic and hemodynamic alterations related to excessive fructose consumption.

  7. Time-variable and static gravity field of Mars from MGS, Mars Odyssey, and MRO

    NASA Astrophysics Data System (ADS)

    Genova, Antonio; Goossens, Sander; Lemoine, Frank G.; Mazarico, Erwan; Neumann, Gregory A.; Smith, David E.; Zuber, Maria T.

    2016-04-01

    The Mars Global Surveyor (MGS), Mars Odyssey (ODY), and Mars Reconnaissance Orbiter (MRO) missions have contributed significantly to the determination of high-resolution global gravity fields of Mars over the last 16 years. All three spacecraft were placed in sun-synchronous, near-circular polar mapping orbits for their primary mission phases, at different altitudes and Local Solar Times (LST). X-band tracking data have been acquired by the NASA Deep Space Network (DSN), providing information on the time-variable and static gravity field of Mars. MGS operated between 1999 and 2006 at 390 km altitude. ODY and MRO are still orbiting Mars, with periapsis altitudes of 400 km and 255 km, respectively. Before entering these mapping phases, all three spacecraft collected radio tracking data at lower altitudes (˜170-200 km) that help improve the resolution of the gravity field of Mars in specific regions. We analyzed the entire MGS radio tracking data set, and ODY and MRO radio data until 2015. These observations were processed with a batch least-squares filter using the NASA GSFC GEODYN II software. We combined all 2- and 3-way range-rate data to estimate the global gravity field of Mars to degree and order 120, the seasonal variations of the zonal gravity harmonic coefficients C20, C30, C40 and C50, and the Love number k2. The gravity contribution of Mars' atmospheric pressure on the surface of the planet has been discerned from the time-varying and static gravity harmonic coefficients. Surface pressure grids computed using the Mars-GRAM 2010 atmospheric model, with 2.5° × 2.5° spatial and 2-h temporal resolution, are converted into gravity spherical harmonic coefficients. Consequently, the estimated gravity field and tides provide direct information on the solid planet. We will present the new Goddard Mars Model (GMM-3) of the Mars gravity field in spherical harmonics to degree and order 120. 
The solution includes the Love number k2 and the three-frequency (annual, semi-annual, and tri-annual) time-variable coefficients of the gravity zonal harmonics C20, C30, C40 and C50. The seasonal gravity coefficients allowed us to determine the inter-annual mass exchange between the polar caps over ˜11 years, from October 2002 to November 2014.
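
    To make the zonal-harmonic terminology concrete: the degree-2 zonal coefficient (J2 = -C20 in unnormalized form) already captures the leading latitude dependence of gravity. Below is a sketch evaluating its contribution to radial gravity for Mars, using commonly quoted approximate values (not the GMM-3 coefficients themselves).

```python
import math

# Degree-2 zonal term of a planetary gravity field: from the potential
#   U = (GM/r) * (1 - J2*(R/r)^2 * P2(sin(lat))),
# the radial acceleration is
#   g(r, lat) = GM/r^2 * (1 - 3*J2*(R/r)^2 * P2(sin(lat))).
# Values below are commonly quoted approximations for Mars.
GM = 4.2828e13        # gravitational parameter, m^3/s^2
R  = 3.396e6          # reference radius, m
J2 = 1.9566e-3        # unnormalized degree-2 zonal coefficient

def p2(x):
    # Legendre polynomial P2
    return 0.5 * (3 * x**2 - 1)

def g_radial(r, lat_deg):
    s = math.sin(math.radians(lat_deg))
    return GM / r**2 * (1 - 3 * J2 * (R / r) ** 2 * p2(s))

# Equatorial vs. polar radial gravity at the reference radius (J2 > 0
# makes the equatorial value slightly larger in this radial-only term).
print(round(g_radial(R, 0), 3), round(g_radial(R, 90), 3))
```

    A full field evaluation sums all degrees and orders (to 120 in GMM-3) with associated Legendre functions; the structure of each zonal term is the same as this degree-2 case.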

  8. Functional organization of intrinsic connectivity networks in Chinese-chess experts.

    PubMed

    Duan, Xujun; Long, Zhiliang; Chen, Huafu; Liang, Dongmei; Qiu, Lihua; Huang, Xiaoqi; Liu, Timon Cheng-Yi; Gong, Qiyong

    2014-04-16

    The functional architecture of the human brain has been extensively described in terms of functional connectivity networks, detected from the low-frequency coherent neuronal fluctuations during a resting-state condition. Accumulating evidence suggests that the overall organization of functional connectivity networks is associated with individual differences in cognitive performance and prior experience. Such an association raises the question of how cognitive expertise exerts an influence on the topological properties of large-scale functional networks. To address this question, we examined the overall organization of brain functional networks in 20 grandmaster- and master-level Chinese-chess players (GM/M) and 20 novice players, by means of resting-state functional connectivity and graph theoretical analyses. We found that, relative to novices, functional connectivity was increased in GM/Ms between the basal ganglia, thalamus, hippocampus, and several parietal and temporal areas, suggesting an influence of cognitive expertise on intrinsic connectivity networks associated with learning and memory. Furthermore, we observed economical small-world topology in the whole-brain functional connectivity networks of both groups, but GM/Ms exhibited significantly increased values of the normalized clustering coefficient, resulting in stronger small-world topology. These findings suggest an association between the functional organization of brain networks and individual differences in cognitive expertise, which might provide further evidence of the mechanisms underlying expert behavior. Copyright © 2014 Elsevier B.V. All rights reserved.
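
    The clustering coefficient behind the small-world comparison is straightforward to compute from an adjacency matrix. A minimal sketch of the raw per-node coefficient follows (the paper's normalized coefficient additionally divides the network mean by that of degree-matched random graphs, which is omitted here).

```python
import numpy as np

# Per-node clustering coefficient of an undirected graph:
#   C_i = 2 * (triangles through i) / (k_i * (k_i - 1)),
# where k_i is the degree of node i.
def clustering(A):
    A = np.asarray(A, float)
    deg = A.sum(axis=1)
    tri = np.diag(A @ A @ A) / 2          # triangles through each node
    with np.errstate(divide="ignore", invalid="ignore"):
        c = 2 * tri / (deg * (deg - 1))
    return np.nan_to_num(c)               # nodes with degree < 2 get C = 0

# Example: a triangle (nodes 0-1-2) plus one pendant node attached to 0.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], float)
print(clustering(A))
```

    Node 0 sits in one of its three possible triangles (C = 1/3), nodes 1 and 2 are fully clustered (C = 1), and the pendant node gets C = 0; a small-world network keeps this mean clustering high while path lengths stay short.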

  9. Reaction mechanism of guanidinoacetate methyltransferase, concerted or step-wise

    PubMed Central

    Zhang, Xiaodong; Bruice, Thomas C.

    2006-01-01

    We describe a quantum mechanics/molecular mechanics investigation of the guanidinoacetate methyltransferase catalyzed reaction, which shows that proton transfer from guanidinoacetate (GAA) to Asp-134 and methyl transfer from S-adenosyl-l-methionine (AdoMet) to GAA are concerted. By self-consistent-charge density functional tight binding/molecular mechanics, the bond lengths in the concerted mechanism's transition state are 1.26 Å for both the OD1 (Asp-134)–HE (GAA) and HE (GAA)–NE (GAA) bonds, and 2.47 and 2.03 Å for the S8 (AdoMet)–C9 (AdoMet) and C9 (AdoMet)–NE (GAA) bonds, respectively. The potential-energy barrier (ΔE‡) determined by single-point B3LYP/6–31+G*//MM is 18.9 kcal/mol. The contributions of the entropy (−TΔS‡) and zero-point energy corrections Δ(ZPE)‡ by normal mode analysis are 2.3 kcal/mol and −1.7 kcal/mol, respectively. Thus, the activation enthalpy of this concerted mechanism is predicted to be ΔH‡ = ΔE‡ + Δ(ZPE)‡ = 17.2 kcal/mol. The calculated free-energy barrier for the concerted mechanism is ΔG‡ = 19.5 kcal/mol, which is in excellent agreement with the value of 19.0 kcal/mol calculated from the experimental rate constant (3.8 ± 0.2·min−1). PMID:17053070

  10. From ether to acid: A plausible degradation pathway of glycerol dialkyl glycerol tetraethers

    NASA Astrophysics Data System (ADS)

    Liu, Xiao-Lei; Birgel, Daniel; Elling, Felix J.; Sutton, Paul A.; Lipp, Julius S.; Zhu, Rong; Zhang, Chuanlun; Könneke, Martin; Peckmann, Jörn; Rowland, Steven J.; Summons, Roger E.; Hinrichs, Kai-Uwe

    2016-06-01

    Glycerol dialkyl glycerol tetraethers (GDGTs) are ubiquitous microbial lipids with extensive demonstrated and potential roles as paleoenvironmental proxies. Despite the great attention they receive, comparatively little is known regarding their diagenetic fate. Putative degradation products of GDGTs, identified as hydroxyl and carboxyl derivatives, were detected in lipid extracts of marine sediment, seep carbonate, hot spring sediment and cells of the marine thaumarchaeon Nitrosopumilus maritimus. The distribution of GDGT degradation products in environmental samples suggests that both biotic and abiotic processes act as sinks for GDGTs. More than a hundred newly recognized degradation products afford a view of the stepwise degradation of GDGT via (1) ether bond hydrolysis yielding hydroxyl isoprenoids, namely, GDGTol (glycerol dialkyl glycerol triether alcohol), GMGD (glycerol monobiphytanyl glycerol diether), GDD (glycerol dibiphytanol diether), GMM (glycerol monobiphytanol monoether) and bpdiol (biphytanic diol); (2) oxidation of isoprenoidal alcohols into corresponding carboxyl derivatives and (3) chain shortening to yield C39 and smaller isoprenoids. This plausible GDGT degradation pathway from glycerol ethers to isoprenoidal fatty acids provides the link to commonly detected head-to-head linked long chain isoprenoidal hydrocarbons in petroleum and sediment samples. The problematic C80 to C82 tetraacids that cause naphthenate deposits in some oil production facilities can be generated from H-shaped glycerol monoalkyl glycerol tetraethers (GMGTs) following the same process, as indicated by the distribution of related derivatives in hydrothermally influenced sediments.

  11. Automated oral cancer identification using histopathological images: a hybrid feature extraction paradigm.

    PubMed

    Krishnan, M Muthu Rama; Venkatraghavan, Vikram; Acharya, U Rajendra; Pal, Mousumi; Paul, Ranjan Rashmi; Min, Lim Choo; Ray, Ajoy Kumar; Chatterjee, Jyotirmoy; Chakraborty, Chandan

    2012-02-01

    Oral cancer (OC) is the sixth most common cancer in the world; in India it is the most common malignant neoplasm. Histopathological images have been widely used in the differential diagnosis of normal, oral precancerous (oral sub-mucous fibrosis (OSF)) and cancer lesions. However, this technique is limited by subjective interpretation and less accurate diagnosis. The objective of this work is to improve the classification accuracy based on textural features in the development of a computer-assisted screening of OSF. The approach introduced here grades the histopathological tissue sections into normal, OSF without dysplasia (OSFWD) and OSF with dysplasia (OSFD), which would help oral onco-pathologists to screen subjects rapidly. The biopsy sections are stained with H&E. The optical density of the pixels in the light microscopic images is recorded and represented as a matrix quantized as integers from 0 to 255 for each fundamental color (red, green, blue), resulting in an M×N×3 matrix of integers. Depending on the normal or OSF condition, the image contains various granular structures, which are self-similar patterns at different scales termed "texture". We extracted these textural changes using Higher Order Spectra (HOS), Local Binary Pattern (LBP), and Laws Texture Energy (LTE) from the histopathological images (normal, OSFWD and OSFD). These feature vectors were fed to five different classifiers: Decision Tree (DT), Sugeno fuzzy, Gaussian Mixture Model (GMM), K-Nearest Neighbor (K-NN), and Radial Basis Probabilistic Neural Network (RBPNN), to select the best classifier. Our results show that the combination of texture and HOS features coupled with the fuzzy classifier resulted in an accuracy of 95.7%, with sensitivity and specificity of 94.5% and 98.8%, respectively. Finally, we propose a novel integrated index called the Oral Malignancy Index (OMI), based on the HOS, LBP and LTE features, to diagnose benign or malignant tissues using just one number. 
We hope that this OMI can help clinicians make a faster and more objective detection of benign/malignant oral lesions. Copyright © 2011 Elsevier Ltd. All rights reserved.
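    The GMM entry in the five-classifier comparison above works as a generative classifier: one mixture is fitted per tissue class, and a sample is assigned to the class whose mixture gives it the highest likelihood. A minimal sketch of that idea follows, using synthetic two-dimensional features (not the paper's HOS/LBP/LTE pipeline):

    ```python
    # Sketch: a GMM used as a generative classifier, one mixture per class.
    # Features are synthetic and illustrative, not the paper's texture features.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Synthetic 2-D "texture features" for the three tissue grades.
    classes = {
        "normal": rng.normal([0, 0], 0.5, (100, 2)),
        "OSFWD":  rng.normal([2, 0], 0.5, (100, 2)),
        "OSFD":   rng.normal([0, 2], 0.5, (100, 2)),
    }
    # Fit one small GMM per class on that class's training features.
    models = {name: GaussianMixture(n_components=2, random_state=0).fit(X)
              for name, X in classes.items()}

    def classify(x):
        # Assign the class whose GMM gives the highest log-likelihood.
        scores = {name: m.score_samples(np.atleast_2d(x))[0]
                  for name, m in models.items()}
        return max(scores, key=scores.get)

    print(classify([1.9, 0.1]))  # falls near the "OSFWD" cluster
    ```

    In practice the per-class likelihoods can also be combined with class priors to yield posterior probabilities, which is how a GMM classifier is usually reported.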

  12. Machine-learned analysis of quantitative sensory testing responses to noxious cold stimulation in healthy subjects.

    PubMed

    Weyer-Menkhoff, I; Thrun, M C; Lötsch, J

    2018-05-01

    Pain in response to noxious cold has a complex molecular background probably involving several types of sensors. A recent observation has been the multimodal distribution of human cold pain thresholds. This study aimed at analysing reproducibility and stability of this observation and further exploration of data patterns supporting a complex background. Pain thresholds to noxious cold stimuli (range 32-0 °C, tonic: temperature decrease -1 °C/s, phasic: temperature decrease -8 °C/s) were acquired in 148 healthy volunteers. The probability density distribution was analysed using machine-learning derived methods implemented as Gaussian mixture modeling (GMM), emergent self-organizing maps and self-organizing swarms of data agents. The probability density function of pain responses was trimodal (mean thresholds at 25.9, 18.4 and 8.0 °C for tonic and 24.5, 18.1 and 7.5 °C for phasic stimuli). Subjects' association with Gaussian modes was consistent between both types of stimuli (weighted Cohen's κ = 0.91). Patterns emerging in self-organizing neuronal maps and swarms could be associated with different trends towards decreasing cold pain sensitivity in different Gaussian modes. On self-organizing maps, the third Gaussian mode emerged as particularly distinct. Thresholds at, roughly, 25 and 18 °C agree with known working temperatures of TRPM8 and TRPA1 ion channels, respectively, and hint at relative local dominance of either channel in respective subjects. Data patterns suggest involvement of further distinct mechanisms in cold pain perception at lower temperatures. Findings support data science approaches to identify biologically plausible hints at complex molecular mechanisms underlying human pain phenotypes. Sensitivity to pain is heterogeneous. Data-driven computational research approaches allow the identification of subgroups of subjects with a distinct pattern of sensitivity to cold stimuli. 
The subgroups are reproducible with different types of noxious cold stimuli. The subgroups show patterns that hint at distinct, inter-individually different types of underlying molecular background. © 2018 European Pain Federation - EFIC®.
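    The core of the analysis above, recovering a trimodal threshold distribution and associating each subject with a Gaussian mode, can be sketched with a three-component one-dimensional GMM. The data below are synthetic, centred on the reported tonic-stimulus means (25.9, 18.4 and 8.0 °C), not the study's measurements:

    ```python
    # Sketch: fit a 3-component 1-D GMM to synthetic cold-pain thresholds
    # and assign each subject to the mode of highest responsibility.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(1)
    thresholds = np.concatenate([
        rng.normal(25.9, 1.5, 70),   # least cold-sensitive mode
        rng.normal(18.4, 1.5, 50),
        rng.normal(8.0, 1.5, 28),    # most cold-tolerant mode
    ]).reshape(-1, 1)

    gmm = GaussianMixture(n_components=3, random_state=0).fit(thresholds)
    means = sorted(gmm.means_.ravel())
    print([round(m, 1) for m in means])   # close to [8.0, 18.4, 25.9]
    # Subject-to-mode association, as used for the consistency check:
    modes = gmm.predict(thresholds)
    ```

    The reported weighted Cohen's κ = 0.91 would then be computed between the `modes` labels obtained for tonic and phasic stimuli.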

  13. Carbon fiber based composites stress analysis. Experimental and computer comparative studies

    NASA Astrophysics Data System (ADS)

    Sobek, M.; Baier, A.; Buchacz, A.; Grabowski, Ł.; Majzner, M.

    2015-11-01

    The composite materials used nowadays are the result of advanced research, which makes them among the most elaborate engineered products of our century, as evidenced by their widespread use in the most demanding industries, such as the aerospace and space industries. Heterogeneous materials and their advantages, however, have been known to mankind since ancient times, and nature has used them for millions of years. Among the fibers used in industry, the most common are nylon, polyester, polypropylene, boron, metal, glass, carbon and aramid. Thanks to their physical properties, the last three fiber types deserve special attention: their high strength-to-weight ratio enables many industrial applications. Composites based on carbon and glass fibers are widely used in the automotive industry, while aramid fibers are ideal for protective clothing made from aramid fabrics. The paper presents issues of stress analysis of composite materials. The components of composite materials and the principles of their composition are discussed, with particular attention paid to epoxy resins and fabrics made from carbon fibers. The article also includes basic information about strain measurements performed with the resistance strain gauge method. For the laboratory tests, a series of carbon-epoxy composite samples was made from plain-weave carbon fabric with an areal weight of 200 g/m2 and LG730 epoxy resin. During the laboratory strain tests described in the paper, Tenmex delta-type strain gauge rosettes were used, arranged at specific locations on the surface of the samples. Data acquisition was performed using HBM measurement equipment, which included a measuring amplifier and a measuring head, together with the catman Easy software. 
In order to verify the results of the laboratory tests, numerical studies were carried out in the Siemens PLM NX 9.0 computing environment. For this purpose, composite samples corresponding to the real samples were modeled, and tests were performed with boundary conditions matching those of the laboratory tests.
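    The delta-type (0°/60°/120°) rosettes mentioned above are reduced to principal stresses with standard textbook transformation formulas. A generic sketch under plane-stress assumptions follows; the gauge readings and material constants are illustrative, not the paper's measured data:

    ```python
    # Sketch: delta-rosette (0°/60°/120°) strain readings -> principal
    # stresses under plane stress. Standard rosette reduction formulas;
    # readings and CFRP-like constants below are hypothetical.
    import math

    def delta_rosette_principal_stress(ea, eb, ec, E, nu):
        # Transform the three gauge strains to Cartesian components.
        ex = ea
        ey = (2.0 * (eb + ec) - ea) / 3.0
        gxy = 2.0 * (eb - ec) / math.sqrt(3.0)
        # Principal strains from Mohr's circle.
        avg = 0.5 * (ex + ey)
        r = math.hypot(0.5 * (ex - ey), 0.5 * gxy)
        e1, e2 = avg + r, avg - r
        # Plane-stress Hooke's law for the principal stresses.
        k = E / (1.0 - nu**2)
        return k * (e1 + nu * e2), k * (e2 + nu * e1)

    # Hypothetical readings (strain) and laminate-level elastic constants.
    s1, s2 = delta_rosette_principal_stress(800e-6, 200e-6, -100e-6,
                                            E=60e9, nu=0.3)
    print(round(s1 / 1e6, 1), round(s2 / 1e6, 1))  # principal stresses, MPa
    ```

    The same reduction is what the acquisition software applies before the measured field can be compared with the finite-element results.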

  14. Formulation, development and characterization of mucoadhesive film for treatment of vaginal candidiasis.

    PubMed

    Mishra, Renuka; Joshi, Priyanka; Mehta, Tejal

    2016-01-01

    The objective of the present investigation was the formulation, optimization and characterization of a mucoadhesive film of clotrimazole (CT) which is patient-convenient and provides an effective alternative for the treatment of vaginal candidiasis. CT is an antimycotic drug applied locally for the treatment of vaginal candidiasis. Mucoadhesive vaginal films were prepared by the solvent casting technique using hydroxypropyl cellulose and sodium alginate as polymers. Propylene glycol and polyethylene glycol-400 were evaluated as plasticizers. The mucoadhesive vaginal films were evaluated for percentage elongation, tensile strength, folding endurance, drug content, in vitro disintegration time, in vitro dissolution, swelling index, bioadhesive strength, and diffusion. Among the various permeation enhancers evaluated, isopropyl myristate was found to be suitable. To evaluate the role of the concentration of permeation enhancer and the concentration of polymers in the optimization of the mucoadhesive vaginal film, a 3(2) full factorial design was employed. The optimized batch showed an in vitro disintegration time of 18 min; drug content, 99.83%; and tensile strength, 502.1 g/mm(2). An in vitro diffusion study showed that 77% drug diffusion occurred in 6 h. This batch was further evaluated by scanning electron microscopy, indicating uniformity of the film. In vitro Lactobacillus inhibition and in vitro antifungal activity of the optimized batch showed an inhibitory effect against Candida albicans and no effect on Lactobacillus, which is a normal component of vaginal flora. The mucoadhesive vaginal film of CT is an effective dosage form for the treatment of vaginal candidiasis.

  15. VizieR Online Data Catalog: Spectroscopy of EG And over roughly 14 years (Kenyon+, 2016)

    NASA Astrophysics Data System (ADS)

    Kenyon, S. J.; Garcia, M. R.

    2016-08-01

    From 1994 September to 2016 January, P. Berlind, M. Calkins, and other observers acquired 480 low-resolution optical spectra of EG And with FAST, a high throughput, slit spectrograph mounted at the Fred L. Whipple Observatory 1.5m telescope on Mount Hopkins, Arizona. They used a 300 g/mm grating blazed at 4750Å, a 3'' slit, and a thinned 512*2688 CCD. These spectra cover 3800-7500Å at a resolution of 6Å. The full wavelength solution is derived from calibration lamps acquired immediately after each exposure. The wavelength solution for each frame has a probable error of <~+/-0.5Å. Most of the resulting spectra have moderate signal-to-noise ratio, S/N >~15-30 per pixel. Prior to the start of the FAST observations, we obtained occasional optical spectrophotometric observations of EG And throughout 1982-1989 with the cooled dual-beam intensified Reticon scanner (IRS) mounted on the white spectrograph at the KPNO No. 1 and No. 2 90cm telescopes. Various remote observers acquired high-resolution spectroscopic observations of EG And with the echelle spectrographs and Reticon detectors on the 1.5m telescopes of the Fred L. Whipple Observatory on Mount Hopkins, Arizona, and the Oak Ridge Observatory in Harvard, Massachusetts. These spectra cover a 44Å bandpass centered near 5190Å or 5200Å and have a resolution of roughly 12km/s. (1 data file).

  16. Allogeneic bone marrow transplantation for children with acute lymphoblastic leukemia in second remission or relapse.

    PubMed

    Lin, K H; Jou, S T; Chen, R L; Lin, D T; Lui, L T; Lin, K S

    1994-01-01

    Most children with acute lymphoblastic leukemia (ALL) are successfully treated by chemotherapy. For those patients who relapse on therapy, bone marrow transplantation (BMT) is considered most appropriate after a subsequent remission is achieved. Three boys with ALL aged from 9 to 13 years met these criteria and received BMT from their HLA-compatible sisters after marrow ablation with total body irradiation 12 Gy plus high dose cytosine arabinoside 3 gm/m2/12h x 12 doses and graft-versus-host disease (GVHD) prophylaxis with cyclosporine plus short course methotrexate from March 10, 1989 to May 23, 1992. Filgrastim (rhG-CSF) was used to hasten the recovery of granulocytes in one patient. All three patients achieved full engraftment and two had grade 1 acute GVHD. None of them developed chronic GVHD. Two patients have disease-free survival over 51 and 12 months respectively post BMT without further chemotherapy. One patient died of recurrent refractory leukemia 5 months after BMT. The toxicity of this conditioning regimen included photophobia, conjunctivitis and erythematous skin rashes. One patient who received filgrastim from day 1 to 21 developed severe bone pain; however, this patient had faster recovery of granulocyte count than the other two patients. The preliminary results of this work favor BMT for children with recurrent ALL, whose ultimate survival is usually poor when treated with chemotherapy. Further efforts are necessary to investigate new methods for reducing leukemic relapse in ALL patients undergoing BMT.

  17. Surface desensitization of polarimetric waveguide interferometers

    NASA Astrophysics Data System (ADS)

    Worth, Colin

    Non-specific binding of small molecules to the surface of waveguide biosensors presents a major obstacle to surface-sensing techniques that attempt to detect very low concentrations (<1 g/mm2) of large (500 nm to 3 μm) biological objects. Interferometric waveguide biosensors use the interaction of an evanescent light field outside of the guiding layer with a biological sample to detect a particular type of cell or bacteria at some distance from the sensor surface. In such experiments, binding of small proteins close to the surface can be a significant source of noise. It is possible to significantly improve the signal-to-noise ratio by varying the properties of the biosensor, in order to reduce or eliminate the biosensor's response to a thin protein layer at the waveguide surface, without significantly reducing the response to larger target particles. In many biosensing applications, specifically bound particles, such as bacteria, are much larger than non-specifically bound particles such as proteins. In addition, due to laminar flow conditions at the sensor surface, the latter smaller particles tend to accumulate on the sensor surface. By varying the waveguide parameters, it is possible to vary the sensitivity of the detector response as a function of sample distance from the detector, by changing the properties of the TE0 and TM0 guided modes. This results in a signal reduction of more than 85%, for thin (30 nm or less) layers adjacent to the waveguide surface.

  18. Aerodynamic performance and particle image velocimetery of piezo actuated biomimetic manduca sexta engineered wings towards the design and application of a flapping wing flight vehicle

    NASA Astrophysics Data System (ADS)

    DeLuca, Anthony M.

    Considerable research and investigation has been conducted on the aerodynamic performance and the predominant flow physics of the Manduca sexta-sized, biomimetically designed and fabricated wings as part of the AFIT FWMAV design project. Despite a burgeoning interest and research into the diverse field of flapping wing flight and biomimicry, the aerodynamics of flapping wing flight remains a nebulous field of science with considerable variance in the theoretical abstractions surrounding the aerodynamic mechanisms responsible for aerial performance. Traditional FWMAV flight models assume a form of a quasi-steady approximation of wing aerodynamics based on an infinite wing blade element model (BEM). An accurate estimation of the lift, drag, and side force coefficients is a critical component of autonomous stability and control models. This research focused on two separate experimental avenues into the aerodynamics of AFIT's engineered hawkmoth wings: forces and flow visualization. 1. Six degree of freedom force balance testing and high speed video analysis were conducted on 30°, 45°, and 60° angle stop wings. A novel, non-intrusive optical tracking algorithm was developed utilizing a combination of a Gaussian Mixture Model (GMM) and Computer Vision (OpenCV) tools to track the wing in motion from multiple cameras. A complete mapping of the wing's kinematic angles as a function of driving amplitude was performed. The stroke angle, elevation angle, and angle of attack were tabulated for all three wings at driving amplitudes ranging from A=0.3 to A=0.6. The wing kinematics together with the force balance data were used to develop several aerodynamic force coefficient models. A combined translational and rotational aerodynamic model predicted lift forces within 10%, and vertical forces within 6%. 
The total power consumption was calculated for each of the three wings, and a Figure of Merit was calculated for each wing as a general expression of the overall efficiency of the wing. The 60° angle stop wing achieved the largest total stroke angle and generated the most lift for the lowest power consumption of the wings tested. 2. Phase averaged stereo Particle Image Velocimetry (PIV) data were collected at eight phases through the flap cycle on the 30°, 45°, and 60° angle stop wings. Wings were mounted transverse and parallel to the interrogating laser sheet, and planar velocity intersections at the wing mid-span, one chord below the wing, were compared to one another to verify data fidelity. A Rankine-Froude actuator disk model was adapted to calculate the approximate vertical thrust generated from the total momentum flux through the flapping semi-disk using the velocity field measurements. Three component stereo u, v, and w-velocity contour measurements confirmed the presence of extensive vortical structures in the vicinity of the wing. The leading edge vortex was successfully tracked through the stroke cycle, appearing at approximately 25% span, increasing in circulatory strength and translational velocity down the span toward the tip, and dissipating just after 75% span. Thrust calculations showed the vertically mounted wing more accurately represented the vertical forces when compared to its corresponding force balance measurement than the horizontally mounted wing. The mid-span showed the highest vertical velocity profile below the wing and hence was the location responsible for the majority of lift production along the span.
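    The idea behind GMM-based optical tracking of a bright wing marker can be sketched by modeling pixel intensities as a two-component mixture (dark background vs. bright marker) and taking the centroid of the brighter component. This is a synthetic illustration of the principle only, not the AFIT algorithm, which combined a GMM with OpenCV tools across multiple cameras:

    ```python
    # Sketch: separate a bright "wing" patch from the background with a
    # 2-component intensity GMM, then locate the patch centroid.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    frame = rng.normal(30, 5, (64, 64))    # dark, noisy background
    frame[20:28, 40:48] += 150             # synthetic bright wing patch

    gmm = GaussianMixture(n_components=2, random_state=0)
    labels = gmm.fit_predict(frame.reshape(-1, 1)).reshape(frame.shape)
    fg = labels == np.argmax(gmm.means_.ravel())   # brighter component
    rows, cols = np.nonzero(fg)
    print(rows.mean(), cols.mean())  # centroid near row 23.5, col 43.5
    ```

    Repeating this per frame and per camera yields the marker trajectories from which the stroke, elevation, and attack angles can be derived.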

  19. An optical/NIR survey of globular clusters in early-type galaxies. III. On the colour bimodality of globular cluster systems

    NASA Astrophysics Data System (ADS)

    Chies-Santos, A. L.; Larsen, S. S.; Cantiello, M.; Strader, J.; Kuntschner, H.; Wehner, E. M.; Brodie, J. P.

    2012-03-01

    Context. The interpretation that bimodal colour distributions of globular clusters (GCs) reflect bimodal metallicity distributions has been challenged. Non-linearities in the colour to metallicity conversions caused for example by the horizontal branch (HB) stars may be responsible for transforming a unimodal metallicity distribution into a bimodal (optical) colour distribution. Aims: We study optical/near-infrared (NIR) colour distributions of the GC systems in 14 E/S0 galaxies. Methods: We test whether the bimodal feature, generally present in optical colour distributions, remains in the optical/NIR ones. The latter colour combination is a better metallicity proxy than the former. We use KMM and GMM tests to quantify the probability that different colour distributions are better described by a bimodal, as opposed to a unimodal distribution. Results: We find that double-peaked colour distributions are more commonly seen in optical than in optical/NIR colours. For some of the galaxies where the optical (g - z) distribution is clearly bimodal, a bimodal distribution is not preferred over a unimodal one at a statistically significant level for the (g - K) and (z - K) distributions. The two most cluster-rich galaxies in our sample, NGC 4486 and NGC 4649, show some interesting differences. The (g - K) distribution of NGC 4649 is better described by a bimodal distribution, while this is true for the (g - K) distribution of NGC 4486 GCs only if restricted to a brighter sub-sample with small K-band errors (<0.05 mag). Formally, the K-band photometric errors cannot be responsible for blurring bimodal metallicity distributions to unimodal (g - K) colour distributions. However, simulations including the extra scatter in the colour-colour diagrams (not fully accounted for in the photometric errors) show that such scatter may contribute to the disappearance of bimodality in (g - K) for the full NGC 4486 sample. 
For the less cluster-rich galaxies results are inconclusive due to poorer statistics. Conclusions: A bimodal optical colour distribution is not necessarily an indication of an underlying bimodal metallicity distribution. Horizontal branch morphology may play an important role in shaping some of the optical GC colour distributions. However, we find tentative evidence that the (g - K) colour distributions remain bimodal in the two cluster-rich galaxies in our sample (NGC 4486 and NGC 4649) when restricted to clusters with small K-band photometric errors. This bimodality becomes less pronounced when including objects with larger errors, or for the (z - K) colour distributions. Deeper observations of large numbers of GCs will be required to reach more secure conclusions.
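    The GMM-style test used above to decide between unimodal and bimodal colour distributions can be sketched by fitting one- and two-component mixtures and comparing an information criterion (here BIC; the published GMM test also uses a parametric-bootstrap likelihood ratio and the kurtosis). The colours below are a synthetic bimodal mock, not the survey photometry:

    ```python
    # Sketch: unimodal-vs-bimodal comparison for a GC colour distribution
    # by fitting 1- and 2-component GMMs and comparing BIC.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(3)
    # Mock (g - z) colours: blue and red GC subpopulations.
    colours = np.concatenate([rng.normal(0.90, 0.08, 120),
                              rng.normal(1.35, 0.08, 120)]).reshape(-1, 1)

    bic = {k: GaussianMixture(n_components=k, random_state=0)
              .fit(colours).bic(colours)
           for k in (1, 2)}
    print(bic[2] < bic[1])  # lower BIC for 2 components: bimodal preferred
    ```

    Adding photometric scatter to the mock colours (as in the simulations described for NGC 4486) progressively erodes the preference for two components.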

  20. Classical and quantum decay of oscillations: Oscillating self-gravitating real scalar field solitons

    NASA Astrophysics Data System (ADS)

    Page, Don N.

    2004-07-01

    The oscillating gravitational field of an oscillaton of finite mass M causes it to lose energy by emitting classical scalar field waves, but at a rate that is nonperturbatively tiny for small μ ≡ GMm/ħc, where m is the scalar field mass: dM/dt ≈ -3 797 437.776 (c³/G) μ⁻² e^(-39.433 795 197/μ) [1+O(μ)]. Oscillatons also decay by the quantum process of the annihilation of scalarons into gravitons, which is only perturbatively small in μ, giving by itself dM/dt ≈ -0.008 513 223 935 (m²c²/ħ) μ⁵ [1+O(μ²)]. Thus the quantum decay is faster than the classical one for μ ≲ 39.4338/[ln(ħc/Gm²) + 7 ln(1/μ) + 19.9160]. The time for an oscillaton to decay away completely into free scalarons and gravitons is t_decay ~ 2ħ⁶c³/(G⁵m¹¹) ~ 10³²⁴ yr (1 meV/mc²)¹¹. Oscillatons of more than one real scalar field of the same mass generically asymptotically approach a static-geometry U(1) boson star configuration with μ = μ₀, at the rate d(GM/c³)/dt ≈ [(C/μ⁴) e^(-α/μ) + Q (m/m_Pl)² μ³](μ² - μ₀²), with μ₀ depending on the magnitudes and relative phases of the oscillating fields, and with the same constants C, α, and Q given numerically above for the single-field case that is equivalent to μ₀ = 0.
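    The quoted crossover condition can be checked numerically: dividing the quantum rate by the classical one gives a factor (Gm²/ħc) μ⁷ e^(α/μ) times the ratio of the two prefactors, so the constant 19.9160 should equal ln(3 797 437.776 / 0.008 513 223 935). A worked check under that reading of the abstract:

    ```python
    # Worked check: the quantum channel beats the classical one when
    # alpha/mu > ln(ħc/Gm²) + 7 ln(1/μ) + ln(C_CL/C_Q), i.e. the 19.9160
    # in the abstract should be the log of the prefactor ratio.
    import math

    ALPHA = 39.433795197
    C_CL = 3797437.776        # classical-rate prefactor
    C_Q = 0.008513223935      # quantum-rate prefactor

    print(round(math.log(C_CL / C_Q), 4))  # reproduces ~19.916

    def quantum_faster(mu, hbar_c_over_Gm2):
        # Equivalent to mu < ALPHA / [ln(ħc/Gm²) + 7 ln(1/μ) + 19.916...]
        rhs = (math.log(hbar_c_over_Gm2) + 7 * math.log(1 / mu)
               + math.log(C_CL / C_Q))
        return ALPHA / mu > rhs
    ```

    For a scalar mass far below the Planck mass (large ħc/Gm²), the crossover μ is pushed down, consistent with the classical channel dominating except at small μ.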

  1. Inflammatory breast cancer: enhanced local control with hyperfractionated radiotherapy and infusional vincristine, ifosfamide and epirubicin.

    PubMed

    Gurney, H; Harnett, P; Kefford, R; Boyages, J

    1998-06-01

    The local control rate for inflammatory breast cancer (IBC) is < 50% with standard chemotherapy-radiotherapy regimens. Nineteen women (age range 40-65, median 50 years) with IBC (18 patients) or with a primary tumour of > 10 cm (one patient) received a novel treatment comprising hyperfractionated radiotherapy (HFRT) sandwiched between two cycles of infusional chemotherapy using vincristine, ifosfamide and epirubicin (VIE). The primary endpoint was local control. VIE was continuously infused for six weeks via a Hickman line using a Deltec CADD-1 ambulatory pump. Ifosfamide (3 gm/m2) mixed with equi-dose mesna was infused for seven days and alternated every week with an infusion of epirubicin (50 mg/m2) mixed with vincristine (1.5 mg/m2). HFRT consisted of 1.5 Gy twice daily for 34 fractions (51 Gy) followed by a boost of 15 Gy in 10 fractions. The total treatment time was less than 22 weeks. Median follow-up was 37 months. The local control rate was 58%. Three patients failed to respond initially and five relapsed in the breast at a median time of 36.8 months. Median overall and disease-free survival was 18 and 25.3 months respectively. Toxicity from VIE was minimal (WHO grade 3 emesis, two patients; grade 3 mucositis, one patient; neutropenic sepsis, three patients). Radiotherapy caused moist desquamation in 17/19 patients. Twenty-four central lines were complicated by seven line infections, three thromboses, and one extravasation. The local control rate of 58% with VIE + HFRT appears similar to reported chemoradiotherapy regimens, although the treatment time of 22 weeks is much shorter than other regimens, which take up to 12 months. Toxicity is acceptable. Hickman-related complications need to be reduced. The study is ongoing.

  2. Studies on 405nm blue-violet diode laser with external grating cavity

    NASA Astrophysics Data System (ADS)

    Li, Bin; Gao, Jun; Zhao, Jun; Yu, Anlan; Luo, Shiwen; Xiong, Dongsheng; Wang, Xinbing; Zuo, Duluo

    2016-03-01

    Spectroscopy applications of free-running laser diodes (LDs) are greatly restricted by their broadband spectral emission. Since the power of a single blue-violet LD is so far limited to a few hundred milliwatts, it is of great importance to obtain stable, narrow-linewidth diode lasers with high efficiency. In this paper, a high efficiency external cavity diode laser (ECDL) with high output power and narrow band emission at 405 nm is presented. The ECDL is based on a commercially available LD with nominal output power of 110 mW at an injection current of 100 mA. The spectral width of the free-running LD is about 1 nm (FWHM). A reflective holographic grating, installed on a home-made compact adjustable stage, is utilized for optical feedback in Littrow configuration. In this configuration, narrow linewidth operation is realized, and the effects of grating groove density as well as the groove direction relative to the beam polarization on the performance of the ECDL are experimentally investigated. In the case of the grating with groove density of 3600 g/mm, the threshold is reduced from 21 mA to 18.3 mA or 15.6 mA and the tuning range is 3.95 nm or 6.01 nm, respectively, when the grating is oriented in TE or TM polarization. In addition, an output beam with a linewidth of 30 pm and output power of 92.7 mW is achieved in TE polarization. With this narrow linewidth and high efficiency, the ECDL can serve as a light source for spectroscopy applications such as Raman scattering and laser induced fluorescence.
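    The Littrow geometry used above fixes the grating angle through the first-order condition m·λ = 2·d·sin(θ). A short illustrative check for the 3600 g/mm grating at 405 nm (a geometry exercise, not data from the paper):

    ```python
    # Worked example: first-order Littrow angle for a 3600 g/mm grating
    # at 405 nm, from m·λ = 2·d·sin(θ) with m = 1.
    import math

    groove_density = 3600e3           # grooves per metre (3600 g/mm)
    d = 1.0 / groove_density          # groove spacing, ~277.8 nm
    wavelength = 405e-9               # metres

    sin_theta = wavelength / (2.0 * d)        # Littrow: sinθ = λ/(2d)
    theta = math.degrees(math.asin(sin_theta))
    print(round(theta, 1))  # Littrow angle ≈ 46.8 degrees
    ```

    The steep Littrow angle at this groove density is what gives the strong angular dispersion, and hence the narrow-linewidth feedback, reported for the 3600 g/mm grating.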

  3. Phase II trial of vinblastine, ifosfamide, and gallium combination chemotherapy in metastatic urothelial carcinoma.

    PubMed

    Einhorn, L H; Roth, B J; Ansari, R; Dreicer, R; Gonin, R; Loehrer, P J

    1994-11-01

    Phase II trial in metastatic urothelial carcinoma using a novel combination chemotherapy regimen consisting of vinblastine, ifosfamide, and gallium nitrate (VIG). Twenty-seven patients were entered onto this phase II study. Dosages were vinblastine 0.11 mg/kg days 1 and 2, ifosfamide 1.2 gm/m2 days 1 through 5 (with mesna), and gallium 300 mg/m2 as a 24-hour infusion days 1 through 5, with calcitriol (1,25-dihydroxycholecalciferol) 0.5 microgram/d orally starting 3 days before each course (except the first) and continuing throughout gallium administration, plus recombinant human granulocyte colony-stimulating factor (rhG-CSF) (filgrastim) 5 micrograms/kg/d days 7 through 16. Courses were repeated every 21 days for a maximum of six cycles. The major toxicity was granulocytopenia. Fifteen patients (55.6%) had grade 3 or 4 granulocytopenia, including eight patients with granulocytopenic fevers. Eleven patients had grade 3 or 4 anemia and four had grade 3 or 4 nephrotoxicity, which was reversible. Other grade 3 to 4 toxicities included hypocalcemia (three patients), thrombocytopenia (two), encephalopathy (one), and temporary blindness (one). There was one treatment-related mortality. Toxicity was more severe in patients older than 70 years and those with prior pelvic irradiation, prior cisplatin adjuvant therapy, or prior nephrectomy. We now decrease VIG by 20% in this patient population. Eighteen patients (67%) achieved an objective response, including 11 (41%) who attained a disease-free status (five with VIG alone and six with subsequent surgery). Median duration of remission was 20 weeks, with five patients still in remission at 22+ to 56+ weeks. VIG combination chemotherapy is very active in patients with metastatic urothelial carcinoma. Toxicity was significant but manageable.

  4. Early direct-injection, low-temperature combustion of diesel fuel in an optical engine utilizing a 15-hole, dual-row, narrow-included-angle nozzle.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehrke, Christopher R.; Radovanovic, Michael S.; Milam, David M.

    2008-04-01

    Low-temperature combustion of diesel fuel was studied in a heavy-duty, single-cylinder optical engine employing a 15-hole, dual-row, narrow-included-angle nozzle (10 holes × 70° and 5 holes × 35°) with 103-μm-diameter orifices. This nozzle configuration provided the spray targeting necessary to contain the direct-injected diesel fuel within the piston bowl for injection timings as early as 70° before top dead center. Spray-visualization movies, acquired using a high-speed camera, show that impingement of liquid fuel on the piston surface can result when the in-cylinder temperature and density at the time of injection are sufficiently low. Seven single- and two-parameter sweeps around a 4.82-bar gross indicated mean effective pressure load point were performed to map the sensitivity of the combustion and emissions to variations in injection timing, injection pressure, equivalence ratio, simulated exhaust-gas recirculation, intake temperature, intake boost pressure, and load. High-speed movies of natural luminosity were acquired by viewing through a window in the cylinder wall and through a window in the piston to provide quasi-3D information about the combustion process. These movies revealed that advanced combustion phasing resulted in intense pool fires within the piston bowl, after the end of significant heat release. These pool fires are a result of fuel films created when the injected fuel impinged on the piston surface. The emissions results showed a strong correlation with pool-fire activity. Smoke and NOx emissions rose steadily as pool-fire intensity increased, whereas HC and CO showed a dramatic increase with near-zero pool-fire activity.

  5. Effect of Adjuvant Chemotherapy on Left Ventricular Remodeling in Women with Newly Diagnosed Primary Breast Cancer: A Pilot Prospective Longitudinal Cardiac Magnetic Resonance Imaging Study.

    PubMed

    Avelar, Erick; Truong, Quynh A; Inyangetor, David; Marfatia, Ravi; Yang, Clifford; Kaloudis, Electra; Tannenbaum, Susan; Rosito, Guido; Litwin, Sheldon

    2017-11-01

    The aim of this study was to assess the left ventricular (LV) remodeling response to chemotherapy in low-cardiac-risk women with newly diagnosed nonmetastatic breast cancer. Cardiotoxic effects of chemotherapy are an increasing concern. To effectively interpret cardiac imaging studies performed for screening purposes in patients undergoing cancer therapy, it is necessary to understand the normal changes in structure and function that may occur. Twenty women without preexisting cardiovascular disease, of a mean age of 50 years, newly diagnosed with nonmetastatic breast cancer and treated with anthracycline or trastuzumab, were prospectively enrolled and evaluated at four time points (at baseline, during chemotherapy, 2 weeks after chemotherapy, and 6 months after chemotherapy) using cardiac magnetic resonance imaging, blood samples, and a clinical questionnaire. Over a 6-month period, the left ventricular ejection fraction (%) decreased (64.15±5.30 to 60.41±5.77, P<0.002) and the LV end-diastolic (mL) and end-systolic (mL) volumes increased (124.73±20.25 to 132.21±19.33, P<0.04 and 45.16±11.88 to 52.57±11.65, P<0.00, respectively). The LV mass (g) did not change (73.06±11.51 to 69.21±15.3, P=0.08), but the LV mass to LVEDV ratio (g/mL) decreased (0.594±0.098 to 0.530±0.124, P<0.04). In low-cardiac-risk women with nonmetastatic breast cancer, the increased LV volume and a mildly decreased left ventricular ejection fraction during and after chemotherapy do not seem to be associated with laboratory or clinical evidence of increased risk for heart failure.

  6. Microstructure and Mechanical Properties of Laser Clad and Post-cladding Tempered AISI H13 Tool Steel

    NASA Astrophysics Data System (ADS)

    Telasang, Gururaj; Dutta Majumdar, Jyotsna; Wasekar, Nitin; Padmanabham, G.; Manna, Indranil

    2015-05-01

This study reports a detailed investigation of the microstructure and mechanical properties (wear resistance and tensile strength) of hardened and tempered AISI H13 tool steel substrate following laser cladding with AISI H13 tool steel powder, in the as-clad condition and after post-cladding conventional bulk isothermal tempering [at 823 K (550 °C) for 2 hours]. Laser cladding was carried out on the AISI H13 tool steel substrate using a 6 kW continuous wave diode laser coupled with fiber delivering an energy density of 133 J/mm², equipped with a co-axial powder feeding nozzle capable of feeding powder at the rate of 13.3 × 10⁻³ g/mm². The laser clad zone comprises martensite, retained austenite, and carbides, with an average hardness of 600 to 650 VHN. Subsequent isothermal tempering converted the microstructure into one with tempered martensite and a uniform dispersion of carbides, with a hardness of 550 to 650 VHN. Interestingly, laser cladding introduced a residual compressive stress of 670 ± 15 MPa, which reduced to 580 ± 20 MPa following isothermal tempering. Micro-tensile testing with specimens machined from the clad zone across (transverse to) the cladding direction showed high strength but failure in brittle mode. On the other hand, similar testing with samples sectioned from the clad zone parallel (longitudinal) to the direction of laser cladding, prior to and after post-cladding tempering, recorded lower strength but ductile failure with 4.7 and 8 pct elongation, respectively. Wear resistance of the laser surface clad and post-cladding tempered samples (evaluated by fretting wear testing) registered superior performance as compared to that of conventionally hardened and tempered AISI H13 tool steel.

  7. Acute lung injury following inhalation exposure to nerve agent VX in guinea pigs.

    PubMed

    Wright, Benjamin S; Rezk, Peter E; Graham, Jacob R; Steele, Keith E; Gordon, Richard K; Sciuto, Alfred M; Nambiar, Madhusoodana P

    2006-05-01

A microinstillation technique of inhalation exposure was utilized to assess lung injury following exposure to the chemical warfare nerve agent VX [methylphosphonothioic acid S-(2-[bis(1-methylethyl)amino]ethyl) O-ethyl ester] in guinea pigs. Animals were anesthetized using Telazol-medetomidine and gently intubated, and VX was aerosolized using a microcatheter placed 2 cm above the bifurcation of the trachea. Different doses (50.4 µg/m³, 70.4 µg/m³, 90.4 µg/m³) of VX were administered at 40 pulses/min for 5 min. Dosing of VX was calculated from the volume of aerosol produced per 200 pulses, diluting the agent accordingly. Although the survival rate of animals exposed to different doses of VX was similar to that of the controls, nearly a 20% weight reduction was observed in exposed animals. After 24 h of recovery, the animals were euthanized and bronchoalveolar lavage (BAL) was performed with oxygen-free saline. BAL was centrifuged and separated into BAL fluid (BALF) and BAL cells (BALC) and analyzed for indications of lung injury. Edema, assessed by the dry/wet weight ratio of the accessory lobe, increased 11% in VX-treated animals. BAL cell number was increased in VX-treated animals compared to controls, independent of dosage. Trypan blue viability assay indicated an increase in BAL cell death in 70.4 µg/m³ and 90.4 µg/m³ VX-exposed animals. Differential cell counting of BALC indicated a decrease in macrophages/monocytes in VX-exposed animals. The total amount of BAL protein increased gradually with the exposure dose of VX and was highest in animals exposed to 90.4 µg/m³, indicating that this dose of VX caused lung injury that persisted at 24 h. In addition, histopathology results also suggest that inhalation exposure to VX induces acute lung injury.

  8. Effect of high ambient temperature on behavior of sheep under semi-arid tropical environment

    NASA Astrophysics Data System (ADS)

    De, Kalyan; Kumar, Davendra; Saxena, Vijay Kumar; Thirumurugan, Palanisamy; Naqvi, Syed Mohammed Khursheed

    2017-07-01

High environmental temperature is a major constraint on sheep production in semi-arid tropical environments. Behavior is the earliest indicator of an animal's adaptation and response to environmental alteration. Therefore, the objective of this study was to assess the effects of high ambient temperature on the behavior of sheep under a semi-arid tropical environment. The experiment was conducted for 6 weeks on 16 Malpura cross (Garole × Malpura × Malpura (GMM)) rams. The rams were divided equally into two groups, designated C and T. The rams of group C, kept in comfortable environmental conditions, served as the control. The rams of group T were exposed to different temperatures at different hours of the day in a climatic chamber, to simulate the high environmental temperature of summer in the semi-arid tropics. The behavioral observations were taken by direct instantaneous observation at 15-min intervals for each animal individually. The feeding, ruminating, standing, and lying behaviors were recorded twice a week from morning (0800 hours) to afternoon (1700 hours) for 6 weeks. Exposure of rams to high temperature (T) significantly (P < 0.05) decreased the proportion of time spent feeding during most hours of the observation period as compared to group C. The proportion of time spent in rumination and lying was significantly (P < 0.05) lower in the T group compared to C. The animals of group T spent significantly (P < 0.05) more time ruminating in a standing position as compared to C. The overall proportion of time spent standing, panting in each hour, and total panting time were significantly (P < 0.05) higher in T as compared to C. The results of the study indicate that exposure of sheep to high ambient temperature severely modulates their behavior, which is directed at circumventing the effect of the stressor.

  9. Effect of high ambient temperature on behavior of sheep under semi-arid tropical environment.

    PubMed

    De, Kalyan; Kumar, Davendra; Saxena, Vijay Kumar; Thirumurugan, Palanisamy; Naqvi, Syed Mohammed Khursheed

    2017-07-01

High environmental temperature is a major constraint on sheep production in semi-arid tropical environments. Behavior is the earliest indicator of an animal's adaptation and response to environmental alteration. Therefore, the objective of this study was to assess the effects of high ambient temperature on the behavior of sheep under a semi-arid tropical environment. The experiment was conducted for 6 weeks on 16 Malpura cross (Garole × Malpura × Malpura (GMM)) rams. The rams were divided equally into two groups, designated C and T. The rams of group C, kept in comfortable environmental conditions, served as the control. The rams of group T were exposed to different temperatures at different hours of the day in a climatic chamber, to simulate the high environmental temperature of summer in the semi-arid tropics. The behavioral observations were taken by direct instantaneous observation at 15-min intervals for each animal individually. The feeding, ruminating, standing, and lying behaviors were recorded twice a week from morning (0800 hours) to afternoon (1700 hours) for 6 weeks. Exposure of rams to high temperature (T) significantly (P < 0.05) decreased the proportion of time spent feeding during most hours of the observation period as compared to group C. The proportion of time spent in rumination and lying was significantly (P < 0.05) lower in the T group compared to C. The animals of group T spent significantly (P < 0.05) more time ruminating in a standing position as compared to C. The overall proportion of time spent standing, panting in each hour, and total panting time were significantly (P < 0.05) higher in T as compared to C. The results of the study indicate that exposure of sheep to high ambient temperature severely modulates their behavior, which is directed at circumventing the effect of the stressor.

  10. Listeria monocytogenes inhibition by defatted mustard meal-based edible films.

    PubMed

    Lee, Hahn-Bit; Noh, Bong Soo; Min, Sea C

    2012-02-01

An antimicrobial edible film was developed from defatted mustard meal (Sinapis alba) (DMM), a byproduct of the bio-fuel industry, without incorporating external antimicrobials, and its antimicrobial activity against Listeria monocytogenes and its physical properties were investigated. The DMM colloidal solution, consisting of 184 g water, 14 g DMM, and 2 g glycerol, was homogenized and incubated at 37°C for 0.2, 0.5, 24, or 48 h to prepare a film-forming solution. The pH of a portion of the film-forming solution (pH 5.5) was adjusted to 2.0 or 4.0. Films were formed by drying the film-forming solutions at 23°C for 48 h. The film-forming solution incubated for 48 h inhibited L. monocytogenes in broth and on agar media. Antimicrobial effects of the film prepared from the 48 h-incubated solution increased with the decrease in pH of the solution from 5.5 to 2.0. The film from the film-forming solution incubated for 48 h (pH 2.0) initially inhibited more than 4.0 log CFU/g of L. monocytogenes inoculated on film-coated salmon. The film coating retarded the growth of L. monocytogenes in smoked salmon at 5, 10, and 15°C, and the antimicrobial effect during storage was more noticeable when the coating was applied before inoculation than when it was applied after inoculation. The tensile strength, percentage elongation, solubility in water, and water vapor permeability of the antimicrobial film were 2.44 ± 0.19 MPa, 6.40 ± 1.13%, 3.19 ± 0.90%, and 3.18 ± 0.63 g·mm/(kPa·h·m²), respectively. The antimicrobial DMM films demonstrated potential for application to foods as wraps or coatings to control the growth of L. monocytogenes. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Cloning and expression of vgb gene in Bacillus cereus, improve phenol and p-nitrophenol biodegradation

    NASA Astrophysics Data System (ADS)

    Vélez-Lee, Angel Eduardo; Cordova-Lozano, Felipe; Bandala, Erick R.; Sanchez-Salas, Jose Luis

    2016-02-01

In this work, the vgb gene from Vitreoscilla stercoraria was used to genetically modify a Bacillus cereus strain isolated from pulp and paper wastewater effluent. The gene was cloned in a multicopy plasmid (pUB110) or as a uni-copy gene using a chromosome integrative vector (pTrpBG1). B. cereus and its recombinant strains were used for phenol and p-nitrophenol biodegradation under aerobic or micro-aerobic conditions and at two different temperatures (i.e. 37 and 25 °C). Complete (100%) phenol degradation was obtained for the strain carrying the multicopy vgb gene, 98% for the strain carrying the uni-copy gene, and 45% for the wild type strain under the same experimental conditions (i.e. 37 °C and aerobic conditions). For p-nitrophenol degradation under the same conditions, the strain with the multicopy vgb gene achieved 50% biodegradation, ˜100% biodegradation was obtained using the uni-copy strain, and ˜24% for the wild type strain. When the micro-aerobic condition was tested, the biodegradation yield showed a significant decrease. The biodegradation trend observed under aerobic conditions was similar in the micro-aerobic assessments: the modified strains showed higher degradation rates when compared with the wild type strain. For all experimental conditions, the highest p-nitrophenol degradation was observed using the strain with the uni-copy vgb gene. Besides increasing the biodegradative capability of the strain, insertion of the vgb gene was also observed to modify other morphological characteristics, such as avoiding the typical flake formation in the B. cereus culture. In both cases, the modification seems to be related to the enhancement of oxygen supply to the cells generated by the vgb gene insertion. The application of this genetically modified microorganism (GMM) to the biodegradation of pollutants in contaminated water has high potential as an environmentally friendly technology for facing this emerging problem.

  12. Structural health monitoring of composite laminates using piezoelectric and fiber optics sensors

    NASA Astrophysics Data System (ADS)

    Roman, Catalin

This research proposes a new approach to structural health monitoring (SHM) for composite laminates using piezoelectric wafer active sensors (PWAS) and fiber optic Bragg grating sensors (FBG). One major focus of this research was directed towards extending the theory of laminates to composite beams by combining the global matrix method (GMM) with the stiffness transfer matrix method (STMM). The STMM approach, developed by Rokhlin et al. (2002), is unconditionally stable and is more computationally efficient than the transfer matrix method (TMM). Starting from theory, we developed different configurations for composite beams and validated the results from the developed analytical method against experimental data. STMM was then developed for a pristine composite beam and a delaminated composite beam. We studied the influence of the bonded PWAS by looking at their mode frequencies and amplitudes via experiments and simulations with different sensor positions on pristine and damaged beams, with different delamination sizes and depths. We also extended the TMM and the electro-mechanical (E/M) impedance method, with attention to the convergence of the TMM for beam vibrations. The focus was on high-accuracy predictive modeling of the interaction between PWAS and structural waves and vibration, using a methodology as in Cuc (2010). We expanded the frequency resonances of a uniform beam from the range of 1-30 kHz previously studied by Cuc (2010) to a higher frequency range of 10-100 kHz and performed a reliability and accuracy analysis (error rates) of all available theoretical models (modal expansion, TMM, and FEM) given experimental data for the uniform beam specimen. Another focus of this research was to explore the use of FBG for fiber composite applications. We performed tests varying the load on the free end in order to understand the behavior of composite materials under tensile forces and to extend the results to ring sensor applications.
The last part of this research focused on developing a novel acousto-ultrasonic sensor that can detect acoustic emission (AE) events using optical FBG sensing combined with mechanical resonance amplification principles. This method consists of a sensor that can detect out-of-plane ultrasonic motion with preference for a certain frequency (300 kHz). Finally, we introduced the concept of a FBG ring sensor for a Navy application, which can provide significant improvements in detecting vibrations. We used a laser vibrometry tool (PSV-400-3D from Polytec) to study the mode shapes of the sensor ring at different resonance frequencies in order to understand the behavior of the ring in the frequency band of interest (300 kHz), and further compared these results and shapes with FEM predictions (ANSYS WB). Our experiments proved that the concept works, and a ring sensor that can reach its first resonance at any desired frequency was built and successfully tested. This work was finalized with an invention disclosure for a novel acousto-ultrasonic FBG ring sensor (Disclosure ID No. 00937). The dissertation ends with conclusions and suggestions for future work.
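    The transfer matrix method (TMM) named in this record can be sketched in miniature. The example below is an assumption-laden toy: a uniform fixed-free bar in axial vibration with invented aluminum properties, chained through several TMM segments, not the dissertation's laminate STMM code. Resonance occurs where the (1,1) entry of the chained matrix vanishes.

```python
import numpy as np

# Illustrative TMM for axial vibration of a uniform bar (hypothetical values,
# not the laminate model described in the record above).
E, rho, A = 70e9, 2700.0, 1e-4   # Young's modulus [Pa], density [kg/m^3], area [m^2]
L = 1.0                          # total bar length [m]
c = np.sqrt(E / rho)             # axial wave speed [m/s]

def segment_matrix(length, omega):
    """Transfer matrix relating the state [u, N] across one uniform segment."""
    k = omega / c
    return np.array([[np.cos(k*length),         np.sin(k*length)/(E*A*k)],
                     [-E*A*k*np.sin(k*length),  np.cos(k*length)]])

def characteristic(omega, n_seg=4):
    """Fixed-free bar: u(0)=0, so tip force N(L) is the (1,1) product entry;
    its zeros are the natural frequencies."""
    T = np.eye(2)
    for _ in range(n_seg):
        T = segment_matrix(L/n_seg, omega) @ T
    return T[1, 1]

# Bisection for the first root; the analytic answer is pi*c/(2L).
lo, hi = 1.0, 1.2*np.pi*c/(2*L)
for _ in range(60):
    mid = 0.5*(lo + hi)
    if characteristic(lo)*characteristic(mid) <= 0:
        hi = mid
    else:
        lo = mid
omega1 = 0.5*(lo + hi)
print(omega1, np.pi*c/(2*L))     # TMM root vs analytic value [rad/s]
```

    Chaining several identical segments reproduces the single-segment result exactly, which is a quick sanity check on any TMM implementation before moving to layered (composite) sections.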

  13. Trajectories of Depressive Symptoms Among Web-Based Health Risk Assessment Participants.

    PubMed

    Bedrosian, Richard; Hawrilenko, Matt; Cole-Lewis, Heather

    2017-03-31

    Health risk assessments (HRAs), which often screen for depressive symptoms, are administered to millions of employees and health plan members each year. HRA data provide an opportunity to examine longitudinal trends in depressive symptomatology, as researchers have done previously with other populations. The primary research questions were: (1) Can we observe longitudinal trajectories in HRA populations like those observed in other study samples? (2) Do HRA variables, which primarily reflect modifiable health risks, help us to identify predictors associated with these trajectories? (3) Can we make meaningful recommendations for population health management, applicable to HRA participants, based on predictors we identify? This study used growth mixture modeling (GMM) to examine longitudinal trends in depressive symptomatology among 22,963 participants in a Web-based HRA used by US employers and health plans. The HRA assessed modifiable health risks and variables such as stress, sleep, and quality of life. Five classes were identified: A "minimal depression" class (63.91%, 14,676/22,963) whose scores were consistently low across time, a "low risk" class (19.89%, 4568/22,963) whose condition remained subthreshold, a "deteriorating" class (3.15%, 705/22,963) who began at subthreshold but approached severe depression by the end of the study, a "chronic" class (4.71%, 1081/22,963) who remained highly depressed over time, and a "remitting" class (8.42%, 1933/22,963) who had moderate depression to start, but crossed into minimal depression by the end. Among those with subthreshold symptoms, individuals who were male (P<.001) and older (P=.01) were less likely to show symptom deterioration, whereas current depression treatment (P<.001) and surprisingly, higher sleep quality (P<.001) were associated with increased probability of membership in the "deteriorating" class as compared with "low risk." 
Among participants with greater symptomatology to start, those in the "chronic" class tended to be younger than those in the "remitting" class (P<.001). Lower baseline sleep quality (P<.001), quality of life (P<.001), stress level (P<.001), and current treatment involvement (P<.001) were all predictive of membership in the "chronic" class. The trajectories identified were consistent with trends in previous research. The results identified some key predictors: we discuss those that mirror prior studies and offer some hypotheses as to why others did not. The finding that 1 in 5 HRA participants with subthreshold symptoms deteriorated to the point of clinical distress during succeeding years underscores the need to learn more about such individuals. We offer additional recommendations for follow-up research, which should be designed to reflect changes in health plan demographics and HRA delivery platforms. In addition to utilizing additional variables such as cognitive style to refine predictive models, future research could also begin to test the impact of more aggressive outreach strategies aimed at participants who are likely to deteriorate or remain significantly depressed over time. ©Richard Bedrosian, Matt Hawrilenko, Heather Cole-Lewis. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 31.03.2017.
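    The class-discovery step behind growth mixture modeling (GMM) can be sketched with a Gaussian mixture fitted by expectation-maximization. The data below are simulated stand-ins (a "stable" and a "deteriorating" trajectory class), and clustering per-subject slopes is a simplification of a full growth mixture model, which fits the entire curves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated repeated symptom scores: 200 "stable" subjects (slope ~ 0)
# and 50 "deteriorating" subjects (slope ~ 2) across five waves.
t = np.arange(5.0)
stable = 5 + 0*t + rng.normal(0, 1, (200, 5))
deter  = 5 + 2*t + rng.normal(0, 1, (50, 5))
scores = np.vstack([stable, deter])

# Per-subject growth feature: ordinary least-squares slope across waves.
slopes = np.polyfit(t, scores.T, 1)[0]

# Two-component 1D Gaussian mixture fitted by EM on the slopes.
mu = np.array([slopes.min(), slopes.max()])   # spread initial means
sd = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibility of each component for each subject's slope
    dens = w * np.exp(-0.5*((slopes[:, None]-mu)/sd)**2) / (sd*np.sqrt(2*np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and standard deviations
    nk = resp.sum(axis=0)
    w = nk / len(slopes)
    mu = (resp*slopes[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp*(slopes[:, None]-mu)**2).sum(axis=0) / nk)

print(mu, w)   # component means near the true slopes 0 and 2; weights near 0.8/0.2
```

    The recovered mixture weights play the role of the class prevalences reported in the abstract (e.g., the 3.15% "deteriorating" class), though dedicated growth mixture software also models within-class variance structure and covariates.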

  14. Continuous system modeling

    NASA Technical Reports Server (NTRS)

    Cellier, Francois E.

    1991-01-01

    A comprehensive and systematic introduction is presented for the concepts associated with 'modeling', involving the transition from a physical system down to an abstract description of that system in the form of a set of differential and/or difference equations, and basing its treatment of modeling on the mathematics of dynamical systems. Attention is given to the principles of passive electrical circuit modeling, planar mechanical systems modeling, hierarchical modular modeling of continuous systems, and bond-graph modeling. Also discussed are modeling in equilibrium thermodynamics, population dynamics, and system dynamics, inductive reasoning, artificial neural networks, and automated model synthesis.
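    The transition this record describes, from a physical system to a set of differential equations, can be illustrated with the simplest passive electrical circuit model. The component values below are invented for illustration; the book's bond-graph machinery would derive the same first-order ODE.

```python
import numpy as np

# A passive RC low-pass filter as a continuous-system model: the circuit
# reduces to the ODE  dv/dt = (u - v)/(R*C).  Values are illustrative.
R, C = 1e3, 1e-6          # 1 kOhm, 1 uF  ->  time constant tau = 1 ms
tau = R * C
dt, T = 1e-6, 5e-3        # 1 us forward-Euler step, simulate 5 ms
u = 1.0                   # unit step input voltage
v = 0.0                   # capacitor voltage state
for _ in range(int(T/dt)):
    v += dt * (u - v) / tau   # forward-Euler discretisation of the ODE
print(v)                      # close to the analytic step response 1 - exp(-T/tau)
```

    Replacing forward Euler with an implicit method is the usual next step for the stiff systems mentioned in record 19 below, where widely separated time constants make explicit integration impractical.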

  15. Feature-based component model for design of embedded systems

    NASA Astrophysics Data System (ADS)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

An embedded system is a hybrid of hardware and software, combining the flexibility of software with the real-time performance of hardware. Embedded systems can be considered assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and for system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., an Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype by assembling virtual components. The framework not only provides a formal, precise model of the embedded system prototype but also offers the possibility of designing variations of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.
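    The idea of deriving prototype variants by swapping virtual components with different features can be sketched in a few classes. The class and field names below are invented for illustration and are not the NIST OESFM schema.

```python
from dataclasses import dataclass, field

# Minimal sketch of a feature-based component model in the spirit of the
# record above; names are hypothetical, not the NIST representation.
@dataclass
class Feature:
    name: str
    value: str

@dataclass
class Component:
    name: str
    kind: str                       # "hardware" or "software"
    features: list = field(default_factory=list)

@dataclass
class EmbeddedSystem:
    name: str
    components: list = field(default_factory=list)

    def variants(self, component_name, alternatives):
        """Derive design variants by swapping one virtual component."""
        out = []
        for alt in alternatives:
            comps = [alt if c.name == component_name else c
                     for c in self.components]
            out.append(EmbeddedSystem(self.name, comps))
        return out

mcu_a = Component("mcu", "hardware", [Feature("clock", "48 MHz")])
mcu_b = Component("mcu", "hardware", [Feature("clock", "120 MHz")])
proto = EmbeddedSystem("sensor-node", [mcu_a, Component("driver", "software")])
print([v.components[0].features[0].value
       for v in proto.variants("mcu", [mcu_a, mcu_b])])   # -> ['48 MHz', '120 MHz']
```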

  16. The Value of SysML Modeling During System Operations: A Case Study

    NASA Technical Reports Server (NTRS)

    Dutenhoffer, Chelsea; Tirona, Joseph

    2013-01-01

    System models are often touted as engineering tools that promote better understanding of systems, but these models are typically created during system design. The Ground Data System (GDS) team for the Dawn spacecraft took on a case study to see if benefits could be achieved by starting a model of a system already in operations. This paper focuses on the four steps the team undertook in modeling the Dawn GDS: defining a model structure, populating model elements, verifying that the model represented reality, and using the model to answer system-level questions and simplify day-to-day tasks. Throughout this paper the team outlines our thought processes and the system insights the model provided.

  17. The value of SysML modeling during system operations: A case study

    NASA Astrophysics Data System (ADS)

    Dutenhoffer, C.; Tirona, J.

    System models are often touted as engineering tools that promote better understanding of systems, but these models are typically created during system design. The Ground Data System (GDS) team for the Dawn spacecraft took on a case study to see if benefits could be achieved by starting a model of a system already in operations. This paper focuses on the four steps the team undertook in modeling the Dawn GDS: defining a model structure, populating model elements, verifying that the model represented reality, and using the model to answer system-level questions and simplify day-to-day tasks. Throughout this paper the team outlines our thought processes and the system insights the model provided.

  18. A Model-Driven Visualization Tool for Use with Model-Based Systems Engineering Projects

    NASA Technical Reports Server (NTRS)

    Trase, Kathryn; Fink, Eric

    2014-01-01

Model-Based Systems Engineering (MBSE) promotes increased consistency between a system's design and its design documentation through the use of an object-oriented system model. The creation of this system model facilitates data presentation by providing a mechanism from which information can be extracted by automated manipulation of model content. Existing MBSE tools enable model creation, but are often too complex for the unfamiliar model viewer to easily use. These tools do not yet provide many opportunities for easing into the development and use of a system model when system design documentation already exists. This study creates a Systems Modeling Language (SysML) Document Traceability Framework (SDTF) for integrating design documentation with a system model, and develops an Interactive Visualization Engine for SysML Tools (InVEST) that exports consistent, clear, and concise views of SysML model data. These exported views are each meaningful to a variety of project stakeholders with differing subjects of concern and depth of technical involvement. InVEST allows a model user to generate multiple views and reports from an MBSE model, including wiki pages and interactive visualizations of data. System data can also be filtered to present only the information relevant to a particular stakeholder, resulting in a view that is consistent with both the larger system model and other model views. Viewing the relationships between system artifacts and documentation, and filtering through data to see specialized views, improves the value of the system as a whole, as data becomes information.

  19. Communication system modeling

    NASA Technical Reports Server (NTRS)

    Holland, L. D.; Walsh, J. R., Jr.; Wetherington, R. D.

    1971-01-01

    This report presents the results of work on communications systems modeling and covers three different areas of modeling. The first of these deals with the modeling of signals in communication systems in the frequency domain and the calculation of spectra for various modulations. These techniques are applied in determining the frequency spectra produced by a unified carrier system, the down-link portion of the Command and Communications System (CCS). The second modeling area covers the modeling of portions of a communication system on a block basis. A detailed analysis and modeling effort based on control theory is presented along with its application to modeling of the automatic frequency control system of an FM transmitter. A third topic discussed is a method for approximate modeling of stiff systems using state variable techniques.
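    The first modeling area in this record, computing frequency spectra for various modulations, can be illustrated with a toy amplitude-modulated carrier. The sample rate and tone frequencies below are invented, and this is not the CCS unified-carrier system itself; the point is that the sidebands land exactly at the carrier frequency plus and minus the modulating frequency.

```python
import numpy as np

# Frequency-domain view of an AM signal, in the spirit of the spectrum
# calculations described above; all parameters are illustrative.
fs, N = 8000.0, 8000              # sample rate [Hz], one second of samples
t = np.arange(N) / fs
fc, fm, m = 1000.0, 100.0, 0.5    # carrier, modulating tone, modulation index
s = (1 + m*np.cos(2*np.pi*fm*t)) * np.cos(2*np.pi*fc*t)   # AM signal

spec = np.abs(np.fft.rfft(s)) / N         # normalized magnitude spectrum
freqs = np.fft.rfftfreq(N, 1/fs)
peaks = freqs[spec > 0.05]                # carrier at fc, sidebands at fc +/- fm
print(peaks)                              # 900, 1000, and 1100 Hz
```

    An FM carrier run through the same FFT would instead show sidebands at every multiple of the modulating frequency, weighted by Bessel functions, which is why spectrum calculations for different modulations were treated separately in the report.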

  20. Adaptive System Modeling for Spacecraft Simulation

    NASA Technical Reports Server (NTRS)

    Thomas, Justin

    2011-01-01

This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how someone can create accurate system models from sensor (telemetry) data. The purpose of the software is to support the methodology: it provides analysis tools to design the adaptive models, as well as the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: it creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior; automatically updates/calibrates system models using the latest streaming sensor data; creates device-specific models that capture the exact behavior of devices of the same type; adapts to evolving systems; and can reduce computational complexity (faster simulations).
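    One standard way to realize "continuously update models from streaming sensor data" is recursive least squares with a forgetting factor, shown below on simulated telemetry from a drifting linear sensor. This is a generic illustration of streaming model calibration, not the ISS EPS software itself.

```python
import numpy as np

# Streaming model calibration sketch: recursive least squares (RLS) with a
# forgetting factor tracks a drifting linear model y = a*x + b.
# The telemetry stream here is simulated, not ISS data.
rng = np.random.default_rng(1)
theta = np.zeros(2)               # current model estimate [a, b]
P = np.eye(2) * 100.0             # parameter covariance (large = uncertain)
lam = 0.98                        # forgetting factor: favour recent samples

for k in range(2000):
    a_true, b_true = (2.0, 1.0) if k < 1000 else (3.0, -1.0)   # system evolves
    x = rng.uniform(0, 10)
    y = a_true*x + b_true + rng.normal(0, 0.1)                 # noisy sample
    phi = np.array([x, 1.0])                                   # regressor
    # Standard RLS update
    gain = P @ phi / (lam + phi @ P @ phi)
    theta = theta + gain * (y - phi @ theta)
    P = (P - np.outer(gain, phi @ P)) / lam

print(theta)   # tracks the post-change model, roughly [3, -1]
```

    The forgetting factor is what lets the model "adapt to evolving systems": with lam = 1 the estimate would average over the entire history and never catch up with the mid-stream change.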

  1. Moving alcohol prevention research forward-Part II: new directions grounded in community-based system dynamics modeling.

    PubMed

    Apostolopoulos, Yorghos; Lemke, Michael K; Barry, Adam E; Lich, Kristen Hassmiller

    2018-02-01

Given the complexity of factors contributing to alcohol misuse, appropriate epistemologies and methodologies are needed to understand and intervene meaningfully. We aimed to (1) provide an overview of computational modeling methodologies, with an emphasis on system dynamics modeling; (2) explain how community-based system dynamics modeling can forge new directions in alcohol prevention research; and (3) present a primer on how to build alcohol misuse simulation models using system dynamics modeling, with an emphasis on stakeholder involvement, data sources and model validation. Throughout, we use alcohol misuse among college students in the United States as a heuristic example for demonstrating these methodologies. System dynamics modeling employs a top-down aggregate approach to understanding dynamically complex problems. Its three foundational properties (stocks, flows, and feedbacks) capture non-linearity, time-delayed effects and other system characteristics. As a methodological choice, system dynamics modeling is amenable to participatory approaches; in particular, community-based system dynamics modeling has been used to build impactful models for addressing dynamically complex problems. The process of community-based system dynamics modeling consists of numerous stages: (1) creating model boundary charts, behavior-over-time graphs and preliminary system dynamics models using group model-building techniques; (2) model formulation; (3) model calibration; (4) model testing and validation; and (5) model simulation using learning-laboratory techniques. Community-based system dynamics modeling can provide powerful tools for policy and intervention decisions that can result ultimately in sustainable changes in research and action in alcohol misuse prevention. © 2017 Society for the Study of Addiction.
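    The stock-flow-feedback structure described in this record can be sketched with a one-stock model. Everything below is invented for illustration (a campus drinking scenario with a reinforcing social-exposure feedback); a real community-based model would be built and calibrated with stakeholders, as the abstract describes.

```python
# Minimal stock-and-flow sketch of system dynamics: one stock (heavy
# drinkers on a hypothetical campus), an inflow driven by social exposure
# (a reinforcing feedback), and an outflow (recovery).
N = 10_000.0          # student population
H = 500.0             # initial stock: heavy drinkers
beta = 0.8            # social-influence contact rate per year
r = 0.4               # recovery (outflow) rate per year
dt, years = 0.01, 20

for _ in range(int(years/dt)):
    inflow = beta * H * (N - H) / N    # feedback: more drinkers -> more initiation
    outflow = r * H
    H += dt * (inflow - outflow)       # Euler integration of the stock

print(H)              # approaches the equilibrium N*(1 - r/beta) = 5000
```

    The non-linearity the abstract mentions is visible here: the inflow depends on the stock itself, so the trajectory is S-shaped rather than exponential, and intervention levers (reducing beta, raising r) shift the equilibrium rather than just the rate.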

  2. A novel simulation theory and model system for multi-field coupling pipe-flow system

    NASA Astrophysics Data System (ADS)

    Chen, Yang; Jiang, Fan; Cai, Guobiao; Xu, Xu

    2017-09-01

Due to the lack of a theoretical basis for multi-field coupling in many system-level models, a novel set of system-level basic equations for flow/heat-transfer/combustion coupling is put forward. A finite volume model of the quasi-1D transient flow field for multi-species compressible variable-cross-section pipe flow is then established by discretising the basic equations on spatially staggered grids. Combining this with a 2D axisymmetric model for the pipe-wall temperature field and specific chemical reaction mechanisms, a finite volume model system is established; a set of calculation methods suitable for multi-field coupling system-level research is structured for the various parameters in this model; and specific modularised simulation models can be further derived in accordance with the specific structures of typical components in a liquid propulsion system. This novel system can also be used to derive two sub-systems: a flow/heat-transfer two-field coupling pipe-flow model system without chemical reaction and species diffusion, and a chemical-equilibrium thermodynamic-calculation-based multi-field coupling system. The applicability and accuracy of the two sub-systems have been verified through a series of dynamic modelling and simulations in earlier studies. The validity of the full system is verified in an air-hydrogen combustion sample system. The basic equations and the model system provide a unified universal theory and numerical system for the modelling, simulation, and even virtual testing of various pipeline systems.
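    The finite-volume discretisation at the heart of such pipe-flow models can be illustrated on the simplest conservation law. The example below is a toy: linear advection of a passive tracer through a constant-area duct with first-order upwind fluxes, not the record's multi-species compressible solver. Its defining property, exact conservation of the cell-integrated quantity, carries over to the full scheme.

```python
import numpy as np

# Toy quasi-1D finite-volume scheme: linear advection of a tracer pulse
# through a periodic duct at constant speed.  Geometry is illustrative.
nx, L, u = 200, 1.0, 1.0          # cells, duct length [m], flow speed [m/s]
dx = L / nx
dt = 0.5 * dx / u                 # CFL number 0.5
x = (np.arange(nx) + 0.5) * dx    # cell-centre coordinates
q = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)   # square pulse of tracer

total0 = q.sum() * dx             # initial tracer inventory
for _ in range(int(0.5 / dt)):    # advect for 0.5 s
    flux_in = u * np.roll(q, 1)   # upwind: flux into cell i comes from cell i-1
    q += dt/dx * (flux_in - u*q)  # conservative update: inflow minus outflow

print(q.sum()*dx - total0)        # conservation error (machine precision)
```

    Because each interface flux is added to one cell and subtracted from its neighbour, the total tracer is conserved to round-off; the pulse's centre of mass also moves at exactly u, even though first-order upwinding smears the profile.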

  3. Rule-based simulation models

    NASA Technical Reports Server (NTRS)

    Nieten, Joseph L.; Seraphine, Kathleen M.

    1991-01-01

Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling system languages are based on FORTRAN or some other procedural language; therefore, they must be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.
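    The reactive knowledge network described above can be sketched as a set of rules that re-fire whenever one of their inputs changes (forward chaining). The tiny model below (an Ohm's-law circuit) and all names are invented for illustration; a real network would be generated from model source by the compiler the record describes.

```python
# Sketch of a reactive rule-based calculation network: each rule recomputes
# one value from others, firing whenever an input it depends on changes.
class RuleNetwork:
    def __init__(self):
        self.values = {}
        self.rules = []           # (output name, input names, function)

    def add_rule(self, output, inputs, fn):
        self.rules.append((output, inputs, fn))

    def set(self, name, value):
        self.values[name] = value
        self._propagate({name})

    def _propagate(self, changed):
        # Forward chaining: fire rules until no value changes.
        while changed:
            next_changed = set()
            for out, ins, fn in self.rules:
                if changed & set(ins) and all(i in self.values for i in ins):
                    new = fn(*[self.values[i] for i in ins])
                    if self.values.get(out) != new:
                        self.values[out] = new
                        next_changed.add(out)
            changed = next_changed

net = RuleNetwork()
net.add_rule("current", ["voltage", "resistance"], lambda v, r: v / r)
net.add_rule("power",   ["voltage", "current"],   lambda v, i: v * i)
net.set("resistance", 10.0)
net.set("voltage", 5.0)          # triggers current, then power, reactively
print(net.values["current"], net.values["power"])   # -> 0.5 2.5
```

    This is the key contrast with a procedural model: no caller decides when to recompute `power`; the network reacts to any change in its inputs, which is the "reaction capability" the record says procedural languages lack.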

  4. Genome-resolved metaproteomic characterization of preterm infant gut microbiota development reveals species-specific metabolic shifts and variabilities during early life

    DOE PAGES

    Xiong, Weili; Brown, Christopher T.; Morowitz, Michael J.; ...

    2017-07-10

    Establishment of the human gut microbiota begins at birth. This early-life microbiota development can impact host physiology during infancy and even across an entire life span. However, the functional stability and population structure of the gut microbiota during initial colonization remain poorly understood. Metaproteomics is an emerging technology for the large-scale characterization of metabolic functions in complex microbial communities such as the gut microbiota. We applied a metagenome-informed metaproteomic approach to study the temporal and inter-individual differences of metabolic functions during microbial colonization of the preterm human infant gut. By analyzing 30 individual fecal samples, we identified up to 12,568 protein groups for each of four infants, including both human and microbial proteins. With genome-resolved matched metagenomics, proteins were confidently identified at the species/strain level. The maximum percentage of the proteome detected for the abundant organisms was ~45%. A time-dependent increase in the relative abundance of microbial versus human proteins suggested increasing microbial colonization during the first few weeks of early life. We observed remarkable variations and temporal shifts in the relative protein abundances of each organism in these preterm gut communities. Given the dissimilarity of the communities, only 81 microbial EggNOG orthologous groups and 57 human proteins were observed across all samples. These conserved microbial proteins were involved in carbohydrate, energy, amino acid, and nucleotide metabolism, while conserved human proteins were related to immune response and mucosal maturation. We also identified seven proteome clusters for the communities and showed that infant gut proteome profiles were unstable across time and not individual-specific. By applying a gut-specific metabolic module (GMM) analysis, we found that gut communities varied primarily in the contribution of nutrient (carbohydrate, lipid, and amino acid) utilization and short-chain fatty acid production. Overall, this study reports species-specific proteome profiles and metabolic functions of the human gut microbiota during early colonization. In particular, our work contributes to revealing microbiota-associated shifts and variations in the metabolism of three major nutrient sources and of short-chain fatty acids during colonization of the preterm infant gut.

  5. Genome-resolved metaproteomic characterization of preterm infant gut microbiota development reveals species-specific metabolic shifts and variabilities during early life

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiong, Weili; Brown, Christopher T.; Morowitz, Michael J.

    Establishment of the human gut microbiota begins at birth. This early-life microbiota development can impact host physiology during infancy and even across an entire life span. However, the functional stability and population structure of the gut microbiota during initial colonization remain poorly understood. Metaproteomics is an emerging technology for the large-scale characterization of metabolic functions in complex microbial communities such as the gut microbiota. We applied a metagenome-informed metaproteomic approach to study the temporal and inter-individual differences of metabolic functions during microbial colonization of the preterm human infant gut. By analyzing 30 individual fecal samples, we identified up to 12,568 protein groups for each of four infants, including both human and microbial proteins. With genome-resolved matched metagenomics, proteins were confidently identified at the species/strain level. The maximum percentage of the proteome detected for the abundant organisms was ~45%. A time-dependent increase in the relative abundance of microbial versus human proteins suggested increasing microbial colonization during the first few weeks of early life. We observed remarkable variations and temporal shifts in the relative protein abundances of each organism in these preterm gut communities. Given the dissimilarity of the communities, only 81 microbial EggNOG orthologous groups and 57 human proteins were observed across all samples. These conserved microbial proteins were involved in carbohydrate, energy, amino acid, and nucleotide metabolism, while conserved human proteins were related to immune response and mucosal maturation. We also identified seven proteome clusters for the communities and showed that infant gut proteome profiles were unstable across time and not individual-specific. By applying a gut-specific metabolic module (GMM) analysis, we found that gut communities varied primarily in the contribution of nutrient (carbohydrate, lipid, and amino acid) utilization and short-chain fatty acid production. Overall, this study reports species-specific proteome profiles and metabolic functions of the human gut microbiota during early colonization. In particular, our work contributes to revealing microbiota-associated shifts and variations in the metabolism of three major nutrient sources and of short-chain fatty acids during colonization of the preterm infant gut.

  6. Genome-resolved metaproteomic characterization of preterm infant gut microbiota development reveals species-specific metabolic shifts and variabilities during early life.

    PubMed

    Xiong, Weili; Brown, Christopher T; Morowitz, Michael J; Banfield, Jillian F; Hettich, Robert L

    2017-07-10

    Establishment of the human gut microbiota begins at birth. This early-life microbiota development can impact host physiology during infancy and even across an entire life span. However, the functional stability and population structure of the gut microbiota during initial colonization remain poorly understood. Metaproteomics is an emerging technology for the large-scale characterization of metabolic functions in complex microbial communities (gut microbiota). We applied a metagenome-informed metaproteomic approach to study the temporal and inter-individual differences of metabolic functions during microbial colonization of preterm human infants' gut. By analyzing 30 individual fecal samples, we identified up to 12,568 protein groups for each of four infants, including both human and microbial proteins. With genome-resolved matched metagenomics, proteins were confidently identified at the species/strain level. The maximum percentage of the proteome detected for the abundant organisms was ~45%. A time-dependent increase in the relative abundance of microbial versus human proteins suggested increasing microbial colonization during the first few weeks of early life. We observed remarkable variations and temporal shifts in the relative protein abundances of each organism in these preterm gut communities. Given the dissimilarity of the communities, only 81 microbial EggNOG orthologous groups and 57 human proteins were observed across all samples. These conserved microbial proteins were involved in carbohydrate, energy, amino acid and nucleotide metabolism while conserved human proteins were related to immune response and mucosal maturation. We identified seven proteome clusters for the communities and showed infant gut proteome profiles were unstable across time and not individual-specific. 
Applying a gut-specific metabolic module (GMM) analysis, we found that gut communities varied primarily in the contribution of nutrient (carbohydrate, lipid, and amino acid) utilization and short-chain fatty acid production. Overall, this study reports species-specific proteome profiles and metabolic functions of the human gut microbiota during early colonization. In particular, our work contributes to revealing microbiota-associated shifts and variations in the metabolism of three major nutrient sources and of short-chain fatty acids during colonization of the preterm infant gut.

  7. Curative chemotherapy for acute myeloid leukemia: the development of high-dose ara-C from the laboratory to bedside.

    PubMed

    Capizzi, R L

    1996-01-01

    In the bench-to-bedside development of drugs to treat patients with cancer, the common guide to dose and schedule selection is toxicity to normal organs, patterned after the preclinical profile of the drug. An understanding of the cellular pharmacology of the drug, and specifically the cellular targets linked to the drug's effect, is of substantial value in assisting the clinical investigator in selecting the proper dose and schedule of drug administration. The clinical development of ara-C for the treatment of acute myeloid leukemia (AML) provides a useful paradigm for the study of this process. An understanding of the cellular pharmacology, cytokinetics, and pharmacokinetics of ara-C in leukemic mice showed substantial schedule-dependency. Exposure to high doses for a short duration (C x t) resulted in a palliative therapeutic outcome. In marked contrast, exposure to lower doses for a protracted period (c x T) was curative. Clinical use of ara-C in patients with AML patterned after the murine c x T experience has been of limited benefit in terms of long-term disease-free survival. Studies with human leukemic blasts have shown that, for the majority of patients, the initial rate-limiting step is membrane transport, the characteristics of which are substantially affected by extracellular drug concentration (dose). This pharmacologic impediment is eliminated at the blood levels attained during the infusion of gram doses (1-3 g/m2) of the drug (high-dose ara-C, HiDaC) for shorter periods of time, a C x t approach. Clinical confirmation of these pharmacologic observations is evident in the therapeutic efficacy of HiDaC in patients with relapsed or SDaC-refractory acute leukemia. This is further emphasized by the significantly improved leukemia-free survival of patients with AML treated with HiDaC intensification during remission compared to patients treated with the milligram doses typical of SDaC protocols.
Thus, the identification and monitoring of important parameters of drug action in tumors during the course of a clinical trial can be of substantial assistance in optimizing drug dose and schedule so as to attain the best therapeutic index.

  8. Expression of HSP 70 and its mRNAS during ischemia-reperfusion in the rat bladder.

    PubMed

    Saito, Motoaki; Tominaga, Lika; Nanba, Eiji; Kinoshita, Yukako; Housi, Daisuke; Miyagawa, Ikuo; Satoh, Keisuke

    2004-08-27

    HSP 70 is an important protein that repairs damaged tissue after injury. In the present study, we investigated the expression of HSP 70 and its mRNAs during ischemia-reperfusion in the rat bladder. The rat abdominal aorta was clamped with a small clip to induce ischemia-reperfusion injury in the bladder dome. Male Wistar rats, 8 weeks old, were divided into six groups (A-F): controls; 30-min ischemia; and 30-min ischemia followed by 30 min, 60 min, 1 day, or 7 days of reperfusion, respectively. In functional studies, contractile responses to carbachol were measured in these groups. The expression of HSP 70-1/2 mRNAs in the bladders was quantified using a real-time PCR method, and that of HSP 70 proteins was measured using ELISA. In the functional study, Emax values for carbachol in groups A, B, C, D, E, and F were 9.3 +/- 1.3, 7.9 +/- 1.7, 4.3 +/- 0.8, 4.2 +/- 0.7, 4.5 +/- 0.6, and 8.1 +/- 1.2 g/mm2, respectively. In the control group, the expression of HSP 70-1/2 mRNA was detected, and the expression of HSP 70-1 mRNA was significantly higher than that of HSP 70-2 mRNA in each group. The expression of HSP 70-1 mRNA increased in groups B and C but decreased in groups D, E, and F. The expression of HSP 70-2 mRNA in group C was significantly higher than that in groups A, D, E, and F. The expression of HSP 70-1/2 mRNAs after 1 day or 1 week of reperfusion was similar to control levels. The expression of HSP 70 proteins increased shortly after the expression of their mRNAs, and after 1 day or 1 week of reperfusion was almost identical to control levels. Our data indicate that contractile responses of the bladder were decreased by ischemia-reperfusion, and that the expression of HSP 70 and its mRNAs increased for a short period after the insult.

  9. A Theory of Gravity and General Relativity based on Quantum Electromagnetism

    NASA Astrophysics Data System (ADS)

    Zheng-Johansson, J. X.

    2018-02-01

    Based on first-principles solutions in a unified framework of quantum mechanics and electromagnetism, we predict the presence of a universal attractive depolarisation radiation (DR) Lorentz force F between quantum entities, each being either an IED matter particle or a light quantum, in a polarisable dielectric vacuum. Given two quantum entities i = 1, 2 of either kind, of characteristic frequencies ν_i^0, masses m_i^0 = hν_i^0/c^2, and separated at a distance r^0, the solution for F is F = -𝒢 m_1^0 m_2^0/(r^0)^2, where 𝒢 = χ_0^2 e^4/(12π^2 ε_0^2 ρ_λ); χ_0 is the susceptibility and ρ_λ is the reduced linear mass density of the vacuum. This force F resembles Newton's gravity in all respects and is accurate in the weak-F limit; hence 𝒢 equals the gravitational constant G. The DR wave fields, and hence the gravity, propagate in the dielectric vacuum at the speed of light c and cannot be shielded by matter. A test particle µ of mass m^0 therefore interacts gravitationally with all of the building particles of a given large mass M at a distance r^0, by a total gravitational force F = -GMm^0/(r^0)^2 and the corresponding potential V (related by F = -∂V/∂r^0). For a finite V, and hence a total Hamiltonian H = m^0c^2 + V, the solution of the eigenvalue equation for µ presents a red shift in the eigenfrequency, ν = ν^0(1 - GM/r^0c^2), and hence in the other wave variables. The quantum solutions combined with the wave nature of the gravity further lead to a dilated gravito-optical distance r = r^0/(1 - GM/r^0c^2) and time t = t^0/(1 - GM/r^0c^2), and to modified forms of Newton's gravity and Einstein's mass-energy relation. Applications of these give predictions of the general-relativistic effects manifested in the four classical test experiments of Einstein's general relativity (GR), in direct agreement with the experiments and with predictions based on GR.

  10. What can formal methods offer to digital flight control systems design

    NASA Technical Reports Server (NTRS)

    Good, Donald I.

    1990-01-01

    Formal methods research is beginning to produce methods that will enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefit of the extensive mathematical modeling that is common in other parts of flight system engineering. Formal methods research shows that, by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.

  11. Component model reduction via the projection and assembly method

    NASA Technical Reports Server (NTRS)

    Bernard, Douglas E.

    1989-01-01

    The problem of acquiring a simple but sufficiently accurate model of a dynamic system is made more difficult when the system of interest is a multibody system comprised of several components. A low-order system model may be created by reducing the order of the component models and making use of the various available multibody dynamics programs to assemble them into a system model. The difficulty lies in choosing reduced-order component models that meet system-level requirements. The projection and assembly method, proposed originally by Eke, solves this difficulty by forming the full-order system model, performing model reduction at the system level using system-level requirements, and then projecting the desired modes onto the components for component-level model reduction. The projection and assembly method is analyzed to show the conditions under which the desired modes are captured exactly, to the numerical precision of the algorithm.
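    The system-level reduction step described — keep the lowest-frequency modes of the assembled model and project onto them — can be illustrated with generic modal truncation of a spring-mass chain. This is a textbook sketch with assumed matrices, not Eke's projection-and-assembly algorithm:

    ```python
    import numpy as np

    def modal_truncation(K, M, n_keep):
        """Retain the n_keep lowest-frequency modes of K x = w^2 M x
        and project the system matrices onto them."""
        L = np.linalg.cholesky(M)
        Linv = np.linalg.inv(L)
        A = Linv @ K @ Linv.T            # symmetric standard eigenproblem
        w2, V = np.linalg.eigh(A)        # eigenvalues in ascending order
        Phi = Linv.T @ V[:, :n_keep]     # retained mode shapes
        Kr = Phi.T @ K @ Phi             # reduced stiffness (diagonal)
        Mr = Phi.T @ M @ Phi             # reduced mass (identity)
        return Phi, Kr, Mr

    # Three-mass spring chain with unit masses and unit springs.
    K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
    M = np.eye(3)
    Phi, Kr, Mr = modal_truncation(K, M, n_keep=2)
    ```

    Keeping two of the three modes yields a 2x2 reduced model whose frequencies match the two lowest frequencies of the full system exactly, which mirrors the "captured exactly" condition analyzed in the paper.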

  12. Using iterative cluster merging with improved gap statistics to perform online phenotype discovery in the context of high-throughput RNAi screens

    PubMed Central

    Yin, Zheng; Zhou, Xiaobo; Bakal, Chris; Li, Fuhai; Sun, Youxian; Perrimon, Norbert; Wong, Stephen TC

    2008-01-01

    Background The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods which aim to identify novel and biological relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks, i.e. restoring pre-defined biological meaningful phenotypes, differentiating novel phenotypes from known ones and clarifying novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian Mixture Model (GMM) is employed to estimate the distribution of each existing phenotype, and then used as reference distribution in gap statistics. 
This method is broadly applicable to a number of different types of image-based datasets derived from a wide spectrum of experimental conditions and is suitable for adaptively processing new images that are continuously added to existing datasets. Validations were carried out on different datasets, including a published RNAi screen using Drosophila embryos [Additional files 1, 2], a dataset for cell cycle phase identification using HeLa cells [Additional files 1, 3, 4], and a synthetic dataset using polygons; our method tackled the three aforementioned tasks effectively with an accuracy range of 85%–90%. When our method is implemented in the context of a Drosophila genome-scale RNAi image-based screen of cultured cells aimed at identifying the contribution of individual genes to the regulation of cell shape, it efficiently discovers meaningful new phenotypes and provides novel biological insight. We also propose a two-step procedure to modify the novelty detection method based on one-class SVM so that it can be used for online phenotype discovery. Across different conditions and various datasets, we compared the SVM-based method with our method, and our method consistently outperformed the SVM-based method in at least two of the three tasks by 2% to 5%. These results demonstrate that our method can better identify novel phenotypes in image-based datasets from a wide range of conditions and organisms. Conclusion We demonstrate that our method can detect various novel phenotypes effectively in complex datasets. Experimental results also validate that our method performs consistently under different orders of image input, variations in starting conditions (including the number and composition of existing phenotypes), and datasets from different screens. Given these findings, the proposed method is suitable for online phenotype discovery in diverse high-throughput image-based genetic and chemical screens. PMID:18534020
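    The phenotype-modelling step fits a GMM to each existing phenotype and uses it as the reference distribution in the gap statistic. The EM fit at the core of that step can be sketched for a two-component one-dimensional mixture; the synthetic data and initialisation here are assumptions, and the paper itself works on multivariate image features:

    ```python
    import numpy as np

    def fit_gmm_1d(x, n_iter=200):
        """EM for a two-component 1D Gaussian mixture model."""
        mu = np.array([x.min(), x.max()])   # deterministic, well-separated init
        var = np.full(2, x.var())
        w = np.full(2, 0.5)
        for _ in range(n_iter):
            # E-step: posterior responsibility of each component for each point
            dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            r = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means, and variances
            n_k = r.sum(axis=0)
            w = n_k / len(x)
            mu = (r * x[:, None]).sum(axis=0) / n_k
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
        return w, mu, var

    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(-3.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
    w, mu, var = fit_gmm_1d(x)
    ```

    Once fitted, samples drawn from the mixture serve as the reference distribution when computing the gap statistic for a candidate clustering, as the method above describes.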

  13. The System of Systems Architecture Feasibility Assessment Model

    DTIC Science & Technology

    2016-06-01

    The System of Systems Architecture Feasibility Assessment Model, by Stephen E. Gillespie, June 2016; dissertation supervisor: Eugene Paulo. The dissertation develops the SoS architecture feasibility assessment model (SoS-AFAM), extending current model-based systems engineering (MBSE) and SoS engineering methods.

  14. Using the Model Coupling Toolkit to couple earth system models

    USGS Publications Warehouse

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models.
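    The coupling pattern described — two models stepping independently and exchanging fields at predetermined synchronization intervals — can be mocked up without MPI or the MCT library. The relaxation dynamics, initial states, and interval length below are invented for illustration; a real MCT coupler moves distributed arrays between processor pools:

    ```python
    class ToyComponent:
        """Stand-in for a coupled model component (illustrative, not the MCT API)."""
        def __init__(self, state):
            self.state = state      # the field this model evolves
            self.forcing = 0.0      # the field last received from its partner

        def step(self, dt):
            # relax toward the partner's last exchanged value
            self.state += dt * (self.forcing - self.state)

    def run_coupled(m1, m2, dt, n_steps, sync_every):
        """Advance both components, exchanging fields every sync_every steps."""
        for n in range(n_steps):
            m1.step(dt)
            m2.step(dt)
            if (n + 1) % sync_every == 0:            # synchronization interval
                m1.forcing, m2.forcing = m2.state, m1.state

    ocean, atmos = ToyComponent(0.0), ToyComponent(10.0)
    run_coupled(ocean, atmos, dt=0.1, n_steps=200, sync_every=5)
    ```

    Because each component only sees its partner's state at sync points, the exchange frequency becomes a tunable trade-off between coupling fidelity and communication cost, which is the design question the MCT-based systems address.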

  15. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability and in-situ data analytics for Earth system model simulation, as well as opportunities to adjust model behavior through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  16. Virtual Observation System for Earth System Model: An Application to ACME Land Model Simulations

    DOE PAGES

    Wang, Dali; Yuan, Fengming; Hernandez, Benjamin; ...

    2017-01-01

    Investigating and evaluating physical-chemical-biological processes within an Earth system model (ESM) can be very challenging due to the complexity of both model design and software implementation. A virtual observation system (VOS) is presented to enable interactive observation of these processes during system simulation. Based on advanced computing technologies, such as compiler-based software analysis, automatic code instrumentation, and high-performance data transport, the VOS provides run-time observation capability and in-situ data analytics for Earth system model simulation, as well as opportunities to adjust model behavior through simulation steering. A VOS for a terrestrial land model simulation within the Accelerated Climate Modeling for Energy model is also presented to demonstrate the implementation details and system innovations.

  17. Microphysics in Multi-scale Modeling System with Unified Physics

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA Unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land-surface interactive processes are applied across this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator so that NASA high-resolution satellite data can be used to identify the strengths and weaknesses of the cloud and precipitation processes simulated by the model. In this talk, a review of the development and applications of the multi-scale modeling system will be presented. In particular, the development of the microphysics and its performance within the multi-scale modeling system will be presented.

  18. Model-Based Prognostics of Hybrid Systems

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew; Roychoudhury, Indranil; Bregon, Anibal

    2015-01-01

    Model-based prognostics has become a popular approach to solving the prognostics problem. However, almost all work has focused on prognostics of systems with continuous dynamics. In this paper, we extend the model-based prognostics framework to hybrid systems models that combine both continuous and discrete dynamics. In general, most systems are hybrid in nature, including those that combine physical processes with software. We generalize the model-based prognostics formulation to hybrid systems, and describe the challenges involved. We present a general approach for modeling hybrid systems, and overview methods for solving estimation and prediction in hybrid systems. As a case study, we consider the problem of conflict (i.e., loss of separation) prediction in the National Airspace System, in which the aircraft models are hybrid dynamical systems.
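    A hybrid system in the sense used here pairs continuous dynamics with discrete mode switches. A minimal thermostat automaton (with made-up rates and thresholds) shows the structure that hybrid estimation and prediction must track:

    ```python
    def simulate_thermostat(T0, t_end, dt=0.01):
        """Hybrid automaton sketch: continuous temperature dynamics whose
        rate depends on a discrete mode, plus guard-triggered mode switches."""
        T, mode, t = T0, "off", 0.0
        trace = []
        while t < t_end:
            # continuous flow: heating or cooling depending on the mode
            T += (2.0 if mode == "on" else -1.0) * dt
            # discrete transitions fire when guard conditions are met
            if mode == "off" and T <= 18.0:
                mode = "on"
            elif mode == "on" and T >= 22.0:
                mode = "off"
            trace.append((t, mode, T))
            t += dt
        return trace

    trace = simulate_thermostat(T0=20.0, t_end=10.0)
    ```

    A prognoser for such a system must estimate both the continuous state (T) and the discrete mode, and predict across future mode switches; the same structure appears at much larger scale in the aircraft conflict-prediction case study.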

  19. World Energy Projection System Plus Model Documentation: Commercial Module

    EIA Publications

    2016-01-01

    The Commercial Model of the World Energy Projection System Plus (WEPS+) is an energy demand modeling system for the world commercial end-use sector at a regional level. This report describes the version of the Commercial Model that was used to produce the commercial sector projections published in the International Energy Outlook 2016 (IEO2016). The Commercial Model is one of 13 components of the WEPS+ system. WEPS+ is a modular system consisting of a number of separate energy models that communicate and work with each other through an integrated system model. The model components are each developed independently but are designed with well-defined protocols for system communication and interactivity. The WEPS+ modeling system uses a shared database (the "restart" file) that allows all the models to communicate with each other when they are run in sequence over a number of iterations. The overall WEPS+ system uses an iterative solution technique that forces convergence of consumption and supply pressures to solve for an equilibrium price.
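    The iterative equilibrium search described — adjust price until consumption and supply converge — can be sketched as a simple fixed-point (tatonnement-style) iteration. The linear demand and supply curves and the step size are illustrative assumptions, not the WEPS+ algorithm:

    ```python
    def equilibrium_price(demand, supply, p0=1.0, step=0.1, tol=1e-8, max_iter=10000):
        """Adjust price toward the level where demand equals supply
        (fixed-point iteration on excess demand; illustrative only)."""
        p = p0
        for _ in range(max_iter):
            excess = demand(p) - supply(p)
            if abs(excess) < tol:
                break
            p += step * excess  # raise price when demand exceeds supply
        return p

    # Toy linear curves: demand falls and supply rises with price.
    # demand = supply  =>  10 - 2p = 1 + p  =>  p = 3
    p_star = equilibrium_price(lambda p: 10.0 - 2.0 * p, lambda p: 1.0 + p)
    ```

    The shared "restart" file plays the role of the price variable here: each component reads the current values, responds, and the system iterates until the responses stop changing.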

  20. [Model-based biofuels system analysis: a review].

    PubMed

    Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin

    2011-03-01

    Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we review various models developed for or applied to modeling biofuels, and present a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focus on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis is a prerequisite for future biofuels system modeling and represents a valuable resource for researchers and policy makers.

  1. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    DOE PAGES

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; ...

    2016-08-22

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.

  2. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open-source terms or to credentialed users. Furthermore, the ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the United States. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. Our shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multiagency development of coupled modeling systems; controlled experimentation and testing; and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NAVGEM), the Hybrid Coordinate Ocean Model (HYCOM), and the Coupled Ocean–Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and the Goddard Earth Observing System Model, version 5 (GEOS-5), atmospheric general circulation model.

  3. Computer-aided operations engineering with integrated models of systems and operations

    NASA Technical Reports Server (NTRS)

    Malin, Jane T.; Ryan, Dan; Fleming, Land

    1994-01-01

    CONFIG 3 is a prototype software tool that supports integrated conceptual design evaluation from early in the product life cycle, by supporting isolated or integrated modeling, simulation, and analysis of the function, structure, behavior, failures and operation of system designs. Integration and reuse of models is supported in an object-oriented environment providing capabilities for graph analysis and discrete event simulation. Integration is supported among diverse modeling approaches (component view, configuration or flow path view, and procedure view) and diverse simulation and analysis approaches. Support is provided for integrated engineering in diverse design domains, including mechanical and electro-mechanical systems, distributed computer systems, and chemical processing and transport systems. CONFIG supports abstracted qualitative and symbolic modeling, for early conceptual design. System models are component structure models with operating modes, with embedded time-related behavior models. CONFIG supports failure modeling and modeling of state or configuration changes that result in dynamic changes in dependencies among components. Operations and procedure models are activity structure models that interact with system models. CONFIG is designed to support evaluation of system operability, diagnosability and fault tolerance, and analysis of the development of system effects of problems over time, including faults, failures, and procedural or environmental difficulties.
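    CONFIG itself is a large prototype, but the core pattern described above, discrete event simulation over components whose failures dynamically change the behavior of other components, can be sketched in a few lines. The pump/valve components and event handlers below are hypothetical illustrations, not part of the actual CONFIG tool:

```python
import heapq

def simulate(events, handlers, t_end):
    """Minimal discrete event loop: pop the earliest event, let its
    handler change component state and schedule follow-up events."""
    queue = list(events)          # entries are (time, component, event_name)
    heapq.heapify(queue)
    state = {}
    while queue:
        t, comp, name = heapq.heappop(queue)
        if t > t_end:
            break
        for new_event in handlers[name](t, comp, state):
            heapq.heappush(queue, new_event)
    return state

# Hypothetical two-component example: a pump failure forces a valve
# into a safe configuration (a dynamic change in dependencies).
def on_fail(t, comp, state):
    state[comp] = "failed"
    return [(t + 1.0, "valve", "reconfigure")]

def on_reconfigure(t, comp, state):
    state[comp] = "safe-mode"
    return []

final = simulate([(5.0, "pump", "fail")],
                 {"fail": on_fail, "reconfigure": on_reconfigure},
                 t_end=10.0)
```

    The event handlers return newly scheduled events, which is how a failure at one component propagates into configuration changes elsewhere in the modeled system.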

  4. Research on complex 3D tree modeling based on L-system

    NASA Astrophysics Data System (ADS)

    Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li

    2018-03-01

    An L-system, as a fractal iterative system, can simulate complex geometric patterns. Based on field observation data of trees and the knowledge of forestry experts, this paper extracted modeling constraint rules and obtained an L-system rule set. Using self-developed L-system modeling software, the rule set was parsed to generate complex 3D tree models. The results showed that the geometric modeling method based on L-systems can describe the morphological structure of complex trees and generate 3D tree models.
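    The parallel rewriting at the heart of any L-system can be sketched directly. The branching rule below is the classic textbook example, not the forestry rule set extracted in the paper:

```python
def expand(axiom, rules, iterations):
    """Apply L-system production rules in parallel to every symbol;
    symbols without a rule are copied through unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Classic branching grammar: F = grow a segment, [ and ] = push/pop a
# branch, + and - = turn. A turtle-graphics pass over the expanded
# string then produces the 3D geometry.
rules = {"F": "F[+F]F[-F]F"}
print(expand("F", rules, 1))  # F[+F]F[-F]F
```

    Each iteration replaces every `F` simultaneously, so the string (and the resulting tree model) grows geometrically with the iteration count.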

  5. Cyber Physical System Modelling of Distribution Power Systems for Dynamic Demand Response

    NASA Astrophysics Data System (ADS)

    Chu, Xiaodong; Zhang, Rongxiang; Tang, Maosen; Huang, Haoyi; Zhang, Lei

    2018-01-01

    Dynamic demand response (DDR) is a package of control methods to enhance power system security. A CPS modelling and simulation platform for DDR in distribution power systems is presented in this paper. CPS modelling requirements of distribution power systems are analyzed. A coupled CPS modelling platform is built for assessing DDR in the distribution power system, which seamlessly combines modelling tools for physical power networks and cyber communication networks. Simulation results for the IEEE 13-node test system demonstrate the effectiveness of the modelling and simulation platform.

  6. An Integrated High Resolution Hydrometeorological Modeling Testbed using LIS and WRF

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.; Peters-Lidard, Christa D.; Eastman, Joseph L.; Tao, Wei-Kuo

    2007-01-01

    Scientists have made great strides in modeling physical processes that represent various weather and climate phenomena. Many modeling systems that represent the major earth system components (the atmosphere, land surface, and ocean) have been developed over the years. However, developing advanced Earth system applications that integrate these independently developed modeling systems has remained a daunting task due to limitations in computer hardware and software. Recently, efforts such as the Earth System Modeling Framework (ESMF) and Assistance for Land Modeling Activities (ALMA) have focused on developing standards, guidelines, and computational support for coupling earth system model components. In this article, the development of a coupled land-atmosphere hydrometeorological modeling system that adopts these community interoperability standards is described. The land component is represented by the Land Information System (LIS), developed by scientists at the NASA Goddard Space Flight Center. The Weather Research and Forecasting (WRF) model, a mesoscale numerical weather prediction system, is used as the atmospheric component. LIS includes several community land surface models that can be executed at spatial scales as fine as 1 km. The data management capabilities in LIS enable the direct use of high resolution satellite and observation data for modeling. Similarly, WRF includes several parameterizations and schemes for modeling radiation, microphysics, PBL and other processes. Thus the integrated LIS-WRF system facilitates several multi-model studies of land-atmosphere coupling that can be used to advance earth system studies.

  7. About Regional Energy Deployment System Model-ReEDS | Regional Energy

    Science.gov Websites

    The Regional Energy Deployment System (ReEDS) is a long-term, capacity-expansion model for the deployment of electric power generation technologies.

  8. Propulsion System Dynamic Modeling for the NASA Supersonic Concept Vehicle: AeroPropulsoServoElasticity

    NASA Technical Reports Server (NTRS)

    Kopasakis, George; Connolly, Joseph; Seidel, Jonathan

    2014-01-01

    A summary of the propulsion system modeling under NASA's High Speed Project (HSP) AeroPropulsoServoElasticity (APSE) task is provided with a focus on the propulsion system for the low-boom supersonic configuration developed by Lockheed Martin and referred to as the N+2 configuration. This summary includes details on the effort to date to develop computational models for the various propulsion system components. The objective of this paper is to summarize the model development effort in this task, while providing more detail in the modeling areas that have not been previously published. The purpose of the propulsion system modeling and the overall APSE effort is to develop an integrated dynamic vehicle model to conduct appropriate unsteady analysis of supersonic vehicle performance. This integrated APSE system model concept includes the propulsion system model and the vehicle structural-aerodynamics model. The development to date of such a preliminary integrated model, spanning the propulsion system dynamics, the structural dynamics, and the aerodynamics, will also be summarized in this report.

  9. Analysis about modeling MEC7000 excitation system of nuclear power unit

    NASA Astrophysics Data System (ADS)

    Liu, Guangshi; Sun, Zhiyuan; Dou, Qian; Liu, Mosi; Zhang, Yihui; Wang, Xiaoming

    2018-02-01

    Given the importance of accurately modeling the excitation system in stability calculations for inland nuclear power plants, and the lack of research on modeling the MEC7000 excitation system, this paper summarizes a general method for modeling and simulating the MEC7000 excitation system. The method also solves the key issues of computing the I/O interface parameters and converting the measured excitation system model to a BPA simulation model. With it, simulation modeling of the MEC7000 excitation system was completed for the first time domestically. No-load small-disturbance checks demonstrate that the proposed model and algorithm are correct and efficient.

  10. System Dynamics Approach for Critical Infrastructure and Decision Support. A Model for a Potable Water System.

    NASA Astrophysics Data System (ADS)

    Pasqualini, D.; Witkowski, M.

    2005-12-01

    The Critical Infrastructure Protection / Decision Support System (CIP/DSS) project, supported by the Science and Technology Office, has been developing a risk-informed Decision Support System that provides insights for making critical infrastructure protection decisions. The system considers seventeen different Department of Homeland Security defined Critical Infrastructures (potable water system, telecommunications, public health, economics, etc.) and their primary interdependencies. These infrastructures have been modeled in a single model, the CIP/DSS Metropolitan Model. The modeling approach used is system dynamics. System dynamics modeling combines control theory and nonlinear dynamics theory; a model is defined by a set of coupled differential equations that seek to explain how the structure of a given system determines its behavior. In this poster we present a system dynamics model for one of the seventeen critical infrastructures, a generic metropolitan potable water system (MPWS). The goals are threefold: 1) to gain a better understanding of the MPWS infrastructure; 2) to identify improvements that would help protect MPWS; and 3) to understand the consequences, interdependencies, and impacts when perturbations occur to the system. The model represents raw water sources, the metropolitan water treatment process, storage of treated water, damage and repair to the MPWS, distribution of water, and end user demand, but does not explicitly represent the detailed network topology of an actual MPWS. The MPWS model is dependent upon inputs from the metropolitan population, energy, telecommunication, public health, and transportation models as well as the national water and transportation models. We present modeling results and sensitivity analysis indicating critical choke points and negative and positive feedback loops in the system. A general scenario is also analyzed in which the potable water system responds to a generic disruption.
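    The stock-and-flow structure with feedback that such system dynamics models are built from can be sketched as a one-stock Euler integration. All names and numbers below are hypothetical illustrations, not values from the CIP/DSS MPWS model:

```python
def simulate_reservoir(steps, dt, inflow, demand, capacity, stock0):
    """Euler-integrate a single stock (stored treated water) with an
    inflow (treatment output) and an outflow (end-user demand).  A
    negative feedback loop throttles delivered water as the reservoir
    drops below 20% of capacity."""
    stock = stock0
    history = []
    for _ in range(steps):
        served = demand * min(1.0, stock / (0.2 * capacity))  # feedback
        stock += dt * (inflow - served)
        stock = min(max(stock, 0.0), capacity)  # physical limits
        history.append(stock)
    return history

# Demand (10 units/step) exceeds treatment inflow (8 units/step), so the
# stored volume drains until the feedback loop begins rationing.
levels = simulate_reservoir(steps=100, dt=1.0, inflow=8.0,
                            demand=10.0, capacity=1000.0, stock0=500.0)
```

    A full model like the MPWS couples many such stocks (source, treatment, storage, repair capacity) through shared flows, which is where the choke points and feedback loops mentioned above emerge.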

  11. THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability

    PubMed Central

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; Wallcraft, A.; Iredell, M.; Black, T.; da Silva, AM; Clune, T.; Ferraro, R.; Li, P.; Kelley, M.; Aleinov, I.; Balaji, V.; Zadeh, N.; Jacob, R.; Kirtman, B.; Giraldo, F.; McCarren, D.; Sandgathe, S.; Peckham, S.; Dunlap, R.

    2017-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model. PMID:29568125

  12. THE EARTH SYSTEM PREDICTION SUITE: Toward a Coordinated U.S. Modeling Capability.

    PubMed

    Theurich, Gerhard; DeLuca, C; Campbell, T; Liu, F; Saint, K; Vertenstein, M; Chen, J; Oehmke, R; Doyle, J; Whitcomb, T; Wallcraft, A; Iredell, M; Black, T; da Silva, A M; Clune, T; Ferraro, R; Li, P; Kelley, M; Aleinov, I; Balaji, V; Zadeh, N; Jacob, R; Kirtman, B; Giraldo, F; McCarren, D; Sandgathe, S; Peckham, S; Dunlap, R

    2016-07-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS®); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  13. The Earth System Prediction Suite: Toward a Coordinated U.S. Modeling Capability

    NASA Technical Reports Server (NTRS)

    Theurich, Gerhard; DeLuca, C.; Campbell, T.; Liu, F.; Saint, K.; Vertenstein, M.; Chen, J.; Oehmke, R.; Doyle, J.; Whitcomb, T.; hide

    2016-01-01

    The Earth System Prediction Suite (ESPS) is a collection of flagship U.S. weather and climate models and model components that are being instrumented to conform to interoperability conventions, documented to follow metadata standards, and made available either under open source terms or to credentialed users. The ESPS represents a culmination of efforts to create a common Earth system model architecture, and the advent of increasingly coordinated model development activities in the U.S. ESPS component interfaces are based on the Earth System Modeling Framework (ESMF), community-developed software for building and coupling models, and the National Unified Operational Prediction Capability (NUOPC) Layer, a set of ESMF-based component templates and interoperability conventions. This shared infrastructure simplifies the process of model coupling by guaranteeing that components conform to a set of technical and semantic behaviors. The ESPS encourages distributed, multi-agency development of coupled modeling systems, controlled experimentation and testing, and exploration of novel model configurations, such as those motivated by research involving managed and interactive ensembles. ESPS codes include the Navy Global Environmental Model (NavGEM), HYbrid Coordinate Ocean Model (HYCOM), and Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS); the NOAA Environmental Modeling System (NEMS) and the Modular Ocean Model (MOM); the Community Earth System Model (CESM); and the NASA ModelE climate model and GEOS-5 atmospheric general circulation model.

  14. Watershed System Model: The Essentials to Model Complex Human-Nature System at the River Basin Scale

    NASA Astrophysics Data System (ADS)

    Li, Xin; Cheng, Guodong; Lin, Hui; Cai, Ximing; Fang, Miao; Ge, Yingchun; Hu, Xiaoli; Chen, Min; Li, Weiyue

    2018-03-01

    Watershed system models are urgently needed to understand complex watershed systems and to support integrated river basin management. Early watershed modeling efforts focused on the representation of hydrologic processes, while the next-generation watershed models should represent the coevolution of the water-land-air-plant-human nexus in a watershed and provide capability of decision-making support. We propose a new modeling framework and discuss the know-how approach to incorporate emerging knowledge into integrated models through data exchange interfaces. We argue that the modeling environment is a useful tool to enable effective model integration, as well as create domain-specific models of river basin systems. The grand challenges in developing next-generation watershed system models include but are not limited to providing an overarching framework for linking natural and social sciences, building a scientifically based decision support system, quantifying and controlling uncertainties, and taking advantage of new technologies and new findings in the various disciplines of watershed science. The eventual goal is to build transdisciplinary, scientifically sound, and scale-explicit watershed system models that are to be codesigned by multidisciplinary communities.

  15. System Operations Studies : Feeder System Model. User's Manual.

    DOT National Transportation Integrated Search

    1982-11-01

    The Feeder System Model (FSM) is one of the analytic models included in the System Operations Studies (SOS) software package developed for urban transit systems analysis. The objective of the model is to assign a proportion of the zone-to-zone travel...

  16. A model for plant lighting system selection.

    PubMed

    Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W

    2002-01-01

    A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
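    The additive form of the Multiple Attribute Utility Theory calculation the model performs can be sketched as follows. The attributes, weights, and candidate systems below are invented for illustration, not the paper's expert-elicited values:

```python
def maut_score(performance, weights, utilities):
    """Additive MAUT: map each attribute's raw performance to a [0, 1]
    utility, then take the weighted sum across attributes."""
    return sum(weights[a] * utilities[a](performance[a]) for a in weights)

# Hypothetical lighting-system attributes: energy use (lower is better)
# and light uniformity (higher is better), each scaled to [0, 1].
weights = {"energy": 0.6, "uniformity": 0.4}
utilities = {"energy": lambda kwh: max(0.0, 1.0 - kwh / 100.0),
             "uniformity": lambda u: u}
systems = {"HPS": {"energy": 80.0, "uniformity": 0.9},
           "LED": {"energy": 40.0, "uniformity": 0.8}}

# The system with the highest utility is deemed the most appropriate.
best = max(systems, key=lambda s: maut_score(systems[s], weights, utilities))
```

    In the paper's model the weights come from expert elicitation and the attribute performances from simulation; the selection step is this same argmax over total utility.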

  17. Rethinking the Systems Engineering Process in Light of Design Thinking

    DTIC Science & Technology

    2016-04-30

    Waterfall systems engineering process models (Blanchard & Fabrycky, 1990) underpin the majority of engineering design education (Dym et al., 2005). The waterfall model … Engineering Career Competency Model. Clifford Whitcomb, Systems Engineering Professor, NPS; Corina White, Systems Engineering Research Associate, NPS … Naval Postgraduate School (NPS) in Monterey, CA. He teaches and conducts research in the design of enterprise systems, systems modeling, and system …

  18. Modeling in the Classroom: An Evolving Learning Tool

    NASA Astrophysics Data System (ADS)

    Few, A. A.; Marlino, M. R.; Low, R.

    2006-12-01

    Among the early programs (early 1990s) focused on teaching Earth System Science were the Global Change Instruction Program (GCIP) funded by NSF through UCAR and the Earth System Science Education Program (ESSE) funded by NASA through USRA. These two programs introduced modeling as a learning tool from the beginning, and they provided workshops, demonstrations and lectures for their participating universities. These programs were aimed at university-level education. Recently, classroom modeling is experiencing a revival of interest. Drs John Snow and Arthur Few conducted two workshops on modeling at the ESSE21 meeting in Fairbanks, Alaska, in August 2005. The Digital Library for Earth System Education (DLESE) at http://www.dlese.org provides web access to STELLA models and tutorials, and UCAR's Education and Outreach (EO) program holds workshops that include training in modeling. An important innovation to the STELLA modeling software by isee systems, http://www.iseesystems.com, called "isee Player" is available as a free download. The Player allows users to view and run STELLA models, change model parameters, share models with colleagues and students, and make working models available on the web. This is important because the expert can create models, and the user can learn how the modeled system works. Another aspect of this innovation is that the educational benefits of modeling concepts can be extended throughout most of the curriculum. The procedure for building a working computer model of an Earth Science System follows this general format: (1) carefully define the question(s) for which you seek the answer(s); (2) identify the interacting system components and inputs contributing to the system's behavior; (3) collect the information and data that will be required to complete the conceptual model; (4) construct a system diagram (graphic) of the system that displays all of system's central questions, components, relationships and required inputs. 
At this stage in the process the conceptual model of the system is complete and a clear understanding of how the system works has been achieved. When appropriate software is available, advanced classes can proceed to (5) creating a computer model of the system and testing the conceptual model. Classes lacking these advanced capabilities may view and run models using the free isee Player and shared working models. In any event there is understanding to be gained in every step of the procedure outlined above. You can view some examples at http://www.ruf.rice.edu/~few/. We plan to populate this site with samples of Earth science systems for use in Earth system science education.

  19. ASTP ranging system mathematical model

    NASA Technical Reports Server (NTRS)

    Ellis, M. R.; Robinson, L. H.

    1973-01-01

    A mathematical model of the VHF ranging system is presented to analyze its performance for the Apollo-Soyuz Test Project (ASTP). The system was adapted for use in the ASTP. The ranging system mathematical model is presented in block diagram form, and a brief description of the overall model is also included. A procedure for implementing the math model is presented, along with a discussion of the validation of the math model and the overall summary and conclusions of the study effort. Detailed appendices of the five study tasks are presented: early/late gate model development, unlock probability development, system error model development, probability of acquisition model development, and math model validation testing.

  20. Strategic preparedness for recovery from catastrophic risks to communities and infrastructure systems of systems.

    PubMed

    Haimes, Yacov Y

    2012-11-01

    Natural and human-induced disasters affect organizations in myriad ways because of the inherent interconnectedness and interdependencies among human, cyber, and physical infrastructures, but more importantly, because organizations depend on the effectiveness of people and on the leadership they provide to the organizations they serve and represent. These human-organizational-cyber-physical infrastructure entities are termed systems of systems. Given the multiple perspectives that characterize them, they cannot be modeled effectively with a single model. The focus of this article is: (i) the centrality of the states of a system in modeling; (ii) the efficacious role of shared states in modeling systems of systems, in identification, and in the meta-modeling of systems of systems; and (iii) the contributions of the above to strategic preparedness, response to, and recovery from catastrophic risk to such systems. Strategic preparedness connotes a decision-making process and its associated actions. These must be: implemented in advance of a natural or human-induced disaster, aimed at reducing consequences (e.g., recovery time, community suffering, and cost), and/or controlling their likelihood to a level considered acceptable (through the decision makers' implicit and explicit acceptance of various risks and tradeoffs). The inoperability input-output model (IIM), which is grounded in Leontief's input-output model, has enabled the modeling of interdependent subsystems. Two separate modeling structures are introduced. These are: phantom system models (PSM), where shared states constitute the essence of modeling coupled systems; and the IIM, where interdependencies among sectors of the economy are manifested by the Leontief matrix of technological coefficients. This article demonstrates the potential contributions of these two models to each other, and thus to a more informative modeling schema for systems of systems. 
The contributions of shared states to this modeling and to systems identification are presented with case studies. © 2012 Society for Risk Analysis.
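    The IIM's core computation is linear: solve q = A*q + c* for the inoperability vector q, that is, q = (I - A*)^(-1) c*, where q_i is the fractional inoperability of sector i, A* is the interdependency matrix derived from the Leontief technological coefficients, and c* is the directly induced perturbation. A minimal two-sector sketch, with hypothetical coupling coefficients:

```python
def inoperability_2x2(a, c):
    """Solve the IIM equation q = A q + c for two interdependent
    sectors by inverting (I - A) directly (Cramer's rule, 2x2 case)."""
    (a11, a12), (a21, a22) = a
    m11, m12, m21, m22 = 1.0 - a11, -a12, -a21, 1.0 - a22
    det = m11 * m22 - m12 * m21
    return ((m22 * c[0] - m12 * c[1]) / det,
            (m11 * c[1] - m21 * c[0]) / det)

# Hypothetical example: a 10% direct perturbation to sector 0
# propagates to sector 1 through the off-diagonal couplings.
A = [[0.0, 0.2],
     [0.5, 0.0]]
q = inoperability_2x2(A, [0.1, 0.0])
```

    Note that each sector ends up more inoperable than its direct perturbation alone would suggest: the interdependencies amplify the initial shock, which is precisely the effect the IIM is designed to quantify.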

Top