Sample records for physics machine statistical

  1. Modeling Stochastic Kinetics of Molecular Machines at Multiple Levels: From Molecules to Modules

    PubMed Central

    Chowdhury, Debashish

    2013-01-01

    A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include 1), nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and 2), statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. PMID:23746505

  2. Machine learning methods as a tool to analyse incomplete or irregularly sampled radon time series data.

    PubMed

    Janik, M; Bossew, P; Kurihara, O

    2018-07-15

    Machine learning is a class of statistical techniques which has proven to be a powerful tool for modelling the behaviour of complex systems, in which response quantities depend on assumed controls or predictors in a complicated way. In this paper, as our first purpose, we propose the application of machine learning to reconstruct incomplete or irregularly sampled time series of indoor radon (²²²Rn). The physical assumption underlying the modelling is that the Rn concentration in the air is controlled by environmental variables such as air temperature and pressure. The algorithms "learn" from complete sections of the multivariate series, derive a dependence model and apply it to sections where the controls are available but not the response (Rn), and in this way complete the Rn series. Three machine learning techniques are applied in this study: random forest, its extension the gradient boosting machine, and deep learning. For comparison, we apply classical multiple regression in a generalized linear model version. Performance of the models is evaluated through different metrics. The performance of the gradient boosting machine is found to be superior to that of the other techniques. By applying machine learning, we show, as our second purpose, that missing data or periods of Rn series data can be reconstructed and resampled on a regular grid reasonably well, if data for appropriate physical controls are available. The techniques also identify to what degree the assumed controls contribute to imputing missing Rn values. Our third purpose, though no less important from the viewpoint of physics, is identifying to what degree physical, in this case environmental, variables are relevant as Rn predictors, or in other words, which predictors explain most of the temporal variability of Rn. We show that the variables which contribute most to the Rn series reconstruction are temperature, relative humidity and day of the year.
The first two are physical predictors, while "day of the year" is a statistical proxy or surrogate for missing or unknown predictors. Copyright © 2018 Elsevier B.V. All rights reserved.
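
A minimal pure-Python sketch of the imputation idea described above (not the authors' code): learn a dependence model on complete sections of the series, then apply it where the controls are known but the response is missing. The data are invented, and a simple least-squares fit stands in for the random-forest, gradient-boosting and deep-learning models used in the paper.

```python
import random

random.seed(0)

# Invented stand-in data: an indoor-radon series driven by temperature.
temp = [10 + 10 * random.random() for _ in range(200)]
radon = [50 - 2 * t + random.gauss(0, 1) for t in temp]

# A block of the response is missing, as in an interrupted series.
missing = set(range(80, 120))
observed = [i for i in range(200) if i not in missing]

# "Learn" the dependence on the complete sections (ordinary least squares).
n = len(observed)
mx = sum(temp[i] for i in observed) / n
my = sum(radon[i] for i in observed) / n
slope = sum((temp[i] - mx) * (radon[i] - my) for i in observed) / \
        sum((temp[i] - mx) ** 2 for i in observed)
intercept = my - slope * mx

# Apply the model where the control (temperature) is known but radon is not.
imputed = {i: intercept + slope * temp[i] for i in missing}

# Deviation of the imputed values from the noiseless generating relation.
err = max(abs(imputed[i] - (50 - 2 * temp[i])) for i in missing)
```

The same learn-on-complete-sections, predict-in-gaps pattern carries over directly when the linear fit is replaced by a nonlinear learner.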

  3. Advanced Machine Learning Emulators of Radiative Transfer Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, G.; Verrelst, J.; Martino, L.; Vicent, J.

    2017-12-01

    Physically-based model inversion methodologies are based on physical laws and established cause-effect relationships. A plethora of remote sensing applications rely on the physical inversion of a Radiative Transfer Model (RTM), which leads to physically meaningful bio-geo-physical parameter estimates. The process is however computationally expensive, and needs expert knowledge for the selection of the RTM, its parametrization, the look-up table generation, and its inversion. Mimicking complex codes with statistical nonlinear machine learning algorithms has very recently become the natural alternative. Emulators are statistical constructs able to approximate the RTM at a fraction of the computational cost, providing an estimation of uncertainty and estimations of the gradient or finite integral forms. We review the field and recent advances in the emulation of RTMs with machine learning models. We posit Gaussian processes (GPs) as the proper framework to tackle the problem. Furthermore, we introduce an automatic methodology to construct emulators for costly RTMs. The Automatic Gaussian Process Emulator (AGAPE) methodology combines the interpolation capabilities of GPs with the accurate design of an acquisition function that favours sampling in low density regions and flatness of the interpolation function. We illustrate the good capabilities of our emulators in toy examples, in the leaf- and canopy-level PROSPECT and PROSAIL RTMs, and for the construction of an optimal look-up table for atmospheric correction based on MODTRAN5.
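
The core emulation idea can be sketched in a few lines: fit a Gaussian process to a handful of expensive model runs, then query the cheap posterior mean instead of the model. This is a toy illustration, not AGAPE: the "expensive model" is an invented one-dimensional function, the design is a fixed grid rather than an acquisition-driven one, and the lengthscale is picked by hand.

```python
import math

def expensive_model(x):
    # Hypothetical stand-in for a costly RTM run.
    return math.sin(3 * x) + 0.5 * x

def rbf(a, b, ls=0.2):
    # Squared-exponential covariance with a hand-picked lengthscale.
    return math.exp(-(a - b) ** 2 / (2 * ls ** 2))

def solve(A, y):
    # Gaussian elimination with partial pivoting (small systems only).
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Train the emulator on a handful of expensive runs over [0, 1].
X = [i / 7 for i in range(8)]
Y = [expensive_model(x) for x in X]
K = [[rbf(a, b) + (1e-6 if i == j else 0.0) for j, b in enumerate(X)]
     for i, a in enumerate(X)]
alpha = solve(K, Y)              # K^-1 y

def emulate(x):
    # GP posterior mean: k(x, X) @ K^-1 y -- cheap to evaluate.
    return sum(rbf(x, xi) * ai for xi, ai in zip(X, alpha))

# The emulator tracks the expensive model closely across the domain.
worst = max(abs(emulate(x / 100) - expensive_model(x / 100)) for x in range(101))
```

In practice the GP also returns a posterior variance, which is what an acquisition function like AGAPE's uses to decide where the next expensive run should go.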

  4. Modeling stochastic kinetics of molecular machines at multiple levels: from molecules to modules.

    PubMed

    Chowdhury, Debashish

    2013-06-04

    A molecular machine is either a single macromolecule or a macromolecular complex. In spite of the striking superficial similarities between these natural nanomachines and their man-made macroscopic counterparts, there are crucial differences. Molecular machines in a living cell operate stochastically in an isothermal environment far from thermodynamic equilibrium. In this mini-review we present a catalog of the molecular machines and an inventory of the essential toolbox for theoretically modeling these machines. The tool kits include 1), nonequilibrium statistical-physics techniques for modeling machines and machine-driven processes; and 2), statistical-inference methods for reverse engineering a functional machine from the empirical data. The cell is often likened to a microfactory in which the machineries are organized in modular fashion; each module consists of strongly coupled multiple machines, but different modules interact weakly with each other. This microfactory has its own automated supply chain and delivery system. Buoyed by the success achieved in modeling individual molecular machines, we advocate integration of these models in the near future to develop models of functional modules. A system-level description of the cell from the perspective of molecular machinery (the mechanome) is likely to emerge from further integrations that we envisage here. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  5. Dependency graph for code analysis on emerging architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shashkov, Mikhail Jurievich; Lipnikov, Konstantin

    The directed acyclic graph (DAG) of dependencies is becoming the standard for modern multi-physics codes. The ideal DAG is the true block scheme of a multi-physics code. It is therefore a convenient object for in situ analysis of the cost of computations and of algorithmic bottlenecks related to statistically frequent data motion and the dynamical machine state.

  6. Accelerometry-based classification of human activities using Markov modeling.

    PubMed

    Mannini, Andrea; Sabatini, Angelo Maria

    2011-01-01

    Accelerometers are a popular choice as body-motion sensors: the reason lies partly in their capability of extracting information that is useful for automatically inferring the physical activity in which the human subject is involved, besides their role in feeding biomechanical parameter estimators. Automatic classification of human physical activities is highly attractive for pervasive computing systems, where context awareness may ease the human-machine interaction, and in biomedicine, where wearable sensor systems are proposed for long-term monitoring. This paper is concerned with the machine learning algorithms needed to perform the classification task. Hidden Markov Model (HMM) classifiers are studied by contrasting them with Gaussian Mixture Model (GMM) classifiers. HMMs incorporate the statistical information available on movement dynamics into the classification process, rather than discarding the time history of previous outcomes as GMMs do. An example of the benefits of the obtained statistical leverage is illustrated and discussed by analyzing two datasets of accelerometer time series.
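
The key contrast above can be made concrete with a toy discrete model (invented numbers, symbols standing in for quantized accelerometer features): an HMM's likelihood depends on the order of the observations, while a static mixture's does not, because the mixture treats every sample independently.

```python
import math

# A two-state chain with "sticky" dynamics: activities tend to persist.
start = [0.5, 0.5]
trans = [[0.9, 0.1], [0.1, 0.9]]
emit = [{'a': 0.8, 'b': 0.2}, {'a': 0.2, 'b': 0.8}]

def hmm_loglik(seq):
    # Scaled forward algorithm: log P(seq) under the HMM.
    f = [start[s] * emit[s][seq[0]] for s in range(2)]
    c = sum(f)
    ll = math.log(c)
    f = [v / c for v in f]
    for obs in seq[1:]:
        f = [sum(f[r] * trans[r][s] for r in range(2)) * emit[s][obs]
             for s in range(2)]
        c = sum(f)
        ll += math.log(c)
        f = [v / c for v in f]
    return ll

def mixture_loglik(seq):
    # A static mixture scores each sample independently: order cannot matter.
    return sum(math.log(sum(0.5 * emit[s][o] for s in range(2))) for o in seq)

smooth = list('aaaaaaabbbbbbb')   # one activity change
choppy = list('ab' * 7)           # same symbols, rapidly alternating
```

Both sequences contain the same symbols, so the mixture assigns them identical likelihoods; the HMM assigns a much higher likelihood to the persistent sequence, which is exactly the "statistical leverage" from movement dynamics that the abstract describes.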

  7. Advances in Machine Learning and Data Mining for Astronomy

    NASA Astrophysics Data System (ADS)

    Way, Michael J.; Scargle, Jeffrey D.; Ali, Kamal M.; Srivastava, Ashok N.

    2012-03-01

    Advances in Machine Learning and Data Mining for Astronomy documents numerous successful collaborations among computer scientists, statisticians, and astronomers who illustrate the application of state-of-the-art machine learning and data mining techniques in astronomy. Due to the massive amount and complexity of data in most scientific disciplines, the material discussed in this text transcends traditional boundaries between various areas in the sciences and computer science. The book's introductory part provides context to issues in the astronomical sciences that are also important to health, social, and physical sciences, particularly probabilistic and statistical aspects of classification and cluster analysis. The next part describes a number of astrophysics case studies that leverage a range of machine learning and data mining technologies. In the last part, developers of algorithms and practitioners of machine learning and data mining show how these tools and techniques are used in astronomical applications. With contributions from leading astronomers and computer scientists, this book is a practical guide to many of the most important developments in machine learning, data mining, and statistics. It explores how these advances can solve current and future problems in astronomy and looks at how they could lead to the creation of entirely new algorithms within the data mining community.

  8. Applications of statistical physics to technology price evolution

    NASA Astrophysics Data System (ADS)

    McNerney, James

    Understanding how changing technology affects the prices of goods is a problem with both rich phenomenology and important policy consequences. Using methods from statistical physics, I model technology-driven price evolution. First, I examine a model for the price evolution of individual technologies. The price of a good often follows a power law equation when plotted against its cumulative production. This observation turns out to have significant consequences for technology policy aimed at mitigating climate change, where technologies are needed that achieve low carbon emissions at low cost. However, no theory adequately explains why technology prices follow power laws. To understand this behavior, I simplify an existing model that treats technologies as machines composed of interacting components. I find that the power law exponent of the price trajectory is inversely related to the number of interactions per component. I extend the model to allow for more realistic component interactions and make a testable prediction. Next, I conduct a case-study on the cost evolution of coal-fired electricity. I derive the cost in terms of various physical and economic components. The results suggest that commodities and technologies fall into distinct classes of price models, with commodities following martingales, and technologies following exponentials in time or power laws in cumulative production. I then examine the network of money flows between industries. This work is a precursor to studying the simultaneous evolution of multiple technologies. Economies resemble large machines, with different industries acting as interacting components with specialized functions. To begin studying the structure of these machines, I examine 20 economies with an emphasis on finding common features to serve as targets for statistical physics models. I find they share the same money flow and industry size distributions. 
I apply methods from statistical physics to show that industries cluster the same way according to industry type. Finally, I use these industry money flows to model the price evolution of many goods simultaneously, where network effects become important. I derive a prediction for which goods tend to improve most rapidly. The fastest-improving goods are those with the highest mean path lengths in the money flow network.
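
The empirical regularity at the heart of the first part of this work, that price often follows a power law in cumulative production, means power-law data are straight lines in log-log space, so the exponent can be recovered by a log-log regression. A small sketch on synthetic data (all values invented; this is an illustration of the fitting step, not the author's model):

```python
import math
import random

random.seed(1)

# Synthetic data: price falls as a power law in cumulative production,
# p = p0 * Q^(-w), with multiplicative noise.
p0, w = 100.0, 0.4
Q = [10 * 1.5 ** k for k in range(30)]                 # cumulative production
price = [p0 * q ** -w * math.exp(random.gauss(0, 0.05)) for q in Q]

# Fit log p = log p0 - w log Q by ordinary least squares.
lx = [math.log(q) for q in Q]
ly = [math.log(p) for p in price]
n = len(lx)
mx, my = sum(lx) / n, sum(ly) / n
slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
w_hat = -slope   # recovered exponent, close to the true w = 0.4
```

In the model discussed above, this fitted exponent is what gets related (inversely) to the number of interactions per component of the technology.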

  9. Machine learning to analyze images of shocked materials for precise and accurate measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dresselhaus-Cooper, Leora; Howard, Marylesa; Hock, Margaret C.

    A supervised machine learning algorithm, called locally adaptive discriminant analysis (LADA), has been developed to locate boundaries between identifiable image features that have varying intensities. LADA is an adaptation of image segmentation, which includes techniques that find the positions of image features (classes) using statistical intensity distributions for each class in the image. In order to place a pixel in the proper class, LADA considers the intensity at that pixel and the distribution of intensities in local (nearby) pixels. This paper presents the use of LADA to provide, with statistical uncertainties, the positions and shapes of features within ultrafast images of shock waves. We demonstrate the ability to locate image features including crystals, density changes associated with shock waves, and material jetting caused by shock waves. This algorithm can analyze images that exhibit a wide range of physical phenomena because it does not rely on comparison to a model. LADA enables analysis of images from shock physics with statistical rigor independent of underlying models or simulations.

  10. Library Resources for the Blind and Physically Handicapped: A Directory with FY 1998 Statistics on Readership, Circulation, Budget, Staff, and Collections.

    ERIC Educational Resources Information Center

    Library of Congress, Washington, DC. National Library Service for the Blind and Physically Handicapped.

    This directory lists National Library Service for the Blind and Physically Handicapped libraries and machine-lending agencies alphabetically by state. Each entry includes address, phone and fax numbers, e-mail address, World Wide Web site, area served, librarian name, hours, book collection, special collections, assistive devices, special…

  11. On the Safety of Machine Learning: Cyber-Physical Systems, Decision Sciences, and Data Products.

    PubMed

    Varshney, Kush R; Alemzadeh, Homa

    2017-09-01

    Machine learning algorithms increasingly influence our decisions and interact with us in all parts of our daily lives. Therefore, just as we consider the safety of power plants, highways, and a variety of other engineered socio-technical systems, we must also take into account the safety of systems involving machine learning. Heretofore, the definition of safety has not been formalized in a machine learning context. In this article, we do so by defining machine learning safety in terms of risk, epistemic uncertainty, and the harm incurred by unwanted outcomes. We then use this definition to examine safety in all sorts of applications in cyber-physical systems, decision sciences, and data products. We find that the foundational principle of modern statistical machine learning, empirical risk minimization, is not always a sufficient objective. We discuss how four different categories of strategies for achieving safety in engineering, including inherently safe design, safety reserves, safe fail, and procedural safeguards can be mapped to a machine learning context. We then discuss example techniques that can be adopted in each category, such as considering interpretability and causality of predictive models, objective functions beyond expected prediction accuracy, human involvement for labeling difficult or rare examples, and user experience design of software and open data.

  12. Comparative evaluation of features and techniques for identifying activity type and estimating energy cost from accelerometer data

    PubMed Central

    Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.

    2016-01-01

    Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use this data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects doing eight different physical activities wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. The choice of machine learning technique was also found to be important. However, on the energy cost estimation task, the choices of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679

  13. Mutual information, neural networks and the renormalization group

    NASA Astrophysics Data System (ADS)

    Koch-Janusz, Maciej; Ringel, Zohar

    2018-06-01

    Physical systems differing in their microscopic details often display strikingly similar behaviour when probed at macroscopic scales. Those universal properties, largely determining their physical characteristics, are revealed by the powerful renormalization group (RG) procedure, which systematically retains `slow' degrees of freedom and integrates out the rest. However, the important degrees of freedom may be difficult to identify. Here we demonstrate a machine-learning algorithm capable of identifying the relevant degrees of freedom and executing RG steps iteratively without any prior knowledge about the system. We introduce an artificial neural network based on a model-independent, information-theoretic characterization of a real-space RG procedure, which performs this task. We apply the algorithm to classical statistical physics problems in one and two dimensions. We demonstrate RG flow and extract the Ising critical exponent. Our results demonstrate that machine-learning techniques can extract abstract physical concepts and consequently become an integral part of theory- and model-building.

  14. Statistical mechanics of unsupervised feature learning in a restricted Boltzmann machine with binary synapses

    NASA Astrophysics Data System (ADS)

    Huang, Haiping

    2017-05-01

    Revealing hidden features in unlabeled data is called unsupervised feature learning, which plays an important role in pretraining a deep neural network. Here we provide a statistical mechanics analysis of unsupervised learning in a restricted Boltzmann machine with binary synapses. A message passing equation to infer the hidden feature is derived, and furthermore, variants of this equation are analyzed. A statistical analysis by replica theory describes the thermodynamic properties of the model. Our analysis confirms an entropy crisis preceding the non-convergence of the message passing equation, suggesting a discontinuous phase transition as a key characteristic of the restricted Boltzmann machine. A continuous phase transition is also confirmed, depending on the embedded feature strength in the data. The mean-field result under the replica symmetric assumption agrees with that obtained by running message passing algorithms on single instances of finite sizes. Interestingly, in an approximate Hopfield model, the entropy crisis is absent, and a continuous phase transition is observed instead. We also develop an iterative equation to infer the hyper-parameter (temperature) hidden in the data, which in physics corresponds to iteratively imposing the Nishimori condition. Our study provides insights towards understanding the thermodynamic properties of restricted Boltzmann machine learning, and moreover an important theoretical basis for building simplified deep networks.
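
For readers unfamiliar with the model being analyzed: a restricted Boltzmann machine is a bipartite network of visible and hidden units that can pick up a feature planted in unlabeled data. The sketch below uses real-valued weights trained by one-step contrastive divergence (CD-1), so it illustrates the machine itself rather than the paper's binary-synapse message-passing analysis; the data and sizes are invented.

```python
import math
import random

random.seed(2)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

nv, nh = 6, 2                     # visible and hidden units
W = [[random.gauss(0, 0.1) for _ in range(nh)] for _ in range(nv)]

def sample_h(v):
    # Sample hidden units given visibles (no biases, for brevity).
    return [1 if random.random() < sigmoid(sum(v[i] * W[i][j] for i in range(nv)))
            else 0 for j in range(nh)]

def sample_v(h):
    # Sample visible units given hiddens.
    return [1 if random.random() < sigmoid(sum(h[j] * W[i][j] for j in range(nh)))
            else 0 for i in range(nv)]

# Unlabeled data with one planted feature: the first three units fire together.
pattern = [1, 1, 1, 0, 0, 0]
data = [pattern] * 100

# Contrastive-divergence (CD-1) learning of the weights.
lr = 0.1
for _ in range(1000):
    v0 = random.choice(data)
    h0 = sample_h(v0)
    v1 = sample_v(h0)
    h1 = sample_h(v1)
    for i in range(nv):
        for j in range(nh):
            W[i][j] += lr * (v0[i] * h0[j] - v1[i] * h1[j])

# A trained machine reconstructs the planted feature well above chance (0.5).
trials, correct = 400, 0
for _ in range(trials):
    r = sample_v(sample_h(pattern))
    correct += sum(a == b for a, b in zip(pattern, r))
acc = correct / (trials * nv)
```

The paper's question is what happens to this kind of learning when the weights W are themselves constrained to be binary, which is where the entropy crisis and phase transitions appear.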

  15. Power training using pneumatic machines vs. plate-loaded machines to improve muscle power in older adults.

    PubMed

    Balachandran, Anoop T; Gandia, Kristine; Jacobs, Kevin A; Streiner, David L; Eltoukhy, Moataz; Signorile, Joseph F

    2017-11-01

    Power training has been shown to be more effective than conventional resistance training for improving physical function in older adults; however, most trials have used pneumatic machines during training. Considering that the general public typically has access to plate-loaded machines, the effectiveness and safety of power training using plate-loaded machines compared to pneumatic machines is an important consideration. The purpose of this investigation was to compare the effects of high-velocity training using pneumatic machines (Pn) versus standard plate-loaded machines (PL). Independently living older adults 60 years or older were randomized into two groups: pneumatic machine (Pn, n=19) and plate-loaded machine (PL, n=17). After 12 weeks of high-velocity training twice per week, groups were analyzed using an intention-to-treat approach. Primary outcomes were lower body power measured using a linear transducer and upper body power using medicine ball throw. Secondary outcomes included lower and upper body muscle strength, the Physical Performance Battery (PPB), gallon jug test, the timed up-and-go test, and self-reported function using the Patient Reported Outcomes Measurement Information System (PROMIS) and an online video questionnaire. Outcome assessors were blinded to group membership. Lower body power significantly improved in both groups (Pn: 19%, PL: 31%), with no significant difference between the groups (Cohen's d=0.4, 95% CI (-1.1, 0.3)). Upper body power significantly improved only in the PL group, but showed no significant difference between the groups (Pn: 3%, PL: 6%). For balance, there was a significant difference between the groups favoring the Pn group (d=0.7, 95% CI (0.1, 1.4)); however, there were no statistically significant differences between groups for PPB, gallon jug transfer, muscle strength, timed up-and-go or self-reported function. No serious adverse events were reported in either of the groups. 
Pneumatic and plate-loaded machines were effective in improving lower body power and physical function in older adults. The results suggest that power training can be safely and effectively performed by older adults using either pneumatic or plate-loaded machines. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Time for paradigmatic substitution in psychology. What are the alternatives?

    PubMed

    Kolstad, Arnulf

    2010-03-01

    This article focuses on the "machine paradigm" in psychology and its consequences for (mis)understanding of human beings. It discusses causes of the mainstream epistemology in Western societies, referring to philosophical traditions, the prestige of some natural sciences and mathematical statistics. It emphasizes how the higher psychological functions develop dialectically from a biological basis and how the brain due to its plasticity changes with mental and physical activity. This makes a causal machine paradigm unfit to describe and explain human psychology and human development. Some concepts for an alternative paradigm are suggested.

  17. Thermal machines beyond the weak coupling regime

    NASA Astrophysics Data System (ADS)

    Gallego, R.; Riera, A.; Eisert, J.

    2014-12-01

    How much work can be extracted from a heat bath using a thermal machine? The study of this question has a very long history in statistical physics in the weak-coupling limit, when applied to macroscopic systems. However, the assumption that thermal heat baths remain uncorrelated with associated physical systems is less reasonable on the nano-scale and in the quantum setting. In this work, we establish a framework of work extraction in the presence of quantum correlations. We show in a mathematically rigorous and quantitative fashion that quantum correlations and entanglement emerge as limitations to work extraction compared to what would be allowed by the second law of thermodynamics. At the heart of the approach are operations that capture the naturally non-equilibrium dynamics encountered when putting physical systems into contact with each other. We discuss various limits that relate to known results and put our work into the context of approaches to finite-time quantum thermodynamics.

  18. Using Machine Learning as a fast emulator of physical processes within the Met Office's Unified Model

    NASA Astrophysics Data System (ADS)

    Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.

    2017-12-01

    The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations, including the Korea Meteorological Administration, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and roughly 50% of the cost of a typical simulation is spent in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation, the idea being to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.

  19. Statistical Machine Learning for Structured and High Dimensional Data

    DTIC Science & Technology

    2014-09-17

    Final report AFRL-OSR-VA-TR-2014-0234, "Statistical Machine Learning for Structured and High Dimensional Data." Larry Wasserman, Carnegie Mellon University; reporting period Dec 2009 - Aug 2014. Research in the area of resource-constrained statistical estimation. Contact: John Lafferty. Subject terms: machine learning, high-dimensional statistics. (Only fragments of the report cover form survive in this record.)

  20. Using Perturbed Physics Ensembles and Machine Learning to Select Parameters for Reducing Regional Biases in a Global Climate Model

    NASA Astrophysics Data System (ADS)

    Li, S.; Rupp, D. E.; Hawkins, L.; Mote, P.; McNeall, D. J.; Sarah, S.; Wallom, D.; Betts, R. A.

    2017-12-01

    This study investigates the potential to reduce known summer hot/dry biases over the Pacific Northwest in the UK Met Office's atmospheric model (HadAM3P) by simultaneously varying multiple model parameters. The bias-reduction process is done through a series of steps: 1) Generation of a perturbed physics ensemble (PPE) through the volunteer computing network weather@home; 2) Using machine learning to train "cheap" and fast statistical emulators of the climate model, to rule out regions of parameter space that lead to model variants that do not satisfy observational constraints, where the observational constraints (e.g., top-of-atmosphere energy flux, magnitude of annual temperature cycle, summer/winter temperature and precipitation) are introduced sequentially; 3) Designing a new PPE by "pre-filtering" using the emulator results. Steps 1) through 3) are repeated until results are considered to be satisfactory (3 times in our case). The process includes a sensitivity analysis to find dominant parameters for various model output metrics, which reduces the number of parameters to be perturbed with each new PPE. Relative to observational uncertainty, we achieve regional improvements without introducing large biases in other parts of the globe. Our results illustrate the potential of using machine learning to train cheap and fast statistical emulators of the climate model, in combination with PPEs, in systematic model improvement.
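
The ensemble-emulate-prefilter loop described above can be sketched in miniature (everything here is invented: a two-parameter toy "climate model", a nearest-neighbour stand-in for the statistical emulator, and a single made-up observational constraint):

```python
import random

random.seed(3)

def climate_model(p1, p2):
    # Hypothetical stand-in for an expensive run: returns a regional bias
    # as a function of two perturbed physics parameters.
    return 3.0 * (p1 - 0.3) ** 2 + 2.0 * (p2 - 0.7) ** 2 + random.gauss(0, 0.05)

# Step 1: a small perturbed physics ensemble of expensive runs.
ensemble = [(random.random(), random.random()) for _ in range(40)]
runs = [(p, climate_model(*p)) for p in ensemble]

# Step 2: a cheap emulator -- nearest-neighbour lookup over the ensemble,
# standing in for the trained statistical emulators of the study.
def emulate(p1, p2):
    return min(runs, key=lambda r: (r[0][0] - p1) ** 2 + (r[0][1] - p2) ** 2)[1]

# Step 3: pre-filter candidate parameters with the emulator, keeping only
# those predicted to satisfy the observational constraint (bias < 0.5).
candidates = [(random.random(), random.random()) for _ in range(2000)]
kept = [p for p in candidates if emulate(*p) < 0.5]

frac = len(kept) / len(candidates)   # only part of parameter space survives
```

The surviving region is where the next, targeted PPE would be generated, and the loop repeats with further constraints.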

  21. Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models

    DTIC Science & Technology

    2015-09-12

    Final report AFRL-AFOSR-VA-TR-2015-0278, "Derivative Free Optimization of Complex Systems with the Use of Statistical Machine Learning Models." Katya Scheinberg; grant number FA9550-11-1-0239. Subject terms: optimization, derivative-free optimization, statistical machine learning. (Only fragments of the report cover form survive in this record.)

  22. Multivariate Statistical Analysis of Cigarette Design Feature Influence on ISO TNCO Yields.

    PubMed

    Agnew-Heard, Kimberly A; Lancaster, Vicki A; Bravo, Roberto; Watson, Clifford; Walters, Matthew J; Holman, Matthew R

    2016-06-20

    The aim of this study is to explore how differences in cigarette physical design parameters influence tar, nicotine, and carbon monoxide (TNCO) yields in mainstream smoke (MSS) using the International Organization for Standardization (ISO) smoking regimen. Standardized smoking methods were used to evaluate 50 U.S. domestic brand cigarettes and a reference cigarette representing a range of TNCO yields in MSS collected from linear smoking machines using a nonintense smoking regimen. Multivariate statistical methods were used to form clusters of cigarettes based on their ISO TNCO yields and then to explore the relationship between the ISO generated TNCO yields and the nine cigarette physical design parameters between and within each cluster simultaneously. The ISO generated TNCO yields in MSS are 1.1-17.0 mg tar/cigarette, 0.1-2.2 mg nicotine/cigarette, and 1.6-17.3 mg CO/cigarette. Cluster analysis divided the 51 cigarettes into five discrete clusters based on their ISO TNCO yields. No one physical parameter dominated across all clusters. Predicting ISO machine generated TNCO yields based on these nine physical design parameters is complex due to the correlation among and between the nine physical design parameters and TNCO yields. From these analyses, it is estimated that approximately 20% of the variability in the ISO generated TNCO yields comes from other parameters (e.g., filter material, filter type, inclusion of expanded or reconstituted tobacco, and tobacco blend composition, along with differences in tobacco leaf origin and stalk positions and added ingredients). A future article will examine the influence of these physical design parameters on TNCO yields under a Canadian Intense (CI) smoking regimen. Together, these papers will provide a more robust picture of the design features that contribute to TNCO exposure across the range of real world smoking patterns.

  3. Dynamically allocated virtual clustering management system

    NASA Astrophysics Data System (ADS)

    Marcus, Kelvin; Cannata, Jess

    2013-05-01

    The U.S. Army Research Laboratory (ARL) has built a "Wireless Emulation Lab" to support research in wireless mobile networks. In our current experimentation environment, our researchers need the capability to run clusters of heterogeneous nodes to model emulated wireless tactical networks where each node could contain a different operating system, application set, and physical hardware. To complicate matters, most experiments require the researcher to have root privileges. Our previous solution of using a single shared cluster of statically deployed virtual machines did not sufficiently separate each user's experiment due to undesirable network crosstalk, so only one experiment could be run at a time. In addition, the cluster did not make efficient use of our servers and physical networks. To address these concerns, we created the Dynamically Allocated Virtual Clustering management system (DAVC). This system leverages existing open-source software to create private clusters of nodes that are either virtual or physical machines. These clusters can be utilized for software development, experimentation, and integration with existing hardware and software. The system uses the Grid Engine job scheduler to efficiently allocate virtual machines to idle systems and networks. The system deploys stateless nodes via network booting. The system uses 802.1Q Virtual LANs (VLANs) to prevent experimentation crosstalk and to allow for complex, private networks, eliminating the need to map each virtual machine to a specific switch port. The system monitors the health of the clusters and the underlying physical servers, and it maintains cluster usage statistics for historical trends. Users can start private clusters of heterogeneous nodes with root privileges for the duration of the experiment. Users also control when to shut down their clusters.

  4. Simulation-driven machine learning: Bearing fault classification

    NASA Astrophysics Data System (ADS)

    Sobie, Cameron; Freitas, Carina; Nicolai, Mike

    2018-01-01

    Increasing the accuracy of mechanical fault detection has the potential to improve system safety and economic performance by minimizing scheduled maintenance and the probability of unexpected system failure. Advances in computational performance have enabled the application of machine learning algorithms across numerous applications including condition monitoring and failure detection. Past applications of machine learning to physical failure have relied explicitly on historical data, which limits the feasibility of this approach to in-service components with extended service histories. Furthermore, recorded failure data is often only valid for the specific circumstances and components for which it was collected. This work directly addresses these challenges for roller bearings with race faults by generating training data using information gained from high resolution simulations of roller bearing dynamics, which is used to train machine learning algorithms that are then validated against four experimental datasets. Several different machine learning methodologies are compared starting from well-established statistical feature-based methods to convolutional neural networks, and a novel application of dynamic time warping (DTW) to bearing fault classification is proposed as a robust, parameter-free method for race fault detection.
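
    Dynamic time warping itself is compact; a hedged sketch of the standard dynamic-programming recurrence (not the authors' implementation) is:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, or match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

    A measured vibration signal could then be classified by the label of its nearest simulated template under this distance; note that a time-stretched copy of a signal has zero DTW distance to the original, which is what makes the measure robust to speed variations.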

  5. The impact of the availability of school vending machines on eating behavior during lunch: the Youth Physical Activity and Nutrition Survey.

    PubMed

    Park, Sohyun; Sappenfield, William M; Huang, Youjie; Sherry, Bettylou; Bensyl, Diana M

    2010-10-01

    Childhood obesity is a major public health concern and is associated with substantial morbidities. Access to less-healthy foods might facilitate dietary behaviors that contribute to obesity. However, less-healthy foods are usually available in school vending machines. This cross-sectional study examined the prevalence of students buying snacks or beverages from school vending machines instead of buying school lunch and predictors of this behavior. Analyses were based on the 2003 Florida Youth Physical Activity and Nutrition Survey using a representative sample of 4,322 students in grades six through eight in 73 Florida public middle schools. Analyses included χ2 tests and logistic regression. The outcome measure was buying a snack or beverage from vending machines 2 or more days during the previous 5 days instead of buying lunch. The survey response rate was 72%. Eighteen percent of respondents reported purchasing a snack or beverage from a vending machine 2 or more days during the previous 5 school days instead of buying school lunch. Although healthier options were available, the most commonly purchased vending machine items were chips, pretzels/crackers, candy bars, soda, and sport drinks. More students chose snacks or beverages instead of lunch in schools where beverage vending machines were also available than did students in schools where beverage vending machines were unavailable: 19% and 7%, respectively (P≤0.05). The strongest risk factor for buying snacks or beverages from vending machines instead of buying school lunch was availability of beverage vending machines in schools (adjusted odds ratio=3.5; 95% confidence interval, 2.2 to 5.7). Other statistically significant risk factors were smoking, non-Hispanic black race/ethnicity, Hispanic ethnicity, and older age. Although healthier choices were available, the most common choices were the less-healthy foods. 
Schools should consider developing policies to reduce the availability of less-healthy choices in vending machines and to reduce access to beverage vending machines. Copyright © 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
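
    The adjusted odds ratio and confidence interval reported above come from logistic regression; for a single 2x2 table, a crude odds ratio with a Woolf-type 95% CI can be sketched as follows (the counts are hypothetical, not the survey's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf (log-based) 95% CI.

    a: exposed cases, b: exposed non-cases,
    c: unexposed cases, d: unexposed non-cases.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, not the survey's data:
oratio, ci_lo, ci_hi = odds_ratio_ci(40, 60, 10, 90)
```

    An interval that excludes 1.0, as in the reported 2.2 to 5.7, indicates a statistically significant association.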

  6. Inverse Problems in Geodynamics Using Machine Learning Algorithms

    NASA Astrophysics Data System (ADS)

    Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.

    2018-01-01

    During the past few decades numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, geodynamical studies must incorporate many properties from mineral physics, geochemistry, and petrology into these numerical models. Machine learning, as a computational statistics-related technique and a subfield of artificial intelligence, has recently emerged rapidly in many fields of science and engineering. We focus here on the application of supervised machine learning (SML) algorithms in predictions of mantle flow processes. Specifically, we emphasize estimating mantle properties by employing machine learning techniques in solving an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at midmantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms for putting constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.

  7. Image statistics for surface reflectance perception.

    PubMed

    Sharan, Lavanya; Li, Yuanzhen; Motoyoshi, Isamu; Nishida, Shin'ya; Adelson, Edward H

    2008-04-01

    Human observers can distinguish the albedo of real-world surfaces even when the surfaces are viewed in isolation, contrary to the Gelb effect. We sought to measure this ability and to understand the cues that might underlie it. We took photographs of complex surfaces such as stucco and asked observers to judge their diffuse reflectance by comparing them to a physical Munsell scale. Their judgments, while imperfect, were highly correlated with the true reflectance. The judgments were also highly correlated with certain image statistics, such as moment and percentile statistics of the luminance and subband histograms. When we digitally manipulated these statistics in an image, human judgments were correspondingly altered. Moreover, linear combinations of such statistics allow a machine vision system (operating within the constrained world of single surfaces) to estimate albedo with an accuracy similar to that of human observers. Taken together, these results indicate that some simple image statistics have a strong influence on the judgment of surface reflectance.
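
    The moment and percentile statistics mentioned above are straightforward to compute; a hedged sketch for a list of luminance values (illustrative, not the authors' exact feature set):

```python
import statistics

def luminance_stats(pixels):
    """Moment and percentile statistics of a luminance histogram."""
    mu = statistics.fmean(pixels)
    sd = statistics.pstdev(pixels)
    # third standardized moment: skewness of the luminance distribution
    skew = sum((p - mu) ** 3 for p in pixels) / (len(pixels) * sd ** 3)
    deciles = statistics.quantiles(pixels, n=10)  # 10th..90th percentiles
    return mu, sd, skew, deciles

mu, sd, skew, deciles = luminance_stats([1, 2, 3, 4, 5])
```

    A linear combination of such features over the luminance and subband histograms is what the constrained machine vision system described above regresses against albedo.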

  8. Development of a sterilizing in-place application for a production machine using Vaporized Hydrogen Peroxide.

    PubMed

    Mau, T; Hartmann, V; Burmeister, J; Langguth, P; Häusler, H

    2004-01-01

    The use of steam in sterilization processes is limited by the presence of heat-sensitive components inside the machines to be sterilized. Alternative low-temperature sterilization methods need to be found and their suitability evaluated. Vaporized Hydrogen Peroxide (VHP) technology was adapted for a production machine consisting of highly sensitive pressure sensors and thermo-labile air tube systems. This new kind of "cold" surface sterilization, known from barrier isolator technology, is based on the controlled release of hydrogen peroxide vapour into sealed enclosures. A mobile VHP generator was used to generate the hydrogen peroxide vapour. The unit was combined with the air conduction system of the production machine. Terminal vacuum pumps were installed to distribute the gas within the production machine and for its elimination. In order to control the sterilization process, different physical process monitors were incorporated. The validation of the process was based on biological indicators (Geobacillus stearothermophilus). The Limited Spearman-Karber Method (LSKM) was used to statistically evaluate the sterilization process. The results show that it is possible to sterilize surfaces in a complex tube system with the use of gaseous hydrogen peroxide. A total microbial reduction of 6 log units was reached.
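
    For reference, the reported 6-log microbial reduction is simply the base-10 logarithm of the survivor ratio:

```python
import math

def log_reduction(n_initial, n_final):
    """Microbial log reduction: log10 of initial count over surviving count."""
    return math.log10(n_initial / n_final)

# A 6-log reduction means one survivor per million initial organisms:
reduction = log_reduction(1_000_000, 1)
```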

  9. Prevalence and associated factors of work related musculoskeletal disorders among commercial milling machine operators in South-Eastern Nigerian markets.

    PubMed

    Ojukwu, Chidiebele Petronilla; Anyanwu, Godson Emeka; Nwabueze, Augustine Chijindu; Anekwu, Emelie Morris; Chukwu, Sylvester Caesar

    2017-01-01

    Milling machine operators perform physically demanding tasks that can lead to work related musculoskeletal disorders (WRMSDs), but literature on WRMSDs among milling machine operators is scarce. Knowledge of the prevalence and risk factors of WRMSDs can be an appropriate base for planning and implementing ergonomics intervention programs in the workplace. This study aimed to determine the prevalence, pattern and associated factors of WRMSDs among commercial milling machine operators in Enugu, Nigeria. This cross-sectional survey involved 148 commercial milling machine operators (74 hand-operated milling machine operators (HOMMO) and 74 electrically-operated milling machine operators (EOMMO)), within the age range of 18-65 years, who were conveniently selected from four markets in Enugu, Nigeria. A standard Nordic questionnaire was used to assess the prevalence of WRMSDs among the participants. Data were summarized using descriptive statistics. There was a significant difference (p = 0.001) in the prevalence of WRMSDs between HOMMOs (77%) and EOMMOs (50%). All body parts were affected in both groups; the shoulders (85.1%) and lower back (46%) had the highest prevalence. Working in awkward and sustained postures, working with injury, poor workplace design, repetition of tasks, vibrating work equipment, reduced rest, high job demand and heavy lifting were significantly associated with the prevalence of WRMSDs. WRMSDs are prevalent among commercial milling machine operators, with higher occurrence in HOMMOs. Ergonomic interventions, including the re-design of milling machines and appropriate work posture education of machine operators, are recommended in the milling industry.
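
    The group comparison above can be reproduced approximately with a 2x2 chi-square test; the counts below are reconstructed from the reported percentages (57/74 ≈ 77% affected HOMMOs, 37/74 = 50% affected EOMMOs) and are therefore approximate:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df) for a 2x2 contingency table."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# rows: HOMMO, EOMMO; columns: affected, not affected (reconstructed counts)
stat = chi2_2x2(57, 17, 37, 37)
```

    The statistic exceeds 10.83, the 1-df critical value at p = 0.001, consistent with the significance level reported above.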

  10. Statistical complexity measure of pseudorandom bit generators

    NASA Astrophysics Data System (ADS)

    González, C. M.; Larrondo, H. A.; Rosso, O. A.

    2005-08-01

    Pseudorandom number generators (PRNG) are extensively used in Monte Carlo simulations, gambling machines and cryptography as substitutes for ideal random number generators (RNG). Each application imposes different statistical requirements on PRNGs. As L’Ecuyer clearly states, “the main goal for Monte Carlo methods is to reproduce the statistical properties on which these methods are based whereas for gambling machines and cryptology, observing the sequence of output values for some time should provide no practical advantage for predicting the forthcoming numbers better than by just guessing at random”. In accordance with these different applications, several statistical test suites have been developed to analyze the sequences generated by PRNGs. In a recent paper a new statistical complexity measure [Phys. Lett. A 311 (2003) 126] was defined. Here we propose this measure as a randomness quantifier for PRNGs. The test is applied to three well-known and widely tested PRNGs available in the literature, all of them based on mathematical algorithms. Another PRNG, based on the Lorenz 3D chaotic dynamical system, is also analyzed. PRNGs based on chaos may be considered as models for physical noise sources, and important new results have recently been reported. All the design steps of this PRNG are described, and each stage increases the PRNG randomness using a different strategy. It is shown that the MPR statistical complexity measure is capable of quantifying this randomness improvement. The PRNG based on the chaotic 3D Lorenz dynamical system is also evaluated using traditional digital signal processing tools for comparison.
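
    The MPR statistical complexity measure combines a normalized Shannon entropy with a Jensen-Shannon disequilibrium. As a hedged sketch of the entropy ingredient only, Bandt-Pompe permutation entropy distinguishes a deterministic ramp from white noise:

```python
import math
import random
from itertools import permutations

def permutation_entropy(series, d=3):
    """Normalized Bandt-Pompe permutation entropy in [0, 1]."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(series) - d + 1):
        window = series[i:i + d]
        # ordinal pattern: argsort of the d values in the window
        pattern = tuple(sorted(range(d), key=window.__getitem__))
        counts[pattern] += 1
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return entropy / math.log(math.factorial(d))  # normalize by log(d!)

random.seed(1)
noise = [random.random() for _ in range(2000)]  # good PRNG output: entropy near 1
ramp = list(range(100))                         # deterministic series: entropy 0
```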

  11. Classification without labels: learning from mixed samples in high energy physics

    NASA Astrophysics Data System (ADS)

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    2017-10-01

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.
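
    The central CWoLa result can be checked numerically in a toy setting: with 1-D Gaussians standing in for signal and background, the likelihood ratio between two mixtures with different signal fractions is monotonic in the pure signal/background likelihood ratio, so both define the same optimal classifier:

```python
import math

def gauss(x, mu, sigma=1.0):
    """1-D Gaussian density."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Two mixed samples with different (unknown to the classifier) signal fractions:
f1, f2 = 0.8, 0.2
xs = [i / 10 for i in range(-40, 41)]
pure_ratio = [gauss(x, 1.0) / gauss(x, -1.0) for x in xs]                  # p_S / p_B
mixed_ratio = [(f1 * gauss(x, 1.0) + (1 - f1) * gauss(x, -1.0)) /
               (f2 * gauss(x, 1.0) + (1 - f2) * gauss(x, -1.0)) for x in xs]
# Both ratios increase monotonically in x, so thresholding the mixture
# classifier reproduces the optimal pure signal-vs-background classifier.
```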

  12. Classification without labels: learning from mixed samples in high energy physics

    DOE PAGES

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    2017-10-25

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.

  13. Classification without labels: learning from mixed samples in high energy physics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metodiev, Eric M.; Nachman, Benjamin; Thaler, Jesse

    Modern machine learning techniques can be used to construct powerful models for difficult collider physics problems. In many applications, however, these models are trained on imperfect simulations due to a lack of truth-level information in the data, which risks the model learning artifacts of the simulation. In this paper, we introduce the paradigm of classification without labels (CWoLa) in which a classifier is trained to distinguish statistical mixtures of classes, which are common in collider physics. Crucially, neither individual labels nor class proportions are required, yet we prove that the optimal classifier in the CWoLa paradigm is also the optimal classifier in the traditional fully-supervised case where all label information is available. After demonstrating the power of this method in an analytical toy example, we consider a realistic benchmark for collider physics: distinguishing quark- versus gluon-initiated jets using mixed quark/gluon training samples. More generally, CWoLa can be applied to any classification problem where labels or class proportions are unknown or simulations are unreliable, but statistical mixtures of the classes are available.

  14. Exploring cluster Monte Carlo updates with Boltzmann machines

    NASA Astrophysics Data System (ADS)

    Wang, Lei

    2017-11-01

    Boltzmann machines are physics-informed generative models with broad applications in machine learning. They model the probability distribution of an input data set with latent variables and generate new samples accordingly. Applied back to physics, Boltzmann machines are ideal recommender systems to accelerate the Monte Carlo simulation of physical systems due to their flexibility and effectiveness. More intriguingly, we show that the generative sampling of the Boltzmann machines can even give different cluster Monte Carlo algorithms. The latent representation of the Boltzmann machines can be designed to mediate complex interactions and identify clusters of the physical system. We demonstrate these findings with concrete examples of the classical Ising model with and without four-spin plaquette interactions. In the future, automatic searches in the algorithm space parametrized by Boltzmann machines may discover more innovative Monte Carlo updates.
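
    As a hedged sketch (not the paper's construction), the sampling primitive underlying such recommender updates is one block-Gibbs step of a binary restricted Boltzmann machine; the toy weights below are invented:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gibbs_step(v, W, b, c, rng):
    """One block-Gibbs sweep of a binary RBM: sample the hidden units given
    the visible configuration v, then resample the visibles given the hiddens."""
    n_v, n_h = len(b), len(c)
    h = [1 if rng.random() < sigmoid(c[j] + sum(W[i][j] * v[i] for i in range(n_v))) else 0
         for j in range(n_h)]
    v_new = [1 if rng.random() < sigmoid(b[i] + sum(W[i][j] * h[j] for j in range(n_h))) else 0
             for i in range(n_v)]
    return v_new, h

rng = random.Random(0)
W = [[0.5, -0.5], [0.5, -0.5], [-0.5, 0.5]]  # invented 3-visible x 2-hidden weights
b, c = [0.0, 0.0, 0.0], [0.0, 0.0]
v, h = gibbs_step([1, 0, 1], W, b, c, rng)
```

    In the cluster-update setting described above, the hidden units play the role of auxiliary variables that mediate interactions among visible spins, so resampling them proposes correlated multi-spin moves.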

  15. Machine Learning Based Multi-Physical-Model Blending for Enhancing Renewable Energy Forecast -- Improvement via Situation Dependent Error Correction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Siyuan; Hwang, Youngdeok; Khabibrakhmanov, Ildar

    With increasing penetration of solar and wind energy into the total energy supply mix, the pressing need for accurate energy forecasting has become well-recognized. Here we report the development of a machine-learning based model blending approach for statistically combining multiple meteorological models to improve the accuracy of solar/wind power forecasts. Importantly, we demonstrate that, in addition to the parameters to be predicted (such as solar irradiance and power), including additional atmospheric state parameters which collectively define weather situations as machine learning input provides further enhanced accuracy for the blended result. Functional analysis of variance shows that the error of each individual model has substantial dependence on the weather situation. The machine-learning approach effectively reduces such situation-dependent error and thus produces more accurate results compared to conventional multi-model ensemble approaches based on simplistic equally or unequally weighted model averaging. Validation results over an extended period of time show over 30% improvement in solar irradiance/power forecast accuracy compared to forecasts based on the best individual model.
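
    One way to picture "situation dependent" blending, as a hedged sketch rather than the reported method: fit a separate least-squares blending weight within each weather regime. For two models the optimal weight has a closed form:

```python
def blend_weight(y, f1, f2):
    """Least-squares weight w for the blend w*f1 + (1-w)*f2 against truth y,
    fit within a single weather regime (e.g. clear-sky samples only)."""
    num = sum((yi - b) * (a - b) for yi, a, b in zip(y, f1, f2))
    den = sum((a - b) ** 2 for a, b in zip(f1, f2))
    return num / den

# Invented irradiance values (W/m^2): model 1 happens to be perfect in this regime,
# so the fitted weight puts all mass on it.
w_clear = blend_weight([500.0, 620.0, 710.0], [500.0, 620.0, 710.0], [450.0, 700.0, 650.0])
```

    Fitting one such weight per regime, then selecting the weight by the current atmospheric state, is the simplest instance of situation-dependent error correction.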

  16. Counterfeit Electronics Detection Using Image Processing and Machine Learning

    NASA Astrophysics Data System (ADS)

    Asadizanjani, Navid; Tehranipoor, Mark; Forte, Domenic

    2017-01-01

    Counterfeiting is an increasing concern for businesses and governments as greater numbers of counterfeit integrated circuits (IC) infiltrate the global market. There is an ongoing effort in experimental and national labs inside the United States to detect and prevent such counterfeits as efficiently as possible. However, a missing piece remains: automatically detecting counterfeit ICs and properly keeping records of them. Here, we introduce a web application database that allows users to share previous examples of counterfeits through an online database and to obtain statistics regarding the prevalence of known defects. We also investigate automated techniques based on image processing and machine learning to detect different physical defects and to determine whether or not an IC is counterfeit.

  17. Quantum Entanglement in Neural Network States

    NASA Astrophysics Data System (ADS)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-04-01

    Machine learning, one of today's most rapidly growing interdisciplinary fields, promises an unprecedented perspective for solving intricate quantum many-body problems. Understanding the physical aspects of the representative artificial neural-network states has recently become highly desirable in the applications of machine-learning techniques to quantum many-body physics. In this paper, we explore the data structures that encode the physical features in the network states by studying the quantum entanglement properties, with a focus on the restricted-Boltzmann-machine (RBM) architecture. We prove that the entanglement entropy of all short-range RBM states satisfies an area law for arbitrary dimensions and bipartition geometry. For long-range RBM states, we show by using an exact construction that such states could exhibit volume-law entanglement, implying a notable capability of RBM in representing quantum states with massive entanglement. Strikingly, the neural-network representation for these states is remarkably efficient, in the sense that the number of nonzero parameters scales only linearly with the system size. We further examine the entanglement properties of generic RBM states by randomly sampling the weight parameters of the RBM. We find that their averaged entanglement entropy obeys volume-law scaling, while at the same time strongly deviating from the Page entropy of completely random pure states. We show that their entanglement spectrum has no universal part associated with random matrix theory and bears Poisson-type level statistics. Using reinforcement learning, we demonstrate that RBM is capable of finding the ground state (with power-law entanglement) of a model Hamiltonian with a long-range interaction. In addition, we show, through a concrete example of the one-dimensional symmetry-protected topological cluster states, that the RBM representation may also be used as a tool to analytically compute the entanglement spectrum. Our results uncover the unparalleled power of artificial neural networks in representing quantum many-body states regardless of how much entanglement they possess, which paves a novel way to bridge computer-science-based machine-learning techniques to outstanding quantum condensed-matter physics problems.

  18. Information integration and diagnosis analysis of equipment status and production quality for machining process

    NASA Astrophysics Data System (ADS)

    Zan, Tao; Wang, Min; Hu, Jianzhong

    2010-12-01

    Machining status monitoring with multiple sensors can acquire and analyze machining process information to implement abnormality diagnosis and fault warning. Statistical quality control techniques are normally used to distinguish abnormal fluctuations from normal fluctuations through statistical methods. In this paper, by comparing the advantages and disadvantages of the two methods, the necessity and feasibility of their integration and fusion are introduced. An approach that integrates multi-sensor status monitoring and statistical process control, based on artificial intelligence, internet and database techniques, is then proposed. Based on virtual instrument techniques, the authors developed the machining quality assurance system MoniSysOnline, which has been used to monitor the grinding process. By analyzing the quality data and AE signal information of the wheel dressing process, the cause of machining quality fluctuation has been obtained. The experiment result indicates that the approach is suitable for status monitoring and analysis of the machining process.
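
    The statistical process control side can be sketched as a Shewhart-style control chart: flag a measurement as an abnormal fluctuation when it falls outside mean ± 3 sigma of an in-control reference sample (the data below are illustrative, not the grinding experiment's):

```python
import statistics

def control_limits(in_control, k=3.0):
    """Shewhart-style control limits from an in-control reference sample."""
    mu = statistics.fmean(in_control)
    sd = statistics.pstdev(in_control)
    return mu - k * sd, mu + k * sd

baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.2, 9.8]  # illustrative quality data
lo, hi = control_limits(baseline)
# new measurements: only values outside the limits are flagged as abnormal
flagged = [x for x in [10.1, 9.9, 12.5] if x < lo or x > hi]
```

    The integration described above would combine such chart alarms with sensor-signal features (e.g. AE signal energy) before deciding to issue a fault warning.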

  19. Correlated randomness: Some examples of exotic statistical physics

    NASA Astrophysics Data System (ADS)

    Stanley, H. Eugene

    2005-05-01

    One challenge of biology, medicine, and economics is that the systems treated by these sciences have no perfect metronome in time and no perfect spatial architecture -- crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. To understand this 'miracle', one might consider placing aside the human tendency to see the universe as a machine. Instead, one might address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at many spatial and temporal patterns in biology, medicine, and economics. Inspired by principles developed by statistical physics over the past 50 years -- scale invariance and universality -- we review some recent applications of correlated randomness to fields that might startle Boltzmann if he were alive today.

  20. Physical properties of wild mango fruit and nut

    NASA Astrophysics Data System (ADS)

    Ehiem, J.; Simonyan, K.

    2012-02-01

    Physical properties of two wild mango varieties were studied at 81.9 and 24.5% moisture (w.b.) for the fruits and nuts, respectively. The shape and size of the fruits are the same while those of the nuts differ at P = 0.05. The mass, density and bulk density of the fruits are statistically different at P = 0.05 but the volume is the same. The shape and size, volume and bulk density of the nuts are statistically the same at P = 0.05. The nuts of both varieties are also the same at P = 0.05 in terms of mass and density. The packing factor for both fruits and nuts of the two varieties is the same at 0.95. The relevant data obtained for the two varieties would be useful for the design and development of machines and equipment for processing and handling operations.

  1. Physics of Electronic Materials

    NASA Astrophysics Data System (ADS)

    Rammer, Jørgen

    2017-03-01

    1. Quantum mechanics; 2. Quantum tunneling; 3. Standard metal model; 4. Standard conductor model; 5. Electric circuit theory; 6. Quantum wells; 7. Particle in a periodic potential; 8. Bloch currents; 9. Crystalline solids; 10. Semiconductor doping; 11. Transistors; 12. Heterostructures; 13. Mesoscopic physics; 14. Arithmetic, logic and machines; Appendix A. Principles of quantum mechanics; Appendix B. Dirac's delta function; Appendix C. Fourier analysis; Appendix D. Classical mechanics; Appendix E. Wave function properties; Appendix F. Transfer matrix properties; Appendix G. Momentum; Appendix H. Confined particles; Appendix I. Spin and quantum statistics; Appendix J. Statistical mechanics; Appendix K. The Fermi-Dirac distribution; Appendix L. Thermal current fluctuations; Appendix M. Gaussian wave packets; Appendix N. Wave packet dynamics; Appendix O. Screening by symmetry method; Appendix P. Commutation and common eigenfunctions; Appendix Q. Interband coupling; Appendix R. Common crystal structures; Appendix S. Effective mass approximation; Appendix T. Integral doubling formula; Bibliography; Index.

  2. System Acquires Data On Reactivities Of Foams

    NASA Technical Reports Server (NTRS)

    Walls, Joe T.

    1994-01-01

    A data-acquisition and plotting system, called DAPS(TM), was developed to enable accurate and objective determination of physical properties related to the reactivities of polyurethane and polyisocyanurate foams. It is an automated, computer-controlled test apparatus that acquires data on rates of rise, rise profiles, exothermic temperatures, and internal pressures of foams prepared from both manual and machine-mixed batches. The data are used to determine minute differences between the reaction kinetics and exothermic profiles of foam formulations, and between properties of end products that are otherwise statistically undifferentiated.

  3. Proceedings of the Workshop on Change of Representation and Problem Reformulation

    NASA Technical Reports Server (NTRS)

    Lowry, Michael R.

    1992-01-01

    The proceedings of the third Workshop on Change of Representation and Problem Reformulation are presented. In contrast to the first two workshops, this workshop focused on analytic or knowledge-based approaches, as opposed to the statistical or empirical approaches called 'constructive induction'. The organizing committee believes that there is a potential for combining analytic and inductive approaches at a future date. However, it became apparent at the previous two workshops that the communities pursuing these different approaches are currently interested in largely non-overlapping issues. The constructive induction community has been holding its own workshops, principally in conjunction with the machine learning conference. While this workshop is more focused on analytic approaches, the organizing committee has made an effort to include more application domains. We have greatly expanded from our origins in the machine learning community. Participants in this workshop come from the full spectrum of AI application domains including planning, qualitative physics, software engineering, knowledge representation, and machine learning.

  4. Parameterizing Phrase Based Statistical Machine Translation Models: An Analytic Study

    ERIC Educational Resources Information Center

    Cer, Daniel

    2011-01-01

    The goal of this dissertation is to determine the best way to train a statistical machine translation system. I first develop a state-of-the-art machine translation system called Phrasal and then use it to examine a wide variety of potential learning algorithms and optimization criteria and arrive at two very surprising results. First, despite the…

  5. Evaluation of machinability and flexural strength of a novel dental machinable glass-ceramic.

    PubMed

    Qin, Feng; Zheng, Shucan; Luo, Zufeng; Li, Yong; Guo, Ling; Zhao, Yunfeng; Fu, Qiang

    2009-10-01

    To evaluate the machinability and flexural strength of a novel dental machinable glass-ceramic (named PMC), and to compare its machinability with that of Vita Mark II and human enamel. The raw batch materials were selected and mixed. Four groups of novel glass-ceramics were formed at different nucleation temperatures, and were assigned to Group 1, Group 2, Group 3 and Group 4. The machinability of the four groups of novel glass-ceramics, Vita Mark II ceramic and freshly extracted human premolars was compared by means of drilling depth measurement. A three-point bending test was used to measure the flexural strength of the novel glass-ceramics. The crystalline phases of the group with the best machinability were identified by X-ray diffraction. In terms of drilling depth, Group 2 of the novel glass-ceramics proved to have the largest drilling depth. There was no statistical difference among Group 1, Group 4 and the natural teeth. The drilling depth of Vita Mark II was statistically less than that of Group 1, Group 4 and the natural teeth. Group 3 had the least drilling depth. With respect to flexural strength, Group 2 exhibited the maximum flexural strength; Group 1 was statistically weaker than Group 2; there was no statistical difference between Group 3 and Group 4, and they were the weakest materials. XRD of the Group 2 ceramic showed that a new type of dental machinable glass-ceramic containing calcium mica had been developed in the present study; it was named PMC. PMC is promising for application as a dental machinable ceramic due to its good machinability and relatively high strength.
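
    For reference, the three-point bending test computes flexural strength from the failure load and specimen geometry via sigma = 3FL/(2bd^2); the numbers below are illustrative, not the study's measurements:

```python
def flexural_strength(load_n, span_mm, width_mm, thickness_mm):
    """Three-point bending flexural strength, sigma = 3FL / (2*b*d^2).

    With load in N and dimensions in mm, the result is in MPa.
    """
    return 3.0 * load_n * span_mm / (2.0 * width_mm * thickness_mm ** 2)

# Illustrative bar specimen: 100 N failure load, 20 mm span, 4 x 2 mm cross-section.
sigma = flexural_strength(load_n=100.0, span_mm=20.0, width_mm=4.0, thickness_mm=2.0)
```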

  6. Financial Statistics. Higher Education General Information Survey (HEGIS) [machine-readable data file].

    ERIC Educational Resources Information Center

    Center for Education Statistics (ED/OERI), Washington, DC.

    The Financial Statistics machine-readable data file (MRDF) is a subfile of the larger Higher Education General Information Survey (HEGIS). It contains basic financial statistics for over 3,000 institutions of higher education in the United States and its territories. The data are arranged sequentially by institution, with institutional…

  7. Can Statistical Machine Learning Algorithms Help for Classification of Obstructive Sleep Apnea Severity to Optimal Utilization of Polysomnography Resources?

    PubMed

    Bozkurt, Selen; Bostanci, Asli; Turhan, Murat

    2017-08-11

    The goal of this study was to evaluate machine learning methods for classifying the OSA severity of patients with suspected sleep-disordered breathing as normal, mild, moderate or severe, based on non-polysomnographic variables: 1) clinical data, 2) symptoms and 3) physical examination. To produce classification models for OSA severity, five machine learning methods (Bayesian network, decision tree, random forest, neural network and logistic regression) were trained, while relevant variables and their relationships were derived empirically from observed data. Each model was trained and evaluated using 10-fold cross-validation, and classification performance was assessed using the true positive rate (TPR), false positive rate (FPR), positive predictive value (PPV), F-measure and area under the receiver operating characteristic curve (ROC-AUC). Results of the 10-fold cross-validated tests with different variable settings promisingly indicated that the OSA severity of suspected OSA patients can be classified from non-polysomnographic features, with a best true positive rate of 0.71 and a lowest false positive rate of 0.15. Moreover, the test results for different variable settings revealed that the accuracy of the classification models improved significantly when physical examination variables were added. The study results showed that machine learning methods can be used to estimate the probabilities of no, mild, moderate and severe obstructive sleep apnea; such approaches may improve initial OSA screening and help refer only suspected moderate or severe OSA patients to sleep laboratories for the expensive tests.
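
    The metrics named in this record all derive from the four cells of a binary confusion matrix. A minimal sketch (not the authors' code; the counts below are hypothetical, chosen only so that TPR = 0.71 and FPR = 0.15 match the figures quoted):

```python
# Hedged sketch: TPR, FPR, PPV and F-measure from a binary confusion
# matrix. The counts are invented for illustration.

def classification_metrics(tp, fp, tn, fn):
    """Return TPR (sensitivity), FPR, PPV (precision) and F-measure."""
    tpr = tp / (tp + fn)                  # true positive rate
    fpr = fp / (fp + tn)                  # false positive rate
    ppv = tp / (tp + fp)                  # positive predictive value
    f_measure = 2 * ppv * tpr / (ppv + tpr)
    return tpr, fpr, ppv, f_measure

# Hypothetical counts for one severity class in a 10-fold test:
tpr, fpr, ppv, f_measure = classification_metrics(tp=71, fp=15, tn=85, fn=29)
print(round(tpr, 2), round(fpr, 2))  # -> 0.71 0.15
```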

  8. Cook-Levin Theorem Algorithmic-Reducibility/Completeness = Wilson Renormalization-(Semi)-Group Fixed-Points; ``Noise''-Induced Phase-Transitions (NITs) to Accelerate Algorithmics (``NIT-Picking'') REPLACING CRUTCHES!!!: Models: Turing-machine, finite-state-models, finite-automata

    NASA Astrophysics Data System (ADS)

    Young, Frederic; Siegel, Edward

    Cook-Levin theorem algorithmic computational-complexity (C-C) algorithmic-equivalence reducibility/completeness equivalence to renormalization-(semi)-group phase-transitions critical-phenomena statistical-physics universality-classes fixed-points is exploited via Siegel FUZZYICS = CATEGORYICS = ANALOGYICS = PRAGMATYICS/CATEGORY-SEMANTICS ONTOLOGY COGNITION ANALYTICS-Aristotle ``square-of-opposition'' tabular list-format truth-table matrix analytics, which predicts and implements ``noise''-induced phase-transitions (NITs) to accelerate versus decelerate Harel [Algorithmics (1987)]-Sipser [Intro. Thy. Computation ('97)] algorithmic C-C: ``NIT-picking''(!!!), to optimize optimization-problems optimally (OOPO). Versus iso-``noise'' power-spectrum quantitative-only amplitude/magnitude-only variation stochastic-resonance, ``NIT-picking'' is ``noise'' power-spectrum QUALitative-type variation via quantitative critical-exponents variation. Computer-``science''/SEANCE algorithmic C-C models (Turing-machine, finite-state-models, finite-automata, ..., discrete-maths graph-theory equivalence to physics Feynman-diagrams) are identified as early-days once-workable but now limiting IMPEDING CRUTCHES(!!!) that ONLY IMPEDE latter-days new-insights!!!

  9. Tool geometry and damage mechanisms influencing CNC turning efficiency of Ti6Al4V

    NASA Astrophysics Data System (ADS)

    Suresh, Sangeeth; Hamid, Darulihsan Abdul; Yazid, M. Z. A.; Nasuha, Nurdiyanah; Ain, Siti Nurul

    2017-12-01

    Ti6Al4V, or Grade 5 titanium alloy, is widely used in the aerospace, medical, automotive and fabrication industries due to its distinctive combination of mechanical and physical properties. Ti6Al4V has, however, always been difficult to machine, ironically because of the same mix of properties. Machining Ti6Al4V has resulted in short cutting-tool life, which has led to objectionable surface integrity and rapid failure of the machined parts. Nevertheless, the proven functional relevance of this material has prompted extensive research into the optimization of machine parameters and cutting-tool characteristics. Cutting-tool geometry plays a vital role in ensuring dimensional and geometric accuracy in machined parts. In this study, an experimental investigation was carried out to optimize the nose radius and relief angle of the cutting tools and their interaction with different levels of machining parameters. The low elastic modulus and low thermal conductivity of Ti6Al4V contribute to rapid tool damage, and the impact of these properties on tool-tip damage is studied. A design-of-experiments approach was applied to the CNC turning of Ti6Al4V to statistically analyze and propose optimum levels of the input parameters so as to lengthen tool life and enhance the surface characteristics of the machined parts. A greater tool nose radius with a straight flank, combined with low feed rates, resulted in desirable surface integrity. The presence of a relief angle was found to aggravate tool damage and dimensional instability in the CNC turning of Ti6Al4V.

  10. High Throughput Determination of Mercury in Tobacco and Mainstream Smoke from Little Cigars

    PubMed Central

    Fresquez, Mark R.; Gonzalez-Jimenez, Nathalie; Gray, Naudia; Watson, Clifford H.; Pappas, R. Steven

    2015-01-01

    A method was developed that utilizes a platinum trap for mercury from mainstream tobacco smoke which represents an improvement over traditional approaches that require impingers and long sample preparation procedures. In this approach, the trapped mercury is directly released for analysis by heating the trap in a direct mercury analyzer. The method was applied to the analysis of mercury in the mainstream smoke of little cigars. The mercury levels in little cigar smoke obtained under Health Canada Intense smoking machine conditions ranged from 7.1 × 10⁻³ mg/m³ to 1.2 × 10⁻² mg/m³. These air mercury levels exceed the chronic inhalation Minimal Risk Level corrected for intermittent exposure to metallic mercury (e.g., 1 or 2 hours per day, 5 days per week) determined by the Agency for Toxic Substances and Disease Registry. Multivariate statistical analysis was used to assess associations between mercury levels and little cigar physical design properties. Filter ventilation was identified as the principal physical parameter influencing mercury concentrations in mainstream little cigar smoke generated under ISO machine smoking conditions. With filter ventilation blocked under Health Canada Intense smoking conditions, mercury concentrations in tobacco and puff number (smoke volume) were the primary physical parameters that influenced mainstream smoke mercury concentrations. PMID:26051388

  11. Fusing Data Mining, Machine Learning and Traditional Statistics to Detect Biomarkers Associated with Depression

    PubMed Central

    Dipnall, Joanna F.

    2016-01-01

    Background Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study. Methods The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression to identify key biomarkers associated with depression in the National Health and Nutrition Examination Study (2009–2010). Depression was measured using the Patient Health Questionnaire-9, and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed weighted multiple logistic regression model included possible confounders and moderators. Results After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers was selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin and the Mexican American/Hispanic group (p = 0.016), and with current smokers (p < 0.001). 
Conclusion The systematic use of a hybrid variable selection methodology, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and the complex survey sampling methodology, and proved a useful tool for detecting three biomarkers associated with depression for future hypothesis generation: red cell distribution width, serum glucose and total bilirubin. PMID:26848571

  12. Fusing Data Mining, Machine Learning and Traditional Statistics to Detect Biomarkers Associated with Depression.

    PubMed

    Dipnall, Joanna F; Pasco, Julie A; Berk, Michael; Williams, Lana J; Dodd, Seetal; Jacka, Felice N; Meyer, Denny

    2016-01-01

    Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study. The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression to identify key biomarkers associated with depression in the National Health and Nutrition Examination Study (2009-2010). Depression was measured using the Patient Health Questionnaire-9, and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed weighted multiple logistic regression model included possible confounders and moderators. After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers was selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin and the Mexican American/Hispanic group (p = 0.016), and with current smokers (p < 0.001). 
The systematic use of a hybrid variable selection methodology, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and the complex survey sampling methodology, and proved a useful tool for detecting three biomarkers associated with depression for future hypothesis generation: red cell distribution width, serum glucose and total bilirubin.
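
    The odds ratios and confidence intervals reported above follow from exponentiating logistic-regression coefficients. A small stdlib-only sketch; the coefficient and standard error are invented to land near the red-cell-distribution-width result, not taken from the study:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% CI from a logistic coefficient and its std. error."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical coefficient/SE, chosen to give roughly OR 1.15 (95% CI 1.01, 1.30):
or_, lo, hi = odds_ratio_ci(beta=0.140, se=0.064)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```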

  13. The effect of CNC and manual laser machining on electrical resistance of HDPE/MWCNT composite

    NASA Astrophysics Data System (ADS)

    Mohammadi, Fatemeh; Farshbaf Zinati, Reza; Fattahi, A. M.

    2018-05-01

    In this study, the electrical conductivity of high-density polyethylene (HDPE)/multi-walled carbon nanotube (MWCNT) composite was investigated after laser machining. To this end, nano-composite samples produced by a plastic injection process were laser machined with various combinations of input parameters: feed rate (35, 45 and 55 mm/min), feed angle with respect to the injection flow direction (0°, 45° and 90°), and MWCNT content (0.5, 1 and 1.5 wt%). The angle between the laser feed and the injected flow direction was set by either of two methods: CNC programming or manual setting. The results showed that, in the manual setting, both the angle between the laser line and the melt-flow direction and the feed rate had statistically significant effects on the electrical resistance of the samples. Maximum conductivity was observed when the angle between the laser line and the melt-flow direction was set to 90° in the manual setting, and at a feed rate of 55 mm/min in both CNC programming and manual setting.

  14. Machine learning of frustrated classical spin models. I. Principal component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Ce; Zhai, Hui

    2017-10-01

    This work aims at determining whether artificial intelligence can recognize a phase transition without prior human knowledge. If this were successful, it could be applied to, for instance, analyzing data from the quantum simulation of unsolved physical models. Toward this goal, we first need to apply the machine learning algorithm to well-understood models and see whether the outputs are consistent with our prior knowledge, which serves as the benchmark for this approach. In this work, we feed the computer data generated by the classical Monte Carlo simulation for the XY model on frustrated triangular and union-jack lattices, which has two order parameters and exhibits two phase transitions. We show that the outputs of the principal component analysis agree very well with our understanding of different orders in different phases, and the temperature dependences of the major components detect the nature and the locations of the phase transitions. Our work offers promise for using machine learning techniques to study sophisticated statistical models, and our results can be further improved by using principal component analysis with kernel tricks and the neural network method.
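
    The mechanism the abstract describes can be sketched on synthetic data (this is a hedged illustration, not the paper's XY-model configurations): principal component analysis via SVD separates "ordered" from "disordered" spin samples, which is how principal components flag a phase transition.

```python
# Hedged sketch: PCA on synthetic Ising-like spin samples. "Ordered"
# samples mimic a low-temperature phase, "disordered" a high-temperature
# one; the first principal component separates the two groups.
import numpy as np

rng = np.random.default_rng(0)
n_sites = 100
# Low-temperature-like samples: aligned spins plus small noise.
ordered = np.ones((50, n_sites)) + 0.1 * rng.standard_normal((50, n_sites))
# High-temperature-like samples: random +-1 spins.
disordered = rng.choice([-1.0, 1.0], size=(50, n_sites))

X = np.vstack([ordered, disordered])
Xc = X - X.mean(axis=0)                  # center each site (feature)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]                         # projection onto first component

# The two groups cluster at opposite ends of the first component.
separation = abs(pc1[:50].mean() - pc1[50:].mean())
print(separation > 5.0)  # -> True
```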

  15. Broiler weight estimation based on machine vision and artificial neural network.

    PubMed

    Amraei, S; Abdanan Mehdizadeh, S; Salari, S

    2017-04-01

    1. Machine vision and artificial neural network (ANN) procedures were used to estimate the live body weight of broiler chickens in 30 1-d-old broilers reared for 42 d. 2. Imaging was performed twice daily. To localise chickens within the pen, an ellipse-fitting algorithm was used, and the chickens' head and tail were removed using the Chan-Vese method. 3. Of the 6 extracted physical features, 5 (area, perimeter, convex area, and major and minor axis length) were strongly correlated with body weight. 5. According to statistical analysis, there was no significant difference between morning and afternoon data over the 42 d. 6. In an attempt to improve the accuracy of live-weight approximation, different ANN training techniques were compared: Bayesian regulation, Levenberg-Marquardt, scaled conjugate gradient and gradient descent. Bayesian regulation, with an R² value of 0.98, was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined and most errors were less than 50 g.
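
    The feature-weight relationship the ANN exploits is, at its core, a correlation between image measurements and mass. An illustrative sketch with made-up numbers (not the study's data), computing Pearson's r between projected body area and live weight with the standard library only:

```python
# Hedged sketch: Pearson correlation between a hypothetical image feature
# (projected area) and body weight. All values are invented.
import math

area   = [110, 150, 200, 260, 330, 410]   # cm^2, hypothetical
weight = [120, 180, 260, 350, 470, 590]   # g, hypothetical

n = len(area)
mean_a, mean_w = sum(area) / n, sum(weight) / n
cov = sum((a - mean_a) * (w - mean_w) for a, w in zip(area, weight))
r = cov / math.sqrt(sum((a - mean_a) ** 2 for a in area)
                    * sum((w - mean_w) ** 2 for w in weight))
print(round(r, 3))  # strong positive correlation
```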

  16. A Survey of Statistical Machine Translation

    DTIC Science & Technology

    2007-04-01

    methods are notoriously sensitive to domain differences, however, so the move to informal text is likely to present many interesting challenges ... Och, Christoph Tillman, and Hermann Ney. Improved alignment models for statistical machine translation. In Proc. of EMNLP-VLC, pages 20–28, Jun 1999

  17. Reversibility in Quantum Models of Stochastic Processes

    NASA Astrophysics Data System (ADS)

    Gier, David; Crutchfield, James; Mahoney, John; James, Ryan

    Natural phenomena such as time series of neural firing, orientation of layers in crystal stacking and successive measurements in spin-systems are inherently probabilistic. The provably minimal classical models of such stochastic processes are ɛ-machines, which consist of internal states, transition probabilities between states and output values. The topological properties of the ɛ-machine for a given process characterize the structure, memory and patterns of that process. However ɛ-machines are often not ideal because their statistical complexity (Cμ) is demonstrably greater than the excess entropy (E) of the processes they represent. Quantum models (q-machines) of the same processes can do better in that their statistical complexity (Cq) obeys the relation Cμ ≥ Cq ≥ E. q-machines can be constructed to consider longer lengths of strings, resulting in greater compression. With code-words of sufficiently long length, the statistical complexity becomes time-symmetric - a feature apparently novel to this quantum representation. This result has ramifications for compression of classical information in quantum computing and quantum communication technology.
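
    The statistical complexity Cμ mentioned here is the Shannon entropy of the ɛ-machine's stationary distribution over causal states. A stdlib-only sketch, using the textbook "golden mean" process (no two 1s in a row) as an assumed example rather than any process from this record:

```python
# Hedged sketch: C_mu = Shannon entropy of the stationary distribution over
# causal states. The two-state machine below is the golden-mean process.
import math

def stationary(P, iters=200):
    """Power-iterate a row-stochastic matrix to its stationary distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def shannon_entropy(dist):
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Causal states A, B: from A emit 0 or 1 with prob 1/2 (a 1 leads to B);
# from B emit 0 with prob 1 (back to A).
P = [[0.5, 0.5],
     [1.0, 0.0]]
pi = stationary(P)          # -> [2/3, 1/3]
C_mu = shannon_entropy(pi)  # ~0.918 bits
print(round(C_mu, 3))
```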

  18. "Pack[superscript2]": VM Resource Scheduling for Fine-Grained Application SLAs in Highly Consolidated Environment

    ERIC Educational Resources Information Center

    Sukwong, Orathai

    2013-01-01

    Virtualization enables the ability to consolidate multiple servers on a single physical machine, increasing the infrastructure utilization. Maximizing the ratio of server virtual machines (VMs) to physical machines, namely the consolidation ratio, becomes an important goal toward infrastructure cost saving in a cloud. However, the consolidation…

  19. Science 101: Q--What Is the Physics behind Simple Machines?

    ERIC Educational Resources Information Center

    Robertson, Bill

    2013-01-01

    Bill Robertson thinks that questioning the physics behind simple machines is a great idea because when he encounters the subject of simple machines in textbooks, activities, and classrooms, he seldom encounters, a scientific explanation of how they work. Instead, what one often sees is a discussion of load, effort, fulcrum, actual mechanical…

  20. Reliability of the hospital nutrition environment scan for cafeterias, vending machines, and gift shops.

    PubMed

    Winston, Courtney P; Sallis, James F; Swartz, Michael D; Hoelscher, Deanna M; Peskin, Melissa F

    2013-08-01

    According to ecological models, the physical environment plays a major role in determining individual health behaviors. As such, researchers have started targeting the consumer nutrition environment of large-scale foodservice operations when implementing obesity-prevention programs. In 2010, the American Hospital Association released a call-to-action encouraging health care facilities to join in this movement and improve their facilities' consumer nutrition environments. The Hospital Nutrition Environment Scan (HNES) for Cafeterias, Vending Machines, and Gift Shops was developed in 2011, and the present study evaluated the inter-rater reliability of this instrument. Two trained raters visited 39 hospitals in southern California and completed the HNES. Percent agreement, kappa statistics, and intraclass correlation coefficients were calculated. Percent agreement between raters ranged from 74.4% to 100% and kappa statistics ranged from 0.458 to 1.0. The intraclass correlation coefficient for the overall nutrition composite scores was 0.961. Given these results, the HNES demonstrated acceptable reliability metrics and can now be disseminated to assess the current state of hospital consumer nutrition environments. Copyright © 2013 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  1. Statistical learning algorithms for identifying contrasting tillage practices with landsat thematic mapper data

    USDA-ARS?s Scientific Manuscript database

    Tillage management practices have direct impact on water holding capacity, evaporation, carbon sequestration, and water quality. This study examines the feasibility of two statistical learning algorithms, namely the Least Squares Support Vector Machine (LSSVM) and the Relevance Vector Machine (RVM), for cla...

  2. Machine Learning in Medicine

    PubMed Central

    Deo, Rahul C.

    2015-01-01

    Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games – tasks which would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning attention to problems in healthcare. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades – and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and discuss how these might be overcome. PMID:26572668

  3. The Trail Making test: a study of its ability to predict falls in the acute neurological in-patient population.

    PubMed

    Mateen, Bilal Akhter; Bussas, Matthias; Doogan, Catherine; Waller, Denise; Saverino, Alessia; Király, Franz J; Playford, E Diane

    2018-05-01

    To determine whether tests of cognitive function and patient-reported outcome measures of motor function can be used to create a machine learning-based predictive tool for falls. Prospective cohort study. Tertiary neurological and neurosurgical center. In all, 337 in-patients receiving neurosurgical, neurological, or neurorehabilitation-based care. Measures were a binary (Y/N) indicator of falling during the in-patient episode, the Trail Making Test (a measure of attention and executive function) and the Walk-12 (a patient-reported measure of physical function). The principal outcome was a fall during the in-patient stay (n = 54). The Trail Making Test was identified as the best predictor of falls; moreover, the addition of other variables did not improve the prediction (Wilcoxon signed-rank P < 0.001). Classical linear statistical modeling methods were then compared with more recent machine learning-based strategies, for example, random forests, neural networks and support vector machines. The random forest was the best modeling strategy when utilizing just the Trail Making Test data (Wilcoxon signed-rank P < 0.001), with 68% (±7.7) sensitivity and 90% (±2.3) specificity. This study identifies a simple yet powerful machine learning (random forest) based predictive model for an in-patient neurological population, utilizing a single neuropsychological test of cognitive function, the Trail Making Test.
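
    With a single continuous predictor, a random forest reduces to bagged one-feature threshold "stumps". A toy stdlib-only sketch of that idea (all numbers invented, not patient data; this is not the study's implementation):

```python
# Hedged sketch: bootstrap-aggregated threshold stumps on one feature,
# the degenerate single-predictor case of a random forest.
import random

def fit_stump(xs, ys):
    """Pick the threshold minimizing misclassifications (predict 1 above it)."""
    best_t, best_err = None, float("inf")
    for t in xs:
        err = sum((x > t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(xs, ys, x_new, n_trees=25, seed=1):
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_trees):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]  # bootstrap sample
        t = fit_stump([xs[i] for i in idx], [ys[i] for i in idx])
        votes += x_new > t
    return votes > n_trees / 2          # majority vote

# Hypothetical data: longer Trail Making times (seconds) among fallers (1).
times = [30, 35, 40, 42, 55, 80, 90, 95, 110, 120]
fell  = [0,  0,  0,  0,  0,  1,  1,  1,  1,   1]
print(bagged_predict(times, fell, 100))  # votes "faller" for a slow time
```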

  4. Computational Approaches to Chemical Hazard Assessment

    PubMed Central

    Luechtefeld, Thomas; Hartung, Thomas

    2018-01-01

    Summary Computational prediction of toxicity has reached new heights as a result of decades of growth in the magnitude and diversity of biological data. Public packages for statistics and machine learning make model creation faster. New theory in machine learning and cheminformatics enables integration of chemical structure, toxicogenomics, simulated and physical data in the prediction of chemical health hazards, and other toxicological information. Our earlier publications have characterized a toxicological dataset of unprecedented scale resulting from the European REACH legislation (Registration Evaluation Authorisation and Restriction of Chemicals). These publications dove into potential use cases for regulatory data and some models for exploiting this data. This article analyzes the options for the identification and categorization of chemicals, moves on to the derivation of descriptive features for chemicals, discusses different kinds of targets modeled in computational toxicology, and ends with a high-level perspective of the algorithms used to create computational toxicology models. PMID:29101769

  5. BLS Machine-Readable Data and Tabulating Routines.

    ERIC Educational Resources Information Center

    DiFillipo, Tony

    This report describes the machine-readable data and tabulating routines that the Bureau of Labor Statistics (BLS) is prepared to distribute. An introduction discusses the LABSTAT (Labor Statistics) database and the BLS policy on release of unpublished data. Descriptions summarizing data stored in 25 files follow this format: overview, data…

  6. Study of a Variable Mass Atwood's Machine Using a Smartphone

    ERIC Educational Resources Information Center

    Lopez, Dany; Caprile, Isidora; Corvacho, Fernando; Reyes, Orfa

    2018-01-01

    The Atwood machine was invented in 1784 by George Atwood and this system has been widely studied both theoretically and experimentally over the years. Nowadays, it is commonplace that many experimental physics courses include both Atwood's machine and variable mass to introduce more complex concepts in physics. To study the dynamics of the masses…

  7. Calibration of raw accelerometer data to measure physical activity: A systematic review.

    PubMed

    de Almeida Mendes, Márcio; da Silva, Inácio C M; Ramires, Virgílio V; Reichert, Felipe F; Martins, Rafaela C; Tomasi, Elaine

    2018-03-01

    Most calibration studies based on accelerometry have used count-based analyses. In contrast, calibration studies based on raw acceleration signals are relatively recent and the evidence for them is incipient. The aim of the current study was to systematically review the literature in order to summarize the methodological characteristics and results of raw-data calibration studies. The review was conducted up to May 2017 using four databases: PubMed, Scopus, SPORTDiscus and Web of Science. The methodological quality of the included studies was evaluated using the Landis and Koch guidelines. Initially, 1669 titles were identified and, after assessing titles, abstracts and full articles, 20 studies were included. All studies were conducted in high-income countries, most of them with relatively small samples and specific population groups. Physical activity protocols differed among studies, and indirect calorimetry was the criterion measure most often used. High mean values of sensitivity, specificity and accuracy were observed for the intensity thresholds of cut-point-based studies (93.7%, 91.9% and 95.8%, respectively). The most frequent statistical approach was machine learning-based modelling, with a mean coefficient of determination of 0.70 for predicting physical activity energy expenditure. Regarding the recognition of physical activity types, the mean accuracies for sedentary, household and locomotive activities were 82.9%, 55.4% and 89.7%, respectively. In conclusion, considering the construct of physical activity that each approach assesses, linear regression, machine learning and cut-point-based approaches presented promising validity parameters. Copyright © 2017 Elsevier B.V. All rights reserved.
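
    The cut-point approach the review describes amounts to a lookup of an acceleration summary value against fixed intensity thresholds. A hedged sketch; the thresholds and the ENMO-style metric are illustrative assumptions, not values endorsed by the review:

```python
# Hedged sketch: classifying an acceleration summary (mg) into physical
# activity intensity via fixed cut-points. Thresholds are invented.

CUT_POINTS = [                  # (upper bound in mg, label), checked in order
    (50, "sedentary"),
    (150, "light"),
    (500, "moderate"),
    (float("inf"), "vigorous"),
]

def classify(accel_mg):
    for upper, label in CUT_POINTS:
        if accel_mg < upper:
            return label

print([classify(v) for v in (10, 100, 300, 700)])
# -> ['sedentary', 'light', 'moderate', 'vigorous']
```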

  8. On the Stability of Jump-Linear Systems Driven by Finite-State Machines with Markovian Inputs

    NASA Technical Reports Server (NTRS)

    Patilkulkarni, Sudarshan; Herencia-Zapana, Heber; Gray, W. Steven; Gonzalez, Oscar R.

    2004-01-01

    This paper presents two mean-square stability tests for a jump-linear system driven by a finite-state machine with a first-order Markovian input process. The first test is based on conventional Markov jump-linear theory and avoids the use of any higher-order statistics. The second test is developed directly using the higher-order statistics of the machine's output process. The two approaches are illustrated with a simple model for a recoverable computer control system.

  9. High Energy Colliders

    NASA Astrophysics Data System (ADS)

    Palmer, R. B.; Gallardo, J. C.

    INTRODUCTION; PHYSICS CONSIDERATIONS: GENERAL, REQUIRED LUMINOSITY FOR LEPTON COLLIDERS, THE EFFECTIVE PHYSICS ENERGIES OF HADRON COLLIDERS; HADRON-HADRON MACHINES: LUMINOSITY, SIZE AND COST; CIRCULAR e^{+}e^- MACHINES: LUMINOSITY, SIZE AND COST; e^{+}e^- LINEAR COLLIDERS: LUMINOSITY, CONVENTIONAL RF, SUPERCONDUCTING RF, AT HIGHER ENERGIES; γ-γ COLLIDERS; μ^{+}μ^- COLLIDERS: ADVANTAGES AND DISADVANTAGES, DESIGN STUDIES, STATUS AND REQUIRED R AND D; COMPARISON OF MACHINES; CONCLUSIONS; DISCUSSION

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Angers, Crystal Plume; Bottema, Ryan; Buckley, Les

    Purpose: Treatment unit uptime statistics are typically used to monitor radiation equipment performance. The Ottawa Hospital Cancer Centre has introduced the use of Quality Control (QC) test success as a quality indicator for equipment performance and the overall health of the equipment QC program. Methods: Implemented in 2012, QATrack+ is used to record and monitor over 1100 routine machine QC tests each month for 20 treatment and imaging units (http://qatrackplus.com/). Using an SQL (structured query language) script, automated queries of the QATrack+ database generate program metrics such as the number of QC tests executed and the percentage of tests passing, at tolerance or at action. These metrics are compared against machine uptime statistics already reported within the program. Results: Program metrics for 2015 show good correlation between the pass rate of QC tests and uptime for a given machine. For the nine conventional linacs, the QC test success rate was consistently greater than 97%. The corresponding uptimes for these units are better than 98%. Machines that consistently show higher failure or tolerance rates in the QC tests have lower uptimes. This points to either poor machine performance requiring corrective action or to problems with the QC program. Conclusions: QATrack+ significantly improves the organization of QC data but can also aid in overall equipment management. Complementing machine uptime statistics with QC test metrics provides a more complete picture of overall machine performance and can be used to identify areas of improvement in the machine service and QC programs.
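
    The kind of automated SQL aggregation described can be sketched with an in-memory database. The schema and the counts below are invented for illustration and do not match QATrack+'s actual tables:

```python
# Hedged sketch: aggregating QC test outcomes into per-unit pass-rate
# metrics with a single SQL GROUP BY, using a made-up one-table schema.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE qc_test (unit TEXT, status TEXT)")
con.executemany("INSERT INTO qc_test VALUES (?, ?)",
                [("linac1", "ok")] * 97
                + [("linac1", "tolerance")] * 2
                + [("linac1", "action")])
row = con.execute("""
    SELECT unit,
           COUNT(*) AS n_tests,
           100.0 * SUM(status = 'ok') / COUNT(*) AS pct_pass
    FROM qc_test GROUP BY unit
""").fetchone()
print(row)  # -> ('linac1', 100, 97.0)
```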

  11. Effect of Width of Kerf on Machining Accuracy and Subsurface Layer After WEDM

    NASA Astrophysics Data System (ADS)

    Mouralova, K.; Kovar, J.; Klakurkova, L.; Prokes, T.

    2018-02-01

    Wire electrical discharge machining is an unconventional machining technology that applies physical principles to material removal. The material is removed by a series of recurring current discharges between the workpiece and the tool electrode, and a `kerf' is created between the wire and the material being machined. The width of the kerf is directly dependent not only on the diameter of the wire used, but also on the machine parameter settings and, in particular, on the set of mechanical and physical properties of the material being machined. To ensure precise machining, it is important to have the width of the kerf as small as possible. The present study deals with the evaluation of the width of the kerf for four different metallic materials (some of which were subsequently heat treated using several methods) with different machine parameter settings. The kerf is investigated on metallographic cross sections using light and electron microscopy.

  12. Basic Machines - The "Nuts and Bolts" of Technical Physics Minicourse, Career Oriented Pre-Technical Physics. Preliminary Edition.

    ERIC Educational Resources Information Center

    Bullock, Bob; And Others

    This minicourse was prepared for use with secondary physics students in the Dallas Independent School District and is one option in a physics program which provides for the selection of topics on the basis of student career needs and interests. This minicourse was aimed at two levels in the study of basic machines. The "light" level…

  13. Machine Learning Approaches for Clinical Psychology and Psychiatry.

    PubMed

    Dwyer, Dominic B; Falkai, Peter; Koutsouleris, Nikolaos

    2018-05-07

    Machine learning approaches for clinical psychology and psychiatry explicitly focus on learning statistical functions from multidimensional data sets to make generalizable predictions about individuals. The goal of this review is to provide an accessible understanding of why this approach is important for future practice given its potential to augment decisions associated with the diagnosis, prognosis, and treatment of people suffering from mental illness using clinical and biological data. To this end, the limitations of current statistical paradigms in mental health research are critiqued, and an introduction is provided to critical machine learning methods used in clinical studies. A selective literature review is then presented aiming to reinforce the usefulness of machine learning methods and provide evidence of their potential. In the context of promising initial results, the current limitations of machine learning approaches are addressed, and considerations for future clinical translation are outlined.

  14. The Statistical Basis of Chemical Equilibria.

    ERIC Educational Resources Information Center

    Hauptmann, Siegfried; Menger, Eva

    1978-01-01

    Describes a machine which demonstrates the statistical bases of chemical equilibrium, and in doing so conveys insight into the connections among statistical mechanics, quantum mechanics, Maxwell Boltzmann statistics, statistical thermodynamics, and transition state theory. (GA)

  15. Adding Statistical Machine Translation Adaptation to Computer-Assisted Translation

    DTIC Science & Technology

    2013-09-01

    are automatically searched and used to suggest possible translations; (2) spell-checkers; (3) glossaries; (4) dictionaries; (5) alignment and...matching against TMs to propose translations; spell-checking, glossary, and dictionary look-up; support for multiple file formats; regular expressions...on Telecommunications. Tehran, 2012, 822–826. Bertoldi, N.; Federico, M. Domain Adaptation for Statistical Machine Translation with Monolingual

  16. Introduction to multivariate discrimination

    NASA Astrophysics Data System (ADS)

    Kégl, Balázs

    2013-07-01

    Multivariate discrimination or classification is one of the best-studied problems in machine learning, with a plethora of well-tested and well-performing algorithms. There are also several good general textbooks [1-9] on the subject, written for an average engineering, computer science, or statistics graduate student; most of them are also accessible to an average physics student with some background in computer science and statistics. Hence, instead of writing a generic introduction, we concentrate here on relating the subject to the practicing experimental physicist. After a short introduction on the basic setup (Section 1) we delve into the practical issues of complexity regularization, model selection, and hyperparameter optimization (Section 2), since it is this step that makes high-complexity non-parametric fitting so different from low-dimensional parametric fitting. To emphasize that this issue is not restricted to classification, we illustrate the concept on a low-dimensional but non-parametric regression example (Section 2.1). Section 3 describes the common algorithmic-statistical formal framework that unifies the main families of multivariate classification algorithms. We explain here the large-margin principle that partly explains why these algorithms work. Section 4 is devoted to the description of the three main (families of) classification algorithms: neural networks, the support vector machine, and AdaBoost. We do not go into the algorithmic details; the goal is to give an overview of the form of the functions these methods learn and of the objective functions they optimize. Besides their technical description, we also make an attempt to put these algorithms into a socio-historical context. We then briefly describe some rather heterogeneous applications to illustrate the pattern recognition pipeline and to show how widespread the use of these methods is (Section 5).
We conclude the chapter with three essentially open research problems that are either relevant to or even motivated by certain unorthodox applications of multivariate discrimination in experimental physics.
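
    Of the algorithm families the chapter surveys, AdaBoost is compact enough to sketch in full. The following pure-Python toy (decision stumps on an invented 1-D dataset, not the chapter's code) shows the reweight-and-vote structure: each round upweights the examples the previous stump got wrong, and the final classifier is a weighted majority vote.

```python
import math

def stump_predictions(xs, threshold, polarity):
    # polarity +1: predict +1 left of the threshold; polarity -1: the reverse.
    return [polarity if x < threshold else -polarity for x in xs]

def train_adaboost(xs, ys, n_rounds):
    """AdaBoost: each round picks the stump with least weighted error,
    then reweights the training examples. Assumes no stump is perfect
    (true for this toy data), so the log term stays finite."""
    n = len(xs)
    weights = [1.0 / n] * n
    thresholds = [x - 0.5 for x in sorted(set(xs))] + [max(xs) + 0.5]
    ensemble = []  # list of (alpha, threshold, polarity)
    for _ in range(n_rounds):
        best = None
        for t in thresholds:
            for polarity in (+1, -1):
                preds = stump_predictions(xs, t, polarity)
                err = sum(w for w, p, y in zip(weights, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, polarity, preds)
        err, t, polarity, preds = best
        alpha = 0.5 * math.log((1.0 - err) / err)  # stump's vote weight
        ensemble.append((alpha, t, polarity))
        # Upweight mistakes, downweight correct examples, renormalize.
        weights = [w * math.exp(-alpha * y * p)
                   for w, p, y in zip(weights, preds, ys)]
        total = sum(weights)
        weights = [w / total for w in weights]
    return ensemble

def predict(ensemble, x):
    score = sum(a * (pol if x < t else -pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1

# Toy 1-D data that no single stump can separate; three rounds suffice.
xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, -1, -1, 1, 1]
model = train_adaboost(xs, ys, n_rounds=3)
print([predict(model, x) for x in xs])
```

    The exponential reweighting is exactly the objective-function view the chapter takes: AdaBoost performs coordinate descent on an exponential loss over weighted votes.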

  17. Machine Learning Algorithms Outperform Conventional Regression Models in Predicting Development of Hepatocellular Carcinoma

    PubMed Central

    Singal, Amit G.; Mukherjee, Ashin; Elmunzer, B. Joseph; Higgins, Peter DR; Lok, Anna S.; Zhu, Ji; Marrero, Jorge A; Waljee, Akbar K

    2015-01-01

    Background: Predictive models for hepatocellular carcinoma (HCC) have been limited by modest accuracy and lack of validation. Machine learning algorithms offer a novel methodology, which may improve HCC risk prognostication among patients with cirrhosis. Our study's aim was to develop and compare predictive models for HCC development among cirrhotic patients, using conventional regression analysis and machine learning algorithms. Methods: We enrolled 442 patients with Child A or B cirrhosis at the University of Michigan between January 2004 and September 2006 (UM cohort) and prospectively followed them until HCC development, liver transplantation, death, or study termination. Regression analysis and machine learning algorithms were used to construct predictive models for HCC development, which were tested on an independent validation cohort from the Hepatitis C Antiviral Long-term Treatment against Cirrhosis (HALT-C) Trial. Both models were also compared to the previously published HALT-C model. Discrimination was assessed using receiver operating characteristic curve analysis and diagnostic accuracy was assessed with net reclassification improvement and integrated discrimination improvement statistics. Results: After a median follow-up of 3.5 years, 41 patients developed HCC. The UM regression model had a c-statistic of 0.61 (95%CI 0.56-0.67), whereas the machine learning algorithm had a c-statistic of 0.64 (95%CI 0.60–0.69) in the validation cohort. The machine learning algorithm had significantly better diagnostic accuracy as assessed by net reclassification improvement (p<0.001) and integrated discrimination improvement (p=0.04). The HALT-C model had a c-statistic of 0.60 (95%CI 0.50-0.70) in the validation cohort and was outperformed by the machine learning algorithm (p=0.047). Conclusion: Machine learning algorithms improve the accuracy of risk stratifying patients with cirrhosis and can be used to accurately identify patients at high-risk for developing HCC.
PMID:24169273
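
    The c-statistic reported above is the probability that a randomly chosen case receives a higher predicted risk than a randomly chosen control (equivalently, the area under the ROC curve). A minimal sketch, with invented risk scores rather than the study's data:

```python
def c_statistic(case_scores, control_scores):
    """Probability that a randomly chosen case outranks a randomly chosen
    control; concordant pairs count 1, exact ties count 1/2."""
    concordant = 0.0
    for s_case in case_scores:
        for s_ctrl in control_scores:
            if s_case > s_ctrl:
                concordant += 1.0
            elif s_case == s_ctrl:
                concordant += 0.5
    return concordant / (len(case_scores) * len(control_scores))

# Hypothetical risk scores: patients who developed HCC vs. those who did not.
cases = [0.8, 0.6]
controls = [0.5, 0.4, 0.6]
print(round(c_statistic(cases, controls), 4))
```

    A value of 0.5 corresponds to chance-level discrimination, which is why the 0.60-0.64 figures above represent only modest separation.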

  18. Machine learning–enabled identification of material phase transitions based on experimental data: Exploring collective dynamics in ferroelectric relaxors

    DOE PAGES

    Li, Linglong; Yang, Yaodong; Zhang, Dawei; ...

    2018-03-30

    Exploration of phase transitions and construction of associated phase diagrams are of fundamental importance for condensed matter physics and materials science alike, and remain the focus of extensive research for both theoretical and experimental studies. For the latter, comprehensive studies involving scattering, thermodynamics, and modeling are typically required. We present a new approach to data mining multiple realizations of collective dynamics, measured through piezoelectric relaxation studies, to identify the onset of a structural phase transition in nanometer-scale volumes, that is, the probed volume of an atomic force microscope tip. Machine learning is used to analyze the multidimensional data sets describing relaxation to voltage and thermal stimuli, producing the temperature-bias phase diagram for a relaxor crystal without the need to measure (or know) the order parameter. The suitability of the approach to determine the phase diagram is shown with simulations based on a two-dimensional Ising model. Finally, these results indicate that machine learning approaches can be used to determine phase transitions in ferroelectrics, providing a general, statistically significant, and robust approach toward determining the presence of critical regimes and phase boundaries.
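
    The two-dimensional Ising model used for validation can itself be sketched in a few lines. The following minimal Metropolis simulation (pure Python, J = k_B = 1, lattice size and sweep counts chosen only for illustration) shows the order-parameter contrast the phase-diagram construction relies on: |m| per spin stays near 1 well below the critical temperature (~2.27) and collapses well above it.

```python
import math
import random

def ising_magnetization(L, T, sweeps, burn_in, seed=0):
    """Metropolis simulation of an L x L Ising lattice with periodic
    boundaries (J = k_B = 1); returns |magnetization| per spin averaged
    over the post-burn-in sweeps."""
    rng = random.Random(seed)
    spins = [[1] * L for _ in range(L)]  # start fully ordered
    samples = []
    for sweep in range(sweeps):
        for i in range(L):
            for j in range(L):
                nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                      + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
                dE = 2.0 * spins[i][j] * nn  # energy cost of flipping (i, j)
                if dE <= 0 or rng.random() < math.exp(-dE / T):
                    spins[i][j] = -spins[i][j]
        if sweep >= burn_in:
            m = sum(map(sum, spins)) / (L * L)
            samples.append(abs(m))
    return sum(samples) / len(samples)

# Below T_c (~2.27) the lattice stays ordered; well above it, order is lost.
m_cold = ising_magnetization(L=10, T=1.0, sweeps=200, burn_in=100)
m_hot = ising_magnetization(L=10, T=5.0, sweeps=200, burn_in=100)
print(round(m_cold, 3), round(m_hot, 3))
```

    In the paper's setting the interesting point is that the transition is located from relaxation data without ever computing this order parameter; the simulation merely provides ground truth.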

  20. VIEW EAST-LEFT-BUILDING 2 PHYSICAL TESTING HOUSE (1928) RIGHT-BUILDING 7 MACHINE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW EAST-LEFT-BUILDING 2 PHYSICAL TESTING HOUSE (1928) RIGHT-BUILDING 7 MACHINE SHOP (1901 SECTION) - John A. Roebling's Sons Company & American Steel & Wire Company, South Broad, Clark, Elmer, Mott & Hudson Streets, Trenton, Mercer County, NJ

  1. Performance of thigh-mounted triaxial accelerometer algorithms in objective quantification of sedentary behaviour and physical activity in older adults

    PubMed Central

    Wullems, Jorgen A.; Verschueren, Sabine M. P.; Degens, Hans; Morse, Christopher I.; Onambélé, Gladys L.

    2017-01-01

    Accurate monitoring of sedentary behaviour and physical activity is key to investigating their exact role in healthy ageing. To date, accelerometers using cut-off point models are the preferred tool for this; however, machine learning seems a highly promising alternative. Hence, the current study compared cut-off point and machine learning algorithms for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. In a heterogeneous sample of forty participants (aged ≥60 years, 50% female), energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry while participants wore triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and to examine and benchmark both overall and participant-specific balanced accuracies. This revealed that all four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models: it was robust to all individuals' physiological and non-physiological characteristics and showed acceptable performance over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry. PMID:29155839
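
    The balanced accuracy used for benchmarking is the mean of the per-class recalls, so a model cannot score well by favouring the majority class. A small sketch with an invented cut-off point rule (the counts-per-minute values and the 500 cpm threshold are made up for illustration, not taken from the study):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; robust to class imbalance."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        correct = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(correct / len(idx))
    return sum(recalls) / len(classes)

# Hypothetical accelerometer counts-per-minute, classified as sedentary
# behaviour ('SB') vs moderate-to-vigorous activity ('MVPA') by a single
# fixed cut-off point. One SB bout (600 cpm) is misclassified.
cpm = [40, 80, 120, 600, 900, 1500, 2200, 60, 1100]
truth = ['SB', 'SB', 'SB', 'SB', 'MVPA', 'MVPA', 'MVPA', 'SB', 'MVPA']
cutoff_pred = ['MVPA' if c >= 500 else 'SB' for c in cpm]
print(balanced_accuracy(truth, cutoff_pred))
```

    A learned model such as a Random Forest replaces the single threshold with a decision function over many features, which is what gives it the robustness the abstract reports.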

  3. Machine Learning in Medicine.

    PubMed

    Deo, Rahul C

    2015-11-17

    Spurred by advances in processing power, memory, storage, and an unprecedented wealth of data, computers are being asked to tackle increasingly complex learning tasks, often with astonishing success. Computers have now mastered a popular variant of poker, learned the laws of physics from experimental data, and become experts in video games - tasks that would have been deemed impossible not too long ago. In parallel, the number of companies centered on applying complex data analysis to varying industries has exploded, and it is thus unsurprising that some analytic companies are turning attention to problems in health care. The purpose of this review is to explore what problems in medicine might benefit from such learning approaches and use examples from the literature to introduce basic concepts in machine learning. It is important to note that seemingly large enough medical data sets and adequate learning algorithms have been available for many decades, and yet, although there are thousands of papers applying machine learning algorithms to medical data, very few have contributed meaningfully to clinical care. This lack of impact stands in stark contrast to the enormous relevance of machine learning to many other industries. Thus, part of my effort will be to identify what obstacles there may be to changing the practice of medicine through statistical learning approaches, and discuss how these might be overcome. © 2015 American Heart Association, Inc.

  4. Uniting Cheminformatics and Chemical Theory To Predict the Intrinsic Aqueous Solubility of Crystalline Druglike Molecules

    PubMed Central

    2014-01-01

    We present four models of solution free-energy prediction for druglike molecules utilizing cheminformatics descriptors and theoretically calculated thermodynamic values. We make predictions of solution free energy using physics-based theory alone and using machine learning/quantitative structure–property relationship (QSPR) models. We also develop machine learning models where the theoretical energies and cheminformatics descriptors are used as combined input. These models are used to predict solvation free energy. While direct theoretical calculation does not give accurate results in this approach, machine learning is able to give predictions with a root mean squared error (RMSE) of ∼1.1 log S units in a 10-fold cross-validation for our Drug-Like-Solubility-100 (DLS-100) dataset of 100 druglike molecules. We find that a model built using energy terms from our theoretical methodology as descriptors is marginally less predictive than one built on Chemistry Development Kit (CDK) descriptors. Combining both sets of descriptors allows a further but very modest improvement in the predictions. However, in some cases, this is a statistically significant enhancement. These results suggest that there is little complementarity between the chemical information provided by these two sets of descriptors, despite their different sources and methods of calculation. Our machine learning models are also able to predict the well-known Solubility Challenge dataset with an RMSE value of 0.9–1.0 log S units. PMID:24564264
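
    The 10-fold cross-validated RMSE quoted above can be illustrated with a stripped-down sketch: hold out one fold at a time, fit on the rest, and pool the held-out squared errors. The single-descriptor least-squares fit and the data below are invented stand-ins for the paper's multi-descriptor QSPR models.

```python
import math

def kfold_cv_rmse(xs, ys, k):
    """k-fold cross-validated RMSE for a one-descriptor least-squares fit
    (a toy stand-in for a multi-descriptor QSPR model)."""
    n = len(xs)
    folds = [list(range(i, n, k)) for i in range(k)]  # interleaved folds
    sq_errors = []
    for fold in folds:
        train = [i for i in range(n) if i not in fold]
        mx = sum(xs[i] for i in train) / len(train)
        my = sum(ys[i] for i in train) / len(train)
        slope = (sum((xs[i] - mx) * (ys[i] - my) for i in train)
                 / sum((xs[i] - mx) ** 2 for i in train))
        intercept = my - slope * mx
        sq_errors += [(ys[i] - (slope * xs[i] + intercept)) ** 2 for i in fold]
    return math.sqrt(sum(sq_errors) / n)

# Invented data: a noisy linear descriptor-to-logS relationship.
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0]
ys = [-1.1, -1.9, -3.2, -3.8, -5.1, -5.9, -7.2, -7.8, -9.1, -10.1]
print(round(kfold_cv_rmse(xs, ys, k=5), 3))
```

    Because every prediction is made on data the model never saw, this estimate is the appropriate one to compare against the ~1.1 log S figure in the abstract.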

  5. Analysis of Machine Learning Techniques for Heart Failure Readmissions.

    PubMed

    Mortazavi, Bobak J; Downing, Nicholas S; Bucholz, Emily M; Dharmarajan, Kumar; Manhapra, Ajay; Li, Shu-Xia; Negahban, Sahand N; Krumholz, Harlan M

    2016-11-01

    The current ability to predict readmissions in patients with heart failure is modest at best. It is unclear whether machine learning techniques that address higher dimensional, nonlinear relationships among variables would enhance prediction. We sought to compare the effectiveness of several machine learning algorithms for predicting readmissions. Using data from the Telemonitoring to Improve Heart Failure Outcomes trial, we compared the effectiveness of random forests, boosting, random forests combined hierarchically with support vector machines or logistic regression (LR), and Poisson regression against traditional LR to predict 30- and 180-day all-cause readmissions and readmissions because of heart failure. We randomly selected 50% of patients for a derivation set, and a validation set comprised the remaining patients, validated using 100 bootstrapped iterations. We compared C statistics for discrimination and distributions of observed outcomes in risk deciles for predictive range. In 30-day all-cause readmission prediction, the best performing machine learning model, random forests, provided a 17.8% improvement over LR (mean C statistics, 0.628 and 0.533, respectively). For readmissions because of heart failure, boosting improved the C statistic by 24.9% over LR (mean C statistic 0.678 and 0.543, respectively). For 30-day all-cause readmission, the observed readmission rates in the lowest and highest deciles of predicted risk with random forests (7.8% and 26.2%, respectively) showed a much wider separation than LR (14.2% and 16.4%, respectively). Machine learning methods improved the prediction of readmission after hospitalization for heart failure compared with LR and provided the greatest predictive range in observed readmission rates. © 2016 American Heart Association, Inc.

  6. Interpreting support vector machine models for multivariate group wise analysis in neuroimaging

    PubMed Central

    Gaonkar, Bilwaj; Shinohara, Russell T; Davatzikos, Christos

    2015-01-01

    Machine learning based classification algorithms like support vector machines (SVMs) have shown great promise for turning high-dimensional neuroimaging data into clinically useful decision criteria. However, tracing imaging based patterns that contribute significantly to classifier decisions remains an open problem. This is an issue of critical importance in imaging studies seeking to determine which anatomical or physiological imaging features contribute to the classifier’s decision, thereby allowing users to critically evaluate the findings of such machine learning methods and to understand disease mechanisms. The majority of published work addresses the question of statistical inference for support vector classification using permutation tests based on SVM weight vectors. Such permutation testing ignores the SVM margin, which is critical in SVM theory. In this work we emphasize the use of a statistic that explicitly accounts for the SVM margin and show that the null distributions associated with this statistic are asymptotically normal. Further, our experiments show that this statistic is much less conservative than weight-based permutation tests, yet specific enough to tease out multivariate patterns in the data. Thus, we can better understand the multivariate patterns that the SVM uses for neuroimaging based classification. PMID:26210913
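
    The permutation-testing machinery at issue is simple to sketch: recompute the statistic under random relabelings to build a null distribution. The statistic below is an absolute difference in group means, a deliberately simple stand-in for the margin-aware SVM statistic the paper develops, and the feature values are invented.

```python
import random

def permutation_test(group_a, group_b, n_perm=999, seed=1):
    """Two-sample permutation test. The statistic here (absolute difference
    in group means) is a toy stand-in for a margin-aware SVM statistic."""
    def stat(a, b):
        return abs(sum(a) / len(a) - sum(b) / len(b))
    observed = stat(group_a, group_b)
    pooled = list(group_a) + list(group_b)
    rng = random.Random(seed)
    n_extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of subjects
        if stat(pooled[:len(group_a)], pooled[len(group_a):]) >= observed:
            n_extreme += 1
    return (n_extreme + 1) / (n_perm + 1)  # add-one-smoothed p-value

# Invented "feature" values for two clearly separated subject groups.
patients = [5.1, 4.9, 5.0, 5.2]
controls = [1.0, 1.1, 0.9, 1.2]
print(permutation_test(patients, controls))
```

    The paper's contribution is to replace permutation entirely where possible: since its margin-based statistic is asymptotically normal under the null, p-values can be read off a known distribution rather than resampled.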

  7. The Physics of Ultrabroadband Frequency Comb Generation and Optimized Combs for Measurements in Fundamental Physics

    DTIC Science & Technology

    2016-07-02

    Applications of Bessel beams: superresolution machining. The threshold effect of ablation means that structure diameter is less than the beam diameter; fs pulses at 800 nm yield 200 nm. Approved for public release: distribution unlimited.

  8. Changes in the physical status of the typical and leached chernozems of Kursk oblast within 40 years

    NASA Astrophysics Data System (ADS)

    Kuznetsova, I. V.

    2013-04-01

    The changes in the physical properties of the chernozems in the Central Russian province of the forest-steppe zone (Kursk oblast) that took place from 1964 to 2002 are analyzed in relation to the corresponding changes in the agrotechnology, agroeconomy, and agroecology. Three periods of the soil transformation are distinguished. The first period was characterized by the use of machines with relatively small pressure on the soil and by the dynamic equilibrium between the physical state of the soils and the processes of the humification-mineralization of the soil organic matter. The use of power-intensive machines in the next period resulted in greater soil compaction with negative changes in the soil physical properties. At the same time, the physical properties of the chernozems remained close to optimum on the fields where heavy machines were not used. The third period was characterized by the use of heavy machines and by the decrease in the rates of the organic and mineral fertilizers and certain disturbances in the crop rotation systems because of the economic difficulties. The negative tendencies of the changes in the soil physical properties observed during the preceding period continued.

  9. Application of artificial neural network to search for gravitational-wave signals associated with short gamma-ray bursts

    NASA Astrophysics Data System (ADS)

    Kim, Kyungmin; Harry, Ian W.; Hodge, Kari A.; Kim, Young-Min; Lee, Chang-Hwan; Lee, Hyun Kyu; Oh, John J.; Oh, Sang Hoon; Son, Edwin J.

    2015-12-01

    We apply a machine learning algorithm, the artificial neural network, to the search for gravitational-wave signals associated with short gamma-ray bursts (GRBs). The multi-dimensional samples consisting of data corresponding to the statistical and physical quantities from the coherent search pipeline are fed into the artificial neural network to distinguish simulated gravitational-wave signals from background noise artifacts. Our result shows that the data classification efficiency at a fixed false alarm probability (FAP) is improved by the artificial neural network in comparison to the conventional detection statistic. Specifically, the distance at 50% detection probability at a fixed false positive rate is increased about 8%-14% for the considered waveform models. We also evaluate a few seconds of the gravitational-wave data segment using the trained networks and obtain the FAP. We suggest that the artificial neural network can be a complementary method to the conventional detection statistic for identifying gravitational-wave signals related to the short GRBs.
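
    Evaluating a classifier at a fixed false alarm probability (FAP), as done above, amounts to setting the detection threshold from noise-only data and then measuring the fraction of signal events that exceed it. A minimal sketch with invented detection-statistic values (not the pipeline's actual outputs):

```python
def detection_probability_at_fap(background, signals, fap):
    """Choose a detection threshold from noise-only (background) statistic
    values so that roughly a fraction `fap` of background events exceed it,
    then report the fraction of signal events above that threshold."""
    ranked = sorted(background)
    k = max(1, int(fap * len(ranked) + 0.5))  # allowed background exceedances
    threshold = ranked[len(ranked) - k]
    return sum(1 for s in signals if s > threshold) / len(signals)

# Invented statistic values for noise-only data and for injected signals.
background = [i / 100 for i in range(100)]  # 0.00 ... 0.99
signals = [0.90, 0.97, 0.99, 0.50, 0.96]
print(detection_probability_at_fap(background, signals, fap=0.05))
```

    Comparing two methods at the same FAP, as the abstract does for the neural network versus the conventional detection statistic, keeps the false alarm budget fixed so any gain shows up purely as detection probability.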

  10. Evaluating the Security of Machine Learning Algorithms

    DTIC Science & Technology

    2008-05-20

    Two far-reaching trends in computing have grown in significance in recent years. First, statistical machine learning has entered the mainstream as a...computing applications. The growing intersection of these trends compels us to investigate how well machine learning performs under adversarial conditions... machine learning has a structure that we can use to build secure learning systems. This thesis makes three high-level contributions. First, we develop a

  11. The Physics and Physical Chemistry of Molecular Machines.

    PubMed

    Astumian, R Dean; Mukherjee, Shayantani; Warshel, Arieh

    2016-06-17

    The concept of a "power stroke" (a free-energy-releasing conformational change) appears in almost every textbook that deals with the molecular details of muscle, the flagellar rotor, and many other biomolecular machines. Here, it is shown by using the constraints of microscopic reversibility that the power stroke model is incorrect as an explanation of how chemical energy is used by a molecular machine to do mechanical work. Instead, chemically driven molecular machines operating under thermodynamic constraints imposed by the reactant and product concentrations in the bulk function as information ratchets, in which the directionality and stopping torque or stopping force are controlled entirely by the gating of the chemical reaction that provides the fuel for the machine. The gating of the chemical free energy occurs through chemical-state-dependent conformational changes of the molecular machine that, in turn, are capable of generating directional mechanical motions. In strong contrast to this general conclusion for molecular machines driven by catalysis of a chemical reaction, a power stroke may be (and often is) an essential component for a molecular machine driven by external modulation of pH or redox potential or by light. This difference between optical and chemical driving properties arises from the fundamental symmetry difference between the physics of optical processes, governed by the Bose-Einstein relations, and the constraints of microscopic reversibility for thermally activated processes. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Projected Regression Methods for Inverting Fredholm Integrals: Formalism and Application to Analytical Continuation

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-Francois; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    We present a machine learning-based statistical regression approach to the inversion of Fredholm integrals of the first kind by studying an important example for the quantum materials community, the analytical continuation problem of quantum many-body physics. It involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned and yields robust error metrics. The stability of the forward problem permits the construction of a large database of input-output pairs. Machine learning methods applied to this database generate approximate solutions which are projected onto the subspace of functions satisfying relevant constraints. We show that for low input noise the method performs as well or better than Maximum Entropy (MaxEnt) under standard error metrics, and is substantially more robust to noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved. AJM was supported by the Office of Science of the U.S. Department of Energy under Subcontract No. 3F-3138 and LFA by the Columbia University IDS-ROADS project, UR009033-05, which also provided part support to RN and LH.
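
    Why an ill-conditioned inversion needs regularization can be shown on a deliberately tiny example. The 2x2 matrix below is an invented stand-in for a discretized smoothing Fredholm kernel: direct inversion amplifies a 1e-4 perturbation of the data into an order-one error, while a ridge-regularized solve stays near the true answer.

```python
def solve2(A, b):
    """Direct 2x2 solve via Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def ridge2(A, b, lam):
    """Regularized solve: (A^T A + lam*I) x = A^T b, for a 2x2 system."""
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) + (lam if i == j else 0)
            for j in range(2)] for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    return solve2(AtA, Atb)

# Nearly singular toy "kernel" matrix (invented for illustration).
A = [[1.0, 1.0], [1.0, 1.0001]]
x_true = [1.0, 1.0]
b_noisy = [2.0, 2.0002]  # exact data would be [2.0, 2.0001]; noise of 1e-4

x_naive = solve2(A, b_noisy)            # unregularized inversion blows up
x_ridge = ridge2(A, b_noisy, lam=0.01)  # regularized solve stays near truth
print([round(v, 3) for v in x_naive], [round(v, 3) for v in x_ridge])
```

    The statistical-regression approach in the abstract obtains a comparable damping of noise-amplified directions implicitly, by learning the inverse map from a database of well-posed forward solutions.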

  13. The East, the West and the universal machine.

    PubMed

    Marchal, Bruno

    2017-12-01

    After reviewing the basics of the theology of Universal Numbers/Machines, as detailed in Marchal (2007), I illustrate how that body of thought might be used to shed some light upon the apparent dichotomy in Eastern/Western spirituality. This paper relies entirely on my previous interdisciplinary work in mathematical logic, computer science and machine's theology, where "theology" is used here in the sense of Plato: it is the truth, or the "truth-theory" (in the sense of logicians), about a machine that the machine can either deduce from some of its primitive beliefs, or can intuit in some sense that eventually is made clear through the modal logic of machine self-reference. Such a theology appears to be testable, because it has been shown that physics has to be necessarily retrieved from it when we assume the mechanist hypothesis in the cognitive sciences, and this in a unique precise (introspective) way, so that we only need to compare the physics of the introspective machine with the physics inferred from human observation; and up to now, it is the only theory known to fit both the existence of personal "consciousness" (undoubtable yet unjustifiable truth) and quanta and quantum relationships (Marchal, 1998; Marchal, 2004; Marchal, 2013; Marchal, 2015). Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. A plea for "variational neuroethology". Comment on "Answering Schrödinger's question: A free-energy formulation" by M.J. Desormeau Ramstead et al.

    NASA Astrophysics Data System (ADS)

    Daunizeau, Jean

    2018-03-01

    What is life? According to Erwin Schrödinger [13], the living cell departs from other physical systems in that it - apparently - resists the second law of thermodynamics by restricting the dynamical repertoire (minimizing the entropy) of its physiological states. This is a physical rephrasing of Claude Bernard's biological notion of homeostasis, namely: the capacity of living systems to self-organize in order to maintain the stability of their internal milieu despite uninterrupted exchanges with an ever-altering external environment [2]. The important point here is that physical systems can neither identify nor prevent a state of high entropy. The Free Energy Principle or FEP was originally proposed as a mathematical description of how the brain actually solves this issue [4]. In line with the Bayesian brain hypothesis, the FEP views the brain as a hierarchical statistical learning machine, endowed with the imperative of minimizing Free Energy, i.e. prediction error. Action prescription under the FEP, however, does not follow standard Bayesian decision theory. Rather, action is assumed to further minimize Free Energy, which makes the active brain a self-fulfilling prophecy machine [6]. This is adaptive, under the assumption that evolution has equipped the brain with innate priors centered on homeostatic set points. In turn, avoiding (surprising) violations of such prior predictions implements homeostatic regulation [10], which becomes increasingly anticipatory as learning unfolds over the course of ontological development [5].

  15. Children's Beliefs about the Fantasy/Reality Status of Hypothesized Machines

    ERIC Educational Resources Information Center

    Cook, Claire; Sobel, David M.

    2011-01-01

    Four-year-olds, 6-year-olds, and adults were asked to make judgments about the reality status of four different types of machines: real machines that children and adults interact with on a daily basis, real machines that children and adults interact with rarely (if at all), and impossible machines that violated a real-world physical or biological…

  16. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning.

    PubMed

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-06-17

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-valued Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox and 91.75% for the bearing system. The proposed approach is compared to standard methods such as a support vector machine, a GRBM and a combination model. In the experiments, the best fault classification rate was achieved by the proposed model. The results show that deep learning with statistical feature extraction has considerable potential for improving the diagnosis of rotating machinery faults.
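The time-domain part of a statistical feature set of the kind described above can be sketched as follows (a minimal illustrative subset; the paper's exact feature list may differ):

```python
import math

def time_domain_features(x):
    """Common time-domain statistics used as vibration features:
    mean, standard deviation, RMS, skewness, kurtosis, crest factor."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    rms = math.sqrt(sum(v * v for v in x) / n)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3)
    kurt = sum((v - mean) ** 4 for v in x) / (n * std ** 4)  # non-excess kurtosis
    crest = max(abs(v) for v in x) / rms
    return {"mean": mean, "std": std, "rms": rms,
            "skewness": skew, "kurtosis": kurt, "crest": crest}

# A clean sinusoid as a stand-in vibration signal (10 full periods):
# theory gives RMS = 1/sqrt(2), kurtosis = 1.5, crest factor = sqrt(2).
signal = [math.sin(2 * math.pi * 10 * i / 1000) for i in range(1000)]
feats = time_domain_features(signal)
print(feats["rms"], feats["kurtosis"], feats["crest"])
```

Faulty bearings and gears typically raise kurtosis and crest factor well above these sinusoidal baselines, which is why such statistics make useful inputs to the learning stage.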

  17. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning

    PubMed Central

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-01-01

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-valued Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox and 91.75% for the bearing system. The proposed approach is compared to standard methods such as a support vector machine, a GRBM and a combination model. In the experiments, the best fault classification rate was achieved by the proposed model. The results show that deep learning with statistical feature extraction has considerable potential for improving the diagnosis of rotating machinery faults. PMID:27322273

  18. Development of a HIPAA-compliant environment for translational research data and analytics.

    PubMed

    Bradford, Wayne; Hurdle, John F; LaSalle, Bernie; Facelli, Julio C

    2014-01-01

    High-performance computing centers (HPC) traditionally have far less restrictive privacy management policies than those encountered in healthcare. We show how an HPC can be re-engineered to accommodate clinical data while retaining its utility in computationally intensive tasks such as data mining, machine learning, and statistics. We also discuss deploying protected virtual machines. A critical planning step was to engage the university's information security operations and the information security and privacy office. Access to the environment requires a double authentication mechanism. The first level of authentication requires access to the university's virtual private network and the second requires that the users be listed in the HPC network information service directory. The physical hardware resides in a data center with controlled room access. All employees of the HPC and its users take the university's local Health Insurance Portability and Accountability Act training series. In the first 3 years, researcher count has increased from 6 to 58.

  19. Selected aspects of microelectronics technology and applications: Numerically controlled machine tools. Technology trends series no. 2

    NASA Astrophysics Data System (ADS)

    Sigurdson, J.; Tagerud, J.

    1986-05-01

    A UNIDO publication about machine tools with automatic control discusses the following: (1) numerical control (NC) machine tool perspectives, the definition of NC, flexible manufacturing systems, robots and their industrial application, research and development, and sensors; (2) experience in developing a capability in NC machine tools; (3) policy issues; and (4) procedures for retrieving relevant documentation from databases. Diagrams, statistics, and a bibliography are included.

  20. Learning Simple Machines through Cross-Age Collaborations

    ERIC Educational Resources Information Center

    Lancor, Rachael; Schiebel, Amy

    2008-01-01

    In this project, introductory college physics students (noneducation majors) were asked to teach simple machines to a class of second graders. This nontraditional activity proved to be a successful way to encourage college students to think critically about physics and how it applied to their everyday lives. The noneducation majors benefited by…

  1. Support vector machines classifiers of physical activities in preschoolers

    USDA-ARS?s Scientific Manuscript database

    The goal of this study is to develop, test, and compare multinomial logistic regression (MLR) and support vector machines (SVM) in classifying preschool-aged children physical activity data acquired from an accelerometer. In this study, 69 children aged 3-5 years old were asked to participate in a s...

  2. Statistical machine translation for biomedical text: are we there yet?

    PubMed

    Wu, Cuijun; Xia, Fei; Deleger, Louise; Solti, Imre

    2011-01-01

    In our paper we addressed the research question: "Has machine translation achieved sufficiently high quality to translate PubMed titles for patients?". We analyzed statistical machine translation output for six foreign language-English translation pairs (bi-directionally). We built a high-performing in-house system and evaluated its output for each translation pair at large scale, with both automated BLEU scores and human judgment. In addition to the in-house system, we also evaluated Google Translate's performance specifically within the biomedical domain. We report high performance for the German-, French- and Spanish-English bi-directional translation pairs for both Google Translate and our system.
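For readers unfamiliar with the automated metric mentioned above, a minimal single-sentence BLEU implementation (no smoothing; real evaluations such as this one aggregate n-gram counts over a whole corpus) looks like this:

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Plain BLEU: geometric mean of modified n-gram precisions times a
    brevity penalty. Without smoothing, any zero precision gives 0."""
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # "Modified" precision: clip each n-gram count by the reference count.
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        if overlap == 0:
            return 0.0
        log_prec += math.log(overlap / total) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / len(cand)))  # brevity penalty
    return bp * math.exp(log_prec)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))   # 1.0
print(bleu("dogs bark loudly at night", "the cat sat on the mat"))  # 0.0
```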

  3. Research in Chinese-English Machine Translation. Final Report.

    ERIC Educational Resources Information Center

    Wang, William S-Y.; And Others

    This report documents results of a two-year effort toward the study and investigation of the design of a prototype system for Chinese-English machine translation in the general area of physics. Previous work in Chinese-English machine translation is reviewed. Grammatical considerations in machine translation are discussed and detailed aspects of…

  4. Special Issue: Big data and predictive computational modeling

    NASA Astrophysics Data System (ADS)

    Koutsourelakis, P. S.; Zabaras, N.; Girolami, M.

    2016-09-01

    The motivation for this special issue stems from the symposium on "Big Data and Predictive Computational Modeling" that took place at the Institute for Advanced Study, Technical University of Munich, during May 18-21, 2015. With a mindset firmly grounded in computational discovery, but a polychromatic set of viewpoints, several leading scientists, from physics and chemistry, biology, engineering, applied mathematics, scientific computing, neuroscience, statistics and machine learning, engaged in discussions and exchanged ideas for four days. This special issue contains a subset of the presentations. Video and slides of all the presentations are available on the TUM-IAS website http://www.tum-ias.de/bigdata2015/.

  5. Characterizing Slow Slip Applying Machine Learning

    NASA Astrophysics Data System (ADS)

    Hulbert, C.; Rouet-Leduc, B.; Bolton, D. C.; Ren, C. X.; Marone, C.; Johnson, P. A.

    2017-12-01

    Over the last two decades it has become apparent from strain and GPS measurements, that slow slip on earthquake faults is a widespread phenomenon. Slow slip is also inferred from small amplitude seismic signals known as tremor and low frequency earthquakes (LFEs) and has been reproduced in laboratory studies, providing useful physical insight into the frictional properties associated with the behavior. From such laboratory studies we ask whether we can obtain quantitative information regarding the physics of friction from only the recorded continuous acoustical data originating from the fault zone. We show that by applying machine learning to the acoustical signal, we can infer upcoming slow slip failure initiation as well as the slip termination, and that we can also infer the magnitudes by a second machine learning procedure based on predicted inter-event times. We speculate that by applying this or other machine learning approaches to continuous seismic data, new information regarding the physics of faulting could be obtained.

  6. Machine Learning for Flood Prediction in Google Earth Engine

    NASA Astrophysics Data System (ADS)

    Kuhn, C.; Tellman, B.; Max, S. A.; Schwarz, B.

    2015-12-01

    With the increasing availability of high-resolution satellite imagery, dynamic flood mapping in near real time is becoming a reachable goal for decision-makers. This talk describes a newly developed framework for predicting biophysical flood vulnerability using public data, cloud computing and machine learning. Our objective is to define an approach to flood inundation modeling using statistical learning methods deployed in a cloud-based computing platform. Traditionally, static flood extent maps grounded in physically based hydrologic models can require hours of human expertise to construct, at significant financial cost. In addition, desktop modeling software and limited local server storage can impose restrictions on the size and resolution of input datasets. Data-driven, cloud-based processing holds promise for predictive watershed modeling at a wide range of spatio-temporal scales. However, these benefits come with constraints. In particular, parallel computing limits a modeler's ability to simulate the flow of water across a landscape, rendering traditional routing algorithms unusable on this platform. Our project pushes these limits by testing the performance of two machine learning algorithms, Support Vector Machine (SVM) and Random Forests, at predicting flood extent. Constructed in Google Earth Engine, the model mines a suite of publicly available satellite imagery layers to use as algorithm inputs. Results are cross-validated against MODIS-based flood maps created with the Dartmouth Flood Observatory detection algorithm. Model uncertainty highlights the difficulty of deploying unbalanced training data sets based on rare extreme events.
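The Random Forests idea used above can be shown in miniature with bagged decision stumps on synthetic "pixel" features. The features, thresholds and data below are invented for illustration; a real run would use Google Earth Engine imagery bands:

```python
import random

random.seed(42)

# Toy pixels: two spectral features each; a pixel is "flooded" when a
# hypothetical water index (feature 0) exceeds 0.5, plus label noise.
def make_data(n):
    X, y = [], []
    for _ in range(n):
        f0, f1 = random.random(), random.random()
        label = 1 if f0 + random.gauss(0, 0.05) > 0.5 else 0
        X.append((f0, f1))
        y.append(label)
    return X, y

def fit_stump(X, y):
    """Best single-feature threshold split (with polarity) on a sample."""
    best = None
    for feat in range(2):
        for thr in [i / 20 for i in range(1, 20)]:
            err = sum((1 if x[feat] > thr else 0) != t for x, t in zip(X, y))
            flip = err > len(y) / 2
            err = min(err, len(y) - err)
            if best is None or err < best[0]:
                best = (err, feat, thr, flip)
    return best[1:]

def forest_predict(stumps, x):
    """Majority vote over the ensemble, as in a random forest."""
    votes = 0
    for feat, thr, flip in stumps:
        p = 1 if x[feat] > thr else 0
        votes += (1 - p) if flip else p
    return 1 if votes * 2 >= len(stumps) else 0

X, y = make_data(300)
stumps = []
for _ in range(25):  # bagging: fit one stump per bootstrap resample
    idx = [random.randrange(len(X)) for _ in range(len(X))]
    stumps.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))

X_test, y_test = make_data(200)
acc = sum(forest_predict(stumps, x) == t
          for x, t in zip(X_test, y_test)) / len(y_test)
print(acc)
```

Because each tree (here, stump) sees only a resampled view of the data, the ensemble averages out individual overfitting, which is what makes the method attractive for noisy, unbalanced flood labels.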

  7. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics

    PubMed Central

    2018-01-01

    As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for future communication systems, alleviating the pressure of scarce spectrum resources. For this reason, in this paper we consider multi-hop M2M communications, where a machine-type communication (MTC) device with limited transmit power relays to help other devices using mmWave. Specifically, we focus on hop distance statistics and their impact on system performance in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Unlike in microwave systems, in mmWave communications the wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aimed at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming a blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performance (e.g., outage probability, hop count, and transmit energy) of mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation. PMID:29329248
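The effect of blockage on per-hop progress can be illustrated with a small Monte Carlo sketch. The exponential line-of-sight model and all parameters here are common illustrative assumptions, not the distributions derived in the paper:

```python
import math
import random

random.seed(7)

def mean_hop_distance(beta, max_range=100.0, spacing=10.0, trials=20000):
    """Average per-hop progress when a node relays to the farthest
    candidate (placed every `spacing` m) whose link is line-of-sight.
    P(LOS over distance d) = exp(-beta * d), a common blockage model."""
    candidates = [spacing * k for k in range(1, int(max_range / spacing) + 1)]
    total, count = 0.0, 0
    for _ in range(trials):
        hop = 0.0
        for d in reversed(candidates):          # greedy: try farthest relay first
            if random.random() < math.exp(-beta * d):
                hop = d
                break
        if hop > 0:
            total += hop
            count += 1
    return total / count

blocked = mean_hop_distance(beta=0.02)   # dense obstacles
clear = mean_hop_distance(beta=0.0)      # blockage-free baseline
print(blocked, clear)
```

With no blockage the farthest relay in range is always reachable, so the baseline hop distance equals the maximum range; blockage shrinks per-hop progress and hence inflates end-to-end hop count and energy, which is the effect the paper quantifies analytically.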

  8. Performance Analysis of Millimeter-Wave Multi-hop Machine-to-Machine Networks Based on Hop Distance Statistics.

    PubMed

    Jung, Haejoon; Lee, In-Ho

    2018-01-12

    As an intrinsic part of the Internet of Things (IoT) ecosystem, machine-to-machine (M2M) communications are expected to provide ubiquitous connectivity between machines. Millimeter-wave (mmWave) communication is another promising technology for future communication systems, alleviating the pressure of scarce spectrum resources. For this reason, in this paper we consider multi-hop M2M communications, where a machine-type communication (MTC) device with limited transmit power relays to help other devices using mmWave. Specifically, we focus on hop distance statistics and their impact on system performance in multi-hop wireless networks (MWNs) with directional antenna arrays in mmWave for M2M communications. Unlike in microwave systems, in mmWave communications the wireless channel suffers from blockage by obstacles that heavily attenuate line-of-sight signals, which may result in limited per-hop progress in MWNs. We consider two routing strategies aimed at different types of applications and derive the probability distributions of their hop distances. Moreover, we provide their baseline statistics assuming a blockage-free scenario to quantify the impact of blockages. Based on the hop distance analysis, we propose a method to estimate the end-to-end performance (e.g., outage probability, hop count, and transmit energy) of mmWave MWNs, which provides important insights into mmWave MWN design without time-consuming and repetitive end-to-end simulation.

  9. Cox process representation and inference for stochastic reaction-diffusion processes

    NASA Astrophysics Data System (ADS)

    Schnoerr, David; Grima, Ramon; Sanguinetti, Guido

    2016-05-01

    Complex behaviour in many systems arises from the stochastic interactions of spatially distributed particles or agents. Stochastic reaction-diffusion processes are widely used to model such behaviour in disciplines ranging from biology to the social sciences, yet they are notoriously difficult to simulate and calibrate to observational data. Here we use ideas from statistical physics and machine learning to provide a solution to the inverse problem of learning a stochastic reaction-diffusion process from data. Our solution relies on a non-trivial connection between stochastic reaction-diffusion processes and spatio-temporal Cox processes, a well-studied class of models from computational statistics. This connection leads to an efficient and flexible algorithm for parameter inference and model selection. Our approach shows excellent accuracy on numeric and real data examples from systems biology and epidemiology. Our work provides both insights into spatio-temporal stochastic systems, and a practical solution to a long-standing problem in computational modelling.
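The forward side of such an inference problem, namely exact stochastic simulation of a reaction system, can be sketched with the Gillespie algorithm on the simplest birth-death process (an illustrative stand-in; the paper treats spatial reaction-diffusion systems):

```python
import random

random.seed(1)

def gillespie_birth_death(k_birth, k_death, t_end):
    """Exact stochastic simulation (Gillespie SSA) of the birth-death
    system: 0 -> X at rate k_birth, X -> 0 at rate k_death * n."""
    t, n = 0.0, 0
    times, counts = [0.0], [0]
    while t < t_end:
        a_birth = k_birth
        a_death = k_death * n
        a_total = a_birth + a_death
        t += random.expovariate(a_total)            # waiting time to next event
        n += 1 if random.random() < a_birth / a_total else -1
        times.append(t)
        counts.append(n)
    return times, counts

times, counts = gillespie_birth_death(k_birth=10.0, k_death=0.1, t_end=500.0)

# Time-averaged copy number after a burn-in; the stationary distribution
# is Poisson with mean k_birth / k_death = 100.
avg, span = 0.0, 0.0
for i in range(len(times) - 1):
    if times[i] >= 50.0:
        dt = times[i + 1] - times[i]
        avg += counts[i] * dt
        span += dt
avg /= span
print(avg)
```

Inference methods like the Cox-process approach in this paper work in the opposite direction: given noisy observations of such trajectories (in space and time), they recover the rate parameters.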

  10. On the role of exchange of power and information signals in control and stability of the human-robot interaction

    NASA Technical Reports Server (NTRS)

    Kazerooni, H.

    1991-01-01

    A human's ability to perform physical tasks is limited, not only by his intelligence, but by his physical strength. If, in an appropriate environment, a machine's mechanical power is closely integrated with a human arm's mechanical power under the control of the human intellect, the resulting system will be superior to a loosely integrated combination of a human and a fully automated robot. Therefore, we must develop a fundamental solution to the problem of 'extending' human mechanical power. The work presented here defines 'extenders' as a class of robot manipulators worn by humans to increase human mechanical strength, while the wearer's intellect remains the central control system for manipulating the extender. The human, in physical contact with the extender, exchanges power and information signals with the extender. The aim is to determine the fundamental building blocks of an intelligent controller, a controller which allows interaction between humans and a broad class of computer-controlled machines via simultaneous exchange of both power and information signals. The prevalent trend in automation has been to physically separate the human from the machine, so the human must always send information signals via an intermediary device (e.g., joystick, pushbutton, light switch). Extenders, however, are perfect examples of self-powered machines that are built and controlled for the optimal exchange of power and information signals with humans. The human wearing the extender is in physical contact with the machine, so power transfer is unavoidable, and information signals from the human help to control the machine. Commands are transferred to the extender via the contact forces and the EMG signals between the wearer and the extender. The extender augments human motor ability without accepting any explicit commands: it accepts the EMG signals and the contact force between the person's arm and the extender, and the extender 'translates' them into a desired position. 
In this unique configuration, mechanical power transfer between the human and the extender occurs because the human is pushing against the extender. The extender transfers to the human's hand, in feedback fashion, a scaled-down version of the actual external load which the extender is manipulating. This natural feedback force on the human's hand allows him to 'feel' a modified version of the external forces on the extender. The information signals from the human (e.g., EMG signals) to the computer reflect human cognitive ability, and the power transfer between the human and the machine (e.g., physical interaction) reflects human physical ability. Thus the information transfer to the machine augments cognitive ability, and the power transfer augments motor ability. These two actions are coupled through the human cognitive/motor dynamic behavior. The goal is to derive the control rules for a class of computer-controlled machines that augment human physical and cognitive abilities in certain manipulative tasks.

  11. Machine Learning in the Presence of an Adversary: Attacking and Defending the SpamBayes Spam Filter

    DTIC Science & Technology

    2008-05-20

    Machine learning techniques are often used for decision making in security-critical applications such as intrusion detection and spam filtering. The defenses shown in this thesis are able to work against the attacks developed against SpamBayes and are sufficiently generic to be easily extended to other statistical machine learning algorithms.

  12. Testing meta tagger

    DTIC Science & Technology

    2017-12-21

    Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed [1]. Arthur Samuel, an American pioneer in the field of computer gaming and artificial intelligence, coined the term "machine learning" in 1959 while at IBM [2]. Machine learning is closely related to (and often overlaps with) computational statistics, which also focuses on prediction-making using computers, and is applied to problems such as learning to rank and computer vision.

  13. Machine learning Z2 quantum spin liquids with quasiparticle statistics

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Melko, Roger G.; Kim, Eun-Ah

    2017-12-01

    After decades of progress and effort, obtaining a phase diagram for a strongly correlated topological system still remains a challenge. Although in principle one could turn to Wilson loops and long-range entanglement, evaluating these nonlocal observables at many points in phase space can be prohibitively costly. With growing excitement over topological quantum computation comes the need for an efficient approach for obtaining topological phase diagrams. Here we turn to machine learning using quantum loop topography (QLT), a notion we have recently introduced. Specifically, we propose a construction of QLT that is sensitive to quasiparticle statistics. We then use mutual statistics between the spinons and visons to detect a Z2 quantum spin liquid in a multiparameter phase space. We successfully obtain the quantum phase boundary between the topological and trivial phases using a simple feed-forward neural network. Furthermore, we demonstrate advantages of our approach for the evaluation of phase diagrams relating to speed and storage. Such statistics-based machine learning of topological phases opens new efficient routes to studying topological phase diagrams in strongly correlated systems.

  14. Statistical complex fatigue data for SAE 4340 steel and its use in design by reliability

    NASA Technical Reports Server (NTRS)

    Kececioglu, D.; Smith, J. L.

    1970-01-01

    A brief description of the complex fatigue machines used in the test program is presented. The data generated from these machines are given and discussed. Two methods of obtaining strength distributions from the data are also discussed. Then follows a discussion of the construction of statistical fatigue diagrams and their use in designing by reliability. Finally, some of the problems encountered in the test equipment and a corrective modification are presented.
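The "design by reliability" use of strength distributions mentioned above can be illustrated with the classical stress-strength interference formula for independent normal distributions (the numbers below are hypothetical, not data from the report):

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """P(strength > stress) for independent normal strength and stress:
    R = Phi((mu_S - mu_s) / sqrt(sd_S^2 + sd_s^2))."""
    z = (mu_strength - mu_stress) / math.hypot(sd_strength, sd_stress)
    return normal_cdf(z)

# Hypothetical numbers: fatigue strength 100 +/- 10 ksi against an
# applied alternating stress of 80 +/- 10 ksi.
R = reliability(100.0, 10.0, 80.0, 10.0)
print(R)  # ~0.921
```

Given a strength distribution inferred from fatigue tests and a stress distribution from the load spectrum, the designer sizes the part so that R meets the reliability target.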

  15. Solving a Higgs optimization problem with quantum annealing for machine learning.

    PubMed

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-18

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.
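The weak-to-strong construction can be caricatured by brute-forcing the binary weights that the annealer optimizes. This sketch uses a simplified misclassification objective on synthetic weak classifiers rather than the paper's Ising-encoded quadratic loss, and an exhaustive search stands in for the annealer:

```python
import itertools
import random

random.seed(3)

N_WEAK, N_SAMPLES = 8, 400
labels = [random.choice([-1, 1]) for _ in range(N_SAMPLES)]

# Synthetic weak classifiers: each agrees with the true label with its
# own (modest) accuracy, mimicking cuts on kinematic observables.
accuracies = [0.55, 0.58, 0.60, 0.62, 0.55, 0.57, 0.61, 0.59]
weak = [[y if random.random() < acc else -y for y in labels]
        for acc in accuracies]

def ensemble_error(subset):
    """Training misclassifications of the majority vote over a subset."""
    errs = 0
    for j in range(N_SAMPLES):
        vote = sum(weak[i][j] for i in subset)
        pred = 1 if vote >= 0 else -1
        errs += pred != labels[j]
    return errs

# Brute-force the binary weights w_i in {0, 1}; this exhaustive search
# plays the role the annealer plays on the Ising-encoded objective.
best_subset = min((s for r in range(1, N_WEAK + 1)
                   for s in itertools.combinations(range(N_WEAK), r)),
                  key=ensemble_error)

strong_err = ensemble_error(best_subset)
single_err = min(ensemble_error((i,)) for i in range(N_WEAK))
print(strong_err, single_err)
```

Since every singleton subset is in the search space, the selected strong classifier can never do worse on the training set than the best individual weak classifier, and the resulting weights remain directly interpretable, which is the property the paper emphasizes.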

  16. Solving a Higgs optimization problem with quantum annealing for machine learning

    NASA Astrophysics Data System (ADS)

    Mott, Alex; Job, Joshua; Vlimant, Jean-Roch; Lidar, Daniel; Spiropulu, Maria

    2017-10-01

    The discovery of Higgs-boson decays in a background of standard-model processes was assisted by machine learning methods. The classifiers used to separate signals such as these from background are trained using highly unerring but not completely perfect simulations of the physical processes involved, often resulting in incorrect labelling of background processes or signals (label noise) and systematic errors. Here we use quantum and classical annealing (probabilistic techniques for approximating the global maximum or minimum of a given function) to solve a Higgs-signal-versus-background machine learning optimization problem, mapped to a problem of finding the ground state of a corresponding Ising spin model. We build a set of weak classifiers based on the kinematic observables of the Higgs decay photons, which we then use to construct a strong classifier. This strong classifier is highly resilient against overtraining and against errors in the correlations of the physical observables in the training data. We show that the resulting quantum and classical annealing-based classifier systems perform comparably to the state-of-the-art machine learning methods that are currently used in particle physics. However, in contrast to these methods, the annealing-based classifiers are simple functions of directly interpretable experimental parameters with clear physical meaning. The annealer-trained classifiers use the excited states in the vicinity of the ground state and demonstrate some advantage over traditional machine learning methods for small training datasets. Given the relative simplicity of the algorithm and its robustness to error, this technique may find application in other areas of experimental particle physics, such as real-time decision making in event-selection problems and classification in neutrino physics.

  17. The effect of the use of a TNF-alpha inhibitor in hypothermic machine perfusion on kidney function after transplantation.

    PubMed

    Diuwe, Piotr; Domagala, Piotr; Durlik, Magdalena; Trzebicki, Janusz; Chmura, Andrzej; Kwiatkowski, Artur

    2017-08-01

    One of the most important problems in transplantation medicine is ischemia/reperfusion injury of the organs to be transplanted. The aim of the present study was to assess the effect of the tumor necrosis factor-alpha (TNF-alpha) inhibitor etanercept, added during hypothermic machine perfusion, on renal allograft function and organ perfusion. No statistically significant differences were found in the impact of the applied intervention on kidney machine perfusion, during which the average flow and vascular resistance were evaluated. There were no statistically significant differences in the occurrence of delayed graft function (DGF). Fewer events of functional DGF and acute rejection episodes were observed in patients who received a kidney from the etanercept-treated Group A than in patients who received a kidney from the control Group B; however, the difference was not statistically significant. In summary, no effect of treatment with etanercept, an inhibitor of TNF-alpha, during hypothermic machine perfusion on renal allograft survival and perfusion was detected in this study. However, treatment of the isolated organ may be important for the future of transplantation medicine. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Man-systems integration and the man-machine interface

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1990-01-01

    Viewgraphs on man-systems integration and the man-machine interface are presented. Man-systems integration applies the systems' approach to the integration of the user and the machine to form an effective, symbiotic Man-Machine System (MMS). A MMS is a combination of one or more human beings and one or more physical components that are integrated through the common purpose of achieving some objective. The human operator interacts with the system through the Man-Machine Interface (MMI).

  19. Study of a variable mass Atwood's machine using a smartphone

    NASA Astrophysics Data System (ADS)

    Lopez, Dany; Caprile, Isidora; Corvacho, Fernando; Reyes, Orfa

    2018-03-01

    The Atwood machine was invented in 1784 by George Atwood, and the system has been widely studied both theoretically and experimentally over the years. Nowadays many experimental physics courses include both Atwood's machine and variable-mass systems to introduce more complex concepts in physics. To study the dynamics of the masses that compose the variable-mass Atwood's machine, laboratories typically use a smart pulley. The first work to introduce a smartphone as data acquisition equipment for studying acceleration in the Atwood's machine was that of M. Monteiro et al.; since then, no further information has been available on the use of smartphones in variable-mass systems. This prompted us to study this kind of system by means of data obtained with a smartphone and to show the practicality of using smartphones in complex experimental situations.

  20. Artificial intelligence in medicine.

    PubMed

    Hamet, Pavel; Tremblay, Johanne

    2017-04-01

    Artificial Intelligence (AI) is a general term that implies the use of a computer to model intelligent behavior with minimal human intervention. AI is generally accepted as having started with the invention of robots. The term derives from the Czech word robota, meaning biosynthetic machines used as forced labor. In this field, Leonardo Da Vinci's lasting heritage is today's burgeoning use of robotic-assisted surgery, named after him, for complex urologic and gynecologic procedures. Da Vinci's sketchbooks of robots helped set the stage for this innovation. AI, described as the science and engineering of making intelligent machines, was officially born in 1956. The term is applicable to a broad range of items in medicine such as robotics, medical diagnosis, medical statistics, and human biology-up to and including today's "omics". AI in medicine, which is the focus of this review, has two main branches: virtual and physical. The virtual branch includes informatics approaches from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. The physical branch is best represented by robots used to assist the elderly patient or the attending surgeon. Also embodied in this branch are targeted nanorobots, a unique new drug delivery system. The societal and ethical complexities of these applications require further reflection, proof of their medical utility, economic value, and development of interdisciplinary strategies for their wider application. Copyright © 2017. Published by Elsevier Inc.

  1. Statistical Capability Study of a Helical Grinding Machine Producing Screw Rotors

    NASA Astrophysics Data System (ADS)

    Holmes, C. S.; Headley, M.; Hart, P. W.

    2017-08-01

    Screw compressors depend for their efficiency and reliability on the accuracy of the rotors, and therefore on the machinery used in their production. The machinery has evolved over more than half a century in response to customer demands for production accuracy, efficiency, and flexibility, and is now at a high level on all three criteria. Production equipment and processes must be capable of maintaining accuracy over a production run, and this must be assessed statistically under strictly controlled conditions. This paper gives numerical data from such a study of an innovative machine tool and shows that it is possible to meet the demanding statistical capability requirements.
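
    As a hedged illustration of what a statistical capability assessment involves (the paper's actual measurements and acceptance criteria are not reproduced here), the standard Cp and Cpk indices can be computed from a sample of measured dimensions and the specification limits; the values below are hypothetical:

```python
def capability_indices(samples, lsl, usl):
    """Process capability indices from measured samples and the lower/upper
    specification limits, using the standard definitions
    Cp = (USL - LSL) / (6*sigma) and Cpk = min(USL - mean, mean - LSL) / (3*sigma).
    A common acceptance rule (an assumption here, not the paper's) is Cpk >= 1.33."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    sigma = var ** 0.5
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3 * sigma)
    return cp, cpk
```

Cp measures the spread of the process against the tolerance band, while Cpk additionally penalizes a process centered away from the middle of the band.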

  2. Simple Machines. Physical Science in Action[TM]. Schlessinger Science Library. [Videotape].

    ERIC Educational Resources Information Center

    2000

    In today's world, kids are aware that there are machines all around them. What they may not realize is that the function of all machines is to make work easier in some way. Simple Machines uses engaging visuals and colorful graphics to explain the concept of work and how humans use certain basic tools to help get work done. Students will learn…

  3. All about Simple Machines. Physical Science for Children[TM]. Schlessinger Science Library. [Videotape].

    ERIC Educational Resources Information Center

    2000

    All kids know the word "work." But they probably don't understand that work happens whenever a force is used to move something--whether it's lifting a heavy object or playing on a see-saw. All About Simple Machines introduces kids to the concepts of forces, work and how machines are used to make work easier. Six simple machines are…

  4. Using statistical and machine learning to help institutions detect suspicious access to electronic health records.

    PubMed

    Boxwala, Aziz A; Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila

    2011-01-01

    To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs.
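
    For readers unfamiliar with the reported metric, the area under the ROC curve can be computed from labeled scores with a short, dependency-free sketch; this is the standard pairwise definition of AUC, not the authors' code:

```python
def roc_auc(labels, scores):
    """AUC computed as the probability that a randomly chosen positive
    example is scored above a randomly chosen negative one (ties count 1/2)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative example")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

An AUC of 0.95, as reported for the SVM, means a suspicious access receives a higher model score than an appropriate one about 95% of the time.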

  5. Using statistical and machine learning to help institutions detect suspicious access to electronic health records

    PubMed Central

    Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila

    2011-01-01

    Objective To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. Methods From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. Results The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. Limitations The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. Conclusion The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs. PMID:21672912

  6. Feature and Statistical Model Development in Structural Health Monitoring

    NASA Astrophysics Data System (ADS)

    Kim, Inho

    All structures suffer wear and tear because of impact, excessive load, fatigue, corrosion, etc., in addition to inherent defects from their manufacturing processes and exposure to various environmental effects. These structural degradations are often imperceptible, but they can severely affect the structural performance of a component, thereby severely decreasing its service life. Although previous studies of Structural Health Monitoring (SHM) have produced extensive knowledge of individual parts of the SHM process, such as operational evaluation, data processing, and feature extraction, few studies have approached it from a systematic perspective: statistical model development. The first part of this dissertation reviews ultrasonic guided-wave-based structural health monitoring in light of the characteristics of inverse scattering problems, such as ill-posedness and nonlinearity. The distinctive features and the selection of the analysis domain are investigated by analytically deriving the conditions for solution uniqueness under ill-posedness, and are validated experimentally. Based on these distinctive features, a novel wave packet tracing (WPT) method for damage localization and size quantification is presented. This method involves creating time-space representations of the guided Lamb waves (GLWs), collected at a series of locations with a spatially dense distribution, along paths at pre-selected angles with respect to the direction normal to the direction of wave propagation. The fringe patterns due to wave dispersion, which depend on the phase velocity, are selected as the primary features carrying information about wave propagation and scattering. The following part of this dissertation presents a novel damage-localization framework using a fully automated process.
To construct the statistical model for autonomous damage localization, deep-learning techniques such as the restricted Boltzmann machine and the deep belief network are trained and used to interpret nonlinear far-field wave patterns. Next, a novel bridge-scour estimation approach that combines the advantages of empirical and data-driven models is developed. Two field datasets from the literature are used, and a Support Vector Machine (SVM), a machine-learning algorithm, fuses the field data samples and classifies the data according to physical phenomena. The Fast Non-dominated Sorting Genetic Algorithm (NSGA-II) is evaluated on the model-performance objective functions to search for Pareto-optimal fronts.

  7. General Theory of the Double Fed Synchronous Machine. Ph.D. Thesis - Swiss Technological Univ., 1950

    NASA Technical Reports Server (NTRS)

    El-Magrabi, M. G.

    1982-01-01

    Motor and generator operation of a double-fed synchronous machine were studied and physically and mathematically treated. Experiments with different connections, voltages, etc. were carried out. It was concluded that a certain degree of asymmetry is necessary for the best utilization of the machine.

  8. Optical alignment of electrodes on electrical discharge machines

    NASA Technical Reports Server (NTRS)

    Boissevain, A. G.; Nelson, B. W.

    1972-01-01

    Shadowgraph system projects magnified image on screen so that alignment of small electrodes mounted on electrical discharge machines can be corrected and verified. Technique may be adapted to other machine tool equipment where physical contact cannot be made during inspection and access to tool limits conventional runout checking procedures.

  9. 14 CFR 382.3 - What do the terms in this rule mean?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... devices and medications. Automated airport kiosk means a self-service transaction machine that a carrier... machine means a continuous positive airway pressure machine. Department or DOT means the United States..., emotional or mental illness, and specific learning disabilities. The term physical or mental impairment...

  10. Classification of jet fuel properties by near-infrared spectroscopy using fuzzy rule-building expert systems and support vector machines.

    PubMed

    Xu, Zhanfeng; Bunker, Christopher E; Harrington, Peter de B

    2010-11-01

    Monitoring changes in the physical properties of jet fuel is important because fuel used in high-performance aircraft must meet rigorous specifications. Near-infrared (NIR) spectroscopy is a fast method for characterizing fuels. Because of the complexity of NIR spectral data, chemometric techniques are used to extract relevant information from the spectra and accurately classify the physical properties of complex fuel samples. In this work, discrimination of fuel types and classification of the flash point, freezing point, boiling point (10%, v/v), boiling point (50%, v/v), and boiling point (90%, v/v) of jet fuels (JP-5, JP-8, Jet A, and Jet A1) were investigated. Each physical property was divided into three classes, low, medium, and high ranges, using two evaluations with different class boundary definitions. The class boundaries function as thresholds that raise an alarm when the fuel properties change. Optimal partial least squares discriminant analysis (oPLS-DA), a fuzzy rule-building expert system (FuRES), and support vector machines (SVM) were used to build calibration models between the NIR spectra and the physical-property classes of the jet fuels. oPLS-DA, FuRES, and SVM were compared with respect to prediction accuracy. The calibration models were validated by applying bootstrap Latin partitions (BLP), which give a measure of precision. Prediction accuracies of 97 ± 2% for the flash point, 94 ± 2% for the freezing point, 99 ± 1% for the boiling point (10%, v/v), 98 ± 2% for the boiling point (50%, v/v), and 96 ± 1% for the boiling point (90%, v/v) were obtained by FuRES under one of the boundary definitions. Both FuRES and SVM obtained statistically better prediction accuracy than oPLS-DA. The results indicate that, combined with chemometric classifiers, NIR spectroscopy could be a fast method for monitoring changes in the physical properties of jet fuel.
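
    As a minimal sketch of the class-boundary idea described above (the thresholds here are hypothetical, not the paper's), assigning a measured property such as a flash point to the low/medium/high ranges might look like:

```python
def classify_property(value, low_upper, high_lower):
    """Assign a measured physical property to one of three classes given two
    class boundaries: 'low' below low_upper, 'high' at or above high_lower,
    'medium' in between.  The boundaries act as alarm thresholds."""
    if high_lower <= low_upper:
        raise ValueError("high_lower must exceed low_upper")
    if value < low_upper:
        return "low"
    if value >= high_lower:
        return "high"
    return "medium"
```

In the study, the classifiers predict this three-way label directly from the NIR spectrum rather than from the measured value itself.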

  11. Physics 30 Program Machine-Scorable Open-Ended Questions: Unit 2: Electric and Magnetic Forces. Diploma Examinations Program.

    ERIC Educational Resources Information Center

    Alberta Dept. of Education, Edmonton.

    This document outlines the use of machine-scorable open-ended questions for the evaluation of Physics 30 in Alberta. Contents include: (1) an introduction to the questions; (2) sample instruction sheet; (3) fifteen sample items; (4) item information including the key, difficulty, and source of each item; (5) solutions to items having multiple…

  12. Crabbing System for an Electron-Ion Collider

    NASA Astrophysics Data System (ADS)

    Castilla, Alejandro

    As high-energy and nuclear physicists continue to push further the boundaries of knowledge using colliders, there is an imperative need not only to increase the colliding beams' energies, but also to improve the accuracy of the experiments and to collect a large quantity of events with good statistical sensitivity. To achieve the latter, it is necessary to collect more data by increasing the rate at which these processes are produced and detected in the machine. This rate of events depends directly on the machine's luminosity. The luminosity itself is proportional to the frequency at which the beams are delivered and to the number of particles in each beam, and inversely proportional to the cross-sectional size of the colliding beams. There are several approaches to increasing the event statistics in a collider other than increasing the luminosity, such as running the experiments for a longer time. However, this also raises the operating expenses, while increasing the frequency at which the beams are delivered requires major physical changes along the accelerator and the detectors. Therefore, it is preferred to increase the beam intensities and reduce the beams' cross-sectional areas to achieve these higher luminosities. When the goal is to push the limits, sometimes even beyond the machine's design parameters, one must develop a detailed High Luminosity Scheme. Any high-luminosity scheme for a modern collider considers, in one of its versions, the use of crab cavities to correct the geometrical reduction of the luminosity due to the beams' crossing angle. In this dissertation, we present the design and testing of a proof-of-principle compact superconducting crab cavity, at 750 MHz, for the future electron-ion collider currently under design at Jefferson Lab.
In addition to the design and validation of the cavity prototype, we present the analysis of the first-order beam dynamics and the integration of the crabbing systems into the interaction region. Following this, we propose the concept of twin crab cavities to allow machines with variable beam transverse coupling in the interaction region to achieve full crabbing in only the desired plane. Finally, we present recommendations for extending this work to other frequencies.
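
    For context, the geometric luminosity loss that crab cavities are designed to recover is commonly written in terms of the Piwinski angle; the sketch below uses that standard textbook expression (an assumption for illustration, not parameters or formulas taken from this dissertation):

```python
import math

def geometric_reduction_factor(theta_c, sigma_z, sigma_x):
    """Standard luminosity reduction factor for a full crossing angle theta_c
    (radians), with bunch length sigma_z and horizontal beam size sigma_x:

        R = 1 / sqrt(1 + phi**2),  phi = (sigma_z / sigma_x) * tan(theta_c / 2)

    where phi is the Piwinski angle.  Crab cavities tilt the bunches so that
    they still collide head-on, pushing R back toward 1."""
    phi = (sigma_z / sigma_x) * math.tan(theta_c / 2.0)
    return 1.0 / math.sqrt(1.0 + phi * phi)
```

For example, with a 50 mrad crossing angle, a 2 cm bunch length and a 100 µm beam size, R drops well below 1, which is exactly the loss a crabbing system is meant to undo.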

  13. Machine learning for many-body physics: The case of the Anderson impurity model

    DOE PAGES

    Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole; ...

    2014-10-31

    We applied machine learning methods in order to find the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Furthermore, different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. Our results indicate that a machine learning approach to dynamical mean-field theory may be feasible.
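
    To illustrate the Legendre parametrization mentioned above (a generic sketch, not the authors' implementation), a function represented by a small set of expansion coefficients can be evaluated with Bonnet's recurrence for the Legendre polynomials:

```python
def legendre_series(coeffs, x):
    """Evaluate f(x) = sum_l c_l * P_l(x) for x in [-1, 1] using the
    Bonnet recurrence  l*P_l = (2l-1)*x*P_{l-1} - (l-1)*P_{l-2}."""
    p_prev, p_curr = 1.0, x  # P_0 and P_1
    total = 0.0
    for l, c in enumerate(coeffs):
        if l == 0:
            total += c * p_prev
        elif l == 1:
            total += c * p_curr
        else:
            p_prev, p_curr = p_curr, ((2 * l - 1) * x * p_curr - (l - 1) * p_prev) / l
            total += c * p_curr
    return total
```

The appeal noted in the abstract is that a smooth Green's function is captured by only a handful of coefficients c_l, giving the learning algorithm a compact target.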

  14. Machine learning for many-body physics: The case of the Anderson impurity model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arsenault, Louis-François; Lopez-Bezanilla, Alejandro; von Lilienfeld, O. Anatole

    We applied machine learning methods in order to find the Green's function of the Anderson impurity model, a basic model system of quantum many-body condensed-matter physics. Furthermore, different methods of parametrizing the Green's function are investigated; a representation in terms of Legendre polynomials is found to be superior due to its limited number of coefficients and its applicability to state of the art methods of solution. The dependence of the errors on the size of the training set is determined. Our results indicate that a machine learning approach to dynamical mean-field theory may be feasible.

  15. Machine learning to predict the occurrence of bisphosphonate-related osteonecrosis of the jaw associated with dental extraction: A preliminary report.

    PubMed

    Kim, Dong Wook; Kim, Hwiyoung; Nam, Woong; Kim, Hyung Jun; Cha, In-Ho

    2018-04-23

    The aim of this study was to build and validate five types of machine learning models that can predict the occurrence of bisphosphonate-related osteonecrosis of the jaw (BRONJ) associated with dental extraction in patients taking bisphosphonates for the management of osteoporosis. A retrospective review of the medical records was conducted to obtain cases and controls for the study. A total of 125 patients, consisting of 41 cases and 84 controls, were selected for the study. Five machine learning prediction algorithms, namely a multivariable logistic regression model, decision tree, support vector machine, artificial neural network, and random forest, were implemented. The outputs of these models were compared with each other and with conventional methods, such as serum CTX level. The area under the receiver operating characteristic (ROC) curve (AUC) was used to compare the results. The performance of the machine learning models was significantly superior to that of conventional statistical methods and single predictors. The random forest model yielded the best performance (AUC = 0.973), followed by the artificial neural network (AUC = 0.915), support vector machine (AUC = 0.882), logistic regression (AUC = 0.844), decision tree (AUC = 0.821), drug holiday alone (AUC = 0.810), and CTX level alone (AUC = 0.630). Machine learning methods showed superior performance in predicting BRONJ associated with dental extraction compared to conventional statistical methods using drug holiday and serum CTX level. Machine learning can thus be applied in a wide range of clinical studies. Copyright © 2017. Published by Elsevier Inc.

  16. The universal numbers. From Biology to Physics.

    PubMed

    Marchal, Bruno

    2015-12-01

    I will explain how mathematicians discovered the universal numbers, or abstract computers, and I will explain some abstract biology, mainly self-reproduction and embryogenesis. Then I will explain how and why, and in which sense, some of those numbers can dream, and why their dreams can glue together and must, when we assume computationalism in cognitive science, generate a phenomenological physics, as part of a larger phenomenological theology (in the sense of the Greek theologians). The title should have been "From Biology to Physics, through the Phenomenological Theology of the Universal Numbers", had that not been too long for a title. The theology will consist mainly, as in some (neo)Platonist Greek-Indian-Chinese traditions, in the truth about numbers' relations, with each other and with themselves. The main difference between Aristotle and Plato is that Aristotle (especially in his common and modern Christian interpretation) makes reality WYSIWYG (what you see is what you get: reality is what we observe and measure, i.e., natural material physical science), whereas for Plato and the (rational) mystics, what we see might be only the shadow or the border of something else, which might be non-physical (mathematical, arithmetical, theological, …). Since Gödel, we know that Truth, even just Arithmetical Truth, is vastly bigger than what a machine can rationally justify. Yet, with Church's thesis, and the mechanizability of the diagonalizations involved, machines can apprehend this, can justify their limitations, and can get some sense of what might be true beyond what they can prove or justify rationally. Indeed, the incompleteness phenomenon introduces a gap between what is provable by some machine and what is true about that machine, and, as Gödel saw already in 1931, the existence of that gap is accessible to the machine itself, once it has enough provability abilities.
Incompleteness separates the true and the provable, and machines can justify this in some way. More importantly, incompleteness entails the distinction between many intensional variants of provability. For example, the absence of reflection (beweisbar(⌜A⌝) → A, with beweisbar being Gödel's provability predicate) makes it impossible for the machine's provability to obey the axioms usually taken for a theory of knowledge. The most important consequence of this for the machine's possible phenomenology is that it provides sense, indeed arithmetical sense, to intensional variants of provability, like the logics of provability-and-truth, which at the propositional level can be mirrored by the logic of provable-and-true statements (beweisbar(⌜A⌝) ∧ A). It is incompleteness which makes this logic different from the logic of provability. Other variants, like provable-and-consistent, or provable-and-consistent-and-true, appear in the same way and inherit the incompleteness splitting, unlike beweisbar(⌜A⌝) ∧ A. I will recall the thought experiment which motivates the use of those intensional variants to associate a knower and an observer in some canonical way with the machines or the numbers. We will in this way get an abstract and phenomenological theology of a machine M through the true logics of its true self-referential abilities (even if not provable, or knowable, by the machine itself), in those different intensional senses. Cognitive science and theoretical physics motivate the study of those logics with the arithmetical interpretation of the atomic sentences restricted to the "verifiable" (Σ1) sentences, which is the way to study the theology of the computationalist machine. This provides a logic of the observable, as expected by the Universal Dovetailer Argument, which will be recalled briefly, and which can lead to a comparison of the machine's logic of physics with the empirical logic of the physicists (such as quantum logic). This also leads to a series of open problems.
Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Validity of a self-report survey tool measuring the nutrition and physical activity environment of primary schools.

    PubMed

    Nathan, Nicole; Wolfenden, Luke; Morgan, Philip J; Bell, Andrew C; Barker, Daniel; Wiggers, John

    2013-06-13

    Valid tools measuring characteristics of the school environment associated with the physical activity and dietary behaviours of children are needed to accurately evaluate the impact of initiatives to improve school environments. The aim of this study was to assess the validity of Principal self-report of primary school healthy eating and physical activity environments. Primary school Principals (n = 42) in New South Wales, Australia were invited to complete a telephone survey of the school environment, the School Environment Assessment Tool (SEAT). Equivalent observational data were collected by pre-service teachers located within the schools. The SEAT involved 65 items that assessed food availability via canteens, vending machines and fundraisers, and the presence of physical activity facilities, equipment and organised physical activities. Kappa statistics were used to assess agreement between the two measures. Almost 70% of the survey demonstrated moderate to almost perfect agreement. Substantial agreement was found for 10 of 13 items assessing foods sold for fundraising, 3 of 6 items assessing the physical activity facilities of the school, and both items assessing organised physical activities occurring at recess and lunch and at school sport. Limited agreement was found for items assessing foods sold through canteens and access to small-screen recreation. The SEAT provides researchers and policy makers with a valid tool for assessing aspects of the school food and physical activity environment.
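
    For reference, the kappa statistic used above to quantify agreement between the survey and the observational data is Cohen's kappa, which corrects raw agreement for the agreement expected by chance; a minimal, dependency-free computation (illustrative, not the study's code) is:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two categorical ratings of the same items:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    p_expected = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
    )
    if p_expected == 1.0:
        return 1.0  # degenerate case: both raters always give the same label
    return (p_observed - p_expected) / (1.0 - p_expected)
```

By the usual interpretation bands, kappa above roughly 0.6 is "substantial" and above 0.8 "almost perfect" agreement, which is how the item-level results above are described.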

  18. Machine Learning Assessments of Soil Drying

    NASA Astrophysics Data System (ADS)

    Coopersmith, E. J.; Minsker, B. S.; Wenzel, C.; Gilmore, B. J.

    2011-12-01

    Agricultural activities require the use of heavy equipment and vehicles on unpaved farmland. When soil conditions are wet, equipment can cause substantial damage, leaving deep ruts. In extreme cases, implements can sink and become mired, causing considerable delays and expense to extricate the equipment. Farm managers, who are often located remotely, cannot assess sites before allocating equipment, making it difficult to assess the conditions of countless sites with any reliability or frequency. For example, farmers often trace serpentine paths of over one hundred miles each day to assess the overall status of various tracts of land spanning thirty, forty, or fifty miles in each direction. One means of assessing the moisture content of a field lies in the strategic positioning of remotely monitored in situ sensors. Unfortunately, landowners are often reluctant to place sensors across their properties because of the significant monetary cost and complexity. This work aims to overcome these limitations by modeling the process of wetting and drying statistically, remotely assessing field readiness using only information that is publicly accessible. Such data include Nexrad radar and state climate network sensors, as well as Twitter-based reports of field conditions for validation. Three algorithms, classification trees, k-nearest neighbors, and boosted perceptrons, are deployed to deliver statistical field-readiness assessments of an agricultural site located in Urbana, IL. Two of the three algorithms performed with 92-94% accuracy, with the majority of misclassifications falling within the calculated margins of error. This demonstrates the feasibility of using a machine learning framework, with only public data, knowledge of system memory from previous conditions, and statistical tools, to assess "readiness" without the need for real-time, on-site physical observation.
Future efforts will produce a workflow assimilating Nexrad, climate network, and Twitter data to generate a real-time web-map of estimated readiness conditions.
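
    As a hedged sketch of one of the three algorithms named above, a minimal k-nearest-neighbors classifier over site features might look like this; the feature vectors and "wet"/"dry" labels are hypothetical, not the study's data:

```python
def knn_classify(train, query, k=3):
    """Minimal k-nearest-neighbors classifier: Euclidean distance,
    majority vote.  `train` is a list of (feature_vector, label) pairs."""
    def dist2(u, v):
        # Squared distance is enough for ranking neighbors.
        return sum((a - b) ** 2 for a, b in zip(u, v))

    neighbours = sorted(train, key=lambda pair: dist2(pair[0], query))[:k]
    votes = {}
    for _, label in neighbours:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

In the study's setting, the features would be derived from public rainfall and climate-network readings, and the label would be the field-readiness class.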

  19. When do traumatic experiences alter risk-taking behavior? A machine learning analysis of reports from refugees.

    PubMed

    Augsburger, Mareike; Elbert, Thomas

    2017-01-01

    Exposure to traumatic stressors and subsequent trauma-related mental changes may alter a person's risk-taking behavior. It is unclear whether this relationship depends on the specific types of traumatic experiences. Moreover, the association has never been tested in displaced individuals with substantial levels of traumatic experience. The present study assessed risk-taking behavior in 56 displaced individuals by means of the balloon analogue risk task (BART). Exposure to traumatic events and symptoms of posttraumatic stress disorder and depression were assessed by means of semi-structured interviews. Using a novel statistical approach (stochastic gradient boosting machines), we analyzed predictors of risk-taking behavior. Exposure to organized violence was associated with less risk-taking, as indicated by fewer adjusted pumps in the BART, as was the reported experience of physical abuse and neglect, emotional abuse, and peer violence in childhood. Civil traumatic stressors, as well as other events during childhood, were likewise associated with lower risk-taking. This suggests that the association between global risk-taking behavior and exposure to traumatic stress depends on the particular type of stressor experienced.
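
    For context, the "adjusted pumps" index cited above is the BART's conventional risk measure: the mean pump count over balloons that were cashed out rather than exploded. A minimal sketch (illustrative, not the study's code) is:

```python
def adjusted_average_pumps(trials):
    """BART adjusted-pumps risk index: mean pump count over balloons that
    were cashed out (not exploded).  `trials` is a list of
    (pumps, exploded) pairs, with exploded a boolean."""
    cashed = [pumps for pumps, exploded in trials if not exploded]
    if not cashed:
        return 0.0  # every balloon burst; no adjusted trials to average
    return sum(cashed) / len(cashed)
```

Exploded balloons are excluded because the participant did not choose to stop pumping on them, so their counts understate intended risk-taking.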

  20. Language extraction from zinc sulfide

    NASA Astrophysics Data System (ADS)

    Varn, Dowman Parks

    2001-09-01

    Recent advances in the analysis of one-dimensional temporal and spatial series allow for detailed characterization of disorder and computation in physical systems. One such system that has defied theoretical understanding since its discovery in 1912 is polytypism. Polytypes are layered compounds, exhibiting crystallinity in two dimensions yet having complicated stacking sequences in the third direction. They can show both ordered and disordered sequences, sometimes each in the same specimen. We demonstrate a method for extracting two-layer correlation information from ZnS diffraction patterns and employ a novel technique for epsilon-machine reconstruction. We solve a long-standing problem, that of determining structural information for disordered materials from their diffraction patterns, for this special class of disorder. Our solution offers the most complete possible statistical description of the disorder. Furthermore, from our reconstructed epsilon-machines we find the effective range of the interlayer interaction in these materials, as well as the configurational energy of both ordered and disordered specimens. Finally, we can determine the 'language' (in terms of the Chomsky hierarchy) these small rocks speak, and we find that regular languages are sufficient to describe them.

  1. Machine learning vortices at the Kosterlitz-Thouless transition

    NASA Astrophysics Data System (ADS)

    Beach, Matthew J. S.; Golubeva, Anna; Melko, Roger G.

    2018-01-01

    Efficient and automated classification of phases from minimally processed data is one goal of machine learning in condensed-matter and statistical physics. Supervised algorithms trained on raw samples of microstates can successfully detect conventional phase transitions by learning a bulk feature such as an order parameter. In this paper, we investigate whether neural networks can learn to classify phases based on topological defects. We address this question with the two-dimensional classical XY model, which exhibits a Kosterlitz-Thouless transition. We find that significant feature engineering of the raw spin states is required to convincingly claim that features of the vortex configurations are responsible for learning the transition temperature. We further show that a single-layer network does not correctly classify the phases of the XY model, while a convolutional network easily performs classification by learning the global magnetization. Finally, we design a deep network capable of learning vortices without feature engineering. We demonstrate that the detection of vortices does not necessarily result in the best classification accuracy, especially for lattices of fewer than approximately 1000 spins. For larger systems, learning vortices remains a difficult task.
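
    As an illustrative aside (not the authors' network code), the vortices the deep network is meant to detect can be identified classically by the winding number of the spin angles around each elementary plaquette of the XY lattice:

```python
import math

def plaquette_winding(angles):
    """Winding number of four XY spin angles listed counter-clockwise around
    an elementary plaquette: +1 flags a vortex, -1 an antivortex, 0 neither.
    Each angle difference is wrapped into (-pi, pi] before summing."""
    def wrap(d):
        while d > math.pi:
            d -= 2 * math.pi
        while d <= -math.pi:
            d += 2 * math.pi
        return d

    total = sum(wrap(angles[(i + 1) % 4] - angles[i]) for i in range(4))
    return round(total / (2 * math.pi))
```

Scanning this quantity over all plaquettes of a spin configuration yields the vortex/antivortex map whose unbinding drives the Kosterlitz-Thouless transition.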

  2. Application of statistical machine translation to public health information: a feasibility study.

    PubMed

    Kirchhoff, Katrin; Turner, Anne M; Axelrod, Amittai; Saavedra, Francisco

    2011-01-01

    Accurate, understandable public health information is important for ensuring the health of the nation. The large portion of the US population with Limited English Proficiency is best served by translations of public-health information into other languages. However, a large number of health departments and primary care clinics face significant barriers to fulfilling federal mandates to provide multilingual materials to Limited English Proficiency individuals. This article presents a pilot study on the feasibility of using freely available statistical machine translation technology to translate health promotion materials. The authors gathered health-promotion materials in English from local and national public-health websites. Spanish versions were created by translating the documents using a freely available machine-translation website. Translations were rated for adequacy and fluency, analyzed for errors, manually corrected by a human posteditor, and compared with exclusively manual translations. Machine translation plus postediting took 15-53 min per document, compared to the reported days or even weeks for the standard translation process. A blind comparison of machine-assisted and human translations of six documents revealed overall equivalency between machine-translated and manually translated materials. The analysis of translation errors indicated that the most important errors were word-sense errors. The results indicate that machine translation plus postediting may be an effective method of producing multilingual health materials with equivalent quality but lower cost compared to manual translations.

  3. Application of statistical machine translation to public health information: a feasibility study

    PubMed Central

    Turner, Anne M; Axelrod, Amittai; Saavedra, Francisco

    2011-01-01

    Objective Accurate, understandable public health information is important for ensuring the health of the nation. The large portion of the US population with Limited English Proficiency is best served by translations of public-health information into other languages. However, a large number of health departments and primary care clinics face significant barriers to fulfilling federal mandates to provide multilingual materials to Limited English Proficiency individuals. This article presents a pilot study on the feasibility of using freely available statistical machine translation technology to translate health promotion materials. Design The authors gathered health-promotion materials in English from local and national public-health websites. Spanish versions were created by translating the documents using a freely available machine-translation website. Translations were rated for adequacy and fluency, analyzed for errors, manually corrected by a human posteditor, and compared with exclusively manual translations. Results Machine translation plus postediting took 15–53 min per document, compared to the reported days or even weeks for the standard translation process. A blind comparison of machine-assisted and human translations of six documents revealed overall equivalency between machine-translated and manually translated materials. The analysis of translation errors indicated that the most important errors were word-sense errors. Conclusion The results indicate that machine translation plus postediting may be an effective method of producing multilingual health materials with equivalent quality but lower cost compared to manual translations. PMID:21498805

  4. Risk estimation using probability machines.

    PubMed

    Dasgupta, Abhijit; Szymczak, Silke; Moore, Jason H; Bailey-Wilson, Joan E; Malley, James D

    2014-03-01

    Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a "risk machine", will share properties from the statistical machine that it is derived from.
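    The counterfactual effect-size idea can be sketched as follows: given any probability machine p(x), a risk difference for a binary predictor is the average change in predicted probability when that predictor is toggled over the observed covariates. The logistic stand-in model and all names below are hypothetical, used only to make the computation concrete; a random forest probability machine would slot in the same way.

```python
import math

def logistic_prob(row):
    # Hypothetical stand-in probability machine: a known logistic model,
    # P(Y=1 | x) = sigmoid(0.5*exposure + 0.25*age_z).
    z = 0.5 * row["exposure"] + 0.25 * row["age_z"]
    return 1.0 / (1.0 + math.exp(-z))

def risk_difference(prob, rows, feature):
    """Average counterfactual change in P(Y=1) when `feature` is
    toggled from 0 to 1 over the observed covariate rows."""
    diffs = [prob(dict(r, **{feature: 1})) - prob(dict(r, **{feature: 0}))
             for r in rows]
    return sum(diffs) / len(diffs)

rows = [{"exposure": 1, "age_z": -1.0}, {"exposure": 0, "age_z": 1.0}]
rd = risk_difference(logistic_prob, rows, "exposure")
```

    Because `risk_difference` only queries the model through its predicted probabilities, it makes no assumption about model form, which is the point of the "risk machine" construction.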

  5. Machining Specific Fourier Power Spectrum Profiles into Plastics for High Energy Density Physics Experiments [Machining Specific Fourier Power Spectrum Profiles into Plastics for HEDP Experiments]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Derek William; Cardenas, Tana; Doss, Forrest W.

    The High Energy Density Physics program at Los Alamos National Laboratory (LANL) has pursued a multiyear campaign to verify its predictive capability for interface evolution under shock propagation, using different profiles machined into the face of a plastic package with an iodine-doped plastic center region. These experiments varied the machined surface from a simple sine wave to a double sine wave and finally to a multitude of different profiles with power spectrum ranges and shapes chosen to test LANL’s simulation capability. The MultiMode-A profiles had one band-pass flat region of the power spectrum, while the MultiMode-B profile had two band-pass flat regions. Another profile of interest was the 1-Peak profile, a band-pass concept with a spike to one side of the power spectrum. All these profiles were machined in flat orientations and at tilts of 30 and 60 deg. Tailor-made machining profiles, supplied by experimental physicists, were compared to the actual machined surfaces, and Fourier power spectra were compared to assess the reproducibility of the machining process over the frequency ranges that physicists require.

  6. Machining Specific Fourier Power Spectrum Profiles into Plastics for High Energy Density Physics Experiments [Machining Specific Fourier Power Spectrum Profiles into Plastics for HEDP Experiments]

    DOE PAGES

    Schmidt, Derek William; Cardenas, Tana; Doss, Forrest W.; ...

    2018-01-15

    The High Energy Density Physics program at Los Alamos National Laboratory (LANL) has pursued a multiyear campaign to verify its predictive capability for interface evolution under shock propagation, using different profiles machined into the face of a plastic package with an iodine-doped plastic center region. These experiments varied the machined surface from a simple sine wave to a double sine wave and finally to a multitude of different profiles with power spectrum ranges and shapes chosen to test LANL’s simulation capability. The MultiMode-A profiles had one band-pass flat region of the power spectrum, while the MultiMode-B profile had two band-pass flat regions. Another profile of interest was the 1-Peak profile, a band-pass concept with a spike to one side of the power spectrum. All these profiles were machined in flat orientations and at tilts of 30 and 60 deg. Tailor-made machining profiles, supplied by experimental physicists, were compared to the actual machined surfaces, and Fourier power spectra were compared to assess the reproducibility of the machining process over the frequency ranges that physicists require.

  7. Machine Learning Methods for Attack Detection in the Smart Grid.

    PubMed

    Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent

    2016-08-01

    Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
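    A minimal sketch of the supervised setting described above: train a classifier on labeled measurement vectors, then flag new measurements as secure or attacked. Plain logistic regression fitted by gradient descent stands in for the batch algorithms surveyed in the paper; the toy measurements and all names are invented.

```python
import math

def train_logistic(data, labels, lr=0.5, epochs=200):
    """Fit weights w and bias b by stochastic gradient descent on the
    logistic loss over labeled measurement vectors."""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    # 1 = attacked, 0 = secure
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy measurement vectors: attacked samples carry an injected bias.
secure = [[0.0, 0.1], [0.1, 0.0], [-0.1, 0.1], [0.0, -0.1]]
attacked = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0]]
data, labels = secure + attacked, [0] * 4 + [1] * 4
w, b = train_logistic(data, labels)
acc = sum(predict(w, b, x) == y for x, y in zip(data, labels)) / len(data)
```

    The unobservable attacks discussed in the abstract are precisely those crafted so that no such separating boundary exists in the raw measurement space, which is why the paper turns to feature-level fusion and prior system knowledge.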

  8. Needs of ergonomic design at control units in production industries.

    PubMed

    Levchuk, I; Schäfer, A; Lang, K-H; Gebhardt, Hj; Klussmann, A

    2012-01-01

    Over the last decades, an increasing use of innovative technologies has been observed in manufacturing. Much physical workload was eliminated by the change from conventional machine tools to computer-controlled units, and CNC systems spread through current production processes. As a result, machine operators today mostly have an observational function. This has increased both static work (e.g., standing, sitting) and cognitive demands (e.g., process observation). Machine operators carry high responsibility, because mistakes may lead to human injuries as well as product losses, and in consequence to high monetary losses for the company. For a CNC machine, being usable often means being efficient. Intuitive usability and an ergonomic organization of CNC workplaces can be an essential basis for reducing the risk of operating failures as well as physical complaints (e.g., pain or disease caused by bad body posture during work). In contrast to conventional machines, CNC machines comprise both hardware and software. Intuitive, transparent operation of CNC systems is a prerequisite for learning new systems quickly. Within this study, a survey was carried out among trainees learning the operation of CNC machines.

  9. Determinants explaining the variability of hand-transmitted vibration emissions from two different work tasks: grinding and cutting using angle grinders.

    PubMed

    Liljelind, Ingrid; Pettersson, Hans; Nilsson, Leif; Wahlström, Jens; Toomingas, Allan; Lundström, Ronnie; Burström, Lage

    2013-10-01

    Numerous factors, including physical, biomechanical, and individual ones, influence exposure to hand-transmitted vibration (HTV) and cause variability in the exposure measurements. Knowledge of exposure variability and determinants of exposure could be used to improve working conditions. We performed a quasi-experimental study, in which operators performed routine work tasks in order to obtain estimates of the variance components and to evaluate the effect of determinants such as machine-wheel combinations and individual operator characteristics. Two pre-defined simulated work tasks were performed by 11 operators: removal of a weld puddle of mild steel and cutting of a square steel pipe. In both tasks, four angle grinders were used, two running on compressed air and two electrically driven. Two brands of both grinding and cutting wheels were used. Each operator performed both tasks twice in a random order with each grinder and wheel, and the time to complete each task was recorded. Vibration emission values were collected and the wheel wear was measured as loss of weight. Operators' characteristics collected were as follows: age, body height and weight, length and volume of their hands, maximum hand grip force, and length of work experience with grinding machines (years). The tasks were also performed by one operator who used four machines of the same brand. Mixed and random effects models were used in the statistical evaluation. The statistical evaluation was performed for grinding and cutting separately, using a measure referring to the sum of the 1-s r.m.s. average frequency-weighted acceleration over the time taken to complete the work task (a(sa)). Within each work task, there was a significant effect of the determinants 'the machine used', 'wheel wear', and 'time taken to complete the task'. For cutting, 'the brand of wheel' used also had a significant effect. More than 90% of the inherent variability in the data was explained by the determinants. The two electrically powered machines had a mean a(sa) that was 2.6 times higher than the two air-driven machines. For cutting, the effect of the brand of wheel on a(sa) was ~0.1 times. The a(sa) increased both with increasing wheel wear and with time taken to complete the work task. However, there were also a number of interaction effects which, to a minor extent, modified the a(sa). Only a minor part (1%) of the total variability was attributed to the operator: for cutting, the volume of the hands, maximum grip force, and body weight were significant, while for grinding, it was the maximum grip force. There was no clear difference in a(sa) between the four copies of the same brand of each machine. By including determinants attributed to the brand of both machine and wheel used as well as the time taken to complete the work task, we were able to explain >90% of the variability. The dominating determinant was the brand of the machine. Little variability was found between operators, indicating that the overall effect due to the operator was small.

  10. Contemporary machine learning: techniques for practitioners in the physical sciences

    NASA Astrophysics Data System (ADS)

    Spears, Brian

    2017-10-01

    Machine learning is the science of using computers to find relationships in data without explicitly knowing or programming those relationships in advance. Often without realizing it, we employ machine learning every day as we use our phones or drive our cars. Over the last few years, machine learning has found increasingly broad application in the physical sciences. This most often involves building a model relationship between a dependent, measurable output and an associated set of controllable, but complicated, independent inputs. The methods are applicable both to experimental observations and to databases of simulated output from large, detailed numerical simulations. In this tutorial, we will present an overview of current tools and techniques in machine learning - a jumping-off point for researchers interested in using machine learning to advance their work. We will discuss supervised learning techniques for modeling complicated functions, beginning with familiar regression schemes, then advancing to more sophisticated decision trees, modern neural networks, and deep learning methods. Next, we will cover unsupervised learning and techniques for reducing the dimensionality of input spaces and for clustering data. We'll show example applications from both magnetic and inertial confinement fusion. Along the way, we will describe methods for practitioners to help ensure that their models generalize from their training data to as-yet-unseen test data. We will finally point out some limitations to modern machine learning and speculate on some ways that practitioners from the physical sciences may be particularly suited to help. This work was performed by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

  11. Learn about Physical Science: Simple Machines. [CD-ROM].

    ERIC Educational Resources Information Center

    2000

    This CD-ROM, designed for students in grades K-2, explores the world of simple machines. It allows students to delve into the mechanical world and learn the ways in which simple machines make work easier. Animated demonstrations are provided of the lever, pulley, wheel, screw, wedge, and inclined plane. Activities include practical matching and…

  12. Fun with Physics in the Elementary School.

    ERIC Educational Resources Information Center

    Ediger, Marlow

    Primary grade pupils can become fascinated with simple machines. This paper suggests that teachers have simple machines in the classroom for a unit of study. It proposes some guidelines to create a unit of study for six simple machines that include the fulcrum, inclined plane, pulley, wheel and axle, wedge, and screw. Friction, gravity, force, and…

  13. Simple Machine Junk Cars

    ERIC Educational Resources Information Center

    Herald, Christine

    2010-01-01

    During the month of May, the author's eighth-grade physical science students study the six simple machines through hands-on activities, reading assignments, videos, and notes. At the end of the month, they can easily identify the six types of simple machine: inclined plane, wheel and axle, pulley, screw, wedge, and lever. To conclude this unit,…

  14. Implementing Machine Learning in Radiology Practice and Research.

    PubMed

    Kohli, Marc; Prevedello, Luciano M; Filice, Ross W; Geis, J Raymond

    2017-04-01

    The purposes of this article are to describe concepts that radiologists should understand to evaluate machine learning projects, including common algorithms, supervised as opposed to unsupervised techniques, statistical pitfalls, and data considerations for training and evaluation, and to briefly describe ethical dilemmas and legal risk. Machine learning includes a broad class of computer programs that improve with experience. The complexity of creating, training, and monitoring machine learning indicates that the success of the algorithms will require radiologist involvement for years to come, leading to engagement rather than replacement.

  15. Learning to predict chemical reactions.

    PubMed

    Kayala, Matthew A; Azencott, Chloé-Agathe; Chen, Jonathan H; Baldi, Pierre

    2011-09-26

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problems can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high throughput, are not generalizable or scalable, and lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry data set consisting of 1630 full multistep reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of nonproductive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. 
Furthermore, the system is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert does not handle. A web interface to the machine learning based mechanistic reaction predictor is accessible through our chemoinformatics portal ( http://cdb.ics.uci.edu) under the Toolkits section.

  16. Learning to Predict Chemical Reactions

    PubMed Central

    Kayala, Matthew A.; Azencott, Chloé-Agathe; Chen, Jonathan H.

    2011-01-01

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problems can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles respectively are not high-throughput, are not generalizable or scalable, or lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry dataset consisting of 1630 full multi-step reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval, problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of non-productive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. 
Furthermore, the system is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert does not handle. A web interface to the machine learning based mechanistic reaction predictor is accessible through our chemoinformatics portal (http://cdb.ics.uci.edu) under the Toolkits section. PMID:21819139

  17. Statistical downscaling of GCM simulations to streamflow using relevance vector machine

    NASA Astrophysics Data System (ADS)

    Ghosh, Subimal; Mujumdar, P. P.

    2008-01-01

    General circulation models (GCMs), the climate models often used in assessing the impact of climate change, operate on a coarse spatial scale, so their simulation results are not directly useful for hydrology at the comparatively smaller river-basin scale. The article presents a statistical downscaling methodology based on sparse Bayesian learning and the Relevance Vector Machine (RVM) to model streamflow at the river-basin scale for the monsoon period (June, July, August, September) using GCM-simulated climatic variables. NCEP/NCAR reanalysis data have been used to train the model, establishing a statistical relationship between streamflow and climatic variables. The relationship thus obtained is used to project future streamflow from GCM simulations. The statistical methodology involves principal component analysis, fuzzy clustering, and RVM; different kernel functions are used for comparison. The model is applied to the Mahanadi river basin in India. The results obtained using RVM are compared with those of the state-of-the-art Support Vector Machine (SVM) to present the advantages of RVMs over SVMs. A decreasing trend is observed for monsoon streamflow of the Mahanadi under the CCSR/NIES GCM and B2 scenario, owing to high projected surface warming.

  18. Solution of a tridiagonal system of equations on the finite element machine

    NASA Technical Reports Server (NTRS)

    Bostic, S. W.

    1984-01-01

    Two parallel algorithms for the solution of tridiagonal systems of equations were implemented on the Finite Element Machine. The Accelerated Parallel Gauss method, an iterative method, and the Buneman algorithm, a direct method, are discussed and execution statistics are presented.
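    For reference, the serial baseline that parallel tridiagonal solvers such as these compete with is the Thomas algorithm, which solves the system in O(n) by one forward elimination sweep and one back-substitution sweep. A minimal sketch (not the Finite Element Machine implementation):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-diagonal (a[0] unused),
    b = main diagonal, c = super-diagonal (c[-1] unused), d = rhs."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified rhs
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):  # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Discretized -u'': matrix diag(2) with -1 off-diagonals, rhs chosen
# so the exact solution is x = [1, 2, 3].
x = thomas([0, -1, -1], [2, 2, 2], [-1, -1, 0], [0, 0, 4])
```

    The data dependence between successive rows in both sweeps is exactly what makes the algorithm hard to parallelize, motivating iterative schemes like Accelerated Parallel Gauss and direct cyclic-reduction schemes like Buneman's.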

  19. Comparing statistical and machine learning classifiers: alternatives for predictive modeling in human factors research.

    PubMed

    Carnahan, Brian; Meyer, Gérard; Kuntz, Lois-Ann

    2003-01-01

    Multivariate classification models play an increasingly important role in human factors research. In the past, these models have been based primarily on discriminant analysis and logistic regression. Models developed from machine learning research offer the human factors professional a viable alternative to these traditional statistical classification methods. To illustrate this point, two machine learning approaches--genetic programming and decision tree induction--were used to construct classification models designed to predict whether or not a student truck driver would pass his or her commercial driver license (CDL) examination. The models were developed and validated using the curriculum scores and CDL exam performances of 37 student truck drivers who had completed a 320-hr driver training course. Results indicated that the machine learning classification models were superior to discriminant analysis and logistic regression in terms of predictive accuracy. Actual or potential applications of this research include the creation of models that more accurately predict human performance outcomes.
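    Decision tree induction in its simplest form is a one-level tree (a stump) that searches for the single threshold best separating the classes. A toy sketch with invented curriculum scores, not the study's actual data or algorithm:

```python
def best_stump(scores, passed):
    """Find the curriculum-score threshold t such that predicting
    'pass' when score >= t classifies the most students correctly."""
    best = (None, -1)
    for t in sorted(set(scores)):
        correct = sum((s >= t) == p for s, p in zip(scores, passed))
        if correct > best[1]:
            best = (t, correct)
    return best  # (threshold, number classified correctly)

# Hypothetical curriculum scores and CDL exam outcomes.
scores = [55, 60, 62, 70, 75, 80, 88, 90]
passed = [False, False, False, True, True, True, True, True]
t, correct = best_stump(scores, passed)
```

    Full tree induction applies this threshold search recursively to each resulting subset, and genetic programming instead evolves whole classification expressions, but the split criterion above is the common building block.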

  20. Detection of Cutting Tool Wear using Statistical Analysis and Regression Model

    NASA Astrophysics Data System (ADS)

    Ghani, Jaharah A.; Rizal, Muhammad; Nuawi, Mohd Zaki; Haron, Che Hassan Che; Ramli, Rizauddin

    2010-10-01

    This study presents a new method for detecting cutting tool wear based on measured cutting force signals. A statistics-based method, the Integrated Kurtosis-based Algorithm for Z-Filter technique (I-kaz), was used to develop a regression model and a 3D graphic presentation of the I-kaz 3D coefficient during the machining process. The machining tests were carried out on a Colchester Master Tornado T4 CNC turning machine under dry cutting conditions. A Kistler 9255B dynamometer was used to measure the cutting force signals, which were transmitted, analyzed, and displayed in the DasyLab software. Various force signals from the machining operation were analyzed, each with its own I-kaz 3D coefficient. This coefficient was examined and its relationship with flank wear land (VB) was determined. A regression model was developed from this relationship, and its results show that the I-kaz 3D coefficient decreases as tool wear increases. The result is then used for real-time tool wear monitoring.
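    I-kaz builds on kurtosis, the normalized fourth statistical moment, which rises when a signal contains occasional large peaks, as worn-tool force signals do. A sketch of plain sample kurtosis only; the per-band decomposition and 3D construction of the actual I-kaz coefficient are omitted, and the signals below are invented:

```python
def kurtosis(signal):
    """Sample kurtosis: the normalized fourth central moment m4 / m2**2."""
    n = len(signal)
    mean = sum(signal) / n
    m2 = sum((s - mean) ** 2 for s in signal) / n
    m4 = sum((s - mean) ** 4 for s in signal) / n
    return m4 / m2 ** 2

# A signal with rare large spikes is far more "peaky" than a square wave
# of comparable energy, and kurtosis quantifies exactly that.
spiky = [0.0] * 98 + [5.0, -5.0]
square = [1.0, -1.0] * 50
```

    A monitoring scheme in this spirit tracks such a coefficient over the cutting process and maps its trend to flank wear through a fitted regression.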

  1. Machine learning for neuroimaging with scikit-learn.

    PubMed

    Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël

    2014-01-01

    Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain.

  2. Machine learning for neuroimaging with scikit-learn

    PubMed Central

    Abraham, Alexandre; Pedregosa, Fabian; Eickenberg, Michael; Gervais, Philippe; Mueller, Andreas; Kossaifi, Jean; Gramfort, Alexandre; Thirion, Bertrand; Varoquaux, Gaël

    2014-01-01

    Statistical machine learning methods are increasingly used for neuroimaging data analysis. Their main virtue is their ability to model high-dimensional datasets, e.g., multivariate analysis of activation images or resting-state time series. Supervised learning is typically used in decoding or encoding settings to relate brain images to behavioral or clinical observations, while unsupervised learning can uncover hidden structures in sets of images (e.g., resting state functional MRI) or find sub-populations in large cohorts. By considering different functional neuroimaging applications, we illustrate how scikit-learn, a Python machine learning library, can be used to perform some key analysis steps. Scikit-learn contains a very large set of statistical learning algorithms, both supervised and unsupervised, and its application to neuroimaging data provides a versatile tool to study the brain. PMID:24600388

  3. Water resources management: Hydrologic characterization through hydrograph simulation may bias streamflow statistics

    NASA Astrophysics Data System (ADS)

    Farmer, W. H.; Kiang, J. E.

    2017-12-01

    The development, deployment and maintenance of water resources management infrastructure and practices rely on hydrologic characterization, which requires an understanding of local hydrology. With regard to streamflow, this understanding is typically quantified with statistics derived from long-term streamgage records. However, a fundamental problem is how to characterize local hydrology without the luxury of streamgage records, a problem that complicates water resources management at ungaged locations and for long-term future projections. This problem has typically been addressed through the development of point estimators, such as regression equations, to estimate particular statistics. Physically-based precipitation-runoff models, which are capable of producing simulated hydrographs, offer an alternative to point estimators. The advantage of simulated hydrographs is that they can be used to compute any number of streamflow statistics from a single source (the simulated hydrograph) rather than relying on a diverse set of point estimators. However, the use of simulated hydrographs introduces a degree of model uncertainty that is propagated through to estimated streamflow statistics and may have drastic effects on management decisions. We compare the accuracy and precision of streamflow statistics (e.g. the mean annual streamflow, the annual maximum streamflow exceeded in 10% of years, and the minimum seven-day average streamflow exceeded in 90% of years, among others) derived from point estimators (e.g. regressions, kriging, machine learning) to that of statistics derived from simulated hydrographs across the continental United States. Initial results suggest that the error introduced through hydrograph simulation may substantially bias the resulting hydrologic characterization.
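    Two of the streamflow statistics named above can be computed directly from a daily hydrograph. A minimal sketch with invented flow data; the seven-day minimum average is the quantity underlying the "exceeded in 90% of years" statistic, which would additionally require ranking the annual minima across many years:

```python
def mean_annual(flows_by_year):
    """Mean of the annual-mean daily flows across years."""
    means = [sum(year) / len(year) for year in flows_by_year]
    return sum(means) / len(means)

def min_7day_avg(daily):
    """Minimum 7-day moving-average flow within one year's daily record."""
    return min(sum(daily[i:i + 7]) / 7 for i in range(len(daily) - 6))

# Toy record: constant flow of 10 with a one-week drought at flow 2.
year = [10.0] * 100 + [2.0] * 7 + [10.0] * 100
```

    The point of the study is that both point estimators and simulated hydrographs feed functions like these, but simulation error enters every statistic computed from the simulated series at once.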

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    This factsheet describes a project that developed and demonstrated a new manufacturing-informed design framework that utilizes advanced multi-scale, physics-based process modeling to dramatically improve manufacturing productivity and quality in machining operations while reducing the cost of machined components.

  5. Biomachining - A new approach for micromachining of metals

    NASA Astrophysics Data System (ADS)

    Vigneshwaran, S. C. Sakthi; Ramakrishnan, R.; Arun Prakash, C.; Sashank, C.

    2018-04-01

    Machining is the process of removing material from a workpiece. Machining can be done by physical, chemical or biological methods. Though physical and chemical methods have been widely used in machining processes, they have their own disadvantages, such as the development of a heat-affected zone and the use of hazardous chemicals. Biomachining is a machining process in which bacteria are used to remove material from metal parts. Chemolithotrophic bacteria such as Acidithiobacillus ferrooxidans have been used in the biomachining of metals such as copper and iron. These bacteria are used because of their ability to catalyze the oxidation of inorganic substances. Biomachining is a suitable process for the micromachining of metals. This paper reviews the biomachining process and the various mechanisms involved. It also describes the parameters/factors to be considered in biomachining and their effect on the metal removal rate.

  6. Study of Man-Machine Communications Systems for Disabled Persons (The Handicapped). Volume V. Final Report.

    ERIC Educational Resources Information Center

    Kafafian, Haig

    Instructions are given for teaching severely physically and/or neurologically handicapped students to use the 14-key Cybertype man-machine communications system, an electric writing machine with a simplified keyboard to enable persons with limited motor ability or coordination to communicate in written form. Explained are the various possible…

  7. The compound Atwood machine problem

    NASA Astrophysics Data System (ADS)

    Lopes Coelho, R.

    2017-05-01

    The present paper reports progress in physics teaching, in the sense that a problem previously closed to students for being too difficult is gained for the high school curriculum. This problem is the compound Atwood machine with three bodies. Its introduction into high school classes is based on a recent study on the weighing of an Atwood machine.

  8. Galaxy Classification using Machine Learning

    NASA Astrophysics Data System (ADS)

    Fowler, Lucas; Schawinski, Kevin; Brandt, Ben-Elias; Widmer, Nicole

    2017-01-01

    We present our current research into the use of machine learning to classify galaxy imaging data with various convolutional neural network configurations in TensorFlow. We are investigating how five-band Sloan Digital Sky Survey imaging data can be used to train on physical properties such as redshift, star formation rate, mass and morphology. We also investigate the performance of artificially redshifted images in recovering physical properties as image quality degrades.

  9. A machine-learning approach for computation of fractional flow reserve from coronary computed tomography.

    PubMed

    Itu, Lucian; Rapaka, Saikiran; Passerini, Tiziano; Georgescu, Bogdan; Schwemmer, Chris; Schoebinger, Max; Flohr, Thomas; Sharma, Puneet; Comaniciu, Dorin

    2016-07-01

    Fractional flow reserve (FFR) is a functional index quantifying the severity of coronary artery lesions and is clinically obtained using an invasive, catheter-based measurement. Recently, physics-based models have shown great promise in being able to noninvasively estimate FFR from patient-specific anatomical information, e.g., obtained from computed tomography scans of the heart and the coronary arteries. However, these models have high computational demand, limiting their clinical adoption. In this paper, we present a machine-learning-based model for predicting FFR as an alternative to physics-based approaches. The model is trained on a large database of synthetically generated coronary anatomies, where the target values are computed using the physics-based model. The trained model predicts FFR at each point along the centerline of the coronary tree, and its performance was assessed by comparing the predictions against physics-based computations and against invasively measured FFR for 87 patients and 125 lesions in total. Correlation between machine-learning and physics-based predictions was excellent (0.9994, P < 0.001), and no systematic bias was found in Bland-Altman analysis: mean difference was -0.00081 ± 0.0039. Invasive FFR ≤ 0.80 was found in 38 lesions out of 125 and was predicted by the machine-learning algorithm with a sensitivity of 81.6%, a specificity of 83.9%, and an accuracy of 83.2%. The correlation was 0.729 (P < 0.001). Compared with the physics-based computation, average execution time was reduced by more than 80 times, leading to near real-time assessment of FFR. Average execution time went down from 196.3 ± 78.5 s for the CFD model to ∼2.4 ± 0.44 s for the machine-learning model on a workstation with 3.4-GHz Intel i7 8-core processor. Copyright © 2016 the American Physiological Society.
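
    The reported sensitivity, specificity and accuracy follow from comparing predicted and invasive FFR against the 0.80 cut-off. A minimal sketch with made-up example values (not the study's data):

```python
def classification_metrics(ffr_pred, ffr_invasive, cutoff=0.80):
    """Sensitivity, specificity and accuracy of predicted FFR at a cut-off.

    A lesion is 'positive' (hemodynamically significant) when FFR <= cutoff.
    """
    tp = fp = tn = fn = 0
    for p, t in zip(ffr_pred, ffr_invasive):
        pred_pos, true_pos = p <= cutoff, t <= cutoff
        if pred_pos and true_pos:
            tp += 1
        elif pred_pos and not true_pos:
            fp += 1
        elif not pred_pos and true_pos:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(ffr_pred)
    return sensitivity, specificity, accuracy

# made-up predicted vs. invasively measured FFR values for illustration
pred = [0.75, 0.82, 0.90, 0.78, 0.85, 0.60]
inv  = [0.70, 0.79, 0.92, 0.81, 0.88, 0.65]
print(classification_metrics(pred, inv))
```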

  10. The study on the new approach to the prediction of the solar flares: The statistical relation from the SOHO archive

    NASA Astrophysics Data System (ADS)

    Lee, S.; Oh, S.; Lee, J.; Hong, S.

    2013-12-01

    We have investigated the statistical relationship of solar active regions to the occurrence of solar flares by analyzing a sunspot catalogue newly constructed from SOHO MDI observation data covering 1996 to 2011 (Solar Cycles 23 & 24) with the ASSA (Automatic Solar Synoptic Analyzer) algorithms. The prediction relation has been derived with machine-learning algorithms to establish a short-term flare prediction model for operational use in the near future. In this study, continuum and magnetogram images observed by SOHO have been processed to yield a 15-year sunspot group catalogue that contains various physical parameters such as sunspot area, extent, asymmetry measure of the largest penumbral sunspot, and roughness of the magnetic neutral line, as well as McIntosh and Mt. Wilson classification results. The latest results of our study will be presented and the new approach to the prediction of solar flares will be discussed.

  11. Learning planar Ising models

    DOE PAGES

    Johnson, Jason K.; Oyen, Diane Adele; Chertkov, Michael; ...

    2016-12-01

    Inference and learning of graphical models are both well-studied problems in statistics and machine learning that have found many applications in science and engineering. However, exact inference is intractable in general graphical models, which suggests the problem of seeking the best approximation to a collection of random variables within some tractable family of graphical models. In this paper, we focus on the class of planar Ising models, for which exact inference is tractable using techniques of statistical physics. Based on these techniques and recent methods for planarity testing and planar embedding, we propose a greedy algorithm for learning the best planar Ising model to approximate an arbitrary collection of binary random variables (possibly from sample data). Given the set of all pairwise correlations among variables, we select a planar graph and optimal planar Ising model defined on this graph to best approximate that set of correlations. Finally, we demonstrate our method in simulations and for two applications: modeling senate voting records and identifying geo-chemical depth trends from Mars rover data.
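
    The input to the authors' greedy procedure is the set of all pairwise correlations among ±1 variables. The sketch below computes those statistics from samples and ranks candidate edges by absolute correlation; this ranking is a naive stand-in, not the paper's likelihood-based edge scoring, which also enforces planarity of the growing graph:

```python
import numpy as np

def pairwise_correlations(samples):
    """Empirical pairwise correlations E[x_i x_j] of +/-1 binary samples.

    samples: (n_samples, n_vars) array with entries in {-1, +1}.
    """
    n = samples.shape[0]
    return samples.T @ samples / n

def rank_candidate_edges(corr):
    """Rank variable pairs by |correlation|, strongest first."""
    d = corr.shape[0]
    pairs = [(abs(corr[i, j]), i, j) for i in range(d) for j in range(i + 1, d)]
    return sorted(pairs, reverse=True)

rng = np.random.default_rng(1)
x0 = rng.choice([-1, 1], size=500)
x1 = np.where(rng.random(500) < 0.9, x0, -x0)   # strongly coupled to x0
x2 = rng.choice([-1, 1], size=500)              # independent variable
samples = np.column_stack([x0, x1, x2])
edges = rank_candidate_edges(pairwise_correlations(samples))
print(edges[0][1:])   # indices of the most correlated pair
```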

  13. Book Review: Maxwell's Demon 2: Entropy, classical and quantum information, computing. Harvey Leff and Andrew Rex (Eds.); Institute of Physics, Bristol, 2003, 500pp., US 55, ISBN 0750307595

    NASA Astrophysics Data System (ADS)

    Shenker, Orly R.

    2004-09-01

    In 1867, James Clerk Maxwell proposed a perpetuum mobile of the second kind, that is, a counterexample to the Second Law of thermodynamics, which came to be known as "Maxwell's Demon." Unlike any other perpetual motion machine, this one escaped attempts by the best scientists and philosophers to show that the Second Law or its statistical mechanical counterparts are universal after all. "Maxwell's demon lives on. After more than 130 years of uncertain life and at least two pronouncements of death, this fanciful character seems more vibrant than ever." These words of Harvey Leff and Andrew Rex (1990), which open their introduction to Maxwell's Demon 2: Entropy, Classical and Quantum Information, Computing (hereafter MD2) are very true: the Demon is as challenging and as intriguing as ever, and forces us to think and rethink about the foundations of thermodynamics and of statistical mechanics.

  14. Trends of Occupational Fatalities Involving Machines, United States, 1992–2010

    PubMed Central

    Marsh, Suzanne M.; Fosbroke, David E.

    2016-01-01

    Background This paper describes trends of occupational machine-related fatalities from 1992–2010. We examine temporal patterns by worker demographics, machine types (e.g., stationary, mobile), and industries. Methods We analyzed fatalities from Census of Fatal Occupational Injuries data provided by the Bureau of Labor Statistics to the National Institute for Occupational Safety and Health. We used injury source to identify machine-related incidents and Poisson regression to assess trends over the 19-year period. Results There was an average annual decrease of 2.8% in overall machine-related fatality rates from 1992 through 2010. Mobile machine-related fatality rates decreased an average of 2.6% annually and stationary machine-related rates decreased an average of 3.5% annually. Groups that continued to be at high risk included older workers; self-employed; and workers in agriculture/forestry/fishing, construction, and mining. Conclusion Addressing dangers posed by tractors, excavators, and other mobile machines needs to continue. High-risk worker groups should receive targeted information on machine safety. PMID:26358658
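
    The trend measure used here, an average annual percent change from Poisson regression, can be sketched as follows; the counts are synthetic, not the CFOI data:

```python
import numpy as np

def poisson_trend(years, counts, iters=25):
    """Fit counts ~ Poisson(exp(a + b*t)) by Newton's method and return
    the average annual percent change, exp(b) - 1."""
    t = (years - years.mean()).astype(float)
    X = np.column_stack([np.ones_like(t), t])
    beta = np.array([np.log(counts.mean()), 0.0])   # stable starting point
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta += np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (counts - mu))
    return np.exp(beta[1]) - 1.0

years = np.arange(1992, 2011)                        # the 19-year window
rng = np.random.default_rng(4)
counts = rng.poisson(800 * 0.972 ** (years - 1992))  # built-in 2.8% decline
print(poisson_trend(years, counts))
```

    With the seeded synthetic counts the recovered change is close to the -2.8% per year built into the simulated rates.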

  15. TEACHING PHYSICS: Atwood's machine: experiments in an accelerating frame

    NASA Astrophysics Data System (ADS)

    Teck Chee, Chia; Hong, Chia Yee

    1999-03-01

    Experiments in an accelerating frame are often difficult to perform, but simple computer software allows sufficiently rapid and accurate measurements to be made on an arrangement of weights and pulleys known as Atwood's machine.

  16. STATISTICAL EVALUATION OF CONFOCAL MICROSCOPY IMAGES

    EPA Science Inventory

    Abstract

    In this study the CV is defined as the SD/Mean of the population of beads or pixels. Flow cytometry uses the CV of beads to determine if the machine is aligned correctly and performing properly. This CV concept to determine machine performance has been adapted to...

  17. A MOOC on Approaches to Machine Translation

    ERIC Educational Resources Information Center

    Costa-jussà, Mart R.; Formiga, Lluís; Torrillas, Oriol; Petit, Jordi; Fonollosa, José A. R.

    2015-01-01

    This paper describes the design, development, and analysis of a MOOC entitled "Approaches to Machine Translation: Rule-based, statistical and hybrid", and provides lessons learned and conclusions to be taken into account in the future. The course was developed within the Canvas platform, used by recognized European universities. It…

  18. Signal detection using support vector machines in the presence of ultrasonic speckle

    NASA Astrophysics Data System (ADS)

    Kotropoulos, Constantine L.; Pitas, Ioannis

    2002-04-01

    Support vector machines constitute a general class of algorithms based on guaranteed risk bounds from statistical learning theory. They have found numerous applications, such as classification of brain PET images, optical character recognition, object detection, face verification, text categorization and so on. In this paper we propose the use of support vector machines to segment lesions in ultrasound images, and we assess their lesion detection ability thoroughly. We demonstrate that trained support vector machines with a radial basis function kernel satisfactorily segment (unseen) ultrasound B-mode images as well as clinical ultrasonic images.
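
    The radial basis function kernel these SVMs rely on is simple to write down; a short sketch (the `gamma` value is an arbitrary illustrative choice, not from the paper):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """RBF (Gaussian) kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2).

    A trained SVM then labels a feature vector x via
    sign(sum_i alpha_i y_i K(x_i, x) + b) over its support vectors x_i.
    """
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * np.maximum(sq, 0.0))   # clamp tiny negatives

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = rbf_kernel(X, X)
print(K)
```

    The resulting matrix is symmetric positive definite, which is what places the examples in a Euclidean feature space.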

  19. Risk estimation using probability machines

    PubMed Central

    2014-01-01

    Background Logistic regression has been the de facto, and often the only, model used in the description and analysis of relationships between a binary outcome and observed features. It is widely used to obtain the conditional probabilities of the outcome given predictors, as well as predictor effect size estimates using conditional odds ratios. Results We show how statistical learning machines for binary outcomes, provably consistent for the nonparametric regression problem, can be used to provide both consistent conditional probability estimation and conditional effect size estimates. Effect size estimates from learning machines leverage our understanding of counterfactual arguments central to the interpretation of such estimates. We show that, if the data generating model is logistic, we can recover accurate probability predictions and effect size estimates with nearly the same efficiency as a correct logistic model, both for main effects and interactions. We also propose a method using learning machines to scan for possible interaction effects quickly and efficiently. Simulations using random forest probability machines are presented. Conclusions The models we propose make no assumptions about the data structure, and capture the patterns in the data by just specifying the predictors involved and not any particular model structure. So they do not run the same risks of model mis-specification and the resultant estimation biases as a logistic model. This methodology, which we call a “risk machine”, will share properties from the statistical machine that it is derived from. PMID:24581306
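
    The core idea can be sketched with a k-nearest-neighbour average standing in for the random forest (a deliberate simplification, not the authors' implementation): any consistent nonparametric estimate of P(y=1 | x) also yields counterfactual-style effect sizes by contrasting predictions with one predictor shifted:

```python
import numpy as np

def knn_probability(X, y, x_query, k=50):
    """Estimate P(y=1 | x) by averaging the 0/1 outcome over the k
    nearest training points (a stand-in for a probability machine)."""
    nearest = np.argsort(np.linalg.norm(X - x_query, axis=1))[:k]
    return y[nearest].mean()

def risk_difference(X, y, x_query, feature, delta=1.0, k=50):
    """Counterfactual-style effect size: change in estimated probability
    when one feature is shifted by delta."""
    x_alt = x_query.copy()
    x_alt[feature] += delta
    return knn_probability(X, y, x_alt, k) - knn_probability(X, y, x_query, k)

rng = np.random.default_rng(2)
X = rng.normal(size=(2000, 2))
logit = 2.0 * X[:, 0]                      # only feature 0 matters
y = (rng.random(2000) < 1 / (1 + np.exp(-logit))).astype(int)
print(risk_difference(X, y, np.zeros(2), feature=0))   # large, positive
print(risk_difference(X, y, np.zeros(2), feature=1))   # near zero
```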

  20. AstroML: "better, faster, cheaper" towards state-of-the-art data mining and machine learning

    NASA Astrophysics Data System (ADS)

    Ivezic, Zeljko; Connolly, Andrew J.; Vanderplas, Jacob

    2015-01-01

    We present AstroML, a Python module for machine learning and data mining built on numpy, scipy, scikit-learn, matplotlib, and astropy, and distributed under an open license. AstroML contains a growing library of statistical and machine learning routines for analyzing astronomical data in Python, loaders for several open astronomical datasets (such as SDSS and other recent major surveys), and a large suite of examples of analyzing and visualizing astronomical datasets. AstroML is especially suitable for introducing undergraduate students to numerical research projects and for graduate students to rapidly undertake cutting-edge research. The long-term goal of astroML is to provide a community repository for fast Python implementations of common tools and routines used for statistical data analysis in astronomy and astrophysics (see http://www.astroml.org).

  1. Effectiveness and efficiency of different weight machine-based strength training programmes for patients with hip or knee osteoarthritis: a protocol for a quasi-experimental controlled study in the context of health services research.

    PubMed

    Krauss, Inga; Müller, Gerhard; Steinhilber, Benjamin; Haupt, Georg; Janssen, Pia; Martus, Peter

    2017-01-01

    Osteoarthritis is a chronic musculoskeletal disease with a major impact on the individual and the healthcare system. As there is no cure, therapy aims for symptom release and reduction of disease progression. Physical exercises have been defined as a core treatment for osteoarthritis. However, research questions related to dose response, sustainability of effects, economic efficiency and safety are still open and will be evaluated in this trial, investigating a progressive weight machine-based strength training. This is a quasi-experimental controlled trial in the context of health services research. The intervention group (n=300) is recruited from participants of an offer for insurants of a health insurance company suffering from hip or knee osteoarthritis. Potential control-group participants are identified in the insurance database according to predefined matching criteria and contacted in writing. The final statistical twins from the control responders will be determined via propensity score matching (n=300). The training intervention comprises 24 supervised mandatory sessions (2/week) and another 12 facultative sessions (1/week). Exercises include resistance training for the lower extremity and core muscles by use of weight machines and small training devices. The training offer is available at two sites. They differ with respect to the weight machines in use, resulting in different dosage parameters. Primary outcomes are self-reported pain and function immediately after the 12-week intervention period. Health-related quality of life, self-efficacy, cost utility and safety will be evaluated as secondary outcomes. Secondary analysis will be undertaken with two strata related to study site. Participants will be followed up 6, 12 and 24 months after baseline. German Clinical Trial Register DRKS00009257. Pre-results.

  2. The upgraded Large Plasma Device, a machine for studying frontier basic plasma physics.

    PubMed

    Gekelman, W; Pribyl, P; Lucky, Z; Drandell, M; Leneman, D; Maggs, J; Vincena, S; Van Compernolle, B; Tripathi, S K P; Morales, G; Carter, T A; Wang, Y; DeHaas, T

    2016-02-01

    In 1991 a manuscript describing an instrument for studying magnetized plasmas was published in this journal. The Large Plasma Device (LAPD) was upgraded in 2001 and has become a national user facility for the study of basic plasma physics. The upgrade, as well as the diagnostics introduced since then, has significantly changed the capabilities of the device. All references to the machine still quote the original RSI paper, which is no longer appropriate. In this work, the properties of the updated LAPD are presented. The strategy of the machine construction, the available diagnostics, the parameters available for experiments, as well as illustrations of several experiments are presented here.

  3. Machine learning with quantum relative entropy

    NASA Astrophysics Data System (ADS)

    Tsuda, Koji

    2009-12-01

    Density matrices are a central tool in quantum physics, but they are also used in machine learning. A positive definite matrix called the kernel matrix is used to represent the similarities between examples. Positive definiteness assures that the examples are embedded in a Euclidean space. When a positive definite matrix is learned from data, one has to design an update rule that maintains the positive definiteness. Our update rule, called the matrix exponentiated gradient update, is motivated by the quantum relative entropy. Notably, the relative entropy is an instance of the Bregman divergences, which are asymmetric distance measures specifying theoretical properties of machine learning algorithms. Using the calculus commonly used in quantum physics, we prove an upper bound on the generalization error of online learning.
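
    A hedged sketch of the matrix exponentiated gradient update for symmetric matrices (the "gradient" below is an arbitrary symmetric matrix, not a loss from the paper): taking the step in log-space and renormalising keeps the iterate a valid density matrix, i.e. positive definite with unit trace:

```python
import numpy as np

def sym_logm(A):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.log(w)) @ V.T

def sym_expm(A):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.exp(w)) @ V.T

def meg_update(W, grad, eta=0.1):
    """Matrix exponentiated gradient step: W <- exp(log W - eta*grad) / Z,
    with Z chosen so that trace(W) = 1.  Positive definiteness and unit
    trace are preserved by construction."""
    W_new = sym_expm(sym_logm(W) - eta * (grad + grad.T) / 2)
    return W_new / np.trace(W_new)

d = 3
W = np.eye(d) / d                       # maximally mixed starting point
G = np.array([[1.0, 0.2, 0.0],
              [0.2, 0.5, 0.1],
              [0.0, 0.1, 2.0]])         # arbitrary symmetric 'gradient'
W = meg_update(W, G)
print(np.trace(W))
```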

  4. TEACHING PHYSICS: A computer-based revitalization of Atwood's machine

    NASA Astrophysics Data System (ADS)

    Trumper, Ricardo; Gelbman, Moshe

    2000-09-01

    Atwood's machine is used in a microcomputer-based experiment to demonstrate Newton's second law with considerable precision. The friction force on the masses and the moment of inertia of the pulley can also be estimated.

  5. Progress Toward Fabrication of Machined Metal Shells for the First Double-Shell Implosions at the National Ignition Facility

    DOE PAGES

    Cardenas, Tana; Schmidt, Derek W.; Loomis, Eric N.; ...

    2018-01-25

    The double-shell platform fielded at the National Ignition Facility requires developments in new machining techniques and robotic assembly stations to meet the experimental specifications. Current double-shell target designs use a dense high-Z inner shell, a foam cushion, and a low-Z outer shell. The design requires that the inner shell be gas filled using a fill tube. This tube impacts the entire machining and assembly design. Other intermediate physics designs have to be fielded to answer physics questions and advance the technology to be able to fabricate the full point design in the near future. One of these intermediate designs is a mid-Z imaging design. The methods of designing, fabricating, and characterizing each of the major components of an imaging double shell are discussed with an emphasis on the fabrication of the machined outer metal shell.

  7. Machine learning of network metrics in ATLAS Distributed Data Management

    NASA Astrophysics Data System (ADS)

    Lassnig, Mario; Toler, Wesley; Vamosi, Ralf; Bogado, Joaquin; ATLAS Collaboration

    2017-10-01

    The increasing volume of physics data poses a critical challenge to the ATLAS experiment. In anticipation of high luminosity physics, automation of everyday data management tasks has become necessary. Previously many of these tasks required human decision-making and operation. Recent advances in hardware and software have made it possible to entrust more complicated duties to automated systems using models trained by machine learning algorithms. In this contribution we show results from one of our ongoing automation efforts that focuses on network metrics. First, we describe our machine learning framework built atop the ATLAS Analytics Platform. This framework can automatically extract and aggregate data, train models with various machine learning algorithms, and eventually score the resulting models and parameters. Second, we use these models to forecast metrics relevant for network-aware job scheduling and data brokering. We show the characteristics of the data and evaluate the forecasting accuracy of our models.

  8. Simulating wave-turbulence on thin elastic plates with arbitrary boundary conditions

    NASA Astrophysics Data System (ADS)

    van Rees, Wim M.; Mahadevan, L.

    2016-11-01

    The statistical characteristics of interacting waves are described by the theory of wave turbulence, with the study of deep water gravity wave turbulence serving as a paradigmatic physical example. Here we consider the elastic analog of this problem in the context of flexural waves arising from vibrations of a thin elastic plate. Such flexural waves generate the unique sounds of so-called thunder machines used in orchestras - thin metal plates that make a thunder-like sound when forcefully shaken. Wave turbulence in elastic plates is typically investigated numerically using spectral simulations with periodic boundary conditions, which are not very realistic. We will present the results of numerical simulations of the dynamics of thin elastic plates in physical space, with arbitrary shapes, boundary conditions, anisotropy and inhomogeneity, and show first results on wave turbulence beyond the conventionally studied rectangular plates. Finally, motivated by a possible method to measure ice-sheet thicknesses in the open ocean, we will further discuss the behavior of a vibrating plate when floating on an inviscid fluid.

  9. Machinability of titanium metal matrix composites (Ti-MMCs)

    NASA Astrophysics Data System (ADS)

    Aramesh, Maryam

    Titanium metal matrix composites (Ti-MMCs), as a new generation of materials, have various potential applications in the aerospace and automotive industries. The presence of ceramic particles enhances the physical and mechanical properties of the alloy matrix. However, the hard and abrasive nature of these particles causes various issues in their machinability. Severe tool wear and short tool life are the most important drawbacks of machining this class of materials. There is very limited work in the literature regarding the machinability of this class of materials, especially in the area of tool life estimation and tool wear. Polycrystalline diamond (PCD) tools are, from researchers' point of view, by far the best choice for machining MMCs. However, due to their high cost, economical alternatives are sought. Cubic boron nitride (CBN) inserts, as the second hardest available tools, show superior characteristics such as great wear resistance, high hardness at elevated temperatures, a low coefficient of friction and a high melting point. Yet CBN tools have so far not been studied in the machining of Ti-MMCs. In this study, a comprehensive investigation has been performed to explore the tool wear mechanisms of CBN inserts during turning of Ti-MMCs. The unique morphology of the worn faces of the tools was investigated for the first time, which led to new insights into the identification of chemical wear mechanisms during machining of Ti-MMCs. Utilizing the full tool life capacity of cutting tools is also very crucial, due to the considerable costs associated with suboptimal replacement of tools. This strongly motivates the development of a reliable model for tool life estimation under any cutting conditions. In this study, a novel model based on survival analysis methodology is developed to estimate the progressive states of tool wear under any cutting conditions during machining of Ti-MMCs. This statistical model takes into account the machining time in addition to the effect of the cutting parameters, and its predictions showed very good agreement with the experimental results. Moreover, a more advanced model was constructed by adding tool wear as a further variable, yielding an estimate of the remaining life of worn inserts under different cutting conditions from current tool wear data. The results of this model were likewise validated against, and found to be well consistent with, the experiments.

  10. Physical mechanism of ultrasonic machining

    NASA Astrophysics Data System (ADS)

    Isaev, A.; Grechishnikov, V.; Kozochkin, M.; Pivkin, P.; Petuhov, Y.; Romanov, V.

    2016-04-01

    In this paper, the main aspects of ultrasonic machining of constructional materials are considered. The influence of coolant on surface parameters is studied. Results of experiments on ultrasonic lathe cutting with application of tangential vibrations and with use of coolant are presented.

  11. Advanced Telecommunications Technologies in Rural Communities: Factors Affecting Use.

    ERIC Educational Resources Information Center

    Leistritz, F. Larry; Allen, John C.; Johnson, Bruce B.; Olsen, Duane; Sell, Randy

    1997-01-01

    A survey of 2,000 rural residents in 6 states (36% response) found that 56% used answering machines, 48% fax machines, 46% personal computers, 27% cell phones, and 25% modems. Higher use was associated with higher income and education. Distance from the nearest metropolitan statistical area increased use. A large majority believed…

  12. Office Machines Used in Business Today.

    ERIC Educational Resources Information Center

    COOK, FRED S.; MALICHE, ELEANOR

    Interviews of 239 businesses in the Bay City Standard Metropolitan Statistical Area of Michigan provided information on (1) the type and number of machines used in business, (2) the training demanded by employers of personnel using this office equipment, (3) the extent of on-the-job training given by employers, and (4) the implications for vocational…

  13. Anomaly detection for machine learning redshifts applied to SDSS galaxies

    NASA Astrophysics Data System (ADS)

    Hoyle, Ben; Rau, Markus Michael; Paech, Kerstin; Bonnett, Christopher; Seitz, Stella; Weller, Jochen

    2015-10-01

    We present an analysis of anomaly detection for machine learning redshift estimation. Anomaly detection allows the removal of poor training examples, which can adversely influence redshift estimates. Anomalous training examples may be photometric galaxies with incorrect spectroscopic redshifts, or galaxies with one or more poorly measured photometric quantities. We select 2.5 million `clean' SDSS DR12 galaxies with reliable spectroscopic redshifts, and 6730 `anomalous' galaxies with spectroscopic redshift measurements which are flagged as unreliable. We contaminate the clean base galaxy sample with galaxies with unreliable redshifts and attempt to recover the contaminating galaxies using the Elliptical Envelope technique. We then train four machine learning architectures for redshift analysis on both the contaminated sample and on the preprocessed `anomaly-removed' sample and measure redshift statistics on a clean validation sample generated without any preprocessing. We find an improvement on all measured statistics of up to 80 per cent when training on the anomaly-removed sample as compared with training on the contaminated sample for each of the machine learning routines explored. We further describe a method to estimate the contamination fraction of a base data sample.
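
    The anomaly-removal step can be illustrated with a simplified stand-in for the Elliptical Envelope (scikit-learn's version fits a robust covariance estimate; here a plain Gaussian fit is used for brevity), flagging the points with the largest Mahalanobis distances:

```python
import numpy as np

def mahalanobis_outliers(X, contamination=0.05):
    """Flag the most anomalous rows of X by Mahalanobis distance.

    Simplification of the Elliptical Envelope idea: fit mean/covariance
    to all points (a robust fit would down-weight outliers first) and
    mark the top `contamination` fraction as anomalies.
    """
    mu = X.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(X.T))
    diff = X - mu
    d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances
    return d2 > np.quantile(d2, 1.0 - contamination)    # True = anomalous

rng = np.random.default_rng(3)
clean = rng.normal(size=(950, 2))
anomalies = rng.normal(loc=10.0, size=(50, 2))  # far-off contaminants
X = np.vstack([clean, anomalies])
flags = mahalanobis_outliers(X, contamination=0.05)
print(flags[-50:].mean())   # fraction of the injected anomalies recovered
```

    In the paper the flagged training galaxies are then dropped before the redshift models are trained.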

  14. The machine body metaphor: From science and technology to physical education and sport, in France (1825-1935).

    PubMed

    Gleyse, J

    2013-12-01

    The long history of the conception of physical exercise in France may be viewed as a function of a series of changes in understanding the body. Scientific concepts were used to present the body in official texts by authors specializing in the subject, or to describe them, as did Michel Foucault, as epistemic changes. A departure occurred during the 19th century that is clearly demonstrated in the writings of Gustave Adolphe Hirn. This breakthrough concerned the idea of considering the organism as an energy-generating machine. It superseded the earlier mechanical metaphor that had been used to describe the body during physical exercise from the 17th to the 19th centuries. Such metaphors were used by the most relevant figures writing at the end of the 19th century in the rationale that is examined in this paper. It shows how Hirn, Marey, Lagrange, Demenij, Hebert, and Tissié saw the body and how they employed machine metaphors when referring to it. These machine metaphors are analyzed from the time of their scientific and technological origins up to their current use in physical and sports education. This analysis will contribute to the understanding of how a scientific metaphor comes to be in common use and may lead to particular exercise practices. © 2012 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. External validation of ADO, DOSE, COTE and CODEX at predicting death in primary care patients with COPD using standard and machine learning approaches.

    PubMed

    Morales, Daniel R; Flynn, Rob; Zhang, Jianguo; Trucco, Emmanuel; Quint, Jennifer K; Zutis, Kris

    2018-05-01

    Several models for predicting the risk of death in people with chronic obstructive pulmonary disease (COPD) exist but have not undergone large-scale validation in primary care. The objective of this study was to externally validate these models using statistical and machine learning approaches. We used a primary care COPD cohort identified using data from the UK Clinical Practice Research Datalink. Age-standardised mortality rates were calculated for the population by gender and discrimination of ADO (age, dyspnoea, airflow obstruction), COTE (COPD-specific comorbidity test), DOSE (dyspnoea, airflow obstruction, smoking, exacerbations) and CODEX (comorbidity, dyspnoea, airflow obstruction, exacerbations) at predicting death over 1-3 years measured using logistic regression and a support vector machine (SVM) method of analysis. The age-standardised mortality rate was 32.8 (95%CI 32.5-33.1) and 25.2 (95%CI 25.4-25.7) per 1000 person-years for men and women, respectively. Complete data were available for 54,879 patients to predict 1-year mortality. ADO performed the best (c-statistic of 0.730) compared with DOSE (c-statistic 0.645), COTE (c-statistic 0.655) and CODEX (c-statistic 0.649) at predicting 1-year mortality. Discrimination of ADO and DOSE improved at predicting 1-year mortality when combined with COTE comorbidities (c-statistic 0.780 ADO + COTE; c-statistic 0.727 DOSE + COTE). Discrimination did not change significantly over 1-3 years. Comparable results were observed using SVM. In primary care, ADO appears superior at predicting death in COPD. Performance of ADO and DOSE improved when combined with COTE comorbidities suggesting better models may be generated with additional data facilitated using novel approaches. Copyright © 2018. Published by Elsevier Ltd.
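
    The c-statistic reported in this record is the area under the ROC curve of predicted risk. A minimal sketch of how such a figure is computed, using synthetic stand-ins for ADO-style predictors (variable names, coefficients, and event rates are invented for illustration, not taken from the study):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)

    # Hypothetical stand-ins for ADO-style predictors (age, dyspnoea grade,
    # airflow obstruction); effect sizes below are invented for illustration.
    n = 4000
    X = np.column_stack([
        rng.normal(70, 8, n),        # age (years)
        rng.integers(0, 5, n),       # dyspnoea grade
        rng.normal(55, 15, n),       # FEV1 % predicted
    ])
    # Simulated 1-year mortality: risk rises with age/dyspnoea, falls with FEV1.
    logit = 0.07 * (X[:, 0] - 70) + 0.5 * X[:, 1] - 0.04 * (X[:, 2] - 55) - 2.0
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # The c-statistic is the area under the ROC curve of predicted risk.
    c_stat = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"c-statistic: {c_stat:.3f}")
    ```

    Swapping `LogisticRegression` for `sklearn.svm.SVC(probability=True)` would give the SVM-based comparison analogous to the study's second method.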

  16. How much information is in a jet?

    NASA Astrophysics Data System (ADS)

    Datta, Kaustuv; Larkoski, Andrew

    2017-06-01

    Machine learning techniques are increasingly being applied to data analyses at the Large Hadron Collider, especially with applications for discrimination of jets with different originating particles. Previous studies of the application of machine learning to jet physics have typically employed image recognition, natural language processing, or other algorithms that have been extensively developed in computer science. While these studies have demonstrated impressive discrimination power, often exceeding that of widely-used observables, they have been formulated in a non-constructive manner and it is not clear what additional information the machines are learning. In this paper, we study machine learning for jet physics constructively, mapping all of the information in a jet onto sets of observables that completely and minimally span N-body phase space. For concreteness, we study the application of machine learning for discrimination of boosted, hadronic decays of Z bosons from jets initiated by QCD processes. Our results demonstrate that the information in a jet that is useful for discriminating QCD jets from Z bosons is saturated by considering only observables that are sensitive to 4-body (8 dimensional) phase space.

  17. Prediction of the far field noise from wind energy farms

    NASA Technical Reports Server (NTRS)

    Shepherd, K. P.; Hubbard, H. H.

    1986-01-01

    The basic physical factors involved in making predictions of wind turbine noise and an approach which allows for differences in the machines, the wind energy farm configurations and propagation conditions are reviewed. Example calculations to illustrate the sensitivity of the radiated noise to such variables as machine size, spacing and numbers, and such atmospheric variables as absorption and wind direction are presented. It is found that calculated far field distances to particular sound level contours are greater for lower values of atmospheric absorption, for a larger total number of machines, for additional rows of machines and for more powerful machines. At short and intermediate distances, higher sound pressure levels are calculated for closer machine spacings, for more powerful machines, for longer row lengths and for closer row spacings.
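
    The qualitative trends described in this record can be reproduced with a toy far-field model, assuming spherical spreading plus a linear atmospheric-absorption term and incoherent summation over machines; the source power level, absorption coefficient, and farm layout below are assumptions for illustration, not values from the report:

    ```python
    import numpy as np

    def farm_spl(receiver, sources, lw_db=105.0, alpha_db_per_km=2.0):
        """Total sound pressure level (dB) at `receiver` from point `sources`,
        using spherical spreading (Lp = Lw - 20 log10 r - 11) plus a linear
        atmospheric absorption term, summed incoherently over machines."""
        r = np.linalg.norm(sources - receiver, axis=1)            # distances (m)
        spl = lw_db - 20.0 * np.log10(r) - 11.0 - alpha_db_per_km * r / 1000.0
        return 10.0 * np.log10(np.sum(10.0 ** (spl / 10.0)))

    # A 3x3 grid of turbines spaced 200 m apart (illustrative layout).
    xs, ys = np.meshgrid(np.arange(3) * 200.0, np.arange(3) * 200.0)
    farm = np.column_stack([xs.ravel(), ys.ravel()])

    near = farm_spl(np.array([500.0, 200.0]), farm)
    far = farm_spl(np.array([5000.0, 200.0]), farm)
    print(f"SPL at ~0.5 km: {near:.1f} dB; at ~5 km: {far:.1f} dB")
    ```

    Raising `lw_db` (more powerful machines), adding rows to `farm`, or lowering `alpha_db_per_km` all push a given sound-level contour further out, matching the sensitivities the abstract lists.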

  18. Machine learning topological states

    NASA Astrophysics Data System (ADS)

    Deng, Dong-Ling; Li, Xiaopeng; Das Sarma, S.

    2017-11-01

    Artificial neural networks and machine learning have now reached a new era after several decades of improvement where applications are to explode in many fields of science, industry, and technology. Here, we use artificial neural networks to study an intriguing phenomenon in quantum physics—the topological phases of matter. We find that certain topological states, either symmetry-protected or with intrinsic topological order, can be represented with classical artificial neural networks. This is demonstrated by using three concrete spin systems, the one-dimensional (1D) symmetry-protected topological cluster state and the 2D and 3D toric code states with intrinsic topological orders. For all three cases, we show rigorously that the topological ground states can be represented by short-range neural networks in an exact and efficient fashion—the required number of hidden neurons is as small as the number of physical spins and the number of parameters scales only linearly with the system size. For the 2D toric-code model, we find that the proposed short-range neural networks can describe the excited states with Abelian anyons and their nontrivial mutual statistics as well. In addition, by using reinforcement learning we show that neural networks are capable of finding the topological ground states of nonintegrable Hamiltonians with strong interactions and studying their topological phase transitions. Our results demonstrate explicitly the exceptional power of neural networks in describing topological quantum states, and at the same time provide valuable guidance to machine learning of topological phases in generic lattice models.

  19. When do traumatic experiences alter risk-taking behavior? A machine learning analysis of reports from refugees

    PubMed Central

    Augsburger, Mareike; Elbert, Thomas

    2017-01-01

    Exposure to traumatic stressors and subsequent trauma-related mental changes may alter a person’s risk-taking behavior. It is unclear whether this relationship depends on the specific types of traumatic experiences. Moreover, the association has never been tested in displaced individuals with substantial levels of traumatic experiences. The present study assessed risk-taking behavior in 56 displaced individuals by means of the balloon analogue risk task (BART). Exposure to traumatic events, symptoms of posttraumatic stress disorder and depression were assessed by means of semi-structured interviews. Using a novel statistical approach (stochastic gradient boosting machines), we analyzed predictors of risk-taking behavior. Exposure to organized violence was associated with less risk-taking, as indicated by fewer adjusted pumps in the BART, as was the reported experience of physical abuse and neglect, emotional abuse, and peer violence in childhood. However, civil traumatic stressors, as well as other events during childhood, were associated with lower risk-taking. This suggests that the association between global risk-taking behavior and exposure to traumatic stress depends on the particular type of the stressors that have been experienced. PMID:28498865
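
    A hedged sketch of the stochastic gradient boosting approach named above, using scikit-learn's implementation (`subsample < 1` is what makes the boosting stochastic). The predictors and a BART-like outcome are synthetic; variable names and effect sizes are invented for illustration, not the study's data:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(2)

    # Hypothetical stand-ins for trauma-exposure predictors and a BART-like
    # outcome (adjusted pumps); names and effect sizes are invented.
    n = 600
    organized_violence = rng.integers(0, 10, n)
    childhood_adversity = rng.integers(0, 10, n)
    noise = rng.normal(size=n)                      # an irrelevant predictor
    pumps = (40.0 - 1.5 * organized_violence - 1.0 * childhood_adversity
             + rng.normal(0.0, 3.0, n))

    X = np.column_stack([organized_violence, childhood_adversity, noise])
    # subsample < 1 fits each tree on a random subset: stochastic boosting.
    model = GradientBoostingRegressor(subsample=0.8, random_state=0).fit(X, pumps)

    for name, imp in zip(["organized violence", "childhood adversity", "noise"],
                         model.feature_importances_):
        print(f"{name}: relative importance {imp:.2f}")
    ```

    The feature importances play the role the abstract describes: ranking which types of experience contribute most to predicting risk-taking.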

  20. Machine Learning-based discovery of closures for reduced models of dynamical systems

    NASA Astrophysics Data System (ADS)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

    Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present a ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of the hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting model on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
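
    The trapezoidal approximation of the convolution (memory) term mentioned above can be sketched as follows. The exponential kernel is an assumption chosen purely for illustration, because the convolution then has a closed form to check the quadrature against:

    ```python
    import numpy as np

    # Assumed exponential memory kernel K(tau) = exp(-a*tau), illustrative only.
    a = 5.0
    dt = 0.01
    t = np.arange(0.0, 2.0 + dt, dt)
    x = np.sin(t)                       # resolved-variable history (illustrative)

    def trapezoid(y, dx):
        """Uniform-grid trapezoidal rule."""
        return dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))

    def memory_term(i):
        """Trapezoidal approximation of m(t_i) = int_0^{t_i} K(t_i - s) x(s) ds."""
        s = t[: i + 1]
        return trapezoid(np.exp(-a * (t[i] - s)) * x[: i + 1], dt)

    m = np.array([memory_term(i) for i in range(len(t))])

    # Closed form: int_0^t exp(-a(t-s)) sin(s) ds
    #            = (a*sin(t) - cos(t) + exp(-a*t)) / (a**2 + 1).
    exact = (a * np.sin(t) - np.cos(t) + np.exp(-a * t)) / (a**2 + 1)
    err = np.max(np.abs(m - exact))
    print(f"max trapezoid error: {err:.2e}")
    ```

    In the framework described above, the kernel itself would be unknown and parameterized (its sampled values, memory length, and hidden-state dimension are the hyperparameters to be optimized); here it is fixed only so the quadrature can be validated.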

  1. Technical Report: Reference photon dosimetry data for Varian accelerators based on IROC-Houston site visit data.

    PubMed

    Kerns, James R; Followill, David S; Lowenstein, Jessica; Molineu, Andrea; Alvarez, Paola; Taylor, Paige A; Stingo, Francesco C; Kry, Stephen F

    2016-05-01

    Accurate data regarding linear accelerator (Linac) radiation characteristics are important for treatment planning system modeling as well as regular quality assurance of the machine. The Imaging and Radiation Oncology Core-Houston (IROC-H) has measured the dosimetric characteristics of numerous machines through their on-site dosimetry review protocols. Photon data are presented and can be used as a secondary check of acquired values, as a means to verify commissioning a new machine, or in preparation for an IROC-H site visit. Photon data from IROC-H on-site reviews from 2000 to 2014 were compiled and analyzed. Specifically, data from approximately 500 Varian machines were analyzed. Each dataset consisted of point measurements of several dosimetric parameters at various locations in a water phantom to assess the percentage depth dose, jaw output factors, multileaf collimator small field output factors, off-axis factors, and wedge factors. The data were analyzed by energy and parameter, with similarly performing machine models being assimilated into classes. Common statistical metrics are presented for each machine class. Measurement data were compared against other reference data where applicable. Distributions of the parameter data were shown to be robust and to follow a Student's t distribution. Based on statistical and clinical criteria, all machine models were able to be classified into two or three classes for each energy, except for 6 MV for which there were eight classes. Quantitative analysis of the measurements for 6, 10, 15, and 18 MV photon beams is presented for each parameter; supplementary material has also been made available which contains further statistical information. IROC-H has collected numerous data on Varian Linacs and the results of photon measurements from the past 15 years are presented. The data can be used as a comparison check of a physicist's acquired values. 
Acquired values that are well outside the expected distribution should be verified by the physicist to identify whether the measurements are valid. Comparison of values to this reference data provides a redundant check to help prevent gross dosimetric treatment errors.

  2. Specification of a new de-stoner machine: evaluation of machining effects on olive paste's rheology and olive oil yield and quality.

    PubMed

    Romaniello, Roberto; Leone, Alessandro; Tamborrino, Antonia

    2017-01-01

    An industrial prototype of a partial de-stoner machine was specified, built and implemented in an industrial olive oil extraction plant. The partial de-stoner machine was compared to the traditional mechanical crusher to assess its quantitative and qualitative performance. The extraction efficiency of the olive oil extraction plant, olive oil quality, sensory evaluation and rheological aspects were investigated. The results indicate that by using the partial de-stoner machine the extraction plant did not show statistical differences with respect to the traditional mechanical crushing. Moreover, the partial de-stoner machine allowed recovery of 60% of olive pits and the oils obtained were characterised by more marked green fruitiness, flavour and aroma than the oils produced using the traditional processing systems. The partial de-stoner machine removes the limitations of the traditional total de-stoner machine, opening new frontiers for the recovery of pits to be used as biomass. Moreover, the partial de-stoner machine permitted a significant reduction in the viscosity of the olive paste. © 2016 Society of Chemical Industry.

  3. Crabbing system for an electron-ion collider

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Castilla, Alejandro

    2017-05-01

    As high energy and nuclear physicists continue to push further the boundaries of knowledge using colliders, there is an imperative need, not only to increase the colliding beams' energies, but also to improve the accuracy of the experiments, and to collect a large quantity of events with good statistical sensitivity. To achieve the latter, it is necessary to collect more data by increasing the rate at which these processes are being produced and detected in the machine. This rate of events depends directly on the machine's luminosity. The luminosity itself is proportional to the frequency at which the beams are being delivered, the number of particles in each beam, and inversely proportional to the cross-sectional size of the colliding beams. There are several approaches that can be considered to increase the event statistics in a collider other than increasing the luminosity, such as running the experiments for a longer time. However, this also elevates the operation expenses, while increasing the frequency at which the beams are delivered implies strong physical changes along the accelerator and the detectors. Therefore, it is preferred to increase the beam intensities and reduce the beams' cross-sectional areas to achieve these higher luminosities. In the case where the goal is to push the limits, sometimes even beyond the machine's design parameters, one must develop a detailed High Luminosity Scheme. Any high luminosity scheme on a modern collider considers, in one version or another, the use of crab cavities to correct the geometrical reduction of the luminosity due to the beams' crossing angle. In this dissertation, we present the design and testing of a proof-of-principle compact superconducting crab cavity, at 750 MHz, for the future electron-ion collider, currently under design at Jefferson Lab. In addition to the design and validation of the cavity prototype, we present the analysis of the first-order beam dynamics and the integration of the crabbing systems into the interaction region. Following this, we propose the concept of twin crabs to allow machines with variable beam transverse coupling in the interaction region to have full crabbing in only the desired plane. Finally, we present recommendations to extend this work to other frequencies.

  4. Soft electroactive actuators and hard ratchet-wheels enable unidirectional locomotion of hybrid machine

    NASA Astrophysics Data System (ADS)

    Sun, Wenjie; Liu, Fan; Ma, Ziqi; Li, Chenghai; Zhou, Jinxiong

    2017-01-01

    Combining synergistically the muscle-like actuation of soft materials and load-carrying and locomotive capability of hard mechanical components results in hybrid soft machines that can exhibit specific functions. Here, we describe the design, fabrication, modeling and experiment of a hybrid soft machine enabled by marrying a unidirectionally actuated dielectric elastomer (DE) membrane-spring system and ratchet wheels. Subjected to an applied voltage of 8.2 kV at a ramping rate of 820 V/s, the hybrid machine prototype exhibits monotonic uniaxial locomotion with an average velocity of 0.5 mm/s. The underlying physics and working mechanisms of the soft machine are verified and elucidated by finite element simulation.

  5. Classifying Black Hole States with Machine Learning

    NASA Astrophysics Data System (ADS)

    Huppenkothen, Daniela

    2018-01-01

    Galactic black hole binaries are known to go through different states with apparent signatures in both X-ray light curves and spectra, leading to important implications for accretion physics as well as our knowledge of General Relativity. Existing frameworks of classification are usually based on human interpretation of low-dimensional representations of the data, and generally only apply to fairly small data sets. Machine learning, in contrast, allows for rapid classification of large, high-dimensional data sets. In this talk, I will report on advances made in classification of states observed in Black Hole X-ray Binaries, focusing on the two sources GRS 1915+105 and Cygnus X-1, and show both the successes and limitations of using machine learning to derive physical constraints on these systems.

  6. Quantum Computing: Solving Complex Problems

    ScienceCinema

    DiVincenzo, David

    2018-05-22

    One of the motivating ideas of quantum computation was that there could be a new kind of machine that would solve hard problems in quantum mechanics. There has been significant progress towards the experimental realization of these machines (which I will review), but there are still many questions about how such a machine could solve computational problems of interest in quantum physics. New categorizations of the complexity of computational problems have now been invented to describe quantum simulation. The bad news is that some of these problems are believed to be intractable even on a quantum computer, falling into a quantum analog of the NP class. The good news is that there are many other new classifications of tractability that may apply to several situations of physical interest.

  7. CP Violation and the Future of Flavor Physics

    NASA Astrophysics Data System (ADS)

    Kiesling, Christian

    2009-12-01

    With the nearing completion of the first-generation experiments at asymmetric e+e- colliders running at the Υ(4S) resonance ("B-Factories") a new era of high luminosity machines is on the horizon. We report here on the plans at KEK in Japan to upgrade the KEKB machine ("SuperKEKB") with the goal of achieving an instantaneous luminosity exceeding 8×10^35 cm^-2 s^-1, which is almost two orders of magnitude higher than KEKB. Together with the machine, the Belle detector will be upgraded as well ("Belle-II"), with significant improvements to increase its background tolerance as well as improving its physics performance. The new generation of experiments is scheduled to take first data in the year 2013.

  8. Undulatory physical resistance training program increases maximal strength in elderly type 2 diabetics.

    PubMed

    Santos, Gilberto Monteiro dos; Montrezol, Fábio Tanil; Pauli, Luciana Santos Souza; Sartori-Cintra, Angélica Rossi; Colantonio, Emilson; Gomes, Ricardo José; Marinho, Rodolfo; Moura, Leandro Pereira de; Pauli, José Rodrigo

    2014-01-01

    To investigate the effects of a specific protocol of undulatory physical resistance training on maximal strength gains in elderly type 2 diabetics. The study included 48 subjects, aged between 60 and 85 years, of both genders. They were divided into two groups: Untrained Diabetic Elderly (n=19), those who were not subjected to physical training, and Trained Diabetic Elderly (n=29), those who were subjected to undulatory physical resistance training. The participants were evaluated with several types of resistance training equipment before and after the training protocol, by a test of one maximal repetition. The subjects were trained on undulatory resistance three times per week for a period of 16 weeks. The overload used in undulatory resistance training was equivalent to 50% of one maximal repetition and 70% of one maximal repetition, alternating weekly. Statistical analysis revealed significant differences (p<0.05) between pre-test and post-test over a period of 16 weeks. The average gains in strength were 43.20% (knee extension), 65.00% (knee flexion), 27.80% (supine sitting machine), 31.00% (rowing sitting), 43.90% (biceps pulley), and 21.10% (triceps pulley). Undulatory resistance training used with different weekly overloads was effective in providing significant gains in maximum strength in elderly type 2 diabetic individuals.

  9. Undulatory physical resistance training program increases maximal strength in elderly type 2 diabetics

    PubMed Central

    dos Santos, Gilberto Monteiro; Montrezol, Fábio Tanil; Pauli, Luciana Santos Souza; Sartori-Cintra, Angélica Rossi; Colantonio, Emilson; Gomes, Ricardo José; Marinho, Rodolfo; de Moura, Leandro Pereira; Pauli, José Rodrigo

    2014-01-01

    Objective To investigate the effects of a specific protocol of undulatory physical resistance training on maximal strength gains in elderly type 2 diabetics. Methods The study included 48 subjects, aged between 60 and 85 years, of both genders. They were divided into two groups: Untrained Diabetic Elderly (n=19), those who were not subjected to physical training, and Trained Diabetic Elderly (n=29), those who were subjected to undulatory physical resistance training. The participants were evaluated with several types of resistance training equipment before and after the training protocol, by a test of one maximal repetition. The subjects were trained on undulatory resistance three times per week for a period of 16 weeks. The overload used in undulatory resistance training was equivalent to 50% of one maximal repetition and 70% of one maximal repetition, alternating weekly. Statistical analysis revealed significant differences (p<0.05) between pre-test and post-test over a period of 16 weeks. Results The average gains in strength were 43.20% (knee extension), 65.00% (knee flexion), 27.80% (supine sitting machine), 31.00% (rowing sitting), 43.90% (biceps pulley), and 21.10% (triceps pulley). Conclusion Undulatory resistance training used with different weekly overloads was effective in providing significant gains in maximum strength in elderly type 2 diabetic individuals. PMID:25628192

  10. Development and validation of methods for man-machine interface evaluation. [for shuttles and shuttle payloads

    NASA Technical Reports Server (NTRS)

    Malone, T. B.; Micocci, A.

    1975-01-01

    The alternate methods of conducting a man-machine interface evaluation are classified as static and dynamic, and are evaluated. A dynamic evaluation tool is presented to provide for a determination of the effectiveness of the man-machine interface in terms of the sequence of operations (task and task sequences) and in terms of the physical characteristics of the interface. This dynamic checklist approach is recommended for shuttle and shuttle payload man-machine interface evaluations based on reduced preparation time, reduced data, and increased sensitivity to critical problems.

  11. Blessing of dimensionality: mathematical foundations of the statistical physics of data.

    PubMed

    Gorban, A N; Tyukin, I Y

    2018-04-28

    The concentrations of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the thin structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  12. Blessing of dimensionality: mathematical foundations of the statistical physics of data

    NASA Astrophysics Data System (ADS)

    Gorban, A. N.; Tyukin, I. Y.

    2018-04-01

    The concentrations of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the thin structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue `Hilbert's sixth problem'.
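
    The separation phenomenon this record describes can be illustrated numerically: in high dimension, a Fisher-style linear discriminant typically separates a single sample (say, an error case) from a large i.i.d. set in one shot. A minimal sketch with synthetic Gaussian data (dimensions and sample counts are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # In high dimension d, a single sample can typically be separated from a
    # large i.i.d. set by a Fisher-style discriminant w = C^{-1}(x - mean).
    d, n = 200, 10_000
    data = rng.normal(size=(n, d))        # samples of "correct behaviour"
    x_err = rng.normal(size=d)            # the sample to separate (an "error")

    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False) + 1e-3 * np.eye(d)   # regularized covariance
    w = np.linalg.solve(cov, x_err - mean)                # Fisher discriminant

    proj = (data - mean) @ w
    threshold = 0.5 * (x_err - mean) @ w   # halfway along the discriminant

    separated = np.mean(proj < threshold)  # fraction on the far side
    print(f"fraction of the {n} points below the threshold: {separated:.4f}")
    ```

    The one-shot corrector is then just the linear test `(x - mean) @ w >= threshold`; no iterative retraining of the underlying system is needed, which is the point of the non-destructive correction procedure described above.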

  13. The desktop muon detector: A simple, physics-motivated machine- and electronics-shop project for university students

    NASA Astrophysics Data System (ADS)

    Axani, S. N.; Conrad, J. M.; Kirby, C.

    2017-12-01

    This paper describes the construction of a desktop muon detector, an undergraduate-level physics project that develops machine-shop and electronics-shop technical skills. The desktop muon detector is a self-contained apparatus that employs a plastic scintillator as the detection medium and a silicon photomultiplier for light collection. This detector can be battery powered and is used in conjunction with the provided software. The total cost per detector is approximately $100. We describe physics experiments we have performed, and then suggest several other interesting measurements that are possible, with one or more desktop muon detectors.

  14. Phenomenology tools on cloud infrastructures using OpenStack

    NASA Astrophysics Data System (ADS)

    Campos, I.; Fernández-del-Castillo, E.; Heinemeyer, S.; Lopez-Garcia, A.; Pahlen, F.; Borges, G.

    2013-04-01

    We present a new environment for computations in particle physics phenomenology employing recent developments in cloud computing. On this environment users can create and manage "virtual" machines on which the phenomenology codes/tools can be deployed easily in an automated way. We analyze the performance of this environment based on "virtual" machines versus the utilization of physical hardware. In this way we provide a qualitative result for the influence of the host operating system on the performance of a representative set of applications for phenomenology calculations.

  15. The world problem: on the computability of the topology of 4-manifolds

    NASA Technical Reports Server (NTRS)

    vanMeter, J. R.

    2005-01-01

    Topological classification of the 4-manifolds bridges computation theory and physics. A proof of the undecidability of the homeomorphy problem for 4-manifolds is outlined here in a clarifying way. It is shown that an arbitrary Turing machine with an arbitrary input can be encoded into the topology of a 4-manifold, such that the 4-manifold is homeomorphic to a certain other 4-manifold if and only if the corresponding Turing machine halts on the associated input. Physical implications are briefly discussed.

  16. PARTICLE PHYSICS: CERN Gives Higgs Hunters Extra Month to Collect Data.

    PubMed

    Morton, O

    2000-09-22

    After 11 years of banging electrons and positrons together at higher energies than any other machine in the world, CERN, the European laboratory for particle physics, had decided to shut down the Large Electron-Positron collider (LEP) and install a new machine, the Large Hadron Collider (LHC), in its 27-kilometer tunnel. In 2005, the LHC will start bashing protons together at even higher energies. But tantalizing hints of a long-sought fundamental particle have forced CERN managers to grant LEP a month's reprieve.

  17. Advances in molecular dynamics simulation of ultra-precision machining of hard and brittle materials

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoguang; Li, Qiang; Liu, Tao; Kang, Renke; Jin, Zhuji; Guo, Dongming

    2017-03-01

    Hard and brittle materials, such as silicon, SiC, and optical glasses, are widely used in aerospace, military, integrated circuit, and other fields because of their excellent physical and chemical properties. However, these materials display poor machinability because of their hard and brittle properties. Damages such as surface micro-crack and subsurface damage often occur during machining of hard and brittle materials. Ultra-precision machining is widely used in processing hard and brittle materials to obtain nanoscale machining quality. However, the theoretical mechanism underlying this method remains unclear. This paper provides a review of present research on the molecular dynamics simulation of ultra-precision machining of hard and brittle materials. The future trends in this field are also discussed.

  18. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine.

    PubMed

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-02-06

    In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic-based motion synthesis algorithm and biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human-machine-environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines.

  19. Man/Machine Interaction Dynamics And Performance (MMIDAP) capability

    NASA Technical Reports Server (NTRS)

    Frisch, Harold P.

    1991-01-01

    The creation of an ability to study interaction dynamics between a machine and its human operator can be approached from a myriad of directions. The Man/Machine Interaction Dynamics and Performance (MMIDAP) project seeks to create an ability to study the consequences of machine design alternatives relative to the performance of both machine and operator. The class of machines to which this study is directed includes those that require the intelligent physical exertions of a human operator. While Goddard's Flight Telerobotics program was expected to be a major user, basic engineering design and biomedical applications reach far beyond telerobotics. The ongoing efforts of the GSFC and its university and small business collaborators to integrate both human performance and musculoskeletal data bases with the analysis capabilities necessary to enable the study of dynamic actions, reactions, and performance of coupled machine/operator systems are outlined.

  20. Emulating Simulations of Cosmic Dawn for 21 cm Power Spectrum Constraints on Cosmology, Reionization, and X-Ray Heating

    NASA Astrophysics Data System (ADS)

    Kern, Nicholas S.; Liu, Adrian; Parsons, Aaron R.; Mesinger, Andrei; Greig, Bradley

    2017-10-01

    Current and upcoming radio interferometric experiments are aiming to make a statistical characterization of the high-redshift 21 cm fluctuation signal spanning the hydrogen reionization and X-ray heating epochs of the universe. However, connecting 21 cm statistics to the underlying physical parameters is complicated by the theoretical challenge of modeling the relevant physics at computational speeds quick enough to enable exploration of the high-dimensional and weakly constrained parameter space. In this work, we use machine learning algorithms to build a fast emulator that can accurately mimic an expensive simulation of the 21 cm signal across a wide parameter space. We embed our emulator within a Markov Chain Monte Carlo framework in order to perform Bayesian parameter constraints over a large number of model parameters, including those that govern the Epoch of Reionization, the Epoch of X-ray Heating, and cosmology. As a worked example, we use our emulator to present an updated parameter constraint forecast for the Hydrogen Epoch of Reionization Array experiment, showing that its characterization of a fiducial 21 cm power spectrum will considerably narrow the allowed parameter space of reionization and heating parameters, and could help strengthen Planck's constraints on σ_8. We provide both our generalized emulator code and its implementation specifically for 21 cm parameter constraints as publicly available software.
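    The emulator-in-MCMC pattern this abstract describes can be illustrated with a deliberately tiny sketch: a fast surrogate stands in for the expensive simulation inside a Metropolis-Hastings loop. This Python sketch is not the authors' code; the linear "emulator", the synthetic data, and all names are invented for illustration.

```python
import math
import random

def emulator(theta):
    # Stand-in for a fast surrogate of an expensive simulation:
    # here just a linear map from parameter to predicted observable.
    return 2.0 * theta

def log_posterior(theta, data, sigma=1.0):
    # Gaussian log-likelihood of the data given the emulated prediction
    pred = emulator(theta)
    return -0.5 * sum((d - pred) ** 2 for d in data) / sigma ** 2

def metropolis(data, n_steps=5000, step=0.5, seed=1):
    random.seed(seed)
    theta, lp = 0.0, log_posterior(0.0, data)
    chain = []
    for _ in range(n_steps):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop   # accept the proposal
        chain.append(theta)
    return chain

data = [2.1, 1.9, 2.0]                  # synthetic "observations"
chain = metropolis(data)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])
# the posterior concentrates near theta = 1.0, since emulator(1.0) = 2.0
```

    The point of the pattern is that each likelihood evaluation calls the cheap emulator, so the sampler can afford the many thousands of evaluations a high-dimensional constraint requires.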

  1. Application of machine learning and expert systems to Statistical Process Control (SPC) chart interpretation

    NASA Technical Reports Server (NTRS)

    Shewhart, Mark

    1991-01-01

    Statistical Process Control (SPC) charts are one of several tools used in quality control. Other tools include flow charts, histograms, cause and effect diagrams, check sheets, Pareto diagrams, graphs, and scatter diagrams. A control chart is simply a graph which indicates process variation over time. The purpose of drawing a control chart is to detect any changes in the process signalled by abnormal points or patterns on the graph. The Artificial Intelligence Support Center (AISC) of the Acquisition Logistics Division has developed a hybrid machine learning expert system prototype which automates the process of constructing and interpreting control charts.
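    The control-chart logic described above, a center line with limits at plus or minus three standard deviations and flagging of points outside them, can be sketched in a few lines. This is an illustrative Python sketch, not the AISC prototype; the function names and the toy measurements are invented for the example.

```python
def control_limits(baseline):
    """3-sigma control limits estimated from an in-control baseline run."""
    n = len(baseline)
    mean = sum(baseline) / n
    sigma = (sum((x - mean) ** 2 for x in baseline) / (n - 1)) ** 0.5
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(points, lcl, ucl):
    """Indices of observations signalling abnormal process variation."""
    return [i for i, x in enumerate(points) if not lcl <= x <= ucl]

baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 10.3]
lcl, ucl = control_limits(baseline)
print(out_of_control([10.0, 13.5, 9.9], lcl, ucl))  # → [1]
```

    An expert-system layer such as the one the abstract describes would add pattern rules (runs, trends, cycles) on top of this simple out-of-limits test.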

  2. Objective research of auscultation signals in Traditional Chinese Medicine based on wavelet packet energy and support vector machine.

    PubMed

    Yan, Jianjun; Shen, Xiaojing; Wang, Yiqin; Li, Fufeng; Xia, Chunming; Guo, Rui; Chen, Chunfeng; Shen, Qingwei

    2010-01-01

    This study aims at utilising the Wavelet Packet Transform (WPT) and the Support Vector Machine (SVM) algorithm to make objective analysis and quantitative research for auscultation in Traditional Chinese Medicine (TCM) diagnosis. First, Wavelet Packet Decomposition (WPD) at level 6 was employed to split the auscultation signals into more elaborate frequency bands. Then statistical analysis was performed on the Wavelet Packet Energy (WPE) features extracted from the WPD coefficients. Furthermore, pattern recognition was used to distinguish the mixed subjects' statistical feature values of sample groups through SVM. Finally, the experimental results showed that the classification accuracies were at a high level.

  3. Machine learning patterns for neuroimaging-genetic studies in the cloud.

    PubMed

    Da Mota, Benoit; Tudoran, Radu; Costan, Alexandru; Varoquaux, Gaël; Brasche, Goetz; Conrod, Patricia; Lemaitre, Herve; Paus, Tomas; Rietschel, Marcella; Frouin, Vincent; Poline, Jean-Baptiste; Antoniu, Gabriel; Thirion, Bertrand

    2014-01-01

    Brain imaging is a natural intermediate phenotype to understand the link between genetic information and behavior or brain pathologies risk factors. Massive efforts have been made in the last few years to acquire high-dimensional neuroimaging and genetic data on large cohorts of subjects. The statistical analysis of such data is carried out with increasingly sophisticated techniques and represents a great computational challenge. Fortunately, increasing computational power in distributed architectures can be harnessed, if new neuroinformatics infrastructures are designed and training to use these new tools is provided. Combining a MapReduce framework (TomusBLOB) with machine learning algorithms (Scikit-learn library), we design a scalable analysis tool that can deal with non-parametric statistics on high-dimensional data. End-users describe the statistical procedure to perform and can then test the model on their own computers before running the very same code in the cloud at a larger scale. We illustrate the potential of our approach on real data with an experiment showing how the functional signal in subcortical brain regions can be significantly fit with genome-wide genotypes. This experiment demonstrates the scalability and the reliability of our framework in the cloud with a 2 weeks deployment on hundreds of virtual machines.

  4. Assessing Continuous Operator Workload With a Hybrid Scaffolded Neuroergonomic Modeling Approach.

    PubMed

    Borghetti, Brett J; Giametta, Joseph J; Rusnock, Christina F

    2017-02-01

    We aimed to predict operator workload from neurological data using statistical learning methods to fit neurological-to-state-assessment models. Adaptive systems require real-time mental workload assessment to perform dynamic task allocations or operator augmentation as workload issues arise. Neuroergonomic measures have great potential for informing adaptive systems, and we combine these measures with models of task demand as well as information about critical events and performance to clarify the inherent ambiguity of interpretation. We use machine learning algorithms on electroencephalogram (EEG) input to infer operator workload based upon Improved Performance Research Integration Tool workload model estimates. Cross-participant models predict workload of other participants, statistically distinguishing between 62% of the workload changes. Machine learning models trained from Monte Carlo resampled workload profiles can be used in place of deterministic workload profiles for cross-participant modeling without incurring a significant decrease in machine learning model performance, suggesting that stochastic models can be used when limited training data are available. We employed a novel temporary scaffold of simulation-generated workload profile truth data during the model-fitting process. A continuous workload profile serves as the target to train our statistical machine learning models. Once trained, the workload profile scaffolding is removed and the trained model is used directly on neurophysiological data in future operator state assessments. These modeling techniques demonstrate how to use neuroergonomic methods to develop operator state assessments, which can be employed in adaptive systems.

  5. MSUSTAT.

    ERIC Educational Resources Information Center

    Mauriello, David

    1984-01-01

    Reviews an interactive statistical analysis package (designed to run on 8- and 16-bit machines that utilize CP/M 80 and MS-DOS operating systems), considering its features and uses, documentation, operation, and performance. The package consists of 40 general purpose statistical procedures derived from the classic textbook "Statistical…

  6. Design and fabrication of complete dentures using CAD/CAM technology

    PubMed Central

    Han, Weili; Li, Yanfeng; Zhang, Yue; Lv, Yuan; Zhang, Ying; Hu, Ping; Liu, Huanyue; Ma, Zheng; Shen, Yi

    2017-01-01

    Abstract The aim of the study was to test the feasibility of using commercially available computer-aided design and computer-aided manufacturing (CAD/CAM) technology including 3Shape Dental System 2013 trial version, WIELAND V2.0.049 and WIELAND ZENOTEC T1 milling machine to design and fabricate complete dentures. The modeling process of full denture available in the trial version of 3Shape Dental System 2013 was used to design virtual complete dentures on the basis of 3-dimensional (3D) digital edentulous models generated from the physical models. The virtual complete dentures designed were exported to CAM software of WIELAND V2.0.049. A WIELAND ZENOTEC T1 milling machine controlled by the CAM software was used to fabricate physical dentitions and baseplates by milling acrylic resin composite plates. The physical dentitions were bonded to the corresponding baseplates to form the maxillary and mandibular complete dentures. Virtual complete dentures were successfully designed using the software through several steps including generation of 3D digital edentulous models, model analysis, arrangement of artificial teeth, trimming relief area, and occlusal adjustment. Physical dentitions and baseplates were successfully fabricated according to the designed virtual complete dentures using milling machine controlled by a CAM software. Bonding physical dentitions to the corresponding baseplates generated the final physical complete dentures. Our study demonstrated that complete dentures could be successfully designed and fabricated by using CAD/CAM. PMID:28072686

  7. The Effect of Physical Activity on Science Competence and Attitude towards Science Content

    NASA Astrophysics Data System (ADS)

    Klinkenborg, Ann Maria

    This study examines the effect of physical activity on science instruction. To combat the implications of physical inactivity, schools need to be willing to consider all possible opportunities for students to engage in moderate-to-vigorous physical activity (MVPA). Integrating physical activity with traditional classroom content is one instructional method to consider. Researchers have typically focused on integration with English/language arts (ELA) and mathematics. The purpose of this study was to determine the effect of physical activity on science competence and attitude towards science. Fifty-three third grade children participated in this investigation; one group received science instruction with a physical activity intervention while the other group received traditional science instruction. Participants in both groups completed a modified version of What I Really Think of Science attitude scale (Pell & Jarvis, 2001) and a physical science test of competence prior to and following the intervention. Children were videotaped during science instruction and their movement coded to measure the proportion of time spent in MVPA. Results revealed that children in the intervention group demonstrated greater MVPA during the instructional period. A moderate to large effect size (partial eta squared = .091) was seen in the intervention group science competence post-test indicating greater understanding of force, motion, work, and simple machines concepts than that of the control group who were less physically active. There was no statistically significant attitude difference between the intervention and control groups post-test, (F(1,51) = .375, p = .543). These results provide evidence that integration can effectively present physical science content and have a positive impact on the number of minutes of health-enhancing physical activity in a school day.

  8. Machine Learning and Deep Learning Models to Predict Runoff Water Quantity and Quality

    NASA Astrophysics Data System (ADS)

    Bradford, S. A.; Liang, J.; Li, W.; Murata, T.; Simunek, J.

    2017-12-01

    Contaminants can be rapidly transported at the soil surface by runoff to surface water bodies. Physically-based models, which are based on the mathematical description of main hydrological processes, are key tools for predicting surface water impairment. Along with physically-based models, data-driven models are becoming increasingly popular for describing the behavior of hydrological and water resources systems since these models can be used to complement or even replace physically-based models. In this presentation we propose a new data-driven model as an alternative to a physically-based overland flow and transport model. First, we have developed a physically-based numerical model to simulate overland flow and contaminant transport (the HYDRUS-1D overland flow module). A large number of numerical simulations were carried out to develop a database containing information about the impact of various input parameters (weather patterns, surface topography, vegetation, soil conditions, contaminants, and best management practices) on runoff water quantity and quality outputs. This database was used to train data-driven models. Three different methods (Neural Networks, Support Vector Machines, and Recurrent Neural Networks) were explored to prepare input-output functional relations. Results demonstrate the ability and limitations of machine learning and deep learning models to predict runoff water quantity and quality.

  9. Finite machines, mental procedures, and modern physics.

    PubMed

    Lupacchini, Rossella

    2007-01-01

    A Turing machine provides a mathematical definition of the natural process of calculating. It rests on trust that a procedure of reason can be reproduced mechanically. Turing's analysis of the concept of mechanical procedure in terms of a finite machine convinced Gödel of the validity of the Church thesis. And yet, Gödel's later concern was that, insofar as Turing's work shows that "mental procedure cannot go beyond mechanical procedures", it would imply the same kind of limitation on human mind. He therefore deems Turing's argument to be inconclusive. The question then arises as to which extent a computing machine operating by finite means could provide an adequate model of human intelligence. It is argued that a rigorous answer to this question can be given by developing Turing's considerations on the nature of mental processes. For Turing such processes are the consequence of physical processes and he seems to be led to the conclusion that quantum mechanics could help to find a more comprehensive explanation of them.

  10. Using virtual machine monitors to overcome the challenges of monitoring and managing virtualized cloud infrastructures

    NASA Astrophysics Data System (ADS)

    Bamiah, Mervat Adib; Brohi, Sarfraz Nawaz; Chuprat, Suriayati

    2012-01-01

    Virtualization is one of the hottest research topics nowadays. Several academic researchers and developers from the IT industry are designing approaches for solving the security and manageability issues of Virtual Machines (VMs) residing on virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and flexibility and reduces management cost as well as effort. Cloud computing is adopting the paradigm of virtualization; using this technique, memory, CPU and computational power are provided to clients' VMs by utilizing the underlying physical hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs and network traffic, unexpected additional cost and resource allocation. The Virtual Machine Monitor (VMM), or hypervisor, is the tool used by cloud providers to manage the VMs on the cloud. There are several heterogeneous hypervisors provided by various vendors, including VMware, Hyper-V, Xen and Kernel Virtual Machine (KVM). Considering the challenge of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.

  11. Extracting laboratory test information from biomedical text

    PubMed Central

    Kang, Yanna Shen; Kayaalp, Mehmet

    2013-01-01

    Background: No previous study reported the efficacy of current natural language processing (NLP) methods for extracting laboratory test information from narrative documents. This study investigates the pathology informatics question of how accurately such information can be extracted from text with the current tools and techniques, especially machine learning and symbolic NLP methods. The study data came from a text corpus maintained by the U.S. Food and Drug Administration, containing a rich set of information on laboratory tests and test devices. Methods: The authors developed a symbolic information extraction (SIE) system to extract device and test specific information about four types of laboratory test entities: Specimens, analytes, units of measures and detection limits. They compared the performance of SIE and three prominent machine learning based NLP systems, LingPipe, GATE and BANNER, each implementing a distinct supervised machine learning method, hidden Markov models, support vector machines and conditional random fields, respectively. Results: Machine learning systems recognized laboratory test entities with moderately high recall, but low precision rates. Their recall rates were relatively higher when the number of distinct entity values (e.g., the spectrum of specimens) was very limited or when lexical morphology of the entity was distinctive (as in units of measures), yet SIE outperformed them with statistically significant margins on extracting specimen, analyte and detection limit information in both precision and F-measure. Its high recall performance was statistically significant on analyte information extraction. Conclusions: Despite its shortcomings against machine learning methods, a well-tailored symbolic system may better discern relevancy among a pile of information of the same type and may outperform a machine learning system by tapping into lexically non-local contextual information such as the document structure. PMID:24083058

  12. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
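    The MST construction underlying this clustering approach can be sketched with Kruskal's algorithm and a union-find structure. A minimal Python sketch, assuming plain 2D points rather than a real galaxy catalog; all names and the toy data are illustrative.

```python
def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def mst_edges(points):
    """Kruskal's algorithm on the complete graph of a point set."""
    n = len(points)
    parent = list(range(n))
    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = sorted((dist(points[i], points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # adding the edge creates no cycle
            parent[ri] = rj
            tree.append((w, i, j))
    return tree

# Two well-separated groups: cutting the single longest MST edge
# splits the tree into the two underlying clusters.
points = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
tree = mst_edges(points)
print(len(tree), round(max(w for w, _, _ in tree), 1))  # → 4 13.5
```

    Unsupervised clustering then amounts to deleting the longest edges of the tree and reading off the connected components.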

  13. Machine Learning Prediction of the Energy Gap of Graphene Nanoflakes Using Topological Autocorrelation Vectors.

    PubMed

    Fernandez, Michael; Abreu, Jose I; Shi, Hongqing; Barnard, Amanda S

    2016-11-14

    The possibility of band gap engineering in graphene opens countless new opportunities for application in nanoelectronics. In this work, the energy gaps of 622 computationally optimized graphene nanoflakes were mapped to topological autocorrelation vectors using machine learning techniques. Machine learning modeling revealed that the most relevant correlations appear at topological distances in the range of 1 to 42 with prediction accuracy higher than 80%. The data-driven model can statistically discriminate between graphene nanoflakes with different energy gaps on the basis of their molecular topology.
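    A topological autocorrelation vector sums products of an atomic property over all node pairs at each graph (topological) distance. This is a generic Python sketch of that descriptor, not the authors' implementation; the path-graph example and unit properties are invented.

```python
from collections import deque

def bfs_distances(adj, src):
    """Shortest-path (topological) distances from src by BFS."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def autocorrelation_vector(adj, props, max_d):
    """ac[d] = sum of props[i] * props[j] over node pairs at distance d."""
    ac = [0.0] * (max_d + 1)
    for i in adj:
        for j, dij in bfs_distances(adj, i).items():
            if i < j and dij <= max_d:        # count each pair once
                ac[dij] += props[i] * props[j]
    return ac

# Path graph 0-1-2-3 with unit "atomic" properties:
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
props = {0: 1.0, 1: 1.0, 2: 1.0, 3: 1.0}
print(autocorrelation_vector(adj, props, 3))  # → [0.0, 3.0, 2.0, 1.0]
```

    Because the vector has a fixed length regardless of flake size, it can serve as a uniform input to a regression model, which is what makes such descriptors convenient for machine learning.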

  14. What subject matter questions motivate the use of machine learning approaches compared to statistical models for probability prediction?

    PubMed

    Binder, Harald

    2014-07-01

    This is a discussion of the following papers: "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory" by Jochen Kruppa, Yufeng Liu, Gérard Biau, Michael Kohler, Inke R. König, James D. Malley, and Andreas Ziegler; and "Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications" by Jochen Kruppa, Yufeng Liu, Hans-Christian Diener, Theresa Holste, Christian Weimar, Inke R. König, and Andreas Ziegler. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. ROOT: A C++ framework for petabyte data storage, statistical analysis and visualization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antcheva, I.; /CERN; Ballintijn, M.

    2009-01-01

    ROOT is an object-oriented C++ framework conceived in the high-energy physics (HEP) community, designed for storing and analyzing petabytes of data in an efficient way. Any instance of a C++ class can be stored into a ROOT file in a machine-independent compressed binary format. In ROOT the TTree object container is optimized for statistical data analysis over very large data sets by using vertical data storage techniques. These containers can span a large number of files on local disks, the web or a number of different shared file systems. In order to analyze this data, the user can choose from a wide set of mathematical and statistical functions, including linear algebra classes, numerical algorithms such as integration and minimization, and various methods for performing regression analysis (fitting). In particular, the RooFit package allows the user to perform complex data modeling and fitting while the RooStats library provides abstractions and implementations for advanced statistical tools. Multivariate classification methods based on machine learning techniques are available via the TMVA package. A central piece in these analysis tools is the histogram classes which provide binning of one- and multi-dimensional data. Results can be saved in high-quality graphical formats like Postscript and PDF or in bitmap formats like JPG or GIF. The result can also be stored into ROOT macros that allow a full recreation and rework of the graphics. Users typically create their analysis macros step by step, making use of the interactive C++ interpreter CINT, while running over small data samples. Once the development is finished, they can run these macros at full compiled speed over large data sets, using on-the-fly compilation, or by creating a stand-alone batch program. Finally, if processing farms are available, the user can reduce the execution time of intrinsically parallel tasks - e.g. data mining in HEP - by using PROOF, which will take care of optimally distributing the work over the available resources in a transparent way.

  16. 21 CFR 890.1850 - Diagnostic muscle stimulator.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...) MEDICAL DEVICES PHYSICAL MEDICINE DEVICES Physical Medicine Diagnostic Devices § 890.1850 Diagnostic... electromyograph machine to initiate muscle activity. It is intended for medical purposes, such as to diagnose...

  17. Statistical and Machine Learning forecasting methods: Concerns and ways forward

    PubMed Central

    Makridakis, Spyros; Assimakopoulos, Vassilios

    2018-01-01

    Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784
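    The accuracy comparisons in the M-competitions rest on measures such as the symmetric MAPE (sMAPE). A minimal Python sketch, assuming the common sMAPE definition and a toy series; the two simple benchmarks here (naive and naive-with-drift) merely stand in for the statistical methods the paper evaluates.

```python
def smape(actual, forecast):
    """Symmetric MAPE (percent), as used in the M-competitions."""
    return 100.0 / len(actual) * sum(
        2.0 * abs(f - a) / (abs(a) + abs(f))
        for a, f in zip(actual, forecast))

series = [100, 102, 101, 103, 105, 104, 106, 108]
train, test = series[:6], series[6:]

naive = [train[-1]] * len(test)               # repeat last observation
slope = (train[-1] - train[0]) / (len(train) - 1)
drift = [train[-1] + slope * (h + 1) for h in range(len(test))]

print(round(smape(test, naive), 2), round(smape(test, drift), 2))  # → 2.84 1.69
```

    On this trending toy series the drift benchmark beats the plain naive forecast; the paper's point is that such cheap statistical baselines are surprisingly hard for ML methods to dominate post-sample.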

  18. Plasma Wall interaction in the IGNITOR machine

    NASA Astrophysics Data System (ADS)

    Ferro, C.

    1998-11-01

    One of the critical issues in ignited machines is the management of the heat and particle exhaust without degradation of the plasma quality (pollution and confinement time) and without damage of the material facing the plasma. The IGNITOR machine has been conceived as a "limiter" device, i.e., with the plasma leaning nearly on the entire surface of the first wall. Peak heat loads can easily be maintained at values lower than 1.35 MW/m^2 even considering displacements of the plasma column^1. This "limiter" choice is based on the operational performances of high density, high field machines, which suggest that intrinsic physics processes in the edge of the plasma are effective in spreading heat loads and maintaining the plasma pollution at a low level. The possibility of these operating scenarios has been demonstrated recently by different machines both in limiter and divertor configurations. The basis for the different physical processes that are expected to influence the IGNITOR edge parameters^2 is discussed and a comparison with the latest experimental results is given. ^1 C. Ferro, G. Franzoni, R. Zanino, ENEA Internal Report RT/ERG/FUS/94/14. ^2 C. Ferro, R. Zanino, J. Nucl. Mater. 543, 176 (1990).

  19. Study of the Effect of Lubricant Emulsion Percentage and Tool Material on Surface Roughness in Machining of EN-AC 48000 Alloy

    NASA Astrophysics Data System (ADS)

    Soltani, E.; Shahali, H.; Zarepour, H.

    2011-01-01

    In this paper, the effect of machining parameters, namely, lubricant emulsion percentage and tool material, on surface roughness has been studied in the machining process of EN-AC 48000 aluminum alloy. EN-AC 48000 aluminum alloy is an important alloy in industries. Machining of this alloy is of vital importance due to built-up edge and tool wear. An L9 Taguchi standard orthogonal array has been applied as the experimental design to investigate the effect of the factors and their interaction. Nine machining tests have been carried out with three random replications, resulting in 27 experiments. Three types of cutting tools including coated carbide (CD1810), uncoated carbide (H10), and polycrystalline diamond (CD10) have been used in this research. The emulsion percentage of the lubricant is selected at three levels: 3%, 5% and 10%. Statistical analysis has been employed to study the effect of factors and their interactions using the ANOVA method. Moreover, the optimal factor levels have been achieved through signal-to-noise ratio (S/N) analysis. Also, a regression model has been provided to predict the surface roughness. Finally, the results of the confirmation tests have been presented to verify the adequacy of the predictive model. In this research, surface quality was improved by 9% using lubricant and the statistical optimization method.

  20. Jacks--A Study of Simple Machines.

    ERIC Educational Resources Information Center

    Parsons, Ralph

    This vocational physics individualized student instructional module on jacks (simple machines used to lift heavy objects) contains student prerequisites and objectives, an introduction, and sections on the ratchet bumper jack, the hydraulic jack, the screw jack, and load limitations. Designed with a laboratory orientation, each section consists of…

  1. A Data-Driven Approach to Develop Physically Sound Predictors: Application to Depth-Averaged Velocities and Drag Coefficients on Vegetated Flows

    NASA Astrophysics Data System (ADS)

    Tinoco, R. O.; Goldstein, E. B.; Coco, G.

    2016-12-01

    We use a machine learning approach to seek accurate, physically sound predictors to estimate two relevant flow parameters for open-channel vegetated flows: mean velocities and drag coefficients. A genetic programming algorithm is used to find a robust relationship between properties of the vegetation and flow parameters. We use data published from several laboratory experiments covering a broad range of conditions to obtain: a) in the case of mean flow, an equation that matches the accuracy of other predictors from recent literature while showing a less complex structure, and b) for drag coefficients, a predictor that relies on both single-element and array parameters. We investigate different criteria for dataset size and data selection to evaluate their impact on the resulting predictor, as well as simple strategies to obtain only dimensionally consistent equations, and avoid the need for dimensional coefficients. The results show that a proper methodology can deliver physically sound models representative of the processes involved, such that genetic programming and machine learning techniques can be used as powerful tools to study complicated phenomena and develop not only purely empirical, but "hybrid" models, coupling results from machine learning methodologies into physics-based models.

  2. Geologic Carbon Sequestration Leakage Detection: A Physics-Guided Machine Learning Approach

    NASA Astrophysics Data System (ADS)

    Lin, Y.; Harp, D. R.; Chen, B.; Pawar, R.

    2017-12-01

One of the risks of large-scale geologic carbon sequestration is the potential migration of fluids out of the storage formations. Accurate and fast detection of this fluid migration is both important and challenging, owing to large subsurface uncertainty and complex governing physics. Traditional leakage detection and monitoring techniques rely on geophysical observations, including pressure. However, the accuracy of these methods is limited because the information they provide is indirect and requires expert interpretation, yielding inaccurate estimates of leakage rates and locations. In this work, we develop a novel machine-learning technique based on support vector regression to effectively and efficiently predict leakage locations and leakage rates from a limited number of pressure observations. In contrast to conventional data-driven approaches, which can usually be seen as "black box" procedures, we develop a physics-guided machine learning method that incorporates the governing physics into the learning procedure. To validate the performance of our proposed leakage detection method, we apply it to both 2D and 3D synthetic subsurface models. Our novel CO2 leakage detection method has shown high detection accuracy in the example problems.
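As a minimal sketch of what "incorporating the physics" can mean in practice (not the authors' actual SVR formulation), the example below builds a regression feature that undoes an assumed logarithmic pressure fall-off before fitting; the well layout, decay law, and leak rates are all invented:

```python
import numpy as np

# Physics-guided feature construction for leak-rate regression (illustrative).
rng = np.random.default_rng(0)

well_dist = np.array([100.0, 250.0, 400.0, 800.0])   # monitoring wells, metres
n = 200
leak_rate = rng.uniform(0.1, 5.0, size=n)            # synthetic "true" rates

# Synthetic pressure anomaly: rate times an assumed log-distance decay, plus noise
decay = 1.0 / np.log(well_dist)
pressure = leak_rate[:, None] * decay[None, :] + 0.01 * rng.standard_normal((n, 4))

# Physics-guided feature: undo the assumed decay before regression
features = pressure * np.log(well_dist)[None, :]
X = np.hstack([features, np.ones((n, 1))])

# Ordinary least squares stands in for the paper's support vector regression
w, *_ = np.linalg.lstsq(X, leak_rate, rcond=None)
pred = X @ w
rmse = float(np.sqrt(np.mean((pred - leak_rate) ** 2)))
print(rmse)
```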

  3. Hidden physics models: Machine learning of nonlinear partial differential equations

    NASA Astrophysics Data System (ADS)

    Raissi, Maziar; Karniadakis, George Em

    2018-03-01

    While there is currently a lot of enthusiasm about "big data", useful data is usually "small" and expensive to acquire. In this paper, we present a new paradigm of learning partial differential equations from small data. In particular, we introduce hidden physics models, which are essentially data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and nonlinear partial differential equations, to extract patterns from high-dimensional data generated from experiments. The proposed methodology may be applied to the problem of learning, system identification, or data-driven discovery of partial differential equations. Our framework relies on Gaussian processes, a powerful tool for probabilistic inference over functions, that enables us to strike a balance between model complexity and data fitting. The effectiveness of the proposed approach is demonstrated through a variety of canonical problems, spanning a number of scientific domains, including the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time dependent linear fractional equations. The methodology provides a promising new direction for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data.
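The Gaussian-process machinery the framework relies on reduces, in its simplest form, to the standard posterior-mean formula. A toy 1-D sketch (not the PDE-constrained kernels of the paper):

```python
import numpy as np

# Minimal Gaussian-process regression: squared-exponential kernel and the
# posterior-mean formula. Training data is a toy 1-D function.
def rbf(a, b, length=0.5):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

x_train = np.linspace(0, 1, 12)
y_train = np.sin(2 * np.pi * x_train)
noise = 1e-4                                  # jitter / observation noise

K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
x_test = np.array([0.25, 0.75])
K_star = rbf(x_test, x_train)

# Posterior mean: K_* (K + sigma^2 I)^{-1} y
mean = K_star @ np.linalg.solve(K, y_train)
print(mean)
```

The hidden-physics idea extends this by building the differential operator of the governing PDE into the kernel, so that small data can be leveraged against the known physics.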

  4. A Hierarchical Multivariate Bayesian Approach to Ensemble Model output Statistics in Atmospheric Prediction

    DTIC Science & Technology

    2017-09-01

This dissertation explores the efficacy of statistical post-processing methods downstream of these dynamical model components, using a hierarchical multivariate Bayesian approach. Keywords: Bayesian hierarchical modeling, Markov chain Monte Carlo methods, Metropolis algorithm, machine learning, atmospheric prediction.

  5. Robust one-step catalytic machine for high fidelity anticloning and W-state generation in a multiqubit system.

    PubMed

    Olaya-Castro, Alexandra; Johnson, Neil F; Quiroga, Luis

    2005-03-25

We propose a physically realizable machine which can either generate multiparticle W-like states, or implement high-fidelity 1→M (M = 1, 2, …, ∞) anticloning of an arbitrary qubit state, in a single step. This universal machine acts as a catalyst in that it is unchanged after either procedure, effectively resetting itself for its next operation. It possesses an inherent immunity to decoherence. Most importantly in terms of practical multiparty quantum communication, the machine's robustness in the presence of decoherence actually increases as the number of qubits M increases.

  6. SUPAR: Smartphone as a ubiquitous physical activity recognizer for u-healthcare services.

    PubMed

    Fahim, Muhammad; Lee, Sungyoung; Yoon, Yongik

    2014-01-01

The current generation of smartphones can be seen as one of the most ubiquitous devices for physical activity recognition. In this paper we propose a physical activity recognizer that provides u-healthcare services in a cost-effective manner by utilizing cloud computing infrastructure. Our model comprises the smartphone's embedded triaxial accelerometer, which senses body movements, and a cloud server that stores and processes the sensory data for numerous kinds of services. We compute time- and frequency-domain features over the raw signals and evaluate different machine learning algorithms to identify an accurate activity recognition model for four kinds of physical activities (i.e., walking, running, cycling and hopping). During our experiments we found that the Support Vector Machine (SVM) algorithm outperforms its counterparts for the aforementioned physical activities. Furthermore, we also explain how the smartphone application and the cloud server communicate with each other.
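A sketch of the time- and frequency-domain feature extraction described above, with a nearest-centroid classifier standing in for the SVM (signals, sampling rate, and activity labels are synthetic):

```python
import numpy as np

# Accelerometer-style activity features: time domain + frequency domain.
rng = np.random.default_rng(1)
fs, seconds = 50, 2                      # 50 Hz windows, 2 s long
t = np.arange(fs * seconds) / fs

def window(freq):                        # fake "activity" = dominant frequency
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)

def features(sig):
    spectrum = np.abs(np.fft.rfft(sig))
    return np.array([sig.mean(), sig.std(),               # time domain
                     spectrum.argmax(), spectrum.max()])  # frequency domain

walk = [features(window(2.0)) for _ in range(10)]   # ~2 Hz step cadence
run = [features(window(3.5)) for _ in range(10)]    # ~3.5 Hz step cadence
centroids = {"walking": np.mean(walk, axis=0), "running": np.mean(run, axis=0)}

def classify(sig):                       # nearest centroid in feature space
    f = features(sig)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

print(classify(window(2.0)), classify(window(3.5)))
```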

  7. Method of Individual Forecasting of Technical State of Logging Machines

    NASA Astrophysics Data System (ADS)

    Kozlov, V. G.; Gulevsky, V. A.; Skrypnikov, A. V.; Logoyda, V. S.; Menzhulova, A. S.

    2018-03-01

Developing a model that evaluates the possibility of failure requires knowledge of the regularities governing changes in the technical-condition parameters of machines in use. Studying these regularities requires stochastic models that take into account the physical essence of the destruction processes of the machines' structural elements, their production technology and degradation, the stochastic properties of the technical-state parameters, and the conditions and modes of operation.

  8. Predicting the Performance of Chain Saw Machines Based on Shore Scleroscope Hardness

    NASA Astrophysics Data System (ADS)

    Tumac, Deniz

    2014-03-01

Shore hardness has been used to estimate several physical and mechanical properties of rocks over the last few decades. However, the number of studies correlating Shore hardness with rock cutting performance is quite limited, and relatively little research has been carried out on predicting the performance of chain saw machines. This study differs from previous investigations in that Shore hardness values (SH1, SH2, and the deformation coefficient) are used to determine the field performance of chain saw machines. The measured Shore hardness values are correlated with the physical and mechanical properties of natural stone samples, cutting parameters (normal force, cutting force, and specific energy) obtained from linear cutting tests in unrelieved cutting mode, and the areal net cutting rate of chain saw machines. Two empirical models developed previously are improved for the prediction of the areal net cutting rate of chain saw machines. The first model is based on a revised chain saw penetration index, which uses SH1, machine weight, and useful arm cutting depth as predictors. The second model is based on the power consumed for cutting the stone alone, arm thickness, and specific energy as a function of the deformation coefficient. While cutting force has a strong relationship with Shore hardness values, normal force shows only a weak to moderate correlation. Uniaxial compressive strength, Cerchar abrasivity index, and density can also be predicted from Shore hardness values.

  9. Predicting Solar Activity Using Machine-Learning Methods

    NASA Astrophysics Data System (ADS)

    Bobra, M.

    2017-12-01

    Of all the activity observed on the Sun, two of the most energetic events are flares and coronal mass ejections. However, we do not, as of yet, fully understand the physical mechanism that triggers solar eruptions. A machine-learning algorithm, which is favorable in cases where the amount of data is large, is one way to [1] empirically determine the signatures of this mechanism in solar image data and [2] use them to predict solar activity. In this talk, we discuss the application of various machine learning algorithms - specifically, a Support Vector Machine, a sparse linear regression (Lasso), and Convolutional Neural Network - to image data from the photosphere, chromosphere, transition region, and corona taken by instruments aboard the Solar Dynamics Observatory in order to predict solar activity on a variety of time scales. Such an approach may be useful since, at the present time, there are no physical models of flares available for real-time prediction. We discuss our results (Bobra and Couvidat, 2015; Bobra and Ilonidis, 2016; Jonas et al., 2017) as well as other attempts to predict flares using machine-learning (e.g. Ahmed et al., 2013; Nishizuka et al. 2017) and compare these results with the more traditional techniques used by the NOAA Space Weather Prediction Center (Crown, 2012). We also discuss some of the challenges in using machine-learning algorithms for space science applications.

  10. 14 CFR 382.3 - What do the terms in this rule mean?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... and places between which those flights are performed. CPAP machine means a continuous positive airway pressure machine. Department or DOT means the United States Department of Transportation. Direct threat... learning disabilities. The term physical or mental impairment includes, but is not limited to, such...

  11. 14 CFR 382.3 - What do the terms in this rule mean?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... and places between which those flights are performed. CPAP machine means a continuous positive airway pressure machine. Department or DOT means the United States Department of Transportation. Direct threat... learning disabilities. The term physical or mental impairment includes, but is not limited to, such...

  12. 14 CFR 382.3 - What do the terms in this rule mean?

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... and places between which those flights are performed. CPAP machine means a continuous positive airway pressure machine. Department or DOT means the United States Department of Transportation. Direct threat... learning disabilities. The term physical or mental impairment includes, but is not limited to, such...

  13. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that no physical computer can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated.
A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task, a bound similar to the "encoding" bound governing how much the algorithmic information complexity of a Turing machine calculation can differ for two reference universal Turing machines. Finally, it is proven that either the Hamiltonian of our universe proscribes a certain type of computation, or prediction complexity is unique (unlike algorithmic information complexity), in that there is one and only one version of it that can be applicable throughout our universe.

  14. Many-Body Descriptors for Predicting Molecular Properties with Machine Learning: Analysis of Pairwise and Three-Body Interactions in Molecules.

    PubMed

    Pronobis, Wiktor; Tkatchenko, Alexandre; Müller, Klaus-Robert

    2018-06-12

Machine learning (ML) based prediction of molecular properties across chemical compound space is an important alternative approach for efficiently estimating the solutions of highly complex many-electron problems in chemistry and physics. Statistical methods represent molecules as descriptors that should encode molecular symmetries and interactions between atoms. Many such descriptors have been proposed; all of them have advantages and limitations. Here, we propose a set of general two-body and three-body interaction descriptors which are invariant to translation, rotation, and atomic indexing. By adapting the successfully used kernel ridge regression methods of machine learning, we evaluate our descriptors on predicting several properties of small organic molecules calculated using density-functional theory. We use two data sets. The GDB-7 set contains 6868 molecules with up to 7 heavy atoms of type CNO. The GDB-9 set is composed of 131722 molecules with up to 9 heavy atoms containing CNO. When trained on 5000 random molecules, our best model achieves accuracies of 0.8 kcal/mol (on the remaining 1868 molecules of GDB-7) and 1.5 kcal/mol (on the remaining 126722 molecules of GDB-9). Applying a linear regression model to our novel many-body descriptors performs almost as well as a nonlinear kernelized model. Linear models are readily interpretable: a feature importance ranking measure helps to obtain qualitative and quantitative insights into the importance of two- and three-body molecular interactions for predicting molecular properties computed with quantum-mechanical methods.
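Kernel ridge regression itself is compact enough to sketch directly; the toy below uses random descriptor vectors and a smooth stand-in target rather than real molecular data:

```python
import numpy as np

# Kernel ridge regression with a Gaussian kernel over descriptor vectors.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(100, 5))          # toy "many-body descriptors"
y = np.sin(X).sum(axis=1)                      # smooth stand-in for a DFT energy

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

lam = 1e-3                                     # ridge regularization
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # (K + lam I) a = y

X_new = rng.uniform(-1, 1, size=(10, 5))
pred = gaussian_kernel(X_new, X) @ alpha
err = float(np.abs(pred - np.sin(X_new).sum(axis=1)).mean())
print(err)
```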

  15. A Developmental Approach to Machine Learning?

    PubMed Central

    Smith, Linda B.; Slone, Lauren K.

    2017-01-01

    Visual learning depends on both the algorithms and the training material. This essay considers the natural statistics of infant- and toddler-egocentric vision. These natural training sets for human visual object recognition are very different from the training data fed into machine vision systems. Rather than equal experiences with all kinds of things, toddlers experience extremely skewed distributions with many repeated occurrences of a very few things. And though highly variable when considered as a whole, individual views of things are experienced in a specific order – with slow, smooth visual changes moment-to-moment, and developmentally ordered transitions in scene content. We propose that the skewed, ordered, biased visual experiences of infants and toddlers are the training data that allow human learners to develop a way to recognize everything, both the pervasively present entities and the rarely encountered ones. The joint consideration of real-world statistics for learning by researchers of human and machine learning seems likely to bring advances in both disciplines. PMID:29259573

  16. Feature recognition and detection for ancient architecture based on machine vision

    NASA Astrophysics Data System (ADS)

    Zou, Zheng; Wang, Niannian; Zhao, Peng; Zhao, Xuefeng

    2018-03-01

Ancient architecture has very high historical and artistic value. Ancient buildings feature a wide variety of textures and decorative paintings, which carry a great deal of historical meaning. Therefore, the cataloguing and statistical study of these different compositional and decorative features plays an important role in subsequent research. Until recently, however, such components were catalogued mainly by hand, which consumes a great deal of labor and time and is inefficient. At present, with the strong support of big data and GPU-accelerated training, machine vision with deep learning at its core has developed rapidly and is widely used in many fields. This paper proposes an approach to recognize and detect the textures, decorations and other features of ancient buildings based on machine vision. First, a large number of surface-texture images of ancient building components are classified manually to form a sample set. Then, a convolutional neural network is trained on the samples to obtain a classification detector. Finally, its precision is verified.

  17. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
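A minimal PME illustration, restricted to the simplest case: among distributions on a finite support with a prescribed mean, the maximum-entropy solution has the Gibbs form p_i ∝ exp(−λx_i), with λ fixed by the constraint. The support and target mean below are arbitrary, not drawn from the paper's components:

```python
import math

# Maximum-entropy distribution on {0,...,9} subject to a mean constraint.
support = list(range(10))
target_mean = 3.0

def mean_for(lam):                       # mean of the Gibbs distribution
    w = [math.exp(-lam * x) for x in support]
    z = sum(w)
    return sum(x * wi for x, wi in zip(support, w)) / z

# mean_for is decreasing in lam, so solve the constraint by bisection
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if mean_for(mid) > target_mean:
        lo = mid                         # lam too small: mean still too high
    else:
        hi = mid

lam = 0.5 * (lo + hi)
w = [math.exp(-lam * x) for x in support]
z = sum(w)
p = [wi / z for wi in w]
print(lam, sum(x * pi for x, pi in zip(support, p)))
```

With more moment constraints (as in the paper's use of the first four moments), the same Lagrange-multiplier form generalizes to p_i ∝ exp(−λ1 x_i − λ2 x_i² − …).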

  18. Research on intelligent machine self-perception method based on LSTM

    NASA Astrophysics Data System (ADS)

    Wang, Qiang; Cheng, Tao

    2018-05-01

In this paper, we use the advantages of LSTM in feature extraction and in processing high-dimensional, complex nonlinear data, and apply it to the autonomous perception of intelligent machines. Compared with a traditional multi-layer neural network, this model has memory and can handle time-series information of any length. Since the multi-physical-domain signals of processing machines have a temporal ordering, with contextual relationships between successive states, using this deep learning method to realize the self-perception of intelligent processing machines offers strong versatility and adaptability. The experimental results show that the proposed method can markedly improve sensing accuracy under the various working conditions of the intelligent machine, and also show that the algorithm can well support the intelligent processing machine in realizing self-perception.
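To make the LSTM gating equations concrete, here is a single forward step in NumPy with random weights; the dimensions and input sequence are placeholders, not the authors' architecture:

```python
import numpy as np

# One LSTM cell forward pass: input, forget, output gates and candidate state.
rng = np.random.default_rng(3)
n_in, n_hid = 4, 8

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, acting on [x_t, h_{t-1}] concatenated
W = {g: 0.1 * rng.standard_normal((n_hid, n_in + n_hid)) for g in "ifoc"}
b = {g: np.zeros(n_hid) for g in "ifoc"}

def lstm_step(x, h, c):
    z = np.concatenate([x, h])
    i = sigmoid(W["i"] @ z + b["i"])        # input gate
    f = sigmoid(W["f"] @ z + b["f"])        # forget gate
    o = sigmoid(W["o"] @ z + b["o"])        # output gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])  # candidate cell state
    c_new = f * c + i * c_tilde             # memory update
    h_new = o * np.tanh(c_new)              # hidden state
    return h_new, c_new

# Run a short sequence (standing in for multi-physical-domain signals)
h, c = np.zeros(n_hid), np.zeros(n_hid)
for t in range(10):
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c)
print(h.shape)
```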

  19. State machine analysis of sensor data from dynamic processes

    DOEpatents

    Cook, William R.; Brabson, John M.; Deland, Sharon M.

    2003-12-23

    A state machine model analyzes sensor data from dynamic processes at a facility to identify the actual processes that were performed at the facility during a period of interest for the purpose of remote facility inspection. An inspector can further input the expected operations into the state machine model and compare the expected, or declared, processes to the actual processes to identify undeclared processes at the facility. The state machine analysis enables the generation of knowledge about the state of the facility at all levels, from location of physical objects to complex operational concepts. Therefore, the state machine method and apparatus may benefit any agency or business with sensored facilities that stores or manipulates expensive, dangerous, or controlled materials or information.
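The declared-versus-observed comparison can be sketched with a small dictionary-driven state machine; the events, states, and processes below are invented for illustration:

```python
# Minimal finite-state model: map sensor events to process states and flag
# any observed process that is absent from the inspector's declaration.
TRANSITIONS = {
    ("idle", "door_open"): "loading",
    ("loading", "door_close"): "idle",
    ("idle", "motor_on"): "processing",
    ("processing", "motor_off"): "idle",
}

def infer_processes(events):
    state, seen = "idle", []
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)  # ignore irrelevant events
        if state != "idle" and (not seen or seen[-1] != state):
            seen.append(state)
    return seen

declared = {"loading"}
observed = infer_processes(
    ["door_open", "door_close", "motor_on", "motor_off"])
undeclared = [p for p in observed if p not in declared]
print(observed, undeclared)  # ['loading', 'processing'] ['processing']
```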

  20. Machine learning: Trends, perspectives, and prospects.

    PubMed

    Jordan, M I; Mitchell, T M

    2015-07-17

    Machine learning addresses the question of how to build computers that improve automatically through experience. It is one of today's most rapidly growing technical fields, lying at the intersection of computer science and statistics, and at the core of artificial intelligence and data science. Recent progress in machine learning has been driven both by the development of new learning algorithms and theory and by the ongoing explosion in the availability of online data and low-cost computation. The adoption of data-intensive machine-learning methods can be found throughout science, technology and commerce, leading to more evidence-based decision-making across many walks of life, including health care, manufacturing, education, financial modeling, policing, and marketing. Copyright © 2015, American Association for the Advancement of Science.

  1. New machine learning tools for predictive vegetation mapping after climate change: Bagging and Random Forest perform better than Regression Tree Analysis

    Treesearch

    L.R. Iverson; A.M. Prasad; A. Liaw

    2004-01-01

More and better machine learning tools are becoming available for landscape ecologists to aid in understanding species-environment relationships and to map probable species occurrence now and potentially into the future. To that end, we evaluated three statistical models: Regression Tree Analysis (RTA), Bagging Trees (BT) and Random Forest (RF) for their utility in...

  2. A Comparative Study of "Google Translate" Translations: An Error Analysis of English-to-Persian and Persian-to-English Translations

    ERIC Educational Resources Information Center

    Ghasemi, Hadis; Hashemian, Mahmood

    2016-01-01

    Both lack of time and the need to translate texts for numerous reasons brought about an increase in studying machine translation with a history spanning over 65 years. During the last decades, Google Translate, as a statistical machine translation (SMT), was in the center of attention for supporting 90 languages. Although there are many studies on…

  3. A New Mathematical Framework for Design Under Uncertainty

    DTIC Science & Technology

    2016-05-05

The framework blends multiple information sources via auto-regressive stochastic modeling, and a computationally efficient machine learning framework is developed on this basis. Combining regression and machine learning approaches leads to a comprehensive description of system performance with less uncertainty. A case study on Bayesian optimization of super-cavitating hydrofoils demonstrates the capabilities of statistical learning.

  4. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.
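With all three property maps normalized to [0, 1], an elementwise product is one natural reading of the paper's "simple intersection operation"; the maps below are synthetic placeholders, not outputs of the actual SVR and statistical stages:

```python
import numpy as np

# Combine three saliency "properties" by an elementwise intersection (product).
h, w = 8, 8
yy, xx = np.mgrid[0:h, 0:w]

feature_prior = np.zeros((h, w))
feature_prior[2:5, 2:5] = 1.0                                   # a "rare" feature patch
center = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / 8.0)  # position prior
feature_dist = np.full((h, w), 0.8)                              # flat feature distribution

saliency = feature_prior * center * feature_dist   # intersection of the properties
saliency /= saliency.max()                         # normalize the final map
print(np.unravel_index(saliency.argmax(), saliency.shape))
```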

  5. Machine Learning Predictions of a Multiresolution Climate Model Ensemble

    NASA Astrophysics Data System (ADS)

    Anderson, Gemma J.; Lucas, Donald D.

    2018-05-01

    Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation. The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.

  6. Software Reviews.

    ERIC Educational Resources Information Center

    Science and Children, 1990

    1990-01-01

    Reviewed are seven computer software packages for IBM and/or Apple Computers. Included are "Windows on Science: Volume 1--Physical Science"; "Science Probe--Physical Science"; "Wildlife Adventures--Grizzly Bears"; "Science Skills--Development Programs"; "The Clean Machine"; "Rock Doctor";…

  7. Nowcasting Cloud Fields for U.S. Air Force Special Operations

    DTIC Science & Technology

    2017-03-01

The application of Bayes' Rule offers many advantages over Kernel Density Estimation (KDE) and other commonly used statistical post-processing methods for estimating cloud reflectance and probability of cloud. A statistical post-processing technique is applied using Bayesian estimation to train the system from a set of past observations. Keywords: nowcasting, low cloud forecasting, cloud reflectance, ISR, Bayesian estimation, statistical post-processing, machine learning.

  8. What is the machine learning?

    NASA Astrophysics Data System (ADS)

    Chang, Spencer; Cohen, Timothy; Ostdiek, Bryan

    2018-03-01

    Applications of machine learning tools to problems of physical interest are often criticized for producing sensitivity at the expense of transparency. To address this concern, we explore a data planing procedure for identifying combinations of variables—aided by physical intuition—that can discriminate signal from background. Weights are introduced to smooth away the features in a given variable(s). New networks are then trained on this modified data. Observed decreases in sensitivity diagnose the variable's discriminating power. Planing also allows the investigation of the linear versus nonlinear nature of the boundaries between signal and background. We demonstrate the efficacy of this approach using a toy example, followed by an application to an idealized heavy resonance scenario at the Large Hadron Collider. By unpacking the information being utilized by these algorithms, this method puts in context what it means for a machine to learn.
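One way to read the planing weights is as histogram-based reweighting that flattens a chosen variable's distribution; the sketch below, on a synthetic sample, is an assumption about the procedure rather than the authors' exact implementation:

```python
import numpy as np

# "Plane" a variable: weight each event by the inverse population of its bin,
# so the weighted distribution of that variable becomes flat and a network
# trained on the reweighted data can no longer exploit it.
rng = np.random.default_rng(5)
signal = rng.normal(2.0, 0.5, 5000)          # an invariant-mass-like variable
bins = np.linspace(0, 4, 21)

counts, edges = np.histogram(signal, bins=bins)
idx = np.clip(np.digitize(signal, edges) - 1, 0, len(counts) - 1)
weights = 1.0 / np.maximum(counts[idx], 1)   # per-event weight = 1 / bin population

flat, _ = np.histogram(signal, bins=bins, weights=weights)
print(flat.round(2))  # every populated bin now contributes ~1.0
```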

  9. Molecular machines operating on the nanoscale: from classical to quantum

    PubMed Central

    2016-01-01

    Summary The main physical features and operating principles of isothermal nanomachines in the microworld, common to both classical and quantum machines, are reviewed. Special attention is paid to the dual, constructive role of dissipation and thermal fluctuations, the fluctuation–dissipation theorem, heat losses and free energy transduction, thermodynamic efficiency, and thermodynamic efficiency at maximum power. Several basic models are considered and discussed to highlight generic physical features. This work examines some common fallacies that continue to plague the literature. In particular, the erroneous beliefs that one should minimize friction and lower the temperature for high performance of Brownian machines, and that the thermodynamic efficiency at maximum power cannot exceed one-half are discussed. The emerging topic of anomalous molecular motors operating subdiffusively but very efficiently in the viscoelastic environment of living cells is also discussed. PMID:27335728

  10. Forensic Science Research and Development at the National Institute of Justice: Opportunities in Applied Physics

    NASA Astrophysics Data System (ADS)

    Dutton, Gregory

Forensic science is a collection of applied disciplines that draws from all branches of science. A key question in forensic analysis is: to what degree do a piece of evidence and a known reference sample share characteristics? Quantification of similarity, estimation of uncertainty, and determination of relevant population statistics are of current concern. A 2016 PCAST report questioned the foundational validity and the validity in practice of several forensic disciplines, including latent fingerprints, firearms comparisons and DNA mixture interpretation. One recommendation was the advancement of objective, automated comparison methods based on image analysis and machine learning. These concerns parallel the National Institute of Justice's ongoing R&D investments in applied chemistry, biology and physics. NIJ maintains a funding program spanning fundamental research with potential for forensic application to the validation of novel instruments and methods. Since 2009, NIJ has funded over $179M in external research to support the advancement of accuracy, validity and efficiency in the forensic sciences. An overview of NIJ's programs will be presented, with examples of relevant projects from fluid dynamics, 3D imaging, acoustics, and materials science.

  11. Exploring the Function Space of Deep-Learning Machines

    NASA Astrophysics Data System (ADS)

    Li, Bo; Saad, David

    2018-06-01

    The function space of deep-learning machines is investigated by studying growth in the entropy of functions of a given error with respect to a reference function, realized by a deep-learning machine. Using physics-inspired methods we study both sparsely and densely connected architectures to discover a layerwise convergence of candidate functions, marked by a corresponding reduction in entropy when approaching the reference function, gain insight into the importance of having a large number of layers, and observe phase transitions as the error increases.

  12. Mi Quinto Libro de Maquinas Simples: El Plano Inclinado. Escuela Intermedia Grados 7, 8 y 9 (My Fifth Book of Simple Machines: The Inclined Plane. Intermediate School Grades 7, 8, and 9).

    ERIC Educational Resources Information Center

    Alvarado, Patricio R.; Montalvo, Luis

    This is the fifth book in a five-book physical science series on simple machines. The books are designed for Spanish-speaking junior high school students. This volume explains the principles and some of the uses of inclined planes, as they appear in simple machines, by suggesting experiments and posing questions concerning drawings in the book…

  13. Teach students Semiconductor Lasers according to their natural ability

    NASA Astrophysics Data System (ADS)

    Liu, Ken; Guo, Chu Cai; Zhang, Jian Fa

    2017-08-01

    Physics explains the world through strict rules, and with these rules modern machines and electronic devices with exact operating behavior have been developed. Human beings, however, exceed these machines in possessing self-awareness. Should we treat self-aware students as machines made to learn strict rules, or teach them according to their aptitude? We choose the latter, because the former approach would cause students to lose their individual thinking and natural ability. In this paper we describe the individualized teaching of "semiconductor lasers".

  14. Machine learning approach for automated screening of malaria parasite using light microscopic images.

    PubMed

    Das, Dev Kumar; Ghosh, Madhumala; Pal, Mallika; Maiti, Asok K; Chakraborty, Chandan

    2013-02-01

    The aim of this paper is to address the development of computer-assisted malaria parasite characterization and classification using a machine learning approach based on light microscopic images of peripheral blood smears. In doing this, microscopic image acquisition from stained slides, illumination correction and noise reduction, erythrocyte segmentation, feature extraction, feature selection and finally classification of different stages of malaria (Plasmodium vivax and Plasmodium falciparum) have been investigated. The erythrocytes are segmented using a marker-controlled watershed transformation, and subsequently a total of ninety-six features describing the shape, size and texture of erythrocytes are extracted for parasitemia-infected versus non-infected cells. Ninety-four features are found to be statistically significant in discriminating the six classes. Here, a feature selection-cum-classification scheme has been devised by combining the F-statistic with statistical learning techniques, i.e., Bayesian learning and the support vector machine (SVM), in order to provide higher classification accuracy using the best set of discriminating features. Results show that the Bayesian approach provides the highest accuracy, i.e., 84%, for malaria classification by selecting the 19 most significant features, while SVM provides its highest accuracy, i.e., 83.5%, with the 9 most significant features. Finally, the performance of these two classifiers under the feature selection framework has been compared toward malaria parasite classification. Copyright © 2012 Elsevier Ltd. All rights reserved.
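    The F-statistic filter at the heart of the feature-selection scheme above can be sketched in a few lines. This is an illustrative numpy-only version of the one-way ANOVA F-score ranking step; the paper's full pipeline, including the Bayesian and SVM classifiers, is not reproduced here, and all names are ours:

    ```python
    import numpy as np

    def f_statistic(X, y):
        """One-way ANOVA F-score for each feature (column of X),
        given integer class labels y. Higher F = better class separation."""
        classes = np.unique(y)
        k, n = len(classes), len(y)
        grand_mean = X.mean(axis=0)
        ss_between = np.zeros(X.shape[1])
        ss_within = np.zeros(X.shape[1])
        for c in classes:
            Xc = X[y == c]
            ss_between += len(Xc) * (Xc.mean(axis=0) - grand_mean) ** 2
            ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
        return (ss_between / (k - 1)) / (ss_within / (n - k))

    def select_top_features(X, y, k_best):
        """Indices of the k_best features ranked by F-score, as in a
        filter-style selection step before training a classifier."""
        return np.argsort(f_statistic(X, y))[::-1][:k_best]
    ```

    A classifier (Bayesian, SVM, or otherwise) would then be trained on `X[:, select_top_features(X, y, k_best)]` only.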

  15. Combining Multiple Hypothesis Testing with Machine Learning Increases the Statistical Power of Genome-wide Association Studies

    PubMed Central

    Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert

    2016-01-01

    The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
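    The two-step idea behind COMBI can be sketched as follows. This is not the authors' implementation: a simple correlation score stands in for the trained SVM weights, and a large-sample normal approximation stands in for the exact association test. The power gain comes from Bonferroni-correcting over the k screened candidates rather than over all SNPs:

    ```python
    import numpy as np
    from math import erfc, sqrt

    def screen_and_test(X, y, k, alpha=0.05):
        """COMBI-style two-step analysis (illustrative sketch).
        Step 1: score each SNP and keep the top k candidates; COMBI uses
        the weights of a trained linear SVM, and a correlation score
        stands in for those weights here.
        Step 2: test only the k candidates, Bonferroni-correcting over k
        rather than over all SNPs."""
        Xc = (X - X.mean(0)) / (X.std(0) + 1e-12)
        yc = y - y.mean()
        score = np.abs(Xc.T @ yc)                # stand-in for |SVM weight|
        candidates = np.argsort(score)[::-1][:k]
        n = len(y)
        hits = []
        for j in candidates:
            r = (Xc[:, j] @ yc) / (n * y.std() + 1e-12)  # sample correlation
            z = r * sqrt(n)                      # ~N(0,1) under H0 (large n)
            p = erfc(abs(z) / sqrt(2))           # two-sided p-value
            if p < alpha / k:                    # Bonferroni over k only
                hits.append(j)
        return candidates, hits
    ```

    With p SNPs, raw thresholding would require p-values below alpha/p; screening first relaxes this to alpha/k for the candidates the learner deems promising.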

  18. Towards a generalized energy prediction model for machine tools

    PubMed Central

    Bhinge, Raunak; Park, Jinkyoo; Law, Kincho H.; Dornfeld, David A.; Helu, Moneer; Rachuri, Sudarsan

    2017-01-01

    Energy prediction of machine tools can deliver many advantages to a manufacturing enterprise, ranging from energy-efficient process planning to machine tool monitoring. Physics-based, energy prediction models have been proposed in the past to understand the energy usage pattern of a machine tool. However, uncertainties in both the machine and the operating environment make it difficult to predict the energy consumption of the target machine reliably. Taking advantage of the opportunity to collect extensive, contextual, energy-consumption data, we discuss a data-driven approach to develop an energy prediction model of a machine tool in this paper. First, we present a methodology that can efficiently and effectively collect and process data extracted from a machine tool and its sensors. We then present a data-driven model that can be used to predict the energy consumption of the machine tool for machining a generic part. Specifically, we use Gaussian Process (GP) Regression, a non-parametric machine-learning technique, to develop the prediction model. The energy prediction model is then generalized over multiple process parameters and operations. Finally, we apply this generalized model with a method to assess uncertainty intervals to predict the energy consumed to machine any part using a Mori Seiki NVD1500 machine tool. Furthermore, the same model can be used during process planning to optimize the energy-efficiency of a machining process. PMID:28652687
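    Gaussian Process regression with predictive uncertainty, the core technique of the paper, can be sketched numpy-only in one dimension. A single hypothetical process parameter stands in for the paper's multiple parameters and operations; the kernel choice and hyperparameters here are illustrative, not the authors':

    ```python
    import numpy as np

    def rbf(a, b, length=1.0, var=1.0):
        """Squared-exponential kernel between 1-D input vectors."""
        d2 = (a[:, None] - b[None, :]) ** 2
        return var * np.exp(-0.5 * d2 / length ** 2)

    def gp_predict(x_train, y_train, x_test, noise=1e-2):
        """GP regression posterior mean and variance (standard equations).
        The variance gives exactly the kind of uncertainty interval the
        paper attaches to its energy predictions."""
        K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
        Ks = rbf(x_train, x_test)
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
        mean = Ks.T @ alpha
        v = np.linalg.solve(L, Ks)
        var = np.diag(rbf(x_test, x_test)) - (v * v).sum(axis=0)
        return mean, var
    ```

    Far from the training data the predictive variance reverts to the prior variance, signalling that an energy estimate there should not be trusted.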

  20. Machine learning modelling for predicting soil liquefaction susceptibility

    NASA Astrophysics Data System (ADS)

    Samui, P.; Sitharam, T. G.

    2011-01-01

    This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first technique is an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second is the Support Vector Machine (SVM), a classification technique firmly grounded in statistical learning theory. ANN and SVM models have been developed to predict liquefaction susceptibility using the corrected SPT value [(N1)60] and the cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models to require only two parameters, (N1)60 and peak ground acceleration (amax/g), for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the capability of the SVM over the ANN models.
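    As an illustration of the two-parameter classification setting, a minimal logistic model trained by gradient descent is sketched below. The study itself used an MLP (Levenberg-Marquardt) and an SVM, neither of which is reproduced here; inputs are assumed standardized, and all names are ours:

    ```python
    import numpy as np

    def train_logistic(X, y, lr=0.1, epochs=500):
        """Fit a logistic decision boundary mapping two standardized
        inputs (e.g. stand-ins for (N1)60 and CSR) to a binary
        liquefied / not-liquefied label, by batch gradient descent."""
        Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias column
        w = np.zeros(Xb.shape[1])
        for _ in range(epochs):
            p = 1.0 / (1.0 + np.exp(-Xb @ w))       # predicted probability
            w -= lr * Xb.T @ (p - y) / len(y)       # gradient step
        return w

    def predict(w, X):
        """Class labels (0/1) from the fitted linear decision boundary."""
        Xb = np.hstack([X, np.ones((len(X), 1))])
        return (Xb @ w > 0).astype(int)
    ```

    The same two-input decision-boundary idea underlies both of the paper's classifiers; only the boundary family (MLP, SVM margin) differs.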

  1. Effects of promotional materials on vending sales of low-fat items in teachers' lounges.

    PubMed

    Fiske, Amy; Cullen, Karen Weber

    2004-01-01

    This study examined the impact of an environmental intervention in the form of promotional materials and increased availability of low-fat items on vending machine sales. Ten vending machines were selected and randomly assigned to one of three conditions: control, or one of two experimental conditions. Vending machines in the two intervention conditions received three additional low-fat selections. Low-fat items were promoted at two levels: labels (intervention I), and labels plus signs (intervention II). The number of individual items sold and the total revenue generated was recorded weekly for each machine for 4 weeks. Use of promotional materials resulted in a small, but not significant, increase in the number of low-fat items sold, although machine sales were not significantly impacted by the change in product selection. Results of this study, although not statistically significant, suggest that environmental change may be a realistic means of positively influencing consumer behavior.

  2. A Critical Review for Developing Accurate and Dynamic Predictive Models Using Machine Learning Methods in Medicine and Health Care.

    PubMed

    Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer

    2017-04-01

    Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. Within machine learning, classification and prediction are major fields of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.

  3. Turning the LHC ring into a new physics search machine

    NASA Astrophysics Data System (ADS)

    Orava, Risto

    2017-03-01

    The LHC Collider Ring is proposed to be turned into an ultimate automatic search engine for new physics in four consecutive phases: (1) Searches for heavy particles produced in Central Exclusive Process (CEP): pp → p + X + p based on the existing Beam Loss Monitoring (BLM) system of the LHC; (2) Feasibility study of using the LHC Ring as a gravitation wave antenna; (3) Extensions to the current BLM system to facilitate precise registration of the selected CEP proton exit points from the LHC beam vacuum chamber; (4) Integration of the BLM based event tagging system together with the trigger/data acquisition systems of the LHC experiments to facilitate an on-line automatic search machine for the physics of tomorrow.

  4. Application of machine learning techniques to analyse the effects of physical exercise in ventricular fibrillation.

    PubMed

    Caravaca, Juan; Soria-Olivas, Emilio; Bataller, Manuel; Serrano, Antonio J; Such-Miquel, Luis; Vila-Francés, Joan; Guerrero, Juan F

    2014-02-01

    This work presents the application of machine learning techniques to analyse the influence of physical exercise on the physiological properties of the heart during ventricular fibrillation. To this end, different kinds of classifiers (linear and neural models) are used to discriminate between trained and sedentary rabbit hearts. The use of these classifiers in combination with a wrapper feature selection algorithm makes it possible to extract knowledge about the most relevant features in the problem. The obtained results show that neural models outperform linear classifiers (better performance indices and a better dimensionality reduction). The most relevant features for describing the benefits of physical exercise are those related to myocardial heterogeneity, mean activation rate and activation complexity. © 2013 Published by Elsevier Ltd.
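    The wrapper feature-selection loop can be sketched generically: greedily grow the feature subset, scoring each candidate subset with the wrapped classifier. The score function below is a placeholder for whatever cross-validated metric the wrapped linear or neural model returns; this sketch is not the authors' algorithm:

    ```python
    import numpy as np

    def forward_select(X, y, score_fn, max_feats):
        """Greedy wrapper feature selection: repeatedly add the feature
        whose inclusion most improves the classifier score.  score_fn
        takes a list of feature indices and returns a scalar score
        (e.g. cross-validated accuracy of the wrapped model)."""
        selected, best = [], -np.inf
        while len(selected) < max_feats:
            gains = [(score_fn(selected + [j]), j)
                     for j in range(X.shape[1]) if j not in selected]
            if not gains:
                break
            score, j = max(gains)
            if score <= best:       # stop when no feature helps any more
                break
            selected, best = selected + [j], score
        return selected, best
    ```

    Because each candidate subset is scored by the full classifier, the wrapper directly measures which features the model actually exploits, at the cost of many training runs.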

  5. Reproducing an Early-20th-Century Wave Machine

    ERIC Educational Resources Information Center

    Daffron, John A.; Greenslade, Thomas B., Jr.

    2016-01-01

    Physics students often have problems understanding waves. Over the years numerous mechanical devices have been devised to show the propagation of both transverse and longitudinal waves (Ref. 1). In this article an updated version of an early-20th-century transverse wave machine is discussed. The original, Fig. 1, is at Creighton University in…

  6. Cybernetic anthropomorphic machine systems

    NASA Technical Reports Server (NTRS)

    Gray, W. E.

    1974-01-01

    Functional descriptions are provided for a number of cybernetic man machine systems that augment the capacity of normal human beings in the areas of strength, reach or physical size, and environmental interaction, and that are also applicable to aiding the neurologically handicapped. Teleoperators, computer control, exoskeletal devices, quadruped vehicles, space maintenance systems, and communications equipment are considered.

  7. Financial heat machine

    NASA Astrophysics Data System (ADS)

    Khrennikov, Andrei

    2005-05-01

    We consider the dynamics of financial markets as dynamics of expectations and discuss such dynamics from the point of view of phenomenological thermodynamics. We describe a financial Carnot cycle and the financial analog of a heat machine. We see that, while in physics a perpetuum mobile is absolutely impossible, in economics such a machine may exist under certain conditions.

  8. A SYSTEMS APPROACH UTILIZING GENERAL-PURPOSE AND SPECIAL-PURPOSE TEACHING MACHINES.

    ERIC Educational Resources Information Center

    SILVERN, LEONARD C.

    In order to improve the employee training-evaluation method, teaching machines and performance aids must be physically and operationally integrated into the system, thus returning training to the actual job environment. Given these conditions, training can be measured, calibrated, and controlled with respect to actual job performance standards and…

  9. Identifying ecological "sweet spots" underlying cyanobacteria functional group dynamics from long-term observations using a statistical machine learning approach

    NASA Astrophysics Data System (ADS)

    Nelson, N.; Munoz-Carpena, R.; Phlips, E. J.

    2017-12-01

    Diversity in the eco-physiological adaptations of cyanobacteria genera creates challenges for water managers who are tasked with developing appropriate actions for controlling not only the intensity and frequency of cyanobacteria blooms, but also reducing the potential for blooms of harmful taxa (e.g., toxin producers, N2 fixers). Compounding these challenges, the efficacy of nutrient management strategies (phosphorus-only versus nitrogen-and-phosphorus) for cyanobacteria bloom abatement is the subject of an ongoing debate, which increases uncertainty associated with bloom mitigation decision-making. In this work, we analyze a unique long-term (17-year) dataset composed of monthly observations of cyanobacteria genera abundances, zooplankton abundances, water quality, and flow from Lake George, a bloom-impacted flow-through lake of the St. Johns River (FL, USA). Using the Random Forests machine learning algorithm, an assumption-free ensemble modeling approach, the dataset was evaluated to quantify and characterize relationships between environmental conditions and seven cyanobacteria groupings: five genera (Anabaena, Cylindrospermopsis, Lyngbya, Microcystis, and Oscillatoria) and two functional groups (N2 fixers and non-fixers). Results highlight the selectivity of nitrogen in describing genera and functional group dynamics, and potential for physical effects to limit the efficacy of nutrient management as a mechanism for cyanobacteria bloom mitigation.

  10. Predicting Failure Under Laboratory Conditions: Learning the Physics of Slow Frictional Slip and Dynamic Failure

    NASA Astrophysics Data System (ADS)

    Rouet-Leduc, B.; Hulbert, C.; Riviere, J.; Lubbers, N.; Barros, K.; Marone, C.; Johnson, P. A.

    2016-12-01

    Forecasting failure is a primary goal in diverse domains that include earthquake physics, materials science, nondestructive evaluation of materials and other engineering applications. Due to the highly complex physics of material failure and limitations on gathering data in the failure nucleation zone, this goal has often appeared out of reach; however, recent advances in instrumentation sensitivity, instrument density and data analysis show promise toward forecasting failure times. Here, we show that we can predict frictional failure times of both slow and fast stick slip failure events in the laboratory. This advance is made possible by applying a machine learning approach known as Random Forests (RF) [1] to the continuous acoustic emission (AE) time series recorded by detectors located on the fault blocks. The RF is trained using a large number of statistical features derived from the AE time series signal. The model is then applied to data not previously analyzed. Remarkably, we find that the RF method predicts upcoming failure time far in advance of a stick slip event, based only on a short time window of data. Further, the algorithm accurately predicts the time of the beginning and end of the next slip event. The predicted time improves as failure is approached, as other data features add to prediction. Our results show robust predictions of slow and dynamic failure based on acoustic emissions from the fault zone throughout the laboratory seismic cycle. The predictions are based on previously unidentified tremor-like acoustic signals that occur during stress build up and the onset of macroscopic frictional weakening. We suggest that the tremor-like signals carry information about fault zone processes and allow precise predictions of failure at any time in the slow slip or stick slip cycle [2]. If the laboratory experiments represent Earth frictional conditions, it could well be that signals are being missed that contain highly useful predictive information.
    [1] Breiman, L. Random forests. Machine Learning 45, 5-32 (2001). [2] Rouet-Leduc, B., C. Hulbert, N. Lubbers, K. Barros and P. A. Johnson, Learning the physics of failure, in review (2016).
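    The statistical-feature extraction that feeds the Random Forest can be sketched as a sliding-window computation over the AE signal. The feature set below (mean, standard deviation, skewness, excess kurtosis, 95th percentile) is illustrative; the study derives a much larger set:

    ```python
    import numpy as np

    def window_features(signal, win, step):
        """Statistical features over sliding windows of a continuous
        acoustic-emission signal.  Each window yields one row of
        [mean, std, skewness, excess kurtosis, 95th percentile],
        the kind of feature matrix a Random Forest regressor would
        map to time-to-failure."""
        rows = []
        for start in range(0, len(signal) - win + 1, step):
            w = signal[start:start + win]
            mu, sd = w.mean(), w.std()
            z = (w - mu) / (sd + 1e-12)
            rows.append([mu, sd,
                         (z ** 3).mean(),          # skewness
                         (z ** 4).mean() - 3.0,    # excess kurtosis
                         np.percentile(w, 95)])
        return np.array(rows)
    ```

    Each feature row, paired with the measured time remaining before the next slip event, forms one training example for the regressor.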

  11. Comparative study of state-of-the-art myoelectric controllers for multigrasp prosthetic hands.

    PubMed

    Segil, Jacob L; Controzzi, Marco; Weir, Richard F ff; Cipriani, Christian

    2014-01-01

    A myoelectric controller should provide an intuitive and effective human-machine interface that deciphers user intent in real-time and is robust enough to operate in daily life. Many myoelectric control architectures have been developed, including pattern recognition systems, finite state machines, and more recently, postural control schemes. Here, we present a comparative study of two types of finite state machines and a postural control scheme using both virtual and physical assessment procedures with seven nondisabled subjects. The Southampton Hand Assessment Procedure (SHAP) was used in order to compare the effectiveness of the controllers during activities of daily living using a multigrasp artificial hand. Also, a virtual hand posture matching task was used to compare the controllers when reproducing six target postures. The performance when using the postural control scheme was significantly better (p < 0.05) than the finite state machines during the physical assessment when comparing within-subject averages using the SHAP percent difference metric. The virtual assessment results described significantly greater completion rates (97% and 99%) for the finite state machines, but the movement time tended to be faster (2.7 s) for the postural control scheme. Our results substantiate that postural control schemes rival other state-of-the-art myoelectric controllers.

  12. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine

    PubMed Central

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-01-01

    In this paper, we propose a three-dimensional design and evaluation framework and process based on a probabilistic motion synthesis algorithm and biomechanical analysis system for the design of the Smith machine and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human–machine–environment model as well as a squat motion synthesis system and biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between a human body and machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program based on a Gaussian process regression algorithm with a set of given values for independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring the electromyography (EMG) signals, ground forces, and squat motions as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines. PMID:28178184

  13. Machine vision system for measuring conifer seedling morphology

    NASA Astrophysics Data System (ADS)

    Rigney, Michael P.; Kranzler, Glenn A.

    1995-01-01

    A PC-based machine vision system providing rapid measurement of bare-root tree seedling morphological features has been designed. The system uses backlighting and a 2048-pixel line-scan camera to acquire images with transverse resolutions as high as 0.05 mm for precise measurement of stem diameter. Individual seedlings are manually loaded on a conveyor belt and inspected by the vision system in less than 0.25 seconds. Designed for quality control and morphological data acquisition by nursery personnel, the system provides a user-friendly, menu-driven graphical interface. The system automatically locates the seedling root collar and measures stem diameter, shoot height, sturdiness ratio, root mass length, projected shoot and root area, shoot-root area ratio, and percent fine roots. Sample statistics are computed for each measured feature. Measurements for each seedling may be stored for later analysis. Feature measurements may be compared with multi-class quality criteria to determine sample quality or to perform multi-class sorting. Statistical summary and classification reports may be printed to facilitate the communication of quality concerns with grading personnel. Tests were conducted at a commercial forest nursery to evaluate measurement precision. Four quality control personnel measured root collar diameter, stem height, and root mass length on each of 200 conifer seedlings. The same seedlings were inspected four times by the machine vision system. Machine stem diameter measurement precision was four times greater than that of manual measurements. Machine and manual measurements had comparable precision for shoot height and root mass length.

  14. Effects of the sliding rehabilitation machine on balance and gait in chronic stroke patients - a controlled clinical trial.

    PubMed

    Byun, Seung-Deuk; Jung, Tae-Du; Kim, Chul-Hyun; Lee, Yang-Soo

    2011-05-01

    To investigate the effects of a sliding rehabilitation machine on balance and gait in chronic stroke patients. A non-randomized crossover design. Inpatient rehabilitation in a general hospital. Thirty patients with chronic stroke who had medium or high falling risk as determined by the Berg Balance Scale. Participants were divided into two groups and underwent four weeks of training. Group A (n = 15) underwent training with the sliding rehabilitation machine for two weeks with concurrent conventional training, followed by conventional training only for another two weeks. Group B (n = 15) underwent the same training in reverse order. The effect of the experimental period was defined as the sum of changes during training with sliding rehabilitation machine in each group, and the effect of the control period was defined as those during the conventional training only in each group. Functional Ambulation Category, Berg Balance Scale, Six-Minute Walk Test, Timed Up and Go Test, Korean Modified Barthel Index, Modified Ashworth Scale and Manual Muscle Test. Statistically significant improvements were observed in all parameters except Modified Ashworth Scale in the experimental period, but only in Six-Minute Walk Test (P < 0.01) in the control period. There were also statistically significant differences in the degree of change in all parameters in the experimental period as compared to the control period. The sliding rehabilitation machine may be a useful tool for the improvement of balance and gait abilities in chronic stroke patients.

  15. Active Gaming: Is "Virtual" Reality Right for Your Physical Education Program?

    ERIC Educational Resources Information Center

    Hansen, Lisa; Sanders, Stephen W.

    2012-01-01

    Active gaming is growing in popularity and the idea of increasing children's physical activity by using technology is largely accepted by physical educators. Teachers nationwide have been providing active gaming equipment such as virtual bikes, rhythmic dance machines, virtual sporting games, martial arts simulators, balance boards, and other…

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belley, M; Schmidt, M; Knutson, N

    Purpose: Physics second-checks for external beam radiation therapy are performed, in part, to verify that the machine parameters in the Record-and-Verify (R&V) system that will ultimately be sent to the LINAC exactly match the values initially calculated by the Treatment Planning System (TPS). While performing the second-check, a large portion of the physicists’ time is spent navigating and arranging display windows to locate and compare the relevant numerical values (MLC position, collimator rotation, field size, MU, etc.). Here, we describe the development of a software tool that guides the physicist by aggregating and succinctly displaying machine parameter data relevant to the physics second-check process. Methods: A data retrieval software tool was developed using Python to aggregate data and generate a list of machine parameters that are commonly verified during the physics second-check process. This software tool imported values from (i) the TPS RT Plan DICOM file and (ii) the MOSAIQ (R&V) Structured Query Language (SQL) database. The machine parameters aggregated for this study included: MLC positions, X&Y jaw positions, collimator rotation, gantry rotation, MU, dose rate, wedges and accessories, cumulative dose, energy, machine name, couch angle, and more. Results: A GUI interface was developed to generate a side-by-side display of the aggregated machine parameter values for each field, presented to the physicist for direct visual comparison. This software tool was tested for 3D conformal, static IMRT, sliding window IMRT, and VMAT treatment plans. Conclusion: This software tool facilitated the data collection needed for the physicist to conduct a second-check, yielding an optimized second-check workflow that was both more user-friendly and time-efficient. Utilizing this software tool, the physicist was able to spend less time searching through the TPS PDF plan document and the R&V system and focus the second-check efforts on assessing the patient-specific plan quality.
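The core of such a second-check comparison can be sketched as a tolerance-aware diff between the planned and recorded parameter sets. The function and parameter names below are hypothetical illustrations, not the authors' actual code; in practice the planned values would be read from the RT Plan DICOM file (e.g. with pydicom) and the recorded values queried from the MOSAIQ SQL database.

```python
# Hypothetical sketch of the side-by-side comparison step in a second-check
# tool; parameter names and tolerances are illustrative assumptions.

def compare_parameters(planned, recorded, tolerances=None):
    """Return (name, planned, recorded, match) rows for side-by-side display."""
    tolerances = tolerances or {}
    rows = []
    for name, plan_val in planned.items():
        rec_val = recorded.get(name)
        if rec_val is None:
            match = False
        elif isinstance(plan_val, (int, float)) and isinstance(rec_val, (int, float)):
            # Numeric values match within a per-parameter tolerance (default: exact)
            match = abs(plan_val - rec_val) <= tolerances.get(name, 0.0)
        else:
            match = plan_val == rec_val
        rows.append((name, plan_val, rec_val, match))
    return rows

planned = {"MU": 187.0, "gantry_deg": 180.0, "collimator_deg": 10.0, "energy": "6X"}
recorded = {"MU": 187.0, "gantry_deg": 180.1, "collimator_deg": 10.0, "energy": "6X"}
rows = compare_parameters(planned, recorded, tolerances={"gantry_deg": 0.2})
```

A GUI layer would then render these rows field by field, flagging any row whose match value is False.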

  17. Machine Learning to Improve Energy Expenditure Estimation in Children With Disabilities: A Pilot Study in Duchenne Muscular Dystrophy.

    PubMed

    Pande, Amit; Mohapatra, Prasant; Nicorici, Alina; Han, Jay J

    2016-07-19

    Children with physical impairments are at a greater risk for obesity and decreased physical activity. A better understanding of physical activity pattern and energy expenditure (EE) would lead to a more targeted approach to intervention. This study focuses on studying the use of machine-learning algorithms for EE estimation in children with disabilities. A pilot study was conducted on children with Duchenne muscular dystrophy (DMD) to identify important factors for determining EE and develop a novel algorithm to accurately estimate EE from wearable sensor-collected data. There were 7 boys with DMD, 6 healthy control boys, and 22 control adults recruited. Data were collected using smartphone accelerometer and chest-worn heart rate sensors. The gold standard EE values were obtained from the COSMED K4b2 portable cardiopulmonary metabolic unit worn by boys (aged 6-10 years) with DMD and controls. Data from this sensor setup were collected simultaneously during a series of concurrent activities. Linear regression and nonlinear machine-learning-based approaches were used to analyze the relationship between accelerometer and heart rate readings and COSMED values. Existing calorimetry equations using linear regression and nonlinear machine-learning-based models, developed for healthy adults and young children, give low correlation to actual EE values in children with disabilities (14%-40%). The proposed model for boys with DMD uses ensemble machine learning techniques and gives a 91% correlation with actual measured EE values (root mean square error of 0.017). Our results confirm that the methods developed to determine EE using accelerometer and heart rate sensor values in normal adults are not appropriate for children with disabilities and should not be used. A much more accurate model is obtained using machine-learning-based nonlinear regression specifically developed for this target population. ©Amit Pande, Prasant Mohapatra, Alina Nicorici, Jay J Han. 
Originally published in JMIR Rehabilitation and Assistive Technology (http://rehab.jmir.org), 19.07.2016.
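The two evaluation metrics quoted above (correlation with measured EE and root-mean-square error) are straightforward to compute. The sketch below uses made-up EE values for illustration, not the study's data.

```python
import numpy as np

# Pearson correlation and RMSE between predicted and gold-standard EE values.
# The numbers below are illustrative stand-ins, not the study's measurements.

def pearson_r(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def rmse(pred, actual):
    d = np.asarray(pred, float) - np.asarray(actual, float)
    return float(np.sqrt(np.mean(d * d)))

measured = [2.1, 3.4, 4.8, 6.0, 7.5]   # gold standard (e.g. COSMED K4b2)
predicted = [2.0, 3.6, 4.7, 6.2, 7.3]  # hypothetical model output
```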

  18. Harry Mergler with His Modified Differential Analyzer

    NASA Image and Video Library

    1951-06-21

    Harry Mergler stands at the control board of a differential analyzer in the new Instrument Research Laboratory at the National Advisory Committee for Aeronautics (NACA) Lewis Flight Propulsion Laboratory. The differential analyzer was a multi-variable analog computation machine devised in 1931 by Massachusetts Institute of Technology researcher and future NACA Committee member Vannevar Bush. The mechanical device could solve computations up to the sixth order, but had to be rewired before each new computation. Mergler modified Bush’s differential analyzer in the late 1940s to calculate droplet trajectories for Lewis’ icing research program. In four days Mergler’s machine could calculate what previously required weeks. NACA Lewis built the Instrument Research Laboratory in 1950 and 1951 to house the large analog computer equipment. The two-story structure also provided offices for the Mechanical Computational Analysis, and Flow Physics sections of the Physics Division. The division had previously operated from the lab’s hangar because of its icing research and flight operations activities. Mergler joined the Instrument Research Section of the Physics Division in 1948 after earning an undergraduate degree in Physics from the Case Institute of Technology. Mergler’s focus was on the synthesis of analog computers with the machine tools used to create compressor and turbine blades for jet engines.

  19. Multi-physics CFD simulations in engineering

    NASA Astrophysics Data System (ADS)

    Yamamoto, Makoto

    2013-08-01

    Nowadays, Computational Fluid Dynamics (CFD) software is adopted as a design and analysis tool in a great number of engineering fields, and single-physics CFD can be considered sufficiently mature from a practical point of view. The main target of existing CFD software is single-phase flows such as water and air. However, many multi-physics problems exist in engineering. Most of them consist of flow coupled with other physics, and the interactions between the different physics are very important. Obviously, multi-physics phenomena are critical in developing machines and processes. A multi-physics phenomenon is typically very complex and difficult to predict simply by adding other physics to the flow phenomenon. Therefore, multi-physics CFD techniques are still under research and development. This stems from the facts that the processing speed of current computers is not fast enough for conducting multi-physics simulations, and that physical models other than flow physics have not been suitably established. In the near future, we therefore have to develop various physical models and efficient CFD techniques in order to carry out successful multi-physics simulations in engineering. In the present paper, I describe the present state of multi-physics CFD simulations, and then show some numerical results, such as ice accretion and the electro-chemical machining process of a three-dimensional compressor blade, which were obtained in my laboratory. Multi-physics CFD simulation is likely to be a key technology in the near future.

  20. Using Phun to Study ``Perpetual Motion'' Machines

    NASA Astrophysics Data System (ADS)

    Koreš, Jaroslav

    2012-05-01

    The concept of "perpetual motion" has a long history. The Indian astronomer and mathematician Bhaskara II (12th century) was the first person to describe a perpetual motion (PM) machine. An example of a 13th-century PM machine is shown in Fig. 1. Although the law of conservation of energy clearly implies the impossibility of PM construction, over the centuries numerous proposals for PM have been made, involving ever more elements of modern science in their construction. It is possible to test a variety of PM machines in the classroom using a program called Phun or its commercial version Algodoo. The programs are designed to simulate physical processes and we can easily simulate mechanical machines using them. They provide an intuitive graphical environment controlled with a mouse; a programming language is not needed. This paper describes simulations of four different (supposed) PM machines.

  1. When Machines Think: Radiology's Next Frontier.

    PubMed

    Dreyer, Keith J; Geis, J Raymond

    2017-12-01

    Artificial intelligence (AI), machine learning, and deep learning are terms now seen frequently, all of which refer to computer algorithms that change as they are exposed to more data. Many of these algorithms are surprisingly good at recognizing objects in images. The combination of large amounts of machine-consumable digital data, increased and cheaper computing power, and increasingly sophisticated statistical models enables machines to find patterns in data in ways that are not only cost-effective but also potentially beyond humans' abilities. Building an AI algorithm can be surprisingly easy. Understanding the associated data structures and statistics, on the other hand, is often difficult and obscure. Converting the algorithm into a sophisticated product that works consistently in broad, general clinical use is complex and incompletely understood. Showing that these AI products reduce costs and improve outcomes will require clinical translation and industrial-grade integration into routine workflow. Radiology has the chance to leverage AI to become a center of intelligently aggregated, quantitative, diagnostic information. Centaur radiologists, formed as a synergy of human plus computer, will provide interpretations using data extracted from images by humans and image-analysis computer algorithms, as well as the electronic health record, genomics, and other disparate sources. These interpretations will form the foundation of precision health care, or care customized to an individual patient. © RSNA, 2017.

  2. Machinery Bearing Fault Diagnosis Using Variational Mode Decomposition and Support Vector Machine as a Classifier

    NASA Astrophysics Data System (ADS)

    Rama Krishna, K.; Ramachandran, K. I.

    2018-02-01

    Crack propagation is a major cause of failure in rotating machines. It adversely affects productivity, safety, and machining quality, so accurately detecting the severity of a crack is imperative for the predictive maintenance of such machines. Fault diagnosis is an established approach to identifying faults by observing the non-linear behaviour of vibration signals at various operating conditions. In this work, we find the classification efficiencies for both the original and the reconstructed vibration signals. The reconstructed signals are obtained using Variational Mode Decomposition (VMD), by splitting the original signal into three intrinsic mode function components and framing them accordingly. Classification proceeds in three phases: feature extraction, feature selection, and feature classification. In the feature-extraction phase, statistical features are computed individually from the original and the reconstructed signals. A subset of these statistical parameters is then selected and classified using the SVM classifier. The results indicate the best parameters and the appropriate kernel for the SVM classifier when detecting bearing faults. We conclude that VMD followed by SVM gives better results than SVM applied to the raw signals alone, owing to the denoising and filtering of the raw vibration signals.
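The feature-extraction phase can be illustrated with a few typical time-domain statistical features for bearing-fault work (RMS, skewness, kurtosis, crest factor). These are common choices in the vibration literature, though the paper's exact feature set may differ; the resulting feature vectors would then be fed to an SVM classifier.

```python
import numpy as np

# Illustrative statistical features from a vibration signal. The synthetic
# "faulty" signal adds periodic impacts, as a cracked bearing race might.

def statistical_features(signal):
    x = np.asarray(signal, float)
    mu, sigma = x.mean(), x.std()
    rms = np.sqrt(np.mean(x * x))
    skew = np.mean(((x - mu) / sigma) ** 3)
    kurt = np.mean(((x - mu) / sigma) ** 4)
    crest = np.max(np.abs(x)) / rms
    return np.array([rms, skew, kurt, crest])

rng = np.random.default_rng(0)
healthy = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.normal(size=2000)
faulty = healthy.copy()
faulty[::200] += 3.0   # periodic impacts raise kurtosis and crest factor
f_healthy = statistical_features(healthy)
f_faulty = statistical_features(faulty)
```

Impulsive faults show up clearly in kurtosis and crest factor, which is why such features separate fault classes well before any classifier is applied.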

  3. Occupational Accidents with Agricultural Machinery in Austria.

    PubMed

    Kogler, Robert; Quendler, Elisabeth; Boxberger, Josef

    2016-01-01

    The number of recognized accidents with fatalities during agricultural and forestry work, despite better technology and coordinated prevention and training, is still very high in Austria. The accident scenarios in which people are injured vary widely across farms. The common causes of accidents in agriculture and forestry are loss of control of a machine, means of transport or handling equipment, hand-held tool, or object or animal, followed by slipping, stumbling and falling, and the breakage, bursting, splitting, slipping, fall, or collapse of a material agent. In the literature, a number of studies of general (machine- and animal-related) and specific (machine-related) agricultural and forestry accident situations can be found that refer to different databases. Using the Austrian Workers' Compensation Board (AUVA) database of occupational accidents with different agricultural machinery over the period 2008-2010 in Austria, the main characteristics of the accident, the victim, and the employer, as well as variables on causes and circumstances, were statistically analyzed by frequency and context, employing the chi-square test and odds ratio. The aim of the study was to determine the information content and quality of the European Statistics on Accidents at Work (ESAW) variables in order to evaluate safety gaps and risks as well as accidental man-machine interaction.

  4. Travelogue--a newcomer encounters statistics and the computer.

    PubMed

    Bruce, Peter

    2011-11-01

    Computer-intensive methods have revolutionized statistics, giving rise to new areas of analysis and expertise in predictive analytics, image processing, pattern recognition, machine learning, genomic analysis, and more. Interest naturally centers on the new capabilities the computer allows the analyst to bring to the table. This article, instead, focuses on the account of how computer-based resampling methods, with their relative simplicity and transparency, enticed one individual, untutored in statistics or mathematics, on a long journey into learning statistics, then teaching it, then starting an education institution.
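As a flavor of the computer-intensive resampling methods the article refers to, here is a minimal percentile-bootstrap confidence interval for a mean. This is an illustrative sketch, not taken from the article.

```python
import numpy as np

# Percentile bootstrap: resample the data with replacement many times,
# compute the statistic each time, and read the CI off the percentiles.

def bootstrap_ci_mean(data, n_boot=5000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    data = np.asarray(data, float)
    means = np.array([rng.choice(data, size=data.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

sample = [4.1, 5.0, 5.6, 4.8, 5.2, 4.4, 5.9, 5.1]
lo, hi = bootstrap_ci_mean(sample)
```

The transparency the author describes is visible here: no distributional formula is needed, only repeated resampling of the observed data.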

  5. Physical properties of conventional and Super Slick elastomeric ligatures after intraoral use.

    PubMed

    Crawford, Nicola Louise; McCarthy, Caroline; Murphy, Tanya C; Benson, Philip Edward

    2010-01-01

    To investigate the change in the physical properties of conventional and Super Slick elastomeric ligatures after they have been in the mouth. Nine healthy volunteers took part. One orthodontic bracket was bonded to a premolar tooth in each of the four quadrants of the mouth. Two conventional and two Super Slick elastomeric ligatures were placed at random locations on either side of the mouth. The ligatures were collected after various time intervals and tested using an Instron Universal testing machine. The two outcome measures were failure load and static frictional resistance. The failure load for conventional ligatures was reduced to 67% of the original value after 6 weeks in situ. Super Slick elastomeric ligatures showed a comparable reduction after 6 weeks in situ (63% of the original value). There were no statistical differences in the static friction between conventional and Super Slick elastomerics that had been in situ for either 24 hours (P = .686) or 6 weeks (P = .416). There was a good correlation between failure load and static friction (r = .49). There were statistically significant differences in the failure loads between elastomerics that had not been placed in the mouth and those that had been in the mouth for 6 weeks. There were no differences in the static frictional forces produced by conventional and Super Slick ligatures either before or after they had been placed in the mouth. There appears to be a directly proportional relationship between failure load and static friction of elastomeric ligatures.

  6. Math Machines: Using Actuators in Physics Classes

    NASA Astrophysics Data System (ADS)

    Thomas, Frederick J.; Chaney, Robert A.; Gruesbeck, Marta

    2018-01-01

    Probeware (sensors combined with data-analysis software) is a well-established part of physics education. In engineering and technology, sensors are frequently paired with actuators—motors, heaters, buzzers, valves, color displays, medical dosing systems, and other devices that are activated by electrical signals to produce intentional physical change. This article describes how a 20-year project aimed at better integration of the STEM disciplines (science, technology, engineering and mathematics) uses brief actuator activities in physics instruction. Math Machines "actionware" includes software and hardware that convert virtually any free-form, time-dependent algebraic function into the dynamic actions of a stepper motor, servo motor, or RGB (red, green, blue) color mixer. With wheels and a platform, the stepper motor becomes LACI, a programmable vehicle. Adding a low-power laser module turns the servo motor into a programmable Pointer. Adding a gear and platform can transform the Pointer into an earthquake simulator.

  7. Pre-use anesthesia machine check; certified anesthesia technician based quality improvement audit.

    PubMed

    Al Suhaibani, Mazen; Al Malki, Assaf; Al Dosary, Saad; Al Barmawi, Hanan; Pogoku, Mahdhav

    2014-01-01

    Quality assurance requires providing a work-ready machine in the multiple operating theatres of a modern tertiary medical center in Riyadh. The aim of the following study is to maintain a high-quality environment for workers and patients in surgical operating rooms. A technician-based audit used key performance indicators to assure inspection and passing of the machine-worthiness test daily and between cases, and, in case of unexpected failure, to provide quick replacement with another ready-to-use anesthetic machine. The anesthetic machines in all operating rooms were inspected daily and continuously, passed as ready by technicians, and verified by an anesthesiologist consultant or assistant consultant. The daily records of each machine were collected and inspected by the quality improvement committee for descriptive analysis and for reporting the degree of staff compliance with daily inspection as "met" items, machines replaced during use, and overall compliance. Descriptive statistics were produced using Microsoft Excel 2003 tables and graphs of sums and percentages of the items studied in this audit. The audit found a high compliance percentage and a low rate of machine replacement, the latter reflecting quick machine switches when a machine unexpectedly became unfit for use. The authors conclude that regular inspection and running the self-check recommended by the manufacturers can help avert the hazard of anesthesia machine failure during an operation. Furthermore, the ability to replace the anesthesia machine in a quick maneuver when unexpectedly needed contributes to highly assured operative utilization of the man-machine interface in modern surgical operating rooms.

  8. Improved analyses using function datasets and statistical modeling

    Treesearch

    John S. Hogland; Nathaniel M. Anderson

    2014-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space and have limited statistical functionality and machine learning algorithms. To address this issue, we developed a new modeling framework using C# and ArcObjects and integrated that framework...

  9. Self-replicating machines in continuous space with virtual physics.

    PubMed

    Smith, Arnold; Turney, Peter; Ewaschuk, Robert

    2003-01-01

    JohnnyVon is an implementation of self-replicating machines in continuous two-dimensional space. Two types of particles drift about in a virtual liquid. The particles are automata with discrete internal states but continuous external relationships. Their internal states are governed by finite state machines, but their external relationships are governed by a simulated physics that includes Brownian motion, viscosity, and springlike attractive and repulsive forces. The particles can be assembled into patterns that can encode arbitrary strings of bits. We demonstrate that, if an arbitrary seed pattern is put in a soup of separate individual particles, the pattern will replicate by assembling the individual particles into copies of itself. We also show that, given sufficient time, a soup of separate individual particles will eventually spontaneously form self-replicating patterns. We discuss the implications of JohnnyVon for research in nanotechnology, theoretical biology, and artificial life.
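The "simulated physics" ingredients mentioned above, Brownian motion plus spring-like forces, can be illustrated with a minimal overdamped update rule. The parameters and names here are toy assumptions for illustration, not JohnnyVon's actual implementation.

```python
import numpy as np

# Toy 2-D overdamped dynamics: each particle feels a Hooke-like attraction
# toward a partner position plus a random Brownian kick each time step.

def step(positions, partner, k=0.5, noise=0.05, dt=0.1, rng=None):
    rng = rng or np.random.default_rng(0)
    spring = -k * (positions - partner)               # spring-like attraction
    kicks = noise * rng.normal(size=positions.shape)  # Brownian motion
    return positions + dt * spring + np.sqrt(dt) * kicks

rng = np.random.default_rng(1)
pos = np.array([[2.0, 0.0], [0.0, 2.0]])  # two particles
target = np.zeros((2, 2))                 # partner sites at the origin
for _ in range(200):
    pos = step(pos, target, rng=rng)
```

With weak noise the spring term dominates and the particles settle near their partners; stronger noise lets them wander, which is what allows spontaneous encounters in a soup of free particles.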

  10. Influence of various metal oxides on mechanical and physical properties of heat-cured polymethyl methacrylate denture base resins.

    PubMed

    Asar, Neset Volkan; Albayrak, Hamdi; Korkmaz, Turan; Turkyilmaz, Ilser

    2013-08-01

    To evaluate the effect of various metal oxides on the impact strength (IS), fracture toughness (FT), water sorption (WSP) and solubility (WSL) of heat-cured acrylic resin. Fifty acrylic resin specimens were fabricated for each test and divided into five groups. Group 1 was the control group, and Groups 2, 3, 4 and 5 (test groups) included a mixture of 1% TiO2 and 1% ZrO2, 2% Al2O3, 2% TiO2, and 2% ZrO2 by volume, respectively. Rectangular unnotched specimens (50 mm × 6.0 mm × 4.0 mm) were fabricated and a drop-tower impact testing machine was used to determine IS. For FT, compact test specimens were fabricated and tests were done with a universal testing machine at a cross-head speed of 5 mm/min. For WSP and WSL, disc-shaped specimens were fabricated and tests were performed in accordance with ISO 1567. ANOVA and Kruskal-Wallis tests were used for statistical analyses. IS and FT values were significantly higher and WSP and WSL values were significantly lower in test groups than in the control group (P<.05). Group 5 had significantly higher IS and FT values and significantly lower WSP values than the other groups (P<.05) and provided 40% and 30% increases in IS and FT, respectively, compared to the control group. Significantly lower WSL values were detected for Groups 2 and 5 (P<.05). Modification of heat-cured acrylic resin with metal oxides, especially with ZrO2, may be useful in preventing denture fractures and undesirable physical changes resulting from oral fluids clinically.

  11. Machine Learning Classification of Heterogeneous Fields to Estimate Physical Responses

    NASA Astrophysics Data System (ADS)

    McKenna, S. A.; Akhriev, A.; Alzate, C.; Zhuk, S.

    2017-12-01

    The promise of machine learning to enhance physics-based simulation is examined here using the transient pressure response to a pumping well in a heterogeneous aquifer. 10,000 random fields of log10 hydraulic conductivity (K) are created and conditioned on a single K measurement at the pumping well. Each K-field is used as input to a forward simulation of drawdown (pressure decline). The differential equations governing groundwater flow to the well serve as a non-linear transform of the input K-field to an output drawdown field. The results are stored and the data set is split into training and testing sets for classification. A Euclidean distance measure between any two fields is calculated and the resulting distances between all pairs of fields define a similarity matrix. Similarity matrices are calculated for both input K-fields and the resulting drawdown fields at the end of the simulation. The similarity matrices are then used as input to spectral clustering to determine groupings of similar input and output fields. Additionally, the similarity matrix is used as input to multi-dimensional scaling to visualize the clustering of fields in lower dimensional spaces. We examine the ability to cluster both input K-fields and output drawdown fields separately with the goal of identifying K-fields that create similar drawdowns and, conversely, given a set of simulated drawdown fields, identify meaningful clusters of input K-fields. Feature extraction based on statistical parametric mapping provides insight into what features of the fields drive the classification results. The final goal is to successfully classify input K-fields into the correct output class, and also, given an output drawdown field, be able to infer the correct class of input field that created it.
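The similarity-matrix construction described above can be sketched in a few lines: pairwise Euclidean distances between flattened fields, a Gaussian similarity, and the graph Laplacian whose low-eigenvalue eigenvectors form the spectral-clustering embedding. This is a generic illustration with synthetic stand-in fields, not the study's K-fields.

```python
import numpy as np

rng = np.random.default_rng(0)
fields = rng.normal(size=(6, 100))   # 6 flattened synthetic fields

# Pairwise Euclidean distance matrix between all field pairs
diff = fields[:, None, :] - fields[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))

# Gaussian (RBF) similarity with a median-distance bandwidth,
# then the unnormalized graph Laplacian L = degree - W
sigma = np.median(D[D > 0])
W = np.exp(-(D ** 2) / (2 * sigma ** 2))
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# Eigenvectors for the smallest eigenvalues of L give the embedding
# that spectral clustering (or multi-dimensional scaling) would use.
eigvals = np.linalg.eigvalsh(L)
```

The Laplacian is positive semi-definite with a zero eigenvalue for each connected component, which is what makes its low-lying spectrum a natural clustering signal.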

  12. Vegetation Monitoring with Gaussian Processes and Latent Force Models

    NASA Astrophysics Data System (ADS)

    Camps-Valls, Gustau; Svendsen, Daniel; Martino, Luca; Campos, Manuel; Luengo, David

    2017-04-01

    Monitoring vegetation by biophysical parameter retrieval from Earth observation data is a challenging problem in which machine learning is currently a key player. Neural networks, kernel methods, and Gaussian Process (GP) regression have excelled in parameter retrieval tasks at both local and global scales. GP regression is based on solid Bayesian statistics, yields efficient and accurate parameter estimates, and provides interesting advantages over competing machine learning approaches, such as confidence intervals. However, GP models are hampered by a lack of interpretability, which has prevented their widespread adoption by a larger community. In this presentation we summarize some of our latest developments to address this issue. We review the main characteristics of GPs and their advantages in standard vegetation-monitoring applications. Then, three advanced GP models are introduced. First, we derive sensitivity maps for the GP predictive function, which allow us to obtain feature rankings from the model and to assess the influence of examples on the solution. Second, we introduce a Joint GP (JGP) model that combines in situ measurements and simulated radiative transfer data in a single GP model. JGP regression provides more sensible confidence intervals for the predictions, respects the physics of the underlying processes, and allows for transferability across time and space. Finally, a latent force model (LFM) for GP modeling is presented that encodes ordinary differential equations to blend data-driven modeling with physical models of the system. The LFM performs multi-output regression, adapts to the signal characteristics, is able to cope with missing data in the time series, and provides explicit latent functions that allow system analysis and evaluation. Empirical evidence of the performance of these models is presented through illustrative examples.
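The confidence intervals highlighted above come directly from the GP posterior. A minimal numpy sketch of GP regression with an RBF kernel is shown below; kernel hyperparameters are fixed by hand here, whereas real retrieval work would fit them to the data.

```python
import numpy as np

# Minimal GP regression: posterior mean and pointwise standard deviation.

def rbf(a, b, length=1.0):
    # Squared-exponential kernel between 1-D input arrays
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6, length=1.0):
    K = rbf(x_train, x_train, length) + noise * np.eye(x_train.size)
    Ks = rbf(x_test, x_train, length)
    Kss = rbf(x_test, x_test, length)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
mean, std = gp_predict(x, y, np.array([1.5]))   # prediction with uncertainty
mean_t, std_t = gp_predict(x, y, x[:1])         # at a training input
```

A 95% confidence interval is then simply mean ± 1.96·std, which is the "interesting advantage" GPs offer over point-prediction regressors.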

  13. Review of Statistical Learning Methods in Integrated Omics Studies (An Integrated Information Science).

    PubMed

    Zeng, Irene Sui Lan; Lumley, Thomas

    2018-01-01

    Integrated omics is becoming a new channel for investigating the complex molecular system in modern biological science and sets a foundation for systematic learning for precision medicine. The statistical/machine learning methods that have emerged in the past decade for integrated omics are not only innovative but also multidisciplinary, with integrated knowledge in biology, medicine, statistics, machine learning, and artificial intelligence. Here, we review the nontrivial classes of learning methods from the statistical aspects and streamline these learning methods within the statistical learning framework. The intriguing findings from the review are that the methods used are generalizable to other disciplines with complex systematic structure, and that integrated omics is part of an integrated information science which has collated and integrated different types of information for inferences and decision making. We review the statistical learning methods of exploratory and supervised learning from 42 publications. We also discuss the strengths and limitations of the extended principal component analysis, cluster analysis, network analysis, and regression methods. Statistical techniques such as penalization for sparsity induction when there are fewer observations than features, and the use of a Bayesian approach when there is prior knowledge to be integrated, are also included in the commentary. For completeness of the review, a table of currently available software and packages from 23 publications for omics is summarized in the appendix.

  14. Statistical Learning Analysis in Neuroscience: Aiming for Transparency

    PubMed Central

    Hanke, Michael; Halchenko, Yaroslav O.; Haxby, James V.; Pollmann, Stefan

    2009-01-01

    Encouraged by a rise of reciprocal interest between the machine learning and neuroscience communities, several recent studies have demonstrated the explanatory power of statistical learning techniques for the analysis of neural data. In order to facilitate a wider adoption of these methods, neuroscientific research needs to ensure a maximum of transparency to allow for comprehensive evaluation of the employed procedures. We argue that such transparency requires “neuroscience-aware” technology for the performance of multivariate pattern analyses of neural data that can be documented in a comprehensive, yet comprehensible way. Recently, we introduced PyMVPA, a specialized Python framework for machine learning based data analysis that addresses this demand. Here, we review its features and applicability to various neural data modalities. PMID:20582270

  15. Study of Man-Machine Communications Systems for Disabled Persons (The Handicapped). Volume VII. Final Report.

    ERIC Educational Resources Information Center

    Kafafian, Haig

    Teaching instructions, lesson plans, and exercises are provided for severely physically and/or neurologically handicapped persons learning to use the Cybertype electric writing machine with a tongue-body keyboard. The keyboard, which has eight double-throw toggle switches and a three-position state-selector switch, is designed to be used by…

  16. The Compound Atwood Machine Problem

    ERIC Educational Resources Information Center

    Coelho, R. Lopes

    2017-01-01

    The present paper accounts for progress in physics teaching in the sense that a problem previously closed to students for being too difficult is gained for the high school curriculum. This problem is the compound Atwood machine with three bodies. Its introduction into high school classes is based on a recent study on the weighing of an…
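For reference, the standard textbook result for the compound Atwood machine with three bodies (m1 hanging opposite a movable pulley that carries m2 and m3, with massless pulleys and strings) can be coded directly. This is the general physics result, not code from the paper.

```python
# Compound Atwood machine: acceleration of m1 (positive downward), from the
# standard Lagrangian treatment with massless, frictionless pulleys:
#   a1 = g * [m1*(m2 + m3) - 4*m2*m3] / [m1*(m2 + m3) + 4*m2*m3]

def atwood3_acceleration(m1, m2, m3, g=9.81):
    """Downward acceleration of m1 in the three-body compound Atwood machine."""
    num = m1 * (m2 + m3) - 4.0 * m2 * m3
    den = m1 * (m2 + m3) + 4.0 * m2 * m3
    return g * num / den

# Sanity checks: m1 = 2, m2 = m3 = 1 balances the system (a1 = 0), and
# m1 = m2 = m3 = 1 reduces to a simple Atwood machine with masses 1 and 2,
# giving a1 = -g/3 (m1 accelerates upward).
```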

  17. Applications of Support Vector Machines In Chemo And Bioinformatics

    NASA Astrophysics Data System (ADS)

    Jayaraman, V. K.; Sundararajan, V.

    2010-10-01

    Conventional linear and nonlinear tools for classification, regression, and data-driven modeling are rapidly being replaced by newer techniques and tools based on artificial intelligence and machine learning. While linear techniques are not applicable to inherently nonlinear problems, the newer methods serve as attractive alternatives for solving real-life problems. Support Vector Machine (SVM) classifiers are a set of universal feed-forward-network-based classification algorithms formulated from statistical learning theory and the structural risk minimization principle. SVM regression closely follows the classification methodology. In this work, recent applications of SVM in chemo- and bioinformatics are described with suitable illustrative examples.

  18. Design features and results from fatigue reliability research machines.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Kececioglu, D.; Mcconnell, J. B.

    1971-01-01

    The design, fabrication, development, operation, calibration and results from reversed bending combined with steady torque fatigue research machines are presented. Fifteen-centimeter long, notched, SAE 4340 steel specimens are subjected to various combinations of these stresses and cycled to failure. Failure occurs when the crack in the notch passes through the specimen automatically shutting down the test machine. These cycles-to-failure data are statistically analyzed to develop a probabilistic S-N diagram. These diagrams have many uses; a rotating component design example given in the literature shows that minimum size and weight for a specified number of cycles and reliability can be calculated using these diagrams.
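One common way to analyze cycles-to-failure data statistically, in the spirit of the probabilistic S-N diagram mentioned above, is a two-parameter Weibull fit via median-rank regression. The method and the synthetic data below are illustrative assumptions, not the paper's actual analysis.

```python
import numpy as np

# Weibull probability-plot fit: linearize the CDF,
#   ln(-ln(1 - F)) = beta*ln(x) - beta*ln(eta),
# estimate F by median ranks, and fit a straight line.

def weibull_fit(cycles):
    n = len(cycles)
    x = np.sort(np.asarray(cycles, float))
    ranks = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank estimate of F
    X, Y = np.log(x), np.log(-np.log(1.0 - ranks))
    beta, intercept = np.polyfit(X, Y, 1)
    eta = np.exp(-intercept / beta)
    return beta, eta   # shape (scatter) and scale (characteristic life)

# Synthetic cycles-to-failure with known shape 2 and characteristic life 1e5
rng = np.random.default_rng(0)
samples = 1e5 * rng.weibull(2.0, size=200)
beta, eta = weibull_fit(samples)
```

Repeating such a fit at each stress level gives the reliability-versus-cycles curves that make up a probabilistic S-N diagram.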

  19. Energy landscapes for machine learning

    NASA Astrophysics Data System (ADS)

    Ballard, Andrew J.; Das, Ritankar; Martiniani, Stefano; Mehta, Dhagash; Sagun, Levent; Stevenson, Jacob D.; Wales, David J.

    Machine learning techniques are being increasingly used as flexible non-linear fitting and prediction tools in the physical sciences. Fitting functions that exhibit multiple solutions as local minima can be analysed in terms of the corresponding machine learning landscape. Methods to explore and visualise molecular potential energy landscapes can be applied to these machine learning landscapes to gain new insight into the solution space involved in training and the nature of the corresponding predictions. In particular, we can define quantities analogous to molecular structure, thermodynamics, and kinetics, and relate these emergent properties to the structure of the underlying landscape. This Perspective aims to describe these analogies with examples from recent applications, and suggest avenues for new interdisciplinary research.

  20. Physical dimensions, torsional performance, bending properties, and metallurgical characteristics of rotary endodontic instruments. VI. Canal Master drills.

    PubMed

    Luebke, N H; Brantley, W A; Sabri, Z I; Luebke, F L; Lausten, L L

    1995-05-01

    A laboratory study was performed on machine-driven Canal Master drills to determine their physical dimensions, torsional performance, bending properties, and metallurgical characteristics in fracture. Physical dimensions were determined for each of the available sizes (#50 to #100) of Canal Master drills from the manufacturer that distributes these instruments in the United States. Samples were also tested in clockwise torsion using a Maillefer memocouple. Bending properties of cantilever specimens were measured with a Tinius Olsen stiffness tester. Bending fatigue testing was performed on a unique laboratory apparatus. Scanning electron microscope examination confirmed visual observations that the stainless steel Canal Master drills exhibited ductile torsional fracture. This study is part of a continuing investigation to establish standards for all machine-driven rotary endodontic instruments.

  1. Resource Letter AFHEP-1: Accelerators for the Future of High-Energy Physics

    NASA Astrophysics Data System (ADS)

    Barletta, William A.

    2012-02-01

This Resource Letter provides a guide to literature concerning the development of accelerators for the future of high-energy physics. Research articles, books, and Internet resources are cited for the following topics: motivation for future accelerators, present accelerators for high-energy physics, possible future machines, and laboratory and collaboration websites.

  2. Review of EuCARD project on accelerator infrastructure in Europe

    NASA Astrophysics Data System (ADS)

    Romaniuk, Ryszard S.

    2013-01-01

The aim of large infrastructural and research programs (such as the pan-European Framework Programs) and of the individual projects realized within them is to structure the European Research Area (ERA) so that it is competitive with the world's leaders. One of these projects is EuCARD (European Coordination of Accelerator Research and Development), whose aim is to structure and modernize the accelerator research infrastructure, including accelerators for large free-electron laser machines. This article surveys the development of EuCARD between the annual meeting in Warsaw in April 2012 and the steering-committee meeting in Uppsala in December 2012. The background to all these efforts is the achievements of the LHC machine and its associated detectors in the race for new physics. The LHC operates in p-p, Pb-p and Pb-Pb modes (protons and lead ions). The recent discovery by the LHC of a Higgs-like boson has started vivid debates on the further potential and future of this machine. The periodic EuCARD conferences, workshops and meetings concern the building of research infrastructure, including advanced photonic and electronic systems for servicing large high-energy physics experiments. A few basic groups of such systems are debated: measurement and control networks of large geographical extent, multichannel systems for acquiring large amounts of metrological data, and precision photonic networks for distributing reference time, frequency and phase. The aim of the discussions is not only to summarize the current status but also to make plans and prepare practically for building new infrastructures. Accelerator science and technology is a key enabler of developments in particle physics and photon physics, as well as of applications in medicine and industry, and is intensely developed in all developed nations and regions of the world. The EuCARD project contains many subjects related directly or indirectly to photon physics and photonics, as well as optoelectronics, electronics and their integration with large research infrastructure.

  3. Evaluation of Cepstrum Algorithm with Impact Seeded Fault Data of Helicopter Oil Cooler Fan Bearings and Machine Fault Simulator Data

    DTIC Science & Technology

    2013-02-01

of a bearing must be put into practice. There are many potential methods, the most traditional being the use of statistical time-domain features...accelerate degradation to test multiple bearings to gain statistical relevance and extrapolate results to scale for field conditions. Temperature...as time statistics, frequency estimation to improve the fault frequency detection. For future investigations, one can further explore the

  4. Improving Statistical Machine Translation Through N-best List Re-ranking and Optimization

    DTIC Science & Technology

    2014-03-27

    of Master of Science in Cyber Operations Jordan S. Keefer, B.S.C.S. Second Lieutenant, USAF March 2014 DISTRIBUTION STATEMENT A: APPROVED FOR PUBLIC...Atlantic Trade Organization NIST National Institute of Standards and Technology NL natural language NSF National Science Foundation ix Acronym Definition...the machine translation problem. In 1964 the Director of the National Science Foundation (NSF), 4 Dr. Leland Haworth, commissioned a research team to

  5. The influence of maintenance quality of hemodialysis machines on hemodialysis efficiency.

    PubMed

    Azar, Ahmad Taher

    2009-01-01

Several studies suggest that there is a correlation between the dose of dialysis and machine maintenance. However, in spite of current practice, there are conflicting reports regarding the relationship between the dose of dialysis or patient outcome and machine maintenance. In order to evaluate the impact of hemodialysis machine maintenance on dialysis adequacy (Kt/V) and session performance, data were processed on 134 patients on 3-times-per-week dialysis regimens by dividing both the patients and the hemodialysis machines into four groups according to the machines' year of installation. The equilibrated dialysis dose (eq Kt/V), urea reduction ratio (URR) and overall equipment effectiveness (OEE) were calculated in each group to show the effect of hemodialysis machine efficiency on overall session performance. The average working time per machine per month was 270 hours. The cumulative number of hours according to the year of installation was: 26,122 hours for machines installed in 1998; 21,596 hours for machines installed in 1999; 8362 hours for those installed in 2003; and 2486 hours for those installed in 2005. The mean time between failures (MTBF) was 1.8, 2.1, 4.2 and 6 months for machines installed in 1999, 1998, 2003 and 2005, respectively. Statistical analysis demonstrated that the dialysis dose (eq Kt/V) and URR increased as the overall equipment effectiveness (OEE) increased with regular maintenance procedures. Maintenance has become one of the most expedient approaches to guaranteeing high machine dependability. The efficiency of the dialysis machine is relevant to assuring proper dialysis adequacy.
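The two maintenance metrics invoked above can be illustrated with their textbook definitions; a minimal sketch, assuming the standard formulas for MTBF and OEE (the study's exact computation is not given, and the failure count below is hypothetical):

```python
# Textbook definitions of MTBF and OEE; the failure count is hypothetical,
# only the 2486 h / 270 h-per-month figures come from the abstract.

def mtbf(operating_months: float, n_failures: int) -> float:
    """Mean time between failures, in months."""
    return operating_months / n_failures

def oee(availability: float, performance: float, quality: float) -> float:
    """Overall equipment effectiveness: product of three ratios in [0, 1]."""
    return availability * performance * quality

months_2005 = 2486 / 270                  # machines installed in 2005: ~9.2 months
print(round(mtbf(months_2005, 2), 1))     # hypothetical 2 failures -> 4.6 months
print(round(oee(0.95, 0.90, 0.98), 3))    # -> 0.838
```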

  6. Assessing a Novel Method to Reduce Anesthesia Machine Contamination: A Prospective, Observational Trial.

    PubMed

    Biddle, Chuck J; George-Gay, Beverly; Prasanna, Praveen; Hill, Emily M; Davis, Thomas C; Verhulst, Brad

    2018-01-01

Anesthesia machines are known reservoirs of bacterial species, potentially contributing to healthcare-associated infections (HAIs). An inexpensive, disposable, nonpermeable, transparent anesthesia machine wrap (AMW) may reduce microbial contamination of the anesthesia machine. This study quantified the density and diversity of bacterial species found on anesthesia machines after terminal cleaning and between cases during actual anesthesia care to assess the impact of the AMW. We hypothesized a reduced bioburden with the use of the AMW. In a prospective, experimental research design, the AMW was used in 11 surgical cases (intervention group) and not used in 11 control surgical cases. Cases were consecutively assigned to general surgical operating rooms. Seven frequently touched and difficult-to-disinfect "hot spots" were cultured on each machine preceding and following each case. The density and diversity of cultured colony-forming units (CFUs) on the covered and uncovered machines were compared using the Wilcoxon signed-rank test and Student's t-tests. There was a statistically significant reduction in CFU density and diversity when the AMW was employed. The protective effect of the AMW during regular anesthetic care provides a reliable and low-cost method to minimize the transmission of pathogens across patients and potentially reduces HAIs.
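The paired comparison described above can be sketched with SciPy's Wilcoxon signed-rank test; the CFU counts below are invented for illustration and are not the study's data:

```python
# Sketch of a Wilcoxon signed-rank comparison of covered vs uncovered
# machines. All CFU counts are hypothetical.
from scipy.stats import wilcoxon

# Hypothetical post-case CFU densities for 11 covered and 11 uncovered cases
covered   = [3, 5, 2, 4, 1, 6, 3, 2, 4, 5, 3]
uncovered = [4, 7, 5, 8, 6, 12, 10, 10, 13, 15, 14]

stat, p = wilcoxon(covered, uncovered)
print(p < 0.05)  # a low p-value would indicate a significant reduction
```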

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keller, J; Hardin, M; Giaddui, T

Purpose: To test whether unified vendor-specified beam conformance for matched machines implies volumetric modulated arc radiotherapy (VMAT) delivery consistency. Methods: Twenty-two identical patient QA plans, eleven 6 MV and eleven 15 MV, were delivered to the Delta4 (ScandiDos, Uppsala, Sweden) on two Varian TrueBeam matched machines. Sixteen patient QA plans, nine 6 MV and seven 10 MV, were delivered to the Delta4 on two Elekta Agility matched machines. The percent dose deviation (%DDev), distance-to-agreement (DTA), and gamma analysis (γ) were collected for all plans, and the differences in measurements between matched machines were tabulated. A paired t-test with an alpha of 0.05 determined statistical significance. Power (P) to detect a 5% difference was calculated; all data sets except the Elekta %DDev sets were strong, with power above 0.85. Results: The average differences for Varian machines (%DDev, DTA, and γ) were 6.4%, 1.6% and 2.7% for 6 MV, respectively, and 8.0%, 0.6%, and 2.5% for 15 MV. The average differences for matched Elekta machines (%DDev, DTA, and γ) were 10.2%, 0.6% and 0.9% for 6 MV, respectively, and 7.0%, 1.9%, and 2.8% for 10 MV. For Varian, a paired t-test shows the %DDev difference is significant for both 6 MV and 15 MV (6 MV: p = 0.019, P = 0.96; 15 MV: p = 0.0003, P = 0.86). Differences in DTA are insignificant for both 6 MV and 15 MV (6 MV: p = 0.063, P = 1; 15 MV: p = 0.907, P = 1). Varian differences in gamma are significant for both energies (6 MV: p = 0.025, P = 0.99; 15 MV: p = 0.013, P = 1). For Elekta, a paired t-test shows the difference in %DDev is significant for 6 MV but not 10 MV (6 MV: p = 0.00065, P = 0.68; 10 MV: p = 0.262, P = 0.39). Differences in DTA are statistically insignificant (6 MV: p = 0.803, P = 1; 10 MV: p = 0.269, P = 1). Elekta differences in gamma are significant for 10 MV only (6 MV: p = 0.094, P = 1; 10 MV: p = 0.011, P = 1).
Conclusion: These results show that vendor-specified beam conformance across machines does not ensure equivalent patient-specific QA pass rates. Gamma differences are statistically significant in three of the four comparisons for the two pairs of vendor-matched machines.
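A minimal sketch of the paired t-test used to compare matched machines, assuming hypothetical gamma pass rates rather than the study's measurements:

```python
# Paired t-test on the same QA plans delivered to two matched machines.
# The gamma pass rates (%) below are hypothetical, not the study's data.
from scipy.stats import ttest_rel

machine_a = [98.1, 97.4, 99.0, 96.8, 98.5, 97.9, 98.8, 97.2, 98.0, 97.6, 98.3]
machine_b = [95.2, 94.8, 96.1, 93.9, 95.5, 95.0, 96.0, 94.2, 95.1, 94.7, 95.6]

t_stat, p_value = ttest_rel(machine_a, machine_b)
print(p_value < 0.05)  # True here: a consistent offset between the machines
```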

  8. Volumetric Verification of Multiaxis Machine Tool Using Laser Tracker

    PubMed Central

    Aguilar, Juan José

    2014-01-01

This paper aims to present a method of volumetric verification for machine tools with linear and rotary axes using a laser tracker. Beyond a method for a particular machine, it presents a methodology that can be used for any machine type. The paper presents the schema and kinematic model of a machine with three axes of movement, two linear and one rotary, including the measurement system and the nominal rotation matrix of the rotary axis. Using this, the volumetric error of the machine tool is obtained, and nonlinear optimization techniques are employed to improve its accuracy. The verification provides a mathematical, not physical, compensation, in less time than other verification methods, by means of the indirect measurement of the machine's geometric errors from the linear and rotary axes. The paper presents an extensive study of the appropriateness and drawbacks of the regression function employed depending on the types of movement of the axes of any machine. In the same way, strengths and weaknesses of measurement methods and optimization techniques depending on the space available to place the measurement system are presented. These studies provide the most appropriate strategies to verify each machine tool, taking into consideration its configuration and its available work space. PMID:25202744
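The nonlinear-optimization step can be sketched with `scipy.optimize.least_squares`; the one-axis linear error model (scale and offset) and the data below are illustrative assumptions, far simpler than the paper's full kinematic model:

```python
# Fit error-model parameters so predicted positions match (simulated)
# laser-tracker measurements. The linear scale/offset model and all
# numbers here are illustrative, not the paper's kinematic model.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x_cmd = np.linspace(0.0, 500.0, 50)              # commanded axis positions (mm)
true_scale, true_offset = 1.0002, 0.05           # hidden machine errors
x_meas = true_scale * x_cmd + true_offset + rng.normal(0, 0.002, x_cmd.size)

def residuals(params):
    scale, offset = params
    return (scale * x_cmd + offset) - x_meas     # model minus measurement

fit = least_squares(residuals, x0=[1.0, 0.0])
scale_hat, offset_hat = fit.x
print(abs(scale_hat - true_scale) < 1e-4)        # scale error recovered
```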

  9. Image quality assessment and medical physics evaluation of different portable dental X-ray units.

    PubMed

    Pittayapat, Pisha; Oliveira-Santos, Christiano; Thevissen, Patrick; Michielsen, Koen; Bergans, Niki; Willems, Guy; Debruyckere, Deborah; Jacobs, Reinhilde

    2010-09-10

Recently developed portable dental X-ray units increase the mobility of forensic odontologists and allow more efficient X-ray work in a disaster field, especially when used in combination with digital sensors. This type of machine might also have potential for application in remote areas, military and humanitarian missions, dental care of patients with mobility limitations, and imaging in operating rooms. The aims were to evaluate the radiographic image quality acquired by three portable X-ray devices in combination with four image receptors and to evaluate their medical physics parameters. Images of five samples consisting of four teeth and one formalin-fixed mandible were acquired by one conventional wall-mounted X-ray unit, MinRay 60/70 kVp, used as a clinical standard, and three portable dental X-ray devices, AnyRay 60 kVp, Nomad 60 kVp and Rextar 70 kVp, in combination with a phosphor image plate (PSP), a CCD, or a CMOS sensor. Three observers evaluated the images for standard image quality as well as forensic diagnostic quality on a 4-point rating scale. Furthermore, all machines underwent tests for occupational as well as patient dosimetry. Statistical analysis showed good imaging quality for all systems, with the combination of Nomad and PSP yielding the best score. A significant difference in image quality between the combinations of the four X-ray devices and four sensors was established (p<0.05). For patient safety, the exposure rate was determined; the exit dose rates for MinRay at 60 kVp, MinRay at 70 kVp, AnyRay, Nomad and Rextar were 3.4 mGy/s, 4.5 mGy/s, 13.5 mGy/s, 3.8 mGy/s and 2.6 mGy/s, respectively. The kVp of the AnyRay system was the most stable, with a ripple of 3.7%. Short-term variations in the tube output of all the devices were less than 10%. AnyRay presented a higher estimated effective dose than the other machines. Occupational dosimetry showed that doses at the operator's hand were lowest with protective shielding (Nomad: 0.1 microGy). They were also low when using remote control (distance > 1 m: Rextar < 0.2 microGy, MinRay < 0.1 microGy). The present study demonstrated the feasibility of the three portable X-ray systems for specific indications, based on acceptable image quality and sufficient accuracy of the machines, following the standard guidelines for radiation hygiene. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

  10. A Character Level Based and Word Level Based Approach for Chinese-Vietnamese Machine Translation.

    PubMed

    Tran, Phuoc; Dinh, Dien; Nguyen, Hien T

    2016-01-01

Chinese and Vietnamese are both isolating languages in which words are not delimited by spaces. In machine translation, word segmentation is often done first when translating from Chinese or Vietnamese into other languages (typically English) and vice versa. However, whether words should be segmented is a matter for consideration when translating between two languages in which spaces are not used between words, such as Chinese and Vietnamese. Since Chinese-Vietnamese is a low-resource language pair, the sparse-data problem is evident in translation systems for this pair, which makes the segmentation decision all the more important. In this paper, we propose a new method for translating Chinese to Vietnamese based on a combination of the advantages of character-level and word-level translation. A hybrid approach that combines statistics and rules is used to translate at the word level, while a statistical translation is used at the character level. The experimental results showed that our method improved the performance of machine translation over that of character-level or word-level translation alone.

  11. Effect of overglazed and polished surface finishes on the compressive fracture strength of machinable ceramic materials.

    PubMed

    Asai, Tetsuya; Kazama, Ryunosuke; Fukushima, Masayoshi; Okiji, Takashi

    2010-11-01

    Controversy prevails over the effect of overglazing on the fracture strength of ceramic materials. Therefore, the effects of different surface finishes on the compressive fracture strength of machinable ceramic materials were investigated in this study. Plates prepared from four commercial brands of ceramic materials were either surface-polished or overglazed (n=10 per ceramic material for each surface finish), and bonded to flat surfaces of human dentin using a resin cement. Loads at failure were determined and statistically analyzed using two-way ANOVA and Bonferroni test. Although no statistical differences in load value were detected between polished and overglazed groups (p>0.05), the fracture load of Vita Mark II was significantly lower than those of ProCAD and IPS Empress CAD, whereas that of IPS e.max CAD was significantly higher than the latter two ceramic materials (p<0.05). It was concluded that overglazed and polished surfaces produced similar compressive fracture strengths irrespective of the machinable ceramic material tested, and that fracture strength was material-dependent.
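The multiple-comparison step of such an analysis can be sketched as follows; the paper used a two-way ANOVA with a Bonferroni test, while this sketch shows only Bonferroni-corrected pairwise t-tests, on invented load values for three of the materials:

```python
# Bonferroni-corrected pairwise comparisons of fracture loads.
# The load values (N) are invented for illustration; the paper's actual
# analysis was a two-way ANOVA followed by a Bonferroni test.
from itertools import combinations
from scipy.stats import ttest_ind

loads = {  # hypothetical fracture loads (N), n = 4 per material
    "Vita Mark II":  [780, 810, 795, 802],
    "ProCAD":        [905, 932, 918, 921],
    "IPS e.max CAD": [1180, 1210, 1195, 1202],
}

pairs = list(combinations(loads, 2))
alpha_corrected = 0.05 / len(pairs)   # Bonferroni: divide alpha by #tests
for a, b in pairs:
    _, p = ttest_ind(loads[a], loads[b])
    print(f"{a} vs {b}: significant={p < alpha_corrected}")
```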

  12. Comparing machine learning and logistic regression methods for predicting hypertension using a combination of gene expression and next-generation sequencing data.

    PubMed

    Held, Elizabeth; Cape, Joshua; Tintle, Nathan

    2016-01-01

    Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.
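The method comparison above can be sketched with scikit-learn, assuming synthetic classification data in place of the workshop's genotype and expression data:

```python
# Compare a linear SVM, a radial (RBF) SVM, and logistic regression by
# cross-validated accuracy. Synthetic data stands in for the workshop's
# genotype/expression predictors; this is not the authors' pipeline.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=50, n_informative=8,
                           random_state=0)

models = {
    "linear SVM": SVC(kernel="linear"),
    "radial SVM": SVC(kernel="rbf"),
    "logistic regression": LogisticRegression(max_iter=1000),
}
scores = {}
for name, model in models.items():
    scores[name] = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {scores[name]:.3f}")
```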

  13. Molecular machines open cell membranes

    NASA Astrophysics Data System (ADS)

    García-López, Víctor; Chen, Fang; Nilewski, Lizanne G.; Duret, Guillaume; Aliyan, Amir; Kolomeisky, Anatoly B.; Robinson, Jacob T.; Wang, Gufeng; Pal, Robert; Tour, James M.

    2017-08-01

    Beyond the more common chemical delivery strategies, several physical techniques are used to open the lipid bilayers of cellular membranes. These include using electric and magnetic fields, temperature, ultrasound or light to introduce compounds into cells, to release molecular species from cells or to selectively induce programmed cell death (apoptosis) or uncontrolled cell death (necrosis). More recently, molecular motors and switches that can change their conformation in a controlled manner in response to external stimuli have been used to produce mechanical actions on tissue for biomedical applications. Here we show that molecular machines can drill through cellular bilayers using their molecular-scale actuation, specifically nanomechanical action. Upon physical adsorption of the molecular motors onto lipid bilayers and subsequent activation of the motors using ultraviolet light, holes are drilled in the cell membranes. We designed molecular motors and complementary experimental protocols that use nanomechanical action to induce the diffusion of chemical species out of synthetic vesicles, to enhance the diffusion of traceable molecular machines into and within live cells, to induce necrosis and to introduce chemical species into live cells. We also show that, by using molecular machines that bear short peptide addends, nanomechanical action can selectively target specific cell-surface recognition sites. Beyond the in vitro applications demonstrated here, we expect that molecular machines could also be used in vivo, especially as their design progresses to allow two-photon, near-infrared and radio-frequency activation.

  14. Status and Roadmap of CernVM

    NASA Astrophysics Data System (ADS)

    Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.

    2015-12-01

Cloud resources nowadays contribute an essential share of the resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In either case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. Since version 3, the CernVM virtual machine is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype into a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of the long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on upcoming developments, including support for Scientific Linux 7, the use of container virtualization such as that provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.

  15. Molecular machines open cell membranes.

    PubMed

    García-López, Víctor; Chen, Fang; Nilewski, Lizanne G; Duret, Guillaume; Aliyan, Amir; Kolomeisky, Anatoly B; Robinson, Jacob T; Wang, Gufeng; Pal, Robert; Tour, James M

    2017-08-30

    Beyond the more common chemical delivery strategies, several physical techniques are used to open the lipid bilayers of cellular membranes. These include using electric and magnetic fields, temperature, ultrasound or light to introduce compounds into cells, to release molecular species from cells or to selectively induce programmed cell death (apoptosis) or uncontrolled cell death (necrosis). More recently, molecular motors and switches that can change their conformation in a controlled manner in response to external stimuli have been used to produce mechanical actions on tissue for biomedical applications. Here we show that molecular machines can drill through cellular bilayers using their molecular-scale actuation, specifically nanomechanical action. Upon physical adsorption of the molecular motors onto lipid bilayers and subsequent activation of the motors using ultraviolet light, holes are drilled in the cell membranes. We designed molecular motors and complementary experimental protocols that use nanomechanical action to induce the diffusion of chemical species out of synthetic vesicles, to enhance the diffusion of traceable molecular machines into and within live cells, to induce necrosis and to introduce chemical species into live cells. We also show that, by using molecular machines that bear short peptide addends, nanomechanical action can selectively target specific cell-surface recognition sites. Beyond the in vitro applications demonstrated here, we expect that molecular machines could also be used in vivo, especially as their design progresses to allow two-photon, near-infrared and radio-frequency activation.

  16. Reducing lumber thickness variation using real-time statistical process control

    Treesearch

    Thomas M. Young; Brian H. Bond; Jan Wiedenbeck

    2002-01-01

    A technology feasibility study for reducing lumber thickness variation was conducted from April 2001 until March 2002 at two sawmills located in the southern U.S. A real-time statistical process control (SPC) system was developed that featured Wonderware human machine interface technology (HMI) with distributed real-time control charts for all sawing centers and...

  17. More physics in the laundromat

    NASA Astrophysics Data System (ADS)

    Denny, Mark

    2010-12-01

    The physics of a washing machine spin cycle is extended to include the spin-up and spin-down phases. We show that, for realistic parameters, an adiabatic approximation applies, and thus the familiar forced, damped harmonic oscillator analysis can be applied to these phases.
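The forced, damped harmonic oscillator invoked above can be sketched numerically; the parameter values are illustrative, not the paper's washing-machine numbers:

```python
# Integrate x'' + 2*zeta*w0*x' + w0**2 * x = f0*cos(w*t) with a
# semi-implicit Euler step and check the steady-state amplitude against
# the analytic resonant value f0 / (2*zeta*w0**2). Parameters are
# illustrative, not taken from the paper.
import math

w0, zeta, f0, w = 2.0, 0.1, 1.0, 2.0       # resonant drive: w == w0
dt, t_end = 1e-3, 200.0
x, v, amplitude, t = 0.0, 0.0, 0.0, 0.0
while t < t_end:
    a = f0 * math.cos(w * t) - 2 * zeta * w0 * v - w0 ** 2 * x
    v += a * dt                            # semi-implicit Euler step
    x += v * dt
    if t > 80.0:                           # after transients have decayed
        amplitude = max(amplitude, abs(x))
    t += dt

print(abs(amplitude - f0 / (2 * zeta * w0 ** 2)) < 0.05)  # -> 1.25 predicted
```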

  18. An Inexpensive Method to use an Ocean Optics Spectrometer for Telescopic Spectroscopy

    NASA Astrophysics Data System (ADS)

    Joel, Berger; Sugerman, B. E. K.

    2012-01-01

We present a relatively inexpensive method for using an Ocean Optics spectrometer for telescopic spectroscopy. The Ocean Optics spectrometer is a highly sensitive, affordable and versatile fiber-optic spectrometer that can be used in a variety of physics and astronomy classes and labs. With about $275 and a small amount of machining, this spectrometer can be easily adapted to any telescope that accepts 2" eyepieces. We provide the equipment list, machining specs, and calibration process, as well as sample stellar spectra. This work was supported by the Department of Physics and Astronomy and the Office of the Provost of Goucher College.

  19. Neural activity during affect labeling predicts expressive writing effects on well-being: GLM and SVM approaches

    PubMed Central

    Memarian, Negar; Torre, Jared B.; Haltom, Kate E.; Stanton, Annette L.

    2017-01-01

Affect labeling (putting feelings into words) is a form of incidental emotion regulation that could underpin some benefits of expressive writing (i.e. writing about negative experiences). Here, we show that neural responses during affect labeling predicted changes in psychological and physical well-being outcome measures 3 months later. Furthermore, neural activity in specific frontal regions and the amygdala predicted those outcomes as a function of expressive writing. Using supervised learning (support vector machine regression), improvements in four measures of psychological and physical health (physical symptoms, depression, anxiety and life satisfaction) after an expressive writing intervention were predicted with an average prediction error [root mean square error (RMSE)] of 0.85%. The predictions were significantly more accurate with machine learning than with the conventional generalized linear model method (average RMSE: 1.3%). Consistent with affect labeling research, the right ventrolateral prefrontal cortex (RVLPFC) and amygdalae were top predictors of improvement in the four outcomes. Moreover, the RVLPFC and left amygdala predicted benefits due to expressive writing in the satisfaction-with-life and depression outcome measures, respectively. This study demonstrates the substantial merit of supervised machine learning for real-world outcome prediction in social and affective neuroscience. PMID:28992270
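The RMSE-percent comparison of the two approaches can be sketched as follows; normalizing the RMSE by the observed range is one common convention and an assumption here, and both prediction vectors are invented:

```python
# RMSE as a percentage, used above to compare SVM regression against a
# GLM. The normalization by observed range is an assumed convention;
# observations and predictions are invented.
import numpy as np

def rmse_percent(observed, predicted):
    """Root-mean-square error as a percentage of the observed range."""
    observed = np.asarray(observed, float)
    predicted = np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return 100.0 * rmse / (observed.max() - observed.min())

observed = [10.0, 14.0, 9.0, 16.0, 12.0, 20.0]
svm_pred = [10.1, 13.8, 9.2, 16.1, 11.9, 19.8]   # tight fit
glm_pred = [11.0, 13.0, 10.0, 15.0, 13.0, 19.0]  # looser fit
print(rmse_percent(observed, svm_pred) < rmse_percent(observed, glm_pred))
```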

  20. Statistical Analysis of NAS Parallel Benchmarks and LINPACK Results

    NASA Technical Reports Server (NTRS)

    Meuer, Hans-Werner; Simon, Horst D.; Strohmeier, Erich; Lasinski, T. A. (Technical Monitor)

    1994-01-01

In the last three years extensive performance data have been reported for parallel machines, based both on the NAS Parallel Benchmarks and on LINPACK. In this study we have used the reported benchmark results and performed a number of statistical experiments using factor, cluster, and regression analyses. In addition to the performance results of LINPACK and the eight NAS parallel benchmarks, we have also included the peak performance of each machine and the LINPACK n and n(sub 1/2) values. Some of the results and observations can be summarized as follows: 1) All benchmarks are strongly correlated with peak performance. 2) LINPACK and EP each have a unique signature. 3) The remaining NPB can be grouped into three groups: (CG and IS), (LU and SP), and (MG, FT, and BT). Hence three (or four, with EP) benchmarks are sufficient to characterize overall NPB performance. Our poster presentation will follow a standard poster format and will present the data of our statistical analysis in detail.
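The correlation part of such an analysis can be sketched with NumPy; the benchmark and peak-performance numbers below are invented stand-ins for the reported data:

```python
# Pearson correlation of per-machine benchmark results against peak
# performance. All numbers are invented stand-ins, not the reported
# NPB/LINPACK results.
import numpy as np

# Rows: machines; columns: LINPACK, EP, MG results (Gflop/s, hypothetical)
results = np.array([
    [ 10.0,  1.2,  3.0],
    [ 25.0,  2.9,  7.8],
    [ 60.0,  7.1, 18.5],
    [120.0, 13.8, 37.0],
])
peak = np.array([15.0, 35.0, 80.0, 160.0])   # peak performance per machine

correlations = {}
for j, name in enumerate(["LINPACK", "EP", "MG"]):
    correlations[name] = np.corrcoef(results[:, j], peak)[0, 1]
    print(f"{name} vs peak: r = {correlations[name]:.3f}")
```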

  1. Comparison of Machine Learning Methods for the Arterial Hypertension Diagnostics

    PubMed Central

    Belo, David; Gamboa, Hugo

    2017-01-01

The paper presents an analysis of the accuracy of machine learning approaches applied to cardiac activity data. The study evaluates the possibility of diagnosing arterial hypertension by means of short-term heart rate variability signals. Two groups were studied: 30 relatively healthy volunteers and 40 patients suffering from arterial hypertension of degree II-III. The following machine learning approaches were studied: linear and quadratic discriminant analysis, k-nearest neighbors, support vector machine with a radial basis function, decision trees, and the naive Bayes classifier. Moreover, different methods of feature extraction were analyzed: statistical, spectral, wavelet, and multifractal. All in all, 53 features were investigated. The results show that discriminant analysis achieves the highest classification accuracy. The suggested approach of searching for a noncorrelated feature set achieved better results than a data set based on principal components. PMID:28831239
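The "statistical" feature-extraction step can be sketched with two standard time-domain HRV features, SDNN and RMSSD; the RR-interval series below is synthetic, not patient data:

```python
# Two standard time-domain HRV features computed from RR intervals.
# The interval series is synthetic; the paper's full feature set (53
# features) also included spectral, wavelet and multifractal measures.
import numpy as np

def sdnn(rr_ms):
    """Standard deviation of RR (NN) intervals, in ms."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = np.array([812.0, 790.0, 835.0, 801.0, 820.0, 795.0, 828.0])
print(round(sdnn(rr), 1), round(rmssd(rr), 1))
```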

  2. MTR WING, TRA604. FIRST FLOOR PLAN. ENTRY LOBBY, MACHINE SHOP, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    MTR WING, TRA-604. FIRST FLOOR PLAN. ENTRY LOBBY, MACHINE SHOP, INSTRUMENT SHOP, COUNTING ROOM, HEALTH PHYSICS LAB, LABS AND OFFICES, STORAGE, SHIPPING AND RECEIVING. BLAW-KNOX 3150-4-2, 7/1950. INL INDEX NO. 053-604-00-099-100008, REV. 7. - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  3. Ammunition Loading and Firing Test Pretest Physical Conditioning of Female Soldier Participants

    DTIC Science & Technology

    1978-10-01

appear to be a significant improvement considering that Cooper's values are based upon women running in shorts and tennis shoes as opposed to the Ss who...machine. of the other, facing machine between handles. 2. Grasp lift handles. 2. Squat down, bending at knees and hips, and 3. "Pin" elbows to your side

  4. Study of Man-Machine Communications Systems for Disabled Persons (The Handicapped). Volume VI. Final Report.

    ERIC Educational Resources Information Center

    Kafafian, Haig

    The instruction manual contains lessons for teaching severely physically and/or neurologically handicapped students to use the seven-key Cybertype electric writing machine. Unlike the 14-key keyboard, which requires bilateral coordination in arms, legs, or other parts of the body, the seven-key keyboard requires the use of only one part of the…

  5. Reconstructing the behavior of walking fruit flies

    NASA Astrophysics Data System (ADS)

    Berman, Gordon; Bialek, William; Shaevitz, Joshua

    2010-03-01

Over the past century, the fruit fly Drosophila melanogaster has arisen as almost a lingua franca in the study of animal behavior, having been utilized to study questions in fields as diverse as sleep deprivation, aging, and drug abuse, amongst many others. Accordingly, much is known about what can be done to manipulate these organisms genetically, behaviorally, and physiologically. Most behavioral work on this system to date has consisted of experiments in which the flies are given a choice among a discrete set of pre-defined behaviors. Our aim, however, is simply to spend some time with a cadre of flies, using techniques from nonlinear dynamics, statistical physics, and machine learning in an attempt to reconstruct and gain understanding into their behavior. More specifically, we use a multi-camera set-up combined with a motion tracking stage in order to obtain long time series of walking fruit flies moving about a glass plate. This experimental system serves as a test-bed for analytical, statistical, and computational techniques for studying animal behavior. In particular, we attempt to reconstruct the natural modes of behavior for a fruit fly through a data-driven approach, in a manner inspired by recent work in C. elegans and cockroaches.
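The data-driven mode-finding step can be sketched as principal component analysis via SVD; the toy "posture" time series below stands in for real tracked fly trajectories:

```python
# PCA via SVD on a multi-channel time series: two underlying modes are
# mixed into five observed channels, and the leading components should
# recover most of the variance. Toy data stands in for fly trajectories.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 20, 500)
modes = np.column_stack([np.sin(t), np.cos(2 * t)])   # two hidden modes
mixing = rng.normal(size=(2, 5))                      # random mixing matrix
data = modes @ mixing + 0.05 * rng.normal(size=(500, 5))

centered = data - data.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
print(explained[:2].sum() > 0.95)  # two components capture most variance
```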

  6. Black holes: theory and observations (Scientific session of the Physical Sciences Division of the Russian Academy of Sciences, 23 December 2015)

    NASA Astrophysics Data System (ADS)

    2016-07-01

    A scientific session of the Physical Sciences Division of the Russian Academy of Sciences (RAS), "Black holes: theory and observations," was held in the conference hall of the Lebedev Physical Institute, RAS, on 23 December 2015. The papers collected in this issue were written based on talks given at the session: (1) I D Novikov (Lebedev Physical Institute, Russian Academy of Sciences, Astro Space Center, Moscow; The Niels Bohr International Academy, The Niels Bohr Institute, Copenhagen; National Research Center 'Kurchatov Institute', Moscow) "Black holes, wormholes, and time machines"; (2) A M Cherepashchuk (Lomonosov Moscow State University, Sternberg Astronomical Institute, Moscow) "Observing stellar-mass and supermassive black holes"; (3) N S Kardashev (Lebedev Physical Institute, Russian Academy of Sciences, Astro Space Center, Moscow) "Millimetron space project: a tool for researching black holes and wormholes." Papers written on the basis of oral presentations 1, 2 are published below. • Observing stellar mass and supermassive black holes, A M Cherepashchuk Physics-Uspekhi, 2016, Volume 59, Number 7, Pages 702-712 • Black holes, wormholes, and time machines, I D Novikov Physics-Uspekhi, 2016, Volume 59, Number 7, Pages 713-715

  7. Statistical Physics of Adaptation

    DTIC Science & Technology

    2016-08-23

    Statistical Physics of Adaptation. Nikolay Perunov, Robert A. Marsland, and Jeremy L. England, Department of Physics, Physics of Living Systems Group. Subject areas: Biological Physics, Complex Systems, Statistical Physics. Excerpts from the introduction: "It has long been understood that nonequilibrium driving can ... equilibrium may appear to have been specially selected for physical properties connected to their ability to absorb work from the particular driving environment."

  8. Study on the Optimization and Process Modeling of the Rotary Ultrasonic Machining of Zerodur Glass-Ceramic

    NASA Astrophysics Data System (ADS)

    Pitts, James Daniel

    Rotary ultrasonic machining (RUM), a hybrid process combining ultrasonic machining and diamond grinding, was created to increase material removal rates in the fabrication of hard and brittle workpieces. The objective of this research was to experimentally derive empirical equations for predicting multiple machined surface roughness parameters for helically pocketed, rotary ultrasonic machined Zerodur glass-ceramic workpieces by means of a systematic statistical experimental approach. A Taguchi parametric screening design of experiments was employed to systematically determine the RUM process parameters with the largest effect on mean surface roughness. Next, empirical equations for seven common surface quality metrics were developed via Box-Behnken surface response experimental trials. Validation trials were conducted, with predicted and experimental surface roughness in varying levels of agreement. The reductions in cutting force and tool wear associated with RUM, reported by previous researchers, were experimentally verified to also extend to helical pocketing of Zerodur glass-ceramic.
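
    As an aside, the "common surface quality metrics" referred to above include amplitude parameters such as Ra and Rq; a minimal, self-contained sketch of their textbook definitions (the sample height profile is made up for illustration):

```python
import math

def roughness_metrics(profile):
    """Compute common amplitude roughness parameters from a sampled
    surface-height profile (e.g. in micrometres)."""
    n = len(profile)
    mean = sum(profile) / n
    dev = [z - mean for z in profile]            # deviations from the mean line
    ra = sum(abs(d) for d in dev) / n            # Ra: arithmetic mean roughness
    rq = math.sqrt(sum(d * d for d in dev) / n)  # Rq: root-mean-square roughness
    rt = max(dev) - min(dev)                     # Rt: total height of the profile
    return {"Ra": ra, "Rq": rq, "Rt": rt}

# Hypothetical 8-point profile purely for illustration
metrics = roughness_metrics([0.2, -0.1, 0.4, -0.3, 0.1, -0.2, 0.3, -0.4])
```

    Rq is always at least Ra for the same profile, which is a quick sanity check on any roughness implementation.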

  9. AstroML: Python-powered Machine Learning for Astronomy

    NASA Astrophysics Data System (ADS)

    Vander Plas, Jake; Connolly, A. J.; Ivezic, Z.

    2014-01-01

    As astronomical data sets grow in size and complexity, automated machine learning and data mining methods are becoming an increasingly fundamental component of research in the field. The astroML project (http://astroML.org) provides a common repository for practical examples of the data mining and machine learning tools used and developed by astronomical researchers, written in Python. The astroML module contains a host of general-purpose data analysis and machine learning routines, loaders for openly-available astronomical datasets, and fast implementations of specific computational methods often used in astronomy and astrophysics. The associated website features hundreds of examples of these routines being used for analysis of real astronomical datasets, while the associated textbook provides a curriculum resource for graduate-level courses focusing on practical statistics, machine learning, and data mining approaches within Astronomical research. This poster will highlight several of the more powerful and unique examples of analysis performed with astroML, all of which can be reproduced in their entirety on any computer with the proper packages installed.

  10. Network challenges for cyber physical systems with tiny wireless devices: a case study on reliable pipeline condition monitoring.

    PubMed

    Ali, Salman; Qaisar, Saad Bin; Saeed, Husnain; Khan, Muhammad Farhan; Naeem, Muhammad; Anpalagan, Alagan

    2015-03-25

    The synergy of computational and physical network components leading to the Internet of Things, Data and Services has been made feasible by the use of Cyber Physical Systems (CPSs). CPS engineering promises to impact system condition monitoring for a diverse range of fields from healthcare, manufacturing, and transportation to aerospace and warfare. CPS for environment monitoring applications completely transforms human-to-human, human-to-machine and machine-to-machine interactions with the use of Internet Cloud. A recent trend is to gain assistance from mergers between virtual networking and physical actuation to reliably perform all conventional and complex sensing and communication tasks. Oil and gas pipeline monitoring provides a novel example of the benefits of CPS, providing a reliable remote monitoring platform to leverage environment, strategic and economic benefits. In this paper, we evaluate the applications and technical requirements for seamlessly integrating CPS with sensor network plane from a reliability perspective and review the strategies for communicating information between remote monitoring sites and the widely deployed sensor nodes. Related challenges and issues in network architecture design and relevant protocols are also provided with classification. This is supported by a case study on implementing reliable monitoring of oil and gas pipeline installations. Network parameters like node-discovery, node-mobility, data security, link connectivity, data aggregation, information knowledge discovery and quality of service provisioning have been reviewed.

  11. Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System

    PubMed Central

    Beruvides, Gerardo

    2017-01-01

    Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework that is supported by the interaction between SCANeR software and Matlab/Simulink. To the best of the authors’ knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organization map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organized map in snowy conditions. PMID:28906450

  12. Network Challenges for Cyber Physical Systems with Tiny Wireless Devices: A Case Study on Reliable Pipeline Condition Monitoring

    PubMed Central

    Ali, Salman; Qaisar, Saad Bin; Saeed, Husnain; Farhan Khan, Muhammad; Naeem, Muhammad; Anpalagan, Alagan

    2015-01-01

    The synergy of computational and physical network components leading to the Internet of Things, Data and Services has been made feasible by the use of Cyber Physical Systems (CPSs). CPS engineering promises to impact system condition monitoring for a diverse range of fields from healthcare, manufacturing, and transportation to aerospace and warfare. CPS for environment monitoring applications completely transforms human-to-human, human-to-machine and machine-to-machine interactions with the use of Internet Cloud. A recent trend is to gain assistance from mergers between virtual networking and physical actuation to reliably perform all conventional and complex sensing and communication tasks. Oil and gas pipeline monitoring provides a novel example of the benefits of CPS, providing a reliable remote monitoring platform to leverage environment, strategic and economic benefits. In this paper, we evaluate the applications and technical requirements for seamlessly integrating CPS with sensor network plane from a reliability perspective and review the strategies for communicating information between remote monitoring sites and the widely deployed sensor nodes. Related challenges and issues in network architecture design and relevant protocols are also provided with classification. This is supported by a case study on implementing reliable monitoring of oil and gas pipeline installations. Network parameters like node-discovery, node-mobility, data security, link connectivity, data aggregation, information knowledge discovery and quality of service provisioning have been reviewed. PMID:25815444

  13. Obstacle Recognition Based on Machine Learning for On-Chip LiDAR Sensors in a Cyber-Physical System.

    PubMed

    Castaño, Fernando; Beruvides, Gerardo; Haber, Rodolfo E; Artuñedo, Antonio

    2017-09-14

    Collision avoidance is an important feature in advanced driver-assistance systems, aimed at providing correct, timely and reliable warnings before an imminent collision (with objects, vehicles, pedestrians, etc.). The obstacle recognition library is designed and implemented to address the design and evaluation of obstacle detection in a transportation cyber-physical system. The library is integrated into a co-simulation framework that is supported by the interaction between SCANeR software and Matlab/Simulink. To the best of the authors' knowledge, two main contributions are reported in this paper. Firstly, the modelling and simulation of virtual on-chip light detection and ranging sensors in a cyber-physical system, for traffic scenarios, is presented. The cyber-physical system is designed and implemented in SCANeR. Secondly, three specific artificial intelligence-based methods for obstacle recognition libraries are also designed and applied using a sensory information database provided by SCANeR. The computational library has three methods for obstacle detection: a multi-layer perceptron neural network, a self-organization map and a support vector machine. Finally, a comparison among these methods under different weather conditions is presented, with very promising results in terms of accuracy. The best results are achieved using the multi-layer perceptron in sunny and foggy conditions, the support vector machine in rainy conditions and the self-organized map in snowy conditions.

  14. Belief propagation decoding of quantum channels by passing quantum messages

    NASA Astrophysics Data System (ADS)

    Renes, Joseph M.

    2017-07-01

    The belief propagation (BP) algorithm is a powerful tool in a wide range of disciplines from statistical physics to machine learning to computational biology, and is ubiquitous in decoding classical error-correcting codes. The algorithm works by passing messages between nodes of the factor graph associated with the code and enables efficient decoding of the channel, in some cases even up to the Shannon capacity. Here we construct the first BP algorithm which passes quantum messages on the factor graph and is capable of decoding the classical-quantum channel with pure state outputs. This gives explicit decoding circuits whose number of gates is quadratic in the code length. We also show that this decoder can be modified to work with polar codes for the pure state channel and as part of a decoder for transmitting quantum information over the amplitude damping channel. These represent the first explicit capacity-achieving decoders for non-Pauli channels.
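
    For readers unfamiliar with classical BP, the message updates the quantum construction builds on can be sketched in a few lines. The tanh-rule check-node update and repetition-code decoder below are textbook classical sum-product steps, not the quantum-message algorithm of the paper, and the log-likelihood ratios (LLRs) are illustrative:

```python
import math

def check_to_var(llrs):
    """Sum-product check-node update: the message sent to one variable,
    given the LLRs of the other variables on the same parity check
    (the so-called tanh rule)."""
    prod = 1.0
    for l in llrs:
        prod *= math.tanh(l / 2.0)
    return 2.0 * math.atanh(prod)

def decode_repetition(channel_llrs):
    """On a repetition code's factor graph, BP collapses to summing the
    channel LLRs at the single variable node; the sign gives the bit."""
    return 0 if sum(channel_llrs) >= 0 else 1
```

    With these two updates (plus the symmetric variable-node sum), iterating messages over a sparse factor graph yields the usual LDPC decoder.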

  15. Earthquake prediction in California using regression algorithms and cloud-based big data infrastructure

    NASA Astrophysics Data System (ADS)

    Asencio-Cortés, G.; Morales-Esteban, A.; Shang, X.; Martínez-Álvarez, F.

    2018-06-01

    Earthquake magnitude prediction is a challenging problem that has been widely studied during the last decades. Statistical, geophysical and machine learning approaches can be found in the literature, with no particularly satisfactory results. In recent years, powerful computational techniques for analyzing big data have emerged, making the analysis of massive datasets possible. These new methods make use of physical resources like cloud-based architectures. California is known for being one of the regions with the highest seismic activity in the world, and many data are available. In this work, the use of several regression algorithms combined with ensemble learning is explored in the context of big data (a 1 GB catalog is used), in order to predict earthquake magnitudes within the next seven days. The Apache Spark framework, the H2O library in the R language and Amazon cloud infrastructure were used, reporting very promising results.

  16. The need and approach for characterization - U.S. air force perspectives on materials state awareness

    NASA Astrophysics Data System (ADS)

    Aldrin, John C.; Lindgren, Eric A.

    2018-04-01

    This paper expands on the objective and motivation for NDE-based characterization and includes a discussion of the current approach using model-assisted inversion being pursued within the Air Force Research Laboratory (AFRL). This includes a discussion of the multiple model-based methods that can be used, including physics-based models, deep machine learning, and heuristic approaches. The benefits and drawbacks of each method are reviewed and the potential to integrate multiple methods is discussed. Initial successes are included to highlight the ability to obtain quantitative values of damage. The additional steps remaining to realize this capability with statistical metrics of accuracy are discussed, and the use of these results to enable probabilistic life management is addressed. The outcome of this initiative will realize the long-desired capability of NDE methods to provide quantitative characterization, accelerating the certification of new materials and enhancing the life management of engineered systems.

  17. Geometry and Dynamics for Markov Chain Monte Carlo

    NASA Astrophysics Data System (ADS)

    Barp, Alessandro; Briol, François-Xavier; Kennedy, Anthony D.; Girolami, Mark

    2018-03-01

    Markov Chain Monte Carlo methods have revolutionised mathematical computation and enabled statistical inference within many previously intractable models. In this context, Hamiltonian dynamics have been proposed as an efficient way of building chains which can explore probability densities efficiently. The method emerges from physics and geometry and these links have been extensively studied by a series of authors through the last thirty years. However, there is currently a gap between the intuitions and knowledge of users of the methodology and our deep understanding of these theoretical foundations. The aim of this review is to provide a comprehensive introduction to the geometric tools used in Hamiltonian Monte Carlo at a level accessible to statisticians, machine learners and other users of the methodology with only a basic understanding of Monte Carlo methods. This will be complemented with some discussion of the most recent advances in the field which we believe will become increasingly relevant to applied scientists.
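
    A minimal example of the Hamiltonian dynamics the review discusses, assuming a standard 1-D Gaussian target; the step size, trajectory length and iteration count are arbitrary choices for the sketch:

```python
import math
import random

def leapfrog(q, p, grad_u, eps, steps):
    """Leapfrog integration of Hamiltonian dynamics:
    half-step momentum, alternating full steps, half-step momentum."""
    p = p - 0.5 * eps * grad_u(q)
    for _ in range(steps - 1):
        q = q + eps * p
        p = p - eps * grad_u(q)
    q = q + eps * p
    p = p - 0.5 * eps * grad_u(q)
    return q, p

def hmc_sample(q, u, grad_u, eps=0.1, steps=20, iters=2000, seed=0):
    """Minimal 1-D Hamiltonian Monte Carlo with a Metropolis accept step."""
    rng = random.Random(seed)
    samples = []
    for _ in range(iters):
        p0 = rng.gauss(0.0, 1.0)                 # resample momentum
        q1, p1 = leapfrog(q, p0, grad_u, eps, steps)
        h0 = u(q) + 0.5 * p0 * p0                # current Hamiltonian
        h1 = u(q1) + 0.5 * p1 * p1               # proposed Hamiltonian
        if rng.random() < math.exp(min(0.0, h0 - h1)):
            q = q1                               # accept the proposal
        samples.append(q)
    return samples

# Target: standard normal, with potential U(q) = q^2 / 2
samples = hmc_sample(0.0, lambda q: 0.5 * q * q, lambda q: q)
```

    Because the leapfrog integrator is symplectic and time-reversible, the energy error stays bounded along the trajectory, which is why long proposals can still be accepted with high probability.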

  18. Energy landscape analysis of neuroimaging data

    NASA Astrophysics Data System (ADS)

    Ezaki, Takahiro; Watanabe, Takamitsu; Ohzeki, Masayuki; Masuda, Naoki

    2017-05-01

    Computational neuroscience models have been used for understanding neural dynamics in the brain and how they may be altered when physiological or other conditions change. We review and develop a data-driven approach to neuroimaging data called the energy landscape analysis. The methods are rooted in statistical physics theory, in particular the Ising model, also known as the (pairwise) maximum entropy model and Boltzmann machine. The methods have been applied to fitting electrophysiological data in neuroscience for a decade, but their use in neuroimaging data is still in its infancy. We first review the methods and discuss some algorithms and technical aspects. Then, we apply the methods to functional magnetic resonance imaging data recorded from healthy individuals to inspect the relationship between the accuracy of fitting, the size of the brain system to be analysed and the data length. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.
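
    The pairwise maximum entropy (Ising) model underlying the energy landscape analysis can be written down directly for a toy system; the fields and couplings below are invented, and exact enumeration of states is feasible only for the small system sizes typical of region-level neuroimaging data:

```python
import itertools
import math

def energy(state, h, J):
    """Ising / pairwise maximum-entropy energy:
    E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j, with s_i in {-1, +1}.
    J is given as an upper-triangular matrix (J[i][j] used for i < j)."""
    n = len(state)
    e = -sum(h[i] * state[i] for i in range(n))
    e -= sum(J[i][j] * state[i] * state[j]
             for i in range(n) for j in range(i + 1, n))
    return e

def boltzmann(h, J):
    """Exact Boltzmann distribution over all 2^n activity patterns."""
    states = list(itertools.product([-1, 1], repeat=len(h)))
    weights = [math.exp(-energy(s, h, J)) for s in states]
    z = sum(weights)                       # partition function
    return {s: w / z for s, w in zip(states, weights)}

# Toy 2-region system with no fields and one positive coupling
p = boltzmann([0.0, 0.0], [[0.0, 1.0], [0.0, 0.0]])
```

    With a positive coupling and zero fields, the two aligned patterns are equally probable and more likely than the anti-aligned ones, a tiny two-well "energy landscape".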

  19. Application of the Teager-Kaiser energy operator in bearing fault diagnosis.

    PubMed

    Henríquez Rodríguez, Patricia; Alonso, Jesús B; Ferrer, Miguel A; Travieso, Carlos M

    2013-03-01

    Condition monitoring of rotating machines is important in the prevention of failures. As most machine malfunctions are related to bearing failures, several bearing diagnosis techniques have been developed. Some of them characterize the bearing vibration signal with statistical measures and others extract the bearing fault characteristic frequency from the AM component of the vibration signal. In this paper, we propose to transform the vibration signal to the Teager-Kaiser domain and characterize it with statistical and energy-based measures. A bearing database with normal and faulty bearings is used. The diagnosis is performed with two classifiers: a neural network classifier and an LS-SVM classifier. Experiments show that the Teager-domain features outperform those based on the temporal or AM signal. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
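
    The discrete Teager-Kaiser energy operator referred to above has a simple closed form, Psi[x](n) = x(n)^2 - x(n-1)x(n+1); a minimal sketch (the test signal is illustrative, not bearing data):

```python
import math

def teager_kaiser(x):
    """Discrete Teager-Kaiser energy operator applied sample-by-sample:
    Psi[x](n) = x(n)^2 - x(n-1) * x(n+1), defined for interior samples."""
    return [x[n] * x[n] - x[n - 1] * x[n + 1] for n in range(1, len(x) - 1)]

# Illustrative test signal: a pure sinusoid at angular frequency 0.3 rad/sample
signal = [math.cos(0.3 * n) for n in range(100)]
tk = teager_kaiser(signal)
```

    For a pure sinusoid A cos(ωn), the operator returns the constant A² sin² ω, so deviations from a flat output flag the amplitude and frequency modulations that bearing faults introduce.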

  20. Physically absorbable reagents-collectors in elementary flotation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S.A. Kondrat'ev; I.G. Bochkarev

    2007-09-15

    Based on a review of research carried out at the Institute of Mining, Siberian Branch, Russian Academy of Sciences, the effect of physically absorbable reagents-collectors on the formation of a flotation complex and its stability in turbulent pulp flows in flotation machines of the basic types is considered. The basic requirements for physically absorbable reagents-collectors at different flotation stages are established.

  1. Stroke dynamics and frequency of 3 phacoemulsification machines.

    PubMed

    Tognetto, Daniele; Cecchini, Paolo; Leon, Pia; Di Nicola, Marta; Ravalico, Giuseppe

    2012-02-01

    To measure the working frequency and the stroke dynamics of the phaco tip of 3 phacoemulsification machines. University Eye Clinic of Trieste, Italy. Experimental study. A video wet fixture was assembled to measure the working frequency using a micro camera and a micropulsed strobe-light system. A different video wet fixture was created to measure tip displacement as vectorial movement at different phaco powers using a microscopic video apparatus. The working frequency of the Infiniti Ozil machine was 43.0 kHz in longitudinal mode and 31.6 kHz in torsional mode. The frequency of the Whitestar Signature machine was 29.0 kHz in longitudinal mode and 38.0 kHz with the Ellips FX handpiece. The Stellaris machine had a frequency of 28.8 kHz. The longitudinal stroke of the 3 machines at different phaco powers was statistically significantly different. The Stellaris machine had the highest stroke extent (139 μm). The lateral movement of the Infiniti Ozil and Whitestar Signature machines differed significantly. No movement on the y-axis was observed for the Infiniti Ozil machine in torsional mode. The elliptical path of the Ellips FX handpiece had different x and y components at different phaco powers. The 3 phaco machines performed differently in terms of working frequency and stroke dynamics. The knowledge of the peculiar lateral and elliptical path strokes of Infiniti and Whitestar Signature machines may allow the surgeon to fully use these features for lens removal. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  2. Optical Measurements of Diamond-Turned Surfaces

    NASA Astrophysics Data System (ADS)

    Politch, Jacob

    1989-07-01

    We describe here a system for measuring diamond-turned surfaces very accurately. The system is based on heterodyne interferometry and measures surface height variations with an accuracy of 4 Å at a spatial resolution of 1 micrometer. From the measured data we have calculated the statistical properties of the surface, enabling us to identify the spatial frequencies caused by the vibrations of the diamond-turning machine and the measuring machine, as well as the frequency of the grid.
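
    Identifying the spatial frequencies imprinted by machine vibration amounts to peak-picking in the spectrum of the measured height profile; a naive-DFT sketch on a synthetic profile (the 5-cycle vibration component and scan length are invented for the example):

```python
import cmath
import math

def dominant_frequency(profile):
    """Naive DFT of a surface-height profile; the index of the largest
    spectral bin exposes the dominant spatial frequency, e.g. one
    imprinted by machine vibration. O(n^2), fine for short scans."""
    n = len(profile)
    mean = sum(profile) / n
    x = [z - mean for z in profile]        # remove the DC component
    best_k, best_amp = 0, 0.0
    for k in range(1, n // 2):
        s = sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
        if abs(s) > best_amp:
            best_k, best_amp = k, abs(s)
    return best_k

# Synthetic profile: a vibration signature at 5 cycles over the scan length
profile = [0.1 * math.sin(2 * math.pi * 5 * j / 128) for j in range(128)]
```

    In practice one would use an FFT and inspect the whole power spectrum rather than a single peak, but the principle is the same.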

  3. Landslide susceptibility modeling applying machine learning methods: A case study from Longju in the Three Gorges Reservoir area, China

    NASA Astrophysics Data System (ADS)

    Zhou, Chao; Yin, Kunlong; Cao, Ying; Ahmed, Bayes; Li, Yuanyao; Catani, Filippo; Pourghasemi, Hamid Reza

    2018-03-01

    Landslides are a common natural hazard responsible for extensive damage and losses in mountainous areas. In this study, Longju in the Three Gorges Reservoir area in China was taken as a case study for landslide susceptibility assessment in order to develop effective risk prevention and mitigation strategies. First, 202 landslides were identified, including 95 colluvial landslides and 107 rockfalls. Twelve landslide causal factor maps were prepared initially, and the relationship between these factors and each landslide type was analyzed using the information value model. The unimportant factors were then identified and eliminated using the information gain ratio technique. The landslide locations were randomly divided into two groups: 70% for training and 30% for verification. Two machine learning models, the support vector machine (SVM) and artificial neural network (ANN), and a multivariate statistical model, logistic regression (LR), were applied for landslide susceptibility modeling (LSM) of each type. The LSM index maps, obtained by combining the assessment results for the two landslide types, were classified into five levels. The performance of the LSMs was evaluated using the receiver operating characteristic curve and the Friedman test. Results show that eliminating noise-generating factors and modeling each landslide type separately significantly increased the prediction accuracy. The machine learning models outperformed the multivariate statistical model, and the SVM model was found to be ideal for the case study area.
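
    The information gain ratio used above for factor elimination has a standard definition (information gain divided by split information); a small sketch on hypothetical factor/label data:

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a sequence of categorical values."""
    n = len(labels)
    counts = {}
    for l in labels:
        counts[l] = counts.get(l, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def gain_ratio(factor_values, labels):
    """Information gain ratio of a categorical causal factor with respect
    to landslide / non-landslide labels: gain normalized by the split
    information, so many-valued factors are not unfairly favoured."""
    n = len(labels)
    groups = {}
    for v, l in zip(factor_values, labels):
        groups.setdefault(v, []).append(l)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - cond          # information gain
    split = entropy(factor_values)         # split information
    return gain / split if split > 0 else 0.0
```

    A perfectly predictive factor scores 1.0 and an irrelevant one scores 0.0, which is the basis for dropping "unimportant" causal factors.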

  4. An element search ant colony technique for solving virtual machine placement problem

    NASA Astrophysics Data System (ADS)

    Srija, J.; Rani John, Rose; Kanaga, Grace Mary, Dr.

    2017-09-01

    Data centres in the cloud environment play a key role in providing infrastructure for ubiquitous computing, pervasive computing, mobile computing, etc. This computing paradigm tries to utilize the available resources in order to provide services, so maintaining high resource utilization without wasted power consumption has become a challenging task for researchers. In this paper we propose a direct-guidance ant colony system for effective mapping of virtual machines to physical machines with maximal resource utilization and minimal power consumption. The proposed algorithm is compared with an existing ant colony approach to the virtual machine placement problem and is shown to provide better results than the existing technique.
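
    The abstract does not give the algorithm's details; as a generic, hedged illustration of ant-colony search applied to VM placement, the sketch below invents its own pheromone update rule, parameters and a tiny VM/host instance, with consolidation onto fewer hosts standing in for power minimization:

```python
import random

def aco_place(vm_cpu, host_cap, ants=20, iters=30, evap=0.5, seed=1):
    """Toy ant-colony search for virtual machine placement: assign VMs
    (CPU demands) to hosts (capacities) so that as few hosts as possible
    are powered on. Pheromone tau[v][h] biases VM v toward host h."""
    rng = random.Random(seed)
    nv, nh = len(vm_cpu), len(host_cap)
    tau = [[1.0] * nh for _ in range(nv)]
    best, best_hosts = None, nh + 1
    for _ in range(iters):
        for _ in range(ants):
            free = list(host_cap)
            plan, ok = [], True
            for v in range(nv):
                opts = [h for h in range(nh) if free[h] >= vm_cpu[v]]
                if not opts:
                    ok = False
                    break
                h = rng.choices(opts, weights=[tau[v][h] for h in opts])[0]
                free[h] -= vm_cpu[v]
                plan.append(h)
            if ok and len(set(plan)) < best_hosts:
                best, best_hosts = plan, len(set(plan))
        # evaporate, then deposit pheromone along the best-so-far placement
        for v in range(nv):
            for h in range(nh):
                tau[v][h] *= (1.0 - evap)
        if best:
            for v, h in enumerate(best):
                tau[v][h] += 1.0 / best_hosts
    return best, best_hosts

# Hypothetical instance: four VMs onto three hosts of capacity 4
plan, hosts_used = aco_place([2, 3, 1, 2], [4, 4, 4])
```

    Real placement objectives also weigh memory, bandwidth and migration cost; the point here is only the construct-evaporate-deposit loop common to ant colony systems.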

  5. The physics of teams: Interdependence, measurable entropy and computational emotion

    NASA Astrophysics Data System (ADS)

    Lawless, William F.

    2017-08-01

    Most of the social sciences, including psychology, economics and subjective social network theory, are modeled on the individual, leaving the field not only a-theoretical, but also inapplicable to a physics of hybrid teams, where hybrid refers to arbitrarily combining humans, machines and robots into a team to perform a dedicated mission (e.g., military, business, entertainment) or to solve a targeted problem (e.g., with scientists, engineers, entrepreneurs). As a common social science practice, the ingredient at the heart of the social interaction, interdependence, is statistically removed prior to the replication of social experiments; but, as an analogy, statistically removing social interdependence to better study the individual is like statistically removing quantum effects as a complication to the study of the atom. Further, in applications of Shannon’s information theory to teams, the effects of interdependence are minimized, but even there, interdependence is how classical information is transmitted. Consequently, numerous mistakes are made when applying non-interdependent models to policies, the law and regulations, impeding social welfare by failing to exploit the power of social interdependence. For example, adding redundancy to human teams is thought by subjective social network theorists to improve the efficiency of a network, easily contradicted by our finding that redundancy is strongly associated with corruption in non-free markets. Thus, built atop the individual, most of the social sciences, economics and social network theory have little if anything to contribute to the engineering of hybrid teams. In defense of the social sciences, the mathematical physics of interdependence is elusive, non-intuitive and non-rational. However, by replacing determinism with bistable states, interdependence at the social level mirrors entanglement at the quantum level, suggesting the applicability of quantum tools for social science. 
We report how our quantum-like models capture some of the essential aspects of interdependence, a tool for the metrics of hybrid teams; as an example, we find additional support for our model of the solution to the open problem of team size. We also report on progress with the theory of computational emotion for hybrid teams, linking it qualitatively to the second law of thermodynamics. We conclude that the science of interdependence

  6. Data-driven advice for applying machine learning to bioinformatics problems

    PubMed Central

    Olson, Randal S.; La Cava, William; Mustahsan, Zairah; Varik, Akshay; Moore, Jason H.

    2017-01-01

    As the bioinformatics field grows, it must keep pace not only with new data but with new algorithms. Here we contribute a thorough analysis of 13 state-of-the-art, commonly used machine learning algorithms on a set of 165 publicly available classification problems in order to provide data-driven algorithm recommendations to current researchers. We present a number of statistical and visual comparisons of algorithm performance and quantify the effect of model selection and algorithm tuning for each algorithm and dataset. The analysis culminates in the recommendation of five algorithms with hyperparameters that maximize classifier performance across the tested problems, as well as general guidelines for applying machine learning to supervised classification problems. PMID:29218881

  7. Impact of Machine Virtualization on Timing Precision for Performance-critical Tasks

    NASA Astrophysics Data System (ADS)

    Karpov, Kirill; Fedotova, Irina; Siemens, Eduard

    2017-07-01

    In this paper we present a measurement study characterizing the impact of hardware virtualization on basic software timing, as well as on precise sleep operations of an operating system. We investigated how timer hardware is shared among heavily CPU-, I/O- and network-bound tasks on a virtual machine as well as on the host machine. VMware ESXi and QEMU/KVM were chosen as commonly used examples of the hypervisor- and host-based models. Based on statistical parameters of the retrieved distributions, our results provide a very good estimation of timing behavior, which is essential for real-time and performance-critical applications such as image processing or real-time control.
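
    The kind of basic timing characterization described can be reproduced with a few lines of Python on either a host or a guest machine; the 1 ms sleep target and trial count below are arbitrary choices for the sketch:

```python
import statistics
import time

def sleep_overshoot(target_s=0.001, trials=50):
    """Measure how much an OS sleep overshoots its requested duration,
    a basic precision metric that can be compared between a host
    machine and a virtual machine."""
    overshoots = []
    for _ in range(trials):
        t0 = time.perf_counter()
        time.sleep(target_s)
        overshoots.append(time.perf_counter() - t0 - target_s)
    return {
        "min": min(overshoots),
        "median": statistics.median(overshoots),
        "max": max(overshoots),
    }

stats = sleep_overshoot()
```

    Comparing these distributions (rather than single values) between bare metal and a VM is what reveals the scheduling jitter that virtualization adds.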

  8. Supervised Machine Learning for Regionalization of Environmental Data: Distribution of Uranium in Groundwater in Ukraine

    NASA Astrophysics Data System (ADS)

    Govorov, Michael; Gienko, Gennady; Putrenko, Viktor

    2018-05-01

    In this paper, several supervised machine learning algorithms were explored to define homogeneous regions of concentration of uranium in surface waters in Ukraine using multiple environmental parameters. A previous study focused on finding the primary environmental parameters related to uranium in ground waters using several methods of spatial statistics and unsupervised classification. At this step, we refined the regionalization using Artificial Neural Network (ANN) techniques including the Multilayer Perceptron (MLP), Radial Basis Function (RBF), and Convolutional Neural Network (CNN). The study focuses on building local ANN models, which may significantly improve the prediction results of machine learning algorithms by taking into consideration non-stationarity and autocorrelation in spatial data.

  9. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment

    PubMed Central

    2011-01-01

    Background Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In keeping with the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. Results This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. 
Conclusions AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements. PMID:21798025

  10. AZOrange - High performance open source machine learning for QSAR modeling in a graphical programming environment.

    PubMed

    Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott

    2011-07-28

    Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. In granting the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. 
AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.

  11. Methods And Systems For Analyzing The Degradation And Failure Of Mechanical Systems

    DOEpatents

    Jarrell, Donald B.; Sisk, Daniel R.; Hatley, Darrel D.; Kirihara, Leslie J.; Peters, Timothy J.

    2005-02-08

    Methods and systems for identifying, understanding, and predicting the degradation and failure of mechanical systems are disclosed. The methods include measuring and quantifying stressors that are responsible for the activation of degradation mechanisms in the machine component of interest. The intensity of the stressor may be correlated with the rate of physical degradation according to some determinable function such that a derivative relationship exists between the machine performance, degradation, and the underlying stressor. The derivative relationship may be used to make diagnostic and prognostic calculations concerning the performance and projected life of the machine. These calculations may be performed in real time to allow the machine operator to quickly adjust the operational parameters of the machinery in order to help minimize or eliminate the effects of the degradation mechanism, thereby prolonging the life of the machine. Various systems implementing the methods are also disclosed.

  12. High performance cutting of aircraft and turbine components

    NASA Astrophysics Data System (ADS)

    Krämer, A.; Lung, D.; Klocke, F.

    2012-04-01

    Titanium and nickel-based alloys belong to the group of difficult-to-cut materials. The machining of these high-temperature alloys is characterized by low productivity and low process stability as a result of their physical and mechanical properties. Major problems during the machining of these materials are low applicable cutting speeds due to excessive tool wear, long machining times, and thus high manufacturing costs, as well as the formation of ribbon and snarled chips. Under these conditions automation of the production process is limited. This paper deals with strategies to improve machinability of titanium and nickel-based alloys. Using the example of the nickel-based alloy Inconel 718 high performance cutting with advanced cutting materials, such as PCBN and cutting ceramics, is presented. Afterwards the influence of different cooling strategies, like high-pressure lubricoolant supply and cryogenic cooling, during machining of TiAl6V4 is shown.

  13. Ergonomic risk factor identification for sewing machine operators through supervised occupational therapy fieldwork in Bangladesh: A case study.

    PubMed

    Habib, Md Monjurul

    2015-01-01

    Many sewing machine operators work with high risk factors for musculoskeletal health in the garment industries in Bangladesh. The aim was to identify the physical risk factors among sewing machine operators in a Bangladeshi garment factory. Sewing machine operators (n = 327, 83% female) were evaluated; the mean age of the participants was 25.25 years. Six ergonomic risk factors were determined using a musculoskeletal disorders risk assessment. Data collection included measurements of sewing machine table and chair heights; this data was combined with information from informal interviews. Significant ergonomic risk factors included the combination of awkward postures of the neck and back, repetitive hand and arm movements, poor ergonomic workstations, and prolonged working hours without adequate breaks; these risk factors resulted in musculoskeletal complaints, sick leave, and job switching. One aspect of improving worker health in garment factories is addressing musculoskeletal risk factors through ergonomic interventions.

  14. Tribology and energy efficiency: from molecules to lubricated contacts to complete machines.

    PubMed

    Taylor, Robert Ian

    2012-01-01

    The impact of lubricants on energy efficiency is considered. Molecular details of base oils used in lubricants can have a great impact on the lubricant's physical properties which will affect the energy efficiency performance of a lubricant. In addition, molecular details of lubricant additives can result in significant differences in measured friction coefficients for machine elements operating in the mixed/boundary lubrication regime. In single machine elements, these differences will result in lower friction losses, and for complete systems (such as cars, trucks, hydraulic circuits, industrial gearboxes etc.) lower fuel consumption or lower electricity consumption can result.

  15. Development of a Physical Employment Testing Battery for Armor Soldiers: 19D Cavalry Scout and 19K M1 Armor Crewman

    DTIC Science & Technology

    2015-12-01

    M2 .50 Caliber Machine Gun on the Abrams Tank While wearing a task specific uniform weighing approximately 49 lb, Soldiers lifted the M2 .50...12 Engage Targets with a Caliber .50 M2 Machine Gun X 13 Lay a 120mm Mortar – Emplace Base Plate X 14 Lay a 120mm Mortar...17 Mount M2 .50 Cal Machine Gun Receiver on an Abrams Tank X 18 Stow Ammunition on an Abrams Tank (Load 120mm MPAT Round to the Ready Rack

  16. Development of a Physical Employment Testing Battery for Infantry Soldiers: 11B Infantryman and 11C Infantryman-Indirect Fire

    DTIC Science & Technology

    2015-12-01

    43 1.9 Images of Move Under Direct Fire (Task 10) 44 1.10 Engage Targets with a .50 Caliber M2 Machine Gun (Task 12) 45 1.11 Image of Lay a...Caliber M2 Machine Gun While wearing a fighting load (approximately 83 lb) and working as a member of a two-person team, Soldiers lifted and carried the... M2 HB Machine Gun with tripod (153 lb) a distance of 10 m. Army Standard: Successful completion of the task 13. Emplace Base Plate (11C

  17. Development of a Physical Employment Testing Battery for Infantry Soldiers: 11B Infantryman and 11C Infantryman - Indirect Fire

    DTIC Science & Technology

    2015-12-01

    25mm barrel install (Task 5) and engage targets with an M2 machine gun (Task 12). During these tasks, the performance of one individual will affect...TOW Missile Launcher on BFV (Task 8) 43 1.9 Images of Move Under Direct Fire (Task 10) 44 1.10 Engage Targets with a .50 Caliber M2 Machine Gun ...Engage Targets with a .50 Caliber M2 Machine Gun While wearing a fighting load (approximately 83 lb) and working as a member of a two-person team

  18. Effects of Machine Traffic on the Physical Properties of Ash-Cap Soils

    Treesearch

    Leonard R. Johnson; Debbie Page-Dumroese; Han-Sup Han

    2007-01-01

    With pressure and vibration on a soil, air spaces between soil particles can be reduced by displaced soil particles. Activity associated with heavy machine traffic increases the density of the soil and can also increase the resistance of the soil to penetration. This paper reviews research related to disturbance of forest soils with a primary focus on compaction in ash...

  19. Another look at Atwood's machine

    NASA Astrophysics Data System (ADS)

    LoPresto, Michael C.

    1999-02-01

    Atwood's machine is a standard experimental apparatus that is likely to get pushed out of the laboratory portion of the general physics course due to the ever increasing use of microcomputers. To avoid this, I now use the apparatus for an experiment during the work and energy portion of the course which not only allows us to demonstrate those principles but also compare them with Newton's laws of motion.

  20. Remote Sensing as a Demonstration of Applied Physics.

    ERIC Educational Resources Information Center

    Colwell, Robert N.

    1980-01-01

    Provides information about the field of remote sensing, including discussions of geo-synchronous and sun-synchronous remote-sensing platforms, the actual physical processes and equipment involved in sensing, the analysis of images by humans and machines, and inexpensive, small scale methods, including aerial photography. (CS)

  1. Detailed Quantitative Classifications of Galaxy Morphology

    NASA Astrophysics Data System (ADS)

    Nair, Preethi

    2018-01-01

    Understanding the physical processes responsible for the growth of galaxies is one of the key challenges in extragalactic astronomy. The assembly history of a galaxy is imprinted in a galaxy’s detailed morphology. The bulge-to-total ratio of galaxies, the presence or absence of bars, rings, spiral arms, tidal tails etc, all have implications for the past merger, star formation, and feedback history of a galaxy. However, current quantitative galaxy classification schemes are only useful for broad binning. They cannot classify or exploit the wide variety of galaxy structures seen in nature. Therefore, comparisons of observations with theoretical predictions of secular structure formation have only been conducted on small samples of visually classified galaxies. However large samples are needed to disentangle the complex physical processes of galaxy formation. With the advent of large surveys, like the Sloan Digital Sky Survey (SDSS) and the upcoming Large Synoptic Survey Telescope (LSST) and WFIRST, the problem of statistics will be resolved. However, the need for a robust quantitative classification scheme will still remain. Here I will present early results on promising machine learning algorithms that are providing detailed classifications, identifying bars, rings, multi-armed spiral galaxies, and Hubble type.

  2. Study on the Factors Affecting the Mechanical Behavior of Electron Beam Melted Ti6Al4V

    NASA Astrophysics Data System (ADS)

    Pirozzi, Carmine; Franchitti, Stefania; Borrelli, Rosario; Caiazzo, Fabrizia; Alfieri, Vittorio; Argenio, Paolo

    2017-09-01

    In this study, a mechanical characterization was performed on EBM-built Ti-6Al-4V tensile samples. Tensile tests showed different behavior between two sets of specimens: as-built and machined. Supporting investigations were carried out to physically explain the statistically significant difference in mechanical performance. Cylindrical samples representative of the tensile specimen geometry were manufactured by EBM and investigated in the as-built condition from a macrostructural and microstructural point of view. To make the study robust, the cylindrical samples were manufactured with different sizes and at different heights from the build plate; this choice arose from the need to understand whether other factors, such as massivity and specific location, could affect the microstructure and defect generation and consequently influence the mechanical behavior of EBM components. The results proved that the irregularity of the external circular surfaces of the examined cylinders, which significantly reduces the true cross section withstanding the applied load, gives a comprehensive physical explanation of the different tensile behavior of the two sets of tensile specimens.

  3. Simulation of tibial counterface wear in mobile bearing knees with uncoated and ADLC coated surfaces.

    PubMed

    Jones, V C; Barton, D C; Auger, D D; Hardaker, C; Stone, M H; Fisher, J

    2001-01-01

    A multidirectional pin-on-plate reciprocating machine was used to compare the wear performance of UHMWPE sliding against cast cobalt chrome (CoCr) plates that were either untreated or coated with Amorphous Diamond Like Carbon (ADLC). The test conditions were based on a 1/5 scale model representative of in vivo motion at the tibial counterfaces of unconstrained mobile bearing knees. The average wear rates (+/- standard error) were 13.78+/-1.06 mm3/Mcycles for the ADLC counterfaces and 0.504+/-0.12 mm3/Mcycles for the control CoCr counterfaces. All of the pins run on the ADLC counterfaces exhibited the same pattern of blistering along the central axis, and severe abrasion elsewhere to the extent that all of the original machining marks were removed after just one week of testing. The average friction coefficient was 0.24 for the ADLC counterfaces and 0.073 for the control CoCr counterfaces; the factor of 3.5 increase was statistically significant at p < 0.05. In the tribological evaluation of ADLC coatings for tibial trays in mobile bearing knees, this study shows that this specific Physical Vapour Deposition (PVD) ADLC coating had significantly poorer frictional and wear performance than the uncoated surfaces, sufficient to negate any potential benefits of improved resistance to third-body damage.

  4. Creating Turbulent Flow Realizations with Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    King, Ryan; Graf, Peter; Chertkov, Michael

    2017-11-01

    Generating valid inflow conditions is a crucial, yet computationally expensive, step in unsteady turbulent flow simulations. We demonstrate a new technique for rapid generation of turbulent inflow realizations that leverages recent advances in machine learning for image generation using a deep convolutional generative adversarial network (DCGAN). The DCGAN is an unsupervised machine learning technique consisting of two competing neural networks that are trained against each other using backpropagation. One network, the generator, tries to produce samples from the true distribution of states, while the discriminator tries to distinguish between true and synthetic samples. We present results from a fully-trained DCGAN that is able to rapidly draw random samples from the full distribution of possible inflow states without needing to solve the Navier-Stokes equations, eliminating the costly process of spinning up inflow turbulence. This suggests a new paradigm in physics informed machine learning where the turbulence physics can be encoded in either the discriminator or generator. Finally, we also propose additional applications such as feature identification and subgrid scale modeling.

  5. Discomfort analysis in computerized numeric control machine operations.

    PubMed

    Muthukumar, Krishnamoorthy; Sankaranarayanasamy, Krishnasamy; Ganguli, Anindya Kumar

    2012-06-01

    The introduction of computerized numeric control (CNC) technology in manufacturing industries has revolutionized the production process, but there are some health and safety problems associated with these machines. The present study aimed to investigate the extent of postural discomfort in CNC machine operators, and the relationship of this discomfort to the display and control panel height, with a view to validating the anthropometric recommendations for the location of the display and control panel in CNC machines. The postural discomforts associated with CNC machines were studied in 122 male operators using Corlett and Bishop's body part discomfort mapping, subject information, and discomfort level at various time intervals from the start to the end of a shift. This information was collected using a questionnaire. Statistical analysis was carried out using ANOVA. Neck discomfort due to the positioning of the machine displays, and shoulder and arm discomfort due to the positioning of controls, were identified as common health issues in the operators of these machines. The study revealed that 45.9% of machine operators reported discomfort in the lower back, 41.8% in the neck, 22.1% in the upper back, 53.3% in the shoulder and arm, and 21.3% in the leg. Discomfort increased over the course of the day and was highest at the end of a shift; subject age had no effect on the tendency to experience discomfort.

  6. Discomfort Analysis in Computerized Numeric Control Machine Operations

    PubMed Central

    Sankaranarayanasamy, Krishnasamy; Ganguli, Anindya Kumar

    2012-01-01

    Objectives The introduction of computerized numeric control (CNC) technology in manufacturing industries has revolutionized the production process, but there are some health and safety problems associated with these machines. The present study aimed to investigate the extent of postural discomfort in CNC machine operators, and the relationship of this discomfort to the display and control panel height, with a view to validating the anthropometric recommendations for the location of the display and control panel in CNC machines. Methods The postural discomforts associated with CNC machines were studied in 122 male operators using Corlett and Bishop's body part discomfort mapping, subject information, and discomfort level at various time intervals from the start to the end of a shift. This information was collected using a questionnaire. Statistical analysis was carried out using ANOVA. Results Neck discomfort due to the positioning of the machine displays, and shoulder and arm discomfort due to the positioning of controls, were identified as common health issues in the operators of these machines. The study revealed that 45.9% of machine operators reported discomfort in the lower back, 41.8% in the neck, 22.1% in the upper back, 53.3% in the shoulder and arm, and 21.3% in the leg. Conclusion Discomfort increased over the course of the day and was highest at the end of a shift; subject age had no effect on the tendency to experience discomfort. PMID:22993720

  7. Health-promoting vending machines: evaluation of a pediatric hospital intervention.

    PubMed

    Van Hulst, Andraea; Barnett, Tracie A; Déry, Véronique; Côté, Geneviève; Colin, Christine

    2013-01-01

    Taking advantage of a natural experiment made possible by the placement of health-promoting vending machines (HPVMs), we evaluated the impact of the intervention on consumers' attitudes toward and practices with vending machines in a pediatric hospital. Vending machines offering healthy snacks, meals, and beverages were developed to replace four vending machines offering the usual high-energy, low-nutrition fare. A pre- and post-intervention evaluation design was used; data were collected through exit surveys and six-week follow-up telephone surveys among potential vending machine users before (n=293) and after (n=226) placement of the HPVMs. Chi-square statistics were used to compare pre- and post-intervention participants' responses. More than 90% of pre- and post-intervention participants were satisfied with their purchase. Post-intervention participants were more likely to state that nutritional content and appropriateness of portion size were elements that influenced their purchase. Overall, post-intervention participants were more likely than pre-intervention participants to perceive the options offered by the hospital vending machines as healthy. Thirty-three percent of post-intervention participants recalled two or more sources of information integrated in the HPVM concept. No differences were found between pre- and post-intervention participants' readiness to adopt healthy diets. While the HPVM project had challenges as well as strengths, vending machines offering healthy snacks are feasible in hospital settings.

  8. Statistical interpretation of machine learning-based feature importance scores for biomarker discovery.

    PubMed

    Huynh-Thu, Vân Anh; Saeys, Yvan; Wehenkel, Louis; Geurts, Pierre

    2012-07-01

    Univariate statistical tests are widely used for biomarker discovery in bioinformatics. These procedures are simple and fast, and their output is easily interpretable by biologists, but they can only identify variables that provide a significant amount of information in isolation from the other variables. As biological processes are expected to involve complex interactions between variables, univariate methods thus potentially miss some informative biomarkers. Variable relevance scores provided by machine learning techniques, in contrast, are potentially able to highlight multivariate interacting effects, but unlike the p-values returned by univariate tests, these relevance scores are usually not statistically interpretable. This lack of interpretability hampers the determination of a relevance threshold for extracting a feature subset from the rankings and also prevents the wide adoption of these methods by practitioners. We evaluated several procedures, existing and novel, that extract relevant features from rankings derived from machine learning approaches. These procedures replace the relevance scores with measures that can be interpreted in a statistical way, such as p-values, false discovery rates, or family-wise error rates, for which it is easier to determine a significance level. Experiments were performed on several artificial problems as well as on real microarray datasets. Although the methods differ in terms of computing times and the tradeoff they achieve between false positives and false negatives, some of them greatly help in the extraction of truly relevant biomarkers and should thus be of great practical interest for biologists and physicians. As a side conclusion, our experiments also clearly highlight that using model performance as a criterion for feature selection is often counter-productive. Python source code for all tested methods, as well as the MATLAB scripts used for data simulation, can be found in the Supplementary Material.
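The core idea of turning relevance scores into statistically interpretable quantities can be sketched with a permutation test: permute the response to build a null distribution of scores, then rank each observed score against it to get an empirical p-value. A minimal NumPy sketch; the correlation-based relevance score and toy data below are illustrative stand-ins, not the paper's actual machine-learning importances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 50 samples, 5 features; only feature 0 carries signal.
n, p = 50, 5
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.1 * rng.normal(size=n)

def relevance(X, y):
    # Placeholder relevance score: absolute Pearson correlation per feature.
    # (In the paper's setting this would be, e.g., a random-forest importance.)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

observed = relevance(X, y)

# Null distribution: recompute the scores on permuted responses, then convert
# each observed score into an empirical p-value.
B = 500
null = np.empty((B, p))
for b in range(B):
    null[b] = relevance(X, rng.permutation(y))
pvals = (1 + (null >= observed).sum(axis=0)) / (B + 1)

print(pvals)  # feature 0 should receive a small p-value
```

Once scores are expressed as p-values, a conventional significance level (or FDR correction) gives the relevance threshold that raw rankings lack.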

  9. SU-F-P-20: Predicting Waiting Times in Radiation Oncology Using Machine Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joseph, A; Herrera, D; Hijal, T

    Purpose: Waiting times remain one of the most vexing patient satisfaction challenges facing healthcare. Waiting time uncertainty can cause patients, who are already sick or in pain, to worry about when they will receive the care they need. These waiting periods are often difficult for staff to predict, and only rough estimates are typically provided based on personal experience. This level of uncertainty leaves most patients unable to plan their calendar, making the waiting experience uncomfortable, even painful. In the present era of electronic health records (EHRs), waiting times need not be so uncertain. Extensive EHRs provide unprecedented amounts of data that can statistically cluster towards representative values when appropriate patient cohorts are selected. Predictive modelling, such as machine learning, is a powerful approach that benefits from large, potentially complex, datasets. The essence of machine learning is to predict future outcomes by learning from previous experience. The application of a machine learning algorithm to waiting time data has the potential to produce personalized waiting time predictions such that the uncertainty may be removed from the patient’s waiting experience. Methods: In radiation oncology, patients typically experience several types of waiting (e.g., waiting at home for treatment planning, waiting in the waiting room for oncologist appointments, and daily waiting in the waiting room for radiotherapy treatments). A daily treatment wait time model is discussed in this report. To develop a prediction model using our large dataset (with more than 100k sample points), a variety of machine learning algorithms from the Python package sklearn were tested. Results: We found that the Random Forest Regressor model provides the best predictions for daily radiotherapy treatment waiting times. Using this model, we achieved a median residual (actual value minus predicted value) of 0.25 minutes and a standard deviation of the residuals of 6.5 minutes, meaning that the majority of our estimates are within 6.5 minutes of the actual wait time. Conclusion: The goal of this project was to define an appropriate machine learning algorithm to estimate waiting times based on the collective knowledge and experience learned from previous patients. Our results offer an opportunity to improve the information provided to patients and family members regarding the amount of time they can expect to wait for radiotherapy treatment at our centre. AJ acknowledges support by the CREATE Medical Physics Research Training Network grant of the Natural Sciences and Engineering Research Council (Grant number: 432290) and from the 2014 Q+ Initiative of the McGill University Health Centre.
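As a rough illustration of the approach described, a Random Forest Regressor from sklearn can be fit to synthetic wait-time data and evaluated with the same residual summaries quoted above. This is a hedged sketch, not the authors' model: the feature columns (appointment hour, machine id, queue length) and the data-generating process are invented for the example:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Hypothetical features for a daily-treatment wait-time model.
n = 2000
X = np.column_stack([
    rng.integers(8, 17, n),   # appointment hour
    rng.integers(0, 4, n),    # treatment machine id
    rng.integers(0, 15, n),   # patients already waiting at check-in
])
# Synthetic wait times (minutes): queue length dominates, plus noise.
wait = 3.0 * X[:, 2] + 2.0 * (X[:, 1] == 0) + rng.normal(0, 5, n)

train, test = slice(0, 1500), slice(1500, None)
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[train], wait[train])

# Evaluate with the residual summaries used in the abstract.
residuals = wait[test] - model.predict(X[test])
print(f"median residual: {np.median(residuals):.2f} min")
print(f"std residual:    {residuals.std():.2f} min")
```

In practice the cohort features would come from the EHR, and the residual spread (here dominated by the injected noise) is what bounds the honesty of any quoted wait-time estimate.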

  10. CMM Interim Check (U)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montano, Joshua Daniel

    2015-03-23

    Coordinate Measuring Machines (CMMs) are widely used in industry, throughout the Nuclear Weapons Complex, and at Los Alamos National Laboratory (LANL) to verify part conformance to design definition. Calibration cycles for CMMs at LANL are predominantly one year in length. Unfortunately, several nonconformance reports have been generated to document the discovery of a certified machine found out of tolerance during a calibration closeout. In an effort to reduce risk to product quality, two solutions were proposed: shorten the calibration cycle, which could be costly, or perform an interim check to monitor the machine’s performance between cycles. The CMM interim check discussed here makes use of Renishaw’s Machine Checking Gauge. This off-the-shelf product simulates a large sphere within a CMM’s measurement volume and allows for error estimation. Data was gathered, analyzed, and simulated from seven machines in seventeen different configurations to create statistical process control run charts for on-the-floor monitoring.
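An interim check of this kind feeds a simple individuals control chart: 3-sigma limits computed from an in-control baseline flag suspicious gauge readings between calibrations. A minimal NumPy sketch with invented error values (not LANL data, and not the actual run-chart rules used in the report):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical interim-check readings: sphere-radius error (um) reported by a
# machine-checking gauge on one CMM configuration, while in control.
baseline = rng.normal(0.0, 0.5, 30)

# New daily readings to screen; 3.5 um is deliberately out of family.
new_readings = np.array([0.2, -0.1, 3.5, 0.3])

# Individuals chart: centre line and 3-sigma limits from the baseline period.
centre = baseline.mean()
sigma = baseline.std(ddof=1)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

flags = (new_readings > ucl) | (new_readings < lcl)
print("limits:", (round(lcl, 2), round(ucl, 2)))
print("out of control:", new_readings[flags])
```

A reading outside the limits would trigger investigation well before the annual calibration closeout, which is the point of the interim check.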

  11. A Comparison of Machine Learning Approaches for Corn Yield Estimation

    NASA Astrophysics Data System (ADS)

    Kim, N.; Lee, Y. W.

    2017-12-01

    Machine learning is an efficient empirical method for classification and prediction, and it is another approach to crop yield estimation. The objective of this study is to estimate corn yield in the Midwestern United States by employing machine learning approaches such as the support vector machine (SVM), random forest (RF), and deep neural networks (DNN), and to perform a comprehensive comparison of their results. We constructed the database using satellite images from MODIS, the climate data of the PRISM climate group, and GLDAS soil moisture data. In addition, to examine the seasonal sensitivities of corn yields, two period groups were set up: May to September (MJJAS) and July and August (JA). Overall, the DNN showed the highest accuracy in terms of the correlation coefficient for the two period groups. The differences between our predictions and USDA yield statistics were about 10-11%.
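Comparing model accuracy "in terms of the correlation coefficient" reduces to computing Pearson's r between predicted and observed yields for each model. A small sketch with made-up yields and predictions; the two prediction arrays are placeholders for trained models' outputs, not the study's actual SVM and DNN results:

```python
import numpy as np

def pearson_r(obs, pred):
    """Pearson correlation coefficient between observed and predicted yields."""
    o, p = obs - obs.mean(), pred - pred.mean()
    return float(o @ p / (np.linalg.norm(o) * np.linalg.norm(p)))

# Hypothetical county-level corn yields (t/ha) and two models' predictions.
obs      = np.array([9.1, 10.4,  8.7, 11.2, 9.8, 10.9])
pred_dnn = np.array([9.0, 10.1,  8.9, 11.0, 9.9, 10.6])  # tracks obs closely
pred_svm = np.array([9.6,  9.9, 10.4, 10.3, 9.7, 10.2])  # misses the low case

for name, pred in [("DNN", pred_dnn), ("SVM", pred_svm)]:
    err = np.abs(pred - obs) / obs * 100          # percent difference
    print(f"{name}: r = {pearson_r(obs, pred):.3f}, "
          f"mean |error| = {err.mean():.1f} %")
```

The percent difference column mirrors the abstract's comparison against USDA yield statistics; ranking models by r is what makes the DNN "highest accuracy" claim concrete.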

  12. NASA's online machine aided indexing system

    NASA Technical Reports Server (NTRS)

    Silvester, June P.; Genuardi, Michael T.; Klingbiel, Paul H.

    1993-01-01

    This report describes the NASA Lexical Dictionary (NLD), a machine-aided indexing system used online at the National Aeronautics and Space Administration's Center for Aerospace Information (CASI). The system comprises a text processor based on the computational, non-syntactic analysis of input text, and an extensive 'knowledge base' that serves to recognize and translate text-extracted concepts. The structure and function of the various NLD system components are described in detail. Methods used for the development of the knowledge base are discussed. Particular attention is given to a statistically based text analysis program that provides the knowledge base developer with a list of concept-specific phrases extracted from large textual corpora. Production and quality benefits resulting from the integration of machine-aided indexing at CASI are discussed, along with a number of secondary applications of NLD-derived systems, including online spell checking and machine-aided lexicography.

  13. Modeling Geomagnetic Variations using a Machine Learning Framework

    NASA Astrophysics Data System (ADS)

    Cheung, C. M. M.; Handmer, C.; Kosar, B.; Gerules, G.; Poduval, B.; Mackintosh, G.; Munoz-Jaramillo, A.; Bobra, M.; Hernandez, T.; McGranaghan, R. M.

    2017-12-01

    We present a framework for data-driven modeling of Heliophysics time series data. The Solar Terrestrial Interaction Neural net Generator (STING) is an open source python module built on top of state-of-the-art statistical learning frameworks (traditional machine learning methods as well as deep learning). To showcase the capability of STING, we deploy it for the problem of predicting the temporal variation of geomagnetic fields. The data used includes solar wind measurements from the OMNI database and geomagnetic field data taken by magnetometers at US Geological Survey observatories. We examine the predictive capability of different machine learning techniques (recurrent neural networks, support vector machines) for a range of forecasting times (minutes to 12 hours). STING is designed to be extensible to other types of data. We show how STING can be used on large sets of data from different sensors/observatories and adapted to tackle other problems in Heliophysics.

  14. Machine Learning in the Big Data Era: Are We There Yet?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sukumar, Sreenivas Rangan

    In this paper, we discuss the machine learning challenges of the Big Data era. We observe that recent innovations in being able to collect, access, organize, integrate, and query massive amounts of data from a wide variety of data sources have brought statistical machine learning under more scrutiny and evaluation for gleaning insights from the data than ever before. In that context, we pose and debate the question: are machine learning algorithms scaling with the ability to store and compute? If yes, how? If not, why not? We survey recent developments in the state of the art to discuss emerging and outstanding challenges in the design and implementation of machine learning algorithms at scale. We leverage experience from real-world Big Data knowledge discovery projects across the domains of national security and healthcare to suggest that efforts be focused along the following axes: (i) the data science challenge - designing scalable and flexible computational architectures for machine learning (beyond just data retrieval); (ii) the science-of-data challenge - the ability to understand characteristics of data before applying machine learning algorithms and tools; and (iii) the scalable predictive functions challenge - the ability to construct, learn, and infer with increasing sample size, dimensionality, and categories of labels. We conclude with a discussion of opportunities and directions for future research.

  15. Mortality risk prediction in burn injury: Comparison of logistic regression with machine learning approaches.

    PubMed

    Stylianou, Neophytos; Akbarov, Artur; Kontopantelis, Evangelos; Buchan, Iain; Dunn, Ken W

    2015-08-01

    Predicting mortality from burn injury has traditionally employed logistic regression models. Alternative machine learning methods have been introduced in some areas of clinical prediction as the necessary software and computational facilities have become accessible. Here we compare logistic regression and machine learning predictions of mortality from burn injury. An established logistic mortality model was compared to machine learning methods (artificial neural network, support vector machine, random forests and naïve Bayes) using a population-based (England & Wales) case-cohort registry. Predictive evaluation used: area under the receiver operating characteristic curve; sensitivity; specificity; positive predictive value and Youden's index. All methods had comparable discriminatory abilities, similar sensitivities, specificities and positive predictive values. Although some machine learning methods performed marginally better than logistic regression, the differences were seldom statistically significant and clinically insubstantial. Random forests were marginally better for high positive predictive value and reasonable sensitivity. Neural networks yielded slightly better prediction overall. Logistic regression gives an optimal mix of performance and interpretability. The established logistic regression model of burn mortality performs well against more complex alternatives. Clinical prediction with a small set of strong, stable, independent predictors is unlikely to gain much from machine learning outside specialist research contexts. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
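
    The evaluation metrics this abstract names can be sketched in a few lines. The sketch below is illustrative only (not the study's code, and the labels are hypothetical): it computes sensitivity, specificity, positive predictive value and Youden's index from binary predictions.

```python
# Illustrative sketch, not the paper's implementation: the discrimination
# metrics named in the abstract, computed from binary outcome labels.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    ppv = tp / (tp + fp)                  # positive predictive value
    youden = sensitivity + specificity - 1  # Youden's J statistic
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "youden": youden}

# Hypothetical toy labels: 1 = died, 0 = survived.
y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]
m = binary_metrics(y_true, y_pred)
```

    Youden's index combines sensitivity and specificity into a single number between 0 (no discrimination) and 1 (perfect), which is how the study could compare models on one scale.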

  16. Prediction of outcome in internet-delivered cognitive behaviour therapy for paediatric obsessive-compulsive disorder: A machine learning approach.

    PubMed

    Lenhard, Fabian; Sauer, Sebastian; Andersson, Erik; Månsson, Kristoffer Nt; Mataix-Cols, David; Rück, Christian; Serlachius, Eva

    2018-03-01

    There are no consistent predictors of treatment outcome in paediatric obsessive-compulsive disorder (OCD). One reason for this might be the use of suboptimal statistical methodology. Machine learning is an approach to efficiently analyse complex data, and has been widely used within other fields, but has rarely been tested in the prediction of paediatric mental health treatment outcomes. Our aim was to test four different machine learning methods for the prediction of treatment response in a sample of paediatric OCD patients who had received internet-delivered cognitive behaviour therapy (ICBT). Participants were 61 adolescents (12-17 years) who enrolled in a randomized controlled trial and received ICBT. All clinical baseline variables were used to predict strictly defined treatment response status three months after ICBT. Four machine learning algorithms were implemented. For comparison, we also employed a traditional logistic regression approach. Multivariate logistic regression could not detect any significant predictors. In contrast, all four machine learning algorithms performed well in the prediction of treatment response, with 75 to 83% accuracy. The results suggest that machine learning algorithms can successfully be applied to predict paediatric OCD treatment outcome. Validation studies and studies in other disorders are warranted. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Are we there yet?

    PubMed

    Cristianini, Nello

    2010-05-01

    Statistical approaches to Artificial Intelligence are behind most success stories of the field in the past decade. The idea of generating non-trivial behaviour by analysing vast amounts of data has enabled recommendation systems, search engines, spam filters, optical character recognition, machine translation and speech recognition, among other things. As we celebrate the spectacular achievements of this line of research, we need to assess its full potential and its limitations. What are the next steps to take towards machine intelligence? 2010 Elsevier Ltd. All rights reserved.

  18. Machine Learning Methods for Production Cases Analysis

    NASA Astrophysics Data System (ADS)

    Mokrova, Nataliya V.; Mokrov, Alexander M.; Safonova, Alexandra V.; Vishnyakov, Igor V.

    2018-03-01

    An approach to the analysis of events occurring during the production process is proposed. The described machine learning system is able to solve classification tasks related to production control and hazard identification at an early stage. Descriptors of the internal production network data were used for training and testing of the applied models. k-Nearest Neighbors and Random Forest methods were used to illustrate and analyze the proposed solution. The quality of the developed classifiers was estimated using standard statistical metrics, such as precision, recall and accuracy.
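
    As a minimal sketch of the pipeline this abstract outlines (the internal production descriptors are not public, so the data here are toy values): a k-nearest-neighbours classifier scored with precision, recall and accuracy.

```python
# Illustrative sketch, assuming a toy 2-D descriptor space with two event
# classes (0 = normal, 1 = hazard); not the paper's data or code.
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    # Rank training points by squared Euclidean distance, vote among top k.
    order = sorted(range(len(train_X)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train_X[i], x)))
    votes = Counter(train_y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

def scores(y_true, y_pred, positive=1):
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return tp / (tp + fp), tp / (tp + fn), acc  # precision, recall, accuracy

train_X = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
train_y = [0, 0, 0, 1, 1, 1]
test_X = [(0.5, 0.5), (5.5, 5.5), (6, 6), (1, 1)]
test_y = [0, 1, 1, 0]

pred = [knn_predict(train_X, train_y, x) for x in test_X]
precision, recall, acc = scores(test_y, pred)
```

    A random forest would slot into the same evaluation loop; only the `knn_predict` step changes.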

  19. TU-FG-201-05: Varian MPC as a Statistical Process Control Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carver, A; Rowbottom, C

    Purpose: Quality assurance in radiotherapy requires the measurement of various machine parameters to ensure they remain within permitted values over time. In Truebeam release 2.0 the Machine Performance Check (MPC) was released, allowing beam output and machine axis movements to be assessed in a single test. We aim to evaluate the Varian Machine Performance Check (MPC) as a tool for Statistical Process Control (SPC). Methods: Varian’s MPC tool was used on three Truebeam and one EDGE linac for a period of approximately one year. MPC was commissioned against independent systems. After this period the data were reviewed to determine whether or not the MPC was useful as a process control tool. Individual tests were analysed using Shewhart control charts, with Matlab used for the analysis. Principal component analysis was used to determine if a multivariate model was of any benefit in analysing the data. Results: Control charts were found to be useful to detect beam output changes, worn T-nuts and jaw calibration issues. Upper and lower control limits were defined at the 95% level. Multivariate SPC was performed using Principal Component Analysis. We found little evidence of clustering beyond that which might be naively expected, such as between beam uniformity and beam output. Whilst this makes multivariate analysis of little use, it suggests that each test gives independent information. Conclusion: The variety of independent parameters tested in MPC makes it a sensitive tool for routine machine QA. We have determined that using control charts in our QA programme would rapidly detect changes in machine performance. The use of control charts allows large quantities of tests to be performed on all linacs without visual inspection of all results. The use of control limits alerts users when data are inconsistent with previous measurements before they become out of specification. A. Carver has received a speaker’s honorarium from Varian.
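
    The Shewhart logic described here is simple to sketch. Assuming hypothetical beam-output readings (not actual MPC data): control limits at the 95% level are the baseline mean plus or minus 1.96 sample standard deviations, and later readings outside those limits are flagged.

```python
# Illustrative sketch of a Shewhart individuals chart, under assumed toy
# beam-output data (percent of nominal); not Varian MPC output.
import statistics

def shewhart_limits(baseline, z=1.96):
    # 95% control limits: mean +/- 1.96 sample standard deviations.
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return mu - z * sigma, mu + z * sigma

def out_of_control(values, lcl, ucl):
    # Indices of readings falling outside the control limits.
    return [i for i, v in enumerate(values) if v < lcl or v > ucl]

baseline = [99.8, 100.1, 100.0, 99.9, 100.2, 100.0, 99.9, 100.1]
lcl, ucl = shewhart_limits(baseline)

new_readings = [100.0, 99.9, 101.5, 100.1]  # third reading drifts high
flags = out_of_control(new_readings, lcl, ucl)
```

    In practice each MPC test (output, axis position, jaw calibration) would get its own chart, which is what lets the limits catch drifts before a hard tolerance is exceeded.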

  20. Physics with e{sup +}e{sup -} Linear Colliders

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barklow, Timothy L

    2003-05-05

    We describe the physics potential of e{sup +}e{sup -} linear colliders in this report. These machines are planned to operate in the first phase at a center-of-mass energy of 500 GeV, before being scaled up to about 1 TeV. In the second phase of the operation, a final energy of about 2 TeV is expected. The machines will allow us to perform precision tests of the heavy particles in the Standard Model, the top quark and the electroweak bosons. They are ideal facilities for exploring the properties of Higgs particles, in particular in the intermediate mass range. New vector bosons and novel matter particles in extended gauge theories can be searched for and studied thoroughly. The machines provide unique opportunities for the discovery of particles in supersymmetric extensions of the Standard Model, the spectrum of Higgs particles, the supersymmetric partners of the electroweak gauge and Higgs bosons, and of the matter particles. High precision analyses of their properties and interactions will allow for extrapolations to energy scales close to the Planck scale where gravity becomes significant. In alternative scenarios, like compositeness models, novel matter particles and interactions can be discovered and investigated in the energy range above the existing colliders up to the TeV scale. Whatever scenario is realized in Nature, the discovery potential of e{sup +}e{sup -} linear colliders and the high-precision with which the properties of particles and their interactions can be analyzed, define an exciting physics programme complementary to hadron machines.

  1. Physical Analytics: An emerging field with real-world applications and impact

    NASA Astrophysics Data System (ADS)

    Hamann, Hendrik

    2015-03-01

    In the past, most information on the internet was originated by humans or computers. However, with the emergence of cyber-physical systems, vast amounts of data are now being created by sensors from devices, machines, etc., digitizing the physical world. While cyber-physical systems are the subject of active research around the world, the vast amount of actual data generated from the physical world has so far attracted little attention from the engineering and physics communities. In this presentation we use examples to highlight the opportunities in this new subject of ``Physical Analytics'' for highly interdisciplinary research (including physics, engineering and computer science), which aims at understanding real-world physical systems by leveraging cyber-physical technologies. More specifically, the convergence of the physical world with the digital domain allows applying physical principles to everyday problems in a much more effective and informed way than was possible in the past. Very much as traditional applied physics and engineering have made enormous advances and changed our lives by making detailed measurements to understand the physics of an engineered device, we can now apply the same rigor and principles to understand large-scale physical systems. In the talk we first present a set of ``configurable'' enabling technologies for Physical Analytics, including ultralow-power sensing and communication technologies, physical big data management technologies, numerical modeling for physical systems, machine-learning-based physical model blending, and physical-analytics-based automation and control. Then we discuss in detail several concrete applications of Physical Analytics, ranging from energy management in buildings and data centers, environmental sensing and controls, and precision agriculture to renewable energy forecasting and management.

  2. Multivariate analysis of fMRI time series: classification and regression of brain responses using machine learning.

    PubMed

    Formisano, Elia; De Martino, Federico; Valente, Giancarlo

    2008-09-01

    Machine learning and pattern recognition techniques are being increasingly employed in functional magnetic resonance imaging (fMRI) data analysis. By taking into account the full spatial pattern of brain activity measured simultaneously at many locations, these methods allow detecting subtle, non-strictly localized effects that may remain invisible to the conventional analysis with univariate statistical methods. In typical fMRI applications, pattern recognition algorithms "learn" a functional relationship between brain response patterns and a perceptual, cognitive or behavioral state of a subject expressed in terms of a label, which may assume discrete (classification) or continuous (regression) values. This learned functional relationship is then used to predict the unseen labels from a new data set ("brain reading"). In this article, we describe the mathematical foundations of machine learning applications in fMRI. We focus on two methods, support vector machines and relevance vector machines, which are respectively suited for the classification and regression of fMRI patterns. Furthermore, by means of several examples and applications, we illustrate and discuss the methodological challenges of using machine learning algorithms in the context of fMRI data analysis.

  3. Math Machines: Using Actuators in Physics Classes

    ERIC Educational Resources Information Center

    Thomas, Frederick J.; Chaney, Robert A.; Gruesbeck, Marta

    2018-01-01

    Probeware (sensors combined with data-analysis software) is a well-established part of physics education. In engineering and technology, sensors are frequently paired with actuators--motors, heaters, buzzers, valves, color displays, medical dosing systems, and other devices that are activated by electrical signals to produce intentional physical…

  4. Worksite Food and Physical Activity Environments and Wellness Supports Reported by Employed Adults in the United States, 2013.

    PubMed

    Onufrak, Stephen J; Watson, Kathleen B; Kimmons, Joel; Pan, Liping; Khan, Laura Kettel; Lee-Kwan, Seung Hee; Park, Sohyun

    2018-01-01

    To examine the workplace food and physical activity (PA) environments and wellness culture reported by employed United States adults, overall and by employer size. Cross-sectional study using web-based survey on wellness policies and environmental supports for healthy eating and PA. Worksites in the United States. A total of 2101 adults employed outside the home. Survey items were based on the Centers for Disease Control and Prevention Worksite Health ScoreCard and Checklist of Health Promotion Environments and included the availability and promotion of healthy food items, nutrition education, promotion of breast-feeding, availability of PA amenities and programs, facility discounts, time for PA, stairwell signage, health promotion programs, and health risk assessments. Descriptive statistics were used to examine the prevalence of worksite environmental and facility supports by employer size (<100 or ≥100 employees). Chi-square tests were used to examine the differences by employer size. Among employed respondents with workplace food or drink vending machines, approximately 35% indicated the availability of healthy items. Regarding PA, 30.9% of respondents reported that their employer provided opportunities to be physically active and 17.6% reported worksite exercise facilities. Wellness programs were reported by 53.2% working for large employers, compared to 18.1% for smaller employers. Employee reports suggested that workplace supports for healthy eating, PA, and wellness were limited and were less common among smaller employers.

  5. A Statistician's View of Upcoming Grand Challenges

    NASA Astrophysics Data System (ADS)

    Meng, Xiao Li

    2010-01-01

    In this session we have seen some snapshots of the broad spectrum of challenges in this age of huge, complex, computer-intensive models, data, instruments, and questions. These challenges bridge astronomy at many wavelengths, basic physics, machine learning, and statistics. At one end of our spectrum, we think of 'compressing' the data with non-parametric methods. This raises the question of creating 'pseudo-replicas' of the data for uncertainty estimates. What would be involved in, e.g., bootstrap and related methods? Somewhere in the middle are the non-parametric methods for encapsulating the uncertainty information. At the far end, we find more model-based approaches, with the physics model embedded in the likelihood and analysis. The other distinctive problem is the 'black-box' problem, where one has a complicated, e.g. fundamental-physics-based, computer code, or 'black box', and one needs to know how changing the parameters at input -- due to uncertainties of any kind -- will map to changes in the output. All of these connect to challenges in complexity of data and computation speed. Dr. Meng will highlight ways to 'cut corners' with advanced computational techniques, such as Parallel Tempering and Equal Energy methods. As well, there are cautionary tales of running automated analysis on real data -- where "30 sigma" outliers due to data artifacts can be more common than the astrophysical event of interest.
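
    The bootstrap idea raised above can be sketched briefly. This is an illustrative toy (synthetic data, not from any survey): resample the data with replacement many times, recompute the statistic on each replica, and take the spread of the replicas as an uncertainty estimate.

```python
# Minimal bootstrap sketch on assumed synthetic measurements: the standard
# deviation of the resampled means estimates the standard error of the mean.
import random
import statistics

def bootstrap_se(data, stat=statistics.mean, n_boot=2000, seed=0):
    rng = random.Random(seed)  # fixed seed for reproducibility
    reps = [stat([rng.choice(data) for _ in data]) for _ in range(n_boot)]
    return statistics.stdev(reps)

data = [9.8, 10.1, 10.3, 9.7, 10.0, 10.4, 9.9, 10.2]
se = bootstrap_se(data)
# For the mean, se should land near stdev(data)/sqrt(n); the appeal of the
# bootstrap is that the same recipe works for statistics with no such formula.
```

    The 'pseudo-replica' question in the text is exactly about when such resampling is valid, e.g. for correlated or artifact-laden survey data the naive independent resampling above would understate the uncertainty.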

  6. Morphological diagnostics of star formation in molecular clouds

    NASA Astrophysics Data System (ADS)

    Beaumont, Christopher Norris

    Molecular clouds are the birth sites of all star formation in the present-day universe. They represent the initial conditions of star formation, and are the primary medium by which stars transfer energy and momentum back to parsec scales. Yet, the physical evolution of molecular clouds remains poorly understood. This is not due to a lack of observational data, nor is it due to an inability to simulate the conditions inside molecular clouds. Instead, the physics and structure of the interstellar medium are sufficiently complex that interpreting molecular cloud data is very difficult. This dissertation mitigates this problem, by developing more sophisticated ways to interpret morphological information in molecular cloud observations and simulations. In particular, I have focused on leveraging machine learning techniques to identify physically meaningful substructures in the interstellar medium, as well as techniques to inter-compare molecular cloud simulations to observations. These contributions make it easier to understand the interplay between molecular clouds and star formation. Specific contributions include: new insight about the sheet-like geometry of molecular clouds based on observations of stellar bubbles; a new algorithm to disambiguate overlapping yet morphologically distinct cloud structures; a new perspective on the relationship between molecular cloud column density distributions and the sizes of cloud substructures; a quantitative analysis of how projection effects affect measurements of cloud properties; and an automatically generated, statistically-calibrated catalog of bubbles identified from their infrared morphologies.

  7. Severe acute respiratory distress syndrome caused by unintentional sewing machine lubricant ingestion: A case report.

    PubMed

    Kishore, Sunil; Chandelia, Sudha; Patharia, Neha; Swarnim

    2016-11-01

    Sewing machine oil ingestion is rare but is possible due to its availability at home. Chemically, it belongs to the hydrocarbon family, members of which are toxic if aspirated owing to physical properties such as high volatility and low viscosity. In contrast, sewing machine lubricant has high viscosity and low volatility, which makes its aspiration less likely. The main danger of hydrocarbon ingestion is chemical pneumonitis, which may be as severe as acute respiratory distress syndrome (ARDS). We report a case of a 5-year-old girl with accidental ingestion of sewing machine lubricant oil, who subsequently developed ARDS refractory to mechanical ventilation. There was much improvement with the airway pressure release ventilation mode, but the child succumbed to pulmonary hemorrhage.

  8. Severe acute respiratory distress syndrome caused by unintentional sewing machine lubricant ingestion: A case report

    PubMed Central

    Kishore, Sunil; Chandelia, Sudha; Patharia, Neha; Swarnim

    2016-01-01

    Sewing machine oil ingestion is rare but is possible due to its availability at home. Chemically, it belongs to the hydrocarbon family, members of which are toxic if aspirated owing to physical properties such as high volatility and low viscosity. In contrast, sewing machine lubricant has high viscosity and low volatility, which makes its aspiration less likely. The main danger of hydrocarbon ingestion is chemical pneumonitis, which may be as severe as acute respiratory distress syndrome (ARDS). We report a case of a 5-year-old girl with accidental ingestion of sewing machine lubricant oil, who subsequently developed ARDS refractory to mechanical ventilation. There was much improvement with the airway pressure release ventilation mode, but the child succumbed to pulmonary hemorrhage. PMID:27994384

  9. Tensile strength of laser welded cobalt-chromium alloy with and without an argon atmosphere.

    PubMed

    Tartari, Anna; Clark, Robert K F; Juszczyk, Andrzej S; Radford, David R

    2010-06-01

    The tensile strength and depth of weld of two cobalt-chromium alloys before and after laser welding with and without an argon gas atmosphere were investigated. Using two cobalt-chromium alloys, rod-shaped specimens (5 cm x 1.5 mm) were cast. Specimens were sand-blasted, sectioned and welded with a pulsed Nd:YAG laser welding machine and tested in tension using an Instron universal testing machine. A statistically significant difference in tensile strength was observed between the two alloys. The tensile strength of specimens following laser welding was significantly less than that of the unwelded controls. Scanning electron microscopy showed that the microstructure of the cast alloy was altered in the region of the weld. No statistically significant difference was found between specimens welded with or without an argon atmosphere.

  10. Statistical Mechanics of Coherent Ising Machine — The Case of Ferromagnetic and Finite-Loading Hopfield Models —

    NASA Astrophysics Data System (ADS)

    Aonishi, Toru; Mimura, Kazushi; Utsunomiya, Shoko; Okada, Masato; Yamamoto, Yoshihisa

    2017-10-01

    The coherent Ising machine (CIM) has attracted attention as one of the most effective Ising computing architectures for solving large scale optimization problems because of its scalability and high-speed computational ability. However, it is difficult to implement the Ising computation in the CIM because the theories and techniques of classical thermodynamic equilibrium Ising spin systems cannot be directly applied to the CIM. This means we have to adapt these theories and techniques to the CIM. Here we focus on a ferromagnetic model and a finite loading Hopfield model, which are canonical models sharing a common mathematical structure with almost all other Ising models. We derive macroscopic equations to capture nonequilibrium phase transitions in these models. The statistical mechanical methods developed here constitute a basis for constructing evaluation methods for other Ising computation models.

  11. Combining MEDLINE and publisher data to create parallel corpora for the automatic translation of biomedical text

    PubMed Central

    2013-01-01

    Background Most of the institutional and research information in the biomedical domain is available in the form of English text. Even in countries where English is an official language, such as the United States, language can be a barrier for accessing biomedical information for non-native speakers. Recent progress in machine translation suggests that this technique could help make English texts accessible to speakers of other languages. However, the lack of adequate specialized corpora needed to train statistical models currently limits the quality of automatic translations in the biomedical domain. Results We show how a large-sized parallel corpus can automatically be obtained for the biomedical domain, using the MEDLINE database. The corpus generated in this work comprises article titles obtained from MEDLINE and abstract text automatically retrieved from journal websites, which substantially extends the corpora used in previous work. After assessing the quality of the corpus for two language pairs (English/French and English/Spanish) we use the Moses package to train a statistical machine translation model that outperforms previous models for automatic translation of biomedical text. Conclusions We have built translation data sets in the biomedical domain that can easily be extended to other languages available in MEDLINE. These sets can successfully be applied to train statistical machine translation models. While further progress should be made by incorporating out-of-domain corpora and domain-specific lexicons, we believe that this work improves the automatic translation of biomedical texts. PMID:23631733

  12. Machine learning classifier using abnormal brain network topological metrics in major depressive disorder.

    PubMed

    Guo, Hao; Cao, Xiaohua; Liu, Zhifen; Li, Haifang; Chen, Junjie; Zhang, Kerang

    2012-12-05

    Resting state functional brain networks have been widely studied in brain disease research. However, it is currently unclear whether abnormal resting state functional brain network metrics can be used with machine learning for the classification of brain diseases. Resting state functional brain networks were constructed for 28 healthy controls and 38 major depressive disorder patients by thresholding partial correlation matrices of 90 regions. Three nodal metrics were calculated using graph theory-based approaches. Nonparametric permutation tests were then used for group comparisons of topological metrics, which were used as features in six different classification algorithms. We used statistical significance as the threshold for selecting features and measured the accuracies of the six classifiers with different numbers of features. A sensitivity analysis method was used to evaluate the importance of different features. The results indicated that some of the regions exhibited significantly abnormal nodal centralities, including the limbic system, basal ganglia, medial temporal, and prefrontal regions. The support vector machine with radial basis kernel function and the neural network algorithm exhibited the highest average accuracies (79.27 and 78.22%, respectively) with 28 features (P<0.05). Correlation analysis between feature importance and the statistical significance of metrics was investigated, and the results revealed a strong positive correlation between them. Overall, the current study demonstrated that major depressive disorder is associated with abnormal functional brain network topological metrics and that statistically significant nodal metrics can be successfully used for feature selection in classification algorithms.
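
    The construction step this abstract describes, thresholding a correlation matrix into a network and computing nodal metrics, can be sketched on a toy graph. This is illustrative only (a hypothetical 4-node network, not the 90-region brain data), showing degree centrality, one common nodal centrality measure.

```python
# Illustrative sketch with an assumed toy correlation matrix: threshold to an
# adjacency matrix, then compute normalised nodal degree centrality.

def threshold_adjacency(corr, thr):
    # Edge between i and j if |correlation| meets the threshold (no self-loops).
    n = len(corr)
    return [[1 if i != j and abs(corr[i][j]) >= thr else 0
             for j in range(n)] for i in range(n)]

def degree_centrality(adj):
    # Degree divided by the maximum possible degree (n - 1).
    n = len(adj)
    return [sum(row) / (n - 1) for row in adj]

corr = [
    [1.0, 0.8, 0.1, 0.6],
    [0.8, 1.0, 0.2, 0.5],
    [0.1, 0.2, 1.0, 0.1],
    [0.6, 0.5, 0.1, 1.0],
]
adj = threshold_adjacency(corr, thr=0.5)
dc = degree_centrality(adj)
```

    In a study like this one, such per-region centrality values become the feature vector fed to the classifiers, with group-level statistics used to select which regions' features to keep.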

  13. A framework for medical image retrieval using machine learning and statistical similarity matching techniques with relevance feedback.

    PubMed

    Rahman, Md Mahmudur; Bhattacharya, Prabir; Desai, Bipin C

    2007-01-01

    A content-based image retrieval (CBIR) framework for diverse collection of medical images of different imaging modalities, anatomic regions with different orientations and biological systems is proposed. Organization of images in such a database (DB) is well defined with predefined semantic categories; hence, it can be useful for category-specific searching. The proposed framework consists of machine learning methods for image prefiltering, similarity matching using statistical distance measures, and a relevance feedback (RF) scheme. To narrow down the semantic gap and increase the retrieval efficiency, we investigate both supervised and unsupervised learning techniques to associate low-level global image features (e.g., color, texture, and edge) in the projected PCA-based eigenspace with their high-level semantic and visual categories. Specifically, we explore the use of a probabilistic multiclass support vector machine (SVM) and fuzzy c-means (FCM) clustering for categorization and prefiltering of images to reduce the search space. A category-specific statistical similarity matching is proposed at a finer level on the prefiltered images. To incorporate a better perception subjectivity, an RF mechanism is also added to update the query parameters dynamically and adjust the proposed matching functions. Experiments are based on a ground-truth DB consisting of 5000 diverse medical images of 20 predefined categories. Analysis of results based on cross-validation (CV) accuracy and precision-recall for image categorization and retrieval is reported. It demonstrates the improvement, effectiveness, and efficiency achieved by the proposed framework.

  14. Physics Accomplishments and Future Prospects of the BES Experiments at the Beijing Electron-Positron Collider

    NASA Astrophysics Data System (ADS)

    Briere, Roy A.; Harris, Frederick A.; Mitchell, Ryan E.

    2016-10-01

    The cornerstone of the Chinese experimental particle physics program is a series of experiments performed in the τ-charm energy region. China began building e+e- colliders at the Institute for High Energy Physics in Beijing more than three decades ago. Beijing Electron Spectrometer (BES) is the common root name for the particle physics detectors operated at these machines. We summarize the development of the BES program and highlight the physics results across several topical areas.

  15. Requirements for fault-tolerant factoring on an atom-optics quantum computer.

    PubMed

    Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae

    2013-01-01

    Quantum information processing and its associated technologies have reached a pivotal stage in their development, with many experiments having established the basic building blocks. Moving forward, the challenge is to scale up to larger machines capable of performing computational tasks not possible today. This raises questions that need to be urgently addressed, such as what resources these machines will consume and how large will they be. Here we estimate the resources required to execute Shor's factoring algorithm on an atom-optics quantum computer architecture. We determine the runtime and size of the computer as a function of the problem size and physical error rate. Our results suggest that once the physical error rate is low enough to allow quantum error correction, optimization to reduce resources and increase performance will come mostly from integrating algorithms and circuits within the error correction environment, rather than from improving the physical hardware.

  16. Atomic physics research with second and third generation synchrotron light sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, B.M.

    1990-10-01

    This contribution to these proceedings is intended to provide an introduction and overview for other contributions on atomic (and related) physics research at existing and planned synchrotron light sources. The emphasis will be on research accomplishments and future opportunities, but a comparison will be given of operating characteristics for first, second, and third generation machines. First generation light sources were built to do research with the primary electron and positron beams, rather than with the synchrotron radiation itself. Second generation machines were specifically designed to be dedicated synchrotron-radiation facilities, with an emphasis on the use of bending-magnet radiation. The new third generation light sources are being designed to optimize radiation from insertion devices, such as undulators and wigglers. Each generation of synchrotron light source offers useful capabilities for forefront research in atomic physics and many other disciplines. 27 refs., 1 fig., 3 tabs.

  17. TOPICAL REVIEW: Advances in traceable nanometrology at the National Physical Laboratory

    NASA Astrophysics Data System (ADS)

    Leach, Richard; Haycocks, Jane; Jackson, Keith; Lewis, Andrew; Oldfield, Simon; Yacoot, Andrew

    2001-03-01

    The only difference between nanotechnology and many other fields of science or engineering is that of size. Control in manufacturing at the nanometre scale still requires accurate and traceable measurements whether one is attempting to machine optical quality glass or write one's company name in single atoms. A number of instruments have been developed at the National Physical Laboratory that address the measurement requirements of the nanotechnology community and provide traceability to the definition of the metre. The instruments discussed in this paper are an atomic force microscope and a surface texture measuring instrument with traceable metrology in all their operational axes, a combined optical and x-ray interferometer system that can be used to calibrate displacement transducers to subnanometre accuracy and a co-ordinate measuring machine with a working volume of (50 mm)3 and 50 nm volumetric accuracy.

  18. Neural activity during affect labeling predicts expressive writing effects on well-being: GLM and SVM approaches.

    PubMed

    Memarian, Negar; Torre, Jared B; Haltom, Kate E; Stanton, Annette L; Lieberman, Matthew D

    2017-09-01

    Affect labeling (putting feelings into words) is a form of incidental emotion regulation that could underpin some benefits of expressive writing (i.e. writing about negative experiences). Here, we show that neural responses during affect labeling predicted changes in psychological and physical well-being outcome measures 3 months later. Furthermore, neural activity of specific frontal regions and amygdala predicted those outcomes as a function of expressive writing. Using supervised learning (support vector machines regression), improvements in four measures of psychological and physical health (physical symptoms, depression, anxiety and life satisfaction) after an expressive writing intervention were predicted with an average of 0.85% prediction error [root mean square error (RMSE) %]. The predictions were significantly more accurate with machine learning than with the conventional generalized linear model method (average RMSE: 1.3%). Consistent with affect labeling research, right ventrolateral prefrontal cortex (RVLPFC) and amygdalae were top predictors of improvement in the four outcomes. Moreover, RVLPFC and left amygdala predicted benefits due to expressive writing in satisfaction with life and depression outcome measures, respectively. This study demonstrates the substantial merit of supervised machine learning for real-world outcome prediction in social and affective neuroscience. © The Author (2017). Published by Oxford University Press.
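
    As a minimal illustration of the error metric reported above, the sketch below computes RMSE normalized to the outcome range. The paper's exact normalization for "RMSE %" is an assumption here, and all numbers are invented, not the study's data:

```python
import numpy as np

def rmse_percent(y_true, y_pred):
    """Root mean square error, expressed as a percentage of the outcome range.

    This is one plausible reading of the 'RMSE %' metric in the abstract;
    the paper's exact normalization is an assumption here.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (y_true.max() - y_true.min())

# Hypothetical depression-score changes (not the study's measurements).
observed  = [4.0, 2.0, -1.0, 3.0, 0.0]
predicted = [3.5, 2.5, -0.5, 2.0, 0.5]
print(round(rmse_percent(observed, predicted), 2))
```

    On this scale, comparing two predictors (e.g. an SVM regressor vs. a generalized linear model) reduces to comparing their `rmse_percent` values on held-out data.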

  19. Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals

    NASA Astrophysics Data System (ADS)

    Loyola, D. G.

    2017-12-01

    Satellite remote sensing retrievals are usually ill-posed inverse problems, typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. The classical inversion methods are very time-consuming, as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called the full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase, in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and a smart sampling technique, and an operational phase, in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of the SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors, with their unprecedented spectral and spatial resolution and associated large increases in the amount of data.
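
    The two-phase structure described above (train an inversion operator on synthetic forward-model output, then apply it to measurements) can be sketched with a toy stand-in for the radiative transfer model. The exponential forward model, the polynomial regressor and all numbers below are illustrative assumptions, not the paper's actual physics or learning machine:

```python
import numpy as np

# Training phase: generate synthetic pairs (state -> simulated measurement)
# with a toy forward model standing in for the radiative transfer code.
def forward_model(plume_height_km):
    return np.exp(-0.4 * plume_height_km)   # toy "radiance" vs. plume height

heights = np.linspace(0.5, 5.0, 200)        # sampled state values
radiances = forward_model(heights)          # simulated measurements

# Learn the inversion operator measurement -> state (a polynomial fit
# stands in here for the paper's machine-learning regressor).
inv_op = np.polyfit(radiances, heights, deg=5)

# Operational phase: apply the trained operator to a "real" measurement.
measured = forward_model(2.5)               # pretend this came from the sensor
retrieved = np.polyval(inv_op, measured)
print(f"retrieved plume height: {retrieved:.2f} km")
```

    The expensive forward model is only called during training; the operational phase is a single cheap function evaluation, which is where the speed-up comes from.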

  20. Engineering of the 'PCAST machine'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sinnis, J.; Brooks, A.; Brown, T.

    The President's Committee of Advisors on Science and Technology (PCAST) has suggested that a device with a mission of ignition and moderate burn time could address the physics of burning plasmas at a lesser cost than ITER with its more comprehensive physics and technology mission. The Department of Energy commissioned a study to explore this PCAST suggestion. This paper describes the results of the engineering portion of the study of this 'PCAST Machine'; the physics is covered in a companion paper by G.H. Neilson et al., and the costs are covered in a companion paper by R.T. Simmons et al. Both are published in the proceedings of this conference. The study was undertaken by a team under the direction of Bruce Montgomery that included representatives from MIT, PPPL, ORNL, LLNL, GA, Northrop Grumman, and Stone and Webster. The performance requirements for the PCAST machine are to form and sustain a burning plasma for three helium accumulation times. The philosophy adopted for this design was to achieve the required performance at lower cost by decreasing the major radius to five meters, increasing the toroidal field to 7 tesla, and using stronger shaping. The major device parameters are given. 4 refs., 4 figs., 1 tab.

  1. A journey from nuclear criticality methods to high energy density radflow experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urbatsch, Todd James

    Los Alamos National Laboratory is a nuclear weapons laboratory supporting our nation's defense. In support of this mission is a high energy-density physics program in which we design and execute experiments to study radiation-hydrodynamics phenomena and improve the predictive capability of our large-scale multi-physics software codes on our big-iron computers. The Radflow project's main experimental effort now is to understand why we haven't been able to predict opacities on Sandia National Laboratory's Z-machine. We are modeling an increasing fraction of the Z-machine's dynamic hohlraum to find multi-physics explanations for the experimental results. Further, we are building an entirely different opacity platform on Lawrence Livermore National Laboratory's National Ignition Facility (NIF), which is set to produce results in early 2017. Will the results match our predictions, match the Z-machine, or give us something entirely different? The new platform brings new challenges such as designing hohlraums and spectrometers. The speaker will recount his history, starting with one-dimensional Monte Carlo nuclear criticality methods in graduate school, radiative transfer methods research and software development for his first 16 years at LANL, and, now, radflow technology and experiments. Who knew that the real world was more than just radiation transport? Experiments aren't easy, but they sure are fun.

  2. Obtaining the Thermal Efficiency of a Steam Railroad Machine Toy According to Dale's Cone of Learning

    NASA Astrophysics Data System (ADS)

    Bautista-Hernandez, Omar Tomas; Ruiz-Chavarria, Gregorio

    2011-03-01

    Physics is crucial to understanding the world around us, the world inside us, and the world beyond us. It is the most basic and fundamental science; hence our interest in developing innovative strategies, supported by imagination and knowledge, to make the learning process fun, attractive and interesting, and so help to change the general idea that physics is an abstract and complicated science. We all know this instinctively; however, turn-of-the-century educationist Edgar Dale illustrated it with research when he developed the Cone of Learning, which states that after two weeks we remember only 10% of what we read, but 90% of what we do. Based on that theory, we obtain the thermal efficiency of a steam railroad machine (a toy train that can be bought at any department store), and show the large percentage of energy lost when moving this railroad machine, just as in real life. In doing this practice we do not focus on the results themselves; instead, we try to demonstrate that physics is fun and not difficult to learn. We must stress that this practice was done with pre-university and university students, but it can be shown to the community in general.

  3. Investigation of machinability characteristics on EN47 steel for cutting force and tool wear using optimization technique

    NASA Astrophysics Data System (ADS)

    M, Vasu; Shivananda Nayaka, H.

    2018-06-01

    In this experimental work, a dry turning process carried out on EN47 spring steel with a coated tungsten carbide tool insert of 0.8 mm nose radius was optimized using statistical techniques. Experiments were conducted at three cutting speeds (625, 796 and 1250 rpm), three feed rates (0.046, 0.062 and 0.093 mm/rev) and three depths of cut (0.2, 0.3 and 0.4 mm), following a 3³ full factorial design (FFD) with three factors at three levels. Analysis of variance was used to identify the significant factors for each output response. The results reveal that feed rate is the most significant factor influencing cutting force, followed by depth of cut, with cutting speed having less significance. The optimum machining condition for cutting force was obtained from the statistical technique. Tool wear measurements were performed at the optimum condition of Vc = 796 rpm, ap = 0.2 mm, f = 0.046 mm/rev. The minimum tool wear observed was 0.086 mm after 5 min of machining. Tool wear analysis with a confocal microscope showed that tool wear increases with increasing cutting time.
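
    A full factorial design with three factors at three levels simply enumerates every combination of the factor levels, giving 27 experimental runs. A minimal sketch using the levels reported in the abstract:

```python
from itertools import product

# The three factor levels reported in the abstract.
speeds_rpm  = [625, 796, 1250]
feeds_mmrev = [0.046, 0.062, 0.093]
depths_mm   = [0.2, 0.3, 0.4]

# A 3^3 full factorial design enumerates every combination of levels,
# giving the 27 experimental runs used for the ANOVA.
runs = list(product(speeds_rpm, feeds_mmrev, depths_mm))
print(len(runs))   # 27
print(runs[0])     # (625, 0.046, 0.2)
```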

  4. Noise induced hearing loss of forest workers in Turkey.

    PubMed

    Tunay, M; Melemez, K

    2008-09-01

    In this study, a total of 114 workers in 3 different groups, in terms of age and work, underwent audiometric analysis. In order to determine whether there was a statistically significant difference between the hearing loss levels of the workers included in the study, variance analysis was applied to the data obtained from the evaluation. Correlation and regression analysis were applied to determine the relations between hearing loss and the workers' age and time of work. The variance analysis showed statistically significant differences at the 500, 2000 and 4000 Hz frequencies; the most pronounced difference was observed among chainsaw operators at 4000 Hz. The correlation analysis found significant relations between time of work and hearing loss at the 0.01 significance level, and between age and hearing loss at the 0.05 significance level. Forest workers using chainsaws should be informed of the risk and should wear hearing protection, less noisy chainsaws should be used where possible, and workers should undergo audiometric tests when they start work and once a year thereafter.
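
    A correlation and regression analysis of the kind described can be sketched as follows. The exposure and hearing-loss numbers are fabricated for illustration, not the study's measurements:

```python
import numpy as np

# Illustrative (fabricated) data: years on the job vs. hearing threshold
# shift in dB at 4000 Hz -- not the study's measurements.
years   = np.array([1, 3, 5, 8, 10, 12, 15, 18, 20, 25], dtype=float)
loss_db = np.array([5, 8, 12, 14, 18, 22, 24, 30, 33, 40], dtype=float)

r = np.corrcoef(years, loss_db)[0, 1]              # Pearson correlation
slope, intercept = np.polyfit(years, loss_db, 1)   # simple linear regression
print(f"r = {r:.3f}, slope = {slope:.2f} dB/year")
```

    A strongly positive `r` with a positive regression slope is the pattern the study reports between time of work and hearing loss.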

  5. A Character Level Based and Word Level Based Approach for Chinese-Vietnamese Machine Translation

    PubMed Central

    2016-01-01

    Chinese and Vietnamese are both isolating languages in which words are not delimited by spaces. In machine translation, word segmentation is often done first when translating from Chinese or Vietnamese into other languages (typically English) and vice versa. However, whether words should be segmented is an open question when translating between two languages that do not use spaces between words, such as Chinese and Vietnamese. Since Chinese-Vietnamese is a low-resource language pair, the sparse data problem is evident in translation systems for this pair, which makes the segmentation question all the more important. In this paper, we propose a new method for translating Chinese to Vietnamese that combines the advantages of character-level and word-level translation: a hybrid approach combining statistics and rules is used at the word level, while a statistical translation is used at the character level. The experimental results showed that our method improved machine translation performance over character-level or word-level translation alone. PMID:27446207

  6. Fault diagnosis of automobile hydraulic brake system using statistical features and support vector machines

    NASA Astrophysics Data System (ADS)

    Jegadeeshwaran, R.; Sugumaran, V.

    2015-02-01

    Hydraulic brakes in automobiles are important components for the safety of passengers; therefore, the brakes are a good subject for condition monitoring. The condition of the brake components can be monitored by using the vibration characteristics. On-line condition monitoring by using machine learning approach is proposed in this paper as a possible solution to such problems. The vibration signals for both good as well as faulty conditions of brakes were acquired from a hydraulic brake test setup with the help of a piezoelectric transducer and a data acquisition system. Descriptive statistical features were extracted from the acquired vibration signals and the feature selection was carried out using the C4.5 decision tree algorithm. There is no specific method to find the right number of features required for classification for a given problem. Hence an extensive study is needed to find the optimum number of features. The effect of the number of features was also studied, by using the decision tree as well as Support Vector Machines (SVM). The selected features were classified using the C-SVM and Nu-SVM with different kernel functions. The results are discussed and the conclusion of the study is presented.
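
    Descriptive statistical feature extraction of the kind described can be sketched as follows. The paper does not list its exact feature set, so these are common choices for vibration signals, applied here to a synthetic trace:

```python
import numpy as np

def descriptive_features(signal):
    """Descriptive statistical features commonly extracted from vibration
    signals (the paper's exact feature set is not listed in the abstract)."""
    x = np.asarray(signal, dtype=float)
    mean = x.mean()
    std = x.std(ddof=1)
    rms = np.sqrt(np.mean(x ** 2))
    skew = np.mean((x - mean) ** 3) / std ** 3
    kurt = np.mean((x - mean) ** 4) / std ** 4
    return {"mean": mean, "std": std, "rms": rms,
            "skewness": skew, "kurtosis": kurt,
            "crest_factor": np.max(np.abs(x)) / rms}

# A synthetic 'vibration' trace: a sine carrier plus a smaller harmonic.
t = np.linspace(0, 1, 1000, endpoint=False)
sig = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 170 * t)
feats = descriptive_features(sig)
print({k: round(v, 3) for k, v in feats.items()})
```

    Vectors of such features, computed per signal window for good and faulty brakes, are what the decision tree selects among and the SVM classifies.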

  7. Towards a molecular logic machine

    NASA Astrophysics Data System (ADS)

    Remacle, F.; Levine, R. D.

    2001-06-01

    Finite state logic machines can be realized by pump-probe spectroscopic experiments on an isolated molecule. The most elaborate setup, a Turing machine, can be programmed to carry out a specific computation. We argue that a molecule can be similarly programmed, and provide examples using two photon spectroscopies. The states of the molecule serve as the possible states of the head of the Turing machine and the physics of the problem determines the possible instructions of the program. The tape is written in an alphabet that allows the listing of the different pump and probe signals that are applied in a given experiment. Different experiments using the same set of molecular levels correspond to different tapes that can be read and processed by the same head and program. The analogy to a Turing machine is not a mechanical one and is not completely molecular because the tape is not part of the molecular machine. We therefore also discuss molecular finite state machines, such as sequential devices, for which the tape is not part of the machine. Nonmolecular tapes allow for quite long input sequences with a rich alphabet (at the level of 7 bits) and laser pulse shaping experiments provide concrete examples. Single molecule spectroscopies show that a single molecule can be repeatedly cycled through a logical operation.

  8. An experimental investigation on orthogonal cutting of hybrid CFRP/Ti stacks

    NASA Astrophysics Data System (ADS)

    Xu, Jinyang; El Mansori, Mohamed

    2016-10-01

    Hybrid CFRP/Ti stacks are widely used in the modern aerospace industry owing to their superior mechanical/physical properties and excellent structural functions. Several applications require mechanical machining of these hybrid composite stacks in order to achieve dimensional accuracy and assembly performance. However, machining such a composite-to-metal alliance is usually an extremely challenging task in the manufacturing sector, owing to the disparate natures of the stacked constituents and their respective poor machinability. Particular issues arise from the high force/heat generation, severe subsurface damage and rapid tool wear. To study the fundamental mechanisms controlling bi-material machining, this paper presents an experimental study on orthogonal cutting of a hybrid CFRP/Ti stack using polycrystalline diamond (PCD) tipped tools. The cutting parameters for hybrid CFRP/Ti machining were adopted through a compromise selection owing to the disparate machinability of the CFRP laminate and the Ti alloy. The key cutting responses in terms of cutting force generation, machined surface quality and tool wear mechanism are addressed. The experimental results highlight the five stages involved in CFRP/Ti cutting and the predominant crater wear and edge fracture failure governing the PCD cutting process.

  9. Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan

    Synchronous machines have traditionally acted as the foundation of large-scale electrical infrastructures, and their physical properties have formed the cornerstone of system operations. However, with the increased integration of distributed renewable resources and energy-storage technologies, there is a need to systematically acknowledge the dynamics of power-electronics inverters - the primary energy-conversion interface in such systems - in all aspects of modeling, analysis, and control of the bulk power network. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprising a synchronous generator, a three-phase inverter, and a load. The inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the inverter current controller and the machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.

  10. Spinal Muscular Atrophy

    MedlinePlus

    ... with symptoms and prevent complications. They may include machines to help with breathing, nutritional support, physical therapy, and medicines. NIH: National Institute of Neurological Disorders and Stroke

  11. Learning Activity Package, Physical Science. LAP Numbers 8, 9, 10, and 11.

    ERIC Educational Resources Information Center

    Williams, G. J.

    These four units of the Learning Activity Packages (LAPs) for individualized instruction in physical science cover nuclear reactions, alpha and beta particles, atomic radiation, medical use of nuclear energy, fission, fusion, simple machines, Newton's laws of motion, electricity, currents, electromagnetism, Oersted's experiment, sound, light,…

  12. Applied Physics Modules: Notes, Instructions, Data Sheets, Tests, and Test Answer Keys.

    ERIC Educational Resources Information Center

    Southeast Community Coll., Lincoln, NE.

    These user instructions and related materials are designed to accompany a series of twenty-three applied physics modules which have been developed for postsecondary students in electrical, electronics, machine tool, metals, manufacturing, automotive, diesel, architecture, and civil drafting occupational programs. The instructions include an…

  13. How Things Work: Physics in the Copy Machine.

    ERIC Educational Resources Information Center

    Crane, H. Richard, Ed.

    1984-01-01

    Discusses the physics principles applied to the main steps of the photocopying process. Of particular interest (and at the heart of the process) are the ways in which electric charges, or particles carrying charges, are caused to transfer from one surface or medium to another at each stage. (JN)

  14. Applied Physics Modules Selected for Manufacturing and Metal Technologies.

    ERIC Educational Resources Information Center

    Waring, Gene

    Designed for individualized use in an applied physics course in postsecondary vocational-technical education, this series of eighteen learning modules is equivalent to the content of two quarters of a five-credit hour class in manufacturing engineering technology, machine tool and design technology, welding technology, and industrial plastics…

  15. Learning Activity Package, Physical Science 92, LAPs 1-9.

    ERIC Educational Resources Information Center

    Williams, G. J.

    This set of nine teacher-prepared Learning Activity Packages (LAPs) for individualized instruction in physical science covers the topics of scientific equipment and procedures; measure of time, length, area, and volume; water; oxygen and oxidation; atmospheric pressure; motion; machines; carbon; and light and sound. Each unit contains a rationale…

  16. Machine learning-based methods for prediction of linear B-cell epitopes.

    PubMed

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction assists immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention and treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, thanks to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those on linear B-cell epitope prediction. It should be noted that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools has become a general way of constructing linear B-cell epitope prediction systems. It is also observed in most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, besides reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell prediction system based on physicochemical features and amino acid combinations is illustrated in detail.
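
    One of the simplest sequence-derived features used by such predictors, amino acid composition, can be sketched as follows; the peptide string is made up purely for illustration:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def composition_vector(peptide):
    """Amino-acid composition: the fraction of each residue in the peptide.
    One of the simplest sequence features used by epitope predictors;
    the reviewed tools combine it with propensity scales and other inputs."""
    peptide = peptide.upper()
    counts = Counter(peptide)
    n = len(peptide)
    return [counts.get(aa, 0) / n for aa in AMINO_ACIDS]

# A made-up peptide, purely for illustration.
vec = composition_vector("ACDEFGAA")
print(len(vec))              # 20
print(round(sum(vec), 6))    # 1.0
```

    Such fixed-length vectors are what an SVM classifier consumes, regardless of the underlying peptide length.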

  17. Kernel machines for epilepsy diagnosis via EEG signal classification: a comparative study.

    PubMed

    Lima, Clodoaldo A M; Coelho, André L V

    2011-10-01

    We carry out a systematic assessment on a suite of kernel-based learning machines while coping with the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of the criteria of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. We first quantitatively assess the impact of the choice of the wavelet basis on the quality of the features extracted. Four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations whereby one can visually inspect their levels of sensitiveness to the type of feature and to the kernel function/parameter value. 
Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems not to be so relevant. The statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving some regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality). Copyright © 2011 Elsevier B.V. All rights reserved.
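
    The sensitivity to the kernel radius that the sweep above examines can be sketched directly from the Gaussian RBF kernel definition. The feature vectors below are random stand-ins for the wavelet- and Lyapunov-derived EEG features, not real data:

```python
import numpy as np

def gaussian_kernel(X, Y, radius):
    """Gaussian RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 r^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * radius ** 2))

# Toy feature vectors standing in for the EEG-derived features.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))

for radius in (0.1, 1.0, 10.0):            # a miniature parameter sweep
    K = gaussian_kernel(X, X, radius)
    print(radius, round(K[np.triu_indices(6, 1)].mean(), 3))
```

    A tiny radius drives all off-diagonal similarities toward 0 (every point looks unique), a huge radius drives them toward 1 (every point looks identical); accuracy typically peaks in between, which is why the radius must be swept.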

  18. Rare etiological factor of maxillofacial injury: Case series seen and managed in a tertiary referral centre

    PubMed Central

    Braimah, Ramat Oyebunmi; Ibikunle, Adebayo Aremu; Taiwo, Abdurrazaq Olanrewaju

    2016-01-01

    Entanglement injury from a local milling/grinding machine with a conveyor belt is a rare etiology of maxillofacial injuries. While there is abundant literature on industrial causes of trauma, entanglement injury as a mechanism has not been reported. We present two cases of maxillofacial injury secondary to entanglement of loose apparel in the conveyor belt of a local grinding machine. The community should be made aware of this rare cause of trauma, and adequate protection of children using these facilities should be enforced. One such measure is to provide physical barriers guarding these machines. PMID:27162440

  20. An imperialist competitive algorithm for virtual machine placement in cloud computing

    NASA Astrophysics Data System (ADS)

    Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza

    2017-05-01

    Cloud computing, the recently emerged revolution in the IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run on virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement; it plays an important role in the resource utilisation and power efficiency of a cloud computing environment. In this paper, we propose an imperialist competitive algorithm for the virtual machine placement problem, called ICA-VMPLC. ICA is chosen as the base optimisation algorithm because of its ease of neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates the search space in a unique manner to efficiently obtain an optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution is compared with several existing methods, such as grouping genetic and ant colony-based algorithms, as well as a bin packing heuristic. The simulation results show that the proposed method is superior to the other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
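
    The bin packing heuristic used as a comparison baseline can be sketched with first-fit decreasing. The VM demands below are illustrative numbers, and the paper's own imperialist competitive algorithm is not reproduced here:

```python
def first_fit_decreasing(demands, capacity=1.0):
    """First-fit-decreasing heuristic: place each VM (largest first) on the
    first host with room, opening a new host when none fits. This is the
    kind of bin-packing baseline the paper compares against; its own
    method, the imperialist competitive algorithm, is not shown here."""
    hosts = []                                   # current load per host
    assignment = []                              # (vm_demand, host_index)
    for d in sorted(demands, reverse=True):
        for i, load in enumerate(hosts):
            if load + d <= capacity + 1e-9:      # fits on an existing host
                hosts[i] = load + d
                assignment.append((d, i))
                break
        else:                                    # no host fits: open a new one
            hosts.append(d)
            assignment.append((d, len(hosts) - 1))
    return hosts, assignment

# Normalized CPU demands for six VMs (illustrative numbers only).
loads, placement = first_fit_decreasing([0.6, 0.5, 0.5, 0.4, 0.3, 0.2])
print(len(loads), [round(l, 2) for l in loads])
```

    Fewer, fuller hosts mean lower power consumption and less resource wastage, which is exactly the two-objective trade-off the placement algorithms optimise.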

  1. Chaotic behaviour of Zeeman machines at introductory course of mechanics

    NASA Astrophysics Data System (ADS)

    Nagy, Péter; Tasnádi, Péter

    2016-05-01

    Investigation of chaotic motions and cooperative systems offers a magnificent opportunity to bring modern physics into the basic mechanics course taught to engineering students. In the present paper we demonstrate that the Zeeman machine can be a versatile and motivating tool for introducing students to chaotic motion via interactive simulations. It works in a relatively simple way and its properties can be understood very easily. Since the machine can be built easily and the simulation of its movement is also simple, the experimental investigation and the theoretical description can be connected intuitively. Although the Zeeman machine is known mainly for its quasi-static and catastrophic behaviour, its dynamic properties are also of interest, with typical chaotic features. By means of a periodically driven Zeeman machine, a wide range of chaotic properties of simple systems can be demonstrated, such as bifurcation diagrams, chaotic attractors and transient chaos. The main goal of this paper is the presentation of an interactive learning material for teaching the basic features of chaotic systems through the investigation of the Zeeman machine.
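
    The defining chaotic feature such demonstrations rely on, sensitive dependence on initial conditions, can be sketched with the logistic map as a minimal stand-in, since the Zeeman machine's own equations of motion are not given in the abstract:

```python
# Sensitive dependence on initial conditions, shown with the logistic map
# x -> 4x(1-x) as a minimal stand-in for a chaotic mechanical system.
def logistic_orbit(x0, steps, r=4.0):
    orbit = [x0]
    for _ in range(steps):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

a = logistic_orbit(0.2, 60)
b = logistic_orbit(0.2 + 1e-9, 60)          # nearly identical start
separation = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap {separation[0]:.1e}, max gap {max(separation):.2f}")
```

    An initial difference of one part in a billion grows to order one within a few dozen iterations; the same exponential divergence is what makes the driven Zeeman machine's long-term motion unpredictable.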

  2. De Vries-Weber gain control and dark adaptation in human vision

    NASA Astrophysics Data System (ADS)

    Bouman, Maarten A.

    2002-02-01

    Thresholds for seeing light from a stimulus are determined by a mechanism that pairs subliminal excitations from both halves of a twin unit. Such excitations stem from a package of k ≥ 1 receptor responses. A half-unit contains one red or one green cone and P rods. The receptor's "Weber machine" controls the receptor's gain. Each half of a twin unit contains a "de Vries machine," which controls the half's k number. In the dark the receptor's dark noise events reset its Weber machine and the receptor's relation to its de Vries machine. A pairing product for light perception also represents a direction event. The local time signs of the two subliminal excitations are crucial for the polarity, size, and pace of the direction event. In relation to the time when and the area in which the stimulus is presented, these signs have average latency periods that depend on intensity and average locations that depend on movement. Polarity depends on which of the two subliminal excitations happens to arrive first at the twin's pairing facility. The intra- and inter-twin pairings in a persepton for the perceptions of light, edge and movement and the probability summation of the pairing products of the mutually independent three sets of twins of the retrinet improve intensity discrimination. Cross-pairings of intra-receptor pairings in red and green cones of a trion for yellow improve visual discrimination further. Discrimination of stimuli that exploit the model's entire summation mechanisms and pairing facilities represents "what the perfect human eye sees best." For the model this threshold of modulation in quantum absorption is the ideal limit that is prescribed by statistical physics. The lateral and meta interaction in a twin unit enhance the contrast of an edge and of a temporal transient. The precision of the local time sign of a half's stimulation determines the spatiotemporal hyperfunctions for location and speed.
The model's design for the perfect retinal mosaic consists of red twins situated along clockwise and counterclockwise spirals and green twins along circles that are concentric with the fovea. The model's descriptions of discrimination, adaptation, and hyperfunctions agree with experimental data.

  3. Managing virtual machines with Vac and Vcycle

    NASA Astrophysics Data System (ADS)

    McNab, A.; Love, P.; MacMahon, E.

    2015-12-01

    We compare the Vac and Vcycle virtual machine lifecycle managers and our experiences in providing production job execution services for ATLAS, CMS, LHCb, and the GridPP VO at sites in the UK, France and at CERN. In both the Vac and Vcycle systems, the virtual machines are created outside of the experiment's job submission and pilot framework. In the case of Vac, a daemon runs on each physical host and manages a pool of virtual machines on that host; a peer-to-peer UDP protocol is used to achieve the desired target shares between experiments across the site. In the case of Vcycle, a daemon manages a pool of virtual machines on an Infrastructure-as-a-Service cloud system such as OpenStack, and holds within itself enough information to create the types of virtual machines needed to achieve the desired target shares. Both systems allow the unused share of one experiment to be temporarily taken up by other experiments that have work to be done. The virtual machine lifecycle is managed with a minimum of information, gathered from the virtual machine creation mechanism (such as libvirt or OpenStack) and using the proposed Machine/Job Features API from WLCG. We demonstrate that the same virtual machine designs can be used to run production jobs on Vac and Vcycle/OpenStack sites for ATLAS, CMS, LHCb, and GridPP, and that these technologies allow sites to be operated in a reliable and robust way.
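The fair-share decision described above can be sketched roughly as follows. The function name, data shapes, and largest-deficit heuristic are illustrative assumptions for this sketch, not the actual Vac or Vcycle code:

```python
def next_vm_experiment(target_shares, running, backlog):
    """Pick the experiment whose running share lags its target share most.

    target_shares: {experiment: target fraction, summing to 1.0}
    running:       {experiment: number of VMs currently running}
    backlog:       {experiment: True if the experiment has work queued}
    Experiments with no queued work are skipped, so their unused share
    is temporarily taken up by the others, as both systems allow.
    """
    total = sum(running.values()) or 1
    candidates = [e for e in target_shares if backlog.get(e)]
    if not candidates:
        return None
    # deficit = target fraction minus actual fraction of the running pool
    return max(candidates,
               key=lambda e: target_shares[e] - running.get(e, 0) / total)
```

In Vac this decision would be made per host using shares learned over the UDP protocol; in Vcycle, centrally against the cloud API.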

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Webb-Robertson, Bobbie-Jo M.

    Accurate identification of peptides is a current challenge in mass spectrometry (MS) based proteomics. The standard approach uses a search routine to compare tandem mass spectra to a database of peptides associated with the target organism. These database search routines yield multiple metrics associated with the quality of the mapping of the experimental spectrum to the theoretical spectrum of a peptide. The structure of these results makes separating correct from false identifications difficult and has created a false identification problem. Statistical confidence scores are an approach to battle this false positive problem that has led to significant improvements in peptide identification. We have shown that machine learning, specifically the support vector machine (SVM), is an effective approach to separating true peptide identifications from false ones. The SVM-based peptide statistical scoring method transforms a peptide into a vector representation based on database search metrics to train and validate the SVM. In practice, following the database search routine, each peptide is converted to its vector representation and the SVM generates a single statistical score that is then used to classify its presence or absence in the sample.
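A rough illustration of the scoring step: each peptide-spectrum match becomes a vector of search metrics, and a linear SVM decision function reduces it to one score. The metric names, weights, and threshold here are invented for the example; in practice the SVM is trained on labeled search results:

```python
def to_vector(psm):
    # Feature order must match the order used at SVM training time.
    # These metric names are illustrative database-search outputs.
    return [psm["xcorr"], psm["delta_cn"], psm["sp_rank"], psm["mass_error"]]

def svm_score(x, weights, bias):
    """Signed distance from the separating hyperplane: w . x + b."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

def classify(psm, weights, bias, threshold=0.0):
    """True = accepted as a correct peptide identification."""
    return svm_score(to_vector(psm), weights, bias) > threshold
```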

  5. Statistical quality control for volumetric modulated arc therapy (VMAT) delivery by using the machine's log data

    NASA Astrophysics Data System (ADS)

    Cheong, Kwang-Ho; Lee, Me-Yeon; Kang, Sei-Kwon; Yoon, Jai-Woong; Park, Soah; Hwang, Taejin; Kim, Haeyoung; Kim, Kyoung Ju; Han, Tae Jin; Bae, Hoonsik

    2015-07-01

    The aim of this study is to set up statistical quality control for monitoring the volumetric modulated arc therapy (VMAT) delivery error by using the machine's log data. Eclipse and a Clinac iX linac with the RapidArc system (Varian Medical Systems, Palo Alto, USA) were used for delivery of the VMAT plans. During the delivery of the RapidArc fields, the machine determines the accuracy of the delivered monitor units (MUs) and of the gantry angle's position, and the standard deviations of the MU ( σMU: dosimetric error) and of the gantry angle ( σGA: geometric error) are displayed on the console monitor after completion of the RapidArc delivery. In the present study, first, the log data were analyzed to confirm their validity and usability; then, statistical process control (SPC) was applied to monitor the σMU and the σGA in a timely manner for all RapidArc fields: a total of 195 arc fields for 99 patients. The MU and the GA were determined twice for all fields, first during the patient-specific plan QA and then again during the first treatment. The σMU and the σGA time series were quite stable irrespective of the treatment site; however, the σGA strongly depended on the gantry's rotation speed. The σGA of the RapidArc delivery for stereotactic body radiation therapy (SBRT) was smaller than that for typical VMAT. Therefore, SPC was applied to SBRT cases and general cases separately. Moreover, the accuracy of the potentiometer of the gantry rotation is important because the σGA can change dramatically with its condition. By applying SPC to the σMU and σGA, we could monitor the delivery error efficiently. However, the upper and lower limits of SPC need to be determined carefully, with full knowledge of the machine and its log data.
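The monitoring step can be sketched as an individuals control chart over the per-field σMU (or σGA) values: estimate limits from a baseline period and flag deliveries outside them. The 3-sigma Shewhart limits below are the textbook default; as the abstract stresses, real limits must be set with machine-specific knowledge:

```python
from statistics import mean, stdev

def control_limits(baseline, k=3.0):
    """Shewhart-style limits (mean +/- k*stdev) from baseline sigma values."""
    m, s = mean(baseline), stdev(baseline)
    return m - k * s, m + k * s

def out_of_control(series, baseline, k=3.0):
    """Indices of deliveries whose sigma falls outside the control limits."""
    lo, hi = control_limits(baseline, k)
    return [i for i, x in enumerate(series) if x < lo or x > hi]
```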

  6. Looking for scaling laws, or physics with nuts and shells

    NASA Astrophysics Data System (ADS)

    Sheets, H. David; Lauffenburger, James C.

    1999-09-01

    Scaling laws relating the volume of a class of objects to a characteristic dimension of the object appear commonly in physics, chemistry, and biology. In this laboratory exercise for an introductory physics course, scaling laws are derived for machine nuts and clam shells. In addition to covering a standard physics problem, determining the volume of an object by measuring the buoyant force on it, the biologically interesting idea of scaling laws is incorporated into the same lab.
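The fit at the heart of such an exercise is a log-log least-squares regression of volume against the characteristic dimension; a minimal sketch (function and variable names are my own):

```python
import math

def fit_scaling_exponent(dims, volumes):
    """Least-squares fit of log V = n log L + log c; returns (n, c).
    For objects whose shape is roughly preserved across sizes,
    the exponent n should come out near 3."""
    xs = [math.log(L) for L in dims]
    ys = [math.log(V) for V in volumes]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    n = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - n * mx)
    return n, c
```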

  7. Automated Design of Complex Dynamic Systems

    PubMed Central

    Hermans, Michiel; Schrauwen, Benjamin; Bienstman, Peter; Dambre, Joni

    2014-01-01

    Several fields of study are concerned with uniting the concept of computation with that of the design of physical systems. For example, a recent trend in robotics is to design robots in such a way that they require a minimal control effort. Another example is found in the domain of photonics, where recent efforts try to benefit directly from the complex nonlinear dynamics to achieve more efficient signal processing. The underlying goal of these and similar research efforts is to internalize a large part of the necessary computations within the physical system itself by exploiting its inherent non-linear dynamics. This, however, often requires the optimization of large numbers of system parameters, related to both the system's structure and its material properties. In addition, many of these parameters are subject to fabrication variability or to variations through time. In this paper we apply a machine learning algorithm to optimize physical dynamic systems. We show that such algorithms, which are normally applied on abstract computational entities, can be extended to the field of differential equations and used to optimize an associated set of parameters which determine their behavior. We show that machine learning training methodologies are highly useful in designing robust systems, and we provide a set of both simple and complex examples using models of physical dynamical systems. Interestingly, the derived optimization method is intimately related to direct collocation, a method known in the field of optimal control. Our work suggests that the application domains of both machine learning and optimal control have a largely unexplored overlapping area which envelops a novel design methodology of smart and highly complex physical systems. PMID:24497969
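The core idea, treating a physical system's parameters like trainable weights, can be caricatured with gradient descent on a scalar parameter via finite differences. This toy stands in for the paper's gradient-based training of differential-equation parameters; everything here is an illustrative simplification:

```python
def optimize_parameter(loss, theta0, lr=0.1, steps=200, eps=1e-4):
    """Finite-difference gradient descent on one system parameter.
    'loss' maps a parameter value to a scalar cost (e.g. simulation
    error of the dynamical system); returns the tuned parameter."""
    theta = theta0
    for _ in range(steps):
        g = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
        theta -= lr * g
    return theta
```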

  8. A Comparison Study of Classifier Algorithms for Cross-Person Physical Activity Recognition

    PubMed Central

    Saez, Yago; Baldominos, Alejandro; Isasi, Pedro

    2016-01-01

    Physical activity is widely known to be one of the key elements of a healthy life. The many benefits of physical activity described in the medical literature include weight loss and reductions in the risk factors for chronic diseases. With the recent advances in wearable devices, such as smartwatches or physical activity wristbands, motion tracking sensors are becoming pervasive, which has led to an impressive growth in the amount of physical activity data available and an increasing interest in recognizing which specific activity a user is performing. Moreover, big data and machine learning are now cross-fertilizing each other in an approach called “deep learning”, which consists of massive artificial neural networks able to detect complicated patterns from enormous amounts of input data to learn classification models. This work compares various state-of-the-art classification techniques for automatic cross-person activity recognition under different scenarios that vary widely in how much information is available for analysis. We have incorporated deep learning by using Google’s TensorFlow framework. The data used in this study were acquired from PAMAP2 (Physical Activity Monitoring in the Ageing Population), a publicly available dataset containing physical activity data. To perform cross-person prediction, we used the leave-one-subject-out (LOSO) cross-validation technique. When working with large training sets, the best classifiers obtain very high average accuracies (e.g., 96% using extra randomized trees). However, when the data volume is drastically reduced (where available data are only 0.001% of the continuous data), deep neural networks performed the best, achieving 60% in overall prediction accuracy. We found that even when working with only approximately 22.67% of the full dataset, we can statistically obtain the same results as when working with the full dataset. 
This finding enables the design of more energy-efficient devices and facilitates cold starts and big data processing of physical activity records. PMID:28042838
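The leave-one-subject-out protocol used above partitions data by person, never by random sample, so every test set is a subject the classifier has never seen. A minimal sketch of the split logic (record layout is my own assumption):

```python
def leave_one_subject_out(records):
    """records: list of (subject_id, features, label) tuples.
    Yields (held_out, train, test) splits where each test set holds
    exactly one subject's data, so evaluation is strictly cross-person."""
    subjects = sorted({s for s, _, _ in records})
    for held_out in subjects:
        train = [r for r in records if r[0] != held_out]
        test = [r for r in records if r[0] == held_out]
        yield held_out, train, test
```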

  9. A Comparison Study of Classifier Algorithms for Cross-Person Physical Activity Recognition.

    PubMed

    Saez, Yago; Baldominos, Alejandro; Isasi, Pedro

    2016-12-30

    Physical activity is widely known to be one of the key elements of a healthy life. The many benefits of physical activity described in the medical literature include weight loss and reductions in the risk factors for chronic diseases. With the recent advances in wearable devices, such as smartwatches or physical activity wristbands, motion tracking sensors are becoming pervasive, which has led to an impressive growth in the amount of physical activity data available and an increasing interest in recognizing which specific activity a user is performing. Moreover, big data and machine learning are now cross-fertilizing each other in an approach called "deep learning", which consists of massive artificial neural networks able to detect complicated patterns from enormous amounts of input data to learn classification models. This work compares various state-of-the-art classification techniques for automatic cross-person activity recognition under different scenarios that vary widely in how much information is available for analysis. We have incorporated deep learning by using Google's TensorFlow framework. The data used in this study were acquired from PAMAP2 (Physical Activity Monitoring in the Ageing Population), a publicly available dataset containing physical activity data. To perform cross-person prediction, we used the leave-one-subject-out (LOSO) cross-validation technique. When working with large training sets, the best classifiers obtain very high average accuracies (e.g., 96% using extra randomized trees). However, when the data volume is drastically reduced (where available data are only 0.001% of the continuous data), deep neural networks performed the best, achieving 60% in overall prediction accuracy. We found that even when working with only approximately 22.67% of the full dataset, we can statistically obtain the same results as when working with the full dataset. 
This finding enables the design of more energy-efficient devices and facilitates cold starts and big data processing of physical activity records.

  10. An Update on Statistical Boosting in Biomedicine.

    PubMed

    Mayr, Andreas; Hofner, Benjamin; Waldmann, Elisabeth; Hepp, Tobias; Meyer, Sebastian; Gefeller, Olaf

    2017-01-01

    Statistical boosting algorithms have triggered a lot of research during the last decade. They combine a powerful machine learning approach with classical statistical modelling, offering various practical advantages like automated variable selection and implicit regularization of effect estimates. They are extremely flexible, as the underlying base-learners (regression functions defining the type of effect for the explanatory variables) can be combined with any kind of loss function (target function to be optimized, defining the type of regression setting). In this review article, we highlight the most recent methodological developments on statistical boosting regarding variable selection, functional regression, and advanced time-to-event modelling. Additionally, we provide a short overview on relevant applications of statistical boosting in biomedicine.
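The automatic variable selection these algorithms offer comes from their componentwise structure: at each step only the single best-fitting base-learner is updated, damped by a step length. A minimal componentwise L2 boosting sketch (illustrative; assumes centered data and simple linear base-learners, unlike full implementations such as mboost):

```python
def boost(X, y, steps=100, nu=0.1):
    """Componentwise L2 boosting: repeatedly fit each one-variable linear
    base-learner to the current residuals, add only the best one's effect
    (shrunk by nu). Variables never selected keep a zero coefficient --
    the implicit variable selection described above."""
    p = len(X[0])
    coef = [0.0] * p
    resid = list(y)
    for _ in range(steps):
        best_j, best_b, best_loss = 0, 0.0, float("inf")
        for j in range(p):
            xj = [row[j] for row in X]
            sxx = sum(v * v for v in xj)
            b = sum(v * r for v, r in zip(xj, resid)) / sxx if sxx else 0.0
            loss = sum((r - b * v) ** 2 for r, v in zip(resid, xj))
            if loss < best_loss:
                best_j, best_b, best_loss = j, b, loss
        coef[best_j] += nu * best_b
        resid = [r - nu * best_b * row[best_j] for r, row in zip(resid, X)]
    return coef
```

Early stopping (choosing `steps`) acts as the implicit regularization of effect estimates mentioned above.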

  11. Materials and optimized designs for human-machine interfaces via epidermal electronics.

    PubMed

    Jeong, Jae-Woong; Yeo, Woon-Hong; Akhtar, Aadeel; Norton, James J S; Kwack, Young-Jin; Li, Shuo; Jung, Sung-Young; Su, Yewang; Lee, Woosik; Xia, Jing; Cheng, Huanyu; Huang, Yonggang; Choi, Woon-Seop; Bretl, Timothy; Rogers, John A

    2013-12-17

    Thin, soft, and elastic electronics with physical properties well matched to the epidermis can be conformally and robustly integrated with the skin. Materials and optimized designs for such devices are presented for surface electromyography (sEMG). The findings enable sEMG from wide ranging areas of the body. The measurements have quality sufficient for advanced forms of human-machine interface. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Robotic Telepresence: Perception, Performance, and User Experience

    DTIC Science & Technology

    2012-02-01

    defined as “a human-computer-machine condition in which a user receives sufficient information about a remote, real-world site through a machine so...that the user feels physically present at the remote, real-world site” (Aliberti and Bruen, 2006). Telepresence often includes capabilities for a more...outdoor route reconnaissance course (figures 4 and 5) was located at the Molnar MOUT (Military Operations in Urban Terrain) site in Fort Benning, GA. It

  13. Mi Segundo Libro de Maquinas Simples: Las Palancas. Escuela Intermedia Grados 7, 8 y 9 (My Second Book of Simple Machines: Levers. Intermediate School Grades 7, 8, and 9).

    ERIC Educational Resources Information Center

    Alvarado, Patricio R.; Montalvo, Luis

    This is the second book in a five-book physical science series on simple machines. The books are designed for Spanish-speaking junior high school students. By suggesting experiments and posing questions concerning drawings in the book which illustrate the scientific principles, this book explains the workings of three types of levers. Resistance…

  14. Applications of Principled Search Methods in Climate Influences and Mechanisms

    NASA Technical Reports Server (NTRS)

    Glymour, Clark

    2005-01-01

    Forest and grass fires cause economic losses in the billions of dollars in the U.S. alone. In addition, boreal forests constitute a large carbon store; it has been estimated that, were no burning to occur, an additional 7 gigatons of carbon would be sequestered in boreal soils each century. Effective wildfire suppression requires anticipation of locales and times for which wildfire is most probable, preferably with a two to four week forecast, so that limited resources can be efficiently deployed. The United States Forest Service (USFS), and other experts and agencies have developed several measures of fire risk combining physical principles and expert judgment, and have used them in automated procedures for forecasting fire risk. Forecasting accuracies for some fire risk indices in combination with climate and other variables have been estimated for specific locations, with the value of fire risk index variables assessed by their statistical significance in regressions. In other cases, for example the MAPSS forecasts [23, 24], forecasting accuracy has been estimated only from simulated data. We describe alternative forecasting methods that predict fire probability by locale and time using statistical or machine learning procedures trained on historical data, and we give comparative assessments of their forecasting accuracy for one fire season, April-October 2003, for all U.S. Forest Service lands. Aside from providing an accuracy baseline for other forecasting methods, the results illustrate the interdependence between the statistical significance of prediction variables and the forecasting method used.

  15. Optimal Prediction in the Retina and Natural Motion Statistics

    NASA Astrophysics Data System (ADS)

    Salisbury, Jared M.; Palmer, Stephanie E.

    2016-03-01

    Almost all behaviors involve making predictions. Whether an organism is trying to catch prey, avoid predators, or simply move through a complex environment, the organism uses the data it collects through its senses to guide its actions by extracting from these data information about the future state of the world. A key aspect of the prediction problem is that not all features of the past sensory input have predictive power, and representing all features of the external sensory world is prohibitively costly both due to space and metabolic constraints. This leads to the hypothesis that neural systems are optimized for prediction. Here we describe theoretical and computational efforts to define and quantify the efficient representation of the predictive information by the brain. Another important feature of the prediction problem is that the physics of the world is diverse enough to contain a wide range of possible statistical ensembles, yet not all inputs are probable. Thus, the brain might not be a generalized predictive machine; it might have evolved to specifically solve the prediction problems most common in the natural environment. This paper summarizes recent results on predictive coding and optimal predictive information in the retina and suggests approaches for quantifying prediction in response to natural motion. Basic statistics of natural movies reveal that general patterns of spatiotemporal correlation are present across a wide range of scenes, though individual differences in motion type may be important for optimal processing of motion in a given ecological niche.

  16. Probability machines: consistent probability estimation using nonparametric learning machines.

    PubMed

    Malley, J D; Kruppa, J; Dasgupta, A; Malley, K G; Ziegler, A

    2012-01-01

    Most machine learning approaches only provide a classification for binary responses. However, probabilities are required for risk estimation using individual patient characteristics. It has been shown recently that every statistical learning machine known to be consistent for a nonparametric regression problem is a probability machine that is provably consistent for this estimation problem. The aim of this paper is to show how random forests and nearest neighbors can be used for consistent estimation of individual probabilities. Two random forest algorithms and two nearest neighbor algorithms are described in detail for estimation of individual probabilities. We discuss the consistency of random forests, nearest neighbors and other learning machines in detail. We conduct a simulation study to illustrate the validity of the methods. We exemplify the algorithms by analyzing two well-known data sets on the diagnosis of appendicitis and the diagnosis of diabetes in Pima Indians. Simulations demonstrate the validity of the method. With the real data application, we show the accuracy and practicality of this approach. We provide sample code from R packages in which the probability estimation is already available. This means that all calculations can be performed using existing software. Random forest algorithms as well as nearest neighbor approaches are valid machine learning methods for estimating individual probabilities for binary responses. Freely available implementations are available in R and may be used for applications.
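The nearest-neighbour variant of a probability machine is especially easy to state: the estimated probability is simply the fraction of positive labels among the k nearest training points. A minimal sketch (the abstract's R implementations are the reference; this toy is for illustration only):

```python
def knn_probability(train, x, k=3):
    """train: list of (features, label in {0, 1}) pairs.
    Returns the estimated P(Y=1 | x) as the fraction of positive labels
    among the k nearest training points (squared Euclidean distance) --
    the nearest-neighbour 'probability machine'."""
    by_dist = sorted(train,
                     key=lambda fy: sum((a - b) ** 2
                                        for a, b in zip(fy[0], x)))
    nearest = by_dist[:k]
    return sum(label for _, label in nearest) / k
```

Consistency of such estimators requires k to grow suitably with the sample size, which is the regime the paper analyses.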

  17. [Comparison of machinability of two types of dental machinable ceramic].

    PubMed

    Fu, Qiang; Zhao, Yunfeng; Li, Yong; Fan, Xinping; Li, Yan; Lin, Xuefeng

    2002-11-01

    To address the problems of currently available dental machinable ceramics, a new type of calcium-mica glass-ceramic, PMC-I ceramic, was developed, and its machinability was compared quantitatively with that of Vita MKII. Moreover, the relationship between the strength and the machinability of PMC-I ceramic was studied. Samples of PMC-I ceramic were divided into four groups according to their nucleation procedures. 600-second drilling tests were conducted with high-speed steel tools (diameter 2.3 mm) to measure the drilling depths of Vita MKII ceramic and PMC-I ceramic, with a constant drilling speed of 600 rpm and a constant axial load of 39.2 N. The three-point bending strengths of the four groups of PMC-I ceramic were also recorded. The drilling depth of Vita MKII was 0.71 mm, while the depths of the four groups of PMC-I ceramic were 0.88 mm, 1.40 mm, 0.40 mm and 0.90 mm, respectively. Group B of PMC-I ceramic showed the largest depth of 1.40 mm and was statistically different from the other groups and Vita MKII. The strengths of the four groups of PMC-I ceramic were 137.7, 210.2, 118.0 and 106.0 MPa, respectively. The machinability of the newly developed dental machinable ceramic PMC-I can meet clinical needs.
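For reference, the three-point bending strengths reported per group follow from the failure load and specimen geometry via the standard beam formula; the abstract does not give span or specimen dimensions, so the values below are purely illustrative:

```python
def flexural_strength(F, L, b, d):
    """Three-point bending strength sigma = 3 F L / (2 b d^2):
    F = failure load (N), L = support span (mm), b = specimen width (mm),
    d = specimen thickness (mm); result in MPa."""
    return 3 * F * L / (2 * b * d * d)
```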

  18. [Hygienic assessment of student's nutrition through vending machines (fast food)].

    PubMed

    Karelin, A O; Pavlova, D V; Babalyan, A V

    2015-01-01

    The article presents the results of a study of student nutrition through vending machines (fast food), taking into account the consumer priorities of medical university students and the features and possible consequences of their use. The object of study was the assortment of products sold through vending machines on the territory of the First Saint-Petersburg Medical University. Net calories, content of proteins, fats and carbohydrates, glycemic index, and glycemic load were determined for each product. Information about the use of vending machines was obtained through standardized-interview questionnaires of second- and fourth-year students of the medical and dental faculties. Most products sold through vending machines were found to have a high energy value, mainly due to refined carbohydrates, and were characterized by medium or high glycemic load; their protein content was low. Most of the students (87.3%) buy products from the vending machines, mainly because of a lack of time to visit the canteen and buffets. Only 4.2% of students are satisfied with the assortment of the vending machines. More than 50% of students reported gastrointestinal complaints. A statistically significant relationship was found between time of study at the university and morbidity of the gastrointestinal tract, as well as the number of students needing a medical diet. Students who need a medical diet use fast food significantly more often (46.6% of those who need the medical diet vs. 37.7% of those who do not).
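Glycemic load combines a product's glycemic index with the carbohydrate content of the portion actually eaten. A one-line helper, using the common convention of low ≤ 10 and high ≥ 20 (the cut-offs are conventional, not taken from this article):

```python
def glycemic_load(gi, carbs_g):
    """Glycemic load = GI x available carbohydrate (g) / 100.
    By the usual convention, GL <= 10 is low and GL >= 20 is high."""
    return gi * carbs_g / 100
```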

  19. Design and Development of an Automatic Tool Changer for an Articulated Robot Arm

    NASA Astrophysics Data System (ADS)

    Ambrosio, H.; Karamanoglu, M.

    2014-07-01

    In the creative industries, the length of time between the ideation stage and the making of physical objects is decreasing due to the use of CAD/CAM systems and additive manufacturing. Natural anisotropic materials, such as solid wood, can also be transformed using CAD/CAM systems, but only with subtractive processes such as machining with CNC routers. While some 3-axis CNC routing machines are affordable and widely available, more flexible 5-axis routing machines still represent too large an investment for small companies. Small refurbished articulated robots can be a cheaper alternative, but they require a light end-effector. This paper presents a new lightweight tool changer that converts a small 6-DOF robot with a 3 kg payload into a robot apprentice able to machine wood and similar soft materials.

  20. Metrological Characterization of the Vickers Hardness Primary Standard Machine Established at CSIR-NPL

    NASA Astrophysics Data System (ADS)

    Titus, S. Seelakumar; Vikram; Girish; Jain, Sushil Kumar

    2018-06-01

    CSIR-National Physical Laboratory (CSIR-NPL) is the National Metrological Institute (NMI) of India, which has the mandate for the realization of SI units of measurements and dissemination of the same to the user organizations. CSIR-NPL has established a hardness standardizing machine for realizing the Vickers hardness scale as per ISO 6507-3 standard for providing national traceability in hardness measurement. Direct verification of the machine has been carried out by measuring the uncertainty in the generated force, the indenter geometry and the indentation measuring system. From these measurements, it is found that the machine exhibits a calibration and measurement capability (CMC) of ±1.5% for HV1-HV3 scales and ±1.0% for HV5-HV50 scales and ±0.8% for HV100 scale.
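The Vickers number behind the HV1-HV100 scales follows directly from the test force and the mean indentation diagonal via the ISO 6507 formula; the example values in the test are illustrative, not CSIR-NPL measurements:

```python
def vickers_hardness(force_n, diagonal_mm):
    """Vickers hardness HV = 0.1891 * F / d^2 (ISO 6507), with the test
    force F in newtons and the mean indentation diagonal d in mm."""
    return 0.1891 * force_n / diagonal_mm ** 2
```

Direct verification of the standard machine amounts to bounding the uncertainty of each quantity entering this formula: the generated force, the indenter geometry, and the diagonal-measuring system.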

  1. Machine learning phases of matter

    NASA Astrophysics Data System (ADS)

    Carrasquilla, Juan; Melko, Roger G.

    2017-02-01

    Condensed-matter physics is the study of the collective behaviour of infinitely complex assemblies of electrons, nuclei, magnetic moments, atoms or qubits. This complexity is reflected in the size of the state space, which grows exponentially with the number of particles, reminiscent of the 'curse of dimensionality' commonly encountered in machine learning. Despite this curse, the machine learning community has developed techniques with remarkable abilities to recognize, classify, and characterize complex sets of data. Here, we show that modern machine learning architectures, such as fully connected and convolutional neural networks, can identify phases and phase transitions in a variety of condensed-matter Hamiltonians. Readily programmable through modern software libraries, neural networks can be trained to detect multiple types of order parameter, as well as highly non-trivial states with no conventional order, directly from raw state configurations sampled with Monte Carlo.

  2. Sensitivity analysis of machine-learning models of hydrologic time series

    NASA Astrophysics Data System (ADS)

    O'Reilly, A. M.

    2017-12-01

    Sensitivity analysis traditionally has been applied to assessing model response to perturbations in model parameters, where the parameters are those model input variables adjusted during calibration. Unlike physics-based models where parameters represent real phenomena, the equivalent of parameters for machine-learning models are simply mathematical "knobs" that are automatically adjusted during training/testing/verification procedures. Thus the challenge of extracting knowledge of hydrologic system functionality from machine-learning models lies in their very nature, leading to the label "black box." Sensitivity analysis of the forcing-response behavior of machine-learning models, however, can provide understanding of how the physical phenomena represented by model inputs affect the physical phenomena represented by model outputs. As part of a previous study, hybrid spectral-decomposition artificial neural network (ANN) models were developed to simulate the observed behavior of hydrologic response contained in multidecadal datasets of lake water level, groundwater level, and spring flow. Model inputs used moving window averages (MWA) to represent various frequencies and frequency-band components of time series of rainfall and groundwater use. Using these forcing time series, the MWA-ANN models were trained to predict time series of lake water level, groundwater level, and spring flow at 51 sites in central Florida, USA. A time series of sensitivities for each MWA-ANN model was produced by perturbing forcing time-series and computing the change in response time-series per unit change in perturbation. Variations in forcing-response sensitivities are evident between types (lake, groundwater level, or spring), spatially (among sites of the same type), and temporally. Two generally common characteristics among sites are more uniform sensitivities to rainfall over time and notable increases in sensitivities to groundwater usage during significant drought periods.
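The perturbation step described above reduces to a finite-difference derivative of the trained model's response with respect to each forcing. A minimal sketch, with any callable standing in for the trained MWA-ANN:

```python
def forcing_sensitivity(model, forcing, delta=1e-3):
    """Central-difference sensitivity of a scalar model response to each
    forcing variable: d(response)/d(forcing_i). 'model' is any callable
    taking a list of inputs; here it stands in for a trained MWA-ANN."""
    base = list(forcing)
    sens = []
    for i in range(len(base)):
        up, dn = base[:], base[:]
        up[i] += delta
        dn[i] -= delta
        sens.append((model(up) - model(dn)) / (2 * delta))
    return sens
```

Repeating this at each time step of the forcing series yields the time series of sensitivities analysed in the abstract.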

  3. Sensitivity of Support Vector Machine Predictions of Passive Microwave Brightness Temperature Over Snow-covered Terrain in High Mountain Asia

    NASA Astrophysics Data System (ADS)

    Ahmad, J. A.; Forman, B. A.

    2017-12-01

    High Mountain Asia (HMA) serves as a water supply source for over 1.3 billion people, primarily in south-east Asia. Most of this water originates as snow (or ice) that melts during the summer months and contributes to the run-off downstream. In spite of its critical role, there is still considerable uncertainty regarding the total amount of snow in HMA and its spatial and temporal variation. In this study, the NASA Land Information Systems (LIS) is used to model the hydrologic cycle over the Indus basin. In addition, the ability of support vector machines (SVM), a machine learning technique, to predict passive microwave brightness temperatures at a specific frequency and polarization as a function of LIS-derived land surface model output is explored in a sensitivity analysis. Multi-frequency, multi-polarization passive microwave brightness temperatures as measured by the Advanced Microwave Scanning Radiometer - Earth Observing System (AMSR-E) over the Indus basin are used as training targets during the SVM training process. Normalized sensitivity coefficients (NSCs) are then computed to assess the sensitivity of a well-trained SVM to each LIS-derived state variable. Preliminary results conform to the known first-order physics. For example, input states directly linked to physical temperature like snow temperature, air temperature, and vegetation temperature have positive NSCs whereas input states that increase volume scattering such as snow water equivalent or snow density yield negative NSCs. Air temperature exhibits the largest sensitivity coefficients due to its inherent, high-frequency variability. Adherence of this machine learning algorithm to the first-order physics bodes well for its potential use in LIS as the observation operator within a radiance data assimilation system aimed at improving regional- and continental-scale snow estimates.
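A normalized sensitivity coefficient rescales the raw derivative by x/y, so inputs with different units become comparable and the sign directly indicates whether an input raises or lowers the predicted brightness temperature. A sketch (one common definition of the NSC; the paper's exact normalization is not stated in the abstract):

```python
def nsc(model, x, i, delta=1e-4):
    """Normalized sensitivity coefficient (dy/dx_i) * (x_i / y):
    positive when input i raises the model response, negative when it
    lowers it, and dimensionless so inputs can be ranked against
    each other."""
    up, dn = x[:], x[:]
    up[i] += delta
    dn[i] -= delta
    dydx = (model(up) - model(dn)) / (2 * delta)
    return dydx * x[i] / model(x)
```

For a pure power law y = x_i**p, this NSC recovers the exponent p, which is what makes it a convenient unitless ranking.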

  4. Statistical properties of two sine waves in Gaussian noise.

    NASA Technical Reports Server (NTRS)

    Esposito, R.; Wilson, L. R.

    1973-01-01

    A detailed study is presented of some statistical properties of a stochastic process that consists of the sum of two sine waves of unknown relative phase and a normal process. Since none of the statistics investigated seem to yield a closed-form expression, all the derivations are cast in a form that is particularly suitable for machine computation. Specifically, results are presented for the probability density function (pdf) of the envelope and the instantaneous value, the moments of these distributions, and the corresponding cumulative distribution function (cdf).
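Since these statistics lack closed forms, they invite numerical evaluation; one simple machine-computation route is Monte Carlo sampling of the envelope. The narrowband phasor representation below is a standard simplification I am assuming, not the paper's own derivation:

```python
import math
import random

def envelope_samples(a1, a2, sigma, n=20000, seed=1):
    """Monte Carlo draws of the envelope of two sine waves (amplitudes
    a1, a2, independent uniform random phases) plus narrowband Gaussian
    noise: |a1*e^{j p1} + a2*e^{j p2} + (nx + j ny)| with nx, ny ~ N(0, sigma).
    A histogram of the result approximates the envelope pdf."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        p1 = rng.uniform(0, 2 * math.pi)
        p2 = rng.uniform(0, 2 * math.pi)
        re = a1 * math.cos(p1) + a2 * math.cos(p2) + rng.gauss(0, sigma)
        im = a1 * math.sin(p1) + a2 * math.sin(p2) + rng.gauss(0, sigma)
        out.append(math.hypot(re, im))
    return out
```

As a sanity check, with both amplitudes zero the envelope reduces to the Rayleigh distribution with mean sigma*sqrt(pi/2).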

  5. An Intelligent and Interactive Simulation and Tutoring Environment for Exploring and Learning Simple Machines

    NASA Astrophysics Data System (ADS)

    Myneni, Lakshman Sundeep

    Students in middle school science classes have difficulty mastering physics concepts such as energy and work, taught in the context of simple machines. Moreover, students' naive conceptions of physics often remain unchanged after completing a science class. To address this problem, I developed an intelligent tutoring system, called the Virtual Physics System (ViPS), which coaches students through problem solving with one class of simple machines, pulley systems. The tutor uses a unique cognitive based approach to teaching simple machines, and includes innovations in three areas. (1) It employs a teaching strategy that focuses on highlighting links among concepts of the domain that are essential for conceptual understanding yet are seldom learned by students. (2) Concepts are taught through a combination of effective human tutoring techniques (e.g., hinting) and simulations. (3) For each student, the system identifies which misconceptions he or she has, from a common set of student misconceptions gathered from domain experts, and tailors tutoring to match the correct line of scientific reasoning regarding the misconceptions. ViPS was implemented as a platform on which students can design and simulate pulley system experiments, integrated with a constraint-based tutor that intervenes when students make errors during problem solving to teach and help them. ViPS has a web-based client-server architecture, and has been implemented using Java technologies. ViPS is different from existing physics simulations and tutoring systems due to several original features. (1) It is the first system to integrate a simulation based virtual experimentation platform with an intelligent tutoring component. (2) It uses a novel approach, based on Bayesian networks, to help students construct correct pulley systems for experimental simulation. 
(3) It identifies student misconceptions based on a novel decision tree applied to student pretest scores, and tailors tutoring to individual students based on detected misconceptions. ViPS has been evaluated through usability and usefulness experiments with undergraduate engineering students taking their first college-level engineering physics course and undergraduate pre-service teachers taking their first college-level physics course. These experiments demonstrated that ViPS is highly usable and effective. Students using ViPS reduced their misconceptions, and students conducting virtual experiments in ViPS learned more than students who conducted experiments with physical pulley systems. Interestingly, it was also found that college students exhibited many of the same misconceptions that have been identified in middle school students.

  6. The Southampton-York Natural Scenes (SYNS) dataset: Statistics of surface attitude

    PubMed Central

    Adams, Wendy J.; Elder, James H.; Graf, Erich W.; Leyland, Julian; Lugtigheid, Arthur J.; Muryy, Alexander

    2016-01-01

Recovering 3D scenes from 2D images is an under-constrained task; optimal estimation depends upon knowledge of the underlying scene statistics. Here we introduce the Southampton-York Natural Scenes dataset (SYNS: https://syns.soton.ac.uk), which provides comprehensive scene statistics useful for understanding biological vision and for improving machine vision systems. In order to capture the diversity of environments that humans encounter, scenes were surveyed at random locations within 25 indoor and outdoor categories. Each survey includes (i) spherical LiDAR range data, (ii) high-dynamic-range spherical imagery, and (iii) a panorama of stereo image pairs. We envisage many uses for the dataset and present one example: an analysis of surface attitude statistics, conditioned on scene category and viewing elevation. Surface normals were estimated using a novel adaptive scale selection algorithm. Across categories, surface attitude below the horizon is dominated by the ground plane (0° tilt). Near the horizon, probability density is elevated at 90°/270° tilt due to vertical surfaces (trees, walls). Above the horizon, probability density is elevated near 0° slant due to overhead structure such as ceilings and leaf canopies. These structural regularities represent potentially useful prior assumptions for human and machine observers, and may predict human biases in perceived surface attitude. PMID:27782103
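Slant and tilt of the kind tabulated in SYNS can be computed directly from estimated surface normals. The sketch below is a minimal numpy illustration, not the paper's adaptive scale-selection algorithm, and it assumes unit normals in viewer-centred coordinates with z along the line of sight (conventions differ between studies):

```python
import numpy as np

def surface_attitude(normals):
    """Convert surface normals (N, 3) to slant and tilt in degrees.

    Slant: angle between the normal and the viewing (z) axis.
    Tilt: orientation of the normal's projection in the image (x-y) plane.
    """
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)  # ensure unit length
    slant = np.degrees(np.arccos(np.clip(n[:, 2], -1.0, 1.0)))
    tilt = np.degrees(np.arctan2(n[:, 1], n[:, 0])) % 360.0
    return slant, tilt

# A surface facing the camera has 0 slant; a surface whose normal points
# along +y (e.g. a ground plane seen from above) has 90 deg slant, 90 deg tilt.
slant, tilt = surface_attitude([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
```

Histogramming `slant` and `tilt` over many scene surveys, split by elevation band, would reproduce the kind of attitude statistics the abstract describes.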

  7. Bridge Health Monitoring Using a Machine Learning Strategy

    DOT National Transportation Integrated Search

    2017-01-01

    The goal of this project was to cast the SHM problem within a statistical pattern recognition framework. Techniques borrowed from speaker recognition, particularly speaker verification, were used as this discipline deals with problems very similar to...

  8. A novelty detection diagnostic methodology for gearboxes operating under fluctuating operating conditions using probabilistic techniques

    NASA Astrophysics Data System (ADS)

    Schmidt, S.; Heyns, P. S.; de Villiers, J. P.

    2018-02-01

In this paper, a fault diagnostic methodology is developed which is able to detect, locate and trend gear faults under fluctuating operating conditions when only vibration data from a single transducer, measured on a healthy gearbox, are available. A two-phase feature extraction and modelling process is proposed to infer the operating condition and, based on that condition, to detect changes in the machine condition. Information from optimised machine and operating condition hidden Markov models is statistically combined to generate a discrepancy signal, which is post-processed to infer the condition of the gearbox. The discrepancy signal is processed and combined with statistical methods for automatic fault detection and localisation and for fault trending over time. The proposed methodology is validated on experimental data, and a tacholess order tracking methodology is used to enhance the cost-effectiveness of the diagnostic methodology.
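The core novelty-detection idea, model the healthy condition only and raise an alarm when new data is improbable under that model, can be sketched without the paper's hidden Markov machinery. Below, a diagonal Gaussian stands in for the healthy-condition model, the discrepancy signal is the negative log-likelihood, and the threshold comes from the healthy data itself; all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Healthy-condition training features (e.g. per-revolution vibration statistics).
healthy = rng.normal(loc=0.0, scale=1.0, size=(500, 3))

# Fit a diagonal Gaussian to the healthy data (a simple stand-in for the HMMs).
mu, sigma = healthy.mean(axis=0), healthy.std(axis=0)

def discrepancy(x):
    """Negative log-likelihood under the healthy model: higher = more novel."""
    z = (x - mu) / sigma
    return 0.5 * np.sum(z**2 + np.log(2 * np.pi * sigma**2), axis=1)

# Decision threshold taken from the healthy data (99th percentile).
threshold = np.percentile(discrepancy(healthy), 99)

# Simulated damaged-gear features: shifted relative to the healthy population.
faulty = rng.normal(loc=3.0, scale=1.0, size=(100, 3))
alarm_rate = np.mean(discrepancy(faulty) > threshold)
```

Trending the discrepancy signal over time, as the paper does, would then reveal gradual fault progression rather than a single alarm.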

  9. Investigating output and energy variations and their relationship to delivery QA results using Statistical Process Control for helical tomotherapy.

    PubMed

    Binny, Diana; Mezzenga, Emilio; Lancaster, Craig M; Trapp, Jamie V; Kairn, Tanya; Crowe, Scott B

    2017-06-01

The aims of this study were to investigate machine beam parameters using the TomoTherapy quality assurance (TQA) tool, establish a correlation to patient delivery quality assurance results, and evaluate the relationship between energy variations detected using different TQA modules. TQA daily measurement results from two treatment machines for periods of up to 4 years were acquired. Analyses of beam quality and of helical and static output variations were made. Variations from planned dose were also analysed using the Statistical Process Control (SPC) technique, and their relationship to output trends was studied. Energy variations appeared to be one of the contributing factors to the delivery output dose variations seen in the analysis. Ion chamber measurements were reliable indicators of energy and output variations and were linear with patient dose verifications. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
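The SPC technique mentioned here is commonly applied to daily QA outputs as a Shewhart individuals chart: control limits are placed at roughly three estimated standard deviations around the centre line, with sigma estimated from the average moving range. The sketch below shows that generic construction (not the authors' exact analysis), with made-up output data:

```python
import numpy as np

def individuals_chart(x):
    """Shewhart individuals chart: centre line and 3-sigma control limits,
    with sigma estimated from the average moving range (sigma ~ MRbar/1.128)."""
    x = np.asarray(x, dtype=float)
    mr_bar = np.mean(np.abs(np.diff(x)))   # average moving range
    centre = x.mean()
    half_width = 2.66 * mr_bar             # 3 * MRbar / 1.128
    return centre, centre - half_width, centre + half_width

# Hypothetical daily output measurements (% deviation from baseline),
# with one final drifting point to be flagged.
output = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, -0.3, 0.1, 2.5]
centre, lcl, ucl = individuals_chart(output[:-1])  # limits from in-control data
out_of_control = output[-1] > ucl or output[-1] < lcl
```

A point outside the limits (here the 2.5% deviation) would prompt investigation of the machine, which is the role SPC plays in the study's trend analysis.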

  10. Algorithm of probabilistic assessment of fully-mechanized longwall downtime

    NASA Astrophysics Data System (ADS)

    Domrachev, A. N.; Rib, S. V.; Govorukhin, Yu M.; Krivopalov, V. G.

    2017-09-01

The problem of increasing the load on a long fully-mechanized longwall has several aspects, one of which is improving the efficiency of available stoping equipment by increasing the machine operating time coefficient of the shearer and the other mining machines that form an integral part of the longwall set of equipment. The task of predicting the reliability indicators of stoping equipment is solved by statistical estimation of the parameters of the exponential distributions of downtime and failure recovery. Accounting for downtime caused by accidents in the face workings is more difficult and, despite the statistical data on accidents in mine workings, no solution has been found to date. The authors propose a variant of probabilistic assessment in which the caving of workings is modelled with a Poisson distribution and the duration of their restoration with a normal distribution. The above results confirm the possibility of implementing the approach proposed by the authors.
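The proposed combination, Poisson-distributed caving events with normally distributed restoration times, lends itself to a simple Monte Carlo downtime estimate. The sketch below uses illustrative rates and durations (not values from the paper); the analytic mean of total downtime is the event rate times the mean restoration time:

```python
import numpy as np

rng = np.random.default_rng(7)

# Assumed illustrative parameters (not from the paper):
cavings_per_year = 2.0        # Poisson rate of workings-caving events
restore_mean_h = 120.0        # mean restoration duration, hours
restore_sd_h = 30.0           # standard deviation of restoration duration

def simulate_downtime(n_years, trials=10_000):
    """Monte Carlo estimate of accident-related downtime over n_years:
    event counts ~ Poisson, restoration durations ~ Normal truncated at 0."""
    counts = rng.poisson(cavings_per_year * n_years, size=trials)
    totals = np.empty(trials)
    for i, k in enumerate(counts):
        durations = np.maximum(rng.normal(restore_mean_h, restore_sd_h, k), 0.0)
        totals[i] = durations.sum()
    return totals

downtime = simulate_downtime(n_years=1)
expected = cavings_per_year * restore_mean_h   # analytic mean: lambda * mu
```

The simulated distribution, not just its mean, is what makes such a model useful for probabilistic assessment of longwall availability.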

  11. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange-correlation functionals within density functional theory as the two fidelity levels, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.
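The key property exploited here, that Gaussian process regression returns a predictive variance alongside the mean, can be shown with a minimal single-fidelity GP (the full co-kriging construction couples two such GPs). The sketch below uses a squared-exponential kernel with assumed hyperparameters and a toy one-dimensional "descriptor vs bandgap" dataset:

```python
import numpy as np

def rbf(a, b, length=1.0):
    """Squared-exponential kernel matrix between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """GP posterior mean and variance (zero prior mean, RBF kernel)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_test, x_train)
    Kss = rbf(x_test, x_test)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = np.diag(Kss - Ks @ np.linalg.solve(K, Ks.T))
    return mean, var

# Toy "bandgap vs composition descriptor" data.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
mean, var = gp_predict(x, y, np.array([1.0, 10.0]))
# Uncertainty is small at a training point and large far from the data,
# which is exactly the confidence measure the abstract refers to.
```

In the multi-fidelity setting, the low-fidelity GP's predictions enter the high-fidelity model as correlated prior information, so expensive hybrid-functional calculations are needed only sparsely.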

  12. Multi-fidelity machine learning models for accurate bandgap predictions of solids

    DOE PAGES

    Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab

    2016-12-28

Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange-correlation functionals within density functional theory as the two fidelity levels, we demonstrate the excellent learning performance of the method against actual high-fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high-throughput property predictions in a significant way.

  13. Saving all the bits

    NASA Technical Reports Server (NTRS)

    Denning, Peter J.

    1990-01-01

The scientific tradition of saving all the data from experiments for independent validation and for further investigation is under profound challenge by modern satellite data collectors and by supercomputers. The volume of data is beyond the capacity to store, transmit, and comprehend it. A promising line of study is discovery machines that study the data at the collection site and transmit statistical summaries of the patterns observed. Examples of discovery machines are the Autoclass system and the genetic memory system of NASA-Ames, and the proposal for knowbots by Kahn and Cerf.

  14. Epidermis area detection for immunofluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Dovganich, Andrey; Krylov, Andrey; Nasonov, Andrey; Makhneva, Natalia

    2018-04-01

We propose a novel image segmentation method for immunofluorescence microscopy images of skin tissue for the diagnosis of various skin diseases. The segmentation is based on machine learning algorithms. The feature vector comprises three groups of features: statistical features, Laws' texture energy measures and local binary patterns. The images are preprocessed for better learning. Different machine learning algorithms have been tested, and the best results have been obtained with the random forest algorithm. We use the proposed method to detect the epidermis region as part of a pemphigus diagnosis system.
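One of the three feature groups named above, local binary patterns, is easy to sketch: each interior pixel is encoded by comparing it with its eight neighbours, and the normalized histogram of the resulting 8-bit codes becomes a texture descriptor for the classifier. This is a minimal numpy version (a library such as scikit-image offers rotation-invariant variants):

```python
import numpy as np

def lbp_histogram(img):
    """8-neighbour local binary patterns of interior pixels, returned as a
    normalized 256-bin histogram (one texture feature block for a classifier)."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                     # interior (centre) pixels
    codes = np.zeros_like(c, dtype=np.uint8)
    # Neighbour offsets, clockwise from top-left; each contributes one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy : img.shape[0] - 1 + dy,
                    1 + dx : img.shape[1] - 1 + dx]
        codes |= (neigh >= c).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
tissue_patch = rng.integers(0, 256, size=(64, 64))  # stand-in image patch
features = lbp_histogram(tissue_patch)
```

Concatenating this histogram with simple statistics (mean, variance) and Laws' energy measures yields a feature vector of the kind the paper feeds to the random forest.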

  15. Developing Human-Machine Interfaces to Support Appropriate Trust and Reliance on Automated Combat Identification Systems (Developpement d’Interfaces Homme-Machine Pour Appuyer la Confiance dans les Systemes Automatises d’Identification au Combat)

    DTIC Science & Technology

    2008-03-31

on automation; the 'response bias' approach. This new approach is based on Signal Detection Theory (SDT) (Macmillan & Creelman, 1991; Wickens...SDT), response bias will vary with the expectation of the target probability, whereas their sensitivity will stay constant (Macmillan & Creelman...measures, C has the simplest statistical properties (Macmillan & Creelman, 1991, p. 273), and it was also the measure used in Dzindolet et al.'s study

  16. Machine learning strategies for systems with invariance properties

    NASA Astrophysics Data System (ADS)

    Ling, Julia; Jones, Reese; Templeton, Jeremy

    2016-08-01

In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
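The first strategy, embedding invariance by building the model on invariant inputs, can be shown in a toy setting: a rotation-invariant target (it depends only on a point's radius) is fit from the invariant feature r², so the trained model is exactly invariant by construction. Linear least squares stands in for the learning algorithm here; everything below is an illustrative toy, not the paper's turbulence or elasticity case:

```python
import numpy as np

rng = np.random.default_rng(1)

def rotate(X, theta):
    """Rotate 2-D points by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return X @ np.array([[c, -s], [s, c]]).T

# Rotation-invariant target: depends only on the point's squared radius.
X = rng.normal(size=(200, 2))
y = np.linalg.norm(X, axis=1) ** 2

def fit_lstsq(features, y):
    """Least-squares fit with intercept; returns a prediction function."""
    A = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda f: np.column_stack([f, np.ones(len(f))]) @ coef

# Invariant input basis: r^2. Invariance of the model is exact by construction;
# the augmentation strategy would instead train on many rotated copies of X.
model_inv = fit_lstsq((X**2).sum(axis=1, keepdims=True), y)

X_test = rng.normal(size=(50, 2))
X_rot = rotate(X_test, 0.7)
pred_a = model_inv((X_test**2).sum(axis=1, keepdims=True))
pred_b = model_inv((X_rot**2).sum(axis=1, keepdims=True))
```

Because the feature itself is unchanged by rotation, `pred_a` and `pred_b` agree to machine precision, whereas an augmentation-trained model on raw coordinates would only be approximately invariant.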

  17. Machine learning strategies for systems with invariance properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan

Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.

  18. Machine learning strategies for systems with invariance properties

    DOE PAGES

    Ling, Julia; Jones, Reese E.; Templeton, Jeremy Alan

    2016-05-06

Here, in many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds-Averaged Navier-Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high-performance computing has led to a growing availability of high-fidelity simulation data, which open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance with significantly reduced computational training costs.

  19. Learning Physical Domains: Toward a Theoretical Framework.

    ERIC Educational Resources Information Center

    Forbus, Kenneth D.; Gentner, Dedre

    People use and extend their knowledge of the physical world constantly. Understanding how this fluency is achieved would be an important milestone in understanding human learning and intelligence, as well as a useful guide for constructing machines that learn. This paper presents a theoretical framework that is being developed in an attempt to…

  20. Do Children Think that Duplicating the Body also Duplicates the Mind?

    ERIC Educational Resources Information Center

    Hood, Bruce; Gjersoe, Nathalia L.; Bloom, Paul

    2012-01-01

    Philosophers use hypothetical duplication scenarios to explore intuitions about personal identity. Here we examined 5- to 6-year-olds' intuitions about the physical properties and memories of a live hamster that is apparently duplicated by a machine. In Study 1, children thought that more of the original's physical properties than episodic…

  1. Plasma Physics Lab and the Tokamak Fusion Test Reactor, 1989

    ScienceCinema

    None

    2018-01-16

    From the Princeton University Archives: Promotional video about the Plasma Physics Lab and the new Tokamak Fusion Test Reactor (TFTR), with footage of the interior, machines, and scientists at work. This film is discussed in the audiovisual blog of the Seeley G. Mudd Manuscript Library, which holds the archives of Princeton University.

  2. A Comparison of Video-Based and Interaction-Based Affect Detectors in Physics Playground

    ERIC Educational Resources Information Center

    Kai, Shiming; Paquette, Luc; Baker, Ryan S.; Bosch, Nigel; D'Mello, Sidney; Ocumpaugh, Jaclyn; Shute, Valerie; Ventura, Matthew

    2015-01-01

    Increased attention to the relationships between affect and learning has led to the development of machine-learned models that are able to identify students' affective states in computerized learning environments. Data for these affect detectors have been collected from multiple modalities including physical sensors, dialogue logs, and logs of…

  3. A bibliometric analysis of statistical terms used in American Physical Therapy Association journals (2011-2012): evidence for educating physical therapists.

    PubMed

    Tilson, Julie K; Marshall, Katie; Tam, Jodi J; Fetters, Linda

    2016-04-22

A primary barrier to the implementation of evidence based practice (EBP) in physical therapy is therapists' limited ability to understand and interpret statistics. Physical therapists demonstrate limited skills and report low self-efficacy for interpreting results of statistical procedures. While standards for physical therapist education include statistics, little empirical evidence is available to inform what should constitute such curricula. The purpose of this study was to conduct a census of the statistical terms and study designs used in physical therapy literature and to use the results to make recommendations for curricular development in physical therapist education. We conducted a bibliometric analysis of 14 peer-reviewed journals associated with the American Physical Therapy Association over 12 months (Oct 2011-Sept 2012). Trained raters recorded every statistical term appearing in identified systematic reviews, primary research reports, and case series and case reports. Investigator-reported study design was also recorded. Terms representing the same statistical test or concept were combined into a single, representative term. Cumulative percentage was used to identify the most common representative statistical terms. Common representative terms were organized into eight categories to inform curricular design. Of 485 articles reviewed, 391 met the inclusion criteria. These 391 articles used 532 different terms, which were combined into 321 representative terms; a mean of 13.1 (SD = 8.0) terms appeared per article. Eighty-one representative terms constituted 90% of all representative term occurrences. Of the remaining 240 representative terms, 105 (44%) were used in only one article. The most common study design was prospective cohort (32.5%). Physical therapy literature contains a large number of statistical terms and concepts for readers to navigate. However, in the year sampled, 81 representative terms accounted for 90% of all occurrences.
These "common representative terms" can be used to inform curricula to promote physical therapists' skills, competency, and confidence in interpreting statistics in their professional literature. We make specific recommendations for curriculum development informed by our findings.

  4. Full-Physics Inverse Learning Machine for Satellite Remote Sensing of Ozone Profile Shapes and Tropospheric Columns

    NASA Astrophysics Data System (ADS)

    Xu, J.; Heue, K.-P.; Coldewey-Egbers, M.; Romahn, F.; Doicu, A.; Loyola, D.

    2018-04-01

Characterizing vertical distributions of ozone from nadir-viewing satellite measurements is known to be challenging, particularly for ozone information in the troposphere. A novel retrieval algorithm, called the Full-Physics Inverse Learning Machine (FP-ILM), has been developed at DLR in order to estimate ozone profile shapes based on machine learning techniques. In contrast to traditional inversion methods, the FP-ILM algorithm formulates the profile shape retrieval as a classification problem. Its implementation comprises a training phase to derive an inverse function from synthetic measurements, and an operational phase in which the inverse function is applied to real measurements. This paper extends the ability of the FP-ILM retrieval to derive tropospheric ozone columns from GOME-2 measurements. Results for total and tropical tropospheric ozone columns are compared with those obtained using the official GOME Data Processing (GDP) product and the convective-cloud-differential (CCD) method, respectively. Furthermore, the FP-ILM framework will be used for the near-real-time processing of the new European Sentinel sensors, with their unprecedented spectral and spatial resolution and corresponding large increases in the amount of data.

  5. Development and validation of a machine learning algorithm and hybrid system to predict the need for life-saving interventions in trauma patients.

    PubMed

    Liu, Nehemiah T; Holcomb, John B; Wade, Charles E; Batchinsky, Andriy I; Cancio, Leopoldo C; Darrah, Mark I; Salinas, José

    2014-02-01

Accurate and effective diagnosis of actual injury severity can be problematic in trauma patients. Inherent physiologic compensatory mechanisms may prevent accurate diagnosis and mask true severity in many circumstances. The objective of this project was the development and validation of a multiparameter machine learning algorithm and system capable of predicting the need for life-saving interventions (LSIs) in trauma patients. Statistics based on means, slopes, and maxima of various vital sign measurements corresponding to 79 trauma patient records generated over 110,000 feature sets, which were used to develop, train, and implement the system. Comparisons among several machine learning models proved that a multilayer perceptron would best implement the algorithm in a hybrid system consisting of a machine learning component and basic detection rules. Additionally, 295,994 feature sets from 82 h of trauma patient data showed that the system can obtain 89.8% accuracy within 5 min of recorded LSIs. Use of machine learning technologies combined with basic detection rules provides a potential approach for accurately assessing the need for LSIs in trauma patients. The performance of this system demonstrates that machine learning technology can be implemented in a real-time fashion and potentially used in a critical care environment.
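The feature sets named in the abstract, means, slopes, and maxima of vital-sign windows, are simple to compute; the sketch below shows one such window summary on a synthetic heart-rate trace (the paper then feeds tens of thousands of these vectors into a multilayer perceptron). The signal and window length here are made up for illustration:

```python
import numpy as np

def window_features(signal, times):
    """Mean, slope (least-squares trend), and maximum of one vital-sign
    window -- the three statistic types named in the abstract."""
    signal = np.asarray(signal, dtype=float)
    times = np.asarray(times, dtype=float)
    slope = np.polyfit(times, signal, 1)[0]   # linear-trend coefficient
    return np.array([signal.mean(), slope, signal.max()])

# Synthetic heart rate rising ~2 bpm/min over a 5-minute window.
t = np.arange(0.0, 5.0, 0.5)
hr = 80.0 + 2.0 * t
feats = window_features(hr, t)
```

A rising slope combined with a high maximum is exactly the kind of compensatory-trend signature that raw threshold rules miss but a trained classifier can exploit.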

  6. Moving beyond regression techniques in cardiovascular risk prediction: applying machine learning to address analytic challenges

    PubMed Central

    Goldstein, Benjamin A.; Navar, Ann Marie; Carter, Rickey E.

    2017-01-01

Abstract Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors which operate in the same way on everyone, and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for development of risk prediction models. Typically presented as black box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis that are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider predicting mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to approach the diffuse field of machine learning. PMID:27436868

  7. Statistical analysis and machine learning algorithms for optical biopsy

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Liu, Cheng-hui; Boydston-White, Susie; Beckman, Hugh; Sriramoju, Vidyasagar; Sordillo, Laura; Zhang, Chunyuan; Zhang, Lin; Shi, Lingyan; Smith, Jason; Bailin, Jacob; Alfano, Robert R.

    2018-02-01

Analyzing spectral or imaging data collected with various optical biopsy methods is often difficult due to the complexity of the biological basis. Developing robust methods that can utilize the spectral or imaging data and detect the characteristic spectral or spatial signatures of different tissue types is challenging but highly desirable. In this study, we used various machine learning algorithms to analyze a spectral dataset acquired from human skin normal and cancerous tissue samples using resonance Raman spectroscopy with 532 nm excitation. The algorithms, including principal component analysis, nonnegative matrix factorization, and an autoencoder artificial neural network, are used to reduce the dimension of the dataset and detect features. A support vector machine with a linear kernel is used to classify the normal tissue and cancerous tissue samples. The efficacies of the methods are compared.

  8. A hybrid approach to select features and classify diseases based on medical data

    NASA Astrophysics Data System (ADS)

    AbdelLatif, Hisham; Luo, Jiawei

    2018-03-01

Feature selection is a popular problem in the classification of diseases in clinical medicine. Here, we develop a hybrid methodology to classify diseases, evaluated on three medical datasets: Arrhythmia, Breast cancer, and Hepatitis. This methodology, called k-means ANOVA Support Vector Machine (K-ANOVA-SVM), uses k-means clustering with the ANOVA statistic to preprocess the data and select the significant features, and Support Vector Machines for classification. To compare and evaluate performance, we chose three classification algorithms (decision tree, Naïve Bayes, and Support Vector Machines) and applied the medical datasets directly to these algorithms. Our methodology gave much better classification accuracy: 98% on the Arrhythmia dataset, 92% on Breast cancer, and 88% on Hepatitis, compared with applying the medical data directly to decision tree, Naïve Bayes, and Support Vector Machines. The ROC curve and precision results achieved with K-ANOVA-SVM were also better than those of the other algorithms.
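The ANOVA part of the pipeline is a filter-style feature selector: each feature gets a one-way F statistic (between-class variance over within-class variance), and only the top-scoring features reach the SVM. The sketch below implements that statistic from scratch on synthetic two-class data (the real pipeline also involves k-means preprocessing, which is omitted here):

```python
import numpy as np

def anova_f(X, y):
    """One-way ANOVA F statistic per feature: between-class vs within-class
    variance, as used for filter-style feature selection."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes = np.unique(y)
    grand = X.mean(axis=0)
    ssb = np.zeros(X.shape[1])   # between-class sum of squares
    ssw = np.zeros(X.shape[1])   # within-class sum of squares
    for c in classes:
        Xc = X[y == c]
        ssb += len(Xc) * (Xc.mean(axis=0) - grand) ** 2
        ssw += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    df_b, df_w = len(classes) - 1, len(X) - len(classes)
    return (ssb / df_b) / (ssw / df_w)

rng = np.random.default_rng(3)
y = np.repeat([0, 1], 100)
X = rng.normal(size=(200, 4))
X[y == 1, 0] += 2.0              # only feature 0 separates the classes
scores = anova_f(X, y)
top_feature = int(np.argmax(scores))
```

Keeping only the high-F features before training the SVM is what gives the hybrid method its accuracy advantage over feeding the raw data to the classifiers.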

  9. Fast machine-learning online optimization of ultra-cold-atom experiments.

    PubMed

    Wigley, P B; Everitt, P J; van den Hengel, A; Bastian, J W; Sooriyabandara, M A; McDonald, G D; Hardman, K S; Quinlivan, C D; Manju, P; Kuhn, C C N; Petersen, I R; Luiten, A N; Hope, J J; Robins, N P; Hush, M R

    2016-05-16

We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations, our 'learner' discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high-quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system.
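The online loop described here, fit a Gaussian process to the experiments so far, pick the next setting where the model is most promising, measure, and repeat, can be sketched in one dimension. The objective below is a made-up stand-in for a BEC-quality measurement (the real system optimizes a multi-parameter ramp), and the acquisition rule is a simple upper confidence bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_quality(ramp_rate):
    """Stand-in for a noisy BEC-quality measurement, peaked at ramp_rate = 0.6."""
    return np.exp(-((ramp_rate - 0.6) ** 2) / 0.02) + 0.05 * rng.normal()

def gp_fit_predict(xs, ys, grid, length=0.15, noise=0.05**2):
    """GP posterior mean/variance on a grid (RBF kernel, constant prior mean)."""
    d = xs[:, None] - xs[None, :]
    K = np.exp(-0.5 * (d / length) ** 2) + noise * np.eye(len(xs))
    ks = np.exp(-0.5 * ((grid[:, None] - xs[None, :]) / length) ** 2)
    alpha = np.linalg.solve(K, ys - ys.mean())
    mean = ys.mean() + ks @ alpha
    var = 1.0 - np.einsum('ij,ij->i', ks, np.linalg.solve(K, ks.T).T)
    return mean, np.maximum(var, 0.0)

grid = np.linspace(0.0, 1.0, 101)
xs = np.array([0.0, 0.5, 1.0])                       # initial experiments
ys = np.array([noisy_quality(x) for x in xs])

for _ in range(15):                                  # online loop
    mean, var = gp_fit_predict(xs, ys, grid)
    nxt = grid[np.argmax(mean + 2.0 * np.sqrt(var))] # upper confidence bound
    xs = np.append(xs, nxt)
    ys = np.append(ys, noisy_quality(nxt))

best_ramp = xs[np.argmax(ys)]
```

The exploration term (the variance) is also what lets the fitted model report which parameters matter: directions in which the posterior barely changes are unimportant to BEC quality.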

  10. Fast machine-learning online optimization of ultra-cold-atom experiments

    PubMed Central

    Wigley, P. B.; Everitt, P. J.; van den Hengel, A.; Bastian, J. W.; Sooriyabandara, M. A.; McDonald, G. D.; Hardman, K. S.; Quinlivan, C. D.; Manju, P.; Kuhn, C. C. N.; Petersen, I. R.; Luiten, A. N.; Hope, J. J.; Robins, N. P.; Hush, M. R.

    2016-01-01

We apply an online optimization process based on machine learning to the production of Bose-Einstein condensates (BEC). BEC is typically created with an exponential evaporation ramp that is optimal for ergodic dynamics with two-body s-wave interactions and no other loss rates, but likely sub-optimal for real experiments. Through repeated machine-controlled scientific experimentation and observations, our ‘learner’ discovers an optimal evaporation ramp for BEC production. In contrast to previous work, our learner uses a Gaussian process to develop a statistical model of the relationship between the parameters it controls and the quality of the BEC produced. We demonstrate that the Gaussian process machine learner is able to discover a ramp that produces high-quality BECs in 10 times fewer iterations than a previously used online optimization technique. Furthermore, we show the internal model developed can be used to determine which parameters are essential in BEC creation and which are unimportant, providing insight into the optimization process of the system. PMID:27180805

  11. Current Developments in Machine Learning Techniques in Biological Data Mining.

    PubMed

    Dumancas, Gerard G; Adrianto, Indra; Bello, Ghalib; Dozmorov, Mikhail

    2017-01-01

    This supplement focuses on the use of machine learning techniques to generate meaningful information from biological data. This supplement under Bioinformatics and Biology Insights aims to provide scientists and researchers working in this rapidly evolving field with online, open-access articles authored by leading international experts. Advances in the field of biology have generated massive opportunities for the application of modern computational and statistical techniques. Machine learning methods in particular, a subfield of computer science, have evolved into an indispensable tool applied across a wide spectrum of bioinformatics problems. Machine learning is thus broadly used to investigate the underlying mechanisms leading to specific diseases, as well as in the biomarker discovery process. With growth in this area of science comes the need for up-to-date, high-quality scholarly articles that leverage the knowledge of scientists and researchers across the various applications of machine learning to mining biological data.

  12. Efficient Embedded Decoding of Neural Network Language Models in a Machine Translation System.

    PubMed

    Zamora-Martinez, Francisco; Castro-Bleda, Maria Jose

    2018-02-22

    Neural Network Language Models (NNLMs) are a successful approach to Natural Language Processing tasks, such as Machine Translation. In this work we introduce a Statistical Machine Translation (SMT) system which fully integrates NNLMs in the decoding stage, breaking with the traditional approach based on [Formula: see text]-best list rescoring. The neural net models (both language models (LMs) and translation models) are fully coupled in the decoding stage, allowing them to influence the translation quality more strongly. Computational issues were solved by a novel idea based on memorization and smoothing of the softmax constants to avoid their computation, which introduces a trade-off between LM quality and computational cost. These ideas were studied in a machine translation task with different combinations of neural networks used both as translation models and as target LMs, comparing phrase-based and [Formula: see text]-gram-based systems, and showing that the integrated approach seems more promising for [Formula: see text]-gram-based systems, even with non-full-quality NNLMs.
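The memorization of softmax constants mentioned above can be illustrated with a toy scorer. Everything here is hypothetical (the small vocabulary, the hash-based `logits` stand-in for a neural output layer); only the memoization idea is the point: the softmax normalization constant is computed once per distinct context and then reused for every word scored under that context. The paper additionally smooths these constants, which this sketch omits.

```python
import math
from functools import lru_cache

# Toy vocabulary; a real NNLM vocabulary is tens of thousands of words.
VOCAB = ["the", "cat", "sat", "mat", "</s>"]

def logits(context):
    # Hypothetical stand-in for a neural LM's unnormalized output scores.
    return [hash((context, w)) % 97 / 10.0 for w in VOCAB]

@lru_cache(maxsize=None)
def log_z(context):
    # Memoized softmax constant: computed once per distinct context,
    # then reused by every subsequent log_prob call with that context.
    return math.log(sum(math.exp(s) for s in logits(context)))

def log_prob(context, word):
    # Normalized log-probability of `word` given `context`.
    return logits(context)[VOCAB.index(word)] - log_z(context)

p = math.exp(log_prob("the cat", "sat"))
```

During decoding the same context is scored against many candidate words, so caching the constant removes the dominant per-word cost of the output softmax.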

  13. Multicopy programmable discrimination of general qubit states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sentis, G.; Bagan, E.; Calsamiglia, J.

    2010-10-15

    Quantum state discrimination is a fundamental primitive in quantum statistics, where one has to correctly identify the state of a system that is in one of two possible known states. A programmable discrimination machine performs this task when the pair of possible states is not known a priori but is instead provided through two respective program ports. We study optimal programmable discrimination machines for general qubit states when several copies of states are available in the data or program ports. Two scenarios are considered: one in which the purity of the possible states is a priori known, and the fully universal one where the machine operates over generic mixed states of unknown purity. We find analytical results for both the unambiguous and minimum-error discrimination strategies. This allows us to calculate the asymptotic performance of programmable discrimination machines when a large number of copies are provided and to recover the standard state discrimination and state comparison values as different limiting cases.
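The standard state discrimination recovered above as a limiting case has a closed-form optimum: for two known states the minimum-error success probability is the Helstrom bound, P = (1 + ||p₁ρ₁ − p₂ρ₂||₁)/2, which is easy to evaluate numerically. A small check (the example states |0⟩ and |+⟩ with equal priors are our own illustration):

```python
import numpy as np

def helstrom_success(rho1, rho2, p1=0.5):
    # Minimum-error success probability for discriminating two known
    # density matrices: P = 1/2 * (1 + trace norm of p1*rho1 - p2*rho2).
    gamma = p1 * rho1 - (1.0 - p1) * rho2
    eig = np.linalg.eigvalsh(gamma)          # gamma is Hermitian
    return 0.5 * (1.0 + np.sum(np.abs(eig)))

# Example: discriminate |0> from |+> with equal priors.
ket0 = np.array([1.0, 0.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho1 = np.outer(ket0, ket0)
rho2 = np.outer(ketp, ketp)
p = helstrom_success(rho1, rho2)             # (1 + 1/sqrt(2)) / 2
```

For two pure states with overlap c this reduces to (1 + √(1 − c²))/2, here c = 1/√2.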

  14. Comparison of Neural Network and Linear Regression Models in Statistically Predicting Mental and Physical Health Status of Breast Cancer Survivors

    DTIC Science & Technology

    2015-07-15

    Long-term effects on cancer survivors' quality of life of physical training versus physical training combined with cognitive-behavioral therapy.

  15. Analysis towards VMEM File of a Suspended Virtual Machine

    NASA Astrophysics Data System (ADS)

    Song, Zheng; Jin, Bo; Sun, Yongqing

    With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering evidence in virtualized environments is of significant importance. This paper analyzes the file suffixed with .vmem in VMware Workstation, which stores all of a virtual machine's pseudo-physical memory as an image. The internal structure of the .vmem file is studied and disclosed. Key information about the processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods of obtaining forensic memory images are introduced, with their advantages and limits analyzed. We conclude with an outlook.
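As a generic illustration of working with raw memory images such as .vmem files, the sketch below carves printable ASCII strings from a byte dump, in the spirit of the Unix `strings` utility; it does not implement the .vmem internals or the Windows XP heap structures the paper discloses, and the toy dump is fabricated for the example.

```python
import re

def carve_ascii_strings(image_bytes, min_len=4):
    # Extract runs of printable ASCII at least min_len bytes long,
    # a common first pass over a raw memory image.
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, image_bytes)]

# Toy "memory image": text fragments surrounded by raw bytes.
dump = b"\x00\x01explorer.exe\x00\xff\x10C:\\Windows\\notepad.exe\x02\x03ab\x00"
found = carve_ascii_strings(dump)
```

Process names and file paths recovered this way are often the starting point for locating the kernel structures that a full analysis then parses.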

  16. National Synchrotron Light Source annual report 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hulbert, S.L.; Lazarz, N.M.

    1992-04-01

    This report discusses the following research conducted at NSLS: atomic and molecular science; energy dispersive diffraction; lithography, microscopy and tomography; nuclear physics; UV photoemission and surface science; x-ray absorption spectroscopy; x-ray scattering and crystallography; x-ray topography; workshop on surface structure; workshop on electronic and chemical phenomena at surfaces; workshop on imaging; UV FEL machine reviews; VUV machine operations; VUV beamline operations; VUV storage ring parameters; x-ray machine operations; x-ray beamline operations; x-ray storage ring parameters; superconducting x-ray lithography source; SXLS storage ring parameters; the accelerator test facility; proposed UV-FEL user facility at the NSLS; global orbit feedback systems; and NSLS computer system.

  17. Machining-induced surface transformations of magnesium alloys to enhance corrosion resistance in human-like environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bruschi, Stefania; Bertolini, Rachele; Ghiotti, Andrea

    We report that magnesium alloys are becoming increasingly attractive for producing temporary prosthetic devices thanks to their bioresorbable characteristics in the human body. However, their poor corrosion resistance to body fluids seriously limits their applicability. In this work, machining-induced surface transformations are explored as a means to enhance the corrosion resistance of the AZ31 magnesium alloy. Surface characteristics, including topography, residual stresses, wettability, microstructure and depth of the transformed layer, were analysed and correlated to in-vitro corrosion resistance. Results showed that cryogenic machining at low feed provided the most promising corrosion reduction. Finally, thorough physical characterizations gave fundamental insights into possible drivers for this enhanced resistance.

  18. Machining-induced surface transformations of magnesium alloys to enhance corrosion resistance in human-like environment

    DOE PAGES

    Bruschi, Stefania; Bertolini, Rachele; Ghiotti, Andrea; ...

    2018-04-22

    We report that magnesium alloys are becoming increasingly attractive for producing temporary prosthetic devices thanks to their bioresorbable characteristics in the human body. However, their poor corrosion resistance to body fluids seriously limits their applicability. In this work, machining-induced surface transformations are explored as a means to enhance the corrosion resistance of the AZ31 magnesium alloy. Surface characteristics, including topography, residual stresses, wettability, microstructure and depth of the transformed layer, were analysed and correlated to in-vitro corrosion resistance. Results showed that cryogenic machining at low feed provided the most promising corrosion reduction. Finally, thorough physical characterizations gave fundamental insights into possible drivers for this enhanced resistance.

  19. POLYSHIFT Communications Software for the Connection Machine System CM-200

    DOE PAGES

    George, William; Brickner, Ralph G.; Johnsson, S. Lennart

    1994-01-01

    We describe the use and implementation of a polyshift function PSHIFT for circular shifts and end-off shifts. Polyshift is useful in many scientific codes using regular grids, such as multidimensional finite-difference codes, multigrid codes, molecular dynamics computations, and lattice gauge physics computations such as quantum chromodynamics (QCD) calculations. Our implementation of the PSHIFT function on the Connection Machine systems CM-2 and CM-200 offers a speedup of up to a factor of 3–4 compared with CSHIFT when the local data motion within a node is small. The PSHIFT routine is included in the Connection Machine Scientific Software Library (CMSSL).
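The circular and end-off shifts that PSHIFT provides can be illustrated with plain array operations. This sketch is generic Python/NumPy, not the CMSSL interface; the Laplacian stencil at the end shows the kind of finite-difference use the abstract mentions.

```python
import numpy as np

def circular_shift(grid, shift, axis=0):
    # Circular shift: data leaving one edge re-enters at the other,
    # as in periodic-boundary finite-difference or lattice-QCD stencils.
    return np.roll(grid, shift, axis=axis)

def end_off_shift(grid, shift, axis=0, fill=0.0):
    # End-off shift: shifted-out data is discarded and the vacated
    # positions are filled with a constant.
    out = np.roll(grid, shift, axis=axis)
    idx = [slice(None)] * grid.ndim
    if shift > 0:
        idx[axis] = slice(0, shift)
    else:
        idx[axis] = slice(grid.shape[axis] + shift, None)
    out[tuple(idx)] = fill
    return out

# A 1-D periodic Laplacian stencil built from two circular shifts.
u = np.array([0.0, 1.0, 0.0, 0.0])
lap = circular_shift(u, 1) + circular_shift(u, -1) - 2.0 * u
```

In a multi-dimensional code the same two calls per axis assemble the full stencil, which is why an optimized shift primitive pays off.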

  20. Stability Assessment of a System Comprising a Single Machine and Inverter with Scalable Ratings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Brian B; Lin, Yashen; Gevorgian, Vahan

    From the inception of power systems, synchronous machines have acted as the foundation of large-scale electrical infrastructures and their physical properties have formed the cornerstone of system operations. However, power electronics interfaces are playing a growing role as they are the primary interface for several types of renewable energy sources and storage technologies. As the role of power electronics in systems continues to grow, it is crucial to investigate the properties of bulk power systems in low inertia settings. In this paper, we assess the properties of coupled machine-inverter systems by studying an elementary system comprised of a synchronous generator, three-phase inverter, and a load. Furthermore, the inverter model is formulated such that its power rating can be scaled continuously across power levels while preserving its closed-loop response. Accordingly, the properties of the machine-inverter system can be assessed for varying ratios of machine-to-inverter power ratings and, hence, differing levels of inertia. After linearizing the model and assessing its eigenvalues, we show that system stability is highly dependent on the interaction between the inverter current controller and machine exciter, thus uncovering a key concern with mixed machine-inverter systems and motivating the need for next-generation grid-stabilizing inverter controls.
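The linearize-then-inspect-eigenvalues step described above can be sketched generically: a linearized system dx/dt = Ax is asymptotically stable exactly when every eigenvalue of A has a strictly negative real part. The two state matrices below are arbitrary illustrations, not the paper's machine-inverter model.

```python
import numpy as np

def is_stable(A):
    # Asymptotic stability of the linearized system dx/dt = A x:
    # all eigenvalues of A must lie strictly in the left half-plane.
    return bool(np.all(np.linalg.eigvals(A).real < 0))

# Illustrative 2x2 state matrices (not the paper's model):
damped = np.array([[-1.0, 2.0],
                   [-2.0, -1.0]])    # eigenvalues -1 +/- 2j: stable
undamped = np.array([[0.0, 1.0],
                     [1.0, 0.0]])    # eigenvalues +/- 1: unstable
```

Sweeping a parameter such as the machine-to-inverter rating ratio and re-evaluating the eigenvalues at each point is the standard way to map out the stability boundary the authors discuss.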
