Sample records for machine combining coarse

  1. CAMELOT: A machine learning approach for coarse-grained simulations of aggregation of block-copolymeric protein sequences

    PubMed Central

    Ruff, Kiersten M.; Harmon, Tyler S.; Pappu, Rohit V.

    2015-01-01

    We report the development and deployment of a coarse-graining method that is well suited for computer simulations of aggregation and phase separation of protein sequences with block-copolymeric architectures. Our algorithm, named CAMELOT for Coarse-grained simulations Aided by MachinE Learning Optimization and Training, leverages information from converged all-atom simulations to determine a suitable resolution and parameterize the coarse-grained model. To parameterize a system-specific coarse-grained model, we use a combination of Boltzmann inversion, non-linear regression, and a Gaussian process Bayesian optimization approach. The accuracy of the coarse-grained model is demonstrated through direct comparisons to results from all-atom simulations. We demonstrate the utility of our coarse-graining approach using the block-copolymeric sequence encoded by exon 1 of the huntingtin protein. This sequence comprises the 17 N-terminal residues of huntingtin (N17) followed by a polyglutamine (polyQ) tract. Simulations based on the CAMELOT approach show that the adsorption and unfolding of the wild-type N17 and its sequence variants on the surface of polyQ tracts engender a patchy colloid-like architecture that promotes the formation of linear aggregates. These results provide a plausible explanation for experimental observations, which show that N17 accelerates the formation of linear aggregates in block-copolymeric N17-polyQ sequences. The CAMELOT approach is versatile and generalizable for simulating the aggregation and phase behavior of a range of block-copolymeric protein sequences. PMID:26723608
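
    As a concrete illustration of the Boltzmann-inversion step that CAMELOT combines with non-linear regression and Bayesian optimization, the sketch below derives an effective pair potential from a sampled distance distribution. It is a minimal sketch on synthetic data, not the published implementation; the thermal energy value, the radial normalization, and all names are assumptions.

```python
import numpy as np

KT = 2.494  # thermal energy in kJ/mol at ~300 K (assumed units)

def boltzmann_invert(distances, bins=100):
    """Effective pair potential U(r) = -kT ln g(r) from sampled distances."""
    hist, edges = np.histogram(distances, bins=bins, density=True)
    r = 0.5 * (edges[:-1] + edges[1:])
    g = hist / (4.0 * np.pi * r**2)   # divide out the radial Jacobian
    mask = g > 0
    U = -KT * np.log(g[mask] / g[mask].max())  # shift so the minimum is 0
    return r[mask], U

# synthetic stand-in for distances harvested from a converged all-atom run
rng = np.random.default_rng(0)
r, U = boltzmann_invert(rng.normal(loc=0.47, scale=0.05, size=100_000))
```

    In the full method, potentials like this serve only as a starting point; the remaining parameters are tuned against the all-atom reference by Gaussian process Bayesian optimization.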

  2. High-performance Chinese multiclass traffic sign detection via coarse-to-fine cascade and parallel support vector machine detectors

    NASA Astrophysics Data System (ADS)

    Chang, Faliang; Liu, Chunsheng

    2017-09-01

    The high variability of sign colors and shapes in uncontrolled environments has made the detection of traffic signs a challenging problem in computer vision. We propose a traffic sign detection (TSD) method based on a coarse-to-fine cascade and parallel support vector machine (SVM) detectors to detect Chinese warning and danger traffic signs. First, a region of interest (ROI) extraction method is proposed to extract ROIs using color contrast features in local regions. The ROI extraction reduces the scanned regions and saves detection time. For multiclass TSD, we propose a structure that combines a coarse-to-fine cascaded tree with a parallel structure of histogram of oriented gradients (HOG) + SVM detectors. The cascaded tree detects the different types of traffic signs in a coarse-to-fine process, while the parallel HOG + SVM detectors perform fine detection for each sign type. The experiments demonstrate that the proposed TSD method can rapidly detect multiclass traffic signs of different colors and shapes with high accuracy.
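
    The fine-detection stage pairs HOG descriptors with per-class SVMs. Below is a minimal sketch of one such detector, using scikit-image and scikit-learn on stand-in data; the image size, HOG parameters, and labels are assumptions, not the paper's settings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC

def hog_features(images):
    """HOG descriptor per grayscale crop."""
    return np.array([hog(im, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
                     for im in images])

# stand-in data: 64x64 grayscale ROI crops, label 1 = a given sign class
rng = np.random.default_rng(1)
rois = rng.random((40, 64, 64))
labels = rng.integers(0, 2, size=40)

detector = SVC(kernel="linear").fit(hog_features(rois), labels)
# each ROI surviving the coarse color-contrast and cascaded-tree stages
# would be passed to one of the parallel per-class detectors like this
pred = detector.predict(hog_features(rois[:5]))
```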

  3. Fabrication, characterization and fracture study of a machinable hydroxyapatite ceramic.

    PubMed

    Shareef, M Y; Messer, P F; van Noort, R

    1993-01-01

    In this study, the preparation of a machinable hydroxyapatite from mixtures of a fine, submicrometer powder and either a coarse powder composed of porous aggregates up to 50 microns or a medium powder composed of dense particles of 3 microns median size is described. The powders were characterized using X-ray diffraction, transmission and scanning electron microscopy, and infrared spectroscopy. Test-pieces were formed by powder pressing and slip casting mixtures of various combinations of the fine, medium and coarse powders. The fired test-pieces were subjected to measurements of firing shrinkage, porosity, bulk density, tensile strength and fracture toughness. The microstructure and composition were examined using scanning electron microscopy and X-ray diffraction. For both processing methods, a uniform, interconnected microporous structure of high-purity hydroxyapatite was produced. The maximum tensile strength and fracture toughness that could be attained while retaining machinability were 37 MPa and 0.8 MPa m^1/2, respectively.

  4. Innovations in the flotation of fine and coarse particles

    NASA Astrophysics Data System (ADS)

    Fornasiero, D.; Filippov, L. O.

    2017-07-01

    Research on the mechanisms of particle-bubble interaction has provided valuable information on how to improve the flotation of fine (<20 µm) and coarse (>100 µm) particles with novel flotation machines, which provide higher collision and attachment efficiencies of fine particles with bubbles and lower detachment of coarse particles. In addition, new grinding methods and technologies have reduced energy consumption in mining and produced better mineral liberation, and therefore better flotation performance.

  5. Principal component analysis acceleration of rovibrational coarse-grain models for internal energy excitation and dissociation

    NASA Astrophysics Data System (ADS)

    Bellemans, Aurélie; Parente, Alessandro; Magin, Thierry

    2018-04-01

    The present work introduces a novel approach for obtaining reduced chemistry representations of large kinetic mechanisms in strong non-equilibrium conditions. The need for accurate reduced-order models arises from the compression of large ab initio quantum chemistry databases for their use in fluid codes. The method presented in this paper builds on existing physics-based strategies and proposes a new approach based on the combination of a simple coarse grain model with Principal Component Analysis (PCA). The internal energy levels of the chemical species are regrouped into distinct energy groups with a uniform lumping technique. Following the philosophy of machine learning, PCA is applied to the training data provided by the coarse grain model to find an optimally reduced representation of the full kinetic mechanism. Compared to recently published complex lumping strategies, no expert judgment is required before the application of PCA. In this work, we demonstrate the benefits of the combined approach, stressing its simplicity, reliability, and accuracy. The technique is demonstrated by reducing the complex quantum N2(1Σg+)-N(4Su) database for studying molecular dissociation and excitation in strong non-equilibrium. Starting from detailed kinetics, an accurate reduced model is developed and used to study non-equilibrium properties of the N2(1Σg+)-N(4Su) system in shock relaxation simulations.
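
    The PCA step itself is standard: training states generated by the uniform-lumping coarse-grain model are collected into a data matrix and projected onto a few principal components. A minimal sketch with scikit-learn follows; the data layout (rows as sampled states, columns as group populations) is an assumption for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.random((500, 40))   # stand-in: 500 sampled states x 40 energy groups

pca = PCA(n_components=3).fit(X)
Z = pca.transform(X)                  # reduced representation of the kinetics
X_approx = pca.inverse_transform(Z)   # map back to the group populations
print(pca.explained_variance_ratio_)  # variance retained by each component
```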

  6. Identification and location of catenary insulator in complex background based on machine vision

    NASA Astrophysics Data System (ADS)

    Yao, Xiaotong; Pan, Yingli; Liu, Li; Cheng, Xiao

    2018-04-01

    Precise localization of insulators is an important prerequisite for fault detection. Because current algorithms for locating insulators in catenary inspection images are not accurate, a target recognition and localization method based on binocular vision combined with SURF features is proposed. First, since the insulator is located in a complex environment, SURF features are used to achieve coarse positioning of the target. Then, the binocular vision principle is used to calculate the 3D coordinates of the coarsely located object, achieving target recognition and fine localization. Finally, the 3D coordinates of the object's center of mass are preserved and transferred to the inspection robot to control its detection position. Experimental results demonstrate that the proposed method achieves good recognition efficiency and accuracy, can successfully identify the target, and has definite application value.

  7. Location and acquisition of objects in unpredictable locations. [a teleoperator system with a computer for manipulator control

    NASA Technical Reports Server (NTRS)

    Sword, A. J.; Park, W. T.

    1975-01-01

    A teleoperator system with a computer for manipulator control to combine the capabilities of both man and computer to accomplish a task is described. This system allows objects in unpredictable locations to be successfully located and acquired. By using a method of characterizing the work-space together with man's ability to plan a strategy and coarsely locate an object, the computer is provided with enough information to complete the tedious part of the task. In addition, the use of voice control is shown to be a useful component of the man/machine interface.

  8. STOCK: Structure mapper and online coarse-graining kit for molecular simulations

    DOE PAGES

    Bevc, Staš; Junghans, Christoph; Praprotnik, Matej

    2015-03-15

    We present a web toolkit STructure mapper and Online Coarse-graining Kit for setting up coarse-grained molecular simulations. The kit consists of two tools: structure mapping and Boltzmann inversion tools. The aim of the first tool is to define a molecular mapping from high, e.g. all-atom, to low, i.e. coarse-grained, resolution. Using a graphical user interface it generates input files, which are compatible with standard coarse-graining packages, e.g. VOTCA and DL_CGMAP. Our second tool generates effective potentials for coarse-grained simulations preserving the structural properties, e.g. radial distribution functions, of the underlying higher resolution model. The required distribution functions can be provided by any simulation package. Simulations are performed on a local machine and only the distributions are uploaded to the server. The applicability of the toolkit is validated by mapping atomistic pentane and polyalanine molecules to a coarse-grained representation. Effective potentials are derived for systems of TIP3P (transferable intermolecular potential 3 point) water molecules and salt solution. The presented coarse-graining web toolkit is available at http://stock.cmm.ki.si.

  9. Comparative Performance Analysis of Coarse Solvers for Algebraic Multigrid on Multicore and Manycore Architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Druinsky, Alex; Ghysels, Pieter; Li, Xiaoye S.

    In this paper, we study the performance of a two-level algebraic-multigrid algorithm, with a focus on the impact of the coarse-grid solver on performance. We consider two algorithms for solving the coarse-space systems: the preconditioned conjugate gradient method and a new robust HSS-embedded low-rank sparse-factorization algorithm. Our test data comes from the SPE Comparative Solution Project for oil-reservoir simulations. We contrast the performance of our code on one 12-core socket of a Cray XC30 machine with performance on a 60-core Intel Xeon Phi coprocessor. To obtain top performance, we optimized the code to take full advantage of fine-grained parallelism and made it thread-friendly for high thread counts. We also developed a bounds-and-bottlenecks performance model of the solver, which we used to guide the optimization effort, and carried out performance tuning in the solver's large parameter space. As a result, significant speedups were obtained on both machines.
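
    For the first coarse-space option, a preconditioned conjugate gradient solve can be sketched in a few lines with SciPy. This is a generic illustration on a stand-in SPD system with a Jacobi preconditioner, not the paper's reservoir matrices or its HSS-based alternative.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

# stand-in SPD coarse-grid system: a 1-D Poisson matrix
n = 1000
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi (diagonal) preconditioner wrapped as a LinearOperator
M = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())

x, info = cg(A, b, M=M)
print("converged" if info == 0 else f"cg returned info={info}")
```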

  10. AMR on the CM-2

    NASA Technical Reports Server (NTRS)

    Berger, Marsha J.; Saltzman, Jeff S.

    1992-01-01

    We describe the development of a structured adaptive mesh refinement (AMR) algorithm for the Connection Machine-2 (CM-2). We develop a data layout scheme that preserves locality even for communication between fine and coarse grids. On an 8K-processor partition of a 32K machine, we achieve performance slightly below that of one CPU of the Cray Y-MP. We apply our algorithm to an inviscid compressible flow problem.

  11. Quantitative assessment of the enamel machinability in tooth preparation with dental diamond burs.

    PubMed

    Song, Xiao-Fei; Jin, Chen-Xin; Yin, Ling

    2015-01-01

    Enamel cutting using dental handpieces is a critical process in tooth preparation for dental restorations and treatment, but the machinability of enamel is poorly understood. This paper reports the first quantitative assessment of enamel machinability using computer-assisted numerical control, high-speed data acquisition, and force sensing systems. The enamel machinability, in terms of cutting forces, force ratio, cutting torque, cutting speed and specific cutting energy, was characterized in relation to enamel surface orientation, specific material removal rate and diamond bur grit size. The results show that enamel surface orientation, specific material removal rate and diamond bur grit size critically affected the enamel cutting capability. Cutting buccal/lingual surfaces resulted in significantly higher tangential and normal forces, torques and specific energy (p<0.05) but lower cutting speeds than occlusal surfaces (p<0.05). Increasing the material removal rate for high cutting efficiency using coarse burs yielded remarkable rises in cutting forces and torque (p<0.05) but significant reductions in cutting speed and specific cutting energy (p<0.05). In particular, great variations in cutting forces, torques and specific energy were observed at the specific material removal rate of 3 mm³/min/mm using coarse burs, indicating the cutting limit. This work provides fundamental data and scientific understanding of enamel machinability for clinical dental practice.
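
    For reference, specific cutting energy conventionally relates cutting power to the volumetric material removal rate; a standard textbook form (the record does not state the exact definition used in the paper) is

```latex
u \;=\; \frac{P_c}{Q} \;=\; \frac{2\pi\, n\, T_c}{Q},
```

    where \(T_c\) is the cutting torque, \(n\) the bur rotational speed, and \(Q\) the volumetric material removal rate.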

  12. Thermal and Mechanical Property Characterization of the Advanced Disk Alloy LSHR

    NASA Technical Reports Server (NTRS)

    Gabb, Timothy P.; Gayda, John; Telesman, Jack; Kantzos, Peter T.

    2005-01-01

    A low solvus, high refractory (LSHR) powder metallurgy disk alloy was recently designed, using experimental screening and statistical modeling of composition and processing variables on sub-scale disks, to have versatile processing-property capabilities for advanced disk applications. The objective of the present study was to produce a scaled-up disk and apply varied heat treatment processes to enable full-scale demonstration of LSHR properties. Scaled-up disks were produced, heat treated, sectioned, and then machined into specimens for mechanical testing. Results indicate the LSHR alloy can be processed to produce fine and coarse grain microstructures with differing combinations of strength and time-dependent mechanical properties, for application at temperatures exceeding 1300 °F.

  13. Quantum Mechanics/Molecular Mechanics Method Combined with Hybrid All-Atom and Coarse-Grained Model: Theory and Application on Redox Potential Calculations.

    PubMed

    Shen, Lin; Yang, Weitao

    2016-04-12

    We developed a new multiresolution method that spans three levels of resolution with quantum mechanical, atomistic molecular mechanical, and coarse-grained models. The resolution-adapted all-atom and coarse-grained water model, in which an all-atom structural description of the entire system is maintained during the simulations, is combined with the ab initio quantum mechanics and molecular mechanics method. We apply this model to calculate the redox potentials of aqueous ruthenium and iron complexes using the fractional number of electrons approach and thermodynamic integration simulations. The redox potentials are recovered in excellent agreement with the experimental data. The speed-up of the hybrid all-atom and coarse-grained water model renders it computationally more attractive. The accuracy depends on the hybrid all-atom and coarse-grained water model used in the combined quantum mechanical and molecular mechanical method. We have also used another multiresolution model, in which an atomic-level layer of water molecules around the redox center is solvated in supramolecular coarse-grained waters, for the redox potential calculations. Compared with the experimental data, this alternative multilayer model leads to less accurate results when used with the coarse-grained polarizable MARTINI water or the big multipole water model for the coarse-grained layer.

  14. Downscaling Coarse Scale Microwave Soil Moisture Product using Machine Learning

    NASA Astrophysics Data System (ADS)

    Abbaszadeh, P.; Moradkhani, H.; Yan, H.

    2016-12-01

    Soil moisture (SM) is a key variable in partitioning and examining the global water-energy cycle, agricultural planning, and water resource management. It is also strongly coupled with climate change, playing an important role in weather forecasting, drought monitoring and prediction, flood modeling, and irrigation management. Although satellite retrievals can provide unprecedented information on soil moisture at a global scale, the products might be inadequate for basin-scale studies or regional assessment. To improve the spatial resolution of SM, this work presents a novel approach based on a Machine Learning (ML) technique that allows downscaling of the satellite soil moisture to fine resolution. For this purpose, the SMAP L-band radiometer SM products were used and conditioned on the Variable Infiltration Capacity (VIC) model prediction to describe the relationship between the coarse- and fine-scale soil moisture data. The proposed downscaling approach was applied to a western US basin and the products were compared against the available SM data from in-situ gauge stations. The obtained results indicate the great potential of the machine learning technique to derive fine-resolution soil moisture information for land data assimilation applications.
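
    A regression-based downscaling step of this kind can be sketched with a random forest: the model learns a mapping from coarse-scale soil moisture and fine-scale covariates to fine-scale soil moisture. Everything below, the predictor set, the shapes, and the synthetic target, is an assumed stand-in, not the SMAP/VIC configuration of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# assumed predictors per fine-scale pixel: coarse SMAP soil moisture,
# VIC-modeled soil moisture, elevation, vegetation index (all synthetic here)
rng = np.random.default_rng(3)
X = rng.random((2000, 4))
y_fine = 0.5 * X[:, 0] + 0.4 * X[:, 1] + 0.1 * rng.random(2000)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y_fine)
sm_fine = model.predict(X[:10])   # downscaled soil moisture estimates
```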

  15. Nanoscale swimmers: hydrodynamic interactions and propulsion of molecular machines

    NASA Astrophysics Data System (ADS)

    Sakaue, T.; Kapral, R.; Mikhailov, A. S.

    2010-06-01

    Molecular machines execute nearly regular cyclic conformational changes as a result of ligand binding and product release. This cyclic conformational dynamics is generally non-reciprocal, so that under time reversal a different sequence of machine conformations is visited. Since such changes occur in a solvent, coupling to solvent hydrodynamic modes will generally result in self-propulsion of the molecular machine. These effects are investigated for a class of coarse-grained models of protein machines consisting of a set of beads interacting through pair-wise additive potentials. Hydrodynamic effects are incorporated through a configuration-dependent mobility tensor, and expressions for the linear and angular propulsion velocities, as well as the stall force, are obtained. In the limit where conformational changes are small, so that linear response theory is applicable, it is shown that propulsion is exponentially small; thus, propulsion is a nonlinear phenomenon. The results are illustrated by computations on a simple model molecular machine.

  16. Preprocessor with spline interpolation for converting stereolithography into cutter location source data

    NASA Astrophysics Data System (ADS)

    Nagata, Fusaomi; Okada, Yudai; Sakamoto, Tatsuhiko; Kusano, Takamasa; Habib, Maki K.; Watanabe, Keigo

    2017-06-01

    The authors have previously developed an industrial machining robotic system for foamed polystyrene materials. The developed robotic CAM system provided a simple and effective interface between operators and the machining robot without the need for any robot language. In this paper, a preprocessor for generating Cutter Location Source data (CLS data) from Stereolithography data (STL data) is first proposed for robotic machining. The preprocessor enables the machining robot to be controlled directly from STL data without using any commercially provided CAM system. The STL format represents a curved surface geometry with triangular facets. The preprocessor allows machining robots to be controlled through a zigzag or spiral path calculated directly from STL data. Then, a smart spline interpolation method is proposed and implemented for smoothing coarse CLS data. The effectiveness and potential of the developed approaches are demonstrated through experiments on actual machining and interpolation.

  17. Electrical Equipment of Electrical Stations and Substations,

    DTIC Science & Technology

    1979-10-25

    of Communist society. In 1921 he wrote: "The sole material base of socialism can be the large/coarse machine industry, capable of reorganizing and... produced with the aid of a special switching system, structurally/constructionally being part of the transformer itself. The transformers, supplied with this

  18. Thermodynamically consistent coarse graining of biocatalysts beyond Michaelis–Menten

    NASA Astrophysics Data System (ADS)

    Wachtel, Artur; Rao, Riccardo; Esposito, Massimiliano

    2018-04-01

    Starting from the detailed catalytic mechanism of a biocatalyst we provide a coarse-graining procedure which, by construction, is thermodynamically consistent. This procedure provides stoichiometries, reaction fluxes (rate laws), and reaction forces (Gibbs energies of reaction) for the coarse-grained level. It can treat active transporters and molecular machines, and thus extends the applicability of ideas that originated in enzyme kinetics. Our results lay the foundations for systematic studies of the thermodynamics of large-scale biochemical reaction networks. Moreover, we identify the conditions under which a relation between one-way fluxes and forces holds at the coarse-grained level as it holds at the detailed level. In doing so, we clarify the speculations and broad claims made in the literature about such a general flux–force relation. As a further consequence we show that, in contrast to common belief, the second law of thermodynamics does not require the currents and the forces of biochemical reaction networks to be always aligned.

  19. Rolling bearing fault detection and diagnosis based on composite multiscale fuzzy entropy and ensemble support vector machines

    NASA Astrophysics Data System (ADS)

    Zheng, Jinde; Pan, Haiyang; Cheng, Junsheng

    2017-02-01

    To timely detect the incipient failure of rolling bearings and find the accurate fault location, a novel rolling bearing fault diagnosis method is proposed based on composite multiscale fuzzy entropy (CMFE) and ensemble support vector machines (ESVMs). Fuzzy entropy (FuzzyEn), an improvement of sample entropy (SampEn), is a new nonlinear method for measuring the complexity of time series. Since FuzzyEn (or SampEn) at a single scale cannot reflect the complexity effectively, multiscale fuzzy entropy (MFE) is developed by defining the FuzzyEns of coarse-grained time series, which represent the system dynamics at different scales. However, the MFE values are affected by the data length, especially when the data are not long enough. By combining the information of multiple coarse-grained time series at the same scale, the CMFE algorithm proposed in this paper enhances MFE, as well as FuzzyEn. Compared with MFE, as the scale factor increases, CMFE obtains much more stable and consistent values for a short-term time series. In this paper CMFE is employed to measure the complexity of vibration signals of rolling bearings and to extract the nonlinear features hidden in the vibration signals. The physical reasons why CMFE is suitable for rolling bearing fault diagnosis are also explored. Based on these, to achieve automatic fault diagnosis, an ensemble-SVM-based multi-classifier is constructed for the intelligent classification of fault features. Finally, the proposed fault diagnosis method is applied to experimental data analysis and the results indicate that the proposed method can effectively distinguish different fault categories and severities of rolling bearings.
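
    The composite step is the heart of CMFE: at scale factor tau there are tau shifted coarse-grainings of the signal, and the entropy estimates over all of them are averaged. A minimal sketch of that machinery follows, with a simple variance-based stand-in where a real implementation would plug in fuzzy entropy.

```python
import numpy as np

def coarse_grain(x, tau, offset=0):
    """Means over non-overlapping windows of length tau, starting at offset."""
    x = x[offset:]
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def composite_multiscale(x, tau, entropy_fn):
    """Average an entropy estimate over all tau shifted coarse-grainings."""
    return np.mean([entropy_fn(coarse_grain(x, tau, k)) for k in range(tau)])

# stand-in complexity measure; CMFE proper would use fuzzy entropy here
log_var = lambda s: np.log(np.var(s) + 1e-12)
signal = np.random.default_rng(4).standard_normal(4096)
print([composite_multiscale(signal, tau, log_var) for tau in (1, 2, 5)])
```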

  20. Multitasking runtime systems for the Cedar Multiprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzzi, M.D.

    1986-07-01

    The programming of a MIMD machine is more complex than for SISD and SIMD machines. The multiple computational resources of the machine must be made available to the programming language compiler and to the programmer so that multitasking programs may be written. This thesis will explore the additional complexity of programming a MIMD machine, the Cedar Multiprocessor specifically, and the multitasking runtime system necessary to provide multitasking resources to the user. First, the problem will be well defined: the Cedar machine, its operating system, the programming language, and multitasking concepts will be described. Second, a solution to the problem, called macrotasking, will be proposed. This solution provides multitasking facilities to the programmer at a very coarse level with many visible machine dependencies. Third, an alternate solution, called microtasking, will be proposed. This solution provides multitasking facilities of a much finer grain. This solution does not depend so rigidly on the specific architecture of the machine. Finally, the two solutions will be compared for effectiveness. 12 refs., 16 figs.

  1. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
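
    The surrogate idea can be sketched with a small multilayer perceptron that maps a local permeability patch to the basis-function values on the corresponding dual-grid cell. The shapes, the log transform, and the synthetic data are assumptions for illustration, not the paper's architecture.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
K = rng.lognormal(size=(800, 25))   # stand-in 5x5 permeability patches
phi = rng.random((800, 25))         # stand-in basis values on each patch

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                         random_state=0).fit(np.log(K), phi)
phi_new = surrogate.predict(np.log(K[:3]))  # cheap basis estimates for new samples
```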

  2. RG-inspired machine learning for lattice field theory

    NASA Astrophysics Data System (ADS)

    Foreman, Sam; Giedt, Joel; Meurice, Yannick; Unmuth-Yockey, Judah

    2018-03-01

    Machine learning has been a fast-growing field of research in several areas dealing with large datasets. We report recent attempts to use renormalization group (RG) ideas in the context of machine learning. We examine coarse-graining procedures for perceptron models designed to identify the digits of the MNIST dataset. We discuss the correspondence between principal component analysis (PCA) and RG flows across the transition for worm configurations of the 2D Ising model. Preliminary results regarding the logarithmic divergence of the leading PCA eigenvalue were presented at the conference. More generally, we discuss the relationship between PCA and observables in Monte Carlo simulations, and the possibility of reducing the number of learning parameters in supervised learning based on RG-inspired hierarchical ansatzes.
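
    A block-spin style coarse graining of image data, followed by PCA, can be sketched as below; the block size and the random stand-in for MNIST digits are assumptions for illustration.

```python
import numpy as np

def block_coarse_grain(images, b=2):
    """Average b x b blocks: an RG-style decimation of the image 'spins'."""
    n, h, w = images.shape
    return images.reshape(n, h // b, b, w // b, b).mean(axis=(2, 4))

rng = np.random.default_rng(6)
digits = rng.random((100, 28, 28))    # stand-in for MNIST digits

coarse = block_coarse_grain(digits)   # 14 x 14 coarse-grained images
X = coarse.reshape(len(coarse), -1)
X -= X.mean(axis=0)
# leading PCA eigenvalues from the SVD of the centered data matrix
eigvals = np.linalg.svd(X, compute_uv=False) ** 2 / (len(X) - 1)
print(eigvals[:5])
```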

  3. Segmentation and Quantitative Analysis of Apoptosis of Chinese Hamster Ovary Cells from Fluorescence Microscopy Images.

    PubMed

    Du, Yuncheng; Budman, Hector M; Duever, Thomas A

    2017-06-01

    Accurate and fast quantitative analysis of living cells from fluorescence microscopy images is useful for evaluating experimental outcomes and cell culture protocols. An algorithm is developed in this work to automatically segment and distinguish apoptotic cells from normal cells. The algorithm involves three steps consisting of two segmentation steps and a classification step. The segmentation steps are: (i) a coarse segmentation, combining a range filter with a marching squares method, used as a prefiltering step to provide the approximate positions of cells within a two-dimensional matrix that stores the cells' images and to count the cells in a given image; and (ii) a fine segmentation step, using the Active Contours Without Edges method, applied to the boundaries of cells identified in the coarse segmentation step. Although this basic two-step approach provides accurate edges when the cells in a given image are sparsely distributed, the occurrence of clusters of cells in high-cell-density samples requires further processing. Hence, a novel algorithm for clusters is developed to identify the edges of cells within clusters and to approximate their morphological features. Based on the segmentation results, a support vector machine classifier that uses three morphological features, namely the mean value of pixel intensities in the cellular regions, the variance of pixel intensities in the vicinity of cell boundaries, and the lengths of the boundaries, is developed for distinguishing apoptotic cells from normal cells. The algorithm is shown to be efficient in terms of computational time, quantitative analysis, and differentiation accuracy, as compared with the use of the active contours method without the proposed preliminary coarse segmentation step.
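
    The coarse prefiltering stage maps naturally onto scikit-image primitives: a rank-based range filter (local maximum minus local minimum) highlights cell borders, and marching squares extracts candidate outlines. A minimal sketch on a synthetic image; the filter radius and contour level are assumed values.

```python
import numpy as np
from skimage import measure, morphology
from skimage.filters import rank

# synthetic fluorescence image: one bright "cell" on a dark noisy background
rng = np.random.default_rng(7)
img = (rng.random((128, 128)) * 40).astype(np.uint8)
img[40:60, 50:80] += 120

# range filter: local max minus local min emphasizes cell boundaries
range_img = rank.gradient(img, morphology.disk(3))

# marching squares on the filtered image yields coarse cell outlines
contours = measure.find_contours(range_img.astype(float), level=60.0)
print(f"{len(contours)} candidate cell boundaries")
```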

  4. Swept Mechanism of Micro-Milling Tool Geometry Effect on Machined Oxygen Free High Conductivity Copper (OFHC) Surface Roughness

    PubMed Central

    Shi, Zhenyu; Liu, Zhanqiang; Li, Yuchao; Qiao, Yang

    2017-01-01

    Cutting tool geometry must be carefully considered in micro-cutting because it has a significant effect on the topography and accuracy of the machined surface, particularly since the uncut chip thickness is comparable to the cutting edge radius. The objective of this paper was to clarify the mechanism by which cutting tool geometry influences the surface topography in the micro-milling process. Four cutting tools were investigated by combining theoretical modeling analysis with experimental research: two two-fluted end milling tools with helix angles of 15° and 30°, and two three-fluted end milling tools with helix angles of 15° and 30°. The tool geometry was mathematically modeled through coordinate translation and transformation to bring all three cutting edges at the cutting tool tip into the same coordinate system. Swept mechanisms, minimum uncut chip thickness, and cutting tool run-out were considered in modeling the surface roughness parameters (the height of surface roughness Rz and the average surface roughness Ra) based on the established mathematical model. A set of cutting experiments was carried out using the four different shaped cutting tools. It was found that the sweeping volume of the cutting tool increases with decreasing helix angle and flute number. A coarser machined surface and more non-uniform surface topography are generated when the sweeping volume increases. The outcome of this research should bring about new methodologies for micro-end milling tool design and manufacturing. The machined surface roughness can be improved by appropriately selecting the tool geometrical parameters. PMID:28772479

  5. Engineering molecular machines

    NASA Astrophysics Data System (ADS)

    Erman, Burak

    2016-04-01

    Biological molecular motors use chemical energy, mostly in the form of ATP hydrolysis, and convert it to mechanical energy. Correlated thermal fluctuations are essential for the function of a molecular machine and it is the hydrolysis of ATP that modifies the correlated fluctuations of the system. Correlations are consequences of the molecular architecture of the protein. The idea that synthetic molecular machines may be constructed by designing the proper molecular architecture is challenging. In their paper, Sarkar et al (2016 New J. Phys. 18 043006) propose a synthetic molecular motor based on the coarse grained elastic network model of proteins and show by numerical simulations that motor function is realized, ranging from deterministic to thermal, depending on temperature. This work opens up a new range of possibilities of molecular architecture based engine design.

  6. Performance Comparison of Systematic Methods for Rigorous Definition of Coarse-Grained Sites of Large Biomolecules.

    PubMed

    Zhang, Yuwei; Cao, Zexing; Zhang, John Zenghui; Xia, Fei

    2017-02-27

    Construction of coarse-grained (CG) models of large biomolecules for multiscale simulations demands a rigorous definition of their CG sites. Several coarse-graining methods, such as simulated annealing and steepest descent (SASD) based on essential dynamics coarse-graining (ED-CG) or the stepwise local iterative optimization (SLIO) based on fluctuation maximization coarse-graining (FM-CG), have been developed for this purpose. However, practical applications of methods such as SASD based on ED-CG are subject to limitations because they are too expensive. In this work, we extend the applicability of ED-CG by combining it with the SLIO algorithm. A comprehensive comparison of the optimized results and accuracy of various algorithms based on ED-CG shows that SLIO is the fastest as well as the most accurate among them. ED-CG combined with SLIO gives converged results as the number of CG sites increases, which demonstrates that it is another efficient method for coarse-graining large biomolecules. The construction of CG sites for the Ras protein using MD fluctuations demonstrates that CG sites derived from FM-CG accurately reflect the fluctuation properties of the secondary structures in Ras.

  7. Evaluating lubricant performance by 3D profilometry of wear scars

    NASA Astrophysics Data System (ADS)

    Georgescu, C.; Deleanu, L.; Pirvu, C.

    2016-08-01

    Owing to improvements in surface texture analysis and in the optical instruments used to investigate surface texture, the authors propose evaluating lubricant performance by analyzing the change in several 3D parameters, in comparison to an analysis based on 2D profiles. The entire surface of the wear scar generated on the four-ball machine is investigated, and the conclusion is that, from a tribological point of view, the 3D parameters better reflect the evolution of surface quality after testing. The investigation was carried out on the wear scars generated on the three fixed balls, for five lubricants: an additive-free mineral transmission oil (T90), two grades of rapeseed oil (coarse degummed and refined) and two grades of soybean oil (coarse and degummed).

  8. Optimal Design of Experiments by Combining Coarse and Fine Measurements

    NASA Astrophysics Data System (ADS)

    Lee, Alpha A.; Brenner, Michael P.; Colwell, Lucy J.

    2017-11-01

    In many contexts, it is extremely costly to perform enough high-quality experimental measurements to accurately parametrize a predictive quantitative model. However, it is often much easier to carry out large numbers of experiments that indicate whether each sample is above or below a given threshold. Can many such categorical or "coarse" measurements be combined with a much smaller number of high-resolution or "fine" measurements to yield accurate models? Here, we demonstrate an intuitive strategy, inspired by statistical physics, wherein the coarse measurements are used to identify the salient features of the data, while the fine measurements determine the relative importance of these features. A linear model is inferred from the fine measurements, augmented by a quadratic term that captures the correlation structure of the coarse data. We illustrate our strategy by considering the problems of predicting the antimalarial potency and aqueous solubility of small organic molecules from their 2D molecular structure.
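
    A loose sketch of the strategy described here: the plentiful coarse (above/below threshold) data identify salient correlation directions among the "hits", and the few fine measurements fit a small regression over those directions. All shapes, thresholds, and the synthetic ground truth below are assumptions, not the published model.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(8)
X_coarse = rng.random((5000, 20))                 # cheap categorical assays
hits = X_coarse[:, 0] + X_coarse[:, 1] > 1.0      # above-threshold samples
X_fine = rng.random((50, 20))                     # few expensive measurements
y_fine = X_fine[:, 0] + X_fine[:, 1] + 0.05 * rng.standard_normal(50)

# coarse data: leading eigenvectors of the hit covariance capture the
# correlation structure of the salient features
C = np.cov(X_coarse[hits].T)
top = np.linalg.eigh(C)[1][:, -3:]

# fine data: small regression over linear + coarse-informed features
Z = np.hstack([X_fine, X_fine @ top])
model = Ridge(alpha=1.0).fit(Z, y_fine)
```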

  9. Plutonium isotopes offer an alternative approach to establishing chronological profiles in coarse sediments

    NASA Astrophysics Data System (ADS)

    Pondell, C.; Kuehl, S. A.; Canuel, E. A.

    2016-12-01

    There are several methodologies used to determine chronologies for sediments deposited within the past 100 years, including the 210Pb and 137Cs radioisotopes and organic and inorganic contaminants. These techniques are quite effective in fine sediments, which generally have a high affinity for metals and organic compounds. However, the application of these chronological tools becomes limited in systems where coarse sediments accumulate. Englebright Lake is an impoundment in northern California where sediment accumulation is characterized by a combination of fine and coarse sediments. This combination of grain sizes complicated chronological analysis using the more traditional 137Cs approach. This study established a chronology of these sediments using 239+240Pu isotopes. While most of the 239+240Pu activity was measured in the fine grain size fraction (<63 microns), up to 25% of the plutonium activity was detected in the coarse size fractions of sediments from Englebright Lake. Profiles of 239+240Pu were similar to available 137Cs profiles, verifying the application of plutonium isotopes for determining sediment chronologies and expanding the established geochronology for Englebright Lake sediments. This study of sediment accumulation in Englebright Lake demonstrates the application of plutonium isotopes in establishing chronologies in coarse sediments and highlights the potential for plutonium to offer new insights into patterns of coarse sediment accumulation.

  10. Path-space variational inference for non-equilibrium coarse-grained systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics; Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr

    In this paper we discuss information-theoretic tools for obtaining optimized coarse-grained molecular models for both equilibrium and non-equilibrium molecular simulations. The latter are ubiquitous in physicochemical and biological applications, where they are typically associated with coupling mechanisms, multi-physics and/or boundary conditions. In general the non-equilibrium steady states are not known explicitly as they do not necessarily have a Gibbs structure. The presented approach can compare microscopic behavior of molecular systems to parametric and non-parametric coarse-grained models using the relative entropy between distributions on the path space and setting up a corresponding path-space variational inference problem. The methods can become entirely data-driven when the microscopic dynamics are replaced with corresponding correlated data in the form of time series. Furthermore, we present connections and generalizations of force matching methods in coarse-graining with path-space information methods. We demonstrate the enhanced transferability of information-based parameterizations to different observables, at a specific thermodynamic point, due to information inequalities. We discuss methodological connections between information-based coarse-graining of molecular systems and variational inference methods primarily developed in the machine learning community. However, we note that the work presented here addresses variational inference for correlated time series due to the focus on dynamics. The applicability of the proposed methods is demonstrated on high-dimensional stochastic processes given by overdamped and driven Langevin dynamics of interacting particles.

  11. 10 CFR 431.292 - Definitions concerning refrigerated bottled or canned beverage vending machines.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... cooled, and is not a combination vending machine. Class B means any refrigerated bottled or canned beverage vending machine not considered to be Class A, and is not a combination vending machine. Combination vending machine means a refrigerated bottled or canned beverage vending machine that also has non...

  12. Evaluation and recognition of skin images with aging by support vector machine

    NASA Astrophysics Data System (ADS)

    Hu, Liangjun; Wu, Shulian; Li, Hui

    2016-10-01

    Aging is a very important issue not only in dermatology but also in cosmetic science. Cutaneous aging involves both chronological aging and photoaging processes, and the evaluation and classification of aging is an important issue for medical cosmetology workers today. The purpose of this study is to assess chronological-age-related and photoage-related changes in human skin. Texture features of the skin surface, such as coarseness and contrast, were analyzed by Fourier transform and Tamura's method, with the aim of detecting objects hidden in the texture of skin at different ages. A support vector machine (SVM) was then trained on the texture features, and the SVM classifier was used to distinguish different aging states. The results help us to further understand the mechanism of skin aging in terms of texture features and to distinguish different aging states.

  13. The relative influence of geographic location and reach-scale habitat on benthic invertebrate assemblages in six ecoregions

    USGS Publications Warehouse

    Munn, M.D.; Waite, I.R.; Larsen, D.P.; Herlihy, A.T.

    2009-01-01

    The objective of this study was to determine the relative influence of reach-specific habitat variables and geographic location on benthic invertebrate assemblages within six ecoregions across the Western USA. This study included 417 sites from six ecoregions. A total of 301 taxa were collected, with the highest richness associated with ecoregions dominated by streams with coarse substrate (19-29 taxa per site). The lowest richness (seven to eight taxa per site) was associated with ecoregions dominated by fine-grain substrate. Principal component analysis (PCA) on reach-scale habitat separated the six ecoregions into those in high-gradient mountainous areas (Coast Range, Cascades, and Southern Rockies) and those in lower-gradient ecoregions (Central Great Plains and Central California Valley). Nonmetric multidimensional scaling (NMS) models performed best in ecoregions dominated by coarse-grain substrate and high taxa richness, and for coarse-grain substrate sites combined from multiple ecoregions regardless of location. In contrast, ecoregions or site combinations dominated by fine-grain substrate had poor model performance (high stress). Four NMS models showed that geographic location (i.e. latitude and longitude) was important for: (1) all ecoregions combined, (2) all sites dominated by coarse-grain substrate combined, (3) the Cascades Ecoregion, and (4) the Columbia Ecoregion. Local factors (i.e. substrate or water temperature) seem to be the overriding factors controlling invertebrate composition across the West, regardless of geographic location.

  14. Single Particulate SEM-EDX Analysis of Iron-Containing Coarse Particulate Matter in an Urban Environment: Sources and Distribution of Iron within Cleveland, Ohio

    EPA Science Inventory

    The physicochemical properties of coarse-mode, iron-containing particles, and their temporal and spatial distributions are poorly understood. Single particle analysis combining x-ray elemental mapping and computer-controlled scanning electron microscopy (CCSEM-EDX) of passively ...

  15. Development of a Navier-Stokes algorithm for parallel-processing supercomputers. Ph.D. Thesis - Colorado State Univ., Dec. 1988

    NASA Technical Reports Server (NTRS)

    Swisshelm, Julie M.

    1989-01-01

    An explicit flow solver, applicable to the hierarchy of model equations ranging from Euler to full Navier-Stokes, is combined with several techniques designed to reduce computational expense. The computational domain consists of local grid refinements embedded in a global coarse mesh, where the locations of these refinements are defined by the physics of the flow. Flow characteristics are also used to determine which set of model equations is appropriate for solution in each region, thereby reducing not only the number of grid points at which the solution must be obtained, but also the computational effort required to get that solution. Acceleration to steady-state is achieved by applying multigrid on each of the subgrids, regardless of the particular model equations being solved. Since each of these components is explicit, advantage can readily be taken of the vector- and parallel-processing capabilities of machines such as the Cray X-MP and Cray-2.

  16. Effective Fingerprint Quality Estimation for Diverse Capture Sensors

    PubMed Central

    Xie, Shan Juan; Yoon, Sook; Shin, Jinwook; Park, Dong Sun

    2010-01-01

    Recognizing the quality of fingerprints in advance can be beneficial for improving the performance of fingerprint recognition systems. The representative features for assessing the quality of fingerprint images are known to vary across different types of capture sensors. In this paper, an effective quality estimation system that can be adapted for different types of capture sensors is designed by modifying and combining a set of features including orientation certainty, local orientation quality, and consistency. The proposed system extracts basic features and generates next-level features that are applicable to various types of capture sensors. The system then uses a Support Vector Machine (SVM) classifier to determine whether or not an image should be accepted as input to the recognition system. The experimental results show that the proposed method performs better than previous methods in terms of accuracy. Meanwhile, the proposed method is able to eliminate residue images from optical and capacitive sensors, and coarse images from thermal sensors. PMID:22163632

  17. Classification of JET Neutron and Gamma Emissivity Profiles

    NASA Astrophysics Data System (ADS)

    Craciunescu, T.; Murari, A.; Kiptily, V.; Vega, J.; Contributors, JET

    2016-05-01

    In thermonuclear plasmas, emission tomography uses integrated measurements along lines of sight (LOS) to determine the two-dimensional (2-D) spatial distribution of the volume emission intensity. Due to the availability of only a limited number of views and to the coarse sampling of the LOS, the tomographic inversion is a limited data set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET. In specific experimental conditions, the availability of LOSs is restricted to a single view. In this case an explicit reconstruction of the emissivity profile is no longer possible. However, machine learning classification methods can be used to determine the type of the distribution. In the present approach the classification is developed using the theory of belief functions, which provides the support to fuse the results of independent clustering and supervised classification. The method makes it possible to represent the uncertainty of the results provided by different independent techniques, to combine them, and to manage possible conflicts.

  18. Effect of cinnamon powder addition during conching on the flavor of dark chocolate mass.

    PubMed

    Albak, F; Tekin, A R

    2015-04-01

    In the present study, refined dark chocolate mix was conched with the addition of finely powdered cinnamon in a laboratory-style conching machine to evaluate its aroma profile both analytically and sensorially. The analytical determinations were carried out by a combination of solid phase microextraction (SPME), gas chromatography (GC), mass spectrometry (MS), and olfactometry (O), while the sensory evaluation was made with trained panelists. The optimum conditions for the SPME were found to be CAR/PDMS as the fiber, 60 °C as the temperature, and 60 min as the extraction time; analyses were carried out under these conditions with toluene as an internal standard. 26 compounds were monitored before and after conching. The unconched sample had a significantly higher fruity odor value than the conched sample. The new product was highly acceptable according to the overall inclination test; however, some textural properties, such as coarseness and hardness, were below the general preference.

  19. Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories

    NASA Astrophysics Data System (ADS)

    Matsunaga, Y.; Sugita, Y.

    2018-06-01

    A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. It estimates hidden conformational states more robustly than estimation from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements, including single-molecule time-series trajectories.
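
    The starting point of the scheme, building an initial MSM from discretized MD trajectories, reduces to counting lagged transitions and row-normalizing. A minimal sketch follows; the state count, lag time, and random trajectory are placeholders.

```python
import numpy as np

def estimate_msm(dtraj, n_states, lag=1):
    """Row-normalized transition matrix from a discretized trajectory."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(dtraj[:-lag], dtraj[lag:]):
        counts[i, j] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.maximum(rows, 1)   # guard against empty rows

# stand-in 3-state trajectory in place of clustered MD frames
rng = np.random.default_rng(9)
T = estimate_msm(rng.integers(0, 3, size=10_000), n_states=3, lag=10)
pi = np.linalg.matrix_power(T, 1000)[0]   # approximate equilibrium populations
```

    The refinement stage would then adjust these probabilities against experimental time series or ensemble averages.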

  20. Nontraditional Machining Guide, 26 Newcomers for Production

    DTIC Science & Technology

    1976-07-01

    essential:
    - Frequent coarse wheel dressing to maintain sharpness
    - Lower wheel speeds (under 3500 sfpm)
    - Lower infeed rates (0.0002 to 0.0005 inch per pass)...
    - Oil-base lubricants with good flow control
    - Soft wheels (H, I or J grades)
    - Higher table speeds (50 sfpm or more)
    - Solid fixtures and well...

    ...the wheel and the workpiece as in ECG. Electrical discharges from the graphite wheel are initiated from the higher a-c voltage superimposed on

  1. Partitioned learning of deep Boltzmann machines for SNP data.

    PubMed

    Hess, Moritz; Lenz, Stefan; Blätte, Tamara J; Bullinger, Lars; Binder, Harald

    2017-10-15

    Learning the joint distributions of measurements, and in particular identification of an appropriate low-dimensional manifold, has been found to be a powerful ingredient of deep learning approaches. Yet, such approaches have hardly been applied to single nucleotide polymorphism (SNP) data, probably due to the high number of features typically exceeding the number of studied individuals. After a brief overview of how deep Boltzmann machines (DBMs), a deep learning approach, can be adapted to SNP data in principle, we specifically present a way to alleviate the dimensionality problem by partitioned learning. We propose a sparse regression approach to coarsely screen the joint distribution of SNPs, followed by training several DBMs on SNP partitions that were identified by the screening. Aggregate features representing SNP patterns and the corresponding SNPs are extracted from the DBMs by a combination of statistical tests and sparse regression. In simulated case-control data, we show how this can uncover complex SNP patterns and augment results from univariate approaches, while maintaining type 1 error control. Time-to-event endpoints are considered in an application with acute myeloid leukemia patients, where SNP patterns are modeled after a pre-screening based on gene expression data. The proposed approach identified three SNPs that seem to jointly influence survival in a validation dataset. This indicates the added value of jointly investigating SNPs compared to standard univariate analyses and makes partitioned learning of DBMs an interesting complementary approach when analyzing SNP data. A Julia package is provided at 'http://github.com/binderh/BoltzmannMachines.jl'. Supplementary data are available at Bioinformatics online.
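
    The screening stage can be sketched with a sparse regression in Python (the reference implementation is the Julia package cited above): nonzero lasso coefficients define the SNP partitions on which separate DBMs would subsequently be trained. The data shapes and the penalty below are placeholder choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

# stand-in genotype matrix: individuals x SNPs coded 0/1/2
rng = np.random.default_rng(10)
X = rng.integers(0, 3, size=(200, 1000)).astype(float)
y = X[:, 10] + X[:, 500] + rng.standard_normal(200)   # synthetic phenotype

screen = Lasso(alpha=0.1).fit(X, y)
partition = np.flatnonzero(screen.coef_)   # SNPs retained by the coarse screen
print(partition[:20])
```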

  2. Locating landmarks on high-dimensional free energy surfaces

    PubMed Central

    Chen, Ming; Yu, Tang-Qing; Tuckerman, Mark E.

    2015-01-01

    Coarse graining of complex systems possessing many degrees of freedom can often be a useful approach for analyzing and understanding key features of these systems in terms of just a few variables. The relevant energy landscape in a coarse-grained description is the free energy surface as a function of the coarse-grained variables, which, despite the dimensional reduction, can still be an object of high dimension. Consequently, navigating and exploring this high-dimensional free energy surface is a nontrivial task. In this paper, we use techniques from multiscale modeling, stochastic optimization, and machine learning to devise a strategy for locating minima and saddle points (termed “landmarks”) on a high-dimensional free energy surface “on the fly” and without requiring prior knowledge of or an explicit form for the surface. In addition, we propose a compact graph representation of the landmarks and connections between them, and we show that the graph nodes can be subsequently analyzed and clustered based on key attributes that elucidate important properties of the system. Finally, we show that knowledge of landmark locations allows for the efficient determination of their relative free energies via enhanced sampling techniques. PMID:25737545

  3. Effect of aggregate graining compositions on skid resistance of Exposed Aggregate Concrete pavement

    NASA Astrophysics Data System (ADS)

    Wasilewska, Marta; Gardziejczyk, Wladysław; Gierasimiuk, Pawel

    2018-05-01

    The paper presents the evaluation of skid resistance of EAC (Exposed Aggregate Concrete) pavements which differ in aggregate graining compositions. The tests were carried out on concrete mixes with a maximum aggregate size of 8 mm. Three types of coarse aggregates were selected depending on their resistance to polishing, which was determined on the basis of the PSV (Polished Stone Value). Basalt (PSV 48), gabbro (PSV 50) and trachybasalt (PSV 52) aggregates were chosen. For each type of aggregate, three graining compositions were designed, which differed in the content of coarse aggregate > 4 mm. The coarse-aggregate content for each series was as follows: A - 38%, B - 50% and C - 68%. Evaluation of the skid resistance was performed using the FAP (Friction After Polishing) test equipment, also known as the Wehner/Schulze machine. This laboratory method enables comparison of the skid resistance of different types of wearing course under specified conditions simulating polishing processes. In addition, macrotexture measurements were made on the surface of each specimen using the Elatexure laser profilometer. Analysis of variance showed that, at significance level α = 0.05, the aggregate graining composition as well as the PSV have a significant influence on the obtained values of the friction coefficient μm of the tested EAC pavements. The highest values of μm were obtained for EAC with the lowest amount of coarse aggregate (composition A). In these cases, the resistance to polishing of the aggregate does not significantly affect the friction coefficient, which is related to the large areas of cement mortar between the exposed coarse grains. Based on the analysis of microscope images, it was observed that the coarse aggregates were not sufficiently exposed. PSV was shown to significantly affect the coefficient of friction for compositions B and C, owing to the large areas of exposed coarse aggregate. The best parameters were achieved for EAC pavements with graining compositions B and C and trachybasalt aggregate.

  4. Modeling disease transmission near eradication: An equation free approach

    NASA Astrophysics Data System (ADS)

    Williams, Matthew O.; Proctor, Joshua L.; Kutz, J. Nathan

    2015-01-01

    Although disease transmission in the near eradication regime is inherently stochastic, deterministic quantities such as the probability of eradication are of interest to policy makers and researchers. Rather than running large ensembles of discrete stochastic simulations over long intervals in time to compute these deterministic quantities, we create a data-driven and deterministic "coarse" model for them using the Equation Free (EF) framework. In lieu of deriving an explicit coarse model, the EF framework approximates any needed information, such as coarse time derivatives, by running short computational experiments. However, the choice of the coarse variables (i.e., the state of the coarse system) is critical if the resulting model is to be accurate. In this manuscript, we propose a set of coarse variables that result in an accurate model in the endemic and near eradication regimes, and demonstrate this on a compartmental model representing the spread of Poliomyelitis. When combined with adaptive time-stepping coarse projective integrators, this approach can yield over a factor of two speedup compared to direct simulation, and due to its lower dimensionality, could be beneficial when conducting systems level tasks such as designing eradication or monitoring campaigns.
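
    Coarse projective integration, the EF workhorse mentioned above, alternates short fine-scale bursts with large extrapolation steps in the coarse variables. A minimal sketch on a toy relaxation problem follows; the burst length, step sizes, and the stand-in "microscopic" rule are all assumptions.

```python
import numpy as np

def projective_step(state, micro_step, dt_micro, n_burst, dt_jump):
    """Run a short fine-scale burst, estimate the coarse time derivative
    from its tail, and project the coarse state forward by dt_jump."""
    for _ in range(n_burst):
        prev, state = state, micro_step(state, dt_micro)
    slope = (state - prev) / dt_micro
    return state + dt_jump * slope

# toy "microscopic" update: relaxation toward an endemic level of 0.3
micro = lambda x, dt: x + dt * 2.0 * (0.3 - x)
x = np.array([0.9])
for _ in range(20):
    x = projective_step(x, micro, dt_micro=0.01, n_burst=10, dt_jump=0.2)
print(x)   # approaches 0.3 with far fewer micro-steps than direct simulation
```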

  5. GARN: Sampling RNA 3D Structure Space with Game Theory and Knowledge-Based Scoring Strategies.

    PubMed

    Boudard, Mélanie; Bernauer, Julie; Barth, Dominique; Cohen, Johanne; Denise, Alain

    2015-01-01

    Cellular processes involve large numbers of RNA molecules. The functions of these RNA molecules and their binding to molecular machines are highly dependent on their 3D structures. One of the key challenges in RNA structure prediction and modeling is predicting the spatial arrangement of the various structural elements of RNA. As RNA folding is generally hierarchical, methods involving coarse-grained models hold great promise for this purpose. We present here a novel coarse-grained method for sampling, based on game theory and knowledge-based potentials. This strategy, GARN (Game Algorithm for RNa sampling), is often much faster than previously described techniques and generates large sets of solutions closely resembling the native structure. GARN is thus a suitable starting point for the molecular modeling of large RNAs, particularly those with experimental constraints. GARN is available from: http://garn.lri.fr/.

  6. A Structural Perspective on the Dynamics of Kinesin Motors

    PubMed Central

    Hyeon, Changbong; Onuchic, José N.

    2011-01-01

    Despite significant fluctuation under thermal noise, biological machines in cells perform their tasks with exquisite precision. Using molecular simulation of a coarse-grained model and theoretical arguments, we envisaged how kinesin, a prototype of biological machines, generates force and regulates its dynamics to sustain persistent motor action. A structure-based model, which can be versatile in adapting its structure to external stresses while maintaining its native fold, was employed to account for several features of kinesin dynamics along the biochemical cycle. This analysis complements our current understandings of kinesin dynamics and connections to experiments. We propose a thermodynamic cycle for kinesin that emphasizes the mechanical and regulatory role of the neck linker and clarify issues related to the motor directionality, and the difference between the external stalling force and the internal tension responsible for the head-head coordination. The comparison between the thermodynamic cycle of kinesin and macroscopic heat engines highlights the importance of structural change as the source of work production in biomolecular machines. PMID:22261064

  7. Development of a New Utm (universal Testing Machine) System for the Nano/micro In-Process Measurement

    NASA Astrophysics Data System (ADS)

    Kweon, Hyunkyu; Choi, Sungdae; Kim, Youngsik; Nam, Kiho

    Micro UTMs (Universal Testing Machines) are becoming increasingly popular for testing the mechanical properties of MEMS materials, metal thin films, and micro-molecule materials. In addition, new miniature testing machines that can perform in-process measurement in SEM, TEM, and SPM are needed. In this paper, a new micro UTM with a precision positioning system that can be used in SEM, TEM, and SPM is proposed. A bimorph-type PZT precision actuator is used in the fine positioning stage, and coarse positioning is implemented by a step motor. The size, load output, and displacement output of the bimorph-type UTM are 109×64×22 mm, about 35 g, and 0.4 mm, respectively. The displacement output is controlled in block digital form. The results of the analysis and the basic properties of the positioning system and the UTM system are presented. In addition, experimental results of in-process measurement under tensile load in SEM and AFM are shown.

  8. Analysis of the Cytomorphological Features in Atypical Urine Specimens following Application of The Paris System for Reporting Urinary Cytology.

    PubMed

    Glass, Ryan; Rosen, Lisa; Chau, Karen; Sheikh-Fayyaz, Sylvat; Farmer, Peter; Coutsouvelis, Constantinos; Slim, Farah; Brenkert, Ryan; Das, Kasturi; Raab, Stephen; Cocker, Rubina

    2018-01-01

    This study investigates the use of The Paris System (TPS) for Reporting Urinary Cytopathology and examines the performance of individual and combined morphological features in atypical urine cytologies. We reviewed 118 atypical cytologies with subsequent bladder biopsies for the presence of several morphological features and reclassified them into Paris System categories. The sensitivity and specificity of individual and combined features were calculated along with the risk of malignancy. An elevated nuclear-to-cytoplasmic ratio was only predictive of malignancy if seen in single cells, while irregular nuclear borders, hyperchromasia, and coarse granular chromatin were predictive in single cells and in groups. Identification of coarse chromatin alone yielded a malignancy risk comparable to 2-feature combinations. The use of TPS criteria identified the specimens at a higher risk of malignancy. Our findings support the use of TPS criteria, suggesting that the presence of coarse chromatin is more specific than other individual features, and confirming that cytologic atypia is more worrisome in single cells than in groups. © 2017 S. Karger AG, Basel.
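
    For reference, the per-feature performance measures used in such studies reduce to simple counts against the biopsy outcome; a minimal sketch with purely illustrative counts (not from the study):

    ```python
    def feature_metrics(tp, fp, tn, fn):
        """Sensitivity, specificity and risk of malignancy for one cytologic feature."""
        sensitivity = tp / (tp + fn)           # malignant cases flagged by the feature
        specificity = tn / (tn + fp)           # benign cases correctly negative
        risk_of_malignancy = tp / (tp + fp)    # malignancy rate among feature-positives
        return sensitivity, specificity, risk_of_malignancy

    # Illustrative numbers only.
    print(feature_metrics(tp=30, fp=10, tn=60, fn=18))
    ```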

  9. Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization.

    PubMed

    Dai, Hong-Jie; Lai, Po-Ting; Chang, Yung-Chun; Tsai, Richard Tzong-Han

    2015-01-01

    The functions of chemical compounds and drugs that affect biological processes and their particular effect on the onset and treatment of diseases have attracted increasing interest with the advancement of research in the life sciences. To extract knowledge from the extensive literature on such compounds and drugs, the organizers of BioCreative IV administered the CHEMical Compound and Drug Named Entity Recognition (CHEMDNER) task to establish a standard dataset for evaluating state-of-the-art chemical entity recognition methods. This study introduces the approach of our CHEMDNER system. Instead of emphasizing the development of novel feature sets for machine learning, this study investigates the effect of various tag schemes on the recognition of the names of chemicals and drugs by using conditional random fields. Experiments were conducted using combinations of different tokenization strategies and tag schemes to investigate the effects of tag set selection and tokenization method on the CHEMDNER task. This study presents the CHEMDNER performance of three more representative tag schemes (IOBE, IOBES, and IOB12E) when applied to a widely utilized IOB tag set and combined with coarse- and fine-grained tokenization methods. The experimental results reveal that the fine-grained tokenization strategy performs best in terms of precision, recall and F-scores when the IOBES tag set is utilized. The IOBES model with fine-grained tokenization yielded the best F-scores in the six chemical entity categories other than the "Multiple" entity category. Nonetheless, no significant improvement was observed when a more representative tag scheme was used with the coarse- or fine-grained tokenization rules. The best F-scores achieved using the developed system on the test dataset of the CHEMDNER task were 0.833 and 0.815 for the chemical document indexing and the chemical entity mention recognition tasks, respectively. The results herein highlight the importance of tag set selection and the use of different tokenization strategies. Fine-grained tokenization combined with the IOBES tag set most effectively recognizes chemical and drug names. To the best of the authors' knowledge, this is the first comprehensive investigation of the use of various tag schemes combined with different tokenization strategies for the recognition of chemical entities.
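
    The tag schemes compared here differ only in how entity spans are encoded as per-token labels. A minimal sketch of IOBES encoding, with an illustrative sentence that is not from the CHEMDNER corpus:

    ```python
    def to_iobes(tokens, spans):
        """Encode entity spans (start, end, label), end exclusive, as IOBES tags.

        S = single-token entity, B/I/E = begin/inside/end, O = outside.
        """
        tags = ["O"] * len(tokens)
        for start, end, label in spans:
            if end - start == 1:
                tags[start] = f"S-{label}"
            else:
                tags[start] = f"B-{label}"
                for i in range(start + 1, end - 1):
                    tags[i] = f"I-{label}"
                tags[end - 1] = f"E-{label}"
        return tags

    tokens = ["Aspirin", "inhibits", "cyclooxygenase", "."]
    print(to_iobes(tokens, [(0, 1, "CHEM")]))   # ['S-CHEM', 'O', 'O', 'O']
    ```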

  10. Textural and stable isotope studies of the Big Mike cupriferous volcanogenic massive sulfide deposit, Pershing County, Nevada.

    USGS Publications Warehouse

    Rye, R.O.; Roberts, R.J.; Snyder, W.S.; Lahusen, G.L.; Motica, J.E.

    1984-01-01

    The Big Mike deposit is a massive sulphide lens entirely within a carbonaceous argillite of the Palaeozoic Havallah pelagic sequence. The massive ore contains two generations of pyrite, a fine- and a coarse-grained variety; framboidal pyrite occurs in the surrounding carbonaceous argillite. Coarse-grained pyrite is largely recrystallized fine-grained pyrite and is proportionately more abundant toward the margins of the lens. Chalcopyrite and sphalerite replace fine-grained pyrite and vein-fragmented coarse-grained pyrite. Quartz fills openings in the sulphide fabric. S-isotope data are related to sulphide mineralogy and textures. Isotopically light S in the early fine-grained pyrite was probably derived from framboidal biogenic pyrite. The S-isotope values of the later coarse-grained pyrite and chalcopyrite probably reflect a combination of reduced sea-water sulphate and igneous S. Combined S- and O-isotope and textural data accord with precipitation of fine-grained pyrite from a hydrothermal plume like those at the East Pacific Rise spreading centre at lat. 21°N. The primary material was recrystallized and mineralized by later fluids of distinctly different S-isotope composition. -G.J.N.

  11. 8. VIEW OF COMBINATION GEAR HOBBING MACHINE (Gould & Eberhardt, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VIEW OF COMBINATION GEAR HOBBING MACHINE (Gould & Eberhardt, Newark, New Jersey. Patented No. 2103) AND LATHE (W.E. Shipley Machiner Co. Metal Working Machinery, Philadelphia, Pennsylvania, 1913). - Juniata Shops, Machine Shop No. 1, East of Fourth Avenue at Third Street, Altoona, Blair County, PA

  12. An efficient and robust 3D mesh compression based on 3D watermarking and wavelet transform

    NASA Astrophysics Data System (ADS)

    Zagrouba, Ezzeddine; Ben Jabra, Saoussen; Didi, Yosra

    2011-06-01

    The compression and watermarking of 3D meshes are very important in many areas of activity including digital cinematography, virtual reality and CAD design. However, most studies on 3D watermarking and 3D compression are done independently. To achieve a good trade-off between protection and fast transfer of 3D meshes, this paper proposes a new approach which combines 3D mesh compression with mesh watermarking. This combination is based on a wavelet transformation. In fact, the compression method used is decomposed into two stages: geometric encoding and topologic encoding. The proposed approach consists of inserting a signature between these two stages. First, the wavelet transformation is applied to the original mesh to obtain two components: wavelet coefficients and a coarse mesh. Then, the geometric encoding is done on these two components. The obtained coarse mesh is marked using a robust mesh watermarking scheme. This insertion into the coarse mesh yields high robustness to several attacks. Finally, the topologic encoding is applied to the marked coarse mesh to obtain the compressed mesh. The combination of compression and watermarking permits detection of the signature after compression of the marked mesh. In addition, it allows protected 3D meshes to be transferred at minimum size. The experiments and evaluations show that the proposed approach achieves efficient results in terms of compression gain, invisibility and robustness of the signature against many attacks.
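
    As a one-dimensional analogue of this pipeline, a signature can be embedded into the coarse (approximation) coefficients of a wavelet decomposition before encoding. A minimal sketch using the PyWavelets package; the additive spread-spectrum rule and its strength are illustrative assumptions, not the authors' scheme:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    signal = np.random.rand(1024)             # stand-in for mesh geometry data
    coeffs = pywt.wavedec(signal, "db2", level=3)
    coarse, details = coeffs[0], coeffs[1:]   # coarse part <-> "coarse mesh"

    # Embed a +/-1 signature additively into the coarse coefficients
    # (illustrative spread-spectrum rule; the strength 0.01 is arbitrary).
    rng = np.random.default_rng(seed=42)      # the seed plays the role of a key
    signature = rng.choice([-1.0, 1.0], size=coarse.shape)
    marked_coarse = coarse + 0.01 * signature

    marked = pywt.waverec([marked_coarse] + details, "db2")
    ```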

  13. Climatological Aspects of the Optical Properties of Fine/Coarse Mode Aerosol Mixtures

    NASA Technical Reports Server (NTRS)

    Eck, T. F.; Holben, B. N.; Sinyuk, A.; Pinker, R. T.; Goloub, P.; Chen, H.; Chatenet, B.; Li, Z.; Singh, R. P.; Tripathi, S.N.; hide

    2010-01-01

    Aerosol mixtures composed of coarse mode desert dust combined with fine mode combustion generated aerosols (from fossil fuel and biomass burning sources) were investigated at three locations that are in and/or downwind of major global aerosol emission source regions. Multiyear monitoring data at Aerosol Robotic Network sites in Beijing (central eastern China), Kanpur (Indo-Gangetic Plain, northern India), and Ilorin (Nigeria, Sudanian zone of West Africa) were utilized to study the climatological characteristics of aerosol optical properties. Multiyear climatological averages of spectral single scattering albedo (SSA) versus fine mode fraction (FMF) of aerosol optical depth at 675 nm at all three sites exhibited relatively linear trends up to 50% FMF. This suggests the possibility that external linear mixing of both fine and coarse mode components (weighted by FMF) dominates the SSA variation, where the SSA of each component remains relatively constant for this range of FMF only. However, it is likely that a combination of other factors is also involved in determining the dynamics of SSA as a function of FMF, such as fine mode particles adhering to coarse mode dust. The spectral variation of the climatological averaged aerosol absorption optical depth (AAOD) was nearly linear in logarithmic coordinates over the wavelength range of 440-870 nm for both the Kanpur and Ilorin sites. However, at two sites in China (Beijing and Xianghe), a distinct nonlinearity in spectral AAOD in logarithmic space was observed, suggesting the possibility of anomalously strong absorption in coarse mode aerosols increasing the 870 nm AAOD.
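
    The external linear mixing suggested by these observations can be written as an optical-depth-weighted average of the two modes; a schematic form (the notation below is mine, not taken from the paper):

    ```latex
    \mathrm{SSA}_{\mathrm{mix}}(\lambda) \approx
      f\,\mathrm{SSA}_{\mathrm{fine}}(\lambda) + (1 - f)\,\mathrm{SSA}_{\mathrm{coarse}}(\lambda),
    \qquad
    f = \mathrm{FMF} = \frac{\tau_{\mathrm{fine}}(675\,\mathrm{nm})}{\tau_{\mathrm{total}}(675\,\mathrm{nm})}
    ```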

  14. Bryophyte species associations with coarse woody debris and stand ages in Oregon

    USGS Publications Warehouse

    Rambo, T.; Muir, Patricia S.

    1998-01-01

    We quantified the relationships of 93 forest floor bryophyte species, including epiphytes from incorporated litterfall, to substrate and stand age in Pseudotsuga menziesii-Tsuga heterophylla stands at two sites in western Oregon. We used the method of Dufrêne and Legendre that combines a species' relative abundance and relative frequency, to calculate that species' importance in relation to environmental variables. The resulting "indicator value" describes a species' reliability for indicating the given environmental parameter. Thirty-nine species were indicative of either humus, a decay class of coarse woody debris, or stand age. Bryophyte community composition changed along the continuum of coarse woody debris decomposition from recently fallen trees with intact bark to forest floor humus. Richness of forest floor bryophytes will be enhanced when a full range of coarse woody debris decay classes is present. A suite of bryophytes indicated old-growth forest. These were mainly either epiphytes associated with older conifers or liverworts associated with coarse woody debris. Hardwood-associated epiphytes mainly indicated young stands. Mature conifers, hardwoods, and coarse woody debris are biological legacies that can be protected when thinning managed stands to foster habitat complexity and biodiversity, consistent with an ecosystem approach to forest management.
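
    The Dufrêne-Legendre indicator value combines a species' concentration of abundance in a group (specificity) with its frequency of occurrence there (fidelity). A minimal sketch, assuming NumPy arrays of site abundances and group labels (variable names are mine):

    ```python
    import numpy as np

    def indicator_value(abundance, groups, target_group):
        """Dufrêne & Legendre IndVal of one species for one group.

        abundance: 1D array of the species' abundance at each site.
        groups:    1D array of group labels (e.g., substrate or stand-age class).
        """
        in_group = groups == target_group
        # relative abundance: mean in group vs. sum of group means (specificity)
        group_means = [abundance[groups == g].mean() for g in np.unique(groups)]
        rel_abund = abundance[in_group].mean() / sum(group_means) if sum(group_means) > 0 else 0.0
        # relative frequency: fraction of group sites where the species occurs (fidelity)
        rel_freq = (abundance[in_group] > 0).mean()
        return 100.0 * rel_abund * rel_freq   # IndVal on the usual 0-100 scale
    ```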

  15. Time Resolved Detectors and Measurements for Accelerators and Beamlines at the Australian Synchrotron

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boland, M. J.; School of Physics, University of Melbourne, Parkville, Victoria 3010; Rassool, R. P.

    2010-06-23

    Time resolved experiments require precision timing equipment and careful configuration of the machine and the beamline. The Australian Synchrotron has a state-of-the-art timing system that allows flexible, real-time control of the machine and beamline timing parameters to target specific electron bunches. Results from a proof-of-principle measurement with a pulsed laser and a streak camera on the optical diagnostic beamline will be presented. The timing system was also used to fast-trigger the PILATUS detector on an x-ray beamline to measure the fill-pattern-dependent effects of the detector. PILATUS was able to coarsely measure the fill pattern in the storage ring, which implies that fill pattern intensity variations need to be corrected for when using the detector in this mode.

  16. Conformational State Distributions and Catalytically Relevant Dynamics of a Hinge-Bending Enzyme Studied by Single-Molecule FRET and a Coarse-Grained Simulation

    PubMed Central

    Gabba, Matteo; Poblete, Simón; Rosenkranz, Tobias; Katranidis, Alexandros; Kempe, Daryan; Züchner, Tina; Winkler, Roland G.; Gompper, Gerhard; Fitter, Jörg

    2014-01-01

    Over the last few decades, a view has emerged showing that multidomain enzymes are biological machines evolved to harness stochastic kicks of solvent particles into highly directional functional motions. These intrinsic motions are structurally encoded, and Nature makes use of them to catalyze chemical reactions by means of ligand-induced conformational changes and states redistribution. Such mechanisms align reactive groups for efficient chemistry and stabilize conformers most proficient for catalysis. By combining single-molecule Förster resonance energy transfer measurements with normal mode analysis and coarse-grained mesoscopic simulations, we obtained results for a hinge-bending enzyme, namely phosphoglycerate kinase (PGK), which support and extend these ideas. From single-molecule Förster resonance energy transfer, we obtained insight into the distribution of conformational states and the dynamical properties of the domains. The simulations allowed for the characterization of interdomain motions of a compact state of PGK. The data show that PGK is intrinsically a highly dynamic system sampling a wealth of conformations on timescales ranging from nanoseconds to milliseconds and above. Functional motions encoded in the fold are performed by the PGK domains already in its ligand-free form, and substrate binding is not required to enable them. Compared to other multidomain proteins, these motions are rather fast and presumably not rate-limiting in the enzymatic reaction. Ligand binding slightly readjusts the orientation of the domains and feasibly locks the protein motions along a preferential direction. In addition, the functionally relevant compact state is stabilized by the substrates, and acts as a prestate to reach active conformations by means of Brownian motions. PMID:25418172

  17. A combined emitter threat assessment method based on ICW-RCM

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Wang, Hongwei; Guo, Xiaotao; Wang, Yubing

    2017-08-01

    Considering that traditional emitter threat assessment methods have difficulty reflecting the degree of target threat intuitively and suffer from deficiencies in real-time performance and complexity, an algorithm for combined emitter threat assessment based on ICW-RCM (improved combination weighting method, ICW) is proposed, built on the radar chart method (RCM). Coarse sorting is integrated with fine sorting in the combined threat assessment: emitter threat levels are first ranked roughly according to radar operation mode, reducing the task priority of low-threat emitters; emitters with the same radar operation mode are then ranked on the basis of ICW-RCM; finally, the results of emitter threat assessment are obtained through the coarse and fine sorting. Simulation analyses show the correctness and effectiveness of this algorithm. Compared with the classical method of emitter threat assessment based on CW-RCM, the algorithm is visually intuitive and works quickly with lower complexity.

  18. Interlaced coarse-graining for the dynamical cluster approximation

    NASA Astrophysics Data System (ADS)

    Haehner, Urs; Staar, Peter; Jiang, Mi; Maier, Thomas; Schulthess, Thomas

    The negative sign problem remains a challenging limiting factor in quantum Monte Carlo simulations of strongly correlated fermionic many-body systems. The dynamical cluster approximation (DCA) makes this problem less severe by coarse-graining the momentum space to map the bulk lattice to a cluster embedded in a dynamical mean-field host. Here, we introduce a new form of interlaced coarse-graining and compare it with the traditional coarse-graining. We show that it leads to more controlled results, with a weaker dependence on cluster shape and a smoother dependence on cluster size, which with increasing cluster size converge to the results obtained using the standard coarse-graining. In addition, the new coarse-graining reduces the severity of the fermionic sign problem. Therefore, it enables calculations on much larger clusters and can allow the evaluation of the exact infinite-cluster-size result via finite size scaling. To demonstrate this, we study the hole-doped two-dimensional Hubbard model and show that the interlaced coarse-graining in combination with the DCA+ algorithm permits the determination of the superconducting Tc on cluster sizes for which the results can be fitted with the Kosterlitz-Thouless scaling law. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF) awarded by the INCITE program, and of the Swiss National Supercomputing Center. OLCF is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725.

  19. Scalable parallel communications

    NASA Technical Reports Server (NTRS)

    Maly, K.; Khanna, S.; Overstreet, C. M.; Mukkamala, R.; Zubair, M.; Sekhar, Y. S.; Foudriat, E. C.

    1992-01-01

    Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low cost solution to providing multiple 100 Mbps on current machines. In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, while other TCPs running in parallel provide high bandwidth service to a single application); and (3) coarse-grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism), also with near linear speed-ups.

  20. The Effect of Heat Treatment on Residual Stress and Machining Distortions in Advanced Nickel Base Disk Alloys

    NASA Technical Reports Server (NTRS)

    Gayda, John

    2001-01-01

    This paper describes an extension of NASA's AST and IDPAT Programs which sought to predict the effect of stabilization heat treatments on residual stress and subsequent machining distortions in the advanced disk alloy ME209. Simple "pancake" forgings of ME209 were produced and given four heat treatments: 2075F(SUBSOLVUS)/OIL QUENCH/NO AGE; 2075F/OIL QUENCH/1400F@8HR; 2075F/OIL QUENCH/1550F@3HR/1400F@8HR; and 2160F(SUPERSOLVUS)/OIL QUENCH/1550F@3HR/1400F@8HR. The forgings were then measured to obtain surface profiles in the heat treated condition. A simple machining plan, consisting of face cuts from the top surface followed by measurements of the surface profile opposite the cut, was carried out. These data provided warpage maps which were compared with analytical results. The analysis followed the IDPAT methodology and utilized a 2-D axisymmetric, viscoplastic FEA code. The analytical results accurately tracked the experimental data for each of the four heat treatments. The 1550F stabilization heat treatment was found to significantly reduce residual stresses and subsequent machining distortions for fine grain (subsolvus) ME209, while coarse grain (supersolvus) ME209 would require additional time or higher stabilization temperatures to attain the same degree of stress relief.

  1. Shifts in the suitable habitat available for brown trout (Salmo trutta L.) under short-term climate change scenarios.

    PubMed

    Muñoz-Mas, R; Lopez-Nicolas, A; Martínez-Capel, F; Pulido-Velazquez, M

    2016-02-15

    The impact of climate change on the habitat suitability for large brown trout (Salmo trutta L.) was studied in a segment of the Cabriel River (Iberian Peninsula). The future flow and water temperature patterns were simulated at a daily time step with M5 model trees (NSE of 0.78 and 0.97 respectively) for two short-term scenarios (2011-2040) under the representative concentration pathways (RCP 4.5 and 8.5). An ensemble of five strongly regularized machine learning techniques (generalized additive models, multilayer perceptron ensembles, random forests, support vector machines and fuzzy rule-based systems) was used to model the microhabitat suitability (depth, velocity and substrate) during summertime and to evaluate several flows simulated with River2D©. The simulated flow rate and water temperature were combined with the microhabitat assessment to infer bivariate habitat duration curves (BHDCs) under historical conditions and climate change scenarios using either the weighted usable area (WUA) or the Boolean-based suitable area (SA). The forecasts for both scenarios jointly predicted a significant reduction in the flow rate and an increase in water temperature (mean rates of change of ca. -25% and +4% respectively). The five techniques converged on the modelled suitability and habitat preferences; large brown trout selected relatively high flow velocity, large depth and coarse substrate. However, the model developed with support vector machines presented a significantly trimmed output range (max.: 0.38), and thus its predictions were excluded from the WUA-based analyses. The BHDCs based on the WUA and the SA broadly matched, indicating an increase in the number of days with less suitable habitat available (WUA and SA) and/or with higher water temperature (trout will endure impoverished environmental conditions ca. 82% of the days). Finally, our results suggested the potential extirpation of the species from the study site during short time spans. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. High-Resolution Coarse-Grained Modeling Using Oriented Coarse-Grained Sites.

    PubMed

    Haxton, Thomas K

    2015-03-10

    We introduce a method to bring nearly atomistic resolution to coarse-grained models, and we apply the method to proteins. Using a small number of coarse-grained sites (about one per eight atoms) but assigning an independent three-dimensional orientation to each site, we preferentially integrate out stiff degrees of freedom (bond lengths and angles, as well as dihedral angles in rings) that are accurately approximated by their average values, while retaining soft degrees of freedom (unconstrained dihedral angles) mostly responsible for conformational variability. We demonstrate that our scheme retains nearly atomistic resolution by mapping all experimental protein configurations in the Protein Data Bank onto coarse-grained configurations and then analytically backmapping those configurations back to all-atom configurations. This roundtrip mapping throws away all information associated with the eliminated (stiff) degrees of freedom except for their average values, which we use to construct optimal backmapping functions. Despite the 4:1 reduction in the number of degrees of freedom, we find that heavy atoms move only 0.051 Å on average during the roundtrip mapping, while hydrogens move 0.179 Å on average, an unprecedented combination of efficiency and accuracy among coarse-grained protein models. We discuss the advantages of such a high-resolution model for parametrizing effective interactions and accurately calculating observables through direct or multiscale simulations.

  3. Failure criterion of glass fabric reinforced plastic laminates

    NASA Technical Reports Server (NTRS)

    Haga, O.; Hayashi, N.; Kasuya, K.

    1986-01-01

    Failure criteria are derived for several modes of failure (uniaxial tensile or compressive loading, or biaxial combined tensile-compressive loading) in the case of closely woven plain fabric, coarsely woven plain fabric, or roving glass cloth reinforcements. For the uniaxial failure criteria, the shear strength in the interaction formula is replaced by an equation dealing with tensile or compressive strength in the direction making a 45 degree angle with one of the anisotropic axes. The interaction formula is useful as the failure criterion in combined tension-compression biaxial failure for the case of closely woven plain fabric laminates, but poor agreement is obtained in the case of coarsely woven fabric laminates.

  4. Learning with incomplete information in the committee machine.

    PubMed

    Bergmann, Urs M; Kühn, Reimer; Stamatescu, Ion-Olimpiu

    2009-12-01

    We study the problem of learning with incomplete information in a student-teacher setup for the committee machine. The learning algorithm combines unsupervised Hebbian learning of a series of associations with a delayed reinforcement step, in which the set of previously learnt associations is partly and indiscriminately unlearnt, to an extent that depends on the success rate of the student on these previously learnt associations. The relevant learning parameter lambda represents the strength of Hebbian learning. A coarse-grained analysis of the system yields a set of differential equations for overlaps of student and teacher weight vectors, whose solutions provide a complete description of the learning behavior. It reveals complicated dynamics showing that perfect generalization can be obtained if the learning parameter exceeds a threshold lambda_c, and if the initial value of the overlap between student and teacher weights is non-zero. In case of convergence, the generalization error exhibits a power law decay as a function of the number of examples used in training, with an exponent that depends on the parameter lambda. An investigation of the system flow in a subspace with broken permutation symmetry between hidden units reveals a bifurcation point lambda* above which perfect generalization does not depend on initial conditions. Finally, we demonstrate that cases of a complexity mismatch between student and teacher are optimally resolved in the sense that an over-complex student can emulate a less complex teacher rule, while an under-complex student reaches a state which realizes the minimal generalization error compatible with the complexity mismatch.

  5. Coarse-to-fine wavelet-based airport detection

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Wang, Shuigen; Pang, Zhaofeng; Zhao, Baojun

    2015-10-01

    Airport detection in optical remote sensing images has attracted great interest in applications such as military optical reconnaissance and traffic control. However, most popular techniques for airport detection from optical remote sensing images have three weaknesses: 1) due to the characteristics of optical images, the detection results are often affected by imaging conditions, like weather and imaging distortion; 2) optical images contain comprehensive information about targets, so it is difficult to extract robust features (e.g., intensity and textural information) to represent airport areas; and 3) the high resolution results in a large data volume, which limits real-time processing. Most previous works focus on solving only one of these problems, and thus cannot achieve a balance of performance and complexity. In this paper, we propose a novel coarse-to-fine airport detection framework that addresses all three issues using wavelet coefficients. The framework includes two stages: 1) an efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited to classify and coarsely locate airport candidate regions; and then 2) refined line segment detection is used to obtain the runway and landing field of the airport. Finally, airport recognition is achieved by applying the fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms existing algorithms in terms of detection accuracy and processing efficiency.
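
    The coarse stage pairs multi-scale wavelet statistics with an SVM classifier. A minimal sketch of that general pattern using PyWavelets and scikit-learn, with placeholder patches and labels standing in for real remote sensing windows (the feature choice is an illustrative assumption, not the paper's exact descriptor):

    ```python
    import numpy as np
    import pywt
    from sklearn.svm import SVC

    def wavelet_features(patch, wavelet="haar", level=2):
        """Simple multi-scale texture features: energy of each wavelet subband."""
        coeffs = pywt.wavedec2(patch, wavelet, level=level)
        feats = [np.mean(coeffs[0] ** 2)]                  # approximation energy
        for detail_level in coeffs[1:]:
            feats.extend(np.mean(band ** 2) for band in detail_level)
        return np.array(feats)

    # Placeholder data: 2D image windows and candidate/background labels.
    patches = [np.random.rand(64, 64) for _ in range(20)]
    labels = np.array([0, 1] * 10)
    X = np.stack([wavelet_features(p) for p in patches])
    clf = SVC(kernel="rbf").fit(X, labels)                 # coarse candidate classifier
    ```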

  6. Hybrid continuum-coarse-grained modeling of erythrocytes

    NASA Astrophysics Data System (ADS)

    Lyu, Jinming; Chen, Paul G.; Boedec, Gwenn; Leonetti, Marc; Jaeger, Marc

    2018-06-01

    The red blood cell (RBC) membrane is a composite structure, consisting of a phospholipid bilayer and an underlying membrane-associated cytoskeleton. Both continuum and particle-based coarse-grained RBC models make use of a set of vertices connected by edges to represent the RBC membrane, which can be seen as a triangular surface mesh for the former and a spring network for the latter. Here, we present a modeling approach combining an existing continuum vesicle model with a coarse-grained model for the cytoskeleton. Compared to other two-component approaches, our method relies on only one mesh, representing the cytoskeleton, whose velocity in the tangential direction of the membrane may differ from that of the lipid bilayer. The finitely extensible nonlinear elastic (FENE) spring force law, in combination with a repulsive force defined as a power function (POW), together called FENE-POW, is used to describe the elastic properties of the RBC membrane. The mechanical interaction between the lipid bilayer and the cytoskeleton is explicitly computed and incorporated into the vesicle model. Our model includes the fundamental mechanical properties of the RBC membrane, namely the fluidity and bending rigidity of the lipid bilayer and the shear elasticity of the cytoskeleton, while maintaining surface-area and volume conservation constraints. We present three simulation examples to demonstrate the effectiveness of this hybrid continuum-coarse-grained model for the study of RBCs in fluid flows.
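
    The FENE-POW law combines a finitely extensible attractive spring with a power-law repulsion. A minimal sketch of the force magnitude along one edge of the spring network; the exponent and coefficients here are illustrative assumptions, not the paper's parameterization:

    ```python
    def fene_pow_force(r, k_fene=1.0, r_max=2.0, k_pow=1.0, m=2):
        """Net FENE-POW spring force at separation r (0 < r < r_max).

        The attractive FENE term diverges as r -> r_max (finite extensibility);
        the repulsive power-law term diverges as r -> 0 (excluded volume).
        """
        attractive = k_fene * r / (1.0 - (r / r_max) ** 2)
        repulsive = k_pow / r ** m
        return attractive - repulsive   # positive values pull the vertices together
    ```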

  7. Compensation strategy for machining optical freeform surfaces by the combined on- and off-machine measurement.

    PubMed

    Zhang, Xiaodong; Zeng, Zhen; Liu, Xianlei; Fang, Fengzhou

    2015-09-21

    Freeform surfaces are promising for next-generation optics; however, they require high form accuracy for excellent performance. The closed loop of fabrication-measurement-compensation is necessary for improving the form accuracy. It is difficult to perform an off-machine measurement during freeform machining because remounting inaccuracy can result in significant form deviations. On the other hand, on-machine measurement may hide the systematic errors of the machine because the measuring device is placed in situ on the machine. This study proposes a new compensation strategy based on the combination of on-machine and off-machine measurement. The freeform surface is measured in off-machine mode with nanometric accuracy, and the on-machine probe achieves an accurate relative position between the workpiece and the machine after remounting. The compensation cutting path is generated according to the calculated relative position and shape errors, avoiding extra manual adjustment or highly accurate reference-feature fixtures. Experimental results verified the effectiveness of the proposed method.

  8. Effect of water content and flour particle size on gluten-free bread quality and digestibility.

    PubMed

    de la Hera, Esther; Rosell, Cristina M; Gomez, Manuel

    2014-05-15

    The impact of dough hydration level and particle size distribution of the rice flour on gluten-free bread quality and in vitro starch hydrolysis was studied. Rice flour was fractionated into fine and coarse parts and mixed with different amounts of water (70%, 90% and 110% hydration levels) and the rest of the ingredients used for making gluten-free bread. A larger bread specific volume was obtained when the coarser fraction and high dough hydration (90-110%) were combined. The crumb texture improved with increasing dough hydration, although that effect was more pronounced when breads were obtained from the fine fraction. The estimated glycaemic index was higher in breads with higher hydration (90-110%). Slowly digestible starch (SDS) and resistant starch (RS) increased in the coarse flour breads. The coarse fraction complemented with high dough hydration (90-110%) was the most suitable combination for developing rice bread when considering bread volume and crumb texture. However, the lowest dough hydration limited starch gelatinization and hindered the in vitro starch digestibility. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Effects of microstructural variation on Charpy impact properties in heavy-section Mn-Mo-Ni low alloy steel for reactor pressure vessel

    NASA Astrophysics Data System (ADS)

    Hong, Seokmin; Song, Jaemin; Kim, Min-Chul; Choi, Kwon-Jae; Lee, Bong-Sang

    2016-03-01

    The effects of microstructural changes in heavy-section Mn-Mo-Ni low alloy steel on Charpy impact properties were investigated using a 210 mm thick reactor pressure vessel. Specimens were sampled from 5 different positions at intervals of 1/4 thickness from the inner surface to the outer surface. A detailed microstructural analysis of impact-fractured specimens showed that coarse carbides along the lath boundaries acted as fracture initiation sites, and cleavage cracks deviated at prior-austenite grain boundaries and bainite lath boundaries. Upper shelf energy was higher and energy transition temperature was lower at the surface position, where a fine bainitic microstructure with homogeneously distributed fine carbides was present. Toward the center, coarse upper bainite and precipitation of coarse inter-lath carbides were observed, which deteriorated the impact properties. At the 1/4T position, the Charpy impact properties were worse than those at other positions owing to the combination of elongated, coarse inter-lath carbides and large effective grain size.

  10. A new technique for online measurement of total and water-soluble copper (Cu) in coarse particulate matter (PM).

    PubMed

    Wang, Dongbin; Shafer, Martin M; Schauer, James J; Sioutas, Constantinos

    2015-04-01

    This study presents a novel system for online, field measurement of copper (Cu) in ambient coarse (2.5-10 μm) particulate matter (PM). This new system utilizes two virtual impactors combined with a modified liquid impinger (BioSampler) to collect coarse PM directly as concentrated slurry samples. The total and water-soluble Cu concentrations are subsequently measured by a copper Ion Selective Electrode (ISE). Laboratory evaluation results indicated excellent collection efficiency (over 85%) for particles in the coarse PM size ranges. In the field evaluations, very good agreements for both total and water-soluble Cu concentrations were obtained between online ISE-based monitor measurements and those analyzed by means of inductively coupled plasma mass spectrometry (ICP-MS). Moreover, the field tests indicated that the Cu monitor could achieve near-continuous operation for at least 6 consecutive days (a time resolution of 2-4 h) without obvious shortcomings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. The in-situ 3D measurement system combined with CNC machine tools

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Jiang, Hongzhi; Li, Xudong; Sui, Shaochun; Tang, Limin; Liang, Xiaoyue; Diao, Xiaochun; Dai, Jiliang

    2013-06-01

    With the development of the manufacturing industry, in-situ 3D measurement of machined workpieces on CNC machine tools is regarded as a new trend in efficient measurement. We introduce a 3D measurement system based on stereovision and the phase-shifting method, combined with CNC machine tools, which can measure the 3D profile of machined workpieces between key machining processes. The measurement system utilizes a high dynamic range fringe acquisition method to solve the saturation problem induced by specular light reflected from shiny surfaces such as aluminum alloy or titanium alloy workpieces. We measured two aluminum alloy workpieces on CNC machine tools to demonstrate the effectiveness of the developed measurement system.

  12. A two-level approach to large mixed-integer programs with application to cogeneration in energy-efficient buildings

    DOE PAGES

    Lin, Fu; Leyffer, Sven; Munson, Todd

    2016-04-12

    We study a two-stage mixed-integer linear program (MILP) with more than 1 million binary variables in the second stage. We develop a two-level approach by constructing a semi-coarse model that coarsens with respect to variables and a coarse model that coarsens with respect to both variables and constraints. We coarsen binary variables by selecting a small number of prespecified on/off profiles. We aggregate constraints by partitioning them into groups and taking convex combination over each group. With an appropriate choice of coarsened profiles, the semi-coarse model is guaranteed to find a feasible solution of the original problem and hence provides an upper bound on the optimal solution. We show that solving a sequence of coarse models converges to the same upper bound with proven finite steps. This is achieved by adding violated constraints to coarse models until all constraints in the semi-coarse model are satisfied. We demonstrate the effectiveness of our approach in cogeneration for buildings. Here, the coarsened models allow us to obtain good approximate solutions at a fraction of the time required by solving the original problem. Extensive numerical experiments show that the two-level approach scales to large problems that are beyond the capacity of state-of-the-art commercial MILP solvers.
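
    The constraint-aggregation step described above has a simple linear-algebra form: each group of rows of Ax ≤ b is replaced by a single convex combination of those rows. A minimal sketch with uniform weights (the grouping and the weight choice are problem-specific; names are mine):

    ```python
    import numpy as np

    def aggregate_constraints(A, b, groups):
        """Replace each group of rows of A x <= b by their convex combination.

        groups: list of index lists partitioning the rows of A.
        Uniform weights are used here; any nonnegative weights summing to 1
        yield a valid relaxation of the original constraint set.
        """
        A_rows, b_rows = [], []
        for idx in groups:
            w = np.full(len(idx), 1.0 / len(idx))   # uniform convex weights
            A_rows.append(w @ A[idx, :])
            b_rows.append(w @ b[idx])
        return np.vstack(A_rows), np.array(b_rows)
    ```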

  14. Combining Coarse-Grained Protein Models with Replica-Exchange All-Atom Molecular Dynamics

    PubMed Central

    Wabik, Jacek; Kmiecik, Sebastian; Gront, Dominik; Kouza, Maksim; Koliński, Andrzej

    2013-01-01

    We describe a combination of all-atom simulations with CABS, a well-established coarse-grained protein modeling tool, into a single multiscale protocol. The simulation method has been tested on the C-terminal beta hairpin of protein G, a model system of protein folding. After reconstructing atomistic details, conformations derived from the CABS simulation were subjected to replica-exchange molecular dynamics simulations with OPLS-AA and AMBER99sb force fields in explicit solvent. Such a combination accelerates system convergence several times in comparison with all-atom simulations starting from the extended chain conformation, demonstrated by the analysis of melting curves, the number of native-like conformations as a function of time and secondary structure propagation. The results strongly suggest that the proposed multiscale method could be an efficient and accurate tool for high-resolution studies of protein folding dynamics in larger systems. PMID:23665897
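
    The replica-exchange step used after atomistic reconstruction follows the usual Metropolis swap criterion between neighboring temperature replicas. A minimal, generic sketch of that acceptance rule (a textbook form; the constant and units are my choices, not the study's simulation setup):

    ```python
    import math
    import random

    def accept_swap(energy_i, energy_j, temp_i, temp_j, k_b=0.0083145):
        """Metropolis acceptance for exchanging replicas i and j.

        k_b defaults to kJ/(mol*K); energies in kJ/mol, temperatures in K.
        """
        beta_i, beta_j = 1.0 / (k_b * temp_i), 1.0 / (k_b * temp_j)
        delta = (beta_i - beta_j) * (energy_i - energy_j)
        return delta >= 0.0 or random.random() < math.exp(delta)
    ```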

  15. Statistical downscaling of GCM simulations to streamflow using relevance vector machine

    NASA Astrophysics Data System (ADS)

    Ghosh, Subimal; Mujumdar, P. P.

    2008-01-01

    General circulation models (GCMs), the climate models often used in assessing the impact of climate change, operate on a coarse scale, and thus the simulation results obtained from GCMs are not directly useful for hydrology at the comparatively smaller river-basin scale. The article presents a methodology for statistical downscaling based on sparse Bayesian learning and the Relevance Vector Machine (RVM) to model streamflow at river basin scale for the monsoon period (June, July, August, September) using GCM-simulated climatic variables. NCEP/NCAR reanalysis data have been used for training the model to establish a statistical relationship between streamflow and climatic variables. The relationship thus obtained is used to project future streamflow from GCM simulations. The statistical methodology involves principal component analysis, fuzzy clustering and RVM. Different kernel functions are used for comparison purposes. The model is applied to the Mahanadi river basin in India. The results obtained using RVM are compared with those of the state-of-the-art Support Vector Machine (SVM) to present the advantages of RVMs over SVMs. A decreasing trend is observed for the monsoon streamflow of the Mahanadi due to high surface warming in the future, with the CCSR/NIES GCM and B2 scenario.
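
    The downscaling chain (dimension reduction, clustering, sparse Bayesian regression) can be sketched with standard tools. scikit-learn ships no RVM, so ARDRegression, a related sparse Bayesian learner, stands in for it below, and crisp KMeans labels stand in for fuzzy cluster memberships; all data here are placeholders:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.linear_model import ARDRegression

    # X: climate predictors (samples x variables); y: observed streamflow.
    X = np.random.rand(200, 30)   # placeholder reanalysis/GCM predictors
    y = np.random.rand(200)       # placeholder monsoon streamflow

    X_pc = PCA(n_components=5).fit_transform(X)            # principal components
    cluster_id = KMeans(n_clusters=3, n_init=10).fit_predict(X_pc)
    features = np.column_stack([X_pc, cluster_id])         # crisp stand-in for fuzzy memberships

    model = ARDRegression().fit(features, y)               # sparse Bayesian regression
    ```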

  16. Quench-Induced Stresses in AA2618 Forgings for Impellers: A Multiphysics and Multiscale Problem

    NASA Astrophysics Data System (ADS)

    Chobaut, Nicolas; Saelzle, Peter; Michel, Gilles; Carron, Denis; Drezet, Jean-Marie

    2015-05-01

    In the fabrication of heat-treatable aluminum parts such as AA2618 compressor impellers for turbochargers, solutionizing and quenching are key steps to obtain the required mechanical characteristics. Fast quenching is necessary to avoid coarse precipitation as it reduces the mechanical properties obtained after heat treatment. However, fast quenching induces residual stresses that can cause unacceptable distortions during machining. Furthermore, the remaining residual stresses after final machining can lead to unfavorable stresses in service. Predicting and controlling internal stresses during the whole processing from heat treatment to final machining is therefore of particular interest to prevent negative impacts of residual stresses. This problem is multiphysics because processes such as heat transfer during quenching, precipitation phenomena, thermally induced deformations, and stress generation are interacting and need to be taken into account. The problem is also multiscale as precipitates of nanosize form during quenching at locations where the cooling rate is too low. This precipitation affects the local yield strength of the material and thus impacts the level of macroscale residual stresses. A thermomechanical model accounting for precipitation in a simple but realistic way is presented. Instead of modelling precipitation that occurs during quenching, the model parameters are identified using a limited number of tensile tests achieved after representative interrupted cooling paths in a Gleeble machine. The simulation results are compared with as-quenched residual stresses in a forging measured by neutron diffraction.

  17. Resistance Spot Welding of AA5052 Sheet Metal of Dissimilar Thickness

    NASA Astrophysics Data System (ADS)

    Mat Din, N. A.; Zuhailawati, H.; Anasyida, A. S.

    2016-02-01

    Resistance spot welding of dissimilar thicknesses of AA5052 aluminum alloy was performed in order to investigate the effect of metal thickness on weldment strength. Resistance spot welding was done using a spot welding machine at Coraza Systems Sdn Bhd with a hemispherical chromium-copper electrode tip of 6.00 mm radius, at 14 kA of current and 0.02 bar of pressure for all thickness combinations. A lap joint configuration was produced between a 2.0 mm thick sheet and sheets of 1.2 - 3.2 mm thickness. The joint microstructure showed an asymmetrical nugget shape, larger on the thicker side, indicating a larger molten metal volume. The joint of 2.0 mm × 3.2 mm sheets had the lowest hardness in both the transverse and through-thickness directions because less heat was left in the weld nugget. Its microstructure shows coarse grains in the HAZ. As the thickness of the sheet metal increased, the failure load of the joints increased. However, no linear correlation was established between joint strength and metal thickness, owing to the different shapes of the fusion zone in dissimilar-thickness sheet metal.

  18. Equation-free and variable free modeling for complex/multiscale systems. Coarse-grained computation in science and engineering using fine-grained models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kevrekidis, Ioannis G.

    The work explored the linking of emerging machine learning techniques (manifold learning, in particular diffusion maps) with traditional PDE modeling/discretization/scientific computing techniques via the equation-free methodology developed by the PI. The result (in addition to several PhD degrees, two of them earned by CSGF Fellows) was a sequence of strong developments - in part on the algorithmic side, linking data mining with scientific computing, and in part on applications, ranging from PDE discretizations to molecular dynamics and complex network dynamics.

  19. Scalability and Portability of Two Parallel Implementations of ADI

    NASA Technical Reports Server (NTRS)

    Phung, Thanh; VanderWijngaart, Rob F.

    1994-01-01

    Two domain decompositions for the implementation of the NAS Scalar Penta-diagonal Parallel Benchmark on MIMD systems are investigated, namely transposition and multi-partitioning. Hardware platforms considered are the Intel iPSC/860 and Paragon XP/S-15, and clusters of SGI workstations on ethernet, communicating through PVM. It is found that the multi-partitioning strategy offers the kind of coarse granularity that allows scaling up to hundreds of processors on a massively parallel machine. Moreover, efficiency is retained when the code is ported verbatim (save message passing syntax) to a PVM environment on a modest size cluster of workstations.

  20. Intermittent particle distribution in synthetic free-surface turbulent flows.

    PubMed

    Ducasse, Lauris; Pumir, Alain

    2008-06-01

    Tracer particles on the surface of a turbulent flow have a very intermittent distribution. This preferential concentration effect is studied in a two-dimensional synthetic compressible flow, both in the inertial (self-similar) and in the dissipative (smooth) range of scales, as a function of the compressibility C. The second moment of the concentration coarse-grained over a scale r, <n_r^2>, behaves as a power law in both the inertial and the dissipative ranges of scale, with two different exponents. The shapes of the probability distribution functions of the coarse-grained density n_r vary as a function of scale r and of compressibility C through the combination C/r^kappa (kappa approximately 0.5), corresponding to the compressibility, coarse-grained over a domain of scale r, averaged over Lagrangian trajectories.
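
    The coarse-grained moments studied here can be estimated from particle positions by simple box counting. A minimal sketch for the second moment at one scale r, for particles in a unit box (the normalization convention below is an illustrative choice):

    ```python
    import numpy as np

    def second_moment(positions, r, box_size=1.0):
        """Second moment of particle density coarse-grained over scale r.

        positions: (N, 2) array of particle coordinates in [0, box_size)^2.
        """
        n_bins = int(box_size / r)
        counts, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                      bins=n_bins, range=[[0, box_size]] * 2)
        density = counts / counts.mean()      # normalize to unit mean density
        return np.mean(density ** 2)          # <n_r^2>; values > 1 signal clustering
    ```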

  1. Object tracking with robotic total stations: Current technologies and improvements based on image data

    NASA Astrophysics Data System (ADS)

    Ehrhart, Matthias; Lienhart, Werner

    2017-09-01

    Automated prism tracking is increasingly important owing to the rising automation of total station measurements in machine control, monitoring and one-person operation. In this article we summarize and explain the different techniques that are used to coarsely search for a prism, to precisely aim at a prism, and to identify whether the correct prism is tracked. Along with the state-of-the-art review, we discuss and experimentally evaluate possible improvements based on the image data of an additional wide-angle camera, which is available for many total stations today. In cases in which the total station's fine aiming module loses the prism, the tracked object may still be visible to the wide-angle camera because of its larger field of view. The theodolite angles towards the target can then be derived from its image coordinates, which facilitates a fast reacquisition of the prism. In experimental measurements we demonstrate that our image-based approach to the coarse target search is 4 to 10 times faster than conventional approaches.

  2. DNA nanotechnology: understanding and optimisation through simulation

    NASA Astrophysics Data System (ADS)

    Ouldridge, Thomas E.

    2015-01-01

    DNA nanotechnology promises to provide controllable self-assembly on the nanoscale, allowing for the design of static structures, dynamic machines and computational architectures. In this article, I review the state-of-the art of DNA nanotechnology, highlighting the need for a more detailed understanding of the key processes, both in terms of theoretical modelling and experimental characterisation. I then consider coarse-grained models of DNA, mesoscale descriptions that have the potential to provide great insight into the operation of DNA nanotechnology if they are well designed. In particular, I discuss a number of nanotechnological systems that have been studied with oxDNA, a recently developed coarse-grained model, highlighting the subtle interplay of kinetic, thermodynamic and mechanical factors that can determine behaviour. Finally, new results highlighting the importance of mechanical tension in the operation of a two-footed walker are presented, demonstrating that recovery from an unintended 'overstepped' configuration can be accelerated by three to four orders of magnitude by application of a moderate tension to the walker's track. More generally, the walker illustrates the possibility of biasing strand-displacement processes to affect the overall rate.

  3. Study on the Toughness of X100 Pipeline Steel Heat Affected Zone

    NASA Astrophysics Data System (ADS)

    Li, Xueda; Shang, Chengjia; Ma, Xiaoping; Subramanian, S. V.

    The microstructure-property correlation of the heat affected zone (HAZ) in an X100 longitudinal submerged arc welding (LSAW) real weld joint was studied in this paper. The coarse grained (CG) HAZ and the intercritically reheated coarse grained (ICCG) HAZ were characterized by optical microscopy (OM) and electron backscatter diffraction (EBSD). The microstructure of the CGHAZ is mostly composed of granular bainite with a low density of high angle boundaries (HAB). The prior austenite grain size is 80 μm. In the ICCGHAZ, coarse prior austenite grains were decorated by coarse necklace-type martensite-austenite (M-A) constituents. Distinct layers, possibly martensite and austenite layers, were observed within the M-A constituent. Charpy absorbed energy of two different HAZ regions (regions containing and not containing ICCGHAZ) was recorded using an instrumented Charpy impact test machine. The results showed that the existence of ICCGHAZ resulted in a sharp drop of Charpy absorbed energy from 180 J to 50 J, while the existence of only CGHAZ could still yield good toughness. The fracture surface was 60% brittle in the absence of ICCGHAZ, and 100% brittle in its presence in the impact tested samples. The underlying reason is that the microstructure of the ICCGHAZ consisted of granular bainite and upper bainite with necklace-type M-A constituents along the grain boundaries. Cleavage fracture initiated from the M-A constituent, either through cracking of the M-A or debonding from the matrix, was observed at the fracture surface of the ICCGHAZ. The presence of necklace-type M-A constituents in the ICCGHAZ notably increases the susceptibility to cleavage microcrack nucleation. Furthermore, the study of secondary microcracks beneath the CGHAZ and the ICCGHAZ through EBSD suggested that the fracture mechanism changes from nucleation-controlled in the CGHAZ to propagation-controlled in the ICCGHAZ because of the presence of the necklace-type M-A constituent in the ICCGHAZ region. Both fracture mechanisms contribute to the poor toughness of the samples containing ICCGHAZ. In conclusion, large prior austenite grains with a low density of HABs, together with coarse necklace-type M-A constituents along the grain boundaries, are the dominant factors resulting in low toughness.

  4. Effect of grinding with diamond-disc and -bur on the mechanical behavior of a Y-TZP ceramic.

    PubMed

    Pereira, G K R; Amaral, M; Simoneti, R; Rocha, G C; Cesar, P F; Valandro, L F

    2014-09-01

    This study compared the effects of grinding on the surface micromorphology, phase transformation (t→m), biaxial flexural strength and structural reliability (Weibull analysis) of a Y-TZP (Lava) ceramic using diamond-discs and -burs. 170 discs (15 × 1.2 mm) were produced and divided into 5 groups: without treatment (Ctrl, as-sintered), and ground with 4 different systems: extra-fine (25 µm, Xfine) and coarse diamond-bur (181 µm, Coarse), 600-grit (25 µm, D600) and 120-grit diamond-disc (160 µm, D120). Grinding with burs was performed using a contra-angle handpiece (T2-Revo R170, Sirona), while for discs (Allied) a polishing machine (Ecomet, Buehler) was employed, both under water-cooling. Micromorphological analysis showed distinct patterns generated by grinding with discs and burs, independent of grit size. There was no statistical difference in characteristic strength values (MPa) between the smaller grit sizes (D600 - 1050.08 and Xfine - 1171.33), although both presented higher values than Ctrl (917.58). For the bigger grit sizes, a significant difference was observed (Coarse - 1136.32 > D120 - 727.47). Weibull moduli were statistically similar between the tested groups. Within the limits of this study, from a micromorphological point of view, the treatments performed did not generate similar effects, so from a methodological point of view, diamond-discs should not be employed to simulate clinical abrasion performed with diamond-burs on Y-TZP ceramics. Copyright © 2014 Elsevier Ltd. All rights reserved.
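
    As a companion note to the Weibull analysis named above: the sketch below shows the two-parameter Weibull fit routinely applied to ceramic strength data. The strength values are illustrative placeholders, not data from this study.

```python
# Two-parameter Weibull fit via linear regression on the linearised CDF:
# ln(-ln(1 - F)) = m*ln(sigma) - m*ln(sigma0). The strengths below are
# illustrative placeholders, not measurements from the study.
import numpy as np

strengths = np.sort(np.array([842.0, 901.5, 955.3, 1010.2, 1066.8,
                              1098.4, 1133.7, 1170.9, 1205.6, 1248.1]))
n = len(strengths)
F = (np.arange(1, n + 1) - 0.5) / n      # median-rank failure probability
x = np.log(strengths)
y = np.log(-np.log(1.0 - F))
m, c = np.polyfit(x, y, 1)               # slope m is the Weibull modulus
sigma0 = np.exp(-c / m)                  # characteristic strength (63.2%)
print(f"Weibull modulus m = {m:.2f}, characteristic strength = {sigma0:.1f} MPa")
```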

  5. Automatic Railway Traffic Object Detection System Using Feature Fusion Refine Neural Network under Shunting Mode.

    PubMed

    Ye, Tao; Wang, Baocheng; Song, Ping; Li, Juan

    2018-06-12

    Many accidents happen under shunting mode, when the speed of a train is below 45 km/h. In this mode, train attendants observe the railway condition ahead using the traditional manual method and relay the observations to the driver in order to avoid danger. To address this problem, an automatic object detection system based on a convolutional neural network (CNN), called the Feature Fusion Refine neural network (FR-Net), is proposed to detect objects ahead in shunting mode. It consists of three connected modules, i.e., the depthwise-pointwise convolution, the coarse detection module, and the object detection module. Depthwise-pointwise convolutions are used to achieve real-time detection. The coarse detection module coarsely refines the locations and sizes of prior anchors to provide better initialization for the subsequent module and also reduces the search space for classification, whereas the object detection module regresses accurate object locations and predicts the class labels for the prior anchors. The experimental results on the railway traffic dataset show that FR-Net achieves 0.8953 mAP at 72.3 FPS on a machine with a GeForce GTX 1080Ti at an input size of 320 × 320 pixels. The results imply that FR-Net strikes a good tradeoff between effectiveness and real-time performance. The proposed method can meet the needs of practical application in shunting mode.
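
    The depthwise-pointwise convolution named in the abstract is the standard depthwise-separable building block; a minimal PyTorch sketch follows. The channel counts, normalisation and activation are illustrative assumptions, not FR-Net's published configuration.

```python
# Depthwise-pointwise (depthwise-separable) convolution block of the kind
# the abstract credits for real-time speed. Channel counts, normalisation
# and activation are illustrative assumptions, not FR-Net's actual config.
import torch
import torch.nn as nn

class DepthwisePointwise(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # groups=in_ch makes the 3x3 convolution depthwise (one filter per
        # input channel); the 1x1 pointwise convolution then mixes channels.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 320, 320)             # one 320 x 320 feature map
print(DepthwisePointwise(32, 64)(x).shape)   # torch.Size([1, 64, 320, 320])
```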

  6. Continuous data assimilation for downscaling large-footprint soil moisture retrievals

    NASA Astrophysics Data System (ADS)

    Altaf, Muhammad U.; Jana, Raghavendra B.; Hoteit, Ibrahim; McCabe, Matthew F.

    2016-10-01

    Soil moisture is a key component of the hydrologic cycle, influencing processes leading to runoff generation, infiltration and groundwater recharge, evaporation and transpiration. Generally, the measurement scale for soil moisture differs from the modeling scales of these processes. Reducing this mismatch between observation and model scales is necessary for improved hydrological modeling. An innovative approach to downscaling coarse resolution soil moisture data by combining continuous data assimilation and physically based modeling is presented. In this approach, we exploit the features of Continuous Data Assimilation (CDA), which was initially designed for general dissipative dynamical systems and later tested numerically on the incompressible Navier-Stokes equation and the Benard equation. A nudging term, estimated as the misfit between interpolants of the assimilated coarse grid measurements and the fine grid model solution, is added to the model equations to constrain the model's large scale variability by the available measurements. Soil moisture fields generated at a fine resolution by a physically-based vadose zone model (HYDRUS) are subjected to data assimilation conditioned upon coarse resolution observations. This enables nudging of the model outputs towards values that honor the coarse resolution dynamics while still being generated at the fine scale. Results show that the approach is feasible for generating fine scale soil moisture fields across large extents, based on coarse scale observations. A likely application of this approach is the generation of fine and intermediate resolution soil moisture fields conditioned on the radiometer-based, coarse resolution products from remote sensing satellites.
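
    The nudging idea described above can be illustrated with a toy one-dimensional example, in which a fine-grid model is relaxed toward an interpolant of coarse observations. The grid sizes, physics and nudging gain below are arbitrary illustrative choices, not the HYDRUS setup.

```python
# Toy 1-D nudging example: a fine-grid diffusion model is relaxed toward
# an interpolant of coarse "observations". Grid sizes, the diffusion
# coefficient and the nudging gain mu are illustrative choices.
import numpy as np

nf, nc, mu, dt = 100, 10, 5.0, 1e-4
x_f = np.linspace(0.0, 1.0, nf)                 # fine model grid
x_c = np.linspace(0.0, 1.0, nc)                 # coarse observation grid
truth = 0.25 + 0.1 * np.sin(2 * np.pi * x_f)    # synthetic true field
obs_c = np.interp(x_c, x_f, truth)              # coarse observations
theta = np.full(nf, 0.25)                       # initial model state

for _ in range(20000):
    lap = np.gradient(np.gradient(theta, x_f), x_f)   # crude diffusion term
    model_c = np.interp(x_c, x_f, theta)              # model at obs scale
    misfit = np.interp(x_f, x_c, model_c - obs_c)     # interpolated misfit
    theta += dt * (0.01 * lap - mu * misfit)          # tendency + nudging

print("RMSE vs truth:", float(np.sqrt(np.mean((theta - truth) ** 2))))
```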

  7. SoilGrids250m: Global gridded soil information based on machine learning

    PubMed Central

    Mendes de Jesus, Jorge; Heuvelink, Gerard B. M.; Ruiperez Gonzalez, Maria; Kilibarda, Milan; Blagotić, Aleksandar; Shangguan, Wei; Wright, Marvin N.; Geng, Xiaoyuan; Bauer-Marschallinger, Bernhard; Guevara, Mario Antonio; Vargas, Rodrigo; MacMillan, Robert A.; Batjes, Niels H.; Leenaars, Johan G. B.; Ribeiro, Eloi; Wheeler, Ichsani; Mantel, Stephan; Kempen, Bas

    2017-01-01

    This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods—random forest and gradient boosting and/or multinomial logistic regression—as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10-fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) considerable investments in preparing finer resolution covariate layers and (3) the insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods for multiscale merging of SoilGrids predictions with local and/or national gridded soil products (e.g. up to 50 m spatial resolution) so that increasingly more accurate, complete and consistent global soil information can be produced. SoilGrids are available under the Open Database License. PMID:28207752
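
    For readers who want a concrete starting point, the sketch below reproduces the flavour of the ensemble in Python with scikit-learn (the paper itself used the R packages ranger, xgboost, nnet and caret). The data are synthetic stand-ins, not SoilGrids covariates.

```python
# Python analogue of a SoilGrids-style ensemble: average a random forest
# and a gradient-boosting model for one numeric soil property, with 10-fold
# cross-validated R^2 as in the paper. Data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                  # stand-in covariate stack
y = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=500)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
gb = GradientBoostingRegressor(random_state=0)
for name, model in (("random forest", rf), ("gradient boosting", gb)):
    r2 = cross_val_score(model, X, y, cv=10, scoring="r2").mean()
    print(f"{name}: mean 10-fold R^2 = {r2:.2f}")

# Simple ensemble: fit both members and average their predictions
rf.fit(X, y)
gb.fit(X, y)
ensemble_pred = 0.5 * (rf.predict(X) + gb.predict(X))
```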

  8. SoilGrids250m: Global gridded soil information based on machine learning.

    PubMed

    Hengl, Tomislav; Mendes de Jesus, Jorge; Heuvelink, Gerard B M; Ruiperez Gonzalez, Maria; Kilibarda, Milan; Blagotić, Aleksandar; Shangguan, Wei; Wright, Marvin N; Geng, Xiaoyuan; Bauer-Marschallinger, Bernhard; Guevara, Mario Antonio; Vargas, Rodrigo; MacMillan, Robert A; Batjes, Niels H; Leenaars, Johan G B; Ribeiro, Eloi; Wheeler, Ichsani; Mantel, Stephan; Kempen, Bas

    2017-01-01

    This paper describes the technical development and accuracy assessment of the most recent and improved version of the SoilGrids system at 250m resolution (June 2016 update). SoilGrids provides global predictions for standard numeric soil properties (organic carbon, bulk density, Cation Exchange Capacity (CEC), pH, soil texture fractions and coarse fragments) at seven standard depths (0, 5, 15, 30, 60, 100 and 200 cm), in addition to predictions of depth to bedrock and distribution of soil classes based on the World Reference Base (WRB) and USDA classification systems (ca. 280 raster layers in total). Predictions were based on ca. 150,000 soil profiles used for training and a stack of 158 remote sensing-based soil covariates (primarily derived from MODIS land products, SRTM DEM derivatives, climatic images and global landform and lithology maps), which were used to fit an ensemble of machine learning methods—random forest and gradient boosting and/or multinomial logistic regression—as implemented in the R packages ranger, xgboost, nnet and caret. The results of 10-fold cross-validation show that the ensemble models explain between 56% (coarse fragments) and 83% (pH) of variation with an overall average of 61%. Improvements in the relative accuracy considering the amount of variation explained, in comparison to the previous version of SoilGrids at 1 km spatial resolution, range from 60 to 230%. Improvements can be attributed to: (1) the use of machine learning instead of linear regression, (2) considerable investments in preparing finer resolution covariate layers and (3) the insertion of additional soil profiles. Further development of SoilGrids could include refinement of methods to incorporate input uncertainties and derivation of posterior probability distributions (per pixel), and further automation of spatial modeling so that soil maps can be generated for potentially hundreds of soil variables. Another area of future research is the development of methods for multiscale merging of SoilGrids predictions with local and/or national gridded soil products (e.g. up to 50 m spatial resolution) so that increasingly more accurate, complete and consistent global soil information can be produced. SoilGrids are available under the Open Database License.

  9. Progressive Classification Using Support Vector Machines

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri; Kocurek, Michael

    2009-01-01

    An algorithm for progressive classification of data, analogous to progressive rendering of images, makes it possible to compromise between speed and accuracy. This algorithm uses support vector machines (SVMs) to classify data. An SVM is a machine learning algorithm that builds a mathematical model of the desired classification concept by identifying the critical data points, called support vectors. Coarse approximations to the concept require only a few support vectors, while precise, highly accurate models require far more support vectors. Once the model has been constructed, the SVM can be applied to new observations. The cost of classifying a new observation is proportional to the number of support vectors in the model. When computational resources are limited, an SVM of the appropriate complexity can be produced. However, if the constraints are not known when the model is constructed, or if they can change over time, a method for adaptively responding to the current resource constraints is required. This capability is particularly relevant for spacecraft (or any other real-time systems) that perform onboard data analysis. The new algorithm enables the fast, interactive application of an SVM classifier to a new set of data. The classification process achieved by this algorithm is characterized as progressive because a coarse approximation to the true classification is generated rapidly and thereafter iteratively refined. The algorithm uses two SVMs: (1) a fast, approximate one and (2) a slow, highly accurate one. New data are initially classified by the fast SVM, producing a baseline approximate classification. For each classified data point, the algorithm calculates a confidence index that indicates the likelihood that it was classified correctly in the first pass. Next, the data points are sorted by their confidence indices and progressively reclassified by the slower, more accurate SVM, starting with the items most likely to be incorrectly classified. The user can halt this reclassification process at any point, thereby obtaining the best possible result for a given amount of computation time. Alternatively, the results can be displayed as they are generated, providing the user with real-time feedback about the current accuracy of classification.
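
    A minimal sketch of the progressive scheme is given below, under the assumption that a linear SVM plays the fast model and an RBF SVM the accurate one (the article does not prescribe specific kernels): distance to the decision boundary serves as the confidence index, and reclassification proceeds from least to most confident.

```python
# Progressive classification sketch: a fast linear SVM labels everything
# and supplies a confidence index (distance to the margin); a slower RBF
# SVM then relabels points from least to most confident, and the loop can
# be halted at any budget. Data and kernels are illustrative stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC, LinearSVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, y_train, X_new = X[:1000], y[:1000], X[1000:]

fast = LinearSVC(dual=False).fit(X_train, y_train)
slow = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

labels = fast.predict(X_new)                        # baseline approximation
confidence = np.abs(fast.decision_function(X_new))  # margin distance
order = np.argsort(confidence)                      # least confident first

budget = 200                                        # refinement budget
labels[order[:budget]] = slow.predict(X_new[order[:budget]])
```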

  10. The identification of high potential archers based on fitness and motor ability variables: A Support Vector Machine approach.

    PubMed

    Taha, Zahari; Musa, Rabiu Muazu; P P Abdul Majeed, Anwar; Alim, Muhammad Muaz; Abdullah, Mohamad Razali

    2018-02-01

    Support Vector Machine (SVM) has been shown to be an effective learning algorithm for classification and prediction. However, the application of SVM for prediction and classification in specific sports has rarely been used to quantify or discriminate between low- and high-performance athletes. The present study classified and predicted high- and low-potential archers from a set of fitness and motor ability variables using different SVM kernel algorithms. Fifty youth archers (mean age 17.0 ± 0.6 years) drawn from various archery programmes completed a six-arrow shooting score test. Standard fitness and ability measurements, namely hand grip, vertical jump, standing broad jump, static balance, upper muscle strength and core muscle strength, were also recorded. Hierarchical agglomerative cluster analysis (HACA) was used to cluster the archers based on the performance variables tested. SVM models with linear, quadratic, cubic, fine RBF, medium RBF, as well as coarse RBF kernel functions were trained on the measured performance variables. The HACA clustered the archers into high-potential archers (HPA) and low-potential archers (LPA), respectively. The linear, quadratic, cubic, and medium RBF kernel function models demonstrated excellent classification accuracy of 97.5% (a 2.5% error rate) in predicting the HPA and the LPA. The findings of this investigation can be valuable to coaches and sports managers in recognising high-potential athletes from a few selected fitness and motor ability variables, which would consequently save cost, time and effort during talent identification programmes. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. In vivo placental MRI shape and textural features predict fetal growth restriction and postnatal outcome.

    PubMed

    Dahdouh, Sonia; Andescavage, Nickie; Yewale, Sayali; Yarish, Alexa; Lanham, Diane; Bulas, Dorothy; du Plessis, Adre J; Limperopoulos, Catherine

    2018-02-01

    To investigate the ability of three-dimensional (3D) MRI placental shape and textural features to predict fetal growth restriction (FGR) and birth weight (BW) for both healthy and FGR fetuses. We recruited two groups of pregnant volunteers between 18 and 39 weeks of gestation; 46 healthy subjects and 34 FGR. Both groups underwent fetal MR imaging on a 1.5 Tesla GE scanner using an eight-channel receiver coil. We acquired T2-weighted images on either the coronal or the axial plane to obtain MR volumes with a slice thickness of either 4 or 8 mm covering the full placenta. Placental shape features (volume, thickness, elongation) were combined with textural features: first-order statistics (mean, variance, kurtosis, and skewness of placental gray levels), as well as textural features computed on the gray-level co-occurrence and run-length matrices characterizing placental homogeneity, symmetry, and coarseness. The features were used in two machine learning frameworks to predict FGR and BW. The proposed machine-learning based method using shape and textural features identified FGR pregnancies with 86% accuracy, 77% precision and 86% recall. BW estimations were 0.3 ± 13.4% (mean percentage error ± standard error) for healthy fetuses and -2.6 ± 15.9% for FGR. The proposed FGR identification and BW estimation methods using in utero placental shape and textural features computed on 3D MR images demonstrated high accuracy in our healthy and high-risk cohorts. Future studies to assess the evolution of each feature with regard to placental development are currently underway. Level of Evidence: 2. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2018;47:449-458. © 2017 International Society for Magnetic Resonance in Medicine.
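
    The gray-level co-occurrence features named above are standard; the sketch below computes a few of them with scikit-image on a random stand-in array (not an MR image). Newer scikit-image releases spell the functions graycomatrix/graycoprops; older ones use greycomatrix/greycoprops.

```python
# GLCM texture features of the kind computed on placental MR slices, plus
# the first-order statistics named in the abstract. The 2-D array is
# random stand-in data, not an MR image.
import numpy as np
from scipy.stats import kurtosis, skew
from skimage.feature import graycomatrix, graycoprops

img = (np.random.default_rng(0).random((128, 128)) * 32).astype(np.uint8)

glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                    levels=32, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, float(graycoprops(glcm, prop).mean()))

flat = img.ravel().astype(float)   # first-order (histogram) statistics
print("mean", flat.mean(), "variance", flat.var(),
      "skewness", skew(flat), "kurtosis", kurtosis(flat))
```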

  12. Carbon nanotube growth density control

    NASA Technical Reports Server (NTRS)

    Delzeit, Lance D. (Inventor); Schipper, John F. (Inventor)

    2010-01-01

    Method and system for combined coarse scale control and fine scale control of growth density of a carbon nanotube (CNT) array on a substrate, using a selected electrical field adjacent to a substrate surface for coarse scale density control (by one or more orders of magnitude) and a selected CNT growth temperature range for fine scale density control (by multiplicative factors of less than an order of magnitude) of CNT growth density. Two spaced apart regions on a substrate may have different CNT growth densities and/or may use different feed gases for CNT growth.

  13. Reconstructed Solar-Induced Fluorescence: A Machine Learning Vegetation Product Based on MODIS Surface Reflectance to Reproduce GOME-2 Solar-Induced Fluorescence

    NASA Astrophysics Data System (ADS)

    Gentine, P.; Alemohammad, S. H.

    2018-04-01

    Solar-induced fluorescence (SIF) observations from space have resulted in major advancements in estimating gross primary productivity (GPP). However, current SIF observations remain spatially coarse, infrequent, and noisy. Here we develop a machine learning approach using surface reflectances from Moderate Resolution Imaging Spectroradiometer (MODIS) channels to reproduce SIF normalized by clear sky surface irradiance from the Global Ozone Monitoring Experiment-2 (GOME-2). The resulting product is a proxy for ecosystem photosynthetically active radiation absorbed by chlorophyll (fAPARCh). Multiplying this new product with a MODIS estimate of photosynthetically active radiation provides a new MODIS-only reconstruction of SIF called Reconstructed SIF (RSIF). RSIF exhibits much higher seasonal and interannual correlation than the original SIF when compared with eddy covariance estimates of GPP and two reference global GPP products, especially in dry and cold regions. RSIF also reproduces intense productivity regions such as the U.S. Corn Belt contrary to typical vegetation indices and similarly to SIF.

  14. Grumman WS33 wind system. Phase II: executive summary. Prototype construction and testing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adler, F M; Hinton, P; King, P W

    1980-11-01

    The configuration of an 8 kW wind turbine generator and its fabrication and pre-delivery testing are discussed. The machine is a three-bladed, downwind turbine designed to interface directly with an electrical utility network. Power is generated in winds between a cut-in speed of 4.0 m/s and a cut-out speed of 22 m/s. A blade pitch control system provides for positioning the rotor at a coarse pitch for start-up, fine pitch for normal running, and a feather position for shut-down. Operation of the machine is controlled by a self-monitoring, programmable logic microprocessor. System components were obtained through a series of make-buy decisions, tracked and inspected for specification compliance. Only minor modifications from the original design and minor problems of assembly are reported. Four accelerometers were mounted inside the nacelle to determine the accelerations, frequencies and displacements of the system in the three orthogonal axes. A cost analysis is updated. (LEW)

  15. DNA bipedal motor walking dynamics: an experimental and theoretical study of the dependency on step size

    PubMed Central

    Khara, Dinesh C; Berger, Yaron; Ouldridge, Thomas E

    2018-01-01

    We present a detailed coarse-grained computer simulation and single molecule fluorescence study of the walking dynamics and mechanism of a DNA bipedal motor striding on a DNA origami. In particular, we study the dependency of the walking efficiency and stepping kinetics on step size. The simulations accurately capture and explain three different experimental observations. These include a description of the maximum possible step size, a decrease in the walking efficiency over short distances and a dependency of the efficiency on the walking direction with respect to the origami track. The former two observations were not expected and are non-trivial. Based on this study, we suggest three design modifications to improve future DNA walkers. Our study demonstrates the ability of the oxDNA model to resolve the dynamics of complex DNA machines, and its usefulness as an engineering tool for the design of DNA machines that operate in the three spatial dimensions. PMID:29294083

  16. Some distinguishing characteristics of contour and texture phenomena in images

    NASA Technical Reports Server (NTRS)

    Jobson, Daniel J.

    1992-01-01

    The development of generalized contour/texture discrimination techniques is a central element necessary for machine vision recognition and interpretation of arbitrary images. Here, the visual perception of texture, selected studies of texture analysis in machine vision, and diverse small samples of contour and texture are all used to provide insights into the fundamental characteristics of contour and texture. From these, an experimental discrimination scheme is developed and tested on a battery of natural images. Studies of the visual perception of texture define fine texture as a subclass that is interpreted as shading and is distinct from coarse figural-similarity textures. Perception also sets the smallest scale for contour/texture discrimination at eight to nine visual acuity units. Three contour/texture discrimination parameters were found to be moderately successful at this scale of discrimination: (1) lightness change in a blurred version of the image, (2) change in lightness change in the original image, and (3) percent change in edge counts relative to the local maximum.
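
    A rough numerical sketch of the three parameters is given below; the Sobel operators, threshold and window size are illustrative guesses rather than the paper's exact formulation, and parameter (3) is only loosely approximated.

```python
# Rough numerical sketch of the three discrimination parameters; the
# operators, threshold and window size are illustrative guesses, and
# parameter (3) only loosely approximates the paper's formulation.
import numpy as np
from scipy import ndimage

img = np.random.default_rng(1).random((64, 64))   # stand-in lightness image
blurred = ndimage.gaussian_filter(img, sigma=2)

def grad_mag(a):
    """Gradient magnitude from Sobel derivatives."""
    return np.hypot(ndimage.sobel(a, axis=0), ndimage.sobel(a, axis=1))

p1 = grad_mag(blurred)            # (1) lightness change, blurred image
p2 = grad_mag(grad_mag(img))      # (2) change in lightness change, original

def local_edge_count(a, thresh=0.5, size=9):
    """Fraction of edge pixels in a local window."""
    return ndimage.uniform_filter((grad_mag(a) > thresh).astype(float), size)

p3 = ((local_edge_count(img) - local_edge_count(blurred))
      / (local_edge_count(blurred) + 1e-6))   # (3) relative edge-count change
```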

  17. Fine- and coarse-filter conservation strategies in a time of climate change.

    PubMed

    Tingley, Morgan W; Darling, Emily S; Wilcove, David S

    2014-08-01

    As species adapt to a changing climate, so too must humans adapt to a new conservation landscape. Classical frameworks have distinguished between fine- and coarse-filter conservation strategies, focusing on conserving either the species or the landscapes, respectively, that together define extant biodiversity. Adapting this framework for climate change, conservationists are using fine-filter strategies to assess species vulnerability and prioritize the most vulnerable species for conservation actions. Coarse-filter strategies seek to conserve either key sites as determined by natural elements unaffected by climate change, or sites with low climate velocity that are expected to be refugia for climate-displaced species. Novel approaches combine coarse- and fine-scale approaches--for example, prioritizing species within pretargeted landscapes--and accommodate the difficult reality of multiple interacting stressors. By taking a diversified approach to conservation actions and decisions, conservationists can hedge against uncertainty, take advantage of new methods and information, and tailor actions to the unique needs and limitations of places, thereby ensuring that the biodiversity show will go on. © 2014 New York Academy of Sciences.

  18. Temporal acceleration of spatially distributed kinetic Monte Carlo simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chatterjee, Abhijit; Vlachos, Dionisios G.

    The computational intensity of kinetic Monte Carlo (KMC) simulation is a major impediment in simulating large length and time scales. In recent work, an approximate method for KMC simulation of spatially uniform systems, termed the binomial τ-leap method, was introduced [A. Chatterjee, D.G. Vlachos, M.A. Katsoulakis, Binomial distribution based τ-leap accelerated stochastic simulation, J. Chem. Phys. 122 (2005) 024112], where molecular bundles instead of individual processes are executed over coarse-grained time increments. This temporal coarse-graining can lead to significant computational savings, but its generalization to spatial lattice KMC simulation has not been realized yet. Here we extend the binomial τ-leap method to lattice KMC simulations by combining it with spatially adaptive coarse-graining. Absolute stability and computational speed-up analyses for spatial systems, along with simulations, provide insights into the conditions where accuracy and substantial acceleration of the new spatio-temporal coarse-graining method are ensured. Model systems demonstrate that the r-time increment criterion of Chatterjee et al. obeys the absolute stability limit for values of r up to near 1.
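
    The core of the binomial τ-leap idea can be shown in a few lines for a single first-order reaction A → B: a bundle of reaction events over a coarse time increment τ is drawn from a binomial distribution, so the leap can never overdraw the available population. The rate constant and increment below are illustrative.

```python
# Binomial tau-leap step for a single first-order reaction A -> B with
# rate constant k: a bundle of events over the coarse increment tau is
# drawn from a binomial distribution, so the leap cannot overdraw the
# population. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)
nA, k, tau, t, t_end = 10_000, 0.5, 0.01, 0.0, 5.0
while t < t_end and nA > 0:
    p = 1.0 - np.exp(-k * tau)    # per-molecule firing probability in tau
    nA -= rng.binomial(nA, p)     # execute a bundle of reaction events
    t += tau

print("remaining A:", nA, "| analytic mean:", round(10_000 * np.exp(-k * t_end), 1))
```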

  19. Hierarchical coarse-graining model for photosystem II including electron and excitation-energy transfer processes.

    PubMed

    Matsuoka, Takeshi; Tanaka, Shigenori; Ebina, Kuniyoshi

    2014-03-01

    We propose a hierarchical reduction scheme to cope with coupled rate equations that describe the dynamics of multi-time-scale photosynthetic reactions. To numerically solve nonlinear dynamical equations containing a wide temporal range of rate constants, we first study a prototypical three-variable model. Using a separation of the time scale of rate constants combined with identified slow variables as (quasi-)conserved quantities in the fast process, we achieve a coarse-graining of the dynamical equations reduced to those at a slower time scale. By iteratively employing this reduction method, the coarse-graining of broadly multi-scale dynamical equations can be performed in a hierarchical manner. We then apply this scheme to the reaction dynamics analysis of a simplified model for an illuminated photosystem II, which involves many processes of electron and excitation-energy transfers with a wide range of rate constants. We thus confirm a good agreement between the coarse-grained and fully (finely) integrated results for the population dynamics. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Evaluation of ternary cementitious combinations : tech summary.

    DOT National Transportation Integrated Search

    2012-02-01

    Portland cement concrete (PCC) is the world's most versatile and widely utilized construction material. Modern concrete consists of six main ingredients: coarse aggregate, sand, portland cement, supplementary cementitious materials (SCMs), chemical admi...

  1. Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.

    PubMed

    Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi

    2013-01-01

    The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable to disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm based on a combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a more robust model with higher accuracy than conventional microarray classification models such as support vector machines, artificial neural networks, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. The fuzzy support vector machine, as a new classification model with high generalization power, robustness, and good interpretability, seems to be a promising tool for gene expression microarray classification.

  2. 19 CFR 10.243 - Articles eligible for preferential treatment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... construction and of coarse animal hair or man-made filaments; (C) Any combination of findings and trimmings of... incurred in the growth, production, manufacture, or other processing of the components, findings and...

  3. 19 CFR 10.243 - Articles eligible for preferential treatment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... construction and of coarse animal hair or man-made filaments; (C) Any combination of findings and trimmings of... incurred in the growth, production, manufacture, or other processing of the components, findings and...

  4. 19 CFR 10.243 - Articles eligible for preferential treatment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... construction and of coarse animal hair or man-made filaments; (C) Any combination of findings and trimmings of... incurred in the growth, production, manufacture, or other processing of the components, findings and...

  5. 19 CFR 10.243 - Articles eligible for preferential treatment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... construction and of coarse animal hair or man-made filaments; (C) Any combination of findings and trimmings of... incurred in the growth, production, manufacture, or other processing of the components, findings and...

  6. 19 CFR 10.243 - Articles eligible for preferential treatment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... construction and of coarse animal hair or man-made filaments; (C) Any combination of findings and trimmings of... incurred in the growth, production, manufacture, or other processing of the components, findings and...

  7. Hopper on wheels: evolving the hopping robot concept

    NASA Technical Reports Server (NTRS)

    Schell, S.; Tretten, A.; Burdick, J.; Fuller, S. B.; Fiorini, P.

    2001-01-01

    This paper describes the evolution of our concept of a hopping robot for planetary exploration, which combines coarse long-range mobility achieved by hopping with short-range wheeled mobility for precision target acquisition.

  8. Fabrication of large aperture SiC brazing mirror

    NASA Astrophysics Data System (ADS)

    Li, Ang; Wang, Peipei; Dong, Huiwen; Wang, Peng

    2016-10-01

    A SiC brazing mirror is a mirror whose blank is made by joining smaller SiC pieces with a brazing technique. Such joining techniques make it possible to manufacture large and complex SiC assemblies. The key technologies for fabricating and testing a SiC brazing flat mirror, especially one of large aperture, were studied. The SiC brazing flat mirror was ground by a smart ultrasonic-milling machine, then lapped by a lapping smart robot and measured by a Coordinate Measuring Machine (CMM). After the PV of the surface was brought below 4 µm, classic coarse polishing was applied and the shape of the polishing tool, which directly affects the removal distribution, was studied. Finally, the mirror was figured by a polishing smart robot and measured by a Fizeau interferometer. We also studied the influence of the machining path and the removal functions of the smart robots on the manufacturing results, and discussed the use of abrasive in this process. As an example, the fabrication and measurement of a SiC brazing flat mirror with an aperture of 600 mm, made by the Shanghai Institute of Ceramics, is given. The mirror blank consists of 6 SiC sectors, and the surface was finally processed to a Peak-to-Valley (PV) of 150 nm and a Root Mean Square (RMS) of 12 nm.

  9. SU-D-BRA-07: A Phantom Study to Assess the Variability in Radiomics Features Extracted From Cone-Beam CT Images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fave, X; Fried, D; UT Health Science Center Graduate School of Biomedical Sciences, Houston, TX

    2015-06-15

    Purpose: Several studies have demonstrated the prognostic potential for texture features extracted from CT images of non-small cell lung cancer (NSCLC) patients. The purpose of this study was to determine if these features could be extracted with high reproducibility from cone-beam CT (CBCT) images in order for features to be easily tracked throughout a patient’s treatment. Methods: Two materials in a radiomics phantom, designed to approximate NSCLC tumor texture, were used to assess the reproducibility of 26 features. This phantom was imaged on 9 CBCT scanners, including Elekta and Varian machines. Thoracic and head imaging protocols were acquired on each machine. CBCT images from 27 NSCLC patients imaged using the thoracic protocol on Varian machines were obtained for comparison. The variance for each texture measured from these patients was compared to the variance in phantom values for different manufacturer/protocol subsets. Levene’s test was used to identify features which had a significantly smaller variance in the phantom scans versus the patient data. Results: Approximately half of the features (13/26 for material1 and 15/26 for material2) had a significantly smaller variance (p<0.05) between Varian thoracic scans of the phantom compared to patient scans. Many of these same features remained significant for the head scans on Varian (12/26 and 8/26). However, when thoracic scans from Elekta and Varian were combined, only a few features were still significant (4/26 and 5/26). Three features (skewness, coarsely filtered mean and standard deviation) were significant in almost all manufacturer/protocol subsets. Conclusion: Texture features extracted from CBCT images of a radiomics phantom are reproducible and show significantly less variation than the same features measured from patient images when images from the same manufacturer or with similar parameters are used. Reproducibility between CBCT scanners may be high enough to allow the extraction of meaningful texture values for patients. This project was funded in part by the Cancer Prevention Research Institute of Texas (CPRIT). Xenia Fave is a recipient of the American Association of Physicists in Medicine Graduate Fellowship.
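
    Levene's test used in this study is available in SciPy; the sketch below compares the spread of a texture feature over repeated phantom scans against its spread over patient scans, with synthetic placeholder values.

```python
# Levene's test for equality of variances: texture-feature spread over
# repeated phantom scans versus spread over patient scans. Values are
# synthetic placeholders, not measurements from the study.
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
phantom_feature = rng.normal(loc=100.0, scale=2.0, size=9)    # 9 scanners
patient_feature = rng.normal(loc=100.0, scale=10.0, size=27)  # 27 patients

stat, p = levene(phantom_feature, patient_feature)
print(f"Levene W = {stat:.2f}, p = {p:.4f}")
# A small p, combined with the visibly smaller phantom spread, supports
# the claim that the feature is reproducible relative to patient variation.
```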

  10. Automated science target selection for future Mars rovers: A machine vision approach for the future ESA ExoMars 2018 rover mission

    NASA Astrophysics Data System (ADS)

    Tao, Yu; Muller, Jan-Peter

    2013-04-01

    The ESA ExoMars 2018 rover is planned to perform autonomous science target selection (ASTS) using the approaches described in [1]. However, the approaches shown to date have focused on coarse features rather than the identification of specific geomorphological units. These higher-level "geoobjects" can later be employed to perform intelligent reasoning or machine learning. In this work, we show the next stage in the ASTS through examples displaying the identification of bedding planes (not just linear features in rock-face images) and the identification and discrimination of rocks in a rock-strewn landscape (not just rocks). We initially detect the layers and rocks in 2D processing via morphological gradient detection [1] and graph cuts based segmentation [2] respectively. To take this further requires the retrieval of 3D point clouds and the combined processing of point clouds and images for reasoning about the scene. An example is the differentiation of rocks in rover images. This will depend on knowledge of range and range-order of features. We show demonstrations of these "geo-objects" using MER and MSL (released through the PDS) as well as data collected within the EU-PRoViScout project (http://proviscout.eu). An initial assessment will be performed of the automated "geo-objects" using the OpenSource StereoViewer developed within the EU-PRoViSG project (http://provisg.eu) which is released in sourceforge. In future, additional 3D measurement tools will be developed within the EU-FP7 PRoViDE2 project, which started on 1.1.13. References: [1] M. Woods, A. Shaw, D. Barnes, D. Price, D. Long, D. Pullan, (2009) "Autonomous Science for an ExoMars Rover-Like Mission", Journal of Field Robotics Special Issue: Special Issue on Space Robotics, Part II, Volume 26, Issue 4, pages 358-390. [2] J. Shi, J. Malik, (2000) "Normalized Cuts and Image Segmentation", IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 22. [3] D. Shin, and J.-P. Muller (2009), Stereo workstation for Mars rover image analysis, in EPSC (Europlanets), Potsdam, Germany, EPSC2009-390
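
    Of the 2D operations cited above, the morphological gradient is the simplest to reproduce; a sketch with SciPy on a synthetic stand-in image follows (graph-cut segmentation needs a dedicated solver and is omitted here).

```python
# Morphological gradient (grey dilation minus grey erosion), the 2-D
# operation cited for layer detection. The input is random stand-in data.
import numpy as np
from scipy import ndimage

img = np.random.default_rng(0).random((64, 64))
grad = (ndimage.grey_dilation(img, size=(3, 3))
        - ndimage.grey_erosion(img, size=(3, 3)))
print("gradient range:", float(grad.min()), float(grad.max()))
```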

  11. Performance and efficiency of old newspaper deinking by combining cellulase/hemicellulase with laccase-violuric acid system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu Qinghua; Fu Yingjuan; Gao Yang

    2009-05-15

    Performance and efficiency of old newspaper (ONP) deinking by combining cellulase/hemicellulase with laccase-violuric acid system (LVS) were investigated in this study. Brightness, effective residual ink concentration (ERIC) and physical properties were evaluated for the deinked pulp. Fiber length, coarseness, specific surface area and specific volume were also tested. The changes of dissolved lignin during the deinking processes were measured with UV spectroscopy. The fiber morphology was observed with environmental scanning electronic microscopy (ESEM). Experimental results showed that, compared to the pulp deinked with each individual enzyme, ERIC was lower for the cellulase/hemicellulase-LVS-deinked pulp. This indicated that a synergy existed in ONP deinking using a combination of enzymes. After being bleached by H₂O₂, enzyme-combining deinked pulp gave higher brightness and better strength properties. Compared with individual enzyme deinked pulp, average fiber length and coarseness decreased a little for the enzyme-combining deinked pulps. A higher specific surface area and specific volume of the pulp fibers were achieved. UV analysis proved that more lignin was released during the enzyme-combining deinking process. ESEM images showed that more fibrillation was observed on the fiber surface due to synergistic treatment.

  12. 29 CFR 570.62 - Occupations involved in the operation of bakery machines (Order 11).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., or cleaning any horizontal or vertical dough mixer; batter mixer; bread dividing, rounding, or molding machine; dough brake; dough sheeter; combination bread slicing and wrapping machine; or cake cutting band saw. (2) The occupation of setting up or adjusting a cookie or cracker machine. (b...

  13. 29 CFR 570.62 - Occupations involved in the operation of bakery machines (Order 11).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., or cleaning any horizontal or vertical dough mixer; batter mixer; bread dividing, rounding, or molding machine; dough brake; dough sheeter; combination bread slicing and wrapping machine; or cake cutting band saw. (2) The occupation of setting up or adjusting a cookie or cracker machine. (b...

  14. 29 CFR 570.62 - Occupations involved in the operation of bakery machines (Order 11).

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., or cleaning any horizontal or vertical dough mixer; batter mixer; bread dividing, rounding, or molding machine; dough brake; dough sheeter; combination bread slicing and wrapping machine; or cake cutting band saw. (2) The occupation of setting up or adjusting a cookie or cracker machine. (b...

  15. 29 CFR 570.62 - Occupations involved in the operation of bakery machines (Order 11).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., or cleaning any horizontal or vertical dough mixer; batter mixer; bread dividing, rounding, or molding machine; dough brake; dough sheeter; combination bread slicing and wrapping machine; or cake cutting band saw. (2) The occupation of setting up or adjusting a cookie or cracker machine. (b...

  16. THE OPEP COARSE-GRAINED PROTEIN MODEL: FROM SINGLE MOLECULES, AMYLOID FORMATION, ROLE OF MACROMOLECULAR CROWDING AND HYDRODYNAMICS TO RNA/DNA COMPLEXES

    PubMed Central

    Sterpone, Fabio; Melchionna, Simone; Tuffery, Pierre; Pasquali, Samuela; Mousseau, Normand; Cragnolini, Tristan; Chebaro, Yassmine; Saint-Pierre, Jean-Francois; Kalimeri, Maria; Barducci, Alessandro; Laurin, Yohan; Tek, Alex; Baaden, Marc; Nguyen, Phuong Hoang; Derreumaux, Philippe

    2015-01-01

    The OPEP coarse-grained protein model has been applied to a wide range of applications since its first release 15 years ago. The model, which combines energetic and structural accuracy and chemical specificity, allows studying single protein properties, DNA/RNA complexes, amyloid fibril formation and protein suspensions in a crowded environment. Here we first review the current state of the model and the most exciting applications using advanced conformational sampling methods. We then present the current limitations and a perspective on the on-going developments. PMID:24759934

  17. A Multiatlas Segmentation Using Graph Cuts with Applications to Liver Segmentation in CT Scans

    PubMed Central

    2014-01-01

    An atlas-based segmentation approach is presented that combines low-level operations, an affine probabilistic atlas, and a multiatlas-based segmentation. The proposed combination provides highly accurate segmentation due to registrations and atlas selections based on the regions of interest (ROIs) and coarse segmentations. Our approach shares the following common elements between the probabilistic atlas and multiatlas segmentation: (a) the spatial normalisation and (b) the segmentation method, which is based on minimising a discrete energy function using graph cuts. The method is evaluated for the segmentation of the liver in computed tomography (CT) images. Low-level operations define a ROI around the liver from an abdominal CT. We generate a probabilistic atlas using an affine registration based on geometry moments from manually labelled data. Next, a coarse segmentation of the liver is obtained from the probabilistic atlas with low computational effort. Then, a multiatlas segmentation approach improves the accuracy of the segmentation. Both the atlas selections and the nonrigid registrations of the multiatlas approach use a binary mask defined by coarse segmentation. We experimentally demonstrate that this approach performs better than atlas selections and nonrigid registrations in the entire ROI. The segmentation results are comparable to those obtained by human experts and to other recently published results. PMID:25276219

  18. Root architecture impacts on root decomposition rates in switchgrass

    NASA Astrophysics Data System (ADS)

    de Graaff, M.; Schadt, C.; Garten, C. T.; Jastrow, J. D.; Phillips, J.; Wullschleger, S. D.

    2010-12-01

    Roots strongly contribute to soil organic carbon accrual, but the rate of soil carbon input via root litter decomposition is still uncertain. Root systems are built up of roots with a variety of different diameter size classes, ranging from very fine to very coarse roots. Since fine roots have low C:N ratios and coarse roots have high C:N ratios, root systems are heterogeneous in quality, spanning a range of different C:N ratios. Litter decomposition rates are generally well predicted by litter C:N ratios, thus decomposition of roots may be controlled by the relative abundance of fine versus coarse roots. With this study we asked how root architecture (i.e. the relative abundance of fine versus coarse roots) affects the decomposition of roots systems in the biofuels crop switchgrass (Panicum virgatum L.). To understand how root architecture affects root decomposition rates, we collected roots from eight switchgrass cultivars (Alamo, Kanlow, Carthage, Cave-in-Rock, Forestburg, Southlow, Sunburst, Blackwell), grown at FermiLab (IL), by taking 4.8-cm diameter soil cores from on top of the crown and directly next to the crown of individual plants. Roots were carefully excised from the cores by washing and analyzed for root diameter size class distribution using WinRhizo. Subsequently, root systems of each of the plants (4 replicates per cultivar) were separated in 'fine' (0-0.5 mm), 'medium' (0.5-1 mm) and 'coarse' roots (1-2.5 mm), dried, cut into 0.5 cm (medium and coarse roots) and 2 mm pieces (fine roots), and incubated for 90 days. For each of the cultivars we established five root-treatments: 20g of soil was amended with 0.2g of (1) fine roots, (2) medium roots, (3) coarse roots, (4) a 1:1:1 mixture of fine, medium and coarse roots, and (5) a mixture combining fine, medium and coarse roots in realistic proportions. We measured CO2 respiration at days 1, 3, 7, 15, 30, 60 and 90 during the experiment. The 13C signature of the soil was -26‰, and the 13C signature of plants was -12‰, enabling us to differentiate between root-derived C and native SOM-C respiration. We found that the relative abundance of fine, medium and coarse roots were significantly different among cultivars. Root systems of Alamo, Kanlow and Cave-in-Rock were characterized by a large abundance of coarse-, relative to fine roots, whereas Carthage, Forestburg and Blackwell had a large abundance of fine, relative to coarse roots. Fine roots had a 28% lower C:N ratio than medium and coarse roots. These differences led to different root decomposition rates. We conclude that root architecture should be taken into account when predicting root decomposition rates; enhanced understanding of the mechanisms of root decomposition will improve model predictions of C input to soil organic matter.
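
    The isotope bookkeeping in the last step is a standard two-source mixing calculation; a minimal sketch using the end-members quoted above (-26‰ soil, -12‰ plant) follows, with an illustrative sample value.

```python
# Two-source mixing: split respired CO2 into root-derived and native
# SOM-derived carbon using the d13C end-members quoted above
# (-26 permil soil, -12 permil plant roots).
def root_fraction(delta_sample, delta_soil=-26.0, delta_root=-12.0):
    """Fraction of respired carbon that is root-derived."""
    return (delta_sample - delta_soil) / (delta_root - delta_soil)

# Illustrative respired-CO2 signature of -19 permil (not a measured value)
print(f"root-derived fraction: {root_fraction(-19.0):.2f}")   # -> 0.50
```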

  19. Systematic methods for defining coarse-grained maps in large biomolecules.

    PubMed

    Zhang, Zhiyong

    2015-01-01

    Large biomolecules are involved in many important biological processes. It would be difficult to use large-scale atomistic molecular dynamics (MD) simulations to study the functional motions of these systems because of the computational expense. Therefore various coarse-grained (CG) approaches have attracted rapidly growing interest, which enable simulations of large biomolecules over longer effective timescales than all-atom MD simulations. The first issue in CG modeling is to construct CG maps from atomic structures. In this chapter, we review the recent development of a novel and systematic method for constructing CG representations of arbitrarily complex biomolecules, in order to preserve large-scale and functionally relevant essential dynamics (ED) at the CG level. In this ED-CG scheme, the essential dynamics can be characterized by principal component analysis (PCA) on a structural ensemble, or elastic network model (ENM) of a single atomic structure. Validation and applications of the method cover various biological systems, such as multi-domain proteins, protein complexes, and even biomolecular machines. The results demonstrate that the ED-CG method may serve as a very useful tool for identifying functional dynamics of large biomolecules at the CG level.
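
    A minimal sketch of the PCA step that the ED-CG scheme builds on is shown below: principal component analysis of a coordinate ensemble exposing the essential modes. The "trajectory" is random stand-in data, and this is only the input to ED-CG, not the map-optimization step itself.

```python
# PCA of a coordinate ensemble, the input that the ED-CG scheme uses to
# characterise essential dynamics. The "trajectory" is random stand-in
# data, not a real MD ensemble.
import numpy as np

n_frames, n_atoms = 200, 50
rng = np.random.default_rng(0)
traj = rng.normal(size=(n_frames, 3 * n_atoms))   # frames x 3N coordinates
traj -= traj.mean(axis=0)                         # remove the mean structure

cov = np.cov(traj, rowvar=False)                  # 3N x 3N covariance
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]                   # largest variance first
explained = evals[order][:5] / evals.sum()
print("top-5 essential modes explain:", np.round(explained, 3))
```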

  20. Effect of severe plastic deformation on microstructure and mechanical properties of magnesium and aluminium alloys in wide range of strain rates

    NASA Astrophysics Data System (ADS)

    Skripnyak, Vladimir; Skripnyak, Evgeniya; Skripnyak, Vladimir; Vaganova, Irina; Skripnyak, Nataliya

    2013-06-01

    Research results indicate that grain size has a strong influence on the mechanical behavior of metals and alloys. Ultrafine-grained HCP and FCC alloys exhibit higher spall strength than their coarse-grained counterparts. In the present study we investigate the effect of grain size distribution on the flow stress and strength under dynamic compression and tension of aluminium and magnesium alloys. Microstructure and grain size distribution in the alloys were varied by severe plastic deformation using multiple-pass equal channel angular pressing, cyclic constrained groove pressing, and surface mechanical attrition treatment. Tests were performed using a VHS-Instron servo-hydraulic machine. An ultra-high-speed Phantom V710 camera was used to record deformation and fracture of specimens at strain rates from 0.01 to 1000 1/s. In the dynamic regime, UFG alloys exhibit a stronger decrease in ductility compared to the coarse grained material. The plastic flow of UFG alloys with a bimodal grain size distribution was highly localized. Shear band and shear crack nucleation and growth were recorded using high speed photography.

  1. Dropout Prediction in E-Learning Courses through the Combination of Machine Learning Techniques

    ERIC Educational Resources Information Center

    Lykourentzou, Ioanna; Giannoukos, Ioannis; Nikolopoulos, Vassilis; Mpardis, George; Loumos, Vassili

    2009-01-01

    In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to…

  2. Obtaining Global Picture From Single Point Observations by Combining Data Assimilation and Machine Learning Tools

    NASA Astrophysics Data System (ADS)

    Shprits, Y.; Zhelavskaya, I. S.; Kellerman, A. C.; Spasojevic, M.; Kondrashov, D. A.; Ghil, M.; Aseev, N.; Castillo Tibocha, A. M.; Cervantes Villa, J. S.; Kletzing, C.; Kurth, W. S.

    2017-12-01

    The increasing volume of satellite measurements requires the deployment of new tools that can utilize such vast amounts of data. Satellite measurements are usually limited to a single location in space, which complicates data analysis geared towards reproducing the global state of the space environment. In this study we show how measurements can be combined by means of data assimilation, and how machine learning can help analyze large amounts of data and develop global models trained on single-point measurements. Data assimilation: Manual analysis of satellite measurements is a challenging task, while automated analysis is complicated by the fact that measurements are given at various locations in space, have different instrumental errors, and often vary by orders of magnitude. We show results of a long-term reanalysis of radiation belt measurements along with fully operational real-time predictions using the data-assimilative VERB code. Machine learning: We present the application of machine learning tools to the analysis of NASA Van Allen Probes upper-hybrid frequency measurements. Using the obtained data set, we train a new global predictive neural network. The results for the Van Allen Probes-based neural network are compared with historical IMAGE satellite observations. We also show examples of predictions of geomagnetic indices using neural networks. Combining machine learning and data assimilation: We discuss how data assimilation tools and machine learning tools can be combined so that physics-based insight into the dynamics of a particular system is joined with empirical knowledge of its non-linear behavior.

  3. 26 CFR 148.1-5 - Constructive sale price.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... of articles listed in Chapter 32 of the Internal Revenue Code (other than combinations) that embraces... section. For the rule applicable to combinations of two or more articles, see subdivision (iv) of this..., perforating, cutting, and dating machines, and other check protector machine devices; (o) Taxable cash...

  4. 26 CFR 148.1-5 - Constructive sale price.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... of articles listed in Chapter 32 of the Internal Revenue Code (other than combinations) that embraces... section. For the rule applicable to combinations of two or more articles, see subdivision (iv) of this..., perforating, cutting, and dating machines, and other check protector machine devices; (o) Taxable cash...

  5. 26 CFR 148.1-5 - Constructive sale price.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... of articles listed in Chapter 32 of the Internal Revenue Code (other than combinations) that embraces... section. For the rule applicable to combinations of two or more articles, see subdivision (iv) of this..., perforating, cutting, and dating machines, and other check protector machine devices; (o) Taxable cash...

  6. Monotonic entropy growth for a nonlinear model of random exchanges.

    PubMed

    Apenko, S M

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific "coarse graining" of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.
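
    For reference, one standard way to write the redistribution map of this class of models (the paper's own notation may differ): two agents pool their energies and split the total uniformly at random, giving the nonlinear update below.

```latex
% Nonlinear update for the energy distribution f_n under random binary
% exchanges (Ulam redistribution); theta is the Heaviside step function.
% This is a standard form for such models, not necessarily the paper's
% exact notation.
\[
  f_{n+1}(x) \;=\; \int_0^\infty \!\! \int_0^\infty
    \frac{\theta(u+v-x)}{u+v}\, f_n(u)\, f_n(v)\, \mathrm{d}u\, \mathrm{d}v
\]
```

    The proof factors this map into a linear, energy-conserving two-particle evolution followed by integration over one variable, which is the coarse-graining step the abstract refers to.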

  7. Monotonic entropy growth for a nonlinear model of random exchanges

    NASA Astrophysics Data System (ADS)

    Apenko, S. M.

    2013-02-01

    We present a proof of the monotonic entropy growth for a nonlinear discrete-time model of a random market. This model, based on binary collisions, also may be viewed as a particular case of Ulam's redistribution of energy problem. We represent each step of this dynamics as a combination of two processes. The first one is a linear energy-conserving evolution of the two-particle distribution, for which the entropy growth can be easily verified. The original nonlinear process is actually a result of a specific “coarse graining” of this linear evolution, when after the collision one variable is integrated away. This coarse graining is of the same type as the real space renormalization group transformation and leads to an additional entropy growth. The combination of these two factors produces the required result which is obtained only by means of information theory inequalities.

  8. Forging of Advanced Disk Alloy LSHR

    NASA Technical Reports Server (NTRS)

    Gabb, Timothy P.; Gayda, John; Falsey, John

    2005-01-01

    The powder metallurgy disk alloy LSHR was designed with a relatively low gamma precipitate solvus temperature and high refractory element content to allow versatile heat treatment processing combined with high tensile, creep and fatigue properties. Grain size can be chiefly controlled through proper selection of solution heat treatment temperatures relative to the gamma precipitate solvus temperature. However, forging process conditions can also significantly influence the grain-size response to solution heat treatment. Therefore, it is necessary to understand the relationships between forging process conditions and the eventual grain size of solution heat treated material. A series of forging experiments were performed with subsequent subsolvus and supersolvus heat treatments, in search of suitable forging conditions for producing uniform fine grain and coarse grain microstructures. Subsolvus, supersolvus, and combined subsolvus plus supersolvus heat treatments were then applied. Forging and subsequent heat treatment conditions were identified allowing uniform fine and coarse grain microstructures.

  9. Diamond turning of Si and Ge single crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blake, P.; Scattergood, R.O.

    Single-point diamond turning studies have been completed on Si and Ge crystals. A new process model was developed for diamond turning which is based on a critical depth of cut for plastic flow-to-brittle fracture transitions. This concept, when combined with the actual machining geometry for single-point turning, predicts that "ductile" machining is a combined action of plasticity and fracture. Interrupted cutting experiments also provide a means to directly measure the critical depth parameter for given machining conditions.

  10. Possibilities for preservation of coarse particles in pelleting process to improve feed quality characteristics.

    PubMed

    Vukmirović, D; Fišteš, A; Lević, J; Čolović, R; Rakić, D; Brlek, T; Banjac, V

    2017-10-01

    Poultry diets are mainly used in pelleted form because pellets have many advantages compared to mash feed. On the other hand, pelleting causes reduction of feed particle size. The aim of this research was to investigate the possibility of increasing the content of coarse particles in pellets and, at the same time, to produce pellets with satisfactory quality. In this research, three grinding treatments of corn were applied using a hammer mill with three sieve opening diameters: 3 mm (HM-3), 6 mm (HM-6) and 9 mm (HM-9). These grinding treatments were combined in the pelleting process with three gaps between the rollers and the die of the pellet press (roller-die gap, RDG) (0.30, 1.15 and 2.00 mm) and three moisture contents of the pelleted material (14.5, 16.0 and 17.5%). The increased coarseness of grinding by the hammer mill resulted in an increased amount of coarse particles in pellets, especially when the smallest RDG was applied (0.30 mm), but pellet quality was greatly reduced. Increasing the RDG improved the quality of pellets produced from coarsely ground corn, but reduced the content of coarse particles in pellets and increased the specific energy consumption of the pellet press. Increasing the moisture content of the material to be pelleted (MC) significantly reduced the energy consumption of the pellet press, but there was no significant influence of MC on particle size after pelleting or on pellet quality. The optimal values of the pelleting process parameters were determined using the desirability function method. The results of the optimization showed that to achieve the highest possible quantity of coarse particles in the pellets, and to produce pellets of satisfactory quality with the lowest possible energy consumption of the pellet press, the coarsest grinding on the hammer mill (HM-9), the largest RDG (2 mm) and the highest MC (17.5%) should be applied. Journal of Animal Physiology and Animal Nutrition © 2016 Blackwell Verlag GmbH.
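    The desirability-function optimization mentioned above follows a standard pattern: each response (coarse-particle content, pellet quality, energy use) is mapped to a [0, 1] desirability and their geometric mean is maximized over the process parameters. A generic sketch of the Derringer-type form, not the authors' exact transforms:

```python
import numpy as np

def desirability_larger_is_better(y, low, target, r=1.0):
    """Derringer-type one-sided desirability: 0 below `low`, 1 above
    `target`, power-law ramp in between."""
    if y <= low:
        return 0.0
    if y >= target:
        return 1.0
    return ((y - low) / (target - low)) ** r

def overall_desirability(ds):
    # Geometric mean of the individual response desirabilities.
    return float(np.prod(ds) ** (1.0 / len(ds)))
```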

  11. On the removal of hexavalent chromium from a Class F fly ash.

    PubMed

    Huggins, F E; Rezaee, M; Honaker, R Q; Hower, J C

    2016-05-01

    Coarse and fine samples of a Class F fly ash obtained from commercial combustion of Illinois bituminous coal have been exposed to two long-term leaching tests designed to simulate conditions in waste impoundments. ICP-AES analysis indicated that the coarse and fine fly ash samples contained 135 and 171 mg/kg Cr, respectively. Measurements by XAFS spectroscopy showed that the ash samples originally contained 5 and 8% of the chromium, respectively, in the hexavalent oxidation state, Cr(VI). After exposure to water for more than four months, the percentage of chromium as Cr(VI) in the fly-ash decreased significantly for the coarse and fine fly-ash in both tests. Combining the XAFS data with ICP-AES data on the concentration of chromium in the leachates indicated that, after the nineteen-week-long, more aggressive, kinetic test on the coarse fly ash, approximately 60% of the Cr(VI) had been leached, 20% had been reduced to Cr(III) and retained in the ash, and 20% remained as Cr(VI) in the ash. In contrast, during the six-month-long baseline test, very little Cr was actually leached from either the coarse or the fine fly-ash (<0.1 mg/kg); rather, about 66% and 20%, respectively, of the original Cr(VI) in the coarse and fine fly-ash was retained in the ash in that form, while the remainder, 34% and 80%, respectively, was reduced and retained in the ash as Cr(III). The results are interpreted as indicating that Cr(VI) present in Class F fly-ash can be reduced to Cr(III) when in contact with water and that such chemical reduction can compete with physical removal of Cr(VI) from the ash by aqueous leaching. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Application of p-Multigrid to Discontinuous Galerkin Formulations of the Poisson Equation

    NASA Technical Reports Server (NTRS)

    Helenbrook, B. T.; Atkins, H. L.

    2006-01-01

    We investigate p-multigrid as a solution method for several different discontinuous Galerkin (DG) formulations of the Poisson equation. Different combinations of relaxation schemes and basis sets have been combined with the DG formulations to find the best performing combination. The damping factors of the schemes have been determined using Fourier analysis for both one- and two-dimensional problems. One important finding is that when using DG formulations, the standard approach of forming the coarse p matrices separately for each level of multigrid is often unstable. To ensure stability, the coarse p matrices must be constructed from the fine grid matrices using algebraic multigrid techniques. Of the relaxation schemes, we find that the combination of Jacobi relaxation with the spectral element basis is fairly effective. The results using this combination are p-sensitive in both one and two dimensions, but reasonable convergence rates can still be achieved for moderate values of p and isotropic meshes. A competitive alternative is block Gauss-Seidel relaxation, which actually outperforms a more expensive line relaxation when the mesh is isotropic. When the mesh becomes highly anisotropic, the implicit line method and the Gauss-Seidel implicit line method are the only effective schemes. Adding the Gauss-Seidel terms to the implicit line method gives a significant improvement over the line relaxation method.
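    The stability fix described above, building coarse-level matrices algebraically from the fine-level operator rather than re-discretizing at each p level, amounts to a Galerkin triple product, the core of algebraic coarse-operator construction. A minimal sketch; the choice of prolongation P (keeping the lowest-order modes) and the SPD stand-in matrix are illustrative:

```python
import numpy as np

def galerkin_coarse_operator(A_fine, P):
    """Coarse-level operator built algebraically from the fine-level
    matrix, A_c = P^T A_f P, instead of re-discretizing at low order.
    P maps coarse-basis coefficients into the fine basis."""
    return P.T @ A_fine @ P

# Toy sizes: 10 fine modes restricted to the 5 lowest-order modes.
n_fine, n_coarse = 10, 5
A = np.random.rand(n_fine, n_fine)
A_fine = A + A.T + n_fine * np.eye(n_fine)   # SPD stand-in for a DG stiffness matrix
P = np.eye(n_fine)[:, :n_coarse]             # illustrative prolongation operator
A_coarse = galerkin_coarse_operator(A_fine, P)
```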

  13. Radiation tolerant combinational logic cell

    NASA Technical Reports Server (NTRS)

    Maki, Gary R. (Inventor); Whitaker, Sterling (Inventor); Gambles, Jody W. (Inventor)

    2009-01-01

    A system has a reduced sensitivity to Single Event Upset and/or Single Event Transient(s) compared to traditional logic devices. In a particular embodiment, the system includes an input, a logic block, a bias stage, a state machine, and an output. The logic block is coupled to the input. The logic block is for implementing a logic function, receiving a data set via the input, and generating a result f by applying the data set to the logic function. The bias stage is coupled to the logic block. The bias stage is for receiving the result from the logic block and presenting it to the state machine. The state machine is coupled to the bias stage. The state machine is for receiving, via the bias stage, the result generated by the logic block. The state machine is configured to retain a state value for the system. The state value is typically based on the result generated by the logic block. The output is coupled to the state machine. The output is for providing the value stored by the state machine. Some embodiments of the invention produce dual rail outputs Q and Q'. The logic block typically contains combinational logic and is similar, in size and transistor configuration, to a conventional CMOS combinational logic design. However, only a very small portion of the circuits of these embodiments, is sensitive to Single Event Upset and/or Single Event Transients.

  14. Impact of low asphalt binder for coarse HMA mixes : final report.

    DOT National Transportation Integrated Search

    2017-06-01

    Asphalt mixtures are commonly specified using volumetric controls in combination with aggregate gradation limits; like most transportation agencies, MnDOT also uses this approach. From 2010 onward, several asphalt paving projects for MnDOT have been...

  15. Two-bead polarizable water models combined with a two-bead multipole force field (TMFF) for coarse-grained simulation of proteins.

    PubMed

    Li, Min; Zhang, John Z H

    2017-03-08

    The development of polarizable water models at coarse-grained (CG) levels is of much importance to CG molecular dynamics simulations of large biomolecular systems. In this work, we combined the newly developed two-bead multipole force field (TMFF) for proteins with two-bead polarizable water models to carry out CG molecular dynamics simulations for benchmark proteins. In our simulations, two different two-bead polarizable water models are employed: the RTPW model of Riniker et al., which represents five water molecules, and the LTPW model, which represents four water molecules. The LTPW model is developed in this study based on the Martini three-bead polarizable water model. Our simulation results showed that the combination of TMFF with the LTPW model significantly stabilizes the protein's native structure in CG simulations, while the use of the RTPW model gives better agreement with all-atom simulations in predicting the residue-level fluctuation dynamics. Overall, the TMFF coupled with the two-bead polarizable water models enables one to perform an efficient and reliable CG dynamics study of the structural and functional properties of large biomolecules.

  16. A general gridding, discretization, and coarsening methodology for modeling flow in porous formations with discrete geological features

    NASA Astrophysics Data System (ADS)

    Karimi-Fard, M.; Durlofsky, L. J.

    2016-10-01

    A comprehensive framework for modeling flow in porous media containing thin, discrete features, which could be high-permeability fractures or low-permeability deformation bands, is presented. The key steps of the methodology are mesh generation, fine-grid discretization, upscaling, and coarse-grid discretization. Our specialized gridding technique combines a set of intersecting triangulated surfaces by constructing approximate intersections using existing edges. This procedure creates a conforming mesh of all surfaces, which defines the internal boundaries for the volumetric mesh. The flow equations are discretized on this conforming fine mesh using an optimized two-point flux finite-volume approximation. The resulting discrete model is represented by a list of control-volumes with associated positions and pore-volumes, and a list of cell-to-cell connections with associated transmissibilities. Coarse models are then constructed by the aggregation of fine-grid cells, and the transmissibilities between adjacent coarse cells are obtained using flow-based upscaling procedures. Through appropriate computation of fracture-matrix transmissibilities, a dual-continuum representation is obtained on the coarse scale in regions with connected fracture networks. The fine and coarse discrete models generated within the framework are compatible with any connectivity-based simulator. The applicability of the methodology is illustrated for several two- and three-dimensional examples. In particular, we consider gas production from naturally fractured low-permeability formations, and transport through complex fracture networks. In all cases, highly accurate solutions are obtained with significant model reduction.
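    The discrete model described above is essentially two lists, control volumes and transmissibility connections, which is what makes it portable across connectivity-based simulators. A schematic of that data structure; all field names are illustrative, not the authors' implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ControlVolume:
    # One cell of the (fine or coarse) discrete model.
    x: float
    y: float
    z: float
    pore_volume: float

@dataclass
class Connection:
    # A cell-to-cell link carrying its (possibly upscaled) transmissibility.
    cell_a: int
    cell_b: int
    transmissibility: float

@dataclass
class DiscreteModel:
    # The two lists any connectivity-based simulator can consume directly.
    cells: list[ControlVolume] = field(default_factory=list)
    connections: list[Connection] = field(default_factory=list)
```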

  17. The differential absorption hard x-ray spectrometer at the Z facility

    DOE PAGES

    Bell, Kate S.; Coverdale, Christine A.; Ampleford, David J.; ...

    2017-08-03

    The Differential Absorption Hard X-ray (DAHX) spectrometer is a diagnostic developed to measure time-resolved radiation between 60 keV and 2 MeV at the Z Facility. It consists of an array of 7 Si PIN diodes in a tungsten housing that provides collimation and coarse spectral resolution through differential filters. DAHX is a revitalization of the Hard X-Ray Spectrometer (HXRS) that was fielded on Z prior to refurbishment in 2006. DAHX has been tailored to the present radiation environment in Z to provide information on the power, spectral shape, and time profile of the hard emission by plasma radiation sources driven by the Z Machine.

  18. Influence of Casting Section Thickness on Fatigue Strength of Austempered Ductile Iron

    NASA Astrophysics Data System (ADS)

    Olawale, J. O.; Ibitoye, S. A.

    2017-10-01

    The influence of casting section thickness on the fatigue strength of austempered ductile iron was investigated in this study. ASTM A536 65-45-12 grade ductile iron was produced, machined into round samples of 10, 15, 20 and 25 mm diameter, austenitized at a temperature of 820 °C, quenched to austempering temperatures (TA) of 300 and 375 °C, and isothermally transformed at these temperatures for a fixed period of 2 h. From the samples, fatigue test specimens were machined to conform to ASTM E-466. Scanning electron microscopy (SEM) and x-ray diffraction (XRD) methods were used to characterize the microstructural morphology and phase distribution of the heat-treated samples. The fatigue strength decreases as the section thickness increases. The SEM images and XRD patterns show a matrix of acicular ferrite and carbon-stabilized austenite, with the ferrite coarsening and the volume fraction of austenite decreasing as the section thickness increases. The study concluded that the higher the volume fraction of carbon-stabilized austenite, the higher the fatigue strength, and that fatigue strength decreases as the ausferrite structure becomes coarser.

  19. A comparison of machine learning and Bayesian modelling for molecular serotyping.

    PubMed

    Newton, Richard; Wernisch, Lorenz

    2017-08-11

    Streptococcus pneumoniae is a human pathogen that is a major cause of infant mortality. Identifying the pneumococcal serotype is an important step in monitoring the impact of vaccines used to protect against disease. Genomic microarrays provide an effective method for molecular serotyping. Previously we developed an empirical Bayesian model for the classification of serotypes from a molecular serotyping array. With only a few samples available, a model-driven approach was the only option. In the meantime, several thousand samples have been made available to us, providing an opportunity to investigate serotype classification by machine learning methods, which could complement the Bayesian model. We compare the performance of the original Bayesian model with two machine learning algorithms: Gradient Boosting Machines and Random Forests. We present our results as an example of a generic strategy whereby a preliminary probabilistic model is complemented or replaced by a machine learning classifier once enough data are available. Despite the availability of thousands of serotyping arrays, a problem encountered when applying machine learning methods is the lack of training data containing mixtures of serotypes, owing to the large number of possible combinations. Most of the available training data comprise samples with only a single serotype. To overcome the lack of training data we implemented an iterative analysis, creating artificial training data of serotype mixtures by combining raw data from single-serotype arrays. With the enhanced training set the machine learning algorithms outperform the original Bayesian model. However, for serotypes currently lacking sufficient training data the best performing implementation was a combination of the results of the Bayesian model and the Gradient Boosting Machine. As well as being an effective method for classifying biological data, machine learning can also be used as an efficient method for revealing subtle biological insights, which we illustrate with an example.
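    The scheme for manufacturing mixture training data can be pictured as combining raw probe intensities from single-serotype arrays and attaching a multi-label target. A sketch under the assumption that a convex combination of raw signals is an adequate mixing rule; the serotype names and array sizes are hypothetical:

```python
import numpy as np

def make_mixture(raw_a, raw_b, w=0.5):
    """Artificial two-serotype training example built by mixing the raw
    probe intensities of two single-serotype arrays; the convex
    combination is an assumed mixing rule for illustration."""
    return w * raw_a + (1.0 - w) * raw_b

# Hypothetical arrays: 500 probes each; labels are serotype identifiers.
rng = np.random.default_rng(1)
array_19F = rng.gamma(2.0, 1.0, size=500)
array_6B = rng.gamma(2.0, 1.0, size=500)
x_train = make_mixture(array_19F, array_6B, w=0.7)
y_train = {"19F", "6B"}   # multi-label target for the classifier
```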

  20. High time-resolved elemental components in fine and coarse particles in the Pearl River Delta region of Southern China: Dynamic variations and effects of meteorology.

    PubMed

    Zhou, Shengzhen; Davy, Perry K; Wang, Xuemei; Cohen, Jason Blake; Liang, Jiaquan; Huang, Minjuan; Fan, Qi; Chen, Weihua; Chang, Ming; Ancelet, Travis; Trompetter, William J

    2016-12-01

    Hourly-resolved PM2.5 and PM10-2.5 samples were collected in the industrial city of Foshan in the Pearl River Delta region, China. The samples were subsequently analyzed for elemental components and black carbon (BC). A key purpose of the study was to understand the composition of particulate matter (PM) at high time resolution in a polluted urban atmosphere, to identify key components contributing to extreme PM concentration events, and to examine the diurnal chemical concentration patterns for air quality management purposes. It was found that BC and S concentrations dominated in the fine mode, while elements with mostly crustal and oceanic origins such as Si, Ca, Al and Cl were found in the coarse size fraction. Most of the elements showed strong diurnal variations. S did not show clear diurnal variations, suggesting a regional rather than local origin. Based on the empirical orthogonal function (EOF) method, 3 forcing factors were identified as contributing to the extreme events of PM2.5 and selected elements, i.e., direct urban emissions, wet deposition and a combination of coarse mode sources. Conditional probability functions (CPF) were computed using wind profiles and elemental concentrations. The CPF results showed that BC and elemental Cl, K, Fe, Cu and Zn in the fine mode were mostly from the northwest, indicating that industrial emissions and combustion were the main sources. For elements in the coarse mode, Si, Al, K, Ca, Fe and Ti showed similar patterns, suggesting common sources such as local soil dust/construction activities. Coarse elemental Cl was mostly from the south and southeast, implying the influence of marine aerosol sources. For other trace elements, we found vanadium (V) in fine PM was mainly from sources located to the southeast of the measuring site. Combined with the CPF results of S and V in fine PM, we concluded shipping emissions were likely an important elemental emission source. Copyright © 2016. Published by Elsevier B.V.
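    Conditional probability functions of the kind used above are straightforward to compute: for each wind sector, the fraction of samples whose concentration exceeds a high threshold. A generic implementation; the sector width and threshold quantile below are common but illustrative choices, not necessarily the study's settings:

```python
import numpy as np

def cpf(wind_dir_deg, conc, sector_width=30.0, threshold_quantile=0.75):
    """Conditional probability function: per wind sector, the fraction of
    samples whose concentration exceeds a high threshold (the 75th
    percentile here is a common but not universal choice)."""
    wind_dir_deg, conc = np.asarray(wind_dir_deg), np.asarray(conc)
    threshold = np.quantile(conc, threshold_quantile)
    edges = np.arange(0.0, 360.0 + sector_width, sector_width)
    sector = np.digitize(wind_dir_deg % 360.0, edges) - 1
    out = {}
    for s in range(len(edges) - 1):
        in_sector = sector == s
        n = in_sector.sum()
        m = (in_sector & (conc > threshold)).sum()
        out[(edges[s], edges[s + 1])] = m / n if n else float("nan")
    return out
```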

  1. Open Architecture Data System for NASA Langley Combined Loads Test System

    NASA Technical Reports Server (NTRS)

    Lightfoot, Michael C.; Ambur, Damodar R.

    1998-01-01

    The Combined Loads Test System (COLTS) is a new structures test complex that is being developed at NASA Langley Research Center (LaRC) to test large curved panels and cylindrical shell structures. These structural components are representative of aircraft fuselage sections of subsonic and supersonic transport aircraft and cryogenic tank structures of reusable launch vehicles. Test structures are subjected to combined loading conditions that simulate realistic flight load conditions. The facility consists of two pressure-box test machines and one combined loads test machine. Each test machine possesses a unique set of requirements for research data acquisition and real-time data display. Given the complex nature of the mechanical and thermal loads to be applied to the various research test articles, each data system has been designed with connectivity attributes that support both data acquisition and data management functions. This paper addresses the research-driven data acquisition requirements for each test machine and demonstrates how an open architecture data system design not only meets those needs but provides robust data sharing between data systems, including the various control systems which apply spectra of mechanical and thermal loading profiles.

  2. Machine Learning for the Knowledge Plane

    DTIC Science & Technology

    2006-06-01

    this idea is to combine techniques from machine learning with new architectural concepts in networking to make the internet self-aware and self...work on the machine learning portion of the Knowledge Plane. This consisted of three components: (a) we wrote a document formulating the various

  3. In vivo quantification of plant starch reserves at micrometer resolution using X-ray microCT imaging and machine learning.

    PubMed

    Earles, J Mason; Knipfer, Thorsten; Tixier, Aude; Orozco, Jessica; Reyes, Clarissa; Zwieniecki, Maciej A; Brodersen, Craig R; McElrone, Andrew J

    2018-03-08

    Starch is the primary energy storage molecule used by most terrestrial plants to fuel respiration and growth during periods of limited to no photosynthesis, and its depletion can drive plant mortality. Destructive techniques at coarse spatial scales exist to quantify starch, but these techniques face methodological challenges that can lead to uncertainty about the lability of tissue-specific starch pools and their role in plant survival. Here, we demonstrate how X-ray microcomputed tomography (microCT) and a machine learning algorithm can be coupled to quantify plant starch content in vivo, repeatedly and nondestructively over time in grapevine stems (Vitis spp.). Starch content estimated for xylem axial and ray parenchyma cells from microCT images was correlated strongly with enzymatically measured bulk-tissue starch concentration on the same stems. After validating our machine learning algorithm, we then characterized the spatial distribution of starch concentration in living stems at micrometer resolution, and identified starch depletion in live plants under experimental conditions designed to halt photosynthesis and starch production, initiating the drawdown of stored starch pools. Using X-ray microCT technology for in vivo starch monitoring should enable novel research directed at resolving the spatial and temporal patterns of starch accumulation and depletion in woody plant species. No claim to original US Government works New Phytologist © 2018 New Phytologist Trust.

  4. A Wavelet Support Vector Machine Combination Model for Singapore Tourist Arrival to Malaysia

    NASA Astrophysics Data System (ADS)

    Rafidah, A.; Shabri, Ani; Nurulhuda, A.; Suhaila, Y.

    2017-08-01

    In this study, a wavelet support vector machine model (WSVM) is proposed and applied to monthly Singapore tourist time series prediction. The WSVM model is a combination of wavelet analysis and the support vector machine (SVM). The study has two parts: in the first, we compare kernel functions; in the second, we compare the developed model with the single SVM model. The results showed that the linear kernel function performed better than the RBF kernel, and that the WSVM outperformed the single SVM model in forecasting monthly Singapore tourist arrivals to Malaysia.

  5. Shell concrete pavement.

    DOT National Transportation Integrated Search

    1966-10-01

    This report describes the testing performed with reef shell, clam shell and a combination of reef and clam shell used as coarse aggregate to determine if a low modulus concrete could be developed for use as a base material as an alternate to the pres...

  6. Man-systems integration and the man-machine interface

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1990-01-01

    Viewgraphs on man-systems integration and the man-machine interface are presented. Man-systems integration applies the systems approach to the integration of the user and the machine to form an effective, symbiotic Man-Machine System (MMS). An MMS is a combination of one or more human beings and one or more physical components that are integrated through the common purpose of achieving some objective. The human operator interacts with the system through the Man-Machine Interface (MMI).

  7. Combined empirical mode decomposition and texture features for skin lesion classification using quadratic support vector machine.

    PubMed

    Wahba, Maram A; Ashour, Amira S; Napoleon, Sameh A; Abd Elnaby, Mustafa M; Guo, Yanhui

    2017-12-01

    Basal cell carcinoma is one of the most common malignant skin lesions. Automated lesion identification and classification using image processing techniques is needed to reduce diagnosis errors. In this study, a novel technique is applied to classify skin lesion images into two classes, namely the malignant basal cell carcinoma and the benign nevus. A hybrid combination of bi-dimensional empirical mode decomposition and gray-level difference method features is proposed after hair removal. The combined features are further classified using a quadratic support vector machine (Q-SVM). The proposed system achieved an outstanding performance of 100% accuracy, sensitivity and specificity compared to other support vector machine procedures as well as to different extracted features. Basal cell carcinoma is effectively classified using Q-SVM with the proposed combined features.

  8. Holliday Junction Thermodynamics and Structure: Coarse-Grained Simulations and Experiments

    NASA Astrophysics Data System (ADS)

    Wang, Wujie; Nocka, Laura M.; Wiemann, Brianne Z.; Hinckley, Daniel M.; Mukerji, Ishita; Starr, Francis W.

    2016-03-01

    Holliday junctions play a central role in genetic recombination, DNA repair and other cellular processes. We combine simulations and experiments to evaluate the ability of the 3SPN.2 model, a coarse-grained representation designed to mimic B-DNA, to predict the properties of DNA Holliday junctions. The model reproduces many experimentally determined aspects of junction structure and stability, including the temperature dependence of melting on salt concentration, the bias between open and stacked conformations, the relative populations of conformers at high salt concentration, and the inter-duplex angle (IDA) between arms. We also obtain a close correspondence between the junction structure evaluated by all-atom and coarse-grained simulations. We predict that, for salt concentrations at physiological and higher levels, the populations of the stacked conformers are independent of salt concentration, and directly observe proposed tetrahedral intermediate sub-states implicated in conformational transitions. Our findings demonstrate that the 3SPN.2 model captures junction properties that are inaccessible to all-atom studies, opening the possibility to simulate complex aspects of junction behavior.

  9. Integration of both dense wavelength-division multiplexing and coarse wavelength-division multiplexing demultiplexer on one photonic crystal chip

    NASA Astrophysics Data System (ADS)

    Tian, Huiping; Shen, Guansheng; Liu, Weijia; Ji, Yuefeng

    2013-07-01

    An integrated model of a photonic crystal (PC) demultiplexer that can be used to combine dense wavelength-division multiplexing (DWDM) and coarse wavelength-division multiplexing (CWDM) systems is proposed for the first time. By applying the PC demultiplexer, a dense channel spacing of 0.8 nm and a coarse channel spacing of 20 nm are obtained at the same time. The transmission can be improved to nearly 90%, and the crosstalk can be decreased to less than -18 dB by enlarging the width of the bus waveguide. The total size of the device is 21×42 μm². Four channels on one side of the demultiplexer achieve DWDM in the wavelength range between 1575 and 1578 nm, and the other four channels on the other side achieve CWDM in the wavelength range between 1490 and 1565 nm. The demonstrated demultiplexer can be applied in future CWDM and DWDM systems, and the architecture costs can be significantly reduced.

  10. Quantum mechanics/coarse-grained molecular mechanics (QM/CG-MM)

    NASA Astrophysics Data System (ADS)

    Sinitskiy, Anton V.; Voth, Gregory A.

    2018-01-01

    Numerous molecular systems, including solutions, proteins, and composite materials, can be modeled using mixed-resolution representations, of which the quantum mechanics/molecular mechanics (QM/MM) approach has become the most widely used. However, the QM/MM approach often faces a number of challenges, including the high cost of repetitive QM computations, the slow sampling even for the MM part in those cases where a system under investigation has a complex dynamics, and a difficulty in providing a simple, qualitative interpretation of numerical results in terms of the influence of the molecular environment upon the active QM region. In this paper, we address these issues by combining QM/MM modeling with the methodology of "bottom-up" coarse-graining (CG) to provide the theoretical basis for a systematic quantum-mechanical/coarse-grained molecular mechanics (QM/CG-MM) mixed resolution approach. A derivation of the method is presented based on a combination of statistical mechanics and quantum mechanics, leading to an equation for the effective Hamiltonian of the QM part, a central concept in the QM/CG-MM theory. A detailed analysis of different contributions to the effective Hamiltonian from electrostatic, induction, dispersion, and exchange interactions between the QM part and the surroundings is provided, serving as a foundation for a potential hierarchy of QM/CG-MM methods varying in their accuracy and computational cost. A relationship of the QM/CG-MM methodology to other mixed resolution approaches is also discussed.

  11. A novel capacitive absolute positioning sensor based on time grating with nanometer resolution

    NASA Astrophysics Data System (ADS)

    Pu, Hongji; Liu, Hongzhong; Liu, Xiaokang; Peng, Kai; Yu, Zhicheng

    2018-05-01

    The present work proposes a novel capacitive absolute positioning sensor based on time grating. The sensor includes a fine incremental-displacement measurement component combined with a coarse absolute-position measurement component to obtain high-resolution absolute positioning measurements. A single row type sensor was proposed to achieve fine displacement measurement, which combines the two electrode rows of a previously proposed double-row type capacitive displacement sensor based on time grating into a single row. To achieve absolute positioning measurement, the coarse measurement component is designed as a single-row type displacement sensor employing a single spatial period over the entire measurement range. In addition, this component employs a rectangular induction electrode and four groups of orthogonal discrete excitation electrodes with half-sinusoidal envelope shapes, which were formed by alternately extending the rectangular electrodes of the fine measurement component. The fine and coarse measurement components are tightly integrated to form a compact absolute positioning sensor. A prototype sensor was manufactured using printed circuit board technology for testing and optimization of the design in conjunction with simulations. Experimental results show that the prototype sensor achieves a ±300 nm measurement accuracy with a 1 nm resolution over a displacement range of 200 mm when employing error compensation. The proposed sensor is an excellent alternative to presently available long-range absolute nanometrology sensors owing to its low cost, simple structure, and ease of manufacturing.
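    The coarse and fine channels combine in the usual vernier fashion: the coarse single-period reading resolves the period ambiguity, and the fine reading supplies the precise position within one period. A sketch of that decoding step; the 1 mm fine period and the example numbers are assumptions for illustration, only the 200 mm range is from the paper:

```python
def absolute_position(coarse_mm, fine_mm, fine_period_mm):
    """Vernier-style decode: the coarse single-period reading (absolute,
    low accuracy) selects the period index; the fine incremental reading
    (precise within one period) refines the position inside it."""
    k = round((coarse_mm - fine_mm) / fine_period_mm)   # integer period index
    return k * fine_period_mm + fine_mm

# Illustrative numbers: an assumed 1 mm fine period over the 200 mm range.
pos = absolute_position(coarse_mm=123.42, fine_mm=0.4371, fine_period_mm=1.0)
print(pos)   # 123.4371
```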

  12. Quantum mechanics/coarse-grained molecular mechanics (QM/CG-MM).

    PubMed

    Sinitskiy, Anton V; Voth, Gregory A

    2018-01-07

    Numerous molecular systems, including solutions, proteins, and composite materials, can be modeled using mixed-resolution representations, of which the quantum mechanics/molecular mechanics (QM/MM) approach has become the most widely used. However, the QM/MM approach often faces a number of challenges, including the high cost of repetitive QM computations, the slow sampling even for the MM part in those cases where a system under investigation has a complex dynamics, and a difficulty in providing a simple, qualitative interpretation of numerical results in terms of the influence of the molecular environment upon the active QM region. In this paper, we address these issues by combining QM/MM modeling with the methodology of "bottom-up" coarse-graining (CG) to provide the theoretical basis for a systematic quantum-mechanical/coarse-grained molecular mechanics (QM/CG-MM) mixed resolution approach. A derivation of the method is presented based on a combination of statistical mechanics and quantum mechanics, leading to an equation for the effective Hamiltonian of the QM part, a central concept in the QM/CG-MM theory. A detailed analysis of different contributions to the effective Hamiltonian from electrostatic, induction, dispersion, and exchange interactions between the QM part and the surroundings is provided, serving as a foundation for a potential hierarchy of QM/CG-MM methods varying in their accuracy and computational cost. A relationship of the QM/CG-MM methodology to other mixed resolution approaches is also discussed.

  13. 26 CFR 1.954-4 - Foreign base company services income.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... substantial assistance when taken together or in combination with other assistance furnished by a related... A is paid by related corporation M for the installation and maintenance of industrial machines which... manufactures an industrial machine which requires specialized installation. Corporation M sells the machines...

  14. Evacuation simulation using Hybrid Space Discretisation and Application to Large Underground Rail Tunnel Station

    NASA Astrophysics Data System (ADS)

    Chooramun, N.; Lawrence, P. J.; Galea, E. R.

    2017-08-01

    In all evacuation simulation tools, the space through which agents navigate and interact is represented by one of the following methods: Coarse regions, Fine nodes or Continuous regions. Each of the spatial representation methods has its benefits and limitations. For instance, the Coarse approach allows simulations to be processed very rapidly, but is unable to represent the interactions of the agents from an individual perspective; the Continuous approach provides a detailed representation of agent movement and interaction but suffers from relatively poor computational performance. The Fine nodal approach presents a compromise between the Continuous and Coarse approaches such that it allows agent interaction to be modelled while providing good computational performance. Our approach for representing space in an evacuation simulation tool differs in that it allows evacuation simulations to be run using a combination of Coarse regions, Fine nodes and Continuous regions. This approach, which we call Hybrid Spatial Discretisation (HSD), is implemented within the buildingEXODUS evacuation simulation software. The HSD incorporates the benefits of each of the spatial representation methods whilst providing an optimal environment for representing agent movement and interaction. In this work, we demonstrate the effectiveness of the HSD through its application to a moderately large case comprising an underground rail tunnel station with a population of 2,000 agents.

  15. Removal of micropollutants with coarse-ground activated carbon for enhanced separation with hydrocyclone classifiers.

    PubMed

    Otto, N; Platz, S; Fink, T; Wutscherk, M; Menzel, U

    2016-01-01

    One key technology for eliminating organic micropollutants (OMP) from wastewater effluent is adsorption using powdered activated carbon (PAC). To avoid a discharge of highly loaded PAC particles into natural water bodies, a separation stage has to be implemented. Commonly, large settling tanks and flocculation filters with the application of coagulants and flocculation aids are used. In this study, a multi-hydrocyclone classifier with a downstream cloth filter has been investigated on a pilot plant as a space-saving alternative with no need for dosing of chemical additives. To improve the separation, a coarser-ground PAC type was compared to a standard PAC type with regard to OMP elimination results as well as separation performance. With a PAC dosing rate of 20 mg/l, an average of 64.7 wt% of the standard PAC and 79.5 wt% of the coarse-ground PAC could be separated in the hydrocyclone classifier. A total average separation efficiency of 93-97 wt% could be reached with the combination of hydrocyclone classifier and cloth filter. Nonetheless, the OMP elimination of the coarse-ground PAC was not sufficient to compete with the standard PAC. Further research and development is necessary to find applicable coarse-grained PAC types with adequate OMP elimination capabilities.

  16. Ergonomic risk factor identification for sewing machine operators through supervised occupational therapy fieldwork in Bangladesh: A case study.

    PubMed

    Habib, Md Monjurul

    2015-01-01

    Many sewing machine operators work with high risk factors for musculoskeletal health in the garment industries in Bangladesh. The aim was to identify the physical risk factors among sewing machine operators in a Bangladeshi garment factory. Sewing machine operators (327; 83% female) were evaluated. The mean age of the participants was 25.25 years. Six ergonomic risk factors were determined using a musculoskeletal disorders risk assessment. Data collection included measurements of sewing machine table and chair heights; this data was combined with information from informal interviews. Significant ergonomic risk factors found included the combination of awkward postures of the neck and back, repetitive hand and arm movements, poor ergonomic workstations and prolonged working hours without adequate breaks; these risk factors resulted in musculoskeletal complaints, sick leave, and switching jobs. One aspect of improving worker health in garment factories includes addressing musculoskeletal risk factors through ergonomic interventions.

  17. Three dimensional magnetic fields in extra high speed modified Lundell alternators computed by a combined vector-scalar magnetic potential finite element method

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Wang, R.; Secunde, R.

    1992-01-01

    A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state of the art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances as well as various machine performance simulations of the 14.3 kVA machine are presented in this paper and its two companion papers.

  18. Mars vertical axis wind machines. The design of a Darrieus and a Giromill for use on Mars

    NASA Astrophysics Data System (ADS)

    Brach, David; Dube, John; Kelly, Jon; Peterson, Joanna; Bollig, John; Gohr, Lisa; Mahoney, Kamin; Polidori, Dave

    1992-05-01

    This report contains the design of both a Darrieus and a Giromill for use on Mars. The report has been organized so that the interested reader may read only about one machine without having to read the entire report. Where components for the two machines differ greatly, separate sections have been allotted for each machine. Each section is complete; therefore, no relevant information is missed by reading only the section for the machine of interest. Also, when components for both machines are similar, both machines have been combined into one section. This is done so that the reader interested in both machines need not read the same information twice.

  19. Mars vertical axis wind machines. The design of a Darrieus and a Giromill for use on Mars

    NASA Technical Reports Server (NTRS)

    Brach, David; Dube, John; Kelly, Jon; Peterson, Joanna; Bollig, John; Gohr, Lisa; Mahoney, Kamin; Polidori, Dave

    1992-01-01

    This report contains the design of both a Darrieus and a Giromill for use on Mars. The report has been organized so that the interested reader may read only about one machine without having to read the entire report. Where components for the two machines differ greatly, separate sections have been allotted for each machine. Each section is complete; therefore, no relevant information is missed by reading only the section for the machine of interest. Also, when components for both machines are similar, both machines have been combined into one section. This is done so that the reader interested in both machines need not read the same information twice.

  20. Machine characterization based on an abstract high-level language machine

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.; Smith, Alan Jay; Miya, Eugene

    1989-01-01

    Measurements are presented for a large number of machines ranging from small workstations to supercomputers. The authors combine these measurements into groups of parameters which relate to specific aspects of the machine implementation, and use these groups to provide overall machine characterizations. The authors also define the concept of pershapes, which represent the level of performance of a machine for different types of computation. A metric based on pershapes is introduced that provides a quantitative way of measuring how similar two machines are in terms of their performance distributions. The metric is related to the extent to which pairs of machines have varying relative performance levels depending on which benchmark is used.

  1. Combining Machine Learning and Natural Language Processing to Assess Literary Text Comprehension

    ERIC Educational Resources Information Center

    Balyan, Renu; McCarthy, Kathryn S.; McNamara, Danielle S.

    2017-01-01

    This study examined how machine learning and natural language processing (NLP) techniques can be leveraged to assess the interpretive behavior that is required for successful literary text comprehension. We compared the accuracy of seven different machine learning classification algorithms in predicting human ratings of student essays about…

  2. Design considerations for ultra-precision magnetic bearing supported slides

    NASA Technical Reports Server (NTRS)

    Slocum, Alexander H.; Eisenhaure, David B.

    1993-01-01

    Development plans for a prototype servocontrolled machine with 1 angstrom resolution of linear motion and 50 mm range of travel are described. Two such devices could then be combined to produce a two dimensional machine for probing large planar objects with atomic resolution, the Angstrom Resolution Measuring Machine (ARMM).

  3. The IBM PC as an Online Search Machine--Part 2: Physiology for Searchers.

    ERIC Educational Resources Information Center

    Kolner, Stuart J.

    1985-01-01

    Enumerates "hardware problems" associated with use of the IBM personal computer as an online search machine: purchase of machinery, unpacking of parts, and assembly into a properly functioning computer. Components that allow transformations of computer into a search machine (combination boards, printer, modem) and diagnostics software…

  4. Creation of operation algorithms for combined operation of anti-lock braking system (ABS) and electric machine included in the combined power plant

    NASA Astrophysics Data System (ADS)

    Bakhmutov, S. V.; Ivanov, V. G.; Karpukhin, K. E.; Umnitsyn, A. A.

    2018-02-01

    The paper considers an Anti-lock Braking System (ABS) operation algorithm that enables the implementation of hybrid braking, i.e. a braking process combining friction brake mechanisms and an e-machine (electric machine) operating in the energy recovery mode. The provided materials focus only on the rectilinear motion of the vehicle. The ABS task consists in maintaining the target wheel slip ratio, which depends on the tyre-road adhesion coefficient. The tyre-road adhesion coefficient was estimated from the vehicle deceleration. In the course of the calculated studies, the following operation algorithm for hybrid braking was determined. At an adhesion coefficient ≤0.1, driving axle braking occurs only due to the e-machine operating in the energy recovery mode. In other cases, depending on the adhesion coefficient, the e-machine provides a brake torque that varies from 35 to 100% of the maximum available brake torque. Virtual tests showed that the values of the wheel slip ratio are close to the required ones. Thus, this algorithm makes it possible to implement hybrid braking by means of the two sources creating the brake torque.
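    The torque-allocation rule described above can be sketched directly. The linear scaling of the e-machine share between 35% and 100% with adhesion coefficient is an assumption about the unstated interpolation, and all parameter names and numbers in the usage line are illustrative:

```python
def hybrid_brake_split(mu, demand_nm, regen_max_nm):
    """Torque split following the rule above: at mu <= 0.1 brake with the
    e-machine (energy recovery) only; otherwise the e-machine supplies
    35-100% of its available torque and friction brakes cover the rest.
    The linear scaling with mu is an assumed interpolation."""
    if mu <= 0.1:
        regen = min(demand_nm, regen_max_nm)
        friction = 0.0
    else:
        share = min(1.0, 0.35 + 0.65 * (mu - 0.1) / 0.8)   # 35% -> 100%
        regen = min(demand_nm, share * regen_max_nm)
        friction = demand_nm - regen
    return regen, friction

print(hybrid_brake_split(mu=0.8, demand_nm=900.0, regen_max_nm=400.0))
```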

  5. Using Active Learning for Speeding up Calibration in Simulation Models.

    PubMed

    Cevik, Mucahit; Ergun, Mehmet Ali; Stout, Natasha K; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2016-07-01

    Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We developed an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin breast cancer simulation model (UWBCS). In a recent study, calibration of the UWBCS required the evaluation of 378 000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378 000 combinations. Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. © The Author(s) 2015.
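    The active-learning loop has a simple generic shape: train a classifier on the parameter combinations evaluated so far, then spend the next batch of simulation runs on the candidates it rates most promising. A sketch using a neural network, as the abstract suggests; the run_simulation(x) -> bool interface, batch sizes, and network size are all assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def active_calibration(candidates, run_simulation, n_seed=200,
                       n_rounds=20, batch=50):
    """Active-learning calibration sketch.  `run_simulation(x) -> bool`
    is an assumed interface: True when the parameter combination
    reproduces the calibration targets (e.g., observed incidence)."""
    rng = np.random.default_rng(0)
    idx = list(rng.choice(len(candidates), n_seed, replace=False))
    X = candidates[idx]
    y = np.array([run_simulation(x) for x in X])
    evaluated = set(idx)
    for _ in range(n_rounds):
        # Assumes the evaluated set contains both matches and non-matches.
        clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
        p_match = clf.predict_proba(candidates)[:, list(clf.classes_).index(True)]
        picks = [i for i in np.argsort(-p_match) if i not in evaluated][:batch]
        X = np.vstack([X, candidates[picks]])
        y = np.concatenate([y, [run_simulation(candidates[i]) for i in picks]])
        evaluated.update(picks)
    return X[y]   # the combinations that matched the observed data
```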

  6. Using Active Learning for Speeding up Calibration in Simulation Models

    PubMed Central

    Cevik, Mucahit; Ali Ergun, Mehmet; Stout, Natasha K.; Trentham-Dietz, Amy; Craven, Mark; Alagoz, Oguzhan

    2015-01-01

    Background Most cancer simulation models include unobservable parameters that determine disease onset and tumor growth. These parameters play an important role in matching key outcomes such as cancer incidence and mortality, and their values are typically estimated via a lengthy calibration procedure, which involves evaluating a large number of combinations of parameter values via simulation. The objective of this study is to demonstrate how machine learning approaches can be used to accelerate the calibration process by reducing the number of parameter combinations that are actually evaluated. Methods Active learning is a popular machine learning method that enables a learning algorithm such as artificial neural networks to interactively choose which parameter combinations to evaluate. We develop an active learning algorithm to expedite the calibration process. Our algorithm determines the parameter combinations that are more likely to produce desired outputs and therefore reduces the number of simulation runs performed during calibration. We demonstrate our method using the previously developed University of Wisconsin Breast Cancer Simulation Model (UWBCS). Results In a recent study, calibration of the UWBCS required the evaluation of 378,000 input parameter combinations to build a race-specific model, and only 69 of these combinations produced results that closely matched observed data. By using the active learning algorithm in conjunction with standard calibration methods, we identify all 69 parameter combinations by evaluating only 5620 of the 378,000 combinations. Conclusion Machine learning methods hold potential in guiding model developers in the selection of more promising parameter combinations and hence speeding up the calibration process. Applying our machine learning algorithm to one model shows that evaluating only 1.49% of all parameter combinations would be sufficient for the calibration. PMID:26471190

  7. A Comparison of Coarse-Grained and Continuum Models for Membrane Bending in Lipid Bilayer Fusion Pores

    PubMed Central

    Yoo, Jejoong; Jackson, Meyer B.; Cui, Qiang

    2013-01-01

    To establish the validity of continuum mechanics models quantitatively for the analysis of membrane remodeling processes, we compare the shape and energies of the membrane fusion pore predicted by coarse-grained (MARTINI) and continuum mechanics models. The results at these distinct levels of resolution give surprisingly consistent descriptions for the shape of the fusion pore, and the deviation between the continuum and coarse-grained models becomes notable only when the radius of curvature approaches the thickness of a monolayer. Although slow relaxation beyond microseconds is observed in different perturbative simulations, the key structural features (e.g., dimension and shape of the fusion pore near the pore center) are consistent among independent simulations. These observations provide solid support for the use of coarse-grained and continuum models in the analysis of membrane remodeling. The combined coarse-grained and continuum analysis confirms the recent prediction of continuum models that the fusion pore is a metastable structure and that its optimal shape is neither toroidal nor catenoidal. Moreover, our results help reveal a new, to our knowledge, bowing feature in which the bilayers close to the pore axis separate more from one another than those at greater distances from the pore axis; bowing helps reduce the curvature and therefore stabilizes the fusion pore structure. The spread of the bilayer deformations over distances of hundreds of nanometers and the substantial reduction in energy of fusion pore formation provided by this spread indicate that membrane fusion can be enhanced by allowing a larger area of membrane to participate and be deformed. PMID:23442963

  8. Initial Isotopic Heterogeneities in ZAGAMI: Evidence of a Complex Magmatic History

    NASA Technical Reports Server (NTRS)

    Nyquist, L. E.; Shih, C.-Y.; Reese, Y. D.

    2006-01-01

    Interpretations of Zagami's magmatic history range from complex [1,2] to relatively simple [3]. Discordant radiometric ages led to a suggestion that the ages had been reset [4]. In an attempt to identify the mechanism, Rb-Sr isochrons were individually determined for both fine-grained and coarse-grained Zagami [5]. Ages of approx. 180 Ma were obtained from both lithologies, but the initial Sr-87/Sr-86 (ISr) of the fine-grained lithology was higher by 8.6+/-0.4 e-units. Recently, a much older age of approx. 4 Ga has been advocated [6]. Here, we extend our earlier investigation [5]. Rb-Sr Data: In [5] we applied identical, simplified procedures to both lithologies to test whether a grain-size dependent process such as thermally-driven subsolidus isotopic reequilibration had caused age-resetting. Minerals were separated only by density. In the present experiment, purer mineral separates were analysed with improved techniques. Combined Rb-Sr results give ages (T) = 166+/-12 Ma and 177+/-9 Ma and ISr = 0.72174+/-9 and 0.72227+/-7 for the coarse-grained and fine-grained lithologies, respectively. ISr in the fine-grained sample is thus higher than in the coarse-grained sample by 7.3+/-1.6 e-units. The results for the coarse-grained lithology are in close agreement with T = 166+/-6 Ma, ISr = 0.72157+/-8 for an adjacent sample [7] and T = 178+/-4 Ma, ISr = 0.72151+/-5 [4, adjusted] for a separate sample. Thus, fine-grained Zagami appears on average to be less typical of the bulk than coarse-grained Zagami.

  9. Prosthetics & Orthotics Manufacturing Initiative (POMI)

    DTIC Science & Technology

    2012-12-21

    the two materials. The rod was then put onto a lathe machine, allowing a thin sheet, with stripes of alternating materials, to be cut from the rod...tooling from. Mentis determined a method to use Aquacore, which involved machining blanks via CNC, followed by coating the mold to prevent resin...infusion into the mold. Mentis also attempted to use plaster combined with CNC machining; however, these molds did not survive the machining process

  10. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    PubMed

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
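    Stripped of the QPSO optimizer, the classifier itself is a kernel ELM with a weighted sum of base kernels, whose output weights have a closed ridge-regularized form. A sketch with two of the four base kernels and the weights supplied directly (in QWMK-ELM the weights, kernel parameters, and C would be tuned by QPSO):

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    # Gaussian (RBF) base kernel between row sets X and Z.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def poly(X, Z, degree=2, c0=1.0):
    # Polynomial base kernel.
    return (X @ Z.T + c0) ** degree

def wmk_elm_fit(X, T, w, C=10.0):
    """Kernel ELM with a weighted composite kernel.  T holds one-hot
    class targets; w are the base-kernel combination coefficients."""
    K = w[0] * rbf(X, X) + w[1] * poly(X, X)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)   # output weights beta

def wmk_elm_predict(X_train, X_test, beta, w):
    K_test = w[0] * rbf(X_test, X_train) + w[1] * poly(X_test, X_train)
    return (K_test @ beta).argmax(axis=1)   # predicted class indices
```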

  11. Optimization of computations for adjoint field and Jacobian needed in 3D CSEM inversion

    NASA Astrophysics Data System (ADS)

    Dehiya, Rahul; Singh, Arun; Gupta, Pravin K.; Israil, M.

    2017-01-01

    We present the features and results of a newly developed code, based on the Gauss-Newton optimization technique, for solving the three-dimensional Controlled-Source Electromagnetic inverse problem. In this code a special emphasis has been put on representing the operations by block matrices for conjugate gradient iteration. We show how, in the computation of the Jacobian, the matrix formed by differentiation of the system matrix can be made independent of frequency to optimize the operations at the conjugate gradient step. Coarse-level parallel computing, using the OpenMP framework, is used primarily due to its simplicity in implementation and the accessibility of shared-memory multi-core computing machines to almost anyone. We demonstrate how the coarseness of the modeling grid in comparison to the source (computational receivers) spacing can be exploited for efficient computing, without compromising the quality of the inverted model, by reducing the number of adjoint calls. It is also demonstrated that the adjoint field can even be computed on a grid coarser than the modeling grid without affecting the inversion outcome. These observations were reconfirmed using an experiment design where the deviation of the source from a straight tow line is considered. Finally, a real field data inversion experiment is presented to demonstrate the robustness of the code.

  12. Human facial neural activities and gesture recognition for machine-interfacing applications.

    PubMed

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the existing facial gestures. Finally, all combinations are trained and classified by a Fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group are chosen. An average accuracy above 90% for the chosen combinations demonstrated their suitability as command controllers.
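    The feature pipeline, band-pass filtering followed by windowed root-mean-square extraction, is compact enough to sketch. The cut-offs, sampling rate, and window length below are illustrative defaults, not the authors' settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_rms_features(emg, fs=1000.0, band=(20.0, 450.0), win_s=0.2):
    """Band-pass filter one raw EMG channel and extract windowed
    root-mean-square features (one feature value per window)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, emg)          # zero-phase band-pass
    n = int(win_s * fs)                     # samples per window
    windows = filtered[: len(filtered) // n * n].reshape(-1, n)
    return np.sqrt((windows ** 2).mean(axis=1))
```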

  13. Implementation and Characterization of Three-Dimensional Particle-in-Cell Codes on Multiple-Instruction-Multiple-Data Massively Parallel Supercomputers

    NASA Technical Reports Server (NTRS)

    Lyster, P. M.; Liewer, P. C.; Decyk, V. K.; Ferraro, R. D.

    1995-01-01

    A three-dimensional electrostatic particle-in-cell (PIC) plasma simulation code has been developed on coarse-grain distributed-memory massively parallel computers with message passing communications. Our implementation is the generalization to three dimensions of the general concurrent particle-in-cell (GCPIC) algorithm. In the GCPIC algorithm, the particle computation is divided among the processors using a domain decomposition of the simulation domain. In a three-dimensional simulation, the domain can be partitioned into one-, two-, or three-dimensional subdomains ("slabs," "rods," or "cubes") and we investigate the efficiency of the parallel implementation of the push for all three choices. The present implementation runs on the Intel Touchstone Delta machine at Caltech, a multiple-instruction-multiple-data (MIMD) parallel computer with 512 nodes. We find that the parallel efficiency of the push is very high, with the ratio of communication to computation time in the range 0.3%-10.0%. The highest efficiency (> 99%) occurs for a large, scaled problem with 64(sup 3) particles per processing node (approximately 134 million particles on 512 nodes), which has a push time of about 250 ns per particle per time step. We have also developed expressions for the timing of the code which are a function of both code parameters (number of grid points, particles, etc.) and machine-dependent parameters (effective FLOP rate, and the effective interprocessor bandwidths for the communication of particles and grid points). These expressions can be used to estimate the performance of scaled problems--including those with inhomogeneous plasmas--on other parallel machines once the machine-dependent parameters are known.
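
    The flavor of such a timing expression can be captured in a few lines (the constants below are invented placeholders, not the paper's fitted values):

        def pic_step_time(n_particles, n_grid, flop_rate, bw_particles,
                          bw_grid, flops_per_particle=400.0,
                          bytes_per_particle=48.0, bytes_per_grid_point=8.0):
            """Per-time-step cost: particle push plus particle and grid
            (guard-cell) communication, each scaled by a machine parameter."""
            t_push = n_particles * flops_per_particle / flop_rate
            t_part_comm = n_particles * bytes_per_particle / bw_particles
            t_grid_comm = n_grid * bytes_per_grid_point / bw_grid
            return t_push + t_part_comm + t_grid_comm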

  14. Machine Shorthand Dictation, Business Education: 7706.33.

    ERIC Educational Resources Information Center

    Cropper, Clara Joy F.

    The course is designed to reinforce machine shorthand theory with emphasis on taking dictation with speed and accuracy. In this course, students are expected to complete the basic theory of techniques for writing sounds, in combinations of letters of the alphabet, on the keyboard of a touch shorthand machine; to increase their recording speeds;…

  15. Prediction and Validation of Disease Genes Using HeteSim Scores.

    PubMed

    Zeng, Xiangxiang; Liao, Yuanlu; Liu, Yuansheng; Zou, Quan

    2017-01-01

    Deciphering gene-disease associations is an important goal in biomedical research. In this paper, we use a novel relevance measure, called HeteSim, to prioritize candidate disease genes. Two methods based on heterogeneous networks, constructed using protein-protein interactions, gene-phenotype associations, and phenotype-phenotype similarity, are presented. In HeteSim_MultiPath (HSMP), HeteSim scores of different paths are combined with a constant that dampens the contributions of longer paths. In HeteSim_SVM (HSSVM), HeteSim scores are combined with a machine learning method. Three-fold cross-validation experiments show that our non-machine learning method HSMP performs better than the existing non-machine learning methods, while our machine learning method HSSVM obtains accuracy similar to the best existing machine learning method, CATAPULT. From the analysis of the top 10 predicted genes for different diseases, we found that HSSVM avoids a disadvantage of the existing machine learning based methods, which tend to predict similar genes for different diseases. The data sets and Matlab code for the two methods are freely available for download at http://lab.malab.cn/data/HeteSim/index.jsp.
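
    The HSMP-style combination is simple enough to state in code (a sketch; the damping constant is illustrative and the per-path HeteSim scores are assumed precomputed):

        def hsmp_score(path_scores, damping=0.5):
            """Combine HeteSim scores from paths of different lengths,
            damping longer paths: score = sum_p damping**len(p) * HeteSim(p).

            path_scores: iterable of (path_length, hetesim_score) pairs
            for one gene-disease candidate."""
            return sum(damping ** length * s for length, s in path_scores)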

  16. Comparison of machinability of manganese alloyed austempered ductile iron produced using conventional and two step austempering processes

    NASA Astrophysics Data System (ADS)

    Hegde, Ananda; Sharma, Sathyashankara

    2018-05-01

    Austempered Ductile Iron (ADI) is a revolutionary material with high strength and hardness combined with optimum ductility and toughness. The two-step austempering process has led to a superior combination of these mechanical properties. However, because of the high strength and hardness of ADI, there is concern regarding its machinability. In the present study, the machinability of ADI produced using conventional and two-step heat treatment processes is assessed using tool life and surface roughness. Speed, feed, and depth of cut are considered as the machining parameters in a dry turning operation. The machinability results, along with the mechanical properties, are compared for ADI produced using both conventional and two-step austempering processes. The results show that the two-step austempering process produced better toughness with good hardness and strength, without sacrificing ductility. The addition of 0.64 wt% manganese did not cause any detrimental effect on the machinability of ADI in either the conventional or the two-step process. Marginal improvements in tool life and surface roughness were observed in the two-step process compared with the conventional process.

  17. 12 CFR 205.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... device means a card, code, or other means of access to a consumer's account, or any combination thereof..., automated teller machines, and cash dispensing machines. (i) Financial institution means a bank, savings...

  18. 12 CFR 205.2 - Definitions.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... device means a card, code, or other means of access to a consumer's account, or any combination thereof..., automated teller machines, and cash dispensing machines. (i) Financial institution means a bank, savings...

  19. 12 CFR 205.2 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... device means a card, code, or other means of access to a consumer's account, or any combination thereof..., automated teller machines, and cash dispensing machines. (i) Financial institution means a bank, savings...

  20. 12 CFR 205.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... device means a card, code, or other means of access to a consumer's account, or any combination thereof..., automated teller machines, and cash dispensing machines. (i) Financial institution means a bank, savings...

  1. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model.

    PubMed

    Plotnikov, Nikolay V

    2014-08-12

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed, but at a lower level of accuracy, from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and free-energy perturbation methods, which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of the high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with the multistep linear response approximation method. This method is analytically shown to reproduce the results of the thermodynamic integration and free-energy interpolation methods, while being extremely simple to implement. Incorporating metadynamics sampling into the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing the full potential of mean force.
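
    The positioning step lends itself to a compact illustration (a sketch of the linear response approximation average, assuming the fine-minus-coarse energy gaps have already been sampled on both potentials):

        import numpy as np

        def lra_free_energy_shift(dU_on_coarse, dU_on_fine):
            """LRA estimate of the coarse -> fine free-energy shift:
            0.5 * (<U_fine - U_coarse>_coarse + <U_fine - U_coarse>_fine).
            Inputs are arrays of energy gaps sampled on each potential."""
            return 0.5 * (np.mean(dU_on_coarse) + np.mean(dU_on_fine))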

  2. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model

    PubMed Central

    2015-01-01

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed, but at a lower level of accuracy, from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and free-energy perturbation methods, which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of the high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with the multistep linear response approximation method. This method is analytically shown to reproduce the results of the thermodynamic integration and free-energy interpolation methods, while being extremely simple to implement. Incorporating metadynamics sampling into the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing the full potential of mean force. PMID:25136268

  3. Young’s modulus and Poisson’s ratio changes due to machining in porous microcracked cordierite

    DOE PAGES

    Cooper, R. C.; Bruno, Giovanni; Onel, Yener; ...

    2016-07-25

    Microstructural changes in porous cordierite caused by machining were characterized using microtensile testing, X-ray computed tomography and scanning electron microscopy. Young's moduli and Poisson's ratios were determined on ~215-380 µm thick machined samples by combining digital image correlation and microtensile loading. The results provide evidence for an increase in microcrack density due to machining of the thin samples extracted from diesel particulate filter honeycombs.

  4. Analysis of motion during the breast clamping phase of mammography

    PubMed Central

    McEntee, Mark F; Mercer, Claire; Kelly, Judith; Millington, Sara; Hogg, Peter

    2016-01-01

    Objective: To measure paddle motion during the clamping phase of a breast phantom for a range of machine/paddle combinations. Methods: A deformable breast phantom was used to simulate a female breast. 12 mammography machines from three manufacturers, with 22 flexible and 20 fixed paddles, were evaluated. Vertical motion at the paddle was measured using two calibrated linear potentiometers. For each paddle, the motion in millimetres was recorded every 0.5 s for 40 s, while the phantom was compressed with 80 N. Independent t-tests were used to determine differences in paddle motion between flexible and fixed, small and large, GE Senographe Essential (General Electric Medical Systems, Milwaukee, WI) and Hologic Selenia Dimensions (Hologic, Bedford, MA) paddles. Paddle tilt in the medial–lateral plane for each machine/paddle combination was calculated. Results: All machine/paddle combinations demonstrate the highest levels of motion during the first 10 s of the clamping phase. The least motion is 0.17 ± 0.05 mm/10 s (n = 20) and the most motion is 0.51 ± 0.15 mm/10 s (n = 80). There are statistically significant differences in paddle motion between fixed and flexible paddles (p < 0.001) and between GE Senographe Essential and Hologic Selenia Dimensions paddles (p < 0.001). Paddle tilt in the medial–lateral plane is independent of time and varied from 0.04° to 0.69°. Conclusion: All machine/paddle combinations exhibited motion and tilting, and the extent varied with machine and paddle sizes and types. Advances in knowledge: This research suggests that image blurring will likely be clinically insignificant 4 s or more after the clamping phase commences. PMID:26739577

  5. Highly Productive Tools For Turning And Milling

    NASA Astrophysics Data System (ADS)

    Vasilko, Karol

    2015-12-01

    Besides cutting speed, feed is another important machining parameter. Its considerable influence shows up mainly in the microgeometry of the machined surface. In practice, it is mainly the combination of feed with the radius of the cutting tool tip rounding that is exploited; this combination also holds further potential for increasing machining productivity and machined-surface quality. The paper presents design variants of productive cutting tools for turning and milling, based on the relationship between the maximum unevenness of the machined surface, the tool tip radius, and the feed.
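
    That relationship is the classical geometric one between feed, tool tip radius, and theoretical peak-to-valley roughness; a small illustration (valid when the feed is much smaller than the tip radius):

        def theoretical_peak_roughness_um(feed_mm_per_rev, nose_radius_mm):
            """Geometric peak-to-valley roughness in turning:
            Rt = f^2 / (8 * r), converted from mm to micrometres."""
            return feed_mm_per_rev**2 / (8.0 * nose_radius_mm) * 1000.0

        # e.g. f = 0.2 mm/rev with r = 0.8 mm gives Rt of about 6.3 um;
        # halving the feed cuts the theoretical roughness by a factor of 4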

  6. Machine Learning Applications to Resting-State Functional MR Imaging Analysis.

    PubMed

    Billings, John M; Eder, Maxwell; Flood, William C; Dhami, Devendra Singh; Natarajan, Sriraam; Whitlow, Christopher T

    2017-11-01

    Machine learning is one of the most exciting and rapidly expanding fields within computer science. Academic and commercial research entities are investing in machine learning methods, especially in personalized medicine via patient-level classification. There is great promise that machine learning methods combined with resting state functional MR imaging will aid in diagnosis of disease and guide potential treatment for conditions thought to be impossible to identify based on imaging alone, such as psychiatric disorders. We discuss machine learning methods and explore recent advances.

  7. The Relationship Between Fusion, Suppression, and Diplopia in Normal and Amblyopic Vision.

    PubMed

    Spiegel, Daniel P; Baldwin, Alex S; Hess, Robert F

    2016-10-01

    Single vision occurs through a combination of fusion and suppression. When neither mechanism takes place, we experience diplopia. Under normal viewing conditions, the perceptual state depends on the spatial scale and interocular disparity. The purpose of this study was to examine the three perceptual states in human participants with normal and amblyopic vision. Participants viewed two dichoptically separated horizontal blurred edges with an opposite tilt (2.35°) and indicated their binocular percept: "one flat edge," "one tilted edge," or "two edges." The edges varied with scale (fine 4 min arc and coarse 32 min arc), disparity, and interocular contrast. We investigated how the binocular interactions vary in amblyopic (visual acuity [VA] > 0.2 logMAR, n = 4) and normal vision (VA ≤ 0 logMAR, n = 4) under interocular variations in stimulus contrast and luminance. In amblyopia, despite the established sensory dominance of the fellow eye, fusion prevails at the coarse scale and small disparities (75%). We also show that increasing the relative contrast to the amblyopic eye enhances the probability of fusion at the fine scale (from 18% to 38%), and leads to a reversal of the sensory dominance at coarse scale. In normal vision we found that interocular luminance imbalances disturbed binocular combination only at the fine scale in a way similar to that seen in amblyopia. Our results build upon the growing evidence that the amblyopic visual system is binocular and further show that the suppressive mechanisms rendering the amblyopic system functionally monocular are scale dependent.

  8. 2007 SB14 Source Reduction Plan/Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, L

    2007-07-24

    Aqueous solutions (mixed waste) generated from various LLNL operations, such as debris washing, sample preparation and analysis, and equipment maintenance and cleanout, were combined for storage in the B695 tank farm. Prior to combination the individual waste streams had different codes depending on the particular generating process and waste characteristics. The largest streams were CWC 132, 791, 134, and 792; several smaller waste streams were also included. This combined waste stream was treated at LLNL's waste treatment facility using a vacuum filtration and cool vapor evaporation process in preparation for discharge to the sanitary sewer. Prior to discharge, the treated waste stream was sampled and the results were reviewed by LLNL's water monitoring specialists. The treated solution was discharged following confirmation that it met the discharge criteria. A major source, accounting for 50% of this waste stream, is metal machining, cutting and grinding operations in the engineering machine shops in B321/B131; an additional 7% was from similar operations in B131 and B132S. This waste stream primarily contains metal cuttings from machined parts, machining coolant and water, with small amounts of tramp oil from the machining and grinding equipment. Several waste reduction measures for the B321 machine shop have been taken, including the use of a small point-of-use filtering/tramp-oil coalescing/UV-sterilization coolant recycling unit and improved management techniques (testing and replenishing) for coolants. The recycling unit had some operational problems during 2006; the machine shop plans to have it repaired in the near future. Quarterly waste generation data prepared by the Environmental Protection Department's P2 Team are regularly provided to engineering shops as well as other facilities so that generators can track the effectiveness of their waste minimization efforts.

  9. Preconditioned implicit solvers for the Navier-Stokes equations on distributed-memory machines

    NASA Technical Reports Server (NTRS)

    Ajmani, Kumud; Liou, Meng-Sing; Dyson, Rodger W.

    1994-01-01

    The GMRES method is parallelized, and combined with local preconditioning to construct an implicit parallel solver to obtain steady-state solutions for the Navier-Stokes equations of fluid flow on distributed-memory machines. The new implicit parallel solver is designed to preserve the convergence rate of the equivalent 'serial' solver. A static domain-decomposition is used to partition the computational domain amongst the available processing nodes of the parallel machine. The SPMD (Single-Program Multiple-Data) programming model is combined with message-passing tools to develop the parallel code on a 32-node Intel Hypercube and a 512-node Intel Delta machine. The implicit parallel solver is validated for internal and external flow problems, and is found to compare identically with flow solutions obtained on a Cray Y-MP/8. A peak computational speed of 2300 MFlops/sec has been achieved on 512 nodes of the Intel Delta machine, for a problem size of 1024 K equations (256 K grid points).
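
    A serial analogue of the iteration described above, using SciPy's GMRES with a simple Jacobi preconditioner standing in for the local preconditioning (the matrix and sizes are illustrative):

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import LinearOperator, gmres

        n = 1000
        A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # Jacobi (diagonal) preconditioner: divide by the diagonal of A
        M = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())

        x, info = gmres(A, b, M=M)
        assert info == 0  # 0 means the iteration converged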

  10. 12 CFR 1005.2 - Definitions.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., code, or other means of access to a consumer's account, or any combination thereof, that may be used by..., automated teller machines (ATMs), and cash dispensing machines. (i) “Financial institution” means a bank...

  11. 12 CFR 1005.2 - Definitions.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., code, or other means of access to a consumer's account, or any combination thereof, that may be used by..., automated teller machines (ATMs), and cash dispensing machines. (i) “Financial institution” means a bank...

  12. Combining information from 3 anatomic regions in the diagnosis of glaucoma with time-domain optical coherence tomography.

    PubMed

    Wang, Mingwu; Lu, Ake Tzu-Hui; Varma, Rohit; Schuman, Joel S; Greenfield, David S; Huang, David

    2014-03-01

    To improve the diagnosis of glaucoma by combining time-domain optical coherence tomography (TD-OCT) measurements of the optic disc, circumpapillary retinal nerve fiber layer (RNFL), and macular retinal thickness. Ninety-six age-matched normal and 96 perimetric glaucoma participants were included in this observational, cross-sectional study. Or-logic, support vector machine, relevance vector machine, and linear discrimination function were used to analyze the performances of combined TD-OCT diagnostic variables. The area under the receiver-operating curve (AROC) was used to evaluate the diagnostic accuracy and to compare the diagnostic performance of single and combined anatomic variables. The best RNFL thickness variables were the inferior (AROC=0.900), overall (AROC=0.892), and superior quadrants (AROC=0.850). The best optic disc variables were horizontal integrated rim width (AROC=0.909), vertical integrated rim area (AROC=0.908), and cup/disc vertical ratio (AROC=0.890). All macular retinal thickness variables had AROCs of 0.829 or less. Combining the top 3 RNFL and optic disc variables in optimizing glaucoma diagnosis, support vector machine had the highest AROC, 0.954, followed by or-logic (AROC=0.946), linear discrimination function (AROC=0.946), and relevance vector machine (AROC=0.943). All combination diagnostic variables had significantly larger AROCs than any single diagnostic variable. There are no significant differences among the combination diagnostic indices. With TD-OCT, RNFL and optic disc variables had better diagnostic accuracy than macular retinal variables. Combining top RNFL and optic disc variables significantly improved diagnostic performance. Clinically, or-logic classification was the most practical analytical tool with sufficient accuracy to diagnose early glaucoma.
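
    Of the four combiners, the or-logic rule is the simplest to reproduce; a toy version with synthetic data (the variables, distributions, and thresholds are invented for illustration, not the study's):

        import numpy as np
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        labels = np.r_[np.zeros(96), np.ones(96)]   # normal vs glaucoma
        # Two synthetic variables, both lower in glaucoma
        rnfl = np.r_[rng.normal(100, 10, 96), rng.normal(75, 12, 96)]
        rim = np.r_[rng.normal(1.5, 0.3, 96), rng.normal(1.0, 0.3, 96)]

        def or_logic(variables, thresholds):
            # Flag as glaucoma if ANY variable falls below its threshold
            return np.any([v < t for v, t in zip(variables, thresholds)],
                          axis=0).astype(float)

        score = or_logic([rnfl, rim], [85.0, 1.2])
        print("AROC:", roc_auc_score(labels, score))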

  13. Design features and results from fatigue reliability research machines.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Kececioglu, D.; Mcconnell, J. B.

    1971-01-01

    The design, fabrication, development, operation, calibration and results from reversed-bending combined with steady-torque fatigue research machines are presented. Fifteen-centimeter long, notched, SAE 4340 steel specimens are subjected to various combinations of these stresses and cycled to failure. Failure occurs when the crack in the notch passes through the specimen, automatically shutting down the test machine. These cycles-to-failure data are statistically analyzed to develop a probabilistic S-N diagram. Such diagrams have many uses; a rotating-component design example given in the literature shows that minimum size and weight for a specified number of cycles and reliability can be calculated using them.
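
    The statistical step can be illustrated in a few lines (a sketch: the distribution family and all numbers are assumptions, not the paper's data):

        import numpy as np
        from scipy import stats

        # At one stress level, fit a lognormal to cycles-to-failure; its
        # percentiles give the constant-reliability curves of a
        # probabilistic S-N diagram. Data here are synthetic.
        rng = np.random.default_rng(4)
        cycles = rng.lognormal(mean=12.0, sigma=0.4, size=30)
        shape, loc, scale = stats.lognorm.fit(cycles, floc=0)
        life_r90 = stats.lognorm.ppf(0.10, shape, loc=loc, scale=scale)
        # life_r90: the life that 90% of specimens are expected to exceed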

  14. Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds

    NASA Astrophysics Data System (ADS)

    Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea

    2013-04-01

    Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in delivering flood warnings. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in Singapore's Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour), so observational data alone are of little use for runoff prediction and weather predictions are required. Unfortunately, radar nowcasting methods do not allow long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are often unreliable because of the fast motion and limited spatial extent of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all possible weather input variables provided by the WRF model. We explore different lead times to evaluate the model's reliability for longer-term predictions, as well as different time lags to see how past information could improve results. Results show that the proposed approach allows a significant improvement in prediction accuracy over the WRF model for the Singapore urban area.
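
    A compact stand-in for the two data-driven ingredients, input variable selection plus a non-parametric tree-based model, with synthetic data in place of the WRF states (the specific estimator is an assumption, not the paper's):

        import numpy as np
        from sklearn.ensemble import ExtraTreesRegressor
        from sklearn.feature_selection import SelectFromModel

        # WRF state variables at several lags act as "virtual sensors"
        # (columns of X); y is catchment rainfall at the chosen lead time.
        rng = np.random.default_rng(1)
        X = rng.normal(size=(5000, 40))
        y = 2.0 * X[:, 3] + X[:, 17] + rng.normal(scale=0.5, size=5000)

        selector = SelectFromModel(
            ExtraTreesRegressor(n_estimators=200, random_state=0))
        X_sel = selector.fit_transform(X, y)    # input variable selection
        model = ExtraTreesRegressor(n_estimators=200,
                                    random_state=0).fit(X_sel, y)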

  15. Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems.

    PubMed

    Andrade, G; Ferreira, R; Teodoro, George; Rocha, Leonardo; Saltz, Joel H; Kurc, Tahsin

    2014-10-01

    High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and the Intel Xeon Phi (MIC). These processors have made a tremendous amount of computing power available at low cost, and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed to these machines are still executed on a single processor, leaving the other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical data flow tasks which are allocated to nodes of a distributed memory machine in coarse grain, but each of them may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance-aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also show experimentally that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales.
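
    The core idea of performance-aware scheduling can be sketched as a greedy earliest-finish-time assignment (illustrative only; the paper's schedulers also account for data movement and task dependencies):

        def schedule(tasks, devices, speed):
            """Assign each fine-grain task to the device that finishes it
            earliest. speed[(op, dev)] is the relative throughput of device
            dev on operation type op (the 'performance aware' part).

            tasks: list of (op_type, work_units) tuples."""
            finish = {d: 0.0 for d in devices}
            placement = []
            for op, work in tasks:
                best = min(devices,
                           key=lambda d: finish[d] + work / speed[(op, d)])
                finish[best] += work / speed[(op, best)]
                placement.append(best)
            return placement, finish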

  16. Efficient Execution of Microscopy Image Analysis on CPU, GPU, and MIC Equipped Cluster Systems

    PubMed Central

    Andrade, G.; Ferreira, R.; Teodoro, George; Rocha, Leonardo; Saltz, Joel H.; Kurc, Tahsin

    2015-01-01

    High performance computing is experiencing a major paradigm shift with the introduction of accelerators, such as graphics processing units (GPUs) and the Intel Xeon Phi (MIC). These processors have made a tremendous amount of computing power available at low cost, and are transforming machines into hybrid systems equipped with CPUs and accelerators. Although these systems can deliver very high peak performance, making full use of their resources in real-world applications is a complex problem. Most current applications deployed to these machines are still executed on a single processor, leaving the other devices underutilized. In this paper we explore a scenario in which applications are composed of hierarchical data flow tasks which are allocated to nodes of a distributed memory machine in coarse grain, but each of them may be composed of several finer-grain tasks which can be allocated to different devices within the node. We propose and implement novel performance-aware scheduling techniques that can be used to allocate tasks to devices. We evaluate our techniques using a pathology image analysis application used to investigate brain cancer morphology, and our experimental evaluation shows that the proposed scheduling strategies significantly outperform other efficient scheduling techniques, such as Heterogeneous Earliest Finish Time (HEFT), in cooperative executions using CPUs, GPUs, and MICs. We also show experimentally that our strategies are less sensitive to inaccuracy in the scheduling input data and that the performance gains are maintained as the application scales. PMID:26640423

  17. An AFM-SIMS Nano Tomography Acquisition System

    NASA Astrophysics Data System (ADS)

    Swinford, Richard William

    An instrument adding the capability to measure 3D volumetric chemical composition has been constructed by me as a member of the Sanchez Nano Laboratory. The laboratory's in situ atomic force microscope (AFM) and secondary ion mass spectrometry (SIMS) systems are functional and integrated as one instrument. The SIMS utilizes a Ga focused ion beam (FIB) combined with a quadrupole mass analyzer. The AFM comprises a 6-axis stage with three coarse axes and three fine. The coarse stage is used for placing the AFM tip anywhere inside a 13 x 13 x 5 mm3 (xyz) volume; thus the tip can be moved in and out of the FIB processing region with ease. The planned range for the Z-axis piezo was 60 µm, but this was reduced after the piezo was damaged by arc events. The repaired Z-axis piezo is now operated at a smaller nominal range of 18 µm (16.7 µm after pre-loading), still quite respectable for an AFM. The noise floor of the AFM is approximately 0.4 nm Rq. The voxel size for the combined instrument is targeted at 50 nm or larger, so 0.4 nm of xyz uncertainty is acceptable. The instrument has been used for analyzing samples using FIB beam currents of 250 pA and 5.75 nA. Coarse tip approaches can take a long time, so an abbreviated technique is employed: because of the relatively long throw of the Z piezo, the tip can be disengaged by deactivating the servo PID and, once disengaged, moved laterally out of the way of the FIB-SIMS using the coarse stage. This instrument has been used to acquire volumetric data on AlTiC using AFM tip diameters of 18.9 nm and 30.6 nm. Acquisition times are very long, requiring multiple days to acquire a 50-image stack. New features to be added include auto stigmation, auto beam shift, and more software automation. Longer-term upgrades include a new lower-voltage Z piezo with strain-gauge feedback and a new design to extend the life of the coarse XY nano-positioners. This AFM-SIMS instrument, as constructed, has proven to be a great proof-of-concept vehicle. In the future it will be used to analyze microfossils and as part of an intensive teaching curriculum.

  18. Construction of continuous cooling transformation (CCT) diagram using Gleeble for coarse grained heat affected zone of SA106 grade B steel

    NASA Astrophysics Data System (ADS)

    Vimalan, G.; Muthupandi, V.; Ravichandran, G.

    2018-05-01

    A continuous cooling transformation diagram is constructed for the simulated coarse grain heat affected zone (CGHAZ) of SA106 grade B carbon steel. Samples are heated to a peak temperature of 1200°C in a Gleeble thermo-mechanical simulator and then cooled at rates varying from 0.1°C/s to 100°C/s. The microstructure of the specimens simulated at different cooling rates was characterised by optical microscopy, and hardness was assessed by Vickers hardness and micro-hardness tests. Transformation temperatures and the corresponding phase fields were identified from dilatometric curves and confirmed by correlating with the room-temperature microstructures. These data were used to construct the CCT diagram. The phase fields were found to contain ferrite, pearlite, bainite and martensite, or their combinations. With the help of this CCT diagram it is possible to predict the microstructure and hardness of a coarse grain HAZ experiencing different cooling rates. The constructed CCT diagram thus becomes an important tool in evaluating the weldability of SA106 grade B carbon steel.

  19. Coarse-grained simulations of protein-protein association: an energy landscape perspective.

    PubMed

    Ravikumar, Krishnakumar M; Huang, Wei; Yang, Sichun

    2012-08-22

    Understanding protein-protein association is crucial in revealing the molecular basis of many biological processes. Here, we describe a theoretical simulation pipeline to study protein-protein association from an energy landscape perspective. First, a coarse-grained model is implemented and its applications are demonstrated via molecular dynamics simulations for several protein complexes. Second, an enhanced search method is used to efficiently sample a broad range of protein conformations. Third, multiple conformations are identified and clustered from simulation data and further projected on a three-dimensional globe specifying protein orientations and interacting energies. Results from several complexes indicate that the crystal-like conformation is favorable on the energy landscape even if the landscape is relatively rugged with metastable conformations. A closer examination on molecular forces shows that the formation of associated protein complexes can be primarily electrostatics-driven, hydrophobics-driven, or a combination of both in stabilizing specific binding interfaces. Taken together, these results suggest that the coarse-grained simulations and analyses provide an alternative toolset to study protein-protein association occurring in functional biomolecular complexes.

  20. Coarse-Grained Simulations of Protein-Protein Association: An Energy Landscape Perspective

    PubMed Central

    Ravikumar, Krishnakumar M.; Huang, Wei; Yang, Sichun

    2012-01-01

    Understanding protein-protein association is crucial in revealing the molecular basis of many biological processes. Here, we describe a theoretical simulation pipeline to study protein-protein association from an energy landscape perspective. First, a coarse-grained model is implemented and its applications are demonstrated via molecular dynamics simulations for several protein complexes. Second, an enhanced search method is used to efficiently sample a broad range of protein conformations. Third, multiple conformations are identified and clustered from simulation data and further projected on a three-dimensional globe specifying protein orientations and interacting energies. Results from several complexes indicate that the crystal-like conformation is favorable on the energy landscape even if the landscape is relatively rugged with metastable conformations. A closer examination on molecular forces shows that the formation of associated protein complexes can be primarily electrostatics-driven, hydrophobics-driven, or a combination of both in stabilizing specific binding interfaces. Taken together, these results suggest that the coarse-grained simulations and analyses provide an alternative toolset to study protein-protein association occurring in functional biomolecular complexes. PMID:22947945

  1. A coarse-grained model of microtubule self-assembly

    NASA Astrophysics Data System (ADS)

    Regmi, Chola; Cheng, Shengfeng

    Microtubules play critical roles in cell structure and function. They also serve as a model system for inspiring next-generation smart, dynamic materials. A deep understanding of their self-assembly process and biomechanical properties will not only help elucidate how microtubules perform biological functions, but also yield insight into how microtubule dynamics can be altered or even controlled for specific purposes, such as suppressing the division of cancer cells. Combining all-atom molecular dynamics (MD) simulations and the essential dynamics coarse-graining method, we construct a coarse-grained (CG) model of the tubulin protein, the building block of microtubules. In the CG model a tubulin dimer is represented as an elastic network of CG sites, the locations of which are determined by examining the protein dynamics of tubulin and identifying the essential dynamic domains. Atomistic MD modeling is employed to directly compute the tubulin bond energies in the surface lattice of a microtubule, which are used to parameterize the interactions between CG building blocks. The CG model is then used to study the self-assembly pathways, kinetics, dynamics, and nanomechanics of microtubules.

  2. Investigating Source Contributions of Size-Aggregated Aerosols Collected in Southern Ocean and Baring Head, New Zealand Using Sulfur Isotopes

    NASA Astrophysics Data System (ADS)

    Li, Jianghanyang; Michalski, Greg; Davy, Perry; Harvey, Mike; Katzman, Tanya; Wilkins, Benjamin

    2018-04-01

    Marine sulfate aerosols in the Southern Ocean are critical to the global radiation balance, yet the sources of sulfate and their seasonal variations are unclear. We separately sampled marine and ambient aerosols at Baring Head, New Zealand for 1 year using two collectors and evaluated the sources of sulfate in coarse (1-10 μm) and fine (0.05-1 μm) aerosols using sulfur isotopes (δ34S). In both collectors, sea-salt sulfate (SO42-SS) mainly existed in coarse aerosols and nonsea-salt sulfate (SO42-NSS) dominated the sulfate in fine aerosols, although some summer SO42-NSS appeared in coarse particles due to aerosol coagulation. SO42-NSS in the marine aerosols was mainly (88-100%) from marine biogenic dimethylsulfide (DMS) emission, while the SO42-NSS in the ambient aerosols was a combination of DMS (73-79%) and SO2 emissions from shipping activities (~21-27%). The seasonal variations of SO42-NSS concentrations inferred from the δ34S values in both collectors were mainly controlled by the DMS flux.
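
    The apportionment behind such percentages is a two-end-member isotope mass balance; a sketch (the end-member δ34S values below are placeholders, not the study's):

        def dms_fraction(d34s_sample, d34s_dms=18.0, d34s_ship=3.0):
            """Two-end-member mixing: fraction of non-sea-salt sulfate from
            DMS, given the sample's d34S and the two end-member values."""
            return (d34s_sample - d34s_ship) / (d34s_dms - d34s_ship)

        # e.g. a sample at d34S = +14 permil would be ~73% DMS-derived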

  3. A rapid solvent accessible surface area estimator for coarse grained molecular simulations.

    PubMed

    Wei, Shuai; Brooks, Charles L; Frank, Aaron T

    2017-06-05

    The rapid and accurate calculation of solvent accessible surface area (SASA) is extremely useful in the energetic analysis of biomolecules. For example, SASA models can be used to estimate the transfer free energy associated with biophysical processes and, when combined with coarse-grained simulations, can be particularly useful for accounting for solvation effects within the framework of implicit solvent models. In such cases, a fast and accurate, residue-wise SASA predictor is highly desirable. Here, we develop a predictive model that estimates SASAs based on Cα-only protein structures. Through an extensive comparison between this method and a comparable method, POPS-R, we demonstrate that our new method, Protein-Cα Solvent Accessibilities (PCASA), shows better performance, especially for unfolded conformations of proteins. We anticipate that this model will be quite useful for the efficient inclusion of SASA-based solvent free energy estimations in coarse-grained protein folding simulations. PCASA is made freely available to the academic community at https://github.com/atfrank/PCASA.
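
    The intuition that a residue's accessibility falls as its Cα neighborhood gets more crowded can be captured crudely (this is only the underlying idea, not the PCASA model, which fits a proper predictor):

        import numpy as np

        def ca_neighbor_counts(coords, cutoff=10.0):
            """Count, for each Ca atom, the other Ca atoms within a cutoff;
            residue SASA is roughly a decreasing function of this burial
            measure. coords: (n, 3) array of Ca positions in Angstroms."""
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :],
                               axis=-1)
            return (d < cutoff).sum(axis=1) - 1  # exclude self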

  4. Sediment transport processes in the Pearl River Estuary as revealed by grain-size end-member modeling and sediment trend analysis

    NASA Astrophysics Data System (ADS)

    Li, Tao; Li, Tuan-Jie

    2018-04-01

    The analysis of grain-size distribution enables us to decipher sediment transport processes and understand the causal relations between dynamic processes and grain-size distributions. In the present study, grain sizes were measured from surface sediments collected in the Pearl River Estuary and its adjacent coastal areas. End-member modeling analysis attempts to unmix the grain sizes into geologically meaningful populations. Six grain-size end-members were identified. Their dominant modes are 0 Φ, 1.5 Φ, 2.75 Φ, 4.5 Φ, 7 Φ, and 8 Φ, corresponding to coarse sand, medium sand, fine sand, very coarse silt, silt, and clay, respectively. The spatial distributions of the six end-members are influenced by sediment transport and depositional processes. The two coarsest end-members (coarse sand and medium sand) may reflect relict sediments deposited during the last glacial period. The fine sand end-member would be difficult to transport under fair weather conditions, and likely indicates storm deposits. The three remaining fine-grained end-members (very coarse silt, silt, and clay) are recognized as suspended particles transported by saltwater intrusion via the flood tidal current, the Guangdong Coastal Current, and riverine outflow. The grain-size trend analysis shows distinct transport patterns for the three fine-grained end-members. The landward transport of the very coarse silt end-member occurs in the eastern part of the estuary, the seaward transport of the silt end-member occurs in the western part, and the east-west transport of the clay end-member occurs in the coastal areas. The results show that grain-size end-member modeling analysis in combination with sediment trend analysis help to better understand sediment transport patterns and the associated transport mechanisms.
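
    End-member unmixing of grain-size data can be imitated with a generic non-negative factorization (an analogy only; dedicated end-member modeling algorithms add constraints this sketch lacks):

        import numpy as np
        from sklearn.decomposition import NMF

        # Each row of G is a sample's grain-size distribution over bins;
        # a non-negative factorization splits it into end-member
        # distributions (H) and mixing proportions (W). Synthetic data.
        rng = np.random.default_rng(2)
        G = rng.dirichlet(np.ones(64), size=200)   # 200 samples x 64 bins
        model = NMF(n_components=6, init="nndsvda", max_iter=500,
                    random_state=0)
        W = model.fit_transform(G)                 # mixing coefficients
        H = model.components_                      # 6 end-member spectra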

  5. Domain-averaged snow depth over complex terrain from flat field measurements

    NASA Astrophysics Data System (ADS)

    Helbig, Nora; van Herwijnen, Alec

    2017-04-01

    Snow depth is an important parameter for a variety of coarse-scale models and applications, such as hydrological forecasting. Since high-resolution snow cover models are computationally expensive, simplified snow models are often used. Ground-measured snow depths at single stations offer a chance to assimilate snow depth data and improve coarse-scale model forecasts. Snow depth is, however, commonly recorded at so-called flat fields, often in large measurement networks. While these networks provide a wealth of information, various studies have questioned the representativeness of such flat-field snow depth measurements for the surrounding topography. We developed two parameterizations to compute domain-averaged snow depth for coarse model grid cells over complex topography using easily derived topographic parameters. To derive the two parameterizations we performed a scale-dependent analysis for domain sizes ranging from 50 m to 3 km, using highly resolved snow depth maps at the peak of winter from two distinct climatic regions, in Switzerland and in the Spanish Pyrenees. The first, simpler parameterization uses a commonly applied linear lapse rate. For the second parameterization, we first removed the obvious elevation gradient in mean snow depth, which revealed an additional correlation with the subgrid sky view factor. We evaluated the domain-averaged snow depths derived with both parameterizations from nearby flat-field measurements against the domain averages of the highly resolved snow depth maps. This revealed an overall improved performance for the parameterization combining a power-law elevation trend scaled with the subgrid parameterized sky view factor. We therefore suggest the parameterization could be used to assimilate flat-field snow depth into coarse-scale snow model frameworks in order to improve coarse-scale snow depth estimates over complex topography.
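
    The first, simpler parameterization amounts to a one-line extrapolation (the lapse rate below is a placeholder value):

        def domain_mean_snow_depth(hs_station, z_station, z_domain_mean,
                                   lapse_per_100m=0.05):
            """Extrapolate a flat-field snow depth (m) to the domain mean
            with a linear elevation lapse rate (m per 100 m of elevation)."""
            dz = z_domain_mean - z_station
            return hs_station + lapse_per_100m * dz / 100.0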

  6. Grain-size segregation and levee formation in geophysical mass flows

    USGS Publications Warehouse

    Johnson, C.G.; Kokelaar, B.P.; Iverson, Richard M.; Logan, M.; LaHusen, R.G.; Gray, J.M.N.T.

    2012-01-01

    Data from large-scale debris-flow experiments are combined with modeling of particle-size segregation to explain the formation of lateral levees enriched in coarse grains. The experimental flows consisted of 10 m3 of water-saturated sand and gravel, which traveled ∼80 m down a steeply inclined flume before forming an elongated leveed deposit 10 m long on a nearly horizontal runout surface. We measured the surface velocity field and observed the sequence of deposition by seeding tracers onto the flow surface and tracking them in video footage. Levees formed by progressive downslope accretion approximately 3.5 m behind the flow front, which advanced steadily at ∼2 m s−1 during most of the runout. Segregation was measured by placing ∼600 coarse tracer pebbles on the bed, which, when entrained into the flow, segregated upwards at ∼6–7.5 cm s−1. When excavated from the deposit these were distributed in a horseshoe-shaped pattern that became increasingly elevated closer to the deposit termination. Although there was clear evidence for inverse grading during the flow, transect sampling revealed that the resulting leveed deposit was strongly graded laterally, with only weak vertical grading. We construct an empirical, three-dimensional velocity field resembling the experimental observations, and use this with a particle-size segregation model to predict the segregation and transport of material through the flow. We infer that coarse material segregates to the flow surface and is transported to the flow front by shear. Within the flow head, coarse material is overridden, then recirculates in spiral trajectories due to size-segregation, before being advected to the flow edges and deposited to form coarse-particle-enriched levees.

  7. Grain-size segregation and levee formation in geophysical mass flows

    USGS Publications Warehouse

    Johnson, C.G.; Kokelaar, B.P.; Iverson, R.M.; Logan, M.; LaHusen, R.G.; Gray, J.M.N.T.

    2012-01-01

    Data from large-scale debris-flow experiments are combined with modeling of particle-size segregation to explain the formation of lateral levees enriched in coarse grains. The experimental flows consisted of 10 m3 of water-saturated sand and gravel, which traveled ~80 m down a steeply inclined flume before forming an elongated leveed deposit 10 m long on a nearly horizontal runout surface. We measured the surface velocity field and observed the sequence of deposition by seeding tracers onto the flow surface and tracking them in video footage. Levees formed by progressive downslope accretion approximately 3.5 m behind the flow front, which advanced steadily at ~2 m s-1 during most of the runout. Segregation was measured by placing ~600 coarse tracer pebbles on the bed, which, when entrained into the flow, segregated upwards at ~6–7.5 cm s-1. When excavated from the deposit these were distributed in a horseshoe-shaped pattern that became increasingly elevated closer to the deposit termination. Although there was clear evidence for inverse grading during the flow, transect sampling revealed that the resulting leveed deposit was strongly graded laterally, with only weak vertical grading. We construct an empirical, three-dimensional velocity field resembling the experimental observations, and use this with a particle-size segregation model to predict the segregation and transport of material through the flow. We infer that coarse material segregates to the flow surface and is transported to the flow front by shear. Within the flow head, coarse material is overridden, then recirculates in spiral trajectories due to size-segregation, before being advected to the flow edges and deposited to form coarse-particle-enriched levees.

  8. An improved fast multipole method for electrostatic potential calculations in a class of coarse-grained molecular simulations

    NASA Astrophysics Data System (ADS)

    Poursina, Mohammad; Anderson, Kurt S.

    2014-08-01

    This paper presents a novel algorithm to approximate the long-range electrostatic potential field in Cartesian coordinates, applicable to 3D coarse-grained simulations of biopolymers. In such models, coarse-grained clusters are formed by treating groups of atoms as rigid and/or flexible bodies connected together via kinematic joints. Therefore, multibody dynamics techniques are used to form and solve the equations of motion of such coarse-grained systems. In this article, approximations for the potential fields due to the interaction between a highly negatively/positively charged pseudo-atom and charged particles, as well as the interaction between clusters of charged particles, are presented. These approximations are expressed in terms of physical and geometrical properties of the bodies, such as the total charge, the location of the center of charge, and the pseudo-inertia tensor about the center of charge of the clusters. Further, a novel substructuring scheme is introduced to implement the presented far-field potential evaluations in a binary tree framework, as opposed to the existing quadtree and octree strategies for implementing the fast multipole method. Using the presented Lagrangian grids, the electrostatic potential is recursively calculated via two passes: assembly and disassembly. In the assembly pass, adjacent charged bodies are combined together to form new clusters. Then, the potential field of each cluster due to its interaction with faraway resulting clusters is recursively calculated in the disassembly pass. The method is highly compatible with multibody dynamics schemes for modeling coarse-grained biopolymers. Since the proposed method takes advantage of the constant physical and geometrical properties of rigid clusters, an improvement in the overall computational cost is observed compared to the traditional application of the fast multipole method.
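
    The kind of aggregate-property evaluation described can be illustrated with a truncated multipole expansion (a sketch in Gaussian-style units; quad is assumed to be the cluster's traceless quadrupole tensor, and the dipole term is omitted because it vanishes about the center of charge for a like-signed cluster):

        import numpy as np

        def far_field_potential(r_eval, q_total, r_center, quad):
            """Monopole + quadrupole approximation of a distant cluster's
            potential, using only aggregate properties: total charge,
            center of charge, and (traceless) quadrupole tensor."""
            d = r_eval - r_center
            r = np.linalg.norm(d)
            return q_total / r + (d @ quad @ d) / (2.0 * r**5)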

  9. 19 CFR 10.213 - Articles eligible for preferential treatment.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Articles Under the African Growth and Opportunity Act § 10.213 Articles eligible for preferential treatment... coarse animal hair or man-made filaments; (iii) Any combination of findings and trimmings of foreign... Customs finds that price to be unreasonable, all reasonable expenses incurred in the growth, production...

  10. 19 CFR 10.213 - Articles eligible for preferential treatment.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Articles Under the African Growth and Opportunity Act § 10.213 Articles eligible for preferential treatment... coarse animal hair or man-made filaments; (iii) Any combination of findings and trimmings of foreign... Customs finds that price to be unreasonable, all reasonable expenses incurred in the growth, production...

  11. 19 CFR 10.213 - Articles eligible for preferential treatment.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Articles Under the African Growth and Opportunity Act § 10.213 Articles eligible for preferential treatment... coarse animal hair or man-made filaments; (iii) Any combination of findings and trimmings of foreign... Customs finds that price to be unreasonable, all reasonable expenses incurred in the growth, production...

  12. 19 CFR 10.213 - Articles eligible for preferential treatment.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Articles Under the African Growth and Opportunity Act § 10.213 Articles eligible for preferential treatment... coarse animal hair or man-made filaments; (iii) Any combination of findings and trimmings of foreign... Customs finds that price to be unreasonable, all reasonable expenses incurred in the growth, production...

  13. 19 CFR 10.213 - Articles eligible for preferential treatment.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Articles Under the African Growth and Opportunity Act § 10.213 Articles eligible for preferential treatment... coarse animal hair or man-made filaments; (iii) Any combination of findings and trimmings of foreign... Customs finds that price to be unreasonable, all reasonable expenses incurred in the growth, production...

  14. Screening Saccharomyces cerevisiae Distillery Strains in Industrial Media

    USDA-ARS?s Scientific Manuscript database

    Twenty-four distillery yeast strains were obtained from the ARS Culture Collection (NRRL) in Peoria, IL, and screened for ethanol production at 30 and 35°C using industrial media. The medium used in the tests consisted of corn mash prepared by combining coarse ground corn, water, and stillage from a...

  15. Effects of machining parameters on tool life and its optimization in turning mild steel with brazed carbide cutting tool

    NASA Astrophysics Data System (ADS)

    Dasgupta, S.; Mukherjee, S.

    2016-09-01

    One of the most significant factors in metal cutting is tool life. In this research work, the effects of machining parameters on tool life under a wet machining environment were studied. The tool life characteristics of a brazed carbide cutting tool turning mild steel were examined, and the machining parameters were optimized based on a Taguchi design of experiments. The experiments used three factors (spindle speed, feed rate and depth of cut), each at three levels. Nine experiments were performed on a high-speed semi-automatic precision central lathe. ANOVA was used to determine the relative importance of the machining parameters for tool life. The optimum machining parameter combination was obtained by analysis of the S/N ratio. A mathematical model based on multiple regression analysis was developed to predict tool life. Taguchi's orthogonal array analysis revealed the optimal combination of parameters at the lower levels of spindle speed, feed rate and depth of cut, namely 550 rpm, 0.2 mm/rev and 0.5 mm, respectively. The Main Effects plot reiterated the same. The variation of tool life with the different process parameters has been plotted. Feed rate has the most significant effect on tool life, followed by spindle speed and depth of cut.
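
    For a larger-is-better response such as tool life, the Taguchi signal-to-noise ratio used to rank parameter levels is straightforward to compute (a minimal helper with illustrative replicate data):

        import numpy as np

        def sn_larger_is_better(y):
            """Taguchi S/N ratio for a larger-is-better response:
            S/N = -10 * log10(mean(1 / y^2))."""
            y = np.asarray(y, dtype=float)
            return -10.0 * np.log10(np.mean(1.0 / y**2))

        # e.g. replicate tool lives (minutes) for one L9 trial
        print(sn_larger_is_better([42.0, 45.0, 39.0]))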

  16. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach

    PubMed Central

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-01-01

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202

  17. ATP-Binding Cassette Proteins: Towards a Computational View of Mechanism

    NASA Astrophysics Data System (ADS)

    Liao, Jielou

    2004-03-01

    Many large protein machines can generate mechanical force and undergo large-scale conformational changes (LSCC) to perform various biological tasks in living cells by utilizing ATP. Important examples include ATP-binding cassette (ABC) transporters. They are membrane proteins that couple ATP binding and hydrolysis to the translocation of substrates across membranes [1]. To interpret how the mechanical force generated by ATP binding and hydrolysis is propagated, a coarse-grained ATP-dependent harmonic network model (HNM) [2,3] is applied to the ABC protein BtuCD. This protein machine transports vitamin B12 across membranes. The analysis shows that subunits of the protein move against each other in a concerted manner. The lowest-frequency modes of the BtuCD protein are found to link the functionally critical domains, and are suggested to be responsible for large-scale ATP-coupled conformational changes. [1] K. P. Locher, A. T. Lee and D. C. Rees. Science 296, 1091-1098 (2002). [2] Atilgan, A. R., S. R. Durell, R. L. Jernigan, M. C. Demirel, O. Keskin, and I. Bahar. Biophys. J. 80, 505-515 (2001); M. M. Tirion, Phys. Rev. Lett. 77, 1905-1908 (1996). [3] J.-L. Liao and D. N. Beratan, 2003, to be published.
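
    A harmonic (elastic) network model of this kind reduces to an eigenproblem on a contact matrix. The sketch below builds a Gaussian-network-style connectivity (Kirchhoff) matrix from C-alpha coordinates and extracts the slowest internal modes; the coordinates are a synthetic stand-in, not the BtuCD structure, and the 7 A cutoff is a conventional choice rather than the paper's parameter.

    ```python
    import numpy as np

    def gnm_modes(coords, cutoff=7.0, n_modes=3):
        """Kirchhoff (connectivity) matrix of a Gaussian/harmonic network
        built from C-alpha positions, and its slowest internal modes."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        K = -(d < cutoff).astype(float)
        np.fill_diagonal(K, 0.0)
        np.fill_diagonal(K, -K.sum(axis=1))    # contact degree on diagonal
        vals, vecs = np.linalg.eigh(K)
        return vals[1:1 + n_modes], vecs[:, 1:1 + n_modes]  # skip zero mode

    # A random-walk chain of pseudo-residues stands in for a real trace.
    coords = np.cumsum(np.random.default_rng(1).normal(scale=2.0,
                                                       size=(100, 3)), axis=0)
    freqs, modes = gnm_modes(coords)
    print("lowest nonzero eigenvalues:", np.round(freqs, 3))
    ```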

  18. Machine Learning-based discovery of closures for reduced models of dynamical systems

    NASA Astrophysics Data System (ADS)

    Pan, Shaowu; Duraisamy, Karthik

    2017-11-01

    Despite the successful application of machine learning (ML) in fields such as image processing and speech recognition, only a few attempts have been made toward employing ML to represent the dynamics of complex physical systems. Previous attempts mostly focus on parameter calibration or data-driven augmentation of existing models. In this work we present an ML framework to discover closure terms in reduced models of dynamical systems and provide insights into potential problems associated with data-driven modeling. Based on exact closure models for linear systems, we propose a general linear closure framework from the viewpoint of optimization. The framework is based on a trapezoidal approximation of the convolution term. Hyperparameters that need to be determined include the temporal length of the memory effect, the number of sampling points, and the dimensions of the hidden states. To circumvent the explicit specification of the memory effect, a general framework inspired by neural networks is also proposed. We conduct both a priori and a posteriori evaluations of the resulting models on a number of non-linear dynamical systems. This work was supported in part by AFOSR under the project ``LES Modeling of Non-local effects using Statistical Coarse-graining'' with Dr. Jean-Luc Cambier as the technical monitor.
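
    The trapezoidal treatment of the memory term is easy to make concrete. The following sketch evaluates a convolution closure over a finite memory window inside a toy reduced model; all values are invented, and the decaying kernel is a hand-picked guess where the paper's framework would learn it.

    ```python
    import numpy as np

    def memory_closure(x_hist, kernel, dt):
        """Trapezoidal approximation of the convolution closure
        c(t) = int_0^T k(s) x(t - s) ds, with x_hist = [x(t), x(t - dt),
        ..., x(t - T)] sampled at the kernel nodes."""
        f = kernel * x_hist
        return dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

    # Toy reduced model dx/dt = a*x + c(t), stepped with forward Euler.
    dt, n_mem = 0.01, 50
    s = np.linspace(0.0, n_mem * dt, n_mem + 1)
    kernel = np.exp(-s / 0.1)                  # assumed memory kernel
    hist = np.zeros(n_mem + 1)                 # x(t), x(t-dt), ..., x(t-T)
    x, a = 1.0, -2.0
    for _ in range(500):
        c = memory_closure(hist, kernel, dt)
        x += dt * (a * x + c)
        hist = np.roll(hist, 1)
        hist[0] = x
    print(f"x at t = 5: {x:.4f}")
    ```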

  19. Automated anatomical labeling of bronchial branches extracted from CT datasets based on machine learning and combination optimization and its application to bronchoscope guidance.

    PubMed

    Mori, Kensaku; Ota, Shunsuke; Deguchi, Daisuke; Kitasaka, Takayuki; Suenaga, Yasuhito; Iwano, Shingo; Hasegawa, Yoshinori; Takabatake, Hirotsugu; Mori, Masaki; Natori, Hiroshi

    2009-01-01

    This paper presents a method for the automated anatomical labeling of bronchial branches extracted from 3D CT images based on machine learning and combination optimization, and shows its application in a bronchoscopy guidance system. The labeling procedure consists of four steps: (a) extraction of the tree structures of the bronchus regions extracted from the CT images, (b) construction of AdaBoost classifiers, (c) computation of candidate names for all branches by using the classifiers, and (d) selection of the best combination of anatomical names. We applied the proposed method to 90 cases of 3D CT datasets. The experimental results showed that the proposed method can assign correct anatomical names to 86.9% of the bronchial branches up to the sub-segmental lobe branches. Also, we overlaid the anatomical names of bronchial branches on real bronchoscopic views to guide real bronchoscopy.
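
    Steps (b)-(d) can be illustrated with off-the-shelf pieces. The sketch below trains an AdaBoost classifier on synthetic branch features, scores candidate names for each branch of a toy tree, and uses a Hungarian assignment as a simplified stand-in for the paper's combination-optimization step (which additionally enforces anatomical consistency along the tree). Every feature and name here is invented.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from sklearn.ensemble import AdaBoostClassifier

    # Synthetic branch features (direction, length, angle to parent, ...)
    # with class-dependent means; n_names anatomical name classes.
    rng = np.random.default_rng(2)
    n_names = 4
    y_train = rng.integers(0, n_names, 200)
    X_train = rng.normal(size=(200, 5)) + y_train[:, None]
    clf = AdaBoostClassifier(n_estimators=100).fit(X_train, y_train)

    # An unlabeled "tree" of 4 branches: score every (branch, name) pair,
    # then pick the combination maximizing the total log-probability with
    # each name used once.
    X_branches = rng.normal(size=(n_names, 5)) + np.arange(n_names)[:, None]
    log_p = np.log(clf.predict_proba(X_branches) + 1e-12)
    branch, name = linear_sum_assignment(-log_p)   # maximize total score
    print({int(b): int(n) for b, n in zip(branch, name)})
    ```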

  20. Dual Microstructure Heat Treatment of a Nickel-Base Disk Alloy Assessed

    NASA Technical Reports Server (NTRS)

    Gayda, John

    2002-01-01

    Gas turbine engines for future subsonic aircraft will require nickel-base disk alloys that can be used at temperatures in excess of 1300 F. Smaller turbine engines, with higher rotational speeds, also require disk alloys with high strength. To address these challenges, NASA funded a series of disk programs in the 1990s. Under these initiatives, Honeywell and Allison focused their attention on Alloy 10, a high-strength, nickel-base disk alloy developed by Honeywell for application in the small turbine engines used in regional jet aircraft. Since tensile, creep, and fatigue properties are strongly influenced by alloy grain size, the effect of heat treatment on grain size and the attendant properties were studied in detail. It was observed that a fine grain microstructure offered the best tensile and fatigue properties, whereas a coarse grain microstructure offered the best creep resistance at high temperatures. Therefore, a disk with a dual microstructure, consisting of a fine-grained bore and a coarse-grained rim, should have a high potential for optimal performance. Under NASA's Ultra-Safe Propulsion Project and Ultra-Efficient Engine Technology (UEET) Program, a disk program was initiated at the NASA Glenn Research Center to assess the feasibility of using Alloy 10 to produce a dual-microstructure disk. The objectives of this program were twofold. First, existing dual-microstructure heat treatment (DMHT) technology would be applied and refined as necessary for Alloy 10 to yield the desired grain structure in full-scale forgings appropriate for use in regional gas turbine engines. Second, key mechanical properties from the bore and rim of a DMHT Alloy 10 disk would be measured and compared with conventional heat treatments to assess the benefits of DMHT technology. At Wyman Gordon and Honeywell, an active-cooling DMHT process was used to convert four full-scale Alloy 10 disks to a dual-grain microstructure. The resulting microstructures are illustrated in the photomicrographs. The fine grain size in the bore can be contrasted with the coarse grain size in the rim. Testing (at NASA Glenn) of coupons machined from these disks showed that the DMHT approach did indeed produce a high-strength, fatigue-resistant bore and a creep-resistant rim. This combination of properties was previously unobtainable using conventional heat treatments, which produced disks with a uniform grain size. Future plans are in place to spin test a DMHT disk under the Ultra-Safe Propulsion Project to assess the viability of this technology at the component level. This testing will include measurements of disk growth at a high temperature as well as the determination of burst speed at an intermediate temperature.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cooper, R. C.; Bruno, Giovanni; Onel, Yener

    Microstructural changes in porous cordierite caused by machining were characterized using microtensile testing, X-ray computed tomography and scanning electron microscopy. Young's moduli and Poisson's ratios were determined on ~215-380 μm thick machined samples by combining digital image correlation and microtensile loading. The results provide evidence for an increase in microcrack density due to machining of the thin samples extracted from diesel particulate filter honeycombs.

  2. Multi-Cultural Competency-Based Vocational Curricula. Machine Trades. Multi-Cultural Competency-Based Vocational/Technical Curricula Series.

    ERIC Educational Resources Information Center

    Hepburn, Larry; Shin, Masako

    This document, one of eight in a multi-cultural competency-based vocational/technical curricula series, is on machine trades. This program is designed to run 36 weeks and cover 6 instructional areas: use of measuring tools; benchwork/tool bit grinding; lathe work; milling work; precision grinding; and combination machine work. A duty-task index…

  3. Neck pain combined with arm pain among professional drivers of forest machines and the association with whole-body vibration exposure.

    PubMed

    Rehn, B; Nilsson, T; Lundström, R; Hagberg, M; Burström, L

    2009-10-01

    The purpose of this study was to investigate the existence of neck pain and arm pain among professional forest machine drivers and to find out if the pain was related to their whole-body vibration (WBV) exposure. A self-administered questionnaire was sent to 529 forest machine drivers in northern Sweden and the response rate was 63%. Two pain groups were formed: (1) neck pain; (2) neck pain combined with arm pain. From WBV exposure data (recent measurements made according to ISO 2631-1 and available information from reports) and from the self-administered questionnaire, 14 different WBV exposure/dose measures were calculated for each driver. The prevalence of neck pain reported both for the previous 12 months and for the previous 7 d was 34%, and more than half of those reporting it also reported pain in one or both arms. The analysis showed no significant association between neck pain and high WBV exposure; however, cases with neck pain more often experienced shocks and jolts in the vehicle as uncomfortable. There was no significant association between the 14 WBV measures and the type of neck pain (neck pain vs. neck pain combined with arm pain). It seems that the characteristics of WBV exposure can explain neither the existence nor the type of neck pain among professional drivers of forest machines. The logging industry is important for several industrialised countries. Drivers of forest machines frequently report neuromusculoskeletal pain in the neck. The type of neck pain is important for the decision on treatment modality and may be associated with exposure characteristics at work.

  4. Machine learning in updating predictive models of planning and scheduling transportation projects

    DOT National Transportation Integrated Search

    1997-01-01

    A method combining machine learning and regression analysis to automatically and intelligently update predictive models used in the Kansas Department of Transportation's (KDOT's) internal management system is presented. The predictive models used...

  5. Decomposition of intact chicken feathers by a thermophile in combination with an acidulocomposting garbage-treatment process.

    PubMed

    Shigeri, Yasushi; Matsui, Tatsunobu; Watanabe, Kunihiko

    2009-11-01

    In order to develop a practical method for the decomposition of intact chicken feathers, a moderate thermophile strain, Meiothermus ruber H328, having strong keratinolytic activity, was used in a bio-type garbage-treatment machine working with an acidulocomposting process. The addition of strain H328 cells (15 g) combined with acidulocomposting in the garbage machine resulted in 70% degradation of intact chicken feathers (30 g) within 14 d. This degradation efficiency is comparable to a previous result employing the strain as a single bacterium in flask culture, and it indicates that strain H328 can promote intact feather degradation activity in a garbage machine currently on the market.

  6. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features

    PubMed Central

    Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-01-01

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. Besides, the influences of parameter selection on the classifying performances were investigated. We found that support vector machine (SVM) exhibited superior performance to other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 or 0.961 for LGG and HGG or grade II, III and IV gliomas was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. Besides, the performances of the LibSVM, SMO and IBk classifiers were influenced by some key parameters such as kernel type, C, gamma, K, etc. SVM is a promising tool in developing automated preoperative glioma grading systems, especially when being combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization. PMID:28599282
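
    The SVM + RFE + LOOCV recipe maps directly onto a standard scikit-learn pipeline. Below is a minimal sketch with synthetic features in place of the MRI histogram/texture attributes; sample counts, feature dimensions and labels are invented.

    ```python
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Synthetic stand-in: 40 toy "patients", 30 features, binary grades.
    rng = np.random.default_rng(3)
    X = rng.normal(size=(40, 30))
    y = (X[:, :3].sum(axis=1) > 0).astype(int)

    model = make_pipeline(
        StandardScaler(),
        RFE(SVC(kernel="linear"), n_features_to_select=10),  # RFE step
        SVC(kernel="rbf", C=1.0, gamma="scale"),             # final SVM
    )
    acc = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
    print(f"LOOCV accuracy: {acc:.3f}")
    ```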

  7. Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies

    PubMed Central

    Zheng, Shuai; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A

    2017-01-01

    Background Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to incorporate user feedback to improve the extraction algorithm in real time. Objective Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. Methods A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Results Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports, each combining a history and physical report, discharge summary, outpatient clinic notes, outpatient clinic letter, and inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. Conclusions IDEAL-X adopts a unique online machine learning-based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, and is thus highly adaptable. PMID:28487265
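
    The online loop the abstract describes (predict, collect a correction, update immediately) can be sketched with any incrementally trainable classifier. Below is a toy analogue using hashed text features and SGD with partial_fit; the reports, labels, and feature choices are invented and are not IDEAL-X's actual pipeline.

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import SGDClassifier

    # Invented mini-reports and labels (0 = normal, 1 = abnormal).
    reports = ["ef 55 percent normal lv", "ef 25 percent dilated lv",
               "ef 60 percent normal lv", "ef 20 percent dilated lv"]
    labels = [0, 1, 0, 1]

    vec = HashingVectorizer(n_features=2 ** 10, alternate_sign=False)
    clf = SGDClassifier(loss="log_loss")
    classes = np.array([0, 1])

    for text, correct in zip(reports, labels):
        x = vec.transform([text])
        if hasattr(clf, "coef_"):              # predict once a model exists
            print(text, "->", clf.predict(x)[0])
        # The "user correction" updates the model immediately.
        clf.partial_fit(x, [correct], classes=classes)
    ```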

  8. Optimizing a machine learning based glioma grading system using multi-parametric MRI histogram and texture features.

    PubMed

    Zhang, Xin; Yan, Lin-Feng; Hu, Yu-Chuan; Li, Gang; Yang, Yang; Han, Yu; Sun, Ying-Zhi; Liu, Zhi-Cheng; Tian, Qiang; Han, Zi-Yang; Liu, Le-De; Hu, Bin-Quan; Qiu, Zi-Yu; Wang, Wen; Cui, Guang-Bin

    2017-07-18

    Current machine learning techniques provide the opportunity to develop noninvasive and automated glioma grading tools, by utilizing quantitative parameters derived from multi-modal magnetic resonance imaging (MRI) data. However, the efficacies of different machine learning methods in glioma grading have not been investigated. A comprehensive comparison of varied machine learning methods in differentiating low-grade gliomas (LGGs) and high-grade gliomas (HGGs) as well as WHO grade II, III and IV gliomas based on multi-parametric MRI images was proposed in the current study. The parametric histogram and image texture attributes of 120 glioma patients were extracted from the perfusion, diffusion and permeability parametric maps of preoperative MRI. Then, 25 commonly used machine learning classifiers combined with 8 independent attribute selection methods were applied and evaluated using a leave-one-out cross validation (LOOCV) strategy. Besides, the influences of parameter selection on the classifying performances were investigated. We found that support vector machine (SVM) exhibited superior performance to other classifiers. By combining all tumor attributes with the synthetic minority over-sampling technique (SMOTE), the highest classifying accuracy of 0.945 or 0.961 for LGG and HGG or grade II, III and IV gliomas was achieved. Application of the Recursive Feature Elimination (RFE) attribute selection strategy further improved the classifying accuracies. Besides, the performances of the LibSVM, SMO and IBk classifiers were influenced by some key parameters such as kernel type, C, gamma, K, etc. SVM is a promising tool in developing automated preoperative glioma grading systems, especially when being combined with the RFE strategy. Model parameters should be considered in glioma grading model optimization.

  9. Multitasking the three-dimensional transport code TORT on CRAY platforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azmy, Y.Y.; Barnett, D.A.; Burre, C.A.

    1996-04-01

    The multitasking options in the three-dimensional neutral particle transport code TORT, originally implemented for Cray's CTSS operating system, are revived and extended to run on Cray Y/MP and C90 computers using the UNICOS operating system. These include two coarse-grained domain decompositions: across octants, and across directions within an octant, termed Octant Parallel (OP) and Direction Parallel (DP), respectively. Parallel performance of the DP is significantly enhanced by increasing the task grain size and reducing load imbalance via dynamic scheduling of the discrete angles among the participating tasks. Substantial Wall Clock speedup factors, approaching 4.5 using 8 tasks, have been measured in a time-sharing environment, and generally depend on the test problem specifications, number of tasks, and machine loading during execution.

  10. An Integrated Framework for Parameter-based Optimization of Scientific Workflows.

    PubMed

    Kumar, Vijay S; Sadayappan, P; Mehta, Gaurang; Vahi, Karan; Deelman, Ewa; Ratnakar, Varun; Kim, Jihie; Gil, Yolanda; Hall, Mary; Kurc, Tahsin; Saltz, Joel

    2009-01-01

    Data analysis processes in scientific applications can be expressed as coarse-grain workflows of complex data processing operations with data flow dependencies between them. Performance optimization of these workflows can be viewed as a search for a set of optimal values in a multi-dimensional parameter space. While some performance parameters such as grouping of workflow components and their mapping to machines do not affect the accuracy of the output, others may dictate trading the output quality of individual components (and of the whole workflow) for performance. This paper describes an integrated framework which is capable of supporting performance optimizations along multiple dimensions of the parameter space. Using two real-world applications in the spatial data analysis domain, we present an experimental evaluation of the proposed framework.

  11. Welding Thermal Simulation and Corrosion Study of X-70 Deep Sea Pipeline Steel

    NASA Astrophysics Data System (ADS)

    Zhang, Weipeng; Li, Zhuoran; Gao, Jixiang; Peng, Zhengwu

    2017-12-01

    A Gleeble thermomechanical processing machine was used to simulate the coarse grain heat affected zone (CGHAZ) of API X-70 thick wall pipeline steel used in the deep sea. The microstructures and corresponding corrosion behavior of CGHAZs simulated with different cooling rates were investigated and compared to the as-received material by scanning electron microscopy and electrochemical experiments carried out in 3.5 wt.% NaCl solution. The results of this study show that the as-received samples exhibited slightly higher corrosion resistance than the simulated CGHAZs. Among the 3 sets of simulation experiments, the maximum corrosion tendency was exhibited at t8/5 = 20 s, with the most martensite-austenite (M-A) microstructure, and the highest corrosion potential was shown at t8/5 = 60 s.

  12. Sampling methods for terrestrial amphibians and reptiles.

    Treesearch

    Paul Stephen Corn; R. Bruce Bury

    1990-01-01

    Methods described for sampling amphibians and reptiles in Douglas-fir forests in the Pacific Northwest include pitfall trapping, time-constrained collecting, and surveys of coarse woody debris. The herpetofauna of this region differ in breeding and nonbreeding habitats and vagility, so that no single technique is sufficient for a community study. A combination of...

  13. Machine learning-based methods for prediction of linear B-cell epitopes.

    PubMed

    Wang, Hsin-Wei; Pai, Tun-Wen

    2014-01-01

    B-cell epitope prediction facilitates immunologists in designing peptide-based vaccines, diagnostic tests, disease prevention, treatment, and antibody production. In comparison with T-cell epitope prediction, the performance of variable-length B-cell epitope prediction is still unsatisfactory. Fortunately, due to increasingly available verified epitope databases, bioinformaticians can apply machine learning-based algorithms to all curated data to design improved prediction tools for biomedical researchers. Here, we have reviewed related epitope prediction papers, especially those for linear B-cell epitope prediction. It should be noted that a combination of selected propensity scales and statistics of epitope residues with machine learning-based tools has formulated a general way of constructing linear B-cell epitope prediction systems. It is also observed from most of the comparison results that the kernel method of the support vector machine (SVM) classifier outperformed other machine learning-based approaches. Hence, in this chapter, in addition to reviewing recently published papers, we introduce the fundamentals of B-cell epitopes and SVM techniques. In addition, an example of a linear B-cell epitope prediction system based on physicochemical features and amino acid combinations is illustrated in detail.
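
    As a flavor of the SVM-based systems reviewed here, the sketch below classifies peptides from a 20-dimensional amino acid composition vector, one of the simplest feature sets used alongside propensity scales. The peptides and labels are invented; a real system would train on a curated database such as IEDB and add propensity-scale statistics.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def aa_composition(peptide):
        """20-dim amino acid composition vector of a peptide."""
        counts = np.array([peptide.count(a) for a in AA], dtype=float)
        return counts / max(len(peptide), 1)

    # Invented toy peptides, not curated epitope data.
    epitopes     = ["KKDESTKK", "RRGDNEKK", "KDDGSNEK"]
    non_epitopes = ["LLVVAILL", "AVVLLIGA", "ILVAGLLA"]
    X = np.array([aa_composition(p) for p in epitopes + non_epitopes])
    y = np.array([1] * len(epitopes) + [0] * len(non_epitopes))

    clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
    print(clf.predict([aa_composition("KKDENGSK")]))   # 1 = predicted epitope
    ```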

  14. [Support vector machine-assisted diagnosis of human malignant gastric tissues based on dielectric properties].

    PubMed

    Zhang, Sa; Li, Zhou; Xin, Xue-Gang

    2017-12-20

    To achieve differential diagnosis of normal and malignant gastric tissues based on discrepancies in their dielectric properties using a support vector machine. The dielectric properties of normal and malignant gastric tissues at frequencies ranging from 42.58 to 500 MHz were measured by the coaxial probe method, and the Cole-Cole model was used to fit the measured data. Receiver-operating characteristic (ROC) curve analysis was used to evaluate the discrimination capability with respect to permittivity, conductivity, and the Cole-Cole fitting parameters. A support vector machine was used for discriminating normal and malignant gastric tissues, and the discrimination accuracy was calculated using k-fold cross-validation. The area under the ROC curve was above 0.8 for permittivity at the 5 frequencies at the lower end of the measured frequency range. The combination of the support vector machine with the permittivity at all these 5 frequencies achieved the highest discrimination accuracy of 84.38% with a MATLAB runtime of 3.40 s. Support vector machine-assisted diagnosis of human malignant gastric tissues based on their dielectric properties is feasible.
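
    The Cole-Cole fit at the heart of the feature extraction can be reproduced with a standard least-squares routine. Below is a minimal sketch using synthetic "measurements" over the paper's 42.58-500 MHz band; the parameter values are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cole_cole(omega, eps_inf, d_eps, tau, alpha):
        """Cole-Cole permittivity:
        eps(w) = eps_inf + d_eps / (1 + (j*w*tau)**(1 - alpha))."""
        return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** (1.0 - alpha))

    def stacked(omega, eps_inf, d_eps, tau, alpha):
        e = cole_cole(omega, eps_inf, d_eps, tau, alpha)
        return np.concatenate([e.real, e.imag])   # fit real and imag jointly

    # Synthetic noisy data from invented "true" parameters.
    f = np.linspace(42.58e6, 500e6, 50)
    w = 2 * np.pi * f
    noisy = stacked(w, 40.0, 30.0, 1.2e-9, 0.15)
    noisy += np.random.default_rng(4).normal(0.0, 0.2, noisy.size)

    popt, _ = curve_fit(stacked, w, noisy, p0=(30.0, 20.0, 1e-9, 0.1))
    print("eps_inf, d_eps, tau, alpha =", popt)
    ```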

  15. Repurposing mainstream CNC machine tools for laser-based additive manufacturing

    NASA Astrophysics Data System (ADS)

    Jones, Jason B.

    2016-04-01

    The advent of laser technology has been a key enabler for industrial 3D printing, known as Additive Manufacturing (AM). Despite its commercial success and unique technical capabilities, laser-based AM systems are not yet able to produce parts with the same accuracy and surface finish as CNC machining. To enable the geometry and material freedoms afforded by AM, yet achieve the precision and productivity of CNC machining, hybrid combinations of these two processes have started to gain traction. To achieve the benefits of combined processing, laser technology has been integrated into mainstream CNC machines, effectively repurposing them as hybrid manufacturing platforms. This paper reviews how this engineering challenge has prompted beam delivery innovations to allow automated changeover between laser processing and machining, using standard CNC tool changers. Handling laser-processing heads using the tool changer also enables automated changeover between different types of laser processing heads, further expanding the breadth of laser processing flexibility in a hybrid CNC. This paper highlights the development, challenges and future impact of hybrid CNCs on laser processing.

  16. Interpretation of sedimentological processes of coarse-grained deposits applying a novel combined cluster and discriminant analysis

    NASA Astrophysics Data System (ADS)

    Farics, Éva; Farics, Dávid; Kovács, József; Haas, János

    2017-10-01

    The main aim of this paper is to determine the depositional environments of an Upper Eocene coarse-grained clastic succession in the Buda Hills, Hungary. First, we measured commonly used sample parameters (size, amount, roundness and sphericity) in a more objective and faster way than traditional measurement approaches allow, using the newly developed Rock Analyst application. For the multivariate data obtained, we applied Combined Cluster and Discriminant Analysis (CCDA) in order to determine homogeneous groups of the sampling locations based on the quantitative composition of the conglomerate as well as the shape parameters (roundness and sphericity). The result is the spatial pattern of these groups, which assists with the interpretation of the depositional processes. According to our concept, sampling sites that belong to the same homogeneous group were likely formed under similar geological circumstances and by similar geological processes. In the Buda Hills, we were able to distinguish various sedimentological environments within the area based on the results: fan, intermittent stream or marine.
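
    Without reproducing the published CCDA algorithm, the cluster-then-discriminate idea can be conveyed in a few lines: group sampling sites by composition and shape parameters, then check how well a discriminant function separates the groups. All feature values below are invented, not the Buda Hills measurements.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    # Toy site features: [quartzite %, dolomite %, chert %, roundness,
    # sphericity]; two invented populations of ten sites each.
    rng = np.random.default_rng(5)
    scale = np.array([5.0, 5.0, 5.0, 0.05, 0.05])
    sites = np.vstack([
        rng.normal([60, 20, 20, 0.7, 0.8], scale, size=(10, 5)),  # "fan"-like
        rng.normal([30, 50, 20, 0.4, 0.6], scale, size=(10, 5)),  # "stream"-like
    ])

    # Step 1: cluster sampling sites into candidate homogeneous groups.
    groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sites)

    # Step 2: validate the grouping with a discriminant analysis; high
    # cross-validated accuracy suggests genuinely distinct populations.
    acc = cross_val_score(LinearDiscriminantAnalysis(), sites, groups, cv=5)
    print("groups:", groups, "| discriminant CV accuracy:", acc.mean())
    ```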

  17. A dual-route approach to orthographic processing.

    PubMed

    Grainger, Jonathan; Ziegler, Johannes C

    2011-01-01

    In the present theoretical note we examine how different learning constraints, thought to be involved in optimizing the mapping of print to meaning during reading acquisition, might shape the nature of the orthographic code involved in skilled reading. On the one hand, optimization is hypothesized to involve selecting combinations of letters that are the most informative with respect to word identity (diagnosticity constraint), and on the other hand to involve the detection of letter combinations that correspond to pre-existing sublexical phonological and morphological representations (chunking constraint). These two constraints give rise to two different kinds of prelexical orthographic code, a coarse-grained and a fine-grained code, associated with the two routes of a dual-route architecture. Processing along the coarse-grained route optimizes fast access to semantics by using minimal subsets of letters that maximize information with respect to word identity, while coding for approximate within-word letter position independently of letter contiguity. Processing along the fine-grained route, on the other hand, is sensitive to the precise ordering of letters, as well as to position with respect to word beginnings and endings. This enables the chunking of frequently co-occurring contiguous letter combinations that form relevant units for morpho-orthographic processing (prefixes and suffixes) and for the sublexical translation of print to sound (multi-letter graphemes).

  18. A Dual-Route Approach to Orthographic Processing

    PubMed Central

    Grainger, Jonathan; Ziegler, Johannes C.

    2011-01-01

    In the present theoretical note we examine how different learning constraints, thought to be involved in optimizing the mapping of print to meaning during reading acquisition, might shape the nature of the orthographic code involved in skilled reading. On the one hand, optimization is hypothesized to involve selecting combinations of letters that are the most informative with respect to word identity (diagnosticity constraint), and on the other hand to involve the detection of letter combinations that correspond to pre-existing sublexical phonological and morphological representations (chunking constraint). These two constraints give rise to two different kinds of prelexical orthographic code, a coarse-grained and a fine-grained code, associated with the two routes of a dual-route architecture. Processing along the coarse-grained route optimizes fast access to semantics by using minimal subsets of letters that maximize information with respect to word identity, while coding for approximate within-word letter position independently of letter contiguity. Processing along the fine-grained route, on the other hand, is sensitive to the precise ordering of letters, as well as to position with respect to word beginnings and endings. This enables the chunking of frequently co-occurring contiguous letter combinations that form relevant units for morpho-orthographic processing (prefixes and suffixes) and for the sublexical translation of print to sound (multi-letter graphemes). PMID:21716577

  19. Fast and robust segmentation of white blood cell images by self-supervised learning.

    PubMed

    Zheng, Xin; Wang, Yong; Wang, Guoyou; Liu, Jianguo

    2018-04-01

    A fast and accurate white blood cell (WBC) segmentation remains a challenging task, as different WBCs vary significantly in color and shape due to cell type differences, staining technique variations and the adhesion between WBCs and red blood cells. In this paper, a self-supervised learning approach, consisting of unsupervised initial segmentation and supervised segmentation refinement, is presented. The first module extracts the overall foreground region from the cell image by K-means clustering, and then generates a coarse WBC region by touching-cell splitting based on concavity analysis. The second module further uses the coarse segmentation result of the first module as automatic labels to actively train a support vector machine (SVM) classifier. Then, the trained SVM classifier is used to classify each pixel of the image and achieve a more accurate segmentation result. To improve segmentation accuracy, median color features representing the topological structure and a new weak edge enhancement operator (WEEO) handling fuzzy boundaries are introduced. To further reduce the time cost, an efficient cluster sampling strategy is also proposed. We tested the proposed approach with two blood cell image datasets obtained under various imaging and staining conditions. The experimental results show that our approach has superior performance in both accuracy and time cost on both datasets. Copyright © 2018 Elsevier Ltd. All rights reserved.
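
    The two-stage structure, unsupervised coarse labels feeding a supervised pixel classifier, can be sketched compactly. The code below is a simplified analogue: there is no concavity-based cell splitting and no WEEO, and the "darkest cluster" heuristic and sampling size are assumptions rather than the paper's choices.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    def self_supervised_segment(img, n_train=2000):
        """Two-stage sketch: K-means gives a coarse WBC/background split,
        whose labels then train an SVM that re-classifies every pixel.
        img: (H, W, 3) float RGB array."""
        h, w, _ = img.shape
        pixels = img.reshape(-1, 3)

        # Stage 1: unsupervised coarse segmentation; we take the darkest
        # cluster as the (stained) WBC region, an assumed heuristic.
        km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
        coarse = km.labels_ == km.cluster_centers_.sum(axis=1).argmin()

        # Stage 2: coarse labels become pseudo-labels for a supervised
        # classifier; subsampling keeps training cheap (cf. the paper's
        # cluster sampling strategy).
        idx = np.random.default_rng(6).choice(len(pixels), n_train,
                                              replace=False)
        svm = SVC(kernel="rbf", gamma="scale").fit(pixels[idx], coarse[idx])
        return svm.predict(pixels).reshape(h, w)

    mask = self_supervised_segment(np.random.default_rng(7).random((64, 64, 3)))
    print(mask.shape, mask.sum(), "pixels labeled WBC")
    ```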

  20. Analysis of the Laser Drilling Process for the Combination with a Single-Lip Deep Hole Drilling Process with Small Diameters

    NASA Astrophysics Data System (ADS)

    Biermann, Dirk; Heilmann, Markus

    Due to the trend toward downsizing of components and the growing requirements on parts, bore holes with small diameters and high length-to-diameter ratios are of rising industrial relevance. In these applications, the combination of laser pre-drilling and single-lip deep hole drilling can shorten the process chain in machining components with non-planar surfaces, or can reduce tool wear in machining case-hardened materials. In this research, the combination of these processes was realized and investigated for the first time.

  1. Soft electroactive actuators and hard ratchet-wheels enable unidirectional locomotion of hybrid machine

    NASA Astrophysics Data System (ADS)

    Sun, Wenjie; Liu, Fan; Ma, Ziqi; Li, Chenghai; Zhou, Jinxiong

    2017-01-01

    Synergistically combining the muscle-like actuation of soft materials with the load-carrying and locomotive capability of hard mechanical components results in hybrid soft machines that can exhibit specific functions. Here, we describe the design, fabrication, modeling and experimental testing of a hybrid soft machine enabled by marrying a unidirectionally actuated dielectric elastomer (DE) membrane-spring system with ratchet wheels. Subjected to an applied voltage of 8.2 kV at a ramping rate of 820 V/s, the hybrid machine prototype exhibits monotonic uniaxial locomotion with an average velocity of 0.5 mm/s. The underlying physics and working mechanisms of the soft machine are verified and elucidated by finite element simulation.

  2. Achieving Small Structures in Thin NiTi Sheets for Medical Applications with Water Jet and Micro Machining: A Comparison

    NASA Astrophysics Data System (ADS)

    Frotscher, M.; Kahleyss, F.; Simon, T.; Biermann, D.; Eggeler, G.

    2011-07-01

    NiTi shape memory alloys (SMA) are used for a variety of applications including medical implants and tools as well as actuators, making use of their unique properties. However, due to the hardness and strength of the material, in combination with its high elasticity, the machining of components can be challenging. The most common machining techniques used today are laser cutting and electrical discharge machining (EDM). In this study, we report on the machining of small structures into binary NiTi sheets, applying alternative processing methods that are well established for other metallic materials. Our results indicate that water jet machining and micro milling can be used to machine delicate structures, even in very thin NiTi sheets. Further work is required to optimize the cut quality and the machining speed in order to increase the cost-effectiveness and to make both methods more competitive.

  3. Back to the future: using historical climate variation to project near-term shifts in habitat suitable for coast redwood.

    PubMed

    Fernández, Miguel; Hamilton, Healy H; Kueppers, Lara M

    2015-11-01

    Studies that model the effect of climate change on terrestrial ecosystems often use climate projections from downscaled global climate models (GCMs). These simulations are generally too coarse to capture patterns of fine-scale climate variation, such as the sharp coastal energy and moisture gradients associated with wind-driven upwelling of cold water. Coastal upwelling may limit future increases in coastal temperatures, compromising GCMs' ability to provide realistic scenarios of future climate in these coastal ecosystems. Taking advantage of naturally occurring variability in the high-resolution historic climatic record, we developed multiple fine-scale scenarios of California climate that maintain coherent relationships between regional climate and coastal upwelling. We compared these scenarios against coarse resolution GCM projections at a regional scale to evaluate their temporal equivalency. We used these historically based scenarios to estimate potential suitable habitat for coast redwood (Sequoia sempervirens D. Don) under 'normal' combinations of temperature and precipitation, and under anomalous combinations representative of potential future climates. We found that a scenario of warmer temperature with historically normal precipitation is equivalent to climate projected by GCMs for California by 2020-2030 and that under these conditions, climatically suitable habitat for coast redwood significantly contracts at the southern end of its current range. Our results suggest that historical climate data provide a high-resolution alternative to downscaled GCM outputs for near-term ecological forecasts. This method may be particularly useful in other regions where local climate is strongly influenced by ocean-atmosphere dynamics that are not represented by coarse-scale GCMs. © 2015 John Wiley & Sons Ltd.

  4. Contribution of coarse particles from road surfaces to dissolved and particle-bound heavy metal loads in runoff: A laboratory leaching study with synthetic stormwater.

    PubMed

    Borris, Matthias; Österlund, Heléne; Marsalek, Jiri; Viklander, Maria

    2016-12-15

    Laboratory leaching experiments were performed to study the potential of coarse street sediments (i.e. >250 μm) to release dissolved and particulate-bound heavy metals (i.e. Cd, Cr, Cu, Ni, Pb and Zn) during rainfall/runoff. Towards this end, street sediments were sampled by vacuuming at seven sites in five Swedish cities and the collected sediments were characterized with respect to their physical and chemical properties. In the laboratory, the sediments were combined with synthetic rainwater and subjected to agitation by a shaker mimicking particle motion during transport by runoff from street surfaces. As a result of such action, coarse street sediments were found to release significant amounts of heavy metals, which were predominantly (up to 99%) in the particulate-bound phase. Thus, in dry weather, coarse street sediments functioned as collectors of fine particles with attached heavy metals, but in wet weather, these metal burdens were released by rainfall/runoff processes. The magnitude of such releases depended on the site characteristics (i.e. street cleaning and traffic intensity), particle properties (i.e. organic matter content), and runoff characteristics (pH, and the duration of, and energy input into, sediment/water agitation). The study findings suggest that street cleaning, which preferentially removes coarser sediments, may produce additional environmental benefits by also removing fine contaminated particles attached to coarser materials. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Optimization of Analytical Potentials for Coarse-Grained Biopolymer Models.

    PubMed

    Mereghetti, Paolo; Maccari, Giuseppe; Spampinato, Giulia Lia Beatrice; Tozzini, Valentina

    2016-08-25

    The increasing trend in the recent literature on coarse-grained (CG) models testifies to their impact in the study of complex systems. However, the CG model landscape is variegated: even considering a given resolution level, the force fields are very heterogeneous and optimized with very different parametrization procedures. Along the road toward standardization of CG models for biopolymers, here we describe a strategy to aid the building and optimization of statistics-based analytical force fields and its implementation in the software package AsParaGS (Assisted Parameterization platform for coarse Grained modelS). Our method is based on the use of analytical potentials, optimized by targeting the statistical distributions of internal variables by means of a combination of different algorithms (i.e., relative-entropy-driven stochastic exploration of the parameter space and iterative Boltzmann inversion). This allows designing a custom model that endows the force field terms with a physically sound meaning. Furthermore, the level of transferability and accuracy can be tuned through the choice of the statistical data set composition. The method, illustrated by means of applications to helical polypeptides, also involves the analysis of two- and three-variable distributions, and allows handling issues related to force-field term correlations. AsParaGS is interfaced with general-purpose molecular dynamics codes and currently implements the "minimalist" subclass of CG models (i.e., one bead per amino acid, Cα based). Extensions to nucleic acids and different levels of coarse graining are in progress.
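
    Of the two algorithms AsParaGS combines, iterative Boltzmann inversion (IBI) is the easier to sketch. One IBI step updates the potential by the log-ratio of the current CG distribution to the target distribution; the toy distributions and damping factor below are illustrative assumptions, not AsParaGS defaults.

    ```python
    import numpy as np

    def ibi_update(U, P_cg, P_target, kT=1.0, damping=0.2, eps=1e-12):
        """One iterative Boltzmann inversion step:
        U_new(r) = U(r) + damping * kT * ln(P_cg(r) / P_target(r))."""
        return U + damping * kT * np.log((P_cg + eps) / (P_target + eps))

    # Toy target: a Gaussian bond-length distribution. Pretend the CG run
    # reproduced a slightly-too-wide distribution; the update then deepens
    # the potential well to narrow it.
    r = np.linspace(0.8, 1.2, 81)
    P_target = np.exp(-((r - 1.0) ** 2) / (2 * 0.02 ** 2))
    P_target /= P_target.sum()
    P_cg = np.exp(-((r - 1.0) ** 2) / (2 * 0.03 ** 2))
    P_cg /= P_cg.sum()

    U = -np.log(P_target + 1e-12)      # initial guess: Boltzmann inversion
    U = ibi_update(U, P_cg, P_target)
    print(np.round(U[::20], 3))
    ```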

  6. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

    This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface that detects head movement from the changing positions and numbers of light sources on the head. When the user browses a computer screen through the head-mounted display, the system captures images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the computer program locates the center point of each pupil in the images, and records information on moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, and the system captures images of the user's head with a CCD camera in front of the user. The computer program locates the center point of the head and transfers it to screen coordinates, so that the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface systems for virtual reality applications.

  7. Flotation machine and process for removing impurities from coals

    DOEpatents

    Szymocha, K.; Ignasiak, B.; Pawlak, W.; Kulik, C.; Lebowitz, H.E.

    1995-12-05

    The present invention is directed to a type of flotation machine that combines three separate operations in a single unit. The flotation machine is a hydraulic separator that is capable of reducing the pyrite and other mineral matter content of a coal. When the hydraulic separator is used with a flotation system, the pyrite and certain other mineral particles that may have been entrained by hydrodynamic forces associated with conventional flotation machines and/or by the attachment forces associated with the formation of microagglomerates are washed and separated from the coal. 4 figs.

  8. Flotation machine and process for removing impurities from coals

    DOEpatents

    Szymocha, Kazimierz; Ignasiak, Boleslaw; Pawlak, Wanda; Kulik, Conrad; Lebowitz, Howard E.

    1995-01-01

    The present invention is directed to a type of flotation machine that combines three separate operations in a single unit. The flotation machine is a hydraulic separator that is capable of reducing the pyrite and other mineral matter content of a coal. When the hydraulic separator is used with a flotation system, the pyrite and certain other mineral particles that may have been entrained by hydrodynamic forces associated with conventional flotation machines and/or by the attachment forces associated with the formation of microagglomerates are washed and separated from the coal.

  9. Flotation machine and process for removing impurities from coals

    DOEpatents

    Szymocha, K.; Ignasiak, B.; Pawlak, W.; Kulik, C.; Lebowitz, H.E.

    1997-02-11

    The present invention is directed to a type of flotation machine that combines three separate operations in a single unit. The flotation machine is a hydraulic separator that is capable of reducing the pyrite and other mineral matter content of a coal. When the hydraulic separator is used with a flotation system, the pyrite and certain other mineral particles that may have been entrained by hydrodynamic forces associated with conventional flotation machines and/or by the attachment forces associated with the formation of microagglomerates are washed and separated from the coal. 4 figs.

  10. Flotation machine and process for removing impurities from coals

    DOEpatents

    Szymocha, Kazimierz; Ignasiak, Boleslaw; Pawlak, Wanda; Kulik, Conrad; Lebowitz, Howard E.

    1997-01-01

    The present invention is directed to a type of flotation machine that combines three separate operations in a single unit. The flotation machine is a hydraulic separator that is capable of reducing the pyrite and other mineral matter content of a coal. When the hydraulic separator is used with a flotation system, the pyrite and certain other mineral particles that may have been entrained by hydrodynamic forces associated with conventional flotation machines and/or by the attachment forces associated with the formation of microagglomerates are washed and separated from the coal.

  11. A hybrid method with deviational particles for spatial inhomogeneous plasma

    NASA Astrophysics Data System (ADS)

    Yan, Bokai

    2016-03-01

    In this work we propose a Hybrid method with Deviational Particles (HDP) for a plasma modeled by the inhomogeneous Vlasov-Poisson-Landau system. We split the distribution into a Maxwellian part evolved by a grid-based fluid solver and a deviation part simulated by numerical particles. These particles, named deviational particles, can be either positive or negative. We combine the Monte Carlo method proposed in [31], a Particle-in-Cell method and a Macro-Micro decomposition method [3] to design an efficient hybrid method. Furthermore, coarse particles are employed to accelerate the simulation. A particle resampling technique on both deviational particles and coarse particles is also investigated and improved. This method is applicable in all regimes and is significantly more efficient compared to a PIC-DSMC method near the fluid regime.

  12. An efficient cloud detection method for high resolution remote sensing panchromatic imagery

    NASA Astrophysics Data System (ADS)

    Li, Chaowei; Lin, Zaiping; Deng, Xinpu

    2018-04-01

    To increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. The method includes three main steps. First, an adaptive intensity threshold combined with a median filter is adopted to extract coarse cloud regions. Second, a guided filtering process is conducted to strengthen the differences in textural features, and texture detection is then performed via a gray-level co-occurrence matrix on the acquired texture detail image. Finally, the candidate cloud regions are extracted as the intersection of the two coarse cloud regions above, and we further apply an adaptive morphological dilation to refine them for thin clouds at the boundaries. The experimental results demonstrate the effectiveness of the proposed method.
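
    The three-step scheme can be sketched with standard image-processing primitives. The snippet below is a simplified analogue: the mean-plus-k-sigma threshold, the single whole-scene GLCM test, and the contrast cutoff are assumptions standing in for the paper's adaptive, per-region choices.

    ```python
    import numpy as np
    from scipy.ndimage import binary_dilation, median_filter
    from skimage.feature import graycomatrix, graycoprops

    def detect_clouds(img, k=1.5, contrast_max=500.0):
        """Sketch of the three-step scheme on an 8-bit panchromatic image."""
        # Step 1: adaptive intensity threshold plus median filtering
        # yields the coarse cloud mask.
        bright = (img > img.mean() + k * img.std()).astype(np.uint8)
        coarse = median_filter(bright, size=5).astype(bool)

        # Step 2: texture check via a gray-level co-occurrence matrix;
        # cloud fields tend to be smooth, so very high contrast suggests
        # the bright pixels are not clouds.
        glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        if graycoprops(glcm, "contrast")[0, 0] > contrast_max:
            coarse[:] = False

        # Step 3: morphological dilation recovers thin clouds at borders.
        return binary_dilation(coarse, iterations=2)

    img = (np.random.default_rng(8).random((128, 128)) * 255).astype(np.uint8)
    print(detect_clouds(img).sum(), "pixels flagged as cloud")
    ```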

  13. Trace element distribution in mineral separates of the Allende inclusions and their genetic implications

    NASA Technical Reports Server (NTRS)

    Nagasawa, H.; Blanchard, D. P.; Jacobs, J. W.; Brannon, J. C.; Philpotts, J. A.; Onuma, N.

    1977-01-01

    Concentrations of the rare earth elements (REE), Sc, Co, Fe, Zn, Ir, Na, and Cr were determined for mineral separates of the coarse- and fine-grained types (groups I and II) of the Allende inclusions. These data, in combination with other data, suggest that the minerals in the coarse-grained inclusions (group I) crystallized in a closed system with respect to refractory elements, although a totally molten stage is precluded. The data also indicate that the fine-grained (group II) inclusions were formed by condensation from a super-cooled nebular gas; REE-rich clinopyroxene and spinel were formed earlier than REE-poor sodalite and nepheline. In addition, pre-existing Mg isotope anomalies in the coarse-grained inclusions must have been erased during the heating stage.

  14. Tectonic control on coarse-grained foreland-basin sequences: An example from the Cordilleran foreland basin, Utah

    NASA Astrophysics Data System (ADS)

    Horton, Brian K.; Constenius, Kurt N.; Decelles, Peter G.

    2004-07-01

    Newly released reflection seismic and borehole data, combined with sedimentological, provenance, and biostratigraphic data from Upper Cretaceous to Paleocene strata in the proximal part of the Cordilleran foreland-basin system in Utah, establish the nature of tectonic controls on stratigraphic sequences in the proximal to distal foreland basin. During Campanian time, coarse-grained sand and gravel were derived from the internally shortening Charleston-Nebo salient of the Sevier thrust belt. A rapid, regional Campanian progradational event in the distal foreland basin (>200 km from the thrust belt in <8 m.y.) can be tied directly to active thrust-generated growth structures and an influx of quartzose detritus derived from the Charleston-Nebo salient. Eustatic sea-level variation exerted a minimal role in sequence progradation.

  15. Lathe tool bit and holder for machining fiberglass materials

    NASA Technical Reports Server (NTRS)

    Winn, L. E. (Inventor)

    1972-01-01

    A lathe tool and holder combination for machining resin impregnated fiberglass cloth laminates is described. The tool holder and tool bit combination is designed to accommodate a conventional carbide-tipped, round shank router bit as the cutting medium, and provides an infinite number of cutting angles in order to produce a true and smooth surface in the fiberglass material workpiece with every pass of the tool bit. The technique utilizes damaged router bits which ordinarily would be discarded.

  16. SAINT: A combined simulation language for modeling man-machine systems

    NASA Technical Reports Server (NTRS)

    Seifert, D. J.

    1979-01-01

    SAINT (Systems Analysis of Integrated Networks of Tasks) is a network modeling and simulation technique for the design and analysis of complex man-machine systems. SAINT provides the conceptual framework for representing systems that consist of discrete task elements, continuous state variables, and interactions between them. It also provides a mechanism for combining human performance models and dynamic system behaviors in a single modeling structure. The SAINT technique is described and applications of SAINT are discussed.

  17. Position Paper: Applying Machine Learning to Software Analysis to Achieve Trusted, Repeatable Scientific Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prowell, Stacy J; Symons, Christopher T

    2015-01-01

    Producing trusted results from high-performance codes is essential for policy and has significant economic impact. We propose combining rigorous analytical methods with machine learning techniques to achieve the goal of repeatable, trustworthy scientific computing.

  18. Switching Circuit for Shop Vacuum System

    NASA Technical Reports Server (NTRS)

    Burley, R. K.

    1987-01-01

    No internal connections to machine tools required. Switching circuit controls vacuum system that draws debris from grinders and sanders in machine shop. Circuit automatically turns on vacuum system whenever at least one sander or grinder is operating. Debris safely removed, even when operator neglects to turn on vacuum system manually. Pickup coils sense alternating magnetic fields just outside operating machines. Signal from any coil or combination of coils causes vacuum system to be turned on.

  19. A Hybrid Method for Opinion Finding Task (KUNLP at TREC 2008 Blog Track)

    DTIC Science & Technology

    2008-11-01

    retrieve relevant documents. For the Opinion Retrieval subtask, we propose a hybrid model of a lexicon-based approach and a machine learning approach for estimating and ranking the opinionated documents. For the Polarized Opinion Retrieval subtask, we employ machine learning for predicting the polarity and a linear combination technique for ranking polar documents. The hybrid model utilizes both the lexicon-based approach and the machine learning approach.

  20. Pellet to Part Manufacturing System for CNCs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roschli, Alex C.; Love, Lonnie J.; Post, Brian K.

    Oak Ridge National Laboratory’s Manufacturing Demonstration Facility worked with Hybrid Manufacturing Technologies to develop a compact prototype composite additive manufacturing head that can effectively extrude injection molding pellets. The head interfaces with conventional CNC machine tools enabling rapid conversion of conventional machine tools to additive manufacturing tools. The intent was to enable wider adoption of Big Area Additive Manufacturing (BAAM) technology and combine BAAM technology with conventional machining systems.

  1. A single-phase axially-magnetized permanent-magnet oscillating machine for miniature aerospace power sources

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Cheng, Luming; Wang, Weinan; Liu, Jiaqi

    2017-05-01

    A single-phase axially-magnetized permanent-magnet (PM) oscillating machine, which can be integrated with a free-piston Stirling engine to generate electric power, is investigated for miniature aerospace power sources. The machine structure, operating principle and detent force characteristic are studied in detail. With the sinusoidal speed characteristic of the mover considered, the proposed machine is designed by 2D finite-element analysis (FEA), and the main structural parameters, such as the air gap diameter, the dimensions of the PMs, the pole pitches of both stator and mover, and the pole-pitch combinations, are optimized to improve both the power density and the force capability. Compared with three-phase PM linear machines, the proposed single-phase machine features less PM use, simple control and low controller cost. The power density of the proposed machine is higher than that of the three-phase radially-magnetized PM linear machine, but lower than that of the three-phase axially-magnetized PM linear machine.

  2. Adiabatic Quantum Anomaly Detection and Machine Learning

    NASA Astrophysics Data System (ADS)

    Pudenz, Kristen; Lidar, Daniel

    2012-02-01

    We present methods of anomaly detection and machine learning using adiabatic quantum computing. The machine learning algorithm is a boosting approach which seeks to optimally combine somewhat accurate classification functions to create a unified classifier which is much more accurate than its components. This algorithm then becomes the first part of the larger anomaly detection algorithm. In the anomaly detection routine, we first use adiabatic quantum computing to train two classifiers which detect two sets, the overlap of which forms the anomaly class. We call this the learning phase. Then, in the testing phase, the two learned classification functions are combined to form the final Hamiltonian for an adiabatic quantum computation, the low energy states of which represent the anomalies in a binary vector space.

  3. Research on the method of improving the accuracy of CMM (coordinate measuring machine) testing aspheric surface

    NASA Astrophysics Data System (ADS)

    Cong, Wang; Xu, Lingdi; Li, Ang

    2017-10-01

    Large aspheric surfaces, which deviate from a spherical shape, are widely used in a variety of optical systems. Compared with spherical surfaces, large aspheric surfaces have many advantages, such as improving image quality, correcting aberration, expanding the field of view, increasing the effective distance and making the optical system compact and lightweight. Especially with the rapid development of space optics, higher space-sensor resolution and larger viewing angles are required, so aspheric surfaces will become essential components of such optical systems. After coarse grinding of an aspheric surface, the surface profile error is about tens of microns [1]. In order to achieve the final requirement of surface accuracy, the aspheric surface must be modified quickly, and high-precision testing is the basis of rapid convergence of the surface error. There are many methods for aspheric surface testing [2], such as geometric ray detection, Hartmann detection, the Ronchi test, the knife-edge method, direct profile testing and interferometry, but all of them have disadvantages [6]. In recent years, the measurement of aspheric surfaces has become one of the important factors restricting the development of aspheric surface processing. A two-meter-aperture industrial CMM (coordinate measuring machine) is available, but it has drawbacks such as large detection error and low repeatability in the measurement of aspheric surfaces during coarse grinding, which seriously affects the convergence efficiency during aspherical mirror processing. To solve those problems, this paper presents an effective error control, calibration and removal method based on real-time monitoring of the calibration mirror position and other error-control means, on probe correction, and on the development of a measurement-point distribution program for the selected measurement mode. Verified on real engineering examples, this method improves the nominal measurement accuracy of the original industrial-grade coordinate measuring machine from a PV value of 7 microns to 4 microns, which effectively improves the grinding efficiency of aspheric mirrors and verifies the correctness of the method. This paper also investigates the error detection and operation control method, the error calibration of the CMM, and the random error calibration of the CMM.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Aiying; Liu, Jiabin; Wang, Hongtao

    Gradient materials often have attractive mechanical properties that outperform uniform-microstructure counterparts. It remains a difficult task to investigate and compare the performance of various gradient microstructures due to the difficulty of fabrication, the wide range of length scales involved, and their respective volume percentage variations. We have investigated four types of gradient microstructures in 304 stainless steels that utilize submicrotwins, nanotwins, and nanocrystalline, ultrafine, and coarse grains as building blocks. Tensile tests reveal that the gradient microstructure consisting of submicrotwins and nanotwins has a persistent and stable work hardening rate and yields an impressive combination of high strength and high ductility, leading to a toughness that is nearly 50% higher than that of the coarse-grained counterpart. Ex- and in-situ transmission electron microscopy indicates that nanoscale and submicroscale twins help to suppress and limit martensitic phase transformation via the confinement of martensite within the twin lamellae. Twinning and detwinning remain active during tensile deformation and contribute to the work hardening behavior. We discuss the advantageous properties of using submicrotwins as the main load carrier and nanotwins as the strengthening layers over those of coarse and nanocrystalline grains. Furthermore, our work uncovers a new gradient design strategy to help metals and alloys achieve high strength and high ductility.

  5. "Martinizing" the Variational Implicit Solvent Method (VISM): Solvation Free Energy for Coarse-Grained Proteins.

    PubMed

    Ricci, Clarisse G; Li, Bo; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J Andrew

    2017-07-13

    Solvation is a fundamental driving force in many biological processes including biomolecular recognition and self-assembly, not to mention protein folding, dynamics, and function. The variational implicit solvent method (VISM) is a theoretical tool currently developed and optimized to estimate solvation free energies for systems of very complex topology, such as biomolecules. VISM's theoretical framework makes it unique because it couples hydrophobic, van der Waals, and electrostatic interactions as a functional of the solvation interface. By minimizing this functional, VISM produces the solvation interface as an output of the theory. In this work, we push VISM to larger scale applications by combining it with coarse-grained solute Hamiltonians adapted from the MARTINI framework, a well-established mesoscale force field for modeling large-scale biomolecule assemblies. We show how MARTINI-VISM (MVISM) compares with atomistic VISM (AVISM) for a small set of proteins differing in size, shape, and charge distribution. We also demonstrate MVISM's suitability to study the solvation properties of an interesting encounter complex, barnase-barstar. The promising results suggest that coarse-graining the protein with the MARTINI force field is indeed a valuable step to broaden VISM's and MARTINI's applications in the near future.

  6. Predicting mesh density for adaptive modelling of the global atmosphere.

    PubMed

    Weller, Hilary

    2009-11-28

    The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and will thus allow for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1-20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
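
    As a rough illustration of the resolved part of such a criterion, the sketch below flags cells of a coarse forecast where the vorticity gradient is large; the uniform 2D grid and the threshold are assumptions, and the paper's polygonal spherical mesh and the passively advected unresolved component are not modeled.

    ```python
    import numpy as np

    def refinement_flags(vorticity, dx, threshold):
        """Flag cells of a uniform 2D coarse grid for refinement where
        the magnitude of the vorticity gradient exceeds a threshold."""
        dzdy, dzdx = np.gradient(vorticity, dx)   # finite-difference gradient
        return np.hypot(dzdx, dzdy) > threshold   # boolean refinement mask
    ```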

  7. Evaluation of an improved finite-element thermal stress calculation technique

    NASA Technical Reports Server (NTRS)

    Camarda, C. J.

    1982-01-01

    A procedure for generating accurate thermal stresses with coarse finite element grids (Ojalvo's method) is described. The procedure is based on the observation that for linear thermoelastic problems, the thermal stresses may be envisioned as being composed of two contributions; the first due to the strains in the structure which depend on the integral of the temperature distribution over the finite element and the second due to the local variation of the temperature in the element. The first contribution can be accurately predicted with a coarse finite-element mesh. The resulting strain distribution can then be combined via the constitutive relations with detailed temperatures from a separate thermal analysis. The result is accurate thermal stresses from coarse finite element structural models even where the temperature distributions have sharp variations. The range of applicability of the method for various classes of thermostructural problems such as in-plane or bending type problems and the effect of the nature of the temperature distribution and edge constraints are addressed. Ojalvo's method is used in conjunction with the SPAR finite element program. Results are obtained for rods, membranes, a box beam and a stiffened panel.
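
    In one dimension the decomposition can be written schematically as follows (illustrative notation, not the paper's): the coarse finite-element model supplies the smooth strain field, a separate thermal analysis supplies the sharply varying temperature, and the constitutive relation combines them.

    ```latex
    % Schematic 1-D thermoelastic stress split (illustrative notation):
    \sigma(x) = E\left[\,\varepsilon_{\text{coarse}}(x) - \alpha\,T_{\text{detailed}}(x)\,\right]
    ```

    Because the strain varies smoothly it is captured well on a coarse mesh, while the temperature term carries the sharp local variation.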

  8. A coarse-to-fine approach for medical hyperspectral image classification with sparse representation

    NASA Astrophysics Data System (ADS)

    Chang, Lan; Zhang, Mengmeng; Li, Wei

    2017-10-01

    A coarse-to-fine approach with sparse representation is proposed for medical hyperspectral image classification in this work. A segmentation technique with different scales is employed to exploit the edges of the input image, where coarse super-pixel patches provide global classification information while fine ones provide further detail. Unlike a common RGB image, a hyperspectral image has multiple bands, which allows the cluster centers to be adjusted with higher precision. After segmentation, each super-pixel is classified by the recently developed sparse representation-based classification (SRC), which assigns labels to testing samples in a local patch by means of a sparse linear combination of all the training samples. Furthermore, segmentation with multiple scales is employed because a single scale is not suitable for the complicated distribution of medical hyperspectral imagery. Finally, the classification results for different super-pixel sizes are fused, offering at least two benefits: (1) the final result is clearly superior to that of single-scale segmentation, and (2) the fusion process significantly simplifies the choice of scales. Experimental results using real medical hyperspectral images demonstrate that the proposed method outperforms state-of-the-art SRC.
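
    A minimal sketch of the SRC step for one super-pixel is given below, with scikit-learn's Lasso standing in for whatever l1 solver the authors used; the dictionary layout and parameter values are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def src_classify(A, class_ids, y, alpha=0.01):
        """A: (n_bands, n_train) dictionary whose columns are training
        spectra; class_ids: (n_train,) class of each column; y: (n_bands,)
        test spectrum. Returns the class whose training samples best
        reconstruct y from the sparse code."""
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        x = coder.fit(A, y).coef_                 # sparse linear combination
        residuals = {c: np.linalg.norm(y - A[:, class_ids == c] @ x[class_ids == c])
                     for c in np.unique(class_ids)}
        return min(residuals, key=residuals.get)  # minimum-residual class
    ```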

  9. An evaluation of open set recognition for FLIR images

    NASA Astrophysics Data System (ADS)

    Scherreik, Matthew; Rigling, Brian

    2015-05-01

    Typical supervised classification algorithms label inputs according to what was learned in a training phase. Thus, test inputs that were not seen in training are always given incorrect labels. Open set recognition algorithms address this issue by accounting for inputs that are not present in training and providing the classifier with an option to "reject" unknown samples. A number of such techniques have been developed in the literature, many of which are based on support vector machines (SVMs). One approach, the 1-vs-set machine, constructs a "slab" in feature space using the SVM hyperplane. Inputs falling on one side of the slab or within the slab belong to a training class, while inputs falling on the far side of the slab are rejected. We note that rejection of unknown inputs can be achieved by thresholding class posterior probabilities. Another recently developed approach, the Probabilistic Open Set SVM (POS-SVM), empirically determines good probability thresholds. We apply the 1-vs-set machine, POS-SVM, and closed set SVMs to FLIR images taken from the Comanche SIG dataset. Vehicles in the dataset are divided into three general classes: wheeled, armored personnel carrier (APC), and tank. For each class, a coarse pose estimate (front, rear, left, right) is taken. In a closed set sense, we analyze these algorithms for prediction of vehicle class and pose. To test open set performance, one or more vehicle classes are held out from training. By considering closed and open set performance separately, we may closely analyze both inter-class discrimination and threshold effectiveness.
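
    The posterior-thresholding idea mentioned above can be sketched as follows; the Platt-scaled SVM, the threshold value, and the integer class labels are assumptions rather than the thresholds POS-SVM actually learns.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def open_set_predict(clf, X, threshold=0.7):
        """Reject inputs whose maximum class posterior falls below the
        threshold. clf: an SVC trained with probability=True; class
        labels are assumed to be integers. Returns predicted labels,
        with -1 marking rejected 'unknown' inputs."""
        probs = clf.predict_proba(X)                   # Platt-scaled posteriors
        labels = clf.classes_[probs.argmax(axis=1)].copy()
        labels[probs.max(axis=1) < threshold] = -1     # below threshold: reject
        return labels

    # clf = SVC(probability=True).fit(X_train, y_train)   # closed-set training
    ```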

  10. Machines That Teach.

    ERIC Educational Resources Information Center

    Sniecinski, Jozef

    This paper reviews efforts which have been made to improve the effectiveness of teaching through the development of principles of programed teaching and the construction of teaching machines, concluding that a combination of computer technology and programed teaching principles offers an efficient approach to improving teaching. Three different…

  11. Radiation injury vs. recurrent brain metastasis: combining textural feature radiomics analysis and standard parameters may increase 18F-FET PET accuracy without dynamic scans.

    PubMed

    Lohmann, Philipp; Stoffels, Gabriele; Ceccon, Garry; Rapp, Marion; Sabel, Michael; Filss, Christian P; Kamp, Marcel A; Stegmayr, Carina; Neumaier, Bernd; Shah, Nadim J; Langen, Karl-Josef; Galldiks, Norbert

    2017-07-01

    We investigated the potential of textural feature analysis of O-(2-[18F]fluoroethyl)-L-tyrosine (18F-FET) PET to differentiate radiation injury from brain metastasis recurrence. Forty-seven patients with contrast-enhancing brain lesions (n = 54) on MRI after radiotherapy of brain metastases underwent dynamic 18F-FET PET. Tumour-to-brain ratios (TBRs) of 18F-FET uptake and 62 textural parameters were determined on summed images 20-40 min post-injection. Tracer uptake kinetics, i.e., time-to-peak (TTP) and patterns of time-activity curves (TAC), were evaluated on dynamic PET data from 0-50 min post-injection. The diagnostic accuracy of the investigated parameters, and of combinations thereof, to discriminate between brain metastasis recurrence and radiation injury was compared. Diagnostic accuracy increased from 81% for TBRmean alone to 85% when combined with the textural parameter Coarseness or Short-zone emphasis. The accuracy of TBRmax alone was 83% and increased to 85% after combination with the textural parameters Coarseness, Short-zone emphasis, or Correlation. Analysis of TACs resulted in an accuracy of 70% for kinetic pattern alone, increasing to 83% when combined with TBRmax. Textural feature analysis in combination with TBRs may have the potential to increase diagnostic accuracy for discrimination between brain metastasis recurrence and radiation injury, without the need for dynamic 18F-FET PET scans. • Textural feature analysis provides quantitative information about tumour heterogeneity • Textural features help improve discrimination between brain metastasis recurrence and radiation injury • Textural features might be helpful to further understand tumour heterogeneity • Analysis does not require a more time consuming dynamic PET acquisition.

  12. High efficiency machining technology and equipment for edge chamfer of KDP crystals

    NASA Astrophysics Data System (ADS)

    Chen, Dongsheng; Wang, Baorui; Chen, Jihong

    2016-10-01

    Potassium dihydrogen phosphate (KDP) is a type of nonlinear optical crystal material. To inhibit transverse stimulated Raman scattering of the laser beam and thus enhance the optical performance of the optics, the edges of the large-sized KDP crystal need to be removed to form chamfered faces with high surface quality (RMS < 5 nm). However, as the depth of cut (DOC) in fly cutting is usually only several micrometers, its machining efficiency is too low to be acceptable for chamfering the KDP crystal, since the amount of material to be removed is on the order of millimeters. This paper proposes a novel hybrid machining method, which combines precision grinding with fly cutting, for crack-free and high-efficiency chamfering of the KDP crystal. A specialized machine tool, which adopts an aerostatic-bearing linear slide and an aerostatic-bearing spindle, was developed for chamfering the KDP crystal. The aerostatic-bearing linear slide consists of an aerostatic bearing guide with a linearity of 0.1 μm/100 mm and a linear motor to achieve linear feeding with high precision and high dynamic performance. The vertical spindle consists of an aerostatic-bearing spindle with an axial rotation accuracy of 0.05 μm and a fork-type flexible-connection precision drive mechanism. Machining experiments on fly cutting and grinding were carried out, and optimized machining parameters were obtained through a series of experiments. A surface roughness of 2.4 nm has been obtained. The machining efficiency can be improved sixfold using the combined method while producing the same machined surface quality.

  13. Effective Information Extraction Framework for Heterogeneous Clinical Reports Using Online Machine Learning and Controlled Vocabularies.

    PubMed

    Zheng, Shuai; Lu, James J; Ghasemzadeh, Nima; Hayek, Salim S; Quyyumi, Arshed A; Wang, Fusheng

    2017-05-09

    Extracting structured data from narrated medical reports is challenged by the complexity of heterogeneous structures and vocabularies and often requires significant manual effort. Traditional machine-based approaches lack the capability to take user feedback for improving the extraction algorithm in real time. Our goal was to provide a generic information extraction framework that can support diverse clinical reports and enables a dynamic interaction between a human and a machine that produces highly accurate results. A clinical information extraction system, IDEAL-X, has been built on top of online machine learning. It processes one document at a time, and user interactions are recorded as feedback to update the learning model in real time. The updated model is used to predict values for extraction in subsequent documents. Once prediction accuracy reaches a user-acceptable threshold, the remaining documents may be batch processed. A customizable controlled vocabulary may be used to support extraction. Three datasets were used for experiments based on report styles: 100 cardiac catheterization procedure reports, 100 coronary angiographic reports, and 100 integrated reports, each combining a history and physical report, discharge summary, outpatient clinic notes, an outpatient clinic letter, and an inpatient discharge medication report. Data extraction was performed by 3 methods: online machine learning, controlled vocabularies, and a combination of these. The system delivers results with F1 scores greater than 95%. IDEAL-X adopts a unique online machine learning-based approach combined with controlled vocabularies to support data extraction for clinical reports. The system can quickly learn and improve, thus it is highly adaptable. ©Shuai Zheng, James J Lu, Nima Ghasemzadeh, Salim S Hayek, Arshed A Quyyumi, Fusheng Wang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 09.05.2017.
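
    The interaction loop can be sketched as below. IDEAL-X's actual model and features are not described in this abstract, so the SGD classifier, the binary label scheme, and the stream_of_documents iterator are placeholders.

    ```python
    from sklearn.linear_model import SGDClassifier

    # Placeholder online loop: predict one document, let the user correct
    # the prediction, and update the model immediately with that feedback.
    model = SGDClassifier(loss="log_loss")
    classes = [0, 1]                # e.g. token is / is not the target field

    for features, user_label in stream_of_documents():  # hypothetical iterator
        if hasattr(model, "coef_"):                     # model initialized?
            prediction = model.predict([features])      # shown to the user
        model.partial_fit([features], [user_label], classes=classes)
    ```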

  14. Peculiar Traits of Coarse AP (Briefing Charts)

    DTIC Science & Technology

    2014-12-01

    Briefing-chart text fragments (slides): coarse AP (Bircumshaw, Newman); active centers are sources of AP decomposition gases; AP low-temperature decomposition (LTD); most unstable AP particles; delay before coarse AP ejection; coarse AP particle flame retardancy; combustion bomb trials; AP phase change may enable coarse particle breakage; fractured coarse AP ejection agrees. Distribution A: approved for public release; distribution unlimited.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Yinan; Shi Handuo; Xiong Zhaoxi

    We present a unified universal quantum cloning machine, which combines several different existing universal cloning machines, including the asymmetric case. In this unified framework, the identical pure states are projected equally into each copy initially constituted by the input and one half of the maximally entangled states. We show explicitly that the output states of those universal cloning machines are the same. One important feature of this unified cloning machine is that the cloning process is always a symmetric projection, which dramatically reduces the difficulty of implementation. It is also found that this unified cloning machine can be directly modified to the general asymmetric case. Besides the global fidelity and the single-copy fidelity, we also present all possible arbitrary-copy fidelities.

  16. A Two-Layer Least Squares Support Vector Machine Approach to Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Liu, Jingli; Li, Jianping; Xu, Weixuan; Shi, Yong

    Least squares support vector machine (LS-SVM) is a revised version of the support vector machine (SVM) and has been proved to be a useful tool for pattern recognition. LS-SVM has excellent generalization performance and low computational cost. In this paper, we propose a new method called the two-layer least squares support vector machine, which combines kernel principal component analysis (KPCA) with the linear programming form of the least squares support vector machine. With this method, sparseness and robustness are obtained when solving high-dimensional, large-scale database problems. A U.S. commercial credit card database is used to test the efficiency of our method, and the results proved satisfactory.
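
    A rough sketch of the two-layer structure is shown below. scikit-learn provides no LS-SVM, so RidgeClassifier, which solves the analogous least-squares problem in the linear case, stands in for the second layer; the component count and kernel choice are illustrative.

    ```python
    from sklearn.decomposition import KernelPCA
    from sklearn.linear_model import RidgeClassifier
    from sklearn.pipeline import make_pipeline

    # Layer 1: kernel PCA compresses the high-dimensional credit features.
    # Layer 2: a least-squares linear classifier scores the reduced features.
    two_layer = make_pipeline(
        KernelPCA(n_components=20, kernel="rbf"),
        RidgeClassifier(alpha=1.0),
    )
    # two_layer.fit(X_train, y_train); two_layer.score(X_test, y_test)
    ```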

  17. A general method for the derivation of the functional forms of the effective energy terms in coarse-grained energy functions of polymers. II. Backbone-local potentials of coarse-grained O1→4-bonded polyglucose chains

    NASA Astrophysics Data System (ADS)

    Lubecka, Emilia A.; Liwo, Adam

    2017-09-01

    Based on the theory of the construction of coarse-grained force fields for polymer chains described in our recent work [A. K. Sieradzan et al., J. Chem. Phys. 146, 124106 (2017)], in this work effective coarse-grained potentials, to be used in the SUGRES-1P model of polysaccharides that is being developed in our laboratory, have been determined for the O⋯O⋯O virtual-bond angles (θ) and for the dihedral angles for rotation about the O⋯O virtual bonds (γ) of 1→4-linked glucosyl polysaccharides, for all possible combinations of [α,β]-[d,l]-glucose. The potentials of mean force corresponding to the virtual-bond angles and the virtual-bond dihedral angles were calculated from the free-energy surfaces of [α,β]-[d,l]-glucose pairs, determined by umbrella-sampling molecular-dynamics simulations with the AMBER12 force field, or combinations of the surfaces of two pairs sharing the overlapping residue, respectively, by integrating the respective Boltzmann factor over the dihedral angles λ for the rotation of the sugar units about the O⋯O virtual bonds. Analytical expressions were subsequently fitted to the potentials of mean force. The virtual-bond-torsional potentials depend on both virtual-bond-dihedral angles and virtual-bond angles. The virtual-bond-angle potentials contain a single minimum at about θ = 140° for all pairs except β-d-[α,β]-l-glucose, where the global minimum is shifted to θ = 150° and a secondary minimum appears at θ = 90°. The torsional potentials favor small negative γ angles for the α-d-glucose and extended negative angles γ for the β-d-glucose chains, as observed in the experimental structures of starch and cellulose, respectively. It was also demonstrated that the approximate expression derived based on Kubo's cluster-cumulant theory, whose coefficients depend on the identity of the disugar units comprising a trisugar unit that defines a torsional potential, fits simultaneously all torsional potentials very well, thus reducing the number of parameters significantly.

  18. Zooniverse: Combining Human and Machine Classifiers for the Big Survey Era

    NASA Astrophysics Data System (ADS)

    Fortson, Lucy; Wright, Darryl; Beck, Melanie; Lintott, Chris; Scarlata, Claudia; Dickinson, Hugh; Trouille, Laura; Willi, Marco; Laraia, Michael; Boyer, Amy; Veldhuis, Marten; Zooniverse

    2018-01-01

    Many analyses of astronomical data sets, ranging from morphological classification of galaxies to identification of supernova candidates, have relied on humans to classify data into distinct categories. Crowdsourced galaxy classifications via the Galaxy Zoo project provided a solution that scaled visual classification for extant surveys by harnessing the combined power of thousands of volunteers. However, the much larger data sets anticipated from upcoming surveys will require a different approach. Automated classifiers using supervised machine learning have improved considerably over the past decade but their increasing sophistication comes at the expense of needing ever more training data. Crowdsourced classification by human volunteers is a critical technique for obtaining these training data. But several improvements can be made on this zeroth order solution. Efficiency gains can be achieved by implementing a “cascade filtering” approach whereby the task structure is reduced to a set of binary questions that are more suited to simpler machines while demanding lower cognitive loads for humans. Intelligent subject retirement based on quantitative metrics of volunteer skill and subject label reliability also leads to dramatic improvements in efficiency. We note that human and machine classifiers may retire subjects differently leading to trade-offs in performance space. Drawing on work with several Zooniverse projects including Galaxy Zoo and Supernova Hunter, we will present recent findings from experiments that combine cohorts of human and machine classifiers. We show that the most efficient system results when appropriate subsets of the data are intelligently assigned to each group according to their particular capabilities. With sufficient online training, simple machines can quickly classify “easy” subjects, leaving more difficult (and discovery-oriented) tasks for volunteers. We also find humans achieve higher classification purity while samples produced by machines are typically more complete. These findings set the stage for further investigations, with the ultimate goal of efficiently and accurately labeling the wide range of data classes that will arise from the planned large astronomical surveys.

  19. A tubular flux-switching permanent magnet machine

    NASA Astrophysics Data System (ADS)

    Wang, J.; Wang, W.; Clark, R.; Atallah, K.; Howe, D.

    2008-04-01

    The paper describes a novel tubular, three-phase permanent magnet brushless machine, which combines salient features from both switched reluctance and permanent magnet machine technologies. It has no end windings and zero net radial force and offers a high power density and peak force capability, as well as the potential for low manufacturing cost. It is, therefore, eminently suitable for a variety of applications, ranging from free-piston energy converters to active vehicle suspensions.

  20. Mesoscale Simulation and Machine Learning of Asphaltene Aggregation Phase Behavior and Molecular Assembly Landscapes.

    PubMed

    Wang, Jiang; Gayatri, Mohit A; Ferguson, Andrew L

    2017-05-11

    Asphaltenes constitute the heaviest fraction of the aromatic group in crude oil. Aggregation and precipitation of asphaltenes during petroleum processing costs the petroleum industry billions of dollars each year due to downtime and production inefficiencies. Asphaltene aggregation proceeds via a hierarchical self-assembly process that is well-described by the Yen-Mullins model. Nevertheless, the microscopic details of the emergent cluster morphologies and their relative stability under different processing conditions remain poorly understood. We perform coarse-grained molecular dynamics simulations of a prototypical asphaltene molecule to establish a phase diagram mapping the self-assembled morphologies as a function of temperature, pressure, and n-heptane:toluene solvent ratio, informing how to control asphaltene aggregation by regulating external processing conditions. We then combine our simulations with graph matching and nonlinear manifold learning to determine low-dimensional free energy surfaces governing asphaltene self-assembly. In doing so, we introduce a variant of diffusion maps designed to handle data sets with large local density variations, and report the first application of many-body diffusion maps to molecular self-assembly to recover a pseudo-1D free energy landscape. Increasing pressure only weakly affects the landscape, serving only to destabilize the largest aggregates. Increasing temperature and toluene solvent fraction stabilizes small cluster sizes and loose bonding arrangements. Although the underlying molecular mechanisms differ, the strikingly similar effect of these variables on the free energy landscape suggests that toluene acts upon asphaltene self-assembly as an effective temperature.

  1. Technology of high-speed combined machining with brush electrode

    NASA Astrophysics Data System (ADS)

    Kirillov, O. N.; Smolentsev, V. P.; Yukhnevich, S. S.

    2018-03-01

    A new method is proposed for high-precision dimensional machining with a brush electrode, in which the true position of the bundles of metal wire is adjusted by creating controlled centrifugal forces that arise as the frequency of rotation of the tool is increased. There are limiting values of circumferential velocity at which the bundles are pressed against the machined area of a workpiece in a stable manner regardless of the profile of the machined surface and the variable stock of the workpiece. The special aspects of designing processing procedures for finishing standard parts, including components with low rigidity, are disclosed. The methodology for calculating and selecting processing modes that allow one to produce high-precision parts and to obtain the surface roughness required for finishing operations (including the preparation of a surface for metal deposition) is presented. The production experience with high-speed combined machining with an unshaped tool electrode in knowledge-intensive branches of the machine-building industry for different types of production is analyzed. It is shown that the implementation of high-speed dimensional machining with an unshaped brush electrode expands the field of use of the considered process owing to the application of a multipurpose tool in the form of a metal brush, yields stable finishing results, and provides opportunities for long-term operation of the equipment without changeover and readjustment.

  2. Design and experimental validation for direct-drive fault-tolerant permanent-magnet vernier machines.

    PubMed

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines achieve high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristics of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis.

  3. Block-Module Electric Machines of Alternating Current

    NASA Astrophysics Data System (ADS)

    Zabora, I.

    2018-03-01

    The paper deals with electric machines whose active zone is based on uniform elements. It presents data on disk-type asynchronous electric motors with short-circuited rotors, in which the active elements are made by an integrated technique that forms modular elements. Photolithography, spraying, stamping of windings, pressing of cores, and combined methods are utilized as the basic technological approaches of production. The construction and operating features of a new type of electric machine, the compatible electric machine-transformer, are considered. The induction motors are intended for operation in hermetic plants under extreme conditions: gas, steam-gas, and liquid environments at high temperature (up to several hundred degrees).

  4. Multiple performance characteristics optimization for Al 7075 on electric discharge drilling by Taguchi grey relational theory

    NASA Astrophysics Data System (ADS)

    Khanna, Rajesh; Kumar, Anish; Garg, Mohinder Pal; Singh, Ajit; Sharma, Neeraj

    2015-12-01

    Electric discharge drilling (EDDM) is a spark-erosion process for producing micro-holes in conductive materials. This process is widely used in the aerospace, medical, dental, and automobile industries. Evaluating the performance of an electric discharge drilling machine requires studying the process parameters of the machine tool. In this research paper, a brass rod of 2 mm diameter was selected as the tool electrode. The experiments generate output responses such as tool wear rate (TWR), and process parameters including pulse on-time, pulse off-time, and water pressure were studied to find the best machining characteristics. This investigation applies the Taguchi approach to improve TWR in drilling of Al-7075. A plan of experiments based on the L27 Taguchi design was selected for drilling the material. Analysis of variance (ANOVA) shows the percentage contribution of each control factor in the EDDM machining of Al-7075. The optimal combination of levels and the drilling parameters significant for TWR were obtained. The optimization results showed that the combination of maximum pulse on-time and minimum pulse off-time gives maximum MRR.
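
    Since TWR is a smaller-the-better response, the Taguchi signal-to-noise ratio takes the standard form sketched below; the replicate layout is illustrative.

    ```python
    import numpy as np

    def sn_smaller_is_better(y):
        """Smaller-the-better S/N ratio for one run of the L27 array.
        y: replicate TWR measurements for that run."""
        return -10.0 * np.log10(np.mean(np.square(y)))

    # The main effect of a factor level is the mean S/N over all runs at
    # that level; the optimal combination takes the highest-S/N level of
    # each factor (here: maximum pulse on-time, minimum pulse off-time).
    ```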

  5. Research on the EDM Technology for Micro-holes at Complex Spatial Locations

    NASA Astrophysics Data System (ADS)

    Y Liu, J.; Guo, J. M.; Sun, D. J.; Cai, Y. H.; Ding, L. T.; Jiang, H.

    2017-12-01

    To meet the demands of machining micro-holes at complex spatial locations, several key technical problems are solved, including the development of the micro-electrical discharge machining (micro-EDM) power supply system, the design of the host structure, and the machining process techniques. By developing a low-voltage power supply circuit, a high-voltage circuit, a micro and precision machining circuit, and a clearance detection system, a narrow-pulse, high-frequency six-axis EDM power supply system is developed to meet the demands of micro-hole discharge machining. By combining CAD structural design, CAE simulation analysis, modal testing, ODS (Operational Deflection Shapes) testing, and theoretical analysis, the host construction and key axes of the machine tool are optimized to meet the positioning demands of the micro-holes. A special deionized water filtration system is developed to ensure that the machining process is sufficiently stable. The machining equipment and process techniques developed in this paper are verified by developing the micro-hole processing flow and testing it on the real machine tool. The final test results show that the micro-EDM pulse power supply system, machine tool host system, deionized water filtration system, and processing method developed in this paper meet the demands of machining micro-holes at complex spatial locations.

  6. Predicting Protein-protein Association Rates using Coarse-grained Simulation and Machine Learning

    NASA Astrophysics Data System (ADS)

    Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao

    2017-04-01

    Protein-protein interactions dominate all major biological processes in living cells. We have developed a new Monte Carlo-based simulation algorithm to study the kinetic process of protein association. We tested our method on a previously used large benchmark set of 49 protein complexes. The predicted rate was overestimated in the benchmark test compared to the experimental results for a group of protein complexes. We hypothesized that this resulted from molecular flexibility at the interface regions of the interacting proteins. After applying a machine learning algorithm with input variables that accounted for both the conformational flexibility and the energetic factor of binding, we successfully identified most of the protein complexes with overestimated association rates and improved our final prediction by using a cross-validation test. This method was then applied to a new independent test set and resulted in a similar prediction accuracy to that obtained using the training set. It has been thought that diffusion-limited protein association is dominated by long-range interactions. Our results provide strong evidence that the conformational flexibility also plays an important role in regulating protein association. Our studies provide new insights into the mechanism of protein association and offer a computationally efficient tool for predicting its rate.

  7. Predicting Protein–protein Association Rates using Coarse-grained Simulation and Machine Learning

    PubMed Central

    Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao

    2017-01-01

    Protein–protein interactions dominate all major biological processes in living cells. We have developed a new Monte Carlo-based simulation algorithm to study the kinetic process of protein association. We tested our method on a previously used large benchmark set of 49 protein complexes. The predicted rate was overestimated in the benchmark test compared to the experimental results for a group of protein complexes. We hypothesized that this resulted from molecular flexibility at the interface regions of the interacting proteins. After applying a machine learning algorithm with input variables that accounted for both the conformational flexibility and the energetic factor of binding, we successfully identified most of the protein complexes with overestimated association rates and improved our final prediction by using a cross-validation test. This method was then applied to a new independent test set and resulted in a similar prediction accuracy to that obtained using the training set. It has been thought that diffusion-limited protein association is dominated by long-range interactions. Our results provide strong evidence that the conformational flexibility also plays an important role in regulating protein association. Our studies provide new insights into the mechanism of protein association and offer a computationally efficient tool for predicting its rate. PMID:28418043

  8. Predicting Protein-protein Association Rates using Coarse-grained Simulation and Machine Learning.

    PubMed

    Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao

    2017-04-18

    Protein-protein interactions dominate all major biological processes in living cells. We have developed a new Monte Carlo-based simulation algorithm to study the kinetic process of protein association. We tested our method on a previously used large benchmark set of 49 protein complexes. The predicted rate was overestimated in the benchmark test compared to the experimental results for a group of protein complexes. We hypothesized that this resulted from molecular flexibility at the interface regions of the interacting proteins. After applying a machine learning algorithm with input variables that accounted for both the conformational flexibility and the energetic factor of binding, we successfully identified most of the protein complexes with overestimated association rates and improved our final prediction by using a cross-validation test. This method was then applied to a new independent test set and resulted in a similar prediction accuracy to that obtained using the training set. It has been thought that diffusion-limited protein association is dominated by long-range interactions. Our results provide strong evidence that the conformational flexibility also plays an important role in regulating protein association. Our studies provide new insights into the mechanism of protein association and offer a computationally efficient tool for predicting its rate.

  9. Automatic Picking of Foraminifera: Design of the Foraminifera Image Recognition and Sorting Tool (FIRST) Prototype and Results of the Image Classification Scheme

    NASA Astrophysics Data System (ADS)

    de Garidel-Thoron, T.; Marchant, R.; Soto, E.; Gally, Y.; Beaufort, L.; Bolton, C. T.; Bouslama, M.; Licari, L.; Mazur, J. C.; Brutti, J. M.; Norsa, F.

    2017-12-01

    Foraminifera tests are the main proxy carriers for paleoceanographic reconstructions. Both geochemical and taxonomical studies require large numbers of tests to achieve statistical relevance. To date, the extraction of foraminifera from the sediment coarse fraction is still done by hand and is thus time-consuming. Moreover, the recognition of ecologically relevant morphotypes requires taxonomical skills that are not easily taught. The automatic recognition and extraction of foraminifera would greatly help paleoceanographers to overcome these issues. Recent advances in automatic image classification using machine learning open the way to automatic extraction of foraminifera. Here we detail progress on the design of an automatic picking machine as part of the FIRST project. The machine handles 30 pre-sieved samples (100-1000µm), separating them into individual particles (including foraminifera) and imaging each in pseudo-3D. The particles are classified and specimens of interest are sorted either for Individual Foraminifera Analyses (44 per slide) and/or for classical multiple analyses (8 morphological classes per slide, up to 1000 individuals per hole). The classification is based on machine learning using Convolutional Neural Networks (CNNs), similar to the approach used in the coccolithophorid imaging system SYRACO. To prove its feasibility, we built two training image datasets of modern planktonic foraminifera containing approximately 2000 and 5000 images each, corresponding to 15 and 25 morphological classes. Using a CNN with a residual topology (ResNet), we achieve over 95% correct classification for each dataset. We tested the network on 160,000 images from 45 depths of a sediment core from the Pacific Ocean, for which we have human counts. The current algorithm is able to reproduce the downcore variability in both Globigerinoides ruber and the fragmentation index (r² = 0.58 and 0.88, respectively). The FIRST prototype yields promising results for high-resolution paleoceanographic and evolutionary studies.
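
    A minimal transfer-learning sketch of such a residual-topology classifier follows; the ResNet depth, ImageNet pretraining, class count, and optimizer settings are assumptions, not details of the FIRST system.

    ```python
    import torch
    import torchvision

    num_classes = 25                 # one of the two reported class counts
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

    criterion = torch.nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    # for images, labels in loader:          # batches of particle images
    #     optimizer.zero_grad()
    #     loss = criterion(model(images), labels)
    #     loss.backward()
    #     optimizer.step()
    ```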

  10. Prediction of B-cell linear epitopes with a combination of support vector machine classification and amino acid propensity identification.

    PubMed

    Wang, Hsin-Wei; Lin, Ya-Chi; Pai, Tun-Wen; Chang, Hao-Teng

    2011-01-01

    Epitopes are antigenic determinants that are useful because they induce B-cell antibody production and stimulate T-cell activation. Bioinformatics can enable rapid, efficient prediction of potential epitopes. Here, we designed a novel B-cell linear epitope prediction system called LEPS, Linear Epitope Prediction by Propensities and Support Vector Machine, that combined physico-chemical propensity identification and support vector machine (SVM) classification. We tested the LEPS on four datasets: AntiJen, HIV, a newly generated PC, and AHP, a combination of these three datasets. Peptides with globally or locally high physicochemical propensities were first identified as primitive linear epitope (LE) candidates. Then, candidates were classified with the SVM based on the unique features of amino acid segments. This reduced the number of predicted epitopes and enhanced the positive prediction value (PPV). Compared to four other well-known LE prediction systems, the LEPS achieved the highest accuracy (72.52%), specificity (84.22%), PPV (32.07%), and Matthews' correlation coefficient (10.36%).
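
    The two-stage structure can be sketched as follows; the propensity scale, window length, cutoff, and SVM kernel are placeholders rather than LEPS's published parameters.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def candidate_windows(propensity, window=12, cutoff=0.6):
        """Stage 1: flag peptide windows whose mean physico-chemical
        propensity is high (placeholder scale and cutoff)."""
        means = np.convolve(propensity, np.ones(window) / window, mode="valid")
        return np.nonzero(means > cutoff)[0]

    # Stage 2: an SVM trained on segment features prunes the candidates,
    # raising the positive prediction value; segment_features() is a
    # hypothetical feature extractor.
    # svm = SVC(kernel="rbf").fit(train_features, train_labels)
    # kept = [i for i in candidate_windows(p)
    #         if svm.predict([segment_features(i)])[0] == 1]
    ```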

  11. A matter of scale: apparent niche differentiation of diploid and tetraploid plants may depend on extent and grain of analysis.

    PubMed

    Kirchheimer, Bernhard; Schinkel, Christoph C F; Dellinger, Agnes S; Klatt, Simone; Moser, Dietmar; Winkler, Manuela; Lenoir, Jonathan; Caccianiga, Marco; Guisan, Antoine; Nieto-Lugilde, Diego; Svenning, Jens-Christian; Thuiller, Wilfried; Vittoz, Pascal; Willner, Wolfgang; Zimmermann, Niklaus E; Hörandl, Elvira; Dullinger, Stefan

    2016-03-22

    Emerging polyploids may depend on environmental niche shifts for successful establishment. Using the alpine plant Ranunculus kuepferi as a model system, we explore the niche shift hypothesis at different spatial resolutions and in contrasting parts of the species range. European Alps. We sampled 12 individuals from each of 102 populations of R. kuepferi across the Alps, determined their ploidy levels, derived coarse-grain (100 × 100 m) environmental descriptors for all sampling sites by downscaling WorldClim maps, and calculated fine-scale environmental descriptors (2 × 2 m) from indicator values of the vegetation accompanying the sampled individuals. Both coarse and fine-scale variables were further computed for 8239 vegetation plots from across the Alps. Subsequently, we compared niche optima and breadths of diploid and tetraploid cytotypes by combining principal components analysis and kernel smoothing procedures. Comparisons were done separately for coarse and fine-grain data sets and for sympatric, allopatric and the total set of populations. All comparisons indicate that the niches of the two cytotypes differ in optima and/or breadths, but results vary in important details. The whole-range analysis suggests differentiation along the temperature gradient to be most important. However, sympatric comparisons indicate that this climatic shift was not a direct response to competition with diploid ancestors. Moreover, fine-grained analyses demonstrate niche contraction of tetraploids, especially in the sympatric range, that goes undetected with coarse-grained data. Although the niche optima of the two cytotypes differ, separation along ecological gradients was probably less decisive for polyploid establishment than a shift towards facultative apomixis, a particularly effective strategy to avoid minority cytotype exclusion. In addition, our results suggest that coarse-grained analyses overestimate niche breadths of widely distributed taxa. Niche comparison analyses should hence be conducted at environmental data resolutions appropriate for the organism and question under study.

  12. Using Pipelined XNOR Logic to Reduce SEU Risks in State Machines

    NASA Technical Reports Server (NTRS)

    Le, Martin; Zheng, Xin; Katanyoutant, Sunant

    2008-01-01

    Single-event upsets (SEUs) pose great threats to avionic systems' state-machine control logic, which is frequently used to control sequences of events and to qualify protocols. The risks of SEUs manifest in two ways: (a) the state machine's state information is changed, causing the state machine to unexpectedly transition to another state; (b) due to the asynchronous nature of an SEU, the state machine's state registers become metastable, consequently causing any combinational logic associated with the metastable registers to malfunction temporarily. Effect (a) can be mitigated with methods such as triple-modular redundancy (TMR). However, effect (b) cannot be eliminated and can degrade the effectiveness of any mitigation method for effect (a). Although there is no way to completely eliminate the risk of SEU-induced errors, the risk can be made very small by use of a combination of very fast state-machine logic and error-detection logic. Therefore, the first of the two main elements of the present method is to design the fastest state-machine logic circuitry by basing it on the fastest generic state-machine design, which is that of a one-hot state machine. The other main design element is to design fast error-detection logic circuitry and to optimize it for implementation in a field-programmable gate array (FPGA) architecture. In the resulting design, the one-hot state machine is fitted with a multiple-input XNOR gate for detection of illegal states. The XNOR gate is implemented with lookup tables and with pipelines for high speed. In this method, the task of designing all the logic must be performed manually because no currently available logic synthesis software tool can produce optimal solutions of design problems of this type. However, some assistance is provided by a script, written for this purpose in the Python language (an object-oriented interpretive computer language), to automatically generate hardware description language (HDL) code from state-transition rules.
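
    The illegal-state check reduces to testing that exactly one state bit is set. The Python model below stands in for the pipelined lookup-table XNOR logic and shows only the behavior, not the FPGA implementation.

    ```python
    def illegal_state(state_bits):
        """A one-hot register is legal iff exactly one bit is set; any
        SEU-corrupted pattern (zero or several bits) is flagged."""
        return sum(state_bits) != 1

    assert illegal_state([0, 0, 1, 0]) is False   # legal one-hot state
    assert illegal_state([0, 1, 1, 0]) is True    # extra bit set by an SEU
    assert illegal_state([0, 0, 0, 0]) is True    # state lost entirely
    ```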

  13. The BLAZE language: A parallel language for scientific programming

    NASA Technical Reports Server (NTRS)

    Mehrotra, P.; Vanrosendale, J.

    1985-01-01

    A Pascal-like scientific programming language, Blaze, is described. Blaze contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine-grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse-grained parallelism using machine-specific program restructuring. Thus Blaze should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of Blaze is portability across a broad range of parallel architectures. The multiple levels of parallelism present in Blaze code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of Blaze are described, and it is shown how the language would be used in typical scientific programming.

  14. Parallelising a molecular dynamics algorithm on a multi-processor workstation

    NASA Astrophysics Data System (ADS)

    Müller-Plathe, Florian

    1990-12-01

    The Verlet neighbour-list algorithm is parallelised for a multi-processor Hewlett-Packard/Apollo DN10000 workstation. The implementation makes use of memory shared between the processors. It is a genuine master-slave approach by which most of the computational tasks are kept in the master process and the slaves are only called to do part of the nonbonded forces calculation. The implementation features elements of both fine-grain and coarse-grain parallelism. Apart from three calls to library routines, two of which are standard UNIX calls, and two machine-specific language extensions, the whole code is written in standard Fortran 77. Hence, it may be expected that this parallelisation concept can be transferred in parts or as a whole to other multi-processor shared-memory computers. The parallel code is routinely used in production work.
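
    For reference, a serial sketch of the Verlet neighbour-list construction appears below; the master-slave division of the nonbonded force loop among processors is omitted, and the skin width is illustrative.

    ```python
    import numpy as np

    def build_neighbour_list(pos, r_cut, skin=0.3):
        """List particle pairs within r_cut + skin so the nonbonded force
        loop can reuse the list for several time steps.
        pos: (n, 3) array of particle coordinates."""
        r_list = r_cut + skin
        pairs = []
        for i in range(len(pos) - 1):
            d = np.linalg.norm(pos[i + 1:] - pos[i], axis=1)
            for j in np.nonzero(d < r_list)[0]:
                pairs.append((i, i + 1 + j))
        return pairs
    ```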

  15. Conformation-controlled binding kinetics of antibodies

    NASA Astrophysics Data System (ADS)

    Galanti, Marta; Fanelli, Duccio; Piazza, Francesco

    2016-01-01

    Antibodies are large, extremely flexible molecules, whose internal dynamics is certainly key to their astounding ability to bind antigens of all sizes, from small hormones to giant viruses. In this paper, we build a shape-based coarse-grained model of IgG molecules and show that it can be used to generate 3D conformations in agreement with single-molecule Cryo-Electron Tomography data. Furthermore, we elaborate a theoretical model that can be solved exactly to compute the binding rate constant of a small antigen to an IgG in a prescribed 3D conformation. Our model shows that the antigen binding process is tightly related to the internal dynamics of the IgG. Our findings pave the way for further investigation of the subtle connection between the dynamics and the function of large, flexible multi-valent molecular machines.

  16. PHYSICAL MODEL FOR RECOGNITION TUNNELING

    PubMed Central

    Krstić, Predrag; Ashcroft, Brian; Lindsay, Stuart

    2015-01-01

    Recognition tunneling (RT) identifies target molecules trapped between tunneling electrodes functionalized with recognition molecules that serve as specific chemical linkages between the metal electrodes and the trapped target molecule. Possible applications include single molecule DNA and protein sequencing. This paper addresses several fundamental aspects of RT by multiscale theory, applying both all-atom and coarse-grained DNA models: (1) We show that the magnitude of the observed currents are consistent with the results of non-equilibrium Green's function calculations carried out on a solvated all-atom model. (2) Brownian fluctuations in hydrogen bond-lengths lead to current spikes that are similar to what is observed experimentally. (3) The frequency characteristics of these fluctuations can be used to identify the trapped molecules with a machine-learning algorithm, giving a theoretical underpinning to this new method of identifying single molecule signals. PMID:25650375

  17. Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Widlund, Olof B.

    2015-06-09

    The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large-scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver of a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.
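
    The role of the coarse solve can be seen in the standard two-level additive Schwarz form, shown here as a generic illustration rather than the project's exact preconditioner:

    ```latex
    % Two-level additive Schwarz: local subdomain solves plus one coarse
    % solve (the A_0 term) that keeps the iteration count independent of
    % the number of processors.
    M^{-1} = R_0^{T} A_0^{-1} R_0 + \sum_{i=1}^{N} R_i^{T} A_i^{-1} R_i
    ```

    Dropping the coarse term leaves a one-level method whose convergence deteriorates as the number of subdomains grows.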

  18. Research on the treatment of oily wastewater by coalescence technology.

    PubMed

    Li, Chunbiao; Li, Meng; Zhang, Xiaoyan

    2015-01-01

    Recently, oily wastewater treatment has become a hot research topic across the world. Among the common methods for oily wastewater treatment, coalescence is one of the most promising technologies because of its high efficiency, easy operation, smaller land coverage, and lower investment and operational costs. In this research, a new type of ceramic filter material was chosen to investigate the effects of some key factors including particle size of coarse-grained materials, temperature, inflow direction and inflow velocity of the reactor. The aim was to explore the optimum operating conditions for coarse-graining. Results of a series of tests showed that the optimum operating conditions were a combination of grain size 1-3 mm, water temperature 35 °C and up-flow velocity 8 m/h, which gave a maximum oil removal efficiency of 93%.

  19. Modeling the effect of dune sorting on the river long profile

    NASA Astrophysics Data System (ADS)

    Blom, A.

    2012-12-01

    River dunes, which occur in low slope sand bed and sand-gravel bed rivers, generally show a downward coarsening pattern due to grain flows down their avalanche lee faces. These grain flows cause coarse particles to preferentially deposit at lower elevations of the lee face, while fines show a preference for its upper elevations. Before considering the effect of this dune sorting mechanism on the river long profile, let us first have a look at some general trends along the river profile. Tributaries increasing the river's water discharge in streamwise direction also cause a streamwise increase in flow depth. As under subcritical conditions mean dune height generally increases with increasing flow depth, the dune height shows a streamwise increase, as well. This means that also the standard deviation of bedform height increases in streamwise direction, as in earlier work it was found that the standard deviation of bedform height linearly increases with an increasing mean value of bedform height. As a result of this streamwise increase in standard deviation of dune height, the above-mentioned dune sorting then results in a loss of coarse particles to the lower elevations of the bed that are less and even rarely exposed to the flow. This loss of coarse particles to lower elevations thus increases the rate of fining in streamwise direction. As finer material is more easily transported downstream than coarser material, a smaller bed slope is required to transport the same amount of sediment downstream. This means that dune sorting adds to river profile concavity, compared to the combined effect of abrasion, selective transport and tributaries. A Hirano-type mass conservation model is presented that deals with dune sorting. The model includes two active layers: a bedform layer representing the sediment in the bedforms and a coarse layer representing the coarse and less mobile sediment underneath migrating bedforms. The exposure of the coarse layer is governed by the rate of sediment supply from upstream. By definition the sum of the exposure of both layers equals unity. The model accounts for vertical sediment fluxes due to grain flows down the bedform lee face and the formation of a less mobile coarse layer. The model with its vertical sediment fluxes is validated against earlier flume experiments. It deals well with the transition between a plane bed and a bedform-dominated bed. Applying the model to field scale confirms that dune sorting increases river profile concavity.

  20. A combined coarse-grained and all-atom simulation of TRPV1 channel gating and heat activation

    PubMed Central

    Qin, Feng

    2015-01-01

    The transient receptor potential (TRP) channels act as key sensors of various chemical and physical stimuli in eukaryotic cells. Despite years of study, the molecular mechanisms of TRP channel activation remain unclear. To elucidate the structural, dynamic, and energetic basis of gating in TRPV1 (a founding member of the TRPV subfamily), we performed coarse-grained modeling and all-atom molecular dynamics (MD) simulation based on the recently solved high resolution structures of the open and closed form of TRPV1. Our coarse-grained normal mode analysis captures two key modes of collective motions involved in the TRPV1 gating transition, featuring a quaternary twist motion of the transmembrane domains (TMDs) relative to the intracellular domains (ICDs). Our transition pathway modeling predicts a sequence of structural movements that propagate from the ICDs to the TMDs via key interface domains (including the membrane proximal domain and the C-terminal domain), leading to sequential opening of the selectivity filter followed by the lower gate in the channel pore (confirmed by modeling conformational changes induced by the activation of ICDs). The above findings of coarse-grained modeling are robust to perturbation by lipids. Finally, our MD simulation of the ICD identifies key residues that contribute differently to the nonpolar energy of the open and closed state, and these residues are predicted to control the temperature sensitivity of TRPV1 gating. These computational predictions offer new insights to the mechanism for heat activation of TRPV1 gating, and will guide our future electrophysiology and mutagenesis studies. PMID:25918362

  1. A molecule-centered method for accelerating the calculation of hydrodynamic interactions in Brownian dynamics simulations containing many flexible biomolecules

    PubMed Central

    Elcock, Adrian H.

    2013-01-01

    Inclusion of hydrodynamic interactions (HIs) is essential in simulations of biological macromolecules that treat the solvent implicitly if the macromolecules are to exhibit correct translational and rotational diffusion. The present work describes the development and testing of a simple approach aimed at allowing more rapid computation of HIs in coarse-grained Brownian dynamics simulations of systems that contain large numbers of flexible macromolecules. The method combines a complete treatment of intramolecular HIs with an approximate treatment of the intermolecular HIs which assumes that the molecules are effectively spherical; all of the HIs are calculated at the Rotne-Prager-Yamakawa level of theory. When combined with Fixman’s Chebyshev polynomial method for calculating correlated random displacements, the proposed method provides an approach that is simple to program yet fast enough to make it computationally viable to include HIs in large-scale simulations. Test calculations performed on very coarse-grained models of the pyruvate dehydrogenase (PDH) E2 complex and on oligomers of ParM (ranging in size from 1 to 20 monomers) indicate that the method reproduces the translational diffusion behavior seen in more complete HI simulations surprisingly well; the method performs less well at capturing rotational diffusion, but its discrepancies diminish with increasing size of the simulated assembly. Simulations of residue-level models of two tetrameric proteins demonstrate that the method also works well when more structurally detailed models are used. Finally, test simulations of systems containing up to 1024 coarse-grained PDH molecules indicate that the proposed method rapidly becomes more efficient than the conventional BD approach in which correlated random displacements are obtained via a Cholesky decomposition of the complete diffusion tensor. PMID:23914146
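
    The "conventional BD approach" that the abstract benchmarks against can be stated in a few lines: factor the diffusion tensor once per step and color a Gaussian vector with it. A minimal sketch, assuming a positive-definite tensor D (the toy tensor below is an illustration, not a physical model):

    ```python
    import numpy as np

    def correlated_displacements(D, dt, rng):
        """Draw displacements with covariance 2*D*dt by coloring a Gaussian
        vector with the Cholesky factor of the diffusion tensor D."""
        L = np.linalg.cholesky(D)        # O(N^3): the step the Chebyshev
        z = rng.standard_normal(len(D))  # expansion is designed to avoid
        return np.sqrt(2.0 * dt) * (L @ z)

    rng = np.random.default_rng(0)
    D = np.eye(6) + 0.1                  # toy positive-definite 6x6 tensor
    dx = correlated_displacements(D, dt=1e-3, rng=rng)
    ```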

  2. Machine learning in laboratory medicine: waiting for the flood?

    PubMed

    Cabitza, Federico; Banfi, Giuseppe

    2018-03-28

    This review focuses on machine learning and on how methods and models combining data analytics and artificial intelligence have been applied to laboratory medicine so far. Although the field is still in its infancy, the potential of applying machine learning to laboratory data for both diagnostic and prognostic purposes deserves more attention from the readership of this journal, as well as from physician-scientists who will want to take advantage of this new computer-based support in pathology and laboratory medicine.

  3. Modeling and simulation of the fluid flow in wire electrochemical machining with rotating tool (wire ECM)

    NASA Astrophysics Data System (ADS)

    Klocke, F.; Herrig, T.; Zeis, M.; Klink, A.

    2017-10-01

    Combining the working principle of electrochemical machining (ECM) with a universal rotating tool, such as a wire, could address many challenges of the classical ECM sinking process. Such a wire-ECM process could machine 2.5-dimensional geometries, like fir-tree slots in turbine discs, flexibly and efficiently. Nowadays, the established manufacturing technologies for slotting turbine discs are broaching and wire electrical discharge machining (wire EDM). However, the high surface-integrity requirements of turbine parts demand cost-intensive process development and, in the case of wire EDM, trim cuts to reduce the heat-affected rim zone. Owing to its process-specific advantages, ECM is an attractive alternative manufacturing technology that has become increasingly relevant for sinking applications in recent years. However, ECM also entails high costs for process development and complex electrolyte flow devices. In the past, few studies dealt with the development of a wire-ECM process to meet these challenges, and previous concepts of wire ECM were only suitable for micro-machining applications; owing to insufficient flushing concepts, applying the process to macro geometries failed. Therefore, this paper presents the modeling and simulation of a new flushing approach for process assessment. The suitability of a rotating structured wire electrode combined with axial flushing for electrodes with high aspect ratios is investigated and discussed.

  4. Machine Learning Based Classification of Microsatellite Variation: An Effective Approach for Phylogeographic Characterization of Olive Populations.

    PubMed

    Torkzaban, Bahareh; Kayvanjoo, Amir Hossein; Ardalan, Arman; Mousavi, Soraya; Mariotti, Roberto; Baldoni, Luciana; Ebrahimie, Esmaeil; Ebrahimi, Mansour; Hosseini-Mazinani, Mehdi

    2015-01-01

    The lack of efficient analytical techniques is increasingly becoming a bottleneck for extracting value from large biological datasets. Machine learning offers a novel and powerful tool to advance classification and modeling solutions in molecular biology. However, these methods have been less frequently used with empirical population genetics data. In this study, we developed a new combined approach to analyzing microsatellite marker data from our previous studies of olive populations using machine learning algorithms. Herein, 267 olive accessions of various origins, including 21 reference cultivars, 132 local ecotypes, and 37 wild olive specimens from the Iranian plateau, together with 77 of the most represented Mediterranean varieties, were investigated using a finely selected panel of 11 microsatellite markers. We organized the data into two experiments, '4-targeted' and '16-targeted'. A strategy of assaying different machine-based analyses (i.e., data cleaning, feature selection, and machine learning classification) was devised to identify the most informative loci and the most diagnostic alleles for representing the population and geography of each olive accession. These analyses revealed the microsatellite markers with the highest differentiating capacity and demonstrated the efficiency of our method for clustering olive accessions according to their regions of origin. A key highlight of this study was the discovery of the best combination of markers for differentiating populations via machine learning models, which can be exploited to distinguish among other biological populations.
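
    The data-cleaning/feature-selection/classification strategy described here maps naturally onto a scikit-learn pipeline. The sketch below is a hedged illustration: the synthetic allele matrix, the choice of SelectKBest with mutual information, and the SVM classifier are assumptions, not the study's exact toolchain.

    ```python
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.integers(0, 20, size=(267, 22)).astype(float)  # 11 loci x 2 alleles (toy)
    y = rng.integers(0, 4, size=267)                       # 4 regions ('4-targeted')

    pipe = make_pipeline(
        StandardScaler(),
        SelectKBest(mutual_info_classif, k=8),  # keep the most informative columns
        SVC(kernel="rbf"),
    )
    print(cross_val_score(pipe, X, y, cv=5).mean())
    ```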

  5. Label-free sensor for automatic identification of erythrocytes using digital in-line holographic microscopy and machine learning.

    PubMed

    Go, Taesik; Byeon, Hyeokjun; Lee, Sang Joon

    2018-04-30

    Cell types of erythrocytes should be identified because they are closely related to erythrocyte functionality and viability. Conventional methods for classifying erythrocytes are time consuming and labor intensive, so an automatic and accurate erythrocyte classification system is indispensable in the healthcare and biomedical fields. In this study, we proposed a new label-free sensor for automatic identification of erythrocyte cell types using digital in-line holographic microscopy (DIHM) combined with machine learning algorithms. A total of 12 features, including information on intensity distributions, morphological descriptors, and optical focusing characteristics, are quantitatively obtained from numerically reconstructed holographic images. All individual features for discocytes, echinocytes, and spherocytes are statistically different. To improve the performance of cell type identification, we adopted several machine learning algorithms: a decision tree model, support vector machine, linear discriminant classification, and k-nearest neighbor classification. With the aid of these algorithms, the extracted features are effectively utilized to distinguish erythrocytes. Among the four tested algorithms, the decision tree model exhibits the best identification performance for the training set (n = 440, 98.18%) and test set (n = 190, 97.37%). This proposed methodology, which smartly combines DIHM and machine learning, should be helpful for sensing abnormal erythrocytes and for computer-aided diagnosis of hematological diseases in the clinic. Copyright © 2017 Elsevier B.V. All rights reserved.
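
    A minimal sketch of the four-classifier comparison described above, using scikit-learn; the synthetic 12-feature data stands in for the DIHM-derived features, and the 440/190 split mirrors the reported set sizes (everything else is an illustrative assumption).

    ```python
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=630, n_features=12, n_classes=3,
                               n_informative=8, random_state=0)  # 3 cell types
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=440,
                                              random_state=0)    # 440 / 190 split

    for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                      ("SVM", SVC()),
                      ("LDA", LinearDiscriminantAnalysis()),
                      ("kNN", KNeighborsClassifier())]:
        clf.fit(X_tr, y_tr)
        print(name, round(clf.score(X_te, y_te), 4))
    ```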

  6. Research on bearing fault diagnosis of large machinery based on mathematical morphology

    NASA Astrophysics Data System (ADS)

    Wang, Yu

    2018-04-01

    To study automatic fault diagnosis of large machinery based on support vector machines (SVMs), the four common fault types of large machinery were classified and identified with an SVM. The extracted feature vectors were used as inputs, and the feature vectors were trained and identified by a multi-classification method. The optimal parameters of the SVM were found by trial and error and by cross-validation, and the SVM was then compared with a BP neural network. The results show that the SVM requires less training time and achieves higher classification accuracy, making it more suitable for fault diagnosis research in large machinery. It can therefore be concluded that SVMs train quickly and perform well.
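
    The cross-validated parameter search described here is commonly done with a grid search. A hedged sketch with synthetic stand-in features (the grid values and data are assumptions, not the paper's setup):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVC

    # stand-in for feature vectors extracted from vibration signals (4 fault types)
    X, y = make_classification(n_samples=200, n_features=16, n_classes=4,
                               n_informative=8, random_state=0)

    param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
    search = GridSearchCV(SVC(), param_grid, cv=5)  # SVC is multi-class (one-vs-one)
    search.fit(X, y)
    print(search.best_params_, search.best_score_)
    ```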

  7. Improvement of human operator vibroprotection system in the utility machine

    NASA Astrophysics Data System (ADS)

    Korchagin, P. A.; Teterina, I. A.; Rahuba, L. F.

    2018-01-01

    The article is devoted to the urgent problem of improving the efficiency of road-building utility machines by improving the human operator vibroprotection system, i.e., by determining acceptable values of the rigidity and resistance coefficients of the operator’s cab suspension elements and of the operator’s seat. Negative effects of vibration result in decreased labour productivity and occupational diseases. Besides, structural vibrations have a damaging impact on the machine units and mechanisms, which reduces the overall service life of the machine. Results of experimental and theoretical research on the operator vibroprotection system in a road-building utility machine are presented. An algorithm for a program that calculates dynamic impacts on the operator for different structural and performance parameters of the machine, considering combinations of external perturbation influences, is proposed.

  8. Identification of Technological Parameters of Ni-Alloys When Machining by Monolithic Ceramic Milling Tool

    NASA Astrophysics Data System (ADS)

    Czán, Andrej; Kubala, Ondrej; Danis, Igor; Czánová, Tatiana; Holubják, Jozef; Mikloš, Matej

    2017-12-01

    The ever-increasing production and use of hard-to-machine advanced materials are the main drivers of the continual search for new machining methods. One such approach is the ceramic milling tool, which combines the advantages of conventional ceramic cutting materials with those of conventional coated steel-based inserts. These properties allow improved cutting conditions and thus higher productivity while preserving the quality known from conventional tools. This paper identifies the properties and possibilities of this tool when machining hard-to-machine materials such as the nickel alloys used in aircraft engines. The article focuses on analyzing and evaluating ordinary technological parameters and surface quality, mainly surface roughness, machined-surface quality, and tool wear.

  9. Investigation of a tubular dual-stator flux-switching permanent-magnet linear generator for free-piston energy converter

    NASA Astrophysics Data System (ADS)

    Sui, Yi; Zheng, Ping; Tong, Chengde; Yu, Bin; Zhu, Shaohong; Zhu, Jianguo

    2015-05-01

    This paper describes a tubular dual-stator flux-switching permanent-magnet (PM) linear generator for a free-piston energy converter. The operating principle, topology, and design considerations of the machine are investigated. Taking into account the motion characteristics of the free-piston Stirling engine, a tubular dual-stator PM linear generator is designed using the finite element method. Major structural parameters, such as the outer and inner radii of the mover, the PM thickness, the mover tooth width, and the tooth widths of the outer and inner stators, are optimized to improve machine performance measures such as thrust capability and power density. In comparison with conventional single-stator PM machines, like the moving-magnet linear machine and the flux-switching linear machine, the proposed dual-stator flux-switching PM machine offers higher mass power density, higher volume power density, and a lighter mover.

  10. Effect of High-speed Milling tool path strategies on the surface roughness of Stavax ESR mold insert machining

    NASA Astrophysics Data System (ADS)

    Mebrahitom, A.; Rizuan, D.; Azmir, M.; Nassif, M.

    2016-02-01

    High-speed milling is one of the recent technologies used to produce mould inserts requiring a high surface finish. It is a fast machining process that uses a small side step and a small down step combined with very high spindle speeds and feed rates. To use HSM capabilities effectively, optimizing the tool path strategies and machining parameters is an important issue. In this paper, the effects of six different tool path strategies on the surface finish and machining time of rectangular cavities in Stavax ESR material were investigated. The CATIA V5 machining module was used for CAD/CAM process planning of the pocket milling of the cavities.

  11. Investigation of a less rare-earth permanent-magnet machine with the consequent pole rotor

    NASA Astrophysics Data System (ADS)

    Bai, Jingang; Liu, Jiaqi; Wang, Mingqiao; Zheng, Ping; Liu, Yong; Gao, Haibo; Xiao, Lijun

    2018-05-01

    Due to the rising price of rare-earth materials, there is a trend toward reducing the use of rare-earth materials in permanent-magnet (PM) machines across applications. Since iron-core poles replace half of the PM poles in the consequent-pole (CP) rotor, the PM machine with a CP rotor is a promising candidate for a rare-earth-lean PM machine. Additionally, investigations of the CP rotor in special electrical machines, such as hybrid-excitation PM machines and bearingless motors, have verified the feasibility of applying the CP rotor. This paper therefore focuses on the design and performance of PM machines when a traditional PM machine adopts the CP rotor. In the CP rotor, all the PMs are of the same polarity and are inserted into the rotor core. Since the fundamental PM flux density depends on the ratio of PM poles to iron-core poles, the combination rule between them is investigated by analytical and finite-element methods. On this basis, to comprehensively analyze and evaluate PM machines with CP rotors, four typical schemes, i.e., integer-slot and fractional-slot machines with either a CP rotor or a surface-mounted PM (SPM) rotor, are designed to investigate performance, including electromagnetic performance, anti-demagnetization capability, and cost.

  12. Chunk Alignment for Corpus-Based Machine Translation

    ERIC Educational Resources Information Center

    Kim, Jae Dong

    2011-01-01

    Since sub-sentential alignment is critically important to the translation quality of an Example-Based Machine Translation (EBMT) system, which operates by finding and combining phrase-level matches against the training examples, we developed a new alignment algorithm for the purpose of improving the EBMT system's performance. This new…

  13. 30 CFR 18.20 - Quality of material, workmanship, and design.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ..., circuits, arrangements, or combinations of components and materials cannot be foreseen, MSHA reserves the... provided on each mobile machine that travels at a speed greater than 2.5 miles per hour. (f) Brakes shall be provided for each wheel-mounted machine, unless design of the driving mechanism will preclude...

  14. 40 CFR 63.5460 - What definitions apply to this subpart?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... combination of smaller leather pieces and leather fibers, which when joined together, form an integral..., thus, cannot withstand 5,000 Maeser Flexes with a Maeser Flex Testing Machine or a method approved by... Maeser Flex Testing Machine or a method approved by the Administrator prior to initial water penetration...

  15. 40 CFR 63.460 - Applicability and designation of source.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... cold, and batch cold solvent cleaning machine that uses any solvent containing methylene chloride (CAS... combination of these halogenated HAP solvents, in a total concentration greater than 5 percent by weight, as a... to owners or operators of any solvent cleaning machine meeting the applicability criteria of...

  16. 30 CFR 18.20 - Quality of material, workmanship, and design.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ..., circuits, arrangements, or combinations of components and materials cannot be foreseen, MSHA reserves the... provided on each mobile machine that travels at a speed greater than 2.5 miles per hour. (f) Brakes shall be provided for each wheel-mounted machine, unless design of the driving mechanism will preclude...

  17. 40 CFR 63.460 - Applicability and designation of source.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... cold, and batch cold solvent cleaning machine that uses any solvent containing methylene chloride (CAS... combination of these halogenated HAP solvents, in a total concentration greater than 5 percent by weight, as a... to owners or operators of any solvent cleaning machine meeting the applicability criteria of...

  18. 40 CFR 63.5460 - What definitions apply to this subpart?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... combination of smaller leather pieces and leather fibers, which when joined together, form an integral..., thus, cannot withstand 5,000 Maeser Flexes with a Maeser Flex Testing Machine or a method approved by... Maeser Flex Testing Machine or a method approved by the Administrator prior to initial water penetration...

  19. 40 CFR 63.5460 - What definitions apply to this subpart?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... combination of smaller leather pieces and leather fibers, which when joined together, form an integral..., thus, cannot withstand 5,000 Maeser Flexes with a Maeser Flex Testing Machine or a method approved by... Maeser Flex Testing Machine or a method approved by the Administrator prior to initial water penetration...

  20. 40 CFR 63.460 - Applicability and designation of source.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... cold, and batch cold solvent cleaning machine that uses any solvent containing methylene chloride (CAS... combination of these halogenated HAP solvents, in a total concentration greater than 5 percent by weight, as a... to owners or operators of any solvent cleaning machine meeting the applicability criteria of...

  1. 30 CFR 18.20 - Quality of material, workmanship, and design.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ..., circuits, arrangements, or combinations of components and materials cannot be foreseen, MSHA reserves the... provided on each mobile machine that travels at a speed greater than 2.5 miles per hour. (f) Brakes shall be provided for each wheel-mounted machine, unless design of the driving mechanism will preclude...

  2. Southern pulpwood production, 1958

    Treesearch

    Joe F. Christopher; Martha E. Nelson

    1959-01-01

    The pulp industry in the South is now larger than in all other parts of the Nation combined. In the years since 1946, construction of 25 new mills and the expansion of existing mills have more than doubled plant capacity. The annual cut of pulpwood bolts has increased proportionately, and a new source of raw material has been developed from the coarse residues at...

  3. Bran hydration and physical treatments improve the bread-baking quality of whole grain wheat flour

    USDA-ARS?s Scientific Manuscript database

    Fine and coarse bran particles of a hard red and a hard white wheat were used to study the influences of bran hydration and physical treatments such as autoclaving and freezing as well as their combinations on the dough properties and bread-baking quality of whole grain wheat flour (WWF). For both h...

  4. Is there an association between root architecture and mycorrhizal growth response?

    PubMed

    Maherali, Hafiz

    2014-10-01

    The symbiosis between arbuscular mycorrhizal (AM) fungi and plants is evolutionarily widespread. The response of plant growth to inoculation by these fungi (mycorrhizal growth response; MGR) is highly variable, ranging from positive to negative. Some of this variation is hypothesized to be associated with root structure and function. Specifically, species with a coarse root architecture, and thus a limited intrinsic capacity to absorb soil nutrients, are expected to derive the greatest growth benefit from inoculation with AM fungi. To test this hypothesis, previously published literature and phylogenetic information were combined in a meta-analysis to examine the magnitude and direction of relationships among several root architectural traits and MGR. Published studies differed in the magnitude and direction of relationships between root architecture and MGR. However, when combined, the overall relationship between MGR and allocation to roots, root diameter, root hair length and root hair density did not differ significantly from zero. These findings indicate that possessing coarse roots is not necessarily a predictor of plant growth response to AM fungal colonization. Root architecture is therefore unlikely to limit the evolution of variation in MGR. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  5. Combination of Universal Mechanical Testing Machine with Atomic Force Microscope for Materials Research

    PubMed Central

    Zhong, Jian; He, Dannong

    2015-01-01

    Surface deformation and fracture processes of materials under external force are important for understanding and developing materials. Here, a combined horizontal universal mechanical testing machine (HUMTM)-atomic force microscope (AFM) system is developed by modifying a UMTM to combine with an AFM and designing a height-adjustable stabilizing apparatus. The combined HUMTM-AFM system is then evaluated. Finally, as initial demonstrations, it is applied to analyze the relationship among macroscopic mechanical properties, surface nanomorphological changes under external force, and fracture processes of two kinds of representative large-scale thin film materials: a polymer material with a high strain rate (Parafilm) and a metal material with a low strain rate (aluminum foil). All the results demonstrate that the combined HUMTM-AFM system overcomes several disadvantages of current AFM-combined tensile/compression devices, including small load forces, incapability with large-scale specimens, and unsuitability for materials with high strain rates. The combined HUMTM-AFM system is therefore a promising tool for materials research in the future. PMID:26265357

  7. Combined effect of smear layer characteristics and hydrostatic pulpal pressure on dentine bond strength of HEMA-free and HEMA-containing adhesives.

    PubMed

    Mahdan, Mohd Haidil Akmal; Nakajima, Masatoshi; Foxton, Richard M; Tagami, Junji

    2013-10-01

    This study evaluated the combined effect of smear layer characteristics with hydrostatic pulpal pressure (PP) on bond strength and nanoleakage expression of HEMA-free and -containing self-etch adhesives. Flat dentine surfaces were obtained from extracted human molars. Smear layers were created by grinding with #180- or #600-SiC paper. Three HEMA-free adhesives (Xeno V, G Bond Plus, Beautibond Multi) and two HEMA-containing adhesives (Bond Force, Tri-S Bond) were applied to the dentine surfaces under hydrostatic PP or none. Dentine bond strengths were determined using the microtensile bond test (μTBS). Data were statistically analyzed using three- and two-way ANOVA with Tukey post hoc comparison test. Nanoleakage evaluation was carried out under a scanning electron microscope (SEM). Coarse smear layer preparation and hydrostatic PP negatively affected the μTBS of HEMA-free and -containing adhesives, but there were no significant differences. The combined experimental condition significantly reduced μTBS of the HEMA-free adhesives, while the HEMA-containing adhesives exhibited no significant differences. Two-way ANOVA indicated that for HEMA-free adhesives, there were significant interactions in μTBS between smear layer characteristics and pulpal pressure, while for HEMA-containing adhesives, there were no significant interactions between them. Nanoleakage formation within the adhesive layers of both adhesive systems distinctly increased in the combined experimental group. The combined effect of coarse smear layer preparation with hydrostatic PP significantly reduced the μTBS of HEMA-free adhesives, while in HEMA-containing adhesives, these effects were not obvious. Smear layer characteristics and hydrostatic PP would additively compromise dentine bonding of self-etch adhesives, especially HEMA-free adhesives. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Combining human and machine processes (CHAMP)

    NASA Astrophysics Data System (ADS)

    Sudit, Moises; Sudit, David; Hirsch, Michael

    2015-05-01

    Machine Reasoning and Intelligence is usually done in a vacuum, without consulting the ultimate decision-maker. The late consideration of the human cognitive process causes major problems in using automated systems to provide reliable and actionable information that users can trust and depend on to make the best Course-of-Action (COA). On the other hand, if automated systems are created exclusively based on human cognition, there is a danger of developing systems that do not push the technology frontier and mainly serve the comfort level of selected subject matter experts (SMEs). Our approach to combining human and machine processes (CHAMP) is based on the notion of developing optimal strategies for where, when, how, and which human intelligence should be injected within a machine reasoning and intelligence process. This combination is based on the criteria of improving the quality of the output of the automated process while maintaining the computational efficiency required for a COA to be actuated in a timely fashion. This research addresses the following problem areas: • Providing consistency within a mission: injection of human reasoning and intelligence within the reliability and temporal needs of a mission to attain situational awareness, impact assessment, and COA development. • Supporting the incorporation of data that is uncertain, incomplete, imprecise and contradictory (UIIC): development of mathematical models to suggest the insertion of a cognitive process within a machine reasoning and intelligent system so as to minimize UIIC concerns. • Developing systems that include humans in the loop whose performance can be analyzed and understood to provide feedback to the sensors.

  9. Tool geometry and damage mechanisms influencing CNC turning efficiency of Ti6Al4V

    NASA Astrophysics Data System (ADS)

    Suresh, Sangeeth; Hamid, Darulihsan Abdul; Yazid, M. Z. A.; Nasuha, Nurdiyanah; Ain, Siti Nurul

    2017-12-01

    Ti6Al4V, or Grade 5 titanium alloy, is widely used in the aerospace, medical, automotive and fabrication industries due to its distinctive combination of mechanical and physical properties. Ti6Al4V has nevertheless always been difficult to machine, ironically because of that same mix of properties. Machining Ti6Al4V has resulted in short cutting tool life, which has led to objectionable surface integrity and rapid failure of the machined parts. However, the proven functional relevance of this material has prompted extensive research into the optimization of machining parameters and cutting tool characteristics. Cutting tool geometry plays a vital role in ensuring dimensional and geometric accuracy in machined parts. In this study, an experimental investigation is carried out to optimize the nose radius and relief angle of the cutting tools and their interaction with different levels of machining parameters. The low elastic modulus and thermal conductivity of Ti6Al4V contribute to rapid tool damage, and the impact of these properties on tool-tip damage is studied. An experimental design approach is utilized in the CNC turning of Ti6Al4V to statistically analyze and propose optimum levels of the input parameters that lengthen tool life and enhance the surface characteristics of the machined parts. A larger tool nose radius with a straight flank, combined with low feed rates, resulted in desirable surface integrity. The presence of a relief angle proved to aggravate tool damage and dimensional instability in the CNC turning of Ti6Al4V.

  10. Quadrilateral Micro-Hole Array Machining on Invar Thin Film: Wet Etching and Electrochemical Fusion Machining

    PubMed Central

    Choi, Woong-Kirl; Kim, Seong-Hyun; Choi, Seung-Geon; Lee, Eun-Sang

    2018-01-01

    Ultra-precision products which contain a micro-hole array have recently shown remarkable demand growth in many fields, especially in the semiconductor and display industries. Photoresist etching and electrochemical machining are widely known as precision methods for machining micro-holes with no residual stress and lower surface roughness on the fabricated products. The Invar shadow masks used for organic light-emitting diodes (OLEDs) contain numerous micro-holes and are currently machined by a photoresist etching method. However, this method has several problems, such as uncontrollable hole machining accuracy, non-etched areas, and overcutting. To solve these problems, a machining method that combines photoresist etching and electrochemical machining can be applied. In this study, negative photoresist with a quadrilateral hole array pattern was dry coated onto 30-µm-thick Invar thin film, and then exposure and development were carried out. After that, photoresist single-side wet etching and a fusion method of wet etching-electrochemical machining were used to machine micro-holes on the Invar. The hole machining geometry, surface quality, and overcutting characteristics of the methods were studied. Wet etching and electrochemical fusion machining can improve the accuracy and surface quality. The overcutting phenomenon can also be controlled by the fusion machining. Experimental results show that the proposed method is promising for the fabrication of Invar film shadow masks. PMID:29351235

  11. Apparatus and method for fluid analysis

    DOEpatents

    Wilson, Bary W.; Peters, Timothy J.; Shepard, Chester L.; Reeves, James H.

    2004-11-02

    The present invention is an apparatus and method for analyzing a fluid used in a machine or in an industrial process line. The apparatus has at least one meter placed proximate the machine or process line and in contact with the machine or process fluid for measuring at least one parameter related to the fluid. The at least one parameter is a standard laboratory analysis parameter. The at least one meter includes but is not limited to viscometer, element meter, optical meter, particulate meter, and combinations thereof.

  12. The design and improvement of radial tire molding machine

    NASA Astrophysics Data System (ADS)

    Wang, Wenhao; Zhang, Tao

    2018-04-01

    This paper presents the structural configuration of a high-accuracy semi-steel radial tire building machine. Taking the high-precision requirements of the tire into account, the original structure and parameters were optimized, and an innovative opening-and-closing rotary shaping drum mechanism was designed. This design moves beyond the conventional push-pull movable shaping drum layout: for the same specification, the new shaping drum can contract further, which facilitates tire forming and reduces tire deformation.

  13. Techniques for Combined Arms for Air Defense

    DTIC Science & Technology

    2016-07-29

    rockets can be expected on the initial attack run, while cannon and machine-gun fire will likely be used in the follow-on attack. THREAT...wing aircraft with 8 Stinger missiles and an M3P .50 caliber machine gun. This system is highly mobile and can be used to provide SHORAD security for...position. If cover and concealment are less substantial, use the low kneeling position. When using the M240 machine gun, the gunner will also fire from a

  14. Mobile Landing Platform with Core Capability Set (MLP w/CCS): Combined Initial Operational Test and Evaluation and Live Fire Test and Evaluation Report

    DTIC Science & Technology

    2015-07-01

    annex. Self-defense testing was limited to structural test firing from each machine gun mount and an ammunition resupply drill. Robust self...provided in the classified annex. Self-defense testing was limited to structural test firing from each machine gun mount and a single...Caliber Machine Gun Mount Structural Test Fire November 2014 San Diego, Offshore Ship Weapons Range Operating Independently

  15. Predicting competency in automated machine use in an acquired brain injury population using neuropsychological measures.

    PubMed

    Crowe, Simon F; Mahony, Kate; Jackson, Martin

    2004-08-01

    The purpose of the current study was to explore whether performance on standardised neuropsychological measures could predict functional ability with automated machines and services among people with an acquired brain injury (ABI). Participants were 45 individuals who met the criteria for mild, moderate or severe ABI and 15 control participants matched on demographic variables including age and education. Each participant was required to complete a battery of neuropsychological tests, as well as performing three automated service delivery tasks: a transport automated ticketing machine, an automated teller machine (ATM) and an automated telephone service. The results showed a consistently strong relationship between the neuropsychological measures, both as single predictors and in combination, and the level of competency with the automated machines. Automated machines are part of a relatively new phenomenon in service delivery and offer an ecologically valid functional measure of performance that represents a true indication of functional disability.

  16. Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines

    PubMed Central

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines achieve high torque density by introducing flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristics of PMV machines and provides a design method that not only meets the fault-tolerance requirements but also retains the high torque density. The operating principle of the proposed machine is analyzed. The design process and optimization are presented in detail, including the combination of slots and poles, the winding distribution, and the dimensions of the PMs and teeth. The machine performance is evaluated using the time-stepping finite element method (TS-FEM). Finally, the FT-PMV machine is manufactured, and experimental results are presented to validate the theoretical analysis. PMID:25045729

  17. Logic Learning Machine and standard supervised methods for Hodgkin's lymphoma prognosis using gene expression data and clinical variables.

    PubMed

    Parodi, Stefano; Manneschi, Chiara; Verda, Damiano; Ferrari, Enrico; Muselli, Marco

    2018-03-01

    This study evaluates the performance of a set of machine learning techniques in predicting the prognosis of Hodgkin's lymphoma using clinical factors and gene expression data. Analysed samples from 130 Hodgkin's lymphoma patients included a small set of clinical variables and more than 54,000 gene features. Machine learning classifiers included three black-box algorithms (k-nearest neighbour, Artificial Neural Network, and Support Vector Machine) and two methods based on intelligible rules (Decision Tree and the innovative Logic Learning Machine method). Support Vector Machine clearly outperformed any of the other methods. Among the two rule-based algorithms, Logic Learning Machine performed better and identified a set of simple intelligible rules based on a combination of clinical variables and gene expressions. Decision Tree identified a non-coding gene (XIST) involved in the early phases of X chromosome inactivation that was overexpressed in females and in non-relapsed patients. XIST expression might be responsible for the better prognosis of female Hodgkin's lymphoma patients.

  18. A robust automatic phase correction method for signal dense spectra

    NASA Astrophysics Data System (ADS)

    Bao, Qingjia; Feng, Jiwen; Chen, Li; Chen, Fang; Liu, Zao; Jiang, Bin; Liu, Chaoyang

    2013-09-01

    A robust automatic phase correction method for Nuclear Magnetic Resonance (NMR) spectra is presented. In this work, a new strategy combining ‘coarse tuning' with ‘fine tuning' is introduced to correct various spectra accurately. In the ‘coarse tuning' procedure, a new robust baseline recognition method is proposed to determine the positions of the tail ends of the peaks, and preliminary phased spectra are then obtained by minimizing an objective function based on the height differences of these tail ends. After the ‘coarse tuning', the peaks in the preliminarily corrected spectra can be categorized into three classes: positive, negative, and distorted. Based on this classification, a custom negative penalty function for the ‘fine tuning' step is constructed so that points belonging to genuine negative peaks and distorted peaks are excluded from the penalty. Finally, finely phased spectra are obtained by minimizing this custom negative penalty function. The method proves very robust: it tolerates low signal-to-noise ratios and large baseline distortions, and is independent of the starting search points of the phasing parameters. Experimental results on both 1D metabonomics spectra with overcrowded peaks and 2D spectra demonstrate the high efficiency of this automatic method.
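
    A minimal sketch of such a coarse-to-fine automatic phasing loop is given below, with a plain negative-intensity penalty standing in for the custom penalty function; the grid, optimizer, and penalty form are assumptions, not the authors' exact method.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def apply_phase(spec, phi0, phi1):
        """Zero- and first-order phase correction of a complex 1D spectrum."""
        w = np.linspace(0.0, 1.0, spec.size)
        return spec * np.exp(1j * (phi0 + phi1 * w))

    def negativity(params, spec):
        """Penalty on negative real intensities (a plain stand-in for the
        paper's custom negative penalty function)."""
        real = apply_phase(spec, *params).real
        return np.sum(np.minimum(real, 0.0) ** 2)

    def autophase(spec):
        grid = np.linspace(-np.pi, np.pi, 25)          # coarse tuning: grid scan
        phi0, phi1 = min(((a, b) for a in grid for b in grid),
                         key=lambda p: negativity(p, spec))
        res = minimize(negativity, x0=[phi0, phi1],    # fine tuning: local search
                       args=(spec,), method="Nelder-Mead")
        return apply_phase(spec, *res.x), res.x

    freq = np.linspace(0.0, 1.0, 512)
    peak = 1.0 / (1.0 + ((freq - 0.5) * 60.0) ** 2)    # Lorentzian test line
    spec = peak * np.exp(1j * (0.8 + 0.5 * freq))      # deliberately dephased
    phased, phis = autophase(spec)
    ```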

  19. Gradient twinned 304 stainless steels for high strength and high ductility

    DOE PAGES

    Chen, Aiying; Liu, Jiabin; Wang, Hongtao; ...

    2016-04-23

    Gradient materials often have attractive mechanical properties that outperform uniform microstructure counterparts. It remains a difficult task to investigate and compare the performance of various gradient microstructures due to the difficulty of fabrication, the wide range of length scales involved, and their respective volume percentage variations. We have investigated four types of gradient microstructures in 304 stainless steels that utilize submicrotwins, nanotwins, nanocrystalline-, ultrafine- and coarse-grains as building blocks. Tensile tests reveal that the gradient microstructure consisting of submicrotwins and nanotwins has a persistent and stable work hardening rate and yields an impressive combination of high strength and high ductility, leading to a toughness that is nearly 50% higher than that of the coarse-grained counterpart. Ex- and in-situ transmission electron microscopy indicates that nanoscale and submicroscale twins help to suppress and limit martensitic phase transformation via the confinement of martensite within the twin lamellae. Twinning and detwinning remain active during tensile deformation and contribute to the work hardening behavior. We discuss the advantageous properties of using submicrotwins as the main load carrier and nanotwins as the strengthening layers over those of coarse and nanocrystalline grains. Furthermore, our work uncovers a new gradient design strategy to help metals and alloys achieve high strength and high ductility.

  20. “Martinizing” the Variational Implicit Solvent Method (VISM): Solvation Free Energy for Coarse-Grained Proteins

    PubMed Central

    2017-01-01

    Solvation is a fundamental driving force in many biological processes including biomolecular recognition and self-assembly, not to mention protein folding, dynamics, and function. The variational implicit solvent method (VISM) is a theoretical tool currently developed and optimized to estimate solvation free energies for systems of very complex topology, such as biomolecules. VISM’s theoretical framework makes it unique because it couples hydrophobic, van der Waals, and electrostatic interactions as a functional of the solvation interface. By minimizing this functional, VISM produces the solvation interface as an output of the theory. In this work, we push VISM to larger scale applications by combining it with coarse-grained solute Hamiltonians adapted from the MARTINI framework, a well-established mesoscale force field for modeling large-scale biomolecule assemblies. We show how MARTINI-VISM (MVISM) compares with atomistic VISM (AVISM) for a small set of proteins differing in size, shape, and charge distribution. We also demonstrate MVISM’s suitability to study the solvation properties of an interesting encounter complex, barnase–barstar. The promising results suggest that coarse-graining the protein with the MARTINI force field is indeed a valuable step to broaden VISM’s and MARTINI’s applications in the near future. PMID:28613904

  1. Consistent integration of experimental and ab initio data into molecular and coarse-grained models

    NASA Astrophysics Data System (ADS)

    Vlcek, Lukas

    As computer simulations are increasingly used to complement or replace experiments, highly accurate descriptions of physical systems at different time and length scales are required to achieve realistic predictions. The questions of how to objectively measure model quality in relation to reference experimental or ab initio data, and how to transition seamlessly between different levels of resolution are therefore of prime interest. To address these issues, we use the concept of statistical distance to define a measure of similarity between statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the systems' measurable properties. Through systematic coarse-graining, we arrive at appropriate expressions for optimization loss functions consistently incorporating microscopic ab initio data as well as macroscopic experimental data. The design of coarse-grained and multiscale models is then based on factoring the model system partition function into terms describing the system at different resolution levels. The optimization algorithm takes advantage of thermodynamic perturbation expressions for fast exploration of the model parameter space, enabling us to scan millions of parameter combinations per hour on a single CPU. The robustness and generality of the new model optimization framework and its efficient implementation are illustrated on selected examples including aqueous solutions, magnetic systems, and metal alloys.
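
    The similarity measure referred to here is, in one standard form, the statistical (Bhattacharyya-angle) distance between distributions of observables; the snippet below shows that textbook definition as a sketch (the exact functional used in this work may differ):

    ```python
    import numpy as np

    def statistical_distance(p, q):
        """Bhattacharyya-angle statistical distance between two discrete
        probability distributions; zero iff the distributions coincide."""
        p = np.asarray(p, float) / np.sum(p)
        q = np.asarray(q, float) / np.sum(q)
        bc = np.clip(np.sum(np.sqrt(p * q)), 0.0, 1.0)  # Bhattacharyya coefficient
        return np.arccos(bc)

    print(statistical_distance([0.5, 0.5], [0.4, 0.6]))  # small but nonzero
    ```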

  2. Distributions of experimental protein structures on coarse-grained free energy landscapes

    PubMed Central

    Liu, Jie; Jernigan, Robert L.

    2015-01-01

    Predicting the conformational changes of proteins is needed in order to fully comprehend their functional mechanisms. With the large number of available structures in sets of related proteins, it is now possible to directly visualize clusters of conformations and their conformational transitions through the use of principal component analysis. The most striking observation about the distributions of the structures along the principal components is how highly non-uniform they are. In this work, we use principal component analysis of the experimental structures of 50 diverse proteins to extract the most important directions of their motions, sample structures along these directions, and estimate their free energy landscapes by combining knowledge-based potentials and entropy computed from elastic network models. When the resulting motions are visualized on their coarse-grained free energy landscapes, the basis for conformational pathways becomes readily apparent. Using three well-studied proteins, T4 lysozyme, serum albumin, and sarco-endoplasmic reticular Ca2+ adenosine triphosphatase (SERCA), as examples, we show that such free energy landscapes of conformational changes provide meaningful insights into functional dynamics and suggest transition pathways between different conformational states. As a further example, we also show that Monte Carlo simulations on the coarse-grained landscape of HIV-1 protease can directly yield pathways for force-driven conformational changes. PMID:26723638
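
    In its simplest form, the PCA step reduces to an SVD of the centered coordinate matrix. A minimal sketch, assuming structures are already superposed and flattened to rows (array names and shapes are illustrative):

    ```python
    import numpy as np

    def structure_pca(X, n_modes=2):
        """PCA of an ensemble: X is (n_structures, 3*n_atoms), pre-aligned."""
        Xc = X - X.mean(axis=0)                      # remove the mean structure
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        components = Vt[:n_modes]                    # dominant collective motions
        projections = Xc @ components.T              # structures on the PC plane
        variances = S[:n_modes] ** 2 / (len(X) - 1)  # variance along each PC
        return components, projections, variances

    X = np.random.default_rng(0).normal(size=(100, 30))  # toy ensemble
    pcs, proj, var = structure_pca(X)
    ```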

  3. Characterizing and contrasting instream and riparian coarse wood in western Montana basins

    Treesearch

    Michael K. Young; Ethan A. Mace; Eric T. Ziegler; Elaine K. Sutherland

    2006-01-01

    The importance of coarse wood to aquatic biota and stream channel structure is widely recognized, yet characterizations of large-scale patterns in coarse wood dimensions and loads are rare. To address these issues, we censused instream coarse wood (2 m long and 10 cm minimum diameter) and sampled riparian coarse wood and channel characteristics in and along 13 streams...

  4. Compact Microscope Imaging System with Intelligent Controls

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    The figure presents selected views of a compact microscope imaging system (CMIS) that includes a miniature video microscope, a Cartesian robot (a computer-controlled three-dimensional translation stage), and machine-vision and control subsystems. The CMIS was built from commercial off-the-shelf instrumentation, computer hardware and software, and custom machine-vision software. The machine-vision and control subsystems include adaptive neural networks that afford a measure of artificial intelligence. The CMIS can perform several automated tasks with accuracy and repeatability, tasks that heretofore have required the full attention of human technicians using relatively bulky conventional microscopes. In addition, the automation and control capabilities of the system inherently include a capability for remote control. Unlike human technicians, the CMIS is not at risk of becoming fatigued or distracted: theoretically, it can perform continuously at the level of the best human technicians. In its capabilities for remote control and for relieving human technicians of tedious routine tasks, the CMIS is expected to be especially useful in biomedical research, materials science, inspection of parts on industrial production lines, and space science. The CMIS can automatically focus on and scan a microscope sample, find areas of interest, record the resulting images, and analyze images from multiple samples simultaneously. Automatic focusing is an iterative process: the translation stage moves the microscope along its optical axis in a succession of coarse, medium, and fine steps. A fast Fourier transform (FFT) of the image is computed at each step, and the FFT is analyzed for its spatial-frequency content. The microscope position that results in the greatest dispersal of FFT content toward high spatial frequencies (indicating that the image shows the greatest amount of detail) is deemed to be the focal position.
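
    The autofocus loop described above maps directly onto a short routine: a high-frequency FFT energy score drives a coarse-to-fine sweep along the optical axis. A minimal sketch follows; `capture_at`, the step sizes, and the frequency cutoff are illustrative assumptions, not CMIS internals.

    ```python
    import numpy as np

    def focus_metric(image, low_freq_cut=0.1):
        """Fraction of FFT power at high spatial frequencies; higher = sharper."""
        power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
        ny, nx = image.shape
        y, x = np.ogrid[:ny, :nx]
        r = np.hypot(y - ny / 2, x - nx / 2) / (min(ny, nx) / 2)
        return power[r > low_freq_cut].sum() / power.sum()

    def autofocus(capture_at, z_lo, z_hi, steps=(50.0, 10.0, 2.0)):
        """Coarse/medium/fine sweep along the optical axis, keeping the z with
        the most high-frequency image content. `capture_at(z)` is an assumed
        callback that returns a 2-D image at stage position z."""
        best = (z_lo + z_hi) / 2.0
        for step in steps:
            zs = np.arange(max(z_lo, best - 5 * step),
                           min(z_hi, best + 5 * step) + step, step)
            best = max(zs, key=lambda z: focus_metric(capture_at(z)))
        return best
    ```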

  5. Efficient cost-sensitive human-machine collaboration for offline signature verification

    NASA Astrophysics Data System (ADS)

    Coetzer, Johannes; Swanepoel, Jacques; Sabourin, Robert

    2012-01-01

    We propose a novel strategy for the optimal combination of human and machine decisions in a cost-sensitive environment. The proposed algorithm should be especially beneficial to financial institutions where off-line signatures, each associated with a specific transaction value, require authentication. When presented with a collection of genuine and fraudulent training signatures, produced by so-called guinea pig writers, the proficiency of a workforce of human employees and a score-generating machine can be estimated and represented in receiver operating characteristic (ROC) space. Using a set of Boolean fusion functions, the majority vote decision of the human workforce is combined with each threshold-specific machine-generated decision. The performance of the candidate ensembles is estimated and represented in ROC space, after which only the optimal ensembles and associated decision trees are retained. When presented with a questioned signature linked to an arbitrary writer, the system first uses the ROC-based cost gradient associated with the transaction value to select the ensemble that minimises the expected cost, and then uses the corresponding decision tree to authenticate the signature in question. We show that, when utilising the entire human workforce, the incorporation of a machine streamlines the authentication process and decreases the expected cost for all operating conditions.
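
    A sketch of the cost-sensitive part of this scheme: given a classifier's ROC curve, pick the threshold that minimizes expected cost for the operating condition at hand. The function below is generic ROC practice, not the authors' full Boolean-fusion algorithm; the cost and prevalence arguments are illustrative assumptions.

    ```python
    import numpy as np

    def min_cost_operating_point(fpr, tpr, thresholds, cost_fp, cost_fn, p_pos):
        """Choose the ROC threshold minimizing expected cost:
        E[cost] = cost_fp * FPR * (1 - p_pos) + cost_fn * FNR * p_pos.
        In the signature setting above, cost_fp/cost_fn would scale with the
        transaction value (an illustrative assumption)."""
        fpr, tpr = np.asarray(fpr), np.asarray(tpr)
        expected = cost_fp * fpr * (1.0 - p_pos) + cost_fn * (1.0 - tpr) * p_pos
        i = int(np.argmin(expected))
        return thresholds[i], expected[i]
    ```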

  6. Seismo-acoustic imaging of marine hard substrate habitats: a case study from the German Bight (SE North Sea)

    NASA Astrophysics Data System (ADS)

    Papenmeier, Svenja; Hass, H. Christian

    2016-04-01

    The detection of hard substrate habitats in sublittoral environments is a considerable challenge in spite of modern high resolution hydroacoustic techniques. In offshore areas those habitats are mainly represented by either cobbles and boulders (stones) often located in wide areas of soft sediments or by glacial relict sediments (heterogeneous mixture of medium sand to gravel size with cobbles and boulders). Sediment classification and object detection is commonly done on the basis of hydroacoustic backscatter intensities recorded with e.g. sidescan sonar (SSS) and multibeam echo sounder (MBES). Single objects lying on the sediment such as stones can generally be recognized by the acoustic shadow behind the object. However, objects close to the sonar's nadir may remain undetected because their shadows are below the data resolution. Further limitation in the detection of objects is caused by sessile communities that thrive on the objects. The bio-cover tends to absorb most of the acoustic signal. Automated identification based on the backscatter signal is often not satisfactory, especially when stones are present in a setting with glacial deposits. Areas characterized by glacial relict sediments are hardly differentiable in their backscatter characteristics from rippled coarse sand and fine gravel (rippled coarse sediments) without an intensive ground-truthing program. From the ecological point of view the relict and rippled coarse sediments are completely different habitats and need to be distinguished. The case study represents a seismo-acoustic approach in which SSS and nonlinear sediment echo sounder (SES) data are combined to enable a reliable and reproducible differentiation between relict sediments (with stones and coarse gravels) and rippled coarse sediments. Elevated objects produce hyperbola signatures at the sediment surface in the echo data which can be used to complement the SSS data. The nonlinear acoustic propagation of the SES sound pulses produces a comparably small foot print which results in high spatial resolution (decimeter in the xyz directions) and hence allows a more precise demarcation of hard substrate areas. Data for this study were recorded in the "Sylt Outer Reef" (German Bight, North Sea) in May 2013 and March 2015. The investigated area is characterized by heterogeneously distributed moraine deposits and rippled coarse sediments partly draped with Holocene fine sands. The relict sediments and the rippled coarse sediments indicate both high backscatter intensities but can be distinguished by means of the hyperbola locations. The northeast of the study area is dominated by rippled coarse sediments (without hyperbolas) and the southwestern part by relict sediments with a high amount of stones represented by hyperbolas which is also proven by extensive ground-truthing (grab sampling and high quality underwater videos). An automated procedure to identify and export the hyperbola positions makes the demarcation of hard substrate grounds (here: relict sediments) reproducible, faster and less complex in comparison to the visual-manual identification on the basis of sidescan sonar data.

  7. WE-G-BRA-05: IROC Houston On-Site Audits and Parameters That Affect Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kry, S; Dromgoole, L; Alvarez, P

    Purpose: To highlight the IROC Houston on-site dosimetry audit program, and to investigate the impact of clinical conditions on the frequency of errors/recommendations noted by IROC Houston. Methods: The results of IROC Houston on-site audits from 2000 to present were abstracted and compared to clinical parameters; the data set included 409 institutions and 1020 linacs. In particular, we investigated the frequency of recommendations versus year and the impact of repeat visits on the number of recommendations. We also investigated the impact of several clinical parameters on the number of recommendations: the number and age of the linacs, the linac/TPS combination, and the scope of the QA program. Results: The number of recommendations per institution (3.1 on average) has declined between 2000 and the present, although the number of recommendations per machine (0.89) has not changed. Previous IROC Houston site visits did not result in fewer recommendations on a repeat visit, but IROC Houston tests have changed substantially during the last 15 years as radiotherapy technology has changed. There was no impact on the number of recommendations from the number of machines at the institution or the age of a given machine. The fewest recommendations were observed for Varian-Eclipse combinations (0.71 recs/machine), while Elekta-Pinnacle combinations yielded the most (1.62 recs/machine). Finally, in the TG-142 era (post-2010), institutions that had a QA recommendation (n=77) had significantly more other recommendations (1.83 per institution) than those that had no QA recommendation (n=12, 1.33 per institution). Conclusion: Establishing and maintaining a successful radiotherapy program is challenging, and areas of improvement can routinely be identified. Clinical conditions such as linac-TPS combinations and the establishment of a good QA program affect the frequency of errors/deficiencies identified by IROC Houston during the on-site review process.

  8. Industrial Inspection with Open Eyes: Advance with Machine Vision Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Zheng; Ukida, H.; Niel, Kurt

    Machine vision systems have evolved significantly with technology advances to tackle the challenges of the modern manufacturing industry. A wide range of industrial inspection applications for quality control benefit from visual information captured by different types of cameras variously configured in a machine vision system. This chapter screens the state of the art in machine vision technologies in the light of hardware, software tools, and major algorithm advances for industrial inspection. Inspection beyond the visual spectrum offers a significant complement to visual inspection. Combining multiple technologies makes it possible for inspection to achieve better performance and efficiency in varied applications. The diversity of the applications demonstrates the great potential of machine vision systems for industry.

  9. 75 FR 79352 - Arbitration Panel Decision Under the Randolph-Sheppard Act

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-20

    ... manage vending machines at the Chemeketa Community College in addition to those he was already operating... 2005, while operating his vending location at the Lottery Building, Complainant learned from another... permitting it to be combined with the proposed espresso cart, in return for a vending machine location at the...

  10. 10 CFR 431.296 - Energy conservation standards and their effective dates.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... CERTAIN COMMERCIAL AND INDUSTRIAL EQUIPMENT Refrigerated Bottled or Canned Beverage Vending Machines... refrigerated bottled or canned beverage vending machine manufactured on or after August 31, 2012 shall have a... (kilowatt hours per day) Class A MDEC = 0.055 × V + 2.56. Class B MDEC = 0.073 × V + 3.16. Combination...
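
    The quoted Class A and Class B formulas give the maximum daily energy consumption (MDEC) as a linear function of V. Below is a minimal sketch evaluating both, assuming only what the snippet states (the coefficients and the kWh/day unit); the example volume of 30 is hypothetical, and the unit convention for V is defined elsewhere in the regulation, not here.

        # MDEC (kWh/day) per the formulas quoted in the snippet above.
        # V is the volume term as defined in 10 CFR 431.296; its unit
        # convention is not shown in the snippet, so treat it as given.
        def mdec_class_a(v):
            return 0.055 * v + 2.56

        def mdec_class_b(v):
            return 0.073 * v + 3.16

        print(mdec_class_a(30.0))  # 4.21 kWh/day for a hypothetical V = 30
        print(mdec_class_b(30.0))  # 5.35 kWh/day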

  11. Predictors of return rate discrimination in slot machine play.

    PubMed

    Coates, Ewan; Blaszczynski, Alex

    2014-09-01

    The purpose of this study was to investigate the extent to which accurate estimates of payback percentages and volatility, combined with prior learning, enabled players to successfully discriminate between multi-line/multi-credit slot machines that provided differing rates of reinforcement. The aim was to determine if the capacity to discriminate structural characteristics of gaming machines influenced player choices in selecting 'favourite' slot machines. Slot machine gambling history, gambling beliefs and knowledge, impulsivity, illusions of control, and problem-solving style were assessed in a sample of 48 first-year undergraduate psychology students. Participants were subsequently exposed to a choice paradigm in which they could freely select to play either of two concurrently presented PC-simulated slot machines programmed to randomly differ in expected player return rates (payback percentage) and win frequency (volatility). Results suggest that prior learning and cognitions (particularly the gambler's fallacy), but not payback, were major contributors to the ability of a player to discriminate volatility between slot machines. Participants displayed a general tendency to discriminate payback, but counter-intuitively placed more bets on the slot machine with the lower payback percentage.

  12. Using the Relevance Vector Machine Model Combined with Local Phase Quantization to Predict Protein-Protein Interactions from Protein Sequences.

    PubMed

    An, Ji-Yong; Meng, Fan-Rong; You, Zhu-Hong; Fang, Yu-Hong; Zhao, Yu-Jun; Zhang, Ming

    2016-01-01

    We propose a novel computational method known as RVM-LPQ that combines the Relevance Vector Machine (RVM) model and Local Phase Quantization (LPQ) to predict PPIs from protein sequences. The main improvements are the results of representing protein sequences using the LPQ feature representation on a Position Specific Scoring Matrix (PSSM), reducing the influence of noise using Principal Component Analysis (PCA), and using a Relevance Vector Machine (RVM)-based classifier. We perform 5-fold cross-validation experiments on Yeast and Human datasets, and we achieve very high accuracies of 92.65% and 97.62%, respectively, which are significantly better than those of previous works. To further evaluate the proposed method, we compare it with the state-of-the-art support vector machine (SVM) classifier on the Yeast dataset. The experimental results demonstrate that our RVM-LPQ method clearly outperforms the SVM-based method. The promising experimental results show the efficiency and simplicity of the proposed method, which can serve as an automatic decision support tool for future proteomics research.
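
    As a rough illustration of the pipeline shape described above (feature representation, PCA denoising, then classification), here is a minimal scikit-learn sketch. scikit-learn ships no Relevance Vector Machine, so an SVM stands in for the RVM stage, and the random matrix X stands in for LPQ descriptors computed on PSSMs; all names and sizes are placeholders.

        # Sketch of the denoise-then-classify pipeline: PCA followed by a
        # classifier, scored with 5-fold cross-validation as in the paper.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 256))   # placeholder LPQ feature vectors
        y = rng.integers(0, 2, size=200)  # placeholder interact / no-interact labels

        pipe = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
        print(cross_val_score(pipe, X, y, cv=5).mean())  # 5-fold CV accuracy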

  13. A Novel Approach for Lie Detection Based on F-Score and Extreme Learning Machine

    PubMed Central

    Gao, Junfeng; Wang, Zhao; Yang, Yong; Zhang, Wenjia; Tao, Chunyi; Guan, Jinan; Rao, Nini

    2013-01-01

    A new machine learning method referred to as F-score_ELM was proposed to classify lying and truth-telling using electroencephalogram (EEG) signals from 28 guilty and innocent subjects. Thirty-one features were extracted from the probe responses of these subjects. Then, a recently developed classifier called the extreme learning machine (ELM) was combined with F-score, a simple but effective feature selection method, to jointly optimize the number of hidden nodes of the ELM and the feature subset by a grid-searching training procedure. The method was compared to two classification models combining principal component analysis with back-propagation network and support vector machine classifiers. We thoroughly assessed the performance of these classification models, including the training and testing time, sensitivity and specificity on the training and testing sets, as well as network size. The experimental results showed that the number of hidden nodes can be effectively optimized by the proposed method. Also, F-score_ELM obtained the best classification accuracy and required the shortest training and testing time. PMID:23755136
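
    The two ingredients named above are easy to state concretely. A minimal sketch, assuming the standard F-score definition for binary labels and a plain ELM (random hidden layer plus least-squares output weights); the data, the feature count of 31 and the selection cutoff are placeholders, and the full grid search over hidden nodes and feature subsets is omitted.

        # F-score feature ranking plus a bare-bones extreme learning machine.
        import numpy as np

        def f_score(X, y):
            """Per-feature F-score for a binary labelling y in {0, 1}."""
            pos, neg = X[y == 1], X[y == 0]
            num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
            den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
            return num / den

        def elm_fit(X, y, n_hidden, rng):
            W = rng.normal(size=(X.shape[1], n_hidden))
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)        # random, untrained hidden layer
            beta = np.linalg.pinv(H) @ y  # least-squares output weights
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return (np.tanh(X @ W + b) @ beta > 0.5).astype(int)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(56, 31))          # placeholder EEG features
        y = rng.integers(0, 2, 56)             # placeholder lie/truth labels
        keep = np.argsort(f_score(X, y))[-10:] # keep the 10 best-ranked features
        W, b, beta = elm_fit(X[:, keep], y, n_hidden=20, rng=rng)
        print((elm_predict(X[:, keep], W, b, beta) == y).mean())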

  14. Multi-Response Optimization of WEDM Process Parameters Using Taguchi Based Desirability Function Analysis

    NASA Astrophysics Data System (ADS)

    Majumder, Himadri; Maity, Kalipada

    2018-03-01

    Shape memory alloys have a unique capability to return to their original shape after physical deformation on applying heat or a thermo-mechanical or magnetic load. In this experimental investigation, desirability function analysis (DFA), a multi-attribute decision-making method, was utilized to find the optimum input parameter setting during wire electrical discharge machining (WEDM) of Ni-Ti shape memory alloy. Four critical machining parameters, namely pulse on time (TON), pulse off time (TOFF), wire feed (WF) and wire tension (WT), were taken as machining inputs for the experiments to optimize three interconnected responses: cutting speed, kerf width, and surface roughness. The input parameter combination TON = 120 μs, TOFF = 55 μs, WF = 3 m/min and WT = 8 kg-F was found to produce the optimum results. The optimum process parameters for each desired response were also obtained using Taguchi's signal-to-noise ratio. A confirmation test was performed to validate the optimum machining parameter combination, which affirmed that DFA is a competent approach for selecting optimum input parameters for the desired response quality in WEDM of Ni-Ti shape memory alloy.
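
    Desirability function analysis reduces the three responses above to a single composite score per run: each response is mapped to a desirability in [0, 1] (larger-the-better for cutting speed, smaller-the-better for kerf width and roughness) and the composite is their geometric mean. A minimal sketch with invented response values:

        # Desirability function analysis over three responses per run.
        import numpy as np

        def d_larger(y):   # larger-the-better desirability in [0, 1]
            return (y - y.min()) / (y.max() - y.min())

        def d_smaller(y):  # smaller-the-better desirability in [0, 1]
            return (y.max() - y) / (y.max() - y.min())

        # One row per experimental run: [cutting speed, kerf width, roughness]
        runs = np.array([[2.1, 0.32, 2.8],
                         [2.6, 0.35, 3.1],
                         [1.8, 0.30, 2.5]])
        D = np.column_stack([d_larger(runs[:, 0]),
                             d_smaller(runs[:, 1]),
                             d_smaller(runs[:, 2])])
        composite = D.prod(axis=1) ** (1 / 3)    # geometric mean over responses
        print("best run:", composite.argmax())   # run with highest desirability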

  15. Calculations of the binding affinities of protein-protein complexes with the fast multipole method

    NASA Astrophysics Data System (ADS)

    Kim, Bongkeun; Song, Jiming; Song, Xueyu

    2010-09-01

    In this paper, we used a coarse-grained model at the residue level to calculate the binding free energies of three protein-protein complexes. General formulations to calculate the electrostatic binding free energy and the van der Waals free energy are presented by solving linearized Poisson-Boltzmann equations using the boundary element method in combination with the fast multipole method. The residue level model with the fast multipole method allows us to efficiently investigate how the mutations on the active site of the protein-protein interface affect the changes in binding affinities of protein complexes. Good correlations between the calculated results and the experimental ones indicate that our model can capture the dominant contributions to the protein-protein interactions. At the same time, additional effects on protein binding due to atomic details are also discussed in the context of the limitations of such a coarse-grained model.

  16. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.

  17. Development of Simultaneous Corrosion Barrier and Optimized Microstructure in FeCrAl Heat-Resistant Alloy for Energy Applications. Part 1: The Protective Scale

    NASA Astrophysics Data System (ADS)

    Pimentel, G.; Aranda, M. M.; Chao, J.; González-Carrasco, J. L.; Capdevila, C.

    2015-09-01

    Coarse-grained Fe-based oxide dispersion-strengthened (ODS) steels are a class of advanced materials for combined cycle gas turbine systems intended to deal with operating temperatures and pressures of around 1100°C and 15-30 bar in aggressive environments, which would increase biomass energy conversion efficiencies up to 45% and above. This two-part paper reports on the possibility of developing a simultaneous corrosion barrier and optimized microstructure in a FeCrAl heat-resistant alloy for energy applications. The first part reports the mechanism of generating a dense, self-healing α-alumina layer by thermal oxidation during a heat treatment that leads to a coarse-grained microstructure of potential value for high-temperature creep resistance in a FeCrAl ODS ferritic alloy, which will be described in more detail in the second part.

  18. Early stages of clathrin aggregation at a membrane in coarse-grained simulations

    NASA Astrophysics Data System (ADS)

    Giani, M.; den Otter, W. K.; Briels, W. J.

    2017-04-01

    The self-assembly process of clathrin coated pits during endocytosis has been simulated by combining and extending coarse grained models of the clathrin triskelion, the adaptor protein AP2, and a flexible network membrane. The AP2's core, upon binding to membrane and cargo, releases a motif that can bind clathrin. In conditions where the core-membrane-cargo binding is weak, the binding of this motif to clathrin can result in a stable complex. We characterize the conditions and mechanisms resulting in the formation of clathrin lattices that curve the membrane, i.e., clathrin coated pits. The mechanical properties of the AP2 β linker appear crucial to the orientation of the curved clathrin lattice relative to the membrane, with wild-type short linkers giving rise to the inward curving buds enabling endocytosis while long linkers produce upside-down cages and outward curving bulges.

  19. Evolution of twinning in extruded AZ31 alloy with bimodal grain structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garcés, G., E-mail: ggarces@cenim.csic.es

    2017-04-15

    Twinning in extruded AZ31 alloy with a bimodal grain structure is studied under compression along the extrusion direction. This study combined in-situ measurements during the compression tests by synchrotron radiation diffraction and acoustic emission techniques with the evaluation of the microstructure and texture in post-mortem compression samples deformed to different strains. The microstructure of the alloy is characterized by the coexistence of large areas of fine dynamically recrystallized grains and coarse non-recrystallized grains elongated along the extrusion direction. Twinning occurs initially in the large elongated grains before the macroscopic yield stress, which is controlled by twinning in the equiaxed dynamically recrystallized grains. Highlights: • The AZ31 extruded at low temperature exhibits a bimodal grain structure. • Twinning takes place before macroscopic yielding in coarse non-DRXed grains. • DRXed grains control the onset of plasticity in magnesium alloys with bimodal grain structure.

  20. A Calculus for Boxes and Traits in a Java-Like Setting

    NASA Astrophysics Data System (ADS)

    Bettini, Lorenzo; Damiani, Ferruccio; de Luca, Marco; Geilmann, Kathrin; Schäfer, Jan

    The box model is a component model for the object-oriented paradigm that defines components (the boxes) with clear encapsulation boundaries. Having well-defined boundaries is crucial in component-based software development, because it enables reasoning about the interference and interaction between a component and its context. In general, boxes contain several objects and inner boxes, some of which are local to the box and cannot be accessed from other boxes, while others are accessible to other boxes. A trait is a set of methods divorced from any class hierarchy. Traits can be composed together to form classes or other traits. We present a calculus for boxes and traits. Traits are units of fine-grained reuse, whereas boxes can be seen as units of coarse-grained reuse. The calculus is equipped with an ownership type system and allows us to combine coarse- and fine-grained reuse of code while maintaining encapsulation of components.

  1. Proximity correction of high-dosed frame with PROXECCO

    NASA Astrophysics Data System (ADS)

    Eisenmann, Hans; Waas, Thomas; Hartmann, Hans

    1994-05-01

    The usefulness of electron beam lithography is strongly related to the efficiency and quality of the methods used for proximity correction. This paper addresses this issue by proposing an extension to the new proximity correction program PROXECCO. The combination of a framing step with PROXECCO produces a pattern with very high edge accuracy and still allows usage of the fast correction procedure. Writing a frame with a higher dose imitates a fine-resolution correction in which the coarse part is disregarded. After handling the high-resolution effect by means of framing, an additional coarse correction is still needed, because higher doses make a higher contribution to the proximity effect. This additional proximity effect is taken into account with the help of the multi-dose input of PROXECCO. The dose of the frame is variable, depending on the deposited energy coming from backscattering in the proximity. Simulations confirm the very high edge accuracy of the applied method.

  2. Green-noise halftoning with dot diffusion

    NASA Astrophysics Data System (ADS)

    Lippens, Stefaan; Philips, Wilfried

    2007-02-01

    Dot diffusion is a halftoning technique that is based on the traditional error diffusion concept, but offers a high degree of parallel processing through its block-based approach. Traditional dot diffusion, however, suffers from periodicity artifacts. To limit the visibility of these artifacts, we propose grid diffusion, which applies different class matrices to different blocks. Furthermore, in this paper we discuss two approaches within the dot diffusion framework to generate green-noise halftone patterns. The first approach is based on output-dependent feedback (hysteresis), analogous to the standard green-noise error diffusion techniques. We observe that the resulting halftones are rather coarse and highly dependent on the dot diffusion class matrices used. In the second approach we do not limit the diffusion to the nearest neighbors. This leads to less coarse halftones compared to the first approach. The drawback is that it can only cope with rather limited cluster sizes. We can reduce these drawbacks by combining the two approaches.

  3. Coarse-Grained Structural Modeling of Molecular Motors Using Multibody Dynamics

    PubMed Central

    Parker, David; Bryant, Zev; Delp, Scott L.

    2010-01-01

    Experimental and computational approaches are needed to uncover the mechanisms by which molecular motors convert chemical energy into mechanical work. In this article, we describe methods and software to generate structurally realistic models of molecular motor conformations compatible with experimental data from different sources. Coarse-grained models of molecular structures are constructed by combining groups of atoms into a system of rigid bodies connected by joints. Contacts between rigid bodies enforce excluded volume constraints, and spring potentials model system elasticity. This simplified representation allows the conformations of complex molecular motors to be simulated interactively, providing a tool for hypothesis building and quantitative comparisons between models and experiments. In an example calculation, we have used the software to construct atomically detailed models of the myosin V molecular motor bound to its actin track. The software is available at www.simtk.org. PMID:20428469

  4. Teaching a Machine to Feel Postoperative Pain: Combining High-Dimensional Clinical Data with Machine Learning Algorithms to Forecast Acute Postoperative Pain

    PubMed Central

    Tighe, Patrick J.; Harle, Christopher A.; Hurley, Robert W.; Aytug, Haldun; Boezaart, Andre P.; Fillingim, Roger B.

    2015-01-01

    Background: Given their ability to process high-dimensional datasets with hundreds of variables, machine learning algorithms may offer one solution to the vexing challenge of predicting postoperative pain. Methods: Here, we report on the application of machine learning algorithms to predict postoperative pain outcomes in a retrospective cohort of 8071 surgical patients using 796 clinical variables. Five algorithms were compared in terms of their ability to forecast moderate to severe postoperative pain: Least Absolute Shrinkage and Selection Operator (LASSO), gradient-boosted decision tree, support vector machine, neural network, and k-nearest neighbor, with logistic regression included for baseline comparison. Results: In forecasting moderate to severe postoperative pain for postoperative day (POD) 1, the LASSO algorithm, using all 796 variables, had the highest accuracy, with an area under the receiver-operating characteristic curve (AUC) of 0.704. Next, the gradient-boosted decision tree had an AUC of 0.665 and the k-nearest neighbor algorithm had an AUC of 0.643. For POD 3, the LASSO algorithm, using all variables, again had the highest accuracy, with an AUC of 0.727. Logistic regression had a lower AUC of 0.5 for predicting pain outcomes on POD 1 and 3. Conclusions: Machine learning algorithms, when combined with complex and heterogeneous data from electronic medical record systems, can forecast acute postoperative pain outcomes with accuracies similar to methods that rely only on variables specifically collected for pain outcome prediction. PMID:26031220
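
    A minimal sketch of the baseline comparison described above, assuming LASSO-style forecasting can be approximated by an L1-penalised logistic model scored by cross-validated AUC; the clinical matrix and labels are random placeholders, so the printed AUC will hover near chance.

        # L1-penalised ("LASSO") logistic classification scored by AUC.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        X = rng.normal(size=(500, 796))    # placeholder clinical variables
        y = rng.integers(0, 2, size=500)   # placeholder pain labels for POD 1

        lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
        auc = cross_val_score(lasso, X, y, cv=5, scoring="roc_auc").mean()
        print(f"cross-validated AUC: {auc:.3f}")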

  5. Systematic and simulation-free coarse graining of homopolymer melts: a relative-entropy-based study.

    PubMed

    Yang, Delian; Wang, Qiang

    2015-09-28

    We applied the systematic and simulation-free strategy proposed in our previous work (D. Yang and Q. Wang, J. Chem. Phys., 2015, 142, 054905) to the relative-entropy-based (RE-based) coarse graining of homopolymer melts. RE-based coarse graining provides a quantitative measure of the coarse-graining performance and can be used to select the appropriate analytic functional forms of the pair potentials between coarse-grained (CG) segments, which are more convenient to use than the tabulated (numerical) CG potentials obtained from structure-based coarse graining. In our general coarse-graining strategy for homopolymer melts using the RE framework proposed here, the bonding and non-bonded CG potentials are coupled and need to be solved simultaneously. Taking the hard-core Gaussian thread model (K. S. Schweizer and J. G. Curro, Chem. Phys., 1990, 149, 105) as the original system, we performed RE-based coarse graining using the polymer reference interaction site model theory under the assumption that the intrachain segment pair correlation functions of CG systems are the same as those in the original system, which de-couples the bonding and non-bonded CG potentials and simplifies our calculations (that is, we only calculated the latter). We compared the performance of various analytic functional forms of non-bonded CG pair potential and closures for CG systems in RE-based coarse graining, as well as the structural and thermodynamic properties of original and CG systems at various coarse-graining levels. Our results obtained from RE-based coarse graining are also compared with those from structure-based coarse graining.

  6. A comparative study of machine learning methods for time-to-event survival data for radiomics risk modelling.

    PubMed

    Leger, Stefan; Zwanenburg, Alex; Pilz, Karoline; Lohaus, Fabian; Linge, Annett; Zöphel, Klaus; Kotzerke, Jörg; Schreiber, Andreas; Tinhofer, Inge; Budach, Volker; Sak, Ali; Stuschke, Martin; Balermpas, Panagiotis; Rödel, Claus; Ganswindt, Ute; Belka, Claus; Pigorsch, Steffi; Combs, Stephanie E; Mönnich, David; Zips, Daniel; Krause, Mechthild; Baumann, Michael; Troost, Esther G C; Löck, Steffen; Richter, Christian

    2017-10-16

    Radiomics applies machine learning algorithms to quantitative imaging data to characterise the tumour phenotype and predict clinical outcome. For the development of radiomics risk models, a variety of different algorithms is available and it is not clear which one gives optimal results. Therefore, we assessed the performance of 11 machine learning algorithms combined with 12 feature selection methods by the concordance index (C-Index), to predict loco-regional tumour control (LRC) and overall survival for patients with head and neck squamous cell carcinoma. The considered algorithms are able to deal with continuous time-to-event survival data. Feature selection and model building were performed on a multicentre cohort (213 patients) and validated using an independent cohort (80 patients). We found several combinations of machine learning algorithms and feature selection methods which achieve similar results, e.g. C-Index = 0.71 and BT-COX: C-Index = 0.70 in combination with Spearman feature selection. Using the best performing models, patients were stratified into groups of low and high risk of recurrence. Significant differences in LRC were obtained between both groups on the validation cohort. Based on the presented analysis, we identified a subset of algorithms which should be considered in future radiomics studies to develop stable and clinically relevant predictive models for time-to-event endpoints.
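
    The concordance index used for model comparison above can be computed with, for example, the lifelines package. A minimal sketch with invented survival times, event indicators and risk scores; lifelines scores predicted survival times (higher means longer predicted survival), hence the negated risk score.

        # Scoring a survival model by the concordance index (C-Index).
        import numpy as np
        from lifelines.utils import concordance_index

        rng = np.random.default_rng(3)
        times = rng.exponential(24.0, size=213)      # months to event/censoring
        events = rng.integers(0, 2, size=213)        # 1 = event observed
        risk = -times + rng.normal(0, 6, size=213)   # higher risk, earlier event

        # concordance_index expects predicted survival times, so negate risk.
        print(concordance_index(times, -risk, events))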

  7. Prostate cancer detection using machine learning techniques by employing combination of features extracting strategies.

    PubMed

    Hussain, Lal; Ahmed, Adeel; Saeed, Sharjil; Rathore, Saima; Awan, Imtiaz Ahmed; Shah, Saeed Arif; Majid, Abdul; Idris, Adnan; Awan, Anees Ahmed

    2018-02-06

    Prostate cancer is the second leading cause of cancer deaths among men. Early detection can effectively reduce the mortality caused by prostate cancer. The high resolution and multiresolution nature of prostate MRI requires proper diagnostic systems and tools. In the past, researchers developed computer-aided diagnosis (CAD) systems to help the radiologist detect abnormalities. In this research paper, we have employed machine learning techniques such as a Bayesian approach, support vector machine (SVM) kernels (polynomial, radial basis function (RBF) and Gaussian) and decision trees for detecting prostate cancer. Moreover, different feature-extracting strategies are proposed to improve the detection performance. The feature-extracting strategies are based on texture, morphological, scale invariant feature transform (SIFT), and elliptic Fourier descriptor (EFDs) features. The performance was evaluated based on single features as well as combinations of features using machine learning classification techniques. Cross-validation (jack-knife k-fold) was performed and performance was evaluated in terms of the receiver operating characteristic (ROC) curve, specificity, sensitivity, positive predictive value (PPV), negative predictive value (NPV), and false positive rate (FPR). Based on single feature-extracting strategies, the SVM Gaussian kernel gives the highest accuracy of 98.34% with an AUC of 0.999. Using combinations of feature-extracting strategies, the SVM Gaussian kernel with texture + morphological and EFDs + morphological features gives the highest accuracy of 99.71% and an AUC of 1.00.
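
    A minimal sketch of the kernel comparison described above, assuming the usual scikit-learn setup in which the Gaussian kernel is the RBF kernel and combined feature strategies amount to column-wise concatenation of the feature blocks; the feature matrix and labels are random placeholders.

        # Comparing SVM kernels by cross-validated AUC.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(4)
        X = rng.normal(size=(300, 60))    # placeholder combined feature matrix
        y = rng.integers(0, 2, size=300)  # 1 = cancer, 0 = benign (placeholder)

        for kernel in ("poly", "rbf"):    # "Gaussian" is the RBF kernel here
            auc = cross_val_score(SVC(kernel=kernel), X, y,
                                  cv=10, scoring="roc_auc").mean()
            print(kernel, round(auc, 3))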

  8. Machine characterization and benchmark performance prediction

    NASA Technical Reports Server (NTRS)

    Saavedra-Barrera, Rafael H.

    1988-01-01

    From runs of standard benchmarks or benchmark suites, it is not possible to characterize a machine or to predict the run time of other benchmarks which have not been run. A new approach to benchmarking and machine characterization is reported. The creation and use of a machine analyzer is described, which measures the performance of a given machine on FORTRAN source language constructs. The machine analyzer yields a set of parameters which characterize the machine and spotlight its strong and weak points. Also described is a program analyzer, which analyzes FORTRAN programs and determines the frequency of execution of each of the same set of source language operations. It is then shown that by combining a machine characterization and a program characterization, we are able to predict with good accuracy the run time of a given benchmark on a given machine. Characterizations are provided for the Cray X-MP/48, Cyber 205, IBM 3090/200, Amdahl 5840, Convex C-1, VAX 8600, VAX 11/785, VAX 11/780, SUN 3/50, and IBM RT-PC/125, and for the following benchmark programs or suites: Los Alamos (BMK8A1), Baskett, Linpack, Livermore Loops, Mandelbrot Set, NAS Kernels, Shell Sort, Smith, Whetstone and Sieve of Eratosthenes.
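
    The prediction rule implied above is a dot product: the program characterization supplies per-operation execution counts and the machine characterization supplies per-operation times. A minimal sketch with invented numbers for a handful of operation classes:

        # Run time predicted from machine and program characterizations.
        import numpy as np

        ops = ["fadd", "fmul", "fdiv", "load", "store", "branch"]
        counts = np.array([4.0e8, 3.5e8, 2.0e7, 9.0e8, 4.0e8, 1.5e8])     # program
        seconds_per_op = np.array([2e-9, 3e-9, 2e-8, 4e-9, 4e-9, 1e-9])   # machine

        predicted_runtime = counts @ seconds_per_op   # sum of count_i * time_i
        print(f"predicted run time: {predicted_runtime:.2f} s")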

  9. Hybrid micromachining using a nanosecond pulsed laser and micro EDM

    NASA Astrophysics Data System (ADS)

    Kim, Sanha; Kim, Bo Hyun; Chung, Do Kwan; Shin, Hong Shik; Chu, Chong Nam

    2010-01-01

    Micro electrical discharge machining (micro EDM) is a well-known precise machining process that achieves micro structures of excellent quality for any conductive material. However, slow machining speed and high tool wear are the main drawbacks of this process. Although the use of deionized water instead of kerosene as the dielectric fluid can reduce tool wear and increase machining speed, the material removal rate (MRR) is still low. In contrast, laser ablation using a nanosecond pulsed laser is a fast and wear-free machining process but achieves micro features of rather low quality. Therefore, the integration of these two processes can overcome their respective disadvantages. This paper reports a hybrid process combining a nanosecond pulsed laser and micro EDM for micromachining. A novel hybrid micromachining system that combines the two discrete machining processes is introduced. Then, the feasibility and characteristics of the hybrid machining process are investigated in comparison with conventional EDM and laser ablation. It is verified experimentally that the machining time can be effectively reduced in both EDM drilling and milling by rapid laser pre-machining prior to micro EDM. Finally, some examples of complicated 3D micro structures fabricated by the hybrid process are shown.

  10. LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.

    PubMed

    Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu

    2005-01-01

    Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.

  11. The wear properties of CFR-PEEK-OPTIMA articulating against ceramic assessed on a multidirectional pin-on-plate machine.

    PubMed

    Scholes, S C; Unsworth, A

    2007-04-01

    In an attempt to prolong the lives of rubbing implantable devices, several 'new' materials have been examined to determine their suitability as joint couplings. Tests were performed on a multidirectional pin-on-plate machine to determine the wear of both pitch- and PAN (polyacrylonitrile)-based carbon fibre reinforced polyetheretherketone (CFR-PEEK-OPTIMA) pins articulating against both BioLox Delta and BioLox Forte plates (ceramic materials). Both reciprocating and rotational motion were applied to the samples. The tests were conducted using 24.5 per cent bovine serum as the lubricant (protein concentration 15 g/l). Although all four material combinations gave similarly low wear with no statistically significant difference (p > 0.25), the lowest average total wear in these pin-on-plate tests was produced by CFR-PEEK-OPTIMA pitch pins against BioLox Forte plates. This was much lower than the wear produced by conventional joint materials (metal-on-polyethylene) and metal-on-metal combinations when tested on the pin-on-plate machine. This gives grounds for optimism that these PEEK-OPTIMA-based material combinations may perform well in joint applications.

  12. Entity recognition in the biomedical domain using a hybrid approach.

    PubMed

    Basaldella, Marco; Furrer, Lenz; Tasso, Carlo; Rinaldi, Fabio

    2017-11-09

    This article describes a high-recall, high-precision approach for the extraction of biomedical entities from scientific articles. The approach uses a two-stage pipeline, combining a dictionary-based entity recognizer with a machine-learning classifier. First, the OGER entity recognizer, which has a bias towards high recall, annotates the terms that appear in selected domain ontologies. Subsequently, the Distiller framework uses this information as a feature for a machine learning algorithm to select the relevant entities only. For this step, we compare two different supervised machine-learning algorithms: Conditional Random Fields and Neural Networks. In an in-domain evaluation using the CRAFT corpus, we test the performance of the combined systems when recognizing chemicals, cell types, cellular components, biological processes, molecular functions, organisms, proteins, and biological sequences. Our best system combines dictionary-based candidate generation with Neural-Network-based filtering. It achieves an overall precision of 86% at a recall of 60% on the named entity recognition task, and a precision of 51% at a recall of 49% on the concept recognition task. These results are to our knowledge the best reported so far in this particular task.

  13. Ranking support vector machine for multiple kernels output combination in protein-protein interaction extraction from biomedical literature.

    PubMed

    Yang, Zhihao; Lin, Yuan; Wu, Jiajin; Tang, Nan; Lin, Hongfei; Li, Yanpeng

    2011-10-01

    Knowledge about protein-protein interactions (PPIs) unveils the molecular mechanisms of biological processes. However, the volume and content of published biomedical literature on protein interactions is expanding rapidly, making it increasingly difficult for interaction database curators to detect and curate protein interaction information manually. We present a multiple kernel learning-based approach for automatic PPI extraction from biomedical literature. The approach combines feature-based, tree, and graph kernels, and combines their outputs with a Ranking support vector machine (SVM). Experimental evaluations show that the features in the individual kernels are complementary and that the kernel combined with the Ranking SVM achieves better performance than the individual kernels, an equal-weight combination, and an optimal-weight combination. Our approach achieves state-of-the-art performance with respect to comparable evaluations, with a 64.88% F-score and 88.02% AUC on the AImed corpus. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
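
    A minimal sketch of the kernel-combination step, assuming the common formulation in which the individual Gram matrices are mixed with weights before being handed to an SVM with a precomputed kernel. A plain classification SVC stands in for the Ranking SVM, and the three kernels here are generic stand-ins for the feature-based, tree and graph kernels.

        # Weighted combination of precomputed kernels fed to an SVM.
        import numpy as np
        from sklearn.svm import SVC
        from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

        rng = np.random.default_rng(5)
        X = rng.normal(size=(120, 40))
        y = rng.integers(0, 2, size=120)

        K1 = linear_kernel(X)              # stand-in "feature-based" kernel
        K2 = rbf_kernel(X, gamma=0.05)     # stand-in "tree" kernel
        K3 = rbf_kernel(X, gamma=0.5)      # stand-in "graph" kernel
        w = np.array([0.5, 0.3, 0.2])      # learned/chosen combination weights
        K = w[0] * K1 + w[1] * K2 + w[2] * K3

        clf = SVC(kernel="precomputed").fit(K, y)  # train on the combined kernel
        print(clf.score(K, y))                     # training accuracy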

  14. Prediction of hot spot residues at protein-protein interfaces by combining machine learning and energy-based methods.

    PubMed

    Lise, Stefano; Archambeau, Cedric; Pontil, Massimiliano; Jones, David T

    2009-10-30

    Alanine scanning mutagenesis is a powerful experimental methodology for investigating the structural and energetic characteristics of protein complexes. Individual amino acids are systematically mutated to alanine and the changes in free energy of binding (ΔΔG) are measured. Several experiments have shown that protein-protein interactions are critically dependent on just a few residues ("hot spots") at the interface. Hot spots make a dominant contribution to the free energy of binding, and if mutated they can disrupt the interaction. As mutagenesis studies require significant experimental effort, there is a need for accurate and reliable computational methods. Such methods would also add to our understanding of the determinants of affinity and specificity in protein-protein recognition. We present a novel computational strategy to identify hot spot residues, given the structure of a complex. We consider the basic energetic terms that contribute to hot spot interactions, i.e. van der Waals potentials, solvation energy, hydrogen bonds and Coulomb electrostatics. We treat them as input features and use machine learning algorithms such as Support Vector Machines and Gaussian Processes to optimally combine and integrate them, based on a set of training examples of alanine mutations. We show that our approach is effective in predicting hot spots and that it compares favourably to other available methods. In particular, we find the best performance using Transductive Support Vector Machines, a semi-supervised learning scheme. When hot spots are defined as those residues for which ΔΔG ≥ 2 kcal/mol, our method achieves a precision and a recall of 56% and 65%, respectively. We have developed a hybrid scheme in which energy terms are used as input features of machine learning models. This strategy combines the strengths of machine learning and energy-based methods. Although these two types of approaches have so far mainly been applied separately to biomolecular problems, the results of our investigation indicate that there are substantial benefits to be gained by their integration.

  15. Influence of Bank Afforestation and Snag Angle-of-fall on Riparian Large Woody Debris Recruitment

    Treesearch

    Don C. Bragg; Jeffrey L. Kershner

    2002-01-01

    A riparian large woody debris (LWD) recruitment simulator (Coarse Woody Debris [CWD]) was used to test the impact of bank afforestation and snag fall direction on delivery trends. Combining all cumulative LWD recruitment across bank afforestation levels averaged 77.1 cubic meters per 100 meter reach (both banks forested) compared to 49.3 cubic meters per 100 meter...

  16. Acoustic estimates of zooplankton and micronekton biomass in cyclones and anticyclones of the northeastern Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Ressler, Patrick Henry

    2001-12-01

    In the Gulf of Mexico (GOM), coarse to mesoscale eddies can enhance the supply of limiting nutrients into the euphotic zone, elevating primary production. This leads to 'oases' of enriched standing stocks of zooplankton and micronekton in otherwise oligotrophic deepwater (>200 m bottom depth). A combination of acoustic volume backscattering (Sv) measurements with an acoustic Doppler current profiler (ADCP) and concurrent net sampling of zooplankton and micronekton biomass in GOM eddy fields between October 1996 and November 1998 confirmed that cyclones and flow confluences were areas of locally enhanced Sv and standing stock biomass. Net samples were used both to 'sea-truth' the acoustic measurements and to assess the influence of taxonomic composition on measured Sv. During October 1996 and August 1997, a mesoscale (200--300 km diameter) cyclone-anticyclone pair in the northeastern GOM was surveyed as part of a cetacean (whale and dolphin) and seabird habitat study. Acoustic estimates of biomass in the upper 10--50 m of the water column showed that the cyclone and flow confluence were enriched relative to anticyclonic Loop Current Eddies during both years. Cetacean and seabird survey results reported by other project researchers imply that these eddies provide preferential habitat because they foster locally higher concentrations of higher-trophic-level prey. Sv measurements in November 1997 and 1998 showed that coarse scale eddies (30--150 km diameter) probably enhanced nutrients and Sv in the deepwater GOM within 100 km of the Mississippi delta, an area suspected to be important habitat for cetaceans and seabirds. Finally, Sv data collected during November-December 1997 and October-December 1998 from a mooring at the head of DeSoto Canyon in the northeastern GOM revealed temporal variability at a single location: characteristic temporal decorrelation scales were 1 day (diel vertical migration of zooplankton and micronekton) and 5 days (advective processes). A combination of acoustic and net sampling is a useful way to survey temporal and spatial patterns in zooplankton and micronekton biomass in coarse to mesoscale eddies. Further research should employ such a combination of methods to investigate plankton patterns in eddies and their implications for cetacean and seabird habitat.

  17. A method to combine target volume data from 3D and 4D planned thoracic radiotherapy patient cohorts for machine learning applications.

    PubMed

    Johnson, Corinne; Price, Gareth; Khalifa, Jonathan; Faivre-Finn, Corinne; Dekker, Andre; Moore, Christopher; van Herk, Marcel

    2018-02-01

    The gross tumour volume (GTV) is predictive of clinical outcome and consequently features in many machine-learned models. 4D-planning, however, has prompted substitution of the GTV with the internal gross target volume (iGTV). We present and validate a method to synthesise GTV data from the iGTV, allowing the combination of 3D and 4D planned patient cohorts for modelling. Expert delineations in 40 non-small cell lung cancer patients were used to develop linear fit and erosion methods to synthesise the GTV volume and shape. Quality was assessed using Dice Similarity Coefficients (DSC) and closest point measurements; by calculating dosimetric features; and by assessing the quality of random forest models built on patient populations with and without synthetic GTVs. Volume estimates were within the magnitudes of inter-observer delineation variability. Shape comparisons produced mean DSCs of 0.8817 and 0.8584 for upper and lower lobe cases, respectively. A model trained on combined true and synthetic data performed significantly better than models trained on GTV alone, or combined GTV and iGTV data. Accurate synthesis of GTV size from the iGTV permits the combination of lung cancer patient cohorts, facilitating machine learning applications in thoracic radiotherapy. Copyright © 2017 Elsevier B.V. All rights reserved.
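
    One way to read the erosion method described above: shrink a binary iGTV mask until its volume reaches a GTV volume predicted by the linear fit. A minimal sketch of that reading, with a toy mask and an invented fit target; the actual fit coefficients and masks come from the expert delineation data.

        # Synthesise a GTV mask by eroding the iGTV to a target volume.
        import numpy as np
        from scipy.ndimage import binary_erosion

        def synthesize_gtv(igtv_mask, target_voxels):
            """Erode the iGTV mask until it is at or below the target volume."""
            mask = igtv_mask.copy()
            while mask.sum() > target_voxels:
                eroded = binary_erosion(mask)
                if not eroded.any():          # stop before the mask vanishes
                    break
                mask = eroded
            return mask

        igtv = np.zeros((40, 40, 40), bool)
        igtv[10:30, 10:30, 10:30] = True      # toy iGTV: a 20^3-voxel cube
        target = int(0.75 * igtv.sum())       # placeholder linear-fit prediction
        gtv = synthesize_gtv(igtv, target)
        print(igtv.sum(), "->", gtv.sum())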

  18. Monitoring forest dynamics with multi-scale and time series imagery.

    PubMed

    Huang, Chunbo; Zhou, Zhixiang; Wang, Di; Dian, Yuanyong

    2016-05-01

    To track forest dynamics and evaluate the ecosystem services of forests effectively, timely acquisition of spatial and quantitative information on forestland is necessary. Here, a new method is proposed for mapping forest cover changes by combining multi-scale satellite remote-sensing imagery with time series data. Using time series Normalized Difference Vegetation Index products derived from Moderate Resolution Imaging Spectroradiometer images (MODIS-NDVI) and Landsat Thematic Mapper/Enhanced Thematic Mapper Plus (TM/ETM+) images as data sources, a hierarchical stepwise analysis from coarse scale to fine scale was developed for detecting forest change areas. At the coarse scale, MODIS-NDVI data with 1-km resolution were used to detect changes in land cover types, and a land cover change map was constructed using NDVI values in the vegetation growing seasons. At the fine scale, based on the results at the coarse scale, Landsat TM/ETM+ data with 30-m resolution were used to precisely detect the forest change locations and trends by analyzing a time series forest vegetation index (IFZ). The method was tested using data for Hubei Province, China. The MODIS-NDVI data from 2001 to 2012 were used to detect land cover changes, and the overall accuracy was 94.02% at the coarse scale. At the fine scale, the available TM/ETM+ images from the vegetation growing seasons between 2001 and 2012 were used to locate and verify forest changes in the Three Gorges Reservoir Area, and the overall accuracy was 94.53%. The accuracy of the two-layer hierarchical monitoring results indicates that the multi-scale monitoring method is feasible and reliable.

  19. [Distribution and enrichment characteristics of organic carbon and total nitrogen in mollisols under long-term fertilization].

    PubMed

    Xu, Xiang-ru; Luo, Kun; Zhou, Bao-ku; Wang, Jing-kuan; Zhang, Wen-ju; Xu, Ming-gang

    2015-07-01

    The characteristics and changes of soil organic carbon (SOC) and total nitrogen (TN) in different size particles of soil under different agricultural practices are the basis for better understanding soil carbon sequestration of mollisols. Based on a 31-year long-term field experiment located at the Heilongjiang Academy of Agricultural Sciences (Harbin), soil samples under six treatments were separated by a size-fractionation method to explore the changes and distribution of SOC and TN in coarse sand, fine sand, silt and clay from the top layer (0-20 cm) and subsurface layer (20-40 cm). Results showed that long-term application of manure (M) increased the percentages of SOC and TN in coarse sand and clay size fractions. In the top layer, application of nitrogen, phosphorus and potassium fertilizers combined with manure (NPKM) increased the percentages of SOC and TN in coarse sand by 191.3% and 179.3% compared with the control (CK), whereas M application increased the percentages of SOC and TN in clay by 45% and 47%, respectively. For the subsurface layer, the increase rates of SOC and TN in the corresponding fractions were lower than those in the top layer. In the surface and subsurface layers, the percentages of SOC storage in the silt size fraction accounted for 42%-63% and 48%-54%, and TN storage accounted for 34%-59% and 41%-47%, respectively. The enrichment factors of SOC and TN in coarse sand and clay fractions of surface layers increased significantly under the treatments with manure. The SOC and TN enrichment factors were highest in the NPKM, being 2.30 and 1.88, respectively, while those in the clay fraction changed little in the subsurface layer.

  20. Multi Response Optimization of Laser Micro Marking Process:A Grey- Fuzzy Approach

    NASA Astrophysics Data System (ADS)

    Shivakoti, I.; Das, P. P.; Kibria, G.; Pradhan, B. B.; Mustafa, Z.; Ghadai, R. K.

    2017-07-01

    The selection of an optimal parametric combination for efficient machining has always been a challenging issue for manufacturing researchers. The optimal parametric combination provides better machining, which improves productivity and product quality and subsequently reduces production cost and time. This paper presents a hybrid approach of grey relational analysis and fuzzy logic to obtain the optimal parametric combination for better laser beam micro-marking on Gallium Nitride (GaN) work material. Response surface methodology has been implemented for the design of experiments, considering three parameters at five levels each. The parameters current, frequency and scanning speed have been considered, with mark width, mark depth and mark intensity as the process responses.
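
    The grey relational step of the hybrid approach can be sketched directly: normalise each response, measure each run's distance from the ideal sequence, convert to grey relational coefficients and average into a grade per run. A minimal sketch with invented response values and the customary distinguishing coefficient of 0.5; the fuzzy aggregation stage is omitted.

        # Grey relational analysis over three laser micro-marking responses.
        import numpy as np

        runs = np.array([[0.9, 40.0, 1.2],     # [mark width, depth, intensity]
                         [1.1, 55.0, 1.0],
                         [0.8, 48.0, 1.4]])
        larger_better = [False, True, True]    # narrow width; deep, intense mark

        norm = np.empty_like(runs)
        for j, lb in enumerate(larger_better): # normalise each response to [0, 1]
            col = runs[:, j]
            norm[:, j] = ((col - col.min()) / (col.max() - col.min()) if lb
                          else (col.max() - col) / (col.max() - col.min()))

        delta = 1.0 - norm                     # distance to the ideal sequence
        zeta = 0.5                             # distinguishing coefficient
        grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
        grade = grc.mean(axis=1)               # grey relational grade per run
        print("best run:", grade.argmax())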

  1. Research on Microstructure and Properties of Welded Joint of High Strength Steel

    NASA Astrophysics Data System (ADS)

    Zhu, Pengxiao; Li, Yi; Chen, Bo; Ma, Xuejiao; Zhang, Dongya; Tang, Cai

    2018-01-01

    BS960 steel plates were welded by Laser-MAG and MAG. The microstructure and properties of the welded joints were investigated by optical microscopy, micro-hardness testing, universal tensile testing, impact testing, scanning electron microscopy (SEM) and fatigue testing. From a series of experiments, the following results were obtained: the grain size of the coarse grain zone of the Laser-MAG welded joint is 20 μm, while that of the MAG welded joint is about 32 μm; both fine grain regions are composed of fine lath martensite and granular bainite; and the heat affected region is narrower with Laser-MAG than with MAG. The strength and impact energy of the welded joints produced by Laser-MAG are higher than those produced by MAG. The conditioned fatigue limit of the Laser-MAG welded joint is 280 MPa, whereas that of the MAG welded joint is 250 MPa.

  2. The BLAZE language - A parallel language for scientific programming

    NASA Technical Reports Server (NTRS)

    Mehrotra, Piyush; Van Rosendale, John

    1987-01-01

    A Pascal-like scientific programming language, BLAZE, is described. BLAZE contains array arithmetic, forall loops, and APL-style accumulation operators, which allow natural expression of fine grained parallelism. It also employs an applicative or functional procedure invocation mechanism, which makes it easy for compilers to extract coarse grained parallelism using machine specific program restructuring. Thus BLAZE should allow one to achieve highly parallel execution on multiprocessor architectures, while still providing the user with conceptually sequential control flow. A central goal in the design of BLAZE is portability across a broad range of parallel architectures. The multiple levels of parallelism present in BLAZE code, in principle, allow a compiler to extract the types of parallelism appropriate for the given architecture while neglecting the remainder. The features of BLAZE are described and it is shown how this language would be used in typical scientific programming.

  3. Microstructural characterization of Ti-6Al-4V metal chips by focused ion beam (FIB) and transmission electron microscopy (TEM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, Judy; Dong, Lei; Howe, Jane Y

    2011-01-01

    The microstructure of the secondary deformation zone (SDZ) near the cutting surface in metal chips of Ti-6Al-4V formed during machining was investigated using focused ion beam (FIB) specimen preparation and transmission electron microscopy (TEM) imaging. Use of the FIB allowed precise extraction of the specimen across this region to reveal its inhomogeneous microstructure resulting from the non-uniform distribution of strain, strain rate, and temperature generated during the cutting process. Initial imaging from conventional TEM foil preparation revealed microstructures ranging from heavily textured to regions of fine grains. Using FIB preparation, the transverse microstructure could be interpreted as fine grains near the cutting surface which transitioned to coarse grains toward the free surface. At the cutting surface a 10 nm thick recrystallized layer was observed capping a 20 nm thick amorphous layer.

  4. Critical parameters for coarse coal underground slurry haulage systems

    NASA Technical Reports Server (NTRS)

    Maynard, D. P.

    1981-01-01

    Factors are identified which must be considered in meeting the requirements of a transportation system for conveying, in a pipeline, the coal mined by a continuous mining machine to a storage location near the mine entrance or to a coal preparation plant located near the surface. For successful operation, the slurry haulage system should be designed to operate in the turbulent flow regime at a flow rate at least 30% greater than the deposition velocity (the slurry flow rate at which the solid particles tend to settle in the pipe). The capacity of the haulage system should be compatible with the projected coal output. Particle size, solids concentration, density, and viscosity of the suspension are of importance, as is the selection of the pumps, pipes, and valves. The parameters with the greatest effect on system performance are flow velocity, pressure, coal particle size, and solids concentration.

  5. Integrated intelligent sensor for the textile industry

    NASA Astrophysics Data System (ADS)

    Peltie, Philippe; David, Dominique

    1996-08-01

    A new sensor has been developed for pantyhose inspection. Unlike a first complete inspection machine devoted to post-manufacturing control of the whole panty, this sensor will be directly integrated on currently existing manufacturing machines. The aim is to combine the advantages of miniaturization to design an intelligent, compact and very cheap product, which should be integrated without requiring any modifications of the host machines. The sensor part was designed to achieve close-range acquisition, and various solutions have been explored to maintain an adequate depth of field. The illumination source will be integrated in the device. The processing part will include correction facilities and electronic processing. Finally, high-level information will be output in order to interface directly with the manufacturing machine's automation controller.

  6. Human capabilities in space. [man machine interaction

    NASA Technical Reports Server (NTRS)

    Nicogossian, A. E.

    1984-01-01

    Man's ability to live and perform useful work in space was demonstrated throughout the history of manned space flight. Current planning envisions a multi-functional space station. Man's unique abilities to respond to the unforeseen and to operate at a level of complexity exceeding any reasonable amount of previous planning distinguish him from present day machines. His limitations, however, include his inherent inability to survive without protection, his limited strength, and his propensity to make mistakes when performing repetitive and monotonous tasks. By contrast, an automated system does routine and delicate tasks, exerts force smoothly and precisely, stores, and recalls large amounts of data, and performs deductive reasoning while maintaining a relative insensitivity to the environment. The establishment of a permanent presence of man in space demands that man and machines be appropriately combined in spaceborne systems. To achieve this optimal combination, research is needed in such diverse fields as artificial intelligence, robotics, behavioral psychology, economics, and human factors engineering.

  7. The optical properties, physical properties and direct radiative forcing of urban columnar aerosols in the Yangtze River Delta, China

    NASA Astrophysics Data System (ADS)

    Zhuang, Bingliang; Wang, Tijian; Liu, Jane; Che, Huizheng; Han, Yong; Fu, Yu; Li, Shu; Xie, Min; Li, Mengmeng; Chen, Pulong; Chen, Huimin; Yang, Xiu-qun; Sun, Jianning

    2018-02-01

    The optical and physical properties as well as the direct radiative forcings (DRFs) of fractionated aerosols in the urban area of the western Yangtze River Delta (YRD) are investigated with measurements from a Cimel sun photometer combined with a radiation transfer model. Ground-based observations of aerosols have much higher temporal resolutions than satellite retrievals. An initial analysis reveals the characteristics of the optical properties of different types of fractionated aerosols in the western YRD. The total aerosols, mostly composed of scattering components (93.8 %), have mean optical depths of 0.65 at 550 nm and refractive index of 1.44 + 0.0084i at 440 nm. The fine aerosols are approximately four times more abundant and have very different compositions from coarse aerosols. The absorbing components account for only ~4.6 % of fine aerosols and 15.5 % of coarse aerosols and have smaller sizes than the scattering aerosols within the same mode. Therefore, fine particles have stronger scattering than coarse ones, simultaneously reflecting the different size distributions between the absorbing and scattering aerosols. The relationships among the optical properties quantify the aerosol mixing and imply that approximately 15 and 27.5 % of the total occurrences result in dust- and black-carbon-dominating mixing aerosols, respectively, in the western YRD. Unlike the optical properties, the size distributions of aerosols in the western YRD are similar to those found at other sites over eastern China on a climatological scale, peaking at radii of 0.148 and 2.94 µm. However, further analysis reveals that the coarse-dominated particles can also lead to severe haze pollution over the YRD. Observation-based estimations indicate that both fine and coarse aerosols in the western YRD exert negative DRFs, and this is especially true for fine aerosols (-11.17 W m-2 at the top of atmosphere, TOA). A higher absorption fraction leads directly to the negative DRF being further offset for coarse aerosols (-0.33 W m-2) at the TOA. Similarly, the coarse-mode DRF contributes only 13.3 % to the total scattering aerosols but >33.7 % to the total absorbing aerosols. A sensitivity analysis states that aerosol DRFs are not highly sensitive to their profiles in clear-sky conditions. Most of the aerosol properties and DRFs have substantial seasonality in the western YRD. The results further reveal the contributions of each component of the different size particles to the total aerosol optical depths (AODs) and DRFs. Additionally, these results can be used to improve aerosol modelling performance and the modelling of aerosol effects in the eastern regions of China.

  8. Development and Analysis of SRIC Harvesting Systems

    Treesearch

    Bryce J. Stokes; Bruce R. Hartsough

    1993-01-01

    This paper reviews several machine combinations for harvesting short-rotation, intensive-culture (SRIC) plantations. Productivity and cost information for individual machines was obtained from published sources. Three felling and skidding systems were analyzed for two stands, a 7.6-cm (3-in) average d.b.h. sycamore and a 15.2-cm (6-in) average d.b.h. eucalyptus. The...

  9. ICTNET at Web Track 2012 Ad-hoc Task

    DTIC Science & Technology

    2012-11-01

    Model and use it as baseline this year. 3.2 Learning to rank: Learning to rank (LTR) introduces machine learning to the retrieval ranking problem. It... Yoram Singer. An efficient boosting algorithm for combining preferences [J]. The Journal of Machine Learning Research, 2003.

  10. Games and Machine Learning: A Powerful Combination in an Artificial Intelligence Course

    ERIC Educational Resources Information Center

    Wallace, Scott A.; McCartney, Robert; Russell, Ingrid

    2010-01-01

    Project MLeXAI [Machine Learning eXperiences in Artificial Intelligence (AI)] seeks to build a set of reusable course curriculum and hands on laboratory projects for the artificial intelligence classroom. In this article, we describe two game-based projects from the second phase of project MLeXAI: Robot Defense--a simple real-time strategy game…

  11. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    PubMed Central

    Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.

    2013-01-01

    Given the background of the use of neural networks in apple juice classification problems, this paper aims at implementing a more recently developed machine learning method: the Support Vector Machine (SVM). A hybrid model that combines genetic algorithms and support vector machines is therefore proposed in such a way that, by using SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
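
    As a rough illustration of this wrapper scheme, the sketch below evolves binary feature masks with a toy genetic algorithm whose fitness is the cross-validated accuracy of an SVM trained on the selected variables. The dataset, population size, and GA operators are illustrative assumptions, not the paper's actual settings.

      # Toy GA for variable selection; cross-validated SVM accuracy is the fitness.
      import numpy as np
      from sklearn.datasets import load_wine
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      X, y = load_wine(return_X_y=True)          # stand-in for the juice data
      n_feat = X.shape[1]
      model = make_pipeline(StandardScaler(), SVC())

      def fitness(mask):
          """GA fitness: cross-validated SVM accuracy on the selected variables."""
          if not mask.any():
              return 0.0
          return cross_val_score(model, X[:, mask], y, cv=3).mean()

      pop = rng.integers(0, 2, size=(20, n_feat)).astype(bool)
      for generation in range(15):
          scores = np.array([fitness(m) for m in pop])
          parents = pop[np.argsort(scores)[-10:]]            # keep the fittest half
          children = []
          for _ in range(len(pop) - len(parents)):
              a, b = parents[rng.integers(10)], parents[rng.integers(10)]
              cut = rng.integers(1, n_feat)                  # one-point crossover
              child = np.concatenate([a[:cut], b[cut:]])
              children.append(child ^ (rng.random(n_feat) < 0.05))  # bit-flip mutation
          pop = np.vstack([parents, children])

      best = pop[np.argmax([fitness(m) for m in pop])]
      print("selected variables:", np.flatnonzero(best))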

  12. Towards a framework of human factors certification of complex human-machine systems

    NASA Technical Reports Server (NTRS)

    Bukasa, Birgit

    1994-01-01

    As long as total automation is not realized, the combination of technical and social components in man-machine systems demands contributions not only from engineers but, at least to an equal extent, from behavioral scientists. This has been neglected for far too long. The psychological, social and cultural aspects of technological innovations were almost totally overlooked. Yet, along with expected safety improvements, the institutionalization of human factors is under way. The introduction of human factors certification of complex man-machine systems will be a milestone in this process.

  13. More About The Farley Three-Dimensional Braider

    NASA Technical Reports Server (NTRS)

    Farley, Gary L.

    1993-01-01

    Farley three-dimensional braider, undergoing development, is machine for automatic fabrication of three-dimensional braided structures. Incorporates yarns into structure at arbitrary braid angles to produce complicated shapes. Braiding surface includes movable braiding segments containing pivot points, along which yarn carriers travel during braiding process. Yarn carrier travels along sequence of pivot points as braiding segments move. Combined motions position yarns for braiding onto preform. Intended for use in making fiber preforms for fiber/matrix composite parts, such as multiblade propellers. Machine also described in "Farley Three-Dimensional Braiding Machine" (LAR-13911).

  14. The Aerosol Coarse Mode Initiative

    NASA Astrophysics Data System (ADS)

    Arnott, W. P.; Adhikari, N.; Air, D.; Kassianov, E.; Barnard, J.

    2014-12-01

    Many areas of the world show an aerosol volume distribution with a significant coarse mode and sometimes a dominant coarse mode. The large coarse mode is usually due to dust, but sea salt aerosol can also play an important role. However, in many field campaigns the coarse mode tends to be ignored, because it is difficult to measure. This lack of measurements leads directly to a concomitant "lack of analysis" of this mode. Because coarse mode aerosols can have significant effects on radiative forcing, both in the shortwave and longwave spectrum, the coarse mode -- and these forcings -- should be accounted for in atmospheric models. Forcings based only on fine mode aerosols have the potential to be misleading. In this paper we describe examples of large coarse modes that occur in areas of large aerosol loading (Mexico City, Barnard et al., 2010) as well as small loadings (Sacramento, CA; Kassianov et al., 2012; and Reno, NV). We then demonstrate that: (1) the coarse mode can contribute significantly to radiative forcing, relative to the fine mode, and (2) neglecting the coarse mode may result in poor comparisons between measurements and models. Next we describe -- in general terms -- the limitations of instrumentation to measure the coarse mode. Finally, we suggest a new initiative aimed at examining coarse mode aerosol generation mechanisms; transport and deposition; chemical composition; visible and thermal IR refractive indices; morphology; microphysical behavior when deposited on snow and ice; and specific instrumentation needs. Barnard, J. C., J. D. Fast, G. Paredes-Miranda, W. P. Arnott, and A. Laskin, 2010: Technical Note: Evaluation of the WRF-Chem "Aerosol Chemical to Aerosol Optical Properties" Module using data from the MILAGRO campaign, Atmospheric Chemistry and Physics, 10, 7325-7340. Kassianov, E. I., M. S. Pekour, and J. C. Barnard, 2012: Aerosols in Central California: Unexpectedly large contribution of coarse mode to aerosol radiative forcing, Geophys. Res. Lett., 39, L20806, doi:10.1029/2012GL053469.

  15. Texture coarseness responsive neurons and their mapping in layer 2–3 of the rat barrel cortex in vivo

    PubMed Central

    Garion, Liora; Dubin, Uri; Rubin, Yoav; Khateb, Mohamed; Schiller, Yitzhak; Azouz, Rony; Schiller, Jackie

    2014-01-01

    Texture discrimination is a fundamental function of somatosensory systems, yet the manner by which texture is coded and spatially represented in the barrel cortex is largely unknown. Using in vivo two-photon calcium imaging in the rat barrel cortex during artificial whisking against surfaces of different coarseness, or controlled passive whisker vibrations simulating different coarseness, we show that layer 2–3 neurons within barrel boundaries differentially respond to specific texture coarsenesses, while only a minority of neurons responded monotonically with increased or decreased surface coarseness. Neurons with similar preferred texture coarseness were spatially clustered. Multi-contact single unit recordings showed a vertical columnar organization of texture coarseness preference in layer 2–3. These findings indicate that layer 2–3 neurons perform high-level hierarchical processing of tactile information, with surface coarseness embodied by distinct neuronal subpopulations that are spatially mapped onto the barrel cortex. DOI: http://dx.doi.org/10.7554/eLife.03405.001 PMID:25233151

  16. An approach for retrieval of atmospheric trace gases CO2, CH4 and CO from the future Canadian micro earth observation satellite (MEOS)

    NASA Astrophysics Data System (ADS)

    Trishchenko, Alexander P.; Khlopenkov, Konstantin V.; Wang, Shusen; Luo, Yi; Kruzelecky, Roman V.; Jamroz, Wes; Kroupnik, Guennadi

    2007-10-01

    Among all trace gases, carbon dioxide and methane provide the largest contribution to climate radiative forcing and, together with carbon monoxide, also to the global atmospheric carbon budget. The new Micro Earth Observation Satellite (MEOS) mission is proposed to obtain information about these gases, along with other mission objectives related to studying cloud and aerosol interactions. A miniature suite of instruments is proposed to make measurements with reduced spectral resolution (1.2 nm) over the wide NIR range from 0.9 µm to 2.45 µm and with high spectral resolution (0.03 nm) for three selected regions: the oxygen A-band, the 1.5 µm-1.7 µm band and the 2.2 µm-2.4 µm band. It is also planned to supplement the spectrometer measurements with a high-spatial-resolution imager for detailed characterization of the cloud and surface albedo distribution within the spectrometer field of view. The approaches for cloud/clear-sky identification and column retrievals of the above trace gases are based on the differential absorption technique and employ a combination of coarse- and high-resolution spectral data. The combination of high- and coarse-resolution spectral data is beneficial for better characterization of surface spectral albedo and aerosol effects. An additional capability for retrieving vertical distributions is obtained from the combination of nadir and limb measurements. The oxygen A-band path length will be used for normalization of the trace gas retrievals.

  17. The decomposition of fine and coarse roots: their global patterns and controlling factors

    PubMed Central

    Zhang, Xinyue; Wang, Wei

    2015-01-01

    Fine root decomposition represents a large carbon (C) cost to plants, and serves as a potential soil C source, as well as a substantial proportion of net primary productivity. Coarse roots differ markedly from fine roots in morphology, nutrient concentrations, functions, and decomposition mechanisms. Still poorly understood is whether a consistent global pattern exists between the decomposition of fine (<2 mm root diameter) and coarse (≥2 mm) roots. A comprehensive terrestrial root decomposition dataset, including 530 observations from 71 sampling sites, was thus used to compare global patterns of decomposition of fine and coarse roots. Fine roots decomposed significantly faster than coarse roots in middle latitude areas, but their decomposition in low latitude regions was not significantly different from that of coarse roots. Coarse root decomposition showed more dependence on climate, especially mean annual temperature (MAT), than did fine roots. Initial litter lignin content was the most important predictor of fine root decomposition, while lignin to nitrogen ratios, MAT, and mean annual precipitation were the most important predictors of coarse root decomposition. Our study emphasizes the necessity of separating fine roots and coarse roots when predicting the response of belowground C release to future climate changes. PMID:25942391

  18. The use of single-date MODIS imagery for estimating large-scale urban impervious surface fraction with spectral mixture analysis and machine learning techniques

    NASA Astrophysics Data System (ADS)

    Deng, Chengbin; Wu, Changshan

    2013-12-01

    Urban impervious surface information is essential for urban and environmental applications at the regional/national scales. As a popular image processing technique, spectral mixture analysis (SMA) has rarely been applied to coarse-resolution imagery due to the difficulty of deriving endmember spectra using traditional endmember selection methods, particularly within heterogeneous urban environments. To address this problem, we derived endmember signatures through a least squares solution (LSS) technique with known abundances of sample pixels, and integrated these endmember signatures into SMA for mapping large-scale impervious surface fraction. In addition, with the same sample set, we carried out objective comparative analyses among SMA (i.e. fully constrained and unconstrained SMA) and machine learning (i.e. Cubist regression tree and Random Forests) techniques. Analysis of results suggests three major conclusions. First, with the extrapolated endmember spectra from stratified random training samples, the SMA approaches performed relatively well, as indicated by small MAE values. Second, Random Forests yields more reliable results than Cubist regression tree, and its accuracy is improved with increased sample sizes. Finally, comparative analyses suggest a tentative guide for selecting an optimal approach for large-scale fractional imperviousness estimation: unconstrained SMA might be a favorable option with a small number of samples, while Random Forests might be preferred if a large number of samples are available.
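
    A compact way to see the two least-squares steps described here is the sketch below: endmember spectra are first solved from training pixels with known abundances, then a new pixel is unmixed against those endmembers (unconstrained SMA). All data are synthetic and the dimensions are illustrative assumptions.

      # (1) derive endmember spectra E from pixels with known abundances, R ≈ A E
      # (2) unconstrained SMA: unmix a new pixel r against E, r ≈ f E
      import numpy as np

      rng = np.random.default_rng(1)
      n_bands, n_end, n_train = 7, 3, 200          # e.g. a MODIS-like band count
      E_true = rng.uniform(0.05, 0.6, (n_end, n_bands))

      A = rng.dirichlet(np.ones(n_end), n_train)   # known training abundances
      R = A @ E_true + rng.normal(0, 0.005, (n_train, n_bands))

      # Step 1: least squares solution (LSS) for the endmember signatures
      E_hat, *_ = np.linalg.lstsq(A, R, rcond=None)

      # Step 2: unconstrained SMA for a new pixel
      r = np.array([0.2, 0.5, 0.3]) @ E_true
      f, *_ = np.linalg.lstsq(E_hat.T, r, rcond=None)
      print("estimated fractions:", f.round(3))    # close to [0.2, 0.5, 0.3]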

  19. Performance Evaluation in Network-Based Parallel Computing

    NASA Technical Reports Server (NTRS)

    Dezhgosha, Kamyar

    1996-01-01

    Network-based parallel computing is emerging as a cost-effective alternative for solving many problems which require the use of supercomputers or massively parallel computers. The primary objective of this project has been to conduct experimental research on performance evaluation for clustered parallel computing. First, a testbed was established by augmenting our existing network of Sun SPARC workstations with PVM (Parallel Virtual Machine), which is a software system for linking clusters of machines. Second, a set of three basic applications was selected. The applications consist of a parallel search, a parallel sort, and a parallel matrix multiplication. These application programs were implemented in the C programming language under PVM. Third, we conducted performance evaluation under various configurations and problem sizes. Alternative parallel computing models and workload allocations for application programs were explored. The performance metric was limited to elapsed time or response time, which in the context of parallel computing can be expressed in terms of speedup. The results reveal that the overhead of communication latency between processes is in many cases the restricting factor to performance. That is, coarse-grain parallelism, which requires less frequent communication between processes, will result in higher performance in network-based computing. Finally, we are in the final stages of installing an Asynchronous Transfer Mode (ATM) switch and four ATM interfaces (each 155 Mbps), which will allow us to extend our study to newer applications, performance metrics, and configurations.
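
    The report's metric reduces to a simple calculation; the snippet below computes speedup and parallel efficiency from elapsed times. The timings are invented for illustration and show the kind of flattening that communication latency produces.

      # Speedup from elapsed times (all timings here are made up, not measured).
      t_serial = 120.0                       # seconds on one workstation
      t_parallel = {2: 68.0, 4: 41.0, 8: 33.0}
      for p, tp in t_parallel.items():
          speedup = t_serial / tp
          efficiency = speedup / p           # how well the p machines are used
          print(f"p={p}: speedup={speedup:.2f}, efficiency={efficiency:.2f}")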

  20. [Intervention of coarse cereals on lipid metabolism in rats].

    PubMed

    Guo, Yanbo; Zhai, Chengkai; Wang, Yanli; Zhang, Qun; Ding, Zhoubo; Jin, Xin

    2010-03-01

    To observe the effect of coarse cereals on improving disordered lipid metabolism and on the expression of PPARgamma mRNA in white adipose tissue in rats, in order to investigate the mechanism of coarse cereals acting on lipid metabolism disorder. Forty-four SPF rats were randomly divided into 4 groups: the negative control group was fed a normal diet, and 3 experimental groups were fed a high-fat modeling diet for 6 weeks for model building. The 3 experimental groups, the coarse cereals group, the rice-flour group and the hyperlipemia model group, were then fed a coarse-cereals high-fat diet, a rice-flour high-fat diet and the high-fat modeling diet, respectively, for another 15 weeks. Compared with the hyperlipemia model group, serum TG, TC, IL-6 and TNF-alpha in the coarse cereals group declined significantly (P < 0.05), serum HDL-C in the coarse cereals group was higher than that in the rice-flour group and the hyperlipemia model group (P < 0.05), and LPL, HL and TNF-alpha in the coarse cereals group were close to those of the negative control group. Moreover, the expression of PPARgamma mRNA in white adipose tissue of the coarse cereals group was higher than in the other groups. Coarse cereals could activate PPARgamma and enhance the activity of key enzymes in lipid metabolism, so as to reduce the level of TG, relieve inflammation and eventually improve lipid dysmetabolism.

  1. Correlating bioaerosol load with PM2.5 and PM10cf concentrations: a comparison between natural desert and urban-fringe aerosols

    NASA Astrophysics Data System (ADS)

    Boreson, Justin; Dillner, Ann M.; Peccia, Jordan

    2004-11-01

    Seasonal allergies and microbially mediated respiratory diseases can coincide with elevated particulate matter concentrations, often when dry desert soils are disturbed. In addition to effects from the allergens, allergic and asthmatic responses may be enhanced when chemical and biological constituents of particulate matter (PM) are combined. Because of these associations, and also the recent regulatory and health-related interest in monitoring PM2.5 separately from total PM10, the biological loading between the fine (dp<2.5 μm) and coarse (2.5 μm

  2. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations

    PubMed Central

    Zhang, Yi; Ren, Jinchang; Jiang, Jianmin

    2015-01-01

    Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM is in some cases equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitates soft decision making. In total, four groups of data are used for evaluation, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributed and balanced/unbalanced samples, which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions. PMID:26089862
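
    As a loose sketch of the idea, and not the paper's exact combination rule, the snippet below pairs a Gaussian maximum likelihood classifier (quadratic discriminant analysis) with a probability-calibrated SVM and simply averages their posteriors to obtain a soft decision. The breast cancer set stands in for one of the four data groups mentioned.

      # Combine a Gaussian ML classifier (QDA) with an SVM for soft decisions.
      # The posterior-averaging rule is an illustrative assumption.
      from sklearn.datasets import load_breast_cancer
      from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      mlc = QuadraticDiscriminantAnalysis().fit(Xtr, ytr)
      svm = SVC(probability=True, random_state=0).fit(Xtr, ytr)  # Platt scaling

      post = 0.5 * (mlc.predict_proba(Xte) + svm.predict_proba(Xte))
      pred = post.argmax(axis=1)
      print("combined accuracy:", (pred == yte).mean().round(3))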

  3. Combining MLC and SVM Classifiers for Learning Based Decision Making: Analysis and Evaluations.

    PubMed

    Zhang, Yi; Ren, Jinchang; Jiang, Jianmin

    2015-01-01

    Maximum likelihood classifier (MLC) and support vector machines (SVM) are two commonly used approaches in machine learning. MLC is based on Bayesian theory in estimating parameters of a probabilistic model, whilst SVM is an optimization-based nonparametric method in this context. Recently, it has been found that SVM is in some cases equivalent to MLC in probabilistically modeling the learning process. In this paper, MLC and SVM are combined in learning and classification, which helps to yield probabilistic output for SVM and facilitates soft decision making. In total, four groups of data are used for evaluation, covering sonar, vehicle, breast cancer, and DNA sequences. The data samples are characterized in terms of Gaussian/non-Gaussian distributed and balanced/unbalanced samples, which are then further used for performance assessment in comparing the SVM and the combined SVM-MLC classifier. Interesting results are reported to indicate how the combined classifier may work under various conditions.

  4. A transient search using combined human and machine classifications

    NASA Astrophysics Data System (ADS)

    Wright, Darryl E.; Lintott, Chris J.; Smartt, Stephen J.; Smith, Ken W.; Fortson, Lucy; Trouille, Laura; Allen, Campbell R.; Beck, Melanie; Bouslog, Mark C.; Boyer, Amy; Chambers, K. C.; Flewelling, Heather; Granger, Will; Magnier, Eugene A.; McMaster, Adam; Miller, Grant R. M.; O'Donnell, James E.; Simmons, Brooke; Spiers, Helen; Tonry, John L.; Veldthuis, Marten; Wainscoat, Richard J.; Waters, Chris; Willman, Mark; Wolfenbarger, Zach; Young, Dave R.

    2017-12-01

    Large modern surveys require efficient review of data in order to find transient sources such as supernovae, and to distinguish such sources from artefacts and noise. Much effort has been put into the development of automatic algorithms, but surveys still rely on human review of targets. This paper presents an integrated system for the identification of supernovae in data from Pan-STARRS1, combining classifications from volunteers participating in a citizen science project with those from a convolutional neural network. The unique aspect of this work is the deployment, in combination, of both human and machine classifications for near real-time discovery in an astronomical project. We show that the combination of the two methods outperforms either one used individually. This result has important implications for the future development of transient searches, especially in the era of the Large Synoptic Survey Telescope and other large-throughput surveys.
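
    One simple way to blend the two classification streams for a candidate is sketched below; the weighting scheme is an assumption for illustration, not the paper's actual combination method.

      # Blend citizen-science votes with a CNN score for one candidate transient.
      def combined_score(n_real_votes: int, n_votes: int, cnn_prob: float,
                         w_human: float = 0.5) -> float:
          """Weighted blend of the volunteer vote fraction and the CNN probability."""
          human_prob = n_real_votes / n_votes if n_votes else 0.5  # no votes: neutral
          return w_human * human_prob + (1.0 - w_human) * cnn_prob

      # A candidate 12 of 15 volunteers marked real, which the CNN scored 0.83:
      print(combined_score(12, 15, 0.83))  # 0.815 -> promote for follow-up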

  5. Modelling machine ensembles with discrete event dynamical system theory

    NASA Technical Reports Server (NTRS)

    Hunter, Dan

    1990-01-01

    Discrete Event Dynamical System (DEDS) theory can be utilized as a control strategy for future complex machine ensembles that will be required for in-space construction. The control strategy involves orchestrating a set of interactive submachines to perform a set of tasks under a given set of constraints such as minimum time, minimum energy, or maximum machine utilization. Machine ensembles can be hierarchically modeled as a global model that combines the operations of the individual submachines. These submachines are represented in the global model as local models. Local models, from the perspective of DEDS theory, are described by the following: a set of system and transition states, an event alphabet that portrays actions that take a submachine from one state to another, an initial system state, a partial function that maps the current state and event alphabet to the next state, and the time required for the event to occur. Each submachine in the machine ensemble is represented by a unique local model. The global model combines the local models such that the local models can operate in parallel under the additional logistic and physical constraints due to submachine interactions. The global model is constructed from the states, events, event functions, and timing requirements of the local models. Supervisory control can be implemented in the global model by various methods such as task scheduling (open-loop control) or by implementing a feedback DEDS controller (closed-loop control).
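
    The local-model five-tuple described above transcribes directly into a small data structure; the machine name and events below are invented placeholders, not from the report.

      # A local DEDS model: states, event alphabet, initial state, partial
      # transition function, and per-event durations.
      from dataclasses import dataclass

      @dataclass
      class LocalModel:
          states: set[str]
          events: set[str]                      # event alphabet
          initial: str
          delta: dict[tuple[str, str], str]     # partial map: (state, event) -> state
          duration: dict[str, float]            # time required for each event

          def step(self, state: str, event: str) -> tuple[str, float]:
              """Fire an event if the partial transition function defines it."""
              if (state, event) not in self.delta:
                  raise ValueError(f"event {event!r} not enabled in state {state!r}")
              return self.delta[(state, event)], self.duration[event]

      arm = LocalModel(
          states={"idle", "moving", "holding"},
          events={"grab", "move", "release"},
          initial="idle",
          delta={("idle", "grab"): "holding", ("holding", "move"): "moving",
                 ("moving", "release"): "idle"},
          duration={"grab": 2.0, "move": 5.0, "release": 1.0},
      )
      state, t = arm.step(arm.initial, "grab")
      print(state, t)   # holding 2.0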

  6. High-performance reactionless scan mechanism

    NASA Technical Reports Server (NTRS)

    Williams, Ellen I.; Summers, Richard T.; Ostaszewski, Miroslaw A.

    1995-01-01

    A high-performance reactionless scan mirror mechanism was developed for space applications to provide thermal images of the Earth. The design incorporates a unique mechanical means of providing reactionless operation that also minimizes weight, mechanical resonance operation to minimize power, combined use of a single optical encoder to sense coarse and fine angular position, and a new kinematic mount of the mirror. A flex pivot hardware failure and current project status are discussed.

  7. Variational formulation of high performance finite elements: Parametrized variational principles

    NASA Technical Reports Server (NTRS)

    Felippa, Carlos A.; Militello, Carmello

    1991-01-01

    High performance elements are simple finite elements constructed to deliver engineering accuracy with coarse arbitrary grids. This is part of a series on the variational basis of high-performance elements, with emphasis on those constructed with the free formulation (FF) and assumed natural strain (ANS) methods. Parametrized variational principles that provide a foundation for the FF and ANS methods, as well as for a combination of both are presented.

  8. Ensemble Methods

    NASA Astrophysics Data System (ADS)

    Re, Matteo; Valentini, Giorgio

    2012-03-01

    Ensemble methods are statistical and computational learning procedures reminiscent of the human social learning behavior of seeking several opinions before making any crucial decision. The idea of combining the opinions of different "experts" to obtain an overall "ensemble" decision is rooted in our culture at least from the classical age of ancient Greece, and it was formalized during the Enlightenment with the Condorcet Jury Theorem [45], which proved that the judgment of a committee is superior to those of individuals, provided the individuals have reasonable competence. Ensembles are sets of learning machines that combine in some way their decisions, or their learning algorithms, or different views of data, or other specific characteristics to obtain more reliable and more accurate predictions in supervised and unsupervised learning problems [48,116]. A simple example is the majority vote ensemble, by which the decisions of different learning machines are combined, and the class that receives the majority of "votes" (i.e., the class predicted by the majority of the learning machines) is the class predicted by the overall ensemble [158]. In the literature, a plethora of terms other than ensembles has been used, such as fusion, combination, aggregation, and committee, to indicate sets of learning machines that work together to solve a machine learning problem [19,40,56,66,99,108,123], but in this chapter we maintain the term ensemble in its widest meaning, in order to include the whole range of combination methods. Nowadays, ensemble methods represent one of the main current research lines in machine learning [48,116], and the interest of the research community in ensemble methods is witnessed by conferences and workshops specifically devoted to ensembles, foremost among them the multiple classifier systems (MCS) conference organized by Roli, Kittler, Windeatt, and other researchers of this area [14,62,85,149,173]. Several theories have been proposed to explain the characteristics and the successful application of ensembles to different application domains. For instance, Allwein, Schapire, and Singer interpreted the improved generalization capabilities of ensembles of learning machines in the framework of large margin classifiers [4,177], Kleinberg in the context of stochastic discrimination theory [112], and Breiman and Friedman in the light of the bias-variance analysis borrowed from classical statistics [21,70]. Empirical studies showed that in both classification and regression problems, ensembles improve on single learning machines, and moreover large experimental studies compared the effectiveness of different ensemble methods on benchmark data sets [10,11,49,188]. The interest in this research area is motivated also by the availability of very fast computers and networks of workstations at a relatively low cost that allow the implementation and the experimentation of complex ensemble methods using off-the-shelf computer platforms. However, as explained in Section 26.2, there are deeper reasons to use ensembles of learning machines, motivated by the intrinsic characteristics of the ensemble methods. The main aim of this chapter is to introduce ensemble methods and to provide an overview and a bibliography of the main areas of research, without pretending to be exhaustive or to explain the detailed characteristics of each ensemble method. The paper is organized as follows. In the next section, the main theoretical and practical reasons for combining multiple learners are introduced. Section 26.3 presents the main taxonomies of ensemble methods proposed in the literature. In Sections 26.4 and 26.5, we present an overview of the main supervised ensemble methods reported in the literature, adopting a simple taxonomy originally proposed in Ref. [201]. Applications of ensemble methods are only marginally considered, but a specific section on some relevant applications of ensemble methods in astronomy and astrophysics has been added (Section 26.6). The conclusion (Section 26.7) ends this paper and lists some issues not covered in this work.
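
    The majority-vote ensemble used above as the simple example can be written in a few lines; the component classifiers and dataset below are arbitrary stand-ins chosen only to make the sketch runnable.

      # Majority-vote ensemble: each learning machine votes; the most frequent
      # class among the votes is the ensemble's prediction.
      from collections import Counter

      from sklearn.datasets import load_iris
      from sklearn.model_selection import train_test_split
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      X, y = load_iris(return_X_y=True)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

      members = [KNeighborsClassifier(), SVC(), DecisionTreeClassifier(random_state=0)]
      votes = [m.fit(Xtr, ytr).predict(Xte) for m in members]

      majority = [Counter(col).most_common(1)[0][0] for col in zip(*votes)]
      accuracy = sum(p == t for p, t in zip(majority, yte)) / len(yte)
      print(f"majority-vote accuracy: {accuracy:.3f}")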

  9. Spark Plasma Sintering of a Gas Atomized Al7075 Alloy: Microstructure and Properties

    PubMed Central

    Molnárová, Orsolya; Málek, Přemysl; Lukáč, František; Chráska, Tomáš

    2016-01-01

    The powder of an Al7075 alloy was prepared by gas atomization. A combination of cellular, columnar, and equiaxed dendritic-like morphologies was observed in individual powder particles, with continuous layers of intermetallic phases along boundaries. The cells are separated predominantly by high-angle boundaries, while the areas with dendritic-like morphology usually have a similar crystallographic orientation. Spark plasma sintering resulted in a fully dense material with a microstructure similar to that of the powder material. The continuous layers of intermetallic phases are replaced by individual particles located along internal boundaries, and coarse particles are formed at the surface of the original powder particles. Microhardness measurements revealed both artificial and natural ageing behavior similar to that observed in ingot metallurgy material. The minimum microhardness of 81 HV, observed in the sample annealed at 300 °C, reflects the presence of coarse particles. The peak microhardness of 160 HV was observed in the sample annealed at 500 °C and then aged at room temperature. Compression tests confirmed high strength combined with sufficient plasticity. Annealing even at 500 °C does not significantly influence the distribution of grain sizes: about 45% of the area is occupied by grains with sizes below 10 µm. PMID:28774126

  10. Corrosion Behavior of Steel Reinforcement in Concrete with Recycled Aggregates, Fly Ash and Spent Cracking Catalyst.

    PubMed

    Gurdián, Hebé; García-Alcocel, Eva; Baeza-Brotons, Francisco; Garcés, Pedro; Zornoza, Emilio

    2014-04-21

    The main strategy for reducing the environmental impact of the concrete industry is to reuse waste materials. This research considered the combination of cement replacement by industrial by-products and natural coarse aggregate substitution by recycled aggregate. The aim is to evaluate the behavior of concretes with a reduced environmental impact obtained by replacing 50% of the cement with industrial by-products (15% of spent fluid catalytic cracking catalyst and 35% of fly ash) and 100% of the natural coarse aggregate with recycled aggregate. The concretes prepared according to these considerations were tested in terms of mechanical strength and the protection offered against steel reinforcement corrosion under carbonation attack and in chloride-contaminated environments. The proposed concrete combinations reduced the mechanical performance of the concretes in terms of elastic modulus, compressive strength, and flexural strength. In addition, an increase in open porosity due to the presence of recycled aggregate was observed, which is coherent with the changes observed in the mechanical tests. Regarding the corrosion tests, no significant differences were observed in the resistance of these types of concretes under a natural chloride attack. In the case of carbonation attack, although none of the concretes withstood the highly aggressive conditions, those with cement replacement behaved worse than the Portland cement concretes.

  11. Lidar-Radiometer Inversion Code (LIRIC) for the Retrieval of Vertical Aerosol Properties from Combined Lidar Radiometer Data: Development and Distribution in EARLINET

    NASA Technical Reports Server (NTRS)

    Chaikovsky, A.; Dubovik, O.; Holben, Brent N.; Bril, A.; Goloub, P.; Tanre, D.; Pappalardo, G.; Wandinger, U.; Chaikovskaya, L.; Denisov, S.; hide

    2015-01-01

    This paper presents a detailed description of the LIRIC (LIdar-Radiometer Inversion Code) algorithm for simultaneous processing of coincident lidar and radiometric (sun photometric) observations for the retrieval of aerosol concentration vertical profiles. As the lidar and radiometric input data we use measurements from European Aerosol Research Lidar Network (EARLINET) lidars and collocated sun photometers of the Aerosol Robotic Network (AERONET). The LIRIC data processing provides sequential inversion of the combined lidar and radiometric data: column-integrated aerosol parameters are first estimated from the radiometric measurements, and height-dependent concentrations of fine and coarse aerosols are then retrieved from the lidar signals using the integrated column characteristics of the aerosol layer as a priori constraints. The use of polarized lidar observations allows us to discriminate between spherical and non-spherical particles of the coarse aerosol mode. The LIRIC software package was implemented and tested at a number of EARLINET stations. Inter-comparison of the LIRIC-based aerosol retrievals was performed for the observations by seven EARLINET lidars in Leipzig, Germany on 25 May 2009. We found close agreement between the aerosol parameters derived from the different lidars, which supports the high robustness of the LIRIC algorithm. The sensitivity of the retrieval results to a possible reduction of the available observation data is also discussed.

  12. Machine learning on brain MRI data for differential diagnosis of Parkinson's disease and Progressive Supranuclear Palsy.

    PubMed

    Salvatore, C; Cerasa, A; Castiglioni, I; Gallivanone, F; Augimeri, A; Lopez, M; Arabia, G; Morelli, M; Gilardi, M C; Quattrone, A

    2014-01-30

    Supervised machine learning has been proposed as a revolutionary approach for identifying sensitive medical image biomarkers (or combinations of them) allowing for automatic diagnosis of individual subjects. The aim of this work was to assess the feasibility of a supervised machine learning algorithm for the assisted diagnosis of patients with clinically diagnosed Parkinson's disease (PD) and Progressive Supranuclear Palsy (PSP). Morphological T1-weighted Magnetic Resonance Images (MRIs) of PD patients (28), PSP patients (28) and healthy control subjects (28) were used by a supervised machine learning algorithm based on the combination of Principal Component Analysis as the feature extraction technique and Support Vector Machines as the classification algorithm. The algorithm was able to obtain voxel-based morphological biomarkers of PD and PSP. The algorithm allowed individual diagnosis of PD versus controls, PSP versus controls and PSP versus PD with an Accuracy, Specificity and Sensitivity > 90%. Voxels influencing the classification between PD and PSP patients involved the midbrain, pons, corpus callosum and thalamus, four critical regions known to be strongly involved in the pathophysiological mechanisms of PSP. Classification accuracy for individual PSP patients was consistent with previous manual morphological metrics and with other supervised machine learning applications to MRI data, whereas accuracy in the detection of individual PD patients was significantly higher with our classification method. The algorithm provides excellent discrimination of PD patients from PSP patients at an individual level, thus encouraging the application of computer-based diagnosis in clinical practice. Copyright © 2013 Elsevier B.V. All rights reserved.
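
    The PCA-for-features plus SVM-for-classification pipeline the study describes has a very compact generic form; the sketch below runs it on synthetic stand-ins for the voxel data (the shapes, injected group difference, and component count are illustrative assumptions, not the study's settings).

      # PCA feature extraction feeding a linear SVM, with cross-validation.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      n_subjects, n_voxels = 56, 5000       # e.g. 28 + 28 subjects, voxel features
      X = rng.normal(size=(n_subjects, n_voxels))
      y = np.repeat([0, 1], 28)             # 0 = one group, 1 = the other
      X[y == 1, :50] += 0.8                 # inject a weak group difference

      clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="linear"))
      scores = cross_val_score(clf, X, y, cv=5)
      print("CV accuracy:", scores.mean().round(3))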

  13. Servo scanning 3D micro EDM for array micro cavities using on-machine fabricated tool electrodes

    NASA Astrophysics Data System (ADS)

    Tong, Hao; Li, Yong; Zhang, Long

    2018-02-01

    Array micro cavities are useful in many fields including in micro molds, optical devices, biochips and so on. Array servo scanning micro electro discharge machining (EDM), using array micro electrodes with simple cross-sectional shape, has the advantage of machining complex 3D micro cavities in batches. In this paper, the machining errors caused by offline-fabricated array micro electrodes are analyzed in particular, and then a machining process of array servo scanning micro EDM is proposed by using on-machine fabricated array micro electrodes. The array micro electrodes are fabricated on-machine by combined procedures including wire electro discharge grinding, array reverse copying and electrode end trimming. Nine-array tool electrodes with Φ80 µm diameter and 600 µm length are obtained. Furthermore, the proposed process is verified by several machining experiments for achieving nine-array hexagonal micro cavities with top side length of 300 µm, bottom side length of 150 µm, and depth of 112 µm or 120 µm. In the experiments, a chip hump accumulates on the electrode tips like the built-up edge in mechanical machining under the conditions of brass workpieces, copper electrodes and the dielectric of deionized water. The accumulated hump can be avoided by replacing the water dielectric by an oil dielectric.

  14. Research on computer systems benchmarking

    NASA Technical Reports Server (NTRS)

    Smith, Alan Jay (Principal Investigator)

    1996-01-01

    This grant addresses the topic of research on computer systems benchmarking and is more generally concerned with performance issues in computer systems. This report reviews work in those areas during the period of NASA support under this grant. The bulk of the work performed concerned benchmarking and analysis of CPUs, compilers, caches, and benchmark programs. The first part of this work concerned the issue of benchmark performance prediction. A new approach to benchmarking and machine characterization was reported, using a machine characterizer that measures the performance of a given system in terms of a Fortran abstract machine. Another report focused on analyzing compiler performance; the performance impact of optimization was studied in the context of our methodology for CPU performance characterization, based on the abstract machine model. Benchmark programs are analyzed in another paper. A machine-independent model of program execution was developed to characterize both machine performance and program execution. By merging these machine and program characterizations, execution time can be estimated for arbitrary machine/program combinations. The work was continued into the domain of parallel and vector machines, including the issue of caches in vector processors and multiprocessors. All of the aforementioned accomplishments are more specifically summarized in this report, as well as those smaller in magnitude supported by this grant.
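
    The abstract-machine idea reduces, in its simplest form, to a dot product of the machine characterization (time per abstract operation) with the program characterization (operation counts). The numbers below are invented purely to illustrate the merge.

      # Estimated runtime = sum over operations of (time per op) x (op count).
      op_time = {"fadd": 4e-9, "fmul": 5e-9, "load": 8e-9, "branch": 2e-9}   # s/op
      op_count = {"fadd": 2.0e9, "fmul": 1.5e9, "load": 3.1e9, "branch": 0.4e9}

      estimate = sum(op_time[op] * op_count[op] for op in op_time)
      print(f"estimated execution time: {estimate:.2f} s")   # about 41 s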

  15. Combining Relevance Vector Machines and exponential regression for bearing residual life estimation

    NASA Astrophysics Data System (ADS)

    Di Maio, Francesco; Tsui, Kwok Leung; Zio, Enrico

    2012-08-01

    In this paper we present a new procedure for estimating the bearing Residual Useful Life (RUL) by combining data-driven and model-based techniques. We resort to (i) Relevance Vector Machines (RVMs) for selecting a small number of significant basis functions, called Relevant Vectors (RVs), and (ii) exponential regression to compute and continuously update residual life estimates. The combination of these techniques is developed with reference to partially degraded thrust ball bearings and tested on real-world vibration-based degradation data. On the case study considered, the proposed procedure outperforms other model-based methods, with the added value of an adequate representation of the uncertainty associated with the estimates and of the quantification of the credibility of the results by the Prognostic Horizon (PH) metric.
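
    The exponential-regression half of such a procedure can be sketched as follows: fit a growth model to a degradation indicator and extrapolate to a failure threshold. The data, the threshold, and the model form y = a*exp(b*t) are illustrative assumptions, not the paper's fitted values.

      # Fit y = a*exp(b*t) to a synthetic degradation signal and invert it
      # to predict the time at which a failure threshold is crossed.
      import numpy as np

      t = np.arange(0, 10.0, 0.5)                       # inspection times (h)
      rng = np.random.default_rng(2)
      y = 0.1 * np.exp(0.30 * t) * rng.lognormal(0, 0.05, t.size)  # vibration level

      # Linearize: log y = log a + b t, then solve by ordinary least squares.
      b, log_a = np.polyfit(t, np.log(y), 1)
      a = np.exp(log_a)

      threshold = 5.0                                   # assumed failure level
      t_fail = np.log(threshold / a) / b                # invert a*exp(b t) = threshold
      print(f"predicted RUL: {t_fail - t[-1]:.1f} h")   # time left past last reading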

  16. Combining a hybrid robotic system with a brain-machine interface for the rehabilitation of reaching movements: A case study with a stroke patient.

    PubMed

    Resquin, F; Ibañez, J; Gonzalez-Vargas, J; Brunetti, F; Dimbwadyo, I; Alves, S; Carrasco, L; Torres, L; Pons, Jose Luis

    2016-08-01

    Reaching and grasping are two of the most affected functions after stroke. Hybrid rehabilitation systems combining Functional Electrical Stimulation with Robotic devices have been proposed in the literature to improve rehabilitation outcomes. In this work, we present the combined use of a hybrid robotic system with an EEG-based Brain-Machine Interface to detect the user's movement intentions to trigger the assistance. The platform has been tested in a single session with a stroke patient. The results show how the patient could successfully interact with the BMI and command the assistance of the hybrid system with low latencies. Also, the Feedback Error Learning controller implemented in this system could adjust the required FES intensity to perform the task.

  17. Coarse-graining using the relative entropy and simplex-based optimization methods in VOTCA

    NASA Astrophysics Data System (ADS)

    Rühle, Victor; Jochum, Mara; Koschke, Konstantin; Aluru, N. R.; Kremer, Kurt; Mashayak, S. Y.; Junghans, Christoph

    2014-03-01

    Coarse-grained (CG) simulations are an important tool to investigate systems on larger time and length scales. Several methods for systematic coarse-graining have been developed, varying in complexity and in the property of interest. Thus, the question arises of which method best suits a specific class of systems and desired application. The Versatile Object-oriented Toolkit for Coarse-graining Applications (VOTCA) provides a uniform platform for coarse-graining methods and allows for their direct comparison. We present recent advances of VOTCA, namely the implementation of the relative entropy method and of downhill simplex optimization for coarse-graining. The methods are illustrated by coarse-graining SPC/E bulk water and a water-methanol mixture. Both CG models reproduce the pair distributions accurately. SYM is supported by AFOSR under grant 11157642 and by NSF under grant 1264282. CJ was supported in part by the NSF PHY11-25915 at KITP. K. Koschke acknowledges funding by the Nestle Research Center.

  18. Using Multiple Indicators of Cognitive State in Logistic Models that Predict Individual Performance in Machine-Mediated Learning Environments.

    ERIC Educational Resources Information Center

    Hancock, Thomas E.; And Others

    1995-01-01

    In machine-mediated learning environments, there is a need for more reliable methods of calculating the probability that a learner's response will be correct in future trials. A combination of domain-independent response-state measures of cognition along with two instructional variables is demonstrated for maximum predictive ability. (Author/LRW)

  19. Object-based habitat mapping using very high spatial resolution multispectral and hyperspectral imagery with LiDAR data

    NASA Astrophysics Data System (ADS)

    Onojeghuo, Alex Okiemute; Onojeghuo, Ajoke Ruth

    2017-07-01

    This study investigated the combined use of multispectral/hyperspectral imagery and LiDAR data for habitat mapping across parts of south Cumbria, North West England. The methodology adopted in this study integrated spectral information contained in pansharpened QuickBird multispectral/AISA Eagle hyperspectral imagery and LiDAR-derived measures with object-based machine learning classifiers and ensemble analysis techniques. Using the LiDAR point cloud data, elevation models (such as the Digital Surface Model and Digital Terrain Model rasters) and intensity features were extracted directly. The LiDAR-derived measures exploited in this study included the Canopy Height Model, intensity and topographic information (i.e. mean, maximum and standard deviation). These three LiDAR measures were combined with spectral information contained in the pansharpened QuickBird and Eagle MNF transformed imagery for image classification experiments. A fusion of pansharpened QuickBird multispectral and Eagle MNF hyperspectral imagery with all LiDAR-derived measures generated the best classification accuracies, 89.8 and 92.6% respectively. These results were generated with the Support Vector Machine and Random Forest machine learning algorithms respectively. The ensemble analysis of all three machine learning classifiers for the pansharpened QuickBird and Eagle MNF fused data outputs did not significantly increase the overall classification accuracy. Results of the study demonstrate the potential of combining either very high spatial resolution multispectral or hyperspectral imagery with LiDAR data for habitat mapping.

  20. WORMHOLE: Novel Least Diverged Ortholog Prediction through Machine Learning

    PubMed Central

    Sutphin, George L.; Mahoney, J. Matthew; Sheppard, Keith; Walton, David O.; Korstanje, Ron

    2016-01-01

    The rapid advancement of technology in genomics and targeted genetic manipulation has made comparative biology an increasingly prominent strategy to model human disease processes. Predicting orthology relationships between species is a vital component of comparative biology. Dozens of strategies for predicting orthologs have been developed using combinations of gene and protein sequence, phylogenetic history, and functional interaction with progressively increasing accuracy. A relatively new class of orthology prediction strategies combines aspects of multiple methods into meta-tools, resulting in improved prediction performance. Here we present WORMHOLE, a novel ortholog prediction meta-tool that applies machine learning to integrate 17 distinct ortholog prediction algorithms to identify novel least diverged orthologs (LDOs) between 6 eukaryotic species—humans, mice, zebrafish, fruit flies, nematodes, and budding yeast. Machine learning allows WORMHOLE to intelligently incorporate predictions from a wide-spectrum of strategies in order to form aggregate predictions of LDOs with high confidence. In this study we demonstrate the performance of WORMHOLE across each combination of query and target species. We show that WORMHOLE is particularly adept at improving LDO prediction performance between distantly related species, expanding the pool of LDOs while maintaining low evolutionary distance and a high level of functional relatedness between genes in LDO pairs. We present extensive validation, including cross-validated prediction of PANTHER LDOs and evaluation of evolutionary divergence and functional similarity, and discuss future applications of machine learning in ortholog prediction. A WORMHOLE web tool has been developed and is available at http://wormhole.jax.org/. PMID:27812085
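
    The meta-tool idea, component predictors' calls becoming the feature vector of a learned combiner, is sketched below with 5 synthetic predictors in place of the 17 real algorithms, and a linear SVM as the combiner; both are illustrative assumptions rather than WORMHOLE's actual configuration.

      # Meta-prediction in miniature: learn to combine component predictors' calls.
      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      rng = np.random.default_rng(3)
      n_pairs, n_predictors = 1000, 5
      truth = rng.integers(0, 2, n_pairs)               # 1 = gene pair is an LDO
      # Each synthetic predictor agrees with the truth at a different made-up rate:
      rates = np.array([0.9, 0.8, 0.75, 0.7, 0.6])
      calls = np.where(rng.random((n_pairs, n_predictors)) < rates,
                       truth[:, None], 1 - truth[:, None])

      meta = SVC(kernel="linear")
      print("meta-predictor CV accuracy:",
            cross_val_score(meta, calls, truth, cv=5).mean().round(3))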

  1. Development of a sugar-binding residue prediction system from protein sequences using support vector machine.

    PubMed

    Banno, Masaki; Komiyama, Yusuke; Cao, Wei; Oku, Yuya; Ueki, Kokoro; Sumikoshi, Kazuya; Nakamura, Shugo; Terada, Tohru; Shimizu, Kentaro

    2017-02-01

    Several methods have been proposed for protein-sugar binding site prediction using machine learning algorithms. However, they are not effective at learning the varied properties of binding site residues caused by the various interactions between proteins and sugars. In this study, we classified sugars into acidic and nonacidic sugars and showed that their binding sites have different amino acid occurrence frequencies. Using this result, we developed sugar-binding residue predictors dedicated to the two classes of sugars: an acidic sugar binding predictor and a nonacidic sugar binding predictor. We also developed a combination predictor which combines the results of the two predictors. We showed that when a sugar is known to be an acidic sugar, the acidic sugar binding predictor achieves the best performance, and that when a sugar is known to be a nonacidic sugar or is not known to be either of the two classes, the combination predictor achieves the best performance. Our method uses only amino acid sequences for prediction. A support vector machine was used as the machine learning algorithm, and the position-specific scoring matrix created by the position-specific iterative basic local alignment search tool was used as the feature vector. We evaluated the performance of the predictors using five-fold cross-validation. We have launched our system as an open-source freeware tool on the GitHub repository (https://doi.org/10.5281/zenodo.61513). Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. WORMHOLE: Novel Least Diverged Ortholog Prediction through Machine Learning.

    PubMed

    Sutphin, George L; Mahoney, J Matthew; Sheppard, Keith; Walton, David O; Korstanje, Ron

    2016-11-01

    The rapid advancement of technology in genomics and targeted genetic manipulation has made comparative biology an increasingly prominent strategy to model human disease processes. Predicting orthology relationships between species is a vital component of comparative biology. Dozens of strategies for predicting orthologs have been developed using combinations of gene and protein sequence, phylogenetic history, and functional interaction with progressively increasing accuracy. A relatively new class of orthology prediction strategies combines aspects of multiple methods into meta-tools, resulting in improved prediction performance. Here we present WORMHOLE, a novel ortholog prediction meta-tool that applies machine learning to integrate 17 distinct ortholog prediction algorithms to identify novel least diverged orthologs (LDOs) between 6 eukaryotic species-humans, mice, zebrafish, fruit flies, nematodes, and budding yeast. Machine learning allows WORMHOLE to intelligently incorporate predictions from a wide-spectrum of strategies in order to form aggregate predictions of LDOs with high confidence. In this study we demonstrate the performance of WORMHOLE across each combination of query and target species. We show that WORMHOLE is particularly adept at improving LDO prediction performance between distantly related species, expanding the pool of LDOs while maintaining low evolutionary distance and a high level of functional relatedness between genes in LDO pairs. We present extensive validation, including cross-validated prediction of PANTHER LDOs and evaluation of evolutionary divergence and functional similarity, and discuss future applications of machine learning in ortholog prediction. A WORMHOLE web tool has been developed and is available at http://wormhole.jax.org/.

  3. Support vector machine based classification of fast Fourier transform spectroscopy of proteins

    NASA Astrophysics Data System (ADS)

    Lazarevic, Aleksandar; Pokrajac, Dragoljub; Marcano, Aristides; Melikechi, Noureddine

    2009-02-01

    Fast Fourier transform spectroscopy has proved to be a powerful method for studying the secondary structure of proteins, since peak positions and their relative amplitudes are affected by the number of hydrogen bridges that sustain this secondary structure. However, to the best of our knowledge, the method has not yet been used for identification of proteins within a complex matrix like a blood sample. The principal reason is the apparent similarity of protein infrared spectra, with actual differences usually masked by the solvent contribution and other interactions. In this paper, we propose a novel machine learning based method that uses protein spectra for classification and identification of such proteins within a given sample. The proposed method uses principal component analysis (PCA) to identify the most important linear combinations of original spectral components and then employs a support vector machine (SVM) classification model applied to the identified combinations to categorize proteins into one of the given groups. Our experiments were performed on a set of four different proteins, namely: Bovine Serum Albumin, Leptin, Insulin-like Growth Factor 2 and Osteopontin. Our proposed method of applying principal component analysis along with support vector machines exhibits excellent classification accuracy when identifying proteins using their infrared spectra.

  4. Angular approach combined to mechanical model for tool breakage detection by eddy current sensors

    NASA Astrophysics Data System (ADS)

    Ritou, M.; Garnier, S.; Furet, B.; Hascoet, J. Y.

    2014-02-01

    The paper presents a new complete approach for Tool Condition Monitoring (TCM) in milling. The aim is the early detection of slight damage so that catastrophic tool failures are prevented. A versatile in-process monitoring system is introduced to address reliability concerns. The tool condition is determined by estimates of the radial eccentricity of the teeth. An adequate criterion is proposed, combining a mechanical model of milling with an angular approach. Then, a new solution is proposed for estimating the cutting force using eddy current sensors implemented close to the spindle nose. Signals are analysed in the angular domain, notably by the synchronous averaging technique. Phase shifts induced by changes of machining direction are compensated. Results are compared with cutting forces measured with a dynamometer table. The proposed method is implemented in an industrial case of a pocket machining operation. One of the cutting edges was slightly damaged during the machining, as shown by a direct measurement of the tool. A control chart is established with the estimates of cutter eccentricity obtained during the machining from the eddy current sensor signals. The efficiency and reliability of the method are demonstrated by a successful detection of the damage.

  5. Effects of cutting parameters and machining environments on surface roughness in hard turning using design of experiment

    NASA Astrophysics Data System (ADS)

    Mia, Mozammel; Bashir, Mahmood Al; Dhar, Nikhil Ranjan

    2016-07-01

    Hard turning is gradually replacing the time-consuming conventional turning process, which is typically followed by grinding, by producing surface quality comparable to that of grinding. The hard-turned surface roughness depends on the cutting parameters, machining environment and tool insert configuration. In this article, the variation of the roughness of the produced surfaces with changes in tool insert configuration, use of coolant and different cutting parameters (cutting speed, feed rate) has been investigated. The investigation was performed by machining AISI 1060 steel, hardened to 56 HRC by heat treatment, using coated carbide inserts under two different machining environments. The depth of cut, fluid pressure and material hardness were kept constant. A Design of Experiment (DOE) was performed to determine the number and combinations of the different cutting parameters. A full factorial analysis was performed to examine the effect of the main factors as well as the interaction effects of factors on surface roughness. A statistical analysis of variance (ANOVA) was employed to determine the combined effect of cutting parameters, environment and tool configuration. The result of this analysis reveals that the environment has the most significant impact on surface roughness, followed by feed rate and tool configuration, respectively.
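
    A full factorial design of this kind is just the Cartesian product of the factor levels; the sketch below enumerates the runs for illustrative levels (not the study's actual settings).

      # Full factorial design: every combination of speed, feed, environment,
      # and insert configuration becomes one experimental run.
      from itertools import product

      speeds = [100, 140, 180]        # cutting speed, m/min (illustrative)
      feeds = [0.10, 0.14, 0.18]      # feed rate, mm/rev (illustrative)
      environments = ["dry", "high-pressure coolant"]
      inserts = ["insert A", "insert B"]

      runs = list(product(speeds, feeds, environments, inserts))
      print(f"{len(runs)} experimental runs")          # 3 x 3 x 2 x 2 = 36
      for run in runs[:3]:
          print(run)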

  6. Intelligent power management in a vehicular system with multiple power sources

    NASA Astrophysics Data System (ADS)

    Murphey, Yi L.; Chen, ZhiHang; Kiliaris, Leonidas; Masrur, M. Abul

    This paper presents an optimal online power management strategy applied to a vehicular power system that contains multiple power sources and deals with largely fluctuating load requests. The optimal online power management strategy is developed using machine learning and fuzzy logic. A machine learning algorithm has been developed to learn the knowledge needed to minimize power loss in a Multiple Power Sources and Loads (M_PS&LD) system. The algorithm exploits the fact that different power sources used to deliver a load request have different power losses under different vehicle states. The machine learning algorithm is used to train an intelligent power controller, an online fuzzy power controller, FPC_MPS, that has the capability of finding combinations of power sources that minimize power losses while satisfying a given set of system and component constraints during a drive cycle. The FPC_MPS was implemented in two simulated systems, a power system with four power sources and a vehicle system with three power sources. Experimental results show that the proposed machine learning approach combined with fuzzy control is a promising technology for intelligent vehicle power management in a M_PS&LD power system.

  7. Coarse woody debris: Managing benefits and fire hazard in the recovering forest

    Treesearch

    James K. Brown; Elizabeth D. Reinhardt; Kylie A. Kramer

    2003-01-01

    Management of coarse woody debris following fire requires consideration of its positive and negative values. The ecological benefits of coarse woody debris and fire hazard considerations are summarized. This paper presents recommendations for desired ranges of coarse woody debris. Example simulations illustrate changes in debris over time and with varying management....

  8. Coarse-graining errors and numerical optimization using a relative entropy framework

    NASA Astrophysics Data System (ADS)

    Chaimovich, Aviel; Shell, M. Scott

    2011-03-01

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, Srel, that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework.
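
    For reference, the functional being minimized can be written in the form used in the relative entropy coarse-graining literature (notation assumed from the originating papers rather than quoted from this abstract):

      \[
        S_{\mathrm{rel}} \;=\; \sum_{i} p_{\mathrm{AA}}(i)\,
          \ln\!\frac{p_{\mathrm{AA}}(i)}{p_{\mathrm{CG}}\bigl(M(i)\bigr)}
          \;+\; \bigl\langle S_{\mathrm{map}} \bigr\rangle_{\mathrm{AA}}
          \;\ge\; 0,
      \]

    where the sum runs over configurations i of the fully atomic (target) ensemble, M is the coarse-graining map, and the mapping-entropy term accounts for the degeneracy of atomic configurations that map to the same CG configuration; the CG model parameters are varied to minimize S_rel, i.e., the information lost upon coarse-graining.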

  9. Improving Machining Accuracy of CNC Machines with Innovative Design Methods

    NASA Astrophysics Data System (ADS)

    Yemelyanov, N. V.; Yemelyanova, I. V.; Zubenko, V. L.

    2018-03-01

    The article considers how the machining accuracy of CNC machines can be achieved by applying innovative methods to the modelling and design of machining systems, drives and machining processes. The topological method of analysis involves visualizing the system as matrices of block graphs with a varying degree of detail between the upper and lower hierarchy levels. This approach combines the advantages of graph theory with the efficiency of decomposition methods; it also offers the visual clarity inherent in both topological models and structural matrices, as well as the robustness of linear algebra in matrix-based analysis. The focus of the study is the design of automated machine workstations, systems, machines and units, which can be broken into interrelated parts and presented as algebraic, topological and set-theoretical models. Every model can be transformed into a model of another type and, as a result, can be interpreted as a system of linear and non-linear equations whose solutions determine the system parameters. This paper analyses the dynamic parameters of the 1716PF4 machine at the design and operation stages. Having studied the impact of the system dynamics on component quality, the authors developed a range of practical recommendations which made it possible to considerably reduce the amplitude of relative motion, exclude some resonance zones within the spindle speed range of 0...6000 min-1 and improve machining accuracy.

  10. Markerless gating for lung cancer radiotherapy based on machine learning techniques

    NASA Astrophysics Data System (ADS)

    Lin, Tong; Li, Ruijiang; Tang, Xiaoli; Dy, Jennifer G.; Jiang, Steve B.

    2009-03-01

    In lung cancer radiotherapy, radiation to a mobile target can be delivered by respiratory gating, for which we need to know whether the target is inside or outside a predefined gating window at any time point during the treatment. This can be achieved by tracking one or more fiducial markers implanted inside or near the target, either fluoroscopically or electromagnetically. However, the clinical implementation of marker tracking is limited for lung cancer radiotherapy mainly due to the risk of pneumothorax. Therefore, gating without implanted fiducial markers is a promising clinical direction. We have developed several template-matching methods for fluoroscopic marker-less gating. Recently, we have modeled the gating problem as a binary pattern classification problem, in which principal component analysis (PCA) and support vector machine (SVM) are combined to perform the classification task. Following the same framework, we investigated different combinations of dimensionality reduction techniques (PCA and four nonlinear manifold learning methods) and two machine learning classification methods (artificial neural networks—ANN and SVM). Performance was evaluated on ten fluoroscopic image sequences of nine lung cancer patients. We found that among all combinations of dimensionality reduction techniques and classification methods, PCA combined with either ANN or SVM achieved a better performance than the other nonlinear manifold learning methods. ANN when combined with PCA achieves a better performance than SVM in terms of classification accuracy and recall rate, although the target coverage is similar for the two classification methods. Furthermore, the running time for both ANN and SVM with PCA is within tolerance for real-time applications. Overall, ANN combined with PCA is a better candidate than other combinations we investigated in this work for real-time gated radiotherapy.
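
    A minimal sketch of the PCA-plus-classifier pipeline described above, using scikit-learn; the array shapes, split, and the random stand-in data are illustrative assumptions, not values from the study:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: each row is a flattened fluoroscopic ROI,
# each label is 1 (target inside gating window) or 0 (outside).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64 * 64))
y = rng.integers(0, 2, size=500)

# PCA for dimensionality reduction, then an SVM for the binary decision.
gating_clf = make_pipeline(StandardScaler(), PCA(n_components=20),
                           SVC(kernel="rbf"))
gating_clf.fit(X[:400], y[:400])

beam_on = gating_clf.predict(X[400:])  # 1 -> beam on, 0 -> beam hold
print("fraction beam-on:", beam_on.mean())
```

    Swapping `SVC` for a small neural network (e.g. `MLPClassifier`) reproduces the ANN variant that the study found slightly better in accuracy and recall.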

  11. Coarse-sediment bands on the inner shelf of southern Monterey Bay, California

    USGS Publications Warehouse

    Hunter, R.E.; Dingler, J.R.; Anima, R.J.; Richmond, B.M.

    1988-01-01

    Bands of coarse sand that trend parallel to the shore, unlike the approximately shore-normal bands found in many inner shelf areas, occur in southern Monterey Bay at water depths of 10-20 m, less than 1 km from the shore. The bands are 20-100 m wide and alternate with bands of fine sand that are of similar width. The coarse-sand bands are as much as 1 m lower than the adjacent fine-sand bands, which have margins inclined at angles of about 20°. The mean grain sizes of the coarse and fine sand are in the range of 0.354-1.0 mm and 0.125-0.354 mm, respectively. Wave ripples that average about 1 m in spacing always occur in the coarse-sand bands. Over a period of 3 yrs, the individual bands moved irregularly and changed in shape, as demonstrated by repeated sidescan sonar surveys and by the monitoring of rods jetted into the sea floor. However, the overall pattern and distribution of the bands remained essentially unchanged. Cores, 0.5-1.0 m long, taken in coarse-sand bands contain 0.2-0.5 m of coarse sand overlying fine sand or interbedded fine and coarse sand. Cores from fine-sand bands have at least one thin coarse sand layer at about the depth of the adjacent coarse-sand band. None of the cores revealed a thick deposit of coarse sand. The shore-parallel bands are of unknown origin. Their origin is especially puzzling because approximately shore-normal bands are present in parts of the study area and immediately to the north. © 1988.

  12. Reservoir sedimentology of the Big Injun sandstone in Granny Creek field, West Virginia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, Xiangdong; Donaldson, K.; Donaldson, A.C.

    1992-01-01

    Big Injun sandstones of Granny Creek oil field (WV) are interpreted as fluvial-deltaic deposits from core and geophysical log data. The reservoir consists of two distinctive lithologies throughout the field: fine-grained sandstones overlain by pebbly and coarse-grained sandstones. The lower fine-grained sandstones were deposited in westward-prograding river-mouth bars, where distal, marine-dominant proximal, and fluvial-dominant proximal bar subfacies are recognized. The principal pay is the marine-influenced proximal bar, where porosity ranges from 13 to 23% and permeability reaches up to 24 md. Thin marine transgressive shales and their laterally equivalent low-permeability sandstones bound time-rock sequences generally less than 10 meters thick. Where mapped in the field, the width of the prograding bar sequence is approximately 2.7 km (dip trend), measured from the truncated eastern edge (pre-coarse-grained-member erosional surface) to the distal western margin. Dip-trending elongate lobes occur within the marine-influenced proximal mouth-bar area, representing the thickest part of the tidally influenced preserved bar. The upper coarse-grained part of the reservoir consists of pebbly sandstones of channel fill from bedload streams. A laterally persistent, low-permeability cemented interval in the lower part commonly caps the underlying pay zone and probably serves as a seal to vertical oil migration. Southwest paleoflow trends based on thickness maps of the unit portend emergence of the West Virginia dome, which influenced erosion patterns of the pre-Greenbrier unconformity for this combination oil trap.

  13. Short-Term Effects of Tillage Practices on Soil Organic Carbon Turnover Assessed by δ13C Abundance in Particle-Size Fractions of Black Soils from Northeast China

    PubMed Central

    Zhang, Xiaoping; Chen, Xuewen

    2014-01-01

    The combination of isotope trace technique and SOC fractionation allows a better understanding of SOC dynamics. A five-year tillage experiment consisting of no-tillage (NT) and mouldboard plough (MP) was used to study the changes in particle-size SOC fractions and corresponding δ13C natural abundance to assess SOC turnover in the 0–20 cm layer of black soils under tillage practices. Compared to the initial level, total SOC tended to be stratified but showed a slight increase in the entire plough layer under short-term NT. MP had no significant impacts on SOC at any depth. Because of significant increases in coarse particulate organic carbon (POC) and decreases in fine POC, total POC did not remarkably decrease under NT and MP. A distinct increase in silt plus clay OC occurred in NT plots, but not in MP plots. However, the δ13C abundances of both coarse and fine POC increased, while those of silt plus clay OC remained almost the same under NT. The C derived from C3 plants was mainly associated with fine particles and much less with coarse particles. These results suggested that short-term NT and MP preferentially enhanced the turnover of POC, which was considerably faster than that of silt plus clay OC. PMID:25162052

  14. MEMD-enhanced multivariate fuzzy entropy for the evaluation of complexity in biomedical signals.

    PubMed

    Azami, Hamed; Smith, Keith; Escudero, Javier

    2016-08-01

    Multivariate multiscale entropy (mvMSE) has been proposed as a combination of the coarse-graining process and multivariate sample entropy (mvSE) to quantify the irregularity of multivariate signals. However, both the coarse-graining process and mvSE may not be reliable for short signals. Although the coarse-graining process can be replaced with multivariate empirical mode decomposition (MEMD), the relative instability of mvSE for short signals remains a problem. Here, we address this issue by proposing the multivariate fuzzy entropy (mvFE) with a new fuzzy membership function. The results using white Gaussian noise show that the mvFE leads to more reliable and stable results, especially for short signals, in comparison with mvSE. Accordingly, we propose MEMD-enhanced mvFE to quantify the complexity of signals. The characteristics of brain regions influenced by partial epilepsy are investigated by focal and non-focal electroencephalogram (EEG) time series. In this sense, the proposed MEMD-enhanced mvFE and mvSE are employed to discriminate focal EEG signals from non-focal ones. The results demonstrate the MEMD-enhanced mvFE values have a smaller coefficient of variation in comparison with those obtained by the MEMD-enhanced mvSE, even for long signals. The results also show that the MEMD-enhanced mvFE has better performance to quantify focal and non-focal signals compared with multivariate multiscale permutation entropy.
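
    As a concrete illustration of the two ingredients discussed above, the sketch below implements the standard coarse-graining step and an exponential fuzzy membership function of the generic form used in fuzzy entropy. The exact membership function proposed in the paper may differ; the parameters r and n here are illustrative assumptions.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (multiscale step)."""
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

def fuzzy_membership(d, r=0.15, n=2):
    """Exponential fuzzy membership: similarity degree for distance d."""
    return np.exp(-(d ** n) / r)

# Example: coarse-grain white Gaussian noise at scale 3, grade a distance.
x = np.random.default_rng(1).normal(size=1000)
y = coarse_grain(x, 3)
print(len(y), fuzzy_membership(0.1))
```

    In the MEMD-enhanced variant discussed in the abstract, the coarse-graining step is replaced by multivariate empirical mode decomposition, while the fuzzy membership replaces the hard threshold of sample entropy.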

  15. Fine-grained visual marine vessel classification for coastal surveillance and defense applications

    NASA Astrophysics Data System (ADS)

    Solmaz, Berkan; Gundogdu, Erhan; Karaman, Kaan; Yücesoy, Veysel; Koç, Aykut

    2017-10-01

    The need for automated visual content analysis has increased substantially due to the large number of images captured by surveillance cameras. With a focus on the development of practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in the visual categorization of generic images. For fine-grained image categorization, a closely related yet more challenging problem than generic image categorization because of the high visual similarity within subgroups, diverse applications have been developed, such as classifying images of vehicles, birds, food and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images from online sources, grouping them into four coarse categories: naval, civil, commercial and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates and submarines. To distinguish the images, we extract state-of-the-art deep visual representations and train support vector machines, and we further fine-tune the deep representations on marine vessel images. Experiments address two scenarios: classification and verification of naval marine vessels. The classification experiment targets coarse categorization as well as learning models of the fine categories. The verification experiment involves identifying specific naval vessels by determining, with the help of the learnt deep representations, whether a pair of images shows the same vessel. Given the promising performance obtained, we believe the presented capabilities would be essential components of future coastal and on-board surveillance systems.

  16. 3D registration of depth data of porous surface coatings based on 3D phase correlation and the trimmed ICP algorithm

    NASA Astrophysics Data System (ADS)

    Loftfield, Nina; Kästner, Markus; Reithmeier, Eduard

    2017-06-01

    A critical factor for endoprostheses is the quality of the tribological pairing. The objective of this research project is to manufacture stochastically porous aluminum oxide surface coatings with high wear resistance and active friction minimization. There are many experimental and computational techniques, from mercury porosimetry to imaging methods, for studying porous materials; however, the characterization of disordered pore networks remains a great challenge. To meet this challenge, we aim to obtain a three-dimensional, high-resolution reconstruction of the surface. In this work, the reconstruction is approached by repeatedly milling down the surface by a fixed decrement while measuring each layer using a confocal laser scanning microscope (CLSM). The depth data of the successive layers acquired in this way are then registered pairwise. A direct registration approach is deployed and implemented in two steps, a coarse and a fine alignment. The coarse alignment of the depth data is limited to a translational shift, which occurs in the horizontal direction from placing the sample in turns under the CLSM and the milling machine, and in the vertical direction from the milling process itself. The shift is determined with an approach utilizing 3D phase correlation. The fine alignment is implemented by the Trimmed Iterative Closest Point algorithm, matching the most likely common pixels roughly specified by an estimated overlap rate. With the presented two-step approach, a proper 3D registration of the successive depth data of the layers is obtained.
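
    The translational coarse alignment can be summarized with the classic phase-correlation computation. The numpy sketch below, a generic illustration rather than the authors' implementation, recovers an integer 3D shift between two volumes; a real pipeline would add windowing and sub-voxel refinement.

```python
import numpy as np

def phase_correlation_shift(ref, moved, eps=1e-9):
    """Shift t such that np.roll(ref, t, axis=(0, 1, 2)) best matches moved."""
    R = np.fft.fftn(ref)
    M = np.fft.fftn(moved)
    cross = M * np.conj(R)
    cross /= np.abs(cross) + eps               # normalized cross-power spectrum
    corr = np.fft.ifftn(cross).real            # correlation surface, peak at t
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (FFT wrap-around convention).
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

a = np.random.default_rng(2).normal(size=(32, 32, 32))
b = np.roll(a, shift=(3, -2, 5), axis=(0, 1, 2))
print(phase_correlation_shift(a, b))  # expected (3, -2, 5)
```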

  17. Application of Numerical Simulation for the Analysis of the Processes of Rotary Ultrasonic Drilling

    NASA Astrophysics Data System (ADS)

    Naď, Milan; Čičmancová, Lenka; Hajdu, Štefan

    2016-12-01

    Rotary ultrasonic machining (RUM) is a hybrid process that combines diamond grinding with ultrasonic machining. It is most suitable for machining hard, brittle materials such as ceramics and composites. Due to its excellent machining performance, RUM is often applied for drilling hard-to-machine materials. In the final phase of drilling, deterioration of the edge of the drilled hole can occur, a phenomenon called edge chipping. During hole drilling, the thickness of the bottom of the drilled hole changes; consequently, the bottom of the hole, acting as a plate structure, passes through a resonance state. This resonance state can be considered one of the important factors leading to edge chipping. The effects of changes in the bottom thickness, as well as of the fillet radius between the wall and the bottom of the borehole, on the stress-strain states during RUM are analyzed.

  18. Machining of Aircraft Titanium with Abrasive-Waterjets for Fatigue Critical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, H. T.; Hovanski, Yuri; Dahl, Michael E.

    2010-10-04

    Laboratory tests were conducted to determine the fatigue performance of AWJ-machined aircraft titanium. Dog-bone specimens machined with AWJs were prepared and tested with and without sanding and dry-grit blasting with Al2O3 as secondary processes. The secondary processes were applied to remove the visual appearance of AWJ-generated striations and to clean up garnet embedment. The fatigue performance of AWJ-machined specimens was compared with baseline specimens machined by CNC milling. Fatigue test results not only confirmed the findings for the aluminum dog-bone specimens but also showed further enhancement of fatigue performance. In addition, titanium is known to be notoriously difficult to cut with contact tools, while AWJs cut it 34% faster than stainless steel. AWJ cutting and dry-grit blasting are shown to be a preferred combination for processing aircraft titanium that is fatigue critical.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ken L. Stratton

    The objective of this project is to investigate the applicability of a combined Global Positioning System and Inertial Measurement Unit (GPS/IMU) for information-based displays on earthmoving machines and for automated earthmoving machines in the future. This technology has the potential of allowing an information-based product like Caterpillar's Computer Aided Earthmoving System (CAES) to operate in areas with satellite shading. Satellite shading is an issue in open pit mining because machines are routinely required to operate close to high walls, which significantly reduces the amount of sky visible to the GPS antenna mounted on the machine. An inertial measurement unit provides data for the calculation of position based on sensed accelerations and rotation rates of the machine's rigid body. When this information is coupled with GPS, the result is a positioning system that can maintain positioning capability during periods of shading.

  20. Surface dimpling on rotating work piece using rotation cutting tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhapkar, Rohit Arun; Larsen, Eric Richard

    A combined method of machining and applying a surface texture to a work piece and a tool assembly that is capable of machining and applying a surface texture to a work piece are disclosed. The disclosed method includes machining portions of an outer or inner surface of a work piece. The method also includes rotating the work piece in front of a rotating cutting tool and engaging the outer surface of the work piece with the rotating cutting tool to cut dimples in the outer surface of the work piece. The disclosed tool assembly includes a rotating cutting tool coupled to an end of a rotational machining device, such as a lathe. The same tool assembly can be used to both machine the work piece and apply a surface texture to the work piece without unloading the work piece from the tool assembly.

  1. A new method for the prediction of chatter stability lobes based on dynamic cutting force simulation model and support vector machine

    NASA Astrophysics Data System (ADS)

    Peng, Chong; Wang, Lun; Liao, T. Warren

    2015-10-01

    Chatter has become a critical factor hindering machining quality and productivity in machining processes. To avoid cutting chatter, a new method based on a dynamic cutting force simulation model and a support vector machine (SVM) is presented for the prediction of chatter stability lobes. The cutting force is selected as the monitoring signal, and wavelet energy entropy theory is used to extract the feature vectors. A support vector machine is constructed using the MATLAB LIBSVM toolbox for pattern classification based on the feature vectors derived from the experimental cutting data. Combined with the dynamic cutting force simulation model, the stability lobes diagram (SLD) can then be estimated. Finally, the predicted results are compared with existing methods such as the zero-order analytical (ZOA) and semi-discretization (SD) methods, as well as with actual cutting experiments, to confirm the validity of the new method.
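
    A sketch of the feature-extraction step, computing a wavelet energy entropy from a cutting-force window and feeding it to an SVM. Scikit-learn is used here in place of the MATLAB LIBSVM toolbox; the wavelet choice, window length, and synthetic signals are illustrative assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_energy_entropy(signal, wavelet="db4", level=4):
    """Shannon entropy of the relative energies of the wavelet sub-bands."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()
    return -np.sum(p * np.log(p + 1e-12))

# Hypothetical labeled cutting-force windows: 1 = chatter, 0 = stable.
rng = np.random.default_rng(3)
stable = [rng.normal(size=1024) for _ in range(50)]
chatter = [np.sin(0.3 * np.arange(1024)) + rng.normal(size=1024)
           for _ in range(50)]
X = np.array([[wavelet_energy_entropy(s)] for s in stable + chatter])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([[wavelet_energy_entropy(chatter[0])]]))
```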

  2. Seeing through the Canopy: Relationship between Coarse Woody Debris and Forest Structure measured by Airborne Lidar in the Brazilian Amazon

    NASA Astrophysics Data System (ADS)

    Scaranello, M. A., Sr.; Keller, M. M.; dos-Santos, M. N.; Longo, M.; Pinagé, E. R.; Leitold, V.

    2016-12-01

    Coarse woody debris is an important but infrequently quantified carbon pool in tropical forests. Based on studies at 12 sites spread across the Brazilian Amazon, we quantified coarse woody debris stocks in intact forests and in forests affected by different intensities of degradation from logging and/or fire. Measurements were made in situ, and for the first time field measurements of coarse woody debris were related to structural metrics derived from airborne lidar. Using the line-intercept method, we established 84 transects for sampling fallen coarse woody debris, with associated inventory plots for sampling standing dead wood, in intact, conventional-logging, reduced-impact-logging, burned, and burned-after-logging forests. The overall mean and standard deviation of total coarse woody debris were 50.0 Mg ha-1 and 26.4 Mg ha-1, respectively. In side-by-side comparisons of nearby areas, forest degradation increased coarse woody debris stocks relative to intact forests by a factor of 1.7 in reduced-impact-logging forests and up to 3-fold in burned forests. The ratio between coarse woody debris and biomass increased linearly with the number of degradation events (R²: 0.67, p<0.01). Individual lidar-derived structural variables correlated strongly with coarse woody debris in intact and reduced-impact-logging forests: the 5th percentile of last returns for intact forests (R²: 0.78, p<0.01) and forest gap area, mapped using a lidar-derived canopy height model, for reduced-impact-logging forests (R²: 0.63, p<0.01). Individual gap area also played a weak but significant role in determining coarse woody debris in burned forests (R²: 0.21, p<0.05), but with a contrasting trend. Both degradation-specific and general multiple models using lidar-derived variables were good predictors of coarse woody debris stocks at different degradation levels in the Brazilian Amazon. The strong relation of coarse woody debris to lidar-derived structural variables suggests an approach for quantifying this infrequently measured pool over large areas.

  3. A Framework to Guide the Assessment of Human-Machine Systems.

    PubMed

    Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo

    2017-03-01

    We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance are thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided according to human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human-machine systems. This framework can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example in our write-up of how it can be used to aid in project success.

  4. Competitive two-agent scheduling problems to minimize the weighted combination of makespans in a two-machine open shop

    NASA Astrophysics Data System (ADS)

    Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia

    2018-04-01

    In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of two competitive agents. A complexity proof is presented for minimizing the weighted combination of the makespan of each agent if the weight α belonging to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented if the weight is equal to one. Additionally, another approximation algorithm is presented if the weight is larger than one.
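
    In standard scheduling notation, the objective studied can be written as the following weighted combination (a schematic form; the paper's exact normalization of the weight may differ):

```latex
\min_{\sigma \in \Sigma} \; C_{\max}^{A}(\sigma) + \alpha \, C_{\max}^{B}(\sigma)
```

    where Σ is the set of feasible two-machine open-shop schedules of both agents' jobs, C_max^X is the makespan of agent X, and α is the weight attached to agent B.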

  5. Automation of the CCTV-mediated detection of individuals illegally carrying firearms: combining psychological and technological approaches

    NASA Astrophysics Data System (ADS)

    Darker, Iain T.; Kuo, Paul; Yang, Ming Yuan; Blechko, Anastassia; Grecos, Christos; Makris, Dimitrios; Nebel, Jean-Christophe; Gale, Alastair G.

    2009-05-01

    Findings from the current UK national research programme, MEDUSA (Multi Environment Deployable Universal Software Application), are presented. MEDUSA brings together two approaches to facilitate the design of an automatic, CCTV-based firearm detection system: a psychological approach, to elicit the strategies used by CCTV operators, and a machine vision approach, to identify key cues derived from camera imagery. Potentially effective human- and machine-based strategies have been identified; these will form elements of the final system. The efficacy of the resulting algorithms in discriminating between firearms and matched distractor objects has been tested on staged CCTV footage. Early results indicate the potential of this combined approach.

  6. ABB's advanced steam turbine program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chellini, R.

    Demand for industrial steam turbines for combined-cycle applications and cogeneration plants has led turbine manufacturers to standardize their machines to reduce delivery time and cost. ABB, also a supplier of turnkey plants, manufactures steam turbines in Finspong, Sweden, at the former ASEA Stal facilities, and in Nuernberg, Germany, at the former AEG facilities. The companies have joined forces, setting up the Advanced Steam Turbine Program (ATP) that, once completed, will cover a power range from 2 to 100 MW. The company decided to use two criteria as a starting point: the high-efficiency design of the Swedish turbines and the high reliability of the German machines. Thus, the main task was combining the two designs in standard machines that could be assembled quickly into predefined packages to meet the specific needs of combined-cycle and cogeneration plants specified by customers. In carrying out this project, emphasis was put on cost reduction as one of the main goals. The first result of the ATP program, presented by ABB Turbinen Nuernberg, is the range of 2-30 MW turbines covered by two frame sizes comprising standard components supporting the thermodynamic module. An important feature is the standardization of the speed-reduction gearbox.

  7. Learning to predict chemical reactions.

    PubMed

    Kayala, Matthew A; Azencott, Chloé-Agathe; Chen, Jonathan H; Baldi, Pierre

    2011-09-26

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problems can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles, respectively, are not high throughput, are not generalizable or scalable, and lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry data set consisting of 1630 full multistep reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top-ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of nonproductive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. Furthermore, the system is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert does not handle. A web interface to the machine learning based mechanistic reaction predictor is accessible through our chemoinformatics portal ( http://cdb.ics.uci.edu) under the Toolkits section.
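
    The ranking stage can be illustrated with a simple pairwise-comparison scheme: train a classifier on feature differences between productive and unproductive mechanistic steps, then score candidates with the learned weights. The descriptors and data below are synthetic stand-ins; the paper trains an ensemble of ranking models over MO interaction attributes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n_feat = 8
# Synthetic descriptors for filled-to-unfilled MO interactions.
productive = rng.normal(loc=0.5, size=(200, n_feat))
unproductive = rng.normal(loc=-0.5, size=(200, n_feat))

# Pairwise transform: classify the sign of (productive - unproductive).
diffs = productive[:, None, :] - unproductive[None, :, :]
X = diffs.reshape(-1, n_feat)
y = np.ones(len(X))
X = np.vstack([X, -X])
y = np.concatenate([y, np.zeros(len(y))])

ranker = LogisticRegression(max_iter=1000).fit(X, y)

def score(mo_interaction):
    """Higher score -> ranked closer to the top (more productive).
    The model is linear, so scoring single vectors with the same
    weights preserves the pairwise ordering it was trained on."""
    return ranker.decision_function(mo_interaction.reshape(1, -1))[0]

print(score(productive[0]) > score(unproductive[0]))  # expect True
```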

  8. Learning to Predict Chemical Reactions

    PubMed Central

    Kayala, Matthew A.; Azencott, Chloé-Agathe; Chen, Jonathan H.

    2011-01-01

    Being able to predict the course of arbitrary chemical reactions is essential to the theory and applications of organic chemistry. Approaches to the reaction prediction problems can be organized around three poles corresponding to: (1) physical laws; (2) rule-based expert systems; and (3) inductive machine learning. Previous approaches at these poles respectively are not high-throughput, are not generalizable or scalable, or lack sufficient data and structure to be implemented. We propose a new approach to reaction prediction utilizing elements from each pole. Using a physically inspired conceptualization, we describe single mechanistic reactions as interactions between coarse approximations of molecular orbitals (MOs) and use topological and physicochemical attributes as descriptors. Using an existing rule-based system (Reaction Explorer), we derive a restricted chemistry dataset consisting of 1630 full multi-step reactions with 2358 distinct starting materials and intermediates, associated with 2989 productive mechanistic steps and 6.14 million unproductive mechanistic steps. And from machine learning, we pose identifying productive mechanistic steps as a statistical ranking, information retrieval, problem: given a set of reactants and a description of conditions, learn a ranking model over potential filled-to-unfilled MO interactions such that the top ranked mechanistic steps yield the major products. The machine learning implementation follows a two-stage approach, in which we first train atom level reactivity filters to prune 94.00% of non-productive reactions with a 0.01% error rate. Then, we train an ensemble of ranking models on pairs of interacting MOs to learn a relative productivity function over mechanistic steps in a given system. Without the use of explicit transformation patterns, the ensemble perfectly ranks the productive mechanism at the top 89.05% of the time, rising to 99.86% of the time when the top four are considered. Furthermore, the system is generalizable, making reasonable predictions over reactants and conditions which the rule-based expert does not handle. A web interface to the machine learning based mechanistic reaction predictor is accessible through our chemoinformatics portal (http://cdb.ics.uci.edu) under the Toolkits section. PMID:21819139

  9. Relative entropy and optimization-driven coarse-graining methods in VOTCA

    DOE PAGES

    Mashayak, S. Y.; Jochum, Mara N.; Koschke, Konstantin; ...

    2015-07-20

    We discuss recent advances in the VOTCA package for systematic coarse-graining. Two methods have been implemented, namely downhill simplex optimization and relative entropy minimization. We illustrate the new methods by coarse-graining SPC/E bulk water and more complex water-methanol mixture systems. The CG potentials obtained from both methods are then evaluated by comparing the pair distributions from the coarse-grained simulations to the reference atomistic simulations. We have also added a parallel analysis framework to improve the computational efficiency of the coarse-graining process.

  10. Role of translational entropy in spatially inhomogeneous, coarse-grained models

    NASA Astrophysics Data System (ADS)

    Langenberg, Marcel; Jackson, Nicholas E.; de Pablo, Juan J.; Müller, Marcus

    2018-03-01

    Coarse-grained models of polymer and biomolecular systems have enabled the computational study of cooperative phenomena, e.g., self-assembly, by lumping multiple atomistic degrees of freedom along the backbone of a polymer, lipid, or DNA molecule into one effective coarse-grained interaction center. Such a coarse-graining strategy leaves the number of molecules unaltered. In order to treat the surrounding solvent or counterions on the same coarse-grained level of description, one can also stochastically group several of those small molecules into an effective, coarse-grained solvent bead or "fluid element." Such a procedure reduces the number of molecules, and we discuss how to compensate the concomitant loss of translational entropy by density-dependent interactions in spatially inhomogeneous systems.

  11. Coarse graining for synchronization in directed networks

    NASA Astrophysics Data System (ADS)

    Zeng, An; Lü, Linyuan

    2011-05-01

    Coarse-graining is a promising way to analyze and visualize large-scale networks. Coarse-grained networks are required to preserve the statistical properties as well as the dynamic behaviors of the initial networks. Several methods have been proposed and found effective for undirected networks, while coarse-graining of directed networks has received little attention. In this paper we propose a path-based coarse-graining (PCG) method for directed networks. Performing linear stability analysis of synchronization and numerical simulation of the Kuramoto model on four kinds of directed networks, including tree networks and variants of Barabási-Albert networks, Watts-Strogatz networks, and Erdös-Rényi networks, we find that our method effectively preserves network synchronizability.

  12. The Design of the Automatic Control System of the Gripping-Belt Speed in Long-Rootstalk Traditional Chinese Herbal Harvester

    NASA Astrophysics Data System (ADS)

    Huang, Jinxia; Wang, Junfa; Yu, Yonghong

    This article presents the design of an automatic gripping-belt speed tracking system for a traditional Chinese herbal harvester, built around an AT89C52 single-chip microcomputer combined with a fuzzy PID control algorithm. The system adjusts the gripping-belt speed in accordance with variations in the machine's operation, so that the machine operation speed and the gripping-belt speed remain closely matched and the harvesting performance of the machine is greatly improved. The system design includes both hardware and software.
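
    A minimal sketch of a fuzzy-gain-scheduled PID loop of the kind described; the rule thresholds, gains, and toy actuator model are illustrative assumptions (production firmware on an AT89C52 would implement this in C with fixed-point arithmetic):

```python
def fuzzy_gains(error, base_kp=1.0, base_ki=0.05, base_kd=0.1):
    """Crude fuzzy scheduling: large error -> stronger P, weaker I."""
    e = abs(error)
    if e > 1.0:                                      # "large" error
        return 1.5 * base_kp, 0.5 * base_ki, base_kd
    if e > 0.3:                                      # "medium" error
        return base_kp, base_ki, base_kd
    return 0.7 * base_kp, 1.3 * base_ki, base_kd     # "small" error

def fuzzy_pid_step(setpoint, measured, state):
    """One discrete fuzzy-PID update; `state` carries (integral, prev error)."""
    integral, prev_err = state
    err = setpoint - measured
    kp, ki, kd = fuzzy_gains(err)
    integral += err
    u = kp * err + ki * integral + kd * (err - prev_err)
    return u, (integral, err)

# Track the machine operation speed (setpoint) with the gripping-belt speed.
state, belt_speed = (0.0, 0.0), 0.0
for _ in range(200):
    u, state = fuzzy_pid_step(setpoint=2.0, measured=belt_speed, state=state)
    belt_speed += 0.5 * u   # toy actuator response
print(round(belt_speed, 2))  # settles near the 2.0 setpoint
```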

  13. The responses of subjective feeling, task performance ability, cortisol and HRV for the various types of floor impact sound: a pilot study.

    PubMed

    Yun, Seok Hyeon; Park, Sang Jin; Sim, Chang Sun; Sung, Joo Hyun; Kim, Ahra; Lee, Jang Myeong; Lee, Sang Hyun; Lee, Jiho

    2017-01-01

    Recently, noise transmitted from neighbors through floors and walls has become a serious social problem. Noise between floors can cause physical and psychological problems, and different types of floor impact sound (FIS) may have different effects on the body and mind. The purpose of this study is to assess the responses of subjective feeling, task performance ability, cortisol and HRV to various types of floor impact sound. Ten men and 5 women were enrolled in our study, and an English listening test was performed under twelve different types of FIS, made by combinations of a bang machine (B), tapping machine (T), impact ball (I) and sound-proof mattress (M). The 15 subjects were exposed to each FIS for about 3 min, and subjective annoyance, performance ability (English listening test), urine/saliva cortisol levels and heart rate variability (HRV) were examined. The sound pressure level (SPL) and frequency content of each FIS were analyzed. Repeated-measures ANOVA, paired t-tests and Wilcoxon signed rank tests were performed for data analysis. The SPL of the tapping machine (T) was reduced by the soundproof mattress (M) by 3.9-7.3 dBA. The impact ball (I) was about 10 dBA higher than the other FIS at low frequencies (31.5-125 Hz), and the tapping machine (T) was about 10 dBA higher than the other FIS at high frequencies (2-4 kHz). Subjective annoyance was highest for the combination of bang machine and tapping machine (BT), followed by the tapping machine (T). The English listening score was likewise lowest for BT, followed by T. The difference in salivary cortisol levels between the various types of FIS was significant (p = 0.003). The change in HRV parameters with FIS type was significant for some parameters: total power (TP) (p = 0.004), low frequency (LF) (p = 0.002) and high frequency (HF) (p = 0.011). These results suggest that subjective and objective human responses differ according to FIS type and combination.

  14. Relative Effectiveness of Titles, Abstracts, and Subject Headings for Machine Retrieval from the COMPENDEX Services

    ERIC Educational Resources Information Center

    Byrne, Jerry R.

    1975-01-01

    Investigated the relative merits of searching on titles, subject headings, abstracts, free-language terms, and combinations of these elements. The combination of titles and abstracts came the closest to 100 percent retrieval. (Author/PF)

  15. Fabrication of an infrared Shack-Hartmann sensor by combining high-speed single-point diamond milling and precision compression molding processes.

    PubMed

    Zhang, Lin; Zhou, Wenchen; Naples, Neil J; Yi, Allen Y

    2018-05-01

    A novel fabrication method combining high-speed single-point diamond milling and precision compression molding for the fabrication of discontinuous freeform microlens arrays was proposed. Compared with slow-tool-servo diamond broaching, high-speed single-point diamond milling was selected for its flexibility in the fabrication of true 3D optical surfaces with discontinuous features. The advantage of single-point diamond milling is that surface features can be constructed sequentially by spacing the axes of a virtual spindle at arbitrary positions, based on the combination of rotational and translational motions of both the high-speed spindle and the linear slides. With this method, each micro-lenslet was treated as a microstructure cell by passing the axis of the virtual spindle through the vertex of each cell. An optimization algorithm based on minimum-area fabrication was introduced into the machining process to further increase machining efficiency. After the mold insert was machined, it was used to replicate the microlens array onto chalcogenide glass. In the ensuing optical measurement, the self-built Shack-Hartmann wavefront sensor was shown to be accurate in detecting an infrared wavefront by both experiment and numerical simulation. The combined results show that precision compression molding of chalcogenide glasses can be an economical and precise optical fabrication technology for high-volume production of infrared optics.

  16. Effects of coarse chalk dust particles (2.5-10 μm) on respiratory burst and oxidative stress in alveolar macrophages.

    PubMed

    Zhang, Yuexia; Yang, Zhenhua; Feng, Yan; Li, Ruijin; Zhang, Quanxi; Geng, Hong; Dong, Chuan

    2015-08-01

    The main aim of the present study was to examine the in vitro responses of rat alveolar macrophages (AMs) exposed to coarse chalk dust particles (particulate matter in the size range 2.5-10 μm, PM(coarse)) in terms of respiratory burst and oxidative stress. The chalk PM(coarse)-induced respiratory burst in AMs was measured using a luminol-dependent chemiluminescence (CL) method. In addition, cell viability; lactate dehydrogenase (LDH) release; levels of cellular superoxide dismutase (SOD), catalase (CAT), glutathione (GSH), malondialdehyde (MDA), and acid phosphatase (ACP); plasma membrane ATPase; and extracellular nitric oxide (NO) levels were determined 4 h after treatment with different dosages of chalk PM(coarse). The results showed that chalk PM(coarse) initiated the respiratory burst of AMs, as indicated by strong CL, which was inhibited by diphenyleneiodonium chloride and L-N-nitro-L-arginine methyl ester hydrochloride. This suggested that chalk PM(coarse) induced the production of reactive oxygen species (ROS) and reactive nitrogen species (RNS) in AMs. This hypothesis was confirmed by the fact that chalk PM(coarse) caused a significant decrease in intracellular SOD, GSH, ACP, and ATPase levels and a notable increase in intracellular CAT and MDA content and in the extracellular NO level, consequently leading to a decrease in cell viability and an increase in LDH release. It was concluded that AMs exposed to chalk PM(coarse) can suffer cytotoxicity that may be mediated by the generation of excessive ROS/RNS. Graphical abstract: the possible mechanism of coarse chalk particle-induced adverse effects in AMs.

  17. String model for the dynamics of glass-forming liquids

    PubMed Central

    Pazmiño Betancourt, Beatriz A.; Douglas, Jack F.; Starr, Francis W.

    2014-01-01

    We test the applicability of a living polymerization theory to describe cooperative string-like particle rearrangement clusters (strings) observed in simulations of a coarse-grained polymer melt. The theory quantitatively describes the interrelation between the average string length L, configurational entropy Sconf, and the order parameter for string assembly Φ without free parameters. Combining this theory with the Adam-Gibbs model allows us to predict the relaxation time τ in a lower temperature T range than accessible by current simulations. In particular, the combined theories suggest a return to Arrhenius behavior near Tg and a low T residual entropy, thus avoiding a Kauzmann “entropy crisis.” PMID:24880303
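
    For context, the Adam-Gibbs relation used in the extrapolation links the structural relaxation time to the configurational entropy (a schematic form; τ0 and A are material-dependent constants, and in the combined theory the temperature dependence of S_conf is supplied through the string model):

```latex
\tau(T) = \tau_{0} \, \exp\!\left( \frac{A}{T \, S_{\mathrm{conf}}(T)} \right)
```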

  18. String model for the dynamics of glass-forming liquids.

    PubMed

    Pazmiño Betancourt, Beatriz A; Douglas, Jack F; Starr, Francis W

    2014-05-28

    We test the applicability of a living polymerization theory to describe cooperative string-like particle rearrangement clusters (strings) observed in simulations of a coarse-grained polymer melt. The theory quantitatively describes the interrelation between the average string length L, configurational entropy Sconf, and the order parameter for string assembly Φ without free parameters. Combining this theory with the Adam-Gibbs model allows us to predict the relaxation time τ in a lower temperature T range than accessible by current simulations. In particular, the combined theories suggest a return to Arrhenius behavior near Tg and a low T residual entropy, thus avoiding a Kauzmann "entropy crisis."

  19. Upscaling of Mixed Finite Element Discretization Problems by the Spectral AMGe Method

    DOE PAGES

    Kalchev, Delyan Z.; Lee, C. S.; Villa, U.; ...

    2016-09-22

    Here, we propose two multilevel spectral techniques for constructing coarse discretization spaces for saddle-point problems corresponding to PDEs involving a divergence constraint, with a focus on mixed finite element discretizations of scalar self-adjoint second order elliptic equations on general unstructured grids. We use element agglomeration algebraic multigrid (AMGe), which employs coarse elements that can have nonstandard shape since they are agglomerates of fine-grid elements. The coarse basis associated with each agglomerated coarse element is constructed by solving local eigenvalue problems and local mixed finite element problems. This construction leads to stable upscaled coarse spaces and guarantees the inf-sup compatibility of the upscaled discretization. Also, the approximation properties of these upscaled spaces improve by adding more local eigenfunctions to the coarse spaces. The higher accuracy comes at the cost of additional computational effort, as the sparsity of the resulting upscaled coarse discretization (referred to as operator complexity) deteriorates when we introduce additional functions in the coarse space. We also provide an efficient solver for the coarse (upscaled) saddle-point system by employing hybridization, which leads to a symmetric positive definite (s.p.d.) reduced system for the Lagrange multipliers; to solve the latter s.p.d. system, we use our previously developed spectral AMGe solver. Numerical experiments, in both two and three dimensions, are provided to illustrate the efficiency of the proposed upscaling technique.

  20. Upscaling of Mixed Finite Element Discretization Problems by the Spectral AMGe Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalchev, Delyan Z.; Lee, C. S.; Villa, U.

    Here, we propose two multilevel spectral techniques for constructing coarse discretization spaces for saddle-point problems corresponding to PDEs involving a divergence constraint, with a focus on mixed finite element discretizations of scalar self-adjoint second order elliptic equations on general unstructured grids. We use element agglomeration algebraic multigrid (AMGe), which employs coarse elements that can have nonstandard shape since they are agglomerates of fine-grid elements. The coarse basis associated with each agglomerated coarse element is constructed by solving local eigenvalue problems and local mixed finite element problems. This construction leads to stable upscaled coarse spaces and guarantees the inf-sup compatibility of the upscaled discretization. Also, the approximation properties of these upscaled spaces improve by adding more local eigenfunctions to the coarse spaces. The higher accuracy comes at the cost of additional computational effort, as the sparsity of the resulting upscaled coarse discretization (referred to as operator complexity) deteriorates when we introduce additional functions in the coarse space. We also provide an efficient solver for the coarse (upscaled) saddle-point system by employing hybridization, which leads to a symmetric positive definite (s.p.d.) reduced system for the Lagrange multipliers; to solve the latter s.p.d. system, we use our previously developed spectral AMGe solver. Numerical experiments, in both two and three dimensions, are provided to illustrate the efficiency of the proposed upscaling technique.

  1. Quantifying the coarse-root biomass of intensively managed loblolly pine plantations

    Treesearch

    Ashley T. Miller; H. Lee Allen; Chris A. Maier

    2006-01-01

    Most of the carbon accumulation during a forest rotation is in plant biomass and the forest floor. Most of the belowground biomass in older loblolly pine (Pinus taeda L.) forests is in coarse roots, and coarse roots persist longer after harvest than aboveground biomass and fine roots. The main objective was to assess the carbon accumulation in coarse...

  2. Quantifying the coarse-root biomass of intensively managed loblolly pine plantations

    Treesearch

    Ashley T. Miller; H. Lee Allen; Chris A. Maier

    2006-01-01

    Most of the carbon accumulation during a forest rotation is in plant biomass and the forest floor. Most of the belowground biomass in older loblolly pine (Pinus taeda L.) forests is in coarse roots, and coarse roots persist longer after harvest than aboveground biomass and fine roots. The main objective was to assess the carbon accumulation in coarse...

  3. Coarse-graining and self-dissimilarity of complex networks

    NASA Astrophysics Data System (ADS)

    Itzkovitz, Shalev; Levitt, Reuven; Kashtan, Nadav; Milo, Ron; Itzkovitz, Michael; Alon, Uri

    2005-01-01

    Can complex engineered and biological networks be coarse-grained into smaller and more understandable versions in which each node represents an entire pattern in the original network? To address this, we define coarse-graining units as connectivity patterns which can serve as the nodes of a coarse-grained network and present algorithms to detect them. We use this approach to systematically reverse-engineer electronic circuits, forming understandable high-level maps from incomprehensible transistor wiring: first, a coarse-grained version in which each node is a gate made of several transistors is established. Then the coarse-grained network is itself coarse-grained, resulting in a high-level blueprint in which each node is a circuit module made of many gates. We apply our approach also to a mammalian protein signal-transduction network, to find a simplified coarse-grained network with three main signaling channels that resemble multi-layered perceptrons made of cross-interacting MAP-kinase cascades. We find that both biological and electronic networks are “self-dissimilar,” with different network motifs at each level. The present approach may be used to simplify a variety of directed and nondirected, natural and designed networks.

  4. Coarse-graining errors and numerical optimization using a relative entropy framework.

    PubMed

    Chaimovich, Aviel; Shell, M Scott

    2011-03-07

    The ability to generate accurate coarse-grained models from reference fully atomic (or otherwise "first-principles") ones has become an important component in modeling the behavior of complex molecular systems with large length and time scales. We recently proposed a novel coarse-graining approach based upon variational minimization of a configuration-space functional called the relative entropy, S(rel), that measures the information lost upon coarse-graining. Here, we develop a broad theoretical framework for this methodology and numerical strategies for its use in practical coarse-graining settings. In particular, we show that the relative entropy offers tight control over the errors due to coarse-graining in arbitrary microscopic properties, and suggests a systematic approach to reducing them. We also describe fundamental connections between this optimization methodology and other coarse-graining strategies like inverse Monte Carlo, force matching, energy matching, and variational mean-field theory. We suggest several new numerical approaches to its minimization that provide new coarse-graining strategies. Finally, we demonstrate the application of these theoretical considerations and algorithms to a simple, instructive system and characterize convergence and errors within the relative entropy framework. © 2011 American Institute of Physics.

  5. Bio-Inspired Human-Level Machine Learning

    DTIC Science & Technology

    2015-10-25

    This report describes bio-inspired human-level machine learning and its extensions to high-level cognitive functions such as the anagram-solving problem. The approach is motivated by the scale of the human brain, with on the order of 10^11 neurons and 10^14 synaptic connections; in previous work, the feasibility of the cognitive approach was demonstrated experimentally.

  6. Source identification of coarse particles in the Desert ...

    EPA Pesticide Factsheets

    The Desert Southwest Coarse Particulate Matter Study was undertaken to further our understanding of the spatial and temporal variability and sources of fine and coarse particulate matter (PM) in rural, arid, desert environments. Sampling was conducted between February 2009 and February 2010 in Pinal County, AZ near the town of Casa Grande, where PM concentrations routinely exceed the U.S. National Ambient Air Quality Standards (NAAQS) for both PM10 and PM2.5. In this desert region, exceedances of the PM10 NAAQS are dominated by high coarse particle concentrations, a common occurrence in this region of the United States. This work expands on previously published measurements of PM mass and chemistry by examining the sources of fine and coarse particles and the relative contribution of each to ambient PM mass concentrations using the Positive Matrix Factorization receptor model (Clements et al., 2014). Highlights: isolation of coarse particles from fine particle sources; unique chemical composition of coarse particles; role of primary biological particles in aerosol loadings.

  7. Effect of fly ash on the strength of porous concrete using recycled coarse aggregate to replace low-quality natural coarse aggregate

    NASA Astrophysics Data System (ADS)

    Arifi, Eva; Cahya, Evi Nur; Christin Remayanti, N.

    2017-09-01

    The performance of porous concrete made with recycled coarse aggregate was investigated, with fly ash used as a partial cement replacement. In this study, the strength of recycled aggregate was compared to that of low-quality natural coarse aggregate, which has high water absorption. Compressive strength and splitting tensile strength tests were conducted to evaluate the performance of porous concrete using fly ash as cement replacement. Results show that using recycled coarse aggregate at up to 75% to replace low-quality natural coarse aggregate with high water absorption increases the compressive and splitting tensile strength of porous concrete. Using fly ash at up to 25% as cement replacement further improves the compressive and splitting tensile strength of porous concrete.

  8. Machine Learning for Medical Imaging

    PubMed Central

    Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L.

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. ©RSNA, 2017 PMID:28212054

  9. Machine Learning for Medical Imaging.

    PubMed

    Erickson, Bradley J; Korfiatis, Panagiotis; Akkus, Zeynettin; Kline, Timothy L

    2017-01-01

    Machine learning is a technique for recognizing patterns that can be applied to medical images. Although it is a powerful tool that can help in rendering medical diagnoses, it can be misapplied. Machine learning typically begins with the machine learning algorithm system computing the image features that are believed to be of importance in making the prediction or diagnosis of interest. The machine learning algorithm system then identifies the best combination of these image features for classifying the image or computing some metric for the given image region. There are several methods that can be used, each with different strengths and weaknesses. There are open-source versions of most of these machine learning methods that make them easy to try and apply to images. Several metrics for measuring the performance of an algorithm exist; however, one must be aware of the possible associated pitfalls that can result in misleading metrics. More recently, deep learning has started to be used; this method has the benefit that it does not require image feature identification and calculation as a first step; rather, features are identified as part of the learning process. Machine learning has been used in medical imaging and will have a greater influence in the future. Those working in medical imaging must be aware of how machine learning works. © RSNA, 2017.

  10. Simulation study of entropy production in the one-dimensional Vlasov system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dai, Zongliang, E-mail: liangliang1223@gmail.com; Wang, Shaojie

    2016-07-15

    The coarse-grain averaged distribution function of the one-dimensional Vlasov system is obtained by numerical simulation. The entropy productions in the cases of a random field, linear Landau damping, and the bump-on-tail instability are computed with the coarse-grain averaged distribution function. The computed entropy production converges with increasing coarse-graining length. When the distribution function differs only slightly from a Maxwellian, the converged value agrees with the result computed from the definition of thermodynamic entropy. The choice of coarse-graining length for computing the coarse-grain averaged distribution function is also discussed.
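
    As a rough illustration of the procedure described above, the following numpy sketch bins a fine-grained phase-space distribution f(x, v) into cells of width L and evaluates the Gibbs entropy S = -∫ f ln f dx dv of the averaged distribution. The test distribution, grid sizes, and the printed convergence trend are all invented for illustration and stand in for the simulation data of the paper.

```python
# Sketch of coarse-grain averaging: bin a fine phase-space distribution
# f(x, v) into L x L cells and compute the Gibbs entropy of the average.
import numpy as np

def coarse_grain(f: np.ndarray, L: int) -> np.ndarray:
    """Average f over non-overlapping L x L phase-space cells."""
    nx, nv = f.shape
    f = f[: nx - nx % L, : nv - nv % L]
    return f.reshape(nx // L, L, nv // L, L).mean(axis=(1, 3))

def entropy(f: np.ndarray, dx: float, dv: float) -> float:
    """S = -sum(f ln f) dx dv over phase space (zero cells skipped)."""
    mask = f > 0
    return -np.sum(f[mask] * np.log(f[mask])) * dx * dv

# A filamented distribution mimicking phase mixing: a Maxwellian times a
# rapidly oscillating modulation (purely illustrative).
x = np.linspace(0, 2 * np.pi, 512)
v = np.linspace(-5, 5, 512)
X, V = np.meshgrid(x, v, indexing="ij")
f = np.exp(-V**2 / 2) * (1 + 0.5 * np.sin(40 * (X - V))) / np.sqrt(2 * np.pi)

for L in (1, 2, 4, 8, 16, 32):
    fc = coarse_grain(f, L)
    dx, dv = (x[1] - x[0]) * L, (v[1] - v[0]) * L
    # Entropy grows as filaments are smoothed out, then levels off with L.
    print(L, entropy(fc, dx, dv))
```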

  11. Comparing machine learning and logistic regression methods for predicting hypertension using a combination of gene expression and next-generation sequencing data.

    PubMed

    Held, Elizabeth; Cape, Joshua; Tintle, Nathan

    2016-01-01

    Machine learning methods continue to show promise in the analysis of data from genetic association studies because of the high number of variables relative to the number of observations. However, few best practices exist for the application of these methods. We extend a recently proposed supervised machine learning approach for predicting disease risk by genotypes to be able to incorporate gene expression data and rare variants. We then apply 2 different versions of the approach (radial and linear support vector machines) to simulated data from Genetic Analysis Workshop 19 and compare performance to logistic regression. Method performance was not radically different across the 3 methods, although the linear support vector machine tended to show small gains in predictive ability relative to a radial support vector machine and logistic regression. Importantly, as the number of genes in the models was increased, even when those genes contained causal rare variants, model predictive ability showed a statistically significant decrease in performance for both the radial support vector machine and logistic regression. The linear support vector machine showed more robust performance to the inclusion of additional genes. Further work is needed to evaluate machine learning approaches on larger samples and to evaluate the relative improvement in model prediction from the incorporation of gene expression data.
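
    The comparison the abstract reports is easy to reproduce in outline with scikit-learn. The sketch below trains the same three predictors (linear SVM, radial SVM, logistic regression) under cross-validation on a synthetic genotype-like matrix; the sample size, effect sizes, and phenotype model are invented stand-ins for the Genetic Analysis Workshop 19 data.

```python
# Sketch comparing the three predictors from the abstract on synthetic
# genotype-like data (0/1/2 coding), with many variables and few samples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, p = 300, 200                      # few observations, many variables
X = rng.integers(0, 3, size=(n, p)).astype(float)  # genotype coding
beta = np.zeros(p); beta[:5] = 0.8   # a handful of causal variants
y = (X @ beta + rng.normal(size=n) > np.median(X @ beta)).astype(int)

models = {
    "linear SVM": SVC(kernel="linear"),
    "radial SVM": SVC(kernel="rbf"),
    "logistic":   LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc")
    print(f"{name:11s} ROC-AUC {auc.mean():.3f}")
```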

  12. A Power Transformers Fault Diagnosis Model Based on Three DGA Ratios and PSO Optimization SVM

    NASA Astrophysics Data System (ADS)

    Ma, Hongzhe; Zhang, Wei; Wu, Rongrong; Yang, Chunyan

    2018-03-01

    To address the shortcomings of existing transformer fault diagnosis methods in dissolved gas-in-oil analysis (DGA) feature selection and parameter optimization, a transformer fault diagnosis model based on three DGA ratios and a particle swarm optimization (PSO)-optimized support vector machine (SVM) is proposed. The SVM is extended to a nonlinear, multi-class classifier, PSO is used to optimize the parameters of the multi-class SVM model, and fault diagnosis is carried out under a cross-validation scheme. The diagnosis results show that the average accuracy of the proposed method exceeds that of both the standard SVM and a genetic-algorithm-optimized SVM, demonstrating that the proposed method can effectively improve the accuracy of transformer fault diagnosis.
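
    The abstract's optimization loop can be sketched as a plain global-best PSO searching over the SVM penalty C and kernel width gamma, scored by cross-validated accuracy. The toy version below is binary and uses synthetic three-ratio feature vectors; the paper's model is multi-class and uses measured DGA records.

```python
# Minimal global-best PSO over SVM hyperparameters (log10 C, log10 gamma),
# scored by cross-validated accuracy. Data are synthetic three-ratio vectors.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 3))                          # stand-in DGA ratios
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)   # stand-in fault labels

def fitness(pos):
    C, gamma = 10.0 ** pos
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

# Plain PSO: 10 particles, 15 iterations, standard coefficients.
n_particles, iters, w, c1, c2 = 10, 15, 0.7, 1.5, 1.5
pos = rng.uniform(-3, 3, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best (C, gamma):", 10.0 ** gbest, "CV accuracy:", pbest_fit.max())
```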

  13. Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo.

    PubMed

    Kawano, Tomonori; Bouteau, François; Mancuso, Stefano

    2012-11-01

    Automata theory is the mathematical study of abstract machines, pursued in theoretical computer science and in highly interdisciplinary fields that combine the natural sciences with computer science. In the present review article, as a chemical and biological basis for natural computing or informatics, some plants, plant cells, and plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, Mealy or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between computational data processing and plant decision-making processes becomes obvious. Finally, their putative roles as parts for plant-based computing or robotic systems are discussed.

  14. Finding and defining the natural automata acting in living plants: Toward the synthetic biology for robotics and informatics in vivo

    PubMed Central

    Kawano, Tomonori; Bouteau, François; Mancuso, Stefano

    2012-01-01

    Automata theory is the mathematical study of abstract machines, pursued in theoretical computer science and in highly interdisciplinary fields that combine the natural sciences with computer science. In the present review article, as a chemical and biological basis for natural computing or informatics, some plants, plant cells, and plant-derived molecules involved in signaling are listed and classified as natural sequential machines (namely, Mealy or Moore machines) or finite state automata. By defining the actions (states and transition functions) of these natural automata, the similarity between computational data processing and plant decision-making processes becomes obvious. Finally, their putative roles as parts for plant-based computing or robotic systems are discussed. PMID:23336016
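
    For readers unfamiliar with the formalism, a Mealy machine's output depends on both the current state and the input symbol, which is what makes it a natural template for stimulus-dependent signaling. The toy sketch below, whose states and inputs are invented rather than taken from the review, shows a two-state machine that "responds" only after two consecutive stimuli.

```python
# A minimal Mealy machine: the output depends on both the current state and
# the input symbol. States and inputs here are generic placeholders.
from typing import Dict, Tuple

class MealyMachine:
    def __init__(self, transitions: Dict[Tuple[str, str], Tuple[str, str]],
                 start: str):
        self.transitions = transitions  # (state, input) -> (next state, output)
        self.state = start

    def step(self, symbol: str) -> str:
        self.state, output = self.transitions[(self.state, symbol)]
        return output

# Toy example: a "cell" that responds only after two consecutive stimuli.
machine = MealyMachine({
    ("rest",   "stimulus"): ("primed", "no response"),
    ("rest",   "quiet"):    ("rest",   "no response"),
    ("primed", "stimulus"): ("rest",   "response"),
    ("primed", "quiet"):    ("rest",   "no response"),
}, start="rest")

print([machine.step(s) for s in ["stimulus", "stimulus", "quiet", "stimulus"]])
```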

  15. Laser milling of martensitic stainless steels using spiral trajectories

    NASA Astrophysics Data System (ADS)

    Romoli, L.; Tantussi, F.; Fuso, F.

    2017-04-01

    A laser beam with sub-picosecond pulse duration was driven in spiral trajectories to perform micro-milling of martensitic stainless steel. The geometry of the machined micro-groove channels was investigated with a specifically conceived scanning probe microscopy instrument and linked to the laser parameters using an experimental approach that combines the beam energy distribution profile with the absorption phenomena in the material. Preliminary analysis shows that, despite the numerous parameters involved in the process, layer removal by spiral trajectories with varying radial overlap allows a controllable depth of cut combined with a flattening effect on surface roughness. Combining the developed machining strategy with a feed motion of the work stage could provide a method for obtaining three-dimensional structures with a resolution of a few microns and an areal roughness Sa below 100 nm.

  16. Combining Machine Learning Systems and Multiple Docking Simulation Packages to Improve Docking Prediction Reliability for Network Pharmacology

    PubMed Central

    Hsin, Kun-Yi; Ghosh, Samik; Kitano, Hiroaki

    2013-01-01

    Increased availability of bioinformatics resources is creating opportunities for the application of network pharmacology to predict drug effects and toxicity resulting from multi-target interactions. Here we present a high-precision computational prediction approach that combines two elaborately built machine learning systems and multiple molecular docking tools to assess binding potentials of a test compound against proteins involved in a complex molecular network. One of the two machine learning systems is a re-scoring function to evaluate binding modes generated by docking tools. The second is a binding mode selection function to identify the most predictive binding mode. Results from a series of benchmark validations and a case study show that this approach surpasses the prediction reliability of other techniques and that it also identifies either primary or off-targets of kinase inhibitors. Integrating this approach with molecular network maps makes it possible to address drug safety issues by comprehensively investigating network-dependent effects of a drug or drug candidate. PMID:24391846

  17. Change detection and classification of land cover in multispectral satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.

    Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.

  18. Change detection and classification of land cover in multispectral satellite imagery using clustering of sparse approximations (CoSA) over learned feature dictionaries

    DOE PAGES

    Moody, Daniela I.; Brumby, Steven P.; Rowland, Joel C.; ...

    2014-10-01

    Neuromimetic machine vision and pattern recognition algorithms are of great interest for landscape characterization and change detection in satellite imagery in support of global climate change science and modeling. We present results from an ongoing effort to extend machine vision methods to the environmental sciences, using adaptive sparse signal processing combined with machine learning. A Hebbian learning rule is used to build multispectral, multiresolution dictionaries from regional satellite normalized band difference index data. Land cover labels are automatically generated via our CoSA algorithm: Clustering of Sparse Approximations, using a clustering distance metric that combines spectral and spatial textural characteristics to help separate geologic, vegetative, and hydrologic features. We demonstrate our method on example Worldview-2 satellite images of an Arctic region, and use CoSA labels to detect seasonal surface changes. In conclusion, our results suggest that neuroscience-based models are a promising approach to practical pattern recognition and change detection problems in remote sensing.
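
    The CoSA pipeline (learn a dictionary over image patches, sparsely encode each patch, then cluster the codes into land-cover labels) can be approximated with standard scikit-learn components. The sketch below substitutes MiniBatchDictionaryLearning for the paper's Hebbian rule and a random array for a real normalized band-difference image, so it illustrates the data flow only.

```python
# Sketch of the CoSA data flow: sparse dictionary over patches of a
# band-difference index image, then clustering of the sparse codes.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(3)
ndvi = rng.random((256, 256))      # stand-in for an NDVI-like index image

# Extract non-overlapping 8x8 patches as rows of a data matrix.
ps = 8
patches = np.array([ndvi[i:i + ps, j:j + ps].ravel()
                    for i in range(0, 256 - ps, ps)
                    for j in range(0, 256 - ps, ps)])

# Dictionary learning stands in for the paper's Hebbian learning rule.
dico = MiniBatchDictionaryLearning(n_components=32, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4, random_state=0)
codes = dico.fit(patches).transform(patches)  # sparse approximation per patch

# Cluster the sparse codes into land-cover classes (the labeling step).
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(codes)
print("patch label counts:", np.bincount(labels))
```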

  19. Imaging and machine learning techniques for diagnosis of Alzheimer's disease.

    PubMed

    Mirzaei, Golrokh; Adeli, Anahita; Adeli, Hojjat

    2016-12-01

    Alzheimer's disease (AD) is a common health problem in elderly people. There has been considerable research toward the diagnosis and early detection of this disease in the past decade. The sensitivity of biomarkers and the accuracy of detection techniques are key to an accurate diagnosis. This paper presents a state-of-the-art review of the research performed on the diagnosis of AD based on imaging and machine learning techniques. Different segmentation and machine learning techniques used for the diagnosis of AD are reviewed, including thresholding, supervised and unsupervised learning, probabilistic techniques, atlas-based approaches, and fusion of different image modalities. More recent and powerful classification techniques, such as the enhanced probabilistic neural network of Ahmadlou and Adeli, should be investigated with the goal of improving the diagnosis accuracy. A combination of different image modalities can help improve the diagnosis accuracy rate. Research is needed on the combination of modalities to discover multi-modal biomarkers.

  20. Effect of magnetic polarity on surface roughness during magnetic field assisted EDM of tool steel

    NASA Astrophysics Data System (ADS)

    Efendee, A. M.; Saifuldin, M.; Gebremariam, MA; Azhari, A.

    2018-04-01

    Electrical discharge machining (EDM) is a non-traditional machining technique that offers a wide range of parameter manipulation and machining applications. However, surface roughness, material removal rate, electrode wear, and operating cost remain among the foremost issues of this technique. Placing a magnetic device around the machining area offers promising effects that merit investigation, yet the influence of magnetic polarity on EDM remains unexplored. The aim of this research is to investigate the effect of magnetic polarity on surface roughness during magnetic field assisted electrical discharge machining (MFAEDM) of tool steel (AISI 420 mod.) using a graphite electrode. A magnet with a flux density of 18 T was applied to the EDM process at selected parameters. Sparks under magnetic-field-assisted EDM produced a better surface finish than the conventional EDM process. In the presence of a high magnetic field, the spark was squeezed, and the discharge craters generated on the machined surface were tiny and shallow. A correct magnetic polarity combination in the MFAEDM process is highly useful for attaining high-efficiency machining and an improved surface finish to meet the demands of modern industrial applications.

  1. Modeling of Principal Flank Wear: An Empirical Approach Combining the Effect of Tool, Environment and Workpiece Hardness

    NASA Astrophysics Data System (ADS)

    Mia, Mozammel; Al Bashir, Mahmood; Dhar, Nikhil Ranjan

    2016-10-01

    Hard turning is increasingly employed to replace the time-consuming sequence of conventional turning followed by grinding. An excessive amount of tool wear in hard turning is one of the main hurdles to be overcome. Many researchers have developed tool wear models, but most apply only to a particular work-tool-environment combination. No aggregate model has been developed that can predict the amount of principal flank wear for a specific machining time. Here, an empirical model of principal flank wear (VB) has been developed for workpieces of different hardness (HRC40, HRC48 and HRC56) turned by coated carbide inserts with different configurations (SNMM and SNMG) under both dry and high-pressure coolant conditions. Unlike other models, this one combines the base empirical equation with dummy variables to capture the effect of any change in the input conditions on the response. The base empirical equation for principal flank wear is formulated from the experimental results by adopting the exponential associate function. The coefficient of each dummy variable reflects the shift of the response from one set of machining conditions to another and is determined by simple linear regression. The independent cutting parameters (cutting speed, feed rate, depth of cut) were kept constant while formulating and analyzing the model. The developed model is validated against different sets of machining responses in turning hardened medium-carbon steel with coated carbide inserts. For any particular set, the model can be used to predict the amount of principal flank wear for a specific machining time. Since the predictions agree well with the experimental data and the average percentage error is below 10%, the model can be used to predict principal flank wear under the stated conditions.
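
    A minimal sketch of this modeling strategy, under invented data: fit a base wear curve of exponential-associate form VB(t) = a(1 - e^(-t/b)) to a reference condition, then estimate a dummy-variable coefficient as the least-squares shift of a second condition from that base curve.

```python
# Sketch of the abstract's strategy: fit a base exponential-associate wear
# curve VB(t) = a * (1 - exp(-t / b)), then capture a change of machining
# condition with a dummy-variable shift. All wear values are invented.
import numpy as np
from scipy.optimize import curve_fit

def exp_associate(t, a, b):
    return a * (1.0 - np.exp(-t / b))

t = np.array([5, 10, 15, 20, 25, 30, 35, 40], dtype=float)   # minutes
vb_base = np.array([55, 95, 130, 155, 175, 190, 200, 210.0]) # microns, dry
vb_new = vb_base - 25 + np.random.default_rng(4).normal(0, 3, t.size)

# 1) Base empirical equation from the reference condition.
(a, b), _ = curve_fit(exp_associate, t, vb_base, p0=(250, 20))

# 2) Dummy-variable coefficient: the mean shift of the new condition's
#    residuals from the base curve (a one-regressor least-squares fit).
shift = np.mean(vb_new - exp_associate(t, a, b))

def predict(time, dummy):
    """Predict VB; dummy = 0 for the base condition, 1 for the new one."""
    return exp_associate(time, a, b) + shift * dummy

print(f"VB(30 min, new condition) ~ {predict(30.0, 1):.0f} um")
```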

  2. Combination process of diamond machining and roll-to-roll UV-replication for thin film micro- and nanostructures

    NASA Astrophysics Data System (ADS)

    Väyrynen, J.; Mönkkönen, K.; Siitonen, S.

    2016-09-01

    The roll-to-roll (R2R) ultraviolet (UV)-curable embossing replication process is a highly accurate and cost-effective way to replicate large quantities of thin-film polymer parts. These structures can be used for microfluidics, LED optics, light guides, displays, cameras, diffusers, decorative elements, and laser sensing and measuring devices. In the R2R UV process, a plastic thin film coated with UV-curable lacquer passes over an imprinting embossing drum and is then hardened by a UV lamp. One key element for mastering this process is the ability to manufacture a rotating drum containing micro- and nanostructures. Depending on the pattern shapes, the drum can be machined directly by diamond machining or produced through a wafer-level lithographic process. Due to the shrinkage of the UV-curable lacquer, the R2R drum pattern needs to be prototyped a few times in order to obtain the desired performance and shape from the R2R-produced part. To speed up prototyping and the overall process, we have developed a combination process in which planar diamond-machined patterns are transferred onto a drum roller. First, diamond-machined patterns on a planar surface are replicated onto a polymer sheet using UV replication. Second, a nickel stamper shim is grown from the polymer sheet; finally, the stamper is formed into a roller and used in the R2R process. This approach allows various micro-milled, turned, grooved, and ruled structures to be produced in thin-film products through the R2R process. In this paper, the process flow and examples of fabricating R2R-embossed UV-curable thin-film micro- and nanostructures from planar diamond-machined patterns are reported.

  3. 7 CFR 3555.101 - Loan purposes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ...-wall carpeting, ovens, ranges, refrigerators, washing machines, clothes dryers, heating and cooling... the loan amount to be guaranteed. (c) Combination construction and permanent loan. Loan funds may be used and Rural Development will guarantee a “combination construction and permanent loan” as defined at...

  4. Prediction and validation of protein intermediate states from structurally rich ensembles and coarse-grained simulations

    NASA Astrophysics Data System (ADS)

    Orellana, Laura; Yoluk, Ozge; Carrillo, Oliver; Orozco, Modesto; Lindahl, Erik

    2016-08-01

    Protein conformational changes are at the heart of cell functions, from signalling to ion transport. However, the transient nature of the intermediates along transition pathways hampers their experimental detection, making the underlying mechanisms elusive. Here we retrieve dynamic information on the actual transition routes from principal component analysis (PCA) of structurally-rich ensembles and, in combination with coarse-grained simulations, explore the conformational landscapes of five well-studied proteins. Modelling them as elastic networks in a hybrid elastic-network Brownian dynamics simulation (eBDIMS), we generate trajectories connecting stable end-states that spontaneously sample the crystallographic motions, predicting the structures of known intermediates along the paths. We also show that the explored non-linear routes can delimit the lowest energy passages between end-states sampled by atomistic molecular dynamics. The integrative methodology presented here provides a powerful framework to extract and expand dynamic pathway information from the Protein Data Bank, as well as to validate sampling methods in general.

  5. Automatic markerless registration of point clouds with semantic-keypoint-based 4-points congruent sets

    NASA Astrophysics Data System (ADS)

    Ge, Xuming

    2017-08-01

    The coarse registration of point clouds from urban building scenes has become a key topic in applications of terrestrial laser scanning technology. Sampling-based algorithms in the random sample consensus (RANSAC) model have emerged as mainstream solutions to address coarse registration problems. In this paper, we propose a novel combined solution to automatically align two markerless point clouds from building scenes. Firstly, the method segments non-ground points from ground points. Secondly, the proposed method detects feature points from each cross section and then obtains semantic keypoints by connecting feature points with specific rules. Finally, the detected semantic keypoints from two point clouds act as inputs to a modified 4PCS algorithm. Examples are presented and the results compared with those of K-4PCS to demonstrate the main contributions of the proposed method, which are the extension of the original 4PCS to handle heavy datasets and the use of semantic keypoints to improve K-4PCS in relation to registration accuracy and computational efficiency.
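
    Once semantic keypoints have been matched between the two clouds, the final alignment reduces to estimating a rigid transform from correspondences, for which the Kabsch algorithm gives a closed-form SVD solution. The sketch below shows only that last step on synthetic points; the 4PCS correspondence search itself is not reproduced here.

```python
# Closed-form rigid alignment (Kabsch) from matched keypoints. This is only
# the final step after a 4PCS-style search has produced correspondences.
import numpy as np

def rigid_transform(P: np.ndarray, Q: np.ndarray):
    """Find R, t minimizing ||R p + t - q|| over matched Nx3 arrays P, Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)            # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

# Synthetic check: rotate/translate a keypoint set and recover the motion.
rng = np.random.default_rng(5)
P = rng.random((10, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(P, Q)
print("rotation error:", np.abs(R - R_true).max())
```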

  6. More than the sum of its parts: Coarse-grained peptide-lipid interactions from a simple cross-parametrization

    NASA Astrophysics Data System (ADS)

    Bereau, Tristan; Wang, Zun-Jing; Deserno, Markus

    2014-03-01

    Interfacial systems are at the core of fascinating phenomena in many disciplines, such as biochemistry, soft-matter physics, and food science. However, the parametrization of accurate, reliable, and consistent coarse-grained (CG) models for systems at interfaces remains a challenging endeavor. In the present work, we explore to what extent two independently developed solvent-free CG models of peptides and lipids—of different mapping schemes, parametrization methods, target functions, and validation criteria—can be combined by only tuning the cross-interactions. Our results show that the cross-parametrization can reproduce a number of structural properties of membrane peptides (for example, tilt and hydrophobic mismatch), in agreement with existing peptide-lipid CG force fields. We find encouraging results for two challenging biophysical problems: (i) membrane pore formation mediated by the cooperative action of several antimicrobial peptides, and (ii) the insertion and folding of the helix-forming peptide WALP23 in the membrane.

  7. Coarse-Grained Simulations of Membrane Insertion and Folding of Small Helical Proteins Using the CABS Model.

    PubMed

    Pulawski, Wojciech; Jamroz, Michal; Kolinski, Michal; Kolinski, Andrzej; Kmiecik, Sebastian

    2016-11-28

    The CABS coarse-grained model is a well-established tool for modeling globular proteins (predicting their structure, dynamics, and interactions). Here we introduce an extension of the CABS representation and force field (CABS-membrane) to model the effect of the biological membrane environment on the structure of membrane proteins. We validate the CABS-membrane model in folding simulations of 10 short helical membrane proteins without using any knowledge of their structures. The simulations start from random protein conformations placed outside the membrane environment and allow full flexibility of the modeled proteins during their spontaneous insertion into the membrane. In the resulting trajectories, we found models close to the experimental membrane structures. We also attempted to select the correctly folded models using simple filtering followed by structural clustering, combined with reconstruction to the all-atom representation and all-atom scoring. The CABS-membrane model is a promising approach for further development toward the modeling of large protein-membrane systems.

  8. Extreme temperature robust optical sensor designs and fault-tolerant signal processing

    DOEpatents

    Riza, Nabeel Agha [Oviedo, FL; Perez, Frank [Tujunga, CA

    2012-01-17

    Silicon carbide (SiC) probe designs for extreme-temperature and pressure sensing use a single-crystal SiC optical chip encased in a sintered SiC probe. The SiC chip may be protected for high-temperature-only use or exposed for both temperature and pressure sensing. Hybrid signal processing techniques allow fault-tolerant extreme-temperature sensing. Measuring the collective peak-to-peak (or null-to-null) spectrum spread together with the wavelength peak/null shift forms a coarse-fine temperature measurement based on broadband spectrum monitoring. The SiC probe frontend acts as a stable-emissivity black-body radiator, and monitoring the shift in its radiation spectrum enables pyrometry. This application combines all-SiC pyrometry with thick SiC etalon laser interferometry within a free spectral range to form a coarse-fine temperature measurement sensor. RF notch filtering techniques improve the sensitivity of the temperature measurement where fine spectral shift or spectrum measurements are needed to deduce temperature.

  9. Short range spread-spectrum radiolocation system and method

    DOEpatents

    Smith, Stephen F.

    2003-04-29

    A short range radiolocation system and associated methods that allow the location of an item, such as equipment, containers, pallets, vehicles, or personnel, within a defined area. A small, battery powered, self-contained tag is provided to an item to be located. The tag includes a spread-spectrum transmitter that transmits a spread-spectrum code and identification information. A plurality of receivers positioned about the area receive signals from a transmitting tag. The position of the tag, and hence the item, is located by triangulation. The system employs three different ranging techniques for providing coarse, intermediate, and fine spatial position resolution. Coarse positioning information is provided by use of direct-sequence code phase transmitted as a spread-spectrum signal. Intermediate positioning information is provided by the use of a difference signal transmitted with the direct-sequence spread-spectrum code. Fine positioning information is provided by use of carrier phase measurements. An algorithm is employed to combine the three data sets to provide accurate location measurements.
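
    The coarse/fine combination is essentially integer-ambiguity resolution: the carrier-phase measurement fixes the range only modulo one wavelength, and the coarse code-phase estimate selects the whole number of wavelengths. The worked sketch below uses invented numbers; note that the coarse error must stay below half a wavelength for the integer to resolve correctly.

```python
# Worked sketch of combining a coarse and a fine range estimate, in the
# spirit of the patent's code-phase / carrier-phase scheme. The fine
# measurement is known only modulo one carrier wavelength; the coarse
# estimate selects the integer number of whole wavelengths. All numbers
# below are invented for illustration.
wavelength = 0.327        # meters (hypothetical carrier)
true_range = 41.73        # meters

coarse = 41.8             # code-phase estimate; error must be < wavelength/2
fine_fraction = true_range % wavelength   # what carrier phase provides

n_cycles = round((coarse - fine_fraction) / wavelength)  # resolve ambiguity
combined = n_cycles * wavelength + fine_fraction
print(f"combined estimate: {combined:.4f} m (true {true_range} m)")
```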

  10. Automatic classification of protein structures using physicochemical parameters.

    PubMed

    Mohan, Abhilash; Rao, M Divya; Sunderrajan, Shruthi; Pennathur, Gautam

    2014-09-01

    Protein classification is the first step to functional annotation; the SCOP and Pfam databases are currently the most relevant protein classification schemes. However, the disproportion between the number of three-dimensional (3D) protein structures generated and their classification into relevant superfamilies/families emphasizes the need for automated classification schemes. Predicting the function of novel proteins based on sequence information alone has proven to be a major challenge. The present study focuses on the use of physicochemical parameters in conjunction with machine learning algorithms (Naive Bayes, Decision Trees, Random Forest and Support Vector Machines) to classify proteins into their respective SCOP superfamily/Pfam family, using sequence-derived information. Spectrophores™, a 1D descriptor of the 3D molecular field surrounding a structure, was used as a benchmark to compare the performance of the physicochemical parameters. The machine learning algorithms were modified to select features based on information gain for each SCOP superfamily/Pfam family. The effect of combining physicochemical parameters and spectrophores on classification accuracy (CA) was studied. Machine learning algorithms trained with the physicochemical parameters consistently classified SCOP superfamilies and Pfam families with a classification accuracy above 90%, while spectrophores performed with a CA of around 85%. Feature selection improved classification accuracy for machine learning algorithms based on both physicochemical parameters and spectrophores. Combining both attributes resulted in a marginal loss of performance. Physicochemical parameters were able to classify proteins from both schemes with classification accuracy ranging from 90% to 96%. These results suggest the usefulness of this method in classifying proteins from amino acid sequences.
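
    A minimal sketch of sequence-derived physicochemical classification: compute a few per-sequence descriptors (here from the standard Kyte-Doolittle hydropathy scale plus length and charged-residue fraction) and train a random forest. The toy sequences and family labels are invented; real work would use SCOP/Pfam-annotated sets and far more parameters.

```python
# Sketch of classifying sequences from physicochemical parameters.
# Sequences and family labels below are invented placeholders.
from sklearn.ensemble import RandomForestClassifier

KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "E": -3.5,
      "Q": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}  # Kyte-Doolittle hydropathy scale

def features(seq: str) -> list:
    hyd = [KD[a] for a in seq]
    return [sum(hyd) / len(hyd),                      # mean hydropathy
            len(seq),                                 # sequence length
            sum(a in "RKDEH" for a in seq) / len(seq)]  # charged fraction

train = [("MKLVVLGLAAIL", "family_A"), ("DDEERRKKHHSS", "family_B"),
         ("VILAVLIGLLAV", "family_A"), ("RKDEHRKDEHRK", "family_B")]
X = [features(s) for s, _ in train]
y = [label for _, label in train]

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([features("LLVIAGAVLLIV")]))  # hydrophobic: family_A-like
```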

  11. Machine Learning Approach to Optimizing Combined Stimulation and Medication Therapies for Parkinson's Disease.

    PubMed

    Shamir, Reuben R; Dolber, Trygve; Noecker, Angela M; Walter, Benjamin L; McIntyre, Cameron C

    2015-01-01

    Deep brain stimulation (DBS) of the subthalamic region is an established therapy for advanced Parkinson's disease (PD). However, patients often require time-intensive post-operative management to balance their coupled stimulation and medication treatments. Given the large and complex parameter space associated with this task, we propose that clinical decision support systems (CDSS) based on machine learning algorithms could assist in treatment optimization. Our objective was to develop a proof-of-concept implementation of a CDSS that incorporates patient-specific details on both stimulation and medication. Clinical data from 10 patients, and 89 post-DBS surgery visits, were used to create a prototype CDSS. The system was designed to provide three key functions: (1) information retrieval; (2) visualization of treatment; and (3) recommendation of expected effective stimulation and drug dosages, based on three machine learning methods that included support vector machines, Naïve Bayes, and random forest. Measures of medication dosages, time factors, and symptom-specific pre-operative response to levodopa were significantly correlated with post-operative outcomes (P < 0.05), and their effect on outcomes was of similar magnitude to that of DBS. Using those results, the combined machine learning algorithms were able to accurately predict 86% (12/14) of the motor improvement scores at one year after surgery. Using patient-specific details, an appropriately parameterized CDSS could help select theoretically optimal DBS parameter settings and medication dosages that have potential to improve the clinical management of PD patients. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Machine learning algorithms to classify spinal muscular atrophy subtypes.

    PubMed

    Srivastava, Tuhin; Darras, Basil T; Wu, Jim S; Rutkove, Seward B

    2012-07-24

    The development of better biomarkers for disease assessment remains an ongoing effort across the spectrum of neurologic illnesses. One approach for refining biomarkers is based on the concept of machine learning, in which individual, unrelated biomarkers are simultaneously evaluated. In this cross-sectional study, we assess the possibility of using machine learning, incorporating both quantitative muscle ultrasound (QMU) and electrical impedance myography (EIM) data, for classification of muscles affected by spinal muscular atrophy (SMA). Twenty-one normal subjects, 15 subjects with SMA type 2, and 10 subjects with SMA type 3 underwent EIM and QMU measurements of unilateral biceps, wrist extensors, quadriceps, and tibialis anterior. EIM and QMU parameters were then applied in combination using a support vector machine (SVM), a type of machine learning, in an attempt to accurately categorize 165 individual muscles. For all 3 classification problems, normal vs SMA, normal vs SMA 3, and SMA 2 vs SMA 3, use of SVM provided the greatest accuracy in discrimination, surpassing both EIM and QMU individually. For example, the accuracy, as measured by the receiver operating characteristic area under the curve (ROC-AUC) for the SVM discriminating SMA 2 muscles from SMA 3 muscles was 0.928; in comparison, the ROC-AUCs for EIM and QMU parameters alone were only 0.877 (p < 0.05) and 0.627 (p < 0.05), respectively. Combining EIM and QMU data categorizes individual SMA-affected muscles with very high accuracy. Further investigation of this approach for classifying and for following the progression of neuromuscular illness is warranted.

  13. The study on the nanomachining property and cutting model of single-crystal sapphire by atomic force microscopy.

    PubMed

    Huang, Jen-Ching; Weng, Yung-Jin

    2014-01-01

    This study focused on the nanomachining properties and cutting model of single-crystal sapphire. A coated diamond probe was used as the tool, with atomic force microscopy (AFM) as the experimental platform for nanomachining. To understand the effect of normal force on single-crystal sapphire machining, nano-line machining and nano-rectangular pattern machining were tested at different normal forces. In the nano-line machining tests, the experimental results showed that as the normal force increased, the groove depth also increased, following a logarithmic trend. In the nano-rectangular pattern machining tests, the groove depth likewise increased with normal force, but with the accumulation of small chips. This study combined blowing with an air blower, cleaning in an ultrasonic cleaner, and scanning of the surface topography with a contact-mode probe after nanomachining, and proposed a "criterion of the nanomachining cutting model" to determine whether the cutting mode of single-crystal sapphire during nanomachining is ductile-regime or brittle-regime. The analysis shows that when the single-crystal sapphire substrate is machined with a small normal force during nano-line machining, the cutting mode is ductile-regime cutting. In nano-rectangular pattern machining, owing to the overlap of machined zones, the cutting mode converts into brittle-regime cutting. © 2014 Wiley Periodicals, Inc.

  14. Comparison of effects of overload on parameters and performance of samarium-cobalt and strontium-ferrite radially oriented permanent magnet brushless DC motors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Demerdash, N.A.; Nehl, T.W.; Nyamusa, T.A.

    1985-08-01

    Effects of high momentary overloads on the samarium-cobalt and strontium-ferrite permanent magnets and the magnetic field in electronically commutated brushless dc machines, as well as their impact on the associated machine parameters, were studied. The effect of overload on the machine parameters, and subsequently on machine system performance, was also investigated. This was accomplished through the combined use of finite element analysis of the magnetic field in such machines, perturbation of the magnetic energies to determine machine inductances, and dynamic simulation of the performance of brushless dc machines when energized from voltage source inverters. These effects were investigated through application of the above methods to two equivalent 15 hp brushless dc motors, one of which was built with samarium-cobalt magnets, while the other was built with strontium-ferrite magnets. For momentary overloads as high as 4.5 p.u., magnet flux reductions of 29% and 42% of the no-load flux were obtained in the samarium-cobalt and strontium-ferrite machines, respectively. Corresponding reductions in the line-to-line armature inductances of 52% and 46% of the no-load values were reported for the samarium-cobalt and strontium-ferrite cases, respectively. The overload affected the profiles and magnitudes of the armature-induced back EMFs. Subsequently, the effects of overload on machine parameters were found to have a significant impact on the performance of the machine systems, and the findings indicate that the samarium-cobalt unit is better suited for higher overload duties than the strontium-ferrite machine.

  15. Methods, systems and apparatus for controlling operation of two alternating current (AC) machines

    DOEpatents

    Gallegos-Lopez, Gabriel [Torrance, CA; Nagashima, James M [Cerritos, CA; Perisic, Milun [Torrance, CA; Hiti, Silva [Redondo Beach, CA

    2012-02-14

    A system is provided for controlling two AC machines. The system comprises a DC input voltage source that provides a DC input voltage, a voltage boost command control module (VBCCM), a five-phase PWM inverter module coupled to the two AC machines, and a boost converter coupled to the inverter module and the DC input voltage source. The boost converter is designed to supply a new DC input voltage to the inverter module having a value that is greater than or equal to a value of the DC input voltage. The VBCCM generates a boost command signal (BCS) based on modulation indexes from the two AC machines. The BCS controls the boost converter such that the boost converter generates the new DC input voltage in response to the BCS. When the two AC machines require additional voltage that exceeds the DC input voltage required to meet a combined target mechanical power required by the two AC machines, the BCS controls the boost converter to drive the new DC input voltage generated by the boost converter to a value greater than the DC input voltage.
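
    The role of the voltage boost command control module can be illustrated with a toy control law: if either machine's requested modulation index exceeds the inverter's usable linear range, scale the DC link voltage up proportionally. This is a hypothetical rendering for intuition, not the patented algorithm.

```python
# Illustrative sketch of the boost-command idea: boost the DC link whenever
# either AC machine's requested modulation index exceeds the usable range.
# A toy rendering for intuition, not the patented implementation.
def boost_command(vdc_in: float, m1: float, m2: float,
                  m_max: float = 0.95) -> float:
    """Return the new DC link voltage given two requested modulation indexes.

    vdc_in : source DC voltage (V)
    m1, m2 : modulation indexes requested by the two machines at vdc_in
    m_max  : highest modulation index allowed by the inverter (assumed value)
    """
    demand = max(m1, m2)
    if demand <= m_max:               # both machines fit: no boost needed
        return vdc_in
    return vdc_in * demand / m_max    # scale the DC link so the demand fits

print(boost_command(300.0, 0.80, 0.90))   # -> 300.0 (no boost)
print(boost_command(300.0, 0.80, 1.20))   # -> ~378.9 (boosted link)
```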

  16. [Machinable property of a novel dental mica glass-ceramic].

    PubMed

    Chen, Ji-hua; Li, Na; Ma, Xin-pei; Zhao, Ying-hua; Sun, Xiang; Li, Guang-xin

    2007-12-01

    To investigate the machinability of a novel dental mica glass-ceramic and analyze the effect of heat treatment on its ductile machining behavior. Drilling and turning experiments were used to measure the machinability of the control group (feldspar ceramic: Vita Mark II) and 7 experimental groups treated with different crystallization techniques. The microstructures were analyzed using scanning electron microscopy (SEM) and X-ray diffraction (XRD). The average drilling depths in 30 s of the experimental groups ranged from (0.5 +/- 0.1) mm to (7.1 +/- 0.8) mm. There were significant differences between the control [(0.8 +/- 0.1) mm] and the experimental groups (P < 0.05), except for the group crystallized at 740 degrees C for 60 min. When crystallized at 650 degrees C (60 min), continuous ribbon chips could form during machining at high velocity and depth of cut. The crystalline fraction of this group is only about 40%. This material has satisfactory machinability. The mechanism can be attributed to a combination of the interlocked structure of the mica crystals and the low viscosity of the glassy phase.

  17. Geometry and lithofacies of coarse-grained injectites and extrudites in a late Pliocene trench-slope basin on the southern Boso Peninsula, Japan

    NASA Astrophysics Data System (ADS)

    Ito, Makoto; Ishimoto, Sakumi; Ito, Kento; Kotake, Nobuhiro

    2016-10-01

    This study investigates the geometry and internal structures of coarse- to very coarse-grained volcanic sandstones and volcanic breccias with many siltstone clasts (interpreted to be sill-like injectites and extrudites) occurring in an upper Pliocene trench-slope basin succession on the southern Boso Peninsula, Japan. The injectites occur in the uppermost Shiramazu Formation, and pinch out laterally into siltstone-dominated deposits of the Mera Formation. Their thicknesses vary from a few centimeters to 2 m. The basal and upper contacts of the injectites with host muddy deposits are sharp and/or erosional, and are locally discordant with the bedding of the host deposits. Siltstone clasts, which were ripped up or ripped down from the host muddy deposits, are commonly incorporated into the injectites, although some siltstone clasts have geological ages older than those of the host deposits. Seven lithofacies have been identified in the injectites based on the internal structures. The combinations of internal structures are different from those of high-density turbidity current deposits and debrites, and suggest that injection was promoted by a combination of turbulent and laminar flow conditions. The extrudites show an overall convex-up geometry and possess lithological features similar to those of the injectites. They have been identified in the Rendaiji Conglomerate Member, which is encased in the Mera Formation, and which rests on the uppermost Shiramazu Formation. The extrudites are characterized by gently undulating waveforms that show upstream migration and climbing stacking patterns similar to the cross-sectional geometry of cyclic steps or upstream-migrating antidunes. The active eruption of solid-liquid mixtures onto the seafloor built sedimentary piles that may have subsequently collapsed to produce supercritical high-density gravity currents down the flanks of a neptunian volcano. The injectites and extrudites locally contain Calyptogena shells and shell fragments, as well as fragments of carbonate concretions that exhibit low carbon isotopic ratios, suggesting that fluidization of the source sediments was triggered by a combination of seepage of cold methane-bearing water into the source sediments and seismic shaking. Volcaniclastic deposits older than the host muddy deposits are present in the trench-slope basin deposits, and these are the likely source of the injectites and extrudites.

  18. Synthesis of a pH-Sensitive Hetero[4]Rotaxane Molecular Machine that Combines [c2]Daisy and [2]Rotaxane Arrangements.

    PubMed

    Waelès, Philip; Riss-Yaw, Benjamin; Coutrot, Frédéric

    2016-05-10

    The synthesis of a novel pH-sensitive hetero[4]rotaxane molecular machine through a self-sorting strategy is reported. The original tetra-interlocked molecular architecture combines a [c2]daisy chain scaffold linked to two [2]rotaxane units. Actuation of the system through pH variation is possible thanks to the specific affinity of the dibenzo-24-crown-8 (DB24C8) macrocycles for the ammonium, anilinium, and triazolium molecular stations. Selective deprotonation of the anilinium moieties triggers shuttling of the unsubstituted DB24C8 along the [2]rotaxane units. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Hydraulic logic gates: building a digital water computer

    NASA Astrophysics Data System (ADS)

    Taberlet, Nicolas; Marsal, Quentin; Ferrand, Jérémy; Plihon, Nicolas

    2018-03-01

    In this article, we propose an easy-to-build hydraulic machine which serves as a digital binary computer. We first explain how an elementary adder can be built from test tubes and pipes (a cup filled with water representing a 1, and an empty cup a 0). Using a siphon and a slow drain, the proposed setup combines AND and XOR logical gates in a single device which can add two binary digits. We then show how these elementary units can be combined to construct a full 4-bit adder. The sequencing of the computation is discussed, and a water clock can be incorporated so that the machine can run without any external intervention.
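
    Since each tube-and-siphon unit computes XOR (the sum) and AND (the carry) simultaneously, it is exactly a half adder; two half adders plus an OR make a full adder, and four chained full adders give the 4-bit machine. The sketch below mirrors that construction in code, with bit lists written least-significant-bit first.

```python
# The hydraulic unit computes XOR (sum) and AND (carry) at once: a half
# adder. Chaining half adders gives a full adder, then a 4-bit adder.
def half_adder(a: int, b: int):
    return a ^ b, a & b          # (sum, carry): one XOR gate, one AND gate

def full_adder(a: int, b: int, cin: int):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, cin)
    return s2, c1 | c2           # the OR merges the two possible carries

def add4(a, b):
    """Add two 4-bit numbers given as bit lists, least significant bit first."""
    carry, out = 0, []
    for ai, bi in zip(a, b):
        s, carry = full_adder(ai, bi, carry)
        out.append(s)
    return out + [carry]         # fifth bit is the final carry-out

# 6 (0110) + 7 (0111) = 13 (01101), written LSB first below.
print(add4([0, 1, 1, 0], [1, 1, 1, 0]))  # -> [1, 0, 1, 1, 0]
```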

  20. Microscopes and computers combined for analysis of chromosomes

    NASA Technical Reports Server (NTRS)

    Butler, J. W.; Butler, M. K.; Stroud, A. N.

    1969-01-01

    Scanning machine CHLOE, developed for photographic use, is combined with a digital computer to obtain quantitative and statistically significant data on chromosome shapes, distribution, density, and pairing. CHLOE permits data on a chromosome complement to be acquired twice as fast as by manual pairing.
